[{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/api/","section":"Tags","summary":"","title":"Api","type":"tags"},{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/categories/exam/","section":"Categories","summary":"","title":"Exam","type":"categories"},{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/java/","section":"Tags","summary":"","title":"Java","type":"tags"},{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/llm/","section":"Tags","summary":"","title":"Llm","type":"tags"},{"content":" The idea # In this article, I will show how to create an AI-driven application. The goal is to create an application that can integrate with an LLM-API. If you want the code, check out the full project here:\nTatchapero/Internship-Report-Evaluator Java 0 0 AI assisted assessment # We want to make an application that can take an internship report as input, and in return receive an assessment of the report as output. The assessment should be based on:\nRequirements for the report Educational key values Learning goals System prompts \u0026amp; User prompts # When integrating with an LLM-API, you need to provide a System prompt and a User prompt, but what\u0026rsquo;s the difference?\nAspect System Prompt User Prompt Who writes it Developer / system End user Purpose Defines overall behavior, rules, and role Asks for a specific task or response Priority Highest priority Lower than system prompt Scope Applies to the entire conversation Applies to a single request (or turn) Content type Instructions, constraints, tone, persona Questions, commands, inputs Flexibility Usually fixed or hidden from the user Fully controlled by the user Examples “You are a helpful assistant. 
Avoid harmful content.” “Explain black holes in simple terms.” When they conflict Overrides the user prompt Must adapt to system rules We\u0026rsquo;re going to make a System Prompt to set up the rules of behavior. This is going to be a static file. Once created, we don\u0026rsquo;t touch it.\nThe User prompt, however, should contain the report when we send it to the LLM-API. The way we\u0026rsquo;re going to do this is by adding merge fields to the template for the User prompt. This way, we can easily merge the report into the User prompt before we send it to the LLM-API.\nBuild # To build the application, we\u0026rsquo;re going to use a code agent. If you\u0026rsquo;re not familiar with code agents, check out this article:\nCode Agents 20 April 2026\u0026middot;1377 words\u0026middot;7 mins Project Exam Code Agent Code agents - AI systems that go beyond answering questions to autonomously build and execute tasks. It explains how they differ from traditional assistants, explores different types of agents, and demonstrates their capabilities by creating and deploying a full meditation quiz app from a single prompt. Steps:\nOpen PowerShell (or your preferred terminal) Navigate to a folder where you keep your projects cd C:\Projects Create a new folder mkdir InternshipReportEvaluator Enter the new folder cd .\InternshipReportEvaluator Create a new folder mkdir data Create a new folder mkdir prompts Open your code agent codex Type /plan to enter Plan mode Provide the following prompt: Prompt I need to make an application that can take an internship report as input, and return an AI generated assessment with feedback. The application should: 1. Receive an internship report as a markdown file 2. Use a rubric to make the assessment 3. Send prompts to an LLM using an API 4. Receive a response from the model 5. Return a structured assessment with feedback It is **important** that it should be a guiding/assisted assessment, and not a final assessment. 
Requirements: - Make a rubric from `krav-til-rapport.md`, `dare-share-care.md` and `læringsmål.md` and save it in the prompts folder as markdown - Make a systemprompt for evaluating internship reports, to be sent when calling the LLM API, and save it in the prompts folder as markdown - Make a userprompt with a merge field where a report can be inserted, and then sent when calling the LLM API, and save the userprompt in the prompts folder as markdown - Make the backend of the application in Java using Springboot - Make the frontend of the application in React - A user should be able to upload an internship report as markdown in the frontend, and then get an assessment in return - A user should be able to upload the internship report and edit it in the frontend before sending it for assessment - A user should be able to edit the rubric in the frontend - A user should be able to select assessment level or feedback style - The frontend should display if there are timeout errors or API errors - The LLM API should always return structured JSON as a response - The LLM API should be able to integrate with OpenAI\u0026#39;s API The code agent will undoubtedly ask some follow-up questions, and these may vary based on LLM, model, etc., but just try to answer them to the best of your ability and help guide it through the process.\nGiven the prompt above, I was able to make the following site:\nLLM API Key # Now we need an API key for an LLM. In the prompt I provided for my code agent, I stated that it is a requirement that it supports OpenAI\u0026rsquo;s API, so for this case, I\u0026rsquo;m going to get an API key from OpenAI.\nUsing an LLM API endpoint is not free, as tokens cost money. You need to add credits in order to use your API key.\nOnce you are logged in to an API platform for an LLM, you can just\nCreate an API key Add credits And it should look like this:\nImportant Make sure you save your API key somewhere safe.\nAPI keys are not recoverable. 
If you lose it, you have to delete it from the API platform and create a new one.\nTest # Let\u0026rsquo;s test it.\nMy GitHub project always provides the LLM API response in the frontend, so the only thing you need to do is upload an internship report and click Assess.\nInternship Report 1 Internship Report 2 Internship Report 3 You can see the LLM API response in the frontend too. Here\u0026rsquo;s an example response:\nRaw model response { \u0026#34;assistantAssessmentNotice\u0026#34;: \u0026#34;Dette er vejledende feedback baseret på rapportens indhold og ikke en endelig bedømmelse.\u0026#34;, \u0026#34;summary\u0026#34;: \u0026#34;Rapporten giver en grundig og reflekteret beskrivelse af praktikforløbet hos V1 med fokus på tekniske opgaver, læring i praksis og personlig udvikling. Der er god kobling til teori og praksis, og rapporten indeholder konkrete eksempler og refleksioner, der understøtter en faglig dialog. Nogle områder kan styrkes med mere præcis dokumentation og uddybning, især omkring samarbejdsflader og konkrete læringsmål.\u0026#34;, \u0026#34;coverage\u0026#34;: [ { \u0026#34;item\u0026#34;: \u0026#34;Formelle krav og rapportgrundlag\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;opfyldt\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Rapporten fremstår som den studerendes egen refleksion, har passende omfang (ca. 
10.870 ord) og indeholder kvittering for evalueringsskema.\u0026#34; }, { \u0026#34;item\u0026#34;: \u0026#34;Praktikvirksomhed og kontekst\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;tilstrækkeligt grundlag\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Virksomheden og teamets fokus er beskrevet, men beskrivelsen af samarbejdsflader og processer kunne være mere detaljeret.\u0026#34; }, { \u0026#34;item\u0026#34;: \u0026#34;Læringsmål: viden\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;stærkt grundlag\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Rapporten viser god forståelse for virksomhedens drift, teknologier og organisatoriske rammer, især inden for cloud og sikkerhed.\u0026#34; }, { \u0026#34;item\u0026#34;: \u0026#34;Læringsmål: færdigheder\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;stærkt grundlag\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Den studerende dokumenterer konkrete tekniske opgaver, metoder og begrundelser, herunder automatisering, CI/CD og sikkerhedsprincipper.\u0026#34; }, { \u0026#34;item\u0026#34;: \u0026#34;Læringsmål: kompetencer\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;tilstrækkeligt grundlag\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Der er god beskrivelse af læring, samarbejde og håndtering af komplekse situationer, men eksempler på tværfagligt samarbejde kunne uddybes.\u0026#34; }, { \u0026#34;item\u0026#34;: \u0026#34;Udførte opgaver og faglig refleksion\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;stærkt grundlag\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Rapporten indeholder konkrete opgaver og reflekterer over tekniske valg og læring med kobling til teori og metoder.\u0026#34; }, { \u0026#34;item\u0026#34;: \u0026#34;Personlige udviklingsmål\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;tilstrækkeligt grundlag\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Personlige mål og refleksioner er tydelige, især via personlighedstest og samarbejdserfaring, men kunne styrkes med 
mere konkrete fremtidige udviklingspunkter.\u0026#34; }, { \u0026#34;item\u0026#34;: \u0026#34;Udbytte og værdiskabelse\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;stærkt grundlag\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Rapporten reflekterer over værdi for både virksomheden og den studerende med konkrete eksempler og feedback.\u0026#34; }, { \u0026#34;item\u0026#34;: \u0026#34;DARE, SHARE, CARE\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;tilstrækkeligt grundlag\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Der er eksempler på initiativ, videndeling og ansvarlighed, men refleksionerne kunne være mere eksplicit strukturerede omkring DARE, SHARE og CARE.\u0026#34; }, { \u0026#34;item\u0026#34;: \u0026#34;Refleksionskvalitet og eksamensforberedelse\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;stærkt grundlag\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Rapporten er reflekterende med konkrete eksempler og fagligt sprog, hvilket giver et godt grundlag for mundtlig eksamen.\u0026#34; } ], \u0026#34;criteria\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;Formelle krav og rapportgrundlag\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Stærkt grundlag\u0026#34;, \u0026#34;evidence\u0026#34;: \u0026#34;Rapporten er individuel, indeholder kvittering for evalueringsskema og har passende omfang.\u0026#34;, \u0026#34;strengths\u0026#34;: [\u0026#34;Tydelig personlig refleksion\u0026#34;, \u0026#34;Passende omfang\u0026#34;, \u0026#34;Dokumentation for evalueringsskema\u0026#34;], \u0026#34;improvements\u0026#34;: [\u0026#34;Ingen væsentlige forbedringer nødvendige her.\u0026#34;], \u0026#34;guidingQuestions\u0026#34;: [\u0026#34;Er der dokumentation for, at rapporten er den studerendes eget arbejde?\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;Praktikvirksomhed og kontekst\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Tilstrækkeligt grundlag\u0026#34;, \u0026#34;evidence\u0026#34;: \u0026#34;Beskrivelse af V1 som IT-konsulenthus og 
Cloud Operations-teamet, men begrænset om samarbejdsflader.\u0026#34;, \u0026#34;strengths\u0026#34;: [\u0026#34;Klar beskrivelse af virksomhedens fokus og teamets arbejdsområde.\u0026#34;], \u0026#34;improvements\u0026#34;: [\u0026#34;Uddyb samarbejdsflader og organisatoriske rammer for bedre kontekst.\u0026#34;], \u0026#34;guidingQuestions\u0026#34;: [\u0026#34;Hvordan er samarbejdet mellem teamet og andre afdelinger eller kunder konkret organiseret?\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;Læringsmål: viden\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Stærkt grundlag\u0026#34;, \u0026#34;evidence\u0026#34;: \u0026#34;Indsigt i cloud-teknologier, sikkerhedsprincipper og organisatoriske processer dokumenteret gennem konkrete eksempler.\u0026#34;, \u0026#34;strengths\u0026#34;: [\u0026#34;God forståelse for tekniske og organisatoriske rammer.\u0026#34;, \u0026#34;Kobling til teori som CIA-triaden og Zero Trust.\u0026#34;], \u0026#34;improvements\u0026#34;: [\u0026#34;Kan styrkes med mere detaljer om virksomhedens overordnede drift.\u0026#34;], \u0026#34;guidingQuestions\u0026#34;: [\u0026#34;Hvordan påvirker virksomhedens forretningsmodel den daglige drift?\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;Læringsmål: færdigheder\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Stærkt grundlag\u0026#34;, \u0026#34;evidence\u0026#34;: \u0026#34;Beskrivelse af brug af Bicep, CI/CD, Git, sikkerhedskonfiguration og agile metoder.\u0026#34;, \u0026#34;strengths\u0026#34;: [\u0026#34;Konkrete tekniske opgaver og metoder.\u0026#34;, \u0026#34;Begrundelser for valg og refleksion over praksis.\u0026#34;], \u0026#34;improvements\u0026#34;: [\u0026#34;Kan uddybe planlægning og prioritering af opgaver mere eksplicit.\u0026#34;], \u0026#34;guidingQuestions\u0026#34;: [\u0026#34;Hvordan planlagde og prioriterede du dine daglige opgaver?\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;Læringsmål: kompetencer\u0026#34;, \u0026#34;level\u0026#34;: 
\u0026#34;Tilstrækkeligt grundlag\u0026#34;, \u0026#34;evidence\u0026#34;: \u0026#34;Beskrivelse af læring, mentorforhold og samarbejde, men begrænset om tværfagligt samarbejde.\u0026#34;, \u0026#34;strengths\u0026#34;: [\u0026#34;God beskrivelse af personlig læring og håndtering af usikkerhed.\u0026#34;], \u0026#34;improvements\u0026#34;: [\u0026#34;Inddrag flere eksempler på tværfagligt samarbejde og professionel deltagelse.\u0026#34;], \u0026#34;guidingQuestions\u0026#34;: [\u0026#34;Kan du give eksempler på samarbejde med andre faggrupper?\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;Udførte opgaver og faglig refleksion\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Stærkt grundlag\u0026#34;, \u0026#34;evidence\u0026#34;: \u0026#34;Konkrete opgaver med refleksion over tekniske valg og kobling til teori og metoder.\u0026#34;, \u0026#34;strengths\u0026#34;: [\u0026#34;God kobling mellem praksis og teori.\u0026#34;, \u0026#34;Refleksion over konsekvenser og læring.\u0026#34;], \u0026#34;improvements\u0026#34;: [\u0026#34;Ingen væsentlige forbedringer nødvendige.\u0026#34;], \u0026#34;guidingQuestions\u0026#34;: [\u0026#34;Hvordan valgte du specifikke tekniske løsninger?\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;Personlige udviklingsmål\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Tilstrækkeligt grundlag\u0026#34;, \u0026#34;evidence\u0026#34;: \u0026#34;Refleksion over personlighedstest og samarbejde, men mindre fokus på fremtidige udviklingspunkter.\u0026#34;, \u0026#34;strengths\u0026#34;: [\u0026#34;Tydelig selvindsigt og refleksion over samarbejdsroller.\u0026#34;], \u0026#34;improvements\u0026#34;: [\u0026#34;Formuler konkrete fremtidige udviklingsmål og handlinger.\u0026#34;], \u0026#34;guidingQuestions\u0026#34;: [\u0026#34;Hvilke personlige kompetencer vil du arbejde videre med?\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;Udbytte og værdiskabelse\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Stærkt grundlag\u0026#34;, 
\u0026#34;evidence\u0026#34;: \u0026#34;Feedback fra kollegaer og konkrete eksempler på værdi skabt for team og kunde.\u0026#34;, \u0026#34;strengths\u0026#34;: [\u0026#34;God dokumentation af værdi for virksomheden.\u0026#34;, \u0026#34;Refleksion over gensidig læring.\u0026#34;], \u0026#34;improvements\u0026#34;: [\u0026#34;Ingen væsentlige forbedringer nødvendige.\u0026#34;], \u0026#34;guidingQuestions\u0026#34;: [\u0026#34;Hvordan kunne du skabe endnu mere værdi i praktikforløbet?\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;DARE, SHARE, CARE\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Tilstrækkeligt grundlag\u0026#34;, \u0026#34;evidence\u0026#34;: \u0026#34;Eksempler på initiativ, videndeling og ansvarlighed, men ikke eksplicit opdelt efter DARE, SHARE, CARE.\u0026#34;, \u0026#34;strengths\u0026#34;: [\u0026#34;Konkrete situationer med initiativ og samarbejde.\u0026#34;], \u0026#34;improvements\u0026#34;: [\u0026#34;Strukturer refleksionerne tydeligere omkring DARE, SHARE og CARE.\u0026#34;], \u0026#34;guidingQuestions\u0026#34;: [\u0026#34;Kan du give konkrete eksempler på, hvordan du har vist mod, delt viden og udvist ansvarlighed?\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;Refleksionskvalitet og eksamensforberedelse\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Stærkt grundlag\u0026#34;, \u0026#34;evidence\u0026#34;: \u0026#34;Rapporten indeholder konkrete eksempler, fagligt sprog og balancerede vurderinger.\u0026#34;, \u0026#34;strengths\u0026#34;: [\u0026#34;God refleksion og faglig dybde.\u0026#34;, \u0026#34;Velegnet til eksamensdialog.\u0026#34;], \u0026#34;improvements\u0026#34;: [\u0026#34;Ingen væsentlige forbedringer nødvendige.\u0026#34;], \u0026#34;guidingQuestions\u0026#34;: [\u0026#34;Hvilke temaer vil du fremhæve til den mundtlige eksamen?\u0026#34;] } ], \u0026#34;nextSteps\u0026#34;: [ \u0026#34;Uddyb samarbejdsflader og organisatoriske rammer i virksomheden for bedre kontekst.\u0026#34;, \u0026#34;Formuler konkrete 
fremtidige personlige udviklingsmål med tilhørende handlinger.\u0026#34;, \u0026#34;Strukturer refleksionerne eksplicit omkring DARE, SHARE og CARE for at styrke denne del.\u0026#34;, \u0026#34;Forbered eksempler på tværfagligt samarbejde og planlægning af arbejdsopgaver til eksamen.\u0026#34;, \u0026#34;Overvej at fremhæve temaer og spørgsmål, der kan danne grundlag for en god eksamensdialog.\u0026#34; ], \u0026#34;risksAndUncertainties\u0026#34;: [ \u0026#34;Rapporten indeholder ikke detaljeret beskrivelse af samarbejdsflader, hvilket kan kræve uddybning ved eksamen.\u0026#34;, \u0026#34;Personlige udviklingsmål er reflekteret, men mangler konkrete fremtidige handlinger.\u0026#34;, \u0026#34;Refleksionerne omkring DARE, SHARE og CARE er implicitte og kan være svære at vurdere uden uddybning.\u0026#34;, \u0026#34;Der er begrænset dokumentation for tværfagligt samarbejde, hvilket kan være et opfølgningspunkt.\u0026#34; ] } What we\u0026rsquo;ve learned # The difference between a System prompt and a User prompt How to integrate with an LLM-API ","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/projects/llm-api/","section":"Projects","summary":"In this article, I will show how to create an AI-driven application. The goal is to create an application that can integrate with an LLM-API. 
We want to make an application that can take an internship report as input, and in return receive an assessment of the report as output.","title":"LLM-API","type":"projects"},{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/","section":"Portfolio","summary":"","title":"Portfolio","type":"page"},{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/categories/project/","section":"Categories","summary":"","title":"Project","type":"categories"},{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/react/","section":"Tags","summary":"","title":"React","type":"tags"},{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/springboot/","section":"Tags","summary":"","title":"Springboot","type":"tags"},{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"20 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/agent/","section":"Tags","summary":"","title":"Agent","type":"tags"},{"content":"","date":"20 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/code/","section":"Tags","summary":"","title":"Code","type":"tags"},{"content":" What is a code agent? 
# There\u0026rsquo;s a distinct difference between using an AI on a website and using a Code Agent:\nAn assistant is an AI you can ask anything An agent is an AI that can do anything Assistants Agents Control flow You control the loop It controls the loop Time horizon One-shot (or short conversations) Long-running tasks State management Context = prompt Context = environment Autonomy Supports human decisions Fully autonomous Decision-making Provides insights for users to act on Operates independently Complexity Adapts to user input dynamically Performs structured tasks Horizontal vs Vertical agents # Horizontal agents: Broad capability across domains (like ChatGPT) Vertical agents: Specialized for one domain Types of code agents # Claude Code OpenAI Codex Better imagination. Good for sparring and helping figure out what is wrong or what\u0026rsquo;s going on Faster and more reliable but needs more concise instructions Meditation Quiz App # Let\u0026rsquo;s make a Meditation Quiz App from scratch using a Code Agent to demonstrate the power of this tool.\nChoose your Code Agent # What you choose is entirely up to you. As stated above, the differences between code agents are minimal, so it is mostly up to personal preference.\nClaude Code Quickstart OpenAI Codex Quickstart Tip If using the CLI, type /plan to switch to Plan mode\nOnce set up, give it the prompt below:\nPrompt I need to make a website called Scenius. 
It\u0026#39;s a meditation quiz app, and it has the following requirements: Requirements: - Must be made in React - User must be able to select a block of questions, that shows the correct answers after finishing the block - It should be possible to take a quiz again - A quiz should remember the answers if you have taken it before - When starting a quiz, it should ask if you want to see your previous answers (if available), or start a new quiz - Blocks must be answered in order (0 through 4) - User must answer all questions correctly before proceeding to the next block - Should be able to create more questions in the future - Does not rely on a database - It needs to be deployed (preferably on GitHub as a static website) Here\u0026#39;s the quiz with the correct answers checked: # 0) Om meditation - og vælge ikke at have et problem ## #1 - Målet med meditation er at være helt tom for tanker * [ ] Rigtigt * [x] Forkert ## #2 - Hvor længe skal man meditere? * [ ] Det er lige meget * [ ] 20 minutter om dagen * [x] Så længe man på forhånd har besluttet sig for at gøre det * [ ] Altid over 10 minutter ad gangen ## #3 - Man kan kun meditere rigtigt hvis man sidder i lotusstilling * [ ] Rigtigt * [x] Forkert ## #4 - Meditation er en teknik * [ ] Rigtigt * [x] Forkert ## #5 - Hvis jeg opdager, at jeg tænker på et problem, hvad gør jeg så? * [ ] Gennemgår instruktionerne og vender tilbage til meditationen * [ ] Tænker på noget andet * [x] Ingenting - jeg er allerede fri af tankerne * [ ] Skubber tankerne væk ## #6 - Hvis jeg synes det er svært at meditere, hvad gør jeg så? 
* [x] Så vælger jeg ikke at gøre et problem ud af det * [ ] Så forsøger jeg at tænke på noget positivt * [ ] Så gør jeg det forkert --- # 1) Om at være stille ## #1 - At være stille har både en indre og en ydre del * [x] Rigtigt * [ ] Forkert ## #2 - Den indre del betyder: * [x] At jeg ikke forholder mig til tanker og følelser * [ ] At alle tanker og følelser står stille * [ ] At jeg kan fjerne alle tanker og følelser * [ ] At jeg ikke dagdrømmer ## #3 - Man kan ikke være stille når der er larm i nærheden? * [ ] Rigtigt * [x] Forkert ## #4 - Man kan godt være fuldkommen stille og samtidig have hovedet fuld af tanker igennem hele meditationen? * [x] Rigtigt * [ ] Forkert ## #5 - Man kan kun meditere hvis man sidder fuldkommen stille? * [ ] Rigtigt * [x] Forkert ## #6 - Stilhed er ... * [ ] en følelse * [ ] en oplevelse * [ ] en måde at have det på * [x] en indre position i forhold til tanker og følelser --- # 2) Om at være afslappet (ease of being) ## #1 - Den indre del af instruktionen er den vigtigste? * [x] Rigtigt * [ ] Forkert ## #2 - Det indre og ydre afspejler hinanden i meditationen. Hvad betyder det? * [ ] Hvis man ser anstrengt ud i ansigtet, så har man psykiske problemer * [ ] At hvis man smiler, så virker meditationen bedre * [ ] At hvis der er fred på ydersiden, så er der også fred på indersiden * [x] At hvis man er afslappet på ydersiden, er det lettere at være afslappet på indersiden og omvendt ## #3 - Det gælder om at blive mere og mere afslappet i kroppen i løbet af meditationen? * [ ] Rigtigt * [x] Forkert ## #4 - At være afslappet i forhold til sin oplevelse betyder, at vi * [x] ikke blander os i hvad vi oplever * [ ] observerer vores tanker og skubber dem væk * [ ] ikke må føle noget når vi mediterer ## #5 - Meditation virker ikke hvis man er anspændt i kroppen * [ ] Rigtigt * [x] Forkert ## #6 - Hvis jeg er helt afslappet i kroppen efter en meditation er det et tegn på at jeg har gjort det rigtigt? 
* [ ] Rigtigt * [ ] Forkert * [x] Måske --- # 3) Om at være opmærksom og lysvågen ## #1 - I denne meditationsform retter man sin opmærksomhed mod objekter i bevidstheden * [ ] Rigtigt * [x] Forkert ## #2 - Objekter i bevidstheden er: * [ ] Tanker, følelser og lyde * [x] Alt som har en begyndelse og en afslutning * [ ] Tanker og genstande ## #3 - Man kan ikke småsove og være opmærksom på samme tid? * [x] Rigtigt * [ ] Forkert ## #4 - At være opmærksom i meditation er ... * [ ] at være opmærksom på alt, der rører sig i bevidstheden * [x] ikke at hænge fast i noget * [ ] at være fast fokuseret på et punkt ## #5 - Når man er opmærksom er der ingen tanker? * [ ] Rigtigt * [x] Forkert ## #6 - Man skal anstrenge sig for at være opmærksom? * [ ] Rigtigt * [ ] Forkert * [x] Hverken rigtigt eller forkert --- # 4) Om at lade alting være ## #1 - Hvis man overholder den første instruktion og er fuldkommen stille, så lader man også alting være? * [x] Rigtigt * [ ] Forkert ## #2 - Du er dine tanker og følelser? * [ ] Rigtigt * [x] Forkert ## #3 - At lade alting være, som det er, er det samme som at vælge ikke at have et problem? * [x] Rigtigt * [ ] Forkert ## #4 - At lade alting være som det er, betyder at man først og fremmest skal lade de negative tanker være? * [ ] Rigtigt * [x] Forkert ## #5 - Når vi lader ALTING være, så har vi droppet vores relation til alt bevidsthedsindhold * [x] Rigtigt * [ ] Forkert Do you need any clarifications? 
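Before handing the prompt over, it can help to sketch the trickiest requirements yourself, namely that blocks 0 through 4 unlock in order and only when every answer in the earlier blocks is correct. Below is a minimal, hypothetical TypeScript sketch of that gating logic; the names (QuizBlock state, canStartBlock, etc.) are illustrative and not taken from the generated app, and a plain Map stands in for the browser\u0026rsquo;s localStorage so the snippet runs anywhere:

```typescript
// Hypothetical sketch of the block-gating requirement from the prompt.
// Names are illustrative; the code agent's actual output will differ.

type Answer = { question: number; choice: number; correct: boolean };

interface QuizState {
  // Finished answers per block index (0 through 4), kept so a retake
  // can offer "see your previous answers".
  completed: Map<number, Answer[]>;
}

// A block may only be started once every earlier block has been
// finished with all questions answered correctly.
function canStartBlock(state: QuizState, block: number): boolean {
  for (let b = 0; b < block; b++) {
    const answers = state.completed.get(b);
    if (!answers || !answers.every(a => a.correct)) return false;
  }
  return true;
}

const state: QuizState = { completed: new Map() };
console.log(canStartBlock(state, 0)); // → true (block 0 is always open)
state.completed.set(0, [{ question: 1, choice: 1, correct: true }]);
console.log(canStartBlock(state, 1)); // → true (block 0 fully correct)
state.completed.set(1, [{ question: 1, choice: 0, correct: false }]);
console.log(canStartBlock(state, 2)); // → false (block 1 not fully correct)
```

If the agent takes a route like this, serializing `state.completed` to localStorage as JSON is what lets a retaken quiz offer the previous answers without any database.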
Results # The prompt states that it must be deployed, and as such, it should provide a GitHub Actions workflow to let you deploy it. Just remember to go to your project settings on GitHub, then the Pages side menu, and change your Source to GitHub Actions.\nCheck it out here: Scenius\nWhat we\u0026rsquo;ve learned # The difference between an AI Assistant and a Code Agent Which Code Agents exist today and their pros \u0026amp; cons How to build a quiz app using a code agent ","date":"20 April 2026","externalUrl":null,"permalink":"/Portfolio/projects/code-agents/","section":"Projects","summary":"Code agents - AI systems that go beyond answering questions to autonomously build and execute tasks. It explains how they differ from traditional assistants, explores different types of agents, and demonstrates their capabilities by creating and deploying a full meditation quiz app from a single prompt.","title":"Code Agents","type":"projects"},{"content":" Automate RAG # This is a continuation of the RAG article. If you haven\u0026rsquo;t checked it out yet, I highly recommend giving it a read, since we are building upon another project:\nRAG 13 April 2026\u0026middot;779 words\u0026middot;4 mins Project Exam Rag Chunk Vector Embedding RAG (Retrieval Augmented Generation) is an AI technique that improves answers by combining information retrieval from external sources with language model generation. It helps overcome limitations of standard models by providing up-to-date, accurate, and context-specific responses. Goal # The ultimate goal is to make a RAG chatbot that continuously improves its knowledge base the moment the information is available. To do this, we make a RAG that retrieves the articles from this website. 
As soon as a new article is released, the chatbot needs to know about it.\nSolution # There are many ways to approach this problem, but the simplest solution that works on any Hugo/Blowfish site is a simple script and a few changes to the GitHub workflow file. Here\u0026rsquo;s the idea:\nUse Dify.ai\u0026rsquo;s API endpoint to provide the knowledge Call the endpoint on each deploy Wipe the entire knowledge base, and rebuild it on every deploy Hugo and Blowfish website articles are markdown files. We can send these files to Dify.ai using their REST API endpoint. What we need from Dify.ai:\nBase URL API key Dataset ID And here\u0026rsquo;s how we\u0026rsquo;re going to do it:\nLogin to your Dify.ai account and open the Knowledge tab Click on the Service API button. Here\u0026rsquo;s the Base URL: https://api.dify.ai (omit the /v1) Click on the API Key and then Create new Secret key Copy the key and write it down somewhere. You will not be able to view it again Open the Knowledge base you want to use In the URL, there should be a UUID/GUID. 
This is your Dataset ID.
With all of this, we can now create a GitHub Actions workflow file that uses this information to send our markdown files to Dify.ai:

hugo.yaml

```yaml
name: Build and deploy

on:
  push:
    branches:
      - main
  workflow_dispatch:

permissions:
  contents: read
  pages: write
  id-token: write

concurrency:
  group: pages
  cancel-in-progress: false

defaults:
  run:
    shell: bash

jobs:
  build:
    runs-on: ubuntu-latest
    env:
      DART_SASS_VERSION: 1.99.0
      GO_VERSION: 1.26.1
      HUGO_VERSION: 0.160.0
      NODE_VERSION: 24.14.1
      TZ: Europe/Oslo
      DIFY_BASE_URL: ${{ secrets.DIFY_BASE_URL }}
      DIFY_API_KEY: ${{ secrets.DIFY_API_KEY }}
      DIFY_DATASET_ID: ${{ secrets.DIFY_DATASET_ID }}
    steps:
      - name: Checkout
        uses: actions/checkout@v6
        with:
          submodules: recursive
          fetch-depth: 0
      - name: Setup Go
        uses: actions/setup-go@v6
        with:
          go-version: ${{ env.GO_VERSION }}
          cache: false
      - name: Setup Node.js
        uses: actions/setup-node@v6
        with:
          node-version: ${{ env.NODE_VERSION }}
      - name: Setup Pages
        id: pages
        uses: actions/configure-pages@v6
      - name: Create directory for user-specific executable files
        run: |
          mkdir -p "${HOME}/.local"
      - name: Install Dart Sass
        run: |
          curl -sLJO "https://github.com/sass/dart-sass/releases/download/${DART_SASS_VERSION}/dart-sass-${DART_SASS_VERSION}-linux-x64.tar.gz"
          tar -C "${HOME}/.local" -xf "dart-sass-${DART_SASS_VERSION}-linux-x64.tar.gz"
          rm "dart-sass-${DART_SASS_VERSION}-linux-x64.tar.gz"
          echo "${HOME}/.local/dart-sass" >> "${GITHUB_PATH}"
      - name: Install Hugo
        run: |
          curl -sLJO "https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.tar.gz"
          mkdir "${HOME}/.local/hugo"
          tar -C "${HOME}/.local/hugo" -xf "hugo_extended_${HUGO_VERSION}_linux-amd64.tar.gz"
          rm "hugo_extended_${HUGO_VERSION}_linux-amd64.tar.gz"
          echo "${HOME}/.local/hugo" >> "${GITHUB_PATH}"
      - name: Verify installations
        run: |
          echo "Dart Sass: $(sass --version)"
          echo "Go: $(go version)"
          echo "Hugo: $(hugo version)"
          echo "Node.js: $(node --version)"
      - name: Install Node.js dependencies
        run: |
          [[ -f package-lock.json || -f npm-shrinkwrap.json ]] && npm ci || true
      - name: Configure Git
        run: |
          git config core.quotepath false
      - name: Install jq
        run: |
          sudo apt-get update
          sudo apt-get install -y jq
      - name: Sync Hugo markdown to Dify
        run: |
          set -euo pipefail
          : "${DIFY_BASE_URL:?Missing DIFY_BASE_URL}"
          : "${DIFY_API_KEY:?Missing DIFY_API_KEY}"
          : "${DIFY_DATASET_ID:?Missing DIFY_DATASET_ID}"

          echo "Deleting existing Dify documents..."
          page=1
          while :; do
            response=$(curl --silent --show-error --fail \
              --request GET \
              --url "${DIFY_BASE_URL}/v1/datasets/${DIFY_DATASET_ID}/documents?page=${page}&limit=100" \
              --header "Authorization: Bearer ${DIFY_API_KEY}")
            ids=$(echo "$response" | jq -r '.data[]?.id')
            has_more=$(echo "$response" | jq -r '.has_more // false')
            if [ -n "$ids" ]; then
              while IFS= read -r id; do
                [ -z "$id" ] && continue
                echo "Deleting document ${id}"
                curl --silent --show-error --fail \
                  --request DELETE \
                  --url "${DIFY_BASE_URL}/v1/datasets/${DIFY_DATASET_ID}/documents/${id}" \
                  --header "Authorization: Bearer ${DIFY_API_KEY}"
              done <<< "$ids"
            fi
            if [ "$has_more" != "true" ]; then
              break
            fi
            page=$((page + 1))
          done

          echo "Uploading Hugo markdown files..."
          find content -type f -name "*.md" | sort | while IFS= read -r file; do
            # Create unique name from path
            doc_name=$(echo "$file" \
              | sed 's#^content/##' \
              | sed 's#/index.md$##' \
              | sed 's#\.md$##' \
              | sed 's#/#__#g')
            echo "Uploading ${file} as ${doc_name}"
            curl --silent --show-error --fail \
              --request POST \
              --url "${DIFY_BASE_URL}/v1/datasets/${DIFY_DATASET_ID}/document/create-by-file" \
              --header "Authorization: Bearer ${DIFY_API_KEY}" \
              --form "file=@${file};filename=${doc_name}.md" \
              --form 'data={"indexing_technique":"high_quality","doc_form":"text_model","process_rule":{"mode":"automatic"}}'
          done
      - name: Cache restore
        id: cache-restore
        uses: actions/cache/restore@v5
        with:
          path: ${{ runner.temp }}/hugo_cache
          key: hugo-${{ github.run_id }}
          restore-keys: hugo-
      - name: Build the site
        run: |
          hugo build \
            --gc \
            --minify \
            --baseURL "${{ steps.pages.outputs.base_url }}/" \
            --cacheDir "${{ runner.temp }}/hugo_cache"
      - name: Cache save
        id: cache-save
        uses: actions/cache/save@v5
        with:
          path: ${{ runner.temp }}/hugo_cache
          key: ${{ steps.cache-restore.outputs.cache-primary-key }}
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: ./public

  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v5
```

Once added, push this to your GitHub project. The CI/CD pipeline will start; however, it will fail, because we still need to provide the base URL, API key, and dataset ID as secrets.

Open your project settings on GitHub

Open Secrets and variables, then Actions on
the left menu

Add 3 new repository secrets:

- DIFY_BASE_URL
- DIFY_API_KEY
- DIFY_DATASET_ID

It is important that you write the secret names exactly like this; otherwise, GitHub Actions will not detect the secrets.

What we've learned #

- How to automatically feed the RAG more knowledge
- Using the Dify.ai REST API endpoints

Automate RAG · 13 April 2026 · /Portfolio/projects/rag-demo/
Summary: This project shows you how to fully automate your chatbot's knowledge so it never falls behind. Every time you publish new content, your chatbot instantly learns it: no manual updates, no maintenance headaches.

What is RAG? #

Retrieval Augmented Generation: a technique used in AI to improve answers by combining:

- Retrieval: the system looks up relevant information from external sources (documents, databases, PDFs, websites, internal company data)
- Generation: the language model uses the retrieved information to generate a response

Normal AI models rely only on what they were trained on. With RAG, the AI can provide up-to-date information, provided it is given new data; it is not limited by the model's training cutoff.
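To make the retrieve-then-generate loop concrete, here is a minimal, self-contained sketch. Everything in it is illustrative: `embed` is a toy stand-in (a bag-of-words count vector with a tiny stop-word list), not a real embedding model, and the chunks are invented sample data.

```python
import math
import re
from collections import Counter

STOPWORDS = {"the", "what", "are", "has"}  # tiny illustrative stop-word list

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    tokens = re.findall(r"[a-z]{3,}", text.lower())
    return Counter(t for t in tokens if t not in STOPWORDS)

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity: "similar meanings have similar vectors".
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Retrieval: rank chunks by similarity to the question, keep the top k.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    # Augmentation: insert the retrieved chunks into the prompt.
    joined = "\n".join(context)
    return (
        "Answer the question using the context below:\n"
        f"{joined}\n"
        f"Question: {question}"
    )

chunks = [
    "Denmark provides universal healthcare to all residents.",
    "Denmark has a strong welfare system funded by taxes.",
    "The capital of France is Paris.",
]
question = "What are the benefits of Denmark's welfare system?"
prompt = build_prompt(question, retrieve(question, chunks))
print(prompt)
```

In a real system, `embed` would call an embedding model, the ranked chunks would come from a vector database, and the final `prompt` is what gets sent to the LLM.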
This means it can provide more accurate answers that are grounded in real data.

At its core, RAG solves a fundamental limitation: LLMs (Large Language Models) don't know your data. LLMs are trained on general data, so they can be outdated, they can hallucinate, and, most importantly, they don't have access to private or internal information.

How does it work? #

1. Ingestion #

Prepare your knowledge base: collect any data you can feed the RAG (PDFs, docs, webpages, databases, APIs, etc.) and chunk it into smaller pieces. LLMs can't process huge amounts of documents at once, and smaller chunks improve retrieval accuracy.

2. Embedding #

Each chunk is converted into a vector (a list of numbers representing meaning).

Example:

"Denmark has a strong welfare system" → [0.12, -0.98, 0.44, ...]

This is done using an embedding model. The key idea is that similar meanings get similar vectors, which enables semantic search rather than plain keyword matching.

3. Storage (vector database) #

All embeddings are stored in a vector database. This allows fast similarity search: "find the chunks most similar to this question".

4. Query #

When a user asks a question like "What are the benefits of Denmark's welfare system?", the system converts the question into an embedding and finds the closest vectors in the vector database. It might retrieve something like:

"Denmark provides universal healthcare…"

5. Augmentation #

The retrieved chunks are inserted into the prompt:

Answer the question using the context below:
[Chunk 1]
[Chunk 2]
[Chunk 3]
Question: What are the benefits of Denmark's welfare system?

6. Generation (LLM response) #

The model can now read the retrieved context and generate an answer that is grounded in it.

Create a RAG chatbot #

1. Create an account at Dify.ai
2. Sign in

Provide knowledge #

1. Navigate to the Knowledge tab
2. Select Create Knowledge
3. Select Import from file and upload a file (or use the file below)
   Download
4. Once uploaded, click Next
5. Set Retrieval Setting to Hybrid Search
6. Save & Process

Create the chatbot #

1. Navigate to the Studio tab
2. In the CREATE APP card, select Create from Blank
3. Expand the MORE BASIC APP TYPES and select Chatbot
4. Provide a name and Create
5. Provide some instructions (or use the instructions below)

Instructions

You are a helpful assistant that answers questions about the person described in the provided CV and supporting context.

Your role:
- Help users quickly find accurate information about this person.
- Be supportive, professional, and easy to talk to.
- Act like an assistant for this person, not as the person.

Behavior:
- Give short, clear, and concise answers.
- Prefer direct answers over long explanations.
- Be friendly and helpful.
- Summarize when possible.
- Use bullet points only when they improve clarity.

Grounding:
- Only answer using information found in the provided CV or retrieved context.
- Do not invent details, infer personal facts, or guess.
- If the answer is not in the context, say so briefly and suggest what kind of information is available.

Response rules:
- Keep answers brief unless the user asks for more detail.
- Focus on facts such as experience, skills, education, projects, achievements, roles, and background.
- When relevant, mention that the answer is based on the available CV/context.
- If multiple relevant facts exist, give the most important ones first.

Tone:
- Supportive
- Professional
- Concise
- Helpful

Examples:
- If asked "What are his main skills?" give a short list of the most relevant skills from the CV.
- If asked "What experience does he have in X?" summarize only the experience supported by the context.
- If asked something not covered in the CV, respond: "I could not find that in the available CV/context."

Do not:
- Pretend to have personal knowledge beyond the provided documents.
- Make up dates, titles, achievements, or opinions.
- Give long or repetitive answers.

6. In the Knowledge section, click + Add
7. Select the file you uploaded and click Add
8. Click Publish in the top right corner, then Publish Update
9. Ask the bot anything it might be able to answer from the knowledge provided

What we've learned #

- What a RAG is, and how it works
- How to create a simple chatbot

RAG · 13 April 2026 · /Portfolio/projects/rag/
Summary: RAG (Retrieval Augmented Generation) is an AI technique that improves answers by combining information retrieval from external sources with language model generation. It helps overcome limitations of standard models by providing up-to-date, accurate, and context-specific responses.

Peli #

Visit Peli here: Peli

Tech Stack #

- Java
- Javalin
- React
- PostgreSQL
- JUnit

About #

Peli is a platform built by four computer science students who share a passion for video games and software development. We created Peli to make it easy for gamers to stay up to date with upcoming game releases and keep track of the titles they're most excited about.

Our goal is to bring all upcoming releases into one simple, user-friendly place. With Peli, you can explore the latest game announcements, search for specific titles, and discover what's coming next across different platforms and genres.

By creating an account, users can personalize their experience: log in, favorite games they're looking forward to, and build their own list of upcoming releases to watch. Whether you're waiting for a major AAA launch or a hidden indie gem, Peli helps you stay informed and organized.

Peli is a student-built project driven by curiosity, creativity, and a love for gaming. We're constantly learning and improving, and we hope Peli becomes a useful companion for gamers who don't want to miss what's next.

❤️

Peli · 12 April 2026 · /Portfolio/projects/peli/