"AI Concept": difference between revisions

From Marijan Stajic | Wiki
= Notions =


There are other notions that are important to understand in relation to AI technologies.
== Large Language Model (LLM) ==
A '''Large Language Model (LLM)''' is the '''engine behind an AI application''' such as ChatGPT. In this case, the engine powering ChatGPT is GPT-4 (or GPT-4o, previously), which is the LLM used by the application.
'''Azure AI Foundry''' is a service that allows you to '''choose which Large Language Model (LLM)''' you want to use.
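The application/engine split above can be sketched in a few lines. This is a minimal illustration, not a definitive implementation: the request shape follows the widely used OpenAI-style chat-completions format, and the model name is only an example.

```python
# Sketch: the AI application (e.g. a chatbot UI) and the LLM engine are separate.
# The payload shape follows the common OpenAI-style chat-completions format;
# the model name is illustrative.

def build_chat_request(model, user_message):
    """Build the request body an AI application would send to its LLM engine."""
    return {
        "model": model,  # the engine, e.g. "gpt-4o"
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

request = build_chat_request("gpt-4o", "Explain what an LLM is in one sentence.")
```

Note that swapping the engine is a one-line change (the `model` value), which is exactly the kind of choice a service like Azure AI Foundry exposes.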
== Model Context Protocol (MCP) ==
The '''Model Context Protocol (MCP)''' is a protocol that '''standardizes communication between Large Language Models (LLMs)''', through an '''AI agent''', and '''external systems''', such as ITSM tools (like ServiceNow), Kubernetes clusters, and more.
You can use an '''MCP client''', for example '''Continue.dev''', in your IDE (like VS Code) and then '''configure MCP servers''', such as your Kubernetes cluster, to enable your LLM to interact with these systems.
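As a rough sketch, registering an MCP server in an MCP client usually means declaring a command the client launches and talks to. The fragment below is modeled on Continue.dev's YAML configuration; the exact keys can differ between versions, and the server package name here is purely hypothetical.

```yaml
# Hypothetical MCP server entry for an MCP client such as Continue.dev.
# Exact schema depends on the client version; the package name is invented.
mcpServers:
  - name: kubernetes
    command: npx
    args:
      - "-y"
      - "example-kubernetes-mcp-server"
```

Once registered, the client exposes the server's tools (for example, listing pods) to the LLM during a chat session.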
== Retrieval-Augmented Generation (RAG) ==
== Low-Code / No-Code ==
== Prompt Engineering ==
Prompt engineering is the practice of '''crafting clear''', '''structured instructions''' to '''guide AI models''' toward producing optimal outputs. Based on my experience:
* '''Rule 1:''' '''DON'T''' write '''too much text at once'''; if possible, break the work into '''sequences'''. Otherwise, '''use the app's built-in functions'''.
* '''Rule 2:''' '''DON'T''' ask the '''AI to write prompt engineering instructions for another AI'''; it creates an infinite loop and wastes time.
* '''Rule 3:''' 🎯 '''Use emojis''' to clarify and sequence your prompt, helping the '''AI recognise when there's a new instruction'''.
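Rules 1 and 3 can be combined mechanically: split the work into short steps and prefix each one with a visible marker. The helper below is a hypothetical illustration, not part of any tool; the marker and step texts are made up.

```python
# Sketch: sequencing a prompt into short, emoji-marked steps (Rules 1 and 3).
# The helper name, marker, and step texts are illustrative only.

def build_sequenced_prompt(steps, marker="🎯"):
    """Join short instructions so each one starts with a visible marker."""
    return "\n".join(
        f"{marker} Step {i}: {step}" for i, step in enumerate(steps, start=1)
    )

prompt = build_sequenced_prompt([
    "Summarize the incident ticket in two sentences.",
    "List the affected Kubernetes namespaces.",
    "Propose a remediation plan.",
])
print(prompt)
```

Each step then reads as a distinct instruction, which makes it easier for the model to notice where one task ends and the next begins.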

Latest revision as of 23 December 2025, 21:57
