AI Concept

From Marijan Stajic | Wiki
= Large Language Model (LLM) =


A '''Large Language Model (LLM)''' is the '''engine behind an AI application''' such as ChatGPT. For example, the engine powering ChatGPT is GPT-4o (previously GPT-4), which is the LLM used by the application.

'''Azure AI Foundry''' is a service that lets you '''choose which Large Language Model (LLM)''' your application uses.
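To make the "engine" idea concrete, here is a minimal sketch of the chat-request shape used by OpenAI-compatible endpoints (the style of API that model deployments typically expose). The model names and message contents are illustrative assumptions; the point is that the application code stays the same while the LLM behind it is swapped by changing one field.

```python
# Sketch: an OpenAI-style chat-completion request body. Swapping the LLM
# that powers the application is just a matter of changing "model".
# Model names below are illustrative, not an endorsement of any deployment.

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completion request payload."""
    return {
        "model": model,  # e.g. "gpt-4o" -- the LLM acting as the engine
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

# Same application logic, different engine (LLM):
request_gpt4o = build_chat_request("gpt-4o", "Hello!")
request_other = build_chat_request("mistral-large", "Hello!")
```

Sending this payload to a real endpoint would additionally require an API key and the deployment's URL, which are omitted here.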
= Model Context Protocol (MCP) =
The '''Model Context Protocol (MCP)''' is a protocol that '''standardizes communication between Large Language Models (LLMs) and external systems''', such as ITSM tools (like ServiceNow), Kubernetes clusters, and more.
You can use an '''MCP client''', for example '''Continue.dev''' in your IDE (like VS Code), and then '''configure MCP servers''', such as one for your Kubernetes cluster, to enable your LLM to interact with these systems.
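As an illustration, an MCP client is typically pointed at MCP servers through its configuration file. The sketch below follows the general shape of such a config; the server name, the launch command, and the `kubernetes-mcp-server` package are hypothetical placeholders, so check your client's documentation for the exact field names it expects.

```yaml
# Illustrative MCP client configuration (field names may differ per client).
# Each entry tells the client how to start an MCP server process that the
# LLM can then call, e.g. one exposing Kubernetes operations.
mcpServers:
  - name: Kubernetes
    command: npx
    args:
      - "-y"
      - "kubernetes-mcp-server@latest"   # hypothetical server package
```

Once the client starts this server, the tools it exposes (for example, listing pods) become available to the LLM during a chat session.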
= n8n (Workflow Automation) =
n8n is a workflow automation tool that lets you build automated workflows by connecting nodes for different applications and services.
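To give a flavor of how n8n represents workflows, here is a simplified JSON sketch of a workflow with two connected nodes: a webhook trigger feeding an HTTP request. This is an assumption-laden fragment for illustration only; real exported workflows include additional required fields (node ids, positions, type versions) that are omitted here.

```json
{
  "name": "Demo webhook workflow",
  "nodes": [
    {
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "parameters": { "path": "demo", "httpMethod": "POST" }
    },
    {
      "name": "HTTP Request",
      "type": "n8n-nodes-base.httpRequest",
      "parameters": { "url": "https://example.com/api", "method": "POST" }
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[ { "node": "HTTP Request", "type": "main", "index": 0 } ]]
    }
  }
}
```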

Version of 6 November 2025, 12:14
