AI Concept

From Marijan Stajic | Wiki



Revision as of 6 November 2025, 21:55

Large Language Model (LLM)

A Large Language Model (LLM) is the engine behind an AI application such as ChatGPT: the application provides the interface, while the LLM — for example GPT-4, or more recently GPT-4o — generates the responses.

Azure AI Foundry is a service that allows you to choose which Large Language Model (LLM) you want to use.
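To make the application/LLM split concrete, here is a minimal sketch of the kind of request an application sends to its LLM. It assumes an OpenAI-style chat-completion payload; the model name and message shape are illustrative and would differ depending on the provider (for example, Azure AI Foundry).

```python
import json

def build_chat_request(model: str, user_message: str) -> str:
    """Build the JSON body of a chat-completion request.

    The application (e.g. ChatGPT) wraps the conversation in a payload
    like this; the "model" field names the LLM that actually does the work.
    """
    payload = {
        "model": model,  # the underlying LLM, e.g. "gpt-4o" (illustrative)
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

body = build_chat_request("gpt-4o", "Explain what an LLM is in one sentence.")
```

Swapping the `model` value is all it takes to put a different LLM behind the same application, which is exactly the choice a service like Azure AI Foundry exposes.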

Model Context Protocol (MCP)

The Model Context Protocol (MCP) is a protocol that standardizes communication between Large Language Models (LLMs) and external systems, such as ITSM tools (like ServiceNow), Kubernetes clusters, and more.

You can use an MCP client, for example Continue.dev in your IDE (like VS Code), and then configure MCP servers, such as one for your Kubernetes cluster, to enable your LLM to interact with these systems.
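As a sketch of what "configuring MCP servers" looks like, the snippet below builds the kind of server registry many MCP clients read: each entry tells the client how to launch an MCP server process that exposes an external system to the LLM. The `mcpServers` layout mirrors common MCP client configs, but the server name, binary, and arguments here are placeholders, not a real installation — Continue.dev has its own configuration format.

```python
import json

# Hypothetical MCP client configuration: one MCP server entry exposing a
# Kubernetes cluster to the LLM. "command" is the server executable the
# client launches; "args" are passed to it. All values are illustrative.
mcp_config = {
    "mcpServers": {
        "kubernetes": {
            "command": "kubernetes-mcp-server",       # placeholder binary
            "args": ["--kubeconfig", "~/.kube/config"],
        }
    }
}

config_json = json.dumps(mcp_config, indent=2)
```

Once the client has such an entry, the LLM can call the tools that server advertises (for example, listing pods) instead of only generating text about them.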

Retrieval-Augmented Generation (RAG)