AI Concept
[[Fichier:N1 Kubernetes.png|none]]
Revision as of 8 November 2025, 22:41
Glossary
Large Language Model (LLM)
A Large Language Model (LLM) is the engine behind an AI application such as ChatGPT. For example, ChatGPT is powered by GPT-4o (previously GPT-4), which is the LLM used by the application.
Azure AI Foundry is a service that allows you to choose which Large Language Model (LLM) you want to use.
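As an illustrative sketch, a model deployed through Azure AI Foundry can be called with the `openai` Python SDK's `AzureOpenAI` client. The endpoint, API key, API version, and deployment name below are all placeholders, not real values:

```python
from openai import AzureOpenAI  # pip install openai

# Placeholder values -- replace with your own Azure resource details.
client = AzureOpenAI(
    azure_endpoint="https://my-foundry-resource.openai.azure.com",
    api_key="YOUR_API_KEY",
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o",  # the deployment name chosen in Azure AI Foundry
    messages=[{"role": "user", "content": "Explain a Kubernetes liveness probe."}],
)
print(response.choices[0].message.content)
```

Swapping the deployment name is all it takes to switch the application to a different LLM.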
Model Context Protocol (MCP)
The Model Context Protocol (MCP) is a protocol that standardizes communication between Large Language Models (LLMs) and external systems, such as ITSM tools (like ServiceNow), Kubernetes clusters, and more.
You can use an MCP client, for example Continue.dev in your IDE (like VS Code), and then configure MCP servers, such as your Kubernetes cluster, to enable your LLM to interact with these systems.
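Concretely, an MCP client reads a configuration that tells it how to launch each MCP server. The sketch below is only illustrative: the exact schema differs between clients, and the `mcp-server-kubernetes` package name is an assumption, not a verified identifier:

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["-y", "mcp-server-kubernetes"],
      "env": { "KUBECONFIG": "/home/user/.kube/config" }
    }
  }
}
```

Once registered, the client exposes the server's tools (for example, listing pods) to the LLM, which can invoke them during a conversation.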
Retrieval-Augmented Generation (RAG)
Retrieval-Augmented Generation (RAG) is a technique that retrieves relevant documents from an external knowledge base (for example, your internal documentation) and injects them into the prompt, so the LLM's answers are grounded in up-to-date, domain-specific information rather than only its training data.
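A minimal, purely illustrative sketch of the RAG pattern: naive keyword-overlap retrieval stands in for the embedding-based vector search a real system would use, and the toy knowledge base is invented for the example:

```python
import re

# Toy document store; a real RAG system would hold embeddings in a vector DB.
KNOWLEDGE_BASE = [
    "Pods are the smallest deployable units in Kubernetes.",
    "A CrashLoopBackOff means a container keeps crashing after it restarts.",
    "A Service exposes a set of pods behind a stable virtual IP.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval (real systems use embedding similarity)."""
    q_words = set(re.findall(r"\w+", question.lower()))
    return sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(re.findall(r"\w+", doc.lower()))),
        reverse=True,
    )[:k]

def build_prompt(question: str) -> str:
    """Augment the prompt with retrieved context before sending it to the LLM."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Why is my pod stuck in CrashLoopBackOff?")
```

The augmented `prompt` is then sent to the LLM in place of the raw question, which is what keeps the answer anchored to your own documents.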
Technology Stack
LangChain
LangChain is an application framework that helps you structure your prompts using its PromptTemplate class. For example, with an alerting system, when the AI is queried you can create a template that guides it to follow a consistent debugging structure in its responses:
from langchain.prompts import PromptTemplate  # pip install langchain

prompt = PromptTemplate(
    input_variables=["alert", "logs", "metrics"],
    template="""
You are a Kubernetes expert.
An incident has been detected:
{alert}
Here are the pod's logs:
{logs}
Here are its metrics:
{metrics}
Analyze the probable causes and propose precise corrective actions.
""",
)

# Fill in the variables to produce the final prompt sent to the LLM:
text = prompt.format(alert="...", logs="...", metrics="...")
K8sGPT
K8sGPT is an open-source tool that scans Kubernetes clusters, detects issues, and uses a Large Language Model (LLM) such as Azure OpenAI to explain problems and suggest solutions in natural language.
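As a usage sketch (the exact flags vary by K8sGPT version, and the backend registration prompts you for your own endpoint and key):

```shell
# Register an LLM backend once (here, Azure OpenAI):
k8sgpt auth add --backend azureopenai

# Scan the current cluster and have the LLM explain each issue it finds:
k8sgpt analyze --explain --backend azureopenai
```

Without `--explain`, K8sGPT only lists the detected issues; with it, each finding is sent to the configured LLM for a natural-language diagnosis.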
Ollama
Ollama is an open-source tool that lets you download and run large language models (LLMs) like Llama 3 or Mistral locally, allowing you to use AI without relying on the cloud.
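A typical local workflow looks like this (the model name `llama3` is one of the models Ollama distributes; any other supported model works the same way):

```shell
# Download model weights, then chat with the model entirely locally:
ollama pull llama3
ollama run llama3 "Explain Kubernetes in one sentence."

# Ollama also serves a local REST API (default port 11434):
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'
```

Because both the CLI and the REST API run against the local daemon, no data leaves your machine.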
