Chat History
Overview
LLMs (Large Language Models) are stateless by design: they retain no memory of previous interactions between requests. Each time you send a request, the model processes the input independently, with no knowledge of past conversations. This statelessness helps protect privacy, since the model cannot carry personal or sensitive data between sessions.
Passing Chat History
Since LLMs do not remember previous interactions, you must pass the conversation context (the chat history) with every new request. This lets the model follow the flow of the conversation and generate responses that account for prior exchanges.
The chat history should be formatted as an array of objects, with each object representing a message from either the user or the assistant. The correct format is:
[ { role: "user", content: "question" }, { role: "assistant", content: "answer" } ]
role: Specifies whether the message is from the user or the assistant.
  "user": The message is from the user (the person asking the question).
  "assistant": The message is from the assistant (the model's response).
content: The text content of the message (the question or answer).
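The message shape above can be built with a small helper. This is an illustrative sketch only; the `message` helper is not part of the Ragapi API.

```javascript
// Build a chat-history entry in the { role, content } shape described above.
// Illustrative helper only, not part of the Ragapi API.
function message(role, content) {
  if (role !== "user" && role !== "assistant") {
    throw new Error(`Unsupported role: ${role}`);
  }
  return { role, content };
}

const history = [
  message("user", "What is Ragapi?"),
  message("assistant", "Ragapi is an API for querying documents with LLMs."),
];
```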
Example
Here’s an example of how to pass chat history when making a request:
const chatHistory = [
  { role: "user", content: "What is Ragapi?" },
  {
    role: "assistant",
    content:
      "Ragapi is an API that helps you interact with documents using LLMs.",
  },
]

const response = await fetch(serviceUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json", "x-api-key": apiKey },
  body: JSON.stringify({
    pineconeNamespace,
    pineconeIndexName,
    query: "Tell me more about how Ragapi works",
    chatHistory: chatHistory,
  }),
})
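For a multi-turn conversation, append each exchange to the history after every request so the next request carries the full context. The sketch below assumes you extract the assistant's reply from the response yourself; the exact response field holding the answer depends on your Ragapi setup and is not shown here.

```javascript
// Sketch of maintaining chat history across turns: after each request,
// record both the user's query and the assistant's reply so the next
// request includes the full conversation.
const chatHistory = [];

function recordTurn(query, answer) {
  chatHistory.push({ role: "user", content: query });
  chatHistory.push({ role: "assistant", content: answer });
}

recordTurn(
  "What is Ragapi?",
  "Ragapi is an API that helps you interact with documents using LLMs."
);
recordTurn(
  "Tell me more about how Ragapi works",
  "It retrieves relevant document chunks and passes them to an LLM."
);
```

Each call to `recordTurn` adds two messages, so the history grows in user/assistant pairs and can be sent as the `chatHistory` field of the next request body.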