Llama 2
Usage
import { DeuceChatStrategy, LlamaDeuce, Settings } from "llamaindex";
Settings.llm = new LlamaDeuce({ chatStrategy: DeuceChatStrategy.META });
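Once the LLM is registered on Settings, you can call it directly. The sketch below assumes the standard LlamaIndex.TS chat call; the prompt is illustrative.
const response = await Settings.llm.chat({
  messages: [{ content: "Tell me a joke.", role: "user" }],
});
console.log(response.message.content);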
Usage with Replicate
import { DeuceChatStrategy, LlamaDeuce, ReplicateSession, Settings } from "llamaindex";
const replicateSession = new ReplicateSession({
  replicateKey, // your Replicate API token, assumed to be defined elsewhere
});
Settings.llm = new LlamaDeuce({
chatStrategy: DeuceChatStrategy.META,
replicateSession,
});
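With the session supplied, LlamaDeuce sends its requests through your Replicate account. As a quick check, you can call the model directly; a minimal sketch, assuming the standard LlamaIndex.TS complete call, with an illustrative prompt:
const completion = await Settings.llm.complete({
  prompt: "What is Llama 2?",
});
console.log(completion.text);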
Load and index documents
For this example, we will use a single document. In a real-world scenario, you would have multiple documents to index.
import { Document, VectorStoreIndex } from "llamaindex";
// `essay` is the raw text to index, loaded elsewhere (for example, read from a file).
const document = new Document({ text: essay, id_: "essay" });
const index = await VectorStoreIndex.fromDocuments([document]);
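With the index built, you can create a query engine and ask questions over the document. A short sketch; the question is illustrative.
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query({
  query: "What did the author do growing up?",
});
console.log(response.toString());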