Integration with the Ollama SDK.

import { ChatOllama } from "@langchain/ollama";

const model = new ChatOllama({
  model: "llama3", // Default model.
});

const result = await model.invoke([
  ["human", "What is a good name for a company that makes colorful socks?"],
]);
console.log(result);

Properties

baseUrl: string = "http://127.0.0.1:11434"

The host URL of the Ollama server.

"http://127.0.0.1:11434"
checkOrPullModel: boolean = false

Whether or not to check the model exists on the local machine before invoking it. If set to true, the model will be pulled if it does not exist.

client: Ollama
model: string = "llama3"

The model to invoke. If the model does not exist, it will be pulled.

"llama3"
embeddingOnly?: boolean
f16Kv?: boolean
format?: string
frequencyPenalty?: number
keepAlive?: string | number = "5m"
"5m"
logitsAll?: boolean
lowVram?: boolean
mainGpu?: number
mirostat?: number
mirostatEta?: number
mirostatTau?: number
numBatch?: number
numCtx?: number
numGpu?: number
numKeep?: number
numPredict?: number
numThread?: number
numa?: boolean
penalizeNewline?: boolean
presencePenalty?: number
repeatLastN?: number
repeatPenalty?: number
seed?: number
streaming?: boolean
temperature?: number
tfsZ?: number
topK?: number
topP?: number
typicalP?: number
useMlock?: boolean
useMmap?: boolean
vocabOnly?: boolean
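
The optional properties above correspond to Ollama's generation options and can be set on the constructor alongside the model name. A minimal sketch combining a few common ones with streaming output; the specific values are illustrative, not recommendations:

import { ChatOllama } from "@langchain/ollama";

const tunedModel = new ChatOllama({
  model: "llama3",
  temperature: 0.3, // lower temperature for more deterministic output
  topP: 0.9,
  numCtx: 4096,     // context window size passed through to Ollama
  numPredict: 256,  // cap on the number of generated tokens
  seed: 42,
  keepAlive: "10m", // keep the model loaded for 10 minutes after the call
});

// Stream tokens as they are produced via the standard Runnable stream() method.
const stream = await tunedModel.stream("Write a haiku about socks.");
for await (const chunk of stream) {
  process.stdout.write(typeof chunk.content === "string" ? chunk.content : "");
}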

Methods

  • Parameters

    • Optional options: unknown

    Returns Omit<ChatRequest, "messages"> & { tools?: ToolDefinition[] }
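
Because the request parameters can carry a tools array, the model can be used for tool calling through the standard bindTools interface. A minimal sketch using an OpenAI-style ToolDefinition; the tool name, its schema, and the choice of a tool-capable model such as "llama3.1" are illustrative assumptions:

import { ChatOllama } from "@langchain/ollama";

// Hypothetical weather tool described in the OpenAI-style ToolDefinition format.
const weatherTool = {
  type: "function" as const,
  function: {
    name: "get_current_weather",
    description: "Get the current weather for a city.",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "Name of the city" },
      },
      required: ["city"],
    },
  },
};

const toolModel = new ChatOllama({ model: "llama3.1" }).bindTools([weatherTool]);

const response = await toolModel.invoke("What is the weather in Paris?");
console.log(response.tool_calls);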