Ollama Adapter

The Ollama adapter provides access to local models running via Ollama, allowing you to run AI models on your own infrastructure.

Installation

bash
npm install @tanstack/ai-ollama

Basic Usage

typescript
import { chat } from "@tanstack/ai";
import { ollama } from "@tanstack/ai-ollama";

const adapter = ollama({
  baseURL: "http://localhost:11434", // Default Ollama URL
});

const stream = chat({
  adapter,
  messages: [{ role: "user", content: "Hello!" }],
  model: "llama3",
});
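
The model field accepts the name of any model you have pulled locally; see Available Models below.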

Configuration

typescript
import { ollama, type OllamaConfig } from "@tanstack/ai-ollama";

const config: OllamaConfig = {
  baseURL: "http://localhost:11434", // Ollama server URL
  // No API key needed for local Ollama
};

const adapter = ollama(config);
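
If you omit baseURL, the adapter falls back to the default http://localhost:11434 (see the API Reference below).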

Available Models

The models available through the adapter depend on what you have pulled locally. Common models include:

  • llama2 - Llama 2
  • llama3 - Llama 3
  • mistral - Mistral
  • codellama - Code Llama
  • phi - Phi models
  • gemma - Gemma models

To see available models, run:

bash
ollama list
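
You can also query the server for installed models programmatically. This sketch calls Ollama's REST API directly; the /api/tags endpoint belongs to Ollama itself, not to the adapter:

typescript
// List the models installed on the local Ollama server.
// GET /api/tags is Ollama's own REST endpoint, separate from the adapter.
const response = await fetch("http://localhost:11434/api/tags");
const { models } = (await response.json()) as { models: { name: string }[] };

console.log(models.map((m) => m.name)); // e.g. ["llama3:latest", "mistral:latest"]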

Example: Chat Completion

typescript
import { chat, toStreamResponse } from "@tanstack/ai";
import { ollama } from "@tanstack/ai-ollama";

const adapter = ollama({
  baseURL: "http://localhost:11434",
});

export async function POST(request: Request) {
  const { messages } = await request.json();

  const stream = chat({
    adapter,
    messages,
    model: "llama3", // Use a model you have installed
  });

  return toStreamResponse(stream);
}
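
On the client, the Response returned by toStreamResponse can be consumed with the standard fetch streaming APIs. The exact chunk format on the wire is defined by toStreamResponse, so this sketch (which assumes the handler above is mounted at a hypothetical /api/chat route) simply decodes the raw stream as text:

typescript
// Hypothetical client for the POST handler above, assumed to be mounted at /api/chat.
const response = await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ messages: [{ role: "user", content: "Hello!" }] }),
});

// Read the streamed body chunk by chunk as it arrives.
const reader = response.body!.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value, { stream: true }));
}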

Example: With Tools

typescript
import { chat, toolDefinition } from "@tanstack/ai";
import { ollama } from "@tanstack/ai-ollama";
import { z } from "zod";

const adapter = ollama({
  baseURL: "http://localhost:11434",
});

const getLocalDataDef = toolDefinition({
  name: "get_local_data",
  description: "Get data from local storage",
  inputSchema: z.object({
    key: z.string(),
  }),
});

const getLocalData = getLocalDataDef.server(async ({ key }) => {
  // Access local data
  return { data: "..." };
});

const stream = chat({
  adapter,
  messages,
  model: "llama3",
  tools: [getLocalData],
});
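
Note that tool calling only works with models trained for it. In Ollama, newer models such as llama3.1 advertise tool support; older models may ignore the tools entirely.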

Setting Up Ollama

  1. Install Ollama:

    bash
    # macOS
    brew install ollama
    
    # Linux
    curl -fsSL https://ollama.com/install.sh | sh
    
    # Windows
    # Download from https://ollama.com
    
  2. Pull a model:

    bash
    ollama pull llama3
    
  3. Start the Ollama server (a quick reachability check is sketched below):

    bash
    ollama serve
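
Once the server is running, you can verify it is reachable before wiring up the adapter. This sketch pings the /api/version endpoint, which belongs to Ollama's REST API rather than the adapter:

typescript
// Fail fast if the Ollama server isn't reachable.
// GET /api/version is Ollama's own REST endpoint, separate from the adapter.
async function assertOllamaRunning(baseURL = "http://localhost:11434") {
  try {
    const res = await fetch(`${baseURL}/api/version`);
    if (!res.ok) throw new Error(`Unexpected status ${res.status}`);
  } catch (error) {
    throw new Error(
      `Ollama is not reachable at ${baseURL}. Did you run \`ollama serve\`?`,
      { cause: error },
    );
  }
}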
    
Provider Options

Ollama supports provider-specific sampling options. For example, numPredict caps the number of tokens to generate (it maps to Ollama's num_predict), while topP and topK constrain how the next token is sampled:

typescript
const stream = chat({
  adapter: ollama({ baseURL: "http://localhost:11434" }),
  messages,
  model: "llama3",
  providerOptions: {
    temperature: 0.7, // sampling temperature; higher values are more random
    numPredict: 1000, // maximum number of tokens to generate
    topP: 0.9, // nucleus sampling threshold
    topK: 40, // sample only from the 40 most likely tokens
  },
});

Custom Ollama Server

If you're running Ollama on a different host or port:

typescript
const adapter = ollama({
  baseURL: "http://your-server:11434",
});
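
A common pattern is to read the server URL from an environment variable so the same code works in development and production. OLLAMA_BASE_URL is a name chosen for this example, not one the adapter reads automatically:

typescript
import { ollama } from "@tanstack/ai-ollama";

// OLLAMA_BASE_URL is a convention of this example, not the adapter.
const adapter = ollama({
  baseURL: process.env.OLLAMA_BASE_URL ?? "http://localhost:11434",
});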

API Reference

ollama(config)

Creates an Ollama adapter instance.

Parameters:

  • config.baseURL - Ollama server URL (default: http://localhost:11434)

Returns: An Ollama adapter instance.

Benefits of Ollama

  • Privacy - Data stays on your infrastructure
  • Cost - No API costs
  • Customization - Use any compatible model
  • Offline - Works without internet

Next Steps