Framework-agnostic headless client for managing chat state and streaming.
```bash
npm install @tanstack/ai-client
```
The main client class for managing chat state.
```ts
import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client";

const client = new ChatClient({
  connection: fetchServerSentEvents("/api/chat"),
  initialMessages: [],
  onMessagesChange: (messages) => {
    console.log("Messages updated:", messages);
  },
  onToolCall: async ({ toolName, input }) => {
    // Handle client tool execution
    return { result: "..." };
  },
});
```
Sends a user message and streams the assistant's response.
```ts
await client.sendMessage("Hello!");
```
Appends a message to the conversation.
```ts
await client.append({
  role: "user",
  content: "Additional context",
});
```
Reloads the last assistant message.
```ts
await client.reload();
```
Stops the current response generation.
```ts
client.stop();
```
Clears all messages.
```ts
client.clear();
```
Manually sets the messages array.
```ts
client.setMessagesManually([...newMessages]);
```
Adds the result of a client-side tool execution.
```ts
await client.addToolResult({
  toolCallId: "call_123",
  tool: "toolName",
  output: { result: "..." },
  state: "output-available",
});
```
Responds to a tool approval request.
```ts
await client.addToolApprovalResponse({
  id: "approval_123",
  approved: true,
});
```
Creates a Server-Sent Events (SSE) connection adapter.
```ts
import { fetchServerSentEvents } from "@tanstack/ai-client";

const adapter = fetchServerSentEvents("/api/chat", {
  headers: {
    Authorization: "Bearer token",
  },
});
```
Creates an HTTP stream connection adapter.
```ts
import { fetchHttpStream } from "@tanstack/ai-client";

const adapter = fetchHttpStream("/api/chat");
```
Creates a custom connection adapter.
```ts
import { stream } from "@tanstack/ai-client";

const adapter = stream(async (messages, data, signal) => {
  // Custom implementation
  const response = await fetch("/api/chat", {
    method: "POST",
    body: JSON.stringify({ messages, ...data }),
    signal,
  });
  // processStream is your own parser that turns the response
  // into the stream shape the client expects
  return processStream(response);
});
```
Creates a typed array of client tools with proper type inference. This eliminates the need for `as const` when defining tool arrays and enables proper discriminated-union type narrowing.
```ts
import {
  clientTools,
  createChatClientOptions,
  fetchServerSentEvents,
} from "@tanstack/ai-client";
import { myTool1, myTool2 } from "./tools";

// Create client implementations
const tool1Client = myTool1.client((input) => {
  // Implementation
  return { result: "..." };
});

const tool2Client = myTool2.client((input) => {
  // Implementation
  return { result: "..." };
});

// Create typed tools array (no 'as const' needed!)
const tools = clientTools(tool1Client, tool2Client);

// Now when you use these tools in chat options:
const chatOptions = createChatClientOptions({
  connection: fetchServerSentEvents("/api/chat"),
  tools, // Fully typed with literal tool names
});

// In your component:
messages.forEach((message) => {
  message.parts.forEach((part) => {
    if (part.type === "tool-call" && part.name === "myTool1") {
      // ✅ TypeScript knows part.name is literally "myTool1"
      // ✅ part.input is typed from myTool1's input schema
      // ✅ part.output is typed from myTool1's output schema
    }
  });
});
```
Helper function to create typed chat client options with proper type inference.
```ts
import {
  createChatClientOptions,
  clientTools,
  type InferChatMessages,
} from "@tanstack/ai-client";

const tools = clientTools(tool1, tool2);

const chatOptions = createChatClientOptions({
  connection: fetchServerSentEvents("/api/chat"),
  tools,
});

// Use InferChatMessages to extract message types
type ChatMessages = InferChatMessages<typeof chatOptions>;
```
```ts
interface UIMessage {
  id: string;
  role: "user" | "assistant";
  parts: MessagePart[];
  createdAt?: Date;
}
```
```ts
type MessagePart = TextPart | ThinkingPart | ToolCallPart | ToolResultPart;
```
```ts
interface TextPart {
  type: "text";
  content: string;
}
```
```ts
interface ThinkingPart {
  type: "thinking";
  content: string;
}
```
Thinking parts represent the model's internal reasoning process. They are typically displayed in a collapsible format and automatically collapse when the response text appears. Thinking parts are UI-only and are not sent back to the model in subsequent requests.
Note: Thinking parts are only available when using models that support reasoning/thinking (e.g., Anthropic Claude with thinking enabled, OpenAI GPT-5 with reasoning enabled).
```ts
interface ToolCallPart {
  type: "tool-call";
  id: string;
  name: string;
  arguments: string; // JSON string (may be incomplete during streaming)
  input?: any; // Parsed tool input (typed from tool's inputSchema)
  state: ToolCallState;
  approval?: ApprovalRequest;
  output?: any; // Tool execution output (typed from tool's outputSchema)
}
```
When using typed tools with `clientTools()` and `createChatClientOptions()`, the `input` and `output` fields are automatically typed based on your tool's Zod schemas, and `name` becomes a discriminated union, enabling type narrowing.
```ts
interface ToolResultPart {
  type: "tool-result";
  id: string;
  toolCallId: string;
  tool: string;
  output: any;
  state: ToolResultState;
  errorText?: string;
}
```
```ts
type ToolCallState =
  | "pending"
  | "approval-requested"
  | "executing"
  | "output-available"
  | "output-error"
  | "cancelled";
```
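A UI typically maps each `ToolCallState` to a short status label. The mapping below is illustrative, not part of the library; the type is declared locally so the sketch stands alone, and since the switch covers every member of the union, no `default` branch is needed for the function to type-check.

```typescript
// ToolCallState as documented above, declared locally for the sketch.
type ToolCallState =
  | "pending"
  | "approval-requested"
  | "executing"
  | "output-available"
  | "output-error"
  | "cancelled";

// One possible state-to-label mapping for display; the wording is arbitrary.
function toolCallLabel(state: ToolCallState): string {
  switch (state) {
    case "pending":
      return "Waiting";
    case "approval-requested":
      return "Needs approval";
    case "executing":
      return "Running";
    case "output-available":
      return "Done";
    case "output-error":
      return "Failed";
    case "cancelled":
      return "Cancelled";
  }
}
```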
```ts
type ToolResultState =
  | "pending"
  | "executing"
  | "output-available"
  | "output-error";
```
Configure stream processing with chunk strategies:
```ts
import { ChatClient, ImmediateStrategy, fetchServerSentEvents } from "@tanstack/ai-client";

const client = new ChatClient({
  connection: fetchServerSentEvents("/api/chat"),
  streamProcessor: {
    chunkStrategy: new ImmediateStrategy(), // Emit every chunk
  },
});
```
