Code Mode lets an LLM write and execute TypeScript programs inside a secure sandbox. Instead of making one tool call at a time, the model writes a short script that orchestrates multiple tools with loops, conditionals, `Promise.all`, and data transformations, then returns a single result.
You already have a chat app that uses tools. By the end of this guide, you'll have Code Mode set up so the LLM can compose those tools in TypeScript and execute them in a single sandbox call.
In a traditional agentic loop, every tool call adds a round-trip of messages: the model's tool-call request, the tool result, then the model's next reasoning step. A task that touches five tools can easily consume thousands of tokens in back-and-forth.
With Code Mode the model emits one `execute_typescript` call containing a complete program. The five tool invocations happen inside the sandbox, and only the final result comes back: one request, one response.
When tools are called individually, the model must decide what to do with each result in a new turn. With Code Mode, the model writes the logic up front: filter, aggregate, compare, branch. It can `Promise.all` ten API calls, pick the best result, and return a summary, all in a single execution.
Tools you pass to Code Mode are converted to typed function stubs that appear in the system prompt. The model sees exact input/output types, so it generates correct calls without guessing parameter names or shapes. TypeScript annotations in the generated code are stripped automatically before execution.
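As an illustration, here is a minimal sketch of what such a stub could look like for a weather tool. The mock body and the exact type names are assumptions for this example; only the `external_*` naming convention comes from this guide — in real use the sandbox forwards the call to the host-side tool implementation.

```typescript
// Assumed shape of the typed stub a weather tool might become in the sandbox.
type FetchWeatherInput = { location: string };
type FetchWeatherOutput = { temperature: number; condition: string };

// Mock standing in for the sandbox binding (illustration only).
const external_fetchWeather = async (
  input: FetchWeatherInput
): Promise<FetchWeatherOutput> => {
  return { temperature: 21, condition: "clear" };
};

// The model composes such stubs with ordinary TypeScript:
async function demo(): Promise<string> {
  const weather = await external_fetchWeather({ location: "Tokyo" });
  return `${weather.temperature}°C, ${weather.condition}`;
}
```

Because the stub is fully typed, the model's generated code fails type-directed review (and the model's own reasoning) rather than failing at runtime with a misspelled parameter.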
Generated code runs in an isolated environment (V8 isolate, QuickJS WASM, or Cloudflare Worker) with no access to the host file system, network, or process. The sandbox has configurable timeouts and memory limits.
```bash
pnpm add @tanstack/ai @tanstack/ai-code-mode zod
```
Pick an isolate driver:
```bash
# Node.js — fastest, uses V8 isolates (requires native compilation)
pnpm add @tanstack/ai-isolate-node

# QuickJS WASM — no native deps, works in browsers and edge runtimes
pnpm add @tanstack/ai-isolate-quickjs

# Cloudflare Workers — run on the edge
pnpm add @tanstack/ai-isolate-cloudflare
```
Define your tools with `toolDefinition()` and provide a server-side implementation with `.server()`. These become the `external_*` functions available inside the sandbox.
```typescript
import { toolDefinition } from "@tanstack/ai";
import { z } from "zod";

const fetchWeather = toolDefinition({
  name: "fetchWeather",
  description: "Get current weather for a city",
  inputSchema: z.object({ location: z.string() }),
  outputSchema: z.object({
    temperature: z.number(),
    condition: z.string(),
  }),
}).server(async ({ location }) => {
  const res = await fetch(
    `https://api.weather.example/v1?city=${encodeURIComponent(location)}`
  );
  return res.json();
});
```
```typescript
import { createCodeMode } from "@tanstack/ai-code-mode";
import { createNodeIsolateDriver } from "@tanstack/ai-isolate-node";

const { tool, systemPrompt } = createCodeMode({
  driver: createNodeIsolateDriver(),
  tools: [fetchWeather],
  timeout: 30_000,
});
```
```typescript
import { chat } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

const result = await chat({
  adapter: openaiText(),
  model: "gpt-4o",
  systemPrompts: [
    "You are a helpful weather assistant.",
    systemPrompt,
  ],
  tools: [tool],
  messages: [
    {
      role: "user",
      content: "Compare the weather in Tokyo, Paris, and New York City",
    },
  ],
});
```
The model will generate something like:
```typescript
const cities = ["Tokyo", "Paris", "New York City"];

const results = await Promise.all(
  cities.map((city) => external_fetchWeather({ location: city }))
);

const warmest = results.reduce((prev, curr) =>
  curr.temperature > prev.temperature ? curr : prev
);

return {
  comparison: results.map((r, i) => ({
    city: cities[i],
    temperature: r.temperature,
    condition: r.condition,
  })),
  warmest: cities[results.indexOf(warmest)],
};
```
All three API calls happen in parallel inside the sandbox. The model receives one structured result instead of three separate tool-call round-trips.
Creates both the `execute_typescript` tool and its matching system prompt from a single config object. This is the recommended entry point.
```typescript
const { tool, systemPrompt } = createCodeMode({
  driver,           // IsolateDriver — required
  tools,            // Array<ServerTool | ToolDefinition> — required, at least one
  timeout,          // number — execution timeout in ms (default: 30000)
  memoryLimit,      // number — memory limit in MB (default: 128, Node + QuickJS drivers)
  getSkillBindings, // () => Promise<Record<string, ToolBinding>> — optional dynamic bindings
});
```
Config properties:
| Property | Type | Description |
|---|---|---|
| `driver` | `IsolateDriver` | The sandbox runtime to execute code in |
| `tools` | `Array<ServerTool \| ToolDefinition>` | Tools exposed as `external_*` functions. Must have `.server()` implementations |
| `timeout` | `number` | Execution timeout in milliseconds (default: `30000`) |
| `memoryLimit` | `number` | Memory limit in MB (default: `128`). Supported by the Node and QuickJS drivers |
| `getSkillBindings` | `() => Promise<Record<string, ToolBinding>>` | Optional function returning additional bindings at execution time |
The tool returns a `CodeModeToolResult`:
```typescript
interface CodeModeToolResult {
  success: boolean;
  result?: unknown;     // Return value from the executed code
  logs?: Array<string>; // Captured console output
  error?: {
    message: string;
    name?: string;
    line?: number;
  };
}
```
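When wiring results into logging or a UI, a small narrowing helper can keep the success/error branches tidy. This is just a sketch on top of the interface above; the interface itself is restated so the example is self-contained, while the helper and its output format are our own.

```typescript
interface CodeModeToolResult {
  success: boolean;
  result?: unknown;
  logs?: Array<string>;
  error?: { message: string; name?: string; line?: number };
}

// Sketch: condense a result into a one-line status string for logs or a UI badge.
function summarizeResult(r: CodeModeToolResult): string {
  if (r.success) {
    const logCount = r.logs?.length ?? 0;
    return `ok (${logCount} log line${logCount === 1 ? "" : "s"})`;
  }
  // error.line points into the generated code, so surface it when present.
  const line = r.error?.line != null ? ` at line ${r.error.line}` : "";
  return `error${line}: ${r.error?.message ?? "unknown"}`;
}
```

For example, `summarizeResult({ success: false, error: { message: "boom", line: 3 } })` yields `"error at line 3: boom"`.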
Lower-level functions if you need only the tool or only the prompt. `createCodeMode` calls both internally.
```typescript
import {
  createCodeModeTool,
  createCodeModeSystemPrompt,
} from "@tanstack/ai-code-mode";

const tool = createCodeModeTool(config);
const prompt = createCodeModeSystemPrompt(config);
```
The interface that sandbox runtimes implement. You do not implement this yourself — pick one of the provided drivers:
```typescript
interface IsolateDriver {
  createContext(config: IsolateConfig): Promise<IsolateContext>;
}
```
Available drivers:
| Package | Factory function | Environment |
|---|---|---|
| `@tanstack/ai-isolate-node` | `createNodeIsolateDriver()` | Node.js |
| `@tanstack/ai-isolate-quickjs` | `createQuickJSIsolateDriver()` | Node.js, browser, edge |
| `@tanstack/ai-isolate-cloudflare` | `createCloudflareIsolateDriver()` | Cloudflare Workers |
For full configuration options for each driver, see Isolate Drivers.
These utilities are used internally and are exported for custom pipelines.
For a full comparison of drivers with all configuration options, see Isolate Drivers.
In brief: use the Node driver for server-side Node.js (fastest, V8 JIT), QuickJS for browsers or portable edge deployments (no native deps), and the Cloudflare driver when you deploy to Cloudflare Workers.
Code Mode emits custom events during execution that you can observe through the TanStack AI event system. These are useful for building UIs that show execution progress, debugging, or logging.
| Event | When | Payload |
|---|---|---|
| `code_mode:execution_started` | Code execution begins | `{ timestamp, codeLength }` |
| `code_mode:console` | Each `console.log`/`error`/`warn`/`info` call | `{ level, message, timestamp }` |
| `code_mode:external_call` | Before an `external_*` function runs | `{ function, args, timestamp }` |
| `code_mode:external_result` | After a successful `external_*` call | `{ function, result, duration }` |
| `code_mode:external_error` | When an `external_*` call fails | `{ function, error, duration }` |
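A handler for these events might look like the sketch below. The payload type follows the `code_mode:console` row of the table; how you actually subscribe depends on your TanStack AI event-system setup, so the formatter is shown standalone rather than wired to a real subscription.

```typescript
// Payload of code_mode:console events, per the table above
// (level values assumed to mirror the console methods listed there).
type ConsoleEventPayload = {
  level: "log" | "error" | "warn" | "info";
  message: string;
  timestamp: number; // assumed to be a Unix epoch in milliseconds
};

// Sketch: format a console event for a progress log or debug panel.
function formatConsoleEvent(e: ConsoleEventPayload): string {
  const time = new Date(e.timestamp).toISOString();
  return `${time} [${e.level}] ${e.message}`;
}
```

A `code_mode:console` event carrying `{ level: "warn", message: "slow call", timestamp: 0 }` would render as `1970-01-01T00:00:00.000Z [warn] slow call`.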
To display these events in your React app, see Showing Code Mode in the UI.