The otelMiddleware factory wires TanStack AI into your existing OpenTelemetry setup. Every chat() call produces a root span, one child span per agent-loop iteration, and one grandchild span per tool call — all with GenAI semantic-convention attributes. It also records GenAI token and duration histograms when a Meter is provided.
Install `@opentelemetry/api` — it's an optional peer dependency of `@tanstack/ai`:

```sh
pnpm add @opentelemetry/api
```

Wire up your OTel SDK however you already do (e.g. `@opentelemetry/sdk-node`). Then pass a `Tracer` (and optionally a `Meter`) into the middleware. The OTel middleware lives on its own subpath — importing it never affects users who don't need OTel:
```ts
import { chat } from '@tanstack/ai'
import { otelMiddleware } from '@tanstack/ai/middlewares/otel'
import { openaiText } from '@tanstack/ai-openai/adapters'
import { trace, metrics } from '@opentelemetry/api'

const otel = otelMiddleware({
  tracer: trace.getTracer('my-app'),
  meter: metrics.getMeter('my-app'),
})

const result = await chat({
  adapter: openaiText('gpt-4o'),
  messages: [{ role: 'user', content: 'hi' }],
  middleware: [otel],
  stream: false,
})
```

A chat that runs two agent-loop iterations, with two tool calls in the first, produces a span tree like this:

```
chat gpt-4o (root, kind: INTERNAL)
├── chat gpt-4o #0 (iteration, kind: CLIENT)
│   ├── execute_tool get_weather
│   └── execute_tool get_time
└── chat gpt-4o #1 (iteration, kind: CLIENT)
```

Iteration spans are numbered (#0, #1, ...) so distinct iterations of the same chat are easy to pick apart in trace viewers.
| Level | Attribute | Value |
|---|---|---|
| root / iteration | gen_ai.system | openai, anthropic, ... |
| iteration | gen_ai.operation.name | chat |
| root / iteration | gen_ai.request.model | requested model |
| iteration | gen_ai.response.model | actual model |
| iteration | gen_ai.request.temperature | from config |
| iteration | gen_ai.request.top_p | from config |
| iteration | gen_ai.request.max_tokens | from config |
| iteration | gen_ai.usage.input_tokens | per iteration |
| iteration | gen_ai.usage.output_tokens | per iteration |
| iteration | gen_ai.response.finish_reasons | [stop], [tool_calls], ... |
| root | gen_ai.usage.input_tokens | rolled up |
| root | gen_ai.usage.output_tokens | rolled up |
| root | tanstack.ai.iterations | iteration count |
| tool | gen_ai.tool.name | tool name |
| tool | gen_ai.tool.call.id | tool call id |
| tool | gen_ai.tool.type | function |
| tool | tanstack.ai.tool.outcome | success / error |
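The "rolled up" rows above suggest the root span's usage is a simple sum over iterations. A sketch of that arithmetic, with invented values — this is an illustration of the roll-up described in the table, not the middleware's code:

```typescript
// Illustrative only: assumes the root "rolled up" usage is the sum of
// per-iteration usage. The token counts here are invented.
const iterations = [
  { input_tokens: 120, output_tokens: 45 }, // iteration #0
  { input_tokens: 180, output_tokens: 12 }, // iteration #1
]

const rootUsage = iterations.reduce(
  (acc, it) => ({
    input_tokens: acc.input_tokens + it.input_tokens,
    output_tokens: acc.output_tokens + it.output_tokens,
  }),
  { input_tokens: 0, output_tokens: 0 },
)
// → { input_tokens: 300, output_tokens: 57 }
```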
When a `meter` is provided, the middleware records two GenAI-standard histograms: `gen_ai.client.token.usage` and `gen_ai.client.operation.duration`.
Both gen_ai.response.id and gen_ai.response.model are deliberately excluded from metric attributes to keep cardinality low (per-request custom-model names and request IDs would blow up the series set).
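Mechanically, that exclusion amounts to filtering those keys out before the histogram is recorded. A hypothetical sketch — the helper name and attribute shape are invented for illustration:

```typescript
// Hypothetical sketch: drop high-cardinality keys before recording metrics.
const EXCLUDED = new Set(['gen_ai.response.id', 'gen_ai.response.model'])

function metricAttributes(
  spanAttrs: Record<string, string | number>,
): Record<string, string | number> {
  return Object.fromEntries(
    Object.entries(spanAttrs).filter(([key]) => !EXCLUDED.has(key)),
  )
}
```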
By default, only metadata lands on spans. To record prompt and completion content, set `captureContent: true`. Content is captured as OTel span events following the GenAI convention (prompt/system/user message events, plus a `gen_ai.choice` event for the completion).
Pass a redact function to strip PII before anything is recorded:
```ts
otelMiddleware({
  tracer,
  captureContent: true,
  redact: (text) => text.replace(/\b\d{3}-\d{2}-\d{4}\b/g, '[SSN]'),
})
```

If `redact` throws, the middleware writes the literal sentinel `"[redaction_failed]"` into the span event and logs a warning — it never falls back to the raw content. This is the load-bearing invariant for users who ship traces to third-party backends: a broken redactor should shut off capture, not leak prompts.
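That invariant is easy to picture as a fail-closed wrapper. This sketch is hypothetical, not the middleware's actual code:

```typescript
// Hypothetical fail-closed wrapper: a throwing redactor yields the
// sentinel, never the raw text.
function safeRedact(text: string, redact: (t: string) => string): string {
  try {
    return redact(text)
  } catch {
    console.warn('redact threw; recording sentinel instead of raw content')
    return '[redaction_failed]'
  }
}
```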
Accumulated assistant text (the gen_ai.choice event) is capped at maxContentLength characters (default 100 000); longer completions are truncated with a trailing "…" marker.
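The cap behaves like this minimal sketch — the `truncate` helper is hypothetical; only the 100 000 default and the trailing marker come from the paragraph above:

```typescript
const DEFAULT_MAX_CONTENT_LENGTH = 100_000

// Hypothetical helper mirroring the described truncation rule.
function truncate(text: string, max = DEFAULT_MAX_CONTENT_LENGTH): string {
  return text.length > max ? text.slice(0, max) + '…' : text
}

// truncate('abcdef', 4) → 'abcd…'
```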
Multimodal content (images, audio, video, documents) is represented as placeholder strings ([image], [audio], ...) to preserve message order without dumping binary data onto spans. Use onSpanEnd if you need richer multimodal capture.
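The placeholder substitution amounts to mapping non-text parts to their type name in brackets. A hypothetical sketch — the `Part` shape here is invented; the real message types live in `@tanstack/ai`:

```typescript
// Invented Part shape for illustration only.
type Part =
  | { type: 'text'; text: string }
  | { type: 'image' | 'audio' | 'video' | 'document' }

function partToString(part: Part): string {
  return part.type === 'text' ? part.text : `[${part.type}]`
}
```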
Prompt/system/user message events fire from onConfig at the start of every iteration, which means the full conversation history (as the adapter will re-send it) is re-emitted on each iteration span. This mirrors what the provider actually sees on the wire.
All four extension hooks are optional. Each wraps your callback in a try/catch — a thrown callback becomes a log line, never a broken chat.
Override default span names. `info.kind` is `'chat' | 'iteration' | 'tool'`.

```ts
otelMiddleware({
  tracer,
  spanNameFormatter: (info) =>
    info.kind === 'tool' ? `tool:${info.toolName}` : `chat:${info.ctx.model}`,
})
```

Add custom attributes to every span. Fires once per span.
```ts
otelMiddleware({
  tracer,
  attributeEnricher: () => ({
    'tenant.id': getCurrentTenant(),
  }),
})
```

Mutate `SpanOptions` immediately before `tracer.startSpan(...)`. Useful for adding links, custom start times, or extra default attributes.
Fires just before every span.end(). Common uses: record custom events, emit per-tool metrics via your own Meter.
```ts
const toolDuration = meter.createHistogram('tool.duration')

otelMiddleware({
  tracer,
  onSpanEnd: (info, span) => {
    if (info.kind === 'tool') {
      // span is still recording; read timestamps from your own store if needed
      toolDuration.record(1, { 'tool.name': info.toolName })
    }
  },
})
```