by Alem Tuzlak, Jack Herrington, and Tanner Linsley on Dec 19, 2025.

It's been two weeks since we released the first alpha of TanStack AI. To us, it feels like decades ago. We've prototyped five or six different internal architectures to bring you the best experience possible.
Our goals were simple: move away from monolithic adapters and their complexity, while expanding the flexibility and power of our public APIs. This release delivers on both.
We wanted to support everything AI providers offer—image generation, video, audio, text-to-speech, transcription—without updating every adapter simultaneously.
We're a small team. Adding image support shouldn't mean extending BaseAdapter, updating 5+ provider implementations, ensuring per-model type safety for each, and combing through docs manually. That's a week per provider. Multiply that by 20 providers and 6 modalities.
So we split the monolith.
Instead of:
import { openai } from '@tanstack/ai-openai'
You now have:
import { openaiText, openaiImage, openaiVideo } from '@tanstack/ai-openai'
Incremental feature support. Add image generation to OpenAI this week, Gemini next week, video for a third provider the week after. Smaller releases, same pace.
Easier maintenance. Our adapter abstraction had grown to 7 type generics with only text, summarization, and embeddings. Adding 6 more modalities would have exploded complexity. Now each adapter is focused—3 generics max.
Better bundle size. You control what you pull in. Want only text? Import openaiText. Want text and images? Import both. Your bundle, your choice.
Faster contributions. Add support for your favorite provider with a few hundred lines. We can review and merge it quickly.
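To give a sense of the scale involved, here's a minimal sketch of what a focused, single-modality adapter could look like. Every name below is hypothetical and only illustrates the shape; it is not the actual TanStack AI interface.

// Hypothetical sketch: illustrative names, not the real TanStack AI API.
// A focused adapter only needs its model union, its per-model options,
// and a way to stream text.
interface Message {
  role: 'user' | 'assistant'
  content: string
}

interface TextAdapter<TModel extends string, TOptions> {
  model: TModel
  options?: TOptions
  stream(messages: Message[]): AsyncIterable<string>
}

type AcmeModel = 'acme-small' | 'acme-large'

interface AcmeTextOptions {
  temperature?: number
}

export function acmeText(
  model: AcmeModel,
  options?: AcmeTextOptions,
): TextAdapter<AcmeModel, AcmeTextOptions> {
  return {
    model,
    options,
    async *stream(messages) {
      // A real adapter would call the provider's HTTP API here
      // and yield text chunks as they arrive.
      yield `echo: ${messages.at(-1)?.content ?? ''}`
    },
  }
}

All of the provider-specific logic lives in one small factory, which is what keeps contributions reviewable.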
What do we support now?
You have a use case with AI? We support it.
We made breaking changes. Here's what and why.
Before:
chat({
  adapter: openai(),
  model: 'gpt-4',
  // now you get typesafety...
})
After:
chat({
  adapter: openaiText('gpt-4'),
  // immediately get typesafety
})
Fewer steps to autocomplete. No more type errors from forgetting to define the model.
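For example, assuming the factory types its model parameter as a string-literal union (the made-up 'gpt-4-mega' below just illustrates the failure mode):

import { openaiText } from '@tanstack/ai-openai'

// Known model names autocomplete at the call site.
const adapter = openaiText('gpt-4')

// A typo or made-up model name fails at compile time, not at runtime.
// @ts-expect-error -- 'gpt-4-mega' is not a known model
const broken = openaiText('gpt-4-mega')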
A quick terminology change: the old providerOptions were tied to the model, not the provider. Switching from gpt-4 to gpt-3.5-turbo changes which options are valid. So we renamed them:
chat({
  adapter: openaiText('gpt-4'),
  modelOptions: {
    text: {},
  },
})
Settings like temperature work the same across providers, so they don't belong in per-model options. Our other modalities already put that kind of config at the root:
generateImage({
  adapter,
  numberOfImages: 3,
})
So we brought chat in line:
chat({
  adapter: openaiText('gpt-4'),
  modelOptions: {
    text: {},
  },
  temperature: 0.6,
})
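Because temperature now lives at the root, swapping providers doesn't move it around. Here's a sketch assuming a Gemini text adapter with the same factory shape; the package and model names are illustrative:

import { geminiText } from '@tanstack/ai-gemini' // assumed package name

chat({
  adapter: geminiText('gemini-1.5-pro'), // illustrative model name
  modelOptions: {
    text: {},
  },
  // Root-level settings carry over between providers unchanged.
  temperature: 0.6,
})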
Start typing to see what's available. Put together, here's the full migration at a glance:
chat({
-  adapter: openai(),
+  adapter: openaiText("gpt-4"),
-  model: "gpt-4",
-  providerOptions: {
+  modelOptions: {
     text: {}
   },
-  options: {
-    temperature: 0.6
-  },
+  temperature: 0.6
})
Standard Schema support. We're dropping the Zod constraint for tools and structured outputs. Bring your own schema validation library.
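For instance, any library that implements the Standard Schema spec should slot in where Zod used to be required. A sketch using valibot; the tool shape below is illustrative, not the exact API:

import * as v from 'valibot'

// valibot implements Standard Schema, so its schemas work anywhere a
// Standard Schema validator is accepted.
const weatherInput = v.object({
  city: v.string(),
  unit: v.optional(v.picklist(['celsius', 'fahrenheit'])),
})

// Illustrative tool definition; the real helper may differ.
const getWeather = {
  name: 'get_weather',
  description: 'Look up the current weather for a city',
  input: weatherInput,
}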
There's more on the roadmap, and community contributions are welcome.
We've shipped a major architectural overhaul, new modalities across the board, and a cleaner API. The adapters are easy to make, easy to maintain, and easy to reason about. Your bundle stays minimal.
We're confident in this direction. We think you'll like it too.
