Commits
38 commits
72c117d
refactor: migrate ai-groq + ai-openrouter onto @tanstack/openai-base …
tombeckenham May 11, 2026
1c8e1f4
ci: apply automated fixes
autofix-ci[bot] May 11, 2026
b320df5
fix(openai-base, ai-openrouter, ai): silent failures in chat-completi…
tombeckenham May 11, 2026
d9a74c4
feat(ai-openrouter, openai-base): OpenRouter Responses (beta) adapter
tombeckenham May 12, 2026
d741f2f
chore(ai-groq): remove dead unused message-param types
tombeckenham May 12, 2026
f66f82f
fix(ai-openrouter): pass UNKNOWN-fallback events through verbatim
tombeckenham May 12, 2026
d5e492d
refactor(adapters): remove asChunk casts, enforce satisfies StreamChunk
tombeckenham May 12, 2026
50214f7
fix(ai-openrouter): preserve assistant/tool message content fidelity
AlemTuzlak May 12, 2026
e8cce25
fix(ai-groq): correct ChatCompletionNamedToolChoice shape
AlemTuzlak May 12, 2026
ba9936e
test(ai-groq): reset pendingMockCreate between tests
AlemTuzlak May 12, 2026
74cbd77
test(e2e): route OpenRouter summarize through createOpenRouterSummarize
AlemTuzlak May 12, 2026
0ecfd3b
chore(ai-openrouter): declare zod as peer dependency
AlemTuzlak May 12, 2026
5eb7aa7
fix(ai-groq): drop spurious timestamp field from processStreamChunks …
AlemTuzlak May 12, 2026
a773bd5
fix(ai-openrouter): stringify error.code on response.failed events
AlemTuzlak May 12, 2026
993df3e
fix(ai-openrouter): default image data URI mime type to octet-stream
AlemTuzlak May 12, 2026
6a9ce76
fix(openai-base): stop processing chunks after top-level error event
AlemTuzlak May 12, 2026
06dd544
fix(openai-base, ai-openrouter): route Responses structuredOutput thr…
AlemTuzlak May 12, 2026
39c927b
fix(ai-openrouter): extract text from array-shaped tool message content
AlemTuzlak May 12, 2026
335adaf
chore(ai-groq): declare @tanstack/ai as workspace devDependency
AlemTuzlak May 12, 2026
7bb2b82
fix(ai-openrouter): route audio URLs to text fallback on chat-complet…
AlemTuzlak May 12, 2026
272fe5f
docs(ai-groq): correct message-types header — Groq SDK was dropped
AlemTuzlak May 12, 2026
2bc993c
fix(ai-openrouter): reject inline document data on chat-completions
AlemTuzlak May 12, 2026
9d6a1e8
refactor: rename @tanstack/openai-base → @tanstack/openai-compatible
tombeckenham May 13, 2026
091daa6
ci: apply automated fixes
autofix-ci[bot] May 13, 2026
e90c8d9
refactor: rename @tanstack/openai-compatible → @tanstack/ai-openai-co…
tombeckenham May 13, 2026
813e296
docs(ai-openai-compatible, ai-openrouter): explain the protocol-vs-pr…
tombeckenham May 13, 2026
c4070ae
ci: apply automated fixes
autofix-ci[bot] May 13, 2026
72c8aea
docs(adapters/openrouter): add Chat Completions vs Responses (beta) s…
tombeckenham May 13, 2026
dad9e55
refactor(ai-openai-compatible): narrow to chat/responses; decouple fr…
tombeckenham May 13, 2026
62aad90
ci: apply automated fixes
autofix-ci[bot] May 13, 2026
ebd6244
refactor(ai): rename chat-stream-wrapper to chat-stream-summarize
tombeckenham May 13, 2026
e0dcb77
refactor(summarize): unify provider summarize adapters on chat-stream…
tombeckenham May 13, 2026
0db4c12
ci: apply automated fixes
autofix-ci[bot] May 13, 2026
a39e2bc
refactor(ai-openai-compatible): vendor wire types; drop openai dep
tombeckenham May 13, 2026
71cf0f4
ci: apply automated fixes
autofix-ci[bot] May 13, 2026
7aff8b1
refactor(openai-base): rename, adopt openai SDK, decouple ai-openrouter
tombeckenham May 13, 2026
b566897
ci: apply automated fixes
autofix-ci[bot] May 13, 2026
20e8397
Corrected package versions
tombeckenham May 13, 2026
35 changes: 35 additions & 0 deletions .changeset/decouple-openrouter-collapse-openai-base.md
@@ -0,0 +1,35 @@
---
'@tanstack/openai-base': minor
'@tanstack/ai-openai': patch
'@tanstack/ai-grok': patch
'@tanstack/ai-groq': patch
'@tanstack/ai-openrouter': patch
---

Decouple `@tanstack/ai-openrouter` from the shared OpenAI base, and collapse the base into a thinner shim over the `openai` SDK.

Three changes that ship together:

**1. Rename `@tanstack/ai-openai-compatible` → `@tanstack/openai-base`.** The previous name implied a multi-vendor protocol surface. After ai-openrouter is decoupled (see below), the only remaining consumers (`ai-openai`, `ai-grok`, `ai-groq`) all back onto the `openai` SDK with a different `baseURL` — "base" describes that role accurately. Imports change:

```diff
- import { OpenAICompatibleChatCompletionsTextAdapter } from '@tanstack/ai-openai-compatible'
+ import { OpenAIBaseChatCompletionsTextAdapter } from '@tanstack/openai-base'
- import { OpenAICompatibleResponsesTextAdapter } from '@tanstack/ai-openai-compatible'
+ import { OpenAIBaseResponsesTextAdapter } from '@tanstack/openai-base'
```

`@tanstack/ai-openai-compatible@0.2.x` remains published for anyone with a pinned lockfile reference but will receive no further updates.

**2. `@tanstack/openai-base` adopts the `openai` SDK directly.** The previous package vendored ~720 LOC of hand-written wire-format types (`ChatCompletion`, `ResponseStreamEvent`, etc.) and exposed abstract `callChatCompletion*` / `callResponse*` hooks subclasses had to implement. Both are gone:

- The base now depends on `openai` again and imports types directly from `openai/resources/...`. The vendored `src/types/` directory is removed; consumers that imported wire types from the package (e.g. `import type { ResponseInput } from '@tanstack/ai-openai-compatible'`) should now import from the openai SDK.
- The abstract SDK-call methods are removed. The base constructor takes a pre-built `OpenAI` client (`new OpenAIBaseChatCompletionsTextAdapter(model, name, openaiClient)`) and calls `client.chat.completions.create` / `client.responses.create` itself. Subclasses (`ai-openai`, `ai-grok`, `ai-groq`) now just construct the SDK with their provider-specific `baseURL` and pass it to `super` — `callChatCompletion*` / `callResponse*` overrides go away.
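
The constructor-injection shape described above can be sketched in a few lines. This is illustrative, not the real base: `FakeOpenAI` stands in for the `openai` SDK client, and the class bodies are reduced to the wiring the changeset describes.

```typescript
// Stand-in for the `openai` SDK client — only the constructor shape matters here.
class FakeOpenAI {
  constructor(readonly opts: { apiKey: string; baseURL?: string }) {}
}

// The base no longer declares abstract callChatCompletion* hooks;
// it receives a pre-built client and owns all SDK calls itself.
class OpenAIBaseChatCompletionsTextAdapter {
  constructor(
    readonly model: string,
    readonly name: string,
    readonly client: FakeOpenAI,
  ) {}
}

// A provider subclass just builds the SDK with its provider-specific
// baseURL and passes it to super — no call* overrides remain.
class GroqTextAdapter extends OpenAIBaseChatCompletionsTextAdapter {
  constructor(model: string, apiKey: string) {
    super(
      model,
      'groq',
      new FakeOpenAI({ apiKey, baseURL: 'https://api.groq.com/openai/v1' }),
    )
  }
}
```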

The other extension hooks (`extractReasoning`, `extractTextFromResponse`, `processStreamChunks`, `makeStructuredOutputCompatible`, `transformStructuredOutput`, `mapOptionsToRequest`, `convertMessage`) remain. Groq's `processStreamChunks` and `makeStructuredOutputCompatible` overrides (for `x_groq.usage` promotion and Groq's structured-output schema quirks) are unchanged.
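
The hook-override mechanism can be sketched as follows — the class names and the exact chunk shape are simplified stand-ins, but the `x_groq.usage` field is the one the changeset names: Groq reports usage in a vendor extension slot, and the override promotes it to the standard position.

```typescript
// Simplified chunk shape: standard `usage` slot plus Groq's vendor extension.
interface WireChunk {
  choices: Array<{ delta: { content?: string } }>
  usage?: { prompt_tokens: number; completion_tokens: number }
  x_groq?: { usage: { prompt_tokens: number; completion_tokens: number } }
}

class BaseAdapterSketch {
  // Default hook: pass chunks through untouched.
  processStreamChunks(chunks: Array<WireChunk>): Array<WireChunk> {
    return chunks
  }
}

class GroqAdapterSketch extends BaseAdapterSketch {
  // Promote `x_groq.usage` into the standard `usage` slot so downstream
  // accounting sees it without knowing about the vendor extension.
  override processStreamChunks(chunks: Array<WireChunk>): Array<WireChunk> {
    return chunks.map((c) =>
      c.x_groq?.usage ? { ...c, usage: c.x_groq.usage } : c,
    )
  }
}
```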

**3. Decouple `@tanstack/ai-openrouter` from the OpenAI base entirely.** OpenRouter ships its own SDK (`@openrouter/sdk`) with a camelCase shape, so inheriting from the OpenAI-shaped base forced a snake_case ↔ camelCase round-trip on every request and stream event. ai-openrouter now extends `BaseTextAdapter` directly and inlines its own stream processors (`OpenRouterTextAdapter` for chat-completions, `OpenRouterResponsesTextAdapter` for the Responses beta), reading OpenRouter's camelCase types natively. The `@tanstack/openai-base` and `openai` dependencies are removed from ai-openrouter; only `@openrouter/sdk`, `@tanstack/ai`, and `@tanstack/ai-utils` remain.
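
To make the removed round-trip concrete, here is a minimal sketch of the kind of per-key conversion each request and stream event used to pay (the converter names above are from the changeset; this implementation is illustrative, not the original code):

```typescript
// Convert one level of snake_case keys to camelCase —
// the inbound half of the round-trip the decoupling removes.
function snakeToCamel(obj: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(obj)) {
    const camel = key.replace(/_([a-z])/g, (_, ch: string) => ch.toUpperCase())
    out[camel] = value
  }
  return out
}

snakeToCamel({ finish_reason: 'stop', prompt_tokens: 12 })
// → { finishReason: 'stop', promptTokens: 12 }
```

Reading `@openrouter/sdk`'s camelCase types natively removes both hops — outbound on every request and inbound on every stream event.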

Public API is unchanged: `openRouterText`, `openRouterResponsesText`, `createOpenRouterText`, `createOpenRouterResponsesText`, the OpenRouter tool factories, provider routing surface (`provider`, `models`, `plugins`, `variant`, `transforms`), app attribution headers (`httpReferer`, `appTitle`), `:variant` model suffixing, `RequestAbortedError` propagation, and the OpenRouter-specific structured-output null-preservation all behave the same. The ~300 LOC of inbound/outbound shape converters (`toOpenRouterRequest`, `toChatCompletion`, `adaptOpenRouterStreamChunks`, `toSnakeResponseResult`, …) are gone.

`ai-ollama` remains on `BaseTextAdapter` directly — its native API uses a different wire format from Chat Completions and was never on the shared base.
63 changes: 49 additions & 14 deletions docs/adapters/openrouter.md
@@ -35,16 +35,17 @@ const stream = chat({
## Configuration

```typescript
import { createOpenRouter, type OpenRouterConfig } from "@tanstack/ai-openrouter";

const config: OpenRouterConfig = {
apiKey: process.env.OPENROUTER_API_KEY!,
baseURL: "https://openrouter.ai/api/v1", // Optional
httpReferer: "https://your-app.com", // Optional, for rankings
xTitle: "Your App Name", // Optional, for rankings
};

const adapter = createOpenRouter(config.apiKey, config);
import { createOpenRouterText } from "@tanstack/ai-openrouter";

const adapter = createOpenRouterText(
"openai/gpt-5",
process.env.OPENROUTER_API_KEY!,
{
serverURL: "https://openrouter.ai/api/v1", // Optional
httpReferer: "https://your-app.com", // Optional, for rankings
appTitle: "Your App Name", // Optional, for rankings
},
);
```

## Available Models
@@ -122,18 +123,52 @@ OpenRouter can automatically route requests to the best available provider:
```typescript
const stream = chat({
adapter: openRouterText("openrouter/auto"),
messages,
providerOptions: {
messages,
modelOptions: {
models: [
"openai/gpt-4o",
"anthropic/claude-3.5-sonnet",
"google/gemini-pro",
],
route: "fallback", // Use fallback if primary fails
},
});
```


## Chat Completions vs Responses (beta)

OpenRouter exposes two OpenAI-compatible wire formats, and the adapter
package ships one of each:

| Adapter | Endpoint | Status | When to use |
| -------------------------- | ------------------------- | -------- | ---------------------------------------------------------------------------- |
| `openRouterText` | `/v1/chat/completions` | Stable | Default for almost everything. Broadest model + tool support. |
| `openRouterResponsesText` | `/v1/responses` | Beta | OpenAI Responses-shaped request/response; richer multi-turn state on OpenAI-style models. |

Both adapters route to any underlying model OpenRouter supports
(`anthropic/...`, `google/...`, `meta-llama/...`, etc.) — the wire format
describes how your client talks to OpenRouter, not which provider answers.
`/v1/responses` is OpenAI's newer API surface; OpenRouter implements it so
clients that prefer that wire format can use it across the same 300+
model catalogue.

```typescript
import { chat } from "@tanstack/ai";
import { openRouterResponsesText } from "@tanstack/ai-openrouter";

const stream = chat({
adapter: openRouterResponsesText("anthropic/claude-sonnet-4.5"),
messages: [{ role: "user", content: "Hello!" }],
});
```

Caveats while the Responses adapter is in beta:

- Function tools are supported; OpenRouter's branded server-tools (web
search, file search, …) are not yet wired through this path — use
`openRouterText` if you need those.
- If in doubt, prefer `openRouterText`. The Chat Completions endpoint has
broader provider coverage and feature parity today.

## Next Steps

- [Getting Started](../getting-started/quick-start) - Learn the basics
239 changes: 28 additions & 211 deletions packages/typescript/ai-anthropic/src/adapters/summarize.ts
@@ -1,237 +1,54 @@
import { BaseSummarizeAdapter } from '@tanstack/ai/adapters'
import {
createAnthropicClient,
generateId,
getAnthropicApiKeyFromEnv,
} from '../utils'
import { ChatStreamSummarizeAdapter } from '@tanstack/ai/adapters'
import { getAnthropicApiKeyFromEnv } from '../utils'
import { AnthropicTextAdapter } from './text'
import type { InferTextProviderOptions } from '@tanstack/ai/adapters'
import type { ANTHROPIC_MODELS } from '../model-meta'
import type {
StreamChunk,
SummarizationOptions,
SummarizationResult,
} from '@tanstack/ai'
import type { AnthropicClientConfig } from '../utils'

/** Cast an event object to StreamChunk. */
const asChunk = (chunk: Record<string, unknown>) =>
chunk as unknown as StreamChunk

/**
* Configuration for Anthropic summarize adapter
*/
export interface AnthropicSummarizeConfig extends AnthropicClientConfig {}

/**
* Anthropic-specific provider options for summarization
*/
export interface AnthropicSummarizeProviderOptions {
/** Temperature for response generation (0-1) */
temperature?: number
/** Maximum tokens in the response */
maxTokens?: number
}

/** Model type for Anthropic summarization */
export type AnthropicSummarizeModel = (typeof ANTHROPIC_MODELS)[number]

/**
* Anthropic Summarize Adapter
*
* Tree-shakeable adapter for Anthropic summarization functionality.
* Import only what you need for smaller bundle sizes.
*/
export class AnthropicSummarizeAdapter<
TModel extends AnthropicSummarizeModel,
> extends BaseSummarizeAdapter<TModel, AnthropicSummarizeProviderOptions> {
readonly kind = 'summarize' as const
readonly name = 'anthropic' as const

private client: ReturnType<typeof createAnthropicClient>

constructor(config: AnthropicSummarizeConfig, model: TModel) {
super({}, model)
this.client = createAnthropicClient(config)
}

async summarize(options: SummarizationOptions): Promise<SummarizationResult> {
const { logger } = options
const systemPrompt = this.buildSummarizationPrompt(options)

logger.request(`activity=summarize provider=anthropic`, {
provider: 'anthropic',
model: options.model,
})

try {
const response = await this.client.messages.create({
model: options.model,
messages: [{ role: 'user', content: options.text }],
system: systemPrompt,
max_tokens: options.maxLength || 500,
temperature: 0.3,
stream: false,
})

const content = response.content
.map((c) => (c.type === 'text' ? c.text : ''))
.join('')

return {
id: response.id,
model: response.model,
summary: content,
usage: {
promptTokens: response.usage.input_tokens,
completionTokens: response.usage.output_tokens,
totalTokens:
response.usage.input_tokens + response.usage.output_tokens,
},
}
} catch (error) {
logger.errors('anthropic.summarize fatal', {
error,
source: 'anthropic.summarize',
})
throw error
}
}

async *summarizeStream(
options: SummarizationOptions,
): AsyncIterable<StreamChunk> {
const { logger } = options
const systemPrompt = this.buildSummarizationPrompt(options)
const id = generateId(this.name)
const model = options.model
let accumulatedContent = ''
let inputTokens = 0
let outputTokens = 0

logger.request(`activity=summarize provider=anthropic`, {
provider: 'anthropic',
model,
stream: true,
})

try {
const stream = await this.client.messages.create({
model: options.model,
messages: [{ role: 'user', content: options.text }],
system: systemPrompt,
max_tokens: options.maxLength || 500,
temperature: 0.3,
stream: true,
})

for await (const event of stream) {
logger.provider(`provider=anthropic type=${event.type}`, {
chunk: event,
})

if (event.type === 'message_start') {
inputTokens = event.message.usage.input_tokens
} else if (event.type === 'content_block_delta') {
if (event.delta.type === 'text_delta') {
const delta = event.delta.text
accumulatedContent += delta
yield asChunk({
type: 'TEXT_MESSAGE_CONTENT',
messageId: id,
model,
timestamp: Date.now(),
delta,
content: accumulatedContent,
})
}
} else if (event.type === 'message_delta') {
outputTokens = event.usage.output_tokens
yield asChunk({
type: 'RUN_FINISHED',
runId: id,
model,
timestamp: Date.now(),
finishReason: event.delta.stop_reason as
| 'stop'
| 'length'
| 'content_filter'
| null,
usage: {
promptTokens: inputTokens,
completionTokens: outputTokens,
totalTokens: inputTokens + outputTokens,
},
})
}
}
} catch (error) {
logger.errors('anthropic.summarize fatal', {
error,
source: 'anthropic.summarize',
})
throw error
}
}

private buildSummarizationPrompt(options: SummarizationOptions): string {
let prompt = 'You are a professional summarizer. '

switch (options.style) {
case 'bullet-points':
prompt += 'Provide a summary in bullet point format. '
break
case 'paragraph':
prompt += 'Provide a summary in paragraph format. '
break
case 'concise':
prompt += 'Provide a very concise summary in 1-2 sentences. '
break
default:
prompt += 'Provide a clear and concise summary. '
}

if (options.focus && options.focus.length > 0) {
prompt += `Focus on the following aspects: ${options.focus.join(', ')}. `
}

if (options.maxLength) {
prompt += `Keep the summary under ${options.maxLength} tokens. `
}

return prompt
}
}

/**
* Creates an Anthropic summarize adapter with explicit API key.
* Type resolution happens here at the call site.
*
* @param model - The model name (e.g., 'claude-sonnet-4-5', 'claude-3-5-haiku-latest')
* @param apiKey - Your Anthropic API key
* @param config - Optional additional configuration
* @returns Configured Anthropic summarize adapter instance with resolved types
* @example
* ```typescript
* const adapter = createAnthropicSummarize('claude-sonnet-4-5', 'sk-ant-...');
* ```
*/
export function createAnthropicSummarize<
TModel extends AnthropicSummarizeModel,
>(
model: TModel,
apiKey: string,
config?: Omit<AnthropicSummarizeConfig, 'apiKey'>,
): AnthropicSummarizeAdapter<TModel> {
return new AnthropicSummarizeAdapter({ apiKey, ...config }, model)
): ChatStreamSummarizeAdapter<
TModel,
InferTextProviderOptions<AnthropicTextAdapter<TModel>>
> {
return new ChatStreamSummarizeAdapter(
new AnthropicTextAdapter({ apiKey, ...config }, model),
model,
'anthropic',
)
}

/**
* Creates an Anthropic summarize adapter with automatic API key detection.
* Type resolution happens here at the call site.
* Creates an Anthropic summarize adapter with API key from `ANTHROPIC_API_KEY`.
*
* @param model - The model name (e.g., 'claude-sonnet-4-5', 'claude-3-5-haiku-latest')
* @param config - Optional configuration (excluding apiKey which is auto-detected)
* @returns Configured Anthropic summarize adapter instance with resolved types
* @example
* ```typescript
* const adapter = anthropicSummarize('claude-sonnet-4-5');
* await summarize({ adapter, text: 'Long article text...' });
* ```
*/
export function anthropicSummarize<TModel extends AnthropicSummarizeModel>(
model: TModel,
config?: Omit<AnthropicSummarizeConfig, 'apiKey'>,
): AnthropicSummarizeAdapter<TModel> {
const apiKey = getAnthropicApiKeyFromEnv()
return createAnthropicSummarize(model, apiKey, config)
): ChatStreamSummarizeAdapter<
TModel,
InferTextProviderOptions<AnthropicTextAdapter<TModel>>
> {
return createAnthropicSummarize(model, getAnthropicApiKeyFromEnv(), config)
}
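
The delegation shape this diff introduces — a summarize adapter that owns no provider client, only a text adapter plus a summarization prompt — can be sketched with simplified stand-in types (not the real `@tanstack/ai` interfaces):

```typescript
// Minimal stand-in for a text adapter: anything that can run a chat turn.
interface TextAdapterSketch {
  chat(messages: Array<{ role: string; content: string }>): Promise<string>
}

// Summarization becomes prompt-building plus delegation; all provider-specific
// request/stream handling lives in the wrapped text adapter.
class ChatStreamSummarizeSketch {
  constructor(private readonly textAdapter: TextAdapterSketch) {}

  async summarize(
    text: string,
    style?: 'bullet-points' | 'paragraph' | 'concise',
  ): Promise<string> {
    const system =
      'You are a professional summarizer. ' +
      (style === 'bullet-points'
        ? 'Provide a summary in bullet point format.'
        : style === 'concise'
          ? 'Provide a very concise summary in 1-2 sentences.'
          : 'Provide a clear and concise summary.')
    return this.textAdapter.chat([
      { role: 'system', content: system },
      { role: 'user', content: text },
    ])
  }
}
```

This is why the Anthropic diff above can delete its hand-rolled client, stream loop, and prompt builder: one shared wrapper reuses whatever the provider's text adapter already does.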