feat: dual ESM+CJS builds + toJSONResponse/fetchJSON for non-streaming runtimes #478
@@ -0,0 +1,9 @@

---
'@tanstack/ai': minor
'@tanstack/ai-client': minor
'@tanstack/ai-event-client': patch
---

**Dual ESM + CJS output.** `@tanstack/ai`, `@tanstack/ai-client`, and `@tanstack/ai-event-client` now ship both ESM and CJS builds with type-aware dual `exports` maps (`import` → `./dist/esm/*.js`, `require` → `./dist/cjs/*.cjs`), plus a `main` field pointing at CJS. Fixes Metro / Expo / CJS-only resolvers that previously couldn't find `@tanstack/ai/adapters` or `@tanstack/ai-client` because the packages were ESM-only (#308).

**New `toJSONResponse(stream, init?)` on `@tanstack/ai`.** Drains the chat stream fully and returns a JSON-array `Response` with `Content-Type: application/json`. Use on server runtimes that can't emit `ReadableStream` responses (Expo's `@expo/server`, some edge proxies). Pair with the new `fetchJSON(url, options?)` connection adapter on `@tanstack/ai-client` — it fetches the array and replays each chunk into the normal `ChatClient` pipeline. Trade-off: no incremental rendering (every chunk arrives at once when the request resolves). Closes #309.
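For resolvers like Metro, the wiring the changeset describes might look roughly like this in each package's `package.json`. This is a sketch: the exact file names and `types` paths are assumptions, not the published manifest.

```json
{
  "main": "./dist/cjs/index.cjs",
  "module": "./dist/esm/index.js",
  "exports": {
    ".": {
      "import": {
        "types": "./dist/esm/index.d.ts",
        "default": "./dist/esm/index.js"
      },
      "require": {
        "types": "./dist/cjs/index.d.cts",
        "default": "./dist/cjs/index.cjs"
      }
    },
    "./adapters": {
      "import": "./dist/esm/adapters.js",
      "require": "./dist/cjs/adapters.cjs"
    }
  }
}
```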
@@ -0,0 +1,95 @@

---
title: React Native & Expo
id: non-streaming-runtimes
order: 4
description: "Run TanStack AI on React Native, Expo, and other runtimes that can't emit ReadableStream responses — using toJSONResponse on the server and fetchJSON on the client."
keywords:
  - tanstack ai
  - react native
  - expo
  - expo router
  - metro bundler
  - non-streaming
  - toJSONResponse
  - fetchJSON
  - edge runtime
---
You have a React Native or Expo app and you want to add AI chat, but the usual `toServerSentEventsResponse()` helper crashes on Expo's server runtime with:

```
TypeError: Cannot read properties of undefined (reading 'statusText')
```

…and Metro refuses to resolve `@tanstack/ai/adapters` at all. By the end of this guide, you'll have a working chat flow on Expo/React Native using a JSON-array fallback path. The same approach works for any deployment target that can't stream `ReadableStream` responses (some edge proxies, legacy serverless runtimes, etc.).
## What's actually going wrong

Two separate problems show up on React Native / Expo:

1. **Module resolution.** `@tanstack/ai` and `@tanstack/ai-client` ship dual ESM + CJS builds with `main`/`module`/`exports` all wired up. If your version is new enough, Metro resolves them out of the box. If you're stuck on an older version, upgrade — older releases were ESM-only and Metro can't consume them.

2. **Response shape.** Expo's `@expo/server` runtime (and a few edge proxies) can't emit a `ReadableStream` body, which is what `toServerSentEventsResponse` and `toHttpResponse` return. The request silently fails on the client side and `isLoading` flips back to `false` immediately.

The fix for (2) is to drain the chat stream on the server, send the collected chunks as a single JSON array, and replay them on the client. You lose incremental rendering — the UI sees every chunk at once when the request resolves — but every other piece of the chat pipeline keeps working as-is.
## Step 1: Return a JSON-array response on the server

Swap `toServerSentEventsResponse` for `toJSONResponse` in your API route. On Expo Router:

```typescript
// app/api/chat+api.ts
import { chat, toJSONResponse } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

export async function POST(request: Request) {
  const { messages } = await request.json();

  const stream = chat({
    adapter: openaiText("gpt-5.2"),
    messages,
  });

  return toJSONResponse(stream);
}
```

`toJSONResponse` iterates the whole stream, collects each `StreamChunk` into an array, and returns a plain `Response` with `Content-Type: application/json`. It accepts the same `init` options as `toServerSentEventsResponse` (including `abortController`) and honours any `Content-Type` you pass in `headers`.
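Under the hood it behaves roughly like the sketch below. This is illustrative only; the function name, the `StreamChunk` import, and the `ResponseInit` handling are assumptions rather than the published source:

```typescript
// Illustrative sketch of what toJSONResponse does; not the actual source.
import type { StreamChunk } from "@tanstack/ai";

async function toJSONResponseSketch(
  stream: AsyncIterable<StreamChunk>,
  init?: ResponseInit,
): Promise<Response> {
  // Drain the stream completely; this is why there is no incremental
  // rendering on this path.
  const chunks: StreamChunk[] = [];
  for await (const chunk of stream) {
    chunks.push(chunk);
  }

  // Honour a caller-supplied Content-Type, otherwise default to JSON.
  const headers = new Headers(init?.headers);
  if (!headers.has("Content-Type")) {
    headers.set("Content-Type", "application/json");
  }

  return new Response(JSON.stringify(chunks), { ...init, headers });
}
```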
## Step 2: Use `fetchJSON` as the connection adapter on the client

Swap `fetchServerSentEvents` for `fetchJSON` in your `useChat` call:

```typescript
import { useChat } from "@tanstack/ai-react";
import { fetchJSON } from "@tanstack/ai-client";

export function ChatScreen() {
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchJSON("/api/chat"),
  });

  // messages and isLoading behave identically to the streaming path —
  // they just update all at once when the request resolves.
  return <ChatUI messages={messages} onSend={sendMessage} busy={isLoading} />;
}
```

`fetchJSON` accepts the same `url` + `options` signature as the other connection adapters (static string or function, headers, credentials, custom `fetchClient`, extra body, abort signal). It POSTs the usual `{ messages, data }` body, decodes the response as a `StreamChunk[]`, and replays each chunk into the normal `ChatClient` pipeline — tool calls, approvals, thinking content, errors all behave the same way they do with SSE.
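Those options compose like the other adapters'. A hedged example, where `API_BASE_URL`, `getToken`, and `conversationId` stand in for your own app code:

```typescript
import { fetchJSON } from "@tanstack/ai-client";

// Placeholders for your own app code:
declare const API_BASE_URL: string;
declare function getToken(): Promise<string>;
declare const conversationId: string;

const connection = fetchJSON(
  // The URL can be a static string or a function.
  () => `${API_BASE_URL}/api/chat`,
  // Options can be an object or a (possibly async) function.
  async () => ({
    headers: { Authorization: `Bearer ${await getToken()}` },
    credentials: "include" as const,
    // Extra fields are merged into the POSTed { messages, data } body.
    body: { conversationId },
  }),
);
```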
## Step 3: Expect no incremental rendering

The one thing you give up: the UI won't update character-by-character. The request hangs until the server finishes the whole run, then the full message — including tool calls, results, and the final assistant turn — appears at once.

If this becomes a problem, the answer is to move to a runtime that supports streaming responses (Hono on Node, Next.js, TanStack Start, a real SSE endpoint proxied through a CDN that doesn't buffer) rather than to work around the limitation further. The JSON-array path is a pragmatic escape hatch, not the intended happy path.
## Going back to streaming when you can

If you later deploy your server code to a runtime that *does* support streaming, you only need to change two call sites — `toJSONResponse` → `toServerSentEventsResponse` and `fetchJSON` → `fetchServerSentEvents`. Everything downstream (messages, tool calls, approvals, `useChat` state, error handling) is identical between the two paths, so there's no cleanup to chase through the app.
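For reference, the two call sites side by side (everything else from Steps 1 and 2 stays as-is):

```typescript
// Server route, Step 1:
//   before: return toJSONResponse(stream);
//   after:  return toServerSentEventsResponse(stream);

// Client, Step 2:
//   before: connection: fetchJSON("/api/chat"),
//   after:  connection: fetchServerSentEvents("/api/chat"),
```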
## Next Steps

- [Streaming](./streaming) — the normal incremental-rendering path
- [Connection Adapters](./connection-adapters) — full list of client-side adapters, including `fetchJSON`
- [API Reference: `toJSONResponse`](../api/ai#tojsonresponsestream-init) — server-side helper reference
- [API Reference: `fetchJSON`](../api/ai-client#fetchjsonurl-options) — client-side adapter reference
@@ -424,6 +424,81 @@ export function fetchHttpStream(

````typescript
  }
}

/**
 * Create a JSON-array connection adapter for server runtimes that cannot
 * stream `ReadableStream` responses (e.g. Expo's `@expo/server`, certain
 * edge proxies). Pair with `toJSONResponse(stream)` on the server: the
 * server drains the chat stream fully, JSON-serialises the collected
 * chunks into an array, and this adapter fetches the array and replays
 * each chunk one-by-one into the normal client pipeline.
 *
 * Trade-off: you lose incremental rendering — the UI sees every chunk
 * only after the request resolves. Use SSE/HTTP-stream adapters when the
 * runtime supports them.
 *
 * @param url - The API endpoint URL (or a function that returns the URL)
 * @param options - Fetch options (headers, credentials, body, etc.) or a function that returns options (can be async)
 * @returns A connection adapter for JSON-array responses
 *
 * @example
 * ```typescript
 * // Expo / RN client that hits an Expo API route returning toJSONResponse(stream)
 * const connection = fetchJSON('/api/chat')
 *
 * const client = new ChatClient({ connection })
 * ```
 */
export function fetchJSON(
  url: string | (() => string),
  options:
    | FetchConnectionOptions
    | (() => FetchConnectionOptions | Promise<FetchConnectionOptions>) = {},
): ConnectConnectionAdapter {
  return {
    async *connect(messages, data, abortSignal) {
      const resolvedUrl = typeof url === 'function' ? url() : url
      const resolvedOptions =
        typeof options === 'function' ? await options() : options

      const requestHeaders: Record<string, string> = {
        'Content-Type': 'application/json',
        ...mergeHeaders(resolvedOptions.headers),
      }

      const requestBody = {
        messages,
        data,
        ...resolvedOptions.body,
      }

      const fetchClient = resolvedOptions.fetchClient ?? fetch
      const response = await fetchClient(resolvedUrl, {
        method: 'POST',
        headers: requestHeaders,
        body: JSON.stringify(requestBody),
        credentials: resolvedOptions.credentials || 'same-origin',
        signal: abortSignal || resolvedOptions.signal,
      })

      if (!response.ok) {
````
> **Contributor:** servers usually put the actual diagnostic information in the body, e.g. an OpenAI/Anthropic upstream rate limit: `{"error":{"type":"rate_limit_error","message":"Rate limit exceeded for...","retryAfter":42}}`
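One possible shape of that fix, as a fragment replacing the `if (!response.ok)` block. This is a sketch, not the reviewer's attached suggestion:

```typescript
// Sketch: include any diagnostic body the server sent in the thrown error.
if (!response.ok) {
  let detail = ''
  try {
    // e.g. {"error":{"type":"rate_limit_error","message":"...","retryAfter":42}}
    detail = await response.text()
  } catch {
    // Body unreadable; fall back to status info alone.
  }
  throw new Error(
    `HTTP error! status: ${response.status} ${response.statusText}` +
      (detail ? `: ${detail}` : ''),
  )
}
```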
```typescript
        throw new Error(
          `HTTP error! status: ${response.status} ${response.statusText}`,
        )
      }

      const payload = (await response.json()) as unknown
```
> **Contributor:** You need to wrap this in try/catch, otherwise the user will get an annoying "Unexpected token < in JSON at position 0" error instead of what probably happened, e.g. a gateway error or something that returned HTML.
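A sketch of that guard (the error wording here is hypothetical):

```typescript
// Sketch: fail with a useful message when the body isn't JSON,
// e.g. an HTML error page returned by a gateway.
let payload: unknown
try {
  payload = await response.json()
} catch {
  throw new Error(
    'fetchJSON: response body was not valid JSON. ' +
      'An upstream proxy or gateway may have returned an error page.',
  )
}
```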
||||||||||||||||||||||||||||||||
| if (!Array.isArray(payload)) { | ||||||||||||||||||||||||||||||||
| throw new Error( | ||||||||||||||||||||||||||||||||
| 'fetchJSON: expected response body to be a JSON array of StreamChunks. Did you forget to use `toJSONResponse(stream)` on the server?', | ||||||||||||||||||||||||||||||||
| ) | ||||||||||||||||||||||||||||||||
| } | ||||||||||||||||||||||||||||||||
| for (const chunk of payload) { | ||||||||||||||||||||||||||||||||
| yield chunk as StreamChunk | ||||||||||||||||||||||||||||||||
|
> **Contributor:** You never check for the abort signal in this yield loop, despite adding it in `toJSONResponse`.
||||||||||||||||||||||||||||||||
| } | ||||||||||||||||||||||||||||||||
| }, | ||||||||||||||||||||||||||||||||
| } | ||||||||||||||||||||||||||||||||
| } | ||||||||||||||||||||||||||||||||
|
|
||||||||||||||||||||||||||||||||
| /** | ||||||||||||||||||||||||||||||||
| * Create a direct stream connection adapter (for server functions or direct streams) | ||||||||||||||||||||||||||||||||
| * | ||||||||||||||||||||||||||||||||
|
|
||||||||||||||||||||||||||||||||
> You should add an e2e test for the roundtrip `toJSONResponse` → `fetchJSON`.
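A minimal sketch of that roundtrip, assuming a vitest-style runner; the chunk objects and the awaitability of `toJSONResponse` are assumptions, not the library's documented shapes:

```typescript
import { describe, expect, it } from 'vitest'
import { toJSONResponse } from '@tanstack/ai'
import { fetchJSON } from '@tanstack/ai-client'

describe('toJSONResponse -> fetchJSON roundtrip', () => {
  it('replays every chunk the server collected', async () => {
    // Stand-in chunks; real StreamChunk shapes come from @tanstack/ai.
    const chunks = [
      { type: 'content', delta: 'Hel' },
      { type: 'content', delta: 'lo' },
      { type: 'done' },
    ]
    async function* fakeStream() {
      for (const chunk of chunks) yield chunk
    }

    // Server side: drain the stream into a JSON-array Response.
    const response = await toJSONResponse(fakeStream() as any)

    // Client side: hand that Response straight to fetchJSON via a
    // custom fetchClient, skipping the network entirely.
    const adapter = fetchJSON('/api/chat', {
      fetchClient: async () => response,
    })

    const replayed: Array<unknown> = []
    for await (const chunk of adapter.connect([], undefined, undefined)) {
      replayed.push(chunk)
    }
    expect(replayed).toEqual(chunks)
  })
})
```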