feat: smooth streaming mode for TUI response rendering#281

Merged
anandgupta42 merged 6 commits into main from feat/smooth-streaming on Mar 19, 2026

Conversation

@anandgupta42
Contributor

What does this PR do?

Adds an opt-in ALTIMATE_SMOOTH_STREAMING feature flag that significantly improves TUI response rendering smoothness during LLM streaming. When enabled:

  • Uses <code> during streaming, <markdown> after completion — the <markdown> element re-lays out block elements (headers, code blocks, lists) on every token delta, causing visible text jumps and jerky scrolling. The <code> element does syntax coloring without block-level layout shifts; the view swaps to rich <markdown> rendering once the message finishes.
  • Pre-merges delta events — consecutive message.part.delta events for the same part+field are merged within the 16ms batch window, reducing store updates from N-per-part to 1-per-part per frame.
  • Direct store path updates — replaces produce() proxy with direct setStore() path mutation on the delta hot path, avoiding Immer-style proxy creation on every token.
  • Faster scroll-to-bottom — reduces toBottom() delay from 50ms to 0ms.
  • Memoized trim() — TextPart now uses createMemo() for trim() so it runs once per text change instead of 3x per reactive read (unconditional, always active).

Enable with: ALTIMATE_SMOOTH_STREAMING=true or OPENCODE_SMOOTH_STREAMING=true
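The delta pre-merge described above can be sketched as follows. This is a minimal illustration based on the PR text, not the PR's actual code: the event shape and the `mergeDeltas` name are assumptions.

```typescript
type DeltaEvent = {
  type: "message.part.delta"
  properties: { messageID: string; partID: string; field: string; delta: string }
}
type OtherEvent = { type: string }
type Event = DeltaEvent | OtherEvent

function isDelta(e: Event): e is DeltaEvent {
  return e.type === "message.part.delta"
}

// Fold consecutive deltas for the same part+field within one batch window,
// so the store sees at most one update per part per frame.
function mergeDeltas(events: Event[]): Event[] {
  const merged: Event[] = []
  const deltaMap = new Map<string, number>() // part+field key -> index in `merged`
  for (const event of events) {
    if (isDelta(event)) {
      const { messageID, partID, field, delta } = event.properties
      const key = `${messageID}:${partID}:${field}`
      const existing = deltaMap.get(key)
      if (existing !== undefined) {
        // Clone rather than mutate in place when folding.
        const prev = merged[existing] as DeltaEvent
        merged[existing] = {
          ...prev,
          properties: { ...prev.properties, delta: prev.properties.delta + delta },
        }
        continue
      }
      deltaMap.set(key, merged.length)
    } else {
      // A non-delta event may change state; clear the map so later deltas
      // are not folded across it (preserves causal ordering).
      deltaMap.clear()
    }
    merged.push(event)
  }
  return merged
}
```

Clearing `deltaMap` on non-delta events matters because folding a delta backwards across an intervening `message.part.updated` would reorder writes relative to the server's authoritative update.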

Type of change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation update

Issue for this PR

Closes #280

How did you verify your code works?

  • Built locally with bun run build:local and tested interactively
  • Verified streaming smoothness with long LLM responses (text, code blocks, headers, lists)
  • Confirmed text no longer appears in visible jumps during streaming
  • Confirmed scrolling is smooth during streaming
  • Verified rich markdown rendering activates after message completion
  • Typecheck passes (bun turbo typecheck)
  • Prettier passes on all changed files

Checklist

  • My code follows the project's coding standards
  • I have performed a self-review of my code
  • New code is behind a feature flag for safe rollout
  • All changes are gated behind ALTIMATE_SMOOTH_STREAMING (except the unconditional trim() memoization which is a pure perf improvement)

…response rendering

During LLM streaming, the `<markdown>` element re-lays out block elements on
every token delta, causing visible text jumps and jerky scrolling. This adds
an opt-in `ALTIMATE_SMOOTH_STREAMING` feature flag with four optimizations:

- Use `<code filetype="markdown">` during streaming (no layout-shifting blocks),
  swap to `<markdown>` after message completion for rich rendering
- Pre-merge consecutive `message.part.delta` events in the SDK 16ms batch
  window (N tokens → 1 store update per part per frame)
- Replace `produce()` with direct store path updates on the delta hot path
- Reduce `toBottom()` scroll delay from 50ms to 0ms
- Memoize `trim()` in `TextPart` (unconditional, no flag needed)

Enable with: `ALTIMATE_SMOOTH_STREAMING=true`

@claude claude bot left a comment


Claude Code Review

This repository is configured for manual code reviews. Comment @claude review to trigger a review.

Comment on lines +64 to +74
```ts
    const key = `${props.messageID}:${props.partID}:${props.field}`
    const existing = deltaMap.get(key)
    if (existing !== undefined) {
      const prev = merged[existing] as typeof event
      ;(prev.properties as typeof props).delta += props.delta
      continue
    }
    deltaMap.set(key, merged.length)
  }
  merged.push(event)
}
```

This comment was marked as outdated.

Comment on lines 1486 to 1496
```diff
         <markdown syntaxStyle={syntax()} streaming={false} content={trimmed()} conceal={ctx.conceal()} />
       </Match>
       <Match when={!Flag.OPENCODE_EXPERIMENTAL_MARKDOWN}>
         <code
           filetype="markdown"
           drawUnstyledText={false}
-          streaming={true}
+          streaming={false}
           syntaxStyle={syntax()}
-          content={props.part.text.trim()}
+          content={trimmed()}
           conceal={ctx.conceal()}
           fg={theme.text}
```

This comment was marked as outdated.

…ssion

- Clear `deltaMap` on non-delta events to preserve causal ordering
  (prevents folding deltas across intervening `message.part.updated` events)
- Clone event objects instead of mutating in-place during delta merge
- Restore dynamic `streaming` prop in fallback `<markdown>`/`<code>` blocks
  (`!props.message.time.completed`) to avoid regression for non-opt-in users
…xperience

Combines three streaming optimizations under one flag:

- **Smooth streaming** (`ALTIMATE_SMOOTH_STREAMING`): renders with `<code>`
  during streaming to avoid markdown layout jumps, swaps to `<markdown>`
  after message completion
- **Line buffering** (`ALTIMATE_LINE_STREAMING`): buffers deltas and flushes
  only on `\n` (complete lines). Remaining text flushes on message completion.
  No partial lines ever appear.
- **Width cap** (`ALTIMATE_CONTENT_MAX_WIDTH`): caps text at 100 columns for
  readability. Automatically disabled on small screens where the cap would
  exceed available width.

`ALTIMATE_CALM_MODE=true` enables all three with sensible defaults.
Individual flags still work independently for fine-grained control.
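The flag composition described above could look roughly like this. A minimal sketch, assuming env-var flags; the `calmFlags` helper name and the choice of 100 columns as the calm-mode width default are assumptions based on the PR text:

```typescript
// Resolve the three feature toggles from environment variables.
// ALTIMATE_CALM_MODE=true enables all three; individual flags also
// work on their own.
function calmFlags(env: Record<string, string | undefined>) {
  const calm = env.ALTIMATE_CALM_MODE === "true"
  return {
    smoothStreaming: calm || env.ALTIMATE_SMOOTH_STREAMING === "true",
    lineStreaming: calm || env.ALTIMATE_LINE_STREAMING === "true",
    // Cap text at 100 columns when enabled (per the PR description),
    // undefined otherwise.
    contentMaxWidth: calm || env.ALTIMATE_CONTENT_MAX_WIDTH === "true" ? 100 : undefined,
  }
}
```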

Includes 38 unit tests covering delta merging, line buffering, flag
composition, and width capping edge cases (small screens, empty buffers,
consecutive newlines, cross-message isolation).
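The line-buffering behavior above can be sketched as follows; `bufferDelta` and `flushBuffer` are illustrative names, not the PR's actual code:

```typescript
// One buffer of unflushed text per part.
const lineBuffers = new Map<string, string>()

// Append a delta; return the complete lines ready to render (through the
// last "\n"), holding back any trailing partial line.
function bufferDelta(partKey: string, delta: string): string | undefined {
  const pending = (lineBuffers.get(partKey) ?? "") + delta
  const lastNewline = pending.lastIndexOf("\n")
  if (lastNewline === -1) {
    lineBuffers.set(partKey, pending) // no complete line yet
    return undefined
  }
  lineBuffers.set(partKey, pending.slice(lastNewline + 1))
  return pending.slice(0, lastNewline + 1)
}

// On message completion, abort, or removal: flush whatever remains and
// delete the entry so aborted messages cannot leak buffers.
function flushBuffer(partKey: string): string {
  const rest = lineBuffers.get(partKey) ?? ""
  lineBuffers.delete(partKey)
  return rest
}
```

Because flushing happens only on `\n` or completion, the UI never shows a partial line mid-render, which is what eliminates the mid-word reflow during streaming.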
- Flush/delete line buffer entries in `message.removed` handler to prevent
  memory leaks when messages are aborted or removed without `time.completed`
- Add clarifying comment explaining line streaming / smooth streaming
  interaction (line streaming branch handles its own direct store updates)
- Add "Calm Mode Quick Start" section to CLI docs with usage examples
- Update `ALTIMATE_LINE_STREAMING` doc to mention abort cleanup
…removed`

When streaming ends, `message.part.updated` writes the full final text via
`reconcile()`. Without clearing the line buffer first, `flushAllBuffersForMessage`
on `message.updated` would append the remaining buffered text on top of the
already-complete content — duplicating the trailing partial line.

Fix: discard all line buffer entries for a part when `message.part.updated`
fires (the server's content is authoritative). Also clear on
`message.part.removed` to prevent orphaned buffer entries.
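The duplication scenario and its fix can be shown with a self-contained sketch (all names illustrative):

```typescript
// Unflushed partial-line text, keyed by part.
const buffers = new Map<string, string>()

// `message.part.updated`: the server's full text is authoritative and
// already contains everything we buffered, so DISCARD the buffer instead
// of flushing it — flushing would duplicate the trailing partial line.
function onPartUpdated(partKey: string, fullText: string): string {
  buffers.delete(partKey)
  return fullText
}

// `message.updated` (completion): flush anything still buffered, e.g. a
// final line that never got a newline.
function onMessageUpdated(partKey: string, currentText: string): string {
  const rest = buffers.get(partKey) ?? ""
  buffers.delete(partKey)
  return currentText + rest
}
```

Without the `buffers.delete` in `onPartUpdated`, the completion-time flush would append the buffered tail on top of text that already ends with it — exactly the duplicated trailing line the commit describes.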
@anandgupta42 anandgupta42 merged commit 69bd0d6 into main Mar 19, 2026
8 checks passed
@anandgupta42 anandgupta42 deleted the feat/smooth-streaming branch March 25, 2026 02:14
