The markdown coalescing loop was processing chunks back-to-back without
yielding to the browser's paint cycle. At high token rates (250+ tok/s),
this caused a complete UI freeze because the main thread was perpetually busy.
Add a requestAnimationFrame yield between processing batches. This allows
the browser to paint at screen FPS regardless of token throughput. Chunks
arriving during the yield are coalesced and processed together, so we
skip intermediate states and jump straight to the latest content.
Before: Chunk->process->Chunk->process->... (browser never paints = freeze)
After: Chunk->process->[RAF]->coalesced chunks->process->[RAF]->... (screen FPS)
Tested with 250 tok/s streams on 50K+ token contexts: smooth scrolling
and responsive UI throughout.
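A minimal sketch of the yield-and-coalesce loop described above; the names (`pending`, `enqueueChunk`, `processBatch`) are illustrative, not the actual implementation:

```ts
let pending: string[] = [];
let flushing = false;

// Placeholder for the real markdown pipeline invocation.
function processBatch(markdown: string): void {}

function enqueueChunk(chunk: string): void {
  pending.push(chunk);
  if (!flushing) {
    flushing = true;
    void flushLoop();
  }
}

async function flushLoop(): Promise<void> {
  while (pending.length > 0) {
    // Coalesce everything that arrived so far and process it as one batch,
    // skipping intermediate states.
    const batch = pending;
    pending = [];
    processBatch(batch.join(''));

    // Yield to the browser's paint cycle before the next batch.
    await new Promise<void>((resolve) => requestAnimationFrame(() => resolve()));
  }
  flushing = false;
}
```

Chunks that arrive while the loop is awaiting the RAF callback simply accumulate in `pending` and are picked up on the next iteration.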
Replace full AST re-transformation with per-block caching strategy.
Previously, each streaming chunk triggered processor.run() on the entire
document (12 rehype/remark plugins including KaTeX and highlight.js).
Now transforms individual MDAST nodes and caches results by position hash.
In append-only streaming mode, stable blocks are reused directly from the cache;
only the unstable trailing block is re-transformed.
- Add SvelteMap FIFO cache (5000 blocks, evicts oldest 1000 on overflow)
- Add getMdastNodeHash() for MDAST node fingerprinting by position
- Add isAppendMode() to detect streaming append patterns
- Add transformMdastNode() for single-node transformation with cache lookup
- Remove stringifyProcessedNode() (dead code after refactor)
Reduces streaming complexity from O(N × transforms) to O(1) for stable blocks.
Targets 200K token contexts without UI degradation on mobile devices.
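A sketch of the cache and single-node transform under the assumptions above; the hash shape, the mdast typings, and the transform callback are illustrative, while the sizes (5000 / 1000) come from this commit:

```ts
import { SvelteMap } from 'svelte/reactivity';
import type { RootContent } from 'mdast';

const MAX_CACHE_SIZE = 5000;
const EVICT_COUNT = 1000;

const blockCache = new SvelteMap<string, unknown>();

// Fingerprint an MDAST node by type and source position, so a block whose
// source text has not moved maps to the same cache entry.
function getMdastNodeHash(node: RootContent): string {
  const start = node.position?.start.offset ?? -1;
  const end = node.position?.end.offset ?? -1;
  return `${node.type}:${start}:${end}`;
}

function transformMdastNode(
  node: RootContent,
  transform: (n: RootContent) => unknown
): unknown {
  const key = getMdastNodeHash(node);
  const cached = blockCache.get(key);
  if (cached !== undefined) return cached;

  const result = transform(node);

  // FIFO eviction: drop the oldest entries once the cache overflows.
  if (blockCache.size >= MAX_CACHE_SIZE) {
    let dropped = 0;
    for (const k of blockCache.keys()) {
      blockCache.delete(k);
      if (++dropped >= EVICT_COUNT) break;
    }
  }
  blockCache.set(key, result);
  return result;
}
```

In append mode the trailing block's hash changes as its text grows, so it naturally misses the cache and gets re-transformed, while earlier blocks keep hitting it.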
Parse tool results line-by-line to display images immediately after their
[Attachment saved: xxx.png] markers. Fixes the previous commit, where all images
from all tool calls were shown in every section. Each tool call now displays
only its own images.
Uses Svelte derived for memoization to avoid re-parsing on every streaming
chunk. Parsing only occurs when section.toolResult or message.extra changes.
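A sketch of the line-by-line parse behind a Svelte 5 $derived; the prop shapes, `ExtraImage`, and `parseAttachmentNames` are assumptions made for illustration:

```svelte
<script lang="ts">
  interface ExtraImage { name: string; url: string }

  let { section, message }: {
    section: { toolResult?: string };
    message: { extra?: ExtraImage[] };
  } = $props();

  const ATTACHMENT_RE = /^\[Attachment saved: (.+?)\]$/;

  // Scan the tool result line-by-line, collecting saved-attachment names
  // in the order their markers appear.
  function parseAttachmentNames(toolResult: string): string[] {
    const names: string[] = [];
    for (const line of toolResult.split('\n')) {
      const m = ATTACHMENT_RE.exec(line.trim());
      if (m) names.push(m[1]);
    }
    return names;
  }

  // $derived memoizes the parse: it re-runs only when section.toolResult
  // or message.extra changes, not on every streaming chunk.
  const images = $derived(
    parseAttachmentNames(section.toolResult ?? '')
      .map((name) => message.extra?.find((img) => img.name === name))
      .filter((img): img is ExtraImage => img !== undefined)
  );
</script>
```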
Make model selector dropdown responsive:
- Mobile: full width (w-full max-w-[100vw])
- Desktop: adapts to longest model name (sm:w-max)
- Replace TruncatedText with responsive span (truncate on mobile, full text on desktop via sm:overflow-visible sm:whitespace-nowrap)
- Center status icons in fixed 24px wrapper to prevent layout shifts
- Add sm:pr-2 padding between text and icon zone on desktop
Fixes the dropdown cutting off long model names on desktop while keeping the full-width display and proper text truncation on mobile.
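A sketch of the resulting item markup; only the Tailwind classes listed above come from this commit, the element structure, `model`, and `StatusIcon` are illustrative:

```svelte
<div class="w-full max-w-[100vw] sm:w-max">
  <button class="flex w-full items-center sm:pr-2">
    <span class="truncate sm:overflow-visible sm:whitespace-nowrap">
      {model.name}
    </span>
    <!-- Fixed 24px wrapper keeps the icon centered and avoids layout shifts -->
    <span class="flex h-6 w-6 shrink-0 items-center justify-center">
      <StatusIcon status={model.status} />
    </span>
  </button>
</div>
```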
refactor(agentic-client): Extend BaseClient for store integration
refactor(chat-client): Extend BaseClient for store integration
refactor(conversations-client): Extend BaseClient for store integration
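A sketch of the shape these refactors suggest; BaseClient's real API is not shown in these commits, so the store wiring, state types, and endpoint below are assumptions:

```ts
import { writable, type Writable } from 'svelte/store';

// Shared store plumbing that each concrete client inherits.
abstract class BaseClient<TState> {
  protected readonly state: Writable<TState>;

  constructor(initial: TState) {
    this.state = writable(initial);
  }

  subscribe(run: (value: TState) => void) {
    return this.state.subscribe(run);
  }
}

interface ConversationsState {
  conversations: { id: string; title: string }[];
}

// conversations-client after the refactor: store integration comes from
// BaseClient, the subclass only adds domain-specific calls.
class ConversationsClient extends BaseClient<ConversationsState> {
  constructor() {
    super({ conversations: [] });
  }

  async refresh(): Promise<void> {
    const res = await fetch('/api/conversations'); // endpoint is illustrative
    const conversations = await res.json();
    this.state.set({ conversations });
  }
}
```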