
feat(tui): render markdown tables with Unicode box-drawing borders #10

Open
fcoury wants to merge 39 commits into main from refact/md-table

Conversation


@fcoury fcoury commented Feb 22, 2026

feat(tui): render markdown tables with Unicode box-drawing borders

Problem

Agent responses containing Markdown tables render as raw pipe-delimited text in the TUI transcript. Tables are one of the most common structured output formats from LLMs, and rendering them as | A | B | lines wastes horizontal space, obscures alignment, and makes multi-column data hard to scan — especially when columns contain prose or mixed-width content.

Before — raw pipe-delimited table on main:

(screenshot)

After — Unicode box table that adapts to terminal width:

(screenshots: wide terminal / narrow terminal)

And a demonstration of the table rendering flow:

(video: Dynamic.Markdown.Tables.mp4)

A second, subtler problem: agents frequently wrap tables in ```markdown fences (treating them as code), which causes pulldown-cmark to parse the table as a fenced code block rather than a native table. Both problems need solving together.

Side benefit: The resize reflow infrastructure built for table width adaptation also fixes terminal resize for all markdown content. On main, widening the terminal wastes space and narrowing it causes visual corruption (see Resize handling for screenshots and details).

Mental model

The change introduces a two-region streaming model with table-aware holdback:

  1. Stable region — rendered lines committed to scrollback via a commit-animation queue. Once emitted, these lines are immutable.
  2. Tail region — mutable rendered lines displayed in the active-cell slot. When a table is detected in the accumulated source, content from the table header onward is held as tail because adding a single row can reshape every column width. Pre-table prose flows to stable normally.

The holdback decision is the core architectural choice: rather than trying to incrementally patch table renders, the system re-renders the entire accumulated markdown source on every committed delta and decides per-delta whether lines flow to stable or stay as mutable tail.

How lines are allocated between regions:

  • No table (None): lines progressively flow from tail to stable as new deltas arrive. Only the most recent not-yet-committed lines stay in tail.
  • Table confirmed (Confirmed { table_start }): content from the table header onward is held as tail until the stream ends, because a single new row can reshape every column width. Pre-table prose continues flowing to stable.
  • Pending header (PendingHeader { header_start }): transitional — a header row was seen but not yet confirmed by a delimiter row. Content from the speculative header onward is kept mutable.
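A minimal sketch of this state machine (names mirror `TableHoldbackState` from this PR, but the header/delimiter heuristics here are simplified stand-ins for the real `table_detect` validators, which also handle blockquote prefixes and fence context):

```rust
/// Illustrative mirror of TableHoldbackState; field names match the
/// description above, but the detection heuristics are simplified.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum TableHoldbackState {
    None,
    PendingHeader { header_start: usize },
    Confirmed { table_start: usize },
}

fn looks_like_header(line: &str) -> bool {
    let t = line.trim();
    t.starts_with('|') && t.matches('|').count() >= 2
}

fn looks_like_delimiter(line: &str) -> bool {
    let t = line.trim();
    !t.is_empty()
        && t.contains('-')
        && t.chars().all(|c| matches!(c, '|' | '-' | ':' | ' '))
}

/// Advance the state for one newly committed source line starting at `offset`.
pub fn advance(state: TableHoldbackState, line: &str, offset: usize) -> TableHoldbackState {
    use TableHoldbackState::*;
    match state {
        // Once confirmed, the table region stays held until the stream ends.
        Confirmed { .. } => state,
        PendingHeader { header_start } if looks_like_delimiter(line) => {
            Confirmed { table_start: header_start }
        }
        _ if looks_like_header(line) => PendingHeader { header_start: offset },
        _ => None,
    }
}
```

The key property is that `Confirmed` is absorbing: once a header+delimiter pair is seen, no later line can release the holdback before finalization.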

The holdback state is maintained by an incremental scanner (TableHoldbackScanner) over append-only source chunks, with a stateless full-source scan retained under #[cfg(test)] for parity checks.

A parallel fence-unwrapping pass (unwrap_markdown_fences) strips ```md/```markdown wrappers when the fence body contains a header+delimiter pair, so pulldown-cmark sees raw table syntax instead of fenced code.
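A hedged sketch of the unwrap gate, assuming simplified row validators (the real `unwrap_markdown_fences` additionally tracks indentation, blockquote prefixes, info strings, and unclosed fences):

```rust
/// Does a fence body contain a pipe-table header immediately followed by a
/// delimiter row? If so, unwrapping lets pulldown-cmark see a real table.
fn fence_body_has_table(body: &str) -> bool {
    let lines: Vec<&str> = body.lines().collect();
    lines
        .windows(2)
        .any(|w| is_header_row(w[0]) && is_delimiter_row(w[1]))
}

fn is_header_row(line: &str) -> bool {
    let t = line.trim();
    t.starts_with('|') && t.ends_with('|') && t.matches('|').count() >= 2
}

fn is_delimiter_row(line: &str) -> bool {
    let t = line.trim();
    !t.is_empty()
        && t.contains('-')
        && t.chars().all(|c| matches!(c, '|' | '-' | ':' | ' '))
}
```

Requiring the header and delimiter to be adjacent keeps the check conservative: a fence whose body merely mentions pipes stays a code block.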

Non-goals

  • Incremental table rendering. The system does not attempt to patch prior lines when a new row arrives. It re-renders from source. This is intentional: markdown rendering is non-monotonic (a new row can change every column width), so incremental updates would require diffing rendered output, which is complex and error-prone.
  • Table editing or interaction. Tables are display-only transcript content.
  • Arbitrary markdown extensions. Only standard pipe tables (GFM) are supported. No grid tables, CSV, or custom formats.
  • Horizontal scrolling. When a table cannot fit the terminal width, it falls back to pipe-delimited format rather than implementing horizontal scroll.

Tradeoffs

| Decision | Upside | Downside |
| --- | --- | --- |
| Full re-render on each committed source chunk (newline-gated) | Correct by construction; no incremental diffing bugs | O(n) in source size per committed chunk; acceptable for typical agent messages (hundreds of lines) |
| Hold table region as tail during table streaming | No flickering or duplicated partial table rows in scrollback; pre-table prose streams normally | Long tables delay scrollback commit of table content until stream ends |
| Source-map resize remapping (rendered_line_end_source_bytes + remap_emitted_len_from_source_boundary) | Correct width transition without dropping content; one re-render per resize | O(N) boundary scan per resize (one re-render + linear scan of stored source-byte boundaries) |
| Fence unwrapping before parsing | Handles the common ```markdown pattern transparently | Buffers entire fence body before deciding; adds a preprocessing pass |
| Narrative/Structured column classification | Intelligent shrinking prioritizes preserving short-token columns | Heuristic thresholds (4 avg words, 28 avg char width) may misclassify edge cases |
| Spillover row detection | Prevents pulldown-cmark's lenient parsing from producing malformed single-cell rows | Heuristic-based; may occasionally miscategorize intentional sparse rows |

Architecture

flowchart TD
    delta["Token delta from agent"]
    collector["MarkdownStreamCollector
    *(newline-gated raw source accumulator)*"]

    delta --> collector
    collector -- "commit_complete_source()" --> raw

    subgraph StreamController["StreamController — two-region streaming engine"]
        raw["raw_source
        *(append-only markdown accumulator)*"]

        raw --> recompute["recompute_streaming_render()"]

        subgraph pipeline["Render pipeline"]
            unwrap["unwrap_markdown_fences()"]
            pulldown["pulldown-cmark"]
            writer["Writer"]
            table_render["render_table_lines()"]
            unwrap --> pulldown --> writer --> table_render
        end

        recompute --> unwrap
        writer --> rendered["rendered_lines
        *(full re-render at current width)*"]
        table_render --> boxgrid["Unicode box grid
        or pipe fallback"]

        rendered --> holdback["holdback_scanner.state()"]
        holdback --> state["TableHoldbackState
        {None | PendingHeader | Confirmed}"]
        state --> sync["sync_stable_queue()"]
    end

    sync -- "Stable lines" --> commit["commit-animation queue"]
    commit --> agent["AgentMessageCell *(scrollback)*"]
    sync -- "Tail lines" --> tail["StreamingAgentTailCell *(active-cell slot, mutable)*"]

Key modules:

  • table_detect — canonical pipe-table detection, incremental holdback scanning, and fence-context tracking; shared by streaming controller, fence unwrapper, and holdback scanner
  • width — usable-width guards for prefix-column reservation
  • markdown — public entry points with fence unwrapping (append_markdown_agent)
  • streaming/controller — StreamCore (shared bookkeeping for source accumulation, re-rendering, holdback, and resize), StreamController (agent messages, holdback enabled), and PlanStreamController (plan blocks, holdback disabled)

Key types:

  • StreamCore — private shared engine for two-region streaming: source accumulation, re-rendering, stable/tail partitioning, and holdback scanning. Both StreamController and PlanStreamController delegate to it via composition.
  • StreamController / PlanStreamController — thin wrappers that add agent-message or plan-block emission semantics. Agent streams enable table holdback; plan streams remain fully incremental.
  • TableHoldbackState — None / PendingHeader { header_start } / Confirmed { table_start }
  • FenceTracker — incremental state machine for fence open/close (Outside / Markdown / Other)
  • TableState / TableCell — pulldown-cmark event accumulator
  • TableColumnKind / TableColumnMetrics — width-allocation classification (Narrative shrinks first; Structured resists collapse)
  • RenderedMarkdownWithSourceMap — render artifact carrying lines + per-line input byte offsets (line_end_input_bytes)
  • StreamingAgentTailCell / AgentMessageCell — history cell types for tail and committed content

Extracted helpers (shared across app, chatwidget, and streaming modules):

  • trailing_run_start<T> — find start index of the trailing run of a specific cell type in transcript
  • strip_line_indent — peel leading whitespace for fence detection
  • compute_target_stable_len — determine how many rendered lines should be stable based on holdback state
  • reset_history_emission_state / maybe_finish_stream_reflow — resize coordination between app-level reflow and active streams
  • is_blockquote_active — shared blockquote context check for the markdown writer
  • active_cell_is_stream_tail — chatwidget guard for stream-aware cell operations

Table rendering pipeline

The Writer in markdown_render.rs accumulates pulldown-cmark table events into a TableState, then passes it to render_table_lines() which runs a five-stage pipeline:

  1. Filter spillover rows — heuristic extraction of single-cell rows that are artifacts of pulldown-cmark's lenient parsing (accepts body rows without leading pipes, which can absorb trailing paragraphs).
  2. Normalize column counts — pad or truncate so every row matches the alignment count.
  3. Compute column widths — allocate widths with Narrative/Structured priority and iterative shrinking. Columns are classified as Narrative (long prose, >= 4 avg words or >= 28 avg char width) or Structured (short tokens). URL-like columns (long tokens with few words) are kept as Structured to resist collapse. The shrink loop removes one character at a time, preferring Narrative columns, until the total fits.
  4. Render box grid — Unicode borders (┌───┬───┐) or fall back to pipe-delimited format (| A | B |) when even 3-char-wide columns cannot fit.
  5. Append spillover — extracted spillover rows rendered as plain text after the table.
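The stage-3 shrink loop might look like this sketch (illustrative only: `ColumnKind` stands in for the PR's `TableColumnKind`, and the border arithmetic assumes one padded cell per column plus a closing border):

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
enum ColumnKind {
    Narrative,
    Structured,
}

/// Remove one character of width at a time, preferring the widest Narrative
/// column, until content + borders + padding fit the available width.
/// Returns false if even floor-width columns cannot fit, signalling the
/// pipe-format fallback.
fn shrink_to_fit(widths: &mut [usize], kinds: &[ColumnKind], available: usize, floor: usize) -> bool {
    // Per column: "│ " before the cell and " " after it (3 cells), plus the
    // single closing "│" at the end of the row.
    let total = |w: &[usize]| w.iter().sum::<usize>() + 3 * w.len() + 1;
    while total(widths) > available {
        let candidate = (0..widths.len())
            .filter(|&i| widths[i] > floor)
            // Tuple key (is_narrative, width): Narrative first, then widest.
            .max_by_key(|&i| (kinds[i] == ColumnKind::Narrative, widths[i]));
        match candidate {
            Some(i) => widths[i] -= 1,
            None => return false, // everything is at the floor; fall back
        }
    }
    true
}
```

Shrinking one character per iteration is O(overflow × columns), which is cheap at terminal scale and keeps the priority rule trivially correct.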

Resize handling

On main, terminal resize is essentially ignored for transcript content. History cells store pre-rendered Vec<Line> and are never reflowed — growing the terminal wastes horizontal space (text stays wrapped at the old width), and shrinking it causes visual corruption as pre-rendered lines are hard-clipped mid-word.

main — paragraph at half width, then after resize (no reflow):

(screenshots: SCR-20260217-shfw, SCR-20260217-shha)

This branch — prose reflows cleanly at any width:

(screenshot: SCR-20260217-shmc)

This PR adds a resize reflow system, motivated by tables (whose column widths must adapt to available space) but benefiting all markdown content:

  1. Width-change detection. App tracks last_transcript_render_width and fires ChatWidget::on_terminal_resize() when it changes.

  2. Active stream updates immediately. StreamCore::set_width() re-renders once at the new width, then remaps emitted_stable_len via stored source-byte boundaries. The remap_emitted_len_from_source_boundary function walks rendered_line_end_source_bytes with a tie-break counter for wrapped lines sharing the same source offset, avoiding duplicated or dropped content across the width transition.

  3. Committed scrollback reflowed on a 75ms debounce. maybe_run_resize_reflow() clears the terminal and re-inserts every transcript cell at the new width. The debounce (RESIZE_REFLOW_DEBOUNCE) prevents expensive re-renders on every pixel of a drag resize.

  4. Source-based history cells. AgentMarkdownCell stores raw markdown source instead of pre-rendered lines, so it can re-render at any width during reflow. This replaces the run of AgentMessageCells emitted during streaming once the stream is consolidated.

  5. Stream-time reflow coordination. If a reflow runs while a stream is still active (cells are still AgentMessageCell, not yet consolidated), the reflow_ran_during_stream flag triggers a second reflow after ConsolidateAgentMessage to pick up the AgentMarkdownCell rendering at the current width.
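A minimal sketch of the debounce in step 3, written as a poll-style check for clarity (an assumption — the real App wires RESIZE_REFLOW_DEBOUNCE into its frame scheduling rather than polling):

```rust
use std::time::{Duration, Instant};

const RESIZE_REFLOW_DEBOUNCE: Duration = Duration::from_millis(75);

#[derive(Default)]
struct ReflowDebounce {
    deadline: Option<Instant>,
}

impl ReflowDebounce {
    /// Each resize event pushes the deadline out, so a drag-resize triggers
    /// only one expensive scrollback rebuild after the drag settles.
    fn schedule(&mut self, now: Instant) {
        self.deadline = Some(now + RESIZE_REFLOW_DEBOUNCE);
    }

    /// Returns true exactly once, when the debounce window has elapsed.
    fn maybe_run(&mut self, now: Instant) -> bool {
        match self.deadline {
            Some(d) if now >= d => {
                self.deadline = None;
                true
            }
            _ => false,
        }
    }
}
```

Clearing the deadline on fire is what makes the reflow run once per settled resize rather than once per frame.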

Observability

  • tracing::trace! on every push_delta call in MarkdownStreamCollector
  • tracing::debug! on finalization with raw/rendered lengths (test-only finalize_and_drain; production finalize_and_drain_source has no tracing)
  • Table holdback state is updated incrementally per committed source chunk; tests verify parity against a stateless full-source scanner
  • Resize reflow is debounced at 75ms (RESIZE_REFLOW_DEBOUNCE); reflow_ran_during_stream flag triggers a re-reflow on stream consolidation to pick up the final AgentMarkdownCell rendering
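The newline-gated buffering in MarkdownStreamCollector can be sketched as follows (a hypothetical simplification; the real collector also emits tracing and exposes separate finalize/drain variants):

```rust
/// Buffers deltas and releases complete lines only: everything up to and
/// including the last newline is committed; the partial tail stays buffered.
#[derive(Default)]
struct Collector {
    buffer: String,
}

impl Collector {
    fn push_delta(&mut self, delta: &str) -> Option<String> {
        self.buffer.push_str(delta);
        // Commit through the last newline; return None if no line is complete.
        let cut = self.buffer.rfind('\n')? + 1;
        Some(self.buffer.drain(..cut).collect())
    }

    /// At stream end, whatever remains (even without a newline) is the tail.
    fn finalize(&mut self) -> String {
        std::mem::take(&mut self.buffer)
    }
}
```

Gating commits on newlines is what bounds the re-render frequency: a burst of sub-line token deltas produces at most one recompute per completed source line.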

Tests

  • Streaming/full-render parity: Multiple tests assert that streaming token-by-token through StreamController produces identical output to a single full append_markdown_agent call. Covers: basic tables, blockquoted tables, no-outer-pipes tables, two-column tables, markdown-fenced tables, multi-table responses.
  • Table holdback: Confirms Confirmed state for header+delimiter pairs, PendingHeader for lone headers, None for prose with pipes, and fence-awareness (non-markdown fences suppress holdback). Incremental scanner is checked against a stateless #[cfg(test)] full-source function for parity.
  • Resize correctness: set_width tests verify no duplicate lines after emit, no lost lines on partial drain, preserved in-flight tails for both agent and plan controllers, and correct output when resizing during an active confirmed-table stream.
  • Fence unwrapping: Tests confirm markdown fences with tables are unwrapped, non-markdown fences are preserved, markdown fences without tables stay as code blocks, and unclosed fences degrade gracefully.
  • Column width allocation: Direct unit tests assert TableColumnKind classification (including URL-like token resistance), preferred_column_floor caps, and next_column_to_shrink priority. End-to-end parity tests provide additional coverage.
  • Spillover detection: Direct unit tests for is_spillover_row cover each heuristic branch (single-cell, HTML content, label+HTML, trailing HTML label) and negative cases. End-to-end tests provide additional coverage.
  • Source-boundary resize remap: Tests verify no duplicate lines after partial emit + resize, and that un-emitted content survives width changes. Fence-unwrapping mapping invariants tested separately.
  • Interrupt behavior: Tests verify that stream events are dropped while an interrupt is pending, and that subsequent turns stream normally after an abort.

Copilot AI review requested due to automatic review settings February 22, 2026 15:08
coderabbitai bot commented Feb 22, 2026

📝 Walkthrough

Implements debounced resize/reflow and history reflow state, interrupt-aware streaming with transient tail handling, new in-flight and consolidated markdown/history cell types, a newline-gated markdown stream collector, width-aware markdown table rendering (with fence unwrapping), table-detection utilities, and a refactor of streaming controllers with table holdback logic.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Resize & Reflow Management — `codex-rs/tui/src/app.rs` | Adds debounced resize reflow state and helpers, per-frame reflow scheduling, history rendering width tracking, and hooks to run/finish reflows after consolidation or stream end. |
| App Events — `codex-rs/tui/src/app_event.rs` | Adds AppEvent::ConsolidateAgentMessage(String) and AppEvent::ConsolidateProposedPlan(String) to emit consolidated cells after streaming finalization. |
| Chat widget & interrupt flow — `codex-rs/tui/src/chatwidget.rs`, `codex-rs/tui/src/chatwidget/tests.rs` | Adds interrupt_requested_for_turn and droppable-event logic, exposes has_active_agent/plan stream helpers, width-aware tail sync on resize, consolidation emission paths, and many tests for tail, interrupt, and resize behaviors. |
| History cell types & rendering — `codex-rs/tui/src/history_cell.rs` | Introduces AgentMarkdownCell and StreamingAgentTailCell, clears draw area before active-cell render, uses width util for reasoning summary, and expands tests for markdown consolidation and tail behavior. |
| Markdown processing & streaming — `codex-rs/tui/src/markdown.rs`, `codex-rs/tui/src/markdown_stream.rs` | Adds append_markdown/append_markdown_agent with fence-unwrapping for md fences, and MarkdownStreamCollector for newline-gated delta buffering and commit/finalize semantics plus tests. |
| Markdown rendering engine — `codex-rs/tui/src/markdown_render.rs`, `codex-rs/tui/src/markdown_render_tests.rs` | Adds width-aware table rendering pipeline, TableCell/TableState, column metrics/shrink heuristics, boxed/pipe fallbacks, and many table-focused tests. |
| Table detection utilities — `codex-rs/tui/src/table_detect.rs` | New module: parse table segments, header/delimiter validators, FenceTracker, fence parsing, blockquote stripping, and unit tests used by stream/table logic. |
| Streaming controller refactor — `codex-rs/tui/src/streaming/controller.rs`, `codex-rs/tui/src/streaming/mod.rs` | Introduces StreamCore, table holdback scanner, stable-prefix caching, width-aware tail rendering; StreamController::finalize/PlanStreamController::finalize now return (Option<Box<dyn HistoryCell>>, Option<String>); replaces drain_all with clear_queue. |
| Insert & render helpers — `codex-rs/tui/src/insert_history.rs`, `codex-rs/tui/src/render/line_utils.rs`, `codex-rs/tui/src/width.rs` | Fixes per-line MoveTo behavior to avoid wrap drift, gates is_blank_line_spaces_only to test builds, and adds usable_content_width utilities with tests. |
| Pager & TUI plumbing — `codex-rs/tui/src/pager_overlay.rs`, `codex-rs/tui/src/tui.rs` | Adds TranscriptOverlay::consolidate_cells to replace ranges with a consolidated cell (remapping highlights) and Tui::clear_pending_history_lines to drop unflushed history. |
| Bottom pane / interrupts — `codex-rs/tui/src/bottom_pane/mod.rs` | Esc handling now interrupts via Op::Interrupt when no visible status is present; adds tests for hidden-status interrupt behavior. |
| Dependency & module changes — `codex-rs/Cargo.toml`, `codex-rs/tui/src/lib.rs` | Replaces tree-sitter-highlight with syntect = "5" and registers new internal modules width and table_detect. |
| App-server & thread state changes — `codex-rs/app-server/src/codex_message_processor.rs`, `codex-rs/app-server/src/thread_state.rs` | Adds logging and adjusts listener lifecycle behavior to retain listeners outside teardown; updates tests accordingly. |
| Realtime mirroring & conversation — `codex-rs/core/src/codex.rs`, `codex-rs/core/src/realtime_conversation.rs` | Adds realtime text mirroring routes and helpers to extract/route conversational event text into realtime input; adds tests for extraction from JSON items. |
| Config/theme changes — `codex-rs/core/src/config/{mod.rs,types.rs}`, `codex-rs/core/src/config/edit.rs`, `codex-rs/core/config.schema.json` | Adds tui.theme/tui_theme config, renames background_terminal_timeout → background_terminal_max_timeout, provides syntax_theme_edit helper, and tests for deserialization. |
| Built-in agent role — `codex-rs/core/src/agent/{role.rs,builtins/monitor.toml}` | Adds a new "monitor" built-in role and embedded TOML content; updates role config resolution. |
| Tools & constants — `codex-rs/core/src/tools/handlers/multi_agents.rs` | Increases MAX_WAIT_TIMEOUT_MS from 300_000 to 3_600_000. |
| Misc tests & small tweaks — various core tests (shell.rs, other tests) | Test adjustments (path handling, initialization scaffolding) and added/updated tests across TUI and core to cover new behavior. |

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant App
    participant ChatWidget
    participant StreamController
    participant MarkdownStreamCollector
    participant HistoryCell

    User->>App: Input stream delta
    App->>ChatWidget: dispatch_event_msg
    alt interrupt_requested_for_turn && droppable event
        ChatWidget-->>App: drop event
    else normal flow
        ChatWidget->>StreamController: push(delta)
        StreamController->>MarkdownStreamCollector: push_delta
        MarkdownStreamCollector->>MarkdownStreamCollector: accumulate & check newline
        alt newline found
            MarkdownStreamCollector-->>StreamController: commit_complete_source
            StreamController->>StreamController: re-render with width
            StreamController-->>ChatWidget: updated lines
        end
        ChatWidget->>HistoryCell: insert_history_cell_lines
    end

    User->>App: Trigger finalize (stream end)
    App->>ChatWidget: finalize_turn
    ChatWidget->>StreamController: finalize
    StreamController->>MarkdownStreamCollector: finalize_and_drain_source
    MarkdownStreamCollector-->>StreamController: remaining markdown source
    StreamController-->>ChatWidget: (consolidated_cell, markdown_source)
    ChatWidget->>App: AppEvent::ConsolidateAgentMessage(markdown_source)
    App->>ChatWidget: apply consolidation
    ChatWidget->>App: update history (consolidated cell)
sequenceDiagram
    participant Terminal
    participant App
    participant ChatWidget
    participant StreamController
    participant TableHoldbackScanner

    Terminal->>App: resize (new width)
    App->>ChatWidget: on_terminal_resize(width)
    ChatWidget->>StreamController: on_terminal_resize(width)
    StreamController->>StreamController: update current_stream_width

    alt stream active
        StreamController->>TableHoldbackScanner: scan for table header/delimiter
        StreamController-->>ChatWidget: request redraw
    end

    ChatWidget->>App: request_redraw
    App->>App: schedule_resize_reflow (debounce)
    loop until debounce expires
        App->>App: maybe_run_resize_reflow
        alt debounce elapsed
            App->>ChatWidget: full re-render at new width
            ChatWidget->>StreamController: re-render current content
            StreamController->>TableHoldbackScanner: detect table boundaries
            StreamController-->>ChatWidget: updated lines
        end
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐇 I nibbled at streams in the night,
I stitched table boxes just right,
I hushed flurries when reflows chime,
Consolidated tails in time,
Hop, hop — the terminal gleams bright! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title 'feat(tui): render markdown tables with Unicode box-drawing borders' accurately and concisely summarizes the main change—enabling Unicode box-drawing table rendering in the TUI, which is the primary feature delivered by this large changeset. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%. |
| Description check | ✅ Passed | The PR description is comprehensive, detailed, and covers problem statement, solution architecture, tradeoffs, and implementation details with diagrams and extensive testing information. |



greptile-apps bot commented Feb 22, 2026

Greptile Summary

Introduces Unicode box-drawing table rendering for markdown tables in agent responses, replacing raw pipe-delimited text with adaptive width-aware tables. Implements a two-region streaming model with table-aware holdback that keeps table content mutable until the stream ends, preventing flickering from non-incremental column width recalculation. Also adds comprehensive terminal resize reflow for all markdown content, fixing visual corruption and wasted space that existed on main.

Key architectural components:

  • table_detect module - Canonical pipe-table structure detection with fence tracking, shared by streaming controller and fence unwrapper
  • StreamController two-region model - Stable region (committed to scrollback) and tail region (mutable during table streaming), with incremental TableHoldbackScanner tracking Confirmed/PendingHeader/None states
  • Fence unwrapping - Strips ```md/```markdown wrappers when fence body contains tables, so pulldown-cmark sees native table syntax
  • Table rendering pipeline - Spillover filtering, column normalization, Narrative/Structured classification, iterative width shrinking, Unicode box-drawing with pipe fallback
  • Resize reflow infrastructure - Width-change detection, 75ms debounced scrollback rebuild, source-backed AgentMarkdownCell for re-rendering at any width, stream-time coordination flag

Non-goals explicitly documented: incremental table rendering, table interaction, arbitrary markdown extensions, horizontal scrolling.

Test coverage: Streaming/full-render parity, table holdback scanner unification, resize correctness (no duplicate/lost lines), fence unwrapping edge cases, column width allocation, spillover detection, source-boundary remap invariants.

Confidence Score: 5/5

  • Safe to merge - well-architected feature with comprehensive test coverage and documented tradeoffs
  • Extremely thorough implementation with exceptional documentation, extensive test coverage (streaming parity, resize correctness, fence unwrapping, holdback scanner unification), clear architectural decisions with documented tradeoffs, intentional simplification pass that removed ~700 lines, and careful handling of edge cases (blockquoted fences, spillover rows, narrow terminals)
  • No files require special attention - all changes are well-tested and follow established patterns

Important Files Changed

| Filename | Overview |
| --- | --- |
| `codex-rs/tui/src/table_detect.rs` | New canonical pipe-table detection module with fence tracking, well-tested with comprehensive edge case coverage |
| `codex-rs/tui/src/streaming/controller.rs` | Core two-region streaming implementation with table holdback, resize handling, and extensive test coverage |
| `codex-rs/tui/src/markdown.rs` | Markdown fence unwrapping with conservative table detection, properly handles blockquoted fences and edge cases |
| `codex-rs/tui/src/markdown_render.rs` | Comprehensive table rendering pipeline with Unicode box-drawing, width allocation, spillover detection, well-documented |
| `codex-rs/tui/src/app.rs` | Resize reflow infrastructure with debouncing, proper state management, and stream-time coordination |
| `codex-rs/tui/src/chatwidget.rs` | Terminal resize handling integrated with streaming controllers, proper width propagation |
| `codex-rs/tui/src/history_cell.rs` | New AgentMarkdownCell for source-backed reflow, maintains markdown source for width adaptation |
| `codex-rs/tui/src/width.rs` | New utility module for usable width calculations with proper guards for narrow terminals |
| `codex-rs/tui/src/markdown_render_tests.rs` | Expanded test coverage for table rendering, column classification, spillover detection |
| `codex-rs/tui/src/chatwidget/tests.rs` | New streaming/full-render parity tests, resize correctness tests, holdback scanner tests |

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Agent Token Delta] --> B[MarkdownStreamCollector]
    B -->|newline-gated| C[StreamController.push_delta]
    
    C --> D{Contains newline?}
    D -->|Yes| E[commit_complete_source]
    D -->|No| F[Buffer delta]
    
    E --> G[TableHoldbackScanner.push_source_chunk]
    G --> H[FenceTracker.advance]
    H --> I{Detect table?}
    
    I -->|Header+Delimiter| J[Confirmed state]
    I -->|Header only| K[PendingHeader state]
    I -->|No table| L[None state]
    
    J --> M[recompute_streaming_render]
    K --> M
    L --> M
    
    M --> N[unwrap_markdown_fences]
    N --> O[pulldown-cmark Parser]
    O --> P[MarkdownWriter]
    P --> Q{Table events?}
    
    Q -->|Yes| R[TableState accumulator]
    R --> S[render_table_lines]
    S --> T{Width check}
    T -->|Fits| U[Unicode box grid]
    T -->|Too narrow| V[Pipe fallback]
    
    Q -->|No| W[Standard rendering]
    
    U --> X[sync_stable_queue]
    V --> X
    W --> X
    
    X --> Y{Holdback state?}
    Y -->|Confirmed/Pending| Z[Hold as tail]
    Y -->|None| AA[Enqueue to stable]
    
    AA --> AB[Commit animation queue]
    AB --> AC[AgentMessageCell scrollback]
    
    Z --> AD[StreamingAgentTailCell active slot]
    
    AE[Terminal Resize] --> AF[set_width]
    AF --> AG[Re-render at new width]
    AG --> AH[Rebuild stable queue]
    AH --> AI[Schedule reflow debounced]
    
    AI --> AJ{Debounce expired?}
    AJ -->|Yes| AK[Clear scrollback]
    AK --> AL[Re-insert all cells at new width]
    
    AM[Stream finalize] --> AN[ConsolidateAgentMessage]
    AN --> AO[AgentMarkdownCell with source]

Last reviewed commit: 460c417

Copilot AI left a comment


Pull request overview

This PR upgrades the TUI transcript markdown renderer to display GitHub-flavored Markdown tables as Unicode box-drawing grids (with a pipe-table fallback for very narrow widths), while also introducing a streaming model that can hold tables in a mutable “tail” region until they’re stable. It additionally adds width-aware transcript reflow on terminal resize so both prose and tables rewrap cleanly.

Changes:

  • Add canonical pipe-table + fence detection (table_detect) and use it to implement table-aware streaming holdback and markdown-fence unwrapping for agent output.
  • Implement Unicode box-drawing table rendering in the markdown renderer (including alignment, wrapping, spillover heuristics, and pipe fallback).
  • Add resize handling / transcript reflow infrastructure and source-backed consolidated history cells (AgentMarkdownCell) to support correct re-rendering at new widths.

Reviewed changes

Copilot reviewed 20 out of 20 changed files in this pull request and generated no comments.

| File | Description |
| --- | --- |
| `codex-rs/tui/src/width.rs` | Adds shared helpers for safe "usable width" calculation after reserving prefix columns. |
| `codex-rs/tui/src/tui.rs` | Adds API to clear queued history lines (used during resize reflow). |
| `codex-rs/tui/src/table_detect.rs` | New canonical module for pipe-table structure detection + fenced-code tracking. |
| `codex-rs/tui/src/streaming/mod.rs` | Replaces drain_all with clear_queue to support resize queue rebuilds. |
| `codex-rs/tui/src/streaming/controller.rs` | Major refactor: two-region streaming with table holdback + width change handling + expanded tests. |
| `codex-rs/tui/src/markdown_stream.rs` | Collector now commits raw source chunks at newline boundaries; adds finalize-to-source API and tests. |
| `codex-rs/tui/src/markdown_render.rs` | Enables table parsing and renders tables as Unicode box grids with width allocation + fallback. |
| `codex-rs/tui/src/markdown.rs` | Adds append_markdown_agent and conservative markdown-fence unwrapping for table rendering. |
| `codex-rs/tui/src/history_cell.rs` | Adds source-backed AgentMarkdownCell and transient StreamingAgentTailCell; uses width guards. |
| `codex-rs/tui/src/app_event.rs` | Adds consolidation events for agent messages and proposed plans. |
| `codex-rs/tui/src/app.rs` | Adds resize reflow debounce + consolidation handlers + pending insert invalidation on width change. |
| `codex-rs/tui/src/pager_overlay.rs` | Adds overlay-side consolidation splice to stay in sync with main transcript. |
| `codex-rs/tui/src/insert_history.rs` | Fixes terminal line-advance drift at exact-width rows by anchoring cursor movement per line. |
| `codex-rs/tui/src/chatwidget.rs` | Integrates streaming tail cell, consolidation events, resize propagation, and interrupt delta-dropping. |
| `codex-rs/tui/src/bottom_pane/mod.rs` | Ensures Esc interrupt still emits Op::Interrupt even when the status indicator is hidden. |
| `codex-rs/tui/src/markdown_render_tests.rs` | Adds extensive table rendering and fallback tests; adjusts test coverage accordingly. |
| `codex-rs/tui/src/render/line_utils.rs` | Makes is_blank_line_spaces_only test-only. |
| `codex-rs/tui/src/lib.rs` | Wires in new width and table_detect modules. |
| `codex-rs/tui/src/snapshots/...complex_snapshot.snap` | Updates snapshot expectations to reflect boxed-table output. |


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
codex-rs/tui/src/render/line_utils.rs (1)

29-37: ⚠️ Potential issue | 🔴 Critical

Remove #[cfg(test)] attribute — function is called from production code.

This function is called from production code at codex-rs/tui/src/markdown_stream.rs:118. Adding #[cfg(test)] causes a compile error: the function no longer exists in non-test builds while non-test code still references it. The #[cfg(test)] attribute must be removed.


Comment on lines +990 to +998
/// After stream consolidation, schedule a follow-up reflow if one ran mid-stream.
fn maybe_finish_stream_reflow(&mut self, tui: &mut tui::Tui) {
    if self.reflow_ran_during_stream {
        self.schedule_resize_reflow();
        tui.frame_requester()
            .schedule_frame_in(RESIZE_REFLOW_DEBOUNCE);
    }
    self.reflow_ran_during_stream = false;
}

⚠️ Potential issue | 🟡 Minor

Ensure pending resize reflow gets a frame after stream end.
If a resize occurs during streaming and the debounce deadline elapses while streams are active, the pending reflow can sit idle until the next user input. Consider scheduling a frame when a pending reflow is already due as consolidation completes.

Suggested fix
 fn maybe_finish_stream_reflow(&mut self, tui: &mut tui::Tui) {
     if self.reflow_ran_during_stream {
         self.schedule_resize_reflow();
         tui.frame_requester()
             .schedule_frame_in(RESIZE_REFLOW_DEBOUNCE);
+    } else if self
+        .resize_reflow_pending_until
+        .is_some_and(|deadline| Instant::now() >= deadline)
+    {
+        tui.frame_requester().schedule_frame();
     }
     self.reflow_ran_during_stream = false;
 }
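The "return due_now" variant described in this comment can be sketched standalone. This is a std-only sketch; ReflowDebounce and its field are hypothetical stand-ins, not the actual app.rs types:

```rust
use std::time::{Duration, Instant};

/// Hypothetical debounce bookkeeping mirroring schedule_resize_reflow.
struct ReflowDebounce {
    pending_until: Option<Instant>,
}

impl ReflowDebounce {
    /// Arm (or re-arm) the debounce. Returns true when the previous
    /// deadline had already elapsed, i.e. the reflow is due now and the
    /// caller should request an immediate frame.
    fn schedule(&mut self, debounce: Duration) -> bool {
        let now = Instant::now();
        let due_now = self
            .pending_until
            .is_some_and(|deadline| now >= deadline);
        self.pending_until = Some(now + debounce);
        due_now
    }
}

fn main() {
    let mut d = ReflowDebounce { pending_until: None };
    // First call arms the debounce; nothing was pending yet.
    assert!(!d.schedule(Duration::from_millis(0)));
    // Zero debounce: the previous deadline has elapsed by the next call,
    // so the caller knows to schedule a frame immediately.
    assert!(d.schedule(Duration::from_millis(75)));
    println!("ok");
}
```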

Comment on lines +4114 to +4120
if !from_replay
    && self.interrupt_requested_for_turn
    && is_interrupt_droppable_stream_event(&msg)
{
    tracing::trace!("dropping stream event while interrupt is pending");
    return;
}

⚠️ Potential issue | 🟡 Minor


Consider clearing queued stream lines immediately when interrupt is requested for faster termination.

The current interrupt mechanism (dropping future deltas) doesn't immediately halt rendering of already-queued lines in StreamController. Lines already in the queue will continue draining via commit ticks until the turn fully finalizes. For more responsive interrupts, call clear_queue() when interrupt_requested_for_turn is set in submit_op(), or add a method to immediately drain and discard pending output. This trades graceful tail-off for immediate termination.
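A minimal std-only sketch of the discard-on-interrupt idea. CommitQueue and its methods are illustrative stand-ins, not the actual StreamController API:

```rust
use std::collections::VecDeque;

/// Illustrative stand-in for StreamController's commit queue.
#[derive(Default)]
struct CommitQueue {
    queued: VecDeque<String>,
}

impl CommitQueue {
    fn push(&mut self, line: String) {
        self.queued.push_back(line);
    }

    /// Normal path: one line drains per commit tick.
    fn tick(&mut self) -> Option<String> {
        self.queued.pop_front()
    }

    /// Interrupt path: discard everything still pending so no further
    /// lines render after the user requests an interrupt.
    fn clear_queue(&mut self) {
        self.queued.clear();
    }
}

fn main() {
    let mut q = CommitQueue::default();
    q.push("line 1".into());
    q.push("line 2".into());
    assert_eq!(q.tick().as_deref(), Some("line 1"));
    q.clear_queue(); // interrupt requested
    assert_eq!(q.tick(), None); // nothing left to drain
    println!("ok");
}
```

Dropping future deltas stops new input; clearing the queue is what makes already-buffered output stop rendering on the next tick.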


Comment on lines +79 to +93
// Strip a trailing newline and up to 3 leading spaces, returning the
// trimmed slice. Returns `None` when the line has 4+ leading spaces
// (which makes it an indented code line per CommonMark).
fn strip_line_indent(line: &str) -> Option<&str> {
    let without_newline = line.strip_suffix('\n').unwrap_or(line);
    let leading_ws = without_newline
        .as_bytes()
        .iter()
        .take_while(|byte| **byte == b' ')
        .count();
    if leading_ws > 3 {
        return None;
    }
    Some(&without_newline[leading_ws..])
}

⚠️ Potential issue | 🟡 Minor

Handle tab-indented fences to avoid unwrapping indented code blocks.
strip_line_indent only counts spaces, so a tab-indented code block could be misparsed as a fence and unwrapped incorrectly (CommonMark treats a tab as ≥4 spaces).

🧩 Suggested fix
 fn strip_line_indent(line: &str) -> Option<&str> {
     let without_newline = line.strip_suffix('\n').unwrap_or(line);
-    let leading_ws = without_newline
-        .as_bytes()
-        .iter()
-        .take_while(|byte| **byte == b' ')
-        .count();
-    if leading_ws > 3 {
-        return None;
-    }
-    Some(&without_newline[leading_ws..])
+    let mut col = 0usize;
+    let mut idx = 0usize;
+    for b in without_newline.as_bytes() {
+        match b {
+            b' ' => { col += 1; idx += 1; }
+            b'\t' => { col += 4 - (col % 4); idx += 1; }
+            _ => break,
+        }
+        if col > 3 {
+            return None;
+        }
+    }
+    Some(&without_newline[idx..])
 }
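The tab-aware variant can be exercised standalone. This sketch assumes the CommonMark rule that a tab advances to the next 4-column tab stop:

```rust
/// Strip a trailing newline and up to 3 columns of leading whitespace.
/// Returns None when the line is indented 4+ columns (an indented code
/// line per CommonMark); a tab advances to the next 4-column stop.
fn strip_line_indent(line: &str) -> Option<&str> {
    let without_newline = line.strip_suffix('\n').unwrap_or(line);
    let mut col = 0usize;
    let mut idx = 0usize;
    for b in without_newline.as_bytes() {
        match b {
            b' ' => { col += 1; idx += 1; }
            b'\t' => { col += 4 - (col % 4); idx += 1; }
            _ => break,
        }
        if col > 3 {
            return None;
        }
    }
    // idx only ever advances past single-byte ASCII whitespace, so the
    // slice always lands on a UTF-8 boundary.
    Some(&without_newline[idx..])
}

fn main() {
    assert_eq!(strip_line_indent("   | a |\n"), Some("| a |"));
    assert_eq!(strip_line_indent("    | a |"), None); // 4 spaces: code line
    assert_eq!(strip_line_indent("\t| a |"), None); // tab counts as >= 4 columns
    println!("ok");
}
```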

Comment on lines +29 to +47
/// Split a pipe-delimited line into trimmed segments.
///
/// Returns `None` if the line is empty or has no `|` marker.
/// Leading/trailing pipes are stripped before splitting.
pub(crate) fn parse_table_segments(line: &str) -> Option<Vec<&str>> {
    let trimmed = line.trim();
    if trimmed.is_empty() || !trimmed.contains('|') {
        return None;
    }

    let content = trimmed.strip_prefix('|').unwrap_or(trimmed);
    let content = content.strip_suffix('|').unwrap_or(content);

    let segments: Vec<&str> = split_unescaped_pipe(content)
        .iter()
        .map(|s| s.trim())
        .collect();
    (!segments.is_empty()).then_some(segments)
}

⚠️ Potential issue | 🟡 Minor

Avoid treating escaped-only pipes as table candidates.
parse_table_segments gates on trimmed.contains('|'), so lines with only escaped pipes (e.g., foo \| bar) are treated as table headers, which can delay streaming unnecessarily. Consider checking for unescaped separators (or explicit outer pipes) before returning Some.

💡 Suggested fix
 pub(crate) fn parse_table_segments(line: &str) -> Option<Vec<&str>> {
     let trimmed = line.trim();
-    if trimmed.is_empty() || !trimmed.contains('|') {
+    if trimmed.is_empty() {
         return None;
     }
 
+    let has_outer_pipe = trimmed.starts_with('|') || trimmed.ends_with('|');
     let content = trimmed.strip_prefix('|').unwrap_or(trimmed);
     let content = content.strip_suffix('|').unwrap_or(content);
+    if !has_outer_pipe && !has_unescaped_pipe(content) {
+        return None;
+    }
 
     let segments: Vec<&str> = split_unescaped_pipe(content)
         .iter()
         .map(|s| s.trim())
         .collect();
     (!segments.is_empty()).then_some(segments)
 }
+
+fn has_unescaped_pipe(content: &str) -> bool {
+    let bytes = content.as_bytes();
+    let mut i = 0;
+    while i < bytes.len() {
+        if bytes[i] == b'\\' {
+            i += 2;
+        } else if bytes[i] == b'|' {
+            return true;
+        } else {
+            i += 1;
+        }
+    }
+    false
+}
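The unescaped-pipe scan proposed in this suggestion is easy to verify in isolation. A self-contained sketch of the helper with the cases from the comment:

```rust
/// Scan for a '|' that is not escaped by a preceding backslash.
fn has_unescaped_pipe(content: &str) -> bool {
    let bytes = content.as_bytes();
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b'\\' {
            i += 2; // skip the escaped character
        } else if bytes[i] == b'|' {
            return true;
        } else {
            i += 1;
        }
    }
    false
}

fn main() {
    assert!(has_unescaped_pipe("a | b"));
    // Escaped-only pipes should not trigger table holdback.
    assert!(!has_unescaped_pipe(r"foo \| bar"));
    // A later unescaped pipe still counts.
    assert!(has_unescaped_pipe(r"foo \| bar | baz"));
    println!("ok");
}
```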

@fcoury
Owner Author

fcoury commented Feb 22, 2026

@codex review

@fcoury
Owner Author

fcoury commented Feb 22, 2026

@copilot review


Copilot AI commented Feb 22, 2026

@fcoury I've opened a new pull request, #11, to work on those changes. Once the pull request is ready, I'll request review from you.

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 460c417068


Add proper table rendering to the markdown renderer using Unicode
box-drawing characters (┌─┬┐│├─┼┤└─┴┘) instead of raw pipe syntax.
Supports column alignment, cell wrapping at narrow widths, tables
inside blockquotes, escaped pipes, and styled inline content.
Wire the Unicode box-drawing table renderer into the live streaming
path with holdback-aware commit logic, post-stream cell consolidation,
and debounced resize reflow.

Key additions:
- StreamCore: shared bookkeeping deduplicating Stream/PlanStreamController
- Table holdback: fence-aware state machine keeps buffer mutable during table detection
- AgentMarkdownCell: stores raw markdown source, re-renders at any width on demand
- ConsolidateAgentMessage: backward-walk replaces streamed cells with single consolidated cell
- Resize reflow: debounced (75ms) re-render of all transcript cells after terminal resize
Add module, type, and function-level doc comments across the table
rendering change. Fix typos ("taht", "mirros", "stabe", "durin").
Expand streaming controller module docs to cover holdback and resize
handling. Document width allocation, spillover detection, consolidation,
and prewrapped line rendering contracts. No runtime behavior changes
in documentation edits; also includes extracted draw helpers in app.rs
and a fence-detection fix in markdown.rs.
Add module-level, type-level, and function-level documentation across
the table rendering pipeline: table_detect, width, markdown,
markdown_render, and streaming/controller. Separate spillover lines
from table grid output via RenderedTableLines so spillover prose
routes through normal word wrapping instead of bypassing it.
Address reviewer concerns about self-referential test coverage:

- Column-metrics: 6 tests for TableColumnKind classification,
  preferred_column_floor caps, and next_column_to_shrink priority.
- Spillover detection: 6 tests for is_spillover_row covering each
  heuristic branch and negative cases.
- tick_batch(0) now returns empty instead of silently draining one line
- has_live_tail avoids Vec allocation via field comparison
- Convert 15 sync-only controller tests from tokio::test to #[test]
…examples

Strip blockquote prefixes during table detection only when the fence
itself is inside a blockquote, preventing false unwrapping of fences
that contain blockquoted table examples as illustration content.
Remove accumulated duplication and dead code across the md-table
branch: extract shared helpers (trailing_run_start<T>, strip_line_indent,
compute_target_stable_len, reset_history_emission_state,
maybe_finish_stream_reflow, is_blockquote_active, active_cell_is_stream_tail),
collapse redundant conditionals (normalize_row, FenceTracker Option state,
merged match arms), delete duplicate methods and tests, and fix stale
doc comments. No behavioral changes; -86 net lines.
…line

Replace clone with mem::take in finalize(), return Cow from
unwrap_markdown_fences for zero-copy on fence-free messages, use
index-based iteration in reflow loop, add #[inline] on ~20 hot-path
functions, add with_capacity hints on hot Vecs/Strings, box large
ActiveFence variant, and eliminate minor allocations (into_iter,
Span::from for literals, write!-based plain_text, redundant .max(1)).
Improve resize reflow scheduling so due reflows request an immediate frame
instead of waiting for unrelated redraws, and clear queued stream output on
`Op::Interrupt` so stale lines stop rendering immediately.

Harden markdown/table parsing by treating tab indentation as 4-column
whitespace in `strip_line_indent` and ignoring escaped-only pipes in
`parse_table_segments` unless explicit outer pipes are present.
Disable table holdback for `PlanStreamController` by adding a
per-stream `TableHoldbackMode` in `StreamCore`. Agent streams keep
holdback enabled, while plan streams now enqueue table lines
incrementally.

Also treat plan live tail as active in `stream_controllers_idle()` and
add regression coverage in `codex-rs/tui/src/streaming/controller.rs`
and `codex-rs/tui/src/chatwidget/tests.rs` to prevent premature
`StopCommitAnimation` and preserve streaming behavior.
Adjust table column classification in `markdown_render.rs` so token-heavy,
low-word-density columns are treated as Structured instead of Narrative.

This keeps URL-heavy link columns from being first in the shrink order,
reducing rare but severe narrow-column wrapping in rendered tables.
- Use codex_protocol::protocol instead of codex_core::protocol (made
  private upstream)
- Add missing word_wrap_lines import in history_cell tests
- Update column classification test for URL-like token heuristic
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
codex-rs/core/config.schema.json (1)

1517-1522: ⚠️ Potential issue | 🟠 Major

Add backward compatibility handling for renamed config field.

The field background_terminal_timeout was renamed to background_terminal_max_timeout in the ConfigToml struct (codex-rs/core/src/config/mod.rs, line 1079), but no #[serde(alias = "background_terminal_timeout")] attribute exists. Users with existing configurations referencing the old field name will encounter deserialization failures. Either add the alias attribute to the field or document a migration path in release notes.

codex-rs/core/src/codex.rs (1)

2218-2248: ⚠️ Potential issue | 🟠 Major

Decouple realtime mirroring from send_event to avoid blocking on backpressured channels.
text_in uses .send().await on a bounded channel, which will block if the text queue fills up—unlike audio_in, which uses try_send() to avoid blocking on backpressure. This blocking behavior can delay event emission on the hot path. Spawn the mirroring in a separate task to keep send_event non-blocking.

Suggested refactoring
-        if let Err(err) = self.conversation.text_in(text).await {
-            debug!("failed to mirror event text to realtime conversation: {err}");
-        }
+        let conversation = Arc::clone(&self.conversation);
+        tokio::spawn(async move {
+            if let Err(err) = conversation.text_in(text).await {
+                debug!("failed to mirror event text to realtime conversation: {err}");
+            }
+        });
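A std-only illustration of why try_send is preferred on the hot path: on a full bounded channel it reports backpressure instead of blocking the sender. Tokio's bounded mpsc behaves analogously; this uses std::sync::mpsc::sync_channel to stay self-contained:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Bounded channel with capacity 1, standing in for the realtime
    // text queue.
    let (tx, _rx) = sync_channel::<String>(1);
    tx.try_send("first".into()).expect("capacity available");
    // The queue is now full: try_send returns Err(Full) immediately,
    // whereas send() would block the caller until a receiver drains
    // the queue. This is the same distinction between audio_in's
    // try_send and text_in's blocking .send().await.
    match tx.try_send("second".into()) {
        Err(TrySendError::Full(_)) => println!("backpressured"),
        other => panic!("unexpected: {other:?}"),
    }
}
```

Spawning the mirroring in a separate task gives the same effect for the awaited text_in call: the hot path never waits on the queue.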
🧹 Nitpick comments (2)
codex-rs/core/src/realtime_conversation.rs (1)

384-443: Tests cover the main cases well.

Consider adding a test for the edge case where "content" is missing entirely:

Optional: additional edge case test
#[test]
fn returns_none_when_content_missing() {
    let item = json!({
        "type": "message",
        "role": "assistant",
    });
    assert_eq!(realtime_text_from_conversation_item(&item), None);
}
codex-rs/core/src/codex.rs (1)

5484-5532: Cap mirrored patch output/file list size to protect the realtime channel.
Patch stdout/stderr and file lists can be very large; mirroring them verbatim can overwhelm the realtime channel/UI. Consider truncation with a clear indicator.

✂️ Proposed guardrails
 fn realtime_text_for_event(msg: &EventMsg) -> Option<String> {
+    const REALTIME_MIRROR_MAX_CHARS: usize = 4_000;
+    const REALTIME_MIRROR_MAX_FILES: usize = 50;
     match msg {
         EventMsg::AgentMessage(event) => Some(event.message.clone()),
         EventMsg::ItemCompleted(event) => match &event.item {
             TurnItem::AgentMessage(item) => Some(agent_message_text(item)),
             _ => None,
         },
@@
         EventMsg::PatchApplyBegin(event) => {
             let mut files: Vec<String> = event
                 .changes
                 .keys()
                 .map(|path| path.display().to_string())
                 .collect();
             files.sort();
+            let truncated = files.len() > REALTIME_MIRROR_MAX_FILES;
+            let extra = files.len().saturating_sub(REALTIME_MIRROR_MAX_FILES);
+            if truncated {
+                files.truncate(REALTIME_MIRROR_MAX_FILES);
+            }
             let file_list = if files.is_empty() {
                 "none".to_string()
             } else {
                 files.join(", ")
             };
+            let file_list = if truncated {
+                format!("{file_list} …(+{extra} more)")
+            } else {
+                file_list
+            };
             Some(format!(
                 "apply_patch started ({count} file change(s))\nFiles: {file_list}",
                 count = files.len()
             ))
         }
         EventMsg::PatchApplyEnd(event) => {
@@
             if !event.stderr.is_empty() {
                 text.push_str(&format!("\nstderr:\n{}", event.stderr));
             }
+            if text.len() > REALTIME_MIRROR_MAX_CHARS {
+                let mut end = REALTIME_MIRROR_MAX_CHARS.min(text.len());
+                while !text.is_char_boundary(end) {
+                    end = end.saturating_sub(1);
+                }
+                text.truncate(end);
+                text.push_str("\n…(truncated)");
+            }
             Some(text)
         }
         _ => None,
     }
 }
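The char-boundary-safe truncation used in the guardrails above can be lifted into a small standalone helper (function name and limit are illustrative; note the limit is in bytes, so the backward walk is needed to avoid splitting a multi-byte character):

```rust
/// Truncate to at most max_bytes without splitting a UTF-8 character,
/// appending a marker when anything was dropped.
fn truncate_with_marker(text: &mut String, max_bytes: usize) {
    if text.len() <= max_bytes {
        return;
    }
    let mut end = max_bytes;
    // Walk back to a char boundary; index 0 is always a boundary,
    // so this loop terminates.
    while !text.is_char_boundary(end) {
        end -= 1;
    }
    text.truncate(end);
    text.push_str("\n…(truncated)");
}

fn main() {
    let mut s = "héllo wörld".to_string();
    // Byte 2 falls inside the two-byte 'é', so we back up to byte 1.
    truncate_with_marker(&mut s, 2);
    assert_eq!(s, "h\n…(truncated)");

    let mut short = "ok".to_string();
    truncate_with_marker(&mut short, 10); // under the limit: untouched
    assert_eq!(short, "ok");
    println!("ok");
}
```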
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@codex-rs/Cargo.toml`:
- Line 279: The dependency switch to syntect ("syntect = \"5\"") is applied but
brings two acknowledged transitive security advisories (RUSTSEC-2024-0320 and
RUSTSEC-2025-0141); update the project governance around this by (1) pinning
syntect to the intended tested version (e.g., 5.3.0) in Cargo.toml so upgrades
are deliberate, (2) ensure deny.toml contains explicit exceptions for those two
RUSTSEC IDs with a brief justification and an expiration or review date, and (3)
add a short comment next to the "syntect = \"5\"" entry referencing the
deny.toml exceptions and an action item to re-evaluate replacing syntect if
transitive deps remain unmaintained.
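Concretely, the deny.toml exceptions could take roughly this shape; the reasons and review dates are placeholders, and the inline `reason` field assumes a cargo-deny version recent enough to accept table entries in `ignore` (older versions take plain ID strings):

```toml
# deny.toml — sketch of explicit advisory exceptions for the syntect tree
[advisories]
ignore = [
    # Transitive deps of syntect; re-evaluate replacing syntect if still unmaintained.
    { id = "RUSTSEC-2024-0320", reason = "transitive dep of syntect; review by 2026-Q2" },
    { id = "RUSTSEC-2025-0141", reason = "transitive dep of syntect; review by 2026-Q2" },
]
```

Pairing this with `syntect = "5.3.0"` in Cargo.toml plus a comment pointing at these entries would cover points (1) and (3).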

In `@codex-rs/core/src/agent/builtins/monitor.toml`:
- Around line 3-24: In the developer_instructions string in monitor.toml fix two
grammar issues: change "You're name is Superman." to "Your name is Superman."
and reword "If you need multiple wait, increase the timeouts/yield times
exponentially." to "If you need to wait multiple times, increase the
timeouts/yield times exponentially." Ensure these edits are made in the
developer_instructions block so the user-facing instructions read clearly.

In `@codex-rs/core/src/codex.rs`:
- Around line 1522-1535: The route_realtime_text_input function should ignore
empty or all-whitespace transcripts to avoid creating no-op user turns: in
route_realtime_text_input, trim the incoming text and if it's empty return early
instead of calling handlers::user_input_or_turn with Op::UserInput; keep using
self.next_internal_sub_id() and the existing Op::UserInput construction only
when the trimmed text is non-empty.
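The early-return guard being requested can be sketched as follows; `submit_user_turn` is a hypothetical stand-in for the real `handlers::user_input_or_turn` / `Op::UserInput` path, used here only to make the shape runnable:

```rust
// Minimal sketch of the requested guard: blank transcripts never
// become user turns.
fn route_realtime_text_input(text: &str) -> Option<String> {
    let trimmed = text.trim();
    if trimmed.is_empty() {
        // Ignore empty or all-whitespace transcripts entirely.
        return None;
    }
    Some(submit_user_turn(trimmed))
}

// Hypothetical placeholder for the Op::UserInput submission path.
fn submit_user_turn(text: &str) -> String {
    format!("user turn: {text}")
}

fn main() {
    assert_eq!(route_realtime_text_input("  \n "), None);
    println!("{:?}", route_realtime_text_input(" hello "));
}
```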

---

Outside diff comments:
In `@codex-rs/core/config.schema.json`:
- Around line 1517-1522: Config deserialization will break for users who still
have the old key background_terminal_timeout because the ConfigToml struct
renamed it to background_terminal_max_timeout; update the ConfigToml definition
in codex-rs/core/src/config/mod.rs to accept the old name by adding a serde
alias for background_terminal_timeout on the background_terminal_max_timeout
field (e.g., add #[serde(alias = "background_terminal_timeout")]) so both keys
deserialize, or alternatively implement a custom deserialize fallback that maps
the old key to background_terminal_max_timeout during deserialization.
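The alias approach would look roughly like this; the field's type is assumed here, so check it against the actual ConfigToml definition:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ConfigToml {
    // Accept the pre-rename key so existing config files keep deserializing.
    #[serde(alias = "background_terminal_timeout")]
    background_terminal_max_timeout: Option<u64>,
}
```

With the alias in place, both `background_terminal_timeout = 30` and `background_terminal_max_timeout = 30` populate the same field.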

In `@codex-rs/core/src/codex.rs`:
- Around line 2218-2248: send_event currently calls
maybe_mirror_event_text_to_realtime(&EventMsg) and awaits it, which can block on
conversation.text_in(). Spawn a background task instead so mirroring doesn’t
block the hot path: add an owned async helper (e.g.,
maybe_mirror_event_text_to_realtime_owned(self: Arc<Self>, msg: EventMsg) or
make the existing method take EventMsg by value) that performs
realtime_text_for_event + conversation.running_state().await +
conversation.text_in(text).await and logs errors, then in send_event call
tokio::spawn cloning/Arc-cloning self and passing the owned EventMsg (or cloned
msg) into the helper without .await; ensure the helper handles None text and
logs any send errors so the spawned task is self-contained.
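A sketch of the fire-and-forget shape being suggested, assuming `self` is behind an `Arc` and the surrounding method names match the comment (they may not match the code exactly):

```rust
// Inside send_event: don't await mirroring on the hot path.
let sess = Arc::clone(&self);
let owned_msg = msg.clone();
tokio::spawn(async move {
    // The spawned task is self-contained: it handles None text and
    // logs send errors instead of propagating them.
    if let Some(text) = realtime_text_for_event(&owned_msg) {
        if let Err(e) = sess.conversation.text_in(text).await {
            tracing::warn!("realtime mirror send failed: {e}");
        }
    }
});
```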

---

Nitpick comments:
In `@codex-rs/core/src/codex.rs`:
- Around line 5484-5532: The mirrored patch output and file list in
realtime_text_for_event can be unbounded and should be truncated: in the
branches handling EventMsg::PatchApplyBegin (the local variable files and
file_list) and EventMsg::PatchApplyEnd (event.stdout and event.stderr), enforce
a reasonable character/byte limit (e.g., 1000–2000 chars) and replace any excess
with a clear indicator like "…(truncated)"; update the formatted messages to
include the truncation marker and keep the count of total files (files.len())
unchanged so consumers know more was omitted, ensuring you perform truncation
before building the final strings in realtime_text_for_event.

In `@codex-rs/core/src/realtime_conversation.rs`:
- Around line 384-443: Add a unit test that covers the edge case where the
"content" field is absent: inside the existing tests module (where
realtime_text_from_conversation_item is used) add a test function (e.g.,
returns_none_when_content_missing) that constructs a JSON object with "type":
"message" and no "content" (optionally include "role") and asserts that
realtime_text_from_conversation_item(&item) returns None; place it alongside the
other tests so it runs with the current suite.
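The requested test could look something like this, with the exact function path taken from the existing tests module:

```rust
#[test]
fn returns_none_when_content_missing() {
    // "content" is absent entirely, not just empty.
    let item = serde_json::json!({ "type": "message", "role": "assistant" });
    assert_eq!(realtime_text_from_conversation_item(&item), None);
}
```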

@fcoury
Owner Author

fcoury commented Feb 23, 2026

Addressing the outside-diff + nitpick items from this review:

  • background_terminal_timeout compatibility: added #[serde(alias = "background_terminal_timeout")] on background_terminal_max_timeout in core/src/config/mod.rs (commit 589850724).
  • Realtime mirroring hot path: send_event no longer awaits mirroring; mirroring now runs in a spawned task to avoid backpressure blocking event emission (commit 589850724).
  • Mirrored patch payload guardrails: added truncation caps for patch file lists and patch output text in realtime_text_for_event with explicit truncation markers (commit 589850724).
  • Extra test coverage: added returns_none_when_content_missing in core/src/realtime_conversation.rs (commit 8fc937f97).
