feat(tui): render markdown tables with Unicode box-drawing borders #10
Conversation
📝 Walkthrough

Implements debounced resize/reflow and history reflow state, interrupt-aware streaming with transient tail handling, new in-flight and consolidated markdown/history cell types, a newline-gated markdown stream collector, width-aware markdown table rendering (with fence unwrapping), table-detection utilities, and a refactor of streaming controllers with table holdback logic.
Sequence Diagram(s)

sequenceDiagram
actor User
participant App
participant ChatWidget
participant StreamController
participant MarkdownStreamCollector
participant HistoryCell
User->>App: Input stream delta
App->>ChatWidget: dispatch_event_msg
alt interrupt_requested_for_turn && droppable event
ChatWidget-->>App: drop event
else normal flow
ChatWidget->>StreamController: push(delta)
StreamController->>MarkdownStreamCollector: push_delta
MarkdownStreamCollector->>MarkdownStreamCollector: accumulate & check newline
alt newline found
MarkdownStreamCollector-->>StreamController: commit_complete_source
StreamController->>StreamController: re-render with width
StreamController-->>ChatWidget: updated lines
end
ChatWidget->>HistoryCell: insert_history_cell_lines
end
User->>App: Trigger finalize (stream end)
App->>ChatWidget: finalize_turn
ChatWidget->>StreamController: finalize
StreamController->>MarkdownStreamCollector: finalize_and_drain_source
MarkdownStreamCollector-->>StreamController: remaining markdown source
StreamController-->>ChatWidget: (consolidated_cell, markdown_source)
ChatWidget->>App: AppEvent::ConsolidateAgentMessage(markdown_source)
App->>ChatWidget: apply consolidation
ChatWidget->>App: update history (consolidated cell)
sequenceDiagram
participant Terminal
participant App
participant ChatWidget
participant StreamController
participant TableHoldbackScanner
Terminal->>App: resize (new width)
App->>ChatWidget: on_terminal_resize(width)
ChatWidget->>StreamController: on_terminal_resize(width)
StreamController->>StreamController: update current_stream_width
alt stream active
StreamController->>TableHoldbackScanner: scan for table header/delimiter
StreamController-->>ChatWidget: request redraw
end
ChatWidget->>App: request_redraw
App->>App: schedule_resize_reflow (debounce)
loop until debounce expires
App->>App: maybe_run_resize_reflow
alt debounce elapsed
App->>ChatWidget: full re-render at new width
ChatWidget->>StreamController: re-render current content
StreamController->>TableHoldbackScanner: detect table boundaries
StreamController-->>ChatWidget: updated lines
end
end
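The newline-gated commit step in the first diagram above can be sketched as follows. This `Collector` is a hypothetical simplification for illustration, not the PR's actual `MarkdownStreamCollector` API: deltas accumulate in a buffer, only the portion up to the last newline is committed downstream, and the rest waits until either more input or finalize.

```rust
/// Minimal sketch of a newline-gated stream collector (illustrative only).
struct Collector {
    buffer: String,
}

impl Collector {
    fn new() -> Self {
        Collector { buffer: String::new() }
    }

    /// Append a delta and return any complete (newline-terminated) source,
    /// leaving the partial trailing line buffered for the next push.
    fn push_delta(&mut self, delta: &str) -> Option<String> {
        self.buffer.push_str(delta);
        // Commit only up to (and including) the last newline seen so far.
        let cut = self.buffer.rfind('\n')? + 1;
        let committed: String = self.buffer.drain(..cut).collect();
        Some(committed)
    }

    /// Drain whatever remains at stream end, complete line or not.
    fn finalize(&mut self) -> String {
        std::mem::take(&mut self.buffer)
    }
}
```

The gating means a renderer downstream only ever sees whole markdown lines mid-stream, so partially received constructs (like a half-typed table row) never reach layout.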
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 3 passed
Greptile Summary

Introduces Unicode box-drawing table rendering for markdown tables in agent responses, replacing raw pipe-delimited text with adaptive width-aware tables. Implements a two-region streaming model with table-aware holdback that keeps table content mutable until the stream ends, preventing flickering from non-incremental column width recalculation. Also adds comprehensive terminal resize reflow for all markdown content, fixing visual corruption and wasted space that existed previously.
Non-goals explicitly documented: incremental table rendering, table interaction, arbitrary markdown extensions, horizontal scrolling.
Test coverage: streaming/full-render parity, table holdback scanner unification, resize correctness (no duplicate/lost lines), fence unwrapping edge cases, column width allocation, spillover detection, source-boundary remap invariants.
Confidence Score: 5/5
Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
A[Agent Token Delta] --> B[MarkdownStreamCollector]
B -->|newline-gated| C[StreamController.push_delta]
C --> D{Contains newline?}
D -->|Yes| E[commit_complete_source]
D -->|No| F[Buffer delta]
E --> G[TableHoldbackScanner.push_source_chunk]
G --> H[FenceTracker.advance]
H --> I{Detect table?}
I -->|Header+Delimiter| J[Confirmed state]
I -->|Header only| K[PendingHeader state]
I -->|No table| L[None state]
J --> M[recompute_streaming_render]
K --> M
L --> M
M --> N[unwrap_markdown_fences]
N --> O[pulldown-cmark Parser]
O --> P[MarkdownWriter]
P --> Q{Table events?}
Q -->|Yes| R[TableState accumulator]
R --> S[render_table_lines]
S --> T{Width check}
T -->|Fits| U[Unicode box grid]
T -->|Too narrow| V[Pipe fallback]
Q -->|No| W[Standard rendering]
U --> X[sync_stable_queue]
V --> X
W --> X
X --> Y{Holdback state?}
Y -->|Confirmed/Pending| Z[Hold as tail]
Y -->|None| AA[Enqueue to stable]
AA --> AB[Commit animation queue]
AB --> AC[AgentMessageCell scrollback]
Z --> AD[StreamingAgentTailCell active slot]
AE[Terminal Resize] --> AF[set_width]
AF --> AG[Re-render at new width]
AG --> AH[Rebuild stable queue]
AH --> AI[Schedule reflow debounced]
AI --> AJ{Debounce expired?}
AJ -->|Yes| AK[Clear scrollback]
AK --> AL[Re-insert all cells at new width]
AM[Stream finalize] --> AN[ConsolidateAgentMessage]
AN --> AO[AgentMarkdownCell with source]
Last reviewed commit: 460c417
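The holdback states in the flowchart (None, PendingHeader, Confirmed) can be sketched as a small state machine. This is a hypothetical simplification of the PR's `TableHoldbackScanner`, with naive row predicates; the real scanner is also fence-aware:

```rust
/// Illustrative holdback states: a header-looking pipe row is only
/// Pending until a delimiter row (| --- | :-: |) confirms a table.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Holdback {
    None,
    PendingHeader,
    Confirmed,
}

/// Naive check: row starts and ends with an outer pipe.
fn is_pipe_row(line: &str) -> bool {
    let t = line.trim();
    t.starts_with('|') && t.ends_with('|')
}

/// Naive delimiter check: pipe row made only of `|`, `-`, `:`, spaces.
fn is_delimiter_row(line: &str) -> bool {
    let t = line.trim();
    is_pipe_row(t)
        && t.chars().all(|c| matches!(c, '|' | '-' | ':' | ' '))
        && t.contains('-')
}

fn advance(state: Holdback, line: &str) -> Holdback {
    match state {
        Holdback::None if is_pipe_row(line) => Holdback::PendingHeader,
        Holdback::PendingHeader if is_delimiter_row(line) => Holdback::Confirmed,
        Holdback::PendingHeader if is_pipe_row(line) => Holdback::PendingHeader,
        // A non-table line after a lone header cancels the pending state.
        Holdback::PendingHeader => Holdback::None,
        Holdback::Confirmed => Holdback::Confirmed,
        s => s,
    }
}
```

While the state is PendingHeader or Confirmed, lines stay in the mutable tail instead of the stable queue, matching the "Hold as tail" branch in the flowchart.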
Pull request overview
This PR upgrades the TUI transcript markdown renderer to display GitHub-flavored Markdown tables as Unicode box-drawing grids (with a pipe-table fallback for very narrow widths), while also introducing a streaming model that can hold tables in a mutable “tail” region until they’re stable. It additionally adds width-aware transcript reflow on terminal resize so both prose and tables rewrap cleanly.
Changes:
- Add canonical pipe-table + fence detection (table_detect) and use it to implement table-aware streaming holdback and markdown-fence unwrapping for agent output.
- Implement Unicode box-drawing table rendering in the markdown renderer (including alignment, wrapping, spillover heuristics, and pipe fallback).
- Add resize handling / transcript reflow infrastructure and source-backed consolidated history cells (AgentMarkdownCell) to support correct re-rendering at new widths.
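As an illustration of the border style involved, given precomputed column widths the frame rows can be assembled from the box-drawing set ┌─┬┐ ├─┼┤ └─┴┘. This is a hedged sketch, not the PR's renderer, which additionally handles alignment, cell wrapping, and a pipe fallback at narrow widths:

```rust
/// Build one border row of a box-drawn table from column content widths.
/// `left`/`mid`/`right` select which rule this is (top, separator, bottom).
fn border_row(widths: &[usize], left: char, mid: char, right: char) -> String {
    let mut row = String::new();
    row.push(left);
    for (i, w) in widths.iter().enumerate() {
        if i > 0 {
            row.push(mid);
        }
        // Each cell is padded with one space on either side of its content.
        row.extend(std::iter::repeat('─').take(w + 2));
    }
    row.push(right);
    row
}
```

Calling it three times with `('┌','┬','┐')`, `('├','┼','┤')`, and `('└','┴','┘')` yields the top, header-separator, and bottom rules for a fixed column layout.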
Reviewed changes
Copilot reviewed 20 out of 20 changed files in this pull request and generated no comments.
Summary per file:
| File | Description |
|---|---|
| codex-rs/tui/src/width.rs | Adds shared helpers for safe “usable width” calculation after reserving prefix columns. |
| codex-rs/tui/src/tui.rs | Adds API to clear queued history lines (used during resize reflow). |
| codex-rs/tui/src/table_detect.rs | New canonical module for pipe-table structure detection + fenced-code tracking. |
| codex-rs/tui/src/streaming/mod.rs | Replaces drain_all with clear_queue to support resize queue rebuilds. |
| codex-rs/tui/src/streaming/controller.rs | Major refactor: two-region streaming with table holdback + width change handling + expanded tests. |
| codex-rs/tui/src/markdown_stream.rs | Collector now commits raw source chunks at newline boundaries; adds finalize-to-source API and tests. |
| codex-rs/tui/src/markdown_render.rs | Enables table parsing and renders tables as Unicode box grids with width allocation + fallback. |
| codex-rs/tui/src/markdown.rs | Adds append_markdown_agent and conservative markdown-fence unwrapping for table rendering. |
| codex-rs/tui/src/history_cell.rs | Adds source-backed AgentMarkdownCell and transient StreamingAgentTailCell; uses width guards. |
| codex-rs/tui/src/app_event.rs | Adds consolidation events for agent messages and proposed plans. |
| codex-rs/tui/src/app.rs | Adds resize reflow debounce + consolidation handlers + pending insert invalidation on width change. |
| codex-rs/tui/src/pager_overlay.rs | Adds overlay-side consolidation splice to stay in sync with main transcript. |
| codex-rs/tui/src/insert_history.rs | Fixes terminal line-advance drift at exact-width rows by anchoring cursor movement per line. |
| codex-rs/tui/src/chatwidget.rs | Integrates streaming tail cell, consolidation events, resize propagation, and interrupt delta-dropping. |
| codex-rs/tui/src/bottom_pane/mod.rs | Ensures Esc interrupt still emits Op::Interrupt even when the status indicator is hidden. |
| codex-rs/tui/src/markdown_render_tests.rs | Adds extensive table rendering and fallback tests; adjusts test coverage accordingly. |
| codex-rs/tui/src/render/line_utils.rs | Makes is_blank_line_spaces_only test-only. |
| codex-rs/tui/src/lib.rs | Wires in new width and table_detect modules. |
| codex-rs/tui/src/snapshots/...complex_snapshot.snap | Updates snapshot expectations to reflect boxed-table output. |
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
codex-rs/tui/src/render/line_utils.rs (1)
29-37: ⚠️ Potential issue | 🔴 Critical

Remove #[cfg(test)] attribute — function is called from production code.

This function is used in production code at codex-rs/tui/src/markdown_stream.rs:118. Adding #[cfg(test)] will cause a compile error since the function will not exist in release builds while non-test code attempts to call it. The #[cfg(test)] attribute must be removed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@codex-rs/tui/src/render/line_utils.rs` around lines 29 - 37, Remove the #[cfg(test)] attribute from the function is_blank_line_spaces_only so it exists in non-test builds (the function is declared pub fn is_blank_line_spaces_only(line: &Line<'_>) -> bool); simply delete the cfg attribute line above that function so production callers (e.g., markdown_stream.rs) can link to it, then rebuild to verify compilation succeeds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@codex-rs/tui/src/app.rs`:
- Around line 990-998: The current maybe_finish_stream_reflow calls
schedule_resize_reflow() but doesn't ensure a frame is requested when that
reflow is already due; change schedule_resize_reflow to return a bool (e.g.,
due_now) indicating the reflow deadline has already elapsed and the reflow
should run immediately, then in maybe_finish_stream_reflow call
tui.frame_requester().schedule_frame() (or schedule_frame_in(Duration::ZERO))
when that bool is true instead of always scheduling
schedule_frame_in(RESIZE_REFLOW_DEBOUNCE); update any callers of
schedule_resize_reflow to handle the new return value.
In `@codex-rs/tui/src/chatwidget.rs`:
- Around line 4114-4120: When an interrupt is requested the current code only
drops future stream deltas but allows already-queued lines in StreamController
to continue draining; update submit_op() so that when
interrupt_requested_for_turn is true (and not from_replay) you also clear
pending queued output to stop rendering immediately: detect the same condition
used in the if (from_replay / interrupt_requested_for_turn /
is_interrupt_droppable_stream_event(&msg)) block and call
StreamController::clear_queue() (or add and call a new method on
StreamController that immediately discards queued lines) to drain/discard
pending output instead of waiting for commit ticks.
In `@codex-rs/tui/src/markdown.rs`:
- Around line 79-93: strip_line_indent currently only counts spaces and
misinterprets leading tabs (which CommonMark treats as ≥4 spaces); update
strip_line_indent to iterate the line by bytes, tracking a byte index and a
visual column count where a space increments column by 1 and a tab increments
column by 4, stop when a non-space/tab is seen, return None if column >= 4,
otherwise return the slice starting at the byte index after consumed whitespace
(preserving the existing newline stripping logic); ensure you use the tracked
byte index for slicing to avoid UTF-8 boundary issues and keep the function name
strip_line_indent unchanged.
In `@codex-rs/tui/src/table_detect.rs`:
- Around line 29-47: parse_table_segments currently treats any line containing
'|' (even only escaped pipes) as a table candidate; change the guard so it only
returns Some when there's at least one unescaped separator or an explicit outer
pipe. In parse_table_segments, replace the trimmed.contains('|') check with
logic that either (a) scans for an unescaped '|' (i.e., a '|' not preceded by a
backslash) or (b) uses split_unescaped_pipe(content) and ensures it would
produce more than one segment before proceeding; keep the existing outer-strip
logic (strip_prefix/strip_suffix) so lines with real leading/trailing pipes
still count. Ensure the final then_some(segments) only happens when the
unescaped-separator check passes.
---
Outside diff comments:
In `@codex-rs/tui/src/render/line_utils.rs`:
- Around line 29-37: Remove the #[cfg(test)] attribute from the function
is_blank_line_spaces_only so it exists in non-test builds (the function is
declared pub fn is_blank_line_spaces_only(line: &Line<'_>) -> bool); simply
delete the cfg attribute line above that function so production callers (e.g.,
markdown_stream.rs) can link to it, then rebuild to verify compilation succeeds.
/// After stream consolidation, schedule a follow-up reflow if one ran mid-stream.
fn maybe_finish_stream_reflow(&mut self, tui: &mut tui::Tui) {
    if self.reflow_ran_during_stream {
        self.schedule_resize_reflow();
        tui.frame_requester()
            .schedule_frame_in(RESIZE_REFLOW_DEBOUNCE);
    }
    self.reflow_ran_during_stream = false;
}
Ensure pending resize reflow gets a frame after stream end.
If a resize occurs during streaming and the debounce deadline elapses while streams are active, the pending reflow can sit idle until the next user input. Consider scheduling a frame when a pending reflow is already due as consolidation completes.
Suggested fix
fn maybe_finish_stream_reflow(&mut self, tui: &mut tui::Tui) {
if self.reflow_ran_during_stream {
self.schedule_resize_reflow();
tui.frame_requester()
.schedule_frame_in(RESIZE_REFLOW_DEBOUNCE);
+ } else if self
+ .resize_reflow_pending_until
+ .is_some_and(|deadline| Instant::now() >= deadline)
+ {
+ tui.frame_requester().schedule_frame();
}
self.reflow_ran_during_stream = false;
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
/// After stream consolidation, schedule a follow-up reflow if one ran mid-stream.
fn maybe_finish_stream_reflow(&mut self, tui: &mut tui::Tui) {
    if self.reflow_ran_during_stream {
        self.schedule_resize_reflow();
        tui.frame_requester()
            .schedule_frame_in(RESIZE_REFLOW_DEBOUNCE);
    } else if self
        .resize_reflow_pending_until
        .is_some_and(|deadline| Instant::now() >= deadline)
    {
        tui.frame_requester().schedule_frame();
    }
    self.reflow_ran_during_stream = false;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@codex-rs/tui/src/app.rs` around lines 990 - 998, The current
maybe_finish_stream_reflow calls schedule_resize_reflow() but doesn't ensure a
frame is requested when that reflow is already due; change
schedule_resize_reflow to return a bool (e.g., due_now) indicating the reflow
deadline has already elapsed and the reflow should run immediately, then in
maybe_finish_stream_reflow call tui.frame_requester().schedule_frame() (or
schedule_frame_in(Duration::ZERO)) when that bool is true instead of always
scheduling schedule_frame_in(RESIZE_REFLOW_DEBOUNCE); update any callers of
schedule_resize_reflow to handle the new return value.
if !from_replay
    && self.interrupt_requested_for_turn
    && is_interrupt_droppable_stream_event(&msg)
{
    tracing::trace!("dropping stream event while interrupt is pending");
    return;
}
Consider clearing queued stream lines immediately when interrupt is requested for faster termination.
The current interrupt mechanism (dropping future deltas) doesn't immediately halt rendering of already-queued lines in StreamController. Lines already in the queue will continue draining via commit ticks until the turn fully finalizes. For more responsive interrupts, call clear_queue() when interrupt_requested_for_turn is set in submit_op(), or add a method to immediately drain and discard pending output. This trades graceful tail-off for immediate termination.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@codex-rs/tui/src/chatwidget.rs` around lines 4114 - 4120, When an interrupt
is requested the current code only drops future stream deltas but allows
already-queued lines in StreamController to continue draining; update
submit_op() so that when interrupt_requested_for_turn is true (and not
from_replay) you also clear pending queued output to stop rendering immediately:
detect the same condition used in the if (from_replay /
interrupt_requested_for_turn / is_interrupt_droppable_stream_event(&msg)) block
and call StreamController::clear_queue() (or add and call a new method on
StreamController that immediately discards queued lines) to drain/discard
pending output instead of waiting for commit ticks.
// Strip a trailing newline and up to 3 leading spaces, returning the
// trimmed slice. Returns `None` when the line has 4+ leading spaces
// (which makes it an indented code line per CommonMark).
fn strip_line_indent(line: &str) -> Option<&str> {
    let without_newline = line.strip_suffix('\n').unwrap_or(line);
    let leading_ws = without_newline
        .as_bytes()
        .iter()
        .take_while(|byte| **byte == b' ')
        .count();
    if leading_ws > 3 {
        return None;
    }
    Some(&without_newline[leading_ws..])
}
Handle tab-indented fences to avoid unwrapping indented code blocks.
strip_line_indent only counts spaces, so a tab-indented code block could be misparsed as a fence and unwrapped incorrectly (CommonMark treats a tab as ≥4 spaces).
🧩 Suggested fix
fn strip_line_indent(line: &str) -> Option<&str> {
let without_newline = line.strip_suffix('\n').unwrap_or(line);
- let leading_ws = without_newline
- .as_bytes()
- .iter()
- .take_while(|byte| **byte == b' ')
- .count();
- if leading_ws > 3 {
- return None;
- }
- Some(&without_newline[leading_ws..])
+ let mut col = 0usize;
+ let mut idx = 0usize;
+ for b in without_newline.as_bytes() {
+ match b {
+ b' ' => { col += 1; idx += 1; }
+ b'\t' => { col += 4 - (col % 4); idx += 1; }
+ _ => break,
+ }
+ if col > 3 {
+ return None;
+ }
+ }
+ Some(&without_newline[idx..])
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@codex-rs/tui/src/markdown.rs` around lines 79 - 93, strip_line_indent
currently only counts spaces and misinterprets leading tabs (which CommonMark
treats as ≥4 spaces); update strip_line_indent to iterate the line by bytes,
tracking a byte index and a visual column count where a space increments column
by 1 and a tab increments column by 4, stop when a non-space/tab is seen, return
None if column >= 4, otherwise return the slice starting at the byte index after
consumed whitespace (preserving the existing newline stripping logic); ensure
you use the tracked byte index for slicing to avoid UTF-8 boundary issues and
keep the function name strip_line_indent unchanged.
/// Split a pipe-delimited line into trimmed segments.
///
/// Returns `None` if the line is empty or has no `|` marker.
/// Leading/trailing pipes are stripped before splitting.
pub(crate) fn parse_table_segments(line: &str) -> Option<Vec<&str>> {
    let trimmed = line.trim();
    if trimmed.is_empty() || !trimmed.contains('|') {
        return None;
    }

    let content = trimmed.strip_prefix('|').unwrap_or(trimmed);
    let content = content.strip_suffix('|').unwrap_or(content);

    let segments: Vec<&str> = split_unescaped_pipe(content)
        .iter()
        .map(|s| s.trim())
        .collect();
    (!segments.is_empty()).then_some(segments)
}
Avoid treating escaped-only pipes as table candidates.
parse_table_segments gates on trimmed.contains('|'), so lines with only escaped pipes (e.g., foo \| bar) are treated as table headers, which can delay streaming unnecessarily. Consider checking for unescaped separators (or explicit outer pipes) before returning Some.
💡 Suggested fix
pub(crate) fn parse_table_segments(line: &str) -> Option<Vec<&str>> {
let trimmed = line.trim();
- if trimmed.is_empty() || !trimmed.contains('|') {
+ if trimmed.is_empty() {
return None;
}
+ let has_outer_pipe = trimmed.starts_with('|') || trimmed.ends_with('|');
let content = trimmed.strip_prefix('|').unwrap_or(trimmed);
let content = content.strip_suffix('|').unwrap_or(content);
+ if !has_outer_pipe && !has_unescaped_pipe(content) {
+ return None;
+ }
let segments: Vec<&str> = split_unescaped_pipe(content)
.iter()
.map(|s| s.trim())
.collect();
(!segments.is_empty()).then_some(segments)
}
+
+fn has_unescaped_pipe(content: &str) -> bool {
+ let bytes = content.as_bytes();
+ let mut i = 0;
+ while i < bytes.len() {
+ if bytes[i] == b'\\' {
+ i += 2;
+ } else if bytes[i] == b'|' {
+ return true;
+ } else {
+ i += 1;
+ }
+ }
+ false
+}
📝 Committable suggestion
/// Split a pipe-delimited line into trimmed segments.
///
/// Returns `None` if the line is empty or has no `|` marker.
/// Leading/trailing pipes are stripped before splitting.
pub(crate) fn parse_table_segments(line: &str) -> Option<Vec<&str>> {
    let trimmed = line.trim();
    if trimmed.is_empty() {
        return None;
    }
    let has_outer_pipe = trimmed.starts_with('|') || trimmed.ends_with('|');
    let content = trimmed.strip_prefix('|').unwrap_or(trimmed);
    let content = content.strip_suffix('|').unwrap_or(content);
    if !has_outer_pipe && !has_unescaped_pipe(content) {
        return None;
    }
    let segments: Vec<&str> = split_unescaped_pipe(content)
        .iter()
        .map(|s| s.trim())
        .collect();
    (!segments.is_empty()).then_some(segments)
}

fn has_unescaped_pipe(content: &str) -> bool {
    let bytes = content.as_bytes();
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b'\\' {
            i += 2;
        } else if bytes[i] == b'|' {
            return true;
        } else {
            i += 1;
        }
    }
    false
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@codex-rs/tui/src/table_detect.rs` around lines 29 - 47, parse_table_segments
currently treats any line containing '|' (even only escaped pipes) as a table
candidate; change the guard so it only returns Some when there's at least one
unescaped separator or an explicit outer pipe. In parse_table_segments, replace
the trimmed.contains('|') check with logic that either (a) scans for an
unescaped '|' (i.e., a '|' not preceded by a backslash) or (b) uses
split_unescaped_pipe(content) and ensures it would produce more than one segment
before proceeding; keep the existing outer-strip logic
(strip_prefix/strip_suffix) so lines with real leading/trailing pipes still
count. Ensure the final then_some(segments) only happens when the
unescaped-separator check passes.
@codex review

@copilot review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 460c417068
Add proper table rendering to the markdown renderer using Unicode box-drawing characters (┌─┬┐│├─┼┤└─┴┘) instead of raw pipe syntax. Supports column alignment, cell wrapping at narrow widths, tables inside blockquotes, escaped pipes, and styled inline content.
Wire the Unicode box-drawing table renderer into the live streaming path with holdback-aware commit logic, post-stream cell consolidation, and debounced resize reflow. Key additions:
- StreamCore: shared bookkeeping deduplicating Stream/PlanStreamController
- Table holdback: fence-aware state machine keeps buffer mutable during table detection
- AgentMarkdownCell: stores raw markdown source, re-renders at any width on demand
- ConsolidateAgentMessage: backward-walk replaces streamed cells with single consolidated cell
- Resize reflow: debounced (75ms) re-render of all transcript cells after terminal resize
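The debounced (75ms) reflow mentioned above can be sketched with a deadline that each resize pushes out, so a burst of resize events triggers exactly one re-render. The types and field names here are illustrative, not the PR's actual ones:

```rust
use std::time::{Duration, Instant};

/// Sketch of a debounced reflow scheduler: each resize extends the
/// deadline; the reflow runs once no resize has arrived for the window.
struct ReflowDebounce {
    deadline: Option<Instant>,
    window: Duration,
}

impl ReflowDebounce {
    fn new(window: Duration) -> Self {
        Self { deadline: None, window }
    }

    /// Called on every terminal resize event.
    fn schedule(&mut self, now: Instant) {
        self.deadline = Some(now + self.window);
    }

    /// Called on each frame tick; returns true exactly once the debounce
    /// window has elapsed with no further resizes in between.
    fn maybe_run(&mut self, now: Instant) -> bool {
        match self.deadline {
            Some(d) if now >= d => {
                self.deadline = None;
                true
            }
            _ => false,
        }
    }
}
```

Passing `now` explicitly keeps the scheduler deterministic under test; a production version would call `Instant::now()` at the call sites.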
Add module, type, and function-level doc comments across the table
rendering change. Fix typos ("taht", "mirros", "stabe", "durin").
Expand streaming controller module docs to cover holdback and resize
handling. Document width allocation, spillover detection, consolidation,
and prewrapped line rendering contracts. No runtime behavior changes
in documentation edits; also includes extracted draw helpers in app.rs
and a fence-detection fix in markdown.rs.
Add module-level, type-level, and function-level documentation across the table rendering pipeline: table_detect, width, markdown, markdown_render, and streaming/controller. Separate spillover lines from table grid output via RenderedTableLines so spillover prose routes through normal word wrapping instead of bypassing it.
Address reviewer concerns about self-referential test coverage:
- Column-metrics: 6 tests for TableColumnKind classification, preferred_column_floor caps, and next_column_to_shrink priority.
- Spillover detection: 6 tests for is_spillover_row covering each heuristic branch and negative cases.
- tick_batch(0) now returns empty instead of silently draining one line
- has_live_tail avoids Vec allocation via field comparison
- Convert 15 sync-only controller tests from tokio::test to #[test]
…examples Strip blockquote prefixes during table detection only when the fence itself is inside a blockquote, preventing false unwrapping of fences that contain blockquoted table examples as illustration content.
Remove accumulated duplication and dead code across the md-table branch: extract shared helpers (trailing_run_start<T>, strip_line_indent, compute_target_stable_len, reset_history_emission_state, maybe_finish_stream_reflow, is_blockquote_active, active_cell_is_stream_tail), collapse redundant conditionals (normalize_row, FenceTracker Option state, merged match arms), delete duplicate methods and tests, and fix stale doc comments. No behavioral changes; -86 net lines.
…line Replace clone with mem::take in finalize(), return Cow from unwrap_markdown_fences for zero-copy on fence-free messages, use index-based iteration in reflow loop, add #[inline] on ~20 hot-path functions, add with_capacity hints on hot Vecs/Strings, box large ActiveFence variant, and eliminate minor allocations (into_iter, Span::from for literals, write!-based plain_text, redundant .max(1)).
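The `Cow`-returning fast path described in this commit can be sketched as follows. `unwrap_fences` here is a hypothetical stand-in for the PR's `unwrap_markdown_fences`, with a deliberately simplified rewrite rule:

```rust
use std::borrow::Cow;

/// Zero-copy pattern: return a borrow when no rewrite is needed,
/// allocate only when a fence actually has to be unwrapped.
fn unwrap_fences(source: &str) -> Cow<'_, str> {
    if !source.contains("```") {
        // Fence-free fast path: no allocation at all.
        return Cow::Borrowed(source);
    }
    // Simplified rewrite: drop fence lines, keep everything else.
    let kept: Vec<&str> = source
        .lines()
        .filter(|line| !line.trim_start().starts_with("```"))
        .collect();
    Cow::Owned(kept.join("\n"))
}
```

Since most agent messages contain no fences, the borrow branch dominates and the function's cost on the hot path is a single substring scan.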
Improve resize reflow scheduling so due reflows request an immediate frame instead of waiting for unrelated redraws, and clear queued stream output on `Op::Interrupt` so stale lines stop rendering immediately. Harden markdown/table parsing by treating tab indentation as 4-column whitespace in `strip_line_indent` and ignoring escaped-only pipes in `parse_table_segments` unless explicit outer pipes are present.
Disable table holdback for `PlanStreamController` by adding a per-stream `TableHoldbackMode` in `StreamCore`. Agent streams keep holdback enabled, while plan streams now enqueue table lines incrementally. Also treat plan live tail as active in `stream_controllers_idle()` and add regression coverage in `codex-rs/tui/src/streaming/controller.rs` and `codex-rs/tui/src/chatwidget/tests.rs` to prevent premature `StopCommitAnimation` and preserve streaming behavior.
Adjust table column classification in `markdown_render.rs` so token-heavy, low-word-density columns are treated as Structured instead of Narrative. This keeps URL-heavy link columns from being first in the shrink order, reducing rare but severe narrow-column wrapping in rendered tables.
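The classification tweak can be sketched as follows. This is a hedged toy model: the enum name mirrors the PR's `TableColumnKind`, but `classify_column`, its thresholds, and the token heuristic are assumptions invented for illustration:

```rust
#[derive(Debug, PartialEq)]
enum TableColumnKind {
    Narrative,  // prose: shrinks first, wraps well
    Structured, // identifiers/URLs/numbers: resists collapse
}

/// Hypothetical heuristic: columns dominated by long unbroken tokens
/// (URLs, identifiers) with low word density classify as Structured,
/// so they are not first in the shrink order.
fn classify_column(cells: &[&str]) -> TableColumnKind {
    let (mut words, mut long_tokens) = (0usize, 0usize);
    for cell in cells {
        for tok in cell.split_whitespace() {
            words += 1;
            if tok.len() >= 12 || tok.contains("://") {
                long_tokens += 1;
            }
        }
    }
    // Token-heavy, low-word-density columns resist shrinking.
    if words == 0 || long_tokens * 2 >= words {
        TableColumnKind::Structured
    } else {
        TableColumnKind::Narrative
    }
}

fn main() {
    assert_eq!(
        classify_column(&["https://example.com/a", "https://example.com/b"]),
        TableColumnKind::Structured
    );
    assert_eq!(
        classify_column(&["short prose words here", "more small words"]),
        TableColumnKind::Narrative
    );
}
```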
- Use `codex_protocol::protocol` instead of `codex_core::protocol` (made private upstream)
- Add missing `word_wrap_lines` import in `history_cell` tests
- Update column classification test for URL-like token heuristic
force-pushed from 460c417 to 8d0b455
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
codex-rs/core/config.schema.json (1)
1517-1522: ⚠️ Potential issue | 🟠 Major
Add backward compatibility handling for renamed config field.
The field `background_terminal_timeout` was renamed to `background_terminal_max_timeout` in the `ConfigToml` struct (codex-rs/core/src/config/mod.rs, line 1079), but no `#[serde(alias = "background_terminal_timeout")]` attribute exists. Users with existing configurations referencing the old field name will encounter deserialization failures. Either add the alias attribute to the field or document a migration path in release notes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@codex-rs/core/config.schema.json` around lines 1517 - 1522, Config deserialization will break for users who still have the old key `background_terminal_timeout` because the ConfigToml struct renamed it to `background_terminal_max_timeout`; update the ConfigToml definition in codex-rs/core/src/config/mod.rs to accept the old name by adding a serde alias for `background_terminal_timeout` on the `background_terminal_max_timeout` field (e.g., add `#[serde(alias = "background_terminal_timeout")]`) so both keys deserialize, or alternatively implement a custom deserialize fallback that maps the old key to `background_terminal_max_timeout` during deserialization.
codex-rs/core/src/codex.rs (1)
2218-2248: ⚠️ Potential issue | 🟠 Major
Decouple realtime mirroring from `send_event` to avoid blocking on backpressured channels.
`text_in` uses `.send().await` on a bounded channel, which will block if the text queue fills up—unlike `audio_in`, which uses `try_send()` to avoid blocking on backpressure. This blocking behavior can delay event emission on the hot path. Spawn the mirroring in a separate task to keep `send_event` non-blocking.
Suggested refactoring
```diff
-        if let Err(err) = self.conversation.text_in(text).await {
-            debug!("failed to mirror event text to realtime conversation: {err}");
-        }
+        let conversation = Arc::clone(&self.conversation);
+        tokio::spawn(async move {
+            if let Err(err) = conversation.text_in(text).await {
+                debug!("failed to mirror event text to realtime conversation: {err}");
+            }
+        });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@codex-rs/core/src/codex.rs` around lines 2218 - 2248, send_event currently calls maybe_mirror_event_text_to_realtime(&EventMsg) and awaits it, which can block on conversation.text_in(). Spawn a background task instead so mirroring doesn’t block the hot path: add an owned async helper (e.g., maybe_mirror_event_text_to_realtime_owned(self: Arc<Self>, msg: EventMsg) or make the existing method take EventMsg by value) that performs realtime_text_for_event + conversation.running_state().await + conversation.text_in(text).await and logs errors, then in send_event call tokio::spawn cloning/Arc-cloning self and passing the owned EventMsg (or cloned msg) into the helper without .await; ensure the helper handles None text and logs any send errors so the spawned task is self-contained.
🧹 Nitpick comments (2)
codex-rs/core/src/realtime_conversation.rs (1)
384-443: Tests cover the main cases well. Consider adding a test for the edge case where `"content"` is missing entirely:
Optional: additional edge case test
```rust
#[test]
fn returns_none_when_content_missing() {
    let item = json!({
        "type": "message",
        "role": "assistant",
    });
    assert_eq!(realtime_text_from_conversation_item(&item), None);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@codex-rs/core/src/realtime_conversation.rs` around lines 384 - 443, Add a unit test that covers the edge case where the "content" field is absent: inside the existing tests module (where realtime_text_from_conversation_item is used) add a test function (e.g., returns_none_when_content_missing) that constructs a JSON object with "type": "message" and no "content" (optionally include "role") and asserts that realtime_text_from_conversation_item(&item) returns None; place it alongside the other tests so it runs with the current suite.
codex-rs/core/src/codex.rs (1)
5484-5532: Cap mirrored patch output/file list size to protect the realtime channel.
Patch stdout/stderr and file lists can be very large; mirroring them verbatim can overwhelm the realtime channel/UI. Consider truncation with a clear indicator.
✂️ Proposed guardrails
```diff
 fn realtime_text_for_event(msg: &EventMsg) -> Option<String> {
+    const REALTIME_MIRROR_MAX_CHARS: usize = 4_000;
+    const REALTIME_MIRROR_MAX_FILES: usize = 50;
     match msg {
         EventMsg::AgentMessage(event) => Some(event.message.clone()),
         EventMsg::ItemCompleted(event) => match &event.item {
             TurnItem::AgentMessage(item) => Some(agent_message_text(item)),
             _ => None,
         },
@@
         EventMsg::PatchApplyBegin(event) => {
             let mut files: Vec<String> = event
                 .changes
                 .keys()
                 .map(|path| path.display().to_string())
                 .collect();
             files.sort();
+            let truncated = files.len() > REALTIME_MIRROR_MAX_FILES;
+            let extra = files.len().saturating_sub(REALTIME_MIRROR_MAX_FILES);
+            if truncated {
+                files.truncate(REALTIME_MIRROR_MAX_FILES);
+            }
             let file_list = if files.is_empty() {
                 "none".to_string()
             } else {
                 files.join(", ")
             };
+            let file_list = if truncated {
+                format!("{file_list} …(+{extra} more)")
+            } else {
+                file_list
+            };
             Some(format!(
                 "apply_patch started ({count} file change(s))\nFiles: {file_list}",
                 count = files.len()
             ))
         }
         EventMsg::PatchApplyEnd(event) => {
@@
             if !event.stderr.is_empty() {
                 text.push_str(&format!("\nstderr:\n{}", event.stderr));
             }
+            if text.len() > REALTIME_MIRROR_MAX_CHARS {
+                let mut end = REALTIME_MIRROR_MAX_CHARS.min(text.len());
+                while !text.is_char_boundary(end) {
+                    end = end.saturating_sub(1);
+                }
+                text.truncate(end);
+                text.push_str("\n…(truncated)");
+            }
             Some(text)
         }
         _ => None,
     }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@codex-rs/core/src/codex.rs` around lines 5484 - 5532, The mirrored patch output and file list in realtime_text_for_event can be unbounded and should be truncated: in the branches handling EventMsg::PatchApplyBegin (the local variable files and file_list) and EventMsg::PatchApplyEnd (event.stdout and event.stderr), enforce a reasonable character/byte limit (e.g., 1000–2000 chars) and replace any excess with a clear indicator like "…(truncated)"; update the formatted messages to include the truncation marker and keep the count of total files (files.len()) unchanged so consumers know more was omitted, ensuring you perform truncation before building the final strings in realtime_text_for_event.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@codex-rs/Cargo.toml`:
- Line 279: The dependency switch to syntect ("syntect = \"5\"") is applied but
brings two acknowledged transitive security advisories (RUSTSEC-2024-0320 and
RUSTSEC-2025-0141); update the project governance around this by (1) pinning
syntect to the intended tested version (e.g., 5.3.0) in Cargo.toml so upgrades
are deliberate, (2) ensure deny.toml contains explicit exceptions for those two
RUSTSEC IDs with a brief justification and an expiration or review date, and (3)
add a short comment next to the "syntect = \"5\"" entry referencing the
deny.toml exceptions and an action item to re-evaluate replacing syntect if
transitive deps remain unmaintained.
In `@codex-rs/core/src/agent/builtins/monitor.toml`:
- Around line 3-24: In the developer_instructions string in monitor.toml fix two
grammar issues: change "You're name is Superman." to "Your name is Superman."
and reword "If you need multiple wait, increase the timeouts/yield times
exponentially." to "If you need to wait multiple times, increase the
timeouts/yield times exponentially." Ensure these edits are made in the
developer_instructions block so the user-facing instructions read clearly.
In `@codex-rs/core/src/codex.rs`:
- Around line 1522-1535: The route_realtime_text_input function should ignore
empty or all-whitespace transcripts to avoid creating no-op user turns: in
route_realtime_text_input, trim the incoming text and if it's empty return early
instead of calling handlers::user_input_or_turn with Op::UserInput; keep using
self.next_internal_sub_id() and the existing Op::UserInput construction only
when the trimmed text is non-empty.
---
Outside diff comments:
In `@codex-rs/core/config.schema.json`:
- Around line 1517-1522: Config deserialization will break for users who still
have the old key background_terminal_timeout because the ConfigToml struct
renamed it to background_terminal_max_timeout; update the ConfigToml definition
in codex-rs/core/src/config/mod.rs to accept the old name by adding a serde
alias for background_terminal_timeout on the background_terminal_max_timeout
field (e.g., add #[serde(alias = "background_terminal_timeout")]) so both keys
deserialize, or alternatively implement a custom deserialize fallback that maps
the old key to background_terminal_max_timeout during deserialization.
In `@codex-rs/core/src/codex.rs`:
- Around line 2218-2248: send_event currently calls
maybe_mirror_event_text_to_realtime(&EventMsg) and awaits it, which can block on
conversation.text_in(). Spawn a background task instead so mirroring doesn’t
block the hot path: add an owned async helper (e.g.,
maybe_mirror_event_text_to_realtime_owned(self: Arc<Self>, msg: EventMsg) or
make the existing method take EventMsg by value) that performs
realtime_text_for_event + conversation.running_state().await +
conversation.text_in(text).await and logs errors, then in send_event call
tokio::spawn cloning/Arc-cloning self and passing the owned EventMsg (or cloned
msg) into the helper without .await; ensure the helper handles None text and
logs any send errors so the spawned task is self-contained.
---
Nitpick comments:
In `@codex-rs/core/src/codex.rs`:
- Around line 5484-5532: The mirrored patch output and file list in
realtime_text_for_event can be unbounded and should be truncated: in the
branches handling EventMsg::PatchApplyBegin (the local variable files and
file_list) and EventMsg::PatchApplyEnd (event.stdout and event.stderr), enforce
a reasonable character/byte limit (e.g., 1000–2000 chars) and replace any excess
with a clear indicator like "…(truncated)"; update the formatted messages to
include the truncation marker and keep the count of total files (files.len())
unchanged so consumers know more was omitted, ensuring you perform truncation
before building the final strings in realtime_text_for_event.
In `@codex-rs/core/src/realtime_conversation.rs`:
- Around line 384-443: Add a unit test that covers the edge case where the
"content" field is absent: inside the existing tests module (where
realtime_text_from_conversation_item is used) add a test function (e.g.,
returns_none_when_content_missing) that constructs a JSON object with "type":
"message" and no "content" (optionally include "role") and asserts that
realtime_text_from_conversation_item(&item) returns None; place it alongside the
other tests so it runs with the current suite.
Addressing the outside-diff + nitpick items from this review:
feat(tui): render markdown tables with Unicode box-drawing borders
Problem
Agent responses containing Markdown tables render as raw pipe-delimited text in the TUI transcript. Tables are one of the most common structured output formats from LLMs, and rendering them as `| A | B |` lines wastes horizontal space, obscures alignment, and makes multi-column data hard to scan — especially when columns contain prose or mixed-width content.

Before — raw pipe-delimited table on `main`:

After — Unicode box table that adapts to terminal width:
And a demonstration of the table rendering flow:
Dynamic.Markdown.Tables.mp4
A second, subtler problem: agents frequently wrap tables in `` ```markdown `` fences (treating them as code), which causes `pulldown-cmark` to parse the table as a fenced code block rather than a native table. Both problems need solving together.

Side benefit: The resize reflow infrastructure built for table width adaptation also fixes terminal resize for all markdown content. On `main`, widening the terminal wastes space and narrowing it causes visual corruption (see Resize handling for screenshots and details).

Mental model
The change introduces a two-region streaming model with table-aware holdback:
The holdback decision is the core architectural choice: rather than trying to incrementally patch table renders, the system re-renders the entire accumulated markdown source on every committed delta and decides per-delta whether lines flow to stable or stay as mutable tail.
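The per-delta split can be sketched with a toy model. The names echo the PR's `TableHoldbackState` and `compute_target_stable_len`, but the signatures and bodies here are illustrative assumptions, not the actual implementation:

```rust
// Hypothetical sketch of the holdback -> stable-length decision.
// `rendered_line_starts` maps each rendered line to the source byte
// offset it begins at.

#[derive(Debug, Clone, Copy, PartialEq)]
enum TableHoldbackState {
    None,
    PendingHeader { header_start: usize },
    Confirmed { table_start: usize },
}

/// How many rendered lines may flow into the stable region.
fn compute_target_stable_len(
    state: TableHoldbackState,
    rendered_line_starts: &[usize], // source byte offset where each line begins
    committed_len: usize,           // lines safe to commit when no table is active
) -> usize {
    let cutoff = match state {
        TableHoldbackState::None => return committed_len.min(rendered_line_starts.len()),
        TableHoldbackState::PendingHeader { header_start } => header_start,
        TableHoldbackState::Confirmed { table_start } => table_start,
    };
    // Hold back every rendered line at or past the (possible) table start.
    let before_table = rendered_line_starts
        .iter()
        .take_while(|&&start| start < cutoff)
        .count();
    before_table.min(committed_len)
}

fn main() {
    let starts = [0, 10, 25, 40]; // 4 rendered lines; table source starts at byte 25
    let state = TableHoldbackState::Confirmed { table_start: 25 };
    assert_eq!(compute_target_stable_len(state, &starts, 4), 2);
    assert_eq!(compute_target_stable_len(TableHoldbackState::None, &starts, 3), 3);
}
```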
How lines are allocated between regions:
- `None`: lines progressively flow from tail to stable as new deltas arrive. Only the most recent not-yet-committed lines stay in tail.
- `Confirmed { table_start }`: content from the table header onward is held as tail until the stream ends, because a single new row can reshape every column width. Pre-table prose continues flowing to stable.
- `PendingHeader { header_start }`: transitional — a header row was seen but not yet confirmed by a delimiter row. Content from the speculative header onward is kept mutable.

The holdback state is maintained by an incremental scanner (`TableHoldbackScanner`) over append-only source chunks, with a stateless full-source scan retained under `#[cfg(test)]` for parity checks.

A parallel fence-unwrapping pass (
`unwrap_markdown_fences`) strips `` ```md ``/`` ```markdown `` wrappers when the fence body contains a header+delimiter pair, so `pulldown-cmark` sees raw table syntax instead of fenced code.

Non-goals
Tradeoffs
(`rendered_line_end_source_bytes` + `remap_emitted_len_from_source_boundary`) … handles the `` ```markdown `` pattern transparently

Architecture
```mermaid
flowchart TD
    delta["Token delta from agent"]
    collector["MarkdownStreamCollector *(newline-gated raw source accumulator)*"]
    delta --> collector
    collector -- "commit_complete_source()" --> raw
    subgraph StreamController["StreamController — two-region streaming engine"]
        raw["raw_source *(append-only markdown accumulator)*"]
        raw --> recompute["recompute_streaming_render()"]
        subgraph pipeline["Render pipeline"]
            unwrap["unwrap_markdown_fences()"]
            pulldown["pulldown-cmark"]
            writer["Writer"]
            table_render["render_table_lines()"]
            unwrap --> pulldown --> writer --> table_render
        end
        recompute --> unwrap
        writer --> rendered["rendered_lines *(full re-render at current width)*"]
        table_render --> boxgrid["Unicode box grid or pipe fallback"]
        rendered --> holdback["holdback_scanner.state()"]
        holdback --> state["TableHoldbackState {None | PendingHeader | Confirmed}"]
        state --> sync["sync_stable_queue()"]
    end
    sync -- "Stable lines" --> commit["commit-animation queue"]
    commit --> agent["AgentMessageCell *(scrollback)*"]
    sync -- "Tail lines" --> tail["StreamingAgentTailCell *(active-cell slot, mutable)*"]
```

Key modules:
- `table_detect` — canonical pipe-table detection, incremental holdback scanning, and fence-context tracking; shared by streaming controller, fence unwrapper, and holdback scanner
- `width` — usable-width guards for prefix-column reservation
- `markdown` — public entry points with fence unwrapping (`append_markdown_agent`)
- `streaming/controller` — `StreamCore` (shared bookkeeping for source accumulation, re-rendering, holdback, and resize), `StreamController` (agent messages, holdback enabled), and `PlanStreamController` (plan blocks, holdback disabled)

Key types:
- `StreamCore` — private shared engine for two-region streaming: source accumulation, re-rendering, stable/tail partitioning, and holdback scanning. Both `StreamController` and `PlanStreamController` delegate to it via composition.
- `StreamController` / `PlanStreamController` — thin wrappers that add agent-message or plan-block emission semantics. Agent streams enable table holdback; plan streams remain fully incremental.
- `TableHoldbackState` — `None` / `PendingHeader { header_start }` / `Confirmed { table_start }`
- `FenceTracker` — incremental state machine for fence open/close (Outside / Markdown / Other)
- `TableState` / `TableCell` — pulldown-cmark event accumulator
- `TableColumnKind` / `TableColumnMetrics` — width-allocation classification (Narrative shrinks first; Structured resists collapse)
- `RenderedMarkdownWithSourceMap` — render artifact carrying lines + per-line input byte offsets (`line_end_input_bytes`)
- `StreamingAgentTailCell` / `AgentMessageCell` — history cell types for tail and committed content

Extracted helpers (shared across app, chatwidget, and streaming modules):
- `trailing_run_start<T>` — find start index of the trailing run of a specific cell type in transcript
- `strip_line_indent` — peel leading whitespace for fence detection
- `compute_target_stable_len` — determine how many rendered lines should be stable based on holdback state
- `reset_history_emission_state` / `maybe_finish_stream_reflow` — resize coordination between app-level reflow and active streams
- `is_blockquote_active` — shared blockquote context check for the markdown writer
- `active_cell_is_stream_tail` — chatwidget guard for stream-aware cell operations

Table rendering pipeline
The `Writer` in `markdown_render.rs` accumulates pulldown-cmark table events into a `TableState`, then passes it to `render_table_lines()`, which runs a five-stage pipeline; the final stage draws the Unicode box grid (`┌───┬───┐`) or falls back to pipe-delimited format (`| A | B |`) when even 3-char-wide columns cannot fit.

Resize handling
On `main`, terminal resize is essentially ignored for transcript content. History cells store pre-rendered `Vec<Line>` and are never reflowed — growing the terminal wastes horizontal space (text stays wrapped at the old width), and shrinking it causes visual corruption as pre-rendered lines are hard-clipped mid-word.

`main` — paragraph at half width, then after resize (no reflow):

This branch — prose reflows cleanly at any width:
This PR adds a resize reflow system, motivated by tables (whose column widths must adapt to available space) but benefiting all markdown content:
Width-change detection. `App` tracks `last_transcript_render_width` and fires `ChatWidget::on_terminal_resize()` when it changes.

Active stream updates immediately.
`StreamCore::set_width()` re-renders once at the new width, then remaps `emitted_stable_len` via stored source-byte boundaries. The `remap_emitted_len_from_source_boundary` function walks `rendered_line_end_source_bytes` with a tie-break counter for wrapped lines sharing the same source offset, avoiding duplicated or dropped content across the width transition.

Committed scrollback reflowed on a 75ms debounce.
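The remapping idea can be sketched with a simplified model: map the source byte offset reached by the old emit position to a line count in the new render. The real `remap_emitted_len_from_source_boundary` additionally carries a tie-break counter for wrapped lines sharing the same offset, which this assumption-level sketch omits:

```rust
/// Given per-line end-of-source byte offsets for the old and new renders,
/// translate an emitted-stable-line count across a width change so no
/// source content is duplicated or dropped. Names and shapes here are
/// illustrative assumptions, not the PR's actual API.
fn remap_emitted_len(old_ends: &[usize], new_ends: &[usize], old_emitted: usize) -> usize {
    if old_emitted == 0 {
        return 0;
    }
    // Source byte boundary the stable region reached at the old width.
    let boundary = old_ends[old_emitted - 1];
    // Count new lines whose source content ends at or before that boundary.
    new_ends.iter().take_while(|&&end| end <= boundary).count()
}

fn main() {
    // Two source lines; the first wraps into two lines at the narrower new width.
    let old_ends = [20, 35];
    let new_ends = [12, 20, 35];
    assert_eq!(remap_emitted_len(&old_ends, &new_ends, 1), 2); // same source boundary
    assert_eq!(remap_emitted_len(&old_ends, &new_ends, 0), 0);
    assert_eq!(remap_emitted_len(&old_ends, &new_ends, 2), 3);
}
```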
`maybe_run_resize_reflow()` clears the terminal and re-inserts every transcript cell at the new width. The debounce (`RESIZE_REFLOW_DEBOUNCE`) prevents expensive re-renders on every pixel of a drag resize.

Source-based history cells.
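The debounce can be sketched in a few lines: every resize re-arms a deadline, and the reflow fires exactly once after the deadline passes. The constant name mirrors the PR's `RESIZE_REFLOW_DEBOUNCE`; the struct and methods are assumptions for illustration:

```rust
use std::time::{Duration, Instant};

const RESIZE_REFLOW_DEBOUNCE: Duration = Duration::from_millis(75);

struct ResizeReflow {
    due_at: Option<Instant>,
}

impl ResizeReflow {
    fn new() -> Self {
        Self { due_at: None }
    }

    /// Called on every width change: push the deadline out again.
    fn schedule(&mut self, now: Instant) {
        self.due_at = Some(now + RESIZE_REFLOW_DEBOUNCE);
    }

    /// Called from the frame loop: true exactly once the debounce expires.
    fn take_if_due(&mut self, now: Instant) -> bool {
        match self.due_at {
            Some(due) if now >= due => {
                self.due_at = None;
                true
            }
            _ => false,
        }
    }
}

fn main() {
    let t0 = Instant::now();
    let mut reflow = ResizeReflow::new();
    reflow.schedule(t0);
    reflow.schedule(t0 + Duration::from_millis(50)); // drag continues: re-arm
    assert!(!reflow.take_if_due(t0 + Duration::from_millis(100))); // 50+75 not reached
    assert!(reflow.take_if_due(t0 + Duration::from_millis(130)));
    assert!(!reflow.take_if_due(t0 + Duration::from_millis(200))); // fires only once
}
```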
`AgentMarkdownCell` stores raw markdown source instead of pre-rendered lines, so it can re-render at any width during reflow. This replaces the run of `AgentMessageCell`s emitted during streaming once the stream is consolidated.

Stream-time reflow coordination. If a reflow runs while a stream is still active (cells are still
`AgentMessageCell`, not yet consolidated), the `reflow_ran_during_stream` flag triggers a second reflow after `ConsolidateAgentMessage` to pick up the `AgentMarkdownCell` rendering at the current width.

Observability
- `tracing::trace!` on every `push_delta` call in `MarkdownStreamCollector`
- `tracing::debug!` on finalization with raw/rendered lengths (test-only `finalize_and_drain`; production `finalize_and_drain_source` has no tracing)
- Resize reflow debounce (`RESIZE_REFLOW_DEBOUNCE`); `reflow_ran_during_stream` flag triggers a re-reflow on stream consolidation to pick up the final `AgentMarkdownCell` rendering

Tests
- `StreamController` produces identical output to a single full `append_markdown_agent` call. Covers: basic tables, blockquoted tables, no-outer-pipes tables, two-column tables, markdown-fenced tables, multi-table responses.
- `Confirmed` state for header+delimiter pairs, `PendingHeader` for lone headers, `None` for prose with pipes, and fence-awareness (non-markdown fences suppress holdback). The incremental scanner is checked against a stateless `#[cfg(test)]` full-source function for parity.
- `set_width` tests verify no duplicate lines after emit, no lost lines on partial drain, preserved in-flight tails for both agent and plan controllers, and correct output when resizing during an active confirmed-table stream.
- `TableColumnKind` classification (including URL-like token resistance), `preferred_column_floor` caps, and `next_column_to_shrink` priority. End-to-end parity tests provide additional coverage.
- Tests for `is_spillover_row` cover each heuristic branch (single-cell, HTML content, label+HTML, trailing HTML label) and negative cases. End-to-end tests provide additional coverage.
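The fence-unwrapping behavior exercised by the parity tests can be sketched with a simplified model. The real `unwrap_markdown_fences` handles indentation, blockquote context, and returns a `Cow`; this standalone version is an assumption-level illustration of the core rule — unwrap a `` ```md ``/`` ```markdown `` fence only when its body contains a header+delimiter pair:

```rust
/// True for a pipe-table delimiter row such as `| --- | :-: |`.
fn is_delimiter_row(line: &str) -> bool {
    let t = line.trim().trim_start_matches('|').trim_end_matches('|');
    !t.is_empty()
        && t.split('|').all(|cell| {
            let c = cell.trim();
            !c.is_empty() && c.chars().all(|ch| ch == '-' || ch == ':')
        })
}

/// Strip ```md / ```markdown fences whose body looks like a pipe table,
/// so the table parses natively instead of as a code block.
fn unwrap_markdown_fences(src: &str) -> String {
    let lines: Vec<&str> = src.lines().collect();
    let mut out: Vec<&str> = Vec::with_capacity(lines.len());
    let mut i = 0;
    while i < lines.len() {
        let fence = lines[i].trim();
        if fence == "```md" || fence == "```markdown" {
            if let Some(close) = (i + 1..lines.len()).find(|&j| lines[j].trim() == "```") {
                let body = &lines[i + 1..close];
                // Unwrap only when the body contains a header+delimiter pair.
                let has_table = body
                    .windows(2)
                    .any(|w| w[0].contains('|') && is_delimiter_row(w[1]));
                if has_table {
                    out.extend_from_slice(body); // drop both fence lines
                    i = close + 1;
                    continue;
                }
            }
        }
        out.push(lines[i]);
        i += 1;
    }
    out.join("\n")
}

fn main() {
    let wrapped = "```markdown\n| A | B |\n| --- | --- |\n| 1 | 2 |\n```";
    assert_eq!(
        unwrap_markdown_fences(wrapped),
        "| A | B |\n| --- | --- |\n| 1 | 2 |"
    );
    let code = "```rust\nlet x = 1;\n```"; // non-markdown fence: untouched
    assert_eq!(unwrap_markdown_fences(code), code);
}
```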