Description
OpenCode v1.0.201 exhibits sustained 100%+ CPU usage during active LLM streaming in long sessions, caused by an O(n) text buffer rendering loop in OpenTUI's native Zig renderer. The CPU drops to near-zero when idle, but memory accumulates and is not reclaimed even after session compaction. This makes large refactoring sessions extremely resource-intensive and can render the system sluggish.
Summary
| Metric | During Streaming | When Idle |
|---|---|---|
| CPU | 100-117% sustained | 0.1-0.5% |
| State | R+ (Running) | S+ (Sleeping) |
| Physical Footprint | 6.9 GB (peak 7.0 GB) | Remains high |
| RSS | 1.0-1.4 GB | 1.2 GB (not reclaimed) |
Root Cause
CPU samples show 100% of time spent in OpenTUI's text buffer renderer:
textBufferViewMeasureForDimensions
→ UnifiedTextBufferView.calculateVirtualLinesGeneric
→ rope.Rope.walkNode (recursive)
→ segment_callback
→ ArrayListAlignedUnmanaged.append
→ ArenaAllocator.alloc
→ PageAllocator.alloc
→ mmap (kernel)
This is in .7885f4ebfb6353d3-00000000.dylib - OpenTUI's native Zig renderer. The renderer appears to (see the sketch below):
- Recalculate virtual lines for ALL content on every update (not just the viewport)
- Allocate new memory via `mmap` for every calculation (not reusing allocations)
- Trigger an expensive O(n) recalculation for every token during streaming
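To make the cost concrete, here is a deliberately simplified TypeScript model of the pattern the sample suggests; all names are hypothetical and this is not OpenTUI code. If every streamed token triggers a full re-measure of the buffer, the work per token grows with the total content size, so an entire stream costs O(n²).

```ts
// Hypothetical model of the observed behavior -- not OpenTUI source code.
type Line = string;

// Full re-measure: walks every line and allocates a fresh result array on each call,
// analogous to calculateVirtualLinesGeneric -> walkNode -> append -> mmap in the sample above.
function measureAll(lines: Line[], width: number): number[] {
  const virtualLines: number[] = []; // new allocation on every call
  for (const line of lines) {
    virtualLines.push(Math.max(1, Math.ceil(line.length / width)));
  }
  return virtualLines;
}

// Streaming loop: one full O(n) measure per token makes the whole stream O(n^2),
// which matches sustained 100%+ CPU while tokens arrive and ~0% when idle.
// (Splitting tokens into new lines is omitted for brevity.)
function streamTokens(tokens: string[], width: number): void {
  const lines: Line[] = [""];
  for (const token of tokens) {
    lines[lines.length - 1] += token; // append the new token
    measureAll(lines, width);         // re-measures ALL content, not just the viewport
  }
}
```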
Session Context
The session was performing a large codebase refactoring:
| Metric | Value |
|---|---|
| Session Title | "Migrating from snowfall-lib to blueprint" |
| Files Modified | 198 |
| Lines Added | 3,565 |
| Lines Deleted | 833 |
| Session Parts | 12,248 |
| Session Messages | 530+ |
| Runtime | 1h 15m+ |
Environment Details
System Information
| Component | Value |
|---|---|
| OS | macOS 26.2 Tahoe (Build 25C56) |
| Kernel | Darwin 25.2.0 (xnu-12377.61.12~1/RELEASE_ARM64_T8132) |
| Chip | Apple M4 |
| CPU Cores | 10 (4 performance + 6 efficiency) |
| RAM | 16 GB |
| Architecture | arm64 |
OpenCode Environment
| Component | Value |
|---|---|
| OpenCode Version | 1.0.201 |
| Installation Method | bunx opencode-ai@latest |
| Bun Version | 1.3.2 |
| Shell | zsh |
| Terminal Multiplexer | tmux 3.5a |
| Terminal Type | tmux-256color |
Active Language Servers
| LSP | PID | CPU % | MEM % |
|---|---|---|---|
| bash-language-server | 35839 | 0% | 2.2% |
| nixd | 85540 | 0% | 0.2% |
| pyright | 86334 | 0% | 0.3% |
OpenCode Configuration
The full configuration JSON is included at the end of this report.
Process Statistics Over Time
During Active Agent Work
| Timestamp | CPU % | MEM % | RSS | Physical Footprint | Elapsed | State |
|---|---|---|---|---|---|---|
| 17:57:54 | 109.5 | 8.4 | 1.4 GB | 5.5 GB | 00:00:00 | R+ |
| 18:04:13 | 102.2 | 7.4 | 1.2 GB | 6.1 GB | 01:04:01 | R+ |
| 18:09:52 | 113.7 | 8.0 | 1.3 GB | 6.7 GB | 01:12:01 | R+ |
| 18:12:37 | 100.9 | 6.3 | 1.0 GB | 6.9 GB | 01:14:48 | R+ |
After Agent Completed (Idle)
| Timestamp | CPU % | MEM % | RSS | Elapsed | State |
|---|---|---|---|---|---|
| 18:15:37 | 0.1 | 7.1 | 1.2 GB | 01:17:49 | S+ |
Key Observation: CPU dropped from ~110% to 0.1% once the agent finished, but memory (RSS 1.2 GB, 7.1% of 16GB) remained high and was not reclaimed.
Memory Map Analysis (vmmap)
Process: opencode [85447]
Physical footprint: 6.7G
Physical footprint (peak): 6.8G
ReadOnly portion of Libraries: Total=925.1M resident=214.1M(23%)
Writable regions: Total=63.5G written=6.6G(10%) resident=1.1G(2%) swapped_out=5.7G(9%)
REGION TYPE SIZE RESIDENT DIRTY SWAPPED REGIONS
=========== ======= ======== ===== ======= =======
VM_ALLOCATE 6.1G 612.7M 612.7M 5.4G 167013
WebKit Malloc 12.9G 354.4M 287.8M 57.4M 60
JS VM Gigacage 1.5G 39.9M 11.6M 144K 7
JS VM Gigacage (reserved) 50.5G 0K 0K 0K 2
IOAccelerator 640.2M 53.1M 53.1M 233.1M 426
JS JIT generated code 512.0M 15.7M 15.6M 8880K 3
__TEXT 334.5M 191.9M 0K 0K 342
=========== ======= ======== ===== ======= =======
TOTAL 77.7G 1.4G 1.0G 5.7G 169429
Key observations:
- VM_ALLOCATE: 6.1 GB virtual, 612.7 MB resident, 5.4 GB swapped - 167,013 regions
- WebKit Malloc: 12.9 GB virtual - JSC heap
- JS VM Gigacage: ~52 GB total (50.5 GB reserved)
CPU Sample Analysis (Full)
5-second sample showing the hot path:
Analysis of sampling opencode (pid 85447) every 1 millisecond
Process: opencode [85447]
Date/Time: 2025-12-25 18:09:51.760 -0500
Launch Time: 2025-12-25 16:57:48.086 -0500
OS Version: macOS 26.2 (25C56)
Physical footprint: 6.7G
Call graph:
4193 Thread_5448600 DispatchQueue_1: com.apple.main-thread (serial)
+ 4193 start (in dyld)
+ 4193 ??? (in opencode) [~15 frames of Bun internals]
+ 4193 ??? (in <unknown binary>) [JIT code]
+ 4182 ??? (in <unknown binary>)
+ 4143 ??? (in <unknown binary>)
+ 4142 ??? (in <unknown binary>)
+ 3812 ??? (in <unknown binary>)
+ 3809 ??? (in <unknown binary>)
+ 3767 ??? (in <unknown binary>) [0x263aec94954]
+ 3767 textBufferViewMeasureForDimensions + 256
+ 3767 UnifiedTextBufferView.calculateVirtualLinesGeneric + 444
+ 3767 rope.Rope.walkNode + 144
+ 3767 rope.Rope.walkNode + 172
+ 3767 walkLinesAndSegments.WalkContext.walker + 156
+ 3767 calculateVirtualLinesGeneric.WrapContext.segment_callback + 624
+ 3767 ArrayListAlignedUnmanaged.append + 356
+ 3767 ArenaAllocator.alloc + 120
+ 3767 PageAllocator.alloc + 136
+ 3767 mmap + 80
+ 3767 __mmap + 8
Hot Path Breakdown:
| Function | Library | Samples | % |
|---|---|---|---|
| __mmap | libsystem_kernel.dylib | 3767 | 89.8% |
| PageAllocator.alloc | OpenTUI (.dylib) | 3767 | 89.8% |
| ArenaAllocator.alloc | OpenTUI (.dylib) | 3767 | 89.8% |
| ArrayListAlignedUnmanaged.append | OpenTUI (.dylib) | 3767 | 89.8% |
| segment_callback | OpenTUI (.dylib) | 3767 | 89.8% |
| rope.Rope.walkNode | OpenTUI (.dylib) | 3767 | 89.8% |
| calculateVirtualLinesGeneric | OpenTUI (.dylib) | 3767 | 89.8% |
| textBufferViewMeasureForDimensions | OpenTUI (.dylib) | 3767 | 89.8% |
Storage Analysis
Global OpenCode Storage
~/.local/share/opencode/
├── bin/ 325 MB (LSP servers)
├── log/ 608 KB (log files)
├── snapshot/ 485 MB (git snapshots)
├── storage/ 385 MB (session data)
│ ├── message/ 81 MB (531 directories)
│ ├── part/ 283 MB (12,248 directories)
│ ├── session_diff/ ~17 MB (498 files)
Current Session
{
"id": "ses_4a87a33a1ffee9ZS0Sir0p7Akd",
"version": "1.0.201",
"title": "Migrating from snowfall-lib to blueprint",
"summary": {
"additions": 3565,
"deletions": 833,
"files": 198
}
}
- Session diff: 420 KB, 199 JSON entries, 1,394 lines
System Resource Impact
Memory Pressure
Swap I/O:
Swapins: 4470468
Swapouts: 8129927
Compressor Stats:
Pages used by compressor: 254459
Pages decompressed: 78905151
Pages compressed: 133605383
System-wide memory free percentage: 60%
Swap Usage
vm.swapusage: total = 1024.00M used = 156.88M free = 867.12M (encrypted)
Load Average
Load Avg: 2.46, 2.82, 2.95 (on 10-core system)
Related Issues
High CPU / Performance Issues
| Issue | Title | Status |
|---|---|---|
| #5220 | Glob search uses 100% of CPU | OPEN |
| #4818 | Heavy CPU Usage and making my M1 mac laggy after updating | OPEN |
| #5130 | Mac unresponsive, significant lag | OPEN |
| #3822 | Output starts to slow to a crawl | OPEN |
Memory Issues
| Issue | Title | Status |
|---|---|---|
| #5363 | opencode eating 70gb of memory? | OPEN |
| #4315 | Memory stays unbounded: ACP session map + compacted tool outputs | OPEN |
| #3013 | Uses a huge amount of memory | OPEN |
| #6119 | TUI fails to render with critical memory leak (72.5G VIRT) | OPEN |
| #5700 | Too high memory usage | OPEN |
TUI Freeze / Hang Issues
| Issue | Title | Status |
|---|---|---|
| #731 | Critical Stability Issues: App Freezing & Hanging Analysis | OPEN |
| #5361 | TUI freezes for ~10 seconds periodically on WSL2 | OPEN |
| #4239 | TUI hangs when shell outputs massive log | OPEN |
Related Pull Requests (Open)
These PRs appear to directly address the root cause:
| PR | Title | Description |
|---|---|---|
| #3346 | perf(tui/chat): virtualize message rendering to remove O(n) reflows | Virtualizes chat blocks, caches per-block content, binary search for visible blocks |
| #2867 | perf(tui): Eliminate O(n) scans from render hot path | Local shimmer detection, header caching, virtual viewport rendering |
| #1493 | fix: implement LRU cache for session memory management | LRU eviction, TTL expiration, memory pressure handling |
| #4963 | feat/perf: add pagination to modified files in sidebar | Pagination for large file lists |
| #5432 | fix(grep): stream ripgrep output to prevent memory exhaustion | Stream instead of buffer |
Key PR: #3346 - Virtualize Message Rendering
Changes:
- Build per-block caches once, use binary search for visible blocks
- Shimmer gating: Only animate when in-flight, re-render only when at bottom and backlog ≤ 2000 lines
Impact:
- Streaming remains smooth on large histories
- Rendering work scales with viewport, not backlog
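As a rough illustration of the approach described above (per-block height caches plus a binary search for the visible range), here is a minimal TypeScript sketch; the names and structure are assumptions, not the PR's actual code.

```ts
// Hypothetical sketch of viewport virtualization via binary search -- not the PR's implementation.
interface Block { cachedHeight: number }

// Prefix sums of cached block heights; rebuild only when a block's height changes.
function buildOffsets(blocks: Block[]): number[] {
  const offsets = [0];
  for (const b of blocks) offsets.push(offsets[offsets.length - 1] + b.cachedHeight);
  return offsets;
}

// Binary search for the first block whose bottom edge lies below scrollTop.
function firstVisible(offsets: number[], scrollTop: number): number {
  let lo = 0, hi = offsets.length - 2;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (offsets[mid + 1] <= scrollTop) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

// Only blocks intersecting the viewport get rendered: O(log n + visible) per frame instead of O(n).
function visibleRange(offsets: number[], blockCount: number, scrollTop: number, viewportHeight: number): [number, number] {
  const start = firstVisible(offsets, scrollTop);
  let end = start;
  while (end < blockCount && offsets[end] < scrollTop + viewportHeight) end++;
  return [start, end];
}
```

With cached heights, scrolling and streaming only touch the handful of blocks that intersect the viewport rather than the full backlog.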
Key PR: #2867 - Eliminate O(n) Scans
Optimizations:
- Local shimmer detection - O(parts-in-message) instead of O(all-parts)
- Virtual viewport rendering - O(1) scrolling regardless of message count
- Fast selection updates - 10-50x speedup
Performance:
- Before: ~3-10 FPS with 100+ messages
- After: Smooth 60 FPS
Key PR: #1493 - LRU Cache for Session Memory
Features:
- LRU eviction at capacity
- TTL-based expiration
- Memory pressure handling (aggressive cleanup >80%)
Impact:
- 50-80% reduction in memory for long sessions
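For reference, a minimal sketch of the eviction policy described (LRU ordering, TTL expiration, and a pressure-triggered shed); this is an illustration under assumed names, not PR #1493's implementation.

```ts
// Hypothetical LRU + TTL cache sketch -- not PR #1493's actual code.
class SessionCache<V> {
  private entries = new Map<string, { value: V; storedAt: number }>();
  constructor(private capacity: number, private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() - entry.storedAt > this.ttlMs) { // TTL expiration
      this.entries.delete(key);
      return undefined;
    }
    this.entries.delete(key); // re-insert to mark as most recently used
    this.entries.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    this.entries.delete(key);
    this.entries.set(key, { value, storedAt: Date.now() });
    while (this.entries.size > this.capacity) { // evict least recently used (oldest Map entry)
      const oldest = this.entries.keys().next().value as string;
      this.entries.delete(oldest);
    }
  }

  // Under memory pressure, aggressively drop a fraction of the oldest entries.
  shed(fraction: number): void {
    const toDrop = Math.floor(this.entries.size * fraction);
    for (const key of [...this.entries.keys()].slice(0, toDrop)) this.entries.delete(key);
  }
}
```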
Technical Recommendations
For OpenTUI Text Buffer
- Cache virtual line calculations: Don't recalculate for unchanged content
- Virtualize to viewport: Only calculate virtual lines for visible rows
- Reuse allocations: Use object pooling instead of continuous mmap
- Batch updates during streaming: Coalesce rapid updates
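A minimal sketch of the first two recommendations above (cache per-line measurements, and measure only near the viewport), using assumed names rather than OpenTUI's real API; the viewport window is approximated by a slice of logical lines for brevity.

```ts
// Hypothetical caching/viewport sketch -- not OpenTUI's API.
interface MeasuredLine { text: string; version: number }

const wrapCache = new Map<MeasuredLine, { version: number; width: number; rows: number }>();

// Recompute wrapping only when a line's content or the terminal width changed;
// unchanged lines are served from the cache, so a streaming token touches one line, not all of them.
function rowsFor(line: MeasuredLine, width: number): number {
  const cached = wrapCache.get(line);
  if (cached && cached.version === line.version && cached.width === width) return cached.rows;
  const rows = Math.max(1, Math.ceil(line.text.length / width));
  wrapCache.set(line, { version: line.version, width, rows });
  return rows;
}

// Measure only the lines that can intersect the viewport instead of walking the whole buffer.
function measureViewport(lines: MeasuredLine[], width: number, firstLine: number, viewportRows: number): number {
  let rows = 0;
  for (const line of lines.slice(firstLine, firstLine + viewportRows)) rows += rowsFor(line, width);
  return rows;
}
```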
For Session Memory Management
- Implement LRU eviction: As proposed in PR #1493 (fix: implement LRU cache for session memory management)
- Actually clear tool outputs on compaction: Don't just mark as compacted
- Stream large outputs: Don't buffer entire tool outputs in memory
For Rendering Pipeline
- Implement virtualized scrolling: As proposed in PR #3346 (perf(tui/chat): virtualize message rendering to remove O(n) reflows during streaming)
- Cache rendered blocks: Don't re-render unchanged message blocks
- Throttle re-renders during streaming: Limit to 30-60 FPS
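The throttling recommendation could look roughly like the sketch below, which coalesces bursts of streaming updates into at most ~30 renders per second; the function names are hypothetical.

```ts
// Hypothetical render throttle -- coalesces rapid streaming updates into at most ~30 frames/second.
const FRAME_BUDGET_MS = 33; // ~30 FPS; use 16 for ~60 FPS
let renderScheduled = false;
let lastRenderAt = 0;

// Call requestRender(drawFrame) once per token; only one frame is ever pending,
// so many token updates collapse into a single re-render per frame budget.
function requestRender(render: () => void): void {
  if (renderScheduled) return;
  renderScheduled = true;
  const wait = Math.max(0, FRAME_BUDGET_MS - (Date.now() - lastRenderAt));
  setTimeout(() => {
    renderScheduled = false;
    lastRenderAt = Date.now();
    render(); // single re-render covering all updates since the last frame
  }, wait);
}
```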
OpenCode version
1.0.201
Steps to reproduce
1. Start OpenCode in a project directory:
   bunx opencode-ai@latest
2. Begin a large refactoring task that:
   - Involves many files (100+ file modifications)
   - Generates substantial LLM output
   - Runs for an extended period (30+ minutes)
   - Uses tools that produce verbose output
3. Monitor system resources:
   while true; do ps -p <PID> -o pid,%cpu,%mem,rss; sleep 5; done
4. Observe:
   - CPU spikes to 100%+ during LLM streaming
   - Memory accumulates and does not decrease after compaction
   - System becomes sluggish
Screenshot and/or share link
Share link: https://opncd.ai/share/XmViOCAJ
Diagnostic data is provided above instead of screenshots, including:
- Full CPU samples from the macOS `sample` command
- Memory maps from `vmmap`
- Heap analysis from `heap`
- Process statistics over time
Operating System
macOS 26.2 (Tahoe, Build 25C56) - Apple M4, 16GB RAM
Terminal
tmux 3.5a + zsh (TERM=tmux-256color)
{ "$schema": "https://opencode.ai/config.json", "tools": { "webfetch": false }, "permission": { "edit": "allow", "webfetch": "allow", "bash": { "git push": "ask", "npm install": "ask", "*": "allow" } }, "plugin": [ "opencode-openai-codex-auth@4.2.0", "opencode-google-antigravity-auth" ], "mcp": { "exa": { "enabled": true, "type": "remote", "url": "https://mcp.exa.ai/mcp?..." }, "deepwiki": { "enabled": true, "type": "remote", "url": "https://mcp.deepwiki.com/mcp" } }, "autoupdate": false, "experimental": { "disable_paste_summary": true } }