
Conversation

@monotykamary

Intent

This PR is diagnostic only: it documents memory behavior and leak suspects observed via opencode-style harnesses. It is not intended to be merged as-is.

Approach

  • Added Solid harnesses that simulate opencode session rendering (code/reasoning/diff) and generic streaming updates.
  • Added native metrics sampling (text buffer arenas, rope segments, mem registry, highlight caches, grapheme pool) and renderable counts.
  • Logged metrics to packages/core/dev/native-metrics.jsonl for repeatable runs (see the sample-record sketch after this list).
  • Fixed Solid reconciler removal to destroy renderables immediately (avoids next-tick backlog growth) and added cleanup regression tests.
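As a rough illustration of what one line of that log could look like, here is a minimal sketch. The `MetricsSample` fields and the `logSample` helper are hypothetical names inferred from the metrics table below, not the harness's actual API.

```ts
// Hypothetical record shape: field names mirror the metrics table below and are
// assumptions, not the actual harness API from this PR.
import { appendFileSync } from "node:fs";

interface MetricsSample {
  run: string;                         // e.g. "mjyrzvhk-i2mt8f"
  harness: "solid" | "opencode";
  mode: string;                        // e.g. "code (highlight=true)"
  iteration: number;
  streaming: boolean;
  rssMB: number;                       // process RSS at sample time
  renderables: number;                 // live renderable count
  textBuffers: number;
  arenaMB: number;                     // text buffer arena usage
  viewArenaMB: number;
  highlights: { used: number; total: number };
  grapheme: { used: number; total: number };
}

// Append one JSON object per line so runs stay diffable across invocations.
function logSample(
  sample: MetricsSample,
  path = "packages/core/dev/native-metrics.jsonl",
): void {
  appendFileSync(path, JSON.stringify(sample) + "\n");
}
```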

Metrics (native-metrics.jsonl)

| Run | Harness | Mode | Iter | Streaming | Final/Peak RSS (MB) | Renderables | TextBuffers | Arena (MB) | View Arena (MB) | Highlights used/total | Grapheme used/total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| mjyrzvhk-i2mt8f | solid | code (highlight=false) | 400 | true | 30.75 / 83.63 | 253 | 120 | 9.37 | 0.00 | 0/0 | 0/0 |
| mjys0ilj-wi5rfq | solid | text (highlight=true) | 400 | true | 133.39 / 157.80 | 253 | 120 | 9.39 | 0.00 | 11520/8640 | 0/0 |
| mjys14pg-sii5ly | solid | code (highlight=true) | 400 | true | 94.52 / 96.31 | 253 | 120 | 0.02 | 0.00 | 0/0 | 0/0 |
| mjyryg7b-zj2fth | opencode | diff=true emoji=true | 300 | true | 130.52 / 204.73 | 1573 | 600 | 2.60 | 3.98 | 351/519 | 0/256 |
| mjys1g1w-6uh48b | opencode | diff=true emoji=true | 300 | false | 153.84 / 163.17 | 1573 | 600 | 4.03 | 1.19 | 901/1469 | 0/256 |

Notes:

  • Renderable counts and text buffer counts plateau at maxMessages across runs.
  • RSS growth is bounded after the plateau; no monotonic growth was observed once the list size stabilizes (see the log-summarizing sketch after these notes).
  • Grapheme pool usage remains flat even when emoji stress is enabled.
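
A small summarizer along these lines can reduce the JSONL log to the final/peak RSS figures reported above; it assumes the hypothetical record shape sketched earlier (one JSON object per line with at least `run` and `rssMB` fields).

```ts
// Hypothetical summarizer for native-metrics.jsonl; assumes one JSON object per
// line with at least { run, rssMB } fields, as in the sketch above.
import { readFileSync } from "node:fs";

const samples = readFileSync("packages/core/dev/native-metrics.jsonl", "utf8")
  .split("\n")
  .filter(Boolean)
  .map((line) => JSON.parse(line) as { run: string; rssMB: number });

const byRun = new Map<string, { final: number; peak: number }>();
for (const { run, rssMB } of samples) {
  const entry = byRun.get(run) ?? { final: 0, peak: 0 };
  entry.peak = Math.max(entry.peak, rssMB);
  entry.final = rssMB; // the last sample in a run is the final reading
  byRun.set(run, entry);
}

for (const [run, { final, peak }] of byRun) {
  console.log(`${run}: final ${final.toFixed(2)} / peak ${peak.toFixed(2)} MB RSS`);
}
```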

@monotykamary changed the title from "Memory investigation harnesses + native metrics" to "chore: memory investigation harnesses + native metrics" on Jan 3, 2026
@kommander
Collaborator

I get similar numbers in my tests. There are some known issues around SyntaxStyle usage, the TextBufferView, and buffer resizing. ~150 MB is not comparable to the gigabytes being used by the opencode TUI, where I see an initial spike to ~3 GB, coming down to ~1.3 GB, but then growing ridiculously with usage. It seems to be exclusively a macOS issue, as I didn't see it happen on other platforms. I am looking into it.

@kommander
Collaborator

I am certain this was improved in the latest opencode release (https://github.com/anomalyco/opencode/releases/tag/v1.1.2) with the "session search" core feature, which no longer forces loading the whole session.

@kommander closed this on Jan 5, 2026