```shell
npm i -g @codex-infinity/codex-infinity
```

Codex Infinity is a smarter coding agent that can run forever. It is based on the OpenAI Codex CLI, with autonomous workflow extensions.

Two arguments turn Codex into a fully autonomous coding agent:

- `--auto-next-steps` -- after each response, automatically continues with the next logical steps (including testing)
- `--auto-next-idea` -- generates and implements new improvement ideas for your codebase
```shell
# Autonomous coding -- completes tasks then moves to the next one
codex-infinity --auto-next-steps "fix all lint errors and add tests"

# Fully autonomous -- dreams up and implements improvements forever
codex-infinity --auto-next-steps --auto-next-idea

# Full auto mode with autonomous continuation
codex-infinity --full-auto --auto-next-steps
```

Install the package:

```shell
npm install -g @codex-infinity/codex-infinity
```

Then run `codex-infinity` to get started.
Run `codex-infinity` and select **Sign in with ChatGPT** to use your Plus, Pro, Team, Edu, or Enterprise plan.
Or use an API key:
```shell
export OPENAI_API_KEY=sk-...
codex-infinity "your prompt"
```

| Flag | Description |
|---|---|
| `--auto-next-steps` | Auto-continue with next logical steps after each response |
| `--auto-next-idea` | Auto-brainstorm and implement new improvement ideas |
| `--full-auto` | Low-friction sandboxed automatic execution |
| `--yolo` | Skip approvals and sandbox (dangerous) |
| `--yolo2` | Like `--yolo`, plus disable command timeouts |
| `--yolo3` | Like `--yolo2`, plus pass the full host environment |
| `--yolo4` | Like `--yolo3`, plus stream stdout/stderr directly |
| `-m MODEL` | Select the model (e.g. `gpt-5.3-codex`, `o3`) |
| `--oss` | Use a local model provider (LM Studio / Ollama) |
| `--search` | Enable live web search |
| `-i FILE` | Attach image(s) to the initial prompt |
| `--cd DIR` | Set the working directory |
| `--profile NAME` | Use a config profile from `config.toml` |
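For `--profile`, named profiles live in `config.toml` (inherited from the upstream Codex CLI, which reads `~/.codex/config.toml`). A minimal sketch, not a definitive reference -- the profile name `deep-work` is made up, and the exact keys and accepted values are assumptions borrowed from the upstream config; check your installed version's config documentation:

```toml
# ~/.codex/config.toml -- sketch only; key names assumed from the upstream Codex CLI
model = "gpt-5.3-codex"      # default model for every session

[profiles.deep-work]         # hypothetical profile name
model = "o3"
approval_policy = "never"    # assumed key/value; verify against your version
```

A profile would then be selected per run, e.g. `codex-infinity --profile deep-work "refactor the API layer"`.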
```shell
# Fix a bug with full autonomy
codex-infinity --full-auto --auto-next-steps "fix the failing test in auth.test.ts"

# Refactor with idea generation
codex-infinity --auto-next-steps --auto-next-idea "refactor the API layer"

# Quick one-shot with yolo mode
codex-infinity --yolo "add error handling to all API endpoints"

# Use a specific model
codex-infinity -m gpt-5.3-codex --auto-next-steps "optimize database queries"

# Use local models
codex-infinity --oss -m llama3 "explain this codebase"
```

- Autonomous operation -- `--auto-next-steps` keeps it working without intervention
- Idea generation -- `--auto-next-idea` brainstorms and implements improvements
- Any LLM -- OpenAI, local models via LM Studio/Ollama, or bring your own provider
- Local execution -- runs entirely on your machine
- Concise prompts -- stripped-down system prompts for faster, more focused responses
- Higher reliability -- increased retry limits for long-running autonomous sessions
Build the Rust TUI from source:

```shell
cd codex-rs
cargo build --release -p codex-tui
./target/release/codex "your prompt here"
```

To work on the npm wrapper:

```shell
cd codex-cli
npm install
```

Repository layout:

- `codex-rs/` -- Rust workspace (TUI, core, sandbox, etc.)
- `codex-cli/` -- npm package wrapper
- `sdk/` -- TypeScript SDK
Based on the OpenAI Codex CLI. Licensed under Apache-2.0.
