lee101/codex-infinity
Codex Infinity

npm i -g @codex-infinity/codex-infinity

Codex Infinity is a smarter coding agent that can run forever. It is based on the OpenAI Codex CLI, extended with autonomous workflow features.


What makes Codex Infinity different?

Two flags turn Codex into a fully autonomous coding agent:

  • --auto-next-steps -- After each response, automatically continues with the next logical steps (including testing)
  • --auto-next-idea -- Generates and implements new improvement ideas for your codebase

# Autonomous coding -- completes a task, then moves on to the next one
codex-infinity --auto-next-steps "fix all lint errors and add tests"

# Fully autonomous -- dreams up and implements improvements forever
codex-infinity --auto-next-steps --auto-next-idea

# Full auto mode with autonomous continuation
codex-infinity --full-auto --auto-next-steps

Quickstart

npm install -g @codex-infinity/codex-infinity

Then run codex-infinity to get started.

Authentication

Run codex-infinity and select Sign in with ChatGPT to use your Plus, Pro, Team, Edu, or Enterprise plan.

Or use an API key:

export OPENAI_API_KEY=sk-...
codex-infinity "your prompt"

CLI flags

Flag               Description
--auto-next-steps  Auto-continue with the next logical steps after each response
--auto-next-idea   Auto-brainstorm and implement new improvement ideas
--full-auto        Low-friction sandboxed automatic execution
--yolo             Skip approvals and sandbox (dangerous)
--yolo2            Like --yolo, plus disables command timeouts
--yolo3            Like --yolo2, plus passes through the full host environment
--yolo4            Like --yolo3, plus streams stdout/stderr directly
-m MODEL           Select a model (e.g. gpt-5.3-codex, o3)
--oss              Use a local model provider (LM Studio / Ollama)
--search           Enable live web search
-i FILE            Attach image(s) to the initial prompt
--cd DIR           Set the working directory
--profile NAME     Use a named profile from config.toml
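The --profile flag reads a named profile from config.toml, a mechanism inherited from the upstream Codex CLI. As a rough sketch, a profile for long unattended runs might look like the following; the profile name (autonomous) and the key names shown follow upstream Codex CLI conventions and are assumptions here, so check this fork's config documentation before relying on them:

```toml
# Hypothetical ~/.codex/config.toml -- key names follow upstream Codex CLI
# conventions and may differ in this fork.
[profiles.autonomous]
model = "gpt-5.3-codex"    # model can also be overridden per run with -m
approval_policy = "never"  # skip interactive approvals for unattended runs
```

A profile defined this way would then be selected per run with codex-infinity --profile autonomous "your prompt".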

Examples

# Fix a bug with full autonomy
codex-infinity --full-auto --auto-next-steps "fix the failing test in auth.test.ts"

# Refactor with idea generation
codex-infinity --auto-next-steps --auto-next-idea "refactor the API layer"

# Quick one-shot with yolo mode
codex-infinity --yolo "add error handling to all API endpoints"

# Use a specific model
codex-infinity -m gpt-5.3-codex --auto-next-steps "optimize database queries"

# Use local models
codex-infinity --oss -m llama3 "explain this codebase"

Features

  • Autonomous operation -- --auto-next-steps keeps it working without intervention
  • Idea generation -- --auto-next-idea brainstorms and implements improvements
  • AnyLLM -- OpenAI, local models via LM Studio/Ollama, bring your own provider
  • Local execution -- runs entirely on your machine
  • Concise prompts -- stripped-down system prompts for faster, more focused responses
  • Higher reliability -- increased retry limits for long-running autonomous sessions

Development

Build from source (Rust CLI)

cd codex-rs
cargo build --release -p codex-tui
./target/release/codex "your prompt here"

Build npm package

cd codex-cli
npm install

Project structure

  • codex-rs/ -- Rust workspace (TUI, core, sandbox, etc.)
  • codex-cli/ -- npm package wrapper
  • sdk/ -- TypeScript SDK

Docs

Based on the OpenAI Codex CLI. Licensed under Apache-2.0.
