A fork of the original Codebuff, maintained at gensart-projs/codebuff
Codebuff is an AI coding assistant that edits your codebase through natural language instructions. Instead of using one model for everything, it coordinates specialized agents that work together to understand your project and make precise changes.
🆕 Enhanced Version: This fork includes a centralized LLM provider configuration system, dynamic model discovery, and improved extensibility for adding new AI providers and models.
- JSON-based configuration instead of hard-coded constants
- Runtime model management without code deployment
- Hot reload capability for configuration changes
- Multi-provider support with standardized interfaces
- Automatic model detection from provider APIs
- Health checks and latency testing
- Real-time availability monitoring
- Cost optimization with configurable pricing
- Plugin-based architecture for new providers
- Standardized integration patterns
- Flexible routing with fallback chains
- Multi-tenancy support per organization
Codebuff beats Claude Code 61% to 53% on our evals, which cover 175+ coding tasks across multiple open-source repos and simulate real-world work.
When you ask Codebuff to "add authentication to my API," it might invoke:
- A File Explorer Agent that scans your codebase to understand the architecture and find relevant files
- A Planner Agent that decides which files need changes and in what order
- Implementation Agents that make precise edits
- Review Agents that validate the changes
This multi-agent approach gives you better context understanding, more accurate edits, and fewer errors compared to single-model tools.
Since this is a fork, you'll need to build and install locally:
# Clone the fork
git clone https://github.com/gensart-projs/codebuff.git
cd codebuff
# Install dependencies
bun install
# Build the project
bun run build
# Link for global usage (development)
bun link
# Or run directly without global install
bun run start-bin --cwd /path/to/your/project
cd your-project
codebuff # If you used bun link
# OR
bun run start-bin --cwd . # Direct execution
Then just tell Codebuff what you want and it handles the rest:
- "Fix the SQL injection vulnerability in user registration"
- "Add rate limiting to all API endpoints"
- "Refactor the database connection code for better performance"
Codebuff will find the right files, make changes across your codebase, and run tests to make sure nothing breaks.
This fork introduces a powerful centralized configuration system:
{
"providers": [
{
"id": "openrouter",
"name": "OpenRouter",
"type": "openrouter",
"baseUrl": "https://openrouter.ai/api/v1",
"auth": { "method": "api-key", "envVar": "OPEN_ROUTER_API_KEY" }
}
],
"models": [
{
"id": "claude-sonnet-4",
"providerId": "openrouter",
"modelId": "anthropic/claude-4-sonnet-20250522",
"pricing": { "inputTokensPerMillion": 3.0, "outputTokensPerMillion": 15.0 }
}
]
}
Automatically discover available models:
// Discover models from providers
const discoveredModels = await llmConfigManager.discoverModels()
console.log(`Found ${discoveredModels.length} available models`)
// Check model health
const isHealthy = await modelDiscoveryManager.checkModelHealth('gpt-4', provider)
.config/llm-providers/
├── config.json # Main configuration
├── providers/ # Provider-specific configs
│ ├── openrouter.json
│ ├── openai.json
│ └── anthropic.json
├── models/ # Model-specific configs
│ ├── claude-models.json
│ ├── gpt-models.json
│ └── gemini-models.json
└── environments/ # Environment overrides
├── development.json
├── staging.json
└── production.json
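One plausible way the environments/ overrides could layer on top of the base configuration is a recursive merge, where nested objects combine key-by-key and the environment file wins on conflicts. This merge helper is an assumption for illustration, not the fork's actual loader:

```typescript
type Json = { [key: string]: unknown }

// Illustrative helper: overlay an environment-specific config onto the base config.
// Nested objects merge key-by-key; arrays and scalars in the override win outright.
function mergeConfig(base: Json, override: Json): Json {
  const out: Json = { ...base }
  for (const [key, value] of Object.entries(override)) {
    const prev = out[key]
    if (
      value && typeof value === 'object' && !Array.isArray(value) &&
      prev && typeof prev === 'object' && !Array.isArray(prev)
    ) {
      out[key] = mergeConfig(prev as Json, value as Json)
    } else {
      out[key] = value
    }
  }
  return out
}
```

With this shape, a `production.json` containing only the keys that differ (say, a different `envVar`) leaves every other setting from `config.json` intact.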
To get started building your own agents, run:
codebuff init-agents
You can write agent definition files that give you maximum control over agent behavior.
Implement your workflows by specifying tools, spawnable agents, and prompts. We even have TypeScript generators for more programmatic control.
For example, here's a git-committer agent that creates git commits based on the current git state:
export default {
id: 'git-committer',
displayName: 'Git Committer',
model: 'openai/gpt-5-nano',
toolNames: ['read_files', 'run_terminal_command', 'end_turn'],
instructionsPrompt:
'You create meaningful git commits by analyzing changes, reading relevant files for context, and crafting clear commit messages that explain the "why" behind changes.',
async *handleSteps() {
// Analyze what changed
yield { tool: 'run_terminal_command', command: 'git diff' }
yield { tool: 'run_terminal_command', command: 'git log --oneline -5' }
// Stage files and create commit with good message
yield 'STEP_ALL'
},
}
Install the SDK package -- note this is different from the CLI codebuff package.
npm install @codebuff/sdk
Import the client and run agents!
import { CodebuffClient } from '@codebuff/sdk'
// 1. Initialize the client
const client = new CodebuffClient({
apiKey: 'your-api-key',
cwd: '/path/to/your/project',
onError: (error) => console.error('Codebuff error:', error.message),
})
// 2. Do a coding task...
const result = await client.run({
agent: 'base', // Codebuff's base coding agent
prompt: 'Add comprehensive error handling to all API endpoints',
handleEvent: (event) => {
console.log('Progress', event)
},
})
// 3. Or, run a custom agent!
const myCustomAgent: AgentDefinition = {
id: 'greeter',
displayName: 'Greeter',
model: 'openai/gpt-5',
instructionsPrompt: 'Say hello!',
}
await client.run({
agent: 'greeter',
agentDefinitions: [myCustomAgent],
prompt: 'My name is Bob.',
customToolDefinitions: [], // Add custom tools too!
handleEvent: (event) => {
console.log('Progress', event)
},
})
Learn more about the SDK here.
- OpenRouter - Unified API for Claude, GPT, Gemini, and more
- OpenAI - Direct GPT models including GPT-4, GPT-4o, and O-series
- Google AI - Gemini models with advanced reasoning
- Anthropic - Claude models for complex reasoning
- DeepSeek - Cost-effective reasoning models
- Vertex AI - Google Cloud fine-tuned models
- Extensible - Easy to add new providers via configuration
- Runtime Discovery: Automatically discover available models
- Health Monitoring: Continuous health checks and latency testing
- Cost Optimization: Configurable pricing with fallback strategies
- Load Balancing: Intelligent routing across multiple providers
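Routing with fallback chains could look roughly like this; the health-check callback and model ids are illustrative, not the fork's actual API:

```typescript
// Illustrative sketch: try each model in the chain until one passes its health check.
async function routeWithFallback(
  chain: string[],
  isHealthy: (model: string) => Promise<boolean>,
): Promise<string> {
  for (const model of chain) {
    if (await isHealthy(model)) return model
  }
  throw new Error(`No healthy model in chain: ${chain.join(' -> ')}`)
}
```

A real router would also weigh latency and cost from the monitoring data rather than stopping at the first healthy candidate.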
- Centralized Configuration: Manage all LLM providers through JSON configuration files
- Dynamic Model Discovery: Automatically detect new models and providers
- Cost Optimization: Intelligent routing based on pricing and performance
- Hot Reload: Update configurations without restarting the application
- Plugin Architecture: Standardized interface for adding new providers
- Configuration Validation: Strict validation with Zod schemas
- Event-Driven Updates: Real-time configuration change notifications
- Multi-tenancy Ready: Per-organization configuration support
- Provider Flexibility: Choose the most cost-effective provider for each task
- Dynamic Pricing: Update pricing without code deployment
- Fallback Strategies: Automatic fallback to cheaper alternatives
- Usage Analytics: Track costs across providers and models
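Given the per-million-token pricing fields in the configuration, a usage-cost estimate is simple arithmetic. The helpers below are a sketch (the `cheapestModel` selector is hypothetical); field names follow the example config:

```typescript
interface Pricing {
  inputTokensPerMillion: number
  outputTokensPerMillion: number
}

// Estimate the dollar cost of one request from configured per-million-token prices.
function estimateCost(p: Pricing, inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * p.inputTokensPerMillion +
    (outputTokens / 1_000_000) * p.outputTokensPerMillion
  )
}

// Hypothetical selector: pick the cheapest model for an expected token budget.
function cheapestModel<T extends { id: string; pricing: Pricing }>(
  models: T[],
  inputTokens: number,
  outputTokens: number,
): T {
  return models.reduce((best, m) =>
    estimateCost(m.pricing, inputTokens, outputTokens) <
    estimateCost(best.pricing, inputTokens, outputTokens)
      ? m
      : best,
  )
}
```

Because prices live in configuration rather than code, re-ranking providers after a price change is just a config edit.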
This is a fork of the original Codebuff project. Special thanks to the original creators for building such an innovative multi-agent AI coding assistant.
- Multi-agent architecture with specialized agents
- Natural language code editing
- TypeScript SDK with full customization
- Support for any model on OpenRouter
- Published agent marketplace
- Comprehensive tool system
- Centralized LLM Provider Configuration System
- Dynamic Model Discovery and Health Monitoring
- Enhanced Multi-tenant Configuration Support
- Improved Extensibility for New Providers
- Hot Reload Configuration Management
- Advanced Routing and Fallback Strategies
Since this is a fork and not published to npm, you'll need to build and install locally:
CLI Installation:
# Clone the fork
git clone https://github.com/gensart-projs/codebuff.git
cd codebuff
# Install dependencies and build
bun install
bun run build
# Option 1: Link for global usage (development)
bun link
# Option 2: Run directly without linking
bun run start-bin --cwd /path/to/your/project
SDK Installation:
# Navigate to SDK directory
cd sdk
bun install
bun run build
# Link for local development
bun link
# Use in your project
bun link @codebuff/sdk
Production Considerations:
- This fork is for development and experimentation
- For production use, consider building your own packages
- Consider publishing to a private npm registry
- Use the original Codebuff for stable releases
- Create configuration directory:
mkdir -p .config/llm-providers
- Copy example configuration:
cp node_modules/@codebuff/sdk/llm-config/llm-providers.example.json .config/llm-providers/config.json
- Configure your API keys:
# Set environment variables for your providers
export OPEN_ROUTER_API_KEY="your-openrouter-key"
export OPEN_AI_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GEMINI_API_KEY="your-gemini-key"
- Customize providers and models in the configuration file
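Resolving a provider's credential from the `auth.envVar` field in the configuration might look like this; the helper name and error message are assumptions, not the fork's actual code:

```typescript
interface ProviderAuth {
  method: 'api-key'
  envVar: string
}

// Look up the API key named by the provider's auth config in the environment.
function resolveApiKey(
  auth: ProviderAuth,
  env: Record<string, string | undefined> = process.env,
): string {
  const key = env[auth.envVar]
  if (!key) {
    throw new Error(`Missing API key: set ${auth.envVar} in your environment`)
  }
  return key
}
```

Keeping only the variable *name* in the config file means the file can be committed safely while the secret itself stays in the environment.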
Enhanced Configuration: Configuration Guide
Migration Guide: Migration Guide
Testing Guide: Testing Documentation
Running Codebuff locally: local-development.md
Original Documentation: codebuff.com/docs
Community: Discord
Support: support@codebuff.com
Fork Issues: gensart-projs/codebuff/issues
This enhanced version is actively maintained by gensart-projs. We welcome contributions, bug reports, and feature requests specific to the configuration system and multi-provider enhancements.
For issues related to the original Codebuff functionality, please refer to the original repository.


