Codebuff - Enhanced LLM Provider Configuration

A fork of the original Codebuff, maintained by gensart-projs.

(Benchmark chart: Codebuff vs Claude Code)

Codebuff is an AI coding assistant that edits your codebase through natural language instructions. Instead of using one model for everything, it coordinates specialized agents that work together to understand your project and make precise changes.

🆕 Enhanced Version: This fork includes a centralized LLM provider configuration system, dynamic model discovery, and improved extensibility for adding new AI providers and models.

🚀 New Features in This Fork

🔧 Centralized LLM Provider Configuration

  • JSON-based configuration instead of hard-coded constants
  • Runtime model management without code deployment
  • Hot reload capability for configuration changes (sketched after this list)
  • Multi-provider support with standardized interfaces
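
Hot reload, for example, can be as simple as watching the config file and re-parsing it on change. This is a minimal sketch, not the fork's actual implementation; the path matches the layout described later in this README:

import { watch, readFileSync } from 'node:fs'

const CONFIG_PATH = '.config/llm-providers/config.json'

let config = JSON.parse(readFileSync(CONFIG_PATH, 'utf8'))

// Re-parse on every change, keeping the previous config
// if the new contents fail to parse
watch(CONFIG_PATH, () => {
  try {
    config = JSON.parse(readFileSync(CONFIG_PATH, 'utf8'))
    console.log('LLM provider configuration reloaded')
  } catch (err) {
    console.error('Ignoring invalid config update:', err)
  }
})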

🔍 Dynamic Model Discovery

  • Automatic model detection from provider APIs
  • Health checks and latency testing (see the sketch after this list)
  • Real-time availability monitoring
  • Cost optimization with configurable pricing
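
A health check can be as simple as a timed request against a provider's model-listing endpoint. A minimal sketch, assuming an OpenAI-compatible /models route rather than this fork's exact probe:

// Probe a provider and report round-trip latency in milliseconds.
// baseUrl and apiKey would come from the provider configuration.
async function probeProvider(baseUrl: string, apiKey: string) {
  const start = performance.now()
  const res = await fetch(`${baseUrl}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  })
  return { healthy: res.ok, latencyMs: performance.now() - start }
}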

📈 Enhanced Extensibility

  • Plugin-based architecture for new providers (illustrated after this list)
  • Standardized integration patterns
  • Flexible routing with fallback chains
  • Multi-tenancy support per organization
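
To give a feel for the plugin pattern, a new provider might only need to implement a small interface. The names below are illustrative, not the fork's exact API:

// Hypothetical shape of a provider plugin
interface ProviderPlugin {
  id: string
  listModels(): Promise<string[]>
  complete(modelId: string, prompt: string): Promise<string>
}

// A registry could then accept any conforming implementation,
// e.g. registry.register(myOpenRouterPlugin)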

Original Features (Maintained)

Codebuff beats Claude Code 61% to 53% on our evals, which span 175+ coding tasks across multiple open-source repos and simulate real-world work.

(Codebuff demo)

How it works

When you ask Codebuff to "add authentication to my API," it might invoke:

  1. A File Explorer Agent scans your codebase to understand the architecture and find relevant files
  2. A Planner Agent plans which files need changes and in what order
  3. Implementation Agents make precise edits
  4. Review Agents validate the changes

(Diagram: Codebuff multi-agent workflow)

This multi-agent approach gives you better context understanding, more accurate edits, and fewer errors compared to single-model tools.

CLI: Install and start coding

Fork Installation (Development)

Since this is a fork, you'll need to build and install locally:

# Clone the fork
git clone https://github.com/gensart-projs/codebuff.git
cd codebuff

# Install dependencies
bun install

# Build the project
bun run build

# Link for global usage (development)
bun link

# Or run directly without global install
bun run start-bin --cwd /path/to/your/project

Run with the fork:

cd your-project
codebuff  # If you used bun link
# OR
bun run start-bin --cwd .  # Direct execution

Then just tell Codebuff what you want and it handles the rest:

  • "Fix the SQL injection vulnerability in user registration"
  • "Add rate limiting to all API endpoints"
  • "Refactor the database connection code for better performance"

Codebuff will find the right files, make changes across your codebase, and run tests to make sure nothing breaks.

🆕 Enhanced Configuration System

Configuration Management

This fork introduces a powerful centralized configuration system:

{
  "providers": [
    {
      "id": "openrouter",
      "name": "OpenRouter",
      "type": "openrouter",
      "baseUrl": "https://openrouter.ai/api/v1",
      "auth": { "method": "api-key", "envVar": "OPEN_ROUTER_API_KEY" }
    }
  ],
  "models": [
    {
      "id": "claude-sonnet-4",
      "providerId": "openrouter",
      "modelId": "anthropic/claude-4-sonnet-20250522",
      "pricing": { "inputTokensPerMillion": 3.0, "outputTokensPerMillion": 15.0 }
    }
  ]
}
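
Since the fork validates configuration with Zod, loading this file might look roughly like the following sketch. The schema is an approximation covering only the fields shown above, not the fork's actual schema:

import { z } from 'zod'
import { readFileSync } from 'node:fs'

const ConfigSchema = z.object({
  providers: z.array(
    z.object({
      id: z.string(),
      name: z.string(),
      type: z.string(),
      baseUrl: z.string().url(),
      auth: z.object({ method: z.string(), envVar: z.string() }),
    }),
  ),
  models: z.array(
    z.object({
      id: z.string(),
      providerId: z.string(),
      modelId: z.string(),
      pricing: z.object({
        inputTokensPerMillion: z.number(),
        outputTokensPerMillion: z.number(),
      }),
    }),
  ),
})

// Throws with a descriptive error if the file drifts from the schema
const config = ConfigSchema.parse(
  JSON.parse(readFileSync('.config/llm-providers/config.json', 'utf8')),
)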

Dynamic Model Discovery

Automatically discover available models:

// Discover models from providers
const discoveredModels = await llmConfigManager.discoverModels()
console.log(`Found ${discoveredModels.length} available models`)

// Check model health
const isHealthy = await modelDiscoveryManager.checkModelHealth('gpt-4', provider)

Configuration Files Structure

.config/llm-providers/
├── config.json              # Main configuration
├── providers/               # Provider-specific configs
│   ├── openrouter.json
│   ├── openai.json
│   └── anthropic.json
├── models/                  # Model-specific configs
│   ├── claude-models.json
│   ├── gpt-models.json
│   └── gemini-models.json
└── environments/            # Environment overrides
    ├── development.json
    ├── staging.json
    └── production.json
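
Environment overrides presumably layer on top of the main configuration. A minimal sketch of the idea, assuming a shallow merge and the file names from the tree above:

import { readFileSync } from 'node:fs'

function loadJson(path: string): Record<string, unknown> {
  return JSON.parse(readFileSync(path, 'utf8'))
}

// Later spreads win for top-level keys; the real merge may be deeper
const env = process.env.NODE_ENV ?? 'development'
const config = {
  ...loadJson('.config/llm-providers/config.json'),
  ...loadJson(`.config/llm-providers/environments/${env}.json`),
}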

Create custom agents

To get started building your own agents, run:

codebuff init-agents

You can write agent definition files that give you maximum control over agent behavior.

Implement your workflows by specifying tools, which agents can be spawned, and prompts. We even have TypeScript generators for more programmatic control.

For example, here's a git-committer agent that creates git commits based on the current git state:

export default {
  id: 'git-committer',
  displayName: 'Git Committer',
  model: 'openai/gpt-5-nano',
  toolNames: ['read_files', 'run_terminal_command', 'end_turn'],

  instructionsPrompt:
    'You create meaningful git commits by analyzing changes, reading relevant files for context, and crafting clear commit messages that explain the "why" behind changes.',

  async *handleSteps() {
    // Analyze what changed
    yield { tool: 'run_terminal_command', command: 'git diff' }
    yield { tool: 'run_terminal_command', command: 'git log --oneline -5' }

    // Stage files and create commit with good message
    yield 'STEP_ALL'
  },
}

SDK: Run agents in production

Install the SDK package -- note this is different from the CLI codebuff package.

npm install @codebuff/sdk

Import the client and run agents!

import { CodebuffClient, type AgentDefinition } from '@codebuff/sdk'

// 1. Initialize the client
const client = new CodebuffClient({
  apiKey: 'your-api-key',
  cwd: '/path/to/your/project',
  onError: (error) => console.error('Codebuff error:', error.message),
})

// 2. Do a coding task...
const result = await client.run({
  agent: 'base', // Codebuff's base coding agent
  prompt: 'Add comprehensive error handling to all API endpoints',
  handleEvent: (event) => {
    console.log('Progress', event)
  },
})

// 3. Or, run a custom agent!
const myCustomAgent: AgentDefinition = {
  id: 'greeter',
  displayName: 'Greeter',
  model: 'openai/gpt-5',
  instructionsPrompt: 'Say hello!',
}
await client.run({
  agent: 'greeter',
  agentDefinitions: [myCustomAgent],
  prompt: 'My name is Bob.',
  customToolDefinitions: [], // Add custom tools too!
  handleEvent: (event) => {
    console.log('Progress', event)
  },
})

Learn more about the SDK here.

Enhanced Multi-Provider Support

Supported Providers

  • OpenRouter - Unified API for Claude, GPT, Gemini, and more
  • OpenAI - Direct GPT models including GPT-4, GPT-4o, and O-series
  • Google AI - Gemini models with advanced reasoning
  • Anthropic - Claude models for complex reasoning
  • DeepSeek - Cost-effective reasoning models
  • Vertex AI - Google Cloud fine-tuned models
  • Extensible - Easy to add new providers via configuration

Dynamic Model Management

  • Runtime Discovery: Automatically discover available models
  • Health Monitoring: Continuous health checks and latency testing
  • Cost Optimization: Configurable pricing with fallback strategies (sketched below)
  • Load Balancing: Intelligent routing across multiple providers
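
To illustrate the fallback idea (a sketch, not the fork's actual router), a fallback chain can simply try models in preference order:

// Try each model in order until one succeeds.
// callModel stands in for the real provider call.
async function completeWithFallback(
  chain: string[],
  prompt: string,
  callModel: (modelId: string, prompt: string) => Promise<string>,
): Promise<string> {
  for (const modelId of chain) {
    try {
      return await callModel(modelId, prompt)
    } catch {
      // Fall through to the next, cheaper or more available model
    }
  }
  throw new Error('All models in the fallback chain failed')
}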

Why choose this enhanced Codebuff

🚀 Enhanced Features

  • Centralized Configuration: Manage all LLM providers through JSON configuration files
  • Dynamic Model Discovery: Automatically detect new models and providers
  • Cost Optimization: Intelligent routing based on pricing and performance
  • Hot Reload: Update configurations without restarting the application

🛠️ Technical Improvements

  • Plugin Architecture: Standardized interface for adding new providers
  • Configuration Validation: Strict validation with Zod schemas
  • Event-Driven Updates: Real-time configuration change notifications
  • Multi-tenancy Ready: Per-organization configuration support

💰 Cost Benefits

  • Provider Flexibility: Choose the most cost-effective provider for each task
  • Dynamic Pricing: Update pricing without code deployment (see the cost sketch below)
  • Fallback Strategies: Automatic fallback to cheaper alternatives
  • Usage Analytics: Track costs across providers and models
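
Per-request cost follows directly from the pricing fields shown in the configuration example earlier. A small illustrative helper:

// Cost in USD for one request, given per-million-token pricing
// from the model config shown earlier in this README
function requestCostUsd(
  inputTokens: number,
  outputTokens: number,
  pricing: { inputTokensPerMillion: number; outputTokensPerMillion: number },
): number {
  return (
    (inputTokens / 1_000_000) * pricing.inputTokensPerMillion +
    (outputTokens / 1_000_000) * pricing.outputTokensPerMillion
  )
}

// e.g. 12k input / 2k output on claude-sonnet-4 ($3 / $15 per million):
// 0.012 * 3 + 0.002 * 15 = $0.066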

🙏 Credits and Acknowledgments

Original Authors

This is a fork of the original Codebuff project. Special thanks to the original creators for building such an innovative multi-agent AI coding assistant.

Original Features Maintained

  • Multi-agent architecture with specialized agents
  • Natural language code editing
  • TypeScript SDK with full customization
  • Support for any model on OpenRouter
  • Published agent marketplace
  • Comprehensive tool system

Enhancements by gensart-projs

  • Centralized LLM Provider Configuration System
  • Dynamic Model Discovery and Health Monitoring
  • Enhanced Multi-tenant Configuration Support
  • Improved Extensibility for New Providers
  • Hot Reload Configuration Management
  • Advanced Routing and Fallback Strategies

Get started

Install from Fork (Development/Testing)

Since this is a fork and not published to npm, you'll need to build and install locally:

CLI Installation:

# Clone the fork
git clone https://github.com/gensart-projs/codebuff.git
cd codebuff

# Install dependencies and build
bun install
bun run build

# Option 1: Link for global usage (development)
bun link

# Option 2: Run directly without linking
bun run start-bin --cwd /path/to/your/project

SDK Installation:

# Navigate to SDK directory
cd sdk
bun install
bun run build

# Link for local development
bun link

# Use in your project
bun link @codebuff/sdk

Production Considerations:

  • This fork is for development and experimentation
  • For production use, consider building your own packages
  • Consider publishing to a private npm registry
  • Use the original Codebuff for stable releases

Configuration Setup

  1. Create the configuration directory:

mkdir -p .config/llm-providers

  2. Copy the example configuration:

cp node_modules/@codebuff/sdk/llm-config/llm-providers.example.json .config/llm-providers/config.json

  3. Configure your API keys:

# Set environment variables for your providers
export OPEN_ROUTER_API_KEY="your-openrouter-key"
export OPEN_AI_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GEMINI_API_KEY="your-gemini-key"

  4. Customize providers and models in the configuration file

Resources

Enhanced Configuration: Configuration Guide

Migration Guide: Migration Guide

Testing Guide: Testing Documentation

Running Codebuff locally: local-development.md

Original Documentation: codebuff.com/docs

Community: Discord

Support: support@codebuff.com

Fork Issues: gensart-projs/codebuff/issues


📈 Development Status

This enhanced version is actively maintained by gensart-projs. We welcome contributions, bug reports, and feature requests specific to the configuration system and multi-provider enhancements.

For issues related to the original Codebuff functionality, please refer to the original repository.
