A FastAPI-based API gateway for multiple LLM providers (OpenAI, Anthropic, DeepSeek).
- 🚀 Multiple LLM Provider Support
  - OpenAI
  - Anthropic
  - DeepSeek
  - Easily extensible for more providers
- 🔄 Unified API Interface
  - Compatible with OpenAI's chat completion API
  - Streaming support
  - Automatic provider selection based on model name
- 🔍 Advanced Logging System
  - JSON structured logging
  - Request tracing with `trace_id`
  - Colored console output
  - Log rotation
  - Docker-friendly logging
- 🛡️ Built-in Security
  - Rate limiting
  - API key validation
  - CORS protection
  - Production-ready security checks
- 🎯 Production Ready
  - Health checks
  - Docker support
  - Environment-based configuration
  - Comprehensive error handling
Quick start with Docker:

- Clone the repository:

  ```bash
  git clone <repository-url>
  cd llm-api-gateway
  ```

- Create and configure your `.env` file:

  ```bash
  cp .env.example .env
  # Edit .env with your API keys and settings
  ```

- Start the service:

  ```bash
  docker-compose up -d
  ```

The API will be available at http://localhost:8000.
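Once the container is up, one quick way to confirm the gateway is reachable is to fetch FastAPI's standard OpenAPI schema at `/openapi.json`. This is only a minimal sketch and assumes the default host, port, and OpenAPI URL have not been changed:

```python
# Minimal reachability check against the running gateway.
# Assumes the docker-compose defaults: host localhost, port 8000.
import requests

resp = requests.get("http://localhost:8000/openapi.json", timeout=5)
resp.raise_for_status()
print("Gateway is up; exposed paths:", sorted(resp.json()["paths"]))
```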
To run the service without Docker:

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure environment:

  ```bash
  cp .env.example .env
  # Edit .env with your API keys and settings
  ```

- Run the application:

  ```bash
  uvicorn app.main:app --host 0.0.0.0 --port 8000
  ```

Configuration is driven by environment variables (an illustrative loading sketch follows the list):

- `OPENAI_API_KEY`: OpenAI API key
- `ANTHROPIC_API_KEY`: Anthropic API key
- `DEEPSEEK_API_KEY`: DeepSeek API key
- `DEBUG`: Enable debug mode (default: false)
- `ENV`: Environment (development/production)
- `FORCE_COLOR`: Enable colored logging output (default: true)
- `RATE_LIMIT_ENABLED`: Enable rate limiting (default: true)
- `RATE_LIMIT_REQUESTS`: Number of requests allowed per window (default: 100)
- `RATE_LIMIT_PERIOD`: Time window in seconds (default: 60)
- `LOG_LEVEL`: Logging level (default: INFO)
- `LOG_DIR`: Log directory (default: logs)
- `LOG_MAX_BYTES`: Maximum log file size (default: 10MB)
- `LOG_BACKUP_COUNT`: Number of rotated backup files (default: 5)
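The actual `app/core/config/settings.py` is not reproduced in this README. Purely as an illustration of how these variables could map to typed settings, a `pydantic-settings` class might look like the following; the field names mirror the variables above, everything else is an assumption:

```python
# Illustrative only: the real settings.py in this repository may differ.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    # Reads from the environment and from .env; matching is case-insensitive,
    # so OPENAI_API_KEY populates openai_api_key, and so on.
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    openai_api_key: str = ""
    anthropic_api_key: str = ""
    deepseek_api_key: str = ""

    debug: bool = False
    env: str = "development"
    force_color: bool = True

    rate_limit_enabled: bool = True
    rate_limit_requests: int = 100
    rate_limit_period: int = 60

    log_level: str = "INFO"
    log_dir: str = "logs"
    log_max_bytes: int = 10 * 1024 * 1024  # 10MB
    log_backup_count: int = 5


settings = Settings()
```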
Once running, visit:
- OpenAPI documentation: http://localhost:8000/docs
- ReDoc documentation: http://localhost:8000/redoc
`POST /api/v1/chat/completions`: Chat completion endpoint (example request below)

- Compatible with OpenAI's chat completion API
- Supports streaming responses
- Automatic provider selection based on model prefix
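Because the endpoint is OpenAI-compatible, requests use the familiar chat-completion payload. The snippet below is a sketch: the model name is a placeholder (the model prefix is what drives provider routing), and if API key validation is enabled the request will also need whatever auth header the gateway is configured to expect:

```python
# Sketch of a non-streaming chat completion request to the gateway.
# "gpt-4o-mini" is a placeholder; use any model your configured providers support.
import requests

resp = requests.post(
    "http://localhost:8000/api/v1/chat/completions",
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
    # Add an auth header here if API key validation is enabled.
    timeout=30,
)
resp.raise_for_status()
# Assumes an OpenAI-style response body.
print(resp.json()["choices"][0]["message"]["content"])
```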
Project structure:

```text
app/
├── api/
│   └── v1/
│       └── endpoints.py
├── core/
│   ├── config/
│   │   └── settings.py
│   ├── middleware/
│   │   ├── rate_limit.py
│   │   └── request_logging.py
│   ├── providers/
│   │   └── http_client.py
│   ├── context.py
│   ├── exceptions.py
│   ├── handlers.py
│   └── logging_config.py
├── services/
│   └── chat/
│       └── service.py
└── main.py
```
To add a new provider:

- Create a new provider class in `app/core/providers/` (see the sketch below)
- Implement the required interface methods
- Add provider configuration in `settings.py`
- Register the provider in `PROVIDER_CONFIGS`
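The required interface and the exact shape of `PROVIDER_CONFIGS` live in the source, not in this README, so the following is only a hypothetical sketch of what a new provider might look like; the class name, method names, base URL, and registration line are all assumptions:

```python
# Hypothetical provider sketch. The real base interface and PROVIDER_CONFIGS
# format are defined in app/core/providers/ and settings.py and may differ.
from typing import Any, AsyncIterator

import httpx


class ExampleProvider:
    def __init__(self, api_key: str, base_url: str = "https://api.example.com/v1") -> None:
        self.client = httpx.AsyncClient(
            base_url=base_url,
            headers={"Authorization": f"Bearer {api_key}"},
        )

    async def chat_completion(self, payload: dict[str, Any]) -> dict[str, Any]:
        # Forward the OpenAI-style payload to the upstream provider.
        resp = await self.client.post("/chat/completions", json=payload)
        resp.raise_for_status()
        return resp.json()

    async def stream_chat_completion(self, payload: dict[str, Any]) -> AsyncIterator[str]:
        # Yield raw SSE lines from the upstream streaming response.
        async with self.client.stream(
            "POST", "/chat/completions", json={**payload, "stream": True}
        ) as resp:
            resp.raise_for_status()
            async for line in resp.aiter_lines():
                if line:
                    yield line


# Registration would then follow the existing pattern, e.g. (hypothetical):
# PROVIDER_CONFIGS["example"] = {"class": ExampleProvider, "model_prefix": "example-"}
```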
Build and run the image directly:

```bash
docker build -t llm-api-gateway .

docker run -d \
  -p 8000:8000 \
  -v ./logs:/app/logs \
  --env-file .env \
  llm-api-gateway
```

Or use Docker Compose:

```bash
docker-compose up -d
```

The application's logging system provides the following (a hypothetical wiring sketch appears below):
- JSON structured logging for file output
- Colored console output (configurable via `FORCE_COLOR`)
- Request tracing with `trace_id`
- Automatic log rotation
- Docker-friendly logging configuration
- Console: Colored, human-readable format
- File: JSON format with additional metadata
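How this is wired lives in `app/core/logging_config.py`. As a rough, hypothetical sketch only (not the repository's actual code), a setup producing output like the examples below could look like this:

```python
# Hypothetical sketch of the kind of wiring app/core/logging_config.py might
# contain: a rotating JSON file handler plus a readable console handler, with
# trace_id taken from a contextvar (cf. app/core/context.py). All names and
# details here are assumptions, not the repository's actual implementation.
import json
import logging
import os
from contextvars import ContextVar
from logging.handlers import RotatingFileHandler

trace_id_var: ContextVar[str] = ContextVar("trace_id", default="-")


class TraceIdFilter(logging.Filter):
    """Attach the current trace_id to every record passing through a handler."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.trace_id = trace_id_var.get()
        return True


class JsonFormatter(logging.Formatter):
    """Render records as JSON objects, matching the example log below."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps(
            {
                "trace_id": getattr(record, "trace_id", "-"),
                "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
                "level": record.levelname,
                "logger": record.name,
                "message": record.getMessage(),
            }
        )


def setup_logging(log_dir: str = "logs", level: str = "INFO") -> None:
    os.makedirs(log_dir, exist_ok=True)

    file_handler = RotatingFileHandler(
        os.path.join(log_dir, "app.log"), maxBytes=10 * 1024 * 1024, backupCount=5
    )
    file_handler.setFormatter(JsonFormatter())

    console_handler = logging.StreamHandler()
    console_handler.setFormatter(
        logging.Formatter("%(asctime)s [%(levelname)s] %(name)s: %(message)s")
    )

    root = logging.getLogger()
    root.setLevel(level)
    trace_filter = TraceIdFilter()
    for handler in (file_handler, console_handler):
        handler.addFilter(trace_filter)
        root.addHandler(handler)
```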
Example console output:

```text
2024-01-25 10:30:45 [INFO] app.main: Server started
2024-01-25 10:30:46 [INFO] app.api: Request received [trace_id: abc-123]
```
Example JSON log:

```json
{
  "trace_id": "abc-123",
  "timestamp": "2024-01-25T10:30:45",
  "level": "INFO",
  "logger": "app.main",
  "message": "Server started"
}
```

To contribute:

- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request