ARL is a multi-agent scientific research automation system built on Google's Agent Development Kit (ADK).
ARL automates the complete scientific research pipeline:
- Literature review and paper ingestion
- Hypothesis generation from research gaps
- Experiment design and validation
- Python code generation for experiments
- Sandboxed execution with result capture
- Statistical analysis and hypothesis validation (see the sketch after this list)
- Research report generation
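As a rough illustration of what the analysis and validation stage amounts to, the snippet below compares experiment runs against a baseline with a significance test. It is a minimal sketch using SciPy and made-up numbers, not ARL's actual analysis code.

# Illustrative only: the kind of check the analysis stage performs.
# SciPy, the sample values, and the 0.05 threshold are assumptions.
import numpy as np
from scipy import stats

baseline = np.array([0.71, 0.69, 0.73, 0.70, 0.72])    # control runs
candidate = np.array([0.78, 0.80, 0.77, 0.79, 0.81])   # runs under the new hypothesis

t_stat, p_value = stats.ttest_ind(candidate, baseline, equal_var=False)
print(f"t={t_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Hypothesis supported at the 5% significance level.")
else:
    print("No significant difference; hypothesis not supported.")

To get started: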
# Install uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install ARL
uv pip install -e .
# Set API key
export GOOGLE_API_KEY="your-key"
# Initialize
arl init
# Create project
arl project create --name "My Research" --domain cs
# Run research
arl research run --project <id> --request "Test hypothesis" --auto

To use the web UI instead, start the backend and frontend in separate terminals.

Terminal 1 - Backend:
cd arl-backend
# Start Docker services (PostgreSQL + Redis)
sudo systemctl start docker # If not already running
docker compose up -d
# Install dependencies and run migrations
uv sync
.venv/bin/alembic upgrade head
# Start backend
.venv/bin/uvicorn app.main:app --reload

See arl-backend/SETUP.md for detailed setup instructions.
Terminal 2 - Frontend:
cd arl-frontend
npm install
echo "VITE_API_BASE_URL=http://localhost:8000" > .env
npm run dev

Access the UI at http://localhost:5173
- Multi-domain support: Computer Science, Biology, Physics, General Research
- Provider-agnostic LLM: Google Gemini, OpenAI, Anthropic, Azure OpenAI
- Hybrid deployment: Run locally or on Google Cloud
- A2A Protocol: Standardized agent-to-agent communication for microservices architecture
- Interactive collaboration: Human-in-the-loop at any stage
- Reproducible research: Complete experiment versioning and artifact management
- Docker sandbox: Secure, isolated execution environment (sketched below)
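The sandbox idea is that generated experiment code runs in a network-disabled, resource-limited container. Below is a minimal sketch using the docker Python SDK; the image name, limits, and mount paths are illustrative assumptions, not ARL's actual sandbox configuration.

# Sketch of sandboxed execution using the docker SDK (pip install docker).
# Image, resource limits, and paths are assumptions for illustration.
import docker

client = docker.from_env()
logs = client.containers.run(
    image="python:3.10-slim",                 # assumed base image
    command=["python", "/work/experiment.py"],
    volumes={"/tmp/arl_run": {"bind": "/work", "mode": "ro"}},
    network_disabled=True,                    # no network inside the sandbox
    mem_limit="512m",                         # cap memory
    remove=True,                              # clean up the container afterwards
)
print(logs.decode())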
- Installation Guide - Setup and prerequisites
- Quickstart Guide - Get running in 5 minutes
- User Guide - Complete feature documentation
- Architecture Overview - System design and components
- A2A Protocol Guide - Agent-to-agent communication and microservices
- Azure OpenAI Setup - Configure Azure OpenAI
- Examples Directory - Sample workflows and code
- Research Workflow Guide - Detailed workflow documentation
Built on Google ADK with specialized agents:
- Orchestrator: Main workflow coordination
- Literature Agent: Paper ingestion and analysis
- Hypothesis Agent: Testable hypothesis generation
- Experiment Designer: Protocol and parameter specification
- Code Generator: Python code generation with validation
- Execution Engine: Docker sandbox execution
- Analysis Agent: Statistical validation and interpretation
See Architecture Overview for detailed system design; a minimal composition sketch follows.
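The sketch below shows how such a pipeline might be wired together from ADK primitives (LlmAgent and SequentialAgent). The agent names, instructions, and model string are illustrative assumptions, not ARL's actual agent definitions; in ADK the composite agent would additionally be executed through a Runner and a session service, which is omitted here.

# Illustrative composition only: names, instructions, and the model string
# are assumptions, not ARL's implementation.
from google.adk.agents import LlmAgent, SequentialAgent

literature = LlmAgent(
    name="literature_agent",
    model="gemini-2.0-flash",
    instruction="Summarize relevant papers and identify research gaps.",
)
hypothesis = LlmAgent(
    name="hypothesis_agent",
    model="gemini-2.0-flash",
    instruction="Propose testable hypotheses for the identified gaps.",
)

research_pipeline = SequentialAgent(
    name="research_pipeline",
    sub_agents=[literature, hypothesis],  # the real pipeline adds design, codegen, execution, analysis
)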
arl/
├── docs/                  # Documentation
│   ├── installation.md
│   ├── quickstart.md
│   ├── user-guide.md
│   ├── architecture.md
│   ├── testing.md
│   ├── azure-setup.md
│   └── frontend-development.md
├── arl/                   # Main package
│   ├── adk_agents/        # ADK agent implementations
│   ├── core/              # Core business logic
│   ├── integrations/      # External integrations
│   ├── cli/               # Command-line interface
│   └── storage/           # Data persistence
├── arl-frontend/          # React web UI
├── tests/                 # Test suite
├── examples/              # Example workflows
└── scripts/               # Utility scripts
- Framework: Google ADK 1.0+, Python 3.10+
- LLM: LiteLLM (Google Gemini, OpenAI, Anthropic, Azure OpenAI; example below)
- Execution: Docker (sandboxed experiments)
- Storage: SQLite (local), Cloud SQL (cloud)
- Scientific Stack: NumPy, pandas, scikit-learn, PyTorch
- Frontend: React 18, TypeScript, Tailwind CSS, shadcn/ui
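Because the LLM layer goes through LiteLLM, switching providers mostly means changing the model string and exporting the matching provider API key. The model identifiers below are common LiteLLM examples; how ARL exposes this setting in its own configuration is not shown here.

# Provider-agnostic completions via LiteLLM; each provider needs its API key
# in the environment. Model identifiers are examples, not an ARL setting.
import litellm

for model in ("gemini/gemini-1.5-flash", "gpt-4o-mini", "anthropic/claude-3-5-sonnet-20240620"):
    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "Name one open problem in your field."}],
    )
    print(model, "->", resp.choices[0].message.content[:80])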
We welcome contributions! Please see the development guides in docs/ (e.g., testing.md and frontend-development.md).
MIT License - See LICENSE file
If you use ARL in your research, please cite:
@software{arl2025,
  title={AI Autonomous Research Lab},
  author={ARL Team},
  year={2025},
  url={https://github.com/your-org/arl}
}