MetaAgent is a Mastra-based platform for building, testing, and deploying AI agents. Create agents through a guided interview process, then deploy them locally or to the cloud.
Prerequisites:

- Node.js 20+ and pnpm
- Docker and Docker Compose (or Podman with `podman compose`)
- Git
Quick start:

- Clone and install dependencies:

  ```bash
  git clone <repo-url>
  cd meta-agent
  pnpm i
  ```
- Start infrastructure:

  ```bash
  # Start PostgreSQL, Redis, and MinIO
  pnpm db:up          # or: pnpm db:up:podman

  # Initialize MinIO bucket (requires mc client)
  brew install minio/stable/mc    # macOS
  ./infra/scripts/minio-init.sh
  ```
- Configure environment:

  ```bash
  cp .env.example .env
  # Edit .env with your settings
  ```

- Build and start services:

  ```bash
  # Build all packages
  pnpm -w build

  # Start all services in development mode
  pnpm -w dev
  ```
- Access the application:
  - Web UI: http://localhost:3000
  - MinIO Console: http://localhost:9001 (minioadmin/minioadmin)
MetaAgent follows a microservices architecture:
- Web App (Remix): User interface for agent creation and management
- Builder Service: Processes agent specifications and generates projects
- Runner Service: Executes agents in controlled environments
- Queue System: BullMQ with Redis for background job processing
- Database: PostgreSQL with row-level security (RLS) for multi-tenant data isolation
- Object Storage: S3-compatible storage (MinIO) for artifacts
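
To make the background-processing flow concrete, here is a minimal, hypothetical sketch of how the web app could enqueue a build job that the builder service picks up via BullMQ (the queue name, job name, and payload shape are assumptions, not the repo's actual contract):

```ts
// Hypothetical sketch only: queue name, job name, and payload are assumptions.
import { Queue, Worker } from 'bullmq';
import IORedis from 'ioredis';

const connection = new IORedis(process.env.REDIS_URL ?? 'redis://localhost:6379', {
  maxRetriesPerRequest: null, // BullMQ workers require this ioredis setting
});

// Producer side (web app): enqueue a build once a spec passes validation
const buildQueue = new Queue('agent-builds', { connection });

export async function requestBuild(specId: string, userId: string) {
  await buildQueue.add('build-agent', { specId, userId });
}

// Consumer side (builder service): scaffold the Mastra project in the background
new Worker(
  'agent-builds',
  async (job) => {
    const { specId, userId } = job.data as { specId: string; userId: string };
    console.log(`building ${specId} for ${userId}`);
    // ...generate the project and upload the artifact to object storage
  },
  { connection },
);
```

Decoupling the web app from the builder this way keeps long-running scaffolding work off the request path; Redis carries only job metadata, while artifacts land in object storage.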
Key features:

- Template-based Agent Creation: Choose from chatbot, web automation, or API copilot templates
- Interactive Interview Process: Guided questions to build agent specifications
- Real-time Spec Validation: Live validation with JSON Schema and Monaco editor
- Project Scaffolding: Generate complete, deployable Mastra projects
- Background Processing: Async job processing with queue management
- Multi-tenant Architecture: User isolation with row-level security
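
As an illustration of the spec-validation feature, the sketch below checks a draft spec against a JSON Schema with Ajv; the real schema ships with the `spec` package, and the fields shown here are placeholders:

```ts
// Illustrative only: the actual agent-spec schema lives in @metaagent/spec.
import Ajv from 'ajv';

const agentSpecSchema = {
  type: 'object',
  required: ['name', 'template'],
  properties: {
    name: { type: 'string', minLength: 1 },
    template: { enum: ['chatbot', 'web-automation', 'api-copilot'] },
  },
  additionalProperties: true,
};

const ajv = new Ajv({ allErrors: true });
const validateSpec = ajv.compile(agentSpecSchema);

const draft = { name: 'support-bot', template: 'chatbot' };
if (!validateSpec(draft)) {
  // In the web UI these errors would be surfaced live in the Monaco editor
  console.error(validateSpec.errors);
}
```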
Core configuration (see `.env.example` for the complete list):
```bash
# Database
DATABASE_URL=postgresql://user@localhost:5432/metaagent
REDIS_URL=redis://localhost:6379

# Object Storage (MinIO/S3)
S3_ENDPOINT=http://localhost:9000
S3_KEY=minioadmin
S3_SECRET=minioadmin
BUILDER_BUCKET=metaagent-artifacts

# Security - Egress Allow-List
ALLOW_HTTP_HOSTS=["api.openai.com","google.serper.dev"]

# Services
PORT=3000
BUILDER_PORT=3101
RUNNER_PORT=3102
```
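
As a sketch of how a service might consume this configuration at startup, assuming zod for validation (the repo's actual config loader may differ):

```ts
// Hypothetical config loader; variable names match .env.example above.
import { z } from 'zod';

const Env = z.object({
  DATABASE_URL: z.string().url(),
  REDIS_URL: z.string().url(),
  S3_ENDPOINT: z.string().url(),
  S3_KEY: z.string(),
  S3_SECRET: z.string(),
  BUILDER_BUCKET: z.string(),
  // Stored as a JSON array string, e.g. ["api.openai.com","google.serper.dev"]
  ALLOW_HTTP_HOSTS: z.string().default('[]').transform((v): string[] => JSON.parse(v)),
  PORT: z.coerce.number().default(3000),
  BUILDER_PORT: z.coerce.number().default(3101),
  RUNNER_PORT: z.coerce.number().default(3102),
});

// Throws at startup with a readable error if something is missing or malformed
export const env = Env.parse(process.env);
```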
Common development commands:

```bash
# Build all packages
pnpm -w build

# Run all services
pnpm -w dev

# Run tests
pnpm -w test

# Run a specific service
pnpm --filter @metaagent/web dev
pnpm --filter @metaagent/builder dev

# Database migrations
# (TBD - migrations system pending)
```

Project structure:

```
├── apps/
│   ├── desktop/           # Electron app (Week 7)
│   └── web/               # Remix web application
├── services/
│   ├── builder/           # Agent building and scaffolding service
│   └── runner/            # Agent execution service
├── packages/
│   ├── db/                # Database schema and utilities
│   ├── templates/         # Agent templates and metadata
│   ├── spec/              # Specification types and validation
│   ├── queue/             # BullMQ job queue utilities
│   ├── object-storage/    # S3/MinIO storage utilities
│   └── ...                # Additional shared packages
├── infra/
│   ├── docker-compose.yml # Local development infrastructure
│   └── scripts/           # Setup and deployment scripts
└── docs/                  # Documentation and specifications
```
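
The `object-storage` package presumably wraps an S3 client pointed at the MinIO endpoint configured above. A minimal sketch with `@aws-sdk/client-s3` (the helper and key layout are illustrative, not the package's actual API):

```ts
// Illustrative only: how artifact upload to MinIO/S3 might look.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { readFile } from 'node:fs/promises';

const s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT ?? 'http://localhost:9000',
  region: 'us-east-1',  // MinIO ignores the region, but the SDK requires one
  forcePathStyle: true, // path-style URLs are needed for MinIO
  credentials: {
    accessKeyId: process.env.S3_KEY ?? 'minioadmin',
    secretAccessKey: process.env.S3_SECRET ?? 'minioadmin',
  },
});

export async function uploadArtifact(specId: string, archivePath: string) {
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.BUILDER_BUCKET ?? 'metaagent-artifacts',
      Key: `builds/${specId}.tar.gz`, // hypothetical key layout
      Body: await readFile(archivePath),
      ContentType: 'application/gzip',
    }),
  );
}
```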
Documentation:

- Architecture Overview
- Security
- Project Scaffolder
- Development Decisions
- Delivery Plan
- Complete Docs Index
Contributing:

- Follow the feature PR checklist
- Ensure tests pass: `pnpm -w test`
- Build successfully: `pnpm -w build`
- Update documentation as needed
A toolbox tool is provided that runs the OpenAI Codex CLI to review a GitHub PR.
Setup:
- Install Codex CLI: `npm i -g @openai/codex` (or `brew install codex`)
- Export toolbox path: `export AMP_TOOLBOX="$(pwd)/toolboxes"`
- Optional: set `GITHUB_TOKEN` for private repos and higher rate limits
Usage in Amp:
- Tool name: `codex-code-review`
- Required param: `pr_url` (e.g., `https://github.com/owner/repo/pull/123`)
- Optional params: `model` (default `GPT-5-Codex`), `github_token`, `max_diff_bytes`
Example invocation:
- pr_url: openai/codex#1
- model: GPT-5-Codex
The tool fetches the PR diff via the GitHub API and invokes `codex exec` with a structured review prompt.
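
For reference, the core of such a tool can be sketched as follows (simplified and hypothetical; the real script, prompt, and parameter handling live in the toolbox):

```ts
// Simplified sketch of the codex-code-review flow: fetch the PR diff from the
// GitHub API, then hand it to `codex exec` with a review prompt.
import { spawnSync } from 'node:child_process';

async function reviewPr(prUrl: string, githubToken?: string): Promise<string> {
  // e.g. https://github.com/owner/repo/pull/123 -> owner, repo, 123
  const m = prUrl.match(/github\.com\/([^/]+)\/([^/]+)\/pull\/(\d+)/);
  if (!m) throw new Error(`Not a GitHub PR URL: ${prUrl}`);
  const [, owner, repo, number] = m;

  // Requesting the diff media type returns the raw unified diff
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/pulls/${number}`, {
    headers: {
      Accept: 'application/vnd.github.v3.diff',
      ...(githubToken ? { Authorization: `Bearer ${githubToken}` } : {}),
    },
  });
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const diff = await res.text(); // a real implementation would cap this at max_diff_bytes

  const prompt = `Review this pull request diff and report bugs, risks, and style issues:\n\n${diff}`;
  const result = spawnSync('codex', ['exec', prompt], { encoding: 'utf8' });
  return result.stdout;
}

reviewPr('https://github.com/owner/repo/pull/123', process.env.GITHUB_TOKEN).then(console.log);
```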
[License TBD]