This directory contains Pulumi infrastructure code to deploy the MCP Registry service to a Kubernetes cluster. It supports deploying the infrastructure locally (using an existing kubeconfig, e.g. with minikube) or to Google Cloud Platform (GCP).
Pre-requisites:

- Pulumi CLI installed
- Access to a Kubernetes cluster via kubeconfig. You can run a cluster locally with minikube.

To deploy locally:

- Ensure your kubeconfig points at the cluster you want to use. For minikube, run `minikube start && minikube tunnel`.
- Run `make local-up` to deploy the stack.
- Access the registry via the ingress load balancer. You can find its external IP with `kubectl get svc ingress-nginx-controller -n ingress-nginx`. Then run `curl -H "Host: local.registry.modelcontextprotocol.io" -k https://<EXTERNAL-IP>/v0/ping` to check that the service is up.
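The last step can be scripted; a small sketch, assuming the ingress-nginx service exposes a LoadBalancer IP (as it does while `minikube tunnel` is running):

```shell
# Look up the ingress controller's external IP, then hit the ping endpoint.
EXTERNAL_IP="$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
curl -H "Host: local.registry.modelcontextprotocol.io" -k "https://${EXTERNAL_IP}/v0/ping"
```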
The stack is configured out of the box for local development. If you want to make changes, run commands like:

```shell
PULUMI_CONFIG_PASSPHRASE="" pulumi config set mcp-registry:environment local
PULUMI_CONFIG_PASSPHRASE="" pulumi config set mcp-registry:githubClientSecret --secret <some-secret-value>
```

Running `make local-destroy` and deleting the cluster (with minikube: `minikube delete`) will reset you back to a clean state.
Note: Deployments are automatically handled by GitHub Actions, with separate workflows for staging and production:

- Staging: All merges to the `main` branch automatically build and deploy the latest code to the staging environment via deploy-staging.yml. Staging always uses the `:main` Docker image tag.
- Production: Deployment requires explicit configuration of the Docker image tag in `Pulumi.gcpProd.yaml` and is triggered automatically via deploy-production.yml when this config file is pushed to `main`. This GitOps-style approach provides manual control and an audit trail for production versions.
To deploy a specific version to production:

- Cut a release (if deploying a new version):

  ```shell
  # Via GitHub UI: https://github.com/modelcontextprotocol/registry/releases
  # Or via gh CLI:
  gh release create v1.2.3 --generate-notes
  ```

  This builds Docker images tagged as `1.2.3` and `latest` (note: image tags do not include the `v` prefix).

- Update the production image tag in `Pulumi.gcpProd.yaml`:

  ```shell
  # Edit deploy/Pulumi.gcpProd.yaml
  # Change line: mcp-registry:imageTag: 1.2.3 (note: no 'v' prefix for Docker image tags)
  git add deploy/Pulumi.gcpProd.yaml
  git commit -m "Deploy version 1.2.3 to production"
  git push
  ```

- The production deployment workflow will automatically trigger and deploy the specified version.
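The release-tag-to-image-tag convention above amounts to stripping the leading `v`; a quick illustrative sketch:

```shell
# A release tag like v1.2.3 maps to the Docker image tag 1.2.3 (no 'v' prefix).
release_tag="v1.2.3"
image_tag="${release_tag#v}"   # strip a leading 'v' if present
echo "$image_tag"              # → 1.2.3
```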
Manual Override: The steps below are preserved in case a manual deployment override is needed.
Pre-requisites:

- Pulumi CLI installed
- A Google Cloud Platform (GCP) account
- A GCP Service Account with appropriate permissions

Steps:

- Create a project: `gcloud projects create mcp-registry-prod`
- Set the project: `gcloud config set project mcp-registry-prod`
- Enable required APIs: `gcloud services enable storage.googleapis.com && gcloud services enable container.googleapis.com`
- Create a service account with the necessary permissions, and get its key:

  ```shell
  gcloud iam service-accounts create pulumi-svc
  sleep 10
  gcloud projects add-iam-policy-binding mcp-registry-prod --member="serviceAccount:pulumi-svc@mcp-registry-prod.iam.gserviceaccount.com" --role="roles/container.admin"
  gcloud projects add-iam-policy-binding mcp-registry-prod --member="serviceAccount:pulumi-svc@mcp-registry-prod.iam.gserviceaccount.com" --role="roles/compute.admin"
  gcloud projects add-iam-policy-binding mcp-registry-prod --member="serviceAccount:pulumi-svc@mcp-registry-prod.iam.gserviceaccount.com" --role="roles/storage.admin"
  gcloud projects add-iam-policy-binding mcp-registry-prod --member="serviceAccount:pulumi-svc@mcp-registry-prod.iam.gserviceaccount.com" --role="roles/storage.hmacKeyAdmin"
  gcloud iam service-accounts add-iam-policy-binding $(gcloud projects describe mcp-registry-prod --format="value(projectNumber)")-compute@developer.gserviceaccount.com --member="serviceAccount:pulumi-svc@mcp-registry-prod.iam.gserviceaccount.com" --role="roles/iam.serviceAccountUser"
  gcloud iam service-accounts keys create sa-key.json --iam-account=pulumi-svc@mcp-registry-prod.iam.gserviceaccount.com
  ```

- Create a GCS bucket for Pulumi state: `gsutil mb gs://mcp-registry-prod-pulumi-state`
- Set Pulumi's backend to GCS: `pulumi login gs://mcp-registry-prod-pulumi-state`
- Get the passphrase file `passphrase.prod.txt` from the registry maintainers
- Init the GCP stack: `PULUMI_CONFIG_PASSPHRASE_FILE=passphrase.prod.txt pulumi stack init gcpProd`
- Set the GCP credentials in Pulumi config:

  ```shell
  # Base64 encode the service account key and set it
  pulumi config set --secret gcp:credentials "$(base64 < sa-key.json)"
  ```

- Deploy: `make prod-up`
- Access the registry via the ingress load balancer. You can find its external IP with `kubectl get svc ingress-nginx-controller -n ingress-nginx`. Then run `curl -H "Host: prod.registry.modelcontextprotocol.io" -k https://<EXTERNAL-IP>/v0/ping` to check that the service is up.
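The `gcp:credentials` value set above must decode back to the exact key file. A minimal round-trip sketch with a stand-in key (the real `sa-key.json` comes from the `gcloud ... keys create` step):

```shell
# Stand-in for sa-key.json, just to demonstrate the encode/decode round-trip.
printf '%s' '{"type":"service_account"}' > /tmp/sa-key.json
encoded="$(base64 < /tmp/sa-key.json)"          # what gets stored in Pulumi config
decoded="$(printf '%s' "$encoded" | base64 -d)" # what the provider sees after decoding
echo "$decoded"
```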
```
├── main.go                   # Pulumi program entry point
├── Pulumi.yaml               # Project configuration
├── Pulumi.local.yaml         # Local stack configuration
├── Pulumi.gcpProd.yaml       # GCP production stack configuration
├── Pulumi.gcpStaging.yaml    # GCP staging stack configuration
├── Makefile                  # Build and deployment targets
├── go.mod                    # Go module dependencies
├── go.sum                    # Go module checksums
└── pkg/                      # Infrastructure packages
    ├── k8s/                  # Kubernetes deployment components
    │   ├── backup.go         # Database backup configuration
    │   ├── cert_manager.go   # SSL certificate management
    │   ├── deploy.go         # Deployment orchestration
    │   ├── ingress.go        # Ingress controller setup
    │   ├── monitoring.go     # Metrics and monitoring setup
    │   ├── postgres.go       # PostgreSQL database deployment
    │   └── registry.go       # MCP Registry deployment
    └── providers/            # Kubernetes cluster providers
        ├── types.go          # Provider interface definitions
        ├── gcp/              # Google Kubernetes Engine provider
        └── local/            # Local kubeconfig provider
```
- Pulumi program starts in `main.go`
- Configuration is loaded from Pulumi config files
- Provider factory creates the appropriate cluster provider (GCP or local)
- Cluster provider sets up Kubernetes access
- `k8s.DeployAll()` orchestrates the complete deployment:
  - Certificate manager for SSL/TLS
  - Ingress controller for external access
  - Database for data persistence
  - Backup infrastructure for the database
  - Monitoring and metrics collection
  - MCP Registry application
| Parameter | Description | Required |
|---|---|---|
| `environment` | Deployment environment (local/staging/prod) | Yes |
| `provider` | Kubernetes provider (local/gcp) | No (default: local) |
| `githubClientId` | GitHub OAuth Client ID | Yes |
| `githubClientSecret` | GitHub OAuth Client Secret | Yes |
| `imageTag` | Docker image tag for the production environment | Yes (prod only) |
| `gcpProjectId` | GCP Project ID (required when provider=gcp) | No |
| `gcpRegion` | GCP Region (default: us-central1) | No |
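Putting the table together, a GCP stack needs at least the following config set (illustrative sketch; the `<client-id>` and `<client-secret>` values are placeholders, and the passphrase file comes from the maintainers as described above):

```shell
PULUMI_CONFIG_PASSPHRASE_FILE=passphrase.prod.txt pulumi config set mcp-registry:environment prod
PULUMI_CONFIG_PASSPHRASE_FILE=passphrase.prod.txt pulumi config set mcp-registry:provider gcp
PULUMI_CONFIG_PASSPHRASE_FILE=passphrase.prod.txt pulumi config set mcp-registry:gcpProjectId mcp-registry-prod
PULUMI_CONFIG_PASSPHRASE_FILE=passphrase.prod.txt pulumi config set mcp-registry:imageTag 1.2.3
PULUMI_CONFIG_PASSPHRASE_FILE=passphrase.prod.txt pulumi config set mcp-registry:githubClientId <client-id>
PULUMI_CONFIG_PASSPHRASE_FILE=passphrase.prod.txt pulumi config set --secret mcp-registry:githubClientSecret <client-secret>
```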
The deployment uses K8up (a Kubernetes backup operator), which uses Restic under the hood. When running locally, backups are stored in a MinIO bucket; in staging and production, they are stored in a GCS bucket.
```shell
# Expose MinIO web console
kubectl port-forward -n minio svc/minio 9000:9000 9001:9001
```

Then open localhost:9001, log in with username `minioadmin` and password `minioadmin`, and navigate to the `k8up-backups` bucket.
Backups are encrypted using Restic. To access the backup data:

- Download the backup files from the bucket:

  ```shell
  # Local (MinIO) - ensure port-forward is active:
  # kubectl port-forward -n minio svc/minio 9000:9000 9001:9001
  AWS_ACCESS_KEY_ID=minioadmin AWS_SECRET_ACCESS_KEY=minioadmin \
    aws --endpoint-url http://localhost:9000 s3 sync s3://k8up-backups/ ./backup-files/

  # GCS (staging/production)
  gsutil -m cp -r gs://mcp-registry-{staging|prod}-backups/* ./backup-files/
  ```

- Install Restic
- Restore the backup:

  ```shell
  RESTIC_PASSWORD=password restic -r ./backup-files restore latest --target ./restored-files
  ```

  PostgreSQL data will be in `./restored-files/data/registry-pg-1/pgdata/`
To configure kubectl to access an existing GKE cluster:

```shell
# Login
gcloud auth login
gcloud auth application-default login

# For production
gcloud container clusters get-credentials mcp-registry-prod --zone us-central1-b --project mcp-registry-prod

# For staging
gcloud container clusters get-credentials mcp-registry-staging --zone us-central1-b --project mcp-registry-staging
```

Useful commands for inspecting the deployment:

```shell
# Check workloads and networking
kubectl get pods
kubectl get deployment
kubectl get svc
kubectl get ingress
kubectl get svc -n ingress-nginx

# Tail application logs
kubectl logs -l app=mcp-registry
kubectl logs -l app=postgres

# Inspect backups
kubectl describe schedule.k8up.io
kubectl get backup
```