This repository contains the infrastructure-as-code for Inkorporated, a self-hosted, open-source internal work environment designed for homelab deployments. It provides an integrated stack that combines infrastructure automation, GitOps deployment, and a rich set of productivity and collaboration tools.
Inkorporated now supports a Hybrid Cloud architecture, enabling workloads to run on both on-premises Proxmox clusters and public cloud providers (AWS, GCP).
- Proxmox: Hosts the core control plane and persistent workloads.
- Public Cloud (AWS/GCP): Used for burst capacity and temporary workloads.
- Networking: Hybrid nodes are connected over a VPN (Tailscale or WireGuard, depending on the deployment).
- Storage:
  - On-Prem: Longhorn / NFS
  - Cloud: Cloud-native storage classes (e.g., gp2 / standard)
See docs/ARCHITECTURE.md for detailed diagrams and topology.
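
As an illustration of the split, persistent workloads can be pinned to on-prem nodes with standard node affinity. The sketch below assumes a hypothetical `node-role.inkorporated.io/location` label on Proxmox and cloud nodes; the label name is not defined in this repository.

```yaml
# Hypothetical node label: node-role.inkorporated.io/location=onprem|cloud
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-persistent-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-persistent-app
  template:
    metadata:
      labels:
        app: example-persistent-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.inkorporated.io/location
                    operator: In
                    values: ["onprem"]   # keep stateful pods on the Proxmox cluster
      containers:
        - name: app
          image: nginx:stable            # placeholder image
```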
The Inkorporated project is organized into several key areas:
- Infrastructure as Code:
  - `infrastructure/terraform`: Modular Terraform for Hybrid Cloud provisioning (Proxmox, AWS, GCP)
  - `infrastructure/ansible`: Ansible roles for hybrid k3s setup (LXC, VMs, Cloud)
- GitOps Deployment: ArgoCD for declarative infrastructure and application management using ApplicationSets.
- Zero-trust Access: Cloudflare Tunnel for secure external access
- Centralized Authentication: Authentik SSO/OIDC integration
- Observability: Prometheus, Grafana, and Loki stack
- Storage Solutions: Longhorn and NFS CSI driver
- Backup & Recovery: Velero with MinIO
- Ingress & Routing: Traefik with forward-auth middleware
- Version Control: Gitea instance for GitOps workflow
- CI/CD: Gitea runners for automated deployments
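
The GitOps layer above centers on ArgoCD ApplicationSets. Below is a minimal sketch of a Git directory generator, assuming an `apps/*` directory layout and an illustrative Gitea repo URL (neither is taken verbatim from this repository):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform-apps
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://gitea.example.com/inkorporated/apps.git  # illustrative repo URL
        revision: main
        directories:
          - path: apps/*          # one Application per app directory
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://gitea.example.com/inkorporated/apps.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```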
All services follow a consistent deployment pattern:
- Create Namespace
- Create Core Deployment with appropriate resources
- Create Service
- Create PodDisruptionBudget (where applicable)
- Create ConfigMap (where applicable)
- Create Ingress (where applicable)
- Create Secrets (where applicable)
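
A condensed sketch of that pattern for a hypothetical `example-app` (image, ports, and resource values are placeholders):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:stable                  # placeholder image
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits: { cpu: 500m, memory: 256Mi }
---
apiVersion: v1
kind: Service
metadata:
  name: example-app
  namespace: example-app
spec:
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 80
---
# PodDisruptionBudget, ConfigMap, Ingress, and Secrets follow where applicable.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-app
  namespace: example-app
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example-app
```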
The project implements a centralized configuration management system that separates sensitive credentials from main settings:
- Main Settings File: `cline_mcp_settings.json` - Contains server configurations and settings (no sensitive data)
- Configuration File: `.devcontainer/.env` - Contains sensitive credentials and environment-specific settings (not version controlled)
- Environment-Specific Configurations: Different files for different environments (`.devcontainer/.env.dev`, `.devcontainer/.env.staging`, `.devcontainer/.env.prod`)
The repository now supports multiple environments with dedicated configuration directories:
- `config/environments/` - Environment-specific configurations
- `apps/environments/` - Environment-specific application overrides
- dev - Development environment
- staging - Staging environment
- autopush - Autopush environment
- uat - User Acceptance Testing environment
- canary - Canary environment for feature testing
- prod - Production environment
- priv - Private environment for sensitive services
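
How the environment overrides are wired together is app-specific; as one possible shape, a Kustomize overlay under `apps/environments/prod/` might look like the sketch below (the exact file layout is an assumption):

```yaml
# apps/environments/prod/kustomization.yaml (hypothetical path)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                 # shared manifests
patches:
  - target:
      kind: Deployment
      name: example-app
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3               # prod runs more replicas than dev
```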
The configuration system follows security best practices:
- Separation of Concerns: Sensitive credentials are separated from settings
- Version Control Safety: No sensitive data is committed to version control
- File Permissions: Config files are secured with restrictive permissions (600)
- Environment Variables: Configuration loaded at runtime
- Zero-trust Architecture: Cloudflare Tunnel for external access
- Centralized Authentication: Authentik for SSO
- Network Segmentation: pfSense firewall with VLANs
The project implements a multi-zone architecture built from the following core platform components:
- Cloudflare Tunnel (cloudflared): Zero-trust access to services
- Traefik: Ingress controller with TLS termination
- Authentik: Central identity provider with OIDC + 2FA enforcement
- Longhorn: Distributed block storage
- NFS CSI Driver: Synology NAS integration
- MinIO: S3 object storage for backups
- Velero: Backup and restore solution
- cert-manager: TLS automation
- MetalLB: LoadBalancer provider
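
As a sample of how these components are configured declaratively, a MetalLB address pool and L2 advertisement could look like this (the address range is an assumption, not the actual homelab subnet):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # assumed LAN range reserved for LoadBalancer IPs
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
```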
The platform bundles a broad catalog of self-hosted applications:
- Rocket.Chat: Team chat platform
- WorkAdventure: 2D virtual office
- Jitsi Meet: Video conferencing
- Coturn: TURN/STUN server for NAT traversal
- AppFlowy: Collaborative knowledge base
- LinkWarden: Bookmark manager
- Homebox: Inventory tracker
- Home Assistant: Smart home hub
- Kasm Workspaces: Browser-based workspaces
- Coder: Cloud IDE workspaces
- Ollama: Local LLM runner
- Open WebUI: Ollama web interface
- Langflow: Visual LangChain builder
- Kokoro TTS: Local TTS server
- Docling: Document parsing server
- SearXNG: Metasearch engine
- SurfSense: AI research agent
- Perplexica: AI search engine
- Vaultwarden: Bitwarden-compatible password manager
- HashiCorp Vault: Secrets management
The project follows a structured implementation approach:
- Validate physical infrastructure readiness
- Create Proxmox Cloud-Init template
- Workstation setup with required tools
- Create bootstrap repository with Terraform/Ansible
- Create apps repository with GitOps manifests
- Provision VMs with Terraform
- Install k3s with Ansible
- Bootstrap ArgoCD
- Deploy storage solutions (Longhorn, NFS CSI)
- Deploy database services (CloudNativePG, MongoDB)
- Deploy backup and monitoring stack
- Deploy core services (Authentik, Traefik, Cloudflared)
- Configure authentication providers and groups
- Deploy user dashboard (Homepage)
- Enable backups and testing
- Performance and security hardening
- Documentation runbook creation
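
The k3s installation step can be sketched as an Ansible playbook that wraps the upstream install script; the inventory group names and `k3s_token` variable below are assumptions, and the repository's actual roles may differ:

```yaml
# Hypothetical playbook sketch; inventory groups "k3s_server" and "k3s_agent" are assumptions.
- name: Install k3s server
  hosts: k3s_server
  become: true
  tasks:
    - name: Run the upstream k3s install script
      ansible.builtin.shell: |
        # Disabling the bundled Traefik is an assumption, since the stack deploys its own.
        curl -sfL https://get.k3s.io | sh -s - server --disable traefik
      args:
        creates: /usr/local/bin/k3s

- name: Install k3s agents
  hosts: k3s_agent
  become: true
  tasks:
    - name: Join agents to the cluster
      ansible.builtin.shell: |
        curl -sfL https://get.k3s.io | \
          K3S_URL=https://{{ hostvars[groups['k3s_server'][0]].ansible_host }}:6443 \
          K3S_TOKEN={{ k3s_token }} sh -s - agent
      args:
        creates: /usr/local/bin/k3s
```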
A comprehensive testing framework has been implemented to ensure reliability and security:
- Configuration validation tests
- Security permission checks
- Kubernetes manifest validation
- Integration testing for services
- Automated test suite with CI/CD integration
- Infrastructure health monitoring tests
- Backup and restore validation tests
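
With Gitea runners in place, the manifest-validation part of that suite can run as a Gitea Actions workflow. The sketch below is illustrative; the choice of kubeconform and the `apps/` path are assumptions:

```yaml
# .gitea/workflows/validate.yaml (illustrative)
name: validate-manifests
on:
  pull_request:
  push:
    branches: [main]

jobs:
  kubernetes-manifests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Download kubeconform
        run: |
          curl -sL https://github.com/yannh/kubeconform/releases/latest/download/kubeconform-linux-amd64.tar.gz \
            | tar xz
      - name: Validate manifests
        run: ./kubeconform -summary -strict -ignore-missing-schemas apps/
```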
The project follows these development workflow patterns:
- GitOps with ArgoCD for infrastructure and application deployment
- Environment variable configuration for domain flexibility
- Centralized configuration management system
- Kubernetes manifest pattern for custom resources
- Helm chart pattern for application deployment
- Service-oriented architecture with proper namespace separation
- MCP-based infrastructure automation workflows
- Automated security and compliance validation
The project uses the following tooling preferences:
- Container Orchestration: Kubernetes (k3s)
- Deployment Management: ArgoCD (GitOps)
- Authentication: Authentik (OIDC + SSO)
- Ingress Controller: Traefik with forward-auth
- Zero-trust Access: Cloudflare Tunnel (cloudflared)
- Network Security: pfSense VM
- Storage: Longhorn (block storage), NFS CSI driver (Synology NAS)
- Backup & DR: Velero with MinIO
- Monitoring: kube-prometheus-stack (Prometheus, Grafana, Loki)
- Infrastructure as Code: Terraform, Ansible
- Development Tools: kubectl, helm, terraform, ansible, cloudflared CLI
- MCP Integration: context7 MCP server for documentation and patterns
- Security Tools: Automated vulnerability scanning with MCP
Performance and reliability considerations include:
- Resource quotas and limits for all namespaces
- High availability with PodDisruptionBudgets
- Efficient resource utilization for all services
- Monitoring and alerting for performance metrics
- Backup and restore testing for disaster recovery
- Automated performance optimization with MCP tools
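
A per-namespace quota and default limits enforcing the first of those points might look like this (all values are placeholders, not the tuned limits used in the cluster):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
  namespace: example-app
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    persistentvolumeclaims: "5"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: example-app
spec:
  limits:
    - type: Container
      default:              # applied when a container sets no limits
        cpu: 500m
        memory: 512Mi
      defaultRequest:       # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
```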
All documentation follows these standards:
- Comprehensive technical documentation
- Service-specific documentation with configurations
- Implementation status tracking
- Configuration management guidelines
- Security and best practices documentation
- Troubleshooting and maintenance guides
- MCP integration documentation
- Infrastructure automation workflows documentation
While the project is primarily focused on infrastructure and backend services, accessibility considerations include:
- User-friendly dashboard interfaces
- Clear navigation and organization of services
- Consistent design patterns across applications
- Support for keyboard navigation where applicable
The project supports internationalization through:
- Configurable domain names and URLs
- Environment variable configuration for localization
- Multi-language support in user-facing applications
- Flexible configuration management for different regions
Security practices across the stack include:
- All sensitive data stored as Kubernetes secrets
- Sealed secrets for Git-safe storage
- Proper RBAC and access controls
- Network policies for security segmentation
- Regular security updates for container images
- Centralized authentication with SSO
- Zero-trust network architecture
- Backup and disaster recovery procedures
- Automated security scanning with MCP tools
- Infrastructure hardening and compliance validation
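
In-cluster network segmentation typically starts from a default-deny ingress policy per namespace plus explicit allows; a minimal sketch, not taken verbatim from the repository:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: example-app
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-traefik
  namespace: example-app
spec:
  podSelector:
    matchLabels:
      app: example-app
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik   # assumed ingress namespace
  policyTypes:
    - Ingress
```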
The authentication flow for a protected service works as follows:
- User accesses any service via its subdomain (e.g., `https://chat.example.com`)
- Traefik forwards the request to Authentik for authentication
- Authentik validates credentials and issues JWT
- User is redirected back to service with authenticated session
- Service validates JWT and grants access
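
The forward-auth step in that flow is implemented as a Traefik middleware pointing at Authentik's outpost endpoint; the sketch below assumes Authentik runs as `authentik-server` in the `authentik` namespace on port 9000, and the service name, port, and hostnames are assumptions:

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: authentik-forward-auth
  namespace: traefik
spec:
  forwardAuth:
    address: http://authentik-server.authentik.svc.cluster.local:9000/outpost.goauthentik.io/auth/traefik
    trustForwardHeader: true
    authResponseHeaders:
      - X-authentik-username
      - X-authentik-groups
      - X-authentik-email
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: chat
  namespace: rocketchat
spec:
  entryPoints: [websecure]
  routes:
    - match: Host(`chat.example.com`)
      kind: Rule
      middlewares:
        # Cross-namespace middleware references require allowCrossNamespace=true
        # on Traefik's Kubernetes CRD provider.
        - name: authentik-forward-auth
          namespace: traefik
      services:
        - name: rocketchat
          port: 80
```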
Service interaction and data handling:
- Services communicate through internal Kubernetes networks
- External services use Cloudflare Tunnel for secure access
- All data is encrypted in-transit using TLS
- Persistent data stored on Longhorn volumes or MinIO
Routine maintenance and operations include:
- Regular security audits and updates
- Automated container image updates
- Monitoring and alerting review
- Backup testing and verification
- Performance optimization
- Documentation updates
- Automated infrastructure health checks
- MCP-based operational workflows
Planned features and improvements:
- Enhanced AI integration capabilities
- Improved monitoring and alerting
- Additional security hardening
- Performance optimization
- Automated testing and CI/CD improvements
- MCP-based infrastructure automation
- Advanced backup and disaster recovery
- Enhanced observability with AI insights
- More comprehensive security scanning
- Automated compliance validation
If you find this repository helpful and would like to support its development, consider making a donation. Your support helps maintain and improve the project. Thank you for contributing to open source!
