This project demonstrates a complete CI/CD pipeline for a web application (Dice Game) using Jenkins, Docker, DockerHub, Helm, and Kubernetes (Minikube), all running on a single AWS EC2 instance.
The goal of this setup is to:
- Gain strong hands‑on DevOps experience
- Understand real‑world CI/CD issues and fixes
- Have a reusable reference for future deployments
- Be interview‑ready for roles expecting 3–5 years of experience
```
GitHub
   │
   ▼
Jenkins (EC2)
   │
   ├─ Build Docker Image
   ├─ Push Image to DockerHub
   └─ Deploy using Helm
   │
   ▼
Kubernetes (Minikube)
   │
   ▼
Dice Game App
```
Key Design Decision: Jenkins and Minikube are installed on the same EC2 instance to avoid networking, kubeconfig, and tunnel complexity during learning.
- AWS EC2 (Ubuntu 22.04 / 24.04)
- Jenkins (Pipeline as Code)
- Docker & DockerHub
- Kubernetes (Minikube – Docker driver)
- Helm (Application deployment)
- GitHub (Source control)
```
Dice-Game/
├── Dockerfile
├── Jenkinsfile
├── index.html
└── helm/
    └── dice-game/
        ├── Chart.yaml
        ├── values.yaml
        └── templates/
            ├── deployment.yaml
            └── service.yaml
```
- Instance type: t3.medium (recommended)
- Disk: 30 GB
- Open ports:
  - 22 (SSH)
  - 8080 (Jenkins)
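If you manage the instance from the AWS CLI rather than the console, the inbound rules can be opened as sketched below; the security group ID is a placeholder, and ideally `--cidr` should be restricted to your own IP:

```bash
# Allow SSH and Jenkins traffic into the instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 --cidr 0.0.0.0/0
```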
Install Docker:

```bash
sudo apt update
sudo apt install -y docker.io
sudo usermod -aG docker ubuntu
newgrp docker
```

(The `jenkins` user is added to the `docker` group later, after Jenkins is installed.)

Verify:

```bash
docker run hello-world
```

Install kubectl:

```bash
sudo snap install kubectl --classic
```

Install Minikube:

```bash
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```

Start Minikube without sudo:

```bash
minikube start --driver=docker --memory=3000mb --cpus=2
```

Verify:

```bash
kubectl get nodes
```

Install Helm:

```bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```

Install Java and Jenkins:

```bash
sudo apt install -y openjdk-17-jdk
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | \
  sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | \
  sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update
sudo apt install -y jenkins
sudo systemctl start jenkins
sudo systemctl enable jenkins
```

Allow the Jenkins user to use Docker:

```bash
sudo usermod -aG docker jenkins
# restart Jenkins so the service picks up the new group membership
sudo systemctl restart jenkins
```

Get the Jenkins initial admin password:

```bash
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```

Access Jenkins: `http://<EC2_PUBLIC_IP>:8080`
Required Plugins:
- Go to Manage Jenkins → Plugins → Available Plugins
- Install these plugins:
- Docker Pipeline
- Kubernetes CLI
- Pipeline
- Git
- Credentials Binding
Restart Jenkins after installation
DockerHub Credentials:
- Go to Manage Jenkins → Credentials → System → Global credentials
- Click Add Credentials
- Select Username with password
- Enter:
  - Username: `<your-dockerhub-username>`
  - Password: `<your-dockerhub-token>`
  - ID: `dockerhub-credentials`
Create DockerHub Token:
- Login to DockerHub → Account Settings → Security
- Create New Access Token with Read/Write permissions
- Use this token as the password in Jenkins
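To confirm the token works before handing it to Jenkins, you can log in from the EC2 shell. A quick check, assuming the token is in an environment variable (the variable name is illustrative):

```bash
# Log in non-interactively; --password-stdin keeps the token out of shell history
echo "$DOCKERHUB_TOKEN" | docker login -u <your-dockerhub-username> --password-stdin
```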
Minikube certificates are created under the ubuntu user. To allow Jenkins access:
```bash
sudo cp -r /home/ubuntu/.minikube /var/lib/jenkins/
sudo cp -r /home/ubuntu/.kube /var/lib/jenkins/
sudo chown -R jenkins:jenkins /var/lib/jenkins/.minikube /var/lib/jenkins/.kube
# rewrite the certificate paths inside the copied kubeconfig
sudo sed -i 's|/home/ubuntu|/var/lib/jenkins|g' /var/lib/jenkins/.kube/config
```

Verify as the jenkins user:

```bash
sudo su - jenkins
kubectl get nodes
```

Create the Pipeline Job:

- New Item → Pipeline → Name: `dice-game-pipeline`
- Pipeline Definition: Pipeline script from SCM
- SCM: Git
- Repository URL: `https://github.com/<your-username>/Dice-Game.git`
- Branch: `*/main`
- Script Path: `Jenkinsfile`
The pipeline performs these stages:

- Checkout code from GitHub
- Build Docker image
- Push image to DockerHub
- Deploy to Kubernetes using Helm
Jenkinsfile Requirements (a sketch follows this list):

- Uses `dockerhub-credentials` for authentication
- Implements `withCredentials` binding
- Deploys with `helm upgrade --install`
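The exact Jenkinsfile lives in the repository; the following is only a minimal sketch of the shape it takes, assuming the `dockerhub-credentials` ID configured above and an illustrative image name:

```groovy
pipeline {
    agent any
    environment {
        IMAGE = '<your-dockerhub-username>/dice-game'  // illustrative image name
        TAG   = "${env.BUILD_NUMBER}"                  // tag each build uniquely
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build') {
            steps { sh "docker build -t ${IMAGE}:${TAG} ." }
        }
        stage('Push') {
            steps {
                // Bind the DockerHub credentials created earlier
                withCredentials([usernamePassword(credentialsId: 'dockerhub-credentials',
                        usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASS')]) {
                    sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
                    sh "docker push ${IMAGE}:${TAG}"
                }
            }
        }
        stage('Deploy') {
            steps {
                // Idempotent deploy: installs on first run, upgrades afterwards
                sh "helm upgrade --install dice-game ./helm/dice-game --set image.tag=${TAG}"
            }
        }
    }
}
```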
Because Minikube uses the Docker driver:
- NodePort services are not exposed on the EC2 public IP
- This is expected behavior
On the EC2 instance:

```bash
kubectl port-forward svc/dice-game 8090:80
```

From your laptop:

```bash
ssh -i Project-kp.pem -L 8090:localhost:8090 ubuntu@<EC2_PUBLIC_IP>
```

Browser:

```
http://localhost:8090
```
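Alternatively, from a shell on the EC2 instance itself, Minikube can print a reachable URL for the service; this only works on the host running Minikube, not from your laptop:

```bash
# Prints a URL such as http://192.168.49.2:3xxxx, reachable from the EC2 host only
minikube service dice-game --url
```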
Reason: Free‑tier instance incompatibility and IAM misconfiguration
Fix: Switched to Minikube for learning
Reason: kubectl was pointing at Jenkins (port 8080) instead of the cluster API server
Fix: Reset the kubeconfig and context using Minikube
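One way to repoint kubectl at the cluster, assuming the Minikube context already exists:

```bash
# Refresh the kubeconfig entry for the running Minikube cluster
minikube update-context
# Make Minikube the active context, then confirm the API server address
kubectl config use-context minikube
kubectl cluster-info
```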
Reason: Minikube created under ubuntu, Jenkins user lacked access
Fix: Copied .minikube and .kube to Jenkins home and fixed ownership
Reason: DockerHub token had insufficient scopes
Fix: Created a Read/Write access token and updated the Jenkins credentials
Reason: Required plugins not installed (Docker Pipeline, Kubernetes CLI)
Fix: Installed plugins via Manage Jenkins → Plugins
Reason: DockerHub credentials not properly configured in Jenkins
Fix: Added credentials with the ID matching the Jenkinsfile
Reason: Minikube Docker driver networking limitation
Fix: Used kubectl port-forward + SSH tunneling
- Difference between Minikube and EKS networking
- Kubernetes Service vs NodePort behavior
- Importance of kubeconfig and Linux permissions
- Secure DockerHub authentication using tokens
- Helm‑based deployments over raw manifests
- Real CI/CD troubleshooting
Answer: With the Minikube Docker driver, NodePorts bind to the Minikube container network, not the host network. Port‑forward or ingress is required.
Answer: Jenkins uses kubeconfig and certificate‑based authentication. The Jenkins user must have access to Kubernetes cert files.
Answer: Helm provides versioning, rollback, templating, and environment‑specific configuration, making deployments manageable.
Answer: The DockerHub token lacked write permissions. Using a Read/Write token fixed the issue.
Answer: Replace Minikube with EKS, use Ingress with ALB, store secrets in AWS Secrets Manager, and add multi‑env Helm values.
```bash
helm rollback dice-game <revision>
```

Answer: NodePort exposes services at node level; Ingress provides L7 routing, TLS, and domain‑based access.
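To find the revision number to roll back to, assuming the release name used above:

```bash
# List the release's revision history; pick a REVISION number to roll back to
helm history dice-game
```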
Answer: Use credential bindings, access tokens, least‑privilege IAM, and never hardcode secrets.
This project represents a real‑world DevOps CI/CD implementation with genuine troubleshooting experience. It is reusable, extendable, and production‑aligned.
✅ Status: Successfully Deployed and Verified