🚀 Advanced Kubernetes Deployment Patterns
The Traffic Controller Story
Imagine you run the world’s busiest airport. Planes (your apps) need to land safely, passengers (users) need smooth connections, and you can’t shut down the runway to fix things. That’s Kubernetes deployment patterns!
You’re the Air Traffic Controller. Your job? Get new planes in the sky, retire old ones, and never crash anything.
🕸️ Service Mesh Concepts
What is a Service Mesh?
Think of your airport with hundreds of planes talking to each other. Now imagine every plane has a co-pilot robot that handles all the radio chatter automatically.
That’s a service mesh!
Without Service Mesh:

```
Plane A ----calls----> Plane B   (manually)
```

With Service Mesh:

```
Plane A -> [Robot Co-pilot] -> [Robot Co-pilot] -> Plane B
           (handles routing, security, retries)
```
Why Do We Need It?
Your apps are like planes. They need to:
- Find each other (service discovery)
- Talk securely (encryption)
- Retry if busy (resilience)
- Report problems (observability)
The service mesh handles ALL of this. Your app just flies!
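For example, with Istio the "talk securely" part can be switched on for the whole mesh with a single resource. A minimal sketch, assuming Istio is installed in its default istio-system root namespace:

```yaml
# Require mutual TLS between all sidecar-equipped pods in the mesh
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the mesh-wide root namespace in a default install
spec:
  mtls:
    mode: STRICT            # reject plain-text traffic between services
```

Your containers keep speaking plain HTTP; the robot co-pilots encrypt everything in transit for them.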
Real Example: Istio
```yaml
# Istio injects an Envoy sidecar proxy automatically
# (as long as sidecar injection is enabled; see below)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app        # Istio adds the Envoy proxy container to these pods
    spec:
      containers:
      - name: my-app
        image: my-app:1.0  # example image
```
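The sidecar only shows up if injection is enabled for the namespace the pod runs in. The usual way is a namespace label; a minimal sketch (the namespace name is just an example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace          # example namespace for the deployment above
  labels:
    istio-injection: enabled  # tells Istio to inject the Envoy sidecar into new pods here
```

Pods created before the label existed keep flying without a co-pilot until they are restarted.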
Key Players:
- Istio - Most popular, feature-rich
- Linkerd - Lightweight, simple
- Consul Connect - HashiCorp’s solution
🚦 Traffic Management Patterns
The Traffic Light System
Your airport has traffic lights for planes. You control:
- Who goes where (routing)
- How fast (rate limiting)
- What happens when things go wrong (retries, timeouts, sketched below)
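Retries and timeouts, for instance, live right on the route. A minimal sketch of a VirtualService with both (the numbers are illustrative, not recommendations):

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app
    timeout: 5s               # give up on the whole request after 5 seconds
    retries:
      attempts: 3             # try up to 3 times before failing
      perTryTimeout: 2s       # each attempt gets at most 2 seconds
      retryOn: "5xx,connect-failure"
```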
Pattern 1: Weighted Routing
Send some passengers to the new terminal, most to the old one.
```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app
        subset: v1
      weight: 90   # 90% of traffic
    - destination:
        host: my-app
        subset: v2
      weight: 10   # 10% of traffic
```
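The v1 and v2 subsets referenced above have to be defined in a DestinationRule. A minimal sketch, assuming your pods carry a `version` label:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: my-app
spec:
  host: my-app
  subsets:
  - name: v1
    labels:
      version: v1   # matches pods labelled version=v1
  - name: v2
    labels:
      version: v2   # matches pods labelled version=v2
```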
Pattern 2: Header-Based Routing
VIP passengers get the new terminal!
```yaml
  http:
  - match:
    - headers:
        user-type:
          exact: "premium"
    route:
    - destination:
        host: my-app
        subset: v2   # premium passengers get v2
```
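One gotcha: requests that match no route get a 404, so regular passengers still need somewhere to go. Continuing the same hypothetical VirtualService, a catch-all default route goes after the match:

```yaml
  - route:          # everyone else stays in the stable terminal
    - destination:
        host: my-app
        subset: v1
```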
Pattern 3: Circuit Breaker
If a terminal is overwhelmed, stop sending people there!
```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: my-app
spec:
  host: my-app
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5   # eject a pod after 5 consecutive 5xx errors
      interval: 30s             # how often pods are scanned for errors
      baseEjectionTime: 60s     # how long an ejected pod stays out of rotation
```
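Outlier detection ejects unhealthy pods; you can also cap how much traffic any one pod is asked to absorb. A sketch of the companion connectionPool settings in the same trafficPolicy (all numbers are illustrative):

```yaml
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100          # cap concurrent connections per pod
      http:
        http1MaxPendingRequests: 50  # queue at most 50 waiting requests
        maxRequestsPerConnection: 10
    outlierDetection:
      maxEjectionPercent: 50         # never eject more than half the pods at once
```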
🐤 Canary Deployments
The Canary in the Coal Mine
Miners used canaries to detect danger. If the bird stopped singing, get out!
Canary deployment: Send a tiny bit of traffic to the new version. If it fails, only a few users are affected.
```mermaid
graph TD
  A["Users 100%"] --> B{Load Balancer}
  B -->|95%| C["Version 1 - Stable"]
  B -->|5%| D["Version 2 - Canary"]
  D --> E{Healthy?}
  E -->|Yes| F["Increase to 25%"]
  E -->|No| G["Rollback to v1"]
```
Canary in Action
Step 1: Deploy canary with 5% traffic
```yaml
  weight: 95   # v1
  weight: 5    # v2 (canary)
```
Step 2: Watch metrics (errors, latency)
Step 3: If good, increase traffic gradually
```yaml
  weight: 75   # v1
  weight: 25   # v2
```
Step 4: Eventually, 100% to v2
🔵🟢 Blue-Green Deployments
Two Terminals, One Switch
You have two identical terminals:
- Blue = Currently serving passengers
- Green = New version, waiting
When Green is ready, flip the switch! All passengers go to Green.
```mermaid
graph TD
  A["DNS/Load Balancer"] --> B{Switch}
  B -->|Active| C["🔵 Blue - v1"]
  B -->|Standby| D["🟢 Green - v2"]
  E["After Testing"] --> F{Flip Switch}
  F --> G["🟢 Green - Now Active"]
  F --> H["🔵 Blue - Now Standby"]
```
Blue-Green Example
Step 1: Both versions running
```bash
# Blue is live
kubectl get svc my-app
# Points to: blue-deployment
```
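The "switch" is nothing more than the Service selector. A minimal sketch of that Service, assuming the Blue and Green Deployments label their pods with `version: blue` and `version: green` (port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue    # currently sending all passengers to Blue
  ports:
  - port: 80
    targetPort: 8080
```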
Step 2: Test Green thoroughly
Step 3: Flip the switch!
```bash
kubectl patch svc my-app \
  -p '{"spec":{"selector":{"version":"green"}}}'
```
Step 4: If problems, flip back instantly!
Canary vs Blue-Green
| Canary | Blue-Green |
|---|---|
| Gradual rollout | Instant switch |
| Minimal extra capacity (just the canary) | Double the capacity (two full stacks) |
| Slower rollback (shift weights back) | Instant rollback (flip the switch) |
| Tests on real traffic | Tested before the switch |
📦 GitOps Fundamentals
The Single Source of Truth
Imagine your airport’s control tower has ONE master flight plan. Every controller reads from it. No one makes changes without updating the plan.
That’s GitOps!
Git = your master flight plan
Ops = automatic deployment from that plan
```mermaid
graph TD
  A["Developer"] -->|1. Push Code| B["Git Repository"]
  B -->|2. Trigger| C["CI Pipeline"]
  C -->|3. Build Image| D["Container Registry"]
  B -->|4. Detect Change| E["GitOps Operator"]
  E -->|5. Apply| F["Kubernetes Cluster"]
```
The Four Principles
- Declarative - Describe WHAT you want, not HOW
- Versioned - Everything in Git
- Automated - Changes applied automatically
- Self-healing - Drift detected and corrected
Why GitOps Rocks
Traditional:
Developer -> SSH -> Server -> Hope it works
GitOps:
Developer -> Git Push -> Automatic Magic -> Verified State
- Audit trail - Git history shows everything
- Rollback - `git revert` fixes disasters
- Security - No direct cluster access needed
🦑 ArgoCD Basics
Your Automated Pilot
ArgoCD is like an autopilot that:
- Watches your Git repo
- Compares it to your cluster
- Makes them match automatically
Installing ArgoCD
```bash
# Create namespace
kubectl create namespace argocd

# Install ArgoCD
kubectl apply -n argocd -f \
  https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Get admin password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
```
ArgoCD Architecture
```mermaid
graph TD
  A["Git Repository"] --> B["ArgoCD Server"]
  B --> C["Application Controller"]
  C --> D{Compare}
  D -->|Match| E["✅ Synced"]
  D -->|Differ| F["⚠️ Out of Sync"]
  F --> G["Auto-Sync or Manual"]
  G --> H["Apply to Cluster"]
```
Key Components
- API Server - Web UI and CLI access
- Repo Server - Fetches Git repos and renders their manifests
- Application Controller - Watches and syncs
🔄 ArgoCD Applications Sync
Your Flight Plan Executor
An Application resource tells ArgoCD:
- WHERE to get the config (Git repo)
- WHAT to deploy (path in repo)
- WHERE to deploy (Kubernetes cluster)
Creating an Application
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my/repo
    targetRevision: main
    path: kubernetes/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # fix drift automatically
```
Sync Strategies
Manual Sync:
```bash
argocd app sync my-app
```
Automatic Sync:
```yaml
syncPolicy:
  automated:
    prune: true
    selfHeal: true
```
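A handy extra, if the target namespace might not exist yet, is a sync option that lets ArgoCD create it for you. A sketch:

```yaml
syncPolicy:
  automated:
    prune: true
    selfHeal: true
  syncOptions:
  - CreateNamespace=true   # create the destination namespace before syncing
```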
Sync Status Icons
| Status | Meaning |
|---|---|
| ✅ Synced | Git = Cluster |
| ⚠️ OutOfSync | Git ≠ Cluster |
| 🔄 Syncing | Applying changes |
| ❌ Failed | Sync error |
| 💚 Healthy | App running well |
| 💔 Degraded | App has issues |
🌳 App-of-Apps Pattern
The Master Flight Plan
One Application that manages all other Applications!
Instead of creating 50 apps manually, create ONE app that points to a folder containing 50 app definitions.
```mermaid
graph TD
  A["Root Application"] --> B["apps/ folder in Git"]
  B --> C["app-1.yaml"]
  B --> D["app-2.yaml"]
  B --> E["app-3.yaml"]
  B --> F["app-n.yaml"]
  C --> G["Deploys App 1"]
  D --> H["Deploys App 2"]
  E --> I["Deploys App 3"]
```
The Root App
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my/repo
    path: argocd-apps        # folder with child Application YAMLs
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true
```
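If you sort the child definitions into subfolders (say, one per team), you can ask the root app to walk the whole tree. A sketch of the extra source setting:

```yaml
  source:
    repoURL: https://github.com/my/repo
    path: argocd-apps
    targetRevision: main
    directory:
      recurse: true   # also pick up Application YAMLs in subfolders
```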
Child App Example
```yaml
# argocd-apps/frontend.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: frontend
  namespace: argocd
spec:
  project: default   # required; "default" is ArgoCD's built-in project
  source:
    repoURL: https://github.com/my/repo
    path: apps/frontend
  destination:
    server: https://kubernetes.default.svc
    namespace: frontend
```
Benefits
- Single source - One repo rules all
- Scalable - Add apps by adding files
- Consistent - Same pattern everywhere
⏱️ Zero-Downtime Deployment
Never Close the Airport
Your passengers (users) should NEVER see “Airport Closed.” Even during upgrades!
Strategy 1: Rolling Update (Default)
Replace pods one by one, never all at once.
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 pod down at a time
      maxSurge: 1         # at most 1 extra pod during the rollout
```
```mermaid
graph LR
  A["v1 v1 v1 v1"] --> B["v1 v1 v1 v2"]
  B --> C["v1 v1 v2 v2"]
  C --> D["v1 v2 v2 v2"]
  D --> E["v2 v2 v2 v2"]
```
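If you never want to dip below full capacity, a common tweak is to forbid unavailability entirely and lean on surge capacity instead. A sketch (it assumes your nodes have room for one extra pod):

```yaml
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a serving pod away first
      maxSurge: 1         # start the new pod, then retire an old one
```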
Strategy 2: Readiness Probes
Don’t send traffic until the new pod is ready!
```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```
Strategy 3: Graceful Shutdown
Let existing requests finish before killing the pod.
```yaml
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 10"]
```
The Zero-Downtime Checklist
- ✅ Rolling updates with a sensible maxUnavailable
- ✅ Readiness probes - only serve when ready
- ✅ Graceful shutdown - finish existing requests
- ✅ Pod disruption budgets - protect availability
- ✅ Health checks - detect problems early
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2   # always keep at least 2 pods running
  selector:
    matchLabels:
      app: my-app
```
🎯 Putting It All Together
```mermaid
graph TD
  A["Developer Pushes to Git"] --> B["ArgoCD Detects Change"]
  B --> C["Service Mesh Routes Traffic"]
  C --> D{Deployment Strategy}
  D -->|Canary| E["5% to New Version"]
  D -->|Blue-Green| F["Switch All Traffic"]
  D -->|Rolling| G["Gradual Pod Replace"]
  E --> H{Metrics OK?}
  H -->|Yes| I["Increase Traffic"]
  H -->|No| J["Rollback"]
  F --> K["Instant Rollback Available"]
  G --> L["Zero Downtime Maintained"]
```
Your New Superpowers
| Concept | Superpower |
|---|---|
| Service Mesh | Automatic networking magic |
| Traffic Management | Control every request |
| Canary | Safe, gradual rollouts |
| Blue-Green | Instant switches and rollbacks |
| GitOps | Git is your truth |
| ArgoCD | Automatic sync from Git |
| App-of-Apps | Manage everything from one place |
| Zero-Downtime | Users never see failures |
🏆 You Did It!
You’re now an Air Traffic Controller for Kubernetes!
Remember:
- Service Mesh = Robot co-pilots for every pod
- Traffic Patterns = Smart routing rules
- Canary = Test with a few users first
- Blue-Green = Two versions, instant switch
- GitOps = Git is the boss
- ArgoCD = Your autopilot
- App-of-Apps = One app to rule them all
- Zero-Downtime = Never close the airport
Now go deploy with confidence! 🚀
