🍕 Kubernetes Topology Spread: Keeping Your Pods Balanced Everywhere
The Pizza Delivery Story
Imagine you run a pizza delivery company. You have delivery drivers in three neighborhoods (zones). What happens if ALL your drivers go to ONE neighborhood?
- Zone A: 10 drivers (crowded!)
- Zone B: 0 drivers (empty!)
- Zone C: 0 drivers (empty!)
Problem: If Zone A has a power outage, ALL deliveries stop! Nobody can get pizza!
Smart Solution: Spread drivers evenly across all zones:
- Zone A: 3 drivers
- Zone B: 3 drivers
- Zone C: 4 drivers
Now if one zone fails, the other zones still work! This is exactly what Kubernetes Topology Spread does for your Pods.
🎯 What is Topology Spread?
Topology Spread tells Kubernetes: "Don't put all my eggs in one basket!"
It spreads your Pods across:
- Different zones (like data center regions)
- Different nodes (like individual servers)
- Different racks (like server cabinets)
```mermaid
graph TD
    A["Your Application"] --> B["Pod 1"]
    A --> C["Pod 2"]
    A --> D["Pod 3"]
    B --> E["Zone A"]
    C --> F["Zone B"]
    D --> G["Zone C"]
    style E fill:#90EE90
    style F fill:#87CEEB
    style G fill:#FFB6C1
```
📏 Topology Spread Constraints
Think of constraints as rules you give to a school bus driver:
"Make sure kids from the same family don't ALL sit in the front. Spread them out!"
The Three Magic Words
Every topology spread constraint has these parts:
| Part | What it means | Example |
|---|---|---|
| `maxSkew` | Maximum difference allowed | "No more than 1 extra Pod per zone" |
| `topologyKey` | What to spread across | "Spread across zones" |
| `whenUnsatisfiable` | What to do if the rule breaks | "Don't schedule" or "Just try your best" |
Simple Example
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: nginx
```
What this says:
- `maxSkew: 1` → Zones can differ by only 1 Pod
- `topologyKey: topology.kubernetes.io/zone` → Spread across availability zones
- `whenUnsatisfiable: DoNotSchedule` → If you can't balance, don't schedule at all!
- `labelSelector` → Only Pods with the `app: web` label count toward the spread
🔧 Topology Spread Config
Understanding maxSkew
maxSkew is like a seesaw rule:
```
Zone A: 🟢🟢🟢 (3 pods)
Zone B: 🟢🟢 (2 pods)
Difference = 1 ✅ (maxSkew: 1 is satisfied!)

Zone A: 🟢🟢🟢🟢 (4 pods)
Zone B: 🟢🟢 (2 pods)
Difference = 2 ❌ (maxSkew: 1 is broken!)
```
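If a difference of 2 Pods is acceptable for your app, you can simply relax the rule rather than fight the scheduler. A minimal sketch, assuming the same `app: web` Pods as above:

```yaml
topologySpreadConstraints:
  - maxSkew: 2                                # zones may now differ by up to 2 Pods
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
```

With this constraint, the 4-vs-2 layout above would be accepted.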
whenUnsatisfiable Options
| Option | What happens | Use when |
|---|---|---|
| `DoNotSchedule` | Pod stays Pending until it can be placed without breaking the rule | You NEED balance |
| `ScheduleAnyway` | Pod goes somewhere, even if unbalanced | Balance is nice, not required |
Advanced Example with Multiple Constraints
```yaml
spec:
  topologySpreadConstraints:
    # Rule 1: Spread across zones
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web
    # Rule 2: Also spread across nodes
    - maxSkew: 2
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: web
```
This says:
- Zones MUST be balanced (difference ≤ 1)
- Nodes SHOULD be balanced (difference ≤ 2, but flexible)
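If the strict zone rule ever cannot be met (say one zone runs out of schedulable capacity), the affected Pods simply stay `Pending`. Running `kubectl describe pod <pending-pod-name>` will show a `FailedScheduling` event that mentions the unsatisfied topology spread constraints, which is the quickest way to confirm the constraint is what is blocking scheduling.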
🗺️ Zone and Region Awareness
What are Zones and Regions?
Think of it like addresses:
- Region = Country (us-east, eu-west)
- Zone = City within the country (us-east-1a, us-east-1b)
```mermaid
graph TD
    R["Region: us-east"] --> Z1["Zone: us-east-1a"]
    R --> Z2["Zone: us-east-1b"]
    R --> Z3["Zone: us-east-1c"]
    Z1 --> N1["Node 1"]
    Z1 --> N2["Node 2"]
    Z2 --> N3["Node 3"]
    Z2 --> N4["Node 4"]
    Z3 --> N5["Node 5"]
```
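Kubernetes learns this "address" from labels on each Node. On a cloud-provisioned node, the relevant part of `kubectl get node <node-name> -o yaml` usually looks something like this (the hostname, region, and zone values below are only illustrative):

```yaml
# Excerpt from a Node object; only the topology-related labels are shown
metadata:
  labels:
    kubernetes.io/hostname: worker-1
    topology.kubernetes.io/region: us-east
    topology.kubernetes.io/zone: us-east-1a
```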
Common Topology Keys
| Key | Spreads across |
|---|---|
| `topology.kubernetes.io/zone` | Availability zones |
| `topology.kubernetes.io/region` | Geographic regions |
| `kubernetes.io/hostname` | Individual nodes |
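There is no built-in key for racks, but `topologyKey` accepts any node label. If you label nodes yourself (the `topology.example.com/rack` key below is a made-up example, applied with something like `kubectl label node worker-1 topology.example.com/rack=rack-a`), you can spread across racks the same way:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.example.com/rack   # custom node label, not a built-in key
    whenUnsatisfiable: ScheduleAnyway        # racks matter less than zones, so stay flexible
    labelSelector:
      matchLabels:
        app: web
```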
Real-World Zone Spread Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: nginx
          image: nginx:latest
```
Result with 3 zones:
- Zone A: 2 Pods ✅
- Zone B: 2 Pods ✅
- Zone C: 2 Pods ✅
If Zone C fails:
- Your app still runs with 4 Pods in Zones A and B!
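The replica count does not have to divide evenly, either. If you scaled this Deployment to `replicas: 7`, a layout like this still satisfies `maxSkew: 1`:
- Zone A: 3 Pods
- Zone B: 2 Pods
- Zone C: 2 Pods
The largest difference between any two zones is 1, so the constraint holds.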
💡 Quick Tips
✅ Do This
- Use `maxSkew: 1` for critical apps
- Combine zone AND node spreading
- Use `DoNotSchedule` when balance is critical
❌ Avoid This
- Setting `maxSkew: 0` (invalid; the API requires at least 1)
- Forgetting to add `labelSelector`
- Using only region spread (most clusters run in a single region, so every node shares the same region label; zone spread is what actually protects you)
📝 Summary
| Concept | Simple Explanation |
|---|---|
| Topology Spread | Rules to spread Pods evenly |
| maxSkew | Maximum allowed imbalance |
| topologyKey | What dimension to spread across |
| whenUnsatisfiable | What to do if balance fails |
| Zone | Data center section (city) |
| Region | Geographic area (country) |
🎉 You're Ready!
Now you understand how Kubernetes keeps your Pods spread out like a smart pizza delivery manager. No more putting all drivers in one neighborhood!
Remember: Topology Spread = High Availability = Happy Users! 🎉
