
🌍 Kubernetes Topology Spread: Keeping Your Pods Balanced Everywhere

The Pizza Delivery Story

Imagine you run a pizza delivery company. You have delivery drivers in three neighborhoods (zones). What happens if ALL your drivers go to ONE neighborhood?

  • πŸ“ Zone A: 10 drivers (crowded!)
  • πŸ“ Zone B: 0 drivers (empty!)
  • πŸ“ Zone C: 0 drivers (empty!)

Problem: If Zone A has a power outage, ALL deliveries stop! Nobody can get pizza!

Smart Solution: Spread drivers evenly across all zones:

  • πŸ“ Zone A: 3 drivers
  • πŸ“ Zone B: 3 drivers
  • πŸ“ Zone C: 4 drivers

Now if one zone fails, the other zones still work! This is exactly what Kubernetes Topology Spread does for your Pods.


🎯 What is Topology Spread?

Topology Spread tells Kubernetes: β€œDon’t put all my eggs in one basket!”

It spreads your Pods across:

  • Different zones (like data center regions)
  • Different nodes (like individual servers)
  • Different racks (like server cabinets)
graph TD A["Your Application"] --> B["Pod 1"] A --> C["Pod 2"] A --> D["Pod 3"] B --> E["Zone A"] C --> F["Zone B"] D --> G["Zone C"] style E fill:#90EE90 style F fill:#87CEEB style G fill:#FFB6C1

πŸ“‹ Topology Spread Constraints

Think of constraints as rules you give to a school bus driver:

β€œMake sure kids from the same family don’t ALL sit in the front. Spread them out!”

The Three Magic Words

Every topology spread constraint has these parts:

Part               What it means                              Example
maxSkew            Maximum allowed difference in Pod count    “No more than 1 extra Pod per zone”
topologyKey        Which node label to spread across          “Spread across zones”
whenUnsatisfiable  What to do if the rule can’t be met        “Don’t schedule” or “Just try your best”

Simple Example

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: web
    image: nginx

What this says:

  1. maxSkew: 1 → Zones may differ by at most 1 matching Pod
  2. topologyKey: topology.kubernetes.io/zone → Spread across availability zones
  3. DoNotSchedule → If the zones can’t stay balanced, leave the Pod Pending instead of breaking the rule
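
Once you have several replicas carrying the app: web label, you can check where the scheduler actually placed them. A minimal sketch, assuming your nodes carry the standard zone label (cloud providers usually set it automatically):

# Show each Pod together with the node it landed on
kubectl get pods -l app=web -o wide

# Show each node's zone label, to map nodes back to zones
kubectl get nodes -L topology.kubernetes.io/zone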

πŸ”§ Topology Spread Config

Understanding maxSkew

maxSkew is like a seesaw rule:

Zone A: 🟒🟒🟒 (3 pods)
Zone B: 🟒🟒   (2 pods)
Difference = 1 βœ… (maxSkew: 1 is satisfied!)

Zone A: 🟒🟒🟒🟒 (4 pods)
Zone B: 🟒🟒     (2 pods)
Difference = 2 ❌ (maxSkew: 1 is broken!)
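
To see how the scheduler uses this rule, walk through placing the next Pod in the first picture above (Zone A: 3, Zone B: 2, maxSkew: 1):

Place in Zone A → A: 4, B: 2 → difference = 2 ❌ not allowed
Place in Zone B → A: 3, B: 3 → difference = 0 ✅ allowed

So the only valid choice is Zone B: the constraint keeps steering new Pods toward the least crowded zone.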

whenUnsatisfiable Options

Option          What happens                                    Use when
DoNotSchedule   Pod stays Pending until balance is possible     You NEED balance
ScheduleAnyway  Pod is scheduled anyway, as evenly as possible  Balance is nice, not required

Advanced Example with Multiple Constraints

spec:
  topologySpreadConstraints:
  # Rule 1: Spread across zones
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  # Rule 2: Also spread across nodes
  - maxSkew: 2
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: web

This says:

  • Zones MUST be balanced (difference ≀ 1)
  • Nodes SHOULD be balanced (difference ≀ 2, but flexible)
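
One practical consequence of the hard zone rule: if it can’t be satisfied, the Pod simply sits in Pending. A quick way to spot and investigate this (the exact event text varies by Kubernetes version):

# Find Pods stuck in Pending
kubectl get pods -l app=web --field-selector status.phase=Pending

# Check the scheduler's events; look for messages about topology spread constraints
# (the Pod name here is just an illustration)
kubectl describe pod web-7d4b9cbd8-xyz12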

πŸ—ΊοΈ Zone and Region Awareness

What are Zones and Regions?

Think of it like addresses:

  • Region = Country (us-east, eu-west)
  • Zone = City within the country (us-east-1a, us-east-1b)
graph TD R["Region: us-east"] --> Z1["Zone: us-east-1a"] R --> Z2["Zone: us-east-1b"] R --> Z3["Zone: us-east-1c"] Z1 --> N1["Node 1"] Z1 --> N2["Node 2"] Z2 --> N3["Node 3"] Z2 --> N4["Node 4"] Z3 --> N5["Node 5"]

Common Topology Keys

Key                             Spreads across
topology.kubernetes.io/zone     Availability zones
topology.kubernetes.io/region   Geographic regions
kubernetes.io/hostname          Individual nodes
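
These keys are just labels on your Node objects, usually set by the kubelet or your cloud provider. To check which values your own nodes carry:

# Print nodes with their zone and region labels as extra columns
kubectl get nodes -L topology.kubernetes.io/zone -L topology.kubernetes.io/region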

Real-World Zone Spread Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: web
      containers:
      - name: nginx
        image: nginx:latest

Result with 3 zones:

  • Zone A: 2 Pods βœ…
  • Zone B: 2 Pods βœ…
  • Zone C: 2 Pods βœ…

If Zone C fails:

  • Your app still runs with 4 Pods in Zones A and B!

πŸ’‘ Quick Tips

βœ… Do This

  • Use maxSkew: 1 for critical apps
  • Combine zone AND node spreading
  • Use DoNotSchedule when balance is critical

❌ Avoid This

  • Setting maxSkew: 0 (invalid: maxSkew must be at least 1)
  • Forgetting to add labelSelector (without it, the constraint has no Pods to count)
  • Relying only on region spread (zone spread protects against the much more common zone-level failures)

πŸŽ“ Summary

Concept            Simple Explanation
Topology Spread    Rules to spread Pods evenly
maxSkew            Maximum allowed imbalance
topologyKey        What dimension to spread across
whenUnsatisfiable  What to do if balance fails
Zone               Data center section (city)
Region             Geographic area (country)

πŸš€ You’re Ready!

Now you understand how Kubernetes keeps your Pods spread out like a smart pizza delivery manager. No more putting all drivers in one neighborhood!

Remember: Topology Spread = High Availability = Happy Users! πŸŽ‰
