
CKA Road Trip: Deployment Stuck at 0 Replicas — The Silent Killer

A deployment with 2 desired replicas, 0 pods created, and not a single event. No errors. Just silence. That silence is the clue.


The Symptom

k get deploy video-app
# NAME        READY   UP-TO-DATE   AVAILABLE   AGE
# video-app   0/2     0            0           53s

k describe deploy video-app
# Replicas: 2 desired | 0 updated | 0 total | 0 available
# Events: <none>

No events at all. That's not a scheduling failure, not an image pull error — those would show events. Complete silence means nobody is even trying to create the pods.
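To confirm it's genuinely silence and not just a stale describe, you can ask the API server for events that reference the deployment by name. A sketch, assuming the deployment lives in the default namespace:

```shell
# List any events whose involved object is the deployment itself.
# An empty result confirms nothing has acted on it at all.
k get events --field-selector involvedObject.name=video-app
# No resources found in default namespace.
```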


The Chain

When you create a deployment, Kubernetes doesn't just magically make pods appear. The controller manager is the component that watches deployments and acts on them: its deployment controller sees "desired: 2, actual: 0" and creates a ReplicaSet, and its ReplicaSet controller creates the pod objects. Without it, the deployment just sits there with nobody home to act on it.

So Events: <none> on a deployment = controller manager isn't running.
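You can also see exactly where the chain broke. A healthy deployment owns a ReplicaSet, and the ReplicaSet owns the pods; with the controller manager down, not even the ReplicaSet exists. A quick check, assuming the default namespace:

```shell
# A working deployment would show a ReplicaSet named video-app-<hash>.
# Its absence proves the failure is upstream of scheduling entirely.
k get rs
# No resources found in default namespace.
```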

k get pods -n kube-system
# kube-controller-manager-controlplane   0/1   CrashLoopBackOff   5    3m

There it is. Then:

k describe pod kube-controller-manager-controlplane -n kube-system
# exec: "kube-controller-manegaar": executable file not found in $PATH

Typo. kube-controller-manegaar instead of kube-controller-manager. A few scrambled letters in one binary name, and the entire cluster stops creating pods.
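Describe surfaced the error here. When it doesn't, the container runtime on the node usually will. A sketch assuming a containerd-based control plane node with crictl installed; the container ID is a placeholder, not a real value:

```shell
# List all containers, including exited ones, and find the crashing one.
crictl ps -a | grep controller-manager
# Read its logs using the ID from the first column (placeholder here):
crictl logs <container-id>
```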


Why You Can't Fix It With kubectl

The controller manager is a static pod — it's managed by the kubelet directly from a manifest file on disk (the directory set by the kubelet's staticPodPath, which is /etc/kubernetes/manifests on kubeadm clusters), not through the API server. What you see via kubectl is a read-only mirror object: edits to it never reach the kubelet, and even deleting it just makes the kubelet recreate the mirror.

The source of truth is the manifest file:

vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# fix: kube-controller-manegaar → kube-controller-manager

Save it. The kubelet watches that directory, detects the change, and restarts the pod automatically. No kubectl apply needed.
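The same fix can be scripted instead of done in vim. This sketch runs sed against a sample copy of the relevant manifest lines; the real file is /etc/kubernetes/manifests/kube-controller-manager.yaml and editing it requires root on the control plane node:

```shell
# Recreate the broken command section in a sample file, for illustration.
cat > /tmp/kube-controller-manager.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-controller-manegaar
    - --allocate-node-cidrs=true
EOF

# Repair the binary name in place; sed -i edits the file directly.
sed -i 's/kube-controller-manegaar/kube-controller-manager/' /tmp/kube-controller-manager.yaml

# Verify: the corrected name should now appear exactly once.
grep -c 'kube-controller-manager$' /tmp/kube-controller-manager.yaml
# 1
```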

k get pods -n kube-system
# kube-controller-manager-controlplane   1/1   Running   0   30s

k get pods
# NAME                        READY   STATUS    RESTARTS   AGE
# video-app-xxx               1/1     Running   0          10s
# video-app-yyy               1/1     Running   0          10s

The Troubleshooting Chain

deployment 0 replicas, Events: <none>
        ↓ nobody is acting on the deployment
        ↓ k get pods -n kube-system
        ↓ kube-controller-manager CrashLoopBackOff
        ↓ k describe pod → typo in binary name
        ↓ fix /etc/kubernetes/manifests/kube-controller-manager.yaml
        ↓ kubelet restarts it automatically
        ↓ pods created, deployment Running

The Key Signal

Events: <none> on a deployment that has 0 pods is not normal. A scheduling failure has events. An image pull failure has events. Zero events means the controller manager never ran. That's your first check — not the deployment, not the pods, the controller manager.
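That first check generalizes into a habit for any "nothing is happening" symptom. A sketch for a kubeadm-style cluster, where the control plane runs as static pods in kube-system:

```shell
# One sweep over the control plane: any of these not Running explains
# a cluster that accepts objects but never acts on them.
k get pods -n kube-system | grep -E 'apiserver|controller-manager|scheduler|etcd'
```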