After fixing a crashed controller manager by editing a file in /etc/kubernetes/manifests/, I had to understand why that worked instead of using kubectl. That question opened up three related concepts worth getting clear on.
A regular pod is created through the API server, stored in etcd, and managed by the controller manager. It lives and dies with the control plane. If the API server is down, you can't create new ones. If the controller manager is down, deployments don't react to changes.
This is what most things in a cluster are.
A static pod is defined as a YAML file on disk at /etc/kubernetes/manifests/ on a specific node. The kubelet watches that directory directly — no API server, no etcd, no controller manager involved. Drop a file in, the pod starts. Remove the file, the pod stops. Edit the file, the pod restarts.
The kubelet creates a read-only mirror of each static pod in the API server so it shows up in kubectl get pods -n kube-system, but that's all the mirror is: any change you make through kubectl is immediately overwritten by the kubelet re-reading the file on disk.
The source of truth is always the file.
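As a sketch of what "drop a file in" means, a static pod manifest is just an ordinary Pod spec saved into the manifests directory (the name and image below are placeholders, not anything from a real cluster):

```yaml
# /etc/kubernetes/manifests/node-agent.yaml -- hypothetical example
apiVersion: v1
kind: Pod
metadata:
  name: node-agent          # placeholder name
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: agent
    image: example.com/node-agent:1.0   # placeholder image
```

The kubelet picks the file up within seconds and starts the pod directly; the read-only mirror it registers in the API server gets the node name appended to the pod name, which is how you can tell a static pod apart in kubectl output.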
```shell
# control plane components are all static pods
ls /etc/kubernetes/manifests/
# etcd.yaml
# kube-apiserver.yaml
# kube-controller-manager.yaml
# kube-scheduler.yaml
```
The bootstrap problem. To create a pod via the API server, the API server needs to be running. But the API server is itself a pod. How does it start?
Answer: the kubelet starts before the API server exists. It reads /etc/kubernetes/manifests/, finds kube-apiserver.yaml, and starts it directly. No chicken-and-egg problem because the kubelet doesn't need the API server to manage static pods.
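The directory the kubelet watches isn't hardcoded; it comes from the kubelet's own config file, which kubeadm writes with staticPodPath pointing at the standard location:

```yaml
# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests
```

This is why the whole mechanism works pre-API-server: the kubelet only needs this local file and the manifests directory, both of which exist before any cluster does.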
This is how kubeadm init bootstraps a cluster. It writes the manifest files, the kubelet picks them up, the control plane starts.
So should you ever write your own static pods? Almost never. The one valid reason: something that must keep running even if the entire control plane is down. A node-level security agent that monitors syscalls — if the cluster is compromised and the control plane goes down, you still want that agent running. A DaemonSet pod that's already running survives the outage too, but it stops being managed: if the node reboots mid-outage, the kubelet can't fetch its pod list from the API server and the agent won't come back. A static pod does come back, because the kubelet restarts it from the file on disk.
For everything else, use a DaemonSet.
A DaemonSet says: run exactly one copy of this pod on every node. When a new node joins the cluster, the DaemonSet automatically schedules a pod on it. When a node is removed, the pod goes with it.
Use cases:
- Log collector on every node (Fluentd, Filebeat)
- Monitoring agent on every node (Prometheus node-exporter)
- Network plugin on every node (Cilium, Flannel)
- kube-proxy itself is a DaemonSet
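A minimal DaemonSet for one of these node agents looks like this (the names and image are placeholders; the blanket toleration is one common way to make the agent land on tainted nodes, including control plane nodes):

```yaml
# Hypothetical node-agent DaemonSet -- names and image are placeholders
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      tolerations:
      - operator: Exists      # tolerate all taints, e.g. control plane nodes
      containers:
      - name: collector
        image: example.com/log-collector:1.0   # placeholder image
```

Unlike the static pod file, this goes through the API server: apply it once and the DaemonSet controller schedules a copy onto every matching node, current and future.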
Unlike a static pod, a DaemonSet is a proper Kubernetes resource — managed through the API server, updatable, rollbackable, visible and manageable with kubectl.
The question isn't really static pod vs DaemonSet technically. It's: do you need this to survive a control plane outage?
If yes — static pod. The kubelet manages it from a file, no dependency on the control plane.
If no — DaemonSet. Proper Kubernetes resource, fully manageable, runs on every node.
In practice the only things that genuinely need to survive a control plane outage are the control plane components themselves. Which is why they're static pods. Everything else — including most node agents — can be a DaemonSet.
| | Static pod | Regular pod | DaemonSet |
|---|---|---|---|
| Managed by | kubelet (file on disk) | controller manager | controller manager |
| Defined in | /etc/kubernetes/manifests/ | etcd via API server | etcd via API server |
| Runs on | one specific node | scheduled by scheduler | every node |
| Survives control plane outage | yes | no | no |
| Updatable via kubectl | no | yes | yes |
| Use case | control plane bootstrap | application workloads | node-level agents |