# CKA Road Trip: What Actually Runs in a Kubernetes Cluster
Went down a rabbit hole on this one after debugging a crashed controller manager. Ended up mapping out every component and what it actually does. Here's what I found.
## Two Tiers
Every Kubernetes cluster has two tiers of components: the control plane and the node components. They have completely different jobs.
Control plane — the brain. Stores state, makes decisions, watches for drift between desired and actual. Runs on the control plane node (or several of them, in an HA setup).
Node components — the hands. Actually run containers, manage networking, report status back. Run on every node.
## Control Plane Components

### kube-apiserver
The front door. Every single thing in Kubernetes — kubectl, the controller manager, the scheduler, the kubelet — talks to the API server. Nothing talks directly to etcd except the API server. It handles authentication, authorization, validation, and admission control before anything gets stored.
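That request path — authentication, then authorization, then admission control, then storage — can be sketched as a pipeline. This is a toy model: the stage names match Kubernetes, but the checks and return codes here are stand-ins, not the real API machinery.

```python
# Toy sketch of the API server's write path. Every stage must pass
# before the object reaches storage; the checks themselves are fake.

def handle_request(user, verb, obj, store):
    if user not in {"admin", "kubelet"}:        # authentication: who are you?
        return "401 Unauthorized"
    if user != "admin" and verb == "delete":    # authorization: may you do this?
        return "403 Forbidden"
    if "name" not in obj:                       # validation / admission control
        return "422 Invalid"
    store[obj["name"]] = obj                    # only now does it reach etcd
    return "201 Created"

store = {}
print(handle_request("admin", "create", {"name": "nginx"}, store))  # 201 Created
print(handle_request("eve", "create", {"name": "x"}, store))        # 401 Unauthorized
```

The point of the ordering is that a request can be rejected cheaply before anything touches etcd.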
### etcd
The database. Stores every Kubernetes object — pods, deployments, services, configmaps, everything. It's a distributed key-value store. If etcd is unreachable, the API server can't read or write anything; if its data is lost, the cluster's state is gone with it. This is why etcd backups are critical in production.
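"Key-value store" is meant literally: Kubernetes keeps each object under a predictable key like `/registry/pods/<namespace>/<name>`. That key layout is real; the store below is just a dict standing in for etcd, with prefix listing as the one operation that matters.

```python
# Toy model of the etcd key layout. Listing a prefix is roughly what a
# "get all pods" turns into at the storage layer.

registry = {
    "/registry/pods/default/web-0": {"phase": "Running"},
    "/registry/pods/kube-system/kube-proxy-abc": {"phase": "Running"},
    "/registry/deployments/default/web": {"replicas": 3},
}

def list_prefix(store, prefix):
    """Return all keys under a prefix, sorted — etcd's range read, minus etcd."""
    return sorted(k for k in store if k.startswith(prefix))

print(list_prefix(registry, "/registry/pods/"))
```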
### kube-controller-manager
The reconciliation engine. Runs a set of controllers in a loop — each one watches for drift between desired state and actual state and corrects it. The ReplicaSet controller sees "desired: 3, actual: 1" and creates 2 pods. The Node controller sees a node hasn't reported in and marks it NotReady. If this component is down, existing pods keep running, but nothing self-heals: failed pods aren't replaced and dead nodes go unnoticed.
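The controller pattern fits in a few lines. This is the shape of the loop, not the real ReplicaSet controller — the real one works through the API server, and the pod names here are made up.

```python
# One reconciliation pass: compare desired vs actual and emit the
# operations needed to close the gap.

def reconcile(desired, actual_pods, make_name):
    """Return ("create"|"delete", pod_name) actions to reach the desired count."""
    actions = []
    if len(actual_pods) < desired:
        for i in range(desired - len(actual_pods)):
            actions.append(("create", make_name(i)))
    elif len(actual_pods) > desired:
        for pod in actual_pods[desired:]:
            actions.append(("delete", pod))
    return actions

# desired: 3, actual: 1 -> create 2 pods
print(reconcile(3, ["web-a"], lambda i: f"web-new-{i}"))
```

Run in a loop forever, and you get self-healing: no matter how the actual state drifts, the next pass corrects it.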
### kube-scheduler
Decides which node a pod runs on. Looks at resource requests, node capacity, affinity rules, taints and tolerations. It doesn't create the pod — it just writes a node assignment (a Binding) through the API server. The kubelet on that node then picks it up and creates the pod.
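The core of scheduling is filter-then-pick: drop nodes that can't fit the pod, choose among the survivors. The real scheduler also scores candidates across many plugins; this sketch stops at filtering plus one crude score, and the node fields are toy ones.

```python
# Toy scheduler: filter nodes on CPU capacity and taints, then pick the
# surviving node with the most free CPU. Taints/tolerations are plain strings.

def schedule(pod, nodes):
    fits = [
        name for name, node in nodes.items()
        if node["free_cpu"] >= pod["cpu_request"]
        and not (node["taints"] - pod["tolerations"])  # every taint must be tolerated
    ]
    return max(fits, key=lambda n: nodes[n]["free_cpu"], default=None)

nodes = {
    "node-1": {"free_cpu": 2.0, "taints": set()},
    "node-2": {"free_cpu": 0.2, "taints": set()},
    "node-3": {"free_cpu": 4.0, "taints": {"dedicated=gpu"}},
}
print(schedule({"cpu_request": 1.0, "tolerations": set()}, nodes))  # node-1
```

node-2 is filtered out on CPU, node-3 on its untolerated taint — so node-1 wins even though node-3 has more headroom.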
## Node Components

### kubelet
The agent running on every node. Watches the API server for pods assigned to its node, then calls the container runtime to actually create them. Runs probes, reports pod status, manages static pods from /etc/kubernetes/manifests/. Along with the container runtime and kube-proxy, it's where Kubernetes actually touches the host: processes, cgroups, volumes.
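The kubelet's sync loop, in caricature: take the pods the API server has bound to this node, diff against what the runtime reports as running, and start what's missing. The function and field names here are invented — the real kubelet talks CRI to the runtime, not a dict.

```python
# Toy kubelet sync: which pods should the runtime start on this node?

def sync_node(node, bound_pods, running):
    """bound_pods maps pod name -> assigned node; running is what exists locally."""
    mine = {name for name, assigned in bound_pods.items() if assigned == node}
    return sorted(mine - running)   # missing pods, in stable order

bound = {"web-0": "node-1", "web-1": "node-2", "db-0": "node-1"}
print(sync_node("node-1", bound, running={"web-0"}))  # ['db-0']
```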
### kube-proxy
Manages networking rules for services. Writes iptables (or IPVS) rules so that traffic to a ClusterIP gets routed to the right pod. When a service is created or a pod is added/removed, kube-proxy updates the rules on every node.
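The effect of those rules can be modeled as a lookup table: a ClusterIP maps to a set of pod endpoints, and each connection lands on one of them. The real thing compiles this into iptables or IPVS rules in the kernel; the class below is a plain in-process stand-in with round-robin backend selection, and the IPs are made up.

```python
import itertools

# Toy model of kube-proxy's service table: ClusterIP -> rotating pod backends.

class ServiceTable:
    def __init__(self):
        self.backends = {}   # cluster_ip -> cycling iterator over pod IPs

    def update(self, cluster_ip, pod_ips):
        # Called whenever a Service or its Endpoints change.
        self.backends[cluster_ip] = itertools.cycle(pod_ips)

    def route(self, cluster_ip):
        # One "connection": pick the next backend for this service.
        return next(self.backends[cluster_ip])

table = ServiceTable()
table.update("10.96.0.10", ["172.17.0.4", "172.17.0.5"])
print(table.route("10.96.0.10"))  # 172.17.0.4
print(table.route("10.96.0.10"))  # 172.17.0.5
```

The `update` call is the part kube-proxy actually does continuously: watch Services and Endpoints, rewrite the rules on every node.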
### container runtime
The thing that actually runs containers. containerd or CRI-O. The kubelet talks to it via the CRI (Container Runtime Interface). It pulls images, creates containers, reports status.
### kubectl
Not a cluster component. It's a CLI tool that runs on your machine and talks to the API server over HTTPS. Worth calling out because it's easy to think of it as part of the cluster — it's not, it's just a client.
## Where Each Component Lives
```
controlplane node:
  /etc/kubernetes/manifests/        ← static pods
    kube-apiserver.yaml
    kube-controller-manager.yaml
    kube-scheduler.yaml
    etcd.yaml

every node:
  kubelet                           ← systemd service, not a pod
  kube-proxy                        ← DaemonSet
  container runtime (containerd)    ← systemd service
```
The control plane components run as static pods. The kubelet and container runtime run as systemd services directly on the host. kube-proxy runs as a DaemonSet.
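For a feel of what the kubelet finds in that manifests directory, here's a heavily trimmed static pod manifest. A real kubeadm-generated file carries many more flags, volumes, and probes, and the image tag below is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: registry.k8s.io/kube-scheduler:v1.30.0   # tag is illustrative
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
```

The kubelet creates this pod directly from the file, no scheduler involved — which is exactly how the scheduler itself can run as a pod.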
## One Line Each
| Component | One line |
|---|---|
| kube-apiserver | front door — everything talks through here |
| etcd | the database — stores all cluster state |
| kube-controller-manager | reconciliation loop — desired vs actual |
| kube-scheduler | decides which node a pod runs on |
| kubelet | node agent — creates containers, runs probes |
| kube-proxy | writes iptables rules for service routing |
| container runtime | actually runs the containers |
| kubectl | client CLI — not a cluster component |