Commands troubleshooting
k get pvc -o yaml > pvc.yaml
The kubectl get/delete/apply cycle on static pods is pointless — kubelet owns them, not the API server. Kubectl just shows you a mirror of what kubelet is running.
Rule for the exam: control plane broken → go straight to /etc/kubernetes/manifests/. Don't touch kubectl for the fix.
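The manifest-file workflow can be sketched with a scratch directory standing in for the real one; on an actual control-plane node the watched directory is /etc/kubernetes/manifests/ and kubelet reacts to file changes there. The manifest content below is a placeholder, not a real apiserver spec.

```shell
# Simulation: a scratch dir stands in for /etc/kubernetes/manifests/,
# which kubelet watches on a real control-plane node.
manifests=$(mktemp -d)

# Placeholder "broken" apiserver manifest living in the watched dir.
cat > "$manifests/kube-apiserver.yaml" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
EOF

# Moving the file out of the dir stops the static pod; edit it,
# then move it back and kubelet recreates the pod from the fixed file.
mv "$manifests/kube-apiserver.yaml" /tmp/kube-apiserver.yaml
mv /tmp/kube-apiserver.yaml "$manifests/kube-apiserver.yaml"
ls "$manifests"
```

No kubectl involved at any point; the file move is the whole restart mechanism.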
The pod cka-pod is exposed internally through the service cka-service, and to monitor cka-pod (access through the svc) a cronjob cka-cronjob was deployed that runs every minute.
The cka-cronjob cronjob is not working as expected; fix that issue.
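A minimal sketch of what the fixed CronJob could look like; the container name, image, and restartPolicy are assumptions, not given in the task:

```yaml
# Sketch only: container name and image are assumed.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cka-cronjob
spec:
  schedule: "*/1 * * * *"            # keep the exact */1 form (see fix 3 below)
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: monitor                  # assumed name
            image: curlimages/curl         # assumed image
            args: ["curl", "cka-service"]  # target the service, never the pod name
```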
1. Cronjob curling cka-pod instead of cka-service. Pod names aren't DNS-resolvable; only service names are. The cronjob was supposed to monitor via the service, so the curl target had to be cka-service.
2. Service endpoints empty. The service selector was app=cka-pod but the pod had no labels. A service routes to pods by matching labels; no match means no endpoints, meaning the service exists but routes to nothing. Adding app=cka-pod to the pod made the service actually point somewhere.
3. Schedule */1 vs *. Semantically identical, but KillerCoda string-matched for */1 * * * *.
The dependency chain: even after fixing the cronjob command to cka-service, it would still fail because the service had no endpoints. Both fixes were needed for curl to actually return 200.
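The label half of the fix, as a hedged manifest sketch; the port, container name, and image are assumptions:

```yaml
# Sketch: the selector/label pairing that makes Endpoints non-empty.
apiVersion: v1
kind: Service
metadata:
  name: cka-service
spec:
  selector:
    app: cka-pod        # must match the pod's labels exactly
  ports:
  - port: 80            # assumed port
---
apiVersion: v1
kind: Pod
metadata:
  name: cka-pod
  labels:
    app: cka-pod        # the label that was missing
spec:
  containers:
  - name: web           # assumed name
    image: nginx        # assumed image
```

In the live exam the faster route is k label pod cka-pod app=cka-pod rather than editing YAML.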
How service discovery works in Kubernetes
When you do curl cka-service inside a pod, CoreDNS resolves cka-service to the service's ClusterIP (10.97.241.253). Then kube-proxy routes that to one of the endpoints — the actual pod IPs behind the service.
The chain: curl cka-service → CoreDNS → ClusterIP → kube-proxy → pod IP
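The short name works because the pod's DNS search path expands it to a fully qualified name. A quick sketch of that expansion, assuming the default namespace:

```shell
# Short service names expand via the pod's DNS search path;
# the namespace "default" is an assumption here.
ns=default
svc=cka-service
echo "${svc}.${ns}.svc.cluster.local"
# prints cka-service.default.svc.cluster.local
```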
Why labels matter
The service doesn't track pods by name. It uses a selector (app=cka-pod) and continuously watches for pods matching that selector. When a pod matches, its IP gets added to the Endpoints object. When it stops matching (deleted, label removed), it gets removed.
Service selector: app=cka-pod
↓ looks for pods with this label
Pod labels: app=cka-pod ← match → added to Endpoints
Pod labels: <none> ← no match → Endpoints empty
This is why k describe svc showing Endpoints: <none> is a red flag — the service is alive but deaf. Traffic hits the ClusterIP and goes nowhere.
Why curling the pod name directly fails
cka-pod is not a DNS entry. Kubernetes only registers service names in DNS, not pod names. The only way to reach a pod by name via DNS is if it's part of a StatefulSet (which creates stable DNS entries per pod). Regular pods — no.
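For contrast, the stable per-pod DNS name a StatefulSet pod gets through its headless service; every name below is an illustrative assumption:

```shell
# StatefulSet pods get per-pod DNS entries of the form
# <pod>.<headless-svc>.<ns>.svc.cluster.local. Names are made up.
pod=web-0
svc=nginx
ns=default
echo "${pod}.${svc}.${ns}.svc.cluster.local"
# prints web-0.nginx.default.svc.cluster.local
```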
Practical exam pattern for this type of issue:
- Cronjob/job failing → check logs of the job pod
- Could not resolve host → wrong name, or CoreDNS down
- Failed to connect → name resolves but no endpoints, wrong port, or NetworkPolicy blocking
- Check k describe svc → look at the Endpoints line immediately