CKA Road Trip: CronJob Keeps Failing — Two Bugs, One Exercise¶
A CronJob running curl kept failing with exit code 6. I fixed it, but only after realising I'd forgotten how services actually work. Two bugs, both fundamental.
The Symptom¶
k get pods
# NAME              READY   STATUS    RESTARTS   AGE
# cka-cronjob-xxx   0/1     Error     5          4m
# cka-pod           1/1     Running   0          4m
k logs cka-cronjob-xxx
# curl: (6) Could not resolve host: cka-pod
In curl, exit code 6 (CURLE_COULDNT_RESOLVE_HOST) means DNS resolution failed: the hostname doesn't exist or can't be resolved.
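The failure mode is easy to reproduce outside the cluster. The hostname below is made up; the .invalid TLD is reserved by RFC 2606 and never resolves, so this reliably produces the same error:

```shell
# .invalid never resolves, so curl fails at the DNS step,
# just like the cronjob did
curl --silent http://no-such-host.invalid
echo "exit code: $?"
# exit code: 6
```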
How Services Actually Work — The Part I Forgot¶
A service doesn't know about pods by name. It finds pods using label selectors. The service defines a selector like app=cka-pod, Kubernetes finds all pods with that label, and builds an Endpoints list from their IPs. Traffic to the ClusterIP gets routed to those endpoints.
service selector: app=cka-pod
↓
find pods with label app=cka-pod
↓
build Endpoints list (pod IPs)
↓
ClusterIP routes traffic there
If no pods have matching labels → Endpoints: <none> → traffic goes nowhere.
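Here's what that wiring looks like in manifest form; a minimal sketch using the names from this exercise (the image and port are assumptions):

```yaml
# Service: selects pods by label, never by name
apiVersion: v1
kind: Service
metadata:
  name: cka-service
spec:
  selector:
    app: cka-pod          # must match the pod's labels exactly
  ports:
    - port: 80
---
# Pod: the label below is the only thing connecting it to the service
apiVersion: v1
kind: Pod
metadata:
  name: cka-pod
  labels:
    app: cka-pod          # without this, the service has no endpoints
spec:
  containers:
    - name: web
      image: nginx        # assumed image for the sketch
      ports:
        - containerPort: 80
```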
DNS — Pods vs Services¶
Every service automatically gets a cluster DNS entry of the form <service>.<namespace>.svc.cluster.local.
Within the same namespace, just the service name works:
curl cka-service # works — service has a DNS entry
curl cka-pod # fails — pod names don't get DNS entries
Pods don't get DNS entries by default. Only services do. Curling a pod name directly will always fail with exit code 6.
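This is easy to verify from inside the cluster with nslookup (assuming the pod's image ships it; default namespace assumed):

```shell
# short name resolves from the same namespace
k exec cka-pod -- nslookup cka-service

# fully qualified form resolves from any namespace
k exec cka-pod -- nslookup cka-service.default.svc.cluster.local

# pod names have no DNS record, so this fails
k exec cka-pod -- nslookup cka-pod
```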
The Two Bugs¶
Bug 1 — pod missing labels
The service selector was app=cka-pod but the pod had no labels. So:
The service existed. The pod existed. But they weren't connected because the label was missing.
Bug 2 — cronjob curling the wrong hostname
cka-pod is a pod name, not a DNS hostname. Should be cka-service.
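For reference, the relevant part of the fixed CronJob; the schedule and image here are assumptions, the command is the point:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cka-cronjob
spec:
  schedule: "*/1 * * * *"            # assumed schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: curl
              image: curlimages/curl       # assumed image
              command: ["curl", "cka-service"]   # service name, not pod name
```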
The Fix¶
# fix 1 — add the missing label to the pod
k label pod cka-pod app=cka-pod
# verify the service now has endpoints
k get endpoints cka-service
# NAME ENDPOINTS AGE
# cka-service 192.168.1.184:80 4m
# fix 2 — edit the cronjob to curl the service name
k edit cronjob cka-cronjob
# change: curl cka-pod
# to: curl cka-service
Next cronjob run completes successfully.
How to Diagnose a Broken Service¶
# 1. check what the service is selecting
k describe svc cka-service
# Selector: app=cka-pod
# 2. check if any pods match that selector
k get pods -l app=cka-pod
# No resources found = label missing on pod
# 3. check endpoints directly
k get endpoints cka-service
# Endpoints: <none> = selector matches nothing
# 4. check pod labels
k describe pod cka-pod | grep Labels
# Labels: <none> = add the label
Endpoints: <none> on a service is the clearest signal the selector isn't matching any pods.
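When scanning describe output gets tedious, the selector and the pod's labels can be pulled out directly with jsonpath and compared side by side:

```shell
# the service's selector, printed as a JSON map
k get svc cka-service -o jsonpath='{.spec.selector}'; echo

# the pod's labels; these must contain every key/value
# pair from the selector for endpoints to appear
k get pod cka-pod -o jsonpath='{.metadata.labels}'; echo
```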
The Two Things Worth Remembering¶
Services find pods via labels, not names. No matching label = no endpoints = traffic goes nowhere. Always check k get endpoints <service> when a service isn't working.
curl the service name, not the pod name. Pod names don't resolve as DNS. Only services do.