
kubectl logs — Advanced Patterns & Debugging

Reference: https://kubernetes.io/docs/reference/kubectl/quick-reference/

See also: log_reader.md.md for basic flags (-f, --tail, --since, --previous, -c, -l), output redirection, and namespaces.


The Mistake in the Raw Notes

kubectl get logs       # WRONG — does not exist
kubectl logs <pod>     # CORRECT

logs is a standalone verb, not a sub-command of get. get is for fetching resource objects. Logs are a stream from a container, not a resource. See log_reader.md.md for the full explanation.
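One way to convince yourself: get only works on API resource kinds, and logs are not one. A quick check (the pod name below is illustrative):

```shell
# Everything `kubectl get` can fetch is an API resource kind:
kubectl api-resources | less

# "logs" is not in that list; it is a verb that reads the
# pods/log subresource of an existing pod:
kubectl logs my-pod
```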


Multi-Container Pods — Which Container's Logs?

When a pod has more than one container, kubectl logs needs to know which one:

kubectl logs my-pod                     # errors if the pod has multiple containers (unless a default container is annotated)
kubectl logs my-pod -c app              # logs from container named "app"
kubectl logs my-pod -c sidecar          # logs from container named "sidecar"

How to know what containers are in a pod:

kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}'
kubectl describe pod my-pod             # lists all containers under "Containers:"

Get logs from ALL containers in a pod at once:

kubectl logs my-pod --all-containers=true
kubectl logs my-pod --all-containers=true --prefix=true   # prefix each line with [pod/pod-name/container-name]

--prefix=true is useful here: without it, logs from different containers are interleaved and you can't tell which line came from which container.
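What the prefixed output looks like (pod, container names, and log lines here are illustrative):

```shell
kubectl logs my-pod --all-containers=true --prefix=true
# [pod/my-pod/app] INFO: handling request /api/users
# [pod/my-pod/sidecar] INFO: proxying request upstream
```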


--timestamps — When Did Each Line Happen?

kubectl logs my-pod --timestamps

Prepends an RFC3339 timestamp to every log line:

2024-01-15T10:30:45.123456789Z ERROR: database connection failed
2024-01-15T10:30:46.234567890Z INFO: retrying connection...

Useful when correlating with external events ("the deployment happened at 10:28, did logs show errors after that?").
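To answer exactly that question, combine --timestamps with --since-time so only lines after the known event remain (the timestamp and pod name are examples):

```shell
# Only lines logged after the 10:28 UTC deployment, each timestamped:
kubectl logs my-pod --timestamps --since-time="2024-01-15T10:28:00Z" | grep -i error
```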


Getting Logs by Deployment/DaemonSet/StatefulSet (not just Pod)

kubectl logs deployment/my-app              # logs from one pod in the deployment (arbitrary)
kubectl logs deployment/my-app --all-pods   # logs from ALL pods in the deployment (newer kubectl)
kubectl logs daemonset/my-ds                # logs from one pod
kubectl logs statefulset/my-sts

Why by label is better for "all pods":

kubectl logs -l app=my-app --all-containers=true

This hits all pods matching the label selector, which is the same selector the Deployment uses internally to find its pods, and it works on any kubectl version, unlike --all-pods.
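If you don't know which labels a Deployment selects on, read them straight off the object (the deployment name is illustrative):

```shell
# Show the labels the Deployment selects its pods by:
kubectl get deployment my-app -o jsonpath='{.spec.selector.matchLabels}'

# Then reuse them with a label-selector logs query:
kubectl logs -l app=my-app --all-containers=true --prefix=true
```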


Debugging CrashLoopBackOff

CrashLoopBackOff means the container keeps crashing and Kubernetes keeps restarting it with exponential backoff. The pattern:

Step 1 — Confirm it's CrashLoopBackOff:

kubectl get pods                          # STATUS shows CrashLoopBackOff

Step 2 — Get logs from the crashed instance:

kubectl logs my-pod --previous            # logs from BEFORE the last restart

Without --previous, you get logs from the current (newly started) instance — which might not have crashed yet, so logs are minimal or empty.

Step 3 — Check events for the pod:

kubectl describe pod my-pod

Scroll to the Events: section at the bottom. This shows:
- Whether the image could be pulled
- OOMKilled (out of memory)
- Exit codes
- Backoff timings

Step 4 — Check exit code:

kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'

Exit Code   Meaning
0           Exited cleanly (app finished, not a crash)
1           General application error
137         OOMKilled (128 + signal 9 = SIGKILL by OOM killer)
139         Segfault (128 + signal 11)
143         Graceful shutdown (128 + signal 15 = SIGTERM)

Exit code 137 means the container was killed by the OOM killer — increase memory limits.
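The 128-plus-signal arithmetic behind those codes can be sketched as a small helper (a local shell sketch, not a kubectl feature):

```shell
# Decode a container exit code: codes above 128 mean
# "terminated by signal (code - 128)".
decode_exit() {
  code=$1
  if [ "$code" -gt 128 ]; then
    echo "killed by signal $((code - 128))"
  else
    echo "application exited with status $code"
  fi
}

decode_exit 137   # prints: killed by signal 9
decode_exit 143   # prints: killed by signal 15
decode_exit 1     # prints: application exited with status 1
```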


Debugging ImagePullBackOff / ErrImagePull

The container never starts, so there are no logs from the container itself.

kubectl describe pod my-pod               # check Events — will show the pull error

Events will say something like:
- Failed to pull image "my-image:tag": rpc error: ... not found → wrong image name or tag
- 401 Unauthorized → missing or wrong imagePullSecrets
- connection refused → can't reach the registry

No kubectl logs solution here — the container never ran. Fix the image reference or add imagePullSecrets.
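For the 401 case, a minimal sketch of the fix, assuming a private registry and credentials you supply (the secret name, registry, and account are all placeholders):

```shell
# Create a docker-registry secret holding the registry credentials:
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=me \
  --docker-password=s3cret

# Then reference it from the pod spec (YAML snippet):
#   spec:
#     imagePullSecrets:
#       - name: regcred
```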


kubectl events — The Other Source of Truth

Logs are what the application printed. Events are what Kubernetes itself recorded about the pod's lifecycle. They're separate:

kubectl events --for pod/my-pod           # events for a specific pod (newer kubectl)
kubectl get events --field-selector involvedObject.name=my-pod    # same, older syntax
kubectl get events -n my-namespace        # all events in namespace
kubectl get events --sort-by='.lastTimestamp'   # sorted by time

What events tell you that logs don't:
- Scheduling failures (no nodes match nodeSelector, insufficient resources)
- Image pull failures
- Volume mount failures
- Liveness probe failures (and the restart trigger)
- Node pressure evictions

Events expire after ~1 hour by default. If a pod crashed and recovered more than an hour ago, the events are gone.
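If you need the history to outlive the TTL, snapshot events to a file while they still exist (the path is an example):

```shell
# Save the namespace's events, oldest first, before they expire:
kubectl get events --sort-by='.lastTimestamp' > /tmp/events.txt
```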


kubectl describe pod — The All-in-One Debug View

kubectl describe pod my-pod

This is the first thing you should run when a pod isn't behaving. It shows:
- Current state (Running, Waiting, Terminated) and reason
- Container image, ports, env vars, mounts
- Conditions (Initialized, Ready, ContainersReady, PodScheduled)
- Volume details
- Events at the bottom, the most useful section for debugging

The Conditions section is useful for understanding which step failed:
- PodScheduled: False → can't schedule (node selector, taints, no resources)
- Initialized: False → init container failing
- ContainersReady: False → readiness probe failing
- Ready: False → pod exists but can't receive traffic
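The same conditions can be pulled out without scrolling through describe output (the pod name is illustrative):

```shell
# One line per condition, as Type=Status:
kubectl get pod my-pod -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```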


Grep Patterns for Common Issues

kubectl logs my-pod | grep -i error
kubectl logs my-pod | grep -i "connection refused"
kubectl logs my-pod | grep -i "timeout"
kubectl logs my-pod | grep -E "ERROR|FATAL|WARN"      # multiple patterns
kubectl logs my-pod | grep -v "health check"           # exclude noisy lines
kubectl logs my-pod --since=5m | grep -i error         # last 5 mins, errors only

Pipe to less for large logs:

kubectl logs my-pod | less

Navigate with j/k or arrow keys, /pattern to search, q to quit.


Streaming Logs from Multiple Pods (Stern)

stern is a third-party tool that streams logs from multiple pods simultaneously with colour-coded output by pod name. Not available in the CKA exam, but useful in real environments:

stern my-app                              # all pods with "my-app" in the name
stern -l app=my-app                       # all pods matching label
stern my-app --since 5m                   # last 5 minutes

In the exam, simulate multi-pod log streaming with:

kubectl logs -l app=my-app --all-containers=true --prefix=true -f


Quick Reference

# Basic
kubectl logs <pod>
kubectl logs <pod> -n <namespace>
kubectl logs <pod> -c <container>           # specific container
kubectl logs <pod> --all-containers=true    # all containers

# Filtering
kubectl logs <pod> --tail=100
kubectl logs <pod> --since=1h
kubectl logs <pod> --since-time="2024-01-15T10:00:00Z"
kubectl logs <pod> --timestamps

# Previous instance (crashed container)
kubectl logs <pod> --previous
kubectl logs <pod> --previous -c <container>

# By label (all pods)
kubectl logs -l app=my-app
kubectl logs -l app=my-app --all-containers=true --prefix=true

# By resource type
kubectl logs deployment/my-app
kubectl logs daemonset/my-ds

# Save to file
kubectl logs <pod> > /tmp/pod-logs.txt
kubectl logs <pod> --since=1h | grep -i error > /tmp/errors.txt

# Debugging
kubectl describe pod <pod>                  # events + state
kubectl get events --field-selector involvedObject.name=<pod>
kubectl get pod <pod> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'