kubectl logs — Pod Logging & Namespaces¶
Reference: https://kubernetes.io/docs/reference/kubectl/quick-reference/
kubectl logs — It's Its Own Verb¶
Common mistake: kubectl get logs. That's wrong. get retrieves resource objects (pods, deployments, services). Logs are not a resource object — they're a stream of output from a running container.
logs is a standalone verb, the same way exec, cp, port-forward, and top are standalone verbs — they perform actions against a resource rather than fetching the resource itself.
kubectl logs Full Breakdown¶
Basic Usage¶
| Part | What it does |
|---|---|
| kubectl logs | Fetch logs from a container |
| log-reader-pod | The name of the pod to get logs from |
Outputs the current log content to stdout and exits.
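Putting the two parts from the table together, a full invocation looks like this (pod name as used throughout this section):

```shell
# Print the pod's current log content to stdout, then exit
kubectl logs log-reader-pod
```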
All Useful Flags¶
-f / --follow — stream logs in real time (like tail -f). Doesn't exit — keeps watching and printing new lines as they arrive. Ctrl+C to stop.
--tail=N — only show the last N lines. Without this, logs dumps the entire log history, which can be huge.
--since=<duration> — only logs from the last N time units. Accepts s (seconds), m (minutes), h (hours). So --since=30m = last 30 minutes.
--since-time=<timestamp> — logs from a specific ISO 8601 timestamp onwards.
-c <container> / --container — specify which container to get logs from. Required when a pod has multiple containers. If there's only one container, you can omit it.
--previous — get logs from the previous (crashed/restarted) instance of the container. Critical for debugging crash loops — by the time you look, the container has restarted and current logs might not show the crash reason. --previous shows what happened before the restart.
-n <namespace> — logs from a pod in a specific namespace. Without this, kubectl uses your current context's namespace (default, unless you've changed it).
-l <label-selector> — get logs from all pods matching a label. Useful for seeing logs across all replicas of a deployment at once.
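A few of these flags in combination (the container name and label here are illustrative):

```shell
# Start from the last 50 lines, then keep streaming from the "app" container
kubectl logs log-reader-pod -c app --tail=50 -f

# What did the container print before it last crashed?
kubectl logs log-reader-pod --previous

# Last 10 minutes of logs from every replica labeled app=web
kubectl logs -l app=web --since=10m
```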
Redirect Logs to a File¶
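The basic pattern looks something like this (the breakdown below refers to the file name used here):

```shell
kubectl logs log-reader-pod > podalllogs.txt
```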
> — shell output redirection. Takes everything that the command prints to stdout and writes it to podalllogs.txt instead. Creates the file if it doesn't exist. Overwrites if it does.
>> vs >:
- > — overwrites the file
- >> — appends to the file
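The difference is easy to demonstrate with plain echo, no cluster needed:

```shell
echo "first"  > demo.txt   # creates demo.txt containing "first"
echo "second" >> demo.txt  # appends; file now has two lines
echo "third"  > demo.txt   # overwrites; file now contains only "third"
cat demo.txt
```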
Verify it actually wrote something:
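One quick way, using the podalllogs.txt name from above:

```shell
ls -lh podalllogs.txt    # a non-zero size means logs landed in the file
```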
Or check the line count:
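```shell
wc -l podalllogs.txt     # prints the line count followed by the file name
```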
Combining flags with redirection:
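For instance (the output file name here is illustrative):

```shell
kubectl logs log-reader-pod --tail=100 --since=1h > last100.txt
```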
This saves the last 100 lines from the past hour to a file — useful for handing off logs to someone else or grepping through them:
kubectl logs log-reader-pod | grep "ERROR"
kubectl logs log-reader-pod | grep "ERROR" > errors_only.txt
Namespaces — From the Ground Up¶
The Problem Without Namespaces¶
Imagine a Kubernetes cluster with no namespaces. Every resource — every pod, deployment, service, secret, configmap — lives in one giant flat pool.
You have 10 teams: frontend, backend, data-engineering, ML, infra, security... They all deploy into this one pool. After a few months you have:
- 3 different pods named nginx (naming collision — Kubernetes rejects the 2nd and 3rd)
- A frontend developer accidentally deletes the backend's deployment (no isolation)
- You can't set different resource limits for different teams (no boundary)
- You can't give one team read-only access and another full access (no RBAC scope)
Namespaces solve all of this.
What a Namespace Actually Is¶
A namespace is a logical boundary inside a cluster. It's a way to divide one physical cluster into multiple virtual spaces. Resources in namespace team-a are isolated from resources in namespace team-b.
The folder analogy: one cluster = one hard drive. Namespaces = folders on that hard drive. Two teams can each have a pod named nginx as long as they're in different namespaces — just like two folders can each contain a file named readme.txt.
Cluster
├── namespace: default
│ ├── pod: nginx
│ └── service: my-service
├── namespace: team-a
│ ├── pod: nginx ← same name, different namespace, no conflict
│ └── deployment: api
└── namespace: team-b
├── pod: nginx ← same name again, different namespace
└── configmap: settings
What Namespaces Give You¶
| Feature | How namespaces enable it |
|---|---|
| Name isolation | Two teams can both have a pod called nginx — no collision |
| RBAC scope | Give team-a full access to namespace: team-a, read-only to namespace: team-b |
| Resource quotas | Limit team-a to 4 CPUs and 8Gi memory; let team-b use 16 CPUs |
| Network policies | Block traffic between namespaces by default, allow selectively |
| Visibility | kubectl get pods -n team-a shows only team-a's pods |
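A quick sketch of name isolation in practice (namespace and image names are illustrative):

```shell
kubectl create namespace team-a
kubectl create namespace team-b

# Same pod name in both namespaces -- no collision
kubectl run nginx --image=nginx -n team-a
kubectl run nginx --image=nginx -n team-b

kubectl get pods -n team-a   # shows only team-a's nginx
```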
Namespaces ≠ Clusters — Critical Distinction¶
This is a mistake many people make early on.
Namespaces are logical isolation, not fault isolation. Namespaced workloads run on the same underlying nodes, share the same etcd, and use the same control plane. If the cluster goes down, everything in every namespace goes down together.
- Namespace = same hard drive, different folders. The hard drive fails → all folders lose data.
- Separate clusters = separate hard drives. One fails → others are unaffected.
When to use separate namespaces:
- Multiple teams or projects in the same cluster
- Separating app layers (frontend, backend, monitoring) when it's the same environment
- Dev and staging when they're on the same hardware and you're OK with shared fate
When to use separate clusters:
- Production vs development (you don't want a dev deployment to affect prod)
- Compliance requirements (data sovereignty, PCI-DSS, HIPAA — production data must be physically isolated)
- Blast radius control (a bad deployment in prod should never affect staging)
- Different SLAs (prod cluster needs 99.9% uptime; dev can go down for maintenance)
Built-in Namespaces¶
| Namespace | Purpose |
|---|---|
| default | Where resources go if you don't specify a namespace |
| kube-system | Kubernetes system components — API server, scheduler, controller manager, CoreDNS |
| kube-public | Publicly readable — used for cluster info |
| kube-node-lease | Node heartbeat objects — used internally by the cluster |
Never put your own workloads in kube-system. If you accidentally delete a system pod there, it will be recreated, but you could cause instability.
Working With Namespaces¶
# List all namespaces
kubectl get namespaces
# Create a namespace
kubectl create namespace team-a
# Run a command in a specific namespace
kubectl get pods -n team-a
# Run a command across ALL namespaces
kubectl get pods -A
kubectl get pods --all-namespaces
# Set a default namespace for your session (so you don't have to type -n every time)
kubectl config set-context --current --namespace=team-a
The -n flag is required whenever a resource isn't in the default namespace. Forgetting it and wondering why kubectl get pods returns nothing (or returns the wrong pods) is a classic CKA exam mistake.
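If you're ever unsure which namespace your current context defaults to, you can ask kubectl directly:

```shell
# Prints the namespace set on the current context (empty means "default")
kubectl config view --minify --output 'jsonpath={..namespace}'
```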
Namespace-scoped vs Cluster-scoped Resources¶
Not every resource belongs to a namespace:
| Namespace-scoped (need -n) | Cluster-scoped (no namespace) |
|---|---|
| Pod, Deployment, Service | Node |
| ConfigMap, Secret | PersistentVolume |
| ServiceAccount | StorageClass |
| Role, RoleBinding | ClusterRole, ClusterRoleBinding |
| Ingress | Namespace itself |
You can't put a Node or a PersistentVolume in a namespace — they're cluster-wide by nature. kubectl get nodes -n team-a won't filter anything: for cluster-scoped resources, the -n flag is simply ignored.
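You don't have to memorize the table; kubectl can list which resources are namespaced:

```shell
kubectl api-resources --namespaced=true    # resources that need -n
kubectl api-resources --namespaced=false   # cluster-scoped resources
```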