Services, DNS & nslookup — Full Reference¶
Reference: https://kubernetes.io/docs/concepts/services-networking/service/
What Is a Service and Why Does It Exist¶
A pod has an IP address, but that IP is ephemeral — when the pod dies and restarts, it gets a new IP. You can't hardcode a pod IP in your app config. Also, if you have 3 replicas of a pod, you need something to load balance between them.
A Service solves both problems:

- It gives you a stable IP and DNS name that never changes, regardless of what happens to the underlying pods
- It load balances traffic across all matching pods
A Service doesn't run code. It's just a stable networking endpoint that Kubernetes maintains. It uses a label selector to find which pods it should forward traffic to.
Analogy: a Service is like a phone number that stays the same even when the person behind it moves. The pods are the people — they move (get rescheduled, restart, scale). The Service is the stable number you always call.
Service Types¶
| Type | Accessible from | Use case |
|---|---|---|
| ClusterIP | Inside the cluster only | Default. Internal communication between services |
| NodePort | Outside the cluster via `<node-ip>:<node-port>` | Dev/test exposure without a load balancer |
| LoadBalancer | Outside via cloud load balancer | Production on cloud providers (AWS/GCP/Azure) |
| ExternalName | Inside cluster → external DNS name | Route internal traffic to an external service |
For CKA: ClusterIP is the default and what you use most. NodePort comes up occasionally. LoadBalancer is cloud-specific.
kubectl expose — Create a Service Imperatively¶
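The command under discussion (reconstructed from the breakdown below; pod and Service names match the table):

```shell
# Create a ClusterIP Service named nginx-service-cka that forwards
# port 80 to the pod nginx-pod-cka
kubectl expose pod nginx-pod-cka --name=nginx-service-cka --port=80
```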
Full breakdown:
| Part | What it does |
|---|---|
| `kubectl expose` | Create a Service that exposes a resource |
| `pod nginx-pod-cka` | The resource to expose — a pod named `nginx-pod-cka` |
| `--name=nginx-service-cka` | Name of the Service object to create |
| `--port=80` | The port the Service listens on (what callers use) |
This creates a ClusterIP Service by default. The Service's selector is automatically set to match the labels of the pod you exposed.
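You can confirm the auto-set selector (a sketch; pods created with `kubectl run` typically carry a `run=<pod-name>` label):

```shell
# Show which labels the Service uses to pick its backend pods
kubectl describe svc nginx-service-cka | grep Selector
```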
Expose a Deployment instead (more common in practice):
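A minimal sketch, assuming a Deployment named `nginx-deploy` (hypothetical name):

```shell
# The Service's selector is copied from the Deployment's pod template labels
kubectl expose deployment nginx-deploy --name=nginx-service --port=80
```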
--port vs --target-port:
| Flag | What it means |
|---|---|
| `--port` | Port the Service listens on — what other pods call |
| `--target-port` | Port on the container the traffic is forwarded to |
If your container runs on port 8080 but you want callers to use port 80:
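For example (pod name hypothetical, matching the earlier exercise):

```shell
# Callers use port 80; the Service forwards to the container's 8080
kubectl expose pod nginx-pod-cka --name=nginx-service-cka --port=80 --target-port=8080
```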
If you omit --target-port, it defaults to the same value as --port.
Expose as NodePort:
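A sketch using the same hypothetical pod name:

```shell
# Reachable from outside the cluster on <node-ip>:<auto-assigned-node-port>
kubectl expose pod nginx-pod-cka --name=nginx-service-cka --port=80 --type=NodePort
```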
Service YAML — Full Structure¶
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-cka
  namespace: default
spec:
  type: ClusterIP          # ClusterIP | NodePort | LoadBalancer
  selector:
    app: nginx             # matches pods with this label
  ports:
    - port: 80             # port the Service listens on
      targetPort: 80       # port on the container
      protocol: TCP
      # nodePort: 30080    # only for NodePort type — optional, auto-assigned if omitted
```
The selector is how the Service finds its pods. If a pod has label app: nginx, this Service includes it as a backend. If the pod is removed, it's removed from the pool. If a new pod with that label is added, it's automatically included.
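You can watch this dynamic membership (a sketch; the pod name `extra-pod` is hypothetical):

```shell
kubectl get endpoints nginx-service-cka    # current backend pod IPs
kubectl label pod extra-pod app=nginx      # pod gains the matching label, joins the pool
kubectl label pod extra-pod app-           # label removed, pod leaves the pool
```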
Kubernetes Internal DNS — CoreDNS¶
Kubernetes runs CoreDNS — an internal DNS server that every pod uses automatically. Every Service gets a DNS record the moment it's created.
DNS name format:

`<service-name>.<namespace>.svc.cluster.local`

So nginx-service-cka in the default namespace is reachable as:

`nginx-service-cka.default.svc.cluster.local`

From within the same namespace, you can use the short form:

`nginx-service-cka`

The short form works because each pod's /etc/resolv.conf lists search domains for its own namespace, so the resolver expands the short name to the full FQDN before querying CoreDNS.

From a different namespace, the short name doesn't work — you need at minimum:

`nginx-service-cka.default`

or the full FQDN.

nslookup — What It Is and Why¶
nslookup is a DNS lookup tool. It queries a DNS server and returns the IP address for a given hostname.
In Kubernetes, you use it to verify:

1. The Service's DNS name resolves (CoreDNS is working)
2. The DNS name resolves to the correct ClusterIP
3. The Service is reachable by name from inside the cluster
You can't run nslookup from outside the cluster because cluster DNS is internal. You need to run it from inside a pod.
The One-Shot Test Pod Pattern¶
kubectl run test-nslookup \
--image=busybox:1.28 \
--rm -it \
--restart=Never \
-- nslookup nginx-service-cka
Full breakdown:
| Part | What it does |
|---|---|
| `kubectl run test-nslookup` | Create a pod named `test-nslookup` |
| `--image=busybox:1.28` | Use busybox 1.28 — has `nslookup` built in. Use 1.28 specifically — newer busybox versions have a broken `nslookup` |
| `--rm` | Delete the pod automatically after it exits. Keeps the cluster clean |
| `-it` | `-i` = keep stdin open, `-t` = allocate a TTY. Together: attach to the pod interactively |
| `--restart=Never` | Create a bare Pod, not a Deployment. Pod runs once and exits |
| `-- nslookup nginx-service-cka` | `--` separates kubectl flags from the command to run in the container. Runs `nslookup nginx-service-cka` inside busybox |
Why busybox:1.28 specifically? Newer versions of busybox ship with a version of nslookup that returns exit code 1 even on success, which causes the pod to appear failed. 1.28 is the stable version for this use case — it's the standard answer in CKA exercises.
Save the output to a file:
kubectl run test-nslookup \
--image=busybox:1.28 \
--rm -it \
--restart=Never \
-- nslookup nginx-service-cka > nginx-service.txt
The > redirects the pod's stdout to the local file. The DNS lookup result is saved.
Full Exercise Sequence¶
# 1. Create the pod
kubectl run nginx-pod-cka --image=nginx
# 2. Expose it as a Service
kubectl expose pod nginx-pod-cka --name=nginx-service-cka --port=80
# 3. Verify the Service exists
kubectl get svc nginx-service-cka
# 4. DNS lookup and save result
kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never \
-- nslookup nginx-service-cka > nginx-service.txt
# 5. Verify the file
cat nginx-service.txt
nginx-service.txt will contain something like:
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx-service-cka
Address 1: 10.96.123.45 nginx-service-cka.default.svc.cluster.local
Inspecting Services¶
kubectl get svc # list all services
kubectl get svc -A # all namespaces
kubectl describe svc nginx-service-cka # full details — selector, endpoints, ports
kubectl get endpoints nginx-service-cka # shows the pod IPs the service routes to
kubectl get endpoints is the most useful debugging tool. If the Service exists but traffic isn't working, check the endpoints — if it shows <none>, the selector doesn't match any pods.
Common cause: pod label doesn't match Service selector. Check with:
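A sketch of the check (Service name from the earlier exercise):

```shell
kubectl describe svc nginx-service-cka | grep -i selector   # what the Service selects
kubectl get pods --show-labels                              # what labels the pods actually carry
```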
Common Exam Patterns¶
"Expose a pod and verify DNS":
kubectl expose pod <pod> --name=<svc-name> --port=80
kubectl run test --image=busybox:1.28 --rm -it --restart=Never -- nslookup <svc-name> > result.txt
"Create a service for a deployment on port 8080":
"Check why a service has no traffic":
kubectl get endpoints <svc-name> # if <none>, selector is wrong
kubectl describe svc <svc-name> # check Selector field
kubectl get pods --show-labels # check pod labels
"Test connectivity from inside the cluster":
kubectl run test --image=busybox:1.28 --rm -it --restart=Never -- wget -qO- http://<svc-name>
kubectl run test --image=busybox:1.28 --rm -it --restart=Never -- nc -zv <svc-name> 80
Quick Reference¶
# Create
kubectl expose pod <pod> --name=<svc> --port=<port>
kubectl expose pod <pod> --name=<svc> --port=80 --target-port=8080
kubectl expose pod <pod> --name=<svc> --port=80 --type=NodePort
kubectl expose deployment <dep> --name=<svc> --port=80
# Inspect
kubectl get svc
kubectl describe svc <svc>
kubectl get endpoints <svc> # pod IPs behind the service
# DNS test
kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup <svc-name>
kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup <svc-name> > result.txt
# DNS name format
<svc>.<namespace>.svc.cluster.local # full FQDN
<svc> # short form (same namespace only)