ClusterIP Service — Full Reference¶
Reference: https://kubernetes.io/docs/concepts/services-networking/service/#type-clusterip
See also:
nslookup.md for DNS and kubectl expose, nodeport.md for external access, ingress.md for HTTP routing.
What Is ClusterIP¶
ClusterIP is the default Service type. It gives a Service a stable virtual IP address that is only reachable from within the cluster. Nothing outside the cluster can reach a ClusterIP service directly.
Inside cluster:
pod-a → nginx-service:80 (ClusterIP: 10.96.45.1) → pod-nginx
Outside cluster:
✗ Cannot reach 10.96.45.1 — it's a virtual IP that only exists inside the cluster network
Why ClusterIP Exists¶
Pods have ephemeral IPs — they change every time a pod restarts or gets rescheduled. You can't hardcode pod IPs in your app config.
ClusterIP gives you:
- Stable IP — never changes, even as pods behind it die and restart
- Stable DNS name — <service-name>.<namespace>.svc.cluster.local
- Load balancing — kube-proxy distributes traffic across all pods matching the selector
Creating a ClusterIP Service¶
Imperative (fastest)¶
kubectl expose pod nginx-pod --port=80 --name=nginx-service
kubectl expose deployment nginx-dep --port=80 --target-port=8080 --name=nginx-service
ClusterIP is the default — no --type flag needed.
YAML¶
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  type: ClusterIP        # optional — this is the default
  selector:
    app: nginx           # routes to pods with this label
  ports:
    - port: 80           # port the Service listens on
      targetPort: 80     # port on the container (defaults to port value if omitted)
      protocol: TCP
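A quick way to try the manifest above — assuming it is saved as svc.yaml (the filename is arbitrary):

```shell
# Apply the manifest, then confirm the Service was assigned a ClusterIP
kubectl apply -f svc.yaml
kubectl get svc nginx-service -o wide
```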
Headless ClusterIP (no virtual IP — DNS returns pod IPs directly)¶
Used by StatefulSets so each pod gets its own DNS record: pod-0.my-service.namespace.svc.cluster.local.
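A minimal headless Service sketch — setting clusterIP: None is what makes it headless (the service name and labels here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None      # headless: no virtual IP is allocated
  selector:
    app: my-app        # illustrative label
  ports:
    - port: 80
      targetPort: 80
```

With no virtual IP, a DNS lookup of my-service returns the individual pod IPs instead of a single stable address.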
How kube-proxy Routes Traffic¶
ClusterIP isn't a real network interface — it's a virtual IP maintained by kube-proxy via iptables (or IPVS) rules on every node. When a pod sends traffic to a ClusterIP, those rules intercept it and rewrite the destination to one of the backing pod IPs — chosen at random in the default iptables mode; IPVS mode supports round-robin and other algorithms.
You'll never reach a ClusterIP from outside the cluster — the rewriting rules exist only on cluster nodes, and the IP is never routed beyond them.
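If you have shell access to a node, you can see these rules directly — a rough sketch, assuming the default iptables mode (the KUBE-SERVICES chain is created by kube-proxy; the IP is the example ClusterIP from above):

```shell
# List the NAT rules kube-proxy installed for Services,
# then filter for the example service's ClusterIP
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.45.1
```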
DNS — How Other Pods Reach It¶
CoreDNS automatically creates a DNS record for every Service:
From a pod in the same namespace:
curl http://nginx-service # short name works
curl http://nginx-service.default # namespace qualifier
curl http://nginx-service.default.svc.cluster.local # full FQDN — always works
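The FQDN is purely mechanical — it is always <service-name>.<namespace>.svc.cluster.local. As a trivial shell illustration of assembling it:

```shell
# Build a service FQDN from its parts
svc=nginx-service
ns=default
echo "${svc}.${ns}.svc.cluster.local"
# → nginx-service.default.svc.cluster.local
```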
From a pod in a different namespace — short name doesn't resolve:
curl http://nginx-service.default # works
curl http://nginx-service # fails — resolves against current namespace
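You can verify the namespace behavior with a throwaway pod in another namespace — a sketch, assuming a namespace named other already exists:

```shell
# From a pod in namespace "other", only the qualified name resolves
kubectl run test -n other --image=busybox:1.28 --rm -it --restart=Never -- \
  nslookup nginx-service.default
```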
Inspect a ClusterIP Service¶
kubectl get svc nginx-service
kubectl describe svc nginx-service # selector, endpoints, ports
kubectl get endpoints nginx-service # actual pod IPs behind the service
kubectl get endpoints is the most useful debugging tool. If it shows <none>, the Service has no matching pods — selector mismatch.
# Debug selector mismatch
kubectl describe svc nginx-service | grep Selector # what the service expects
kubectl get pods --show-labels # what labels pods actually have
port vs targetPort¶
| Field | What it is |
|---|---|
| port | The port the Service listens on — what callers use |
| targetPort | The port on the container — what the app actually runs on |
ports:
  - port: 80           # callers use: curl http://nginx-service:80
    targetPort: 8080   # traffic forwarded to container port 8080
If your app runs on 8080 but you want a clean port 80 externally, set port: 80, targetPort: 8080.
If omitted, targetPort defaults to the same value as port.
Named Ports¶
# In pod spec:
ports:
  - name: http
    containerPort: 8080

# In Service spec:
ports:
  - port: 80
    targetPort: http   # reference by name — robust to port number changes
Referencing by name means if the container changes ports, you only update the pod spec — the Service stays the same. Useful in large setups.
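Putting the two fragments above together — a minimal pod and Service pair wired by a named port (the pod name, labels, and image are illustrative placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: app
      image: my-app:latest    # hypothetical image listening on 8080
      ports:
        - name: http          # the name the Service references
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: http        # resolves to containerPort 8080 via the name
```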
Common Exam Patterns¶
"Create a ClusterIP service for a deployment":
"Verify internal connectivity":
"Get the ClusterIP of a service":
"Why is the service not routing traffic?":
kubectl get endpoints my-svc # if <none> → selector broken
kubectl describe svc my-svc # check Selector field
kubectl get pods --show-labels # check pod labels
Quick Reference¶
# Create
kubectl expose pod <pod> --port=80 --name=<svc>
kubectl expose deployment <dep> --port=80 --target-port=8080 --name=<svc>
kubectl expose deployment <dep> --port=80 --dry-run=client -o yaml > svc.yaml
# Inspect
kubectl get svc
kubectl describe svc <name>
kubectl get endpoints <name> # pod IPs behind the service
kubectl get svc <name> -o jsonpath='{.spec.clusterIP}'
# Test from inside cluster
kubectl run test --image=busybox:1.28 --rm -it --restart=Never -- wget -qO- http://<svc-name>:<port>