K8s YAML Templates — All Major Kinds With Context

Pattern Rules (read once, apply everywhere)

  • Most resources have 4 top-level fields: apiVersion + kind + metadata + spec (exceptions: ConfigMap/Secret use data instead of spec, Role uses rules, RoleBinding uses subjects + roleRef)
  • metadata.name is always required
  • spec shape is different per Kind — that's what you actually need to learn
  • Indentation: 2 spaces, always. Tabs are illegal in YAML indentation — the parser rejects them.
  • Get the skeleton from kubectl dry-run, then edit only what the task needs
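
These skeletons don't need to be typed from memory — kubectl generates them. A minimal example (names and image are placeholders):

kubectl run nginx-pod --image=nginx:1.21 --port=80 --dry-run=client -o yaml > pod.yaml

--dry-run=client prints the manifest without creating anything; edit the file, then apply with kubectl apply -f pod.yaml.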

1. Pod

The smallest deployable unit. A pod wraps one or more containers that share the same network and storage. Create a pod directly when the task says "create a pod" — not a Deployment. Deployments manage pods for you; a raw Pod manifest is self-contained and is not rescheduled if its node fails.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: default
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.21
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"

Variation — env vars from literal, ConfigMap, and Secret

Use when a container needs config injected at runtime without baking it into the image. Literal is simplest. ConfigMap/Secret refs are what exams test.

spec:
  containers:
    - name: app
      image: busybox
      env:
        - name: MY_VAR
          value: "hello"
        - name: FROM_CONFIGMAP
          valueFrom:
            configMapKeyRef:
              name: my-cm
              key: some-key
        - name: FROM_SECRET
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: password
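
The referenced ConfigMap and Secret must exist, or the container sits in CreateContainerConfigError. They can be created imperatively — key names here match the example above, values are placeholders:

kubectl create configmap my-cm --from-literal=some-key=some-value
kubectl create secret generic my-secret --from-literal=password=password123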

Variation — volume mount (emptyDir)

emptyDir is temporary storage that lives as long as the pod. Use it when two containers in the same pod need to share files, or when a task needs a writable scratch space. Data is lost when the pod is deleted.

spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data-vol
          mountPath: /data
  volumes:
    - name: data-vol
      emptyDir: {}

Variation — nodeSelector and tolerations

nodeSelector: schedule this pod only on nodes with a specific label. tolerations: allow this pod to land on a tainted node (nodes are tainted to repel pods — tolerations are the exception pass).

spec:
  nodeSelector:
    disk: ssd
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  containers:
    - name: app
      image: nginx
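
For this spec to schedule anywhere, a node must carry the matching label, and the taint is what makes the toleration meaningful. The node-side commands look like this (node1 is a placeholder node name):

kubectl label nodes node1 disk=ssd
kubectl taint nodes node1 gpu=true:NoSchedule

Note: a toleration only permits scheduling onto the tainted node — it doesn't attract the pod there. The nodeSelector does the attracting.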

Variation — serviceAccountName

Attach a ServiceAccount to a pod so the container can authenticate to the Kubernetes API. Required when a pod needs to call the API (e.g. list pods, read secrets). Without this, it uses the default SA.

spec:
  serviceAccountName: my-sa
  containers:
    - name: app
      image: nginx

Variation — init container

Runs to completion before the main container starts. Use for setup tasks: waiting for a DB to be ready, pre-populating a volume, running migrations. If the init container fails, the pod won't start.

spec:
  initContainers:
    - name: init-task
      image: busybox
      command: ["sh", "-c", "echo init done"]
  containers:
    - name: main
      image: nginx
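
A more realistic init container blocks until a dependency is reachable — a common exam pattern. A sketch, assuming a Service named db-svc exists so the DNS lookup can succeed:

spec:
  initContainers:
    - name: wait-for-db
      image: busybox
      command: ["sh", "-c", "until nslookup db-svc; do echo waiting for db; sleep 2; done"]
  containers:
    - name: main
      image: nginx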

Variation — multi-container (sidecar)

Two containers in one pod share the same network (localhost) and can share volumes. Classic use: log shipper alongside a web server, or a proxy alongside an app. Both containers start at the same time (unlike init).

spec:
  containers:
    - name: main
      image: nginx
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "while true; do sleep 30; done"]

2. Deployment

Use when you need multiple replicas of a pod, rolling updates, and self-healing (if a pod dies, Deployment recreates it). This is the standard way to run stateless applications. Wraps a pod template.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx          # Deployment uses this to find ITS pods
  template:
    metadata:
      labels:
        app: nginx        # MUST match selector.matchLabels — this is the pod
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
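
This whole manifest can be generated rather than typed — and the selector and template labels come out matching automatically:

kubectl create deployment nginx-deploy --image=nginx:1.21 --replicas=3 --dry-run=client -o yaml > deploy.yaml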

Critical: selector.matchLabels must exactly match template.metadata.labels. If they don't, the API server rejects the manifest with a selector mismatch error.


3. Service

Pods have ephemeral IPs that change on restart. A Service gives a stable virtual IP and DNS name that forwards traffic to pods matching its selector. Without a Service, clients have no stable address for the pods — every pod restart would hand out a new IP.

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    app: nginx            # forwards traffic to pods with this label
  ports:
    - protocol: TCP
      port: 80            # port THIS service listens on
      targetPort: 80      # port the CONTAINER listens on
  type: ClusterIP         # internal only — default type
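
kubectl expose can generate this Service with the selector copied from the target's labels (the deployment must already exist, since expose reads it):

kubectl expose deployment nginx-deploy --name=nginx-svc --port=80 --target-port=80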

Variation — NodePort

Exposes the service on a port on every node's IP. Use when you need to reach the service from outside the cluster without a load balancer. nodePort must be in range 30000-32767.

spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

Variation — Headless (for StatefulSets)

clusterIP: None means no virtual IP is assigned. Instead, DNS returns the IPs of the individual pods directly. Required for StatefulSets, where each pod needs its own stable DNS name (e.g. web-0.web.default.svc.cluster.local, web-1.web.default.svc.cluster.local, etc.).

spec:
  clusterIP: None
  selector:
    app: db
  ports:
    - port: 5432
      targetPort: 5432

4. ConfigMap

Stores non-sensitive config data (strings, config files) outside the container image. Decouples config from code. Pods consume it as env vars or as files mounted into the container.

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cm
  namespace: default
data:
  APP_ENV: "production"
  config.properties: |
    host=localhost
    port=8080
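
The imperative equivalent (values are placeholders); --from-file turns a whole file into one key named after the file:

kubectl create configmap my-cm --from-literal=APP_ENV=production --from-file=config.properties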

Consuming — envFrom (inject ALL keys as env vars at once)

Useful when you have many keys and don't want to map them one by one.

spec:
  containers:
    - name: app
      image: nginx
      envFrom:
        - configMapRef:
            name: my-cm

Consuming — as a mounted volume (files)

Each key in the ConfigMap becomes a file at mountPath. Use when the app expects a config file on disk rather than env vars.

spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config
  volumes:
    - name: config-vol
      configMap:
        name: my-cm

5. Secret

Same concept as ConfigMap but for sensitive data (passwords, tokens, certs). Values under data must be base64 encoded in the manifest (the alternative stringData field accepts plain text and encodes on write). Kubernetes stores Secrets separately and only gives them to pods that request them. Base64 is NOT encryption — it's just encoding.

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
data:
  password: cGFzc3dvcmQxMjM=    # echo -n 'password123' | base64

Encode: echo -n 'password123' | base64
Decode: echo 'cGFzc3dvcmQxMjM=' | base64 -d
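
In practice it's easier to let kubectl do the encoding — create secret generic base64-encodes literal values for you:

kubectl create secret generic my-secret --from-literal=password=password123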

Variation — TLS secret

Stores a TLS certificate and private key. Used by Ingress for HTTPS termination. The keys must literally be named tls.crt and tls.key.

type: kubernetes.io/tls
data:
  tls.crt: <base64-cert>
  tls.key: <base64-key>
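
kubectl handles both the encoding and the mandatory key names for TLS secrets (file paths are placeholders):

kubectl create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key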

6. PersistentVolume (PV)

A PV is a piece of storage provisioned in the cluster — think of it as a physical disk registered with Kubernetes. It exists independently of any pod. A pod never uses a PV directly — it goes through a PVC.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce         # RWO: one node | RWX: many nodes | ROX: many nodes read-only
  persistentVolumeReclaimPolicy: Retain   # keep data after PVC deleted
  hostPath:
    path: /data/pv          # only valid on single-node/test clusters

7. PersistentVolumeClaim (PVC)

A PVC is a request for storage. The pod says "I need 500Mi ReadWriteOnce" and Kubernetes binds it to a matching PV. The pod never references the PV directly — always through the PVC. Think PV = the disk, PVC = the ticket that claims it.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

Consuming PVC in a Pod

Mount the claimed storage into the container at a path.

spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: storage
          mountPath: /data
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: my-pvc

8. ServiceAccount

An identity for processes running inside pods. When a pod needs to talk to the Kubernetes API (list pods, read secrets, etc.), it authenticates using a ServiceAccount. You create the SA, then attach a Role to it via a RoleBinding. Every namespace has a "default" SA — never give it extra permissions.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
  namespace: default

9. Role

Defines what actions are allowed on which resources — but only within one namespace. Think of it as a permission slip scoped to a room. A Role does nothing on its own — it must be bound to a subject via RoleBinding.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]           # "" = core API group (pods, services, configmaps, secrets)
    resources: ["pods"]
    verbs: ["get", "list", "watch"]

Variation — multiple resource types and API groups

apps group covers Deployments, ReplicaSets, StatefulSets, DaemonSets. Core group ("") covers Pods, Services, ConfigMaps, Secrets, PVCs.

rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "delete"]

10. RoleBinding

Attaches a Role to a subject (ServiceAccount, User, or Group) within a namespace. This is what actually grants the permissions defined in the Role. Role + RoleBinding = "this SA can do X in this namespace."

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
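
The whole SA → Role → RoleBinding chain can be built and verified imperatively — kubectl auth can-i is the standard way to check the result:

kubectl create serviceaccount my-sa
kubectl create role pod-reader --verb=get,list,watch --resource=pods
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=default:my-sa
kubectl auth can-i list pods --as=system:serviceaccount:default:my-sa -n default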

11. ClusterRole

Same as Role but cluster-wide — not scoped to a namespace. Use when the resource doesn't belong to a namespace (nodes, PVs) or when you want to reuse the same role across multiple namespaces.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]

12. ClusterRoleBinding

Attaches a ClusterRole to a subject cluster-wide. If you use a ClusterRoleBinding to bind a ClusterRole to a SA in namespace X, that SA gets those permissions across ALL namespaces.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-reader-binding
subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: default
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io

13. NetworkPolicy

By default all pods can talk to all pods. NetworkPolicy lets you lock that down. You define which pods the policy applies TO (podSelector), then which traffic is allowed IN (ingress) or OUT (egress). Listing a direction under policyTypes while providing no rules for it means deny all traffic in that direction.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend          # policy applies TO pods with this label
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: db
      ports:
        - protocol: TCP
          port: 5432

Variation — Deny all ingress to all pods in namespace

Empty podSelector = applies to all pods. No ingress rules = deny all ingress.

spec:
  podSelector: {}
  policyTypes:
    - Ingress

Variation — Allow ingress from a specific namespace

  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: frontend-ns

Variation — AND logic (pod must be in namespace AND have label)

Both selectors are under the SAME list item (no second dash before podSelector).

  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend-ns
          podSelector:            # no leading dash = AND with namespaceSelector above
            matchLabels:
              app: frontend

Variation — OR logic (pod in namespace OR pod has label)

Each selector has its OWN dash = separate rules = OR.

  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend-ns
        - podSelector:            # separate dash = OR
            matchLabels:
              app: frontend

AND vs OR is the most commonly failed NetworkPolicy concept in the exam.


14. Ingress

A Service of type ClusterIP or NodePort routes traffic at layer 4 (TCP/UDP). Ingress routes HTTP/HTTPS at layer 7 — by hostname or URL path to different backend services. Requires an ingress controller (e.g. nginx) to already be running in the cluster.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80
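
kubectl create ingress can generate this host/path routing (a trailing /* in the rule maps to pathType Prefix):

kubectl create ingress my-ingress --class=nginx --rule="myapp.example.com/*=nginx-svc:80" --dry-run=client -o yaml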

Variation — TLS termination

HTTPS traffic is decrypted at the Ingress. The TLS secret must contain a cert and key for the hostname. Traffic from Ingress to the backend service is plain HTTP.

spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: tls-secret
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80

15. StatefulSet

Like a Deployment but for stateful apps (databases, message queues). Each pod gets a stable identity (web-0, web-1, web-2) and its own PVC that persists across restarts. Pods start and stop in order. Requires a headless Service for stable DNS per pod.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  namespace: default
spec:
  serviceName: "web"        # must match a headless Service name
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:     # creates one PVC per pod automatically
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
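
The headless Service that serviceName: "web" points at must be created separately — a minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  clusterIP: None
  selector:
    app: web
  ports:
    - port: 80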

16. DaemonSet

Ensures one pod runs on every node (or every node matching a selector; tainted nodes need a toleration). Use for node-level agents: log collectors, monitoring exporters, network plugins. No replicas field — the number of pods equals the number of eligible nodes. New nodes added to the cluster automatically get the pod.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: default
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: log-agent
          image: fluentd

17. Job

Runs a pod to completion — the pod does its work and exits. A Deployment restarts pods that exit; a Job doesn't. Use for batch tasks, data processing, one-off scripts. The pod must use restartPolicy: Never or OnFailure — Never means failed pods are left for inspection; OnFailure restarts the container.

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job
spec:
  completions: 1      # how many pods must succeed before Job is done
  parallelism: 1      # how many pods to run at the same time
  template:
    spec:
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never    # Never or OnFailure — NEVER use Always in a Job
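
The imperative equivalent — everything after -- becomes the container command:

kubectl create job pi-job --image=perl -- perl -Mbignum=bpi -wle "print bpi(2000)"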

18. CronJob

A Job that runs on a schedule (cron syntax). Each run creates a new Job which creates a pod. Use for periodic tasks: backups, report generation, cleanup scripts. The jobTemplate is just a Job spec.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron
spec:
  schedule: "*/5 * * * *"   # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              command: ["sh", "-c", "echo hello"]
          restartPolicy: OnFailure
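
The imperative equivalent — useful because the triple-nested spec above is easy to mis-indent by hand:

kubectl create cronjob hello-cron --image=busybox --schedule="*/5 * * * *" -- sh -c "echo hello"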

19. LimitRange

Sets default resource requests/limits for containers in a namespace, and optionally enforces a floor (min) and ceiling (max). Without requests/limits, a container can consume all node resources and starve other pods. LimitRange fills in the defaults automatically so you don't have to set them on every pod.

apiVersion: v1
kind: LimitRange
metadata:
  name: limits
  namespace: default
spec:
  limits:
    - type: Container
      default:              # applied if container specifies no limits
        memory: "128Mi"
        cpu: "500m"
      defaultRequest:       # applied if container specifies no requests
        memory: "64Mi"
        cpu: "250m"
      max:                  # hard ceiling — container cannot exceed this
        memory: "512Mi"
        cpu: "1"

20. ResourceQuota

Caps total resource consumption for an entire namespace — not per container but in aggregate. Use to prevent one team's namespace from consuming all cluster resources. Pods that would exceed the quota are rejected.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: ns-quota
  namespace: default
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: "4Gi"
    limits.cpu: "8"
    limits.memory: "8Gi"

apiVersion Quick Reference

Pod, Service, ConfigMap, Secret, PV, PVC, ServiceAccount, LimitRange, ResourceQuota, Namespace → v1
Deployment, ReplicaSet, DaemonSet, StatefulSet → apps/v1
Job, CronJob → batch/v1
Ingress, NetworkPolicy → networking.k8s.io/v1
Role, RoleBinding, ClusterRole, ClusterRoleBinding → rbac.authorization.k8s.io/v1
HorizontalPodAutoscaler → autoscaling/v2

Common Mistakes and How They Fail

selector.matchLabels doesn't match template.metadata.labels → API server rejects the manifest
restartPolicy: Always in a Job → rejected at apply time (Always is not a supported value for Jobs)
NetworkPolicy: wrong dash placement for AND vs OR → wrong traffic allowed/denied with no error (truly silent)
PVC accessModes or size doesn't match any PV → PVC stays Pending forever
Role used for cluster-scoped resources (nodes, PVs) → Role has no effect — need ClusterRole
apiGroups: [""] used for Deployments → no effect — Deployments are in apiGroups: ["apps"]
Secret data values not valid base64 → rejected at apply time with a base64 decode error
StatefulSet missing headless Service → pod DNS names don't resolve