RBAC, Service Accounts, Contexts & Workload Types

Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/


Context — What It Is and Why It Exists

A context is a saved combination of three things:

- Cluster — which Kubernetes API server to talk to
- User — which credentials to authenticate with
- Namespace — the default namespace to use

All of this lives in ~/.kube/config (the kubeconfig file). You can have multiple clusters in one kubeconfig — your local dev cluster, a staging cluster, a production cluster. Contexts let you switch between them without touching credentials manually.

# See all available contexts
kubectl config get-contexts

# Switch to a different context
kubectl config use-context kubernetes-admin@kubernetes

kubectl config use-context kubernetes-admin@kubernetes breakdown:

| Part | What it does |
| --- | --- |
| kubectl config | Subcommand for managing kubeconfig |
| use-context | Set the currently active context |
| kubernetes-admin@kubernetes | The context name — format is typically \<user\>@\<cluster\> |

After running this, all subsequent kubectl commands go to that cluster, authenticated as that user.

Set default namespace for current context:

kubectl config set-context --current --namespace=team-a

Now kubectl get pods defaults to team-a without needing -n team-a every time.


RBAC — What It Is

RBAC = Role-Based Access Control. The system that controls who can do what in Kubernetes.

Without RBAC, anyone who can reach the API server can do anything — create pods, delete namespaces, read secrets. That's obviously not acceptable in any real environment. RBAC is how you restrict access.

The model: You define permissions (verbs on resources) in a Role or ClusterRole. You attach that to a subject (user, service account, or group) via a RoleBinding or ClusterRoleBinding. The subject can then only do exactly what the Role allows — nothing more.

RBAC in Kubernetes has 4 objects:

| Object | Scope | Purpose |
| --- | --- | --- |
| Role | Namespace | Defines allowed actions within one namespace |
| RoleBinding | Namespace | Attaches a Role (or ClusterRole) to a subject within one namespace |
| ClusterRole | Cluster-wide | Defines allowed actions across all namespaces (or for non-namespaced resources) |
| ClusterRoleBinding | Cluster-wide | Attaches a ClusterRole to a subject across the entire cluster |
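As a concrete sketch of the namespace-scoped pair, here is a Role plus a RoleBinding (the names, namespace, and user are illustrative, not from a real cluster):

```yaml
# Role: permission to read pods, scoped to namespace team-a
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
- apiGroups: [""]               # core group — pods live here
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: attach the Role to a subject, in the same namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: team-a
subjects:
- kind: User
  name: alice                   # a human user from kubeconfig
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```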

Service Account — Identity for Processes

When you log into AWS or GCP as a human, you use an IAM user. When a program (a Lambda, a container, a process) needs to call AWS APIs, you don't give it your personal IAM user — you give it a role with only the permissions it needs.

Kubernetes does the same thing with Service Accounts.

A Service Account is the identity for a process running inside a pod. When your pod needs to call the Kubernetes API (to list other pods, read configmaps, watch deployments), it authenticates as its Service Account. Kubernetes then applies RBAC to that Service Account to control what it can do.

Why not just use your own credentials inside the pod?

- Your credentials have far more access than the app needs — violates least privilege
- You can't rotate them without updating every pod
- If the pod is compromised, the attacker gets your personal access
- Service Accounts are pod-scoped and can be revoked per-pod without affecting other resources

How it works under the hood: Kubernetes automatically mounts a token for the pod's Service Account at /var/run/secrets/kubernetes.io/serviceaccount/token inside every container. The app reads that token to authenticate API calls. The token is short-lived and automatically rotated.

Default Service Account: Every namespace gets a default service account. Pods that don't explicitly specify a service account use it. The default service account has no RBAC permissions by default (in modern Kubernetes), so pods using it can't call the API.
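To pin a pod to a specific service account instead of default, set serviceAccountName in the pod spec. A minimal sketch (the pod, namespace, and service account names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: team-a
spec:
  serviceAccountName: my-app-sa   # must already exist in the same namespace
  containers:
  - name: app
    image: nginx:1.25
```

The token for my-app-sa is then auto-mounted at the path described above; setting automountServiceAccountToken: false in the spec disables the mount for pods that never call the API.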

# List service accounts
kubectl get serviceaccounts
kubectl get sa

# Create a service account
kubectl create serviceaccount my-app-sa -n team-a

# See what service account a pod is using
kubectl get pod my-pod -o yaml | grep serviceAccountName

Namespace vs Cluster Scope — When to Use Each

Namespace-scoped (Role + RoleBinding):

- Team A should only manage resources in namespace team-a
- A developer should be able to get/list/describe pods in dev, but not in prod
- An app's service account should only read ConfigMaps in its own namespace

Cluster-scoped (ClusterRole + ClusterRoleBinding):

- A monitoring agent (Prometheus) needs to scrape metrics from pods in every namespace
- A cluster admin needs to manage nodes (nodes have no namespace)
- A CI system needs to read any secret in any namespace
- PersistentVolumes, StorageClasses, and Nodes are cluster-scoped resources — they can only be managed via ClusterRole

The hybrid case — ClusterRole bound with RoleBinding: You can bind a ClusterRole with a RoleBinding instead of a ClusterRoleBinding. This gives the subject the ClusterRole's permissions but only within one namespace. Useful for reusing a common set of permissions across teams without granting cluster-wide access.
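A sketch of that hybrid pattern: a RoleBinding whose roleRef points at a ClusterRole. The binding and service account names are illustrative; view is one of the built-in read-only ClusterRoles:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding                 # namespace-scoped binding...
metadata:
  name: team-a-view
  namespace: team-a               # permissions apply only in this namespace
subjects:
- kind: ServiceAccount
  name: my-app-sa
  namespace: team-a
roleRef:
  kind: ClusterRole               # ...reusing a cluster-wide permission set
  name: view
  apiGroup: rbac.authorization.k8s.io
```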


Why Split Role from RoleBinding?

This is a design choice for reusability.

Without the split, you'd define permissions and the subject in one object. Every time you want to give those same permissions to a new service account, you'd duplicate the entire permissions definition.

With the split:

- Define permissions once in a Role — e.g., "can get, list, create deployments"
- Bind that Role to as many subjects as needed — sa-1, sa-2, a human user alice, a group devs
- When permissions need to change, update the Role once — all bound subjects get the updated permissions automatically

The binding is just a link: "subject X gets the permissions defined in Role Y."


Desired State — The Kubernetes Mental Model

This is the most important concept in Kubernetes to understand deeply.

Kubernetes works on reconciliation, not imperative commands. You never say "start this pod." You declare what you want the world to look like and Kubernetes continuously works to make reality match your declaration.

Imperative (traditional): "Start nginx. Scale it to 3. Stop the 2nd one."

Declarative (Kubernetes): "I want 3 replicas of nginx:1.25 running at all times."

Kubernetes has a control loop running constantly:

1. Read desired state (what you declared in your manifests)
2. Read current state (what's actually running)
3. Calculate the diff
4. Take actions to close the gap
5. Repeat forever

If one of your 3 nginx pods crashes — the controller notices (desired=3, current=2), and creates a new pod. You didn't do anything. The system self-heals.

This is why kubectl apply -f is idempotent — you're just updating the desired state. If nothing changed, nothing happens. If you changed the replica count from 3 to 5, the controller adds 2 pods. If you changed the image version, it does a rolling update.
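The declared desired state from the nginx example, written as a manifest you would hand to kubectl apply -f:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3                # desired state: 3 pods, always
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25    # change this and re-apply: rolling update
```

Re-applying the same file is a no-op; re-applying with replicas: 5 makes the controller add two pods and touch nothing else.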


Workload Resource Types — Full Breakdown

| Resource | API group | What it is | When to use |
| --- | --- | --- | --- |
| Pod | core | One or more containers running together. No self-healing — if it dies, it stays dead. | Rarely directly — use a controller instead |
| Deployment | apps | Manages ReplicaSets. Handles rolling updates, rollbacks, scaling. | Stateless apps — web servers, APIs, any app where instances are interchangeable |
| ReplicaSet | apps | Ensures N copies of a pod spec are running. Deployment manages these for you. | Almost never touch directly |
| StatefulSet | apps | Like Deployment but for stateful apps. Pods get stable hostnames (pod-0, pod-1) and each gets its own persistent storage that follows it even if it restarts. | Databases (MySQL, Cassandra), message queues (Kafka), anything needing stable identity or persistent per-instance storage |
| DaemonSet | apps | Runs exactly one copy of a pod on every node. When nodes are added, a pod is automatically scheduled on them. | Log collectors (Fluentd), monitoring agents (Node Exporter, Datadog agent), CNI plugins |
| Job | batch | Runs one or more pods until they complete successfully, then stops. Failed pods are retried. | One-off batch tasks — database migrations, report generation, data transformation |
| CronJob | batch | Creates Jobs on a cron schedule. | Scheduled tasks — nightly backups, periodic data imports, cleanup jobs |
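For instance, a minimal CronJob that creates a Job every night at 02:00 (the name, schedule, image, and command are illustrative placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"            # standard cron syntax: 02:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure  # Jobs require OnFailure or Never
          containers:
          - name: backup
            image: busybox:1.36
            command: ["sh", "-c", "echo backing up"]
```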

Pod vs Deployment — the most important distinction:

A bare Pod is like running a process directly. If the server reboots, or the process crashes, it's gone. A Deployment wraps the Pod spec in a controller that ensures it's always running. The Deployment owns a ReplicaSet which owns the actual Pods. If a pod dies, the ReplicaSet controller creates a new one. If you update the image, the Deployment does a rolling update (creates new pods, waits for them to be ready, terminates old ones) with zero downtime.

StatefulSet vs Deployment:

Deployment pods are fungible — any instance is identical to any other. If pod-1 dies, a new pod comes up with a different name and different storage. That's fine for stateless apps.

StatefulSet pods have identity. mysql-0, mysql-1, mysql-2 — each has a stable hostname and its own PersistentVolumeClaim. If mysql-1 restarts, it comes back as mysql-1 and reattaches its own storage. The other pods can reference mysql-0 by hostname reliably. This is required for clustered databases.
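A skeleton StatefulSet showing the two pieces that give pods identity: the serviceName of a headless Service (stable hostnames) and volumeClaimTemplates (per-pod storage). Names, image, and size are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql              # headless Service backing mysql-0, mysql-1, ...
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:           # one PVC per pod; follows the pod across restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```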


RBAC YAML — Full Structure

ClusterRole

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: group1-role-cka
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "get", "list"]

Every field explained:

apiVersion: rbac.authorization.k8s.io/v1 — the API group and version for RBAC objects. Always this exact string for Role, ClusterRole, RoleBinding, ClusterRoleBinding.

kind: ClusterRole — this is a ClusterRole (not namespace-scoped). Use Role for namespace-scoped.

rules — a list of permission rules. Each rule has three required fields:

apiGroups — which Kubernetes API group the resource belongs to.

| Resource | apiGroups value |
| --- | --- |
| Pod, Service, ConfigMap, Secret, ServiceAccount, Node, Namespace, PersistentVolume | [""] (empty string — the core group) |
| Deployment, ReplicaSet, StatefulSet, DaemonSet | ["apps"] |
| Job, CronJob | ["batch"] |
| Role, ClusterRole, RoleBinding, ClusterRoleBinding | ["rbac.authorization.k8s.io"] |
| Ingress | ["networking.k8s.io"] |

Getting apiGroups wrong is the #1 RBAC mistake. The permission looks correct but silently doesn't work because the rule is in the wrong API group.

How to find the correct apiGroup for any resource:

kubectl api-resources | grep deployments
# NAME          SHORTNAMES   APIVERSION   NAMESPACED   KIND
# deployments   deploy       apps/v1      true         Deployment
# The APIVERSION column shows apps/v1 — so apiGroup is "apps"

resources — which resource types this rule applies to.

resources: ["deployments"]           # only deployments
resources: ["deployments", "pods"]   # deployments AND pods
resources: ["*"]                     # all resources (avoid this — not least privilege)

Subresources — some actions require specifying a subresource:

resources: ["pods/log"]    # kubectl logs
resources: ["pods/exec"]   # kubectl exec
resources: ["pods/portforward"]

verbs — what operations are allowed:

| Verb | What it allows | kubectl equivalent |
| --- | --- | --- |
| get | Fetch a specific named resource | kubectl get pod nginx |
| list | List all resources of that type | kubectl get pods |
| watch | Stream changes to resources | kubectl get pods -w |
| create | Create new resources | kubectl create / kubectl apply (new) |
| update | Full replacement of a resource | kubectl replace |
| patch | Partial update of a resource | kubectl patch, kubectl apply (update) |
| delete | Delete a specific resource | kubectl delete pod nginx |
| deletecollection | Delete all resources matching a selector | kubectl delete pods -l app=nginx |

get without list = can fetch by name but can't see what exists. list without get = can see names but can't fetch full details. Usually you want both together.

ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: group1-role-binding-cka
subjects:
- kind: ServiceAccount
  name: group1-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: group1-role-cka
  apiGroup: rbac.authorization.k8s.io

subjects — who gets the permissions. A list — you can bind to multiple subjects:

| kind | Who it is | Notes |
| --- | --- | --- |
| ServiceAccount | A pod's identity | Requires the namespace field |
| User | A human user (from kubeconfig) | No namespace field |
| Group | A group of users | e.g. system:masters, system:authenticated |

roleRef — which Role or ClusterRole to bind to. This is immutable — once created, you cannot change the roleRef. Delete and recreate if you need to change it.


Inspecting and Modifying RBAC

See What a ClusterRole Has

kubectl describe clusterrole group1-role-cka

Human-readable summary — shows the rules in a table. Good for quick checks.

kubectl get clusterrole group1-role-cka -o yaml

Raw YAML — shows the exact object as stored in etcd. Better when you're about to modify something, because you see the exact structure and can copy-paste it.

Prefer -o yaml over describe for the exam. You'll be modifying YAML constantly, and seeing the raw structure is more useful than the pretty summary.

Modify a ClusterRole

Option 1 — kubectl edit (interactive):

kubectl edit clusterrole group1-role-cka

Opens the object in your editor (vi/vim by default; override with KUBE_EDITOR). Find the rules: section, modify, then save and exit — the change is applied immediately on save.

Pros: quick for small changes. Cons: error-prone under time pressure. If you make a YAML syntax error, it won't save and you'll have to fix it in-editor.

Option 2 — kubectl patch (surgical update):

kubectl patch clusterrole group1-role-cka --type='json' \
  -p='[{"op":"replace","path":"/rules","value":[{"apiGroups":["apps"],"resources":["deployments"],"verbs":["create","get","list"]}]}]'

Surgical — doesn't touch anything else on the object. JSON patch format specifies the exact operation and path.

Option 3 — export, edit, apply (safest):

kubectl get clusterrole group1-role-cka -o yaml > cr.yaml
# edit cr.yaml in your editor
kubectl apply -f cr.yaml

Most reliable. You can review the file before applying. The YAML is the source of truth.


The $do Pattern for RBAC

Generate YAML without creating the object — inspect it, edit it, apply it. ($do here is the usual dry-run alias; defining export do="--dry-run=client -o yaml" once per shell makes each generator command below print YAML instead of creating the object.)

# Generate a ClusterRole YAML
kubectl create clusterrole group1-role-cka \
  --verb=create,get,list \
  --resource=deployments \
  $do

# Generate a ClusterRoleBinding YAML
kubectl create clusterrolebinding group1-role-binding-cka \
  --clusterrole=group1-role-cka \
  --serviceaccount=default:group1-sa \
  $do

--serviceaccount=<namespace>:<name> — the format for specifying a service account subject in imperative commands. default:group1-sa = service account group1-sa in namespace default.

Redirect to file and apply:

kubectl create clusterrole group1-role-cka \
  --verb=create,get,list \
  --resource=deployments \
  $do > clusterrole.yaml

kubectl apply -f clusterrole.yaml


Check Permissions — kubectl auth can-i

Test whether a specific identity has a specific permission without actually trying to do it:

# Can the current user create pods?
kubectl auth can-i create pods

# Can service account group1-sa in default namespace list deployments?
kubectl auth can-i list deployments \
  --as=system:serviceaccount:default:group1-sa

# Can they do it in a specific namespace?
kubectl auth can-i list deployments \
  --as=system:serviceaccount:default:group1-sa \
  -n team-a

--as=system:serviceaccount:<namespace>:<name> — impersonate a service account for the permission check. The format system:serviceaccount:default:group1-sa is the full username format Kubernetes uses internally for service accounts.

Returns yes or no. Invaluable for verifying your RBAC is set up correctly without having to actually switch contexts and test.


Full Exercise Solution

Task: Update ClusterRole group1-role-cka so that service account group1-sa can only create, get, list on deployments.

Step 1 — Check current state:

kubectl get clusterrole group1-role-cka -o yaml

Step 2 — Edit the role:

kubectl edit clusterrole group1-role-cka

Find and replace the rules: section with:

rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "get", "list"]

Step 3 — Verify the role now has the right rules:

kubectl describe clusterrole group1-role-cka

Step 4 — Verify the service account has the binding:

kubectl describe clusterrolebinding group1-role-binding-cka

Step 5 — Confirm the permission works:

kubectl auth can-i create deployments --as=system:serviceaccount:default:group1-sa
# yes

kubectl auth can-i delete deployments --as=system:serviceaccount:default:group1-sa
# no

kubectl auth can-i get pods --as=system:serviceaccount:default:group1-sa
# no