
PV, PVC & StorageClass — Full Breakdown

The Big Picture

These three files work together as a chain:

StorageClass (blue-stc-cka)
    ↓  defines the "type" of storage and how it behaves
PersistentVolume (blue-pv-cka)
    ↓  actual storage on the node, references the StorageClass
PersistentVolumeClaim (blue-pvc-cka)
    ↓  pod requests storage, references both the PV and StorageClass

Think of it like renting a flat:

  • StorageClass = the rental agency rules (terms, policies)
  • PersistentVolume = the actual flat (physical space that exists)
  • PersistentVolumeClaim = your tenancy agreement (you claiming that flat)


File 1: blue-stc-cka.yml — StorageClass

Start here — the PV references the StorageClass by name, so logically it comes first (Kubernetes doesn't enforce creation order, but the chain reads top-down).

apiVersion: storage.k8s.io/v1        # StorageClass lives in the storage API group, not core v1
kind: StorageClass
metadata:
  name: blue-stc-cka                 # The name both PV and PVC will reference via storageClassName
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
    # Not the default — pods that don't specify a storageClassName won't land here
provisioner: kubernetes.io/no-provisioner
# KEY FIELD — tells K8s: "don't auto-create storage for me"
# This means the PV must be created MANUALLY (static provisioning)
# Dynamic provisioners (like AWS EBS or GCE PD) create the disk automatically
# "no-provisioner" = you're managing the actual storage yourself
reclaimPolicy: Retain   # default value is Delete
# What happens to the PV when the PVC is deleted?
# Retain = keep the PV and its data (manual cleanup required)
# Delete = destroy the PV and underlying storage (dangerous on prod)
# Recycle = deprecated, don't use
allowVolumeExpansion: true
# Allows resizing the PVC later (increase storage size)
# You can go UP, never down — shrinking is not supported
mountOptions:
  - discard    # this might enable UNMAP / TRIM at the block storage layer
# discard = tells the OS to send TRIM commands to the underlying block device
# This is relevant for SSDs — lets the drive reclaim freed blocks
# Not all storage backends support it, hence "might enable"
volumeBindingMode: WaitForFirstConsumer
# CRITICAL for local storage — don't bind the PV to a PVC until a pod actually
# tries to use it. Why? Because local storage is node-specific.
# If you bind immediately (Immediate mode), the binder picks a PV without knowing
# the pod's scheduling constraints — it may lock the PVC to a PV on node X while
# the pod can only run on node Y, leaving the pod permanently unschedulable.
# WaitForFirstConsumer = wait until the pod's node is known, then bind.
parameters:
  guaranteedReadWriteLatency: "true"   # provider-specific
# Custom parameters passed to the provisioner
# With no-provisioner this is essentially a no-op / documentation
# With real provisioners (e.g. Portworx, NetApp) these map to real settings
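
The reclaimPolicy choices above can be pictured with a tiny Python toy — a hypothetical `delete_pvc` helper, not the real controller:

```python
# Toy model of reclaimPolicy semantics — illustrative only.

def delete_pvc(pv):
    """What happens to a bound PV when its PVC is deleted."""
    if pv["reclaimPolicy"] == "Retain":
        pv["status"] = "Released"   # PV and data kept; manual cleanup required
        return pv
    if pv["reclaimPolicy"] == "Delete":
        return None                 # PV and backing storage destroyed

print(delete_pvc({"reclaimPolicy": "Retain"})["status"])  # Released
print(delete_pvc({"reclaimPolicy": "Delete"}))            # None
```

With Retain, the PV object survives in Released state; with Delete, both the object and the underlying storage are gone — which is why Retain is the safer default for anything holding real data.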

File 2: blue-pv-cka.yml — PersistentVolume

The actual storage resource. Cluster-scoped (no namespace).

apiVersion: v1           # PV is a core resource, lives in v1
kind: PersistentVolume
metadata:
  name: blue-pv-cka      # Name the PVC will reference via volumeName
spec:
  capacity:
    storage: 100Mi       # Total size of this PV — 100 mebibytes
                         # The PVC can request LESS (50Mi here), but never more
  volumeMode: Filesystem
  # How the volume is exposed to the pod
  # Filesystem = mounted as a directory (most common)
  # Block = raw block device, no filesystem (for DBs like Cassandra that manage their own I/O)
  accessModes:
  - ReadWriteOnce        # RWO = only ONE node can mount this volume at a time (read/write)
  # Other options:
  # ReadOnlyMany  (ROX) = many nodes, read-only
  # ReadWriteMany (RWX) = many nodes, read/write — requires NFS or similar shared storage
  # ReadWriteOncePod (RWOP) = only ONE pod (not just node) — K8s 1.22+
  # Local storage only supports RWO
  persistentVolumeReclaimPolicy: Retain
  # Same concept as reclaimPolicy in the StorageClass, but set on the PV directly
  # The StorageClass value only seeds dynamically provisioned PVs — for a
  # manually created PV like this one, the PV's own field is what counts
  # Retain = when PVC is deleted, PV stays (status changes to "Released")
  # A Released PV cannot be re-bound until manually cleaned up
  storageClassName: blue-stc-cka
  # Must match the StorageClass name exactly
  # This is how PV and PVC find each other — both must use the same storageClassName
  # If this is empty string "", the PV has no class and only binds to PVCs with no class
  local:
    path: /opt/blue-data-cka
  # local = uses a directory on the node's filesystem
  # This path MUST exist on the node before the PV is used
  # Unlike hostPath, local volumes work with the scheduler (via nodeAffinity below)
  # hostPath has no scheduler awareness — local does
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - controlplane
  # MANDATORY for local volumes — tells the scheduler which node has this storage
  # Without this, K8s doesn't know WHERE the data lives
  # kubernetes.io/hostname is a label automatically applied to every node
  # operator: In means "hostname must be one of these values"
  # Here: the data is physically on the 'controlplane' node
  # Pods using this PV will be scheduled onto controlplane
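
The required nodeAffinity above is just a label match. A minimal Python sketch — a hypothetical `node_matches` helper, not the scheduler's actual code — shows how the selector reads:

```python
# Toy evaluation of a required nodeAffinity selector against node labels.

def node_matches(node_labels, match_expressions):
    """True if the node satisfies every matchExpression (they are ANDed)."""
    for expr in match_expressions:
        value = node_labels.get(expr["key"])
        if expr["operator"] == "In" and value not in expr["values"]:
            return False
    return True

selector = [{"key": "kubernetes.io/hostname", "operator": "In",
             "values": ["controlplane"]}]

print(node_matches({"kubernetes.io/hostname": "controlplane"}, selector))  # True
print(node_matches({"kubernetes.io/hostname": "node01"}, selector))        # False
```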

File 3: blue-pvc-cka.yml — PersistentVolumeClaim

What the pod uses. Namespace-scoped (unlike PV which is cluster-scoped).

apiVersion: v1           # Also core v1
kind: PersistentVolumeClaim
metadata:
  name: blue-pvc-cka     # This name goes into the pod spec under volumes
spec:
  accessModes:
    - ReadWriteOnce      # Must match the PV's accessModes for binding to succeed
  volumeMode: Filesystem # Must match the PV's volumeMode
  resources:
    requests:
      storage: 50Mi      # Requesting 50Mi — less than the PV's 100Mi, which is fine
                         # K8s binds PVCs to PVs where PV capacity >= PVC request
  storageClassName: blue-stc-cka
  # Must match the PV's storageClassName
  # This is the primary matchmaking field between PVC and PV
  volumeName: blue-pv-cka
  # Optional but used here to EXPLICITLY bind to a specific PV by name
  # Without this, K8s finds any suitable PV matching class + accessMode + size
  # With this, it's a direct binding — useful when you want deterministic behavior
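
The class/mode/size/volumeName matchmaking described in the comments can be sketched in Python — hypothetical names (`pv_matches_pvc`, `parse_qty`), not the real binder code, just the criteria it checks:

```python
UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def parse_qty(q):
    """Parse a binary-suffix quantity like '100Mi' into bytes."""
    for suffix, mult in UNITS.items():
        if q.endswith(suffix):
            return int(q[:-2]) * mult
    return int(q)

def pv_matches_pvc(pv, pvc):
    """Return True if this PV can satisfy this PVC."""
    if pvc.get("volumeName") and pvc["volumeName"] != pv["name"]:
        return False                      # explicitly bound to a different PV
    if pv["storageClassName"] != pvc["storageClassName"]:
        return False                      # primary matchmaking field
    if not set(pvc["accessModes"]) <= set(pv["accessModes"]):
        return False                      # PV must offer every requested mode
    return parse_qty(pv["capacity"]) >= parse_qty(pvc["request"])

pv = {"name": "blue-pv-cka", "storageClassName": "blue-stc-cka",
      "accessModes": ["ReadWriteOnce"], "capacity": "100Mi"}
pvc = {"volumeName": "blue-pv-cka", "storageClassName": "blue-stc-cka",
       "accessModes": ["ReadWriteOnce"], "request": "50Mi"}

print(pv_matches_pvc(pv, pvc))   # True — class, modes match and 100Mi >= 50Mi
```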

Binding Flow — What Happens When You Apply These

1. Apply StorageClass → exists in cluster, no provisioning happens yet

2. Apply PV → status: Available
   K8s knows: "100Mi of local storage exists on controlplane at /opt/blue-data-cka"

3. Apply PVC → because the StorageClass uses WaitForFirstConsumer, the PVC
   stays Pending — K8s won't bind until a pod actually consumes it

4. Create a pod referencing the PVC in spec.volumes → once the scheduler
   knows the pod's node, K8s binds by checking:
   ✓ storageClassName matches (blue-stc-cka)
   ✓ accessModes match (RWO)
   ✓ PV capacity (100Mi) >= PVC request (50Mi)
   ✓ volumeName explicitly points to blue-pv-cka
   → PVC status: Bound | PV status: Bound
   → pod lands on controlplane (nodeAffinity) and /opt/blue-data-cka is
     mounted into the container at whatever mountPath you define
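
A minimal Python sketch of the WaitForFirstConsumer behavior — a hypothetical `pvc_status` helper, not the actual binder:

```python
# Toy model: with WaitForFirstConsumer the PVC stays Pending until a pod
# consuming it has been scheduled; with Immediate it binds right away.

def pvc_status(binding_mode, pod_scheduled):
    if binding_mode == "Immediate":
        return "Bound"
    if binding_mode == "WaitForFirstConsumer":
        return "Bound" if pod_scheduled else "Pending"

print(pvc_status("WaitForFirstConsumer", pod_scheduled=False))  # Pending
print(pvc_status("WaitForFirstConsumer", pod_scheduled=True))   # Bound
```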

Common Gotchas

Issue                 | Cause                      | Fix
----------------------|----------------------------|------------------------------------------------
PVC stuck in Pending  | storageClassName mismatch  | Check PV and PVC have identical storageClassName
PVC stuck in Pending  | WaitForFirstConsumer       | Normal — will bind once a pod uses it
PV stuck in Released  | PVC deleted, Retain policy | Manually remove claimRef from PV spec, or delete and recreate the PV
Pod stuck in Pending  | nodeAffinity conflict      | Pod must land on controlplane but something blocks it
Path doesn't exist    | /opt/blue-data-cka missing | mkdir -p /opt/blue-data-cka on the controlplane node
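
The "PV stuck in Released" fix can be modeled as below — a toy `clear_claim_ref` helper, not a real Kubernetes API call:

```python
# A Retain-policy PV keeps a stale claimRef after its PVC is deleted; dropping
# that reference is what makes it Available for binding again.

def clear_claim_ref(pv):
    pv.pop("claimRef", None)          # drop the reference to the deleted PVC
    if pv["status"] == "Released":
        pv["status"] = "Available"
    return pv

pv = {"name": "blue-pv-cka", "status": "Released",
      "claimRef": {"name": "blue-pvc-cka"}}
print(clear_claim_ref(pv)["status"])   # Available
```

On a real cluster the usual equivalent is a JSON patch, e.g. `kubectl patch pv blue-pv-cka --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'` — and remember that with Retain the old data is still on disk.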

How to Reference PVC in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data      # where inside the container
      name: blue-storage    # matches volumes[].name below
  volumes:
  - name: blue-storage
    persistentVolumeClaim:
      claimName: blue-pvc-cka   # the PVC name
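
One common slip in this wiring is a volumeMounts[].name that doesn't match any volumes[].name. A small Python sanity check — a hypothetical `dangling_mounts` helper, not a kubectl feature — makes the rule explicit:

```python
# Every container mount must name a volume declared in spec.volumes.

pod = {
    "containers": [{"name": "app",
                    "volumeMounts": [{"mountPath": "/data",
                                      "name": "blue-storage"}]}],
    "volumes": [{"name": "blue-storage",
                 "persistentVolumeClaim": {"claimName": "blue-pvc-cka"}}],
}

def dangling_mounts(spec):
    declared = {v["name"] for v in spec["volumes"]}
    return [m["name"] for c in spec["containers"]
            for m in c.get("volumeMounts", []) if m["name"] not in declared]

print(dangling_mounts(pod))   # [] — every mount resolves to a declared volume
```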

Raw Files (Original)


blue-pv-cka.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: blue-pv-cka
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: blue-stc-cka
  local:
    path: /opt/blue-data-cka
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - controlplane

blue-pvc-cka.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blue-pvc-cka
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 50Mi
  storageClassName: blue-stc-cka
  volumeName: blue-pv-cka

blue-stc-cka.yml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blue-stc-cka
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - discard
volumeBindingMode: WaitForFirstConsumer
parameters:
  guaranteedReadWriteLatency: "true"

Static vs Dynamic Provisioning — An Analogy

Imagine a warehouse with shelves of storage boxes.

Static:

  • Someone physically goes and puts a box on a shelf and labels it "box-A, 100MB, shelf-3"
  • That's the PV — it already exists, someone made it manually
  • You come along and say "I need a 50MB box" — that's the PVC
  • The warehouse manager looks around, finds box-A, gives it to you
  • You didn't create the box, it was already there waiting

Dynamic:

  • No boxes exist yet
  • You come along and say "I need a 50MB box" — PVC
  • The warehouse manager calls the factory (provisioner) and says "make a box"
  • Factory makes the box, puts it on a shelf, warehouse manager gives it to you
  • The box didn't exist until you asked for it

The StorageClass is just the rulebook the warehouse manager follows — how to label boxes, what to do when you're done with them (Retain vs Delete), whether to wait until you actually show up before assigning a shelf (WaitForFirstConsumer).

no-provisioner means — there's no factory. Boxes only exist if someone manually put them there. The manager can still follow the rulebook, but can't call the factory.
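
The warehouse story translates to a few lines of Python — a toy model with made-up names (`Warehouse`, `claim`), not the real control plane:

```python
class Warehouse:
    def __init__(self, provisioner=None):
        self.shelves = []                # pre-created PVs (static provisioning)
        self.provisioner = provisioner   # the "factory", if any

    def add_box(self, size):             # someone manually creates a PV
        self.shelves.append(size)

    def claim(self, size):               # a PVC arrives
        for box in self.shelves:
            if box >= size:
                self.shelves.remove(box)
                return box               # static: hand over an existing box
        if self.provisioner:
            return self.provisioner(size)  # dynamic: make one on demand
        return None                      # no-provisioner, no box: stays Pending

static = Warehouse()                     # kubernetes.io/no-provisioner
static.add_box(100)
print(static.claim(50))                  # 100 — the pre-made box
print(static.claim(50))                  # None — no factory to make another

dynamic = Warehouse(provisioner=lambda size: size)
print(dynamic.claim(50))                 # 50 — created when asked for
```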