
Kubeadm Cluster Setup — Mac Minis (Debian)

Goal

Multi-node bare-metal Kubernetes cluster on Debian Mac minis, connected via Tailscale. 1x control plane, 1x worker (expandable).


Hardware

Node         Role             OS
Mini 1       Control plane    Debian
Mini 2       Worker           Debian
M1 MacBook   kubectl client   macOS

Phase 0: Networking (do this first)

Get IP addresses

# On each mini — find the ethernet interface IP
ip addr show

# Or list all interfaces
ip link show

# Then get IP for specific interface (usually eth0 or enp*)
ip addr show eth0

Check connectivity between machines

# From M1, ping each mini
ping <mini-ip>

# From mini1, ping mini2
ping <mini2-ip>

Tailscale

# Install on each mini (Debian)
curl -fsSL https://tailscale.com/install.sh | sh

# Start and authenticate
sudo tailscale up

# Get Tailscale IP
tailscale ip -4

Decide: use Tailscale IPs or LAN IPs for the cluster?
- LAN (direct ethernet): lower latency, simpler
- Tailscale: works even when the minis aren't on the same network
Recommendation: LAN IPs for the cluster, Tailscale for remote access from the M1
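kubeadm advertises on the interface holding the default route unless told otherwise, so it is worth confirming which IP that is before Phase 4. A minimal sketch of extracting the `src` field from `ip route get` output, run here against a sample line so the parsing is visible; the addresses are placeholders:

```shell
# Sample output of `ip -4 route get 1.1.1.1` (addresses are placeholders)
sample='1.1.1.1 via 192.168.1.1 dev eth0 src 192.168.1.50 uid 1000'

# Print the field after "src": the IP kubeadm would advertise by default
echo "$sample" | awk '{for (i = 1; i < NF; i++) if ($i == "src") print $(i+1)}'

# On a real mini, run it live:
#   ip -4 route get 1.1.1.1 | awk '{for (i = 1; i < NF; i++) if ($i == "src") print $(i+1)}'
```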


Phase 1: Pre-flight (both nodes)

Disable swap — kubeadm's preflight checks fail if swap is enabled

sudo swapoff -a

# Make permanent — comment out swap line
sudo nano /etc/fstab
# Comment out any line with 'swap'

# Verify
free -h  # swap should show 0
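Editing /etc/fstab by hand works; a sed one-liner does the same non-interactively. Sketched here against a scratch copy so it is safe to try (the UUIDs are made up); on the real machine, point it at /etc/fstab with sudo:

```shell
# Scratch copy standing in for /etc/fstab (UUIDs are made up)
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 /    ext4 errors=remount-ro 0 1
UUID=efgh-5678 none swap sw                0 0
EOF

# Comment out any swap entry
# (on the real machine: sudo sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab)
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /tmp/fstab.demo

grep swap /tmp/fstab.demo
```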

Load required kernel modules

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

Set sysctl params

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system

Verify

sysctl net.bridge.bridge-nf-call-iptables
# Should return 1

Phase 2: Container Runtime (both nodes)

Install containerd (the standard CRI runtime for current Kubernetes):

sudo apt update
sudo apt install -y containerd

# Generate default config
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Use the systemd cgroup driver (required on systemd hosts like Debian)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

sudo systemctl restart containerd
sudo systemctl enable containerd

# Verify
sudo systemctl status containerd
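Worth verifying the sed actually flipped the flag before restarting containerd. The check below runs against a scratch snippet of the config so it can be tried anywhere; on the node, grep /etc/containerd/config.toml itself:

```shell
# Scratch snippet standing in for /etc/containerd/config.toml
cat > /tmp/config.toml.demo <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Same sed as above, against the scratch file
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/config.toml.demo

# Expect exactly one line, reading: SystemdCgroup = true
grep SystemdCgroup /tmp/config.toml.demo
```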

Phase 3: Install kubeadm, kubelet, kubectl (both nodes)

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg

# Add Kubernetes apt repo (keyrings dir may not exist on older Debian)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Note: keep this on one line — a backslash inside the quotes would end up in the file
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update
sudo apt install -y kubelet kubeadm kubectl

# Pin versions — do not auto-upgrade
sudo apt-mark hold kubelet kubeadm kubectl

# Verify
kubeadm version
kubectl version --client

Check latest stable version before running: https://kubernetes.io/releases/
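The repo URL embeds the minor version (v1.31 above), so an upgrade means editing the sources entry. A small sketch that parameterises it; K8S_MINOR is an illustrative variable name, not from the official docs:

```shell
# K8S_MINOR is an illustrative variable: bump it, then re-run apt update
K8S_MINOR="v1.31"
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${K8S_MINOR}/deb/ /"

# Pipe the echo into: sudo tee /etc/apt/sources.list.d/kubernetes.list
```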


Phase 4: Initialise Control Plane (mini1 only)

sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=<MINI1-LAN-IP>

--pod-network-cidr matches Flannel's default pod network (next step).
--apiserver-advertise-address = the IP other nodes will reach the API server on.
If you'll ever hit the API server over Tailscale (Phase 7), also pass --apiserver-cert-extra-sans=<mini1-tailscale-ip> so the serving cert is valid on that IP.

After init completes — save the join command output

It will look like:

kubeadm join <ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Save this. You need it for Phase 6. (The token expires after 24 hours; regenerate any time with sudo kubeadm token create --print-join-command.)

Set up kubeconfig (mini1)

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Phase 5: Install CNI (mini1 only)

Without a CNI plugin, nodes stay NotReady and pods get no network. Using Flannel:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Wait for nodes to become Ready:

kubectl get nodes -w

# Or block until all nodes are Ready (5-minute timeout)
kubectl wait --for=condition=Ready nodes --all --timeout=300s


Phase 6: Join Worker Node (mini2)

# Run the join command from Phase 4
sudo kubeadm join <ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

Verify from mini1:

kubectl get nodes
# Both nodes should show Ready


Phase 7: Access from M1 MacBook

# Install kubectl on M1 (if not already)
brew install kubectl

# Copy kubeconfig from mini1 (back up any existing ~/.kube/config first)
scp user@<mini1-ip>:/home/user/.kube/config ~/.kube/config

# Or via Tailscale IP
scp user@<mini1-tailscale-ip>:/home/user/.kube/config ~/.kube/config

# Test
kubectl get nodes
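If the kubeconfig was generated with the LAN IP but M1 connects over Tailscale, the server: field needs rewriting, and the API server's certificate must list the Tailscale IP (hence the --apiserver-cert-extra-sans suggestion in Phase 4). A sketch of the rewrite against a scratch file; both IPs are placeholders:

```shell
# Scratch kubeconfig fragment (both IPs are placeholders)
cat > /tmp/kubeconfig.demo <<'EOF'
clusters:
- cluster:
    server: https://192.168.1.50:6443
EOF

# Point the client at the Tailscale IP instead of the LAN IP
sed -i 's#https://192.168.1.50:6443#https://100.64.0.10:6443#' /tmp/kubeconfig.demo

grep 'server:' /tmp/kubeconfig.demo
```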

Phase 8: Smoke Test

# Deploy a test pod
kubectl run test --image=nginx

# Check it's running
kubectl get pods -o wide

# Check it landed on the worker node
# (control plane is tainted NoSchedule by default — pods go to the worker)
kubectl get pod test -o jsonpath='{.spec.nodeName}{"\n"}'

# Clean up
kubectl delete pod test

Troubleshooting Checkpoints

Issue                Check
Node not Ready       kubectl describe node <name> — look at Conditions
Pods stuck Pending   kubectl describe pod <name> — check Events
CNI not working      kubectl get pods -n kube-flannel
Can't join worker    Token expired? Regenerate: sudo kubeadm token create --print-join-command
containerd issues    sudo systemctl status containerd

Blog Post Outline (document as you go)

  1. Why bare-metal over minikube
  2. Hardware overview
  3. Network setup (Tailscale + LAN)
  4. Pre-flight checklist
  5. containerd install
  6. kubeadm init — what actually happens
  7. CNI — why you need it
  8. Joining worker nodes
  9. Accessing remotely from M1
  10. Smoke test
  11. What's next (storage, ingress, real workloads)