How to Deploy a Kubernetes Cluster on Debian 12 with containerd and Calico
Introduction
Kubernetes is a powerful container orchestration platform that allows you to automate deployment, scaling, and management of containerized applications. In this guide, we will set up a Kubernetes cluster with one master node and one worker node on Debian 12, using containerd as the container runtime and Calico as the networking solution.
By the end of this tutorial, you will have a fully functional Kubernetes cluster that is ready to deploy and manage containerized applications.
Prerequisites
Before you begin, ensure you have the following:
- Two Debian 12 machines (one master, one worker), each with at least:
  - 2 CPUs
  - 2GB RAM
  - 20GB disk space
- A non-root user with sudo privileges
- Internet access to install required packages
- Firewall disabled or ports opened for Kubernetes components (6443, 2379-2380, 10250-10255, 30000-32767)
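If you prefer to keep a firewall enabled, you can open the required ports instead of disabling it. A minimal sketch using ufw (assuming ufw is your firewall; adapt the commands for nftables or firewalld):
# On the master node: API server, etcd, kubelet, and the NodePort range
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250:10255/tcp
sudo ufw allow 30000:32767/tcp
# On the worker node: kubelet and the NodePort range
sudo ufw allow 10250:10255/tcp
sudo ufw allow 30000:32767/tcp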
Network Configuration
Assign the following static IPs to your nodes:
- Master Node: 192.168.1.100
- Worker Node: 192.168.1.101
Ensure hostname resolution is configured by adding the following to /etc/hosts on both nodes:
192.168.1.100 master-node
192.168.1.101 worker-node
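You can quickly confirm that the names resolve, for example:
ping -c 2 worker-node   # run on the master
ping -c 2 master-node   # run on the worker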
Step 1: Disable Swap and Configure Kernel Parameters
Kubernetes requires swap to be disabled; this lets it manage node resources more accurately. Run the following on both nodes:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
Enable required kernel modules:
sudo modprobe overlay
sudo modprobe br_netfilter
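modprobe only loads these modules for the current boot. To load them automatically after a reboot, you can also add a modules-load.d entry (the file name k8s.conf is just a convention):
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF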
Persist these settings:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
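To confirm the settings are active, you can read them back; each should report a value of 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward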
Step 2: Install containerd on Both Nodes
Install containerd, which will serve as our container runtime:
sudo apt update && sudo apt install -y containerd
Create the default containerd configuration file:
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
containerd uses a cgroup driver to manage resource allocation for containers. Kubernetes prefers systemd as the cgroup driver over cgroupfs, as it aligns better with the system-wide process management in modern Linux distributions.
Edit /etc/containerd/config.toml and set SystemdCgroup = true:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
This ensures that the container runtime and Kubernetes share the same cgroup management system, avoiding potential conflicts and improving system stability.
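You can verify the change took effect before restarting, for example:
grep SystemdCgroup /etc/containerd/config.toml
The output should show SystemdCgroup = true.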
Restart containerd:
sudo systemctl restart containerd
sudo systemctl enable containerd
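As a quick sanity check, confirm that containerd is running and responding (the ctr CLI ships with the containerd package):
systemctl is-active containerd
sudo ctr version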
Step 3: Install Kubernetes Packages
a. Add the Kubernetes Repository
This guide installs Kubernetes v1.32; adjust the version in the commands below if you need a different release. Run the following on both nodes (if curl or gpg is missing, install them first with sudo apt install -y curl gnupg):
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] \
https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
b. Install kubeadm, kubelet, and kubectl
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
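To confirm the installation and that the packages are pinned, you can check the versions and hold status, for example:
kubeadm version
kubectl version --client
apt-mark showhold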
Step 4: Initialize the Kubernetes Cluster on Master Node
To initialize the cluster with a pod network CIDR (192.168.0.0/16 is Calico's default pool), run the following command only on the master node. Note that the pod CIDR should not overlap with your node network; if it does (as the example 192.168.1.x addresses here would), pick a different range such as 10.244.0.0/16 and configure Calico's IP pool to match.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
Set up kubectl for the current user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the node status:
kubectl get nodes
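At this stage the master typically reports NotReady because no CNI plugin is installed yet; that is expected and resolves after Step 6. The output should look roughly like this (age and patch version will differ):
NAME          STATUS     ROLES           AGE   VERSION
master-node   NotReady   control-plane   1m    v1.32.x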
Step 5: Configure the Worker Node
To join the worker node, run this on the master node to get the join command:
kubeadm token create --print-join-command
Run the output command on the worker node, e.g.:
sudo kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef...
Back on the master node, verify that the worker has joined:
kubectl get nodes
Step 6: Deploy Calico Network Plugin
What is Calico?
Calico is a Container Network Interface (CNI) plugin that provides networking and security features for Kubernetes. It supports both BGP routing and overlay networks, ensuring efficient pod-to-pod communication.
Why Use Calico?
- High performance: Uses native Linux networking.
- Network security: Provides Network Policies to control pod communication.
- Scalability: Suitable for large deployments.
To enable pod-to-pod networking, install Calico on the master node:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Check that all pods are running:
kubectl get pods -n kube-system
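With this manifest, Calico typically runs a calico-node pod on every node (label k8s-app=calico-node) plus a calico-kube-controllers pod. You can watch the per-node pods come up with, for example:
kubectl get pods -n kube-system -l k8s-app=calico-node -w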
After a few minutes, check that all nodes are ready:
kubectl get nodes
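Once Calico is running, both nodes should switch to Ready. The output should look roughly like this:
NAME          STATUS   ROLES           AGE   VERSION
master-node   Ready    control-plane   15m   v1.32.x
worker-node   Ready    <none>          5m    v1.32.x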
Step 7: Deploy a Test Application
Create an example Nginx deployment to test the cluster, then expose it with a NodePort service:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80
Get the service details:
kubectl get svc nginx
You should see an output similar to:
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.104.55.21   <none>        80:31234/TCP   5m
Access Nginx using any node's IP and the NodePort reported for your service (31234 in this example):
curl http://192.168.1.101:31234
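When you are done testing, you can remove the example deployment and service:
kubectl delete service nginx
kubectl delete deployment nginx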
Conclusion
Congratulations! 🎉 You have successfully deployed a Kubernetes cluster on Debian 12 with containerd and Calico. Your cluster is now ready to run containerized applications.