Install Kubernetes on Debian 11 (Debian Bullseye)


These steps have been tested on a fresh Debian 11 system, but they should also work on most Debian/Ubuntu-based distributions.

All steps need to be run on all nodes, except the last step of building the cluster, which you run only on the master node.

1. Add new user to sudo

Log in as root, update the system and install sudo. Replace alejandro with your non-root username.

$ apt update && apt install -y sudo
$ usermod -aG sudo alejandro

You can verify the user is in the sudo group with:

$ id alejandro
uid=1000(alejandro) gid=1000(alejandro) groups=1000(alejandro),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),112(bluetooth)

Log out and log back in as your normal user.

2. Install dependencies

Update the system and install the dependencies we will need further below:

sudo apt update && sudo apt upgrade -y && sudo apt install -y vim gnupg gnupg2 curl software-properties-common

3. Disable swap on all machines

Swap is enabled by default. You can see whether it is being used, and disable it, with the following:

$ free -m
               total        used        free      shared  buff/cache   available
Mem:            3931          76        3531           0         322        3641
Swap:            974           0         974

$ sudo swapoff -a

$ free -m
               total        used        free      shared  buff/cache   available
Mem:            3931          76        3531           0         322        3641
Swap:              0           0           0

Edit the file /etc/fstab and comment out (by adding a # at the beginning of the line) or delete the swap partition line. Depending on your distribution it can vary a bit. On Debian the line will look like this:

UUID=d864773c-eb0a-4c25-916c-b82dcc637c7 none   swap    sw  0 0

Note: On Ubuntu the line will look something like this:

/swap.img   none    swap    sw  0   0

You can also comment out the line automatically with the following command:

$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
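If you want to preview what that sed pattern does before touching /etc/fstab, you can run it against sample lines first (this example does not modify any file):

```shell
# Preview of the fstab-commenting sed pattern: lines containing " swap "
# get a '#' prepended; everything else passes through unchanged.
echo 'UUID=d864773c none   swap    sw  0 0' | sed '/ swap / s/^\(.*\)$/#\1/g'
# prints: #UUID=d864773c none   swap    sw  0 0
echo '/dev/sda1 / ext4 errors=remount-ro 0 1' | sed '/ swap / s/^\(.*\)$/#\1/g'
# prints: /dev/sda1 / ext4 errors=remount-ro 0 1
```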

4. Configure sysctl and modules correctly

$ sudo modprobe overlay
$ sudo modprobe br_netfilter

$ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

$ sudo sysctl --system

Note: The file k8s.conf can actually be named whatever you like, as long as it is in that folder and has the .conf extension.
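To double-check that the modules are loaded and that the sysctl settings took effect, you can inspect them directly (each sysctl key should print 1):

```shell
# Confirm both kernel modules are loaded...
lsmod | grep -E '^(overlay|br_netfilter)'
# ...and that the bridged-traffic and forwarding sysctl keys now read 1.
sysctl -n net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```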

5. Install and configure containerd

Note: Kubernetes has deprecated Docker in favor of containerd, so even though we can still set up Kubernetes by installing Docker, for new installations the preferred method is containerd.

Install containerd:

$ sudo apt update && sudo apt install -y containerd

Enable default config for containerd:

$ containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1

Edit the file /etc/containerd/config.toml and add the line SystemdCgroup = true right below [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]:

        SystemdCgroup = true
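If you prefer not to edit the file by hand, the same change can be made with sed, assuming the default config generated above contains the line SystemdCgroup = false:

```shell
# Flip SystemdCgroup from false to true in containerd's config.
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Verify the change took:
grep 'SystemdCgroup' /etc/containerd/config.toml
```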

Restart containerd for the new configuration to take effect:

$ sudo systemctl restart containerd

6. Add Kubernetes repository

Enable Kubernetes apt repository:

$ curl -s | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/cgoogle.gpg
$ sudo apt-add-repository "deb kubernetes-xenial main"

7. Install Kubernetes

sudo apt update && sudo apt install kubelet kubeadm kubectl -y
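Optionally, you may want to pin these packages so that a routine apt upgrade does not move the cluster to a new Kubernetes version unexpectedly:

```shell
# Mark the Kubernetes packages as held back from automatic upgrades.
sudo apt-mark hold kubelet kubeadm kubectl
```

You can later release the hold with sudo apt-mark unhold when you are ready to upgrade deliberately.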

8. Build the Kubernetes cluster on the master node only

Note: The control plane (master node) needs at least 2 CPUs or it won't start.

Initialize the cluster

In this example, I optionally select the pod network CIDR to use and the name of the master host (dmaster). You can omit these fields if you prefer. Replace <your-pod-cidr> with the CIDR expected by the pod network add-on you plan to deploy.

sudo kubeadm init --control-plane-endpoint=dmaster --pod-network-cidr=<your-pod-cidr>

When this process finishes it will display the command you need to copy/paste on your nodes. It will look something like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join dmaster:6443 --token rjk6tj.r822vh53ayg4cfh7 \
        --discovery-token-ca-cert-hash sha256:2f863b050e22c7ab81088821adf33c03f7398bb1b94e9b8ca2c1a7026198d4a3 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join dmaster:6443 --token rjk6tj.r822vh53ayg4cfh7 \
        --discovery-token-ca-cert-hash sha256:2f863b050e22c7ab81088821adf33c03f7398bb1b94e9b8ca2c1a7026198d4a3 

Do as it says and run those commands. Then go to your worker nodes and run the kubeadm join command as it is displayed in your console.

You can verify it’s working with (Status NotReady is the expected result at this point):

$ kubectl get nodes
NAME       STATUS     ROLES           AGE   VERSION
k8master   NotReady   control-plane   1m    v1.25.4
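If you lose the join command, you don't need to re-initialize anything: a fresh worker join command can be printed from the master at any time (tokens expire after 24 hours by default):

```shell
# Generate a new token and print the complete "kubeadm join" command
# to run on worker nodes.
sudo kubeadm token create --print-join-command
```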

Troubleshooting ERROR CRI

If, when you try to initialize the cluster on the master node or to join it from a worker node, you get an ERROR CRI similar to this:

alejandro@dmaster:~$ sudo kubeadm init --pod-network-cidr
[init] Using Kubernetes version: v1.25.4
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: E1130 23:13:12.281232    5981 remote_runtime.go:948] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-11-30T23:13:12+01:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

To avoid this error we need to remove cri from the disabled_plugins list. To do so, edit the file /etc/containerd/config.toml and comment out the line disabled_plugins = ["cri"]. After editing the file, run sudo systemctl restart containerd for the change to take effect.
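A quick way to make that edit, assuming the line appears exactly as disabled_plugins = ["cri"]:

```shell
# Comment out the disabled_plugins line so the CRI plugin loads again,
# then restart containerd to pick up the change.
sudo sed -i 's/^disabled_plugins = \["cri"\]/#&/' /etc/containerd/config.toml
sudo systemctl restart containerd
```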

Note: this has to be done on both the control plane and the worker nodes.