Installation

Initial Installation Issues

Error Message

When initialising the Kubernetes nodes (master or worker), the error shown below appeared.

root@deb-k8master:~# kubeadm init --pod-network-cidr=10.244.0.0/16
W0226 16:59:57.204106    3863 validation.go:28] Cannot validate kubelet config - no validator is available
W0226 16:59:57.204149    3863 validation.go:28] Cannot validate kube-proxy config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

The issue above was solved with the help of this article: https://stackoverflow.com/questions/56287494/why-does-kubeadm-not-start-even-after-disabling-swap

Resolution in Ansible

Create a file called systemd4docker.sh with the content below:

#!/bin/sh

# Disable swap for the current session; kubeadm's preflight check fails while swap is on
swapoff -a

# Switch Docker to the systemd cgroup driver, as recommended by the preflight warning
cat > /etc/docker/daemon.json <<EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ]
}
EOF

# Ensure the drop-in directory for systemd overrides exists
mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker

This makes Docker use the systemd cgroup driver instead of cgroupfs and turns swap off, which resolved the issue.
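Note that swapoff -a only disables swap until the next reboot. To keep the kubeadm preflight check passing across reboots, the swap entry in /etc/fstab should also be commented out. A minimal sketch, demonstrated here on a sample copy so it can be tried safely (on a real node, run the sed command against /etc/fstab instead):

```shell
# Sample fstab for demonstration only; on a real node operate on /etc/fstab
cat > /tmp/fstab.sample <<'EOF'
UUID=0a3407de-014b-458b-b5c1-848e92a327a3 /    ext4 errors=remount-ro 0 1
/dev/sda2                                 none swap sw                0 0
EOF

# Comment out any line that mounts swap so it stays disabled after a reboot
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.sample

cat /tmp/fstab.sample
```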

Successful Install

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
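Once the kubeconfig is copied into place, a quick sanity check confirms kubectl can reach the new control plane (the node will report NotReady until a pod network is deployed in the next step):

```shell
# Verify API server connectivity and node registration
kubectl cluster-info
kubectl get nodes
```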

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
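The --pod-network-cidr=10.244.0.0/16 used during kubeadm init above is flannel's default CIDR, so flannel is a natural choice of pod network here. The manifest URL below is the one commonly referenced for this Kubernetes version; check the flannel repository for the current path:

```shell
# Deploy flannel as the pod network (URL is an assumption; verify against the flannel repo)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```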

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.212:6443 --token 9hinnu.waijo6sl5k8ttxnj \
    --discovery-token-ca-cert-hash sha256:09afcf2c3ff7c59302ddd8b5fb3ea0559feea1e91bfe9c1356784767e8dddd30
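The bootstrap token shown in the join command expires after 24 hours by default. If a worker needs to join later, a fresh join command can be generated on the master:

```shell
# List existing bootstrap tokens and their TTLs
kubeadm token list

# Create a new token and print the full join command for it
kubeadm token create --print-join-command
```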

The message above appears after a successful install and shows how to connect to the Kubernetes cluster using kubectl in the next step of the install process.