Kubernetes On-Prem Installation
MetalLB Installation
These notes, created in March 2020, are based on MetalLB version 0.8.3. Reference URL: https://metallb.universe.tf/installation/
The basic Kubernetes cluster built with Ansible does not include load-balancing support.
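Without a load balancer, any Service of type LoadBalancer stays in the pending state, because nothing on a bare-metal cluster can allocate it an external IP. A minimal sketch of such a Service (the name nginx-lb and the selector app: nginx are placeholder values for illustration):
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer   # EXTERNAL-IP stays <pending> until MetalLB assigns an address
  selector:
    app: nginx         # placeholder selector; match your own pods
  ports:
  - port: 80
    targetPort: 80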
$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
namespace/metallb-system created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
daemonset.apps/speaker created
deployment.apps/controller created
The code block above shows the initial installation of MetalLB.
The components installed are:
- The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
- The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
- Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.
MetalLB also needs a ConfigMap that defines the pool of addresses it may hand out. This cluster uses layer 2 mode with the range 192.168.1.70-192.168.1.99:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.70-192.168.1.99
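Assuming the ConfigMap above is saved to a file such as metallb-config.yaml (a filename chosen here for illustration), it is applied like any other manifest, after which MetalLB begins assigning addresses from the pool:
$ kubectl apply -f metallb-config.yaml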
$ kubectl get pods --namespace metallb-system
NAME READY STATUS RESTARTS AGE
controller-65895b47d4-5bvb2 1/1 Running 1 98m
speaker-hknf4 1/1 Running 1 98m
speaker-kj9j6 1/1 Running 1 98m
speaker-m5pcv 1/1 Running 1 98m
The console output above shows how to verify that the controller and speaker pods are running.
$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
The commands above must be run for MetalLB to work; they are already part of the install script. After the install script runs, one or two of the worker nodes need to be rebooted before MetalLB assigns a LoadBalancer IP address.
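One common way to make the bridge sysctl persist across those reboots (a sketch, not taken from the install script; the filename 99-kubernetes.conf is arbitrary) is to drop it into /etc/sysctl.d:
$ echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
$ sudo sysctl --system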
Contour Installation
Note to self: Contour Installation is not required as we have an NGINX reverse proxy server to do this for us.
$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
namespace/projectcontour created
serviceaccount/contour created
configmap/contour created
customresourcedefinition.apiextensions.k8s.io/ingressroutes.contour.heptio.com created
customresourcedefinition.apiextensions.k8s.io/tlscertificatedelegations.contour.heptio.com created
customresourcedefinition.apiextensions.k8s.io/httpproxies.projectcontour.io created
customresourcedefinition.apiextensions.k8s.io/tlscertificatedelegations.projectcontour.io created
serviceaccount/contour-certgen created
rolebinding.rbac.authorization.k8s.io/contour created
role.rbac.authorization.k8s.io/contour-certgen created
job.batch/contour-certgen created
clusterrolebinding.rbac.authorization.k8s.io/contour created
clusterrole.rbac.authorization.k8s.io/contour created
role.rbac.authorization.k8s.io/contour-leaderelection created
rolebinding.rbac.authorization.k8s.io/contour-leaderelection created
service/contour created
service/envoy created
deployment.apps/contour created
daemonset.apps/envoy created
Shown above is the installation of Contour, performed after MetalLB was installed and verified to work.
Reference URL: https://projectcontour.io/getting-started/
To check the external IP assigned to the Envoy proxy service, use the following command:
$ kubectl get services -A | grep contour
projectcontour contour ClusterIP 10.103.251.240 <none> 8001/TCP 10m
projectcontour envoy LoadBalancer 10.100.148.94 192.168.1.70 80:30213/TCP,443:30773/TCP 10m
We can see that Envoy is listening on 192.168.1.70 in our case, which means services will now be exposed through Envoy. Contour was installed on the Mini ITX ESXi server on 9 Mar 2020.
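With Envoy holding 192.168.1.70, individual services can be routed through it using Contour's HTTPProxy resource (one of the CRDs installed above). A minimal sketch, where my-app, my-service, default, and app.example.local are placeholder values:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: my-app
  namespace: default
spec:
  virtualhost:
    fqdn: app.example.local   # hostname Envoy should answer for (placeholder)
  routes:
  - services:
    - name: my-service        # existing ClusterIP Service (placeholder)
      port: 80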
NFS and Dynamic NFS Provisioning
Reference URL: https://medium.com/@myte/kubernetes-nfs-and-dynamic-nfs-provisioning-97e2afb8b4a9
Note to myself: To enable Dynamic NFS Provisioning, see the yaml files in
code/Kubernetes/persistent-volume/
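A claim against such a dynamic NFS provisioner typically looks like the sketch below; the StorageClass name managed-nfs-storage is an assumption based on common NFS-client provisioner setups, not read from those yaml files:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage   # assumed StorageClass name
  accessModes:
  - ReadWriteMany                         # NFS supports shared read-write access
  resources:
    requests:
      storage: 1Gi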