- This scenario shows how to create a K8s cluster on virtual machines (Multipass, kubeadm, Docker).
- "Multipass is a mini-cloud on your workstation using native hypervisors of all the supported platforms (Windows, macOS and Linux)."
- Multipass provides lightweight, fast, easy-to-use Ubuntu VMs on demand for any workstation; it is quick to install and quick to use.
- Link: https://multipass.run/
# create the VMs (newer Multipass releases name the --mem flag --memory)
multipass launch --name k8s-controller --cpus 2 --mem 2048M --disk 10G
multipass launch --name k8s-node1 --cpus 2 --mem 1024M --disk 7G
multipass launch --name k8s-node2 --cpus 2 --mem 1024M --disk 7G
# open a shell to each VM in a separate terminal
multipass shell k8s-controller
multipass shell k8s-node1
multipass shell k8s-node2
multipass list
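- Optionally, check a single VM's resources with "multipass info" (a quick sanity check):
multipass info k8s-controller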
- Run on all 3 nodes (each in a separate terminal):
sudo apt-get update
sudo apt-get install docker.io -y # install Docker
sudo systemctl start docker # start and enable the Docker service
sudo systemctl enable docker
sudo usermod -aG docker $USER # add the current user to the docker group
newgrp docker # make the system aware of the new group addition
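- Optionally, verify that Docker works for the current user before moving on (a quick sanity check):
docker version # client and daemon versions should both be shown
docker run hello-world # pulls and runs a test container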
- Run on all 3 nodes (each in a separate terminal):
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - # add the repository key
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main" # add the repository
sudo apt-get update
sudo apt-get install kubeadm kubelet kubectl -y # install all of the necessary Kubernetes tools
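- Holding the packages prevents unattended upgrades from breaking the cluster later (optional but common practice):
sudo apt-mark hold kubeadm kubelet kubectl
kubeadm version # confirm the installation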
- Run on new terminal:
multipass list
- Run on the controller and add the IPs and hostnames of all nodes:
sudo nano /etc/hosts
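- The entries look like the following (the node IPs here are illustrative; use the addresses shown by "multipass list"):
172.29.108.209 k8s-controller
172.29.108.210 k8s-node1
172.29.108.211 k8s-node2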
- Run on all 3 nodes (each in a separate terminal):
sudo swapoff -a # turn off swap
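- "swapoff -a" only lasts until the next reboot; commenting out the swap entry in /etc/fstab makes it permanent (a common companion step, here done with sed):
sudo sed -i '/ swap / s/^/#/' /etc/fstab # comment out the swap line so the setting survives reboots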
- Create the file "daemon.json" in the directory "/etc/docker" to switch Docker's cgroup driver to systemd. Run on all 3 nodes:
cd /etc/docker
sudo touch daemon.json
sudo nano daemon.json
# copy and paste this into daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
sudo systemctl restart docker
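- To confirm that the driver change took effect (optional check):
sudo docker info | grep -i cgroup # should print "Cgroup Driver: systemd"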
- Run on the controller:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes # sudo is not needed once the config has been copied to $HOME/.kube
- Run on the nodes (node1, node2) the join command printed by "kubeadm init"; the token and hash below are from this example run:
sudo kubeadm join 172.29.108.209:6443 --token ug13ec.cvi0jwi9xyf82b6f \
--discovery-token-ca-cert-hash sha256:12d59142ccd0148d3f12a673b5c47a2f549cce6b7647963882acd90f9b0fbd28
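- If the join command was lost or the token expired (tokens are valid for 24 hours by default), a fresh one can be printed on the controller:
sudo kubeadm token create --print-join-command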
- Run "kubectl get nodes" on the controller, after deploying pod network, nodes will be ready.
- Run on Controller to deploy a pod network:
- Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
- Calico:
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
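- Deploy only one of the two networks. The rollout can be watched until all CNI and CoreDNS pods are Running (optional check):
kubectl get pods -A -w # Ctrl+C to stop watching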
- After more testing (restarting the master, etc.), containerd proved more flexible and usable than the dockerd runtime (see the KubeAdm-Containerd setup): with dockerd, /etc/hosts must be updated after every restart, whereas containerd does not require this.
- After restarting the master node, its IP may have changed while the K8s cluster API still points to the old IP; in that case the cluster must be reconfigured with the new IP.
- If Docker was installed for the Docker registry, the exited containers can be removed:
sudo docker rm $(sudo docker ps -a -f status=exited -q)
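- Alternatively, "docker container prune" removes all stopped containers in one step (it asks for confirmation):
sudo docker container prune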
- Run on the controller and update the node IPs in /etc/hosts (after a restart, the IPs must be updated again):
sudo nano /etc/hosts
- Reset kubeadm and initialize a new cluster:
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config # restore ownership, as after the first init
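- A quick check that the control plane is back (the node stays NotReady until the pod network is re-applied):
kubectl get nodes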
- kubeadm init prints the command to join the cluster; run it on each node (values below are from this example run):
sudo kubeadm join 172.31.40.125:6443 --token 07vo3z.q2n2qz6bd07ipdnf \
--discovery-token-ca-cert-hash sha256:46c7dcb092ca091e71ab39bd542e73b90b3f7bdf0c486202b857a678cd9879ba
- Network configuration with the new IP:
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
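- With the operator-based Calico install, the pods come up in the "calico-system" namespace; their progress can be watched with:
watch kubectl get pods -n calico-system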