Ansible playbooks that automate Kubernetes deployment on Ubuntu 24.04 LTS.
Here's the traffic overview for this repository:
- 👁️ Total Views Since Creation: 264 views
- 🔄 Total Clones Since Creation: 58 clones
Last traffic data update: Tue Feb 25 2025 02:10:28 CET
This repository supports two deployment options:
- Single-Master Deployment (basic setup)
  - Playbooks are in the root directory.
  - Deploys a single control-plane node.
- Multi-Master Deployment (HA, advanced setup)
  - Playbooks are in `playbooks/multi-master/`.
  - Uses HAProxy and multiple control-plane nodes.
Prerequisites:
- Ansible and Python 3 installed on the local machine
- SSH access to your Ubuntu 24.04 nodes
- Nodes already provisioned (OpenStack, another cloud provider, or on-premises)
- Unique hostnames for each node (`master-1`, `master-2`, etc.)
The `hosts.ini` file defines the master and worker nodes:
```ini
[master]
master1 ansible_host=192.168.1.159

[workers]
worker1 ansible_host=192.168.1.203
worker2 ansible_host=192.168.1.171

[all:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_private_key_file=/path/to/your/key.pem
ansible_user=ubuntu
```
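Before running any playbooks, it can help to confirm that Ansible can reach every host in the inventory (an optional, quick check):

```bash
# Ad-hoc ping of all hosts defined in hosts.ini
ansible -i hosts.ini all -m ping
```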
- Install dependencies:
  ```bash
  ansible-playbook -i hosts.ini dependencies.yaml
  ```
- Initialize the master node:
  ```bash
  ansible-playbook -i hosts.ini master.yaml
  ```
- Join worker nodes:
  ```bash
  ansible-playbook -i hosts.ini worker.yaml
  ```
- Verify the cluster:
  ```bash
  ssh -i /path/to/your/key.pem ubuntu@<master_ip> kubectl get nodes
  ```
📁 Playbooks are under `playbooks/multi-master/`.
The `multi-hosts.ini` file defines the multi-master and HAProxy setup:
```ini
[master]
master1 ansible_host=192.168.1.203
master2 ansible_host=192.168.1.249
master3 ansible_host=192.168.1.198

[workers]
worker1 ansible_host=192.168.1.153
worker2 ansible_host=192.168.1.239

[haproxy]
server ansible_host=192.168.1.212

[all:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_extra_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
ansible_ssh_private_key_file=/path/to/your/key.pem
ansible_user=ubuntu
```
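The playbooks configure the `[haproxy]` host to load-balance the Kubernetes API across the masters. For orientation only, here is a minimal sketch of the kind of HAProxy configuration involved (the real settings are applied by the playbooks and may differ; IPs are taken from the inventory above):

```bash
# Hedged sketch: append a TCP frontend/backend for the API server and restart HAProxy
cat <<'EOF' | sudo tee -a /etc/haproxy/haproxy.cfg
frontend kubernetes-api
    bind *:6443
    mode tcp
    default_backend kubernetes-masters

backend kubernetes-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 192.168.1.203:6443 check
    server master2 192.168.1.249:6443 check
    server master3 192.168.1.198:6443 check
EOF
sudo systemctl restart haproxy
```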
- Install Kubernetes dependencies on all nodes:
  ```bash
  ansible-playbook -i multi-hosts.ini dependencies.yml
  ```
This playbook:
- Disables swap
- Loads necessary kernel modules
- Configures system parameters
- Installs containerd runtime
- Installs Kubernetes components (kubelet, kubeadm)
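  For reference, a rough manual equivalent of these steps, package installation aside (a hedged sketch; the exact tasks and versions live in the playbook and may differ):
  ```bash
  # Disable swap (required by the kubelet)
  sudo swapoff -a
  sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

  # Kernel modules and sysctl settings needed by containerd and pod networking
  sudo modprobe overlay
  sudo modprobe br_netfilter
  cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-iptables  = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  net.ipv4.ip_forward                 = 1
  EOF
  sudo sysctl --system
  ```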
- Initialize the master node:
  ```bash
  ansible-playbook -i multi-hosts.ini master.yml
  ```
This playbook:
- Initializes the Kubernetes control plane
- Sets up pod networking (Flannel)
- Configures kubeconfig
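  For orientation, a hedged sketch of the equivalent manual steps (the exact flags used by master.yml may differ; the control-plane endpoint below assumes the HAProxy address from `multi-hosts.ini`):
  ```bash
  # Initialize the first control-plane node behind the load balancer
  sudo kubeadm init \
    --control-plane-endpoint "192.168.1.212:6443" \
    --upload-certs \
    --pod-network-cidr "10.244.0.0/16"

  # Make kubectl usable for the current user
  mkdir -p "$HOME/.kube"
  sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
  sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

  # Install the Flannel CNI plugin
  kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
  ```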
- Join worker nodes:
  ```bash
  ansible-playbook -i multi-hosts.ini worker.yml
  ```
This playbook:
- Retrieves the join command from the master
- Joins worker nodes to the cluster
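  A hedged sketch of the manual equivalent (the playbook automates both halves; the endpoint assumes the HAProxy address from `multi-hosts.ini`):
  ```bash
  # 1. On a control-plane node, print a fresh join command:
  kubeadm token create --print-join-command

  # 2. On each worker node, run the printed command, e.g.:
  sudo kubeadm join 192.168.1.212:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
  ```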
SSH into the master node and check the cluster status:
```bash
ssh -i /path/to/your/key.pem ubuntu@<master_ip>
kubectl get nodes

# Expected output:
NAME                STATUS   ROLES           AGE   VERSION
master-controller   Ready    control-plane   10d   v1.30.9
worker-controller   Ready    <none>          10d   v1.30.9
```
All nodes should show "Ready" status.
To test external access to your cluster, deploy a sample Nginx application:
- Create a file named `nginx-test.yaml`:
  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
  spec:
    selector:
      matchLabels:
        app: nginx
    replicas: 2
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:latest
          ports:
          - containerPort: 80
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-service
  spec:
    type: NodePort
    selector:
      app: nginx
    ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
    externalIPs: []
  ```
- Apply the configuration:
  ```bash
  kubectl apply -f nginx-test.yaml

  # Expected output:
  deployment.apps/nginx-deployment created
  service/nginx-service created
  ```
- Verify the deployment:
  ```bash
  kubectl get deployments

  # Expected output:
  NAME               READY   UP-TO-DATE   AVAILABLE   AGE
  nginx-deployment   2/2     2            2           6s

  kubectl get pods

  # Expected output:
  NAME                               READY   STATUS    RESTARTS   AGE
  nginx-deployment-576c6b7b6-88c2r   1/1     Running   0          11s
  nginx-deployment-576c6b7b6-jwndk   1/1     Running   0          11s

  kubectl get services

  # Expected output:
  NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
  kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP        43s
  nginx-service   NodePort    10.111.243.240   <none>        80:30080/TCP   14s
  ```
- Access Nginx:
  - Through the NodePort: `http://<any-node-ip>:30080` (or with `curl`, as shown below)
  - You should see the default Nginx welcome page.
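  - A quick check from the command line (hedged example):
    ```bash
    # Should return the "Welcome to nginx!" HTML page
    curl http://<any-node-ip>:30080
    ```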
To verify everything is working:
```bash
# Check that the pods are running
kubectl get pods -o wide

# Check service details
kubectl describe service nginx-service

# Check the logs of a pod (replace <pod-name> with an actual pod name)
kubectl logs <pod-name>
```
- Multi-master not joining?
  - Check etcd members:
    ```bash
    ETCDCTL_API=3 etcdctl member list \
      --cacert /etc/kubernetes/pki/etcd/ca.crt \
      --cert /etc/kubernetes/pki/etcd/server.crt \
      --key /etc/kubernetes/pki/etcd/server.key
    ```
  - Remove unstarted members:
    ```bash
    ETCDCTL_API=3 etcdctl member remove <MEMBER_ID>
    ```
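  - Optionally check etcd endpoint health with the same certificates (a hedged extra step, not part of the playbooks):
    ```bash
    ETCDCTL_API=3 etcdctl endpoint health \
      --cacert /etc/kubernetes/pki/etcd/ca.crt \
      --cert /etc/kubernetes/pki/etcd/server.crt \
      --key /etc/kubernetes/pki/etcd/server.key
    ```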
- HAProxy not balancing?
  - Check the logs:
    ```bash
    sudo journalctl -u haproxy --no-pager
    ```
  - Ensure all masters are reachable via port `6443`.
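  - A quick reachability check (hedged example; master IPs are taken from `multi-hosts.ini` above):
    ```bash
    # Each control-plane node should accept connections on 6443
    for ip in 192.168.1.203 192.168.1.249 192.168.1.198; do
      nc -zv "$ip" 6443
    done
    ```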
- Based on kubernetes-playbooks by torgeirl, adapted for Ubuntu 24.04 LTS
See the LICENSE file for license rights and limitations (MIT).