This is a password generator application created by Linuxtips. The challenge is to deploy it to a Kubernetes cluster using everything we learned in PICK (Programa Intensivo de Containers e Kubernetes).
- Docker images
- Push images to DockerHub
- Report of image vulnerabilities in the README
- Signed images
- Kube-linter
- KinD Cluster
- Run cluster in OKE
- Monitoring with Prometheus
- Performance Test - Locust
- Automation with GitHub Actions, deploying the application to OKE
- Cert Manager
- Complete documentation in the README file
- Fix Service and Pod Monitors not working in OKE
- Sign the Locust image and reduce its size
Requirements:
- docker
- trivy
- kind
- kubectl
- kube-linter (optional)
- ingress
- kube-prometheus
- terraform
- oci-cli
I've used the Wolfi images from Chainguard: the Python image to build the application, and Chainguard's Redis image for the datastore.
We can test them locally with docker compose:
docker compose up
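For reference, a minimal sketch of what the compose file could look like (the repository's docker-compose.yml is the source of truth; the application port and environment variable name below are assumptions):

# sketch only - check the repository's compose file for the real definition
services:
  giropops-senhas:
    image: mmazoni/linuxtips-giropops-senhas:3.1
    ports:
      - "5000:5000"        # assumed application port
    environment:
      REDIS_HOST: redis    # assumed variable name
    depends_on:
      - redis
  redis:
    image: cgr.dev/chainguard/redis:latest
    ports:
      - "6379:6379"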
Log in to Docker Hub in the terminal (use your own username):
docker login -u mmazoni
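If the images are not built yet, the build step looks roughly like this (a sketch; the Dockerfile paths and build contexts are assumptions, adjust them to the repository layout):

# build and tag the application and Locust images before pushing
docker build -t mmazoni/linuxtips-giropops-senhas:3.1 -f dockerfile/Dockerfile .
docker build -t mmazoni/locust-giropops:1.1 -f locust/Dockerfile ./locust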
Push the images you created:
docker push mmazoni/linuxtips-giropops-senhas:3.1
docker push mmazoni/locust-giropops:1.1
The Wolfi image for Python has no vulnerabilities; only the Python libraries do. Updating those libraries to the fixed versions brings the image down to 0 vulnerabilities.
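You can reproduce the vulnerability report locally with Trivy, for example:

# scan the application image; --severity narrows the report to the findings that matter most
trivy image --severity HIGH,CRITICAL mmazoni/linuxtips-giropops-senhas:3.1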
Install cosign. Then we can run the command below to verify the signature:
cosign verify --key=dockerfile/cosign.pub mmazoni/linuxtips-giropops-senhas:3.0
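For reference, a key-based signature is produced roughly like this (a sketch; only the holder of the private key can actually sign the image):

# generate cosign.key/cosign.pub once, then sign the pushed image
cosign generate-key-pair
cosign sign --key cosign.key mmazoni/linuxtips-giropops-senhas:3.0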
In the GitHub Actions workflow we use keyless signing, so verify with the command below:
cosign verify mmazoni/linuxtips-giropops-senhas:latest \
--certificate-identity https://github.com/MMazoni/giropops-senha-linuxtips/.github/workflows/deploy.yml@refs/heads/main \
--certificate-oidc-issuer https://token.actions.githubusercontent.com | jq
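The signing side of that keyless flow lives in the workflow itself; a minimal sketch of the relevant job fragment could look like this (step names and versions are assumptions, the real definition is .github/workflows/deploy.yml):

# sketch - keyless signing needs the id-token permission for OIDC
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for keyless (OIDC) signing
      contents: read
    steps:
      - uses: sigstore/cosign-installer@v3
      - name: Sign image (keyless)
        run: cosign sign --yes mmazoni/linuxtips-giropops-senhas:latest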
Kube-linter is configured in GitHub Actions to run on merges/pushes to the main branch. You can also run it locally if you want:
kube-linter lint manifests/ --config .kube-linter.yml
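The .kube-linter.yml file controls which checks run; a minimal sketch (the excluded check below is only an example, the repository's file is authoritative):

# enable all built-in checks and skip the ones that do not apply to this project
checks:
  addAllBuiltIn: true
  exclude:
    - "no-read-only-root-fs"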
Edit the hosts file so the application works with the ingress.
sudo vim /etc/hosts
Then, add the hosts necessary for the project:
127.0.0.1 giropops-senhas.kubernetes.local
127.0.0.1 grafana.kubernetes.local
127.0.0.1 prometheus.kubernetes.local
127.0.0.1 alertmanager.kubernetes.local
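If you prefer a one-liner instead of editing the file by hand:

# append the entries to /etc/hosts
echo "127.0.0.1 giropops-senhas.kubernetes.local grafana.kubernetes.local prometheus.kubernetes.local alertmanager.kubernetes.local" | sudo tee -a /etc/hosts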
- Install kind to use Kubernetes in Docker locally and kubectl to work with the Kubernetes API through your terminal.
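Rough install steps for Linux amd64, in case you don't have them yet (check the official docs for the current versions; the kind version below is just an example):

# kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
# kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl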
- Use the commands below to create the cluster and apply the manifests:
kind create cluster --config=config/kind/cluster.yml
kubectl apply -k manifests/overlays/kind
kubectl apply -f manifests/overlays/kind/specific
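For the hosts above to answer on 127.0.0.1, the kind node has to expose ports 80/443 to the host; config/kind/cluster.yml is the real config, but a typical layout for this kind of setup is:

# typical kind config for running an ingress controller locally (sketch)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
      - containerPort: 443
        hostPort: 443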
- See if all pods are running, then access the application:
kubectl get pods -n giropops
- http://giropops-senhas.kubernetes.local/
- http://grafana.kubernetes.local
- http://prometheus.kubernetes.local
- Authenticate in OCI following this guide: https://github.com/Rapha-Borges/oke-free
- Then create the infrastructure with Terraform:
terraform init
terraform apply
- After that, your cluster will be created and you will already be connected to it. All the necessary manifests should be applied too.
- See if it is working:
kubectl get nodes
Now you can access giropops-senhas via the public IP that Terraform shows as an output after the provisioning finishes.
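If you need the IP again later, Terraform can reprint its outputs (the exact output name is defined in the repository's Terraform code):

# lists every output defined in the Terraform code, including the public IP
terraform output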
- We will use kube-prometheus to start monitoring giropops-senhas. Now, install the CRDs (Custom Resource Definitions) of kube-prometheus:
git clone https://github.com/prometheus-operator/kube-prometheus ~/kube-prometheus
cd ~/kube-prometheus
kubectl create -f manifests/setup
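Before applying the rest of the manifests, it helps to wait until the CRDs are registered:

# wait until the kube-prometheus CRDs are established
kubectl wait --for condition=Established --all CustomResourceDefinition --namespace=monitoring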
- Then, install the services (Prometheus, Grafana, Alertmanager, Blackbox, etc.):
kubectl apply -f manifests/
- Check if everything installed properly:
kubectl get servicemonitors -n monitoring
kubectl get pods -n monitoring
Access here: http://prometheus.kubernetes.local/targets?search=
- In this part we will configure the HorizontalPodAutoscaler and use Locust for the stress test. First, the HPA requires the Metrics Server:
kubectl apply -k manifests/base/oke
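The HPA itself comes from the manifests; a sketch of what it looks like (target names and thresholds here are illustrative, the real values live in manifests/):

# illustrative HPA - scales the giropops-senhas Deployment on CPU usage
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: giropops-senhas
  namespace: giropops
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: giropops-senhas
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70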
- See if it is installed and wait for the metrics-server pod to be ready:
kubectl get pods -n kube-system | grep metrics-server
Now we can obtain CPU and memory metrics from nodes and pods
kubectl top nodes
kubectl top pods
- Access Locust at http://<public_ip>:3000. Set the number of users to 1000 and the spawn rate (users started per second) to 100.
Here is the pods' resource monitoring in Grafana: