This repository has been archived by the owner on Apr 17, 2024. It is now read-only.

feat: added script and docs for worker volume add (#19)
Co-authored-by: Zuhair AlSader <[email protected]>
IanEff and zalsader authored Jan 15, 2024
1 parent 45d16e3 commit 653a3ee
Showing 5 changed files with 165 additions and 49 deletions.
8 changes: 4 additions & 4 deletions README.md
For managing the infrastructure to run our demo system

A new best practice for managing K8s infrastructure is to store configuration in Git and use PRs to make sure changes get reviewed. Seems like a great idea.

## Setting up Rook

Rook can be effortlessly deployed in various environments, including bare metal or the cloud.
In this example, we will demonstrate an automated environment configuration to showcase how easily you can prepare for testing Rook in your preferred setup.


### 1. Deploy your Kubernetes Cluster in Hetzner Cloud

Read how you can [install your Kubernetes Cluster to use Rook](kubernetes-cluster-demo/docs/setup-demo.md)



137 changes: 93 additions & 44 deletions kubernetes-cluster-kubeone/docs/setup-demo.md
## Setup a Kubernetes Cluster for Rook using Hetzner Cloud

This document will walk you through the steps required to set up a Kubernetes cluster in the Hetzner Cloud for use with Rook.

### IMPORTANT:

To follow this guide, you will need a Hetzner API token. To generate one:
- Sign in to the [Hetzner Cloud Console](https://console.hetzner.cloud/)
- Select your project
- Go to Security → API Tokens
- Generate a new token

Once you have your token, export it to your shell's environment:
```console
$ export HCLOUD_TOKEN=GlPz.....
```
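The scripts used later in this guide read the token from the environment rather than taking it as an argument. A minimal sketch of that pattern (the token value below is a placeholder, not a real token):

```python
import os

# Placeholder value for illustration only; substitute your real Hetzner API token.
os.environ.setdefault("HCLOUD_TOKEN", "GlPz-placeholder")

# This mirrors the check volumizer.py performs before talking to the API.
assert "HCLOUD_TOKEN" in os.environ, \
    "Please export your API token in the HCLOUD_TOKEN environment variable"
print("HCLOUD_TOKEN is set")
```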


> If you have trouble, please visit the [Hetzner Cloud documentation](https://docs.hetzner.cloud/) site.
## Requirements

- Kubeone v1.6.2
- AMD64 and ARM64 binaries can be found on [Kubermatic's GitHub page](https://github.com/kubermatic/kubeone/releases).
- Terraform v1.5.2 or greater
  - Various installation methods can be found on [HashiCorp's installation page](https://developer.hashicorp.com/terraform/install).
- A new ssh key generated for this project
- This can be created by running `ssh-keygen` on your console. When prompted, enter a name and a location for the new key pair.
- Kubectl
- Various installation methods can be found in the [Kubernetes installation guide](https://kubernetes.io/docs/tasks/tools/).
- A Hetzner Cloud API token
- `ssh-agent` running in your shell
- On your console, run the following:
```console
$ eval `ssh-agent -s`
$ ssh-add /path/to/private/key
```


## Architecture

![architecture.png](../architecture.png)

## Hands On

It's time to prepare your Kubernetes cluster for Rook usage.

#### 1. Clone the Koor demonstration repository

```console
$ git clone [email protected]:koor-tech/demo-gitops.git
```

#### 2. Navigate to the terraform directory beneath kubernetes-cluster-kubeone

```console
$ cd demo-gitops/kubernetes-cluster-kubeone/terraform
```

#### 3. Initialize the terraform configuration
…commands will detect it and remind you to do so if necessary.

#### 4. Setup your cluster

In the terraform directory, copy the `terraform.tfvars.example` file to `terraform.tfvars`, and modify the values to describe your cluster:
```console
$ cp terraform.tfvars.example terraform.tfvars
```
```source
cluster_name = "koor-demo"
ssh_public_key_file = "~/.ssh/id_rsa.pub"
control_plane_vm_count=3
initial_machinedeployment_replicas=3
worker_type="cpx41"
control_plane_type="cpx31"
os="ubuntu"
worker_os="ubuntu"
```
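The helper script `run.sh` in this directory reads these values back out with a grep/awk pipeline; the same extraction can be sketched in Python (a simplified illustration, not code from the repository):

```python
import re

def read_tfvar(text: str, key: str) -> str:
    """Extract a value from terraform.tfvars-style `key = "value"` lines."""
    for line in text.splitlines():
        match = re.match(rf'^{re.escape(key)}\s*=\s*(.+)$', line)
        if match:
            # Strip surrounding whitespace and quotes, as run.sh's awk/sed pipeline does.
            return match.group(1).strip().strip('"')
    raise KeyError(key)

tfvars = 'cluster_name = "koor-demo"\nworker_volume_size=30\n'
print(read_tfvar(tfvars, "cluster_name"))        # koor-demo
print(read_tfvar(tfvars, "worker_volume_size"))  # 30
```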

Rook is versatile, and can run on many different cluster configurations. For a production environment, these are the minimum requirements:

- 3 control plane nodes
- 4 CPU
- 8 GB RAM
- 3 data/worker nodes
- 8 CPU
- 16 GB RAM
- Calico as the CNI. Other CNI plugins work as well, but haven't been as extensively tested.
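As a quick sanity check, the aggregate capacity implied by this minimum layout can be tallied (a simple illustration, not code from the repository):

```python
# Minimum production layout described above.
control_plane = {"nodes": 3, "cpu_per_node": 4, "ram_gb_per_node": 8}
workers = {"nodes": 3, "cpu_per_node": 8, "ram_gb_per_node": 16}

total_cpu = (control_plane["nodes"] * control_plane["cpu_per_node"]
             + workers["nodes"] * workers["cpu_per_node"])
total_ram_gb = (control_plane["nodes"] * control_plane["ram_gb_per_node"]
                + workers["nodes"] * workers["ram_gb_per_node"])

print(f"Total: {total_cpu} vCPUs, {total_ram_gb} GB RAM")  # Total: 36 vCPUs, 72 GB RAM
```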

#### 5. Validate your changes

Run `terraform plan` to see what changes will be applied to your infrastructure:
```console
$ terraform plan
hcloud_placement_group.control_plane: Refreshing state... [id=185187]
...
hcloud_server_network.control_plane[1]: Refreshing state... [id=35048830-3137203]
...
```

#### 6. Apply your changes

Once you're happy with the proposed changes, apply them to create your infrastructure. Kubernetes will be installed later.
```console
$ terraform apply

...
Terraform planned the following actions, but then encountered a problem:
...
```

#### 7. Save your infrastructure

Save the generated terraform state to a file. This will be used in the next step to stand up the Kubernetes cluster:
```console
$ terraform output -json -no-color > tf.json
```
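The file is plain JSON, so it is easy to inspect. Below is a sketch of reading it back; the output name shown (`kubeone_api`) is an assumption based on typical kubeone Terraform configurations, and the real file contains whatever outputs your configuration defines:

```python
import json

# Hypothetical, heavily trimmed excerpt of `terraform output -json`.
raw = '{"kubeone_api": {"value": {"endpoint": "203.0.113.10"}}}'

outputs = json.loads(raw)
# Each terraform output is wrapped in an object with a "value" key.
print(outputs["kubeone_api"]["value"]["endpoint"])  # 203.0.113.10
```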

#### 8. Deploy your Cluster

The `kubeone.yaml` file in the terraform directory already has all necessary configuration details, but you can modify it to meet your requirements.

Once you're ready, run:
```console
$ kubeone apply -m kubeone.yaml -t tf.json
INFO[17:31:59 UTC] Determine hostname...
INFO[17:32:03 UTC] Determine operating system...
INFO[17:32:04 UTC] Running host probes...
The following actions will be taken:
Run with --verbose flag for more information.
+ initialize control plane node "koor-demo-test-ian-control-plane-1" (192.168.0.3) using 1.25.6
+ join control plane node "koor-demo-test-ian-control-plane-2" (192.168.0.4) using 1.25.6
+ ensure machinedeployment "koor-demo-test-ian-pool1" with 4 replica(s) exists
+ apply embedded addons
Do you want to proceed (yes/no): yes

INFO[17:32:31 UTC] Determine hostname...
INFO[17:32:31 UTC] Determine operating system...
INFO[17:32:31 UTC] Running host probes...
INFO[17:32:32 UTC] Installing prerequisites...
INFO[17:32:32 UTC] Creating environment file... node=1.2.3.4 os=ubuntu
INFO[17:32:32 UTC] Creating environment file... node=5.6.7.8 os=ubuntu
INFO[17:32:33 UTC] Configuring proxy... node=1.2.3.4 os=ubuntu
INFO[17:32:33 UTC] Installing kubeadm... node=1.2.3.4 os=ubuntu
INFO[17:32:33 UTC] Configuring proxy... node=5.6.7.8 os=ubuntu
INFO[17:32:33 UTC] Installing kubeadm... node=5.6.7.8 os=ubuntu
```

#### 9. Add your volumes

You can do this by using `volumizer.py` as follows:

```console
$ pip3 install hcloud
$ ./volumizer.py -s 10
There are no volumes attached to <worker name>. Creating and associating.
Creating volume to Hetzner Cloud for <worker name>.
```
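`volumizer.py` (included in the terraform directory) decides which servers get volumes by matching server names: a server qualifies only if its name contains both the cluster name and the word "pool". That selection logic can be sketched in isolation:

```python
import re

def is_data_plane_worker(server_name: str, cluster_name: str) -> bool:
    # Mirrors volumizer.py: both patterns are matched case-insensitively.
    in_cluster = re.compile(re.escape(cluster_name), re.IGNORECASE)
    in_pool = re.compile("pool", re.IGNORECASE)
    return bool(in_cluster.search(server_name) and in_pool.search(server_name))

# Hypothetical node names following the naming scheme shown earlier in this guide.
print(is_data_plane_worker("koor-demo-pool1-7f9c", "koor-demo"))       # True
print(is_data_plane_worker("koor-demo-control-plane-1", "koor-demo"))  # False
```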

Alternatively, you can do this manually. For this step, you will need access to your [Hetzner Cloud account](https://accounts.hetzner.com/login).

>To utilize all of Rook's features, we recommend associating at least one volume with each data plane node.
1. Navigate to the [Hetzner Cloud console](https://console.hetzner.cloud/)
2. Open your project
3. Go to the volumes tab in the sidebar
4. Set the volume size and name
5. Choose a data plane node to tie the volume to
- **Important:** Data plane nodes have the word "pool" in their names.
- **Caution:** Do not associate volumes to Rook control plane nodes. Rook works by deploying pods on the data plane nodes tied to your volumes. Control plane nodes are unable to host those pods themselves due to [taints and tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
6. Set the *Choose mount options* slider to **manually** so that your cluster has raw block devices to consume. You'll receive a warning, which can be dismissed.
7. Finally, click on create and buy

![how to create a volume](../how-to-create-volume.gif)

Congratulations! You have created a minimal production Kubernetes Cluster that can now be used to deploy Rook.

>Note: To _destroy_ your cluster, detach all volumes from your worker nodes, delete them, then navigate to the `demo-gitops/kubernetes-cluster-kubeone/terraform` directory, and run `terraform apply -destroy`.
9 changes: 8 additions & 1 deletion kubernetes-cluster-kubeone/terraform/run.sh
ssh_key_file=$(grep -E "^ssh_public_key_file" terraform.tfvars | awk -F= '{gsub(/[ \047"]/, "", $2); print $2}' | sed 's/^[ \t]*//;s/[ \t]*$//')

ssh_key_file="${ssh_key_file/#\~/$HOME}"

# Extract worker volume size
worker_volume_size=$(grep -E "^worker_volume_size" terraform.tfvars | awk -F= '{gsub(/[ \047"]/, "", $2); print $2}' | sed 's/^[ \t]*//;s/[ \t]*$//')

if ! [ -f "$ssh_key_file" ]; then
echo "Error: SSH public key file '$ssh_key_file' not found."
kubeconfig(){

run() {
local choice
read -p "What do you want to do? (1: plan, 2: apply, 3: save infra, 4:deploy cluster, 5: export kubeconfig, 6: add volumes to workers, other: exit): " choice
case $choice in
1)
# Run terraform plan
kubeconfig
run
;;
6) # assign volumes
echo "Assigning volumes to worker nodes."
pip3 install hcloud
./volumizer.py -c ${cluster_name} -s ${worker_volume_size}
;;
*)
echo "Invalid choice, exiting..."
echo "The script has finished."
1 change: 1 addition & 0 deletions kubernetes-cluster-kubeone/terraform/terraform.tfvars.example
worker_type="cpx41"
control_plane_type="cpx31"
os="ubuntu"
worker_os="ubuntu"
worker_volume_size=30
59 changes: 59 additions & 0 deletions kubernetes-cluster-kubeone/terraform/volumizer.py
#!/usr/bin/python3

########
#
# volumizer.py, the simple and effective way of adding volumes to your cluster!
#
# usage: ./volumizer.py -c <cluster name> -s <size>
#
# Relies on the hcloud python library:
# $ pip3 install hcloud
#

import os
import argparse
import re

import hcloud

def create_and_associate_volume(client, worker, size):
    volume_name = f"{worker.name}-vol-1"
    print(f"Creating volume in Hetzner Cloud for {worker.name}.")
    # Volumes must be created in the same location as the server they attach to;
    # client.volumes.create expects a Location, not a Datacenter.
    response = client.volumes.create(size=size, name=volume_name, location=worker.datacenter.location)
    response.volume.attach(worker)

def parse_opts():
    parser = argparse.ArgumentParser()

    parser.add_argument("-s", "--size", type=int, default=10, help="the size of each worker's volume, in gigabytes")
    parser.add_argument("-c", "--cluster-name", type=str, default="pool", help="the name of your cluster")

    args = parser.parse_args()
    return args.cluster_name, args.size

if __name__ == "__main__":
    assert (
        "HCLOUD_TOKEN" in os.environ
    ), "Please export your API token in the HCLOUD_TOKEN environment variable"
    token = os.environ["HCLOUD_TOKEN"]
    client = hcloud.Client(token=token)

    name, size = parse_opts()

    # Workers are identified by name: they contain both the cluster name and "pool".
    in_cluster = re.compile(re.escape(name), re.IGNORECASE)
    in_pool = re.compile("pool", re.IGNORECASE)

    servers = client.servers.get_all()

    for server in servers:
        if in_cluster.search(server.name) and in_pool.search(server.name):
            if not server.volumes:
                print(f"There are no volumes attached to {server.name}. Creating and associating.")
                create_and_associate_volume(client=client, worker=server, size=size)
