chore(docs): updating part 1 of the KubeAid demonstration blog series for AWS

Signed-off-by: Archisman <[email protected]>
Archisman-Mridha committed Feb 25, 2025 (commit cba37bb, 1 parent: 8728be4)

Changed file: docs/aws/capi/cluster.md (51 additions, 28 deletions)

To address this, we have [forked](https://github.com/Obmondo/image-builder) the image-builder repository.

First, fork the [KubeAid Config](https://github.com/Obmondo/kubeaid-config) repo.
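
If you prefer working from the terminal, the fork can also be created with the GitHub CLI (a sketch; forking via the GitHub web UI works just as well) :

```sh
# Fork Obmondo/kubeaid-config into your own GitHub account, without cloning it locally.
gh repo fork Obmondo/kubeaid-config --clone=false
```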

Start the KubeAid Bootstrap Script container and exec into it :

```sh
NETWORK_NAME=k3d-management-cluster

# Create the Docker network (shared with the K3D management cluster), if it doesn't exist already.
if ! docker network ls | grep -q "$NETWORK_NAME"; then
  docker network create "$NETWORK_NAME"
fi

# Run the KubeAid Bootstrap Script container in the background, mounting
# the Docker socket so it can spin up the K3D management cluster on the host.
docker run --name kubeaid-bootstrap-script \
--network "$NETWORK_NAME" \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ./outputs:/outputs \
-d \
ghcr.io/obmondo/kubeaid-bootstrap-script:v0.5

# Open a shell inside the container.
docker exec -it kubeaid-bootstrap-script /bin/bash
```

KubeAid Bootstrap Script requires a config file, which it'll use to prepare your [KubeAid Config](https://github.com/Obmondo/kubeaid-config) fork. Generate a sample config file by running :

```sh
kubeaid-bootstrap-script config generate aws \
--config ./outputs/kubeaid-bootstrap-script.config.yaml
```

Open the sample YAML configuration file generated at `./outputs/kubeaid-bootstrap-script.config.yaml` and update the following fields with your specific values :

- **git.username** and **git.password**
- **forks.kubeaidConfig**
- **cloud.aws.controlPlane.ami.id** and **cloud.aws.nodeGroups.\*.ami.id**
- **cloud.aws.sshKeyName** and **cloud.aws.nodeGroups.\*.sshKeyName**

If you don't have an existing SSH KeyPair in the corresponding AWS region, you can generate one using the AWS CLI.
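
A minimal sketch (assuming the AWS CLI is installed and authenticated; the key name `kubeaid-demo` is an illustrative placeholder, and should match the `sshKeyName` values in your config) :

```sh
# Create an EC2 KeyPair in the current AWS region and save the private key locally.
aws ec2 create-key-pair \
--key-name kubeaid-demo \
--query 'KeyMaterial' \
--output text > kubeaid-demo.pem

# Restrict the key file's permissions, so SSH clients accept it.
chmod 400 kubeaid-demo.pem
```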

Export your AWS credentials as environment variables :

```sh
export AWS_REGION="eu-west-1"
export AWS_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="xxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxx"
export AWS_SESSION_TOKEN="xxxxxxxxxx"
```

Now, to bootstrap the Kubernetes (v1.31.0) cluster, with 3 control plane nodes and worker nodes autoscaled between 1 and 3 replicas, you can simply run :

```sh
kubeaid-bootstrap-script cluster bootstrap aws \
--config /outputs/kubeaid-bootstrap-script.config.yaml
```

> [!NOTE]
>
> You'll notice we're mounting the Docker socket into the container. This allows the container to spin up a K3D cluster on your host system. That K3D cluster is what we call the `temporary management cluster` / `dev environment`. The main cluster will be bootstrapped using that temporary management cluster. KubeAid Bootstrap Script will then make the main cluster manage itself (this is called `pivoting`). After that, the temporary management cluster is no longer needed.

> [!NOTE]
>
> If you later wish to spin up that temporary management cluster locally for some reason (updating / deleting the provisioned cluster), you can use the `kubeaid-bootstrap-script devenv create aws` command.

Once the cluster is bootstrapped, you can find its kubeconfig at `./outputs/provisioned-cluster.kubeconfig.yaml`.
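
One way to point `kubectl` at the provisioned cluster is exporting that kubeconfig (a sketch; passing `--kubeconfig` to each command works just as well) :

```sh
export KUBECONFIG=./outputs/provisioned-cluster.kubeconfig.yaml
```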

Run :

```sh
kubectl get pods --all-namespaces
```

and you'll see Cilium, AWS CCM (Cloud Controller Manager), CertManager, Sealed Secrets, ArgoCD, KubePrometheus etc. pods running :).

> If you wish to access the K3D management cluster for some reason, use `./outputs/management-cluster.host.kubeconfig.yaml` to access it from the host, or `./outputs/management-cluster.container.kubeconfig.yaml` to access it from inside the container.
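
For example, to do a quick check against the K3D management cluster from the host (a sketch) :

```sh
kubectl --kubeconfig ./outputs/management-cluster.host.kubeconfig.yaml get nodes
```
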
## Accessing ArgoCD dashboard

Get the password for accessing the ArgoCD admin dashboard, by running this command :
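
A sketch using the standard ArgoCD resources (assuming the default installation in the `argocd` namespace) :

```sh
# Decode the auto-generated initial admin password.
kubectl -n argocd get secret argocd-initial-admin-secret \
-o jsonpath='{.data.password}' | base64 -d; echo

# Then, port-forward the ArgoCD server to localhost.
kubectl -n argocd port-forward svc/argocd-server 8080:443
```
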
and visit [https://localhost:8080](https://localhost:8080) to access the ArgoCD admin dashboard.

Doing a Kubernetes version upgrade for a cluster manually is a hassle! You can read about it in [Kubernetes' official docs](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/). ClusterAPI automates this whole process for you.

First, make sure you've exec'd into the KubeAid Bootstrap Script container and exported your AWS credentials as environment variables.

In a real-life scenario, you may have deleted the temporary management cluster / dev environment right after the cluster got bootstrapped. You can bring back the dev environment by running :

```sh
kubeaid-bootstrap-script devenv create aws \
--config /outputs/kubeaid-bootstrap-script.config.yaml
```

Now, to upgrade the cluster, run :

```sh
kubeaid-bootstrap-script cluster upgrade aws \
--config /outputs/kubeaid-bootstrap-script.config.yaml \
--k8s-version v1.32.0 \
--ami-id ami-xxxxxxxxxx
```

It'll update your KubeAid config repository (specifically, the corresponding **capi-cluster.values.yaml** file) and trigger Kubernetes version upgrades for the Control Plane and each node-group.
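
If you want to watch the upgrade roll out node by node, you can list the ClusterAPI Machine resources (a sketch, assuming the ClusterAPI resources are currently reachable via the dev environment's management cluster kubeconfig) :

```sh
kubectl --kubeconfig ./outputs/management-cluster.host.kubeconfig.yaml \
get machines --all-namespaces --watch
```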

Now that we have bootstrapped a Kubernetes cluster and effortlessly upgraded it, let's move on to [Part 2](), where we'll demonstrate how you can **easily install open-source apps** (like `Keycloak.X`) in your cluster, using KubeAid.
