How-To: Adopt worker nodes with the cloudscale Machine API Provider #362

Open · wants to merge 3 commits into master

@@ -29,7 +29,7 @@ export CLOUDSCALE_API_TOKEN=<cloudscale-api-token> # From https://control.clouds
export CLUSTER_ID=<lieutenant-cluster-id>
export TENANT_ID=<lieutenant-tenant-id>
export REGION=<region> # rma or lpg (without the zone number)
-export GITLAB_TOKEN=<gitlab-api-token> # From https://git.vshn.net/-/profile/personal_access_tokens
+export GITLAB_TOKEN=<gitlab-api-token> # From https://git.vshn.net/-/user_settings/personal_access_tokens
export GITLAB_USER=<gitlab-user-name>
----

docs/modules/ROOT/pages/how-tos/cloudscale/provider-adopt-worker-nodes.adoc (new file)
@@ -0,0 +1,218 @@
= Adopt worker nodes with the cloudscale Machine API Provider

[abstract]
--
Steps to adopt worker nodes on https://cloudscale.ch[cloudscale] with the https://github.com/appuio/machine-api-provider-cloudscale[cloudscale Machine API Provider].
--

== Starting situation

* You already have an OpenShift 4 cluster on cloudscale
* You have admin-level access to the cluster
* You want the nodes adopted by the https://github.com/appuio/machine-api-provider-cloudscale[cloudscale Machine API Provider]

== Prerequisites

The following CLI utilities need to be available locally:

* `commodore`, see https://syn.tools/commodore/running-commodore.html[Running Commodore]
* `docker`
* `kubectl`
* `vault`
* `yq`

== Prepare local environment

include::partial$cloudscale/setup-local-env.adoc[]

== Update Cluster Config

. Update cluster config
+
[source,bash]
----
pushd inventory/classes/"${TENANT_ID}"

yq -i '.applications += "machine-api-provider-cloudscale"' \
${CLUSTER_ID}.yml

yq eval -i ".parameters.openshift4_terraform.terraform_variables.make_worker_adoptable_by_provider = true" \
${CLUSTER_ID}.yml

git commit -m "Allow adoption of worker nodes" "${CLUSTER_ID}.yml"
popd
----

[Review comment · Member] A `git push` appears to be missing here.
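
Given that remark, a minimal sketch of the missing push, assuming the tenant repo's default branch is `master` (as used in the Cleanup section):

[source,bash]
----
# Hypothetical fix: push the commit before leaving the tenant repo,
# i.e. run this inside inventory/classes/${TENANT_ID} before the popd.
git push origin master
----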

. Compile and push the cluster catalog.
+
[source,bash]
----
commodore catalog compile "${CLUSTER_ID}" --push
----

== Prepare Terraform environment

include::partial$cloudscale/configure-terraform-secrets.adoc[]

include::partial$setup_terraform.adoc[]

== Run terraform

. Verify the Terraform plan output and apply the changes if everything looks good.
+
Terraform will tag the nodes in preparation for their adoption by the cloudscale Machine API Provider.
+
[source,bash]
----
terraform apply
----
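
To review exactly what will be applied, the plan can also be saved to a file and applied verbatim (a standard Terraform workflow, sketched here with an arbitrary plan file name):

[source,bash]
----
terraform plan -out=adopt.tfplan  # inspect the printed plan
terraform apply adopt.tfplan      # applies exactly the saved plan
----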

== Apply Machine and MachineSet manifests

[IMPORTANT]
====
Please ensure the `terraform apply` has completed successfully before proceeding with this step.
Without the tags applied by Terraform, nodes would be duplicated under the same name, which can lead to unpredictable behavior.

Please be careful not to apply the `MachineSet` manifests before the `Machine` manifests.
====
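
Once a manifest file has been written out in one of the steps below, a client-side dry-run is a cheap way to validate it before touching the cluster, for example:

[source,bash]
----
kubectl apply --dry-run=client -f worker-machines.yml
----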

. Copy `worker-machines_yml` from the Terraform output and apply it to the cluster.
+
[source,bash]
----
terraform output -raw worker-machines_yml | yq -P > worker-machines.yml
head worker-machines.yml
kubectl apply -f worker-machines.yml
----

[Review comment · Contributor] On my machine Terraform prints a warning which then gets included in the YAML (because they have never heard of stderr):

----
└─» terraform output -raw worker-machines_yml > foo.yml

└─» cat foo.yml
There are some problems with the CLI configuration:
╷
│ Warning: Unable to open CLI configuration file
│
│ The CLI configuration file at "/root/.terraformrc" does not exist.
╵

"apiVersion": "v1"
"items": []
"kind": "List"
----

[Review comment · Contributor] I ended up using this command:

----
terraform output -raw worker-machineset_yml | grep -vP '^(│|╵|╷|There are some problems with the CLI configuration)' | yq -P > worker-machineset.yml
----

. Check that all machines are in the `Running` state.
+
[source,bash]
----
kubectl get -f worker-machines.yml
----
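+
To block until adoption completes instead of re-running the check manually, `kubectl wait` can poll the machine phase (a sketch, assuming the provider reports `.status.phase` as `Running`):
+
[source,bash]
----
kubectl wait --for=jsonpath='{.status.phase}'=Running --timeout=10m -f worker-machines.yml
----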

. Copy `worker-machineset_yml` from the Terraform output and apply it to the cluster.
+
[source,bash]
----
terraform output -raw worker-machineset_yml | yq -P > worker-machineset.yml
head worker-machineset.yml
kubectl apply -f worker-machineset.yml
----

. Copy `infra-machines_yml` from the Terraform output and apply it to the cluster.
+
[source,bash]
----
terraform output -raw infra-machines_yml | yq -P > infra-machines.yml
head infra-machines.yml
kubectl apply -f infra-machines.yml
----

. Check that all machines are in the `Running` state.
+
[source,bash]
----
kubectl get -f infra-machines.yml
----

. Copy `infra-machineset_yml` from the Terraform output and apply it to the cluster.
+
[source,bash]
----
terraform output -raw infra-machineset_yml | yq -P > infra-machineset.yml
head infra-machineset.yml
kubectl apply -f infra-machineset.yml
----

. Copy `additional-worker-machines_yml` from the Terraform output and apply it to the cluster.
+
[source,bash]
----
terraform output -raw additional-worker-machines_yml | yq -P > additional-worker-machines.yml
head additional-worker-machines.yml
kubectl apply -f additional-worker-machines.yml
----

. Check that all machines are in the `Running` state.
+
[source,bash]
----
kubectl get -f additional-worker-machines.yml
----

. Copy `additional-worker-machinesets_yml` from the Terraform output and apply it to the cluster.
+
[source,bash]
----
terraform output -raw additional-worker-machinesets_yml | yq -P > additional-worker-machinesets.yml
head additional-worker-machinesets.yml
kubectl apply -f additional-worker-machinesets.yml
----

== Remove nodes from the Terraform state

. Remove the nodes from the Terraform state.
+
[source,bash]
----
terraform state rm module.cluster.module.worker
terraform state rm module.cluster.module.infra
terraform state rm module.cluster.module.additional_worker
cat > override.tf <<EOF
module "cluster" {
  infra_count              = 0
  worker_count             = 0
  additional_worker_groups = {}
}
EOF
----
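+
To double-check that no node resources remain in the state, one possible verification (a sketch based on the module addresses above):
+
[source,bash]
----
terraform state list | grep -E 'module\.cluster\.module\.(worker|infra|additional_worker)' \
  || echo "no worker, infra, or additional worker resources left in state"
----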

. Check the `terraform plan` output and apply the changes.
There should be no server recreation.
Plan changes that only touch hieradata entries can be ignored.
+
[source,bash]
----
terraform plan
terraform apply
----
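+
A rough way to assert that no servers will be recreated is to filter the human-readable plan for Terraform's replacement marker (a sketch; current Terraform versions phrase it as "must be replaced"):
+
[source,bash]
----
terraform plan -no-color | grep 'must be replaced' \
  && echo "WARNING: server recreation planned" \
  || echo "no resource replacement planned"
----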

== Cleanup

. Persist the Terraform changes and start managing the machine sets.
+
[source,bash]
----
popd
pushd "inventory/classes/${TENANT_ID}"

yq -i e '.parameters.openshift4_terraform.terraform_variables.additional_worker_groups = {}' \
"${CLUSTER_ID}.yml"
yq -i e '.parameters.openshift4_terraform.terraform_variables.infra_count = 0' \
"${CLUSTER_ID}.yml"
yq -i e '.parameters.openshift4_terraform.terraform_variables.worker_count = 0' \
"${CLUSTER_ID}.yml"

yq -i ea 'select(fileIndex == 0) as $cluster |
$cluster.parameters.openshift4_nodes.machineSets =
([select(fileIndex > 0)][] as $ms ireduce ({};
$ms.metadata.name as $msn |
del($ms.apiVersion) |
del($ms.kind) |
del($ms.metadata.name) |
del($ms.metadata.labels.name) |
del($ms.metadata.namespace) |
. * {$msn: $ms}
)) |
$cluster' \
"${CLUSTER_ID}.yml" ../../../catalog/manifests/openshift4-terraform/*machineset*.yml

git commit -am "Persist provider adopted machine and terraform state for ${CLUSTER_ID}"
git push origin master
popd

commodore catalog compile "${CLUSTER_ID}" --push
----
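
As a final sanity check, assuming the machine-api objects live in the standard `openshift-machine-api` namespace, list the adopted machines and machine sets:

[source,bash]
----
kubectl -n openshift-machine-api get machines,machinesets
----
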
docs/modules/ROOT/pages/how-tos/exoscale/decommission.adoc (2 changes: 1 addition & 1 deletion)
@@ -37,7 +37,7 @@ export EXOSCALE_REGION=<cluster-region>

export CLUSTER_ID=<cluster-name>

-# From https://git.vshn.net/-/profile/personal_access_tokens
+# From https://git.vshn.net/-/user_settings/personal_access_tokens
export GITLAB_TOKEN=<gitlab-api-token>
export GITLAB_USER=<gitlab-user-name>

docs/modules/ROOT/partials/nav.adoc (1 change: 1 addition & 0 deletions)
@@ -162,6 +162,7 @@
*** xref:oc4:ROOT:how-tos/cloudscale/update_compute_flavors.adoc[Update compute flavors]
*** xref:oc4:ROOT:how-tos/cloudscale/remove_node.adoc[]
*** xref:oc4:ROOT:how-tos/cloudscale/increase-worker-node-disk.adoc[]
+*** xref:oc4:ROOT:how-tos/cloudscale/provider-adopt-worker-nodes.adoc[]

** Exoscale
*** xref:oc4:ROOT:how-tos/exoscale/remove_node.adoc[]
docs/modules/ROOT/partials/setup_terraform.adoc (3 changes: 2 additions & 1 deletion)
@@ -13,8 +13,9 @@ tf_tag=$(\

# Generate the terraform alias
base_dir=$(pwd)
-alias terraform='docker run -it --rm \
+alias terraform='touch .terraformrc; docker run -it --rm \
  -e REAL_UID=$(id -u) \
+ -e TF_CLI_CONFIG_FILE=/tf/.terraformrc \
--env-file ${base_dir}/terraform.env \
-w /tf \
-v $(pwd):/tf \
docs/modules/ROOT/partials/vshn-input.adoc (2 changes: 1 addition & 1 deletion)
@@ -2,7 +2,7 @@ ifeval::["{needs_gitlab}" != "no"]
.Access to VSHN GitLab
[source,bash]
----
-# From https://git.vshn.net/-/profile/personal_access_tokens, "api" scope is sufficient
+# From https://git.vshn.net/-/user_settings/personal_access_tokens, "api" scope is sufficient
export GITLAB_TOKEN=<gitlab-api-token>
export GITLAB_USER=<gitlab-user-name>
----