From 38620782791eb960826cec2d0737dc2dac4fb88b Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Tue, 28 May 2024 16:50:11 +0200 Subject: [PATCH 01/12] Start working on cluster spec and addon info --- .../getting-started/cluster-spec.md | 92 +++++++++++++++++++ content/en/docs/kubernetes/overview.md | 35 +------ 2 files changed, 94 insertions(+), 33 deletions(-) create mode 100644 content/en/docs/kubernetes/getting-started/cluster-spec.md diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md new file mode 100644 index 0000000..7b75cbb --- /dev/null +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -0,0 +1,92 @@ +--- +title: "Cluster configuration" +description: "Cluster configuration and optional features" +weight: 1 +alwaysopen: true +--- + +There are a lot of options possible for your cluster. Most options have a sane default howver could be overriden on request. + +A default cluster comes with 3 controlplane and 3 woker nodes. To connect all nodes we create a network, default (10.128.0.0/22). We also deploy monitoring to ensure functionality of all cluster components. However most things are just a default and could be overriden. + + +## Common options + +### Nodes + +The standard configuration consist of the following: + +* Three control plane nodes, one in each of our availability zones. Flavor: + v1-c2-m8-d80 +* Three worker nodes, one in each of our availability zones. Flavor: + v1-c2-m8-d80 + +#### Minimal configuration + +* Three control plane nodes, one in each of our availability zones. Flavor: + v1-c2-m8-d80 +* One worker node, Flavor: + v1-c2-m8-d80 + + This is the minimal configuration offered. Scaling to larger flavors and adding nodes are supported. Autoscaling is not supported with a single worker node. + + > **Note:** SLA is different for minimal configuration type of cluster. SLA's can be found [here](https://elastx.se/en/kubernetes/sla). + +### Nodegroups and multiple flavors + +To try keep node management as easy as possible we make user of nodegroups. A nodegroup contains of one or multiple nodes with one flavor and a list of avalability zones to deploy nodes in. Clusters are default deliverd with a nodegroup called `workers` containing 3 nodes one in each az. Anodegroup are limited to one flavour meaning all nodes in the nodegroup will have the same amount of cpu, ram and disk. + +You could have multiple nodegroups, if you for example want to tarket workload on separate nodes or in case you wish to consume multiple flavours. + +A few eamxples of nodegroups: + +| Name | Flavour | AZ list | Min node count | Max node count (autoscaling) | +| -------- | ----------------- | ------------- | ------------- | ------------- | +|worker |v1-c2-m8-d80 |STO1, STO2, STO3 |3 |0 | +|database |d2-c8-m120-d1.6k |STO1, STO2, STO3 |3 |0 | +|frontend |v1-c4-m16-d160 |STO1, STO2, STO3 |3 |12 | +|jobs |v1-c4-m16-d160 |STO1 |1 |3 | + +In the examples we could se worker our default nodegroup and an example of having separate nodes for database and frontend where the database is running on dedicated nodes and the frontend is running on smaller nodes but can autosacle between 3 and 12 nodes based on current cluster request. We also have a jobs nodegroup where we have one node in sto1 but can scale up to 3 nodes where all are placed inside STO1. +You can read more about [autocalsing here](../autoscaling). + +Nodegroups can be chagned at any time. 
Please also not that we have auto-healing meaning in case any of your nodes for any reason stops working we will replace them. More about [autohealing could be found here](TBD)

### Network

By default we create a network (10.128.0.0/22). However we could use another subnet per cusotmer request. The most common scenario when customers request another subnet is when exposing multiple Kubernetes clsuters over a VPN.

Please make sure to inform us you wish to use a custom subnet during the ordering process since we cannot replace the network after creation meaning we need to recreate your entire cluster.

We currently only support cidr in the 10.0.0.0/8 subnet and at lest a /24. Both nodes and loadbalancers are using IPs for this range meaning you need to have a sizable network from the begning.

### Clsuter domain

We default all clusters to "cluster.local". This is simullar as most other providers out there. If you wish to have another cluster doamin please let us know during the ordering procedure since it cannot be replaces after cluster creation.

### Worker nodes Floating IPs

By default, our clusters come with nodes that do not have any Floating IPs attached to them. If, for any reason, you require Floating IPs on your workload nodes, please inform us, and we can configure your cluster accordingly. It's worth noting that the most common use case for Floating IPs is to ensure predictable source IPs. However, please note that enabling or disabling Floating IPs will necessitate the recreation of all your nodes, one by one, although it can be done at any time.

Since we create a new node prior to removing an old one during upgrades, you would need to have an additional Floating IP address on standby. If you wish us to preallocate a list or range of IP addresses, just mention this and we will configure your cluster accordingly.

Please note that only worker nodes consume Floating IP addresses; control plane nodes do not make use of Floating IPs.

## Less common options

### OIDC

If you wish to integrate with your existing OIDC-compatible IdP, for example Microsoft AD or Google Workspace, this is supported directly in the Kubernetes API server.

By default we ship clusters with this option disabled. However, if you wish to make use of OIDC, just let us know when ordering the cluster or afterwards. OIDC can be enabled, disabled or changed at any time.

### Cluster add-ons

We currently offer managed cert-manager, NGINX Ingress and elx-nodegroup-controller.

#### Cert-manager


#### Ingress

If you are interested in removing any limitations, we've assembled guides with everything you need to install the same IngressController and cert-manager as we provide. This will give you full control. The various resources gives configuration examples, and instructions for lifecycle management. These can be found in the sections [Getting Started](../getting-started/) and [Guides](../guides/). diff --git a/content/en/docs/kubernetes/overview.md b/content/en/docs/kubernetes/overview.md index 0f597f0..c4a48a4 100644 --- a/content/en/docs/kubernetes/overview.md +++ b/content/en/docs/kubernetes/overview.md @@ -34,50 +34,19 @@ and we integrate with the features it provides. * **Standards conformant**: Our clusters are certified by the [CNCF Conformance Program](https://www.cncf.io/certification/software-conformance/) ensuring interoperability with Cloud Native technologies and minimizing vendor lock-in. 
-## Flavor of nodes - -The standard configuration consist of the following: - -* Three control plane nodes, one in each of our availability zones. Flavor: - v1-c2-m8-d80 -* Three worker nodes, one in each of our availability zones. Flavor: - v1-c2-m8-d80 - -### Minimal configuration - -* Three control plane nodes, one in each of our availability zones. Flavor: - v1-c2-m8-d80 -* One worker node, Flavor: - v1-c2-m8-d80 - - This is the minimal configuration offered. Scaling to larger flavors and adding nodes are supported. Autoscaling is not supported with a single worker node. - -> **Note:** -SLA is different for minimal configuration type of cluster. SLA's can be found [here](https://elastx.se/en/kubernetes/sla). - ## Good to know ### Design your Cloud We expect customers to design their setup to not require access to Openstack Horizon. This is to future proof the product. This means, do not place other instances in the same Openstack project, nor utilize Swift (objectstore) in the same project. -We are happy to provide a separate Swiftproject, and a secondary Openstack project for all needs. We do not charge per each Openstack project! +We are happy to provide a separate Swiftproject, and a secondary Openstack project for all needs. ### Persistent volumes Cross availability zone mounting of volumes is not supported. Therefore, volumes can only be mounted by nodes in the same availability zone. -### Cluster subnet CIDR - -The default cluster node network CIDR is *10.128.0.0/22*. An alternate CIDR can -be specified on cluster creation. Changing CIDR after creation requires -rebuilding the cluster. - -### Worker nodes Floating IPs - -By default, our clusters come with nodes that do not have any Floating IPs attached to them. If, for any reason, you require Floating IPs on your workload nodes, please inform us, and we can configure your cluster accordingly. It's worth noting that the most common use case for Floating IPs is to ensure predictable source IPs. However, please note that enabling or disabling Floating IPs will necessitate the recreation of all your nodes, one by one. - ### Ordering and scaling Ordering and scaling of clusters is currently a manual process involving contact @@ -85,7 +54,7 @@ with either our sales department or our support. This is a known limitation, but Since Elastx Private Kubernetes 2.0 we offer auto scaling of workload nodes. This is based on resource requests, which means it relies on the administator to set realistic requests on the workload. Configuring auto-scaling options is currently a manual process involving contact with either our sales department or our support. -### Optional features and add-ons +### Cluster add-ons We offer a managed cert-manager and a managed NGINX Ingress Controller. From 2ca7fec8a706f0186322c8642b2f06bc6abebce5 Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 11:30:14 +0200 Subject: [PATCH 02/12] more addon info --- .../docs/kubernetes/getting-started/cluster-spec.md | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index 7b75cbb..6d3b855 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -9,7 +9,6 @@ There are a lot of options possible for your cluster. 
Most options have a sane d A default cluster comes with 3 controlplane and 3 woker nodes. To connect all nodes we create a network, default (10.128.0.0/22). We also deploy monitoring to ensure functionality of all cluster components. However most things are just a default and could be overriden. - ## Common options ### Nodes @@ -86,7 +85,16 @@ We currently offer managed cert-manager, NGINX Ingress and elx-nodegroup-control #### Cert-manager +Cert-manager ([link to cert-manager.io](https://cert-manager.io/)) helpsyou to manage TLS certificates. A common usecase if to use lets-exncrypt to "automaticly" generate certificates for web apps. However the functiuonality goes much deeper. We also have [usage instructions](../../guides/cert-manager/) and have a [guide](../../guides/install-certmanager/) if you wish to deploy cert-manager yourself. #### Ingress -If you are interested in removing any limitations, we've assembled guides with everything you need to install the same IngressController and cert-manager as we provide. This will give you full control. The various resources gives configuration examples, and instructions for lifecycle management. These can be found in the sections [Getting Started](../getting-started/) and [Guides](../guides/). +An ingress controller in a Kubernetes cluster manages how external traffic reaches your services. It routes requests based on rules, handles load balancing, and can integrate with cert-manager to manage TLS certificates. This simplifies traffic handling and improves scalability and security compared to exposing each service individually. We have a usage guide with examples that can be found [here.](../../guides/ingress/) + +We have chosen to use ingress-nginx and to support ingress, we limit what custom configurations can be made per cluster. We offer two "modes". One that we call direct mode, which is the default behavior. This mode is used when end-clients connect directly to your ingress. We also have a proxy mode for when a proxy (e.g., WAF) is used in front of your ingress. When running in proxy mode, we also have the ability to limit traffic from specific IP addresses, which we recommend doing for security reasons. If you are unsure which mode to use or how to handle IP whitelisting, just let us know and we will help you choose the best options for your use case. + +If you are interested in removing any limitations, we've assembled guides with everything you need to install the same IngressController as we provide. This will give you full control. The various resources give configuration examples and instructions for lifecycle management. These can be found [here.](../../guides/install-ingress/) + +#### elx-nodegroup-controller + +The nodegroup controller is usefull when customers wants to make use custom taints or labels on their nodes. It supports matching nodes based on nodegroup or by name. The controller can be found on [Github](https://github.com/elastx/elx-nodegroup-controller) if you wish to inspect the code or deploy it yourself. 
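As a concrete illustration of the nodegroup and nodegroup-controller features described above, the sketch below shows how a workload could be pinned to a dedicated `database` nodegroup. It assumes, purely for illustration, that the controller has been configured to label those nodes with `nodegroup.elastx.example/name: database` and taint them with `nodegroup.elastx.example/dedicated=database:NoSchedule`; the real label and taint keys depend entirely on how the nodegroup controller is set up for your cluster.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: databases
spec:
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      # Schedule only onto nodes carrying the (hypothetical) nodegroup label.
      nodeSelector:
        nodegroup.elastx.example/name: database
      # Tolerate the (hypothetical) taint that keeps other workloads off the
      # dedicated database nodes.
      tolerations:
        - key: nodegroup.elastx.example/dedicated
          operator: Equal
          value: database
          effect: NoSchedule
      containers:
        - name: postgres
          image: postgres:16
          resources:
            requests:
              cpu: "2"
              memory: 8Gi
```

If the nodegroup spans several availability zones, adding a `topologySpreadConstraints` entry on the `topology.kubernetes.io/zone` label keeps the replicas spread across zones.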
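To make the Ingress and cert-manager add-on descriptions above more concrete, here is a minimal sketch of an Ingress served by ingress-nginx with a TLS certificate obtained through cert-manager. The hostname, Service name and the issuer name `letsencrypt-prod` are placeholders; use whatever (Cluster)Issuer actually exists in your cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
  annotations:
    # Ask cert-manager to issue the certificate referenced under spec.tls.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```

The `cert-manager.io/cluster-issuer` annotation is what ties the two add-ons together: cert-manager watches Ingress resources, stores the issued certificate in the named Secret, and ingress-nginx serves it. When running in proxy mode behind a WAF, source-IP restrictions can additionally be expressed with the `nginx.ingress.kubernetes.io/whitelist-source-range` annotation, although the exact setup depends on how your cluster's ingress is configured.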
From 2f109351fb2a994c0c16052cbe2d472198a0a361 Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 11:30:59 +0200 Subject: [PATCH 03/12] improve text --- content/en/docs/kubernetes/getting-started/cluster-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index 6d3b855..9f0cf68 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -97,4 +97,4 @@ If you are interested in removing any limitations, we've assembled guides with e #### elx-nodegroup-controller -The nodegroup controller is usefull when customers wants to make use custom taints or labels on their nodes. It supports matching nodes based on nodegroup or by name. The controller can be found on [Github](https://github.com/elastx/elx-nodegroup-controller) if you wish to inspect the code or deploy it yourself. +The nodegroup controller is useful when customers want to use custom taints or labels on their nodes. It supports matching nodes based on nodegroup or by name. The controller can be found on [Github](https://github.com/elastx/elx-nodegroup-controller) if you wish to inspect the code or deploy it yourself. From 4cf97d92a7abd0971d6f52e49be61b852e328200 Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 11:37:37 +0200 Subject: [PATCH 04/12] link fixes --- content/en/docs/kubernetes/getting-started/cluster-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index 9f0cf68..2f6f772 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -47,7 +47,7 @@ A few eamxples of nodegroups: |jobs |v1-c4-m16-d160 |STO1 |1 |3 | In the examples we could se worker our default nodegroup and an example of having separate nodes for database and frontend where the database is running on dedicated nodes and the frontend is running on smaller nodes but can autosacle between 3 and 12 nodes based on current cluster request. We also have a jobs nodegroup where we have one node in sto1 but can scale up to 3 nodes where all are placed inside STO1. -You can read more about [autocalsing here](../autoscaling). +You can read more about [autocalsing here](../../guides/autoscaling/). Nodegroups can be chagned at any time. Please also not that we have auto-healing meaning in case any of your nodes for any reason stops working we will replace them. 
More about [autohealing could be found here](TBD) From f9c41352a8642d1be2577bb5a7c158ff4f1e1210 Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 13:01:39 +0200 Subject: [PATCH 05/12] Update content/en/docs/kubernetes/getting-started/cluster-spec.md Co-authored-by: zrk02 <85105797+zrk02@users.noreply.github.com> --- content/en/docs/kubernetes/getting-started/cluster-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index 2f6f772..881634d 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -85,7 +85,7 @@ We currently offer managed cert-manager, NGINX Ingress and elx-nodegroup-control #### Cert-manager -Cert-manager ([link to cert-manager.io](https://cert-manager.io/)) helpsyou to manage TLS certificates. A common usecase if to use lets-exncrypt to "automaticly" generate certificates for web apps. However the functiuonality goes much deeper. We also have [usage instructions](../../guides/cert-manager/) and have a [guide](../../guides/install-certmanager/) if you wish to deploy cert-manager yourself. +Cert-manager ([link to cert-manager.io](https://cert-manager.io/)) helps you to manage TLS certificates. A common use case is to use lets-encrypt to "automatically" generate certificates for web apps. However the functionality goes much deeper. We also have [usage instructions](../../guides/cert-manager/) and have a [guide](../../guides/install-certmanager/) if you wish to deploy cert-manager yourself. #### Ingress From 8545bb31d00d3ccc971ce06d8c84d5cdb03cc6db Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 13:01:43 +0200 Subject: [PATCH 06/12] Update content/en/docs/kubernetes/getting-started/cluster-spec.md Co-authored-by: zrk02 <85105797+zrk02@users.noreply.github.com> --- content/en/docs/kubernetes/getting-started/cluster-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index 881634d..08f8ed8 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -5,7 +5,7 @@ weight: 1 alwaysopen: true --- -There are a lot of options possible for your cluster. Most options have a sane default howver could be overriden on request. +There are a lot of options possible for your cluster. Most options have a sane default however could be overridden on request. A default cluster comes with 3 controlplane and 3 woker nodes. To connect all nodes we create a network, default (10.128.0.0/22). We also deploy monitoring to ensure functionality of all cluster components. However most things are just a default and could be overriden. 
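For readers who follow the install guide linked above and run cert-manager themselves rather than using the managed add-on, a ClusterIssuer for Let's Encrypt with an HTTP-01 solver is the usual starting point. A minimal sketch (the e-mail address is a placeholder, and older cert-manager releases use `class: nginx` instead of `ingressClassName`):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint; switch to the staging endpoint while testing.
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
```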
From d70eb2bf0d9dcfcf0e50101ef36de7f3404950cd Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 13:01:48 +0200 Subject: [PATCH 07/12] Update content/en/docs/kubernetes/getting-started/cluster-spec.md Co-authored-by: zrk02 <85105797+zrk02@users.noreply.github.com> --- content/en/docs/kubernetes/getting-started/cluster-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index 08f8ed8..bc3e1a6 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -7,7 +7,7 @@ alwaysopen: true There are a lot of options possible for your cluster. Most options have a sane default however could be overridden on request. -A default cluster comes with 3 controlplane and 3 woker nodes. To connect all nodes we create a network, default (10.128.0.0/22). We also deploy monitoring to ensure functionality of all cluster components. However most things are just a default and could be overriden. +A default cluster comes with 3 controlplane and 3 worker nodes. To connect all nodes we create a network, default (10.128.0.0/22). We also deploy monitoring to ensure functionality of all cluster components. However most things are just a default and could be overridden. ## Common options From 6955fed3700a0e29fba21b9151336fea885bf0b0 Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 13:01:52 +0200 Subject: [PATCH 08/12] Update content/en/docs/kubernetes/getting-started/cluster-spec.md Co-authored-by: zrk02 <85105797+zrk02@users.noreply.github.com> --- content/en/docs/kubernetes/getting-started/cluster-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index bc3e1a6..a201d3e 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -13,7 +13,7 @@ A default cluster comes with 3 controlplane and 3 worker nodes. To connect all n ### Nodes -The standard configuration consist of the following: +The standard configuration consists of the following: * Three control plane nodes, one in each of our availability zones. Flavor: v1-c2-m8-d80 From d3e4a2615b6cfb2c846ebebced9aa3262363d2a7 Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 13:01:57 +0200 Subject: [PATCH 09/12] Update content/en/docs/kubernetes/getting-started/cluster-spec.md Co-authored-by: zrk02 <85105797+zrk02@users.noreply.github.com> --- content/en/docs/kubernetes/getting-started/cluster-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index a201d3e..1d02d01 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -33,7 +33,7 @@ The standard configuration consists of the following: ### Nodegroups and multiple flavors -To try keep node management as easy as possible we make user of nodegroups. A nodegroup contains of one or multiple nodes with one flavor and a list of avalability zones to deploy nodes in. 
Clusters are default deliverd with a nodegroup called `workers` containing 3 nodes one in each az. Anodegroup are limited to one flavour meaning all nodes in the nodegroup will have the same amount of cpu, ram and disk.
+To try keep node management as easy as possible we make use of nodegroups. A nodegroup consists of one or multiple nodes with one flavor and a list of availability zones to deploy nodes in. Clusters are delivered by default with a nodegroup called `workers` containing 3 nodes, one in each AZ. A nodegroup is limited to one flavor, meaning all nodes in the nodegroup will have the same amount of CPU, RAM and disk.

You could have multiple nodegroups, if you for example want to tarket workload on separate nodes or in case you wish to consume multiple flavours.

From ad95279c31c86ed40041909cb5c21339b3928507 Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 13:02:10 +0200 Subject: [PATCH 10/12] Update content/en/docs/kubernetes/getting-started/cluster-spec.md Co-authored-by: zrk02 <85105797+zrk02@users.noreply.github.com> --- content/en/docs/kubernetes/getting-started/cluster-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index 1d02d01..2bb51fb 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -35,7 +35,7 @@ To try keep node management as easy as possible we make use of nodegroups. A nod
-You could have multiple nodegroups, if you for example want to tarket workload on separate nodes or in case you wish to consume multiple flavours.
+You could have multiple nodegroups, if you for example want to target workload on separate nodes or in case you wish to consume multiple flavors.
From a063df84ff0cab6ff33aa552b11b7f14d52b0ab0 Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 13:02:49 +0200 Subject: [PATCH 11/12] Apply suggestions from code review Co-authored-by: zrk02 <85105797+zrk02@users.noreply.github.com> --- .../kubernetes/getting-started/cluster-spec.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index 2bb51fb..6f235a7 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -37,7 +37,7 @@ To try keep node management as easy as possible we make use of nodegroups. A nod You could have multiple nodegroups, if you for example want to target workload on separate nodes or in case you wish to consume multiple flavors. 
-A few eamxples of nodegroups:
+A few examples of nodegroups:

| Name | Flavour | AZ list | Min node count | Max node count (autoscaling) |
| -------- | ----------------- | ------------- | ------------- | ------------- |
|worker |v1-c2-m8-d80 |STO1, STO2, STO3 |3 |0 |
@@ -46,22 +46,22 @@ A few examples of nodegroups:
|frontend |v1-c4-m16-d160 |STO1, STO2, STO3 |3 |12 |
|jobs |v1-c4-m16-d160 |STO1 |1 |3 |

-In the examples we could se worker our default nodegroup and an example of having separate nodes for database and frontend where the database is running on dedicated nodes and the frontend is running on smaller nodes but can autosacle between 3 and 12 nodes based on current cluster request. We also have a jobs nodegroup where we have one node in sto1 but can scale up to 3 nodes where all are placed inside STO1.
-You can read more about [autocalsing here](../../guides/autoscaling/).
+In the examples we can see `worker`, our default nodegroup, as well as separate nodegroups for databases and frontend, where the databases run on dedicated nodes and the frontend runs on smaller nodes that can autoscale between 3 and 12 nodes based on the cluster's current resource requests. We also have a jobs nodegroup that starts with one node but can scale up to 3 nodes, all placed inside STO1.
+You can read more about [autoscaling here](../../guides/autoscaling/).

-Nodegroups can be chagned at any time. Please also not that we have auto-healing meaning in case any of your nodes for any reason stops working we will replace them. More about [autohealing could be found here](TBD)
+Nodegroups can be changed at any time. Please also note that we have auto-healing meaning in case any of your nodes for any reason stops working we will replace them. More about [autohealing could be found here](TBD)

### Network

-By default we create a network (10.128.0.0/22). However we could use another subnet per cusotmer request. The most common scenario when customers request another subnet is when exposing multiple Kubernetes clsuters over a VPN.
+By default we create a cluster network (10.128.0.0/22). However, we can use another subnet per customer request. The most common scenario for requesting another subnet is when exposing multiple Kubernetes clusters over a VPN.

-Please make sure to inform us you wish to use a custom subnet during the ordering process since we cannot replace the network after creation meaning we need to recreate your entire cluster.
+Please make sure to inform us if you wish to use a custom subnet during the ordering process since we cannot replace the network after creation, meaning we would then need to recreate your entire cluster.

-We currently only support cidr in the 10.0.0.0/8 subnet and at lest a /24. Both nodes and loadbalancers are using IPs for this range meaning you need to have a sizable network from the begning.
+We currently only support CIDRs within the 10.0.0.0/8 range, and at least a /24. Both nodes and load balancers use IPs from this range, meaning you need to have a sizable network from the beginning.

-### Clsuter domain
+### Cluster domain

-We default all clusters to "cluster.local". This is simullar as most other providers out there. If you wish to have another cluster doamin please let us know during the ordering procedure since it cannot be replaces after cluster creation.
+We default all clusters to "cluster.local". This is similar to most other providers. 
If you wish to have another cluster domain, please let us know during the ordering procedure since it cannot be replaced after cluster creation.

### Worker nodes Floating IPs

From 731eb1723ec329c3b59f245fee2d3358219d297c Mon Sep 17 00:00:00 2001 From: Hugo Blom <6117705+huxcrux@users.noreply.github.com> Date: Wed, 12 Jun 2024 16:28:13 +0200 Subject: [PATCH 12/12] fix autohealing link --- content/en/docs/kubernetes/getting-started/cluster-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/kubernetes/getting-started/cluster-spec.md b/content/en/docs/kubernetes/getting-started/cluster-spec.md index 6f235a7..a4294c3 100644 --- a/content/en/docs/kubernetes/getting-started/cluster-spec.md +++ b/content/en/docs/kubernetes/getting-started/cluster-spec.md @@ -49,7 +49,7 @@ A few examples of nodegroups: In the examples we can see `worker`, our default nodegroup, as well as separate nodegroups for databases and frontend, where the databases run on dedicated nodes and the frontend runs on smaller nodes that can autoscale between 3 and 12 nodes based on the cluster's current resource requests. We also have a jobs nodegroup that starts with one node but can scale up to 3 nodes, all placed inside STO1. You can read more about [autoscaling here](../../guides/autoscaling/).
-Nodegroups can be changed at any time. Please also note that we have auto-healing meaning in case any of your nodes for any reason stops working we will replace them. More about [autohealing could be found here](TBD)
+Nodegroups can be changed at any time. Please also note that we have auto-healing, meaning that if any of your nodes stops working for any reason, we will replace it. More about [autohealing can be found here](../../guides/autohealing/)

### Network
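The autoscaling described above acts on resource requests rather than observed usage, so nodegroups with autoscaling enabled only work as intended when workloads declare realistic requests. A minimal sketch of a Deployment that gives the autoscaler something to reason about (the image reference is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 4
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.2.3  # placeholder image
          resources:
            # The autoscaler only sees these requests: pods that cannot be
            # scheduled trigger a scale-up, while nodes whose requested
            # capacity stays largely unused become scale-down candidates.
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              memory: 512Mi
```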
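Relating back to the OIDC option described earlier: once the API server is configured against your IdP, client access is typically wired up through a kubeconfig exec plugin. The sketch below uses the widely used `kubelogin` (`kubectl oidc-login`) plugin; the server address, CA path, issuer URL and client ID are all placeholders that must match your IdP and the API server configuration applied to your cluster.

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: elastx
    cluster:
      server: https://kubernetes.example.com:6443   # placeholder API endpoint
      certificate-authority: /path/to/cluster-ca.crt
contexts:
  - name: elastx-oidc
    context:
      cluster: elastx
      user: oidc
current-context: elastx-oidc
users:
  - name: oidc
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://idp.example.com/   # placeholder issuer
          - --oidc-client-id=kubernetes                  # placeholder client ID
          - --oidc-extra-scope=email
```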