Renaming and adding server groups
Signed-off-by: Andre Machowiak <[email protected]>
nerdicbynature committed Feb 9, 2024
1 parent 7f4fc13 commit 32c40c3
Showing 17 changed files with 52 additions and 30 deletions.
Local SSD Storage is ideal for volatile or temporary workloads such as caches.
Local SSD Storage shares the same lifecycle as the VM instance. If the VM is deleted or crashes, the data on Local SSD Storage is lost as well. What's more, your VMs cannot be resized or live-migrated to another hypervisor in case of a hypervisor maintenance. In the event of a hardware failure your Local SSD data could be completely lost. Even if there is no disk failure, there will be regular disk downtime.
{{% /alert %}}


See [reference](../../../reference/local-storage/) to learn how to use Local SSD Storage.

## Object Storage
---
title: "Instances and Images"
type: "docs"
weight: 50
date: 2023-02-24
When you want to work with Images and Instances (aka virtual machines) in the Horizon GUI, you will find the relevant menus described below.
The overview shows your current consumption of cloud resources and the current limits.

## Instances
The instances menu shows details about all your virtual machines and their current state. From here you can manage your virtual machines and create new ones. Managing your instances covers various aspects, which are described below.
### Instance Actions Menu
The "Action" menu shows the available options; those which might render your instance unavailable or have an impact on its security are shown in red:
<img src="image2020-10-19_10-51-36.png" alt="screenshot of the instances action menu" width="60%" height="60%" title="Instances Action Menu">
Here you can manage metadata for your instance. There is a set of example metadata to choose from.
#### Edit Security Groups
Here you can manage the security groups for your instance. If your instance has more than one network interface changes will be applied to all of them. If you want to manage different security groups for different network interfaces please use "Edit Port Security Groups" from the menu.
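The same can be done non-interactively with the OpenStack CLI. The following is a sketch; `my-instance` and `web-servers` are placeholder names, and a configured `openstack` client is assumed:

```shell
# Attach an existing security group to all ports of the instance
openstack server add security group my-instance web-servers

# Detach it again
openstack server remove security group my-instance web-servers
```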
#### Edit Port Security Groups
Here, Security Groups can be configured for each network interface (port) separately. Furthermore, you can edit port characteristics like "Enable Admin State" to forward packets over that port, "Binding VNIC Type" for the port (for virtual machines you would choose "Normal" most of the time) and "Port Security" to activate security features like anti-spoofing and to allow the use of security groups for that port.
#### Console
Opens a virtual console to the login prompt of your instance.
#### View Log
Allows you to review the console log messages of your instance.
#### Rescue Instance
Instances will be switched off immediately. This might lead to filesystem checks when the instance is started again.
#### Rebuild Instance
Rebuilding allows you to re-create an instance while changing some of its characteristics (like using a different image). The UUID, volumes and ports of the instance will stay the same. Note that rebuilding does not work on instances with Ceph volumes.
#### Delete instance
The instance will be deleted. All used resources will be returned to the pool.
type: "docs"
weight: 50
date: 2023-02-24
description: >
Creating Instances with the Horizon GUI
---
## Launch Instance
Using the button "Launch Instance" you can create one or more new instances and start them. A guided dialogue helps you to go through all required steps. As soon as you have entered enough information for launching an instance, the button "Create Instance" becomes available and you can start your new instance(s). Asterisks (*) mark required information.

Keep in mind that shell access to the new instance is only possible via ssh key authentication. Thus you either need to create an ssh keypair during instance creation or upload your keypair beforehand.
Clicking on "**Launch Instance**" opens a dialogue which guides you through the steps that have to be completed to launch an instance:

![screenshot of the launch instance menu](./2023-03-30_10-39.png)
As usual, asterisks (*) mark required information, and as soon as enough information has been entered, you can launch your instance.

You need to give your new instance a name in the "**Instance Name**" field. The description is optional. There is only one "**Availability Zone**" to choose from. You can use the "**Count**" field to spawn several instances of the same type at the same time.

Next, you should define the "**Source**" of your instance. Basically, you choose what image your instance should be based on.

<img src="2023-03-30_11-09.png" alt="screenshot of the source menu" width="50%" height="50%" title="Source Menu">

First you choose whether your new instance should be booted from an image (you see a list of the items available to you under "**Available**"), from an instance snapshot, from a volume or from a volume snapshot. If you choose an existing volume, you can only boot one instance from it. If you choose an image or a snapshot, you can boot more than one instance from it. You select the item you want by clicking on the little "up" arrow on the right.

Next you define the "**Volume Size**" of the root volume of your new instance. If you set no value here (or one which is too small), the size will automatically be adjusted to the size of the image you chose.

The options on the right side ("**Create New Volume**" and "**Delete Volume on Instance Delete**") determine the lifecycle of the root volume of your instance. If you want the root volume to be deleted together with the instance, you can choose not to create a new volume (the option to delete the volume on instance delete will be deactivated). If you have chosen to create a volume, you can have it deleted on instance deletion. If you don't choose this option, the root volume will "survive" the deletion of the instance (and continue to consume storage and be billed).

Now - by clicking on "Next" - you have to choose the "**Flavor**" of your new instance. "Flavors" determine the "dimensions" of your new instance regarding the number of virtual CPUs, the amount of virtual memory and the size of the root disk.

<img src="2023-03-31_09-52.png" alt="screenshot of the flavor menu" width="50%" height="50%" title="Flavor Menu">

The "**Key Pair**" menu allows you to generate a new ssh public/private key pair or to import an existing one.

<img src="2023-03-31_13-30.png" alt="screenshot of the key pair menu" width="50%" height="50%" title="Key Pair Menu">

If you create a key pair, you are presented with the _private_ key, which you should save to your local workstation and protect from access by third parties. The public half of the key pair is saved in your OpenStack project. If you choose to import a "key pair", you actually only import the _public_ part of your key pair. The private key remains in your possession.

You can also quickly create a new public/private key pair on the command line with ``ssh-keygen -t rsa -f cloud.key`` and then import the public key ``cloud.key.pub`` into your OpenStack project.
If you are using Windows you would use PuTTYgen to do the same - just be sure to choose ``openssh`` as the key format.
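A non-interactive variant of the `ssh-keygen` command above can be useful for scripting. This is a sketch; the file name `cloud.key` and the empty passphrase (`-N ""`) are for illustration only, and you should use a passphrase for real keys:

```shell
# Generate a 4096-bit RSA key pair with no passphrase (-N "")
# into the files cloud.key (private) and cloud.key.pub (public)
ssh-keygen -t rsa -b 4096 -N "" -f cloud.key

# Show the public half, which is the part you import into OpenStack
cat cloud.key.pub
```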
As many cloud images use [cloud-init](https://cloudinit.readthedocs.io/en/latest/) for customization nowadays, this option is used less often than it used to be.
Another option here is "**Disk Partition**", which can be set to "automatic" or "manual". "Automatic" basically creates one partition per volume. With "manual" you can create several partitions per volume.

With ["**Server Groups**"](../server-groups/) you can assign your new instance to an existing server group, so that your new instance is created either next to other instances in that group or explicitly not next to them (affinity vs. anti-affinity).

If you want to add some "**Scheduler Hints**" in order to affect the placement of your new instance, you can either choose from the existing metadata catalog or create your own keys in the first line on the left side.

---
title: "Server Groups"
type: "docs"
weight: 60
date: 2024-02-09
description: >
Using Server Groups to apply (Anti-)Affinity
---

## Overview

Server Groups allow you to specify a set of VMs that must run on the same hypervisor (affinity) or on different hypervisors (anti-affinity). In general, anti-affinity is good for fault tolerance and load balancing, while affinity is useful if you want to minimise network effects between your VMs.

When using [Local SSD Storage](../../local-storage/), it is highly recommended that you use Server Groups to achieve fault tolerance against hypervisor failures.

With "**Server Groups**" you can assign your new instance to an existing server group, so that your new instance is created either next to other instances in that group or explicitly not next to them (affinity vs. anti-affinity).

<img src="2023-03-31_13-54.png" alt="screenshot of the server group menu" width="50%" height="50%" title="Server Group Menu">
<br/><br/>

Server groups can have affinity, anti-affinity, soft-affinity and soft-anti-affinity policies. The affinity policy will fail (and not create the instance) when the new instance cannot be placed next to an existing instance of the server group. The soft-affinity policy will create the instance anyway in that case, placing it away from the group members when co-location is not possible.
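The same can be done with the OpenStack CLI. The following is a sketch; the group name, flavor, image and key names are placeholders, and a configured `openstack` client is assumed:

```shell
# Create a server group with a soft-anti-affinity policy
openstack server group create --policy soft-anti-affinity my-group

# Launch an instance as a member of that group; the scheduler will
# try to place it on a different hypervisor than the other members
openstack server create --flavor SCS-2V-4 --image "Ubuntu 22.04" \
  --key-name my-key --hint group=<server-group-uuid> my-instance
```

Valid policies are `affinity`, `anti-affinity`, `soft-affinity` and `soft-anti-affinity`; `openstack server group show` displays a group's UUID and members.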
There are two cases where VMs running on Local SSD Storage will experience downtime:

#### Periodic reboots

Any Local SSD Storage hypervisor will need to be **rebooted periodically**. Typically this will be **once a month**. You should therefore expect your VMs to be down on a regular basis.

The average downtime is **approximately half an hour**, but can vary. All VMs will receive an ACPI shutdown signal prior to maintenance. VMs are given **one minute to shut down** properly.

After this time, they will simply shut down.

You should expect your VMs to **remain powered off** after the hypervisor reboots. We are currently planning a feature that will allow you to configure the VM to automatically restart if necessary.

There will be a **30 minute pause** between hypervisor reboots. This will give your software stack time to reconfigure.

However, all VMs on the same hypervisor will be affected. You will need to enable **anti-affinity** [Server Groups](../instances-and-images/server-groups/).

#### Hardware Failure

In the event of a complete hardware failure or reconfiguration, you must **expect data loss**.

In these cases, the boot disks will be lost. This means that when the hypervisor comes back up, there will be corrupted VMs.

You will be expected to **wipe these VMs yourself**. This is because we believe it is better to keep broken VM definitions so that you can more reliably restore these instances from a backup or snapshot. You will have to pay for broken VMs.

Speaking of backups: You should take regular snapshots to be able to restore a failed VM in the event of a hardware failure of the underlying hypervisor.
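Snapshots can also be taken with the OpenStack CLI. This is a sketch; `my-vm`, the snapshot name, the flavor and the key name are placeholders:

```shell
# Create an image (snapshot) from a running server
openstack server image create --name my-vm-backup my-vm

# Later, restore by launching a new instance from that snapshot
openstack server create --flavor SCS-2V-4s --image my-vm-backup \
  --key-name my-key my-vm-restored
```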

#### Use Server Groups and Anti-Affinity to Achieve Fault Tolerance

If you are using Local SSD Storage, **you are strongly encouraged to build in fault tolerance** against hypervisor failures.

One thing you can do is to use [Server Groups](../instances-and-images/server-groups/) to distribute your VMs across multiple hypervisors.

## Using Local SSD Storage

To use Local SSD Storage, simply create a VM with a specific Local SSD Storage Flavor. All Flavors that end with an "**s**" indicate Local SSD Storage. Configure the VM to boot without a volume. This is crucial if you want the VM to boot from a local disk instead of a remote volume.

After you have created the VM, it will boot with a local disk from the **/dev/sda1** block device. You can attach additional volumes to your VM. However, these volumes will come from the Ceph shared storage.
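Attaching an additional Ceph volume can be done with the OpenStack CLI. This is a sketch; `my-vm`, the volume name and its size are placeholders:

```shell
# Create a 50 GB volume on the Ceph shared storage
openstack volume create --size 50 my-data-volume

# Attach it to the locally booted VM; it will typically appear
# as the next free block device, e.g. /dev/sdb
openstack server add volume my-vm my-data-volume
```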

Examples for Local SSD Storage Flavors:

Do not create a boot volume! If you were to create a boot volume, your VM would boot from a remote volume instead of the local disk.

To create a VM to use Local SSD Storage, follow these steps:

Navigate to the Launch Instance dialogue box. In "**Details**", set "**Instance Name**".

<center>
<img src="screenshot-2024-02-09-13.59.00.png" alt="screenshot of instance details tab" width="75%" title="details tab">
<br/><br/>
</center>

In "**Source**", select your favourite cloud image. Leave the default to boot from image and not create a volume.

<center>
<img src="screenshot-2024-02-09-13.59.28.png" alt="screenshot of instance source tab" width="75%" title="source tab">
<br/><br/>
</center>

In "**Flavor**", select one of the Flavors ending in "**s**".
<center>
<img src="screenshot-2024-02-09-13.59.52.png" alt="screenshot of instance flavor tab" width="75%" title="flavor tab">
<br/><br/>

{{% alert title="Note" color="info" %}}
"**volumes_attached**" should be empty unless you're adding additional shared storage volumes.
{{% /alert %}}
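From the command line you can check this field directly. This is a sketch; `my-vm` is a placeholder and a configured `openstack` client is assumed:

```shell
# Print only the volumes_attached field; an empty result means the
# VM boots from its local disk and has no remote volumes attached
openstack server show -c volumes_attached -f value my-vm
```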
