diff --git a/content/en/docs/storage/_index.md b/content/en/docs/storage/_index.md index 979029f..acc1738 100644 --- a/content/en/docs/storage/_index.md +++ b/content/en/docs/storage/_index.md @@ -3,18 +3,17 @@ title: "Storage" weight: 6 labfoldernumber: "06" description: > - Configuring Kubernetes Storage for Virtual Machines + Configuring Kubernetes storage for virtual machines --- +In this section we focus on attaching storage volumes provided by Kubernetes storage drivers to our virtual machines. +We also have a look at hotplugging and resizing disks. -In this section we focus on attaching storage volumes provided by kubernetes storage drivers to our virtual machines. -We also have a look at hot plugging and resizing disks. - -## Lab Goals +## Lab goals * Mount storage as disks or filesystems * Use filesystem and block devices -* Mount ConfigMaps and secrets -* Hot plugging storage -* Resizing disks +* Mount ConfigMaps and Secrets +* Hotplug storage +* Resize disks diff --git a/content/en/docs/storage/hotplug-disks.md b/content/en/docs/storage/hotplug-disks.md index 294067f..69eeb3c 100644 --- a/content/en/docs/storage/hotplug-disks.md +++ b/content/en/docs/storage/hotplug-disks.md @@ -1,12 +1,12 @@ --- -title: "Hot plugging Disks" +title: "Hotplugging disks" weight: 63 labfoldernumber: "06" description: > - Hot plugging disks into a virtual machines. + Hotplugging disks into virtual machines --- -In this section we will hot plug a disk to a running virtual machine. +In this section we will hotplug a disk to a running virtual machine. 
## {{% task %}} Starting a virtual machine @@ -59,29 +59,34 @@ spec: ``` Create and start the virtual machine with: + ```bash kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/vm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros.yaml --namespace=$USER ``` Start the VM with: + ```bash virtctl start {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros --namespace=$USER ``` Open a console with: + ```bash virtctl console {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros --namespace=$USER ``` -## Inspect block devices on the virtual machine before hot plugging a disk +## Inspect block devices on the virtual machine before hotplugging a disk Depending on the system you use, you have different ways of listing devices. Here is a set of possibilities: Listing block devices with `lsblk`: + ```bash lsblk -a ``` + ``` NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 253:0 0 44M 0 disk @@ -93,18 +98,22 @@ loop0 7:0 0 0 loop ``` Listing devices in the `/dev` folder: + ```bash ls -d -- /dev/[sv]d[a-z] ``` + ``` /dev/vda /dev/vdb ``` -Some operating systems also list disk devices in `/dev/disk` with subfolders representing different views to your disks. For example -on a Fedora virtual machine: +Some operating systems also list disk devices in `/dev/disk` with subfolders representing different views to your disks. +For example, on a Fedora virtual machine: + ```bash ls -lR /dev/disk/* ``` + ``` /dev/disk/by-label: cidata -> ../../vdb @@ -128,15 +137,16 @@ d1b37ed4-3bbb-40b2-a6ba-f377f0c90217 -> ../../vda1 Devices starting with `/dev/vd{a,b,c}` are our devices using the virtio bus type. Devices with `/dev/s{a,b,c,...}` are SCSI devices. What we see inside the virtual machine reflects our configuration. -Actually we can see that we have two disks available.
+Actually, we can see that we have two disks available: -* `/dev/vda` - first disk and therefore `vda` is our container disk. -* `/dev/vdb` - second disk is the cloud-init disk. If your operating system provides the `/dev/disk` folder you may see that the cloud-init disk is labeled `cidata` (see sample output). This is required for cloud-init to detect the disk as a provider for cloud-init configuration[^1]. +* `/dev/vda` - The first disk and therefore our container disk. +* `/dev/vdb` - The second disk is the cloud-init disk. If your operating system provides the `/dev/disk` folder you may see that the cloud-init disk is labeled `cidata` (see sample output). This is required for cloud-init to detect the disk as a provider for cloud-init configuration[^1]. -## {{% task %}} Create a DataVolume to be hot plugged +## {{% task %}} Create a DataVolume to be hotplugged Create a file `dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-hotplug-disk.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: + ```yaml apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume @@ -154,27 +164,30 @@ spec: ``` Create the data volume with: + ```bash kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-hotplug-disk.yaml --namespace=$USER ``` -## {{% task %}} Hotplug volume to virtual machine +## {{% task %}} Hotplug a volume to a virtual machine Hotplug the disk by using `virtctl`: ```bash virtctl addvolume {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros --volume-name={{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-hotplug-disk --namespace=$USER ``` + ``` Successfully submitted add volume request to VM {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros for volume {{% param 
"labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-hotplug-disk ``` -After some time the device will be hot plugged to your virtual machine. You may get a first hint where your new disk is plugged in by executing `dmesg` in the vm console: +After some time, the device will be hotplugged to your virtual machine. You may get a first hint where your new disk is plugged in by executing `dmesg` in the VM's console: ```bash dmesg ``` + ``` [ 78.043285] scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 [ 78.056318] sd 0:0:0:0: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatical @@ -186,17 +199,19 @@ dmesg [ 78.113862] sd 0:0:0:0: [sda] Attached SCSI disk ``` -You can verify this with using the commands above. Check that your `/dev/sda` device is available. +You can verify this using the commands above. Check that your `/dev/sda` device is available. {{% alert title="Note" color="info" %}} -Disks are always hot plugged using the SCSI bus. This is due to the fact that virtio devices use a PCIe slot which are +Disks are always hotplugged using the SCSI bus. This is due to the fact that virtio devices use a PCIe slot which are limited to 32 slots on a system. Further PCIe slots need to be reserved ahead of time. {{% /alert %}} Check with list block devices: + ```bash lsblk -a ``` + ``` NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 128M 0 disk @@ -209,14 +224,16 @@ loop0 7:0 0 0 loop ``` Check listing the devices: + ```bash ls -d -- /dev/[sv]d[a-z] ``` + ``` /dev/sda /dev/vda /dev/vdb ``` -In our case we clearly see that our new hot plugged disk is `/dev/sda`. Let's mount it. +In our case we can clearly see that our new hotplugged disk is `/dev/sda`. Let's mount it. ### {{% task %}} Format and mount the disk @@ -224,13 +241,14 @@ In our case we clearly see that our new hot plugged disk is `/dev/sda`. Let's mo From the previous steps we know our new disk is `/dev/sda`. 
To use it we first have to format the new disk. {{% alert title="Important" color="warning" %}} -When hot plugging volumes and formatting devices always make sure you know where they are plugged in. +When hotplugging volumes and formatting devices always make sure you know where they are plugged in. {{% /alert %}} ```bash sudo mkfs.ext4 /dev/sda ``` + ``` mke2fs 1.42.12 (29-Aug-2014) Discarding device blocks: done @@ -246,16 +264,19 @@ Writing superblocks and filesystem accounting information: done ``` Next we have to create mount points for our new disk: + ```bash sudo mkdir /mnt/disk ``` And finally mount the disk: + ```bash sudo mount /dev/sda /mnt/disk/ ``` We can start to use the disk: + ```bash sudo touch /mnt/disk/myfile ``` @@ -263,10 +284,12 @@ sudo touch /mnt/disk/myfile ### {{% task %}} Removing a disk -You can remove a hot plugged disk with: +You can remove a hotplugged disk with: + ```bash virtctl removevolume {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros --volume-name={{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-hotplug-disk --namespace=$USER ``` + ``` Successfully submitted remove volume request to VM {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros for volume {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-hotplug-disk ``` @@ -274,12 +297,13 @@ Successfully submitted remove volume request to VM {{% param "labsubfolderprefix What happens if you remount the disk again? Is the created file `/mnt/disk/myfile` still present? {{% details title="Task Hint" %}} -After hot plugging and mounting the volume again the file is still present. There is no need to format the device again. +After hotplugging and mounting the volume again the file is still present. There is no need to format the device again. However, the mounting needs to be done again. You may mount disks with startup scripts like cloud-init. 
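The hint above mentions mounting via startup scripts; with cloud-init this can be done declaratively through the `mounts` module, so the mount is re-established on every boot. A minimal, hypothetical user-data sketch (assuming the hotplugged disk shows up as `/dev/sdb`, as in the sample output, and has already been formatted with ext4):

```yaml
#cloud-config
# Hypothetical sketch: mount the hotplugged disk on every boot.
# /dev/sdb is an assumption; verify the device name with lsblk first.
# "nofail" prevents the boot from hanging when the disk is not attached.
mounts:
  - [ /dev/sdb, /mnt/disk, ext4, "defaults,nofail", "0", "2" ]
```

Each list entry mirrors an `/etc/fstab` line: device, mount point, filesystem type, mount options, dump and fsck pass.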
-Beware that the device might have a different name: `sdb` +Beware that the device might have a different name such as `sdb`. Mount the disk again with: + ```bash sudo mkdir /mnt/disk sudo mount /dev/sdb /mnt/disk/ @@ -298,12 +322,13 @@ virtctl removevolume {{% param "labsubfolderprefix" %}}{{% param "labfoldernumbe ## Persistent mounting -With the above steps we have hot plugged a disk into the vm. This mount is not persistent. Whenever the VM is restarted -or shutdown the disk is not attached. When you want to mount the disk persistently you can use the `--persist` flag. +With the above steps we have hotplugged a disk into the VM. This mount is not persistent. Whenever the VM is restarted +or shut down, the disk won't be attached again. When you want to mount the disk persistently you can use the `--persist` flag. ```bash virtctl addvolume {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros --volume-name={{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-hotplug-disk --persist --namespace=$USER ``` + ``` Successfully submitted add volume request to VM {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros for volume {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-hotplug-disk ``` @@ -313,6 +338,7 @@ This will add the relevant sections to your `VirtualMachine` manifest. You can s ```bash kubectl get vm {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros -o yaml --namespace=$USER ``` + ```bash apiVersion: kubevirt.io/v1 kind: VirtualMachine @@ -351,4 +377,4 @@ spec: [...] 
``` -[^1]: [Cloud-Init labeled drive](https://cloudinit.readthedocs.io/en/latest/reference/datasources/nocloud.html#source-2-drive-with-labeled-filesystem) +[^1]: [Cloud-init labeled drive](https://cloudinit.readthedocs.io/en/latest/reference/datasources/nocloud.html#source-2-drive-with-labeled-filesystem) diff --git a/content/en/docs/storage/mounting-storage.md b/content/en/docs/storage/mounting-storage.md index 65505c4..8d333b0 100644 --- a/content/en/docs/storage/mounting-storage.md +++ b/content/en/docs/storage/mounting-storage.md @@ -1,21 +1,22 @@ --- -title: "Mounting Storage" +title: "Mounting storage" weight: 61 labfoldernumber: "06" description: > - Mounting storage as disks and filesystems. + Mounting storage as disks and filesystems --- -There are multiple ways of mounting a disk to a virtual machine. In this section we will mount various disks to our vm. +There are multiple ways of mounting a disk to a virtual machine. In this section we will mount various disks to our VM. -## Kubernetes Storage +## Kubernetes storage -There are multiple methods to provide storage for your virtual machine. For a storage to be attached you have to specify +There are multiple methods to provide storage for your virtual machines. For storage to be attached you have to specify a volume in the `spec.template.spec.volumes` block of your virtual machine. This volume must then be referenced as a device in `spec.template.spec.domain.devices.disks` or `spec.template.spec.domain.devices.filesystems`. -Sample configuration of a PersistentVolumeClaim mounted as a disk: +A sample configuration of a PersistentVolumeClaim mounted as a disk: + ```yaml apiVersion: kubevirt.io/v1 kind: VirtualMachine @@ -38,32 +39,33 @@ spec: The following types are the most important volume types which KubeVirt supports: -* cloudInitNoCloud: Attach a cloud-init data source. Requires a proper setup of cloud-init. -* cloudInitConfigDrive: Attach a cloud-init data source.
Similar to cloudInitNoCloud this requires a proper setup of cloud-init. The config-drive can be used for Ignition. -* persistentVolumeClaim: Provide persistent storage using a PersistentVolumeClaim. -* dataVolume: Simplifies the process for creating virtual machine disks. Without using DataVolumes the creation of a PersistentVolumeClaim is on behalf of the user. -* ephemeral: Local Copy-On-Write image where the original data is never mutated. KubeVirt stores writes in an ephemeral image on local storage. -* containerDisk: Disk images are pulled and backed by a local store on the node. -* emptyDisk: Attach an empty disk to the virtual machine. An empty disk survives a guest restart but not a virtual machine recreation. -* hostDisk: Allows to attach a disk residing somewhere on the local node. -* configMap: Allows to mount a ConfigMap as a disk or a filesystem. -* secret: Allows to mount a Secret as a disk or a filesystem. -* serviceAccount: Allows to mount a ServiceAccount as a disk or a filesystem. -* downwardMetrics: Expose a limited set of VM and host metrics in a `vhostmd` compatible format to the guest. +* `cloudInitNoCloud`: Attach a cloud-init data source. Requires a proper setup of cloud-init. +* `cloudInitConfigDrive`: Attach a cloud-init data source. Similar to `cloudInitNoCloud`, this requires a proper setup of cloud-init. The config drive can be used for Ignition. +* `persistentVolumeClaim`: Provide persistent storage using a PersistentVolumeClaim. +* `dataVolume`: Simplifies the process for creating virtual machine disks. Without DataVolumes, creating the PersistentVolumeClaim is left to the user. +* `ephemeral`: Local copy-on-write image where the original data is never mutated. KubeVirt stores writes in an ephemeral image on local storage. +* `containerDisk`: Disk images are pulled and backed by a local store on the node. +* `emptyDisk`: Attach an empty disk to the virtual machine.
An empty disk survives a guest restart but not a virtual machine recreation. +* `hostDisk`: Allows attaching a disk residing somewhere on the local node. +* `configMap`: Allows mounting a ConfigMap as a disk or a filesystem. +* `secret`: Allows mounting a Secret as a disk or a filesystem. +* `serviceAccount`: Allows mounting a ServiceAccount as a disk or a filesystem. +* `downwardMetrics`: Expose a limited set of VM and host metrics in a `vhostmd` compatible format to the guest. -You can find more Information about the configuration options of Volumes in the [Volume API Reference](https://kubevirt.io/api-reference/master/definitions.html#_v1_volume). You can find volume examples in the [KubeVirt Storage Documentation](https://kubevirt.io/user-guide/storage/disks_and_volumes/#volumes) +You can find more information about the configuration options of volumes in the [Volume API reference](https://kubevirt.io/api-reference/master/definitions.html#_v1_volume). You can find volume examples in the [KubeVirt storage documentation](https://kubevirt.io/user-guide/storage/disks_and_volumes/#volumes). -## KubeVirt Disks +## KubeVirt disks -Beside other options disk referenced in the `spec.templates.spec.domain.devices` section can be: +Among other options, a disk referenced in the `spec.template.spec.domain.devices` section can be: -* lun: The disk is attached as a LUN device allowing to execute iSCSI command passthrough. -* disk: Expose the volume as a regular disk. -* cdrom: Expose the volume as a cdrom drive (read-only by defaul). -* fileystems: Expose the volume as a filesystem to the VM using virtiofs.
+* `lun`: The disk is attached as a LUN device allowing iSCSI command passthrough +* `disk`: Expose the volume as a regular disk +* `cdrom`: Expose the volume as a CD-ROM drive (read-only by default) +* `filesystems`: Expose the volume as a filesystem to the VM using virtiofs Sample configuration: + ```yaml apiVersion: kubevirt.io/v1 kind: VirtualMachine @@ -101,11 +103,11 @@ spec: claimName: cdrom-pvc ``` -You can find more Information about the configuration options of Disks in the [Disk API Reference](https://kubevirt.io/api-reference/master/definitions.html#_v1_disk). You can find disk examples in the [KubeVirt Storage Documentation](https://kubevirt.io/user-guide/storage/disks_and_volumes/#disks) +You can find more information about the configuration options of disks in the [Disk API reference](https://kubevirt.io/api-reference/master/definitions.html#_v1_disk). You can find disk examples in the [KubeVirt storage documentation](https://kubevirt.io/user-guide/storage/disks_and_volumes/#disks). {{% alert title="Note" color="info" %}} -In contrast to `disks` a `filesystem` reflects changes in the source to the volume inside the VM. However, it is important -to know that a volume mounted as a filesystem does not allow Live Migration. +In contrast to `disks`, a `filesystem` reflects changes in the source to the volume inside the VM. However, it is important +to know that a volume mounted as a filesystem does not allow live migration. {{% /alert %}} @@ -115,11 +117,12 @@ It might be a bit confusing as the name filesystem is overloaded. In this sectio PersistentVolumeClaim and not the mounting inside our guest. Kubernetes supports the `volumeMode` type `block` and `filesystem` (default). Whether you can use `block` devices or not depends on your CSI driver supporting block volumes. -* Filesystem: A volume is mounted into Pods into a directory.
If the volume is backed by a block device and the device is empty, Kubernetes creates a filesystem on the device before mounting it for the first time. -* Block: Specifies that the volume is a raw block device. Such volume is presented into a Pod as a block device, without any filesystem on it. This mode is useful to provide a Pod the fastest possible way to access a volume, without any filesystem layer between the Pod and the volume. +* Filesystem: The volume is mounted into a directory inside the pod. If the volume is backed by a block device and the device is empty, Kubernetes creates a filesystem on the device before mounting it for the first time. +* Block: Specifies that the volume is a raw block device. Such a volume is presented to a pod as a block device without any filesystem on it. This mode is useful to provide a pod with the fastest possible way to access a volume, without any filesystem layer between the pod and the volume. -When creating a DataVolume we can specify whether the disk should be of type `Filesystem` or `Block` device. Requesting a +When creating a DataVolume we can specify whether the disk should be of type `Filesystem` or `Block`. Requesting a blank volume will look like this: + ```yaml apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume @@ -137,7 +140,8 @@ spec: storage: 128Mi ``` -This specification translate to the PersistentVolumeClaim as following: +This specification translates to the PersistentVolumeClaim as follows: + ```yaml apiVersion: v1 kind: PersistentVolumeClaim @@ -154,10 +158,10 @@ spec: [...] ``` -Clearly seeing that the volumeMode `Block` is requested. +We can clearly see that the volumeMode `Block` is requested. -## Mounting Storage +## Mounting storage Let us create some storage and mount it to a virtual machine using different options. @@ -169,7 +173,8 @@ tool like cloud-init. ### {{% task %}} Prepare and create disks and configmaps -First we create a disk using the default `Filesystem` volumeMode.
Create the file `dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-fs-disk.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: +First, we create a disk using the default `Filesystem` volume mode. Create the file `dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-fs-disk.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: + ```yaml apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume @@ -186,12 +191,15 @@ spec: requests: storage: 128Mi ``` -Create the filesystem backed disk in the kubernetes cluster: + +Create the filesystem-backed disk in the Kubernetes cluster: + ```bash kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-fs-disk.yaml --namespace=$USER ``` -Second we create a disk using the `Block` volumeMode. Create the file `dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-block-disk.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: +Then we create a disk using the `Block` volume mode. 
Create the file `dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-block-disk.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: + ```yaml apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume @@ -208,24 +216,30 @@ spec: requests: storage: 128Mi ``` -Create the block storage disk in the kubernetes cluster: + +Create the block storage disk in the Kubernetes cluster: + ```bash kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-block-disk.yaml --namespace=$USER ``` -Next we create a cloud-init configuration secret. Create a file `cloudinit-userdata.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: +Next, we create a cloud-init configuration secret. Create a file `cloudinit-userdata.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: + ```yaml #cloud-config password: kubevirt chpasswd: { expire: False } timezone: Europe/Zurich ``` -Create the secret in the kubernetes cluster: + +Create the secret in the Kubernetes cluster: + ```bash kubectl create secret generic {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cloudinit --from-file=userdata={{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/cloudinit-userdata.yaml --namespace=$USER ``` -Last we create a ConfigMap with some values for an application. 
Create a file `cm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-application.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: +Last but not least, we create a ConfigMap with some values for an application. Create a file `cm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-application.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: + ```yaml kind: ConfigMap apiVersion: v1 @@ -238,13 +252,14 @@ data: another.property=42 ``` -Now create the ConfigMap in the kubernetes cluster: +Create the ConfigMap in the Kubernetes cluster: + ```bash kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/cm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-application.yaml --namespace=$USER ``` -### {{% task %}} Create a VirtualMachine mounting the storage +### {{% task %}} Create a virtual machine mounting the storage We create a virtual machine mounting the following storage: @@ -257,7 +272,8 @@ We create a virtual machine mounting the following storage: * ServiceAccount to /serviceaccounts/default * ConfigMap to /configmaps/application -The VirtualManifest for this looks like the following snipped. Create a file `vm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-storage.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: +The VirtualMachine manifest for this looks like the following snippet. 
Create a file `vm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-storage.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: + ```yaml apiVersion: kubevirt.io/v1 kind: VirtualMachine @@ -347,25 +363,30 @@ spec: - ["/dev/vdd", "/disks/fs", "ext4", "defaults,nofail", "0", "2" ] ``` -Create the virtual machine in the kubernetes cluster: +Create the virtual machine: + ```bash kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/vm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-storage.yaml --namespace=$USER ``` Start the virtual machine with: + ```bash virtctl start {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-storage --namespace=$USER ``` Open a console to the virtual machine: + ```bash virtctl console {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-storage --namespace=$USER ``` After logging in (user: `fedora`, password: `kubevirt`) you can examine the virtual machine.
First you may check the block device output with `lsblk`: + ```bash lsblk ``` + ``` NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS zram0 251:0 0 1.9G 0 disk [SWAP] @@ -381,10 +402,12 @@ vdc 252:32 0 252M 0 disk /disks/block vdd 252:48 0 257M 0 disk /disks/fs ``` -Another variant to also include the other mounts like is for example listing device usage: +Another way to also see the other mounts is to list device usage: + ```bash df -h ``` + ``` Filesystem Size Used Avail Use% Mounted on /dev/vda4 4.0G 418M 3.1G 12% / @@ -404,28 +427,33 @@ tmpfs 197M 4.0K 197M 1% /run/user/1000 configmap-fs 226G 123G 94G 57% /configmaps/application ``` -Explore how the cloud init secret and the config map have been mounted in the VM by using the following commands: +Explore how the cloud-init secret and the configmap have been mounted in the VM by using the following commands: + ```bash ls -l /secrets/cloudinit/ cat /secrets/cloudinit/userdata ``` + ```bash ls -l /configmaps/application/ cat /configmaps/application/singlevalue cat /configmaps/application/application.properties ``` -Next try to edit the config map within kubernetes. You can open a new terminal in your webshell or leave the console -and head back later. Issue the following command to alter the ConfigMap: +Next, try to edit the configmap within Kubernetes. You can open a new terminal in your webshell or leave the console +and head back later. Issue the following command to alter the ConfigMap resource: + ```bash kubectl patch cm {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-application --type json --patch '[{ "op": "replace", "path": "/data/singlevalue", "value": "kubevirt-training" }]' --namespace=$USER ``` -After some time the change should be seamlessly be propagated to your vm. Head back to the console and check the value -of your mounted config map: +After some time, the change should be seamlessly propagated to your VM.
Head back to the console and check the value +of your mounted configmap: + ```bash cat /configmaps/application/singlevalue ``` + ``` kubevirt-training ``` @@ -433,8 +461,9 @@ kubevirt-training ### Analyze mounting behaviour -Another thing to note is how our virtual machine pods is set up. Leave the console and describe the `virt-launcher` pod -responsible for your vm: +Another thing to note is how our virtual machine's pod is set up. Leave the console and describe the `virt-launcher` pod -responsible for your vm: +responsible for your VM: + ```bash kubectl describe pod virt-launcher-{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-storage- --namespace=$USER ``` @@ -496,23 +525,23 @@ Volumes: Type: Projected (a volume that contains injected data from multiple sources) ``` -This is a highly shortened output of the pod description showing the relevant sections. Our virt-launcher pod has multiple +The above output is shortened quite significantly and shows the pod description's relevant sections. Our virt-launcher pod has multiple helper containers managing the mounts. Every storage is shown in the volumes block. The ServiceAccount is actually in the `kube-api-access` volume. The `compute` container is the container running our virtual machine where our storage should be available. -#### Disk Mounts: Block storage and Filesystem storage +#### Disk mounts: Block storage and filesystem storage -Our two PersistentVolumeClaims - one backed by Filesystem and one beeing a raw block device - are mounted differently. +Our two PersistentVolumeClaims - one backed by a filesystem and one being a raw block device - are mounted differently. -* **fsdisk**: Is in the mount block () of the `compute` container. -* **blockdisk**: Our block storage disk is not show in the mount block of the `compute` container. However, we see the `blockdisk` as `/dev/blockdisk` in the devices section in the `compute` container. This means that our block storage is indeed passed as a device.
+* **fsdisk**: Is in the mount block of the `compute` container. +* **blockdisk**: Does not show up in the mount block of the `compute` container. However, we see the `blockdisk` as `/dev/blockdisk` in the devices section in the `compute` container. This means that our block storage is indeed passed as a device. -#### Filesystem Mounts: Virtiofs volumes +#### Filesystem mounts: Virtiofs volumes -Our Secret, ConfigMap, ServiceAccount are mounted to the guest as `filesystem`. They are using the shared filesystem [Virtiofs](https://virtio-fs.gitlab.io/). +Our Secret, ConfigMap and ServiceAccount are mounted to the guest as `filesystem`. They are using the shared filesystem [Virtiofs](https://virtio-fs.gitlab.io/). For each mounted filesystem there is a supporting container `virtiofs-`. In our case the supporting containers are `virtiofs-serviceaccount-fs`, `virtiofs-cloudinit-fs`, `virtiofs-configmap-fs`. @@ -539,11 +568,11 @@ Containers: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmf6t (ro) ``` -* The container is running the `virtiofsd` file system daemon (See Command) -* The `virtiofsd` has access to the underlying data (in our case ServiceAccount) as it mounts the data (See Mounts in `virtiofs-serviceaccount-fs`). +* The container is running the `virtiofsd` file system daemon (see command) +* The `virtiofsd` has access to the underlying data (in our case ServiceAccount) as it mounts the data (see mounts in `virtiofs-serviceaccount-fs`) * The `virtiofsd` is creating/using the socket `/var/run/kubevirt/virtiofs-containers/serviceaccount-fs.sock` -* This socket is on the `virtiofs-containers` volume (See Args and Mount in `virtiofs-serviceaccount-fs`) -* This volume is also mounted by the `compute` container (see Mount section in `compute`). 
+* This socket is on the `virtiofs-containers` volume (see args and mount in `virtiofs-serviceaccount-fs`) +* This volume is also mounted by the `compute` container (see the mount section in `compute`) This establishes a communication channel using sockets. This channel is used by QEMU from the `compute` container to communicate with `virtiofsd` in `virtiofs-serviceaccount-fs`. @@ -554,6 +583,7 @@ communicate with `virtiofsd` in `virtiofs-serviceaccount-fs`. ## Stop your VM As we are at the end of this section it is time to stop your running virtual machine with: + ```bash virtctl stop {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-storage --namespace=$USER ``` diff --git a/content/en/docs/storage/resizing-disks.md b/content/en/docs/storage/resizing-disks.md index 7367b60..b644336 100644 --- a/content/en/docs/storage/resizing-disks.md +++ b/content/en/docs/storage/resizing-disks.md @@ -6,22 +6,23 @@ description: > Resizing virtual machine disks --- -In this section we will resize the root disk of our virtual machine. +In this section we will resize the root disk of a virtual machine. ## Requirements -Resizing depends on the Kubernetes Storage provider. The CSI driver must support resizing volumes as well as it must be configured to `AllowVolumeExpansion`. +Resizing disks depends on the Kubernetes storage provider. The CSI driver must support resizing volumes and must be configured with `AllowVolumeExpansion`. -Further it may depend on the operating system you use. Whenever the volume is resized your VM might see the change -in disk size immediately. But there might still be the need to resize the partition and filesystem. For example Fedora Cloud -uses has the package `cloud-utils-growpart` installed. This rewrites the partition table so that partition take up all -the space it is available. This makes it very handy choice for virtual machines resizing disk images. +Further, it may depend on the operating system you use. 
Whenever the volume is resized, your VM might see the change +in disk size immediately. But there might still be the need to resize the partition and filesystem. Fedora Cloud for instance +has the package `cloud-utils-growpart` installed. This rewrites the partition table so that the partition takes up all +the space available. This makes it a very handy choice for resizing disk images. ## {{% task %}} Create a volume and a virtual machine -In a first step have to create a fedora disk. Create the file `dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-expand-disk.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: +In a first step we are going to create a Fedora disk. Create the file `dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-expand-disk.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: + ```yaml apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume @@ -40,11 +41,13 @@ spec: ``` Create the data volume in the cluster: + ```bash kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-expand-disk.yaml --namespace=$USER ``` Create the file `vm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-expand.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` and use the following yaml specification: + ```yaml apiVersion: kubevirt.io/v1 kind: VirtualMachine @@ -93,11 +96,13 @@ spec: ``` Create the virtual machine with: + ```bash kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/vm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-expand.yaml --namespace=$USER 
```

Start the virtual machine with:
+
```bash
virtctl start {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-expand --namespace=$USER
```
@@ -105,16 +110,19 @@ virtctl start {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-

### Check the disk size

-Start the console of the virtual machine and login (user: `fedora`, password: `kubevirt`):
+Start the virtual machine's console and log in (user: `fedora`, password: `kubevirt`):
+
```bash
virtctl console {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-expand --namespace=$USER
```

Check your block devices with:
+
```bash
lsblk
```
-```s
+
+```
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
zram0 251:0 0 1.9G 0 disk [SWAP]
vda 252:0 0 5.7G 0 disk
You can see details about the process -in the events section when describing the PersistentVolumeClaim: +in the events section when describing the PersistentVolumeClaim resource: + ```bash kubectl describe pvc {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-expand-disk --namespace=$USER ``` + ``` Events: Type Reason Age From Message @@ -171,15 +185,18 @@ Events: ``` If you still have a console open in your virtual machine you see that there was a message about the capacity change: + ```bash [ 896.201742] virtio_blk virtio3: [vda] new size: 15853568 512-byte logical blocks (8.12 GB/7.56 GiB) [ 896.202409] vda: detected capacity change from 11890688 to 15853568 ``` Recheck your block devices: + ```bash lsblk ``` + ``` NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS zram0 251:0 0 1.9G 0 disk [SWAP] @@ -194,26 +211,29 @@ vdb 252:16 0 1M 0 disk ``` You will see that the capacity change is visible from within the virtual machine. But at this time our partitions still -have the same size and do not use all available diskspace. +have the same size and do not use all available disk space. Issue a reboot to let the system expand the partitions: + ```bash sudo reboot ``` -After the reboot you have to login again and check `lsblk` again: +After the reboot you have to log in and check `lsblk` again: + ```bash lsblk ``` -You can see that for example `vda4` has been resized from `5.7G` to `7.6G`. +You can see that, e.g., `vda4` has been resized from `5.7G` to `7.6G`. 
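The patch above only succeeds because the underlying StorageClass permits volume expansion, as noted in the requirements. For reference, a minimal sketch of such a StorageClass — the provisioner matches the Longhorn driver used in this training, but the exact manifest of your cluster's class may differ:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn              # hypothetical name, check `kubectl get storageclass`
provisioner: driver.longhorn.io
allowVolumeExpansion: true    # required so that PVC resize requests are honored
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

You can verify the setting on an existing class with `kubectl get storageclass longhorn -o jsonpath='{.allowVolumeExpansion}'`.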
## End of lab {{% alert title="Cleanup resources" color="warning" %}} {{% param "end-of-lab-text" %}} -Delete your VirtualMachines: +Delete your VirtualMachine resources: + ```bash kubectl delete vm {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-storage --namespace=$USER kubectl delete vm {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros --namespace=$USER @@ -222,6 +242,7 @@ kubectl delete vm {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" ``` Delete your disks: + ```bash kubectl delete dv {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-fs-disk --namespace=$USER kubectl delete dv {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-block-disk --namespace=$USER @@ -231,6 +252,7 @@ kubectl delete dv {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" ``` Delete your VirtualMachineSnapshots: + ```bash kubectl delete vmsnapshot {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot-snap --namespace=$USER ``` diff --git a/content/en/docs/storage/snapshot.md b/content/en/docs/storage/snapshot.md index e8e10f1..0365a3c 100644 --- a/content/en/docs/storage/snapshot.md +++ b/content/en/docs/storage/snapshot.md @@ -1,5 +1,5 @@ --- -title: "VM Snapshot and Restore" +title: "VM snapshot and restore" weight: 64 labfoldernumber: "06" description: > @@ -9,15 +9,17 @@ description: > KubeVirt provides a snapshot and restore functionality. This feature is only available if your storage driver supports `VolumeSnapshots` and a `VolumeSnapshotClass` is configured. You can list the available `VolumeSnapshotClass` with: + ```yaml kubectl get volumesnapshotclass --namespace=$USER ``` + ``` NAME DRIVER DELETIONPOLICY AGE longhorn-snapshot-vsc driver.longhorn.io Delete 21d ``` -You can snapshot virtual machines in a running state or in the stopped state. Using the QEMU guest agent the snapshot can +You can snapshot virtual machines in running or stopped state. 
Using the QEMU guest agent the snapshot can
temporarily freeze your VM to get a consistent backup.

@@ -50,6 +52,7 @@ spec:
```

Create the data volume with:
+
```bash
kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/dv_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-disk.yaml --namespace=$USER
```
@@ -105,11 +108,13 @@ spec:
```

Create the virtual machine with:
+
```bash
kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/vm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot.yaml --namespace=$USER
```

Start your virtual machine with:
+
```bash
virtctl start {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot --namespace=$USER
```
@@ -117,14 +122,16 @@ virtctl start {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-

### {{% task %}} Edit a file in your virtual machine

-Now we make a file change and validate if the change is persistent.
+We now make a file change and validate whether the change is persistent.

Enter the virtual machine with:
+
```yaml
virtctl console {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot --namespace=$USER
```

-Whenever you see the login prompt CirrOS shows the user and the default password.
+The CirrOS login prompt always shows the user and the default password:
+
```bash
____ ____ ____
/ __/ __ ____ ____ / __ \/ __/
@@ -136,26 +143,32 @@ Whenever you see the login prompt CirrOS shows the user and the default password
login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
```
-Let's get rid of this message and replace it with our own. Login with the credentials and change our `/etc/issue` file.
+Let's get rid of this message and replace it with our own.
Log in with the credentials and change our `/etc/issue` file: + ```bash sudo cp /etc/issue /etc/issue.orig echo "Greetings from the KubeVirt Training. This is a CirrOS virtual machine." | sudo tee /etc/issue ``` Check that the greeting is printed correctly by logging out: + ```bash exit ``` + ``` Greetings from the KubeVirt Training. This is a CirrOS virtual machine. {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot login: ``` -Login again and restart the virtual machine to verify the change was persistent. +Log in again and restart the virtual machine to verify the change was persistent: + ```bash sudo reboot ``` -After the restart completed you should see your new Greeting message. + +After the restart completed, you should see your new message: + ``` ____ ____ ____ / __/ __ ____ ____ / __ \/ __/ @@ -186,25 +199,32 @@ spec: name: {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot ``` -Start the snapshot process by creating the VirtualMachineSnapshot: +Start the snapshot process by creating the VirtualMachineSnapshot resource: + ```yaml kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/vmsnapshot_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot-snap.yaml ``` -Make sure you wait until the snapshot is ready. You can issue the following command to wait until the snapshot is ready: +Make sure you wait until the snapshot is ready. 
You can issue the following command to wait for that to happen:
+
```bash
kubectl wait vmsnapshot {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot-snap --for condition=Ready
```
+
It should complete with:
+
```
virtualmachinesnapshot.snapshot.kubevirt.io/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot-snap condition met
```

You can list your snapshots with:
+
```bash
kubectl get virtualmachinesnapshot --namespace=$USER
```
+
The output should be similar to:
+
```
NAME SOURCEKIND SOURCENAME PHASE READYTOUSE CREATIONTIME ERROR
{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot-snap VirtualMachine {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot Succeeded true 102s
@@ -212,6 +232,7 @@ NAME SOURCEKIND SOURCENAME PHASE READYTOUSE

You can describe the resource and have a look at the status of the `VirtualMachineSnapshot` and its subresource
`VirtualMachineSnapshotContent`.
+
```bash
kubectl describe virtualmachinesnapshot --namespace=$USER
kubectl describe virtualmachinesnapshotcontent --namespace=$USER
@@ -239,50 +260,58 @@ status:
[...]
```

-For example in the status of the VirtualMachineSnapshot description you may find information what volumes are in the snapshot.
+In the status of the VirtualMachineSnapshot description, you may find information about which volumes reside in the snapshot.

-* `status.indications`: Information how the snapshot was made.
-  * `Online`: The VM was running during snapshot creation.
-  * `GuestAgent` QEMU guest agent was running during snapshot creation.
-  * `NoGuestAgent` QEMU guest agent was not running during snapshot creation or the QEMU guest agent could not be used due to an error.
+* `status.indications`: Information on how the snapshot was made
+  * `Online`: Indicates that the VM was running during snapshot creation
+  * `GuestAgent`: Indicates that the QEMU guest agent was running during snapshot creation
+  * `NoGuestAgent`: Indicates that the QEMU guest agent was not running during snapshot creation or the QEMU guest agent could not be used due to an error
* `status.snapshotVolumes`: Information of which volumes are included

Snapshots also include your virtual machine metadata `spec.template.metadata` and the specification `spec.template.spec`.

-## {{% task %}} Changing our Greeting message again
+## {{% task %}} Changing our greeting message again

Enter the virtual machine with:
+
```bash
virtctl console {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot --namespace=$USER
```

-Change the Greeting message again:
+Change the greeting message again:
+
```bash
sudo cp /etc/issue /etc/issue.bak
echo "Hello" | sudo tee /etc/issue
```

-Now restart the virtual machine again and verify the change was persistent.
+Now restart the virtual machine and verify the change was persistent:
+
```bash
sudo reboot
```

-After the restart completed you should see your new `Hello` message.
-In addition to the changed file, containing the Greeting message, add a label `acend.ch/training: kubevirt` to our VirtualMachine metadata `{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot`. This will allow us to see what happens to the labels, once we restore the previous snapshot.
+After the restart completed, you should see your new `Hello` message.
+
+In addition to the changed file containing the greeting message, add a label `acend.ch/training: kubevirt` to the VirtualMachine's metadata `{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot`. This will allow us to see what happens to the labels once we restore the previous snapshot.
You can do this by patching your virtual machine with:
+
```bash
kubectl patch virtualmachine {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot --type='json' -p='[{"op": "add", "path": "/spec/template/metadata/labels/acend.ch~1training", "value":"kubevirt"}]' --namespace=$USER
```
+
```
virtualmachine.kubevirt.io/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot patched
```

Describe the virtual machine to check if the label is present:
+
```bash
kubectl describe virtualmachine {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot --namespace=$USER
```
+
```
API Version: kubevirt.io/v1
Kind: VirtualMachine
@@ -299,22 +328,23 @@ Spec:
```

-## {{% task %}} Restoring a Virtual Machine
+## {{% task %}} Restoring a virtual machine

-Before we now restore the Virtual Machine from the Snapshot, let us do a quick recap:
+Before we restore the virtual machine from the snapshot, let us do a quick recap:

-1. We provisioned a cirros VM with an attached persistent volume.
-1. We then changed the Greeting Message to `Greetings from the KubeVirt Training. This is a CirrOS virtual machine.`
-1. We created a snapshot from that Volume
-1. We change the Greeting Message to `Hello` and added a label `acend.ch/training: kubevirt`
+1. We provisioned a CirrOS VM with an attached persistent volume
+1. We then changed the greeting message to `Greetings from the KubeVirt Training. This is a CirrOS virtual machine.`
+1. We created a snapshot of that volume
+1. We changed the greeting message to `Hello` and added a label `acend.ch/training: kubevirt`

-Now ywe want restore the snapshot from step 3. Make sure your virtual machine is stopped.
+Now we want to restore the snapshot from step 3.
Make sure your virtual machine is stopped: ```bash virtctl stop {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot --namespace=$USER ``` Create the file `vmsnapshot_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot-restore.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: + ```yaml apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineRestore @@ -328,16 +358,20 @@ spec: virtualMachineSnapshotName: {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot-snap ``` -Start the restore process by creating the VirtualMachineRestore: +Start the restore process by creating a VirtualMachineRestore resource: + ```bash kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/vmsnapshot_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot-restore.yaml --namespace=$USER ``` Make sure you wait until the restore is done. You can use the following command to wait until the restore is finished: + ```bash kubectl wait vmrestore {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot-restore --for condition=Ready --namespace=$USER ``` + It should complete with: + ``` virtualmachinerestore.snapshot.kubevirt.io/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot-restore condition met ``` @@ -345,13 +379,14 @@ virtualmachinerestore.snapshot.kubevirt.io/{{% param "labsubfolderprefix" %}}{{% ## {{% task %}} Check the restored virtual machine -Start the virtual machine again with: +Start the virtual machine: + ```bash virtctl start {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot --namespace=$USER ``` -Whenever the restore was successful the `Hello` greeting should be gone and we should see the following Greeting again. 
-Open the console to check the greeting:
+If the restore was successful, the `Hello` greeting should be gone and we should see the original greeting again.
+Open the console to check the greeting message:

```bash
virtctl console {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot --namespace=$USER
@@ -368,10 +403,12 @@ virtctl console {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}
Greetings from the KubeVirt Training. This is a CirrOS virtual machine.
```

-What about the label on the virtual machine manifest? Describe the virtual machine with and validate that it has been removed as well:
+What about the label on the virtual machine manifest? Describe the virtual machine and validate that it has been removed as well:
+
```bash
kubectl describe virtualmachine {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-snapshot --namespace=$USER
```
+
```
API Version: kubevirt.io/v1
Kind: VirtualMachine
diff --git a/content/en/docs/storage/storageprofiles.md b/content/en/docs/storage/storageprofiles.md
index b15bf72..27d4868 100644
--- a/content/en/docs/storage/storageprofiles.md
+++ b/content/en/docs/storage/storageprofiles.md
@@ -3,27 +3,29 @@ title: "Using StorageProfiles"
weight: 62
labfoldernumber: "06"
description: >
-  Setting defaults for storage provisioning using StorageProfiles
+  Setting defaults for storage provisioning using storage profiles
---

-When working with storage and the containerized data importer one usually wants to have meaningful defaults. Let us have a look
-how we can configure storage profiles to be used with KubeVirt.
+When working with storage and the containerized data importer, one usually wants to have meaningful defaults. Let us have a look
+at how we can configure storage profiles to be used with KubeVirt.

-{{% alert title="Note" color="info" %}}
-Due to the cluster wide configuration of storage classes, the resources and command in this lab are not meant to be created and executed.
+{{% alert title="Warning" color="warning" %}} +Due to the cluster-wide configuration of storage classes, the resources and commands in this lab are not meant to be created and executed! {{% /alert %}} -## What are StorageProfiles +## What are storage profiles -For each available StorageClass KubeVirt creates a StorageProfile. StorageProfiles serve as a source of information about -the recommended parameters for a pvc. They are used when provisioning a PVC using a DataVolume. Having recommended parameters +For each available StorageClass, KubeVirt creates a StorageProfile resource. StorageProfiles serve as a source of information about +the recommended parameters for a PVC. They are used when provisioning a PVC using a DataVolume. Having recommended parameters defined centrally in a StorageProfile reduces the complexity of your DataVolume definition. You can check the StorageProfiles with: + ```yaml kubectl get storageprofiles --namespace=$USER ``` + ``` NAME AGE hcloud-volumes 38d @@ -31,9 +33,11 @@ longhorn 38d ``` You may check the configuration of the StorageProfile with: + ```yaml kubectl describe storageprofile longhorn --namespace=$USER ``` + ``` Name: longhorn Namespace: @@ -85,12 +89,14 @@ spec: storage: 128Mi ``` -The DataVolume will not get created. Whenever we describe the DataVolume with: +The DataVolume will not be created. Describe the DataVolume with: + ```bash kubectl describe datavolume my-dv --namespace=$USER ``` -We see that CDI is lacking some information to create the PVC and return with an error. +We see that CDI is lacking some information to create the PVC and returns an error: + ``` Status: Conditions: @@ -99,30 +105,31 @@ Status: Status: Unknown ``` -This means that the controller did not know which accessMode to use for the PVC. +This means that the controller did not know which access mode to use for the PVC. 
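Until the profile carries such defaults, a possible workaround is to state the claim properties explicitly in the DataVolume instead of relying on the profile. A sketch using CDI's `pvc` field rather than `storage` — like the other resources in this lab, it is not meant to be actually applied, and the blank source is only an assumption for illustration:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: my-dv
spec:
  source:
    blank: {}               # assumption: an empty disk as source
  pvc:                      # explicit PVC spec, bypasses the profile defaults
    storageClassName: longhorn
    accessModes:
      - ReadWriteOnce       # adjust to what your CSI driver supports
    volumeMode: Filesystem
    resources:
      requests:
        storage: 128Mi
```

With `accessModes` and `volumeMode` spelled out, CDI no longer needs to look them up in the StorageProfile.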
### Define StorageProfiles

-Beside others a storage profile `spec` block can take the following parameters:
+Among others, a storage profile `spec` block can take the following parameters:

* `claimPropertySets`
-  * `accessMode` - contains the desired access modes the volume should have
-  * `volumeMode` - defines what type of volume is required by the claim
-* `cloneStrategy` - defines the preferred method for performing a CDI clone
-  * `copy` - copy blocks of data over the network
-  * `snapshot` - clones the volume by creating a temporary VolumeSnapshot and restoring it to a new PVC
-  * `csi-clone` - clones the volume using a CSI clone
+  * `accessMode` - Contains the desired access modes the volume should have
+  * `volumeMode` - Defines what type of volume is required by the claim
+* `cloneStrategy` - Defines the preferred method for performing a CDI clone
+  * `copy` - Copies blocks of data over the network
+  * `snapshot` - Clones the volume by creating a temporary VolumeSnapshot and restores it to a new PVC
+  * `csi-clone` - Clones the volume using a CSI clone

-If you want to read more about parameters, defaults and how storage profiles are used, check the [StorageProfiles](https://github.com/kubevirt/containerized-data-importer/blob/main/doc/storageprofile.md#parameters) documentation.
+If you want to read more about parameters, defaults and how storage profiles are used, check the [StorageProfiles documentation](https://github.com/kubevirt/containerized-data-importer/blob/main/doc/storageprofile.md#parameters).

### Setting AccessMode and VolumeMode

-To fix the issue above and provide default values for the storage profile `longhorn` we can set defaults in the `spec`
+To fix the issue above and provide default values for the storage profile `longhorn`, we can set defaults in the `spec`
block of the storage profile.
Let's assume we add the `accessMode` and `volumeMode` to the `longhorn` storage profile like this: + ```yaml apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile @@ -137,7 +144,8 @@ spec: [...] ``` -When we have the storage profile configured and in place, we can re-apply our DataVolume: +As soon as we have the storage profile configured and in place, we can re-apply our DataVolume: + ```yaml apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume @@ -152,10 +160,12 @@ spec: storage: 128Mi ``` -It is now successfully provisioned using the defaults from the storage profile. +It is now successfully provisioned using the defaults from the storage profile: + ```bash kubectl describe datavolume my-dv --namespace=$USER ``` + ``` Status: Conditions: