This guide walks you through preparing your EKS cluster for data protection and enabling CSI snapshots.
It is recommended that you enable both FSB (File System Backup) and CSI volume snapshots when backing up a cluster.
The VMware documentation can be found here; it is recommended to read those instructions first and become familiar with the concepts.
To prepare your cluster, you need to install a new StorageClass that uses the CSI driver, the Volume Snapshot CRDs, and finally the Volume Snapshot Controller. The next steps follow this guide.
EKS clusters come with the gp2 StorageClass by default, but it does not use the CSI driver. gp3 volumes are generally cheaper and offer better baseline performance, so this is a good time to install a new default StorageClass that uses gp3 and the CSI driver.
Let's first create a new StorageClass and make it the default (note the encrypted parameter; set it to "true" if you want provisioned volumes to be encrypted):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: gp3
parameters:
  type: gp3
  encrypted: "true"
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
Next, delete the old gp2 StorageClass, as it is no longer needed:

```shell
kubectl delete sc gp2
```
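As a quick sanity check, a PersistentVolumeClaim that omits storageClassName should now be provisioned from the new default gp3 class. A minimal example (the claim name and size below are placeholders, not from this guide):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gp3-test        # placeholder name for testing
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi      # placeholder size
```

Note that because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the EBS volume is not actually created until a pod mounts the claim.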
You need to install the Volume Snapshot CRDs:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
```
Install the Snapshot Controller:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
```
You need to create a VolumeSnapshotClass for the EBS CSI driver with the velero.io/csi-volumesnapshot-class: "true" label on it. Velero uses this label to determine which VolumeSnapshotClass to use when creating CSI volume snapshots (see the Velero CSI documentation):
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: aws-ebs-csi-driver
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: ebs.csi.aws.com
deletionPolicy: Delete
```
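Before handing things over to Velero, you can verify the whole chain works by taking a manual snapshot of an existing claim. A minimal sketch (the snapshot name and PVC name below are placeholders):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot                   # placeholder name
spec:
  volumeSnapshotClassName: aws-ebs-csi-driver
  source:
    persistentVolumeClaimName: my-pvc   # placeholder: an existing PVC in the same namespace
```

If the snapshot controller and CSI driver are working, the VolumeSnapshot should report readyToUse: true once the EBS snapshot completes.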
You have now prepared your cluster to support CSI volume snapshots, and you should see FSB and CSI data protection enabled within the TMC console (after you turn it on, of course):
If you want to test the CSI Volume Snapshot backup feature using TMC, you can follow this guide.
I ran into a problem where I needed to create a credential for a new TMC Target Location on an AWS account that already had a credential set up, and I couldn't just run the CloudFormation template generated for you. This guide walks you through how to change the existing role and policy to meet your needs. Only follow it if you already have a credential set up.