feat: get secret from pvc annotation #20

Merged · 9 commits · Sep 4, 2024
25 changes: 2 additions & 23 deletions .github/workflows/publish-chart.yaml
@@ -17,30 +17,10 @@ jobs:
token: ${{secrets.GITHUB_TOKEN }}
- name: install dependencies
run: pip install chartpress
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- name: Flake check
run: nix flake check
- name: Publish images
uses: workflow/[email protected]
- name: Publish chart
env:
DOCKER_USERNAME: ${{ secrets.RENKU_DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.RENKU_DOCKER_PASSWORD }}
with:
flakes-from-devshell: true
flakes: .#csi-rclone-container-layerd
script: |
export TAG=$(echo ${GITHUB_REF} |cut -d/ -f3)
nix build .#csi-rclone-container-layerd && ./result | docker load
docker tag csi-rclone:latest renku/csi-rclone:latest
docker tag csi-rclone:latest renku/csi-rclone:${TAG}
echo ${DOCKER_PASSWORD}|docker login -u ${DOCKER_USERNAME} --password-stdin
docker push renku/csi-rclone:latest
docker push renku/csi-rclone:${TAG}

- name: Publish chart
env:
GITHUB_TOKEN: ${{ secrets.RENKUBOT_GITHUB_TOKEN }}
run: |
cd deploy
@@ -50,5 +30,4 @@ jobs:
helm dep update csi-rclone
chartpress --tag $TAG
helm lint csi-rclone
chartpress --tag $TAG --no-build --publish-chart

chartpress --tag $TAG --push --publish-chart
33 changes: 1 addition & 32 deletions .github/workflows/test.yaml
@@ -36,38 +36,7 @@ jobs:
init-kind-cluster
local-deploy
get-kind-kubeconfig
go test -v test/sanity_test.go
- name: Print rclone log
if: ${{ failure() }}
run: cat /tmp/rclone.log
tests-with-decryption:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install fuse
run: |
sudo apt-get update
sudo apt-get install -y fuse3
sudo bash -c 'echo "user_allow_other" >> /etc/fuse.conf'
- uses: actions/setup-go@v4
with:
go-version: '1.20'
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- name: Flake check
run: nix flake check
- name: Helm check
run: helm lint deploy/csi-rclone
- name: Run tests with secret decryption
uses: workflow/[email protected]
with:
flakes-from-devshell: true
script: |
init-kind-cluster
local-deploy
get-kind-kubeconfig
go test -v test/sanity_with_decrypt_test.go
go test -v ./...
- name: Print rclone log
if: ${{ failure() }}
run: cat /tmp/rclone.log
10 changes: 10 additions & 0 deletions Dockerfile
@@ -0,0 +1,10 @@
FROM golang:1.23.0-bookworm AS build
COPY . .
RUN go build -o /csi-rclone cmd/csi-rclone-plugin/main.go

FROM debian:bookworm-slim
# NOTE: the rclone package in apt does not install ca-certificates or fuse3,
# both of which it needs to successfully mount cloud storage
RUN apt-get update && apt-get install -y fuse3 rclone ca-certificates && rm -rf /var/cache/apt/archives /var/lib/apt/lists/*
COPY --from=build /csi-rclone /csi-rclone
ENTRYPOINT ["/csi-rclone"]
86 changes: 83 additions & 3 deletions README.md
@@ -3,8 +3,90 @@

This project implements a Container Storage Interface (CSI) plugin that allows using [rclone mount](https://rclone.org/) as a storage backend. Rclone mount points and [parameters](https://rclone.org/commands/rclone_mount/) can be configured using a Secret or PersistentVolume volumeAttributes.

## Usage

## Installing CSI driver to kubernetes cluster
The easiest way to use this driver is to create a PersistentVolumeClaim (PVC) with the `csi-rclone`
storage class, or, if you have modified the storage class name in the `values.yaml` file, with the name you have chosen.
Note that since the storage is backed by an existing cloud storage service such as S3, the size
requested in the PVC below plays no role and is completely ignored; it merely has to be present because the PVC specification requires it.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-rclone-example
  namespace: csi-rclone-example
  annotations:
    csi-rclone.dev/secretName: csi-rclone-example-secret
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-rclone
```

You have to provide a secret with the rclone configuration; the secret has to follow the specific format explained below.
The secret is passed to the CSI driver via the `csi-rclone.dev/secretName` annotation.

The secret requires the following fields:
- `remote`: The name of the remote that should be mounted; it has to match the section name in the `configData` field.
- `remotePath`: The path on the remote that should be mounted. It should start with the container itself; for example,
  for an S3 bucket called `test_bucket`, the remote path should be at least `test_bucket/`.
- `configData`: The rclone configuration; it has to match the JSON schema from `rclone config providers`.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-rclone-example-secret
  namespace: csi-rclone-example
type: Opaque
stringData:
  remote: giab
  remotePath: giab/
  configData: |
    [giab]
    type = s3
    provider = AWS
```
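
Once the secret and the PVC exist, any pod in the same namespace can mount the claim like a regular volume. Below is a minimal sketch reusing the example names from above; the pod name and image are arbitrary placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-rclone-example-pod
  namespace: csi-rclone-example
spec:
  containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        # The cloud storage appears under /data once the PVC is bound
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: csi-rclone-example
```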

### Skip provisioning and create PV directly

This is more involved but doable. Here you have to specify the secret name in the CSI volume attributes.
Assuming that the secret containing the configuration is called `csi-rclone-example-secret` and
is located in the namespace `csi-rclone-example-secret-namespace`, the PV specification would look as follows.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-rclone-pv-example
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  csi:
    driver: csi-rclone
    volumeAttributes:
      nodePublishSecretRef: csi-rclone-example-secret
      nodePublishSecretRefNamespace: csi-rclone-example-secret-namespace
  persistentVolumeReclaimPolicy: Delete
```

## Installation

You can install the CSI plugin via Helm. Please check out the default values file at `deploy/csi-rclone/values.yaml`
in this repository for the available options to configure the installation.

```bash
helm repo add renku https://swissdatasciencecenter.github.io/helm-charts
helm repo update
helm install csi-rclone renku/csi-rclone
```
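
For example, to install the driver under a different storage class name, you can override the `storageClassName` value from the chart's `values.yaml` (the name `my-rclone` below is just an illustration):

```bash
helm install csi-rclone renku/csi-rclone --set storageClassName=my-rclone
```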

## Changelog

@@ -35,5 +117,3 @@ $ get-kind-kubeconfig
$ local-deploy
$ go test -v ./...
```


8 changes: 5 additions & 3 deletions deploy/chartpress.yaml → chartpress.yaml
@@ -1,15 +1,17 @@
charts:
- name: csi-rclone
- name: deploy/csi-rclone
imagePrefix: renku/
resetTag: ""
resetVersion: 0.2.0
repo:
git: SwissDataScienceCenter/helm-charts
published: https://swissdatasciencecenter.github.io/helm-charts
paths:
- ./
images:
csi-rclone:
contextPath: ../
dockerFilePath: ../Dockerfile
contextPath: ./
dockerFilePath: Dockerfile
valuesPath:
- csiControllerRclone.rclone.image
- csiNodepluginRclone.rclone.image
5 changes: 4 additions & 1 deletion cmd/csi-rclone-plugin/main.go
@@ -67,5 +67,8 @@ func handle() {
panic(err)
}
d := rclone.NewDriver(nodeID, endpoint, kubeClient)
d.Run()
err = d.Run()
if err != nil {
panic(err)
}
}
1 change: 1 addition & 0 deletions deploy/csi-rclone/templates/csi-controller-rbac.yaml
@@ -24,6 +24,7 @@ rules:
- list
- watch
- update
- patch
- apiGroups:
- ""
resources:
1 change: 0 additions & 1 deletion deploy/csi-rclone/templates/csi-controller-rclone.yaml
@@ -58,7 +58,6 @@ spec:
mountPath: /csi
- name: rclone
args:
- /bin/csi-rclone-plugin
- run
- --nodeid=$(NODE_ID)
- --endpoint=$(CSI_ENDPOINT)
1 change: 0 additions & 1 deletion deploy/csi-rclone/templates/csi-nodeplugin-rclone.yaml
@@ -58,7 +58,6 @@ spec:
name: plugin-dir
- name: rclone
args:
- /bin/csi-rclone-plugin
- run
- --nodeid=$(NODE_ID)
- --endpoint=$(CSI_ENDPOINT)
9 changes: 9 additions & 0 deletions deploy/csi-rclone/templates/csi-rclone-storageclass.yaml
@@ -7,3 +7,12 @@ metadata:
provisioner: {{ .Values.storageClassName }}
volumeBindingMode: Immediate
reclaimPolicy: Delete
parameters:
  # CreateVolumeRequest.secrets or DeleteVolumeRequest.secrets
  # If creating a PersistentVolume by hand then these are not needed, see below
  csi.storage.k8s.io/provisioner-secret-name: ${pvc.annotations['csi-rclone.dev/secretName']}
  csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace}
  # Populates NodePublishVolumeRequest.secrets
  # If creating a PersistentVolume by hand then set spec.csi.nodePublishSecretRef.name and spec.csi.nodePublishSecretRef.namespace
  csi.storage.k8s.io/node-publish-secret-name: ${pvc.annotations['csi-rclone.dev/secretName']}
  csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}
10 changes: 5 additions & 5 deletions deploy/csi-rclone/values.yaml
@@ -2,13 +2,13 @@ storageClassName: csi-rclone
csiControllerRclone:
csiAttacher:
image:
repository: k8s.gcr.io/sig-storage/csi-attacher
tag: v3.4.0
repository: registry.k8s.io/sig-storage/csi-attacher
tag: v4.7.0
imagePullPolicy: IfNotPresent
csiProvisioner:
image:
repository: registry.k8s.io/sig-storage/csi-provisioner
tag: v3.4.1
tag: v5.1.0
imagePullPolicy: IfNotPresent
rclone:
image:
@@ -31,8 +31,8 @@ csiControllerRclone:
csiNodepluginRclone:
nodeDriverRegistrar:
image:
repository: k8s.gcr.io/sig-storage/csi-node-driver-registrar
tag: v2.4.0
repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
tag: v2.12.0
imagePullPolicy: IfNotPresent
rclone:
containerSecurityContext:
39 changes: 17 additions & 22 deletions pkg/rclone/controllerserver.go
@@ -32,11 +32,10 @@ func (cs *controllerServer) ValidateVolumeCapabilities(ctx context.Context, req
}

cs.mutex.Lock()
defer cs.mutex.Unlock()
if _, ok := cs.active_volumes[volId]; !ok {
cs.mutex.Unlock()
return nil, status.Errorf(codes.NotFound, "Volume %s not found", volId)
}
cs.mutex.Unlock()
return &csi.ValidateVolumeCapabilitiesResponse{
Confirmed: &csi.ValidateVolumeCapabilitiesResponse_Confirmed{
VolumeContext: req.VolumeContext,
@@ -73,34 +72,30 @@ func (cs *controllerServer) CreateVolume(ctx context.Context, req *csi.CreateVol
// differing capacity, so we need to remember it
volSizeBytes := int64(req.GetCapacityRange().GetRequiredBytes())
cs.mutex.Lock()
defer cs.mutex.Unlock()
if val, ok := cs.active_volumes[volumeName]; ok && val != volSizeBytes {
cs.mutex.Unlock()
return nil, status.Errorf(codes.AlreadyExists, "Volume operation already exists for volume %s", volumeName)
}
cs.active_volumes[volumeName] = volSizeBytes
cs.mutex.Unlock()

pvcName := req.Parameters["csi.storage.k8s.io/pvc/name"]
ns := req.Parameters["csi.storage.k8s.io/pvc/namespace"]
// NOTE: We need the PVC name and namespace when mounting the volume, not here
// that is why they are passed to the VolumeContext
pvcSecret, err := GetPvcSecret(ctx, ns, pvcName)
if err != nil {
return nil, err
}
remote, remotePath, _, _, err := extractFlags(req.GetParameters(), req.GetSecrets(), pvcSecret, nil)
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "CreateVolume: %v", err)
// See https://github.com/kubernetes-csi/external-provisioner/blob/v5.1.0/pkg/controller/controller.go#L75
// on how parameters from the persistent volume are parsed
// We have to pass these into the context so that the node server can use them
secretName, nameFound := req.Parameters["csi.storage.k8s.io/provisioner-secret-name"]
secretNs, nsFound := req.Parameters["csi.storage.k8s.io/provisioner-secret-namespace"]
volumeContext := map[string]string{}
if nameFound && nsFound {
volumeContext["secretName"] = secretName
volumeContext["secretNamespace"] = secretNs
} else {
// This is here for compatibility reasons: before this update the secret name was equal to the PVC name
volumeContext["secretName"] = req.Parameters["csi.storage.k8s.io/pvc/name"]
volumeContext["secretNamespace"] = req.Parameters["csi.storage.k8s.io/pvc/namespace"]
}
return &csi.CreateVolumeResponse{
Volume: &csi.Volume{
VolumeId: volumeName,
VolumeContext: map[string]string{
"secretName": pvcName,
"namespace": ns,
"remote": remote,
"remotePath": remotePath,
},
VolumeContext: volumeContext,
},
}, nil

@@ -113,8 +108,8 @@ func (cs *controllerServer) DeleteVolume(ctx context.Context, req *csi.DeleteVol
return nil, status.Error(codes.InvalidArgument, "DeleteVolume must be provided a volume id")
}
cs.mutex.Lock()
defer cs.mutex.Unlock()
delete(cs.active_volumes, volId)
cs.mutex.Unlock()

return &csi.DeleteVolumeResponse{}, nil
}
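
For context, the `secretName` and `secretNamespace` entries that `CreateVolume` writes into the volume context above are what the node plugin later uses to fetch the rclone configuration when publishing the volume. The following is an illustrative sketch of that lookup with client-go, not the actual implementation; the function name is made up.

```go
package rclone

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// resolveSecret looks up the rclone configuration secret referenced by the
// volume context that the controller built in CreateVolume.
func resolveSecret(ctx context.Context, kube kubernetes.Interface, volCtx map[string]string) (*corev1.Secret, error) {
	name, namespace := volCtx["secretName"], volCtx["secretNamespace"]
	if name == "" || namespace == "" {
		return nil, fmt.Errorf("volume context is missing secretName or secretNamespace")
	}
	return kube.CoreV1().Secrets(namespace).Get(ctx, name, metav1.GetOptions{})
}
```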