Describe the bug
A panic occurs because of a nil pointer dereference in processPersistentVolumeClaim.
This happens because the provisioner on a claim can be different from the provisioner on a volume.
I found this on a GKE cluster where the volume was "migrated" from GCE to CSI.
To Reproduce
Steps to reproduce the behavior:
Create a PersistentVolumeClaim on GKE using your cluster's "default" provisioner
A PersistentVolume is provisioned by GCE
The PersistentVolume is migrated to CSI
The PersistentVolumeClaim is provisioned by CSI
processPersistentVolumeClaim panics because it looks for the volumeID on a nil field
Expected behavior
Finds the volumeID on the PersistentVolume regardless of provisioner on PersistentVolumeClaim.
Additional context
I think this could fix the issue: https://github.com/afharvey/k8s-pvc-tagger/pull/1/files
I tried to keep the changes limited to GCP.
I'm happy to try and fix this or take another approach.
I've only seen this on GCP. Azure and AWS work great.
Here are the K8s resources which cause the panic (and crash looping).
PersistentVolumeClaim - GCP_PD_CSI provisioner
PersistentVolume - GCP_PD_LEGACY provisioner