Describe the bug
When deploying a StorageClass, PersistentVolumeClaim and Pod while using the EfsCsiDriverAddOn to dynamically provision an EFS Access Point and mount it to the Pod, mounting fails with the error mount.nfs4: access denied by server while mounting 127.0.0.1:/.
Expected Behavior
Mounting the EFS Access Point to the Pod succeeds.
Current Behavior
Running kubectl describe pod/efs-app shows the following Event logs for the Pod:
Name: efs-app
Namespace: default
Priority: 0
Service Account: default
Node: ip-XXX-XXX-XXX-XXX.eu-west-1.compute.internal/XXX.XXX.XXX.XXX
Start Time: Thu, 01 Aug 2024 13:08:05 +0200
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
app:
Container ID:
Image: centos
Image ID:
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
while true; do echo $(date -u) >> /data/out; sleep 5; done
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/data from persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zg68d (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: efs-claim
ReadOnly: false
kube-api-access-zg68d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18s default-scheduler Successfully assigned default/efs-app to ip-XXX-XXX-XXX-XXX.eu-west-1.compute.internal
Warning FailedMount 8s (x5 over 17s) kubelet MountVolume.SetUp failed for volume "pvc-XXXXXXX" : rpc error: code = Internal desc = Could not mount "fs-XXXXXXX:/" at "/var/lib/kubelet/pods/XXXXXXX/volumes/kubernetes.io~csi/pvc-XXXXXXX/mount": mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t efs -o accesspoint=fsap-XXXXXXX,tls fs-XXXXXXX:/ /var/lib/kubelet/pods/XXXXXXX/volumes/kubernetes.io~csi/pvc-XXXXXXX/mount
Output: Could not start amazon-efs-mount-watchdog, unrecognized init system "aws-efs-csi-dri"
b'mount.nfs4: access denied by server while mounting 127.0.0.1:/'
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
Warning: config file does not have retry_nfs_mount_command item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [retry_nfs_mount_command = True].
However, the creation of the EFS Access Point does succeed, as seen in the AWS Console and via command kubectl describe pvc/efs-claim:
Name: efs-claim
Namespace: default
StorageClass: efs-sc
Status: Bound
Volume: pvc-XXXXXXX
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: efs.csi.aws.com
volume.kubernetes.io/storage-provisioner: efs.csi.aws.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWX
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 17s persistentvolume-controller Waiting for a volume to be created either by the external provisioner 'efs.csi.aws.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
Normal Provisioning 17s efs.csi.aws.com_efs-csi-controller-XXXXXXX External provisioner is provisioning volume for claim "default/efs-claim"
Normal ProvisioningSucceeded 17s efs.csi.aws.com_efs-csi-controller-XXXXXXX Successfully provisioned volume pvc-XXXXXXX
Then the details of the Storage Class, by running command kubectl describe sc/efs-sc:
Lastly, I have also checked the efs-csi-controller logs using command kubectl logs deployment/efs-csi-controller -n kube-system -c efs-plugin:
Reproduction Steps
Deploy the StorageClass, PersistentVolumeClaim and Pod (a sketch of such manifests follows below), then check the mount status of the Pod with kubectl describe pod/efs-app. Optionally also check the PVC and SC with kubectl describe pvc/efs-claim and kubectl describe sc/efs-sc.
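For reference, a minimal sketch of manifests matching the reproduction above. The PVC and Pod fields are reconstructed from the kubectl describe output earlier in this report; the StorageClass parameters (provisioningMode, directoryPerms) and the fs-XXXXXXX file system ID are placeholders based on the EFS CSI driver's dynamic provisioning docs, not the exact manifests used here.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap        # dynamically provision an EFS Access Point per volume (assumption)
  fileSystemId: fs-XXXXXXX        # placeholder: the EFS file system created for the cluster
  directoryPerms: "700"           # permissions for the access point root directory (assumption)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany               # matches the RWX access mode shown in kubectl describe pvc
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi                # matches the 5Gi capacity shown above
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data        # the mount that fails with "access denied by server"
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim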
Possible Solution
Not sure.
Additional Information/Context
I have done the following troubleshooting, which all result in the same error:
Manually added mountOptions to the provided StorageClass with iam and tls included. This was suggested in this AWS re:Post.
kind: StorageClass
...
mountOptions:
- tls
- iam
...
Configured my cluster nodes with a custom role that includes the AWS AmazonElasticFileSystemClientReadWriteAccess policy. This did not fix the issue.
Checked if EFS CSI Driver provisions the EFS Access Point correctly, which it does.
Checked the EFS File System Policy, which looks alright.
Checked if EFS is in the same VPC as the EKS Cluster, which it is.
Checked if the EFS Security Groups allow inbound NFS traffic on port 2049, which they do (see the sketch after this list for the kind of rule that was verified).
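For completeness, a hedged CloudFormation-style sketch of the kind of ingress rule that was verified on the EFS mount-target security group. The logical names and the referenced security groups are illustrative placeholders, not resources from this deployment.

Resources:
  EfsMountTargetNfsIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref EfsSecurityGroup                 # security group attached to the EFS mount targets (placeholder)
      IpProtocol: tcp
      FromPort: 2049                                 # NFS
      ToPort: 2049
      SourceSecurityGroupId: !Ref NodeSecurityGroup  # security group of the EKS worker nodes (placeholder)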
CDK CLI Version
2.133.0 (build dcc1e75)
EKS Blueprints Version
1.15.1
Node.js Version
v20.11.0
Environment details (OS name and version, etc.)
Windows 11 Pro 22H2
Other information
While I'm uncertain of the exact cause, I assume it is IAM related.
I found a similar issue on the EKS Blueprints for Terraform repository (aws-ia/terraform-aws-eks-blueprints#1171), which has been solved (aws-ia/terraform-aws-eks-blueprints#1191). Perhaps this has a similar cause? I believe it might be related because the mount option mentioned in that fix does not seem to be included in the mount command in the EKS Blueprints for CDK logs above (specifically the Pod Event logs).
Please take a look at the EFS CSI Driver workshop. You will see steps and policies to configure your EFS filesystem with e2e encryption. Please let me know if that solves the issue; we can then update the docs with that reference.
@shapirov103 Hey, I wasn't aware that there was a Workshop for the EFS CSI Driver. I've only used the QuickStart docs. The instructions in the Workshop work perfectly! Issue solved.
Thank you!
I had the same issue, but the guide did not provide a solution. When I changed the policy to be more permissive, it finally worked. So I'm not sure the policy works for everyone:
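The exact policy used in the comment above was not included here. Purely as a point of comparison, the following is a generic, deliberately permissive EFS file system policy sketch (in CloudFormation YAML form) that allows clients to mount and write through mount targets; it is an assumption based on common AWS examples, not the policy from the workshop or from that comment.

Resources:
  EfsFileSystem:
    Type: AWS::EFS::FileSystem
    Properties:
      Encrypted: true
      FileSystemPolicy:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS: "*"
            Action:
              - elasticfilesystem:ClientMount
              - elasticfilesystem:ClientWrite
            Condition:
              Bool:
                "elasticfilesystem:AccessedViaMountTarget": "true"   # only allow access through mount targets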