csi-s3 daemonset pod restart causes mounted pvc not accessible #29
Comments
Hi, interesting, I'll have a look.
RPC_LIST_VOLUMES_PUBLISHED_NODES is now officially not a solution :-) kubernetes-csi/external-attacher#374 (comment). I'll try to solve this problem some other way, for example by persisting mounts on each node in a "state file" and remounting them on restart. But this approach may fail, because it's not guaranteed that these remounts will propagate into application pods correctly. And if it fails, we'll also have to think about moving the FUSE processes out of the mounter pod.
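A minimal sketch of the state-file idea described above, assuming a hypothetical tab-separated file and illustrative paths (none of this is csi-s3's actual code):

```shell
#!/bin/sh
# Hypothetical sketch: record every mount the driver performs so a restarted
# daemonset pod can replay them. File location, format, and the remount
# command are illustrative examples, not csi-s3's real implementation.
STATE=/tmp/csi-s3-mounts.state
: > "$STATE"                       # start with an empty state file

record_mount() {                   # record_mount <bucket> <target-path>
  printf '%s\t%s\n' "$1" "$2" >> "$STATE"
}

replay_mounts() {                  # on driver restart, replay each entry
  while IFS="$(printf '\t')" read -r bucket target; do
    echo "would remount $bucket at $target"   # e.g. geesefs "$bucket" "$target"
  done < "$STATE"
}

record_mount my-bucket /var/lib/kubelet/pods/abc/volumes/pv1
replay_mounts
```

The replay itself is the easy part; as noted above, the hard part is that a mount re-created this way may not propagate into pods that are already running.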
@vitalif I followed your footprints across several repos, and the reason behind this issue has become much clearer to me. Thanks for your efforts, and I look forward to good news!
Follow up. Any help would be much appreciated.
Okay, I checked the state-file approach, and predictably it doesn't work: the re-created mount isn't propagated into the application pod.
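The propagation failure above comes down to Linux mount propagation: a new mount only becomes visible in other mount namespaces if the parent mount is marked shared. This can be inspected with standard Linux interfaces (nothing csi-s3 specific; paths are examples):

```shell
# Each line of /proc/self/mountinfo carries optional fields such as
# "shared:N"; if they are absent, the mount is private, and a FUSE mount
# re-created there after a container started will not appear inside it.
head -n 3 /proc/self/mountinfo
# With util-linux installed, the same information is easier to read:
#   findmnt -o TARGET,PROPAGATION
```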
I finally implemented running outside of the container, using transient systemd units. The code is in the master branch, not released yet. One slight ugliness is that it still requires running as root on the host. That's not a big deal compared to the current version, which also runs as root in a privileged container, but I still prefer to make software run without root privileges where possible... I'll probably try to add an option to geesefs to drop root privileges itself after initializing the FUSE mount; that would solve this issue. Anyway, you can already try the new version if you build the code from the master branch yourself :)
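The transient-unit approach can be sketched roughly like this. The volume ID, target path, unit naming, and geesefs flags are my illustration of the general technique, not the exact code in master:

```shell
#!/bin/sh
# Rough illustration: start the FUSE process as a transient systemd unit on
# the host, so it survives csi-s3 pod restarts. All names here are examples.
VOLUME_ID="pvc-1234"
TARGET="/var/lib/kubelet/plugins/csi-s3/mounts/$VOLUME_ID"
# Derive a systemd-safe unit name from the volume ID:
UNIT="geesefs-$(echo "$VOLUME_ID" | tr -c 'a-zA-Z0-9.\n' '-')"
echo "$UNIT"
# The driver (talking to the host's systemd) would then run something like:
#   systemd-run --unit="$UNIT" --collect -- geesefs -f "$VOLUME_ID" "$TARGET"
# geesefs is now owned by systemd on the host, not by the csi-s3 container,
# so restarting or upgrading the daemonset leaves the mount intact.
```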
The fix is released. Note that it's kind of strange :) because it starts geesefs outside the container, but this way mountpoints no longer break when csi-s3 is updated. The new version also doesn't start multiple geesefs processes when one volume is mounted into multiple containers on the same host; it starts only one geesefs per volume per host. Try it in 0.34.7.
Problem
If the csi-s3 daemonset pod restarts for any reason, pods that mount an s3-based PVC can no longer access it and report "Transport endpoint is not connected".
Reproduce
```shell
cd deploy/kubernetes
kubectl create -f examples/secret.yaml
kubectl create -f provisioner.yaml
kubectl create -f attacher.yaml
kubectl create -f csi-s3.yaml
kubectl create -f examples/storageclass.yaml
```
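The manifests above only install the driver; reproducing the failure also needs a PVC, a pod that mounts it, and a driver restart. A hypothetical continuation — the object names, image, and storage class name are my examples, not files from the repo:

```yaml
# test-pvc-pod.yaml -- hypothetical manifest, all names are examples
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
spec:
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-s3
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /data
          name: vol
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: csi-s3-pvc
```

With the test pod running, restart the csi-s3 daemonset (e.g. delete its pod on that node) and run `ls /data` inside the test pod; it fails with "Transport endpoint is not connected".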
Found nothing special in the csi-s3 pod logs.
Related
This issue describes the same problem, and its maintainer suggests that the LIST_VOLUMES and LIST_VOLUMES_PUBLISHED_NODES capabilities should be implemented. Would you please have a look?