Running KinD in Kubernetes Pod with gVisor RuntimeClass #11313
Can you umount /dev/termination-logs and run your workflow again? I believe this is a known issue in gVisor.
To clarify: the error should be gone once you umount /dev/termination-logs; this is the known issue, and we expect to have a fix in soon. However, it will still fail with the cgroup namespace, which I can reproduce via docker.
It looks to me that, under the covers, kind uses a docker container to start the kind control plane; the command is:
Looking at the options from gVisor's perspective, it can be simply reproduced via:
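The maintainer's exact reproduction command was not captured in this copy of the thread. A minimal sketch of what such a reproduction might look like, assuming runsc is registered as a Docker runtime named `runsc` and that the failure is triggered by the private cgroup namespace kind requests for its control-plane container (the image tag here is illustrative):

```shell
# Hypothetical reproduction: run a kind node image under runsc with a
# private cgroup namespace, mirroring how kind launches its control plane.
docker run --rm \
  --runtime=runsc \
  --privileged \
  --cgroupns=private \
  kindest/node:v1.31.0 \
  echo ok
```

If the cgroup-namespace issue is present, the container should fail to start under runsc while the same command with the default runtime succeeds.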
Description
Hello,
I am trying to run a KinD cluster within a pod inside Kubernetes, where the pod uses the gVisor RuntimeClass.
I am using the docker daemon provided by your basic images (docker-in-gvisor) and the regular docker-cli image with version 27.3.1-cli.
The pod has capabilities as per your recommendation here: docker in gvisor
audit_write, chown, dac_override, fowner, fsetid, kill, mknod, net_bind_service, net_admin, net_raw, setfcap, setgid, setpcap, setuid, sys_admin, sys_chroot, sys_ptrace
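For context, a sketch of how the capabilities above might appear in the pod spec, assuming a RuntimeClass named `gvisor` (the pod name and image are illustrative, not taken from the issue):

```yaml
# Illustrative pod spec only; names and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: dind-gvisor           # hypothetical name
spec:
  runtimeClassName: gvisor
  containers:
  - name: docker-daemon
    image: docker:27.3.1-dind  # assumption: stands in for the docker-in-gvisor image
    securityContext:
      capabilities:
        add: ["AUDIT_WRITE", "CHOWN", "DAC_OVERRIDE", "FOWNER", "FSETID",
              "KILL", "MKNOD", "NET_BIND_SERVICE", "NET_ADMIN", "NET_RAW",
              "SETFCAP", "SETGID", "SETPCAP", "SETUID", "SYS_ADMIN",
              "SYS_CHROOT", "SYS_PTRACE"]
```

Kubernetes expects capability names uppercased and without the `CAP_` prefix, which is why they differ in form from the lowercase list above.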
Additionally, the following configuration is set for runsc.
containerd config
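The actual containerd config was attached to the issue and is not reproduced here. For reference, a typical runsc registration in `/etc/containerd/config.toml` looks roughly like this (paths are the conventional defaults, not confirmed from the attachment):

```toml
# Sketch of registering runsc as a containerd runtime; the issue's
# actual attached config may differ.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc.options]
  TypeUrl = "io.containerd.runsc.v1.options"
  ConfigPath = "/etc/containerd/runsc.toml"
```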
I have managed to get both `docker build` and `docker run` commands to work, but when I try to run `kind create cluster` it fails, and the logs I get from the daemon are sparse even when running with the debug flag. I can run the same setup in Kubernetes when the RuntimeClass uses the default runc and the pod runs as privileged; that setup works, but with gVisor it does not.
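The sequence described above can be sketched as follows (image tag and cluster name are illustrative placeholders, not from the issue):

```shell
# Run inside the gVisor pod, against the docker-in-gvisor daemon.
docker build -t myimage:dev .              # works
docker run --rm myimage:dev true           # works
kind create cluster --name gvisor-test     # fails under runsc
```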
daemon-logs-working.txt (default cluster runc with privileged)
daemon-logs-not-working.txt (runsc RuntimeClass)
Zooming in on the working setup, I see this:
Which is not shown by the daemon in the cluster. Could the bundle creation failure be related to overlay not being available inside the docker daemon, since it uses VFS as its storage driver?
There is also a section at the top of the gVisor logs with iptables errors, but since iptables is disabled I assume that is expected?
The initial commands run in the daemon pod, as per your image, are:
It also complains about a cgroup setting from the docker-cli container:
Is there something we need to do here?
Steps to reproduce
`device is not a node` error
runsc version
docker version (if using docker)
uname
uname -a Linux ip-10-15-40-36.eu-west-1.compute.internal 6.1.119-129.201.amzn2023.aarch64 #1 SMP Tue Dec 3 21:06:52 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
kubectl (if using Kubernetes)
repo state (if built from source)
No response
runsc debug logs (if available)