Bug: wireguard - adding ipv6 rule: file exists #2521
@qdm12 is more or less the only maintainer of this project and works on it in his free time.
Same issue here, found recently. I can't trace exactly when it started, but I'm getting the same logs.
Oddly, it seems the IPv6 rule exists before Gluetun does anything; I'm not sure why, so let's try to find the cause first.
PS: in case this cannot be fixed, I can change the code to treat "file exists" as the rule having been created successfully, but I would prefer to understand the root cause if possible, since this isn't normal behavior. I'm also inclined to think this is a host system/kernel problem, since other users run Gluetun with IPv6 just fine.
I went the extra yard (not mile yet 😄) to have an image tag
Cool. Let me give it a go.
Result with `LOG_LEVEL=debug`
Result of `ip -6 rule` run inside the container
Result of `podman run --rm --cap-add NET_ADMIN alpine:3.20 ip -6 rule`
Just to add to the above info: at the moment, while Gluetun isn't connected to the rest of the containers I'm running on Fedora, there is a VPN connected to the host of the containers via OPNsense WireGuard selective routing, so maybe that is affecting the containers. Additionally, I run ULA IPv6 addresses internally on my LAN, which is why you may see it succeed in the logs, but I'm not 100% sure whether that affects it.
@Ttfgggf Wait, I'm a bit confused: why is the container not crashing in the last logs you shared with the error
Not sure, to be honest, but it has crashed. Could SELinux be affecting it? Right now nothing is connected to the Gluetun container, but the machine hosting Gluetun has another AirVPN connection in use in the meantime, with a local ULA for IPv6 and an IPv4 address.
I'm running into the same issue. I tried the pr-2526 image but get the same behavior.
With provider: custom it fails straight away. When I set the provider to protonvpn, the VPN connects and everything works for between 5 and 20 minutes; qBittorrent can download at 200 Mbps in that time. Then the VPN becomes unhealthy, restarts, and "bootloops" with the same iptables "file exists" error, from which it never recovers (unless I manually restart the pod, in which case it works again for some time before failing again).
Thanks @leovanalphen for trying that image! 👍
There is no fix in the image; it just logs the existing rules if adding a rule fails with I've updated the
@qdm12 No worries, glad to be able to contribute in some way. Thank you for sharing your work with all of us. I just repulled pr-2526 and waited a couple of minutes for the VPN to become unhealthy. To my surprise, this time the health check kicked in, restarted the VPN, and it came back up on the first try. My test transfer over the VPN just kept running, with a barely noticeable temporary slowdown. So far it has recovered without issue three times. Logs added below. I haven't changed anything in my setup other than repulling pr-2526. For completeness, I'm running on Kubernetes 1.30 with Talos as the underlying OS. The chart I'm using as a base is from TrueCharts; I just edited the image URL to point to pr-2526.
Is this urgent?
None
Host OS
Fedora 40
CPU arch
x86_64
VPN service provider
AirVPN
What are you using to run the container
Podman
What is the version of Gluetun
Running version latest built on 2024-10-11T18:31:08.386Z (commit abe9dcb)
What's the problem 🤔
The problem is similar to the one in #1991.
I made a change to my Podman Quadlet file and it stopped working, although it was working before.
Share your logs (at least 10 lines)
Share your configuration