Replies: 1 comment 4 replies
If we try to split this into pieces: Talos Linux as the OS only handles host-to-host traffic, which in your case reaches wire speed (around 10 Gbit/s), so there is no problem at the OS level. The next layer is the CNI, and in their default setups both Flannel and Cilium use VXLAN encapsulation for pod-to-pod traffic between nodes. I would probably look at the VXLAN path first: encapsulation overhead, MTU headroom for the tunnel, and NIC offload support inside the VMs.
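To confirm what the CNI is actually doing on the wire, something along these lines should work (the Flannel ConfigMap name/namespace and the `cilium` CLI are assumed from their default installs; adjust to your cluster):

```shell
# Flannel stores its backend ("vxlan" by default) in its net-conf ConfigMap:
kubectl -n kube-system get configmap kube-flannel-cfg \
  -o jsonpath='{.data.net-conf\.json}'

# Cilium reports its routing/tunnel mode and MTU through its CLI:
cilium config view | grep -i -E 'tunnel|routing|mtu'
```

If both show VXLAN, the ~8 Gbit/s gap between host-to-host and pod-to-pod traffic is most likely encapsulation cost that the vNIC cannot offload.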
-
I'm trying to set up a cluster on some very simple VMs from an independent hosting provider (non-cloud). I have three of them as a control plane with scheduling enabled, and each node has a 10 GbE NIC.
However, when testing pod-to-pod networking speed with iperf3, I can only get between 450 and 600 Mbit/s between pods on different nodes. Using kubectl debug, I can get ~9 Gbit/s between nodes, or between a pod and a node.
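One way to reproduce the pod-to-pod measurement (the iperf3 image, node names, and pinning via `nodeName` are illustrative, not from the original post):

```shell
# iperf3 server pod pinned to the first node (hypothetical name "node-a"):
kubectl run iperf-server --image=networkstatic/iperf3 \
  --overrides='{"spec":{"nodeName":"node-a"}}' -- -s

# Once it is running, target its pod IP from a client pod on another node:
SERVER_IP=$(kubectl get pod iperf-server -o jsonpath='{.status.podIP}')
kubectl run iperf-client --image=networkstatic/iperf3 --rm -it \
  --overrides='{"spec":{"nodeName":"node-b"}}' -- -c "$SERVER_IP"
```

Pinning both pods with `nodeName` ensures the traffic really crosses the node boundary instead of being scheduled onto the same host.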
I've tested with both Flannel and Cilium. The cluster has no workloads and was created using
gen config
with a minimal config for test purposes. No firewall, KubeSpan disabled; the only real config I added is some network interfaces, since the cloud-init config is in a non-standard location.
Install method
AMD64 metal ISO, with system extensions:
VM Specs
AMD EPYC platform, 4 virtual cores
8 GB RAM
10 GbE vNIC
1 public IP address
What I've tried
Flannel CNI
Cilium CNI, with kube-proxy disabled
Experimented with different MTUs (1500 and 9000)
Added various kernel flags suggested by the host
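For the MTU experiments, a per-node interface MTU can be set through the Talos machine config (the field paths follow the standard machine config schema; the interface name and node IP here are placeholders):

```shell
# Write a strategic merge patch for the machine config:
cat > mtu-patch.yaml <<'EOF'
machine:
  network:
    interfaces:
      - interface: eth0
        mtu: 9000
EOF

# Apply it to one node:
talosctl -n <node-ip> patch machineconfig --patch @mtu-patch.yaml
```

Note that with VXLAN the pod-facing MTU must leave ~50 bytes of headroom below the physical MTU, so the CNI's MTU setting has to be adjusted together with the interface's.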
Since the IP addresses are public, I'd prefer not to share the support zip here, but let me know how I can provide further logs or config info if necessary, or what else I should try.
This is the network config. All nodes are on the same /24 subnet, but it is not private to me.