Hubble is not working. hubble status reports Max Flows, Flows/s: N/A and Connected Nodes: 0/0.
None of the solutions in #599 worked.
Tried with TLS disabled, the httpV2 metric disabled, and different CRIs, starting from a fresh VM snapshot on each attempt.
Tried installing with both Helm and the Cilium CLI individually (see the install sketch below the version list).
I have confirmed that I use the default cluster.local Kubernetes domain.
I have tried with and without kube-proxy replacement.
Installed versions:
VMware Workstation 17
Debian 12 or Ubuntu 22.04
Kubernetes version 1.27
Cilium 1.14.4
containerd.io 1.6.24
Cluster initiated with kubeadm init
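Cilium was installed with either Helm or the Cilium CLI. The exact commands and values are not preserved here, but a minimal sketch of the two approaches (the Helm values and CLI flags below are assumptions, not necessarily the exact ones used) looks like:

# Helm (chart values are assumptions):
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.14.4 --namespace kube-system \
  --set hubble.relay.enabled=true --set hubble.ui.enabled=true

# OR with the Cilium CLI (flags are assumptions):
cilium install --version 1.14.4
cilium hubble enable --ui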
Hubble relay logs:
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
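The relay is timing out while dialing hubble-peer.kube-system.svc.cluster.local:443. A quick way to narrow this down (a suggested check, not something from the original report) is to confirm that the hubble-peer Service exists, has endpoints behind it, and resolves in cluster DNS:

# Does the peer Service exist and have endpoints (one per Cilium agent)?
kubectl -n kube-system get svc hubble-peer
kubectl -n kube-system get endpoints hubble-peer

# Does the service name resolve from inside the cluster? (pod name and image are arbitrary)
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup hubble-peer.kube-system.svc.cluster.local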
cilium status inside the agent container says everything is OK; even Hubble seems to list some ongoing flows:
root@debload1:~# kubectl -n kube-system exec cilium-dkvcv -- cilium status
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore: Ok Disabled
Kubernetes: Ok 1.27 (v1.27.7) [linux/amd64]
Kubernetes APIs: ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: True [ens33 172.16.179.151 (Direct Routing)]
Host firewall: Disabled
CNI Chaining: none
Cilium: Ok 1.14.4 (v1.14.4-87dd2b64)
NodeMonitor: Listening for events on 128 CPUs with 64x4096 of shared memory
Cilium health daemon: Ok
IPAM: IPv4: 3/254 allocated from 10.0.1.0/24,
IPv4 BIG TCP: Disabled
IPv6 BIG TCP: Disabled
BandwidthManager: Disabled
Host Routing: Legacy
Masquerading: IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status: 26/26 healthy
Proxy Status: OK, ip 10.0.1.121, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range: min 256, max 65535
Hubble: Ok Current/Max Flows: 1205/4095 (29.43%), Flows/s: 7.68 Metrics: Disabled
Encryption: Disabled
Cluster health: 2/2 reachable (2023-11-15T17:05:47Z)
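The agent-side Hubble server is clearly up (see the Current/Max Flows line above), so the gap is between hubble-relay and the per-node peer endpoints. As a follow-up (my suggestion, not part of the original report; 4244 is Cilium's default hubble-listen-address port), the Hubble settings and the agent's own Hubble view can be checked with:

# Is the Hubble server enabled and listening on the agents? (look for enable-hubble / hubble-listen-address)
kubectl -n kube-system get configmap cilium-config -o yaml | grep -i hubble

# Ask the agent's local Hubble server directly
kubectl -n kube-system exec cilium-dkvcv -c cilium-agent -- hubble status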
Bear-LB changed the title from 'Hubble-relay error="context deadline exceeded"' to 'Connected Nodes: 0/0 and Hubble-relay error="context deadline exceeded"' on Nov 15, 2023.
I swapped VMware Workstation 17 out for VirtualBox. With the exact same configuration, I could no longer reproduce the error. Every part of the networking in the Kubernetes cluster had been functioning except for Hubble, so I hadn't suspected a hypervisor issue.
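Since the only variable that changed was the hypervisor, a direct way to confirm this failure mode (a suggested check, not from the original thread) is to test whether the Hubble peer port on each node is reachable from inside the cluster. 4244 is the default peer port, and 172.16.179.151 is the node IP shown in the status output above:

# Throwaway debug pod (name and image are arbitrary; netshoot ships a full nc)
kubectl run net-debug --rm -it --image=nicolaka/netshoot --restart=Never -- \
  nc -zv 172.16.179.151 4244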