Not sure if this is an enhancement, a feature request, or maybe a bug.
I naively assumed that an observability platform for networking on K8s would give me some kind of explicit indication when a K8s service cannot be reached because the pods backing the service are not available/running.
However, I only see the usual DNS flows that also occur when the pods are running. There is no indication whatsoever that would make me aware that the service cannot be used because the pods have crashed:
On the client side I simply curl'ed the service and got the expected error message, because the server pods were not running:
```
root@client-0:/# curl http://pod-service
curl: (7) Couldn't connect to server
```
When the pods are running, the flows look like this (note that the first 10 lines look exactly like the 10 lines of the error case shown above):
To reproduce the scenario, here is a simple YAML manifest with a client that can execute curl requests (e.g. `curl http://pod-service`). The error situation is provoked by choosing a nodeName for the server pods that does not exist.
By commenting out the nodeName in the StatefulSet "server", the scenario can be switched to a state where the pod runs successfully and can serve as an endpoint for the service. A sketch of what such a manifest could look like follows below.
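For reference, a minimal sketch of such a manifest (not the original YAML, just an assumption based on the description above): a Service named pod-service selecting the server pods, a StatefulSet "server" whose pod template pins a nodeName that does not exist, and a client StatefulSet whose pod (client-0) has curl available. Image names and the bogus node name are placeholders.

```yaml
# Sketch only: Service "pod-service", StatefulSet "server" pinned to a
# non-existent node (error case), and a "client" pod that can run curl.
apiVersion: v1
kind: Service
metadata:
  name: pod-service
spec:
  selector:
    app: server
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: server
spec:
  serviceName: pod-service
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      # Comment out the next line to let the scheduler place the pod
      # and switch to the "working" scenario.
      nodeName: does-not-exist      # placeholder, intentionally invalid
      containers:
        - name: server
          image: nginx:1.25          # placeholder server image
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: client
spec:
  serviceName: client
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: curlimages/curl:8.7.1      # placeholder image that ships curl
          command: ["sh", "-c", "sleep 1000000"]
```

With this applied, `kubectl exec -it client-0 -- curl http://pod-service` should reproduce the connection failure while the server pod stays Pending on the non-existent node.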