Load balancing to a backend websocket service using "least_conn" #1960
-
Hi, I am trying to set up the Ingress Controller to load balance to a websocket-backed service using the "least_conn" algorithm. To begin with, I have a service that accepts incoming websocket connections from clients. kube-proxy is currently responsible for load balancing the incoming websocket requests across the service's pods, and everything works fine with the round-robin algorithm that kube-proxy supports. I now want to replace kube-proxy with the NGINX Ingress Controller for load balancing the websocket connections to my backend pods. I configured all the necessary moving parts and got websocket load balancing to work with the NGINX Ingress Controller. However, the load balancing is still round-robin even though I set the lb-method to "least_conn". I am not sure if I missed a configuration option that enables the different load balancing methods. Any help is greatly appreciated. Here is the Ingress config.
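(The original Ingress manifest did not survive in this thread. Below is a minimal sketch of how the algorithm is typically selected with this controller's annotations, assuming the NGINXINC project and made-up host/service names such as `ws.example.com` and `ws-backend`.)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ws-ingress
  annotations:
    # Select the upstream load-balancing method (the OP reportedly set least_conn)
    nginx.org/lb-method: "least_conn"
    # Tell the controller which backend services speak WebSocket
    nginx.org/websocket-services: "ws-backend"
spec:
  ingressClassName: nginx
  rules:
    - host: ws.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ws-backend
                port:
                  number: 80
```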
-
@vebodank
-
@vebodank From an NGINX point of view, "least_conn" does not care about the protocol itself; NGINX will load balance/proxy accordingly.
-
@vebodank thanks. That is helpful. A few of the engineers and I took a look at your config and have some suggestions. We noticed that you are not using any keepalives in your upstreams, which we recommend. You can use the following annotation and specify your value:
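(The annotation referenced above was not captured in the thread; assuming it is this project's `nginx.org/keepalive` annotation, a sketch would look like the following, with `32` as a placeholder value.)

```yaml
metadata:
  annotations:
    # Number of idle keepalive connections kept open to each upstream server (placeholder value)
    nginx.org/keepalive: "32"
```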
More information here: NGINX Ingress will automatically upgrade the connections to websockets, as you can see with these two settings:
We are wondering if you are seeing the number of requests instead of the number of connections. I would start with the keepalives, use a basic setup like the one you have, and proceed to test again.
-
@vebodank
-
@vebodank I believe we know what is going on and why you are seeing the different behavior between NGINX Ingress OSS and NGINX Ingress Plus. It works fine with NGINX Ingress Plus because Plus does not have to do a reload when upstreams are changed (brought down, brought back online); this can be done dynamically. With NGINX Ingress Open Source, there has to be a reload when upstreams are changed, in your case when you bring one down and then bring it back online. That is important because when the reload occurs, a new set of worker processes is spawned that is not aware of any existing connections to the backend (those connections are handled by the previous worker processes). That said, when NGINX Ingress OSS is reloaded and the next 30 requests are sent in, it distributes those connections evenly across the three upstream pods. Let me know if that is helpful and answers your question.
-
If you don't mind, can you check mine? I am also confused about what is actually running on my ingress. I thought it was round-robin, but it seems it is not; it is ewma instead. Thank you.
-
@rthamrin Unfortunately, you are using a different NGINX Ingress controller project (https://kubernetes.github.io/ingress-nginx/deploy/). This GitHub repo is for the NGINXINC project.
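(For completeness, and not from this thread: in the kubernetes/ingress-nginx project that @rthamrin appears to be running, the algorithm is set through the controller's ConfigMap rather than the annotations above. A sketch, with the ConfigMap name and namespace assumed from that project's standard deploy manifests:)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Name and namespace are assumptions; use whatever ConfigMap your controller was started with
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Supported values in that project include "round_robin" (the default) and "ewma"
  load-balance: "round_robin"
```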