How to set up round robin for microservices? #32
Maybe you can try
Put some load on the system and you should see it being balanced properly.
When I generate 10 messages per second, they still all go to the same instance, even after specifying the cluster-weight.
Okay. Load balancing only works when we are getting connections from the outside to the cluster. For load balancing inside the cluster, you need to create a pool of connections and send the messages in a round-robin fashion. See how to do it:

```javascript
// Build a pool of connections to the "log" service.
var loggers = [];
for (var lc = 0; lc < 10; lc++) {
  loggers.push(Cluster.discoverConnection('log'));
}

// Pick connections from the pool in a round-robin fashion.
var nextLogger = 0;
function sendLog(message) {
  var connection = loggers[nextLogger];
  nextLogger = (nextLogger + 1) % loggers.length;
  connection.call('log', message);
}

for (var i = 0; i < 10000; i++) {
  sendLog("my log message");
}
```

I'm not sure whether cluster should support this by default or whether it's a task for a separate package.
Hmm, ok, thanks, but that's not the solution I was looking for. I would prefer the cluster package to provide such functionality out of the box, because it knows how many instances of any particular service are available.
Hmm, yes, this seems to be expected behaviour. Maybe this is more of a PR-encouraged enhancement rather than a separate package. @arunoda, are there situations where you wouldn't want this happening?
I'd love this to be a part of cluster. The only way I can think of doing this is by pooling connections. Then it's pretty straightforward to implement something for methods, as I explained in the code. But the problem comes with subscriptions; I have no good solution for that.
Is it not effective to use the same technique, just with a 'publishers' pool?
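The idea can be sketched in plain JavaScript. Note this is only a model: the stub objects below stand in for real DDP connections (which you would obtain via something like `Cluster.discoverConnection('log')`), and their `subscribe` method just records its argument so the distribution is visible:

```javascript
// Hypothetical sketch: distribute subscriptions round robin across a
// pool of connections. The connections here are plain stub objects,
// not real DDP connections.
function createSubscriberPool(connections) {
  var next = 0;
  return {
    subscribe: function (name) {
      var conn = connections[next];
      next = (next + 1) % connections.length;
      return conn.subscribe(name);
    }
  };
}

// Stub connections that record which subscriptions they receive.
var conns = [[], []].map(function (log) {
  return {
    received: log,
    subscribe: function (name) { log.push(name); return name; }
  };
});

var pool = createSubscriberPool(conns);
pool.subscribe('logs');
pool.subscribe('logs');
pool.subscribe('logs');
pool.subscribe('logs');

console.log(conns[0].received.length); // 2
console.log(conns[1].received.length); // 2
```

Whether this helps depends on the open question above: for methods each call is independent, but a subscription is long-lived, so once it lands on one publisher it stays there.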
Maybe I need to think about that a bit.
I don't know the inner details of the cluster package, nor the publish/subscribe mechanism between client and server in Meteor, so I have no clue whether it is possible at all. But in Vert.x, which I use for another project, modules can publish/subscribe on the server side as well. This means that in the example above I would start 2 instances of the "log" module and let them both subscribe to the same address, e.g. "log". This way an application can be scaled very easily by simply adding extra instances of a module.
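For comparison, the Vert.x-style behaviour described above can be modelled with a toy event bus. This is not Vert.x code, just an illustration of point-to-point sends being delivered round robin among handlers registered on the same address:

```javascript
// Toy model of a server-side event bus: several handlers register on
// the same address; each send() goes to exactly one of them, chosen
// round robin.
function EventBus() {
  this.handlers = {}; // address -> array of handlers
  this.cursors = {};  // address -> index of next handler
}
EventBus.prototype.register = function (address, handler) {
  (this.handlers[address] = this.handlers[address] || []).push(handler);
  this.cursors[address] = this.cursors[address] || 0;
};
EventBus.prototype.send = function (address, message) {
  var hs = this.handlers[address];
  var i = this.cursors[address];
  this.cursors[address] = (i + 1) % hs.length;
  hs[i](message);
};

// Two "log module" instances subscribing to the same address.
var bus = new EventBus();
var log1 = [], log2 = [];
bus.register('log', function (m) { log1.push(m); });
bus.register('log', function (m) { log2.push(m); });

for (var n = 0; n < 10; n++) bus.send('log', 'msg' + n);
console.log(log1.length, log2.length); // 5 5
```

Adding a third instance would just mean one more `register('log', ...)` call; the sender never needs to know how many instances exist.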
It sounds like what you're describing here can be achieved using RPCs via Meteor.methods, whereas the issue @arunoda has raised relates to how internal pub/sub within the cluster would be load balanced, since you can subscribe to another DDP server's (or cluster service's) publication from the server. The Meteor docs have it listed as a 'client' feature, although another DDP server in this context is the client. I might submit a PR to address this documentation issue. The DDP protocol has a standard set of messages to manage the synchronisation of data between server and client, and it's very worthwhile to read through the docs to get a full understanding of its potential. Cluster refers to these as
Yes santo. What you are referring to is multicasting and publishing. But in Meteor, pub/sub refers to something else: it's for getting a subset of data.
Ok, I understand pub/sub works in another way in Meteor. The only point I'm trying to make is that it's not uncommon to implement the load balancing of (micro)services on the server side, on top of the clustering technology. And that's what I'm currently missing in the Meteor cluster package (cf. my initial post).
Is that true if I connect to services from the client like you did in Microservices - Beyond Basics?
@santo74 Have you implemented a solution to load balance internal requests? If so, I'm keen to know whether you have iterated on arunoda's snippet.
No, as I stated before, I hope to see this kind of functionality implemented in the cluster package itself rather than having to implement it myself.
Update: The conclusion is that there are still some important pieces missing. So yes, the cluster package offers a really neat solution for service discovery, but unfortunately the load balancing currently has some important limitations. |
@santo74 You could use the Cluster endpoints collection as a reactive data source for the pool, right? As for pub/sub, unless I'm mistaken about the issue, it would only impact server-to-server Meteor pub/sub. Is that what you're doing?
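As a sketch of that idea, a pool could be kept in sync with a changing endpoint list while still handing out connections round robin. All names below are hypothetical: the `connect` callback and the `added`/`removed` hooks stand in for whatever reactive observation the Cluster endpoints collection actually provides, and the "connections" are just the endpoint strings themselves:

```javascript
// Hypothetical sketch: a round-robin pool that is rebuilt as endpoints
// come and go. In a real setup, added/removed would be driven by
// observing the endpoints collection, and connect() would open a DDP
// connection; here connect() just returns the endpoint string.
function ReactivePool(connect) {
  this.connect = connect;
  this.connections = {}; // endpoint -> connection
  this.order = [];       // endpoints, in round-robin order
  this.next = 0;
}
ReactivePool.prototype.added = function (endpoint) {
  this.connections[endpoint] = this.connect(endpoint);
  this.order.push(endpoint);
};
ReactivePool.prototype.removed = function (endpoint) {
  delete this.connections[endpoint];
  this.order = this.order.filter(function (e) { return e !== endpoint; });
  this.next = 0; // reset cursor so it stays in range
};
ReactivePool.prototype.pick = function () {
  var endpoint = this.order[this.next];
  this.next = (this.next + 1) % this.order.length;
  return this.connections[endpoint];
};

// Usage with stub connections:
var pool = new ReactivePool(function (e) { return e; });
pool.added('log1:3001');
pool.added('log2:3002');
var first = pool.pick(), second = pool.pick(), third = pool.pick();
console.log(first, second, third); // log1:3001 log2:3002 log1:3001
pool.removed('log1:3001');
console.log(pool.pick()); // log2:3002
```

Removing an endpoint here also models the failover behaviour described in the original post: once an instance disappears, subsequent picks only see the survivors.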
Yes, of course I can use the cluster endpoints collection, but that's the whole point: to make the cluster package a real load-balancing tool - as is advertised here - it should do this out of the box. But please don't get me wrong: I really appreciate the work being done here and I understand it is still a relatively young project.
My comment was only looking at how it would be solved by the package itself. This is open source! Submit a PR if you get a solution that makes sense :-)
I'm experimenting with cluster and what I want to achieve is to be able to round robin requests between multiple instances of a particular micro service.
E.g. I have a "log" service which provides logging functionality to the cluster:
and a web service which connects to that log service:
As a test I start 2 instances of the log service (let's call them log1 and log2), each on a different port:
And 1 instance of the web service:
The web service can send log messages to the log service, but it's always the same instance (let's say log1) that's receiving the messages.
When I stop this instance (log1), the messages are being sent to the other one (log2).
So the failover is working, but in addition to this I also want some kind of load balancing between the two log instances so that they can both accept messages from the web service in a round robin fashion.
What am I doing wrong?