
Maintain deterministic order of CLUSTER SHARDS response #411

Closed
wants to merge 3 commits

Conversation

VoletiRam (Contributor)

Maintain a deterministic order for the CLUSTER SHARDS response. Currently we don't keep the shards/masters in sorted order, so the CLUSTER SHARDS response is ordered non-deterministically across nodes. Maintain a sorted list of master pointers, similar to replicas, and replace the current <shards, list[nodes]> dict, which is not suitable for sorting. Add a TOPOLOGY argument that yields a deterministic response by removing the replication offset and node health status from the CLUSTER SHARDS output. Sort the masters by node ID. Include the new CLUSTER SHARDS TOPOLOGY command in the cluster_config_consistent procedure to ensure thorough test coverage and to sanity-check cluster consistency.
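Since node IDs are fixed-length hex strings, the ordering itself can be a plain byte comparison. The sketch below is illustrative only and not the exact code from this PR; it assumes the clusterNode layout where name holds the node ID of length CLUSTER_NAMELEN.

/* Illustrative sketch, not the PR's exact code: order primaries
 * lexicographically by node ID so that every node renders CLUSTER SHARDS
 * in the same order. Node IDs are fixed-length hex strings, so memcmp
 * gives lexicographic order. */
#include <string.h>

static int comparePrimariesByNodeId(const void *a, const void *b) {
    const clusterNode *na = *(const clusterNode **)a;
    const clusterNode *nb = *(const clusterNode **)b;
    return memcmp(na->name, nb->name, CLUSTER_NAMELEN);
}

/* Usage (assumed array of primary pointers):
 *   qsort(primaries, num_primaries, sizeof(clusterNode *), comparePrimariesByNodeId);
 */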

Example response of CLUSTER SHARDS TOPOLOGY in a 2 primaries 2 replicas cluster.

Response from Primary 1:

127.0.0.1:6320> cluster shards topology
1) 1) "slots"
   2) 1) (integer) 501
      2) (integer) 16383
   3) "nodes"
   4) 1)  1) "id"
          2) "879df438dd67bd56e717d8400b718c743c4355d1"
          3) "port"
          4) (integer) 6321
          5) "ip"
          6) "127.0.0.1"
          7) "endpoint"
          8) "127.0.0.1"
          9) "role"
         10) "master"
      2)  1) "id"
          2) "4c04d2a090de3e7c9a375ad0ee2e25b52c31c310"
          3) "port"
          4) (integer) 6323
          5) "ip"
          6) "127.0.0.1"
          7) "endpoint"
          8) "127.0.0.1"
          9) "role"
         10) "replica"
2) 1) "slots"
   2) 1) (integer) 0
      2) (integer) 500
   3) "nodes"
   4) 1)  1) "id"
          2) "af1e6a659a85779864456efe6a323fa7c7eda187"
          3) "port"
          4) (integer) 6320
          5) "ip"
          6) "127.0.0.1"
          7) "endpoint"
          8) "127.0.0.1"
          9) "role"
         10) "master"
      2)  1) "id"
          2) "5f4de2e73e92b67c0d4adb0f5737776f5d14e137"
          3) "port"
          4) (integer) 6322
          5) "ip"
          6) "127.0.0.1"
          7) "endpoint"
          8) "127.0.0.1"
          9) "role"
         10) "replica"

Response from Primary 2:

127.0.0.1:6321> cluster shards topology
1) 1) "slots"
   2) 1) (integer) 501
      2) (integer) 16383
   3) "nodes"
   4) 1)  1) "id"
          2) "879df438dd67bd56e717d8400b718c743c4355d1"
          3) "port"
          4) (integer) 6321
          5) "ip"
          6) "127.0.0.1"
          7) "endpoint"
          8) "127.0.0.1"
          9) "role"
         10) "master"
      2)  1) "id"
          2) "4c04d2a090de3e7c9a375ad0ee2e25b52c31c310"
          3) "port"
          4) (integer) 6323
          5) "ip"
          6) "127.0.0.1"
          7) "endpoint"
          8) "127.0.0.1"
          9) "role"
         10) "replica"
2) 1) "slots"
   2) 1) (integer) 0
      2) (integer) 500
   3) "nodes"
   4) 1)  1) "id"
          2) "af1e6a659a85779864456efe6a323fa7c7eda187"
          3) "port"
          4) (integer) 6320
          5) "ip"
          6) "127.0.0.1"
          7) "endpoint"
          8) "127.0.0.1"
          9) "role"
         10) "master"
      2)  1) "id"
          2) "5f4de2e73e92b67c0d4adb0f5737776f5d14e137"
          3) "port"
          4) (integer) 6322
          5) "ip"
          6) "127.0.0.1"
          7) "endpoint"
          8) "127.0.0.1"
          9) "role"
         10) "replica"

Ref: #114

codecov bot commented May 1, 2024

Codecov Report

Attention: Patch coverage is 93.97590% with 10 lines in your changes missing coverage. Please review.

Project coverage is 70.28%. Comparing base (f2bbd1f) to head (d2303cf).
Report is 92 commits behind head on unstable.

Files Patch % Lines
src/cluster_legacy.c 93.97% 10 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##           unstable     #411      +/-   ##
============================================
- Coverage     70.30%   70.28%   -0.03%     
============================================
  Files           111      111              
  Lines         60300    60285      -15     
============================================
- Hits          42393    42370      -23     
- Misses        17907    17915       +8     
Files Coverage Δ
src/cluster_legacy.c 86.05% <93.97%> (+0.19%) ⬆️

... and 11 files with indirect coverage changes

hpatro (Contributor) commented May 1, 2024

Thanks @VoletiRam for the PR.

There were discussions around using CANONICAL / DETERMINISTIC terminology for the filter. I think both of those are confusing from the perspective of a user who is not bothered about the order. Rather, I feel TOPOLOGY is easier to understand, since we provide a subset of information regarding the cluster topology. We also initially wanted the command to be named CLUSTER TOPOLOGY (redis/redis#10168).

@valkey-io/core-team Please take a look.

Comment on lines 1633 to 1545
 list *clusterGetNodesInMyShard(clusterNode *node) {
-    sds s = sdsnewlen(node->shard_id, CLUSTER_NAMELEN);
-    dictEntry *de = dictFind(server.cluster->shards,s);
-    sdsfree(s);
-    return (de != NULL) ? dictGetVal(de) : NULL;
+    clusterNode *master = clusterNodeGetMaster(node);
+
+    list *l = listCreate();
+    listAddNodeTail(l, master);
+    for (int i = 0; i < master->numslaves; i++) {
+        listAddNodeTail(l, master->slaves[i]);
+    }
+    return l;
 }
Contributor:
This operation becomes O(N) and adds some memory allocation. However, I'm not too bothered about it since this code flow is not on the hot path.

hwware (Member) commented May 1, 2024

Just want to confirm with you @VoletiRam: is the goal of adding the new "topology" parameter for cluster shards that, from every client's view, the output for the 2 masters and 2 replicas cluster is always:

127.0.0.1:6321> cluster shards topology
1) 1) "slots"
   2) 1) (integer) 501
      2) (integer) 16383
   3) "nodes"
   4) 1)  1) "id"
          2) "879df438dd67bd56e717d8400b718c743c4355d1"
          3) "port"
          4) (integer) 6321
             .....
      2)  1) "id"
          2) "4c04d2a090de3e7c9a375ad0ee2e25b52c31c310"
          3) "port"
          4) (integer) 6323
             ......
2) 1) "slots"
   2) 1) (integer) 0
      2) (integer) 500
   3) "nodes"
   4) 1)  1) "id"
          2) "af1e6a659a85779864456efe6a323fa7c7eda187"
          3) "port"
          4) (integer) 6320
      2)  1) "id"
          2) "5f4de2e73e92b67c0d4adb0f5737776f5d14e137"
          3) "port"
          4) (integer) 6322

BTW, I am reviewing this PR's code. Thanks.

@@ -63,6 +63,8 @@ void clusterUpdateState(void);
int clusterNodeCoversSlot(clusterNode *n, int slot);
list *clusterGetNodesInMyShard(clusterNode *node);
int clusterNodeAddSlave(clusterNode *master, clusterNode *slave);
int clusterNodeAddMaster(clusterNode *master);
Member:
I actually don't like this abstraction. I would like it to be conceptually possible for a shard to exist without a master. @PingXie Thoughts?

Contributor:
Could you explain a bit more how such a situation can arise?

Member:
I prefer updating this function name to int clusterNodeAddToMasters(clusterNode *node);

Member:
I would like it to be conceptually possible for a shard to exist without a master.

This abstraction seems fine from a quick look. Basically, the contract is to keep track of all the primaries in the cluster in a deterministic order. Even if a primary-less shard could exist, the contract would be upheld regardless I think.

This also reminds me that I will need to create a PR to remove all references in code to "m/s".

I will take a closer look at this PR after merging #245

Contributor Author:
Thank you for taking a look. Will wait until you review the PR.

hpatro (Contributor) commented May 1, 2024

@hwware The ordering is based on the primary node id, lexicographically, irrespective of the topology filtering. The topology filtering helps by removing the replication offset/health state, which can vary across nodes.

VoletiRam (Contributor Author):

Thank you @hwware. As @hpatro pointed out, the view will be the same for both CLUSTER SHARDS and CLUSTER SHARDS TOPOLOGY from every client. The TOPOLOGY option will exclude the replication offset and node health status from the shards response. This change aims to eliminate the differing fields, so the shards response can be compared across cluster nodes via simple string matching, avoiding the need for parsing.

zuiderkwast (Contributor):

@PingXie This PR has large changes in cluster_legacy.c. I think we should merge your PR #245 before this one. Please check if you think everything will be different after merging your PR.

zuiderkwast requested a review from PingXie on May 2, 2024 08:02
-void addShardReplyForClusterShards(client *c, list *nodes) {
-    serverAssert(listLength(nodes) > 0);
-    clusterNode *n = listNodeValue(listFirst(nodes));
+void addShardReplyForClusterShards(client *c, clusterNode* n, int topology) {
barshaul (Contributor) commented May 2, 2024:
minor: clusterNode* n -> clusterNode* primary to clarify that this function expects to receive a primary node

barshaul (Contributor) commented May 2, 2024

@hpatro @VoletiRam

Does it filter out failed and loading nodes?

It would be ideal if the client's topology map could be solely based on the results of this command, eliminating the need for subsequent checks on the nodes' status.

hwware (Member) commented May 2, 2024

@hwware The ordering is based on primary node id lexicographically irrespective of the topology filtering. topology filtering helps with removing the state of replication offset/health which can vary across node(s).

Thanks for clarifying. Then, according to this rule (sorted by primary node id lexicographically), all clients should get the same view from any node. (At least the primary output is the same.)

VoletiRam (Contributor Author) commented May 2, 2024

@barshaul We are only filtering out fields that contribute to non-deterministic output, not a node's information based on its health status. I think the ask was to eliminate volatile fields that can vary across clients; at least, that's not clear from the discussion in #114. We can filter out nodes' information as well if everyone agrees.

src/cluster.c Outdated
(c->argc == 2 || c->argc == 3))
{
/* CLUSTER SHARDS [TOPOLOGY] */
int topology = 1;
Member:
Usually I set this kind of boolean variable's default value to 0 (the variable here is close to a boolean).
But it is not a big issue, I think.
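A minimal sketch of the parsing the reviewer suggests here: default the flag to 0 and set it only when the optional TOPOLOGY token is present. Only a fragment of the PR is shown above, so the surrounding control flow and error handling below are assumptions, not the PR's actual code.

/* Sketch only (assumed surrounding code): CLUSTER SHARDS [TOPOLOGY],
 * with the flag defaulting to 0 as suggested above. */
int topology = 0;
if (c->argc == 3) {
    if (!strcasecmp(c->argv[2]->ptr, "topology")) {
        topology = 1;
    } else {
        addReplyErrorObject(c, shared.syntaxerr);
        return;
    }
}
clusterCommandShards(c, topology);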

@@ -4,15 +4,20 @@
     "complexity": "O(N) where N is the total number of cluster nodes",
     "group": "cluster",
     "since": "7.0.0",
-    "arity": 2,
+    "arity": -2,
     "container": "CLUSTER",
     "function": "clusterCommand",
     "command_flags": [
Member:
Because you add one more argument here, you need to add a "history" item here, for example:

"history": [
[
"2.8.0",
"Added the -2 reply."
]
],

-void clusterCommandShards(client *c) {
-    addReplyArrayLen(c, dictSize(server.cluster->shards));
+void clusterCommandShards(client *c, int topology) {
+    serverAssert(server.cluster->nummasters > 0);
Member:
I do not think the server.cluster->nummasters > 0 check is a must. 0 should work too, what do you think?

Contributor Author:
Thank you very much for reviewing the changes. Will address the suggestions once everyone finishes reviewing the PR.

hpatro (Contributor) commented May 8, 2024

@PingXie @madolson @barshaul

A few questions we need to reach consensus on:

  1. With this filtering option introduced, does CLUSTER SHARDS command have any significance in itself?
  2. @barshaul Regarding removing replica(s) from response which are not ready to serve any data, would it be fine to keep the health data? I presume health information doesn't change that often compared to replication-offset.
  3. Should we filter out primaries not serving any slot?

VoletiRam (Contributor Author) commented May 9, 2024

Thank you @hpatro for raising the questions that need consensus. I want to add a couple of questions as well.

I am checking a few scenarios with 2 primaries - 2 replicas in a 4-node cluster, with slot coverage on the primaries.

  1. If a primary fails, the replica will never fail over, as there are not enough votes to win the election (it needs at least 2 votes and there are not enough responsive primaries with slot coverage in the cluster). Quorum is never reached to mark the primary as failed. Except for CLUSTER NODES, nothing indicates the failed/disconnected primary. How do we want the topology response to look in this scenario?

  2. Let's assume a replica failed over a primary. If the old primary is still part of the cluster but unresponsive/failed (which means its role is still primary and it is marked fail with no slot coverage), the current CLUSTER SHARDS response shows two primaries (the old one marked fail and the new one online) but the slot coverage shows as empty, which is inaccurate. How do we want the topology response to look in this scenario?

With my implementation, the empty slot coverage issue in ##5 can be solved, as we go over each master from the masters list and print the corresponding slots, but the response will still show the old master with empty slots and a fail health status unless we decide to filter out either master nodes marked fail or masters with no slot coverage. Please share your opinion.

PingXie (Member) commented May 9, 2024

With this filtering option introduced, does CLUSTER SHARDS command have any significance in itself?

Not sure if I understand your question. Can you elaborate?

@barshaul Regarding removing replica(s) from response which are not ready to serve any data, would it be fine to keep the health data? I presume health information doesn't change that often compared to replication-offset.

Based on the use case as described in #114 , including health makes sense to me as it is an important factor to consider for the "routing" decision.

Should we filter out primaries not serving any slot?

I would say yes, since the uber use case is to facilitate routing decisions in the client.

I am checking few scenarios with 2 primaries - 2 replicas in a 4 node cluster with slot coverage on primaries.

I don't think 2-shard deployments are legit given the current design/implementation. We need to officially support 2-shard clusters first and then it makes sense to discuss the output of cluster shards in this case.

@zuiderkwast should we resurrect the "voting replicas" discussion? redis/redis#12390

hpatro (Contributor) commented May 9, 2024

With this filtering option introduced, does CLUSTER SHARDS command have any significance in itself?

Not sure if I understand your question. Can you elaborate?

If all clients would prefer using the CLUSTER SHARDS TOPOLOGY command, what will be the utility of the CLUSTER SHARDS command? A few alternatives I can think of apart from the above solution:

  1. Deprecate CLUSTER SHARDS command and introduce CLUSTER TOPOLOGY command with the trimmed down version.
  2. Make a breaking change in Valkey 8 and update CLUSTER SHARDS command itself and remove replication-offset from it (I'm not aware of the client adoption and version handling).

hwware (Member) commented May 9, 2024

With this filtering option introduced, does CLUSTER SHARDS command have any significance in itself?

Not sure if I understand your question. Can you elaborate?

If all clients would prefer using CLUSTER SHARDS TOPOLOGY command, what will be the utility of CLUSTER SHARDS command ? Few alternatives I can think of apart from the above solution:

  1. Deprecate CLUSTER SHARDS command and introduce CLUSTER TOPOLOGY command with the trimmed down version.
  2. Make a breaking change in Valkey 8 and update CLUSTER SHARDS command itself and remove replication-offset from it (I'm not aware of the client adoption and version handling).

I do not think we should deprecate the CLUSTER SHARDS command. Clients would need to remember one more command and, in the worst case, update their client-side code as well.

Thus my suggestion is:
If a client just calls CLUSTER SHARDS, the response should behave the same as it does now.
If a client calls CLUSTER SHARDS TOPOLOGY, the result should be the one from this PR.

hwware (Member) commented May 9, 2024

Thank you @hpatro for raising the questions that need consensus. I want to add couple of questions as well.

I am checking few scenarios with 2 primaries - 2 replicas in a 4 node cluster with slot coverage on primaries.

  1. If a primary is failed, the replica will never failover as there are not enough votes to win the election (need atleast 2 votes and there are not enough primaries with slot coverage in the cluster that are responsive). The quorum would never reach to mark the primary as failed. Except CLUSTER NODES, nothing indicates about failed/disconnected primary. How do we want the topology response in this scenario?
  2. Let's assume replica failed over a primary. If old primary is still part of cluster but not responsive/failed (which means the role is still primary and marked as fail with no slot coverage), currently in CLUSTER SHARDS response, we show two primaries (old is marked fail and new one is online) but slots coverage will show as empty, which is inaccurate. How do we want the topology response in this scenario?

With my implementation, the slot coverage empty issue in ##5 can be solved as we go over each master from masters list and print corresponding slots, but it will still show old master in the response with empty slots and fail health status unless we decide to filter out either master node if marked fail or master with no slot coverage. Please share your opinion.

In fact, 2 primaries - 2 replicas, 2 primaries - 0 replicas, and 2 primaries - 4 replicas clusters are totally different.

Case 1: 2 primaries - 0 replicas cluster. If the client sets cluster-require-full-coverage to no in the conf file, the cluster still works even if one primary node fails.

Case 2: 2 primaries - 2 replicas cluster: if any primary fails, no vote happens, and the replica can fail over immediately.

Case 3: 2 primaries - 4 replicas cluster: a vote happens if any primary fails.

So I agree with Ping: let us support 2-shard clusters first, then discuss the output of cluster shards [topology].

zuiderkwast (Contributor) commented May 10, 2024

I don't think 2-shard deployments are legit given the current design/implementation. We need to officially support 2-shard clusters first and then it makes sense to discuss the output of cluster shards in this case.

@zuiderkwast should we resurrect the "voting replicas" discussion?

@PingXie Whether replicas can vote or whether the cluster has quorum to perform failovers, or even what kind of consensus algorithm is used, should be irrelevant to the clients. (It's even possible to have some external watchdog that performs manual failover.) So let's decouple those discussions from this PR?

Based on the use case as described in #114 , including health makes sense to me as it is an important factor to consider for the "routing" decision.

I don't think clients should make their own decisions about the health of nodes. That's something the cluster does for them. The clients should only be concerned with routing according to what the cluster tells them. For this, there's no need to include shards without slots. Maybe it's better to exclude them, because such nodes are usually going to be taken down or are just being set up and not really ready to be used for pubsub and other stuff clients may want to send to them.

To summarize: I think CLUSTER SHARDS TOPOLOGY should return no more info than what's included in CLUSTER SLOTS (just in a different format).

I do not think we should deprecate CLUSTER SHARDS command. Client need to remember one more command and the worse case is to update the client side code as well.

Thus my suggestion is:
If client just call CLUSTER SHARDS, the response should be the same behavior as now.
If client call CLUSTER SHARDS TOPOLOGY, the result should be from this PR.

I agree with @hwware about this. If clients have started using CLUSTER SHARDS, we can let them do that. Let's not break it.

madolson (Member) commented May 10, 2024

To summarize: I think CLUSTER SHARDS TOPOLOGY should return no more info than what's included in CLUSTER SLOTS (just in a different format).

If we accept this premise, I think we should consider that maybe we are trying to force CLUSTER SHARDS to be something it isn't. CLUSTER SHARDS was supposed to solve three things:

  1. Be more densely encoded for large highly fragmented clusters. I.e., where slots are distributed in such a way that they don't form continuous ranges. This causes cluster slots to print a lot of duplicate information.
  2. Be a more readable and extensible version of cluster nodes for clients that want more information (like health and offset) but out of the text based format of CLUSTER NODES. It was also supposed to be for humans to see the status of a cluster.
  3. Unwind some poor choices we made historically about "preferred" endpoints and make it easier for clients to see the "whole" picture about networking. I don't think SHARDS has really been that successful here.

It seems like we are saying that clients just shouldn't care about all the extra data provided by CLUSTER NODES. In that world, why wouldn't we do something like CLUSTER SLOTS COMPACT that attempts to help CLUSTER SLOTS solve problem 1? We could change the return type from a single start and end to an array of start/end ranges. Then clients can more easily adopt the new code? With respect to 3, I guess clients already have that baked in, so do we need to fix it?

The asks from @barshaul are basically, "I don't want any more information, I just want to know what slots are healthy and able to be served from". That is what CLUSTER SLOTS does.

So we can make CLUSTER SHARDS more deterministic (in ordering); that's still a nice property for doing diffs. But I'm not sure I'm convinced anymore that CLUSTER SHARDS TOPOLOGY is a good idea.

zuiderkwast (Contributor):

Yes (1) was what I meant, but I wasn't completely aware of the background and details.

It seems like the main point of this new CLUSTER SHARDS variant is that it's deterministic, so that you (or a test case) can check that the nodes' views of the cluster are consistent. This isn't the use case for client slot routing. It's rather a use case for test cases and for admins, to check that the cluster converges after adding/removing nodes, slot migrations, etc. If it's deterministic for a healthy cluster even with health info included, then I'm not going to argue against it.

It can be used by clients too, just to save some bytes, but if some clients feel they want more info, they'll just use the full version of the command, or CLUSTER NODES. That can't be helped.

So I guess the question should be: How common or important is it for cluster admins to check that a cluster converges in this way?

(In our own test framework we can solve it in some other way if it's just for us.)

madolson (Member) commented May 12, 2024

So I guess the question should be: How common or important is it for cluster admins to check that a cluster converges in this way?

I've done a fair amount of "diff" between various cluster outputs, and usually have to do some pre-processing to make sure they agree. It would be nice if the node ordering was the same in that case. You could then trivially ignore the fields that are known to be slightly different (replication offset).

madolson added the major-decision-pending (Major decision pending by TSC team) label on May 15, 2024
madolson (Member) commented May 15, 2024

To make my suggestion about cluster slots more concrete, I'm proposing a change so that the response of cluster slots becomes:

> CLUSTER SLOTS
1) 1) 1) (integer) 0  -- Start of range 1
      2) (integer) 10000 -- Start of range 2
   2) 1) (integer) 5460 -- End of range 1
      2) (integer) 12000 -- End of range 2
   3) 1) "127.0.0.1"
      2) (integer) 30001
      3) "09dbe9720cda62f7865eabc5fd8857c5d2678366"
      4) 1) hostname
         2) "host-1.valkey.example.com"
   4) 1) "127.0.0.1"
      2) (integer) 30004
      3) "821d8ca00d7ccf931ed3ffc7e3db0599d2271abf"
      4) 1) hostname
         2) "host-2.valkey.example.com"
2) 1) 1) (integer) 5461 -- Start of range 1
      2) (integer) 12001 -- Start of range 2
   2) 1) (integer) 9999 -- End of range 1
      2) (integer) 16383 -- End of range 2
   3) 1) "127.0.0.1"
      2) (integer) 30002
      3) "c9d93d9f2c0c524ff34cc11838c2003d8c29e013"
      4) 1) hostname
         2) "host-3.valkey.example.com"
   4) 1) "127.0.0.1"
      2) (integer) 30005
      3) "faadb3eb99009de4ab72ad6b6ed87634c7ee410f"
      4) 1) hostname
         2) "host-4.valkey.example.com"

Besides that, it behaves exactly the same as CLUSTER SLOTS. Most likely by adding an argument like CLUSTER SLOTS PACKED or something.
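For illustration, here is a hedged sketch of how a client could consume the proposed packed shape using hiredis; the PACKED variant is only a proposal in this thread, so the branch on the reply type (array of ranges vs. single integers) is an assumption about how a client might stay compatible with both shapes.

/* Hypothetical client-side parsing of the packed CLUSTER SLOTS format
 * proposed above; not shipped code. */
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *ctx = redisConnect("127.0.0.1", 6379);
    if (ctx == NULL || ctx->err) return 1;

    redisReply *r = redisCommand(ctx, "CLUSTER SLOTS"); /* PACKED variant is hypothetical */
    if (r == NULL || r->type != REDIS_REPLY_ARRAY) goto done;

    for (size_t i = 0; i < r->elements; i++) {
        redisReply *shard = r->element[i];
        if (shard->elements < 3) continue;
        redisReply *starts = shard->element[0];
        redisReply *ends = shard->element[1];
        /* In the proposed packing, starts/ends are parallel arrays of range
         * boundaries; in the current format they are single integers, so a
         * client could branch on the reply type to support both. */
        if (starts->type == REDIS_REPLY_ARRAY) {
            for (size_t j = 0; j < starts->elements; j++) {
                printf("shard %zu range %lld-%lld\n", i,
                       starts->element[j]->integer, ends->element[j]->integer);
            }
        } else {
            printf("shard %zu range %lld-%lld\n", i, starts->integer, ends->integer);
        }
    }
done:
    if (r) freeReplyObject(r);
    redisFree(ctx);
    return 0;
}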

zuiderkwast (Contributor):

@madolson I don't think the reason clients haven't adopted CLUSTER SHARDS (added in 7.0) is that it's hard to parse. The reason is rather that clients want to be backward compatible and support old Redis versions. If we add CLUSTER SLOTS PACKED, it will have the same problem: Clients can only use it if they know the server supports it, and then they still need a fallback for versions that don't.

Once Redis 6.2 and all Redis 6-compatible services are EOL (or about the time of Valkey 9 is released), then all deployments support CLUSTER SHARDS and then we can start expecting clients to switch to CLUSTER SHARDS.

madolson (Member):

I don't think the reason clients haven't adopted CLUSTER SHARDS (added in 7.0) is that it's hard to parse. The reason is rather that clients want to be backward compatible and support old Redis versions. If we add CLUSTER SLOTS PACKED, it will have the same problem: Clients can only use it if they know the server supports it, and then they still need a fallback for versions that don't.

I agree! People don't want to use the CLUSTER SHARDS we implemented, for the reasons that Bar mentioned, so if we were to introduce a net new variant CLUSTER SHARDS TOPOLOGY, we are in the same situation of asking clients to move again. I'm saying that maybe CLUSTER SHARDS as a client command was a mistake; the way it was structured was almost entirely to make it more likely to be adopted by a single client, lettuce, since they wanted weird extra information. What I got from this thread, though, is that we think clients shouldn't be collecting more information and should just rely on the flat response from CLUSTER SLOTS.

I suppose there is another option. If we implement client capability functionality, we could make a change that allows clients to "opt in" to the new format I proposed. If, at startup, they send CLIENT CAPA PACKED-SLOTS, then we can respond with the packed response type. Clients can check during parsing whether the response is "integer" or "array" encoded, and handle it accordingly.

Once Redis 6.2 and all Redis 6-compatible services are EOL (or about the time of Valkey 9 is released), then all deployments support CLUSTER SHARDS and then we can start expecting clients to switch to CLUSTER SHARDS.

I don't agree this will happen. Lots of people will continue to use old versions because they will be supported.

zuiderkwast (Contributor):

Regarding this PR: Can we just settle with sorting what can be sorted in CLUSTER SHARDS? No new argument. That's my vote. Then we document what needs to be ignored when comparing the result from two different nodes. That means a doc PR.

madolson (Member):

@VoletiRam @hpatro Can you review what Victor posted in the previous message? Instead of adding a new command, let's just make the existing version deterministically ordered, without any changes to the arguments.

 }

 /* Add to the output buffer of the given client, an array of slot (start, end)
  * pair owned by the shard, also the primary and set of replica(s) along with
  * information about each node. */
 void clusterCommandShards(client *c) {
-    addReplyArrayLen(c, dictSize(server.cluster->shards));
+    addReplyArrayLen(c, server.cluster->nummasters);
Member:
This change becomes much simpler if we replace the dict with a RAX, instead of maintaining this list. It also preserves the structure we built up, which is the Cluster->Shards->nodes mapping.

Contributor Author:
I will explain the rationale for deciding to maintain the pointers to masters instead of shards->list-of-nodes.

We initially thought of replacing the dict with a rax, but that only helps solve one problem. We can maintain sorted shard IDs with a rax, but we still need the corresponding list of nodes to be sorted if we want to uphold the shards->nodes mapping; otherwise, the list of nodes is useless. We are currently using shards->nodes only for the CLUSTER SHARDS command and not anywhere else. Having rax, list, and clusterNode overhead to maintain this list just for one command doesn't seem right when it doesn't come with significant performance or memory benefits over an array of pointers to masters. We felt that maintaining the array of pointers to masters is a reasonable tradeoff. Technically, we still honored the shardId->master contract through the pointers to masters.

Member:
We are currently using shards->nodes only for the CLUSTER SHARDS command and not anywhere else. Having RAX, list, and clusterNode overhead to maintain this list just for one command doesn't seem right when it doesn't come with significant performance or memory benefits over an array of pointers to masters. We felt that maintaining the array of pointers to masters is a reasonable tradeoff. Technically, we still honored the shardId->master contract through pointers to masters.

The thinking was that we would like to use the SHARD ID more in the future as an O(1) lookup, as it's the more logical identifier as opposed to just the Primary ID.

Comment on lines 14 to 17
"command_tips": [
"NONDETERMINISTIC_OUTPUT"
],
"reply_schema": {
Member:
The replication offset is still effectively non-deterministic, so I think we need to leave this flag. It's not used within the engine to determine anything, so it should be OK to leave.

@@ -285,3 +285,24 @@ test "CLUSTER MYSHARDID reports same shard id after cluster restart" {
assert_equal [dict get $node_ids $i] [R $i cluster myshardid]
}
}

test "Deterministic order of CLUSTER SHARDS response" {
Member:
This test isn't part of the CI since it uses the legacy clustering test framework. We should either move cluster-shards to the new framework or just move this part of the file over and we can remove the rest of the file in a separate PR.

hpatro (Contributor) commented May 20, 2024

@VoletiRam One of the things that came up while discussing with @madolson: we could sort cluster->nodes (maybe use a rax?) and pull out the primaries from that. With that we could avoid these additional pointers. Could you check if that's feasible?

bbarani commented Jun 25, 2024

@VoletiRam One of the thing which came up while discussing with @madolson we could sort the cluster->nodes (maybe use rax?) and pull out primaries from that. With that we could avoid these additional pointers. Could you check if that's feasible?

@VoletiRam do you have any updates here?

Ram Prasad Voleti added 2 commits July 5, 2024 03:35
Maintain deterministic order of CLUSTER SHARDS response. Currently we
don't maintain the shards/masters in sorted fashion and hence we get the
order of CLUSTER SHARDS response non-deterministic on different nodes.
Maintain the sorted Masters list of pointers, similar to replicas,
and get rid of <shards, list<nodes>> dict which is not suitable
for sorting. Add TOPOLOGY argument to get the deterministic response
which would remove the replication offset and node health status from
cluster shards response. Sort the nodes based on the node Id. Use it
in proc `cluster_config_consistent` for the test coverage and sanity
purpose.

Signed-off-by: Ram Prasad Voleti <[email protected]>
Remove topology argument and cleanup related code changes.

Signed-off-by: Ram Prasad Voleti <[email protected]>
VoletiRam force-pushed the deterministic_shards branch 5 times, most recently from 45b7926 to 20d8225 on July 5, 2024 08:20
Replace Dict with Rax for Cluster Nodes and construct primaries list on the
go, instead of maintaining shards/masters list.

Signed-off-by: Ram Prasad Voleti <[email protected]>
VoletiRam (Contributor Author):

Sorry for the delayed response; I was busy with other commitments at work. I addressed the comments. I replaced the dict data structure with a rax for cluster->nodes, and constructed the list of primaries from it when the CLUSTER SHARDS command is requested.

Please review whenever you get a chance @hpatro @madolson

hpatro (Contributor) commented Aug 8, 2024

@madolson we would still want to improve CLUSTER SHARDS, right? I think we should. I would like to understand the @valkey-io/core-team stance before diving deep into the PR.

enjoy-binbin (Member):

Yes, I am in for the CLUSTER SHARDS improvement.

hpatro self-requested a review on August 9, 2024 20:51
hpatro (Contributor) left a comment:

Overall LGTM. There are plenty of touchpoints (iteration over the rax), but the idea is to replace the dict with a rax for maintaining the cluster node information so that primaries/replicas come out in lexicographic ordering.

Comment on lines +5588 to +5596
list *primaries = clusterGetPrimaries();
addReplyArrayLen(c, listLength(primaries));
listIter li;
listRewind(primaries, &li);
for (listNode *ln = listNext(&li); ln != NULL; ln = listNext(&li)) {
    clusterNode *n = listNodeValue(ln);
    addShardReplyForClusterShards(c, n);
}
dictReleaseIterator(di);
listRelease(primaries);
Contributor:
This is the crux of the change. Here we would get primaries in lexicographical ordering due to underlying RAX structure.
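For reference, a hedged sketch of how a clusterGetPrimaries() helper could produce this ordering by walking a rax keyed by node ID; the PR's exact implementation is not shown above, and field names such as slaveof and the server.cluster->nodes rax are assumptions based on the surrounding diff.

/* Hedged sketch, not necessarily the PR's exact code: collect primaries by
 * iterating the rax of cluster nodes keyed by node ID. rax iteration visits
 * keys in lexicographic order, so the resulting list is deterministic. */
list *clusterGetPrimaries(void) {
    list *primaries = listCreate();
    raxIterator it;
    raxStart(&it, server.cluster->nodes);
    raxSeek(&it, "^", NULL, 0); /* seek the smallest key */
    while (raxNext(&it)) {
        clusterNode *n = it.data;
        if (n->slaveof == NULL) listAddNodeTail(primaries, n); /* primaries only */
    }
    raxStop(&it);
    return primaries;
}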

@@ -95,6 +95,7 @@
  */

 #define RAX_NODE_MAX_SIZE ((1 << 29) - 1)
+#define RAX_OK 1
Contributor:
This is not used in the rax-related methods' return statements?

hpatro (Contributor) commented Aug 9, 2024

@PingXie Could you also take a look at this? It removes one of the abstractions you had introduced, shards -> nodes.

madolson (Member) commented Aug 9, 2024

@madolson we would still want to improve CLUSTER SHARDS, right?

Yeah. I'm still fine with improving this. I wasn't sure I was happy with removing the shards abstraction from the code internals, though, since we intend to add it in the migrate slots command.

hpatro (Contributor) commented Aug 9, 2024

I wasn't sure I was happy with removing the shards abstraction though from the code internals since we intend to add it in the migrate slots command.

We could maybe still keep the abstraction with the small overhead we were already paying.

PingXie (Member) commented Aug 12, 2024

@PingXie Could you also take a look at this? It removes one of the abstraction you had introduced of shards -> nodes.

I like the high level idea of single-sourcing the shard membership management.

However, I have a few questions regarding the impact of switching from a dictionary to a Rax:

  1. I believe Rax may result in higher cache misses during lookups. This should not be an issue on smaller clusters but could be an issue for large clusters with 100s of nodes where we can see higher cluster traffic. Has any performance evaluation been done on the changes?

  2. Additionally, the effect on random node selection during gossiping is worth further consideration. The overhead will also increase, and I’m curious whether this change might also affect the distribution of random node selections.

zuiderkwast linked an issue on Aug 12, 2024 that may be closed by this pull request
hpatro (Contributor) commented Aug 16, 2024

  1. I believe Rax may result in higher cache misses during lookups. This should not be an issue on smaller clusters but could be an issue for large clusters with 100s of nodes where we can see higher cluster traffic. Has any performance evaluation been done on the changes?
  2. Additionally, the effect on random node selection during gossiping is worth further consideration. The overhead will also increase, and I’m curious whether this change might also affect the distribution of random node selections.

These are good callouts but might be difficult to measure. @PingXie Do you have any suggestion/scenario(s) in mind to reproduce? Will be helpful for @VoletiRam.

PingXie (Member) commented Aug 21, 2024

  1. I believe Rax may result in higher cache misses during lookups. This should not be an issue on smaller clusters but could be an issue for large clusters with 100s of nodes where we can see higher cluster traffic. Has any performance evaluation been done on the changes?
  2. Additionally, the effect on random node selection during gossiping is worth further consideration. The overhead will also increase, and I’m curious whether this change might also affect the distribution of random node selections.

These are good callouts but might be difficult to measure. @PingXie Do you have any suggestion/scenario(s) in mind to reproduce? Will be helpful for @VoletiRam.

Can we decouple the two changes: single-sourcing the shard membership management and switching to rax? I think we need some time to better understand the impact of rax, but we could benefit from the single-sourcing change sooner.

For the performance impact analysis of Rax, I'm thinking about writing a small program that constructs a Rax tree with 1000 cluster nodes and performs queries on it. To simulate real-world conditions, we'll periodically flush the CPU cache by writing a large amount of data to a 32 MB memory block. We'll repeat the same process for a hash table-based implementation. Afterward, we can compare the aggregated lookup times, excluding the memory copy time.

For the distribution analysis, we can take a similar approach by having the program log its random node selections. We can then generate charts to compare the distribution patterns between the Rax-based and dictionary-based implementations.

Thoughts?
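A minimal sketch of the kind of standalone benchmark described above, assuming only the rax API from rax.h. The dict-based half of the comparison is omitted, and this version includes the cache-flushing memset in the measured time; a more careful harness would time the lookups alone.

/* Sketch of the proposed micro-benchmark: 1000 random 40-char node IDs in a
 * rax, with a 32 MB memset between rounds to simulate cold caches. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "rax.h"

#define NODES 1000
#define ID_LEN 40
#define CACHE_FLUSH_BYTES (32 * 1024 * 1024)

static void make_node_id(char *out) {
    static const char hex[] = "0123456789abcdef";
    for (int i = 0; i < ID_LEN; i++) out[i] = hex[rand() % 16];
}

int main(void) {
    static char ids[NODES][ID_LEN];
    rax *rt = raxNew();
    for (int i = 0; i < NODES; i++) {
        make_node_id(ids[i]);
        raxInsert(rt, (unsigned char *)ids[i], ID_LEN, (void *)(long)i, NULL);
    }

    char *flusher = malloc(CACHE_FLUSH_BYTES); /* large write to evict cached rax nodes */
    long hits = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int round = 0; round < 1000; round++) {
        memset(flusher, round & 0xff, CACHE_FLUSH_BYTES);
        for (int i = 0; i < NODES; i++) {
            if (raxFind(rt, (unsigned char *)ids[i], ID_LEN) != raxNotFound) hits++;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("rax: %ld hits, %.3f s total (memset time included)\n", hits, secs);

    free(flusher);
    raxFree(rt);
    return 0;
}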

madolson (Member):

For now, the conclusion is that we are okay leaving CLUSTER SHARDS as not deterministic. We are going to wait for the cached cluster slots response to see if it solves the performance issue folks have been seeing. If we still see issues, we will investigate CLUSTER SLOTS DENSE.

madolson closed this on Aug 26, 2024
Labels
major-decision-pending: Major decision pending by TSC team
needs-doc-pr: This change needs to update a documentation page. Remove label once doc PR is open.
Projects
Status: Implementation
Development

Successfully merging this pull request may close these issues.

[NEW] Deterministic CLUSTER SHARDS Command
9 participants