The completer should support running in a multi-node cluster configuration to provide horizontal scaling, HA and avoid single points of failure.
The following approaches are being considered:
1. Sharding
Since the completer's main purpose is to process fully self-contained computation graphs (i.e. no cross-graph computation is allowed), it has a very flat actor hierarchy, which makes it ideal for sharding at the graph level. Note that, for the purpose of this discussion, a graph is the underlying data model used to represent an FnFlow.
The main idea behind sharding is that every cluster-aware request is associated with a graph ID. By inspecting this ID, it is possible to compute a shard ID for the request. Any given shard ID is owned by a single completer node in the cluster, so all requests pertaining to the same graph can be forwarded to, and ultimately processed by, the same node. Note that the forwarding of requests can take place at either the HTTP API level or the actor system level. A minimal sketch of the mapping follows (graph IDs are assumed to be strings and the shard count fixed; the names are illustrative, not from the actual codebase):
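```scala
// Illustrative only: floorMod keeps the result non-negative for any
// hash value, unlike `%` on a negative hashCode.
object ShardMapping {
  val NumShards = 64

  // Every request carrying the same graph ID deterministically maps to
  // the same shard, and therefore to the same owning node.
  def shardFor(graphId: String): Int =
    Math.floorMod(graphId.hashCode, NumShards)
}
```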
There are two main approaches to sharding, depending on whether the assignment of shards to completer nodes in the cluster is static or dynamic.
Static Sharding
In static sharding, the number of nodes and shards is fixed while the cluster is running. The number of nodes can be changed, but doing so requires stopping and restarting the cluster so that shards can be reassigned to nodes. Changing the number of shards is not possible at all without clearing the full graph state, as the mapping of graphs to shards would need to change.
The advantages of a static sharding strategy are that it is simple to implement and reason about, and it does not require a centralized component to allocate shards to nodes. Instead, given a constant number of shards and nodes, we can use hash partitioning to map incoming graph requests to nodes.
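As a rough sketch of why no coordinator is needed (the node addresses and counts below are invented for illustration), a fixed shard count and a fixed, ordered node list baked into the configuration let every node compute the same mapping locally:

```scala
// Hypothetical static assignment: identical configuration on every node
// yields identical routing decisions, with no coordination at runtime.
object StaticAssignment {
  val NumShards = 64
  val Nodes = Vector("completer-0:8081", "completer-1:8081", "completer-2:8081")

  // Two-level mapping: graph ID -> shard ID -> owning node.
  def nodeForGraph(graphId: String): String = {
    val shardId = Math.floorMod(graphId.hashCode, NumShards)
    Nodes(shardId % Nodes.size)
  }
}
```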
Dynamic Sharding
In dynamic sharding, the number of nodes, and optionally the number of shards, can be changed at runtime. This has the advantage that the cluster can be scaled out or in while servicing live requests, with zero or minimal downtime.
The disadvantage is that this flexibility comes at the price of additional complexity and a need for synchronization. A common way of implementing dynamic sharding is to have a single coordinator that assigns shards to nodes, and a pluggable application-level strategy that maps requests to shards. This is the approach taken by Akka Cluster Sharding. The coordinator is also responsible for safely transferring ownership of shards between nodes upon cluster membership changes, whether due to spontaneous node failures or scaling out/in. For concreteness, a sketch of what this looks like with the classic Akka Cluster Sharding API (the message type, actor body, and shard count are illustrative; the application only supplies the two extraction functions, while the coordinator, a cluster singleton, handles rebalancing):
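```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardRegion}

// Hypothetical message type; the completer's real protocol may differ.
final case class GraphMessage(graphId: String, payload: Any)

// One entity actor per graph, activated on demand by its shard region.
class GraphActor extends Actor {
  def receive: Receive = {
    case GraphMessage(_, payload) =>
      // apply the stage/continuation to this graph's state
  }
}

object GraphSharding {
  val NumberOfShards = 64

  // Route each message to the entity (graph actor) named by its graph ID.
  val extractEntityId: ShardRegion.ExtractEntityId = {
    case msg @ GraphMessage(graphId, _) => (graphId, msg)
  }

  // Hash the graph ID to one of a fixed number of shards.
  val extractShardId: ShardRegion.ExtractShardId = {
    case GraphMessage(graphId, _) =>
      Math.floorMod(graphId.hashCode, NumberOfShards).toString
  }

  // Returns the local shard region; messages sent to it are forwarded
  // to whichever node currently owns the target shard.
  def start(system: ActorSystem) =
    ClusterSharding(system).start(
      typeName = "GraphActor",
      entityProps = Props[GraphActor],
      settings = ClusterShardingSettings(system),
      extractEntityId = extractEntityId,
      extractShardId = extractShardId
    )
}
```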
2. Virtual Actors
Virtual actor systems were popularized by Microsoft Orleans and have the advantage of freeing the application from managing the lifecycle of virtual actors (or grains, in Orleans terminology). Instead, the actor platform is free to activate a physical actor on any node in the cluster, in a manner transparent to the application.
Since virtual actors trade consistency for availability, they do not guarantee that at any given time there will not be two or more physical activations of the same logical actor. This is not acceptable for actors whose requests must be processed at most once, for example when messages have side effects (as is the case when invoking Flow continuations).
To avoid getting into an unsafe or inconsistent state, we would have to rely on an external synchronization mechanism to prevent two activations of the same graph from running in parallel. Options include database locking, leader election (e.g. using Consul sessions), or an application-level compare-and-swap (CAS) protocol. The main disadvantage of these approaches is that they may limit scalability and make it difficult to reason about the correctness of the system.
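As a rough sketch of the application-level CAS option (the store interface below is hypothetical, not an existing API; a real implementation would back it with a database row under optimistic locking, a Consul session, or similar):

```scala
// Hypothetical interface over a shared store; the names are illustrative.
trait OwnershipStore {
  // Atomically set the owner of graphId iff the current owner equals
  // `expected`; returns true when the claim succeeds.
  def compareAndSetOwner(graphId: String, expected: Option[String], owner: String): Boolean
}

object ActivationGuard {
  // A node may create a physical activation for a graph only after
  // winning the CAS; losers must forward requests to the current owner.
  def tryActivate(store: OwnershipStore, graphId: String, nodeId: String): Boolean =
    store.compareAndSetOwner(graphId, expected = None, owner = nodeId)
}
```

Every activation attempt then funnels through a single linearization point in the shared store, which is exactly what makes this safe but also what makes it a potential scalability bottleneck.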