
One pool per node #19

Draft · wants to merge 6 commits into master from one-pool-per-node

Conversation

shashitnak
Instead of creating one pool of Cluster connections, we now create a single Cluster connection object that holds a HashMap with a Pool per node connection.
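The shape of that change, sketched with stand-in types (`NodeID`, `Ctx`, and the `Pool` stub here are assumptions for illustration; the real code uses `Data.Pool` from resource-pool and `HM.HashMap` from unordered-containers):

```haskell
import qualified Data.Map.Strict as M  -- stand-in for the HM.HashMap used in the PR

-- Stub types standing in for hedis-cluster internals (shapes assumed
-- from the diffs in this conversation, not taken verbatim from the code):
type NodeID = String
data Ctx = Ctx                   -- one raw socket to a node
newtype Pool a = Pool [a]        -- stand-in for Data.Pool.Pool

-- Before: Pool Connection        (one pool of whole-cluster connections)
-- After:  one Connection holding a pool of sockets per node
data NodeConnection = NodeConnection (Pool Ctx) NodeID
newtype Connection  = Connection (M.Map NodeID NodeConnection)
```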

shashitnak force-pushed the one-pool-per-node branch 2 times, most recently from 3b588c0 to 03fbbce on March 17, 2023 05:22
@@ -443,24 +443,24 @@ allMasterNodes (Connection nodeConns _ _ _ _) (ShardMap shardMap) =
 requestNode :: NodeConnection -> [[B.ByteString]] -> IO [Reply]
 requestNode (NodeConnection pool lastRecvRef nid) requests = do
     envTimeout <- round . (\x -> (x :: Time.NominalDiffTime) * 1000000) . realToFrac . fromMaybe (0.5 :: Double) . (>>= readMaybe) <$> lookupEnv "REDIS_REQUEST_NODE_TIMEOUT"
-    eresp <- race requestNodeImpl (threadDelay envTimeout)
+    eresp <- race (withResource pool requestNodeImpl) (threadDelay envTimeout)
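The env-var expression in the diff above packs a lot into one line; a simplified, pure version of the same arithmetic (plain `Double` instead of `NominalDiffTime`, and the name `timeoutMicros` is mine) behaves like this:

```haskell
import Data.Maybe (fromMaybe)
import Text.Read (readMaybe)

-- Seconds (as read from REDIS_REQUEST_NODE_TIMEOUT) -> microseconds for
-- threadDelay; an unset or unparsable value falls back to 0.5 s.
timeoutMicros :: Maybe String -> Int
timeoutMicros mEnv = round (fromMaybe 0.5 (mEnv >>= readMaybe) * 1000000 :: Double)
```

So an unset variable yields 500000 µs, and `REDIS_REQUEST_NODE_TIMEOUT=2` yields 2000000 µs.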


@shashitnak We should actually call withResource even above the line where we throw the exception on timeout. Otherwise we can run into a scenario where this connection times out, and some other thread ends up using it and reading stale data from the previous request.

shashitnak (Author)


@ishan-juspay moved it up to requestNode. I think it will now handle the case you are talking about.

@@ -443,24 +443,24 @@ allMasterNodes (Connection nodeConns _ _ _ _) (ShardMap shardMap) =
 requestNode :: NodeConnection -> [[B.ByteString]] -> IO [Reply]
 requestNode (NodeConnection pool lastRecvRef nid) requests = do
     envTimeout <- round . (\x -> (x :: Time.NominalDiffTime) * 1000000) . realToFrac . fromMaybe (0.5 :: Double) . (>>= readMaybe) <$> lookupEnv "REDIS_REQUEST_NODE_TIMEOUT"
-    eresp <- race requestNodeImpl (threadDelay envTimeout)
+    eresp <- race (withResource pool requestNodeImpl) (threadDelay envTimeout)
     case eresp of
         Left e -> return e
         Right _ -> putStrLn ("timeout happened" ++ show nid) *> throwIO NoNodeException
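The fix being discussed (withResource moved up so the whole call, timeout included, runs while this thread holds the connection) might look roughly like this. The names requestNodeImpl and NoNodeException come from the diff above; the body is a sketch of the idea, not the PR's actual code:

```haskell
requestNode :: NodeConnection -> [[B.ByteString]] -> IO [Reply]
requestNode (NodeConnection pool lastRecvRef nid) requests =
    -- Hold the connection for the entire request, so that on a timeout
    -- it goes down with the exception instead of being returned to the
    -- pool with an unread reply still on the wire.
    withResource pool $ \ctx -> do
        envTimeout <- ...  -- as in the diff above
        eresp <- race (requestNodeImpl ctx) (threadDelay envTimeout)
        case eresp of
            Left replies -> return replies
            Right _      -> throwIO NoNodeException
```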


@shashitnak, let's create a custom exception for the timeout; we can name it RedisTimeoutException or something similar.
There might be reasons for a timeout other than NoNode.

shashitnak (Author)


@ishan-juspay I actually thought about this. There is a test where an infinite script is sent to a node, which causes it to time out, and we aren't handling that currently. So apart from NoNodeException, I'm thinking of adding a BusyNodeException.

shashitnak (Author)


@ishan-juspay added a RedisTimeoutException type, but for now it still throws NoNodeException wrapped in RedisTimeoutException. Will add the logic for picking the appropriate exception later.
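One possible shape for that wrapper type. The RedisTimeoutException name is from this thread; carrying the underlying cause as a SomeException payload is my assumption, and the NoNodeException here is a stand-in for the existing one:

```haskell
import Control.Exception (Exception, SomeException, toException)

-- Stand-in for the existing exception referenced in the diffs above:
data NoNodeException = NoNodeException deriving Show
instance Exception NoNodeException

-- Timeout wrapper carrying the suspected underlying cause, so callers
-- can later distinguish e.g. a missing node from a busy one.
data RedisTimeoutException = RedisTimeoutException SomeException
    deriving Show
instance Exception RedisTimeoutException
```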

  let connection = Connection nodeConns pipelineVar shardMapVar (CMD.newInfoMap commandInfos) isReadOnly
  return connection
  where
    -- simpleNodeConnections :: ShardMap -> IO (HM.HashMap NodeID NodeConnection)


@shashitnak, can we remove this old commented-out code? It doesn't look clean.

shashitnak (Author)


@ishan-juspay removed it while rebasing onto the new master.

@@ -61,7 +62,7 @@ type IsReadOnly = Bool
data Connection = Connection (HM.HashMap NodeID NodeConnection) (MVar Pipeline) (MVar ShardMap) CMD.InfoMap IsReadOnly


Since a pool is now per node, the map HM.HashMap NodeID NodeConnection is probably never going to be updated.
Can we find a way to update this mapping too? Otherwise requests will fail when we scale up the Redis cluster, because the new NodeID won't be found in this map.

shashitnak (Author)


@ishan-juspay is it okay if we trigger the refresh logic at the point where some NodeID is not found in the map? We can have a Pool of a pair of HashMap and ShardMap, and whenever some NodeID is not found in the HashMap, it will throw an exception and a new constructor will be called for the pair.

shashitnak (Author)


And this way, if some resources are working with a NodeID that they are able to find, they will keep working without any problems.

ishan-juspay · Apr 10, 2023


We can have this HashMap as an MVar and update it whenever RefreshShardMap is called? @aravindgopall
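A sketch of that refresh-on-miss idea (all names here are mine; Data.Map stands in for the HM.HashMap, and the rebuild action would re-fetch the shard map in the real code):

```haskell
import Control.Concurrent.MVar (MVar, newMVar, readMVar, modifyMVar)
import Control.Exception (Exception, throwIO)
import qualified Data.Map.Strict as M

data NoNodeException = NoNodeException deriving Show
instance Exception NoNodeException

-- Look the node up; on a miss, rebuild the map once under the MVar
-- (e.g. from a freshly fetched shard map) and retry before failing.
lookupOrRefresh :: Ord k => MVar (M.Map k v) -> IO (M.Map k v) -> k -> IO v
lookupOrRefresh var rebuild nid = do
    m <- readMVar var
    case M.lookup nid m of
        Just nc -> return nc
        Nothing -> modifyMVar var $ \_ -> do
            m' <- rebuild
            case M.lookup nid m' of
                Just nc -> return (m', nc)
                Nothing -> throwIO NoNodeException
```

Threads whose NodeID still resolves never touch the rebuild path, which matches the point above about resources that can find their node continuing to work undisturbed.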

  where
    ctxToConn :: Cluster.NodeConnection -> IO (Maybe PP.Connection)
    ctxToConn (Cluster.NodeConnection pool _ nid) = do
      maybeConn <- try $ withResource pool PP.fromCtx


Do we have to remove the connection from the pool if there is an exception, rather than catching with try and returning Nothing? Verify this once.
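For what it's worth, resource-pool's withResource destroys the resource (rather than returning it to the pool) when the wrapped action throws, so the try here may already be safe; worth verifying against the resource-pool version in use. A sketch of the snippet above under that assumption:

```haskell
ctxToConn :: Cluster.NodeConnection -> IO (Maybe PP.Connection)
ctxToConn (Cluster.NodeConnection pool _ nid) = do
    -- If PP.fromCtx throws, withResource destroys the checked-out
    -- resource instead of putting it back, so nothing extra should be
    -- needed here to evict the bad connection from the pool.
    maybeConn <- try (withResource pool PP.fromCtx)
    case maybeConn of
        Right conn                -> return (Just conn)
        Left (_ :: SomeException) -> return Nothing
```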
