
Does MilvusClientV2Pool still work if the Milvus server restarts after a crash? #1176

Open
dingle66 opened this issue Nov 8, 2024 · 3 comments


@dingle66 commented Nov 8, 2024

I always cache the MilvusClientV2Pool after it is created. Should I set an expiry time for it?
In my project, getClient() started to always return null after I switched to the client pool.
For now, I restart my application to fix it, so I suspect the pool has some behavior I haven't noticed.

@yhmo (Contributor) commented Nov 8, 2024

The PoolConfig defines the behavior of the pool:

PoolConfig poolConfig = PoolConfig.builder()
                .maxIdlePerKey(10) // max idle clients per key
                .maxTotalPerKey(20) // max total(idle + active) clients per key
                .maxTotal(100) // max total clients for all keys
                .maxBlockWaitDuration(Duration.ofSeconds(5L)) // getClient() will wait 5 seconds if no idle client available
                .minEvictableIdleDuration(Duration.ofSeconds(10L)) // if number of idle clients is larger than maxIdlePerKey, redundant idle clients will be evicted after 10 seconds
                .build();
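For context, the pool is built from this PoolConfig together with a ConnectConfig, and clients are borrowed and returned by key. A minimal sketch, assuming a local Milvus at the default port; the URI and the "client_a" key are placeholder values:

```java
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.pool.MilvusClientV2Pool;

// Sketch: construct the pool, then borrow/return a client by key.
ConnectConfig connectConfig = ConnectConfig.builder()
        .uri("http://localhost:19530") // placeholder address
        .build();
MilvusClientV2Pool pool = new MilvusClientV2Pool(poolConfig, connectConfig);

MilvusClientV2 client = pool.getClient("client_a");
try {
    // ... use the client here ...
} finally {
    // Always return the client, so the pool can validate it
    // and evict it if its connection is broken.
    pool.returnClient("client_a", client);
}
```

Returning the client in a finally block matters for the recovery behavior described below: a broken client that is never returned can't be validated and evicted by the pool.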

When you call pool.getClient(), if there is an idle client in the pool, it returns that idle client to you.
If the Milvus server is down, the connection is broken, and you will get an error when you call any interface of the client object. When you then call pool.returnClient() to return the client to the pool, the pool validates it by calling client.clientIsReady(). Since the connection is broken, clientIsReady() returns false and the client is destroyed after a while.
Once all the invalid clients have been destroyed, if you restart Milvus and call pool.getClient() again, the pool creates a new client that connects to the Milvus server.
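The validate-on-return behavior described above can be illustrated with a self-contained toy pool. This is not the MilvusClientV2Pool implementation, just a sketch of the mechanic: a healthy client goes back to the idle queue, a broken one is dropped, and the next getClient() creates a fresh one.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical stand-in for a pooled client; only models readiness.
class FakeClient {
    boolean ready = true;
    boolean clientIsReady() { return ready; }
}

// Toy single-key pool demonstrating validate-on-return eviction.
class TinyPool {
    private final Deque<FakeClient> idle = new ArrayDeque<>();

    FakeClient getClient() {
        FakeClient c = idle.poll();
        return (c != null) ? c : new FakeClient(); // no idle client: create a new one
    }

    void returnClient(FakeClient c) {
        if (c.clientIsReady()) idle.push(c); // healthy: back to the idle queue
        // broken: dropped (destroyed), never handed out again
    }

    int idleCount() { return idle.size(); }
}

public class Main {
    public static void main(String[] args) {
        TinyPool pool = new TinyPool();
        FakeClient a = pool.getClient();
        pool.returnClient(a);                  // healthy client is kept
        System.out.println(pool.idleCount());  // 1

        FakeClient b = pool.getClient();       // same instance comes back
        b.ready = false;                       // simulate the server crash breaking the connection
        pool.returnClient(b);                  // validation fails, client is destroyed
        System.out.println(pool.idleCount());  // 0

        FakeClient c = pool.getClient();       // pool creates a fresh client
        System.out.println(c.clientIsReady()); // true
    }
}
```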

@dingle66 (Author) commented

So, even if the Milvus server crashes and restarts, the MilvusClientV2Pool instance cached by my application is still usable?

@yhmo (Contributor) commented Nov 15, 2024

I just tested with the following steps:

  1. start a Milvus server
  2. call pool.getClient() to create a client and do something
  3. shut down the Milvus server
  4. call pool.getClient() to get the same client and do something; the call hangs forever if you didn't set rpcDeadlineMs, or times out if you did
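To avoid the hang in step 4, a per-RPC deadline can be set on the ConnectConfig when the pool is created. A sketch; the URI and the 5-second deadline are placeholder values:

```java
import io.milvus.v2.client.ConnectConfig;

// Sketch: with rpcDeadlineMs set, a call against a dead server fails
// with a deadline-exceeded error instead of blocking forever.
ConnectConfig connectConfig = ConnectConfig.builder()
        .uri("http://localhost:19530") // placeholder address
        .rpcDeadlineMs(5000L)          // each RPC is abandoned after 5 seconds
        .build();
```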

Another test:

  1. start a Milvus server
  2. call pool.getClient() to create a client and do something
  3. restart the Milvus server
  4. call pool.getClient() to get the same client and do something; it works fine

So I think the answer is yes: if the Milvus server crashes and is restarted, the MilvusClientV2Pool instance cached by the application is still usable.

The RPC channel is managed by the gRPC library, so I think this reconnection behavior comes from gRPC itself.
