Performance: High latency in async Python #3284
I suspect this latency is due to the scheduling of asyncio, but it is still higher than redis-py. In a scenario with 100 tasks:
Here’s the glide code:

```python
import asyncio
import gc
import random
import time

import glide


async def print_time():
    config = glide.GlideClientConfiguration(
        addresses=[glide.NodeAddress("127.0.0.1", 6379)],
        request_timeout=10,
        reconnect_strategy=glide.BackoffStrategy(0, 0, 0),
        protocol=glide.ProtocolVersion.RESP2,
    )
    r = await glide.GlideClient.create(config)
    while True:
        gc.disable()
        time_start = time.time()
        await r.set("test", "test")
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())} {(time.time() - time_start) * 1000:.3f}")
        gc.enable()
        await asyncio.sleep(random.random())
    await r.close()


async def main():
    tasks = []
    for i in range(100):
        tasks.append(asyncio.create_task(print_time()))
    await asyncio.gather(*tasks)


asyncio.run(main())
```

And here’s the code for redis-py:

```python
import asyncio
import gc
import random
import time

import redis.asyncio as redis


async def print_time():
    r = redis.Redis()
    while True:
        gc.disable()
        time_start = time.time()
        await r.set("test", "test")
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())} {(time.time() - time_start) * 1000:.3f}")
        gc.enable()
        await asyncio.sleep(random.random())
    await r.close()


async def main():
    tasks = []
    for i in range(100):
        tasks.append(asyncio.create_task(print_time()))
    await asyncio.gather(*tasks)


asyncio.run(main())
```
Hi @suxb201! :)
@BoazBD I am currently testing the latency variations of different Redis databases using Python, so I'm very sensitive to any delays.
Thank you for your patience! Glide's API is asynchronous and uses a multiplexed connection to interact with Valkey. This means that all requests are sent through a single connection, leveraging Valkey's pipelining capabilities. Pipelining enhances performance by sending multiple commands at once without waiting for individual responses, and using a single connection is the recommended method to optimize performance.
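As an illustration of the pipelining argument above (not from the thread), here is a toy simulation in which `asyncio.sleep` stands in for a network round trip. The `RTT` value and function names are made up for the sketch: issuing commands one at a time costs roughly N round trips, while keeping them all in flight concurrently on one multiplexed connection costs roughly one.

```python
import asyncio
import time

RTT = 0.01  # assumed round-trip time per command: 10 ms (illustrative)


async def serial(n):
    # One request at a time: each command waits a full round trip.
    for _ in range(n):
        await asyncio.sleep(RTT)


async def pipelined(n):
    # All n requests in flight at once, as on a multiplexed connection:
    # total wall time is roughly a single round trip.
    await asyncio.gather(*(asyncio.sleep(RTT) for _ in range(n)))


async def main():
    t0 = time.perf_counter()
    await serial(20)
    serial_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    await pipelined(20)
    pipelined_s = time.perf_counter() - t0

    print(f"serial: {serial_s:.3f}s, pipelined: {pipelined_s:.3f}s")
    return serial_s, pipelined_s


serial_s, pipelined_s = asyncio.run(main())
```

In the real client the server still executes commands one by one, but their network latencies overlap, which is why per-command latency under load looks very different from a single isolated round trip.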
To get more accurate results, could you please share your use case and setup:
You can also run our Python benchmark comparing redis-py and glide with the following commands:

```shell
git clone https://github.com/valkey-io/valkey-glide.git
cd valkey-glide/benchmarks
# if TLS isn't enabled, add the -no-tls flag
# if you're testing a cluster-mode setup, add the -is-cluster flag
./install_and_test.sh -python -host "example.cluster.use1.cache.amazonaws.com" -concurrentTasks 100 -data 100
```
@BoazBD @avifenesh Thank you very much for your detailed explanation. I believe this statement addresses the issue:
To answer your questions:
Upon further testing, I found that my previous tests were not rigorous. I have conducted more tests, and the results show that GLIDE's latency has significantly improved. Compared with asyncio redis-py, GLIDE has slightly higher latency at low QPS, but as QPS increases, the latencies of both approaches become comparable. Single-threaded sync redis-py reaches about 7000 QPS. *(screenshot omitted)*

High-QPS asyncio results: *(screenshot omitted)*

Test code:

```python
import asyncio
import random
import sys
import time

import glide
import redis.asyncio as redis


async def glide_client():
    config = glide.GlideClientConfiguration(
        addresses=[glide.NodeAddress("r-2zesvk4kudzic5yicy.redis.rds.aliyuncs.com", 6379)],
        request_timeout=10,
        reconnect_strategy=glide.BackoffStrategy(0, 0, 0),
        protocol=glide.ProtocolVersion.RESP2,
    )
    r = await glide.GlideClient.create(config)
    return r


def redis_client():
    return redis.Redis(host="r-2zesvk4kudzic5yicy.redis.rds.aliyuncs.com", port=6379, db=0)


total_count = 0
max_latency = 0
last_log_time = time.time()


async def print_time():
    global total_count, max_latency, last_log_time
    if sys.argv[1].lower() == "redis":
        print("Using asyncio Redis client")
        r = redis_client()
    else:
        assert sys.argv[1].lower() == "glide"
        print("Using Glide client")
        r = await glide_client()
    while True:
        time_start = time.time()
        await r.set("test", "test")
        latency = (time.time() - time_start) * 1000
        total_count += 1
        if latency > max_latency:
            max_latency = latency
        if time.time() - last_log_time > 1:
            print(f"{time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())} max_latency: {max_latency:.3f}ms, qps: {total_count / (time.time() - last_log_time):.0f}")
            last_log_time = time.time()
            max_latency = 0
            total_count = 0
        # await asyncio.sleep(random.random())


async def main():
    tasks = []
    for i in range(1):  # 1 coroutine
        tasks.append(asyncio.create_task(print_time()))
    await asyncio.gather(*tasks)


asyncio.run(main())
```

That said, while the latency of both GLIDE and asyncio redis-py is quite good, I am still concerned about the higher uncertainty introduced by asyncio scheduling delays, as well as the spread of the asyncio coding style through my project. Therefore, I will continue to consider using sync redis-py in my project with Tair Pulse. Thank you all for your support and assistance!
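The asyncio scheduling uncertainty mentioned above can be measured directly, without any Redis client at all: each task yields to the event loop and records how long it takes to be rescheduled. This is a minimal sketch (the function and variable names are illustrative, not from the thread); with many runnable tasks, this rescheduling delay is a floor under any per-command latency an asyncio client can report.

```python
import asyncio
import time


async def measure_scheduling_delay(n_tasks=100, iterations=50):
    # Each task repeatedly yields to the event loop and records how long
    # it took to regain control; with many runnable tasks this delay grows,
    # because the loop must cycle through every other ready task first.
    delays = []

    async def worker():
        for _ in range(iterations):
            t0 = time.perf_counter()
            await asyncio.sleep(0)  # yield to the event loop
            delays.append(time.perf_counter() - t0)

    await asyncio.gather(*(worker() for _ in range(n_tasks)))
    avg_ms = (sum(delays) / len(delays)) * 1000
    max_ms = max(delays) * 1000
    return avg_ms, max_ms


avg_ms, max_ms = asyncio.run(measure_scheduling_delay())
print(f"avg reschedule delay: {avg_ms:.4f} ms, max: {max_ms:.4f} ms")
```

Comparing these numbers against the measured SET latency helps separate client/network cost from pure event-loop scheduling cost.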
@suxb201 Understandable!
@suxb201 I'm closing the issue. I believe it won't take long, but I can't swear on that. Anyway, it's worth keeping an eye on it if a sync API and low latency are your goals.
Describe the bug
I have noticed high latency when using valkey-glide in an asyncio environment. On my MacBook, the latency is about 1ms, while with redis-cli it is 0.3ms and with redis-py it is also 0.3ms.
Expected Behavior
low latency
Current Behavior
3x the latency compared to redis-py and redis-cli
Reproduction Steps
glide code:
output:
redis-cli --latency output:
Client version used
1.3.0
Engine type and version
Redis 7.2.7
OS
Darwin Kernel Version 24.2.0
Language
Python
Language Version
3.12