In your paper "FreeFlow: Software-based Virtual RDMA Networking for Containerized Clouds", you compared native TCP with FreeFlow + rsocket and verified that FreeFlow always outperforms Weave in both throughput and latency. In our tests we obtained similar results that support yours, but the CPU overhead was higher than we expected.
The CPU utilization is only 20% to 30% lower than with Weave. We did expect that using rsocket would bring some extra CPU overhead, and indeed the CPU usage increases by about 50% compared with ib_send_bw. So we would like to know whether you saw similar behavior, or whether our test results are wrong.
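For reference, this is a minimal sketch of how we collected the CPU numbers, so the FreeFlow + rsocket, Weave, and bare ib_send_bw runs are measured the same way. It samples system-wide CPU time from /proc/stat around a benchmark run; the ib_send_bw invocation and its flags at the bottom are just an example of our setup, not taken from the paper.

```python
#!/usr/bin/env python3
"""Measure average system-wide CPU utilization while a benchmark command runs."""
import subprocess


def read_cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait count as non-busy time
    return idle, sum(fields)


def cpu_util_during(cmd):
    """Run `cmd` to completion and return the average CPU utilization (%) over that period."""
    idle0, total0 = read_cpu_times()
    subprocess.run(cmd, check=True)
    idle1, total1 = read_cpu_times()
    busy = (total1 - total0) - (idle1 - idle0)
    return 100.0 * busy / (total1 - total0)


if __name__ == "__main__":
    # Example invocation (client side); device name, message size, and duration
    # are specific to our test setup and should be adjusted as needed.
    util = cpu_util_during(["ib_send_bw", "-d", "mlx5_0", "-s", "65536", "-D", "10"])
    print(f"average CPU utilization: {util:.1f}%")
```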