I looked a little deeper into your benchmark and how it works with the different driver implementations. I know you state at the beginning of your README:
This tool is meant to provide a rough estimate on how fast each Pub/Sub can process messages. It uses very simplified infrastructure to set things up and default configurations.
I still feel the different benchmarks are comparing apples with oranges, for the following reasons:
Every driver/backend combination provides a different set of capabilities (called features in the test cases), so comparing scenarios where the set of features is different does not really make sense in my opinion. E.g., it is much harder (and therefore slower) to guarantee ordered delivery than no ordering at all, and much harder to guarantee exactly-once delivery than at-least-once delivery.
Some tests run purely on localhost (backend and test tool on the same machine), while others operate over the network. In the first case, they compete for the same resources; in the second, there is network latency to be considered.
At the very least, I would recommend adding a note to every benchmark stating which guarantees the benchmark included.
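To illustrate what such an annotation could look like, here is a minimal sketch: each scenario carries the set of guarantees it was run with, and two results are only treated as comparable when those sets match. All names (the `Scenario` class, the guarantee labels, the example backends) are hypothetical and not taken from the benchmark tool itself.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Scenario:
    """A benchmark run annotated with the delivery guarantees it enforced.

    The guarantee labels below are illustrative placeholders, not the
    feature names the actual test cases use.
    """
    name: str
    guarantees: frozenset


def comparable(a: Scenario, b: Scenario) -> bool:
    # Apples-to-apples only when both runs enforced the same guarantees.
    return a.guarantees == b.guarantees


# Hypothetical example scenarios.
kafka = Scenario("kafka", frozenset({"at-least-once", "ordered"}))
redis = Scenario("redis-pubsub", frozenset({"at-most-once"}))
nats = Scenario("nats-jetstream", frozenset({"at-least-once", "ordered"}))

print(comparable(kafka, nats))   # same guarantee set -> True
print(comparable(kafka, redis))  # different guarantee sets -> False
```

A report generated this way would make it obvious at a glance which numbers can be compared directly and which cannot.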