
Comparing apples with oranges #6

Open
breml opened this issue Feb 18, 2021 · 0 comments
breml commented Feb 18, 2021

I looked a little deeper into your benchmark and how it works with the different driver implementations. I know you state at the beginning of your README:

This tool is meant to provide a rough estimate on how fast each Pub/Sub can process messages. It uses very simplified infrastructure to set things up and default configurations.

I still feel the different benchmarks compare apples with oranges, for the following reasons:

  • Every driver/backend combination provides a different set of capabilities (called features in the test cases). So comparing scenarios where the set of features is different does not really make sense in my opinion. E.g. it is much harder (and therefore slower) to guarantee ordered delivery than no ordering, and much harder to guarantee exactly-once delivery than at-least-once delivery.
  • Some tests run purely on localhost (backend and test tool on the same machine), while others operate over the network. In the first case, they compete for the same resources; in the second, network latency has to be considered.

At the very least, I would recommend adding a note to every benchmark stating which guarantees that benchmark included.
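To make the suggestion concrete, such a note could be generated from a small annotation attached to each scenario. The sketch below is a minimal illustration, not the benchmark tool's actual API; the `Features` struct and its field names are assumptions introduced here:

```go
package main

import "fmt"

// Features is a hypothetical annotation describing the guarantees a
// benchmark scenario actually provides. The field names are assumptions
// for illustration, not part of the real benchmark tool.
type Features struct {
	OrderedDelivery bool   // messages are guaranteed to arrive in publish order
	Delivery        string // "at-least-once" or "exactly-once"
	RemoteBackend   bool   // backend reached over the network vs. on localhost
}

// Label renders the feature set as a short note to print next to the results,
// so readers can see which guarantees each number was measured under.
func (f Features) Label() string {
	order := "unordered"
	if f.OrderedDelivery {
		order = "ordered"
	}
	loc := "localhost"
	if f.RemoteBackend {
		loc = "remote"
	}
	return fmt.Sprintf("%s, %s, %s backend", order, f.Delivery, loc)
}

func main() {
	// Example values only; real scenarios would fill these in per driver.
	kafka := Features{OrderedDelivery: true, Delivery: "at-least-once", RemoteBackend: true}
	fmt.Println("kafka:", kafka.Label())
}
```

Printing such a label alongside each result would make it obvious when two rows are not directly comparable.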
