Provide "clean" and "real world" results #59
I believe setting rounds to 0 should be about equivalent.
@francescoalemanno that is a brilliant idea. Would at least make things way easier to implement.
I reimplemented the benchmark for C++, Java, Golang, Python and JavaScript: https://github.com/Glavo/leibniz-benchmark I ran twenty rounds of benchmarking and averaged the time spent on the last ten rounds. Here are the results I got:
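The warm-up scheme described above could be sketched roughly like this (a minimal Python example, not the repository's actual code; the `leibniz_pi` and `bench` names and the round/term counts are illustrative):

```python
import time

def leibniz_pi(terms: int) -> float:
    """Approximate pi with the Leibniz series: 4 * sum((-1)^k / (2k + 1))."""
    total, sign = 0.0, 1.0
    for k in range(terms):
        total += sign / (2 * k + 1)
        sign = -sign
    return 4.0 * total

def bench(rounds: int = 20, terms: int = 100_000) -> float:
    """Time `rounds` runs and average only the last half, discarding warm-up."""
    times = []
    for _ in range(rounds):
        start = time.perf_counter()
        leibniz_pi(terms)
        times.append(time.perf_counter() - start)
    tail = times[rounds // 2:]  # e.g. keep only the last ten of twenty rounds
    return sum(tail) / len(tail)
```

Averaging only the tail rounds discards JIT warm-up and cache effects, which is what makes the result "clean" relative to a single cold run.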
I think the "clean" result is the one that reflects the real-world situation. The main factor affecting the results now is startup and loading time, not the real performance of the language.
As suggested in #51 by @HenrikBengtsson, more "clean" data for the calculation of pi could be gathered by measuring the performance of each language with and without calculating pi and then subtracting one from the other.
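The subtraction idea could look roughly like this (a sketch, not the project's actual harness; the snippet strings stand in for a real benchmark program, where the no-op run plays the role of a "0 rounds" invocation):

```python
import subprocess
import sys
import time

def wall_time(args: list[str]) -> float:
    """End-to-end wall-clock time of a child process, startup included."""
    start = time.perf_counter()
    subprocess.run(args, check=True)
    return time.perf_counter() - start

# Hypothetical stand-ins for a benchmark program: the first snippet does the
# pi calculation, the second ("0 rounds") only pays interpreter startup cost.
PI_SNIPPET = "x = 4 * sum((-1) ** k / (2 * k + 1) for k in range(100_000))"
NOOP_SNIPPET = "pass"

real_world = wall_time([sys.executable, "-c", PI_SNIPPET])   # startup + work
baseline = wall_time([sys.executable, "-c", NOOP_SNIPPET])   # startup only
clean = real_world - baseline  # estimated pure calculation time
```

Both numbers come out of the same measurement, so the CSV could carry the "real world" time and the baseline, with "clean" derived from them.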
I think it would be best to keep both sets of data: "real world" data with startup and IO, and "clean" data for just calculating pi.
I would keep both in the CSV, but I'm not sure which one to favour for the image creation.
Probably the "clean" data 🤔
In terms of implementation, I can see two approaches:
Obviously, both would require adjustments to scbench and the analysis step.