
Provide "clean" and "real world" results #59

Open
niklas-heer opened this issue Oct 18, 2022 · 4 comments

@niklas-heer
Owner

As suggested by @HenrikBengtsson in #51, more "clean" data for the pi calculation could be gathered by measuring the performance of each language with and without the pi calculation and then subtracting one from the other.

[...] I think it would be better if you could find a way to not include the startup times and the parsing of 'rounds.txt' in the results. A poor man's solution would be to benchmark each language with and without the part that calculates pi and then subtract to get the timings of interest.

I think it would be best to keep both kinds of data: "real world" data including startup and IO, and "clean" data covering just the pi calculation.
I would keep both in the CSV, but I'm not sure which one to favour for the image creation.
Probably the "clean" data 🤔

In terms of implementation, I can see two approaches:

  • the straightforward way: a second implementation file for each language, with the pi calculation removed
  • the dynamic way: a comment keyword within each source file marking the pi calculation, so it can be cut out to produce the no-pi variant (sketched after this list)

Obviously, both would require adjustments to scbench and the analysis step.
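
A minimal sketch of the dynamic approach, assuming hypothetical LEIBNIZ-START / LEIBNIZ-END marker comments (the marker names and file handling are illustrative assumptions, not part of scbench):

```python
import sys

# Hypothetical markers; each language would embed them in its own comment syntax.
START_MARKER = "LEIBNIZ-START"
END_MARKER = "LEIBNIZ-END"

def strip_pi_section(source: str) -> str:
    """Return the source with the marked pi-calculating section removed,
    so the stripped variant measures only startup and IO."""
    out, skipping = [], False
    for line in source.splitlines(keepends=True):
        if START_MARKER in line:
            skipping = True
        elif END_MARKER in line:
            skipping = False
        elif not skipping:
            out.append(line)
    return "".join(out)

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        sys.stdout.write(strip_pi_section(f.read()))
```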

@francescoalemanno
Contributor

I believe setting rounds to 0 should be about equivalent: a zero-rounds run measures just the startup and IO overhead, which can then be subtracted.

@niklas-heer
Owner Author

@francescoalemanno that is a brilliant idea. It would at least make things way easier to implement.
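
Sketched concretely, the rounds-0 baseline idea could look like this (the file names and command lines are illustrative assumptions, not scbench's actual interface):

```python
import statistics
import subprocess
import time

def measure(cmd: list[str], runs: int = 10) -> float:
    """Median wall-clock time of a command over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# "Real world": startup + parsing rounds.txt + calculating pi.
real_world = measure(["python3", "leibniz.py", "rounds.txt"])

# Baseline: the same program fed a rounds file containing 0,
# i.e. startup and IO only. rounds_zero.txt is hypothetical.
baseline = measure(["python3", "leibniz.py", "rounds_zero.txt"])

# "Clean": just the pi calculation, obtained by subtraction.
print(f"real world: {real_world:.4f}s, clean: {real_world - baseline:.4f}s")
```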

@Glavo

Glavo commented Aug 13, 2023

@niklas-heer

I reimplemented the benchmark for C++, Java, Golang, Python and JavaScript: https://github.com/Glavo/leibniz-benchmark

I ran twenty rounds of benchmarking and averaged the time spent on the last ten rounds. Here are the results I got:

[benchmark results]
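
A rough sketch of that warm-up-then-average scheme (an illustration of the idea, not the harness from the linked repository; run_once is a hypothetical callable that executes one round and returns its time in seconds):

```python
import statistics
from typing import Callable

TOTAL_ROUNDS = 20
WARMUP_ROUNDS = 10

def benchmark(run_once: Callable[[], float]) -> float:
    """Run TOTAL_ROUNDS rounds and average only the last
    TOTAL_ROUNDS - WARMUP_ROUNDS, discarding warm-up effects
    such as JIT compilation and cold caches."""
    timings = [run_once() for _ in range(TOTAL_ROUNDS)]
    return statistics.mean(timings[WARMUP_ROUNDS:])
```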

@Glavo

Glavo commented Aug 13, 2023

I think the "clean" result is the one that can reflect the real world situation.

The main factor affecting the results now is the startup and loading time, not the real performance of the language.
