How to create the dataset in the plot? #3
Yes, the dataset was created via Criterion. I produced the output with a half-manual, half-Python script, so a script to automate it all would be awesome. Maybe the output could be something like this?
And nope, I didn't use anything special; just make sure `target-cpu` is native, for example:
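For reference, the usual ways to do this are the `RUSTFLAGS="-C target-cpu=native" cargo bench` one-liner, or making it the repo default via the standard Cargo config file (sketch below; the file path is the conventional one, not taken from this project):

```toml
# .cargo/config.toml — apply target-cpu=native to all builds in this repo
[build]
rustflags = ["-C", "target-cpu=native"]
```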
I just fixed up the benches to take an input len. Currently the sampling is every 10th: `for n in (0..200).map(|x| x * 10)`. This is not perfect as it misses the aligned lengths (32, 64, 128, ...), but I'm not sure what a good strategy would be. Do you have a suggestion? Anyway, the benches take forever even with this setup 🤔
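One possible strategy (an editorial sketch, not from the thread; the function name is illustrative) is to merge the multiples of 10 with the powers of two, so the SIMD-aligned lengths are sampled without growing the total count much:

```rust
// Sketch: sample lengths as multiples of 10 plus powers of two,
// so aligned sizes (32, 64, 128, ...) are included in the sweep.
fn sample_lengths(max: usize) -> Vec<usize> {
    // Every 10th length, as in the current benches.
    let mut lens: Vec<usize> = (0..=max / 10).map(|x| x * 10).collect();
    // Add powers of two up to `max` to hit the aligned sizes.
    let mut p: usize = 1;
    while p <= max {
        lens.push(p);
        p *= 2;
    }
    lens.sort_unstable();
    lens.dedup();
    lens
}
```

Criterion would then iterate over `sample_lengths(...)` instead of the fixed `(0..200).map(|x| x * 10)` range.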
Maybe we could drop the signed versions?

```rust
benchmark_contains::<u8>(c, "u8", n);
benchmark_contains::<i8>(c, "i8", n);
benchmark_contains::<u16>(c, "u16", n);
benchmark_contains::<i16>(c, "i16", n);
benchmark_contains::<u32>(c, "u32", n);
benchmark_contains::<i32>(c, "i32", n);
benchmark_contains::<u64>(c, "u64", n);
benchmark_contains::<i64>(c, "i64", n);
benchmark_contains::<isize>(c, "isize", n);
benchmark_contains::<usize>(c, "usize", n);
benchmark_contains_floats::<f32>(c, "f32", n);
benchmark_contains_floats::<f64>(c, "f64", n);
```

to

```rust
benchmark_contains::<u8>(c, "u8", n);
benchmark_contains::<u16>(c, "u16", n);
benchmark_contains::<u32>(c, "u32", n);
benchmark_contains::<u64>(c, "u64", n);
benchmark_contains::<usize>(c, "usize", n);
benchmark_contains_floats::<f32>(c, "f32", n);
benchmark_contains_floats::<f64>(c, "f64", n);
```
@LaihoE Thank you for getting back to me! In my view, more benchmarks are always better. I also think we can use Criterion to take care of the plotting, which would be nice since it would not require Python. I'm running the benchmarks on a server right now to see how it looks.
Sounds good!
Since we need to use a Python script anyway, I wanted to bring up divan. It would allow you to remove the repeated benchmark function calls for the different types. Take a look at how convenient it is here. The downside is that divan currently doesn't support output to JSON/CSV, so we'd have to parse the output manually. Just an option to consider.
Divan certainly looks convenient, but since Criterion is the go-to benchmark tool, I feel like people may find the results less trustworthy. Divan also seems to be quite a young project.
Hi,
Thank you for your hard work. This is an awesome project! I'd be happy to contribute a plotting script to help automate the process. Do you just collect the dataset via Criterion? If so, are there any other flags/options I should enable during the benchmarks?
Thank you!