Add benchmarks #18
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff           @@
##             main      #18   +/- ##
=======================================
  Coverage   89.86%   89.86%
=======================================
  Files          13       13
  Lines         306      306
=======================================
  Hits          275      275
  Misses         31       31

Flags with carried forward coverage won't be shown.
☔ View full report in Codecov by Sentry.
What is the intention here for actually showing the benchmark results?
Do you want to add them to a gh page later?
The current output:
TrivialNeighborhoodSearch
with 50 = 50 particles finished in 2.235 μs
GridNeighborhoodSearch
with 50 = 50 particles finished in 1.041 μs
PrecomputedNeighborhoodSearch
with 50 = 50 particles finished in 105.863 ns
...
needs to be more structured; it's unclear what the difference is between these and, for example, the second output:
TrivialNeighborhoodSearch
with 50 = 50 particles finished in 3.202 μs
It would also be nice to have a benchmark just for the initialization of the methods to cover PrecomputedNeighborhoodSearch.
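As an illustration (not the PR's actual code), a minimal sketch of such an initialization-only benchmark with BenchmarkTools.jl; the package name, the constructor arguments, and the `initialize!` signature are assumptions and may differ from the real API:

```julia
using BenchmarkTools
using PointNeighbors  # assumed package name

# Hypothetical sketch: measure only the initialization of the search
# structure for a fixed point cloud. `coordinates` is an NDIMS x n_points
# matrix; constructor arguments and `initialize!` are placeholders.
coordinates = rand(2, 50)
search_radius = 0.1
n_points = size(coordinates, 2)

nhs = PrecomputedNeighborhoodSearch{2}(search_radius, n_points)

# Interpolating with `$` ensures BenchmarkTools times only the call itself.
trial = @benchmark initialize!($nhs, $coordinates, $coordinates)

println("PrecomputedNeighborhoodSearch with ", n_points,
        " points initialized in ",
        BenchmarkTools.prettytime(minimum(trial).time))
```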
Yes, the plots are supposed to go in the docs and/or the README. Especially with #8, we can also very nicely show the comparison with other Julia packages. And unlike other packages, I wanted to make our benchmark code easily available, so that one only has to run a single command to produce these plots.
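One possible shape for such a single entry point, sketched with BenchmarkTools.jl and Plots.jl; the function name `plot_benchmarks` and the benchmarked call are purely illustrative and not taken from this PR:

```julia
using BenchmarkTools
using Plots

# Illustrative sketch: time a benchmark function `f(n)` for several problem
# sizes and plot the results, so a single call produces the figure for the
# docs/README. `f` is any user-supplied function taking the problem size.
function plot_benchmarks(f, sizes; label = string(f))
    # `@belapsed` returns the minimum measured runtime in seconds.
    times = [@belapsed $f($n) for n in sizes]

    plot(sizes, times; xscale = :log10, yscale = :log10,
         xlabel = "number of particles", ylabel = "runtime in seconds",
         label = label, marker = :circle)
end

# Hypothetical usage with a user-defined benchmark function:
# plot_benchmarks(n -> benchmark_grid_nhs(n), [50, 500, 5_000, 50_000])
```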
I don't understand what you mean. The output very clearly shows the timings of the different implementations at different problem sizes:
My future plans for the benchmarks are:
Based on #15.