
Add benchmarks #18

Merged
14 commits merged into main from ef/benchmarks on Jun 25, 2024
Conversation

efaulhaber (Member)
Based on #15.

codecov bot commented May 21, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 89.86%. Comparing base (f4109dd) to head (eb64d9c).

Additional details and impacted files
@@           Coverage Diff           @@
##             main      #18   +/-   ##
=======================================
  Coverage   89.86%   89.86%           
=======================================
  Files          13       13           
  Lines         306      306           
=======================================
  Hits          275      275           
  Misses         31       31           
Flag   Coverage Δ
unit   89.86% <ø> (ø)

Flags with carried forward coverage won't be shown.

@efaulhaber efaulhaber reopened this May 29, 2024
@efaulhaber efaulhaber marked this pull request as ready for review May 29, 2024 11:01
@efaulhaber efaulhaber requested review from svchb and LasNikas May 29, 2024 11:14
@svchb (Collaborator) left a comment


What is the intention here for actually showing the benchmark results?
Do you want to add them to a gh page later?

The current output:

TrivialNeighborhoodSearch
with 50 = 50 particles finished in 2.235 μs

GridNeighborhoodSearch
with 50 = 50 particles finished in 1.041 μs

PrecomputedNeighborhoodSearch
with 50 = 50 particles finished in 105.863 ns
... 

needs to be more structured; it's unclear, for example, how the second output differs from the first:

TrivialNeighborhoodSearch
with 50 = 50 particles finished in 3.202 μs

It would also be nice to have a benchmark just for the initialization of the methods to cover PrecomputedNeighborhoodSearch.
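
As a rough illustration, such an initialization-only benchmark could look like the sketch below. The constructor arguments and the initialize! signature are assumptions based on the names used in this thread, not the confirmed API of this package:

using BenchmarkTools
using PointNeighbors

# Sketch: time only the initialization step, which is where
# `PrecomputedNeighborhoodSearch` does most of its work.
# Constructor arguments and the `initialize!` signature are assumed.
function benchmark_initialization(coordinates, search_radius)
    nhs = PrecomputedNeighborhoodSearch{2}(; search_radius,
                                           n_points = size(coordinates, 2))
    return @belapsed initialize!($nhs, $coordinates, $coordinates)
end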

(Review threads on benchmarks/plot.jl and test/benchmarks.jl: resolved.)
efaulhaber (Member, Author) commented Jun 20, 2024

What is the intention here for actually showing the benchmark results?
Do you want to add them to a gh page later?

Yes, the plots are supposed to go in the docs and/or the README. Especially with #8, we can also nicely show how we compare to other Julia packages. Unlike other packages, I wanted to make our benchmark code easily available, so that a single command is enough to produce these plots.
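
For example, producing the plots could be as simple as the following (the include path is assumed from the files touched in this PR; the call itself is taken from the output shown below):

julia> include("benchmarks/plot.jl")

julia> plot_benchmarks(benchmark_count_neighbors, (100, 100), 2)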

The current output:

TrivialNeighborhoodSearch
with 50 = 50 particles finished in 2.235 μs

GridNeighborhoodSearch
with 50 = 50 particles finished in 1.041 μs

PrecomputedNeighborhoodSearch
with 50 = 50 particles finished in 105.863 ns
...

needs to be more structured; it's unclear, for example, how the second output differs from the first:

TrivialNeighborhoodSearch
with 50 = 50 particles finished in 3.202 μs

I don't understand what you mean. The output very clearly shows the timings of the different implementations at different problem sizes:

julia> plot_benchmarks(benchmark_count_neighbors, (100, 100), 2)
TrivialNeighborhoodSearch
with 100x100 = 10000 particles finished in 8.618 ms

GridNeighborhoodSearch
with 100x100 = 10000 particles finished in 769.375 μs

PrecomputedNeighborhoodSearch
with 100x100 = 10000 particles finished in 53.708 μs

TrivialNeighborhoodSearch
with 200x200 = 40000 particles finished in 136.701 ms

GridNeighborhoodSearch
with 200x200 = 40000 particles finished in 3.472 ms

PrecomputedNeighborhoodSearch
with 200x200 = 40000 particles finished in 215.458 μs

It would also be nice to have a benchmark just for the initialization of the methods to cover PrecomputedNeighborhoodSearch.

My future plans for the benchmarks are:

  1. Add real-life SPH benchmarks calling interact! for WCSPH and TLSPH (Add real-life SPH benchmarks #29).
  2. Add an artificial benchmark for update! by updating the same neighborhood search, alternating between two slightly different coordinate arrays (point_cloud with different seeds); a sketch follows after this list.
  3. Add a real-life benchmark of a full WCSPH simulation (combining update and query performance).
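
A rough sketch of what the update! benchmark in point 2 could look like. The point_cloud helper is the one mentioned above (its seed keyword is assumed), and the GridNeighborhoodSearch constructor arguments and the initialize!/update! signatures are illustrative, not the confirmed API:

using BenchmarkTools
using PointNeighbors

# Two slightly different point clouds of the same size, from different seeds.
# `point_cloud` is the helper from this PR; the `seed` keyword is assumed.
coords_a = point_cloud((100, 100), seed = 1)
coords_b = point_cloud((100, 100), seed = 2)

search_radius = 0.1  # illustrative value
nhs = GridNeighborhoodSearch{2}(; search_radius, n_points = size(coords_a, 2))
initialize!(nhs, coords_a, coords_a)

# Alternate between the two coordinate arrays, so that every `update!`
# actually has to move points between grid cells.
@benchmark begin
    update!($nhs, $coords_a, $coords_a)
    update!($nhs, $coords_b, $coords_b)
end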

@efaulhaber efaulhaber requested a review from svchb June 25, 2024 14:19
@svchb svchb merged commit 927120c into main Jun 25, 2024
10 checks passed
@svchb svchb deleted the ef/benchmarks branch June 25, 2024 18:38
LasNikas added a commit to LasNikas/PointNeighbors.jl that referenced this pull request Jun 25, 2024