Refactor benchmarks #12
Conversation
@CodyCBakerPhD what do you think about a design like this for the tests? The main idea is that I would like to be able to compute multiple metrics for the same test case, but I would like to avoid having to copy-paste the same case multiple times just to change the metric.
Overall definitely a good idea; left some suggestions for improving the hierarchy. We could even go down the road of having 'mixin' tests if we want to define common types that ought to apply with mix-and-matchable setups.
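As a rough illustration of what such mixin tests might look like (all class and method names here are hypothetical, not taken from the PR, and this assumes the asv-style convention where method prefixes such as `time_` and `peakmem_` select the metric):

```python
import numpy as np


class TimeMetricMixin:
    """Adds a timing benchmark to any class defining `operation_to_measure`."""

    def time_operation(self):
        self.operation_to_measure()


class PeakMemoryMetricMixin:
    """Adds a peak-memory benchmark to the same shared operation."""

    def peakmem_operation(self):
        self.operation_to_measure()


class SliceRequestBenchmark(TimeMetricMixin, PeakMemoryMetricMixin):
    """Mix and match: one test case picks up both metrics."""

    def setup(self):
        # Stand-in for opening a remote NWB file; the real benchmarks
        # would stream data instead of building a local array.
        self.data = np.random.rand(1_000, 384)

    def operation_to_measure(self):
        # The shared test case: a slice request against the dataset.
        return self.data[:100, :64]
```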
Thanks @CodyCBakerPhD for the helpful suggestions. I made some more changes to implement what you suggested. Can you please take another look?
@oruebel Thanks for working with me on this; I've opened #19, #20, and #21 in draft mode as demonstrations of alternative strategies.

We should really put some thorough thought into what will make it easiest for us to both (a) write tests quickly (as in, now, during development) without, as you say, copying/pasting too much, and (b) read them (in the far future) to quickly remember and understand what is being set up and tested in each case. The earlier a refactor is decided on and put in place, the less work it will be compared to fixing it down the line. Too much implicit structure and redirection can hamper (b), but we need some amount of it to speed up (a) and keep the main benchmarks relatively clean, so a balance must be struck somehow.

For example, on this PR, if we wanted to answer the question 'what is … doing?', we can see quickly that it runs 3 time tests that redirect to class methods. We follow the parent to the streaming base to discover the method definitions in each case, so this one is not too difficult to go back and read. However, if we wanted to answer the question 'what is … doing?', we can see that a single time test is being run, and that it is running a slice request. But this time we have to follow the parent to find the …. Also note that in both cases, for development, if we wanted to make a slight change to the ….

As an aside, I realized the context method is also going to track the …
I totally agree. I appreciate that you are taking such a thoughtful approach to this. I think it would be easiest for us to review the different proposed variants via Zoom so that we can decide on a strategy.
Replaced by #21.
Refactor benchmarks to separate the definition of test cases from the specific metrics. The idea is to define the test case in a base class, while the actual benchmark class defines the metric to be run for that test case. The goal is to make it easy to run the same test with different metrics without having to repeat the implementation of the test case.
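A minimal sketch of this split, again assuming asv-style benchmarks where the method prefix (`time_`, `peakmem_`) determines the metric; all names below are illustrative, not the actual PR code:

```python
import numpy as np


class SliceRequestTestCase:
    """Base class: defines *what* is exercised, but attaches no metric."""

    def setup(self):
        # Stand-in for opening a remote NWB file; the real benchmarks
        # would stream the data from a remote store instead.
        self.data = np.random.rand(1_000, 384)

    def run_case(self):
        # The shared test case: a slice request against the dataset.
        return self.data[:100, :64]


class TimeSliceRequest(SliceRequestTestCase):
    """Benchmark class: attaches the timing metric to the shared case."""

    def time_slice_request(self):
        self.run_case()


class PeakMemorySliceRequest(SliceRequestTestCase):
    """Benchmark class: attaches the peak-memory metric to the same case."""

    def peakmem_slice_request(self):
        self.run_case()
```

With this layout, adding a new metric only means adding another small subclass; the test case itself is written once in the base class.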