Compare side-by-side #304

amomchilov opened this issue Jan 19, 2025 · 3 comments

Hey there!

From reading the docs and the implementation of this package, I understand that it emphasizes "before vs. after" style benchmarking, where the general process is:

  1. write an initial implementation of some code
  2. benchmark it, and set it as a baseline
  3. change the code, applying some optimizations
  4. benchmark it again, and compare that to the baseline
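
In CLI terms, I believe steps 2 and 4 map to something like this (the baseline name "before" is just an arbitrary label I picked):

    # step 2: run the benchmarks and store the results as a named baseline
    swift package --allow-writing-to-package-directory benchmark baseline update before

    # step 3: edit the code, applying the optimizations

    # step 4: re-run and compare the new results against the stored baseline
    swift package benchmark baseline compare before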

This is the process that gives you the nice percentage deltas via swift package benchmark baseline compare. That works great for iterating on larger app components, but I'm often interested in comparing two small samples side by side, like this: https://github.com/amomchilov/SwiftMicroBenchmarks/blob/d96eb3b076e696dc44bb02ef97c071322dbafa10/Benchmarks/IsOptional/IsOptional.swift#L34
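
For concreteness, here is a minimal sketch of what I mean by "side by side", using this package's Benchmark API: two named benchmarks registered next to each other in one target. The workload functions and benchmark names below are just placeholders:

    import Benchmark

    // Two placeholder implementations of the same operation, to be compared side by side.
    func sumWithLoop() -> Int {
        var total = 0
        for i in 0..<1_000 { total += i }
        return total
    }

    func sumWithReduce() -> Int {
        (0..<1_000).reduce(0, +)
    }

    let benchmarks: @Sendable () -> Void = {
        Benchmark("Sum: for loop") { benchmark in
            for _ in benchmark.scaledIterations {
                blackHole(sumWithLoop())
            }
        }

        Benchmark("Sum: reduce") { benchmark in
            for _ in benchmark.scaledIterations {
                blackHole(sumWithReduce())
            }
        }
    }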

As a point of comparison, consider the popular Ruby gem benchmark-ips: calling x.compare! generates a comparison table across all of the named benchmarks in the current run.

Is there any plan to support comparing multiple "current" benchmarks like that?

hassila (Contributor) commented Jan 31, 2025

Have you tried --grouping metric together with a relevant combination of

--filter <filter>
Benchmarks matching the regexp filter that should be run
--skip <skip>
Benchmarks matching the regexp filter that should be skipped
--target <target>
Benchmark targets matching the regexp filter that should be run
--skip-target <skip-target>
Benchmark targets matching the regexp filter that should be skipped

to select what you want to run?

That way you can get a comparison table, metric by metric, for any subset of the benchmarks you have defined.
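
For example, something along these lines (the filter regexp here is only illustrative):

    # run only the benchmarks matching the regexp, and group the result tables
    # per metric so the selected benchmarks line up as rows of the same table
    swift package benchmark --filter "IsOptional.*" --grouping metric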

hassila (Contributor) commented Feb 11, 2025

@amomchilov did you get the chance to try --grouping metric and see if it solved your use case?

hassila (Contributor) commented Feb 11, 2025

Also see the ability to force common units in the output, added as part of release 1.28.0:
https://github.com/ordo-one/package-benchmark/releases/tag/1.28.0
