Comparison chart with bars #22
Comments
Hey @orangy, do you have any concrete JMH JSON output you could share?
We are trying to use it with multiplatform Kotlin benchmarks: https://github.com/kotlin/kotlinx-benchmark
So I guess you would have different JSON files (e.g. one each for Kotlin JS, Native, JVM, etc.), and then you want to compare the same benchmarks across the files?
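For context, a JMH JSON result file (produced with `-rf json`) is, to the best of my knowledge, an array of entries roughly like the sketch below; the benchmark name and scores here are made up, and the fields are trimmed to the ones a comparison tool would need.

```python
import json

# Minimal sketch of one entry in a JMH JSON result file; the benchmark
# name and numbers are hypothetical, the field names match typical
# JMH "-rf json" output.
sample = json.loads("""
[
  {
    "benchmark": "org.example.MapBench.lookup",
    "mode": "avgt",
    "primaryMetric": {
      "score": 123.4,
      "scoreError": 5.6,
      "scoreUnit": "ns/op"
    }
  }
]
""")

# A comparison across files would key on "benchmark" and read the score.
for entry in sample:
    metric = entry["primaryMetric"]
    print(entry["benchmark"], metric["score"], metric["scoreUnit"])
```

Matching entries across files by the `benchmark` key is what "compare the same benchmarks" would boil down to.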
@jzillmann I'd also like to thank you for this project, it's great! I have a similar use case: in this PR I'm developing a hash map implementation that uses a different technique for hashing keys than the Scala collections implementation, so my benchmarks are basically comparing the corresponding methods of the two implementations to see how competitive my implementation is vs the Scala collections one. You can find an example of the kind of JMH results I want to compare here (albeit these ones are CSV, not JSON). I'm currently at @opencastsoftware working on open source, and although I'm not a great JS developer personally, I think there are people on my team who would love to have a go at implementing this?
BTW you can see the approach I took to generate the benchmark charts from that PR in this gist; basically I used a regex to extract the benchmarks containing a particular name.
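That regex-filtering approach can be sketched roughly as below; this is not the code from the gist, the CSV content is a hypothetical stand-in for JMH `-rf csv` output, and the pattern is a placeholder for whichever substring was actually matched.

```python
import csv
import io
import re

# Hypothetical JMH CSV output; real files come from "-rf csv".
raw = '''"Benchmark","Mode","Score","Unit"
"org.example.MapBench.scalaLookup","avgt",150.0,"ns/op"
"org.example.MapBench.myLookup","avgt",120.0,"ns/op"
"org.example.OtherBench.run","avgt",99.0,"ns/op"
'''

# Placeholder pattern: keep only benchmarks whose name mentions "Lookup".
pattern = re.compile(r"Lookup")

rows = [row for row in csv.DictReader(io.StringIO(raw))
        if pattern.search(row["Benchmark"])]

print([row["Benchmark"] for row in rows])
```

The surviving rows can then be fed to whatever charting step renders the comparison.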
Hey @DavidGregory084, I think your request is different from the original purpose of the ticket (which I think is more about aggregating multiple result files into a single run, rather than multiple runs). When it comes to code modifications, I'm not using the project right now, so I won't invest much time into it. At one point I also considered having a kind of configuration file (where you could associate, include, or exclude benchmarks). So if the project were adapted and released as an npm module, people could have a configuration file in their project and generate the graphs more specifically...
I'm not sure if this falls within the same request, but my case would be satisfied by just being able to have a consistent x-axis whenever I upload a run. Right now I'm changing my code and re-running the same benchmark; however, the bar charts can't be visually compared between runs because they all use a different, dynamically generated x-axis. If they all had the same x-axis, I could display all runs and see how one run was an improvement over a previous one because the bars would be visually longer. A user-configurable option to specify the x-axis maximum would work well.
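One simple way to get that consistency, sketched below with hypothetical run data (real scores would come from parsed JMH result files), is to compute a single axis maximum over every score in every loaded run instead of per chart:

```python
# Hypothetical scores for two runs of the same benchmarks; stand-ins
# for values parsed out of JMH JSON result files.
runs = {
    "run-1": {"MapBench.lookup": 150.0, "MapBench.insert": 310.0},
    "run-2": {"MapBench.lookup": 120.0, "MapBench.insert": 290.0},
}

# Shared x-axis maximum: the largest score across all runs,
# padded by 10% so the longest bar doesn't touch the chart edge.
axis_max = max(score for run in runs.values() for score in run.values()) * 1.1

print(axis_max)
```

Every chart rendered against the same `axis_max` (or a user-supplied override) would keep bar lengths comparable across runs.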
Thanks for such a nice project! I've been using JMH for quite a while and only just now found it :)
I'm trying to use the visualizer to compare values for different flavors of the same code, not several runs in the optimization process. For example, an HTTP server (ktor.io) using different engines such as Netty, Jetty, or coroutines. Another example is multiplatform benchmarks for Kotlin JS, Native & JVM.
It would be nice to have a different comparison rendering that would show differently colored bars with a legend for the same test (with a vague, maybe configurable, definition of "same"). The bar graph as it is now makes no sense for such comparisons.
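The grouping described above could be sketched like this: merge one result file per flavor and pivot to one row per benchmark, with one column (and legend entry, and bar color) per flavor. The file contents here are hypothetical stand-ins for parsed JMH results.

```python
# Hypothetical parsed results, one dict per flavor/engine; keys are
# benchmark names, values are scores (e.g. ns/op).
files = {
    "Netty": {"HttpBench.get": 110.0, "HttpBench.post": 210.0},
    "Jetty": {"HttpBench.get": 130.0, "HttpBench.post": 250.0},
}

# Pivot: benchmark name -> {flavor: score}. "Same" benchmark here
# simply means an identical name across files; a configurable
# definition of "same" could normalize names before this step.
grouped = {}
for flavor, scores in files.items():
    for bench, score in scores.items():
        grouped.setdefault(bench, {})[flavor] = score

for bench, by_flavor in sorted(grouped.items()):
    print(bench, by_flavor)
```

Each `grouped` entry maps directly to one cluster of colored bars, with the flavors as the legend.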