Performance

We don't currently have a coherent story for performance measurement, but we would love help with this. Ideally, we would establish two key workflows:

  1. regular benchmarking and data collection
  2. a profiling setup to dig into performance regressions/opportunities

Regular Benchmarking

The best tool for benchmarking Python frameworks over time seems to be Airspeed Velocity (asv). We experimented with running it from the same repo as the framework, but the better move is likely to set up a separate repo to collect the measurements; a sketch of what that could look like follows. We would love some help setting this up!
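
asv discovers benchmarks in Python files under a benchmarks/ directory and times every time_* function it finds. The sketch below is an assumption rather than an existing setup; it reuses the xdsl-opt entry point and the test file path from the profiling commands further down.

# benchmarks/bench_xdsl_opt.py (hypothetical file; asv times any time_* function)
import subprocess


def time_empty_program():
    # Round-trip an empty program through xdsl-opt, including startup cost.
    subprocess.run(
        ["xdsl-opt", "tests/xdsl_opt/empty_program.mlir"],
        check=True,
        capture_output=True,
    )

Given an asv.conf.json pointing at the xDSL repository, asv run can then measure a range of commits, each in a fresh environment, and asv publish renders the collected results as a static HTML report.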

Profiling

It's not clear that there's a single winner for best profiling tool. Here's a list of setups that mostly work:

# VizTracer + Perfetto
viztracer xdsl/tools/xdsl_opt.py tests/xdsl_opt/empty_program.mlir
vizviewer result.json
# Alternatively: ui.perfetto.dev + drag and drop result.json
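
VizTracer also has a Python API, which helps when only part of a run is interesting; a minimal sketch, with the workload left as a placeholder:

# trace_region.py: trace only the code inside the with-block
from viztracer import VizTracer

with VizTracer(output_file="result.json"):
    ...  # place the code under investigation here, e.g. a single pass

The resulting result.json opens in vizviewer or ui.perfetto.dev exactly as above.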

# cProfile + flameprof
python -m cProfile -o my-prof.prof xdsl/tools/xdsl-opt tests/xdsl_opt/empty_program.mlir
flameprof my-prof.prof > my-prof.svg
open my-prof.svg  # macOS; on Linux, use xdg-open

Since we introduced lazy loading of dialects, cProfile has stopped collecting full traces; the cause is not yet understood.
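
One way to narrow this down is to drive cProfile programmatically inside the process instead of through the command line; whether that recovers the entries dropped under lazy loading is untested. A minimal sketch, with the workload left as a placeholder:

# profile_inline.py: profile a region of code and print the hottest entries
import cProfile
import pstats

with cProfile.Profile() as prof:
    ...  # place the workload here, e.g. parsing and printing a module

pstats.Stats(prof).sort_stats("cumulative").print_stats(20)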

Help with improving our benchmarking and profiling setup is welcome.