
Add scripts for benchmarks using the current API #306

Merged · 14 commits into sigstore:main on Sep 11, 2024

Conversation

mihaimaruseac (Collaborator)

Summary

These are the scripts to generate sample models for benchmarks and to benchmark serializing models from disk into the in-toto statements (or the digest) used just before signing.
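
To make concrete what the serialization benchmark measures, here is a minimal, hypothetical sketch (not this library's actual API) that walks a model directory and hashes every file into a single digest, which is roughly the kind of work being timed:

```python
# Hypothetical sketch of the work being benchmarked: hashing every file
# of a model directory into one digest. The real library API differs.
import hashlib
import pathlib


def digest_model(root: str) -> str:
    """Hash all files under `root` in a deterministic order."""
    outer = hashlib.sha256()
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file():
            inner = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    inner.update(chunk)
            # Mix in the relative path so renames change the digest.
            outer.update(str(path.relative_to(root)).encode())
            outer.update(inner.digest())
    return outer.hexdigest()


if __name__ == "__main__":
    print(digest_model("/tmp/file"))
```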

I will follow up with another PR containing benchmark data; the benchmarks are currently running. I'll also add those numbers to a README that explains how to run the scripts and what experiments we are considering.

I wanted to have this in place before I move on to cleaning up the API a little. If you look at the benchmark harness, we have quite a large number of options, and the APIs should be very similar. I'll work on that next, while the benchmarks here are running.

We should be able to use this to compare against the numbers in #13 (and also to investigate SHA256 vs BLAKE2).
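
As a rough illustration of the SHA256 vs. BLAKE2 question, a standard-library micro-benchmark over an in-memory buffer might look like the sketch below; the buffer size is arbitrary and the real comparison should run over on-disk models:

```python
# Rough micro-benchmark of SHA256 vs BLAKE2b on an in-memory buffer.
# Illustrative only; the buffer size is an arbitrary choice.
import hashlib
import time

DATA = b"\0" * (256 * 1024 * 1024)  # 256 MiB of zeros

for name in ("sha256", "blake2b"):
    start = time.perf_counter()
    hashlib.new(name, DATA).hexdigest()
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.3f} s")
```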

Fixes #206 in that the benchmarks are now public, but we should still gather benchmark data before releasing a stable version of the library.

Release Note

NONE

Documentation

NONE

@mihaimaruseac mihaimaruseac added this to the V1 release milestone Sep 10, 2024
@mihaimaruseac mihaimaruseac marked this pull request as ready for review September 10, 2024 21:02
@mihaimaruseac mihaimaruseac requested review from a team as code owners September 10, 2024 21:02
We go from

```
[...]$ hyperfine -w 3 "python benchmarks/generate.py file --root /tmp/file 100000000"
Benchmark 1: python benchmarks/generate.py file --root /tmp/file 100000000
  Time (mean ± σ):     10.290 s ±  0.140 s    [User: 10.197 s, System: 0.092 s]
  Range (min … max):   10.149 s … 10.541 s    10 runs
```

to

```
[...]$ hyperfine -w 3 "python benchmarks/generate.py file --root /tmp/file 100000000" --show-output
Benchmark 1: python benchmarks/generate.py file --root /tmp/file 100000000
  Time (mean ± σ):     381.1 ms ±  13.9 ms    [User: 512.9 ms, System: 633.1 ms]
  Range (min … max):   365.5 ms … 412.1 ms    10 runs
```
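
For context, creating a large sample file can be made much faster by avoiding byte-by-byte writes. The sketch below shows two common approaches, chunked zero writes versus pre-allocating with `truncate()`; it is a hypothetical illustration and not necessarily what this PR's `generate.py` does:

```python
# Hypothetical illustration (not necessarily what the PR does) of two ways
# to create a large sample file: writing zeros in large chunks, or
# pre-allocating the size with truncate(), which is near-instant.
import os


def generate_chunked(path: str, size: int, chunk: int = 1 << 20) -> None:
    """Write `size` zero bytes in `chunk`-sized writes."""
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            step = min(chunk, remaining)
            f.write(b"\0" * step)
            remaining -= step


def generate_truncated(path: str, size: int) -> None:
    """Create a (possibly sparse) file of `size` bytes without writing data."""
    with open(path, "wb") as f:
        f.truncate(size)


if __name__ == "__main__":
    generate_chunked("/tmp/file_chunked", 100_000_000)
    generate_truncated("/tmp/file_truncated", 100_000_000)
    print(os.path.getsize("/tmp/file_chunked"),
          os.path.getsize("/tmp/file_truncated"))
```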

Signed-off-by: Mihai Maruseac <[email protected]> (all commits)

Review comments (resolved) on benchmarks/generate.py, benchmarks/serialize.py, and pyproject.toml.
@mihaimaruseac mihaimaruseac merged commit 74dedf9 into sigstore:main Sep 11, 2024
18 checks passed
@mihaimaruseac mihaimaruseac deleted the benchmarks branch September 11, 2024 19:46
Development: merging this pull request closes the "Benchmarking scripts" issue.
3 participants