
chore(ci): cleanup benchmarking scripts and add profiling capability #9419

Merged
brettlangdon merged 10 commits into main from brettlangdon/benchmark.improvements
May 29, 2024

Conversation

@brettlangdon (Member) commented May 28, 2024

The main change here is adding viztracer and a way to generate profiles from benchmark scenario runs.

List of changes:

  • Adding a PROFILE_BENCHMARKS=1 env var to trigger generating viztracer profiles for each scenario, putting the results in the artifacts directory
  • Support supplying . as a ddtrace version to install the local version mounted at /src/ in the benchmark container
  • Run the benchmark container with --network host to allow connecting to a local trace agent for the scenarios which rely on an agent (flask_simple, etc)
  • Make sure latest version of pip is present (not that important, I just saw a version upgrade notice, so figured it doesn't hurt)
  • Run build_docs when modifying benchmarks/README.rst since it is included in our docs
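As a rough illustration of the first bullet, a benchmark runner could gate profiling on the new env var. This is a minimal sketch, not code from the PR: only the PROFILE_BENCHMARKS=1 variable comes from the description above; the helper name profiling_enabled and the branch body are hypothetical.

```python
import os


def profiling_enabled() -> bool:
    """Return True when PROFILE_BENCHMARKS=1 is set.

    PROFILE_BENCHMARKS=1 is the env var this PR adds to trigger
    generating viztracer profiles for each benchmark scenario.
    """
    return os.environ.get("PROFILE_BENCHMARKS") == "1"


if profiling_enabled():
    # Hypothetical: wrap the scenario run in a viztracer session and
    # write the resulting profile into the artifacts directory.
    pass
```

Keying the check on the exact string "1" keeps the flag opt-in: unset, empty, or any other value leaves profiling disabled.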

Checklist

  • Change(s) are motivated and described in the PR description
  • Testing strategy is described if automated tests are not included in the PR
  • Risks are described (performance impact, potential for breakage, maintainability)
  • Change is maintainable (easy to change, telemetry, documentation)
  • Library release note guidelines are followed or label changelog/no-changelog is set
  • Documentation is included (in-code, generated user docs, public corp docs)
  • Backport labels are set (if applicable)
  • If this PR changes the public interface, I've notified @DataDog/apm-tees.

Reviewer Checklist

  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Description motivates each change
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Change is maintainable (easy to change, telemetry, documentation)
  • Release note makes sense to a user of the library
  • Author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

@brettlangdon brettlangdon added the changelog/no-changelog A changelog entry is not required for this PR. label May 28, 2024
@brettlangdon brettlangdon requested a review from a team as a code owner May 28, 2024 21:30
@brettlangdon brettlangdon requested a review from emmettbutler May 28, 2024 21:31
@brettlangdon brettlangdon enabled auto-merge (squash) May 28, 2024 21:39
@brettlangdon (Member Author) commented:

Multi-process timeline view (zoomed out): [screenshot]

Individual process timeline (zoomed in): [screenshot]

Combined flamegraph across all processes: [screenshot]

Custom SQL query: [screenshot]

benchmarks/README.rst — review comment (outdated, resolved)
@brettlangdon brettlangdon merged commit 110f4e4 into main May 29, 2024
50 of 51 checks passed
@brettlangdon brettlangdon deleted the brettlangdon/benchmark.improvements branch May 29, 2024 20:05
Labels
changelog/no-changelog A changelog entry is not required for this PR.