
feat: remove earthly usage for benchmarks #654

Merged · 10 commits · Jan 17, 2025
Changes from 1 commit
17 changes: 9 additions & 8 deletions .github/workflows/benchmark.yml
@@ -26,16 +26,17 @@ jobs:
--postgres-max-idle-conns=100
--experimental-features
--otel-metrics-keep-in-memory &
- uses: extractions/setup-just@v2
- run: >
earthly
--allow-privileged
${{ contains(github.event.pull_request.labels.*.name, 'no-cache') && '--no-cache' || '' }}
./test/performance+run --args="-benchtime 10s --ledger.url=http://localhost:3068 --parallelism=5" --locally=yes
just
--justfile ./test/performance/justfile
--working-directory ./test/performance
run . 5 10s 1 http://localhost:3068
- run: >
earthly
--allow-privileged
${{ contains(github.event.pull_request.labels.*.name, 'no-cache') && '--no-cache' || '' }}
./test/performance+generate-graphs
just
--justfile ./test/performance/justfile
--working-directory ./test/performance
graphs
- run: kill -9 $(ps aux | grep "ledger serve"| grep -v "grep" | awk '{print $2}')
Member:
Without this step, if the benchmarks fail for some reason, the Ledger will keep running and will never be killed.

Contributor (author):
The ledger is run via the test, not externally

if: always()
- uses: actions/upload-artifact@v4
1 change: 1 addition & 0 deletions flake.nix
@@ -101,6 +101,7 @@
just
nodejs_22
self.packages.${system}.speakeasy
goperf
];
};
}
59 changes: 0 additions & 59 deletions test/performance/Earthfile

This file was deleted.

10 changes: 5 additions & 5 deletions test/performance/README.md
@@ -10,25 +10,25 @@ Scripts can be found in directory [scripts](./scripts).
## Run locally

```shell
earthly +run
just run
```

You can pass additional arguments (the underlying command is a standard `go test -bench=.`) using the flag `--args`.
For example:
```shell
earthly +run --args="-benchtime 10s"
just run "-benchtime 10s"
```

## Run on a remote stack

```shell
earthly +run --args="--stack.url=XXX --client.id=XXX --client.secret=XXX"
just run "--stack.url=XXX --client.id=XXX --client.secret=XXX"
```

## Run on a remote ledger

```shell
earthly +run --args="--ledger.url=XXX --auth.url=XXX --client.id=XXX --client.secret=XXX"
just run "--ledger.url=XXX --auth.url=XXX --client.id=XXX --client.secret=XXX"
```

## Results
@@ -37,7 +37,7 @@ TPS is included as a benchmark metrics.

You can generate some graphs using the command:
```
earthly +generate-graphs
just graphs
```

See generated files in `report` directory.
15 changes: 8 additions & 7 deletions test/performance/charts/index.js
@@ -39,13 +39,14 @@ const main = () => __awaiter(void 0, void 0, void 0, function* () {
let buffer = fs.readFileSync('../report/report.json', 'utf-8');
let reports = JSON.parse(buffer);
yield (0, graphs_1.exportTPSGraph)({
output: 'tps.png',
output: '../report/tps.png',
}, reports);
yield (0, graphs_1.exportLatencyGraph)({
output: 'p99.png'
}, 'P99', reports);
yield (0, graphs_1.exportLatencyGraph)({
output: 'p95.png'
}, 'P95', reports);
yield (0, graphs_1.exportDatabaseStats)('../report/database_connections.png', reports);
const ps = ['P99', 'P95', 'P75', 'Avg'];
for (let p of ps) {
yield (0, graphs_1.exportLatencyGraph)({
output: '../report/' + p.toLowerCase() + '.png'
}, p, reports);
}
});
main();
6 changes: 3 additions & 3 deletions test/performance/charts/index.ts
@@ -6,15 +6,15 @@ const main = async () => {
let buffer = fs.readFileSync('../report/report.json', 'utf-8');
let reports = JSON.parse(buffer);
await exportTPSGraph({
output: 'tps.png',
output: '../report/tps.png',
💡 Codebase verification

Directory creation check is required

The codebase lacks directory existence verification before file operations. The suggested change to create the '../report' directory is necessary to prevent runtime errors.

  • Location: test/performance/charts/index.ts
  • Add directory creation before file operations:
if (!fs.existsSync('../report')) {
    fs.mkdirSync('../report', { recursive: true });
}
🔗 Analysis chain

Verify report directory existence before file operations.

While the path changes are consistent, ensure the '../report' directory exists before writing files to prevent runtime errors.

Consider adding directory creation:

 const main = async () => {
+    // Ensure report directory exists
+    if (!fs.existsSync('../report')) {
+        fs.mkdirSync('../report', { recursive: true });
+    }
     let buffer = fs.readFileSync('../report/report.json', 'utf-8');

Also applies to: 12-12, 17-17

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Check the content of the test file
cat test/performance/charts/index.ts

# Search for directory creation patterns in related files
rg -l "mkdirSync.*report" --type ts

# Look for fs operations in test setup
rg "fs\.(mkdir|existsSync)" --type ts

Length of output: 778


Script:

#!/bin/bash
# Check for test runner configuration
fd -e json package.json --exec cat {}

# Look for test setup files
fd -e ts -e js setup test/performance --exec cat {}

Length of output: 650

}, reports);

await exportDatabaseStats('database_connections.png', reports);
await exportDatabaseStats('../report/database_connections.png', reports);

const ps: (keyof MetricsTime)[] = ['P99', 'P95', 'P75', 'Avg']
for (let p of ps) {
await exportLatencyGraph({
output: p.toLowerCase() + '.png'
output: '../report/' + p.toLowerCase() + '.png'
}, p, reports);
}
}
86 changes: 81 additions & 5 deletions test/performance/charts/src/graphs.js
@@ -12,9 +12,12 @@ var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.exportLatencyGraph = exports.exportTPSGraph = void 0;
exports.exportDatabaseStats = exports.exportLatencyGraph = exports.exportTPSGraph = void 0;
const colors_1 = require("./colors");
const chartjs_to_image_1 = __importDefault(require("chartjs-to-image"));
const chart_js_1 = require("chart.js");
const chartjs_plugin_annotation_1 = __importDefault(require("chartjs-plugin-annotation"));
chart_js_1.Chart.register(chartjs_plugin_annotation_1.default);
const exportTPSGraph = (configuration, result) => __awaiter(void 0, void 0, void 0, function* () {
const scripts = [];
for (let script in result) {
@@ -27,15 +30,15 @@ const exportTPSGraph = (configuration, result) => __awaiter(void 0, void 0, void
const datasets = scripts.map(((script, index) => {
return {
label: script,
data: result[script].map(r => r.tps),
data: result[script].map(r => r.TPS),
backgroundColor: colors_1.NAMED_COLORS[index % scripts.length],
};
}));
const config = {
type: 'bar',
data: {
labels: reportsForAnyScript
.map(r => r.configuration.name),
.map(r => r.Configuration.Name),
datasets: datasets
},
options: {
@@ -76,15 +79,15 @@ const exportLatencyGraph = (configuration, key, result) => __awaiter(void 0, voi
const datasets = scripts.map(((script, index) => {
return {
label: script,
data: result[script].map(r => r.metrics.Time[key].substring(0, r.metrics.Time[key].length - 2)),
data: result[script].map(r => parseFloat(r.Metrics.Time[key].substring(0, r.Metrics.Time[key].length - 2))),
backgroundColor: colors_1.NAMED_COLORS[index % scripts.length],
};
}));
const config = {
type: 'bar',
data: {
labels: reportsForAnyScript
.map(r => r.configuration.name),
.map(r => r.Configuration.Name),
datasets: datasets
},
options: {
@@ -112,3 +115,76 @@ const exportLatencyGraph = (configuration, key, result) => __awaiter(void 0, voi
yield chart.toFile(configuration.output);
});
exports.exportLatencyGraph = exportLatencyGraph;
const exportDatabaseStats = (output, result) => __awaiter(void 0, void 0, void 0, function* () {
const scope = 'github.com/uptrace/opentelemetry-go-extra/otelsql';
const scripts = [];
for (let script in result) {
scripts.push(script);
}
const reportsForAnyScript = result[scripts[0]];
if (!reportsForAnyScript) {
throw new Error("no data");
}
const datasets = scripts.map(((script, index) => {
return {
label: script,
data: result[script].map(r => r.InternalMetrics.ScopeMetrics
.find(scopeMetric => scopeMetric.Scope.Name == scope)
.Metrics
.find(metric => metric.Name == 'go.sql.connections_open')
.Data
.DataPoints[0]
.Value),
backgroundColor: colors_1.NAMED_COLORS[index % scripts.length],
};
}));
const maxConnection = reportsForAnyScript[0].InternalMetrics.ScopeMetrics
.find(scopeMetric => scopeMetric.Scope.Name == scope)
.Metrics
.find(metric => metric.Name == 'go.sql.connections_max_open')
.Data
.DataPoints[0]
.Value;
const config = {
type: 'bar',
data: {
labels: reportsForAnyScript.map(r => r.Configuration.Name),
datasets: datasets
},
options: {
plugins: {
title: {
display: true,
text: 'Database connections'
},
annotation: {
annotations: {
line1: {
type: 'line',
yMin: maxConnection,
yMax: maxConnection,
borderColor: 'rgb(255, 99, 132)',
borderWidth: 2,
}
}
}
},
interaction: {
intersect: false,
},
scales: {
x: {
stacked: false,
},
y: {
stacked: false
}
}
}
};
const chart = new chartjs_to_image_1.default();
chart.setConfig(config);
chart.setChartJsVersion('4');
yield chart.toFile(output);
});
exports.exportDatabaseStats = exportDatabaseStats;
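Note: the chained `.find()` calls in `exportDatabaseStats` above assume a particular shape for `../report/report.json`. The TypeScript sketch below is inferred purely from the property accesses visible in this diff (`Configuration.Name`, `TPS`, `Metrics.Time`, `InternalMetrics.ScopeMetrics`); it is an illustrative assumption, not the project's actual type definitions.

```ts
// Rough sketch of the report.json layout consumed by the charts code,
// reconstructed from the property accesses in graphs.js. Field names beyond
// those used in the diff are not confirmed.
interface DataPoint { Value: number }
interface InternalMetric { Name: string; Data: { DataPoints: DataPoint[] } }
interface ScopeMetrics { Scope: { Name: string }; Metrics: InternalMetric[] }

interface Report {
  Configuration: { Name: string };
  TPS: number;
  // Latency values are strings with a two-character unit suffix, e.g. "12.34ms",
  // which is why exportLatencyGraph strips the last two characters before parseFloat.
  Metrics: { Time: { P99: string; P95: string; P75: string; Avg: string } };
  InternalMetrics: { ScopeMetrics: ScopeMetrics[] };
}

// The file is keyed by benchmark script name; each key maps to one Report per configuration.
type ReportFile = Record<string, Report[]>;
```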
33 changes: 33 additions & 0 deletions test/performance/justfile
@@ -0,0 +1,33 @@
set dotenv-load
set positional-arguments

tmpdir := `mktemp -d`

run bench='.' p='1' benchtime='1s' count='1' ledger-url='' output='./report/benchmark-output.txt':
rm -f {{output}}
go test -run ^$ -tags it,local \
-report.file ./report/report.json \
-timeout 60m \
-bench={{bench}} \
-count={{count}} \
--ledger.url={{ledger-url}} \
-p {{p}} \
-test.benchtime {{benchtime}} . | tee -a {{output}}

compare bench='.' p='1' benchtime='1s' count='1' output='./report/benchmark-output.txt':
just run {{bench}} {{p}} {{benchtime}} {{count}} '' './report/benchmark-output-local.txt'
git clone --depth 1 -b main https://github.com/formancehq/ledger {{tmpdir}}
go test -run ^$ -tags it,local -report.file \
./report/report.json \
-timeout 60m \
-bench={{bench}} \
-count={{count}} \
-p {{p}} \
-test.benchtime {{benchtime}} \
{{tmpdir}}/test/performance | tee -a ./report/benchmark-output-main.txt
benchstat ./report/benchmark-output-main.txt ./report/benchmark-output-local.txt > ./report/benchmark-comparison.txt || true
🛠️ Refactor suggestion

Improve error handling in comparison workflow.

The current implementation has two issues:

  1. Git clone failures are not handled explicitly
  2. Benchstat errors are silently suppressed with || true

Consider this improved error handling:

 compare bench='.' p='1' benchtime='1s' count='1' output='./report/benchmark-output.txt':
     just run {{bench}} {{p}} {{benchtime}} {{count}} '' './report/benchmark-output-local.txt'
-    git clone --depth 1 -b main https://github.com/formancehq/ledger {{tmpdir}}
+    git clone --depth 1 -b main https://github.com/formancehq/ledger {{tmpdir}} || exit 1
     go test -run ^$ -tags it,local -report.file \
         ./report/report.json \
         -timeout 60m \
         -bench={{bench}} \
         -count={{count}} \
         -p {{p}} \
         -test.benchtime {{benchtime}} \
         {{tmpdir}}/test/performance | tee -a ./report/benchmark-output-main.txt
-    benchstat ./report/benchmark-output-main.txt ./report/benchmark-output-local.txt > ./report/benchmark-comparison.txt || true
+    if ! benchstat ./report/benchmark-output-main.txt ./report/benchmark-output-local.txt > ./report/benchmark-comparison.txt; then
+        echo "Warning: benchstat failed to generate comparison" >&2
+    fi


graphs:
cd charts && npm install
cd charts && npm run build
cd charts && node ./index.js
Member:
Should this be `./index.ts`, since index.js is removed?

Contributor (author):
After the `npm run build` step above, index.js exists.
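For context, the CI step in `.github/workflows/benchmark.yml` (at the top of this diff) maps onto the positional parameters of the `run` recipe above as sketched below. The values come directly from the workflow and justfile in this PR; only the annotation is added for illustration.

```shell
# Positional mapping for the CI invocation (values from benchmark.yml,
# parameter names from the justfile's `run` recipe):
#   bench=.   p=5   benchtime=10s   count=1   ledger-url=http://localhost:3068
# `output` keeps its default, ./report/benchmark-output.txt
just --justfile ./test/performance/justfile \
  --working-directory ./test/performance \
  run . 5 10s 1 http://localhost:3068
```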
