This repository has been archived by the owner on Sep 30, 2023. It is now read-only.

node and browser benchmark runner (#32)
* fix: correct npm package name

* expose gc as a function

* install isNode

* add src/benchmarker.js

* add needed bind for timeout callback

* rework benchmarker into http client/server with cli

* add commander to dependencies

* fix program options usage

* fix cli ls exec

* make child process exec async

* use node-fetch instead of whatwg-fetch polyfill

* change benchmarker http-client addMetric api

* fix benchmarker server create and _handleResults

* logging over websockets

* remove unneeded timeout

* write result json files to results folder

* remove whatwg-fetch polyfill

* benchmarker better logging

* results over websockets

* error logging and use stdout instead of console.log

* remove http from benchmarker server

* prep for browser support

* install webpack middleware and express

* install html-webpack-plugin

* install val-loader

* downgrade webpack to v4

* downgrade html-webpack-plugin to support webpack4

* downgrade val-loader

* add newlines for cancel/complete status

* manual browser support (no puppeteer yet)

* automate browser benchmarks with puppeteer

* removed uneeded promisify

* use fork over exec in benchmarker cli

* move webpack to its own thread

* ensure split works for large ls return

* cleanup and minor changes

* use open ports, remove port argument

* install ws websocket server

* create results dir path on results

* move server variable into runBenchmarks

* fix typo

* add bundling... message

* use console.log for log messages

* fix window.performance.memory accuracy

* use already opened browser page

* add puppeteer to deps

* move default metrics to separate file

* get ready to build fixtures

* edit package deps and npm audit fix

* ignore fixtures dir

* refactor benchmarker; all working but reports

* remove old benchmark runner files

* edit deps; commit package and package-lock

* base working

* add log-load benchmark

* change benchmark setting

* use execBenchmarkPath variable

* stop using fixtures

* more report outputs

* add ordered benchmarks

* add process-results and report util file

* get percent change for time metric

* remove getLabel from process-results

* make dir for output path

* no fixtures or hard coded port; small cleanup

* move Report component to parent dir

* change option order

* optionally track mem/cpu

* benchmarks path param for cli

* add catch to webpackServer call

* add benchmarking... console message

* fix webpack-server

* remove local benchmarks

* no written output by default

* remove --no-output option

* fix avg processed metric

* change benchmarker server variable name

* reuse webpack port for indexedDb

* change baselines option flag

* reword opt description; change -b default

* add basic usage to README.md

* remove fixtures from gitignore

* remove tests for now

* remove runPlace leftover from run.js

* browser name property for execBenchmarks

* static webpack port

* make reporter/process-results.js more readable

* fix: webpack-entry use run func again

* change tempdir name for benchmark runner

* remove outdated comment

* small style edit cli.js

* check for baseline path exist

* add baseline comparison example

* add basic end2end and cli option tests

* add docs on creating benchmarks

* add 30 sec timeout to tests

* support benchmark hooks

* console report spacing and negative array length

Co-authored-by: tabcat <>
tabcat authored May 31, 2021
1 parent a7bb89f commit 76d38ba
Showing 31 changed files with 18,927 additions and 3,232 deletions.
29 changes: 23 additions & 6 deletions README.md
@@ -6,17 +6,34 @@
 ## Install
 
-`npm i benchmark-runner`
+`npm i orbit-db-benchmark-runner`
 
-## Usage
+## CLI Usage
 
-TBD
+Check [cli.js](./src/cli.js) or use `npx benchmarker -h` for help.
 
-## Testing
+*When pointing the runner at a folder, only files whose names end in `.benchmark.js` are run.*
 
-Mocha is used as the testing framework, SinonJS for stubs and mocks and ChaiJS for assertions. To run tests:
-
-`npm run test`
+```
+Options:
+  -V, --version            output the version number
+  -b, --benchmarks <path>  benchmark folder or file (default: "./benchmarks")
+  -o, --output <file path> report output path (.html or .json)
+  -i, --baselines <path>   baselines to use for comparison (.json output)
+  --no-node                skip nodejs benchmarks
+  --no-browser             skip browser benchmarks
+```
+
+##### Running Comparisons
+
+1. Create the baseline report output to use for comparison: `npx benchmarker -o report.json`
+2. Use the output baseline report with the baseline option: `npx benchmarker -i report.json`
+
+***Benchmarks run for comparison are best run on a dedicated machine, or one with little else happening in the background.***
+
+## Writing Benchmarks
+
+Benchmark files must export an object with an asynchronous method, `benchmark`. The method takes a single parameter, `benchmarker`, which is used to control recording and to report information about the benchmark. See [test.benchmark.js](./test/fixtures/benchmarks/test.benchmark.js) for an example.

## Contributing

20,523 changes: 17,871 additions & 2,652 deletions package-lock.json

Large diffs are not rendered by default.

40 changes: 30 additions & 10 deletions package.json
@@ -4,16 +4,43 @@
   "description": "OrbitDB Benchmark Runner",
   "main": "./src/index.js",
   "bin": {
-    "benchmark-runner": "./src/cli.js"
+    "benchmarker": "./src/cli.js"
   },
   "scripts": {
     "test": "nyc mocha"
   },
   "author": "mistakia",
   "license": "MIT",
   "dependencies": {
-    "expose-gc": "^1.0.0",
-    "yargs": "^15.4.1"
+    "@babel/core": "^7.13.10",
+    "@babel/preset-env": "^7.13.12",
+    "@babel/preset-react": "^7.12.13",
+    "@nivo/core": "^0.67.0",
+    "@nivo/line": "^0.67.0",
+    "babel-loader": "^8.2.2",
+    "bootstrap": "^4.6.0",
+    "commander": "^7.1.0",
+    "css-loader": "^5.1.3",
+    "express": "^4.17.1",
+    "html-webpack-plugin": "^4.5.2",
+    "inline-assets-html-plugin": "^1.0.0",
+    "is-node": "^1.0.2",
+    "puppeteer": "^8.0.0",
+    "react": "^17.0.2",
+    "react-bootstrap": "^1.5.2",
+    "react-dom": "^17.0.2",
+    "style-loader": "^2.0.0",
+    "val-loader": "^2.1.2",
+    "webpack": "^4.46.0",
+    "webpack-dev-middleware": "^4.1.0",
+    "ws": "^7.4.4"
   },
+  "devDependencies": {
+    "ipfs": "^0.54.4",
+    "mocha": "^8.3.2",
+    "nyc": "^15.1.0",
+    "orbit-db": "^0.26.1",
+    "standard": "^14.3.4"
+  },
   "localMaintainers": [
     "hajamark <[email protected]>",
@@ -32,13 +59,6 @@
   "homepage": "https://github.com/orbitdb/benchmark-runner#readme",
   "bugs": "https://github.com/orbitdb/benchmark-runner/issues",
   "repository": "github:orbitdb/benchmark-runner",
-  "devDependencies": {
-    "chai": "^4.2.0",
-    "mocha": "^8.1.3",
-    "nyc": "^15.1.0",
-    "sinon": "^9.0.3",
-    "standard": "^14.3.4"
-  },
   "standard": {
     "env": "mocha"
   }
120 changes: 120 additions & 0 deletions src/benchmarker/client.js
@@ -0,0 +1,120 @@
'use strict'
const isNode = require('is-node')
const nodeDir = (dir) => require('path').join(dir, 'node')
const getWebSocket = () => isNode
  ? require('ws')
  : window.WebSocket
const { makeId, withInfo, creators } = require('./ws-action')
const {
  timeMetric,
  cpuUsageMetric,
  memoryUsedMetric,
  memoryTotalMetric
} = require('./metrics')

class Benchmarker {
  constructor (ws, dir) {
    this._ws = ws
    this.dir = isNode ? nodeDir(dir) : dir
    this._timeout = null

    this.isNode = isNode
    this.id = makeId()
    this.info = {
      id: this.id,
      name: `benchmark-${this.id}`,
      env: isNode ? 'node' : 'browser',
      metrics: []
    }
    this._interval = 1000 // record metrics every this many ms

    this.metrics = []
    this.addMetric(timeMetric)
  }

  static async create (host, dir) {
    const ws = await new Promise(resolve => {
      const ws = new (getWebSocket())(`ws://${host}`)
      ws.onopen = () => resolve(ws)
    })
    return new Benchmarker(ws, dir)
  }

  async close () {
    if (this._ws.readyState !== 3) { // 3 === WebSocket.CLOSED
      await new Promise(resolve => {
        this._ws.onclose = () => resolve()
        this._ws.close()
      })
    }
  }

  trackMemory () {
    this.addMetric(memoryUsedMetric)
    this.addMetric(memoryTotalMetric)
  }

  trackCpu () {
    if (isNode) this.addMetric(cpuUsageMetric)
  }

  addMetric ({ name, get }) {
    if (this.info.metrics.includes(name)) {
      throw new Error('a metric with that name already exists')
    }
    if (this._timeout) {
      throw new Error('metrics have already started being recorded')
    }
    this.metrics.push({ name, get })
    this.info.metrics = this.metrics.map(m => m.name)
  }

  setInterval (interval) {
    if (typeof interval !== 'number') {
      throw new Error('interval must be a number')
    }
    if (this._timeout) {
      throw new Error('metrics have already started being recorded')
    }
    this._interval = interval
  }

  setBenchmarkName (name) {
    this.info.name = name.toString()
  }

  setHookInfo (info) {
    this.info.hook = info
  }

  log (msg) {
    this._sendAction(creators.LOG(msg))
  }

  _sendAction (action) {
    this._ws.send(JSON.stringify(withInfo(this.info)(action)))
  }

  _recordMetrics () {
    this._sendAction(creators.SEGMENT(this.metrics.map(({ get }) => get())))
  }

  startRecording () {
    if (!this._timeout) {
      const interval = this._interval
      const repeater = () => {
        this._recordMetrics()
        this._timeout = setTimeout(repeater.bind(this), interval)
      }
      repeater()
    }
  }

  stopRecording () {
    clearTimeout(this._timeout)
    this._timeout = null
    this._recordMetrics()
  }
}

module.exports = Benchmarker
71 changes: 71 additions & 0 deletions src/benchmarker/metrics/index.js
@@ -0,0 +1,71 @@
'use strict'
const isNode = require('is-node')
// Wraps a stateful sampler into a zero-argument getter that threads
// the previous state through each call.
const useMetricState = (state, get) => () => {
  const { newState, next } = get(state)
  state = newState
  return next
}

const timeMetric = {
  name: 'time',
  get: useMetricState(0, (state) => {
    const now = Date.now()
    return {
      newState: state || now,
      next: now - (state || now) // on first metric sample: now - now, aka 0
    }
  })
}

const us2ms = (us) => us / 1000 // process.cpuUsage() reports microseconds
const cpuUsageMetric = {
  name: 'cpu usage',
  get: useMetricState(undefined, (state) => {
    const time = Date.now()
    const { user, system } = process.cpuUsage()
    const total = us2ms(user) + us2ms(system)
    return {
      newState: { total, time },
      next: state
        // cpu usage to percent
        ? Math.round(100 * ((total - state.total) / (time - state.time)))
        : 0
    }
  })
}

const memorySample = () => {
  const sample = isNode
    ? process.memoryUsage()
    : window.performance.memory
  const memory = {
    total: null,
    used: null
  }
  // denominated in bytes
  if (isNode) {
    memory.total = sample.heapTotal
    memory.used = sample.heapUsed
  } else {
    memory.total = sample.totalJSHeapSize
    memory.used = sample.usedJSHeapSize
  }
  return memory
}
const toMegabytes = (bytes) => bytes / 1000000
const memoryUsedMetric = {
  name: 'heap used',
  get: () => toMegabytes(memorySample().used)
}
const memoryTotalMetric = {
  name: 'heap total',
  get: () => toMegabytes(memorySample().total)
}

module.exports = {
  useMetricState,
  timeMetric,
  cpuUsageMetric,
  memoryUsedMetric,
  memoryTotalMetric
}
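Since `useMetricState` is exported, custom metrics can be built the same way the defaults are. Below is a hypothetical example (not part of this commit) — a metric that reports how many samples have been taken — with the helper copied inline so the snippet is self-contained:

```javascript
// Self-contained copy of useMetricState from src/benchmarker/metrics/index.js.
const useMetricState = (state, get) => () => {
  const { newState, next } = get(state)
  state = newState
  return next
}

// Hypothetical metric: number of samples taken so far.
// Each get() call threads the previous count through as state.
const sampleCountMetric = {
  name: 'sample count',
  get: useMetricState(0, (state) => ({
    newState: state + 1,
    next: state + 1
  }))
}

console.log(sampleCountMetric.get()) // → 1
console.log(sampleCountMetric.get()) // → 2
```

A metric shaped like this (a `{ name, get }` object) is what `benchmarker.addMetric` in client.js expects.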
39 changes: 39 additions & 0 deletions src/benchmarker/server.js
@@ -0,0 +1,39 @@
'use strict'
const WebSocket = require('ws')
const { parse, types } = require('./ws-action')
const logMessage = (id, msg) =>
  `benchmark id:${id}
${msg}
`

class BenchmarkerServer {
  constructor ({ port } = {}) {
    this._wss = new WebSocket.Server({ port: port || 0 }) // port 0: any open port
    this._wss.on('connection', this._handleWsConnection.bind(this))
    this.address = this._wss.address.bind(this._wss)
    this.results = {}
  }

  static create (opts) { return new BenchmarkerServer(opts) }

  async _handleWsConnection (ws) {
    ws.on('message', m => {
      const { info, type, msg } = parse(m)
      switch (type) {
        case types.LOG:
          console.log(logMessage(info.id, msg))
          break
        case types.SEGMENT: {
          const { name, env } = info
          if (!this.results[name]) this.results[name] = {}
          if (!this.results[name][env]) this.results[name][env] = info
          if (!this.results[name][env].recorded) this.results[name][env].recorded = []
          this.results[name][env].recorded.push(msg)
          break
        }
      }
    })
  }
}

module.exports = BenchmarkerServer
22 changes: 22 additions & 0 deletions src/benchmarker/ws-action.js
@@ -0,0 +1,22 @@
'use strict'

const action = {}

action.types = {
  LOG: 'LOG',
  SEGMENT: 'SEGMENT'
}

action.creators = {
  [action.types.LOG]: (msg) =>
    ({ type: action.types.LOG, msg }),
  [action.types.SEGMENT]: (msg) =>
    ({ type: action.types.SEGMENT, msg })
}

action.makeId = () => Date.now()
action.withInfo = (info) => (action) => ({ info, ...action })

action.parse = (action) => JSON.parse(action)

module.exports = action
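To illustrate the wire format these helpers produce, here is a self-contained round trip: a `LOG` action is created, tagged with the benchmark's `info` (mirroring the shape built in client.js — the concrete values are illustrative), serialized, and parsed back the way the server's message handler does:

```javascript
// Inline copies of the ws-action helpers above.
const types = { LOG: 'LOG', SEGMENT: 'SEGMENT' }
const creators = {
  [types.LOG]: (msg) => ({ type: types.LOG, msg }),
  [types.SEGMENT]: (msg) => ({ type: types.SEGMENT, msg })
}
const withInfo = (info) => (action) => ({ info, ...action })
const parse = (raw) => JSON.parse(raw)

// Client side: tag a LOG action with the benchmark's info and serialize it.
const info = { id: 1, name: 'benchmark-1', env: 'node', metrics: ['time'] }
const wire = JSON.stringify(withInfo(info)(creators.LOG('hello')))

// Server side: parse and destructure, as in server.js's message handler.
const { info: received, type, msg } = parse(wire)
console.log(type, msg) // → LOG hello
```

Every message on the socket is one such JSON object, so the server can route on `type` and group results by `info.name` and `info.env`.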
