From 288a827bfb571cc302b6fc5eff789ddbe53b14d3 Mon Sep 17 00:00:00 2001
From: Arjun Suresh
Date: Mon, 26 Feb 2024 21:18:51 +0000
Subject: [PATCH] Update README_reference.md

---
 .../inference/3d-unet/README_reference.md | 25 ++-----------------
 1 file changed, 2 insertions(+), 23 deletions(-)

diff --git a/docs/mlperf/inference/3d-unet/README_reference.md b/docs/mlperf/inference/3d-unet/README_reference.md
index 2db9b0c87d..f67064d16e 100644
--- a/docs/mlperf/inference/3d-unet/README_reference.md
+++ b/docs/mlperf/inference/3d-unet/README_reference.md
@@ -25,34 +25,13 @@ cmr "generate-run-cmds inference _find-performance _all-scenarios" \
 ```
 cmr "generate-run-cmds inference _submission _all-scenarios" --model=3d-unet-99.9 \
 --device=cpu --implementation=reference --backend=onnxruntime \
---execution-mode=valid --results_dir=$HOME/inference_3.1_results \
---category=edge --division=open --quiet
+--execution-mode=valid --category=edge --division=open --quiet
 ```
 * Use `--power=yes` for measuring power. It is ignored for accuracy and compliance runs
 * Use `--division=closed` to run all scenarios for the closed division including the compliance tests
-* `--offline_target_qps`, `--server_target_qps` and `--singlestream_target_latency` can be used to override the determined performance numbers
-
-### Populate the README files describing your submission
-
-```
-cmr "generate-run-cmds inference _populate-readme _all-scenarios" \
---model=3d-unet-99.9 --device=cpu --implementation=reference --backend=onnxruntime \
---execution-mode=valid --results_dir=$HOME/inference_3.1_results \
---category=edge --division=open --quiet
-```
-
-### Generate actual submission tree
-
-Here, we are copying the performance and accuracy log files (compliance logs also in the case of closed division) from the results directory to the submission tree following the [directory structure required by MLCommons Inference](https://github.com/mlcommons/policies/blob/master/submission_rules.adoc#inference-1). After the submission tree is generated, [accuracy truncate script](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/truncate-mlperf-inference-accuracy-log) is called to truncate accuracy logs and then the [submission checker](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-submission-checker) is called to validate the generated submission tree.
+* `--offline_target_qps` and `--singlestream_target_latency` can be used to override the determined performance numbers
 
-We should use the master branch of MLCommons inference repo for the submission checker. You can use `--hw_note_extra` option to add your name to the notes.
-```
-cmr "generate inference submission" --results_dir=$HOME/inference_3.1_results/valid_results \
---submission_dir=$HOME/inference_submission_tree --clean \
---run-checker --submitter=cTuning --adr.inference-src.version=master \
---hw_notes_extra="Result taken by NAME" --quiet
-```
 ### Questions? Suggestions?