Resolving PR comments
Signed-off-by: Min <[email protected]>
geomin12 committed Feb 24, 2025
1 parent a4e0770 commit d93af1f
Showing 7 changed files with 40 additions and 17 deletions.
1 change: 1 addition & 0 deletions .github/workflows/test_sharktank_models.yml
@@ -53,6 +53,7 @@ jobs:
run: |
source ${VENV_DIR}/bin/activate
python3 -m pip install -e sharktank_models/
python3 -m pip install -r sharktank_models/requirements-iree.txt
# Run tests.
- name: Run Sharktank models test suite
22 changes: 22 additions & 0 deletions sharktank_models/README.md
@@ -107,3 +107,25 @@ Please refer to [Quality tests README](regression_tests/README.md) to run tests
Please refer to [Benchmark tests README](benchmarks/README.md) to run tests

Note: for benchmark tests to run, you will need `vmfbs` files available

## Generating model files using Shark AI

To generate and compile the MLIR files needed to run the quality and benchmark tests, run the following commands:

This example generates IRPA and MLIR files for Llama; see [Shark AI Models](https://github.com/nod-ai/shark-ai/tree/main/sharktank/sharktank/models) for the full list of models that can be generated.

```bash
git clone https://github.com/nod-ai/shark-ai.git
cd shark-ai/sharktank
python3 -m pip install .
cd ..
# Generate the IRPA files:
python3 -m sharktank.models.llama.toy_llama --output toy_llama.irpa
# Generate the MLIR files:
python3 -m sharktank.examples.export_paged_llm_v1 --bs=1 \
--irpa-file toy_llama.irpa --output-mlir toy_llama.mlir
```
6 changes: 3 additions & 3 deletions sharktank_models/action.yml
@@ -8,7 +8,7 @@ name: IREE Sharktank Models Regression Tests

inputs:
model:
description: "The model to run threshold and benchmark tests"
description: "The model to run quality and benchmark tests"
required: true
sku:
description: "Type of SKU to test"
@@ -39,11 +39,11 @@ runs:
pip install -e ${GITHUB_ACTION_PATH}/sharktank_models
pip install -r ${GITHUB_ACTION_PATH}/sharktank_models/requirements-iree.txt
- name: Run compilation and threshold tests
- name: Run compilation and quality tests
shell: bash
run: |
source ${GITHUB_WORKSPACE}/venv/bin/activate
python ${GITHUB_ACTION_PATH}/sharktank_models/regression_tests/run_thresholds.py \
python ${GITHUB_ACTION_PATH}/sharktank_models/regression_tests/run_quality_tests.py \
--model=${{ inputs.model }} \
--sku=${{ inputs.sku }} \
--backend=${{ inputs.backend }}
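
For reference, a workflow could invoke this composite action roughly as follows. This is a sketch only: the `uses:` path, runner label, and job scaffolding are assumptions, not taken from this change — only the `model`, `sku`, and `backend` inputs come from `action.yml`.

```yaml
jobs:
  sharktank-model-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical local path to the directory containing action.yml.
      - uses: ./sharktank_models
        with:
          model: sdxl
          sku: mi300
          backend: gfx942
```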
2 changes: 1 addition & 1 deletion sharktank_models/llama3.1/README.md
@@ -9,7 +9,7 @@ maintain the same `irpa` values while only updating the executed code.
These files are generated by the following

```bash
git checkout https://github.com/nod-ai/shark-ai.git
git clone https://github.com/nod-ai/shark-ai.git

cd shark-ai/sharktank
python3 -m pip install .
14 changes: 7 additions & 7 deletions sharktank_models/regression_tests/README.md
@@ -6,22 +6,22 @@

### How to run

- Example command to run a specific submodel or all submodels for a specific model
- Example command to run quality tests for a specific model

```
python sharktank_models/regression_tests/run_thresholds.py --model=sdxl --submodel=*
python sharktank_models/regression_tests/run_quality_tests.py --model=sdxl --submodel=*
python sharktank_models/regression_tests/run_thresholds.py --model=sdxl --submodel=clip
python sharktank_models/regression_tests/run_quality_tests.py --model=sdxl --submodel=clip
```

Argument options for the script

| Argument Name | Default value | Description |
| ------------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| --model | sdxl | Runs threshold tests for a specific model |
| --submodel | \* | If specified, the threshold tests will run for a specific submodel (ex: `--submodel clip`). If not specified, it will run tests on all submodels |
| --sku | mi300 | The threshold tests will run on this sku and retrieve golden values from the specified sku |
| --backend | gfx942 | The threshold tests will run on this backend |
| --model | sdxl | Runs quality tests for a specific model |
| --submodel | \* | If specified, the quality tests will run for a specific submodel (ex: `--submodel clip`). If not specified, it will run tests on all submodels |
| --sku | mi300 | The quality tests will run on this sku and retrieve golden values from the specified sku |
| --backend | gfx942 | The quality tests will run on this backend |
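
The script forwards these arguments to the pytest suite as environment variables; the names below match the ones this commit introduces in `run_quality_tests.py`. Exporting them manually is a minimal sketch of that handoff:

```shell
# Model selection the quality-test suite reads from the environment
# (variable names as introduced in this commit).
export MODEL_TO_TEST=sdxl
export SUBMODEL_TO_TEST='*'
export SKU=mi300
export BACKEND=gfx942
```

With these set, the same pytest file the runner launches could be invoked directly without the wrapper script.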

### Required and optional fields for the JSON model file

@@ -23,16 +23,16 @@ def main():
sku = args.sku
backend = args.backend

os.environ["THRESHOLD_MODEL"] = model
os.environ["THRESHOLD_SUBMODEL"] = submodel
os.environ["MODEL_TO_TEST"] = model
os.environ["SUBMODEL_TO_TEST"] = submodel
os.environ["SKU"] = sku
os.environ["BACKEND"] = backend

THIS_DIR = Path(__file__).parent

command = [
"pytest",
THIS_DIR / "test_model_threshold.py",
THIS_DIR / "test_model_quality.py",
"-rpFe",
"--log-cli-level=info",
"--capture=no",
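
The runner's flow in this hunk — export the selection, then build a pytest invocation against the renamed test file — can be sketched as a standalone function. This is an illustration only, using the environment-variable names and pytest flags shown in the diff; the function itself and its defaults are assumptions, not a copy of the real script.

```python
import os
from pathlib import Path


def build_quality_test_command(model="sdxl", submodel="*", sku="mi300", backend="gfx942"):
    """Export the model selection and build the pytest invocation.

    Mirrors the environment-variable names introduced in this commit;
    illustrative sketch, not the real run_quality_tests.py.
    """
    # The test module reads these at collection time.
    os.environ["MODEL_TO_TEST"] = model
    os.environ["SUBMODEL_TO_TEST"] = submodel
    os.environ["SKU"] = sku
    os.environ["BACKEND"] = backend

    this_dir = Path(__file__).parent if "__file__" in globals() else Path.cwd()
    return [
        "pytest",
        str(this_dir / "test_model_quality.py"),
        "-rpFe",
        "--log-cli-level=info",
        "--capture=no",
    ]


cmd = build_quality_test_command(model="sdxl", submodel="clip")
```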
@@ -17,8 +17,8 @@
vmfb_dir = os.getenv("TEST_OUTPUT_ARTIFACTS", default=str(PARENT_DIR))
backend = os.getenv("BACKEND", default="gfx942")
sku = os.getenv("SKU", default="mi300")
model_name = os.getenv("THRESHOLD_MODEL", default="sdxl")
submodel_name = os.getenv("THRESHOLD_SUBMODEL", default="*")
model_name = os.getenv("MODEL_TO_TEST", default="sdxl")
submodel_name = os.getenv("SUBMODEL_TO_TEST", default="*")

SUBMODEL_FOLDER_PATH = THIS_DIR / f"{model_name}"

@@ -67,7 +67,7 @@ def common_run_flags_generation(input_list, output_list):


@pytest.mark.parametrize("submodel_name", parameters)
class TestModelThreshold:
class TestModelQuality:
@pytest.fixture(autouse=True)
@classmethod
def setup_class(self, submodel_name):
