This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

Commit

update other workflows for lm-eval changes (#292)
* apply changes made to `remote-push` to other workflows
robertgshaw2-neuralmagic authored Jun 8, 2024
1 parent 9f1cc2c commit bc2b3f6
Showing 3 changed files with 36 additions and 0 deletions.
16 changes: 16 additions & 0 deletions .github/workflows/nm-nightly.yml
@@ -33,6 +33,10 @@ jobs:
       benchmark_config_list_file: ./.github/data/nm_benchmark_remote_push_configs_list.txt
       benchmark_timeout: 720
       push_benchmark_results_to_gh_pages: "${{ github.event_name == 'schedule' || inputs.push_benchmark_results_to_gh_pages }}"
+
+      lm_eval_label: gcp-k8s-l4-solo
+      lm_eval_configuration: ./neuralmagic/lm-eval/weekly.yaml
+      lm_eval_timeout: 60
     secrets: inherit

   PYTHON-3-9:
@@ -51,6 +55,10 @@ jobs:
       benchmark_config_list_file: ./.github/data/nm_benchmark_remote_push_configs_list.txt
       benchmark_timeout: 720
       push_benchmark_results_to_gh_pages: "${{ github.event_name == 'schedule' || inputs.push_benchmark_results_to_gh_pages }}"
+
+      lm_eval_label: gcp-k8s-l4-solo
+      lm_eval_configuration: ./neuralmagic/lm-eval/weekly.yaml
+      lm_eval_timeout: 60
     secrets: inherit

   PYTHON-3-10:
@@ -69,6 +77,10 @@ jobs:
       benchmark_config_list_file: ./.github/data/nm_benchmark_remote_push_configs_list.txt
       benchmark_timeout: 720
       push_benchmark_results_to_gh_pages: "${{ github.event_name == 'schedule' || inputs.push_benchmark_results_to_gh_pages }}"
+
+      lm_eval_label: gcp-k8s-l4-solo
+      lm_eval_configuration: ./neuralmagic/lm-eval/weekly.yaml
+      lm_eval_timeout: 60
     secrets: inherit

   PYTHON-3-11:
@@ -87,4 +99,8 @@ jobs:
       benchmark_config_list_file: ./.github/data/nm_benchmark_remote_push_configs_list.txt
       benchmark_timeout: 720
       push_benchmark_results_to_gh_pages: "${{ github.event_name == 'schedule' || inputs.push_benchmark_results_to_gh_pages }}"
+
+      lm_eval_label: gcp-k8s-l4-solo
+      lm_eval_configuration: ./neuralmagic/lm-eval/weekly.yaml
+      lm_eval_timeout: 60
     secrets: inherit
16 changes: 16 additions & 0 deletions .github/workflows/nm-release.yml
@@ -29,6 +29,10 @@ jobs:
       benchmark_config_list_file: ./.github/data/nm_benchmark_nightly_configs_list.txt
       benchmark_timeout: 720
       push_benchmark_results_to_gh_pages: ${{ inputs.push_benchmark_results_to_gh_pages }}
+
+      lm_eval_label: gcp-k8s-l4-solo
+      lm_eval_configuration: ./neuralmagic/lm-eval/weekly.yaml
+      lm_eval_timeout: 60
     secrets: inherit

   PYTHON-3-9:
@@ -47,6 +51,10 @@ jobs:
       benchmark_config_list_file: ./.github/data/nm_benchmark_nightly_configs_list.txt
       benchmark_timeout: 720
       push_benchmark_results_to_gh_pages: ${{ inputs.push_benchmark_results_to_gh_pages }}
+
+      lm_eval_label: gcp-k8s-l4-solo
+      lm_eval_configuration: ./neuralmagic/lm-eval/weekly.yaml
+      lm_eval_timeout: 60
     secrets: inherit

   PYTHON-3-10:
@@ -65,6 +73,10 @@ jobs:
       benchmark_config_list_file: ./.github/data/nm_benchmark_nightly_configs_list.txt
       benchmark_timeout: 720
       push_benchmark_results_to_gh_pages: ${{ inputs.push_benchmark_results_to_gh_pages }}
+
+      lm_eval_label: gcp-k8s-l4-solo
+      lm_eval_configuration: ./neuralmagic/lm-eval/weekly.yaml
+      lm_eval_timeout: 60
     secrets: inherit

   PYTHON-3-11:
@@ -83,4 +95,8 @@ jobs:
       benchmark_config_list_file: ./.github/data/nm_benchmark_nightly_configs_list.txt
       benchmark_timeout: 720
       push_benchmark_results_to_gh_pages: ${{ inputs.push_benchmark_results_to_gh_pages }}
+
+      lm_eval_label: gcp-k8s-l4-solo
+      lm_eval_configuration: ./neuralmagic/lm-eval/weekly.yaml
+      lm_eval_timeout: 60
     secrets: inherit
4 changes: 4 additions & 0 deletions .github/workflows/nm-weekly.yml
@@ -33,4 +33,8 @@ jobs:
       benchmark_config_list_file: ./.github/data/nm_benchmark_weekly_configs_list.txt
       benchmark_timeout: 720
       push_benchmark_results_to_gh_pages: "${{ github.event_name == 'schedule' || inputs.push_benchmark_results_to_gh_pages }}"
+
+      lm_eval_label: gcp-k8s-l4-solo
+      lm_eval_configuration: ./neuralmagic/lm-eval/weekly.yaml
+      lm_eval_timeout: 60
     secrets: inherit

8 comments on commit bc2b3f6

@github-actions

bigger_is_better

Benchmark suite: VLLM Engine throughput - synthetic. Model: NousResearch/Llama-2-7b-chat-hf, max_model_len 4096; benchmark_throughput args: use-all-available-gpus, input-len 256, output-len 128, num-prompts 1000. Environment: NVIDIA L4 x 1, vLLM 0.5.0, Python 3.8.17 (GCC 11.3.0), torch 2.3.0+cu121.

Metric               Current: bc2b3f6                 Previous: 9fe9187   Ratio
request_throughput   2.3922378530978214 prompts/s     -                   -
token_throughput     918.6193355895635 tokens/s       -                   -

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions

bigger_is_better

Benchmark suite: VLLM Engine throughput - synthetic. Model: NousResearch/Llama-2-7b-chat-hf, max_model_len 4096; benchmark_throughput args: use-all-available-gpus, input-len 256, output-len 128, num-prompts 1000. Environment: NVIDIA L4 x 1, vLLM 0.5.0, Python 3.10.12 (GCC 11.3.0), torch 2.3.0+cu121.

Metric               Current: bc2b3f6                 Previous: 9fe9187   Ratio
request_throughput   2.379112353440113 prompts/s      -                   -
token_throughput     913.5791437210033 tokens/s       -                   -

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions

bigger_is_better

Benchmark suite: VLLM Engine throughput - synthetic. Model: NousResearch/Llama-2-7b-chat-hf, max_model_len 4096; benchmark_throughput args: use-all-available-gpus, input-len 256, output-len 128, num-prompts 1000. Environment: NVIDIA L4 x 1, vLLM 0.5.0, Python 3.11.4 (GCC 11.3.0), torch 2.3.0+cu121.

Metric               Current: bc2b3f6                 Previous: 9fe9187   Ratio
request_throughput   2.4270153133786723 prompts/s     -                   -
token_throughput     931.9738803374102 tokens/s       -                   -

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions

bigger_is_better

Benchmark suite: VLLM Engine throughput - synthetic. Model: NousResearch/Llama-2-7b-chat-hf, max_model_len 4096; benchmark_throughput args: use-all-available-gpus, input-len 256, output-len 128, num-prompts 1000. Environment: NVIDIA L4 x 1, vLLM 0.5.0, Python 3.9.17 (GCC 11.3.0), torch 2.3.0+cu121.

Metric               Current: bc2b3f6               Previous: 9fe9187              Ratio
request_throughput   2.37666765834168 prompts/s     2.352493940598332 prompts/s    0.99
token_throughput     912.6403808032051 tokens/s     903.3576731897596 tokens/s     0.99

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions

bigger_is_better

Benchmark suite: VLLM Engine throughput - synthetic. Model: NousResearch/Llama-2-7b-chat-hf, max_model_len 4096; benchmark_throughput args: use-all-available-gpus, input-len 256, output-len 128, num-prompts 1000. Environment: NVIDIA L4 x 1, vLLM 0.5.0, Python 3.10.12 (GCC 11.3.0), torch 2.3.0+cu121.

Metric               Current: bc2b3f6                 Previous: 9fe9187   Ratio
request_throughput   2.3619891926780143 prompts/s     -                   -
token_throughput     907.0038499883575 tokens/s       -                   -

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions

bigger_is_better

Benchmark suite: VLLM Engine throughput - synthetic. Model: NousResearch/Llama-2-7b-chat-hf, max_model_len 4096; benchmark_throughput args: use-all-available-gpus, input-len 256, output-len 128, num-prompts 1000. Environment: NVIDIA L4 x 1, vLLM 0.5.0, Python 3.11.4 (GCC 11.3.0), torch 2.3.0+cu121.

Metric               Current: bc2b3f6                 Previous: 9fe9187   Ratio
request_throughput   2.3950578287064936 prompts/s     -                   -
token_throughput     919.7022062232936 tokens/s       -                   -

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions

bigger_is_better

Benchmark suite: VLLM Engine throughput - synthetic. Model: NousResearch/Llama-2-7b-chat-hf, max_model_len 4096; benchmark_throughput args: use-all-available-gpus, input-len 256, output-len 128, num-prompts 1000. Environment: NVIDIA L4 x 1, vLLM 0.5.0, Python 3.9.17 (GCC 11.3.0), torch 2.3.0+cu121.

Metric               Current: bc2b3f6               Previous: 9fe9187              Ratio
request_throughput   2.4017320769775536 prompts/s   2.352493940598332 prompts/s    0.98
token_throughput     922.2651175593805 tokens/s     903.3576731897596 tokens/s     0.98

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions

bigger_is_better

Benchmark suite: VLLM Engine throughput - synthetic. Model: NousResearch/Llama-2-7b-chat-hf, max_model_len 4096; benchmark_throughput args: use-all-available-gpus, input-len 256, output-len 128, num-prompts 1000. Environment: NVIDIA L4 x 1, vLLM 0.5.0, Python 3.8.17 (GCC 11.3.0), torch 2.3.0+cu121.

Metric               Current: bc2b3f6                 Previous: 9fe9187   Ratio
request_throughput   2.3871754881800737 prompts/s     -                   -
token_throughput     916.6753874611484 tokens/s       -                   -

This comment was automatically generated by workflow using github-action-benchmark.
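
All eight comments above were posted by github-action-benchmark, as their footers note. In these tables the Ratio column is the previous value divided by the current one (for example, 903.3576731897596 / 922.2651175593805 is approximately 0.98), so for a bigger_is_better suite a ratio below 1 means the new commit measured faster. As a rough sketch, a publishing step using that action might look like the following; the step name, file path, and option values are illustrative assumptions, not taken from this repository:

# Hypothetical github-action-benchmark step; option values are illustrative.
- name: Publish and compare benchmark results
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: customBiggerIsBetter          # matches the bigger_is_better suites above
    output-file-path: benchmark-results.json
    github-token: ${{ secrets.GITHUB_TOKEN }}
    comment-always: true                # post a comparison comment on each commit
    auto-push: true                     # presumably gated here by push_benchmark_results_to_gh_pages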
