Commit a41c281

fix if conditionals
Varun Sundar Rabindranath committed Mar 27, 2024
1 parent b548311 commit a41c281
Showing 2 changed files with 4 additions and 5 deletions.
3 changes: 1 addition & 2 deletions .github/actions/nm-github-action-benchmark/action.yml
@@ -43,7 +43,6 @@ runs:
       # inconsistent state.
       - name: reset github pages branch
         run: |
-          echo "See if we can see the secret github token : ${{ inputs.github_token }} "
           git update-ref refs/heads/${{ inputs.gh_pages_branch }} origin/${{ inputs.gh_pages_branch }}
         shell: bash

@@ -67,4 +66,4 @@ runs:
       # Add a commit comment describing what triggered the alert
       comment-on-alert: ${{ inputs.reporting_enabled == 'true' }}
       # TODO (varun): Is this a reasonable number ?
-      max-items-in-chart: 50
+      max-items-in-chart: 50
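
For context: the deleted echo was leftover debug output that printed the workflow's token input into the job log, while the retained git update-ref line resets the local gh-pages ref to match origin before github-action-benchmark pushes new data points to it. A minimal standalone sketch of the same reset pattern (a step fragment, not a complete workflow; the branch name gh-pages is hypothetical here, since the action takes it as an input):

      # Point the local branch ref at its remote counterpart without
      # checking the branch out, so a later push starts from the
      # remote's current state.
      - name: reset github pages branch
        run: |
          git fetch origin gh-pages
          git update-ref refs/heads/gh-pages origin/gh-pages
        shell: bash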
6 changes: 3 additions & 3 deletions .github/workflows/nm-benchmark.yml
@@ -208,7 +208,7 @@ jobs:

      - name: nm-github-action-benchmark(bigger_is_better.json)
        # Absence of the file indicates that there were no "bigger_is_better" metrics
-       if: ${{ success() || failure() }} && ${{ hashFiles('downloads/bigger_is_better.json') != '' }}
+       if: (success() || failure()) && (hashFiles('downloads/bigger_is_better.json') != '')
        uses: ./.github/actions/nm-github-action-benchmark
        with:
          gh_action_benchmark_name: "bigger_is_better"
@@ -221,7 +221,7 @@ jobs:

      - name: nm-github-action-benchmark(smaller_is_better.json)
        # Absence of the file indicates that there were no "smaller_is_better" metrics
-       if: ${{ success() || failure() }} && ${{ hashFiles('downloads/smaller_is_better.json') != '' }}
+       if: (success() || failure()) && (hashFiles('downloads/smaller_is_better.json') != '')
        uses: ./.github/actions/nm-github-action-benchmark
        with:
          gh_action_benchmark_name: "smaller_is_better"
@@ -234,7 +234,7 @@ jobs:

      - name: nm-github-action-benchmark(observation_metrics.json)
        # Absence of the file indicates that there were no "observation" metrics
-       if: ${{ success() || failure() }} && ${{ hashFiles('downloads/observation_metrics.json') != '' }}
+       if: (success() || failure()) && (hashFiles('downloads/observation_metrics.json') != '')
        uses: ./.github/actions/nm-github-action-benchmark
        with:
          gh_action_benchmark_name: "observation_metrics"
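Why the parenthesized form works: when an if: condition mixes ${{ }} expression blocks with a bare && between them, GitHub Actions does not evaluate the line as a single boolean expression, so the file-existence guard did not behave as intended. Writing the whole condition as one expression, which the if: key evaluates implicitly without any ${{ }} wrapper, restores the intended logic: run the step whether earlier steps passed or failed, but only when the JSON file was actually produced. A minimal self-contained sketch of the corrected pattern (the workflow name, step names, and results.json path are hypothetical; success(), failure(), and hashFiles() are built-in expression functions):

name: conditional-demo
on: workflow_dispatch
jobs:
  demo:
    runs-on: ubuntu-latest
    steps:
      # Stand-in for the real "download benchmark results" step.
      - name: produce-results
        run: echo '{}' > results.json
      - name: consume-results-if-present
        # Runs whether earlier steps passed or failed, but only when
        # the file exists; hashFiles() returns '' when nothing matches.
        if: (success() || failure()) && (hashFiles('results.json') != '')
        run: echo "results file present"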

3 comments on commit a41c281

@github-actions

bigger_is_better

Benchmark suite: Current a41c281 vs. Previous 8894487

Benchmark vllm serving (model mistralai/Mistral-7B-Instruct-v0.2, max-model-len 4096, sparsity None; benchmark_serving: nr-qps-pair_ "5,inf", dataset sharegpt; NVIDIA A10G x 4, vllm 0.1.0, Python 3.10.12 (main, Mar 7 2024, 18:39:53) [GCC 9.4.0], torch 2.1.2+cu121):
  request_throughput:  1.077347861792705 prompts/s
  input_throughput:    200.60217186580167 tokens/s
  output_throughput:   190.47510196495023 tokens/s

Benchmark vllm engine throughput - with dataset (model mistralai/Mistral-7B-Instruct-v0.2, max_model_len 4096; benchmark_throughput: use-all-available-gpus_, output-len 128, num-prompts 100, dataset sharegpt, max-model-len 4096; NVIDIA A10G x 4, vllm 0.1.0, Python 3.10.12 (main, Mar 7 2024, 18:39:53) [GCC 9.4.0], torch 2.1.2+cu121):
  request_throughput:  7.775327625003007 prompts/s
  token_throughput:    3525.5668050051136 tokens/s

(No previous values or ratios were reported for these benchmarks.)

This comment was automatically generated by a workflow using github-action-benchmark.

@github-actions

smaller_is_better

Benchmark suite: Current a41c281 vs. Previous 8894487

Benchmark vllm serving (model mistralai/Mistral-7B-Instruct-v0.2, max-model-len 4096, sparsity None; benchmark_serving: nr-qps-pair_ "5,inf", dataset sharegpt; NVIDIA A10G x 4, vllm 0.1.0, Python 3.10.12 (main, Mar 7 2024, 18:39:53) [GCC 9.4.0], torch 2.1.2+cu121):
  median_request_latency:  3263.224603000026 ms
  mean_ttft_ms:            240.91842699999688 ms
  median_ttft_ms:          240.87523399998645 ms
  mean_tpot_ms:            13.814070076108585 ms
  median_tpot_ms:          14.301734848100761 ms

(No previous values or ratios were reported for these benchmarks.)

This comment was automatically generated by a workflow using github-action-benchmark.

@github-actions

bigger_is_better

Benchmark suite: Current a41c281 vs. Previous 8894487

VLLM Engine throughput - synthetic (model NousResearch/Llama-2-7b-chat-hf, max_model_len 4096; benchmark_throughput: use-all-available-gpus_, input-len 256, output-len 128, num-prompts 1000; NVIDIA A10G x 1, vllm 0.1.0, Python 3.10.12 (main, Mar 7 2024, 18:39:53) [GCC 9.4.0], torch 2.1.2+cu121):
  request_throughput:  current 3.984863119976597 prompts/s   previous 3.98509085065037 prompts/s   ratio 1.00
  token_throughput:    current 1530.187438071013 tokens/s    previous 1530.2748866497423 tokens/s  ratio 1.00

This comment was automatically generated by a workflow using github-action-benchmark.
