This repository has been archived by the owner on Oct 11, 2024. It is now read-only.
forked from vllm-project/vllm
Commit
Upstream sync 2024 07 01 (#350) - not a release candidate

SUMMARY:
* Merge commits from vllm-project@6c916ac to vllm-project@8e0817c
* Note that vllm-project@6c916ac is NOT included in this merge.

COMPARE vs UPSTREAM: https://github.com/neuralmagic/nm-vllm/compare/upstream-sync-2024-07-01..8e0817c262da5c104f651a0ce4ac9ee0cd76f4ce

---------

Signed-off-by: kevin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Stephanie Wang <[email protected]>
Signed-off-by: Stephanie <[email protected]>
Signed-off-by: Xiaowei Jiang <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Murali Andoorveedu <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Chang Su <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Woo-Yeon Lee <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Antoni Baum <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Matt Wong <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: aws-patlange <[email protected]>
Co-authored-by: Stephanie Wang <[email protected]>
Co-authored-by: Stephanie <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Chih-Chieh-Yang <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: sasha0552 <[email protected]>
Co-authored-by: Chip Kerchner <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Abhinav Goyal <[email protected]>
Co-authored-by: xwjiang2010 <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
Co-authored-by: wangding zeng <[email protected]>
Co-authored-by: Philipp Moritz <[email protected]>
Co-authored-by: Lily Liu <[email protected]>
Co-authored-by: LiuXiaoxuanPKU <[email protected]>, bong-furiosa <[email protected]>
Co-authored-by: mcalman <[email protected]>
Co-authored-by: William Lin <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: llmpros <[email protected]>
Co-authored-by: SangBin Cho <[email protected]>
Co-authored-by: sang <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: zhyncs <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: derekk-nm <[email protected]>
Co-authored-by: Domenic Barbuzzi <[email protected]>
1 parent 9346bff · commit 7144d20
Showing 257 changed files with 12,471 additions and 4,350 deletions.
11 changes: 11 additions & 0 deletions
.buildkite/lm-eval-harness/configs/Meta-Llama-3-70B-Instruct.yaml
@@ -0,0 +1,11 @@
# bash .buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh -m meta-llama/Meta-Llama-3-70B-Instruct -b 32 -l 250 -f 5
model_name: "meta-llama/Meta-Llama-3-70B-Instruct"
tasks:
- name: "gsm8k"
  metrics:
  - name: "exact_match,strict-match"
    value: 0.892
  - name: "exact_match,flexible-extract"
    value: 0.892
limit: 250
num_fewshot: 5
11 changes: 11 additions & 0 deletions
.buildkite/lm-eval-harness/configs/Meta-Llama-3-8B-Instruct-FP8.yaml
@@ -0,0 +1,11 @@
# bash .buildkite/lm-eval-harness/run-lm-eval-gsm-vllm-baseline.sh -m neuralmagic/Meta-Llama-3-8B-Instruct-FP8 -b 32 -l 250 -f 5 -t 1
model_name: "neuralmagic/Meta-Llama-3-8B-Instruct-FP8"
tasks:
- name: "gsm8k"
  metrics:
  - name: "exact_match,strict-match"
    value: 0.756
  - name: "exact_match,flexible-extract"
    value: 0.752
limit: 250
num_fewshot: 5
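For orientation, the correctness test added later in this commit (test_lm_eval_correctness.py) consumes a config like the one above via environment variables. A minimal sketch of pointing it at this FP8 config follows; the test file path is assumed here, since only the bare filename appears in this diff:

# Sketch: run the lm-eval correctness check against the FP8 config above.
# Env-var names mirror those documented in test_lm_eval_correctness.py; the
# path to the test file is an assumption.
export LM_EVAL_TEST_DATA_FILE=.buildkite/lm-eval-harness/configs/Meta-Llama-3-8B-Instruct-FP8.yaml
export LM_EVAL_TP_SIZE=1
pytest -s .buildkite/lm-eval-harness/test_lm_eval_correctness.py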
11 changes: 11 additions & 0 deletions
.buildkite/lm-eval-harness/configs/Meta-Llama-3-8B-Instruct.yaml
@@ -0,0 +1,11 @@
# bash .buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh -m meta-llama/Meta-Llama-3-8B-Instruct -b 32 -l 250 -f 5 -t 1
model_name: "meta-llama/Meta-Llama-3-8B-Instruct"
tasks:
- name: "gsm8k"
  metrics:
  - name: "exact_match,strict-match"
    value: 0.756
  - name: "exact_match,flexible-extract"
    value: 0.752
limit: 250
num_fewshot: 5
11 changes: 11 additions & 0 deletions
.buildkite/lm-eval-harness/configs/Mixtral-8x7B-Instruct-v0.1.yaml
@@ -0,0 +1,11 @@
# bash .buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh -m neuralmagic/Mixtral-8x7B-Instruct-v0.1 -b 32 -l 250 -f 5 -t 4
model_name: "mistralai/Mixtral-8x7B-Instruct-v0.1"
tasks:
- name: "gsm8k"
  metrics:
  - name: "exact_match,strict-match"
    value: 0.616
  - name: "exact_match,flexible-extract"
    value: 0.632
limit: 250
num_fewshot: 5
@@ -0,0 +1,2 @@
Meta-Llama-3-70B-Instruct.yaml
Mixtral-8x7B-Instruct-v0.1.yaml
@@ -0,0 +1,2 @@
Meta-Llama-3-8B-Instruct.yaml
Meta-Llama-3-8B-Instruct-FP8.yaml
.buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh
@@ -0,0 +1,46 @@
#!/bin/bash
# We can use this script to compute baseline accuracy on GSM for transformers.
#
# Make sure you have lm-eval-harness installed:
# pip install git+https://github.com/EleutherAI/lm-evaluation-harness.git@9516087b81a61d0e220b22cc1b75be76de23bc10

usage() {
    echo
    echo "Runs lm eval harness on GSM8k using huggingface transformers."
    echo "This pathway is intended to be used to create baselines for "
    echo "our automated nm-test-accuracy workflow"
    echo
    echo "usage: ${0} <options>"
    echo
    echo "  -m  - huggingface stub or local directory of the model"
    echo "  -b  - batch size to run the evaluation at"
    echo "  -l  - limit number of samples to run"
    echo "  -f  - number of fewshot samples to use"
    echo
}

while getopts "m:b:l:f:" OPT; do
  case ${OPT} in
    m )
      MODEL="$OPTARG"
      ;;
    b )
      BATCH_SIZE="$OPTARG"
      ;;
    l )
      LIMIT="$OPTARG"
      ;;
    f )
      FEWSHOT="$OPTARG"
      ;;
    \? )
      usage
      exit 1
      ;;
  esac
done

lm_eval --model hf \
  --model_args pretrained=$MODEL,parallelize=True \
  --tasks gsm8k --num_fewshot $FEWSHOT --limit $LIMIT \
  --batch_size $BATCH_SIZE
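As a usage sketch, the baseline recorded in Meta-Llama-3-70B-Instruct.yaml above was produced with an invocation of this form; the flags are taken from the comment at the top of that config:

# Example invocation, mirroring the comment in Meta-Llama-3-70B-Instruct.yaml.
bash .buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh \
    -m meta-llama/Meta-Llama-3-70B-Instruct -b 32 -l 250 -f 5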
51 changes: 51 additions & 0 deletions
.buildkite/lm-eval-harness/run-lm-eval-gsm-vllm-baseline.sh
@@ -0,0 +1,51 @@
#!/bin/bash
# We can use this script to compute baseline accuracy on GSM for vllm.
# We use this for fp8, which HF does not support.
#
# Make sure you have lm-eval-harness installed:
# pip install lm-eval==0.4.2

usage() {
    echo
    echo "Runs lm eval harness on GSM8k using vllm."
    echo "This pathway is intended to be used to create baselines for "
    echo "our automated nm-test-accuracy workflow"
    echo
    echo "usage: ${0} <options>"
    echo
    echo "  -m  - huggingface stub or local directory of the model"
    echo "  -b  - batch size to run the evaluation at"
    echo "  -l  - limit number of samples to run"
    echo "  -f  - number of fewshot samples to use"
    echo "  -t  - tensor parallel size to run at"
    echo
}

while getopts "m:b:l:f:t:" OPT; do
  case ${OPT} in
    m )
      MODEL="$OPTARG"
      ;;
    b )
      BATCH_SIZE="$OPTARG"
      ;;
    l )
      LIMIT="$OPTARG"
      ;;
    f )
      FEWSHOT="$OPTARG"
      ;;
    t )
      TP_SIZE="$OPTARG"
      ;;
    \? )
      usage
      exit 1
      ;;
  esac
done

lm_eval --model vllm \
  --model_args pretrained=$MODEL,tensor_parallel_size=$TP_SIZE \
  --tasks gsm8k --num_fewshot $FEWSHOT --limit $LIMIT \
  --batch_size $BATCH_SIZE
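Likewise, a sketch of regenerating the FP8 baseline with this vLLM-backed script on a single GPU, with flags as in the Meta-Llama-3-8B-Instruct-FP8.yaml comment:

# Example invocation for the FP8 model with tensor parallel size 1.
bash .buildkite/lm-eval-harness/run-lm-eval-gsm-vllm-baseline.sh \
    -m neuralmagic/Meta-Llama-3-8B-Instruct-FP8 -b 32 -l 250 -f 5 -t 1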
@@ -0,0 +1,59 @@
#!/bin/bash

usage() {
    echo
    echo "Runs lm eval harness on GSM8k using vllm and compares to "
    echo "precomputed baseline (measured by HF transformers.)"
    echo
    echo "usage: ${0} <options>"
    echo
    echo "  -c  - path to the test data config (e.g. configs/small-models.txt)"
    echo "  -t  - tensor parallel size"
    echo
}

SUCCESS=0

while getopts "c:t:" OPT; do
  case ${OPT} in
    c )
      CONFIG="$OPTARG"
      ;;
    t )
      TP_SIZE="$OPTARG"
      ;;
    \? )
      usage
      exit 1
      ;;
  esac
done

# Parse list of configs.
IFS=$'\n' read -d '' -r -a MODEL_CONFIGS < $CONFIG

for MODEL_CONFIG in "${MODEL_CONFIGS[@]}"
do
    LOCAL_SUCCESS=0

    echo "=== RUNNING MODEL: $MODEL_CONFIG WITH TP SIZE: $TP_SIZE ==="

    export LM_EVAL_TEST_DATA_FILE=$PWD/configs/${MODEL_CONFIG}
    export LM_EVAL_TP_SIZE=$TP_SIZE
    pytest -s test_lm_eval_correctness.py || LOCAL_SUCCESS=$?

    if [[ $LOCAL_SUCCESS == 0 ]]; then
        echo "=== PASSED MODEL: ${MODEL_CONFIG} ==="
    else
        echo "=== FAILED MODEL: ${MODEL_CONFIG} ==="
    fi

    SUCCESS=$((SUCCESS + LOCAL_SUCCESS))

done

if [ "${SUCCESS}" -eq "0" ]; then
    exit 0
else
    exit 1
fi
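A usage sketch for this driver script: its on-disk name is not visible in this extraction, so run-tests.sh is an assumed name, and configs/small-models.txt is the example path from its own usage text. It must be run from the directory containing the configs/ folder, since the config path is built from $PWD:

# Assumed filename and working directory; -c/-t flags per the usage() text above.
cd .buildkite/lm-eval-harness
bash run-tests.sh -c configs/small-models.txt -t 1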
test_lm_eval_correctness.py
@@ -0,0 +1,54 @@
"""
LM eval harness on model to compare vs HF baseline computed offline.
Configs are found in configs/$MODEL.yaml

* export LM_EVAL_TEST_DATA_FILE=configs/Meta-Llama-3-70B-Instruct.yaml
* export LM_EVAL_TP_SIZE=4
* pytest -s test_lm_eval_correctness.py
"""

import os
from pathlib import Path

import lm_eval
import numpy
import yaml

RTOL = 0.02
TEST_DATA_FILE = os.environ.get(
    "LM_EVAL_TEST_DATA_FILE",
    ".buildkite/lm-eval-harness/configs/Meta-Llama-3-8B-Instruct.yaml")

TP_SIZE = os.environ.get("LM_EVAL_TP_SIZE", 1)


def launch_lm_eval(eval_config):
    model_args = f"pretrained={eval_config['model_name']}," \
                 f"tensor_parallel_size={TP_SIZE}"

    results = lm_eval.simple_evaluate(
        model="vllm",
        model_args=model_args,
        tasks=[task["name"] for task in eval_config["tasks"]],
        num_fewshot=eval_config["num_fewshot"],
        limit=eval_config["limit"],
        batch_size="auto")

    return results


def test_lm_eval_correctness():
    eval_config = yaml.safe_load(
        Path(TEST_DATA_FILE).read_text(encoding="utf-8"))

    # Launch eval requests.
    results = launch_lm_eval(eval_config)

    # Confirm scores match ground truth.
    for task in eval_config["tasks"]:
        for metric in task["metrics"]:
            ground_truth = metric["value"]
            measured_value = results["results"][task["name"]][metric["name"]]
            print(f'{task["name"]} | {metric["name"]}: '
                  f'ground_truth={ground_truth} | measured={measured_value}')
            assert numpy.isclose(ground_truth, measured_value, rtol=RTOL)
@@ -0,0 +1,14 @@
# This script builds the OpenVINO docker image and runs offline inference inside the container.
# It serves as a sanity check for compilation and basic model usage.
set -ex

# Try building the docker image
docker build -t openvino-test -f Dockerfile.openvino .

# Setup cleanup
remove_docker_container() { docker rm -f openvino-test || true; }
trap remove_docker_container EXIT
remove_docker_container

# Run the image and launch offline inference
docker run --network host --env VLLM_OPENVINO_KVCACHE_SPACE=1 --name openvino-test openvino-test python3 /workspace/vllm/examples/offline_inference.py