
Improved CM commands #2013

Closed: wants to merge 31 commits (changes shown from 9 commits)
ae6025c
Update SDXL README.md, improved CM commands
sahilavaran Dec 23, 2024
6b567a7
Update README.md | Fix SDXL model download path
sahilavaran Dec 23, 2024
8ad2884
Update README.md | Added cm command for downloading coco2014 size.50
sahilavaran Dec 23, 2024
469a3bc
Update README.md | Fix SDXL calibration download command
sahilavaran Dec 23, 2024
41bded9
Update SDXL README.md
sahilavaran Dec 23, 2024
6df318e
Update README.md
sahilavaran Dec 23, 2024
1a2f391
Update README.md| Added outdirname for the bert
sahilavaran Dec 31, 2024
a313db7
Merge branch 'mlcommons:master' into master
sahilavaran Jan 5, 2025
5b9b622
Fixed X and Y axis in coco.py
sahilavaran Jan 5, 2025
5f5acec
Merge branch 'mlcommons:master' into master
sahilavaran Jan 10, 2025
5bd1f3e
Merge branch 'master' into master
arjunsuresh Jan 19, 2025
3d868ad
[Automated Commit] Format Codebase
github-actions[bot] Jan 19, 2025
3060b71
made changes
sahilavaran Jan 23, 2025
2a0fe9b
Merge branch 'master' of github.com:sahilavaran/inference
sahilavaran Jan 23, 2025
9cb4e92
made changes in the coco.py
sahilavaran Jan 23, 2025
9076e4f
[Automated Commit] Format Codebase
github-actions[bot] Jan 23, 2025
ed02ab7
Update coco.py
sahilavaran Jan 23, 2025
c91f2e6
Update README.md
sahilavaran Jan 23, 2025
6bc50d8
Update README.md
sahilavaran Jan 23, 2025
e00758b
changed coco.py
sahilavaran Jan 23, 2025
a82a587
Resolved merge conflict
sahilavaran Jan 23, 2025
fdb343c
[Automated Commit] Format Codebase
github-actions[bot] Jan 23, 2025
d839f9f
Merge branch 'master' into master
sahilavaran Jan 23, 2025
d8a0cde
Update coco.py
sahilavaran Jan 23, 2025
e2061f7
[Automated Commit] Format Codebase
github-actions[bot] Jan 23, 2025
4c88038
merge conflict removed
sahilavaran Jan 23, 2025
4de7eed
Update coco.py
sahilavaran Jan 23, 2025
686c8a3
[Automated Commit] Format Codebase
github-actions[bot] Jan 23, 2025
225f81f
fixed
sahilavaran Jan 23, 2025
cb95879
resolved the conflicts
sahilavaran Jan 23, 2025
3ffb5df
[Automated Commit] Format Codebase
github-actions[bot] Jan 23, 2025
6 changes: 4 additions & 2 deletions language/bert/README.md
@@ -55,7 +55,8 @@ The below CM command will launch the SUT server

```
cm run script --tags=generate-run-cmds,inference --model=bert-99 --backend=pytorch \
--mode=performance --device=cuda --quiet --test_query_count=1000 --network=sut
--mode=performance --device=cuda --quiet --test_query_count=1000 --network=sut \
--outdirname=results/bert-99-performance
```

> **Contributor comment:** What's the use of `--outdirname` here?

Once the SUT server is launched, the below command can be run on the loadgen node to issue queries to the SUT nodes. In this command `--sut_servers` has just the localhost address; it can be changed to a comma-separated list of any hostname/IP in the network.
@@ -64,7 +65,8 @@ Once the SUT server is launched, the below command can be run on the loadgen nod
```
cm run script --tags=generate-run-cmds,inference --model=bert-99 --backend=pytorch --rerun \
--mode=performance --device=cuda --quiet --test_query_count=1000 \
--sut_servers,=http://localhost:8000 --network=lon
--sut_servers,=http://localhost:8000 --network=lon \
--outdirname=results/bert-99-performance-lon
```
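The SUT/LON split above boils down to the loadgen node (LON) sending queries over HTTP to one or more SUT servers and collecting the responses. The sketch below mimics that flow with a toy server and client in a single process; the endpoint, JSON payload shape, and round-robin dispatch are invented for illustration and are not the actual CM network implementation.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SUTHandler(BaseHTTPRequestHandler):
    """Toy SUT node: a real SUT would run the model on each query."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        query = json.loads(body)
        reply = json.dumps({"id": query["id"], "result": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

server = HTTPServer(("localhost", 0), SUTHandler)  # port 0: pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def issue_queries(sut_servers, n):
    """Toy loadgen node: send n queries round-robin across the SUT servers."""
    results = []
    for i in range(n):
        url = sut_servers[i % len(sut_servers)]
        req = urllib.request.Request(
            url,
            data=json.dumps({"id": i}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            results.append(json.loads(resp.read()))
    return results

out = issue_queries([f"http://localhost:{port}"], 3)
server.shutdown()
print([r["id"] for r in out])  # [0, 1, 2]
```

With multiple entries in the server list, successive queries would be spread across the SUT nodes, which is the point of the comma-separated `--sut_servers` flag.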

If you are not using CM, just add `--network=lon` along with your normal run command on the SUT side.
Binary file added plot.png
60 changes: 39 additions & 21 deletions text_to_image/README.md
@@ -1,9 +1,11 @@
# MLPerf™ Inference Benchmarks for Text to Image

This is the reference implementation for MLPerf Inference text to image.
## Automated command to run the benchmark via MLCommons CM

Please see the [new docs site](https://docs.mlcommons.org/inference/benchmarks/text_to_image/sdxl) for an automated way to run this benchmark across different available implementations and do an end-to-end submission with or without docker.

You can also do `pip install cm4mlops` and then use `cm` commands for downloading the model and datasets using the commands given in the later sections.

## Supported Models

| model | accuracy | dataset | model source | precision | notes |
@@ -53,10 +55,10 @@ We host two checkpoints (fp32 and fp16) that are a snapshot of the [Hugging Face
The following MLCommons CM commands can be used to programmatically download the model checkpoints.

```
pip install cmind
cm pull repo mlcommons@ck
cm run script --tags=get,ml-model,sdxl,_fp16,_rclone -j
cm run script --tags=get,ml-model,sdxl,_fp32,_rclone -j
cm run script --tags=get,ml-model,sdxl,_fp16,_rclone --outdirname=$MODEL_PATH
```
```
cm run script --tags=get,ml-model,sdxl,_fp32,_rclone --outdirname=$MODEL_PATH
```
#### Manual method

@@ -72,30 +74,35 @@ Once Rclone is installed, run the following command to authenticate with the buc
rclone config create mlc-inference s3 provider=Cloudflare access_key_id=f65ba5eef400db161ea49967de89f47b secret_access_key=fbea333914c292b854f14d3fe232bad6c5407bf0ab1bebf78833c2b359bdfd2b endpoint=https://c2686074cb2caf5cbaf6d134bdba8b47.r2.cloudflarestorage.com
```
You can then navigate in the terminal to your desired download directory and run the following commands to download the checkpoints:
```
cd $MODEL_PATH
```

**`fp32`**
```
rclone copy mlc-inference:mlcommons-inference-wg-public/stable_diffusion_fp32 ./stable_diffusion_fp32 -P
rclone copy mlc-inference:mlcommons-inference-wg-public/stable_diffusion_fp32 $MODEL_PATH -P
```

> **Contributor comment:** We are already inside `$MODEL_PATH` after `cd $MODEL_PATH`.
**`fp16`**
```
rclone copy mlc-inference:mlcommons-inference-wg-public/stable_diffusion_fp16 ./stable_diffusion_fp16 -P
rclone copy mlc-inference:mlcommons-inference-wg-public/stable_diffusion_fp16 $MODEL_PATH -P
```

#### Move to model path
### Download validation dataset

```bash
mkdir $MODEL_PATH
cd $MODEL_PATH
# For fp32
mv <path_to_download>/stable_diffusion_fp32.zip .
unzip stable_diffusion_fp32.zip
# For fp16
mv <path_to_download>/stable_diffusion_fp16.zip .
unzip stable_diffusion_fp16.zip
```
#### CM method
The following MLCommons CM commands can be used to programmatically download the validation dataset.

```
cm run script --tags=get,dataset,coco2014,_validation,_full --outdirname=coco2014
```

For debugging, you can download only a subset of the images in the dataset:
```
cm run script --tags=get,dataset,coco2014,_validation,_size.50 --outdirname=coco2014
```

### Download dataset

#### Manual method
```bash
cd $SD_FOLDER/tools
./download-coco-2014.sh -n <number_of_workers>
@@ -107,14 +114,25 @@ cd $SD_FOLDER/tools
```
If the file [captions.tsv](coco2014/captions/captions.tsv) is found by the script, it will be used to download the target dataset subset; otherwise it will be generated. We recommend keeping this file for consistency.
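The captions file referred to here is a tab-separated list of image ids and caption text. A minimal sketch of consuming such a file is below; the column names are assumed for illustration and may differ from the real `captions.tsv` produced by the tools.

```python
import csv
import io

# Assumed layout: tab-separated rows of image id and caption text.
# The real captions.tsv may carry additional columns.
sample = "id\tcaption\n42\tA dog on a skateboard\n7\tTwo cats sleeping\n"

def load_captions(fh):
    """Map image id -> caption from a TSV file handle."""
    reader = csv.DictReader(fh, delimiter="\t")
    return {int(row["id"]): row["caption"] for row in reader}

captions = load_captions(io.StringIO(sample))
print(captions[42])  # A dog on a skateboard
```

Keeping a fixed file like this pinned in the repository is what makes repeated subset downloads reproducible.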

#### Calibration dataset
### Download Calibration dataset (only if you are doing quantization)

#### CM method
The following MLCommons CM commands can be used to programmatically download the calibration dataset.

```
cm run script --tags=get,dataset,coco2014,_calibration --outdirname=coco2014
```


#### Manual method

We provide a script to download the calibration captions and images. To download only the captions:
```bash
cd $SD_FOLDER/tools
./download-coco-2014-calibration.sh
./download-coco-2014-calibration.sh -n <number_of_workers>
```
To download only the captions and images:

To download both the captions and images:
```bash
cd $SD_FOLDER/tools
./download-coco-2014-calibration.sh -i -n <number_of_workers>
```
6 changes: 5 additions & 1 deletion tools/upscale_coco/coco.py
@@ -317,7 +317,7 @@ def showAnns(self, anns):
v = kp[2::3]
for sk in sks:
if np.all(v[sk] > 0):
plt.plot(x[sk], y[sk], linewidth=3, color=c)
plt.plot(x[sk], y[sk], linewidth=3, color=c, label=f"keypoint {sk}")
> **Contributor comment:** What's the change done here?
plt.plot(
x[v > 0],
y[v > 0],
@@ -336,6 +336,10 @@ def showAnns(self, anns):
markeredgecolor=c,
markeredgewidth=2,
)
plt.xlabel("X Coordinate")
plt.ylabel("Y Coordinate")
print("Script is running correctly!")
plt.show()
p = PatchCollection(
polygons,
facecolor=color,
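The guard being discussed in `coco.py`, `np.all(v[sk] > 0)`, draws a skeleton edge only when every keypoint on that edge is labeled visible. A stand-alone sketch of that selection logic follows; the keypoints and skeleton here are made up for illustration and are not COCO's actual skeleton definition.

```python
# Each keypoint i carries (x[i], y[i], v[i]); v > 0 means the point is labeled.
# An edge sk of the skeleton is drawable only if all of its endpoints are
# labeled, mirroring the `np.all(v[sk] > 0)` check in showAnns.
x = [10, 20, 30, 40]
y = [5, 15, 25, 35]
v = [2, 2, 0, 1]                  # third keypoint is unlabeled
skeleton = [(0, 1), (1, 2), (2, 3), (0, 3)]

def drawable_edges(v, skeleton):
    """Return the skeleton edges whose endpoints are all labeled."""
    return [sk for sk in skeleton if all(v[i] > 0 for i in sk)]

print(drawable_edges(v, skeleton))  # [(0, 1), (0, 3)]
```

Edges touching the unlabeled keypoint are skipped, which is why a partially annotated person renders as a partial skeleton.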