
Commit

moved complex examples (#127)
IlyasMoutawwakil authored Feb 19, 2024
1 parent 52ea94d commit 06dab18
Showing 750 changed files with 6 additions and 48,109 deletions.
5 changes: 2 additions & 3 deletions .gitignore
@@ -167,8 +167,7 @@ sweeps/
 data/
 version.txt
 
-.engine/
-actions-runner/
-experiments/
+examples/
+.engine/
 amdsmi
 amdsmi/
6 changes: 3 additions & 3 deletions README.md
@@ -1,8 +1,8 @@
 <p align="center"><img src="logo.png" alt="Optimum-Benchmark Logo" width="350" style="max-width: 100%;" /></p>
-<p align="center"><q>All benchmarks are wrong, some will cost you less than the others.</q></p>
+<p align="center"><q>All benchmarks are wrong, some will cost you less than others.</q></p>
 <h1 align="center">Optimum-Benchmark 🏋️</h1>
 
-Optimum-Benchmark is a unified [multi-backend & multi-device](#backends--devices-) utility for benchmarking [Transformers](https://github.com/huggingface/transformers), [Diffusers](https://github.com/huggingface/diffusers), [PEFT](https://github.com/huggingface/peft), [TIMM](https://github.com/huggingface/pytorch-image-models) and [Optimum](https://github.com/huggingface/optimum) flavors, along with all their supported [optimizations & quantization schemes](#backend-features-), for [inference & training](#benchmark-features-%EF%B8%8F), in [distributed & non-distributed settings](#backend-features-), in the most correct and scalable way possible (no need to even download model weights).
+Optimum-Benchmark is a unified [multi-backend & multi-device](#backends--devices-) utility for benchmarking [Transformers](https://github.com/huggingface/transformers), [Diffusers](https://github.com/huggingface/diffusers), [PEFT](https://github.com/huggingface/peft), [TIMM](https://github.com/huggingface/pytorch-image-models) and [Optimum](https://github.com/huggingface/optimum) flavors, along with all their supported [optimizations & quantization schemes](#backend-features-), for [inference & training](#benchmark-features-%EF%B8%8F), in [distributed & non-distributed settings](#backend-features-), in the most correct, efficient and scalable way possible (you don't even need to download the weights).
 
 *News* 📰
 - PYPI release soon.
@@ -128,7 +128,7 @@ optimum-benchmark --config-dir examples --config-name pytorch_bert -m backend.de

 ### Configurations structure 📁
 
-You can create custom configuration files following the [examples here]([examples](https://github.com/IlyasMoutawwakil/optimum-benchmark-examples)).
+You can create custom and more complex configuration files following these [examples]([examples](https://github.com/IlyasMoutawwakil/optimum-benchmark-examples)).
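
For orientation, here is a minimal sketch of what such a configuration file might look like. The `--config-dir`/`--config-name` flags in the CLI invocation above follow Hydra's conventions, so the sketch uses a Hydra-style `defaults` list; the group names and field names below are illustrative assumptions, not the library's confirmed schema — check the examples repository for the actual structure.

```yaml
# Illustrative sketch only: key names are assumptions,
# see the linked examples repository for real configs.
defaults:
  - backend: pytorch      # which backend to benchmark
  - launcher: process     # how the benchmark is launched
  - benchmark: inference  # inference vs. training benchmark
  - _self_

experiment_name: pytorch_bert

backend:
  model: bert-base-uncased  # a model id on the Hugging Face Hub
```

A file like this saved as `examples/pytorch_bert.yaml` would match the `--config-dir examples --config-name pytorch_bert` invocation shown above, with `-m backend.device=...`-style overrides sweeping individual fields from the command line.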

## Features 🎨

1 change: 0 additions & 1 deletion examples/api_launch.py
@@ -4,7 +4,6 @@
 from optimum_benchmark.launchers.torchrun.config import TorchrunConfig
 from optimum_benchmark.logging_utils import setup_logging
 
-
 if __name__ == "__main__":
     setup_logging(level="INFO")
     launcher_config = TorchrunConfig(nproc_per_node=2)
44 changes: 0 additions & 44 deletions examples/fast-mteb/README.md

This file was deleted.

