pre-commit hook auto-changes
karibbov committed Apr 8, 2024
1 parent 947844c commit 98ee210
Showing 47 changed files with 604 additions and 539 deletions.
1 change: 0 additions & 1 deletion .github/workflows/tests.yaml
@@ -42,4 +42,3 @@ jobs:
- name: Run pytest
  timeout-minutes: 15
  run: poetry run pytest -m "all_examples or metahyper or neps_api or summary_csv"

46 changes: 25 additions & 21 deletions README.md
@@ -11,27 +11,30 @@ NePS houses recently published and some more well-established algorithms that ar

Take a look at our [documentation](https://automl.github.io/neps/latest/) and continue through this README for instructions on how to use NePS!

## Key Features

In addition to the common features offered by traditional HPO and NAS libraries, NePS stands out with the following key features:

1. [**Hyperparameter Optimization (HPO) With Prior Knowledge:**](neps_examples/template/priorband_template.py)

   - NePS excels in efficiently tuning hyperparameters using algorithms that enable users to make use of their prior knowledge within the search space. This is leveraged by the insights presented in:
     - [PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning](https://arxiv.org/abs/2306.12370)
     - [πBO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization](https://arxiv.org/abs/2204.11051)

1. [**Neural Architecture Search (NAS) With Context-free Grammar Search Spaces:**](neps_examples/basic_usage/architecture.py)

   - NePS is equipped to handle context-free grammar search spaces, providing advanced capabilities for designing and optimizing architectures. This is leveraged by the insights presented in:
     - [Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars](https://arxiv.org/abs/2211.01842)

1. [**Easy Parallelization and Resumption of Runs:**](docs/parallelization.md)

   - NePS simplifies the process of parallelizing optimization tasks both on individual computers and in distributed computing environments. It also allows users to conveniently resume these optimization tasks after completion to ensure a seamless and efficient workflow for long-running experiments.

1. [**Seamless User Code Integration:**](neps_examples/template/)

   - NePS's modular design ensures flexibility and extensibility. Integrate NePS effortlessly into existing machine learning workflows.

## Getting Started

@@ -51,8 +54,8 @@ Using `neps` always follows the same pattern:

1. Define a `run_pipeline` function capable of evaluating different architectural and/or hyperparameter configurations
   for your problem.
1. Define a search space named `pipeline_space` of those Parameters, e.g. via a dictionary.
1. Call `neps.run` to optimize `run_pipeline` over `pipeline_space`.

In code, the usage pattern can look like this:

@@ -111,18 +114,19 @@ if __name__ == "__main__":
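
The full code example is collapsed in this diff view. As a rough orientation, a minimal sketch of the three-step pattern could look like the following — it assumes the `neps.FloatParameter` and `neps.run` API used in the NePS examples; the pipeline function, search space, and paths are illustrative only:

```python
import neps

# 1. A toy evaluation function; a real one would train and validate a model
#    and return its validation loss.
def run_pipeline(learning_rate: float) -> float:
    return (learning_rate - 0.01) ** 2  # stand-in for a validation loss

# 2. The search space, defined as a dictionary of NePS parameters.
pipeline_space = {
    "learning_rate": neps.FloatParameter(lower=1e-5, upper=1e-1, log=True),
}

# 3. Optimize `run_pipeline` over `pipeline_space`.
if __name__ == "__main__":
    neps.run(
        run_pipeline=run_pipeline,
        pipeline_space=pipeline_space,
        root_directory="results/example",  # illustrative path
        max_evaluations_total=15,
    )
```
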
## Examples

Discover how NePS works through these practical examples:

- **[Pipeline Space via YAML](neps_examples/basic_usage/defining_search_space)**: Explore how to define the `pipeline_space` using a
  YAML file instead of a dictionary.

- **[Hyperparameter Optimization (HPO)](neps_examples/basic_usage/hyperparameters.py)**: Learn the essentials of hyperparameter optimization with NePS.

- **[Architecture Search with Primitives](neps_examples/basic_usage/architecture.py)**: Dive into architecture search using primitives in NePS.

- **[Multi-Fidelity Optimization](neps_examples/efficiency/multi_fidelity.py)**: Understand how to leverage multi-fidelity optimization for efficient model tuning.

- **[Utilizing Expert Priors for Hyperparameters](neps_examples/efficiency/expert_priors_for_hyperparameters.py)**: Learn how to incorporate expert priors for more efficient hyperparameter selection.

- **[Additional NePS Examples](neps_examples/)**: Explore more examples, including various use cases and advanced configurations in NePS.

## Documentation

29 changes: 16 additions & 13 deletions docs/README.md
@@ -9,24 +9,27 @@ Welcome to NePS, a powerful and flexible Python library for hyperparameter optim

NePS houses recently published and some more well-established algorithms that are all capable of being run massively parallel on any distributed setup, with tools to analyze runs, restart runs, etc.

## Key Features

In addition to the common features offered by traditional HPO and NAS libraries, NePS stands out with the following key features:

1. [**Hyperparameter Optimization (HPO) With Prior Knowledge:**](https://github.com/automl/neps/tree/master/neps_examples/template/priorband_template.py)

   - NePS excels in efficiently tuning hyperparameters using algorithms that enable users to make use of their prior knowledge within the search space. This is leveraged by the insights presented in:
     - [PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning](https://arxiv.org/abs/2306.12370)
     - [πBO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization](https://arxiv.org/abs/2204.11051)

1. [**Neural Architecture Search (NAS) With Context-free Grammar Search Spaces:**](https://github.com/automl/neps/tree/master/neps_examples/basic_usage/architecture.py)

   - NePS is equipped to handle context-free grammar search spaces, providing advanced capabilities for designing and optimizing architectures. This is leveraged by the insights presented in:
     - [Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars](https://arxiv.org/abs/2211.01842)

1. [**Easy Parallelization and Resumption of Runs:**](https://automl.github.io/neps/latest/parallelization)

   - NePS simplifies the process of parallelizing optimization tasks both on individual computers and in distributed computing environments. It also allows users to conveniently resume these optimization tasks after completion to ensure a seamless and efficient workflow for long-running experiments.

1. [**Seamless User Code Integration:**](https://github.com/automl/neps/tree/master/neps_examples/template/)

   - NePS's modular design ensures flexibility and extensibility. Integrate NePS effortlessly into existing machine learning workflows.

61 changes: 36 additions & 25 deletions docs/analyse.md
@@ -17,6 +17,7 @@ ROOT_DIRECTORY

```
├── best_loss_trajectory.txt
└── best_loss_with_config_trajectory.txt
```

## Summary CSV

The argument `post_run_summary` in `neps.run` allows for the automatic generation of CSV files after a run is complete. The new root directory after utilizing this argument will look like the following:
@@ -50,11 +51,14 @@ ROOT_DIRECTORY
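
The expanded directory listing is collapsed in this diff view. As a minimal sketch of enabling the flag — `post_run_summary` is the documented argument; the remaining `neps.run` arguments and values are illustrative:

```python
import neps

def run_pipeline(learning_rate: float) -> float:
    return (learning_rate - 0.01) ** 2  # stand-in evaluation

pipeline_space = {"learning_rate": neps.FloatParameter(lower=1e-5, upper=1e-1, log=True)}

neps.run(
    run_pipeline=run_pipeline,
    pipeline_space=pipeline_space,
    root_directory="results/example",  # illustrative path
    max_evaluations_total=15,
    post_run_summary=True,             # write the summary CSV files after the run
)
```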

The `tblogger.log` function is invoked within the model's training loop to facilitate logging of key metrics.

!!! tip

    The logger function is primarily designed for implementation within the `run_pipeline` function during the training of the neural network.

- **Signature:**

```python
tblogger.log(
    loss: float,
    current_epoch: int,
    write_config_scalar: bool = ...,      # the tail of this signature is collapsed
    write_config_hparam: bool = ...,      # in the diff; parameter names are taken
    write_summary_incumbent: bool = ...,  # from the list below, defaults elided
    extra_data: dict = ...,
)
```

- **Parameters:**
  - `loss` (float): The loss value to be logged.
  - `current_epoch` (int): The current epoch or iteration number.
  - `write_config_scalar` (bool, optional): Set to `True` for a live loss trajectory for each configuration.
  - `write_config_hparam` (bool, optional): Set to `True` for live parallel coordinate, scatter plot matrix, and table view.
  - `write_summary_incumbent` (bool, optional): Set to `True` for a live incumbent trajectory.
  - `extra_data` (dict, optional): Additional data to be logged, provided as a dictionary.
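
- **Example:** a hedged sketch of logging inside `run_pipeline` — the import path and the toy training loop are assumptions; only the `tblogger.log` call itself follows the signature above:

```python
from neps.plot.tblogger import tblogger  # assumed import path; adjust to your NePS version

def run_pipeline(learning_rate: float) -> float:
    loss = 1.0
    for epoch in range(10):
        loss *= 1.0 - learning_rate  # stand-in for one training epoch
        tblogger.log(
            loss=loss,
            current_epoch=epoch,
            write_config_scalar=True,      # live loss trajectory for this configuration
            write_summary_incumbent=True,  # live incumbent trajectory
        )
    return loss
```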

### Extra Custom Logging

NePS provides dedicated functions for customized logging using the `extra_data` argument.

!!! note "Custom Logging Instructions"

    Name the dictionary keys as the names of the values you want to log and pass one of the following functions as the values for a successful logging process.

#### 1- Extra Scalar Logging

Logs new scalar data during training. Uses `current_epoch` from the log function as its `global_step`.

- **Signature:**

```python
tblogger.scalar_logging(value: float)
```

- **Parameters:**
  - `value` (float): Any scalar value to be logged at the current epoch of the `tblogger.log` function.
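
- **Example:** a sketch of logging a learning rate alongside the loss via `extra_data` — the key name `"lr"` and all values are illustrative; `tblogger` is imported as in the example above:

```python
lr = 0.01  # illustrative value, e.g. taken from a scheduler
tblogger.log(
    loss=0.5,
    current_epoch=3,
    extra_data={"lr": tblogger.scalar_logging(value=lr)},
)
```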

#### 2- Extra Image Logging

Logs images during training. Images can be resized, randomly selected, and a specified number can be logged at specified intervals. Uses `current_epoch` from the log function as its `global_step`.

- **Signature:**

```python
tblogger.image_logging(
    image: torch.Tensor,
    counter: int,
    resize_images: list = [32, 32],  # the tail of this signature is collapsed in
    random_images: bool = True,      # the diff; names and defaults are taken from
    num_images: int = 20,            # the parameter list below
    seed: int | np.random.RandomState | None = None,
)
```
- **Parameters:**
  - `image` (torch.Tensor): Image tensor to be logged.
  - `counter` (int): Log images every `counter` epochs (i.e., when `current_epoch % counter` equals 0).
  - `resize_images` (list of int, optional): List of integers for image sizes after resizing (default: `[32, 32]`).
  - `random_images` (bool, optional): Images are randomly selected if `True` (default: `True`).
  - `num_images` (int, optional): Number of images to log (default: 20).
  - `seed` (int or np.random.RandomState or None, optional): Seed value or RandomState instance to control randomness and reproducibility (default: `None`).
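
- **Example:** a sketch combining both extra-logging helpers in one call — the image batch, loss, and epoch values are stand-ins for training-loop state; `tblogger` is imported as above:

```python
import torch
from neps.plot.tblogger import tblogger  # assumed import path; adjust to your NePS version

epoch, loss = 4, 0.5           # stand-ins for training-loop state
x = torch.rand(64, 3, 32, 32)  # illustrative image batch (N, C, H, W)
tblogger.log(
    loss=loss,
    current_epoch=epoch,
    extra_data={
        "lr": tblogger.scalar_logging(value=0.01),
        "inputs": tblogger.image_logging(image=x, counter=2, num_images=4),
    },
)
```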

### Logging Example

@@ -124,17 +133,19 @@ For illustration purposes, we have employed a straightforward example involving
You can find this example [here](https://github.com/automl/neps/blob/master/neps_examples/convenience/neps_tblogger_tutorial.py)

!!! info "Important"

    We have optimized the example for computational efficiency. If you wish to replicate the exact results showcased in the following section, we recommend the following modifications:

    1- Increase maximum epochs from 2 to 10
    2- Set the `write_summary_incumbent` argument to `True`
    3- Change the searcher from `random_search` to `bayesian_optimization`
    4- Increase the maximum evaluations before disabling `tblogger` from 2 to 14
    5- Increase the maximum evaluations after disabling `tblogger` from 3 to 15

### Visualization Results

@@ -144,7 +155,7 @@ The following command will open a local host for TensorBoard visualizations, all

```bash
tensorboard --logdir path/to/root_directory
```

This image shows visualizations related to scalar values logged during training. Scalars typically include metrics such as loss, incumbent trajectory, a summary of losses for all configurations, and any additional data provided via the `extra_data` argument in the `tblogger.log` function.

![scalar_loggings](doc_images/tensorboard/tblogger_scalar.jpg)
