refactor: a lot #167

Merged: 16 commits, Jan 15, 2025
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -52,7 +52,7 @@ repos:
files: '^\.github/dependabot\.ya?ml$'

- repo: https://github.com/charliermarsh/ruff-pre-commit
-  rev: v0.8.6
+  rev: v0.9.1
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix, --no-cache]
47 changes: 0 additions & 47 deletions CONTRIBUTING.md
@@ -81,7 +81,6 @@ We have set up checks and tests at several points in the development flow:
This is set up during our [installation process](https://automl.github.io/neps/contributing/installation/).
- At every commit / push, running a minimal suite of integration tests locally is encouraged.
The tests correspond directly to examples in [neps_examples](https://github.com/automl/neps/tree/master/neps_examples) and only check for crash-causing errors.
- At every push all integration tests and regression tests are run automatically using [GitHub Actions](https://github.com/automl/neps/actions).

## Checks and tests

@@ -151,54 +150,8 @@ pytest
If tests fail for you on the master branch, please raise an issue on GitHub, preferably with some information on the error,
the traceback, and the environment in which you are running, e.g. Python version and OS.

## Regression Tests

Regression tests are run on each push to the repository to ensure that the performance of the optimizers doesn't degrade.

Currently, regression runs are recorded on JAHS-Bench-201 data for two tasks, `cifar10` and `fashion_mnist`, and only for the optimizers `random_search`, `bayesian_optimization`, and `mf_bayesian_optimization`.
This information is stored in `tests/regression_runner.py` as two lists: `TASKS` and `OPTIMIZERS`.
The recorded results are stored as a JSON dictionary in the `tests/losses.json` file.
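
For orientation, here is a minimal sketch of reading those recorded results; the nesting assumed here (optimizer -> task -> list of losses) is a guess for illustration, not the file's documented layout:

```python
# Hypothetical sketch: peek at the recorded regression losses.
# Assumes tests/losses.json maps optimizer -> task -> list of 100 losses;
# check the actual file for its real structure.
import json

with open("tests/losses.json") as f:
    losses = json.load(f)

for optimizer, tasks in losses.items():
    for task, runs in tasks.items():
        print(f"{optimizer} / {task}: {len(runs)} recorded losses")
```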

### Adding new optimizer algorithms

Once a new algorithm is added to the NePS library, we first need to record its performance over 100 optimization runs.

- If the algorithm expects a standard loss function (pipeline) and accepts fidelity hyperparameters in the pipeline space, then recording results only requires adding the optimizer name to the `OPTIMIZERS` list in `tests/regression_runner.py` and running `tests/regression_runner.py`.

- In case your algorithm requires a custom pipeline and/or pipeline space, you can modify the `runner.run_pipeline` and `runner.pipeline_space` attributes of the `RegressionRunner` after initialization (around line `#322` in `tests/regression_runner.py`); see the sketch below.

You can verify that the optimizer is recorded by rerunning `regression_runner.py`.
From then on, the regression tests will run on your new optimizer as well on every push.
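
A minimal sketch of that customization, assuming a `RegressionRunner(optimizer=..., task=...)` constructor (the actual signature lives in `tests/regression_runner.py`); the pipeline and search space below are purely illustrative:

```python
# Hypothetical sketch: wiring a custom pipeline into the RegressionRunner.
# Only the run_pipeline and pipeline_space attributes come from the text above;
# the constructor arguments and the example pipeline are assumptions.
import neps
from tests.regression_runner import RegressionRunner


def my_pipeline(lr: float) -> float:
    # Illustrative stand-in: any callable that evaluates a config and returns a loss.
    return (lr - 1e-2) ** 2


runner = RegressionRunner(optimizer="my_new_optimizer", task="cifar10")
runner.run_pipeline = my_pipeline
runner.pipeline_space = dict(
    lr=neps.Float(lower=1e-5, upper=1e-1, log=True),
)
# Afterwards, rerun tests/regression_runner.py to record the 100 runs.
```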

### Regression test metrics

For each regression test, the algorithm is run 10 times to sample its performance, and these samples are statistically compared to the 100 recorded runs. We use these three boolean metrics to judge the performance of the algorithm on any task:

1. [Kolmogorov-Smirnov test for goodness of fit](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kstest.html) - `pvalue` >= 10%
1. Absolute median distance - bounded within the 92.5% confidence range of the expected median distance
1. Median improvement - the median of the sampled runs improves over the recorded median

Test metrics are computed for each `(optimizer, task)` combination separately and then collected.
The collected metrics are then combined into two metrics:

1. Task pass - true if both the `Kolmogorov-Smirnov test` and the `Absolute median distance` test pass, or if the `Median improvement` test passes on its own
1. Test aggregate - sum over tasks of `Kolmogorov-Smirnov test` + `Absolute median distance` + 2 * `Median improvement`

Finally, the test for an optimizer passes only when `Task pass` is true for at least one of the tasks and `Test aggregate` is higher than 1 + `number of tasks`.
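
Putting these rules together, a minimal sketch of the decision logic, assuming per-task arrays of the 100 recorded and 10 sampled losses plus a precomputed bound for the 92.5% confidence range (the helper names are not the repository's API):

```python
# Sketch of the regression-test pass logic described above.
import numpy as np
from scipy import stats


def task_metrics(recorded, sampled, median_bound):
    # 1. Two-sample Kolmogorov-Smirnov goodness-of-fit test.
    ks_pass = stats.kstest(sampled, recorded).pvalue >= 0.10
    # 2. Absolute median distance, bounded by the 92.5% confidence range.
    dist_pass = abs(np.median(sampled) - np.median(recorded)) <= median_bound
    # 3. Median improvement over the recorded median (lower loss is better).
    improved = np.median(sampled) < np.median(recorded)
    return ks_pass, dist_pass, improved


def optimizer_passes(per_task_metrics):
    # Task pass: (KS test and median distance) together, or median improvement alone.
    task_pass = any((ks and dist) or imp for ks, dist, imp in per_task_metrics)
    # Test aggregate: sum over tasks of KS + distance + 2 * improvement.
    aggregate = sum(ks + dist + 2 * imp for ks, dist, imp in per_task_metrics)
    return task_pass and aggregate > 1 + len(per_task_metrics)
```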

### On regression test failures

Regression tests are stochastic by nature, so they might fail occasionally even if the algorithm's performance didn't degrade.
In the case of a regression test failure, try running it again first; if the problem persists, you can contact [Danny Stoll](mailto:[email protected]) or [Samir](mailto:[email protected]).
You can also run the tests locally:

```bash
uv run pytest -m regression_all
```

## Disabling and Skipping Checks etc.


### Pre-commit: How to not run hooks?

To commit without running `pre-commit`, use `git commit --no-verify -m <COMMIT MESSAGE>`.
33 changes: 15 additions & 18 deletions README.md
@@ -39,45 +39,44 @@ pip install neural-pipeline-search

Using `neps` always follows the same pattern:

-1. Define a `run_pipeline` function capable of evaluating different architectural and/or hyperparameter configurations
+1. Define an `evaluate_pipeline` function capable of evaluating different architectural and/or hyperparameter configurations
for your problem.
1. Define a search space named `pipeline_space` of those Parameters, e.g. via a dictionary
-1. Call `neps.run` to optimize `run_pipeline` over `pipeline_space`
+1. Call `neps.run(evaluate_pipeline, pipeline_space)`

In code, the usage pattern can look like this:

```python
import neps
import logging

-logging.basicConfig(level=logging.INFO)

# 1. Define a function that accepts hyperparameters and computes the validation error
-def run_pipeline(
-    hyperparameter_a: float, hyperparameter_b: int, architecture_parameter: str
-) -> dict:
+def evaluate_pipeline(lr: float, alpha: int, optimizer: str) -> float:
# Create your model
-    model = MyModel(architecture_parameter)
+    model = MyModel(lr=lr, alpha=alpha, optimizer=optimizer)

# Train and evaluate the model with your training pipeline
-    validation_error = train_and_eval(
-        model, hyperparameter_a, hyperparameter_b
-    )
+    validation_error = train_and_eval(model)
return validation_error


-# 2. Define a search space of parameters; use the same parameter names as in run_pipeline
+# 2. Define a search space of parameters; use the same parameter names as in evaluate_pipeline
pipeline_space = dict(
-    hyperparameter_a=neps.Float(
-        lower=0.001, upper=0.1, log=True  # The search space is sampled in log space
+    lr=neps.Float(
+        lower=1e-5,
+        upper=1e-1,
+        log=True,  # Sample on a log scale
+        prior=1e-3,  # Incorporate your knowledge to help optimization
    ),
-    hyperparameter_b=neps.Integer(lower=1, upper=42),
-    architecture_parameter=neps.Categorical(["option_a", "option_b"]),
+    alpha=neps.Integer(lower=1, upper=42),
+    optimizer=neps.Categorical(choices=["sgd", "adam"]),
)

# 3. Run the NePS optimization
+logging.basicConfig(level=logging.INFO)
neps.run(
-    run_pipeline=run_pipeline,
+    evaluate_pipeline=evaluate_pipeline,
pipeline_space=pipeline_space,
root_directory="path/to/save/results", # Replace with the actual path.
max_evaluations_total=100,
@@ -94,8 +93,6 @@ Discover how NePS works through these examples:

- **[Utilizing Expert Priors for Hyperparameters](neps_examples/efficiency/expert_priors_for_hyperparameters.py)**: Learn how to incorporate expert priors for more efficient hyperparameter selection.

-- **[Architecture Search](neps_examples/basic_usage/architecture.py)**: Dive into (hierarchical) architecture search in NePS.

- **[Additional NePS Examples](neps_examples/)**: Explore more examples, including various use cases and advanced configurations in NePS.

## Contributing
1 change: 0 additions & 1 deletion docs/_code/api_generator.py
@@ -3,7 +3,6 @@
# https://mkdocstrings.github.io/recipes/
"""


import logging
from pathlib import Path

97 changes: 0 additions & 97 deletions docs/doc_yamls/architecture_search_space.py

This file was deleted.

21 changes: 0 additions & 21 deletions docs/doc_yamls/customizing_neps_optimizer.yaml

This file was deleted.

24 changes: 0 additions & 24 deletions docs/doc_yamls/defining_hooks.yaml

This file was deleted.

42 changes: 0 additions & 42 deletions docs/doc_yamls/full_configuration_template.yaml

This file was deleted.

21 changes: 0 additions & 21 deletions docs/doc_yamls/loading_own_optimizer.yaml

This file was deleted.

11 changes: 0 additions & 11 deletions docs/doc_yamls/loading_pipeline_space_dict.yaml

This file was deleted.

18 changes: 0 additions & 18 deletions docs/doc_yamls/outsourcing_optimizer.yaml

This file was deleted.
