Welcome to NePS, a powerful and flexible Python library for hyperparameter optimization (HPO) and neural architecture search (NAS) that makes both practical for deep learners.
NePS houses both recently published and well-established algorithms that can all be run massively parallel on distributed setups; in general, NePS is tailored to the needs of deep learning experts.
To learn about NePS, check out the documentation, our examples, or a colab tutorial.
In addition to the features offered by traditional HPO and NAS libraries, NePS stands out with:
- Hyperparameter Optimization (HPO) Efficient Enough for Deep Learning: NePS excels at efficiently tuning hyperparameters using algorithms that let users incorporate their prior knowledge, alongside many other efficiency boosters.
- Neural Architecture Search (NAS) with Expressive Search Spaces: NePS provides capabilities for optimizing DL architectures in an expressive and natural fashion.
- Zero-effort Parallelization and an Experience Tailored to DL: NePS simplifies parallelizing optimization tasks, both on individual computers and in distributed computing environments (a short sketch follows the usage example below). As NePS is made for deep learners, all technical choices are made with DL in mind, and common DL tools such as TensorBoard are embraced.
To install the latest release from PyPI, run

```bash
pip install neural-pipeline-search
```
Using `neps` always follows the same pattern:

- Define an `evaluate_pipeline` function capable of evaluating different architectural and/or hyperparameter configurations for your problem.
- Define a search space named `pipeline_space` of those parameters, e.g. via a dictionary.
- Call `neps.run(evaluate_pipeline, pipeline_space)`.
In code, the usage pattern can look like this:
```python
import neps
import logging

logging.basicConfig(level=logging.INFO)


# 1. Define a function that accepts hyperparameters and computes the validation error
def evaluate_pipeline(lr: float, alpha: int, optimizer: str) -> float:
    # Create your model
    model = MyModel(lr=lr, alpha=alpha, optimizer=optimizer)

    # Train and evaluate the model with your training pipeline
    validation_error = train_and_eval(model)
    return validation_error


# 2. Define a search space of parameters; use the same parameter names as in evaluate_pipeline
pipeline_space = dict(
    lr=neps.Float(
        lower=1e-5,
        upper=1e-1,
        log=True,  # Log-spaced parameter
        prior=1e-3,  # Incorporate your knowledge to help optimization
    ),
    alpha=neps.Integer(lower=1, upper=42),
    optimizer=neps.Categorical(choices=["sgd", "adam"]),
)

# 3. Run the NePS optimization
neps.run(
    evaluate_pipeline=evaluate_pipeline,
    pipeline_space=pipeline_space,
    root_directory="path/to/save/results",  # Replace with the actual path.
    max_evaluations_total=100,
)
```
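As highlighted in the feature list above, parallelization needs no extra code: the same optimization can be shared by several workers. The snippet below is a minimal, hedged sketch, assuming that every process calling `neps.run` with the same `root_directory` joins the same optimization; the script name `my_neps_script.py` (the example above saved to a file) and the worker count are hypothetical.

```python
# launch_workers.py -- hypothetical helper, not part of NePS.
# Assumption: processes that call neps.run with the same root_directory share
# one optimization state, so starting the example script several times yields
# parallel workers drawing from a common evaluation budget.
import subprocess

workers = [
    subprocess.Popen(["python", "my_neps_script.py"])  # hypothetical file name
    for _ in range(4)  # four local workers; on a cluster, submit separate jobs instead
]
for worker in workers:
    worker.wait()  # block until every worker has finished its share of evaluations
```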
Discover how NePS works through these examples:
- Hyperparameter Optimization: Learn the essentials of hyperparameter optimization with NePS.
- Multi-Fidelity Optimization: Understand how to leverage multi-fidelity optimization for efficient model tuning (a short sketch follows this list).
- Utilizing Expert Priors for Hyperparameters: Learn how to incorporate expert priors for more efficient hyperparameter selection.
- Additional NePS Examples: Explore more examples, including various use cases and advanced configurations in NePS.
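As a teaser for the multi-fidelity example, here is a minimal, hedged sketch. It assumes that a parameter can be marked as the fidelity via an `is_fidelity=True` flag and that `evaluate_pipeline` then receives the sampled fidelity value (here `epochs`); `train_for_epochs` and the result path are hypothetical stand-ins for your own code and setup.

```python
import neps


# Hypothetical training helper: train for `epochs` epochs and return the validation error.
def train_for_epochs(lr: float, epochs: int) -> float:
    # Placeholder objective; replace with your real training and validation code.
    return (lr - 1e-3) ** 2 / epochs


def evaluate_pipeline(lr: float, epochs: int) -> float:
    # Cheap, low-fidelity evaluations (few epochs) guide the search early on,
    # while promising configurations are re-evaluated at higher fidelities.
    return train_for_epochs(lr=lr, epochs=epochs)


pipeline_space = dict(
    lr=neps.Float(lower=1e-5, upper=1e-1, log=True),
    epochs=neps.Integer(lower=1, upper=50, is_fidelity=True),  # assumed fidelity flag
)

neps.run(
    evaluate_pipeline=evaluate_pipeline,
    pipeline_space=pipeline_space,
    root_directory="results/multifidelity_sketch",  # hypothetical path
    max_evaluations_total=50,
)
```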
Please see the documentation for contributors.
For pointers on citing the NePS package and papers, refer to our documentation on citations.