
feat: ask and tell #174

Merged 2 commits into master on Jan 16, 2025
Conversation

eddiebergman (Contributor) commented on Jan 15, 2025

TODO: Write some basic tests... Although I'm not really sure what I would test here. It's an extremely shallow wrapper.
TODO: Write a small docs section on this, and link to the API doc.


Implements a basic wrapper around what neps considers an optimizer. It essentially just holds the relevant state in memory, without any of the serialization, seed management, or other hard things the runtime does. We presume AskAndTell will usually be used for benchmarking/experimentation, so we keep it as lightweight as possible. It also includes a tell_custom, which essentially lets you update exactly the information you'd like the optimizer to know about.

The only relevant state is trials: Mapping[str, Trial].
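To make the pattern concrete, here is a minimal, self-contained sketch of an ask-and-tell wrapper whose only state is a mapping from trial id to trial. This is not the neps implementation; the names `SimpleTrial` and `SimpleAskAndTell` are hypothetical and exist only to illustrate the idea.

```python
from __future__ import annotations

from dataclasses import dataclass


# NOTE: illustrative sketch only -- not the neps API.
@dataclass
class SimpleTrial:
    id: str
    config: dict
    result: float | None = None  # filled in by tell()


class SimpleAskAndTell:
    def __init__(self, sampler):
        # `sampler` is any callable: trials -> config
        self.sampler = sampler
        # The only state we hold: trial id -> trial
        self.trials: dict[str, SimpleTrial] = {}

    def ask(self) -> SimpleTrial:
        trial = SimpleTrial(
            id=str(len(self.trials) + 1),
            config=self.sampler(self.trials),
        )
        self.trials[trial.id] = trial
        return trial

    def tell(self, trial: SimpleTrial, result: float) -> None:
        self.trials[trial.id].result = result


# Usage: a "sampler" that always proposes the same config
opt = SimpleAskAndTell(lambda trials: {"x": 0.5})
for _ in range(3):
    t = opt.ask()
    opt.tell(t, t.config["x"] ** 2)

print(len(opt.trials))          # 3
print(opt.trials["1"].result)   # 0.25
```

Since everything lives in one dict, there is nothing to serialize or lock, which is what keeps a wrapper like this cheap for benchmarking loops.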

Here's an example script using it with BO, along with a tiny example of a really dumb custom optimizer (a function that just returns a constant config).

from __future__ import annotations

from collections.abc import Mapping

import neps
from neps import AskAndTell, BudgetInfo, SampledConfig, Trial, algorithms


def evaluate_pipeline(float1, float2, categorical, integer1, integer2):
    return -sum([float1, float2, int(categorical), integer1, integer2])


space = neps.SearchSpace(
    {
        "float1": neps.Float(lower=0, upper=1),
        "float2": neps.Float(lower=-10, upper=10),
        "categorical": neps.Categorical(choices=[0, 1]),
        "integer1": neps.Integer(lower=0, upper=1),
        "integer2": neps.Integer(lower=1, upper=1000, log=True),
    }
)

# Creating an optimizer
optimizer = algorithms.bayesian_optimization(space, initial_design_size=5)

# Ask and tell loop
ask_and_tell = AskAndTell(optimizer)
for i in range(10):
    print(f"step {i + 1}")  # noqa: T201
    trial = ask_and_tell.ask()
    result = evaluate_pipeline(**trial.config)
    ask_and_tell.tell(trial, result)


# ------------- Making a custom optimizer with ask and tell ------------
# An example of the bare minimum to implement a custom optimizer
# which is just that we can __call__ something with the below signature
# which we'll also wrap in an AskAndTell optimizer
def my_custom_optimizer_function(
    trials: Mapping[str, Trial],
    budget_info: BudgetInfo | None = None,
    n: int | None = None,
) -> SampledConfig:
    # Just returns the same thing every time... it's dumb
    return SampledConfig(
        id=str(len(trials) + 1),
        config={
            "float1": 0.5,
            "float2": 0.5,
            "categorical": 0,
            "integer1": 0,
            "integer2": 0,
        },
        previous_config_id=None,
    )


# Ask and tell loop
ask_and_tell = AskAndTell(my_custom_optimizer_function)
for i in range(10):
    print(f"step custom {i + 1}")  # noqa: T201
    trial = ask_and_tell.ask()
    result = evaluate_pipeline(**trial.config)
    ask_and_tell.tell(trial, result)

eddiebergman merged commit 7db5d9e into master on Jan 16, 2025
13 checks passed