
Runner should only need to know about outcomes, not objectives vs. constraints #2963

Closed
wants to merge 1 commit

Conversation

esantorella
Contributor

Summary:
Context: In theory, a `BenchmarkRunner` should not have to know which metrics are objectives and which are constraints, and a test function should not have to be aware of that either; both just generate data. That knowledge should live on the `OptimizationConfig`, so that various setups can be used without changing the runner or the test function.

For historical reasons, runners track objectives and constraints separately and add noise to them separately, mimicking how BoTorch test functions handle this. However, we now can and should isolate the quirks of BoTorch test functions within `BoTorchTestProblem`.
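To make the intended separation concrete, here is a minimal sketch with hypothetical names (`SketchTestFunction` and `SketchOptimizationConfig` are illustrations, not the actual Ax classes): the test function only produces outcome values, and the config, not the runner, assigns roles to them.

```python
# Illustrative sketch only -- hypothetical names, not the actual Ax API.
from dataclasses import dataclass


@dataclass
class SketchTestFunction:
    """Generates data for all outcomes without knowing their roles."""

    def evaluate_true(self, params: dict[str, float]) -> dict[str, float]:
        x = params["x"]
        return {"f": x**2, "c": 1.0 - x}  # just outcomes, no roles attached


@dataclass
class SketchOptimizationConfig:
    """Owns the mapping from outcome names to roles."""

    objective_names: list[str]
    constraint_names: list[str]


# The same test function serves different setups unchanged:
constrained = SketchOptimizationConfig(objective_names=["f"], constraint_names=["c"])
scalarized = SketchOptimizationConfig(objective_names=["f", "c"], constraint_names=[])
```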

This diff:

  • Removes the argument `constraint_noise_std` from `create_problem_from_botorch` and from `ParamBasedTestProblemRunner`, in favor of just using `noise_std`.
  • Renames `num_objectives` to `num_outcomes` in `ParamBasedTestProblemRunner`.
  • Removes `evaluate_slack_true` from `ParamBasedTestProblem`, changing `evaluate_true` to return both objectives and constraints (see the before/after sketch below).
  • Updates argument validation.
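For illustration, a hedged before/after sketch of that `evaluate_true` consolidation; the class names and signatures here are assumptions, not the exact code in ax/benchmark:

```python
# Hypothetical before/after sketch of the evaluate_true change; class
# names and signatures here are assumptions, not the exact Ax code.
import torch


class OldStyleTestProblem:
    """Before: objectives and constraint slacks came from separate methods."""

    def evaluate_true(self, params: dict[str, float]) -> torch.Tensor:
        return torch.tensor([params["x"] ** 2])  # objectives only

    def evaluate_slack_true(self, params: dict[str, float]) -> torch.Tensor:
        return torch.tensor([1.0 - params["x"]])  # constraint slacks only


class NewStyleTestProblem:
    """After: one method returns every outcome; roles live on the
    OptimizationConfig, not on the test problem."""

    def evaluate_true(self, params: dict[str, float]) -> torch.Tensor:
        x = params["x"]
        return torch.tensor([x**2, 1.0 - x])  # all outcomes together
```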

Tangentially related changes:

  • For simplicity, makes `get_noise_stds` always return a dict (sketched below).
  • Stops allowing `noise_std` to be `None`, defaulting it to zero instead (previously, a `None` value was eventually set to zero anyway).
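Below is a minimal sketch of the resulting noise handling, under assumed names (`SketchRunner` is hypothetical; the real class is `ParamBasedTestProblemRunner`): a single `noise_std` covers every outcome, it defaults to zero rather than `None`, and `get_noise_stds` always returns a dict.

```python
# Hedged sketch of the simplified noise interface; SketchRunner is a
# hypothetical stand-in for ParamBasedTestProblemRunner.
from dataclasses import dataclass
from typing import Union


@dataclass
class SketchRunner:
    outcome_names: list[str]
    # One argument for all outcomes; no separate constraint_noise_std.
    # Defaults to 0.0 instead of None.
    noise_std: Union[float, dict[str, float]] = 0.0

    def get_noise_stds(self) -> dict[str, float]:
        # Always returns a dict, however noise_std was specified.
        if isinstance(self.noise_std, dict):
            return dict(self.noise_std)
        return {name: self.noise_std for name in self.outcome_names}


runner = SketchRunner(outcome_names=["objective", "constraint"], noise_std=0.1)
assert runner.get_noise_stds() == {"objective": 0.1, "constraint": 0.1}
```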

Differential Revision: D64919207

@facebook-github-bot added the CLA Signed label on Oct 24, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D64919207

@codecov-commenter

codecov-commenter commented Oct 24, 2024

Codecov Report

Attention: Patch coverage is 97.18310% with 2 lines in your changes missing coverage. Please review.

Project coverage is 95.66%. Comparing base (cda1b82) to head (82ccf93).
Report is 2 commits behind head on main.

Files with missing lines               Patch %   Lines
ax/benchmark/runners/botorch_test.py   93.75%    1 Missing ⚠️
ax/benchmark/runners/surrogate.py      80.00%    1 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2963   +/-   ##
=======================================
  Coverage   95.65%   95.66%           
=======================================
  Files         486      486           
  Lines       48796    48772   -24     
=======================================
- Hits        46676    46656   -20     
+ Misses       2120     2116    -4     

☔ View full report in Codecov by Sentry.

esantorella added a commit to esantorella/Ax that referenced this pull request Oct 25, 2024
Runner should only need to know about outcomes, not objectives vs. constraints (facebook#2963)

Summary:

Context: In theory, a `BenchmarkRunner` should not have to know what metrics are objectives or constraints, and a test function should not have to be aware of that, either. They are just generating data. A `BenchmarkProblem` should only store knowledge of objectives and constraints on the `OptimizationConfig`, so that various `OptimizationConfigs` can be used without changing the runner and test function.

For historical reasons, runners track objectives and constraints separately and add noise to them separately, because this mimics how BoTorch test functions handle this. However, we now can and should isolate the quirks of BoTorch test functions to `BoTorchTestProblem`.

This diff:
* Updates `ParamBasedTestFunction.evaluate_true` to return all outcomes, not just objectives, and gets rid of `evaluate_slack_true`, which was for constraints
* Removes `num_objectives` from `ParamBasedTestProblem`, leaving `ParamBasedTestProblem` with nothing but an `evaluate_true` method
* Removes the argument `constraint_noise_std` from `create_problem_from_botorch` and from `ParamBasedTestProblemRunner`, in favor of just using `noise_std`.
* Updates argument validation

Tangentially related changes:
* For simplicity, makes `get_noise_stds` always return a dict
* Stops allowing `noise_std` to be `None`, defaulting it to zero instead (previously, a `None` value was eventually set to zero anyway)

Differential Revision: D64919207
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 25, 2024
Runner should only need to know about outcomes, not objectives vs. constraints (facebook#2963)
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 25, 2024
Runner should only need to know about outcomes, not objectives vs. constraints (facebook#2963)
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 25, 2024
Runner should only need to know about outcomes, not objectives vs. constraints (facebook#2963)
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 25, 2024
Runner should only need to know about outcomes, not objectives vs. constraints (facebook#2963)

Pull Request resolved: facebook#2963

Reviewed By: saitcakmak

Differential Revision: D64919207
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 25, 2024
Runner should only need to know about outcomes, not objectives vs. constraints (facebook#2963)
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 25, 2024
Runner should only need to know about outcomes, not objectives vs. constraints (facebook#2963)
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 25, 2024
Runner should only need to know about outcomes, not objectives vs. constraints (facebook#2963)
@facebook-github-bot
Contributor

This pull request has been merged in bdc5df7.

Labels: CLA Signed, fb-exported, Merged
3 participants