Runner should only need to know about outcomes, not objectives vs. constraints #2963
Conversation
This pull request was exported from Phabricator. Differential Revision: D64919207
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@           Coverage Diff            @@
##             main    #2963    +/-   ##
=======================================
  Coverage   95.65%   95.66%
=======================================
  Files         486      486
  Lines       48796    48772      -24
=======================================
- Hits        46676    46656      -20
+ Misses       2120     2116       -4

☔ View full report in Codecov by Sentry.
Force-pushed from 017fe54 to 58e3b5b
This pull request was exported from Phabricator. Differential Revision: D64919207
Force-pushed from 58e3b5b to 82ccf93
This pull request was exported from Phabricator. Differential Revision: D64919207
This pull request has been merged in bdc5df7.
Summary:

Context: In theory, a `BenchmarkRunner` should not have to know which metrics are objectives and which are constraints, and a test function should not have to be aware of that either; they are just generating data. That should be handled by the `OptimizationConfig`, so that various setups can be used without changing the runner and test function.

For historical reasons, runners track objectives and constraints separately and add noise to them separately, because this mimics how BoTorch test functions handle this. However, we now can and should isolate the quirks of BoTorch test functions within `BoTorchTestProblem`.

This diff:

* Removes the argument `constraint_noise_std` from `create_problem_from_botorch` and from `ParamBasedTestProblemRunner`, in favor of just using `noise_std` (see the usage sketch after this summary).
* Renames `num_objectives` to `num_outcomes` in `ParamBasedTestProblemRunner`.
* Removes `evaluate_slack_true` from `ParamBasedTestProblem`, changing `evaluate_true` to return both objectives and constraints (sketched below).

Tangentially related changes:

* For simplicity, makes `get_noise_stds` always return a dict (sketched below).
* Stops allowing `noise_std` to be `None` and defaults it to zero (it was eventually set to zero when it was `None` in the past).

Differential Revision: D64919207
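To make the new contract concrete, here is a minimal sketch of the slimmed-down interface, assuming nothing beyond what the summary states: the base class keeps only `evaluate_true`, and subclasses return every ground-truth outcome in one tensor. The toy subclass, its parameter names, and the tensor layout are hypothetical, not the actual Ax code.

```python
import torch
from torch import Tensor


class ParamBasedTestProblem:
    """Sketch of the slimmed-down base class: nothing but `evaluate_true`."""

    def evaluate_true(self, params: dict[str, float]) -> Tensor:
        """Return all ground-truth outcomes (objectives and constraint
        slacks alike) as one tensor; there is no separate
        `evaluate_slack_true` anymore."""
        raise NotImplementedError


class ToyConstrainedProblem(ParamBasedTestProblem):
    """Hypothetical two-outcome problem: one objective, one constraint slack.

    The runner never learns which element is which; that mapping lives
    on the `OptimizationConfig`.
    """

    def evaluate_true(self, params: dict[str, float]) -> Tensor:
        x = torch.tensor([params["x1"], params["x2"]])
        objective = (x**2).sum()          # toy objective value
        constraint_slack = 1.0 - x.sum()  # >= 0 means feasible
        # One flat tensor of outcomes; no objective/constraint split.
        return torch.stack([objective, constraint_slack])
```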
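The noise-handling simplifications can be pictured the same way. A rough sketch, not the actual runner code: `get_noise_stds` always hands back a dict, and a scalar `noise_std` (default 0.0, never `None`) is broadcast to every outcome, so objectives and constraints flow through a single noise path. The free-standing functions and the `add_noise` helper are hypothetical stand-ins for runner methods.

```python
import torch
from torch import Tensor


def get_noise_stds(
    outcome_names: list[str],
    noise_std: float | dict[str, float] = 0.0,
) -> dict[str, float]:
    # Always a dict now; `noise_std` can no longer be None, and a bare
    # float (defaulting to 0.0) is broadcast to every outcome.
    if isinstance(noise_std, dict):
        return noise_std
    return {name: noise_std for name in outcome_names}


def add_noise(Y_true: Tensor, outcome_names: list[str], noise_std=0.0) -> Tensor:
    # Uniform treatment: the same noise logic for objectives and constraints.
    stds = get_noise_stds(outcome_names, noise_std)
    scale = torch.tensor([stds[name] for name in outcome_names])
    return Y_true + scale * torch.randn(len(outcome_names))
```

For example, `get_noise_stds(["objective", "slack"], 0.1)` yields `{"objective": 0.1, "slack": 0.1}`, and with the 0.0 default the added noise is exactly zero, matching the old `None` behavior.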
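On the calling side, dropping `constraint_noise_std` means a single `noise_std` covers every outcome. A hypothetical call, assuming a constrained BoTorch test problem; the import path and the keyword arguments other than `noise_std` may differ across Ax versions, so treat this as a sketch rather than the definitive API.

```python
from ax.benchmark.benchmark_problem import create_problem_from_botorch
from botorch.test_functions.synthetic import ConstrainedHartmann

# Previously, noisy constrained problems took both `noise_std` (for
# objectives) and `constraint_noise_std` (for constraints). After this
# diff, one `noise_std` applies to all outcomes.
problem = create_problem_from_botorch(
    test_problem_class=ConstrainedHartmann,
    test_problem_kwargs={"dim": 6},
    num_trials=30,
    noise_std=0.1,
)
```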