Releases: keras-team/keras-tuner
Release v1.1.2
What's Changed
- add --profile=black to isort by @LukeWood in #672
- In model checkpointing callback, check logs before get objective value by @haifeng-jin in #674
Full Changelog: 1.1.1...1.1.2
Release v1.1.2RC0
What's Changed
- add --profile=black to isort by @LukeWood in #672
- In model checkpointing callback, check logs before get objective value by @haifeng-jin in #674
Full Changelog: 1.1.1...1.1.2rc0
Release v1.1.1
Highlights
- Support passing a list of objectives as the `objective` argument.
- Raise a better error message when the return value of `run_trial()` or `HyperModel.fit()` is of the wrong type.
- Various bug fixes for the `BayesianOptimization` tuner.
- The trial IDs are changed from hex strings to integers counting from 0.
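A minimal sketch of the multi-objective idea above, assuming a simple signed-sum reduction. The `combine_objectives` helper and the reduction rule are illustrative only, not keras-tuner's actual implementation:

```python
# Sketch: reducing a list of objectives to one scalar to minimize.
# The signed-sum reduction here is an assumption for illustration;
# keras-tuner's actual multi-objective combination may differ.

def combine_objectives(metrics, objectives):
    """metrics: dict of metric_name -> value.
    objectives: list of (name, direction), direction is 'min' or 'max'."""
    total = 0.0
    for name, direction in objectives:
        value = metrics[name]
        # Flip the sign of maximize objectives so lower is always better.
        total += value if direction == "min" else -value
    return total

score = combine_objectives(
    {"val_loss": 0.3, "val_accuracy": 0.9},
    [("val_loss", "min"), ("val_accuracy", "max")],
)
```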
What's Changed
- Make hyperparameters names visible in Display output by @C-Pro in #634
- Replace import kerastuner with import keras_tuner by @ageron in #640
- Support multi-objective by @haifeng-jin in #641
- reorganize the tests to follow keras best practices by @haifeng-jin in #643
- keep Objective in oracle for backward compatibility by @haifeng-jin in #644
- better error check for returned eval results by @haifeng-jin in #646
- Mitigate the issue of hanging workers after chief already quits when running keras-tuner in distributed tuning mode. by @mtian29 in #645
- Ensure hallucination checks if the Gaussian regressor has been fit be… by @brydon in #650
- Resolves #609: Support for sklearn functions without sample_weight by @brydon in #651
- Resolves #652 and #605: Make human readable trial_id and sync trial numbers between worker Displays by @brydon in #653
- Update tuner.py by @haifeng-jin in #657
- fix(bayesian): scalar optimization result (#655) by @haifeng-jin in #662
- Generalize hallucination checks to avoid racing conditions by @alisterl in #664
- remove scipy from required dependency by @haifeng-jin in #665
- Import scipy.optimize by @haifeng-jin in #667
New Contributors
- @C-Pro made their first contribution in #634
- @ageron made their first contribution in #640
- @mtian29 made their first contribution in #645
- @brydon made their first contribution in #650
- @alisterl made their first contribution in #664
Full Changelog: 1.1.1rc0...1.1.1
Release v1.1.1RC0
Highlights
- Support passing a list of objectives as the `objective` argument.
- Raise a better error message when the return value of `run_trial()` or `HyperModel.fit()` is of the wrong type.
What's Changed
- Make hyperparameters names visible in Display output by @C-Pro in #634
- Replace import kerastuner with import keras_tuner by @ageron in #640
- Support multi-objective by @haifeng-jin in #641
- reorganize the tests to follow keras best practices by @haifeng-jin in #643
- keep Objective in oracle for backward compatibility by @haifeng-jin in #644
- better error check for returned eval results by @haifeng-jin in #646
- Mitigate the issue of hanging workers after chief already quits when running keras-tuner in distributed tuning mode. by @mtian29 in #645
- Ensure hallucination checks if the Gaussian regressor has been fit be… by @brydon in #650
- Resolves #609: Support for sklearn functions without sample_weight by @brydon in #651
- Resolves #652 and #605: Make human readable trial_id and sync trial numbers between worker Displays by @brydon in #653
- Update tuner.py by @haifeng-jin in #657
New Contributors
- @C-Pro made their first contribution in #634
- @ageron made their first contribution in #640
- @mtian29 made their first contribution in #645
- @brydon made their first contribution in #650
Full Changelog: 1.1.0...1.1.1rc0
1.1.0
What's Changed
- Support `HyperModel.fit()` to tune the fit process.
- Support `Tuner.run_trial()` to return a single float as the objective value to minimize.
- Support `Tuner.run_trial()` to return a dictionary of `{metric_name: value}` or a Keras history.
- Allow not providing `hypermodel` to `Tuner` if overriding `Tuner.run_trial()`.
- Allow not providing `objective` to `Tuner` if `HyperModel.fit()` or `Tuner.run_trial()` returns a single float.
- Bug fixes.
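The `run_trial()` contract described above can be sketched with stand-in classes. `ToyTuner` and `ToyOracle` below are hypothetical illustrations of the pattern, not the real `keras_tuner` API:

```python
# Sketch of the run_trial() contract: the tuner treats a returned float
# as the objective value to minimize. Class names are illustrative
# stand-ins, not the real keras_tuner classes.

class ToyOracle:
    """Minimal oracle that tracks the best (lowest) score seen."""
    def __init__(self):
        self.best_score = float("inf")

    def report(self, score):
        self.best_score = min(self.best_score, score)

class ToyTuner:
    def __init__(self):
        self.oracle = ToyOracle()

    def run_trial(self, hp_value):
        # A real subclass would build and evaluate a model here.
        # Returning a single float means "minimize this value".
        return (hp_value - 3) ** 2

    def search(self, candidates):
        for hp in candidates:
            self.oracle.report(self.run_trial(hp))

tuner = ToyTuner()
tuner.search([1, 2, 3, 4])  # best candidate is hp_value == 3
```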
Breaking Changes
- Changed the internal class `MultiExecutionTuner` to `Tuner` to replace all overridden methods.
- Removed `KerasHyperModel`, an internal class that wrapped the user-provided `HyperModel`.
New Contributors
- @liqiongyu made their first contribution in #594
- @vardhanaleti made their first contribution in #595
- @howl-anderson made their first contribution in #607
Full Changelog: 1.0.4...1.1.0
1.1.0rc0
What's Changed
- Support `HyperModel.fit()` to tune the fit process.
- Support `Tuner.run_trial()` to return a single float as the objective value to minimize.
- Support `Tuner.run_trial()` to return a dictionary of `{metric_name: value}` or a Keras history.
- Allow not providing `hypermodel` to `Tuner` if overriding `Tuner.run_trial()`.
- Allow not providing `objective` to `Tuner` if `HyperModel.fit()` or `Tuner.run_trial()` returns a single float.
- Bug fixes.
Breaking Changes
- Changed the internal class `MultiExecutionTuner` to `Tuner` to replace all overridden methods.
- Removed `KerasHyperModel`, an internal class that wrapped the user-provided `HyperModel`.
New Contributors
- @liqiongyu made their first contribution in #594
- @vardhanaleti made their first contribution in #595
- @howl-anderson made their first contribution in #607
Full Changelog: 1.0.4...1.1.0rc0
Release v1.0.4
- Support DataFrame in SklearnTuner.
- Support `Tuner.search_space_summary()` to print all the hyperparameters based on `conditional_scope`s.
- Support TensorFlow 2.0 for backward compatibility.
- Bug fixes and documentation improvements.
- Raise a warning when using with TF 1.
- Save TPUStrategy models with the TF format.
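A simplified sketch of the `conditional_scope` idea above: a hyperparameter is registered only when its parent has a given value. `ToyHyperParameters` is a stand-in for illustration, not the real `keras_tuner.HyperParameters` API:

```python
# Sketch of conditional scoping: child hyperparameters are only active
# when the enclosing condition on the parent hyperparameter holds.
# This is a simplified stand-in, not keras_tuner's implementation.

class ToyHyperParameters:
    def __init__(self):
        self.values = {}
        self._conditions = []  # stack of (parent_name, allowed_values)

    def choice(self, name, options):
        # Register the hyperparameter only if all enclosing conditions hold.
        for parent, allowed in self._conditions:
            if self.values.get(parent) not in allowed:
                return None  # inactive in the current scope
        self.values.setdefault(name, options[0])
        return self.values[name]

    def conditional_scope(self, parent, allowed):
        hp = self
        class _Scope:
            def __enter__(self):
                hp._conditions.append((parent, allowed))
            def __exit__(self, *exc):
                hp._conditions.pop()
        return _Scope()

hp = ToyHyperParameters()
hp.choice("model_type", ["cnn", "mlp"])          # picks "cnn"
with hp.conditional_scope("model_type", ["cnn"]):
    hp.choice("filters", [32, 64])               # active: model_type == "cnn"
with hp.conditional_scope("model_type", ["mlp"]):
    hp.choice("units", [128, 256])               # inactive: skipped
```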
Release v1.0.4rc1
- Support DataFrame in SklearnTuner.
- Support `Tuner.search_space_summary()` to print all the hyperparameters based on `conditional_scope`s.
- Support TensorFlow 2.0 for backward compatibility.
- Bug fixes and documentation improvements.
- Raise a warning when using with TF 1.
Release v1.0.4rc0
- Support DataFrame in SklearnTuner.
- Support `Tuner.search_space_summary()` to print all the hyperparameters based on `conditional_scope`s.
- Support TensorFlow 2.0 for backward compatibility.
- Bug fixes and documentation improvements.
Release v1.0.3
- Renamed the import name of `kerastuner` to `keras_tuner`.
- Renamed the `Oracle`s to add `Oracle` as a suffix, e.g., the `RandomSearch` oracle is renamed to `RandomSearchOracle`. (The `RandomSearch` tuner is still named `RandomSearch`.)
- Renamed `Tuner._populate_space` to `Tuner.populate_space`.
- Renamed `Tuner._score_trial` to `Tuner.score_trial`.
- Renamed the `kt.tuners.Sklearn` tuner to `kt.SklearnTuner` and put it at the root level of import.
- Removed the `CloudLogger` feature, but the `Logger` class still works.
- Support tuning `sklearn.pipeline.Pipeline`.
- Improved the docstrings.
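For the package rename above, a rough, hypothetical textual helper shows the migration; `migrate_imports` is not part of keras-tuner, just an illustration:

```python
def migrate_imports(source):
    """Rewrite old-style 'import kerastuner' statements to the new
    keras_tuner package name. A rough textual helper illustrating the
    v1.0.3 rename; real migrations may also need to handle forms like
    'from kerastuner import ...'."""
    return source.replace("import kerastuner", "import keras_tuner")

migrated = migrate_imports("import kerastuner as kt")
# migrated == "import keras_tuner as kt"
```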