- Changed the chief server waiting time before shutting down to 60 minutes by default.
- When running in parallel, the chief may exit before some clients ask for another trial, which informs those clients to exit. It is now fixed.
- Updated the dependency from `keras-core` to `keras` version 3 and above. Also supports `keras` version 2 for backward compatibility.
- When running in parallel, the client oracle used to wait forever when the chief oracle was not responding. It is now fixed.
- When running in parallel, the client would call the chief after calling `oracle.end_trial()`, when the chief has already ended. It is now fixed.
- When running in parallel, the chief used to start to block in `tuner.__init__()`. However, it makes more sense to block when calling `tuner.search()`. It is now fixed.
- Could not do `from keras_tuner.engine.hypermodel import HyperModel`. It is now fixed.
- Could not do `from keras_tuner.engine.hyperparameters import HyperParameters`. It is now fixed.
- Could not do `from keras_tuner.engine.metrics_tracking import infer_metric_direction`. It is now fixed.
- Could not do `from keras_tuner.engine.oracle import Objective`. It is now fixed.
- Could not do `from keras_tuner.engine.oracle import Oracle`. It is now fixed.
- Could not do `from keras_tuner.engine.hyperparameters import serialize`. It is now fixed.
- Could not do `from keras_tuner.engine.hyperparameters import deserialize`. It is now fixed.
- Could not do `from keras_tuner.engine.tuner import maybe_distribute`. It is now fixed.
- Could not do `from keras_tuner.engine.tuner import Tuner`. It is now fixed.
- When the TensorFlow version is low, it would error out saying Keras models have no attribute called `get_build_config`. It is now fixed.
- Could not do `from keras_tuner.engine import trial`. It is now fixed.
- Could not do `from keras_tuner.engine import base_tuner`. It is now fixed.
- All private APIs are hidden under `keras_tuner.src.*`. For example, if you use `keras_tuner.some_private_api`, it will now be `keras_tuner.src.some_private_api`.
- Support Keras Core with multi-backend.
- Removed TensorFlow from the required dependencies of KerasTuner. Users need to install TensorFlow either separately or together with KerasTuner via `pip install keras_tuner[tensorflow]`. This change is because some people may want to use KerasTuner with `tensorflow-cpu` instead of `tensorflow`.
- KerasTuner used to require the protobuf version to be under 3.20. The limit is removed. It now supports both protobuf 3 and 4.
- If you have a protobuf version > 3.20, it would throw an error when importing KerasTuner. It is now fixed.
- KerasTuner would install protobuf 3.19 with `protobuf<=3.20`. We want to install 3.20.3, so we changed it to `protobuf<=3.20.3`. It is now fixed.
- It used to install protobuf 4.22.1 when installed with TensorFlow 2.12, which is not compatible with KerasTuner. We limited the version to <=3.20. It is now fixed.
- `Tuner.results_summary()` did not print error messages for failed trials and did not display `Objective` information correctly. It is now fixed.
- `BayesianOptimization` would break when not specifying `num_initial_points` and overriding `.run_trial()`. It is now fixed.
- TensorFlow 2.12 would break because of the different protobuf version. It is now fixed.
- Removed `Logger` and `CloudLogger` and the related arguments in `BaseTuner.__init__(logger=...)`.
- Removed `keras_tuner.oracles.BayesianOptimization`, `keras_tuner.oracles.Hyperband`, `keras_tuner.oracles.RandomSearch`, which were actually `Oracle`s instead of `Tuner`s. Please use `keras_tuner.oracles.BayesianOptimizationOracle`, `keras_tuner.oracles.HyperbandOracle`, `keras_tuner.oracles.RandomSearchOracle` instead.
- Removed `keras_tuner.Sklearn`. Please use `keras_tuner.SklearnTuner` instead.
- `keras_tuner.oracles.GridSearchOracle` is now available as a standalone `Oracle` to be used with custom tuners, as sketched below.
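  For illustration, a minimal sketch of constructing the renamed `Oracle` classes directly. The objective name, trial counts, and the idea of plugging the oracle into a custom tuner are placeholders, not part of these notes.

  ```python
  import keras_tuner

  # Before (removed): keras_tuner.oracles.BayesianOptimization(...)
  # After: the *Oracle suffix makes clear these are Oracles, not Tuners.
  bo_oracle = keras_tuner.oracles.BayesianOptimizationOracle(
      objective=keras_tuner.Objective("val_loss", "min"),
      max_trials=20,
  )

  # GridSearchOracle is now available standalone, e.g. to pass to a custom tuner.
  grid_oracle = keras_tuner.oracles.GridSearchOracle(
      objective=keras_tuner.Objective("val_loss", "min"),
      max_trials=50,
  )
  ```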
- The resume feature (`overwrite=False`) would crash in 1.2.0. This is now fixed (see the sketch below).
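  A minimal sketch of the resume workflow that the fix restores; the model, directory, and project name are made up for illustration.

  ```python
  import keras
  import keras_tuner


  def build_model(hp):
      model = keras.Sequential([
          keras.layers.Dense(hp.Int("units", 8, 64, step=8), activation="relu"),
          keras.layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mse")
      return model


  # With overwrite=False, rerunning the same script reloads the existing results
  # in results/resume_demo and resumes the search instead of starting over.
  tuner = keras_tuner.RandomSearch(
      build_model,
      objective="val_loss",
      max_trials=30,
      overwrite=False,
      directory="results",
      project_name="resume_demo",
  )
  ```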
- If you implemented your own `Tuner`, the old use case of reporting results with `Oracle.update_trial()` in `Tuner.run_trial()` is deprecated. Please return the metrics in `Tuner.run_trial()` instead, as in the sketch below.
- If you implemented your own `Oracle` and overrode `Oracle.end_trial()`, you need to change the signature of the function from `Oracle.end_trial(trial.trial_id, trial.status)` to `Oracle.end_trial(trial)`.
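  A minimal sketch of the recommended pattern, assuming a custom tuner whose `run_trial()` computes a single score; the hyperparameter name, scoring logic, and directory are placeholders.

  ```python
  import keras_tuner


  class MyTuner(keras_tuner.RandomSearch):
      def run_trial(self, trial, *args, **kwargs):
          hp = trial.hyperparameters
          x = hp.Float("x", -1.0, 1.0)
          # Deprecated: self.oracle.update_trial(trial.trial_id, {"score": x * x})
          # Preferred: return the metric(s) directly; a single float is minimized.
          return x * x


  tuner = MyTuner(
      max_trials=10,
      overwrite=True,
      directory="results",
      project_name="return_metrics",
  )
  tuner.search()
  ```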
- The default value of the `step` argument in `keras_tuner.HyperParameters.Int()` is changed to `None`, which was `1` before. No change in default behavior.
- The default value of the `sampling` argument in `keras_tuner.HyperParameters.Int()` is changed to `"linear"`, which was `None` before. No change in default behavior.
- The default value of the `sampling` argument in `keras_tuner.HyperParameters.Float()` is changed to `"linear"`, which was `None` before. No change in default behavior.
- If you explicitly rely on protobuf values, the new protobuf bug fix may affect you.
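  To make the `step`/`sampling` default changes above concrete, a small sketch that spells them out explicitly; the hyperparameter names and ranges are arbitrary.

  ```python
  import keras_tuner

  hp = keras_tuner.HyperParameters()

  # Same behavior as before; the new defaults (step=None for Int,
  # sampling="linear" for Int and Float) are just written out here.
  units = hp.Int("units", min_value=32, max_value=512, step=32, sampling="linear")
  lr = hp.Float("lr", min_value=1e-4, max_value=1e-1, sampling="log")
  ```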
- Changed the mechanism of how a random sample is drawn for a hyperparameter. All hyperparameter types now start from a random value between 0 and 1 and convert that value to a random sample, as illustrated below.
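  This is not KerasTuner's actual code, just a sketch of the idea: one uniform draw in [0, 1) is mapped onto the hyperparameter's range, shown here for a log-sampled float.

  ```python
  import math
  import random


  def sample_log_float(min_value, max_value):
      u = random.random()  # the shared uniform draw in [0, 1)
      # Map u onto [min_value, max_value] on a log scale:
      # u=0 gives min_value, u=1 gives max_value.
      return min_value * math.exp(u * math.log(max_value / min_value))


  print(sample_log_float(1e-4, 1e-1))
  ```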
- A new tuner is added, `keras_tuner.GridSearch`, which can exhaust all the possible hyperparameter combinations.
- Better fault tolerance during the search. Added two new arguments to `Tuner` and `Oracle` initializers, `max_retries_per_trial` and `max_consecutive_failed_trials`.
- You can now mark a `Trial` as failed by `raise keras_tuner.FailedTrialError("error message.")` in `HyperModel.build()`, `HyperModel.fit()`, or your model build function. A combined sketch of these three features follows this item.
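  A minimal sketch combining the three items above, assuming a made-up model-building function; the retry/failure limits and the "too large" guard are only illustrative.

  ```python
  import keras
  import keras_tuner


  def build_model(hp):
      units = hp.Choice("units", [64, 256, 1024])
      if units > 512:
          # Skip configurations we consider infeasible instead of crashing the search.
          # (Depending on the version, the class may be exposed as
          # keras_tuner.errors.FailedTrialError.)
          raise keras_tuner.FailedTrialError(f"units={units} is too large for this machine.")
      model = keras.Sequential([
          keras.layers.Dense(units, activation="relu"),
          keras.layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mse")
      return model


  # GridSearch exhausts all hyperparameter combinations; the two new arguments
  # bound how often a failed trial is retried and when the search is aborted.
  tuner = keras_tuner.GridSearch(
      build_model,
      objective="val_loss",
      max_retries_per_trial=2,
      max_consecutive_failed_trials=5,
      directory="results",
      project_name="fault_tolerance_demo",
  )
  ```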
- Provides better error messages for invalid configs for `Int` and `Float` type hyperparameters.
- A decorator `@keras_tuner.synchronized` is added to decorate the methods in `Oracle` and its subclasses to synchronize the concurrent calls to ensure thread safety in parallel tuning. A sketch follows this item.
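  A hypothetical `Oracle` subclass illustrating the decorator together with the new `end_trial(trial)` signature described earlier; the logging is only for demonstration.

  ```python
  import keras_tuner


  class LoggingRandomSearchOracle(keras_tuner.oracles.RandomSearchOracle):
      """Hypothetical subclass that logs when trials end."""

      @keras_tuner.synchronized
      def end_trial(self, trial):
          # synchronized serializes concurrent calls from parallel workers,
          # so shared Oracle state is mutated safely.
          print(f"Trial {trial.trial_id} ended with status {trial.status}")
          super().end_trial(trial)
  ```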
- Protobuf was not converting Boolean type hyperparameters correctly. This is now fixed.
- Hyperband was not loading the weights correctly for half-trained models. This is now fixed.
- `KeyError` may occur if using `hp.conditional_scope()`, or the `parent` argument for hyperparameters. This is now fixed (see the sketch below).
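  For context, a small sketch of the two ways of declaring conditional hyperparameters that this fix covers; the names and ranges are arbitrary.

  ```python
  import keras_tuner

  hp = keras_tuner.HyperParameters()
  optimizer = hp.Choice("optimizer", ["adam", "sgd"])

  # Option 1: a conditional scope tied to the parent value.
  with hp.conditional_scope("optimizer", ["adam"]):
      hp.Float("adam_beta_1", 0.5, 0.99)

  # Option 2: the parent_name/parent_values arguments on the child hyperparameter.
  hp.Float("sgd_momentum", 0.0, 0.9, parent_name="optimizer", parent_values=["sgd"])
  ```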
- `num_initial_points` of the `BayesianOptimization` should default to `3 * dimension`, but it defaulted to 2. This is now fixed.
- It would throw an error when using a concrete Keras optimizer object to override the `HyperModel` compile arg. This is now fixed.
- Workers might crash due to `Oracle` reloading when running in parallel. This is now fixed.