
EarlyStopping: ‘patience’ count is reset when tuning in Keras

I’m using keras-tuner to perform a hyperparameter optimization of a neural network.

I’m using a Hyperband optimization, and I call the search method as:

```python
import keras_tuner as kt

tuner = kt.Hyperband(
    ann_model,
    objective=kt.Objective('val_loss', direction="min"),
    max_epochs=100,
    factor=2,
    directory="/path/to/folder",
    project_name="project_name",
    seed=0,
)

tuner.search(
    training_gen(),
    epochs=50,
    validation_data=valid_gen(),
    callbacks=[stop_early],
    steps_per_epoch=1000,
    validation_freq=1,
    validation_steps=100,
)
```

where the EarlyStopping callback is defined as:

```python
stop_early = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0.1, mode='min', patience=15)
```

Hyperband initially trains many models (each with a different combination of the chosen hyperparameters) for only a few epochs; it then discards the poorly performing models and continues training only the most promising ones, step by step, with an increasing number of epochs at each step. The final goal is to discard every model except the single best-performing one.

So the training of a specific model is not performed in one shot; it proceeds in steps, and at the end of each step Keras saves the state of the training.

By setting max_epochs=100, I noticed that the training of a model is split into these steps (called “Running trials“):

  1. firstly, from epoch 1 to epoch 3;
  2. secondly, from 4 to 7;
  3. then, from 8 to 13;
  4. then, from 14 to 25;
  5. then, from 26 to 50;
  6. and finally, from 51 to 100.
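The bracket boundaries above follow a roughly geometric schedule: with factor=2, each continuation roughly doubles the cumulative epoch budget until max_epochs is reached. A minimal sketch of this idea (not keras-tuner's actual code; `epoch_schedule` is a hypothetical helper, and the exact rounding inside keras-tuner differs slightly from the boundaries I observed):

```python
# Approximate cumulative epoch budgets of the Hyperband continuations.
# Illustrative only -- keras-tuner's internal rounding differs slightly,
# which is why the observed boundaries were 3, 7, 13, 25, 50, 100.
def epoch_schedule(max_epochs, factor, rounds):
    return [max(1, round(max_epochs / factor ** (rounds - i)))
            for i in range(1, rounds + 1)]

schedule = epoch_schedule(100, 2, 6)
print(schedule)  # -> [3, 6, 12, 25, 50, 100]
```

The point is only that each “Running trial” extends the cumulative budget by roughly a factor of 2, so the last two trials (26–50 and 51–100) are the only ones longer than patience=15.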

So, at the end of each “Running trial”, Keras saves the training state so that the next “Running trial” can resume from it.

With patience=15, EarlyStopping cannot trigger during “Running trials” 1), 2), 3), and 4) of the list above, because each of them trains for fewer epochs than patience; it can only trigger during “Running trials” 5) and 6).

Initially I assumed that the patience count started at epoch 1 and was never reset when a new “Running trial” begins. However, the EarlyStopping callback stops the training at epoch 41, i.e. during “Running trial” 5), which goes from epoch 26 to 50.
It therefore seems that the patience count is reset at the beginning of each “Running trial”: EarlyStopping halts the training at epoch 41, the first epoch at which it can trigger, because start_epoch + patience = 26 + 15 = 41.
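This arithmetic is consistent with the callback's state being re-initialized at the start of each fit() run (tf.keras's EarlyStopping resets its best value and wait counter in on_train_begin, so the first epoch of a fresh run always registers as an “improvement” over the initial infinity). A simplified pure-Python model of the patience logic, assuming the monitored val_loss stays flat and never improves by more than min_delta (`stop_epoch` is a hypothetical illustrative function, not Keras source code):

```python
import math

def stop_epoch(run_start, run_end, patience):
    """Epoch at which a freshly initialized EarlyStopping would halt,
    assuming the monitored metric never genuinely improves during the run.
    Illustrative reconstruction only, not the actual Keras implementation."""
    best = math.inf   # reset at the start of each run (as in on_train_begin)
    wait = 0
    for epoch in range(run_start, run_end + 1):
        val_loss = 1.0          # flat metric: no real improvement
        if val_loss < best:     # the first epoch always beats the fresh inf
            best = val_loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch    # training is stopped here
    return None                 # run ends without triggering the stop

# "Running trial" 5) covers epochs 26..50; with patience=15 the counter
# fills up at epoch 26 + 15 = 41:
print(stop_epoch(26, 50, 15))  # -> 41
print(stop_epoch(1, 3, 15))    # -> None (run shorter than patience)
```

Under this model, every earlier trial is too short for the counter to fill up, and trial 5) stops at exactly the observed epoch 41.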

Is it normal/expected behavior that patience is automatically reset at the beginning of each “Running trial” while using Keras Hyperband tuning?

submitted by /u/RainbowRedditForum
