Callbacks
Training Callbacks
Callbacks that add functionality during the training phase, including callbacks that make decisions depending on how a monitored metric/loss behaves.
ShortEpochCallback
ShortEpochCallback(pct=0.01, short_valid=True)

Fit just `pct` of an epoch, then stop.
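For a quick sanity check of a training loop, the callback can be passed to `fit` like any other. A minimal sketch, using fastai's `synth_learner` test helper (any `Learner` works the same way):

```python
from fastai.test_utils import synth_learner
from fastai.callback.training import ShortEpochCallback

learn = synth_learner()
# Run only 1% of each training epoch (and, since short_valid defaults
# to True, a shortened validation pass as well), then stop.
learn.fit(1, cbs=ShortEpochCallback(pct=0.01))
```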
GradientAccumulation
GradientAccumulation(n_acc=32)

Accumulate gradients over `n_acc` samples before updating weights.
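This lets a memory-constrained setup approximate a larger batch size: gradients from successive mini-batches are summed, and the optimizer only steps once `n_acc` samples have been processed. A minimal sketch with the `synth_learner` helper:

```python
from fastai.test_utils import synth_learner
from fastai.callback.training import GradientAccumulation

learn = synth_learner()
# Sum gradients across mini-batches and step the optimizer only after
# 32 samples have been seen, approximating an effective batch size of 32.
learn.fit(1, cbs=GradientAccumulation(n_acc=32))
```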
EarlyStoppingCallback
EarlyStoppingCallback(monitor='valid_loss', comp=None, min_delta=0.0, patience=1, reset_on_fit=True)

A `TrackerCallback` that terminates training when the monitored quantity stops improving.
| | Type | Default | Details |
|---|---|---|---|
| monitor | str | valid_loss | value (usually loss or metric) being monitored. |
| comp | NoneType | None | numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric. |
| min_delta | float | 0.0 | minimum delta between the last monitor value and the best monitor value. |
| patience | int | 1 | number of epochs to wait while training has not improved the model. |
| reset_on_fit | bool | True | before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss). |
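For example, to abandon a long fit once the validation loss plateaus. A minimal sketch with `synth_learner` (on a real `Learner` you would often monitor a metric instead):

```python
from fastai.test_utils import synth_learner
from fastai.callback.tracker import EarlyStoppingCallback

learn = synth_learner()
# Stop training if valid_loss fails to improve by at least 0.01 for
# 3 consecutive epochs.
learn.fit(50, cbs=EarlyStoppingCallback(monitor='valid_loss',
                                        min_delta=0.01, patience=3))
```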
SaveModelCallback
SaveModelCallback(monitor='valid_loss', comp=None, min_delta=0.0, fname='model', every_epoch=False, at_end=False, with_opt=False, reset_on_fit=True)

A `TrackerCallback` that saves the best model during training and loads it at the end.
| | Type | Default | Details |
|---|---|---|---|
| monitor | str | valid_loss | value (usually loss or metric) being monitored. |
| comp | NoneType | None | numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric. |
| min_delta | float | 0.0 | minimum delta between the last monitor value and the best monitor value. |
| fname | str | model | model name to be used when saving model. |
| every_epoch | bool | False | if true, save model after every epoch; else save only when model is better than existing best. |
| at_end | bool | False | if true, save model when training ends; else load best model if there is only one saved model. |
| with_opt | bool | False | if true, save optimizer state (if any available) when saving model. |
| reset_on_fit | bool | True | before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss). |
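A typical use is checkpointing the best epoch of a fit. A minimal sketch; note the checkpoint lands under the learner's `path`/`model_dir` (by default `./models/`), which is an assumption about your working directory:

```python
from fastai.test_utils import synth_learner
from fastai.callback.tracker import SaveModelCallback

learn = synth_learner()
# Save to <learn.path>/models/best.pth whenever valid_loss improves,
# then reload the best weights once training finishes.
learn.fit(10, cbs=SaveModelCallback(monitor='valid_loss', fname='best'))
```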
ReduceLROnPlateau
ReduceLROnPlateau(monitor='valid_loss', comp=None, min_delta=0.0, patience=1, factor=10.0, min_lr=0, reset_on_fit=True)

A `TrackerCallback` that reduces the learning rate when a metric has stopped improving.
| | Type | Default | Details |
|---|---|---|---|
| monitor | str | valid_loss | value (usually loss or metric) being monitored. |
| comp | NoneType | None | numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric. |
| min_delta | float | 0.0 | minimum delta between the last monitor value and the best monitor value. |
| patience | int | 1 | number of epochs to wait while training has not improved the model. |
| factor | float | 10.0 | the denominator to divide the learning rate by when reducing it. |
| min_lr | int | 0 | the minimum learning rate allowed; the learning rate cannot be reduced below this minimum. |
| reset_on_fit | bool | True | before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss). |
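A sketch of the usual pattern, dropping the learning rate by `factor` whenever progress stalls:

```python
from fastai.test_utils import synth_learner
from fastai.callback.tracker import ReduceLROnPlateau

learn = synth_learner()
# If valid_loss has not improved for 2 epochs, divide the learning rate
# by 10 (factor), but never reduce it below 1e-6 (min_lr).
learn.fit(20, lr=1e-2,
          cbs=ReduceLROnPlateau(monitor='valid_loss', patience=2,
                                factor=10.0, min_lr=1e-6))
```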
Schedulers
Callback and helper functions to schedule hyper-parameters
ParamScheduler
ParamScheduler(scheds)

Schedule hyper-parameters according to `scheds`.

`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
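For instance, to anneal the learning rate over a whole fit with a single cosine schedule (a minimal sketch; `SchedCos` is described below):

```python
from fastai.test_utils import synth_learner
from fastai.callback.schedule import ParamScheduler, SchedCos

learn = synth_learner()
# Anneal lr from 1e-2 down to 1e-4 over the whole fit, following a
# cosine curve; each schedule is called with training progress in [0, 1].
learn.fit(5, cbs=ParamScheduler({'lr': SchedCos(1e-2, 1e-4)}))
```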
SchedCos
SchedCos(start, end)

Cosine schedule function from `start` to `end`.
SchedExp
SchedExp(start, end)

Exponential schedule function from `start` to `end`.
SchedLin
SchedLin(start, end)

Linear schedule function from `start` to `end`.
SchedNo
SchedNo(start, end)

Constant schedule function with `start` value.
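Each of these returns a plain function of training progress `pos` in `[0, 1]`, so schedules can be inspected directly or chained with fastai's `combine_scheds` helper. A small sketch (the printed values assume the standard fastai annealing formulas; fastai may return them as 0-dim tensors):

```python
from fastai.callback.schedule import SchedLin, SchedCos, combine_scheds

# A schedule is just a function of training progress in [0, 1].
lin, cos = SchedLin(0.0, 1.0), SchedCos(0.0, 1.0)
print(lin(0.25), lin(0.5))  # 0.25 0.5 -- a straight line
print(cos(0.5))             # 0.5 -- the cosine also crosses the midpoint

# combine_scheds chains schedules: a linear warm-up over the first 25%
# of training, then a cosine decay over the remaining 75%.
sched = combine_scheds([0.25, 0.75],
                       [SchedLin(1e-4, 1e-2), SchedCos(1e-2, 1e-5)])
print(sched(0.0), sched(0.25), sched(1.0))  # ~1e-4, ~1e-2, ~1e-5
```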