# Callbacks

A callback is a set of functions applied at given stages of the training procedure. You can use callbacks to get a view of the internal states and statistics of the network during training. By passing a list of callbacks to the model's `.fit()` method, the relevant callbacks will be invoked at each stage of training.

[Tips] Although we refer to them as callback "functions", Keras callbacks are in fact classes; "callback function" is simply the customary name.

## CallbackList

```python
keras.callbacks.CallbackList(callbacks=[], queue_length=10)
```

## Callback

```python
keras.callbacks.Callback()
```

This is the abstract base class of callbacks; a new callback must inherit from this class.

Class attributes:

- params: a dict of training parameters (such as verbosity, batch size, number of epochs)
- model: a reference to the `keras.models.Model` being trained
The callback methods take a dict `logs` as an argument; it contains information relevant to the current batch or epoch. Currently, the model's `.fit()` method passes the following quantities in `logs` to its callbacks:

- on_epoch_end: logs include `acc` and `loss`, and optionally `val_loss` (if validation is enabled in `fit`) and `val_acc` (if validation and accuracy monitoring are enabled)
- on_batch_begin: logs include `size`, the number of samples in the current batch
- on_batch_end: logs include `loss`, and optionally `acc` (if accuracy monitoring is enabled)
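As a minimal, self-contained sketch of the mechanism described above (the `LogPrinter` name, the toy data, and the tiny model are all illustrative assumptions, not part of the Keras API):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import Callback

class LogPrinter(Callback):
    """Illustrative callback: print the logs dict received at the end of each epoch."""
    def on_epoch_end(self, epoch, logs={}):
        print(epoch, logs)

# Toy data and model, just to have something to fit.
X_train = np.random.random((100, 784))
Y_train = np.random.random((100, 1))

model = Sequential()
model.add(Dense(1, input_dim=784))
model.compile(loss='mse', optimizer='rmsprop')

# Passing a list of callbacks to .fit(): LogPrinter is invoked at every epoch end.
model.fit(X_train, Y_train, nb_epoch=3, verbose=0, callbacks=[LogPrinter()])
```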
## BaseLogger

```python
keras.callbacks.BaseLogger()
```

This callback accumulates epoch averages of the monitored metrics over each epoch. It is automatically applied to every Keras model.

## ProgbarLogger

```python
keras.callbacks.ProgbarLogger()
```

This callback prints the monitored metrics to standard output.

## History

```python
keras.callbacks.History()
```

This callback is automatically applied to every Keras model; the `History` object is the return value of the model's `fit` method.

## ModelCheckpoint

```python
keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)
```

This callback saves the model to `filepath` after every epoch.
`filepath` can be a formatted string whose placeholders are filled with the epoch number and the keys of the `logs` dict passed to `on_epoch_end`. For example, if `filepath` is `weights.{epoch:02d}-{val_loss:.2f}.hdf5`, a separate file is saved for each epoch together with its validation loss.

Parameters:

- filepath: string, path to save the model file
- monitor: quantity to monitor
- verbose: verbosity mode, 0 or 1
- save_best_only: if True, only the model judged best on the monitored quantity is saved
- save_weights_only: if True, only the model's weights are saved; otherwise the full model is saved
- mode: one of 'auto', 'min', 'max'; with save_best_only=True, decides whether an improvement means the monitored quantity decreasing (e.g. val_loss) or increasing (e.g. val_acc); in 'auto' mode the direction is inferred from the name of the monitored quantity
- period: interval (number of epochs) between checkpoints
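A short sketch of the formatted-filepath behavior described above; `model`, `X_train`, `Y_train`, `X_test`, and `Y_test` are assumed to be defined as in the other examples on this page:

```python
from keras.callbacks import ModelCheckpoint

# {epoch:02d} and {val_loss:.2f} are filled in from the epoch number and the
# logs passed to on_epoch_end, so each epoch gets its own weights file.
# val_loss is only available because validation data is supplied to fit().
checkpointer = ModelCheckpoint(filepath='weights.{epoch:02d}-{val_loss:.2f}.hdf5',
                               monitor='val_loss', verbose=1)
model.fit(X_train, Y_train, validation_data=(X_test, Y_test),
          callbacks=[checkpointer])
```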
## EarlyStopping

```python
keras.callbacks.EarlyStopping(monitor='val_loss', patience=0, verbose=0, mode='auto')
```

This callback stops training when the monitored quantity has stopped improving.

Parameters:

- monitor: quantity to monitor
- patience: number of epochs with no improvement after which training will be stopped
- verbose: verbosity mode
- mode: one of 'auto', 'min', 'max'; in 'min' mode training stops when the monitored quantity stops decreasing, in 'max' mode when it stops increasing, and in 'auto' mode the direction is inferred from the name of the monitored quantity
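For instance (a sketch; the model and data are assumed to be defined as in the other examples):

```python
from keras.callbacks import EarlyStopping

# Stop once val_loss has failed to improve for 3 consecutive epochs.
early_stopping = EarlyStopping(monitor='val_loss', patience=3, verbose=1)
model.fit(X_train, Y_train, validation_split=0.2, nb_epoch=100,
          callbacks=[early_stopping])
```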
## RemoteMonitor

```python
keras.callbacks.RemoteMonitor(root='http://localhost:9000')
```

This callback streams events to a server; it requires the `requests` library. Events are sent to `root + '/publish/epoch/end/'` by default, as an HTTP POST whose `data` argument is a JSON-encoded dict of the event data.

Parameters:

- root: root URL of the target server
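Usage is a one-liner; the sketch below assumes a server of your own is already listening at `localhost:9000` (Keras only sends the events, it does not provide the server):

```python
from keras.callbacks import RemoteMonitor

# Each epoch's logs dict is POSTed, JSON-encoded, to the server's
# /publish/epoch/end/ endpoint.
remote = RemoteMonitor(root='http://localhost:9000')
model.fit(X_train, Y_train, callbacks=[remote])
```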
## LearningRateScheduler

```python
keras.callbacks.LearningRateScheduler(schedule)
```

This callback is a learning rate scheduler.

Parameters:

- schedule: a function that takes the epoch index (an integer, counted from 0) as input and returns a new learning rate (float)
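For example, a step-decay schedule (the decay policy itself is an arbitrary illustration):

```python
from keras.callbacks import LearningRateScheduler

def step_decay(epoch):
    # Halve an initial learning rate of 0.01 every 10 epochs.
    return 0.01 * (0.5 ** (epoch // 10))

lr_scheduler = LearningRateScheduler(step_decay)
model.fit(X_train, Y_train, nb_epoch=50, callbacks=[lr_scheduler])
```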
## TensorBoard

```python
keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=0)
```

This callback is a visualization helper. TensorBoard is the visualization tool provided by TensorFlow; this callback writes a log for TensorBoard, which allows you to dynamically visualize graphs of your training and test metrics, as well as activation histograms for the different layers of your model.

If you have installed TensorFlow with pip, you can launch TensorBoard from the command line:

```
tensorboard --logdir=/full_path_to_your_logs
```

For further information, see the TensorBoard documentation.

Parameters:

- log_dir: the path of the directory where the log files to be parsed by TensorBoard are saved
- histogram_freq: frequency (in epochs) at which to compute activation histograms for the layers of the model; if set to 0, histograms will not be computed
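A minimal usage sketch (model and data as in the other examples; validation data is supplied so that validation metrics also appear in TensorBoard):

```python
from keras.callbacks import TensorBoard

# Logs are written under ./logs; histogram_freq=1 computes activation
# histograms every epoch.
tensorboard = TensorBoard(log_dir='./logs', histogram_freq=1)
model.fit(X_train, Y_train, validation_data=(X_test, Y_test),
          callbacks=[tensorboard])
```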
## ReduceLROnPlateau

```python
keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto', epsilon=0.0001, cooldown=0, min_lr=0)
```

This callback reduces the learning rate when the monitored metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and, if no improvement is seen for `patience` epochs, reduces the learning rate.

Parameters:

- monitor: quantity to be monitored
- factor: factor by which the learning rate will be reduced; new_lr = lr * factor
- patience: number of epochs with no improvement after which the learning rate will be reduced
- verbose: verbosity mode
- mode: one of 'auto', 'min', 'max'; in 'min' mode the learning rate is reduced when the monitored quantity stops decreasing, in 'max' mode when it stops increasing, and in 'auto' mode the direction is inferred from its name
- epsilon: threshold for measuring the new optimum; only changes larger than this count as an improvement
- cooldown: number of epochs to wait before resuming normal operation after the learning rate has been reduced
- min_lr: lower bound on the learning rate
Example:

```python
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=5, min_lr=0.001)
model.fit(X_train, Y_train, callbacks=[reduce_lr])
```

## CSVLogger

```python
keras.callbacks.CSVLogger(filename, separator=',', append=False)
```

This callback saves the per-epoch training results to a CSV file. It supports any value that can be represented as a string, including 1D iterables of numbers such as `np.ndarray`.

Parameters:

- filename: name of the CSV file, e.g. 'run/log.csv'
- separator: string used to separate elements in the CSV file
- append: if True, append to the file if it exists (useful for continuing training); if False, overwrite the existing file
Example:

```python
csv_logger = CSVLogger('training.log')
model.fit(X_train, Y_train, callbacks=[csv_logger])
```

## LambdaCallback

```python
keras.callbacks.LambdaCallback(on_epoch_begin=None, on_epoch_end=None, on_batch_begin=None, on_batch_end=None, on_train_begin=None, on_train_end=None)
```

This callback class is used for creating simple, custom callbacks on-the-fly. The anonymous functions you pass in will be called at the appropriate time. Note that the callbacks expect positional arguments:

- on_epoch_begin and on_epoch_end expect two positional arguments: epoch, logs
- on_batch_begin and on_batch_end expect two positional arguments: batch, logs
- on_train_begin and on_train_end expect one positional argument: logs

Parameters:

- on_epoch_begin: called at the beginning of every epoch
- on_epoch_end: called at the end of every epoch
- on_batch_begin: called at the beginning of every batch
- on_batch_end: called at the end of every batch
- on_train_begin: called at the beginning of model training
- on_train_end: called at the end of model training
Example:

```python
# Print the batch number at the beginning of every batch.
batch_print_callback = LambdaCallback(on_batch_begin=lambda batch, logs: print(batch))

# Plot the loss after every epoch.
import numpy as np
import matplotlib.pyplot as plt
plot_loss_callback = LambdaCallback(on_epoch_end=lambda epoch, logs: plt.plot(np.arange(epoch), logs['loss']))

# Terminate some processes after having finished model training.
processes = ...
cleanup_callback = LambdaCallback(on_train_end=lambda logs: [p.terminate() for p in processes if p.is_alive()])

model.fit(..., callbacks=[batch_print_callback, plot_loss_callback, cleanup_callback])
```

## Writing your own callback

You can create a custom callback by subclassing `keras.callbacks.Callback`; the callback has access to the model it is attached to through the class attribute `self.model`.

Here is a simple callback that saves the loss of every batch:

```python
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
```

### Example: recording the loss history

```python
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))

model = Sequential()
model.add(Dense(10, input_dim=784, init='uniform'))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

history = LossHistory()
model.fit(X_train, Y_train, batch_size=128, nb_epoch=20, verbose=0, callbacks=[history])

print(history.losses)
# outputs
'''
[0.66047596406559383, 0.3547245744908703, ..., 0.25953155204159617, 0.25901699725311789]
'''
```

### Example: model checkpoints

```python
from keras.callbacks import ModelCheckpoint

model = Sequential()
model.add(Dense(10, input_dim=784, init='uniform'))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

'''
saves the model weights after each epoch if the validation loss decreased
'''
checkpointer = ModelCheckpoint(filepath='/tmp/weights.hdf5', verbose=1, save_best_only=True)
model.fit(X_train, Y_train, batch_size=128, nb_epoch=20, verbose=0, validation_data=(X_test, Y_test), callbacks=[checkpointer])
```
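Beyond recording values, a custom callback can also act on the model through `self.model`. The sketch below (the `StopOnLoss` name and the threshold are illustrative) stops training early by setting `self.model.stop_training`, the same flag the built-in `EarlyStopping` callback uses:

```python
class StopOnLoss(keras.callbacks.Callback):
    """Stop training once the training loss drops below a threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs={}):
        loss = logs.get('loss')
        if loss is not None and loss < self.threshold:
            # The training loop checks this flag at the end of each epoch.
            self.model.stop_training = True

model.fit(X_train, Y_train, nb_epoch=20, callbacks=[StopOnLoss(0.3)])
```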