TensorFlow-Slim (TF-Slim) is a library open-sourced in 2016 whose goal is "code slimming": it makes model definition concise and ships reference implementations of several image-analysis models. TF-Slim is a lightweight library for defining, training and evaluating complex models in TensorFlow. [tensorflow/contrib/slim]

Importing the module:

```python
import tensorflow.contrib.slim as slim
```

<h2>1. TF-Slim Features</h2>

TF-Slim supports building, training and evaluating neural networks:

- It allows models to be defined much more compactly, eliminating boilerplate code via argument scoping, high-level layers and variable helpers.
- It simplifies model development by providing commonly used regularizers.
- Several widely used computer-vision models (e.g., VGG, AlexNet) are provided and can be used as black boxes or extended.
- It makes it easy to extend complex models and to warm-start training from pieces of pre-existing checkpoints.
<h2>2. TF-Slim Components</h2>

TF-Slim consists of several independently usable parts, mainly:

- arg_scope: a new scope type that lets the user set default arguments for specific operations within the scope
- data: dataset definitions, data providers, parallel readers and decoding utilities
- evaluation: routines for evaluating models
- layers: high-level layers for building models
- learning: routines for training models
- losses: commonly used loss functions
- metrics: popular evaluation metrics
- nets: popular network definitions such as VGG and AlexNet
- regularizers: weight regularizers
- variables: convenience wrappers for creating and manipulating variables
<h2>3. Defining Models with TF-Slim</h2>

TF-Slim defines models by combining variables, layers and scopes.

<h3>3.1 Variables</h3>

Creating a raw TensorFlow Variable requires either a predefined value or an initialization mechanism (e.g., random sampling from a Gaussian). TF-Slim provides a lighter-weight wrapper for variable creation in variables.py. For example, creating a weights variable, initializing it from a truncated normal distribution, regularizing it with an l2 loss and placing it on the CPU:

```python
weights = slim.variable('weights',
                        shape=[10, 10, 3, 3],
                        initializer=tf.truncated_normal_initializer(stddev=0.1),
                        regularizer=slim.l2_regularizer(0.05),
                        device='/CPU:0')
```

Native TensorFlow distinguishes two kinds of variables: regular variables and local (transient) variables. TF-Slim goes further by defining model variables: the variables that represent the parameters of a model. Non-model variables are all the other variables used during learning or evaluation but not required for actual inference. With TF-Slim, both model variables and regular variables are easy to create and retrieve:

```python
# Model variables
weights = slim.model_variable('weights',
                              shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=slim.l2_regularizer(0.05),
                              device='/CPU:0')
model_variables = slim.get_model_variables()

# Regular variables
my_var = slim.variable('my_var',
                       shape=[20, 1],
                       initializer=tf.zeros_initializer())
regular_variables_and_model_variables = slim.get_variables()
```

How does this work? When a model variable is created via TF-Slim's layers or via slim.model_variable, TF-Slim adds it to a dedicated model-variable collection. What if you have a custom layer or variable-creation routine but still want TF-Slim to manage the model variables? TF-Slim provides a convenience function for registering them:

```python
my_model_variable = CreateViaCustomCode()

# Letting TF-Slim know about the additional variable.
slim.add_model_variable(my_model_variable)
```

<h3>3.2 Layers</h3>

TensorFlow Ops are very broad, while neural-network developers think of models in higher-level concepts such as layers, losses, metrics and networks. A layer (e.g., a conv layer, a fully-connected layer, or a batch-norm layer) is more abstract than a single TensorFlow Op and typically involves several Ops.
Implementing a conv layer with raw TensorFlow is rather verbose:

```python
input = ...
with tf.name_scope('conv1_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128],
                                             dtype=tf.float32,
                                             stddev=1e-1),
                         name='weights')
    conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)
```

To eliminate this duplication, TF-Slim provides convenient Ops defined at the more abstract level of layers:

```python
input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')
```

TF-Slim provides standard implementations of many layers used to build networks, including:

- BiasAdd: slim.bias_add
- BatchNorm: slim.batch_norm
- Conv2d: slim.conv2d
- FullyConnected: slim.fully_connected
- AvgPool2D: slim.avg_pool2d
- Dropout: slim.dropout
- Flatten: slim.flatten
- MaxPool2D: slim.max_pool2d
- OneHotEncoding: slim.one_hot_encoding
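The idea that a "layer" bundles several primitive Ops behind one call can be sketched in plain Python. Everything below is illustrative (stand-in functions on scalars, not real TF-Slim or TensorFlow code):

```python
# Conceptual sketch: a conv layer composes variable creation, the
# convolution, the bias add and the activation into a single callable,
# which is why one slim.conv2d call replaces many raw Ops.

def make_layer(*ops):
    """Compose primitive ops into a single layer function."""
    def layer(x):
        for op in ops:
            x = op(x)
        return x
    return layer

# Stand-ins for tf.nn.conv2d / tf.nn.bias_add / tf.nn.relu on a scalar.
conv = lambda x: x * 2
bias_add = lambda x: x + 1
relu = lambda x: max(x, 0)

conv_layer = make_layer(conv, bias_add, relu)
print(conv_layer(3))    # (3*2)+1 = 7, then relu -> 7
print(conv_layer(-4))   # (-4*2)+1 = -7, then relu -> 0
```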
TF-Slim also provides two meta-operations, slim.repeat and slim.stack, which let you apply the same operation repeatedly. Consider this snippet from VGG, where several identical conv layers sit between pooling layers:

```python
net = ...
net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
```

The duplication can be reduced with a loop:

```python
net = ...
for i in range(3):
    net = slim.conv2d(net, 256, [3, 3], scope='conv3_%d' % (i+1))
net = slim.max_pool2d(net, [2, 2], scope='pool2')
```

Or, more cleanly, with TF-Slim's repeat operation:

```python
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
```

Note that slim.repeat not only applies the same arguments to every call, it also unrolls the scopes: each iteration gets its own suffix, yielding conv3/conv3_1, conv3/conv3_2 and conv3/conv3_3.
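The scope-unrolling behavior of repeat can be checked with a pure-Python sketch (names and the fake layer function are illustrative, not TF-Slim internals):

```python
# Sketch of what slim.repeat does: apply the same layer function n times,
# threading the output forward and numbering scopes 'conv3/conv3_1', ...

def repeat(net, n, layer_fn, *args, scope):
    for i in range(n):
        net = layer_fn(net, *args, scope='%s/%s_%d' % (scope, scope, i + 1))
    return net

calls = []
def fake_conv2d(net, depth, kernel, scope):
    calls.append((depth, tuple(kernel), scope))
    return net + 1   # stand-in for "apply a layer"

out = repeat(0, 3, fake_conv2d, 256, [3, 3], scope='conv3')
print(out)       # 3 layers applied
print(calls[0])  # (256, (3, 3), 'conv3/conv3_1')
```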
While slim.repeat applies the same operation with identical arguments, TF-Slim's slim.stack applies the same operation with different arguments, building a stack or tower of layers:

```python
# Verbose way:
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')

# Equivalent, TF-Slim way using slim.stack:
slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')
```

In this example, slim.stack calls slim.fully_connected three times, feeding the output of each call into the next one and using a different number of hidden units each time. Similarly, for a stack of conv layers with varying kernels and depths:

```python
# Verbose way:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')

# Using stack:
slim.stack(x, slim.conv2d,
           [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])],
           scope='core')
```

<h3>3.3 Scopes</h3>

In addition to TensorFlow's scope types (name_scope and variable_scope), TF-Slim adds a new one: arg_scope. An arg_scope specifies one or more Ops together with a set of arguments that will be passed to every one of those Ops inside the scope. For example:

```python
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv1')
net = slim.conv2d(net, 128, [11, 11], padding='VALID',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv2')
net = slim.conv2d(net, 256, [11, 11], padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv3')
```

The three conv layers share many of the same hyperparameters: two use the same padding, and all three use the same weights_initializer and weights_regularizer. One way to reduce the duplication is to hoist the shared values into variables:

```python
padding = 'SAME'
initializer = tf.truncated_normal_initializer(stddev=0.01)
regularizer = slim.l2_regularizer(0.0005)
net = slim.conv2d(inputs, 64, [11, 11], 4,
                  padding=padding,
                  weights_initializer=initializer,
                  weights_regularizer=regularizer,
                  scope='conv1')
net = slim.conv2d(net, 128, [11, 11],
                  padding='VALID',
                  weights_initializer=initializer,
                  weights_regularizer=regularizer,
                  scope='conv2')
net = slim.conv2d(net, 256, [11, 11],
                  padding=padding,
                  weights_initializer=initializer,
                  weights_regularizer=regularizer,
                  scope='conv3')
```

This guarantees that the three conv layers share the same parameter values, but it does not remove all the redundancy.
Using an arg_scope ensures that each layer receives the same default values while keeping the code simple and clean:

```python
with slim.arg_scope([slim.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(inputs, 64, [11, 11], scope='conv1')
    net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
    net = slim.conv2d(net, 256, [11, 11], scope='conv3')
```

Note that 'conv2' locally overrides the scope's default padding with 'VALID'; any default set by the arg_scope can be overridden at the call site.
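The mechanics of arg_scope (push defaults for a set of functions, merge them with call-site kwargs, pop them on exit) can be sketched with a plain context manager. This is an illustrative sketch of the idea, not TF-Slim's actual implementation:

```python
# Minimal arg_scope-like context manager: defaults are registered for a
# set of function names, each call merges defaults with its own kwargs
# (explicit kwargs win), and defaults are restored when the scope exits.

import contextlib

_defaults = {}

@contextlib.contextmanager
def arg_scope(funcs, **kwargs):
    saved = {f: dict(_defaults.get(f, {})) for f in funcs}
    for f in funcs:
        merged = dict(_defaults.get(f, {}))
        merged.update(kwargs)
        _defaults[f] = merged
    try:
        yield
    finally:
        for f in funcs:
            _defaults[f] = saved[f]

def conv2d(inputs, depth, **kwargs):
    merged = dict(_defaults.get('conv2d', {}))
    merged.update(kwargs)           # call-site kwargs override the scope
    return merged

with arg_scope(['conv2d'], padding='SAME', stddev=0.01):
    a = conv2d('x', 64)
    b = conv2d('x', 128, padding='VALID')   # local override, like 'conv2'

c = conv2d('x', 64)   # outside the scope: defaults are gone again
print(a['padding'], b['padding'], 'padding' in c)   # SAME VALID False
```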
arg_scopes can also be nested, and a single scope can cover several operations:

```python
with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):
        net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
        net = slim.conv2d(net, 256, [5, 5],
                          weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
                          scope='conv2')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc')
```

In this example, the first arg_scope applies the same activation_fn, weights_initializer and weights_regularizer to both the conv2d and fully_connected layers; the second, nested arg_scope adds default stride and padding arguments to conv2d only. Individual calls can still override any default, as 'conv1' and 'conv2' do.

<h3>3.4 Example: the VGG16 Network</h3>

Combining TF-Slim variables, operations and scopes, the VGG16 network can be defined compactly:

```python
def vgg16(inputs):
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                        weights_regularizer=slim.l2_regularizer(0.0005)):
        net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
        net = slim.max_pool2d(net, [2, 2], scope='pool2')
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
        net = slim.max_pool2d(net, [2, 2], scope='pool3')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
        net = slim.max_pool2d(net, [2, 2], scope='pool4')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
        net = slim.max_pool2d(net, [2, 2], scope='pool5')
        net = slim.fully_connected(net, 4096, scope='fc6')
        net = slim.dropout(net, 0.5, scope='dropout6')
        net = slim.fully_connected(net, 4096, scope='fc7')
        net = slim.dropout(net, 0.5, scope='dropout7')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
    return net
```

<h2>4. Training Models</h2>

Training a TensorFlow model requires a model, a loss function, gradient computation, and a training routine that iteratively computes the gradients of the loss with respect to the model weights and updates the weights accordingly. TF-Slim provides common loss functions as well as helper functions that run the training and evaluation routines.

<h3>4.1 Losses</h3>

The loss function defines a quantity to minimize. Some models, such as multi-task learning models, use several losses simultaneously; in other words, the loss function that is ultimately minimized is the sum of the individual losses. TF-Slim's losses module provides an easy-to-use mechanism for defining and keeping track of losses.
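The bookkeeping behind this mechanism can be pictured as a simple registry: every loss created through the losses module is recorded in a collection, and the total loss is the sum of the recorded losses plus (optionally) the regularization losses. A pure-Python sketch of that idea, with illustrative names and made-up loss values chosen to be exact binary floats:

```python
# Sketch of TF-Slim style loss bookkeeping: user losses and
# regularization losses live in collections, and get_total_loss sums
# them. Illustrative only, not TF-Slim code.

_losses = []
_reg_losses = []

def add_loss(value):
    """Register a loss, like slim.losses.add_loss."""
    _losses.append(value)

def get_total_loss(add_regularization_losses=True):
    total = sum(_losses)
    if add_regularization_losses:
        total += sum(_reg_losses)
    return total

add_loss(0.5)             # e.g. a classification loss
add_loss(0.25)            # e.g. a sum-of-squares loss
add_loss(0.125)           # e.g. a custom loss registered manually
_reg_losses.append(0.0625)  # weight regularization

print(get_total_loss(add_regularization_losses=False))  # 0.875
print(get_total_loss())                                 # 0.9375
```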
For example, to train the VGG network:

```python
import tensorflow as tf
import tensorflow.contrib.slim.nets as nets

vgg = nets.vgg

# Load the images and labels.
images, labels = ...

# Create the model.
predictions, _ = vgg.vgg_16(images)

# Define the loss functions and get the total loss.
loss = slim.losses.softmax_cross_entropy(predictions, labels)
```

Here the model is created (using TF-Slim's VGG implementation) and a standard classification loss is added. Now consider a multi-task model with several outputs:

```python
# Load the images and labels.
images, scene_labels, depth_labels = ...

# Create the model.
scene_predictions, depth_predictions = CreateMultiTaskModel(images)

# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)

# The following two lines have the same effect:
total_loss = classification_loss + sum_of_squares_loss
total_loss = slim.losses.get_total_loss(add_regularization_losses=False)
```

This example computes two losses, via slim.losses.softmax_cross_entropy and slim.losses.sum_of_squares. The total loss can be obtained either by adding the two directly or by calling slim.losses.get_total_loss: whenever a loss is created through TF-Slim, it is added to a special loss collection, which is how get_total_loss knows which losses to sum.

What if you have a custom loss function and still want TF-Slim to manage it? The losses module provides a function for adding a loss to the collection manually:

```python
# Load the images and labels.
images, scene_labels, depth_labels, pose_labels = ...

# Create the model.
scene_predictions, depth_predictions, pose_predictions = CreateMultiTaskModel(images)

# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)
pose_loss = MyCustomLossFunction(pose_predictions, pose_labels)
slim.losses.add_loss(pose_loss)  # Letting TF-Slim know about the additional loss.

# The following two ways to compute the total loss are equivalent:
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = classification_loss + sum_of_squares_loss + pose_loss + regularization_loss

# (Regularization loss is included in the total loss by default.)
total_loss2 = slim.losses.get_total_loss()
```

<h3>4.2 Training Loop</h3>

TF-Slim provides a simple yet powerful set of model-training tools in learning.py.
Once the model, the loss function and the optimization scheme are defined, slim.learning.train performs the optimization:

```python
g = tf.Graph()

# Create the model and specify the losses...
...

total_loss = slim.losses.get_total_loss()
optimizer = tf.train.GradientDescentOptimizer(learning_rate)

# create_train_op ensures that each time we ask for the loss, the update_ops
# are run and the gradients being computed are applied too.
train_op = slim.learning.create_train_op(total_loss, optimizer)

logdir = ...  # Where checkpoints are stored.

slim.learning.train(
    train_op,
    logdir,
    number_of_steps=1000,
    save_summaries_secs=300,
    save_interval_secs=600)
```

In this example, slim.learning.train is given the train_op, which both computes the loss and applies the gradient step, and logdir, where checkpoints and event files are written. number_of_steps limits the training to 1000 gradient steps; summaries are saved every 300 seconds (save_summaries_secs) and a model checkpoint every 600 seconds (save_interval_secs).
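The control flow of such a training loop can be sketched in pure Python. Real elapsed time is replaced by step counts here, and all names are illustrative rather than TF-Slim internals:

```python
# Sketch of a slim.learning.train-style loop: repeatedly run the train
# step, periodically record summaries and checkpoints, stop after
# number_of_steps.

def train(train_step_fn, number_of_steps,
          save_summaries_every, save_checkpoint_every):
    events = []
    loss = None
    for step in range(1, number_of_steps + 1):
        loss = train_step_fn(step)
        if step % save_summaries_every == 0:
            events.append(('summary', step))
        if step % save_checkpoint_every == 0:
            events.append(('checkpoint', step))
    return loss, events

# A fake train step whose loss shrinks as 1/step.
final_loss, events = train(lambda step: 1.0 / step,
                           number_of_steps=10,
                           save_summaries_every=3,
                           save_checkpoint_every=5)
print(final_loss)   # 0.1
print(events)       # summaries at steps 3, 6, 9; checkpoints at 5, 10
```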
<h3>4.3 Example: Training the VGG Model</h3>

```python
import tensorflow as tf
import tensorflow.contrib.slim.nets as nets

slim = tf.contrib.slim
vgg = nets.vgg

...

train_log_dir = ...
if not tf.gfile.Exists(train_log_dir):
    tf.gfile.MakeDirs(train_log_dir)

with tf.Graph().as_default():
    # Set up the data loading:
    images, labels = ...

    # Define the model:
    predictions = vgg.vgg_16(images, is_training=True)

    # Specify the loss function:
    slim.losses.softmax_cross_entropy(predictions, labels)

    total_loss = slim.losses.get_total_loss()
    tf.summary.scalar('losses/total_loss', total_loss)

    # Specify the optimization scheme:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)

    # create_train_op ensures that when we evaluate it to get the loss,
    # the update_ops are done and the gradient updates are computed.
    train_tensor = slim.learning.create_train_op(total_loss, optimizer)

    # Actually runs training.
    slim.learning.train(train_tensor, train_log_dir)
```

<h2>5. Fine-Tuning Models</h2>

<h3>5.1 Brief Recap: Restoring Variables from a Checkpoint</h3>

After a model has been trained, its variables can be restored from a checkpoint file with tf.train.Saver:

```python
# Create some variables.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...

# Add ops to restore all the variables.
restorer = tf.train.Saver()

# Add ops to restore some variables.
restorer = tf.train.Saver([v1, v2])

# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")
    # Do some work with the model
    ...
```

See Restoring Variables and Choosing which Variables to Save and Restore for more details.

<h3>5.2 Partially Restoring Models</h3>

When working with a new dataset or a new task, it is common to fine-tune a pre-trained model, restoring only a subset of its variables. TF-Slim provides several convenient selection functions:

```python
# Create some variables.
v1 = slim.variable(name="v1", ...)
v2 = slim.variable(name="nested/v2", ...)
...

# Get list of variables to restore (which contains only 'v2'). These are all
# equivalent methods:
variables_to_restore = slim.get_variables_by_name("v2")
# or
variables_to_restore = slim.get_variables_by_suffix("2")
# or
variables_to_restore = slim.get_variables(scope="nested")
# or
variables_to_restore = slim.get_variables_to_restore(include=["nested"])
# or
variables_to_restore = slim.get_variables_to_restore(exclude=["v1"])

# Create the saver which will be used to restore the variables.
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")
    # Do some work with the model
    ...
```

<h3>5.3 Restoring Models with Different Variable Names</h3>

When restoring variables from a checkpoint, the Saver locates the variable values in the checkpoint file by name and maps them onto variables in the current graph. Above, the saver was created by passing it a list of variables; in that case, the checkpoint name of each variable is implicitly taken from its var.op.name. This works well when the names in the graph and in the checkpoint match. When they do not, the Saver must be given a dictionary that maps each checkpoint name to the corresponding graph variable. For example:

```python
# Assuming that 'conv1/weights' should be restored from 'vgg16/conv1/weights'
def name_in_checkpoint(var):
    return 'vgg16/' + var.op.name

# Assuming that 'conv1/weights' and 'conv1/bias' should be restored from
# 'conv1/params1' and 'conv1/params2'
def name_in_checkpoint(var):
    if "weights" in var.op.name:
        return var.op.name.replace("weights", "params1")
    if "bias" in var.op.name:
        return var.op.name.replace("bias", "params2")

variables_to_restore = slim.get_model_variables()
variables_to_restore = {name_in_checkpoint(var): var for var in variables_to_restore}
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
```

<h3>5.4 Fine-Tuning a Model on a Different Task</h3>

Suppose we have a pre-trained VGG16 model, trained on ImageNet as a 1000-class classifier, and we want to apply it to the 20-class Pascal VOC task. In this case, the pre-trained model (excluding the final layers) can be used to initialize the new training:

```python
# Load the Pascal VOC data
image, label = MyPascalVocDataLoader(...)
images, labels = tf.train.batch([image, label], batch_size=32)

# Create the model
predictions = vgg.vgg_16(images)

train_op = slim.learning.create_train_op(...)

# Specify where the Model, trained on ImageNet, was saved.
model_path = '/path/to/pre_trained_on_imagenet.checkpoint'

# Specify where the new model will live:
log_dir = '/path/to/my_pascal_model_dir/'

# Restore only the convolutional layers:
variables_to_restore = slim.get_variables_to_restore(exclude=['fc6', 'fc7', 'fc8'])
init_fn = slim.assign_from_checkpoint_fn(model_path, variables_to_restore)

# Start training.
slim.learning.train(train_op, log_dir, init_fn=init_fn)
```

<h2>6. Evaluating Models</h2>

Once a model has been trained (or even while training is in progress), we usually want to evaluate how well it actually performs.

<h3>6.1 Metrics</h3>

A metric is a performance measure that is not a loss function (losses are optimized directly during training) but is still of interest for evaluating the model. TF-Slim provides a set of metric Ops that make model evaluation easy.
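Conceptually, each streaming metric keeps internal state (e.g., a running total and a count) that is folded in one batch at a time, and the current metric value can be read out at any point. A pure-Python sketch of a streaming mean absolute error (illustrative only, not TF-Slim internals):

```python
# Sketch of a TF-Slim style streaming metric: an "update" step aggregates
# one batch into internal state, and an idempotent "value" step reads out
# the current metric.

class StreamingMeanAbsoluteError:
    def __init__(self):
        self.total = 0.0   # initialization
        self.count = 0

    def update(self, predictions, labels):
        """Aggregation: fold one batch into the running state."""
        for p, y in zip(predictions, labels):
            self.total += abs(p - y)
            self.count += 1
        return self.value()

    def value(self):
        """Finalization: idempotent read of the current metric value."""
        return self.total / self.count if self.count else 0.0

mae = StreamingMeanAbsoluteError()
mae.update([1.0, 2.0], [1.5, 2.0])   # batch 1: errors 0.5, 0.0
mae.update([4.0], [2.0])             # batch 2: error 2.0
print(mae.value())                   # (0.5 + 0.0 + 2.0) / 3
```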
Computing a metric value is typically split into three parts: initialization, aggregation and finalization. For example, to compute mean_absolute_error, a count and a running total are first initialized to zero; during aggregation, the absolute differences between predictions and labels are summed into the total while the count is incremented; during finalization, the total is divided by the count. Each TF-Slim metric therefore returns two ops: a value_op, an idempotent op that returns the current value of the metric, and an update_op, which performs the aggregation step and returns the metric value. For example:

```python
images, labels = LoadTestData(...)
predictions = MyModel(images)

mae_value_op, mae_update_op = slim.metrics.streaming_mean_absolute_error(predictions, labels)
mre_value_op, mre_update_op = slim.metrics.streaming_mean_relative_error(predictions, labels)
pl_value_op, pl_update_op = slim.metrics.percentage_less(mean_relative_errors, 0.3)
```

Keeping track of many value/update op pairs by hand quickly becomes laborious, so TF-Slim provides two convenience functions, aggregate_metrics and aggregate_metric_map:

```python
# Aggregates the value and update ops in two lists:
value_ops, update_ops = slim.metrics.aggregate_metrics(
    slim.metrics.streaming_mean_absolute_error(predictions, labels),
    slim.metrics.streaming_mean_squared_error(predictions, labels))

# Aggregates the value and update ops in two dictionaries:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    "eval/mean_absolute_error": slim.metrics.streaming_mean_absolute_error(predictions, labels),
    "eval/mean_squared_error": slim.metrics.streaming_mean_squared_error(predictions, labels),
})
```

<h3>6.2 Example: Tracking Multiple Metrics</h3>

```python
import tensorflow as tf
import tensorflow.contrib.slim.nets as nets

slim = tf.contrib.slim
vgg = nets.vgg

# Load the data
images, labels = load_data(...)

# Define the network
predictions = vgg.vgg_16(images)

# Choose the metrics to compute:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    "eval/mean_absolute_error": slim.metrics.streaming_mean_absolute_error(predictions, labels),
    "eval/mean_squared_error": slim.metrics.streaming_mean_squared_error(predictions, labels),
})

# Evaluate the model using 1000 batches of data:
num_batches = 1000

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())

    for batch_id in range(num_batches):
        sess.run(names_to_updates.values())

    metric_values = sess.run(names_to_values.values())
    for metric, value in zip(names_to_values.keys(), metric_values):
        print('Metric %s has value: %f' % (metric, value))
```

Note that metric_ops.py can be used in isolation, without layers or loss_ops.py.

<h3>6.3 Evaluation Loop</h3>

TF-Slim provides an evaluation module, evaluation.py, which contains helper functions for writing evaluation scripts using the metrics from metric_ops.py.
These helpers include running evaluations periodically, computing metrics over batches of data, and printing and summarizing the metric results. For example:

```python
import math
import tensorflow as tf

slim = tf.contrib.slim

# Load the data
images, labels = load_data(...)

# Define the network
predictions = MyModel(images)

# Choose the metrics to compute:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    'accuracy': slim.metrics.accuracy(predictions, labels),
    'precision': slim.metrics.precision(predictions, labels),
    'recall': slim.metrics.recall(mean_relative_errors, 0.3),
})

# Create the summary ops such that they also print out to std output:
summary_ops = []
for metric_name, metric_value in names_to_values.items():
    op = tf.summary.scalar(metric_name, metric_value)
    op = tf.Print(op, [metric_value], metric_name)
    summary_ops.append(op)

num_examples = 10000
batch_size = 32
num_batches = math.ceil(num_examples / float(batch_size))

# Set up the global step.
slim.get_or_create_global_step()

output_dir = ...  # Where the summaries are stored.
eval_interval_secs = ...  # How often to run the evaluation.
slim.evaluation.evaluation_loop(
    'local',
    checkpoint_dir,
    log_dir,
    num_evals=num_batches,
    eval_op=names_to_updates.values(),
    summary_op=tf.summary.merge(summary_ops),
    eval_interval_secs=eval_interval_secs)
```
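The control flow of the evaluation loop can be sketched in pure Python: wake up periodically, run num_evals batches of metric updates against the latest checkpoint, then write a summary. Real elapsed time is replaced by an iteration count, and all names are illustrative:

```python
# Sketch of evaluation_loop-style control flow: each evaluation pass
# runs num_evals metric-update steps, then records one summary.

def evaluation_loop(run_eval, num_evals, max_number_of_evaluations):
    reports = []
    for evaluation in range(max_number_of_evaluations):
        for _ in range(num_evals):   # one metric update per batch
            run_eval()
        reports.append(('summary_written', evaluation))
    return reports

updates = []
reports = evaluation_loop(lambda: updates.append(1),
                          num_evals=4,
                          max_number_of_evaluations=2)
print(len(updates))  # 4 batches x 2 evaluations = 8 update steps
print(reports)       # one summary per evaluation pass
```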