Batch Normalization - TensorFlow

Date: 2017-01-17 17:57:42

Tags: tensorflow

I have looked at a few examples of BN but am still a bit confused, so I am currently using the function below, which calls the batch_norm documented here:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md

from tensorflow.contrib.layers.python.layers import batch_norm as batch_norm
import tensorflow as tf

def bn(x, is_training, name):
    bn_train = batch_norm(x, decay=0.9, center=True, scale=True,
                          updates_collections=None,
                          is_training=True,
                          reuse=None,
                          trainable=True,
                          scope=name)
    bn_inference = batch_norm(x, decay=1.00, center=True, scale=True,
                              updates_collections=None,
                              is_training=False,
                              reuse=True,
                              trainable=False,
                              scope=name)
    z = tf.cond(is_training, lambda: bn_train, lambda: bn_inference)
    return z

The section below is a toy run where I just check whether the function reuses the means and variances computed during the training step for the two features. When I run this part of the code in test mode (is_training=False), the running means/variances computed during the training step keep changing, which can be seen when we print out the BN variables returned by calling bnParams.

if __name__ == "__main__":
    print("Example")

    import os
    import numpy as np
    import scipy.stats as stats
    np.set_printoptions(suppress=True,linewidth=200,precision=3)
    np.random.seed(1006)
    import pdb
    path = "batchNorm/"
    if not os.path.exists(path):
        os.mkdir(path)
    savePath = path + "bn.model"

    nFeats = 2
    X = tf.placeholder(tf.float32,[None,nFeats])
    is_training = tf.placeholder(tf.bool,name="is_training")
    Y = bn(X,is_training=is_training,name="bn")
    mvn = stats.multivariate_normal([0,100])
    bs = 4
    load = 0
    train = 1
    saver = tf.train.Saver()
    def bnCheck(batch, mu, std):
        # Manually normalize the batch with the stored moving mean/std for comparison.
        return (batch - mu)/(std + 0.001)
    with tf.Session() as sess:
        if load == 1:
            saver.restore(sess,savePath)
        else:
            tf.global_variables_initializer().run()
        #### TRAINING #####
        if train == 1:
            for i in xrange(100):
                x = mvn.rvs(bs)
                y = Y.eval(feed_dict={X:x, is_training.name: True})

        def bnParams():
            beta, gamma, mean, var = [v.eval() for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,scope="bn")]
            return beta, gamma, mean, var

        beta, gamma, mean, var = bnParams()
        #### TESTING #####
        for i in xrange(10):
            x = mvn.rvs(1).reshape(1,-1)
            check = bnCheck(x,mean,np.sqrt(var))
            y = Y.eval(feed_dict={X:x, is_training.name: False})
            print("x = {0}, y = {1}, check = {2}".format(x,y,check))
            beta, gamma, mean, var = bnParams()
            print("BN Params: Beta {0} Gamma {1} mean {2} var{3} \n".format(beta,gamma,mean,var))

        saver.save(sess,savePath)

The first three iterations of the test loop look like this;

x = [[  -1.782  100.941]], y = [[-1.843  1.388]], check = [[-1.842  1.387]]
BN Params: Beta [ 0.  0.] Gamma [ 1.  1.] mean [ -0.2   99.93] var[ 0.818  0.589] 

x = [[  -1.245  101.126]], y = [[-1.156  1.557]], check = [[-1.155  1.557]]
BN Params: Beta [ 0.  0.] Gamma [ 1.  1.] mean [  -0.304  100.05 ] var[ 0.736  0.53 ] 

x = [[ -0.107  99.349]], y = [[ 0.23  -0.961]], check = [[ 0.23 -0.96]]
BN Params: Beta [ 0.  0.] Gamma [ 1.  1.] mean [ -0.285  99.98 ] var[ 0.662  0.477] 

I am not doing backprop, so beta and gamma do not change. But my running means/variances are changing. Where am I going wrong?

Edit: It would also be good to know why these variables do or do not need to be changed between test and training;

updates_collections, reuse, trainable

2 Answers:

Answer 0 (score: 3):

Your bn function is wrong. Use this instead:

def bn(x, is_training, name):
    return batch_norm(x, decay=0.9, center=True, scale=True,
                      updates_collections=None,
                      is_training=is_training,
                      reuse=None,
                      trainable=True,
                      scope=name)

is_training is a 0-D bool tensor that signals whether to update the running means etc. By changing the value you feed to the is_training tensor, you signal whether you are in the training or the test phase.
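Concretely, each time the training branch runs, the stored moving statistics are pulled toward the current batch statistics. Here is an illustrative NumPy sketch of that update, not the library's internal code, and the function and variable names are made up:

import numpy as np

def update_moving_stats(moving_mean, moving_var, x_batch, decay=0.9):
    # The moving statistics drift toward the batch statistics at a rate
    # controlled by decay; with decay=0.9 each new batch contributes 10%.
    batch_mean = x_batch.mean(axis=0)
    batch_var = x_batch.var(axis=0)
    moving_mean = decay * moving_mean + (1 - decay) * batch_mean
    moving_var = decay * moving_var + (1 - decay) * batch_var
    return moving_mean, moving_var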

Edit: many operations in TensorFlow accept tensors rather than constant True/False arguments.
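So with the corrected bn function above, the same graph serves both phases and you simply feed an ordinary Python boolean into the placeholder at run time. A minimal sketch reusing X, is_training and Y from the question (x_train_batch and x_test_batch stand in for whatever data you feed):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Training step: batch statistics are used and the moving averages are updated.
    sess.run(Y, feed_dict={X: x_train_batch, is_training: True})
    # Test step: the stored moving averages are used and left unchanged.
    sess.run(Y, feed_dict={X: x_test_batch, is_training: False})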

Answer 1 (score: 0):

When using slim.batch_norm, be sure to use slim.learning.create_train_op instead of tf.train.GradientDescentOptimizer(lr).minimize(loss) or another optimizer. Try it and see if it works!
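A minimal sketch of that pattern, assuming loss and learning_rate are already defined elsewhere:

import tensorflow.contrib.slim as slim

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
# With slim.batch_norm's default updates_collections, create_train_op also runs
# the moving-average update ops, which a bare optimizer.minimize(loss) would skip.
train_op = slim.learning.create_train_op(loss, optimizer)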