Binary classification with an MLP using TensorFlow

Date: 2016-10-02 14:23:09

Tags: machine-learning tensorflow deep-learning text-classification

I'm having some trouble setting up a multilayer perceptron for binary classification with TensorFlow.

I have a rather large dataset (about 1.5 * 10^6 examples), each example with a binary (0/1) label and 100 features. What I need to do is set up a simple MLP and then try varying the learning rate and the initialization scheme, recording the results (it's an assignment). However, I'm getting strange results: my MLP seems to get stuck very early at a low-but-not-great cost and never moves away from it, and unless the learning rate is kept fairly low, the cost goes to NaN almost immediately. I don't know whether the problem lies in how I built the MLP (I made several attempts; the code for the last one is posted below) or whether I'm missing something in my TensorFlow setup.

CODE

import tensorflow as tf
import numpy as np
import scipy.io

# Import and transform dataset
print("Importing dataset.")
dataset = scipy.io.mmread('tfidf_tsvd.mtx')

with open('labels.txt') as f:
    all_labels = f.readlines()

all_labels = np.asarray(all_labels, dtype=np.float64)
all_labels = all_labels.reshape((1498271,1))

# Split dataset into training (66%) and test (33%) set
training_set    = dataset[0:1000000]
training_labels = all_labels[0:1000000]
test_set        = dataset[1000000:1498272]
test_labels     = all_labels[1000000:1498272]

print("Dataset ready.") 

# Parameters
learning_rate   = 0.01 #argv
mini_batch_size = 100
training_epochs = 10000
display_step    = 500

# Network Parameters
n_hidden_1  = 64    # 1st hidden layer of neurons
n_hidden_2  = 32    # 2nd hidden layer of neurons
n_hidden_3  = 16    # 3rd hidden layer of neurons
n_input     = 100   # number of features after LSA

# Tensorflow Graph input
x = tf.placeholder(tf.float64, shape=[None, n_input], name="x-data")
y = tf.placeholder(tf.float64, shape=[None, 1], name="y-labels")

print("Creating model.")

# Create model
def multilayer_perceptron(x, weights):
    # First hidden layer with SIGMOID activation
    layer_1 = tf.matmul(x, weights['h1'])
    layer_1 = tf.nn.sigmoid(layer_1)
    # Second hidden layer with SIGMOID activation
    layer_2 = tf.matmul(layer_1, weights['h2'])
    layer_2 = tf.nn.sigmoid(layer_2)
    # Third hidden layer with SIGMOID activation
    layer_3 = tf.matmul(layer_2, weights['h3'])
    layer_3 = tf.nn.sigmoid(layer_3)
    # Output layer: no activation; note it is fed layer_2 rather than layer_3 (corrected later in the post)
    out_layer = tf.matmul(layer_2, weights['out'])
    return out_layer

# Layer weights, should change them to see results
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], dtype=np.float64)),       
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], dtype=np.float64)),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3],dtype=np.float64)),
    'out': tf.Variable(tf.random_normal([n_hidden_2, 1], dtype=np.float64))
}

# Construct model
pred = multilayer_perceptron(x, weights)

# Define loss and optimizer
cost = tf.nn.l2_loss(pred-y,name="squared_error_cost")
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.initialize_all_variables()

print("Model ready.")

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    print("Starting Training.")

    # Training cycle
    for epoch in range(training_epochs):
        #avg_cost = 0.
        # minibatch loading
        minibatch_x = training_set[mini_batch_size*epoch:mini_batch_size*(epoch+1)]
        minibatch_y = training_labels[mini_batch_size*epoch:mini_batch_size*(epoch+1)]
        # Run optimization op (backprop) and cost op
        _, c = sess.run([optimizer, cost], feed_dict={x: minibatch_x, y: minibatch_y})

        # Compute average loss
        avg_cost = c / (minibatch_x.shape[0])

        # Display logs per epoch
        if epoch % display_step == 0:
            print("Epoch:", '%05d' % (epoch), "Training error=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

    # Test model
    # Calculate accuracy
    test_error = tf.nn.l2_loss(pred-y,name="squared_error_test_cost")/test_set.shape[0]
    print("Test Error:", test_error.eval({x: test_set, y: test_labels}))

OUTPUT

python nn.py
Importing dataset.
Dataset ready.
Creating model.
Model ready.
Starting Training.
Epoch: 00000 Training error= 0.331874878
Epoch: 00500 Training error= 0.121587482
Epoch: 01000 Training error= 0.112870921
Epoch: 01500 Training error= 0.110293652
Epoch: 02000 Training error= 0.122655269
Epoch: 02500 Training error= 0.124971940
Epoch: 03000 Training error= 0.125407845
Epoch: 03500 Training error= 0.131942481
Epoch: 04000 Training error= 0.121696954
Epoch: 04500 Training error= 0.116669835
Epoch: 05000 Training error= 0.129558477
Epoch: 05500 Training error= 0.122952110
Epoch: 06000 Training error= 0.124655344
Epoch: 06500 Training error= 0.119827300
Epoch: 07000 Training error= 0.125183779
Epoch: 07500 Training error= 0.156429254
Epoch: 08000 Training error= 0.085632880
Epoch: 08500 Training error= 0.133913128
Epoch: 09000 Training error= 0.114762624
Epoch: 09500 Training error= 0.115107805
Optimization Finished!
Test Error: 0.116647016708

This is what MMN suggested:

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=0, dtype=np.float64)),     
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=0.01, dtype=np.float64)),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3],  stddev=0.01, dtype=np.float64)),
    'out': tf.Variable(tf.random_normal([n_hidden_2, 1], dtype=np.float64))
}

And this is the output:

Epoch: 00000 Training error= 0.107566668
Epoch: 00500 Training error= 0.289380907
Epoch: 01000 Training error= 0.339091784
Epoch: 01500 Training error= 0.358559815
Epoch: 02000 Training error= 0.122639698
Epoch: 02500 Training error= 0.125160135
Epoch: 03000 Training error= 0.126219718
Epoch: 03500 Training error= 0.132500418
Epoch: 04000 Training error= 0.121795254
Epoch: 04500 Training error= 0.116499476
Epoch: 05000 Training error= 0.124532673
Epoch: 05500 Training error= 0.124484790
Epoch: 06000 Training error= 0.118491177
Epoch: 06500 Training error= 0.119977633
Epoch: 07000 Training error= 0.127532511
Epoch: 07500 Training error= 0.159053519
Epoch: 08000 Training error= 0.083876224
Epoch: 08500 Training error= 0.131488483
Epoch: 09000 Training error= 0.123161189
Epoch: 09500 Training error= 0.125011362
Optimization Finished!
Test Error: 0.129284643093

Hooking up the third hidden layer, thanks to MMN

There was a mistake in my code: I effectively had two hidden layers instead of three, because the output layer was fed layer_2. I corrected it:

'out': tf.Variable(tf.random_normal([n_hidden_3, 1], dtype=np.float64))

out_layer = tf.matmul(layer_3, weights['out'])

I did revert stddev to its old value, though, since that seems to produce fewer oscillations in the cost function.

The output is still troubling:

Epoch: 00000 Training error= 0.477673073
Epoch: 00500 Training error= 0.121848744
Epoch: 01000 Training error= 0.112854530
Epoch: 01500 Training error= 0.110597624
Epoch: 02000 Training error= 0.122603499
Epoch: 02500 Training error= 0.125051472
Epoch: 03000 Training error= 0.125400717
Epoch: 03500 Training error= 0.131999354
Epoch: 04000 Training error= 0.121850889
Epoch: 04500 Training error= 0.116551533
Epoch: 05000 Training error= 0.129749704
Epoch: 05500 Training error= 0.124600464
Epoch: 06000 Training error= 0.121600218
Epoch: 06500 Training error= 0.121249676
Epoch: 07000 Training error= 0.132656938
Epoch: 07500 Training error= 0.161801757
Epoch: 08000 Training error= 0.084197352
Epoch: 08500 Training error= 0.132197409
Epoch: 09000 Training error= 0.123249055
Epoch: 09500 Training error= 0.126602369
Optimization Finished!
Test Error: 0.129230736355

Two more changes, thanks to Steven. Steven suggested replacing the sigmoid activations with ReLU, so I tried that. In the meantime, I noticed I hadn't set an activation function on the output node, so I added one there as well (it should be easy to see what I changed).
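For clarity, the change was presumably along these lines (a sketch of the assumed edit; the exact code isn't shown in the post):

# Assumed ReLU variant: ReLU on the hidden layers and on the output node
layer_1 = tf.nn.relu(tf.matmul(x, weights['h1']))
layer_2 = tf.nn.relu(tf.matmul(layer_1, weights['h2']))
layer_3 = tf.nn.relu(tf.matmul(layer_2, weights['h3']))
out_layer = tf.nn.relu(tf.matmul(layer_3, weights['out']))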

Starting Training.
Epoch: 00000 Training error= 293.245977809
Epoch: 00500 Training error= 0.290000000
Epoch: 01000 Training error= 0.340000000
Epoch: 01500 Training error= 0.360000000
Epoch: 02000 Training error= 0.285000000
Epoch: 02500 Training error= 0.250000000
Epoch: 03000 Training error= 0.245000000
Epoch: 03500 Training error= 0.260000000
Epoch: 04000 Training error= 0.290000000
Epoch: 04500 Training error= 0.315000000
Epoch: 05000 Training error= 0.285000000
Epoch: 05500 Training error= 0.265000000
Epoch: 06000 Training error= 0.340000000
Epoch: 06500 Training error= 0.180000000
Epoch: 07000 Training error= 0.370000000
Epoch: 07500 Training error= 0.175000000
Epoch: 08000 Training error= 0.105000000
Epoch: 08500 Training error= 0.295000000
Epoch: 09000 Training error= 0.280000000
Epoch: 09500 Training error= 0.285000000
Optimization Finished!
Test Error: 0.220196439287

And this is what it does with a sigmoid activation on every node, including the output:

Epoch: 00000 Training error= 0.110878121
Epoch: 00500 Training error= 0.119393080
Epoch: 01000 Training error= 0.109229532
Epoch: 01500 Training error= 0.100436962
Epoch: 02000 Training error= 0.113160662
Epoch: 02500 Training error= 0.114200962
Epoch: 03000 Training error= 0.109777990
Epoch: 03500 Training error= 0.108218725
Epoch: 04000 Training error= 0.103001394
Epoch: 04500 Training error= 0.084145737
Epoch: 05000 Training error= 0.119173495
Epoch: 05500 Training error= 0.095796251
Epoch: 06000 Training error= 0.093336573
Epoch: 06500 Training error= 0.085062860
Epoch: 07000 Training error= 0.104251661
Epoch: 07500 Training error= 0.105910949
Epoch: 08000 Training error= 0.090347288
Epoch: 08500 Training error= 0.124480612
Epoch: 09000 Training error= 0.109250224
Epoch: 09500 Training error= 0.100245836
Optimization Finished!
Test Error: 0.110234139674

I find these numbers very strange. In the first case, the cost is higher than with sigmoid, even though sigmoid should saturate very early; in the second case, it starts with a training error that is almost the final one... so it essentially converges after a single mini-batch. I'm starting to think I'm not computing the cost correctly, on this line: avg_cost = c / (minibatch_x.shape[0])
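For reference, tf.nn.l2_loss(t) computes sum(t ** 2) / 2, so c is half the summed squared error over the mini-batch and avg_cost is half the mean squared error per example. A quick standalone check (a sketch, not part of the original script):

# Verify what tf.nn.l2_loss returns: the sum of squares divided by 2
import numpy as np
import tensorflow as tf

diff = np.array([[0.5], [-1.0], [2.0]])      # stand-in for pred - y
with tf.Session() as sess:
    print(sess.run(tf.nn.l2_loss(diff)))     # 2.625
print(np.sum(diff ** 2) / 2)                 # 2.625 as well
# so avg_cost = c / batch_size is half of the per-example mean squared error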

3 Answers:

Answer 0 (score: 2)

So there could be a few things going on here:

  1. You may be saturating the sigmoid units (as MMN mentioned); I suggest switching to ReLU units. Instead of:

     tf.nn.sigmoid(layer_n)

     use:

     tf.nn.relu(layer_n)

  2. Your model may not have the expressive power to actually learn the data, i.e. it may need to be deeper.

  3. You can also try a different optimizer, like Adam. Instead of:

     optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

     use:

     optimizer = tf.train.AdamOptimizer().minimize(cost)

  A couple of other points:

  4. You should add bias terms to your weights. Like this:

     biases = {
         'b1': tf.Variable(tf.random_normal([n_hidden_1], dtype=np.float64)),
         'b2': tf.Variable(tf.random_normal([n_hidden_2], dtype=np.float64)),
         'b3': tf.Variable(tf.random_normal([n_hidden_3], dtype=np.float64)),
         'bout': tf.Variable(tf.random_normal([1], dtype=np.float64))
     }

     def multilayer_perceptron(x, weights):
         # First hidden layer with SIGMOID activation
         layer_1 = tf.matmul(x, weights['h1']) + biases['b1']
         layer_1 = tf.nn.sigmoid(layer_1)
         # Second hidden layer with SIGMOID activation
         layer_2 = tf.matmul(layer_1, weights['h2']) + biases['b2']
         layer_2 = tf.nn.sigmoid(layer_2)
         # Third hidden layer with SIGMOID activation
         layer_3 = tf.matmul(layer_2, weights['h3']) + biases['b3']
         layer_3 = tf.nn.sigmoid(layer_3)
         # Output layer (linear)
         out_layer = tf.matmul(layer_2, weights['out']) + biases['bout']
         return out_layer
        
  5. You can also decay the learning rate over time. Like this:

     learning_rate = tf.train.exponential_decay(INITIAL_LEARNING_RATE,
                                                global_step,
                                                decay_steps,
                                                LEARNING_RATE_DECAY_FACTOR,
                                                staircase=True)
          

     You just have to define decay_steps, i.e. when to decay, and LEARNING_RATE_DECAY_FACTOR, i.e. how much to decay by.
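     A minimal sketch of how the decaying rate wires into the optimizer (the constants below are placeholder values, not taken from the post):

     # Placeholder hyperparameters, for illustration only
     INITIAL_LEARNING_RATE = 0.01
     LEARNING_RATE_DECAY_FACTOR = 0.95
     decay_steps = 1000

     # global_step is incremented by minimize() on every training step
     global_step = tf.Variable(0, trainable=False)
     learning_rate = tf.train.exponential_decay(INITIAL_LEARNING_RATE,
                                                global_step,
                                                decay_steps,
                                                LEARNING_RATE_DECAY_FACTOR,
                                                staircase=True)
     optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
         cost, global_step=global_step)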

Answer 1 (score: 1)

Your weights are initialized with a stddev of 1, so the output of layer 1 will have a stddev of around 10. This may be saturating the sigmoid to the point where most gradients are 0.

Could you try initializing the hidden weights with a stddev of 0.01?
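The factor of ~10 comes from summing 100 inputs: with roughly unit-stddev inputs and unit-stddev weights, the pre-activation stddev is about sqrt(n_input) = sqrt(100) = 10, deep in the flat tails of the sigmoid. A quick NumPy check of that claim (a standalone sketch, assuming roughly unit-stddev inputs):

    import numpy as np

    x = np.random.randn(10000, 100)   # inputs with stddev ~1
    w = np.random.randn(100, 64)      # weights drawn with stddev 1, as in the original init
    print(np.std(x.dot(w)))           # ~10, i.e. sqrt(100)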

Answer 2 (score: 1)

In addition to the answers above, I suggest you try a different cost function: tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None)

Since this is binary classification, you should try the sigmoid_cross_entropy_with_logits cost function.
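A minimal sketch of swapping in that cost, assuming pred holds the raw logits (i.e. no activation on the output layer). Note that newer TensorFlow releases require the keyword form labels=/logits=, while older ones used the positional (logits, targets) signature quoted above:

    # Cross-entropy cost on raw logits for binary classification
    cost = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=pred, labels=y))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)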

I also suggest you plot training and test accuracy against the number of epochs, i.e. check whether the model is overfitting.
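A minimal sketch of computing such an accuracy from the logits, assuming 0/1 float labels as in the question (threshold the sigmoid output at 0.5):

    # Fraction of examples whose thresholded prediction matches the label
    predicted_class = tf.cast(tf.greater(tf.sigmoid(pred), 0.5), tf.float64)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted_class, y), tf.float64))

    # evaluate inside the session, e.g. once per logging step
    train_acc = accuracy.eval({x: minibatch_x, y: minibatch_y})
    test_acc  = accuracy.eval({x: test_set, y: test_labels})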

If it is not overfitting, try making your neural network more expressive by increasing the number of neurons and adding layers. You will reach a point where the training accuracy keeps improving but the validation accuracy does not; that point gives you the best model.