TensorFlow always returns nan

Date: 2019-03-01 03:46:58

Tags: python numpy tensorflow machine-learning

I'm building a neural network in TensorFlow, and my output is always nan. I'm not sure what that means, but I've heard it indicates a problem along the lines of log(0). I can't find anything like that in my code. Here is my code:

import tensorflow as tf
import numpy as np
import pandas as pd
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

all_data = pd.read_csv('/projects/first-repository/train.csv')
all_data = all_data.values

layer1_size = 4
layer2_size = 20
layer3_size = 1

data = np.stack([all_data[:, 2], all_data[:, 5], all_data[:, 6], all_data[:, 7]])
data = tf.convert_to_tensor(data, np.float32)
labels = all_data[:, 1]
labels = tf.convert_to_tensor(labels, np.float32)
labels = tf.reshape(labels, [891, 1])

theta1 = tf.get_variable('theta1', shape=(layer2_size, layer1_size), initializer=tf.contrib.layers.xavier_initializer())
theta2 = tf.get_variable('theta2', shape=(layer3_size, layer2_size), initializer=tf.contrib.layers.xavier_initializer())

a1 = data
z2 = tf.matmul(theta1, a1)
a2 = tf.nn.relu(z2)
z3 = tf.matmul(theta2, a2)
a3 = tf.nn.softmax(z3)
h = tf.transpose(a3)

cost = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=h))
train = tf.train.AdamOptimizer(0.01).minimize(cost)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for i in range(1):
        sess.run(train)
        print(sess.run(cost))

I tried replacing h with h + 1e-4, and the output did not change. How can I fix this?
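(For context on the log(0) intuition: a nan can also enter from the data itself rather than from the loss, and once a single nan is present it propagates through every later operation, which would explain why adding 1e-4 to h changes nothing. A minimal NumPy sketch of both effects, purely illustrative and not the actual training data:)

```python
import numpy as np

# log(0) alone gives -inf; the cross-entropy-style term 0 * log(0) gives nan
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.log(0.0))        # -inf
    print(0.0 * np.log(0.0))  # nan

# a single nan in the input poisons every downstream value,
# so shifting the final output by 1e-4 cannot repair it
x = np.array([[1.0, np.nan],
              [2.0, 3.0]])
w = np.ones((2, 2))
print(w @ x)                  # nan spreads through the matmul
print((w @ x) + 1e-4)         # still nan
```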

0 Answers:

No answers yet