Using Datasets to consume Numpy arrays

Time: 2017-12-13 16:57:55

Tags: python tensorflow tensorflow-datasets

I am trying to use Numpy arrays in a graph, feeding in the data using a Dataset.

I have read this, but I don't quite understand how to feed placeholder arrays into a Dataset.

If we take a simple example, I start with:
import numpy as np
import tensorflow as tf

A = np.arange(4)
B = np.arange(10, 14)

a = tf.placeholder(tf.float32, [None])
b = tf.placeholder(tf.float32, [None])
c = tf.add(a, b)

with tf.Session() as sess:
    for i in range(10):
        x = sess.run(c, feed_dict={a: A, b:B})
        print(i, x)

I then try to amend this to use a Dataset, as follows:

A = np.arange(4)
B = np.arange(10, 14)

a = tf.placeholder(tf.int32, A.shape)
b = tf.placeholder(tf.int32, B.shape)
c = tf.add(a, b)

dataset = tf.data.Dataset.from_tensors((a, b))

iterator = dataset.make_initializable_iterator()

with tf.Session() as sess3:
    sess3.run(tf.global_variables_initializer())
    sess3.run(iterator.initializer, feed_dict={a: A, b: B})

    for i in range(10):
        x = sess3.run(c)
        print(i, x)

If I run this, I get 'InvalidArgumentError: You must feed a value for placeholder tensor ...'

The code up to the for loop mimics the example here, but I don't see how I can use the placeholders a & b without supplying a feed_dict to every call of sess3.run(c) [which would be expensive]. I suspect I somehow have to use the iterator, but I don't understand how.

Update

It seems I was over-simplistic in choosing my example. What I actually want to do is use a Dataset when training a neural network, or similar.

As a more sensible question: how would I use a Dataset to feed the placeholders below (though imagine X and Y_true are much longer...)? The documentation takes me to the point where the loop starts, and then I'm not sure.

X = np.arange(8.).reshape(4, 2)
Y_true = np.array([0, 0, 1, 1])

x = tf.placeholder(tf.float32, [None, 2], name='x')
y_true = tf.placeholder(tf.float32, [None], name='y_true')

w = tf.Variable(np.random.randn(2, 1), name='w', dtype=tf.float32)

y = tf.squeeze(tf.matmul(x, w), name='y')

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
                                labels=y_true, logits=y),
                                name='x_entropy')

# set optimiser
optimiser = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for i in range(100):
        _, loss_out = sess.run([optimiser, loss], feed_dict={x: X, y_true:Y_true})
        print(i, loss_out)

Trying the following just gets an InvalidArgumentError:

X = np.arange(8.).reshape(4, 2)
Y_true = np.array([0, 0, 1, 1])

x = tf.placeholder(tf.float32, [None, 2], name='x')
y_true = tf.placeholder(tf.float32, [None], name='y_true')

dataset = tf.data.Dataset.from_tensor_slices((x, y_true))
iterator = dataset.make_initializable_iterator()

w = tf.Variable(np.random.randn(2, 1), name='w', dtype=tf.float32)

y = tf.squeeze(tf.matmul(x, w), name='y')

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
                                labels=y_true, logits=y),
                                name='x_entropy')

# set optimiser
optimiser = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    sess.run(iterator.initializer, feed_dict={x: X,
                                              y_true: Y_true})

    for i in range(100):
        _, loss_out = sess.run([optimiser, loss])
        print(i, loss_out)

2 answers:

Answer 0 (score: 4)

Use iterator.get_next() to get an element from the Dataset, like:

next_element = iterator.get_next()

then initialize the iterator:

sess.run(iterator.initializer, feed_dict={a:A, b:B})

and finally get the values from the Dataset:

value = sess.run(next_element)
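
Putting those three steps together, a minimal end-to-end sketch (my consolidation of the steps above, without the map step discussed below, and reusing A and B from the question) would look like:

import numpy as np
import tensorflow as tf

A = np.arange(4)
B = np.arange(10, 14)

a = tf.placeholder(tf.int32, A.shape)
b = tf.placeholder(tf.int32, B.shape)

# from_tensors packs both arrays into a single dataset element.
dataset = tf.data.Dataset.from_tensors((a, b))
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
  # The placeholders are fed exactly once, when the iterator is initialized.
  sess.run(iterator.initializer, feed_dict={a: A, b: B})
  value = sess.run(next_element)
  print(value)  # a pair of numpy arrays: ([0 1 2 3], [10 11 12 13])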

EDIT

The code above just returns the elements from the Dataset. The Dataset API is intended to serve features and labels to an input_fn, therefore all additional computations for preprocessing should be performed within the Dataset API. If you want to add the elements together, you should define a function that is applied to them, like:

def add_fn(exp1, exp2):
  return tf.add(exp1, exp2)

and then you can map this function onto the dataset:

dataset = dataset.map(add_fn)

Full code example:

A = np.arange(4)
B = np.arange(10, 14)
a = tf.placeholder(tf.int32, A.shape)
b = tf.placeholder(tf.int32, B.shape)
#c = tf.add(a, b)
def add_fn(exp1, exp2):
  return tf.add(exp1, exp2)
dataset = tf.data.Dataset.from_tensors((a, b))
dataset = dataset.map(add_fn)
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()
with tf.Session() as sess:
  sess.run(iterator.initializer, feed_dict={a: A, b: B})
  # just one element in the dataset
  x = sess.run(next_element)
  print(x)
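
As a side note (my variant, not part of the answer above): if you want the dataset to yield the four pairs one at a time rather than as a single element, from_tensor_slices() together with a loop that catches tf.errors.OutOfRangeError does the job, reusing a, b, A, B and add_fn from the example:

dataset = tf.data.Dataset.from_tensor_slices((a, b))
dataset = dataset.map(add_fn)
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()
with tf.Session() as sess:
  sess.run(iterator.initializer, feed_dict={a: A, b: B})
  while True:
    try:
      # prints 10, 12, 14, 16: one sum per element
      print(sess.run(next_element))
    except tf.errors.OutOfRangeError:
      break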

Answer 1 (score: 2)

The problem in your more complicated example is that you use the same tf.placeholder() nodes as the input to Dataset.from_tensor_slices() (which is correct) and the network itself (which causes the InvalidArgumentError). Instead, as JEK points out in their answer, you should use iterator.get_next() as the input to your network, as follows (note that I added a couple of other fixes to make the code run as-is):

X = np.arange(8.).reshape(4, 2)
Y_true = np.array([0, 0, 1, 1])

x = tf.placeholder(tf.float32, [None, 2], name='x')
y_true = tf.placeholder(tf.float32, [None], name='y_true')

dataset = tf.data.Dataset.from_tensor_slices((x, y_true))

# You will need to repeat the input (which has 4 elements) to be able to take
# 100 steps.
dataset = dataset.repeat()

iterator = dataset.make_initializable_iterator()

# Use `iterator.get_next()` to create tensors that will consume values from the
# dataset.
x_next, y_true_next = iterator.get_next()

w = tf.Variable(np.random.randn(2, 1), name='w', dtype=tf.float32)

# The `x_next` tensor is a vector (i.e. a row of `X`), so you will need to
# convert it to a matrix or apply batching in the dataset to make it work with
# `tf.matmul()`
x_next = tf.expand_dims(x_next, 0)

y = tf.squeeze(tf.matmul(x_next, w), name='y')  # Use `x_next` here.

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=y_true_next, logits=y),  # Use `y_true_next` here.
    name='x_entropy')

# set optimiser
optimiser = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    sess.run(iterator.initializer, feed_dict={x: X,
                                              y_true: Y_true})

    for i in range(100):
        _, loss_out = sess.run([optimiser, loss])
        print(i, loss_out)
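
As the inline comment above suggests, instead of tf.expand_dims() you can batch inside the dataset pipeline so that get_next() already yields matrix-shaped features. Here is a sketch of that variant; the batch size of 4 (the whole input) is my arbitrary choice, not part of the original answer:

import numpy as np
import tensorflow as tf

X = np.arange(8.).reshape(4, 2)
Y_true = np.array([0, 0, 1, 1])

x = tf.placeholder(tf.float32, [None, 2], name='x')
y_true = tf.placeholder(tf.float32, [None], name='y_true')

dataset = tf.data.Dataset.from_tensor_slices((x, y_true))
dataset = dataset.repeat()
# Batching yields [batch_size, 2] features, so no tf.expand_dims() is needed.
dataset = dataset.batch(4)

iterator = dataset.make_initializable_iterator()
x_next, y_true_next = iterator.get_next()

w = tf.Variable(np.random.randn(2, 1), name='w', dtype=tf.float32)

y = tf.squeeze(tf.matmul(x_next, w), name='y')

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=y_true_next, logits=y),
    name='x_entropy')

optimiser = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(iterator.initializer, feed_dict={x: X, y_true: Y_true})

    for i in range(100):
        _, loss_out = sess.run([optimiser, loss])
        print(i, loss_out)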