Expected begin[0] == 0 (got -1) and size[0] == 0 (got 1) when input.dim_size(0) == 0

Date: 2019-04-17 13:07:15

Tags: tensorflow

Hi folks, this is the function that loads the images from my computer:

import os
import random

import cv2
import numpy as np
import tensorflow as tf

def get_image_array(file_location):
    img_array = []
    for dirpath, dirname, filename in os.walk(file_location):
        if filename:
            for file in filename:
                if str(file).startswith('dog'):
                    img=cv2.imread(file_location+"/"+file)
                    img = cv2.resize(img,(30,30)) 
                    img=np.expand_dims(img, axis=0)
                    img_array.append((1,img))
                elif ( str(file).startswith('cat')):
                    img=cv2.imread(file_location+"/"+file)
                    img = cv2.resize(img,(30,30))
                    img=np.expand_dims(img, axis=0)
                    img_array.append((0,img))
    return img_array

The dog and cat datasets are loaded like this:

dogs_image_array = get_image_array(dogs_dataset_location)
cats_image_array = get_image_array(cats_dataset_location)

These are some constants I defined:

images_num = 4000 #make it 2000 cats and 2000 dogs
image_size = 30
keep_prob = 0.5
filter_size = 5 
num_filters = 10
num_epochs = 10
batch_size = 100
num_batches = int(len(dogs_image_array)/batch_size)
num_channels = 3

Placeholders for x and y, plus a boolean placeholder:

x  = tf.placeholder(tf.float32, [None,image_size,image_size,num_channels])
y = tf.placeholder(tf.float32)
train = tf.placeholder(tf.bool)

The convolutional layers are defined as follows:

convul1 = tf.layers.conv2d(x,num_filters,filter_size,padding='same',activation= tf.nn.relu)
drop1 = tf.layers.dropout(convul1,keep_prob,training=train)
pool1 = tf.layers.max_pooling2d(drop1,2,2)

convul2 = tf.layers.conv2d(pool1,num_filters,filter_size,padding='same',activation= tf.nn.relu)
drop2 = tf.layers.dropout(convul2,keep_prob,training=train)
pool2 = tf.layers.max_pooling2d(drop2,3,3)

convul3 = tf.layers.conv2d(pool2,num_filters,filter_size,padding='same',activation = tf.nn.relu)
drop3 = tf.layers.dropout(convul3,keep_prob,training=train)


flatten_shape = 1 
for i in drop3.shape:
    i=str(i)
    if i.isdigit():
        flatten_shape = flatten_shape*int(i)
# reshape to one row per example, with flatten_shape columns
flatten = tf.reshape(drop3,[-1,flatten_shape])
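As a sanity check, the flattened size can also be traced by hand from the constants above (assuming TF's default `'valid'` padding for `max_pooling2d` and `'same'` padding on the convolutions, as in the code):

```python
# Trace the spatial size through the network by hand, assuming
# image_size = 30 and num_filters = 10 from the constants above.
size = 30          # conv1 uses 'same' padding, so 30x30 stays 30x30
size = size // 2   # pool1: 2x2 pooling, stride 2 -> 15x15
size = size // 3   # pool2: 3x3 pooling, stride 3 -> 5x5
# conv3 uses 'same' padding, so the spatial size is unchanged
flatten_shape = size * size * 10   # 5 * 5 spatial * 10 channels
print(flatten_shape)  # 250
```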

The learning rate, and a function that returns shuffled images and their labels:

learning_rate = 0.005
# returns shuffled images and labels
def getImageAndLabels(shuffle_rate):
    cats_dogs_images = dogs_image_array[:2000] + cats_image_array[:2000]
    for i in range(shuffle_rate):
        random.shuffle(cats_dogs_images)
    labels = [item[0] for item in cats_dogs_images]
    images = [item[1] for item in cats_dogs_images]
    return images[0], labels[0]
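One detail worth flagging here, and a likely source of the error below: `getImageAndLabels` returns `images[0]` and `labels[0]`, i.e. only the first image and the first label of the shuffled list, not a batch of `batch_size` examples. A minimal mock (assuming the `(label, image)` tuple format that `get_image_array` builds, with the batch dimension added by `np.expand_dims`) shows what the training loop actually feeds:

```python
import random
import numpy as np

# Mock data in the same (label, image) format as get_image_array builds.
data = [(1, np.zeros((1, 30, 30, 3))), (0, np.ones((1, 30, 30, 3)))]

random.shuffle(data)
labels = [item[0] for item in data]
images = [item[1] for item in data]

# Returning images[0], labels[0] hands back ONE example, not a batch:
img, label = images[0], labels[0]
print(img.shape)        # (1, 30, 30, 3) -- a single image
print(np.shape(label))  # () -- a 0-d scalar label
```

Because the `y` placeholder has no declared shape, this 0-d scalar is accepted silently at feed time, which is consistent with the labels-slicing failure (`input.dim_size(0) == 0`) in the traceback below.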

The fully connected layers that do the final matrix multiplications are:

with tf.contrib.framework.arg_scope([tf.contrib.layers.fully_connected], 
                                   normalizer_fn = tf.contrib.layers.batch_norm, 
                                   normalizer_params= {'is_training':train}):
    first_layer= tf.contrib.layers.fully_connected(flatten,flatten_shape)
    second_layer = tf.contrib.layers.fully_connected(first_layer,1,activation_fn=None) 

The loss and optimizer are:

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits =second_layer, labels =y ))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
init = tf.global_variables_initializer()
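Separately, note that `second_layer` has a single output unit, and softmax over a single logit is always exactly 1, so even with correctly shaped batches this loss would be constant and could not train. For one output unit, `tf.nn.sigmoid_cross_entropy_with_logits` (or two output units with one-hot labels) is the usual choice. A quick numeric check of the single-logit problem:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Softmax over a single logit is 1 regardless of the logit's value,
# so the cross-entropy is constant and its gradient is zero.
print(softmax(np.array([3.7])))    # [1.]
print(softmax(np.array([-42.0])))  # [1.]
```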

Here is the code that runs the graph:

with tf.Session() as sess:
    sess.run(init)
    for eachEpoch in range(num_epochs):
        for eachBatch in range(num_batches):
            batch_images, batch_labels = getImageAndLabels(100)
            sess.run([optimizer], feed_dict={x:batch_images, y:batch_labels, train:True})

But this gives me this strange error, and I don't know why:

Expected begin[0] == 0 (got -1) and size[0] == 0 (got 1) when input.dim_size(0) == 0
     [[node softmax_cross_entropy_with_logits_sg_2/Slice_1 (defined at <ipython-input-27-9ecb08e8a83c>:1) ]]

Any help with this issue? Sorry for the long post, but I couldn't find any related questions. A better approach (if there is one), or any modification that produces better results and also fixes this problem, would be appreciated.

0 Answers:

There are no answers