Using Conv2d for image adjustment

Date: 2017-10-28 12:13:07

Tags: numpy opencv tensorflow machine-learning tensor

I am working on a CNN-related project using TensorFlow. I import the images (20 of them) with:

for filename in glob.glob('input_data/*.jpg'):
    input_images.append(cv2.imread(filename, 0))

image_size_input = len(input_images[0])

Since the images are grayscale, each one has shape (250, 250). But conv2d needs a 4D input tensor to feed. My input placeholder looks like:

x = tf.placeholder(tf.float32,shape=[None,image_size_output,image_size_output,1], name='x')

So I am unable to convert these 2D images into the required 4D shape. How do I handle the `None` dimension? I tried this:

input_images_padded = []
for image in input_images:
    temp = np.zeros((1, image_size_output, image_size_output, 1))
    for i in range(image_size_input):
        for j in range(image_size_input):
            temp[0, i, j, 0] = image[i, j]
    input_images_padded.append(temp)

I get the following error:

File "/opt/intel/intelpython3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 975, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))

ValueError: Cannot feed value of shape (20, 1, 250, 250, 1) for Tensor 'x_11:0', which has shape '(?, 250, 250, 1)'

Here is the whole code (for reference):

import tensorflow as tf
from PIL import Image
import glob
import cv2
import os
import numpy as np
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' 

input_images = []
output_images = []

for filename in glob.glob('input_data/*.jpg'):
    input_images.append(cv2.imread(filename,0))

for filename in glob.glob('output_data/*.jpg'):
    output_images.append(cv2.imread(filename,0))    

image_size_input = len(input_images[0])
image_size_output = len(output_images[0])

'''
now adding padding to the input images to convert from 125x125 to 250x250 sized images
'''
input_images_padded = []
for image in input_images:
    temp = np.zeros((1,image_size_output,image_size_output,1))
    for i in range(image_size_input):
        for j in range(image_size_input):
            temp[0,i,j,0] = image[i,j]
    input_images_padded.append(temp)

output_images_padded = []
for image in output_images:
    temp = np.zeros((1,image_size_output,image_size_output,1))
    for i in range(image_size_input):
        for j in range(image_size_input):
            temp[0,i,j,0] = image[i,j]
    output_images_padded.append(temp)



sess = tf.Session()
'''
Creating tensor for the input
'''
x = tf.placeholder(tf.float32,shape=[None,image_size_output,image_size_output,1], name='x')
'''
Creating tensor for the output
'''
y = tf.placeholder(tf.float32,shape=[None,image_size_output,image_size_output,1], name='y')


def create_weights(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))

def create_biases(size):
    return tf.Variable(tf.constant(0.05, shape=[size]))

def create_convolutional_layer(input, bias_count, filter_height, filter_width, num_input_channels, num_out_channels, activation_function):  


    weights = create_weights(shape=[filter_height, filter_width, num_input_channels, num_out_channels])

    biases = create_biases(bias_count)


    layer = tf.nn.conv2d(input=input,
                         filter=weights,
                         strides=[1, 1, 1, 1],
                         padding='SAME')

    layer += biases

    layer = tf.nn.max_pool(value=layer,
                           ksize=[1, 2, 2, 1],
                           strides=[1, 1, 1, 1],
                           padding='SAME')

    if activation_function == "relu":
        layer = tf.nn.relu(layer)

    return layer


'''
Conv. Layer 1: Patch extraction
64 filters of size 1 x 9 x 9
Activation function: ReLU
Output: 64 feature maps
Parameters to optimize: 
    1 x 9 x 9 x 64 = 5184 weights and 64 biases
'''
layer1 = create_convolutional_layer(input=x,
                                bias_count=64,
                                filter_height=9,
                                filter_width=9,
                                num_input_channels=1,
                                num_out_channels=64,
                                activation_function="relu")

'''
Conv. Layer 2: Non-linear mapping
32 filters of size 64 x 1 x 1
Activation function: ReLU
Output: 32 feature maps
Parameters to optimize: 64 x 1 x 1 x 32 = 2048 weights and 32 biases
'''

layer2 = create_convolutional_layer(input=layer1,
                                bias_count=32,
                                filter_height=1,
                                filter_width=1,
                                num_input_channels=64,
                                num_out_channels=32,
                                activation_function="relu")

'''Conv. Layer 3: Reconstruction
1 filter of size 32 x 5 x 5
Activation function: Identity
Output: HR image
Parameters to optimize: 32 x 5 x 5 x 1 = 800 weights and 1 bias'''
layer3 = create_convolutional_layer(input=layer2,
                                bias_count=1,
                                filter_height=5,
                                filter_width=5,
                                num_input_channels=32,
                                num_out_channels=1,
                                activation_function="identity")

'''print(layer1.get_shape().as_list()) 
print(layer2.get_shape().as_list())
print(layer3.get_shape().as_list())'''

'''
    applying gradient descent algorithm
'''
#loss_function
loss = tf.reduce_sum(tf.square(layer3-y))
#optimiser
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)


init = tf.global_variables_initializer()
sess.run(init)
for i in range(len(input_images)):
    sess.run(train,{x: input_images_padded, y:output_images_padded})


curr_loss = sess.run([loss], {x: x_train, y: y_train})
print("loss: %s"%(curr_loss))

3 Answers:

Answer 0 (score: 1)

One option is to omit the shape argument when creating the placeholder, so that it accepts a tensor of any shape fed during sess.run().

From the documentation:

    shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape.

Alternatively, you can specify 20, which is your batch size. Note that the first dimension of the tensor always corresponds to batch_size.
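
A minimal sketch of both options (assuming 250x250 grayscale images as in the question; the names here are just for illustration):

import tensorflow as tf

# Option 1: no shape given, so a tensor of any shape can be fed at sess.run() time
x = tf.placeholder(tf.float32, name='x')

# Option 2: fix the batch size to 20 explicitly
x = tf.placeholder(tf.float32, shape=[20, 250, 250, 1], name='x')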

Answer 1 (score: 1)

I think your image_padded is not right. I have no experience writing tf code (though I have read some), but try this:

# imgs is your input image sequence
# padded is what gets fed
cnt = len(imgs)
H, W = imgs[0].shape[:2]
padded = np.zeros((cnt, H, W, 1))
for i in range(cnt):
    padded[i, :, :, 0] = imgs[i]
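
If it helps, the same stacking can be written in one step with NumPy (a small sketch, assuming imgs is the list of 2D grayscale arrays loaded in the question):

import numpy as np

# stack the 2D images into (cnt, H, W), then append a channel axis -> (cnt, H, W, 1)
padded = np.expand_dims(np.stack(imgs, axis=0), axis=-1)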

Answer 2 (score: 0)

Check the following lines; they worked for me:

train_set = np.zeros((input_images.shape[0], input_images.shape[1], input_images.shape[2], 1))

for image in range(input_images.shape[0]):
    train_set[image, :, :, 0] = input_images[image, :, :]
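
Note that this indexes input_images as a NumPy array; if it is still a plain Python list as in the question, it would presumably need converting first, e.g.:

import numpy as np

# convert the list of 2D grayscale images into a single (20, 250, 250) array
input_images = np.array(input_images)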