Error feeding the fully connected layer

Time: 2017-03-08 15:55:57

Tags: python-3.x tensorflow conv-neural-network

Working through some ideas on building a CNN here: I want to build a convnet containing two convolutional layers, a fully connected layer (FCL) and a softmax layer (SL). I cannot work out how to define the operation to be performed in the FCL and how it connects back to the SL.

In the FCL, is the convolution operation performed in 1D with the input flattened? The FCL weights are generated in 2D, but if so, how do I carry out the operation, since the matrix dimensions of the reshaped input and the generated weights don't match (comparing with the fully connected column of VGGNet at the end)? Even if I could perform a 1xM by MxN multiplication, the matrix sizes don't match. Where am I going wrong in the FCL?

Traceback (most recent call last):
  File "D:/Lab_Project_Files/TF/Practice Files/basictest22.py", line 108, in <module>
    y = conv_net( x )
  File "D:/Lab_Project_Files/TF/Practice Files/basictest22.py", line 93, in conv_net
    FClayer = tf.nn.relu(tf.add(tf.matmul(reshape,layer3_weights),layer3_biases))
ValueError: Shape must be rank 2 but is rank 1 for 'MatMul' (op: 'MatMul') with input shapes: [15360], [2240,64]

How should the FCL be defined? Am I also confused about whether these operations apply to each image in the batch?

My input parameters are

INPUT_WIDTH  = 16 # input image width
INPUT_HEIGHT = 12 # input image height
INPUT_DEPTH  = 1  # input image depth = 1 for monochrome
NUM_CLASSES  = 8  # output classes
BATCH_SIZE   = 5  # grouping batch for training 
# input output placeholders
x = tf.placeholder(tf.float32, [BATCH_SIZE, INPUT_WIDTH,INPUT_HEIGHT,INPUT_DEPTH ])
y_ = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_CLASSES])

My code so far

def outputdetails(W1, H1, F, P, S):
    # W1, W2 - width of input and output
    # H1, H2 - height of input and output
    # F      - size of the filter
    # P      - padding
    # S      - stride
    P = 0.00
    W2 = int((W1 - F + 2*P)/S + 1)
    H2 = int((H1 - F + 2*P)/S + 1)
    return W2, H2
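As a standalone sanity check (plain Python, not part of the question's code), the formula above reproduces the second matmul dimension seen in the traceback:

```python
def output_size(w1, h1, f, p, s):
    # Standard conv output-size formula: (W - F + 2P) / S + 1
    w2 = int((w1 - f + 2 * p) / s + 1)
    h2 = int((h1 - f + 2 * p) / s + 1)
    return w2, h2

# 16x12 input, 3x3 filter, zero padding (VALID), stride 1
print(output_size(16, 12, 3, 0, 1))   # (14, 10)
# 14 * 10 * 16 filters = 2240, the FC weight rows in the ValueError
```

Note that the `conv2d` calls in the question use `padding='SAME'` with stride 1, which preserves the 16x12 spatial size (16 * 12 * 16 = 3072 features per image), while the FC weights are sized for the VALID result (2240); that mismatch would surface as a further shape error even once the reshape is corrected.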

# CNN trial
def conv_net(x):
    # CONV1 layer
    FILTER_SIZE = 3   # applying 3x3 filter
    STRIDE = 1
    num_hidden = 64 # used for FCL as num of outputs
    NUM_CHANNELS = INPUT_DEPTH # input channels
    DEPTH = 16       # Output channels Apply 16 filters
    layer1_weights = tf.Variable(tf.random_normal([FILTER_SIZE,FILTER_SIZE,NUM_CHANNELS,DEPTH],stddev = 0.1))
    layer1_biases = tf.Variable(tf.zeros([DEPTH]))

    #CONV2 layer
    NUM_CHANNELS = 16
    DEPTH = 16
    layer2_weights = tf.Variable(tf.random_normal([FILTER_SIZE, FILTER_SIZE, NUM_CHANNELS, DEPTH], stddev=0.1))
    layer2_biases = tf.Variable(tf.zeros([DEPTH]))

    # Fully Connected layer
    # W1 - INPUT_WIDTH, H1 - INPUT_HEIGHT, F - FILTER_SIZE, S - STRIDE
    finalsize_width,finalsize_height = outputdetails(INPUT_WIDTH,INPUT_HEIGHT,FILTER_SIZE,1,STRIDE)
    layer3_weights = tf.Variable(
    tf.truncated_normal([finalsize_width * finalsize_height * DEPTH, num_hidden], stddev=0.1))
    layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
    # softmax layer
    Outlayer_weights = tf.Variable(tf.random_normal([num_hidden, NUM_CLASSES], stddev=0.1))
    Outlayer_biases = tf.Variable(tf.constant(1.0,shape = [NUM_CLASSES]))

    conv1 = tf.nn.relu(tf.add(tf.nn.conv2d(x,layer1_weights,strides = [1,1,1,1],padding='SAME'),layer1_biases))
    conv2 = tf.nn.relu(tf.add(tf.nn.conv2d(conv1, layer2_weights, strides=[1, 1, 1, 1], padding='SAME'), layer2_biases))
    shape = conv2.get_shape().as_list()
    reshape = tf.reshape(conv2,[shape[0]*shape[1]*shape[2]*shape[3]])
    FClayer = tf.nn.relu(tf.add(tf.matmul(reshape,layer3_weights),layer3_biases))
    out = tf.add(tf.matmul(FClayer, Outlayer_weights), Outlayer_biases)
    return out

Files (if needed): source file

classes

data

1 Answer:

Answer 0 (score: 1)

Change this

reshape = tf.reshape(conv2,[shape[0]*shape[1]*shape[2]*shape[3]])

to this

reshape = tf.reshape(conv2, [shape[0], shape[1]*shape[2]*shape[3]])

matmul works with the batch dimension, which you were destroying by flattening it away.
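The shape logic can be illustrated with a minimal NumPy sketch (zero-filled stand-in tensors; the FC weights are sized here for the actual SAME-padded feature count, 3072, rather than the 2240 in the traceback):

```python
import numpy as np

batch, width, height, depth = 5, 16, 12, 16   # conv2 output shape from the question
num_hidden = 64
conv2 = np.zeros((batch, width, height, depth))
fc_weights = np.zeros((width * height * depth, num_hidden))

# Flattening everything yields a rank-1 vector -- this triggered the ValueError:
flat_all = conv2.reshape(batch * width * height * depth)
print(flat_all.shape)            # (15360,)

# Keeping the batch dimension yields a rank-2 [batch, features] matrix,
# so the same FC weights multiply every image in the batch:
flat_batched = conv2.reshape(batch, width * height * depth)
fc = flat_batched @ fc_weights
print(fc.shape)                  # (5, 64)
```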