How do I set the dimensions for a 1D convolutional network?

Date: 2018-09-27 19:08:21

Tags: python tensorflow keras conv-neural-network

I have 2000 observations. Each observation has 2 channels, and each channel is a 1D vector of length 1024.

Observation_dim = (2000,2,1024)
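For concreteness, a dataset of this shape can be mocked with NumPy (the random values and the `labels` array are purely illustrative, not from the question):

```python
import numpy as np

# Hypothetical stand-in for the real dataset:
# 2000 observations, 2 channels, each channel a 1D vector of length 1024.
observations = np.random.randn(2000, 2, 1024).astype("float32")

# Binary targets: each observation belongs to class 0 or class 1.
labels = np.random.randint(0, 2, size=(2000,))

print(observations.shape)  # (2000, 2, 1024)
```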

Structure:

channel 1 (1,1024) ----> Convolutional layer 1 ---
                                                  \
                                                   > concatenate --> FCN --> binary_classification
                                                  /
channel 2 (1,1024) ----> Convolutional layer 2 ---

Each channel carries independent information, so each channel must be convolved separately with its own 1D convolution.

This is a binary classification task, so each observation belongs to either class 0 or class 1.

Problem: I don't know how to set the dimensions for this simple classification:

Note: I am trying to use the conv_1d function to extract features from each channel separately, then concatenate the flattened outputs of the two conv_1d branches and feed them into the FCN.

import keras
from keras.models import Model
from keras.layers import (Input, Conv1D, MaxPooling1D, BatchNormalization,
                          Flatten, Dense, Dropout, Lambda, Concatenate)
from keras.optimizers import Adam

def conv_1d(x):
    w_s=3
    p_s=4
    conv1 = Conv1D(32, w_s, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(x)
    batch1 = BatchNormalization()(conv1)
    pool1 = MaxPooling1D(pool_size=p_s)(batch1)

    conv2 = Conv1D(64, w_s, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
    batch2 = BatchNormalization()(conv2)
    pool2 = MaxPooling1D(pool_size=p_s)(batch2)

    flat = Flatten()(pool2)
    return flat

def tst_1():
    inputs = Input((2, 1024, 1))  # per-sample shape: (channels, length, 1)

    x1 = Lambda(lambda x:x[:,0])(inputs)
    dense12= conv_1d(x1)

    x2 = Lambda(lambda x:x[:,1])(inputs)
    dense22 = conv_1d(x2)


    flat = keras.layers.Concatenate()([dense12, dense22])

    dense1 = Dense(512, activation='relu')(flat)
    BN1 = BatchNormalization()(dense1)
    dense2 = Dense(256, activation='relu')(BN1)
    drop2 = Dropout(0.5)(dense2)
    dense3 = Dense(64, activation='relu')(drop2)
    densef = Dense(1, activation='sigmoid')(dense3)

    model = Model(inputs = inputs, outputs = densef)

    model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy'])


    return model

model = tst_1()
model.summary()
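As a side note, what `Lambda(lambda x: x[:, 0])` does to the shapes can be illustrated with plain NumPy: indexing position 0 on the second axis drops that axis, leaving a `(batch, steps, features)` array of the kind `Conv1D` accepts. This is a shape sketch with a dummy array, not part of the question's code:

```python
import numpy as np

# A small batch with the same layout the model expects: (batch, 2, 1024, 1).
batch = np.zeros((5, 2, 1024, 1), dtype="float32")

# Equivalent of Lambda(lambda x: x[:, 0]) / x[:, 1]: the channel axis is
# consumed by the index, leaving (batch, 1024, 1) per channel.
channel_1 = batch[:, 0]
channel_2 = batch[:, 1]

print(channel_1.shape)  # (5, 1024, 1)
```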

1 Answer:

Answer 0 (score: 0)

I was finally able to solve this using the Keras functional API (rather than the usual Sequential approach). The result is as follows:

from keras.models import Model
from keras.layers import Input, Conv1D, MaxPooling1D, Flatten, Dense, Lambda, concatenate

inputs = Input((2, 1000, 1))  # per-sample shape: (channels, length, 1)

# first feature extractor
x1 = Lambda(lambda x:x[:,0])(inputs)
conv1 = Conv1D(32, kernel_size=4, activation='relu')(x1)
pool1 = MaxPooling1D(pool_size=2)(conv1)
flat1 = Flatten()(pool1)
# second feature extractor
x2 = Lambda(lambda x:x[:,1])(inputs)
conv2 = Conv1D(32, kernel_size=4, activation='relu')(x2)
pool2 = MaxPooling1D(pool_size=2)(conv2)
flat2 = Flatten()(pool2)
# merge feature extractors
merge = concatenate([flat1, flat2])
# interpretation layer
hidden1 = Dense(10, activation='relu')(merge)
# prediction output
output = Dense(1, activation='sigmoid')(hidden1)
model = Model(inputs=inputs, outputs=output)
# summarize layers
print(model.summary())
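The layer sizes in this model can be sanity-checked by hand. This is a hand calculation assuming Keras's defaults for `Conv1D` (stride 1, `'valid'` padding) and `MaxPooling1D`:

```python
def conv1d_out(length, kernel, stride=1):
    # 'valid' padding: output length = floor((length - kernel) / stride) + 1
    return (length - kernel) // stride + 1

length = 1000
after_conv = conv1d_out(length, 4)   # Conv1D(32, kernel_size=4) -> 997 steps
after_pool = after_conv // 2         # MaxPooling1D(pool_size=2) -> 498 steps
flat_per_branch = after_pool * 32    # Flatten over 32 filters   -> 15936
merged = 2 * flat_per_branch         # concatenate both branches -> 31872

print(after_conv, after_pool, flat_per_branch, merged)
```

These numbers should match the shapes reported by `model.summary()`.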

The network then looks like this: (architecture diagram omitted)

You can find comprehensive information on using the Keras functional API here: (link omitted)