Manually assigning dropout layers in Keras

Asked: 2019-06-21 16:57:36

Tags: python tensorflow keras

I'm trying to learn the inner workings of dropout regularization in neural networks. I'm learning mostly from Francois Chollet's "Deep Learning with Python".

Say I'm working with the IMDB movie review sentiment data and building a simple model like the one below:

# download IMDB movie review data
# keeping only the first 10000 most frequently occurring words to ensure manageably sized vectors
from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(
    num_words=10000)

# prepare the data
import numpy as np
# create an all 0 matrix of shape (len(sequences), dimension)
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        # set specific indices of results[i] = 1
        results[i, sequence] = 1.
    return results

# vectorize training data
x_train = vectorize_sequences(train_data)
# vectorize test data
x_test = vectorize_sequences(test_data)

# vectorize response labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
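As a quick sanity check (an illustrative example, not from the book), the encoding turns each review into a multi-hot vector with 1s at the indices of the words it contains:

# a toy "review" containing word indices 3 and 5, in a 10-dim space
print(vectorize_sequences([[3, 5]], dimension=10))
# [[0. 0. 0. 1. 0. 1. 0. 0. 0. 0.]]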

# build a model with L2 regularization
from keras import regularizers
from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

The book gives an example of manually setting random dropout weights using the following line:

# at training time, zero out a random fraction of the values in the matrix
layer_output *= np.random.randint(0, high=2, size=layer_output.shape)
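For context, the book pairs this masking step with a rescaling step so that nothing has to change at test time (so-called inverted dropout); a minimal numpy sketch of the combined idea:

# training time only: zero out half the values at random...
mask = np.random.randint(0, high=2, size=layer_output.shape)
layer_output *= mask
# ...then scale the survivors up by the dropout rate (0.5 here) so the
# expected magnitude of the output stays the same (inverted dropout)
layer_output /= 0.5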

How would I 1) actually integrate this into my model, and 2) how would I remove the dropout at test time?

EDIT: I'm aware of the integrated way of applying dropout with the line below; what I'm actually looking for is a way to implement the above manually.

model.add(layers.Dropout(0.5))

2 answers:

Answer 0 (score: 1)

This can be implemented using a Lambda layer.

from keras import backend as K

def dropout(x):
    training = K.learning_phase()
    if training == 1 or training is True:
        # zero out each value with probability 0.5...
        x *= K.cast(K.random_uniform(K.shape(x), minval=0, maxval=2,
                                     dtype='int32'), dtype='float32')
        # ...and scale the survivors up so the expected output is unchanged
        x /= 0.5
    return x

def get_model():
    model = models.Sequential()
    model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                           activation='relu', input_shape=(10000,)))
    model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                           activation='relu'))
    model.add(layers.Lambda(dropout))  # add dropout using a Lambda layer
    model.add(layers.Dense(1, activation='sigmoid'))
    print(model.summary())
    return model

# build and train with the learning phase set to "training" (1),
# so the Lambda layer applies dropout
K.set_learning_phase(1)
model = get_model()
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
weights = model.get_weights()

# rebuild with the learning phase set to "test" (0), so the Lambda layer
# is a no-op, and reuse the trained weights for prediction
K.set_learning_phase(0)
model = get_model()
model.set_weights(weights)
print('model prediction is {}, label is {}'.format(model.predict(x_test[0][None]), y_test[0]))

model prediction is [[0.1484453]], label is 0.0
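As a side note (not part of the original answer), the Keras backend also provides K.in_train_phase(), which switches between a training expression and an inference expression symbolically, so you don't have to branch on K.learning_phase() yourself or rebuild the model between phases. A sketch of the same Lambda body written that way:

from keras import backend as K

def dropout(x, rate=0.5):
    def dropped():
        # binary mask: each unit is kept or dropped with probability 0.5
        mask = K.cast(K.random_uniform(K.shape(x), minval=0, maxval=2,
                                       dtype='int32'), dtype='float32')
        # scale the kept units up so the expected output is unchanged
        return x * mask / (1.0 - rate)
    # masked tensor during training, identity at test time
    return K.in_train_phase(dropped, x)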

Answer 1 (score: 0)


How would I 1) actually integrate this into my model

Actually, that piece of Python code using the numpy library is only there to illustrate how dropout works; it's not how you would implement dropout in a Keras model. Instead, to use dropout in a Keras model you add a Dropout layer and give it a rate (a number between 0 and 1) that denotes the fraction of units to drop:

from keras import layers

# ...
model.add(layers.Dropout(dropout_rate))
# add the rest of layers to the model ...
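Applied to the model from the question (using the 0.5 rate from the question's edit as an example), that would look like:

model = models.Sequential()
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu', input_shape=(10000,)))
model.add(layers.Dropout(0.5))  # drop 50% of this layer's outputs in training
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))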

2) how would I remove the dropout at test time?

You don't need to do anything manually; Keras handles it automatically. Dropout is turned off in the prediction phase, i.e. when you use the predict() method.
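For the curious, this works because the built-in Dropout layer conditions on the learning phase internally; a simplified sketch of the pattern (not the actual Keras source):

from keras import backend as K
from keras.layers import Layer

class SimplifiedDropout(Layer):
    # a stripped-down illustration of what keras.layers.Dropout does
    def __init__(self, rate, **kwargs):
        super(SimplifiedDropout, self).__init__(**kwargs)
        self.rate = rate

    def call(self, inputs, training=None):
        def dropped():
            # K.dropout masks a fraction `rate` of units and rescales the rest
            return K.dropout(inputs, level=self.rate)
        # training phase: masked tensor; test phase (e.g. predict()): identity
        return K.in_train_phase(dropped, inputs, training=training)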