Keras BLSTM for sequence labeling

Date: 2016-05-18 18:25:02

Tags: python neural-network keras lstm

I'm fairly new to neural networks, so please forgive my ignorance. I'm trying to adapt the Keras BLSTM example here. That example reads in texts and classifies each one as 0 or 1. I want a BLSTM that does something very much like POS tagging; extras such as lemmatizing or other advanced features are not necessary, I just want a basic model. My data is a list of sentences, and each word is assigned a class from 1 to 8. I want to train a BLSTM that can use this data to predict the class of each word in an unseen sentence.

e.g. input = ['The', 'dog', 'is', 'red'] gives output = [2, 4, 3, 7]

If the Keras example isn't the best route, I'm open to other suggestions.
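For illustration, a minimal sketch of the encoding I have in mind; the word and tag vocabularies below are made up, not my real data:

import numpy as np

# toy vocabularies for illustration only; the real ones are built from the
# corpus, with 0 reserved for padding
word_to_id = {'<PAD>': 0, 'the': 1, 'dog': 2, 'is': 3, 'red': 4}
tag_to_id = {'<PAD>': 0, 'DET': 2, 'NOUN': 4, 'VERB': 3, 'ADJ': 7}

sentence = ['the', 'dog', 'is', 'red']
tags = ['DET', 'NOUN', 'VERB', 'ADJ']

# integer-encode the words and their per-word classes
x = np.array([word_to_id[w] for w in sentence])  # -> [1 2 3 4]
y = np.array([tag_to_id[t] for t in tags])       # -> [2 4 3 7]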

This is what I currently have:

'''Train a Bidirectional LSTM.'''

from __future__ import print_function
import numpy as np
from keras.preprocessing import sequence
from keras.models import Model
from keras.layers import Dense, Dropout, Embedding, LSTM, Input, merge
from prep_nn import prep_scan


np.random.seed(1337)  # for reproducibility
max_features = 20000
batch_size = 16
maxlen = 18

print('Loading data...')
(X_train, y_train), (X_test, y_test) = prep_scan(nb_words=max_features,
                                                 test_split=0.2)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')

print("Pad sequences (samples x time)")
# type issues here? float/int?
X_train = sequence.pad_sequences(X_train, value=0.)
X_test = sequence.pad_sequences(X_test, value=0.)  # pad with zeros

print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)

# need to pad y too, because more than 1 output value, not classification?
y_train = sequence.pad_sequences(np.array(y_train), value=0.)
y_test = sequence.pad_sequences(np.array(y_test), value=0.)

print('y_train shape:', y_train.shape)
print('y_test shape:', y_test.shape)

# this is the placeholder tensor for the input sequences
sequence = Input(shape=(maxlen,), dtype='int32')

# this embedding layer will transform the sequences of integers
# into vectors of size 128
embedded = Embedding(max_features, 128, input_length=maxlen)(sequence)

# apply forwards LSTM
forwards = LSTM(64)(embedded)
# apply backwards LSTM
backwards = LSTM(64, go_backwards=True)(embedded)

# concatenate the outputs of the 2 LSTMs
merged = merge([forwards, backwards], mode='concat', concat_axis=-1)
after_dp = Dropout(0.5)(merged)
# the number in Dense has to correspond to the output matrix?
output = Dense(17, activation='sigmoid')(after_dp)

model = Model(input=sequence, output=output)

# try using different optimizers and different optimizer configs
model.compile('adam', 'categorical_crossentropy', metrics=['accuracy'])

print('Train...')
model.fit(X_train, y_train,
          batch_size=batch_size,
          nb_epoch=4,
          validation_data=[X_test, y_test])

X_test_new = np.array([[0,0,0,0,0,0,0,0,0,12,3,55,4,34,5,45,3,9],[0,0,0,0,0,0,0,1,7,65,34,67,34,23,24,67,54,43,]])

classes = model.predict(X_test_new, batch_size=16)
print(classes)

My output has the right dimensions, but it gives me floats between 0 and 1. I think that's because it is still looking for a binary classification. Does anyone know how to fix this?

SOLVED

Just make sure each label is a binary array:

from keras.layers import TimeDistributed  # needed for the per-timestep Dense output below

(X_train, y_train), (X_test, y_test), maxlen, word_ids, tags_ids = prep_model(
    nb_words=nb_words, test_len=75)

W = (y_train > 0).astype('float')

print(len(X_train), 'train sequences')
print(int(len(X_train)*val_split), 'validation sequences')
print(len(X_test), 'heldout sequences')

# this is the placeholder tensor for the input sequences
sequence = Input(shape=(maxlen,), dtype='int32')

# this embedding layer will transform the sequences of integers
# into vectors of size 256
embedded = Embedding(nb_words, output_dim=hidden,
                     input_length=maxlen, mask_zero=True)(sequence)

# apply forwards LSTM
forwards = LSTM(output_dim=hidden, return_sequences=True)(embedded)
# apply backwards LSTM
backwards = LSTM(output_dim=hidden, return_sequences=True,
                 go_backwards=True)(embedded)

# concatenate the outputs of the 2 LSTMs
merged = merge([forwards, backwards], mode='concat', concat_axis=-1)
after_dp = Dropout(0.15)(merged)

# TimeDistributed for sequence
# change activation to sigmoid?
output = TimeDistributed(
    Dense(output_dim=nb_classes,
          activation='softmax'))(after_dp)

model = Model(input=sequence, output=output)

# try using different optimizers and different optimizer configs
# loss=binary_crossentropy, optimizer=rmsprop
model.compile(loss='categorical_crossentropy',
              metrics=['accuracy'], optimizer='adam',
              sample_weight_mode='temporal')

print('Train...')
model.fit(X_train, y_train,
          batch_size=batch_size,
          nb_epoch=epochs,
          shuffle=True,
          validation_split=val_split,
          sample_weight=W)
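For reference, a sketch of how the padded integer tag sequences can be turned into per-timestep binary (one-hot) arrays of shape (samples, maxlen, nb_classes), which is what categorical_crossentropy expects here; this is only the idea, not the exact code inside prep_model:

import numpy as np
from keras.utils.np_utils import to_categorical

def one_hot_labels(y_padded, nb_classes):
    # y_padded: integer classes with shape (samples, maxlen), 0 = padding
    # returns a float array with shape (samples, maxlen, nb_classes)
    return np.array([to_categorical(seq, nb_classes) for seq in y_padded])

# e.g. y_train = one_hot_labels(y_train_int, nb_classes)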

2 Answers:

Answer 0 (score: 4)

Solved. The main issue was reshaping the data for the categorical classes into binary arrays. I also used TimeDistributed and set return_sequences to True.
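To read an integer class per word back out of the per-timestep softmax output, taking the argmax over the class axis works; a quick sketch (padded positions still have to be ignored when scoring):

probs = model.predict(X_test, batch_size=batch_size)  # shape (samples, maxlen, nb_classes)
pred_classes = probs.argmax(axis=-1)                   # shape (samples, maxlen), one class id per token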

Answer 1 (score: 0)

I know this thread is old, but I hope I can still help.

I modified the model into a binary one:

from keras.layers import concatenate  # Keras 2 import for merging the two LSTM outputs

sequence = Input(shape=(X_train.shape[1],), dtype='int32')

embedded = Embedding(max_features, embed_dim, input_length=X_train.shape[1], mask_zero=True)(sequence)

# apply forwards LSTM
forwards = LSTM(output_dim=hidden, return_sequences=True)(embedded)
# apply backwards LSTM
backwards = LSTM(output_dim=hidden, return_sequences=True,go_backwards=True)(embedded)

# concatenate the outputs of the 2 LSTMs
merged = concatenate([forwards, backwards])
after_dp = Dropout(0.15)(merged)
# now add an LSTM layer without return_sequences
lstm_normal = LSTM(hidden)(merged)

# TimeDistributed for sequence
# change activation to sigmoid?
#output = TimeDistributed(Dense(output_dim=2,activation='sigmoid'))(after_dp)
# I changed the output layer from TimeDistributed to a plain Dense because of the
# dimensionality problem, with output_dim = 1 for the binary output
output = Dense(output_dim=1,activation='sigmoid')(lstm_normal)

model = Model(input=sequence, output=output)

# try using different optimizers and different optimizer configs
# loss=binary_crossentropy, optimizer=rmsprop
# I changed model.compile to a binary loss and removed the sample_weight_mode parameter
model.compile(loss='binary_crossentropy',
              metrics=['accuracy'], optimizer='adam',
              )

print(model.summary())


###################################
# this is the training call

model.fit(X_train, Y_train,
          batch_size=128,
          epochs=10,
          shuffle=True,
          validation_split=0.2,
          #sample_weight=W
         )

# At this point it trains fine:
Train on 536000 samples, validate on 134000 samples
Epoch 1/10
536000/536000 [==============================] - 1814s 3ms/step - loss: 0.4794 - acc: 0.7679 - val_loss: 0.4624 - val_acc: 0.7784
Epoch 2/10
536000/536000 [==============================] - 1829s 3ms/step - loss: 0.4502 - acc: 0.7857 - val_loss: 0.4551 - val_acc: 0.7837
Epoch 3/10
 99584/536000 [====>.........................] - ETA: 23:10 - loss: 0.4291 - acc: 0.7980
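To read predictions out of this binary model, the sigmoid scores can be thresholded at 0.5; a sketch, assuming an X_test array prepared the same way as X_train:

probs = model.predict(X_test, batch_size=128)        # sigmoid scores in [0, 1]
pred_labels = (probs > 0.5).astype('int32').ravel()  # 0/1 class per sample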