LSTM CNN training and test accuracy are the same, but predicted probabilities are low

Asked: 2018-06-24 13:23:34

Tags: python neural-network deep-learning conv-neural-network lstm

I have written Python code to classify unstructured text into one of 12 labels, numbered 0 to 11. The code is an LSTM CNN model, but the training and test accuracy come out the same. When I run predictions with the model, the probability that a piece of unstructured text belongs to any of the 12 classes appears very small. I cannot find an explanation for why this happens. I have browsed existing answers, but since I am a beginner in Python and neural networks, most of the solutions online seem hard for me to interpret.

import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Flatten, LSTM, Conv1D, MaxPooling1D, Dropout, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import text as keras_text, sequence as keras_seq
from sklearn.model_selection import train_test_split


#Preparing training data (Trainset, Trainset_x, Trainset_y and testset are file paths defined elsewhere)
raw = pd.read_fwf(Trainset)
xtrain_obfuscated = pd.read_fwf(Trainset_x)
ytrain = pd.read_fwf(Trainset_y, header=None)
xtrain_obfuscated['label'] = ytrain[0]
xtrain_obfuscated.rename(columns={0: 'text'}, inplace=True)

#Reading test file
xtest_obfuscated = pd.read_fwf(testset,header=None)
xtest_obfuscated.rename(columns={0:'text'}, inplace=True)

#One-hot encoding on training data
xtrain_encoded = pd.get_dummies(xtrain_obfuscated, columns=['label'])

#df_encoded_copy=df_encoded.copy()

#List sentences train
#Text matrix to be fed into neural network
train_sentence_list = xtrain_encoded["text"].fillna("unknown").values
list_classes = ["label_" + str(i) for i in range(12)]
y = xtrain_encoded[list_classes].values

#List sentences test
test_sentence_list = xtest_obfuscated["text"].fillna("unknown").values

max_features = 20000
maxlen = raw[0].map(len).max()
batch_size=32

#Sequence Generation
tokenizer = keras_text.Tokenizer(char_level = True)
tokenizer.fit_on_texts(list(train_sentence_list))
# train data
train_list_tokenized = tokenizer.texts_to_sequences(train_sentence_list)
X = keras_seq.pad_sequences(train_list_tokenized, maxlen=maxlen)

#Split features and labels in a single call so the rows stay aligned
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2)
# test data
test_list_tokenized = tokenizer.texts_to_sequences(test_sentence_list)
X_test = keras_seq.pad_sequences(test_list_tokenized, maxlen=maxlen)
#Model: char embedding -> Conv1D feature extraction -> max pooling -> LSTM -> 12-unit output layer
embedding_vector_length = 128
model = Sequential()
model.add(Embedding(max_features, embedding_vector_length, input_length=maxlen))
model.add(Dropout(0.2))
model.add(Conv1D(filters=64, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=4))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(12, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=3, batch_size=64)
#cross_val_score(model, X_train, y, cv=3)
# Final evaluation of the model
scores = model.evaluate(X_valid, y_valid, verbose=0)
#print("Accuracy: %.2f%%" % (scores[1]*100))
a = model.predict(X_test)

2 Answers:

Answer 0: (score: 0)

Try this: change

model.add(Dense(12, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

to

model.add(Dense(12, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Answer 1: (score: 0)

As R. Giskard suggested, the sigmoid activation can indeed be changed to softmax when there are more than 2 classes (this will make your outputs sum to 1), and binary_crossentropy should be switched to categorical_crossentropy. As the name suggests, binary_crossentropy is designed for binary classification problems.
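To see the difference concretely, here is a minimal numpy sketch (the logits are made up for illustration) showing that 12 independent sigmoids put no constraint on the total probability mass, while softmax always produces a distribution that sums to 1:

import numpy as np

logits = np.array([0.3, -1.2, 0.8, 0.1, -0.5, 1.5, -2.0, 0.0, 0.4, -0.9, 0.2, -0.3])

# Independent sigmoids: each value lies in (0, 1), but the sum can be anywhere from 0 to 12
sigmoid = 1.0 / (1.0 + np.exp(-logits))
print(sigmoid.sum())

# Softmax: one distribution over 12 mutually exclusive classes, always sums to 1
softmax = np.exp(logits) / np.exp(logits).sum()
print(softmax.sum())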

As for poor accuracy on an individual class, there can be many reasons. The most obvious one is probably dataset balance: does the problematic class have as many training samples as the other classes? Analyze your data before trying to build a classifier; a quick balance check is sketched below.
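This sketch assumes ytrain is the label DataFrame loaded in the question's code:

# Number of training samples per label (0-11); large imbalances hurt per-class accuracy
print(ytrain[0].value_counts().sort_index())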

Also, you seem to be starting with a very complex model in which you train your own character embeddings. Have you first tried a simpler approach to understand the data better? Something like TF-IDF can vectorize the data, and a classifier such as a random forest model is easier to interpret. If a simpler model solves your problem, there is no need for a custom NN architecture. You could start with a library like scikit-learn and run some basic tests to understand the data before deciding on deep learning, especially since DL models usually need a lot of training data to reach good results.
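Such a baseline could look roughly like the sketch below; it reuses train_sentence_list and the integer labels from ytrain in the question's code, and the vectorizer and forest parameters are only illustrative:

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Character n-grams mirror the char-level tokenizer used in the question
baseline = make_pipeline(
    TfidfVectorizer(analyzer='char', ngram_range=(1, 3)),
    RandomForestClassifier(n_estimators=200),
)

# The raw integer labels 0-11 work directly here; no one-hot encoding needed
print(cross_val_score(baseline, train_sentence_list, ytrain[0], cv=3))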

In fact, you probably should not build custom embeddings or models from scratch at all. Using pre-built models such as FastText or BERT will likely give better results.
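For instance, with the fasttext Python package, a supervised classifier needs only a plain-text training file where each line starts with a __label__ prefix (a sketch; train.txt is a hypothetical file with lines such as "__label__3 some obfuscated text"):

import fasttext

# Each line of train.txt: "__label__<class> <text>"
model = fasttext.train_supervised(input='train.txt')

# Returns the predicted label(s) and their probabilities
print(model.predict('some unstructured text to classify'))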