Keras predicted class labels

Asked: 2019-07-15 12:09:38

Tags: keras

I need help understanding how Keras assigns predicted class labels.

I am building a binary classification model with Keras: three 'relu' hidden layers plus a sigmoid output, with binary_crossentropy as the loss. The thing I cannot understand is simple: for y_train / y_test I use two values, False / True. When I predict on X_test I get either the predicted probabilities or the classes, which is fine.

Unfortunately, the model's accuracy comes out as 0.1 in about half of my runs and 0.9 in the other half. This is caused by the uneven distribution of False / True: 90% of the samples are False and 10% are True. What seems to happen is that the class label for False is 0 in some runs and 1 in others, and I don't know why. I found some posts saying that Keras numbers classes alphabetically, so I assumed False would always be 0 and True always 1, but that is apparently not the case. Please find my code below and tell me what I am doing wrong.

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import Callback
from azureml.core import Run

# ####################### Fetch data #################################
#df = pd.read_csv(os.path.join(data_folder, data_file))
df = pd.read_csv('\\\...\\train_data_with_anomalous_column.csv', delimiter=',')
x = df.drop('ANOMALOUS', axis=1)
# x = x.drop('TIMESTAMP', axis=1)
y = df['ANOMALOUS'].copy()
x = x.to_numpy()  # DataFrame.as_matrix() is deprecated in newer pandas
y = y.to_numpy()
# y.astype(int)

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.1)

# ####################### get rid of TIMESTAMP columns ###############
# ###### first, copy the TIMESTAMP column into another np-array
# ###### in order to be able to join it later
X_test_TST = X_test[:, [0]]
X_train = np.delete(X_train, 0, 1)
X_test = np.delete(X_test, 0, 1)

class_weight = {False: 1.,
                True: 1.
                }
training_set_size = X_train.shape[0]

n_inputs = 22
n_h1 = args.n_hidden_1
n_h2 = args.n_hidden_2
n_h3 = args.n_hidden_3
n_outputs = 1
n_epochs = 2
model_validation_split = 0.1
batch_size = args.batch_size
learning_rate = args.learning_rate

print(X_train.shape, y_train.shape, X_test.shape, y_test.shape, sep='\n')

# Build a simple MLP model
model = Sequential()
# first hidden layer
model.add(Dense(n_h1, activation='relu', input_shape=(n_inputs,)))
# second hidden layer
model.add(Dense(n_h2, activation='relu'))
# third hidden layer
model.add(Dense(n_h3, activation='relu'))
# output layer
model.add(Dense(n_outputs, activation='sigmoid'))

model.summary()

model.compile(loss='binary_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy']
              )

# start an Azure ML run
run = Run.get_context()

class LogRunMetrics(Callback):
    # callback at the end of every epoch
    def on_epoch_end(self, epoch, log):
        # log a value repeated which creates a list
        run.log('Loss', log['loss'])
        run.log('Accuracy', log['acc'])

history = model.fit(X_train, y_train,
                    batch_size=batch_size,
                    epochs=n_epochs,
                    verbose=2,
                    class_weight=class_weight,
                    callbacks=[LogRunMetrics()])

score = model.evaluate(X_test, y_test, verbose=1)

# #### get accuracy, F1, precision and recall
# Get the class predictions (predict_classes thresholds the sigmoid output at 0.5)
y_classes = model.predict_classes(X_test, batch_size=batch_size, verbose=1)
print(y_classes)
print(classification_report(y_test, y_classes))
...
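For reference, here is a minimal sketch of one way to avoid the ambiguity described above: convert the boolean labels to integers up front so the mapping is fixed (False -> 0, True -> 1), derive the class weights from the label frequencies instead of hard-coding them to 1.0, and threshold the sigmoid output at 0.5 via model.predict rather than relying on predict_classes. The 22-feature, roughly 90/10 setup mirrors the question; the make_classification stand-in data, the layer sizes, and the 'adam' optimizer are assumptions made purely for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from keras.models import Sequential
from keras.layers import Dense

# Stand-in for the CSV in the question: 22 features, roughly 90% False / 10% True.
x, y_bool = make_classification(n_samples=5000, n_features=22,
                                weights=[0.9, 0.1], random_state=0)
y_bool = y_bool.astype(bool)          # mimic the False/True labels from the question

# Fix the label mapping explicitly: False -> 0, True -> 1.
y = y_bool.astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    x, y, test_size=0.1, stratify=y, random_state=0)

# Derive class weights from the training-set frequencies instead of hard-coding 1.0,
# so the rare True class is not drowned out by the dominant False class.
n_total = len(y_train)
n_true = int(y_train.sum())
class_weight = {0: n_total / (2.0 * (n_total - n_true)),
                1: n_total / (2.0 * n_true)}

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(22,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(X_train, y_train, epochs=2, batch_size=32,
          class_weight=class_weight, verbose=2)

# The sigmoid output is the probability of class 1 (True); threshold it at 0.5
# explicitly, so there is no ambiguity about which label a prediction refers to.
y_prob = model.predict(X_test)
y_pred = (y_prob > 0.5).astype(int).ravel()
print(classification_report(y_test, y_pred, target_names=['False', 'True']))

Making the labels integers up front removes any doubt about which class the sigmoid probability refers to, and the frequency-based class weights keep the dominant False class from swamping training.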

0 Answers

There are no answers yet.