Inconsistent results between Keras and sklearn's MLPClassifier

Asked: 2017-10-22 00:10:32

Tags: neural-network deep-learning keras

I have been reading the Keras documentation to build my own MLP network implementing MLP backpropagation. I am familiar with MLPClassifier in sklearn, but I want to learn Keras for deep learning. Below is my first attempt. The network has 3 layers: 1 input (features = 64), 1 hidden, and 1 output, giving a (64, 64, 1) layout. The input is a numpy matrix X with 125K samples (64-dimensional), and y is a 1D numpy array of binary class labels (1, -1):

# Keras imports
from keras.models import Sequential
from sklearn.model_selection import train_test_split
from keras.layers import Dense, Dropout, Activation
from keras.initializers import RandomNormal, VarianceScaling, RandomUniform
from keras.optimizers import SGD, Adam, Nadam, RMSprop

# System imports
import sys
import os
import numpy as np
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'


def train_model(X, y, num_streams, num_stages):

    '''
    STEP1: Initialize the Model
    '''

    tr_X, ts_X, tr_y, ts_y = train_test_split(X, y, train_size=.8)
    model = initialize_model(num_streams, num_stages)

    '''
    STEP2: Train the Model
    '''
    model.compile(loss='binary_crossentropy',
                  optimizer=Adam(lr=1e-3),
                  metrics=['accuracy'])
    model.fit(tr_X, tr_y,
              validation_data=(ts_X, ts_y),
              epochs=3,
              batch_size=200)


def initialize_model(num_streams, num_stages):

    model = Sequential()
    hidden_units = 2 ** (num_streams + 1)
    # init = VarianceScaling(scale=5.0, mode='fan_in', distribution='normal')
    init_bound1 = np.sqrt(3.5 / ((num_stages + 1) + num_stages))
    init_bound2 = np.sqrt(3.5 / ((num_stages + 1) + hidden_units))
    init_bound3 = np.sqrt(3.5 / (hidden_units + 1))
    # drop_out = np.random.uniform(0, 1, 3)

    # This is the input layer (that's why you have to state input_dim value)
    model.add(Dense(num_stages,
                    input_dim=num_stages,
                    activation='relu',
                    kernel_initializer=RandomUniform(minval=-init_bound1, maxval=init_bound1)))

    model.add(Dense(hidden_units,
                    activation='relu',
                    kernel_initializer=RandomUniform(minval=-init_bound2, maxval=init_bound2)))

    # model.add(Dropout(drop_out[1]))

    # This is the output layer
    model.add(Dense(1,
                    activation='sigmoid',
                    kernel_initializer=RandomUniform(minval=-init_bound3, maxval=init_bound3)))

    return model

The problem is that, with the same dataset X, y, sklearn's MLPClassifier achieves 99% accuracy, whereas Keras's accuracy is poor, as shown below:

Train on 100000 samples, validate on 25000 samples
Epoch 1/3
100000/100000 [==============================] - 1s - loss: -0.5358 - acc: 0.0022 - val_loss: -0.7322 - val_acc: 0.0000e+00
Epoch 2/3
100000/100000 [==============================] - 1s - loss: -0.6353 - acc: 0.0000e+00 - val_loss: -0.7385 - val_acc: 0.0000e+00
Epoch 3/3
100000/100000 [==============================] - 1s - loss: -0.7720 - acc: 9.0000e-05 - val_loss: -0.9474 - val_acc: 5.2000e-04

I don't understand why. Am I missing something here? Any help is appreciated.

2 Answers:

Answer 0 (score: 0)

Check that you convert your label data to one-hot encoding before training the model.

For details on one-hot encoding, see https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/
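
One-hot encoding pairs with a multi-unit softmax output rather than the question's single sigmoid unit. A minimal sketch of that route, assuming the asker's 64-feature input and labels in {1, -1} (variable names here are illustrative, not from the question):

# Sketch: remap {1, -1} labels to {0, 1}, one-hot encode them, and train
# against a 2-unit softmax output instead of a single sigmoid unit.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

y = np.array([1, -1, 1, -1])              # example labels in {1, -1}
y_onehot = to_categorical((y + 1) // 2,   # {1, -1} -> {0, 1}
                          num_classes=2)

model = Sequential()
model.add(Dense(64, input_dim=64, activation='relu'))
model.add(Dense(2, activation='softmax'))  # one output unit per class
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])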

Answer 1 (score: 0)

I think the problem is that you are using a sigmoid output layer (bounded to [0, 1]) but your classes are (1, -1); you need to either change your output values or use tanh.
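
A sketch of the first fix, assuming the rest of the question's code stays unchanged: keep the sigmoid output and remap the labels so binary_crossentropy receives valid targets.

import numpy as np

y = np.array([1, -1, -1, 1])   # original labels in {1, -1}
y = (y + 1) // 2               # 1 -> 1, -1 -> 0: valid targets for sigmoid + binary_crossentropy

The tanh route would additionally require swapping binary_crossentropy for a loss that accepts targets in [-1, 1], such as hinge.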

Also, Keras layers may have different default parameters than sklearn's, so be sure to check the parameters in the documentation.
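
For reference, MLPClassifier's defaults are hidden_layer_sizes=(100,), activation='relu', solver='adam', alpha=1e-4 (L2 penalty), and learning_rate_init=1e-3. A rough Keras equivalent, sketched under the assumption of the question's 64-feature binary task:

# Sketch: approximate sklearn MLPClassifier's defaults in Keras.
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.regularizers import l2

model = Sequential()
model.add(Dense(100, input_dim=64, activation='relu',
                kernel_regularizer=l2(1e-4)))      # alpha=1e-4 L2 penalty
model.add(Dense(1, activation='sigmoid',
                kernel_regularizer=l2(1e-4)))
model.compile(loss='binary_crossentropy',
              optimizer=Adam(lr=1e-3),             # learning_rate_init=1e-3
              metrics=['accuracy'])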

One last thing: for the kernel_initializer, try glorot_uniform (Keras's default for Dense layers); it is a good default, including for tanh layers.
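
For illustration, replacing the question's hand-computed RandomUniform bounds with that initializer looks like this (a sketch, not the asker's code):

from keras.layers import Dense

# glorot_uniform is Keras's default kernel_initializer for Dense layers.
layer = Dense(64, activation='tanh', kernel_initializer='glorot_uniform')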
