Replicating sklearn's MLPClassifier() in Keras

Asked: 2017-06-19 21:03:51

Tags: scikit-learn neural-network classification keras sequential

I am new to Keras and I am working on an ML problem. About the data:

It has 5 input features, 4 output classes, and about 26,000 records.

I first tried MLPClassifier(), as follows:

clf = MLPClassifier(verbose=True, tol=1e-6, batch_size=300, hidden_layer_sizes=(200,100,100,100), max_iter=500, learning_rate_init= 0.095, solver='sgd', learning_rate='adaptive', alpha = 0.002)
clf.fit(train, y_train)

After testing, my LB score was usually around 99.90. To get more flexibility over the model, I decided to implement the same model in Keras and then tweak it to try to push the LB score higher. I came up with the following:

model = Sequential()
model.add(Dense(200, input_dim=5, init='uniform', activation = 'relu'))
model.add(Dense(100, init='uniform', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(100, init='uniform', activation='relu'))
model.add(Dense(100, init='uniform', activation='relu'))
model.add(Dense(4, init='uniform', activation='softmax'))

lrate = 0.095
decay = lrate/125
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
hist = model.fit(train, categorical_labels, nb_epoch=125, batch_size=256, shuffle=True,  verbose=2)

This model looks very similar to the MLPClassifier() model, but the LB score is around 97, which is quite disappointing. Can someone tell me what exactly is wrong with this model, or how to replicate the MLPClassifier model in Keras? I think regularization might be one of the factors going wrong here.
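For reference, the alpha=0.002 I pass to MLPClassifier is an L2 penalty on the weights, while my Keras model only regularizes with Dropout. If regularization really is the difference, I suppose the closest Keras equivalent would be a kernel_regularizer on each Dense layer, roughly along these lines (just a sketch; the exact scaling of alpha differs between the two libraries):

from keras import regularizers

reg = regularizers.l2(0.002)  # roughly mirrors MLPClassifier's alpha
model = Sequential()
model.add(Dense(200, input_dim=5, activation='relu', kernel_regularizer=reg))
model.add(Dense(100, activation='relu', kernel_regularizer=reg))
model.add(Dense(100, activation='relu', kernel_regularizer=reg))
model.add(Dense(100, activation='relu', kernel_regularizer=reg))
model.add(Dense(4, activation='softmax', kernel_regularizer=reg))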

Edit 1: Loss curves: [loss curve image]

Edit 2: Here is the code:

#import libraries
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss
from sklearn.preprocessing import MinMaxScaler, scale, StandardScaler, Normalizer

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import regularizers
from keras.optimizers import SGD
from keras.utils import to_categorical  # assumed: needed to build the one-hot labels used in fit()

#load data
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

#generic preprocessing 
#encode as integer
mapping = {'Front':0, 'Right':1, 'Left':2, 'Rear':3}
train = train.replace({'DetectedCamera':mapping})
test = test.replace({'DetectedCamera':mapping})
#renaming column
train.rename(columns = {'SignFacing (Target)': 'Target'}, inplace=True)
mapping = {'Front':0, 'Left':1, 'Rear':2, 'Right':3}
train = train.replace({'Target':mapping})


#split data
y_train = train['Target']
test_id = test['Id']
train.drop(['Target','Id'], inplace=True, axis=1)
test.drop('Id',inplace=True,axis=1)
train_train, train_test, y_train_train, y_train_test = train_test_split(train, y_train)

# one-hot encode the targets for categorical_crossentropy
# (assumption: the original post never shows how categorical_labels was created)
categorical_labels = to_categorical(y_train_train, num_classes=4)
categorical_labels_test = to_categorical(y_train_test, num_classes=4)

scaler = StandardScaler()
scaler.fit(train_train)
train_train = scaler.transform(train_train)
train_test = scaler.transform(train_test)
test = scaler.transform(test)

#training and modelling
model = Sequential()
model.add(Dense(200, input_dim=5, kernel_initializer='uniform', activation = 'relu'))
model.add(Dense(100, kernel_initializer='uniform', activation='relu'))
# model.add(Dropout(0.2))
# model.add(Dense(100, init='uniform', activation='relu'))
# model.add(Dense(100, init='uniform', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(100, kernel_initializer='uniform', activation='relu'))
model.add(Dense(100, kernel_initializer='uniform', activation='relu'))
model.add(Dense(4, kernel_initializer='uniform', activation='softmax'))

lrate = 0.095
decay = lrate/250
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
hist = model.fit(train_train, categorical_labels, validation_data=(train_test, categorical_labels_test), nb_epoch=100, batch_size=256, shuffle=True,  verbose=2)

Edit 3: Here are the files: train.csv test.csv

1 Answer:

Answer 0 (score: 0)

To get a true scikit estimator, you can use KerasClassifier from tensorflow.keras.wrappers.scikit_learn. For example:

from sklearn.datasets import make_classification
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier


# synthetic data shaped like the question's dataset: 26,000 rows, 5 features, 4 classes
X, y = make_classification(
    n_samples=26000, n_features=5, n_classes=4, n_informative=3, random_state=0
)


def build_fn(optimizer):
    model = Sequential()
    model.add(
        Dense(200, input_dim=5, kernel_initializer="he_normal", activation="relu")
    )
    model.add(Dense(100, kernel_initializer="he_normal", activation="relu"))
    model.add(Dense(100, kernel_initializer="he_normal", activation="relu"))
    model.add(Dense(100, kernel_initializer="he_normal", activation="relu"))
    model.add(Dense(4, kernel_initializer="he_normal", activation="softmax"))
    model.compile(
        loss="categorical_crossentropy",
        optimizer=optimizer,
        metrics=[
            keras.metrics.Precision(name="precision"),
            keras.metrics.Recall(name="recall"),
            keras.metrics.AUC(name="auc"),
        ],
    )
    return model


# the wrapper routes extra keyword arguments to build_fn (optimizer) and to fit() (epochs, batch_size)
clf = KerasClassifier(build_fn, optimizer="rmsprop", epochs=500, batch_size=300)
clf.fit(X, y)
clf.predict(X)
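Because KerasClassifier exposes the scikit-learn estimator interface (fit, predict, get_params), the wrapped model can be dropped into the usual sklearn utilities. A minimal sketch, assuming the X, y and build_fn defined above (with fewer epochs just to keep the example quick):

from sklearn.model_selection import cross_val_score

# the wrapped Keras model behaves like any other scikit-learn classifier
clf = KerasClassifier(build_fn, optimizer="rmsprop", epochs=20, batch_size=300)
scores = cross_val_score(clf, X, y, cv=3, scoring="accuracy")
print(scores.mean())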