Optimizing the number of best features

Date: 2018-02-08 00:19:16

Tags: optimization scikit-learn neural-network keras feature-selection

I am training a neural network with Keras. Each time I train the model, I select a slightly different set of features using tree-based feature selection via ExtraTreesClassifier(). After every training run I compute the AUROC on my validation set and then go back to the top of the loop to train the model again with a different feature set. This process is very inefficient, and I would like to select the optimal number of features using some optimization technique available in a Python library. The function to be optimized is the cross-validation AUROC, which can only be computed after the model has been trained on the selected features. The features are selected by ExtraTreesClassifier(n_estimators=10, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto'). So the objective function does not depend directly on the parameters being optimized: the AUROC objective is tied to the neural network training, and the network takes as input the features extracted according to their importance in the ExtraTreesClassifier. In a sense, the parameters over which I am optimizing the AUROC are n_estimators, criterion, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_features and the other variables of the ExtraTreesClassifier, and these are not directly related to the AUROC.
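For concreteness, the inefficient loop being described might look like the following minimal sketch; X_train, y_train, X_val, y_val and the model factory build_nn are hypothetical placeholders, not part of the original code:

    # Sketch of the manual loop: pick features per n_estimators, retrain the NN, score AUROC
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.feature_selection import SelectFromModel
    from sklearn.metrics import roc_auc_score

    results = {}
    for n in [10, 20, 30]:  # candidate n_estimators values
        # stage 1: select features with the tree-based selector
        m = ExtraTreesClassifier(n_estimators=n).fit(X_train, y_train)
        sel = SelectFromModel(m, prefit=True)
        X_tr, X_va = sel.transform(X_train), sel.transform(X_val)

        # stage 2: retrain the Keras network on the selected features and score it
        nn = build_nn(input_dim=X_tr.shape[1])  # hypothetical Keras model factory
        nn.fit(X_tr, y_train, epochs=20, batch_size=400, verbose=0)
        results[n] = roc_auc_score(y_val, nn.predict(X_va).ravel())

    best_n = max(results, key=results.get)  # n_estimators with the highest validation AUROC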

2 answers:

Answer 0 (score: 1)

You should use GridSearchCV in combination with a Pipeline. Find more here. Use a Pipeline when you need to run a set of instructions in sequence to arrive at the optimal configuration.

For example, you need to perform the following steps:

1. Select the KBest features
2. Use a classifier, e.g. DecisionTree or NaiveBayes

By combining GridSearchCV and Pipeline, you can select the features that work best with a specific classifier, the best configuration of that classifier, and so on, based on a scoring criterion.

Example:

#imports assumed for this example
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

#set your configuration options
param_grid = [{
    'classify': [DecisionTreeClassifier()], #first option: use DT
    'kbest__k': range(1, 22), #range of k in SelectKBest(k)

    #classifier-specific configs
    'classify__criterion': ('gini', 'entropy'),
    'classify__min_samples_split': range(2, 10),
    'classify__min_samples_leaf': range(1, 10)
},
{
    'classify': [GaussianNB()], #second option: use NB
    'kbest__k': range(1, 22), #range of k in SelectKBest(k)
}]

#DT is set as the default classifier here, but GridSearchCV overrides it
#with each candidate from param_grid
pipe = Pipeline(steps=[("kbest", SelectKBest()), ("classify", DecisionTreeClassifier())])

#Here GridSearchCV does the work; this may take time, especially if you
#have more than one classifier to evaluate
grid = GridSearchCV(pipe, param_grid=param_grid, cv=10, scoring='f1')
grid.fit(features, labels)

#Print the best params so you can reuse the optimal setting later without
#re-running the grid search (by commenting out the grid-search lines)
print(grid.best_params_)

#You can now use Pipeline again to wrap the steps with the best configs to build your model
pipe = Pipeline(steps=[("kbest", SelectKBest(k=12)), ("classify", DecisionTreeClassifier(criterion="entropy", min_samples_leaf=2, min_samples_split=9))])
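From there, a short usage sketch (hedged; new_data is a hypothetical held-out set, features and labels are the same arrays used above):

    #fit the frozen pipeline with the best configs and use it directly
    pipe.fit(features, labels)
    predictions = pipe.predict(new_data)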

Hope this helps.

Answer 1 (score: 0)

My program flow consists of two stages.

I am using sklearn's ExtraTreesClassifier together with the SelectFromModel method to select the most important features. Note that ExtraTreesClassifier takes many parameters as input (such as n_estimators) for the classification, and different values of n_estimators will ultimately yield, through SelectFromModel, different sets of important features. This means I could optimize n_estimators to obtain the best features.
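To illustrate this point, a minimal sketch (train_cv_x and train_cv_y are the data frames built in the code further below):

    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.feature_selection import SelectFromModel

    # different n_estimators values can yield different "important" feature sets
    for n in [10, 20, 30]:
        m = ExtraTreesClassifier(n_estimators=n).fit(train_cv_x, train_cv_y)
        sel = SelectFromModel(m, prefit=True)
        # print which columns survive the importance threshold for this n
        print(n, list(train_cv_x.columns[sel.get_support()]))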

In the second stage, I am training my Keras NN model on the features selected in the first stage. I am using AUROC as the score for the grid search, but this AUROC is computed with the Keras-based neural network. I want to grid-search over n_estimators in my ExtraTreesClassifier to optimize the AUROC of the Keras neural network. I know I have to use a Pipeline, but I am confused about how to implement the two together.

I do not know where to place the Pipeline in my code. The error I am getting is TypeError: estimator should be an estimator implementing 'fit' method, <function fs at 0x0000023A12974598> was passed

#################################################################################
# Imports assumed by the code below.
#################################################################################
import numpy as np
import pandas as pd
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import GridSearchCV, PredefinedSplit

#################################################################################
# I concatenate the CV set and the train set so that I may select the most
# important features in both CV and Train together.
#################################################################################

frames11 = [train_x_upsampled, cross_val_x_upsampled]
train_cv_x = pd.concat(frames11)
frames22 = [train_y_upsampled, cross_val_y_upsampled]
train_cv_y = pd.concat(frames22)


def fs(n_estimators):
  m = ExtraTreesClassifier(n_estimators=n_estimators)  # bug fix: `tree_number` was undefined
  m.fit(train_cv_x, train_cv_y)
  sel = SelectFromModel(m, prefit=True)


  ##################################################
  # The code below gets the names of the selected important features.
  ##################################################

  feature_idx = sel.get_support()
  feature_name = train_cv_x.columns[feature_idx]
  feature_name =pd.DataFrame(feature_name)

  X_new = sel.transform(train_cv_x)
  X_new =pd.DataFrame(X_new)

  ######################################################################
  # So now the selected important features are in the data-frame X_new. In the
  # code below, I am again dividing the data into train and CV, but this time
  # only with the selected important features.
  ######################################################################

  train_selected_x = X_new.iloc[0:train_x_upsampled.shape[0], :]
  cv_selected_x = X_new.iloc[train_x_upsampled.shape[0]:train_x_upsampled.shape[0]+cross_val_x_upsampled.shape[0], :]

  train_selected_y = train_cv_y.iloc[0:train_x_upsampled.shape[0], :]
  cv_selected_y = train_cv_y.iloc[train_x_upsampled.shape[0]:train_x_upsampled.shape[0]+cross_val_x_upsampled.shape[0], :]

  train_selected_x=train_selected_x.values
  cv_selected_x=cv_selected_x.values
  train_selected_y=train_selected_y.values
  cv_selected_y=cv_selected_y.values

  ##############################################################
  # Now, with this new data which only contains the important features,
  # I am training a neural network as below.
  ##############################################################
  def create_model():
     n_x_new=train_selected_x.shape[1]

     model = Sequential()
     model.add(Dense(n_x_new, input_dim=n_x_new, kernel_initializer='glorot_normal', activation='relu'))
     model.add(Dense(10, kernel_initializer='glorot_normal', activation='relu'))
     model.add(Dropout(0.8))

     model.add(Dense(1, kernel_initializer='glorot_normal', activation='sigmoid'))
     optimizer = keras.optimizers.Adam(lr=0.001)


     model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
     return model  # bug fix: build_fn must return the compiled model for KerasClassifier

  seed = 7
  np.random.seed(seed)

model = KerasClassifier(build_fn=create_model, epochs=20, batch_size=400, verbose=0)

n_estimators=[10,20,30]
param_grid = dict(n_estimators=n_estimators)

grid = GridSearchCV(estimator=fs, param_grid=param_grid,scoring='roc_auc',cv = PredefinedSplit(test_fold=my_test_fold), n_jobs=1)
grid_result = grid.fit(np.concatenate((train_selected_x, cv_selected_x), axis=0), np.concatenate((train_selected_y, cv_selected_y), axis=0))
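For reference, the TypeError arises because GridSearchCV expects an estimator object implementing fit, not a plain function like fs. A minimal sketch of one way to chain the two stages (hedged, not tested against the original data; it assumes SelectFromModel wraps the ExtraTreesClassifier directly, and create_model would need an input layer sized to however many features the selector actually keeps):

    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.feature_selection import SelectFromModel
    from sklearn.model_selection import GridSearchCV, PredefinedSplit
    from sklearn.pipeline import Pipeline
    from keras.wrappers.scikit_learn import KerasClassifier

    # SelectFromModel is itself an estimator, so it can be a Pipeline step;
    # nested parameters are addressed with the step__param__subparam syntax
    pipe = Pipeline(steps=[
        ("fs", SelectFromModel(ExtraTreesClassifier())),
        ("nn", KerasClassifier(build_fn=create_model, epochs=20, batch_size=400, verbose=0)),
    ])

    # grid over the tree count of the selector's inner ExtraTreesClassifier
    param_grid = {"fs__estimator__n_estimators": [10, 20, 30]}

    # caveat: create_model's input_dim must match the number of selected features,
    # which varies per candidate, so the build_fn needs to be adapted accordingly
    grid = GridSearchCV(pipe, param_grid=param_grid, scoring='roc_auc',
                        cv=PredefinedSplit(test_fold=my_test_fold), n_jobs=1)
    grid_result = grid.fit(train_cv_x.values, train_cv_y.values.ravel())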