OOM error even after clearing the GPU session

Date: 2019-01-25 15:32:04

Tags: tensorflow gpu conv-neural-network cross-validation

I am applying a CNN to a dataset of 4684 images of size 2000 × 102. I am using 5-fold cross-validation in Keras to record performance metrics. I am calling `del model`, `del history` and `K.clear_session()`, but after two folds it gives an OOM error. Please see the algorithm below. Running on a 1080Ti with 11 GB of memory; the PC has 32 GB of RAM.

import numpy as np
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.utils import to_categorical
from sklearn.model_selection import KFold
from sklearn.metrics import (accuracy_score, mean_squared_error, confusion_matrix,
                             roc_auc_score, cohen_kappa_score)

kf = KFold(n_splits=5, shuffle=True)
kf.get_n_splits(data_new)

AUC_SCORES = []
KAPPA_SCORES = []
MSE = []
Accuracy = []
for train, test in kf.split(data_new):
    Conf_model = None
    Conf_model = Sequential()
    Conf_model.add(Conv2D(32, (20,102),activation='relu',input_shape=(img_rows,img_cols,1),padding='same',data_format='channels_last'))
    Conf_model.add(MaxPooling2D((2,2),padding='same'))
    Conf_model.add(Dropout(0.2))
    Conf_model.add(Flatten())     
    Conf_model.add(Dense(64, activation='relu'))  
    Conf_model.add(Dropout(0.5))        
    Conf_model.add(Dense(num_classes, activation='softmax'))
    Conf_model.compile(loss=keras.losses.binary_crossentropy, optimizer=keras.optimizers.Adam(),metrics=['accuracy'])

    data_train = data_new[train]
    data_train.shape
    labels_train = labels[train]

    data_test = data_new[test]
    data_test_Len = len(data_test)
    data_train = data_train.reshape(data_train.shape[0],img_rows,img_cols,1)
    data_test = data_test.reshape(data_test.shape[0],img_rows,img_cols,1)
    data_train = data_train.astype('float32')
    data_test = data_test.astype('float32')
    labels_test = labels[test]
    test_lab = list(labels_test)
    labels_train = to_categorical(labels_train,num_classes)
    labels_test_Shot = to_categorical(labels_test,num_classes)
    print("Running Fold")
    history = Conf_model.fit(data_train, labels_train, batch_size=batch_size,epochs=epochs,verbose=1)
    Conf_predicted_classes=Conf_model.predict(data_test)
    Conf_predict=Conf_model.predict_classes(data_test)
    Conf_Accuracy = accuracy_score(labels_test, Conf_predict)
    Conf_Mean_Square = mean_squared_error(labels_test, Conf_predict)
    Label_predict = list(Conf_predict)
    Conf_predicted_classes = np.argmax(np.round(Conf_predicted_classes),axis=1)
    Conf_Confusion = confusion_matrix(labels_test, Conf_predicted_classes)
    print(Conf_Confusion)
    Conf_AUC = roc_auc_score(labels_test, Conf_predict)
    print("AUC value for Conf Original Data: ", Conf_AUC)
    Conf_KAPPA = cohen_kappa_score(labels_test, Conf_predict)
    print("Kappa value for Conf Original Data: ", Conf_KAPPA)
    AUC_SCORES.append(Conf_AUC)
    KAPPA_SCORES.append(abs(Conf_KAPPA))
    MSE.append(Conf_Mean_Square)
    Accuracy.append(Conf_Accuracy)
    del history
    del Conf_model
    K.clear_session()

The error is below:

ResourceExhaustedError: OOM when allocating tensor with shape[1632000,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
     [[{{node training/Adam/gradients/dense_1/MatMul_grad/MatMul_1}} = MatMul[T=DT_FLOAT, transpose_a=true, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](flatten_1/Reshape, training/Adam/gradients/dense_1/Relu_grad/ReluGrad)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

I tried the code below, and it seems to work.

def clear_mem():
    # Close the current backend session (if any) and hand Keras a fresh one.
    try:
        K.get_session().close()
    except Exception:
        pass
    sess = tf.InteractiveSession()
    K.set_session(sess)

1 Answer:

Answer 0 (score: 2)

Updating with some suggestions from the comments, consider:

1) Create a bash script that launches separate Python scripts (when a process dies, its memory is released) and have them write their results to separate files for later processing and combining. For example, have the bash script iterate and pass 1) a seed and 2) the current fold index to the Python script. With a fixed seed you make sure nothing leaks between folds, and with the index each script only grabs its own part of the data.
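Suggestion 1) can be sketched roughly as follows. The script name `train_fold.py`, its `--seed`/`--fold` arguments, and the per-fold results files are all hypothetical, not part of the question:

```python
# Minimal sketch of suggestion 1): run every fold as its own Python
# process, so the OS reclaims all GPU memory when each process exits.
import subprocess
import sys

def fold_command(script, seed, fold):
    # The same seed in every process gives an identical KFold split,
    # so each process can safely train just its own fold.
    return [sys.executable, script, "--seed", str(seed), "--fold", str(fold)]

# Driver (commented out because train_fold.py is hypothetical):
# for fold in range(5):
#     subprocess.run(fold_command("train_fold.py", 42, fold), check=True)
# Each invocation would write its metrics to its own results file,
# which a final step reads back and combines.
```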

2) Use Python multiprocessing to run the folds in separate processes and collect the results.
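Suggestion 2) might look like the minimal sketch below; `train_one_fold` is a hypothetical stand-in for the model building and fitting done in the question:

```python
# Minimal sketch of suggestion 2): one multiprocessing.Process per fold.
# GPU memory held by TensorFlow is freed when each child process exits.
import multiprocessing as mp

def train_one_fold(fold, queue):
    # Real version: build/compile the model, fit on this fold's data,
    # compute AUC, kappa, MSE and accuracy, then report them back.
    queue.put({"fold": fold, "auc": 0.0})  # placeholder metrics

def run_all_folds(n_splits=5):
    results = []
    for fold in range(n_splits):
        queue = mp.Queue()
        p = mp.Process(target=train_one_fold, args=(fold, queue))
        p.start()
        results.append(queue.get())  # read before join to avoid a full-pipe deadlock
        p.join()                     # child exit releases its GPU memory
    return results
```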

I have used tensorflow inside Python multiprocessing before, which is why I recommend approach 1): I ran into a number of headaches doing it that way.

Do these approaches make sense?