How to correctly call and set the arguments for the Keras fit generator

Posted: 2019-05-14 04:12:22

Tags: keras neural-network

I am new to Keras, so I am confused by the Keras documentation and by other people's examples of using fit_generator. When I tested the code below, intending to train on 32 images at a time, each epoch went through all of the images (100 in this case) and then trained on the data again (for the 2 epochs specified):

# Create a generator that yields one image and one label at a time
# (because loading all of the data into memory will freeze my laptop)
import numpy as np
import cv2
from keras.utils import to_categorical

def generate_transform(imgs, lbls):
    while 1:
        for i in range(len(imgs)):
            img = np.array(cv2.resize(imgs[i], (224, 224)))
            lbl = to_categorical(lbls[i], num_classes=10)
            yield (img, lbl)

history =  model.fit_generator(generate_transform(x[:100], y[:100]),
                                   steps_per_epoch=100/32,
                                   samples_per_epoch=100, 
                                   nb_epoch=2,
                                   validation_data=generate_transform(x_test[:100], y_test[:100]),
                                   validation_steps=100)
                                   #nb_val_samples=100)

I received these UserWarnings:

D:\Users\jason\AppData\Local\Continuum\Anaconda3\lib\site-packages\ipykernel_launcher.py:8: UserWarning: The semantics of the Keras 2 argument `steps_per_epoch` is not the same as the Keras 1 argument `samples_per_epoch`. `steps_per_epoch` is the number of batches to draw from the generator at each epoch. Basically steps_per_epoch = samples_per_epoch/batch_size. Similarly `nb_val_samples`->`validation_steps` and `val_samples`->`steps` arguments have changed. Update your method calls accordingly.

D:\Users\jason\AppData\Local\Continuum\Anaconda3\lib\site-packages\ipykernel_launcher.py:8: UserWarning: Update your `fit_generator` call to the Keras 2 API: `fit_generator(<generator..., steps_per_epoch=100, validation_data=<generator..., validation_steps=100, epochs=2)`

The output looks like this:

Epoch 1/2
100/100 [==============================] - 84s 836ms/step - loss: 3.0745 - acc: 0.4500 - val_loss: 2.3886 - val_acc: 0.0300
Epoch 2/2
100/100 [==============================] - 86s 864ms/step - loss: 0.3654 - acc: 0.9000 - val_loss: 2.4644 - val_acc: 0.0900

My questions are:

  1. Did my call use these arguments, and the values I supplied for them, correctly?

  2. Did my model train on 32 images and labels at each step, and did each epoch run for 100/32 steps?

  3. Do I need to use the steps_per_epoch argument?

  4. Which argument should I use: validation_steps or nb_val_samples?

  5. Will my model validate on all 100 samples from the validation generator (i.e. x_test[:100]) 100 times (as validation_steps=100 suggests), or will it validate on a single sample 100 times (since the validation generator yields only one sample at a time)? And why doesn't the output show the number of validation steps?

  6. Did my model use the weights trained in the first epoch to train on the same data again, and is that why training accuracy jumped from 0.45 in the first epoch to 0.9 in the second?

Could you help me with the questions above?

Thank you.

1 Answer:

Answer 0 (score: 1)

I ran into this problem myself and fixed it in the code below {before: Keras 1.1.2 ==> after: Keras 2.2.4}:

# Old Keras==1.1.2 fit_generator
# history = model.fit_generator(
#     train_data_generator.get_data(),
#     samples_per_epoch=train_data_generator.get_num_files(),
#     nb_epoch=config["num_epochs"],
#     verbose=1,
#     validation_data=validation_data_generator.get_data(should_shuffle=False),
#     nb_val_samples=validation_data_generator.get_num_files(),
#     nb_worker=2,
#     max_q_size=batch_size,
#     pickle_safe=True)

# New, working Keras 2.2.4 fit_generator
history = model.fit_generator(
    train_data_generator.get_data(),
    verbose=1,
    validation_data=validation_data_generator.get_data(should_shuffle=False),
    steps_per_epoch=train_data_generator.get_num_files() // batch_size,
    epochs=config["num_epochs"],
    validation_steps=validation_data_generator.get_num_files() // batch_size,
    workers=2, use_multiprocessing=True,
    max_queue_size=batch_size)

Looking at your code, you only need steps_per_epoch (not samples_per_epoch), and nb_epoch should be changed to epochs. I don't fully understand your code or your training/validation setup (100 training and 100 validation samples?), and it's best to ask one question per post, but I'll take a stab at fixing your code (untested, of course):

Keep in mind that number_of_steps == number_of_samples // batch_size, and if 100 is num_training_samples, you will need a fairly small batch_size for number_of_steps to make sense:

history = model.fit_generator(
    generate_transform(x[:100], y[:100]),  # training data generator
    verbose=1,
    validation_data=generate_transform(x_test[:100], y_test[:100]),  # validation data generator
    steps_per_epoch=100 // batch_size,     # 100 is num_training_samples; divided by batch_size gives steps_per_epoch
    epochs=2,
    validation_steps=100 // batch_size     # 100 is num_val_samples; divided by batch_size gives validation_steps
    )
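
Note that for steps_per_epoch = num_samples // batch_size to line up with what Keras actually consumes, the generator has to yield a whole batch per call rather than a single sample (the question's generate_transform yields one image at a time). Below is a minimal sketch of a batched version, not part of the original answer; it assumes the same cv2 resize and to_categorical preprocessing as the question, and it drops the last partial batch:

    # Minimal sketch (assumption, not from the original answer): a generator
    # that yields one full batch per call, so steps_per_epoch = num_samples // batch_size
    # matches what Keras draws from it each epoch.
    import numpy as np
    import cv2
    from keras.utils import to_categorical

    def generate_transform_batched(imgs, lbls, batch_size=32):
        while True:
            # step through the data in strides of batch_size, dropping the last partial batch
            for start in range(0, len(imgs) - batch_size + 1, batch_size):
                batch_imgs = np.array([cv2.resize(imgs[i], (224, 224))
                                       for i in range(start, start + batch_size)])
                batch_lbls = to_categorical(lbls[start:start + batch_size], num_classes=10)
                yield (batch_imgs, batch_lbls)

    # Hypothetical usage with the numbers from the question (100 samples, batch_size=32):
    # history = model.fit_generator(
    #     generate_transform_batched(x[:100], y[:100], batch_size=32),
    #     steps_per_epoch=100 // 32,   # 3 steps of 32 images per epoch
    #     epochs=2,
    #     validation_data=generate_transform_batched(x_test[:100], y_test[:100], batch_size=32),
    #     validation_steps=100 // 32)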