What happens after adding data augmentation?

Date: 2017-07-10 01:13:52

Tags: python machine-learning tensorflow deep-learning

I am using the Kaggle "Dogs vs. Cats" dataset and following TensorFlow's CIFAR-10 tutorial (I left out weight decay, moving averages, and L2 loss for convenience). I had trained my network successfully, but when I added data augmentation to one part of my code, something strange happened: the loss never decreases, even after thousands of steps (before adding it, everything was fine). The code is shown below:

def get_batch(image, label, image_w, image_h, batch_size, capacity, test_flag=False):
  '''
  Args:
      image: list type
      label: list type
      image_w: image width
      image_h: image height
      batch_size: batch size
      capacity: the maximum number of elements in the queue
      test_flag: create training batch or test batch
  Returns:
      image_batch: 4D tensor [batch_size, width, height, 3], dtype=tf.float32
      label_batch: 1D tensor [batch_size], dtype=tf.int32
  '''

  image = tf.cast(image, tf.string)
  label = tf.cast(label, tf.int32)

  # make an input queue
  input_queue = tf.train.slice_input_producer([image, label])

  label = input_queue[1]
  image_contents = tf.read_file(input_queue[0])
  image = tf.image.decode_jpeg(image_contents, channels=3)

  ####################################################################
  # Data augmentation should go here,
  # but at test time leave the images as they are.

  if not test_flag:
    image = tf.image.resize_image_with_crop_or_pad(image, RESIZED_IMG, RESIZED_IMG)
    # Randomly crop a [height, width] section of the image.
    # NOTE: tf.random_crop expects [height, width, channels].
    distorted_image = tf.random_crop(image, [image_h, image_w, 3])

    # Randomly flip the image horizontally.
    distorted_image = tf.image.random_flip_left_right(distorted_image)

    # Because these operations are not commutative, consider randomizing
    # the order of their operation.
    # NOTE: since per_image_standardization zeros the mean and makes
    # the stddev unit, this likely has no effect; see tensorflow#1458.
    distorted_image = tf.image.random_brightness(distorted_image, max_delta=63)

    image = tf.image.random_contrast(distorted_image, lower=0.2, upper=1.8)
  else:
    # resize_image_with_crop_or_pad takes (image, target_height, target_width).
    image = tf.image.resize_image_with_crop_or_pad(image, image_h, image_w)

  ######################################################################

  # Subtract off the mean and divide by the variance of the pixels.
  image = tf.image.per_image_standardization(image)
  # Set the shapes of tensors.
  image.set_shape([image_h, image_w, 3])
  # label.set_shape([1])

  image_batch, label_batch = tf.train.batch([image, label],
                                            batch_size=batch_size,
                                            num_threads=64,
                                            capacity=capacity)

  label_batch = tf.reshape(label_batch, [batch_size])
  image_batch = tf.cast(image_batch, tf.float32)

  return image_batch, label_batch

1 Answer:

Answer 0 (score: 0):

Make sure that the limits you use (e.g. max_delta=63 for brightness and upper=1.8 for contrast) are low enough that the images are still recognizable. Another possible problem is applying the augmentation over and over again, so that after a few iterations the image becomes completely distorted (although I did not spot that bug in your snippet).
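To get a feel for how aggressive these limits are, here is a minimal NumPy sketch (not the poster's code, just an illustration of what the two TensorFlow ops roughly do): with max_delta=63 on a [0, 255] scale, the brightness shift can be about a quarter of the full range, which is strong but usually still recognizable.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_brightness(img, max_delta):
    """Add one uniform random offset in [-max_delta, max_delta] to all pixels."""
    delta = rng.uniform(-max_delta, max_delta)
    return np.clip(img.astype(np.float32) + delta, 0, 255)

def random_contrast(img, lower, upper):
    """Scale pixels around the per-channel mean by a random factor in [lower, upper]."""
    factor = rng.uniform(lower, upper)
    mean = img.astype(np.float32).mean(axis=(0, 1), keepdims=True)
    return np.clip((img.astype(np.float32) - mean) * factor + mean, 0, 255)

# A random 32x32 RGB "image" stands in for a decoded JPEG.
img = rng.integers(0, 256, size=(32, 32, 3)).astype(np.uint8)
out = random_contrast(random_brightness(img, max_delta=63), lower=0.2, upper=1.8)
print(out.shape, out.min(), out.max())
```

If the output values end up pinned at 0 or 255 for most pixels, the limits are too aggressive and the network is effectively training on noise.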

I suggest adding visualization of your data to TensorBoard. To display images, use the tf.summary.image method. You will be able to see the result of the augmentation clearly.
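A minimal sketch of that suggestion, written against the 1.x graph API the question uses (via tf.compat.v1 so it also runs under TF 2.x); the batch shape here is an assumption, and image_batch would normally come from get_batch:

```python
import tensorflow as tf

tf1 = tf.compat.v1  # the question uses the TF 1.x graph API
tf1.disable_eager_execution()

# Stand-in for the batch returned by get_batch(...) (shape is an assumption).
image_batch = tf1.placeholder(tf.float32, [None, 208, 208, 3])

# Log up to 4 augmented images per summary step so TensorBoard can show them.
tf1.summary.image('augmented_images', image_batch, max_outputs=4)
summary_op = tf1.summary.merge_all()

# In the training loop (sketch):
# writer = tf1.summary.FileWriter('./logs', sess.graph)
# summary = sess.run(summary_op, feed_dict={...})
# writer.add_summary(summary, global_step)
```

Note that per_image_standardization output is no longer in [0, 255], so it helps to log the images before standardization if you want them to look natural in TensorBoard.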


This gist can serve as an example.