Serving my own retrained Inception model on Docker with TensorFlow Serving

Date: 2017-04-03 21:31:42

Tags: python tensorflow tensorflow-serving

I followed the https://tensorflow.github.io/serving/ and https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html?index=..%2F..%2Findex#0 tutorials and got my server running without any problems.

Now I want to serve my own retrained model. I have my retrained graph, and when I run inference against it directly everything looks fine (see the sketch below). But when I export the graph for serving and run model_server, the responses are wrong (my labels are missing and the scores are strange).
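This is roughly how I run those direct inferences; the file and tensor names (retrained_graph.pb, retrained_labels.txt, 'DecodeJpeg/contents:0', 'final_result:0') are the tensorflow-for-poets defaults, and the paths are only illustrative:

import tensorflow as tf

# Illustrative paths; these are the files written by retrain.py in the codelab.
graph_path = 'D:\\retrained_graph.pb'
labels_path = 'D:\\retrained_labels.txt'
image_path = 'D:\\test_image.jpg'

labels = [line.rstrip() for line in tf.gfile.GFile(labels_path)]
image_data = tf.gfile.FastGFile(image_path, 'rb').read()

# Load the retrained graph.
with tf.gfile.FastGFile(graph_path, 'rb') as f:
  graph_def = tf.GraphDef()
  graph_def.ParseFromString(f.read())
  tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
  # 'final_result:0' / 'DecodeJpeg/contents:0' are the codelab's default
  # output and input tensor names.
  softmax = sess.graph.get_tensor_by_name('final_result:0')
  predictions = sess.run(softmax, {'DecodeJpeg/contents:0': image_data})[0]
  for i in predictions.argsort()[::-1][:3]:
    print('%s (score = %.5f)' % (labels[i], predictions[i]))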

I export the model with the following code:

# Imports assumed by this snippet; FLAGS and preprocess_image are defined
# elsewhere in the script, as in the TF Serving inception export example.
import os

import tensorflow as tf

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import utils
from tensorflow.python.util import compat

from inception import inception_model  # from the tensorflow/models Inception code


def export(session, image_lists):
  # Load the ImageNet synsets and their human-readable descriptions.
  synsets = []
  with open('D:\\imagenet_lsvrc_2015_synsets.txt') as f:
    synsets = f.read().splitlines()

  # Create synset->metadata mapping.
  texts = {}
  with open('D:\\imagenet_metadata.txt') as f:
    for line in f.read().splitlines():
      parts = line.split('\t')
      assert len(parts) == 2
      texts[parts[0]] = parts[1]

  with session.graph.as_default():
    # Build inference model.
    # Please refer to the TensorFlow Inception model for details.

    # Input transformation: parse serialized tf.Example protos into JPEG
    # strings and preprocess them into the image batch fed to Inception.
    serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
    feature_configs = {
        'image/encoded': tf.FixedLenFeature(shape=[], dtype=tf.string),
    }
    tf_example = tf.parse_example(serialized_tf_example, feature_configs)
    jpegs = tf_example['image/encoded']
    images = tf.map_fn(preprocess_image, jpegs, dtype=tf.float32)

    # Run inference.
    logits, _ = inception_model.inference(images, 2 + 1)

    # Transform output to top-K result.
    values, indices = tf.nn.top_k(logits, 3)

    print(str(values))

    # Create a constant string Tensor where the i'th element is
    # the human readable class description for the i'th index.
    # Note that the 0th index is an unused background class
    # (see inception model definition code).
    class_descriptions = ['unused background']
    for s in synsets:
      class_descriptions.append(texts[s])

    #class_descriptions = ['unused background']
    #class_descriptions = ['unused background', 'noeyeglasses', 'eyeglasses']
    #for label_index, label_name in enumerate(image_lists.keys()):
    #  print(str(label_index))
    #  class_descriptions.append(label_name)

    #print(class_descriptions)
    class_tensor = tf.constant(class_descriptions)

    print("created the class_tensor: ", class_tensor)

    # Map the top-K indices to human-readable class strings.
    classes = tf.contrib.lookup.index_to_string(
        tf.to_int64(indices), mapping=class_tensor)

    print("created classes")

    # Restore variables from training checkpoint.
    variable_averages = tf.train.ExponentialMovingAverage(
        inception_model.MOVING_AVERAGE_DECAY)
    variables_to_restore = variable_averages.variables_to_restore()
    saver = tf.train.Saver(variables_to_restore)

    #with tf.Session() as sess:
    with session as sess:
      # Restore variables from training checkpoints.
      ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
      #if ckpt and ckpt.model_checkpoint_path:
      #  saver.restore(sess, ckpt.model_checkpoint_path)
      #  # Assuming model_checkpoint_path looks something like:
      #  #   /my-favorite-path/imagenet_train/model.ckpt-0,
      #  # extract global_step from it.
      #  global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
      #  print('Successfully loaded model from %s at step=%s.' % (
      #      ckpt.model_checkpoint_path, global_step))
      #else:
      #  print('No checkpoint file found at %s' % FLAGS.checkpoint_dir)
      #  return

      # Export inference model.
      output_path = os.path.join(
          compat.as_bytes(FLAGS.output_dir),
          compat.as_bytes(str(FLAGS.model_version)))
      print('Exporting trained model to', output_path)
      builder = saved_model_builder.SavedModelBuilder(output_path)

      # Build the signature_def_map.
      print('classify_inputs_tensor_info')
      classify_inputs_tensor_info = utils.build_tensor_info(
          serialized_tf_example)
      classes_output_tensor_info = utils.build_tensor_info(classes)
      scores_output_tensor_info = utils.build_tensor_info(values)

      print('classification_signature')
      classification_signature = signature_def_utils.build_signature_def(
          inputs={
              signature_constants.CLASSIFY_INPUTS: classify_inputs_tensor_info
          },
          outputs={
              signature_constants.CLASSIFY_OUTPUT_CLASSES:
                  classes_output_tensor_info,
              signature_constants.CLASSIFY_OUTPUT_SCORES:
                  scores_output_tensor_info
          },
          method_name=signature_constants.CLASSIFY_METHOD_NAME)

      predict_inputs_tensor_info = utils.build_tensor_info(jpegs)

      print('prediction_signature')
      prediction_signature = signature_def_utils.build_signature_def(
          inputs={'images': predict_inputs_tensor_info},
          outputs={
              'classes': classes_output_tensor_info,
              'scores': scores_output_tensor_info
          },
          method_name=signature_constants.PREDICT_METHOD_NAME)

      print(prediction_signature)

      print('legacy_init_op')
      legacy_init_op = tf.group(
          tf.tables_initializer(), name='legacy_init_op')

      print('initialize_all_variables')
      init = tf.global_variables_initializer()

      sess.run(init)
      sess.run(legacy_init_op)

      print('builder')
      builder.add_meta_graph_and_variables(
          sess, [tag_constants.SERVING],
          signature_def_map={
              'predict_images':
                  prediction_signature,
              signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                  classification_signature,
          },
          legacy_init_op=legacy_init_op)

      print('save')
      builder.save()
      print('Successfully exported model to %s' % FLAGS.output_dir)
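After builder.save() I can sanity-check the export by loading the SavedModel back in a plain Python session before putting it behind model_server. This is only a sketch; the export directory and image path are illustrative:

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

export_dir = 'D:\\export\\1'  # illustrative: FLAGS.output_dir/FLAGS.model_version

with tf.Session(graph=tf.Graph()) as sess:
  # load() also runs the stored legacy_init_op, which initializes the lookup table.
  meta_graph_def = tf.saved_model.loader.load(
      sess, [tag_constants.SERVING], export_dir)
  sig = meta_graph_def.signature_def['predict_images']
  jpeg = tf.gfile.FastGFile('D:\\test_image.jpg', 'rb').read()
  classes, scores = sess.run(
      [sig.outputs['classes'].name, sig.outputs['scores'].name],
      feed_dict={sig.inputs['images'].name: [jpeg]})
  print(classes, scores)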

I take the exported model and run it on Docker. Serving loads it fine, but my labels are missing (the responses use the original Inception (ImageNet) labels). So I replaced the class_descriptions variable with my own labels (class_descriptions = ['unused background', 'noeyeglasses', 'eyeglasses']).
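Instead of hard-coding the list, I suppose the labels could also be read from the labels file that retrain.py writes; the filename retrained_labels.txt is the codelab default and the path is illustrative:

# Illustrative path; retrain.py writes one label per line.
labels_path = 'D:\\retrained_labels.txt'
with open(labels_path) as f:
  retrained_labels = f.read().splitlines()

# Index 0 stays the unused background class, as in the Inception export code.
class_descriptions = ['unused background'] + retrained_labels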

Serving loads the new version and the labels are there, but all the responses are wrong.
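For completeness, this is roughly how I query the server, adapted from the inception_client.py example; the host, port and model name are whatever the container was started with (illustrative here):

from grpc.beta import implementations
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

host, port = 'localhost', 9000          # illustrative
channel = implementations.insecure_channel(host, port)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

with open('D:\\test_image.jpg', 'rb') as f:
  data = f.read()

request = predict_pb2.PredictRequest()
request.model_spec.name = 'inception'   # must match --model_name
request.model_spec.signature_name = 'predict_images'
request.inputs['images'].CopyFrom(
    tf.contrib.util.make_tensor_proto(data, shape=[1]))

result = stub.Predict(request, 10.0)    # 10-second timeout
print(result)                           # 'classes' and 'scores' come back here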

I need some help. What is needed to retrain a model with my own labels and export it for TensorFlow Serving? (I have looked at all the links and tutorials, but they all cover serving the original Inception model (not retrained) or MNIST.) What am I doing wrong?

Thanks.

0 Answers:

No answers yet.