How do I convert a TensorFlow frozen graph to a TF Lite model?

Posted: 2020-05-04 16:34:19

Tags: python android tensorflow keras object-detection

I am using Faster RCNN (the repository I am working from can be found at the link) to detect cars in video frames, with Keras 2.2.3 and TensorFlow 1.15.0. I want to deploy and run the model on an Android device. Every part of Faster RCNN is implemented in Keras, so to deploy it on Android I want to convert each part to a TF Lite model. The final classifier network contains a custom layer called RoiPoolingConv, and I cannot convert that network to TF Lite. First, I tried the following:

converter = tf.lite.TFLiteConverter.from_keras_model_file(
    'model_classifier_with_architecture.h5',
    custom_objects={"RoiPoolingConv": RoiPoolingConv})
tfmodel = converter.convert()
open("model_cls.tflite", "wb").write(tfmodel)

This results in the following error:

Traceback (most recent call last):
  File "Keras-FasterRCNN/model_to_tflite.py", line 26, in <module>
    custom_objects={"RoiPoolingConv": RoiPoolingConv})
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 747, in from_keras_model_file
    keras_model = _keras.models.load_model(model_file, custom_objects)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py", line 146, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 212, in load_model_from_hdf5
    custom_objects=custom_objects)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 89, in deserialize
    printable_module_name='layer')
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1131, in from_config
    process_node(layer, node_data)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1089, in process_node
    layer(input_tensors, **kwargs)
  File "/home/alp/.local/lib/python3.6/site-packages/keras/engine/base_layer.py", line 475, in __call__
    previous_mask = _collect_previous_mask(inputs)
  File "/home/alp/.local/lib/python3.6/site-packages/keras/engine/base_layer.py", line 1441, in _collect_previous_mask
    mask = node.output_masks[tensor_index]
AttributeError: 'Node' object has no attribute 'output_masks'

As a workaround, I tried converting the Keras models to TensorFlow frozen graphs and then running the TF Lite conversion on those frozen graphs. This produces the following error:

Traceback (most recent call last):
  File "/home/alp/.local/bin/toco_from_protos", line 11, in <module>
    sys.exit(main())
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/alp/.local/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/alp/.local/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33, in execute
    output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, FULLY_CONNECTED, MUL, PACK, RESHAPE, RESIZE_BILINEAR,   SOFTMAX, STRIDED_SLICE. Here is a list of operators for which you will need custom implementations: AddV2.

Is there a way to convert a model with custom layers to a TF Lite model?

0 Answers:

No answers yet