
TFLite Model Maker vs. TensorFlow Object Detection API for edge inference

I have used the TensorFlow Object Detection API ( https://github.com/tensorflow/models/tree/master/research/object_detection ) for transfer learning of object detection models over the past two years. In most cases, I have used the trained models both in full TensorFlow (not TFLite) on desktop during development and in TFLite, after conversion, on edge devices.
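For reference, the dual-path workflow I mean looks roughly like this (a minimal sketch; the export directory and file names are placeholders):

```python
import tensorflow as tf

# Placeholder path to a model exported with exporter_main_v2.py.
saved_model_dir = "exported_model/saved_model"

# Desktop development: run the full SavedModel directly.
detect_fn = tf.saved_model.load(saved_model_dir)

# Edge deployment: convert the same export to TFLite.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```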

Some of the edge applications require a high FPS and therefore need the inference accelerated with a Coral Edge TPU. A constant issue with this approach has been that most model architectures in the TensorFlow object detection zoo cannot be quantized and run on the Coral TPU. Some SSD models even fail with an exception when converted to TFLite without quantization, although the documentation states that SSD models are supported.
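To be concrete, the Edge TPU requires full integer quantization, and this is roughly the conversion path where many zoo models break for me (a sketch with a random representative dataset; real calibration would iterate over preprocessed training images):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Calibration samples matching the model's input shape;
    # random data here is only a placeholder.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Require int8 kernels for every op; conversion fails here for
# architectures with unsupported ops.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

The quantized model then still has to pass through edgetpu_compiler before it runs on the TPU.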

I saw that TensorFlow Lite Model Maker ( https://www.tensorflow.org/lite/tutorials/model_maker_object_detection ) now supports transfer learning of EfficientDet models, including quantization and compilation for Coral. Model Maker can also export to the SavedModel format. If I am not mistaken, it should then be possible to save the trained model both as a .tflite file for TFLite with Coral on the edge and as a SavedModel for full TensorFlow on desktop during development.
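If I read the tutorial correctly, the whole flow would be something like this (a sketch; the CSV path and hyperparameters are placeholders):

```python
from tflite_model_maker import object_detector
from tflite_model_maker.config import ExportFormat

# Placeholder annotations file in the CSV format the tutorial uses.
train_data, validation_data, test_data = object_detector.DataLoader.from_csv(
    "annotations.csv"
)

# EfficientDet-Lite0, the smallest Coral-friendly variant.
spec = object_detector.EfficientDetLite0Spec()

model = object_detector.create(
    train_data,
    model_spec=spec,
    validation_data=validation_data,
    epochs=50,
    batch_size=8,
    train_whole_model=True,
)

# Export a quantized .tflite for the edge and a SavedModel for
# desktop development from the same training run.
model.export(
    export_dir=".",
    export_format=[ExportFormat.TFLITE, ExportFormat.SAVED_MODEL],
)
```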

Does anyone have experience to share from working with TensorFlow Lite Model Maker for object detection and deployment on the edge with a Coral TPU? It would be valuable to hear what works well and what surprises or bugs to expect.

Thanks!

submitted by /u/NilViktor
