Hello everyone!
I have some questions about finishing the implementation of my TensorFlow application; specifically, I need advice on how to optimize my model.
Background
I have been working on an object detection Android app based on the example app provided by TensorFlow. I have added Bluetooth capabilities and implemented my own standalone Simple Online and Realtime Tracking (SORT) algorithm, just so I could understand the code better in case I have to tune things. I don't want to get into the specifics of my application, but the simplest analogy is an Android device looking down at a conveyor belt: when the app sees a specific object on the belt at a certain location, it sends a Bluetooth signal for some mechanism to take action on that object at that location (this probably describes half the possible apps here, haha).
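(In case it's relevant to the discussion: the core of SORT's matching step associates Kalman-predicted track boxes with new detections by maximizing IoU via the Hungarian algorithm. Below is a simplified Python sketch of just that step, not my actual app code.)

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(a, b):
        # Boxes are [x1, y1, x2, y2].
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / (union + 1e-9)

    def match(predicted_tracks, detections, iou_threshold=0.3):
        # Hungarian assignment on negated IoU; assigned pairs whose IoU
        # falls below the threshold are treated as unmatched.
        cost = np.array([[-iou(t, d) for d in detections]
                         for t in predicted_tracks])
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols)
                if -cost[r, c] >= iou_threshold]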
My application has been tested and works successfully with one of the default tflite models in a simulation environment. However, the objects I plan to track are not covered by the standard tflite models, so I need to create my own custom model. This is the final step of my app development.
I have (with much pain) figured out how to create a model generation pipeline: TFRecords > train > convert to tflite > test on the Android app. I have not studied machine learning, but I realize that with my technical/programming/math skills I can more or less brute-force a basic model and then learn the theory in more detail once my prototype is working. I have spent a fair bit of time browsing TensorFlow's GitHub issues and have produced a model that can somewhat detect my objects, but not well enough, and it is slower than the example tflite model (on my phone, inference time is now 150 ms instead of an average of 50 ms). I am now looking to decrease the inference time and increase the accuracy of my model.
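(As a side note, a rough way I can compare relative inference speed between candidate models off-device is a quick interpreter loop like the sketch below. Absolute numbers won't match the phone, and model.tflite is a placeholder filename.)

    import time
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    # Random input matching the model's expected shape and dtype.
    if inp["dtype"] == np.uint8:
        dummy = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)
    else:
        dummy = np.random.rand(*inp["shape"]).astype(np.float32)

    times = []
    for _ in range(50):
        interpreter.set_tensor(inp["index"], dummy)
        start = time.perf_counter()
        interpreter.invoke()
        times.append((time.perf_counter() - start) * 1000.0)
    print(f"mean inference: {np.mean(times):.1f} ms over {len(times)} runs")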
My current model generation pipeline uses ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8 (I couldn't get ssd_mobilenet_v2_320x320_coco17_tpu-8 to work): it takes my TFRecords, trains on the data, converts the result to tflite (with the tf.lite.Optimize.DEFAULT optimization flag), and finally attaches metadata. I plug this into the Android app and then test.
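For reference, my conversion step is essentially the standard TFLiteConverter call on the exported SavedModel; roughly this (paths are placeholders for my export directory):

    import tensorflow as tf

    # SavedModel produced by the Object Detection API's TFLite export step.
    converter = tf.lite.TFLiteConverter.from_saved_model(
        "exported_model/saved_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
    tflite_model = converter.convert()

    with open("model.tflite", "wb") as f:
        f.write(tflite_model)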
My computer is slow, so I eventually plan on renting an EC2 instance, sweeping a bunch of parameters in ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8's pipeline.config, generating a batch of tflite models, and rating their accuracy. As a final test step, I will benchmark the models for speed on my phone. The combination of fastest and most accurate will be the tflite model of choice.
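For generating the config variants, I'm thinking of something along these lines using the Object Detection API's config_util. The fields and values swept here are just placeholders (and I'm assuming the stock FPNLite config's momentum optimizer with cosine decay); which knobs are actually worth sweeping is exactly what my first question below is about.

    import itertools
    import os
    from object_detection.utils import config_util

    # Example sweep values only; the real knobs to vary are my question.
    depth_multipliers = [0.5, 0.75, 1.0]
    base_learning_rates = [0.04, 0.08]

    for i, (dm, lr) in enumerate(
            itertools.product(depth_multipliers, base_learning_rates)):
        configs = config_util.get_configs_from_pipeline_file("pipeline.config")
        configs["model"].ssd.feature_extractor.depth_multiplier = dm
        # The learning-rate field depends on the optimizer block; the stock
        # config uses momentum_optimizer with cosine_decay_learning_rate.
        lr_cfg = (configs["train_config"].optimizer.momentum_optimizer
                  .learning_rate.cosine_decay_learning_rate)
        lr_cfg.learning_rate_base = lr
        out_dir = f"sweep/variant_{i}"
        os.makedirs(out_dir, exist_ok=True)
        pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
        config_util.save_pipeline_config(pipeline_proto, out_dir)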
Questions
1. In ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8's pipeline.config, which parameters are worth varying to get a good parameter sweep?
2. Which parameters are good to vary so that the resultant tflite model is faster? (My 5 MB tflite model runs at ~50 ms inference, while the 10 MB model takes ~150 ms.) The one lever I know of but haven't tried is full integer quantization; see the sketch after these questions.
3. Which EC2 instance type do you recommend? I understand that Amazon has managed machine learning tools, but with the time I have already spent creating my model generation pipeline, I am very hesitant to jump into additional exploratory work.
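Regarding question 2, this is roughly what I mean by full integer quantization: calibrating the converter with a representative dataset so weights and activations can run as int8. The load_calibration_images() helper below is hypothetical; I would still have to write it to yield preprocessed sample images.

    import tensorflow as tf

    def representative_dataset():
        # Yields ~100 preprocessed sample images, one at a time.
        # load_calibration_images() is a hypothetical helper.
        for image in load_calibration_images():
            yield [image]

    converter = tf.lite.TFLiteConverter.from_saved_model(
        "exported_model/saved_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Optionally force int8-only kernels (may fail if the model
    # contains ops without int8 support):
    # converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    tflite_model = converter.convert()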
I'll add ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8's pipeline.config file in the comments.