TensorFlow Lite - Object Detection API YOLOv3 - android

I want to implement a TFLite classifier based on YOLOv3 for Android. I'm a bit of a noob with TensorFlow Lite object detection code...
I want to start from this implementation of Object Detection TFLite. I tried to merge that code with this other implementation with a YOLO classifier, but I had a lot of problems adapting the non-lite code to the lite version.
My question is: can I implement a classifier based on YOLOv3 starting from the TFLite examples?
I think TFLiteObjectDetectionAPIModel is the class that I have to modify.. is this correct? Or can this API be used to call a YoloClassifier implementation written by myself?
I want to understand in detail how I can use the API to generate and apply my own YOLO-based classifier. Do I have to implement a new class YoloClassifier.java that interfaces with the API.java file, or can I work only on the API to adapt the new classifier?
Thanks to all in advance and I hope I was clear :)

Unfortunately you can't convert the complete YOLOv3 model to a TensorFlow Lite model at the moment. This is because YOLOv3 extends the original darknet backbone used by YOLO and YOLOv2 with some extra layers (also referred to as the YOLOv3 head), which don't seem to be handled correctly (at least in Keras) when preparing the model for tflite conversion.
You can convert YOLOv3 to .tflite without the model's 'head' portion (See here: https://github.com/benjamintanweihao/YOLOv3), but then you will have to implement the missing parts in your Java code (as suggested here: https://github.com/wics1224/yolov3-android-tflite). Make sure you have the correct anchor box sizes if you do so. The second link would hopefully answer the second part of your question.
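To give a feel for what that missing "head" logic has to do, here is a rough pure-Python sketch of decoding one raw YOLOv3 output cell into a box (the Java port is mechanical). The grid coordinates, anchor sizes and stride below are made-up placeholders — take the real anchor sizes and strides from your model's config, as noted above.

```python
import math

def decode_yolo_cell(tx, ty, tw, th, cx, cy, anchor_w, anchor_h, stride):
    """Decode one raw YOLOv3 cell/anchor prediction into a box in pixels.

    (tx, ty, tw, th): raw network outputs for this cell and anchor
    (cx, cy): integer grid-cell coordinates
    (anchor_w, anchor_h): anchor box size in pixels (model-specific!)
    stride: input_size / grid_size for this output scale
    """
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    # Box centre: cell offset plus sigmoid-squashed prediction, scaled by stride.
    x = (cx + sigmoid(tx)) * stride
    y = (cy + sigmoid(ty)) * stride
    # Box size: anchor scaled by exp of the raw prediction.
    w = anchor_w * math.exp(tw)
    h = anchor_h * math.exp(th)
    return x, y, w, h
```

The objectness and per-class scores are likewise passed through a sigmoid before thresholding, and the resulting boxes then go through NMS.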
If you want to keep things simple, your other options would be SSD-MobileNet or yolov2-tiny for your application; they will give you a more real-time experience.
I am currently working on a similar project involving object detection in flutter/tflite so I'll keep you updated if I find anything new.
Edit:
In https://github.com/benjamintanweihao/YOLOv3, you'll need to change how you import the libraries, because the lite library was moved out of contrib from TensorFlow 1.14 onwards.

Try https://github.com/zldrobit/onnx_tflite_yolov3, but note that the NMS is not in the TensorFlow compute graph, so you have to implement your own NMS in your Java code.
Another issue with this repo is that it requires ONNX and PyTorch. If you are not familiar with them, it may cost you some time.
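For reference, NMS itself is small enough to hand-roll. A minimal greedy version (shown in Python for brevity — porting it to Java is straightforward) over (x1, y1, x2, y2) boxes with confidence scores:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep highest-scoring boxes, drop ones that overlap them."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

Run it per class (or on class-agnostic boxes, depending on the model) after score thresholding.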

Related

Saving/Transmitting Model - TensorFlow Lite Transfer Learning on Android

I am trying to create a pair of Android apps: one which trains an image classification transfer-learning model and one which simply uses the trained model for inference. These apps would run on separate devices, and the usefulness would lie in training the model on a more-powerful device and being able to perform inference with that model on a less-powerful wearable device. Transfer learning is being implemented as explained in the post here: https://blog.tensorflow.org/2019/12/example-on-device-model-personalization.html.
The problem is I cannot find a good way to save and transmit the trained model from the first device to the second. I have tried implementing serialization for Bluetooth transmission, but the Android TFL library is not easy to make serializable. How difficult would it be to somehow save a .tflite file on Android? Does this feature already exist and I have missed it? Any help or ideas would be greatly appreciated. Thank you!
For transferring the model, you should do this as a binary instead of trying to explicitly serialize/deserialize. There are a number of different libraries available for this on Android, so it shouldn't be too difficult to find something that works for your app.
As for loading the TFLite model itself and running inference, it's possible to do this on-device using the TFLite Interpreter class and simply pointing it at your on-device file. You can find an example of this here: https://www.tensorflow.org/lite/inference_with_metadata/lite_support
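To make the first point concrete: a .tflite file is just an opaque byte blob, so "transmitting the model" reduces to reading the bytes, sending them over whatever transport you already have (Bluetooth included), and writing them back out with an integrity check — no TFLite-specific serialization is needed. A minimal sketch of the idea (in Python for brevity; file names are placeholders):

```python
import hashlib

def model_to_bytes(path):
    """Read a saved .tflite model as an opaque byte blob for transmission."""
    with open(path, "rb") as f:
        return f.read()

def bytes_to_model(blob, path, expected_sha256=None):
    """Write received bytes back to disk; optionally verify integrity first."""
    if expected_sha256 is not None:
        assert hashlib.sha256(blob).hexdigest() == expected_sha256, "corrupt transfer"
    with open(path, "wb") as f:
        f.write(blob)
    return path  # point your Interpreter at this file
```

On Android the same round-trip is a plain `FileInputStream`/`FileOutputStream` pair, with the hash sent alongside the blob.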

Is it possible to use .mlmodel in Android, trained with playground (Xcode)?

There is very little material about Android application examples.
Could someone tell me whether it is possible to use an .mlmodel trained with a playground in an Android project?
Official sources refer to ML Kit, TensorFlow Lite and AutoML.
Moreover, there is a detailed example of use for Android SDK level 16.
But the docs expect a TensorFlow Lite model file (usually ending in .tflite or .lite).
Could you give me any constructive advice, or tell me what knowledge I need, to complete an Android project that uses a trained machine-learning model?
I believe this information would also be useful for every beginner interested in Android development.
From Can I convert mlmodel files back to tflite? the answer appears to be no.
From what I can tell, the .mlmodel format is a client-side inference model similar to .tflite, where .tflite is a minimized format for deployment on device.
I suspect that in the process of conversion from the original full machine learning model, trade-offs are made which may or may not have equivalents between the two formats.

How to run Conv3D Model in Android?

I've recently delved into the Computer Vision and Deep Learning world. I developed a 3D CNN model for action recognition in Keras, and now I'm interested in running it on Android (Java). The layers I'm using are Conv3D and MaxPool3D. The total size of the model is 40MB.
I've been looking for solutions in the tensorflow-lite space but it seems that they don't have the operations implemented yet.
I got the following error when using the converter.convert() function to get the tflite model
ConverterError: TOCO failed. See console for info.
2019-05-05 14:39:07.006669: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Conv3D
So what can I do to be able to run it in Java? Should I:
run the .pb file directly? I don't even know if this is possible now (after tflite). If so, how much time would a new-gen smartphone take to run a 40MB model?
implement ops by myself? If so, how to?
try different approach outside tensorflow?
implement a new action recognition architecture that uses only tflite supported ops
other
I haven't found any Conv3D implementation for Android anywhere on the web so far...
Thank you so much for your attention!
If you want to execute it as standalone Java code using TensorFlow, please have a look at this. But if you want to implement something for Android using Java, the only way is TensorFlow Lite.
For inference time, you can compare your model with some state-of-the-art architectures in the performance benchmarks. You can find the benchmark values here; they show a comparison on Pixel 2 and Pixel XL devices.
For your implementation of Conv3D, if you want to implement the ops yourself, you can have a look at custom operators.
I would prefer your suggestion of 'implement a new action recognition architecture that uses only tflite supported ops'. Here you can find the list of operations supported by TF Lite.
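If you are able to move to TensorFlow 2.x, another option worth trying before rewriting the architecture is the converter's "Select TF ops" fallback, which lets ops the builtin TFLite kernels don't cover (such as Conv3D on older versions) run via the Flex delegate on-device, at the cost of a larger binary. A rough sketch, assuming your model is exported as a SavedModel (the directory path is a placeholder):

```python
import tensorflow as tf

def convert_with_select_ops(saved_model_dir):
    """Convert a SavedModel to TFLite, allowing unsupported ops (e.g. Conv3D
    on old TF versions) to fall back to TensorFlow 'select ops' kernels."""
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,   # regular TFLite kernels
        tf.lite.OpsSet.SELECT_TF_OPS,     # Flex delegate: TF kernels on-device
    ]
    return converter.convert()  # returns the .tflite flatbuffer as bytes
```

On the Android side you then also need the org.tensorflow:tensorflow-lite-select-tf-ops dependency so the Flex kernels are available at runtime.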

Using a TensorFlow Estimator exported model in an Android app

I have a TensorFlow model trained using the Estimator & Dataset APIs, and I would like to use it locally in an Android app.
Can someone point me to a good reference and/or tutorial? I looked at the TensorflowInferenceInterface, but my understanding is that it needs you to specify which operator you want to feed the input to, and the Estimator/Dataset abstraction is at another level, so I am somewhat lost here.
Thanks.
Here is official documentation for this question: https://www.tensorflow.org/mobile/prepare_models

Tensorflow Object Detection API on Android

I'd like to swap out the multibox_model.pb being used by TensorFlowMultiBoxDetector.java in Google's Tensorflow Detect Sample App with the mobilenet frozen_inference_graph.pb included in the object detection API's model zoo.
I've run the optimize_for_inference script on it, but the TensorflowInferenceInterface can't parse the optimized model. It can, however, parse the original frozen_inference_graph.pb. I still think I need to modify this graph somehow to take a square input image, such as the 224x224 one that multibox_model.pb uses.
I'm one of the developers --- just FYI, we'll be releasing an update to the Android Detection demo in the next few weeks to make it compatible with the Tensorflow Object Detection API, so please stay tuned.
