TensorFlow Object Detection API on Android

I'd like to swap out the multibox_model.pb used by TensorFlowMultiBoxDetector.java in Google's TensorFlow Detect sample app with the MobileNet frozen_inference_graph.pb included in the Object Detection API's model zoo.
I've run the optimize_for_inference script on it, but TensorFlowInferenceInterface can't parse the optimized model. It can, however, parse the original frozen_inference_graph.pb. I still think I need to modify this graph somehow to take in a square-sized input image, such as the 224x224 one that multibox_model.pb does.
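For what it's worth, regardless of how the graph itself ends up modified, a non-square camera frame still has to be mapped onto a square input tensor, and one common way is letterboxing (scale to fit, then pad). A pure-Python sketch of just the coordinate math (the function name and the 224 default are my own choices, not part of any API):

```python
def letterbox_params(src_w, src_h, dst=224):
    """Compute scale and padding to fit an arbitrary image into a
    dst x dst square while preserving aspect ratio (letterboxing)."""
    scale = dst / max(src_w, src_h)          # shrink the longer side to dst
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) // 2               # symmetric horizontal padding
    pad_y = (dst - new_h) // 2               # symmetric vertical padding
    return scale, new_w, new_h, pad_x, pad_y

# Example: a 640x480 camera frame squeezed into 224x224
scale, w, h, px, py = letterbox_params(640, 480)
# w == 224, h == 168, px == 0, py == 28
```

The same scale and padding values are needed again afterwards to map the detected boxes back into the coordinates of the original frame.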

I'm one of the developers. Just FYI, we'll be releasing an update to the Android detection demo in the next few weeks to make it compatible with the TensorFlow Object Detection API, so please stay tuned.

Related

MLKit bad recognition

Hi, I trained a model through Firebase ML Kit. I selected the option to train it for 13 hours, yet when I look at the details, Firebase shows only 1 hour of training time. After deploying to an Android device, CameraX detects images incorrectly; the model also detects incorrectly when I test it in Firebase. I use photos from here:
https://www.kaggle.com/grassknoted/asl-alphabet/data
How can I improve the detection and classification of the photos?
I'll try to help based on my experience. What you need to know is that ML Kit runs on TensorFlow.
ML Kit can use a cloud model or an on-device model. When you use your own model with ML Kit, you must first upload it to the cloud; if you use TensorFlow Lite directly, you can bundle the model and labels on the device.
Errors may occur when you build a model; you can use this tool to create your own model easily: https://teachablemachine.withgoogle.com/train
After that, upload your model to ML Kit in Firebase and apply it in your Android/iOS application using this sample method: https://firebase.google.com/docs/ml-kit/use-custom-models
I have a sample project using ML Kit, and it works; for more detail check my article here: https://medium.com/@rakaadinugroho/face-image-cropper-with-machine-learning-c448f9d19858

TensorFlow Lite - Object Detection API YOLOv3

I want to implement a TFLite classifier based on YOLOv3 for Android. I'm a bit of a noob with the TensorFlow Lite object detection code...
I want to start from this implementation of object detection in TFLite. I tried to merge that code with this other implementation with a YOLO classifier, but I had a lot of problems adapting the non-Lite code to the Lite version.
My question is: can I implement a classifier based on YOLOv3 starting from the TFLite examples?
I think TFLiteObjectDetectionAPIModel is the class I have to modify... is this correct? Or can this API be used to call a YoloClassifier implementation written by myself?
I want to understand in detail how I can use the API to generate and apply my own YOLO-based classifier. Do I have to implement a new class YoloClassifier.java that interfaces with the API.java file, or can I just adapt the existing API to the new classifier?
Thanks to all in advance, and I hope I was clear :)
Unfortunately you can't convert the complete YOLOv3 model to a TensorFlow Lite model at the moment. This is because YOLOv3 extends the original Darknet backend used by YOLO and YOLOv2 with some extra layers (also referred to as the YOLOv3 head portion), which don't seem to be handled correctly (at least in Keras) when preparing the model for TFLite conversion.
You can convert YOLOv3 to .tflite without the model's 'head' portion (see here: https://github.com/benjamintanweihao/YOLOv3), but then you will have to implement the missing parts in your Java code (as suggested here: https://github.com/wics1224/yolov3-android-tflite). Make sure you have the correct anchor box sizes if you do so. The second link should answer the second part of your question.
If you want to keep things simple, your other options are SSD-MobileNet or YOLOv2-tiny. They will give you a more real-time experience.
I am currently working on a similar project involving object detection in flutter/tflite so I'll keep you updated if I find anything new.
Edit:
In https://github.com/benjamintanweihao/YOLOv3, you'll need to change how you import the libraries, because the Lite library was moved out of contrib from TensorFlow 1.14 onwards.
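If you do reimplement the missing head yourself, the per-cell box decode follows the usual YOLOv3 conventions. A hedged pure-Python sketch (the function names are mine) that could be ported to Java:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, anchor_w, anchor_h, grid, img_size):
    """Decode one raw YOLOv3 prediction into a box in pixel coordinates.

    (tx, ty, tw, th): raw network outputs for this anchor at cell (cx, cy).
    (anchor_w, anchor_h): anchor size in pixels; grid: cells per side.
    """
    stride = img_size / grid                 # pixels per grid cell
    bx = (sigmoid(tx) + cx) * stride         # box center x in pixels
    by = (sigmoid(ty) + cy) * stride         # box center y in pixels
    bw = anchor_w * math.exp(tw)             # width scaled from the anchor
    bh = anchor_h * math.exp(th)             # height scaled from the anchor
    return bx, by, bw, bh

# With zero offsets the box sits at the cell center with the anchor's size:
# decode_box(0, 0, 0, 0, cx=6, cy=6, anchor_w=116, anchor_h=90,
#            grid=13, img_size=416)  -> (208.0, 208.0, 116.0, 90.0)
```

This is exactly why the anchor box sizes must match the ones the model was trained with: they enter the decode directly as `anchor_w` and `anchor_h`.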
Try https://github.com/zldrobit/onnx_tflite_yolov3, but note that the NMS is not in the TensorFlow compute graph, so you have to implement your own NMS in your Java code.
Another issue with this repo is that it requires ONNX and PyTorch; if you are not familiar with them, it may cost you some time.
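Since the NMS has to live in your own code with that repo, here is a minimal greedy NMS sketch in plain Python that ports to Java almost line for line (the [x1, y1, x2, y2] box format is an assumption on my part):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)                  # highest-scoring box survives
        keep.append(best)
        # drop every remaining box that overlaps the survivor too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two heavily overlapping boxes plus one distant box:
boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
print(nms(boxes, [0.9, 0.8, 0.7]))  # -> [0, 2]
```

For class-aware detection you would typically run this per class, or offset boxes by class index so boxes of different classes never suppress each other.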

Creating a simple neural network on Tensorflow by means of Android

I want to create a simple neural network based on the example https://github.com/googlesamples/android-ndk/tree/master/nn_sample. Is it possible to create this with TensorFlow using only Android tools, in Java?
Take a look at this folder https://github.com/googlesamples/android-ndk/tree/master/nn_sample/app/src/main/cpp
simple_model.h is the model trained in TensorFlow before the Android project was created. The model now works like a black box: it takes input and predicts output only. If you want to build your own model, try this tutorial (it covers all steps from training, evaluation, and prediction to deployment on Android):
https://medium.com/@elye.project/applying-tensorflow-in-android-in-4-steps-to-recognize-superhero-f224597eb055
Affirmative. You can use TensorFlow Lite on Android; it's an open-source deep learning framework that helps compress and deploy models to mobile or embedded applications. It takes a model as input, then deploys and interprets it, performing resource-conserving optimizations for mobile applications. The NNAPI of the Android NDK can interface with TFLite easily, too. This link contains gesture, image, object, and speech detection and classification example implementations on Android in Java using TFLite.

How to run Conv3D Model in Android?

I've recently delved into the computer vision and deep learning world. I developed a 3D CNN model for action recognition in Keras, and now I'm interested in running it on Android (Java). The layers I'm using are Conv3D and MaxPool3D, and the total size of the model is 40 MB.
I've been looking for solutions in the tensorflow-lite space but it seems that they don't have the operations implemented yet.
I got the following error when using the converter.convert() function to get the tflite model
ConverterError: TOCO failed. See console for info.
2019-05-05 14:39:07.006669: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Conv3D
So what can I do to be able to run it in Java? Should I:
run the .pb file directly? I don't even know if this is possible now (after TFLite). If so, how much time would a current-generation smartphone take to run a 40 MB model?
implement ops by myself? If so, how to?
try different approach outside tensorflow?
implement a new action recognition architecture that uses only tflite supported ops
other
So far I haven't found any Conv3D implementation for Android anywhere on the web...
Thank you so much for your attention!
If you want to execute it as standalone Java code using TensorFlow, please have a look at this. But if you want to implement something for Android in Java, the only way is to use TensorFlow Lite.
For inference time, you can compare your model with some state-of-the-art architectures in the performance benchmarks. You can find benchmark values here; they show comparisons on Pixel 2 and Pixel XL devices.
For your Conv3D implementation, if you want to implement the ops yourself, you can have a look at custom operators.
I would prefer your own suggestion of 'implement a new action recognition architecture that uses only tflite supported ops'. Here you can find the list of operations supported by TF Lite.
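If you do go the custom-operator route, it helps to have a reference for what the kernel must compute. A deliberately naive pure-Python Conv3D (single channel, VALID padding, stride 1; purely illustrative and far too slow for real use):

```python
def conv3d(volume, kernel):
    """Naive single-channel 3D convolution, VALID padding, stride 1.

    volume: nested lists indexed [depth][height][width]
    kernel: nested lists indexed [kd][kh][kw]
    """
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    # Output shrinks by (kernel - 1) along each axis (VALID padding).
    out = [[[0.0] * (W - kw + 1) for _ in range(H - kh + 1)]
           for _ in range(D - kd + 1)]
    for d in range(D - kd + 1):
        for h in range(H - kh + 1):
            for w in range(W - kw + 1):
                s = 0.0
                for i in range(kd):
                    for j in range(kh):
                        for k in range(kw):
                            s += volume[d + i][h + j][w + k] * kernel[i][j][k]
                out[d][h][w] = s
    return out

# A 2x2x2 all-ones volume convolved with a 2x2x2 all-ones kernel sums
# all 8 voxels into a single output value:
ones = [[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0], [1.0, 1.0]]]
print(conv3d(ones, ones))  # -> [[[8.0]]]
```

A real custom op would of course add batch and channel dimensions and vectorize the inner loops, but the index arithmetic above is the contract the kernel has to honor.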

using Tensorflow Estimator exported model on Android app

I have a TensorFlow model trained using the Estimator & Dataset APIs, and I would like to use it locally in an Android app.
Can someone point me to a good reference and/or tutorial? I looked at TensorFlowInferenceInterface, but my understanding is that it needs you to specify which operator you want to feed the input to, whereas the Estimator/Dataset abstraction sits at another level, so I am somewhat lost here.
Thanks.
Here is the official documentation for this question: https://www.tensorflow.org/mobile/prepare_models
