How to work with TensorFlow on the Android platform?

Google has made TensorFlow open source for developers.
Is there any way to use it on Android?
The link is here: TensorFlow.
I would love some directions for working with this API.

The TensorFlow source repository includes an Android example application, with some documentation.
The Android example includes a pre-trained model for image classification, and uses this to classify images captured by the camera. Typically you would build and train a model using the Python API; generate a serialised version of the model as a GraphDef protocol buffer (and possibly a checkpoint of the model parameters); and then load that and run inference steps using the C++ API.
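The export step described above can be sketched in Python. This is a minimal illustration using the TF 1.x-compatible graph API; the toy graph and the file name `model.pb` are made up for the example, standing in for a trained model.

```python
import tensorflow as tf

# Minimal sketch: build a graph in Python and serialise it as a GraphDef
# protocol buffer, which C++ (or Android) inference code can later load.
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 4], name="input")
    w = tf.constant([[1.0], [2.0], [3.0], [4.0]])
    tf.matmul(x, w, name="output")

# Write the serialised GraphDef; inference code refers to this file by name
# and to the ops by the "input"/"output" names chosen above.
tf.io.write_graph(g.as_graph_def(), ".", "model.pb", as_text=False)
```

The named input and output ops matter: the C++ side feeds and fetches tensors by those names.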

Related

Android Tensorflow lite Recognizing keywords

How can I use the model from https://www.tensorflow.org/tutorials/audio/simple_audio in my Android app? How do I provide inputs correctly, and how do I interpret the outputs?
TensorFlow Lite's Task Library has an Audio Classification example for Android, which is what you might be looking for. The guide explains how the Java AudioClassifier API works.
The Task Library uses YAMNet for audio analysis, which has a pre-trained version on TFHub. If you want to train with your own dataset, please refer to the notebooks mentioned here.
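As a sketch of the input/output contract, the snippet below walks through TFLite's load/allocate/invoke flow in Python; the Java Interpreter and the Task Library follow the same steps under the hood. A tiny converted function stands in for the real model so the example is self-contained; with the actual YAMNet .tflite from TFHub you would pass the file path instead of `model_content`, feed it a fixed-length float32 waveform sampled at 16 kHz, and read back one score per class (check the model page on TFHub for the exact shapes).

```python
import numpy as np
import tensorflow as tf

# Toy stand-in "audio classifier": maps a waveform to 3 fake class scores.
@tf.function(input_signature=[tf.TensorSpec([1, 15600], tf.float32)])
def toy_model(waveform):
    return tf.matmul(waveform, tf.ones([15600, 3]) / 15600.0)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [toy_model.get_concrete_function()])
interpreter = tf.lite.Interpreter(model_content=converter.convert())
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]   # shape/dtype tell you what to feed
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])  # one score per class
```

Inspecting `get_input_details()` / `get_output_details()` on the real .tflite file is the quickest way to answer "how do I provide inputs and interpret outputs" for any model.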

Adding an ML model into Android Studio

I have written an ML model using Keras in Google Colab which detects whether an animal is a cat or a dog based on an input image.
Now, I want to create an app in Android Studio that takes an image as input, uses the algorithm I have designed to detect whether the image is a cat or a dog and outputs the result.
My question is how can I incorporate this algorithm into Android Studio? My first thought was to rewrite the code in Java and copy it into Android Studio but writing Java in Google Colab would lead to multiple complications. Hence, is there a way to download the algorithm I have created and upload it into Android Studio such that it works? If not, what other approach can I implement?
My desired outcome is something where I can add the algorithm into Android Studio and write a code like:
if (algorithm == true)
    // output "dog detected"
else
    // output "cat detected"
Android Studio is just an IDE. It doesn't run the actual code. And no, it doesn't run Python.
You should be able to export the Keras model into an offline format that Android can use via TensorFlow; see Keras deep learning model to android.
Alternatively, to deploy an "online" model, you would run a hosted web server that exposes the model over HTTP; your Android code would send it requests and parse the responses.
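The offline route can be sketched as follows. A toy model stands in for the Colab-trained one so the snippet runs stand-alone; the file name `cat_dog.tflite` and the label order are assumptions for the example.

```python
import tensorflow as tf

# Toy stand-in for the Colab-trained cat/dog model (binary sigmoid output).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert the Keras model to TFLite and save it. The resulting .tflite file
# goes into the Android project's assets/ folder, where the Java Interpreter
# can load it.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("cat_dog.tflite", "wb") as f:
    f.write(tflite_model)

# On-device, the desired if/else becomes a threshold on the model's score:
# score > 0.5 -> "dog detected", else "cat detected" (which class maps to
# which label depends on how the training data was encoded).
```

So rather than rewriting the algorithm in Java, you export it once from Colab and ship the converted file with the app.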

Tensorflow lite - depth map - based on stereo vision

I have not found any project, library, model, or guide for measuring distance using stereo imaging with TensorFlow Lite 2.
I want to be able to measure distance from stereo images.
I would like to run it on Android, so I would like to create a model in TensorFlow 2 or use an existing one. But I don't know where to start; everything I have found uses PyTorch.
I know that OpenCV provides a method to do this, but according to the literature, neural networks achieve better results. However, I have not found any model in TensorFlow 2.
I can't use Google AR, because my device doesn't support Google Play services.
I have just uploaded a repository for this purpose:
https://github.com/ibaiGorordo/TFLite-HITNET-Stereo-depth-estimation
It uses the HITNET stereo depth estimation model (from Google Research), converted to TensorFlow Lite by @PINTO0309 (find the models here: https://github.com/PINTO0309/PINTO_model_zoo/tree/main/142_HITNET).

Creating a simple neural network on Tensorflow by means of Android

I want to create a simple neural network based on the example https://github.com/googlesamples/android-ndk/tree/master/nn_sample. Is it possible to create this with the help of TensorFlow, using only Android tools, in Java?
Take a look at this folder https://github.com/googlesamples/android-ndk/tree/master/nn_sample/app/src/main/cpp
simple_model.h is the model, trained in TensorFlow before the Android project was created. The model now acts like a black box: it takes input and predicts output only. If you want to build your own model, try this tutorial (it covers all steps, from training, evaluation and prediction to deployment on Android):
https://medium.com/@elye.project/applying-tensorflow-in-android-in-4-steps-to-recognize-superhero-f224597eb055
Yes. You can use TensorFlow Lite on Android: it is an open-source deep learning framework that helps compress and deploy models to mobile and embedded applications. It takes a model as input, then deploys and interprets it, performing resource-conserving optimizations along the way. The NNAPI from the Android NDK can interface with TFLite easily, too. This link contains gesture, image, object, and speech detection/classification example implementations on Android in Java using TFLite.
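One of those resource-conserving optimizations can be sketched concretely: post-training dynamic-range quantization, which shrinks a model before it ships to a device. The toy matmul below is an assumption made for the example; real models are converted the same way.

```python
import tensorflow as tf

# Toy "dense layer" with a 256x256 float32 weight matrix (~256 KB of weights).
weights = tf.random.normal([256, 256])

@tf.function(input_signature=[tf.TensorSpec([1, 256], tf.float32)])
def dense(x):
    return tf.matmul(x, weights)

fn = dense.get_concrete_function()

# Plain conversion keeps float32 weights.
float_model = tf.lite.TFLiteConverter.from_concrete_functions([fn]).convert()

# With Optimize.DEFAULT, weights are quantized to int8 post-training,
# shrinking the flatbuffer (roughly 4x for the weight-dominated part).
quant_converter = tf.lite.TFLiteConverter.from_concrete_functions([fn])
quant_converter.optimizations = [tf.lite.Optimize.DEFAULT]
quant_model = quant_converter.convert()

print(len(float_model), len(quant_model))
```

The same `optimizations` flag is what the mobile-focused guides set before exporting a .tflite for an app.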

Tensorflow Object Detection API on Android

I'd like to swap out the multibox_model.pb being used by TensorFlowMultiBoxDetector.java in Google's Tensorflow Detect Sample App with the mobilenet frozen_inference_graph.pb included in the object detection API's model zoo.
I've run the optimize_for_inference script on it, but the TensorFlowInferenceInterface can't parse the optimized model. It can, however, parse the original frozen_inference_graph.pb. I still think I need to modify this graph somehow to take a square input image, such as the 224x224 one that multibox_model.pb uses.
I'm one of the developers. Just FYI, we'll be releasing an update to the Android detection demo in the next few weeks to make it compatible with the TensorFlow Object Detection API, so please stay tuned.
