Android TensorFlow Lite: recognizing keywords

How can I use the model from https://www.tensorflow.org/tutorials/audio/simple_audio in my Android app? How do I provide inputs correctly and interpret the outputs?

TensorFlow Lite's Task Library has an Audio Classification example for Android, which is what you might be looking for. The guide explains how the Java AudioClassifier API works.
The Task Library uses YAMNet for audio analysis, which has a pre-trained version on TFHub. If you want to train with your own dataset, please refer to the notebooks mentioned here.
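Whichever classifier you end up with, interpreting its output reduces to pairing each score with its label and picking the highest one. A minimal sketch in plain Java (the label list below is the simple_audio tutorial's eight keywords, listed alphabetically as the tutorial loads them; verify the order against your own training run, and note the model invocation itself is omitted here):

```java
// Sketch: interpreting the output scores of a keyword-spotting model.
// LABELS is an assumption based on the simple_audio tutorial's dataset;
// the order must match the label order used during training.
public class KeywordOutput {
    static final String[] LABELS = {
        "down", "go", "left", "no", "right", "stop", "up", "yes"
    };

    // Return the label whose score is highest.
    public static String topLabel(float[] scores) {
        int best = 0;
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] > scores[best]) {
                best = i;
            }
        }
        return LABELS[best];
    }
}
```

In practice you would fill `scores` from the interpreter's output tensor after running inference on one second of audio.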

Related

Is it possible to use .mlmodel in Android, trained with playground (Xcode)?

There is very little material about Android application examples.
Could someone tell me whether it is possible to use a .mlmodel trained with a playground in an Android project?
Official sources refer to ML Kit, TensorFlow Lite and AutoML.
Moreover, there is a detailed example of use for Android SDK level 16.
But the documentation expects TensorFlow Lite models:
(usually ending in .tflite or .lite)
Could you give me any constructive advice, or tell me what knowledge I need to build an Android project that uses a machine-learning model?
I believe this information would also be useful for any beginner interested in Android development.
From Can I convert mlmodel files back to tflite? the answer appears to be no.
From what I can tell, .mlmodel is a client-side inference format similar to .tflite, where .tflite is a minimized format for on-device deployment.
I suspect that in the process of conversion from the original full machine learning model, trade-offs are made which may or may not have equivalents between the two formats.

TensorFlow Lite - Object Detection API YOLOv3

I want to implement a TFLite classifier based on YOLOv3 for Android. I'm fairly new to the TensorFlow Lite object detection code...
I want to start from this implementation of Object Detection TFLite. I tried to merge that code with this other implementation with a YOLO classifier, but I had a lot of problems adapting the non-lite code to the lite version.
My question is: can I implement a classifier based on YOLOv3 starting from the TFLite examples?
I think TFLiteObjectDetectionAPIModel is the class that I have to modify... is this correct? Or can this API be used to call a YoloClassifier implementation written by myself?
I want to understand in detail how I can use the API to generate and apply my own YOLO-based classifier. Do I have to implement a new class, YoloClassifier.java, that interfaces with the API.java file, or can I only adapt the API to the new classifier?
Thanks to all in advance and I hope I was clear :)
Unfortunately you can't convert the complete YOLOv3 model to a TensorFlow Lite model at the moment. This is because YOLOv3 builds on the original darknet backend used by YOLO and YOLOv2 by introducing some extra layers (also referred to as the YOLOv3 head portion), which don't seem to be handled correctly (at least in Keras) when preparing the model for TFLite conversion.
You can convert YOLOv3 to .tflite without the model's 'head' portion (see here: https://github.com/benjamintanweihao/YOLOv3), but then you will have to implement the missing parts in your Java code (as suggested here: https://github.com/wics1224/yolov3-android-tflite). Make sure you have the correct anchor box sizes if you do so. The second link should answer the second part of your question.
If you plan to keep things simple, your other options would be using SSD-mobilenet or yolov2-tiny for your application. They will give you a more real-time experience.
I am currently working on a similar project involving object detection in flutter/tflite so I'll keep you updated if I find anything new.
Edit:
In https://github.com/benjamintanweihao/YOLOv3, you'll need to change how you import libraries, because the lite library was moved out of contrib from TensorFlow 1.14 onwards.
Try https://github.com/zldrobit/onnx_tflite_yolov3, but note that NMS is not in the TensorFlow compute graph, so you have to implement your own NMS in your Java code.
Another issue with this repo is that it requires ONNX and PyTorch; if you are not familiar with them, it may cost you some time.
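Since that repo leaves NMS out of the compute graph, here is what a minimal greedy non-maximum suppression looks like in Java. The `[x1, y1, x2, y2]` box layout and the IoU threshold are assumptions to adapt to whatever your decoder emits:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: greedy non-maximum suppression over decoded YOLO boxes.
// Each box is [x1, y1, x2, y2]; both the box layout and the IoU
// threshold are assumptions, not fixed by any particular repo.
public class Nms {
    public static List<Integer> nms(float[][] boxes, float[] scores, float iouThresh) {
        // Sort box indices by descending confidence score.
        Integer[] order = new Integer[scores.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Float.compare(scores[b], scores[a]));

        List<Integer> keep = new ArrayList<>();
        boolean[] suppressed = new boolean[scores.length];
        for (int i : order) {
            if (suppressed[i]) continue;
            keep.add(i); // highest-scoring surviving box wins
            for (int j : order) {
                // Drop any lower-scoring box that overlaps it too much.
                if (!suppressed[j] && j != i && iou(boxes[i], boxes[j]) > iouThresh) {
                    suppressed[j] = true;
                }
            }
        }
        return keep;
    }

    // Intersection-over-union of two [x1, y1, x2, y2] boxes.
    static float iou(float[] a, float[] b) {
        float ix = Math.max(0, Math.min(a[2], b[2]) - Math.max(a[0], b[0]));
        float iy = Math.max(0, Math.min(a[3], b[3]) - Math.max(a[1], b[1]));
        float inter = ix * iy;
        float areaA = (a[2] - a[0]) * (a[3] - a[1]);
        float areaB = (b[2] - b[0]) * (b[3] - b[1]);
        return inter / (areaA + areaB - inter);
    }
}
```

You would run this after decoding the raw output tensor into boxes and per-class scores, once per class.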

Creating a simple neural network on Tensorflow by means of Android

I want to create a simple neural network based on the example https://github.com/googlesamples/android-ndk/tree/master/nn_sample. Is it possible to create this with TensorFlow using only Android tools, in Java?
Take a look at this folder https://github.com/googlesamples/android-ndk/tree/master/nn_sample/app/src/main/cpp
simple_model.h is the model trained in TensorFlow before creating the Android project. The model now acts like a black box: it takes input and predicts output only. If you want to build your own model, try this tutorial (it covers all steps from training, evaluation and prediction to deployment on Android):
https://medium.com/@elye.project/applying-tensorflow-in-android-in-4-steps-to-recognize-superhero-f224597eb055
Affirmative. You can use TensorFlow Lite on Android: it's an open-source deep learning framework that helps compress and deploy models to mobile or embedded applications. It takes a model as input, then deploys and interprets it, performing resource-conserving optimizations for mobile use. The NNAPI of the Android NDK can interface with TFLite easily too. This link contains gesture, image, object and speech detection and classification example implementations on Android, in Java, using TFLite.
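Most of the image examples in those collections share one preprocessing step before the interpreter is called: unpacking packed ARGB pixels into a normalized float buffer. A sketch of that step alone, assuming [0, 1] normalization and RGB channel order (many models instead expect `(pixel - 127.5) / 127.5` or per-channel mean/std, so check your model's requirements):

```java
// Sketch: unpacking packed ARGB int pixels into a normalized float
// input buffer, as most TFLite image examples do before inference.
// The [0, 1] normalization and RGB channel order are assumptions;
// adjust them to what your specific model was trained with.
public class InputBuffer {
    public static float[] toFloatInput(int[] argbPixels) {
        float[] input = new float[argbPixels.length * 3];
        int idx = 0;
        for (int p : argbPixels) {
            input[idx++] = ((p >> 16) & 0xFF) / 255.0f; // red channel
            input[idx++] = ((p >> 8) & 0xFF) / 255.0f;  // green channel
            input[idx++] = (p & 0xFF) / 255.0f;         // blue channel
        }
        return input;
    }
}
```

On Android the `int[]` would typically come from `Bitmap.getPixels`, and the resulting floats would be written into the interpreter's input tensor.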

using Tensorflow Estimator exported model on Android app

I have a TensorFlow model trained using the Estimator & Dataset APIs, and I would like to use it locally in an Android app.
Can someone point me to a good reference and/or tutorial? I looked at TensorFlowInferenceInterface, but my understanding is that it requires you to specify which operation you want to feed the input to, while the Estimator/Dataset abstraction sits at another level. So I am somewhat lost here.
Thanks.
Here is official documentation for this question: https://www.tensorflow.org/mobile/prepare_models

Tensorflow Object Detection API on Android

I'd like to swap out the multibox_model.pb used by TensorFlowMultiBoxDetector.java in Google's TensorFlow Detect sample app with the MobileNet frozen_inference_graph.pb included in the object detection API's model zoo.
I've run the optimize_for_inference script on it, but the TensorFlowInferenceInterface can't parse the optimized model. It can, however, parse the original frozen_inference_graph.pb. I still think I need to modify this graph somehow to take in a square-sized input image, such as the 224x224 one that multibox_model.pb does.
I'm one of the developers. Just FYI, we'll be releasing an update to the Android detection demo in the next few weeks to make it compatible with the TensorFlow Object Detection API, so please stay tuned.
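The square-input concern above comes down to resampling whatever frame you capture into the model's fixed input size. A nearest-neighbour sketch over packed-pixel arrays (the 224x224 target is the question's example for multibox, not a requirement of every model; on Android you would more commonly let `Bitmap.createScaledBitmap` do this):

```java
// Sketch: nearest-neighbour resize of a packed-pixel image to the
// square input size a fixed-input model (e.g. 224x224) expects.
// Pure array logic so it works on any int[] pixel buffer.
public class SquareResize {
    public static int[] resize(int[] src, int srcW, int srcH, int dst) {
        int[] out = new int[dst * dst];
        for (int y = 0; y < dst; y++) {
            int sy = y * srcH / dst; // nearest source row
            for (int x = 0; x < dst; x++) {
                int sx = x * srcW / dst; // nearest source column
                out[y * dst + x] = src[sy * srcW + sx];
            }
        }
        return out;
    }
}
```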
