ML Kit bad recognition - Android

Hi, I trained a model through Firebase ML Kit. I selected the option to train it for 13 hours, yet when I open the model details, Firebase shows only 1 hour of training time. After deploying the model to an Android device, it classifies the CameraX images incorrectly, and it also misclassifies when I test it directly in Firebase. I use photos from here:
https://www.kaggle.com/grassknoted/asl-alphabet/data
How can I improve the detection and classification of photos?

I'll try to help based on my own experience. What you need to know is that ML Kit runs on TensorFlow.
ML Kit can use a cloud-hosted model or an on-device model. When you use your own model with ML Kit, you must first upload it to the cloud; if you use TensorFlow Lite directly, you can bundle the model and its labels on the device instead.
Errors can creep in while building a model; you can use this tool to build your own model easily: https://teachablemachine.withgoogle.com/train
After that, upload your model to Firebase ML Kit and apply it to your Android/iOS application following this guide: https://firebase.google.com/docs/ml-kit/use-custom-models
I have a sample project using ML Kit and it works; for more detail, check my article here: https://medium.com/@rakaadinugroho/face-image-cropper-with-machine-learning-c448f9d19858
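
To make the "upload to Firebase, then run it on device" step concrete, here is a minimal Java sketch of the legacy Firebase ML Kit custom-model flow. The model name "asl_alphabet", the 1x224x224x3 input shape, and the 29-class output (the Kaggle ASL dataset has 26 letters plus SPACE, DELETE, and NOTHING) are assumptions; adjust them to match the model you actually trained and published:

    import com.google.firebase.ml.common.modeldownload.FirebaseModelDownloadConditions;
    import com.google.firebase.ml.common.modeldownload.FirebaseModelManager;
    import com.google.firebase.ml.custom.FirebaseCustomRemoteModel;
    import com.google.firebase.ml.custom.FirebaseModelDataType;
    import com.google.firebase.ml.custom.FirebaseModelInputOutputOptions;
    import com.google.firebase.ml.custom.FirebaseModelInputs;
    import com.google.firebase.ml.custom.FirebaseModelInterpreter;
    import com.google.firebase.ml.custom.FirebaseModelInterpreterOptions;

    public class AslClassifier {

        public void classify(float[][][][] pixels) throws Exception {
            // "asl_alphabet" is a placeholder: use the name you gave the
            // model when you published it in the Firebase console.
            FirebaseCustomRemoteModel remoteModel =
                    new FirebaseCustomRemoteModel.Builder("asl_alphabet").build();

            // Download (or update) the hosted model; in a real app, wait for
            // this Task to complete before running inference.
            FirebaseModelDownloadConditions conditions =
                    new FirebaseModelDownloadConditions.Builder().requireWifi().build();
            FirebaseModelManager.getInstance().download(remoteModel, conditions);

            FirebaseModelInterpreter interpreter = FirebaseModelInterpreter.getInstance(
                    new FirebaseModelInterpreterOptions.Builder(remoteModel).build());

            // Assumed shapes: 1x224x224x3 float input, 1x29 float output.
            FirebaseModelInputOutputOptions ioOptions =
                    new FirebaseModelInputOutputOptions.Builder()
                            .setInputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 224, 224, 3})
                            .setOutputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 29})
                            .build();

            FirebaseModelInputs inputs = new FirebaseModelInputs.Builder()
                    .add(pixels) // normalized RGB pixels, preprocessed exactly as in training
                    .build();

            interpreter.run(inputs, ioOptions)
                    .addOnSuccessListener(result -> {
                        float[][] scores = result.getOutput(0);
                        // argmax over scores[0] gives the predicted class index
                    })
                    .addOnFailureListener(e -> { /* handle the error */ });
        }
    }

A mismatch between the preprocessing here and the preprocessing used at training time (image size, normalization range, channel order) is a common cause of a model that "works in training" but misclassifies on device.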

Related

Adding an ML model into Android Studio

I have written an ML model using Keras in Google Colab which detects whether an animal is a cat or a dog based on an input image.
Now, I want to create an app in Android Studio that takes an image as input, uses the algorithm I have designed to detect whether the image is a cat or a dog, and outputs the result.
My question is: how can I incorporate this algorithm into Android Studio? My first thought was to rewrite the code in Java and copy it into Android Studio, but writing Java in Google Colab would lead to multiple complications. Is there a way to download the algorithm I have created and import it into Android Studio so that it works? If not, what other approach can I take?
My desired outcome is something where I can add the algorithm into Android Studio and write code like:
if (algorithm == true)
//output dog detected
else
//output cat detected
Android Studio is just an IDE; it doesn't run the actual code, and no, it doesn't run Python.
You should be able to export the Keras model into an offline format that Android can use via TensorFlow; see Keras deep learning model to android.
Alternatively, to deploy an "online model", you'd run a hosted web server that exposes the model over HTTP; your Android code would send requests to it and parse the response.
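
As a concrete illustration of the offline route: convert the Keras model to TensorFlow Lite in Colab (e.g. with tf.lite.TFLiteConverter), drop the resulting .tflite file into the app's assets folder, and run it with the TFLite interpreter. The file name, input preprocessing, and single-sigmoid output below are assumptions about your model; this is a sketch, not a drop-in implementation:

    import android.app.Activity;
    import android.content.res.AssetFileDescriptor;
    import java.io.FileInputStream;
    import java.nio.ByteBuffer;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import org.tensorflow.lite.Interpreter;

    public class CatDogClassifier {

        private final Interpreter tflite;

        public CatDogClassifier(Activity activity) throws Exception {
            tflite = new Interpreter(loadModelFile(activity, "cat_dog.tflite"));
        }

        // Memory-map the model bundled in the assets folder. Note that the
        // .tflite file must be stored uncompressed for openFd() to work.
        private static MappedByteBuffer loadModelFile(Activity activity, String name)
                throws Exception {
            AssetFileDescriptor fd = activity.getAssets().openFd(name);
            try (FileInputStream stream = new FileInputStream(fd.getFileDescriptor())) {
                FileChannel channel = stream.getChannel();
                return channel.map(FileChannel.MapMode.READ_ONLY,
                        fd.getStartOffset(), fd.getDeclaredLength());
            }
        }

        // input: preprocessed image pixels, shaped and normalized as in training.
        public String classify(ByteBuffer input) {
            float[][] output = new float[1][1]; // assumes a single sigmoid output
            tflite.run(input, output);
            // Which index means "dog" depends on how the labels were encoded
            // during training; swap the branches if your model is the reverse.
            return output[0][0] > 0.5f ? "dog detected" : "cat detected";
        }
    }

This gives you exactly the if/else shape you described, with the model's score standing in for "algorithm == true".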

ML project in Android app using TensorFlow Lite

I have a model file in TensorFlow. I have to integrate it into an Android application and build a sample app. The model takes tokenised input (a number array instead of sentences), while the application takes sentence input from the user. Given that the TensorFlow model's tokeniser is in Python, what should be done to implement the tokeniser in the Java Android application as well?
I think you can find some reusable code here: https://github.com/tensorflow/examples/tree/master/lite/examples
Maybe in the bert_qa example? https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android/app/src/main/java/org/tensorflow/lite/examples/bertqa/tokenization
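
If the Python side is a simple Keras-style word tokenizer, one option is to export its word index (e.g. as JSON) and re-implement the lookup in Java. The vocabulary format, the OOV index, and the fixed sequence length below are all assumptions about your pipeline; this is only a sketch of the idea:

    import java.util.HashMap;
    import java.util.Map;

    public class SimpleTokenizer {

        // Word -> index table exported from the Python tokenizer (e.g. as JSON).
        private final Map<String, Integer> wordIndex;
        private final int oovIndex;   // assumed index for out-of-vocabulary words
        private final int maxLength;  // assumed fixed sequence length the model expects

        public SimpleTokenizer(Map<String, Integer> wordIndex, int oovIndex, int maxLength) {
            this.wordIndex = new HashMap<>(wordIndex);
            this.oovIndex = oovIndex;
            this.maxLength = maxLength;
        }

        // Lowercase, split on whitespace, map words to ids, and pad with zeros,
        // mirroring what the Python tokenizer did at training time.
        public int[] tokenize(String sentence) {
            String[] words = sentence.toLowerCase().trim().split("\\s+");
            int[] ids = new int[maxLength]; // zero-padded by default
            for (int i = 0; i < words.length && i < maxLength; i++) {
                Integer id = wordIndex.get(words[i]);
                ids[i] = (id != null) ? id : oovIndex;
            }
            return ids;
        }
    }

The crucial point is that the Java tokenizer must reproduce the Python preprocessing exactly (same casing, splitting, padding, and truncation); otherwise the model will receive inputs it was never trained on.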

Firebase ML Kit, Google Cloud Vision API, or OpenCV

I want to build an android app for gaze tracking and I would like to ask which of the following tools I should use for better results.
Google Cloud Vision API
OpenCV (e.g. Haar cascade classifier)
Firebase ML Kit with facial landmarks
I don't know if you plan to create a commercial application or if it's for research purposes; the things to consider change a bit between these two scenarios.
For object tracking I'd probably go with Google's ML Kit: it has some ready-to-use models that also work offline, and it simplifies all the hard work of pure TensorFlow (even on iOS) if you want to use your own custom models. So your hard work becomes creating an efficient model, not running it (see the sketch after this answer).
Google Cloud Vision API I haven't used yet, just the GCP machines to train a neural network, and they came in handy for that.
OpenCV is a good one but might be hard to implement and maintain afterwards, and your app size will also increase considerably. I used Haar cascades in my final paper two years ago; the work was hard and the result not that accurate. Today I'd check OpenCV's DNN module and go with YOLO, like here. To summarize, I'd only recommend it if you have some specific image-processing demand, but first check Android's ColorFilter or ImageFilterView. If you do choose OpenCV, I'd recommend compiling it yourself with CMake, as described here, with just the modules you need, so your app size won't increase that much.
There are also some other options like Dlib or PyTorch. I worked with Dlib's SVM and a custom model last year; its results were good, but it's slow to run, about 3~4 seconds, compared to a NN in TensorFlow that runs in 50~60 milliseconds (even faster with quantized models). I don't have experience with PyTorch or any other framework to share anything with you.
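
Here is the sketch mentioned above: a minimal Java example of the legacy Firebase ML Kit face detector with landmarks, which gives you the eye positions you'd need as a starting point for gaze tracking. It assumes the legacy firebase-ml-vision library; the gaze-estimation logic itself (what you do with the eye coordinates) is left out:

    import com.google.firebase.ml.vision.FirebaseVision;
    import com.google.firebase.ml.vision.common.FirebaseVisionImage;
    import com.google.firebase.ml.vision.face.FirebaseVisionFace;
    import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetector;
    import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions;
    import com.google.firebase.ml.vision.face.FirebaseVisionFaceLandmark;

    public class EyeLandmarkDetector {

        private final FirebaseVisionFaceDetector detector;

        public EyeLandmarkDetector() {
            // Landmark mode is what exposes eye positions; contour mode would
            // give even more points around each eye if you need them.
            FirebaseVisionFaceDetectorOptions options =
                    new FirebaseVisionFaceDetectorOptions.Builder()
                            .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
                            .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                            .build();
            detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
        }

        public void detectEyes(FirebaseVisionImage image) {
            detector.detectInImage(image)
                    .addOnSuccessListener(faces -> {
                        for (FirebaseVisionFace face : faces) {
                            FirebaseVisionFaceLandmark leftEye =
                                    face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EYE);
                            FirebaseVisionFaceLandmark rightEye =
                                    face.getLandmark(FirebaseVisionFaceLandmark.RIGHT_EYE);
                            if (leftEye != null && rightEye != null) {
                                // leftEye.getPosition() / rightEye.getPosition() are
                                // the inputs for your own gaze-estimation logic.
                            }
                        }
                    })
                    .addOnFailureListener(e -> { /* handle the error */ });
        }
    }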

Firebase ML Kit: Pre-Trained Models

I can see there are 3 pre-trained models included in the assets folder of the ML Kit sample app.
Please help me understand
Which of these models correspond to face detection?
What is the data set used to train these models?
Those are custom models provided to highlight the ML Kit functionality that lets you bring your own TFLite models.
The pre-trained APIs, also known as Base APIs (text recognition, image labeling, etc.), are added by including the appropriate libraries in your app's Gradle file; there is no need to include anything in your assets folder. See an example here:
https://firebase.google.com/docs/ml-kit/android/recognize-text
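
To illustrate the difference: a Base API such as on-device text recognition is used purely through the library, with no model file in assets. A minimal Java sketch, assuming the legacy firebase-ml-vision dependency from the linked docs:

    import com.google.firebase.ml.vision.FirebaseVision;
    import com.google.firebase.ml.vision.common.FirebaseVisionImage;
    import com.google.firebase.ml.vision.text.FirebaseVisionTextRecognizer;

    public class TextReader {

        public void readText(FirebaseVisionImage image) {
            // The on-device model ships with the library; nothing goes in assets.
            FirebaseVisionTextRecognizer recognizer =
                    FirebaseVision.getInstance().getOnDeviceTextRecognizer();

            recognizer.processImage(image)
                    .addOnSuccessListener(result -> {
                        String recognized = result.getText();
                        // use the recognized text
                    })
                    .addOnFailureListener(e -> { /* handle the error */ });
        }
    }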

TensorFlow Object Detection API on Android

I'd like to swap out the multibox_model.pb used by TensorFlowMultiBoxDetector.java in Google's TensorFlow Detect sample app for the MobileNet frozen_inference_graph.pb included in the Object Detection API's model zoo.
I've run the optimize_for_inference script on it, but the TensorFlowInferenceInterface can't parse the optimized model. It can, however, parse the original frozen_inference_graph.pb. I still think I need to modify this graph somehow to take in a square-sized input image, such as the 224x224 one that multibox_model.pb uses.
I'm one of the developers. Just FYI, we'll be releasing an update to the Android detection demo in the next few weeks to make it compatible with the TensorFlow Object Detection API, so please stay tuned.
