TensorFlow retrained model performance - Android

I am following the TensorFlow for Poets guide to train my own model. I have created retrained_graph.pb and retrained_labels.txt. When I use them in my application, I get this error:
Caused by: java.lang.UnsupportedOperationException: Op BatchNormWithGlobalNormalization is not available in GraphDef version 21. It has been removed in version 9. Use tf.nn.batch_normalization(). at org.tensorflow.Graph.importGraphDef(Native Method) at org.tensorflow.Graph.importGraphDef(Graph.java:118)
After that, following the TensorFlow for Mobile blog post, I processed the trained model further and created optimized_graph.pb, rounded_graph.pb, and mmapped_graph.pb files.
The optimized_graph.pb and rounded_graph.pb files work in the Android application without any error.
When I use mmapped_graph.pb, I get this error: Failed to initialize: java.io.IOException: Not a valid TensorFlow Graph serialization: Invalid GraphDef
The recognition quality is not good with the optimized_graph.pb and rounded_graph.pb files: when the camera screen does not contain any flower, a random flower name is still shown with a high confidence score. Is there any way to detect only flowers and show nothing when no flower is present?
Screenshot
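For context, the optimization step from the codelab that produces optimized_graph.pb looks roughly like the sketch below (TF 1.x). The node names "input" and "final_result" follow the TensorFlow for Poets convention and are assumptions; among other rewrites, this pass folds the deprecated BatchNormWithGlobalNormalization ops into inference-friendly equivalents, which is what typically resolves the error quoted above on newer runtimes.

```python
import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

# Load the retrained graph; node names follow the TensorFlow for Poets
# codelab ("input" and "final_result") - check your own graph if they differ.
graph_def = tf.GraphDef()
with tf.gfile.GFile("retrained_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Folds training-only ops (including the deprecated
# BatchNormWithGlobalNormalization) into inference-friendly ones.
optimized = optimize_for_inference_lib.optimize_for_inference(
    graph_def, ["input"], ["final_result"], tf.float32.as_datatype_enum)

with tf.gfile.GFile("optimized_graph.pb", "wb") as f:
    f.write(optimized.SerializeToString())
```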

The performance of the application itself is very good, and it is really fast on a mobile phone's GPU. The problem is how you build your own model. The TensorFlow graph in this application is built to classify images into exactly the classes you give it. In other words, if you teach the model to recognize 4 different classes of images, it will try to label everything it sees as one of those 4 classes.
For this reason you get wrong results when the camera screen doesn't contain flowers.
One way to "solve" this problem is to add an extra class trained on random images; that class will then tend to win with high confidence on non-flower photos.
If you want a stricter model, you have to use a completely different algorithm.
However, keep in mind that the one used in this application is close to the state of the art in image recognition.
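To get the "show nothing when no flower is present" behavior the question asks for, once such an extra background class (and/or a score threshold) is in place, the display logic is simple. A minimal sketch in Python; the "nothing" label and the 0.7 threshold are assumptions, and the same logic would be ported into the Android classifier code:

```python
# Decide what to display given the softmax scores from the classifier.
# Assumes an extra "nothing" class was added at training time, plus a
# hand-tuned confidence threshold; both are assumptions.
THRESHOLD = 0.7

def label_to_display(labels, scores):
    best = max(range(len(scores)), key=lambda i: scores[i])
    if labels[best] == "nothing" or scores[best] < THRESHOLD:
        return None  # show a blank screen instead of a flower name
    return labels[best]

# Example: a confident daisy wins, an uncertain frame shows nothing.
print(label_to_display(["daisy", "rose", "nothing"], [0.9, 0.05, 0.05]))  # daisy
print(label_to_display(["daisy", "rose", "nothing"], [0.4, 0.3, 0.3]))    # None
```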

Related

Android: Converted TensorFlow 2.0 SavedModel to TFLite, problem with confidence values

Hello, I've converted a SavedModel to a TFLite file after applying transfer learning to a TFHub MobileNet module (TF2.0 SavedModel), using this source code: https://gist.github.com/mypapit/e3b26787c95caf840e5c16a79327d443. I then tried running it in the TensorFlow for Poets sample Android application.
image 1
The resulting Android app seems to classify my retrained classes correctly. However, the confidence values are way off (ridiculously off!); you can refer to the screenshot.
Normally, the confidence should be in the range 0.000 to 1.000. But with my converted TFLite model, the values vary wildly from -400 to 500.00++.
FYI, I've already tinkered with the IMAGE_MEAN and IMAGE_STD values, trying (255f, 0f) and (127.5f, 127.5f), but to no avail.
Can somebody help me?
The standard TensorFlow for Poets Android source code which I used to test the model is here: https://gist.github.com/mypapit/f7a9b54ee502f02ca72da3f972d25fb9
The converted TFLite file is here: https://1drv.ms/u/s!AmVw1Hsqu0-CguVlKyCNE0W-NzODEg?e=LkjBXl
and its labelmap is here: https://gist.github.com/mypapit/56845dde0c47e21d0e18ec86d25a3ff2
I've noticed that this only happens when I use a TFHub module (TF2.0 SavedModel) with TensorFlow 2.x; it does not happen when I use a TFHub module with TensorFlow 1.14.
I'm already at my wit's end. Can somebody help me? :(
It seems like you're missing the final softmax layer, which makes the label outputs sum to 1.
https://medium.com/data-science-bootcamp/understand-the-softmax-function-in-minutes-f3a59641e86d
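For illustration, a minimal sketch of adding that layer before conversion, assuming a TF2 Keras setup similar to the one in the linked gist; the TFHub handle, input size, and the 5-class Dense head are assumptions, not taken from the question's code:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Transfer-learning head over a TFHub feature extractor.
model = tf.keras.Sequential([
    hub.KerasLayer(
        "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
        input_shape=(224, 224, 3), trainable=False),
    tf.keras.layers.Dense(5),   # raw logits: unbounded values like -400..500
    tf.keras.layers.Softmax(),  # the missing layer: outputs in [0, 1], summing to 1
])

# Convert to TFLite with the softmax baked in.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```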
As I don't have your dataset, I tried your model with the flowers dataset, converted it as in your code, and then deployed it on an Android device using Android Studio. I used the TFLite example app and replaced the assets folder contents with the *.tflite file and labels.txt. The accuracy is very good; please check the images below. Based on this, I can say with confidence that the root cause is not your model or the TFLite conversion. The root cause may be your data (image preprocessing) or the Android part of the code. Please check the code implementation here

Is it possible to make the TensorFlow Lite model downloadable in Android?

I have a use-case where the APK size of my Android app is a very important parameter. Adding a TFLite model obviously increases this APK size, which is undesirable for me.
I have already quantized the model so that the APK size increase is minimal. However, I would like to make this model downloadable rather than including it as an asset file.
Downloading the model after the app has been installed is not much of a problem for me.
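For reference, a minimal sketch of the quantization step mentioned above, using TF2's converter; the saved-model path and file names are placeholders. The resulting file can then be hosted and fetched after install instead of being bundled as an asset:

```python
import tensorflow as tf

# Post-training weight quantization shrinks the .tflite file roughly 4x.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the model out as a standalone file to host for download,
# rather than packaging it under the APK's assets/ directory.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

On the app side, the TFLite Interpreter can be constructed from a File, so a model downloaded to the app's internal storage works the same way as a bundled one.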

TensorFlow SSD300 for Android

I'm a college student studying machine learning in Japan.
I'm not good at English, but I will do my best to convey my situation.
I'm now trying to use an object detection model on Android.
I used SSD300_mobile_net for training, and I got an .hdf5 file containing the model's weights.
Next, I converted this .hdf5 file into a .pb file suitable for TensorFlow.
Finally, I want to use this .pb file with TensorFlow Android (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android).
But I got the error below in Android Studio while debugging the application on my device.
This is the error capture
I want to know how to solve this problem.
I suspect it's an issue with your hardware. HDF5 weight files are generally very big, and the computation done on them is even bigger. Since your system is throwing an OOM exception, I believe the error occurs while loading the weights into the model.
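For reference, the .hdf5-to-.pb conversion step the question describes is usually done along the lines of this sketch (TF 1.x Keras; the file names are placeholders, and an SSD300 implementation will typically also need its custom layers passed via custom_objects):

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

# Load the trained Keras model in inference mode. SSD300 implementations
# usually require custom_objects for their custom layers (e.g. anchor
# boxes); omitted here for brevity.
tf.keras.backend.set_learning_phase(0)
model = tf.keras.models.load_model("ssd300_weights.hdf5")  # placeholder name

sess = tf.keras.backend.get_session()
output_names = [out.op.name for out in model.outputs]

# Fold the variables into constants so the graph is self-contained,
# then write it as a binary .pb usable by the Android demo.
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
tf.train.write_graph(frozen, ".", "ssd300_frozen.pb", as_text=False)
```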

Unity3D Android - Download assets from an API and import on device?

Questions come first, then some description that helps explain them:
Questions:
Will Unity be able to easily import the downloaded assets from our API if we save them in a specified directory within our core Android app?
Can we pre-serialize the 3D model assets before uploading them to our API so that they do not need to be serialized by Unity on the device? This is so that they load faster.
Description
I'm not a Unity/3D developer or an Android developer, so apologies if this question doesn't meet community guidelines (I built the API in this scenario). This is also probably very long for what should be simple questions.
We've built an API which serves 3D model files in Collada 1.4 (.dae) format to iOS and Android client applications. File sizes can be quite large (upwards of 80-90 MB in some instances), although the majority are in the 4-15 MB range.
On iOS we've had the application download the models from the API and then render them on screen via SceneKit, which has been relatively simple to achieve. We put the Collada files through a preprocessing step before serving them, which converts them into a proprietary format used by SceneKit.
On Android we've encountered any number of problems. We initially experimented with converting the models to OBJ instead of Collada and using Rajawali, jPCT, and a few other loaders for OpenGL ES 2.0, but due to our file sizes the time to read and load the models was far too long. We've now decided Unity is probably where we need to go on Android as a rendering engine.
The Android app will consist of two parts: a core app, which has an interface for viewing images of the models and downloading the model files, and a Unity app, which is loaded via an activity from the core app to actually render the models.

The size of the graph file (.pb) from TensorFlow is too large for Android usage, how can I reduce it?

I'm trying to build a network implementing YOLO object detection using TensorFlow, and I want it to be usable on Android. After building the structure, I used tf.train.write_graph to get the graph file, and I want to replace the original file in the Android demo with it.
But the .pb file is too large (1.1 GB), which is not usable on Android. So, how can I reduce its size?
I would suggest first trying to quantize your graph; for that you'll only need an official TensorFlow script. Here's a great tutorial by Pete Warden:
https://petewarden.com/2016/05/03/how-to-quantize-neural-networks-with-tensorflow/
In theory, if you used 32-bit floats, your model will end up roughly 4 times smaller (~250 MB), since the values in the graph are converted to 8-bit integers (for inference this has no significant effect on performance). Note that this really comes into play when you compress the Protocol Buffer file.
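A sketch of what that looks like with the graph_transforms tool from the tutorial (TF 1.x); the file and node names are assumptions about your YOLO graph, and this assumes the .pb was written as binary (as_text=False):

```python
import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

# Load the oversized frozen graph produced by tf.train.write_graph.
graph_def = tf.GraphDef()
with tf.gfile.GFile("yolo_graph.pb", "rb") as f:  # placeholder file name
    graph_def.ParseFromString(f.read())

# "quantize_weights" stores each weight as an 8-bit value plus a
# min/max range, cutting the file size roughly 4x.
transformed = TransformGraph(
    graph_def,
    inputs=["input"],    # assumed input node name
    outputs=["output"],  # assumed output node name
    transforms=["quantize_weights"])

with tf.gfile.GFile("yolo_graph_quantized.pb", "wb") as f:
    f.write(transformed.SerializeToString())
```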
