I'm a college student studying machine learning in Japan.
I'm not good at English, but I will do my best to explain my situation.
I'm now trying to run an object detection model on Android.
I used SSD300_mobile_net for training, which gave me an .hdf5 file containing the model's weights.
Next, I converted this .hdf5 file into a .pb file suitable for TensorFlow.
Finally, I want to use this .pb file with TensorFlow's Android example (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android).
But while debugging the application on my device, I got the following error in Android Studio.
This is the error capture
I want to know how to solve this problem.
I suspect it's an issue with your hardware. .hdf5 weight files are generally very large, and the computation performed on them is even larger. Since your system is throwing an OOM (out-of-memory) exception, I believe you are getting this error while loading the weights into the model.
Hello, I've converted a SavedModel to a TFLite file after applying transfer learning to a TFHub MobileNet module (TF 2.0 SavedModel), using this source code: https://gist.github.com/mypapit/e3b26787c95caf840e5c16a79327d443. I then tried running it in the TensorFlow for Poets sample Android application.
image 1
The resulting Android app seems to classify my retrained classes correctly. However, the confidence scores are way off (like ridiculously off!); you can refer to the screenshot.
Normally, the scores should fall in the range 0.000 to 1.000. But with my converted TFLite model, they vary wildly, from about -400 to over 500.
FYI, I've already tinkered with the IMAGE_MEAN and IMAGE_STD values ((255f, 0f) and (127.5f, 127.5f)), but to no avail.
Can somebody help me?
The standard TensorFlow for Poets Android source code which I used to test the model is here: https://gist.github.com/mypapit/f7a9b54ee502f02ca72da3f972d25fb9
The converted TFLite file is here: https://1drv.ms/u/s!AmVw1Hsqu0-CguVlKyCNE0W-NzODEg?e=LkjBXl
and its labelmap is here: https://gist.github.com/mypapit/56845dde0c47e21d0e18ec86d25a3ff2
I've noticed that this only happens when I use a TFHub module (TF 2.0 SavedModel) with TensorFlow 2.x; it does not happen when I use a TFHub module with TensorFlow 1.14.
I'm already at my wits' end. Can somebody help me? :(
It seems you're missing the final softmax layer, which makes the sum of all the label outputs equal 1.
https://medium.com/data-science-bootcamp/understand-the-softmax-function-in-minutes-f3a59641e86d
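For intuition: softmax maps arbitrary real-valued logits (like the -400 to 500+ scores described above) to values in [0, 1] that sum to 1. A minimal pure-Python sketch of the function:

```python
import math

def softmax(logits):
    # subtract the max for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# raw scores in the range the broken model emits
scores = [-400.0, 120.0, 517.0]
probs = softmax(scores)
print(probs)       # each value lies in [0, 1]
print(sum(probs))  # 1.0
```

Without this final layer, the app displays the raw logits directly, which is exactly the "-400 to 500" behavior described.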
As I don't have your dataset, I tried your model with the flower dataset, converted it as in your code, and deployed it on an Android device using Android Studio. I used the TFLite example app and replaced the assets folder with the *.tflite and labels.txt files. Accuracy is very good; please check the images below. Based on this, I can say with confidence that the root cause is not your model or the TFLite conversion. The root cause may be your data (image preprocessing) or the Android part of the code. Please check the code implementation here.
I am using the TensorFlow for Poets guide to train my own model. I have created retrained_graph.pb and retrained_labels.txt. When I use them in my application, I get this error:
Caused by: java.lang.UnsupportedOperationException: Op BatchNormWithGlobalNormalization is not available in GraphDef version 21. It has been removed in version 9. Use tf.nn.batch_normalization(). at org.tensorflow.Graph.importGraphDef(Native Method) at org.tensorflow.Graph.importGraphDef(Graph.java:118)
After that, I further processed the model for mobile use following the TensorFlow for Mobile blog, creating optimized_graph.pb, rounded_graph.pb, and mmapped_graph.pb files.
The optimized_graph.pb and rounded_graph.pb files work in the Android application without any error.
With mmapped_graph.pb, I get the error: Failed to initialize: java.io.IOException: Not a valid TensorFlow Graph serialization: Invalid GraphDef
However, the application's predictions are poor with optimized_graph.pb and rounded_graph.pb: even when the camera view contains no flower photos, it shows a random flower name with high confidence. Is there any way to detect only flowers and show nothing when there are none?
Screenshot
Performance of the application is very good, and it is really fast on a mobile phone's GPU. The problem is how you build your own model. The TensorFlow graph in this application is built to recognize images based on the classes you give it. In other words, if you teach the model to recognize, for example, 4 different classes of images, it will try to label everything it sees as one of those 4 classes.
For this reason, you get wrong results when the camera view doesn't contain flowers.
One way to "solve" this problem is to add an extra class trained on random images, which will then tend to get high confidence for non-flower photos.
If you want a stricter model, you have to use a completely different algorithm.
However, keep in mind that the one used in this application is probably close to the state of the art in image recognition.
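If retraining with an extra class isn't practical, a cheap partial workaround on the app side is to reject low-confidence predictions. This is only a sketch (the helper name and the 0.6 threshold are made up, and it only helps when wrong predictions actually score lower than correct ones, which a closed-set classifier does not guarantee):

```python
def classify_with_rejection(probs, labels, threshold=0.6):
    """Return the best label, or 'unknown' if the top score is below the threshold."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "unknown"
    return labels[best]

labels = ["daisy", "rose", "tulip", "sunflower"]
print(classify_with_rejection([0.05, 0.9, 0.03, 0.02], labels))  # rose
print(classify_with_rejection([0.3, 0.25, 0.25, 0.2], labels))   # unknown
```

The threshold would need to be tuned against real non-flower frames.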
I've built an application that uses Tesseract (v3.03 rc1) to identify some specific text strings. These are, unfortunately, printed in a custom font that requires me to build my own traineddata file. I've built the application on both iOS (using https://github.com/gali8/Tesseract-OCR-iOS for inspiration) and Android (using https://github.com/rmtheis/tess-two/ for inspiration as well).
The workflow for both platforms is as follows:
I select a bounding box on the preview screen for where I can crop out the relevant text, and crop the image accordingly.
I use OpenCV to get a binary image (using OpenCV's adaptive threshold function with the same parameters on both platforms).
I pass this binary image to Tesseract. Both platforms (Android and iOS) use the same traineddata file.
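For reference, the mean variant of OpenCV's adaptive threshold binarizes each pixel against the mean of its blockSize × blockSize neighborhood minus a constant C. A pure-Python sketch of that rule (the parameter values below are illustrative placeholders, not the ones from the app):

```python
def adaptive_threshold_mean(img, block_size=3, C=2, max_val=255):
    """Binarize like cv2.ADAPTIVE_THRESH_MEAN_C with THRESH_BINARY:
    pixel > (local mean - C) -> max_val, else 0."""
    h, w = len(img), len(img[0])
    r = block_size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)  # replicate border pixels
                    xx = min(max(x + dx, 0), w - 1)
                    vals.append(img[yy][xx])
            local_mean = sum(vals) / len(vals)
            out[y][x] = max_val if img[y][x] > local_mean - C else 0
    return out

# tiny grayscale patch: dark text region next to a bright column
img = [[10, 10, 200],
       [10, 10, 200],
       [10, 10, 200]]
binary = adaptive_threshold_mean(img)
```

Since the threshold is relative to the local mean, small platform differences in the grayscale input (camera pipeline, color conversion, scaling) can flip pixels near the boundary even with identical parameters, which is one plausible source of the iOS/Android divergence.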
And yet, iOS recognizes the text strings perfectly, while Android keeps misidentifying certain characters (6s for Ss, As for Hs).
On both platforms, I use the same white list string, I disable load_type_dawg and load_system_dawg, and also choose to save the blob choices.
Has anyone encountered this kind of situation before? Am I missing a setting on Android that's automatically handled in iOS? Is there something particular about Android that hasn't crossed my mind?
Any thoughts or advice would be greatly appreciated!
So, after a lot of work, I found out what was wrong with my Android application (thankfully, it wasn't an issue with Tesseract at all). As I'm more familiar with iOS apps than Android, I wasn't sure how to load the traineddata file into the application without requiring the user to have the file on their external storage. I found inspiration in this project (http://www.codeproject.com/Tips/840623/Android-Character-Recognition), as it autoloads the trained data file.
However, I misunderstood how it worked. I originally thought the TessDataManager did a file lookup in the project's local tesseract/tessdata folder to get the trained data file (as I do on iOS). That's not what it does: it checks the internal file structure (data/data/projectname/files/tesseract/tessdata/traineddatafilegoeshere) to see whether the file exists, and if it doesn't, it copies over the trained data file it keeps in the Resources/Raw directory. In my case, it defaulted to the eng file, so my custom font file was never read.
Hopefully this helps someone else having similar issues. Thanks to Robin and RmTheis for all of your help!
I have a problem with an image for an Android game. The problem is not with the code, because I took the code from a book (Beginning Android 4 Games Development).
The problem is this: I know I have to use the PNG format on Android, but I don't know which settings to use for it (like RGB565...). If I simply use PNG, the images don't look good when I run the game. So I need someone to explain which settings I should use for images in Android games.
P.S. The software I used is Photoshop. If there is better software for this purpose, please tell me.
I think there is a strong misconception in your understanding of Android and how it implements graphics. You are not constrained to .png for nearly any of your development; .png and .9.png are only strictly enforced for drawable resources.
Android uses Java and can work with nearly any graphics format. In particular, native support for .bmp, .png, and .jpg is present on every device and Android OS version. You can even generate your graphics at runtime, byte by byte.
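On the RGB565 setting the question mentions: it packs each pixel into 16 bits (5 bits red, 6 green, 5 blue), halving memory versus 32-bit ARGB at the cost of dropping the low-order bits of each channel, which is what produces the visible banding ("images are not good"). A small sketch of the packing, in Python for illustration:

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit channels into a 16-bit RGB565 value (5 red, 6 green, 5 blue bits)."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(v):
    """Expand back to 8-bit channels; the discarded low bits cause banding."""
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    return (r << 3, g << 2, b << 3)

# the blue channel loses precision on the round trip: 50 comes back as 48
print(unpack_rgb565(pack_rgb565(200, 100, 50)))
```

So if your art has smooth gradients, a 16-bit config will band no matter how you export the PNG; a 32-bit config (full 8 bits per channel) preserves it.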
As for a good image editor, there are a number out there. I often use a combination of GIMP and Photoshop, myself.
Hope this helps,
FuzzicalLogic
Sorry if it's a silly question: do comments in Java or XML files affect the memory usage of an Android application? Has anyone monitored their application's memory usage with and without comments?
No, comments do not use any memory.
It's important to understand that when programming in C, Java, etc., what you write is source code, which is compiled into a machine-code format before being run on the computer (or, specifically, your Android device). The processor does not run your source code as you see it. Source code typically contains lots of things like comments (which have NO effect on the compiled code) or perhaps compiler directives (which may control how the compiler compiles sections of your code).
(I realize it's more correct to use the term byte code in the case of Java, but I'm trying to keep the answer simple here.)
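You can see this for yourself in any language that compiles to bytecode. Here is a quick demonstration in Python (rather than Java, but the principle is the same): two functions differing only in a comment compile to identical bytecode:

```python
src_with = "def f(x):\n    # this comment costs nothing at runtime\n    return x * 2\n"
src_without = "def f(x):\n    return x * 2\n"

ns_a, ns_b = {}, {}
exec(compile(src_with, "<a>", "exec"), ns_a)
exec(compile(src_without, "<b>", "exec"), ns_b)

# the compiled bytecode is byte-for-byte identical
print(ns_a["f"].__code__.co_code == ns_b["f"].__code__.co_code)  # True
print(ns_a["f"](21))  # 42
```

The comment only shifts line numbers in the debug information; the executable instructions are unchanged.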
An exception to this, however, would be if you include a file (e.g. an XML file) as a raw resource in your Android application, since that file is shipped verbatim. But I think that's a more advanced topic for you to learn about later.
Comments in your code are compiled out and have no effect whatsoever on memory usage in an application.