I trained my model using ssd_mobilenet_v2_quantized_coco, which was itself a long, painstaking process of digging. Once training succeeded, the model correctly detected objects in images on my laptop, but on my phone the app crashes as soon as an object is detected. I used the TFLite Android demo app available on GitHub. I did some debugging in Android Studio and get the following error log when an object is detected and the app crashes:
I/tensorflow: MultiBoxTracker: Processing 0 results from 314
I/tensorflow: DetectorActivity: Preparing image 506 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 506
I/tensorflow: MultiBoxTracker: Processing 0 results from 506
I/tensorflow: DetectorActivity: Preparing image 676 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 676
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 3122
java.lang.ArrayIndexOutOfBoundsException: length=80; index=-2147483648
at java.util.Vector.elementData(Vector.java:734)
at java.util.Vector.get(Vector.java:750)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:247)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:193)
at android.os.HandlerThread.run(HandlerThread.java:65)
My guess is that the labels in the .txt file are somehow being misread. I suspect this because of the line:
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
and that line corresponds to the following code:

    labels.get((int) outputClasses[0][i] + labelOffset)
However, I don't know what to change in labels.txt. Possibly I need to edit that .txt file as suggested here. Any other suggestions and explanations of possible causes are appreciated.
Update: I added ??? to labels.txt and compiled/ran the app, but I am still getting the same error as above.
P.S. I also trained ssd_mobilenet_v2_coco (the model without quantization), and it runs in the app without crashing. My guess is that quantization converts the label indices differently, perhaps resulting in the out-of-bounds error for labels.
Yes, it is because the class output sometimes contains a garbage value. For a quick fix you can add a bounds check before the label lookup. Note that the crash above shows a negative index (-2147483648), so the lower bound needs guarding as well, not just the upper one:

    int classIndex = (int) outputClasses[0][i];
    // Replace garbage class indices with -1, so that the lookup
    // labels.get(classIndex + labelOffset) lands on the "???" placeholder
    // at the top of labels.txt (assuming labelOffset is 1).
    if (classIndex < 0 || classIndex > 10) {
        outputClasses[0][i] = -1;
    }

Here 10 is the number of classes the model was trained on; change it accordingly.
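A more defensive variant (a sketch, not the demo's actual code) is to bounds-check the final index against the label list itself, reusing the labels and labelOffset fields already present in TFLiteObjectDetectionAPIModel:

    // Fall back to the "???" placeholder whenever the model emits an
    // index outside the label list, whatever the garbage value is.
    int labelIndex = (int) outputClasses[0][i] + labelOffset;
    String title = (labelIndex >= 0 && labelIndex < labels.size())
            ? labels.get(labelIndex)
            : "???";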
Related
I recently trained an object detection model on Google Cloud Vision. I exported the metadata JSON file, the label text file, and the .tflite file of the trained model, and I intend to run it on Android. However, I cannot run this model using the Android demo app, as it crashes every time.
The demo app works with a locally trained and converted .tflite model, but not with the one exported from Google Cloud.
What might be wrong here and how can it be solved?
Thanks
Reference:
Demo App: https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection
Partial Log:
2020-01-24 11:29:11.628 18071-18071/org.tensorflow.lite.examples.detection E/libc: Access denied finding property "persist.camera.privapp.list"
2020-01-24 11:29:11.732 18071-18101/org.tensorflow.lite.examples.detection I/tensorflow: CameraConnectionFragment: Opening camera preview: 640x480
2020-01-24 11:29:11.769 18071-18102/org.tensorflow.lite.examples.detection D/vndksupport: Loading /vendor/lib/hw/android.hardware.graphics.mapper#2.0-impl.so from current namespace instead of sphal namespace.
2020-01-24 11:29:11.770 18071-18102/org.tensorflow.lite.examples.detection D/vndksupport: Loading /vendor/lib/hw/gralloc.msm8937.so from current namespace instead of sphal namespace.
2020-01-24 11:29:11.803 18071-18071/org.tensorflow.lite.examples.detection I/Timeline: Timeline: Activity_idle id: android.os.BinderProxy#5ab1c5e time:332335506
2020-01-24 11:29:12.198 18071-18101/org.tensorflow.lite.examples.detection D/tensorflow: CameraActivity: Initializing buffer 0 at size 307200
2020-01-24 11:29:12.201 18071-18101/org.tensorflow.lite.examples.detection D/tensorflow: CameraActivity: Initializing buffer 1 at size 153599
2020-01-24 11:29:12.203 18071-18101/org.tensorflow.lite.examples.detection D/tensorflow: CameraActivity: Initializing buffer 2 at size 153599
2020-01-24 11:29:12.204 18071-18101/org.tensorflow.lite.examples.detection I/tensorflow: DetectorActivity: Preparing image 1 for detection in bg thread.
2020-01-24 11:29:12.311 18071-18100/org.tensorflow.lite.examples.detection I/tensorflow: DetectorActivity: Running detection on image 1
2020-01-24 11:29:12.475 18071-18100/org.tensorflow.lite.examples.detection E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.examples.detection, PID: 18071
java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 307200 bytes and a Java Buffer with 270000 bytes.
at org.tensorflow.lite.Tensor.throwIfShapeIsIncompatible(Tensor.java:332)
at org.tensorflow.lite.Tensor.throwIfDataIsIncompatible(Tensor.java:305)
at org.tensorflow.lite.Tensor.setTo(Tensor.java:123)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:148)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:296)
at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:193)
at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:183)
at android.os.Handler.handleCallback(Handler.java:790)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:175)
at android.os.HandlerThread.run(HandlerThread.java:65)
=========================================================
Update: Now we know that it is because the image fed to the model does not match the model's input shape: the demo fills a 300·300·3 = 270000-byte buffer while the model expects 320·320·3 = 307200 bytes. The input/output shapes of models trained on Google Cloud Vision don't seem to be consistent: I recently got one with input [1 320 320 3] and output [1 20 4], and another with input [1 512 512 3] and output [1 20 4].
The demo app is built to handle models with input [1 300 300 3] and output [1 10 4].
How do I assign the shapes of a model before training on Google Cloud Vision, or how do I make the demo app handle a model of a specific shape?
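Rather than guessing, one way to see what an exported .tflite file expects is to ask the interpreter itself; a minimal sketch, assuming a TFLite Java API recent enough to have getInputTensor()/getOutputTensor():

    import java.nio.MappedByteBuffer;
    import java.util.Arrays;
    import org.tensorflow.lite.Interpreter;

    // Log the model's actual tensor shapes so the app's buffers can be
    // sized to match; modelBuffer is the memory-mapped .tflite file.
    void logModelShapes(MappedByteBuffer modelBuffer) {
        Interpreter tflite = new Interpreter(modelBuffer);
        System.out.println("input:  " + Arrays.toString(tflite.getInputTensor(0).shape()));
        System.out.println("output: " + Arrays.toString(tflite.getOutputTensor(0).shape()));
        tflite.close();
    }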
=========================================================
As an attempt to make the demo app handle a model of a specific shape, I changed TF_OD_API_INPUT_SIZE from 300 to 320, which seems to have solved the input-shape issue. However, problems remain on the output side.
The new error log says:
java.lang.IllegalArgumentException: Cannot copy between a TensorFlowLite tensor with shape [1, 20, 4] and a Java object with shape [1, 10, 4].
Changing TEXT_SIZE_DIP from 10 to 20 doesn't help (that constant only controls the size of the on-screen debug text).
The cause of the first crash is that the input shape does not match the model's; after fixing that, a second crash is caused by a mismatch of the output shape.
The solution is to adjust the I/O shapes in the demo application according to the model metadata provided by AutoML on Google Cloud.
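Concretely, that means aligning the demo's constants with the metadata. A sketch for a [1 320 320 3]-in / [1 20 4]-out model; TF_OD_API_INPUT_SIZE appears above, while NUM_DETECTIONS is the name used in recent versions of the demo (older versions may differ):

    // DetectorActivity.java -- input tensor is [1, 320, 320, 3]:
    private static final int TF_OD_API_INPUT_SIZE = 320;

    // TFLiteObjectDetectionAPIModel.java -- output tensor is [1, 20, 4],
    // i.e. 20 candidate detections per frame instead of the default 10:
    private static final int NUM_DETECTIONS = 20;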
I am trying to implement a YOLO detector on Android, following the demo code provided by TensorFlow. The model runs fine on the CPU or with NNAPI. However, when I try to run it with the GPU delegate, the app crashes and the debugger gives the following error:
java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: GpuDelegate Prepare: Dimensions are not BHWCNode number 23 (GpuDelegate) failed to prepare.
The final layer of the model looks like:

    self.conv9 = layers.Conv2D(425, (1, 1), strides=(1, 1), padding='same', name='conv_9', use_bias=False)
The model runs perfectly fine on a PC with either CPU or GPU.
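For what it's worth, the TFLite GPU delegate has historically supported only 4-D BHWC tensors, so a YOLO head whose output gets reshaped to more dimensions can fail to prepare even though the CPU path runs fine. A sketch of a CPU fallback, assuming the standard org.tensorflow.lite.gpu.GpuDelegate API:

    import java.nio.MappedByteBuffer;
    import org.tensorflow.lite.Interpreter;
    import org.tensorflow.lite.gpu.GpuDelegate;

    // Try the GPU delegate first; if the graph contains tensors the
    // delegate cannot map to BHWC (as in the error above), rebuild the
    // interpreter on the CPU instead of crashing.
    Interpreter buildInterpreter(MappedByteBuffer model) {
        try {
            return new Interpreter(model, new Interpreter.Options().addDelegate(new GpuDelegate()));
        } catch (IllegalArgumentException e) {
            return new Interpreter(model, new Interpreter.Options().setNumThreads(4));
        }
    }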
I am trying to extract the image directly from the GPU (i.e. without using AcquireCameraImageBytes()), both for performance reasons (a Samsung S9 can't reach 10 fps) and to support the Xiaomi Pocophone I bought. I use the TextureReader class included in the ComputerVision example, but OnImageAvailableCallback is never called and the log shows some errors during initialization:
camera_utility: Failed to create OpenGL frame buffer.
camera_utility: Failed to create OpenGL frame buffer.
I set a couple of breakpoints inside libarcore_camera_utility.so and saw that glCheckFramebufferStatus inside TextureReader::create returns 0 (so an error occurred), yet glGetError() doesn't return any error. How can this problem be resolved?
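For reference, glCheckFramebufferStatus() returning 0 while glGetError() stays clean is a classic symptom of calling GL on a thread with no current EGL context (the calls then silently no-op). A hypothetical check to run on the thread that ends up invoking TextureReader::create:

    import android.opengl.EGL14;
    import android.util.Log;

    // If this logs, the GL calls inside the native library are no-ops
    // because the calling thread has no current EGL context.
    if (EGL14.eglGetCurrentContext().equals(EGL14.EGL_NO_CONTEXT)) {
        Log.e("TextureReaderCheck", "No current EGL context on this thread");
    }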
I've trained my own object detection model with TensorFlow and got it working with TensorFlow Mobile for Android. Since TensorFlow Lite has now been released and is going to replace TensorFlow Mobile in the future, I wanted to start working with it. The TensorFlow team provided a TFLite demo for object detection (you can find it here). So I tried to get it working with my model, but I got the error in the title. Here's the logcat:
05-17 11:18:50.624 25688-25688/? I/tensorflow: DetectorActivity: Camera orientation relative to screen canvas: 90
05-17 11:18:50.624 25688-25688/? I/tensorflow: DetectorActivity: Initializing at size 640x480
05-17 11:18:50.628 25688-25688/? I/tensorflow: MultiBoxTracker: Initializing ObjectTracker: 640x480
05-17 11:18:50.637 25688-25688/? I/tensorflow: DetectorActivity: Preparing image 1 for detection in bg thread.
05-17 11:18:50.689 25688-25707/? I/tensorflow: DetectorActivity: Running detection on image 1
05-17 11:18:52.496 25688-25707/? E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 25688
java.lang.IllegalArgumentException: Output error: Shape of output target [1, 1917, 4] does not match with the shape of the Tensor [1, 1917, 1, 4].
at org.tensorflow.lite.Tensor.copyTo(Tensor.java:44)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:154)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:222)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:242)
at android.os.Handler.handleCallback(Handler.java:761)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:156)
at android.os.HandlerThread.run(HandlerThread.java:61)
Note: as a checkpoint to train the model I used ssd_mobilenet_v1_coco_2017_11_17, and the only thing I changed in the code is this (in TFLiteObjectDetectionAPIModel.java):

    private static final int NUM_CLASSES = 3;

because I only have two objects to detect. Any help or information would be much appreciated.
I also had the same problem, but here is how I solved it with a small hack:
In TFLiteObjectDetectionAPIModel.java, create a new array variable:

    float[][][][] temp1 = new float[1][NUM_RESULTS][1][4];

and then for your outputMap object replace:

    outputMap.put(0, outputLocations);

with:

    outputMap.put(0, temp1);

This solves the shape-mismatch problem. Also be sure to set the correct number of classes. For example, I have only one class, but in the .txt file the first class is listed as "???" and my actual class comes second, hence I set:

    private static final int NUM_CLASSES = 2;

even though I only have one class. Those two hacks seem to solve the problem.
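One caveat with this hack: after runForMultipleInputsOutputs() the boxes land in temp1, not in outputLocations, so the extra axis likely needs to be squeezed back out before the results are read; a sketch, assuming outputLocations has shape [1][NUM_RESULTS][4]:

    // Drop the redundant third axis of temp1 ([1][NUM_RESULTS][1][4]) so
    // the rest of recognizeImage() can keep reading outputLocations.
    for (int i = 0; i < NUM_RESULTS; ++i) {
        for (int j = 0; j < 4; ++j) {
            outputLocations[0][i][j] = temp1[0][i][0][j];
        }
    }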
P.S.: The TFLite version of the frozen model seems to run even slower than the .pb version (on my Samsung Galaxy S8, Android API 26).
I have a scene transition that I had some issues with:
Scene transition with hero elements throws Layer exceeds max. dimensions supported by the GPU
But setting transitionGroup fixed it (see the sketch below). When I compile the exact same app with the latest Android M SDK, it crashes when pressing back from the transition.
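For context, the transitionGroup fix amounts to flagging the shared element's parent view group (a RelativeLayout here, judging by the abort message below) so the framework animates it as a single unit; in Java, or equivalently android:transitionGroup="true" in the layout XML:

    // Animate the RelativeLayout and its children as one unit during
    // the activity transition instead of layering each child separately.
    relativeLayout.setTransitionGroup(true);

The crash on back navigation reports: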
Abort message: 'art/runtime/java_vm_ext.cc:410] JNI DETECTED ERROR IN APPLICATION:
JNI CallObjectMethod called with pending exception java.lang.IllegalStateException:
Unable to create layer for RelativeLayout'
Does anyone know if Google changed anything regarding this in Android M?
It is reported here:
https://code.google.com/p/android-developer-preview/issues/detail?id=2416
But no resolution...