Google Cloud Vision Object Detection Model Crashes on Android

I recently trained an object detection model on Google Cloud Vision. I exported the metadata JSON file, the label text file, and the model .tflite file of the trained model, and I intend to run it on Android. However, I cannot run this model using the Android demo app as it crashes every time.
The demo app used is compatible with a locally trained and converted tflite model but not the one exported from Google Cloud.
What might be wrong here and how can it be solved?
Thanks
Reference:
Demo App: https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection
Partial Log:
2020-01-24 11:29:11.628 18071-18071/org.tensorflow.lite.examples.detection E/libc: Access denied finding property "persist.camera.privapp.list"
2020-01-24 11:29:11.732 18071-18101/org.tensorflow.lite.examples.detection I/tensorflow: CameraConnectionFragment: Opening camera preview: 640x480
2020-01-24 11:29:11.769 18071-18102/org.tensorflow.lite.examples.detection D/vndksupport: Loading /vendor/lib/hw/android.hardware.graphics.mapper@2.0-impl.so from current namespace instead of sphal namespace.
2020-01-24 11:29:11.770 18071-18102/org.tensorflow.lite.examples.detection D/vndksupport: Loading /vendor/lib/hw/gralloc.msm8937.so from current namespace instead of sphal namespace.
2020-01-24 11:29:11.803 18071-18071/org.tensorflow.lite.examples.detection I/Timeline: Timeline: Activity_idle id: android.os.BinderProxy@5ab1c5e time:332335506
2020-01-24 11:29:12.198 18071-18101/org.tensorflow.lite.examples.detection D/tensorflow: CameraActivity: Initializing buffer 0 at size 307200
2020-01-24 11:29:12.201 18071-18101/org.tensorflow.lite.examples.detection D/tensorflow: CameraActivity: Initializing buffer 1 at size 153599
2020-01-24 11:29:12.203 18071-18101/org.tensorflow.lite.examples.detection D/tensorflow: CameraActivity: Initializing buffer 2 at size 153599
2020-01-24 11:29:12.204 18071-18101/org.tensorflow.lite.examples.detection I/tensorflow: DetectorActivity: Preparing image 1 for detection in bg thread.
2020-01-24 11:29:12.311 18071-18100/org.tensorflow.lite.examples.detection I/tensorflow: DetectorActivity: Running detection on image 1
2020-01-24 11:29:12.475 18071-18100/org.tensorflow.lite.examples.detection E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.examples.detection, PID: 18071
java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 307200 bytes and a Java Buffer with 270000 bytes.
at org.tensorflow.lite.Tensor.throwIfShapeIsIncompatible(Tensor.java:332)
at org.tensorflow.lite.Tensor.throwIfDataIsIncompatible(Tensor.java:305)
at org.tensorflow.lite.Tensor.setTo(Tensor.java:123)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:148)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:296)
at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:193)
at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:183)
at android.os.Handler.handleCallback(Handler.java:790)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:175)
at android.os.HandlerThread.run(HandlerThread.java:65)
=========================================================
Update: Now we know that it is because the image fed to the model and the input shape of the model do not match. The input/output shapes of models trained on Google Cloud Vision don't seem to be consistent. I recently got one with [ 1 320 320 3] in and [ 1 20 4] out, and another with [ 1 512 512 3] in and [ 1 20 4] out.
The demo app is made to handle models of [ 1 300 300 3] in and [ 1 10 4] out.
How do I assign the shapes of a model before training on Google Cloud Vision or how do I make the demo app capable of handling a model of a specific shape?
=========================================================
As an attempt to enable the demo app to handle a model of a specific shape, I changed TF_OD_API_INPUT_SIZE from 300 to 320, which seems to have solved the input shape issue. However, the problem then moves to the output side.
The new error log says:
java.lang.IllegalArgumentException: Cannot copy between a TensorFlowLite tensor with shape [1, 20, 4] and a Java object with shape [1, 10, 4].
Changing TEXT_SIZE_DIP from 10 to 20 doesn't help.

The cause of the crash is that the input shape does not match that of the model; once that is fixed, another crash occurs due to a mismatch of the output shape.
The solution is to adjust the I/O shapes in the demo application according to the model metadata provided by AutoML on Google Cloud.
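For reference, a minimal sketch of the kind of change involved, assuming the stock demo app layout (TF_OD_API_INPUT_SIZE lives in DetectorActivity.java and NUM_DETECTIONS in TFLiteObjectDetectionAPIModel.java; the values below come from the AutoML metadata quoted above, not from the repository):
// DetectorActivity.java -- input side: 320 matches the [ 1 320 320 3] input of the
// AutoML export (the stock SSD MobileNet model uses 300).
private static final int TF_OD_API_INPUT_SIZE = 320;
// TFLiteObjectDetectionAPIModel.java -- output side: 20 matches the [ 1 20 4] box
// tensor of the AutoML export (the stock model uses 10); the outputLocations,
// outputClasses and outputScores arrays are sized from this constant.
private static final int NUM_DETECTIONS = 20;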

Related

Android - TFLite OD - Cannot copy to a TensorFlowLite tensor (normalized_input_image_tensor) with 307200 bytes from a Java Buffer with 4320000 bytes

I'm trying to run my own custom model for object detection. I created my dataset with Google Cloud Vision (https://console.cloud.google.com/vision/), where I drew bounding boxes and labeled the images.
After training the model, I downloaded the TFLite files (labelmap.txt, model.tflite, and a JSON file).
Then, I added them to the Android Object Detection example ( https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android ) .
But when I run the project it crashes:
2020-07-12 18:03:05.160 14845-14883/? E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.examples.detection, PID: 14845
java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (normalized_input_image_tensor) with 307200 bytes from a Java Buffer with 4320000 bytes.
at org.tensorflow.lite.Tensor.throwIfSrcShapeIsIncompatible(Tensor.java:423)
at org.tensorflow.lite.Tensor.setTo(Tensor.java:189)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:154)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:343)
at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:197)
at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:182)
at android.os.Handler.handleCallback(Handler.java:883)
at android.os.Handler.dispatchMessage(Handler.java:100)
at android.os.Looper.loop(Looper.java:214)
at android.os.HandlerThread.run(HandlerThread.java:67)
I tried changing the parameters TF_OD_API_IS_QUANTIZED to false and labelOffset to 0, and I also modified this line in TFLiteObjectDetectionAPIModel.java to d.imgData = ByteBuffer.allocateDirect(4 * d.inputSize * d.inputSize * 3 * numBytesPerChannel); (I replaced the 1 with 4).
I am new to this; I would really appreciate it if someone could help me understand and resolve the error. Thank you!
Update:
Here are the tflite files : https://drive.google.com/drive/folders/11QT8CgaYF2EseORgGCceh4DT80_pMiFM?usp=sharing (I don't care if the model recognize correctly the squares and circles, I just want to check if it compiles on the android app and then I will improve it)
There is a superb visualization tool called Netron. I opened your .tflite file in it, and the input of your model is [1, 320, 320, 3] with uint8 data.
So in your code, at the line where you calculate the ByteBuffer size
1 * d.inputSize * d.inputSize * 3 * numBytesPerChannel
you have to use
1 * 320 * 320 * 3 * 1
The last "1" is for uint8; if you had floats you would put "4".
After I changed the TensorImage DataType from UINT8 to FLOAT32, it works.
val tfImageBuffer = TensorImage(DataType.UINT8)
->
val tfImageBuffer = TensorImage(DataType.FLOAT32)

Cannot convert between a TensorFlowLite buffer with 307200 bytes and a Java Buffer with 270000 bytes

I am trying to run a pre-trained object detection TensorFlow Lite model from the TensorFlow detection model zoo. I used the ssd_mobilenet_v3_small_coco model from this site under the Mobile Models heading. According to the instructions under Running our model on Android, I commented out the model download script to avoid the assets being overwritten (// apply from:'download_model.gradle' in the build.gradle file) and replaced the detect.tflite and labelmap.txt files in the assets directory. The build succeeded without any errors and the app was installed on my Android device, but it crashed as soon as it launched and the logcat showed:
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.examples.detection, PID: 16960
java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 307200 bytes and a Java Buffer with 270000 bytes.
at org.tensorflow.lite.Tensor.throwIfShapeIsIncompatible(Tensor.java:425)
at org.tensorflow.lite.Tensor.throwIfDataIsIncompatible(Tensor.java:392)
at org.tensorflow.lite.Tensor.setTo(Tensor.java:188)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:150)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:314)
at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:196)
at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:185)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:201)
at android.os.HandlerThread.run(HandlerThread.java:65)
I have searched through a lot of TensorFlow Lite documentation but did not find anything related to this error. I found some questions on Stack Overflow with the same error message, but for a custom-trained model, so that did not help. The same error keeps coming even with a custom-trained model. What should I do to eliminate this error?
You should resize your input tensors so your model can take data of the required size (pixels or batches).
The code below is for image classification, while yours is object detection:
TFLiteObjectDetectionAPIModel is responsible for the input size. Try to adjust the size somewhere in TFLiteObjectDetectionAPIModel.
The labels length needs to match the output tensor length of your trained model.
int[] dimensions = new int[4];
dimensions[0] = 1;   // Batch size (number of frames at a time)
dimensions[1] = 224; // Image width required by the model
dimensions[2] = 224; // Image height required by the model
dimensions[3] = 3;   // Number of color channels (RGB)
Tensor tensor = c.tfLite.getInputTensor(0);  // input tensor before resizing
c.tfLite.resizeInput(0, dimensions);
Tensor tensor1 = c.tfLite.getInputTensor(0); // input tensor after resizing
Change input size
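An alternative to hard-coding sizes is to read them from the interpreter when the model is loaded, so the buffers always match whatever model was exported. A minimal sketch, assuming a standard org.tensorflow.lite.Interpreter instance named tfLite and the usual SSD output ordering (boxes in output tensor 0):
int[] inputShape = tfLite.getInputTensor(0).shape();  // e.g. [1, 320, 320, 3]
int inputSize = inputShape[1];                        // assumes NHWC layout

int[] boxShape = tfLite.getOutputTensor(0).shape();   // e.g. [1, 20, 4]
int numDetections = boxShape[1];

// Size the output arrays from the model instead of a fixed constant.
float[][][] outputLocations = new float[1][numDetections][4];
float[][] outputClasses = new float[1][numDetections];
float[][] outputScores = new float[1][numDetections];
float[] numDetectionsOut = new float[1];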

tensorflow TF lite android app crashing after detection

I have trained my model using ssd_mobilenet_v2_quantized_coco, which was also a long, painstaking process of digging. Once training was successful, the model correctly detected objects on my laptop, but on my phone the app crashes as soon as an object is detected. I used the TF Lite Android app available on GitHub. I did some debugging in Android Studio and get the following error log when an object is detected and the app crashes:
I/tensorflow: MultiBoxTracker: Processing 0 results from 314
I/tensorflow: DetectorActivity: Preparing image 506 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 506
I/tensorflow: MultiBoxTracker: Processing 0 results from 506
I/tensorflow: DetectorActivity: Preparing image 676 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 676
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 3122
java.lang.ArrayIndexOutOfBoundsException: length=80; index=-2147483648
at java.util.Vector.elementData(Vector.java:734)
at java.util.Vector.get(Vector.java:750)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:247)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:193)
at android.os.HandlerThread.run(HandlerThread.java:65)
My guess is that the labels in the .txt file are somehow being misread. This is because of the line:
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
and that line corresponds to the following code:
labels.get((int) outputClasses[0][i] + labelOffset)
However, I don't know what to change in labels.txt. Possibly I need to edit that txt as suggested here. Any other suggestions and explanations of possible causes are appreciated.
Update: I added ??? to labels.txt and compiled/ran, but I am still getting the same error as above.
P.S. I trained ssd_mobilenet_v2_coco (the model without quantization) as well, and it works in the app without crashing. I am guessing that quantization converts label indices differently, perhaps resulting in an out-of-bounds error for the labels.
Yes, it is because the label output sometimes gets a garbage value. For a quick fix you can try this:
add a condition:
if ((int) outputClasses[0][i] > 10) {
    outputClasses[0][i] = -1;
}
Here 10 is the number of classes the model was trained for. You can change it accordingly.
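A slightly safer variant of the same idea is to skip any detection whose class index falls outside the label list instead of overwriting it. A sketch against the loop in recognizeImage(), assuming labels is the list read from labels.txt (the ArrayIndexOutOfBoundsException in the stack trace comes from exactly this lookup):
// Sketch: drop detections whose class index would not map to a label.
int classIndex = (int) outputClasses[0][i] + labelOffset;
if (classIndex < 0 || classIndex >= labels.size()) {
    continue; // garbage index from the quantized model; skip this detection
}
String title = labels.get(classIndex); // safe lookup replacing the crashing line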

java.lang.IllegalArgumentException: Output error: Shape of output target [1, 1917, 4] does not match with the shape of the Tensor [1, 1917, 1, 4]

I've trained my own model for object detection with TensorFlow and I got it working with TensorFlow Mobile for Android. Now that TensorFlow Lite has been released and is going to replace Mobile in the future, I wanted to start working with it. The TensorFlow team provided a demo of TFLite for object detection (you can find it here). So I tried to get it working with my model, but I got the error in the title. Here's the logcat:
05-17 11:18:50.624 25688-25688/? I/tensorflow: DetectorActivity: Camera orientation relative to screen canvas: 90
05-17 11:18:50.624 25688-25688/? I/tensorflow: DetectorActivity: Initializing at size 640x480
05-17 11:18:50.628 25688-25688/? I/tensorflow: MultiBoxTracker: Initializing ObjectTracker: 640x480
05-17 11:18:50.637 25688-25688/? I/tensorflow: DetectorActivity: Preparing image 1 for detection in bg thread.
05-17 11:18:50.689 25688-25707/? I/tensorflow: DetectorActivity: Running detection on image 1
05-17 11:18:52.496 25688-25707/? E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 25688
java.lang.IllegalArgumentException: Output error: Shape of output target [1, 1917, 4] does not match with the shape of the Tensor [1, 1917, 1, 4].
at org.tensorflow.lite.Tensor.copyTo(Tensor.java:44)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:154)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:222)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:242)
at android.os.Handler.handleCallback(Handler.java:761)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:156)
at android.os.HandlerThread.run(HandlerThread.java:61)
Note: as a checkpoint to train the model I used ssd_mobilenet_v1_coco_2017_11_17, and the only thing I changed in the code is this (TFLiteObjectDetectionAPIModel.java):
private static final int NUM_CLASSES = 3;
because I only have two objects to detect. Any help or information would be much appreciated.
I also had the same problem, but here is how I solved it with a small hack:
In the TFLiteObjectDetectionAPIModel.java file, create a new array:
float[][][][] temp1 = new float[1][NUM_RESULTS][1][4];
and then for your outputMap object replace:
outputMap.put(0, outputLocations);
by:
outputMap.put(0, temp1);
This will solve the shape mismatch problem. Also be sure to put the correct number of classes. For example, I had only one class, but in the .txt file the first class is listed as "???" and then the second one is my actual class. Hence I had:
private static final int NUM_CLASSES = 2;
even though I only have one class. But those two hacks seem to solve the problem.
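Note that with this hack the interpreter writes the boxes into temp1, so if later code still reads outputLocations, the extra axis can be squeezed out right after the interpreter call. A minimal sketch, assuming outputLocations is the usual float[1][NUM_RESULTS][4] array:
// Sketch: copy temp1 ([1, NUM_RESULTS, 1, 4]) back into outputLocations ([1, NUM_RESULTS, 4]).
for (int i = 0; i < NUM_RESULTS; ++i) {
    for (int j = 0; j < 4; ++j) {
        outputLocations[0][i][j] = temp1[0][i][0][j];
    }
}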
P.S.: The TFLite version of the frozen model seems to run even slower than the .pb version (on my Samsung Galaxy S8, Android API 26).

Why is video made with MediaCodec garbled for Samsung Galaxy S7?

When I encode a video via Surface -> MediaCodec -> MediaMuxer, I get a very strange result when testing on the Samsung Galaxy S7. For the other devices tested (an emulator running Marshmallow and an HTC Desire), the video comes out correctly, but on this device the video is garbled.
Using MediaCodec to save series of images as Video had similar-looking output, but I don't see how that solution could apply here, because I am using a Surface as input and set the color format to COLOR_FormatSurface.
I also tried messing with the video resolution (settled on 1280 x 720) per MediaCodec Encoded video has green bar at bottom and chrominance screwed up, but that didn't solve the problem either (cf. Nexus 7 2013 mediacodec video encoder garbled output).
Does anyone have suggestions for what I might try to get the video formatted correctly?
Here is part of the log from the encoding:
D/ViewRootImpl: #1 mView = android.widget.LinearLayout{1dc79f2 V.E...... ......I. 0,0-0,0 #102039c android:id/toast_layout_root}
I/ACodec: [] Now uninitialized
I/OMXClient: Using client-side OMX mux.
I/ACodec: [OMX.qcom.video.encoder.avc] Now Loaded
W/ACodec: [OMX.qcom.video.encoder.avc] storeMetaDataInBuffers (output) failed w/ err -1010
W/ACodec: do not know color format 0x7fa30c06 = 2141391878
W/ACodec: do not know color format 0x7fa30c04 = 2141391876
W/ACodec: do not know color format 0x7fa30c08 = 2141391880
W/ACodec: do not know color format 0x7fa30c07 = 2141391879
W/ACodec: do not know color format 0x7f000789 = 2130708361
D/ViewRootImpl: MSG_RESIZED_REPORT: ci=Rect(0, 0 - 0, 0) vi=Rect(0, 0 - 0, 0) or=1
I/ACodec: setupVideoEncoder succeeded
W/ACodec: do not know color format 0x7f000789 = 2130708361
I/ACodec: [OMX.qcom.video.encoder.avc] Now Loaded->Idle
I/ACodec: [OMX.qcom.video.encoder.avc] Now Idle->Executing
I/ACodec: [OMX.qcom.video.encoder.avc] Now Executing
I/MPEG4Writer: setStartTimestampUs: 0
I/MPEG4Writer: Earliest track starting time: 0
The 5th unrecognized color seems to be COLOR_FormatSurface... Is that a problem?
Other details:
MIME: video/avc
Resolution: 1280 x 720
Frame rate: 30
IFrame interval: 2
Bitrate: 8847360
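For context, those parameters roughly correspond to an encoder configuration like the following (a sketch, not the asker's actual code):
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

// Sketch: AVC encoder fed from a Surface, using the parameters listed above.
static MediaCodec createConfiguredEncoder() throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 8847360);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2);

    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    // createInputSurface() must be called between configure() and start();
    // keep a reference to this Surface, since frames are rendered into it.
    Surface inputSurface = encoder.createInputSurface();
    encoder.start();
    return encoder;
}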
Per Android Docs for MediaCodec.createInputSurface():
The Surface must be rendered with a hardware-accelerated API, such as
OpenGL ES. lockCanvas(android.graphics.Rect) may fail or produce
unexpected results.
I must have missed (or ignored) that when writing the code. Since I was using lockCanvas() to get a canvas on which to draw my video frames, the code broke. I have put a quick fix in place by using lockHardwareCanvas() when the API level is >= 23 (it is unavailable before that, and the code ran fine on API level 19).
Long term, however (for me and anyone else who might stumble across this), I may have to get into more OpenGL for a more permanent and stable solution. It's not worth going that route, though, unless I find a device that does not work with my quick fix.
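A sketch of that quick fix, assuming a Surface obtained from MediaCodec.createInputSurface() and one Bitmap per frame (the method name is illustrative):
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.os.Build;
import android.view.Surface;

// Sketch: render a frame into the encoder's input surface, preferring the
// hardware-accelerated canvas on API 23+ (plain lockCanvas() is what produced
// the garbled output here, per the MediaCodec.createInputSurface() docs).
static void drawFrame(Surface inputSurface, Bitmap frame) {
    Canvas canvas = Build.VERSION.SDK_INT >= Build.VERSION_CODES.M
            ? inputSurface.lockHardwareCanvas()
            : inputSurface.lockCanvas(null);
    try {
        canvas.drawBitmap(frame, 0, 0, null);
    } finally {
        inputSurface.unlockCanvasAndPost(canvas);
    }
}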
If you are still looking for an example of rendering bitmaps to an InputSurface, I was able to get this to work. Look at my answers here:
https://stackoverflow.com/a/49331192/7602598
https://stackoverflow.com/a/49331352/7602598
https://stackoverflow.com/a/49331295/7602598
