Android M: Scene transition with hero elements throws java.lang.IllegalStateException

I have a scene transition that I had some issues with:
Scene transition with hero elements throws Layer exceeds max. dimensions supported by the GPU
But setting transitionGroup fixed it. When I compile the exact same app with the latest Android M SDK, it crashes when pressing back from the transition.
Abort message: 'art/runtime/java_vm_ext.cc:410] JNI DETECTED ERROR IN APPLICATION:
JNI CallObjectMethod called with pending exception java.lang.IllegalStateException:
Unable to create layer for RelativeLayout'
Does anyone know if Google changed anything regarding this in Android M?
It is reported here:
https://code.google.com/p/android-developer-preview/issues/detail?id=2416
But no resolution...
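For reference, the transitionGroup workaround from the earlier question can be set either in XML (android:transitionGroup="true" on the shared-element container) or in code. A minimal sketch; R.id.heroContainer is an illustrative id, not from the original project:

// Mark the container as a single transition group so the framework
// animates it as one unit instead of creating a layer per child.
RelativeLayout hero = (RelativeLayout) findViewById(R.id.heroContainer);
hero.setTransitionGroup(true);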

Related

TensorFlow Lite GPU delegate gives "Dimensions are not BHWC" error for the last layer, but the layer appears to be BHWC like every other layer to me

I am trying to implement a YOLO detector on Android, following the demo code provided by TensorFlow. The model runs fine on the CPU or with NNAPI. However, when I try to run it with the GPU, the program crashes and the debugger gives the following error:
java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: GpuDelegate Prepare: Dimensions are not BHWC
Node number 23 (GpuDelegate) failed to prepare.
The final layer of the model looks like:
self.conv9 = layers.Conv2D(425, (1, 1), strides=(1, 1), padding='same', name='conv_9', use_bias=False)
The model runs perfectly fine on PC with either CPU or GPU.
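Not part of the original question, but while a delegate bug like this is open, a common defensive pattern is to attempt the GPU delegate and fall back to the plain CPU interpreter when it fails to prepare. A minimal sketch, assuming the standard TensorFlow Lite Java API:

import java.nio.MappedByteBuffer;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

class InterpreterFactory {
    // Try the GPU delegate first; if applying it fails (as with the
    // "Dimensions are not BHWC" error above), fall back to the CPU.
    static Interpreter build(MappedByteBuffer model) {
        try {
            return new Interpreter(model,
                    new Interpreter.Options().addDelegate(new GpuDelegate()));
        } catch (IllegalArgumentException e) {
            return new Interpreter(model, new Interpreter.Options());
        }
    }
}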

ARCore Image Extraction from GPU

I am trying to extract the image directly from the GPU (so without using AcquireCameraImageBytes()) for performance reasons (a Samsung S9 can't reach 10 fps) and to support the Xiaomi Pocophone I bought. I use the TextureReader class included in the ComputerVision example, but OnImageAvailableCallback is never called and the log shows some errors during initialization:
camera_utility: Failed to create OpenGL frame buffer.
camera_utility: Failed to create OpenGL frame buffer.
I inserted a couple of breakpoints inside libarcore_camera_utility.so and saw that glCheckFramebufferStatus inside TextureReader::create returns 0 (so an error occurred), but glGetError() doesn't return any error. How can this problem be resolved?
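One thing worth checking (an assumption, not from the original report): glCheckFramebufferStatus returns 0 only when the call itself fails, and the most common cause is that no GL context is current on the calling thread; in that state glGetError() also has no context to report through, which would match the symptoms above. A small diagnostic sketch:

import android.opengl.EGL14;
import android.opengl.GLES20;
import android.util.Log;

class GlContextCheck {
    // Returns true if an EGL context is current on the calling thread.
    // Compares native handles rather than relying on object identity.
    static boolean hasCurrentContext() {
        boolean current = EGL14.eglGetCurrentContext().getNativeHandle()
                != EGL14.EGL_NO_CONTEXT.getNativeHandle();
        Log.d("GlContextCheck", "GL context current: " + current
                + ", glGetError: " + GLES20.glGetError());
        return current;
    }
}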

TensorFlow Lite Android app crashing after detection

I have trained my model using ssd_mobilenet_v2_quantized_coco, which was a long, painstaking process of digging. Once training succeeded, the model was correctly detecting images on my laptop, but on my phone the app crashes as soon as an object is detected. I used the TF Lite Android app available on GitHub. I did some debugging in Android Studio and get the following error log when an object is detected and the app crashes:
I/tensorflow: MultiBoxTracker: Processing 0 results from 314
I/tensorflow: DetectorActivity: Preparing image 506 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 506
I/tensorflow: MultiBoxTracker: Processing 0 results from 506
I/tensorflow: DetectorActivity: Preparing image 676 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 676
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 3122
java.lang.ArrayIndexOutOfBoundsException: length=80; index=-2147483648
at java.util.Vector.elementData(Vector.java:734)
at java.util.Vector.get(Vector.java:750)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:247)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:193)
at android.os.HandlerThread.run(HandlerThread.java:65)
My guess is that the labels in the .txt file are somehow being misread. This is because of the line:
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
and that line corresponds to the following code:
labels.get((int) outputClasses[0][i] + labelOffset)
However, I don't know what to change in labels.txt. Possibly I need to edit that file as suggested here. Any other suggestions and explanations of possible causes are appreciated.
Update: I added ??? to labels.txt and compiled/ran, but I am still getting the same error as above.
P.S. I trained ssd_mobilenet_v2_coco (the model without quantization) as well, and it works without crashing in the app. I am guessing that quantization converts label indices differently, perhaps resulting in an out-of-bounds error for the labels.
Yes, it is because the label output sometimes gets a garbage value. For a quick fix you can try this:
add a condition:
// Guard against garbage class indices; the crash log above shows a
// negative index (-2147483648), so check both bounds.
if ((int) outputClasses[0][i] < 0 || (int) outputClasses[0][i] > 10) {
    outputClasses[0][i] = -1;
}
Here, 10 is the number of classes the model was trained for; you can change it accordingly.
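A slightly more general form of the same guard, written as a hypothetical helper (labels and labelOffset are the fields the demo's recognizeImage already uses; the class and method names are illustrative):

import java.util.List;

class LabelLookup {
    // Bounds-checked label lookup: any garbage class index (negative or past
    // the end of the list) maps to the "???" placeholder instead of crashing.
    static String labelFor(List<String> labels, int labelOffset, float rawClass) {
        int idx = (int) rawClass + labelOffset;
        return (idx >= 0 && idx < labels.size()) ? labels.get(idx) : "???";
    }
}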

React Native Camera resets zoom when switching type

Which implementation are you using
RCTCamera
Steps to reproduce
On Android, have an RCTCamera view with the zoom prop set to read from this.state.zoom, then do:
this.setState({
  zoom: [any level],
  type: [front type if current camera is back camera, back type if current camera is front camera],
});
Expected behaviour
The other camera should presumably open with the zoom set to whatever the zoom prop is.
Actual behaviour
The other camera opens with no set zoom at all.
It works fine when the component is first mounted and such, but not afterwards. I've run into two thrown exceptions while experimenting with different approaches:
In some cases an exception is thrown because a setZoom() call tries to use a camera after it's been released.
When trying to call setZoom directly through React Native at the same time as setting the state, or as a callback parameter in the setState, it throws an exception because it failed to connect to the camera service. Directly calling setZoom through React Native when not switching camera types works fine.
The current and closest thing to working, at least insofar as it doesn't directly throw an exception, is the above example, which comes out like this with some logging:
05-02 15:33:30.482 1953-1953/com.appname D/zoom: CameraView setZoom called, setting to 30
05-02 15:33:30.482 1953-1953/com.appname D/zoom: RCTCamera setZoom called, setting cameraType 1 to 30
05-02 15:33:30.672 345-11616/? W/QCameraParameters: [PARM_DBG] zoom_level = 30
05-02 15:33:31.112 345-6416/? W/QCameraParameters: [PARM_DBG] zoom_level = 0
05-02 15:33:35.572 1953-1953/com.appname D/zoom: CameraView setZoom called, setting to 20
05-02 15:33:35.572 1953-1953/com.appname D/zoom: RCTCamera setZoom called, setting cameraType 2 to 20
05-02 15:33:35.912 345-12088/? W/QCameraParameters: [PARM_DBG] zoom_level = 20
05-02 15:33:36.312 345-31706/? W/QCameraParameters: [PARM_DBG] zoom_level = 0
Environment
Node.js version: 9.3
React Native version: 0.55.2
React Native platform + platform version: Android 6.0.1, API 23
react-native-camera
Version: Master branch, but the RCTCamera parts are a modified version of 0.12, since RCTCamera saves pictures much faster than RNCamera, and has pinch zooming.
Solved it. For anyone who runs into this same or similarly specific issue:
When switching camera types, the camera parameters are actually set twice: once as part of setZoom(), which works as intended, and again as part of adjustPreviewLayout(), which zeroes out the parameter set in setZoom().
Since both of these functions are in RCTCamera.java, I solved this by making setZoom() store the zoom value in a variable (in addition to what it was already doing), and then having adjustPreviewLayout() set the zoom parameter according to that variable.
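A sketch of that fix under stated assumptions: the member and method names are illustrative rather than the actual RCTCamera.java source, and it uses the old android.hardware.Camera API that RCTCamera is built on.

import android.hardware.Camera;

class ZoomFix {
    private int storedZoomLevel = 0;

    void setZoom(Camera camera, int zoomLevel) {
        storedZoomLevel = zoomLevel; // remember the last requested zoom
        Camera.Parameters params = camera.getParameters();
        if (params.isZoomSupported()) {
            params.setZoom(zoomLevel);
            camera.setParameters(params);
        }
    }

    void adjustPreviewLayout(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        // ... existing rotation/preview-size handling ...
        if (params.isZoomSupported()) {
            // Re-apply the stored zoom instead of letting it reset to 0.
            params.setZoom(storedZoomLevel);
        }
        camera.setParameters(params);
    }
}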

MediaCodec.createInputSurface() throws IllegalStateException on some devices

I'm working on a video processing app. The app has one Activity that contains a Fragment. The Fragment in turn contains a VideoSurfaceView, derived from GLSurfaceView, that shows users a preview of the video with effects (using OpenGL). After previewing, users can start processing the video.
To process the video, I mainly apply the method described here.
Everything works fine on most devices except the Oppo Mirror 3 (Android 4.4). On this device, every time I try to create a Surface using MediaCodec.createInputSurface(), it throws java.lang.IllegalStateException with code -38.
E/OMXMaster: A component of name 'OMX.qcom.audio.decoder.aac' already exists, ignoring this one.
E/SoftAVCEncoder: internalSetParameter: StoreMetadataInBuffersParams.nPortIndex not zero!
E/OMXNodeInstance: OMX_SetParameter() failed for StoreMetaDataInBuffers: 0x80001001
E/ACodec: [OMX.google.h264.encoder] storeMetaDataInBuffers (output) failed w/ err -2147483648
E/OMXNodeInstance: createInputSurface requires COLOR_FormatSurface (AndroidOpaque) color format
E/ACodec: [OMX.google.h264.encoder] onCreateInputSurface returning error -38
E/VideoProcessing: java.lang.IllegalStateException
at android.media.MediaCodec.createInputSurface(Native Method)
at com.ltpquang.android.core.processing.codec.VideoEncoder.<init>(VideoEncoder.java:46)
at com.ltpquang.android.core.VideoProcessing.setupVideo(VideoProcessing.java:200)
at com.ltpquang.android.core.VideoProcessing.<init>(VideoProcessing.java:167)
at com.ltpquang.android.ui.activity.PreviewEditActivity.lambda$btNext$12(PreviewEditActivity.java:723)
at com.ltpquang.android.ui.activity.PreviewEditActivity.access$lambda$12(PreviewEditActivity.java)
at com.ltpquang.android.ui.activity.PreviewEditActivity$$Lambda$13.run(Unknown Source)
at java.lang.Thread.run(Thread.java:841)
Playing around a little bit, I observed that:
BEFORE creating and adding the VideoSurfaceView to the layout, I can create the MediaCodec encoder and obtain the input surface successfully. I can create as many as I want if I release the previous MediaCodec before creating a new one; otherwise I can obtain one and only one input surface, regardless of how many MediaCodecs I have.
AFTER creating and adding the VideoSurfaceView to the layout, there is no chance I can get the input surface from the MediaCodec; it always throws java.lang.IllegalStateException.
I've tried removing the VideoSurfaceView from the layout and setting it to null before creating the surface, but no luck.
I also tried with suggestions from here, or here, but they didn't help.
From this, it seems that my device can only get the software codec, so I can't create the input surface.
My questions are:
Why is that?
If the device's resources are limited, what can I do (release something, for example) to continue the process?
If it is related to the software codec, what should I do? How can I detect and release the resource?
Is this related to GL contexts? If yes, what should I do? Should I manage the contexts myself?
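On the detection question: a diagnostic sketch (not from the original post) that lists every AVC encoder the device exposes. If only OMX.google.h264.encoder shows up, that matches the log above, where createInputSurface() fails on the software encoder. It uses the pre-API-21 enumeration since the failing device runs Android 4.4:

import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

class AvcEncoderProbe {
    // Log the name of every encoder that supports video/avc. Hardware
    // encoders are typically named OMX.qcom..., OMX.Exynos..., etc.;
    // OMX.google.h264.encoder is the software fallback.
    static void logAvcEncoders() {
        for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (!info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if ("video/avc".equalsIgnoreCase(type)) {
                    Log.d("AvcEncoderProbe", "AVC encoder: " + info.getName());
                }
            }
        }
    }
}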
