I have a custom class that extends TextureView. This TextureView holds a SurfaceTexture, mSurfaceTexture, which is a single-buffered surface.
As per the Android documentation here:
In single buffered mode the application is responsible for serializing access to the image content buffer. Each time the image content is to be updated, the releaseTexImage() method must be called before the image content producer takes ownership of the buffer.
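For reference, a minimal sketch of that contract (the texture id handling and threading here are assumptions, not the actual code):
// Single-buffered consumer; texId is a GL_TEXTURE_EXTERNAL_OES texture created beforehand.
SurfaceTexture surfaceTexture = new SurfaceTexture(texId, /* singleBufferMode */ true);
Surface producerSurface = new Surface(surfaceTexture);

// On the GL thread, for each frame:
surfaceTexture.updateTexImage();   // latch the latest frame into the GL texture
// ... draw with the texture ...
surfaceTexture.releaseTexImage();  // hand the single buffer back to the producer

// Only after releaseTexImage() may the producer queue the next frame into producerSurface.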
So I'm calling releaseTexImage() whenever the image producer takes ownership. It works fine on Android 10; however, on Android 9 I get the following error:
11-05 19:05:53.960 23442 24509 E AndroidRuntime: java.lang.IllegalStateException: Unable to release texture contents (see logcat for details)
11-05 19:05:53.960 23442 24509 E AndroidRuntime: at android.graphics.SurfaceTexture.nativeReleaseTexImage(Native Method)
11-05 19:05:53.960 23442 24509 E AndroidRuntime: at android.graphics.SurfaceTexture.releaseTexImage(SurfaceTexture.java:252)
I'm calling ANativeWindow_fromSurface. After using it, should I do something else as well, such as releasing the native window?
Any idea why this is happening? Has anyone else had a similar issue?
I have trained my model using ssd_mobilenet_v2_quantized_coco, which was itself a long, painstaking process of digging. Once training succeeded, the model correctly detected objects on my laptop, but on my phone the app crashes as soon as an object is detected. I used the TF Lite Android demo app available on GitHub. I did some debugging in Android Studio and get the following error log when an object is detected and the app crashes:
I/tensorflow: MultiBoxTracker: Processing 0 results from 314
I/tensorflow: DetectorActivity: Preparing image 506 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 506
I/tensorflow: MultiBoxTracker: Processing 0 results from 506
I/tensorflow: DetectorActivity: Preparing image 676 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 676
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 3122
java.lang.ArrayIndexOutOfBoundsException: length=80; index=-2147483648
at java.util.Vector.elementData(Vector.java:734)
at java.util.Vector.get(Vector.java:750)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:247)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:193)
at android.os.HandlerThread.run(HandlerThread.java:65)
My guess is that the labels in the .txt file are somehow being misread. I think this because of the line:
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
and that line corresponds to the following code:
labels.get((int) outputClasses[0][i] + labelOffset)
However, I don't know what to change in labels.txt. Possibly I need to edit that file as suggested here. Any other suggestions and explanations of possible causes are appreciated.
Update: I added ??? to labels.txt and compiled/ran, but I am still getting the same error as above.
P.S. I trained ssd_mobilenet_v2_coco (the model without quantization) as well, and it works in the app without crashing. I am guessing that quantization converts label indices differently and perhaps results in an out-of-bounds error for the labels.
Yes, it is because the label output sometimes contains a garbage value. For a quick fix you can try this:
Add a condition:
// Guard against garbage class indices produced by the quantized model
if ((int) outputClasses[0][i] > 10) {
    outputClasses[0][i] = -1;
}
Here, 10 is the number of classes the model was trained for; change it accordingly.
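A slightly safer variant of the same idea is to bounds-check the index right before the labels.get() call; the labelOffset and "???" fallback follow the demo code, but the exact wiring here is a sketch, not the official fix:
int classIndex = (int) outputClasses[0][i] + labelOffset;
String label;
if (classIndex >= 0 && classIndex < labels.size()) {
    label = labels.get(classIndex);
} else {
    // Garbage index from the quantized output: mark it as unknown instead of crashing.
    label = "???";
}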
INTRODUCTION:
I work on an Android app that has a lot of images. These images are displayed within several RecyclerViews and ViewPagers. For caching the images I use Fresco, which is initialised like this in the Application class:
Fresco.initialize(this, ImagePipelineConfig
.newBuilder(getApplicationContext())
.setBitmapMemoryCacheParamsSupplier(new BitmapMemoryCacheParamsSupplier(getApplicationContext()))
.setDownsampleEnabled(true)
.build());
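For context, BitmapMemoryCacheParamsSupplier above is a custom class that isn't shown; a typical implementation is a Supplier<MemoryCacheParams> roughly like the sketch below (the sizes are illustrative, and the five-argument MemoryCacheParams constructor matches older Fresco releases and may differ in newer ones):
public class BitmapMemoryCacheParamsSupplier implements Supplier<MemoryCacheParams> {
    private final ActivityManager activityManager;

    public BitmapMemoryCacheParamsSupplier(Context context) {
        activityManager = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    }

    @Override
    public MemoryCacheParams get() {
        // Cap the bitmap cache at a fraction of the app's memory class.
        int maxCacheSize = Math.min(activityManager.getMemoryClass() * 1024 * 1024 / 8, 16 * 1024 * 1024);
        return new MemoryCacheParams(
                maxCacheSize,       // max total size of the cache, in bytes
                64,                 // max number of entries in the cache
                maxCacheSize / 2,   // max total size of entries pending eviction
                32,                 // max number of entries pending eviction
                maxCacheSize / 4);  // max size of a single cache entry
    }
}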
When images are displayed within a RecyclerView I use SimpleDraweeView, and they are loaded like this:
val imageRequest = ImageRequestBuilder
.newBuilderWithSource(Uri.parse(imageUrl))
.setResizeOptions(ResizeOptions(50, 10))
.build()
(itemView as SimpleDraweeView).setImageRequest(imageRequest)
It should also be noted that, in order to avoid memory caching of the images, the following line is called for each request:
Fresco.getImagePipeline().evictFromMemoryCache(Uri.parse(imageUrl))
I also load images within ViewPagers, where I use ZoomableDraweeView (also needed for zooming in). In those cases images are loaded like this:
val view = ZoomableDraweeView(container.context)
view.setAllowTouchInterceptionWhileZoomed(true)
view.controller = Fresco.newDraweeControllerBuilder()
.setImageRequest(ImageRequestBuilder.newBuilderWithSource(Uri.parse(imageUrls[position]))
.setResizeOptions(ResizeOptions(50, 20)).build()).build()
For these requests I also avoid memory caching by calling
Fresco.getImagePipeline().evictFromMemoryCache(Uri.parse(imageUrls[position]))
It should also be noted that each time the user leaves the Activities that contain images, I have this:
@Override
protected void onStop() {
super.onStop();
Fresco.getImagePipeline().clearMemoryCaches();
}
THE PROBLEM:
The issue is that, on some devices, at some point while scrolling through these images my app crashes with an OutOfMemoryError.
QUESTION
Is there any way I could improve the Fresco setup, or is there anything I'm missing that could help me get rid of the OOM?
UPDATE WITH CRASH LOG:
java.lang.OutOfMemoryError: Failed to allocate a 2833932 byte allocation with 1972896 free bytes and 1926KB until OOM
E at dalvik.system.VMRuntime.newNonMovableArray(Native Method)
E at android.graphics.Bitmap.nativeCreate(Native Method)
E at android.graphics.Bitmap.createBitmap(Bitmap.java:975)
E at android.graphics.Bitmap.createBitmap(Bitmap.java:946)
E at android.graphics.Bitmap.createBitmap(Bitmap.java:913)
E at android.graphics.drawable.VectorDrawable$VectorDrawableState.createCachedBitmapIfNeeded(VectorDrawable.java:834)
E at android.graphics.drawable.VectorDrawable.draw(VectorDrawable.java:318)
E at android.graphics.drawable.LayerDrawable.draw(LayerDrawable.java:916)
E at com.facebook.drawee.drawable.ForwardingDrawable.draw(ForwardingDrawable.java:185)
E at com.facebook.drawee.drawable.ScaleTypeDrawable.draw(ScaleTypeDrawable.java:123)
E at com.facebook.drawee.drawable.FadeDrawable.drawDrawableWithAlpha(FadeDrawable.java:302)
E at com.facebook.drawee.drawable.FadeDrawable.draw(FadeDrawable.java:289)
E at com.facebook.drawee.drawable.ForwardingDrawable.draw(ForwardingDrawable.java:185)
E at com.facebook.drawee.generic.RootDrawable.draw(RootDrawable.java:81)
I'm working on a video processing app. The app has one Activity that contains a Fragment. The Fragment in turn contains a VideoSurfaceView, derived from GLSurfaceView, that shows users a preview of the video with an effect applied (using OpenGL). After previewing, users can start processing the video.
To process the video, I mainly apply the method described here.
Everything works fine on most devices except the Oppo Mirror 3 (Android 4.4). On this device, every time I try to create a Surface using MediaCodec.createInputSurface(), it throws java.lang.IllegalStateException with code -38.
E/OMXMaster: A component of name 'OMX.qcom.audio.decoder.aac' already exists, ignoring this one.
E/SoftAVCEncoder: internalSetParameter: StoreMetadataInBuffersParams.nPortIndex not zero!
E/OMXNodeInstance: OMX_SetParameter() failed for StoreMetaDataInBuffers: 0x80001001
E/ACodec: [OMX.google.h264.encoder] storeMetaDataInBuffers (output) failed w/ err -2147483648
E/OMXNodeInstance: createInputSurface requires COLOR_FormatSurface (AndroidOpaque) color format
E/ACodec: [OMX.google.h264.encoder] onCreateInputSurface returning error -38
E/VideoProcessing: java.lang.IllegalStateException
at android.media.MediaCodec.createInputSurface(Native Method)
at com.ltpquang.android.core.processing.codec.VideoEncoder.<init>(VideoEncoder.java:46)
at com.ltpquang.android.core.VideoProcessing.setupVideo(VideoProcessing.java:200)
at com.ltpquang.android.core.VideoProcessing.<init>(VideoProcessing.java:167)
at com.ltpquang.android.ui.activity.PreviewEditActivity.lambda$btNext$12(PreviewEditActivity.java:723)
at com.ltpquang.android.ui.activity.PreviewEditActivity.access$lambda$12(PreviewEditActivity.java)
at com.ltpquang.android.ui.activity.PreviewEditActivity$$Lambda$13.run(Unknown Source)
at java.lang.Thread.run(Thread.java:841)
Playing around a little bit, I observed that:
BEFORE creating and adding the VideoSurfaceView to the layout, I can create a MediaCodec encoder and obtain its input surface successfully. I can create as many as I want if I release the previous MediaCodec before creating a new one (see the sketch after these observations); otherwise I can obtain one and only one input surface, regardless of how many MediaCodec instances I have.
AFTER creating and adding the VideoSurfaceView to the layout, there is no way I can get the input surface from the MediaCodec; it always throws java.lang.IllegalStateException.
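For reference, the create-and-release pattern from the first observation looks roughly like this (a sketch; the codec type and format values are assumptions):
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // must be called after configure() and before start()
encoder.start();

// ... render frames into inputSurface and drain the encoder's output ...

// Release everything before creating the next encoder, otherwise the codec
// instance (and with it the ability to create another input surface) stays claimed.
encoder.stop();
encoder.release();
inputSurface.release();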
I've tried removing the VideoSurfaceView from the layout and setting it to null before creating the surface, but no luck.
I also tried the suggestions from here and here, but they didn't help.
From this, it seems that my device can only get the software codec, so I can't create the input surface.
My question is:
Why is that?
If the device's resources are limited, what can I do (release something, for example) to continue the process?
If it is related to the software codec, what should I do? How can I detect and release the resources?
Is this related to GL contexts? If so, what should I do? Should I manage the contexts myself?
I'm building a camera capture application with SurfaceView.
The application's overview is:
It can detect faces in real time (on the camera view).
It can store the movie.
I'm using the android.media.MediaRecorder class for saving the movie.
myRecorder = new MediaRecorder();
myRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
myRecorder.setOutputFile(Environment.getExternalStorageDirectory() + "/Movies/sample.3gp");
myRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
myRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);
myRecorder.setVideoFrameRate(30);
myRecorder.setVideoSize(320, 240);
myRecorder.prepare();
myRecorder.start();
After the above steps, a RuntimeException occurs at the step below (camera is an android.hardware.Camera object):
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
//Log.d(TAG, "onPreviewFrame: ");
int width = camera.getParameters().getPreviewSize().width; <--
int height = camera.getParameters().getPreviewSize().height;
03-22 22:54:09.134 27875-27875/? E/AndroidRuntime: FATAL EXCEPTION: main
Process: wbcompany.co.jp.facedetector3, PID: 27875
java.lang.RuntimeException: getParameters failed (empty parameters)
at android.hardware.Camera.native_getParameters(Native Method)
at android.hardware.Camera.getParameters(Camera.java:2019)
at wbcompany.co.jp.facedetector3.CameraView.onPreviewFrame(CameraView.java:150)
at android.hardware.Camera$EventHandler.handleMessage(Camera.java:1192)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6189)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:866)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:756)
Before calling myRecorder.start(), this exception doesn't occur.
I have no idea how to solve this error.
Please give me a solution to this problem.
My runtime environment:
Android 7.0/ API level 24
This is a strange error message, but the case is very real. When the camera is busy with MediaRecorder, it is not accessible for other uses.
Generally speaking, access to camera parameters may be very inefficient on some devices. It is strongly recommended not to call camera.getParameters() for every preview frame. Create local fields in your CameraView class or in the activity that embeds it, and store width and height there when you start preview. They will not change unless you explicitly stop the camera and change its configuration.
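A minimal sketch of that approach, assuming CameraView extends SurfaceView and implements Camera.PreviewCallback (the field and method names are illustrative):
private int previewWidth;
private int previewHeight;

private void startPreview(Camera camera) throws IOException {
    // Query the parameters once here, not on every preview frame.
    Camera.Size size = camera.getParameters().getPreviewSize();
    previewWidth = size.width;
    previewHeight = size.height;
    camera.setPreviewDisplay(getHolder());
    camera.setPreviewCallback(this);
    camera.startPreview();
}

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // Use the cached previewWidth / previewHeight instead of camera.getParameters().
}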
If I am not missing something, your onPreviewFrame() callback happens on the main (UI) thread. The good practice is to call Camera.open() on a separate HandlerThread, to keep preview processing from slowing down the UI.
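And a sketch of opening the camera off the UI thread (the thread name and field wiring are assumptions):
HandlerThread cameraThread = new HandlerThread("CameraBackground");
cameraThread.start();
Handler cameraHandler = new Handler(cameraThread.getLooper());
cameraHandler.post(new Runnable() {
    @Override
    public void run() {
        // Camera callbacks such as onPreviewFrame() are delivered on the looper
        // of the thread that called Camera.open().
        camera = Camera.open(); // assign to your camera field
        // configure the preview / MediaRecorder here
    }
});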
I have a scene transition that I had some issues with:
Scene transition with hero elements throws Layer exceeds max. dimensions supported by the GPU
But setting transitionGroup fixed it. When I compile the exact same app with the latest Android M SDK, it crashes when pressing back from the transition.
Abort message: 'art/runtime/java_vm_ext.cc:410] JNI DETECTED ERROR IN APPLICATION:
JNI CallObjectMethod called with pending exception java.lang.IllegalStateException:
Unable to create layer for RelativeLayout'
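(For context, the transitionGroup fix mentioned above refers to the ViewGroup flag, settable as android:transitionGroup="true" in the layout or in code; the container name below is hypothetical.)
heroContainer.setTransitionGroup(true); // treat the whole RelativeLayout as one element during the transition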
Does anyone know if Google changed anything regarding this in Android M?
It is reported here:
https://code.google.com/p/android-developer-preview/issues/detail?id=2416
But no resolution...