ARCore Image Extraction from GPU - android

I'm trying to extract the image directly from the GPU (that is, without using AcquireCameraImageBytes()) for performance reasons (a Samsung S9 can't reach 10 fps) and to support the Xiaomi Pocophone I bought. I use the TextureReader class included in the ComputerVision example, but OnImageAvailableCallback is never called and the log shows errors during initialization:
camera_utility: Failed to create OpenGL frame buffer.
camera_utility: Failed to create OpenGL frame buffer.
I set a couple of breakpoints inside libarcore_camera_utility.so and saw that glCheckFramebufferStatus inside TextureReader::create returns 0 (so an error occurred), but glGetError() doesn't return any error. How can this problem be resolved?
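For reference, here is a minimal Kotlin sketch of the equivalent check (my own reconstruction, not the actual TextureReader source). A status of 0 means the glCheckFramebufferStatus call itself failed; one classic cause of a 0 status with no reported GL error is that no EGL context is current on the calling thread, since GL calls then silently no-op and glGetError() also returns GL_NO_ERROR:

```kotlin
import android.opengl.GLES20
import android.util.Log

private const val TAG = "FboCheck" // hypothetical tag

// Create a framebuffer and check its status, logging the two failure modes.
fun createAndCheckFramebuffer(): Int {
    val fbo = IntArray(1)
    GLES20.glGenFramebuffers(1, fbo, 0)
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0])
    // ... attach the color texture here, as TextureReader::create does ...
    val status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
    when {
        status == 0 ->
            Log.e(TAG, "glCheckFramebufferStatus itself failed; is an EGL context current?")
        status != GLES20.GL_FRAMEBUFFER_COMPLETE ->
            Log.e(TAG, "Framebuffer incomplete: 0x${Integer.toHexString(status)}")
    }
    return fbo[0]
}
```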

Related

Phone crashes when calling mediaCodec.configure with error MediaCodec$CodecException: Error 0x80001001

The app I am working on gets video from the camera through a Surface and encodes it to video/avc (H264). This works great on phones like the Galaxy Note 10+, but on newer phones like the Xiaomi Note 10S I am having this issue. Here is what I am doing:
Create the format:
```kotlin
format = MediaFormat.createVideoFormat(
    H264, videoWidth, videoHeight
).apply {
    setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0)
    setInteger(MediaFormat.KEY_BIT_RATE, bitrate)
    setInteger(MediaFormat.KEY_FRAME_RATE, videoFrameRate)
    setInteger(
        MediaFormat.KEY_COLOR_FORMAT,
        CodecCapabilities.COLOR_FormatSurface
    )
    setFloat(MediaFormat.KEY_I_FRAME_INTERVAL, 1f)
}
```
Then create encoderName:
```kotlin
val encoderName = MediaCodecList(
    MediaCodecList.ALL_CODECS
).findEncoderForFormat(format) // using the format I shared in the first step
```
Then create the codec:
```kotlin
codec = MediaCodec.createByCodecName(encoderName)
```
Then .setCallback(callback) // not important, since it will crash before we reach this point.
And this is the line where it crashes:
```kotlin
codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
// CRASH => MediaCodec$CodecException: Error 0x80001001
```
The rest:
```kotlin
codec.setInputSurface(surface)
codec.start()
```
I am suspecting the:
```kotlin
setInteger(
    MediaFormat.KEY_COLOR_FORMAT,
    CodecCapabilities.COLOR_FormatSurface
) // I tried changing the value and completely removing this setInteger, no luck :/
```
Error 0x80001001, also known as OMX_ErrorUndefined, says: "There was an error, but the cause of the error could not be determined."
The most likely cause for this error is insufficient resources. This can happen, for example, if you try to configure a hardware codec but there is not enough graphics memory available at the moment.
Suggestion 1: Make sure you release the codecs when you are done using them. You need to check all code paths.
Suggestion 2: Knowing that this can happen, you can filter the MediaCodecList, keeping all the encoders that support the given format. Then wrap the configure() call in a try/catch block and, if the call fails, try the next option from the list of codecs; see the sketch below.
Note that on most devices there are at least two codecs for H264: a hardware codec and a software codec. The former has better performance, the latter is more resilient.
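A minimal Kotlin sketch of Suggestion 2 (an illustration only; createConfiguredEncoder is a hypothetical helper, and the format is assumed to carry a MIME type):

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecList
import android.media.MediaFormat

// Hypothetical helper: walk every encoder that claims to support the format's
// MIME type and return the first one that configure() actually accepts.
fun createConfiguredEncoder(format: MediaFormat): MediaCodec? {
    val mime = format.getString(MediaFormat.KEY_MIME) ?: return null
    val candidates = MediaCodecList(MediaCodecList.ALL_CODECS).codecInfos
        .filter { it.isEncoder && it.supportedTypes.any { t -> t.equals(mime, ignoreCase = true) } }
    for (info in candidates) {
        val codec = MediaCodec.createByCodecName(info.name)
        try {
            codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
            return codec // this codec accepted the format
        } catch (e: MediaCodec.CodecException) {
            codec.release() // free the failed codec before trying the next one
        }
    }
    return null // no encoder could be configured
}
```

Hardware codecs typically come first in the list, so this tends to fall back from the performant option to the resilient one naturally.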

Android LibVLC, take snapshot of RTSP stream without TextureView

Consider libVLC for Android, built based on the officially recommended way.
I went through the compilation process without problems (though it took some time).
I'd like to have the snapshot functionality, but I've found some very old (2-3 year old) threads stating that this feature is still not available (2016), at least "not out of the box" according to this thread (2014).
Snapshot functionality is available on other platforms.
Also, there are some solutions where they switch from SurfaceView to TextureView.
However, I prefer sticking to SurfaceView, as TextureView brings some performance drawbacks (according to this topic).
Also, an official Android page states:
In API 24 and higher, it's recommended to implement SurfaceView instead of TextureView.
Based on the thread I mentioned earlier, in 2014 the snapshot function had only two dependencies:
enabling sout module
enabling png as encoder
Looking at the "VLC-Android" repository of VideoLAN, there is a file responsible for building libVLC.
On line 396, the sout module seems to be enabled by default.
Before compilation I enabled png as an encoder in vlc/contrib/src/ffmpeg/rules.mak, as pointed out in the forum.
However, there is still no snapshot-related function in either org.videolan.libvlc.MediaPlayer or org.videolan.libvlc.VLCVideoLayout.
The question is: how can I create a snapshot (either into a file or into a buffer) on Android with libVLC, without using TextureView?
Update 1:
Fact 1:
I found the reason why it's unavailable on Android. In VLC's core source tree, in lib/video.c on line 145, the snapshot function carries a massive FIXME warning:
/* FIXME: This is not atomic. All parameters should be passed at once
* (obviously _not_ with var_*()). Also, the libvlc object should not be
* used for the callbacks: that breaks badly if there are concurrent
* media players in the instance. */
var_Create( p_vout, "snapshot-width", VLC_VAR_INTEGER );
var_SetInteger( p_vout, "snapshot-width", i_width);
var_Create( p_vout, "snapshot-height", VLC_VAR_INTEGER );
var_SetInteger( p_vout, "snapshot-height", i_height );
var_Create( p_vout, "snapshot-path", VLC_VAR_STRING );
var_SetString( p_vout, "snapshot-path", psz_filepath );
var_Create( p_vout, "snapshot-format", VLC_VAR_STRING );
var_SetString( p_vout, "snapshot-format", "png" );
var_TriggerCallback( p_vout, "video-snapshot" );
vlc_object_release( p_vout );
Fact 2:
I wanted to go in another direction with this. If the snapshot function is not usable (and also not wise to use), I thought of some emergency solutions:
there is a video filter in VLC named scene, which produces still images of the video at a specific path. I tried using this, but video filters cannot be changed at runtime, so this attempt failed.
I also tried to do it from the MediaPlayer (via Media.addOption), but video filters also cannot be changed at the MediaPlayer level on Android.
I then tried passing the filter config as an argument to the libVLC initialization, and that finally succeeded; however, creating a new libVLC instance every time I need a screenshot won't be efficient. A sketch of that approach follows below.
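For illustration, a minimal Kotlin sketch of that working-but-wasteful approach, assuming the libvlc-android bindings and a Context at hand; the option names follow VLC's command-line scene filter, and the output path is made up:

```kotlin
import org.videolan.libvlc.LibVLC

// One-shot LibVLC instance with the scene filter baked in at init time.
// Wasteful: a fresh instance is needed for every snapshot session.
val options = arrayListOf(
    "--video-filter=scene",
    "--scene-format=png",
    "--scene-path=/sdcard/snapshots", // made-up output directory
    "--scene-ratio=30"                // write every 30th frame
)
val libVlc = LibVLC(context, options)
```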
A few ways to go about this...
Here's a cross-platform thumbnailer example using libVLC: https://code.videolan.org/mfkl/libvlcsharp-samples/blob/master/PreviewThumbnailExtractor.Skia/Program.cs It should work on Android without much editing, since it doesn't use any OS-specific feature. You should be able to translate it to Java/Kotlin as well.
There is a libVLC function that allows taking a snapshot: just seek to the time you want and call it. https://www.videolan.org/developers/vlc/doc/doxygen/html/group__libvlc__video.html#ga9b0a3870ce962aa0358050b2d5a59143
In VLC Android, the medialibrary now manages thumbnails.
LibVLC 4 now bundles a thumbnailer https://github.com/videolan/vlc/blob/d40eb012b10cc355ea9ad7a13eaf494b8e826d78/include/vlc/libvlc_media.h#L845
Good luck.

TensorFlow Lite GPU delegate gives "Dimensions are not BHWC" error for the last layer, but the layer appears to be BHWC like every other layer to me

I am trying to implement a YOLO detector on Android, following the demo code provided by TensorFlow. The model runs fine on CPU or with NNAPI. However, when I try to run it with the GPU delegate, the program crashes and the debugger gives the following error:
java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: GpuDelegate Prepare: Dimensions are not BHWC
Node number 23 (GpuDelegate) failed to prepare.
The final layer of the model looks like:
self.conv9 = layers.Conv2D(425, (1,1), strides=(1,1), padding='same', name='conv_9', use_bias=False)
The model runs perfectly fine on PC with either CPU or GPU.
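For context, a minimal Kotlin sketch of the setup that triggers the error, assuming the tensorflow-lite and tensorflow-lite-gpu dependencies and a model already loaded into a MappedByteBuffer:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate
import java.nio.MappedByteBuffer

// The IllegalArgumentException above is thrown while the delegate prepares
// the graph, i.e. inside this Interpreter constructor call.
fun createGpuInterpreter(model: MappedByteBuffer): Interpreter {
    val delegate = GpuDelegate()
    val options = Interpreter.Options().addDelegate(delegate)
    return Interpreter(model, options)
}
```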

TensorFlow Lite Android app crashing after detection

I have trained my model using ssd_mobilenet_v2_quantized_coco, which was also a long, painstaking process of digging. Once training was successful, the model was correctly detecting images on my laptop, but on my phone the app crashes as soon as an object is detected. I used the TF Lite Android app available on GitHub. I did some debugging in Android Studio and get the following error log when an object is detected and the app crashes:
I/tensorflow: MultiBoxTracker: Processing 0 results from 314
I/tensorflow: DetectorActivity: Preparing image 506 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 506
I/tensorflow: MultiBoxTracker: Processing 0 results from 506
I/tensorflow: DetectorActivity: Preparing image 676 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 676
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 3122
java.lang.ArrayIndexOutOfBoundsException: length=80; index=-2147483648
at java.util.Vector.elementData(Vector.java:734)
at java.util.Vector.get(Vector.java:750)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:247)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:193)
at android.os.HandlerThread.run(HandlerThread.java:65)
My guess is that the labels in the .txt file are somehow being misread. This is because of the line:
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
and that line corresponds to the following code:
labels.get((int) outputClasses[0][i] + labelOffset)
However, I don't know what to change in labels.txt. Possibly I need to edit that txt as suggested here. Any other suggestions and explanations of possible causes are appreciated.
Update: I added ??? to labels.txt and compiled/ran, but I am still getting the same error as above.
P.S. I trained ssd_mobilenet_v2_coco (the model without quantization) as well, and it works without crashing in the app. I am guessing that quantization converts label indices differently, maybe resulting in an out-of-bounds error for the labels.
Yes, it is because the label output sometimes gets a garbage value. For a quick fix you can add a condition:
```java
// Guard against garbage class indices -- including negative values such as
// the -2147483648 in the crash log -- before the label lookup.
int cls = (int) outputClasses[0][i];
if (cls < 0 || cls > 10) {
    outputClasses[0][i] = -1;
}
```
Here 10 is the number of classes the model was trained for; you can change it accordingly.

MediaCodec.createInputSurface() throws IllegalStateException on some devices

I'm working on a video processing app. The app has one Activity that contains a Fragment. The Fragment in turn contains a VideoSurfaceView, derived from GLSurfaceView, which I use to show users a preview of the video with effects (using OpenGL). After previewing, users can start processing the video.
To process the video, I mainly apply the method described here.
Everything works fine on most devices, except the Oppo Mirror 3 (Android 4.4). On this device, every time I try to create a Surface using MediaCodec.createInputSurface(), it throws java.lang.IllegalStateException with code -38.
E/OMXMaster: A component of name 'OMX.qcom.audio.decoder.aac' already exists, ignoring this one.
E/SoftAVCEncoder: internalSetParameter: StoreMetadataInBuffersParams.nPortIndex not zero!
E/OMXNodeInstance: OMX_SetParameter() failed for StoreMetaDataInBuffers: 0x80001001
E/ACodec: [OMX.google.h264.encoder] storeMetaDataInBuffers (output) failed w/ err -2147483648
E/OMXNodeInstance: createInputSurface requires COLOR_FormatSurface (AndroidOpaque) color format
E/ACodec: [OMX.google.h264.encoder] onCreateInputSurface returning error -38
E/VideoProcessing: java.lang.IllegalStateException
at android.media.MediaCodec.createInputSurface(Native Method)
at com.ltpquang.android.core.processing.codec.VideoEncoder.<init>(VideoEncoder.java:46)
at com.ltpquang.android.core.VideoProcessing.setupVideo(VideoProcessing.java:200)
at com.ltpquang.android.core.VideoProcessing.<init>(VideoProcessing.java:167)
at com.ltpquang.android.ui.activity.PreviewEditActivity.lambda$btNext$12(PreviewEditActivity.java:723)
at com.ltpquang.android.ui.activity.PreviewEditActivity.access$lambda$12(PreviewEditActivity.java)
at com.ltpquang.android.ui.activity.PreviewEditActivity$$Lambda$13.run(Unknown Source)
at java.lang.Thread.run(Thread.java:841)
Playing around a little bit, I observed that:
BEFORE creating and adding the VideoSurfaceView to the layout, I can create the MediaCodec encoder and obtain the input surface successfully. I can create as many as I want, provided I release the previous MediaCodec before creating a new one; otherwise I can obtain one and only one input surface, regardless of how many MediaCodecs I have.
AFTER creating and adding the VideoSurfaceView to the layout, there is no chance I can get the input surface from the MediaCodec; it always throws java.lang.IllegalStateException.
I've tried removing the VideoSurfaceView from the layout and setting it to null before creating the surface, but no luck.
I also tried the suggestions from here and here, but they didn't help.
From this, it seems that my device can only get the software codec, so I can't create the input surface. (A sketch of how to detect that follows below.)
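As a quick check for that theory, here is a small Kotlin sketch of the usual heuristic (mine, for API 21+; on the Android 4.4 device above you would enumerate via the static MediaCodecList.getCodecInfoAt() instead). Software codecs conventionally carry the OMX.google. prefix, like the OMX.google.h264.encoder in the log:

```kotlin
import android.media.MediaCodecList

// Returns the name of the first hardware encoder for the MIME type, or null
// if only software encoders (conventionally "OMX.google.*") are available.
fun findHardwareEncoder(mime: String): String? =
    MediaCodecList(MediaCodecList.ALL_CODECS).codecInfos
        .filter { it.isEncoder && it.supportedTypes.any { t -> t.equals(mime, ignoreCase = true) } }
        .map { it.name }
        .firstOrNull { !it.startsWith("OMX.google.") }
```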
My questions are:
Why was that?
If the device's resources are limited, what can I do (release something, for example) to continue the process?
If it is related to the software codec, what should I do? How can I detect and release the resource?
Is this related to GL contexts? If yes, what should I do? Should I manage the contexts myself?
