Android - SurfaceTexture - Video with MediaPlayer

I use a SurfaceTexture to display a video with OpenGL ES 2.0 on a SurfaceView. The OpenGL context used to display the video is tied to the SurfaceView. It works fine, but when I leave the activity while the video is playing and then return, the SurfaceView has been destroyed, so I have to recreate an OpenGL context. After doing that, the video can't play anymore.
surfaceTexture.updateTexImage();
This raises an IllegalStateException, presumably because it's now called on a new context? I also see this in the logs just before:
E/GLConsumer: [unnamed-13710-14] checkAndUpdateEglState: invalid current EGLContext
The doc says I can use
surfaceTexture.detachFromGLContext();
before destroying the OpenGL context, and then:
surfaceTexture.attachToGLContext(mTextureID);
to attach it to a new context (on the new context's thread).
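Roughly, the sequence I'm attempting looks like this (a simplified sketch; mTextureID is just a texture name freshly generated with glGenTextures on the new context):

// On the old rendering thread, before its EGL context is destroyed:
surfaceTexture.detachFromGLContext();

// Later, on the new rendering thread, with the new EGL context current:
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
mTextureID = textures[0];
surfaceTexture.attachToGLContext(mTextureID);

// Then, per frame:
surfaceTexture.updateTexImage();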
However, when I do this, I get this error:
attachToContext: GLConsumer is already attached to a context
But the old context was destroyed..
So my question is: is there any way to resume the video where it left off, or is my only option to restart it?
Thanks in advance!

Related

Android LibVLC, take snapshot of RTSP stream without TextureView

I'm using libVLC for Android, built the officially recommended way.
I went through the compilation process without problems (though it took some time).
I'd like to have the snapshot functionality, but I've found some very old (2-3 year old) threads which state that this feature was still not available (2016), at least "not out of the box" according to this thread (2014).
Snapshot functionality is available on other platforms.
Also there are some solutions where they switch from SurfaceView to TextureView.
However I prefer sticking to SurfaceView as TextureView brings some performance drawbacks (according to this topic).
Also, an official Android page states:
In API 24 and higher, it's recommended to implement SurfaceView instead of TextureView.
In 2014 there were only two dependencies of the snapshot function, based on the thread I mentioned earlier:
enabling sout module
enabling png as encoder
Looking at the "VLC-Android" repository of VideoLAN, there is a file responsible for building libVLC.
On line 396, the sout module seems to be enabled by default.
Before compilation I enabled png as an encoder in vlc/contrib/src/ffmpeg/rules.mak, as pointed out in the forum.
However, there is still no snapshot-related function in either org.videolan.libvlc.MediaPlayer or org.videolan.libvlc.VLCVideoLayout.
The question is: how can I create a snapshot (either into a file or into a buffer) on Android with libVLC, without using TextureView?
Update1:
Fact1:
I found the reason why it's unavailable on Android. In VLC's core source tree, in the file lib/video.c at line 145, there is the snapshot function with a massive FIXME warning:
/* FIXME: This is not atomic. All parameters should be passed at once
* (obviously _not_ with var_*()). Also, the libvlc object should not be
* used for the callbacks: that breaks badly if there are concurrent
* media players in the instance. */
var_Create( p_vout, "snapshot-width", VLC_VAR_INTEGER );
var_SetInteger( p_vout, "snapshot-width", i_width);
var_Create( p_vout, "snapshot-height", VLC_VAR_INTEGER );
var_SetInteger( p_vout, "snapshot-height", i_height );
var_Create( p_vout, "snapshot-path", VLC_VAR_STRING );
var_SetString( p_vout, "snapshot-path", psz_filepath );
var_Create( p_vout, "snapshot-format", VLC_VAR_STRING );
var_SetString( p_vout, "snapshot-format", "png" );
var_TriggerCallback( p_vout, "video-snapshot" );
vlc_object_release( p_vout );
Fact2:
I wanted to go in another direction with this. If the snapshot function is not usable (and also not wise to use), I thought of some fallback solutions:
There is a video filter in VLC named scene. It produces still images of the video at a specific path. I tried using this, but video filters cannot be changed at runtime, so this attempt failed.
I also tried to do it from the MediaPlayer (via Media.addOption), but video filters cannot be changed at the MediaPlayer level on Android either.
I then tried to pass the filter config as an argument to the libVLC initialization, and that finally succeeded; however, creating a new libVLC instance every time I need a screenshot is not practical.
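For reference, the initialization that did work looks roughly like this (a sketch; the option names are standard VLC scene-filter options, the output path is just a placeholder, and the exact LibVLC constructor may differ between libVLC versions):

// Sketch: configure the scene video filter at libVLC creation time.
ArrayList<String> options = new ArrayList<>();
options.add("--video-filter=scene");
options.add("--scene-format=png");
options.add("--scene-path=" + context.getExternalFilesDir(null).getAbsolutePath()); // placeholder path
options.add("--scene-prefix=snapshot");
options.add("--scene-ratio=100"); // write every 100th frame
LibVLC libVLC = new LibVLC(context, options);

The obvious downside, as mentioned above, is that the filter is fixed for the lifetime of that LibVLC instance.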
A few ways to go about this...
Here's a cross-platform thumbnailer example using libvlc: https://code.videolan.org/mfkl/libvlcsharp-samples/blob/master/PreviewThumbnailExtractor.Skia/Program.cs. It should work on Android without much editing, since it doesn't use any OS-specific feature. You should be able to translate it to Java/Kotlin as well, I guess.
There is a libvlc function that allows you to take a snapshot. Just seek to the time you want and call it: https://www.videolan.org/developers/vlc/doc/doxygen/html/group__libvlc__video.html#ga9b0a3870ce962aa0358050b2d5a59143
In VLC Android, the medialibrary now manages thumbnails.
LibVLC 4 now bundles a thumbnailer https://github.com/videolan/vlc/blob/d40eb012b10cc355ea9ad7a13eaf494b8e826d78/include/vlc/libvlc_media.h#L845
Good luck.

MediaCodec.createInputSurface() throws IllegalStateException on some devices

I'm working on a video processing app. The app has one Activity that contains a Fragment. The Fragment in turn contains a VideoSurfaceView derived from GLSurfaceView, which I use to show users a preview of the video with effects applied (using OpenGL). After previewing, users can start processing the video.
To process the video, I mainly apply the method described here.
Everything works fine on most devices, except the Oppo Mirror 3 (Android 4.4). On this device, every time I try to create a Surface using MediaCodec.createInputSurface(), it throws java.lang.IllegalStateException with code -38.
E/OMXMaster: A component of name 'OMX.qcom.audio.decoder.aac' already exists, ignoring this one.
E/SoftAVCEncoder: internalSetParameter: StoreMetadataInBuffersParams.nPortIndex not zero!
E/OMXNodeInstance: OMX_SetParameter() failed for StoreMetaDataInBuffers: 0x80001001
E/ACodec: [OMX.google.h264.encoder] storeMetaDataInBuffers (output) failed w/ err -2147483648
E/OMXNodeInstance: createInputSurface requires COLOR_FormatSurface (AndroidOpaque) color format
E/ACodec: [OMX.google.h264.encoder] onCreateInputSurface returning error -38
E/VideoProcessing: java.lang.IllegalStateException
at android.media.MediaCodec.createInputSurface(Native Method)
at com.ltpquang.android.core.processing.codec.VideoEncoder.<init>(VideoEncoder.java:46)
at com.ltpquang.android.core.VideoProcessing.setupVideo(VideoProcessing.java:200)
at com.ltpquang.android.core.VideoProcessing.<init>(VideoProcessing.java:167)
at com.ltpquang.android.ui.activity.PreviewEditActivity.lambda$btNext$12(PreviewEditActivity.java:723)
at com.ltpquang.android.ui.activity.PreviewEditActivity.access$lambda$12(PreviewEditActivity.java)
at com.ltpquang.android.ui.activity.PreviewEditActivity$$Lambda$13.run(Unknown Source)
at java.lang.Thread.run(Thread.java:841)
Playing around a little bit, I observed that:
BEFORE creating and adding the VideoSurfaceView to the layout, I can create a MediaCodec encoder and obtain the input surface successfully. I can create as many as I want if I release the previous MediaCodec before creating a new one; otherwise I can obtain one and only one input surface, regardless of how many MediaCodecs I create.
AFTER creating and adding the VideoSurfaceView to the layout, there is no way I can get the input surface from the MediaCodec; it always throws java.lang.IllegalStateException.
I've tried removing the VideoSurfaceView from the layout and setting it to null before creating the surface, but no luck.
I also tried the suggestions from here and here, but they didn't help.
From the log, it seems that my device only gets the software codec, so it can't create the input surface.
My question is:
Why was that?
If the device's resources are limited, what can I do (release something, for example) to continue the process?
If it is related to the software codec, what should I do? How can I detect and release the resource?
Is this related to GL contexts? If yes, what should I do? Should I manage the contexts myself?
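For what it's worth, the kind of check I have in mind before attempting Surface-input encoding is something like the following (just a sketch using the pre-API-21 MediaCodecList API, since this device runs 4.4; I'm not sure it's the right approach):

// Sketch: check whether any video/avc encoder on the device advertises Surface input.
private static boolean hasSurfaceCapableAvcEncoder() {
    for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
        MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
        if (!info.isEncoder()) continue;
        for (String type : info.getSupportedTypes()) {
            if (!type.equalsIgnoreCase("video/avc")) continue;
            MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType(type);
            for (int colorFormat : caps.colorFormats) {
                if (colorFormat == MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface) {
                    return true;
                }
            }
        }
    }
    return false;
}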

Android MediaCodec Encode and Decode In Asynchronous Mode

I am trying to decode a video from a file and encode it into a different format with MediaCodec in the new Asynchronous Mode supported in API Level 21 and up (Android OS 5.0 Lollipop).
There are many examples for doing this in Synchronous Mode on sites such as Big Flake, Google's Grafika, and dozens of answers on StackOverflow, but none of them support Asynchronous mode.
I do not need to display the video during the process.
I believe that the general procedure is to read the file with a MediaExtractor as the input to a MediaCodec(decoder), allow the output of the Decoder to render into a Surface that is also the shared input into a MediaCodec(encoder), and then finally to write the Encoder output file via a MediaMuxer. The Surface is created during setup of the Encoder and shared with the Decoder.
I can Decode the video into a TextureView, but sharing the Surface with the Encoder instead of the screen has not been successful.
I set up MediaCodec.Callback()s for both of my codecs. I believe one issue is that I do not know what to do in the Encoder's onInputBufferAvailable() callback. I do not want to (or know how to) copy data from the Surface into the Encoder; that should happen automatically (as is done on the Decoder output with codec.releaseOutputBuffer(outputBufferId, true);). Yet I believe that onInputBufferAvailable requires a call to codec.queueInputBuffer in order to function, and I just don't know how to set the parameters without getting data from something like a MediaExtractor as used on the decode side.
If you have an Example that opens up a video file, decodes it, encodes it to a different resolution or format using the asynchronous MediaCodec callbacks, and then saves it as a file, please share your sample code.
=== EDIT ===
Here is a working example, in synchronous mode, of what I am trying to do in asynchronous mode: ExtractDecodeEditEncodeMuxTest.java (https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java). This example works in my application.
I believe you shouldn't need to do anything in the encoder's onInputBufferAvailable() callback - you should not call encoder.queueInputBuffer(). Just as you never call encoder.dequeueInputBuffer() and encoder.queueInputBuffer() manually when doing Surface input encoding in synchronous mode, you shouldn't do it in asynchronous mode either.
When you call decoder.releaseOutputBuffer(outputBufferId, true); (in both synchronous and asynchronous mode), this internally (using the Surface you provided) dequeues an input buffer from the surface, renders the output into it, and enqueues it back to the surface (to the encoder). The only difference between synchronous and asynchronous mode is in how the buffer events are exposed in the public API, but when using Surface input, it uses a different (internal) API to access the same, so synchronous vs asynchronous mode shouldn't matter for this at all.
So as far as I know (although I haven't tried it myself), you should just leave the onInputBufferAvailable() callback empty for the encoder.
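In other words, the encoder side can look roughly like this (just a sketch; the muxer handling is simplified, and mMuxer/mTrackIndex are assumed to be set up elsewhere):

encoder.setCallback(new MediaCodec.Callback() {
    @Override
    public void onInputBufferAvailable(MediaCodec codec, int index) {
        // Intentionally empty: with Surface input, frames arrive via the Surface,
        // not through queueInputBuffer().
    }

    @Override
    public void onOutputBufferAvailable(MediaCodec codec, int index, MediaCodec.BufferInfo info) {
        ByteBuffer encoded = codec.getOutputBuffer(index);
        if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0 && info.size > 0) {
            mMuxer.writeSampleData(mTrackIndex, encoded, info); // assumes the muxer is already started
        }
        codec.releaseOutputBuffer(index, false);
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            // signal completion here
        }
    }

    @Override
    public void onOutputFormatChanged(MediaCodec codec, MediaFormat format) {
        mTrackIndex = mMuxer.addTrack(format);
        mMuxer.start();
    }

    @Override
    public void onError(MediaCodec codec, MediaCodec.CodecException e) {
        Log.e(TAG, "Encoder error", e);
    }
});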
EDIT:
So, I tried doing this myself, and it's (almost) as simple as described above.
If the encoder input surface is configured directly as output to the decoder (with no SurfaceTexture inbetween), things just work, with a synchronous decode-encode loop converted into an asynchronous one.
If you use SurfaceTexture, however, you may run into a small gotcha. There is an issue with how one waits for frames to arrive to the SurfaceTexture in relation to the calling thread, see https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/DecodeEditEncodeTest.java#106 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#104 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#113 for references to this.
The issue, as far as I see it, is in awaitNewImage as in https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#240. If the onFrameAvailable callback is supposed to be called on the main thread, we have an issue if the awaitNewImage call also is run on the main thread. If the onOutputBufferAvailable callbacks also are called on the main thread and you call awaitNewImage from there, we have an issue, since you'll end up waiting for a callback (with a wait() that blocks the whole thread) that can't be run until the current method returns.
So we need to make sure that the onFrameAvailable callbacks come on a different thread than the one that calls awaitNewImage. One pretty simple way of doing this is to create a new separate thread, that does nothing but service the onFrameAvailable callbacks. To do that, you can do e.g. this:
private HandlerThread mHandlerThread = new HandlerThread("CallbackThread");
private Handler mHandler;
...
mHandlerThread.start();
mHandler = new Handler(mHandlerThread.getLooper());
...
mSurfaceTexture.setOnFrameAvailableListener(this, mHandler);
I hope this is enough for you to be able to solve your issue, let me know if you need me to edit one of the public examples to implement asynchronous callbacks there.
EDIT2:
Also, since the GL rendering might be done from within the onOutputBufferAvailable callback, this might be a different thread than the one that set up the EGL context. So in that case, one needs to release the EGL context in the thread that set it up, like this:
mEGL.eglMakeCurrent(mEGLDisplay, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT);
And reattach it in the other thread before rendering:
mEGL.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
EDIT3:
Additionally, if the encoder and decoder callbacks are received on the same thread, the decoder onOutputBufferAvailable that does the rendering can block the encoder callbacks from being delivered. If they aren't delivered, the rendering can block indefinitely, since the encoder doesn't get its output buffers returned. This can be fixed by making sure the video decoder callbacks are received on a different thread, which also avoids the issue with the onFrameAvailable callback.
I tried implementing all this on top of ExtractDecodeEditEncodeMuxTest, and got it working seemingly fine, have a look at https://github.com/mstorsjo/android-decodeencodetest. I initially imported the unchanged test, and did the conversion to asynchronous mode and fixes for the tricky details separately, to make it easy to look at the individual fixes in the commit log.
You can also pass a Handler when setting the callback on the MediaCodec:
---> codec.setCallback(new AudioEncoderCallback(aacSamplePreFrameSize), mHandler);
private final HandlerThread mHandlerThread = new HandlerThread("AudioEncoderCallback");
private Handler mHandler;

MyAudioCodecWrapper myMediaCodecWrapper;

public MyAudioEncoder(long startRecordWhenNs) {
    super.startRecordWhenNs = startRecordWhenNs;
}

@RequiresApi(api = Build.VERSION_CODES.M)
public MyAudioCodecWrapper prepareAudioEncoder(AudioRecord _audioRecord, int aacSamplePreFrameSize) throws Exception {
    if (_audioRecord == null || aacSamplePreFrameSize <= 0)
        throw new Exception();

    audioRecord = _audioRecord;
    Log.d(TAG, "audioRecord:" + audioRecord.getAudioFormat() + ",aacSamplePreFrameSize:" + aacSamplePreFrameSize);

    // Deliver the codec callbacks on a dedicated thread instead of the calling thread.
    mHandlerThread.start();
    mHandler = new Handler(mHandlerThread.getLooper());

    MediaFormat audioFormat = new MediaFormat();
    audioFormat.setString(MediaFormat.KEY_MIME, MIMETYPE_AUDIO_AAC);
    //audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
    audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
    audioFormat.setInteger(MediaFormat.KEY_SAMPLE_RATE, audioRecord.getSampleRate()); // e.g. 44100
    audioFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, audioRecord.getChannelCount()); // e.g. 1 (mono)
    audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 128000);
    audioFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 16384);

    MediaCodec codec = MediaCodec.createEncoderByType(MIMETYPE_AUDIO_AAC);
    codec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    codec.setCallback(new AudioEncoderCallback(aacSamplePreFrameSize), mHandler);
    //codec.start();

    MyAudioCodecWrapper myMediaCodecWrapper = new MyAudioCodecWrapper();
    myMediaCodecWrapper.mediaCodec = codec;
    super.mediaCodec = codec;
    return myMediaCodecWrapper;
}

Controlling Frame Rate of VirtualDisplay

I'm writing an Android application in which I have a VirtualDisplay to mirror what is on the screen, and I then send the frames from the screen to a MediaCodec instance. It works, but I want to add a way of specifying the FPS of the encoded video, and I'm unsure how to do so.
From what I've read and experimented with, dropping encoded frames (based on the presentation times) doesn't work well, as it ends up with blocky/artifact-ridden video as opposed to a smooth video at a lower frame rate. Other reading suggests that the only way to do what I want (limit the FPS) would be to limit the incoming FPS to the MediaCodec, but the VirtualDisplay just receives a Surface, which is constructed from the MediaCodec as below:
mSurface = <instance of MediaCodec>.createInputSurface();
mVirtualDisplay = mMediaProjection.createVirtualDisplay(
        "MyDisplay",
        screenWidth,
        screenHeight,
        screenDensity,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        mSurface,
        null,
        null);
I've also tried subclassing Surface to limit the frames that are fed to the MediaCodec via unlockCanvasAndPost(Canvas canvas), but that function never seems to be called on my instance. There may be some weirdness in how I extended Surface and how it interacts with the Parcel, since writeToParcel is called on my instance, but that is the only function that is called (as far as I can tell).
Other reading suggests that I can go from encoder -> decoder -> encoder and limit the rate at which the second encoder is fed frames, but that's a lot of extra computation that I'd rather not do if I can avoid it.
Has anyone successfully limited the rate at which a VirtualDisplay feeds its Surface? Any help would be greatly appreciated!
Starting off with what you can't do...
You can't drop content from the encoded stream. Most of the frames in the encoded stream are essentially "diffs" from other frames. Without knowing how the frames interact, you can't safely drop content, and will end up with that corrupted macroblock look.
You can't specify the frame rate to the MediaCodec encoder. It might stuff that into metadata somewhere, but the only thing that really matters to the codec is the frames you're feeding into it, and the presentation time stamps associated with each frame. The encoder will not drop frames.
You can't do anything useful by subclassing Surface. The Canvas operations are only used for software rendering, which is unrelated to feeding in frames from a camera or virtual display.
What you can do is send the frames to an intermediate Surface, and then choose whether or not to forward them to the MediaCodec's input Surface. One approach would be to create a SurfaceTexture, construct a Surface from it, and pass that to the virtual display. When the SurfaceTexture's frame-available callback fires, you either ignore it, or render the texture onto the MediaCodec input Surface with GLES.
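A rough skeleton of that approach (just a sketch; the EGL setup and the actual textured draw are assumed to be handled by helper classes along the lines of Grafika's, drawTextureToEncoderSurface is a placeholder for that rendering step, and mLastSentNs/mMinFrameIntervalNs are simple long fields):

// Create the intermediate SurfaceTexture/Surface and hand the Surface to createVirtualDisplay().
mSurfaceTexture = new SurfaceTexture(mIntermediateTextureId); // texture created on the GL thread
mIntermediateSurface = new Surface(mSurfaceTexture);

mSurfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        st.updateTexImage();                          // latch the new frame (on the GL thread)
        long nowNs = System.nanoTime();
        if (nowNs - mLastSentNs < mMinFrameIntervalNs) {
            return;                                   // drop this frame: too soon for the target fps
        }
        mLastSentNs = nowNs;
        float[] transform = new float[16];
        st.getTransformMatrix(transform);
        // Render the latched texture onto the MediaCodec input Surface with GLES and
        // swap the encoder's EGL surface (placeholder helper):
        drawTextureToEncoderSurface(mIntermediateTextureId, transform);
    }
}, mGlHandler); // deliver callbacks on the thread that owns the EGL context (API 21+)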
Various examples can be found in Grafika and on bigflake, none of which are an exact fit, but all of the necessary EGL and GLES classes are there.
You can reference the code samples from saki4510t's ScreenRecordingSample or RyanRQ's ScreenRecoder; they both use an additional EGL texture between the virtual display and the media encoder, and the first one can keep at least 15 fps for the output video. You can search for the keyword createVirtualDisplay in their code bases for more details.

OpenSL ES Android: "Too many objects" SL_RESULT_MEMORY_FAILURE

I'm having a problem with OpenSL ES on Android. I'm using OpenSL to play sound effects. Currently I'm creating a new player each time I play a sound. (I know this isn't terribly efficient, but it's "good enough" for the time being.)
After a while of playback, I start to get these errors:
E/libOpenSLES(25131): Too many objects
W/libOpenSLES(25131): Leaving Engine::CreateAudioPlayer (SL_RESULT_MEMORY_FAILURE)
I'm tracking my create/destroy pattern and I never go above 4 outstanding objects at any given time, well below the system limit of 32. Of course, this is assuming that the Destroy is properly working.
My only guess right now is that I'm doing something incorrectly when I clean up the player objects. One possible issue is that the Destroy is often called in the context of the player callback (basically destroying the player after it's finished playing), although I can't find any reference suggesting this is a problem. Are there any other cleanup steps I should be taking besides "Destroy"-ing the player object? Do the Interfaces need to be cleaned up somehow as well?
-- Added --
After more testing, it happens consistently after the 30th player is created (there is an engine and a mix too, so that brings the total to 32 objects). So I must not be destroying the object properly. Here's the code--I'd love to know what's going wrong:
SLuint32 playerState = 0;
SLresult result = (*pPlayerObject)->GetState(pPlayerObject, &playerState);
return_if_fail(result);

if (playerState == SL_OBJECT_STATE_REALIZED)
{
    (*pPlayerObject)->AbortAsyncOperation(pPlayerObject);
    (*pPlayerObject)->Destroy(pPlayerObject);
}
else
{
    __android_log_print(1, LOG_TAG, "Player object in unexpected state (%d)", playerState);
    return 1002;
}
The if (playerState == SL_OBJECT_STATE_REALIZED) check is not needed; destroy the object unconditionally.
AbortAsyncOperation is called inside Destroy, so it is not needed either.
So just (*pPlayerObject)->Destroy(pPlayerObject); should be enough.
Edit:
I tested this and found the solution.
You cannot call Destroy() from the player callback. You should put the player on a "destroy" list and destroy it somewhere else, for example on the main thread.
