Video decoder configuration using MediaCodec - Android

I am trying to decode video samples using the MediaCodec API, and I am using a SurfaceView to show the rendered samples. If I press the home button, the app goes into the paused state and the surface is destroyed. When I come back to the resumed state, a new SurfaceView reference is created, but the decoder is unable to pump samples onto the SurfaceView, so the screen appears black.
Video configure call:
videoDecoder.configure(format, surface, null, 0);
So how can I reconfigure the videoDecoder in the statement above? It is similar to the following problem:
How to keep decoding alive during screen orientation?

The MediaCodec API does not currently (API 19) provide a way to replace the output Surface.
As in the other question you refer to, I think the way to deal with this will be to decode to a Surface that isn't tied to the view hierarchy (and, hence, doesn't get torn down when the Activity is destroyed).
If you direct the output of the MediaCodec to a SurfaceTexture, you can then render that texture onto the SurfaceView. This will require a bit of GLES code. You can find the necessary pieces in the Grafika sources, but there isn't currently a full implementation of what you want (e.g. PlayMovieActivity decodes video to a SurfaceTexture, but that ST is part of a TextureView, which will get torn down).
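As a rough sketch of the idea (an illustration, not Grafika code; the EGL context that owns the texture is assumed to be created and retained by you, and the names below are placeholders), the decoder gets configured with a Surface built from your own SurfaceTexture instead of the SurfaceView's Surface:

    import android.graphics.SurfaceTexture;
    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;
    import android.view.Surface;
    import java.io.IOException;

    // Sketch: decode into a SurfaceTexture that the view hierarchy does not own.
    // "format" is assumed to come from MediaExtractor; the GL texture must be
    // created on an EGL context that survives the Activity being paused.
    MediaCodec startDecoder(MediaFormat format) throws IOException {
        int[] texId = new int[1];
        GLES20.glGenTextures(1, texId, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texId[0]);

        SurfaceTexture surfaceTexture = new SurfaceTexture(texId[0]); // not tied to any View
        Surface decoderSurface = new Surface(surfaceTexture);         // what MediaCodec renders into

        MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
        decoder.configure(format, decoderSurface, null, 0);
        decoder.start();
        return decoder;
        // Each frame released with render=true fires onFrameAvailable(); after
        // updateTexImage() you can draw the texture onto whatever display Surface
        // currently exists (e.g. the new SurfaceView after resume).
    }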
The additional rendering step will increase the GPU load, and won't work for DRM-protected video. For most devices and apps this won't matter.
See also the bigflake examples.
Update: I've added this to Grafika, with a twist. See the "Double decode" example. The output goes to a SurfaceTexture associated with a TextureView. If the screen is rotated (or, currently, blanked by hitting the power button), decoding continues. If you leave the activity with the "back" or "home" button, decoding stops. It works by retaining the SurfaceTexture, attaching it to the new TextureView.
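The retention trick looks roughly like this (a simplified sketch of the idea, not the actual Grafika code; mTextureView and startDecoding() are placeholders):

    // Keep the SurfaceTexture when the TextureView is torn down, then hand it
    // back to the new TextureView after rotation so the decoder never notices.
    private SurfaceTexture mSavedSurfaceTexture;   // survives because we return false below

    private final TextureView.SurfaceTextureListener mListener =
            new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
            if (mSavedSurfaceTexture == null) {
                mSavedSurfaceTexture = st;
                startDecoding(new Surface(st));    // placeholder: start the decoder once
            } else {
                // new TextureView after rotation: reattach the old SurfaceTexture
                mTextureView.setSurfaceTexture(mSavedSurfaceTexture);
            }
        }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
            return false;   // false = don't release it; we keep it for the next TextureView
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture st, int width, int height) {}

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture st) {}
    };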

Related

Editing frames and encoding with MediaCodec

I was able to decode an mp4 video. If I configure the decoder using a Surface I can see the video on screen. Now I want to edit the frame (adding a yellow line, or even better, overlaying a tiny image) and encode the video as a new video. It is not necessary to show the video and I don't care about performance right now (if I show the frames while editing, there could be a gap if the editing function takes a lot of time). So, what do you recommend: configure the decoder with a GLSurface anyway and use OpenGL (GLES), or configure it with null and somehow convert the ByteBuffer to a Bitmap, modify it, and encode the bitmap as a byte array? I also saw on the Grafika page that you can use a Surface with a custom renderer and use OpenGL (GLES). Thanks
You will have to use OpenGL ES; the ByteBuffer/Bitmap approach cannot give realistic performance or features.
Now that you've been able to decode the video (using MediaExtractor and a codec) to a Surface, you need to use the SurfaceTexture backing that Surface as an external texture and render with GLES to another Surface retrieved from a MediaCodec configured as an encoder.
Though Grafika doesn't have an exactly similar complete project, you can start with your existing project and then try to use either of the following Grafika subprojects, "Continuous camera" or "Show + capture camera", which currently render camera frames (fed to a SurfaceTexture) to a video (and the display).
So essentially the only change is that MediaCodec feeds frames to the SurfaceTexture instead of the Camera.
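A loose sketch of the wiring only (GLES/EGL setup and the feeding/draining loops are omitted; the names here are placeholders, not Grafika classes). The essential point is that the decoder's output Surface wraps a SurfaceTexture, while the encoder's input Surface comes from createInputSurface():

    // Sketch: the decoder renders into a SurfaceTexture (external texture); the
    // edited frames are drawn with GLES into the encoder's input Surface.
    void setUpCodecs(MediaFormat inputFormat, int width, int height, int externalTexId)
            throws IOException {
        MediaFormat outFormat = MediaFormat.createVideoFormat("video/avc", width, height);
        outFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        outFormat.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
        outFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        outFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(outFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface encoderInput = encoder.createInputSurface();  // wrap in an EGL window surface, render edits here
        encoder.start();

        // externalTexId must be a GL_TEXTURE_EXTERNAL_OES texture on your EGL context.
        SurfaceTexture decoderTexture = new SurfaceTexture(externalTexId);
        MediaCodec decoder = MediaCodec.createDecoderByType(inputFormat.getString(MediaFormat.KEY_MIME));
        decoder.configure(inputFormat, new Surface(decoderTexture), null, 0);
        decoder.start();
        // ... feed the decoder from MediaExtractor, draw each frame to encoderInput, drain the encoder ...
    }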
The Google CTS DecodeEditEncodeTest does exactly the same thing and can be used as a reference to make the learning curve smoother.
Using this approach you can certainly do all sorts of things, like manipulating the playback speed of the video (fast forward and slow down), adding all sorts of overlays to the scene, playing with colors/pixels in the video using shaders, etc.
Check out the filters in "Show + capture camera" for an illustration of the same.
Decode-edit-Encode flow
When using OpenGLES, 'editing' of the frame happens via rendering using GLES to the Encoder's input surface.
If decoding and rendering+encoding are separated into different threads, you're bound to skip a few frames every now and then, unless you implement some sort of synchronisation between the two threads to keep the decoder waiting until the render+encode for that frame has happened on the other thread.
Although modern hardware codecs support simultaneous video encoding and decoding, I'd suggest doing the decoding, rendering and encoding on the same thread, especially in your case, where performance is not a major concern right now. That will help you avoid the problems of having to handle the synchronisation on your own and/or frame jumps.
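A single-thread frame loop, roughly following the structure of the CTS DecodeEditEncodeTest; decoder, encoder and TIMEOUT_USEC are assumed fields, outputSurface/inputSurface are assumed helper objects (like that test's OutputSurface/InputSurface classes), and drainEncoder() is a placeholder for your own encoder-draining code:

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean outputDone = false;
    while (!outputDone) {
        // ... feed extractor samples into the decoder's input buffers here ...

        int decoderStatus = decoder.dequeueOutputBuffer(info, TIMEOUT_USEC);
        if (decoderStatus >= 0) {
            boolean render = (info.size != 0);
            // render=true sends the frame to the SurfaceTexture behind the decoder's Surface
            decoder.releaseOutputBuffer(decoderStatus, render);
            if (render) {
                outputSurface.awaitNewImage();        // waits for onFrameAvailable, calls updateTexImage
                outputSurface.drawImage();            // the "edit": draw the external texture with your shader
                inputSurface.setPresentationTime(info.presentationTimeUs * 1000);
                inputSurface.swapBuffers();           // submits the edited frame to the encoder
                drainEncoder(false);                  // pull encoded output on this same thread
            }
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                encoder.signalEndOfInputStream();
                drainEncoder(true);
                outputDone = true;
            }
        }
    }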

Why is SurfaceView not deprecated?

I know that TextureView showed up after ICS,
but SurfaceView was not deprecated in ICS.
SurfaceView has a hole-punching structure, so it has many limitations:
you can't stack two SurfaceViews, you can't translate them, etc.
Why is SurfaceView not deprecated even though TextureView exists?
SurfaceView is faster, and can handle DRM-protected video.
The hole-punching structure is necessary because SurfaceView's Surface is handled directly by the system compositor. For TextureView, you draw on a Surface, which is converted to a GL texture within the app, which is rendered a second time by the app onto the View layer. So there's an extra copy.
For DRM-protected video, no user or system code -- not even the Linux kernel -- is allowed to see unencrypted pixels. Only the video decoder and the display hardware. Because SurfaceView just forwards references through, and doesn't touch the actual data, this works.
For more details, see the graphics architecture doc.

Render video effect one time for each frame in Grafika - CameraCaptureActivity

I am referring to the demo app Grafika, in which the CameraCaptureActivity records a video while showing a live preview of the effects applied.
While recording in the CameraCaptureActivity, any effect that is applied to the frame that comes from the camera is done twice.
Once for the preview and once while saving the video to the file.
Since the same frame that is previewed is being saved to the file, it would save a lot of processing if this could be done just once.
The rendering of the frames happens directly on two surfaces: the GLSurfaceView (for the preview) and the MediaCodec input surface (for saving to the file).
Is there a way to render the OpenGL effect only once?
If I could copy the contents of one surface to the other it would be great.
Is there a way to do this?
Yes: you can render to an FBO, then blit the output twice, once for display and once for recording.
Grafika's "record GL app" activity demonstrates three different approaches to the problem (one of which only works on GLES 3.0+). The doFrame() method does the work; the draw-once-blit-twice approach is currently here.

Detect if Android SurfaceView is drawing/moving

I'm using libvlc in an Android app to play a network stream; the native code draws the image on a SurfaceView.
Let's assume the video stops and I have no access to the native code: how can I detect whether the SurfaceView is still updating (i.e. the video is not frozen)?
I tried getViewTreeObserver().addOnDrawListener() and getViewTreeObserver().addOnPreDrawListener(), but they do not have the effect I'm looking for.
Thanks
You can't get that information from the SurfaceView, because the SurfaceView itself does not know.
The SurfaceView's job is to set up the Surface, and create a hole in the View layout that you can see through. Once it has done these things, it is no longer actively involved in the process of displaying video. The content flows from the decoder to the Surface, which is managed by SurfaceFlinger (the system graphics compositor).
This is, for example, how DRM video works. The app "playing" the video has no access to DRM-protected video frames. (Neither does SurfaceFlinger, actually, but that's a longer story.)
The best way to know if content is still arriving is to ask the video source if it is still sending you content. Another approach would be to change your SurfaceView to a TextureView, and provide an onSurfaceTextureUpdated() callback method.
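If you do switch to a TextureView, the check could look something like this sketch (view and field names are placeholders): onSurfaceTextureUpdated() fires once per delivered frame, so a watchdog can flag the stream as frozen when the timestamp stops advancing.

    // Field in your Activity/Fragment; read it from a periodic watchdog.
    private volatile long mLastFrameNanos;

    textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {}
        @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
        @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture st) {
            mLastFrameNanos = System.nanoTime();  // called once for every new frame
        }
    });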
I am not sure exactly what you are trying to achieve here, but you can tell whether the surface itself is alive by implementing the SurfaceHolder.Callback interface, which gives you access to the following methods:
surfaceCreated() - called immediately after the surface is first created.
surfaceChanged() - called immediately after any structural changes (format or size) have been made to the surface.
surfaceDestroyed() - called immediately before the surface is destroyed.
Take a look at the documentation for SurfaceView and SurfaceHolder.
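For completeness, a minimal sketch of that callback (note that it reports the surface's lifecycle, not whether new frames are arriving):

    surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            // surface exists; safe to start drawing / hand it to the player
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
            // size or format changed
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            // surface is going away; stop rendering into it
        }
    });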

How to pass Camera preview to the Surface created by MediaCodec.createInputSurface()?

Ideally I'd like to accomplish two goals:
Pass the Camera preview data to a MediaCodec encoder via a Surface. I can create the Surface using MediaCodec.createInputSurface() but the Camera.setPreviewDisplay() takes a SurfaceHolder, not a Surface.
In addition to passing the Camera preview data to the encoder, I'd also like to display the preview on-screen (so the user can actually see what they are encoding). If the encoder wasn't involved then I'd use a SurfaceView, but that doesn't appear to work in this scenario since SurfaceView creates its own Surface and I think I need to use the one created by MediaCodec.
I've searched online quite a bit for a solution and haven't found one. Some examples on bigflake.com seem like a step in the right direction but they take an approach that adds a bunch of EGL/SurfaceTexture overhead that I'd like to avoid. I'm hoping there is a simpler example or solution where I can get the Camera and MediaCodec talking more directly without involving EGL or textures.
As of Android 4.3 (API 18), the bigflake CameraToMpegTest approach is the correct way.
The EGL/SurfaceTexture overhead is currently unavoidable, especially for what you want to do in goal #2. The idea is:
Configure the Camera to send the output to a SurfaceTexture. This makes the Camera output available to GLES as an "external texture".
Render the SurfaceTexture to the Surface returned by MediaCodec#createInputSurface(). That feeds the video encoder.
Render the SurfaceTexture a second time, to a GLSurfaceView. That puts it on the display for real-time preview.
The only data copying that happens is performed by the GLES driver, so you're doing hardware-accelerated blits, which will be fast.
The only tricky bit is you want the external texture to be available to two different EGL contexts (one for the MediaCodec, one for the GLSurfaceView). You can see an example of creating a shared context in the "Android Breakout game recorder patch" sample on bigflake -- it renders the game twice, once to the screen, once to a MediaCodec encoder.
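A condensed sketch of that flow (API 18+; the EGL setup and the external-texture shader are omitted, and names are placeholders rather than bigflake/Grafika APIs):

    void startCameraToEncoder(Camera camera, MediaFormat encoderFormat, int externalTexId)
            throws IOException {
        // encoderFormat is assumed to request COLOR_FormatSurface.
        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(encoderFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface encoderInput = encoder.createInputSurface();  // wrap in an EGL window surface
        encoder.start();

        // The Camera's output becomes a GL_TEXTURE_EXTERNAL_OES texture.
        SurfaceTexture cameraTexture = new SurfaceTexture(externalTexId);
        camera.setPreviewTexture(cameraTexture);
        camera.startPreview();

        // Per frame (from onFrameAvailable): latch with cameraTexture.updateTexImage()
        // on the thread that owns the EGL context, then draw the same texture twice:
        //   1) to the EGL surface wrapping encoderInput, then swapBuffers -> video encoder
        //   2) to the GLSurfaceView's (or SurfaceView's) EGL surface -> on-screen preview
    }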
Update: This is implemented in Grafika ("Show + capture camera").
Update: The multi-context approach in "Show + capture camera" is somewhat flawed. The "Continuous capture" Activity uses a plain SurfaceView, and is able to do both screen rendering and video recording with a single EGL context. This is the recommended approach.
