Multiple videos on one Surface - android

I have a single, fullscreen SurfaceView, and multiple network streams of H.264 video that I can decode using MediaCodec. Is it possible to specify the coordinates of the Surface to which each video will be rendered, so I can create a kind of video mosaic?

No, that's not possible. You'll need to use multiple SurfaceTextures instead, one per video decoder, and render all of the textures into one view using OpenGL ES.
See https://source.android.com/devices/graphics/architecture.html for more explanation of how this works; in particular, each Surface can have only one producer and one consumer.
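For illustration (this is a sketch under assumptions, not code from the answer above), the setup could look roughly like this: one SurfaceTexture and Surface per decoder, with the composition done in a GL renderer. streamCount, formats, requestRender() and drawTexturedQuad() are placeholders for your own plumbing.

    // Run on the thread that owns the GL context; GL_TEXTURE_EXTERNAL_OES binding
    // and texture filtering setup are omitted for brevity.
    int[] texIds = new int[streamCount];
    GLES20.glGenTextures(streamCount, texIds, 0);

    SurfaceTexture[] videoTextures = new SurfaceTexture[streamCount];
    Surface[] decoderSurfaces = new Surface[streamCount];

    for (int i = 0; i < streamCount; i++) {
        videoTextures[i] = new SurfaceTexture(texIds[i]);
        videoTextures[i].setOnFrameAvailableListener(st -> requestRender());
        decoderSurfaces[i] = new Surface(videoTextures[i]);

        MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
        decoder.configure(formats[i], decoderSurfaces[i], null, 0); // one Surface per decoder
        decoder.start();
    }

    // In the renderer's onDrawFrame(): latch the newest frame from each
    // SurfaceTexture and draw it into its tile of the mosaic.
    for (int i = 0; i < streamCount; i++) {
        videoTextures[i].updateTexImage();
        GLES20.glViewport(tileX(i), tileY(i), tileWidth, tileHeight);
        drawTexturedQuad(texIds[i]); // your own shader using samplerExternalOES
    }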

Within a single SurfaceView, no. For more detail you can explore the SurfaceView source code. You might be able to approximate a mosaic effect by using a few SurfaceViews and adding a special byte-buffer trimmer that splits one video across several SurfaceViews so that together they show the full picture.
But either way, it would not be a good idea if we are talking about performance.

Related

Editing video frames in ExoPlayer

I have an encoded video stream that I'm playing through ExoPlayer. What I want to do is get each frame of the video and edit it before it is displayed (e.g. changing some pixels).
Is it possible to do this with ExoPlayer? I've been looking at the implementation of MediaCodecVideoRenderer.java in the ExoPlayer source, but it seems that each MediaCodec releases its output buffer to a surface itself, without any possibility of editing the frame before rendering.
It will depend on exactly what you want to modify, but it is possible to use a GLSurfaceView, listen for each frame, and then transform the frame, assuming it is not encrypted (with encrypted content you can usually still apply a transformation, but you definitely should not be able to read the frame itself).
There is a good example project which does something similar to apply filters to videos, extending ExoPlayer - take a look at the EPlayerRenderer class in particular.
https://github.com/MasayukiSuda/ExoPlayerFilter
You can also do a similar thing with OpenCV: read in a frame, modify it, and then display it. This may be easier if you are doing complicated image manipulations.
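As a hedged illustration of the OpenCV route, this is roughly what a per-frame edit could look like, assuming you already have the frame as a Bitmap (for example via TextureView.getBitmap()); the class and method names here are invented for the example, they are not ExoPlayer API.

    import android.graphics.Bitmap;
    import org.opencv.android.Utils;
    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    public final class FrameEditor {
        // Applies a simple blur as a stand-in for "changing some pixels".
        public static Bitmap edit(Bitmap frame) {
            Mat mat = new Mat();
            Utils.bitmapToMat(frame, mat);                     // RGBA Bitmap -> Mat
            Imgproc.GaussianBlur(mat, mat, new Size(9, 9), 0);
            Bitmap out = Bitmap.createBitmap(frame.getWidth(), frame.getHeight(),
                    Bitmap.Config.ARGB_8888);
            Utils.matToBitmap(mat, out);                       // Mat -> Bitmap for display
            mat.release();
            return out;
        }
    }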

Video in Android : change visual properties (e.g. saturation, brightness)

Assuming we have a Surface in Android that displays a video (e.g. h264) with a MediaPlayer:
1) Is it possible to change the saturation, contrast and brightness of the video displayed on the surface, in real time? For example, images can use setColorFilter; is there anything similar in Android to process video frames?
Alternative question (if no. 1 is too difficult):
2) If we would like to export this video with e.g. an increased saturation, we should use a Codec, e.g. MediaCodec. What technology (method, class, library, etc...) should we use before the codec/save action to apply the saturation change?
For display only, one easy approach is to use a GLSurfaceView, a SurfaceTexture to render the video frames, and a MediaPlayer. Prokash's answer links to an open source library that shows how to accomplish that. There are a number of other examples around if you search those terms together. Taking that route, you draw video frames to an OpenGL texture and create OpenGL shaders to manipulate how the texture is rendered. (I would suggest asking Prokash for further details and accepting his answer if this is enough to fill your requirements.)
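To make the shader idea concrete, here is a minimal sketch of the kind of fragment shader you might plug into that GL pipeline to adjust saturation and brightness; the uniform and varying names are arbitrary and not taken from any particular library.

    static final String SATURATION_FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform samplerExternalOES sTexture;\n" +  // the video frame from the SurfaceTexture
            "uniform float uSaturation;\n" +            // 0 = grayscale, 1 = unchanged
            "uniform float uBrightness;\n" +            // added to each channel
            "void main() {\n" +
            "  vec4 c = texture2D(sTexture, vTexCoord);\n" +
            "  float gray = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" +
            "  vec3 rgb = mix(vec3(gray), c.rgb, uSaturation) + uBrightness;\n" +
            "  gl_FragColor = vec4(clamp(rgb, 0.0, 1.0), c.a);\n" +
            "}\n";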
Similarly, you could use the OpenGL tools with MediaCodec and MediaExtractor to decode video frames. The MediaCodec would be configured to output to a SurfaceTexture, so you would not need to do much more than code some boilerplate to get the output buffers rendered. The filtering process would be the same as with a MediaPlayer. There are a number of examples using MediaCodec as a decoder available, e.g. here and here. It should be fairly straightforward to substitute the TextureView or SurfaceView used in those examples with the GLSurfaceView of Prokash's example.
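A minimal sketch of that boilerplate, assuming a MediaExtractor already positioned on the video track and a Surface built from your SurfaceTexture (error handling and output-format changes omitted):

    import android.media.MediaCodec;
    import android.media.MediaExtractor;
    import android.media.MediaFormat;
    import android.view.Surface;

    void decodeToSurface(MediaExtractor extractor, MediaFormat format, Surface surface)
            throws Exception {
        MediaCodec decoder = MediaCodec.createDecoderByType(
                format.getString(MediaFormat.KEY_MIME));
        decoder.configure(format, surface, null, 0);    // output goes straight to the Surface
        decoder.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false, outputDone = false;
        while (!outputDone) {
            if (!inputDone) {
                int inIndex = decoder.dequeueInputBuffer(10_000);
                if (inIndex >= 0) {
                    int size = extractor.readSampleData(decoder.getInputBuffer(inIndex), 0);
                    if (size < 0) {
                        decoder.queueInputBuffer(inIndex, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int outIndex = decoder.dequeueOutputBuffer(info, 10_000);
            if (outIndex >= 0) {
                decoder.releaseOutputBuffer(outIndex, info.size > 0); // true = render to Surface
                outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
            }
        }
        decoder.stop();
        decoder.release();
    }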
The advantage of this approach is that you have access to all the separate tracks in the media file. Because of that, you should be able to filter the video track with OpenGL and straight copy other tracks for export. You would use a MediaCodec in encode mode with the Surface from the GLSurfaceView as input and a MediaMuxer to put it all back together. You can see several relevant examples at BigFlake.
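The "straight copy" part can be as simple as shuttling samples from a MediaExtractor into the same MediaMuxer that receives the re-encoded video; the sketch below assumes the muxer and its audio track index were set up elsewhere and that start() is called once all tracks are added.

    import android.media.MediaCodec;
    import android.media.MediaExtractor;
    import android.media.MediaMuxer;
    import java.nio.ByteBuffer;

    void copyAudioTrack(MediaExtractor extractor, int audioTrackIndex,
                        MediaMuxer muxer, int muxerTrackIndex) {
        extractor.selectTrack(audioTrackIndex);
        ByteBuffer buffer = ByteBuffer.allocate(256 * 1024);
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            int size = extractor.readSampleData(buffer, 0);
            if (size < 0) break;                              // no more samples
            boolean sync = (extractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) != 0;
            info.set(0, size, extractor.getSampleTime(),
                    sync ? MediaCodec.BUFFER_FLAG_KEY_FRAME : 0);
            muxer.writeSampleData(muxerTrackIndex, buffer, info);
            extractor.advance();
        }
    }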
You can use a MediaCodec without a Surface to access decoded byte data directly and manipulate it that way. This example illustrates that approach. You can manipulate the data and send it to an encoder for export or render it as you see fit. There is some extra complexity in dealing with the raw byte data. Note that I like this example because it illustrates dealing with the audio and video tracks separately.
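The structural difference from the Surface-backed loop is small: configure the decoder with a null Surface (decoder.configure(format, null, null, 0)) and read the decoded bytes from the output buffer, roughly as below; handleFrame() is a placeholder, and the byte layout depends on the color format the codec reports.

    import android.media.MediaCodec;
    import java.nio.ByteBuffer;

    void drainOneFrame(MediaCodec decoder, MediaCodec.BufferInfo info) {
        int outIndex = decoder.dequeueOutputBuffer(info, 10_000);
        if (outIndex >= 0) {
            ByteBuffer yuv = decoder.getOutputBuffer(outIndex); // raw decoded frame
            handleFrame(yuv, info);                             // manipulate or feed an encoder
            decoder.releaseOutputBuffer(outIndex, false);       // false: nothing to render
        }
    }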
You can also use FFMpeg, either in native code or via one of the Java wrappers out there. This option is more geared towards export than immediate playback. See here or here for some libraries that attempt to make FFMpeg available to Java. They are basically wrappers around the command line interface. You would need to do some extra work to manage playback via FFMpeg, but it is definitely doable.
If you have questions, feel free to ask, and I will try to expound upon whatever option makes the most sense for your use case.
If you are using a player that supports video filters, then you can do that.
An example of such a player is VLC, which is built around FFmpeg [1].
VLC is pretty easy to compile for Android. Then all you need is the libvlc (aar file) and you can build your own app. See compile instructions here.
You will also need to write your own module. Just duplicate an existing one and modify it. Needless to say, VLC offers strong transcoding and streaming capabilities.
As powerful as VLC for Android is, it has one huge drawback: video filters cannot work with hardware decoding (on Android only). This means the entire video processing happens on the CPU.
Your other options are to use GLSL / OpenGL over surfaces such as GLSurfaceView and TextureView. This guarantees GPU acceleration.

Processing Frames from Mediacodec Output and Update the Frames on Android

I am doing a project on image processing stuff. I receive a raw h264 video stream in real time and decode it using MediaCodec. I have successfully displayed the decoded video on a TextureView or SurfaceView. Now I want to process each frame, do something to it using OpenCV4Android and show the updated video frame on the screen. I know OpenCV has a sample project that demonstrates how to process video frames from the phone camera, but I wonder how to do it if I have another video source.
Also I have some questions on TextureView:
What does the onSurfaceTextureUpdated() from SurfaceTextureListener do? If I call getBitmap() in this function, then does that mean I get each frame of the video? And what about SurfaceTexture.onFrameAvailableListener?
Is it possible to use a hidden TextureView as an intermediate, extract its frames for processing, and render them back to another surface, say an OpenGL ES texture, for display?
The various examples in Grafika that use Camera as input can also work with input from a video stream. Either way you send the video frame to a Surface.
If you want to work with a frame of video in software, rather than on the GPU, things get more difficult. You either have to receive the frame on a Surface and copy it to a memory buffer, probably performing an RGB-to-YUV color conversion in the process, or you have to get the YUV buffer output from MediaCodec. The latter is tricky because a few different formats are possible, including Qualcomm's proprietary tiled format.
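On API 21+ you can sidestep most of the vendor-specific YUV layouts by asking MediaCodec for an Image instead of a raw ByteBuffer, since the planes carry their own strides. A hedged sketch (error handling omitted):

    import android.media.Image;
    import android.media.MediaCodec;

    void inspectDecodedFrame(MediaCodec decoder, int outputIndex) {
        Image image = decoder.getOutputImage(outputIndex);  // typically YUV_420_888
        if (image != null) {
            Image.Plane[] planes = image.getPlanes();       // [0]=Y, [1]=U, [2]=V
            int rowStride = planes[0].getRowStride();       // respect strides when copying
            // ... copy/convert the planes into your own buffer here ...
            image.close();
        }
        decoder.releaseOutputBuffer(outputIndex, false);
    }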
With regard to TextureView:
onSurfaceTextureUpdated() is called whenever TextureView receives a new frame. You can use getBitmap() to get every frame of the video, but you need to pace the video playback to match your filter speed -- TextureView will drop frames if you fall behind.
You could create a "hidden TextureView" by putting other View elements on top of it, but that would be silly. TextureView uses a SurfaceTexture to convert the video frames to OpenGL ES textures, then renders them as part of drawing the View UI. The bitmap data is retrieved with glReadPixels(). You can just use these elements directly. The bigflake ExtractMpegFramesTest demonstrates this.
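For reference, the getBitmap() route looks roughly like this; processFrame() is a placeholder for your own filtering, and the caveat above about falling behind the playback rate still applies.

    import android.graphics.Bitmap;
    import android.graphics.SurfaceTexture;
    import android.view.TextureView;

    textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) { }
        @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) { }
        @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }

        @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {
            Bitmap frame = textureView.getBitmap();   // copy of the frame just presented
            processFrame(frame);                      // your pixel/OpenCV work goes here
        }
    });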

Recording a Surface using MediaCodec

So, In my application, I am able to show effects(like blur filter, gaussian) to video that comes from Camera using GPUImage library.
Basically, the library takes the input from the camera, gets the raw byte data, converts it from YUV to RGBA format, then applies effects to this image and displays it on the Surface of a GLSurfaceView using OpenGL. Finally, to the user, it looks like a video with the effects applied.
Now I want to record the frames of Surface as a video using MediaCodec API.
But this discussion says that we cannot pass a predefined Surface to the MediaCodec.
I have seen some samples at bigflake where a Surface is created using MediaCodec.createInputSurface(), but in my case the Surface comes from the GLSurfaceView.
So, how can I record the frames of a Surface as a video?
I will record the audio in parallel, merge that video and audio using FFmpeg, and present the result to the user as a video with the effects applied.
You can see a complete example of this in Grafika.
In particular, the "Show + capture camera" activity records camera output to .mp4. It also demonstrates applying some simple image processing techniques in the GL shader. It uses a GLSurfaceView and a convoluted dance to keep the recording going across orientation changes.
Also possibly of interest, the "Record GL app with FBO" activity records OpenGL ES rendering a couple different ways. It uses plain SurfaceView and is much more straightforward.
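In outline, the Grafika-style recording path looks like the sketch below: the encoder provides an input Surface, you wrap it in an EGL window surface, render your filtered frames into it, then drain the encoder into a MediaMuxer. Here eglDisplay and eglConfig are assumed to come from your existing EGL setup, and the drain loop is omitted (see Grafika's VideoEncoderCore for the real thing).

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.opengl.EGL14;
    import android.opengl.EGLSurface;
    import android.view.Surface;

    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface inputSurface = encoder.createInputSurface();  // render into this with GL
    encoder.start();

    // Wrap the encoder's Surface in an EGL window surface that shares the display
    // EGL context, so the same textures can be drawn to the screen and to the encoder.
    EGLSurface encoderSurface = EGL14.eglCreateWindowSurface(
            eglDisplay, eglConfig, inputSurface, new int[] { EGL14.EGL_NONE }, 0);

    // Per frame: make encoderSurface current, draw, set the presentation time,
    // swap buffers, then drain the encoder's output into a MediaMuxer.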

render overlay graphics into camera video

I want to make an app which takes a video from the camera, adds additional visual info (overlays) and creates a video file from it which can later be uploaded to a server.
How to do that?
Without prior experience with such tasks, I assume there are 2 options:
screen-capture and encoding to a video file. However, the resulting framerate may not be sufficient.
record the video to the sdcard and re-encode it later with the added overlays. Live encoding is not needed, so it's OK for the encoding process to be slower than realtime.
You will have to resort to using, for instance, FFmpeg and the NDK to encode your own video. There are plenty of examples out there, but it's still somewhat cumbersome.
Hope this helps:
Use RelativeLayout. Put the camera preview as the first child of the RelativeLayout and the VideoView as the second child. The VideoView will appear to be "on top of" the SurfaceView for the camera preview. BTW, VideoView really is a SurfaceView. Note that you may decide someday to use a SurfaceView and MediaPlayer, rather than a VideoView, so you can get more control on video playback.
Source: http://osdir.com/ml/Android-Developers/2010-03/msg00077.html
