I have an encoded video stream that I'm playing through ExoPlayer. What I want to do is get each frame of the video and edit it before it is displayed (e.g. changing some pixels).
Is it possible to do this with ExoPlayer? I've been looking at the implementation of MediaCodecVideoRenderer.java in the ExoPlayer source, but it seems that each MediaCodec releases its output buffer to a surface itself, without any possibility of editing the frame before rendering.
It will depend on exactly what you want to modify, but it is possible to use a GLSurfaceView, listen for each frame, and then transform the frame, assuming it is not encrypted (with encrypted content you can usually still apply transformations, but you definitely should not be able to read the frame itself).
There is a good example project which does something similar to apply filters to videos, extending ExoPlayer - take a look at the EPlayerRenderer class in particular.
https://github.com/MasayukiSuda/ExoPlayerFilter
You can also do a similar thing with OpenCV - read in a frame, modify it and then display it. This may be easier if you are doing complicated image manipulations.
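Once you have routed a decoded frame into CPU memory - for example as an ARGB int array read back from a Bitmap or an ImageReader (how you obtain the array is the hard part, and is an assumption here; the GLSurfaceView route above keeps frames on the GPU instead) - the per-pixel edit itself is simple. A minimal sketch in plain Java that inverts the colour channels of a frame:

```java
public class FrameEditor {
    // Inverts the RGB channels of an ARGB_8888 frame in place,
    // leaving the alpha channel untouched.
    public static void invert(int[] argb) {
        for (int i = 0; i < argb.length; i++) {
            int p = argb[i];
            int a = p & 0xFF000000;     // keep alpha
            int rgb = ~p & 0x00FFFFFF;  // invert red, green, blue
            argb[i] = a | rgb;
        }
    }

    public static void main(String[] args) {
        int[] frame = {0xFF000000, 0xFFFFFFFF}; // black and white pixels
        invert(frame);
        System.out.println(Integer.toHexString(frame[0])); // ffffffff
        System.out.println(Integer.toHexString(frame[1])); // ff000000
    }
}
```

Note that doing this per frame on the CPU is expensive at video resolutions; for real-time display the shader approach in the linked project is the usual choice.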
Related
I am pulling h264 and AAC frames, and at the moment I am feeding them to MediaCodec, decoding and rendering them myself, but the code is getting too complicated and I need to cover all cases. I was thinking it might be possible to set up an ExoPlayer instance and feed them to it as a source.
I can only find that it supports normal files and streams, but not separate frames? Do I need to mux the frames myself, and if so is there an easy way to do it?
If you mean that you are extracting frames from a video file or a live stream, and then want to work on them individually or display them individually, you may find that OpenCV would suit your use case.
You can fairly simply open a stream or file, go frame by frame and do what you want with the resulting decoded bitmap.
This answer has a Python and Android example that might be useful: https://stackoverflow.com/a/58921325/334402
Assuming we have a Surface in Android that displays a video (e.g. h264) with a MediaPlayer:
1) Is it possible to change the saturation, contrast & brightness of the video displayed on the surface? In real time? E.g. images can use setColorFilter; is there anything similar in Android to process the video frames?
Alternative question (if no. 1 is too difficult):
2) If we would like to export this video with e.g. an increased saturation, we should use a codec, e.g. MediaCodec. What technology (method, class, library, etc.) should we use before the codec/save step to apply the saturation change?
For display only, one easy approach is to use a GLSurfaceView, a SurfaceTexture to render the video frames, and a MediaPlayer. Prokash's answer links to an open source library that shows how to accomplish that. There are a number of other examples around if you search those terms together. Taking that route, you draw video frames to an OpenGL texture and create OpenGL shaders to manipulate how the texture is rendered. (I would suggest asking Prokash for further details and accepting his answer if this is enough to fill your requirements.)
Similarly, you could use the OpenGL tools with MediaCodec and MediaExtractor to decode video frames. The MediaCodec would be configured to output to a SurfaceTexture, so you would not need to do much more than code some boilerplate to get the output buffers rendered. The filtering process would be the same as with a MediaPlayer. There are a number of examples using MediaCodec as a decoder available, e.g. here and here. It should be fairly straightforward to substitute the TextureView or SurfaceView used in those examples with the GLSurfaceView of Prokash's example.
The advantage of this approach is that you have access to all the separate tracks in the media file. Because of that, you should be able to filter the video track with OpenGL and straight copy other tracks for export. You would use a MediaCodec in encode mode with the Surface from the GLSurfaceView as input and a MediaMuxer to put it all back together. You can see several relevant examples at BigFlake.
You can use a MediaCodec without a Surface to access decoded byte data directly and manipulate it that way. This example illustrates that approach. You can manipulate the data and send it to an encoder for export or render it as you see fit. There is some extra complexity in dealing with the raw byte data. Note that I like this example because it illustrates dealing with the audio and video tracks separately.
You can also use FFMpeg, either in native code or via one of the Java wrappers out there. This option is more geared towards export than immediate playback. See here or here for some libraries that attempt to make FFMpeg available to Java. They are basically wrappers around the command line interface. You would need to do some extra work to manage playback via FFMpeg, but it is definitely doable.
If you have questions, feel free to ask, and I will try to expound upon whatever option makes the most sense for your use case.
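As a concrete illustration of what a saturation filter actually computes - whether in a GLSL fragment shader on the GPU or on raw decoded bytes - here is the standard luma-based interpolation, sketched in plain Java on a single ARGB pixel. The weights are the Rec. 601 luma coefficients; a fragment shader version performs the same arithmetic per texel:

```java
public class SaturationFilter {
    /**
     * Adjusts the saturation of one ARGB pixel.
     * saturation = 0 gives grayscale, 1 leaves the pixel unchanged,
     * values > 1 boost saturation. Channels are clamped to [0, 255].
     */
    public static int adjust(int argb, float saturation) {
        int a = (argb >>> 24) & 0xFF;
        int r = (argb >>> 16) & 0xFF;
        int g = (argb >>> 8) & 0xFF;
        int b = argb & 0xFF;

        // Rec. 601 luma: the "gray" value of this pixel.
        float gray = 0.299f * r + 0.587f * g + 0.114f * b;

        // Interpolate each channel between gray and its original value.
        int nr = clamp(Math.round(gray + saturation * (r - gray)));
        int ng = clamp(Math.round(gray + saturation * (g - gray)));
        int nb = clamp(Math.round(gray + saturation * (b - gray)));
        return (a << 24) | (nr << 16) | (ng << 8) | nb;
    }

    private static int clamp(int v) {
        return Math.max(0, Math.min(255, v));
    }
}
```

The same interpolation, written once in a shader, applies to every pixel of every frame at GPU speed, which is why the GLSurfaceView route works for real-time display.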
If you are using a player that supports video filters, then you can do that.
Example of such a player is VLC, which is built around FFMPEG [1].
VLC is pretty easy to compile for Android. Then all you need is the libvlc (aar file) and you can build your own app. See compile instructions here.
You will also need to write your own module; just duplicate an existing one and modify it. Needless to say, VLC offers strong transcoding and streaming capabilities.
As powerful as VLC for Android is, it has one huge drawback - video filters cannot work with hardware decoding (on Android). This means that all video processing happens on the CPU.
Your other option is to use GLSL / OpenGL over surfaces such as GLSurfaceView and TextureView. This guarantees GPU acceleration.
I have a single, fullscreen SurfaceView, and I have multiple network streams with h264 video which I can decode using MediaCodec. Is it possible to specify the coordinates on the Surface at which each video will be rendered, so I can create a kind of video mosaic?
No, that's not possible. You'll need to use multiple SurfaceTextures instead, one per video decoder, and render all the textures into one view using OpenGL.
See https://source.android.com/devices/graphics/architecture.html for more explanations on how this works; in particular, each Surface can only have one producer and one consumer.
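The approach above boils down to giving each decoder's SurfaceTexture its own glViewport rectangle inside the single GL view. The GL drawing itself is Android-specific, but the tile-layout arithmetic can be sketched in plain Java (the Tile class and the row/column grid are illustrative assumptions):

```java
public class MosaicLayout {
    // Simple value holder for a GL viewport rectangle (x, y are the
    // lower-left corner, as OpenGL expects).
    public static final class Tile {
        public final int x, y, w, h;
        Tile(int x, int y, int w, int h) {
            this.x = x; this.y = y; this.w = w; this.h = h;
        }
    }

    /**
     * Computes the viewport for stream {@code index} in a cols x rows grid
     * covering a viewWidth x viewHeight surface. Each decoder's texture is
     * then drawn after glViewport(t.x, t.y, t.w, t.h).
     */
    public static Tile tileFor(int index, int cols, int rows,
                               int viewWidth, int viewHeight) {
        int w = viewWidth / cols;
        int h = viewHeight / rows;
        int col = index % cols;
        int row = index / cols;
        // Flip the row so index 0 lands in the top-left corner on screen.
        int y = viewHeight - (row + 1) * h;
        return new Tile(col * w, y, w, h);
    }
}
```

With four decoders on a 1280x720 view, stream 0 gets the top-left 640x360 quadrant, stream 3 the bottom-right, and so on.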
In a single SurfaceView - no. For more information you can explore the SurfaceView source code. You might achieve some kind of mosaic effect by using a few SurfaceViews and writing a byte-buffer splitter to distribute one video across several of them and reassemble the full picture.
But either way, it would not be a good idea in terms of performance.
I work on a project that requires exact seeking of a video because the system needs to be synchronized with other devices. The OS used for video playback is Android. So far I have used the MediaPlayer class, but depending on the keyframe interval, seeking is highly inaccurate.
So my next idea is to cache decoded images and wrap my own playback class around them. So far I understand how to use the MediaExtractor and MediaCodec classes to decode videos manually. The android.media.ImageReader class seems to be exactly what I want.
But what I do not understand is how to render such an android.media.Image manually once I've got it. I'd like to avoid doing the YUV to RGB conversion manually; a preferred method would be to put such an Image onto a Surface or copy it to a SurfaceTexture somehow.
Please take a look here
In order to support use cases where videos being played on several devices need to be synchronized, this player performs exact seeking.
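The usual recipe for exact seeking with MediaExtractor/MediaCodec is: seek to the nearest sync (key) frame at or before the target, then decode forward, discarding output frames until the presentation timestamp reaches the target. The decode loop itself is Android-specific, but the bookkeeping can be sketched in plain Java (the sorted keyframe timestamp array is an assumption; in practice MediaExtractor.seekTo with SEEK_TO_PREVIOUS_SYNC does this lookup for you):

```java
import java.util.Arrays;

public class ExactSeek {
    /**
     * Returns the timestamp (microseconds) of the last keyframe at or
     * before targetUs, given the sorted presentation times of all
     * keyframes. Decoding must start here, dropping frames until the
     * decoder's output PTS reaches targetUs.
     */
    public static long previousSyncUs(long[] keyframesUs, long targetUs) {
        int i = Arrays.binarySearch(keyframesUs, targetUs);
        if (i >= 0) return keyframesUs[i];         // target is a keyframe
        int insertion = -i - 1;                    // first keyframe after target
        if (insertion == 0) return keyframesUs[0]; // before the first keyframe
        return keyframesUs[insertion - 1];
    }

    /** True if a decoded frame should be discarded while seeking. */
    public static boolean shouldDrop(long framePtsUs, long targetUs) {
        return framePtsUs < targetUs;
    }
}
```

The cost of an exact seek is therefore proportional to the keyframe interval: the further the target sits after the previous keyframe, the more frames have to be decoded and dropped.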
How can I apply effects to video while capturing? I have tried a lot of ways, but the output is nothing. I searched and found one application, VideoFX, which does what I want, but I couldn't figure out how they do it.
I have applied effects to images using the GPUImageProcessing library. To apply effects to video, should I capture a normal video, split it into frames, apply effects to those frames, and then recombine the frames into a video? Is this the only process, or are there alternatives? Most Stack Overflow answers suggest FFmpeg; using it I can get frames from the video, but how do I recombine them again?
I think that using this Camera effects approach we can apply effects to videos while recording, but I don't know how to do it using OpenGL.
Just make a video from the frames you are getting from FFmpeg. You can find libraries for building a video from frames, or you can make a Movie from those frames and save it as a video file. You can also treat your frames as raw data and create a file by adding a header to it.
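Whichever route you take to recombine frames (MediaCodec plus MediaMuxer on Android, or a library), each frame you feed back in needs a presentation timestamp derived from the target frame rate. A small sketch of that bookkeeping in plain Java:

```java
public class FrameTimestamps {
    /**
     * Presentation time in microseconds for frame {@code index} at the
     * given frame rate - the value passed along with each buffer when
     * recombining still frames into a video with an encoder/muxer.
     */
    public static long ptsUs(int index, int fps) {
        return index * 1_000_000L / fps;
    }

    public static void main(String[] args) {
        // At 30 fps, frame 30 starts exactly one second in.
        System.out.println(ptsUs(30, 30)); // 1000000
    }
}
```

Getting these timestamps monotonically increasing is what makes the resulting file play at the intended speed; feeding every frame with the same timestamp is a common cause of broken output.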