Android video filter

I'm trying to create an app where I am able to add filters to a recorded video. Basically, I want to replicate the functionality that exists in Instagram video, or Viddy.
I've done research and I can't piece it all together. I've looked into using GLSurfaceView to play the recorded video and I know I could use NDK to do the pixel manipulation and send it back to the SurfaceView or save it somehow. The problem is, I don't know how to send the pixel data because there seems to be no function to access it. This idea came from the Camera function "onPreviewFrame". The function returns a byte array allowing me to manipulate the pixels and display it.
Another idea is to use GLSurfaceView and use OpenGL to render the filter. GLSurfaceView has a renderer you can set, but I'm not very familiar with OpenGL. But again, this goes back to actually getting the pixels of each video frame. I also read about ripping each frame as a texture and then manipulating the texture in OpenGL but the answers I've come across are not very detailed.
Lastly, I've looked into JavaCV, trying to use FFmpegFrameGrabber, but I haven't had much luck there either. I wanted to just grab one frame, but when I try to write the frame's ByteBuffer to an ImageView, I get a "buffer not large enough for pixels" error.
Any guidance would be great.

From Android 4.3 you can use a Surface as the input to your encoder. http://developer.android.com/about/versions/android-4.3.html#Multimedia
So you can use GLSurfaceView and apply the filters using fragment shaders.
You can find some good examples here. http://bigflake.com/mediacodec/
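To make the fragment-shader approach concrete, here is a minimal sketch of a grayscale filter you could plug into a GLSurfaceView renderer, assuming the decoded frames arrive through a SurfaceTexture bound to a GL_TEXTURE_EXTERNAL_OES texture (as in the bigflake samples). The class, attribute, and uniform names are illustrative; program compilation, linking, and the quad geometry are the usual GLES20 boilerplate from those samples.

    // Hypothetical shader pair for a GLSurfaceView.Renderer drawing video frames
    // that arrive via a SurfaceTexture (GL_TEXTURE_EXTERNAL_OES).
    public final class GrayscaleFilter {
        public static final String VERTEX_SHADER =
                "attribute vec4 aPosition;\n" +
                "attribute vec4 aTexCoord;\n" +
                "uniform mat4 uTexMatrix;\n" +      // from SurfaceTexture.getTransformMatrix()
                "varying vec2 vTexCoord;\n" +
                "void main() {\n" +
                "    gl_Position = aPosition;\n" +
                "    vTexCoord = (uTexMatrix * aTexCoord).xy;\n" +
                "}\n";

        public static final String FRAGMENT_SHADER =
                "#extension GL_OES_EGL_image_external : require\n" +
                "precision mediump float;\n" +
                "uniform samplerExternalOES uTexture;\n" +   // the decoded video frame
                "varying vec2 vTexCoord;\n" +
                "void main() {\n" +
                "    vec4 color = texture2D(uTexture, vTexCoord);\n" +
                "    float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));\n" +
                "    gl_FragColor = vec4(vec3(gray), color.a);\n" +
                "}\n";
    }

Swapping the fragment shader is all it takes to change the filter; the same renderer can also draw into a MediaCodec input Surface to save the filtered video on API 18+.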

Another option is the ExoPlayerFilter library; it will do most of the work for you, but in order to merge the filtered layer back into the video file you have to do some extra work yourself.
Here is the link: ExoPlayerFilter
You have to use ExoPlayer for this, but if you follow their instructions you'll be able to do the task. Ping me if something comes up.

Related

Is it possible to render two video files on one SurfaceView for blending on Android?

Is it possible to render two video streams on one SurfaceView for blending?
I want to make an application that renders two videos, blended together, onto the same SurfaceView and then saves the result as a video file.
If that's impossible, would it work to render the two videos on two SurfaceViews, blend them, and save the result as one video file?
Please help me.
Thank you for reading.
No, that's not possible. You'll need to use multiple SurfaceTextures instead, one per video decoder, and render all the textures into one view using OpenGL.
See https://source.android.com/devices/graphics/architecture.html for more explanations on how this works; in particular, each Surface can only have one producer and one consumer.
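A minimal sketch of that setup, assuming two MediaPlayer instances as the decoders (the class and field names are illustrative); the actual blending would happen in the draw call with a fragment shader that samples both external textures.

    import android.graphics.SurfaceTexture;
    import android.media.MediaPlayer;
    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;
    import android.opengl.GLSurfaceView;
    import android.view.Surface;
    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;

    // Illustrative renderer: two decoders -> two SurfaceTextures -> two external
    // GL textures, blended in a single onDrawFrame pass (shader code omitted).
    public class DualVideoRenderer implements GLSurfaceView.Renderer {
        private final int[] texIds = new int[2];
        private final SurfaceTexture[] videoTextures = new SurfaceTexture[2];
        private final MediaPlayer[] players = new MediaPlayer[2];

        @Override
        public void onSurfaceCreated(GL10 unused, EGLConfig config) {
            GLES20.glGenTextures(2, texIds, 0);
            for (int i = 0; i < 2; i++) {
                GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texIds[i]);
                videoTextures[i] = new SurfaceTexture(texIds[i]);
                // In real code, set an OnFrameAvailableListener and requestRender().

                players[i] = new MediaPlayer();                       // one decoder per stream
                players[i].setSurface(new Surface(videoTextures[i])); // decoder output -> texture
                // players[i].setDataSource(...); players[i].prepare(); players[i].start();
            }
        }

        @Override
        public void onSurfaceChanged(GL10 unused, int width, int height) {
            GLES20.glViewport(0, 0, width, height);
        }

        @Override
        public void onDrawFrame(GL10 unused) {
            videoTextures[0].updateTexImage();  // latch the newest frame from each decoder
            videoTextures[1].updateTexImage();
            // Draw a full-screen quad with a fragment shader that samples both
            // samplerExternalOES uniforms and blends them, e.g. mix(colorA, colorB, 0.5).
        }
    }

Each decoder writes only to its own Surface, which is what keeps the one-producer-per-Surface rule from the graphics architecture doc satisfied.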

Can the MediaCodec SDK be used to add an overlay to video in Android?

Say you have a 20 second video (perhaps taken with the device camera) and you want to add an overlay into the video.
(The overlay would simply be a normal raster image, i.e. an Android Image (doc).)
You want to create a new video, with the overlay as part of the video image, and save the video.
In fact, can MediaCodec SDK be used to do this job?
https://developer.android.com/reference/android/media/MediaCodec
In the past, you would usually use FFMPEG for such a problem, but that is a mess and slow.
Is MediaCodec possible here?
Since it is "new" I just can't find any information on this.....
Yes, this is possible using MediaCodec.
For a start, take a look at the DecodeEditEncode example from here
That example shows how to resize a video using an OpenGL ES shader. What you want to do is render your overlay over the video, also using an OpenGL ES shader.
Another good source for examples on MediaCodec can be found here
Here you can find some examples on how to use basic rendering techniques. Look at the Hardware scaler exerciser.
When you have the video part up and running, that is probably where the actual struggle starts, since there is no standard way to render text in OpenGL ES. I'd probably just draw the text to a Canvas and turn it into a texture, as sketched below, though that is probably slow.
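A minimal sketch of the Canvas-to-texture idea, assuming a GL context is already current on the calling thread; the class name, bitmap size, text position, and paint settings are placeholders.

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.opengl.GLES20;
    import android.opengl.GLUtils;

    // Draws a string into a Bitmap via Canvas, then uploads it as a GLES texture.
    // The texture can then be composited over the video frames by the overlay shader.
    public class TextOverlayTexture {
        public static int createTextTexture(String text) {
            // ARGB_8888 bitmaps start fully transparent, so no explicit clear is needed.
            Bitmap bitmap = Bitmap.createBitmap(512, 128, Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(bitmap);

            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setColor(Color.WHITE);
            paint.setTextSize(48f);
            canvas.drawText(text, 16f, 80f, paint);

            int[] texId = new int[1];
            GLES20.glGenTextures(1, texId, 0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId[0]);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);  // upload the Canvas output

            bitmap.recycle();  // pixel data now lives on the GPU
            return texId[0];
        }
    }

The returned texture can then be drawn as a second textured quad on top of each video frame, with GL blending enabled.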
If you have a static overlay, like a watermark, you could create it beforehand and ship as a resource.

Android - Capturing video frame from GLSurfaceView and SurfaceView

To play media using Android MediaPlayer or MediaCodec, most of the time you use SurfaceView or GLSurfaceView. (There is another way to achieve this using TextureView, but let's not talk about it here, since it's a slightly different type of view.)
And as far as I know, capturing the video frame from a SurfaceView is not possible - you don't have access to the hardware overlay.
How about GLSurfaceView? Since we have access to the YUV pixels (we do, right?), is it possible?
Can anyone point me to some sample code that does it?
I don't think the explanation below can work, because it assumes the color format is RGBA, whereas in the case above I think it's YUV.
When using GLES20.glReadPixels on android, the data returned by it is not exactly the same with the living preview
Thank you and have a great day.
You are correct in that you cannot read back from a Surface. It's the producer side of a producer-consumer pair. GLSurfaceView is just a bunch of code wrapped around a SurfaceView that (in theory) makes working with GLES easier.
So you have to send the preview somewhere else. One approach is to send it to a SurfaceTexture, which converts every frame sent to its Surface into a GLES texture. The texture can then be rendered twice, once for display and once to an offscreen pbuffer that can be saved as a bitmap (just like this question).
I'm not sure why you don't want to talk about TextureView. It's a View that uses SurfaceTexture under the hood, and it provides a getBitmap() call that does exactly what you want.
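For the TextureView route, a minimal sketch (the class and method names below are illustrative; the listener and getBitmap() calls are the standard TextureView/MediaPlayer API). Error handling and player release are omitted.

    import android.graphics.Bitmap;
    import android.graphics.SurfaceTexture;
    import android.media.MediaPlayer;
    import android.view.Surface;
    import android.view.TextureView;

    // Plays video into a TextureView, then snapshots the frame currently on screen.
    public class TextureViewCapture {

        public static void attachPlayer(TextureView textureView, final MediaPlayer player) {
            textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
                @Override
                public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
                    player.setSurface(new Surface(st));  // route decoder output into the view
                    player.start();
                }
                @Override
                public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
                @Override
                public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
                @Override
                public void onSurfaceTextureUpdated(SurfaceTexture st) {}
            });
        }

        public static Bitmap captureCurrentFrame(TextureView textureView) {
            // Returns null if the TextureView's SurfaceTexture is not yet available;
            // otherwise an RGBA snapshot of the latest latched frame.
            return textureView.getBitmap();
        }
    }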

First steps in creating a chroma key effect using android camera

I'd like to create a chroma key effect using the android camera. I don't need a step by step, but I'd like to know the best way to hijack the android camera and apply the filters. I've checked out the API and haven't found anything super definitive on how to manipulate data coming from the camera. At first I looked into using a surface texture, but I'm not fully aware how that helps or how to even use it. Then I checked out using a GLSurfaceView, which may be the right direction, but not really sure.
Also, to add to my question, how would I handle both preview and saving of the image? Would I process the image at minimum, twice? Once while previewing and once while saving? I think that's probably the best solution.
Lastly, would it make sense to create a C/++ wrapper to handle the processing to optimize speed?
Any help at all would be greatly appreciated. A link to some examples would also be greatly appreciated.
Thanks.
The only realistic option is to use OpenGL ES with a fragment shader (it will require at least OpenGL ES 2.0) and do the chroma key effect on the GPU. The shader itself is quite easy to write (google it); a sketch follows below.
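For illustration only, a minimal green-screen fragment shader, assuming the camera frame is already available as an RGB(A) texture; the class name, uniform names, key color, and threshold are placeholders you would tune.

    // Illustrative GLES 2.0 fragment shader for green-screen keying.
    public final class ChromaKeyShader {
        public static final String FRAGMENT_SHADER =
                "precision mediump float;\n" +
                "uniform sampler2D uTexture;\n" +
                "uniform vec3 uKeyColor;\n" +      // e.g. vec3(0.0, 1.0, 0.0) for green
                "uniform float uThreshold;\n" +    // e.g. 0.4
                "varying vec2 vTexCoord;\n" +
                "void main() {\n" +
                "    vec4 color = texture2D(uTexture, vTexCoord);\n" +
                "    float dist = distance(color.rgb, uKeyColor);\n" +
                "    float alpha = smoothstep(uThreshold, uThreshold + 0.1, dist);\n" +
                "    gl_FragColor = vec4(color.rgb, alpha);\n" +  // keyed pixels become transparent
                "}\n";
    }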
But to do that, you need to display the camera preview with a callback. You will have to implement Camera.PreviewCallback, create a buffer for the image data and use the setPreviewCallbackWithBuffer method. You can get the basic idea from my answer to a similar question. Note that there is a significant performance problem with this custom camera preview, but it might work on hardware that supports ES 2.0.
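A minimal sketch of that callback wiring, using the (now deprecated) android.hardware.Camera API this answer refers to; the class and method names are illustrative, and the buffer size is the standard NV21 formula width * height * 3 / 2.

    import android.hardware.Camera;

    // Illustrative setup for buffered preview callbacks. Each frame arrives as NV21 bytes.
    public class PreviewSetup {
        @SuppressWarnings("deprecation")
        public static void startPreviewWithBuffer(Camera camera, int width, int height) {
            // NV21: full-resolution luma plane + quarter-resolution interleaved chroma.
            byte[] buffer = new byte[width * height * 3 / 2];
            camera.addCallbackBuffer(buffer);

            camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
                @Override
                public void onPreviewFrame(byte[] data, Camera cam) {
                    // data holds one NV21 frame: hand it to the GL renderer / shader here.
                    cam.addCallbackBuffer(data);  // return the buffer for reuse
                }
            });
            // A preview target (setPreviewTexture or setPreviewDisplay) must also be
            // set before startPreview(), or most devices will not deliver frames.
            camera.startPreview();
        }
    }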
To display the preview with OpenGL, you will need to extend GLSurfaceView and also implement GLSurfaceView.Renderer. Then you upload the camera preview frame as a texture with glTexImage2D onto a simple rectangle, and the rest will be handled by the shaders. See how to use shaders in ES here, or if you have no experience with shaders, this tutorial might be a good start.
To the other question: you could save the current image from the preview, but the preview has lower resolution than a taken picture, so you will probably want to take a picture and then process it separately (you could use the same shader for it).
As for the C++, it's a lot of additional effort with questionable output. But it can improve performance if done right. Try to check this article, it's on a similar topic, it describes how to use NDK to process camera preview and display it in openGL. But if you were thinking about doing the chroma key effect in C++, it would be significantly slower than shaders.
You can check this library: https://github.com/cyberagent/android-gpuimage.
It provides a framework to do image processing on the device's GPU using GL shaders.
There is also a sample showing how to use the library with a camera.
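A minimal sketch of how the library is typically used for a still image (class and method names follow the library's README; double-check them against the release you depend on, since the package layout has changed over time):

    import android.content.Context;
    import android.graphics.Bitmap;
    import jp.co.cyberagent.android.gpuimage.GPUImage;
    import jp.co.cyberagent.android.gpuimage.GPUImageGrayscaleFilter;
    // Note: newer releases move the filters into a .filter subpackage.

    // Applies a GL-shader-based filter to a Bitmap on the GPU using android-gpuimage.
    public class GpuImageExample {
        public static Bitmap applyGrayscale(Context context, Bitmap input) {
            GPUImage gpuImage = new GPUImage(context);
            gpuImage.setImage(input);                           // source bitmap
            gpuImage.setFilter(new GPUImageGrayscaleFilter());  // any GPUImageFilter subclass
            return gpuImage.getBitmapWithFilterApplied();       // rendered result
        }
    }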
There is a Chroma-Key-Project on Google Code: http://code.google.com/p/chroma-key-project/ It includes a way to upload pictures that are taken using chroma-key:
After an exhaustive search online, I have failed to find any open source projects working with Chroma-keying for Android devices. The aim of this project is to provide a useful Chroma-key library, that will make it easy to implement applications and games that can take pictures in front of a Green or Blue screen, and apply the pictures on a chosen background. Furthermore, the application will also allow the user to upload the picture using Intent.

Simulating an Android Camera

I am testing imaging algorithms using a android phone's camera as input, and need a way to consistently test the algorithms. Ideally I want to take a pre-recorded video feed and have the phone 'pretend' that the video feed is a live video from the camera.
My ideal solution would be where the app running the algorithms has no knowledge that the video is pre-recorded. I do not want to load the video file directly into the app, but rather read it in as sensor data if at all possible.
Is this approach possible? If so, any pointers in the right direction would be extremely helpful, as Google searches have failed me so far
Thanks!
Edit: To clarify, my understanding is that the Camera class uses a camera service to read video from the hardware. Rather than do something application-side, I would like to create a custom camera service that reads from a video file instead of the hardware. Is that doable?
When you are doing processing on a live android video feed you will need to build your own custom camera application that feeds you individual frames via the PreviewCallback interface that Android provides.
Now, simulating this would be a little tricky, seeing as the preview frames will generally arrive in the NV21 format. If you are using a pre-recorded video, I don't think there is any clear way of reading frames one by one unless you try the getFrameAtTime method, which will give you Bitmaps in an entirely different format.
This leads me to suggest that you could probably test with these Bitmaps (though I'm really not sure what you are trying to do here) from the getFrameAtTime method. In order for this code to then work on a live camera preview, you would need to convert your NV21 frames from the PreviewCallback interface into the same format as the Bitmaps from getFrameAtTime, or adapt your algorithm to process NV21 frames directly. NV21 is a pretty neat format, presenting color and luminance data separately, but it can be tricky to use.
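A minimal sketch of the getFrameAtTime approach via MediaMetadataRetriever (the class name, file path, and sampling interval are placeholders); note that OPTION_CLOSEST decodes the frame nearest each timestamp and can be slow, while the default option only returns sync frames.

    import android.graphics.Bitmap;
    import android.media.MediaMetadataRetriever;
    import java.util.ArrayList;
    import java.util.List;

    // Extracts one Bitmap per intervalUs microseconds from a video file,
    // as a stand-in for live preview frames.
    public class RecordedFrameSource {
        public static List<Bitmap> extractFrames(String videoPath, long intervalUs) {
            MediaMetadataRetriever retriever = new MediaMetadataRetriever();
            retriever.setDataSource(videoPath);

            long durationMs = Long.parseLong(
                    retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION));

            List<Bitmap> frames = new ArrayList<>();
            for (long t = 0; t < durationMs * 1000; t += intervalUs) {
                Bitmap frame = retriever.getFrameAtTime(t, MediaMetadataRetriever.OPTION_CLOSEST);
                if (frame != null) {
                    frames.add(frame);
                }
            }
            try {
                retriever.release();
            } catch (Exception ignored) {
                // release() declares a checked IOException on newer SDK levels.
            }
            return frames;
        }
    }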
