I am a complete beginner to the Android OS.
I am writing a media player: I get frames from native code and display them from Java code as bitmaps. I convert each frame from a byte array into a bitmap and then draw it. Right now I am able to display one frame, but I am unable to display them continuously.
My code is as below:
canvas.drawBitmap(mBitmap, 0, 0, null);
But when I try to display the next frame, it still shows the previous frame and does not change. Do I need to clear the bitmap or something? Or is there any other way to draw the rendered frames?
Thanks for any help.
On the Android developers page, you can see the explanation for frame animation. There is something similar in this repo.
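Frame animation aside, the drawing code from the question usually just needs the view to be invalidated after each new frame arrives. A minimal sketch of that, where FrameView and setFrame() are made-up names for illustration:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.View;

    public class FrameView extends View {
        private Bitmap mBitmap;

        public FrameView(Context context) {
            super(context);
        }

        // Called from the decoding thread whenever a new frame is ready.
        public void setFrame(Bitmap frame) {
            mBitmap = frame;
            postInvalidate(); // schedules a redraw on the UI thread
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            if (mBitmap != null) {
                canvas.drawBitmap(mBitmap, 0, 0, null);
            }
        }
    }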
What I'm trying to do
I made a video player app by using the source code from the following link.
https://github.com/kylelo/VideoPlayerGH
I want to implement some methods to calculate the complexity of each frame, so that I can do some image processing after the calculation.
So the first step is to get the bitmap or pixel values from the video frame to analyze it before it renders on the screen. I have used glReadPixels() to copy the pixel values into a new ByteBuffer in the draw() function. I can get the RGBA values successfully, but the frame rate dropped from 60 fps to 20 fps on my device (HTC Butterfly S), and I haven't even done any image processing on it yet...
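For reference, a minimal sketch of the readback described above, as it would look inside a GLES 2.0 onDrawFrame(); width and height stand for the surface dimensions:

    ByteBuffer pixelBuf = ByteBuffer.allocateDirect(width * height * 4);
    pixelBuf.order(ByteOrder.nativeOrder());
    // Copies the current framebuffer into pixelBuf as RGBA bytes. This call
    // stalls the GPU pipeline, which is the usual cause of the frame rate drop.
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);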
My question is
Is there any other, more efficient way to accomplish this task? Even working in other layers of the Android system is fine.
I really need some hints on it...
Because I am new to Android, if I have any concept wrong, please tell me! I really appreciate everyone's help!
What I'm working on is an application that can get frames and then modify them (like moving pixels within the frame). After that it shows the edited frames on the screen at good speed; GLSurfaceView will do that part, I think, and MediaCodec will help with getting the frames.
I'm working through a good MediaCodec example, ExtractMpegFramesTest from http://bigflake.com/mediacodec/. Can I directly edit the received frames with GLES code and show them on the screen without getting a Bitmap?
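For context, a hedged sketch of the decode-to-Surface setup that ExtractMpegFramesTest builds on: the decoder renders into a SurfaceTexture, and the resulting GLES texture can be edited with fragment shaders and drawn straight to the screen, with no Bitmap involved (oesTextureId, mimeType, and format are placeholders):

    SurfaceTexture surfaceTexture = new SurfaceTexture(oesTextureId); // a GL_TEXTURE_EXTERNAL_OES texture id
    Surface outputSurface = new Surface(surfaceTexture);
    MediaCodec decoder = MediaCodec.createDecoderByType(mimeType);
    decoder.configure(format, outputSurface, null, 0); // decoded frames go to the Surface
    decoder.start();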
I'm trying to create an app where I am able to add filters to a recorded video. Basically, I want to replicate the functionality that exists in Instagram video or Viddy.
I've done research and I can't piece it all together. I've looked into using GLSurfaceView to play the recorded video, and I know I could use the NDK to do the pixel manipulation and send it back to the SurfaceView or save it somehow. The problem is, I don't know how to get at the pixel data, because there seems to be no function to access it. This idea came from the Camera callback onPreviewFrame(): that function delivers a byte array, allowing me to manipulate the pixels and display them.
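For reference, the onPreviewFrame() callback mentioned above looks like this, where camera is an open android.hardware.Camera and the processing step is a placeholder:

    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // data holds the preview frame (NV21 by default); the pixels can
            // be manipulated here before being displayed or saved.
        }
    });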
Another idea is to use GLSurfaceView and use OpenGL to render the filter. GLSurfaceView has a renderer you can set, but I'm not very familiar with OpenGL. Again, this goes back to actually getting the pixels of each video frame. I also read about ripping each frame out as a texture and then manipulating the texture in OpenGL, but the answers I've come across are not very detailed.
Lastly, I've looked into JavaCV, trying to use FFmpegFrameGrabber, but I haven't had much luck there either. I wanted to just grab one frame, but when I try to write the frame's ByteBuffer to an ImageView, I get a "buffer not large enough for pixels" error.
Any guidance would be great.
Since Android 4.3 you can use a Surface as the input to your encoder: http://developer.android.com/about/versions/android-4.3.html#Multimedia
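A hedged sketch of that encoder setup; width, height, bitRate, and frameRate are placeholders:

    MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, frameRate);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface inputSurface = encoder.createInputSurface(); // render filtered GLES output here
    encoder.start();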
So you can use GLSurfaceView and apply the filters using fragment shaders.
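As an illustrative sketch (not code from the linked examples), a simple grayscale filter for frames coming from a SurfaceTexture could be written as a fragment shader, embedded as a Java string the way GLSurfaceView renderers usually do:

    private static final String GRAYSCALE_FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTextureCoord;\n" +
            "uniform samplerExternalOES sTexture;\n" +
            "void main() {\n" +
            // Weighted sum of the RGB channels gives the luminance for grayscale.
            "    float y = dot(texture2D(sTexture, vTextureCoord).rgb, vec3(0.299, 0.587, 0.114));\n" +
            "    gl_FragColor = vec4(y, y, y, 1.0);\n" +
            "}\n";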
You can find some good examples here: http://bigflake.com/mediacodec/
It is good to use the ExoPlayer filter library; it will do your work, but in order to merge the filtered layer with the video you have to do some extra work.
The link for the ExoPlayer filter is here: ExoplayerFilter
You have to use ExoPlayer for this, but if you follow their instructions you'll be able to do the task. Ping me if something comes up.
I am trying to figure out the right way to approach processing video (from a file or the camera) frame by frame on Android.
I need to get each frame, convert it to RGB, process it (each color separately), and send it to the screen.
Has anyone done this? How could it be done (preferably without any native code processing)?
Look at this post where I suggest using OpenCV for Android. With OpenCV you can grab a video frame in RGB format and process each of the components individually in real-time.
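A hedged sketch of what that looks like with OpenCV's Android camera callback; variable names are illustrative:

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.android.CameraBridgeViewBase;
    import org.opencv.core.Core;
    import org.opencv.core.Mat;

    // Inside a class implementing CameraBridgeViewBase.CvCameraViewListener2:
    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();      // the frame in RGBA format
        List<Mat> channels = new ArrayList<>();
        Core.split(rgba, channels);        // separate R, G, B, A planes
        // ... process channels.get(0), .get(1), .get(2) individually here ...
        Core.merge(channels, rgba);        // recombine the planes
        return rgba;                       // the returned Mat is rendered on screen
    }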
I am able to display an image using OpenGL ES in the Android NDK. Now I want to display two or four images using multithreading in OpenGL ES through the Android NDK.
I have searched extensively for this and came across the claim that a SurfaceView can only have one picture. So what is the way to display multiple pictures on a GLSurfaceView?
Can anybody please tell me how it can be done?
Thanks in advance.
It seems there are several issues here.
First of all, if you are trying to display "pictures" through OpenGL (ES), you mean textures (the OpenGL-readable format for "pictures" or "images"), right? If you are not sure what I am talking about, find a tutorial about displaying images using OpenGL ES. Learn how to display just one and you will be able to display four.
a Surfaceview can only have one picture
You may have misunderstood something. A GLSurfaceView can draw as many textures as your video memory can handle.
Basically, to display your textures, you will draw two or four quads and bind the appropriate texture to each of them.
About the multithreading: I assume you gather your pictures asynchronously. Just wait for a complete picture and then, on the OpenGL thread, create a texture and bind it to a quad.
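To make that concrete, a minimal GLES 2.0 sketch: upload each picture once on the GL thread, then bind and draw one quad per texture in onDrawFrame(). bitmaps and drawQuadAt() are illustrative names, not real GLES calls:

    // One-time setup on the GL thread: create a texture per picture.
    int[] textureIds = new int[4];
    GLES20.glGenTextures(4, textureIds, 0);
    for (int i = 0; i < 4; i++) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIds[i]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmaps[i], 0); // upload the Bitmap
    }

    // Every frame, in onDrawFrame(): bind each texture and draw its quad.
    for (int i = 0; i < 4; i++) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIds[i]);
        drawQuadAt(i); // position the quad (e.g. via a uniform matrix), then glDrawArrays(...)
    }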