I am trying to figure out the right way to process video (from a file or the camera) frame by frame on Android.
I need to get each frame, convert it to RGB, process each color channel separately, and send it to the screen.
Has anyone done this? How could it be done (preferably without any native code)?
Look at this post where I suggest using OpenCV for Android. With OpenCV you can grab a video frame in RGB format and process each of the components individually in real-time.
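To make that concrete, here is a minimal sketch of per-channel processing with the OpenCV Android SDK's camera callback. It assumes the standard CameraBridgeViewBase.CvCameraViewListener2 interface from OpenCV4Android; the class name and the processing step itself are placeholders.

```java
// Sketch: split an RGBA camera frame into channels, process each, recombine.
// Assumes the OpenCV Android SDK is initialized and a CameraBridgeViewBase
// in the layout has this class set as its listener.
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.core.Core;
import org.opencv.core.Mat;

import java.util.ArrayList;
import java.util.List;

public class FrameProcessor implements CameraBridgeViewBase.CvCameraViewListener2 {
    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();           // frame delivered already in RGBA
        List<Mat> channels = new ArrayList<>();
        Core.split(rgba, channels);             // channels: R, G, B, A as separate Mats
        // ... process channels.get(0), channels.get(1), channels.get(2) here ...
        Core.merge(channels, rgba);             // recombine for display
        return rgba;                            // returned Mat is drawn to the screen
    }

    @Override public void onCameraViewStarted(int width, int height) {}
    @Override public void onCameraViewStopped() {}
}
```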
What I am trying to do
I made a video player app by using the source code from the following link.
https://github.com/kylelo/VideoPlayerGH
I want to implement some methods to calculate the complexity for each frame, then I can do some image processing after the calculation.
So the first step is to get the bitmap or pixel values from each video frame, to analyze it before it is rendered on the screen. I have used glReadPixels() to read the pixel values into a new ByteBuffer in the draw() function. I can get the RGBA values successfully, but the frame rate dropped from 60 fps to 20 fps on my device (HTC Butterfly S), even though I have not done any image processing on it yet...
My question is
Is there any other, more efficient way to accomplish this task? Working at other layers of the Android system is fine too.
I really need some hints on this...
Because I am new to Android, if I have any concept wrong, please tell me! I really appreciate everyone's help!
My goal is as follows:
I have to read in a video that is stored on the SD card, process it frame by frame, and then store it in a new file on the SD card. The problem is that OpenCV4Android does not come with a video encoder/decoder, as it does not include FFmpeg. Moreover, using JavaCV for processing the images is not an option for me, as my processing code is already written in native OpenCV and I access it through the JNI. I did a lot of reading here on Stack Overflow and across Google, but I did not find a solution.
JavaCV allows me to read a video frame by frame and also store it frame by frame. However, I am not able to convert the frames to plain OpenCV Mat objects that can be processed by the usual OpenCV4Android API.
I read about JCodec as a library for encoding/decoding videos. Would JCodec allow me to fulfill my task? If yes, do you know of any examples?
Compiling FFmpeg for Android would also be an option. However, I think it is a bit overkill to write a FrameGrabber and FrameRecorder myself. There must be some solution besides the JavaCV one.
Starting with API 18 there are the MediaCodec and the MediaMuxer in Android. Perhaps they can help me?
So let's come to my requirements. I am currently targeting Android API 19, so every function I need is available. The most important requirement for me is the following:
If I process a video of 10 seconds at 30 FPS, the result should also be a video of 10 seconds at 30 FPS. So I want an exact copy of the video, but with some drawing added to each frame by OpenCV. Using OpenCV from Python, for example, this task can be done with the VideoWriter and VideoCapture classes. I need the same functionality on Android.
I am surprised that no one has had this problem so far (or that I did not find it).
Hopefully, I explained everything.
@Alekz: Yes, I found a solution. Even though it adds some overhead, it was sufficient for my research project.
The solution uses OpenCV4Android together with JavaCV. I first use JavaCV to read in the frames one by one. Then I convert each frame to the format used by OpenCV and process it. Finally, I either display the processed frame directly on the screen or convert it back to JavaCV and store it in a new file. Below you find the detailed pipeline through the different frame formats involved:
1. Grab a three-channel IplImage frame from the video with the FrameGrabber
2. Convert the grabbed three-channel frame to a four-channel IplImage frame
3. Convert the four-channel frame to an Android Bitmap image
4. Convert the Android Bitmap image to an OpenCV Mat frame
5. Process the OpenCV Mat frame
6. Convert the processed Mat frame back to an Android Bitmap image
7. Convert the Android Bitmap image to a four-channel IplImage frame
8. Convert the four-channel IplImage frame to a three-channel IplImage
9. Store the three-channel IplImage as a frame in the video with the FrameRecorder
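The steps above can be sketched roughly as follows. This assumes an older JavaCV release (the com.googlecode.javacv packages, where FrameGrabber.grab() returns an IplImage directly); newer JavaCV versions return a Frame and ship converter classes instead, so package and method names may need adjusting. The file paths are examples.

```java
// Sketch of the grab -> convert -> process -> record pipeline described above.
// Package names are from the older JavaCV releases; adjust for newer versions.
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;

import android.graphics.Bitmap;
import com.googlecode.javacv.FFmpegFrameGrabber;
import com.googlecode.javacv.FFmpegFrameRecorder;
import org.opencv.android.Utils;
import org.opencv.core.Mat;

public class VideoPipeline {
    public static void process(String inPath, String outPath) throws Exception {
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(inPath);
        grabber.start();

        FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
                outPath, grabber.getImageWidth(), grabber.getImageHeight());
        recorder.setFrameRate(grabber.getFrameRate());   // keep 30 FPS in -> 30 FPS out
        recorder.start();

        IplImage bgr;
        while ((bgr = grabber.grab()) != null) {
            // Steps 1-2: three-channel frame -> four-channel frame
            IplImage bgra = IplImage.create(bgr.width(), bgr.height(), IPL_DEPTH_8U, 4);
            cvCvtColor(bgr, bgra, CV_BGR2BGRA);

            // Step 3: four-channel IplImage -> Android Bitmap
            Bitmap bitmap = Bitmap.createBitmap(
                    bgr.width(), bgr.height(), Bitmap.Config.ARGB_8888);
            bitmap.copyPixelsFromBuffer(bgra.getByteBuffer());

            // Steps 4-5: Bitmap -> OpenCV Mat, then process
            Mat mat = new Mat();
            Utils.bitmapToMat(bitmap, mat);
            // ... OpenCV processing on mat ...

            // Steps 6-8: Mat -> Bitmap -> four-channel -> three-channel IplImage
            Utils.matToBitmap(mat, bitmap);
            bitmap.copyPixelsToBuffer(bgra.getByteBuffer());
            cvCvtColor(bgra, bgr, CV_BGRA2BGR);

            // Step 9: write the frame back out
            recorder.record(bgr);
        }
        recorder.stop();
        grabber.stop();
    }
}
```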
You can open a video file in OpenCV with VideoCapture, and then use grab() to advance to the next frame (and retrieve() to decode it).
Is there a specific video codec you need support for?
OpenCV grab() documentation
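A minimal frame-reading loop with OpenCV's Java bindings looks like this. The package names here are from OpenCV 3+ (in 2.4 VideoCapture lived in org.opencv.highgui), and note that on Android, file-decoding support depends on the codecs OpenCV was built with.

```java
// Sketch: read a video file frame by frame with OpenCV's VideoCapture.
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

public class FrameReader {
    public static void readAll(String path) {
        VideoCapture cap = new VideoCapture(path);
        Mat frame = new Mat();
        while (cap.grab()) {         // advance to the next frame; false at end of file
            cap.retrieve(frame);     // decode the grabbed frame into the Mat
            // ... process frame ...
        }
        cap.release();
    }
}
```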
I am testing imaging algorithms using a android phone's camera as input, and need a way to consistently test the algorithms. Ideally I want to take a pre-recorded video feed and have the phone 'pretend' that the video feed is a live video from the camera.
My ideal solution would be where the app running the algorithms has no knowledge that the video is pre-recorded. I do not want to load the video file directly into the app, but rather read it in as sensor data if at all possible.
Is this approach possible? If so, any pointers in the right direction would be extremely helpful, as my Google searches have failed me so far.
Thanks!
Edit: To clarify, my understanding is that the Camera class uses a camera service to read video from the hardware. Rather than do something application-side, I would like to create a custom camera service that reads from a video file instead of the hardware. Is that doable?
When you are doing processing on a live Android video feed, you will need to build your own custom camera application that feeds you individual frames via the PreviewCallback interface that Android provides.
Now, simulating this would be a little tricky, seeing as the preview frames will generally arrive in the NV21 format. If you are using a pre-recorded video, I don't think there is any clean way of reading frames one by one, unless you try the getFrameAtTime method of MediaMetadataRetriever, which gives you Bitmaps in an entirely different format.
This leads me to suggest that you could probably test with the Bitmaps from getFrameAtTime (though I'm really not sure what you are trying to do here). For that code to then work on a live camera preview, you would either need to convert your NV21 frames from the PreviewCallback interface into the same format as the Bitmaps from getFrameAtTime, or adapt your algorithm to process NV21 frames directly. NV21 is a pretty neat format, presenting luminance and color data separately, but it can be tricky to use.
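Bridging the two formats usually means converting NV21 to ARGB yourself. Below is a plain-Java port of the widely circulated integer YUV-to-RGB conversion; the class name Nv21 is illustrative, and exact rounding may differ slightly from Android's own converters, but it lets the same algorithm run on camera frames and on Bitmap pixels alike.

```java
// Sketch: convert an NV21 preview frame (full-resolution Y plane followed by
// interleaved VU at half resolution) into packed ARGB_8888 pixels.
public class Nv21 {
    public static int[] toArgb(byte[] nv21, int width, int height) {
        int[] argb = new int[width * height];
        int frameSize = width * height;
        for (int j = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width;   // start of this row's VU data
            int u = 0, v = 0;
            for (int i = 0; i < width; i++) {
                int y = (0xff & nv21[j * width + i]) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) {                   // each VU pair covers two pixels
                    v = (0xff & nv21[uvp++]) - 128;
                    u = (0xff & nv21[uvp++]) - 128;
                }
                int y1192 = 1192 * y;                 // fixed-point ITU-R coefficients
                int r = y1192 + 1634 * v;
                int g = y1192 - 833 * v - 400 * u;
                int b = y1192 + 2066 * u;
                r = Math.min(Math.max(r, 0), 262143); // clamp to 18-bit range
                g = Math.min(Math.max(g, 0), 262143);
                b = Math.min(Math.max(b, 0), 262143);
                argb[j * width + i] = 0xff000000
                        | ((r << 6) & 0xff0000)
                        | ((g >> 2) & 0xff00)
                        | ((b >> 10) & 0xff);
            }
        }
        return argb;
    }
}
```

From there, Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888) gives you the same kind of Bitmap that getFrameAtTime returns.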
I am a complete beginner with the Android OS.
I am writing a media player: I get frames from native code and display them from Java code as bitmaps. I convert the frames from byte arrays into Bitmaps and then draw them. Right now I am able to display one frame, but I am unable to display them continuously.
My code is as below
canvas.drawBitmap(mBitmap, 0, 0, null);
But when I try to display the next frame, it still shows the same previous frame and does not change. Do I need to clear the bitmap or something? Or is there any other way to draw the rendered frames?
Thanks for any help
On the Android developers page, you can see the explanation for frame animation.
There is something similar in this repo.
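Frame animation (AnimationDrawable) fits fixed drawable resources; for frames decoded at runtime, the usual cause of "the same frame keeps showing" is that onDraw() is never triggered again. A minimal sketch of a custom View that invalidates itself when a new frame arrives (class and method names are illustrative):

```java
// Sketch: a View that redraws whenever the decoder hands it a new Bitmap.
// The key is postInvalidate(), which schedules another onDraw() pass.
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

public class VideoFrameView extends View {
    private Bitmap mBitmap;

    public VideoFrameView(Context context) {
        super(context);
    }

    // Call this from the decoder thread for every new frame.
    public void setFrame(Bitmap frame) {
        mBitmap = frame;
        postInvalidate();   // safe off the UI thread; invalidate() is UI-thread only
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (mBitmap != null) {
            canvas.drawBitmap(mBitmap, 0, 0, null);
        }
    }
}
```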
Given a video file, is there any way to access single pixel values?
I have two cases where I would like to access the pixels:
From the video camera
From a video file
What I need is to get the pixel information at a certain position, with something like getPixel(posX, posY) returning the RGB values.
I have an algorithm that detects blobs (homogeneous regions) in an image, and I would like to run it both in real time using the Android video camera and offline by analyzing a video file.
Yes, but you'll need to do some work.
Extract a video frame from the source file with a tool such as FFmpeg. The result will be a JPEG or other such image file.
Use an image processing tool, like ImageMagick, to extract the information for a pixel.
Presto!
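The two steps above might look like this on the command line. The file names, frame index, and pixel coordinates are examples; FFmpeg's select filter picks a frame by number, and ImageMagick's pixel escape prints the color at a coordinate.

```shell
# 1. Extract frame number 100 of the video as a JPEG with FFmpeg:
ffmpeg -i input.mp4 -vf "select=eq(n\,100)" -vframes 1 frame100.jpg

# 2. Print the RGB value of the pixel at (posX, posY) = (50, 120) with ImageMagick:
convert frame100.jpg -format "%[pixel:p{50,120}]" info:
```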