How to turn a MediaExtractor sample into a full image Bitmap?

I'm trying to make a video-compression algorithm, and one way I want to reduce size is by dropping the frame rate. The code below shows how I achieve this - basically I keep advancing to the next sample until the time difference is greater than the inverse of the desired frame rate.
if (!firstFrame) {
    // Skip samples until at least one desired-frame interval (in microseconds)
    // has elapsed since the last kept sample.
    while (!eof && extractor.getSampleTime() - prevSampleTime < 1000000 / frameRate) {
        eof = !extractor.advance();
    }
}
firstFrame = false;
prevSampleTime = extractor.getSampleTime();
However, the obvious problem with this is that dropping a frame means the next predicted (inter) frame is decoded against the wrong reference frame, resulting in a distorted video. Is there any way to extract the full image Bitmap at a particular frame? Basically I want to achieve something like this:
1. Video frames are extracted iteratively and unwanted frames are dropped
2. The remaining frames are converted into full Bitmaps
3. All Bitmaps are strung together to form the raw video
4. The raw video is compressed with AVC
I don't mind how long this process takes as it will be running in the background and the output will not be displayed immediately.

Since most video compression algorithms today use inter-frame prediction, some frames (usually most of them) can't be decoded on their own, i.e. without feeding the decoder the frames that precede them, as you noted in the question.
This means you must decode all frames; only then can you drop some of them before re-encoding.
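For example, here is a minimal sketch of that decode-everything approach using MediaCodec. It assumes extractor already has the video track selected, format is that track's MediaFormat, and frameRate is the target rate from your question; treat it as an outline, not a drop-in implementation:

MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
decoder.configure(format, null, null, 0);
decoder.start();

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
long frameIntervalUs = 1000000L / frameRate; // desired spacing between kept frames
long lastKeptUs = Long.MIN_VALUE / 2;        // so the first frame is always kept
boolean inputDone = false;
boolean outputDone = false;

while (!outputDone) {
    // Feed *every* sample to the decoder so inter frames have their references.
    if (!inputDone) {
        int inIndex = decoder.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            ByteBuffer inBuf = decoder.getInputBuffer(inIndex);
            int size = extractor.readSampleData(inBuf, 0);
            if (size < 0) {
                decoder.queueInputBuffer(inIndex, 0, 0, 0,
                        MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                inputDone = true;
            } else {
                decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                extractor.advance();
            }
        }
    }
    // Drop frames only *after* decoding, on the output side.
    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    if (outIndex >= 0) {
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            outputDone = true;
        } else if (info.presentationTimeUs - lastKeptUs >= frameIntervalUs) {
            lastKeptUs = info.presentationTimeUs;
            // Keep this frame: convert it to a Bitmap or feed it to the encoder here.
        }
        decoder.releaseOutputBuffer(outIndex, false);
    }
}
decoder.stop();
decoder.release();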

Related

How to access all Images in the ImageReader Queue using Android Camera2 API

In my project, I need to capture frames from the camera stream continuously. Here is the current code snippet I use.
To set up the ImageReader, I set maxImages to 20. That is, I expected that every time the callback is triggered, there would be 20 frames in the ImageReader queue.
imageReader = ImageReader.newInstance(
        optimumSize.getWidth(),
        optimumSize.getHeight(),
        ImageFormat.YUV_420_888,
        20
);
Then, to access each of these 20 frames, I used the following snippet.
imageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireNextImage();
        while (image != null) {
            // some processing here.....
            image.close();
            image = reader.acquireNextImage();
        }
        // image is always null at this point, so this check is redundant
        if (image != null) {
            image.close();
        }
    }
}, processingHandler);
The key obstacle is being able to access each of the 20 frames within one callback, for further image processing. However, the code above seems to have a problem: I can only access the latest image in the underlying queue. In fact, I only need to access a small patch (50 x 50 pixels), specified by the user, in each frame.
The reason for doing this is that I need 20 consecutive frames sampled at ~60 Hz. That seems very hard to achieve if only a single frame can be accessed per callback, which tops out at around 30 fps.
Any suggestions would be super welcome! Thanks!
Setting maxImages to 20 just means the queue will allow you to acquire 20 Images at the same time; it does not mean the onImageAvailable callback will fire only once 20 images are queued. The callback fires as soon as a single image is present.
Most camera devices run at 30fps max, so it's not surprising that's the speed you're seeing. Some cameras do have 60fps modes, but you have to explicitly switch to a CONTROL_AE_TARGET_FPS_RANGE of (60, 60) to get that, and only if the device's CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES values include that range.
60fps may also be resolution-limited (check the StreamConfigurationMap for minimum frame durations to find what resolutions can support 60fps, if you want to double-check).
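For instance, a minimal sketch of that check (cameraManager, cameraId, and previewBuilder are placeholders from your own setup; exception handling omitted):

CameraCharacteristics chars = cameraManager.getCameraCharacteristics(cameraId);
Range<Integer>[] fpsRanges =
        chars.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
Range<Integer> sixty = new Range<>(60, 60);
for (Range<Integer> range : fpsRanges) {
    if (range.equals(sixty)) {
        // Lock auto-exposure to a fixed 60fps range for this request.
        previewBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, sixty);
        break;
    }
}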

Use Camera2 to Preview and Process camera data

Question
I want to do something similar to what's done in the Camera2Basic sample, that is:
1. Previewing images from the camera using a TextureView
2. Processing images from the camera using an ImageReader
With a few differences regarding item 2:
- I'm only interested in the gray channel (brightness) of the images to be processed, and their dimensions should be around 1000 x 1000 pixels (not the highest resolution available)
- When an image to be processed is available, a generic process(Image) method is called instead of saving images to disk. What this method does is out of the scope of this question, but it takes around 50 ms to return
- The image data should be processed periodically (around 10 FPS, though speed is not critical) rather than only occasionally
How can I accomplish this using the Camera2 API?
Observations
I've changed the way I create the ImageReader instance, selecting smaller dimensions and a different format (YUV_420_888 instead of JPEG). The Y plane will be accessed to get the brightness data. Is there a more efficient format, given that I'm simply ignoring the U and V planes?
Both the TextureView and ImageReader surfaces should be filled periodically, but at different rates. Since there can be only one repeating CaptureRequest on a CameraCaptureSession (set by calling setRepeatingRequest()), am I supposed to manually call capture() periodically, e.g. call setRepeatingRequest() with the preview request and call capture() periodically with the process request, as in the sketch after these observations?
Can the performance be improved by sending reprocessed requests to obtain the images to be processed from the preview images? If so, how can I do it?
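A minimal sketch of that repeating-preview plus periodic-capture idea (session, previewBuilder, processBuilder, and backgroundHandler are assumed placeholders; exception handling is mostly omitted):

// Keep the TextureView preview running continuously.
session.setRepeatingRequest(previewBuilder.build(), null, backgroundHandler);

// Fire one ImageReader ("process") capture roughly every 100 ms (~10 FPS).
Runnable processTick = new Runnable() {
    @Override
    public void run() {
        try {
            session.capture(processBuilder.build(), null, backgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
        backgroundHandler.postDelayed(this, 100);
    }
};
backgroundHandler.postDelayed(processTick, 100);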
I don't know how to help you with the gray channel; I suggest you study the planes of the YUV image format and try to get it from there.
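Since the brightness is exactly the Y plane of a YUV_420_888 Image, one hedged sketch of reading it (image is assumed to be the Image from your ImageReader):

// Copy only the luminance (Y) plane of a YUV_420_888 Image into a byte array.
// For the Y plane pixelStride is 1, but rowStride may exceed the image width.
Image.Plane yPlane = image.getPlanes()[0];
ByteBuffer yBuffer = yPlane.getBuffer();
int width = image.getWidth();
int height = image.getHeight();
int rowStride = yPlane.getRowStride();
byte[] luma = new byte[width * height];
for (int row = 0; row < height; row++) {
    yBuffer.position(row * rowStride);
    yBuffer.get(luma, row * width, width);
}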
Also try to check all the values you can set in the CaptureRequest.Builder; maybe you can achieve your objective using SENSOR_TEST_PATTERN_MODE, COLOR_CORRECTION_MODE, or BLACK_LEVEL_LOCK. You can check all the info in the Android documentation.
To process just one out of every 10 frames, simply discard the unwanted ones in your process() method using a simple check (result here is the CaptureResult delivered to your CaptureCallback):
if (result.getFrameNumber() % 10 != 0) return;
Finally, remember to close all the images that you receive in your ImageReader's OnImageAvailableListener, to avoid memory leaks and improve performance :P
@Override
public void onImageAvailable(ImageReader imageReader) {
    Image image = null;
    try {
        image = imageReader.acquireNextImage();
        // Do whatever you want with your Image
        if (image != null) {
            image.close();
        }
    } catch (IllegalStateException iae) {
        if (image != null) {
            image.close();
        }
    }
}
Hope that helps; let me know if I can help you with anything else!

Is it possible that phone camera shoots video with irregular fps?

My goal is to synchronize a movie's frames with device-orientation data from the phone's gyroscope/accelerometer. Android provides orientation data with proper timestamps, but a video's frame rate is known only in general. Experiments show that orientation changes measured by the accelerometer don't match changes in the scene: sometimes the video runs faster, sometimes slower, within the same recording.
Is there any way to find out how much time passed between two consecutive frames?
I’m going to answer this question myself.
First: yes, it is possible that the frame rate (fps) is not constant. Here is a quote from the Android developer reference: "NOTE: On some devices that have auto-frame rate, this sets the maximum frame rate, not a constant frame rate."
http://developer.android.com/reference/android/media/MediaRecorder.html#setVideoFrameRate%28int%29
Second, the OpenCV VideoCapture class has a get(CV_CAP_PROP_POS_MSEC) call that lets you read the current frame's time position:
#include <opencv2/opencv.hpp>
using namespace cv;

VideoCapture capture("test.mp4");
double curFrameTime;
double prevFrameTime = 0;
double dt;
while (capture.grab())
{
    curFrameTime = capture.get(CV_CAP_PROP_POS_MSEC); // current frame position, ms
    dt = curFrameTime - prevFrameTime;                // time since the previous frame
    prevFrameTime = curFrameTime;
}
capture.release();
If there is a better suggestion I’ll be glad to see it.
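On Android itself you can read per-frame presentation timestamps without OpenCV via MediaExtractor. A minimal sketch, assuming the file path is a placeholder, the recording has no B-frames (so sample order matches presentation order), and with exception handling omitted:

import android.media.MediaExtractor;
import android.media.MediaFormat;

MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource("/sdcard/test.mp4"); // placeholder path
// Select the first video track.
for (int i = 0; i < extractor.getTrackCount(); i++) {
    MediaFormat format = extractor.getTrackFormat(i);
    if (format.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
        extractor.selectTrack(i);
        break;
    }
}
long prevUs = -1;
while (extractor.getSampleTime() >= 0) { // returns -1 after the last sample
    long curUs = extractor.getSampleTime();
    if (prevUs >= 0) {
        long dtUs = curUs - prevUs; // microseconds between consecutive frames
    }
    prevUs = curUs;
    extractor.advance();
}
extractor.release();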

Change keyframe interval using android camera

Is there a way of changing the keyframe frequency when using the Android camera? I'm using an intent to record video, and then the MediaMetadataRetriever class to extract frames, but the keyframes are too far apart for my liking.
You can use getFrameAtTime(long timeUs, int option) with OPTION_CLOSEST to get a frame that is not a keyframe, so you can access any frame you want.
The only problem is that, since the returned frame is not a keyframe, it can be pixelated.
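A minimal usage sketch (the path and timestamp are placeholders; exception handling omitted):

import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource("/sdcard/video.mp4"); // placeholder path
long timeUs = 1500000L; // 1.5 seconds into the video
// OPTION_CLOSEST seeks to the frame nearest timeUs, not just the nearest keyframe.
Bitmap frame = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);
retriever.release();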

Grabbing consecutive frames in android using opencv

I am trying to grab consecutive frames on Android using the OpenCV VideoCapture class. Actually, I want to implement optical flow on Android, for which I need two frames. I first implemented optical flow in C, where I grabbed the frames using cvQueryFrame, and everything worked fine. But on Android, when I call
if (capture.grab())
{
    if (capture.retrieve(mRgba))
        Log.i(TAG, "first frame retrieved");
}
if (capture.grab())
{
    if (capture.retrieve(mRgba2))
        Log.i(TAG, "2nd frame retrieved");
}
and then subtract the matrices using Imgproc.subtract(mRgba, mRgba2, output) and display the output, I get a black image, indicating that mRgba and mRgba2 hold the same data. Can anyone help me grab two different images? According to the OpenCV documentation, mRgba and mRgba2 should be different.
This question is an exact duplicate of read successive frames OpenCV using cvQueryframe.
You have to copy the image into another memory block, because the capture always returns the same pointer.
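A minimal sketch of that fix with the OpenCV Java bindings (capture is assumed from the question; note that in the Java API subtract lives in Core):

Mat mRgba = new Mat();
if (capture.grab() && capture.retrieve(mRgba)) {
    // clone() allocates new memory, so the next retrieve() won't overwrite this frame.
    Mat firstFrame = mRgba.clone();
    if (capture.grab() && capture.retrieve(mRgba)) {
        Mat output = new Mat();
        Core.subtract(firstFrame, mRgba, output); // now a real difference of two frames
    }
}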
