Is there a way of changing the frequency of keyframes when using the Android camera? I'm using an intent to record video, and then the MediaMetadataRetriever class to extract frames, but the keyframes are too far apart for my liking.
You can use getFrameAtTime(long timeUs, int option) with OPTION_CLOSEST to get a frame that is not a key frame, so you can access any frame you want.
The only problem is that, because the returned frame is not a key frame, it can be pixelated.
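A minimal sketch of that call (the file path and timestamp are assumptions for illustration):

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource("/sdcard/video.mp4"); // assumed path
// timeUs is in microseconds; OPTION_CLOSEST returns the frame nearest
// the requested time, even if it is not a sync (key) frame.
long timeUs = 2500000; // 2.5 seconds
Bitmap frame = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);
retriever.release();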
I'm trying to write a video-compression algorithm, and one way I want to reduce size is by dropping the frame rate. The code below shows how I achieve this: I keep advancing to the next sample until the time difference is greater than the inverse of the desired frame rate.
if (!firstFrame) {
    // Skip samples until at least one target frame interval
    // (in microseconds) has elapsed since the last kept frame.
    while (!eof && extractor.getSampleTime() - prevSampleTime < 1000000 / frameRate) {
        eof = !extractor.advance();
    }
}
firstFrame = false;
prevSampleTime = extractor.getSampleTime();
However, the obvious problem with this is that dropping a frame means the next diff frame is decoded against the wrong reference frame, resulting in a distorted video. Is there any way to extract the full image Bitmap at a particular frame? Basically I want to achieve something like this:
Video frames are extracted iteratively and unwanted frames are dropped
Remaining frames are converted into full Bitmaps
All Bitmaps are strung together to form the raw video
Raw video is compressed with AVC compression.
I don't mind how long this process takes as it will be running in the background and the output will not be displayed immediately.
Since most video compression algorithms today use inter-frame prediction, some frames (usually most of them) can't be decoded on their own (without feeding in the previous frames), as you noted in the question.
This means you must decode all frames, and only then can you drop some of them before re-encoding.
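A hedged sketch of that decode-everything-then-drop idea, using MediaMetadataRetriever so that every returned Bitmap is a fully decoded frame (the input path and target frame rate are assumptions):

// Each getFrameAtTime call decodes from the preceding key frame,
// so the returned Bitmap is complete regardless of GOP structure.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource("/sdcard/input.mp4"); // assumed path
long durationUs = Long.parseLong(
        retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION)) * 1000;
int targetFps = 15; // assumed target frame rate
List<Bitmap> keptFrames = new ArrayList<>();
for (long t = 0; t < durationUs; t += 1000000 / targetFps) {
    Bitmap frame = retriever.getFrameAtTime(t, MediaMetadataRetriever.OPTION_CLOSEST);
    if (frame != null) {
        keptFrames.add(frame); // later fed to an AVC encoder, e.g. MediaCodec
    }
}
retriever.release();

Note that decoding every frame to a Bitmap is slow and memory-hungry; a MediaCodec decode/encode loop is the usual production approach, but the idea is the same.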
I have used JavaCV (and OpenCV too) to implement a live face-detection preview on Android, and it works OK. Now I want to take a picture or record a video from the live preview with the face detection included (I mean, when I take a picture, it will show the person with a rectangle around his/her face). I have researched a lot but got no result. Can anyone help me, please?
What you're looking for is the imwrite() method.
Since your question isn't clear on the use-case, I'll give a generic algorithm, as shown:
imwrite writes a specified Mat object to a file. It accepts 2 arguments, a file name and a Mat object, for example imwrite("output.jpg", img);
Here's the logic you can follow:
Receive an input frame (Mat input) from the video and run face detection using your existing method.
Draw a rectangle on an output image (Mat output).
Use imwrite as imwrite("face.jpg", output) (see the sketch after this list).
In case you want to record all the frames with a face in them, replace "face.jpg" with a string variable that is updated on each loop iteration, and run imwrite in a loop.
If you wish to record a video, have a look at the VideoWriter() class.
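A hedged sketch of that logic with the OpenCV 3+ Java bindings (the cascade file path, rectangle color, and output file name are assumptions):

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

static void saveAnnotatedFrame(Mat input) {
    // The cascade file path is an assumption; bundle one with your app.
    CascadeClassifier detector =
            new CascadeClassifier("haarcascade_frontalface_default.xml");
    Mat output = input.clone();
    MatOfRect faces = new MatOfRect();
    detector.detectMultiScale(output, faces);
    for (Rect face : faces.toArray()) {
        // Draw a green rectangle around each detected face.
        Imgproc.rectangle(output, face.tl(), face.br(), new Scalar(0, 255, 0), 2);
    }
    Imgcodecs.imwrite("face.jpg", output); // write the annotated frame to disk
}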
My goal is to synchronize a movie's frames with device-orientation data from the phone's gyroscope/accelerometer. Android provides orientation data with proper timestamps. As for the video, its frame rate is only known approximately. Experiments show that changes in orientation measured by the accelerometer don't match changes in the scene: sometimes it goes faster, sometimes slower, within the same video.
Is there any way to find out how much time passed between two consecutive frames?
I’m going to answer this question myself.
First, yes, it is possible that the frame rate is not constant. Here is a quote from the Android developer reference: "NOTE: On some devices that have auto-frame rate, this sets the maximum frame rate, not a constant frame rate."
http://developer.android.com/reference/android/media/MediaRecorder.html#setVideoFrameRate%28int%29
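For instance, a minimal sketch of where that limit applies (the requested rate and encoder settings are assumptions):

MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
// On auto-frame-rate devices this requests a MAXIMUM of 30 fps,
// not a constant 30 fps, so inter-frame spacing can still vary.
recorder.setVideoFrameRate(30);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);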
Second, there is a method get(CV_CAP_PROP_POS_MSEC) in the OpenCV VideoCapture class that reads the current frame's time position:
VideoCapture capture("test.mp4");
double curFrameTime;
double prevFrameTime = 0;
double dt;
while (capture.grab())
{
curFrameTime = capture.get(CV_CAP_PROP_POS_MSEC); // current frame position, in ms
dt = curFrameTime - prevFrameTime;                // time elapsed since the previous frame
prevFrameTime = curFrameTime;                     // remember for the next iteration
}
capture.release();
If there is a better suggestion I’ll be glad to see it.
I am using the GPUImage library to compress a video in my iOS app (GPUImageVideoCamera):
https://github.com/BradLarson/GPUImage/
I have worked with it on iOS and it is very fast
I want to do the same in my Android app, but it seems that the GPUImageMovie class doesn't exist in the Android library:
https://github.com/CyberAgent/android-gpuimage/tree/master/library/src/jp/co/cyberagent/android/gpuimage
It seems that the Android library only works on images (no video).
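For reference, a minimal sketch of the image-only pipeline that android-gpuimage does expose (the filter choice and the source bitmap are assumptions):

import jp.co.cyberagent.android.gpuimage.GPUImage;
import jp.co.cyberagent.android.gpuimage.GPUImageGrayscaleFilter;

// Still images only: there is no movie/video input class here.
GPUImage gpuImage = new GPUImage(context);
gpuImage.setImage(bitmap); // a single Bitmap, not a video stream
gpuImage.setFilter(new GPUImageGrayscaleFilter());
Bitmap filtered = gpuImage.getBitmapWithFilterApplied();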
Does anyone know if this library can do the job? If not, has anyone ported the full GPUImage library to Android? If not, what is the best library I can use that does the job as fast as GPUImage does?
This is what GPUImageVideoCamera does in iOS (filtering live video):
To filter live video from an iOS device's camera, you can use code like the following:
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
// Add the view somewhere so it's visible
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. This video is captured with the interface being in portrait mode, where the landscape-left-mounted camera needs to have its video frames rotated before display. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.
The fill mode of the GPUImageView can be altered by setting its fillMode property, so that if the aspect ratio of the source video is different from that of the view, the video will either be stretched, centered with black bars, or zoomed to fill.
For blending filters and others that take in more than one image, you can create multiple outputs and add a single filter as a target for both of these outputs. The order with which the outputs are added as targets will affect the order in which the input images are blended or otherwise processed.
Also, if you wish to enable microphone audio capture for recording to a movie, you'll need to set the audioEncodingTarget of the camera to be your movie writer, as in the following:
videoCamera.audioEncodingTarget = movieWriter;
Is there a library that can do the same in Android?
I am trying to grab consecutive frames on Android using the OpenCV VideoCapture class. Actually, I want to implement optical flow on Android, for which I need 2 frames. I implemented optical flow in C first, where I grabbed the frames using cvQueryFrame, and everything worked fine. But on Android, when I call
if (capture.grab()) {
    if (capture.retrieve(mRgba))
        Log.i(TAG, "first frame retrieved");
}
if (capture.grab()) {
    if (capture.retrieve(mRgba2))
        Log.i(TAG, "2nd frame retrieved");
}
and then subtract the matrices using Imgproc.subtract(mRgba, mRgba2, output) and display the output, it gives me a black image, indicating that mRgba and mRgba2 are frames with the same data. Can anyone explain how to grab two different images? According to the OpenCV documentation, mRgba and mRgba2 should be different.
This question is an exact duplicate of
read successive frames OpenCV using cvQueryframe
You have to copy the image to another memory block, because the capture always returns the same pointer.
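A hedged sketch of that fix in the Java bindings, reusing the question's variable names (note that subtract lives in Core, not Imgproc):

// Clone each retrieved frame so the two Mats own independent pixel
// buffers instead of sharing the capture's internal one.
Mat first = new Mat();
Mat second = new Mat();

if (capture.grab() && capture.retrieve(mRgba))
    first = mRgba.clone(); // deep copy of frame 1

if (capture.grab() && capture.retrieve(mRgba2))
    second = mRgba2.clone(); // deep copy of frame 2

Mat output = new Mat();
Core.subtract(first, second, output); // now a genuine frame difference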