I'm working on an Android camera application and I want to run my filter on sampled frames (not all frames; 8-10 fps is enough) from the camera in my Android OpenCV application. My frames are in Mat format and I am using a "SampleJavaCameraView" that extends CameraCameraView. Briefly, I want to skip some frames without processing them.
Use a counter and do your processing only when counter % 10 == 0 (to process every tenth frame), as in the sketch below.
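A minimal sketch of that idea, assuming an OpenCV CvCameraViewListener2; the class name, fields and applyFilter() are placeholders for your own code:

import org.opencv.android.CameraBridgeViewBase;
import org.opencv.core.Mat;

public class SampledFrameListener implements CameraBridgeViewBase.CvCameraViewListener2 {
    private int mFrameCounter = 0;  // counts incoming frames
    private Mat mLastResult;        // caches the last processed result

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();
        if (mFrameCounter++ % 10 == 0) {
            mLastResult = applyFilter(rgba);   // heavy processing only on every 10th frame
        }
        return mLastResult != null ? mLastResult : rgba;  // otherwise reuse / pass through
    }

    @Override public void onCameraViewStarted(int width, int height) {}
    @Override public void onCameraViewStopped() {}

    private Mat applyFilter(Mat src) { /* your OpenCV filter here */ return src; }
}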
I've developed an Android app that allows the user to create a boomerang-like mp4 video. This video consists of 10 still images played back and forth quite fast. I know that such a video (boomerang effect) can easily be looped from a single video file while playing it, but I really need to create an mp4 video that already contains the prepared boomerang footage. The output video can be downloaded and played by the user in any external player (over which I obviously don't have any control).
For that purpose I currently create a video from images in a loop. The loop starts from the 1st picture and goes to the 10th picture with a 0.25 s delay between frames, then goes back from the 10th to the 1st, including the delay. And there are 5 of those loops, which essentially means creating a single video from 5 * 10 * 2 = 100 images. I know it's kind of ridiculous, so the time it takes to prepare this video is ridiculous as well (around 1:40 min).
What solution could you recommend, assuming that the output video really has to consist of 5 back-and-forth loops? I've thought about creating a single-loop video (20 pictures) and then creating the final output by concatenating it 5 times. But would that be any good? I'm trying to find an approach that is efficient yet understandable for a beginner Android programmer.
You can use FFmpeg to create a boomerang-like video. Below is a simple example:
ffmpeg -i input_loop.mp4 -filter_complex "[0]reverse[r];[0][r]concat,loop=5:250,setpts=N/25/TB" output_looped_video.mp4
Here input_loop.mp4 is the input clip, about 1.5 seconds long.
In loop=5:250, 5 is the number of loops and 250 is the frame rate multiplied by double the length of the clip. The setpts filter is applied to avoid frame drops, and the value 25 should be replaced with the frame rate of the clip.
In setpts=N/<VALUE>/TB you can alter the value according to your needs:
increase the value to speed up the boomerang effect;
decrease the value to slow down the boomerang effect.
I was looking for a way to create a boomerang video and found a pretty cool example of how to do it on GitHub.
The video is created using the FFmpeg wrapper library org.bytedeco.javacpp-presets to clone the frames.
https://github.com/trantrungduc/boomerang-android
This is the place in the code where you can customize the video loop:
for (int k = 0; k < 3; k++) {
    // Forward pass: record the frames in their original order.
    for (Frame frame1 : loop) {
        frecorder.record(frame1);
    }
    // Backward pass: record the same frames in reverse order.
    for (int i = loop.size() - 1; i >= 0; i--) {
        frecorder.record(loop.get(i));
    }
}
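For context, a rough sketch of how the surrounding pieces might be wired up with javacv's FFmpegFrameGrabber and FFmpegFrameRecorder; the names loop and frecorder follow the snippet above, the file paths are placeholders, and the actual setup in the linked project may differ:

import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.FrameGrabber;
import org.bytedeco.javacv.FrameRecorder;
import java.util.ArrayList;
import java.util.List;

void recordBoomerang(String inPath, String outPath)
        throws FrameGrabber.Exception, FrameRecorder.Exception {
    // Grab the frames of the single forward pass into memory.
    FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(inPath);
    grabber.start();
    List<Frame> loop = new ArrayList<>();
    Frame f;
    while ((f = grabber.grabImage()) != null) {
        loop.add(f.clone());   // clone, because the grabber reuses its internal buffer
    }

    // Record forward + backward passes several times into the output file.
    FFmpegFrameRecorder frecorder = new FFmpegFrameRecorder(
            outPath, grabber.getImageWidth(), grabber.getImageHeight());
    frecorder.setFormat("mp4");
    frecorder.setFrameRate(grabber.getFrameRate());
    frecorder.start();
    for (int k = 0; k < 3; k++) {
        for (Frame frame1 : loop) {
            frecorder.record(frame1);          // forward pass
        }
        for (int i = loop.size() - 1; i >= 0; i--) {
            frecorder.record(loop.get(i));     // backward pass
        }
    }
    frecorder.stop();
    grabber.stop();
}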
I am developing an application using Android OpenCV.
The app I am developing offers two operations:
1. The frame read from the camera is passed to JNI using the native function Mat.getNativeObjAddr(), and the new image is returned through JavaCameraView's onCameraFrame() function.
2. It reads a video clip from storage, processes each frame the same way as in #1, and returns the resulting image via the onCameraFrame() function.
The first operation is implemented as simply as the following and works normally:
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    if (inputFrame != null) {
        Detect(inputFrame.rgba().getNativeObjAddr(), boardImage.getNativeObjAddr());
    }
    return boardImage;
}
However, the problem occurred with the second operation.
As far as I know, the files in storage are not readable from JNI.
I have already tried FFmpegMediaPlayer and MediaMetadataRetriever, found through Google searches. However, the getFrameAtTime() function provided by MediaMetadataRetriever took an average of 170 ms to grab a bitmap of a specific frame from a 1920x1080 video. What I have to develop must show the video results in real time at 30 fps. In operation #1, the native function Detect() takes about 2 ms to process one frame.
For these reasons, I want to do the following: Java sends a video's path (e.g. /storage/emulated/0/download/video.mp4) to JNI, and native functions process the video one frame at a time, displaying the result image through onCameraFrame().
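For reference, the kind of JNI bridge I have in mind looks roughly like this (openVideo() and nextFrame() are placeholder names; the actual frame decoding would happen on the native side):

// Java side: hypothetical native entry points.
public native void openVideo(String path);             // e.g. /storage/emulated/0/download/video.mp4
public native boolean nextFrame(long resultMatAddr);   // decodes one frame into the given Mat

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // Ignore the live camera frame and show the next decoded video frame instead.
    if (nextFrame(boardImage.getNativeObjAddr())) {
        return boardImage;
    }
    return inputFrame.rgba();   // fall back to the camera frame when the video ends
}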
Is there a proper way to do this? I look forward to your reply. Thank you!
I am trying to save image sequences at fixed frame rates (preferably up to 30) on an Android device with FULL camera2 capability (Galaxy S7), but I am unable to a) get a steady frame rate, or b) reach even 20 fps (with JPEG encoding). I have already included the suggestions from Android camera2 capture burst is too slow.
The minimum frame duration for JPEG is 33.33 milliseconds (for resolutions below 1920x1080) according to
characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputMinFrameDuration(ImageFormat.JPEG, size);
and the stall duration is 0 ms for every size (similarly for YUV_420_888).
My capture builder looks as follows:
captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CONTROL_AE_MODE_OFF);
captureBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, _exp_time);
captureBuilder.set(CaptureRequest.CONTROL_AE_LOCK, true);
captureBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, _iso_value);
captureBuilder.set(CaptureRequest.LENS_FOCUS_DISTANCE, _foc_dist);
captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CONTROL_AF_MODE_OFF);
captureBuilder.set(CaptureRequest.CONTROL_AWB_MODE, _wb_value);
// https://stackoverflow.com/questions/29265126/android-camera2-capture-burst-is-too-slow
captureBuilder.set(CaptureRequest.EDGE_MODE,CaptureRequest.EDGE_MODE_OFF);
captureBuilder.set(CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE, CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE_OFF);
captureBuilder.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_OFF);
captureBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_CANCEL);
// Orientation
int rotation = getWindowManager().getDefaultDisplay().getRotation();
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION,ORIENTATIONS.get(rotation));
Focus distance is set to 0.0 (infinity), ISO to 100, and exposure time to 5 ms. White balance can be set to OFF/AUTO/any value; it does not impact the times below.
I start the capture session with the following command:
session.setRepeatingRequest(_capReq.build(), captureListener, mBackgroundHandler);
Note: It does not make a difference whether I use a repeating request or a repeating burst.
In the preview (only texture surface attached), everything is at 30fps.
However, as soon as I attach an ImageReader (with its listener running on a HandlerThread), which I instantiate as follows (without saving anything, only measuring the time between frames):
reader = ImageReader.newInstance(_img_width, _img_height, ImageFormat.JPEG, 2);
reader.setOnImageAvailableListener(readerListener, mBackgroundHandler);
With time-measuring code:
ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader myreader) {
        Image image = myreader.acquireNextImage();
        if (image == null) {
            return;
        }
        long curr = image.getTimestamp();
        Log.d("curr - last_ts", "" + ((curr - last_ts) / 1000000) + " ms");
        last_ts = curr;
        image.close();
    }
};
I get periodically repeating time differences like this:
99 ms - 66 ms - 66 ms - 99 ms - 66 ms - 66 ms ...
I do not understand why these take double or triple the time that the stream configuration map advertises for JPEG. The exposure time is well below the frame duration of 33 ms. Is there some other internal processing happening that I am not aware of?
I tried the same with the YUV_420_888 format, which resulted in constant time differences of 33 ms. The problem I have here is that the phone lacks the bandwidth to store the images fast enough (I tried the method described in How to save a YUV_420_888 image?). If you know of any method I could use to compress or encode these images fast enough, please let me know.
Edit: From the documentation of getOutputStallDuration: "In other words, using a repeating YUV request would result in a steady frame rate (let's say it's 30 FPS). If a single JPEG request is submitted periodically, the frame rate will stay at 30 FPS (as long as we wait for the previous JPEG to return each time). If we try to submit a repeating YUV + JPEG request, then the frame rate will drop from 30 FPS." Does this imply that I need to periodically request a single capture()?
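If that is the right reading, I imagine the pattern would look roughly like this (a sketch, assuming separate yuvBuilder and jpegBuilder request builders that each target their own surface):

// Keep the YUV/preview stream repeating at a steady rate...
session.setRepeatingRequest(yuvBuilder.build(), captureListener, mBackgroundHandler);

// ...and interleave individual JPEG captures instead of repeating them.
Runnable jpegTick = new Runnable() {
    @Override
    public void run() {
        try {
            session.capture(jpegBuilder.build(), captureListener, mBackgroundHandler);
        } catch (CameraAccessException e) {
            Log.e("Capture", "single JPEG capture failed", e);
        }
        mBackgroundHandler.postDelayed(this, 100);   // e.g. one JPEG every 100 ms
    }
};
mBackgroundHandler.post(jpegTick);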
Edit2: From https://developer.android.com/reference/android/hardware/camera2/CaptureRequest.html: "The necessary information for the application, given the model above, is provided via the android.scaler.streamConfigurationMap field using getOutputMinFrameDuration(int, Size). These are used to determine the maximum frame rate / minimum frame duration that is possible for a given stream configuration.
Specifically, the application can use the following rules to determine the minimum frame duration it can request from the camera device:
Let the set of currently configured input/output streams be called S.
Find the minimum frame durations for each stream in S, by looking it up in android.scaler.streamConfigurationMap using getOutputMinFrameDuration(int, Size) (with its respective size/format). Let this set of frame durations be called F.
For any given request R, the minimum frame duration allowed for R is the maximum out of all values in F. Let the streams used in R be called S_r.
If none of the streams in S_r have a stall time (listed in getOutputStallDuration(int, Size) using its respective size/format), then the frame duration in F determines the steady state frame rate that the application will get if it uses R as a repeating request."
The JPEG output is by far not the fastest way to fetch frames. You can accomplish this a lot faster by drawing the frames directly onto a quad using OpenGL.
For burst capture, a faster solution would be capturing the images to RAM without encoding them, then encoding and saving them asynchronously.
On this website you can find a lot of excellent code related to android multimedia in general.
This specific program uses OpenGL to fetch the pixel data from an MPEG video. It's not difficult to use the camera as input instead of a video: you can basically use the texture from the CodecOutputSurface class of the mentioned program as the output texture for your capture request.
A possible solution I found consists of dumping the YUV data without encoding it as JPEG, in combination with a microSD card that can sustain writes of up to 95 MB per second. (I had the misconception that YUV images would be larger; with a phone that has full support for the camera2 pipeline, the write speed should be the limiting factor.)
With this setup, I was able to achieve the following stable rates:
1920x1080 at 15 fps (approx. 4 MB x 15 ≈ 60 MB/s)
960x720 at 30 fps (approx. 1.5 MB x 30 ≈ 45 MB/s)
I then encode the images offline from YUV to PNG using a Python script.
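The dump step itself can be kept very simple; a rough sketch of writing the three planes of a YUV_420_888 Image straight to a file (row and pixel strides are ignored here and must be accounted for when decoding offline):

import android.media.Image;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Writes the Y, U and V planes of a YUV_420_888 Image to a raw file.
void dumpYuv(Image image, File outFile) throws IOException {
    try (FileOutputStream fos = new FileOutputStream(outFile)) {
        for (Image.Plane plane : image.getPlanes()) {
            ByteBuffer buf = plane.getBuffer();
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            fos.write(bytes);
        }
    }
}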
I am developing an Android video player. I use FFmpeg in native code to decode video frames. In the native code, I have a thread called decode_thread that calls avcodec_decode_video2():
int decode_thread(void *arg) {
    avcodec_decode_video2(codecCtx, pFrame, &frameFinished, pkt);
}
I have another thread called display_thread that uses ANativeWindow to display a decoded frame on a SurfaceView.
The problem is that if I let the decode_thread run continuously without a delay, it significantly reduces the performance of avcodec_decode_video2(); sometimes it takes about 0.1 seconds to decode a frame. However, if I put a delay on the decode_thread, something like this:
int decode_thread(void *arg) {
    avcodec_decode_video2(codecCtx, pFrame, &frameFinished, pkt);
    usleep(20 * 1000);
}
then the performance of avcodec_decode_video2() is really good, about 0.001 seconds per frame. However, putting a delay in the decode_thread is not a good solution because it affects playback. Could anyone explain the behavior of avcodec_decode_video2() and suggest a solution?
It looks impossible that the performance of the video decoding function would improve just because your thread sleeps. Most likely the video decoding thread gets preempted by another thread, which is why you measure the increased timing (your thread simply wasn't running for part of it). When you add a call to usleep, the context switch to another thread happens there instead, so when your decoding thread is scheduled again it starts with a full CPU slice and is no longer interrupted inside avcodec_decode_video2().
What should you do? You surely want to decode packets a little ahead of where you show them: the performance of avcodec_decode_video2() certainly isn't constant, and if you try to stay just one frame ahead, you might not have enough time to decode one of the frames.
I'd create a producer-consumer queue of decoded frames with an upper limit. The decoder thread is the producer; it should run until it fills up the queue and then wait until there's room for another frame. The display thread is the consumer; it takes frames from this queue and displays them.
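Sketched here in Java with a bounded BlockingQueue for illustration; the same idea applies in the native layer with a mutex and condition variable (DecodedFrame, the helper methods and the running flag are placeholders):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Shared bounded queue: the decoder blocks when it gets too far ahead.
final BlockingQueue<DecodedFrame> frameQueue = new ArrayBlockingQueue<>(8);

// decode_thread equivalent (producer).
void decodeLoop() throws InterruptedException {
    while (running) {
        DecodedFrame frame = decodeNextFrame();  // wraps avcodec_decode_video2 via JNI
        frameQueue.put(frame);                   // waits here when the queue is full
    }
}

// display_thread equivalent (consumer).
void displayLoop() throws InterruptedException {
    while (running) {
        DecodedFrame frame = frameQueue.take();  // waits when no frame is ready yet
        renderToSurface(frame);                  // draw via ANativeWindow / SurfaceView
        waitUntilPresentationTime(frame);        // pace playback instead of sleeping blindly
    }
}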
I am trying to encode a 30 frames per second video using MediaCodec through the Camera's preview callback (onPreviewFrame). The video that I encode always plays back too fast (which is not desired).
So I tried to check the number of frames coming into my camera's preview by keeping an int frameCount variable. What I am expecting is 30 frames per second, because I set up my camera's preview for 30 fps (as shown below). The result I get back is not the same.
I ran the onPreviewFrame callback for 10 seconds, and the frameCount I got back was only about 100 frames. This is bad because I am expecting 300 frames. Are my camera parameters set up correctly? Is this a limitation of Android's camera preview callback? And if so, is there any other camera callback that can return the camera's image data (NV21, YUV, YV12) at 30 frames per second?
Thanks for reading and taking your time to help out. I would appreciate any comments and opinions.
Here is an example of a video encoded using the Camera's onPreviewFrame:
http://www.youtube.com/watch?v=I1Eg2bvrHLM&feature=youtu.be
Camera.Parameters parameters = mCamera.getParameters();
parameters.setPreviewFormat(ImageFormat.NV21);
parameters.setPictureSize(previewWidth,previewHeight);
parameters.setPreviewSize(previewWidth, previewHeight);
// parameters.setPreviewFpsRange(30000,30000);
parameters.setPreviewFrameRate(30);
mCamera.setParameters(parameters);
mCamera.setPreviewCallback(previewCallback);
mCamera.setPreviewDisplay(holder);
No, the Android camera does not guarantee a stable frame rate, especially at 30 fps. For example, it may choose a longer exposure in low-light conditions.
But there are some ways we, app developers, can make things worse.
First, by using setPreviewCallback() instead of setPreviewCallbackWithBuffer(). This may cause unnecessary pressure on the garbage collector.
Second, if onPreviewFrame() arrives on the main (UI) thread, any UI action directly delays the arrival of camera frames. To keep onPreviewFrame() on a separate thread, you should open() the camera on a secondary Looper thread. I explained in detail how this can be achieved here: Best use of HandlerThread over other similar classes.
Third, check that your processing in onPreviewFrame() takes less than about 20 ms, so it fits comfortably within the ~33 ms frame interval at 30 fps; a sketch of the buffer-based callback from the first point follows.
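A minimal sketch of the buffer-based callback, assuming the NV21 preview format and a hypothetical encodeFrame() doing the actual work:

// Allocate a few reusable preview buffers once, before startPreview().
int bufferSize = previewWidth * previewHeight
        * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
for (int i = 0; i < 3; i++) {
    mCamera.addCallbackBuffer(new byte[bufferSize]);
}

mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        encodeFrame(data);               // your processing; keep it well under ~20 ms
        camera.addCallbackBuffer(data);  // return the buffer so the camera can reuse it
    }
});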