I'm trying to use the camera for video processing that needs a high constant frame rate around 30 FPS.
I'm using the Camera class and setPreviewCallbackWithBuffer to receive the video frames. I have noticed that most cameras do not support an FPS range of (30000, 30000). However, when recording movies I assume the camera on those devices still delivers a frame rate around 30. Is there some other way to achieve higher frame rates than with my current method?
Note that non-top devices with cheap cameras, especially front ones, don't reliably support the fps you've requested. If you request 30, the device can reply OK (it will start capturing, no crash, etc.), but in reality it may deliver frames at an fps anywhere in a range like [4-30], depending on lighting conditions (less light needs a longer exposure time) and possibly other factors. An example of such a camera is the front camera on the Galaxy S3 Mini.
If you don't want to use a static fps value for all devices, you can use the getSupportedPreviewFpsRange() method to determine the fps ranges available on that particular device. The method returns the minimum and maximum supported fps for each range.
Now, after finding the maximum supported fps, you can use your current method to set it.
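For example, a minimal sketch with the old android.hardware.Camera API that picks the supported range with the highest maximum and applies it might look like this (the selection strategy is just one possibility):

Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();
int[] best = null;
// getSupportedPreviewFpsRange() returns {min, max} pairs in units of fps * 1000
for (int[] range : params.getSupportedPreviewFpsRange()) {
    if (best == null || range[Camera.Parameters.PREVIEW_FPS_MAX_INDEX]
            > best[Camera.Parameters.PREVIEW_FPS_MAX_INDEX]) {
        best = range;
    }
}
if (best != null) {
    params.setPreviewFpsRange(best[Camera.Parameters.PREVIEW_FPS_MIN_INDEX],
            best[Camera.Parameters.PREVIEW_FPS_MAX_INDEX]);
    camera.setParameters(params);
}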
Hope this will give you some hint about setting fps to its maximum.
Related
When setting manual controls in Android by using the Camera2 API, what is the purpose of CaptureRequest.SENSOR_FRAME_DURATION?
I have read the documentation several times but still can't understand its purpose, or what value to set in relation to the exposure time and ISO.
I understand that CaptureRequest.SENSOR_EXPOSURE_TIME specifies how long the sensor lets light in, and that CaptureRequest.SENSOR_SENSITIVITY is the sensor's sensitivity to light (ISO), but I have no idea about SENSOR_FRAME_DURATION and how it relates to the exposure time and sensor sensitivity.
For example, if I set a long exposure time of 1 second or 30 seconds, then what is the value that I should set in SENSOR_FRAME_DURATION? And how does it relate to the other sensor controls?
FRAME_DURATION is the same concept as output frame rate. That is, how often is an image read out from the image sensor? Frame rate is generally reported as frames per second, while FRAME_DURATION is the inverse of that - the duration of a single frame.
Since the camera2 API is all about per-frame control, having the duration as a per-frame property is appropriate.
FRAME_DURATION can't be shorter than EXPOSURE_TIME (since you can't read the image from the sensor until exposure is complete), but the API handles this for you - if you ask for a FRAME_DURATION that's too short compared to EXPOSURE_TIME, it gets automatically increased.
That said, often you may want a consistent frame rate (such as 30fps for video recording), so you'd set your FRAME_DURATION to 1/30 s = 33333333 ns, and then vary EXPOSURE_TIME for manual exposure control. As long as you keep EXPOSURE_TIME less than 1/30 s, you'll get a steady frame rate and still have manual exposure control.
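A minimal sketch of that combination with camera2, assuming you already have a CameraDevice, a target Surface and a configured CameraCaptureSession, and that the device reports the MANUAL_SENSOR capability (only the manual-control keys are the point here):

CaptureRequest.Builder builder = device.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
builder.addTarget(surface);
builder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF); // disable auto-exposure
builder.set(CaptureRequest.SENSOR_FRAME_DURATION, 33333333L); // 1/30 s, in nanoseconds
builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 16666666L);  // 1/60 s, must stay below the frame duration
builder.set(CaptureRequest.SENSOR_SENSITIVITY, 400);          // ISO 400
session.setRepeatingRequest(builder.build(), null, null);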
The minimum possible frame duration (and therefore the maximum frame rate) depends on the output resolution(s) and format(s) you've asked for in the camera capture session. Generally, bigger resolutions take longer to read out, putting a limit on minimum frame duration. Cameras that support the BURST_CAPTURE camera capability can handle at least 20fps for 8 MP captures, or better.
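You can query that minimum per output size and format from the stream configuration map; a small sketch, assuming characteristics came from CameraManager.getCameraCharacteristics():

StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
long minFrameNs = map.getOutputMinFrameDuration(ImageFormat.YUV_420_888, new Size(1920, 1080));
double maxFps = 1e9 / minFrameNs; // e.g. 33333333 ns -> 30 fps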
At the image sensor level, frame duration is implemented by adding in extra vertical blanking time so that EXPOSURE + VBLANK = FRAME_DURATION. The full picture is also more complicated in that typical CMOS image sensors can be exposing some rows of the image while others are being read out (rolling shutter) so the actual timing diagrams look more complicated. You don't generally have to care when just doing basic manual exposure control, however.
Most of the image sensors in smartphones use a rolling shutter, which reads out pixels line by line, so FRAME_DURATION = FRAME_READ_OUT_TIME + VBLANK.
In our mobile application, camera capture on the Android device is sent as a video stream to a remote server.
I need to automatically adapt my camera fps to the network speed. Basically, if I detect that the network is slow, I need to reduce the fps and keep reducing it until a balance is reached.
I obtain available fps ranges using the field CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES on Camera2 API. I set the target fps using CONTROL_AE_TARGET_FPS_RANGE field.
Let's say the list of possible ranges is, for example, (30, 30) and (15, 30).
I started by setting the target with the highest fps ((30, 30) in our case). Once I detected that the network was slow, I reduced the fps range to (15, 30). However, what I noticed is that the device continued to generate about 29 fps.
As an experiment, I forced the target fps value to be (15, 15). This seems to have done the trick. The system started to generate 15 fps, a value that I was expecting.
However, this makes me wonder what really is the relationship between CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES and CONTROL_AE_TARGET_FPS_RANGE. My impression was that the target
range that is set on the camera has to be one of the values received from CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES. This would be either (30, 30) or (15, 30) in my case. However, if (15, 15) is also being accepted as a valid target fps, I wonder if I can specify any range inside a valid range. For example, I would like to set the fps to (29, 29), (28, 28), and so on until a balance is reached. Is this allowed?
Generally speaking, the answer is NO.
The contract requires that all supported FPS ranges be published in CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES. Furthermore, the behavior of the device is undefined when you choose an unsupported camera parameter, e.g. frame rate or preview size. Some devices will throw a RuntimeException, others may keep the current setting, and yet others will choose something 'as close as possible' to what you asked for.
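As an illustration with camera2 (a sketch; the characteristics and the request builder are assumed to exist already), the safe pattern is to pick only from the published list:

Range<Integer>[] ranges = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
Range<Integer> wanted = new Range<>(15, 30);
if (Arrays.asList(ranges).contains(wanted)) {
    builder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, wanted);
} else {
    // fall back to a range the device actually advertises
}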
Some devices don't publish all supported FPS ranges, but the API is also not always possible to implement to the letter. For example, consider a camera that can deliver full HD 1920x1080 frames at 30 FPS, but supports 60 FPS for smaller 1280x720 frames. Which FPS ranges should it publish? And what if these settings depend on some other choices, like night scene mode, or on focus distance?
And I have not yet spoken about the bugs that often happen on less polished devices. It isn't uncommon to see a device that declares that some FPS range or frame size is supported, but actually fails to set it (again, with a variety of results).
If your application or library intends to cover millions of users with a wide variety of hardware, you have no choice but to keep certain white-lists and black-lists for device features that may and may not be used, that take into account the manufacturer, the device model, and sometimes even the system version (e.g. I have seen over-the-air upgrades that broke certain, admittedly marginal, camera features).
Another note is that floating FPS ranges cannot be used for video recording or transmission. If you choose the (15, 30) range, you will have problems with many video players, the audio will never be in sync with the video, and you will still have no control over the bitrate.
TL;DR: in your specific case, there is no need to bother with the unsupported, undocumented (15, 15) FPS range. You can easily drop every second frame and pass 15 FPS to the network, while still keeping the supported (30, 30) range. If you need an arbitrary uniform rate of, say, 20 FPS, you are less lucky. There are ways to delay delivery of the next frame a bit, but nobody will guarantee that exactly 50000000 nanoseconds will pass between frames.
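A sketch of that frame-dropping idea with an ImageReader on camera2; onFrameForNetwork() is a hypothetical stand-in for your own encode/send pipeline:

class DecimatingListener implements ImageReader.OnImageAvailableListener {
    private int frameCounter = 0;

    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireNextImage();
        if (image == null) return;
        try {
            if (frameCounter++ % 2 == 0) {
                onFrameForNetwork(image); // hypothetical: forward only every second frame
            }
        } finally {
            image.close(); // always release the buffer back to the reader
        }
    }
}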
I want to fix the frame rate of the camera preview in Android, e.g. to 20 fps or 30 fps. However, we find the frame rate is unstable.
In the Android documentation, it is said that the frame rate fluctuates between the minimum and maximum frame rates defined in getSupportedPreviewFpsRange.
https://developer.android.com/reference/android/hardware/Camera.Parameters.html#getSupportedPreviewFpsRange%28%29
My questions are:
1) Which factors influence the frame rate? Exposure time, white balance, frame resolution, background CPU load, etc.?
2) Is there any method to fix the frame rate by adjusting the above factors?
3) In my project, a higher frame rate is better. If the frame rate ends up being unstable, can I increase or fix the minimum frame rate?
4) It seems that video capture is somewhat different from preview mode. Can I fix the frame rate, or the minimum frame rate, of video capture in Android?
Finally, we found that iOS can fix the frame rate using videoMinFrameDuration and videoMaxFrameDuration.
Thanks.
First of all, please note that the camera API that you ask about was deprecated more than 3 years ago. The new camera2 API provides much more control over all aspects of capture, including frame rate.
This is especially true if your goal is smooth video recording. Actually, MediaRecorder performs its job decently even on older devices, but I understand that this knowledge has little practical value if for some reason you cannot use MediaRecorder.
Usually, the list of supported FPS ranges includes fixed ranges, e.g. (30, 30), intended exactly for video recording. Note that you are expected to choose a compliant (recommended) preview (video) resolution.
Two major factors cause frame rate variations within the declared range: exposure adjustments and focus adjustments. To achieve a uniform rate, you should disable autofocus. If your camera supports exposure control, lock it, too. Refrain from using exotic "scenes" and "effects"; SCENE_MODE_BARCODE and EFFECT_MONO don't seem to cause problems with frame rate. White balance is OK, too.
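A sketch of that lock-everything-down approach with the old API; check what the device actually supports first, since these are optional features:

Camera.Parameters p = camera.getParameters();
if (p.getSupportedFocusModes().contains(Camera.Parameters.FOCUS_MODE_FIXED)) {
    p.setFocusMode(Camera.Parameters.FOCUS_MODE_FIXED); // no refocusing during capture
}
if (p.isAutoExposureLockSupported()) {
    p.setAutoExposureLock(true); // freeze exposure once it has converged
}
camera.setParameters(p);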
There are other factors that cause frame rate fluctuations, and these are completely under your control.
Make sure that your camera callbacks do not interfere with, and are not delayed by the Main (UI) thread. To achieve that, you must open the camera on a secondary HandlerThread. The new camera2 API makes thread management for camera callbacks easier.
Don't use setPreviewCallback(), which automatically allocates pixel buffers for each frame. This is a significant burden for the garbage collector, which may lock all threads once in a while for a major cleanup. Instead, use setPreviewCallbackWithBuffer() and preallocate just enough pixel buffers to keep it always busy.
Don't perform heavy calculations in the context of your onPreviewFrame() callback. Pass all work to a different thread. Do your best to release the pixel buffer as early as possible.
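Putting the last three points together, a rough sketch; processOnWorkerThread() is a hypothetical hand-off to your own worker, which must copy whatever it needs because the buffer is recycled as soon as the callback returns:

HandlerThread cameraThread = new HandlerThread("CameraThread");
cameraThread.start();
new Handler(cameraThread.getLooper()).post(() -> {
    Camera camera = Camera.open(); // callbacks arrive on this thread, not the UI thread
    Camera.Size size = camera.getParameters().getPreviewSize();
    int bufferSize = size.width * size.height * 3 / 2; // NV21: 12 bits per pixel
    for (int i = 0; i < 3; i++) {
        camera.addCallbackBuffer(new byte[bufferSize]); // preallocate, no per-frame allocation
    }
    camera.setPreviewCallbackWithBuffer((data, cam) -> {
        processOnWorkerThread(data); // hypothetical: hand off quickly, copy inside if needed
        cam.addCallbackBuffer(data); // return the buffer for reuse
    });
    // setPreviewTexture()/setPreviewDisplay() and startPreview() omitted for brevity
});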
Even with the old camera API, if the device lists a supported FPS range of (30, 30), then you should be able to select this range and get consistent, fixed-rate video recording.
Unfortunately, some devices disregard your frame rate request once the scene conditions get too dark, and increase exposure times past 1/30s. For many applications, this is the preferable option, but such applications should simply be selecting a wider frame rate range like (15, 30).
I'm trying to get a stable frame rate from a simple camera Android application written based on the guide below.
http://developer.android.com/guide/topics/media/camera.html - Non intent version.
There is only preview; there is no image or video capture. Every time onPreviewFrame (implementing PreviewCallback) is called, I check timestamp differences to measure the frame rate. Though on average it meets the 15 FPS rate I set (setPreviewFpsRange(15000, 15000), verified as supported on the device using getSupportedPreviewFpsRange()), the individual frame rates vary from 5 fps to 40 fps.
Is there a way to fix this? What are the reasons for it? I guess one reason is application process priority: I observed that adding more applications reduced the fps, so one solution would be to increase the priority of this camera application. A second reason could be garbage collection and slow buffer copies in the preview. A third reason is that the camera API (not the new camera2 API of Android L; my device is not supported yet) was not designed for streaming camera data.
The exposure lock was also enabled to fix the frame rate.
It's most likely just how you are timestamping the frames. Try saving a sequence of preview frames pointed at a stopwatch to disk and viewing them with a YUV player (e.g. http://www.yuvtoolkit.com/).
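If you do keep measuring in code, a sketch of averaging over a window of frames instead of trusting single frame-to-frame deltas, which include callback scheduling jitter on top of the real capture interval (shown here as fields and a callback inside your PreviewCallback class):

private static final int WINDOW = 30;
private final long[] timestamps = new long[WINDOW];
private int frameIndex = 0;

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    timestamps[frameIndex % WINDOW] = System.nanoTime();
    if (frameIndex >= WINDOW) {
        long newest = timestamps[frameIndex % WINDOW];
        long oldest = timestamps[(frameIndex + 1) % WINDOW];
        double fps = (WINDOW - 1) * 1e9 / (newest - oldest); // WINDOW - 1 intervals in the window
        Log.d("FpsMeter", "average over last " + WINDOW + " frames: " + fps + " fps");
    }
    frameIndex++;
    camera.addCallbackBuffer(data); // when using setPreviewCallbackWithBuffer()
}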
I am writing an application which has a video recording feature. During normal day-light hours with lots of light I am able to get 30fps video to record.
However, when there is less light, the frame rate drops to around 7.5 fps (with exactly the same code). My guess would be that Android is doing something behind the scenes with the exposure time to ensure that the resulting video has the best image quality.
I, however, would prefer a higher fps to a better quality image. Assuming exposure is the issue, is there any way to control the exposure time to ensure a decent fps (15 fps+)? There are the functions setExposureCompensation() and setAutoExposureLock(), but they seem to do nothing.
Has anyone had this issue before? Is it even exposure that is causing my issue?
Any hints/suggestions would be great.
I am sorry but the accepted answer is totally wrong. In fact I have created an account just to correct this.
Noise information gets discarded depending on the bitrate anyway; I do not understand why someone would think this would be an extra load on the CPU at all.
In fact, the video frame rate on a mobile device has a lot to do with light exposure. In a low-light situation, exposure is increased automatically, which also means the shutter will stay open longer to let more light in. This reduces the number of frames you can capture in a second, and adds some motion blur on top. With a DSLR camera you could open up the aperture for more light without touching the shutter speed, but on mobile devices your aperture is fixed.
You could mess with exposure compensation to get more fps, but I do not think super dark video is what you want.
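If you do want to experiment with it anyway, a small sketch with the old API; the effect is device-dependent, and as the question notes, some devices ignore it entirely:

Camera.Parameters p = camera.getParameters();
int min = p.getMinExposureCompensation(); // returns 0 on devices without support
if (min < 0) {
    p.setExposureCompensation(Math.max(min, -2)); // a couple of steps darker buys back shutter speed
    camera.setParameters(p);
}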
More information:
https://anyline.com/news/low-end-android-devices-exposure-triangle/
There is a simple explanation here. Lower light means there is more noise in the video. With more noise, the encoding engine has to put in far more effort to get the compression it needs. Unless the encoder has a denoiser, it has far more noise to deal with than under normal conditions.
If you want a more technical answer: more noise means that the motion-estimation engine of the encoder is thrown for a toss. This is the part that consumes the most CPU cycles. The more the noise, the worse the compression, and hence even the other parts of the encoder are basically crunching more. More bits are generated, which means that the encoding and entropy engines are also crunching more, and hence the worse performance.
Generally, in high-end cameras a lot of noise is removed by the imaging pipeline in the sensor. However, don't expect that in a mobile phone sensor. [This is the ISO performance that you see in DSLRs.]
I had this issue with Android 4.2 on a Galaxy S III. After experimenting with the parameters, I found one call which started to work.
Look at Camera.Parameters; if you print them out, you'll see:
preview-fps-range=15000,30000;
preview-fps-range-values=(8000,8000),(10000,10000),(15000,15000),(15000,30000),(30000,30000);
The range allows the fps to "slow down".
The call setPreviewFpsRange(30000, 30000); enforces the fps to stay around 30.
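A sketch of the same thing in code; flatten() just dumps the parameter string, which is where the ranges above come from:

Camera.Parameters p = camera.getParameters();
Log.d("CameraParams", p.flatten()); // contains preview-fps-range-values=...
p.setPreviewFpsRange(30000, 30000); // units are fps * 1000, so this pins 30 fps
camera.setParameters(p);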
This is right; you should call setPreviewFpsRange() to get a constant fps. The frame rate you see is dropping because of the sensor: when light is low, the fps goes down so it can produce better pictures (as in still mode).
Also, to achieve a higher frame rate you should use:
Camera.Parameters parameters = camera.getParameters();
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO); // smooth, video-oriented focusing
parameters.setRecordingHint(true); // tell the camera HAL this is a video use case
camera.setParameters(parameters);