I'm trying to find the optimal settings for decoding barcodes with Android's tricky Camera2 API, using TEMPLATE_STILL_CAPTURE and CONTROL_SCENE_MODE_BARCODE while playing with all the other settings such as FPS, AF and AE.
So far however, I have been unable to remove the banding seen when reading barcodes from screens.
What would be the way to remove, or reduce, banding when using the Camera2 API taking pictures of screens?
You should set the anti-banding mode; see CONTROL_AE_ANTIBANDING_MODE:
...
.setCaptureRequestOption(
        CaptureRequest.CONTROL_AE_ANTIBANDING_MODE,
        CameraMetadata.CONTROL_AE_ANTIBANDING_MODE_AUTO
)
// Scene modes only take effect when CONTROL_MODE is USE_SCENE_MODE, not AUTO.
.setCaptureRequestOption(
        CaptureRequest.CONTROL_MODE,
        CameraMetadata.CONTROL_MODE_USE_SCENE_MODE
)
.setCaptureRequestOption(
        CaptureRequest.CONTROL_SCENE_MODE,
        CameraMetadata.CONTROL_SCENE_MODE_BARCODE
)
...
https://developer.android.com/reference/android/hardware/camera2/CaptureRequest#CONTROL_AE_ANTIBANDING_MODE
Depending on the technology used by the screen, the screen itself may be flickering at some rate. Unfortunately, the screen is also likely to be bright, meaning the camera has to reduce its exposure time so it doesn't overexpose.
A shorter exposure time makes it harder to apply anti-banding, since anti-banding normally involves having an exposure time that's a multiple of the flicker period. That is, if the flicker is at 100 Hz (10 ms period), you need an exposure time that's a multiple of 10 ms. But if the screen is so bright that the camera needs a 5 ms exposure, then there's no way to cancel out the flicker.
In addition, most camera flicker detectors only handle the 50 and 60 Hz mains-power flicker rates. Increasingly they do more, since LED lighting often flickers at rates unrelated to the power-line frequency, but if the device only handles 50 and 60 Hz, it probably can't compensate for most displays that flicker.
Besides turning on antibanding mode (which should be on by default anyway), you could try increasing exposure compensation to make the image brighter, which may give the device enough room to set an exposure time for flicker reduction. But beyond that there's not much you can do, unfortunately.
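If it helps, here is a rough sketch of nudging exposure compensation up by one EV with plain Camera2; it assumes you already have a CameraCharacteristics object called characteristics, a CaptureRequest.Builder called builder and an open CameraCaptureSession called session (those names are just placeholders):
Range<Integer> compRange = characteristics.get(CameraCharacteristics.CONTROL_AE_COMPENSATION_RANGE);
Rational compStep = characteristics.get(CameraCharacteristics.CONTROL_AE_COMPENSATION_STEP);
// +1 EV expressed in compensation steps, clamped to what the device supports.
int plusOneEv = Math.min(Math.round(1.0f / compStep.floatValue()), compRange.getUpper());
builder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, plusOneEv);
builder.set(CaptureRequest.CONTROL_AE_ANTIBANDING_MODE, CameraMetadata.CONTROL_AE_ANTIBANDING_MODE_AUTO);
session.setRepeatingRequest(builder.build(), null, null);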
Related
When setting manual controls in Android by using the Camera2 API, what is the purpose of CaptureRequest.SENSOR_FRAME_DURATION?
I have read the documentation several times but still can't understand its purpose, or what value to set in relation to the exposure time and ISO.
I understand that CaptureRequest.SENSOR_EXPOSURE_TIME specifies how long the sensor is exposed to light, and that CaptureRequest.SENSOR_SENSITIVITY is the sensor's sensitivity to light (ISO), but I have no idea what SENSOR_FRAME_DURATION is for and how it relates to the exposure time and sensor sensitivity.
For example, if I set a long exposure time of 1 second or 30 seconds, then what is the value that I should set in SENSOR_FRAME_DURATION? And how does it relate to the other sensor controls?
FRAME_DURATION is the same concept as output frame rate. That is, how often is an image read out from the image sensor? Frame rate is generally reported as frames per second, while FRAME_DURATION is the inverse of that - the duration of a single frame.
Since the camera2 API is all about per-frame control, having the duration as a per-frame property is appropriate.
FRAME_DURATION can't be shorter than EXPOSURE_TIME (since you can't read the image from the sensor until exposure is complete), but the API handles this for you - if you ask for a FRAME_DURATION that's too short compared to EXPOSURE_TIME, it gets automatically increased.
That said, often you may want a consistent frame rate (such as 30 fps for video recording), so you'd set your FRAME_DURATION to 1/30 s = 33333333 ns, and then vary EXPOSURE_TIME for manual exposure control. As long as you keep EXPOSURE_TIME less than 1/30 s, you'll get a steady frame rate and still have manual exposure control.
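For illustration, a minimal sketch of that setup, assuming builder is a CaptureRequest.Builder and session is an open CameraCaptureSession (values are arbitrary and should be adjusted to your device):
builder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF); // manual exposure
builder.set(CaptureRequest.SENSOR_FRAME_DURATION, 33333333L); // 1/30 s per frame
builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 10000000L);  // 10 ms, keep this below the frame duration
builder.set(CaptureRequest.SENSOR_SENSITIVITY, 400);          // ISO 400
session.setRepeatingRequest(builder.build(), null, null);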
The minimum possible frame duration (and therefore the maximum frame rate) depends on the output resolution(s) and format(s) you've asked for in the camera capture session. Generally, bigger resolutions take longer to read out, putting a limit on minimum frame duration. Cameras that support the BURST_CAPTURE camera capability can handle at least 20fps for 8 MP captures, or better.
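You can query these limits per output format and size; a small sketch, assuming characteristics came from getCameraCharacteristics() and the size is one of the camera's supported outputs:
StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size size = new Size(3264, 2448); // example ~8 MP output; pick one of map.getOutputSizes(...)
long minFrameDurationNs = map.getOutputMinFrameDuration(ImageFormat.YUV_420_888, size);
double maxFps = 1e9 / minFrameDurationNs; // ns per frame -> frames per second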
At the image sensor level, frame duration is implemented by adding in extra vertical blanking time so that EXPOSURE + VBLANK = FRAME_DURATION. The full picture is also more complicated in that typical CMOS image sensors can be exposing some rows of the image while others are being read out (rolling shutter) so the actual timing diagrams look more complicated. You don't generally have to care when just doing basic manual exposure control, however.
Most image sensors in smartphones use a rolling shutter, which reads out pixels line by line; in that case FRAME_DURATION = FRAME_READOUT_TIME + VBLANK.
I want to fix the frame rate of the camera preview in Android, e.g., at 20 fps or 30 fps. However, we find that the frame rate is unstable.
The Android documentation says that the frame rate fluctuates between the minimum and the maximum frame rate defined in getSupportedPreviewFpsRange():
https://developer.android.com/reference/android/hardware/Camera.Parameters.html#getSupportedPreviewFpsRange%28%29
My questions are:
1) Which factors influence the frame rate? Exposure time, white balance, frame resolution, background CPU load, etc.?
2) Is there any way to fix the frame rate by adjusting the factors above?
3) In my project, a higher frame rate is better. If the frame rate ends up being unstable, can I at least increase, or fix, the minimum frame rate?
4) It seems that video recording behaves somewhat differently from preview mode. Can I fix the frame rate, or the minimum frame rate, for video recording in Android?
Finally, we found that iOS can fix the frame rate using videoMinFrameDuration and videoMaxFrameDuration.
Thanks.
First of all, please note that the camera API you ask about was deprecated more than three years ago. The new camera2 API provides much more control over all aspects of capture, including frame rate, which matters especially if your goal is smooth video recording.
Actually, MediaRecorder performs its job decently even on older devices, but I understand that this knowledge has little practical value if for some reason you cannot use MediaRecorder.
Usually, the list of supported FPS ranges includes fixed ranges, e.g. 30 fps, intended exactly for video recording. Note that you are expected to choose a compliant (recommended) preview (video) resolution.
Two major factors cause frame rate variations within the declared range: exposure adjustments and focus adjustments. To achieve a uniform rate, you should disable autofocus. If your camera supports exposure control, you should lock it, too. Refrain from using exotic "scenes" and "effects". SCENE_MODE_BARCODE and EFFECT_MONO don't seem to cause problems with frame rate. White balance is OK, too.
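Something along these lines with the legacy android.hardware.Camera API the question uses (camera is assumed to be an already-opened Camera instance):
Camera.Parameters params = camera.getParameters();
if (params.getSupportedFocusModes().contains(Camera.Parameters.FOCUS_MODE_INFINITY)) {
    params.setFocusMode(Camera.Parameters.FOCUS_MODE_INFINITY); // no AF hunting during preview
}
if (params.isAutoExposureLockSupported()) {
    params.setAutoExposureLock(true); // ideally set this once AE has settled on the scene
}
camera.setParameters(params);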
There are other factors that can cause frame rate fluctuations, and these are completely under your control.
Make sure that your camera callbacks do not interfere with, and are not delayed by, the main (UI) thread. To achieve that, you must open the camera on a secondary HandlerThread. The new camera2 API makes thread management for camera callbacks easier.
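For the old API that could look roughly like this (names are placeholders); preview callbacks are then delivered on the thread that called open():
HandlerThread cameraThread = new HandlerThread("CameraBackground");
cameraThread.start();
Handler cameraHandler = new Handler(cameraThread.getLooper());
cameraHandler.post(new Runnable() {
    @Override public void run() {
        camera = Camera.open(); // callbacks will now arrive on cameraThread's looper
        // configure parameters and start the preview here
    }
});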
Don't use setPreviewCallback(), which automatically allocates pixel buffers for each frame. This is a significant burden for the garbage collector, which may lock all threads once in a while for a major cleanup. Instead, use setPreviewCallbackWithBuffer() and preallocate just enough pixel buffers to keep it always busy.
Don't perform heavy calculations in the context of your onPreviewFrame() callback. Pass all work to a different thread. Do your best to release the pixel buffer as early as possible.
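A sketch of that buffer-recycling pattern, assuming an NV21 preview and a processingHandler bound to a worker thread; decodeFrame() is a placeholder for your own processing:
Camera.Size s = camera.getParameters().getPreviewSize();
final int bufferSize = s.width * s.height * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
for (int i = 0; i < 3; i++) {
    camera.addCallbackBuffer(new byte[bufferSize]); // small preallocated pool
}
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override public void onPreviewFrame(final byte[] data, final Camera cam) {
        processingHandler.post(new Runnable() {
            @Override public void run() {
                decodeFrame(data);           // heavy work happens off the camera thread
                cam.addCallbackBuffer(data); // return the buffer as soon as possible
            }
        });
    }
});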
Even with the old camera API, if the device lists a supported FPS range of (30, 30), then you should be able to select this range and get consistent, fixed-rate video recording.
Unfortunately, some devices disregard your frame rate request once the scene conditions get too dark, and increase exposure times past 1/30s. For many applications, this is the preferable option, but such applications should simply be selecting a wider frame rate range like (15, 30).
I'm trying to capture images with 30 seconds exposure times in my app (I know it's possible since the stock camera allows it).
But SENSOR_INFO_EXPOSURE_TIME_RANGE (which is supposed to be in nanoseconds) gives me the range:
13272 - 869661901
In seconds that would be just
0.000013272 - 0.869661901
which is obviously less than a second.
How can I use longer exposure times?
Thanks in advance!
The answer to your question:
You can't. You checked exactly the right information and interpreted it correctly. Any value you set for the exposure time longer than that will be clipped to that max amount.
The answer you want:
You can still get what you want, though, by faking it. You want 30 continuous seconds' worth of photons falling on the sensor, which you can't get. But you can get something (virtually) indistinguishable from it by accumulating 30 seconds' worth of photons with tiny missing intervals interspersed.
At a high level, what you need to do is create a List of CaptureRequests and pass it to CameraCaptureSession.captureBurst(...). This will take the shots with as little interstitial time as possible. When each frame of image data is available, pass it to some new buffer somewhere and accumulate the information (simple point-wise addition). This is probably most properly done with an Allocation as the output Surface and some RenderScript.
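A very rough sketch of that burst, assuming builder is a CaptureRequest.Builder with your output Surface added, maxExposureNs is the upper bound read from SENSOR_INFO_EXPOSURE_TIME_RANGE, and backgroundHandler is a handler on your camera thread:
builder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF);
builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, maxExposureNs);
int shots = (int) Math.ceil(30e9 / maxExposureNs); // enough frames to cover roughly 30 s
List<CaptureRequest> burst = new ArrayList<>();
for (int i = 0; i < shots; i++) {
    burst.add(builder.build());
}
session.captureBurst(burst, new CameraCaptureSession.CaptureCallback() {}, backgroundHandler);
// Each frame delivered to the output Surface is then added into the accumulation buffer.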
Notes on data format:
The right way to do this is to use the RAW_SENSOR output format if you can. That way the accumulated output really is directly proportional to the light that was incident to the sensor over the whole 30s.
If you can't use that, for some reason, I would recommend using YUV_420_888 output, and make sure you set the tone map curve to be linear (unfortunately you have to do this manually by creating a curve with two points). Otherwise the non-linearity introduced will ruin our scheme. (Although I'm not sure simple addition is exactly right in a linear YUV space, but it's a first approach at least.) Whether you use this approach or RAW_SENSOR, you'll probably want to apply your own gamma curve/tone map after accumulation to make it "look right."
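The two-point linear curve might look like this (builder again being your CaptureRequest.Builder):
float[] linear = {0f, 0f, 1f, 1f}; // (Pin, Pout) pairs: maps 0 -> 0 and 1 -> 1
TonemapCurve curve = new TonemapCurve(linear, linear, linear); // same curve for R, G and B
builder.set(CaptureRequest.TONEMAP_MODE, CameraMetadata.TONEMAP_MODE_CONTRAST_CURVE);
builder.set(CaptureRequest.TONEMAP_CURVE, curve);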
For the love of Pete don't use JPEG output, for many reasons, not the least of which is that it will most likely add a LOT of interstitial time between exposures, thereby ruining our approximation of 30 s of continuous exposure.
Note on exposure equivalence:
This will produce almost exactly the exposure you want, but not quite. It differs in two ways.
There will be small missing periods of photon information in the middle of this chunk of exposure time. But on the time scale you are talking about (30s), missing a few milliseconds of light here and there is trivial.
The image will be slightly noisier than if you had taken a true single exposure of 30 s. This is because each time you read out the pixel values from the actual sensor, a little electronic read noise gets added to the information. Since independent noise adds in quadrature, the roughly 35 readouts needed for your specific problem give you about 35 times the read-noise variance, i.e. about six times the noise standard deviation, of a single exposure. There's no way around this, sorry, but it might not even be noticeable; this read noise is usually fairly small relative to the meaningful photographic signal. It depends on the camera sensor quality (and the ISO, but I imagine for this application you need that to be high).
(Bonus!) This exposure will actually be superior in one way: Areas that might have been saturated (pure white) in a 30s exposure will still retain definition in these far shorter exposures, so you're basically guaranteed not to lose your high end details. :-)
You can't always trust SENSOR_INFO_EXPOSURE_TIME_RANGE (as of May 2017). Try manually increasing the time and see what happens. I know my Pixel will actually take a 1.9 s shot, even though SENSOR_INFO_EXPOSURE_TIME_RANGE reports a sub-second maximum.
I am writing an application which has a video recording feature. During normal daylight hours with lots of light I am able to record 30 fps video.
However, when there is less light, the frame rate drops to around 7.5 fps (with exactly the same code). My guess would be that Android is doing something behind the scenes with the exposure time to ensure that the resulting video has the best image quality.
I, however, would prefer a higher fps over a better-quality image. Assuming exposure is the issue, is there any way to control the exposure time to ensure a decent fps (15 fps or more)? There are the functions setExposureCompensation() and setAutoExposureLock(), but they seem to do nothing.
Has anyone had this issue before? Is it even exposure that is causing my issue?
Any hints/suggestions would be great.
I am sorry but the accepted answer is totally wrong. In fact I have created an account just to correct this.
Noise information gets discarded depending on the bitrate anyway; I do not understand why someone would think this would be an extra load on the CPU at all.
In fact, the video frame rate on a mobile device has a lot to do with light exposure. In a low-light situation, exposure is increased automatically, which also means the shutter stays open longer to let more light in. That reduces the number of frames you can capture per second, and adds some motion blur on top. With a DSLR camera you could open the aperture for more light without touching the shutter speed, but on mobile devices the aperture is fixed.
You could mess with exposure compensation to get more fps, but I do not think super dark video is what you want.
More information;
https://anyline.com/news/low-end-android-devices-exposure-triangle/
There is a simple explanation here. Lower light means there is more noise in the video. With more noise, the encoding engine has to put in far more effort to get the compression it needs. Unless the encoder has a denoiser, it has far more noise to deal with than under normal conditions.
If you want a more technical answer: more noise means that the motion-estimation engine of the encoder is thrown for a toss. This is the part that consumes the most CPU cycles. The more the noise, the worse the compression, and hence even the other parts of the encoder are basically crunching more. More bits are generated, which means that the encoding and entropy engines are also crunching more, and hence the worse the performance.
Generally, in high-end cameras a lot of noise is removed by the imaging pipeline in the sensor. However, don't expect that in a mobile phone sensor. [This is the ISO performance that you see in DSLRs.]
I had this issue with Android 4.2 on a Galaxy S III. After experimenting with the parameters, I found one call which started to work.
Look at Camera.Parameters; if you print them out, you'll see:
preview-fps-range=15000,30000;
preview-fps-range-values=(8000,8000),(10000,10000),(15000,15000),(15000,30000),(30000,30000);
The range allows the fps to "slow down".
The call setPreviewFpsRange(30000, 30000); forces the fps to stay at around 30.
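Rather than hard-coding 30000, you can pick a fixed range from the supported list; a small sketch (camera being an open Camera instance):
Camera.Parameters params = camera.getParameters();
int[] fixedRange = null;
for (int[] range : params.getSupportedPreviewFpsRange()) {
    if (range[Camera.Parameters.PREVIEW_FPS_MIN_INDEX] == range[Camera.Parameters.PREVIEW_FPS_MAX_INDEX]) {
        fixedRange = range; // keeps the last (typically highest) fixed range, e.g. (30000, 30000)
    }
}
if (fixedRange != null) {
    params.setPreviewFpsRange(fixedRange[0], fixedRange[1]);
    camera.setParameters(params);
}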
This is right; you should call setPreviewFpsRange() to get a constant fps. The frame rate you see is dropping because of the sensor: when light is low, the fps goes down so it can produce better pictures (as in still mode).
Also to achieve higher frame rate you should use:
Camera.Parameters parameters = camera.getParameters();
// Continuous video AF avoids the periodic focus sweeps of the default focus mode.
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
// Hints to the camera HAL that the intent is video, optimising for a steady frame rate.
parameters.setRecordingHint(true);
camera.setParameters(parameters);
Given a known periodic motion (e.g., walking), I'd like to take a full resolution snapshot at the same point in the motion (i.e., the same time offset within different periods). However on the Nexus S (currently running OS 4.1.1 but the same was true of previous OS versions), I'm seeing so much variability in the shutter lag that I cannot accurately plan the timing of the snapshot. Here is a histogram of the shutter lags of 50 photographs. (I measured the shutter lag with one System.nanoTime() just before Camera.takePicture() and another System.nanoTime() at the beginning of the shutter callback. The camera lens was consistently covered to remove any variability due to lighting.)
Is there anything I can do in the application to reduce this shutter lag variability? (In this application, the mean lag can be any duration but the standard deviation must be small ... much smaller than the 0.5 s standard deviation of the shutter lags shown in the above histogram.) I'm hoping someone has a clever suggestion. If I don't get any suggestions, I'll post a feature request in the Android bug tracker.
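For reference, the measurement described above is roughly this (camera being the open Camera instance, and jpegCallback a placeholder for whatever PictureCallback saves the image):
final long requestedAt = System.nanoTime();
camera.takePicture(new Camera.ShutterCallback() {
    @Override public void onShutter() {
        long lagMs = (System.nanoTime() - requestedAt) / 1000000;
        Log.d("ShutterLag", "lag = " + lagMs + " ms");
    }
}, null, jpegCallback);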
UPDATE:
I turned off auto-focus (by setting focus to infinity) following RichardC's suggestion at https://groups.google.com/forum/?hl=en&fromgroups#!topic/android-developers/WaHhTicKRA0
It helped, as shown in the following histogram. Any ideas to reduce the shutter lag variability even more?
UPDATE 2:
I disabled the remaining auto parameters: white balance, scene mode, flash. The following histogram of shutter lag time variability seems to be about as good as it gets for a Nexus S running OS 4.1.1. It would be nice to have much less variability in the shutter lag time, perhaps by specifying an optional minimum shutter lag time in Camera.takePicture() which would delay the shutter if it were ready before the specified minimum. I've posted a feature request to the Android issue tracker. If you are also interested in getting the feature, "star" it at http://code.google.com/p/android/issues/detail?id=35785
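Pulled together, the lock-downs from the two updates look roughly like this with the legacy API (camera being the open Camera instance):
Camera.Parameters params = camera.getParameters();
params.setFocusMode(Camera.Parameters.FOCUS_MODE_INFINITY); // no AF sweep before capture
params.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
params.setSceneMode(Camera.Parameters.SCENE_MODE_AUTO);     // avoid extra scene processing
if (params.isAutoWhiteBalanceLockSupported()) {
    params.setAutoWhiteBalanceLock(true);                   // freeze AWB once it has settled
}
camera.setParameters(params);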
What's the x axis and what's the y axis?
You can use low-level coding (like the NDK or RenderScript) instead of Java.
You might be able to change the settings of the camera to make it faster.