After implementing the camera2 API for the in-app camera, I noticed that on Samsung devices the images appear blurry. After searching around, I found the Samsung Camera SDK (http://developer.samsung.com/galaxy#camera). After implementing the SDK, the images on the Samsung Galaxy S7 are fine now, but on the Galaxy S6 they are still blurry. Has anyone experienced this kind of issue with Samsung devices?
EDIT:
To complement @rcsumners' comment: I am setting autofocus by using
mPreviewBuilder.set(SCaptureRequest.CONTROL_AF_TRIGGER, SCaptureRequest.CONTROL_AF_TRIGGER_START);
mSCameraSession.capture(mPreviewBuilder.build(), new SCameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(SCameraCaptureSession session, SCaptureRequest request, STotalCaptureResult result) {
        isAFTriggered = true;
    }
}, mBackgroundHandler);
It is a long-exposure image where the user has to take a picture of a static, non-moving object. For this I am using CONTROL_AF_MODE_MACRO
mCaptureBuilder.set(SCaptureRequest.CONTROL_AF_MODE, SCaptureRequest.CONTROL_AF_MODE_MACRO);
and I am also enabling auto flash if it is available
requestBuilder.set(SCaptureRequest.CONTROL_AE_MODE,
        SCaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
I am not really an expert in this API; I mostly followed the SDK example app.
There could be a number of issues causing this problem. One prominent one is the dimensions of your output image. I ran the Camera2 API and the preview was clear, but the output was quite blurry:
val characteristics: CameraCharacteristics? = cameraManager.getCameraCharacteristics(cameraId)
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)?.getOutputSizes(ImageFormat.JPEG) // The issue
var width = imageDimension.width
var height = imageDimension.height
if (size != null) {
    width = size[0].width
    height = size[0].height
}
val imageReader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 5)
The line below was returning a size of about 245x144, which was way too small to be sent to the image reader. Somehow the output was being stretched to that size, making it end up blurry. Therefore I removed this line:

val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)?.getOutputSizes(ImageFormat.JPEG) // this was returning a small size
Setting the width and height manually resolved the issue.
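If you'd rather keep deriving the size from the stream configuration map, here is a sketch that picks the largest JPEG size by area instead of trusting index 0 (getOutputSizes() makes no ordering guarantee); it reuses the characteristics and imageDimension values from above:

val largest = characteristics
    ?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
    ?.getOutputSizes(ImageFormat.JPEG)
    ?.maxByOrNull { it.width.toLong() * it.height } // largest area wins

// Fall back to the manually chosen dimensions if the map is unavailable.
val imageReader = ImageReader.newInstance(
    largest?.width ?: imageDimension.width,
    largest?.height ?: imageDimension.height,
    ImageFormat.JPEG,
    5
)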
You're setting the AF trigger for one frame, but then are you waiting for AF to complete? For AF_MODE_MACRO (are you verifying the device lists support for this AF mode?) you need to wait for AF_STATE_FOCUSED_LOCKED before the image is guaranteed to be stable and sharp. (You may also receive NOT_FOCUSED_LOCKED if the AF algorithm can't reach sharp focus, which could be because the object is just too close for the lens, or the scene is too confusing)
On most modern devices, it's recommended to use CONTINUOUS_PICTURE and not worry about AF triggering unless you really want to lock focus for some time period. In that mode, the device will continuously try to focus to the best of its ability. I'm not sure all that many devices support MACRO, to begin with.
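As an illustration of that waiting step, a minimal sketch in standard Camera2 terms (the Samsung SDK mirrors these names with an S prefix); takePicture() is a hypothetical placeholder for issuing the still capture:

// Watch CONTROL_AF_STATE on the preview results after sending AF_TRIGGER_START.
private var afTriggered = false

private val previewCallback = object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        if (!afTriggered) return
        when (result.get(CaptureResult.CONTROL_AF_STATE)) {
            CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED,
            CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED -> {
                afTriggered = false
                takePicture() // hypothetical: only issue the still capture now
            }
            else -> Unit // AF still scanning; keep waiting
        }
    }
}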
So I have an issue with a device (Alcatel 5033D 1) that always gives me a resolution of 480x640 for CameraX analysis. This is the latest version of the code in which I initialize it:
private void initCameraAnalysis(Size resolution) {
    Log.d(TAG, "initCameraAnalysis: ");
    cameraAnalysis = new ImageAnalysis.Builder()
            .setTargetResolution(resolution)
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .setImageQueueDepth(0)
            .build();
    cameraAnalysis.setAnalyzer(
            cameraExecutor,
            new QualityAnalyzer()
    );
}
I just added this part to see if it would help; these calls weren't there initially, and they don't actually help with this issue:
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.setImageQueueDepth(0)
The resolution is passed when calling that method as follows, so I'm requesting 720x1280, which is what I need (using portrait orientation):
initCameraAnalysis(new Size(720, 1280));
Now, this is a low-end device and it always falls back to 480x640, but I have another app, made by a colleague, that still uses Camera2, and that one can do 720x1280 on this same device without issue, so I know for sure the device is actually capable of 720x1280.
Moreover, I have added an ImageCapture use case and that one can do 720x1280 without issue too, but I need it to do 720x1280 during the analysis.
So I wonder if there'd be any way to force 720x1280 in the analysis use case, despite CameraX's stubbornness in not taking it, even if it means using some Camera2 extension or whatever.
Otherwise it's likely I'd have to rewrite the whole app using Camera2 instead, which seems like going backwards in time...
Thanks a lot in advance!
I don't know if you have seen it, but just in case you did not, I would like to point out what is stated here:
Express the resolution Size in the coordinate frame after rotating the supported sizes by the target rotation. For example, a device with portrait natural orientation in natural target rotation requesting a portrait image may specify 480x640, and the same device, rotated 90 degrees and targeting landscape orientation may specify 640x480.
Maybe also give it a shot with:
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(1280, 720))
    .build()
I'm currently working on an app in C++ using the Android NDK, and I need to create a sampler to access the camera output image.
I have done this using AIMAGE_FORMAT_YUV_420_888 and a VkSamplerYcbcrConversion for accessing the image in the hardware buffer. I do the YUV -> RGB conversion in a shader, and it all looks good on my phone.
I have since discovered that this doesn't work on Samsung phones, in my case specifically the Samsung Galaxy S10/S10+.
The reason is that when I set up an image reader with AIMAGE_FORMAT_YUV_420_888, I get a camera error on Samsung devices. On my OnePlus and on another phone I tried, the pipeline worked entirely as expected. I created a very simple test setup that just tries to open the camera with that image format in the ImageReader, and on the Samsung S10 I got the error; when I changed the ImageReader format to AIMAGE_FORMAT_JPEG, the error went away and the camera started as expected.
AImageReader* SimpleCamera::CreateJpegReader()
{
    AImageReader* reader = nullptr;
    // media_status_t status = AImageReader_new(640, 480, AIMAGE_FORMAT_JPEG,
    //AIMAGE_FORMAT_RGBA_8888
    //media_status_t status = AImageReader_new(640, 480, AIMAGE_FORMAT_RGB_565, 4, &reader);
    media_status_t status = AImageReader_newWithUsage(640, 480,
            //AIMAGE_FORMAT_RGBA_8888,
            //AIMAGE_FORMAT_RGB_565,
            //AIMAGE_FORMAT_RGB_888,
            AIMAGE_FORMAT_JPEG,
            AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE | AHARDWAREBUFFER_USAGE_CPU_READ_RARELY,
            4, &reader);

    if (status != AMEDIA_OK) {
        LOGE("Couldn't create new image reader");
        return nullptr;
    }

    AImageReader_ImageListener listener{
            .context = nullptr,
            .onImageAvailable = imageCallback1,
    };
    AImageReader_setImageListener(reader, &listener);
    return reader;
}
No format other than AIMAGE_FORMAT_JPEG is guaranteed to be supported, but JPEG doesn't seem to work with the VkSamplerYcbcrConversion because the image layout is different.
Has anyone come up against this issue before? And if so how did you resolve it?
At a high level the goal is: in C++, get the image out of the camera2 API and onto a VkImage. If anyone knows an alternative way of doing that, I'm also all ears.
Try using ImageFormat.PRIVATE with the USAGE_GPU_SAMPLED_IMAGE flag. This used to work fine on the mentioned Samsung devices in particular.
Please make sure to read the Vulkan specification, as there are quite a few Android-specific and VkSamplerYcbcrConversion requirements.
I can also recommend taking a look at this great project, which uses the Android camera2 API and Vulkan.
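For example, a sketch of that suggestion in Kotlin terms (API 26+); in the NDK the equivalent would be AIMAGE_FORMAT_PRIVATE passed to AImageReader_newWithUsage. The 640x480 size is just carried over from the question:

// PRIVATE keeps the buffer layout opaque to the app; the GPU samples it directly.
val reader = ImageReader.newInstance(
    640, 480,
    ImageFormat.PRIVATE,
    4, // maxImages
    HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE
)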
I used the latest Camera2Basic sample program as a source for my trials:
https://github.com/android/camera-samples.git
Basically, I configured the CaptureRequest before calling the capture() function in the takePhoto() function, like this:
private fun prepareCaptureRequest(captureRequest: CaptureRequest.Builder) {
    // set all needed camera settings here
    captureRequest.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_OFF)
    captureRequest.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_OFF)
    //captureRequest.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_CANCEL)
    //captureRequest.set(CaptureRequest.CONTROL_AWB_LOCK, true)
    captureRequest.set(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_OFF)
    captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF)
    //captureRequest.set(CaptureRequest.CONTROL_AE_LOCK, true)
    //captureRequest.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_CANCEL)
    //captureRequest.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_FAST)

    // flash
    if (mState == CaptureState.PRECAPTURE) {
        //captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF)
        captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF)
    }
    if (mState == CaptureState.TAKEPICTURE) {
        //captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_SINGLE)
        //captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_ALWAYS_FLASH)
        captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_SINGLE)
    }

    val iso = 100
    captureRequest.set(CaptureRequest.SENSOR_SENSITIVITY, iso)

    val fractionOfASecond = 750.toLong()
    captureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 1000.toLong() * 1000.toLong() * 1000.toLong() / fractionOfASecond)
    //val exposureTime = 133333.toLong()
    //captureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposureTime)

    //val characteristics = cameraManager.getCameraCharacteristics(cameraId)
    //val configs: StreamConfigurationMap? = characteristics[CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP]
    //val frameDuration = 33333333.toLong()
    //captureRequest.set(CaptureRequest.SENSOR_FRAME_DURATION, frameDuration)

    val focusDistanceCm = 20.0.toFloat() // 20cm
    captureRequest.set(CaptureRequest.LENS_FOCUS_DISTANCE, 100.0f / focusDistanceCm)

    //captureRequest.set(CaptureRequest.COLOR_CORRECTION_MODE, CameraMetadata.COLOR_CORRECTION_MODE_FAST)
    captureRequest.set(CaptureRequest.COLOR_CORRECTION_MODE, CaptureRequest.COLOR_CORRECTION_MODE_TRANSFORM_MATRIX)
    val colorTemp = 8000.toFloat()
    val rggb = colorTemperature(colorTemp)
    //captureRequest.set(CaptureRequest.COLOR_CORRECTION_TRANSFORM, colorTransform)
    captureRequest.set(CaptureRequest.COLOR_CORRECTION_GAINS, rggb)
}
but the picture that is returned is never the one where the flash is at its brightest. This is on a Google Pixel 2 device.
As I only take one picture, I am also not sure how to check the CaptureResult states to find the correct frame, as there is only one.
I already looked at the other solutions to similar problems here, but they were either never really solved or somehow took the picture during the capture preview, which I don't want.
Another strange observation is that on other devices the images are taken (though not always at the right moment), but then the manual values I set are not reflected in the JPEG metadata of the image.
If needed I can put my git fork on github.
Long exposure time in combination with flash seems to be the basic issue, and when the results are not that good, it means that the timing of your presets isn't good enough. You'd have to optimize the exposure time's duration in relation to the flash's timing (just check the EXIF of some photos for example values). You could measure the luminosity with an ImageAnalysis.Analyzer (this has been removed from the sample application, but older revisions still have an example). I've also tried with the default Motorola camera app; there the photo likewise seems to be taken shortly after the flash, when the brightness is already decaying (in order to avoid dazzling brightness). That's the CaptureState.PRECAPTURE, where you switch the flash off. Flashing in two stages is rather the default, and this might yield better results.
If you want it to be dazzlingly bright (even if this is generally not desired), you could as well first switch on the torch, take the image, then switch the torch off again (I use something like this, but only for barcode scanning). This would at least prevent any exposure/flash timing issues.
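A minimal sketch of that torch sequence (previewBuilder, stillBuilder, session, and handler are assumed to exist; this is not the sample's own code):

// 1) Switch the torch on via the repeating preview request.
// FLASH_MODE_TORCH is honored when CONTROL_AE_MODE is ON or OFF (not the AUTO_FLASH variants).
previewBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_TORCH)
session.setRepeatingRequest(previewBuilder.build(), null, handler)

// 2) Capture the still while the torch is lit; keep the torch in this request too.
stillBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_TORCH)
session.capture(stillBuilder.build(), object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        // 3) Torch off again once the still frame has completed.
        previewBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF)
        session.setRepeatingRequest(previewBuilder.build(), null, handler)
    }
}, handler)

Waiting a few frames after step 1 before capturing gives the pipeline time to settle under the torch light.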
When the changed values are not represented in the EXIF, you'd need to use ExifInterface in order to update them (there's an example which updates the orientation, but one can update any value).
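For the EXIF part, a minimal sketch using androidx.exifinterface.media.ExifInterface; jpegFile and the written values are placeholders for whatever you actually requested:

val exif = ExifInterface(jpegFile.absolutePath)
exif.setAttribute(ExifInterface.TAG_EXPOSURE_TIME, (1.0 / 750.0).toString()) // 1/750 s, as set above
exif.setAttribute(ExifInterface.TAG_PHOTOGRAPHIC_SENSITIVITY, "100")         // ISO 100
exif.saveAttributes() // rewrites the tags inside the existing JPEG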
So I migrated from the legacy camera API to CameraX, and even though it was quite simple to set up, I've noticed one issue: the camera now seems to take almost twice as long, if not longer, to start showing the preview than it did before.
I'm testing on a Galaxy S7.
My code looks like this:
val previewConfig = PreviewConfig.Builder().apply {
    setTargetAspectRatio(Rational(1, 1))
    setTargetResolution(Size(binding.codeScannerView.width, binding.codeScannerView.height))
}.build()

val preview = Preview(previewConfig)
preview.setOnPreviewOutputUpdateListener { preview ->
    val parent = binding.codeScannerView.parent as ViewGroup
    parent.removeView(binding.codeScannerView)
    parent.addView(binding.codeScannerView, 0)
    binding.codeScannerView.surfaceTexture = preview.surfaceTexture
}

val analyzerConfig = ImageAnalysisConfig.Builder().apply {
    val analyzerThread = HandlerThread("QrCodeReader").apply { start() }
    setCallbackHandler(Handler(analyzerThread.looper))
    setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
}.build()

val analyzerUseCase = ImageAnalysis(analyzerConfig).apply {
    analyzer = QrCodeAnalyzer(requireContext(), Handler(), { qrCode ->
        if (activity == null) {
            return@QrCodeAnalyzer
        }
        presenter.disableCameraPreview()
        presenter.updateTable(qrCode.toLowerCase().parseTableId(), isFromOrder, Screens.MENU_SCREEN)
    })
}

CameraX.bindToLifecycle(this, preview, analyzerUseCase)
Any ideas on how to make it appear faster?
P.S. I can also see tearing in the preview once in a while.
So I've spent quite some time trying to find the solution, to no avail.
I have even encountered multiple issues (with alpha04), like:
- Random SIGSEGV crashes when turning the camera on/off.
- Sample projects and codelabs from Google also were not working 100% of the time on the devices I tested.
- At some point I got a notification that the camera was being used in the background, even though it was bound to the lifecycle and the window was closed; that is the last thing I want my users to see.
- The camera was indeed loading slower, and I was getting horrible FPS even with the analyzer off.
- The resolution would drop to the lowest possible, and the preview would be pixelated on some devices.
- Every once in a while the preview would start tearing vertically.
- The analyzer frame was a different size than the preview, and there were some aspect ratio issues which took quite some time to resolve.
- There's still quite some boilerplate required for it to work.
- Documentation for edge cases is pretty much non-existent, so most of the stuff is trial and error.
In the end I just started looking for other libraries and came upon https://github.com/natario1/CameraView. This is by far the easiest-to-use camera library I have ever seen. It's way simpler than CameraX; it seems to just work, loads way faster, and renders the preview at 2x-3x higher FPS even with an analyzer step running in the background. So far I have had no issues with it.
Even though I strongly believe that I was missing something when using CameraX, and that there's probably a way to make it work, in the end it just doesn't seem worth it for now, and I'll probably wait for a production-ready version before I try again.
In my Camera2 API project for Android, I want to set a region for my exposure calculation. Unfortunately it doesn't work. On the other hand, the focus region works without any problems.
Device: Samsung S7 / Nexus 5
1.) Initial values for CONTROL_AF_MODE & CONTROL_AE_MODE
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO);
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
2.) Create the MeteringRectangle List
meteringFocusRectangleList = new MeteringRectangle[]{new MeteringRectangle(0,0,500,500,1000)};
3.) Check if it is supported by the device and set the CONTROL_AE_REGIONS (same for CONTROL_AF_REGIONS)
if (camera2SupportHandler.cameraCharacteristics.get(CameraCharacteristics.CONTROL_MAX_REGIONS_AE) > 0) {
    camera2SupportHandler.mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_REGIONS, meteringFocusRectangleList);
}
4.) Tell the camera to start Exposure control
camera2SupportHandler.mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CameraMetadata.CONTROL_AE_PRECAPTURE_TRIGGER_START);
The CONTROL_AE_STATE is always in CONTROL_AE_STATE_SEARCHING, but the camera doesn't use the configured regions...
After long testing & development I've found an answer.
The coordinate system: Camera1 API vs. Camera2 API
(In the original image: RED = Camera1, GREEN = Camera2; the blue rect showed the coordinates of a possible focus/exposure area for Camera1.) When using the Camera2 API, you must first query the maximum height and width. Please find more info here.
Initial values for CONTROL_AF_MODE & CONTROL_AE_MODE: see the question above.
Set the CONTROL_AE_REGIONS: see the question above.
Set the CONTROL_AE_PRECAPTURE_TRIGGER.
// This is how to tell the camera to start AE control
CaptureRequest captureRequest = camera2SupportHandler.mPreviewRequestBuilder.build();
camera2SupportHandler.mCaptureSession.setRepeatingRequest(captureRequest, captureCallbackListener, camera2SupportHandler.mBackgroundHandler);
camera2SupportHandler.mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_START);
// rebuild after setting the trigger, so the single capture actually carries it
camera2SupportHandler.mCaptureSession.capture(camera2SupportHandler.mPreviewRequestBuilder.build(), captureCallbackListener, camera2SupportHandler.mBackgroundHandler);
The captureCallbackListener gives feedback on the AE control (and of course on the AF control as well).
So this configuration works for most Android phones. Unfortunately, it doesn't work for the Samsung S6/S7. For this reason I've tested their Camera SDK, which can be found here.
After deep investigation I've found the config field SCaptureRequest.METERING_MODE. By setting this to the value SCaptureRequest.METERING_MODE_MANUAL, the AE area also works on the Samsung phones.
I'll add an example to github asap.
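Until then, a minimal sketch of that setting, reusing the builder and session names from the first question above (the exact surrounding code is an assumption):

// Samsung SDK: force manual metering so the CONTROL_AE_REGIONS are actually honored.
mPreviewBuilder.set(SCaptureRequest.METERING_MODE, SCaptureRequest.METERING_MODE_MANUAL)
mPreviewBuilder.set(SCaptureRequest.CONTROL_AE_REGIONS, meteringFocusRectangleList)
mSCameraSession.setRepeatingRequest(mPreviewBuilder.build(), captureCallbackListener, mBackgroundHandler)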
Recently I had the same problem and finally found a solution that helped me.
All I needed to do was to step 1 pixel in from the edges of the active sensor rectangle. In your example, instead of this rectangle:
meteringRectangleList = new MeteringRectangle[]{new MeteringRectangle(0,0,500,500,1000)};
I would use this:
meteringRectangleList = new MeteringRectangle[]{new MeteringRectangle(1,1,500,500,1000)};
and it started working like magic on both the Samsung and the Nexus 5!
(Note that you should also step 1 pixel in from the right/bottom edges if you use maximum values there.)
It seems that many vendors have poorly implemented this part of the documentation:
If the metering region is outside the used android.scaler.cropRegion returned in capture result metadata, the camera device will ignore the sections outside the crop region and output only the intersection rectangle as the metering region in the result metadata. If the region is entirely outside the crop region, it will be ignored and not reported in the result metadata.
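Given that behavior, a small helper that insets a requested metering region by 1 pixel inside the active array captures the workaround; a sketch:

// region: desired metering area; activeArray: CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE
fun insetMeteringRectangle(region: Rect, activeArray: Rect, weight: Int): MeteringRectangle {
    // Clamp to 1px inside the active array on every side, per the workaround above.
    val left = maxOf(region.left, activeArray.left + 1)
    val top = maxOf(region.top, activeArray.top + 1)
    val right = minOf(region.right, activeArray.right - 1)
    val bottom = minOf(region.bottom, activeArray.bottom - 1)
    return MeteringRectangle(left, top, right - left, bottom - top, weight)
}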