I'm working on a project where I need to continuously take pictures at a defined resolution, quality and waiting period. The user can start and stop the capturing, and can run it with or without a preview (so the phone can be used for other things in the meantime).
I have everything working fine most of the time on most devices, but in some strange cases I see bad/blurred pictures. It is worth mentioning that the device is on the move.
Here is my current request:
private fun setRequestParams(builder: CaptureRequest.Builder) {
    builder.set(CaptureRequest.BLACK_LEVEL_LOCK, false)
    builder.set(CaptureRequest.CONTROL_AWB_LOCK, false)
    builder.set(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_AUTO)
    builder.set(CaptureRequest.CONTROL_AE_LOCK, false)
    builder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON)
    builder.set(CaptureRequest.CONTROL_AE_ANTIBANDING_MODE, CaptureRequest.CONTROL_AE_ANTIBANDING_MODE_AUTO)
    builder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, 0)
    builder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_IDLE)
    builder.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_AUTO)
    builder.set(CaptureRequest.COLOR_CORRECTION_MODE, CaptureRequest.COLOR_CORRECTION_MODE_FAST)
    builder.set(CaptureRequest.CONTROL_CAPTURE_INTENT, CaptureRequest.CONTROL_CAPTURE_INTENT_ZERO_SHUTTER_LAG)
    builder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
    builder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, getRange())
    builder.set(CaptureRequest.JPEG_QUALITY, pictureConfig.quality.toByte())
    val capabilities = cameraCharacteristics?.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
    val isManualFocusSupported: Boolean? = capabilities?.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_MANUAL_SENSOR)
    if (isManualFocusSupported != null && isManualFocusSupported) {
        builder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_OFF)
        builder.set(CaptureRequest.LENS_FOCUS_DISTANCE, 0.0f)
    }
}
As I mentioned above, the user can capture photos either in the foreground or in the background. For that I create three requests. I realized that on some phones the camera closes automatically if no Surface is attached, which is why I use the dummy preview surface (dummyPreview).
private val previewRequest: CaptureRequest by lazy {
    captureSession!!.device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(preview!!)
        setRequestParams(this)
    }.build()
}
private val backgroundPreviewRequest: CaptureRequest by lazy {
    captureSession!!.device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(dummyPreview!!)
    }.build()
}
private val captureRequest: CaptureRequest by lazy {
    captureSession!!.device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE).apply {
        addTarget(imageReader!!.surface)
        setRequestParams(this)
    }.build()
}
When everything is ready I capture a photo:
captureSession?.setRepeatingRequest(previewRequest, null, cameraPreviewHandler)
captureSession?.capture(captureRequest, null, cameraHandler)
When onImageAvailable is called I capture again. Sometimes I save the image, but if the capture falls inside the waiting period I just let it continue. The waiting period is usually 400 ms.
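Roughly, that loop looks like this (shouldSave, saveImage and waitingPeriodMs are placeholders for my actual logic, not the exact code):
private val onImageAvailableListener = ImageReader.OnImageAvailableListener { reader ->
    val image = reader.acquireLatestImage() ?: return@OnImageAvailableListener
    if (shouldSave) saveImage(image) // placeholder for the saving logic
    image.close()
    // Schedule the next still capture after the waiting period (usually 400 ms)
    cameraHandler.postDelayed({
        captureSession?.capture(captureRequest, null, cameraHandler)
    }, waitingPeriodMs)
}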
My questions are:
My pictures are sometimes blurred and sometimes not (within the same capturing session). Is there something to improve in the request?
Phones of the same model (SM-A705FN) give inconsistent results: one phone gives partially blurred images while another gives good results. Is it possible that the camera hardware was overused?
On some phones the camera stops taking pictures without ever calling the camera error callback. Is there a way to know that my picture was lost somewhere?
On some phones my code doesn't work at all, for example the Samsung A7, yet it works on the Samsung A10 (which is less powerful).
PS: I was previously using a repeating request with almost identical request params, but found that even though the throughput was slightly higher (pictures were taken faster), the quality was poorer.
Related
I'm still practicing with Kotlin and Android development. As far as I understood, the Camera class has been deprecated, and Android invites us to use CameraX instead, because this high-level class is device-independent and it makes the process of implementing cameras in apps simpler.
I've tried to read the documentation (https://developer.android.com/training/camerax), but it's written so badly that I barely understood what they are trying to explain.
So I went to read the entire sample code given in the documentation itself (https://github.com/android/camera-samples/tree/main/CameraXBasic).
The CameraFragment code is about 500 lines long (ignoring imports and various comments).
Do I really need to write 500 lines of code to simply take a picture?
How is this supposed to be considered "simpler than before"?
I mean, Android programming is at the point where I just need to write 4 lines of code to ask the user to select an image from their storage, retrieve it, and show it in an ImageView.
Is there a TRUE simple way to take a picture, or do I really need to stop and lose a whole day of work to write all those lines of code?
EDIT:
Take this page of the documentation:
https://developer.android.com/training/camerax/architecture#kotlin
It starts with this piece of code.
val preview = Preview.Builder().build()
val viewFinder: PreviewView = findViewById(R.id.previewView)
// The use case is bound to an Android Lifecycle with the following code
val camera = cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview)
cameraProvider comes out of nowhere. What is this supposed to be? I've found out it's a ProcessCameraProvider, but how am I supposed to initialize it?
Should it be a lateinit var or has it already been initialized somewhere else?
Because if I try to write val cameraProvider = ProcessCameraProvider() I get an error, so what am I supposed to do?
What is the cameraSelector parameter? It's not defined anywhere before. I've found out it's the selector for the front or back camera, but how am I supposed to know that by reading that page of the documentation?
How could this documentation have been released with these kinds of gaps?
How is someone supposed to learn with ease?
Before you can interact with the device's cameras using CameraX, you need to initialize the library. The initialization process is asynchronous, and involves things like loading information about the device's cameras.
You interact with the device's cameras using a ProcessCameraProvider. It's a singleton, so the first time you get an instance of it, CameraX performs its initialization.
val cameraProviderFuture: ListenableFuture<ProcessCameraProvider> = ProcessCameraProvider.getInstance(context)
Getting the ProcessCameraProvider singleton returns a Future because it might need to initialize the library asynchronously. The first time you get it, it might take some time (usually well under a second); subsequent calls will return immediately, as the initialization will already have been performed.
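For example, a minimal way to wait for it (assuming you have a Context available) could look like this:
cameraProviderFuture.addListener({
    // CameraX is initialized at this point; the provider is ready to use
    val cameraProvider = cameraProviderFuture.get()
    // ... bind your use cases here
}, ContextCompat.getMainExecutor(context))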
With a ProcessCameraProvider in hand, you can start interacting with the device's cameras. You choose which camera to interact with using a CameraSelector, which wraps a set of filters for the camera you want to use. Typically, if you're just trying to use the main back or front camera, you'd use CameraSelector.DEFAULT_BACK_CAMERA or CameraSelector.DEFAULT_FRONT_CAMERA.
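For instance, to select the main back camera:
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA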
Now that you've defined which camera you'll use, you build the use cases you'll need. For example, you want to take a picture, so you'll use the ImageCapture use case. It allows taking a single capture frame (typically a high quality one) using the camera, and providing it either as a raw buffer, or storing it in a file. To use it, you can configure it if you'd wish, or you can just let CameraX use a default configuration.
val imageCapture = ImageCapture.Builder().build()
In CameraX, a camera's lifecycle is controlled by a LifecycleOwner, meaning that when the LifecycleOwner's lifecycle starts, the camera opens, and when it stops, the camera closes. So you'll need to choose a lifecycle that will control the camera. If you're using an Activity, you'd typically want the camera to start when the Activity starts and stop when it stops, so you'd use the Activity instance itself as the LifecycleOwner. If you were using a Fragment, you might want to use its view lifecycle (Fragment.getViewLifecycleOwner()).
Lastly, you need to put the pieces of the puzzle together.
processCameraProvider.bindToLifecycle(
    lifecycleOwner,
    cameraSelector,
    imageCapture
)
An app typically includes a viewfinder that displays the camera's preview, so you can use a Preview use case, and bind it with the ImageCapture use case. The Preview use case allows streaming camera frames to a Surface. Since setting up the Surface and correctly drawing the preview on it can be complex, CameraX provides PreviewView, a View that can be used with the Preview use case to display the camera preview. You can check out how to use them here.
// Just like ImageCapture, you can configure the Preview use case if you'd wish.
val preview = Preview.Builder().build()
// Provide PreviewView's Surface to CameraX. The preview will be drawn on it.
val previewView: PreviewView = findViewById(...)
preview.setSurfaceProvider(previewView.surfaceProvider)
// Bind both the Preview and ImageCapture use cases
processCameraProvider.bindToLifecycle(
    lifecycleOwner,
    cameraSelector,
    imageCapture,
    preview
)
Now, to actually take a picture, you use one of ImageCapture's takePicture methods. One provides a JPEG raw buffer of the captured image, the other saves it to a file that you provide (make sure you have the necessary storage permissions if you need any).
imageCapture.takePicture(
    ContextCompat.getMainExecutor(context), // Defines where the callbacks are run
    object : ImageCapture.OnImageCapturedCallback() {
        override fun onCaptureSuccess(imageProxy: ImageProxy) {
            val image: Image? = imageProxy.image // Do what you want with the image
            imageProxy.close() // Make sure to close the ImageProxy
        }
        override fun onError(exception: ImageCaptureException) {
            // Handle exception
        }
    }
)
val imageFile = File("somePath/someName.jpg") // You can store the image in the cache, for example, using `cacheDir.absolutePath` as a path.
val outputFileOptions = ImageCapture.OutputFileOptions
    .Builder(imageFile)
    .build()
imageCapture.takePicture(
    outputFileOptions,
    ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
            // The image was saved to imageFile
        }
        override fun onError(exception: ImageCaptureException) {
            // Handle exception
        }
    }
)
Do I really need to write 500 lines of code to simply take a picture?
How is this supposed to be considered "simpler than before"?
CameraXBasic is not as "basic" as its name might suggest x) It's more of a complete example of CameraX's 3 use cases. Even though CameraFragment is long, it explains things nicely, so that it's more accessible to everyone.
CameraX is "simpler than before", with "before" referring mainly to Camera2, which was a bit more challenging to get started with. CameraX provides a more developer-friendly API with its use-case approach. It also handles compatibility, which was a big issue before: ensuring your camera app works reliably on most of the Android devices out there is very challenging.
I have been using this GitHub project as the starting point for my code: https://github.com/xizhang/camerax-gpuimage
The code is a way to show the camera view with GPUImage filters on it.
I want to be able to also analyze the bitmap with the filter applied on it to get some analytics (percent red/green/blue in the image).
I have been successful in showing the default camera view to the user as well as the filter I created.
By commenting out the setImage line of code, I have been able to get the analytics of the filtered image, but when I try to do both at the same time the screen flickers. I changed the startCameraIfReady function to get the filtered image as follows:
@SuppressLint("UnsafeExperimentalUsageError")
private fun startCameraIfReady() {
    if (!isPermissionsGranted() || cameraProvider == null) {
        return
    }
    val imageAnalysis = ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
    imageAnalysis.setAnalyzer(executor, ImageAnalysis.Analyzer {
        val bitmap = allocateBitmapIfNecessary(it.width, it.height)
        converter.yuvToRgb(it.image!!, bitmap)
        it.close()
        gpuImageView.post {
            // These two lines conflict, causing the screen to flicker.
            // I can comment out one or the other and it works great,
            // but running both at the same time causes issues.
            gpuImageView.gpuImage.setImage(bitmap)
            val filtered = gpuImageView.gpuImage.getBitmapWithFilterApplied(bitmap)
            /*
            Analyze the filtered image...
            Print details about image here
            */
        }
    })
    cameraProvider!!.unbindAll()
    cameraProvider!!.bindToLifecycle(this, CameraSelector.DEFAULT_BACK_CAMERA, imageAnalysis)
}
When I try to get the filtered bitmap, it seems to conflict with the setImage call and causes the screen to flicker, as shown in the video below. I can either show the preview to the user or analyze the image, but not both at the same time. I have tried running them synchronized as well as each on its own background thread. I have also tried adding another ImageAnalysis use case and binding it to the camera lifecycle (one for the preview and the other to get the filtered bitmap); the screen still flickers, but less often.
https://imgur.com/a/mXeuEhe
If you are going to get the Bitmap with the filter applied, you don't even need the GPUImageView. You can just get the Bitmap and set it on a regular ImageView. This is how your analyzer could look:
ImageAnalysis.Analyzer {
    val bitmap = allocateBitmapIfNecessary(it.width, it.height)
    converter.yuvToRgb(it.image!!, bitmap)
    it.close()
    val filteredBitmap = gpuImage.getBitmapWithFilterApplied(bitmap)
    regularImageView.post {
        regularImageView.setImageBitmap(filteredBitmap)
    }
}
Please be aware that the original GitHub sample is inefficient, and the sample above is even worse, because they convert the output to a Bitmap before feeding it back to the GPU. For the best performance when augmenting the preview stream, please see CameraX's core test app for how to access the preview Surface via OpenGL.
I used the latest Camera2Basic sample program as a source for my trials:
https://github.com/android/camera-samples.git
Basically I configured the CaptureRequest before I call the capture() function in the takePhoto() function like this:
private fun prepareCaptureRequest(captureRequest: CaptureRequest.Builder) {
    //set all needed camera settings here
    captureRequest.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_OFF)
    captureRequest.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_OFF);
    //captureRequest.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_CANCEL);
    //captureRequest.set(CaptureRequest.CONTROL_AWB_LOCK, true);
    captureRequest.set(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_OFF);
    captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
    //captureRequest.set(CaptureRequest.CONTROL_AE_LOCK, true);
    //captureRequest.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_CANCEL);
    //captureRequest.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_FAST);

    //flash
    if (mState == CaptureState.PRECAPTURE) {
        //captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
        captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF)
    }
    if (mState == CaptureState.TAKEPICTURE) {
        //captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_SINGLE)
        //captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_ALWAYS_FLASH);
        captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_SINGLE)
    }

    val iso = 100
    captureRequest.set(CaptureRequest.SENSOR_SENSITIVITY, iso)

    val fractionOfASecond = 750.toLong()
    captureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 1000.toLong() * 1000.toLong() * 1000.toLong() / fractionOfASecond)
    //val exposureTime = 133333.toLong()
    //captureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposureTime)

    //val characteristics = cameraManager.getCameraCharacteristics(cameraId)
    //val configs: StreamConfigurationMap? = characteristics[CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP]
    //val frameDuration = 33333333.toLong()
    //captureRequest.set(CaptureRequest.SENSOR_FRAME_DURATION, frameDuration)

    val focusDistanceCm = 20.0.toFloat() // 20 cm
    captureRequest.set(CaptureRequest.LENS_FOCUS_DISTANCE, 100.0f / focusDistanceCm)

    //captureRequest.set(CaptureRequest.COLOR_CORRECTION_MODE, CameraMetadata.COLOR_CORRECTION_MODE_FAST)
    captureRequest.set(CaptureRequest.COLOR_CORRECTION_MODE, CaptureRequest.COLOR_CORRECTION_MODE_TRANSFORM_MATRIX)
    val colorTemp = 8000.toFloat();
    val rggb = colorTemperature(colorTemp)
    //captureRequest.set(CaptureRequest.COLOR_CORRECTION_TRANSFORM, colorTransform);
    captureRequest.set(CaptureRequest.COLOR_CORRECTION_GAINS, rggb);
}
but the picture that is returned is never the one where the flash is at its brightest. This is on a Google Pixel 2 device.
Since I only take one picture, I am also not sure how to check CaptureResult states to find the correct frame, as there is only one result.
I already looked at other solutions to similar problems here, but they were either never really solved or somehow took the picture during the capture preview, which I don't want.
Another strange observation is that on different devices the images are taken (also not always at the right moment), but the manual values I set are not reflected in the JPEG metadata of the image.
If needed I can put my git fork on GitHub.
Long exposure time in combination with flash seems to be the basic issue, and when the results are not good it means that the timing of your preset isn't good. You'd have to optimize the exposure time's duration in relation to the flash's timing (just check the EXIF of some photos for example values). You could measure the luminosity with an ImageAnalysis.Analyzer (this has been removed from the sample application, but older revisions still have an example). I've also tried with the default Motorola camera app; there the photo also seems to be taken shortly after the flash, when the brightness is already decaying (in order to avoid dazzling brightness). That's the CaptureState.PRECAPTURE stage, where you switch the flash off. Flashing in two stages is the default, and this might yield better results.
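As a rough sketch of such a luminosity measurement, a CameraX-style analyzer that averages the Y plane could look like this (the names, executor and logging are illustrative, not taken from the sample):
val luminosityAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
luminosityAnalysis.setAnalyzer(ContextCompat.getMainExecutor(context)) { image ->
    val buffer = image.planes[0].buffer // Y plane of the YUV frame
    val pixelCount = buffer.remaining()
    var sum = 0L
    while (buffer.hasRemaining()) sum += (buffer.get().toInt() and 0xFF)
    Log.d("Luma", "mean luminance: ${sum.toDouble() / pixelCount}")
    image.close()
}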
If you want it to be dazzlingly bright (even if this is generally not desired), you could as well first switch on the torch, take the image, and then switch off the torch again (I use something like this, but only for barcode scanning). This would at least prevent any exposure/flash timing issues.
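A hedged sketch of that torch approach in plain camera2 (the builder, session and handler names plus the delay are illustrative):
// Switch the torch on via the repeating (preview) request
previewBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_TORCH)
session.setRepeatingRequest(previewBuilder.build(), null, handler)
// Give the torch a moment, take the still, then switch the torch off again.
// The still request should also carry FLASH_MODE_TORCH (with CONTROL_AE_MODE_ON or _OFF)
// so the torch stays lit during the exposure.
handler.postDelayed({
    session.capture(stillBuilder.build(), null, handler)
    previewBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF)
    session.setRepeatingRequest(previewBuilder.build(), null, handler)
}, 500L)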
When changed values are not reflected in the EXIF data, you'd need to use ExifInterface to update them yourself (there's an example which updates the orientation, but one can update any tag).
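A small sketch with the androidx ExifInterface (the file path and tag values are just examples):
val exif = ExifInterface(imageFile.absolutePath)
exif.setAttribute(ExifInterface.TAG_EXPOSURE_TIME, "0.001333") // ~1/750 s
exif.setAttribute(ExifInterface.TAG_PHOTOGRAPHIC_SENSITIVITY, "100") // ISO
exif.saveAttributes()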
So I migrated from the legacy camera API to CameraX, and even though it was quite simple to set up, I've noticed one issue: the camera now seems to take almost twice as long, if not longer, to start showing the preview than it did before.
I'm testing on galaxy s7.
My code looks like this:
val previewConfig = PreviewConfig.Builder().apply {
    setTargetAspectRatio(Rational(1, 1))
    setTargetResolution(Size(binding.codeScannerView.width, binding.codeScannerView.height))
}.build()
val preview = Preview(previewConfig)
preview.setOnPreviewOutputUpdateListener { preview ->
    val parent = binding.codeScannerView.parent as ViewGroup
    parent.removeView(binding.codeScannerView)
    parent.addView(binding.codeScannerView, 0)
    binding.codeScannerView.surfaceTexture = preview.surfaceTexture
}
val analyzerConfig = ImageAnalysisConfig.Builder().apply {
    val analyzerThread = HandlerThread("QrCodeReader").apply { start() }
    setCallbackHandler(Handler(analyzerThread.looper))
    setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
}.build()
val analyzerUseCase = ImageAnalysis(analyzerConfig).apply {
    analyzer = QrCodeAnalyzer(requireContext(), Handler(), { qrCode ->
        if (activity == null) {
            return@QrCodeAnalyzer
        }
        presenter.disableCameraPreview()
        presenter.updateTable(qrCode.toLowerCase().parseTableId(), isFromOrder, Screens.MENU_SCREEN)
    })
}
CameraX.bindToLifecycle(this, preview, analyzerUseCase)
Any ideas on how to make it appear faster?
P.S. I can also see tearing in the preview once in a while.
So I've spent quite some time trying to find a solution, to no avail.
I have even encountered multiple issues (with alpha04), like:
Random SIGSEGV crashes when turning the camera on/off.
I tried sample projects and codelabs from Google, which also were not working 100% of the time on the devices I tested.
At some point I got a notification that the camera was being used in the background, even though it was bound to the lifecycle and the window was closed, which is the last thing I want my users to see.
The camera was indeed loading slower, and I was getting horrible FPS even with the analyzer off.
The resolution would drop down to the lowest possible and the preview would be pixelated on some devices.
Every once in a while the preview would start tearing vertically.
The analyzer frame was a different size than the preview, and there were aspect-ratio issues which took quite some time to resolve.
There's still quite a lot of boilerplate required for it to work.
Documentation for edge cases is pretty much non-existent, so most of it is trial and error.
In the end I just started looking for other libraries and came upon https://github.com/natario1/CameraView. This is by far the easiest-to-use camera library I have ever seen. It's way simpler than CameraX, it seems to just work, it loads way faster, and it renders the preview at 2x-3x higher FPS even with the analyzer step running in the background. So far I have had no issues with it.
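For context, a minimal sketch of how that library is used, based on its README (treat the exact calls as an assumption; the library's docs are authoritative):
// Layout contains a <com.otaliastudios.cameraview.CameraView android:id="@+id/cameraView" ... />
cameraView.setLifecycleOwner(viewLifecycleOwner) // opens/closes the camera with the lifecycle
cameraView.addCameraListener(object : CameraListener() {
    override fun onPictureTaken(result: PictureResult) {
        // result.data is the JPEG byte array; result.toBitmap { ... } is also available
    }
})
cameraView.takePicture()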
Even though I strongly believe that I was missing something when using CameraX and there's probably a way to make it work, in the end it just doesn't seem worth it for now, and I'll probably wait until there's a production-ready version before I try again.
After implementing the camera2 API for the in-app camera, I noticed that on Samsung devices the images appear blurry. After searching, I found the Samsung Camera SDK (http://developer.samsung.com/galaxy#camera). After implementing the SDK, the images on the Samsung Galaxy S7 are fine now, but on the Galaxy S6 they are still blurry. Has anyone experienced these kinds of issues with Samsung devices?
EDIT:
To complement @rcsumners comment, I am setting autofocus by using
mPreviewBuilder.set(SCaptureRequest.CONTROL_AF_TRIGGER, SCaptureRequest.CONTROL_AF_TRIGGER_START);
mSCameraSession.capture(mPreviewBuilder.build(), new SCameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(SCameraCaptureSession session, SCaptureRequest request, STotalCaptureResult result) {
        isAFTriggered = true;
    }
}, mBackgroundHandler);
It is a long-exposure image where the user has to take a picture of a static, non-moving object. For this I am using CONTROL_AF_MODE_MACRO:
mCaptureBuilder.set(SCaptureRequest.CONTROL_AF_MODE, SCaptureRequest.CONTROL_AF_MODE_MACRO);
I am also enabling auto-flash when it is available:
requestBuilder.set(SCaptureRequest.CONTROL_AE_MODE, SCaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
I am not really an expert in this API, I mostly followed the SDK example app.
There could be a number of issues causing this problem. One prominent one is the dimensions of your output image.
I ran the Camera2 API and the preview was clear, but the output was quite blurry:
val characteristics: CameraCharacteristics? = cameraManager.getCameraCharacteristics(cameraId)
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)?.getOutputSizes(ImageFormat.JPEG) // The issue
var width = imageDimension.width // imageDimension is a Size defined elsewhere in the class
var height = imageDimension.height
if (size != null) {
    width = size[0].width; height = size[0].height
}
val imageReader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 5)
The line below was returning a size of about 245x144, which was way too small to send to the image reader. Somehow the output was being stretched, which made it end up blurry. Therefore I removed the line:
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)?.getOutputSizes(ImageFormat.JPEG) // this was returning a small size
Setting the width and height manually resolved the issue.
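If you still want to query the camera rather than hard-code values, a hedged alternative is to pick the largest JPEG size the device reports instead of blindly taking index 0 (sketch only):
val map = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
val largest = map?.getOutputSizes(ImageFormat.JPEG)?.maxByOrNull { it.width * it.height }
val imageReader = ImageReader.newInstance(largest?.width ?: 1920, largest?.height ?: 1080, ImageFormat.JPEG, 5)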
You're setting the AF trigger for one frame, but then are you waiting for AF to complete? For AF_MODE_MACRO (are you verifying the device lists support for this AF mode?) you need to wait for AF_STATE_FOCUSED_LOCKED before the image is guaranteed to be stable and sharp. (You may also receive NOT_FOCUSED_LOCKED if the AF algorithm can't reach sharp focus, which could be because the object is just too close for the lens, or the scene is too confusing)
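A rough sketch of that wait in plain camera2 terms (the Samsung SDK's S-prefixed classes mirror these; captureStillPicture() stands in for your existing still-capture call):
val afCallback = object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(session: CameraCaptureSession, request: CaptureRequest, result: TotalCaptureResult) {
        when (result.get(CaptureResult.CONTROL_AF_STATE)) {
            CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED,
            CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED ->
                captureStillPicture() // AF has settled (sharp, or as sharp as it can get)
            else -> Unit // keep waiting for AF to finish
        }
    }
}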
On most modern devices, it's recommended to use CONTINUOUS_PICTURE and not worry about AF triggering unless you really want to lock focus for some time period. In that mode, the device will continuously try to focus to the best of its ability. I'm not sure all that many devices support MACRO, to begin with.