How to decrease frame rate of Android CameraX ImageAnalysis? - android

How to decrease the frame rate to 1 fps in image analysis so that I don't detect the same barcode multiple times? In my use case, scanning the same barcode repeatedly at 1-second intervals should increment a counter (similar to a product scanner at shop tills), so I want this to work correctly.
CameraX version: 1.0.0-beta02
Similar questions :
How to increase frame rate with Android CameraX ImageAnalysis?
In this one, the answer is based on the preview-config and analysis-config builders, which existed in the CameraX alpha versions but are no longer present in the beta versions.
How to pause barcode scanning in android ML-kit when using ByteBuffer from SurfaceView
In this one, the image analysis is throttled, but I want the call to the image analyzer itself to happen at a lower frame rate, i.e. the image analysis use case should invoke the image analyzer only once per second.
Current implementation :
https://proandroiddev.com/update-android-camerax-4a44c3e4cdcc
Following this article to throttle image analysis:
override fun analyze(image: ImageProxy) {
    val currentTimestamp = System.currentTimeMillis()
    if (currentTimestamp - lastAnalyzedTimestamp >= TimeUnit.SECONDS.toMillis(1)) {
        // Image analysis code
        lastAnalyzedTimestamp = currentTimestamp
    }
    image.close()
}
A better solution would be helpful.

I tried bmdelacruz's solution but had issues with closing the image; I was getting an error similar to this and couldn't get it working.
Using delay worked well for me.
Code
CoroutineScope(Dispatchers.IO).launch {
    delay(1000 - (System.currentTimeMillis() - currentTimestamp))
    imageProxy.close()
}
Complete BarcodeAnalyser code
class BarcodeAnalyser(
    private val onBarcodesDetected: (barcodes: List<Barcode>) -> Unit,
) : ImageAnalysis.Analyzer {
    private val barcodeScannerOptions = BarcodeScannerOptions.Builder()
        .setBarcodeFormats(Barcode.FORMAT_ALL_FORMATS)
        .build()
    private val barcodeScanner = BarcodeScanning.getClient(barcodeScannerOptions)
    var currentTimestamp: Long = 0

    override fun analyze(
        imageProxy: ImageProxy,
    ) {
        currentTimestamp = System.currentTimeMillis()
        imageProxy.image?.let { imageToAnalyze ->
            val imageToProcess =
                InputImage.fromMediaImage(imageToAnalyze, imageProxy.imageInfo.rotationDegrees)
            barcodeScanner.process(imageToProcess)
                .addOnSuccessListener { barcodes ->
                    // Success handling
                }
                .addOnFailureListener { exception ->
                    // Failure handling
                }
                .addOnCompleteListener {
                    CoroutineScope(Dispatchers.IO).launch {
                        delay(1000 - (System.currentTimeMillis() - currentTimestamp))
                        imageProxy.close()
                    }
                }
        }
    }
}

You can take advantage of the fact that the next analysis will not begin until you close the provided ImageProxy.
In my case, I just put the thread to sleep since my analyzer's executor is a single thread executor.
class MyAnalyzer : ImageAnalysis.Analyzer {
    override fun analyze(image: ImageProxy) {
        val elapsedAnalysisTime = measureTimeMillis {
            // do your stuff here
        }
        image.use {
            if (elapsedAnalysisTime < 1000) {
                Thread.sleep(1000 - elapsedAnalysisTime)
            }
        }
    }
}
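This approach relies on the analyzer running on a single-thread executor, so sleeping blocks only the analysis pipeline rather than the UI. Binding it that way might look like the following sketch (the binding code is an assumption, not part of the answer above):

```kotlin
// Sketch: bind the sleeping analyzer on a dedicated single-thread executor,
// so Thread.sleep() delays only analysis, never the preview.
val analysisExecutor = Executors.newSingleThreadExecutor()

val imageAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
    .also { it.setAnalyzer(analysisExecutor, MyAnalyzer()) }
```

With STRATEGY_KEEP_ONLY_LATEST, frames produced while the analyzer sleeps are simply dropped, which is exactly the desired 1-fps behavior.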

private var firstDetected = true

for (barcode in barcodes) {
    if (barcodes.size > 0 && firstDetected) {
        LoggingUtility.writeLog("Analyzer",
            "MLKitBarcode Result",
            "Barcode is ${barcode.rawValue!!}")
        firstDetected = false
    }
}
This might help
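The original question wants repeated scans of the same barcode, one second apart, to each increment a counter. Rather than throttling the camera, the scan results can be debounced per barcode value, independently of the frame rate. A minimal pure-Kotlin sketch (the class and names are hypothetical, not from any answer above):

```kotlin
// Sketch: count scans of the same barcode, accepting at most one per second
// per barcode value. Feed it the rawValue from each ML Kit detection.
class ScanCounter(private val clock: () -> Long = System::currentTimeMillis) {
    private val lastAccepted = mutableMapOf<String, Long>()
    val counts = mutableMapOf<String, Int>()

    // Returns true when the scan was counted, false when it fell inside
    // the 1-second debounce window for that barcode value.
    fun onScan(rawValue: String): Boolean {
        val now = clock()
        val last = lastAccepted[rawValue]
        if (last != null && now - last < 1000) return false
        lastAccepted[rawValue] = now
        counts[rawValue] = (counts[rawValue] ?: 0) + 1
        return true
    }
}
```

The clock is injected so the logic stays testable; in the analyzer you would call `onScan(barcode.rawValue)` from the success listener.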

Related

Android ML Kit library for QR code scanning: How to increase detection performance by reducing image resolution

This is my stripped-down source code for barcode scanning.
build.gradle
dependencies {
    .....
    // MLKit Dependencies
    implementation 'com.google.android.gms:play-services-vision:20.1.3'
    implementation 'com.google.mlkit:barcode-scanning:17.0.2'

    def camerax_version = "1.1.0-beta01"
    implementation "androidx.camera:camera-core:${camerax_version}"
    implementation "androidx.camera:camera-camera2:${camerax_version}"
    implementation "androidx.camera:camera-lifecycle:${camerax_version}"
    implementation "androidx.camera:camera-video:${camerax_version}"
    ......
}
ScanCameraFragment.kt
class ScanCameraFragment : BaseFragment() {
    private lateinit var binding: FragmentScanCameraBinding
    private lateinit var cameraExecutor: ExecutorService

    //region Lifecycle Methods
    override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?,
                              savedInstanceState: Bundle?): View? {
        binding = FragmentScanCameraBinding.inflate(inflater, container, false)
        cameraExecutor = Executors.newSingleThreadExecutor()
        startCamera()
        return binding.root
    }

    override fun onDestroyView() {
        super.onDestroyView()
        cameraExecutor.shutdown()
    }

    companion object {
        fun newInstance() = ScanCameraFragment().apply {}
    }

    private fun startCamera() {
        context?.let { context ->
            val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
            cameraProviderFuture.addListener({
                val cameraProvider = cameraProviderFuture.get()

                // Preview
                val preview = Preview.Builder()
                    .build()
                    .also {
                        it.setSurfaceProvider(binding.previewView.surfaceProvider)
                    }

                // Image analyzer
                val imageAnalyzer = ImageAnalysis.Builder()
                    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                    .build()
                    .also {
                        it.setAnalyzer(cameraExecutor,
                            QrCodeAnalyzer(context, binding.barcodeBoxView,
                                binding.previewView.width.toFloat(),
                                binding.previewView.height.toFloat()
                            )
                        )
                    }

                // Select back camera as a default
                val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
                try {
                    // Unbind use cases before rebinding
                    cameraProvider.unbindAll()
                    // Bind use cases to camera
                    val camera = cameraProvider.bindToLifecycle(this, cameraSelector,
                        preview, imageAnalyzer)
                } catch (exc: Exception) {
                    exc.printStackTrace()
                }
            }, ContextCompat.getMainExecutor(context))
        }
    }
}
QRCodeAnalyzer.kt
class QrCodeAnalyzer(private val context: Context,
    private val barcodeBoxView: BarcodeBoxView, private val previewViewWidth: Float,
    private val previewViewHeight: Float) : ImageAnalysis.Analyzer {

    private var scaleX = 1f
    private var scaleY = 1f

    private fun translateX(x: Float) = x * scaleX
    private fun translateY(y: Float) = y * scaleY

    private fun adjustBoundingRect(rect: Rect) = RectF(
        translateX(rect.left.toFloat()),
        translateY(rect.top.toFloat()),
        translateX(rect.right.toFloat()),
        translateY(rect.bottom.toFloat())
    )

    @SuppressLint("UnsafeOptInUsageError")
    override fun analyze(image: ImageProxy) {
        val img = image.image
        if (img != null) {
            // Update scale factors
            scaleX = previewViewWidth / img.height.toFloat()
            scaleY = previewViewHeight / img.width.toFloat()

            val inputImage = InputImage.fromMediaImage(img,
                image.imageInfo.rotationDegrees)

            // Process image searching for barcodes
            val options = BarcodeScannerOptions.Builder()
                .build()
            val scanner = BarcodeScanning.getClient(options)

            scanner.process(inputImage)
                .addOnSuccessListener { barcodes ->
                    for (barcode in barcodes) {
                        barcode?.rawValue?.let {
                            if (it.trim().isNotBlank()) {
                                Scanner.updateBarcode(it)
                                barcode.boundingBox?.let { rect ->
                                    barcodeBoxView.setRect(adjustBoundingRect(rect))
                                }
                            }
                            return@addOnSuccessListener
                        }
                    }
                    // coming here means no satisfiable barcode was found
                    barcodeBoxView.setRect(RectF())
                }
                .addOnFailureListener {
                    image.close()
                }
                .addOnFailureListener { }
        }
        image.close()
    }
}
This code works and I am able to scan barcodes. But sometimes, the barcode detection is slow. The documentation says one way to increase performance is to limit the image resolution.
Don't capture input at the camera’s native resolution. On some
devices, capturing input at the native resolution produces extremely
large (10+ megapixels) images, which results in very poor latency with
no benefit to accuracy. Instead, only request the size from the camera
that's required for barcode detection, which is usually no more than 2
megapixels.
If scanning speed is important, you can further lower the image
capture resolution. However, bear in mind the minimum barcode size
requirements outlined above.
Unfortunately, the documentation doesn't specify how to reduce the image resolution, and some of my end users are on high-end devices with powerful cameras, so we assume the poor performance is caused by the image size.
How can I reduce the resolution of the image to a fixed value (something like 1024x768) rather than the default camera resolution?
You can set it on the imageAnalyzer builder by using .setTargetResolution(Size):
val imageAnalysisUseCaseBuilder = ImageAnalysis.Builder()
imageAnalysisUseCaseBuilder.setTargetResolution(Size(1024, 768))
imageAnalysisUseCase = imageAnalysisUseCaseBuilder.build()
or in your case:
val imageAnalyzer = ImageAnalysis.Builder()
    .setTargetResolution(Size(1024, 768))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
    .also {
        it.setAnalyzer(cameraExecutor,
            QrCodeAnalyzer(context, binding.barcodeBoxView,
                binding.previewView.width.toFloat(),
                binding.previewView.height.toFloat()
            )
        )
    }
User HarmenH's answer correctly explains how to set the image resolution, so I won't repeat it here.
As it turns out, the performance issue on my end was not caused by the image resolution: I was closing the imageProxy prematurely.
override fun analyze(image: ImageProxy) {
    val img = image.image
    if (img != null) {
        // Update scale factors
        scaleX = previewViewWidth / img.height.toFloat()
        scaleY = previewViewHeight / img.width.toFloat()

        val inputImage = InputImage.fromMediaImage(img,
            image.imageInfo.rotationDegrees)

        // Process image searching for barcodes
        val options = BarcodeScannerOptions.Builder()
            .build()
        val scanner = BarcodeScanning.getClient(options)

        scanner.process(inputImage)
            .addOnSuccessListener { barcodes ->
                for (barcode in barcodes) {
                    barcode?.rawValue?.let {
                        if (it.trim().isNotBlank()) {
                            Scanner.updateBarcode(it)
                            barcode.boundingBox?.let { rect ->
                                barcodeBoxView.setRect(adjustBoundingRect(rect))
                            }
                        }
                        return@addOnSuccessListener
                    }
                }
                // coming here means no satisfiable barcode was found
                barcodeBoxView.setRect(RectF())
            }
            .addOnFailureListener {
                // Failure handling
            }
            .addOnCompleteListener {
                // Added this here: close the imageProxy only after analysis
                // completes, whether it succeeded or failed.
                image.close()
            }
    } else {
        image.close()
    }
    // Removed the unconditional image.close() that used to be here,
    // because we must not close the imageProxy before analysis completes.
}

Can I get the Exif data from an Android camera preview without saving to file?

I want to use the Android camera to report lighting and colour information from a sampled patch on the image preview. The camerax preview generates ImageProxy images, and I can get the average LUV data for a patch. I would like to turn this data into absolute light levels using the exposure information and the camera white balance. The exposure data is in the Exif information, and maybe the white balance information too.
I would like this information, however we get it. Exif seems a very likely route, but any other non-Exif solutions are welcome.
At first sight, it looks as if Exif is always read from a file. However, ExifInterface
can be created from an InputStream, and one of the streamType options is STREAM_TYPE_EXIF_DATA_ONLY. This looks promising - it seems something makes and streams just the EXIF data, and a camera preview could easily do just that. Or maybe we can get Exif from the ImageProxy somehow.
I found many old threads on how to get at Exif data to find out the camera orientation. About 4 years ago these people were saying Exif is only read from a file. Is this still so?
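For reference, constructing an ExifInterface from a stream looks like the sketch below (androidx.exifinterface; the byte-array source is a hypothetical placeholder, since the open question is how to obtain an EXIF-bearing stream from the preview in the first place):

```kotlin
// Sketch: read EXIF tags from an InputStream instead of a file.
// exifBytes is a hypothetical stand-in for wherever the EXIF data comes from.
val exif = ExifInterface(
    ByteArrayInputStream(exifBytes),
    ExifInterface.STREAM_TYPE_EXIF_DATA_ONLY
)
val exposure = exif.getAttribute(ExifInterface.TAG_EXPOSURE_TIME)
val iso = exif.getAttribute(ExifInterface.TAG_PHOTOGRAPHIC_SENSITIVITY)
```

So the API side of reading EXIF from a stream exists; the missing piece is a producer of that stream from a live preview frame.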
Reply to comment:
With due misgiving, I attach my dodgy code...
private class LuvAnalyzer(private val listener: LuvListener) : ImageAnalysis.Analyzer {

    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind()    // Rewind the buffer to zero
        val data = ByteArray(remaining())
        get(data)   // Copy the buffer into a byte array
        return data // Return the byte array
    }

    override fun analyze(image: ImageProxy) {
        // Sum for 1/5 width square of YUV_420_888 image
        val YUV = DoubleArray(3)
        val w = image.width
        val h = image.height
        val sq = kotlin.math.min(h, w) / 5
        val w0 = ((w - sq) / 4) * 2
        val h0 = ((h - sq) / 4) * 2
        var ySum = 0
        var uSum = 0
        var vSum = 0

        val y = image.planes[0].buffer.toByteArray()
        val stride = image.planes[0].rowStride
        var offset = h0 * stride + w0
        for (row in 1..sq) {
            var o = offset
            for (pix in 1..sq) { ySum += y[o++].toInt() and 0xFF }
            offset += stride
        }
        YUV[0] = ySum.toDouble() / (sq * sq).toDouble()

        val uv = image.planes[1].buffer.toByteArray()
        offset = (h0 / 2) * stride + w0
        for (row in 1..sq / 2) {
            var o = offset
            for (pix in 1..sq / 2) {
                uSum += uv[o++].toInt() and 0xFF
                vSum += uv[o++].toInt() and 0xFF
            }
            offset += stride
        }
        YUV[1] = uSum.toDouble() / (sq * sq / 4).toDouble()
        YUV[2] = vSum.toDouble() / (sq * sq / 4).toDouble()

        // val exif = Exif.createFromImageProxy(image)
        listener(YUV)
        image.close()
    }
}
private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        // Used to bind the lifecycle of cameras to the lifecycle owner
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Preview
        val preview = Preview.Builder()
            .build()
            .also {
                it.setSurfaceProvider(binding.viewFinder.surfaceProvider)
            }

        imageCapture = ImageCapture.Builder()
            .build()

        // Image analyser
        val imageAnalyzer = ImageAnalysis.Builder()
            .build()
            .also {
                it.setAnalyzer(cameraExecutor, LuvAnalyzer { LUV ->
                    // Log.d(TAG, "Average LUV: %.1f %.1f %.1f".format(LUV[0], LUV[1], LUV[2]))
                    luvText = "Average LUV: %.1f %.1f %.1f".format(LUV[0], LUV[1], LUV[2])
                })
            }

        // Select back camera as a default
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()
            // Bind use cases to camera
            cameraProvider.bindToLifecycle(
                this, cameraSelector, preview, imageCapture, imageAnalyzer)
        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}
I am doing my image averaging from an ImageProxy. I am currently trying to get the Exif data from the same ImageProxy, because I am not saving images to files: this is intended to provide a stream of colour values. And there is an intriguing Exif.createFromImageProxy(image) (now commented out) which I discovered after writing the original note, but I can't get it to do anything.
I might get the Exif information if I saved an image to a .jpg file and then read it back in again. But the camera is putting out a stream of preview images, and the exposure settings may be changing all the time, so I would have to save a stream of images. If I were really stuck, I might try that. But I feel there are enough Exif bits and pieces to get the information live from the camera.
Update
The Google camerax-developers suggest getting the exposure information using the camera2 Extender. I have got it working enough to see the numbers go up and down roughly as they should. This feels a lot better than the Exif route.
I am tempted to mark this as the solution, as it is the solution for me, but I shall leave it open as my original question in the title may have an answer.
val previewBuilder = Preview.Builder()
val previewExtender = Camera2Interop.Extender(previewBuilder)

// Turn AWB off
previewExtender.setCaptureRequestOption(CaptureRequest.CONTROL_AWB_MODE,
    CaptureRequest.CONTROL_AWB_MODE_DAYLIGHT)

previewExtender.setSessionCaptureCallback(
    object : CameraCaptureSession.CaptureCallback() {
        override fun onCaptureCompleted(
            session: CameraCaptureSession,
            request: CaptureRequest,
            result: TotalCaptureResult
        ) {
            result.get(CaptureResult.SENSOR_EXPOSURE_TIME)
            result.get(CaptureResult.SENSOR_SENSITIVITY)
            result.get(CaptureResult.COLOR_CORRECTION_GAINS)
            result.get(CaptureResult.COLOR_CORRECTION_TRANSFORM)
        }
    }
)

Camera 2 preview freezes when I uses YUV_420_888 Image Reader as a target surface

I'm working on an application that needs to detect objects and faces in a real-time camera feed; for this I'm using ML Kit. I have successfully implemented the object detection part with the back-facing camera with no issues. ML Kit recommends using YUV_420_888 with the smallest size possible for better results. I am using the Camera2 API, and here's my code for setting up the ImageReader for processing.
class SelfieCaptureFragment : Fragment(R.layout.fragment_selfie_capture) {
    protected lateinit var characteristics: CameraCharacteristics
    protected lateinit var camera: CameraDevice
    protected lateinit var session: CameraCaptureSession
    protected lateinit var requestPermissionLauncher: ActivityResultLauncher<String>
    protected lateinit var captureImageReader: ImageReader
    protected lateinit var analyzeImageReader: ImageReader

    /** [HandlerThread] and [Handler] where all camera operations run */
    private val cameraThread = HandlerThread("CameraThread").apply { start() }
    private val cameraHandler = Handler(cameraThread.looper)

    /** [HandlerThread] and [Handler] where all camera still image capturing operations run */
    private val captureImageReaderThread = HandlerThread("captureImageReaderThread").apply { start() }
    private val captureImageReaderHandler = Handler(captureImageReaderThread.looper)

    private val analyzeImageReaderThread = HandlerThread("imageReaderThread").apply { start() }
    private val analyzeImageReaderHandler = Handler(analyzeImageReaderThread.looper)

    companion object {
        /** Maximum number of images that will be held in the reader's buffer */
        const val IMAGE_BUFFER_SIZE: Int = 3
    }

    private fun configureCamera(selectedCameraId: String) {
        lifecycleScope.launch(Dispatchers.Main) {
            camera = openCamera(cameraManager, selectedCameraId, cameraHandler)
            val previewFraction = DisplayUtils
                .asFraction(previewSize!!.width.toLong(), previewSize!!.height.toLong())

            // Initialize an image reader which will be used to capture still photos
            captureSize = characteristics.get(
                CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
                .getOutputSizes(ImageFormat.JPEG)
                .filter { DisplayUtils.asFraction(it.width.toLong(), it.height.toLong()) == previewFraction }
                .sortedBy { it.height * it.width }
                .reversed()
                .first()

            analyzeImageSize = characteristics
                .get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
                .getOutputSizes(ImageFormat.YUV_420_888)
                .filter { DisplayUtils.asFraction(it.width.toLong(), it.height.toLong()) == previewFraction }
                .sortedBy { it.height * it.width }
                .first()

            if (captureSize != null) {
                captureImageReader = ImageReader.newInstance(
                    captureSize!!.width, captureSize!!.height, ImageFormat.JPEG, IMAGE_BUFFER_SIZE
                )
                analyzeImageReader = ImageReader.newInstance(
                    analyzeImageSize!!.width,
                    analyzeImageSize!!.height,
                    ImageFormat.YUV_420_888,
                    IMAGE_BUFFER_SIZE)

                Log.d(TAG, "Selected capture size: $captureSize")
                Log.d(TAG, "Selected image analyze size: $analyzeImageSize")

                val targets = listOf(
                    binding.cameraSurfaceView.holder.surface,
                    captureImageReader.surface,
                    analyzeImageReader.surface
                )
                session = createCaptureSession(camera, targets, cameraHandler)

                val captureBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
                captureBuilder.set(
                    CaptureRequest.CONTROL_AF_MODE,
                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE
                )
                captureBuilder.addTarget(binding.cameraSurfaceView.holder.surface)
                captureBuilder.addTarget(analyzeImageReader.surface)
                session.setRepeatingRequest(captureBuilder.build(), null, cameraHandler)
            }
        }
    }
}
This code is identical to what I'm using with the back-facing camera to set up the image reader for the object detection part. I have not yet implemented any image processing logic to detect faces in the camera stream. The error comes from the line where I set up the target, captureBuilder.addTarget(analyzeImageReader.surface); if I comment out that line, the camera preview works fine. I get the following error and warning in logcat over and over when the camera preview freezes.
W/libc: Unable to set property "debug.sf.dequeuebuffer" to "1": connection failed; errno=13 (Permission denied)
E/BufferQueueProducer: [ImageReader-256x144f23m3-23007-3](id:59df00000003,api:4,p:4806,c:23007) waitForFreeSlotThenRelock: timeout
I can't understand why it's not working with the front-facing camera. The analyze image size returned from analyzeImageReader is 256x144, which is not a big resolution either; it's also the smallest of all the supported sizes for the YUV_420_888 format. Any help will be highly appreciated. I thought I had it all figured out when I implemented the object detection part; this was a very unexpected issue.
Edit: I'm still stuck on this issue, so I have set up a small project to showcase the whole picture. Please check the repo and help me find a solution.
Hope this helps: I had a similar issue with one phone model (Samsung A50). I managed to solve it by changing this:
captureImageReader.setOnImageAvailableListener(null, null)
To:
captureImageReader.setOnImageAvailableListener({ reader ->
    reader.acquireLatestImage()?.close()
}, imageReaderHandler)
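The underlying issue is that every ImageReader surface added as a request target must be drained; otherwise its BufferQueue fills up (the waitForFreeSlotThenRelock timeout above) and stalls the whole session. A minimal sketch for the analysis reader, reusing the handler names from the question's code:

```kotlin
// Sketch: drain analyzeImageReader so its buffer queue never fills.
// Replace the close() with the actual ML Kit processing, and close the
// Image once that processing is done.
analyzeImageReader.setOnImageAvailableListener({ reader ->
    reader.acquireLatestImage()?.close()
}, analyzeImageReaderHandler)
```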

Android 10 (api 29) camera2 api regression with wide-angle camera

I'm using the camera2 api in my camera app, designed specifically for the Google Pixel 3 XL. This device has two front-facing cameras (wide-angle and normal). Thanks to the multi-camera feature, I can access both physical camera devices simultaneously, and my app has a feature to toggle between those two cameras. Up until my recent upgrade to Android 10, I could accurately see two distinct results, but now my wide-angle capture frame has pretty much the same FOV (field of view) as the normal camera's. With the same code and the same apk, on Android 9 the wide-angle capture result is wide, as expected; after the Android 10 upgrade, the wide and normal cameras show practically identical FOV.
Here is a code snippet to demonstrate how I initialize both cameras and capture preview:
MainActivity.kt
private val surfaceReadyCallback = object : SurfaceHolder.Callback {
    override fun surfaceChanged(p0: SurfaceHolder?, p1: Int, p2: Int, p3: Int) { }
    override fun surfaceDestroyed(p0: SurfaceHolder?) { }

    override fun surfaceCreated(p0: SurfaceHolder?) {
        // Get the two output targets from the activity / fragment
        val surface1 = surfaceView1.holder.surface
        val surface2 = surfaceView2.holder.surface
        val dualCamera = findShortLongCameraPair(cameraManager)!!
        val outputTargets = DualCameraOutputs(
            null, mutableListOf(surface1), mutableListOf(surface2))

        // Open the logical camera, configure the outputs and create a session
        createDualCameraSession(cameraManager, dualCamera, targets = outputTargets) { session ->
            val requestTemplate = CameraDevice.TEMPLATE_PREVIEW
            val captureRequest = session.device.createCaptureRequest(requestTemplate).apply {
                arrayOf(surface1, surface2).forEach { addTarget(it) }
            }.build()
            session.setRepeatingRequest(captureRequest, null, null)
        }
    }
}

fun openDualCamera(cameraManager: CameraManager,
                   dualCamera: DualCamera,
                   executor: Executor = SERIAL_EXECUTOR,
                   callback: (CameraDevice) -> Unit) {
    cameraManager.openCamera(
        dualCamera.logicalId, executor, object : CameraDevice.StateCallback() {
            override fun onOpened(device: CameraDevice) { callback(device) }
            override fun onError(device: CameraDevice, error: Int) = onDisconnected(device)
            override fun onDisconnected(device: CameraDevice) = device.close()
        })
}

fun createDualCameraSession(cameraManager: CameraManager,
                            dualCamera: DualCamera,
                            targets: DualCameraOutputs,
                            executor: Executor = SERIAL_EXECUTOR,
                            callback: (CameraCaptureSession) -> Unit) {
    // Create 3 sets of output configurations: one for the logical camera, and
    // one for each of the physical cameras.
    val outputConfigsLogical = targets.first?.map { OutputConfiguration(it) }
    val outputConfigsPhysical1 = targets.second?.map {
        OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId1) } }
    val outputConfigsPhysical2 = targets.third?.map {
        OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId2) } }
    val outputConfigsAll = arrayOf(
        outputConfigsLogical, outputConfigsPhysical1, outputConfigsPhysical2)
        .filterNotNull().flatten()

    val sessionConfiguration = SessionConfiguration(SessionConfiguration.SESSION_REGULAR,
        outputConfigsAll, executor, object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) = callback(session)
            override fun onConfigureFailed(session: CameraCaptureSession) = session.device.close()
        })

    openDualCamera(cameraManager, dualCamera, executor = executor) {
        it.createCaptureSession(sessionConfiguration)
    }
}
DualCamera.kt Helper Class
data class DualCamera(val logicalId: String, val physicalId1: String, val physicalId2: String)

fun findDualCameras(manager: CameraManager, facing: Int? = null): Array<DualCamera> {
    val dualCameras = ArrayList<DualCamera>()
    manager.cameraIdList.map {
        Pair(manager.getCameraCharacteristics(it), it)
    }.filter {
        facing == null || it.first.get(CameraCharacteristics.LENS_FACING) == facing
    }.filter {
        it.first.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)!!.contains(
            CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA)
    }.forEach {
        val physicalCameras = it.first.physicalCameraIds.toTypedArray()
        for (idx1 in 0 until physicalCameras.size) {
            for (idx2 in (idx1 + 1) until physicalCameras.size) {
                dualCameras.add(DualCamera(
                    it.second, physicalCameras[idx1], physicalCameras[idx2]))
            }
        }
    }
    return dualCameras.toTypedArray()
}

fun findShortLongCameraPair(manager: CameraManager, facing: Int? = null): DualCamera? {
    return findDualCameras(manager, facing).map {
        val characteristics1 = manager.getCameraCharacteristics(it.physicalId1)
        val characteristics2 = manager.getCameraCharacteristics(it.physicalId2)
        val focalLengths1 = characteristics1.get(
            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F)
        val focalLengths2 = characteristics2.get(
            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F)
        val focalLengthsDiff1 = focalLengths2.max()!! - focalLengths1.min()!!
        val focalLengthsDiff2 = focalLengths1.max()!! - focalLengths2.min()!!
        if (focalLengthsDiff1 < focalLengthsDiff2) {
            Pair(DualCamera(it.logicalId, it.physicalId1, it.physicalId2), focalLengthsDiff1)
        } else {
            Pair(DualCamera(it.logicalId, it.physicalId2, it.physicalId1), focalLengthsDiff2)
        }
        // Return only the pair with the largest difference, or null if no pairs are found
    }.sortedBy { it.second }.reversed().lastOrNull()?.first
}
And you can see the result on the attached screenshot, the top left corner one has much wider FOV than the same camera but running on Android 10
Is this a known regression with Android 10? Has anyone noticed similar behavior?
My understanding:
I came across the same problem on my Pixel 3. It seems that the wide-angle camera's frame is cropped in the HAL layer before the fusion. The FOV is actually not totally identical, as there is a little disparity between the left and right cameras, but the default zoom level of the wide camera appears to change according to the focal length.
I could not find any official documentation about it. In Android 10, the release notes claim improved fusing of physical cameras:
https://developer.android.com/about/versions/10/features#multi-camera
Solution:
If you wish to access the raw data from the wide angle front camera, you can create 2 camera sessions for both physical cameras instead of a single session for the logical camera.
Updated:
You can use setPhysicalCameraKey to reset the zoom level:
https://developer.android.com/reference/android/hardware/camera2/CaptureRequest.Builder#setPhysicalCameraKey(android.hardware.camera2.CaptureRequest.Key%3CT%3E,%20T,%20java.lang.String)
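A sketch of what that might look like (the key and value here are assumptions for illustration; CaptureRequest.Builder.setPhysicalCameraKey requires API 28, and CONTROL_ZOOM_RATIO requires API 30):

```kotlin
// Sketch: override a setting for one physical camera inside a logical session.
val builder = session.device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
builder.setPhysicalCameraKey(
    CaptureRequest.CONTROL_ZOOM_RATIO, // API 30+
    1.0f,                              // assumed: no extra crop on the wide lens
    dualCamera.physicalId2             // the wide-angle physical camera id
)
```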
The regression you observed is a behavior change on Pixel 3/Pixel 3XL between Android 9 and Android 10. It is not an Android API change per se, but something the API allows the devices change behavior on; other devices may be different.
The camera API allows the physical camera streams to be cropped to match the field-of-view of the logical camera stream.
On pixel 3 (Android 11), probing cameras using CameraManager.getCameraIdList() returns 4 IDs: 0, 1, 2, 3
0: Back Camera : Physical Camera Stream
1: Front camera : Logical camera with two physical camera ID's
2: Front camera normal: Physical Camera Stream
3: Front camera widelens: Physical Camera Stream
As user DannyLin suggested, opening 2 physical camera streams (2, 3) seems to do the job. Note that other combinations such as (0, 1), (1, 2) etc. do not work (only the first call to openCamera() goes through and the second call fails). Here's a snapshot of the physical camera streams for the two front cameras.

Android CameraX, increase ImageAnalysis frame rate

I need to take as many frames as possible from the preview of the camera.
I'm doing this to start the camera using CameraX:
private fun startCamera() {
    // Create configuration object for the viewfinder use case
    val metrics = DisplayMetrics().also { view_finder.display.getRealMetrics(it) }

    // define the screen size
    val screenSize = Size(metrics.widthPixels, metrics.heightPixels)
    val screenAspectRatio = Rational(metrics.widthPixels, metrics.heightPixels)

    val previewConfig = PreviewConfig.Builder()
        .setLensFacing(CameraX.LensFacing.BACK) // defaults to Back camera
        .setTargetAspectRatio(screenAspectRatio)
        .setTargetResolution(screenSize)
        .build()

    // Build the viewfinder use case
    val preview = Preview(previewConfig)

    // Every time the viewfinder is updated, recompute layout
    preview.setOnPreviewOutputUpdateListener {
        // To update the SurfaceTexture, we have to remove it and re-add it
        val parent = view_finder.parent as ViewGroup
        parent.removeView(view_finder)
        parent.addView(view_finder, 0)
        view_finder.surfaceTexture = it.surfaceTexture
        updateTransform()
    }

    val analyzerConfig = ImageAnalysisConfig.Builder().apply {
        // Use a worker thread for image analysis to prevent glitches
        val analyzerThread = HandlerThread("AnalysisThread").apply {
            start()
        }
        setLensFacing(CameraX.LensFacing.BACK) // defaults to Back camera
        setTargetAspectRatio(screenAspectRatio)
        setMaxResolution(Size(600, 320))
        setCallbackHandler(Handler(analyzerThread.looper))
        setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    }.build()

    val analyzerUseCase = ImageAnalysis(analyzerConfig).apply {
        analyzer = context?.let { FrameCapturer(it) }
    }
    //====================== Image Analysis Config code End==========================

    CameraX.bindToLifecycle(this, preview, analyzerUseCase)
}
My FrameCapturer class is:
class FrameCapturer(context: Context) : ImageAnalysis.Analyzer {
    private var mListener = context as FrameCapturerListener

    override fun analyze(image: ImageProxy?, rotationDegrees: Int) {
        val buffer = image?.planes?.get(0)?.buffer
        // Extract image data from callback object
        val data = buffer?.toByteArray()
        // Convert the data into an array of pixel values
        //val pixels = data?.map { it.toInt() and 0xFF }
        mListener.updateFps()
    }

    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind() // Rewind the buffer to zero
        val data = ByteArray(remaining())
        get(data) // Copy the buffer into a byte array
        return data // Return the byte array
    }

    interface FrameCapturerListener {
        fun updateFps()
    }
}
And my updateFps function is:
fun updateFps() {
    fps += 1
    if (!isCountDownStarted) {
        val timer = object : CountDownTimer(1000, 1000) {
            override fun onTick(millisUntilFinished: Long) {
            }

            override fun onFinish() {
                Log.d("CameraFragment", fps.toString())
                fps = 0
                isCountDownStarted = false
            }
        }.start()
        isCountDownStarted = true
    }
}
I'm getting around 16-22 fps, even if I don't convert the image to a byte array in the FrameCapturer class. So it seems the analyzer receives only about 20 images per second. Is there a way to increase the fps? I need at least 40-60 images per second, because I need to run machine-learning post-processing on each frame, so the rate will probably drop to 20-30 after the ML analysis.
EDIT: I discovered that with the Pixel 2 XL I get 60 fps without any drops, while with my device (Xiaomi Redmi Note 5 Pro) I get only 25 fps. Can I optimize the code in any way to increase the FPS?
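One knob worth checking beyond device hardware: on newer CameraX versions you can request a specific AE frame-rate range through the camera2 interop, which on capable devices can raise or pin the rate delivered to analysis. A sketch assuming the androidx.camera.camera2.interop artifact (the device may still ignore the requested range if the sensor doesn't support it):

```kotlin
// Sketch: request a 60 fps AE target range for the analysis use case.
// Whether the device honors it depends on the sensor's supported FPS ranges.
val builder = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
Camera2Interop.Extender(builder)
    .setCaptureRequestOption(
        CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE,
        Range(60, 60)
    )
val imageAnalysis = builder.build()
```

Also keep the analyzer itself cheap and close each ImageProxy promptly; with KEEP_ONLY_LATEST, any processing time directly reduces the delivered frame rate.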
