CameraX: Output of ImageAnalysis looks corrupted/scattered - android

I'm working on a QR scanning application based on CameraX. Scanning works as expected on most devices, except for a few random ones. After debugging this issue for a long time, I found out that the cropped image comes out scattered/shattered on the devices where scanning isn't working.
Expected output (works on a Pixel device):
Current output (on the devices where scanning is not working):
The analyze method that receives each frame (part of the CustomImageAnalyzer class)
override fun analyze(image: ImageProxy) {
val byteBuffer = image.planes[0].buffer
if (imageData.size != byteBuffer.capacity()) {
imageData = ByteArray(byteBuffer.capacity())
}
byteBuffer.get(imageData)
val iFact = if (mActivity.getOverlayView().width <= mActivity.getOverlayView().height) {
image.width / mActivity.getOverlayView().width.toDouble()
} else {
image.height / mActivity.getOverlayView().height.toDouble()
}
Log.i(TAG, "")
Log.i(TAG, "image.height" + image.height)
Log.i(TAG, "image.width" + image.width)
Log.i(TAG, "overlay.height" + mActivity.getOverlayView().height)
Log.i(TAG, "overlay.width" + mActivity.getOverlayView().width)
val size = mActivity.getOverlayView().size * iFact
Log.i(TAG, "Obtained size 1: " + mActivity.getOverlayView().size)
Log.i(TAG, "iFact: $iFact")
Log.i(TAG, "calculated size: $size")
val left = (image.width - size) / 2
val top = (image.height - size) / 2
Log.i(TAG, "left: $left")
Log.i(TAG, "top: $top")
val source = PlanarYUVLuminanceSource(
imageData,
image.width, image.height,
left.toInt(), top.toInt(),
size.toInt(), size.toInt(),
false
)
Log.i(TAG, "source.thumbnailHeight" + source.thumbnailHeight.toString())
Log.i(TAG, "source.thumbnailWidth" + source.thumbnailWidth.toString())
mActivity.runOnUiThread {
mActivity.showIntArray(source.renderThumbnail(), source.thumbnailHeight)
}
val binaryBitmap = BinaryBitmap(HybridBinarizer(source))
try {
val result = reader.decodeWithState(binaryBitmap)
listener.invoke(result.text)
} catch (e: ReaderException) {
} finally {
reader.reset()
}
// Compute the FPS of the entire pipeline
val frameCount = 10
if (++frameCounter % frameCount == 0) {
frameCounter = 0
val now = System.currentTimeMillis()
val delta = now - lastFpsTimestamp
val fps = 1000 * frameCount.toFloat() / delta
Log.d(TAG, "Analysis FPS: ${"%.02f".format(fps)}")
lastFpsTimestamp = now
}
image.close()
}
Code to start camera:
val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
val preview = Preview.Builder()
.build()
.also {
it.setSurfaceProvider(contentFrame.surfaceProvider)
}
overlayView = findViewById(R.id.overlay)
val imageAnalysis = ImageAnalysis.Builder()
.setTargetResolution(Size(960, 960))
.build()
imageAnalysis.setAnalyzer(
executor,
QRCodeImageAnalyzer (this) { response ->
if (response != null) {
handleResult(response)
}
}
)
cameraProvider.unbindAll()
camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageAnalysis)
Also, I get the image from PlanarYUVLuminanceSource after the cropping is done.
Can someone please help me out with this issue?

Some devices report a special rotation. You can use imageProxy.getImageInfo().getRotationDegrees() to get the correct rotation of the image. Reference (ImageInfo): https://developer.android.com/reference/androidx/camera/core/ImageInfo#getRotationDegrees()
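A minimal sketch of how that value could be applied inside analyze() (the width/height swap for 90/270 frames is one possible way to use it, an assumption rather than part of the answer above):
override fun analyze(image: ImageProxy) {
    // Rotation CameraX reports for this frame: 0, 90, 180 or 270.
    val rotationDegrees = image.imageInfo.rotationDegrees
    // The buffer arrives in sensor orientation, so for 90/270 frames the side
    // that lines up with the overlay's width is the buffer's height.
    val rotated = rotationDegrees == 90 || rotationDegrees == 270
    val effectiveWidth = if (rotated) image.height else image.width
    val effectiveHeight = if (rotated) image.width else image.height
    // ... compute iFact, left and top against effectiveWidth/effectiveHeight
    //     instead of image.width/image.height, then crop as before.
}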

Related

MediaProjectionManager freezes after some number of screenshots

I am using MediaProjectionManager for taking screenshots from the ForegroundService. I discovered that the behavior of the capture surface differs between Android 10 and Android 11.
When I take a screenshot
fun captureBitmap(frame: CropFrames, response: (bitmap: Bitmap) -> Unit){
delayed(50) {
this.frames = frame
projection = mgr!!.getMediaProjection(resultCode, resultData!!)
val cb: MediaProjection.Callback = object : MediaProjection.Callback() {
override fun onStop() {
vdisplay!!.release() //?
response.invoke(latestBitmap!!)
}
}
vdisplay = projection?.createVirtualDisplay(
NAME,
width,
height,
App.densityDpi,
FLAGS,
imageReader.surface,
null,
null
)
projection?.registerCallback(cb, null)
}
}
onImageAvailable triggered
override fun onImageAvailable(reader: ImageReader) {
try {
val image = imageReader.acquireNextImage()
if (image != null) {
val planes = image.planes
val buffer = planes[0].buffer
val pixelStride = planes[0].pixelStride
val rowStride = planes[0].rowStride
val rowPadding = rowStride - pixelStride * width
val bitmapWidth = width + rowPadding / pixelStride
if (latestBitmap == null || latestBitmap!!.width != bitmapWidth || latestBitmap!!.height != height) {
if (latestBitmap != null) {
latestBitmap!!.recycle()
}
latestBitmap = Bitmap.createBitmap(
bitmapWidth,
height, Bitmap.Config.ARGB_8888
)
}
latestBitmap!!.copyPixelsFromBuffer(buffer)
image.close()
handler.parseFrame(frames!!, latestBitmap!!) {
stopCapture()
}
}
} catch (e: Exception) {
e.printStackTrace()
}
}
Then I release it
fun stopCapture() {
if (projection != null) {
projection!!.stop()
vdisplay!!.release()
projection = null
}
}
This flow can be triggered many times per lifecycle, but each call takes longer to complete than the previous one, and the UI twitches briefly while taking a screenshot (there are no main-thread calculations). Am I perhaps not cleaning something up properly? Any suggestions are appreciated. Thanks!

Android CameraX - Camera preview freezes on first call to captureUseCase.takePicture()

I am using the Android CameraX API to capture a selfie image by analyzing the detected face.
The app captures more than 5 images if the face is inside the rectangular frame shown on the screen.
The issue is that the camera preview freezes (for a few seconds) when capturing an image for the first time.
I am using the code below to set up the camera provider in my activity.
// preview use case
internal fun bindPreviewUseCase(previewView: PreviewView) {
Util.printDebugLog("Binding camera preview use case.")
if (previewUseCase != null) {
cameraProvider.unbind(previewUseCase)
}
previewUseCase = Preview.Builder()
.setTargetResolution(Size(480, 800))
.build()
previewUseCase!!.setSurfaceProvider(previewView.createSurfaceProvider())
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, previewUseCase)
}
// analysis use case
internal fun bindAnalysisUseCase(graphicOverlay: GraphicOverlay) {
Util.printDebugLog("Binding camera analysis use case.")
if (analysisUseCase != null) {
cameraProvider.unbind(analysisUseCase)
}
imageProcessor?.stop()
try {
val faceDetectorOptions: FaceDetectorOptions = getFaceDetectorOptionsForLivePreview()
imageProcessor =
LiveFaceDetectorProcessor(context, faceDetectorOptions, faceFrameProcessListener)
} catch (e: Exception) {
Viola.listener.onFaceDetectionFailed(
FaceDetectionError.IMAGE_PROCESSOR_ERROR,
"Can not create image processor: ${e.localizedMessage}"
)
return
}
val builder = ImageAnalysis.Builder()
builder.setTargetResolution(Size(480, 800))
analysisUseCase = builder.build()
needUpdateGraphicOverlayImageSourceInfo = true
analysisUseCase!!.setAnalyzer(
ContextCompat.getMainExecutor(context),
ImageAnalysis.Analyzer { imageProxy: ImageProxy ->
if (needUpdateGraphicOverlayImageSourceInfo) {
val isImageFlipped =
lensFacing == CameraSelector.LENS_FACING_FRONT
val rotationDegrees = imageProxy.imageInfo.rotationDegrees
if (rotationDegrees == 0 || rotationDegrees == 180) {
graphicOverlay.setImageSourceInfo(
imageProxy.width, imageProxy.height, isImageFlipped
)
} else {
graphicOverlay.setImageSourceInfo(
imageProxy.height, imageProxy.width, isImageFlipped
)
}
needUpdateGraphicOverlayImageSourceInfo = false
}
try {
imageProcessor!!.processImageProxy(imageProxy, graphicOverlay)
} catch (e: MlKitException) {
Viola.listener.onFaceDetectionFailed(
FaceDetectionError.IMAGE_PROCESSOR_ERROR,
"Failed to process image: ${e.localizedMessage}"
)
}
}
)
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, analysisUseCase)
}
// capture use case
internal fun bindCaptureUseCase(previewView: PreviewView) {
Util.printDebugLog("Binding camera capture use case.")
if (captureUseCase != null) {
cameraProvider.unbind(captureUseCase)
}
val rotation = previewView.display.rotation
captureUseCase = ImageCapture.Builder()
.setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
.setTargetResolution(Size(960, 1280))
.setTargetRotation(rotation)
.build()
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, captureUseCase)
}
//capture
internal fun takePicture(callback: CaptureCallback) {
Util.printDebugLog("Capturing current frame.")
val diff = System.currentTimeMillis() - lastCaptureTime
if (diff > captureDelay) {
if (!isCaptureInProgress) {
if (captureUseCase == null) {
return
}
isCaptureInProgress = true
lastCaptureTime = System.currentTimeMillis()
val executor = ContextCompat.getMainExecutor(context)
//TODO remove
val startTime = System.currentTimeMillis()
captureUseCase!!.takePicture(
executor,
object : ImageCapture.OnImageCapturedCallback() {
override fun onCaptureSuccess(image: ImageProxy) {
val timeElapsed = System.currentTimeMillis() - startTime
Util.printDebugLog("Image captured, producing bitmap from image proxy. $timeElapsed")
val bitmap: Bitmap =
BitmapUtil.getBitmap(image, image.imageInfo.rotationDegrees)!!
image.close()
callback.onCaptured(bitmap)
isCaptureInProgress = false
super.onCaptureSuccess(image)
}
override fun onError(exception: ImageCaptureException) {
super.onError(exception)
isCaptureInProgress = false
Util.printDebugLog("Unable to capture: ${exception.localizedMessage}")
}
})
}
} else {
Util.printDebugLog("Capture called before minimum delay.Ignoring capture call.")
}
}
// dependencies used
implementation "androidx.camera:camera-camera2:1.0.0-beta07"
implementation "androidx.camera:camera-view:1.0.0-alpha14"
implementation "androidx.camera:camera-lifecycle:1.0.0-beta07"
// time taken for each capture(in milliseconds)
image 1 -> 2396
image 2 -> 411
image 3 -> 356
image 4 -> 386
image 5 -> 345

Is there a way to crop Image/ImageProxy (before passing to MLKit's analyzer)?

I'm using CameraX's Analyzer use case with ML Kit's BarcodeScanner. I would like to crop a portion of the image received from the camera before passing it to the scanner.
What I'm doing right now is converting the ImageProxy (that I receive in the Analyzer) to a Bitmap, cropping it and then passing it to the BarcodeScanner. The downside is that it's not a very fast or efficient process.
I've also noticed the warning I get in the Logcat when running this code:
ML Kit has detected that you seem to pass camera frames to the
detector as a Bitmap object. This is inefficient. Please use
YUV_420_888 format for camera2 API or NV21 format for (legacy) camera
API and directly pass down the byte array to ML Kit.
It would be nice not to do the ImageProxy conversion at all, but how do I crop the rectangle I want to analyze?
What I've already tried is setting the cropRect field of the Image class (imageProxy.image.cropRect), but it doesn't seem to affect the end result.
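For reference, the conversion path described above looks roughly like this (just a sketch; toBitmap() stands in for whatever ImageProxy-to-Bitmap conversion is used, and cropLeft/cropTop/cropWidth/cropHeight and barcodeScanner are placeholders):
override fun analyze(imageProxy: ImageProxy) {
    // Convert the whole frame to a Bitmap (slow), crop it, then hand it to ML Kit.
    val full = imageProxy.toBitmap() // placeholder conversion helper
    val cropped = Bitmap.createBitmap(full, cropLeft, cropTop, cropWidth, cropHeight)
    barcodeScanner.process(InputImage.fromBitmap(cropped, imageProxy.imageInfo.rotationDegrees))
        .addOnCompleteListener { imageProxy.close() }
}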
Yes, it's true that if you use a ViewPort and set it on your use cases (imageCapture or imageAnalysis, as described here: https://developer.android.com/training/camerax/configuration), you only get information about the crop rectangle. For ImageCapture saved to disk the image is cropped before saving, but for ImageAnalysis (and for ImageCapture used without saving to disk) the crop is not applied for you. Here is how I solved this problem:
First of all, set a view port on the use cases as described here: https://developer.android.com/training/camerax/configuration
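A minimal sketch of this first step (ViewPort and UseCaseGroup come from androidx.camera.core; preview, imageAnalysis, previewView, lifecycleOwner and cameraSelector are assumed to already exist, and the square aspect ratio is only an example):
// Declare a shared view port and group the use cases so CameraX reports the
// same crop rectangle to all of them.
val viewPort = ViewPort.Builder(Rational(1, 1), previewView.display.rotation).build()
val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .addUseCase(imageAnalysis)
    .setViewPort(viewPort)
    .build()
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCaseGroup)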
Then get a cropped bitmap to analyze:
override fun analyze(imageProxy: ImageProxy) {
val mediaImage = imageProxy.image
if (mediaImage != null && mediaImage.format == ImageFormat.YUV_420_888) {
croppedBitmap(mediaImage, imageProxy.cropRect).let { bitmap ->
requestDetectInImage(InputImage.fromBitmap(bitmap, rotation))
.addOnCompleteListener { imageProxy.close() }
}
} else {
imageProxy.close()
}
}
private fun croppedBitmap(mediaImage: Image, cropRect: Rect): Bitmap {
val yBuffer = mediaImage.planes[0].buffer // Y
val vuBuffer = mediaImage.planes[2].buffer // VU
val ySize = yBuffer.remaining()
val vuSize = vuBuffer.remaining()
val nv21 = ByteArray(ySize + vuSize)
yBuffer.get(nv21, 0, ySize)
vuBuffer.get(nv21, ySize, vuSize)
val yuvImage = YuvImage(nv21, ImageFormat.NV21, mediaImage.width, mediaImage.height, null)
val outputStream = ByteArrayOutputStream()
yuvImage.compressToJpeg(cropRect, 100, outputStream)
val imageBytes = outputStream.toByteArray()
return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
}
Possibly there is some loss in conversion speed, but on my devices I did not notice a difference. I set the quality to 100 in compressToJpeg; a lower quality might improve speed, but that needs testing.
Update (May 02 '21):
I found another way that avoids converting to JPEG and then to a Bitmap. This should be faster.
Set the view port as before.
Convert YUV_420_888 to NV21, then crop and analyze:
override fun analyze(imageProxy: ImageProxy) {
val mediaImage = imageProxy.image
if (mediaImage != null && mediaImage.format == ImageFormat.YUV_420_888) {
croppedNV21(mediaImage, imageProxy.cropRect).let { byteArray ->
requestDetectInImage(
InputImage.fromByteArray(
byteArray,
imageProxy.cropRect.width(),
imageProxy.cropRect.height(),
rotation,
IMAGE_FORMAT_NV21,
)
)
.addOnCompleteListener { imageProxy.close() }
}
} else {
imageProxy.close()
}
}
private fun croppedNV21(mediaImage: Image, cropRect: Rect): ByteArray {
val yBuffer = mediaImage.planes[0].buffer // Y
val vuBuffer = mediaImage.planes[2].buffer // VU
val ySize = yBuffer.remaining()
val vuSize = vuBuffer.remaining()
val nv21 = ByteArray(ySize + vuSize)
yBuffer.get(nv21, 0, ySize)
vuBuffer.get(nv21, ySize, vuSize)
return cropByteArray(nv21, mediaImage.width, cropRect)
}
private fun cropByteArray(array: ByteArray, imageWidth: Int, cropRect: Rect): ByteArray {
val croppedArray = ByteArray(cropRect.width() * cropRect.height())
var i = 0
array.forEachIndexed { index, byte ->
val x = index % imageWidth
val y = index / imageWidth
if (cropRect.left <= x && x < cropRect.right && cropRect.top <= y && y < cropRect.bottom) {
croppedArray[i] = byte
i++
}
}
return croppedArray
}
I took the first crop function from here: Android: How to crop images using CameraX?
I also found another crop function; it seems more complicated:
private fun cropByteArray(src: ByteArray, width: Int, height: Int, cropRect: Rect, ): ByteArray {
val x = cropRect.left * 2 / 2
val y = cropRect.top * 2 / 2
val w = cropRect.width() * 2 / 2
val h = cropRect.height() * 2 / 2
val yUnit = w * h
val uv = yUnit / 2
val nData = ByteArray(yUnit + uv)
val uvIndexDst = w * h - y / 2 * w
val uvIndexSrc = width * height + x
var srcPos0 = y * width
var destPos0 = 0
var uvSrcPos0 = uvIndexSrc
var uvDestPos0 = uvIndexDst
for (i in y until y + h) {
System.arraycopy(src, srcPos0 + x, nData, destPos0, w) //y memory block copy
srcPos0 += width
destPos0 += w
if (i and 1 == 0) {
System.arraycopy(src, uvSrcPos0, nData, uvDestPos0, w) //uv memory block copy
uvSrcPos0 += width
uvDestPos0 += w
}
}
return nData
}
I took the second crop function from here:
https://www.programmersought.com/article/75461140907/
I would be glad if someone can help improve the code.
I'm still improving the way to do it, but this works for me for now.
CameraX crop image before sending to analyze
<androidx.constraintlayout.widget.ConstraintLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:paddingBottom="@dimen/_40sdp">
<androidx.camera.view.PreviewView
android:id="#+id/previewView"
android:layout_width="match_parent"
android:layout_height="0dp"
app:layout_constraintDimensionRatio="1:1"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" /></androidx.constraintlayout.widget.ConstraintLayout>
Cropping an image into 1:1 before passing it to analyze
override fun onCaptureSuccess(image: ImageProxy) {
super.onCaptureSuccess(image)
var bitmap: Bitmap = imageProxyToBitmap(image)
val dimension: Int = min(bitmap.width, bitmap.height)
bitmap = ThumbnailUtils.extractThumbnail(bitmap, dimension, dimension)
imageView.setImageBitmap(bitmap) // Here you can pass the cropped (center) image to analyze
image.close()
}
Function for converting into a bitmap:
private fun imageProxyToBitmap(image: ImageProxy): Bitmap {
val buffer: ByteBuffer = image.planes[0].buffer
val bytes = ByteArray(buffer.remaining())
buffer.get(bytes)
return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
You would use ImageProxy.setCropRect(Rect) to set the crop rect and ImageProxy.cropRect to read it back.
For example, if you had an imageProxy, you would call imageProxy.setCropRect(rect) and then read imageProxy.cropRect.
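A small sketch of that inside an analyzer (the centered square is only an example; as the question notes, whatever reads the frame afterwards still has to honour the crop itself):
override fun analyze(imageProxy: ImageProxy) {
    // Set a centered square crop on the proxy.
    val side = minOf(imageProxy.width, imageProxy.height)
    val left = (imageProxy.width - side) / 2
    val top = (imageProxy.height - side) / 2
    imageProxy.setCropRect(Rect(left, top, left + side, top + side))
    // Read it back; downstream consumers such as ML Kit do not apply this
    // automatically, so crop the buffer yourself using this rect.
    val crop: Rect = imageProxy.cropRect
    // ... crop/analyze using `crop`, then release the frame.
    imageProxy.close()
}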

How to prevent CameraX from turning white

I built an app using the CameraX codelab example, and it is working fine, but once my phone goes to sleep and the screen turns off, the CameraX feed does not resume after the phone wakes up again, and the CameraX screen remains white.
UPDATE
Sorry, it is not the camera itself. The camera preview is invisible, and I have an ImageView on which the image analyzer displays what the camera sees.
It looks like the val bitmap = view_finder.bitmap ?: return@Analyzer line in my code below is getting a null bitmap once the phone goes to sleep.
private lateinit var viewFinder: TextureView
private fun startCamera() {
val previewConfig = PreviewConfig.Builder().apply {
setTargetAspectRatio(Rational(1, 1))
setTargetResolution(Size(640, 640))
}.build()
val preview = Preview(previewConfig)
preview.setOnPreviewOutputUpdateListener {
val parent = viewFinder.parent as ViewGroup
parent.removeView(viewFinder)
parent.addView(viewFinder, 0)
viewFinder.surfaceTexture = it.surfaceTexture
updateTransform()
}
val imageCaptureConfig = ImageCaptureConfig.Builder()
.apply {
setTargetAspectRatio(Rational(1, 1))
setCaptureMode(ImageCapture.CaptureMode.MIN_LATENCY)
}.build()
val imageCapture = ImageCapture(imageCaptureConfig)
findViewById<ImageButton>(R.id.capture_button).setOnClickListener {
val file = File(externalMediaDirs.first(),
"${System.currentTimeMillis()}.jpg")
imageCapture.takePicture(file,
object : ImageCapture.OnImageSavedListener {
override fun onError(error: ImageCapture.UseCaseError,
message: String, exc: Throwable?) {
val msg = "Photo capture failed: $message"
Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
Log.e("CameraXApp", msg)
exc?.printStackTrace()
}
override fun onImageSaved(file: File) {
val msg = "Photo capture succeeded: ${file.absolutePath}"
Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
Log.d("CameraXApp", msg)
}
})
}
// Setup image analysis pipeline that computes average pixel luminance
val analyzerConfig = ImageAnalysisConfig.Builder().apply {
val analyzerThread = HandlerThread(
"LuminosityAnalysis").apply { start() }
setCallbackHandler(Handler(analyzerThread.looper))
setImageReaderMode(
ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
}.build()
val analyzerUseCase = ImageAnalysis(analyzerConfig).apply {
////// This is my own code that I added to the analyzer ///////
analyzer = ImageAnalysis.Analyzer { image, rotationDegrees ->
val bitmap = view_finder.bitmap ?: return@Analyzer
scope.launch(Dispatchers.Unconfined) {
val mat = Mat()
Utils.bitmapToMat(bitmap!!, mat)
val detectedFaces = FaceDetection.detectFaces(bitmap!!)
println("Detected Faces = $detectedFaces")
Toast.makeText(
this@MainActivity, "Detected Faces = ${detectedFaces.toArray().size}",
Toast.LENGTH_SHORT
).show()
if (detectedFaces.toArray().isNotEmpty()) {
val paint = Paint().apply {
isAntiAlias = true
style = Paint.Style.STROKE
color = Color.RED
strokeWidth = 10f
}
for (rect in detectedFaces.toArray()) {
bitmap?.let { Canvas(it) }?.apply {
drawRect(
rect.x.toFloat(), // faceRectangle.left,
rect.y.toFloat(), //faceRectangle.top,
rect.x.toFloat() + rect.width,
rect.y.toFloat() + rect.height,
paint
)
}
}
}
}
runOnUiThread { imageView.setImageBitmap(bitmap) }
}
}
CameraX.bindToLifecycle(
this, preview, imageCapture, analyzerUseCase)
}
I also had the same issue and this solution worked for me.
We can add the following line of code in onCreate() to keep the device awake:
window.addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON)
Here is the reference link from docs
https://developer.android.com/training/scheduling/wakelock
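In context it can look like this (activity_main is only a placeholder layout name):
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // Keep the screen on while this activity is in the foreground, so the
    // preview/analyzer is not stopped when the device would otherwise sleep.
    window.addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON)
    setContentView(R.layout.activity_main)
}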

Camera X captures images in different rotation states

Okay, I went through different posts and found out that, depending on the manufacturer, there can be complications such as captured images being rotated, so you have to be aware of that. What I did was:
fun rotateBitmap(bitmap: Bitmap): Bitmap? {
val matrix = Matrix()
when (getImageOrientation(bitmap)) {
ExifInterface.ORIENTATION_NORMAL -> return bitmap
ExifInterface.ORIENTATION_FLIP_HORIZONTAL -> matrix.setScale(-1f, 1f)
ExifInterface.ORIENTATION_ROTATE_270 -> matrix.setRotate(-90f)
ExifInterface.ORIENTATION_ROTATE_180 -> matrix.setRotate(180f)
ExifInterface.ORIENTATION_ROTATE_90 -> matrix.setRotate(90f)
ExifInterface.ORIENTATION_FLIP_VERTICAL -> {
matrix.setRotate(180f)
matrix.postScale(-1f, 1f)
}
ExifInterface.ORIENTATION_TRANSPOSE -> {
matrix.setRotate(90f)
matrix.postScale(-1f, 1f)
}
ExifInterface.ORIENTATION_TRANSVERSE -> {
matrix.setRotate(-90f)
matrix.postScale(-1f, 1f)
}
else -> return bitmap
}
// Apply the rotation/flip collected in the matrix.
return Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
}
This worked. But then I noticed something really weird, and it might be related to how I configured CameraX.
With the same device I get differently rotated Bitmaps (well, this should not happen: if a device rotates images weirdly, it should rotate them in both use cases, ImageAnalysis and ImageCapture).
So, why is this happening and how can I fix it?
Code implementation:
Binding camera X to life-cycle:
CameraX.bindToLifecycle(
this,
buildPreviewUseCase(),
buildImageAnalysisUseCase(),
buildImageCaptureUseCase()
)
Preview use case:
private fun buildPreviewUseCase(): Preview {
val previewConfig = PreviewConfig.Builder()
.setTargetAspectRatio(config.aspectRatio)
.setTargetResolution(config.resolution)
.setTargetRotation(Surface.ROTATION_0)
.setLensFacing(config.lensFacing)
.build()
return AutoFitPreviewBuilder.build(previewConfig, cameraTextureView)
}
Capture use case:
private fun buildImageCaptureUseCase(): ImageCapture {
val captureConfig = ImageCaptureConfig.Builder()
.setTargetAspectRatio(config.aspectRatio)
.setTargetRotation(Surface.ROTATION_0)
.setTargetResolution(config.resolution)
.setCaptureMode(config.captureMode)
.build()
val capture = ImageCapture(captureConfig)
manualModeTakePhotoButton.setOnClickListener {
capture.takePicture(object : ImageCapture.OnImageCapturedListener() {
override fun onCaptureSuccess(imageProxy: ImageProxy, rotationDegrees: Int) {
viewModel.onManualCameraModeAnalysis(imageProxy, rotationDegrees)
}
override fun onError(useCaseError: ImageCapture.UseCaseError?, message: String?, cause: Throwable?) {
//
}
})
}
return capture
}
Analysis use case:
private fun buildImageAnalysisUseCase(): ImageAnalysis {
val analysisConfig = ImageAnalysisConfig.Builder().apply {
val analyzerThread = HandlerThread("xAnalyzer").apply { start() }
analyzerHandler = Handler(analyzerThread.looper)
setCallbackHandler(analyzerHandler!!)
setTargetAspectRatio(config.aspectRatio)
setTargetRotation(Surface.ROTATION_0)
setTargetResolution(config.resolution)
setImageReaderMode(config.readerMode)
setImageQueueDepth(config.queueDepth)
}.build()
val analysis = ImageAnalysis(analysisConfig)
analysis.analyzer = ImageRecognitionAnalyzer(viewModel)
return analysis
}
AutoFitPreviewBuilder:
class AutoFitPreviewBuilder private constructor(config: PreviewConfig,
viewFinderRef: WeakReference<TextureView>) {
/** Public instance of preview use-case which can be used by consumers of this adapter */
val useCase: Preview
/** Internal variable used to keep track of the use-case's output rotation */
private var bufferRotation: Int = 0
/** Internal variable used to keep track of the view's rotation */
private var viewFinderRotation: Int? = null
/** Internal variable used to keep track of the use-case's output dimension */
private var bufferDimens: Size = Size(0, 0)
/** Internal variable used to keep track of the view's dimension */
private var viewFinderDimens: Size = Size(0, 0)
/** Internal variable used to keep track of the view's display */
private var viewFinderDisplay: Int = -1
/** Internal reference of the [DisplayManager] */
private lateinit var displayManager: DisplayManager
/**
* We need a display listener for orientation changes that do not trigger a configuration
* change, for example if we choose to override config change in manifest or for 180-degree
* orientation changes.
*/
private val displayListener = object : DisplayManager.DisplayListener {
override fun onDisplayAdded(displayId: Int) = Unit
override fun onDisplayRemoved(displayId: Int) = Unit
override fun onDisplayChanged(displayId: Int) {
val viewFinder = viewFinderRef.get() ?: return
if (displayId == viewFinderDisplay) {
val display = displayManager.getDisplay(displayId)
val rotation = getDisplaySurfaceRotation(display)
updateTransform(viewFinder, rotation, bufferDimens, viewFinderDimens)
}
}
}
init {
// Make sure that the view finder reference is valid
val viewFinder = viewFinderRef.get() ?:
throw IllegalArgumentException("Invalid reference to view finder used")
// Initialize the display and rotation from texture view information
viewFinderDisplay = viewFinder.display.displayId
viewFinderRotation = getDisplaySurfaceRotation(viewFinder.display) ?: 0
// Initialize public use-case with the given config
useCase = Preview(config)
// Every time the view finder is updated, recompute layout
useCase.onPreviewOutputUpdateListener = Preview.OnPreviewOutputUpdateListener {
val viewFinder =
viewFinderRef.get() ?: return@OnPreviewOutputUpdateListener
// To update the SurfaceTexture, we have to remove it and re-add it
val parent = viewFinder.parent as ViewGroup
parent.removeView(viewFinder)
parent.addView(viewFinder, 0)
viewFinder.surfaceTexture = it.surfaceTexture
bufferRotation = it.rotationDegrees
val rotation = getDisplaySurfaceRotation(viewFinder.display)
updateTransform(viewFinder, rotation, it.textureSize, viewFinderDimens)
}
// Every time the provided texture view changes, recompute layout
viewFinder.addOnLayoutChangeListener { view, left, top, right, bottom, _, _, _, _ ->
val viewFinder = view as TextureView
val newViewFinderDimens = Size(right - left, bottom - top)
val rotation = getDisplaySurfaceRotation(viewFinder.display)
updateTransform(viewFinder, rotation, bufferDimens, newViewFinderDimens)
}
// Every time the orientation of device changes, recompute layout
displayManager = viewFinder.context
.getSystemService(Context.DISPLAY_SERVICE) as DisplayManager
displayManager.registerDisplayListener(displayListener, null)
// Remove the display listeners when the view is detached to avoid
// holding a reference to the View outside of a Fragment.
// NOTE: Even though using a weak reference should take care of this,
// we still try to avoid unnecessary calls to the listener this way.
viewFinder.addOnAttachStateChangeListener(object : View.OnAttachStateChangeListener {
override fun onViewAttachedToWindow(view: View?) {
displayManager.registerDisplayListener(displayListener, null)
}
override fun onViewDetachedFromWindow(view: View?) {
displayManager.unregisterDisplayListener(displayListener)
}
})
}
/** Helper function that fits a camera preview into the given [TextureView] */
private fun updateTransform(textureView: TextureView?, rotation: Int?, newBufferDimens: Size,
newViewFinderDimens: Size) {
// This should not happen anyway, but now the linter knows
val textureView = textureView ?: return
if (rotation == viewFinderRotation &&
Objects.equals(newBufferDimens, bufferDimens) &&
Objects.equals(newViewFinderDimens, viewFinderDimens)) {
// Nothing has changed, no need to transform output again
return
}
if (rotation == null) {
// Invalid rotation - wait for valid inputs before setting matrix
return
} else {
// Update internal field with new inputs
viewFinderRotation = rotation
}
if (newBufferDimens.width == 0 || newBufferDimens.height == 0) {
// Invalid buffer dimens - wait for valid inputs before setting matrix
return
} else {
// Update internal field with new inputs
bufferDimens = newBufferDimens
}
if (newViewFinderDimens.width == 0 || newViewFinderDimens.height == 0) {
// Invalid view finder dimens - wait for valid inputs before setting matrix
return
} else {
// Update internal field with new inputs
viewFinderDimens = newViewFinderDimens
}
val matrix = Matrix()
// Compute the center of the view finder
val centerX = viewFinderDimens.width / 2f
val centerY = viewFinderDimens.height / 2f
// Correct preview output to account for display rotation
matrix.postRotate(-viewFinderRotation!!.toFloat(), centerX, centerY)
// Buffers are rotated relative to the device's 'natural' orientation: swap width and height
val bufferRatio = bufferDimens.height / bufferDimens.width.toFloat()
val scaledWidth: Int
val scaledHeight: Int
// Match longest sides together -- i.e. apply center-crop transformation
if (viewFinderDimens.width > viewFinderDimens.height) {
scaledHeight = viewFinderDimens.width
scaledWidth = Math.round(viewFinderDimens.width * bufferRatio)
} else {
scaledHeight = viewFinderDimens.height
scaledWidth = Math.round(viewFinderDimens.height * bufferRatio)
}
// Compute the relative scale value
val xScale = scaledWidth / viewFinderDimens.width.toFloat()
val yScale = scaledHeight / viewFinderDimens.height.toFloat()
// Scale input buffers to fill the view finder
matrix.preScale(xScale, yScale, centerX, centerY)
// Finally, apply transformations to our TextureView
textureView.setTransform(matrix)
}
companion object {
/** Helper function that gets the rotation of a [Display] in degrees */
fun getDisplaySurfaceRotation(display: Display?) = when(display?.rotation) {
Surface.ROTATION_0 -> 0
Surface.ROTATION_90 -> 90
Surface.ROTATION_180 -> 180
Surface.ROTATION_270 -> 270
else -> null
}
/**
* Main entrypoint for users of this class: instantiates the adapter and returns an instance
* of [Preview] which automatically adjusts in size and rotation to compensate for
* config changes.
*/
fun build(config: PreviewConfig, viewFinder: TextureView) =
AutoFitPreviewBuilder(config, WeakReference(viewFinder)).useCase
}
}
If the configuration is correct (it looks okay to me), then my next idea was that converting the captured image objects to bitmaps might be faulty. Below you can see the implementation.
Capture mode uses this function:
fun imageProxyToBitmap(image: ImageProxy): Bitmap {
val buffer: ByteBuffer = image.planes[0].buffer
val bytes = ByteArray(buffer.remaining())
buffer.get(bytes)
return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
Analysis mode uses this function:
fun toBitmapFromImage(image: Image?): Bitmap? {
try {
if (image == null || image.planes[0] == null || image.planes[1] == null || image.planes[2] == null) {
return null
}
val yBuffer = image.planes[0].buffer
val uBuffer = image.planes[1].buffer
val vBuffer = image.planes[2].buffer
val ySize = yBuffer.remaining()
val uSize = uBuffer.remaining()
val vSize = vBuffer.remaining()
val nv21 = ByteArray(ySize + uSize + vSize)
/* U and V are swapped */
yBuffer.get(nv21, 0, ySize)
vBuffer.get(nv21, ySize, vSize)
uBuffer.get(nv21, ySize + vSize, uSize)
val yuvImage = YuvImage(nv21, ImageFormat.NV21, image.width, image.height, null)
val out = ByteArrayOutputStream()
yuvImage.compressToJpeg(Rect(0, 0, yuvImage.width, yuvImage.height), 50, out)
val imageBytes = out.toByteArray()
return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
} catch (e: IllegalStateException) {
Log.e("IllegalStateException", "#ImageUtils.toBitmapFromImage(): Can't read the image file.")
return null
}
}
So, weirdly, on a few devices toBitmapFromImage() sometimes comes out rotated, while at the same time (same device) imageProxyToBitmap() returns the image in the correct rotation - it has to be the image-to-bitmap function's fault, right? Why is this happening (given that capture mode returns the image normally), and how do I fix it?
Inside onImageCaptureSuccess, get the rotationDegrees and rotate your bitmap by that degree to get the correct orientation.
override fun onImageCaptureSuccess(image: ImageProxy) {
val capturedImageBitmap = image.image?.toBitmap()?.rotate(image.imageInfo.rotationDegrees.toFloat())
mBinding.previewImage.setImageBitmap(capturedImageBitmap)
showPostClickViews()
mCurrentFlow = FLOW_CAMERA
}
toBitmap() and rotate() are extension functions.
fun Image.toBitmap(): Bitmap {
val buffer = planes[0].buffer
buffer.rewind()
val bytes = ByteArray(buffer.capacity())
buffer.get(bytes)
return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
fun Bitmap.rotate(degrees: Float): Bitmap =
Bitmap.createBitmap(this, 0, 0, width, height, Matrix().apply { postRotate(degrees) }, true)
CameraX returns the captured image with a rotation value in the callback, which can be used to rotate the image.
https://developer.android.com/reference/androidx/camera/core/ImageCapture.OnImageCapturedListener.html#onCaptureSuccess(androidx.camera.core.ImageProxy,%20int)
For the analyzer use case, you have to use the rotationDegrees value passed into the analyze method of ImageAnalysis.Analyzer and handle it accordingly.
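A small sketch of that, using the (pre-1.0) analyzer signature from the question and reusing the toBitmapFromImage() helper shown above (the matrix rotation is the same idea as the rotate() extension, not code from the question):
analysis.analyzer = ImageAnalysis.Analyzer { image, rotationDegrees ->
    // rotationDegrees is how far the buffer must be rotated to appear upright.
    val bitmap = toBitmapFromImage(image.image)
    val upright = bitmap?.let {
        Bitmap.createBitmap(it, 0, 0, it.width, it.height,
            Matrix().apply { postRotate(rotationDegrees.toFloat()) }, true)
    }
    // ... pass `upright` to the recognizer instead of the raw bitmap.
}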
Hope it helps!
