setBarcodeFormats(Barcode.QR_CODE) does not work - android

I'm using the Google Play Services Vision API to detect QR codes. I'm not interested in other formats, so I'm trying to use that API call to speed up detection. It works fine if I call it as setBarcodeFormats(Barcode.ALL_FORMATS): it detects QR codes with format 256 (QR_CODE). But if I initialize it with setBarcodeFormats(Barcode.QR_CODE), it does not detect anything. Here are some code snippets:
override fun onCreate(savedInstanceState: Bundle?) {
...
barcodeDetector = BarcodeDetector.Builder(applicationContext).setBarcodeFormats(Barcode.ALL_FORMATS).build()
...
private inner class ImageReaderOnImageAvailableListenerImpl : ImageReader.OnImageAvailableListener {
override fun onImageAvailable(reader: ImageReader) {
val image = reader.acquireNextImage()
val buffer = image.planes[0].buffer
val bitmap: Bitmap? = if (buffer.hasArray()) {
BitmapFactory.decodeByteArray(buffer.array(), buffer.arrayOffset(), buffer.remaining(), null)
} else {
val byteArray = ByteArray(buffer.remaining())
buffer.get(byteArray)
BitmapFactory.decodeByteArray(byteArray, 0, byteArray.size, null)
}
val barcodes = barcodeDetector.detect(Frame.Builder().setBitmap(bitmap).build())
image.close()
this@QrCodeCaptureActivity.imageView.setImageBitmap(bitmap)
if (barcodes.size() > 0) {
for (index in 0 until barcodes.size()) {
val barcode = barcodes.valueAt(index)
logd("barcode $index: ${barcode.format} ${barcode.valueFormat} ${barcode.rawValue}")
}
} else {
logd("no barcodes (${image.width}x${image.height})")
}
}
}
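Not part of the original post, but for reference: a minimal sketch of building a QR-only detector and checking whether its native dependencies are available (the isOperational check and the TAG constant are illustrative, assuming the standard Mobile Vision API; if the check fails, detect() returns empty results even for valid QR codes):
val qrDetector = BarcodeDetector.Builder(applicationContext)
    .setBarcodeFormats(Barcode.QR_CODE)
    .build()
// If this is false, the detector's native libraries have not finished
// downloading yet and detect() will return an empty SparseArray.
if (!qrDetector.isOperational) {
    Log.w(TAG, "QR detector dependencies are not yet available")
}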

Related

QR code with logo cannot be recognized by my app's QR code scanner

I have created an app with a QR code scanner; however, I can't scan a QR code that has a logo in it. Is there any solution for this?
This is the code of my QR code scanner app:
class QRCode (val onQrCodeScanned: (String) -> Unit) : ImageAnalysis.Analyzer {
@RequiresApi(Build.VERSION_CODES.M)
private val supportedImageFormats = listOf(
ImageFormat.YUV_420_888,
ImageFormat.YUV_422_888,
ImageFormat.YUV_444_888)
@RequiresApi(Build.VERSION_CODES.M)
override fun analyze(image: ImageProxy) {
if (image.format in supportedImageFormats) {
val bytes = image.planes.first().buffer.toByteArray()
val source = PlanarYUVLuminanceSource(
bytes,
image.width,
image.height,
0,
0,
image.width,
image.height,
false)
val binaryBmp = BinaryBitmap(HybridBinarizer(source))
try {
val result = MultiFormatReader().apply {
setHints(
mapOf(
DecodeHintType.POSSIBLE_FORMATS to arrayListOf(
BarcodeFormat.QR_CODE
)
)
)
}.decode(binaryBmp)
onQrCodeScanned(result.text)
} catch (e: Exception) {
e.printStackTrace()
} finally {
image.close()
}
}
}
private fun ByteBuffer.toByteArray(): ByteArray {
rewind()
return ByteArray(remaining()).also {
get(it)
}
}
}
It can scan other QR codes, but not QR codes with a logo.
This is my QR code:
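As an aside to the scanner code above (this is not the posted fix): ZXing's MultiFormatReader also accepts a DecodeHintType.TRY_HARDER hint, which tells the decoder to spend more effort on difficult images, such as codes partially covered by a logo. A sketch of adding it to the existing hints map:
setHints(
    mapOf(
        DecodeHintType.POSSIBLE_FORMATS to arrayListOf(BarcodeFormat.QR_CODE),
        // Trade decoding speed for a better chance on damaged/obscured codes.
        DecodeHintType.TRY_HARDER to true
    )
)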
I fixed it... The problem is that in-app browsers cannot process websites that are built using JavaScript.
This is what my code for launching the browser looks like before and after:
Before:
@Composable
fun LoadWebUrl(url: String) {
val context = LocalContext.current
AndroidView(factory = {
WebView(context).apply {
webViewClient = WebViewClient()
loadUrl(url)
}
})
}
After:
@Composable
fun LoadWebUrl(url: String) {
val context = LocalContext.current
IntentUtils.launchBrowser(context, url)
}
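IntentUtils.launchBrowser is the poster's own helper and isn't shown in the answer; a minimal sketch of such a helper, assuming it simply hands the URL to an external browser via an ACTION_VIEW intent, might look like this:
import android.content.Context
import android.content.Intent
import android.net.Uri

object IntentUtils {
    fun launchBrowser(context: Context, url: String) {
        // Open the URL in the user's default (external) browser instead of an in-app WebView.
        context.startActivity(Intent(Intent.ACTION_VIEW, Uri.parse(url)))
    }
}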

How to stop the intent from opening multiple times after scanning a QR code with CameraX and the image analysis use case?

I am creating a simple QR scanner using CameraX and Google ML Kit. I am opening an intent after the string value is extracted from the QR code. The problem I'm facing is that the intent is opening multiple times. How do I resolve this?
The following is the setup for image analysis. The DisplayQR intent will open after receiving the string value inside the QR code.
val imageAnalysis = ImageAnalysis.Builder()
.setTargetResolution(Size(640, 480))
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.build()
imageAnalysis.setAnalyzer(
ContextCompat.getMainExecutor(this),
CodeAnalyzer(this, object : CallBackInterface {
override fun onSuccess(qrString: String?) {
imageAnalysis.clearAnalyzer()
Toast.makeText(this@ActivityQR, qrString, Toast.LENGTH_SHORT).show()
Log.d("rty",qrString.toString())
//the following intent is opening multiple times
val visitordetails =
Intent(this@ActivityQR, DisplayQR::class.java)
visitordetails.putExtra("VISITOR_QR", qrString)
startActivity(visitordetails)
}
override fun onFailed() {
}
})
)
cameraProvider.bindToLifecycle(this, selectedCamera, imageAnalysis, cameraPreview)
Code for analyzing the image
class CodeAnalyzer(context: Context, callBackInterface: CallBackInterface) : ImageAnalysis.Analyzer {
private val context: Context = context
private val callback: CallBackInterface = callBackInterface
#SuppressLint("UnsafeOptInUsageError")
override fun analyze(image: ImageProxy) {
var scanner: BarcodeScanner = BarcodeScanning.getClient()
val scannedIMage = image.image
if (scannedIMage != null) {
var scannedInputImage = InputImage.fromMediaImage(
scannedIMage,
image.imageInfo.rotationDegrees
)
scanner.process(scannedInputImage).addOnSuccessListener { barCodes ->
for (qrCode in barCodes) {
when (qrCode.valueType) {
Barcode.TYPE_TEXT -> {
val qrString: String? = qrCode.rawValue
if (qrString != null) {
callback.onSuccess(qrString) //Here I am calling the callback
}
}
}
}
}.addOnFailureListener {
}.addOnCompleteListener {
image.close()
}
}
}
}
Edit: Corrected activity name
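Not part of the original question, but as an illustration of one common pattern: frames already handed to ML Kit can still deliver results after clearAnalyzer() is called, so the callback can be guarded with a one-shot flag. A minimal sketch (the flag name is illustrative, held by the activity alongside the callback):
import java.util.concurrent.atomic.AtomicBoolean

// Ensures only the first successful scan starts the intent, even if several
// in-flight frames report a result before the analyzer is actually cleared.
private val scanHandled = AtomicBoolean(false)

override fun onSuccess(qrString: String?) {
    if (!scanHandled.compareAndSet(false, true)) return
    imageAnalysis.clearAnalyzer()
    val visitorDetails = Intent(this@ActivityQR, DisplayQR::class.java)
    visitorDetails.putExtra("VISITOR_QR", qrString)
    startActivity(visitorDetails)
}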

ImageReader's onImageAvailable method isn't called and the preview shows only 8 frames in slow motion before freezing (Camera2)

I noticed strange behavior on a Xiaomi Redmi Note 9 Pro. I have tested the application on hundreds of phones, but this problem appears only on this device, and only when ImageReader is used with the YUV_420_888 format and a 176x144 preview resolution (with 320x240, with JPEG, or without ImageReader as a capture surface, everything works well). The onImageAvailable method is never called, the preview shows only 8 frames in slow motion and then freezes, and the app slows down. onCaptureCompleted() in CameraCurrentParamsReceiver is also called only 8 times.
I get the smallest resolution by using getMinPreviewSize (176x144 for this Xiaomi phone).
const val PREVIEW_IMAGE_FORMAT = ImageFormat.YUV_420_888
const val IMAGE_READER_MAX_SIMULTANEOUS_IMAGES = 4
val previewCaptureCallback = CameraCurrentParamsReceiver(this)
private fun startPreview(cameraDevice: CameraDevice, cameraProperties: CameraProperties)
{
val imageReader = ImageReader.newInstance(cameraProperties.previewSize.width,
cameraProperties.previewSize.height,
PREVIEW_IMAGE_FORMAT,
IMAGE_READER_MAX_SIMULTANEOUS_IMAGES)
this.imageReader = imageReader
bufferedImageConverter = BufferedImageConverter(cameraProperties.previewSize.width, cameraProperties.previewSize.height)
val previewSurface = previewSurface
val previewSurfaceForCamera =
if (previewSurface != null)
{
if (previewSurface.isValid)
{
previewSurface
}
else
{
Log.w(TAG, "Invalid preview surface - camera preview display is not available")
null
}
}
else
{
null
}
val captureSurfaces = listOfNotNull(imageReader.surface, previewSurfaceForCamera)
cameraDevice.createCaptureSession(
captureSurfaces,
object : CameraCaptureSession.StateCallback()
{
override fun onConfigureFailed(cameraCaptureSession: CameraCaptureSession)
{
Log.e(TAG, "onConfigureFailed() cannot configure camera")
if (isCameraOpened(cameraDevice))
{
shutDown("onConfigureFailed")
}
}
override fun onConfigured(cameraCaptureSession: CameraCaptureSession)
{
Log.d(TAG, "onConfigured()")
if (!isCameraOpened(cameraDevice))
{
cameraCaptureSession.close()
shutDown("onConfigured.isCameraOpened")
return
}
captureSession = cameraCaptureSession
try
{
val request = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
captureSurfaces.forEach { request.addTarget(it) }
CameraPreviewRequestInitializer.initializePreviewRequest(request, cameraProperties, controlParams, isControlParamsStrict)
captureRequestBuilder = request
val previewCallback = PreviewFrameHandler(this@Camera2)
this@Camera2.previewFrameHandler = previewCallback
imageReader.setOnImageAvailableListener(previewCallback, previewCallback.backgroundHandler)
cameraCaptureSession.setRepeatingRequest(request.build(), previewCaptureCallback, null)
}
catch (ex: CameraAccessException)
{
Log.e(TAG, "onConfigured() failed with exception", ex)
shutDown("onConfigured.CameraAccessException")
}
}
},
null)
}
private fun chooseCamera(manager: CameraManager): CameraProperties?
{
val cameraIdList = manager.cameraIdList
if (cameraIdList.isEmpty())
{
return null
}
for (cameraId in cameraIdList)
{
val characteristics = manager.getCameraCharacteristics(cameraId)
val facing = characteristics.get(CameraCharacteristics.LENS_FACING)
if (facing != null && facing == CameraCharacteristics.LENS_FACING_BACK)
{
val minPreviewSize = getMinPreviewSize(characteristics)
if (minPreviewSize == null)
{
Log.e(TAG, "chooseCamera() Cannot determine the preview size")
return null
}
Log.d(TAG, "chooseCamera() chosen camera id: $cameraId, preview size: $minPreviewSize")
return CameraProperties(cameraId,
minPreviewSize,
characteristics)
}
}
return null
}
private fun getMinPreviewSize(characteristics: CameraCharacteristics): Size?
{
val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
if (map == null)
{
Log.e(TAG, "getMinPreviewSize() Map is empty")
return null
}
return map.getOutputSizes(Constants.Camera.PREVIEW_IMAGE_FORMAT)?.minBy { it.width * it.height }
}
PreviewFrameHandler and CameraCurrentParamsReceiver (previewCaptureCallback variable)
private class PreviewFrameHandler(private val parent: Camera2) : ImageReader.OnImageAvailableListener, Handler.Callback
{
val backgroundHandler: Handler
private val backgroundHandlerThread: HandlerThread = HandlerThread("Camera2.PreviewFrame.HandlerThread")
private val mainHandler: Handler = Handler(Looper.getMainLooper(), this)
/**
* Main thread.
*/
init
{
backgroundHandlerThread.start()
backgroundHandler = Handler(backgroundHandlerThread.looper)
}
fun shutDown()
{
backgroundHandlerThread.quit()
mainHandler.removeMessages(0)
}
override fun handleMessage(msg: Message?): Boolean
{
msg ?: return false
parent.cameraFrameListener.onFrame(msg.obj as RGBImage)
return true
}
/**
* Background thread.
*/
private val relativeTimestamp = RelativeTimestamp()
override fun onImageAvailable(reader: ImageReader)
{
var image: Image? = null
try
{
image = reader.acquireNextImage()
image ?: return
val rgbImage = parent.bufferedImageConverter?.convertYUV420spToRGB(image, relativeTimestamp.updateAndGetSeconds(image.timestamp))
rgbImage ?: return
mainHandler.sendMessage(mainHandler.obtainMessage(0, rgbImage))
}
catch (ex: Exception)
{
Log.e(TAG, "onImageAvailable()", ex)
}
finally
{
image?.close()
}
}
private class RelativeTimestamp
{
private var initialNanos = 0L
fun updateAndGetSeconds(currentNanos: Long): Double
{
if (initialNanos == 0L)
{
initialNanos = currentNanos
}
return nanosToSeconds(currentNanos - initialNanos)
}
}
}
/**
* Class used to read current camera params.
*/
private class CameraCurrentParamsReceiver(private val parent: Camera2) : CameraCaptureSession.CaptureCallback()
{
private var isExposureTimeExceptionLogged = false
private var isIsoExceptionLogged = false
override fun onCaptureSequenceAborted(session: CameraCaptureSession, sequenceId: Int)
{
}
override fun onCaptureCompleted(session: CameraCaptureSession, request: CaptureRequest, result: TotalCaptureResult)
{
try
{
val exposureTimeNanos = result.get(CaptureResult.SENSOR_EXPOSURE_TIME)
if (exposureTimeNanos != null)
{
parent.currentExposureTimeNanos = exposureTimeNanos
}
}
catch (ex: IllegalArgumentException)
{
if (!isExposureTimeExceptionLogged)
{
isExposureTimeExceptionLogged = true
}
}
try
{
val iso = result.get(CaptureResult.SENSOR_SENSITIVITY)
if (iso != null)
{
parent.currentIso = iso
}
}
catch (ex: IllegalArgumentException)
{
if (!isIsoExceptionLogged)
{
Log.i(TAG, "Cannot get current SENSOR_SENSITIVITY, exception: " + ex.message)
isIsoExceptionLogged = true
}
}
}
override fun onCaptureFailed(session: CameraCaptureSession, request: CaptureRequest, failure: CaptureFailure)
{
}
override fun onCaptureSequenceCompleted(session: CameraCaptureSession, sequenceId: Int, frameNumber: Long)
{
}
override fun onCaptureStarted(session: CameraCaptureSession, request: CaptureRequest, timestamp: Long, frameNumber: Long)
{
}
override fun onCaptureProgressed(session: CameraCaptureSession, request: CaptureRequest, partialResult: CaptureResult)
{
}
override fun onCaptureBufferLost(session: CameraCaptureSession, request: CaptureRequest, target: Surface, frameNumber: Long)
{
}
}
As I understand it, something is wrong with the preview size, but I cannot find the correct way to choose this value, and the strangest thing is that the problem appears only on this Xiaomi device. Any thoughts?
176x144 is sometimes a problematic resolution for devices. It's really only listed by camera devices because it's sometimes required for recording videos for MMS (multimedia text message) messages. These videos, frankly, look awful, but it's still frequently a requirement by cellular carriers that they work.
But on modern devices with 12-50 MP cameras, the camera hardware actually struggles to scale images down to 176x144 from the sensor's full resolution (a more than 20x downscale!), so certain combinations of sizes can sometimes cause problems.
I'd generally recommend not using preview resolutions below 320x240 to minimize issues, and definitely not mixing a 176x144 preview with a high-resolution still capture.
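Following that recommendation, a hedged variation of the poster's getMinPreviewSize that skips sizes below 320x240 might look like the sketch below (the 320x240 floor comes from the advice above, not from the original code):
private fun getMinReasonablePreviewSize(characteristics: CameraCharacteristics): Size?
{
    val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP) ?: return null
    return map.getOutputSizes(Constants.Camera.PREVIEW_IMAGE_FORMAT)
        ?.filter { it.width >= 320 && it.height >= 240 } // skip tiny, MMS-oriented sizes such as 176x144
        ?.minByOrNull { it.width * it.height }
}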

Issues with the latest CameraX and Barcode Scanning on some devices

So I'm working on an app that requires a QR scanner as a main feature. Previously I was using camerax-alpha06 with Firebase ML Vision 24.0.3, and they worked fine for months with no customer complaints about scanning issues.
Then about two weeks ago I had to change Firebase ML Vision to ML Kit barcode scanning (related to the Crashlytics migration, out of topic), and now some of the users who could scan in the previous version no longer can. Some sample devices are the Samsung Tab A7 (Android 5.1.1) and the Vivo 1919 (Android 10).
This is my build.gradle section that involves this feature
def camerax_version = "1.0.0-beta11"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-camera2:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
implementation "androidx.camera:camera-view:1.0.0-alpha18"
implementation "androidx.camera:camera-extensions:1.0.0-alpha18"
implementation 'com.google.android.gms:play-services-mlkit-barcode-scanning:16.1.2'
This is my camera handler file
class ScanQRCameraViewHandler(
private val fragment: ScanQRDialogFragment,
private val previewView: PreviewView
) {
private val displayLayout get() = previewView
companion object {
private const val RATIO_4_3_VALUE = 4.0 / 3.0
private const val RATIO_16_9_VALUE = 16.0 / 9.0
}
private val analyzer = GMSMLKitAnalyzer(onFoundQR = { extractedString ->
fragment.verifyExtractedString(extractedString)
}, onNotFoundQR = {
resetStateToAllowNewImageStream()
})
private var cameraProviderFuture: ListenableFuture<ProcessCameraProvider>? = null
private var camera: Camera? = null
private var isAnalyzing = false
internal fun resetStateToAllowNewImageStream() {
isAnalyzing = false
}
internal fun setTorceEnable(isEnabled: Boolean) {
camera?.cameraControl?.enableTorch(isEnabled)
}
internal fun initCameraProviderIfHasNot() {
if (cameraProviderFuture == null) {
fragment.context?.let {
cameraProviderFuture = ProcessCameraProvider.getInstance(it)
val executor = ContextCompat.getMainExecutor(it)
cameraProviderFuture?.addListener({
bindPreview(cameraProviderFuture?.get(), executor)
}, executor)
}
}
}
private fun bindPreview(cameraProvider: ProcessCameraProvider?, executor: Executor) {
val metrics = DisplayMetrics().also { displayLayout.display.getRealMetrics(it) }
val screenAspectRatio = aspectRatio(metrics.widthPixels, metrics.heightPixels)
val preview = initPreview(screenAspectRatio)
val imageAnalyzer = createImageAnalyzer()
val imageAnalysis = createImageAnalysis(executor, imageAnalyzer, screenAspectRatio)
val cameraSelector = createCameraSelector()
cameraProvider?.unbindAll()
camera = cameraProvider?.bindToLifecycle(
fragment as LifecycleOwner,
cameraSelector, imageAnalysis, preview
)
}
private fun createCameraSelector(): CameraSelector {
return CameraSelector.Builder()
.requireLensFacing(CameraSelector.LENS_FACING_BACK)
.build()
}
private fun createImageAnalysis(
executor: Executor, imageAnalyzer: ImageAnalysis.Analyzer, screenAspectRatio: Int
): ImageAnalysis {
val rotation = displayLayout.rotation
val imageAnalysis = ImageAnalysis.Builder()
// .setTargetRotation(rotation.toInt())
// .setTargetAspectRatio(screenAspectRatio)
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.build()
imageAnalysis.setAnalyzer(executor, imageAnalyzer)
return imageAnalysis
}
private fun createImageAnalyzer(): ImageAnalysis.Analyzer {
return ImageAnalysis.Analyzer {
isAnalyzing = true
analyzer.analyze(it)
}
}
private fun initPreview(screenAspectRatio: Int): Preview {
val preview: Preview = Preview.Builder()
//.setTargetResolution(Size(840, 840))
// .setTargetAspectRatio(screenAspectRatio)
// .setTargetRotation(displayLayout.rotation.toInt())
.build()
preview.setSurfaceProvider(previewView.surfaceProvider)
return preview
}
fun unbindAll() {
cameraProviderFuture?.get()?.unbindAll()
}
private fun aspectRatio(width: Int, height: Int): Int {
val previewRatio = width.coerceAtLeast(height).toDouble() / width.coerceAtMost(height)
if (kotlin.math.abs(previewRatio - RATIO_4_3_VALUE) <= kotlin.math.abs(previewRatio - RATIO_16_9_VALUE)) {
return AspectRatio.RATIO_4_3
}
return AspectRatio.RATIO_16_9
}
}
And my analyzer
internal class GMSMLKitAnalyzer(
private val onFoundQR: (String) -> Unit,
private val onNotFoundQR: () -> Unit
) :
ImageAnalysis.Analyzer {
private val options = BarcodeScannerOptions.Builder()
.setBarcodeFormats(Barcode.FORMAT_QR_CODE).build()
#SuppressLint("UnsafeExperimentalUsageError")
override fun analyze(imageProxy: ImageProxy) {
imageProxy.image?.let { mediaImage ->
val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
val scanner = BarcodeScanning.getClient(options)
CoroutineScope(Dispatchers.Main).launch {
val result = scanner.process(image).await()
result.result?.let { barcodes ->
barcodes.find { it.rawValue != null }?.rawValue?.let {
onFoundQR(it)
} ?: run { onNotFoundQR() }
}
imageProxy.close()
}
} ?: imageProxy.close()
}
}
The commented-out lines are what I've tried adding; they didn't help, and some even caused issues on other previously working devices.
I am unsure whether I have misconfigured anything, so I would appreciate any suggestions that would help me find the solution.
Thank you.
P.S. This is my first post, so if I've done anything wrong or missed something, please advise.
BarcodeScanning does not work on some devices running camera-camera2 version 1.0.0-beta08 or later. You can use an earlier version of camera-camera2 to bypass this issue. For example:
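A hedged example of pinning an earlier version in build.gradle (1.0.0-beta07 is simply the last release before the affected beta08; the answer does not name a specific version):
def camerax_version = "1.0.0-beta07"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-camera2:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"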
See: https://developers.google.com/ml-kit/known-issues
We are working on a fix internally in ML Kit for the next SDK release.
Update your ML Kit barcode scanning dependency to 16.1.1 or above.
This issue was fixed in 'com.google.mlkit:barcode-scanning:16.1.1'.
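For reference, the corresponding Gradle line (a sketch using the bundled-model artifact named in the answer; adjust if you use the play-services variant):
implementation 'com.google.mlkit:barcode-scanning:16.1.1'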

Convert CameraX Captured ImageProxy to Bitmap

I was working with CameraX and had a hard time converting the captured ImageProxy to a Bitmap. After searching and trying, I formulated a solution. Later I found that it was not optimal, so I changed the design, which forced me to drop hours of work.
Since I (or someone else) might need it in the future, I decided to post it here as a question and post an answer to it for reference and scrutiny. Feel free to add a better answer if you have one.
The relevant code is:
class ImagePickerActivity : AppCompatActivity() {
private var width = 325
private var height = 205
@RequiresApi(Build.VERSION_CODES.LOLLIPOP)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_image_picker)
view_finder.post { startCamera() }
}
@RequiresApi(Build.VERSION_CODES.LOLLIPOP)
private fun startCamera() {
// Create configuration object for the viewfinder use case
val previewConfig = PreviewConfig.Builder().apply {
setTargetAspectRatio(Rational(1, 1))
//setTargetResolution(Size(width, height))
setLensFacing(CameraX.LensFacing.BACK)
setTargetAspectRatio(Rational(width, height))
}.build()
// Build the viewfinder use case (preview is referenced by bindToLifecycle below)
val preview = Preview(previewConfig)
// Create configuration object for the image capture use case
val imageCaptureConfig = ImageCaptureConfig.Builder()
.apply {
setTargetAspectRatio(Rational(1, 1))
// We don't set a resolution for image capture instead, we
// select a capture mode which will infer the appropriate
// resolution based on aspect ratio and requested mode
setCaptureMode(ImageCapture.CaptureMode.MIN_LATENCY)
}.build()
// Build the image capture use case and attach button click listener
val imageCapture = ImageCapture(imageCaptureConfig)
capture_button.setOnClickListener {
imageCapture.takePicture(object : ImageCapture.OnImageCapturedListener() {
override fun onCaptureSuccess(image: ImageProxy?, rotationDegrees: Int) {
//How do I get the bitmap here?
//imageView.setImageBitmap(someBitmap)
}
override fun onError(useCaseError: ImageCapture.UseCaseError?, message: String?, cause: Throwable?) {
val msg = "Photo capture failed: $message"
Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
Log.e(localClassName, msg)
cause?.printStackTrace()
}
})
}
CameraX.bindToLifecycle(this, preview, imageCapture)
}
}
So the solution was to add an extension method to Image, and here is the code:
class ImagePickerActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_image_picker)
}
private fun startCamera() {
val imageCapture = ImageCapture(imageCaptureConfig)
capture_button.setOnClickListener {
imageCapture.takePicture(object : ImageCapture.OnImageCapturedListener() {
override fun onCaptureSuccess(image: ImageProxy?, rotationDegrees: Int) {
imageView.setImageBitmap(image?.image?.toBitmap())
}
//.....
})
}
}
}
fun Image.toBitmap(): Bitmap {
val buffer = planes[0].buffer
buffer.rewind()
val bytes = ByteArray(buffer.capacity())
buffer.get(bytes)
return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
A slightly modified version, using the inline function use on the Closeable ImageProxy:
imageCapture.takePicture(
object : ImageCapture.OnImageCapturedListener() {
override fun onCaptureSuccess(image: ImageProxy?, rotationDegrees: Int) {
image.use { image ->
val bitmap: Bitmap? = image?.let {
imageProxyToBitmap(it)
} ?: return
}
}
})
private fun imageProxyToBitmap(image: ImageProxy): Bitmap {
val buffer: ByteBuffer = image.planes[0].buffer
val bytes = ByteArray(buffer.remaining())
buffer.get(bytes)
return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
Here is the safest approach, using MLKit's own implementation.
Tested and working on MLKit version 1.0.1
import com.google.mlkit.vision.common.internal.ImageConvertUtils;
Image mediaImage = imageProxy.getImage();
InputImage image = InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
Bitmap bitmap = ImageConvertUtils.getInstance().getUpRightBitmap(image);
Java Implementation of Backbelt's Answer.
private Bitmap imageProxyToBitmap(ImageProxy image) {
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
return BitmapFactory.decodeByteArray(bytes,0,bytes.length,null);
}
There is a second version of the takePicture method at the moment (CameraX version 1.0.0-beta03). It provides several ways to persist the image (an OutputStream, or perhaps a File, could be useful in your case).
If you still want to convert ImageProxy to Bitmap, here is my answer to a similar question, which gives the correct implementation of this conversion.
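For illustration, a minimal sketch of the file-based overload mentioned above (API as in CameraX 1.0.x; the file name, executor, and context are illustrative):
val outputOptions = ImageCapture.OutputFileOptions
    .Builder(File(context.cacheDir, "capture.jpg"))
    .build()
imageCapture.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            // The capture has been written to the File passed above; no ImageProxy handling needed.
        }
        override fun onError(exception: ImageCaptureException) {
            exception.printStackTrace()
        }
    }
)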
Please kindly take a look at this answer. All you need to do to apply it to your question is to get the Image out of your ImageProxy:
Image img = imageProxy.getImage();
