Android: improving conversion speed of grayscale bitmap to monochromatic

How can I improve the speed of the following conversion?
For a grayscale bitmap of 384x524 pixels, this takes around 2.1 seconds on the target device.
fun convertToMonochromatic(bitmap: Bitmap): Bitmap {
    val result = Bitmap.createBitmap(bitmap.width, bitmap.height, bitmap.config)
    for (row in 0 until bitmap.height) {
        for (col in 0 until bitmap.width) {
            val hsv = FloatArray(3)
            Color.colorToHSV(bitmap.getPixel(col, row), hsv)
            if (hsv[2] > 0.7f) {
                result.setPixel(col, row, Color.WHITE)
            } else {
                result.setPixel(col, row, Color.BLACK)
            }
        }
    }
    return result
}
Is there some "mass operation" that transforms all the pixels at once, based directly on HSV and the Value component in particular?

Iterating the pixels one by one like this, the main time-consuming part turns out to be the colorToHSV function, which computes more information than just the needed Value.
A description of the transform: https://www.rapidtables.com/convert/color/rgb-to-hsv.html
After the following adjustment, the time needed for the given size drops from 2.1 seconds to 0.1 seconds.
fun convertToMonochromatic(bitmap: Bitmap): Bitmap {
    val result = Bitmap.createBitmap(bitmap.width, bitmap.height, bitmap.config)
    val size = bitmap.width * bitmap.height
    val input = IntArray(size)
    val output = IntArray(size)
    // Read all pixels in one call instead of per-pixel getPixel()
    bitmap.getPixels(input, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
    var current = 0
    input.forEach {
        val red = it shr 16 and 0xFF
        val green = it shr 8 and 0xFF
        val blue = it and 0xFF
        // HSV Value is just max(R, G, B) / 255, so only the channel maximum is needed
        val max = maxOf(red, green, blue)
        if (max > 175) {
            output[current++] = Color.WHITE
        } else {
            output[current++] = Color.BLACK
        }
    }
    // Write all pixels back in one call
    result.setPixels(output, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
    return result
}
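For reference, hsv[2] (the Value) equals max(R, G, B) / 255, so the original hsv[2] > 0.7f test corresponds to max > 178.5; the 175 above is a slightly looser threshold. A minimal sketch of the same idea that reuses a single pixel array, assuming an ARGB_8888 bitmap (a hypothetical variant, not from the original answer):
import android.graphics.Bitmap
import android.graphics.Color

fun convertToMonochromaticInPlace(bitmap: Bitmap): Bitmap {
    val result = Bitmap.createBitmap(bitmap.width, bitmap.height, Bitmap.Config.ARGB_8888)
    val pixels = IntArray(bitmap.width * bitmap.height)
    bitmap.getPixels(pixels, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
    for (i in pixels.indices) {
        val p = pixels[i]
        // Value > 0.7 is equivalent to max(R, G, B) > 0.7 * 255 = 178.5
        val max = maxOf(p shr 16 and 0xFF, p shr 8 and 0xFF, p and 0xFF)
        pixels[i] = if (max > 178) Color.WHITE else Color.BLACK
    }
    result.setPixels(pixels, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
    return result
}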

Related

Is there a way to crop Image/ImageProxy (before passing to MLKit's analyzer)?

I'm using CameraX's Analyzer use case with ML Kit's BarcodeScanner. I would like to crop a portion of the image received from the camera before passing it to the scanner.
What I'm doing right now is converting the ImageProxy (that I receive in the Analyzer) to a Bitmap, cropping it, and then passing it to the BarcodeScanner. The downside is that it's not a very fast or efficient process.
I've also noticed the warning I get in the Logcat when running this code:
ML Kit has detected that you seem to pass camera frames to the
detector as a Bitmap object. This is inefficient. Please use
YUV_420_888 format for camera2 API or NV21 format for (legacy) camera
API and directly pass down the byte array to ML Kit.
It would be nice not to do the ImageProxy conversion at all, but how do I crop the rectangle I want to analyze?
What I've already tried is setting the cropRect field of the Image (imageProxy.image.cropRect), but it doesn't seem to affect the end result.
Yes, it's true that if you use a ViewPort and set it on your use cases (imageCapture or imageAnalysis, as described at https://developer.android.com/training/camerax/configuration), you only get information about the crop rectangle. This matters especially for ImageAnalysis: with ImageCapture the on-disk image is cropped before saving, but that does not happen for ImageAnalysis, nor for ImageCapture without saving to disk. Here is how I solved this problem:
First of all, set the view port for the use cases as described here: https://developer.android.com/training/camerax/configuration — a minimal sketch of what that can look like follows.
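This is only an illustration of the linked configuration guide, with assumed names (preview, imageAnalysis, cameraProvider) and a 1:1 viewport as an example:
import android.util.Rational
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.Preview
import androidx.camera.core.UseCaseGroup
import androidx.camera.core.ViewPort
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.lifecycle.LifecycleOwner

fun bindWithViewPort(
    cameraProvider: ProcessCameraProvider,
    lifecycleOwner: LifecycleOwner,
    preview: Preview,
    imageAnalysis: ImageAnalysis
) {
    // The shared ViewPort is what makes imageProxy.cropRect meaningful in analyze()
    val viewPort = ViewPort.Builder(Rational(1, 1), preview.targetRotation).build()
    val useCaseGroup = UseCaseGroup.Builder()
        .addUseCase(preview)
        .addUseCase(imageAnalysis)
        .setViewPort(viewPort)
        .build()
    cameraProvider.bindToLifecycle(
        lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, useCaseGroup
    )
}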
Then get a cropped bitmap to analyze:
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    if (mediaImage != null && mediaImage.format == ImageFormat.YUV_420_888) {
        croppedBitmap(mediaImage, imageProxy.cropRect).let { bitmap ->
            requestDetectInImage(InputImage.fromBitmap(bitmap, rotation))
                .addOnCompleteListener { imageProxy.close() }
        }
    } else {
        imageProxy.close()
    }
}

private fun croppedBitmap(mediaImage: Image, cropRect: Rect): Bitmap {
    val yBuffer = mediaImage.planes[0].buffer // Y
    val vuBuffer = mediaImage.planes[2].buffer // VU
    val ySize = yBuffer.remaining()
    val vuSize = vuBuffer.remaining()
    val nv21 = ByteArray(ySize + vuSize)
    yBuffer.get(nv21, 0, ySize)
    vuBuffer.get(nv21, ySize, vuSize)
    // YuvImage can crop while compressing to JPEG, so the crop and the
    // format conversion happen in one step
    val yuvImage = YuvImage(nv21, ImageFormat.NV21, mediaImage.width, mediaImage.height, null)
    val outputStream = ByteArrayOutputStream()
    yuvImage.compressToJpeg(cropRect, 100, outputStream)
    val imageBytes = outputStream.toByteArray()
    return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
}
Possibly there is some loss in conversion speed, but on my devices I did not notice a difference. I set quality 100 in compressToJpeg, but maybe a lower quality would improve speed; that needs testing.
Update (May 02 '21):
I found another way that avoids converting to JPEG and then to a bitmap. This should be faster.
Set the viewport as before.
Convert YUV_420_888 to NV21, then crop and analyze:
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    if (mediaImage != null && mediaImage.format == ImageFormat.YUV_420_888) {
        croppedNV21(mediaImage, imageProxy.cropRect).let { byteArray ->
            requestDetectInImage(
                InputImage.fromByteArray(
                    byteArray,
                    imageProxy.cropRect.width(),
                    imageProxy.cropRect.height(),
                    rotation,
                    IMAGE_FORMAT_NV21,
                )
            )
                .addOnCompleteListener { imageProxy.close() }
        }
    } else {
        imageProxy.close()
    }
}

private fun croppedNV21(mediaImage: Image, cropRect: Rect): ByteArray {
    val yBuffer = mediaImage.planes[0].buffer // Y
    val vuBuffer = mediaImage.planes[2].buffer // VU
    val ySize = yBuffer.remaining()
    val vuSize = vuBuffer.remaining()
    val nv21 = ByteArray(ySize + vuSize)
    yBuffer.get(nv21, 0, ySize)
    vuBuffer.get(nv21, ySize, vuSize)
    return cropByteArray(nv21, mediaImage.width, cropRect)
}

private fun cropByteArray(array: ByteArray, imageWidth: Int, cropRect: Rect): ByteArray {
    // The output holds cropRect.width() * cropRect.height() bytes; indices past
    // the Y plane map to y >= image height and therefore fail the rect test
    val croppedArray = ByteArray(cropRect.width() * cropRect.height())
    var i = 0
    array.forEachIndexed { index, byte ->
        val x = index % imageWidth
        val y = index / imageWidth
        if (cropRect.left <= x && x < cropRect.right && cropRect.top <= y && y < cropRect.bottom) {
            croppedArray[i] = byte
            i++
        }
    }
    return croppedArray
}
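Note that this first cropByteArray returns only cropRect.width() * cropRect.height() bytes, i.e. just the cropped luma plane, while a full NV21 frame needs width * height * 3 / 2 bytes; the second fun below does allocate the full yUnit + uv layout.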
The first crop fun I took from here: Android: How to crop images using CameraX?
I also found another crop fun; it seems more involved, as it copies the chroma rows as well:
private fun cropByteArray(src: ByteArray, width: Int, height: Int, cropRect: Rect): ByteArray {
    // Note: "* 2 / 2" is a no-op for Ints; "/ 2 * 2" was presumably intended,
    // to force even values for the 2x2 chroma subsampling
    val x = cropRect.left * 2 / 2
    val y = cropRect.top * 2 / 2
    val w = cropRect.width() * 2 / 2
    val h = cropRect.height() * 2 / 2
    val yUnit = w * h
    val uv = yUnit / 2
    val nData = ByteArray(yUnit + uv)
    val uvIndexDst = w * h - y / 2 * w
    val uvIndexSrc = width * height + x
    var srcPos0 = y * width
    var destPos0 = 0
    var uvSrcPos0 = uvIndexSrc
    var uvDestPos0 = uvIndexDst
    for (i in y until y + h) {
        System.arraycopy(src, srcPos0 + x, nData, destPos0, w) // Y memory block copy
        srcPos0 += width
        destPos0 += w
        if (i and 1 == 0) {
            System.arraycopy(src, uvSrcPos0, nData, uvDestPos0, w) // UV memory block copy
            uvSrcPos0 += width
            uvDestPos0 += w
        }
    }
    return nData
}
The second crop fun I took from here:
https://www.programmersought.com/article/75461140907/
I would be glad if someone can help improve the code.
I'm still improving the way to do it, but this works for me for now.
CameraX: crop the image before sending it to analyze
<androidx.constraintlayout.widget.ConstraintLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:paddingBottom="@dimen/_40sdp">

    <androidx.camera.view.PreviewView
        android:id="@+id/previewView"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintDimensionRatio="1:1"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
Cropping the image to 1:1 before passing it to analyze:
override fun onCaptureSuccess(image: ImageProxy) {
    super.onCaptureSuccess(image)
    var bitmap: Bitmap = imageProxyToBitmap(image)
    val dimension: Int = min(bitmap.width, bitmap.height)
    // Extract a centered square thumbnail; this is the cropped image to analyze
    bitmap = ThumbnailUtils.extractThumbnail(bitmap, dimension, dimension)
    imageView.setImageBitmap(bitmap)
    image.close()
}
Function for converting into a bitmap:
private fun imageProxyToBitmap(image: ImageProxy): Bitmap {
    val buffer: ByteBuffer = image.planes[0].buffer
    val bytes = ByteArray(buffer.remaining())
    buffer.get(bytes)
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
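Note that decoding plane 0 directly like this only works because ImageCapture delivers its frames as JPEG by default, so the first plane holds a complete JPEG byte stream; it would not work for a YUV_420_888 analysis frame.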
You would use ImageProxy.setCropRect(Rect) to set the rectangle and then ImageProxy.cropRect to read it back.
For example, if you had an imageProxy, you would call imageProxy.setCropRect(rect) and then read imageProxy.cropRect.
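Keep in mind that the crop rect is metadata only: the underlying buffer is not cropped, so whatever consumes the frame has to apply the rect itself. A minimal sketch (the rect values are placeholders):
import android.graphics.Rect
import androidx.camera.core.ImageProxy

fun markRegionOfInterest(imageProxy: ImageProxy) {
    // Announce the region of interest; the pixel data itself is untouched
    imageProxy.setCropRect(Rect(0, 0, 100, 100))
    val region: Rect = imageProxy.cropRect
    // Downstream code must then read only pixels inside `region`
}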

CameraX image analysis: Convert image to ByteArray or ByteBuffer

I did a lot of reading and tried many of the available methods. CameraX produces a yuv_420_888-format Image object and provides it to the ImageAnalysis.
However, there is no obvious way to convert this to a ByteBuffer in order to scale it, convert it to a bitmap, and run detection operations. I tried the following and numerous other proposed techniques:
Converting ImageProxy to Bitmap
All of them produced grayscale images (even after using all 3 planes), or images with a color-shade overlay. They also sometimes produced glitchy outputs between frames, for reasons I could not figure out.
What's the proper way to get a simple byte array so that it can be converted to a bitmap later?
Also, how do I get the CameraX authors' attention?
fun imageProxyToByteArray(image: ImageProxy): ByteArray {
    val yuvBytes = ByteArray(image.width * (image.height + image.height / 2))
    val yPlane = image.planes[0].buffer
    val uPlane = image.planes[1].buffer
    val vPlane = image.planes[2].buffer
    yPlane.get(yuvBytes, 0, image.width * image.height)
    val chromaRowStride = image.planes[1].rowStride
    val chromaRowPadding = chromaRowStride - image.width / 2
    var offset = image.width * image.height
    if (chromaRowPadding == 0) {
        // Chroma rows are tightly packed; copy each plane in one call
        uPlane.get(yuvBytes, offset, image.width * image.height / 4)
        offset += image.width * image.height / 4
        vPlane.get(yuvBytes, offset, image.width * image.height / 4)
    } else {
        // Copy row by row, skipping the padding at the end of each chroma row
        for (i in 0 until image.height / 2) {
            uPlane.get(yuvBytes, offset, image.width / 2)
            offset += image.width / 2
            if (i < image.height / 2 - 2) { // note: "- 2" here vs "- 1" in the V loop below
                uPlane.position(uPlane.position() + chromaRowPadding)
            }
        }
        for (i in 0 until image.height / 2) {
            vPlane.get(yuvBytes, offset, image.width / 2)
            offset += image.width / 2
            if (i < image.height / 2 - 1) {
                vPlane.position(vPlane.position() + chromaRowPadding)
            }
        }
    }
    return yuvBytes
}
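For what it's worth, this copies the chroma planes planar-style (all U, then all V) and ignores pixelStride, so on devices where the U and V planes are interleaved (pixelStride == 2) the result is neither NV21 nor I420; that alone could explain the color-shade and glitch artifacts described above.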
You can use this class, ripped from the ML Kit pose detection sample:
ML Kit pose detection: BitmapUtils.java
object ImageProxyUtils {
    fun getByteArray(image: ImageProxy): ByteArray? {
        image.image?.let {
            val nv21Buffer = yuv420ThreePlanesToNV21(
                it.planes, image.width, image.height
            )
            return ByteArray(nv21Buffer.remaining()).apply {
                nv21Buffer.get(this)
            }
        }
        return null
    }

    private fun yuv420ThreePlanesToNV21(
        yuv420888planes: Array<Plane>,
        width: Int,
        height: Int
    ): ByteBuffer {
        val imageSize = width * height
        val out = ByteArray(imageSize + 2 * (imageSize / 4))
        if (areUVPlanesNV21(yuv420888planes, width, height)) {
            // Fast path: the planes already form an NV21 layout, copy them directly
            yuv420888planes[0].buffer.get(out, 0, imageSize)
            val uBuffer = yuv420888planes[1].buffer
            val vBuffer = yuv420888planes[2].buffer
            // The first V byte, then the interleaved VU bytes from the U plane
            vBuffer.get(out, imageSize, 1)
            uBuffer.get(out, imageSize + 1, 2 * imageSize / 4 - 1)
        } else {
            // Slow path: copy the planes byte by byte into NV21 order
            unpackPlane(yuv420888planes[0], width, height, out, 0, 1)
            unpackPlane(yuv420888planes[1], width, height, out, imageSize + 1, 2)
            unpackPlane(yuv420888planes[2], width, height, out, imageSize, 2)
        }
        return ByteBuffer.wrap(out)
    }

    private fun areUVPlanesNV21(planes: Array<Plane>, width: Int, height: Int): Boolean {
        val imageSize = width * height
        val uBuffer = planes[1].buffer
        val vBuffer = planes[2].buffer
        val vBufferPosition = vBuffer.position()
        val uBufferLimit = uBuffer.limit()
        // Advance V by one byte and shrink U by one byte; in NV21 the two
        // buffers then cover exactly the same interleaved memory
        vBuffer.position(vBufferPosition + 1)
        uBuffer.limit(uBufferLimit - 1)
        val areNV21 =
            vBuffer.remaining() == 2 * imageSize / 4 - 2 && vBuffer.compareTo(uBuffer) == 0
        // Restore the buffers to their original state
        vBuffer.position(vBufferPosition)
        uBuffer.limit(uBufferLimit)
        return areNV21
    }

    private fun unpackPlane(
        plane: Plane,
        width: Int,
        height: Int,
        out: ByteArray,
        offset: Int,
        pixelStride: Int
    ) {
        val buffer = plane.buffer
        buffer.rewind()
        // Derive the plane's row count from the buffer; chroma planes are subsampled
        val numRow = (buffer.limit() + plane.rowStride - 1) / plane.rowStride
        if (numRow == 0) {
            return
        }
        val scaleFactor = height / numRow
        val numCol = width / scaleFactor
        var outputPos = offset
        var rowStart = 0
        for (row in 0 until numRow) {
            var inputPos = rowStart
            for (col in 0 until numCol) {
                out[outputPos] = buffer[inputPos]
                outputPos += pixelStride
                inputPos += plane.pixelStride
            }
            rowStart += plane.rowStride
        }
    }
}
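A possible call site inside an ImageAnalysis.Analyzer (the detector wiring is assumed; InputImage is ML Kit's):
override fun analyze(imageProxy: ImageProxy) {
    val nv21 = ImageProxyUtils.getByteArray(imageProxy)
    if (nv21 == null) {
        imageProxy.close()
        return
    }
    val inputImage = InputImage.fromByteArray(
        nv21,
        imageProxy.width,
        imageProxy.height,
        imageProxy.imageInfo.rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21
    )
    // Run the detector on inputImage and close the proxy when it completes
    detector.process(inputImage)
        .addOnCompleteListener { imageProxy.close() }
}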
You just need to use imageProxy.image?.toBitmap() to convert the imageProxy, and then convert the bitmap to a byte array as follows.
Here's an example:
private fun takePhoto() {
    camera_capture_button.isEnabled = false
    // Get a stable reference of the modifiable image capture use case
    val imageCapture = imageCapture ?: return
    imageCapture.takePicture(
        ContextCompat.getMainExecutor(this),
        object : ImageCapture.OnImageCapturedCallback() {
            @SuppressLint("UnsafeExperimentalUsageError")
            override fun onCaptureSuccess(imageProxy: ImageProxy) {
                val bitmapImage = imageProxy.image?.toBitmap()
                val stream = ByteArrayOutputStream()
                bitmapImage?.compress(Bitmap.CompressFormat.PNG, 90, stream)
                val image = stream.toByteArray()
            }

            override fun onError(exception: ImageCaptureException) {
                super.onError(exception)
            }
        })
}
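Note that PNG is a lossless format, so the 90 passed as the quality argument is ignored; use Bitmap.CompressFormat.JPEG if you want the quality setting to have an effect.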

How can I convert Tensor into Bitmap on PyTorch Mobile?

I found this solution (https://itnext.io/converting-pytorch-float-tensor-to-android-rgba-bitmap-with-kotlin-ffd4602a16b6), but when I tried converting that way, I found that the size of inputTensor.dataAsFloatArray is larger than bitmap.width * bitmap.height. How does converting a tensor to a float array work, and is there any other method to convert a PyTorch tensor to a bitmap?
val inputTensor = TensorImageUtils.bitmapToFloat32Tensor(
    bitmap,
    TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB
)
// Float array size is 196608 when width x height is 256x256 = 65536
val res = floatArrayToGrayscaleBitmap(inputTensor.dataAsFloatArray, bitmap.width, bitmap.height)
fun floatArrayToGrayscaleBitmap(
    floatArray: FloatArray,
    width: Int,
    height: Int,
    alpha: Byte = (255).toByte(),
    reverseScale: Boolean = false
): Bitmap {
    // Create empty bitmap in RGBA format (the constant says ARGB, but the buffer layout is RGBA)
    val bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val byteBuffer = ByteBuffer.allocate(width * height * 4)
    Log.d("App", floatArray.size.toString() + " " + (width * height * 4).toString())
    // Map the smallest value to 0 and the largest value to 255
    val maxValue = floatArray.maxOrNull() ?: 1.0f
    val minValue = floatArray.minOrNull() ?: 0.0f
    val delta = maxValue - minValue
    var tempValue: Byte
    // Define whether float min..max maps to 0..255 or to 255..0
    val conversion = when (reverseScale) {
        false -> { v: Float -> ((v - minValue) / delta * 255).toByte() }
        true -> { v: Float -> (255 - (v - minValue) / delta * 255).toByte() }
    }
    // Copy each value from the float array to the RGB channels and set the alpha channel
    floatArray.forEachIndexed { i, value ->
        tempValue = conversion(value)
        byteBuffer.put(4 * i, tempValue)
        byteBuffer.put(4 * i + 1, tempValue)
        byteBuffer.put(4 * i + 2, tempValue)
        byteBuffer.put(4 * i + 3, alpha)
    }
    bmp.copyPixelsFromBuffer(byteBuffer)
    return bmp
}
None of the answers were able to produce the output I wanted, so this is what I came up with; it is basically just a reverse-engineered version of what happens in TensorImageUtils.bitmapToFloat32Tensor().
Please note that this function only works if you are using MemoryFormat.CONTIGUOUS (the default) in TensorImageUtils.bitmapToFloat32Tensor().
fun tensor2Bitmap(input: FloatArray, width: Int, height: Int, normMeanRGB: FloatArray, normStdRGB: FloatArray): Bitmap? {
    val pixelsCount = height * width
    val pixels = IntArray(pixelsCount)
    val output = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val conversion = { v: Float -> ((v.coerceIn(0.0f, 1.0f)) * 255.0f).roundToInt() }
    // CHW layout: all R values first, then all G, then all B
    val offset_g = pixelsCount
    val offset_b = 2 * pixelsCount
    for (i in 0 until pixelsCount) {
        // Undo the torchvision normalization before converting to 0..255
        val r = conversion(input[i] * normStdRGB[0] + normMeanRGB[0])
        val g = conversion(input[i + offset_g] * normStdRGB[1] + normMeanRGB[1])
        val b = conversion(input[i + offset_b] * normStdRGB[2] + normMeanRGB[2])
        pixels[i] = 255 shl 24 or (r and 0xff shl 16) or (g and 0xff shl 8) or (b and 0xff)
    }
    output.setPixels(pixels, 0, width, 0, 0, width, height)
    return output
}
Example usage could then be as follows:
tensor2Bitmap(outputTensor.dataAsFloatArray, bitmap.width, bitmap.height, TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB)
I faced the same problem and found that the function TensorImageUtils.bitmapToFloat32Tensor() itself tortures the RGB colorspace. For now, you should convert YUV to a bitmap and use TensorImageUtils.bitmapToFloat32Tensor instead.
I modified the code from phillies (above) to get a colorful bitmap. Note that the format of an output tensor is typically NCHW.
Here's my function in Kotlin; hopefully it works in your case:
private fun floatArrayToBitmap(floatArray: FloatArray, width: Int, height: Int): Bitmap {
    // Create an empty bitmap in ARGB format
    val bmp: Bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val pixels = IntArray(width * height) // one packed ARGB int per pixel
    // Map the smallest value to 0 and the largest value to 255
    val maxValue = floatArray.maxOrNull() ?: 1.0f
    val minValue = floatArray.minOrNull() ?: -1.0f
    val delta = maxValue - minValue
    // Define whether float min..max maps to 0..255 or to 255..0
    val conversion = { v: Float -> ((v - minValue) / delta * 255.0f).roundToInt() }
    // Copy each value from the float array to the RGB channels (CHW layout)
    for (i in 0 until width * height) {
        val r = conversion(floatArray[i])
        val g = conversion(floatArray[i + width * height])
        val b = conversion(floatArray[i + 2 * width * height])
        pixels[i] = Color.rgb(r, g, b) // android.graphics.Color
    }
    bmp.setPixels(pixels, 0, width, 0, 0, width, height)
    return bmp
}
Hopefully future releases of PyTorch Mobile will fix this bug.

TensorFlow Lite on Android shows the same predictions for different inputs

I created a .tflite model with Keras.
I tested this model with Python for different sets of inputs (28x28x3) and it works fine: it shows different predictions depending on the photo at the input.
When I use it on Android, it shows the same prediction value for all photo inputs.
Does anybody know how to resolve this issue?
Python script
image = cv2.resize(image, (28, 28))
image = image.astype("float") / 255.0
image = img_to_array(image)
image = np.expand_dims(image, axis=0)
# load the trained convolutional neural network
print("[INFO] loading network...")
model = load_model(args["model"])
Android
private fun convertBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
    val inputSize = 28
    val byteBuffer = ByteBuffer.allocateDirect(2352 * 4)
    byteBuffer.order(ByteOrder.nativeOrder())
    val intValues = IntArray(inputSize * inputSize)
    bitmap.getPixels(intValues, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
    var pixel = 0
    for (i in 0 until inputSize) {
        for (j in 0 until inputSize) {
            val value = intValues[pixel++]
            byteBuffer.put(((value shr 16 and 0xFF) / 255.0f).toByte())
            byteBuffer.put(((value shr 8 and 0xFF) / 255.0f).toByte())
            byteBuffer.put(((value and 0xFF) / 255.0f).toByte())
        }
    }
    return byteBuffer
}

private fun getPrediction(bitmap: Bitmap) {
    tensorFlowModel?.let { tensorFlowModel ->
        val tflite = Interpreter(tensorFlowModel, Interpreter.Options())
        val inputData = convertBitmapToByteBuffer(bitmap)
        val labelProbArray: Array<FloatArray> = Array(1) { FloatArray(2) }
        tflite.run(inputData, labelProbArray)
        val prediction = (labelProbArray[0][0])
    }
}
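A likely culprit, for what it's worth: the buffer above is allocated for 2352 float32 values (2352 * 4 bytes), but each channel is written with toByte(), which truncates every normalized 0..1 value to 0, so the interpreter sees essentially the same input for every photo. A sketch of the conversion writing floats instead, assuming the model expects normalized float32 input as in the Python script:
import android.graphics.Bitmap
import java.nio.ByteBuffer
import java.nio.ByteOrder

private fun convertBitmapToFloatBuffer(bitmap: Bitmap): ByteBuffer {
    val inputSize = 28
    // 28 * 28 * 3 floats, 4 bytes each = 2352 * 4
    val byteBuffer = ByteBuffer.allocateDirect(inputSize * inputSize * 3 * 4)
    byteBuffer.order(ByteOrder.nativeOrder())
    val intValues = IntArray(inputSize * inputSize)
    bitmap.getPixels(intValues, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
    for (value in intValues) {
        // putFloat keeps the normalized value; toByte() would truncate it to 0
        byteBuffer.putFloat((value shr 16 and 0xFF) / 255.0f)
        byteBuffer.putFloat((value shr 8 and 0xFF) / 255.0f)
        byteBuffer.putFloat((value and 0xFF) / 255.0f)
    }
    return byteBuffer
}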

How to select a portion of the image when using CameraX analyze

I'm using CameraX and then FirebaseVision to read some text from the image. When I'm analyzing the image, I want to select only a portion of it, not the entire image, like when you use a barcode scanner.
class Analyzer : ImageAnalysis.Analyzer {
    override fun analyze(imageProxy: ImageProxy?, rotationDegrees: Int) {
        // how to crop the image in here?
        val image = imageProxy?.image
        val imageRotation = degreesToFirebaseRotation(rotationDegrees)
        if (image != null) {
            val visionImage = FirebaseVisionImage.fromMediaImage(image, imageRotation)
            val textRecognizer = FirebaseVision.getInstance().onDeviceTextRecognizer
            textRecognizer.processImage(visionImage)
        }
    }
}
I want to know, is there any way to crop the image?
Your problem is exactly what I tackled 2 months ago...
object YuvNV21Util {
    fun yuv420toNV21(image: Image): ByteArray {
        val crop = image.cropRect
        val format = image.format
        val width = crop.width()
        val height = crop.height()
        val planes = image.planes
        val data =
            ByteArray(width * height * ImageFormat.getBitsPerPixel(format) / 8)
        val rowData = ByteArray(planes[0].rowStride)
        var channelOffset = 0
        var outputStride = 1
        for (i in planes.indices) {
            when (i) {
                0 -> { // Y plane
                    channelOffset = 0
                    outputStride = 1
                }
                1 -> { // U plane, interleaved into the odd VU positions
                    channelOffset = width * height + 1
                    outputStride = 2
                }
                2 -> { // V plane, interleaved into the even VU positions
                    channelOffset = width * height
                    outputStride = 2
                }
            }
            val buffer = planes[i].buffer
            val rowStride = planes[i].rowStride
            val pixelStride = planes[i].pixelStride
            // Chroma planes are subsampled by 2 in both directions
            val shift = if (i == 0) 0 else 1
            val w = width shr shift
            val h = height shr shift
            // Seek to the first byte of the crop rect within this plane
            buffer.position(rowStride * (crop.top shr shift) + pixelStride * (crop.left shr shift))
            for (row in 0 until h) {
                var length: Int
                if (pixelStride == 1 && outputStride == 1) {
                    // Tightly packed: copy the whole row at once
                    length = w
                    buffer.get(data, channelOffset, length)
                    channelOffset += length
                } else {
                    // Copy the row into a scratch buffer, then pick every pixelStride-th byte
                    length = (w - 1) * pixelStride + 1
                    buffer.get(rowData, 0, length)
                    for (col in 0 until w) {
                        data[channelOffset] = rowData[col * pixelStride]
                        channelOffset += outputStride
                    }
                }
                if (row < h - 1) {
                    buffer.position(buffer.position() + rowStride - length)
                }
            }
        }
        return data
    }
}
Then convert the byte array into a bitmap:
object BitmapUtil {
    fun getBitmap(data: ByteArray, metadata: FrameMetadata): Bitmap {
        val image = YuvImage(
            data, ImageFormat.NV21, metadata.width, metadata.height, null
        )
        val stream = ByteArrayOutputStream()
        image.compressToJpeg(
            Rect(0, 0, metadata.width, metadata.height),
            80,
            stream
        )
        val bmp = BitmapFactory.decodeByteArray(stream.toByteArray(), 0, stream.size())
        stream.close()
        return rotateBitmap(bmp, metadata.rotation, false, false)
    }

    private fun rotateBitmap(
        bitmap: Bitmap, rotationDegrees: Int, flipX: Boolean, flipY: Boolean
    ): Bitmap {
        val matrix = Matrix()
        // Rotate the image back to straight.
        matrix.postRotate(rotationDegrees.toFloat())
        // Mirror the image along the X or Y axis.
        matrix.postScale(if (flipX) -1.0f else 1.0f, if (flipY) -1.0f else 1.0f)
        val rotatedBitmap =
            Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
        // Recycle the old bitmap if it has changed.
        if (rotatedBitmap != bitmap) {
            bitmap.recycle()
        }
        return rotatedBitmap
    }
}
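A possible way to wire the two together inside the analyzer, assuming FrameMetadata is the simple width/height/rotation holder from the ML Kit samples and that the image's cropRect was set beforehand:
override fun analyze(imageProxy: ImageProxy?, rotationDegrees: Int) {
    val mediaImage = imageProxy?.image ?: return
    // yuv420toNV21 crops to mediaImage.cropRect while converting
    val nv21 = YuvNV21Util.yuv420toNV21(mediaImage)
    val bitmap = BitmapUtil.getBitmap(
        nv21,
        FrameMetadata(mediaImage.cropRect.width(), mediaImage.cropRect.height(), rotationDegrees)
    )
    // Pass bitmap (or the nv21 bytes) on to the text recognizer
}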
Please have a look at my open source project https://github.com/minkiapps/Firebase-ML-Kit-Scanner-Demo, where I built a demo app that crops a portion of the image proxy before it is processed by ML Kit.
