CameraX: make ImageAnalysis the same size as Preview (Android)

I want the image I get in the analysis step to be exactly the same as what I see in the preview.
I have the preview use case:
val metrics = DisplayMetrics().also { binding.codeScannerView.display.getRealMetrics(it) }
val screenAspectRatio = Rational(metrics.widthPixels, metrics.heightPixels)
val previewConfig = PreviewConfig.Builder().apply {
    setTargetAspectRatio(screenAspectRatio)
}.build()
And next to it I have the analysis use case config:
val analyzerConfig = ImageAnalysisConfig.Builder().apply {
    setTargetResolution(Size(metrics.heightPixels, metrics.widthPixels))
    setTargetAspectRatio(screenAspectRatio)
    val analyzerThread = HandlerThread("QrCodeReader").apply { start() }
    setCallbackHandler(Handler(analyzerThread.looper))
    setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
}.build()
My preview is fullscreen, so its size is 1440x2560. But if I read the dimensions from the ImageProxy in the analyzer, I get 1920x1050, which seems to be the wrong size with width and height swapped. Why is that, and how can I force the analysis step to use the same dimensions as the full screen?

The dependencies I used:
implementation 'androidx.camera:camera-core:1.0.0-alpha10'
implementation 'androidx.camera:camera-camera2:1.0.0-alpha10'
implementation "androidx.camera:camera-lifecycle:1.0.0-alpha10"
implementation "androidx.camera:camera-view:1.0.0-alpha07"
implementation 'com.google.firebase:firebase-ml-vision:24.0.1'
implementation 'com.google.firebase:firebase-ml-vision-barcode-model:16.0.2'
I solved this issue via the method FirebaseVisionImage.fromBitmap(bitmap),
where bitmap is the image manually cropped and rotated according to the preview configuration.
The steps are:
When you set up ImageAnalysis.Builder() and Preview.Builder(), obtain the on-screen rendered size of the
androidx.camera.view.PreviewView element:
previewView.measure(View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED)
val previewSize = Size(previewView.width, previewView.height)
Then pass that size into your own ImageAnalysis.Analyzer
implementation (let's say it is stored in a viewFinderSize variable; usage is shown below, and a small wiring sketch follows the cropping code).
When override fun analyze(mediaImage: ImageProxy) { ... } is called, do the manual crop of the received ImageProxy. I used the snippet from another SO question about the distorted YUV_420_888 image: https://stackoverflow.com/a/45926852/2118862
When you do the cropping, keep in mind that the ImageAnalysis use case receives an image that is vertically centred relative to your preview use case. In other words, the received image, after rotation, is vertically centred as if it were placed inside your preview area (even if your preview area is smaller than the image passed into analysis). So the crop area should be calculated in both directions from the vertical centre: up and down.
The vertical size of the crop (its height) has to be calculated manually from the horizontal size. When you receive an image in the analysis step, it spans the full horizontal size of your preview area (100% of the width inside the preview equals 100% of the width inside the analysis image), so there are no hidden zones in the horizontal dimension. This opens the way to calculate the size of the vertical crop. I did this with the following code:
var bitmap = ... // <- obtain the bitmap as suggested in the SO link above
val matrix = Matrix()
matrix.postRotate(90f)
// rotate the full bitmap into portrait orientation
bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
val cropHeight = if (bitmap.width < viewFinderSize!!.width) {
    // if the preview area is larger than the analysed image
    val koeff = bitmap.width.toFloat() / viewFinderSize!!.width.toFloat()
    viewFinderSize!!.height.toFloat() * koeff
} else {
    // if the preview area is smaller than the analysed image
    val prc = 100 - (viewFinderSize!!.width.toFloat() / (bitmap.width.toFloat() / 100f))
    viewFinderSize!!.height + ((viewFinderSize!!.height.toFloat() / 100f) * prc)
}
val cropTop = (bitmap.height / 2) - (cropHeight / 2)
bitmap = Bitmap.createBitmap(bitmap, 0, cropTop.toInt(), bitmap.width, cropHeight.toInt())
The final value in the bitmap variable is the cropped image, ready to pass into FirebaseVisionImage.fromBitmap(bitmap).
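For reference, here is a minimal sketch (not part of the original answer) of how the measured PreviewView size could be wired into a custom analyzer; QrAnalyzer, imageAnalysis and cameraExecutor are hypothetical names:

import android.util.Size
import android.view.View
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy

// Hypothetical analyzer that receives the measured PreviewView size.
class QrAnalyzer(private val viewFinderSize: Size) : ImageAnalysis.Analyzer {
    override fun analyze(image: ImageProxy) {
        // convert, rotate and crop the frame against viewFinderSize here,
        // following the steps described above, then run the barcode detector
        image.close() // always release the frame when done
    }
}

// at binding time, e.g. inside your camera-setup function:
// previewView.measure(View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED)
// val viewFinderSize = Size(previewView.width, previewView.height)
// imageAnalysis.setAnalyzer(cameraExecutor, QrAnalyzer(viewFinderSize))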
PS.
Feel free to improve on this approach.

Maybe I am late, but here is the code that works for me.
Sometimes cropTop comes out as a negative value; whenever that happens you should process the image that comes from the ImageProxy directly, and in other cases you can process the cropped bitmap:
val mediaImage = imageProxy.image ?: return
var bitmap = ImageUtils.convertYuv420888ImageToBitmap(mediaImage)
val rotationDegrees = imageProxy.imageInfo.rotationDegrees
val matrix = Matrix()
matrix.postRotate(rotationDegrees.toFloat())
bitmap = Bitmap.createBitmap(bitmap, 0, 0, mediaImage.width, mediaImage.height, matrix, true)
val cropHeight = if (bitmap.width < previewView.width) {
    // if the preview area is larger than the analysed image
    val koeff = bitmap.width.toFloat() / previewView.width.toFloat()
    previewView.height.toFloat() * koeff
} else {
    // if the preview area is smaller than the analysed image
    val prc = 100 - (previewView.width.toFloat() / (bitmap.width.toFloat() / 100f))
    previewView.height + ((previewView.height.toFloat() / 100f) * prc)
}
val cropTop = (bitmap.height / 2) - (cropHeight / 2)
if (cropTop > 0) {
    Bitmap.createBitmap(bitmap, 0, cropTop.toInt(), bitmap.width, cropHeight.toInt())
        .also { process(it, imageProxy) }
} else {
    imageProxy.image?.let { process(it, imageProxy) }
}

Related

Preview file from original file is bigger in size android bitmap

Given a file (which is an image) in Android, I want to create a preview image which should of course be smaller in size. I use the code below:
override fun createImagePreview(serverId: Long, fileName: String) {
    if (fileName.fileType() != IMAGE_TYPE) return
    val file = serverDir.getFileByName(fileName)
    val originalBitmap = BitmapFactory.decodeFile(file.absolutePath) ?: return
    var newWidth = DEFAULT_IMAGE_SIZE.toDouble()
    var newHeight = DEFAULT_IMAGE_SIZE.toDouble()
    if (originalBitmap.width > originalBitmap.height) {
        val ratio = originalBitmap.width / originalBitmap.height
        newHeight = newWidth / ratio
    } else if (originalBitmap.width < originalBitmap.height) {
        val ratio = originalBitmap.height / originalBitmap.width
        newWidth = newHeight / ratio
    }
    val bmp = ThumbnailUtils.extractThumbnail(
        originalBitmap,
        newWidth.toInt().dpToPixel(appContext),
        newHeight.toInt().dpToPixel(appContext)
    )
    FileOutputStream(File(serverDir, "p_$fileName")).use {
        it.write(bmp.toByteArray())
        it.flush()
    }
}
However, I see in my internal storage that if the original file is 5 KB, the preview file is 19 KB. What is wrong here?
bmp.toByteArray() is the wrong way to store an image. The Bitmap object is uncompressed, which means it uses 4 bytes per pixel. A JPG or PNG is compressed, so it uses far less data per pixel. You want to store the preview in one of those formats. The way to do that is to use Bitmap.compress() and pass it a stream to the file you want to write.
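For example, a minimal sketch of the fix, replacing the FileOutputStream block in createImagePreview() (the JPEG quality of 85 is an arbitrary choice):

FileOutputStream(File(serverDir, "p_$fileName")).use { out ->
    // Bitmap.compress writes an encoded (compressed) image to the stream,
    // instead of dumping the raw 4-bytes-per-pixel buffer
    bmp.compress(Bitmap.CompressFormat.JPEG, 85, out)
    out.flush()
}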

How to crop and resize the section of an Image within a Rect bounding box?

After detecting a face with CameraX and MLKit I need to pass the image to a custom TFLite model (I'm using this one), which detects a facemask. The model accepts images of 224x224 pixels, so I need to take out the part of ImageProxy#getImage() corresponding to Face#getBoundingBox() and resize it accordingly.
I've seen this answer, which could have been fine, but ThumbnailUtils.extractThumbnail() can't take a Rect of 4 coordinates; it crops relative to the center of the image, while the face's bounding box might be elsewhere.
The TFLite model accepts inputs like this:
val inputFeature0 = TensorBuffer
    .createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
    .loadBuffer(/* the resized image as ByteBuffer */)
Note that the ByteBuffer will have a size of 224 * 224 * 3 * 4 bytes (where 4 is DataType.FLOAT32.byteSize()).
Edit: I've cleaned up some of the old text because it was getting overwhelming. The code suggested below actually works: I just forgot to delete a piece of my own code which was already converting the same ImageProxy to Bitmap and it must have caused some internal buffer to be read until the end, so it was either necessary to rewind it manually or to delete that useless code altogether.
However, even if the cropRect is applied to the ImageProxy and the underlying Image, the resulting bitmap is still full size so there must be something else to do. The model is still returning NaN values, so I'm going to experiment with the raw output for a while.
fun hasMask(imageProxy: ImageProxy, boundingBox: Rect): Boolean {
    val model = MaskDetector.newInstance(context)
    val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
    // now the cropRect is set correctly but the image itself isn't
    // cropped before being converted to Bitmap
    imageProxy.setCropRect(boundingBox)
    imageProxy.image?.cropRect = boundingBox
    val bitmap = BitmapUtils.getBitmap(imageProxy) ?: return false
    val resized = Bitmap.createScaledBitmap(bitmap, 224, 224, false)
    // input for the model
    val buffer = ByteBuffer.allocate(224 * 224 * 3 * DataType.FLOAT32.byteSize())
    resized.copyPixelsToBuffer(buffer)
    // use the model and get the result as 2 Floats
    val outputFeature0 = model.process(inputFeature0).outputFeature0AsTensorBuffer
    val maskProbability = outputFeature0.floatArray[0]
    val noMaskProbability = outputFeature0.floatArray[1]
    model.close()
    return maskProbability > noMaskProbability
}
We will provide a better way to handle the image processing when working with ML Kit.
For now, you could try this method: https://github.com/googlesamples/mlkit/blob/master/android/vision-quickstart/app/src/main/java/com/google/mlkit/vision/demo/BitmapUtils.java#L74
It will convert the ImageProxy to Bitmap, and rotate it to upright. The bounding box from the face detection should be applied to the bitmap directly, which means you should be able to crop the bitmap with the Rect bounding box.
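As an illustrative sketch only (not part of the original answer): inside a function like hasMask() above, the crop and the model input could look roughly like this. It assumes the bitmap returned by BitmapUtils.getBitmap() is already rotated upright, and the division by 255f is an assumption about how the model was trained:

val bitmap = BitmapUtils.getBitmap(imageProxy) ?: return false
// clamp the face box to the bitmap bounds before cropping
val box = Rect(boundingBox)
box.intersect(0, 0, bitmap.width, bitmap.height)
val face = Bitmap.createBitmap(bitmap, box.left, box.top, box.width(), box.height())
val resized = Bitmap.createScaledBitmap(face, 224, 224, true)

// pack the pixels as normalized floats, matching the 1 x 224 x 224 x 3 FLOAT32 tensor
val buffer = ByteBuffer.allocateDirect(224 * 224 * 3 * 4).order(ByteOrder.nativeOrder())
val pixels = IntArray(224 * 224)
resized.getPixels(pixels, 0, 224, 0, 0, 224, 224)
for (p in pixels) {
    buffer.putFloat(Color.red(p) / 255f)
    buffer.putFloat(Color.green(p) / 255f)
    buffer.putFloat(Color.blue(p) / 255f)
}
buffer.rewind()
inputFeature0.loadBuffer(buffer)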

Reduce tracking window using google mlkit vision samples

I would like to reduce the barcode tracking window when using the Google Vision API. There are some answers here, but they feel a bit outdated.
I'm using google's sample: https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart
Currently, I try to figure out whether a barcode is inside my overlay box in the BarcodeScannerProcessor onSuccess callback:
override fun onSuccess(barcodes: List<Barcode>, graphicOverlay: GraphicOverlay) {
    if (barcodes.isEmpty())
        return
    for (barcode in barcodes) {
        val center = Point(graphicOverlay.imageWidth / 2, graphicOverlay.imageHeight / 2)
        val rectWidth = graphicOverlay.imageWidth * Settings.OverlayWidthFactor
        val rectHeight = graphicOverlay.imageHeight * Settings.OverlayHeightFactor
        val left = center.x - rectWidth / 2
        val top = center.y - rectHeight / 2
        val right = center.x + rectWidth / 2
        val bottom = center.y + rectHeight / 2
        val rect = Rect(left.toInt(), top.toInt(), right.toInt(), bottom.toInt())
        val contains = rect.contains(barcode.boundingBox!!)
        val color = if (contains) Color.GREEN else Color.RED
        graphicOverlay.add(BarcodeGraphic(graphicOverlay, barcode, "left: ${barcode.boundingBox!!.left}", color))
    }
}
Y-wise it works perfectly, but the X values from barcode.boundingBox, e.g. barcode.boundingBox.left, seem to have an offset. Is it based on what's being calculated in GraphicOverlay?
I'm expecting the value to be close to 0, but the offset is about 90 here.
Or perhaps it's more efficient to crop the image according to the box?
Actually the bounding box is correct. The trick is that the image aspect ratio doesn't match the viewport aspect ratio so the image is cropped horizontally. Try to open settings (a gear in the top right corner) and choose an appropriate resolution.
For example take a look at these two screenshots. On the first one the selected resolution (1080x1920) matches my phone resolution so the padding looks good (17px). On the second screenshot the aspect ratio is different (1.0 for 720x720 resolution) therefore the image is cropped and the padding looks incorrect.
So the offset should be transformed from image coordinates to the screen coordinates. Under the hood GraphicOverlay uses a matrix for this transformation. You can use the same matrix:
for (barcode in barcodes) {
    barcode.boundingBox?.let { bbox ->
        val offset = floatArrayOf(bbox.left.toFloat(), bbox.top.toFloat())
        graphicOverlay.transformationMatrix.mapPoints(offset)
        val leftOffset = offset[0]
        val topOffset = offset[1]
        ...
    }
}
The only thing is that the transformationMatrix is private, so you should add a getter to access it.
As you know, the preview size of the camera is configurable in the settings menu. This configurable size specifies the graphicOverlay dimensions.
On the other hand, the aspect ratio of the CameraSourcePreview (i.e. preview_view in activity_vision_live_preview.xml) which is shown on the screen is not necessarily equal to the ratio of the graphicOverlay, because it depends on the size of the phone's screen and the height the parent ConstraintLayout allows it to occupy.
So, in the preview, based on the difference between the aspect ratios of graphicOverlay and preview_view, some part of the graphicOverlay might not be shown horizontally or vertically.
There are some parameters inside GraphicOverlay that can help us to adjust the left and top of the barcode's boundingBox in such a way that they start from 0 in the visible area.
First of all, they should be accessible from outside the GraphicOverlay class, so it's enough to write a getter method for each of them:
GraphicOverlay.java
public class GraphicOverlay extends View {
    ...
    /**
     * The factor of overlay View size to image size. Anything in the image coordinates needs to be
     * scaled by this amount to fit with the area of overlay View.
     */
    public float getScaleFactor() {
        return scaleFactor;
    }

    /**
     * The number of vertical pixels needed to be cropped on each side to fit the image with the
     * area of overlay View after scaling.
     */
    public float getPostScaleHeightOffset() {
        return postScaleHeightOffset;
    }

    /**
     * The number of horizontal pixels needed to be cropped on each side to fit the image with the
     * area of overlay View after scaling.
     */
    public float getPostScaleWidthOffset() {
        return postScaleWidthOffset;
    }
}
Now, it is possible to calculate the left and top difference gap using these parameters like the following:
BarcodeScannerProcessor.kt
class BarcodeScannerProcessor(
    context: Context
) : VisionProcessorBase<List<Barcode>>(context) {
    ...
    override fun onSuccess(barcodes: List<Barcode>, graphicOverlay: GraphicOverlay) {
        if (barcodes.isEmpty()) {
            Log.v(MANUAL_TESTING_LOG, "No barcode has been detected")
        }
        val leftDiff = graphicOverlay.run { postScaleWidthOffset / scaleFactor }.toInt()
        val topDiff = graphicOverlay.run { postScaleHeightOffset / scaleFactor }.toInt()
        for (i in barcodes.indices) {
            val barcode = barcodes[i]
            val color = Color.RED
            val text = "left: ${barcode.boundingBox!!.left - leftDiff} top: ${barcode.boundingBox!!.top - topDiff}"
            graphicOverlay.add(MyBarcodeGraphic(graphicOverlay, barcode, text, color))
            logExtrasForTesting(barcode)
        }
    }
    ...
}
Visual Result:
Here is the visual result of the output. As the pictures show, the gap between the left and top of the barcode and the left and top of the visible area now starts from 0. In the left picture the graphicOverlay is set to a size of 480x640 (aspect ratio ≈ 1.3334) and in the right one to 360x640 (aspect ratio ≈ 1.7778). In both cases, on my phone, the CameraSourcePreview has a steady size of 1440x2056 pixels (aspect ratio ≈ 1.4278), so the calculation truly reflects the position of the barcode in the visible area.
(Note that the aspect ratio of the visible area is lower than that of the graphicOverlay in one experiment and greater in the other: 1.3334 < 1.4278 < 1.7778. The left and top values are adjusted accordingly.)

CameraX Image analysis's imageproxy size and PreviewView size are not the same

I'm trying to use Firebase's ML Kit for face detection with CameraX. I'm having a hard time getting the ImageProxy size from image analysis to match the PreviewView size. For both image analysis and PreviewView, I've set setTargetResolution() to the PreviewView width and height. However, when I check the size of the ImageProxy in the analyzer, it gives me 1920 as width and 1080 as height. My PreviewView is 1080 wide and 2042 high. When I swap the width and height in setTargetResolution() for image analysis, I get 1088 for both width and height in the ImageProxy. My PreviewView is also locked to portrait mode.
Ultimately, I need to feed the raw imageproxy data and the face point data into an AR code. So scaling up just the graphics overlay that draws the face points will not work for me.
Q: If there is no way to fix this within the CameraX libraries, how do I scale the ImageProxy returned from the analyzer to match the PreviewView?
I'm using Java and the latest Camerax libs:
def camerax_version = "1.0.0-beta08"
It's quite difficult to ensure both the preview and image analysis use cases have the same output resolution, since different devices support different resolutions, and image analysis has a hard limit on the max resolution of its output (as mentioned in the documentation).
To make the conversion easier between coordinates from the image analysis frames and the UI/PreviewView, you can set both preview and ImageAnalysis to use the same aspect ratio, for instance AspectRatio.RATIO_4_3, as well as PreviewView (by wrapping it inside a ConstraintLayout for example, and setting a constraint on its width/height ratio). With this, mapping coordinates of detected faces from the analyzer to the UI becomes more straight-forward, you can take a look at it in this sample.
Alternatively, you could use CameraX's ViewPort API which -I believe- is still experimental. It allows defining a field of view for a group of use cases, resulting in their outputs matching and having WYSIWYG. You can find an example of its usage here. For your case, you'd write something like this.
Preview preview = ...
preview.setSurfaceProvider(previewView.getSurfaceProvider());

ImageAnalysis imageAnalysis = ...
imageAnalysis.setAnalyzer(...);

ViewPort viewPort = preview.getViewPort();
UseCaseGroup useCaseGroup = new UseCaseGroup.Builder()
        .setViewPort(viewPort)
        .addUseCase(preview)
        .addUseCase(imageAnalysis)
        .build();
cameraProvider.bindToLifecycle(
        lifecycleOwner,
        cameraSelector,
        useCaseGroup);
In this scenario, every ImageProxy your analyzer receives will contain a crop rect that matches what PreviewView displays. So you just need to crop your image, then pass it to the face detector.
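Purely as an illustration (not part of the original answer), the cropping step inside the analyzer could look roughly like this, assuming the standalone ML Kit face detection API (InputImage / FaceDetector) and the convertYuv420888ImageToBitmap helper mentioned earlier in this thread; faceDetector is a hypothetical, already-configured detector:

val rect = imageProxy.cropRect
val mediaImage = imageProxy.image ?: run { imageProxy.close(); return }
var bitmap = ImageUtils.convertYuv420888ImageToBitmap(mediaImage)
// keep only the part of the frame that PreviewView actually displays
bitmap = Bitmap.createBitmap(bitmap, rect.left, rect.top, rect.width(), rect.height())
val input = InputImage.fromBitmap(bitmap, imageProxy.imageInfo.rotationDegrees)
faceDetector.process(input)
    .addOnSuccessListener { faces -> /* map face points onto the UI / AR code */ }
    .addOnCompleteListener { imageProxy.close() }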
This answer is derived from @Husayn's answer; I have added the relevant sample code.
The CameraX image size for preview and analysis varies for various reasons (for example, device-specific display size/hardware/camera, or app-specific views and processing).
However, there are options to map the processed image size and the resulting x/y coordinates to the preview size and preview x/y coordinates.
Set up the layout with DimensionRatio 3:4 for both the preview and the analysis overlay in the layout file.
Example:
<androidx.camera.view.PreviewView
    android:id="@+id/view_finder"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintDimensionRatio="3:4"
    app:layout_constraintTop_toTopOf="parent"/>

<com.loa.sepanex.scanner.view.GraphicOverlay
    android:id="@+id/graphic_overlay"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintDimensionRatio="3:4"
    app:layout_constraintTop_toTopOf="parent"/>
Set up the preview and analysis use cases with AspectRatio.RATIO_4_3.
Example:
viewFinder = view.findViewById(R.id.view_finder)
graphicOverlay = view.findViewById(R.id.graphic_overlay)
//...
preview = Preview.Builder()
    .setTargetAspectRatio(AspectRatio.RATIO_4_3)
    .setTargetRotation(rotation)
    .build()
imageAnalyzer = ImageAnalysis.Builder()
    .setTargetAspectRatio(AspectRatio.RATIO_4_3)
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .setTargetRotation(rotation)
    .build()
    .also {
        it.setAnalyzer(cameraExecutor, ImageAnalysis.Analyzer { image ->
            //val rotationDegrees = image.imageInfo.rotationDegrees
            try {
                val mediaImage: Image? = image.image
                if (mediaImage != null) {
                    val imageForFaceDetectionProcess = InputImage.fromMediaImage(mediaImage, image.imageInfo.rotationDegrees)
                    //...
                }
            } catch (e: Exception) {
                // handle or log the failure
            }
        })
    }
Define scale and translate APIs for mapping analysis-image x/y coordinates to preview x/y coordinates, as shown below:
val preview = viewFinder.getChildAt(0)
var previewWidth = preview.width * preview.scaleX
var previewHeight = preview.height * preview.scaleY
val rotation = preview.display.rotation
if (rotation == Surface.ROTATION_90 || rotation == Surface.ROTATION_270) {
    val temp = previewWidth
    previewWidth = previewHeight
    previewHeight = temp
}
val isImageFlipped = lensFacing == CameraSelector.LENS_FACING_FRONT
val rotationDegrees: Int = imageProxy.getImageInfo().getRotationDegrees()
if (rotationDegrees == 0 || rotationDegrees == 180) {
    graphicOverlay!!.setImageSourceInfo(
        imageProxy.getWidth(), imageProxy.getHeight(), isImageFlipped)
} else {
    graphicOverlay!!.setImageSourceInfo(
        imageProxy.getHeight(), imageProxy.getWidth(), isImageFlipped)
}
...
float viewAspectRatio = (float) previewWidth / previewHeight;
float imageAspectRatio = (float) imageWidth / imageHeight;
postScaleWidthOffset = 0;
postScaleHeightOffset = 0;
if (viewAspectRatio > imageAspectRatio) {
    // The image needs to be vertically cropped to be displayed in this view.
    scaleFactor = (float) previewWidth / imageWidth;
    postScaleHeightOffset = ((float) previewWidth / imageAspectRatio - previewHeight) / 2;
} else {
    // The image needs to be horizontally cropped to be displayed in this view.
    scaleFactor = (float) previewHeight / imageHeight;
    postScaleWidthOffset = ((float) previewHeight * imageAspectRatio - previewWidth) / 2;
}
transformationMatrix.reset();
transformationMatrix.setScale(scaleFactor, scaleFactor);
transformationMatrix.postTranslate(-postScaleWidthOffset, -postScaleHeightOffset);
if (isImageFlipped) {
    transformationMatrix.postScale(-1f, 1f, previewWidth / 2f, previewHeight / 2f);
}
...
public float scale(float imagePixel) {
    return imagePixel * overlay.scaleFactor;
}

public float translateX(float x) {
    if (overlay.isImageFlipped) {
        return overlay.getWidth() - (scale(x) - overlay.postScaleWidthOffset);
    } else {
        return scale(x) - overlay.postScaleWidthOffset;
    }
}

public float translateY(float y) {
    return scale(y) - overlay.postScaleHeightOffset;
}
Use the translateX and translateY methods to plot analysis-image-based data onto the preview.
Example:
for (FaceContour contour : face.getAllContours()) {
    for (PointF point : contour.getPoints()) {
        canvas.drawCircle(translateX(point.x), translateY(point.y), FACE_POSITION_RADIUS, facePositionPaint);
    }
}

How to match PreviewView aspect ratio to captured image using CameraX

I have a PreviewView that occupies the whole screen except for the toolbar.
The camera preview works great, but when I capture the image, its aspect ratio is completely different.
I would like to show the image to the user after it is successfully captured, at the same size as the PreviewView, so I don't have to crop or stretch it.
Is it possible to change the aspect ratio so that on every device it matches the size of the PreviewView, or do I have to set it to a fixed value?
You can set the aspect ratio of the Preview and ImageCapture use cases while building them. If you set the same aspect ratio to both use cases, you should end up with a captured image that matches the camera preview output.
Example: Setting Preview and ImageCapture's aspect ratios to 4:3
Preview preview = new Preview.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_4_3)
        .build();

ImageCapture imageCapture = new ImageCapture.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_4_3)
        .build();
By doing this, you'll most likely still end up with a captured image that doesn't match what PreviewView is displaying. Assuming you don't change the default scale type of PreviewView, it'll be equal to ScaleType.FILL_CENTER, meaning that unless the camera preview output has an aspect ratio that matches that of PreviewView, PreviewView will crop parts of the preview (the top and bottom, or the right and left sides), resulting in the captured image not matching what PreviewView displays. To solve this issue, you should set PreviewView's aspect ratio to the same aspect ratio as the Preview and ImageCapture use cases.
Example: Setting PreviewView's aspect ratio to 4:3
<androidx.constraintlayout.widget.ConstraintLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <androidx.camera.view.PreviewView
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintDimensionRatio="3:4"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
I was working with CameraX and faced the same issue. I followed the excellent answer given by @Husayn Hakeem. However, my app needs the PreviewView to match the device width and height precisely, so I made some changes. In Kotlin (or Java), I added:
Preview preview = new Preview.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_16_9)
        .build();

ImageCapture imageCapture = new ImageCapture.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_16_9)
        .build();
And my xml has:
<androidx.camera.view.PreviewView
    android:id="@+id/viewFinder"
    android:layout_width="0dp"
    android:layout_height="0dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent"
    tools:background="@color/colorLight"
    />
From Kotlin, I measured the device's actual height and width in pixels:
val metrics: DisplayMetrics = DisplayMetrics().also { viewFinder.display.getRealMetrics(it) }
val deviceWidthPx = metrics.widthPixels
val deviceHeightPx = metrics.heightPixels
Using these data, I used some basic coordinate geometry to draw some lines on the captured photo:
As you can see, the yellow rectangle indicates the device preview, i.e. what you see in the preview. I had a requirement to perform a center crop, which is why I drew the blue square.
After that, I cropped the preview:
@JvmStatic
fun cropPreviewBitmap(previewBitmap: Bitmap, deviceWidthPx: Int, deviceHeightPx: Int): Bitmap {
    // crop the image
    var cropHeightPx = 0f
    var cropWidthPx = 0f
    if (deviceHeightPx > deviceWidthPx) {
        cropHeightPx = 1.0f * previewBitmap.height
        cropWidthPx = 1.0f * deviceWidthPx / deviceHeightPx * cropHeightPx
    } else {
        cropWidthPx = 1.0f * previewBitmap.width
        cropHeightPx = 1.0f * deviceHeightPx / deviceWidthPx * cropWidthPx
    }
    val cx = previewBitmap.width / 2
    val cy = previewBitmap.height / 2
    val minimusPx = Math.min(cropHeightPx, cropWidthPx)
    val left2 = cx - minimusPx / 2
    val top2 = cy - minimusPx / 2
    val croppedBitmap = Bitmap.createBitmap(previewBitmap, left2.toInt(), top2.toInt(), minimusPx.toInt(), minimusPx.toInt())
    return croppedBitmap
}
This worked for me.
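Usage is then roughly as follows (capturedBitmap stands for the bitmap decoded from the ImageCapture result, and imageView for wherever you show it; both are placeholder names):

val previewSizedBitmap = cropPreviewBitmap(capturedBitmap, deviceWidthPx, deviceHeightPx)
imageView.setImageBitmap(previewSizedBitmap)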
The easiest solution is to use a UseCaseGroup, where you add both the preview and image capture use cases to the same group and set the same view port.
Beware that with this solution you will need to start the camera in the onCreate method when the view is ready (otherwise your app will crash):
viewFinder.post {
    startCamera()
}
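For illustration only, a rough sketch of that setup inside startCamera() (the exact view-port API has changed between CameraX versions, so treat this as an outline; cameraProvider and cameraSelector are assumed to be set up elsewhere):

val preview = Preview.Builder().build()
    .also { it.setSurfaceProvider(viewFinder.surfaceProvider) }
val imageCapture = ImageCapture.Builder().build()
val useCaseGroup = UseCaseGroup.Builder()
    .setViewPort(viewFinder.viewPort!!) // PreviewView's view port; null until the view is laid out
    .addUseCase(preview)
    .addUseCase(imageCapture)
    .build()
// "this" must be a LifecycleOwner (e.g. the Activity or Fragment)
cameraProvider.bindToLifecycle(this, cameraSelector, useCaseGroup)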
