I am trying the CameraX library for capturing an image, but it captures more space on the left and right than the preview shows.
I framed the area between the letters Q and O in the preview, but the captured image includes extra area around Q and O.
/**
 * Bind the camera to the lifecycle
 */
private fun bindCamera() {
    CameraX.unbindAll()

    // Preview config for the camera
    val previewConfig = PreviewConfig.Builder()
        .setLensFacing(lensFacing)
        .build()
    val preview = Preview(previewConfig)

    // Image capture config, which controls the flash and lens
    val imageCaptureConfig = ImageCaptureConfig.Builder()
        .setTargetRotation(windowManager.defaultDisplay.rotation)
        .setLensFacing(lensFacing)
        .setFlashMode(FlashMode.ON)
        .build()
    imageCapture = ImageCapture(imageCaptureConfig)

    // The view that displays the preview
    val textureView: TextureView = findViewById(R.id.view_finder)

    // Handles the output data of the camera
    preview.setOnPreviewOutputUpdateListener { previewOutput ->
        // Displays the camera image in our preview view
        textureView.surfaceTexture = previewOutput.surfaceTexture
    }

    // Bind the camera to the lifecycle
    CameraX.bindToLifecycle(this as LifecycleOwner, imageCapture, preview)
}
Can someone help me here?
This is normal behaviour, because your camera sensor captures a wider angle than your Preview can show. The Preview only adapts the picture's height and width to the size it can display, while ImageCapture captures the picture independently, as if there were no Preview.
It is important to realize that all of the use cases CameraX provides (Preview, ImageAnalysis and ImageCapture) work independently. Meaning, you can use ImageCapture even without a Preview, and the same goes for ImageAnalysis and Preview.
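If you need the saved picture to match what the Preview showed, one option is to center-crop the captured bitmap to the preview's aspect ratio afterwards. A minimal sketch, assuming you have already decoded the capture result into a Bitmap (function and parameter names here are hypothetical, not CameraX API):

import android.graphics.Bitmap

// Center-crops a captured bitmap to the preview's aspect ratio so the saved
// image shows roughly the same field of view as the on-screen preview.
// previewWidth / previewHeight are the preview view's pixel dimensions.
fun cropToPreviewAspect(captured: Bitmap, previewWidth: Int, previewHeight: Int): Bitmap {
    val targetRatio = previewWidth.toFloat() / previewHeight
    val capturedRatio = captured.width.toFloat() / captured.height
    return if (capturedRatio > targetRatio) {
        // Captured frame is wider than the preview: trim the left and right sides
        val newWidth = (captured.height * targetRatio).toInt()
        Bitmap.createBitmap(captured, (captured.width - newWidth) / 2, 0, newWidth, captured.height)
    } else {
        // Captured frame is taller than the preview: trim the top and bottom
        val newHeight = (captured.width / targetRatio).toInt()
        Bitmap.createBitmap(captured, 0, (captured.height - newHeight) / 2, captured.width, newHeight)
    }
}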
This is how it shows (screenshot omitted).
Here's the code. I have tried a lot of things related to aspect ratio and preview size; the video itself is saved correctly in portrait mode, as I want.
camera?.let {
    val params = it.parameters
    params.set("orientation", "portrait")
    params.setRotation(90)
    it.parameters = params  // apply the changed parameters back to the camera

    val surfaceView = binding.surfaceView
    val previewSizes = it.parameters?.supportedPreviewSizes
    val surfaceHolder = surfaceView.holder
    it.setPreviewDisplay(surfaceHolder)
    it.startPreview()
}
// Initialize media recorder
mediaRecorder = MediaRecorder()
mediaRecorder?.let {
    it.setAudioSource(MediaRecorder.AudioSource.MIC)
    it.setVideoSource(MediaRecorder.VideoSource.CAMERA)
    it.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
    it.setVideoEncoder(MediaRecorder.VideoEncoder.H264)
    it.setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
    it.setVideoFrameRate(30)
    it.setVideoEncodingBitRate(10_000_000)
    it.setOrientationHint(90)  // Set the orientation of the recorded video to portrait
    it.setPreviewDisplay(surfaceView.holder.surface)
    it.setOutputFile(getOutputFile().path)
}
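Note that the code above fetches supportedPreviewSizes but never uses the result; a stretched preview usually means the chosen preview size's aspect ratio doesn't match the surface it's drawn on. A sketch of picking the closest supported size (the view names are assumptions from your snippet, not verified against your layout):

import android.hardware.Camera
import kotlin.math.abs

// Picks the supported preview size whose aspect ratio is closest to the target.
// Camera preview sizes are reported in landscape, so compare width / height.
fun chooseBestPreviewSize(sizes: List<Camera.Size>, targetRatio: Float): Camera.Size =
    sizes.minByOrNull { abs(it.width.toFloat() / it.height - targetRatio) }
        ?: sizes.first()

// Usage: the display is portrait while camera sizes are landscape,
// so the target ratio is view height / view width.
val best = chooseBestPreviewSize(
    camera!!.parameters.supportedPreviewSizes,
    surfaceView.height.toFloat() / surfaceView.width
)
params.setPreviewSize(best.width, best.height)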
I have normal portrait orientation in the emulator, and the int rotation has a value of zero. Everything is fine.
void bindPreview(@NonNull ProcessCameraProvider cameraProvider) {
    int rotation = cameraView.getDisplay().getRotation(); // 0

    // Preview
    Preview preview = new Preview.Builder()
            .setTargetAspectRatio(AspectRatio.RATIO_4_3)
            .build();

    // Camera
    CameraSelector cameraSelector = new CameraSelector.Builder()
            .requireLensFacing(CameraSelector.LENS_FACING_BACK)
            .build();

    // Create image capture
    imageCapture = new ImageCapture.Builder()
            .setTargetResolution(new Size(1200, 720))
            .setTargetRotation(rotation)
            .build();

    // ViewPort
    Rational aspectRatio = new Rational(cameraView.getWidth(), cameraView.getHeight());
    ViewPort viewPort = new ViewPort.Builder(aspectRatio, rotation).build();

    // Use case group
    UseCaseGroup useCaseGroup = new UseCaseGroup.Builder()
            .addUseCase(preview)
            .addUseCase(imageCapture)
            .setViewPort(viewPort)
            .build();

    // Bind the group (assuming this class is a LifecycleOwner)
    cameraProvider.bindToLifecycle(this, cameraSelector, useCaseGroup);
}
But then, in the imageCapture.takePicture callback:
public void onCaptureSuccess(@NonNull ImageProxy image) {
    imageRotationDegrees = image.getImageInfo().getRotationDegrees(); // 90
}
imageRotationDegrees returns 90! That would mean the image must be rotated to get the natural orientation, but it is not! Its value should be 0.
Is this normal?
Update:
On a device I get 0 in imageRotationDegrees.
On the emulator I get 90 in imageRotationDegrees.
But all images come out in the correct orientation regardless of this value. How do I know which images I have to rotate, if any?
Yes, it is normal for the imageRotationDegrees value to differ from the rotation value in the bindPreview method.
The rotation value represents the rotation of the device's screen, while imageRotationDegrees represents the orientation of the image as it comes off the camera sensor. These values can differ because the sensor and the screen are not necessarily mounted in the same orientation.
For example, if the device is in its natural portrait orientation, the rotation value will be 0, but imageRotationDegrees may be 90 because most camera sensors are mounted in landscape, so the captured image is rotated 90 degrees relative to the device's portrait orientation.
To get the natural orientation of the image, you can rotate it by the value of imageRotationDegrees. For example, if you have the image as an Android Bitmap, you can rotate it with a Matrix:
Matrix matrix = new Matrix();
matrix.postRotate(imageRotationDegrees);
Bitmap rotatedImage = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
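Putting it together in the capture callback: a minimal Kotlin sketch, assuming the default JPEG capture format (the buffer-to-bitmap decoding here is an illustration, not a fixed CameraX recipe):

override fun onCaptureSuccess(image: ImageProxy) {
    val rotationDegrees = image.imageInfo.rotationDegrees

    // The default ImageCapture format is JPEG: a single plane with the encoded bytes
    val buffer = image.planes[0].buffer
    val bytes = ByteArray(buffer.remaining()).also { buffer.get(it) }
    val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)

    // decodeByteArray ignores EXIF, so apply the reported rotation manually
    val upright = if (rotationDegrees != 0) {
        val matrix = Matrix().apply { postRotate(rotationDegrees.toFloat()) }
        Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
    } else {
        bitmap
    }
    // ... display or save `upright` here ...

    image.close()
}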
Okay. If this is not right, please correct me:
Hardware devices have various sensor rotations.
imageRotationDegrees == 90 (the emulator in my case) means screen_height = sensor_width.
imageRotationDegrees == 0 (the device in my case) means screen_height = sensor_height.
Depending on the rotation I get different values from image.getCropRect(). So I check imageRotationDegrees and calculate the correct frame for cropping:
if (imageRotationDegrees == 0 || imageRotationDegrees == 180) {
    // sensor output matches the display orientation: use width/height as-is
} else {
    // sensor output is rotated 90/270 relative to the display: swap width and height
}
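A sketch of what those branches can compute, in Kotlin: a centered crop rectangle for a given display aspect ratio, swapping width and height when the sensor is rotated 90/270 degrees (the helper name and parameters are hypothetical):

import android.graphics.Rect
import androidx.camera.core.ImageProxy

// Returns a centered crop Rect in sensor coordinates for the given
// display aspect ratio (display width / display height).
fun cropRectFor(image: ImageProxy, displayRatio: Float): Rect {
    val rotated = image.imageInfo.rotationDegrees % 180 != 0
    // In display terms: swap width/height when the sensor is rotated 90/270
    val width = if (rotated) image.height else image.width
    val height = if (rotated) image.width else image.height

    // Largest crop with the display's aspect ratio that fits the frame
    var cropWidth = width
    var cropHeight = (width / displayRatio).toInt()
    if (cropHeight > height) {
        cropHeight = height
        cropWidth = (height * displayRatio).toInt()
    }

    // Map back to sensor coordinates before building the Rect
    val (w, h) = if (rotated) cropHeight to cropWidth else cropWidth to cropHeight
    val left = (image.width - w) / 2
    val top = (image.height - h) / 2
    return Rect(left, top, left + w, top + h)
}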
Additional info:
getRotationDegrees differs between ImageCapture and ImageAnalysis on the device.
I have a PreviewView that occupies the whole screen except for the toolbar.
The camera preview works great, but when I capture the image, the aspect ratio is completely different.
I would like to show the captured image to the user at the same size as the PreviewView, so I don't have to crop or stretch it.
Is it possible to change the aspect ratio so that it matches the PreviewView on every device, or do I have to set it to a fixed value?
You can set the aspect ratio of the Preview and ImageCapture use cases while building them. If you set the same aspect ratio to both use cases, you should end up with a captured image that matches the camera preview output.
Example: Setting Preview and ImageCapture's aspect ratios to 4:3
Preview preview = new Preview.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_4_3)
        .build();

ImageCapture imageCapture = new ImageCapture.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_4_3)
        .build();
By doing this, you'll most likely still end up with a captured image that doesn't match what PreviewView is displaying. Assuming you don't change the default scale type of PreviewView, it'll be equal to ScaleType.FILL_CENTER, meaning that unless the camera preview output has an aspect ratio that matches that of PreviewView, PreviewView will crop parts of the preview (the top and bottom, or the right and left sides), resulting in the captured image not matching what PreviewView displays. To solve this issue, you should set PreviewView's aspect ratio to the same aspect ratio as the Preview and ImageCapture use cases.
Example: Setting PreviewView's aspect ratio to 4:3
<androidx.constraintlayout.widget.ConstraintLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <androidx.camera.view.PreviewView
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintDimensionRatio="3:4"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
I was working with CameraX and faced the same issue. I followed the excellent answer given by @Husayn Hakeem. However, my app needs the PreviewView to match the device's width and height precisely, so I made some changes. In Java (the Kotlin equivalent is analogous), I added:
Preview preview = new Preview.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_16_9)
        .build();

ImageCapture imageCapture = new ImageCapture.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_16_9)
        .build();
And my XML has:
<androidx.camera.view.PreviewView
    android:id="@+id/viewFinder"
    android:layout_width="0dp"
    android:layout_height="0dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent"
    tools:background="@color/colorLight" />
From Kotlin, I measured the device's actual height and width in pixels:
val metrics: DisplayMetrics = DisplayMetrics().also { viewFinder.display.getRealMetrics(it) }
val deviceWidthPx = metrics.widthPixels
val deviceHeightPx = metrics.heightPixels
Using these values, I used some basic coordinate geometry to draw some lines on the captured photo (image omitted):
The yellow rectangle indicates the device preview, so this is what you see on the screen. I had a requirement to perform a center crop; that's why I drew the blue square.
After that, I cropped the preview:
@JvmStatic
fun cropPreviewBitmap(previewBitmap: Bitmap, deviceWidthPx: Int, deviceHeightPx: Int): Bitmap {
    // Crop size in preview-bitmap pixels, preserving the device's aspect ratio
    var cropHeightPx = 0f
    var cropWidthPx = 0f
    if (deviceHeightPx > deviceWidthPx) {
        cropHeightPx = 1.0f * previewBitmap.height
        cropWidthPx = 1.0f * deviceWidthPx / deviceHeightPx * cropHeightPx
    } else {
        cropWidthPx = 1.0f * previewBitmap.width
        cropHeightPx = 1.0f * deviceHeightPx / deviceWidthPx * cropWidthPx
    }

    // Center square crop
    val cx = previewBitmap.width / 2
    val cy = previewBitmap.height / 2
    val minimumPx = Math.min(cropHeightPx, cropWidthPx)
    val left = cx - minimumPx / 2
    val top = cy - minimumPx / 2
    return Bitmap.createBitmap(previewBitmap, left.toInt(), top.toInt(), minimumPx.toInt(), minimumPx.toInt())
}
This worked for me.
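For completeness, a hypothetical call site, using the metrics measured above on a bitmap decoded from the saved photo (variable names are illustrative):

// capturedBitmap is assumed to be decoded from the saved photo file
val cropped = cropPreviewBitmap(capturedBitmap, deviceWidthPx, deviceHeightPx)
imageView.setImageBitmap(cropped)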
The easiest solution is to use a UseCaseGroup: add both the preview and image capture use cases to the same group and set the same view port on it.
Beware that with this solution, you will need to start the camera from onCreate only once the view is ready (otherwise your app will crash):
viewFinder.post {
    startCamera()
}
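A minimal Kotlin sketch of such a startCamera(), mirroring the Java bindPreview example earlier in this thread; viewFinder is assumed to be a PreviewView and this an Activity:

private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()

        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(viewFinder.surfaceProvider)
        }
        val imageCapture = ImageCapture.Builder().build()

        // Same view port for both use cases, so the capture matches
        // what the preview shows
        val viewPort = ViewPort.Builder(
            Rational(viewFinder.width, viewFinder.height),
            viewFinder.display.rotation
        ).build()

        val useCaseGroup = UseCaseGroup.Builder()
            .addUseCase(preview)
            .addUseCase(imageCapture)
            .setViewPort(viewPort)
            .build()

        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            this, CameraSelector.DEFAULT_BACK_CAMERA, useCaseGroup
        )
    }, ContextCompat.getMainExecutor(this))
}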
I want to do real-time image processing with OpenCV, and I want to use the Android Camera2 API. My problem is converting the preview frames to an OpenCV Mat (or, at first, to a Bitmap, which would help too).
I know you can attach an ImageReader to the camera and convert every available Image to a Bitmap. The problem is that attaching an ImageReader to the camera reduces the frame rate drastically (not due to any image conversion, just using the ImageReader without any additional code).
So my idea was to attach the surface of an Allocation to the camera, pass that Allocation to the ScriptIntrinsicYuvToRGB RenderScript, and copy the output Allocation to a Bitmap, like in the android-hdr-viewfinder example.
That's what I have tried so far:
private fun setupRenderscript() {
    rs = RenderScript.create(context)

    // Input allocation: a YUV surface the camera can render into
    val tb1 = Type.Builder(rs, Element.YUV(rs)).setX(size.width).setY(size.height)
    rsInput = Allocation.createTyped(rs, tb1.create(), Allocation.USAGE_IO_INPUT or Allocation.USAGE_SCRIPT)

    // Output allocation and bitmap in RGBA
    bmOut = Bitmap.createBitmap(size.width, size.height, Bitmap.Config.ARGB_8888)
    val tb2 = Type.Builder(rs, Element.RGBA_8888(rs)).setX(size.width).setY(size.height)
    rsOutput = Allocation.createTyped(rs, tb2.create(), Allocation.USAGE_SCRIPT)

    yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs))
    yuvToRgbIntrinsic.setInput(rsInput)
}
private fun createCameraPreviewSession() {
    setupRenderscript()

    // Normal camera preview surface
    val texture = cameraTextureView.surfaceTexture
    texture.setDefaultBufferSize(size.width, size.height)
    val surface = Surface(texture)

    captureRequestBuilder = cameraDevice?.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
    captureRequestBuilder?.addTarget(surface)
    // Attach the Allocation's surface as a second capture target
    captureRequestBuilder?.addTarget(rsInput.surface)

    cameraDevice?.createCaptureSession(Arrays.asList(surface, rsInput.surface), cameraCaptureSession, backgroundHandler)
}
private val surfaceTextureListener = object : SurfaceTextureListener {
    override fun onSurfaceTextureAvailable(texture: SurfaceTexture?, width: Int, height: Int) {
        openCamera(width, height)
    }

    override fun onSurfaceTextureSizeChanged(texture: SurfaceTexture?, width: Int, height: Int) {
        configureTransform(width, height)
    }

    override fun onSurfaceTextureUpdated(texture: SurfaceTexture?) {
        if (::rsOutput.isInitialized) {
            log("Image")
            rsInput.ioReceive()                 // latch the newest camera frame
            yuvToRgbIntrinsic.forEach(rsOutput) // convert YUV -> RGBA
            rsOutput.copyTo(bmOut)              // copy the result into the bitmap
        }
    }

    override fun onSurfaceTextureDestroyed(texture: SurfaceTexture?) = true
}
The camera preview works fine and no fatal errors are thrown, but I don't get the preview as a bitmap. I get the following log messages:
For rsInput.ioReceive():
E/NdkImageReader: acquireImageLocked: Output buffer format: 0x22, ImageReader configured format: 0x1
E/RenderScript: lockNextBuffer: acquire image from reader 0x7427c7d8a0 failed! ret: -10000, img 0x0
E/RenderScript_jni: non fatal RS error, Error receiving IO input buffer.
For yuvToRgbIntrinsic.forEach(rsOutput) I get the same message multiple times, probably once per frame:
E/RenderScript: YuvToRGB executed without data, skipping
So it seems something is not working with copying/reading the data into the input Allocation, but I don't know what I am doing wrong. It should work similarly to the HDR viewfinder example linked above.
The problem is that attaching an ImageReader to the camera reduces the framerate drastically (not any image conversion, but just using the ImageReader without any additional code)
That shouldn't be happening. You might not have properly configured the output targets of your camera session and the image reader. You also need to make sure you close the images delivered to the ImageReader as quickly as possible, so the next one can come in. If you really care about performance, you should be using YUV_420_888 as the pixel format and, depending on the size of the output target, 3-5 frames as the ImageReader's buffer. Here's some sample code, adapted from this blog post, to help you get started:
val bufferSize = 3
val imageReader = ImageReader.newInstance(
    // Pick width and height from supported camera output sizes
    width, height, ImageFormat.YUV_420_888, bufferSize)

// Retrieve surface from image reader
val imReaderSurface = imageReader.surface
val targets: MutableList<Surface> = mutableListOf(imReaderSurface)

// Create a capture session using the predefined targets
cameraDevice.createCaptureSession(targets, object : CameraCaptureSession.StateCallback() {
    override fun onConfigured(session: CameraCaptureSession) {
        // Submit capture requests here
    }

    // Omitting for brevity...
    override fun onConfigureFailed(session: CameraCaptureSession) = Unit
}, null)
Regarding the RenderScript error, it's a bit hard to tell what's going on from the details you provided. I would recommend using the RenderScript support library, if you aren't already, and testing your code on an emulator to rule out potential driver implementation issues.
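For reference, switching to the RenderScript support library is a build-configuration change plus an import swap; a sketch, assuming the standard Android Gradle plugin options:

// app/build.gradle
android {
    defaultConfig {
        renderscriptTargetApi 21
        renderscriptSupportModeEnabled true
    }
}

Then import android.support.v8.renderscript.* (or androidx.renderscript.* on AndroidX) instead of android.renderscript.*.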
Basically, I created a simple Android camera app without adding video recording; I followed this link:
android camera app tutorial link
The problem is when I capture an image from the front camera: after clicking the capture button, the image shows mirrored, just as if we were standing in front of a mirror. E.g. I have an image with a right-pointing arrow, but when I capture it, the preview shows a left-pointing arrow. Sorry for my English.
When you display the bitmap, you may use scale attributes on the ImageView:
android:scaleX="-1"  <!-- flip horizontally -->
android:scaleY="-1"  <!-- flip vertically -->
You may also want to look into this post and this tutorial
Update 1:
You can flip the image in the preview in a similar manner. You could also create a helper function to flip the bitmap and then set it on the ImageView.
A helper function to flip the image can be found here. For your code you may just need the following:
public static Bitmap flip(Bitmap src) {
    // Create a transformation matrix that mirrors horizontally
    Matrix matrix = new Matrix();
    matrix.preScale(-1.0f, 1.0f);
    // Return the transformed (mirrored) image
    return Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), matrix, true);
}
Then, in your previewCaptureImage method, instead of imgPreview.setImageBitmap(bitmap);, use the flip method and set the result on the ImageView:
imgPreview.setImageBitmap(flip(bitmap));
val outputOptions = ImageCapture
    .OutputFileOptions
    .Builder(photoFile)
    .setMetadata(ImageCapture.Metadata().also {
        it.isReversedHorizontal = CameraSelector.LENS_FACING_FRONT == lensFacing
    })
    .build()
You need to pass metadata with isReversedHorizontal set to true in the outputOptions when capturing an image through the front camera, because the front camera preview is always mirrored.
val metadata = ImageCapture.Metadata().apply {
    isReversedHorizontal = true
}

val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile)
    .setMetadata(metadata)
    .build()

imageCapture.takePicture(
    outputOptions, cameraExecutor, object : ImageCapture.OnImageSavedCallback {
        override fun onError(exc: ImageCaptureException) {
            Log.e(TAG, "Photo capture failed: ${exc.message}", exc)
        }

        override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            val savedUri = output.savedUri ?: Uri.fromFile(photoFile)
            Log.d(TAG, "Photo capture succeeded: $savedUri")
        }
    })