I made an Android app (Android Studio + Kotlin) that uses CameraX. This is my code to start the camera:
private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

    cameraProviderFuture.addListener(Runnable {
        // Used to bind the lifecycle of cameras to the lifecycle owner
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Preview
        val preview = Preview.Builder()
            .build()
            .also {
                it.setSurfaceProvider(viewFinder.surfaceProvider)
            }

        // Select back camera as a default
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()

            // Bind use cases to camera
            cameraProvider.bindToLifecycle(
                this, cameraSelector, preview
            )
        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}
Now I wanted to get a bitmap of the viewFinder so that I can determine the color of the pixel in the center:
val bitmap = viewFinder.bitmap
val pixel = bitmap?.getPixel(width / 2, height / 2 - 30)
val pixel2 = pixel ?: -1
Here bitmap is of type Bitmap? and pixel is Int?.
When I run the app, everything works fine, but sometimes it crashes with this message:
at com.example.colorblind.MainActivity$onCreate$r$1.run(MainActivity.kt:116)
Line 116 is the line with val pixel = bitmap?.getPixel(width / 2, height / 2 - 30). If I add a line before that code so it moves to line 117, everything works fine again until the same error occurs, and then I have to shift the line again. How can I fix this problem? I can't publish the app if it sometimes crashes randomly while opening.
I hope I was able to explain the problem properly.
EDIT
This is the only message I get when it crashes:
2022-02-16 20:43:53.123 7337-7337/com.example.colorblind E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.example.colorblind, PID: 7337
java.lang.IllegalArgumentException: y must be < bitmap.height()
at android.graphics.Bitmap.checkPixelAccess(Bitmap.java:1958)
at android.graphics.Bitmap.getPixel(Bitmap.java:1863)
at com.example.colorblind.MainActivity$onCreate$r$1.run(MainActivity.kt:116)
at android.os.Handler.handleCallback(Handler.java:883)
at android.os.Handler.dispatchMessage(Handler.java:100)
at android.os.Looper.loop(Looper.java:214)
at android.app.ActivityThread.main(ActivityThread.java:7356)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:492)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:930)
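The exception message points at the cause: the y coordinate passed to getPixel() is not always inside the bitmap. PreviewView.getBitmap() returns a bitmap sized to the preview content, which can be smaller than the view (and null before the first frame arrives), so indexing it with the view's width and height occasionally lands out of bounds. A minimal guarded sketch, assuming viewFinder is a PreviewView (the -30 offset is kept from the question):

val bitmap = viewFinder.bitmap
val pixel2 = bitmap?.let {
    // Index with the bitmap's own dimensions, not the view's,
    // and clamp the vertical offset so it stays in range.
    val x = it.width / 2
    val y = (it.height / 2 - 30).coerceIn(0, it.height - 1)
    it.getPixel(x, y)
} ?: -1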
Related
We are developing an Android app that uses the CameraX API for video recording. We tried to capture 60 fps video, and for this we used the camera2 interop extension in our code. Here is a snippet:
private fun startCameraatf60() {
    viewBinding.flash.isChecked = false
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener(Runnable {
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        val preview = Preview.Builder().apply {
            setTargetResolution(Size(1080, 1920))
        }
        val exti = Camera2Interop.Extender(preview)
            .setCaptureRequestOption(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_USE_SCENE_MODE)
            .setCaptureRequestOption(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, Range(60, 60))
        val s = preview.build()
            .also {
                it.setSurfaceProvider(viewBinding.viewFinder.surfaceProvider)
            }

        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

        val recorder = Recorder.Builder()
            .setQualitySelector(QualitySelector.from(Quality.FHD))
            .build()
        videoCapture = VideoCapture.withOutput(recorder)
    }
The main problem is that when 60 fps is used, the video size (in MB) captured by the app is far smaller than that of videos captured by the device's default camera app. A video captured through our app is around 7 MB, while the default camera produces around 50 MB. Can anyone please help us resolve this issue?
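One hedged guess: at 60 fps the encoder may be falling back to a low default bitrate, which would explain the small files. Recent camera-video releases expose Recorder.Builder.setTargetVideoEncodingBitRate; a sketch, assuming your CameraX version provides it (the 50 Mbps figure is purely illustrative, chosen only to mirror the stock camera's output size):

val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(Quality.FHD))
    // Illustrative target bitrate; tune for your device and quality needs.
    .setTargetVideoEncodingBitRate(50_000_000)
    .build()
videoCapture = VideoCapture.withOutput(recorder)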
I want to use the Android camera to report lighting and colour information from a sampled patch on the image preview. The CameraX preview generates ImageProxy images, and I can get the average LUV data for a patch. I would like to turn this data into absolute light levels using the exposure information and the camera white balance. The exposure data is in the Exif information, and maybe the white balance information is too.
I would like this information, however we get it. Exif seems a very likely route, but any other non-Exif solutions are welcome.
At first sight, it looks as if Exif is always read from a file. However, an ExifInterface can be created from an InputStream, and one of the streamType options is STREAM_TYPE_EXIF_DATA_ONLY. This looks promising: it seems something makes and streams just the Exif data, and a camera preview could easily do just that. Or maybe we can get Exif from the ImageProxy somehow.
I found many old threads on how to get at Exif data to find out the camera orientation. About 4 years ago these people were saying Exif is only read from a file. Is this still so?
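For reference, a minimal sketch of the stream route, assuming androidx.exifinterface is on the classpath and you can obtain an InputStream carrying only the Exif segment (that is what STREAM_TYPE_EXIF_DATA_ONLY expects; for a full JPEG stream, use the single-argument constructor instead):

import androidx.exifinterface.media.ExifInterface
import java.io.InputStream

// Reads exposure-related tags from a stream containing only Exif data.
fun readExposure(stream: InputStream): Triple<String?, String?, String?> {
    val exif = ExifInterface(stream, ExifInterface.STREAM_TYPE_EXIF_DATA_ONLY)
    return Triple(
        exif.getAttribute(ExifInterface.TAG_EXPOSURE_TIME),            // exposure time in seconds
        exif.getAttribute(ExifInterface.TAG_PHOTOGRAPHIC_SENSITIVITY), // ISO
        exif.getAttribute(ExifInterface.TAG_WHITE_BALANCE)             // only an auto/manual flag
    )
}

Note that Exif carries very little white-balance detail (TAG_WHITE_BALANCE only says auto or manual), which is one argument for the camera2 interop route described below.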
Reply to comment:
With due misgiving, I attach my dodgy code...
private class LuvAnalyzer(private val listener: LuvListener) : ImageAnalysis.Analyzer {

    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind()    // Rewind the buffer to zero
        val data = ByteArray(remaining())
        get(data)   // Copy the buffer into a byte array
        return data // Return the byte array
    }

    override fun analyze(image: ImageProxy) {
        // Sum over a centred square patch (1/5 of the shorter side) of a YUV_420_888 image
        val YUV = DoubleArray(3)
        val w = image.width
        val h = image.height
        val sq = kotlin.math.min(h, w) / 5
        // Top-left corner of the patch, rounded to even coordinates so the
        // half-resolution chroma plane lines up
        val w0 = ((w - sq) / 4) * 2
        val h0 = ((h - sq) / 4) * 2
        var ySum = 0
        var uSum = 0
        var vSum = 0

        // Luma (Y) plane: full resolution, one byte per pixel
        val y = image.planes[0].buffer.toByteArray()
        val stride = image.planes[0].rowStride
        var offset = h0 * stride + w0
        for (row in 1..sq) {
            var o = offset
            for (pix in 1..sq) { ySum += y[o++].toInt() and 0xFF }
            offset += stride
        }
        YUV[0] = ySum.toDouble() / (sq * sq).toDouble()

        // Chroma plane: half resolution, U and V interleaved. (Note this reuses
        // the Y plane's rowStride; strictly, plane 1 has its own rowStride and
        // pixelStride, which may differ on some devices.)
        val uv = image.planes[1].buffer.toByteArray()
        offset = (h0 / 2) * stride + w0
        for (row in 1..sq / 2) {
            var o = offset
            for (pix in 1..sq / 2) {
                uSum += uv[o++].toInt() and 0xFF
                vSum += uv[o++].toInt() and 0xFF
            }
            offset += stride
        }
        YUV[1] = uSum.toDouble() / (sq * sq / 4).toDouble()
        YUV[2] = vSum.toDouble() / (sq * sq / 4).toDouble()

        // val exif = Exif.createFromImageProxy(image)
        listener(YUV)
        image.close()
    }
}
private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

    cameraProviderFuture.addListener({
        // Used to bind the lifecycle of cameras to the lifecycle owner
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Preview
        val preview = Preview.Builder()
            .build()
            .also {
                it.setSurfaceProvider(binding.viewFinder.surfaceProvider)
            }

        imageCapture = ImageCapture.Builder()
            .build()

        // Image analyser
        val imageAnalyzer = ImageAnalysis.Builder()
            .build()
            .also {
                it.setAnalyzer(cameraExecutor, LuvAnalyzer { LUV ->
                    // Log.d(TAG, "Average LUV: %.1f %.1f %.1f".format(LUV[0], LUV[1], LUV[2]))
                    luvText = "Average LUV: %.1f %.1f %.1f".format(LUV[0], LUV[1], LUV[2])
                })
            }

        // Select back camera as a default
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()

            // Bind use cases to camera
            cameraProvider.bindToLifecycle(
                this, cameraSelector, preview, imageCapture, imageAnalyzer)
        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}
I am doing my image averaging from an ImageProxy. I am currently trying to get the Exif data from the same ImageProxy because I am not saving images to files; this is intended to provide a stream of colour values. And there is an intriguing Exif.createFromImageProxy(image) (now commented out) which I discovered after writing the original note, but I can't get it to do anything.
I might get the Exif information if I saved an image to a .jpg file and then read it back in again. The camera is putting out a stream of preview images, and the exposure settings may be changing all the time, so I would have to save a stream of images. If I was really stuck, I might try that. But I feel there are enough Exif bits and pieces to get the information live from the camera.
Update
The Google camerax-developers group suggests getting the exposure information using the camera2 interop Extender. I have it working well enough to see the numbers go up and down roughly as they should. This feels a lot better than the Exif route.
I am tempted to mark this as the solution, as it is the solution for me, but I shall leave it open as my original question in the title may have an answer.
val previewBuilder = Preview.Builder()
val previewExtender = Camera2Interop.Extender(previewBuilder)

// Fix white balance to daylight (i.e. turn auto white balance off)
previewExtender.setCaptureRequestOption(CaptureRequest.CONTROL_AWB_MODE,
    CaptureRequest.CONTROL_AWB_MODE_DAYLIGHT)

previewExtender.setSessionCaptureCallback(
    object : CameraCaptureSession.CaptureCallback() {
        override fun onCaptureCompleted(
            session: CameraCaptureSession,
            request: CaptureRequest,
            result: TotalCaptureResult
        ) {
            // Per-frame exposure and colour metadata arrive here
            result.get(CaptureResult.SENSOR_EXPOSURE_TIME)
            result.get(CaptureResult.SENSOR_SENSITIVITY)
            result.get(CaptureResult.COLOR_CORRECTION_GAINS)
            result.get(CaptureResult.COLOR_CORRECTION_TRANSFORM)
        }
    }
)
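For completeness, the extender options only take effect once the built use case is bound; a minimal sketch, reusing the binding pattern from the earlier code (binding.viewFinder and the activity as LifecycleOwner are assumptions carried over from it):

val preview = previewBuilder.build().also {
    it.setSurfaceProvider(binding.viewFinder.surfaceProvider)
}
cameraProvider.bindToLifecycle(this, CameraSelector.DEFAULT_BACK_CAMERA, preview)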
I am creating an application which must implement its own camera.
I use the CameraX library provided by Google.
I noticed that there is a difference between the quality of the image captured by my own application and the image captured by the camera application installed on my phone, although the two photos are captured under the same conditions (light, position...).
Especially when I zoom in, the details of the image captured by my application become more blurry.
(In my case, the phone is a Google Pixel 5.)
Please see these 2 photos to see the difference
Image by phone camera
Image by my app
And this is my code:
/**
 * Initialize CameraX, and prepare to bind the camera use cases
 */
private fun setupCamera()
{
    val cameraProviderFuture : ListenableFuture<ProcessCameraProvider> = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        cameraProvider = cameraProviderFuture.get()
        lensFacing = when
        {
            hasBackCamera() -> CameraSelector.LENS_FACING_BACK
            hasFrontCamera() -> CameraSelector.LENS_FACING_FRONT
            else -> throw IllegalStateException("Back and front camera are unavailable")
        }
        bindCameraUseCases()
        setupCameraGestures()
    }, ContextCompat.getMainExecutor(this))
}
/**
 * Declare and bind preview, capture and analysis use cases.
 */
private fun bindCameraUseCases()
{
    lifecycleScope.launch {
        val cameraProvider : ProcessCameraProvider = cameraProvider ?: throw IllegalStateException("Camera initialization failed.")

        // Try to apply extensions like HDR, NIGHT ##########################################
        val extensionsManager : ExtensionsManager = ExtensionsManager.getInstanceAsync(this@ImageCaptureActivity, cameraProvider).await()
        val defaultCameraSelector : CameraSelector = CameraSelector.Builder()
            .requireLensFacing(lensFacing)
            .build()
        val finalCameraSelector : CameraSelector = if (extensionsManager.isExtensionAvailable(defaultCameraSelector, ExtensionMode.AUTO))
        {
            extensionsManager.getExtensionEnabledCameraSelector(defaultCameraSelector, ExtensionMode.AUTO)
        }
        else
        {
            defaultCameraSelector
        }

        // Get screen metrics used to setup camera for full screen resolution
        val metrics : DisplayMetrics = resources.displayMetrics
        val screenAspectRatio : Int = aspectRatio(metrics.widthPixels, metrics.heightPixels)
        val rotation : Int = binding.cameraPreview.display.rotation

        preview = Preview.Builder()
            // We request aspect ratio but no resolution
            .setTargetAspectRatio(screenAspectRatio)
            // Set initial target rotation
            .setTargetRotation(rotation)
            .build()

        imageCapture = ImageCapture.Builder()
            // We request aspect ratio but no resolution to match preview config, but letting
            // CameraX optimize for whatever specific resolution best fits our use cases
            .setTargetAspectRatio(screenAspectRatio)
            // Set initial target rotation, we will have to call this again if rotation changes
            // during the lifecycle of this use case
            .setTargetRotation(rotation)
            .setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
            .setJpegQuality(100)
            .build()

        imageAnalyzer = ImageAnalysis.Builder()
            // We request aspect ratio but no resolution
            .setTargetAspectRatio(screenAspectRatio)
            .build()
        imageAnalyzer?.setAnalyzer(cameraExecutor, LuminosityAnalyzer {})

        // Must unbind the use-cases before rebinding them
        cameraProvider.unbindAll()
        try
        {
            // A variable number of use-cases can be passed here -
            // camera provides access to CameraControl & CameraInfo
            camera = cameraProvider.bindToLifecycle(this@ImageCaptureActivity, finalCameraSelector, preview, imageCapture, imageAnalyzer)
            // Attach the viewfinder's surface provider to preview use case
            preview?.setSurfaceProvider(binding.cameraPreview.surfaceProvider)
        }
        catch (exception : Exception)
        {
            exception.printStackTrace()
        }
    }
}
/**
 * [androidx.camera.core.ImageAnalysisConfig] requires enum value of [androidx.camera.core.AspectRatio].
 * Currently it has values of 4:3 & 16:9.
 *
 * Detects the most suitable ratio for the provided dimensions by comparing the absolute
 * difference of the preview ratio to each of the provided values.
 *
 * @param width preview width
 * @param height preview height
 * @return suitable aspect ratio
 */
private fun aspectRatio(width : Int, height : Int) : Int
{
    val previewRatio : Double = max(width, height).toDouble() / min(width, height)
    return if (abs(previewRatio - RATIO_4_3_VALUE) <= abs(previewRatio - RATIO_16_9_VALUE))
    {
        AspectRatio.RATIO_4_3
    }
    else
    {
        AspectRatio.RATIO_16_9
    }
}
fun captureImage()
{
    if (!permissionsOk()) return

    // Get a stable reference of the modifiable image capture use case
    imageCapture?.let { imageCapture ->
        // Create output file to hold the image
        val photoFile : File = storageUtils.createFile(
            baseFolder = getOutputPath(),
            fileName = System.currentTimeMillis().toString(),
            fileExtension = StorageUtils.PHOTO_EXTENSION)

        // Setup image capture metadata
        val metadata : Metadata = Metadata().also {
            // Mirror image when using the front camera
            it.isReversedHorizontal = lensFacing == CameraSelector.LENS_FACING_FRONT
            it.location = locationManager.lastKnownLocation
        }

        // Create output options object which contains file + metadata
        val outputOptions : ImageCapture.OutputFileOptions = ImageCapture.OutputFileOptions.Builder(photoFile)
            .setMetadata(metadata)
            .build()

        imagesAdapter.addImage(photoFile)

        // Setup image capture listener which is triggered after photo has been taken
        imageCapture.takePicture(outputOptions, cameraExecutor, object : ImageCapture.OnImageSavedCallback
        {
            override fun onImageSaved(output : ImageCapture.OutputFileResults)
            {
                val savedUri : Uri = output.savedUri ?: return
                StorageUtils.showInGallery(savedUri.path)
                binding.list.post {
                    imagesAdapter.addImage(savedUri.toFile())
                    binding.list.smoothScrollToPosition(imagesAdapter.itemCount)
                }
            }

            override fun onError(exception : ImageCaptureException)
            {
                exception.printStackTrace()
            }
        })

        binding.cameraPreview.postDelayed({
            binding.backgroundEffect.isVisible = true
            binding.cameraPreview.postDelayed({
                binding.backgroundEffect.isVisible = false
            }, AppUtils.VERY_FAST_ANIMATION_MILLIS)
        }, AppUtils.FAST_ANIMATION_MILLIS)
    }
}
How can I improve the quality of my images? Is there anything I should do, perhaps a special filter or algorithm?
I need your help, please.
If you took the photo on a Pixel, it was probably with the default camera app (GCam), which is packed with quality improvements backed by AI processing; it is a tough task to compete with the best in quality. Try taking a photo with a third-party app like OpenCamera and compare that picture with the one from your app.
You can use the CameraX Extensions feature to enable HDR and low-light modes;
this improves the image quality significantly.
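A sketch of that route, using the same ExtensionsManager calls as the question's code but requesting HDR explicitly (context, lifecycleOwner, preview and imageCapture are assumed to exist as in the question; availability must still be checked per device):

val extensionsManager = ExtensionsManager.getInstanceAsync(context, cameraProvider).await()
val baseSelector = CameraSelector.DEFAULT_BACK_CAMERA
val hdrSelector =
    if (extensionsManager.isExtensionAvailable(baseSelector, ExtensionMode.HDR)) {
        extensionsManager.getExtensionEnabledCameraSelector(baseSelector, ExtensionMode.HDR)
    } else {
        baseSelector // HDR unsupported on this device; fall back to the plain selector
    }
cameraProvider.bindToLifecycle(lifecycleOwner, hdrSelector, preview, imageCapture)

The same pattern applies with ExtensionMode.NIGHT for low-light scenes.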
I'm using the camera2 API in my camera app, designed specifically for the Google Pixel 3 XL. This device has two front-facing cameras (wide-angle and normal). Thanks to the multi-camera feature, I can access both physical camera devices simultaneously, and my app has a feature to toggle between those two cameras. Up until my recent upgrade to Android 10, I could accurately see two distinct results, but now my wide-angle capture frame has pretty much the same FOV (field of view) as the normal camera. So with the same code and the same APK: on Android 9 the wide-angle capture result is wide, as expected, but after the Android 10 upgrade the wide and normal cameras show practically identical FOVs.
Here is a code snippet to demonstrate how I initialize both cameras and capture preview:
MainActivity.kt
private val surfaceReadyCallback = object : SurfaceHolder.Callback {
    override fun surfaceChanged(p0: SurfaceHolder?, p1: Int, p2: Int, p3: Int) { }
    override fun surfaceDestroyed(p0: SurfaceHolder?) { }

    override fun surfaceCreated(p0: SurfaceHolder?) {
        // Get the two output targets from the activity / fragment
        val surface1 = surfaceView1.holder.surface
        val surface2 = surfaceView2.holder.surface

        val dualCamera = findShortLongCameraPair(cameraManager)!!
        val outputTargets = DualCameraOutputs(
            null, mutableListOf(surface1), mutableListOf(surface2))

        // Open the logical camera, configure the outputs and create a session
        createDualCameraSession(cameraManager, dualCamera, targets = outputTargets) { session ->
            val requestTemplate = CameraDevice.TEMPLATE_PREVIEW
            val captureRequest = session.device.createCaptureRequest(requestTemplate).apply {
                arrayOf(surface1, surface2).forEach { addTarget(it) }
            }.build()
            session.setRepeatingRequest(captureRequest, null, null)
        }
    }
}
fun openDualCamera(cameraManager: CameraManager,
                   dualCamera: DualCamera,
                   executor: Executor = SERIAL_EXECUTOR,
                   callback: (CameraDevice) -> Unit) {
    cameraManager.openCamera(
        dualCamera.logicalId, executor, object : CameraDevice.StateCallback() {
            override fun onOpened(device: CameraDevice) { callback(device) }
            override fun onError(device: CameraDevice, error: Int) = onDisconnected(device)
            override fun onDisconnected(device: CameraDevice) = device.close()
        })
}

fun createDualCameraSession(cameraManager: CameraManager,
                            dualCamera: DualCamera,
                            targets: DualCameraOutputs,
                            executor: Executor = SERIAL_EXECUTOR,
                            callback: (CameraCaptureSession) -> Unit) {
    // Create 3 sets of output configurations: one for the logical camera, and
    // one for each of the physical cameras.
    val outputConfigsLogical = targets.first?.map { OutputConfiguration(it) }
    val outputConfigsPhysical1 = targets.second?.map {
        OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId1) } }
    val outputConfigsPhysical2 = targets.third?.map {
        OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId2) } }

    val outputConfigsAll = arrayOf(
        outputConfigsLogical, outputConfigsPhysical1, outputConfigsPhysical2)
        .filterNotNull().flatten()

    val sessionConfiguration = SessionConfiguration(SessionConfiguration.SESSION_REGULAR,
        outputConfigsAll, executor, object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) = callback(session)
            override fun onConfigureFailed(session: CameraCaptureSession) = session.device.close()
        })

    openDualCamera(cameraManager, dualCamera, executor = executor) {
        it.createCaptureSession(sessionConfiguration)
    }
}
DualCamera.kt Helper Class
data class DualCamera(val logicalId: String, val physicalId1: String, val physicalId2: String)

fun findDualCameras(manager: CameraManager, facing: Int? = null): Array<DualCamera> {
    val dualCameras = ArrayList<DualCamera>()

    manager.cameraIdList.map {
        Pair(manager.getCameraCharacteristics(it), it)
    }.filter {
        facing == null || it.first.get(CameraCharacteristics.LENS_FACING) == facing
    }.filter {
        it.first.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)!!.contains(
            CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA)
    }.forEach {
        val physicalCameras = it.first.physicalCameraIds.toTypedArray()
        for (idx1 in 0 until physicalCameras.size) {
            for (idx2 in (idx1 + 1) until physicalCameras.size) {
                dualCameras.add(DualCamera(
                    it.second, physicalCameras[idx1], physicalCameras[idx2]))
            }
        }
    }

    return dualCameras.toTypedArray()
}

fun findShortLongCameraPair(manager: CameraManager, facing: Int? = null): DualCamera? {
    return findDualCameras(manager, facing).map {
        val characteristics1 = manager.getCameraCharacteristics(it.physicalId1)
        val characteristics2 = manager.getCameraCharacteristics(it.physicalId2)

        val focalLengths1 = characteristics1.get(
            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F)
        val focalLengths2 = characteristics2.get(
            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F)

        val focalLengthsDiff1 = focalLengths2.max()!! - focalLengths1.min()!!
        val focalLengthsDiff2 = focalLengths1.max()!! - focalLengths2.min()!!
        if (focalLengthsDiff1 < focalLengthsDiff2) {
            Pair(DualCamera(it.logicalId, it.physicalId1, it.physicalId2), focalLengthsDiff1)
        } else {
            Pair(DualCamera(it.logicalId, it.physicalId2, it.physicalId1), focalLengthsDiff2)
        }

        // Return only the pair with the largest difference, or null if no pairs are found
        // (sortedBy is ascending, so the last element holds the largest difference)
    }.sortedBy { it.second }.lastOrNull()?.first
}
And you can see the result in the attached screenshot: the top-left one has a much wider FOV than the same camera running on Android 10.
Is this a known regression with Android 10? Has anyone noticed similar behavior?
My understanding:
I came across the same problem on my Pixel 3. It seems that the wide-angle camera's frame is cropped in the HAL layer before combination. Actually the FOV is not totally the same, as there is a little disparity between the left and right cameras. However, the default zoom level of the wide camera seems to change according to the focal length.
I could not find any official documentation about it, but the Android 10 release notes claim improved fusing of physical cameras:
https://developer.android.com/about/versions/10/features#multi-camera
Solution:
If you wish to access the raw data from the wide-angle front camera, you can create two camera sessions, one for each physical camera, instead of a single session for the logical camera.
Updated:
You can use setPhysicalCameraKey to reset the zoom level:
https://developer.android.com/reference/android/hardware/camera2/CaptureRequest.Builder#setPhysicalCameraKey(android.hardware.camera2.CaptureRequest.Key%3CT%3E,%20T,%20java.lang.String)
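A sketch of what that could look like (API 28+; dualCamera, session and surface2 follow the naming in the question's code, and treating SCALER_CROP_REGION as the per-physical-camera zoom key is an assumption, not a verified fix):

val wideChars = cameraManager.getCameraCharacteristics(dualCamera.physicalId2)
val activeArray = wideChars.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE)!!
val request = session.device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW).apply {
    addTarget(surface2)
    // Ask the wide physical camera for its full active array, i.e. no crop/zoom
    setPhysicalCameraKey(CaptureRequest.SCALER_CROP_REGION, activeArray, dualCamera.physicalId2)
}.build()
session.setRepeatingRequest(request, null, null)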
The regression you observed is a behavior change on the Pixel 3/Pixel 3 XL between Android 9 and Android 10. It is not an Android API change per se, but something the API allows devices to change behavior on; other devices may be different.
The camera API allows the physical camera streams to be cropped to match the field-of-view of the logical camera stream.
On Pixel 3 (Android 11), probing the cameras using CameraManager.getCameraIdList() returns 4 IDs: 0, 1, 2, 3
0: Back Camera : Physical Camera Stream
1: Front camera : Logical camera with two physical camera ID's
2: Front camera normal: Physical Camera Stream
3: Front camera widelens: Physical Camera Stream
As user DannyLin suggested, opening the two physical camera streams (2, 3) seems to do the job. Note that other combinations such as (0, 1), (1, 2), etc. do not work (only the first call to openCamera() goes through and the second call fails). Here's a snapshot of the physical camera streams for the two front cameras.
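A sketch of opening the two physical IDs directly, which works here because they appear in getCameraIdList() on this device (API 28+ openCamera overload; the hard-coded "2" and "3" are the Pixel 3 IDs listed above, and executor is assumed to exist):

listOf("2", "3").forEach { id ->
    cameraManager.openCamera(id, executor, object : CameraDevice.StateCallback() {
        override fun onOpened(device: CameraDevice) {
            // Create a separate capture session per physical device here
        }
        override fun onDisconnected(device: CameraDevice) = device.close()
        override fun onError(device: CameraDevice, error: Int) = device.close()
    })
}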
I am using the new CameraX API (alpha 5).
On devices from LGE, Samsung, Motorola, and OPPO, a crash occurs with the message:
Fail to find supported surface info - CameraId:null
These crashes are rare, but unacceptable.
The error occurs in this method:
private fun initCamera(cameraMode: CameraMode) {
    val lensFacing = if (cameraMode == CameraMode.DEFAULT) CameraX.LensFacing.BACK else CameraX.LensFacing.FRONT
    val metrics = DisplayMetrics().also { textureView?.display?.getRealMetrics(it) }
    val resolution = Size(metrics.widthPixels, metrics.heightPixels)

    val previewConfig = PreviewConfig.Builder()
        .setTargetResolution(resolution)
        .setLensFacing(lensFacing)
        .build()

    preview?.let { CameraX.unbind(preview) }

    preview = Preview(previewConfig)
    preview?.setOnPreviewOutputUpdateListener { previewOutput ->
        val parent = textureView?.parent as ViewGroup
        parent.removeView(textureView)
        parent.addView(textureView, 0)
        textureView?.surfaceTexture = previewOutput.surfaceTexture
    }

    CameraX.bindToLifecycle(this, preview)
}
on this line:
preview = Preview(previewConfig)
It looks like the preview cannot be created on these devices because a camera cannot be obtained.
Does anyone know a possible workaround for this problem?
P.S.:
I checked: the device models on which the crash occurs all have both cameras (front and back).
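Not an answer, but one defensive sketch while this is unresolved: since the failure happens when the use case is created, it can be caught and retried with a default PreviewConfig, letting CameraX pick a supported resolution instead of forcing the screen size (initCameraSafely is a hypothetical wrapper, and whether the fallback actually succeeds on the affected devices is untested):

private fun initCameraSafely(cameraMode: CameraMode) {
    try {
        initCamera(cameraMode)
    } catch (e: Exception) {
        // "Fail to find supported surface info" is thrown while resolving the
        // requested resolution on some devices; log e to confirm the exact
        // type, then retry without forcing a target resolution.
        val lensFacing = if (cameraMode == CameraMode.DEFAULT) CameraX.LensFacing.BACK else CameraX.LensFacing.FRONT
        val fallbackConfig = PreviewConfig.Builder()
            .setLensFacing(lensFacing)
            .build()
        preview = Preview(fallbackConfig)
        // Reattach the preview output listener here, as in initCamera()
        preview?.let { CameraX.bindToLifecycle(this, it) }
    }
}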