I'm trying to use the new CameraX API and I'm running into this problem: when I capture an image, it is stored with the wrong rotation. For example, I capture in portrait orientation, but the resulting image is in landscape orientation.
Here is my code:
private fun startCamera() {
    val previewConfig = PreviewConfig.Builder().apply {
        setTargetResolution(Size(textureView.width, textureView.height))
        setTargetRotation(textureView.display.rotation)
    }.build()

    val imageCaptureConfig = ImageCaptureConfig.Builder().apply {
        setCaptureMode(CaptureMode.MIN_LATENCY)
        setTargetAspectRatio(RATIO_4_3)
        setTargetRotation(textureView.display.rotation)
    }.build()

    imageCapture = ImageCapture(imageCaptureConfig)
    val preview = Preview(previewConfig)

    preview.setOnPreviewOutputUpdateListener { previewOutput ->
        removeView(textureView)
        addViewMatchParent(textureView, position = 0)
        textureView.surfaceTexture = previewOutput.surfaceTexture
        textureView.updateTransformForCameraFinderView()
    }

    (context as? LifecycleOwner)?.let { lifecycleOwner ->
        CameraX.bindToLifecycle(lifecycleOwner, preview, imageCapture)
    }
}
private fun capturePhoto() {
    tempImageFile = generateTmpFile(false)
    val executor = Executor { it.run() }

    imageCapture.takePicture(tempImageFile!!, executor, object : OnImageSavedListener {
        override fun onError(error: ImageCaptureError, message: String, exc: Throwable?) {
            exc?.printStackTrace()
        }

        override fun onImageSaved(photoFile: File) {
            post {
                // load image into ImageView by Glide
                showCapturedPhotoPreview(photoFile)
            }
        }
    })
}
Please advise me how I can fix it.
P.S. I already tried to find a solution, so please don't just copy-paste the first answer that looks vaguely similar)
Update: I tried to build my CameraView like in this sample, but while it works in their case, it doesn't in mine)
Try it:
val imageCaptureConfig = ImageCaptureConfig.Builder().apply {
    setCaptureMode(CaptureMode.MIN_LATENCY)
    setTargetAspectRatio(RATIO_4_3)
    // play with this line!
    setTargetRotation(Surface.ROTATION_0)
    setTargetRotation(textureView.display.rotation)
}.build()
To highlight it again, this is the line that fixed my problem:
setTargetRotation(Surface.ROTATION_0)
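Not part of the original answer, but if a fixed Surface.ROTATION_0 is too rigid (for example, the device may be rotated while capturing), a common pattern is to keep the target rotation in sync with an OrientationEventListener. A minimal sketch, assuming imageCapture is the use case built above and a CameraX version that exposes ImageCapture.setTargetRotation:

// Sketch only: update the capture rotation as the device rotates.
val orientationEventListener = object : OrientationEventListener(context) {
    override fun onOrientationChanged(orientation: Int) {
        if (orientation == ORIENTATION_UNKNOWN) return
        val rotation = when (orientation) {
            in 45..134 -> Surface.ROTATION_270
            in 135..224 -> Surface.ROTATION_180
            in 225..314 -> Surface.ROTATION_90
            else -> Surface.ROTATION_0
        }
        imageCapture.setTargetRotation(rotation)
    }
}
orientationEventListener.enable()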
Related
I am creating a simple QR scanner using CameraX and Google ML Kit. I open an intent after the string value is extracted from the QR code. The problem I'm facing is that the intent opens multiple times. How do I resolve this?
The following is the setup for image analysis. The DisplayQR intent should open after receiving the string value inside the QR code.
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(640, 480))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

imageAnalysis.setAnalyzer(
    ContextCompat.getMainExecutor(this),
    CodeAnalyzer(this, object : CallBackInterface {
        override fun onSuccess(qrString: String?) {
            imageAnalysis.clearAnalyzer()
            Toast.makeText(this@ActivityQR, qrString, Toast.LENGTH_SHORT).show()
            Log.d("rty", qrString.toString())
            // the following intent is opening multiple times
            val visitordetails =
                Intent(this@ActivityQR, DisplayQR::class.java)
            visitordetails.putExtra("VISITOR_QR", qrString)
            startActivity(visitordetails)
        }

        override fun onFailed() {
        }
    })
)

cameraProvider.bindToLifecycle(this, selectedCamera, imageAnalysis, cameraPreview)
Code for analyzing the image
class CodeAnalyzer(context: Context, callBackInterface: CallBackInterface) : ImageAnalysis.Analyzer {
    private val context: Context = context
    private val callback: CallBackInterface = callBackInterface

    @SuppressLint("UnsafeOptInUsageError")
    override fun analyze(image: ImageProxy) {
        val scanner: BarcodeScanner = BarcodeScanning.getClient()
        val scannedImage = image.image
        if (scannedImage != null) {
            val scannedInputImage = InputImage.fromMediaImage(
                scannedImage,
                image.imageInfo.rotationDegrees
            )
            scanner.process(scannedInputImage).addOnSuccessListener { barCodes ->
                for (qrCode in barCodes) {
                    when (qrCode.valueType) {
                        Barcode.TYPE_TEXT -> {
                            val qrString: String? = qrCode.rawValue
                            if (qrString != null) {
                                callback.onSuccess(qrString) // Here I am calling the callback
                            }
                        }
                    }
                }
            }.addOnFailureListener {
            }.addOnCompleteListener {
                image.close()
            }
        }
    }
}
Edit: Corrected activity name
So I'm working on an app that requires a QR scanner as a main feature. Previously I was using camerax-alpha06 with Firebase ML Vision 24.0.3, and they worked fine for months with no customer complaints about scanning issues.
Then, about two weeks ago, I had to switch from Firebase ML Vision to ML Kit barcode scanning (related to the Crashlytics migration, off topic here), and now some of the users who could scan with the previous version no longer can. Sample devices are the Samsung Tab A7 (Android 5.1.1) and the Vivo 1919 (Android 10).
This is my build.gradle section that involves this feature
def camerax_version = "1.0.0-beta11"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-camera2:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
implementation "androidx.camera:camera-view:1.0.0-alpha18"
implementation "androidx.camera:camera-extensions:1.0.0-alpha18"
implementation 'com.google.android.gms:play-services-mlkit-barcode-scanning:16.1.2'
This is my camera handler file
class ScanQRCameraViewHandler(
    private val fragment: ScanQRDialogFragment,
    private val previewView: PreviewView
) {
    private val displayLayout get() = previewView

    companion object {
        private const val RATIO_4_3_VALUE = 4.0 / 3.0
        private const val RATIO_16_9_VALUE = 16.0 / 9.0
    }

    private val analyzer = GMSMLKitAnalyzer(onFoundQR = { extractedString ->
        fragment.verifyExtractedString(extractedString)
    }, onNotFoundQR = {
        resetStateToAllowNewImageStream()
    })

    private var cameraProviderFuture: ListenableFuture<ProcessCameraProvider>? = null
    private var camera: Camera? = null
    private var isAnalyzing = false

    internal fun resetStateToAllowNewImageStream() {
        isAnalyzing = false
    }

    internal fun setTorceEnable(isEnabled: Boolean) {
        camera?.cameraControl?.enableTorch(isEnabled)
    }

    internal fun initCameraProviderIfHasNot() {
        if (cameraProviderFuture == null) {
            fragment.context?.let {
                cameraProviderFuture = ProcessCameraProvider.getInstance(it)
                val executor = ContextCompat.getMainExecutor(it)
                cameraProviderFuture?.addListener({
                    bindPreview(cameraProviderFuture?.get(), executor)
                }, executor)
            }
        }
    }

    private fun bindPreview(cameraProvider: ProcessCameraProvider?, executor: Executor) {
        val metrics = DisplayMetrics().also { displayLayout.display.getRealMetrics(it) }
        val screenAspectRatio = aspectRatio(metrics.widthPixels, metrics.heightPixels)

        val preview = initPreview(screenAspectRatio)
        val imageAnalyzer = createImageAnalyzer()
        val imageAnalysis = createImageAnalysis(executor, imageAnalyzer, screenAspectRatio)
        val cameraSelector = createCameraSelector()

        cameraProvider?.unbindAll()
        camera = cameraProvider?.bindToLifecycle(
            fragment as LifecycleOwner,
            cameraSelector, imageAnalysis, preview
        )
    }

    private fun createCameraSelector(): CameraSelector {
        return CameraSelector.Builder()
            .requireLensFacing(CameraSelector.LENS_FACING_BACK)
            .build()
    }

    private fun createImageAnalysis(
        executor: Executor, imageAnalyzer: ImageAnalysis.Analyzer, screenAspectRatio: Int
    ): ImageAnalysis {
        val rotation = displayLayout.rotation
        val imageAnalysis = ImageAnalysis.Builder()
            // .setTargetRotation(rotation.toInt())
            // .setTargetAspectRatio(screenAspectRatio)
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .build()
        imageAnalysis.setAnalyzer(executor, imageAnalyzer)
        return imageAnalysis
    }

    private fun createImageAnalyzer(): ImageAnalysis.Analyzer {
        return ImageAnalysis.Analyzer {
            isAnalyzing = true
            analyzer.analyze(it)
        }
    }

    private fun initPreview(screenAspectRatio: Int): Preview {
        val preview: Preview = Preview.Builder()
            // .setTargetResolution(Size(840, 840))
            // .setTargetAspectRatio(screenAspectRatio)
            // .setTargetRotation(displayLayout.rotation.toInt())
            .build()
        preview.setSurfaceProvider(previewView.surfaceProvider)
        return preview
    }

    fun unbindAll() {
        cameraProviderFuture?.get()?.unbindAll()
    }

    private fun aspectRatio(width: Int, height: Int): Int {
        val previewRatio = width.coerceAtLeast(height).toDouble() / width.coerceAtMost(height)
        if (kotlin.math.abs(previewRatio - RATIO_4_3_VALUE) <= kotlin.math.abs(previewRatio - RATIO_16_9_VALUE)) {
            return AspectRatio.RATIO_4_3
        }
        return AspectRatio.RATIO_16_9
    }
}
And my analyzer
internal class GMSMLKitAnalyzer(
    private val onFoundQR: (String) -> Unit,
    private val onNotFoundQR: () -> Unit
) : ImageAnalysis.Analyzer {

    private val options = BarcodeScannerOptions.Builder()
        .setBarcodeFormats(Barcode.FORMAT_QR_CODE).build()

    @SuppressLint("UnsafeExperimentalUsageError")
    override fun analyze(imageProxy: ImageProxy) {
        imageProxy.image?.let { mediaImage ->
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            val scanner = BarcodeScanning.getClient(options)
            CoroutineScope(Dispatchers.Main).launch {
                val result = scanner.process(image).await()
                result.result?.let { barcodes ->
                    barcodes.find { it.rawValue != null }?.rawValue?.let {
                        onFoundQR(it)
                    } ?: run { onNotFoundQR() }
                }
                imageProxy.close()
            }
        } ?: imageProxy.close()
    }
}
The commented-out lines are what I tried adding; they didn't help, and some even caused issues on other (previously working) devices.
I am unsure whether I have misconfigured anything, so I would appreciate any suggestions that would help me find the solution.
Thank you
P.S. This is my first post, so if I've done anything wrong or missed something, please advise.
BarcodeScanning does not work on some devices running camera-camera2 version 1.0.0-beta08 or later. You can use an earlier version of camera-camera2 to work around this issue; see the sketch below.
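A Gradle pin along these lines should do it (the specific version below is my assumption, chosen as the last release before beta08):

implementation "androidx.camera:camera-camera2:1.0.0-beta07"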
See: https://developers.google.com/ml-kit/known-issues
We are working on a fix internally in ML Kit for the next SDK release.
Update your ML Kit barcode scanning dependency to version 16.1.1 or above.
This issue was fixed in 'com.google.mlkit:barcode-scanning:16.1.1'.
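In the build.gradle from the question, that would mean swapping the unbundled artifact for the bundled one, roughly like this (a sketch; the version comes from the answer above):

// replace this line from the question:
// implementation 'com.google.android.gms:play-services-mlkit-barcode-scanning:16.1.2'
// with the bundled scanner artifact:
implementation 'com.google.mlkit:barcode-scanning:16.1.1'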
I am trying to record video with the latest version of CameraX (1.0.0-beta11), but I am facing an issue when recording.
Code snippet to start the camera and record video:
#SuppressLint("RestrictedApi")
private fun startCameraX() {
val cameraProviderFuture = ProcessCameraProvider.getInstance(activity!!)
cameraProviderFuture.addListener(Runnable {
// Used to bind the lifecycle of cameras to the lifecycle owner
val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
// Preview
val preview = Preview.Builder()
.build()
.also {
it.setSurfaceProvider(previewView.surfaceProvider)
}
val videoCapture=VideoCapture.Builder().setVideoFrameRate(15).build()
val videoFile = File(
getOutputDirectory(),
SimpleDateFormat(FILENAME_FORMAT, Locale.US
).format(System.currentTimeMillis()) + ".mp4")
val outputOptions = VideoCapture.OutputFileOptions.Builder(videoFile).build()
videoCapture.startRecording(outputOptions, ContextCompat.getMainExecutor(activity), object: VideoCapture.OnVideoSavedCallback{
override fun onVideoSaved(outputFileResults: VideoCapture.OutputFileResults) {
Log.e("data","onVideoSaved")
}
override fun onError(videoCaptureError: Int, message: String, cause: Throwable?) {
Log.e("data", "onError->$message")
}
})
// Select back camera as a default
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
try {
// Unbind use cases before rebinding
cameraProvider.unbindAll()
// Bind use cases to camera
cameraProvider.bindToLifecycle(
(this as LifecycleOwner), cameraSelector, preview, videoCapture)
} catch(exc: Exception) {
Log.e("TAG", "Use case binding failed", exc)
}
}, ContextCompat.getMainExecutor(activity))
}
Output directory
private fun getOutputDirectory(): File {
    val mediaDir = activity?.externalMediaDirs?.firstOrNull()?.let {
        File(it, resources.getString(R.string.app_name)).apply { mkdirs() }
    }
    return if (mediaDir != null && mediaDir.exists()) mediaDir else activity!!.filesDir
}
Error - Not bound to a Camera [androidx.camera.core.VideoCapture#4bd02a4]
Please help!!
Thanks in advance.
You must start the camera (bind the use cases) before calling the startRecording function. Basically, you should move the following code into its own function:
val videoFile = File(
    getOutputDirectory(),
    SimpleDateFormat(FILENAME_FORMAT, Locale.US)
        .format(System.currentTimeMillis()) + ".mp4"
)
val outputOptions = VideoCapture.OutputFileOptions.Builder(videoFile).build()

videoCapture.startRecording(outputOptions, ContextCompat.getMainExecutor(activity), object : VideoCapture.OnVideoSavedCallback {
    override fun onVideoSaved(outputFileResults: VideoCapture.OutputFileResults) {
        Log.e("data", "onVideoSaved")
    }

    override fun onError(videoCaptureError: Int, message: String, cause: Throwable?) {
        Log.e("data", "onError->$message")
    }
})
Call that function when the user clicks the record button; then, when the user clicks the stop button, you can stop recording with
videoCapture.stopRecording()
Once stopRecording is called, if everything is OK, the onVideoSaved callback will be invoked. A rough sketch of how the pieces fit together follows.
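This sketch only illustrates the ordering; the button names and the recordVideo() helper are placeholders, and recordVideo() is assumed to contain the startRecording block shown above:

// Bind the use cases first, then start/stop recording from the UI.
@SuppressLint("RestrictedApi")
private fun setUpRecordingButtons() {
    startCameraX() // binds preview + videoCapture

    recordButton.setOnClickListener {
        // videoCapture is bound to a camera by now, so startRecording is safe
        recordVideo()
    }

    stopButton.setOnClickListener {
        videoCapture.stopRecording() // triggers onVideoSaved / onError
    }
}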
I was working with CameraX and had a hard time converting the captured ImageProxy to a Bitmap. After searching and experimenting, I formulated a solution. Later I found that it was not optimal, so I changed the design, which forced me to drop hours of work.
Since I (or someone else) might need it in the future, I decided to post it here as a question and answer it myself for reference and scrutiny. Feel free to add a better answer if you have one.
The relevant code is:
class ImagePickerActivity : AppCompatActivity() {

    private var width = 325
    private var height = 205

    @RequiresApi(Build.VERSION_CODES.LOLLIPOP)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_image_picker)
        view_finder.post { startCamera() }
    }

    @RequiresApi(Build.VERSION_CODES.LOLLIPOP)
    private fun startCamera() {
        // Create configuration object for the viewfinder use case
        val previewConfig = PreviewConfig.Builder().apply {
            setTargetAspectRatio(Rational(1, 1))
            //setTargetResolution(Size(width, height))
            setLensFacing(CameraX.LensFacing.BACK)
            setTargetAspectRatio(Rational(width, height))
        }.build()

        // Build the viewfinder use case (needed below for binding)
        val preview = Preview(previewConfig)

        // Create configuration object for the image capture use case
        val imageCaptureConfig = ImageCaptureConfig.Builder()
            .apply {
                setTargetAspectRatio(Rational(1, 1))
                // We don't set a resolution for image capture; instead, we
                // select a capture mode which will infer the appropriate
                // resolution based on aspect ratio and requested mode
                setCaptureMode(ImageCapture.CaptureMode.MIN_LATENCY)
            }.build()

        // Build the image capture use case and attach button click listener
        val imageCapture = ImageCapture(imageCaptureConfig)
        capture_button.setOnClickListener {
            imageCapture.takePicture(object : ImageCapture.OnImageCapturedListener() {
                override fun onCaptureSuccess(image: ImageProxy?, rotationDegrees: Int) {
                    // How do I get the bitmap here?
                    //imageView.setImageBitmap(someBitmap)
                }

                override fun onError(useCaseError: ImageCapture.UseCaseError?, message: String?, cause: Throwable?) {
                    val msg = "Photo capture failed: $message"
                    Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
                    Log.e(localClassName, msg)
                    cause?.printStackTrace()
                }
            })
        }

        CameraX.bindToLifecycle(this, preview, imageCapture)
    }
}
So the solution was to add an extension function to Image; here is the code:
class ImagePickerActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_image_picker)
    }

    private fun startCamera() {
        val imageCapture = ImageCapture(imageCaptureConfig)
        capture_button.setOnClickListener {
            imageCapture.takePicture(object : ImageCapture.OnImageCapturedListener() {
                override fun onCaptureSuccess(image: ImageProxy?, rotationDegrees: Int) {
                    imageView.setImageBitmap(image?.image?.toBitmap())
                }
                //.....
            })
        }
    }
}

fun Image.toBitmap(): Bitmap {
    val buffer = planes[0].buffer
    buffer.rewind()
    val bytes = ByteArray(buffer.capacity())
    buffer.get(bytes)
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
A slightly modified version, using the inline use function on the (closeable) ImageProxy:
imageCapture.takePicture(
    object : ImageCapture.OnImageCapturedListener() {
        override fun onCaptureSuccess(image: ImageProxy?, rotationDegrees: Int) {
            image.use { image ->
                val bitmap: Bitmap? = image?.let {
                    imageProxyToBitmap(it)
                } ?: return
            }
        }
    })

private fun imageProxyToBitmap(image: ImageProxy): Bitmap {
    val buffer: ByteBuffer = image.planes[0].buffer
    val bytes = ByteArray(buffer.remaining())
    buffer.get(bytes)
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
Here is the safest approach, using MLKit's own implementation.
Tested and working on MLKit version 1.0.1
import com.google.mlkit.vision.common.internal.ImageConvertUtils;
Image mediaImage = imageProxy.getImage();
InputImage image = InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
Bitmap bitmap = ImageConvertUtils.getInstance().getUpRightBitmap(image);
Java Implementation of Backbelt's Answer.
private Bitmap imageProxyToBitmap(ImageProxy image) {
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
}
There is a second version of the takePicture method at the moment (as of CameraX 1.0.0-beta03). It provides several ways to persist the image (an OutputStream, or maybe a File, can be useful in your case).
If you still want to convert an ImageProxy to a Bitmap, here is my answer to a similar question, which gives a correct implementation of this conversion.
Please kindly take a look at this answer. All you need to do to apply it to your question is to get the Image out of your ImageProxy:
Image img = imageProxy.getImage();
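For reference, the file-based overload mentioned above looks roughly like this in the beta API (the file name, log tag, and executor choice are just for illustration):

// Sketch: persist the capture straight to a File with the newer takePicture overload.
val photoFile = File(externalMediaDirs.first(), "capture_${System.currentTimeMillis()}.jpg")
val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()

imageCapture.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(this),
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            Log.d("Capture", "Saved to ${photoFile.absolutePath}")
        }

        override fun onError(exception: ImageCaptureException) {
            Log.e("Capture", "Capture failed", exception)
        }
    }
)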
I've followed the steps here to get CameraX set up, and now I am trying to get a front-facing camera button working.
Here is my setup code:
private lateinit var preview: Preview

private fun startCamera() {
    // Create configuration object for the viewfinder use case
    val previewConfig = PreviewConfig.Builder().apply {
        setLensFacing(CameraX.LensFacing.BACK)
    }.build()

    // Build the viewfinder use case
    preview = Preview(previewConfig)

    // Every time the viewfinder is updated, recompute layout
    preview.setOnPreviewOutputUpdateListener {
        // To update the SurfaceTexture, we have to remove it and re-add it
        val parent = viewFinder.parent as ViewGroup
        parent.removeView(viewFinder)
        parent.addView(viewFinder, 0)

        viewFinder.surfaceTexture = it.surfaceTexture
        updateTransform()
    }

    // Bind use cases to lifecycle
    CameraX.bindToLifecycle(this, preview)
}
When a user clicks the "switch" button I re-configure the preview to use the front camera, then reinitialize the Preview.
private fun initSwitchButton(view: View) {
    switchButton = view.findViewById(R.id.switch_button)
    switchButton.setOnClickListener {
        val previewConfig = PreviewConfig.Builder().apply { setLensFacing(CameraX.LensFacing.FRONT) }.build()
        preview = Preview(previewConfig)
    }
}
However, this doesn't switch to the front camera. What am I missing?
Since 2021, an update to CameraX has rendered CameraX.LensFacing unusable. Use CameraSelector instead.
private CameraSelector lensFacing = CameraSelector.DEFAULT_FRONT_CAMERA;

private void flipCamera() {
    if (lensFacing == CameraSelector.DEFAULT_FRONT_CAMERA) lensFacing = CameraSelector.DEFAULT_BACK_CAMERA;
    else if (lensFacing == CameraSelector.DEFAULT_BACK_CAMERA) lensFacing = CameraSelector.DEFAULT_FRONT_CAMERA;
    startCamera();
}

private void startCamera() {
    ListenableFuture<ProcessCameraProvider> cameraFuture = ProcessCameraProvider.getInstance(requireContext());

    cameraFuture.addListener(() -> {
        imageCapture = new ImageCapture.Builder()
                .setTargetRotation(cameraPreview.getDisplay().getRotation())
                .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
                .build();

        videoCapture = new VideoCapture.Builder().build();

        try {
            ProcessCameraProvider processCameraProvider = cameraFuture.get();
            Preview preview = new Preview.Builder().build();
            preview.setSurfaceProvider(cameraPreview.getSurfaceProvider());

            processCameraProvider.unbindAll();
            // lensFacing is used here
            processCameraProvider.bindToLifecycle(getViewLifecycleOwner(), lensFacing, imageCapture, videoCapture, preview);
        } catch (ExecutionException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }, ContextCompat.getMainExecutor(requireContext()));
}
It looks like the recommended way to achieve this is to store the LensFacing position as an instance variable and then call bindToLifecycle() to switch the camera.
Here is a code snippet that worked for me:
private var lensFacing = CameraX.LensFacing.BACK
private var imageCapture: ImageCapture? = null

@SuppressLint("RestrictedApi")
private fun startCamera() {
    bindCameraUseCases()

    // Listener for button used to switch cameras
    switchButton = view.findViewById(R.id.switch_button)
    switchButton.setOnClickListener {
        lensFacing = if (CameraX.LensFacing.FRONT == lensFacing) {
            CameraX.LensFacing.BACK
        } else {
            CameraX.LensFacing.FRONT
        }
        try {
            // Only bind use cases if we can query a camera with this orientation
            CameraX.getCameraWithLensFacing(lensFacing)
            bindCameraUseCases()
        } catch (exc: Exception) {
            // Do nothing
        }
    }
}

private fun bindCameraUseCases() {
    // Make sure that there are no other use cases bound to CameraX
    CameraX.unbindAll()

    val previewConfig = PreviewConfig.Builder().apply {
        setLensFacing(lensFacing)
    }.build()
    val preview = Preview(previewConfig)

    val imageCaptureConfig = ImageCaptureConfig.Builder().apply {
        setLensFacing(lensFacing)
    }.build()
    imageCapture = ImageCapture(imageCaptureConfig)

    // Apply declared configs to CameraX using the same lifecycle owner
    CameraX.bindToLifecycle(this, preview, imageCapture)
}
Java version:

private LensFacing lensFacing = CameraX.LensFacing.BACK;
private ImageCapture imageCapture = null;
private Button switchButton;

@SuppressLint("RestrictedApi")
private void startCamera() {
    bindCameraUseCases();

    // Listener for button used to switch cameras
    switchButton = view.findViewById(R.id.switch_button);
    switchButton.setOnClickListener(v -> {
        lensFacing = lensFacing == LensFacing.FRONT ? LensFacing.BACK : LensFacing.FRONT;
        try {
            // Only bind use cases if we can query a camera with this orientation
            CameraX.getCameraWithLensFacing(lensFacing);
            bindCameraUseCases();
        } catch (CameraInfoUnavailableException e) {
            // Do nothing
        }
    });
}

private void bindCameraUseCases() {
    // Make sure that there are no other use cases bound to CameraX
    CameraX.unbindAll();

    PreviewConfig previewConfig = new PreviewConfig.Builder()
            .setLensFacing(lensFacing)
            .build();
    Preview preview = new Preview(previewConfig);

    ImageCaptureConfig imageCaptureConfig = new ImageCaptureConfig.Builder()
            .setLensFacing(lensFacing)
            .build();
    imageCapture = new ImageCapture(imageCaptureConfig);

    // Apply declared configs to CameraX using the same lifecycle owner
    CameraX.bindToLifecycle(this, preview, imageCapture);
}
Here is how I did mine:
private var defaultCameraFacing = CameraSelector.DEFAULT_BACK_CAMERA

btnFlipCamera.setOnClickListener {
    Log.d("CameraFacing", defaultCameraFacing.toString())
    defaultCameraFacing = if (defaultCameraFacing == CameraSelector.DEFAULT_FRONT_CAMERA) {
        CameraSelector.DEFAULT_BACK_CAMERA
    } else {
        CameraSelector.DEFAULT_FRONT_CAMERA
    }
    try {
        // Only bind use cases if we can query a camera with this orientation
        startCamera(defaultCameraFacing)
    } catch (exc: Exception) {
        // Do nothing
    }
}
private fun startCamera(defaultCameraFacing: CameraSelector) {
    llPictureCaptured.visibility = View.GONE
    tvLocationLabel.visibility = View.GONE
    pgLoadingLocation.visibility = View.GONE
    openCamera.visibility = View.GONE
    llCameraControl.visibility = View.VISIBLE
    viewFinder.visibility = View.VISIBLE

    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

    cameraProviderFuture.addListener({
        // Used to bind the lifecycle of cameras to the lifecycle owner
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Preview
        val preview = Preview.Builder()
            .build()
            .also {
                it.setSurfaceProvider(viewFinder.surfaceProvider)
            }

        imageCapture = ImageCapture.Builder()
            .build()

        // Set image analysis, i.e. luminosity analysis
        val imageAnalyzer = ImageAnalysis.Builder()
            .build()
            .also {
                it.setAnalyzer(cameraExecutor, LuminosityAnalyzer { luma ->
                    Log.d(TAG, "Average luminosity: $luma")
                })
            }

        // Set camera facing
        val cameraSelector = defaultCameraFacing

        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()

            // Bind use cases to camera
            cameraProvider.bindToLifecycle(
                this, cameraSelector, preview, imageCapture, imageAnalyzer)
        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}