I have an app that communicates with an Arduino device by blinking the flash. I noticed that both Camera1 and Camera2 have issues on some Android devices, so I built a settings screen where the user can test both and choose the one that works. I'm now trying to implement the same communication with CameraX, hoping it will work on more devices, but I can't find examples that simply toggle the flash. I'm new to Android development, and the material I've found is all about taking pictures; I don't even want to open a camera screen, just turn the flash on and off, like a flashlight.
Can someone help with this, or point me to documentation that covers it?
Edit 1:
I did this in onCreate and I see the logs in Logcat, but the flash doesn't toggle. Maybe I need to create a use case?
lateinit var cameraControl: CameraControl

val cameraProcessFuture = ProcessCameraProvider.getInstance(this)
cameraProcessFuture.addListener(Runnable {
    val cameraProvider = cameraProcessFuture.get()
    val lifecycleOwner = this
    val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
    // Bind with zero use cases, then grab the CameraControl
    val camera = cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector)
    cameraControl = camera.cameraControl
    val listenableFuture = cameraControl.enableTorch(true)
    // cameraControl.enableTorch(false)
    Log.d("MurilloTesteCamera", "listener")
    listenableFuture.addListener(Runnable {
        Log.d("MurilloTesteCamera", "listener 2")
    }, ContextCompat.getMainExecutor(this))
}, ContextCompat.getMainExecutor(this))
Log.d("MurilloTesteCamera", "oncreate")
Edit 2:
With this code I tried to create a use case, but it didn't solve my problem and the flash still doesn't turn on (my activity implements CameraXConfig.Provider):
val context = this
Log.d("MurilloTesteCamera", "before initialize")
CameraX.initialize(context, cameraXConfig).addListener(Runnable {
    Log.d("MurilloTesteCamera", "inside initialize")
    CameraX.unbindAll()
    val preview = Preview.Builder()
        .apply {
            setTargetResolution(Size(640, 480))
        }
        .build()
    lateinit var cameraControl: CameraControl
    val cameraProcessFuture = ProcessCameraProvider.getInstance(context)
    cameraProcessFuture.addListener(Runnable {
        val cameraProvider = cameraProcessFuture.get()
        val lifecycleOwner = context
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
        val camera = cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector)
        cameraControl = camera.cameraControl
        Log.d("MurilloTesteCamera", "info before -> " + camera.cameraInfo.torchState)
        Log.d("MurilloTesteCamera", "has flash -> " + camera.cameraInfo.hasFlashUnit())
        val listenableFuture = cameraControl.enableTorch(true)
        Log.d("MurilloTesteCamera", "listener")
        listenableFuture.addListener(Runnable {
            Log.d("MurilloTesteCamera", "info after -> " + camera.cameraInfo.torchState)
            Log.d("MurilloTesteCamera", "listener 2")
        }, ContextCompat.getMainExecutor(context))
        CameraX.bindToLifecycle(context, cameraSelector, preview)
    }, ContextCompat.getMainExecutor(context))
}, ContextCompat.getMainExecutor(context))
Log.d("MurilloTesteCamera", "after initialize")
// Note: this busy-wait blocks the main thread, so the listeners posted
// to the main executor above can never run while it spins
while (!CameraX.isInitialized()) {}
Log.d("MurilloTesteCamera", "after while")
In CameraX, the API to turn on/off the camera torch is CameraControl.enableTorch(boolean). To get a CameraControl instance, you can follow the documentation:
The application can retrieve the CameraControl instance via
Camera.getCameraControl(). CameraControl is ready to start operations
immediately after Camera is retrieved and UseCases are bound to that
camera
This means you'll need to first bind a use case (or multiple use cases) to a lifecycle. You mentioned you don't want to open a camera screen, so I'm assuming you mean you don't want to bind any use cases, in which case you can call bindToLifecycle() with zero use cases. This may or may not work with the latest versions of CameraX.
All in all, you'd have to write something like this:
val cameraProcessFuture = ProcessCameraProvider.getInstance(context)
cameraProcessFuture.addListener(Runnable {
    val cameraProvider = cameraProcessFuture.get()

    // Choose the lifecycle to which the camera will be attached
    val lifecycleOwner = /* Can be an Activity, Fragment, or a custom LifecycleOwner */

    // Choose a valid camera the device has
    val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

    // You might need a use case to start capture requests with the camera.
    // This analyzer just closes every frame it receives.
    val imageAnalysis = ImageAnalysis.Builder()
        .build()
        .apply {
            val executor = /* Define an executor */
            setAnalyzer(executor, ImageAnalysis.Analyzer {
                it.close()
            })
        }

    // Get a Camera instance. If binding zero use cases doesn't work on your
    // CameraX version, pass imageAnalysis as an extra argument here.
    val camera = cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector)

    // Get a CameraControl instance
    val cameraControl = camera.cameraControl

    // Call enableTorch(); you can listen to the returned future to check whether it succeeded
    cameraControl.enableTorch(true)  // enable torch
    cameraControl.enableTorch(false) // disable torch
}, ContextCompat.getMainExecutor(context))
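Since the original question is about toggling (not just enabling) the torch, note that CameraInfo reports the current torch state as LiveData. A minimal sketch, assuming a `camera` instance obtained from bindToLifecycle() as above:

```kotlin
import androidx.camera.core.Camera
import androidx.camera.core.TorchState

// Flip the torch each time this is called, based on the last reported state.
// `camera` is assumed to come from cameraProvider.bindToLifecycle(...).
fun toggleTorch(camera: Camera) {
    val torchOn = camera.cameraInfo.torchState.value == TorchState.ON
    camera.cameraControl.enableTorch(!torchOn)
}
```

For Arduino-style blinking you could call this on a timer, or observe `camera.cameraInfo.torchState` to confirm each transition actually happened before scheduling the next one.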
Related
I'm an Android developer beginner and I'm trying to use CameraX.
Is it possible to open the native camera and take a picture, instead of building a page with a preview and a custom button to take a photo?
I've read multiple articles and tutorials but cannot find a solution.
Thanks for helping me.
Using the example here, with a few tweaks, I was able to take a photo without any preview.
First we start the camera using the ImageCapture use case:
private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        // Used to bind the lifecycle of cameras to the lifecycle owner
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
        imageCapture = ImageCapture.Builder()
            .build()
        // Select back camera as a default
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()
            // Bind use cases to camera
            cameraProvider.bindToLifecycle(this, cameraSelector, imageCapture)
        } catch (exc: Exception) {
            Log.e("Error", "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}
Then we can take a photo by calling the takePhoto method:
private fun takePhoto() {
    // Get a stable reference of the modifiable image capture use case
    val imageCapture = imageCapture ?: return

    // Create time-stamped name and MediaStore entry
    val contentValues = ContentValues().apply {
        put(MediaStore.MediaColumns.DISPLAY_NAME, "sample_image")
        put(MediaStore.MediaColumns.MIME_TYPE, "image/jpeg")
        if (Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
            put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/CameraX-Image")
        }
    }

    // Create output options object which contains file + metadata
    val outputOptions = ImageCapture.OutputFileOptions
        .Builder(contentResolver,
            MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
            contentValues)
        .build()

    // Set up image capture listener, which is triggered after the photo has been taken
    imageCapture.takePicture(
        outputOptions,
        ContextCompat.getMainExecutor(this),
        object : ImageCapture.OnImageSavedCallback {
            override fun onError(exc: ImageCaptureException) {
                Log.e("Error", "Photo capture failed: ${exc.message}", exc)
            }

            override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                val msg = "Photo capture succeeded: ${output.savedUri}"
                Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
                Log.d("Success", msg)
            }
        }
    )
}
I've uploaded a working project to GitHub.
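For reference, the two functions above can be wired together roughly like this (a sketch under my own assumptions: the `capture_button` view id and layout name are hypothetical, and the CAMERA permission is assumed to be granted already):

```kotlin
class MainActivity : AppCompatActivity() {

    // Nullable until startCamera() has bound the use case
    private var imageCapture: ImageCapture? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Bind the ImageCapture use case as soon as the activity starts
        startCamera()

        // Trigger the capture on demand (hypothetical button id)
        findViewById<Button>(R.id.capture_button).setOnClickListener { takePhoto() }
    }

    // startCamera() and takePhoto() as defined above
}
```

Note that `takePhoto()` returns early while `imageCapture` is still null, i.e. before the provider listener has run, so triggering it from user input rather than directly in onCreate avoids a race.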
I am developing a camera application that uses the Camera2 API. I have managed to get the image capture and video recording functions to work, but whenever I perform either of them my camera preview freezes. I am trying to understand what changes I need to make for it to work without any issues.
After handling the camera permission and selecting the camera id, I invoke the following function to get a camera preview. This works fine without any issues.
private fun startCameraPreview() {
    if (ContextCompat.checkSelfPermission(requireContext(), Manifest.permission.CAMERA)
        == PackageManager.PERMISSION_GRANTED &&
        ContextCompat.checkSelfPermission(requireContext(), Manifest.permission.RECORD_AUDIO)
        == PackageManager.PERMISSION_GRANTED) {
        lifecycleScope.launch(Dispatchers.Main) {
            camera = openCamera(cameraManager, cameraId!!, cameraHandler)
            val cameraOutputTargets = listOf(viewBinding.cameraSurfaceView.holder.surface)
            session = createCaptureSession(camera, cameraOutputTargets, cameraHandler)
            val captureBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
            captureBuilder.set(
                CaptureRequest.CONTROL_AF_MODE,
                CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
            captureBuilder.addTarget(viewBinding.cameraSurfaceView.holder.surface)
            session.setRepeatingRequest(captureBuilder.build(), null, cameraHandler)
            selectMostMatchingImageCaptureSize()
            selectMostMatchingVideoRecordSize()
        }
    } else {
        requestCameraPermission()
    }
}
As I understand it, when you create a CaptureSession with the relevant targets and call setRepeatingRequest on it, you get the camera preview; correct me if I'm wrong.
Here is the function I use to capture an image.
private fun captureImage() {
    if (captureSize != null) {
        captureImageReader = ImageReader.newInstance(
            captureSize!!.width, captureSize!!.height, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)
        viewModel.documentPreviewSizeToCaptureSizeScaleFactor =
            captureSize!!.width / previewSize!!.width.toFloat()
        lifecycleScope.launch(Dispatchers.IO) {
            val cameraOutputTargets = listOf(
                viewBinding.cameraSurfaceView.holder.surface,
                captureImageReader.surface
            )
            session = createCaptureSession(camera, cameraOutputTargets, cameraHandler)
            takePhoto().use { result ->
                Log.d(TAG, "Result received: $result")

                // Save the result to disk
                val output = saveResult(result)
                Log.d(TAG, "Image saved: ${output.absolutePath}")

                // If the result is a JPEG file, update EXIF metadata with orientation info
                if (output.extension == "jpg") {
                    decodedExifOrientationOfTheImage =
                        decodeExifOrientation(result.orientation)
                    val exif = ExifInterface(output.absolutePath)
                    exif.setAttribute(
                        ExifInterface.TAG_ORIENTATION, result.orientation.toString()
                    )
                    exif.saveAttributes()
                    Log.d(TAG, "EXIF metadata saved: ${output.absolutePath}")
                }
            }
        }
    }
}
The takePhoto() function lives in the inherited base fragment class and is responsible for setting up capture requests and saving the image.
protected suspend fun takePhoto(): CombinedCaptureResult = suspendCoroutine { cont ->
    // Flush any images left in the image reader
    @Suppress("ControlFlowWithEmptyBody")
    while (captureImageReader.acquireNextImage() != null) {
    }

    // Start a new image queue
    val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
    captureImageReader.setOnImageAvailableListener({ reader ->
        val image = reader.acquireNextImage()
        Log.d(TAG, "Image available in queue: ${image.timestamp}")
        imageQueue.add(image)
    }, imageReaderHandler)

    val captureRequest = session.device.createCaptureRequest(
        CameraDevice.TEMPLATE_STILL_CAPTURE
    ).apply {
        addTarget(captureImageReader.surface)
    }
    session.capture(captureRequest.build(), object : CameraCaptureSession.CaptureCallback() {
        override fun onCaptureCompleted(
            session: CameraCaptureSession,
            request: CaptureRequest,
            result: TotalCaptureResult
        ) {
            super.onCaptureCompleted(session, request, result)
            val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
            Log.d(TAG, "Capture result received: $resultTimestamp")

            // Set a timeout in case the captured image is dropped from the pipeline
            val exc = TimeoutException("Image dequeuing took too long")
            val timeoutRunnable = Runnable { cont.resumeWithException(exc) }
            imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)

            // Loop in the coroutine's context until an image with a matching timestamp arrives.
            // We need to launch the coroutine context again because the callback runs on
            // the handler provided to the `capture` method, not in our coroutine context.
            @Suppress("BlockingMethodInNonBlockingContext")
            lifecycleScope.launch(cont.context) {
                while (true) {
                    val image = imageQueue.take()
                    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q &&
                        image.format != ImageFormat.DEPTH_JPEG &&
                        image.timestamp != resultTimestamp
                    ) continue
                    Log.d(TAG, "Matching image dequeued: ${image.timestamp}")

                    // Unset the image reader listener
                    imageReaderHandler.removeCallbacks(timeoutRunnable)
                    captureImageReader.setOnImageAvailableListener(null, null)

                    // Clear the queue of any remaining images
                    while (imageQueue.size > 0) {
                        imageQueue.take().close()
                    }

                    // Compute EXIF orientation metadata
                    val rotation = relativeOrientation.value ?: defaultOrientation()
                    Log.d(TAG, "EXIF rotation value $rotation")
                    val mirrored = characteristics.get(CameraCharacteristics.LENS_FACING) ==
                        CameraCharacteristics.LENS_FACING_FRONT
                    val exifOrientation = computeExifOrientation(rotation, mirrored)

                    // Build the result and resume progress
                    cont.resume(
                        CombinedCaptureResult(
                            image, result, exifOrientation, captureImageReader.imageFormat
                        )
                    )
                }
            }
        }
    }, cameraHandler)
}
Invoking the functions above does capture an image, but it freezes the preview. To get the preview back I have to reset it with the function below, called at the end of captureImage():
private fun resetCameraPreview() {
    lifecycleScope.launch(Dispatchers.Main) {
        val cameraOutputTargets = listOf(viewBinding.cameraSurfaceView.holder.surface)
        session = createCaptureSession(camera, cameraOutputTargets, cameraHandler)
        val captureBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
        captureBuilder.set(
            CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
        captureBuilder.addTarget(viewBinding.cameraSurfaceView.holder.surface)
        session.setRepeatingRequest(captureBuilder.build(), null, cameraHandler)
    }
}
Even doing this does not provide a good user experience, as it still freezes the preview for a few seconds. And when it comes to video recording, the issue becomes unsolvable even with that imperfect workaround.
This is the function I use for video recording.
private fun startRecoding() {
    if (videoRecordeSize != null) {
        lifecycleScope.launch(Dispatchers.IO) {
            configureMediaRecorder(videoRecodeFPS, videoRecordeSize!!)
            val cameraOutputTargets = listOf(
                viewBinding.cameraSurfaceView.holder.surface,
                mediaRecorder.surface
            )
            session = createCaptureSession(camera, cameraOutputTargets, cameraHandler)
            recordVideo()
        }
    }
}
The recordVideo() function is in the inherited base fragment class and is responsible for setting up the capture requests and starting mediaRecorder to record video to a file.
protected fun recordVideo() {
    lifecycleScope.launch(Dispatchers.IO) {
        val recordRequest = session.device
            .createCaptureRequest(CameraDevice.TEMPLATE_RECORD).apply {
                addTarget(mediaRecorder.surface)
            }
        session.setRepeatingRequest(recordRequest.build(), null, cameraHandler)
        mediaRecorder.apply {
            start()
        }
    }
}
It does record the video correctly and saves the file when mediaRecorder.stop() is invoked, but the whole time the camera preview is frozen, even after calling mediaRecorder.stop().
What am I missing here? Both times when I create capture sessions I have included the preview surface as a target. Isn't that enough for the Camera2 API to know that it should keep pushing frames to the preview surface while capturing images or recording videos? You can find the repo for this codebase here. I hope someone can help, because I'm stuck with the Camera2 API. I wish I could use CameraX, but some parts are still in beta so I can't use it in production.
I'm trying to explore the CameraX beta version and I'm stuck in my implementation: when I call imageCapture.takePicture(), imageCapture is null.
// Bind the CameraProvider to the LifecycleOwner
val cameraSelector = CameraSelector.Builder().requireLensFacing(lensFacing).build()
val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
cameraProviderFuture.addListener(Runnable {
    // CameraProvider
    val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

    // ImageCapture
    imageCapture = ImageCapture.Builder()
        .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
        .build()

    // Must unbind the use cases before rebinding them
    cameraProvider.unbindAll()
    try {
        // A variable number of use cases can be passed here;
        // camera provides access to CameraControl & CameraInfo
        camera = cameraProvider.bindToLifecycle(this, cameraSelector, imageCapture)
    } catch (exc: Exception) {
        Log.e("TAG", "Use case binding failed", exc)
    }
}, ContextCompat.getMainExecutor(this))

// Create output file to hold the image
photoFile = createFile(externalMediaDirs.first(), FILENAME, PHOTO_EXTENSION)

// Setup image capture metadata
val metadata = Metadata().apply {
    // Mirror image when using the front camera
    isReversedHorizontal = lensFacing == CameraSelector.LENS_FACING_FRONT
}

// Create output options object which contains file + metadata
outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile)
    .setMetadata(metadata)
    .build()

// Setup image capture listener which is triggered after the photo has been taken
imageCapture?.takePicture(
    outputOptions, cameraExecutor, object : ImageCapture.OnImageSavedCallback {
        override fun onError(exc: ImageCaptureException) {
            Log.e(TAG, "Photo capture failed: ${exc.message}", exc)
        }

        override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            val savedUri = output.savedUri ?: Uri.fromFile(photoFile)
            Log.d(TAG, "Photo capture succeeded: $savedUri")
        }
    })
}
(I don't use an onClickListener; my function is called by a service.)
If I remove the cameraProviderFuture.addListener(Runnable { ... }) wrapper, I get "Not bound to a valid Camera".
I'm using the CameraX beta version.
The ImageCapture use case doesn't currently work on its own; it has to be combined with at least a Preview or ImageAnalysis use case. This might change in future versions of CameraX. For now, you can check the documentation on supported use case combinations.
A simple fix for your problem is to add an ImageAnalysis use case whose Analyzer immediately closes the images it receives.
val imageAnalysis = ImageAnalysis.Builder()
    .build()
    .apply {
        setAnalyzer(executor, ImageAnalysis.Analyzer { image ->
            image.close()
        })
    }

// Then bind both the imageAnalysis and imageCapture use cases
cameraProvider.bindToLifecycle(this, cameraSelector, imageCapture, imageAnalysis)
I'm working with CameraX for the first time and I can't find a way to check at runtime whether a device has a front or rear camera.
I only need the preview; I'm not capturing images, so I can't use a button for it.
private var lensFacing = CameraX.LensFacing.FRONT

val viewFinderConfig = PreviewConfig.Builder().apply {
    setLensFacing(lensFacing)
    setTargetAspectRatio(screenAspectRatio)
    setTargetRotation(viewFinder.display.rotation)
}.build()
How can I make sure the app won't crash if the user's device has no front camera?
Thanks in advance!
Check whether the device supports at least one camera with the specified lens facing.
For version 1.0.0-alpha06:
val hasFrontCamera = CameraX.hasCameraWithLensFacing(CameraX.LensFacing.FRONT)
EDIT:
For version >= 1.0.0-alpha07, from https://developer.android.com/jetpack/androidx/releases/camera:
hasCamera() calls previously provided by the CameraX class are now available via the ProcessCameraProvider.
override fun onCreate(savedInstanceState: Bundle?) {
    cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener(Runnable {
        val cameraProvider = cameraProviderFuture.get()
        try {
            val hasCamera = cameraProvider.hasCamera(CameraSelector.DEFAULT_FRONT_CAMERA)
        } catch (e: CameraInfoUnavailableException) {
            e.printStackTrace()
        }
    }, ContextCompat.getMainExecutor(this))
}
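A typical use of this check is to fall back to the rear camera when no front camera exists, so binding never crashes. A sketch inside the same listener (the fallback policy is my assumption, not part of the answer above):

```kotlin
// Prefer the front camera; fall back to the back camera if the device has none.
val selector = try {
    if (cameraProvider.hasCamera(CameraSelector.DEFAULT_FRONT_CAMERA)) {
        CameraSelector.DEFAULT_FRONT_CAMERA
    } else {
        CameraSelector.DEFAULT_BACK_CAMERA
    }
} catch (e: CameraInfoUnavailableException) {
    CameraSelector.DEFAULT_BACK_CAMERA
}

// selector can now be passed to bindToLifecycle() without risking a crash
// on devices that lack a front camera.
```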
I am trying to implement face detection using Firebase ML Kit and the CameraX ImageAnalysis use case. It works fine with the back camera, but when I tried the front camera, it detected nothing:
val config = PreviewConfig.Builder()
    .setLensFacing(CameraX.LensFacing.FRONT)
    .build()
val previewUseCase = Preview(config)
previewUseCase.setOnPreviewOutputUpdateListener { previewOutput ->
    viewFinder.post {
        removeView(viewFinder)
        addView(viewFinder, 0)
        viewFinder.surfaceTexture = previewOutput.surfaceTexture
        updateTransform(previewOutput)
    }
}

val highAccuracyOpts = FirebaseVisionFaceDetectorOptions.Builder()
    .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
    .build()
val detector = FirebaseVision.getInstance().getVisionFaceDetector(highAccuracyOpts)

val imageAnalysisConfig = ImageAnalysisConfig.Builder()
    .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    .build()
val imageAnalysis = ImageAnalysis(imageAnalysisConfig).apply {
    setAnalyzer(
        Executors.newSingleThreadExecutor(),
        ImageAnalysis.Analyzer { image, rotationDegrees ->
            if (image.image != null && isBusy.compareAndSet(false, true)) {
                val visionImage = FirebaseVisionImage.fromMediaImage(
                    image.image!!, degreesToFirebaseRotation(rotationDegrees))
                detector.detectInImage(visionImage)
                    .addOnSuccessListener { faces ->
                        // faces.size is always zero when using the front camera
                        Timber.d("${faces.size}")
                        isBusy.set(false)
                    }
                    .addOnFailureListener { error ->
                        Timber.d("$error")
                    }
            }
        })
}
CameraX.bindToLifecycle(lifecycleOwner, previewUseCase, imageAnalysis)
I tested on a Nokia 8.1 with Android 10. I also tried https://github.com/firebase/quickstart-android/tree/master/mlkit, which does not use CameraX, and it works fine with the front camera.
Solved it by setting the lens facing for ImageAnalysis to CameraX.LensFacing.FRONT as well:
val imageAnalysisConfig = ImageAnalysisConfig.Builder()
    .setLensFacing(CameraX.LensFacing.FRONT)
    .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    .build()
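With both configs set to the same lens facing, the binding stays as in the question. For completeness, a sketch using the same alpha-era API (here `analyzer` stands for the face-detection Analyzer shown earlier):

```kotlin
val imageAnalysis = ImageAnalysis(imageAnalysisConfig).apply {
    setAnalyzer(Executors.newSingleThreadExecutor(), analyzer)
}

// Preview and analysis now both face front, so the analyzer receives
// frames from the same camera the preview is showing.
CameraX.bindToLifecycle(lifecycleOwner, previewUseCase, imageAnalysis)
```

In this alpha API each use case config carried its own lens facing, so a mismatch silently fed the analyzer frames from the default (back) camera; newer CameraX versions avoid this by selecting one camera per bindToLifecycle() call.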