Android CameraX - face detection while recording video

I'm using the new CameraX library with Firebase ML Kit on Android to detect faces on every frame the device can process.
So I set up CameraX like this:
CameraX.bindToLifecycle(this, preview, imageCapture, faceDetectAnalyzer)
Everything works flawlessly. Now, while doing that, I also want to record a video.
So basically, I want to detect faces while recording a video.
I tried:
CameraX.bindToLifecycle(this, preview, imageCapture, faceDetectAnalyzer, videoCapture)
But I'm getting an error saying there are too many parameters, so I guess that's not the right way.
I know this library is still in alpha, but I assume there is a way to do this.
Even if there isn't yet, what's another way to implement face detection while recording a video with Firebase ML?

I haven't used CameraX much, but I usually work with the Camera2 API and Firebase ML Kit.
To use both APIs together, you should get the Image callbacks from an ImageReader set to your preview size. In that callback you can use those Images to create a FirebaseVisionFace through the API and do whatever you want with it.
Using Kotlin and coroutines, it should look like this:
private val options: FirebaseVisionFaceDetectorOptions = FirebaseVisionFaceDetectorOptions.Builder()
    .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
    .build()

private val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)

// 480x360 input is typically sufficient for face detection. A
// FirebaseVisionImageMetadata (width, height, IMAGE_FORMAT_NV21) is only needed
// when building the image from a byte array; fromMediaImage() instead takes the
// Image plus its rotation (one of FirebaseVisionImageMetadata.ROTATION_0/90/180/270).
suspend fun processImage(image: Image, rotation: Int): List<FirebaseVisionFace> {
    val visionImage = FirebaseVisionImage.fromMediaImage(image, rotation)
    return detector.detectInImage(visionImage).await()
}
If you want to use the await() method for coroutine support, you can take a look at https://github.com/FrangSierra/Firebase-Coroutines-Android
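For completeness, here is a minimal sketch of how those Image callbacks could be wired up with a Camera2 ImageReader. The size, format, coroutine scope, rotation value, and backgroundHandler are all assumptions for illustration, and the reader's surface must also be added as a target of your capture session:
// Sketch: feed ImageReader frames into processImage(). `scope`, `rotation`,
// and `backgroundHandler` are assumed to exist in the surrounding class.
val imageReader = ImageReader.newInstance(480, 360, ImageFormat.YUV_420_888, 2)
imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireLatestImage() ?: return@setOnImageAvailableListener
    scope.launch {
        try {
            val faces = processImage(image, rotation)
            // Use the detected faces (draw overlays, count them, etc.)
        } finally {
            image.close() // Always release the Image back to the reader
        }
    }
}, backgroundHandler)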

Related

disable autofocus in android camerax (camera 2)

I'm working on a barcode scanning project, so I want to disable auto-focus to improve performance. I've tried many approaches, but none of them work. Could anyone give me some help? Thank you.
If you really want to turn off AF, you can do this in CameraX with the Camera2CameraControl class. First bind the use cases you need to a lifecycle, which returns a Camera object. From that Camera you can get its CameraControl, use it to instantiate a Camera2CameraControl, and then set the focus mode to CameraMetadata.CONTROL_AF_MODE_OFF.
val camera: Camera = cameraProvider.bindToLifecycle(
    this,
    cameraSelector,
    imagePreview,
    imageCapture,
    imageAnalysis
)
val cameraControl: CameraControl = camera.cameraControl
val camera2CameraControl: Camera2CameraControl = Camera2CameraControl.from(cameraControl)

// Then you can set the focus mode you need like this
val captureRequestOptions = CaptureRequestOptions.Builder()
    .setCaptureRequestOption(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_OFF)
    .build()
camera2CameraControl.captureRequestOptions = captureRequestOptions
This was tested on CameraX's 1.0.0-rc03 build.
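Since Camera2CameraControl's captureRequestOptions replace whatever was set before, clearing them should hand AF control back to CameraX; a sketch, not verified across versions:
// Sketch: submit an empty bundle to remove the Camera2 overrides
camera2CameraControl.captureRequestOptions = CaptureRequestOptions.Builder().build()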
I use disableAutoCancel() with CameraX 1.0.0. The camera focuses once and then stays locked; autofocus is not restarted every few seconds. So, something like:
// autoFocusPoint is a MeteringPoint, e.g. created with a MeteringPointFactory
val autoFocusAction = FocusMeteringAction.Builder(
    autoFocusPoint,
    FocusMeteringAction.FLAG_AF or
        FocusMeteringAction.FLAG_AE or
        FocusMeteringAction.FLAG_AWB
).apply {
    disableAutoCancel()
}.build()
myCameraControl!!.startFocusAndMetering(autoFocusAction)
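For reference, the autoFocusPoint above is a MeteringPoint. A minimal sketch of creating one, assuming you want the center of the frame in normalized coordinates:
// A 1x1 normalized factory; (0.5, 0.5) is the center of the frame
val factory = SurfaceOrientedMeteringPointFactory(1f, 1f)
val autoFocusPoint = factory.createPoint(0.5f, 0.5f)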

How do you take a picture with camerax?

I'm still practicing with Kotlin and Android development. As far as I understand, the Camera class has been deprecated, and Android recommends using CameraX instead, because this high-level class is device-independent and simplifies the process of implementing cameras in apps.
I've tried to read the documentation (https://developer.android.com/training/camerax), but it's written so badly that I barely understood what they're trying to explain.
So I went to read the entire sample code given in the documentation itself (https://github.com/android/camera-samples/tree/main/CameraXBasic).
The CameraFragment code is about 500 lines long (ignoring imports and various comments).
Do I really need to write 500 lines of code to simply take a picture?
How is this supposed to be considered "simpler than before"?
I mean, Android programming is at the point where I need to write only 4 lines of code to ask the user to select an image from their storage, retrieve it, and show it in an ImageView.
Is there a TRUE simple way to take a picture, or do I really need to stop and lose a whole day of work to write all those lines of code?
EDIT:
Take this page of the documentation:
https://developer.android.com/training/camerax/architecture#kotlin
It starts with this piece of code.
val preview = Preview.Builder().build()
val viewFinder: PreviewView = findViewById(R.id.previewView)
// The use case is bound to an Android Lifecycle with the following code
val camera = cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview)
cameraProvider comes out of nowhere. What is this supposed to be? I've found out it's a ProcessCameraProvider, but how am I supposed to initialize it?
Should it be a lateinit var, or has it already been initialized somewhere else?
Because if I try to write val cameraProvider = ProcessCameraProvider(), I get an error, so what am I supposed to do?
What is the cameraSelector parameter? It's not defined before. I've found out it's the selector for the front or back camera, but how am I supposed to know that from reading that page of the documentation?
How could this documentation have been released with these kinds of gaps?
How is someone supposed to learn with ease?
Before you can interact with the device's cameras using CameraX, you need to initialize the library. The initialization process is asynchronous and involves things like loading information about the device's cameras.
You interact with the device's cameras through a ProcessCameraProvider. It's a singleton, so the first time you get an instance of it, CameraX performs its initialization.
val cameraProviderFuture: ListenableFuture<ProcessCameraProvider> = ProcessCameraProvider.getInstance(context)
Getting the ProcessCameraProvider singleton returns a Future because the library might need to initialize asynchronously. The first time you get it, it might take some time (usually well under a second); subsequent calls will return immediately, as the initialization will already have been performed.
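Once the future completes, you can retrieve the provider, for example on the main thread:
cameraProviderFuture.addListener({
    // get() won't block here: the future has already completed
    val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
    // You can now bind use cases, as shown below
}, ContextCompat.getMainExecutor(context))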
With a ProcessCameraProvider in hand, you can start interacting with the device's cameras. You choose which camera to interact with using a CameraSelector, which wraps a set of filters for the camera you want to use. Typically, if you're just trying to use the main back or front camera, you'd use CameraSelector.DEFAULT_BACK_CAMERA or CameraSelector.DEFAULT_FRONT_CAMERA.
Now that you've defined which camera you'll use, you build the use cases you'll need. For example, you want to take a picture, so you'll use the ImageCapture use case. It allows taking a single capture frame (typically a high quality one) using the camera, and providing it either as a raw buffer, or storing it in a file. To use it, you can configure it if you'd wish, or you can just let CameraX use a default configuration.
val imageCapture = ImageCapture.Builder().build()
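If you do want to configure it, the builder exposes a few options; the values below are just illustrative:
val imageCapture = ImageCapture.Builder()
    .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY) // favor latency over quality
    .setTargetRotation(Surface.ROTATION_0)
    .build()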
In CameraX, a camera's lifecycle is controlled by a LifecycleOwner: when the LifecycleOwner's lifecycle starts, the camera opens, and when it stops, the camera closes. So you'll need to choose a lifecycle that will control the camera. If you're using an Activity, you'd typically want the camera to start when the Activity starts and stop when it stops, so you'd use the Activity instance itself as the LifecycleOwner. If you were using a Fragment, you might want to use its view lifecycle (Fragment.getViewLifecycleOwner()).
Lastly, you need to put the pieces of the puzzle together.
processCameraProvider.bindToLifecycle(
    lifecycleOwner,
    cameraSelector,
    imageCapture
)
An app typically includes a viewfinder that displays the camera's preview, so you can use a Preview use case, and bind it with the ImageCapture use case. The Preview use case allows streaming camera frames to a Surface. Since setting up the Surface and correctly drawing the preview on it can be complex, CameraX provides PreviewView, a View that can be used with the Preview use case to display the camera preview. You can check out how to use them here.
// Just like ImageCapture, you can configure the Preview use case if you'd wish.
val preview = Preview.Builder().build()

// Provide PreviewView's Surface to CameraX. The preview will be drawn on it.
val previewView: PreviewView = findViewById(...)
preview.setSurfaceProvider(previewView.surfaceProvider)

// Bind both the Preview and ImageCapture use cases
processCameraProvider.bindToLifecycle(
    lifecycleOwner,
    cameraSelector,
    imageCapture,
    preview
)
Now, to actually take a picture, you use one of ImageCapture's takePicture() methods. One provides a JPEG raw buffer of the captured image; the other saves it to a file that you provide (make sure you have the necessary storage permissions if you need any).
imageCapture.takePicture(
    ContextCompat.getMainExecutor(context), // Defines where the callbacks are run
    object : ImageCapture.OnImageCapturedCallback() {
        override fun onCaptureSuccess(imageProxy: ImageProxy) {
            val image: Image? = imageProxy.image // Do what you want with the image
            imageProxy.close() // Make sure to close the image
        }

        override fun onError(exception: ImageCaptureException) {
            // Handle the exception
        }
    }
)
val imageFile = File("somePath/someName.jpg") // You can store the image in the cache, for example, using `cacheDir.absolutePath` as the path.
val outputFileOptions = ImageCapture.OutputFileOptions
    .Builder(imageFile)
    .build()
imageCapture.takePicture(
    outputFileOptions,
    ContextCompat.getMainExecutor(context), // CameraXExecutors is internal API, so use a public executor
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
            // The image was saved to imageFile
        }

        override fun onError(exception: ImageCaptureException) {
            // Handle the exception
        }
    }
)
Do I really need to write 500 lines of code to simply take a picture?
How is this supposed to be considered "simpler than before"?
CameraXBasic is not as "basic" as its name might suggest x) It's more of a complete example of CameraX's three use cases. Even though CameraFragment is long, it explains things nicely, so that it's more accessible to everyone.
CameraX is "simpler than before", where "before" refers mainly to Camera2, which was more challenging, at least to get started with. CameraX provides a more developer-friendly API with its use-case approach. It also handles device compatibility, which was a big issue before: ensuring your camera app works reliably on most Android devices out there is very challenging.

What is the distinct difference between an ImageAnalyzer and VisionProcessor in Android MLKit, if any?

I'm new to MLKit.
One of the first things I noticed from looking at the docs, as well as the sample ML Kit apps, is that there seem to be multiple ways to attach/use image processors/analyzers.
In some cases they demonstrate using the ImageAnalyzer API: https://developers.google.com/ml-kit/vision/image-labeling/custom-models/android
private class YourImageAnalyzer : ImageAnalysis.Analyzer {
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
            // (call imageProxy.close() once ML Kit has finished with the frame,
            // otherwise no further frames will be delivered)
        }
    }
}
It seems like analyzers can be bound to the lifecycle of CameraProviders
cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageCapture, imageAnalyzer)
In other cases shown in MLKit showcase apps, the CameraSource has a frame processor that can be set.
cameraSource?.setFrameProcessor(
    if (PreferenceUtils.isMultipleObjectsMode(this)) {
        MultiObjectProcessor(graphicOverlay!!, workflowModel!!)
    } else {
        ProminentObjectProcessor(graphicOverlay!!, workflowModel!!)
    }
)
So are these simply two different approaches of doing the same thing? Can they be mixed and matched? Are there performance benefits in choosing one over the other?
As a concrete example: if I wanted to use the MLKit ImageLabeler, should I wrap it in a processor and set it as the ImageProcessor for CameraSource, or use it in the Image Analysis callback and bind that to the CameraProvider?
Lastly, in the examples where CameraSource is used (the ML Kit Material showcase app), there is no use of a CameraProvider... is this simply because CameraSource makes it irrelevant and unneeded? In that case, is binding an ImageAnalyzer to a CameraProvider not even an option? Would one simply set different ImageProcessors on the CameraSource on demand while running through different scenarios such as image labelling, object detection, text recognition, etc.?
The difference comes down to the underlying camera implementation. The Analyzer interface is from CameraX, while the processor has to be written by the developer for camera1.
If you want to use android.hardware.Camera, you need to follow the example to create a processor and feed the camera output to ML Kit.
If you want to use CameraX, you can follow the example in the vision sample app and look at CameraXLivePreviewActivity.
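As a concrete answer for the ImageLabeler example: with the CameraX route, you wrap the labeler in an ImageAnalysis.Analyzer and bind the analysis use case. A minimal sketch, assuming cameraProvider, cameraSelector, preview, lifecycleOwner, and context are set up as in the snippets above:
val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

val imageAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(context)) { imageProxy ->
    val mediaImage = imageProxy.image
    if (mediaImage != null) {
        val input = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        labeler.process(input)
            .addOnSuccessListener { labels -> /* use the labels */ }
            .addOnCompleteListener { imageProxy.close() } // release so the next frame is delivered
    } else {
        imageProxy.close()
    }
}

cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, imageAnalysis)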

How to listen to the cameraX lens facing changes

Does CameraX provide an API for a lens-facing-change callback? After switching the lens-facing camera, I want to be notified when it has finished changing and the camera is ready to use.
Currently I'm using these CameraX dependencies:
implementation "androidx.camera:camera-lifecycle:1.0.0-beta01"
implementation "androidx.camera:camera-view:1.0.0-alpha08"
implementation "androidx.camera:camera-extensions:1.0.0-alpha08"
Sounds like you need a signal for when the camera starts emitting frames. You can use Camera2Interop and set a CaptureCallback on the Preview use case, for example. After binding the Preview use case using a CameraSelector for the lens facing you want, you can listen for when onCaptureCompleted() is invoked; this should signal that the camera has started.
val builder = Preview.Builder()
Camera2Interop.Extender(builder).setSessionCaptureCallback(object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(session: CameraCaptureSession, request: CaptureRequest, result: TotalCaptureResult) {
        // Camera will start emitting frames
    }
})
val preview = builder.build()
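Note that onCaptureCompleted() fires for every completed frame. If you only want a single "camera is ready" notification after each lens switch, you can latch it; a sketch where onCameraReady() is a hypothetical callback of your own:
val signalled = AtomicBoolean(false) // reset this each time you switch lenses / rebind
Camera2Interop.Extender(builder).setSessionCaptureCallback(object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(session: CameraCaptureSession, request: CaptureRequest, result: TotalCaptureResult) {
        if (signalled.compareAndSet(false, true)) {
            onCameraReady() // hypothetical: runs once, on the first completed frame
        }
    }
})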

Take a Video with CameraX - setLensFacing() is unresolved

I tried to take a video with CameraX. For that, I read the SO posts here and here.
But when I copy-paste the code and adjust it a little, there is an unresolved reference to the setLensFacing() method:
videoCapture = VideoCaptureConfig.Builder().apply {
    setTargetRotation(binding.viewFinder.display.rotation)
    setLensFacing(lensFacing)
}.build()
I adjusted the code a little since you no longer need to pass a config object to a VideoCapture; you can build it directly.
At this point, Android Studio tells me that setLensFacing(lensFacing) is unresolved.
I'm a little confused because on this page there is nice documentation, and VideoCaptureConfig.Builder() contains setLensFacing().
I hope someone can help.
Camera selection is no longer done through the use cases. The code you wrote was possible until (I think) version 1.0.0-alpha08.
The way to select the lens now is by using a CameraSelector when binding a use case (or multiple use cases) to a lifecycle. That way, all the use cases use the same lensFacing.
So you can write:
val cameraSelector = CameraSelector.Builder().requireLensFacing(lensFacing).build()
// Or alternatively if you want a specific lens, like the back facing lens
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
val videoCapture = VideoCaptureConfig.Builder().build()
processCameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, videoCapture)
Note that currently the VideoCapture use case is hidden in the CameraX API and is still in an early state of development.
In CameraX 1.0.0-beta11, the video capture configuration has moved from VideoCaptureConfig to VideoCapture and the lens is set in the CameraSelector Builder:
val videoCapture = VideoCapture.Builder().apply {
    setVideoFrameRate(30)
    setAudioBitRate(128999)
    setTargetRotation(viewFinder.display.rotation)
    setTargetAspectRatio(AspectRatio.RATIO_16_9)
}.build()

val cameraSelector = CameraSelector.Builder()
    .requireLensFacing(CameraSelector.LENS_FACING_BACK)
    .build()
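Binding then works the same way as in the earlier answer, assuming the same processCameraProvider and lifecycleOwner:
processCameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, videoCapture)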
