I added CameraX to my app as described in this tutorial. The only problem I faced is image rotation. In the app manifest I use this setting for the camera activity: android:screenOrientation="portrait". My goal is to keep this activity always in portrait mode, while captured images should have the real device rotation.
How can I achieve this? Is it possible for CameraX to detect a different rotation while the activity's orientation is fixed?
This is my code in the camera activity:
private lateinit var cameraProviderFuture: ListenableFuture<ProcessCameraProvider>
private lateinit var preview: Preview
private lateinit var imageCapture: ImageCapture
private val executor = Executors.newSingleThreadExecutor()
private var camera: Camera? = null
...
override fun onCreate(savedInstanceState: Bundle?)
{
...
cameraProviderFuture = ProcessCameraProvider.getInstance(this)
preview_view.post(
{
startCamera()
})
}
...
fun startCamera()
{
preview = Preview.Builder().apply {
setTargetAspectRatio(AspectRatio.RATIO_16_9)
setTargetRotation(preview_view.display.rotation)
}.build()
imageCapture = ImageCapture.Builder().apply {
setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
}.build()
val cameraSelector = CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build()
cameraProviderFuture.addListener(Runnable {
val cameraProvider = cameraProviderFuture.get()
cameraProvider.unbindAll()
camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageCapture)
preview.setSurfaceProvider(preview_view.createSurfaceProvider(camera!!.cameraInfo))
}, ContextCompat.getMainExecutor(this))
}
...
fun takePicture()
{
val file = createFile(getOutputDirectory(), FILENAME, PHOTO_EXTENSION)
val outputFileOptions = ImageCapture.OutputFileOptions.Builder(file).build()
imageCapture.takePicture(outputFileOptions, executor, object : ImageCapture.OnImageSavedCallback
{
override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults)
{
val my_file_item = MyFileItem.createFromFile(file)
imageCaptured(my_file_item)
}
override fun onError(exception: ImageCaptureException)
{
val msg = "Photo capture failed: ${exception.message}"
preview_view.post({
Toast.makeText(this@ActPhotoCapture2, msg, Toast.LENGTH_LONG).show()
})
}
})
}
If your orientation is locked, you can probably use an orientation listener to listen for changes in the device's orientation, and each time its onOrientationChanged callback is invoked, you'd set the target rotation for the image capture use case.
val orientationEventListener = object : OrientationEventListener(context) {
override fun onOrientationChanged(orientation: Int) {
imageCapture.targetRotation = view.display.rotation
}
}
The view you use to get the rotation can be any view, for example the root view if you're in a fragment, or just the PreviewView. You can also enable and disable this listener in onResume and onPause.
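Since the display rotation won't change while the activity is locked to portrait, a sketch along the lines of the CameraX rotation documentation maps the raw sensor degrees to a Surface rotation instead of reading it from a view, and ties the listener to the lifecycle:
// imports: android.view.OrientationEventListener, android.view.Surface
private val orientationEventListener by lazy {
    object : OrientationEventListener(this) {
        override fun onOrientationChanged(orientation: Int) {
            if (orientation == OrientationEventListener.ORIENTATION_UNKNOWN) return
            if (!::imageCapture.isInitialized) return // camera may not be bound yet
            // Map the raw sensor degrees to the nearest Surface rotation
            imageCapture.targetRotation = when (orientation) {
                in 45 until 135 -> Surface.ROTATION_270
                in 135 until 225 -> Surface.ROTATION_180
                in 225 until 315 -> Surface.ROTATION_90
                else -> Surface.ROTATION_0
            }
        }
    }
}

override fun onResume() {
    super.onResume()
    orientationEventListener.enable()
}

override fun onPause() {
    super.onPause()
    orientationEventListener.disable()
}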
PS: the way you're setting up your use cases can cause issues. The use cases shouldn't be initialized before the camera provider is ready; you should build them after the line val cameraProvider = cameraProviderFuture.get().
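Applied to the startCamera above, that reshuffle could look roughly like this (same use cases as in the question, just built inside the listener):
fun startCamera() {
    cameraProviderFuture.addListener(Runnable {
        val cameraProvider = cameraProviderFuture.get()
        // Build the use cases only once the provider is ready
        preview = Preview.Builder().apply {
            setTargetAspectRatio(AspectRatio.RATIO_16_9)
            setTargetRotation(preview_view.display.rotation)
        }.build()
        imageCapture = ImageCapture.Builder().apply {
            setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
        }.build()
        val cameraSelector = CameraSelector.Builder()
            .requireLensFacing(CameraSelector.LENS_FACING_BACK)
            .build()
        cameraProvider.unbindAll()
        camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageCapture)
        preview.setSurfaceProvider(preview_view.createSurfaceProvider(camera!!.cameraInfo))
    }, ContextCompat.getMainExecutor(this))
}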
I am trying to integrate CameraX in my Flutter app, but I get an error saying Cannot access class 'com.google.common.util.concurrent.ListenableFuture'. Check your module classpath for missing or conflicting dependencies.
The error comes from the line below:
val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
Below is my native view:
class CealScanQrView(val context: Context, id: Int, creationParams: Map<String?, Any?>?) :
PlatformView {
private var mCameraProvider: ProcessCameraProvider? = null
private var preview: PreviewView
private var linearLayout: LinearLayout = LinearLayout(context)
private lateinit var cameraExecutor: ExecutorService
private lateinit var options: BarcodeScannerOptions
private lateinit var scanner: BarcodeScanner
private var analysisUseCase: ImageAnalysis = ImageAnalysis.Builder()
.build()
companion object {
private val REQUIRED_PERMISSIONS = mutableListOf(Manifest.permission.CAMERA).toTypedArray()
}
init {
val linearLayoutParams = ViewGroup.LayoutParams(
ViewGroup.LayoutParams.WRAP_CONTENT,
ViewGroup.LayoutParams.WRAP_CONTENT
)
linearLayout.layoutParams = linearLayoutParams
linearLayout.orientation = LinearLayout.VERTICAL
preview = PreviewView(context)
preview.layoutParams = ViewGroup.LayoutParams(
ViewGroup.LayoutParams.MATCH_PARENT,
ViewGroup.LayoutParams.MATCH_PARENT
)
linearLayout.addView(preview)
setUpCamera()
}
private fun setUpCamera(){
if (allPermissionsGranted()) {
startCamera()
}
cameraExecutor = Executors.newSingleThreadExecutor()
options = BarcodeScannerOptions.Builder()
.setBarcodeFormats(
Barcode.FORMAT_QR_CODE)
.build()
scanner = BarcodeScanning.getClient(options)
analysisUseCase.setAnalyzer(
// newSingleThreadExecutor() will let us perform analysis on a single worker thread
Executors.newSingleThreadExecutor()
) { imageProxy ->
processImageProxy(scanner, imageProxy)
}
}
override fun getView(): View {
return linearLayout
}
override fun dispose() {
cameraExecutor.shutdown()
}
@SuppressLint("UnsafeOptInUsageError")
private fun processImageProxy(
barcodeScanner: BarcodeScanner,
imageProxy: ImageProxy
) {
imageProxy.image?.let { image ->
val inputImage =
InputImage.fromMediaImage(
image,
imageProxy.imageInfo.rotationDegrees
)
barcodeScanner.process(inputImage)
.addOnSuccessListener { barcodeList ->
val barcode = barcodeList.getOrNull(0)
// `rawValue` is the decoded value of the barcode
barcode?.rawValue?.let { value ->
mCameraProvider?.unbindAll()
}
}
.addOnFailureListener {
// This failure will happen if the barcode scanning model
// fails to download from Google Play Services
}
.addOnCompleteListener {
// When the image is from CameraX analysis use case, must
// call image.close() on received images when finished
// using them. Otherwise, new images may not be received
// or the camera may stall.
imageProxy.image?.close()
imageProxy.close()
}
}
}
private fun allPermissionsGranted() = REQUIRED_PERMISSIONS.all {
ContextCompat.checkSelfPermission(context, it) == PackageManager.PERMISSION_GRANTED
}
private fun startCamera() {
val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
cameraProviderFuture.addListener({
// Used to bind the lifecycle of cameras to the lifecycle owner
val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
mCameraProvider = cameraProvider
// Preview
val surfacePreview = Preview.Builder()
.build()
.also {
it.setSurfaceProvider(preview.surfaceProvider)
}
// Select back camera as a default
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
try {
// Unbind use cases before rebinding
cameraProvider.unbindAll()
// Bind use cases to camera
cameraProvider.bindToLifecycle(
(context as FlutterActivity),
cameraSelector,
surfacePreview,
analysisUseCase,
)
} catch (exc: Exception) {
// Do nothing on exception
}
}, ContextCompat.getMainExecutor(context))
}
}
class CealScanQrViewFactory : PlatformViewFactory(StandardMessageCodec.INSTANCE) {
override fun create(context: Context, viewId: Int, args: Any?): PlatformView {
val creationParams = args as Map<String?, Any?>?
return CealScanQrView(context, viewId, creationParams)
}
}
ProcessCameraProvider.getInstance() returns a Guava ListenableFuture, so that class has to be on your module's classpath. Add this line to your app's build.gradle dependencies:
implementation 'com.google.guava:guava:29.0-android'
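If the module's build script uses the Kotlin Gradle DSL (build.gradle.kts) instead, the equivalent is:
implementation("com.google.guava:guava:29.0-android")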
I'm trying to understand how to use CameraX by studying this example:
class MainActivity : AppCompatActivity(), EasyPermissions.PermissionCallbacks {
private var imageCapture: ImageCapture? = null
private lateinit var outputDirectory: File
private lateinit var cameraExecutor: ExecutorService
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
requestPermission()
// Set up the listener for take photo button
camera_capture_button.setOnClickListener { takePhoto() }
outputDirectory = getOutputDirectory()
cameraExecutor = Executors.newSingleThreadExecutor()
}
private fun takePhoto() {
// Get a stable reference of the modifiable image capture use case
val imageCapture = imageCapture ?: return
// Create time-stamped output file to hold the image
val photoFile = File(
outputDirectory,
SimpleDateFormat(
FILENAME_FORMAT, Locale.US
).format(System.currentTimeMillis()) + ".jpg"
)
// Create output options object which contains file + metadata
val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()
// Set up image capture listener, which is triggered after photo has
// been taken
imageCapture.takePicture(
outputOptions,
ContextCompat.getMainExecutor(this),
object : ImageCapture.OnImageSavedCallback {
override fun onError(exc: ImageCaptureException) {
Log.e(TAG, "Photo capture failed: ${exc.message}", exc)
}
override fun onImageSaved(output: ImageCapture.OutputFileResults) {
val savedUri = Uri.fromFile(photoFile)
val msg = "Photo capture succeeded: $savedUri"
Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
Log.d(TAG, msg)
}
})
}
private fun startCamera() {
val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
cameraProviderFuture.addListener({
// Used to bind the lifecycle of cameras to the lifecycle owner
val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
// Preview
val preview = Preview.Builder()
.build()
.also {
it.setSurfaceProvider(viewFinder.surfaceProvider)
}
imageCapture = ImageCapture.Builder()
.build()
val imageAnalyzer = ImageAnalysis.Builder()
.build()
.also {
it.setAnalyzer(cameraExecutor, LuminosityAnalyzer { luma ->
Log.d(TAG, "Average luminosity: $luma")
})
}
// Select back camera as a default
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
try {
// Unbind use cases before rebinding
cameraProvider.unbindAll()
// Bind use cases to camera
cameraProvider.bindToLifecycle(
this, cameraSelector, preview, imageCapture, imageAnalyzer
)
} catch (exc: Exception) {
Log.e(TAG, "Use case binding failed", exc)
}
}, ContextCompat.getMainExecutor(this))
}
private fun getOutputDirectory(): File {
val mediaDir = externalMediaDirs.firstOrNull()?.let {
File(it, resources.getString(R.string.app_name)).apply { mkdirs() }
}
return if (mediaDir != null && mediaDir.exists())
mediaDir else filesDir
}
override fun onDestroy() {
super.onDestroy()
cameraExecutor.shutdown()
}
private fun requestPermission() {
if (CameraUtility.hasCameraPermissions(this)) {
startCamera()
return
}
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.Q) {
EasyPermissions.requestPermissions(
this,
"You need to accept the camera permission to use this app",
REQUEST_CODE_CAMERA_PERMISSION,
Manifest.permission.CAMERA
)
} else {
EasyPermissions.requestPermissions(
this,
"You need to accept the camera permission to use this app",
REQUEST_CODE_CAMERA_PERMISSION,
Manifest.permission.CAMERA
)
}
}
It works. The images captured with this code are rather big. On a Pixel 3, it produces 4032x3024 JPEGs (usually around 4.5 to 6 MB). Does CameraX have a built-in feature to reduce the JPEG size?
I tried this:
imageCapture = ImageCapture.Builder()
.setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
.setTargetResolution(Size(800, 600))
.build()
and this:
val imageAnalyzer = ImageAnalysis.Builder()
.setTargetResolution(Size(800,600))
.build()
.also {
it.setAnalyzer(cameraExecutor, LuminosityAnalyzer { luma ->
})
}
Neither works; I still get the same 4032x3024 JPEG. I wonder if I'm misunderstanding the CameraX API.
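One likely reason: setTargetResolution is only a preference, and it must be expressed in the coordinate frame after the target rotation is applied, so in portrait you may need Size(600, 800) rather than Size(800, 600); also, the resolution set on the ImageAnalysis use case has no effect on the ImageCapture output. Recent CameraX versions additionally offer ImageCapture.Builder.setJpegQuality(...), if yours has it. Failing that, a hedged workaround sketch using plain Android APIs, assuming post-processing the saved file is acceptable, is to downscale and recompress it after onImageSaved:
// imports: java.io.File, java.io.FileOutputStream, android.graphics.Bitmap, android.graphics.BitmapFactory
// Note: rewriting the file like this drops its EXIF metadata, including rotation.
private fun shrinkJpeg(photoFile: File, sampleSize: Int = 4, quality: Int = 80) {
    val options = BitmapFactory.Options().apply { inSampleSize = sampleSize }
    val scaled = BitmapFactory.decodeFile(photoFile.absolutePath, options) ?: return
    FileOutputStream(photoFile).use { out ->
        scaled.compress(Bitmap.CompressFormat.JPEG, quality, out)
    }
    scaled.recycle()
}

// e.g. from onImageSaved, off the main thread:
// cameraExecutor.execute { shrinkJpeg(photoFile) }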
How to take images using the front and back cameras with CameraX at the same time?
I capture a single image from either camera facing using the following code:
@Inject
lateinit var preview: Preview
@Inject
lateinit var imageCapture: ImageCapture
@Inject
lateinit var cameraProviderFuture : ListenableFuture<ProcessCameraProvider>
@Inject
lateinit var executor: Executor
private var cameraSelector : CameraSelector? = null
After the camera permission is granted, the following function is called to initialize the camera and other necessary components:
private fun initCamera() {
cameraSelector = null
cameraSelector = CameraSelector.Builder()
.requireLensFacing(cameraLensFacing)
.build()
cameraSelector?.let { mCameraSelector ->
cameraProviderFuture.addListener({
val cameraProvider = cameraProviderFuture.get()
cameraProvider.unbindAll()
cameraProvider.bindToLifecycle(this@MainActivity, mCameraSelector, preview,
imageCapture)
}, executor)}
}
When the capture button is clicked, the image capture process runs:
private fun capturePhoto() {
getOutPutOptions()?.let { outputOption ->
imageCapture.apply {
flashMode = turnOnFlash
takePicture(
outputOption,
executor,
object : ImageCapture.OnImageSavedCallback {
override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
//Here I get the result URI
}
override fun onError(exception: ImageCaptureException) {
showToast(exception.message.toString())
}
}
)
}
}
}
Everything is working fine. But my problem is taking images from the front and back cameras simultaneously. Is it possible to achieve this?
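With a single bindToLifecycle call, CameraX opens only one camera at a time, so this code alone can't capture from both lenses simultaneously. Newer CameraX releases (1.3 and later) add a concurrent-camera API on devices that support it; the sketch below is my understanding of that API and should be verified against your CameraX version and cameraProvider.availableConcurrentCameraInfos (frontPreview and frontImageCapture are a hypothetical second set of use cases for the front lens):
// imports: androidx.camera.core.ConcurrentCamera, androidx.camera.core.UseCaseGroup
val backConfig = ConcurrentCamera.SingleCameraConfig(
    CameraSelector.DEFAULT_BACK_CAMERA,
    UseCaseGroup.Builder()
        .addUseCase(preview)
        .addUseCase(imageCapture)
        .build(),
    this@MainActivity
)
val frontConfig = ConcurrentCamera.SingleCameraConfig(
    CameraSelector.DEFAULT_FRONT_CAMERA,
    UseCaseGroup.Builder()
        .addUseCase(frontPreview)
        .addUseCase(frontImageCapture)
        .build(),
    this@MainActivity
)

cameraProvider.unbindAll()
val concurrentCamera = cameraProvider.bindToLifecycle(listOf(backConfig, frontConfig))
// then call takePicture(...) on each ImageCapture, as in capturePhoto() above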
I have made a very simple app to take a picture using CameraX, with a viewfinder, and it displays the captured image in the same activity.
I am trying to get all the images taken from internal storage and display them in another activity in the app with a RecyclerView.
I'm fine with RecyclerViews, and I can use one with external APIs to display images and data, but I just cannot figure out how to get the images from internal storage and add them to a list to display in the RecyclerView. I've tried checking the documentation but I'm getting nowhere; all I need is simple code that will let me display the images as thumbnails.
class MainActivity : AppCompatActivity() {
private var imageCapture: ImageCapture? = null
private lateinit var outputDirectory: File
private lateinit var cameraExecutor: ExecutorService
private val TAG = "Snap"
private val FILENAME_FORMAT = "yyyy-MM-dd-HH-mm-ss-SSS"
private val REQUEST_CODE_PERMISSIONS = 10
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
btnSnap.setOnClickListener {
takePhoto()
}
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) ==
PackageManager.PERMISSION_GRANTED
) {
startCamera()
} else {
ActivityCompat.requestPermissions(
this,
arrayOf(Manifest.permission.CAMERA), REQUEST_CODE_PERMISSIONS
)
}
outputDirectory = getOutputDirectory()
cameraExecutor = Executors.newSingleThreadExecutor()
}
override fun onRequestPermissionsResult(requestCode: Int,
permissions: Array<String>, grantResults: IntArray ) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) ==
PackageManager.PERMISSION_GRANTED) {
startCamera()
} else {
Toast.makeText(this, "Permissions not granted by the user",
Toast.LENGTH_SHORT).show()
finish()
}
}
private fun startCamera() {
val cameraProviderFuture = getInstance(this)
//add listener to the ProcessCameraProvider
cameraProviderFuture.addListener(Runnable {
val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
val preview = Preview.Builder().build().also {
it.setSurfaceProvider(viewFinder.createSurfaceProvider())}
imageCapture = ImageCapture.Builder().build()
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
try {
//unbind use cases before rebinding
cameraProvider.unbindAll()
cameraProvider.bindToLifecycle(this, cameraSelector, preview,
imageCapture)
} catch (exc: Exception) {
Log.e(TAG, "Use case binding failed", exc)
}
}, ContextCompat.getMainExecutor(this))
}
private fun takePhoto() {
    val imageCapture = imageCapture ?: return
val photoFile = File(outputDirectory, SimpleDateFormat(FILENAME_FORMAT,
Locale.UK).format(System.currentTimeMillis()) + ".jpg")
val outputOpts = ImageCapture.OutputFileOptions.Builder(photoFile).build()
imageCapture.takePicture(outputOpts,
ContextCompat.getMainExecutor(this), object :
ImageCapture.OnImageSavedCallback {
override fun onError(exc: ImageCaptureException) {
Log.e(TAG, "Photo capture failed: ${exc.message}", exc)
}
//save the photo to the file and display on screen
override fun onImageSaved(output: ImageCapture.OutputFileResults) {
val savedUri = Uri.fromFile(photoFile)
imgSnap.setImageURI(savedUri)
}
})
}
private fun getOutputDirectory(): File {
val mediaDir = externalMediaDirs.firstOrNull()?.let {
File(it, resources.getString(R.string.app_name)).apply { mkdirs() }
}
return if (mediaDir != null && mediaDir.exists()) mediaDir else filesDir
}
}
You can check out this link, which provides a better overview of using RecyclerView.
And in your onBindViewHolder method in your adapter, you can use this code to get the images from storage and show them in your RecyclerView as thumbnails.
String path = Environment.getExternalStorageDirectory() + "/myImage.jpg";
Bitmap bitmap = BitmapFactory.decodeFile(path);
You can learn more about getting the images from storage at this link.
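The snippet above is Java; a rough Kotlin sketch closer to the code in the question (the PhotoAdapter name, fixed thumbnail height, and inSampleSize are my own choices) lists the JPEGs written to getOutputDirectory() and binds them as downsampled bitmaps so the full-resolution photos don't exhaust memory:
// imports: java.io.File, android.graphics.BitmapFactory, android.view.ViewGroup,
//          android.widget.ImageView, androidx.recyclerview.widget.RecyclerView
class PhotoAdapter(private val photos: List<File>) :
    RecyclerView.Adapter<PhotoAdapter.PhotoHolder>() {

    class PhotoHolder(val imageView: ImageView) : RecyclerView.ViewHolder(imageView)

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PhotoHolder {
        val imageView = ImageView(parent.context).apply {
            layoutParams = ViewGroup.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, 300)
            scaleType = ImageView.ScaleType.CENTER_CROP
        }
        return PhotoHolder(imageView)
    }

    override fun onBindViewHolder(holder: PhotoHolder, position: Int) {
        // Downsample while decoding so the full-size JPEGs don't exhaust memory
        val options = BitmapFactory.Options().apply { inSampleSize = 8 }
        holder.imageView.setImageBitmap(
            BitmapFactory.decodeFile(photos[position].absolutePath, options)
        )
    }

    override fun getItemCount() = photos.size
}

// In the gallery activity, reuse the same directory logic as getOutputDirectory(),
// and remember to also set a LayoutManager, e.g. LinearLayoutManager(this)
val photos = getOutputDirectory()
    .listFiles { file -> file.extension.equals("jpg", ignoreCase = true) }
    ?.sortedByDescending { it.lastModified() }
    ?: emptyList()
recyclerView.adapter = PhotoAdapter(photos)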
I am trying to make an app that allows the user to switch between different ML models on the fly. Right now I have a simple app that can run the camera with no model, and the camera with an object detection model. There will be five such models in the final app.
This is my HomeActivity:
class HomeActivity : AppCompatActivity(), EasyPermissions.PermissionCallbacks {
private val rqSpeechRec = 102
private var tts: TextToSpeech? = null
private lateinit var binding: ActivityHomeBinding
private lateinit var objectDetector: ObjectDetector
private lateinit var cameraProviderFuture: ListenableFuture<ProcessCameraProvider>
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
// Binding ViewData
binding = ActivityHomeBinding.inflate(layoutInflater)
setContentView(binding.root)
supportActionBar?.hide() // Hiding App bar
requestCameraPermission() // Requesting Camera Permission
}
This is my Request Camera Permission function.
private fun requestCameraPermission() {
speakOut("Allow Eynetic to access the camera to take photos or videos.")
EasyPermissions.requestPermissions(
this,
"This app can not work without camera.",
Constants.PERMISSION_CAMERA_REQUEST_CODE,
permission.CAMERA
)
}
override fun onRequestPermissionsResult(
requestCode: Int,
permissions: Array<out String>,
grantResults: IntArray
) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
EasyPermissions.onRequestPermissionsResult(requestCode, permissions, grantResults, this)
}
override fun onPermissionsDenied(requestCode: Int, perms: List<String>) {
speakOut("Permissions Denied.")
if (EasyPermissions.somePermissionDenied(this, perms.first()))
SettingsDialog.Builder(this).build()
.show() // If permissions are permanently denied show settings.
else
requestCameraPermission()
}
override fun onPermissionsGranted(requestCode: Int, perms: List<String>) {
cameraProviderFuture = ProcessCameraProvider.getInstance(this)
cameraProviderFuture.addListener({
startCamera(cameraProviderFuture.get())
}, ContextCompat.getMainExecutor(this))
// Introduction
val intro =
"Welcome to Eyenetic. Please activate any module through voice command.The nodules are obstacle detection, scene recognition, currency detection, human object interaction and human human interaction."
speakOut(intro)
allButtons()
}
I only add the listener to cameraProviderFuture when permission is granted, and when permission is granted I start the camera with no model running. Note that each time the app is opened, no model will be running.
@SuppressLint("UnsafeOptInUsageError")
private fun startCamera(cameraProvider: ProcessCameraProvider) {
val preview = Preview.Builder().build()
val cameraSelector =
CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build()
preview.setSurfaceProvider(binding.previewView.surfaceProvider)
cameraProvider.bindToLifecycle(this as LifecycleOwner, cameraSelector, preview)
}
And the code for loading the object detection model:
private fun readyObjectDetectionModel()
{
val localModel = LocalModel.Builder().setAbsoluteFilePath("object_detection.tflite").build()
val customObjectDetectionOptions = CustomObjectDetectorOptions.Builder(localModel)
.setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
.enableClassification()
.setClassificationConfidenceThreshold(0.5f)
.build()
objectDetector = ObjectDetection.getClient(customObjectDetectionOptions)
}
Code for object detection:
@SuppressLint("UnsafeOptInUsageError")
private fun startObjectDetection(cameraProvider: ProcessCameraProvider) {
val preview = Preview.Builder().build()
val cameraSelector =
CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build()
preview.setSurfaceProvider(binding.previewView.surfaceProvider)
val imageAnalysis = ImageAnalysis.Builder().setTargetResolution(Size(1280, 720))
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST).build()
imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this),
{imageProxy ->
val rotationDegrees = imageProxy.imageInfo.rotationDegrees
val image = imageProxy.image
if (image != null) {
val processImage = InputImage.fromMediaImage(image, rotationDegrees)
objectDetector.process(processImage)
.addOnSuccessListener {
imageProxy.close()
}.addOnFailureListener{
imageProxy.close()
}
}
})
cameraProvider.bindToLifecycle(this as LifecycleOwner, cameraSelector, imageAnalysis, preview)
}
I can run both camera code paths by calling one at a time and commenting out the other. But I have made different buttons to trigger different models. What should I do before calling each method?
Right now the only way I can think of is to unbind the previous lifecycle binding and make a new one. But what about the redundant code like:
val preview = Preview.Builder().build()
val cameraSelector =
CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build()
preview.setSurfaceProvider(binding.previewView.surfaceProvider)
Is there a way to only change the use case on the currently bound lifecycle?
Should I make a camera fragment and switch between different fragments?
Any advice on a better approach to this problem?
One idea that comes to mind is to create an ImageAnalysis.Analyzer subclass that can handle different ML features (in your words, ML models).
In that case, you only set up the imageAnalysis use case once, and the camera will keep feeding images to your Analyzer.
In your Analyzer, you can design an API like switchTo(MLFeatureName name). When your UI button is pressed, you call the switchTo method. After such a switch, the images will be fed to a different detector.
Make sure to close unused detectors to release system resources.
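A minimal sketch of that idea (the MLFeature enum, the switchTo signature, and the wiring are assumptions for illustration, not an existing CameraX or ML Kit API):
enum class MLFeature { NONE, OBJECT_DETECTION /*, SCENE_RECOGNITION, ... */ }

class SwitchableAnalyzer(private val objectDetector: ObjectDetector) : ImageAnalysis.Analyzer {

    @Volatile
    private var current: MLFeature = MLFeature.NONE

    // Called from the UI buttons; subsequent frames go to the selected feature
    fun switchTo(feature: MLFeature) {
        current = feature
    }

    @SuppressLint("UnsafeOptInUsageError")
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null || current == MLFeature.NONE) {
            imageProxy.close()
            return
        }
        val input = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        when (current) {
            MLFeature.OBJECT_DETECTION ->
                objectDetector.process(input)
                    .addOnCompleteListener { imageProxy.close() }
            else -> imageProxy.close()
        }
    }
}
You would then bind the preview and a single imageAnalysis use case once, call imageAnalysis.setAnalyzer(executor, switchableAnalyzer), and have each button call switchableAnalyzer.switchTo(...) instead of rebinding use cases.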