How can I use ARCore with CameraX?

I know ARCore uses Camera2 by default, and I was wondering whether it can be used with CameraX instead.
I tried the ARCore shared-camera samples, but they use the Camera2 API.
Is there any approach or extension for using ARCore with CameraX?

Unfortunately, as of today, Google ARCore 1.35 is still not capable of using Android Jetpack's CameraX library, even though CameraX integrates with many ML Kit features, including barcode scanning, face detection, and text recognition.
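Since CameraX is not supported, the shared-camera path still goes through Camera2. Below is a minimal, hedged Kotlin sketch of opening an ARCore shared-camera session (method names from the ARCore SDK's SharedCamera API; `context`, `myStateCallback`, and `backgroundHandler` are assumed to exist, and error handling is omitted):

```kotlin
import android.hardware.camera2.CameraManager
import com.google.ar.core.Session
import java.util.EnumSet

// Create an ARCore session that supports sharing the camera with the app.
val session = Session(context, EnumSet.of(Session.Feature.SHARED_CAMERA))
val sharedCamera = session.sharedCamera
val cameraId = session.cameraConfig.cameraId

// Wrap your own CameraDevice.StateCallback so ARCore also receives camera
// lifecycle events, then open the camera through Camera2 as usual.
val wrappedCallback =
    sharedCamera.createARDeviceStateCallback(myStateCallback, backgroundHandler)
val cameraManager = context.getSystemService(CameraManager::class.java)
cameraManager.openCamera(cameraId, wrappedCallback, backgroundHandler)
```

This requires an Android device and camera permission; it is a sketch of the shared-camera flow, not a drop-in implementation.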

Related

CameraX recorded video analysis graphics error

Android CameraX throws the following when VideoCapture and ImageAnalysis are used at the same time:
java.lang.IllegalArgumentException: No supported surface combination is found for camera device-Id: 1. May be attempting to bind too many use cases. Existing surfaces: [] New configs: [androidx.camera.core.impl.ImageAnalysisConfig#7ea65ca , androidx.camera.core.impl.VideoCaptureConfig#e0d133b, androidx.camera.core.impl.PreviewConfig#20f5a58]
CameraX does not support this combination on most devices.
You can find the supported combinations here: Source
Notice that the video capture use case is not even listed among the supported combinations. However, as far as I know, the CameraX team is working on this.
For now, I would suggest using the Camera2 API. Here is a very well documented library that would allow you to do what you want to achieve.
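If you can drop one of the use cases, a combination such as Preview + ImageAnalysis is within CameraX's guaranteed combinations. A minimal Kotlin sketch, assuming a `ProcessCameraProvider` has already been obtained and `owner` is your `LifecycleOwner`:

```kotlin
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.lifecycle.LifecycleOwner

fun bindSupportedCombination(cameraProvider: ProcessCameraProvider, owner: LifecycleOwner) {
    val preview = Preview.Builder().build()
    val analysis = ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()

    cameraProvider.unbindAll()
    // Preview + ImageAnalysis is a guaranteed surface combination; adding
    // VideoCapture on top is what triggers the exception on many devices.
    cameraProvider.bindToLifecycle(owner, CameraSelector.DEFAULT_BACK_CAMERA, preview, analysis)
}
```

This is a sketch of the binding step only; wiring the preview to a view and setting an analyzer are left out.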

Camera2 face detection without mlkit

We were using CameraX with ML Kit for face detection, but now we need to move our face-detection module to a dynamic feature module, where ML Kit is currently not supported; it crashes at run time.
Now I'm looking for a solution that doesn't use Firebase ML Kit. Is it possible to use the Camera2 API to detect faces and draw a face rectangle on the preview?

Is there any Android API to detect face inside an image?

Is there any Android API to detect a face inside an image? For example, on iOS there is such an API, and I'm curious whether there is a similar API on Android.
The Android framework has the FaceDetector API, although it is only suitable for bitmaps (not real-time) and returns only the bounding rectangle location.
For more advanced features, such as real-time detection or face-contour detection, there is the ML Kit library offered by Google. This library can also be used for very simple use cases, such as getting the face location in a bitmap.
More about ML Kit in the next link:
https://developers.google.com/ml-kit/guides
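As an illustration of the framework API mentioned above, here is a minimal Kotlin sketch using android.media.FaceDetector on a still image. Note that this API requires an RGB_565 bitmap with an even width; the helper name is my own:

```kotlin
import android.graphics.Bitmap
import android.media.FaceDetector

// Detect up to 5 faces in a still image and return the non-null results.
fun detectFaces(source: Bitmap): List<FaceDetector.Face> {
    // FaceDetector only accepts RGB_565 bitmaps.
    val bitmap = source.copy(Bitmap.Config.RGB_565, false)
    val maxFaces = 5
    val faces = arrayOfNulls<FaceDetector.Face>(maxFaces)
    val found = FaceDetector(bitmap.width, bitmap.height, maxFaces)
        .findFaces(bitmap, faces)
    return faces.filterNotNull().take(found)
}
```

Each `Face` exposes a midpoint and eye distance rather than a full rectangle, so a bounding box must be derived from those values.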

What is Android CameraX?

What is Android CameraX?
There is a session about CameraX planned at Google I/O 2019. What is it? Is it a new framework API? Is it a new library?
https://events.google.com/io/schedule/events/8d400240-f31f-4ac2-bfab-f8347ef3ab3e
Does it mean that Camera2 API is deprecated?
https://github.com/googlesamples/android-Camera2Basic
What is Android CameraX?
CameraX is a new Jetpack library that lets developers control a device's camera and focuses on compatibility across devices going back to API level 21 (Lollipop). It was announced at Google I/O 2019 and has a dedicated documentation page alongside an official sample.
Does it mean that Camera2 API is deprecated?
Camera2 API is not deprecated; in fact, it is the foundation that CameraX is built on. CameraX also provides a Camera2 interop API that lets developers extend their CameraX implementation with Camera2 code.
For more information, the official documentation is available at https://developer.android.com/camerax
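As an example of the interop mentioned above, here is a hedged Kotlin sketch that applies a Camera2 capture-request option to a CameraX Preview use case via Camera2Interop (from the androidx.camera.camera2 artifact; this API is marked experimental in some versions, and the option chosen here is only illustrative):

```kotlin
import android.hardware.camera2.CaptureRequest
import androidx.camera.camera2.interop.Camera2Interop
import androidx.camera.core.Preview

fun buildPreviewWithCamera2Option(): Preview {
    val builder = Preview.Builder()
    // Reach into Camera2 from CameraX: disable auto-exposure as an example.
    Camera2Interop.Extender(builder)
        .setCaptureRequestOption(
            CaptureRequest.CONTROL_AE_MODE,
            CaptureRequest.CONTROL_AE_MODE_OFF
        )
    return builder.build()
}
```

The returned use case is then bound to a lifecycle like any other CameraX use case.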
At Google I/O 2019, Google added another powerful tool for camera development on Android, called CameraX, as part of Jetpack.
A few features of CameraX:
It is backwards compatible to Android 5.0 / Lollipop (API 21) and works on at least 90% of the devices in the market.
Under the hood, it uses and leverages the Camera2 API. It basically provides the same consistency as the Camera1 API via the Camera2 legacy layer, and it fixes a lot of issues across devices.
It also has a lot of awesome advanced features such as Portrait, HDR, and Night mode (provided your device supports them).
CameraX also introduces use cases, which let you focus on the task you need to get done instead of spending time on device-specific quirks. A few of them are Preview, Image Analysis, and Image Capture.
CameraX doesn't need specific start/stop calls in onResume() and onPause(); instead, it binds to the lifecycle of the view with the help of CameraX.bindToLifecycle().
There is also a list of known issues fixed with CameraX.
What more you can do with CameraX:
You can create a video recorder app using CameraX.
You can add extensions such as Portrait mode and HDR.
You can use Image Analysis to perform computer vision and ML; it provides an Analyzer that runs on each and every frame.
To read more about CameraX, refer to the guide for getting started with CameraX.
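The analyzer point above can be sketched in Kotlin. This example only logs each frame's timestamp; the class name is my own, and note that the early CameraX.bindToLifecycle() entry point was later replaced by ProcessCameraProvider.bindToLifecycle():

```kotlin
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executors

// An Analyzer runs on every frame delivered by the ImageAnalysis use case.
class TimestampAnalyzer : ImageAnalysis.Analyzer {
    override fun analyze(image: ImageProxy) {
        val timestamp = image.imageInfo.timestamp
        android.util.Log.d("TimestampAnalyzer", "frame at $timestamp ns")
        image.close() // must be closed so the next frame can be delivered
    }
}

val analysis = ImageAnalysis.Builder().build().apply {
    setAnalyzer(Executors.newSingleThreadExecutor(), TimestampAnalyzer())
}
```

The `analysis` use case is then bound to a lifecycle together with a Preview use case.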
You can check the official doc:
CameraX is an addition to Android Jetpack that makes it easier to add camera capabilities to your app. The library provides a number of compatibility fixes and workarounds to help make the developer experience consistent across many devices.
You can use CameraX to interface with a device's camera through an abstraction called a use case. The following use cases are currently available:
Preview: prepares a preview SurfaceTexture
Image analysis: provides CPU-accessible buffers for analysis, such as for machine learning
Image capture: captures and saves a photo
Use cases can be combined and active concurrently.
Just add the dependencies:
dependencies {
    // CameraX core library
    def camerax_version = "1.0.0-alpha01"
    implementation "androidx.camera:camera-core:$camerax_version"

    // If you want to use Camera2 extensions
    implementation "androidx.camera:camera-camera2:$camerax_version"
}
For information on how to use the CameraX library check here.

Arcore StereoScopic Rendering in Android

I would like to implement an Android application (min API 27) for a term project that lets users experience ARCore and mixed-reality features with a stereoscopic view in headsets such as Google Cardboard. In my preliminary research, I couldn't find valid resources or approaches for stereoscopic vision on ARCore, apart from some approaches in Unity3D and OpenGL and some frameworks such as Vuforia. As far as I know, ARCore 1.5 currently does not support this feature.
I considered using the Cardboard SDK and the ARCore SDK together, but I am not sure whether that is a naive approach or whether it would provide a solid foundation for the project and future work.
Is there a workaround to get the desired stereoscopic view for ARCore in native Android, or how could I implement a stereoscopic view for the given case (not asking for an actual implementation, just brainstorming)?
Thanks in advance.
There is Google Sceneform, a scenegraph-based visualization engine for AR applications, and there is a feature request asking for the same thing you have planned. To answer your question: there is a Java visualization library used throughout Google's own ARCore examples, and it has support for stereoscopic rendering. What is missing in their implementation is accounting for lens distortion.
