CameraX recorded video analysis graphics error - android

Android CameraX throws the following error when VideoCapture and ImageAnalysis are used at the same time:
java.lang.IllegalArgumentException: No supported surface combination is found for camera device-Id: 1. May be attempting to bind too many use cases. Existing surfaces: [] New configs: [androidx.camera.core.impl.ImageAnalysisConfig#7ea65ca , androidx.camera.core.impl.VideoCaptureConfig#e0d133b, androidx.camera.core.impl.PreviewConfig#20f5a58]

CameraX does not support this combination on most devices.
Here you can find the supported combinations: Source
Notice that the video capture use case is not even listed among the supported combinations. However, as far as I know, the CameraX team is working on this.
For now, I would suggest using the Camera2 API instead. Here is a very well documented library that will let you do what you want to achieve.
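If you do want to stay on CameraX with a reduced set of use cases, here is a minimal Kotlin sketch written against the newer ProcessCameraProvider API (the alpha-era CameraX API in the question differed slightly). It binds only Preview and ImageAnalysis, which stays within the guaranteed surface combinations, and catches the IllegalArgumentException that is thrown when a device cannot satisfy the requested combination. The lifecycleOwner and context parameters are assumed to be provided by your Activity or Fragment.

import android.content.Context
import android.util.Log
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner

fun bindSupportedUseCases(context: Context, lifecycleOwner: LifecycleOwner) {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()
        val preview = Preview.Builder().build()
        val analysis = ImageAnalysis.Builder().build()
        try {
            cameraProvider.unbindAll()
            // Preview + ImageAnalysis is within the guaranteed combinations;
            // adding VideoCapture on top is what triggers the exception on many devices.
            cameraProvider.bindToLifecycle(
                lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview, analysis)
        } catch (e: IllegalArgumentException) {
            // Thrown when the device cannot support the requested use-case combination.
            Log.e("CameraX", "Use case binding failed", e)
        }
    }, ContextCompat.getMainExecutor(context))
}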

Related

Fetch Raw frames from capturing video Android Camera

I do not know much about the Camera API, but I need to use frames from a video being captured at more than 30 fps with a good-quality camera (S9).
Can anybody suggest code for this?
I tried to find suitable code for it but failed.
Thanks in advance,
Mohit
You are lucky: not so long ago a component of Android Jetpack called CameraX was released. Sadly, it is still in the alpha stage, meaning you should avoid using it in production since it might have breaking changes in the future. This component is built on top of the Camera2 API, which is a low-level API for working with the camera.
If you plan to ship your app to production, I highly recommend using the Camera2 API; it is low level, but it gives you full control over the camera.
Here is an example to get you started.
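If the alpha caveat is acceptable and you do try CameraX, its per-frame analyzer is the most direct way to get raw frames. Below is a minimal Kotlin sketch against the current ImageAnalysis API (the builder names differed slightly in the early alphas); FrameAnalyzer, buildAnalysis, and the processing body are illustrative placeholders. The returned use case would then be bound with bindToLifecycle alongside a Preview.

import android.content.Context
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import androidx.core.content.ContextCompat

class FrameAnalyzer : ImageAnalysis.Analyzer {
    override fun analyze(image: ImageProxy) {
        // Frames arrive as YUV_420_888; plane 0 is the Y (luma) buffer.
        val yBuffer = image.planes[0].buffer
        // ... process the frame here (placeholder) ...
        image.close() // must be called, otherwise the pipeline stalls
    }
}

fun buildAnalysis(context: Context): ImageAnalysis =
    ImageAnalysis.Builder()
        // Keep only the latest frame so analysis never blocks the camera at high frame rates.
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
        .also { it.setAnalyzer(ContextCompat.getMainExecutor(context), FrameAnalyzer()) }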

Does the new CameraX api provide multi-camera support for processing dual camera streams separately?

I want to build an application that takes photos with both cameras of a dual-camera device at the same time, and I wanted to ask whether the new CameraX API allows me to process both camera streams separately.
As far as I can see, CameraX does not support the multi-camera API yet, at least not as of 1.0.0-alpha08. They just added a new experimental API to get the camera ID, but that's about it.
See https://developer.android.com/jetpack/androidx/releases/camera#camera-core-1.0.0-alpha08

What is Android CameraX?

There is a session about CameraX planned at Google I/O 2019. What is it? Is it a new framework API? Is it a new library?
https://events.google.com/io/schedule/events/8d400240-f31f-4ac2-bfab-f8347ef3ab3e
Does it mean that Camera2 API is deprecated?
https://github.com/googlesamples/android-Camera2Basic
What is Android CameraX?
CameraX is a new Jetpack library that lets developers control a device's camera and focuses on compatibility across devices going back to API level 21 (Lollipop). It was announced at Google I/O 2019 and has a dedicated documentation page alongside an official sample.
Does it mean that Camera2 API is deprecated?
Camera2 API is not deprecated; in fact, it is the foundation that CameraX is built on. CameraX also provides a Camera2 interop API that lets developers extend their CameraX implementation with Camera2 code.
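As a rough illustration of that interop surface, the sketch below uses Camera2Interop.Extender from the androidx.camera.camera2.interop package to apply a raw Camera2 capture-request option (continuous autofocus) on top of a CameraX use case. This part of the API is marked experimental, so the exact names and opt-in mechanics may change between releases.

import android.hardware.camera2.CaptureRequest
import androidx.annotation.OptIn
import androidx.camera.camera2.interop.Camera2Interop
import androidx.camera.camera2.interop.ExperimentalCamera2Interop
import androidx.camera.core.Preview

@OptIn(ExperimentalCamera2Interop::class)
fun buildPreviewWithCamera2Options(): Preview {
    val builder = Preview.Builder()
    // Push a raw Camera2 option onto the CameraX use case via the interop extender.
    Camera2Interop.Extender(builder)
        .setCaptureRequestOption(
            CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE
        )
    return builder.build()
}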
For more information, the official documentation is available at https://developer.android.com/camerax
At Google I/O 2019, Google announced another powerful tool for camera development on Android, called CameraX, as part of Jetpack.
A few features of CameraX:
It is backwards compatible down to Android 5.0 / Lollipop (API 21) and works with at least 90% of the devices on the market.
Under the hood, it uses and leverages the Camera2 API. It basically provides the same consistency as the Camera1 API via the Camera2 Legacy layer, and it fixes a lot of issues across devices.
It also has a lot of awesome advanced features such as Portrait, HDR and Night mode, provided your device supports them (see the sketch after this list).
CameraX also introduces use cases, which let you focus on the task you need to get done instead of wasting time on device-specific quirks. A few of them are Preview, Image Analysis and Image Capture.
CameraX does not need specific start/stop calls in onResume() and onPause(); instead it binds to the lifecycle of the view with the help of CameraX.bindToLifecycle().
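As referenced in the list above, those extensions live in a separate camera-extensions artifact. A rough Kotlin sketch using the current ExtensionsManager API (not the original alpha extensions API, so treat it as illustrative) might look like this:

import android.content.Context
import androidx.camera.core.CameraSelector
import androidx.camera.extensions.ExtensionMode
import androidx.camera.extensions.ExtensionsManager
import androidx.camera.lifecycle.ProcessCameraProvider

// Returns an HDR-enabled CameraSelector when the device supports the HDR extension,
// otherwise falls back to the plain back-camera selector.
fun hdrSelector(context: Context, cameraProvider: ProcessCameraProvider): CameraSelector {
    val extensionsManager =
        ExtensionsManager.getInstanceAsync(context, cameraProvider).get() // blocking for brevity
    val base = CameraSelector.DEFAULT_BACK_CAMERA
    return if (extensionsManager.isExtensionAvailable(base, ExtensionMode.HDR)) {
        extensionsManager.getExtensionEnabledCameraSelector(base, ExtensionMode.HDR)
    } else {
        base
    }
}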
CameraX also fixes a number of known device-specific issues. Here is what more you can do with CameraX:
You can create a video recorder app using CameraX.
You can add extensions such as Portrait Mode, HDR, etc.
You can use Image Analysis to perform computer vision and ML: you implement an Analyzer that is run on each and every frame.
To read more about CameraX, refer here, and see Getting Started with CameraX to begin.
You can check the official doc:
CameraX is an addition to Android Jetpack that makes it easier to add camera capabilities to your app. The library provides a number of compatibility fixes and workarounds to help make the developer experience consistent across many devices.
You can use CameraX to interface with a device’s camera through an abstraction called a use case. The following use cases are currently available:
Preview: prepares a preview SurfaceTexture
Image analysis: provides CPU-accessible buffers for analysis, such as for machine learning
Image capture: captures and saves a photo
Use cases can be combined and active concurrently.
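As a small illustration of combining use cases (written against the current androidx.camera API, which is slightly newer than the alpha release quoted here), the sketch below binds Preview and ImageCapture together and then takes a photo. The lifecycleOwner, context, and outputFile parameters are assumed to be supplied by the caller.

import android.content.Context
import android.util.Log
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner
import java.io.File

fun bindAndCapture(context: Context, lifecycleOwner: LifecycleOwner, outputFile: File) {
    // Blocking get() for brevity; in real code use addListener on the future.
    val cameraProvider = ProcessCameraProvider.getInstance(context).get()
    val preview = Preview.Builder().build()
    val imageCapture = ImageCapture.Builder().build()

    // Preview and ImageCapture run concurrently on the same camera.
    cameraProvider.bindToLifecycle(
        lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview, imageCapture)

    val options = ImageCapture.OutputFileOptions.Builder(outputFile).build()
    imageCapture.takePicture(options, ContextCompat.getMainExecutor(context),
        object : ImageCapture.OnImageSavedCallback {
            override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                Log.d("CameraX", "Saved photo to ${output.savedUri}")
            }
            override fun onError(exc: ImageCaptureException) {
                Log.e("CameraX", "Capture failed", exc)
            }
        })
}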
Just add the dependencies:
dependencies {
    // CameraX core library
    def camerax_version = "1.0.0-alpha01"
    implementation "androidx.camera:camera-core:$camerax_version"

    // If you want to use Camera2 extensions
    implementation "androidx.camera:camera-camera2:$camerax_version"
}
For information on how to use the CameraX library check here.

How to encode camera frames into mp4 on android

I take camera preview frames from the Android camera at 640x480 (sufficient for me) and make some modifications to them. But now I need to encode them into a new MP4 file (with audio).
Is this somehow possible? I can't use ffmpeg because of its not-so-good license, but I've found the Stagefright framework, which should probably be capable of doing this. However, I did not find any official documentation or tutorials for the kind of thing I need to do.
Is there a way to do it with this framework? I don't need code; I would be very glad if you could just point me in the right direction.
There is one scenario where the described use case is realized. Consider a setup where the camera output is fed to an OpenGL library, some effects are applied to the preview frames, and the result needs to be recorded.
In this case, you can use the traditional MediaRecorder with a GrallocSource instead of a CameraSource. The setup would look like this:
MediaRecorder is set up with the GrallocSource. The input surfaces for recording are provided by the combined Camera + OpenGL operation, which implements a SurfaceTextureClient. A good example of this can be found in the SurfaceMediaSource_test module.
Stagefright is quite good if you must support API 9 and higher, but this framework is not official, as you saw. You can use the sample code in platform/frameworks/av at your own risk.
The Google source includes CameraSource, which provides the camera frames directly to the encoder. While this approach may be much more efficient (the pixels are not copied to user space at all), it does not allow manipulation. It is possible to modify the C++ source, but I strongly recommend accessing the camera in Java and passing the preview frames via JNI to the Stagefright (OpenMAX) encoder. On some devices, this may force you to use a software encoder. You must convert the video frames to a YUV planar format for the encoder; see libyuv for optimized converters.
If you can restrict your support to API 16 and higher, it is safer to use the official Java MediaCodec API.
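To make the MediaCodec route more concrete, here is a rough Kotlin sketch (video only, no audio, and no frame-feeding loop) of configuring an H.264 encoder and an MP4 muxer. The color format is device-dependent and should really be queried from the codec (COLOR_FormatYUV420Flexible needs API 21), and MediaMuxer itself requires API 18; treat this as a starting point rather than a complete encoder.

import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat
import android.media.MediaMuxer

// Configure an H.264 encoder for 640x480 input and an MP4 muxer for its output.
fun createEncoderAndMuxer(outputPath: String): Pair<MediaCodec, MediaMuxer> {
    val format = MediaFormat.createVideoFormat("video/avc", 640, 480).apply {
        // Flexible YUV 4:2:0 input; on older devices you may have to pick a
        // specific color format reported by the codec instead.
        setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible)
        setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000)
        setInteger(MediaFormat.KEY_FRAME_RATE, 30)
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
    }
    val codec = MediaCodec.createEncoderByType("video/avc")
    codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    val muxer = MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    // Next steps: feed YUV frames into the codec's input buffers, drain its output
    // buffers, add the codec's output format as a muxer track, and write the samples.
    return codec to muxer
}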

OpenCV for Android: Autofocus native camera

Is it possible to control the autofocus feature of the Android camera using OpenCV's libnative_camera*.so?
Or maybe it's possible to set the focus distance manually?
Is there an alternative approach? (Maybe it's better to use the Android API to control the camera, then grab each frame in the onPreviewFrame callback and pass it to native code.)
If your intention is to control the camera on your own, then the Android Camera APIs suck. In particular, they suck when it comes to handing your hardware camera device number to the JavaCV native camera library. Without a native device number, JavaCV is clueless about which camera (front or back) to connect to.
If your intention is only to perform object detection and the like, the Android Camera APIs coupled with JavaCV should work. Set up a callback buffer of sufficient size, call setPreviewCallbackWithBuffer, set a sufficient preview frame rate, and once you start getting preview frames in ImageFormat.NV21 format (mind you, this is the only format supported for preview frames even in ICS), pass them off to JavaCV to perform object detection.
Autofocus in the Android Camera APIs sucks big time. I have been researching feasible solutions for over a month.
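A minimal Kotlin sketch of the callback-buffer setup described above, using the old (now deprecated) android.hardware.Camera API, is shown below. It also switches to continuous autofocus when the device offers it, which is a common partial workaround for the focus problem. handleNv21Frame is a hypothetical hook where the NV21 bytes would be handed to JavaCV/OpenCV.

import android.hardware.Camera

@Suppress("DEPRECATION")
fun startPreviewWithBuffer(camera: Camera) {
    // Assumes a preview SurfaceHolder/SurfaceTexture has already been set on the camera.
    val params = camera.parameters

    // Use continuous autofocus if the device supports it.
    if (Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO in params.supportedFocusModes) {
        params.focusMode = Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO
        camera.parameters = params
    }

    // NV21 uses 1.5 bytes per pixel, so size the reusable callback buffer accordingly.
    val size = camera.parameters.previewSize
    val buffer = ByteArray(size.width * size.height * 3 / 2)
    camera.addCallbackBuffer(buffer)

    camera.setPreviewCallbackWithBuffer { data, cam ->
        handleNv21Frame(data, size.width, size.height) // hypothetical JavaCV/OpenCV hand-off
        cam.addCallbackBuffer(data)                    // return the buffer so frames keep coming
    }
    camera.startPreview()
}

// Placeholder for the native/JavaCV processing of an NV21 frame.
fun handleNv21Frame(data: ByteArray, width: Int, height: Int) { /* ... */ }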
