I need a function that tells whether the phone's camera supports video at at least 30fps.
I'm using both the camera1 and camera2 APIs, depending on the phone's camera2 support (or lack thereof).
I thought about using this:
val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
val cameras = manager.cameraIdList
for (camera in cameras) {
    val cc = manager.getCameraCharacteristics(camera)
    val fpsRange = cc.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES)!!
    Log.d("TAG", "fps : ${fpsRange.find { it.lower == 30 && it.upper == 30 }}") // is this correct?
}
but I'm not sure whether it's the right solution; I don't understand the ranges I get back, or whether I chose the right CameraCharacteristics key.
That check matches a fixed 30fps frame rate (minimum frame rate = maximum frame rate = 30). That will generally be supported on all Android devices, since it's required for standard video recording.
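If it helps, here is a minimal sketch of a helper built on that idea (the function name supportsAtLeast30Fps is mine; it just asks whether any camera advertises an AE target FPS range reaching 30fps):

import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Sketch: true if any camera reports an AE target FPS range whose upper
// bound is at least 30. Name and structure are illustrative only.
fun supportsAtLeast30Fps(context: Context): Boolean {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    return manager.cameraIdList.any { id ->
        val ranges = manager.getCameraCharacteristics(id)
            .get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES)
        ranges?.any { it.upper >= 30 } == true
    }
}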
I'm trying to access the telephoto lens on a Samsung S21 Ultra using the Camera2 and CameraX APIs. They list two back and two front cameras (see the attached snapshot), but none of these is the telephoto lens. The two cameras accessible via the APIs are the wide and ultra-wide lenses; the telephoto and periscope telephoto are missing.
Can someone confirm whether the Samsung S21 Ultra's telephoto camera is open to third-party apps and can be accessed via the Camera2 APIs?
I was also thinking of using a logical camera that binds multiple physical cameras and accessing them that way, but when I try to retrieve the physical camera IDs from the logical cameras, I always get a null value when printing them. See the code snippet below.
fun findDualCameras(manager: CameraManager, facing: Int? = null): Array<DualCamera> {
    val dualCameras = ArrayList<DualCamera>()

    // Iterate over all the available camera characteristics
    manager.cameraIdList.map {
        Pair(manager.getCameraCharacteristics(it), it)
    }.filter {
        // Filter by cameras facing the requested direction
        facing == null || it.first.get(CameraCharacteristics.LENS_FACING) == facing
    }.filter {
        // Filter by logical cameras
        it.first.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)!!.contains(
            CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA
        )
    }.forEach {
        // All possible pairs from the list of physical cameras are valid results
        // NOTE: There could be N physical cameras as part of a logical camera grouping
        val physicalCameras = it.first.physicalCameraIds.toTypedArray()
        for (idx1 in physicalCameras.indices) {
            for (idx2 in (idx1 + 1) until physicalCameras.size) {
                dualCameras.add(
                    DualCamera(it.second, physicalCameras[idx1], physicalCameras[idx2])
                )
            }
        }
    }

    return dualCameras.toTypedArray()
}
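For reference, the DualCamera type used above isn't shown in the snippet; it's presumably just a small holder along the lines of the one in the official multi-camera codelab (assumed definition):

// Assumed definition of the holder used by findDualCameras above
data class DualCamera(val logicalId: String, val physicalId1: String, val physicalId2: String)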
Is there any other way to access the telephoto camera directly on the S21 Ultra, or can we get a stream from the telephoto camera indirectly on the S21 Ultra?
I currently have an app with a minimum API of 21 that has both a camera1 and a camera2 implementation. Based on the following code I select whether to use camera1 or camera2:
val cameraIds = manager.cameraIdList
for (id in cameraIds) {
    val info = manager.getCameraCharacteristics(id)
    val facing = info.get(CameraCharacteristics.LENS_FACING)!!
    val level = info.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)!!
    val hasFullLevel = level == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL
    val syncLatency = info.get(CameraCharacteristics.SYNC_MAX_LATENCY)!!
    val hasEnoughCapability = syncLatency == CameraCharacteristics.SYNC_MAX_LATENCY_PER_FRAME_CONTROL

    // All these are guaranteed by
    // CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL, but checking
    // for only the things we care about expands the range of devices we can run on.
    // We want:
    // - Back-facing camera
    // - Per-frame synchronization (so that exposure can be changed every frame)
    if (facing == CameraCharacteristics.LENS_FACING_BACK && (hasFullLevel || hasEnoughCapability)) {
        // Found suitable camera - get info, open, and set up outputs
        foundCamera = true
        break
    }
}
I plan to update to CameraX in the near future, and I would love to drop the camera1 implementation and have one unified CameraX implementation. Does anyone have experience with CameraX, and whether it handles all the cases I was previously using the camera1 fallback for? A lot of our customers are in developing markets, so we need to maintain support for as many older devices as possible.
CameraX does not have a Camera API1 fallback. It relies on camera2 LEGACY support, but yes, the library includes many workarounds for specific problems. Please look at the list of devices they tested: https://developer.android.com/training/camerax/devices.
You don't explain what "all the cases" you were previously using the camera1 fallback for are, but if you have such a list, you can go through the release notes to check whether they are addressed by the current version of the library. If they are not, you are welcome to add them to the issues list.
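If it helps when deciding whether the camera1 path is still needed, here is a rough sketch of detecting camera2 LEGACY-only devices (the same devices CameraX has to cover via its LEGACY support); the helper name is mine:

import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Sketch: true if every back-facing camera only reports LEGACY camera2 support.
// Helper name and structure are illustrative, not part of CameraX.
fun isLegacyOnly(manager: CameraManager): Boolean {
    return manager.cameraIdList
        .map { manager.getCameraCharacteristics(it) }
        .filter { it.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_BACK }
        .all {
            it.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL) ==
                CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY
        }
}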
I started to use the CameraX (1.0.8 alpha) library in my Android application, and during development on a real Samsung A50 device plus emulators everything worked fine. But after the release to the Play Store I see a lot of crashes on Pixel 2 XL and Nexus 5X devices (I tried my app on emulators for these devices, and everything works fine).
I just call bindToLifecycle:
Fatal Exception: java.lang.IllegalArgumentException: Can not get supported output size under supported maximum for the format: 34
at androidx.camera.camera2.internal.SupportedSurfaceCombination.getSupportedOutputSizes(SupportedSurfaceCombination.java:29)
at androidx.camera.camera2.internal.Camera2DeviceSurfaceManager.getSuggestedResolutions(Camera2DeviceSurfaceManager.java:29)
at androidx.camera.core.CameraX.calculateSuggestedResolutions(CameraX.java:14)
at androidx.camera.lifecycle.ProcessCameraProvider.bindToLifecycle(ProcessCameraProvider.java)
Has anybody had such issues?
Code for init:
@SuppressLint("RestrictedApi")
private void definePermissionsCallback() {
    allPermissionsCheck = Dexter.withActivity(this)
        .withPermissions(WRITE_EXTERNAL_STORAGE, Manifest.permission.CAMERA)
        .withListener(new MultiplePermissionsListener() {
            @Override
            public void onPermissionsChecked(MultiplePermissionsReport report) {
                if (report.areAllPermissionsGranted()) {
                    isFileStoragePermissionGiven = true;
                    isCameraPermissionGiven = true;
                    SharedPreferences sharedPreferences = getSharedPreferences(APPLICATION_SETTINGS, MODE_PRIVATE);
                    sharedPreferences.edit().putBoolean(ALLOW_CAMERA, true).apply();
                    findViewById(R.id.switch_camera).setEnabled(true);
                    cameraProviderFuture = ProcessCameraProvider.getInstance(MainActivity.this);
                    cameraProviderFuture.addListener(() -> {
                        try {
                            cameraProvider = (ProcessCameraProvider) cameraProviderFuture.get();
                            bindPreview();
                        } catch (ExecutionException | InterruptedException e) {
                            Crashlytics.logException(e);
                        }
                    }, ContextCompat.getMainExecutor(MainActivity.this));
                    return;
                }
                // ...
void bindPreview() {
    cameraProvider.unbindAll();
    Preview preview = new Preview.Builder()
        .setTargetName("Preview")
        .build();
    preview.setPreviewSurfaceProvider(previewView.getPreviewSurfaceProvider());
    cameraSelector = new CameraSelector.Builder().requireLensFacing(lensFacing).build();
    cameraProvider.bindToLifecycle(this, cameraSelector, preview);
}
This may not answer it perfectly and I could be wrong, but give the following a try:
The Preview instance you bind to the lifecycle has a Builder which allows setting either a target resolution or a target aspect ratio with setTargetResolution or setTargetAspectRatio.
The documentation calls out that if these are not set:
If not set, the default selected resolution will be the best size match to the device's screen resolution, or to 1080p (1920x1080), whichever is smaller.
And
If not set, resolutions with aspect ratio 4:3 will be considered in higher priority.
Respectively.
Based on the error message
Can not get supported output size under supported maximum for the format
it looks like it's not able to find a supported output size for the default values on certain devices. This is possible because the HAL implementation of the camera is done by the OEMs (like Nokia, Huawei, etc.) and can support different sizes. If you want to look at the supported resolutions on a given device you can use this app: Camera2Api Probe
Pointer to how CameraX selects automatic resolution
TL;DR:
While the API should provide this support implicitly, considering it's in alpha, try to set the aspect ratio or the target resolution explicitly so it works for most devices. To make it highly configurable you can query the supported resolutions using this API.
Note that I have no link or ownership with the mentioned app Camera2Api Probe, but I used it to query Camera2 information for devices in my job.
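As a rough illustration of that suggestion (a Kotlin sketch; the builder methods are the CameraX ones discussed above, and the chosen values are just examples):

// Sketch: pin the Preview to an explicit aspect ratio instead of relying on
// the automatic selection. setTargetResolution(Size(...)) is the alternative;
// the two are mutually exclusive on the same builder.
val preview = Preview.Builder()
    .setTargetAspectRatio(AspectRatio.RATIO_4_3)
    .build()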
You could use this function (written in Kotlin) to get the possible output sizes:
private fun getOutputSizes(lensFacing: Int = CameraCharacteristics.LENS_FACING_BACK): Array<Size>? {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    for (cameraId in manager.cameraIdList) {
        val characteristics = manager.getCameraCharacteristics(cameraId)
        val orientation = characteristics[CameraCharacteristics.LENS_FACING]!!
        if (orientation == lensFacing) {
            val configurationMap = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
            return configurationMap.getOutputSizes(SurfaceTexture::class.java)
        }
    }
    return null
}
And simply try them until one of them works:
for (outputSize in getOutputSizes()!!) {
    try {
        // Configure the use cases (e.g. their target resolution) with outputSize here,
        // then try binding; if the size is not supported, bindToLifecycle throws.
        CameraX.bindToLifecycle(this, qrCodePreview.useCase, qrCodeImageAnalysis.useCase)
        return
    } catch (e: IllegalArgumentException) {
    }
}
TODO("No valid output size found, handle error ...")
Sometimes this happens when you use an old or new version of the library. Try changing it.
From the official documentation:
"The CameraX library is in alpha stage, as its API surfaces aren't yet finalized. We do not recommend using Alpha libraries in production. Libraries should strictly avoid depending on Alpha libraries in production, as their API surfaces may change in source- and binary-incompatible ways."
You can wait for a stable release or use the Camera or Camera2 APIs.
According to official code:
if (outputSizeCandidates.isEmpty() && !isDefaultResolutionSupported) {
    throw new IllegalArgumentException(
            "Can not get supported output size for the desired output size quality for "
                    + "the format: "
                    + imageFormat);
}
I've got the same on a device with legacy camera support. Hope it helps to find an answer.
UPD: It happened because the CameraX lib can't find a camera output resolution that fits the display resolution. For example:
Display resolution: 1280x720
Closest supported camera resolution: 1920x720
DispRes < CamRes. Failed! The lib can't properly set up the size.
The lib works if the camera resolution is smaller than the display resolution.
For example:
Display resolution: 1280x800
Closest supported camera resolution: 1280x720
DispRes > CamRes. Success!
While googling, I found information saying that if I want my camera app to record high-frame-rate video on an Android device, I need to pass vendor-specific parameters when calling the camera APIs.
For example, by calling the methods below, I could make my Galaxy S6 camera app record 120 fps constantly.
camera = Camera.open();
Camera.Parameters parms = camera.getParameters();
// for 120fps
parms.set("fast-fps-mode", 2); // 2 for 120fps
parms.setPreviewFpsRange(120000, 120000);
But the problem is that not all devices (including LG and other vendors) support 120 fps (or higher). So I need to know the maximum fps at the API level at runtime in my camera app, for error handling.
In my case, Camera.Parameters.getSupportedPreviewFpsRange() did not work for me.
It only returns a maximum of 30000 (meaning 30 fps) even though the device can record at 120000 (120 fps). I think it's because recording at a high frame rate (more than 30 fps) is strongly related to camera hardware properties, and that's why I need to call vendor-specific APIs.
Is there a common way to get maximum fps by camera device in API level?
---------------------- EDIT ----------------------
On API 21 (LOLLIPOP), we can use StreamConfigurationMap to get the maximum value for high-speed fps recording. The usage is as below.
CameraManager manager = (CameraManager)activity.getSystemService(Context.CAMERA_SERVICE);
String cameraId = manager.getCameraIdList()[0];
CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);
StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Range<Integer>[] fpsRange = map.getHighSpeedVideoFpsRanges(); // the available high-speed fps ranges of the device's camera
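If you just want a single number out of those ranges, here is a small Kotlin sketch (assuming the same characteristics object as above; the variable names are mine):

// Sketch: highest frame rate advertised in the high-speed ranges, or null if none.
val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
val maxHighSpeedFps = map?.highSpeedVideoFpsRanges?.maxOfOrNull { it.upper }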
You can get that information via CamcorderProfile starting from API 21, as follows:
for (String cameraId : manager.getCameraIdList()) {
    int id = Integer.valueOf(cameraId);
    if (CamcorderProfile.hasProfile(id, CamcorderProfile.QUALITY_HIGH_SPEED_LOW)) {
        CamcorderProfile profile = CamcorderProfile.get(id, CamcorderProfile.QUALITY_HIGH_SPEED_LOW);
        int videoFrameRate = profile.videoFrameRate;
        //...
    }
}
This will give you the lowest available profile supporting high-speed capture. I doubt there are many pre-Lollipop devices out there with such hardware capabilities (if any at all), so this should have you covered.
I want to adjust the exposure time and frame duration on an Android phone. I read some papers saying Android doesn't provide an exposure-time setting API. I tried an HTC m8x phone whose camera itself supports exposure times from 4s to 1/8000s, so I guess there should be some way to change it in an app.
The method get(CaptureRequest.SENSOR_EXPOSURE_TIME) returns null. After I use CaptureRequest.Builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, x), CaptureRequest.SENSOR_EXPOSURE_TIME becomes x, but the preview on the phone doesn't change.
I checked the capabilities of the HTC m8x; the code is as follows:
Activity activity = getActivity();
CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
for (String cameraId : manager.getCameraIdList()) {
    CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);

    // We don't use a front facing camera in this sample.
    if (characteristics.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_FRONT) {
        continue;
    }

    int level = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
    boolean hasFullLevel = (level == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL);

    int[] capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
    int syncLatency = characteristics.get(CameraCharacteristics.SYNC_MAX_LATENCY);
    boolean hasManualControl = hasCapability(capabilities, CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_MANUAL_SENSOR);
    boolean hasEnoughCapability = hasManualControl && syncLatency == CameraCharacteristics.SYNC_MAX_LATENCY_PER_FRAME_CONTROL;

    // All these are guaranteed by
    // CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL, but checking for only
    // the things we care about expands the range of devices we can run on.
    // We want:
    // - Back-facing camera
    // - Manual sensor control
    // - Per-frame synchronization (so that exposure can be changed every frame)
    if (hasFullLevel || hasEnoughCapability) {
        mCameraId = cameraId;
        return;
    }
}
There is no camera id returned.
characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL) = 2
CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL = 1
CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_MANUAL_SENSOR = 1
capabilities = 0
characteristics.get(CameraCharacteristics.SYNC_MAX_LATENCY) = -1
CameraCharacteristics.SYNC_MAX_LATENCY_PER_FRAME_CONTROL = 0
So does this show that I don't have the ability to change the exposure time on the HTC m8x? Will rooting the phone help?
It looks like your phone ships with Android 4.x, and the new camera2 API requires Lollipop/API 21. If you have upgraded to API 21: if the device reports FULL hardware level, it supports manual exposure control; if it reports LIMITED support, check REQUEST_AVAILABLE_CAPABILITIES_MANUAL_SENSOR for the individual controls you want, like exposure; if it only has LEGACY support, it doesn't support manual controls.
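A minimal sketch of that capability check in Kotlin (the helper name is mine; the constants are the standard camera2 ones):

import android.hardware.camera2.CameraCharacteristics

// Sketch: true if the camera advertises manual sensor control, which is needed
// for SENSOR_EXPOSURE_TIME to take effect. Helper name is illustrative.
fun hasManualSensor(characteristics: CameraCharacteristics): Boolean {
    val capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
    return capabilities?.contains(
        CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_MANUAL_SENSOR
    ) == true
}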