I'm attempting to use the Android 4.0 (API 14) face recognition found in the Camera.Face class.
I'm having difficulty getting values for the face coordinates (left/right eye, mouth).
The device I'm using is a Samsung Galaxy Tab 2 (GT-P5100) running Android 4.0.4.
I'm initialising face detection roughly as in the snippet below, and camera.getParameters().getMaxNumDetectedFaces() returns 3 on this device.
When a face is introduced into the surface frame and detected, the face detection listener returns values in faces[0].rect.flattenToString() identifying the position of the face on the surface. However, the remaining values, i.e. the face id, left/right eyes and mouth, are returned as -1 and null respectively.
This behaviour is described in the documentation as:
This is an optional field, may not be supported on all devices. If not supported, the value will always be set to null. The optional fields are supported as a set. Either they are all valid, or none of them are.
So the question is: am I missing something, or is it simply that my device cannot support the Android API face recognition found in Camera.Face?
It is worth mentioning that the same device offers face unlock, which is configured through the user settings.
FaceDetectionListener faceDetectionListener = new FaceDetectionListener() {
    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length == 0) {
            prompt.setText(" No Face Detected! ");
        } else {
            prompt.setText(String.valueOf(faces.length) + " Face Detected :) [ "
                    + faces[0].rect.flattenToString()
                    + " Coordinates : Left Eye - " + faces[0].leftEye + "]");
            Log.i("TEST", "face coordinates = Rect : " + faces[0].rect.flattenToString());
            Log.i("TEST", "face coordinates = Left eye : " + String.valueOf(faces[0].leftEye));
            Log.i("TEST", "face coordinates = Right eye - " + String.valueOf(faces[0].rightEye));
            Log.i("TEST", "face coordinates = Mouth - " + String.valueOf(faces[0].mouth));
        }
.....
if (camera != null) {
    try {
        camera.setPreviewDisplay(surfaceHolder);
        camera.startPreview();
        prompt.setText(String.valueOf(
                "Max Face: " + camera.getParameters().getMaxNumDetectedFaces()));
        camera.startFaceDetection();
        previewing = true;
    } catch (IOException e) {
        e.printStackTrace();
    }
}
In your initialization code, you need to set the face detection listener on the camera; without it, startFaceDetection() has no listener to deliver results to.
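For example, a minimal sketch reusing the initialization code from the question:

if (camera != null) {
    try {
        camera.setPreviewDisplay(surfaceHolder);
        camera.startPreview();
        // Register the listener before enabling face detection,
        // otherwise no face callbacks are delivered.
        camera.setFaceDetectionListener(faceDetectionListener);
        camera.startFaceDetection();
        previewing = true;
    } catch (IOException e) {
        e.printStackTrace();
    }
}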
I know that my phone and other models have a depth camera. I have used Portrait mode and extracted the depth information from the image using desktop tools. I have attempted to use Unity's WebCamTexture.depthCameraName to do this on the device, but to no avail. Is this possible, or is the depth camera reserved for the camera app on MIUI?
Certainly, it might be possible to make the user take a photograph in the camera app and import it, but my application would benefit greatly from being able to read this data out in real time. I would appreciate any pointers on what to research; thank you in advance.
I would just like to add that if this is doable in Unity, that would be my preferred solution. However, if it has to be, I can make do with any other XR solution for Android (position info will be relevant to the project).
As far as I know, there is a way to get a depth image on Android. With the camera2 API, you can use CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT to find the depth camera's camera ID and use it.
For example:
private String DepthCameraID() {
    try {
        for (String camera : cameraManager.getCameraIdList()) {
            CameraCharacteristics chars = cameraManager.getCameraCharacteristics(camera);
            final int[] capabilities = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
            // This checks for the back-facing camera.
            boolean facingBack = chars.get(CameraCharacteristics.LENS_FACING)
                    == CameraMetadata.LENS_FACING_BACK;
            boolean depthCapable = false;
            for (int capability : capabilities) {
                boolean capable = capability
                        == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT;
                depthCapable = depthCapable || capable;
            }
            if (depthCapable && facingBack) {
                // Log some optics info for the matched depth camera.
                SizeF sensorSize = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);
                Log.i(TAG, "Sensor size: " + sensorSize);
                float[] focalLengths = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
                if (focalLengths.length > 0) {
                    float focalLength = focalLengths[0];
                    double fov = 2 * Math.atan(sensorSize.getWidth() / (2 * focalLength));
                    Log.i(TAG, "Calculated FoV: " + fov);
                }
                return camera;
            }
        }
    } catch (CameraAccessException e) {
        Log.e(TAG, "Could not initialize Camera Cache");
        e.printStackTrace();
    }
    return null;
}
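Once you have the depth camera's id, you would open it as usual and attach an ImageReader for DEPTH16 output. A minimal sketch (the 240x180 size is an illustrative assumption; use a size reported by the camera's StreamConfigurationMap, and add the reader's surface as a capture target):

// Sketch: stream DEPTH16 frames from the camera returned by DepthCameraID().
ImageReader depthReader = ImageReader.newInstance(240, 180, ImageFormat.DEPTH16, 2);
depthReader.setOnImageAvailableListener(reader -> {
    try (Image image = reader.acquireLatestImage()) {
        if (image != null) {
            ShortBuffer depth = image.getPlanes()[0].getBuffer().asShortBuffer();
            short sample = depth.get(0);
            // Per the DEPTH16 docs: the low 13 bits are the distance in
            // millimetres, the high 3 bits are a confidence value.
            int millimetres = sample & 0x1FFF;
            Log.i(TAG, "Depth at (0,0): " + millimetres + " mm");
        }
    }
}, handler);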
I want to set the camera exposure: when the camera starts I want to set a higher value, and when it stops set a lower value, so I used the code below. On the emulator it shows a range of -9 to 9, but when I attach a physical USB camera it shows 0 for both the lower and upper range. When I try to get the exposure time range, it returns null as well:
Range<Long> exposureTime = cameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_EXPOSURE_TIME_RANGE);
public void setExposure(Context context, double exposureAdjustment) {
    CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    try {
        camId = manager.getCameraIdList()[0];
        cameraCharacteristics = manager.getCameraCharacteristics(camId);
    } catch (CameraAccessException e) {
        e.printStackTrace();
        return;
    }
    Range<Integer> range1 = cameraCharacteristics.get(CameraCharacteristics.CONTROL_AE_COMPENSATION_RANGE);
    Log.d(TAG, "range1 " + range1);
    Integer minExposure = range1.getLower();
    Log.d(TAG, "minExposure " + minExposure);
    Integer maxExposure = range1.getUpper();
    Log.d(TAG, "maxExposure " + maxExposure);
    if (minExposure != 0 || maxExposure != 0) {
        float newCalculatedValue;
        if (exposureAdjustment >= 0) {
            newCalculatedValue = (float) (maxExposure * exposureAdjustment);
        } else {
            newCalculatedValue = (float) (minExposure * exposureAdjustment);
        }
        if (requestBuilder != null) {
            // Apply the compensation to the builder before building the request,
            // so that the built CaptureRequest actually carries the new value.
            requestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, (int) newCalculatedValue);
            Log.d(TAG, "New calculated value " + newCalculatedValue);
            CaptureRequest captureRequest = requestBuilder.build();
            try {
                captureSession.setRepeatingRequest(captureRequest, captureCallback, null);
                // One-shot capture with the same (now updated) request.
                captureSession.capture(captureRequest, captureCallback, null);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
    }
}
First, you need to find out the available auto-exposure (AE) modes.
You can do this with:
final int[] availableAeModes = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_MODES);
for (int mode : availableAeModes) {
    Timber.d("AE mode : %d", mode);
}
The meaning of these integer values is documented under the CONTROL_AE_MODE_* constants in CameraMetadata.
You will only be able to control the exposure manually if CONTROL_AE_MODE_OFF is among the available AE modes; otherwise, you won't be able to control the exposure.
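For example, a minimal check (a sketch reusing the availableAeModes array from above):

// Sketch: manual exposure is only possible if AE can be turned off.
boolean manualExposureSupported = false;
for (int mode : availableAeModes) {
    if (mode == CameraMetadata.CONTROL_AE_MODE_OFF) {
        manualExposureSupported = true;
        break;
    }
}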
From here on, I am assuming that CONTROL_AE_MODE_OFF is an available mode on your camera.
You can control the exposure by manipulating these two parameters (there are other parameters through which you can influence exposure as well, but these two have worked perfectly for me):
SENSOR_EXPOSURE_TIME
SENSOR_SENSITIVITY
For setting SENSOR_SENSITIVITY, check the range supported by your camera:
final Range<Integer> isoRange =
        characteristics.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE);
if (null != isoRange) {
    Timber.d("iso range => lower : %d, higher : %d", isoRange.getLower(), isoRange.getUpper());
} else {
    Timber.d("iso range => NULL NOT SUPPORTED");
}
For setting SENSOR_EXPOSURE_TIME, check the range supported by your camera:
final Range<Long> exposureTimeRange =
        characteristics.get(CameraCharacteristics.SENSOR_INFO_EXPOSURE_TIME_RANGE);
if (null != exposureTimeRange) {
    Timber.d("exposure time range => lower : %d, higher : %d",
            exposureTimeRange.getLower(), exposureTimeRange.getUpper());
} else {
    Timber.d("exposure time range => NULL NOT SUPPORTED");
}
Now you have the ranges of both exposure time and sensitivity.
The next step is to configure the preview with these values.
This is how you configure your preview:
// It's important to use the manual template, because you want to change exposure manually.
final CaptureRequest.Builder previewRequest =
        this.cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_MANUAL);
previewRequest.addTarget(this.previewReader.getSurface());
previewRequest.set(CaptureRequest.JPEG_ORIENTATION, 0);
// Setting AE mode to off.
previewRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
// We don't want to handle white balance manually, so set the white balance mode to auto.
previewRequest.set(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_AUTO);
// Setting sensor sensitivity (ISO).
previewRequest.set(CaptureRequest.SENSOR_SENSITIVITY, <YOUR_SELECTED_SENSITIVITY>);
// Setting sensor exposure time.
previewRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, <YOUR_SELECTED_EXPOSURE_TIME>);
this.previewCaptureSession.setRepeatingRequest(previewRequest.build(), null, this.cameraHandler);
Note: You can see that I use TEMPLATE_MANUAL. Once the template is set to manual, all three auto processes, namely auto-exposure, auto-white-balance and auto-focus, become manual.
The above code doesn't take care of setting focus, since I used it on a camera which didn't have auto-focus.
If your camera has auto-focus, then you will have to handle setting the focus separately.
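A minimal sketch of what that could look like, assuming you simply want a fixed manual focus (the 0.0f distance, meaning infinity, is an illustrative choice):

// Sketch: with TEMPLATE_MANUAL, focus must be set explicitly as well.
previewRequest.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_OFF);
// LENS_FOCUS_DISTANCE is in dioptres; 0.0f means focused at infinity.
previewRequest.set(CaptureRequest.LENS_FOCUS_DISTANCE, 0.0f);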
Getting 0 for both the lower and the upper end of the exposure compensation range means that the device running your app doesn't support exposure adjustments.
A null value for SENSOR_INFO_EXPOSURE_TIME_RANGE is likewise returned when the device doesn't support this feature or you can't adjust it.
You can read more about that in the official docs.
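So, before applying either control, it's worth guarding against unsupported hardware. A minimal sketch, reusing cameraCharacteristics from the question's code:

// Sketch: skip exposure adjustments on devices that report no support.
Range<Integer> compRange =
        cameraCharacteristics.get(CameraCharacteristics.CONTROL_AE_COMPENSATION_RANGE);
Range<Long> exposureRange =
        cameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_EXPOSURE_TIME_RANGE);
boolean compensationSupported = compRange != null
        && !(compRange.getLower() == 0 && compRange.getUpper() == 0);
boolean manualExposureTimeSupported = exposureRange != null;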
I am trying to record video on a Vivo X20 (Android 7.1.1) with the camera2 API, without using a preview and without recording sound (strictly recording HD video only).
I'm currently stuck because I cannot figure out how to successfully call MediaRecorder.setVideoSize() and record a video in HD. When I run the app, the log shows the error: Surface with size (w=1920, h=1080) and format 0x1 is not valid, size not in valid set: [1440x1080, 1280x960, 1280x720, 960x540, 800x480, 720x480, 768x432, 640x480, 384x288, 352x288, 320x240, 176x144]
The phone's stock camera app can record video up to 4K, so I'm definitely missing something here. There are a total of two camera devices identified by CameraManager. When I use getOutputFormats() from CameraCharacteristics, it shows the same valid set of resolutions for both cameras, the same range as in the above error message.
Below is the code I am using to initialize MediaRecorder and initiate a capture session:
public void StartRecordingVideo() {
    Initialize();
    recordingVideo = true;
    cameraManager = (CameraManager) this.getSystemService(Context.CAMERA_SERVICE);
    try {
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
            String[] cameraIDs = cameraManager.getCameraIdList();
            //LogAllCameraInfo();
            if (cameraIDs != null) {
                for (int x = 0; x < cameraIDs.length; x++) {
                    Log.d(LOG_ID, "ID: " + cameraIDs[x]);
                }
            }
            cameraManager.openCamera(deviceCameraID, cameraStateCallback, handler);
            Log.d(LOG_ID, "Successfully opened camera");
        } else {
            throw new IllegalAccessException();
        }
    } catch (Exception e) {
        recordingVideo = false;
        Log.e(LOG_ID, "Error during record video start: " + e.getMessage());
    }
}
private void Initialize() {
    videoRecordThread = new HandlerThread("video_capture");
    videoRecordThread.start();
    handler = new Handler(videoRecordThread.getLooper());
    try {
        vidRecorder = new MediaRecorder();
        vidRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        vidRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        vidRecorder.setVideoFrameRate(30);
        vidRecorder.setCaptureRate(30);
        vidRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT);
        vidRecorder.setVideoEncodingBitRate(10000000);
        vidRecorder.setVideoSize(1920, 1080);
        String videoFilename = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS)
                + File.separator + System.currentTimeMillis() + ".mp4";
        vidRecorder.setOutputFile(videoFilename);
        Log.d(LOG_ID, "Starting video: " + videoFilename);
        vidRecorder.prepare();
    } catch (Exception e) {
        Log.e(LOG_ID, "Error during Initialize: " + e.getMessage());
    }
}
And the onReady/onSurfacePrepared/onOpened callbacks:
@Override
public void onReady(CameraCaptureSession session) {
    Log.d(LOG_ID, "onReady: ");
    super.onReady(session);
    try {
        CaptureRequest.Builder builder = deviceCamera.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
        builder.addTarget(vidRecorder.getSurface());
        CaptureRequest request = builder.build();
        session.setRepeatingRequest(request, null, handler);
        vidRecorder.start();
    } catch (CameraAccessException e) {
        Log.d(LOG_ID, "Error on Ready: " + e.getMessage());
    }
}

@Override
public void onSurfacePrepared(CameraCaptureSession session, Surface surface) {
    Log.d(LOG_ID, "onSurfacePrepared: ");
    super.onSurfacePrepared(session, surface);
}

@Override
public void onOpened(CameraDevice camera) {
    Log.d(LOG_ID, "onOpened: ");
    deviceCamera = camera;
    try {
        camera.createCaptureSession(Arrays.asList(vidRecorder.getSurface()), recordSessionStateCallback, handler);
    } catch (CameraAccessException e) {
        Log.d(LOG_ID, "onOpened: " + e.getMessage());
    }
}
I've tried messing with the order of calls and the output format/encoder with no luck. I am sure that I have all the required permissions. Thanks in advance for your time!
This device most likely supports camera2 at the LEGACY level; check the value of CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL to confirm.
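A quick way to check (a sketch, assuming the cameraManager from your code):

// Sketch: log each camera's supported hardware level.
try {
    for (String id : cameraManager.getCameraIdList()) {
        CameraCharacteristics chars = cameraManager.getCameraCharacteristics(id);
        Integer level = chars.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
        // LEGACY corresponds to CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY (== 2).
        Log.d(LOG_ID, "Camera " + id + " hardware level: " + level);
    }
} catch (CameraAccessException e) {
    e.printStackTrace();
}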
LEGACY devices are effectively running camera2 on top of the legacy android.hardware.Camera API (it's more complex than that, but roughly true); as a result, their capabilities via camera2 are restricted.
The maximum recording resolution is one significant problem: android.hardware.Camera records videos via a magic path that the LEGACY mapping layer cannot directly use (there's no Surface involved). As a result, camera2 LEGACY can only record at the maximum preview resolution supported by android.hardware.Camera, not at the maximum recording resolution.
It sounds like this device has no support for 1080p preview, which is pretty unusual for a device launched so recently.
You can verify whether the set of supported preview sizes in the deprecated Camera API matches the list in your error; if it doesn't, there may be an OS bug in generating the list, so it would be good to know.
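For example, a sketch using the deprecated android.hardware.Camera API (camera id 0 is an illustrative choice):

// Sketch: list preview sizes via the deprecated android.hardware.Camera API.
android.hardware.Camera legacyCamera = android.hardware.Camera.open(0);
for (android.hardware.Camera.Size size : legacyCamera.getParameters().getSupportedPreviewSizes()) {
    Log.d(LOG_ID, "Legacy preview size: " + size.width + "x" + size.height);
}
legacyCamera.release();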
But in general, you can't request sizes that aren't enumerated in the camera's CameraCharacteristics StreamConfigurationMap, no matter what the feature list on the side of the box says. Sometimes the OEM camera app has magic hooks to enable features that normal apps can't get to, often because the feature only works in some very specific set of circumstances which normal apps wouldn't know how to replicate.
I am creating an application that captures an image as soon as it detects a face, and I am able to achieve that, but with one issue: the onFaceDetection function of FaceDetectionListener keeps executing even if there is no face in front of the camera. I am pasting my code.
mCamera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        try {
            if (lastCaptureTime + 10000 <= System.currentTimeMillis() || !faceCaptured) {
                mCamera.takePicture(null, null, jpegCallback);
                lastCaptureTime = System.currentTimeMillis();
                faceCaptured = true;
            }
        } catch (Exception e) {
            // Ignored.
        }
    }
});
The issue is that it keeps on taking pictures although there is no face in front of the camera.
This behaviour differs between devices: on my Note 3, onFaceDetection keeps executing even without a face, while on a Nexus phone it performs perfectly.
Well, I didn't find any other solution, so I put in a face check condition.
if (faces != null && faces.length > 0) {
    //Do code here
}
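Applied to the listener from the question, that guard would look something like this (a sketch reusing the same fields):

@Override
public void onFaceDetection(Camera.Face[] faces, Camera camera) {
    // Only react when the callback actually reports at least one face.
    if (faces != null && faces.length > 0) {
        if (lastCaptureTime + 10000 <= System.currentTimeMillis() || !faceCaptured) {
            mCamera.takePicture(null, null, jpegCallback);
            lastCaptureTime = System.currentTimeMillis();
            faceCaptured = true;
        }
    }
}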
Can we detect a 'scream' or 'loud sound' etc. using the Android speech recognition APIs?
Or is there any other software/third-party tool that can do the same?
Thanks,
Kaps
You mean implement a clapper?
There's no need to use fancy math or the speech recognition API. Just use MediaRecorder and its getMaxAmplitude() method.
Here is some of the code you'll need.
The algorithm records for a period of time and then measures the amplitude difference. If it is large, then the user probably made a loud sound.
public void recordClap()
{
    // Assumes 'recorder' is a small wrapper around a prepared MediaRecorder
    // (android.media.MediaRecorder itself has no isRecording() method)
    // and that waitSome() simply sleeps for a short interval.
    recorder.start();
    int startAmplitude = recorder.getMaxAmplitude();
    Log.d(D_LOG, "starting amplitude: " + startAmplitude);
    boolean ampDiff;
    do
    {
        Log.d(D_LOG, "waiting while taking in input");
        waitSome();
        int finishAmplitude = 0;
        try
        {
            finishAmplitude = recorder.getMaxAmplitude();
        }
        catch (RuntimeException re)
        {
            Log.e(D_LOG, "unable to get the max amplitude " + re);
        }
        ampDiff = checkAmplitude(startAmplitude, finishAmplitude);
        Log.d(D_LOG, "finishing amp: " + finishAmplitude + " difference: " + ampDiff);
    }
    while (!ampDiff && recorder.isRecording());
}
private boolean checkAmplitude(int startAmplitude, int finishAmplitude)
{
    int ampDiff = finishAmplitude - startAmplitude;
    Log.d(D_LOG, "amplitude difference " + ampDiff);
    return (ampDiff >= 10000);
}
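For completeness, the underlying MediaRecorder would be configured roughly like this before calling recordClap() (a sketch; writing to /dev/null is a common trick when only the amplitude is needed):

private MediaRecorder createAmplitudeRecorder() throws IOException
{
    // Amplitude-only recording: nothing useful is written to disk.
    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    recorder.setOutputFile("/dev/null");
    recorder.prepare();
    return recorder;
}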
If I were trying to detect a scream or loud sound, I would just look for a high root-mean-square (RMS) of the sounds coming through the microphone. I suppose you could try to train a speech recognition system to recognize a scream, but that seems like overkill.
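For illustration, a minimal sketch of that RMS approach using AudioRecord (the sample rate and threshold are illustrative assumptions, and the RECORD_AUDIO permission is required):

// Sketch: read one buffer of PCM samples and compute its RMS level.
int sampleRate = 8000;
int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord audio = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
audio.startRecording();
short[] buffer = new short[bufferSize];
int read = audio.read(buffer, 0, buffer.length);
double sumSquares = 0;
for (int i = 0; i < read; i++) {
    sumSquares += (double) buffer[i] * buffer[i];
}
double rms = Math.sqrt(sumSquares / Math.max(read, 1));
boolean scream = rms > 8000; // Illustrative threshold; tune empirically.
audio.stop();
audio.release();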