On Android - how to capture image/video from the wide angle camera?

How do I capture images or videos from the wide angle camera (or the telephoto camera) with the camera2 API?
I know how to handle camera capture for the front & back cameras.
I just can't understand how to open the camera and choose the wide/telephoto lens.
I guess it has something to do with one of the following:
CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA
CameraCharacteristics.getPhysicalCameraIds()
CameraCharacteristics.getAvailablePhysicalCameraRequestKeys()
CameraDevice.createCaptureSession(SessionConfiguration config)
CameraCharacteristics.LOGICAL_MULTI_CAMERA_SENSOR_SYNC_TYPE
But I fail to understand how to set this up, and I didn't find any good explanation.
I would appreciate any kind of tutorial or explanation.
Last question - how do I test it with no physical device? I mean, how do I set up the AVD/emulator?

So I asked this in the CameraX discussion group and here is the reply from Google:
For CameraX to support wide angle cameras we are working with manufacturers to expose those via camera2 first. Some devices indeed do so today in a non-deterministic manner. Will keep you posted as we progress, thanks!

So, if somebody is still looking for the answer:
Almost no manufacturer supported this before Android 10. From Android 10 onward, the physical cameras are exposed as logical cameras.
It means that you can see those cameras via
manager.getCameraIdList()
You will get a list of all available cameras; just check the CameraCharacteristics.LENS_FACING direction and populate a list.
Here is the full code:
public CameraItem[] GetCameraListFirstTime() {
    List<CameraItem> listValuesItems = new ArrayList<CameraItem>();
    boolean IsMain = false;
    boolean IsSelfie = false;
    if (manager == null)
        manager = (CameraManager) mContext.getSystemService(CAMERA_SERVICE);
    try {
        for (String cameraId : manager.getCameraIdList()) {
            CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
            Integer facing = chars.get(CameraCharacteristics.LENS_FACING);
            // First back-facing id is the main camera, first front-facing id is the selfie camera;
            // every additional id (wide, tele, ...) falls through to "Wide or Other".
            if (!IsMain && facing != null && facing == CameraCharacteristics.LENS_FACING_BACK) {
                listValuesItems.add(new CameraItem(Integer.parseInt(cameraId), "Main"));
                IsMain = true;
            } else if (!IsSelfie && facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                listValuesItems.add(new CameraItem(Integer.parseInt(cameraId), "Selfie"));
                IsSelfie = true;
            } else {
                listValuesItems.add(new CameraItem(Integer.parseInt(cameraId), "Wide or Other"));
            }
        }
    } catch (CameraAccessException e) {
        Log.e(TAG, e.toString());
    }
    return listValuesItems.toArray(new CameraItem[0]);
}

public class CameraItem implements java.io.Serializable {
    public int Key;
    public String Description;

    public CameraItem(int key, String desc) {
        Key = key;
        Description = desc;
    }
}
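If you then need to actually stream from a specific wide lens rather than just list it, here is a minimal sketch of one way to find it, assuming an Android 10+ device that exposes its physical sub-cameras through a logical camera (the method name findWidePhysicalCameraId is only for illustration):

private String findWidePhysicalCameraId(CameraManager manager) throws CameraAccessException {
    // The wide-angle lens is usually the physical sub-camera with the shortest focal length.
    String wideId = null;
    float shortestFocal = Float.MAX_VALUE;
    for (String logicalId : manager.getCameraIdList()) {
        CameraCharacteristics logicalChars = manager.getCameraCharacteristics(logicalId);
        for (String physicalId : logicalChars.getPhysicalCameraIds()) { // API 28+
            CameraCharacteristics physChars = manager.getCameraCharacteristics(physicalId);
            float[] focals = physChars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
            if (focals != null && focals.length > 0 && focals[0] < shortestFocal) {
                shortestFocal = focals[0];
                wideId = physicalId;
            }
        }
    }
    return wideId; // null means no physical sub-cameras are exposed
}

To capture from that lens, open the logical camera that contains it and call OutputConfiguration.setPhysicalCameraId(wideId) on each output before building the SessionConfiguration you pass to CameraDevice.createCaptureSession().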

Related

Sharing GIFs with iOS app from Android app

Has anyone worked on attaching GIFs in apps? Something similar to WhatsApp or Skype.
When I try to get the content URI from the below code -
final InputConnectionCompat.OnCommitContentListener callback = new InputConnectionCompat.OnCommitContentListener() {
    @Override
    public boolean onCommitContent(InputContentInfoCompat info, int flags, Bundle opts) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N_MR1 && (flags & InputConnectionCompat.INPUT_CONTENT_GRANT_READ_URI_PERMISSION) != 0) {
            try {
                info.requestPermission();
            } catch (Exception e) {
                return false; // return false if failed
            }
        }
        Log.d("TAG", "Content URI " + info.getContentUri());
        return true; // return true if succeeded
    }
};
return InputConnectionCompat.createWrapper(ic, info, callback);
}
This is what the content URI looks like, which works fine when passed to Glide.
Content URI (Swift keyboard) -> content://com.touchtype.swiftkey.fileprovider/share_images/Bi7V7tixYcLzbNgEJhE_NaXQ5eE.gif
Content URI(Gboard) -> content://com.google.android.inputmethod.latin.inputcontent/inputContent?fileName=%2Fdata%2Fuser_de%2F0%2Fcom.google.android.inputmethod.latin%2Ffiles%2Fgif69727604103223767740&packageName=com.testsdk&mimeType=image%2Fgif
But when the same URI is passed to the iOS app, it does not work.
Can you please guide me in the right direction?
What needs to be sent to the iOS app so that the same selected GIF can be shown there as well?
EDIT - I think I should use getLinkUri() instead of getContentUri().
Is that the correct approach?
URI https://tse2.mm.bing.net/th?id=OGC.5e1a1b1d71e12b32cc7bfac93fbb7d1f&pid=Api&rurl=https%3a%2f%2fmedia.giphy.com%2fmedia%2fVe20ojrMWiTo4%2f200.gif&ehk=iHJHy%2fFSR7s2nIoEXxTIrUAWWuGBnz%2fecKwkM8Hm2ac%3d
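For what it's worth, a minimal sketch of that getLinkUri() approach inside onCommitContent() (assumption: getLinkUri() can return null when the keyboard does not supply a public link, in which case the bytes behind the content:// URI would still have to be uploaded somewhere the iOS app can reach):

Uri shareUri = info.getLinkUri(); // public http(s) URL that an iOS client can load directly
if (shareUri == null) {
    // No link provided by the keyboard; a content:// URI only resolves on this device,
    // so the GIF bytes would need to be uploaded to your own backend before sharing.
    shareUri = info.getContentUri();
}
Log.d("TAG", "URI to share: " + shareUri);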

Text Recognition not accurate on Android device using camera and Firebase ML Kit

I am using Firebase ML Kit on an Android device for text recognition from the live camera, without capturing an image.
I do this by receiving frames and getting bitmaps from the frames,
then passing the bitmaps into the text recognition method.
But the recognized text is not accurate. It is also constantly changing and never gives accurate results.
Please let me know what I am doing wrong.
getting frames and Bitmaps:
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    frame = Bitmap.createBitmap(textureView.getWidth(), textureView.getHeight(), Bitmap.Config.ARGB_8888);
    textureView.getBitmap(frame);
    Bitmap emptyBitmap = Bitmap.createBitmap(frame.getWidth(), frame.getHeight(), frame.getConfig());
    if (frame.sameAs(emptyBitmap)) {
        // the captured bitmap is empty/blank
        System.out.println(" empty !!!!!!!!!!!!!!!!!!!!!!!!!!!!!");
    } else {
        System.out.println(" bitmap");
        bitmap = frame;
        runTextRecognition();
    }
}
text recognition:
private void runTextRecognition() {
    System.out.println(" text recognition!!!");
    FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
    FirebaseVisionTextRecognizer recognizer = FirebaseVision.getInstance().getOnDeviceTextRecognizer();
    recognizer.processImage(image).addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
        @Override
        public void onSuccess(FirebaseVisionText texts) {
            System.out.println("Text recognized ::: " + texts);
            textRecognized = true;
            processTextRecognitionResult(texts);
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception e) {
            e.printStackTrace();
        }
    });
}
The text I was trying to recognize was MRZ. I contacted Firebase Support; they ran tests themselves and concluded that the ML Kit API isn't capable of reading MRZ-type text, and that they might incorporate it in the future.
You can try the Mobile Vision Text API for OCR (Optical Character Recognition) on Android.
Refer to this Google codelab for implementation details: https://codelabs.developers.google.com/codelabs/mobile-vision-ocr/index.html?index=..%2F..index#0
Especially the step on creating the OcrDetectorProcessor.
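For orientation, here is a minimal sketch of what that OcrDetectorProcessor step boils down to (the codelab additionally draws each TextBlock onto a graphic overlay; this version just logs the detected text):

public class OcrDetectorProcessor implements Detector.Processor<TextBlock> {
    @Override
    public void receiveDetections(Detector.Detections<TextBlock> detections) {
        // Called for every processed frame with the text blocks found in it.
        SparseArray<TextBlock> items = detections.getDetectedItems();
        for (int i = 0; i < items.size(); ++i) {
            TextBlock item = items.valueAt(i);
            if (item != null && item.getValue() != null) {
                Log.d("OcrDetectorProcessor", "Text detected: " + item.getValue());
            }
        }
    }

    @Override
    public void release() {
        // No resources to release in this sketch.
    }
}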

Microblink recognizer set up RegexParserSettings

I am trying to scan an image taken from resources using a Recognizer with RegexParserSettings inside a fragment. The problem is that the BaseRecognitionResult obtained through the onScanningDone callback is always null. I have tried to set up the RecognitionSettings with MRTDRecognizer and it worked fine, so I think the library is properly integrated. This is the source code I am using:
@Override
public void onAttach(Context context) {
...
try {
mRecognizer = Recognizer.getSingletonInstance();
mRecognizer.setLicenseKey(context, LICENSE_KEY);
} catch (FeatureNotSupportedException | InvalidLicenceKeyException e) {
Log.d(TAG, e.getMessage());
}
buildRecognitionSettings();
mRecognizer.initialize(context, mRecognitionSettings, new DirectApiErrorListener() {
@Override
public void onRecognizerError(Throwable t) {
//Handle exception
}
});
}
private void buildRecognitionSettings() {
mRecognitionSettings = new RecognitionSettings();
mRecognitionSettings.setRecognizerSettingsArray(setupSettingsArray());
}
private RecognizerSettings[] setupSettingsArray() {
RegexParserSettings regexParserSettings = new RegexParserSettings("[A-Z0-9]{17}");
BlinkOCRRecognizerSettings sett = new BlinkOCRRecognizerSettings();
sett.addParser("myRegexParser", regexParserSettings);
return new RecognizerSettings[] { sett };
}
I scan the image like:
mRecognizer.recognizeBitmap(bitmap, Orientation.ORIENTATION_PORTRAIT, FragMicoblink.this);
And this is the callback handled in the fragment
@Override
public void onScanningDone(RecognitionResults results) {
BaseRecognitionResult[] dataArray = results.getRecognitionResults();
//dataArray is null
for(BaseRecognitionResult baseResult : dataArray) {
if (baseResult instanceof BlinkOCRRecognitionResult) {
BlinkOCRRecognitionResult result = (BlinkOCRRecognitionResult) baseResult;
if (result.isValid() && !result.isEmpty()) {
String parsedAmount = result.getParsedResult("myRegexParser");
if (parsedAmount != null && !parsedAmount.isEmpty()) {
Log.d(TAG, "Result: " + parsedAmount);
}
}
}
}
}
Thanks in advance!
Hello Spirrow.
The difference between your code and SegmentScanActivity is that your code uses DirectAPI, which can process only the single bitmap image you send for processing, while SegmentScanActivity processes camera frames as they arrive from the camera. While doing so, it can utilize time-redundant information to improve the OCR quality, i.e. it combines consecutive OCR results from multiple video frames to obtain a better quality OCR result.
This feature is not available via DirectAPI - you need to use either SegmentScanActivity, or a custom scan activity with our camera management.
You can also find out more here:
https://github.com/BlinkID/blinkid-android/issues/54
Regards

Android : Detect movement of eyes using sensor at real time

I am preparing an Android application in which I have to detect the movement of the eyes. I have managed to achieve this on still images, but I want it to work on live eyes.
I cannot work out whether we can use the proximity sensor to detect the eyes, just like the Smart Stay feature.
Please suggest ideas on how to implement this.
We can use the front camera to detect the eyes and eye blinks. Use the Vision API for detecting the eyes.
Code for eye tracking:
public class FaceTracker extends Tracker<Face> {
private static final float PROB_THRESHOLD = 0.7f;
private static final String TAG = FaceTracker.class.getSimpleName();
private boolean leftClosed;
private boolean rightClosed;
@Override
public void onUpdate(Detector.Detections<Face> detections, Face face) {
if (leftClosed && face.getIsLeftEyeOpenProbability() > PROB_THRESHOLD) {
leftClosed = false;
} else if (!leftClosed && face.getIsLeftEyeOpenProbability() < PROB_THRESHOLD){
leftClosed = true;
}
if (rightClosed && face.getIsRightEyeOpenProbability() > PROB_THRESHOLD) {
rightClosed = false;
} else if (!rightClosed && face.getIsRightEyeOpenProbability() < PROB_THRESHOLD) {
rightClosed = true;
}
if (leftClosed && !rightClosed) {
EventBus.getDefault().post(new LeftEyeClosedEvent());
} else if (rightClosed && !leftClosed) {
EventBus.getDefault().post(new RightEyeClosedEvent());
} else if (!leftClosed && !rightClosed) {
EventBus.getDefault().post(new NeutralFaceEvent());
}
}
}
//method to call the FaceTracker
private void createCameraResources() {
Context context = getApplicationContext();
// create and setup the face detector
mFaceDetector = new FaceDetector.Builder(context)
.setProminentFaceOnly(true) // optimize for single, relatively large face
.setTrackingEnabled(true) // enable face tracking
.setClassificationType(/* eyes open and smile */ FaceDetector.ALL_CLASSIFICATIONS)
.setMode(FaceDetector.FAST_MODE) // for one face this is OK
.build();
// now that we've got a detector, create a processor pipeline to receive the detection
// results
mFaceDetector.setProcessor(new LargestFaceFocusingProcessor(mFaceDetector, new FaceTracker()));
// operational...?
if (!mFaceDetector.isOperational()) {
Log.w(TAG, "createCameraResources: detector NOT operational");
} else {
Log.d(TAG, "createCameraResources: detector operational");
}
// Create camera source that will capture video frames
// Use the front camera
mCameraSource = new CameraSource.Builder(this, mFaceDetector)
.setRequestedPreviewSize(640, 480)
.setFacing(CameraSource.CAMERA_FACING_FRONT)
.setRequestedFps(30f)
.build();
}
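One thing the snippet above does not show is actually starting the preview. A minimal sketch of that step, assuming the CAMERA permission is already granted and a SurfaceView supplies the holder:

private void startCameraSource(SurfaceHolder holder) {
    try {
        // Frames start flowing into mFaceDetector and therefore into FaceTracker.onUpdate().
        mCameraSource.start(holder);
    } catch (IOException e) {
        Log.e(TAG, "Unable to start camera source.", e);
        mCameraSource.release();
        mCameraSource = null;
    }
}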
No, you can't use the proximity sensor for eye detection or tracking. Give OpenCV a shot.
Link: OpenCV
GitHub: OpenCV on GitHub

switch camera(back/front) in android webrtc

I have used the libjingle library for a WebRTC Android application, and I have successfully implemented audio/video streaming for two-way communication.
Until now I was using the front camera for video streaming, but now I want to give users the option to select the front or back camera.
How can I achieve this? I have no idea how.
I have tried the VideoCapturerAndroid switch camera method, but it is not working.
If anyone knows, please help me out with this functionality.
Thanks in advance.
You need to use the same videoCapturer object that was created during the initial MediaStream creation.
CameraVideoCapturer cameraVideoCapturer = (CameraVideoCapturer) videoCapturer;
cameraVideoCapturer.switchCamera(null);
AppRTC Reference
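If you also want to know when the switch has actually completed, you can pass a CameraSwitchHandler instead of null - a minimal sketch:

cameraVideoCapturer.switchCamera(new CameraVideoCapturer.CameraSwitchHandler() {
    @Override
    public void onCameraSwitchDone(boolean isFrontCamera) {
        // Called on success; isFrontCamera tells you which camera is now active.
        Log.d(TAG, "Camera switched, front camera in use: " + isFrontCamera);
    }

    @Override
    public void onCameraSwitchError(String errorDescription) {
        Log.e(TAG, "Camera switch failed: " + errorDescription);
    }
});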
Using this version: org.webrtc:google-webrtc:1.0.22672
Create the VideoCapturer by this method:
VideoCapturer videoCaptor = createCameraCaptor(new Camera1Enumerator(false));
The trick is in isBackFacing(...) / isFrontFacing(...).
private VideoCapturer createCameraCaptor(CameraEnumerator enumerator) {
final String[] deviceNames = enumerator.getDeviceNames();
// First, try to find back facing camera
Logging.d(TAG, "Looking for back facing cameras.");
for (String deviceName : deviceNames) {
if (enumerator.isBackFacing(deviceName)) {
Logging.d(TAG, "Creating back facing camera captor.");
VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
if (videoCapturer != null) {
return videoCapturer;
}
}
}
// back facing camera not found, try something else
Logging.d(TAG, "Looking for other cameras.");
for (String deviceName : deviceNames) {
if (!enumerator.isBackFacing(deviceName)) {
Logging.d(TAG, "Creating other camera captor.");
VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
if (videoCapturer != null) {
return videoCapturer;
}
}
}
return null;
}
Here is an example using libjingle.
If you want to switch between the front and rear camera you will need to get the name of the device you want to use. This can be done using VideoCapturerAndroid.getNameOfFrontFacingDevice() or VideoCapturerAndroid.getNameOfBackFacingDevice(), depending on whether you want to use the front or rear camera.
Here's a simple example of how to get the correct VideoCapturer using io.pristine.libjingle:9127
private VideoCapturer getCameraCapturer(boolean useFrontCamera) {
String deviceName = useFrontCamera ? VideoCapturerAndroid.getNameOfFrontFacingDevice() : VideoCapturerAndroid.getNameOfBackFacingDevice();
return VideoCapturerAndroid.create(deviceName);
}
If you're using a different version of LibJingle or for any reason this doesn't work let me know and I'll be happy to help!
Cheers,
Create a new video capturer and start it. Don't forget to stop the old one.
fun switchCamera() {
cameraFacingFront = !cameraFacingFront
try {
videoCapturer!!.stopCapture()
} catch (e: InterruptedException) {
}
videoCapturer = createVideoCapturer(cameraFacingFront)
videoCapturer!!.initialize(
surfaceTextureHelper,
activity,
videoSource!!.getCapturerObserver()
)
videoCapturer!!.startCapture(
VIDEO_SIZE_WIDTH,
VIDEO_SIZE_HEIGHT,
VIDEO_FPS
)
}
