OpenCV Android camera orientation - android

Hello, I am new to OpenCV and Android. I'm running the tutorial examples, but the camera orientation is rotated. I have tried this code to fix it:
mRgba = inputFrame.rgba();
Mat mRgbaT = mRgba.t();
Core.flip(mRgba.t(), mRgbaT, 1);
Imgproc.resize(mRgbaT, mRgbaT, mRgba.size());
return mRgbaT;
This works fine for a minute or so: frames are captured and rendered correctly, but after that I get a SIGSEGV (BpMemory) failure and the application crashes.
Is there any workaround to change the camera orientation without having to do a flip and image resize?
Many thanks for your help and advice.

If you are using android.hardware.Camera with a custom camera application, your issue is probably that it defaults to a landscape orientation, and that is probably what is setting the orientation when you transform the frame into a Mat.
Can you provide some more context on how you are passing the camera output into the module where you are using OpenCV?
After you open the camera, call:
mCamera.setDisplayOrientation(90);
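For context, a minimal sketch of where that call might go; mCamera and the fixed 90-degree value are assumptions, and a robust version would compute the angle from the display rotation and Camera.CameraInfo:

private Camera mCamera;

private void openCameraWithPortraitPreview() {
    mCamera = Camera.open();
    // Rotates only the on-screen preview; frames delivered to
    // preview callbacks are unaffected.
    mCamera.setDisplayOrientation(90);
}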

You can use something like this to change the orientation:
Mat draw = new Mat(mRgba.rows(), mRgba.cols(), mRgba.type());
Mat show = new Mat(mRgba.rows(), mRgba.cols(), mRgba.type());
Mat how = new Mat(mRgba.rows(), mRgba.cols(), mRgba.type());
Core.transpose(mRgba, how);                     // rotate the frame by swapping axes
Imgproc.resize(how, show, show.size(), 0, 0, Imgproc.INTER_LINEAR);
Core.flip(show, draw, 1);                       // mirror to complete the rotation
return draw;
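The crash described in the question above is consistent with allocating a fresh Mat on every frame: each call to t() creates a new native Mat that is never released. A minimal sketch of the same rotation with the working Mats preallocated as fields (the names are illustrative, not from the original post):

private Mat mRgba;
private Mat mTransposed;
private Mat mScaled;

@Override
public void onCameraViewStarted(int width, int height) {
    // Allocate the working Mats once, not on every frame.
    mTransposed = new Mat();
    mScaled = new Mat();
}

@Override
public void onCameraViewStopped() {
    mTransposed.release();
    mScaled.release();
}

@Override
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    Core.transpose(mRgba, mTransposed);     // rotate by swapping axes
    Core.flip(mTransposed, mTransposed, 1); // mirror
    Imgproc.resize(mTransposed, mScaled, mRgba.size());
    return mScaled;
}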

Related

How to most efficiently use and apply Android CameraX Image Analysis setTargetRotation

Similar to what is laid out in the tutorial, I am initializing CameraX's ImageAnalysis use case with this code:
ImageAnalysis imageAnalysis =
        new ImageAnalysis.Builder()
                .setTargetResolution(new Size(1280, 720))
                .setTargetRotation(Surface.ROTATION_90)
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .build();
imageAnalysis.setAnalyzer(executor, new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        int rotationDegrees = image.getImageInfo().getRotationDegrees();
        // insert your code here.
    }
});
cameraProvider.bindToLifecycle((LifecycleOwner) this, cameraSelector, imageAnalysis, preview);
I am trying to use the setTargetRotation method, but I am not clear on how I am supposed to apply this rotation to the output image, which is only vaguely described in the docs:
The rotation value of ImageInfo will be the rotation, which if applied to the output image, will make the image match target rotation specified here.
If I set a breakpoint in the analyze() method shown above, the image object does not get rotated when I change the setTargetRotation value. So I assume the docs are telling me to grab the orientation with getTargetRotation(), in the sense that these two pieces of code (builder vs. analyzer) are written separately and this information can be passed between the two without any rotation actually being applied. Did I understand this correctly? This really doesn't make sense to me, since the setTargetResolution method actually changes the size sent via the ImageProxy. I'd expect setTargetRotation to likewise apply the rotation, but it appears not to.
If my understanding is correct, is there an optimally efficient way to rotate these ImageProxy objects after entering the analyze method? Right now I'm doing it after converting to a Bitmap via:
Bitmap myImg = BitmapFactory.decodeStream(someInputStream);
Matrix matrix = new Matrix();
matrix.postRotate(30);
Bitmap rotated = Bitmap.createBitmap(myImg, 0, 0, myImg.getWidth(), myImg.getHeight(),
        matrix, true);
The above idea came from here, but I'd think this is not the most efficient way to do it. I'm sure I could also come up with a way to transpose the arrays myself, but that could get tedious and messy quickly. Isn't there any way to set up the ImageAnalysis builder to deliver the rotated ImageProxy directly, rather than having to make a bitmap of everything?
The rotation value of ImageInfo will be the rotation, which if applied to the output image, will make the image match target rotation specified here.
An example to understand the definition: assume the target rotation matches the device's orientation. Applying the returned rotation to the output image will result in the image being upright, i.e. matching the device's orientation. So if the device is in its natural portrait position, the rotated output image will also be in that orientation.
Output image + Rotated by rotation value --> Upright output image
CameraX's documentation includes a section about rotations, since it can be a confusing topic. You can check it out here.
Going back to your question about setTargetRotation and the ImageAnalysis use case: it isn't meant to rotate the images passed to the Analyzer, but it does affect the rotation information attached to them, i.e. ImageProxy.getImageInfo().getRotationDegrees(). Transforming images (rotating, cropping, etc.) can be an expensive operation, so CameraX does not modify the analysis frames; instead it provides the metadata required to make sense of the output images, metadata that can then be used by the image processors that analyze the frames.
If you need to rotate each analysis frame, the Bitmap approach is one way, but it can be costly. A more performant way may be to do it in native code.
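For illustration, a sketch of applying getRotationDegrees() inside the analyzer via the Bitmap route; toBitmap() is a stand-in for whatever ImageProxy-to-Bitmap conversion you already use, not a CameraX API:

imageAnalysis.setAnalyzer(executor, image -> {
    int rotationDegrees = image.getImageInfo().getRotationDegrees();
    Bitmap bitmap = toBitmap(image); // placeholder conversion
    Matrix matrix = new Matrix();
    matrix.postRotate(rotationDegrees);
    Bitmap upright = Bitmap.createBitmap(
            bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
    // ... analyze the upright bitmap ...
    image.close(); // required so CameraX can deliver the next frame
});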

Face not detected using vision google service

https://developers.google.com/android/reference/com/google/android/gms/vision/face/FaceDetector.Builder
I'm using the above Google service in my app for face detection. I made sure my phone has the minimum Google Play services version (on my phone it's 8.3), but I still can't get the face detection to work. I imported the library via the Google Play services library in my Eclipse project. Here's the code:
@Override
protected void onPreExecute() {
    detector = new FaceDetector.Builder(MainContext)
            .setTrackingEnabled(false)
            //.setProminentFaceOnly(true)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS) // required
            .build();
}
private void detectTheFace(Bitmap converted) {
    Frame frame = new Frame.Builder().setBitmap(converted).build();
    faces = detector.detect(frame);
}
I don't know whether the bitmap used for detection has to be in RGB_565 configuration, but I converted it anyway. I tried with and without changing the RGB configuration, and it yields the same result: the faces SparseArray is of size 0, meaning it never detects a face. By the way, for some context on the code above: I'm executing the face detection in an AsyncTask because I want to run it in the background.
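For reference, the conversion mentioned above can be done with Bitmap.copy; sourceBitmap is a placeholder name:

// Ensure the bitmap is in RGB_565 before building the Frame.
Bitmap converted = sourceBitmap.copy(Bitmap.Config.RGB_565, false);
Frame frame = new Frame.Builder().setBitmap(converted).build();
SparseArray<Face> faces = detector.detect(frame);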
I had the same problem: it worked fine on a Nexus but not on a Galaxy. I resolved it by rotating the bitmap by 90 degrees whenever detector.detect() returns zero faces. The maximum number of retries is 3, because a fourth rotation gives you back the original bitmap.
Bitmap rotateBitmap(Bitmap bitmapToRotate) {
    Matrix matrix = new Matrix();
    matrix.postRotate(90);
    Bitmap rotatedBitmap = Bitmap.createBitmap(bitmapToRotate, 0, 0,
            bitmapToRotate.getWidth(), bitmapToRotate.getHeight(), matrix,
            true);
    return rotatedBitmap;
}
Check whether the faces returned by detector.detect() have zero size; if so, the code below should run:
if (faces.size() == 0) {
    if (rotationCounter < 3) {
        rotationCounter++;
        bitmap = rotateBitmap(bitmapToRotate);
        // call detector.detect() again here
    }
}
You can check whether the bitmap needs rotating without writing the code above: with your original code, try capturing the image with the phone in landscape mode, or just rotate the image by 90 degrees before capturing it.
To solve this problem properly, use the orientation stored in the photo's EXIF metadata.
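A sketch of reading that EXIF orientation with ExifInterface; photoPath is a placeholder for the image file's path:

// Read the stored orientation and map it to degrees (0 if the tag is missing).
ExifInterface exif = new ExifInterface(photoPath);
int orientation = exif.getAttributeInt(
        ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
int degrees = 0;
switch (orientation) {
    case ExifInterface.ORIENTATION_ROTATE_90:  degrees = 90;  break;
    case ExifInterface.ORIENTATION_ROTATE_180: degrees = 180; break;
    case ExifInterface.ORIENTATION_ROTATE_270: degrees = 270; break;
}
// Rotate the bitmap by `degrees` (e.g. with the rotateBitmap helper above)
// before running detection.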

How to disable/modify AutoFocus and AutoWhiteBalance on Android Camera using OpenCV

I'm using Android + OpenCV (new to OpenCV), and I'm currently working on real-time object detection (the object stays really close to the Android device's camera). I noticed that the camera's autofocus keeps modifying my frames (a kind of 'zoom in' and 'zoom out' effect), which makes it harder to keep tracking the object.
I need to turn autofocus off, because in my case the more blurred the input image, the better, and I also need to turn off auto white balance, or perhaps set it to a different value.
I would like to know how to do this through my OpenCV CameraBridgeViewBase so I can modify the camera's focus/white-balance settings.
I've been trying to find a way to solve this, and I noticed that many people face the same problems. Stack Overflow seemed like a great place to find someone who has worked with this and found a good way to overcome them.
Create your own subclass of JavaCameraView:
public class MyJavaCameraView extends JavaCameraView {
There you have access to the protected mCamera field. Add whatever camera-access methods you are interested in, for example:
// Set up the camera flash
public void setFlashMode(boolean flashLightOn) {
    Camera camera = mCamera;
    if (camera != null) {
        Camera.Parameters params = camera.getParameters();
        params.setFlashMode(flashLightOn
                ? Camera.Parameters.FLASH_MODE_TORCH
                : Camera.Parameters.FLASH_MODE_OFF);
        camera.setParameters(params);
    }
}
Then use this new class in the main activity:
//force java camera
mOpenCvCameraView = (MyJavaCameraView) findViewById(R.id.activity_surface_view);
mOpenCvCameraView.setVisibility(SurfaceView.VISIBLE);
mOpenCvCameraView.setCvCameraViewListener(this);
mOpenCvCameraView.enableView();
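The same pattern extends to the focus and white-balance settings the question asks about. A hedged sketch of hypothetical additions to MyJavaCameraView, since the supported modes vary by device:

// Disable autofocus and lock auto white balance, checking support first.
public void disableAutoFocusAndAwb() {
    Camera camera = mCamera;
    if (camera == null) return;
    Camera.Parameters params = camera.getParameters();
    if (params.getSupportedFocusModes()
              .contains(Camera.Parameters.FOCUS_MODE_FIXED)) {
        params.setFocusMode(Camera.Parameters.FOCUS_MODE_FIXED);
    }
    if (params.isAutoWhiteBalanceLockSupported()) {
        params.setAutoWhiteBalanceLock(true);
    }
    // Or pick a specific white-balance preset if the device supports it:
    List<String> wb = params.getSupportedWhiteBalance();
    if (wb != null && wb.contains(Camera.Parameters.WHITE_BALANCE_DAYLIGHT)) {
        params.setWhiteBalance(Camera.Parameters.WHITE_BALANCE_DAYLIGHT);
    }
    camera.setParameters(params);
}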

How to set OpenCV's camera to display preview in both portrait orientation and full screen

I'm trying to create an Android app which displays a camera preview in portrait mode at all times and performs some heavy image-processing operations on (some of) its frames. Hence, I'm using OpenCV (both its OpenCV4Android and native C/C++ components). The problem is that when using the CameraBridgeViewBase or JavaCameraView classes, the frame returned by onCameraFrame is in landscape mode.
Now, if the Activity is set to landscape mode (just like OpenCV's sample apps), the preview looks fine, but any additional UI views are tilted by 90 degrees (and, as mentioned before, the device should run my app in portrait mode).
If the Activity is set to portrait mode, the UI views will obviously look right, but the camera preview will be tilted by 90 degrees.
If I try to rotate each frame by manipulating the image matrix in onCameraFrame like this:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    Core.transpose(mRgba, mRgbaT);
    Core.flip(mRgbaT, mRgbaT, -1);
    Imgproc.resize(mRgbaT, mRgbaT, mRgba.size());
    return mRgbaT;
}
then each camera frame fills the device's width but not its height (and as a result looks stretched), plus it considerably slows down the frame rate. Trying to resize the image to full screen (or any size different from the original frame size) results in no image being displayed at all (black screen), and the following exception is thrown:
E/CameraBridge(11183): Utils.matToBitmap() throws an exception: /home/reports/ci/slave_desktop/50-SDK/opencv/modules/java/generator/src/cpp/utils.cpp:97: error: (-215) src.dims == 2 && info.height == (uint32_t)src.rows && info.width == (uint32_t)src.cols in function void Java_org_opencv_android_Utils_nMatToBitmap2(JNIEnv*, jclass, jlong, jobject, jboolean)
So, my question is: How can I display OpenCV's camera preview in both portrait mode and full screen?
Unfortunately, due to my beginner SO reputation, I can't attach screenshots. Also, I'm aware that somewhat similar questions have been asked on SO before, but none of the answers seem to solve the problem completely.
I've found a solution: create a custom camera class which extends CameraBridgeViewBase in a similar way to how JavaCameraView extends it (most parts are even identical), but when implementing the inner JavaCameraFrame class, replace the method which returns a Mat object with something like this:
public Mat rgba() {
    // NV21 is the default Android preview format; convert it straight
    // to 4-channel RGBA, then rotate by transposing and flipping.
    Imgproc.cvtColor(mYuvFrameData, mRgba, Imgproc.COLOR_YUV2RGBA_NV21, 4);
    if (mRotated != null)
        mRotated.release();
    mRotated = mRgba.t();
    Core.flip(mRotated, mRotated, -1);
    return mRotated;
}
There is a great example here (the first answer, courtesy of Zarokka). There is some performance downgrade, but it's not remotely as serious as when rotating the output Mat in onCameraFrame (which doesn't fully solve the problem anyway). I use it with a 640x480 preview frame size (full screen), and it looks good and runs smoothly even on not-so-new devices.

FaceDetector not detecting faces with front camera

I am experiencing a strange issue with the FaceDetector from the Android SDK. The code below works fine and detects faces correctly when using the back camera, but no matter what, when the picture comes from the front camera, no face is detected.
FaceDetector.Face[] faces = new FaceDetector.Face[1];
FaceDetector faceDetector = new FaceDetector(width, height, 1);
int facesFound = faceDetector.findFaces(picture, faces);
Log.d(TAG, "Face found: "+(facesFound == 1));
I am trying to find an explanation for this, but I haven't come to any conclusion. I have even tried cleaning the picture's metadata, in case the FaceDetector was somehow set not to detect faces in pictures coming from the front camera.
There are various factors that could be the cause of the problem. Here are some I've run into:
The bitmap is not upright
The bitmap is not in 565 pixel format
The image is poor quality (too dark, too noisy, poor resolution, obscured by a finger, etc.)
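A minimal preprocessing sketch addressing the first two points; the 270-degree angle and the source bitmap name are assumptions you would adapt to your capture path:

// Make the bitmap upright, then convert to RGB_565, which
// android.media.FaceDetector requires.
Matrix matrix = new Matrix();
matrix.postRotate(270); // assumed front-camera rotation; varies by device
Bitmap upright = Bitmap.createBitmap(
        source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
Bitmap rgb565 = upright.copy(Bitmap.Config.RGB_565, false);

FaceDetector.Face[] faces = new FaceDetector.Face[1];
FaceDetector faceDetector = new FaceDetector(rgb565.getWidth(), rgb565.getHeight(), 1);
int facesFound = faceDetector.findFaces(rgb565, faces);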
