Android face detection coordinates

I am using Android's API 14 camera face detection to draw a rectangle over the face detected by the camera.
It works on most devices (Galaxy Nexus, S4, Note 2). But on the S3 SGH-T999 and SGH-I747 (T-Mobile and AT&T locked versions), the Face.rect returned falls outside the normal range of [-1000, 1000].
Specifically, Face.rect.left = -1165 (or other values < -1000).
Quote from the documentation for Camera.Face.rect:
"The coordinates can be smaller than -1000 or bigger than 1000. But at least one vertex will be within (-1000, -1000) and (1000, 1000)."
This is the callback that I use [link here]:
onFaceDetection(android.hardware.Camera.Face[], android.hardware.Camera)
Other data:
the app is set to portrait only
the app uses the front-facing camera only
My questions are:
Has anyone experienced the same problem?
What does a coordinate smaller than -1000 mean?
How can I solve this so that the rectangle is drawn correctly over the detected face?
I have looked around for a week and have not found this problem reported by other users.
Again, my app works fine on all devices except those two.
Thanks in advance.

I am facing a similar kind of issue. What I found is that the face rectangle obtained from the onFaceDetection callback uses a different coordinate system on different Android phones. I tested my application on Samsung and Micromax devices, and there it follows the rectangle coordinate values described in the Android documentation (i.e. -1000 to 1000).
When I tested my application on a Sony Xperia L and a Sony Xperia M, I observed that they do not follow the coordinates in the Android documentation. Instead they use a coordinate system whose origin (0, 0) is at the top-right corner of the screen in portrait mode.
When I applied a matrix accordingly, the rectangle was plotted perfectly. This led me to dig a little deeper into the Android stack. I believe it is the phone vendor who manipulates the rectangle coordinates, not the original Android stack.
My question is: is there any way to find out which coordinate system the returned rectangle follows before drawing it?
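For reference, on devices that do follow the documented [-1000, 1000] driver coordinates, the usual way to position the rectangle is the matrix transform described in the Camera.Face documentation: mirror for the front camera, rotate by the preview's display orientation, then scale and translate into view coordinates. A sketch of that mapping, where mirror, displayOrientation and view stand for whatever your app already tracks:

import android.graphics.Matrix;
import android.graphics.RectF;
import android.hardware.Camera;
import android.view.View;

// Sketch of the transform from the Camera.Face docs: driver coordinates run
// from (-1000, -1000) to (1000, 1000) regardless of rotation or mirroring,
// so they must be mapped into view coordinates before drawing.
// 'mirror' (true for the front camera), 'displayOrientation' and 'view'
// are placeholders for values your app already knows.
public static RectF faceRectToView(Camera.Face face, boolean mirror,
                                   int displayOrientation, View view) {
    Matrix matrix = new Matrix();
    matrix.setScale(mirror ? -1 : 1, 1);          // mirror horizontally for the front camera
    matrix.postRotate(displayOrientation);        // align with the preview rotation
    matrix.postScale(view.getWidth() / 2000f,     // map [-1000, 1000] to the view size
                     view.getHeight() / 2000f);
    matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);

    RectF rect = new RectF(face.rect);
    matrix.mapRect(rect);
    return rect;
}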

Related

Unity Dev for Gear VR - Equirectangular Panorama as skybox/sphere

I've been trying to make a 360 photo viewer similar to the Oculus 360 Photos app. The only problem is that when projecting onto a sphere with inverted normals, the image "warps" or "bends" along with the sphere, so straight lines such as door frames turn into bending shapes; a bad result.
Changing the size of the sphere does nothing, and obviously the picture has to bend somewhere to fit the inner surface of the sphere, so I don't think this approach will work.
I then tried turning the photo into a cylindrical skybox and using it as the skybox component of the camera, which works great: no bending lines, everything looks as desired. Except for one thing: there is a shimmering/aliasing effect on the texture unless I enable mip maps, which then results in a blurred image.
Does anybody know how I could display my image the way the Oculus 360 Photos app does? Those photos render with perfect quality, no bending lines and no shimmering. How do they achieve this result?
I've tried different compression types and different shapes. The only thing I haven't tried is slicing the photo into 6 pieces and rendering it on the inside of a cube around the camera, which, due to its proximity, might avoid the shimmering that could be caused by distance from the camera.
Thoughts, suggestions, questions? Any assistance or discussion is appreciated
I was able to get good results by increasing the render scale to 1.5 or higher, which eliminated the shimmery aliasing effect. I'm not 100% sure whether the issue was due to the Samsung S6's resolution, but I now work with an increased render scale for higher quality regardless, and optimise elsewhere to save on frame rate.
I know this question is old now, but I had that problem too, on the Oculus Go, and it was solved thanks to the instructions here & here.

Unity3D ARToolKit5 blurred camera on Android mobile

I'm trying to do a simple AR scene with an NFT image that I've created with genTextData. The result works fairly well in the Unity editor, but once compiled and run on an Android device, the camera resolution is very bad and there's no focus at all.
My marker is rather small (a 3 cm picture), and the camera is so blurred that the AR cannot identify the marker from far away. I have to put the phone right in front of it (still very blurred), and it will show my object, but with a lot of flickering and jittering.
I tried playing with the filter fields (sample rate/cutoff...); that helped a little with the flickering of the object, but it would never display it from far away. I always have to hold the phone right in front of the marker. The result I want is to detect the small marker (sharp resolution and/or good focus) from a fair distance away, like the distance from your computer screen to your eyes.
The problem could be camera resolution and focus, or it could be something else, but I'm pretty sure the AR cannot identify the marker points because of the blurriness.
Any ideas or solutions for this problem?
You can have a look here:
http://augmentmy.world/augmented-reality-unity-games-artoolkit-video-resolution-autofocus
I compiled the Java part of the Unity plugin and set it to use the highest resolution available on your phone, with the autofocus mode activated.
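For anyone curious what that change amounts to, this is roughly the equivalent with the legacy android.hardware.Camera API. It is only a sketch, the actual plugin code will differ, and configureCamera is just a placeholder name:

import java.util.List;
import android.hardware.Camera;

// Illustrative only: "use the highest resolution and enable autofocus"
// expressed with the legacy android.hardware.Camera API.
public static void configureCamera(Camera camera) {
    Camera.Parameters params = camera.getParameters();

    // Pick the largest supported preview size.
    Camera.Size best = null;
    for (Camera.Size size : params.getSupportedPreviewSizes()) {
        if (best == null || size.width * size.height > best.width * best.height) {
            best = size;
        }
    }
    if (best != null) {
        params.setPreviewSize(best.width, best.height);
    }

    // Enable continuous autofocus if the device supports it.
    List<String> modes = params.getSupportedFocusModes();
    if (modes != null && modes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
        params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
    }

    camera.setParameters(params);
}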
Tell me if that helps.

Camera is being rotated 90 degree in air for android

In my AS3 Flex Mobile application for Android, I am using the camera, and the video is automatically rotated 90 degrees before I have done any rotation myself. It seems to be a known bug in AIR, but I was wondering if anyone has found a solution, since this is a really important feature for mobile application developers.
I've tried doing some rotation manually in my code, but it only fixes the view on my display; it still sends the wrongly rotated video to the receiver.
If any code is required I will add the snippets.
Please let me know.
As you mentioned, this is a known bug with AIR. It is not consistent, either. On some devices the orientation is correct, but on others (and all iOS devices, I believe, though I haven't fully tested that) it is rotated as you are seeing. For example, it was always oriented correctly on my Nexus 4 and Nexus 5, but a friend's Moto X is rotated incorrectly.
Unfortunately, I don't believe there is anything you can do short of having the user do a calibration (i.e. overlay a straight line, tell them to hold it horizontally, and click a button), then rotating the camera display and any images you take accordingly.
That being said, if you are using the camera to take photos, I highly recommend using CameraUI instead, which is the native implementation.
I faced the same issue today, but I'm developing in Java, not with AIR, so I don't know if it's the same; for me the solution was to add this line before starting the recording:
mMediaRecorder.setOrientationHint(90);
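In native Android (again, Java rather than AIR), the 90 is usually not hardcoded but derived from the camera's mounting angle and the current device orientation, following the computation documented for Camera.Parameters.setRotation. A sketch, where cameraId and deviceOrientation (the value reported by an OrientationEventListener) are assumed to come from your own code:

import android.hardware.Camera;
import android.media.MediaRecorder;
import android.view.OrientationEventListener;

// Sketch: compute the orientation hint from the camera's mounting angle and
// the current device orientation instead of hardcoding 90. 'cameraId' and
// 'deviceOrientation' are placeholders supplied by your own code.
public static void setOrientation(MediaRecorder recorder, int cameraId, int deviceOrientation) {
    if (deviceOrientation == OrientationEventListener.ORIENTATION_UNKNOWN) return;

    Camera.CameraInfo info = new Camera.CameraInfo();
    Camera.getCameraInfo(cameraId, info);

    // Round the reported device orientation to the nearest multiple of 90 degrees.
    int rounded = ((deviceOrientation + 45) / 90 * 90) % 360;

    int hint;
    if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        hint = (info.orientation - rounded + 360) % 360;  // front camera is mirrored
    } else {
        hint = (info.orientation + rounded) % 360;
    }
    recorder.setOrientationHint(hint);
}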

Sensor coordinate system in Android doesn't change, does it?

I have read in many places, such as One Screen Turn Deserves Another, that: "The sensor coordinate system used by the API for the natural orientation of the device does not change as the device moves, and is the same as the OpenGL coordinate system."
Now, I get the same reading as this image:
What I don't understand is: if the coordinate system doesn't change when I rotate the phone (always with the screen facing the user), then gravity should always be applied on the Y axis. It should only move to another axis if I put the phone in a position where the screen no longer faces the user, such as resting on a table, where gravity should be applied on the Z axis.
What is wrong with my understanding?
Thanks! Guillermo.
The axes are swapped when the device's screen orientation changes. Per the article you cited:
However, the Android sensor APIs define the sensor coordinate space to be relative to the top and side of the device — not the short and long sides. When the system reorients the screen in response to holding the phone sideways, the sensor coordinate system no longer lines up with the screen’s coordinate system, and you get unexpected rotations in the display of your app.
To access the un-swapped values if you'd like, use indices 3, 4 and 5 in values[], otherwise some of the suggestions mentioned in that same article work quite nicely!
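One of the fixes that article suggests is SensorManager.remapCoordinateSystem: build the rotation matrix from the raw accelerometer and magnetometer readings, then remap it according to the current display rotation so your math stays aligned with the screen. A rough sketch, where gravity, geomagnetic and displayRotation are placeholders for values your sensor listener and Display already provide:

import android.hardware.SensorManager;
import android.view.Surface;

// Sketch: rotation matrix from raw sensor readings, remapped to match the
// current display rotation. Returns null if the readings were unreliable.
public static float[] screenAlignedRotationMatrix(float[] gravity, float[] geomagnetic,
                                                  int displayRotation) {
    float[] r = new float[9];
    float[] remapped = new float[9];
    if (!SensorManager.getRotationMatrix(r, null, gravity, geomagnetic)) {
        return null;  // skip this sample
    }
    switch (displayRotation) {
        case Surface.ROTATION_90:
            SensorManager.remapCoordinateSystem(r, SensorManager.AXIS_Y,
                    SensorManager.AXIS_MINUS_X, remapped);
            break;
        case Surface.ROTATION_180:
            SensorManager.remapCoordinateSystem(r, SensorManager.AXIS_MINUS_X,
                    SensorManager.AXIS_MINUS_Y, remapped);
            break;
        case Surface.ROTATION_270:
            SensorManager.remapCoordinateSystem(r, SensorManager.AXIS_MINUS_Y,
                    SensorManager.AXIS_X, remapped);
            break;
        default:  // Surface.ROTATION_0, natural orientation: no remapping needed
            remapped = r;
    }
    return remapped;
}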
Quite an old question, but I find it still relevant. As of today, the Sensor Overview page, in its Sensor Coordinate System section, still says:
The most important point to understand about this coordinate system is that the axes are not swapped when the device's screen orientation changes—that is, the sensor's coordinate system never changes as the device moves. This behavior is the same as the behavior of the OpenGL coordinate system.
To me this wording is still confusing; of course, that may be because I'm not a native English speaker.
My understanding is that in Android (as in iOS) the coordinate system assumed by the sensors is fixed to the device. That is, the coordinate system is attached to the device, and its axes rotate along with the device.
So, for a phone whose natural orientation is portrait, the Y axis points upward when the phone is held vertically in portrait in front of the user. See the image below, from the same Android guide:
Then, when the user rotates the phone to landscape-left orientation (with the home button on the right side), the Y axis points to the left. See the image below, from a Matlab tutorial (although the screen no longer really faces the user):
Then there's the frequently cited post from Android dev blog, One Screen Turn Deserves Another that says:
The sensor coordinate system used by the API for the natural orientation of the device does not change as the device moves, and is the same as the OpenGL coordinate system
which to me sounds like exactly the opposite of my previous reasoning. But then again, in the following "So What's the Problem?" section, you do see that when the phone is rotated to landscape left, the Y axis points to the left, as in my previous reasoning.
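If you want to check this reasoning on your own device, a listener that simply logs the raw accelerometer values makes it visible: hold the phone upright in portrait and gravity (about 9.8) sits on values[1] (device Y); turn it to landscape and it moves to values[0] (device X), no matter what the screen orientation does. A minimal sketch (register it with SensorManager.registerListener for TYPE_ACCELEROMETER):

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.util.Log;

// Logs the raw, device-fixed accelerometer axes so you can watch where
// gravity shows up as you physically rotate the phone.
public class AxisLogger implements SensorEventListener {
    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            Log.d("AxisLogger", String.format("x=%.2f y=%.2f z=%.2f",
                    event.values[0], event.values[1], event.values[2]));
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}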

Marker Recognition on Android (recognising Rubik's Cubes)

I'm developing an augmented reality application for Android that uses the phone's camera to recognise the arrangement of the coloured squares on each face of a Rubik's Cube.
One thing that I am unsure about is how exactly I would go about detecting and recognising the coloured squares on each face of the cube. If you look at a Rubik's Cube, you can see that each square is one of six possible colours with a thin black border. This led me to think that it should be relatively simple to detect a square, possibly using an existing marker detection API.
My question really is: has anybody here had any experience with image recognition on Android? Ideally I'd like to be able to use an existing API, but it would be an interesting project to do from scratch if somebody could point me in the right direction to get started.
Many thanks in advance.
Do you want to point the camera at a cube, and have it understand the configuration?
Recognizing objects in photographs is an open AI problem, so you'll need to constrain the problem quite a bit to get any traction on it. I suggest starting with something like:
The cube will be photographed from a distance of exactly 12 inches, with a 100W light source directly behind the camera. The cube will be set diagonally so it presents exactly 3 faces, with a corner in the center. The camera will be positioned so that it focuses directly on the cube corner in the center.
A picture will be taken. Then the cube will be turned 180 degrees vertically and horizontally so that the other three faces are visible, and a second picture will be taken. Since you know exactly where each face is expected to be, grab a few pixels from each region and assume that is the color of that square. Remember that the cube will usually be scrambled, not uniform as shown in the picture here, so you always have to look at 9 * 6 = 54 little squares to get the color of each one.
The information in those two pictures defines the cube configuration. Generate an image of the cube in the same configuration, and allow the user to confirm or correct it.
It might be simpler to take 6 pictures, one of each face, traveling around the faces in a well-defined order. Remember that the center square of each face does not move, and defines the correct color for that face.
Once you have the configuration, you can use OpenGL operations to rotate the cube slices. This will be a program with hundreds of lines of code to define and rotate the cube, plus whatever you do for image recognition.
In addition to what Peter said, it is probably best to overlay guide lines on the picture of the cube as the user takes the pictures. The user then lines the cube up within the guide lines, whether it's a single side (a square guide line) or three sides (three squares in perspective). You might also want to have the user specify the number of colored boxes in each row. In your code, sample the color at what should be the center of each colored box and compare it to the other colored boxes (within some tolerance level) to identify the colors. In addition to presenting the recognized results to the user, it would be nice to allow the user to correct the recognized colors. It does not seem like fancy image recognition is needed.
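A rough sketch of that sampling idea, assuming you already have a Bitmap of one face lined up inside the guide square: read the pixel at the expected centre of each of the nine stickers and label it with the nearest of six reference colours. The grid geometry, the reference RGB values and the single-pixel sample (averaging a small patch would be more robust) are all assumptions to tune for your own lighting:

import android.graphics.Bitmap;
import android.graphics.Color;

// Samples the centre of each sticker on one photographed face and maps it
// to the nearest of six nominal reference colours. Purely illustrative.
public class FaceSampler {

    private static final int[] REFERENCE = {          // nominal sticker colours (assumed)
            Color.rgb(255, 255, 255),  // white
            Color.rgb(255, 213, 0),    // yellow
            Color.rgb(196, 30, 58),    // red
            Color.rgb(255, 88, 0),     // orange
            Color.rgb(0, 81, 186),     // blue
            Color.rgb(0, 158, 96)      // green
    };

    // Returns a 3x3 grid of indices into REFERENCE for one face.
    // 'left', 'top' and 'size' describe where the guide square sits in the photo.
    public static int[][] sampleFace(Bitmap photo, int left, int top, int size) {
        int[][] face = new int[3][3];
        int cell = size / 3;
        for (int row = 0; row < 3; row++) {
            for (int col = 0; col < 3; col++) {
                int x = left + col * cell + cell / 2;   // centre of the sticker
                int y = top + row * cell + cell / 2;
                face[row][col] = nearestColour(photo.getPixel(x, y));
            }
        }
        return face;
    }

    private static int nearestColour(int pixel) {
        int best = 0;
        long bestDist = Long.MAX_VALUE;
        for (int i = 0; i < REFERENCE.length; i++) {
            long dr = Color.red(pixel) - Color.red(REFERENCE[i]);
            long dg = Color.green(pixel) - Color.green(REFERENCE[i]);
            long db = Color.blue(pixel) - Color.blue(REFERENCE[i]);
            long dist = dr * dr + dg * dg + db * db;
            if (dist < bestDist) {
                bestDist = dist;
                best = i;
            }
        }
        return best;
    }
}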
Nice idea. I'm planning to use computer vision and marker detectors too, but for another project. I am still looking for any available information on the web, e.g. linking OpenCV or ARToolKit to the Android SDK. If you have any additional information about how to link a computer vision API, please let me know.
See you soon, and good luck!
NyARToolkit uses marker detection and is written in Java (as well as managed C# for Windows devices). I don't know how well it works on the Android platform, but I have seen it used on Windows Mobile devices, and it's very well done.
Good luck, and happy programming!
I'd suggest looking at the Android OpenCV library. You probably want to examine the blob detection algorithms. You may also want to consider Hough lines or contours to detect quads.
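A contour-based take on the quad detection suggestion, using the OpenCV for Android Java bindings: find edges, extract contours, approximate each to a polygon and keep the four-cornered ones above a minimum area. The Canny thresholds, the approximation epsilon and the area cutoff are guesses you would tune:

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgproc.Imgproc;

// Sketch: returns contours that look like quadrilaterals (candidate stickers).
public class QuadDetector {

    public static List<MatOfPoint> findQuads(Mat rgba) {
        Mat gray = new Mat();
        Mat edges = new Mat();
        Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);
        Imgproc.Canny(gray, edges, 50, 150);             // edge thresholds: tune

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

        List<MatOfPoint> quads = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
            MatOfPoint2f approx = new MatOfPoint2f();
            double epsilon = 0.04 * Imgproc.arcLength(curve, true);
            Imgproc.approxPolyDP(curve, approx, epsilon, true);

            // Keep polygons with exactly four corners and a non-trivial area.
            if (approx.total() == 4 && Imgproc.contourArea(approx) > 400) {
                quads.add(new MatOfPoint(approx.toArray()));
            }
        }
        return quads;
    }
}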
