I'm new to Android programming. I've written an application that uses the front camera to turn the phone into a mirror. The problem is that when I look at myself on the screen, my head is stretched (elongated vertically) and it looks funny. Could anyone help me fix this? I'd appreciate it.
Thanks in advance
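Note: the usual cause of a stretched preview is that the chosen camera preview size has a different aspect ratio than the SurfaceView it is drawn into. Below is a minimal sketch of picking a preview size that matches the surface, using the old android.hardware.Camera API (the helper class name is made up for illustration):

    import android.hardware.Camera;
    import java.util.List;

    // Pick the supported preview size whose aspect ratio best matches the
    // surface the preview is drawn into, so the image is not stretched.
    public class PreviewSizeHelper {
        public static Camera.Size chooseBestPreviewSize(Camera.Parameters params,
                                                        int surfaceWidth,
                                                        int surfaceHeight) {
            double targetRatio = (double) surfaceWidth / surfaceHeight;
            Camera.Size best = null;
            double bestDiff = Double.MAX_VALUE;
            for (Camera.Size size : params.getSupportedPreviewSizes()) {
                double ratio = (double) size.width / size.height;
                double diff = Math.abs(ratio - targetRatio);
                if (diff < bestDiff) {
                    bestDiff = diff;
                    best = size;
                }
            }
            return best;
        }
    }

Set the chosen size via camera.setParameters(), or resize the SurfaceView to match its ratio. If the app runs in portrait with setDisplayOrientation(90), compare against surfaceHeight / surfaceWidth instead, since supported sizes are reported in landscape orientation.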
I am working on a face recognition app where a picture is taken and sent to a server for recognition.
I have to add a validation that the user captures a picture of a real person, not a picture of another picture. I tried an eye-blink feature in which the camera waits for an eye blink and captures as soon as the eyes blink, but that isn't working out, because a blink is falsely detected when the phone is shaken during capture.
I'd like to ask for help here: is there any way to detect whether the user is photographing another picture? Any ideas would help.
I am using React Native to build both the Android and iOS apps.
Thanks in advance.
Thanks for the support.
I resolved it with the eye-blink trick after all. Here is the little algorithm I used:
Open the camera and tap the capture button:
The camera detects whether any face is in the view and waits for an eye blink.
If the blink probability is above 90% for both eyes, wait 200 milliseconds, then detect the face again with an eyes-open probability > 90% to verify the face is still there, and finally capture the picture.
It's a cheap trick, but it's working out so far.
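For anyone curious, a rough Java sketch of that state machine, assuming Google's Mobile Vision face detector supplies the per-eye open probabilities (the class name and exact thresholds are mine, taken from the steps above):

    import com.google.android.gms.vision.face.Face;

    // State machine for the blink-then-capture flow above. A "blink" means
    // both eyes closed (open probability < 0.1, i.e. blink probability > 90%);
    // after 200 ms the face must still be there with both eyes open (> 0.9)
    // before the capture is allowed.
    public class BlinkCaptureGate {
        private static final float CLOSED_THRESHOLD = 0.1f;
        private static final float OPEN_THRESHOLD = 0.9f;
        private static final long REOPEN_DELAY_MS = 200;

        private long blinkSeenAt = -1;

        /** Feed every detected face; returns true when it is time to capture. */
        public boolean onFaceUpdate(Face face) {
            float left = face.getIsLeftEyeOpenProbability();
            float right = face.getIsRightEyeOpenProbability();
            if (left < 0 || right < 0) {
                return false; // probabilities not computed for this frame
            }
            long now = System.currentTimeMillis();
            if (left < CLOSED_THRESHOLD && right < CLOSED_THRESHOLD) {
                blinkSeenAt = now; // both eyes closed: blink detected
                return false;
            }
            if (blinkSeenAt > 0
                    && now - blinkSeenAt >= REOPEN_DELAY_MS
                    && left > OPEN_THRESHOLD && right > OPEN_THRESHOLD) {
                blinkSeenAt = -1; // face is back with eyes open: capture
                return true;
            }
            return false;
        }
    }

Note that the detector must be built with setClassificationType(FaceDetector.ALL_CLASSIFICATIONS), otherwise the eye-open probabilities come back as -1.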
Regards
On some iPhones (iOS 11.1 upwards), there's a so-called TrueDepth camera that's used for Face ID. With it (or the back-facing dual camera system) you can capture images along with depth maps. You could exploit that feature to check whether the face is flat (captured from a picture) or has normal facial contours. See here...
One would have to come up with a 3D face model to fool that.
It's limited to only a few iPhone models, though, and I don't know about the Android side.
I'm trying to build a simple AR scene with an NFT image that I've created with genTextData. The result works fairly well in the Unity editor, but once compiled and run on an Android device, the camera resolution is very bad and there's no focus at all.
My marker is rather small (a 3 cm picture), and the camera image is so blurred that the AR cannot identify the marker from far away. I have to put the phone right in front of it (still very blurred), and then it shows my object, but with a lot of flickering and jittering.
I tried playing with the filter fields (Sample rate/cutoff...); it helped a little with the flickering of the object, but it still never displays the object from far away; I always have to hold the phone right in front of the marker. The result I want is to detect the small marker (sharp resolution and/or good focus) from a fair distance away, roughly the distance from your computer screen to your eyes.
The problem could be camera resolution and focus, or it could be something else, but I'm pretty sure the AR cannot identify the marker points because of the blurriness.
Any ideas or solutions for this problem?
You can have a look here:
http://augmentmy.world/augmented-reality-unity-games-artoolkit-video-resolution-autofocus
I compiled the Java part of the Unity plugin and set it to use the highest resolution your phone supports. Auto-focus mode is also activated.
Tell me if that helps.
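For reference, the Java side boils down to something like this (a sketch against the old android.hardware.Camera API, not the actual plugin code):

    import android.hardware.Camera;
    import java.util.List;

    // Ask the camera for its largest preview size and continuous autofocus.
    public class CameraConfig {
        public static void configure(Camera camera) {
            Camera.Parameters params = camera.getParameters();

            // Pick the largest supported preview size (highest resolution).
            Camera.Size largest = null;
            for (Camera.Size size : params.getSupportedPreviewSizes()) {
                if (largest == null
                        || size.width * size.height > largest.width * largest.height) {
                    largest = size;
                }
            }
            params.setPreviewSize(largest.width, largest.height);

            // Enable continuous autofocus where the device supports it.
            List<String> modes = params.getSupportedFocusModes();
            if (modes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
                params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
            }
            camera.setParameters(params);
        }
    }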
I'm working with OCR using tess-two as my scanner, and it works great: I take the photo, then use https://github.com/IsseiAoki/SimpleCropView to crop the image so that tess-two reads the right numbers.
What I'd like to do is take the second step, cropping the image, out of the process, to make it quicker for the user. I'm just confused about how to add my own crop rectangle area for the user to take the picture in.
Even something as simple as "[ ]", with the rest of the camera surface view blacked out or dimmed, guiding the user to photograph the numbers between the brackets.
Are there any good tutorials out there covering this? I've searched, but with no success. Or if anyone can point me in the right direction on how to start, that would be a big help! Thanks in advance!
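One simple way to get that "[ ]" effect is a custom View stacked over the camera SurfaceView that dims everything except a clear guide rectangle. A minimal sketch (the class name and rectangle proportions are my own guesses):

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.graphics.PorterDuff;
    import android.graphics.PorterDuffXfermode;
    import android.graphics.RectF;
    import android.view.View;

    // Overlay view: dims the whole preview except a clear guide rectangle
    // in the middle, where the user should place the numbers.
    public class CropGuideOverlay extends View {
        private final Paint dimPaint = new Paint();
        private final Paint clearPaint = new Paint();

        public CropGuideOverlay(Context context) {
            super(context);
            // A software layer is needed so the CLEAR xfermode punches a
            // transparent hole instead of painting black.
            setLayerType(View.LAYER_TYPE_SOFTWARE, null);
            dimPaint.setColor(Color.argb(150, 0, 0, 0)); // semi-transparent black
            clearPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            // Dim everything...
            canvas.drawRect(0, 0, getWidth(), getHeight(), dimPaint);
            // ...then punch a clear hole where the numbers should go.
            float w = getWidth() * 0.8f;
            float h = getHeight() * 0.2f;
            float left = (getWidth() - w) / 2f;
            float top = (getHeight() - h) / 2f;
            canvas.drawRect(new RectF(left, top, left + w, top + h), clearPaint);
        }
    }

Stack it over the SurfaceView in a FrameLayout; when cropping the captured photo for tess-two, map the same rectangle proportions onto the full-size bitmap.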
I am trying to make an application that, while the phone's camera is running, only shows half of what the viewfinder sees. I have tried reading the documentation on accessing the camera hardware, but I am new to development and can't make much of it. Would this even be possible? And if so, can someone please offer tips on how I could go about it? Thanks very much.
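It should be possible without touching the camera APIs at all: keep the preview SurfaceView full-screen and cover half of it with an opaque view. A minimal sketch (the class name is mine; the camera setup itself is omitted):

    import android.app.Activity;
    import android.graphics.Color;
    import android.os.Bundle;
    import android.view.Gravity;
    import android.view.SurfaceView;
    import android.view.View;
    import android.widget.FrameLayout;

    // Show only the top half of what the viewfinder sees: keep the preview
    // SurfaceView full-screen (so the image is not squeezed) and cover the
    // bottom half of it with an opaque black view.
    public class HalfPreviewActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            FrameLayout root = new FrameLayout(this);

            SurfaceView preview = new SurfaceView(this); // attach the camera preview here
            root.addView(preview, new FrameLayout.LayoutParams(
                    FrameLayout.LayoutParams.MATCH_PARENT,
                    FrameLayout.LayoutParams.MATCH_PARENT));

            View mask = new View(this);
            mask.setBackgroundColor(Color.BLACK);
            int screenHeight = getResources().getDisplayMetrics().heightPixels;
            root.addView(mask, new FrameLayout.LayoutParams(
                    FrameLayout.LayoutParams.MATCH_PARENT,
                    screenHeight / 2, Gravity.BOTTOM));

            setContentView(root);
            // Camera.open() / setPreviewDisplay(preview.getHolder()) omitted here.
        }
    }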
I was including the CWAC-Android library in my project. First of all, thanks a lot for this work.
But I'm experiencing a really strange problem.
Pictures taken in bright scenes (outdoor areas, strong light, a beautiful sunny day) get a pink overlay. On a quick check I was not able to find a solution or a reason for it; it seems to be just some overlay and not really part of "the image". I tried it in my own implementation and in the library demo project, and it happens in both. :/
I'm testing on an Xperia Z1; maybe that's a device-specific problem?
Any assistance would be great.
Some sample images:
http://s1.directupload.net/images/140227/463xystc.jpg
http://s7.directupload.net/images/140227/l78xvjhd.jpg
Sorry, my mouth is always open in the images :D ;) Thanks.
Pictures taken with the sample (demo) project.
The problem seems to occur when pointing the camera straight at a bright scene and taking the picture immediately, during the short moment when the camera preview is still overexposed.
In the end I didn't find any way to fix the problem within the library myself, so I did a (basic) implementation of my own -> it works just fine.
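The core of the workaround is just giving auto-exposure a moment to settle before the shot. A minimal sketch of that idea with the old android.hardware.Camera API (not my exact code; the 500 ms delay is a guess):

    import android.hardware.Camera;
    import android.os.Handler;
    import android.os.Looper;

    // Workaround sketch for the pink overlay: when the capture button is hit
    // right after pointing the camera at a bright scene, delay takePicture()
    // briefly so auto-exposure can settle first.
    public class DelayedCapture {
        private final Handler handler = new Handler(Looper.getMainLooper());

        public void capture(final Camera camera, final Camera.PictureCallback jpegCallback) {
            handler.postDelayed(new Runnable() {
                @Override
                public void run() {
                    camera.takePicture(null, null, jpegCallback);
                }
            }, 500);
        }
    }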
I'm using the following fix for distortions (portrait and landscape):
https://stackoverflow.com/a/22201580/371749
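For anyone who can't follow the link: the usual display-orientation fix is the standard snippet from the Camera.setDisplayOrientation documentation, which I believe is along the lines of the linked answer:

    import android.app.Activity;
    import android.hardware.Camera;
    import android.view.Surface;

    // Rotate the preview to match the device rotation so portrait and
    // landscape are not distorted or shown sideways.
    public class OrientationFix {
        public static void setCameraDisplayOrientation(Activity activity,
                                                       int cameraId, Camera camera) {
            Camera.CameraInfo info = new Camera.CameraInfo();
            Camera.getCameraInfo(cameraId, info);

            int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
            int degrees = 0;
            switch (rotation) {
                case Surface.ROTATION_0:   degrees = 0;   break;
                case Surface.ROTATION_90:  degrees = 90;  break;
                case Surface.ROTATION_180: degrees = 180; break;
                case Surface.ROTATION_270: degrees = 270; break;
            }

            int result;
            if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
                result = (info.orientation + degrees) % 360;
                result = (360 - result) % 360; // compensate for the mirror
            } else {
                result = (info.orientation - degrees + 360) % 360;
            }
            camera.setDisplayOrientation(result);
        }
    }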
Good luck.