I am writing an Android app that uses the camera. To make it user-friendly, I'd like to display a message when the picture is too dark or the user has a finger over the lens. Is there any way to get the camera state and decide whether the lens is covered by something or free?
In order to detect whether the camera is covered by some object, you will have to use the OpenCV library and act accordingly once the object is detected. There is nothing built into Android for the task you want to achieve.
Link to OpenCV
You can use Camera.PreviewCallback together with the Camera class to get a callback with a byte array; that byte array contains the image data of that frame.
Then you'll need some sort of algorithm/logic to determine whether or not it's "too dark". There is nothing built into Android that can help you determine that.
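For example, with the default NV21 preview format you can average the Y (luminance) plane and compare it against a threshold; a covered lens also shows up as a very dark frame. A minimal sketch, where DARK_THRESHOLD is a made-up starting point you would tune per device:

```java
import android.hardware.Camera;

public class DarknessDetector implements Camera.PreviewCallback {
    // Hypothetical threshold (0..255); tune it on real devices.
    private static final int DARK_THRESHOLD = 40;

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Cache the preview size in real code instead of querying every frame.
        Camera.Size size = camera.getParameters().getPreviewSize();
        int pixels = size.width * size.height;

        // In NV21 the first width*height bytes are the Y (luminance) plane.
        long sum = 0;
        for (int i = 0; i < pixels; i++) {
            sum += data[i] & 0xFF; // bytes are signed in Java
        }
        int averageLuma = (int) (sum / pixels);

        if (averageLuma < DARK_THRESHOLD) {
            // Frame is very dark: low light or the lens is covered.
            // Show your warning message here.
        }
    }
}
```

Sampling every Nth pixel instead of all of them keeps this cheap enough to run on every preview frame.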
After watching this video of AR remote support for HoloLens, I decided to try to do something similar but with Android and ARCore. Things were going fine until I tried to implement a feature shown at 2:01, which is basically taking a "screenshot" of a specific moment, drawing or inserting objects on it, and then converting them into AR models.
I tried to retain an instance of the Frame, but later, when I try to simulate a hit test on it, I receive the following message:
FrameHitTest invoked on old frame, the previous state of the system is no longer available. Returning empty list.
So my question is: is there another approach I can try to simulate a later hit test, or is it not possible with ARCore for now?
...
Hi Pedro,
Have you already tried to put an anchor at the center of the frame when the remote user saves the photo, and store its reference somewhere?
With that anchor you can generate the content remotely and then send the relative coordinates and the models to the client phone together with an anchor ID.
After receiving the data, the phone adds the augmented content according to the anchor it refers to, using the previously stored ID.
You could also send other information that is useful to you (e.g. the camera's distance from the plane in that specific frame, ...).
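A rough sketch of the anchor placement, done while the Frame is still current (the plane filter and returning null are my own choices, not a fixed ARCore recipe):

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;

// Call this at the moment the remote user saves the photo,
// while the Frame is still the current one.
Anchor anchorAtScreenCenter(Frame frame, float screenWidth, float screenHeight) {
    // Hit test against the center of the screen.
    for (HitResult hit : frame.hitTest(screenWidth / 2f, screenHeight / 2f)) {
        if (hit.getTrackable() instanceof Plane) {
            // The anchor keeps tracking even after this Frame becomes stale,
            // so later content can be placed relative to it instead of
            // hit-testing the old frame again.
            return hit.createAnchor();
        }
    }
    return null; // no plane under the screen center
}
```

The remote annotations can then be expressed as offsets from that anchor's pose rather than as a new hit test on the stale frame.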
Hope this helps or gives you some hints.
Cheers.
I need to use OpenCV to do image matching on Android devices.
For example, I want the mobile device to match an apple image.
When I open the application, the camera opens and is ready to detect the apple image. If it matches, a "match" message should be shown.
Can anyone give me some direction on how to finish it? Thanks.
To match an image, you can use Template Matching or a SURF detector in OpenCV. See the following links:
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
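For the template-matching route, here is a minimal sketch with the OpenCV Java bindings; MATCH_THRESHOLD is an assumption you would tune for your template:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class AppleMatcher {
    // Hypothetical threshold for normalized correlation (0..1); tune per template.
    private static final double MATCH_THRESHOLD = 0.8;

    /** Returns true if the template is found in the camera frame. */
    public static boolean matches(Mat frame, Mat template) {
        // frame and template must have the same type;
        // convert both to grayscale first if needed.
        Mat result = new Mat();
        // Slide the template over the frame using normalized cross-correlation.
        Imgproc.matchTemplate(frame, template, result, Imgproc.TM_CCOEFF_NORMED);
        Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
        return mmr.maxVal >= MATCH_THRESHOLD;
    }
}
```

Note that template matching is not scale- or rotation-invariant, which is why the SURF/homography tutorial is the better fit if the apple can appear at different sizes or angles.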
I'm creating an Android app that uses OpenCV to implement augmented reality. One of the required features is that it saves the processed video. I can't seem to find any sample code for saving in real time while using OpenCV.
If the above scenario isn't possible, another option is to save the video first and have it post-processed by OpenCV and saved back as a new file. But I can't find any sample code for this either.
Could someone be kind enough to point me in either direction, or give me an alternative? It's OK if the alternative doesn't use OpenCV.
The typical OpenCV flow is: you receive frames from the camera, convert them to RGB format, perform matrix operations, then return to the activity to display them in a View. You can store the modified frames as images somewhere on the SD card and use JCodec to create your MP4 out of those images. See Android make animated video from list of images.
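The frame-dumping half could look like this (the output directory and file naming are my own choices; Imgcodecs is where imwrite lives in OpenCV 3.x, in 2.4 it was Highgui):

```java
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

import java.io.File;
import java.util.Locale;

public class FrameDumper {
    private final File outDir;  // e.g. a folder on external storage
    private int frameIndex = 0; // simple running counter

    public FrameDumper(File outDir) {
        this.outDir = outDir;
    }

    /** Call once per processed frame from your OpenCV camera callback. */
    public void dump(Mat rgbaFrame) {
        Mat bgr = new Mat();
        // imwrite expects BGR channel order.
        Imgproc.cvtColor(rgbaFrame, bgr, Imgproc.COLOR_RGBA2BGR);
        String name = String.format(Locale.US, "frame_%05d.png", frameIndex++);
        Imgcodecs.imwrite(new File(outDir, name).getAbsolutePath(), bgr);
        bgr.release();
    }
}
```

JCodec (see the linked question) can then turn those numbered images into an MP4. PNG writing is slow, so switch to JPEG or do the writing on a background thread if you need real-time rates.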
I have a question regarding where face detection information is stored by Android.
There seem to be two options:
1) The face detection information is stored along with the image as part of its EXIF metadata.
2) Android stores the detected face information somewhere else and retrieves it when the user opens that particular image.
For option 1, I tried to fetch the information with Metadata Extractor, but there was no tag in particular that corresponds to face detection (correct me if I am wrong).
If it is option 2, how exactly can I filter gallery images according to the faces tagged inside?
Please give me some pointers.
Android has a face detection API. You can just call the findFaces method on a bitmap. You can also use external libraries and frameworks like OpenCV. Regarding your points: which framework do you use for face detection?
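A minimal sketch of the built-in android.media.FaceDetector route (MAX_FACES is an arbitrary limit; note findFaces requires an RGB_565 bitmap with an even width):

```java
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.media.FaceDetector;

public class FaceFinder {
    private static final int MAX_FACES = 5; // arbitrary upper bound

    /** Returns the number of faces found in the bitmap. */
    public static int countFaces(Bitmap source) {
        // FaceDetector only accepts RGB_565 bitmaps with an even width.
        Bitmap bmp = source.copy(Bitmap.Config.RGB_565, false);
        FaceDetector detector =
                new FaceDetector(bmp.getWidth(), bmp.getHeight(), MAX_FACES);
        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        int found = detector.findFaces(bmp, faces);

        for (int i = 0; i < found; i++) {
            PointF midPoint = new PointF();
            faces[i].getMidPoint(midPoint); // center between the eyes
            // midPoint and faces[i].eyesDistance() give position and scale
        }
        return found;
    }
}
```

This detects faces at runtime rather than reading any stored tags, so it sidesteps the EXIF question entirely; where a gallery app stores its own face tags is vendor-specific.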
I have an application where I make use of the Camera API to specify my own settings and do some frame processing.
I need my application to take photos with the same settings the original Android Camera app uses, but apparently I have no way to extract the procedures and intents from it. I have taken a look at the original Android Camera app's class file, but it was not helpful, since it uses native routines for the parameters part...
Is there any way I can obtain the parameters used by the original Camera app? And in which way does it save the images?
I know that I can write to a file stream as suggested in many posts, and I have done so, but how can I actually save the specific information the device puts in the files, such as information on the camera, density, and so on?
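For reference, here is a simplified version of what I do now; the path and the EXIF tag values are placeholders, and the ExifInterface part is what I am unsure about:

```java
import android.hardware.Camera;
import android.media.ExifInterface;

import java.io.FileOutputStream;
import java.io.IOException;

public class PictureSaver implements Camera.PictureCallback {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        String path = "/sdcard/DCIM/test.jpg"; // placeholder path
        try (FileOutputStream out = new FileOutputStream(path)) {
            out.write(data); // raw JPEG from the camera
        } catch (IOException e) {
            return;
        }

        try {
            // This is the part I am unsure about: writing the extra
            // metadata (camera info, density, ...) the stock app stores.
            ExifInterface exif = new ExifInterface(path);
            exif.setAttribute(ExifInterface.TAG_MAKE, "placeholder-make");
            exif.setAttribute(ExifInterface.TAG_MODEL, "placeholder-model");
            exif.saveAttributes();
        } catch (IOException ignored) {
        }
    }
}
```

(On many devices the JPEG byte array from the camera already carries some EXIF, so maybe I only need to add to it?)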
Thanks in advance