Is it possible to control the autofocus feature of the Android camera using OpenCV's libnative_camera*.so?
Or maybe it's possible to set the focus distance manually?
Is there an alternative approach? (Maybe it's better to use the Android API to control the camera, grab frames in onPreviewFrame callbacks, and pass them to native code?)
If your intention is to control the camera on your own, then the Android Camera APIs suck. They also suck at exposing your hardware camera device number to JavaCV's native camera library; without a native device number, JavaCV has no way to connect to the appropriate camera (front or back).
If your intention is only to perform object detection and the like, the Android Camera API coupled with JavaCV should work. Set up a callback buffer of sufficient size, call setPreviewCallbackWithBuffer, configure a sufficient preview frame rate, and once you start getting preview frames in ImageFormat.NV21 (mind you, this is the only format supported for preview frames, even in ICS), pass them off to JavaCV for object detection.
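As a rough illustration of that setup (a sketch only; processFrame is a hypothetical hand-off into your JavaCV or native code, and the buffer size must match the preview size the device actually reports):

    // Sketch: deliver NV21 preview frames through a reusable callback buffer.
    Camera camera = Camera.open();
    Camera.Parameters params = camera.getParameters();
    params.setPreviewFormat(ImageFormat.NV21);   // the only guaranteed preview format
    camera.setParameters(params);

    final Camera.Size size = params.getPreviewSize();
    int bufferSize = size.width * size.height
            * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
    camera.addCallbackBuffer(new byte[bufferSize]);

    camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            // 'data' holds one NV21 frame; hand it to JavaCV / native code here.
            processFrame(data, size.width, size.height);   // hypothetical helper
            cam.addCallbackBuffer(data);                   // recycle the buffer
        }
    });
    // A preview surface or SurfaceTexture must also be set before startPreview()
    // on most devices, or no frames will be delivered.
    camera.startPreview();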
Autofocus in the Android Camera API sucks big time; I have been researching feasible solutions for over a month.
I am working on an Android project which should capture an image with the camera API and then do some image processing to classify the image and give the result to the user (I use OpenCV to process the image and produce the classification result). My question is: which camera API is best? Should I use the Java camera view in OpenCV, launch the camera via an intent, or use the camera2 API, which gives me control over some characteristics related to ambient conditions?
Please clear up my confusion and suggest which is the best option for controlling the quality of the image and the other conditions that affect the image taken.
The native camera gives you:
a higher framerate
RGBA capture, with no need to convert from Android's YUV format
and more features besides
Even so, I would say use the standard Camera API.
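For context: with the standard Camera API a preview frame arrives as NV21 and has to be converted before OpenCV code that expects RGBA can use it; that conversion is what the native camera saves you. A minimal sketch using OpenCV's Java bindings (data, previewWidth and previewHeight are assumed to come from your onPreviewFrame callback):

    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;

    // 'data' is one NV21 preview frame; previewWidth/previewHeight are the preview size.
    Mat yuv = new Mat(previewHeight + previewHeight / 2, previewWidth, CvType.CV_8UC1);
    yuv.put(0, 0, data);

    Mat rgba = new Mat();
    Imgproc.cvtColor(yuv, rgba, Imgproc.COLOR_YUV2RGBA_NV21);
    // 'rgba' is now ready for OpenCV processing; the native camera skips this step.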
Try the gpuImagePlus library, which is available for both Android and iOS.
Here is the link for the Android version:
https://github.com/wysaid/android-gpuimage-plus
I have quite a bit of experience with the camera API, but I could not find any documentation to answer this question...
Most phones already have a front and a back camera. Would it be possible to simulate a third camera in software (probably using a service) and register it with the API?
The idea would be that we define a custom camera, register it with the API, and then any camera app would be able to find it by looping through the available cameras.
I imagine several cases where we might want this...
There are some external cameras (such as the FLIR thermal camera) that could provide this.
We might want to concatenate the front and back camera images into a single image and preview that. I know not all phones support opening both cameras concurrently, but some do, and I could imagine this functionality being useful for third-party video chat apps like Skype. Specifically, since Skype doesn't natively support this, registering directly with the Android Camera API would let us get around the limitations of the Skype API, because our custom camera would just look like one of the default Android cameras.
So would this be possible? Or what are the technical limitations that prevent us from doing it? Perhaps the Android API simply doesn't let us define a custom source (I know the Sensor API doesn't, so I would not be surprised if that were the case for the Camera API as well).
I am trying to build a data logger with a machine-vision camera function. For me, the most important part of the camera2 API is the ability to set the focus to infinity.
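For reference, locking focus at infinity with camera2 looks roughly like this (a sketch only; previewBuilder, captureSession and backgroundHandler are assumed to be the usual camera2 objects you have already set up, and only devices with proper manual-focus support honour it):

    // Sketch: disable autofocus and request a focus distance of 0 diopters (infinity).
    previewBuilder.set(CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_OFF);
    previewBuilder.set(CaptureRequest.LENS_FOCUS_DISTANCE, 0.0f);
    // Devices advertising the MANUAL_SENSOR capability (typically FULL/LEVEL_3)
    // honour LENS_FOCUS_DISTANCE; LEGACY devices generally ignore it.
    captureSession.setRepeatingRequest(previewBuilder.build(), null, backgroundHandler);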
I recently got a Lenovo Phab 2 Pro and have been exploring Tango's motion tracking and depth map functions. I would like to record the pose estimate and depth map from Tango alongside my original camera image. Although Tango seems to use only the fisheye camera and the time-of-flight camera for those tasks, I cannot get any readings for pose or depth map whenever I open the main back-facing camera using camera2. I have also been researching whether Tango allows manual focus control, but unfortunately I couldn't find anything useful.
My questions are:
Is there a way to get Tango to work while having the back camera controlled by camera2?
If not, is there a way to manually control the main camera's focus using Tango api?
Thank you all very much!
I am developing a simple AR application which renders a 3D model on top of the camera view. I successfully implemented this on Windows 7, using OpenCV's native pose estimation functions (which internally use the POSIT algorithm) to obtain the translation and rotation matrices that are then applied to the 3D model.
I want to implement the same functionality in an Android application. The problem I am facing is that one of the arguments to the pose estimation function is the camera's intrinsic and distortion parameters, which I have not been able to determine.
I tried studying various AR platforms (AandAR, ARToolKit, etc.), but even after reverse engineering their sources I could not reach any conclusion about how they use these in pose estimation.
Please suggest an appropriate method for pose estimation (and, if it involves the camera's distortion parameters, how I would determine them) and hence for rendering a 3D object over the camera view in an Android application.
OpenCV is a great library which gives you the methods to perform camera calibration; if you want to figure out what the intrinsic parameters of your camera are, see the camera calibration documentation. Many of those functions are also available in the Android OpenCV library (see the sketch below).
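A minimal sketch of that chessboard calibration with OpenCV's Java bindings (calibrationImages, the board dimensions and the square size are assumptions you would replace with your own data):

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.MatOfPoint3f;
    import org.opencv.core.Point3;
    import org.opencv.core.Size;

    // Assumed inputs: grayscale chessboard images, the board's inner-corner count,
    // and the physical square size.
    Size boardSize = new Size(9, 6);
    float squareSize = 0.025f;   // e.g. 25 mm squares

    // Ideal 3D corner positions on the board plane (z = 0), reused for every view.
    List<Point3> pts = new ArrayList<>();
    for (int i = 0; i < (int) boardSize.height; i++)
        for (int j = 0; j < (int) boardSize.width; j++)
            pts.add(new Point3(j * squareSize, i * squareSize, 0));
    MatOfPoint3f boardCorners = new MatOfPoint3f();
    boardCorners.fromList(pts);

    List<Mat> objectPoints = new ArrayList<>();
    List<Mat> imagePoints = new ArrayList<>();
    for (Mat gray : calibrationImages) {                 // 'calibrationImages' is assumed
        MatOfPoint2f corners = new MatOfPoint2f();
        if (Calib3d.findChessboardCorners(gray, boardSize, corners)) {
            imagePoints.add(corners);
            objectPoints.add(boardCorners);
        }
    }

    Mat cameraMatrix = new Mat();                        // 3x3 intrinsics
    Mat distCoeffs = new Mat();                          // distortion coefficients
    List<Mat> rvecs = new ArrayList<>();
    List<Mat> tvecs = new ArrayList<>();
    Calib3d.calibrateCamera(objectPoints, imagePoints,
            calibrationImages.get(0).size(), cameraMatrix, distCoeffs, rvecs, tvecs);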
Finally you can also use the internal sensors Android ships with to obtain Gravity, Linear Acceleration and Orientation. See this inspiring talk by David Sachs on how to use Sensor Fusion on Android.
I was able to successfully perform camera calibration for the mobile device: I took chessboard snaps with the mobile camera from various angles and then used them in the C++ camera calibration code.
Then, using these intrinsic and extrinsic camera parameters, I set about building the pose estimation module.
I parsed the XML files and built the matrices.
Then, following this OpenCV tutorial (OpenCV POSIT), I was able to perform pose estimation.
However, I have yet to verify it in an actual run, but theoretically it's achievable, and the code doesn't give any build or compilation errors.
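For anyone doing the same step from the Java bindings rather than the legacy C API (where POSIT lives), a comparable pose estimate can be obtained with solvePnP; a rough sketch, with the inputs assumed to come from the calibration and detection steps above:

    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfDouble;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.MatOfPoint3f;

    public class PoseFromPnP {
        // modelPoints: known 3D points on the object; imagePoints: their 2D detections;
        // cameraMatrix/distCoeffs: results of camera calibration.
        static Mat estimatePose(MatOfPoint3f modelPoints, MatOfPoint2f imagePoints,
                                Mat cameraMatrix, MatOfDouble distCoeffs, Mat tvec) {
            Mat rvec = new Mat();                    // rotation as a Rodrigues vector
            Calib3d.solvePnP(modelPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

            Mat rotation = new Mat();                // 3x3 rotation matrix
            Calib3d.Rodrigues(rvec, rotation);
            return rotation;                         // combine with tvec for the model-view matrix
        }
    }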
In an Android project, I'm trying to capture video and process it in real time (like a Kinect). I tried two methods: using OpenCV, by repeatedly calling mCamera.grab() and capture.retrieve(mRgba, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA); or using Android's Camera, by repeatedly capturing images.
I feel that the OpenCV camera can capture images faster than the Android one. But why?
OpenCV uses a hack to get low-level access to the Android camera. This avoids several data copies and transitions between the native and managed layers.
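For reference, the native-camera loop from the question looks roughly like this with the OpenCV 2.4-era Java bindings (a sketch; the frame arrives as an RGBA Mat backed by native memory, which is where the saved copies come from):

    import org.opencv.core.Mat;
    import org.opencv.highgui.Highgui;
    import org.opencv.highgui.VideoCapture;

    // Grab/retrieve loop over the native camera (back camera via libnative_camera*.so).
    VideoCapture capture = new VideoCapture(Highgui.CV_CAP_ANDROID);
    Mat rgba = new Mat();
    try {
        while (capture.grab()) {
            // The frame lands in a native Mat already converted to RGBA,
            // so no NV21-to-RGBA conversion or Java-side byte[] copy is needed.
            capture.retrieve(rgba, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
            // ... process 'rgba' here ...
        }
    } finally {
        capture.release();
    }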