I am working on an Android project that should capture an image with the camera API and then do some image processing to classify it and give the result to the user (I use OpenCV to process the image and produce the classification result). My question is: which is the best camera API? Should I use the Java camera view in OpenCV, the camera intent, or the camera2 API, which gives me control over some characteristics related to ambient conditions?
Please clear up my confusion and suggest which one is best for controlling the quality of the image and the other conditions that affect the image taken.
The native camera gives you:
- a higher frame rate,
- RGBA capture, with no need to convert from Android's YUV format,
- and many more features.

So I would say use the standard Camera API.
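If you do take the standard Camera API route and feed OpenCV yourself, note that preview frames arrive in NV21 and have to be converted to RGBA before processing. A minimal sketch, assuming OpenCV has already been initialised; the class and method names are only illustrative:

```java
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

import android.graphics.SurfaceTexture;
import android.hardware.Camera;

public class OpenCvPreviewBridge {

    // Illustrative sketch: feed standard Camera API preview frames into OpenCV.
    public void startPreviewIntoOpenCV() throws java.io.IOException {
        final Camera camera = Camera.open();
        final Camera.Size size = camera.getParameters().getPreviewSize();

        // An off-screen texture so the camera delivers frames without a visible preview.
        camera.setPreviewTexture(new SurfaceTexture(0));

        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] nv21, Camera cam) {
                // NV21 is 1.5 bytes per pixel: height * 1.5 rows of single-channel data.
                Mat yuv = new Mat(size.height + size.height / 2, size.width, CvType.CV_8UC1);
                yuv.put(0, 0, nv21);

                Mat rgba = new Mat();
                Imgproc.cvtColor(yuv, rgba, Imgproc.COLOR_YUV2RGBA_NV21);
                // rgba now holds the frame, ready for your classification code.
            }
        });
        camera.startPreview();
    }
}
```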
Also try the gpuImagePlus library, which is available for both Android and iOS.
Here is the link for the Android version:
https://github.com/wysaid/android-gpuimage-plus
Related
I am trying to develop a camera app that captures an image and processes it with OpenCV, in Kotlin. I am targeting an Android Things Odroid N2+ board.
For now, I am struggling with the camera2 API.
I have a question: for image processing with OpenCV, can we use the camera2 API, or does OpenCV provide separate library/tools for capturing and processing images on Android?
I have no experience with the OpenCV library, but I have heard that the VideoCapture class is used for this purpose in Python.
The processing part involves first capturing a reference image and then comparing other images against it.
How should I approach camera capture for image processing?
I'm learning Android Studio and currently working on a robotics project in which an Android phone is mounted on the robot and used as its processor, so I can't reach the phone by hand. The phone needs to do some image processing. It isn't real-time processing, so I need to take a photo (preferably as a Bitmap) whenever I want, quickly and without a preview or confirmation step. I've tried some tutorials, and they all open the camera app, where the user has to capture and then confirm the photo.
I don't have a problem with the processing and I don't need to use OpenCV etc. I just need help with capturing the photo. Thanks.
You can implement your own camera, either via the camera APIs (hard) or by using a library (CameraKit-Android, Fotoapparat, etc.). That way you have full control and can save or process the image directly, without previewing it.
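For example, if you go the camera-API route yourself, a rough sketch with the (now deprecated) android.hardware.Camera class looks like this. It grabs a single Bitmap without any visible preview or confirmation step; the CAMERA permission is assumed to be granted, error handling is omitted, and the BitmapCallback interface is just a placeholder for your own processing code:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import java.io.IOException;

public class SilentCapture {

    // Placeholder interface: hand the captured Bitmap to your processing code.
    public interface BitmapCallback {
        void onBitmap(Bitmap bitmap);
    }

    // Rough sketch: capture one photo with no visible preview and no confirmation UI.
    public void captureOnce(final BitmapCallback callback) throws IOException {
        final Camera camera = Camera.open();
        camera.setPreviewTexture(new SurfaceTexture(0)); // off-screen preview target
        camera.startPreview();

        camera.takePicture(null, null, new Camera.PictureCallback() {
            @Override
            public void onPictureTaken(byte[] jpeg, Camera cam) {
                Bitmap bitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
                cam.stopPreview();
                cam.release();
                callback.onBitmap(bitmap);
            }
        });
    }
}
```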
I am new to Android Studio and I have a question about the camera. I have done the Take Photo tutorial and now I have a button that opens my camera app. I want to read the color of a pixel from the camera without saving the picture.
Is this possible, or do I need to use the camera API to get that color?
Any suggestions or tips on how to build this project are welcome.
The Android camera intent either saves an image file or is cancelled. You need the camera API approach to grab pixels without creating files. You can use a library like Fotoapparat, which wraps the tricky API for easy usage.
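To make the idea concrete, here is a rough sketch of reading a pixel color straight from a preview frame with the plain Camera API, with no file ever written. It assumes a preview is already running with the default NV21 preview format; the in-memory JPEG round trip is just the simplest way to get a Bitmap you can call getPixel() on:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Color;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import java.io.ByteArrayOutputStream;

// Rough sketch: read the color of the center pixel of a live preview frame,
// without ever saving a picture. Register an instance with
// camera.setPreviewCallback(new PixelProbe()) while the preview is running.
public class PixelProbe implements Camera.PreviewCallback {

    @Override
    public void onPreviewFrame(byte[] nv21, Camera camera) {
        Camera.Size s = camera.getParameters().getPreviewSize();

        // Convert the NV21 frame to a Bitmap via an in-memory JPEG.
        YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, s.width, s.height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, s.width, s.height), 90, out);
        byte[] jpeg = out.toByteArray();
        Bitmap frame = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);

        // The color of the center pixel of the current frame.
        int pixel = frame.getPixel(s.width / 2, s.height / 2);
        int r = Color.red(pixel);
        int g = Color.green(pixel);
        int b = Color.blue(pixel);
    }
}
```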
In an Android project, I'm trying to capture video and process it in real time (like a Kinect). I have tried two methods: using OpenCV, by repeatedly calling mCamera.grab() and capture.retrieve(mRgba, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA); or using Android's Camera, by repeatedly capturing images.
I feel that the OpenCV camera captures images faster than the Android one. Why?
OpenCV uses a hack to get low-level access to the Android camera. It avoids several data copies and transitions between the native and managed layers.
Is it possible to control autofocus feature of Android camera, using OpenCV's libnative_camera*.so ?
Or maybe it's possible to manually set focus distance?
Is there an alternative approach? (Maybe it's better to use the Android API to control the camera, grab frames in the onPreviewFrame events, and pass them to native code?)
If your intention is to control the camera on your own, the Android Camera APIs suck. In particular, they give you no way to pass your hardware camera device number to JavaCV's native camera library, and without a native device number JavaCV has no way to connect to the appropriate camera (front or back).
If your intention is only to perform object detection and the like, the Android Camera APIs coupled with JavaCV should work. Set up a callback buffer of sufficient size, call setPreviewCallbackWithBuffer, set a sufficient preview frame rate, and once you start getting preview frames (in ImageFormat.NV21 — mind you, this is the only format supported for preview frames, even in ICS), pass them off to JavaCV for object detection; a minimal setup is sketched below.
Autofocus through the Android Camera APIs sucks big time. I have been researching feasible solutions for over a month.
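For reference, a minimal version of that buffered-callback setup looks roughly like this; the buffer count is arbitrary, and handing each NV21 frame to JavaCV is left as a stub:

```java
import android.hardware.Camera;

public class BufferedPreviewSetup {

    // Minimal sketch of the setPreviewCallbackWithBuffer arrangement described above.
    public void start() {
        final Camera camera = Camera.open();
        Camera.Size size = camera.getParameters().getPreviewSize();

        // NV21 uses 12 bits per pixel, i.e. width * height * 3 / 2 bytes per frame.
        int bufferSize = size.width * size.height * 3 / 2;
        camera.addCallbackBuffer(new byte[bufferSize]);
        camera.addCallbackBuffer(new byte[bufferSize]); // a spare buffer

        camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] nv21, Camera cam) {
                // nv21 holds one preview frame in ImageFormat.NV21.
                // ... hand it to JavaCV / your detection code here ...

                // Give the buffer back so the camera can reuse it for the next frame.
                cam.addCallbackBuffer(nv21);
            }
        });

        // A preview surface or texture must also be set and startPreview() called
        // before frames start arriving.
        camera.startPreview();
    }
}
```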