Regarding opening the camera in Android Things for OpenCV

I am trying to develop a camera app that captures an image and processes it using OpenCV, in Kotlin. I am developing it for an Android Things Odroid N2+ board.
For now, I am struggling with the camera2 API.
I have a question: for image processing with OpenCV, can we use the camera2 API, or does OpenCV provide its own library/tools for capturing from the camera and processing images on Android?
Having no experience with the OpenCV library, I have heard that the VideoCapture class is used for this purpose in Python.
The processing part involves first capturing a reference image and then comparing other images against it.
How should I go about camera capture for image processing?
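(For the comparison step described above, a minimal sketch using OpenCV's Java/Kotlin bindings could look like the following. It assumes frames already arrive as RGBA Mat objects, e.g. decoded from camera2 captures; the function name and the threshold of 25 are illustrative placeholders.)

    import org.opencv.core.Core
    import org.opencv.core.Mat
    import org.opencv.imgproc.Imgproc

    // Illustrative helper: compare a new frame against a previously captured
    // reference frame and return the fraction of pixels that changed.
    fun fractionChanged(reference: Mat, frame: Mat): Double {
        val grayRef = Mat()
        val grayFrame = Mat()
        Imgproc.cvtColor(reference, grayRef, Imgproc.COLOR_RGBA2GRAY)
        Imgproc.cvtColor(frame, grayFrame, Imgproc.COLOR_RGBA2GRAY)

        val diff = Mat()
        Core.absdiff(grayRef, grayFrame, diff)   // per-pixel absolute difference
        // Mark a pixel as "changed" if it differs by more than 25 grey levels.
        Imgproc.threshold(diff, diff, 25.0, 255.0, Imgproc.THRESH_BINARY)

        return Core.countNonZero(diff).toDouble() / (diff.rows() * diff.cols())
    }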

Related

Taking a photo without preview in Android

I'm learning Android development and currently working on a robotics project in which an Android phone is mounted on the robot and used as its processor, so I can't reach the phone by hand. The phone needs to do some image processing. It isn't real-time processing, so I need to take a photo (preferably as a Bitmap) whenever I want, quickly and without a preview or confirmation step. I've tried some tutorials, but they all open the camera app, and the user has to capture and then confirm the photo.
I don't have a problem with the processing, and I don't need to use OpenCV etc. I just need help with capturing the photo. Thanks.
You can implement your own camera, either via the camera APIs (hard) or by using a library (CameraKit-Android, Fotoapparat, etc.). That way you will have full control and can save the image directly without previewing it.
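(A minimal sketch of that no-preview approach with the camera2 API in Kotlin; the function name and the 1280x720 JPEG size are illustrative, the CAMERA permission is assumed to be granted already, and cleanup of the camera and session is omitted for brevity.)

    import android.annotation.SuppressLint
    import android.content.Context
    import android.graphics.Bitmap
    import android.graphics.BitmapFactory
    import android.graphics.ImageFormat
    import android.hardware.camera2.CameraCaptureSession
    import android.hardware.camera2.CameraDevice
    import android.hardware.cam2.CameraManager
    import android.media.ImageReader
    import android.os.Handler
    import android.os.HandlerThread

    // Illustrative sketch: capture a single JPEG with no preview UI at all.
    @SuppressLint("MissingPermission")  // assumes CAMERA permission is granted
    fun captureOnce(context: Context, onBitmap: (Bitmap) -> Unit) {
        val thread = HandlerThread("camera").apply { start() }
        val handler = Handler(thread.looper)
        val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
        val cameraId = manager.cameraIdList.first()  // first (usually back) camera

        // The ImageReader's surface stands in for the preview surface.
        val reader = ImageReader.newInstance(1280, 720, ImageFormat.JPEG, 1)
        reader.setOnImageAvailableListener({ r ->
            val image = r.acquireLatestImage() ?: return@setOnImageAvailableListener
            val buf = image.planes[0].buffer
            val bytes = ByteArray(buf.remaining()).also { buf.get(it) }
            image.close()
            onBitmap(BitmapFactory.decodeByteArray(bytes, 0, bytes.size))
        }, handler)

        manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
            override fun onOpened(camera: CameraDevice) {
                // Pre-API-30 overload; on API 30+ use SessionConfiguration instead.
                camera.createCaptureSession(listOf(reader.surface),
                    object : CameraCaptureSession.StateCallback() {
                        override fun onConfigured(session: CameraCaptureSession) {
                            val request = camera.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)
                            request.addTarget(reader.surface)
                            session.capture(request.build(), null, handler)
                        }
                        override fun onConfigureFailed(session: CameraCaptureSession) {}
                    }, handler)
            }
            override fun onDisconnected(camera: CameraDevice) = camera.close()
            override fun onError(camera: CameraDevice, error: Int) = camera.close()
        }, handler)
    }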

Object detection in a static image using TensorFlow

Is it possible to detect an object in a static image using TensorFlow? Most of the tutorials I found on the internet use a live camera. I am currently working on an Android app that should detect an object after taking a photo, and I'm wondering whether that is possible.
TIA.
Definitely; all basic object detection runs on images only. A live camera feed or a video file is taken frame by frame for processing with object detection methods. Unless temporal analysis is used, object detection simply runs inference on each frame of the video.
Yes, you can run it on captured images. Here is a demo link where this is demonstrated for prototyping:
https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb
GitHub link:
https://github.com/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb
You can run this in Colab to test it with any image of your choice.
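(For an on-device equivalent in Kotlin, a sketch using the TensorFlow Lite Task Library is below; "model.tflite" is a placeholder for any TFLite object detection model bundled in the app's assets, e.g. one exported from the notebook above.)

    import android.content.Context
    import android.graphics.Bitmap
    import org.tensorflow.lite.support.image.TensorImage
    import org.tensorflow.lite.task.vision.detector.ObjectDetector

    // Illustrative sketch: run object detection on a single captured Bitmap.
    fun detectObjects(context: Context, photo: Bitmap) {
        val options = ObjectDetector.ObjectDetectorOptions.builder()
            .setMaxResults(5)
            .setScoreThreshold(0.5f)
            .build()
        // "model.tflite" is a placeholder asset name.
        val detector = ObjectDetector.createFromFileAndOptions(context, "model.tflite", options)

        for (detection in detector.detect(TensorImage.fromBitmap(photo))) {
            val top = detection.categories.first()
            println("${top.label} (score ${top.score}) at ${detection.boundingBox}")
        }
    }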

Which is the best camera API for image processing in Android?

I am working on an Android project that should capture an image via a camera API and then do some image processing to classify the image and give the result to the user (I use OpenCV to process the image and classify the result). My question is: which is the best camera API? Shall I use the JavaCameraView from OpenCV, the Camera API via an intent, or the Camera2 API, which gives me control over some characteristics related to ambient conditions?
Please clear up my confusion and suggest which is the best one for controlling the quality of the image and the other conditions that affect the photo taken.
Native camera:
higher framerate
captures RGBA, no need to convert from the Android YUV format
and many more features
So I would say use the standard camera API.
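(For comparison, the OpenCV JavaCameraView route from the question looks roughly like this in Kotlin; frames are delivered to onCameraFrame already converted to RGBA, so no manual YUV conversion is needed. The class name is illustrative.)

    import org.opencv.android.CameraBridgeViewBase
    import org.opencv.core.Mat

    // Illustrative listener for OpenCV's JavaCameraView: every preview frame
    // arrives as a Mat already converted to RGBA, ready for processing.
    class FrameProcessor : CameraBridgeViewBase.CvCameraViewListener2 {
        override fun onCameraViewStarted(width: Int, height: Int) {}
        override fun onCameraViewStopped() {}

        override fun onCameraFrame(inputFrame: CameraBridgeViewBase.CvCameraViewFrame): Mat {
            val rgba = inputFrame.rgba()  // RGBA frame, no manual YUV conversion
            // ... run classification / OpenCV processing on rgba here ...
            return rgba                   // the returned Mat is what gets displayed
        }
    }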
Try the gpuImagePlus library, which is available for both Android and iOS.
Here is the link for the Android version:
https://github.com/wysaid/android-gpuimage-plus

Why, on Android, is the OpenCV camera faster than the Android camera when capturing video?

In an Android project, I'm trying to capture video and process it in real time (like a Kinect). I tried two methods: using OpenCV, by repeatedly calling mCamera.grab() and capture.retrieve(mRgba, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA); or using the Android Camera, by repeatedly capturing images.
I find that the OpenCV camera can capture images faster than the Android one. But why?
OpenCV uses a hack to get low-level access to the Android camera. This allows it to avoid several data copies and transitions between the native and managed layers.

OpenCV for Android: Autofocus native camera

Is it possible to control the autofocus feature of the Android camera using OpenCV's libnative_camera*.so?
Or maybe it's possible to set the focus distance manually?
Is there an alternative approach? (Maybe it's better to use the Android API to control the camera, grab frames in onPreviewFrame callbacks, and pass them to native code?)
If your intention is to control the camera on your own, then the Android Camera APIs suck. In particular, they suck when it comes to exposing your hardware camera device number to the JavaCV native camera library; without a native device number, JavaCV has no way to connect to the appropriate camera (front or back).
If your intention is only to perform object detection and the like, the Android Camera APIs coupled with JavaCV should work. Set up a callback buffer of sufficient size, call setPreviewCallbackWithBuffer, set a sufficient preview frame rate, and once you start receiving preview frames in ImageFormat.NV21 (mind you, this is the only format supported for preview frames, even in ICS), pass them off to JavaCV for object detection.
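(A sketch of that buffered-preview setup in Kotlin, using the legacy android.hardware.Camera API; here OpenCV's Java bindings do the NV21-to-RGBA conversion, but JavaCV's equivalents would slot into the same place. The function name is illustrative.)

    import android.hardware.Camera
    import org.opencv.core.CvType
    import org.opencv.core.Mat
    import org.opencv.imgproc.Imgproc

    // Illustrative sketch of the buffered preview-callback pattern described above.
    @Suppress("DEPRECATION")  // android.hardware.Camera is the legacy API
    fun startBufferedPreview(camera: Camera) {
        val size = camera.parameters.previewSize
        // NV21 uses 12 bits per pixel: one frame is width * height * 3 / 2 bytes.
        camera.addCallbackBuffer(ByteArray(size.width * size.height * 3 / 2))

        camera.setPreviewCallbackWithBuffer { data, cam ->
            // data holds one NV21 preview frame; convert it to RGBA for processing.
            val yuv = Mat(size.height + size.height / 2, size.width, CvType.CV_8UC1)
            yuv.put(0, 0, data)
            val rgba = Mat()
            Imgproc.cvtColor(yuv, rgba, Imgproc.COLOR_YUV2RGBA_NV21)
            // ... hand rgba (or the raw NV21 bytes) off to the detection code ...
            cam.addCallbackBuffer(data)  // recycle the buffer for the next frame
        }
        camera.startPreview()
    }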
Autofocus in the Android Camera APIs sucks big time. I have been researching feasible solutions for over a month.
