I want to implement visualization on a camera image. For example: if there is a wall or other closed surface in the camera view, you can color that surface by choosing a color from a color picker. For reference, see the Dulux Visualizer app.
Can anyone suggest how to implement the visualizer I have described above?
Dulux Visualizer uses image processing capabilities. It extracts the structure of the elements in the visualized picture and manipulates them, in this case by painting them.
I would suggest you look at OpenCV. It provides all the powerful image processing you will need.
OpenCV Tutorials
About OpenCV
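As a very rough illustrative sketch (not how Dulux actually does it), OpenCV's floodFill can select a closed surface starting from a tapped seed point and then tint it with the picked color. The tolerance values and blend weights below are assumptions you would need to tune:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class WallTinter {
    // Tint the closed region around seedPoint with pickedColor (BGR).
    public static void tintRegion(Mat imageBgr, Point seedPoint, Scalar pickedColor) {
        // floodFill requires a mask 2 px larger than the image
        Mat mask = Mat.zeros(imageBgr.rows() + 2, imageBgr.cols() + 2, CvType.CV_8UC1);
        Scalar tolerance = new Scalar(12, 12, 12);      // illustrative guess
        int flags = 4 | (255 << 8) | Imgproc.FLOODFILL_MASK_ONLY;
        Imgproc.floodFill(imageBgr, mask, seedPoint, new Scalar(0),
                new Rect(), tolerance, tolerance, flags);

        // Drop the 1 px border so the mask matches the image size
        Mat region = mask.submat(1, mask.rows() - 1, 1, mask.cols() - 1);

        // Blend the picked color with the original pixels so the wall texture
        // is kept, then copy the blend back only where the mask is set
        Mat colorLayer = new Mat(imageBgr.size(), imageBgr.type(), pickedColor);
        Mat blended = new Mat();
        Core.addWeighted(imageBgr, 0.4, colorLayer, 0.6, 0, blended);
        blended.copyTo(imageBgr, region);
    }
}
```

This only handles a single, reasonably uniform surface; a real visualizer does far more segmentation work than a flood fill.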
I want to crop the camera preview in Android using the camera2 API. I am using android-Camera2Basic, the official example.
This is the result I am getting
And the result I actually want to achieve is this
I don't want to overlay the object on the TextureView. I want the preview itself to actually be this size, without stretching.
You'll need to edit the image yourself before drawing it, since the default behavior of a TextureView is to just draw the whole image sent to its Surface.
And adjusting the TextureView's transform matrix will only scale or move the whole image, not crop it.
Doing this requires quite a bit of boilerplate, since you need to re-implement most of a TextureView. For best efficiency, you'll likely want to implement the cropping in OpenGL ES: use a GLSurfaceView, create a SurfaceTexture from a texture allocated in that GLSurfaceView's OpenGL context, and then draw a quadrilateral textured from it, with the cropping behavior you want implemented in the fragment shader.
That's fairly basic EGL, but it's quite a bit of work if you've never done any OpenGL programming before. There's a small test program within the Android OS tree that uses this kind of path: https://android.googlesource.com/platform/frameworks/native/+/master/opengl/tests/gl2_cameraeye/#
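A minimal sketch of that idea (the variable names and the uCrop uniform are my own assumptions, not code from the test program): the camera feeds a SurfaceTexture bound to an external OES texture, and the fragment shader samples only a sub-rectangle of it. This would live inside the GLSurfaceView.Renderer once the GL context exists:

```java
// In onSurfaceCreated(): allocate an external texture and wrap it in a SurfaceTexture
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture cameraTexture = new SurfaceTexture(tex[0]);
// Pass new Surface(cameraTexture) to the camera2 capture session as an output target.

// Fragment shader: uCrop = (x, y, width, height) of the crop rectangle in
// normalized [0,1] texture coordinates, so only that region gets drawn.
String fragmentShader =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "uniform vec4 uCrop;\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "  vec2 cropped = uCrop.xy + vTexCoord * uCrop.zw;\n" +
        "  gl_FragColor = texture2D(sTexture, cropped);\n" +
        "}\n";
```

You would still need the usual renderer plumbing (program compilation, a full-screen quad, calling updateTexImage() each frame); the point here is just that the crop is done by remapping texture coordinates.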
How do I create a custom RectangleDetector, like FaceDetector and BarcodeDetector, in the Mobile Vision API? I need to detect rectangular shapes in camera frames. How can I achieve that?
You'd extend the Detector class:
https://developers.google.com/android/reference/com/google/android/gms/vision/Detector
defining your RectangleDetector class. The code to detect rectangles would be implemented by overriding the detect() method; you'd need to write this yourself, since there isn't already code for detecting rectangles in mobile vision.
When you have this, you'd be able to use it with CameraSource and other parts of the mobile vision API.
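A skeleton of what that subclass might look like (the Rectangle item type and the detection logic inside detect() are placeholders you would have to supply yourself):

```java
import android.graphics.RectF;
import android.util.SparseArray;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;

// Placeholder item type for detected rectangles.
class Rectangle {
    final RectF bounds;
    Rectangle(RectF bounds) { this.bounds = bounds; }
}

public class RectangleDetector extends Detector<Rectangle> {
    @Override
    public SparseArray<Rectangle> detect(Frame frame) {
        SparseArray<Rectangle> results = new SparseArray<>();
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();
        // frame.getGrayscaleImageData() gives the luminance plane; run your own
        // rectangle-finding code on it and add each hit to the results array.
        java.nio.ByteBuffer luminance = frame.getGrayscaleImageData();
        // ... rectangle detection goes here ...
        return results;
    }
}
```

An instance of this class can then be passed to CameraSource.Builder like any of the built-in detectors.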
As pm0733464 mentioned, you can extend Detector and use an image processing library such as the Catalano Framework (GitHub or CodeProject).
For each frame:
1. Convert the frame to a Bitmap.
2. Using the framework, convert the Bitmap to a FastBitmap.
3. Gray-scale it, then threshold it.
4. Start a blob search.
5. Check the blobs for rectangular shapes of certain sizes.
It can find rectangles even when they are scaled or skewed; extract the blob with four corners and stretch it for further processing.
You can make any type of detector this way; I am working on a custom object detector at the moment.
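A rough per-frame sketch of steps 2-4 above, assuming the Catalano Framework class and method names used here (FastBitmap, Grayscale, Threshold, BlobDetection) match its documentation; the threshold value is a guess:

```java
import java.util.List;
import android.graphics.Bitmap;
import Catalano.Imaging.FastBitmap;
import Catalano.Imaging.Filters.Grayscale;
import Catalano.Imaging.Filters.Threshold;
import Catalano.Imaging.Tools.Blob;
import Catalano.Imaging.Tools.BlobDetection;

public class RectangleFinder {
    // Returns the blobs found in one camera frame; the rectangle checks
    // (size, four corners, aspect ratio) would be applied to these afterwards.
    public static List<Blob> findCandidateBlobs(Bitmap frameBitmap) {
        FastBitmap fb = new FastBitmap(frameBitmap);   // step 2
        new Grayscale().applyInPlace(fb);              // step 3: gray-scale
        new Threshold(120).applyInPlace(fb);           // step 3: threshold
        BlobDetection blobDetection = new BlobDetection();
        return blobDetection.ProcessImage(fb);         // step 4: blob search
    }
}
```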
I want to develop a customized camera application like Snapchat, without using SurfaceView.
First I used SurfaceView to develop the app, but I am unable to get a quality image, and I am also unable to get all the features the default camera app provides, like zoom, focus, face recognition, etc. Please suggest a solution to achieve this.
Sorry for my English.
github/xplodwild/android_packages_apps_Focal
github/almalence/OpenCamera
github/troop/FreeDCam
github/rexstjohn/UltimateAndroidCameraGuide
Maybe one of those might help.
In my Android project I want to add an overlay (a simple image with text on it) on top of the camera view.
The overlay is not a simple rectangle; it is transformed at each camera frame.
What I'm looking for is something like the image below:
I'm getting camera frames with the OpenCV library (CvCameraViewListener and CameraBridgeViewBase).
My question is: what is the best and fastest way to do this?
How can I transform the overlay at each frame in the way the figure above shows?
Any help and suggestions are appreciated.
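One way to do it (a sketch using OpenCV's Java API; the overlay Mat, the corner points, and the use of black as transparent are assumptions) is to compute a perspective transform from the overlay's corners to the target quadrilateral in each frame, warp the overlay, and copy it onto the frame with a mask:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class OverlayWarper {
    // overlayBgr: the overlay image; dstCorners: where its four corners should
    // land in this frame (top-left, top-right, bottom-right, bottom-left).
    public static void drawOverlay(Mat frameBgr, Mat overlayBgr, Point[] dstCorners) {
        int w = overlayBgr.cols(), h = overlayBgr.rows();
        MatOfPoint2f src = new MatOfPoint2f(
                new Point(0, 0), new Point(w, 0), new Point(w, h), new Point(0, h));
        MatOfPoint2f dst = new MatOfPoint2f(dstCorners);

        Mat transform = Imgproc.getPerspectiveTransform(src, dst);
        Mat warped = new Mat();
        Imgproc.warpPerspective(overlayBgr, warped, transform, frameBgr.size());

        // Copy only the warped overlay pixels (non-black after warping) onto the frame.
        Mat mask = new Mat();
        Imgproc.cvtColor(warped, mask, Imgproc.COLOR_BGR2GRAY);
        Imgproc.threshold(mask, mask, 0, 255, Imgproc.THRESH_BINARY);
        warped.copyTo(frameBgr, mask);
    }
}
```

Called from onCameraFrame(), this keeps everything in Mats, so there is no extra Bitmap conversion per frame; the expensive part is warpPerspective, which runs at full frame size.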
I have used Tesseract OCR in my Android project to recognize text from an image taken with the camera, but the results are not accurate. I want to optimize the image using OpenCV. I want to achieve the following for the captured image, which is decoded in Bitmap.Config.ARGB_8888 format:
1. Detect the objects in the resized image.
2. Once an object is identified, compute its border with respect to the original image (this is to remove the camera-angle effect).
3. Extract the object from the original image by applying a perspective transform.
4. Apply white balance to remove lighting effects.
In the example provided with the tess-two API, they use Leptonica for image manipulation, like drawing the bounding boxes around the words. But in my case I want to use OpenCV. Your guidance will be highly appreciated.
That's a lot you are asking for, and depending on the object it may be impossible. You should check out the tutorials on 2D feature detection and object detection (http://docs.opencv.org/doc/tutorials/features2d/table_of_content_features2d/table_of_content_features2d.html and http://docs.opencv.org/doc/tutorials/objdetect/table_of_content_objdetect/table_of_content_objdetect.html) to see if there is something you can use.
White balance does not do anything about lighting; you should use adaptive thresholding or some kind of high-pass filtering instead.
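For that last point, a minimal OpenCV (Java) example of adaptive thresholding before handing the image to Tesseract; the block size and constant are starting values you would need to tune:

```java
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class OcrPreprocess {
    // Convert an RGBA camera image to a binarized image that is less
    // sensitive to uneven lighting than a single global threshold.
    public static Mat binarize(Mat rgba) {
        Mat gray = new Mat();
        Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);
        Mat binary = new Mat();
        Imgproc.adaptiveThreshold(gray, binary, 255,
                Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY,
                31, 15);   // blockSize and C are guesses, tune per capture setup
        return binary;
    }
}
```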