How do I create a custom RectangleDetector, like FaceDetector and BarcodeDetector, in the Mobile Vision API? I need to detect rectangle shapes in camera frames. How can I achieve that?
You'd extend the Detector class:
https://developers.google.com/android/reference/com/google/android/gms/vision/Detector
Define your own RectangleDetector class, implementing the rectangle-detection code by overriding the detect() method. You'd need to implement this yourself, since there isn't existing code for detecting rectangles in Mobile Vision.
When you have this, you'd be able to use it with CameraSource and other parts of the mobile vision API.
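As a rough sketch of what that subclass might look like (the DetectedRectangle result type and the detection logic are placeholders you'd supply yourself; only the Detector/Frame wiring is part of the Mobile Vision API):

```java
import android.util.SparseArray;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;

// Hypothetical result type -- Mobile Vision has no built-in rectangle class.
class DetectedRectangle {
    final int left, top, right, bottom;
    DetectedRectangle(int left, int top, int right, int bottom) {
        this.left = left; this.top = top; this.right = right; this.bottom = bottom;
    }
}

public class RectangleDetector extends Detector<DetectedRectangle> {
    @Override
    public SparseArray<DetectedRectangle> detect(Frame frame) {
        SparseArray<DetectedRectangle> results = new SparseArray<>();
        // Frame exposes the camera image as grayscale bytes plus metadata.
        java.nio.ByteBuffer pixels = frame.getGrayscaleImageData();
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();
        // TODO: run your rectangle-finding algorithm over `pixels` here, then
        // call results.append(id, new DetectedRectangle(l, t, r, b)) per hit.
        return results;
    }
}
```

An instance can then be wired up like the built-in detectors, e.g. via new CameraSource.Builder(context, new RectangleDetector()).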
As pm0733464 mentioned, you can extend the Detector class and use an image-processing library such as the Catalano Framework (GitHub or CodeProject).
For each frame:
1. Convert the frame to a Bitmap.
2. Use the framework to convert the Bitmap to a FastBitmap.
3. Grayscale the image, then threshold it.
4. Start a blob search.
5. Check the blobs for rectangular shapes of certain sizes.
It can find rectangles even when they are scaled or skewed: extract the blob with its four corners and stretch it for further processing. You can build any type of detector this way; I'm working on a custom object detector at the moment.
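The rectangle check on the blobs can be sketched in plain Java, independent of any particular framework: flood-fill the blob, compute its bounding box, and treat it as (axis-aligned) rectangular when the blob fills most of its box. All names here are illustrative, not Catalano API:

```java
import java.util.ArrayDeque;

public class BlobRectangleCheck {
    /** Returns true if the blob containing (sx, sy) in the binary mask
     *  fills at least `minFill` of its own bounding box, i.e. it is
     *  roughly rectangular (axis-aligned case). */
    public static boolean isRoughlyRectangular(boolean[][] mask, int sx, int sy,
                                               double minFill) {
        int h = mask.length, w = mask[0].length;
        boolean[][] seen = new boolean[h][w];
        ArrayDeque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{sx, sy});
        seen[sy][sx] = true;
        int minX = sx, maxX = sx, minY = sy, maxY = sy, area = 0;
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            int x = p[0], y = p[1];
            area++;
            minX = Math.min(minX, x); maxX = Math.max(maxX, x);
            minY = Math.min(minY, y); maxY = Math.max(maxY, y);
            // Visit 4-connected neighbors that belong to the same blob.
            int[][] nbrs = {{x + 1, y}, {x - 1, y}, {x, y + 1}, {x, y - 1}};
            for (int[] n : nbrs) {
                int nx = n[0], ny = n[1];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h
                        && mask[ny][nx] && !seen[ny][nx]) {
                    seen[ny][nx] = true;
                    stack.push(new int[]{nx, ny});
                }
            }
        }
        double boxArea = (maxX - minX + 1) * (double) (maxY - minY + 1);
        return area / boxArea >= minFill;
    }
}
```

A skewed rectangle will not pass an axis-aligned fill test; for that case you'd locate the four extreme corner points of the blob instead and test the quadrilateral they form.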
I want to crop the camera preview in Android using the camera2 API. I am using android-Camera2Basic, the official example.
This is the result I am getting
And this is the result I actually want to achieve:
I don't want to overlay the object on the TextureView; I want the preview actually to be this size, without stretching.
You'll need to edit the image yourself before drawing it, since the default behavior of a TextureView is to just draw the whole image sent to its Surface.
And adjusting the TextureView's transform matrix will only scale or move the whole image, not crop it.
Doing this requires quite a bit of boilerplate, since you need to re-implement most of a TextureView. For best efficiency, you'll likely want to implement the cropping in OpenGL ES: use a GLSurfaceView, create a SurfaceTexture object from that GLSurfaceView's OpenGL context, and then draw a quadrilateral with that texture, implementing the cropping behavior you want in the fragment shader.
That's fairly basic EGL and GLES, but it's quite a bit to take in if you've never done any OpenGL programming before. There's a small test program in the Android OS tree that uses this kind of path: https://android.googlesource.com/platform/frameworks/native/+/master/opengl/tests/gl2_cameraeye/#
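The cropping itself boils down to sampling only a sub-range of the camera texture. A hedged sketch of the coordinate math (the fragment shader would then sample the texture over [u0, u1] x [v0, v1] instead of the full [0, 1] range; the class and method names are mine):

```java
public class CropMath {
    /** Computes normalized texture coordinates [u0, v0, u1, v1] that
     *  center-crop a srcW x srcH image to the given target aspect
     *  ratio (width / height), trimming equally from both sides. */
    public static float[] centerCropTexCoords(int srcW, int srcH, float targetAspect) {
        float srcAspect = (float) srcW / srcH;
        float u0 = 0f, v0 = 0f, u1 = 1f, v1 = 1f;
        if (srcAspect > targetAspect) {
            // Source is wider than the target: trim left and right.
            float keep = targetAspect / srcAspect; // fraction of width kept
            u0 = (1f - keep) / 2f;
            u1 = 1f - u0;
        } else {
            // Source is taller than the target: trim top and bottom.
            float keep = srcAspect / targetAspect; // fraction of height kept
            v0 = (1f - keep) / 2f;
            v1 = 1f - v0;
        }
        return new float[]{u0, v0, u1, v1};
    }
}
```

For example, center-cropping a 1920x1080 frame to a square keeps only the middle 56.25% of the width, so u runs from about 0.219 to 0.781 while v spans the full height.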
I want to implement visualization on the camera image. For example: if the camera view contains a wall or other closed surface, you can color that surface by choosing a color from a color picker. For reference, see the Dulux Visualizer app.
Can anyone suggest how to implement the visualizer described above?
The Dulux Visualizer uses image-processing capabilities: it extracts the whole element structure from the captured picture and manipulates it, in this case by painting it.
I would suggest looking at OpenCV. It provides all the powerful image processing you need.
OpenCV Tutorials
About OpenCV
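At its core, painting a tapped surface is a tolerance-based flood fill: replace every pixel connected to the tap point whose color is close to the seed color. OpenCV's floodFill does this with better edge handling; the plain-Java sketch below only illustrates the idea, and all names are mine:

```java
import java.util.ArrayDeque;

public class RegionPainter {
    /** Flood-fills the connected region around (sx, sy) whose pixels are
     *  within `tolerance` (per channel) of the seed color, replacing them
     *  with newColor. pixels is a row-major ARGB array of size w * h. */
    public static void paintRegion(int[] pixels, int w, int h,
                                   int sx, int sy, int newColor, int tolerance) {
        int seed = pixels[sy * w + sx];
        boolean[] done = new boolean[w * h];
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(sy * w + sx);
        done[sy * w + sx] = true;
        while (!stack.isEmpty()) {
            int idx = stack.pop();
            pixels[idx] = newColor;
            int x = idx % w, y = idx / w;
            // Push 4-connected neighbors still matching the seed color.
            if (x + 1 < w  && !done[idx + 1] && similar(seed, pixels[idx + 1], tolerance)) { done[idx + 1] = true; stack.push(idx + 1); }
            if (x - 1 >= 0 && !done[idx - 1] && similar(seed, pixels[idx - 1], tolerance)) { done[idx - 1] = true; stack.push(idx - 1); }
            if (y + 1 < h  && !done[idx + w] && similar(seed, pixels[idx + w], tolerance)) { done[idx + w] = true; stack.push(idx + w); }
            if (y - 1 >= 0 && !done[idx - w] && similar(seed, pixels[idx - w], tolerance)) { done[idx - w] = true; stack.push(idx - w); }
        }
    }

    private static boolean similar(int a, int b, int tol) {
        int dr = Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF));
        int dg = Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF));
        int db = Math.abs((a & 0xFF) - (b & 0xFF));
        return dr <= tol && dg <= tol && db <= tol;
    }
}
```

A realistic visualizer would additionally blend the new color with the original luminance so the wall's shading and texture survive, rather than flat-filling as above.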
I have used Tesseract OCR in my Android project to recognize text from an image taken with the camera, but the results are not accurate. I want to preprocess the image using OpenCV. I want to achieve the following for the captured image, which is decoded in Bitmap.Config.ARGB_8888 format:
Detect the objects in the resized image.
Once an object is identified, compute its border with respect to the original image (this is for removing the camera-angle effect).
Extract the object from the original image by applying a perspective transform.
Apply white balance to remove lighting effects.
In the example provided with the tess-two API, they use Leptonica for image manipulation, such as drawing the bounding boxes around the words. But in my case I want to use OpenCV. Your guidance will be highly appreciated.
That's a lot to ask for, and depending on the object it may be impossible. You should check out the tutorials on 2D feature detection and object detection (http://docs.opencv.org/doc/tutorials/features2d/table_of_content_features2d/table_of_content_features2d.html and http://docs.opencv.org/doc/tutorials/objdetect/table_of_content_objdetect/table_of_content_objdetect.html) to see if there is something you can use.
White balance does not do anything about lighting; you should use adaptive thresholding or some kind of high-pass filtering instead.
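To illustrate what adaptive thresholding does (OpenCV provides this as Imgproc.adaptiveThreshold; the plain-Java version below is only a sketch of the idea): each pixel is compared against the mean of its local window rather than one global cutoff, so text stays separable even under uneven lighting.

```java
public class AdaptiveThreshold {
    /** Binarizes a grayscale image (values 0-255, row-major) by comparing
     *  each pixel to the mean of the (2r+1)x(2r+1) window around it, minus
     *  a small constant c. Returns true where the pixel is "foreground"
     *  (darker than its neighborhood), as printed text usually is. */
    public static boolean[] threshold(int[] gray, int w, int h, int r, int c) {
        boolean[] out = new boolean[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                long sum = 0;
                int count = 0;
                // Average the window, clipped at the image borders.
                for (int dy = -r; dy <= r; dy++) {
                    for (int dx = -r; dx <= r; dx++) {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
                            sum += gray[ny * w + nx];
                            count++;
                        }
                    }
                }
                double mean = (double) sum / count;
                out[y * w + x] = gray[y * w + x] < mean - c;
            }
        }
        return out;
    }
}
```

This naive loop is O(w*h*r*r); real implementations use an integral image to get the window means in O(w*h), which is one reason to prefer OpenCV's version over rolling your own.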
I'm developing a game in Android and Java. On Android I am using AndEngine for the sprite image, and I was able to rotate it in all directions:
int bikeFrame;
// bikeFrame++, bikeFrame--
bikeSprite.setRotation(bikeFrame);
I want to make the game in J2ME as well, but J2ME's Sprite only supports a fixed set of transforms for rotation (TRANS_MIRROR, TRANS_MIRROR_ROT90, TRANS_MIRROR_ROT180, TRANS_MIRROR_ROT270, and the corresponding TRANS_ROT* constants), i.e. multiples of 90 degrees.
If I use pre-rendered images as animation frames, I still don't get smooth animation.
How can I rotate a sprite image to any angle in J2ME?
See this thread: omarhassan123 created a code snippet that should allow you to rotate the image by any angle you like.
There is a library called J2ME Army Knife that provides all sorts of image-manipulation techniques; you can get it here.
Also, see this question:
Image rotation algorithm
Another idea: decompile a game called Flexis Extreme. They do a lot of image rotation in real time, so you could try to find out how they did it.
If you can, try LWUIT's Image.rotate; there is a sample at this page: http://lwuit.blogspot.com.br/2008/11/round-round-infinite-progress-and.html
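Most of those snippets share the same underlying trick, inverse mapping with nearest-neighbor sampling: for each destination pixel, rotate its coordinates back into the source image and copy the pixel found there. A plain-Java sketch (the int[] pixel arrays match what MIDP's Image.getRGB and Image.createRGBImage work with):

```java
public class SpriteRotate {
    /** Rotates a square ARGB pixel array by `degrees` around its center
     *  using inverse mapping with nearest-neighbor sampling. Positive
     *  angles rotate clockwise in screen coordinates; pixels that map
     *  outside the source stay fully transparent (0x00000000). */
    public static int[] rotate(int[] src, int size, double degrees) {
        int[] dst = new int[size * size];
        double rad = Math.toRadians(degrees);
        double cos = Math.cos(rad), sin = Math.sin(rad);
        double cx = (size - 1) / 2.0, cy = (size - 1) / 2.0;
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                // Rotate the destination coordinate back into the source.
                double dx = x - cx, dy = y - cy;
                int sx = (int) Math.round(cx + dx * cos + dy * sin);
                int sy = (int) Math.round(cy - dx * sin + dy * cos);
                if (sx >= 0 && sx < size && sy >= 0 && sy < size) {
                    dst[y * size + x] = src[sy * size + sx];
                }
            }
        }
        return dst;
    }
}
```

On a MIDP device you'd feed the result to Image.createRGBImage(dst, size, size, true) so the transparent corners are honored; precomputing a handful of angles at load time avoids doing this per frame.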
I'm using cocos2d.
Now I've added some images in the layer, and played around a bit.
I'm trying to save the whole screen as an image file.
How can I do this?
The only way to capture the content of a SurfaceView is if you are rendering into it using OpenGL. You can use glReadPixels() to grab the content of the surface. If you are drawing onto the SurfaceView using a Canvas, you can simply create a Bitmap, create a new Canvas for that Bitmap and execute your drawing code with the new Canvas.
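For the Canvas route, a minimal sketch; the SceneRenderer interface and capture helper are illustrative names of mine, while Bitmap.createBitmap and new Canvas(bitmap) are standard Android API:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;

public class ScreenshotHelper {
    /** Re-runs your existing Canvas drawing code into an offscreen Bitmap.
     *  SceneRenderer stands in for whatever currently paints your
     *  SurfaceView -- adapt it to your own drawing method. */
    public static Bitmap capture(int width, int height, SceneRenderer renderer) {
        Bitmap shot = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        renderer.drawScene(new Canvas(shot));
        // Save with: shot.compress(Bitmap.CompressFormat.PNG, 100, stream);
        return shot;
    }

    public interface SceneRenderer {
        void drawScene(Canvas canvas);
    }
}
```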
It is my understanding that cocos2d-android also has a CCRenderTexture class with a saveBuffer method. In that case, have a look at my CCRenderTexture demo program and blog post for cocos2d-iphone, which give an example of how to create a screenshot using CCRenderTexture and saveBuffer. The same principle should apply to cocos2d-android.