The sample project that ships with the OpenCV SDK, named "OpenCV Sample - color-blob-detection", identifies an area according to the color of the object you select, then draws contours around that object. Is it possible to extract/highlight that particular area? There may also be some other object in the background with the same color, but that is not my desired object.
I know this may be tricky and involve a lot of processing, but some guidance on this would help. How can this be achieved?
Note:
The reason I am asking is that we later want to model a temporary 3D object on top of the selected real-time object, so differentiating it from the background objects is necessary.
You should use pointPolygonTest(). In the process() function you should add only one contour to mContours: the one for which pointPolygonTest() says the touch coordinates fall inside. You will need to pass the touch coordinates to the process() method.
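Roughly like this, as a sketch inside the sample's process() method. Here touchPoint is assumed to be the touch location already converted into the image's coordinate space, and contours is the list produced by findContours():

List<MatOfPoint> selected = new ArrayList<MatOfPoint>();
for (MatOfPoint contour : contours) {
    MatOfPoint2f contour2f = new MatOfPoint2f(contour.toArray());
    // measureDist = false: returns +1 inside, -1 outside, 0 on the edge
    if (Imgproc.pointPolygonTest(contour2f, touchPoint, false) > 0) {
        selected.add(contour);  // keep only the contour under the finger
        break;
    }
}
mContours = selected;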
I am rendering a 3D object using Rajawali and OpenGL on Android. I want to render a cube around the boundaries of the object, in exactly the same way as shown.
I understand that I may need to use a set of lines or the Stack API in Rajawali, but I don't understand how I can use the Object3D API to figure out which points make up each line. I tried using the Object3D.getGeometry().getBoundingBox() API, but it's returning null.
I need hints on how to derive the min/max Number3D points using the Rajawali API, and whether creating a set of 12 lines based on those points would be the right solution. Or should I somehow augment it into the model after loading, so that I can scale it along with the model?
This may be too obvious but please consider me a beginner. Thanks.
Object3D.getGeometry().getBoundingBox() returns an undefined value until you call the Object3D.getGeometry().computeBoundingBox() method, which updates the bounding box information.
The answer linked below will give you more information if you are using complex 3D objects with a scene hierarchy.
Any way to get a bounding box from a three.js Object3D?
Update:
The best way is to create a separate object with lines using the min/max values and apply the original object's transformation to it. Or make the bounding box geometry a child of the original object.
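For example, given the min/max corners, a hedged sketch of building the 12 edges. Number3D, Line3D, and addChild are Rajawali APIs, but constructor details vary between versions, and boundingBoxMin/boundingBoxMax are assumed to have been computed already (e.g. from the geometry's vertex buffer):

Number3D min = boundingBoxMin, max = boundingBoxMax;  // assumed available
Number3D[] c = {
    new Number3D(min.x, min.y, min.z), new Number3D(max.x, min.y, min.z),
    new Number3D(max.x, max.y, min.z), new Number3D(min.x, max.y, min.z),
    new Number3D(min.x, min.y, max.z), new Number3D(max.x, min.y, max.z),
    new Number3D(max.x, max.y, max.z), new Number3D(min.x, max.y, max.z)
};
// The 12 box edges, as index pairs into the 8 corners.
int[][] edges = {
    {0,1},{1,2},{2,3},{3,0},   // bottom face
    {4,5},{5,6},{6,7},{7,4},   // top face
    {0,4},{1,5},{2,6},{3,7}    // vertical edges
};
for (int[] e : edges) {
    Stack<Number3D> pts = new Stack<Number3D>();
    pts.push(c[e[0]]);
    pts.push(c[e[1]]);
    Line3D edge = new Line3D(pts, 1f, 0xffffff00);
    // Child of the original object, so it inherits the object's transformation.
    parentObject.addChild(edge);
}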
I'm new to Android development, and I have to develop an application that can help an autistic child learn numbers. I have a few ideas and I've been trying to learn and implement the code, but it failed. The question is: how can I apply motion code or a sprite to draw a number or letter? For example, I want to make a penguin move along the line and draw a number nine.
There is an example from mybringback.com in which an image moves to draw a rectangle. How can I adapt it to draw a number? I'm sorry if I'm asking too much; I'm just trying to get some ideas.
I think you should first build a utility program, in order to create the "path vector".
What I mean by path vector is simply a vector of points (where a point has an x value and a y value). Your utility should let you draw whatever you want with a simple pen: draw on a surface, store points while the mouse is down, and ignore points when the mouse is up.
Then, in the main program, you will just have to read back the path of your number/letter.
I've implemented something like this for the Sugar OLPC platform, without serializing the path into files: I was able to draw, and to view the animation, using exactly the process I've just described. A minimal sketch of the recording step is shown below.
Hope it can help you.
P.S.: I used the word mouse, but you guessed that I'm talking about a finger...
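Here is that recording step as a minimal Android custom View; the class and field names are just placeholders:

import java.util.ArrayList;
import java.util.List;
import android.content.Context;
import android.graphics.Point;
import android.view.MotionEvent;
import android.view.View;

public class PathRecorderView extends View {
    private final List<Point> pathVector = new ArrayList<Point>();

    public PathRecorderView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_MOVE:
                // Finger is down: record the point.
                pathVector.add(new Point((int) event.getX(), (int) event.getY()));
                return true;
            case MotionEvent.ACTION_UP:
                // Finger lifted: stop recording (points are simply not added).
                return true;
        }
        return super.onTouchEvent(event);
    }

    public List<Point> getPathVector() {
        return pathVector;
    }
}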
There are various ways to achieve animation effects. One approach that is quite versatile involves creating a custom View or SurfaceView in which you override the onDraw() method. Various tutorials on this can be found; the official Android discussion of it is here:
http://developer.android.com/guide/topics/graphics/2d-graphics.html#on-view
Your implementation will look something like this:
@Override
protected void onDraw(Canvas canvas) {
    // Find elapsed time since previous draw
    // Compute new position of drawable/bitmap along figure
    // Draw bitmap in appropriate location
    // Add line to buffer containing segments of curve drawn so far
    // Render all segments in curve buffer
    // Take some action to call for the rendering of the next frame
    // (this may be done in another thread)
}
Obviously this is a simplification. For a very simple tutorial, see here:
http://www.techrepublic.com/blog/software-engineer/bouncing-a-ball-on-androids-canvas/1733/
Note that different implementations of this technique will require different levels of involvement by you; for example, if you use a SurfaceView, you are in charge of calling the onDraw method, whereas subclassing the normal View lets you leave Android in charge of redrawing (at the expense of limiting your ability to draw on a different thread). In this respect, Google remains your friend =]
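To make the pseudocode above concrete, here is a minimal, hedged sketch of the View-subclass variant: it slides a pen position along a straight line and accumulates the traced segments. The class name, path, and speed are made up for illustration; drawing the penguin bitmap at (x, y) would go where the comment indicates:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.os.SystemClock;
import android.view.View;

public class NumberTraceView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Path trace = new Path();
    private long lastFrameMs = SystemClock.uptimeMillis();
    private float x = 0f, y = 100f;           // current pen position
    private static final float SPEED = 120f;  // pixels per second, arbitrary

    public NumberTraceView(Context context) {
        super(context);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(6f);
        trace.moveTo(x, y);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        long now = SystemClock.uptimeMillis();
        float dt = (now - lastFrameMs) / 1000f;   // elapsed time since previous draw
        lastFrameMs = now;

        x += SPEED * dt;                          // new position along the figure
        trace.lineTo(x, y);                       // extend the buffer of drawn segments

        canvas.drawPath(trace, paint);            // render all segments so far
        // canvas.drawBitmap(pen, x, y, paint) would show the moving sprite here

        if (x < getWidth()) invalidate();         // ask Android for the next frame
    }
}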
I am looking to be pointed in the right direction. What I want to do is take a picture and then be able to highlight certain aspects of it (e.g. circle a door, comment on a color) right on the picture, basically what a Samsung Note can do. What Android package would I be looking at? What it looks like to me is that I would use the picture as a canvas and then draw on top of that canvas. Is that it, basically summed up? Or am I missing something?
Another thing I would like to do with the picture is add data for future identification. I know Android has its ExifInterface for this, but what I can't seem to find any information on is whether it is possible to create my own tags for this class, for example adding a "who took it" tag.
You're going to need a custom view and override the onDraw method of the view. In the onDraw method you get a reference to a Canvas object. From there, you can do most of whatever drawing you need. If you want to take user input and draw with it, you're going to have to override the touch events, and keep track of what you want to draw and then draw in the onDraw method.
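In other words, roughly this; the file path and coordinates are placeholders:

Bitmap photo = BitmapFactory.decodeFile("/path/to/photo.jpg");   // placeholder path
Bitmap annotated = photo.copy(Bitmap.Config.ARGB_8888, true);    // mutable copy to draw on
Canvas canvas = new Canvas(annotated);                           // canvas now draws into the copy

Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setColor(Color.RED);
paint.setStyle(Paint.Style.STROKE);
paint.setStrokeWidth(8f);
canvas.drawCircle(200f, 300f, 80f, paint);                       // e.g. circle a door

paint.setStyle(Paint.Style.FILL);
paint.setTextSize(36f);
canvas.drawText("door needs paint", 120f, 420f, paint);          // comment on the picture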
As for Exif data: if you want to support versions before Android 2.0, you need a 3rd-party library; I use sanselanandroid personally. If you don't care about pre-2.0, I hear ExifInterface works well too. It looks like you can save an arbitrary tag using ExifInterface, because it just takes a string tag and a string value, but know that only your app will know to read that tag.
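As a hedged sketch of that arbitrary-tag idea: the tag name "UserWhoTookIt" is invented, and whether a non-standard tag actually survives saveAttributes() depends on the platform implementation, so test it before relying on it:

ExifInterface exif = new ExifInterface("/path/to/photo.jpg");    // placeholder path
exif.setAttribute("UserWhoTookIt", "Alice");                     // invented custom tag
exif.saveAttributes();                                           // write tags back into the JPEG

// Later, only code that knows the tag name can read it back:
String who = new ExifInterface("/path/to/photo.jpg").getAttribute("UserWhoTookIt");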
I have some sprites in my app.
When I touch a sprite (in TouchEvent.isActionDown()), I need to change its image.
How can I do this?
I'm not familiar with AndEngine, but by the looks of it the Sprite class does not provide the functionality to change its image, or better said, its texture. However, you might be able to accomplish your goal by using TiledSprite or AnimatedSprite.
The latter is an extension of the former, so you should be able to use a TiledSprite. It has setCurrentTileIndex() and nextTile() methods that seem to allow you to swap one texture region for another. You may need to convert your images into a format suitable for AndEngine, though, and obviously you will need a handle to the touched sprite.
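Untested, but something along these lines. Here mTiledTextureRegion is assumed to be a pre-loaded two-tile region holding the normal and touched images, and exact constructor arguments differ between AndEngine branches (the GLES2 branch also takes a VertexBufferObjectManager):

TiledSprite sprite = new TiledSprite(100, 100, mTiledTextureRegion) {
    @Override
    public boolean onAreaTouched(TouchEvent pSceneTouchEvent,
                                 float pTouchAreaLocalX, float pTouchAreaLocalY) {
        if (pSceneTouchEvent.isActionDown()) {
            setCurrentTileIndex(1);   // switch to the "touched" tile
            return true;
        }
        return false;
    }
};
scene.registerTouchArea(sprite);   // required so the sprite receives touch events
scene.attachChild(sprite);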
I am developing on the Android 2.1 framework.
According to the OpenGL ES documentation, we should call glEnableClientState(GL_VERTEX_ARRAY) so that the array set by glVertexPointer is used by glDrawElements. But I didn't see that in my tutorial. Could anybody tell me why? Any clue?
My tutorial is:
I tried to create a 3D application that displays a simple cube, and I found that whether I use
glEnableClientState(GL_VERTEX_ARRAY)
or
glDisableClientState(GL_VERTEX_ARRAY)
or comment the call out entirely,
//glEnableClientState(GL_VERTEX_ARRAY)
it makes no difference to the result; the cube is displayed normally.
IMHO, if you use several cubes, one with a color array and another without, you'll need to disable the color array (GL_COLOR_ARRAY) before drawing the cube that has no color array.
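For example, with Android's GL10 bindings (gl is the GL10 instance from the Renderer callbacks, and the buffer names are placeholders for direct NIO buffers prepared elsewhere):

gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mCubeVertices);

// First cube: per-vertex colors, so the color array is enabled.
gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
gl.glColorPointer(4, GL10.GL_FLOAT, 0, mCubeColors);
gl.glDrawElements(GL10.GL_TRIANGLES, mIndexCount, GL10.GL_UNSIGNED_SHORT, mCubeIndices);

// Second cube: no color array. If GL_COLOR_ARRAY stayed enabled here, the
// driver would keep reading stale color data, so disable it first.
gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
gl.glColor4f(1f, 0f, 0f, 1f);   // flat color instead
gl.glDrawElements(GL10.GL_TRIANGLES, mIndexCount, GL10.GL_UNSIGNED_SHORT, mCubeIndices);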