How to detect the shape that touches the touch screen - Android

I have received a requirement like this: http://www.youtube.com/watch?v=7MYQicokwmY&feature=plcp and I am reviewing it. As per the requirement, we have to build touch detection like the one in the linked video for Android-enabled tablets.
In that video, toys (with circular, star, or rectangular shapes) use conductive silicone sensors; with those, the tablet detects the touch on the screen, decides the shape of the external object (triangle, circle, or star), and processes the shape further.
I have to build the same touch detection for Android tablets. Can anybody help me find a way to implement this on the Android platform? Is there any API or framework for it?

If you look at the video around 1:13, they show what I am guessing are some prototypes: the circle has three points, the hexagon too...
My best guess is that the biggest part of each object is non-conductive and only a few points are conductive, so only those actually register as touch points on the screen. The key is that each pattern of points must be different enough that you can recognize it regardless of orientation and position (and, depending on your requirements, even when several of those objects are on the screen at the same time).
You can also vary the area of each conductive point: in your code, when you read the touch information, you can get different pressure values from the MotionEvent.
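For example, something like this in a custom View or Activity (just a sketch; how reliable the pressure and size values are varies a lot from device to device):

@Override
public boolean onTouchEvent(MotionEvent event) {
    for (int i = 0; i < event.getPointerCount(); i++) {
        float x = event.getX(i);
        float y = event.getY(i);
        float pressure = event.getPressure(i);   // on capacitive panels this mostly tracks contact area
        float size = event.getSize(i);           // normalized approximate touch size
        Log.d("Shape", "pointer " + event.getPointerId(i)
                + " at (" + x + ", " + y + ") pressure=" + pressure + " size=" + size);
    }
    // With three or more points registered you can compare their pairwise
    // distances and sizes against the known pattern of each physical object.
    return true;
}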
How you place the conductive points, and how many you put on each shape, is completely up to you and really depends on your requirements (recognizing an arbitrary shape is not an option...).
Most touch screens will reject the touch if the contact area is too large (that's palm rejection), so I don't think there are many other ways to do this...

Related

Android game board with irregular board spaces

I'm creating an Android board game with several differently shaped board spaces (like Risk).
I want to be sure that my board appears correct and that OnTouchListeners stay in place on the GUI regardless of screen size/resolution.
Possible solutions I have thought of and their problems:
Create a single image for the board and assign OnTouchListeners based upon pixel geometry. Problem: If the user's display is a different resolution, my Listener might not be under the same pixels as my image (right?)
Create several ImageButtons and arrange them together. Problem: the ImageButtons might get rearranged based upon the display and I would end up with overlapping spaces or gaps.
Use Android custom drawing. If I do this, how would I link my Listeners to my Canvas and be sure that they are synced?
Basic question:
How can I be sure that listeners stay in sync with the graphics in a GUI that uses irregular geometry?
I worked on an app with irregular touch areas so I can give you guidance on one way to achieve this.
Start with a single image for your entire board. This image is going to have a certain ("intrinsic") width and height regardless of any device resolutions.
Now here comes the tedious part. You (or maybe your graphic designer) will need to plot out coordinates of an irregular polygon for each touch area. These will be constants to your application.
When you display your board, if you zoom and pan the image, keep track of the transform matrix for the display. When the user touches the screen, you will get x,y coordinates from the OnTouchListener, and for those to be useful you have to "de-transform" the x,y to normalize it against the intrinsic dimensions of the board and your polygons.
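For example, something like this (assuming displayMatrix is the Matrix you apply when drawing the zoomed/panned board):

float[] point = { event.getX(), event.getY() };
Matrix inverse = new Matrix();
if (displayMatrix.invert(inverse)) {
    inverse.mapPoints(point);
    // point[0], point[1] are now in the board's intrinsic coordinate space
}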
We rolled our own hit-testing logic using an algorithm from http://alienryderflex.com/polygon/, but you can also try this: Create a Path out of your polygon coordinates (using moveTo(), lineTo(), and close()), then assign the Path to a Region using Region.setPath(). Once you have that, supposedly you should be able to hit-test using Region.contains(x,y), but I've never tried it so I can't guarantee that's going to work.
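If you want to try the Region variant, it would look roughly like this (untested, and the polygon coordinates are just for illustration):

Path path = new Path();
path.moveTo(120f, 40f);      // polygon coordinates in intrinsic board space
path.lineTo(200f, 90f);
path.lineTo(160f, 180f);
path.lineTo(80f, 150f);
path.close();

RectF bounds = new RectF();
path.computeBounds(bounds, true);

Region region = new Region();
region.setPath(path, new Region((int) bounds.left, (int) bounds.top,
        (int) bounds.right, (int) bounds.bottom));

boolean hit = region.contains((int) x, (int) y);   // x, y already de-transformed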

Android Game Development - Custom map leading to different activities

I'd like to create a custom map. It should be (or look like) one picture, but depending on which part of it the user taps, it should take the user to a different location (i.e. start a different activity). I've seen this done in several games, but I don't know how to do it myself.
The parts of the picture need non-geometrical (irregular) borders; obviously it would be easy with many rectangular images. Sadly, I don't even know what term describes what I want to do, so I wasn't able to find any helpful tutorials or discussions.
Example:
Picture: http://i236.photobucket.com/albums/ff40/iathen/mapEx.png
If the user touches the purple slide, (s)he should be led to activity_1
If the user touches the blue slide, (s)he should be led to activity_2
If the user touches the green slide, (s)he should be led to activity_3
In my experience there are 2 main (most used) ways to achieve this.
The first (my favorite):
Get the data from a PNG
You draw multiple layers to a canvas. These layers constitute your "zones" (blue, green, purple in the image). You get the data for these areas from PNGs (with transparency, of course), which you can draw to the canvas however you want. You must store the values for the areas where a user tap can land (the non-transparent pixels). Note that these values can be scaled up or down depending on the map size, screen resolution, map dimensions, etc.
Once you've drawn the layers to the canvas, check whether the user's tap matches one of the stored areas. Take into consideration the order in which the tap is processed in your code: in your image the purple layer is on top, so it must be checked first, then the blue, and the green last. This way you can have an "island" inside a bigger area.
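For example, something along these lines (a rough, untested sketch; the resource ids, the Activity names, and the idea of one transparent mask PNG per zone are all placeholders):

public class MapActivity extends Activity {
    private Bitmap purpleMask, blueMask, greenMask;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.map);
        purpleMask = BitmapFactory.decodeResource(getResources(), R.drawable.zone_purple);
        blueMask = BitmapFactory.decodeResource(getResources(), R.drawable.zone_blue);
        greenMask = BitmapFactory.decodeResource(getResources(), R.drawable.zone_green);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_UP) {
            // If the map is drawn scaled, map the tap back to the masks'
            // intrinsic dimensions first (omitted here for brevity).
            int x = (int) event.getX();
            int y = (int) event.getY();
            // Topmost layer first, so an "island" zone wins over the zone behind it.
            if (hits(purpleMask, x, y)) {
                startActivity(new Intent(this, Activity1.class));
            } else if (hits(blueMask, x, y)) {
                startActivity(new Intent(this, Activity2.class));
            } else if (hits(greenMask, x, y)) {
                startActivity(new Intent(this, Activity3.class));
            }
        }
        return true;
    }

    private boolean hits(Bitmap mask, int x, int y) {
        return x >= 0 && y >= 0 && x < mask.getWidth() && y < mask.getHeight()
                && Color.alpha(mask.getPixel(x, y)) != 0;   // opaque pixel = inside the zone
    }
}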
The second way:
Generate the boundaries programmatically
I think this solution is self-explanatory. The only issue I've faced with this variant is that when the surface boundaries get messy, it's really complicated to generate the proper equations.
EDIT:
Using the first approach you can employ multiple PNGs to load the data, or use a single PNG with the data encoded in the pixel values (i.e. RGB values). It's up to you to decide which one to implement.
Hope it helps!
Since a touchscreen itself isn't very accurate, your collision detection for the buttons doesn't need to be either. It would be a waste of time to try to make a complicated collision detection algorithm to detect a touch within those weird shapes.
Since you are making a game, I assume you know how to handle custom touch events as well as the canvas (at least). There are many ways to do what you want, but the specific example image you linked is kind of a special case.
You could create a giant bounding circle around the three blobs, and then check if the user touched within the bounds of the circle (ie check if the distance from the touch to the center of the circle is less than or equal to the radius). Once you determine that it is, you could check which section of the circle it falls into by splitting it up into 3 equal sections. Requires some math, but shouldn't be that complicated.
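For example, something like this (a sketch; centerX, centerY, and radius are values you would measure from your board image):

// Returns -1 if the touch misses the circle, otherwise a section index 0..2.
int sectionForTouch(float touchX, float touchY, float centerX, float centerY, float radius) {
    float dx = touchX - centerX;
    float dy = touchY - centerY;
    if (dx * dx + dy * dy > radius * radius) {
        return -1;                                    // outside the bounding circle
    }
    double angle = Math.toDegrees(Math.atan2(dy, dx));
    if (angle < 0) angle += 360;                      // normalize to [0, 360)
    return (int) (angle / 120);                       // three equal 120-degree slices
}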
It wouldn't be a perfect solution, but it should be good enough. Although, you might have to change the buttons a little so they aren't so stretched out horizontally, otherwise a bounding circle wouldn't be ideal.
Personally, in my games I always have "nodes" that represent the visual elements of the game, such as buttons. Instead of using a large image like you are doing, I would create separate images for each button, and then check their collisions with touch events independently. That way I could have each button check with their own individual bounding circles, or, if absolutely necessary, I could even have custom algorithms for each individual button.
These aren't perfect solutions. If you do want a pixel-perfect solution, you'll need to implement a polygon collision detection algorithm.
One thing to consider is screen size and aspect ratio: the only constants you should hard-code are percentages (proportions of the screen), not absolute pixel values.

How to detect non-human touch on screen

I am creating a weight-measuring app for Android.
I want to detect a non-human touch on the Android screen so that, by measuring its pressure, I can estimate the weight of the object placed on the screen. The problem is that Android's capacitive screens detect only human touch.
Please help.
It's all about whether the material used is conductive (not only human touch is conductive); that's just how capacitive screens work. It isn't possible to detect a touch with, e.g., a rock, because a rock isn't conductive.
A coin, on the other hand, may just work, because most coins (at least where I am from) contain conductive material.

Detecting touch area on Android

Is it possible to detect every pixel being touched? More specifically, when the user touches the screen, is it possible to track all the x-y coordinates of the cluster of points touched by the user? How can I tell the difference between when users are drawing with their thumb and when they are drawing with the tip of a finger? I would like to reflect the brush difference depending on how users touch the screen, and would also like to track x-y coordinates of all the pixels being touched over time. Thanks so much in advance for any help.
This would be very tricky, primarily because every Android phone is going to behave differently. There are some touch screen devices that are very, very sensitive and some that are basically "dull" by comparison.
It also sounds like you want to track pressure (how hard the user is pushing on the screen), which is actually supported on Android devices.
I think some of your answer may be found by monitoring all of the touch events. In practice, most applications ignore a great number of events or perform some kind of "smoothing" of them, since there is literally a deluge of touch events while the user is manipulating the screen. Monitoring them all may negatively impact your application's performance, though.
I would recommend that you look into pressure sensitivity and calculate a circular region around the primary touch point based on pressure, then build your brush around that.
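For example, something along these lines inside a custom drawing View (just a sketch; the scaling constants are arbitrary and drawBrush() is a hypothetical helper of yours):

@Override
public boolean onTouchEvent(MotionEvent event) {
    float pressure = event.getPressure();   // roughly 0..1, very device dependent
    float size = event.getSize();           // normalized approximate contact size
    float brushRadius = 10f + 40f * Math.max(pressure, size);   // pick whichever signal is stronger
    drawBrush(event.getX(), event.getY(), brushRadius);         // hypothetical drawing helper
    return true;
}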
Another idea would be to incorporate more of a gesture approach to what you are trying to do - for example, visualize touching the screen with the tip of two fingers together (index and middle) and rolling the middle finger around the index finger or simply moving the middle finger up and down in relation to the index finger. Both fingers would be moved together for painting. This could be used to manipulate drawing angle on the fly or perhaps even toggle between a set of pre-selected brushes or could change brush size on the fly as you are painting.
Some of the above ideas I would love to see implemented - let me know when you have your app ready.
Good luck!
Rodney
If you have a listener on your image, it will basically just report that there was a touch somewhere within that view's bounding box.
So, to get what you want, you could (though I would never do this) create a box around every pixel, or small group of pixels, and listen for a touch on each.
Wherever you get a touch, it may fire off an event, then you can react accordingly.
I can't think of any other solution that will give you each pixel that a person touched, at one time.
You may want to read up on multitouch though, as there are some suggestions in here that may help you:
http://android-developers.blogspot.com/2010/06/making-sense-of-multitouch.html
If you're looking for a way to get your content view as a View after Activity#setContentView(int), then you can set an id on the outer-most element of your layout:
android:id="@+id/entire_view" and reference it in your onCreate() method after setContentView():
View view = findViewById(R.id.entire_view);
view.setOnTouchListener( ... );
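Inside the listener you can then log or collect every coordinate the framework reports, including the batched "historical" samples each event may carry, something like:

view.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        for (int i = 0; i < event.getHistorySize(); i++) {
            Log.d("Touch", "history: " + event.getHistoricalX(i) + ", " + event.getHistoricalY(i));
        }
        Log.d("Touch", "current: " + event.getX() + ", " + event.getY());
        return true;
    }
});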

Comparing gestures/images in Android

What would be the best way to compare a gesture made on an Android device's screen with a stored gesture? For example, if in my application, I want it so that if I draw a triangle with my finger, the screen will turn blue, and if I draw a circle, the screen will turn red, how could that be done? The only thing I have been able to think of so far is to somehow generate an image file and then compare that to an image of a triangle or circle and check for similarities. But that wouldn't really account for different sized shapes or offset ones. Any ideas on how this could be implemented? Thanks!
There is no need to compare/match the shape of a gesture with an image. The better way is to mathematically guess which of the recognized shapes the user drew. http://developer.android.com/resources/articles/gestures.html provides a great reference for implementing gestures.
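For example, something along these lines using the android.gesture package (untested sketch; R.raw.gestures would be a gesture library you build with the Gesture Builder sample, R.id.gesture_overlay a GestureOverlayView in your layout, and the score threshold is just a common heuristic):

final GestureLibrary library = GestureLibraries.fromRawResource(this, R.raw.gestures);
library.load();

GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gesture_overlay);
overlay.addOnGesturePerformedListener(new GestureOverlayView.OnGesturePerformedListener() {
    @Override
    public void onGesturePerformed(GestureOverlayView view, Gesture gesture) {
        ArrayList<Prediction> predictions = library.recognize(gesture);
        if (!predictions.isEmpty() && predictions.get(0).score > 1.0) {
            String name = predictions.get(0).name;
            if ("triangle".equals(name)) {
                view.setBackgroundColor(Color.BLUE);   // triangle drawn -> blue
            } else if ("circle".equals(name)) {
                view.setBackgroundColor(Color.RED);    // circle drawn -> red
            }
        }
    }
});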
HTH,
Akshay
