Slide finger on keyboard area to print a key - android

I need to create an edit box with a keyboard-like area below it. As the user slides a finger over that keyboard area while looking at it, the key corresponding to the finger's position should be highlighted, and when the user releases (clicks) on the highlighted key, the corresponding letter should be printed in the EditText. How could I do this? Thanks.

As I have figured out, you need a mechanism that lets the user draw letters on a surface; you then detect which letter was drawn and put it into the EditText.
As a concept, you can do this:
Below the EditText, have a Canvas area where people can draw letters.
Since I haven't tried much of the Canvas API myself, you'll have to do the R&D there, sorry.
Listen for press and move events.
Record each pixel the user's finger moves over, and when the finger is lifted, perform the detection.
For detection, you will need a database that tells you which drawing pattern corresponds to which letter. Build a base that stores which pixel traversal path refers to which letter, allow a threshold level for matching, and put this information in the database.
When the user finishes drawing and you have collected all the pixels they traversed, compare that set with the database to recognise which letter it was.
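As a rough sketch of the recording part (not a complete recogniser), the drawing surface could be a custom View that collects the touch path and hands it to a matching routine when the finger lifts; matchLetter() below is a hypothetical placeholder for the database lookup described above:

import android.content.Context;
import android.graphics.PointF;
import android.view.MotionEvent;
import android.view.View;
import java.util.ArrayList;
import java.util.List;

public class LetterDrawingView extends View {

    private final List<PointF> stroke = new ArrayList<>();

    public LetterDrawingView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                stroke.clear();                                     // a new letter starts
                stroke.add(new PointF(event.getX(), event.getY()));
                return true;
            case MotionEvent.ACTION_MOVE:
                stroke.add(new PointF(event.getX(), event.getY())); // record the path
                return true;
            case MotionEvent.ACTION_UP: {
                char letter = matchLetter(stroke);                  // look the path up
                // append 'letter' to the EditText here
                return true;
            }
            default:
                return super.onTouchEvent(event);
        }
    }

    // Hypothetical lookup against the pattern database described above,
    // allowing for a matching threshold.
    private char matchLetter(List<PointF> path) {
        return '?';
    }
}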
I know it might become complex and require you to design your own data structures (sadly, I don't have a structure in mind for this purpose right now), but it will be extremely interesting, and time-consuming too! ;-)
Just disable the default soft keyboard for the EditText when you implement this.

Related

pygame - link an object to mouse event

I am trying to develop a game for Android using pygame.
It would be a platformer. To move the main character, I would make the game wait for mouse events (like pygame.MOUSEBUTTONDOWN).
In order to do that on mobile, I'd like to create a graphic representation of a joypad with arrow keys to be shown on the bottom left corner of the screen.
Now, when the user touches one of the arrows, a MOUSEBUTTONDOWN event should be triggered and the character should move accordingly.
My question is: since the "joypad" object is a mere draw, how can I link it to the event with pygame?
Is there a way to do so? Should I use the pixel coordinates of the arrow keys of the joypad or is there a better choice?
As far as I know this is not possible.
When handling input, mouse input and touch input are to be handled separately.
So to answer the 2 questions you listed at the end:
As far as I know there is no way to implement this functionality.
You could use the pixel coordinates of the arrows. However, it is easier to use Rects and test whether the mouse/touch position falls inside an arrow button's Rect with the collidepoint method.
You can achieve that as follows:
arrow_left.collidepoint(mouse_x, mouse_y)
I hope this answer helped you!

Adding gestures and making an image interactive

I'm trying to make an image more interactive by adding gestures (currently I'm only thinking about zoom; I might add more later depending on the requirements). It's a parking app, by the way.
By interactive I mean that the user can tap on a part of the image, let's say area A, labelled "A", and a new interactive image for that area will pop up.
I'm pondering whether I should break the image down part by part in order to detect the user's tap location, or whether I should use the X and Y coordinates. And if I take the coordinates, does that mean I have to adjust the code so that it covers different screen sizes?
Another question: I'm thinking of using Dijkstra's algorithm to find the best route for the user; does the algorithm need the image to be whole?
I'm also planning to add marks for vacant and occupied slots, but that is independent of whether I split the image into parts or not, right?
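A minimal sketch of the coordinate idea, assuming the map is shown in an ImageView; converting the tap into fractions of the view's size keeps the same zone definitions valid across screen sizes (the names and the example bounds are illustrative, not from the question):

imageView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        float fx = event.getX() / v.getWidth();   // 0..1 across the image
        float fy = event.getY() / v.getHeight();  // 0..1 down the image
        // Compare (fx, fy) with zone bounds that are also stored as fractions,
        // e.g. area "A" could span 0.1..0.3 horizontally and 0.2..0.5 vertically.
        return true;
    }
});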

Android Game Development - Custom map leading to different activities

I'd like to create a custom map. It should be (or look like) one picture, but depending on which part of it the user clicks, it should take the user to a different location (i.e. start a different activity). I've seen this done in several games, but I don't know how to do it myself.
The parts of the picture should have non-geometrical borders (obviously it would be easy with many square images). Sadly, I don't even know what term describes what I want to do, so I wasn't able to find any helpful tutorials or discussions.
Example:
Picture: http://i236.photobucket.com/albums/ff40/iathen/mapEx.png
If the user touches the purple slide, (s)he should be led to activity_1
If the user touches the blue slide, (s)he should be led to activity_2
If the user touches the green slide, (s)he should be led to activity_3
In my experience there are 2 main (most used) ways to achieve this.
The first (my favorite):
Get the data from a PNG
You should draw multiple layers to a canvas. These layers constitute your "zones" (blue, green, purple in the image). You get the data for these areas from PNGs (with transparency, of course) and draw the canvas with whatever you want. You must store the values where a tap from the user can occur (the non-transparent areas). Note that these values can be scaled up/down depending on the map size, screen resolution, map dimensions, etc.
Once you've drawn the layers to the canvas, you check whether the user's tap matches one of the stored areas. Take into consideration the order in which the tap is processed in your code: in your image, the purple layer is on top, so it must be processed first, the blue second, and the green last. This way you can have an "island" inside a bigger area.
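A rough sketch of that check, assuming the layer PNGs have been decoded into Bitmaps and are passed in top-most first (purple, blue, green); the scaling follows the note above about adjusting for map/screen size:

// Returns the index of the top-most layer whose non-transparent pixels were
// tapped, or -1 if the tap missed every zone.
private int findTappedLayer(float touchX, float touchY, View mapView, Bitmap[] layers) {
    for (int i = 0; i < layers.length; i++) {
        Bitmap layer = layers[i];
        // Scale the tap from view coordinates into bitmap coordinates.
        int px = (int) (touchX * layer.getWidth() / mapView.getWidth());
        int py = (int) (touchY * layer.getHeight() / mapView.getHeight());
        if (px >= 0 && py >= 0 && px < layer.getWidth() && py < layer.getHeight()
                && Color.alpha(layer.getPixel(px, py)) != 0) {
            return i;
        }
    }
    return -1;
}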
The second way:
Generate the boundaries programmatically
I think this solution is self-explanatory. The only problem I've faced with this variant is that when the surface boundaries get messy, it's really complicated to generate the proper equations.
EDIT:
Using the first approach you can employ multiple PNGs to load the data, or use a single PNG with the data encoded into the pixel values (i.e. RGB values). It's up to you to decide which one to implement.
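For the single-PNG variant, a sketch of mapping the encoded colours to activities; the colour values and the Activity1/2/3 class names are made up for illustration:

// px/py are the tap coordinates already scaled into the mask bitmap, as above.
private void openZone(Activity context, Bitmap mask, int px, int py) {
    int pixel = mask.getPixel(px, py);
    if (pixel == Color.rgb(128, 0, 128)) {                          // purple zone
        context.startActivity(new Intent(context, Activity1.class));
    } else if (pixel == Color.BLUE) {                               // blue zone
        context.startActivity(new Intent(context, Activity2.class));
    } else if (pixel == Color.GREEN) {                              // green zone
        context.startActivity(new Intent(context, Activity3.class));
    }
}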
Hope it helps!
Since a touchscreen itself isn't very accurate, your collision detection for the buttons doesn't need to be either. It would be a waste of time to try to make a complicated collision detection algorithm to detect a touch within those weird shapes.
Since you are making a game, I assume you know how to handle custom touch events, as well as canvas (at least). There are many ways to do what you want, but the specific example image you linked is kind of a special case.
You could create a giant bounding circle around the three blobs, and then check if the user touched within the bounds of the circle (i.e. check if the distance from the touch to the center of the circle is less than or equal to the radius). Once you determine that it is, you can check which section of the circle it falls into by splitting it up into 3 equal sections. It requires some math, but shouldn't be that complicated.
It wouldn't be a perfect solution, but it should be good enough. Although, you might have to change the buttons a little so they aren't so stretched out horizontally, otherwise a bounding circle wouldn't be ideal.
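A sketch of that check; centerX/centerY/radius describe the bounding circle around the three blobs (values you would pick to fit your image):

// Returns 0, 1 or 2 for the three equal sections of the circle, or -1 if the
// touch is outside the bounding circle.
private int sectionTouched(float x, float y, float centerX, float centerY, float radius) {
    float dx = x - centerX;
    float dy = y - centerY;
    if (dx * dx + dy * dy > radius * radius) {
        return -1;
    }
    double angle = Math.toDegrees(Math.atan2(dy, dx)) + 180.0;  // 0..360 degrees
    return ((int) (angle / 120.0)) % 3;
}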
Personally, in my games I always have "nodes" that represent the visual elements of the game, such as buttons. Instead of using a large image like you are doing, I would create separate images for each button, and then check their collisions with touch events independently. That way I could have each button check with their own individual bounding circles, or, if absolutely necessary, I could even have custom algorithms for each individual button.
These aren't perfect solutions. If you do want a pixel-perfect solution, you'll need to implement a polygon collision detection algorithm (a point-in-polygon test).
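If you do go that route, a standard ray-casting point-in-polygon test is one option; xs/ys here are assumed arrays holding the corner coordinates of the shape's outline:

private boolean containsPoint(float[] xs, float[] ys, float px, float py) {
    boolean inside = false;
    for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
        // Count how many polygon edges a horizontal ray from the point crosses.
        boolean crosses = (ys[i] > py) != (ys[j] > py);
        if (crosses && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
            inside = !inside;
        }
    }
    return inside;  // odd number of crossings means the point is inside
}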
One thing to consider is screen size and aspect ratio. Avoid hard-coded pixel values; the only constants you should use are percentages of the screen dimensions.

Emulating physical buttons / lower level touchscreen control

One thing I find many Android games and emulators get wrong is when the user presses multiple (on-screen) buttons simultaneously. I'm wondering how one could fix that.
Imagine a game like Super Mario World. You have two buttons on the right side (simplified): Y is for running and B is for jumping. Typically, you hold Y most of the time with the tip of your thumb, and when you want to jump you lay down your thumb and press B, too.
Situations like this understandably confuse Android. Instead of detecting two presses, it just moves the one from the Y button down a bit.
What I'd need to fix this is one of the following:
Raw touch data as a bitmap (but probably too computationally expensive, and doesn't leave the touchscreen anyway)
The detected touch points in more detail, e.g. as best-fit ellipses or polygons
The ability to define touch regions. If a finger overlaps such a region a certain amount, the region fires.
(The points are from low- to high level, e.g. if I had the first I could emulate the other ones.)
Any ideas?
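As an illustration of the third idea using only the standard MotionEvent API (the raw contact shape isn't exposed), each pointer index reported in the event can be hit-tested against the button regions separately, so two fingers on two regions register as two presses; yButton and bButton are assumed Rects for the two on-screen buttons:

@Override
public boolean onTouchEvent(MotionEvent event) {
    boolean yPressed = false;
    boolean bPressed = false;
    for (int i = 0; i < event.getPointerCount(); i++) {
        int x = (int) event.getX(i);
        int y = (int) event.getY(i);
        if (yButton.contains(x, y)) yPressed = true;   // run button held
        if (bButton.contains(x, y)) bPressed = true;   // jump button held
    }
    // feed yPressed / bPressed into the emulated controller state here
    return true;
}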

Detecting touch area on Android

Is it possible to detect every pixel being touched? More specifically, when the user touches the screen, is it possible to track all the x-y coordinates of the cluster of points touched by the user? How can I tell the difference between when users are drawing with their thumb and when they are drawing with the tip of a finger? I would like to reflect the brush difference depending on how users touch the screen, and would also like to track x-y coordinates of all the pixels being touched over time. Thanks so much in advance for any help.
This would be very tricky primarily because every android phone is going to behave differently. There are some touch screen devices that are very, very sensitive and some that are basically "dull" by comparison.
It also sounds more like you want to track pressure - how hard the user is pushing on the screen - which is actually supported on Android devices.
I think some of your answer may be found by monitoring all of the touch events. In practice, most applications ignore a great number of events, or perform some kind of "smoothing" of them, since there is literally a deluge of touch events while the user is manipulating the screen. Doing this may negatively impact your application's performance, though.
I would recommend that you look into pressure sensitivity and calculate a circular region around the primary touch point based on pressure, then build your brush around that.
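A minimal sketch of that idea, using the pressure and size values MotionEvent already reports; the radius formula is a made-up starting point you would tune per device, and drawBrush() is a hypothetical painting routine:

@Override
public boolean onTouchEvent(MotionEvent event) {
    float x = event.getX();
    float y = event.getY();
    float pressure = event.getPressure();  // roughly 0..1, device dependent
    float size = event.getSize();          // normalised contact size, device dependent
    float radius = 10f + 40f * Math.max(pressure, size);
    drawBrush(x, y, radius);               // paint a circular dab of that radius
    return true;
}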
Another idea would be to incorporate more of a gesture approach to what you are trying to do - for example, visualize touching the screen with the tip of two fingers together (index and middle) and rolling the middle finger around the index finger or simply moving the middle finger up and down in relation to the index finger. Both fingers would be moved together for painting. This could be used to manipulate drawing angle on the fly or perhaps even toggle between a set of pre-selected brushes or could change brush size on the fly as you are painting.
Some of the above ideas I would love to see implemented - let me know when you have your app ready.
Good luck!
Rodney
If you have a listener on your image, it will basically respond that there was a touch somewhere within that bounding box.
So, to get what you want, you could create a box around every pixel, or small group of pixels, and listen for a touch on each - but I would never do this.
Wherever you get a touch, it may fire off an event, and then you can react accordingly.
I can't think of any other solution that will give you each pixel that a person touched, at one time.
You may want to read up on multitouch though, as there are some suggestions in here that may help you:
http://android-developers.blogspot.com/2010/06/making-sense-of-multitouch.html
If you're looking for a way to get your content view as a View after Activity#setContentView(int), you can set an id on the outer-most element of your layout:
android:id="@+id/entire_view" and reference it in your onCreate() method after setContentView:
View view = findViewById(R.id.entire_view);
view.setOnTouchListener( ... );
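As a sketch, that listener could record the coordinates of every pointer on each event, which is about as close as the framework gets to "all touched pixels" over time:

view.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        for (int i = 0; i < event.getPointerCount(); i++) {
            float x = event.getX(i);
            float y = event.getY(i);
            // store (x, y) together with event.getEventTime() to build the touch history
        }
        return true;
    }
});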
