Suppose I am writing the word "abc" in my GestureOverlayView, and while doing so I need to keep all those letters on the screen until I press a clear button. Can anyone tell me a good way to do this?
I will write "a" which is taken as a gesture (not as stroke) i.e One thing I though of was like use a ImageView or SurfaceView on bottom of a GestureOverLayView.and Suppose when I draw "a" on GestureOverLayView then in the "onGesturePerformed" event it will take the Gesture and then get the strokes and then convert them into paths and then draw the paths onto the underlying ImageView or SurfaceView.Can anyone suggest me the code or guide me.I tried various combinations of them but couldn't solve it..
There's an app called GestureBuilder in the samples directory of the SDK. This app shows how to persist gestures drawn by the user.
I realize this is an old question, but you can "cheat" by increasing the fadeOffset to some ridiculously high number. This can be done either in the XML:
android:fadeOffset = "some very large number"
or programmatically:
yourGestureOverlayView.setFadeOffset(someVeryLargeNumberInMilliseconds)
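For example, in the layout (the number here is arbitrary):

<android.gesture.GestureOverlayView
    android:id="@+id/gestures"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:fadeOffset="600000" />

or in code, where you can also just turn the fade off entirely:

GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gestures);
overlay.setFadeOffset(600000L);    // milliseconds before the strokes start to fade
overlay.setFadeEnabled(false);     // alternative: disable fading altogether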
How to get an array/bitmap data from android touch screen?
(the shape and area of the touch).
Same question as this one: Android: get raw bitmap data from touch events, but that one is very old.
The data is processed in the kernel; is there any way I can get it?
It's not possible, since the touchscreen only reports a few points. For example, try any kind of paint app: if you put your whole finger on the screen, it won't be registered as a shape; only a few dots will be painted.
Also note that some phones don't support multi-touch. I would recommend rethinking your app design. Maybe you could let the user draw something to achieve the result you want.
Yes, that's what I meant. Check the cursor location.
I'd like to create a custom map. It should be (or look like) one picture, but depending on which part of it the user taps, it should take the user to a different place (i.e. start a different activity). I've seen this done in several games, but I don't know how to do it myself.
The parts of the picture should have non-geometrical borders (obviously it would be easy with many square images). Sadly, I don't even know what term describes what I want to do, so I wasn't able to find any helpful tutorials or discussions.
Example:
Picture: http://i236.photobucket.com/albums/ff40/iathen/mapEx.png
If the user touches the purple area, (s)he should be led to activity_1
If the user touches the blue area, (s)he should be led to activity_2
If the user touches the green area, (s)he should be led to activity_3
In my experience there are 2 main (most used) ways to achieve this.
The first (my favorite):
Get the data from a PNG
You should draw multiple layers to a canvas. These layers constitute your "zones" (blue, green and purple in the image). You get the data for these areas from PNGs (with transparency, of course) and draw them to the canvas however you want. You must store the values of the areas where a user tap can land (the non-transparent areas). Note that these values may need to be scaled up/down depending on the map size, screen resolution, map dimensions, etc.
Once you've drawn the layers to the canvas, you check the user's tap for a match against the stored areas. Take into consideration the order in which the tap is processed in your code. For instance, in your image the purple layer is on top, so it must be checked first, the blue second, and the green last. This way you can have an "island" inside a bigger area.
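A rough sketch of that hit test, assuming the zone PNGs are decoded at the same size as the map shown on screen (if the map is scaled, scale the touch coordinates back first; startZoneActivity is just a placeholder for launching activity_1/2/3):

// "layers" holds the zone bitmaps in hit-test order, topmost first
// (purple, then blue, then green), each the same size as the displayed map.
private Bitmap[] layers;

@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        int x = (int) event.getX();
        int y = (int) event.getY();
        if (x >= 0 && y >= 0 && x < layers[0].getWidth() && y < layers[0].getHeight()) {
            for (int i = 0; i < layers.length; i++) {
                // a non-transparent pixel means the tap landed inside this zone
                if (Color.alpha(layers[i].getPixel(x, y)) != 0) {
                    startZoneActivity(i);   // placeholder: start the matching Activity
                    return true;
                }
            }
        }
    }
    return super.onTouchEvent(event);
}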
The second way:
Generate the boundaries programmatically
I think this solution is self-explanatory. The only problem I've faced with this variant is that when the surfaces' boundaries get messy, it's really complicated to generate the proper equations.
EDIT:
Using the first approach you can employ multiple PNGs to load data or use a single PNG with data coded into the bytes (i.e. RGB values). It's up to you to decide which one to implement.
Hope it helps!
Since a touchscreen itself isn't very accurate, your collision detection for the buttons doesn't need to be either. It would be a waste of time to try to make a complicated collision detection algorithm to detect a touch within those weird shapes.
Since you are making a game, I assume you know how to handle custom touch events, as well as the canvas (at least). There are many ways to do what you want, but the specific example image you linked is kind of a special case.
You could create a giant bounding circle around the three blobs, and then check if the user touched within the bounds of the circle (i.e. check whether the distance from the touch to the center of the circle is less than or equal to the radius). Once you determine that it is, you could check which section of the circle it falls into by splitting it up into three equal sections. This requires some math, but shouldn't be that complicated.
It wouldn't be a perfect solution, but it should be good enough. Although, you might have to change the buttons a little so they aren't so stretched out horizontally, otherwise a bounding circle wouldn't be ideal.
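A minimal sketch of that check, assuming you've picked a centre (cx, cy) and radius for the bounding circle yourself:

// Returns 0, 1 or 2 for the three equal sections of the circle, or -1 for a miss.
static int hitSection(float x, float y, float cx, float cy, float radius) {
    float dx = x - cx;
    float dy = y - cy;
    if (dx * dx + dy * dy > radius * radius) {
        return -1;                      // outside the bounding circle
    }
    // angle of the touch around the centre, normalised to 0..360 degrees
    double angle = Math.toDegrees(Math.atan2(dy, dx));
    if (angle < 0) angle += 360;
    return (int) (angle / 120);         // three 120-degree sections
}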
Personally, in my games I always have "nodes" that represent the visual elements of the game, such as buttons. Instead of using a large image like you are doing, I would create separate images for each button, and then check their collisions with touch events independently. That way I could have each button check with their own individual bounding circles, or, if absolutely necessary, I could even have custom algorithms for each individual button.
These aren't perfect solutions. If you do want a pixel-perfect solution, you'll need to implement a proper polygon collision detection algorithm.
One thing to consider is screen size and aspect ratio. The only constants you should use are percentages (of the screen dimensions).
I am looking for a way to do something similar to the image at the bottom. However, I need the selection area to go all the way across the image and to be movable up and down via touch. I just need to get the coordinates of the selected pixels.
I am assuming I will need to create a canvas and draw the image background, but after that I'm pretty lost. If anyone can provide an example of something like this, or just a basic walkthrough on what to read up on I would really appreciate it.
I have a question regarding Android programming. I have created an application that takes in data via Bluetooth, and I would like to use that input as a sort of "game controller" to move an image around the screen. For instance, if the "up" command is given, the image (or even a colored circle) would move a few pixels up.
Can someone point me in the right direction? I thought of using a canvas and redrawing the image every so often but I don't know how to do this and I thought there might be a better/simpler way. I don't need anything fancy.
What I would do: catch the up command, then have a class, let's call it Circle, which holds the current circle's position, size, etc.
Whenever you catch the up command (or left, right, or any other command), you redraw the image at the new position. I would also use a custom SurfaceView for the drawing and an Activity for catching all the inputs and passing them on to the object.
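A bare-bones sketch of that structure (the names are just illustrative, and a plain View is enough for something this simple; a SurfaceView works the same way conceptually):

// Holds the state of the thing being moved around.
class Circle {
    float x = 100, y = 100, radius = 30;
    void move(float dx, float dy) { x += dx; y += dy; }
}

// Minimal custom view that just redraws the circle wherever it currently is.
class GameView extends View {
    private final Circle circle = new Circle();
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    GameView(Context context) {
        super(context);
        paint.setColor(Color.RED);
    }

    // Call this from the Activity whenever a Bluetooth command arrives.
    void onCommand(String command) {
        if ("up".equals(command))    circle.move(0, -10);
        if ("down".equals(command))  circle.move(0, 10);
        if ("left".equals(command))  circle.move(-10, 0);
        if ("right".equals(command)) circle.move(10, 0);
        postInvalidate();   // safe to call from a non-UI (e.g. Bluetooth) thread
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawCircle(circle.x, circle.y, circle.radius, paint);
    }
}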
I would check out
http://www.javacodegeeks.com/2011/07/android-game-development-moving-images.html
and read through the whole series, if you have the time. It is quite a good read.
I'm fairly new to the Android platform and was wondering if I could get some advice for my current head scratcher:
I'm making an app which, in one view, needs an image that can be scrolled on one axis, with a load of selectable points over the top of it. Each point needs to be positionable on the x and y axes (unlikely to change once the app is running, but I'll need to fine-tune the positions while I'm developing it).
I'd like to let the user select each point and have a graphic drawn on the point the user has selected, or just draw a graphic on one or more points without user intervention.
I thought that for the selectable points I could extend the checkbox with a custom image for the selected state - does that sound right, or is there a better way of doing this? Is there anything I can read up on for doing this? I can't seem to find anything on the net about replacing the default images.
I was going to use AbsoluteLayout, but I see that it's been deprecated and I can't find anything to replace it.
Can anyone give me some code or advice on where to read up on what I need to do?
Thank you in advance
This really feels like something you should be doing with the Canvas and 2D graphics, rather than trying to twist the widget framework to fit.
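Something along these lines is what I mean. It's only a sketch, with made-up point coordinates and a simple tap-to-toggle selection; one-axis scrolling could be added by keeping a scroll offset and applying it both when drawing and when hit-testing:

// Sketch: draws an image plus a set of selectable points on top of it.
class PointMapView extends View {
    private final Bitmap map;
    private final float[][] points = { {120, 80}, {300, 240}, {60, 400} };  // x, y pairs
    private final boolean[] selected = new boolean[points.length];
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private static final float HIT_RADIUS = 20f;

    PointMapView(Context context, Bitmap map) {
        super(context);
        this.map = map;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(map, 0, 0, null);
        for (int i = 0; i < points.length; i++) {
            paint.setColor(selected[i] ? Color.GREEN : Color.GRAY);
            canvas.drawCircle(points[i][0], points[i][1], HIT_RADIUS, paint);
        }
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            for (int i = 0; i < points.length; i++) {
                float dx = event.getX() - points[i][0];
                float dy = event.getY() - points[i][1];
                if (dx * dx + dy * dy < HIT_RADIUS * HIT_RADIUS) {
                    selected[i] = !selected[i];   // toggle the tapped point
                    invalidate();
                    return true;
                }
            }
        }
        return super.onTouchEvent(event);
    }
}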