I have a really serious problem and want to know if you can help me.
As App Inventor has some 'Capacity Issues', I cannot create lots of Screens in my project. So one of my screens is a 'Dynamic' one, which receives a StartValue from another screen and acts according to this value.
This 'Dynamic Screen' has one Canvas and two ImageSprites. Depending on which value the Screen receives, a different background image is used in the Canvas component and I have to move the two ImageSprites to different positions on the Canvas. I did this because these two ImageSprites are buttons over specific parts of the image that will open an ActivityStarter that runs the YouTube application at a given URL. The ImageSprite positions, the Canvas background, and the video that will be played when the canvas is pressed all depend, of course, on the input.
My problem here is that I cannot move these sprites this way (dynamically, depending on the StartValue the Screen receives); it simply doesn't work (I think it is a bug). I tried everything: setting ImageSprite.X and ImageSprite.Y, using ImageSprite.MoveTo and ImageSprite.PointTowards. It simply doesn't work if the Screen receives a StartValue.
If I am on the 'Blocks' tab, right-click the ImageSprite.MoveTo block, and select 'Do It', the ImageSprites do move. Also, if I go to the 'Design' tab and set which background the Canvas will use, the ImageSprites are moved without problems (but I don't want this scenario; I want the background to change depending on the StartValue).
Below is my 'Blocks' tab. What is wrong here? Is this a bug? Can you please help me with this issue? How can I move the ImageSprites on a Canvas depending on which StartValue a screen receives?
I'm creating an Android board game with several differently shaped board spaces (like Risk).
I want to be sure that my board appears correct and that OnTouchListeners stay in place on the GUI regardless of screen size/resolution.
Possible solutions I have thought of and their problems:
Create a single image for the board and assign OnTouchListeners based upon pixel geometry. Problem: If the user's display is a different resolution, my Listener might not be under the same pixels as my image (right?)
Create several ImageButtons and arrange them together. Problem: the ImageButtons might get rearranged based upon the display and I would end up with overlapping spaces or gaps.
Use Android custom drawing. If I do this, how would I link my Listeners to my Canvas and be sure that they are synced?
Basic question:
How to be sure that listeners sync with graphics in a GUI that uses irregular geometry?
I worked on an app with irregular touch areas so I can give you guidance on one way to achieve this.
Start with a single image for your entire board. This image is going to have a certain ("intrinsic") width and height regardless of any device resolutions.
Now here comes the tedious part. You (or maybe your graphic designer) will need to plot out coordinates of an irregular polygon for each touch area. These will be constants to your application.
When you are displaying your board, if you are zooming and panning on the image, you want to keep track of the transform matrix for the display. When the user touches the screen, you will get x,y coordinates from OnTouchListener and for those to be useful, you will have to "de-transform" the x,y to normalize it against the intrinsic dimensions of the board and your polygons.
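For instance (a minimal sketch; displayMatrix stands for whatever Matrix you apply when drawing the board, so the name is an assumption):

    import android.graphics.Matrix;
    import android.view.MotionEvent;

    // Undo the display transform so the touch can be compared against
    // polygons stored in the board's intrinsic coordinate space.
    static float[] toBoardCoords(MotionEvent event, Matrix displayMatrix) {
        float[] point = { event.getX(), event.getY() };
        Matrix inverse = new Matrix();
        if (displayMatrix.invert(inverse)) {
            inverse.mapPoints(point);  // point[0], point[1] are now board pixels
        }
        return point;
    }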
We rolled our own hit-testing logic using an algorithm from http://alienryderflex.com/polygon/, but you can also try this: Create a Path out of your polygon coordinates (using moveTo(), lineTo(), and close()), then assign the Path to a Region using Region.setPath(). Once you have that, supposedly you should be able to hit-test using Region.contains(x,y), but I've never tried it so I can't guarantee that's going to work.
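If you go the Path/Region route, a sketch might look like this (the polygon coordinates and board dimensions are placeholders):

    import android.graphics.Path;
    import android.graphics.Region;

    // Build a Region from one touch area's polygon (intrinsic board pixels).
    static Region buildRegion(float[][] polygon, int boardWidth, int boardHeight) {
        Path path = new Path();
        path.moveTo(polygon[0][0], polygon[0][1]);
        for (int i = 1; i < polygon.length; i++) {
            path.lineTo(polygon[i][0], polygon[i][1]);
        }
        path.close();
        Region region = new Region();
        // The clip bounds the Region; the board's intrinsic size is a natural choice.
        region.setPath(path, new Region(0, 0, boardWidth, boardHeight));
        return region;
    }

    // Later, with de-transformed touch coordinates:
    // boolean hit = region.contains((int) boardX, (int) boardY);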
I'd like to create a custom map. It should be, or look like, one picture, but depending on which part of it the user touches, it should take the user to a different location (i.e. start a different activity). I've seen this done in several games but I don't know how to do it myself.
The parts of the picture have non-geometrical borders (obviously this would be easy with many square images). Sadly, I don't even know what term describes what I want to do, so I wasn't able to find any helpful tutorials or discussions.
Example:
Picture: http://i236.photobucket.com/albums/ff40/iathen/mapEx.png
If the user touches the purple slide, (s)he should be led to activity_1
If the user touches the blue slide, (s)he should be led to activity_2
If the user touches the green slide, (s)he should be led to activity_3
In my experience there are two main (most used) ways to achieve this.
The first (my favorite):
Get the data from a PNG
You draw multiple layers to a canvas. These layers constitute your "zones" (blue, green, purple in the image). You get the data for these areas from PNGs (with transparency, of course) and draw them to the canvas however you want. You must store the values for where a user tap can land (the non-transparent areas). Note that these values can be scaled up/down depending on the map size, screen resolution, map dimensions, etc.
Once you've drawn the layers to the canvas, you check the user's tap against the stored areas for a match. Take into consideration the order in which the tap is processed in your code. For instance, in your image the purple layer is on top, so it must be processed first, the blue second, and the green last. This way you can have an "island" inside a bigger area.
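A minimal sketch of that alpha-based hit test, assuming you keep the layer bitmaps around and that the tap has already been scaled to bitmap coordinates (the class and method names are mine):

    import android.graphics.Bitmap;
    import android.graphics.Color;

    // Layers must be passed in top-to-bottom order (purple, blue, green
    // for the example map).
    public class ZoneHitTester {
        private final Bitmap[] layers;

        public ZoneHitTester(Bitmap... layers) {
            this.layers = layers;
        }

        // Returns the index of the topmost layer whose pixel at (x, y) is
        // non-transparent, or -1 if the tap landed on no zone.
        public int hitTest(int x, int y) {
            for (int i = 0; i < layers.length; i++) {
                Bitmap layer = layers[i];
                if (x >= 0 && y >= 0 && x < layer.getWidth() && y < layer.getHeight()
                        && Color.alpha(layer.getPixel(x, y)) != 0) {
                    return i;
                }
            }
            return -1;
        }
    }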
The second way:
Generate the boundaries programmatically
I think this solution is self-explanatory. The only problem I've faced with this variant is that when the surface boundaries get messy, it's really complicated to generate the proper equations.
EDIT:
Using the first approach you can employ multiple PNGs to load the data, or use a single PNG with the data encoded into the bytes (e.g. into the RGB values). It's up to you to decide which one to implement.
Hope it helps!
Since a touchscreen itself isn't very accurate, your collision detection for the buttons doesn't need to be either. It would be a waste of time to try to make a complicated collision detection algorithm to detect a touch within those weird shapes.
Since you are making a game, I assume you know how to handle custom touch events, as well as canvas drawing (at least). There are many ways to do what you want, but the specific example image you linked is kind of a special case.
You could create a giant bounding circle around the three blobs, and then check whether the user touched within the bounds of the circle (i.e. check whether the distance from the touch to the center of the circle is less than or equal to the radius). Once you determine that it is, you can check which section of the circle it falls into by splitting the circle into three equal sections. This requires some math, but it shouldn't be that complicated.
It wouldn't be a perfect solution, but it should be good enough. Although, you might have to change the buttons a little so they aren't so stretched out horizontally, otherwise a bounding circle wouldn't be ideal.
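A rough sketch of that check (the circle's center and radius are placeholders you would measure from your image):

    // Returns -1 if the touch missed the circle, otherwise 0, 1 or 2 for
    // the three equal sections, measured from the positive x-axis.
    static int sectionForTouch(float x, float y, float cx, float cy, float radius) {
        float dx = x - cx;
        float dy = y - cy;
        if (dx * dx + dy * dy > radius * radius) {
            return -1;  // outside the bounding circle
        }
        double angle = Math.toDegrees(Math.atan2(dy, dx));  // -180..180
        if (angle < 0) angle += 360;                        // 0..360
        return (int) (angle / 120);                         // three 120-degree slices
    }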
Personally, in my games I always have "nodes" that represent the visual elements of the game, such as buttons. Instead of using a large image like you are doing, I would create separate images for each button, and then check their collisions with touch events independently. That way I could have each button check with their own individual bounding circles, or, if absolutely necessary, I could even have custom algorithms for each individual button.
These aren't perfect solutions. If you do want a pixel-perfect solution, you'll need to implement some polygon collision detection algorithms.
One thing to consider is screen size and aspect ratio. The only constants you should use are percentages.
Is there any way to create a 360-degree object view from photos? I have a set of 71 photos of a single car viewed from different angles. I want to combine them and be able to rotate the car by touching the screen, seeing it from different angles.
I've done some research but I couldn't find anything done in Android. One example is found here
This example is made with jQuery. What I need is to implement it directly inside an Android app. How can I do this?
Edit 1: Until now I have managed to create an animation between images in this way:
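(Roughly along these lines; the resource and view names are placeholders.)

    // Frame-by-frame animation (AnimationDrawable) built from the car images.
    final AnimationDrawable anim = new AnimationDrawable();
    for (int i = 1; i <= 71; i++) {
        int resId = getResources().getIdentifier(
                String.format("car_%02d", i), "drawable", getPackageName());
        anim.addFrame(getResources().getDrawable(resId), 50);  // 50 ms per frame
    }
    anim.setOneShot(false);

    ImageView carView = (ImageView) findViewById(R.id.car_view);
    carView.setImageDrawable(anim);
    carView.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            anim.start();  // starts on click and then runs by itself
        }
    });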
The problem is that the animation starts on click and then runs by itself. I want to be able to move the car from left to right and right to left by keeping my finger pressed and dragging left or right. How can I do that, so I can see the car from the angle I want?
I just tested that jQuery plugin page on my device and it seems to work all right. So you could presumably still use that plugin to make some HTML content that you could then load into a WebView. That would give you the rotation behavior inside your application.
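Loading it could be as simple as this (the asset file name is just an example):

    // Assumes the plugin page and its images are packaged under assets/.
    WebView webView = (WebView) findViewById(R.id.web_view);
    webView.getSettings().setJavaScriptEnabled(true);  // the jQuery plugin needs JS
    webView.loadUrl("file:///android_asset/rotate_car.html");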
If you don't want to use HTML/JavaScript to do it, you'll have to use an ImageView with a TouchListener attached to it that handles the drag events by swapping to the next image at the appropriate interval.
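A sketch of that approach; frameIds (the 71 drawable ids in rotation order) and the drag sensitivity are assumptions:

    // Drag left/right on the ImageView to step through the frames.
    void enableDragRotation(final ImageView imageView, final int[] frameIds) {
        imageView.setOnTouchListener(new View.OnTouchListener() {
            private final float pixelsPerFrame = 10f;  // drag distance per frame; tune to taste
            private float lastX;
            private int frame = 0;

            @Override
            public boolean onTouch(View v, MotionEvent event) {
                switch (event.getAction()) {
                    case MotionEvent.ACTION_DOWN:
                        lastX = event.getX();
                        return true;
                    case MotionEvent.ACTION_MOVE:
                        int steps = (int) ((event.getX() - lastX) / pixelsPerFrame);
                        if (steps != 0) {
                            // modulo that also wraps correctly for negative steps
                            frame = ((frame + steps) % frameIds.length + frameIds.length) % frameIds.length;
                            imageView.setImageResource(frameIds[frame]);
                            lastX = event.getX();
                        }
                        return true;
                }
                return false;
            }
        });
    }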
You could probably also do it somehow with Canvas, though I'm not as familiar with that; I wouldn't know how to describe what you'd need to do to make it work that way.
After hours of trying to accomplish this task, I stopped using the Drawable animation method, because at the point where I was loading the images the app would crash because it was out of memory.
Instead I found another way to do it, which I use right now. Example
I changed the .html with the images I need and the layout I want, then I embedded it inside my app using a WebView. It's working pretty well.
I'm doing a live wallpaper. However, what is initially shown depends on the number of home screens.
While onOffsetsChanged() allows you to calculate the number of home screens, it only gets called when the user scrolls the home screen.
So is there a way to get the current xStep and xOffset without onOffsetsChanged() being called?
Edit: I may not need to know that per se. Here's what I'm doing: I'm basically drawing a portion of the bitmap. The portion shown depends on the current homescreen.
Edit 2: To explain what I'm trying to do: I'm basically trying to mimic the scrolling-wallpaper effect, but with a video. The point is that the portion shown depends on the current home screen. Here's the problem: the user selects the wallpaper, onSurfaceCreated() is called, followed by onSurfaceChanged(). However, onOffsetsChanged() is never called until the user scrolls the home screens. That's the problem: you don't know what part of the bitmap/video to display until the user scrolls the screen. (So Josh's suggestion doesn't work; the part of the video that's displayed may be wrong until the user scrolls the screen and we get correct onOffsetsChanged() values.)
Your edit doesn't really explain why you need to know how many screens there are. You can draw the center portion of your bitmap initially; then, when xOffset changes to something like 0, draw the leftmost portion of your bitmap. What's the issue?
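In other words, something like this (a sketch; screenWidth/screenHeight and the wide bitmap are assumed to be known from onSurfaceChanged()):

    // Pick which slice of the wide wallpaper bitmap to draw for a given
    // xOffset (0.0 = leftmost home screen, 1.0 = rightmost). Until the
    // first onOffsetsChanged() arrives, xOffset = 0.5f gives the center.
    int srcLeft = (int) (xOffset * (bitmap.getWidth() - screenWidth));
    Rect src = new Rect(srcLeft, 0, srcLeft + screenWidth, bitmap.getHeight());
    Rect dst = new Rect(0, 0, screenWidth, screenHeight);
    canvas.drawBitmap(bitmap, src, dst, null);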
I'm working on my live wallpaper and I want it to scroll with the screen like a normal wallpaper does. I know I need to use onOffsetsChanged(), but which parameter will tell me the direction the home screen is being swiped? It seems like xOffset always returns a positive value no matter which way the screen slides.
Thank you.
The direction alone will not help you: you need to know the exact offset, because the user may have jumped several screens at once (e.g. by using a pop-up that displays mini-versions of all the home screens).
Generally speaking, you want to save the xPixelOffset value you get in onOffsetsChanged(), then use it to translate your canvas.
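A minimal sketch inside a WallpaperService.Engine (wallpaperBitmap is an assumed field holding the wide wallpaper image):

    private int xPixel = 0;  // saved from the last onOffsetsChanged()

    @Override
    public void onOffsetsChanged(float xOffset, float yOffset,
                                 float xOffsetStep, float yOffsetStep,
                                 int xPixelOffset, int yPixelOffset) {
        xPixel = xPixelOffset;  // usually <= 0; shifts the picture as the screens scroll
        drawFrame();
    }

    private void drawFrame() {
        SurfaceHolder holder = getSurfaceHolder();
        Canvas canvas = holder.lockCanvas();
        if (canvas == null) return;
        try {
            canvas.drawColor(Color.BLACK);
            canvas.drawBitmap(wallpaperBitmap, xPixel, 0, null);  // translate by the saved offset
        } finally {
            holder.unlockCanvasAndPost(canvas);
        }
    }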
See my two answers below:
onOffsetsChanged: move Bitmap
android live wallpaper rescaling