AndEngine pixel-perfect collision for GLES2?

I am using AndEngine to implement a 2D game, and I am now wondering whether there is any already-implemented pixel-perfect collision detection library/extension for AndEngine GLES2. I've been searching for hours and found nothing. Please do not refer me to this, as I've already tried it and it only works for GLES1. If there really isn't any existing work, I'm open to any ideas on how to implement it myself (although I chose AndEngine precisely for pixel-perfect collisions, so maybe that was a bad idea).
Thank you.

AndEngine Collision Extension:
https://github.com/MakersF/AndEngineCollisionsExtension
This is an extension that aims to bring different collision methods (perfect or approximations) to AndEngine GLES2.
Supported collision methods:
Pixel-perfect collision (supports translation, scale, rotation, skew).
It also supports pixel-perfect collision between pixel-perfect shapes and rectangular shapes, without the need for the latter to be a pixel-perfect shape.
Alpha values different from 0 (you set the threshold that identifies whether a pixel is solid or not).
You can use the Utils methods to check the performance in your app or to output the collision mask and verify it is what you need.
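For anyone wondering roughly how pixel-perfect collision works under the hood, here is a hypothetical sketch of the general technique for two axis-aligned sprites (no rotation or skew); this is not the extension's actual API, just an illustration of comparing alpha masks inside the overlapping bounding box:

import android.graphics.Bitmap;
import android.graphics.Rect;

public final class PixelCollision {

    /** Builds a solidity mask: true where a pixel's alpha exceeds the threshold. */
    public static boolean[][] buildMask(Bitmap bitmap, int alphaThreshold) {
        boolean[][] mask = new boolean[bitmap.getWidth()][bitmap.getHeight()];
        for (int x = 0; x < bitmap.getWidth(); x++) {
            for (int y = 0; y < bitmap.getHeight(); y++) {
                mask[x][y] = (bitmap.getPixel(x, y) >>> 24) > alphaThreshold;
            }
        }
        return mask;
    }

    /**
     * Returns true if two axis-aligned sprites overlap on at least one solid pixel.
     * (xA, yA) and (xB, yB) are the sprites' top-left corners in world coordinates.
     */
    public static boolean collide(boolean[][] maskA, int xA, int yA,
                                  boolean[][] maskB, int xB, int yB) {
        Rect a = new Rect(xA, yA, xA + maskA.length, yA + maskA[0].length);
        Rect b = new Rect(xB, yB, xB + maskB.length, yB + maskB[0].length);
        Rect overlap = new Rect();
        if (!overlap.setIntersect(a, b)) {
            return false;                      // bounding boxes don't even touch
        }
        for (int x = overlap.left; x < overlap.right; x++) {
            for (int y = overlap.top; y < overlap.bottom; y++) {
                if (maskA[x - xA][y - yA] && maskB[x - xB][y - yB]) {
                    return true;               // both sprites are solid at this pixel
                }
            }
        }
        return false;
    }
}

Rotation and scaling are what make the real thing harder: each world pixel has to be transformed back into each sprite's local texture space before the mask lookup, which is the part the extension takes care of.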

Related

Detect & measure shape (circle) sizes in an image on Android

The requirement is to create an Android application running on one specific mobile device that records video of a human eye pupil dilating in response to a bright light (which is physically attached to the mobile device). The video is then post-processed frame by frame on the device to detect & measure the diameter of the pupil AND the iris in each frame. Note the image processing does NOT need to be done in real time. The end result will be a dataset describing the changes in pupil (& iris) size over time. It's expected that the iris size can be used to enhance confidence in the pupil diameter data (e.g. removing pupil size data that's wildly wrong), but also as a relative measure of how dilated the eye is at any point.
I am familiar with developing Android mobile apps, but my experience with image processing is very limited. I've researched solutions and it seems that the answer may lie with the OpenCV/JavaCV libraries, which should provide shape detection (e.g. http://opencvlover.blogspot.co.uk/2012/07/hough-circle-in-javacv.html), but can anyone provide guidance on these specific questions:
Am I right to think it can detect the two circle shapes within a bitmap, one inside the other? I.e. shapes inside each other are not a problem.
Is it true that JavaCV can detect a circle and return a position & radius/diameter? I.e. it doesn't return a set of vertices that then require further processing to compare with a circle? It seems to have a HoughCircles method, so I think yes.
What processing of each frame is typically done before shape detection? For example, an algorithm to enhance edges, smooth, or remove colour?
Can I use it not just to detect the presence of the circles, but also to measure their diameter? (In pixels, which can then easily be converted to real-world measurements because known hardware is being used.) I think yes, but it would be great to hear confirmation from those more familiar.
This project is a non-commercial charitable project, so any help especially appreciated.
I would really suggest using the NDK, as the native OpenCV API is a bit richer in features. It also allows you to run and test your algorithms on a laptop with still images before pushing them to a device, speeding up development.
Pre-processing steps:
Typically one would use thresholding or Canny edge detection, plus morphological operations like erode/dilate.
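A rough sketch of that preprocessing chain with the plain OpenCV Java binding (the question mentions JavaCV, which wraps the same functions; every threshold and kernel size below is a placeholder to tune):

import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public final class Preprocess {

    /** Typical clean-up before circle/blob detection on a single video frame. */
    public static Mat run(Mat frame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY); // use COLOR_RGBA2GRAY for Android camera frames
        Imgproc.GaussianBlur(gray, gray, new Size(9, 9), 2);   // smooth out sensor noise

        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50, 150);                   // edge map (or Imgproc.threshold for a binary image)

        // Morphological opening (erode then dilate) removes small speckle from the edge map.
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
        Imgproc.erode(edges, edges, kernel);
        Imgproc.dilate(edges, edges, kernel);
        return edges;
    }
}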
For detection of the iris/pupil, HoughCircles is not a very good method; feature-detection methods like MSER work better for not-so-well-defined circles. Here is another answer I wrote on the same topic which has code that could help.
If you are looking to measure the regions, I would suggest going through this blog. It has a clear explanation of the steps involved for a reasonably accurate measurement.
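That said, to answer points 2 and 4 of the question: the Hough circle transform does return a centre and a radius directly, so no vertex post-processing is needed, and the diameter in pixels comes straight out of it. A minimal sketch with the OpenCV 3+ Java binding (parameters are placeholders; HoughCircles expects a blurred grayscale image and runs its own Canny internally):

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public final class CircleMeasure {

    /** Detects circles in a blurred grayscale frame and reports centre + diameter. */
    public static void measure(Mat gray) {
        Mat circles = new Mat();
        Imgproc.HoughCircles(gray, circles, Imgproc.HOUGH_GRADIENT,
                1.0,                 // accumulator resolution (same as the input image)
                gray.rows() / 8.0,   // minimum distance between detected centres
                100, 30,             // Canny high threshold, accumulator threshold
                10, 200);            // min and max radius in pixels, tuned to the rig

        for (int i = 0; i < circles.cols(); i++) {
            double[] c = circles.get(0, i);       // (centreX, centreY, radius)
            double diameterPx = 2 * c[2];
            System.out.printf("circle at (%.1f, %.1f), diameter %.1f px%n", c[0], c[1], diameterPx);
        }
    }
}

Because the hardware and distance to the eye are fixed, converting diameterPx to millimetres is just a constant scale factor measured once against a target of known size.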

Direction to take with Android Graphics

I am looking at making a simple game. Without giving out the entire story, I need to draw two pieces of fruit (with arms and legs) that perform different movements. They can do a few different actions (fewer than 5), and they also react to each other's actions.
I'd like it to look simple. Very 2D, kids' sort of graphics. Maybe shaded, but nice bright happy colours.
Let's say an action is to 'throw ball'. I'd like to see a semi-smooth arm action. Smooth if possible.
So, I found a tutorial which used sprites and a PNG with 3 different states of a person walking. So, very basic. And I was able to make it walk across the screen, loading the part of the PNG for each state and iterating through them over and over again while moving the image.
I got pretty happy with my progress and would like to base my game on that sort of model - but... is using sprites and loading areas of the PNG to make the image move the correct approach? My PNG would be large if I want maybe 20 images just to throw the ball.
But if that's the right way to go, then great! It seems you can go with OpenGL and all that, but that's for 3D graphics, right? Would using sprites and a few PNGs with images be OK for performance and all that?
OpenGL is a valid choice for 2D or 3D; you shouldn't have any performance issues.
It will work fine for your game, and would likely be much smoother than trying to use Android animations, which are not hardware accelerated on Android 2.x.
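For what it's worth, the sprite-sheet approach described in the question is a perfectly normal way to do 2D frame animation, with Canvas as well as with OpenGL. A minimal sketch (class name and frame layout are made up, assuming all frames sit side by side in one PNG):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Rect;

public class SpriteSheet {
    private final Bitmap sheet;        // one PNG containing every frame, side by side
    private final int frameWidth;
    private final int frameHeight;
    private final int frameCount;
    private int currentFrame = 0;

    public SpriteSheet(Bitmap sheet, int frameCount) {
        this.sheet = sheet;
        this.frameCount = frameCount;
        this.frameWidth = sheet.getWidth() / frameCount;
        this.frameHeight = sheet.getHeight();
    }

    /** Draws the current frame at (x, y) and advances to the next one. */
    public void drawNextFrame(Canvas canvas, int x, int y) {
        Rect src = new Rect(currentFrame * frameWidth, 0,
                (currentFrame + 1) * frameWidth, frameHeight);
        Rect dst = new Rect(x, y, x + frameWidth, y + frameHeight);
        canvas.drawBitmap(sheet, src, dst, null);
        currentFrame = (currentFrame + 1) % frameCount;   // loop the animation
    }
}

Twenty frames for the throw is not a problem in itself; the usual trick is to pack the frames into one sheet (a texture atlas) and keep each frame's bounds in code, exactly as above.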

Live Wallpaper Water Ripple Effect

I'm working on a live wallpaper that incorporates some water ripple effects on touching the screen but I'm a little stuck.
Would it be better to create multiple images and loop through them to create a ripple animation or would it be better to distort the bitmap a bit before I place it on the canvas?
This is a video of a very nice ripple effect done through OpenGL.
I don't have any experience yet with OpenGL and was wondering if it is still possible to create a 2D water effect on the live wallpaper?
I wanted to implement a realistic ripple effect in Android too, so I will share my experience:
As a reference implementation I took Sergey Chikuyonok's JavaScript port of Neil Wallis' Java algo. Here's a playground where you can experiment with the original JS code: http://jsfiddle.net/esteewhy/5Ht3b/6/
At first, I ported the JS code to Java, only to realize that there's no way to squeeze more than 1 fps out of my Huawei U8100 hardware. (There are several similar attempts on the net, with the only conclusion being: they're ridiculously slow.)
BTW, this SO answer was quite useful for getting a basic understanding of how to code interactive graphics in Android: https://stackoverflow.com/a/4946893/35438. I borrowed the fps counter from there.
Then I decided to try the Android NDK to reimplement the original algo in pure C (my first encounter with it in 10+ yrs!). Despite the NDK's docs being somewhat confusing (especially as to requirements and prerequisites), it all worked like a charm, so I was able to achieve up to 30 fps -- it might not be too impressive, but still a radical improvement over the Java code.
Finally, I've put all my work online: https://github.com/esteewhy/whater , so feel free to play with it. It contains:
Interactive bouncing ball code mentioned above (just for reference).
Water ripples Java port (slow as hell!)
Water ripples C implementation (needs the NDK to compile and the JDK to create the .h file).
(The project is not "clean", i.e. all binaries are there, so you can try to run it "as is" even without the NDK.)
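For reference, the heart of the algorithm behind those ports (Neil Wallis' two-buffer ripple) is only a few lines. A simplified Java sketch of one propagation step, assuming two int height buffers the size of the bitmap:

public final class Ripple {

    /**
     * One simulation step of the classic two-buffer water ripple.
     * prev and curr hold a wave height per pixel; a touch simply writes a large
     * value into curr at the touched pixel. The caller swaps the buffers each frame.
     */
    public static void step(int[] prev, int[] curr, int width, int height, int dampingShift) {
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                int i = y * width + x;
                // New height: half the sum of the four neighbours minus the old value here...
                int v = ((prev[i - 1] + prev[i + 1] + prev[i - width] + prev[i + width]) >> 1)
                        - curr[i];
                // ...then damped a little so the waves fade out over time.
                curr[i] = v - (v >> dampingShift);
            }
        }
    }
}

The height field is then used to offset the source coordinates when copying the background bitmap, which is what produces the refraction look; doing that copy per pixel in Java is exactly the part that was too slow and moved to C.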
You can find an example of a touch ripple effect here:
https://github.com/MasDennis/RajawaliExamples
It utilizes the Rajawali OpenGL ES framework/library. You can download the Rajawali examples app from the market to see how it looks. Browse through the "src" folder and you will see the TouchRippleEffect activity and renderer. Hope that helps.
I'm no expert in this, but I believe the typical way to do water effects in OpenGL is with a fragment shader. With a static image as a texture, your shader can vary the texture coordinates used for sampling that image, to distort it in arbitrary ways.
Calculate the pixel's direction and distance from the center of the circle, and adjust the texture coordinate toward or away from the circle's center based on a sinusoidal function of the distance, and you should get a nice ripple effect.
Judging by the description of that YouTube video you linked, it sounds like that's done by using a grid of triangles and adjusting the texture coordinates only at the vertices. That should work too, but it won't look as good unless you use a rather fine grid. Doing it per-pixel with a fragment shader is the ideal, but I don't know whether that would cause performance problems on a phone's GPU.
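A sketch of that per-pixel idea as a GLES 2.0 fragment shader, written here as a Java string constant the way Android shader sources are usually embedded (the uniform names and constants are made up, just a starting point to tweak):

// Hypothetical fragment shader: offsets each texture lookup along the direction
// from the ripple centre by a damped sine of the distance, as described above.
static final String RIPPLE_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D u_texture;\n" +
        "uniform vec2 u_center;\n" +      // ripple origin in texture coordinates
        "uniform float u_time;\n" +       // seconds since the touch
        "varying vec2 v_texCoord;\n" +
        "void main() {\n" +
        "  vec2 dir = v_texCoord - u_center;\n" +
        "  float dist = length(dir);\n" +
        "  float ripple = 0.02 * sin(40.0 * dist - 8.0 * u_time) * exp(-4.0 * dist);\n" +
        "  vec2 uv = dist > 0.001 ? v_texCoord + (dir / dist) * ripple : v_texCoord;\n" +
        "  gl_FragColor = texture2D(u_texture, uv);\n" +
        "}\n";

The vertex shader just passes the texture coordinate through; compiling and attaching the shader with GLES20.glShaderSource / glCompileShader works the same as for any other GLES 2.0 program.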

AndEngine VS Android's Canvas VS OpenGLES - For rendering a 2D indoor vector map

This is a big issue that I've been trying to figure out for a long time already.
I'm working on an application that should include a 2D vector indoor map in it.
The map will be drawn out from an .svg file that will specify all the data of the lines, curved lines (path) and rectangles that should be drawn.
My main requirements for the map are:
Support touch events to detect where exactly the finger is touching.
Great image quality especially when considering the drawings of curved and diagonal lines (anti-aliasing)
Optional but very nice to have - Built in ability to zoom, pan and rotate.
So far I tried AndEngine and Android's canvas.
With AndEngine I had troubles with implementing anti-aliasing for rendering smooth diagonal lines or drawing curved lines, and as far as I understand, this is not an easy thing to implement in AndEngine.
Though I have to mention that AndEngine's ability to zoom in and pan with the camera instead of modifying the objects on the screen was really nice to have.
I also had some little experience with the built in Android's Canvas, mainly with viewing simple bitmaps, but I'm not sure if it supports all of these things, and especially if it would provide smooth results.
Last but not least, there's the option of just plain OpenGL ES 1 or 2, which, as far as I understand, with enough work should be able to support all the features I require. However, it seems like something that would be hard to implement, and I've never programmed in OpenGL or anything like it, though I'm very much willing to learn.
To sum it all up, I need a platform that would provide me with the ability to do the 3 things I mentioned before, but also - very importantly - allow me to implement this feature as fast as possible.
Any kind of answer or suggestion would be very much welcomed as I'm very eager to solve this problem!
Thanks!
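For the Canvas option specifically, anti-aliasing and zoom/pan/rotate are both available out of the box; here is a minimal sketch of a custom View, where the hard-coded path stands in for whatever gets parsed out of the .svg:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.View;

public class MapView extends View {
    private final Paint linePaint = new Paint(Paint.ANTI_ALIAS_FLAG); // smooth diagonals and curves
    private final Path wallPath = new Path();
    private float zoom = 1f, panX = 0f, panY = 0f;   // updated by your touch handling

    public MapView(Context context) {
        super(context);
        linePaint.setStyle(Paint.Style.STROKE);
        linePaint.setStrokeWidth(3f);
        linePaint.setColor(Color.DKGRAY);
        // Placeholder geometry; in practice these moveTo/lineTo/quadTo calls
        // would be generated from the parsed .svg path data.
        wallPath.moveTo(0, 0);
        wallPath.lineTo(300, 150);
        wallPath.quadTo(400, 200, 300, 300);   // a curved segment
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.save();
        canvas.translate(panX, panY);   // pan
        canvas.scale(zoom, zoom);       // zoom (canvas.rotate() works the same way)
        canvas.drawPath(wallPath, linePaint);
        canvas.restore();
        // For touch hit-testing, map the touch point back through the inverse
        // of the same translate/scale before comparing it with the map geometry.
    }
}

Whether this stays fast enough depends on how complex the map is; it is worth profiling with a real .svg before committing to this route.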

How to implement physical effect, perspective effect on Android

I'm researching 2D games for Android in order to implement an Android game project. My project looks much like PaperToss. Instead of throwing a page, my game will throw a coin. Suppose I have a coin placed in three-dimensional space with coordinates A(x,y,z). I throw that point forward, and after 1/100 of a second the coin moves from A(x,y,z) to A'(x',y',z'). This leaves me with two problems to solve.
Determine the formulas that can be used to compute the coordinates of the coin at time t.
I'm still researching this one and have no idea how to solve it yet.
Map the three-dimensional points to two-dimensional coordinates and use those new coordinates to draw the coin on screen. I have found two solutions for this problem: orthographic projection & perspective projection.
However, an old friend of mine said that OpenGL can help solve problems like these. Does anybody have experience with this kind of problem?
Help me please :)
Thanks for reading my question.
My personal opinion is to use a 3D engine, since I don't like doing hacky stuff trying to fake 3D with 2D. However, 2D is much more beginner-friendly, so if you're not planning to do anything fancy like actually emulating how the coin rotates in the air, then I think it's doable in 2D.
My approach to do this would be to use a framework like libgdx instead of dealing with OpenGL ES directly. They have built in support for view projection (PerspectiveCamera), high level 2D/3D rendering support and even a wrapper for OpenGL ES if you really want to go low level. I'm not so sure about how you would approach the physics part right now.
Good luck....
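For reference, problem 1 is standard projectile motion and problem 2 (if you don't let libgdx's PerspectiveCamera do it for you) is a simple perspective divide; a minimal sketch of both, with gravity along the y axis and an arbitrary focal length:

public final class CoinToss {

    // Problem 1: position at time t under constant gravity,
    // p(t) = p0 + v0 * t + 0.5 * g * t^2, applied per axis.
    public static float[] positionAt(float[] p0, float[] v0, float t) {
        final float g = -9.8f;   // gravity along y, in world units per second squared
        return new float[] {
                p0[0] + v0[0] * t,
                p0[1] + v0[1] * t + 0.5f * g * t * t,
                p0[2] + v0[2] * t
        };
    }

    // Problem 2: perspective projection of a 3D point onto the screen.
    // The camera sits at the origin looking down +z; assumes the coin stays
    // in front of the camera (z > 0). focalLength controls the field of view.
    public static float[] project(float[] p, float focalLength, float screenCx, float screenCy) {
        float screenX = screenCx + focalLength * p[0] / p[2];
        float screenY = screenCy - focalLength * p[1] / p[2];   // screen y grows downwards
        return new float[] { screenX, screenY };
    }
}

Calling positionAt every frame with the elapsed time and feeding the result into project gives the coin's screen position; an orthographic projection would simply drop the division by z.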
