I am looking at making a simple game. Without giving away the entire story, I need to draw two pieces of fruit (with arms and legs) who perform different movements. They can do a few different actions (fewer than five), and they also react to each other's actions.
I'd like it to look simple: very 2D, kids' sort of graphics. Maybe shaded, but with nice, bright, happy colours.
Let's say an action is to 'throw a ball'. I'd like to see a semi-smooth arm action, fully smooth if possible.
So, I found a tutorial which used sprites and a PNG with 3 different states of a person walking. Very basic. I was able to make it walk across the screen by loading each part of the PNG for each state, iterating through them over and over again while moving the image.
I got pretty happy with my progress and would like to base my game on that sort of model, but is using sprites and loading areas of the PNG to make the image move the correct approach? My PNG would be large if I want maybe 20 images just to throw the ball.
But if that's the right way to go, then great! It seems you can also go with OpenGL and all that, but that's for 3D graphics, right? Would using sprites and a few PNGs of images be OK for performance and all that?
OpenGL is a valid choice for 2D or 3D; you shouldn't have any performance issues.
It will work fine for your game, and it would likely be much smoother than trying to use Android animations, which are not hardware accelerated on Android 2.x.
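For reference, here is a minimal sketch of the sprite-sheet approach you described, on the plain Android Canvas (the drawable name, frame size and frame count are assumptions for illustration, not from your tutorial):

    import android.content.Context;
    import android.graphics.*;
    import android.view.View;

    // Sketch: the PNG holds FRAME_COUNT frames side by side; each onDraw copies one
    // frame-sized region of the sheet to the screen, then advances to the next frame.
    public class SpriteView extends View {
        private static final int FRAME_W = 64, FRAME_H = 64, FRAME_COUNT = 20; // assumed sizes
        private final Bitmap sheet;
        private final Paint paint = new Paint();
        private int frame = 0, x = 0;

        public SpriteView(Context context) {
            super(context);
            // R.drawable.fruit_throw is a placeholder for your own sprite-sheet PNG
            sheet = BitmapFactory.decodeResource(getResources(), R.drawable.fruit_throw);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            Rect src = new Rect(frame * FRAME_W, 0, (frame + 1) * FRAME_W, FRAME_H); // which frame
            Rect dst = new Rect(x, 200, x + FRAME_W, 200 + FRAME_H);                 // where to draw it
            canvas.drawBitmap(sheet, src, dst, paint);

            frame = (frame + 1) % FRAME_COUNT; // next frame of the throw animation
            x += 2;                            // keep moving the character
            invalidate();                      // crude way to request the next draw
        }
    }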
I want to implement a tricky fluid-ink animation like the one in the image below:
I want to achieve it using only the basic classes, applying scale, translate, etc. I don't want to go with 2D or 3D. Some of the links I have checked suggest frame animation, but that doesn't have a smooth transition.
Kindly suggest the best way to achieve this animation.
Thanks,
"I don't want to go with 2D or 3D."
Well... I don't know about you, but those are the only options until we have 4D screens and mindsets.
But I guess you mean using 2D images or 3D models.
Unless you want to procedurally generate that (which can be quite a pain), I'd suggest using a sprite sheet. There may be better ways to do this, but I think a sprite sheet at 30 fps might be the easiest and most fluid way; increase the FPS to increase fluidity (up to the maximum of Android's 60 fps screens).
Basically:
Create a (for example) 256x128 sprite and "manually" animate the sphere to the other side of the image, saving each "frame". Then either create one big texture containing all the frames (most efficient), or save each frame independently (lighter on RAM if you manage it well, but it will be hard to hit 60 fps this way due to loading times).
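A rough sketch of the timing side, assuming the frames already sit in one big texture; all the names here are made up:

    // Inside whatever class drives your drawing.
    static final long FRAME_DURATION_MS = 1000 / 30; // ~33 ms per sheet frame = 30 fps
    long lastFrameTime = 0;
    int frame = 0;

    void updateFrame(int frameCount) {
        long now = System.currentTimeMillis();
        if (now - lastFrameTime >= FRAME_DURATION_MS) {
            frame = (frame + 1) % frameCount; // advance to the next cell of the sheet
            lastFrameTime = now;
        }
    }

This keeps the animation speed tied to real time rather than to how often the screen happens to be redrawn.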
You can check out the recently announced Lottie library (by Airbnb).
It is a library for Android that parses Adobe After Effects animations exported as JSON with Bodymovin (a related project) and renders them natively on Android devices.
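For example, a minimal usage sketch; the view id and "ink.json" (a Bodymovin export placed in src/main/assets) are placeholders, and the exact method names can differ between Lottie versions:

    // Inside an Activity's onCreate, after setContentView(...).
    LottieAnimationView animationView = (LottieAnimationView) findViewById(R.id.animation_view);
    animationView.setAnimation("ink.json"); // parse the After Effects JSON from assets
    animationView.loop(true);               // keep the ink animation repeating
    animationView.playAnimation();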
So I'm doing a 3D game for kids for Android and iOS in Unity, but I'm new to game development and it's been really difficult to plan the assets.
We need to create 2D animations (paper-like characters), and the characters have to be really detailed, with great animations.
We have been thinking of several options:
We could create frame-by-frame animations, but our designer says there have to be at least 24 images per second (because of the 24 fps frame rate); with this, the app would be very big.
Another option is to create 2D models in Blender and animate them there, but it's a lot of work and could take a lot of time.
The last option is to split the model into pieces and animate it through code, but that's also a lot of work, and I believe the quality of the animations would be low.
What's the best way to create 2D animations in Unity?
Thank you!
Have you explored the 2D sprite engines that are available in Unity? Whoever said "Unity isn't really an engine designed to work with 2D stuff" is talking guff. I have just started working on a hobby 2D game and am using a Unity plugin called Orthello (see WyrmTale website for info). It handles sprite sheets, animations, collision detection and more without you having to write loads of code to do this. The learning curve is a bit steep and the examples on their website aren't the best but I found replicating the sample solutions that come with the download the best way to get something working.
There's also a similar tool called Sprite Manager 2 but you have to pay for that (I think). Check out the asset store for more information.
I would be really interested to hear if Orthello is what you're looking for and how you find working with it - please let me know via http://markp3rry.wordpress.com if you can.
Just because the app runs at 24 fps doesn't mean you can't display each animation frame for more than one frame of the main loop. It might not be smooth, but then again, looking at the sprite sheets of games like Super Street Fighter, it doesn't look like they're anywhere close to 24 fps (the sprite sheet for Dhalsim in SF3 Alpha is a 210 kb .gif file on my computer, and there are fewer than 252 frames of animation on it). Likewise, the total storage space taken up by every character sprite in Dustforce is a mere 7 MB, though those sprites are just 192x192, which may be too low-res for you; they do look like paper, though. I also doubt that anything involving Blender would take longer than hand-animating, since Blender does key frames for you.
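The frame-holding idea is cheap to implement; a hedged, engine-agnostic sketch (written as Java for brevity, but the same counter works inside a Unity script's update; all names are made up):

    int ticksPerFrame = 3; // at 24 loop ticks per second this plays 8 sprite frames per second
    int tick = 0;
    int frame = 0;

    void onMainLoopTick(int frameCount) {
        tick++;
        if (tick >= ticksPerFrame) {
            tick = 0;
            frame = (frame + 1) % frameCount; // show the next sprite-sheet cell
        }
    }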
I am trying to write a libgdx live wallpaper (OpenGL ES 2.0) which will display a unique background image (not splittable into sprites).
I want to target tablets, so I need to be able to display at least a 1280x800 background image, on top of which a lot more action will also happen, so I need it to render as fast as possible.
Now I have only basic knowledge both about libgdx and about opengl es, so I do not know what is the best way to approach this.
By googling I found some options:
Split the texture into smaller textures. It seems like GL_MAX_TEXTURE_SIZE on most devices is at least 1024x1024, but I do not want to hit the max, so maybe I can use 512x512. But wouldn't that mean drawing a lot of tiles and rebinding many textures on every frame, i.e. low performance?
libgdx has GraphicsTileMaps, which seems to be the tool to automate drawing tiles. But it also supports many features (mapping info to tiles) that I do not need; maybe it would be better to do the splitting by hand?
Again, the main point here for me is performance, because drawing the background is expected to be the most basic thing; more animation will happen on top of it!
And with tablet screens growing in size, I expect I'll soon need to be able to comfortably render even bigger image sizes :)
Any advice is greatly appreciated! :)
Many tablets (and some cellphones) support 2048x2048 textures. Drawing it in one piece will be the fastest option. If you still need to be 100% sure, you can divide your background into 2 pieces (640x400) whenever GL_MAX_TEXTURE_SIZE happens to be smaller.
'Future' tablets will surely support bigger textures, so don't worry too much about it.
For the actual drawing, just create a libgdx mesh that uses VBOs whenever possible! ;)
Two things you didn't mention will be very important to performance: the texture filter (GL_NEAREST is the ugliest if you don't do a pixel-perfect mapping, but the fastest) and the texture format (RGBA_8888 would be the best and the slowest; you can downgrade it until it suits your needs. At least you can remove the alpha, can't you?).
You can also research on compressed formats which will reduce the fillrate considerably!
I suggest you start coding something, and then tune the performance up. This particular problem you have is not that hard to optimize later.
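As a starting point, here is a minimal libgdx sketch; "background.png" is a placeholder for your 1280x800 image, and it uses SpriteBatch rather than a hand-rolled mesh (SpriteBatch wraps a VBO-backed mesh internally, so for a single quad it is the simplest way to begin):

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.GL20;
    import com.badlogic.gdx.graphics.Pixmap;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;

    public class WallpaperRenderer extends ApplicationAdapter {
        private SpriteBatch batch;
        private Texture background;

        @Override
        public void create() {
            batch = new SpriteBatch();
            // RGB565 drops the alpha channel and halves the memory of RGBA8888
            background = new Texture(Gdx.files.internal("background.png"),
                                     Pixmap.Format.RGB565, false);
            background.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
        }

        @Override
        public void render() {
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            batch.begin();
            // one draw call, the background stretched to the current screen size
            batch.draw(background, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
            batch.end();
        }

        @Override
        public void dispose() {
            batch.dispose();
            background.dispose();
        }
    }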
I'm working on a live wallpaper that incorporates some water ripple effects on touching the screen but I'm a little stuck.
Would it be better to create multiple images and loop through them to create a ripple animation or would it be better to distort the bitmap a bit before I place it on the canvas?
This is a video of a very nice ripple effect done through OpenGL.
I don't have any experience yet with OpenGL and was wondering if it is still possible to create a 2D water effect on the live wallpaper?
I wanted to implement a realistic ripple effect in Android too, so I will share my experience:
As a reference implementation I took Sergey Chikuyonok's JavaScript port of Neil Wallis' Java algo. Here's a playground where you can experiment with the original JS code: http://jsfiddle.net/esteewhy/5Ht3b/6/
At first, I ported the JS code to Java, only to realize that there's no way to squeeze more than 1 fps out of my Huawei U8100 hardware. (There are several similar attempts on the net, all with the same conclusion: they're ridiculously slow.)
BTW, this SO answer was quite useful for getting a basic understanding of how to code interactive graphics in Android: https://stackoverflow.com/a/4946893/35438. I borrowed the fps counter from there.
Then I decided to try the Android NDK to reimplement the original algo in pure C (my first encounter with it in 10+ yrs!). Despite the NDK's docs being somewhat confusing (especially as to requirements and prerequisites), it all worked like a charm, and I was able to achieve up to 30 fps. That might not be too impressive, but it's still a radical improvement over the Java code.
Finally, I've put all my work online: https://github.com/esteewhy/whater , so feel free to play with it. It contains:
Interactive bouncing ball code mentioned above (just for reference).
Water ripples Java port (slow as hell!)
Water ripples C implementation (needs the NDK to compile and the JDK to create the .h file).
(The project is not "clean", i.e. all the binaries are there, so you can try to run it "as is" even without the NDK.)
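For the record, the heart of this style of ripple algorithm is only a few lines; a rough Java sketch of the buffer update (array names and the damping shift are mine, not taken from the repo above):

    // Two height buffers are swapped every frame; each cell becomes the average of its
    // four neighbours in the previous frame minus what it was two frames ago, then damped.
    void rippleStep(int[] prev, int[] curr, int width, int height) {
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                int i = y * width + x;
                int v = ((prev[i - 1] + prev[i + 1] + prev[i - width] + prev[i + width]) >> 1)
                        - curr[i];
                curr[i] = v - (v >> 5); // damping: waves slowly die out
            }
        }
        // the caller swaps prev and curr before the next step; a touch just pokes a
        // large value into prev[] at the touched cell to start a ripple
    }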
You can find an example of a touch ripple effect here:
https://github.com/MasDennis/RajawaliExamples
It utilizes the Rajawali OpenGL ES framework/library. You can download the Rajawali Examples app from the market to see how it looks. Browse through the "src" folder and you will see the TouchRippleEffect activity and renderer. Hope that helps.
I'm no expert in this, but I believe the typical way to do water effects in OpenGL is with a fragment shader. With a static image as a texture, your shader can vary the texture coordinates used for sampling that image, to distort it in arbitrary ways.
Calculate the pixel's direction and distance from the center of the circle, and adjust the texture coordinate toward or away from the circle's center based on a sinusoidal function of the distance, and you should get a nice ripple effect.
Judging by the description of the YouTube video you linked, it sounds like it's done using a grid of triangles and adjusting the texture coordinates only at the vertices. That should work too, but it won't look as good unless you use a rather fine grid. Doing it per pixel with a fragment shader is the ideal, but I don't know whether that would cause performance problems on a phone's GPU.
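To make the sinusoidal displacement concrete, here is the per-pixel math written as plain Java (in practice this would live in a GLSL fragment shader; the wavelength, amplitude and falloff constants are arbitrary):

    // For each pixel, offset its texture coordinate along the direction from the ripple
    // centre by a sine of the distance, so rings of compression and stretching appear.
    float[] rippleTexCoord(float u, float v, float cx, float cy, float time) {
        float dx = u - cx, dy = v - cy;
        float dist = (float) Math.sqrt(dx * dx + dy * dy);
        float wave = (float) Math.sin(dist * 40.0f - time * 6.0f); // rings moving outward
        float amplitude = 0.01f * (float) Math.exp(-dist * 3.0f);  // fade with distance
        float offset = wave * amplitude / Math.max(dist, 1e-4f);   // normalise the direction
        return new float[] { u + dx * offset, v + dy * offset };   // distorted sample position
    }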
This is a big issue that I've been trying to figure out for a long time already.
I'm working on an application that should include a 2D vector indoor map in it.
The map will be drawn from an .svg file that specifies all the data for the lines, curved lines (paths) and rectangles that should be drawn.
My main requirements for the map are:
Support for touch events, to detect exactly where the finger is touching.
Great image quality, especially for the drawing of curved and diagonal lines (anti-aliasing).
Optional, but very nice to have: a built-in ability to zoom, pan and rotate.
So far I have tried AndEngine and Android's Canvas.
With AndEngine I had trouble implementing anti-aliasing for rendering smooth diagonal lines or drawing curved lines, and as far as I understand, this is not an easy thing to implement in AndEngine.
Though I have to mention that AndEngine's ability to zoom in and pan with the camera, instead of modifying the objects on the screen, was really nice to have.
I also have a little experience with Android's built-in Canvas, mainly with viewing simple bitmaps, but I'm not sure whether it supports all of these things, and especially whether it would give smooth results.
Last but not least, there's the option of just plain OpenGL ES 1 or 2, which, as far as I understand, should with enough work be able to support all the features I require. However, it seems like something that would be hard to implement, and I've never programmed in OpenGL or anything like it, though I'm very willing to learn.
To sum it all up, I need a platform that gives me the ability to do the 3 things I mentioned above, but also, very importantly, one that lets me implement this feature as fast as possible.
Any kind of answer or suggestion would be very much welcomed as I'm very eager to solve this problem!
Thanks!