Android Porter-Duff Compositing Performance

I have been unable to find any articles or Google documentation on the relative performance of compositing bitmaps with different Porter-Duff modes. What has become very apparent while programming is that the traditional SRC/DST-prefixed modes perform a lot faster (3-4 times faster) than Mode.DARKEN, Mode.LIGHTEN and Mode.MULTIPLY. Using the latter modes drops my game engine's performance from 40+ FPS to around 13 FPS when rendering a lighting mask on a 720p screen.
My questions are thus:
Is there a faster way for compositing images using the darken/lighten property than the supplied Porter-Duff modes? Would it be worth the switch to OpenGL?
Are there data available on the relative speeds of different compositing modes?

Yes, there are faster ways. For a game engine, switching to OpenGL (or to something higher level like Unity) can be a very good idea. RenderScript is also a good alternative; it already has a built-in blend intrinsic with multiply support.
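A minimal sketch of what the RenderScript route might look like with ScriptIntrinsicBlend (the class and method names below are mine, not from the question; it assumes both bitmaps are ARGB_8888 and the same size):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicBlend;

public final class RsBlend {
    // Multiplies 'light' into 'scene' in place: scene = scene * light.
    static void multiplyInto(Context ctx, Bitmap light, Bitmap scene) {
        RenderScript rs = RenderScript.create(ctx);
        Allocation src = Allocation.createFromBitmap(rs, light);
        Allocation dst = Allocation.createFromBitmap(rs, scene);

        ScriptIntrinsicBlend blend = ScriptIntrinsicBlend.create(rs, Element.U8_4(rs));
        blend.forEachMultiply(src, dst);   // dst = src * dst, run by RenderScript's backend

        dst.copyTo(scene);                 // copy the result back into the bitmap
        rs.destroy();
    }
}
```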
You should probably benchmark these things yourself; there are few published measurements on this kind of topic, and hardware moves fast.

Related

Motion graphics animations in Android

I was wondering what would be the best solution or method to implement motion graphics animations in an Android application.
Check out this website.
https://uxplanet.org/bringing-mobile-apps-to-life-through-motion-9472d259b58e
I want to implement the same type of animations.
How can I do that?
What is the best industry standard to do this?
Should I use After Effects animations and render them into the Android application, or can I achieve this using OpenGL ES? Which would be the faster and more efficient way?
Thanks
Design is no doubt an integral part of any app, so it must be taken into consideration along with the app's functionality. Motion graphics in Android can be implemented in many ways. Android itself provides a rich set of powerful APIs for animating UI elements: the Property Animation, View Animation and Drawable Animation systems. Property Animation is a powerful system that can be used to achieve complex animations of both View and non-View objects. The View and Drawable animation systems are a bit simpler and are meant for simpler animations.
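As a rough illustration, a Property Animation entrance effect might look like this (the view and the concrete values are just placeholders):

```java
import android.animation.AnimatorSet;
import android.animation.ObjectAnimator;
import android.view.View;
import android.view.animation.DecelerateInterpolator;

public final class CardAnimations {
    // Slides a card up while fading it in - a typical "motion graphics" entrance.
    static void enter(View card) {
        ObjectAnimator slide = ObjectAnimator.ofFloat(card, "translationY", 300f, 0f);
        ObjectAnimator fade = ObjectAnimator.ofFloat(card, "alpha", 0f, 1f);

        AnimatorSet set = new AnimatorSet();
        set.playTogether(slide, fade);
        set.setDuration(350);
        set.setInterpolator(new DecelerateInterpolator());
        set.start();
    }
}
```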
For drawing graphics in Android, you can use the Android Canvas or OpenGL ES. OpenGL is an extremely powerful tool for manipulating and displaying high-end animated 3D graphics and can take advantage of the hardware-accelerated GPU. Please look into the Android docs for the exact implementation details.
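And a minimal sketch of Canvas drawing in a custom View (the class and the "radius" property are hypothetical); a Property Animation could drive it by targeting that property:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

// A custom view that draws a pulsing circle; animate "radius" with an ObjectAnimator.
public class PulseView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private float radius = 0f;

    public PulseView(Context context) {
        super(context);
        paint.setColor(Color.CYAN);
    }

    public void setRadius(float radius) {
        this.radius = radius;
        invalidate();   // redraw with the new radius
    }

    public float getRadius() {
        return radius;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, radius, paint);
    }
}
```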
Now considering After Effects animations: they are really neat and fit the design aspects of the app perfectly. However, animations rendered by After Effects tend to be large and end up inflating the final app size. Personally, I have used AE-rendered animations for my splash screen and only a few other animations. Ultimately it depends on your coding abilities and priorities: implementing complex animations with the Android animation systems is harder, whereas AE animations make the app size larger.
Hope this answers your question.

OpenGL ES performance of different blending modes

I need to apply a full-screen, photographic-style vignette effect over the rendered scene. Obviously, I have to use blending to achieve this. I would like to choose the fastest possible blending mode because it will be applied across the entire screen.
Do some blending modes in OpenGL ES work faster than others? Or does every blending mode work at the same fill rate? So far I haven't found any resources on the Internet saying that certain blending modes are slower or faster than others, so I decided to ask this question on SO.
This is for Android app, so I understand that of course this behavior can depend on GPU vendor, but maybe there are some common considerations for faster blending?
The one slow part of blending is reading pixels back from the backbuffer (it doesn't matter whether that's alpha only, RGB, or both). So as long as it's 'real' blending that uses the destination color/alpha (i.e. not a degenerate blend func like glBlendFunc(GL_ONE, GL_ZERO) or glBlendFunc(GL_ZERO, GL_ONE) or similar), there's no performance difference.
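For context, this is the kind of 'real' blend state being discussed - a sketch in GLES 2.0 (the helper class is mine):

```java
import android.opengl.GLES20;

public final class BlendState {
    // Classic "source over" blending: the GPU must read the destination pixel,
    // which is where the cost described above comes from.
    static void enableOverBlending() {
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
    }

    // A degenerate func such as glBlendFunc(GL_ONE, GL_ZERO) never uses the
    // destination value, so it is effectively a plain overwrite.
}
```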
It doesn't matter which blending option you choose; it is going to slow down the fragment shader because it needs to read back the pixel values from the target framebuffer. You can save some cycles by splitting your effect into quads set up around the screen borders and leaving the central part of the framebuffer without overlaid quads. You can also try trickier approaches that use the early fragment discard employed by some tile-based mobile GPUs like the Mali ones, but it may just not be worth the effort.
In short: no, there is probably not a measurably worse blend mode (as long as you are doing "real" blending).
Blending can either be implemented by having a fixed function blend stage, or by adding a short tail to the shader program that will do the actual blending. Another solution is that fixed-function is used for most of the common blend modes, while a shader takes over if there is an uncommon blend mode. If you hit the shader one, your performance might take a hit.
Knowing what is good or bad would be very HW-specific, and might not even be measurable, because the biggest cost is reading and combining the two buffers, not the relatively minor extra shading cost.

Drawing large background image with libgdx - best practices?

I am trying to write a libgdx livewallpaper (OpenGL ES 2.0) which will display a unique background image (non splittable into sprites).
I want to target tablets, so I need to somehow be able to display at least 1280x800 background image on top of which a lot more action will also happen, so I need it to render as fast as possible.
Now I have only basic knowledge both about libgdx and about opengl es, so I do not know what is the best way to approach this.
By googling I found some options:
split texture into smaller textures. It seems like GL_MAX_TEXTURE_SIZE on most devices is at least 1024x1024, but I do not want to hit max, so maybe I can use 512x512, but wouldn't that mean drawing a lot of tiles, rebinding many textures on every frame => low performance?
libgdx has GraphicsTileMaps which seems to be the tool to automate drawing tiles. But it also has support for many features (mapping info to tiles) that I do not need, maybe it would be better to use splitting by hand?
Again, the main point here is performance for me - because drawing background is expected to be the most basic thing, more animation will be on top of it!
And with tablet screens growing in size, I expect I'll soon need to comfortably render even bigger images :)
Any advice is greatly appreciated! :)
Many tablets (and some cellphones) support 2048x2048 textures. Drawing the background in one piece will be the fastest option. If you still need to be 100% sure, you can divide it into 2 pieces (640x800 each) whenever GL_MAX_TEXTURE_SIZE happens to be smaller.
'Future' tablets will surely support bigger textures, so don't worry so much about it.
For the actual drawing, just create a libgdx Mesh, which uses VBOs whenever possible! ;)
Two things you didn't mention will be very important to the performance: the texture filter (GL_NEAREST is the ugliest if you don't do pixel-perfect mapping, but the fastest), and the texture format (RGBA_8888 would be the best-looking and slowest; you can downgrade it until it suits your needs - at least you can remove alpha, can't you?).
You can also research on compressed formats which will reduce the fillrate considerably!
I suggest you start coding something, and then tune the performance up. This particular problem you have is not that hard to optimize later.
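To make the suggestions above concrete, here is a rough sketch of drawing the background in one piece with a SpriteBatch (which manages its own mesh and VBOs internally); the file name and the RGB565 downgrade are assumptions:

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class BackgroundRenderer {
    private final Texture background;
    private final SpriteBatch batch = new SpriteBatch();

    public BackgroundRenderer() {
        // Decode as RGB565 (no alpha) to reduce memory bandwidth.
        background = new Texture(Gdx.files.internal("background.png"),
                                 Pixmap.Format.RGB565, false);
        background.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
    }

    public void render() {
        batch.begin();
        // Stretch the single texture over the whole screen.
        batch.draw(background, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        batch.end();
    }

    public void dispose() {
        background.dispose();
        batch.dispose();
    }
}
```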

Poor performance of Android Canvas.drawBitmap - switch to OpenGL?

I'm porting a 2D action game from Windows Phone 7 (developed in XNA 4.0) over to Android. I'm using a lot of Canvas.drawBitmap() calls - around 200-300 per frame update - with different Paints for each call to handle varying transparency and colourisation at draw-time. This is managing particle systems and various other overlays and in-game effects as well as a tiled background and in-game sprites. I'm not doing any on-demand resizing or rotating, it's simple src->dest rectangles of the same size.
On WP7 this runs at 30+ fps, but I'm struggling to get 12 fps on my test 'droid hardware (Samsung Galaxy S). This is making the game unplayable. Having profiled the code, I've confirmed that all my time is being lost in Canvas.drawBitmap().
I seem to be following all the usual performance advice - using a SurfaceView, mindful of GC so not creating loads of throwaway objects, and avoiding Drawables.
Am I right in understanding that Canvas.drawBitmap() is CPU-bound, and if I want to improve performance I have to switch to OpenGL which will use the GPU? I can't find it stated that baldly anywhere, but reading between the lines of some comments I think that might have to be my next step...?
This is normal. Canvas is amazingly slow when using transparency (like ARGB_8888).
2 options:
Switch to OpenGL ES
Use the least possible transparency on your bitmaps (i.e. use RGB_565 as much as you can).
Perhaps this will run better on Android 3.0+, since it supports hardware acceleration for canvas operations.
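A sketch of option 2, asking BitmapFactory to decode into RGB_565 (the class, method and resource id are placeholders):

```java
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class SpriteLoader {
    // Decodes an opaque sprite sheet as RGB_565, halving the bytes drawBitmap has to move.
    static Bitmap loadOpaque(Resources res, int resId) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inPreferredConfig = Bitmap.Config.RGB_565;   // skip the alpha channel
        return BitmapFactory.decodeResource(res, resId, opts);
    }
}
```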

Differences and advantages of SurfaceView vs GLSurfaceView on Android?

I'm currently playing around with 2D graphics in Android and have been using a plain old SurfaceView to draw Drawables and Bitmaps to the screen. This has been working all right, but there's a little stutter in the sprite movement, and I'm wondering about the feasibility of doing a real-time (but not terribly fast-paced) game with this.
I know GLSurfaceView exists which uses OpenGL, but I'm curious as to the extent to which this makes a difference. Is a plain SurfaceView hardware accelerated, or do I need to use OpenGL? What type of speed difference could I expect from switching to OpenGL, and how much altering of code would it require to switch (the game logic is all in a separate object that provides an ordered array of drawables to the SurfaceView)?
As far as I can tell, you have to use OpenGL to get HW acceleration. But don't take that for granted - wait for other answers ^^
If that really is the case, the speedup should be quite significant. Any 2D application should run at the very least 20 fps (there are generally fewer polygons than in 3D applications).
It would take a substantial amount of code, but 1) as a first attempt, you could try with only one square VBO and change the matrix each time, and 2) your rendering already seems quite encapsulated, so that should simplify things a lot.
SurfaceView is not hardware accelerated by default.
If you want HW acceleration, use GLSurfaceView, which uses OpenGL and is hardware accelerated.
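For reference, the minimum scaffolding to get a hardware-accelerated GLSurfaceView (ES 2.0) on screen - a sketch, with the renderer left as a stub:

```java
import android.app.Activity;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.os.Bundle;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class GameActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GLSurfaceView view = new GLSurfaceView(this);
        view.setEGLContextClientVersion(2);   // request an OpenGL ES 2.0 context
        view.setRenderer(new GLSurfaceView.Renderer() {
            @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                GLES20.glClearColor(0f, 0f, 0f, 1f);
            }
            @Override public void onSurfaceChanged(GL10 gl, int width, int height) {
                GLES20.glViewport(0, 0, width, height);
            }
            @Override public void onDrawFrame(GL10 gl) {
                GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
                // draw sprites here
            }
        });
        setContentView(view);
    }
}
```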
Hardware acceleration is possible for a regular SurfaceView since Android 3.0.
http://developer.android.com/guide/topics/graphics/hardware-accel.html
