I know this question has been asked many times here, but every time it relates to a 2D or 3D game.
I am trying to create an application like Skitch and want to know whether to use OpenGL or to stick with Canvas and SurfaceView.
Which is best to use performance-wise?
Maximum performance = OpenGL
Easy implementation & more than average performance = SurfaceView
I'm not providing any benchmarks here; this is based on my own experience. (Sometimes even OpenGL can have performance problems.)
Remember - KISS
Drawing on Android is itself a herculean task. My requirement is to see how robustly I can draw at least 10 million points with different intensity levels.
Some methods I came across:
Android draws with Canvas and Bitmaps
SurfaceView with OpenGL
Using libGDX, the fastest drawing library
Custom view to refresh & update automatically
What is the best method to go about this? If I need to draw 10 million or more points, possibly over a static image, on Android, how can I do it without degrading performance? Every second I need to refresh and draw another 10 million points. Is that possible, or is Android even capable of such a task?
Since your question states 10 million points per second, I understand that you want them in real time, so OpenGL is the way to go, leaving you with options 2, 3, and 4.
You would definitely need to batch those calls.
You can think about using point sprites to reduce the amount of data you need to transfer to the GPU.
Android as an OS is capable of anything your hardware can support; whether your specific device has performance issues is another matter.
Don't optimize prematurely; try option 3 (libGDX) first. It is the easiest to set up and should achieve your task. If it isn't performant enough, I'd think about rolling my own OpenGL-based solution.
https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size
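As a rough illustration of the batching idea with OpenGL ES 1.1 on Android: the sketch below (the class name and buffer handling are my own, not from the answer) packs all points into one vertex buffer and issues a single draw call. Ten million points would of course need chunking and careful memory handling on a real device.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

// Hypothetical sketch: batch all points into one vertex array, one draw call.
public class PointCloudRenderer {
    private final FloatBuffer vertices;  // interleaved x,y pairs for every point
    private final int pointCount;

    public PointCloudRenderer(float[] xyPairs) {
        pointCount = xyPairs.length / 2;
        vertices = ByteBuffer.allocateDirect(xyPairs.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        vertices.put(xyPairs).position(0);
    }

    // One glDrawArrays call renders the whole batch instead of one call per point.
    public void draw(GL10 gl) {
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertices);
        gl.glPointSize(1f);
        gl.glDrawArrays(GL10.GL_POINTS, 0, pointCount);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    }
}
```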
Background
I'm writing a graphing library for an android application (yes yes, I know there are plenty out there but none that offer the customizability we need).
I want the graphs to be zoomable and pannable.
Problem
I want the experience to be smooth and to leave a small CPU footprint.
Solutions
Use View.onDraw(Canvas)
Use high resolution Bitmap
Use OpenGL
View.onDraw():
Benefits
Somewhat easy to implement
Drawbacks
Bad performance? (Unless it uses OpenGL internally; does it?)
Bitmap:
Benefits
Really easy to implement
Great performance
Drawbacks
Have to use scaling, which is ugly
OpenGL:
Benefits
Probably good performance depending on my implementation
Drawbacks
More work to implement
Final words
OpenGL would probably be the professional solution and would definitely offer more flexibility but it would require more work (how much is unclear).
One thing that is definitely easier in OpenGL is panning/zooming, since I can just manipulate the matrix to get it right; the rest, I think, would be harder.
I'm not afraid to get my hands dirty but I want to know I'm heading in the right direction before I start digging.
Have I missed any solutions? Are all my solutions sane?
Additional notes:
I should add that when a graph changes I want to animate the changes; this will perhaps be the most demanding task of all.
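To make the pan/zoom-by-matrix point concrete: with fixed-function OpenGL ES 1.1 it really is just a matrix manipulation before drawing. The renderer below is a hypothetical sketch (panX, panY, zoom and the graph drawing itself are placeholders), not something from the question.

```java
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;

public class GraphRenderer implements GLSurfaceView.Renderer {
    // Hypothetical pan/zoom state, updated from touch handling elsewhere.
    volatile float panX = 0f, panY = 0f, zoom = 1f;

    @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { }

    @Override public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glOrthof(0, width, height, 0, -1, 1);  // pixel-space 2D projection
    }

    @Override public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();
        gl.glTranslatef(panX, panY, 0f);  // pan: shift the whole graph
        gl.glScalef(zoom, zoom, 1f);      // zoom: uniform scale about the origin
        // ... issue the graph geometry here ...
    }
}
```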
The problem with using Views is that you inherit the overhead of the UI toolkit itself. While the toolkit is pretty well optimized, what it does is not necessarily what you want. The biggest drawback when you want to control your drawing is the invalidate/draw loop.
You can work around this "issue" by using a SurfaceView. A SurfaceView lets you render onto a window using your own rendering thread, thus bypassing the UI toolkit's overhead. And you can still use the Canvas 2D rendering API.
Canvas, however, uses a software rendering pipeline. Your performance will mostly depend on the speed of the CPU and the bandwidth available. In practice, it's rarely as fast as OpenGL. Android 3.0 offers a new hardware pipeline for Canvas, but only when rendering through Views. You cannot at this time use the hardware pipeline to render directly onto a SurfaceView.
I would recommend you give SurfaceView a try first. If you write your code correctly (don't draw more than you need to, redraw only what has changed, etc.) you should be able to achieve the performance you seek. If that doesn't work, go with OpenGL.
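To make that suggestion concrete, here is a minimal sketch of the SurfaceView approach: a dedicated render thread locks the Surface's Canvas, draws, and posts the frame itself, bypassing the View invalidate/draw loop. Class and method names are illustrative.

```java
import android.content.Context;
import android.graphics.Canvas;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class GraphSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
    private Thread renderThread;
    private volatile boolean running;

    public GraphSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override public void surfaceCreated(SurfaceHolder holder) {
        running = true;
        renderThread = new Thread(() -> {
            while (running) {
                Canvas canvas = holder.lockCanvas();
                if (canvas == null) continue;
                try {
                    drawGraph(canvas);            // your Canvas 2D drawing code
                } finally {
                    holder.unlockCanvasAndPost(canvas);
                }
            }
        });
        renderThread.start();
    }

    @Override public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

    @Override public void surfaceDestroyed(SurfaceHolder holder) {
        running = false;
        try { renderThread.join(); } catch (InterruptedException ignored) { }
    }

    private void drawGraph(Canvas canvas) {
        // Placeholder: redraw only what has changed, as the answer advises.
    }
}
```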
The reason I ask is that I've written a rendering scheme for a 2D top-down game that uses an ortho projection. To give the appearance of depth I scale the objects. I'm wondering if switching to a frustum and using the z-coordinate in lieu of scaling will improve performance. I would just implement it and find out, but it would take me several hours, and it's easier to just ask here.
glScale simply modifies the matrix on top of the stack, all done on the CPU. As for "expensive, relatively speaking": relative to what? :) It's not that expensive if you are doing it occasionally. If you are doing it in a critical inner loop, then, yes, it could be expensive.
Almost certainly faster to switch to a 3D view and use glFrustum. It's what the hardware is really built for, and it does it quite well, at least in most cases.
As far as I'm aware, glScale (http://www.opengl.org/sdk/docs/man/xhtml/glScale.xml) is a standard OpenGL function that modifies the matrix, but the spec does not say whether it is done on the CPU or the GPU. My guess is this varies by implementation; it could be done on the GPU, or on the CPU, like any other OpenGL function.
With an ortho projection, 2D geometry won't scale when the z-depth changes. If you're wondering about the cost relative to changing the z-depth, it should be identical: just a matrix transform, though it might look different and might interfere with the clipping planes. Making a function call is always more expensive than not making one, however, so if you can track and use depth at the same time it should be marginally cheaper, with the previously listed caveats. I hope that's not too much word soup.
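For concreteness, the two projection setups being compared look roughly like this in OpenGL ES 1.1. This is a sketch only; the field-of-view and clip-plane values are arbitrary placeholders, not recommendations.

```java
import javax.microedition.khronos.opengles.GL10;

public final class Projections {
    // Orthographic 2D projection: no perspective, so "depth" must be faked by scaling sprites.
    static void setOrtho(GL10 gl, int width, int height) {
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glOrthof(0, width, height, 0, -1, 1);
    }

    // Perspective projection: objects farther along -z appear smaller automatically,
    // so translating on z can replace an explicit glScalef call.
    static void setFrustum(GL10 gl, int width, int height) {
        float aspect = (float) width / height;
        float near = 1f, far = 100f;
        float top = near * (float) Math.tan(Math.toRadians(30));  // ~60 degree vertical FOV
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glFrustumf(-top * aspect, top * aspect, -top, top, near, far);
    }
}
```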
I'm working on a 2D game for android using OpenGL ES 1.1 and I would like to know if this idea is good/bad/useless.
I have a screen divided into 3 sections, so I use scissoring to keep objects in one view from overlapping into another.
I roughly understand the low-level implementation of scissoring, and since my draw calls take up a big part of the computation, I'm looking for ideas to speed things up.
My current idea is as follows:
If I set a glScissor box around each object before I draw it, will it increase the speed of my application?
The idea is that if I set a glScissor box (center ± texture size), the OpenGL pipeline will have fewer tests to do, since it can discard 90-99% of the surface thanks to the scissor.
So, to all the OpenGL experts: is this good, bad, or will it have no impact? And why?
It shouldn't have any impact, IMHO. I'm not an expert, but my thinking is as follows:
The scissor test saves on your GPU's fill rate (the number of fragments/pixels the hardware can put into the framebuffer per second);
If you put a glScissor box around each object, the test won't actually cut off anything: the same number of pixels will be rendered, so no fill rate will be saved.
If you want to optimize your rendering, a good place to start is to make sure you're batching optimally and reducing the number of draw calls and expensive state switches (such as texture switches).
Of course, the correct approach to optimization is to diagnose why your rendering is slow, so the above is just my guess, which may or may not help in your particular situation.
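For reference, the per-section scissoring the question describes is roughly the following in OpenGL ES 1.1. This is a sketch under assumed layout (three equal horizontal strips); scissor coordinates are in window space with the origin at the bottom-left.

```java
import javax.microedition.khronos.opengles.GL10;

public final class SectionedDraw {
    static void drawThreeSections(GL10 gl, int width, int height) {
        gl.glEnable(GL10.GL_SCISSOR_TEST);
        int sectionHeight = height / 3;
        for (int i = 0; i < 3; i++) {
            // Clip everything drawn for this section to its own strip of the screen.
            gl.glScissor(0, i * sectionHeight, width, sectionHeight);
            // ... draw the objects belonging to section i here ...
        }
        gl.glDisable(GL10.GL_SCISSOR_TEST);
    }
}
```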
I'm currently implementing a software keyboard (using some sophisticated prediction), and drawing it with Canvas is insufficient in terms of performance. I'm getting frame drawing times well above 100 ms, which is clearly unacceptable.
The keyboard itself consists of about 33 keys, each of them drawn using drawRoundRect with simple text on top. No widgets whatsoever are used, so this is the plain drawing performance. Also, almost all of Google's performance tips are in use, so that's not the reason for the slowness either.
I've now reached a point where switching to OpenGL would actually make sense, but I'm still sceptical about the impact an OpenGL-based keyboard might have on battery life.
As I've found no sufficient documentation on the topic, I hope someone here can point me in the right direction.
Regardless of how much it drains the battery, you probably don't want to do this because most existing devices don't support multiple OpenGL contexts at the same time, so your soft keyboard would be incompatible with any application that is using OpenGL for its own drawing. On these devices the OpenGL context is owned only by the foreground application; it cannot be used in secondary parts of the UI like the soft keyboard.
Also, as the previous poster said, you would probably be best off looking at how to optimize your regular drawing. Drawing vectors is quite slow, so pre-rendering them into a bitmap and just doing bitmap blits would help a lot. Also be careful to draw only the parts of the window that have changed. 100 ms is a pretty insane amount of time to take to draw the UI, so there are almost certainly significant optimizations you can make. You might want to look at the KeyboardView code in the platform (which is used by the standard soft keyboard and sample IME); it already contains many similar drawing optimizations.
An aside: Have you considered rendering the keys once and then grabbing them as sprites and blitting these? It should be vastly superior to rendering vector graphics.
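A minimal sketch of that pre-rendering idea, assuming plain Canvas/Bitmap APIs (class name, sizes, and corner radius are illustrative): each key is rasterized once into a Bitmap, and onDraw then only blits it.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.RectF;

public final class KeyCache {
    // Renders one key (round rect plus centered label) into a Bitmap, once.
    static Bitmap renderKey(String label, int width, int height, Paint keyPaint, Paint textPaint) {
        Bitmap key = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Canvas c = new Canvas(key);
        c.drawRoundRect(new RectF(0, 0, width, height), 8f, 8f, keyPaint);
        textPaint.setTextAlign(Paint.Align.CENTER);  // center the label horizontally
        float baseline = height / 2f - (textPaint.ascent() + textPaint.descent()) / 2f;
        c.drawText(label, width / 2f, baseline, textPaint);
        return key;
    }
}

// Per frame, the keyboard's onDraw(Canvas) then only needs one blit per key:
//     canvas.drawBitmap(cachedKey, keyLeft, keyTop, null);
```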
I cannot give you hard numbers (and as apphacker pointed out, this is device-specific), but even if OpenGL is hardware-accelerated and hence might use more battery, the operation should complete much faster and so use less power in total.
If it is not hardware-accelerated, it seems logical that it should only use more power if it takes longer to complete the operation, as you are only exchanging one drawing API for another.
All in all, as you only have to draw when external events happen, it shouldn't matter much in the long run, since people are probably typing only a few keys per minute.
You'll probably just have to implement it (maybe in a simplified test case) and make measurements.