(I tried to stuff the question with keywords in case someone else has this issue - I couldn't find much help.)
I have a custom View in Android that contains an LED bargraph that displays levels received via socket communication. It's basically just a clipped image. The higher the level, the less clipped the image is.
When I update the level and then invalidate the View, some devices seem to "collect" multiple updates and render them in chunks. The screen visibly hesitates for say 1/10th of a second, then rapidly paints multiple frames, and then hesitates again. It looks like it's overwhelmed and dropping frames.
However, when changing another UI control on the screen, the LED bargraph paints much more frequently and smoothly. I'm thinking Android is trying to help me by "collecting" multiple invalidations and then doing them all at once. Perhaps by manipulating controls, I'm "increasing" my frame rate simply by giving it "more to do" so it delays less between actual paints.
Unlike animation (with smooth transitions), I want to show the absolute latest value as quickly as possible. My data samples don't arrive faster than 10-20 per second anyway.
Is there an easy way to "force" a paint at certain points, or is this a limit of how Views work? Should I be implementing this in a SurfaceView instead? (I have not played with that yet... want advice first.) Thanks in advance for suggestions.
(Later that same day...)
Update: I found a page in the Docs that does suggest implementing my widget as a SurfaceView is the way to go:
http://developer.android.com/guide/topics/graphics/2d-graphics.html
(An hour after that...)
SurfaceView seems overkill for what I want to do. The best-practice method is to "own" the whole canvas, but I have already developed the rest of my controls and layouts and they work well. It must be possible to get some better performance with what I have, especially since interacting with the UI makes the redraw speed satisfactory.
It turns out SurfaceView was the way to go. I was benchmarking on an older phone, which didn't help. (The frame rate using a standard View was fine on an ASUS eeePad.) I had to throw away some code, but the end result is smoother and faster with SurfaceView. Further, I was able to re-use more code than I expected, and I actually dramatically simplified my multitouch handling code (since everything I want to touch is in the same SurfaceView).
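For reference, a rough sketch of this kind of SurfaceView setup: a dedicated render thread locks the surface's Canvas, paints the latest level, and posts it immediately, so each new sample shows up as soon as possible. drawBargraph and latestLevel are placeholder names, not the actual widget code:

    // imports: android.content.Context, android.graphics.Canvas,
    //          android.view.SurfaceHolder, android.view.SurfaceView
    class BargraphSurface extends SurfaceView implements SurfaceHolder.Callback, Runnable {
        private volatile boolean running;
        private volatile float latestLevel;   // updated from the socket thread
        private Thread renderThread;

        public BargraphSurface(Context context) {
            super(context);
            getHolder().addCallback(this);
        }

        @Override public void surfaceCreated(SurfaceHolder holder) {
            running = true;
            renderThread = new Thread(this);
            renderThread.start();
        }

        @Override public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

        @Override public void surfaceDestroyed(SurfaceHolder holder) {
            running = false;
            try { renderThread.join(); } catch (InterruptedException ignored) { }
        }

        @Override public void run() {
            while (running) {
                Canvas canvas = getHolder().lockCanvas();
                if (canvas == null) continue;
                try {
                    drawBargraph(canvas, latestLevel);   // placeholder: your LED drawing code
                } finally {
                    getHolder().unlockCanvasAndPost(canvas);
                }
            }
        }
    }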
FYI: I'm still only getting about 15fps on Droid X, but half of the CPU load appears to be data packet processing. The eeePad is doing almost 40fps now -- and my data rate is only 20 samples/sec.
So... a win I guess. I want the Droid X to run better, but it flies on a real tablet.
Little prologue: I'm creating a kind of drawing application for Android (API 14 and higher). A few months ago I started working on it and decided to use a SurfaceView as the canvas to draw on. I thought that was a good decision because SurfaceView works directly with the graphics surface. Everything seemed to work fine until one day I noticed that the drawing process is a little bit laggy. There is probably a lot of weird code down there.
Anyway, now I'm optimizing that code, and I wondered: do I really need to use a SurfaceView for such a scenario? The main things I need from my "canvas" are smooth drawing and the ability to save all the drawings to a Bitmap -> file on external storage (this part works fine).
So, should I use a simple View or a SurfaceView? Also, it would be great to hear the pros and cons of your decision/proposition.
Thanks
If you want to use Canvas, you are increasingly better off with a custom View, rather than a SurfaceView. The simple reason is that Canvas rendering on a View can be hardware accelerated, while Canvas rendering on a SurfaceView's surface is always done in software (still true as of Android 5.0).
By drawing "smooth" I assume you want some anti-aliasing effects. Check the chart on the hardware acceleration page to confirm that the effects you want are supported for the Android releases you want to ship on.
As device display pixel counts get steadily higher, software rendering gets increasingly expensive, and on some devices the CPU or bus isn't fast enough to keep frame rates high. Fortunately on these the pixel density is so high that you don't really need anti-aliasing, so even if it's not supported you could ignore it until you generate your software-rendered bitmap.
Before you do anything, though, it would be wise to figure out what your source of sluggishness is. It's possible you're being slowed down by inefficiencies in your drawing code rather than pixel fill rate. Check it with some of the profiling tools.
(See also the graphics architecture doc.)
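To make the custom-View route concrete, here is a minimal sketch (not code from the question): a View that does its Canvas work in onDraw, which goes through the hardware-accelerated path by default on recent releases, and calls invalidate() whenever new strokes arrive.

    // imports: android.content.Context, android.graphics.Canvas,
    //          android.graphics.Paint, android.view.View
    class DrawingView extends View {
        private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

        public DrawingView(Context context) {
            super(context);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            // Canvas calls made here are hardware accelerated when the
            // window/view has acceleration enabled (the default on API 14+).
            canvas.drawLine(0, 0, getWidth(), getHeight(), paint);
        }

        public void onNewStroke() {
            invalidate();   // schedule another onDraw pass with the new data
        }
    }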
After a lot of searching and days of experiments, I haven't found a straightforward solution.
I'm developing an app in which the user interacts with a pet on the screen, and I want to let them save the interaction as a video.
Is there any "simple" way to capture the screen of the app itself?
I found a workaround (saving some bitmaps every second and then passing them to an encoder), but it seems too heavy. I would be happy even with a frame rate of 15fps.
It seems to be possible; there is a similar app that does this, called "Talking Tom".
It really depends on the way you implement your "pet view". Are you drawing on a Canvas? OpenGL ES? The normal Android view hierarchy?
Anyway, there is no magical "recordScreenToVideo()" like one of the comments said.
You need to:
Obtain bitmaps representing your "frames".
This depends on how you implement your view. If you draw yourself (Canvas or OpenGL), then save your raw pixel buffers as frames.
If you use the normal view hierarchy, override Android's onDraw and save the "frames" you get on the canvas (see the sketch after this list). The system calls onDraw no more often than the screen's refresh rate, and only when the view is invalidated, so if needed, duplicate frames afterwards to supply a steady 15fps video.
Encode your frames. Seems like you already have a mechanism to do that, you just need it to be more efficient.
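For the view-hierarchy case mentioned above, a minimal sketch of grabbing one frame as a Bitmap (captureFrame is a made-up helper name, and this assumes the pet view is software-renderable):

    // imports: android.graphics.Bitmap, android.graphics.Canvas, android.view.View
    Bitmap captureFrame(View petView) {
        Bitmap frame = Bitmap.createBitmap(petView.getWidth(), petView.getHeight(),
                Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(frame);
        petView.draw(canvas);   // replays the view's drawing into the off-screen bitmap
        return frame;           // hand this to your encoder
    }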
Ways you can optimize encoding:
Cache your bitmaps (frames) and encode afterwards. This will only work if your expected video is relatively short; otherwise you'll run out of storage.
Record only at the framerate that your app actually generates (depending on the way you draw) and use an encoder parameter to generate a 15fps video (without actually supplying 15 frames per second).
Adjust quality settings to current device. Can be done by performing a hidden CPU cycle test on app startup and defining some thresholds.
Encode only the most relevant portion of the screen.
Again, really depending on the way you implement - if you can save some "history data", and then convert that to frames without having to do it in real time, that would be best.
For example, "move", "smile", "change color" - or whatever your business logic is, since you didn't elaborate on that. Your "generate movie" function will animate this history data as a frame sequence (without drawing to the screen) and then encode.
Hope that helps
In a game we're developing, we need to animate a large sprite of size 485x485, and the animation has about 30 frames. We have a problem animating sprites this large. I have tried some of the solutions listed below, but unfortunately we haven't been able to come up with a working one yet.
Tiling
It looks like putting every frame in one big tile is not an option because:
The texture size needs to be a power of two; otherwise the sprite shows up as black on most devices
When I make the texture size a power of two, it becomes too big for most devices to handle
The recommended maximum texture size in AndEngine seems to be 1024x1024.
Separate sprites
The other option is loading each texture, and thus each frame, separately and putting it in a Sprite (as described here). This works quite well, and toggling the visibility of each sprite at the right time lets the user see the animation.
The problem with this method is that loading the whole animation takes quite some time. This is not visible when the game initially loads because of the loading screen, but later in the game the animation needs to be changed, and the game then needs about 2-3 seconds to load it. Putting up a loading screen is not an option.
Loading on a separate thread
I tried to put the loading of the textures in a separate, newly created thread, but even while the thread loads the textures, the drawing of the game seems to be paused.
Other options?
I don't know of any other options, and it appears no one else has tried to animate a texture larger than 50x50 pixels, because it is very difficult to find anyone with a similar case.
My question is: Is it even properly possible to animate large textures in AndEngine?
I think your problem is going to run up against device limitations, not AndEngine limitations. Designing for mobile, there are few Android devices that could run that.
However, you may be able to come up with an alternative solution using vertex shaders and fragment shaders. This is an important feature of AndEngine GLES2.
Here is an article describing that approach:
http://code.zynga.com/2011/11/mesh-compression-in-dream-zoo/
In my application I need to draw a large network (basically, little boxes connected with lines) and the user will be able to zoom and pan it around.
My first option was to draw the network directly to the canvas, but I thought that was not very efficient, because each time a pan event occurs, the drawing process begins again.
So I tried to use a large mutable bitmap and draw my entire network only once (or at least whenever zoom occurs), and blit the necessary areas to the canvas.
My problem is that, since the network is rather big, I get an OOM exception when creating the bitmap…
What should I do? Draw directly to the canvas? Use several smaller bitmaps?
Thanks,
Direz
My first question to you is: how many sprites do you have going at once? By far, the fastest mechanism for having many sprites on the screen is to use OpenGL due to the hardware acceleration. The best way I have found to do this on Android is to use Cocos2d for Android (not to be confused with the iOS version), which is available on Google Projects. You will have to use the iOS documentation in order to understand it, though, and there are a few decent tutorials to get started with online... in particular, the hello world template here: Www.sketchydroide.com/Blog/p?=8. It is out of date compared to the latest iOS Cocos2d, but that's to be expected. I have found that the programs run MUCH faster when not connected to an active debugger session in my experiments.
If you want to stick with your current approach, or the above is still not fast enough, you are going to have to cull any drawing which does not appear on the screen, meaning a general check of the form "if the sprite's x and y values are outside the bounds of the visible area, don't draw it", which is basically how most tile-based games handle the issue.
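Something along these lines (assuming each sprite exposes its bounding Rect and you keep a Rect for the visible area; getBounds is a placeholder for however you store sprite positions):

    // imports: android.graphics.Rect
    // Skip anything that cannot appear on screen this frame.
    if (Rect.intersects(sprite.getBounds(), visibleRect)) {
        sprite.draw(canvas);
    }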
It sounds like you are doing the drawing manually if you are drawing little squares. I think it is more advisable to go ahead and draw on the canvas, but to be very careful about managing your sprite counts and to avoid heavy for-loop iterations in the frame update loop where possible. It is rather easy to max out your little handset with drawing operations.
Another option might be to draw the entire bitmap once into memory and then use a copy-rectangle operation to transfer the image to the screen without redrawing the full bitmap you have created. I think that copy rect should normally be a fast operation, but if you are using it to draw the whole screen it seems like overkill and probably won't work as well.
You're probably not going to like this, but if all you're doing is drawing boxes and lines, the efficiency of the canvas is going to be pretty dang high. Are you getting UI lag or something?
One thing I have messed around with is drawing collections of subcomponents that won't change much (or at all) into a bitmap, then rendering that bitmap to the canvas; scaling/moving aren't all that expensive if done at the right level, so this can help efficiency. I have tried in the past to create a framework for rendering a tile-like subset of an existing larger image, but did not meet much success. I've made things work, but the code just gets ugly.
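A rough sketch of that caching idea, where drawStaticBoxes and drawDynamicParts are placeholders for your own drawing code:

    // imports: android.content.Context, android.graphics.Bitmap,
    //          android.graphics.Canvas, android.view.View
    class NetworkView extends View {
        private Bitmap cache;                 // static boxes/lines, drawn once
        private float panX, panY, zoom = 1f;

        public NetworkView(Context context) {
            super(context);
        }

        private void rebuildCache(int w, int h) {
            cache = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
            drawStaticBoxes(new Canvas(cache));   // placeholder: the rarely-changing parts
        }

        @Override
        protected void onSizeChanged(int w, int h, int oldw, int oldh) {
            rebuildCache(w, h);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            canvas.save();
            canvas.translate(panX, panY);          // pan/zoom applied to the cached bitmap
            canvas.scale(zoom, zoom);
            canvas.drawBitmap(cache, 0, 0, null);
            canvas.restore();
            drawDynamicParts(canvas);              // placeholder: anything that changes per frame
        }
    }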
Oh, also, a quick test to see whether the component you are rendering is within the screen rectangle can save you a bunch of processor time.
I'm working with my friend on our first Android game. The basic idea is that every frame of the game, the whole surface is redrawn (1 large bitmap) in 2 steps:
The background, a static image (PNG), wipes out the previous frame
Then it is sprinkled all over with a large number of particles, which produces the effect of soapy bubbles. There is a pool of about 20 bitmaps that are picked at random to produce the illusion that all of the bubbles (between 200 and 300) are different. The bubbles' positions are updated on each frame (~50ms), producing the effect of moving bubbles.
The math engine is in C (JNI), and currently all drawing is done using the android.graphics package, very similar to Lunar Lander (since that was the example I was using).
It works, but the animation is somewhat jerky, and I can tell by the temperature of my phone that it is very busy. Will we benefit from switching to OpenGL? And as a bonus question: what would be a good way to optimize the drawing mechanism (Lunar Lander-like) we have now?
Now that I've started to work with OpenGL ES, I would also use it for 2D graphics. This approach is the most flexible, and it's extremely fast (look at this example code; it's about 2D rendering, and there you can see the power of OpenGL).
It's not the easiest thing to start with, but there are some good tutorials out there - for example, this is a very good one.
Don't redraw the entire screen each time. That's what causes your low framerate. Use the invalidate method to mark the areas that have changed each frame.
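A minimal sketch of marking only the dirty region when the bubbles move; Bubble and its bounds methods stand in for your own particle code:

    // imports: android.graphics.Rect
    Rect dirty = new Rect();
    for (Bubble bubble : bubbles) {
        dirty.union(bubble.getPreviousBounds());   // where it was last frame
        dirty.union(bubble.getCurrentBounds());    // where it is now
    }
    bubbleView.invalidate(dirty);   // only this region is redrawn in onDraw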