How to use SurfaceView to draw a single complex image - android

I am creating an Android App that produces random images based on complex mathematical expressions. The color of a pixel depends on its location and the expression chosen. (I have seen many iPhone apps that produce "random art" but relatively few Android apps.)
It takes 5 to 15 seconds for the image to be drawn on a Nexus S dev phone.
To keep the UI thread responsive, SurfaceView seems like the intended class for this. However, almost all the examples of SurfaceView deal with animation, not a single complex image. Once the image is done being drawn/rendered I won't change it until the user asks for a new one.
So, is SurfaceView the right component to use? If so, can I get a callback from the SurfaceView or its internal thread when it is done rendering the image? The callback is so I know it is okay to replace the low-resolution, blocky version of the image with the high-resolution one.
Is there an alternative to SurfaceView that is better suited to rendering single complex images (not animation)?
Cheers!

If all you want to do is render a single complex image on another thread to keep the UI responsive, and then draw it once the rendering is done, you might consider just doing this in the standard keep-work-off-the-UI-thread way, using something like an AsyncTask. It's not like you're doing compositing or anything that is really GPU-specific (unless, as others have suggested, you can offload the actual rendering calculations to the GPU).
I would at least experiment with simply building an array representing your pixels in an AsyncTask and then, when you're done, creating a bitmap from it (with setPixels or Bitmap.createBitmap) and setting that bitmap as the source of an ImageView.
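A minimal sketch of that approach (assuming an Activity with an ImageView field named imageView, and a hypothetical computeColor(x, y) method that evaluates your expression for one pixel; neither name is from the original post):

    private class RenderTask extends AsyncTask<Void, Void, Bitmap> {
        private final int width, height;

        RenderTask(int width, int height) {
            this.width = width;
            this.height = height;
        }

        @Override
        protected Bitmap doInBackground(Void... unused) {
            // Evaluate the expression for every pixel, off the UI thread.
            int[] pixels = new int[width * height];
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    pixels[y * width + x] = computeColor(x, y); // hypothetical
                }
            }
            return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
        }

        @Override
        protected void onPostExecute(Bitmap bitmap) {
            // Runs on the UI thread once rendering is finished; this is
            // effectively your "done rendering" callback.
            imageView.setImageBitmap(bitmap);
        }
    }

Kicking it off with new RenderTask(width, height).execute() lets you show the low-resolution version immediately and swap in the full-resolution bitmap in onPostExecute.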
If, on the other hand, you want your image to appear pixel by pixel, then SurfaceView might be a reasonable choice, in which case it's basically an animation and you can follow the other tutorials. There's some other setup, but the main thing would be to run your own render loop that locks the SurfaceHolder's Canvas (lockCanvas/unlockCanvasAndPost) and draws each pixel with Canvas.drawPoint.
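If you do go with a SurfaceView, a rough sketch of the per-pixel drawing inside your rendering thread might look like this (error handling omitted; holder comes from getHolder(), and computeColor(x, y) is again hypothetical):

    Canvas canvas = holder.lockCanvas();
    if (canvas != null) {
        try {
            Paint paint = new Paint();
            for (int y = 0; y < canvas.getHeight(); y++) {
                for (int x = 0; x < canvas.getWidth(); x++) {
                    paint.setColor(computeColor(x, y)); // hypothetical expression evaluation
                    canvas.drawPoint(x, y, paint);
                }
            }
        } finally {
            // Posting the canvas is what makes the frame visible.
            holder.unlockCanvasAndPost(canvas);
        }
    }

Note that the Surface is double-buffered, so if you want the image to appear progressively you would lock and post repeatedly, redrawing everything rendered so far each time.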

Related

How to cleverly draw a complex shape in Android

I'm working on a university project in which I need to visualize, on a smartphone, data from pressure sensors built into an insole.
I need to draw on a View, as a background, a footprint: something like the image below, but just for one foot.
I don't want to use a static image because with different screen resolutions it could lose too much quality, so I'm trying to do it in code.
The main problem is that I'm not very skilled in graphics programming, so I don't have a smart approach to this problem.
The first and only idea I had was to take the insole's CAD representation, with its real dimensions, scale it according to the screen's dimensions, and put it together using the simple shapes (arcs, circles, etc.) available in Android.
In this way I'm creating a Path which will compose the whole footprint once I draw it with a Canvas.
This method lets me do the work, but it is awful and needs an exceptional amount of work and time to set up every part.
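(For reference, a minimal sketch of the scale-and-draw idea described above; the shapes and the 110x220 CAD bounds are placeholders, not real insole data:)

    // Build the outline once, in CAD units (placeholder shapes).
    Path footprint = new Path();
    footprint.addCircle(50f, 40f, 30f, Path.Direction.CW);        // heel
    footprint.addArc(new RectF(20f, 80f, 90f, 200f), 0f, 360f);   // forefoot

    // Scale the CAD coordinates to the actual view size before drawing.
    Matrix toScreen = new Matrix();
    toScreen.setScale(viewWidth / 110f, viewHeight / 220f);
    footprint.transform(toScreen);

    // In onDraw(Canvas canvas):
    canvas.drawPath(footprint, paint);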
I have searched for some similar questions but I haven't found anything to solve my problem.
Is there (of course there is) a smarter way to do this, saving time and energy?
Thank you
Of course, you can always use OpenGL ES.
This method might not save you time and energy, but it will give you better graphics and room to improve things later.
The concept is to create a float buffer with all the points of your footprint outline and then draw it as connected lines on a SurfaceView.
To get started with OpenGL ES you can use this guide: https://developer.android.com/guide/topics/graphics/opengl.html
And here is an example: https://developer.android.com/training/graphics/opengl/index.html
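A rough sketch of that idea using the older GLES 1.x fixed-function API (the outline coordinates are placeholders, imports from java.nio, javax.microedition.khronos and android.opengl are omitted, and a GLES 2.0 version would add a small shader program):

    public class FootprintRenderer implements GLSurfaceView.Renderer {
        // Placeholder outline points (x, y pairs), in normalized [-1, 1] coordinates.
        private final float[] outline = {
            -0.5f, -0.8f,   -0.6f, 0.0f,   -0.3f, 0.8f,
             0.3f,  0.8f,    0.2f, 0.0f,    0.5f, -0.8f
        };
        private FloatBuffer vertexBuffer;

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            // Pack the points into a native-order float buffer, as the tutorials show.
            vertexBuffer = ByteBuffer.allocateDirect(outline.length * 4)
                    .order(ByteOrder.nativeOrder())
                    .asFloatBuffer();
            vertexBuffer.put(outline).position(0);
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            gl.glViewport(0, 0, width, height);
        }

        @Override
        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
            // GL_LINE_LOOP connects consecutive points and closes the outline.
            gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, outline.length / 2);
        }
    }

You would attach it to a GLSurfaceView with setRenderer(new FootprintRenderer()).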

Using Multiple SurfaceViews For Optimization?

The Situation
I started developing for Android, and found that Android's way of handling layouts, animations etc. is not adequate for smooth touch feedback and real-time animations, especially before Android 4.0. So instead, I decided to use the game app approach: use a SurfaceView and define my own drawing code.
The Problem
After a few tests, I discovered that this method required too much CPU for a non-game app, which I believe is due to redrawing static elements 60 times per second.
The First Solution & Flaws
To solve this issue, I modified my code so that the app would redraw the screen (call postInvalidate) only if there were any changes to what should be drawn. This solution solved part of the issue, but the app still had to redraw static elements even if a small button moved a single pixel.
The Question: Possible Better Solution?
For a better solution, I considered how Android deals with the problem: it has separate Views for every screen element. So I thought, maybe I could have one SurfaceView for large, static content elements and another for small, moving UI elements, and achieve a similar effect. My question is: would this actually improve performance the way I described above?
Thanks.
If you're using postInvalidate(), you should be using a custom View, not a SurfaceView. The whole point of using a SurfaceView is to have a separate layer that is independent of the View UI. If you're overriding onDraw(), you're drawing on the View part, not the Surface part, and just wasting the Surface.
All Views occupy a single layer, no matter how many you have. Each SurfaceView has a separate layer, so having a lot of them will become problematic. In practice you can have no more than three, because of Z-ordering limitations. (See the "multi-surface test" activity in Grafika for an example of three partially transparent SurfaceViews blended with the View UI.)
If you can't render fast enough to maintain 60 fps, you need to consider changing the way you render. Custom Views and OpenGL ES take advantage of hardware acceleration. Canvas rendering onto a SurfaceView Surface does not. On the plus side, you can down-size a SurfaceView's Surface and let the hardware scale it back up; this lets you limit the number of pixels you have to draw each frame, regardless of the display's resolution. (Blog, demo.) If you have a lot of static elements, the best approach may be to render to an off-screen Bitmap, and then just blit the Bitmap every frame.
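A minimal sketch of the off-screen Bitmap approach, assuming a Canvas-on-SurfaceView render loop (drawStaticElements, drawAnimatedElements, surfaceWidth and surfaceHeight are illustrative names):

    // Done once, or whenever the static layout actually changes.
    Bitmap staticLayer = Bitmap.createBitmap(surfaceWidth, surfaceHeight, Bitmap.Config.ARGB_8888);
    Canvas staticCanvas = new Canvas(staticLayer);
    drawStaticElements(staticCanvas);   // all the expensive, unchanging drawing

    // Done every frame in the render loop.
    Canvas canvas = holder.lockCanvas();
    if (canvas != null) {
        canvas.drawBitmap(staticLayer, 0, 0, null);   // cheap blit of the cached background
        drawAnimatedElements(canvas);                 // only the moving parts
        holder.unlockCanvasAndPost(canvas);
    }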
One approach that will be very fast is to render all of the static elements onto the View part of the SurfaceView, taking care to keep the background transparent, and then render the animated parts on the Surface with GLES. You could use a second SurfaceView, but that adds an additional composition layer, which will degrade system performance if you exceed the number of overlay planes supported by the hardware.
For a deeper understanding of the way Android graphics work, take a look at the graphics architecture doc.

AS3 rendering bitmaps in GPU mode

Flash Pro CC, AS3, AIR for Android (v17), rendering mode GPU, stage quality LOW, FPS: 60, testing device: an old Nexus One smartphone (Android 2.3.3).
The guides say that GPU mode makes rendering bitmaps cheap; somehow I can't grasp exactly how it works.
So I have 49 separate bitmap squares covering the stage and one MovieClip in the middle with a bitmap inside, tweened to move up and down (a jumping ball). Pretty simple, right?
This is the view: http://i.stack.imgur.com/EKcJ6.png
All graphics are bitmaps (not vectors). Yet I get 55 fps (it varies around 53-57).
Then I select all 49 squares and put them inside a symbol (MovieClip). Visually nothing changes. It seems to increase the FPS a tiny bit; the average fps is now ~57 (55-59).
Then I take the MovieClip (with all the squares inside it) and set cacheAsBitmap=true. Voila, now I have 60 fps!
What is happening in these 3 different cases? Why do I need to put the bitmaps into one MovieClip and cache that MovieClip as a bitmap - aren't the squares already bitmaps?
I have also tried making each square a MovieClip and caching it as a bitmap, but I still got 55 fps.
Is it possible to keep the squares separate at 60 fps?
In my real project I have many MovieClips on the stage (~100), but in most cases only one of them is animated at a time. Yet somehow it seems that the mere presence of the other MovieClips reduces performance (fps). Obviously, I cannot put them all into one MC and cache it as a bitmap as in the simplified example above.
How can I solve this? What should I do?
Thanks!
I think it relates to this best practice recommendation:
Limit the number of items visible on stage. Each item takes some time to render and composite with the other items around it. When you no longer want to display a display object, set its visible property to false. Do not simply move it off stage, hide it behind another object, or set its alpha property to 0. If the display object is no longer needed at all, remove it from the stage with removeChild().
By putting all the bitmaps in a single container and setting cacheAsBitmap=true you are essentially turning them into a single bitmap as far as the compositor is concerned. This tends to be faster to composite than multiple individual bitmaps. Setting cacheAsBitmap=true on an individual bitmap (or on a container holding just one bitmap) has no effect, because it is already a bitmap.
Also note that GPU mode isn't recommended anymore, it was Adobe's first attempt at GPU accelerating the display and they basically gave up on that path in favor of the new Stage3D rendering pipeline. While GPU render mode can work really well when used just right, it can be somewhat unpredictable and confusing, so I would highly recommend you check out Stage3D.
Hope that helps.

scaling images in libgdx only once

In my Android game, I am using images of a fixed resolution, let's say 256x256. For different device screens, I render them by calculating their width and height as appropriate for that device.
Assume that on a Galaxy Note 2 I calculated width=128 and height=128; similarly, width and height will vary for different devices.
This is how I created the texture:

    ...
    imageTexture = new Texture(...);
    ...

and in render():

    ...
    spriteBatch.draw(imageTexture, x, y, width, height);
    ...
So, every time I call the draw() method, does libgdx/OpenGL scale the image from 256x256 to 128x128? I think yes!
Is there any way to tell OpenGL/libgdx to calculate all the scaling only once?
I have no idea how images are rendered, loaded into memory, scaled, etc.
How does Sprite in libgdx work? I tried reading the code of Sprite, and it looks to me like it also takes the image width and height and then scales it on every draw, even though it has a setScale() method.
First rule of optimizing: get some numbers. Premature optimization is the root of many problems. That said, there are still some good rules of thumb to know.
The texture data will be uploaded by libgdx/OpenGL to the GPU when you invoke new Texture. When you actually draw the texture with spriteBatch.draw, instructions are uploaded to the GPU that tell the hardware to use your existing texture and to fit it to the bounds. The draw call just uploads coordinates (the corners of the box that defines the Sprite) and a pointer to the texture; the actual texture data is not uploaded again.
So, in practice your image is "scaled" on every frame. However, this is not that bad, as this is exactly what GPUs are designed to do very, very well. You only really need to worry about uploading so many textures that the GPU has trouble keeping track of them all, you do not need to worry much about scaling the textures beforehand.
The costs of scaling and transforming the four corners of the sprite are trivial next to the cost of sending the data to the GPU and the cost of refreshing the screen, so they are probably not worth worrying about too much. The "batch" in SpriteBatch is all about "batching up" (gathering together) a lot of coordinates to send to the GPU at once, because each call out to the GPU can be expensive. So it's always good to do as much work within a single batch's begin/end as you can.
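As a concrete libgdx illustration of the above, each draw() just queues the corner coordinates at whatever width/height you pass, and the whole batch is flushed to the GPU once per begin()/end(); the second texture and its position here are hypothetical extras:

    @Override
    public void render() {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        spriteBatch.begin();
        // Each draw() only records vertices (position + size); the 256x256 texture
        // already lives on the GPU and is simply sampled at the requested size.
        spriteBatch.draw(imageTexture, x, y, width, height);
        spriteBatch.draw(otherTexture, x2, y2, width, height);  // hypothetical second image
        spriteBatch.end();  // one flush to the GPU for everything drawn above
    }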
Again, though, modern machines are stupidly fast, and you should be able to do whatever is easiest to get your app running first. Then once you have something working correctly, you can figure out which parts are actually slow and fix those. The parts that are "inefficient" but are not actually measurably impacting your application can be left alone.

Android View or SurfaceView, which should I use?

I've been trying to make a scrollable/zoomable app, and everything has gone great except for drawing bitmaps. It is a very large image (6656 by 4096) that I have split into tiles. There is a rectangle array that the bitmaps are drawn to, and it detects which rectangle is in the top-left corner so it can draw the bitmaps that will cover the user's viewable screen. My problem is that this all lags when the app has to load the bitmaps into memory; once they are loaded it isn't an issue. I started with 512 by 512 tiles, then went down to 128 by 128; although it helped, there is still some noticeable lag. I have been looking into SurfaceView and wanted your opinions on whether I should stick with View or use SurfaceView to solve my lag.
If you derive your own SurfaceView, you get several advantages.
Mainly, you can have all drawing logic in a separate thread. This means that the UI won't wait for you (I'm assuming the lag is because the UI thread is being blocked?).
A SurfaceView's Surface is also drawn and composited independently of the View hierarchy, which tends to make it faster for frequent redraws.
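A rough skeleton of that separate-thread setup (class and method names are illustrative):

    public class TileView extends SurfaceView implements SurfaceHolder.Callback {
        private Thread renderThread;
        private volatile boolean running;

        public TileView(Context context) {
            super(context);
            getHolder().addCallback(this);
        }

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            running = true;
            renderThread = new Thread(() -> {
                while (running) {
                    Canvas canvas = holder.lockCanvas();
                    if (canvas == null) continue;
                    try {
                        drawVisibleTiles(canvas);   // illustrative: draw only the on-screen tiles
                    } finally {
                        holder.unlockCanvasAndPost(canvas);
                    }
                    // A real app would wait here until something changes instead of spinning.
                }
            });
            renderThread.start();
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            running = false;
            try {
                renderThread.join();   // make sure we stop drawing before the Surface goes away
            } catch (InterruptedException ignored) { }
        }
    }

Keep in mind that a separate render thread does not by itself remove the cost of decoding the tile bitmaps; if the decoding is what causes the lag, that work still needs to be moved off the drawing path (or the tiles pre-decoded).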
I also find this overview on developer.android.com to be a good reference for choosing a drawing method.
