I am creating a FrameLayout to which I am adding two views: a GLSurfaceView and a SurfaceView. According to the Android Developers documentation for SurfaceView,
"The surface is Z ordered so that it is behind the window holding its SurfaceView; the SurfaceView punches a hole in its window to allow its surface to be displayed."
This works well for me and the SurfaceView always stays behind my GLSurfaceView (used for OpenGL drawing). But when resuming after an external event, the behavior is odd for the following configuration:
Android Version: 4.3
Device Model Number: Nexus 7
Kernel Version: 3.4.0.g1f57c39 (Jun 13)
Build Number: JWR66N
For this configuration, resuming after an external event puts my GLSurfaceView behind the SurfaceView. In other words, the SurfaceView is placed on top in the Z order and my OpenGL drawings are no longer visible. On versions newer than Android 4.3, this behavior is not seen.
I can replicate this behavior on all versions by calling the SurfaceView's setZOrderOnTop() method with true as the parameter.
Is this a known issue? Can anybody help me with this?
Regards,
Sumedh
SurfaceViews have two parts, the Surface and the View. The Surface is a completely independent layer. The View is there so the UI layout code has something to work with. Generally the View is just transparent black, so you can see through to whatever is behind it.
GLSurfaceView is just SurfaceView with some code to manage EGL contexts and threading. Underneath it's just a SurfaceView. So if you have both a SurfaceView and a GLSurfaceView, and they have the same dimensions and Z-order, then one of them is going to "win" and the other is going to "lose" because they're trying to occupy the same space at the same time. There is no defined value for which one will "win", so inconsistent behavior is expected.
One way to avoid clashes is to leave one set to the default Z, and call setZOrderMediaOverlay() on the other. The "media overlay" is still behind the UI, but above the default Surface position. If you use setZOrderOnTop(), the Surface will be positioned above the UI as well.
The upper Surface will need to be rendered with transparent pixels if you want to see something behind it (the same way that the View needs to be transparent to see the Surface).
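For example, a minimal sketch of that arrangement might look like the following (the layout and view ids are assumptions, and the GL renderer setup is omitted): leave the plain SurfaceView at the default Z and mark the GLSurfaceView as a media overlay.

import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.view.SurfaceView;

public class DualSurfaceActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.dual_surface);  // hypothetical FrameLayout holding both views

        SurfaceView plainSurfaceView = (SurfaceView) findViewById(R.id.plain_surface);
        GLSurfaceView glSurfaceView = (GLSurfaceView) findViewById(R.id.gl_surface);

        // plainSurfaceView keeps the default Z position (behind the window).
        // The GL Surface goes in the "media overlay" layer: above the default
        // Surface position, but still behind the window's View UI.
        glSurfaceView.setZOrderMediaOverlay(true);
    }
}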
The most efficient way to avoid this issue is to not have this issue: use one SurfaceView for everything, rendering all of your non-UI-element content to it. This requires a bit more work (and probably a SurfaceTexture) if you're rendering video or showing a camera preview on one of the Surfaces.
You can find some examples in Grafika. The "multi-surface exerciser" demonstrates three overlapping SurfaceViews rendered in software, overlapping with UI elements. Other activities show ways to work with Surfaces, GLES, the camera, and video.
See also the Android System-Level Graphics Architecture doc, which explains all this in much greater detail.
Don't call setZOrderOnTop(true). That will put the Surface above all the other layouts.
If you are using multiple SurfaceViews, call this for each SurfaceView:
yourSurfaceView.setZOrderMediaOverlay(true);
Then call setZOrderOnTop(false) on the SurfaceView you created later and want to move back behind the other SurfaceViews:
secondSurfaceview.setZOrderOnTop(false);
The Situation
I started developing for Android, and found that Android's way of handling layouts, animations etc. is not adequate for smooth touch feedback and real-time animations, especially before Android 4.0. So instead, I decided to use the game app approach: use a SurfaceView and define my own drawing code.
The Problem
After a few tests, I discovered that this method required too much CPU for a non-game app, which I believe is due to redrawing static elements 60 times per second.
The First Solution & Flaws
To solve this issue, I modified my code so that the app would redraw the screen (call postInvalidate) only if there were any changes to what should be drawn. This solution solved part of the issue, but the app still had to redraw static elements even if a small button moved a single pixel.
The Question: Possible Better Solution?
For a better solution, I considered how Android dealt with the problem: it has separate Views for every screen element. So I thought, maybe I could have one SurfaceView for large, static content elements and another for small, moving UI elements and achieve a similar effect. My question is, would this actually improve performance the way I described it above?
Thanks.
If you're using postInvalidate(), you should be using a custom View, not a SurfaceView. The whole point of using a SurfaceView is to have a separate layer that is independent of the View UI. If you're overriding onDraw(), you're drawing on the View part, not the Surface part, and just wasting the Surface.
All Views occupy a single layer, no matter how many you have. Each SurfaceView has a separate layer, so having a lot of them will become problematic. In practice you can have no more than three, because of Z-ordering limitations. (See the "multi-surface test" activity in Grafika for an example of three partially transparent SurfaceViews blended with the View UI.)
If you can't render fast enough to maintain 60 fps, you need to consider changing the way you render. Custom Views and OpenGL ES take advantage of hardware acceleration. Canvas rendering onto a SurfaceView Surface does not. On the plus side, you can down-size a SurfaceView's Surface and let the hardware scale it back up; this lets you limit the number of pixels you have to draw each frame, regardless of the display's resolution. (Blog, demo.) If you have a lot of static elements, the best approach may be to render to an off-screen Bitmap, and then just blit the Bitmap every frame.
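For the static-element case, a rough sketch of the off-screen Bitmap approach could look like this (drawStaticElements() and drawAnimatedElements() are hypothetical placeholders for your own drawing code):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.SurfaceHolder;

public class FrameDrawer {
    private Bitmap staticLayer;  // cached off-screen layer, rebuilt only when static content changes

    public void drawFrame(SurfaceHolder holder) {
        Canvas canvas = holder.lockCanvas();
        if (canvas == null) {
            return;  // surface not ready yet
        }
        try {
            if (staticLayer == null) {
                staticLayer = Bitmap.createBitmap(canvas.getWidth(), canvas.getHeight(),
                        Bitmap.Config.ARGB_8888);
                drawStaticElements(new Canvas(staticLayer));  // expensive, done once
            }
            canvas.drawBitmap(staticLayer, 0, 0, null);       // cheap blit every frame
            drawAnimatedElements(canvas);                     // only the moving bits
        } finally {
            holder.unlockCanvasAndPost(canvas);
        }
    }

    private void drawStaticElements(Canvas canvas) { /* hypothetical */ }
    private void drawAnimatedElements(Canvas canvas) { /* hypothetical */ }
}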
One approach that will be very fast is to render all of the static elements onto the View part of the SurfaceView, taking care to keep the background transparent, and then render the animated parts on the Surface with GLES. You could use a second SurfaceView, but that adds an additional composition layer, which will degrade system performance if you exceed the number of overlay planes supported by the hardware.
For a deeper understanding of the way Android graphics work, take a look at the graphics architecture doc.
I have a special design requirement for the app I'm developing right now.
Right now, I have a third-party private video library which plays a video stream. The design of this screen includes a translucent panel overlaid on top of the video, blurring the portion of the video that lies behind it.
Normally in order to blur the background, you are supposed to take a screenshot of the view behind, blur it and use it as an image for the foreground view.
In this case, the video keeps on playing, so the blurred image changes every frame. How would you implement this then?
A possible solution would be to create a thread that takes screenshots, crops them, and sets them as the background. Even better if that view is a SurfaceView, I guess. But I'm wondering what the best approach would be in this case. Would a thread that is continually taking screenshots create a huge performance impact? Is it possible to feed a SurfaceView's buffer with these images?
Thanks!
A SurfaceView surface is a consumer of graphics buffers. You can't have two producers for one consumer, which means you can't send the video to it and draw on it at the same time.
You can have multiple layers; the SurfaceView surface is on a separate layer behind the View UI layer. So you could play the video to the SurfaceView's surface, and draw your blur rectangle on the SurfaceView's view. (Normally the SurfaceView's view is completely transparent, and is just used as a place-holder for layout purposes.)
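As a minimal sketch of that first option (the class name is made up, and a flat translucent panel stands in for a real blur, since the View part can't sample the video pixels behind it):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.SurfaceView;

public class PanelSurfaceView extends SurfaceView {
    private final Paint panelPaint = new Paint();

    public PanelSurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        setWillNotDraw(false);            // SurfaceView skips onDraw() by default
        panelPaint.setColor(0x80000000);  // translucent black panel
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Everything outside this rectangle stays transparent, so the video
        // playing on the Surface behind remains fully visible there.
        canvas.drawRect(0, getHeight() / 2f, getWidth(), getHeight(), panelPaint);
    }
}

The video would then be sent to this view's Surface (via getHolder()), exactly as before.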
Another option would be to render the video frame to a SurfaceTexture. You would then render that texture to the SurfaceView surface with GLES, and render the blur rectangle on top. You can find an example of treating live camera input as a GLES texture in Grafika ("texture from camera" activity). This has the additional advantage that, since you're not interacting with the View system -- the SurfaceView surface is composited by the system, not the app -- you can do it all on an independent thread.
In any event, rendering, grabbing a screenshot, and re-rendering is going to be slower than the options described above.
For more details about why things work the way they do, see the Android System-Level Graphics architecture doc.
I have a VideoView. This view is contained inside a custom FrameLayout called VideoStructure, where I can also put a channel logo or the like.
Under normal conditions, the video is hardware accelerated, so the view is (I suppose) really a transparent "black hole", while the video is decoded and rendered by the relevant hardware.
My question is: if I override draw() in the VideoView's container (the VideoStructure that extends FrameLayout in the image) to draw some stuff (i.e. the circle in the image) OVER the video (I'm overriding draw(), not onDraw()), will this break the hardware acceleration? Can I expect a big performance hit for doing this?
It should have no effect on performance.
SurfaceViews have two parts, the "view" part, and the "surface" part. The "view" part is a transparent hole that fits in with the other views, the "surface" part is a completely independent layer that is composited with the view layer by the system. The video is being sent to the "surface" part.
If you override SurfaceView's "view" renderer, you'll get a hardware-accelerated Canvas for a View that is normally completely transparent (so if you erase it, you better use an alpha of zero and the correct transfer mode).
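For instance, erasing the View part while keeping the hole open would look something like this (Color and PorterDuff are from android.graphics):

canvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);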
If you attempt to render on the "surface" part, by getting a Canvas from lockCanvas(), you will either fail (because the video effectively has it locked), or succeed and prevent video from being written to it.
The system compositor is going to have to blend the "view" and "surface" layers no matter what appears in the "view" layer, so making a few more pixels opaque isn't going to have a measurable impact.
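Concretely, a sketch of the override the question describes could look like this (the circle size and color are just placeholders):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.widget.FrameLayout;

public class VideoStructure extends FrameLayout {
    private final Paint circlePaint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public VideoStructure(Context context, AttributeSet attrs) {
        super(context, attrs);
        setWillNotDraw(false);  // make sure draw() is actually called on this ViewGroup
        circlePaint.setColor(Color.RED);
    }

    @Override
    public void draw(Canvas canvas) {
        super.draw(canvas);  // draws the children, including the VideoView's transparent hole
        // This lands on the View layer; the compositor blends that layer with the
        // video Surface anyway, so hardware-accelerated playback is unaffected.
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, 100f, circlePaint);
    }
}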
Update: see the graphics architecture doc for more details on Surfaces and composition.
I have two Activities which use OpenGL for drawing. At a transition from one Activity to the next I get an unsightly empty screen filled with my OpenGL clear colour (so it's not as bad as a black screen).
I want to transition seamlessly between Activities, but there are several high-load regions when a GLSurfaceView is created. The main issue is texture loading, as this is the slowest part.
Is there any way to double-buffer between Activities so that the last Activity's view is frozen until I explicitly tell my next Activity to draw? I want the transitions to be seamless.
Moving everything into one GLSurfaceView instance isn't really an option I want to consider.
You can use setRenderMode(RENDERMODE_WHEN_DIRTY) on your GLSurfaceView, so the surface will only be redrawn when you call requestRender().
This way, anything you drew before bringing up the other surface view will only be cleared when you request a new draw.
You can go back to continuous rendering by setting the render mode to RENDERMODE_CONTINUOUSLY.
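In code, assuming a glSurfaceView field whose renderer has already been set, that amounts to something like:

glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);

// ... later, whenever the scene actually changes:
glSurfaceView.requestRender();

// switch back to per-frame updates when you need them again:
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);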
This is hard to do on Android 2.x because of its OpenGL ES support. Also, using two OpenGL surfaces in one application is not recommended if you render continuously; if you do, you will need RENDERMODE_WHEN_DIRTY to control them easily.
On Android 4.x, TextureView is another option.
A TextureView works much like a GLSurfaceView but behaves as a regular View, which means you can use View animations on it.
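As a small illustration (the view id and duration are assumptions, using android.view.animation.AlphaAnimation):

TextureView textureView = (TextureView) findViewById(R.id.gl_texture_view);
AlphaAnimation fade = new AlphaAnimation(1.0f, 0.0f);
fade.setDuration(300);
textureView.startAnimation(fade);  // works because TextureView lives in the View hierarchy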
We've noticed that when you put Android views with view animation (nothing complex, just AlphaAnimation and TranslateAnimation) on top of a GLSurfaceView, the animation runs slowly (i.e. you see a lot of stuttering.) I am calling pause() on the GLSurfaceView, and I believe I've confirmed (through setting breakpoints) that the GL draw calls are not getting hit while the animation is playing, so I'm not sure where the slowness is coming from.
Does anyone know of a way around this? I know that on iPhone this also used to be a problem, but there was some OS update they made to fix the issue. They are short view animations (e.g. You Win!) so it's not the worst thing in the world, but it would be nice if there was some workaround.
The reason we are not doing the animations in GL is that they have to be able to run from any Activity in our game, and not all of our Activities have GLSurfaceViews.
Finally, if it matters, we are using the modified GLSurfaceView source from Replica Island http://code.google.com/p/replicaisland/
Drawing on top of a GLSurfaceView is slow, and therefore so is animating. You are forcing the framework to do more work to determine which part of the surface view is visible.
If you are using a surface view, you should really consider doing these animations inside it.
An alternative is to put the animation in a small window above your activity.
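One sketch of the "small window" idea (the layout and id names are assumptions): show the animated view in a translucent Dialog, so it is drawn in its own window instead of the window that holds the GLSurfaceView.

Dialog overlay = new Dialog(this, android.R.style.Theme_Translucent_NoTitleBar);
overlay.setContentView(R.layout.win_banner);        // hypothetical layout with the "You Win!" text
overlay.show();

View banner = overlay.findViewById(R.id.win_text);  // hypothetical id
banner.startAnimation(AnimationUtils.loadAnimation(this, android.R.anim.fade_in));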