Can anyone explain to me what an Android SurfaceView is? I have been through the Android developer web site and read about it, and I still can't understand it.
Why or when is it used in Android application development? A good example would help, if possible.
Thank you
An Android SurfaceView is an object associated with a window (but sitting behind the window), through which you can directly manipulate a canvas and draw whatever you like.
What is interesting about the implementation of SurfaceView is that although it lies BEHIND the window, as long as it has any content to show, the Android framework will make the corresponding pixels of that window transparent, thus making the surface visible.
It is most often used for building a game or a browser, where you want a graphics renderer to calculate pixels for you while you also use Java code to control the normal app logic.
If you are new to normal Android programming, chances are you do not need to know too much about it.
For further information, see this and the official documentation.
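To make the idea concrete, here is a minimal sketch of a SurfaceView that draws to its canvas from its own thread; the class and field names are mine, not from any official sample:

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;

    // Minimal SurfaceView that renders from its own thread (names are placeholders).
    public class DrawingSurfaceView extends SurfaceView implements SurfaceHolder.Callback {

        private Thread renderThread;
        private volatile boolean running;

        public DrawingSurfaceView(Context context) {
            super(context);
            getHolder().addCallback(this); // get told when the surface is ready
        }

        @Override
        public void surfaceCreated(final SurfaceHolder holder) {
            running = true;
            renderThread = new Thread(() -> {
                while (running) {
                    Canvas canvas = holder.lockCanvas();        // lock the surface's canvas
                    if (canvas == null) continue;
                    try {
                        canvas.drawColor(Color.BLACK);          // draw whatever you like here
                    } finally {
                        holder.unlockCanvasAndPost(canvas);     // post the frame to the screen
                    }
                }
            });
            renderThread.start();
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            running = false;
            try {
                renderThread.join();  // stop drawing before the surface goes away
            } catch (InterruptedException ignored) { }
        }
    }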
A custom View or a SurfaceView comes into the picture when you need a custom design in your Android layout instead of using the existing widgets that Android provides.
The main difference between a View and a SurfaceView is the drawing thread: a View is drawn on the UI thread, while a SurfaceView can be drawn on a separate thread.
Therefore a SurfaceView is more appropriate when the UI needs to be updated rapidly or rendering takes too much time (e.g. animations, video playback, camera preview, etc.).
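For contrast, this is a rough sketch (my own example, not from the answer) of the plain-View approach, where drawing and the invalidate() calls all happen on the UI thread:

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.view.View;

    // A plain custom View animated by invalidate(); all drawing happens on the UI thread.
    public class PulsingView extends View {

        private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        private float radius = 10f;

        public PulsingView(Context context) {
            super(context);
            paint.setColor(Color.RED);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, radius, paint);
            radius = (radius + 2f) % 100f;
            invalidate(); // schedule the next frame; heavy work here would block input handling
        }
    }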
I have some confusion and would appreciate comments on this. I was assuming that WebView creates a separate surface to draw into and does not use the default surface of the activity. But in the surfaceflinger dump, I don't see a new surface getting created when using a WebView.
When I do a similar experiment using a VideoView, I do see a separate surface getting created.
On the WebView I also wanted to play a video, so I was assuming a separate surface would be created and thereby the surface resolution would match the video resolution. But if it uses the application's surface, then the maximum resolution of the video has to be the UI resolution.
In the Chromium code I see code for a separate surface, but in practice I could not see one getting created.
Can someone help me clarify this?
Thank You.
If you look at the VideoView inheritance graph you'll notice that it inherits from SurfaceView, while WebView does not, so WebView could only achieve that by creating an external SurfaceView.
If you search for usages of ExternalVideoSurface in the WebView part of the Chromium code, you will notice that it is only enabled if "video hole" is enabled, which is intended to be used only for decoding encrypted videos, where WebView needs to do "hole punching". There is a system-API-level setting in WebView that enables this behaviour, but it has its own limitations and is thus not recommended for general use.
I am also curious why WebView does not show up in the surfaceflinger dump.
I think the reason is that WebView also renders to the related activity's native window, so there is no separate surface in this situation.
But the situation seems to differ in the latest Android and WebView versions, depending on developer options.
I would like to write an application for Android which displays stuff on screen using the framebuffer. This will run only on a specific rooted device, so permissions etc. are not a problem. The same application (a simple test version, anyway) is already running okay on PC/Linux.
The questions:
How do I keep the Android OS from accessing the framebuffer? While my application is running, I would like the OS to never touch the framebuffer: no writes and no ioctls. What do I need to do to get exclusive use of the framebuffer, and then (when my application quits) give it back to the OS?
Are there any differences between Android framebuffer and Linux framebuffer to watch out for?
P.S. I would like to start my application as a regular Android application (with some native code); it just has no visible UI except for the framebuffer draws, which take over the whole screen. It would be nice to still be able to get events from the OS.
See also:
http://www.kandroid.org/online-pdk/guide/display_drivers.html
Hi Alex, I am not sure why or how to stop the Android OS from writing to the framebuffer. As long as your Android application is visible and on top, you have control over what is displayed.
Your application should have an activity with a SurfaceView. (If you want your application to hide the title bar, call requestWindowFeature(Window.FEATURE_NO_TITLE); in onCreate() of your activity.)
Your activity should implement SurfaceHolder.Callback to handle the callbacks telling you when the surface is ready to be filled with the framebuffer. Get the surface holder object with SurfaceView.getHolder() in case you want to set the pixel format of the view, etc.
Once "surfaceCreated" callback is called you can safely pass your surfaceview object(passing width and height maybe a good idea too) to the native so that you can fill it framebuffer using "ANativeWindow" class.
Check the NDK sample code to see how to use it: NDK documentation
SurfaceHolder.Callback documentation
SurfaceHolder documentation
Essentially you need to do these (on JB/KitKat):
Get the native window (ANativeWindow) associated with the SurfaceView's Surface with ANativeWindow_fromSurface.
Acquire a reference on the ANativeWindow with ANativeWindow_acquire.
Set the buffer geometry (window, width, height, pixel format) for the native window with ANativeWindow_setBuffersGeometry.
Lock the native window's buffer with ANativeWindow_lock and copy in the stored framebuffer contents (apply a dirty rectangle here, if any).
As the final step, unlock and post the changes for rendering with ANativeWindow_unlockAndPost.
Go through the NDK sample examples in case you need sample code: NDK documentation
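For reference, here is a sketch of the Java side only. The native method name (nativeDrawFramebuffer) and library name (fbrenderer) are placeholders for your own NDK code that performs the ANativeWindow_* steps above:

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.Surface;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;
    import android.view.Window;

    public class FramebufferActivity extends Activity implements SurfaceHolder.Callback {

        static {
            System.loadLibrary("fbrenderer"); // placeholder library name
        }

        // Implemented in your own NDK code using ANativeWindow_fromSurface,
        // ANativeWindow_setBuffersGeometry, ANativeWindow_lock and ANativeWindow_unlockAndPost.
        private native void nativeDrawFramebuffer(Surface surface, int width, int height);

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            requestWindowFeature(Window.FEATURE_NO_TITLE); // hide the title bar
            SurfaceView surfaceView = new SurfaceView(this);
            surfaceView.getHolder().addCallback(this);
            setContentView(surfaceView);
        }

        @Override
        public void surfaceCreated(SurfaceHolder holder) { }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
            // The surface is ready; hand it to native code together with its size.
            nativeDrawFramebuffer(holder.getSurface(), width, height);
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) { }
    }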
Is there any tradeoff in adding some OpenGL to a "serious" (non-game) Android app?
The reason why I want to use OpenGL, is to add some 3d behaviour to a few views.
According to this http://developer.android.com/guide/topics/graphics/opengl.html OpenGL ES 1.0 is available on every Android device and doesn't require modification of the manifest file, so there will never be compatibility issues.
The only two things I can think of are: 1. maintainability by other developers who don't know OpenGL, and possibly 2. integration problems with other components / poor reusability (although I'm not sure about that).
Is there also anything else, unexpected things, overhead of some sort, complications, etc.?
I'm asking because it doesn't seem to be a very popular practice; people seem to prefer to "fake" the 3D with 2D or give it up. I don't know if that's only because they don't want to learn OpenGL.
I use OpenGL for some visualization in a released app, and I have an uncaught exception handler in place to catch any exception coming from the GLThread and disable OpenGL the next time the app runs, since I had crash reports from the internals of GLSurfaceView.java coming in from buggier devices. If the 3D rendering is not crucial to your app, this is one approach you can take so that users with these devices can continue to use the app.
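A rough sketch of that approach, with my own class and preference names; note that matching the thread by the "GLThread" name prefix is an assumption about GLSurfaceView internals, not a documented contract:

    import android.content.Context;
    import android.content.SharedPreferences;

    public final class GlCrashGuard {

        private static final String PREFS = "gl_crash_guard";
        private static final String KEY_GL_DISABLED = "gl_disabled";

        // Call once, e.g. in Application.onCreate(), before creating the GLSurfaceView.
        public static void install(Context context) {
            final SharedPreferences prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
            final Thread.UncaughtExceptionHandler previous = Thread.getDefaultUncaughtExceptionHandler();
            Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
                // Assumption: GLSurfaceView names its internal thread "GLThread <id>".
                if (thread.getName().startsWith("GLThread")) {
                    prefs.edit().putBoolean(KEY_GL_DISABLED, true).commit(); // flag GL as broken
                }
                if (previous != null) {
                    previous.uncaughtException(thread, throwable); // keep normal crash reporting
                }
            });
        }

        // Check this on the next launch and fall back to a non-GL view if true.
        public static boolean isGlDisabled(Context context) {
            return context.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
                    .getBoolean(KEY_GL_DISABLED, false);
        }
    }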
From Android 3.0+ you can also preserve the EGL context by calling GLSurfaceView.setPreserveEGLContextOnPause(true). You'll only really need to do this if your renderer is very expensive to initialize, and it only works if you're not destroying the GLSurfaceView in between (i.e. the default behavior of an activity when rotating the device). If you're not loading that many resources then initializing OpenGL is usually fast enough.
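For completeness, a minimal wiring sketch; the glSurfaceView field and MyRenderer class are placeholders, and this assumes API level 11+ where the method exists:

    glSurfaceView.setEGLContextClientVersion(2);       // request an ES 2.0 context
    glSurfaceView.setPreserveEGLContextOnPause(true);  // keep the EGL context across onPause/onResume
    glSurfaceView.setRenderer(new MyRenderer());       // MyRenderer is a placeholder GLSurfaceView.Renderer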
From the SurfaceView docs (emphasis mine):
The surface is Z ordered so that it is behind the window holding its SurfaceView; the SurfaceView punches a hole in its window to allow its surface to be displayed. The view hierarchy will take care of correctly compositing with the Surface any siblings of the SurfaceView that would normally appear on top of it. This can be used to place overlays such as buttons on top of the Surface, though note however that it can have an impact on performance since a full alpha-blended composite will be performed each time the Surface changes.
The advantage is that your GL thread can update the screen independently of the UI thread (i.e. it doesn't need to render to a texture and render the texture to the screen); the disadvantage is that something needs to composite your view with the screen. If you're lucky, this can be done in the "hardware composer"; otherwise it is done on the GPU and may be a bit wasteful of GPU resources (see For Butter or Worse: Smoothing Out Performance in Android UIs at 27:32 and 40:23).
If your view is small, it may be better to use a TextureView. This will render to a texture and render the texture as part of the normal view hierarchy which might be better, but can increase latency. The downside is it's only available since API level 14.
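If it helps, here is a bare-bones sketch of the TextureView route (class and method bodies are my own placeholders, assuming API level 14+):

    import android.content.Context;
    import android.graphics.SurfaceTexture;
    import android.view.TextureView;

    // A TextureView lives in the normal view hierarchy, so it can be moved,
    // scaled and animated like any other view.
    public class VideoTextureView extends TextureView implements TextureView.SurfaceTextureListener {

        public VideoTextureView(Context context) {
            super(context);
            setSurfaceTextureListener(this);
        }

        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int width, int height) {
            // Hand the SurfaceTexture to a producer here, e.g. wrap it in a
            // new Surface(surfaceTexture) for a MediaPlayer, or render to it from a GL thread.
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture surfaceTexture, int width, int height) { }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture surfaceTexture) {
            return true; // let the TextureView release the SurfaceTexture
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture) { }
    }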
I am wondering if it is possible to use Android RenderScript to manipulate activity windows. For example, is it possible to implement something like a 3D carousel, with an activity running in every window?
I have been researching this for a long time, and all the examples I found are for manipulating bitmaps on the screen. If that is true and RenderScript is only meant for images, then what is used in SPB Shell 3D, or are those panels not actual activities?
It does not appear to me that those are activities. To answer your question directly: to the best of my knowledge there is no way to do what you want with RenderScript, as the nature of how it works prevents it from controlling activities. The control actually works the other way around... You could, however, build a series of fragments containing RenderScript surface views, but the processing load of that would be horrific, to say the least. I am also unsure how you would take those fragments or activities and then draw a carousel.
What I *think* they are doing is using RenderScript or OpenGL to draw a carousel and then placing the icon images where they need to be. But I have never made a home screen app, so I could be, and likely am, mistaken in that regard.
I wrote the same program two ways.
One uses a SurfaceView, and the other uses a custom View. According to the Android SDK development guide, using a SurfaceView is better because you can spawn a separate thread to handle the graphics. The SDK development guide claims that using a custom view with invalidate() calls is only good for slower animations and less intense graphics.
However, in my simple app, I can clearly see that using a custom view with calls to invalidate() seems to render faster.
What do you guys know/think about this?
My touch-event code is exactly the same, and my drawing code is exactly the same. The only difference is that one runs entirely on the UI thread, and the other uses a separate thread to handle the drawing.
A SurfaceView lets you work with two buffers (double buffering) for drawing; how about your custom view?
Another thing: you mentioned that the docs say invalidate() works fine for slower animations and less intense graphics. How intense is your "simple app"? You should run a stress test and also take into account how the single thread handles your touch input.
I have three threads in my game: one for game logic, one for drawing, and then the "normal" UI thread...