I am developing an Android app using the new Google Cast Remote Display API:
https://developers.google.com/cast/docs/remote
I have used libgdx for rendering my objects on the device's screen.
That worked fine.
Now I want to add functionality to my app: render some other objects to a second view that I can pass to the layout of the remote screen.
I tried to follow the remote display example: first I created an AndroidGraphics instance and passed it to setRenderer(), since AndroidGraphics implements the Renderer interface:
setContentView(R.layout.first_screen_layout);
firstScreenSurfaceView = (GLSurfaceView) findViewById(R.id.surface_view);
// Create an OpenGL ES 2.0 context.
firstScreenSurfaceView.setEGLContextClientVersion(2);
// Allow UI elements above this surface; used for text overlay
firstScreenSurfaceView.setZOrderMediaOverlay(true);
firstScreenSurfaceView.setRenderer((AndroidGraphics) mGraphics);
If I run this alone, it works fine: the objects are rendered by libgdx to the remote screen. But if I start my activity that renders via libgdx and also start the renderer in the service as described above, one screen freezes at startup while the other (in my case the remote screen, a TV connected via Chromecast) renders the view correctly.
My question is: is it possible to render to two views at the same time with the libgdx Android backend, or do they share resources in a way that makes this impossible?
If I render my activity on the device via libgdx and run the CubeRender at the same time, both work well simultaneously. So I suspect the problem lies in libgdx and shared resources.
Related
I'm running automated tests on an Android app for which I don't have the source code, but I know the UI is a 2D interface made with OpenGL. The problem is that getting screen dumps through uiautomator, monitor, or the layout inspector doesn't work like it would with other activities: there are no IDs or objects, just a blank screen.
So I'm issuing input clicks by position to navigate the UI, but the layout changes often, which makes the code unreliable. I have tried using gapid, but I don't know if it can help me click on screen elements. Is there any way to analyze the screen and get IDs or positions, or anything else that will help me automate navigating an OpenGL UI on Android?
OpenGL has no information about the geometry you draw: it cannot tell whether something is an image or a button, and two buttons may even be drawn in a single batch.
To solve your problem you have to provide information about the controls to your automated tests yourself. You can try creating invisible controls (Button or TextView) on top of your OpenGL scene (of course, for the debug configuration only), so that you can query their positions as usual, as sketched below.
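A minimal sketch of that idea, assuming you can ship a debug-only overlay in the app under test (the marker name, size, and coordinates are hypothetical):

import android.opengl.GLSurfaceView;
import android.view.View;
import android.widget.FrameLayout;

// In the activity's onCreate: stack invisible markers over the GL scene.
FrameLayout root = new FrameLayout(this);
GLSurfaceView glView = new GLSurfaceView(this);
root.addView(glView);
if (BuildConfig.DEBUG) { // debug configuration only
    View playMarker = new View(this);
    playMarker.setContentDescription("btn_play"); // query this from uiautomator
    playMarker.setAlpha(0f); // invisible on screen
    // Not clickable, so taps fall through to the GL surface beneath;
    // the bounds must mirror where the GL scene draws its button.
    FrameLayout.LayoutParams lp = new FrameLayout.LayoutParams(200, 120);
    lp.leftMargin = 40;
    lp.topMargin = 600;
    root.addView(playMarker, lp);
}
setContentView(root);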
Can anyone explain to me what an Android SurfaceView is? I have been through the Android developer site and read about it, and I still can't understand it.
Why, or when, is it used in Android application development? A good example would help, if possible.
Thank you
An Android SurfaceView is an object associated with a window (but sitting behind the window), through which you can directly manipulate the canvas and draw whatever you like.
What is interesting about the implementation of SurfaceView is that although it lies BEHIND the window, as long as it has any content to show, the Android framework makes the corresponding pixels of that window transparent, thus making the surface view visible.
It is most often used for building a game or a browser, where you want a graphics renderer to calculate pixels for you while you use Java code to control the normal app logic.
If you are new to Android programming, chances are you do not need to know too much about it.
For further information, see the official documentation.
A custom View or SurfaceView comes into the picture when you need a custom design in your Android layout instead of the existing widgets Android provides.
The main difference between View and SurfaceView is the drawing thread: a View is drawn on the UI thread, while a SurfaceView can be drawn on a separate thread.
Therefore a SurfaceView is more appropriate when the UI needs to be updated rapidly or rendering takes too much time (e.g. animations, video playback, camera preview, etc.), as in the sketch below.
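A minimal sketch of that pattern, assuming the drawing is done with a plain Canvas (class and field names are illustrative):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class DrawingSurface extends SurfaceView implements SurfaceHolder.Callback {
    private Thread renderThread;
    private volatile boolean running;

    public DrawingSurface(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(final SurfaceHolder holder) {
        running = true;
        renderThread = new Thread(new Runnable() {
            @Override
            public void run() {
                while (running) {
                    Canvas canvas = holder.lockCanvas(); // null once the surface is gone
                    if (canvas == null) continue;
                    try {
                        canvas.drawColor(Color.BLACK); // draw the frame here
                    } finally {
                        holder.unlockCanvasAndPost(canvas);
                    }
                }
            }
        });
        renderThread.start(); // rendering now happens off the UI thread
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        running = false; // stop the loop before the surface disappears
        try { renderThread.join(); } catch (InterruptedException ignored) { }
    }
}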
I have an OpenGL ES app that works on both iOS and Android. Most of the code was written ages ago by another person and now I have to maintain it. The OpenGL usage seems fairly simple (the game is 2D and uses only textured sprites in a simple manner), but I see two major differences in the graphics code between the iOS and Android versions:
1) The iOS code contains this:
glGenFramebuffersOES(1, &m_defaultFramebuffer);
glGenRenderbuffersOES(1, &m_colorRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_defaultFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, m_colorRenderbuffer);
and the Android code does not.
2) When the Android app goes to the background, all OpenGL textures are destroyed (glDeleteTextures) and EGL is shut down using eglTerminate. When the app returns from sleep, EGL is re-initialized and the textures are re-created.
The iOS code does not do these things; it just pauses the rendering loop by calling [m_displayLink setPaused:YES];
The rest of the OpenGL-related code is the same on iOS and Android.
Everything works well on both platforms, but I want a full understanding of what's going on. Can anybody explain the rationale behind these two differences?
1)
This is just a difference in the APIs. On iOS, you create your own framebuffer to render into when the app starts. On Android, the framebuffer is created automatically by GLSurfaceView, so the app doesn't need to create its own.
2)
On iOS, when your app goes to the background, the OpenGL context is preserved, which means all your textures and buffers are still there when the app returns to the foreground.
Older versions of Android had only a single OpenGL context, so it was destroyed whenever your app went to the background (so that other apps could make use of it).
Later versions of Android can behave more like iOS if you call setPreserveEGLContextOnPause. However, for this to work the Android version has to be 3.x or above (API level 11) and the device must support it as well.
When that option is not used or not supported, the app must delete and re-create all of its OpenGL resources when moving between background and foreground, which is what your app appears to be doing.
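A minimal sketch of enabling that option (myRenderer stands in for your own Renderer implementation):

import android.opengl.GLSurfaceView;
import android.os.Build;

GLSurfaceView view = new GLSurfaceView(this);
view.setEGLContextClientVersion(2);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) { // API 11+
    // Best effort only: the device may still discard the context.
    view.setPreserveEGLContextOnPause(true);
}
view.setRenderer(myRenderer);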
I would like to write an application for Android that displays things on screen using the framebuffer. It will run only on a specific rooted device, so permissions etc. are not a problem. The same application (a simple test version, anyway) is already running fine on PC/Linux.
The questions:
How do I keep the Android OS from accessing the framebuffer? While my application is running, I would like the OS to never touch the framebuffer: no writes and no ioctls. What do I need to do to get exclusive use of the framebuffer, and then (when my application quits) give it back to the OS?
Are there any differences between Android framebuffer and Linux framebuffer to watch out for?
P.S. I would like to start my application as a regular Android application (with some native code); it just has no visible UI apart from the framebuffer draws, which take over the whole screen. It would be nice to still be able to get events from the OS.
See also:
http://www.kandroid.org/online-pdk/guide/display_drivers.html
Hi Alex. I am not sure why or how you would stop the Android OS from writing to the framebuffer, but as long as your application is visible and on top, you control what is displayed.
Your application should have an activity with a SurfaceView. (If you want your application to hide the title bar, call this function in the onCreate of your activity:
requestWindowFeature(Window.FEATURE_NO_TITLE);)
Your activity should implement SurfaceHolder.Callback to handle the callbacks for when the surface is ready to be filled with the framebuffer. Get the SurfaceHolder object via SurfaceView.getHolder() in case you want to set the pixel format of the view, etc.
Once the surfaceCreated callback has been called, you can safely pass your SurfaceView object (passing the width and height may be a good idea too) to the native side, where you can fill its framebuffer using the ANativeWindow class; a Java-side sketch follows the links below.
Check the NDK sample code to see how to use the class: NDK documentation
SurfaceHolder.Callback documentation
SurfaceHolder documentation
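A minimal Java-side sketch of that setup (the native method and library names are hypothetical):

import android.app.Activity;
import android.os.Bundle;
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.Window;

public class FullScreenActivity extends Activity implements SurfaceHolder.Callback {
    static { System.loadLibrary("myrenderer"); } // hypothetical .so name
    private native void nativeSetSurface(Surface surface);

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE); // hide the title bar
        SurfaceView view = new SurfaceView(this);
        view.getHolder().addCallback(this);
        setContentView(view);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // The native side obtains an ANativeWindow via ANativeWindow_fromSurface.
        nativeSetSurface(holder.getSurface());
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        nativeSetSurface(null); // native side must release its ANativeWindow
    }
}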
Essentially you need to do the following (on JB/KitKat):
1. Get the native window (ANativeWindow) associated with the SurfaceView via ANativeWindow_fromSurface.
2. Acquire a reference on the ANativeWindow via ANativeWindow_acquire.
3. Set the geometry parameters (window, width, height, pixel format) for the native window via ANativeWindow_setBuffersGeometry.
4. Fill the native window with the stored framebuffer (applying a dirty rectangle here, if any) via ANativeWindow_lock.
5. Finally, unlock and post the changes for rendering via ANativeWindow_unlockAndPost.
Go through the NDK sample examples in case you need sample code: NDK documentation
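A minimal native-side sketch of these steps (error handling omitted; render_frame and the RGBA source buffer are illustrative):

#include <jni.h>
#include <stdint.h>
#include <string.h>
#include <android/native_window.h>
#include <android/native_window_jni.h>

void render_frame(JNIEnv *env, jobject surface,
                  const uint8_t *pixels, int width, int height) {
    // Steps 1-2: ANativeWindow_fromSurface already returns an acquired
    // reference; call ANativeWindow_acquire only if you cache the pointer.
    ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
    // Step 3: match the buffer geometry to the source framebuffer.
    ANativeWindow_setBuffersGeometry(window, width, height,
                                     WINDOW_FORMAT_RGBA_8888);
    // Step 4: lock a buffer and copy the framebuffer into it
    // (NULL dirty rect means the whole surface is dirty).
    ANativeWindow_Buffer buffer;
    if (ANativeWindow_lock(window, &buffer, NULL) == 0) {
        for (int y = 0; y < height; ++y) {
            // buffer.stride is in pixels, 4 bytes each for RGBA_8888.
            memcpy((uint8_t *)buffer.bits + y * buffer.stride * 4,
                   pixels + y * width * 4,
                   width * 4);
        }
        // Step 5: unlock and post the buffer for display.
        ANativeWindow_unlockAndPost(window);
    }
    ANativeWindow_release(window); // balances the reference from fromSurface
}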
I'm porting an iOS game to Android.
I'm using the GL2JNI example from the latest NDK as a template, and inside GL2JNIActivity.java's onCreate I have added mView.setPreserveEGLContextOnPause(true); to preserve the EGL context, but as soon as the device orientation changes, the screen goes all black.
I've researched this, and some suggest reloading all textures, shaders, geometry, etc., but that is not an option for me (too slow to reload).
Is there any way to handle multiple device orientations with an OpenGL ES 2 view on Android, like on iOS? If so, how can it be done without reloading everything? I've seen Unity games doing it... so it must be possible, right?
TIA!
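Not from the thread above, but one common cause of exactly this symptom: by default Android recreates the Activity on rotation, which destroys the GLSurfaceView and its EGL context regardless of setPreserveEGLContextOnPause. A sketch of the usual workaround, assuming the GL2JNI sample's class names:

// In AndroidManifest.xml, declare rotation as self-handled so the Activity
// (and with it the GL context) survives the orientation change:
//   <activity android:name=".GL2JNIActivity"
//             android:configChanges="orientation|screenSize|keyboardHidden" ... />

// GL2JNIActivity.java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    mView = new GL2JNIView(getApplication());
    mView.setPreserveEGLContextOnPause(true); // keep textures across pauses
    setContentView(mView);
}

@Override
public void onConfigurationChanged(android.content.res.Configuration newConfig) {
    super.onConfigurationChanged(newConfig);
    // The surface is resized in place; GLSurfaceView reports the new size
    // through Renderer.onSurfaceChanged without losing the context.
}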