Manipulating an Android activity window using RenderScript? - android

I am wondering if it is possible to use Android RenderScript to manipulate activity windows. For example, is it possible to implement something like a 3D carousel, with an activity running in every window?
I have been researching this for a long time, and all the examples I found only manipulate bitmaps on the screen. If that is the case, and RenderScript is only meant for images, then what is used in SPB Shell 3D, or aren't those panels actual activities?

It does not appear to me that those are activities. To answer your question directly: to the best of my knowledge there is no way to do what you want with RenderScript, as the nature of how it works prevents it from controlling activities. The control actually works the other way around. You could build a series of fragments containing RenderScript surface views, but the processing load of this would be horrific to say the least, and I am unsure how you would take those fragments or activities and then draw a carousel.
What I *think* they are doing is using RenderScript or OpenGL to draw a carousel and then placing the icon images where they need to be. But I have never made a home screen app, so I could be, and likely am, mistaken in that regard.
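For illustration only, here is a minimal sketch of the layering guessed at above: a GLSurfaceView that would draw the carousel geometry, with an ordinary icon view placed on top at a projected slot position. Everything here (the class name, the empty renderer, the hard-coded coordinates) is a hypothetical construction, not anything taken from SPB Shell 3D:

```java
import android.app.Activity;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.widget.FrameLayout;
import android.widget.ImageView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class CarouselActivity extends Activity {
    @Override protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        FrameLayout root = new FrameLayout(this);

        // Layer 1: a GL surface that would draw the rotating carousel geometry.
        GLSurfaceView carousel = new GLSurfaceView(this);
        carousel.setEGLContextClientVersion(2);
        carousel.setRenderer(new GLSurfaceView.Renderer() {
            @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { }
            @Override public void onSurfaceChanged(GL10 gl, int w, int h) {
                GLES20.glViewport(0, 0, w, h);
            }
            @Override public void onDrawFrame(GL10 gl) {
                GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
                // ... draw the carousel panels here ...
            }
        });
        root.addView(carousel);

        // Layer 2: ordinary views for the icons, placed where the projected
        // 3D slots land (in a real app, positions would track the rotation).
        ImageView icon = new ImageView(this);
        icon.setX(120f); // hypothetical slot position
        icon.setY(300f);
        root.addView(icon);

        setContentView(root);
    }
}
```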

Related

Android testing - how to click on an OpenGL resource in the UI?

I'm running automated tests on an Android app that I don't have the source code for, but I know the UI is a 2D interface made with OpenGL. The problem is that getting screen dumps through uiautomator, Monitor, or Layout Inspector doesn't work like it would with other activities: there are no ids or objects, just a blank screen.
So I'm running input clicks by position to navigate the UI, but it changes often, which makes the code unreliable. I have tried using GAPID, but I don't know if it can help me click on screen elements. Is there any way to analyze the screen and get ids or positions, or anything else that would help me automate navigating an OpenGL UI on Android?
OpenGL has no information about the geometry you draw; it could be an image or a button, and you can draw two buttons in one batch.
To solve your problem you have to provide information about the controls to your automated tests. You can try creating invisible controls (Button or TextView) on top of your OpenGL scene (of course, for debug configurations only). Then you can query ids and positions as usual.
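A minimal sketch of that workaround, assuming you control the app's source: a transparent probe Button layered over the content view so uiautomator can match on an id or content description. The helper name, coordinates, and the "play_button" description are illustrative:

```java
import android.app.Activity;
import android.view.View;
import android.widget.Button;
import android.widget.FrameLayout;

public final class TestProbes {
    // Call after setContentView(); the activity already shows the GL scene.
    public static void addProbe(Activity activity, int x, int y, int w, int h,
                                String description) {
        Button probe = new Button(activity);
        probe.setId(View.generateViewId());        // stable handle for the test
        probe.setAlpha(0f);                        // invisible, but still hit-testable
        probe.setContentDescription(description);  // what uiautomator matches on
        FrameLayout.LayoutParams lp = new FrameLayout.LayoutParams(w, h);
        lp.leftMargin = x; // align with the GL-drawn control underneath
        lp.topMargin = y;
        probe.setOnClickListener(v -> { /* forward the tap into the GL scene */ });
        activity.addContentView(probe, lp);
    }
}

// Usage (debug builds only), e.g. over a GL-drawn play button:
// TestProbes.addProbe(this, 120, 480, 200, 200, "play_button");
```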

How to properly rotate an Android device with a GLSurfaceView while keeping the OpenGL context and GL thread?

I have a simple Android application that renders data with our OpenGL rendering SDK into an Android GLSurfaceView. Since we provide an SDK for others to use, we need to support all use cases for GLSurfaceViews. Currently we need to be able to rotate the device while recreating all the Android views and keeping the OpenGL context alive. This originates from a customer needing different layouts in portrait and landscape mode.
The normal way of going about this would be:
1. Add android:configChanges="orientation|screenSize" to your Activity in AndroidManifest.xml and you will be fine.
This will not work in this case, as it does not recreate the views on rotation, so we cannot have different layouts in portrait and landscape mode.
2. Call GLSurfaceView.onPause() and GLSurfaceView.onResume() from the Activity.
While this is considered good practice, it is not enough in this use case, as the OpenGL context is destroyed when doing this. Note that we are still doing this; it just doesn't solve our issue.
3. Use an EGLContextFactory to preserve the OpenGL context while rotating.
This is possible and useful, as described in, for example, this answer. It feels like a hack, but it definitely works. The idea is simply to create an EGLContext when you don't have one and to reuse the existing one when you do (see the sketch just after this list).
The main problem we face when using this hack is that the render thread is destroyed and recreated when the GLSurfaceView is detached from and reattached to the view hierarchy. Looking at the GLSurfaceView implementation, this seems to be by design.
In our SDK we have some thread-local storage tied to the render thread, so suddenly getting a new render thread is not really desirable. We could probably reset some state when the render thread changes, but we want to investigate whether there are better ways of doing this.
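For reference, here is a minimal sketch of the EGLContextFactory hack from point 3, assuming a GLES 2 context; the class name and the static field caching the context are illustrative, not part of the Android API:

```java
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;

public class PreservingContextFactory implements GLSurfaceView.EGLContextFactory {
    private static final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
    // Held across GLSurfaceView re-creation; cleared only on a real teardown.
    private static EGLContext savedContext;

    @Override
    public EGLContext createContext(EGL10 egl, EGLDisplay display, EGLConfig config) {
        if (savedContext != null) {
            return savedContext; // reuse the context that survived rotation
        }
        int[] attribs = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE };
        savedContext = egl.eglCreateContext(display, config,
                EGL10.EGL_NO_CONTEXT, attribs);
        return savedContext;
    }

    @Override
    public void destroyContext(EGL10 egl, EGLDisplay display, EGLContext context) {
        // Deliberately keep the context alive across detach/reattach.
        // Destroy it for real only when the app is actually finishing.
    }
}
```

It would be installed with glSurfaceView.setEGLContextFactory(new PreservingContextFactory()) before setRenderer(...) is called; note that this does nothing about the render-thread recreation described above.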
So my questions are:
A. Is using the EGLContextFactory the "proper" way to be able to manually save the OpenGL context on rotation?
B. Are there any ways to not destroy and recreate the render thread on rotation (without modifying the source)?
C. Are there any better/simpler alternatives for achieving rotation with view destruction/recreation while keeping the OpenGL context and the rendering thread?
Extra info:
We always call setPreserveEGLContextOnPause(true);.
There are no issues with the rendering itself; it is simply the related issues described above that are problematic.

What does TwoPassFilter GPUImage actually do?

I am trying to re-create the GPUImageTwoPassFilter from GPUImage (iOS) for Android. I am working off the work done here for an Android port of GPUImage. The port actually works great for many of the filters. I have ported over many of the shaders, basically line for line, with great success.
The problem is that to port some of the filters, you have to extend the GPUImageTwoPassFilter from GPUImage, which the author of the Android version hasn't implemented yet. I want to take a stab at writing it, but unfortunately the iOS version is very undocumented, so I'm not really sure what the two-pass filter is supposed to do.
Does anyone have any tips for going about this? I have limited knowledge of OpenGL, but very good knowledge of Android and iOS. I'm definitely looking for a pseudocode-level description here.
I guess I need to explain my thinking here.
As the name indicates, rather than just applying a single operation to an input image, this runs two passes of shaders against that image, one after the other. This is needed for operations like Gaussian blurs, where I use a separable kernel to perform one vertical blur pass and then a horizontal one (cuts down texture reads from 81 to 18 on a 9-hit blur). I also use it to reduce images to their luminance component for edge detection, although I recently made the filters detect if they were receiving monochrome content to make that optional.
Therefore, this extends the base GPUImageFilter to use two framebuffers and two shader programs instead of just one of each. In the first pass, rendering happens just like it would with a standard GPUImageFilter. However, at the end of that, instead of sending the resulting texture to the next filter in the chain, that texture is taken in as input for a second render pass. The filter switches to the second shader program and runs that against the first output texture to produce a second output texture, which is finally passed on as the output from this filter.
The filter overrides only the methods of GPUImageFilter required to do this. One tricky thing to watch out for is the fact that I correct for the rotation of the input image in the first stage of the filter, but the second stage needs to not rotate the image again. That's why there's a difference in texture coordinates used for the first and second stages. Also, filters like the blurs that sample in a single direction may need to have their sampling inputs flipped depending on whether the first stage is rotating the image or not.
There are also some memory optimization and shader caching things in there, but you can safely ignore those when porting this to Android.
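As a rough guide for the port, here is a condensed sketch of that two-pass flow in Android GLES 2. This is not actual GPUImage source; the field names and the drawQuad() helper are placeholders, and real code would also create the FBOs, textures, and quad geometry:

```java
import android.opengl.GLES20;

public class TwoPassFilterSketch {
    private int firstPassProgram;   // shader program for pass 1
    private int secondPassProgram;  // shader program for pass 2
    private int[] framebuffer = new int[2]; // one FBO per pass
    private int[] fboTexture = new int[2];  // color texture attached to each FBO

    // Runs both passes and returns the texture handed to the next filter.
    public int render(int inputTexture) {
        // Pass 1: render like a standard GPUImageFilter, but into framebuffer[0],
        // using rotation-corrected texture coordinates for the input image.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, framebuffer[0]);
        GLES20.glUseProgram(firstPassProgram);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, inputTexture);
        drawQuad();

        // Pass 2: the first pass's output becomes the input; the image is
        // already upright, so plain (non-rotating) texture coordinates are used.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, framebuffer[1]);
        GLES20.glUseProgram(secondPassProgram);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fboTexture[0]);
        drawQuad();

        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        return fboTexture[1];
    }

    private void drawQuad() {
        // Omitted: bind vertex/texcoord attributes and call glDrawArrays().
    }
}
```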

SurfaceView vs custom View (extended from View): why is SurfaceView slower?

I wrote the same program two ways.
One uses a SurfaceView, and the other uses a custom view. According to the Android SDK development guide, using a SurfaceView is better because you can spawn a separate thread to handle graphics. The SDK development guide claims that a custom view with invalidate() calls is only good for slower animations and less intense graphics.
However, in my simple app, I can clearly see that the custom view with calls to invalidate() seems to render faster.
What do you guys know/think about this?
My touch event code is exactly the same, and my drawing code is exactly the same. The only difference is that one runs entirely on the UI thread, and the other uses a separate thread to handle the drawing.
SurfaceView lets you draw using two buffers (double buffering); what about your custom view?
Another thing: you mentioned that the docs say invalidate() works fine for slower animations/less intense graphics. How intense is your "simple app"? You should run a stress test, and also take into account how a single thread handles your touch input.
I have three threads in my game: one for game logic, one for drawing, and then the "normal" UI thread.
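For concreteness, here is a minimal sketch of the SurfaceView-plus-render-thread pattern being compared; the class name and loop structure are illustrative:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class DrawingSurface extends SurfaceView implements SurfaceHolder.Callback {
    private Thread renderThread;
    private volatile boolean running;

    public DrawingSurface(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override public void surfaceCreated(SurfaceHolder holder) {
        running = true;
        renderThread = new Thread(() -> {
            while (running) {
                Canvas canvas = holder.lockCanvas(); // draw into the back buffer
                if (canvas == null) continue;
                try {
                    // ... same drawing code a custom View would run in onDraw() ...
                } finally {
                    holder.unlockCanvasAndPost(canvas); // swap buffers
                }
            }
        });
        renderThread.start();
    }

    @Override public void surfaceChanged(SurfaceHolder holder, int format,
                                         int width, int height) { }

    @Override public void surfaceDestroyed(SurfaceHolder holder) {
        running = false;
        try { renderThread.join(); } catch (InterruptedException ignored) { }
    }
}
```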

Android access to native screen buffer from Java

Is it possible, from within my Android Java app, to capture an image of what is on the screen, even if it was drawn using native code (NDK)? I do not wish to take screenshots of other apps, just my own. I can already capture an image of a canvas that I am aware of, but is there a view or canvas or something like it that always represents what is on the screen, so that a) I don't have to capture the separate views' images and recombine them, and b) I can see what my native (JNI) code is doing with the graphics too?
There is no way to access the "raw" framebuffer from an application. On some devices you can get at it with sufficient permissions (e.g. the DDMS screen dump), but not even that will work on all devices.
If you are using view.getRootView() then, although it is a view as you were complaining about, it captures the entire view hierarchy on screen, so you should not need to recombine anything; at least I don't when I use it. Sorry, I don't think it would help with seeing what the native code is doing to the on-screen graphics, though.
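A minimal sketch of that getRootView() approach, assuming the hierarchy has already been laid out (the helper class is illustrative). As the first answer implies, anything rendered by native/GL code into a SurfaceView will typically come out blank:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

public final class ViewCapture {
    // Pass any view in the hierarchy; the whole window's content is captured.
    public static Bitmap capture(View anyViewInHierarchy) {
        View root = anyViewInHierarchy.getRootView();
        Bitmap bitmap = Bitmap.createBitmap(
                root.getWidth(), root.getHeight(), Bitmap.Config.ARGB_8888);
        root.draw(new Canvas(bitmap)); // renders the hierarchy into the bitmap
        return bitmap;
    }
}
```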
