I am developing a native Android app using the JUCE C++ framework. The app renders using OpenGL. Non-interactive animations perform very well.
However, interactive touch-responsive animations, e.g. dragging a component, are slow to update and not at all smooth. I measured on the Java side, and it's averaging around 70-80 ms between each ACTION_MOVE event.
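For reference, the measurement on the Java side is roughly this (just a sketch; MoveEventTimer is a standalone debug helper I attach to the view that receives the touches, not JUCE's own peer class):

```java
// Sketch: log the interval between consecutive ACTION_MOVE events.
// Attach with someView.setOnTouchListener(new MoveEventTimer()) and watch logcat while dragging.
import android.os.SystemClock;
import android.util.Log;
import android.view.MotionEvent;
import android.view.View;

public class MoveEventTimer implements View.OnTouchListener {
    private long lastMoveMs = 0;

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getActionMasked() == MotionEvent.ACTION_MOVE) {
            long now = SystemClock.uptimeMillis();
            if (lastMoveMs != 0)
                Log.d("TouchTiming", "ms since last ACTION_MOVE: " + (now - lastMoveMs));
            lastMoveMs = now;
        }
        return false; // don't consume the event; let normal handling continue
    }
}
```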
UPDATE: I think the main issue may be to do with rendering what's underneath the component being moved. When I tried out the JuceDemo, using the Window demo, I found I had bad performance dragging a window over another, but if I drag the window around where there is only empty space, it performs fine and feels smooth.
Is there a way I can increase the animated UI responsiveness in my app?
I've made some changes to the standard Java template provided by the Introjucer so that the native handlePaint() function is not called when there is an OpenGL context (as suggested here).
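Concretely, the change boils down to guarding the handlePaint() call in the generated peer view's onDraw(). This is only a sketch of the idea: the hasOpenGLContext flag is a placeholder I set from native code, and the exact field and method signatures in the Introjucer-generated template may differ.

```java
// Sketch of the guard added to the Introjucer-generated peer view's onDraw().
// "hasOpenGLContext" is a placeholder flag; the real template's names may differ.
@Override
public void onDraw (Canvas canvas)
{
    if (hasOpenGLContext)
        return; // the GL renderer repaints everything, so skip the software path

    handlePaint (canvas); // the native call in the stock template
}
```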
Related
On Android, when translating off-screen components onto the screen, it takes a very long time. When the function to bring them on-screen is triggered, it seems to completely block the main thread for (almost) a second. I am only rendering 100 simple elements, and iOS can handle it even with a much greater pixel density. On Android, this issue is reproducible for me even on a high-end phone.
I have created a minimal repo here: https://github.com/darajava/render-bug-android-react-native
See these 2 videos for a comparison of how Android vs iOS handles it:
Android: https://youtu.be/KBP2HHMzAiU
iOS: https://youtu.be/fw-Prh_9HhY
I created a react native issue here: https://github.com/facebook/react-native/issues/30987
Am I doing something wrong? Is this a bug in RN? If so, is there a workaround I can use?
I'm using a Texture widget, rendering its content from native code using OpenGL ES. In native code I call ANativeWindow_fromSurface and from that create an EGL surface (the Java-side plumbing is sketched after the list below). As I understand it, what happens is:
The ANativeWindow represents the producer side of a buffer queue.
Calling eglSwapBuffers causes a texture to be sent to this queue.
Flutter receives the texture and renders it using Skia when the TextureLayer is painted.
The texture is scaled to match the size of the TextureLayer (the scaling happens in AndroidExternalTextureGL::Paint()).
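For context, the Java-side plumbing that hands the Surface to my native renderer looks roughly like this (a sketch only: nativeSetSurface is my own JNI method, and the Flutter plugin boilerplate for obtaining the TextureRegistry is omitted):

```java
// Sketch of the Java-side plumbing that hands a Surface to the native renderer.
// "nativeSetSurface" is my own JNI entry point, not a Flutter API.
import android.graphics.SurfaceTexture;
import android.view.Surface;
import io.flutter.view.TextureRegistry;

public class GlTextureHost {
    private final TextureRegistry.SurfaceTextureEntry entry;
    private final Surface surface;

    public GlTextureHost(TextureRegistry registry, int width, int height) {
        entry = registry.createSurfaceTexture();        // consumer side (Flutter/Skia)
        SurfaceTexture texture = entry.surfaceTexture();
        texture.setDefaultBufferSize(width, height);    // size the producer's buffers
        surface = new Surface(texture);                 // producer side for native EGL
        nativeSetSurface(surface);                      // native code calls ANativeWindow_fromSurface on this
    }

    public long textureId() { return entry.id(); }      // passed to the Texture widget in Dart

    private static native void nativeSetSurface(Surface surface);
}
```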
I'm trying to figure out how to synchronise the OpenGL rendering. I think I can use the Choreographer to synchronise with the display vsync, but I'm unclear on how much latency this bufferqueue-then-render-with-Skia mechanism introduces. I don't see any means to explicitly synchronise my native code's generation of textures with the TextureLayer's painting of them.
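On the vsync side, what I have in mind is the plain Choreographer API, along these lines (a sketch; nativeRenderFrame is my own JNI call that draws and swaps buffers, and whether this stays in phase with the TextureLayer's paint is exactly what I'm unsure about):

```java
// Sketch: drive the native render from the display vsync via Choreographer.
// Must be started on a thread with a Looper (e.g. the main thread).
import android.view.Choreographer;

public class VsyncDriver implements Choreographer.FrameCallback {
    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        nativeRenderFrame(frameTimeNanos);                   // draw + eglSwapBuffers in native code
        Choreographer.getInstance().postFrameCallback(this); // re-arm for the next vsync
    }

    private static native void nativeRenderFrame(long frameTimeNanos);
}
```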
The scaling appears to be a particularly tricky aspect. I would like to avoid it entirely, by ensuring that the textures the native code generates are always of the right size. However there doesn't appear to be any direct link between the size of the TextureLayer and the size of the Surface/ANativeWindow. I could use a SizeChangedLayoutNotifier (or one of various alternative hacks) to detect changes in the size and communicate them to the native code, but I think this would lag by at least a frame so scaling would still take place when resizing.
I did find this issue, which talks about similar resizing challenges, but in the context of using an OEM web view. I don't understand Hixie's detailed proposal in that issue, but it appears to be specific to embedding of OEM views so I don't think it would help with my case.
Perhaps using a Texture widget here is the wrong approach. It seems to be designed mainly for displaying things like videos and camera previews. Is there another way to host natively rendered, interactive OpenGL graphics in Flutter?
I'm running automated tests on an Android app that I don't have the source code for, but I know the UI is a 2D interface made with OpenGL. The problem is that getting screen dumps through uiautomator, monitor, or the layout inspector doesn't work like it would with other activities: there are no IDs or objects, just a blank screen.
So I'm running input clicks by position to navigate the UI, but the layout changes often, which makes the code unreliable. I have tried using GAPID, but I don't know whether it can help me click on screen elements. Is there any way to analyze the screen and get IDs or positions, or anything else that will help me automate navigating an OpenGL UI on Android?
OpenGL has no information about the geometry you draw. It could be an image or a button, or you could even draw two buttons in one batch.
To solve your problem you have to provide information about the controls to your automated tests. You could try creating invisible controls (Button or TextView) on top of your OpenGL scene (for the debug configuration only, of course). Then you can query their positions as usual.
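For example, something along these lines (just a sketch: the FrameLayout container, the tag values, and the BuildConfig.DEBUG gate are assumptions about the app's layout and build setup):

```java
// Sketch: add invisible marker views on top of the GL scene so automated tests
// can look them up by content description instead of hard-coded coordinates.
// BuildConfig.DEBUG is assumed to come from the app module's generated BuildConfig.
import android.widget.FrameLayout;
import android.widget.TextView;

public final class TestMarkers {
    private TestMarkers() {}

    public static void add(FrameLayout glContainer, String tag,
                           int x, int y, int width, int height) {
        if (!BuildConfig.DEBUG)
            return; // only in the debug configuration

        TextView marker = new TextView(glContainer.getContext());
        marker.setContentDescription(tag); // what the test queries for
        marker.setBackground(null);        // draws nothing over the GL scene
        marker.setClickable(false);        // touches fall through to the GL view;
        marker.setFocusable(false);        // the test only reads the marker's bounds

        FrameLayout.LayoutParams lp = new FrameLayout.LayoutParams(width, height);
        lp.leftMargin = x;
        lp.topMargin = y;
        glContainer.addView(marker, lp);
    }
}
```

The test can then find a marker in the uiautomator dump by its content description and inject a click at the centre of its bounds.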
I am wondering if it is possible to use Android RenderScript to manipulate activity windows. For example, is it possible to implement something like a 3D carousel, with an activity running in every window?
I have been researching this for a long time, and all the examples I found are for manipulating bitmaps on the screen. If that is true, and RenderScript is only meant for images, then what is used in SPB Shell 3D, or are those panels not actual activities?
It does not appear to me that those are activities. To answer your question directly: to the best of my knowledge, there is no way to do what you want with RenderScript, as the nature of how it works prevents it from controlling activities. The control actually works the other way around... You could, however, build a series of fragments containing RenderScript surface views, but the processing load of this would be horrific, to say the least. I am also unsure how you would take those fragments or activities and then draw a carousel.
What I *think* they are doing is using RenderScript or OpenGL to draw a carousel and then placing the icon images where they need to be. But I have never made a home screen app, so I could be, and likely am, mistaken in that regard.
I wrote the same program two ways.
One uses a SurfaceView, and the other uses a custom View. According to the Android SDK development guide, using a SurfaceView is better because you can spawn a separate thread to handle the graphics. The SDK development guide claims that using a custom View with invalidate() calls is only good for slower animations and less intense graphics.
However, in my simple app, I can clearly see that using a custom view with calls to invalidate seems to render faster.
What do you guys know/think about this?
My touch-event code is exactly the same, and my drawing code is exactly the same. The only difference is that one runs entirely on the UI thread, and the other uses a separate thread to handle the drawing.
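For reference, the SurfaceView variant is structured roughly like this (a condensed sketch; the actual drawing code is omitted since it is identical in both versions):

```java
// Condensed sketch of the SurfaceView variant: a dedicated thread locks the
// canvas, draws, and posts the buffer, instead of calling invalidate() on the UI thread.
import android.content.Context;
import android.graphics.Canvas;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class GameSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
    private Thread renderThread;
    private volatile boolean running;

    public GameSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override public void surfaceCreated(SurfaceHolder holder) {
        running = true;
        renderThread = new Thread(() -> {
            while (running) {
                Canvas canvas = holder.lockCanvas();
                if (canvas == null)
                    continue; // surface not ready / being torn down
                try {
                    drawFrame(canvas);                  // same drawing code as the custom-view version
                } finally {
                    holder.unlockCanvasAndPost(canvas); // swap the back buffer
                }
            }
        });
        renderThread.start();
    }

    @Override public void surfaceDestroyed(SurfaceHolder holder) {
        running = false;
        try { renderThread.join(); } catch (InterruptedException ignored) {}
    }

    @Override public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {}

    private void drawFrame(Canvas canvas) {
        // ... identical drawing code to the custom-view version ...
    }
}
```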
SurfaceView lets you work with two buffers for drawing (double buffering); how about your custom view?
Another thing: you mentioned that the docs say invalidate() works fine for slower animations and less intense graphics. How intense is your "simple app"? You should try a stress test, and also take into account how the single thread handles your touch input.
I have 3 threads in my game. One for game logic, one for drawing and then the "normal" UI thread...