I'm running automated tests on an Android app that I don't have the source code for, but I know the UI is a 2D interface made with OpenGL. The problem is that getting screen dumps through uiautomator, monitor, or the layout inspector doesn't work like it would with other activities: there are no IDs or objects, just a blank screen.
So I'm issuing input clicks by position to navigate the UI, but the layout changes often, which makes the code unreliable. I have tried using GAPID, but I don't know whether it can help me click on screen elements. Is there any way to analyze the screen and get IDs or positions, or anything else that would help me automate navigating an OpenGL UI on Android?
OpenGL has no information about the geometry you draw. A draw call could produce an image or a button, or two buttons in a single batch.
To solve your problem you have to provide information about the controls to your automated tests yourself. One option is to create invisible controls (a Button or TextView) on top of your OpenGL scene (for a debug configuration only, of course). Then you can query their positions as usual.
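Here's a minimal sketch of that idea, assuming you can build and install a debug variant of the app. The "btn_play" description, sizes, and coordinates are hypothetical placeholders that you would mirror from your actual OpenGL layout:

    // In the Activity's onCreate(), debug builds only: overlay an invisible,
    // queryable view on top of the GL surface.
    if (BuildConfig.DEBUG) {
        FrameLayout root = findViewById(android.R.id.content);
        View playMarker = new View(this);
        playMarker.setContentDescription("btn_play"); // uiautomator can query this
        playMarker.setAlpha(0f);                      // invisible, but still laid out
        FrameLayout.LayoutParams lp = new FrameLayout.LayoutParams(200, 120);
        lp.leftMargin = 40;  // match the GL button's position and size, in pixels
        lp.topMargin = 300;
        root.addView(playMarker, lp);
    }

On the test side, uiautomator can then locate the marker with new UiSelector().description("btn_play") and click it, so when the GL layout changes you only have to update the overlay positions in one place.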
I am using OpenGL and Eclipse to build an Android app that loads PLY models and renders them. But when I tried rendering two models together, one transparent and the other opaque, the result I got was rather abnormal.
(screenshot: front view)
As you can see, the hair is overlapping the face rather than simply displaying behind it.
Please help.
For the body, you'll want to disable blending and enable depth testing, like you said.
For the hair, you need to enable alpha blending of course, but you still need depth testing enabled; otherwise, all hair will be visible regardless of whether it's behind the body or not.
But if the frontmost strand of hair is rendered first, all hair behind it won't get drawn at all, since the depth test now fails.
The solution is to have depth testing enabled but disable writing to the depth buffer with glDepthMask. This will render everything that's in front of the body, regardless of ordering, but nothing that's behind it.
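Translated into the android.opengl.GLES20 API, the pass order looks roughly like this (drawBody() and drawHair() are hypothetical stand-ins for your own draw calls):

    GLES20.glEnable(GLES20.GL_DEPTH_TEST);

    // Pass 1 -- opaque geometry: no blending, depth writes on.
    GLES20.glDisable(GLES20.GL_BLEND);
    GLES20.glDepthMask(true);
    drawBody();

    // Pass 2 -- transparent geometry: blending on, depth test still on,
    // but depth writes off so hair strands don't occlude each other.
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
    GLES20.glDepthMask(false);
    drawHair();

    // Restore depth writes before the next frame's clear.
    GLES20.glDepthMask(true);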
I am developing a native Android app using the JUCE C++ framework. The app renders using OpenGL. Non-interactive animations perform very well.
However, interactive touch-responsive animations, e.g. dragging a component, are slow to update. It is not at all smooth. I measured on the Java side and it's averaging around 70-80 ms between each ACTION_MOVE event.
UPDATE: I think the main issue may be to do with rendering what's underneath the component being moved. When I tried out the JuceDemo, using the Window demo I found I had bad performance dragging a window over another, but if I drag the window around where there is only empty space, it performs fine and feels smooth.
Is there a way I can increase the animated UI responsiveness in my app?
I've made some changes to the standard Java template provided by the Introjucer so that the native handlePaint() function is not called when there is an OpenGL context (as suggested here).
Can anyone explain to me what an Android SurfaceView is? I have been through the Android developer website and read about it, and I still can't understand it.
Why, or when, is it used in Android application development? A good example would help, if possible.
Thank you
An Android SurfaceView is an object that is associated with a window (but sits behind the window), through which you can directly manipulate the canvas and draw whatever you like.
What is interesting about the implementation of SurfaceView is that although it lies BEHIND a window, as long as it has any content to show, the Android framework will make the corresponding pixels on that window transparent, thus making the surface view visible.
It is most often used for building a game or browser, where you want a graphics renderer to compute pixels for you while you also use Java code to control the normal app logic.
If you are new to Android programming, chances are you do not need to know much about it.
For further information, see this and the official documentation.
A View or SurfaceView comes into the picture when you need a custom design in your Android layout instead of the existing widgets Android provides.
The main difference between View and SurfaceView is the drawing thread: a View is drawn on the UI thread, while a SurfaceView can be drawn on a separate thread.
Therefore SurfaceView is more appropriate when the UI needs to update rapidly or when rendering takes too much time (e.g. animations, video playback, camera preview, etc.).
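To make the separate-thread point concrete, here is a minimal sketch of a SurfaceView driven by its own render thread. MyGameView is a hypothetical name; the callbacks and lockCanvas()/unlockCanvasAndPost() are the standard SurfaceHolder API:

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;

    public class MyGameView extends SurfaceView implements SurfaceHolder.Callback {
        private Thread renderThread;
        private volatile boolean running;

        public MyGameView(Context context) {
            super(context);
            getHolder().addCallback(this); // get told when the surface exists
        }

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            running = true;
            renderThread = new Thread(() -> {
                while (running) {
                    Canvas canvas = holder.lockCanvas();  // claim the surface
                    if (canvas == null) continue;
                    canvas.drawColor(Color.BLACK);        // your drawing goes here
                    holder.unlockCanvasAndPost(canvas);   // push the frame
                }
            });
            renderThread.start();
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {}

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            running = false; // must stop drawing before the surface goes away
            try { renderThread.join(); } catch (InterruptedException ignored) {}
        }
    }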
I am wondering if it is possible to use Android RenderScript to manipulate activity windows. For example, is it possible to implement something like 3dCarousel, with an activity running in every window?
I have been researching this for a long time, and all the examples I found are for manipulating bitmaps on the screen. If that is all it can do, and RenderScript is only meant for images, then what is used in SPB Shell 3D, or are those panels not actual activities?
It does not appear to me that those are activities. To answer your question directly: to the best of my knowledge, there is no way to do what you want with RenderScript, as the nature of how it works prevents it from controlling activities. The control actually works the other way around. You could, however, build a series of fragments containing RenderScript surface views, but the processing load of this would be horrific, to say the least. I am also unsure how you would take those fragments or activities and then draw a carousel.
What I *think* they are doing is using RenderScript or OpenGL to draw a carousel and then placing the icon images where they need to be. But I have never made a home screen app, so I could be, and likely am, mistaken in that regard.
I'm trying to plan out the best way to develop a sample game. I would like to create a board game that acts like Words with Friends (not the game rules). For example, I want the board to be a six-sided board that has a set number of game tiles. When a game piece (like a checker or ball) is placed on it, I would like that piece to "snap" to the closest location. My questions are: what is the best way to design the board? Should it be a background image, or should the board be drawn live? If so, how do I create the "snap to" behavior and register when a piece is on the board? I also want to make sure the board is drawn to the correct dimensions on different phones.
Thanks for any suggestions
jason
This is probably a bit too broad of a question for SO, but I'll give it a go.
A little bit of preface: I have not personally attempted what you are trying, so please do not take what I suggest as what you must do.
If I were you, I would create a GameBoard object and a GamePiece object. Put everything to do with making and holding information for the board in the GameBoard class. Whether you draw your board in Java or start with a graphic and build from there depends on a few things. How specialized would you like the board to look? (You'll have more control if you start with a graphic and want to make it fancy.) If you're just looking for a grid of lines and nothing fancy, you'd be fine drawing it from Java. Do you ever want to use more or fewer than 6 rows/columns? If so, it may be easier to change that later if you do your drawing from Java rather than from a stored graphic. Your game board will also need to keep track of which pieces are on it and where they are. A rough skeleton of these two classes is sketched below.
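All names, fields, and the grid math here are illustrative assumptions, not a prescribed design:

    // GamePiece.java -- hypothetical skeleton
    import android.graphics.Rect;

    public class GamePiece {
        private final Rect bounds = new Rect(0, 0, 100, 100); // assumed piece size

        public Rect getRect() { return bounds; }

        // Re-center the piece on (cx, cy), e.g. under the user's finger.
        public void moveTo(int cx, int cy) {
            bounds.offsetTo(cx - bounds.width() / 2, cy - bounds.height() / 2);
        }
    }

    // GameBoard.java -- hypothetical skeleton
    import android.graphics.Rect;
    import java.util.ArrayList;
    import java.util.List;

    public class GameBoard {
        private final List<Rect> slots = new ArrayList<>();

        // Build a rows x cols grid of slot Rects scaled to the real screen size,
        // so the board comes out at the correct dimensions on different phones.
        public GameBoard(int screenWidth, int screenHeight, int rows, int cols) {
            int cellW = screenWidth / cols;
            int cellH = screenHeight / rows;
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    slots.add(new Rect(c * cellW, r * cellH,
                            (c + 1) * cellW, (r + 1) * cellH));
                }
            }
        }

        public List<Rect> getSlotRects() { return slots; }
    }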
As for the snap-to: you'll create a touch listener that lets you drag a GamePiece along under your finger. Inside the finger-up (ACTION_UP) event in your listener, you'll check the piece's current Rect against the Rects on the GameBoard and drop it into whichever slot it intersects most, as shown in the sketch below. Fair warning: while creating your touch listener, you're going to have to use some nitty-gritty linear algebra to take raw motion events from the touch screen and tell it where to draw the piece next.
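A hedged sketch of that drag-and-snap flow, reusing the hypothetical GamePiece/GameBoard classes above (boardView, piece, and board are assumed to be in scope; imports needed are android.graphics.Rect and android.view.MotionEvent):

    // Sketch only: snaps the piece to the slot it overlaps most on finger-up.
    boardView.setOnTouchListener((v, event) -> {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_MOVE:
                // Keep the piece centered under the finger while dragging.
                piece.moveTo((int) event.getX(), (int) event.getY());
                v.invalidate();
                return true;
            case MotionEvent.ACTION_UP: {
                // Compare the piece's Rect against each slot's Rect and
                // drop it into whichever slot it intersects most.
                Rect pieceRect = piece.getRect();
                Rect bestSlot = null;
                int bestArea = 0;
                for (Rect slot : board.getSlotRects()) {
                    Rect overlap = new Rect();
                    if (overlap.setIntersect(pieceRect, slot)) {
                        int area = overlap.width() * overlap.height();
                        if (area > bestArea) { bestArea = area; bestSlot = slot; }
                    }
                }
                if (bestSlot != null) {
                    piece.moveTo(bestSlot.centerX(), bestSlot.centerY());
                }
                v.invalidate();
                return true;
            }
            default:
                return false;
        }
    });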