Why do Robo tests get marked as passed so quickly?

TL;DR
The app has tons of flows, but sometimes runs get marked as passed in less than 2 minutes...
Is there any way to keep a run going until the timeout period (e.g. 1 hr) is almost consumed? I attached a screenshot of one such quick termination as an example.
Although the app is very big, with tons of flows, runs sometimes pass after only 2 or 5 minutes. What are the criteria that decide that a running Robo test should terminate now with a passed result? Any idea what makes the recorded graph decide to go to that node? (N.B. I assumed it's the terminal node.)

Why do Robo tests get marked as passed so quickly?
It turns out that because backend responses vary, the app journey changes between runs. If the crawl graph has 3 disconnected components (as in the GIF), the app can start in any of the 3 flows corresponding to those components, and that determines how long the journey will be.
Is there any way to keep it running until the timeout period (e.g. 1 hr) is almost consumed?
Guiding the Robo test, as explained here, is a promising way to make the journey longer by following a sequence of actions that enlarges the explored graph.
What are the criteria that decide that a running Robo test should terminate now with a passed result?
Robo tests simply apply a flood fill to the app (as in the GIF), where the graph's nodes represent screens (e.g. the onboarding screen) and its edges represent actions (e.g. clicking the Next button). Once the fill has covered everything reachable, the run terminates with a passed result.
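
Conceptually, the crawl behaves like a breadth-first flood fill over that screen graph. Below is a minimal Java sketch of the idea; the Screen type and the example graph are made up for illustration, and this is not Robo's actual implementation:

```java
import java.util.*;

// Hypothetical model: screens are nodes, actions are edges.
final class Screen {
    final String name;
    final List<Screen> reachableVia = new ArrayList<>(); // screens one action away
    Screen(String name) { this.name = name; }
}

public class RoboFloodFillSketch {
    // BFS from the start screen; the crawl "passes" once nothing new is reachable.
    static Set<String> crawl(Screen start) {
        Set<String> visited = new LinkedHashSet<>();
        Deque<Screen> queue = new ArrayDeque<>();
        queue.add(start);
        while (!queue.isEmpty()) {
            Screen s = queue.poll();
            if (!visited.add(s.name)) continue;   // already explored this screen
            queue.addAll(s.reachableVia);         // follow every action/edge
        }
        return visited;
    }

    public static void main(String[] args) {
        Screen onboarding = new Screen("onboarding");
        Screen home = new Screen("home");
        onboarding.reachableVia.add(home);            // e.g. "click Next"
        Screen promo = new Screen("promo");            // a disconnected component
        System.out.println(crawl(onboarding));         // [onboarding, home]; "promo" is never visited
    }
}
```

This is also why disconnected components shorten some runs: a component the crawl never starts in is simply never reached, so the fill finishes early.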

Most likely it always takes more or less the same duration... while the only difference may be the test's position in the queue (you're not the only user there, which is why the duration may appear to vary). And that TerminatedActivity-33 only confirms that the Activity under test was successfully terminated... which is "The End" of the story.
For reasons of efficiency, the test will terminate as soon as possible; the timeout value is only reached when the crawl gets stuck.
That the queue may also run in parallel might be another possible cause; even if the real-time duration did vary, the processing time (CPU share) would still be about the same. Disclaimer: I have no clue how it works internally; I just tried to apply some common sense.

Related

When an Android phone has a SoC with 6 cores, what does that mean from an app developer's perspective?

I have a phone with the Snapdragon 632 Mobile Platform, and a random Android app which shows what your phone has inside (RAM, SoC, sensors, screen density, etc.) reports that it has 8 cores.
What does that mean from an Android app developer's perspective?
Can I (theoretically) start up to 8 independent processes which can do work in parallel? Or does this have to do with Java's Thread? Or neither, and I should find something else to study :) ?
Q : ...up to 8 independent processes which can do work in parallel?
Well, no. A process-based true-[PARALLEL] execution is far more complex than a just-[CONCURRENT] orchestration of processes (as is well known to every serious multitasking / multiprocessing O/S designer).
Q : What does it mean from Android app developer perspective?
The SoC's 1.8 GHz 8-core CPU, reported by your system, is just one class of resource whose use among all processes the O/S has to coordinate; RAM is the next, then storage, RTC device(s), a (global) source of randomness, the light sensor, gyro sensor(s), etc.
All this sharing of resources is the hallmark of a just-[CONCURRENT] orchestration of processes. Opportunistic scheduling permits a process to go forward once some requested resource (CPU core, RAM, storage, ...) becomes free to use and the scheduler lets the next process in the queue do a small part of its work. That process releases and returns all such resources once a time quota expires, a signal request arrives, some async operation makes it wait for an external, independently timed event (operations across a network are the typical case), or it is ordered to sleep (so why block the others, who need not wait and can work, or "sleep", during that time).
The O/S may further restrict which CPU cores a process may use. This way a physically 8-core CPU might get reported as only a 6-core CPU to some processes, while the other 2 cores were affinity-mapped so that no user-level process will ever touch them; those cores remain, under any circumstances, free to serve background processes without being dragged into the user-level processing bottlenecks that may occur on the remaining, less restricted 6 cores, where both system-level and user-level processes may get scheduled for execution.
At the processor level, further details matter. Some CPUs have SIMD instructions that can process many data items, properly pre-aligned into SIMD registers, in one single CPU instruction. Other 8+ core CPUs have to share just 2 physical FMA units capable of a multiply-add, each taking a pair of CPU clock cycles. So if all 8+ cores ask for that very same uOP at once: "Houston, we have a small problem here..." This is why CISC CPUs introduced superscalar pipelining with out-of-order instruction reordering (RISC designs follow a completely different philosophy to avoid getting into this), so the 2 FMA units process, at each step, just a pair out of the pack of 8 requested FMA uOPs, interleaving them at the uOP level with other (legally reordered) work. Here you can see that a deeper level of detail can surprise you during execution, so HPC and hard real-time system designers have to pay attention even to this ordering if a system under test has to prove its robustness for field deployment.
Threads are, in principle, far lighter than a fully-fledged O/S process, so they are far easier to put onto and release from a CPU core (cf. context switching), and are typically used for in-process [CONCURRENT] code execution. Threads share the O/S-delivered quota of CPU time-sharing: when many O/S processes sit in the scheduler's queue waiting for their turn on the shared CPU cores, all of their respective threads wait too (there is no thread independence from the mother process). The same scheduling logic applies when an 8-core CPU has to execute 888 threads spawned from a single O/S process, among 999 other system processes, all waiting in the scheduler queue for their turn. Memory management is also far easier for threads, since they share the address space inherited from their mother process and can freely access only that address space, under a homogeneous memory-access policy and its restrictions: a thread cannot crash other O/S processes, yet it may devastate its own process's memory state (see thread-safety issues).
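To make the core-count and thread-scheduling points concrete, here is a minimal plain-Java sketch (not Android-specific). It queries the core count the O/S reports to the process and deliberately runs more threads than cores, so the scheduler has to time-slice them; true parallelism is bounded by the reported core count and never guaranteed:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CoresDemo {
    public static void main(String[] args) throws InterruptedException {
        // The number of cores the O/S exposes to this process (may be fewer
        // than the physical count if some cores are affinity-restricted).
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Reported cores: " + cores);

        // Spawn twice as many CPU-bound tasks as cores: the scheduler
        // interleaves them (concurrency), running at most `cores` at once.
        ExecutorService pool = Executors.newFixedThreadPool(cores * 2);
        for (int i = 0; i < cores * 2; i++) {
            final int id = i;
            pool.submit(() -> {
                long sum = 0;
                for (long j = 0; j < 100_000_000L; j++) sum += j; // CPU-bound work
                System.out.println("task " + id + " done on "
                        + Thread.currentThread().getName() + " sum=" + sum);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```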
Q : ...something else to study :) ?
The best place to learn from the masters is to dive into O/S design practices; the best engineering comes from real-time systems. How easy or hard it will be for you to follow and learn from them depends a lot on your endurance and experience.
Non-blocking, independent processes can work in a true-[PARALLEL] fashion, given that no resource blocking appears, and the results are deterministic in the time domain: all start, all execute, and all finish at the same time, like an orchestra performing a piece by W. A. Mozart.
If a just-[CONCURRENT] orchestration were permitted for the same piece of music, the violins might start only after they had managed to borrow some or all fiddlesticks from the viola players, who might still be waiting in the concert-hall basement because it was not yet their turn even to get into the dressing room; the piano soloist would still be blocked downtown in a traffic jam, unable to finish her part of the Concerto Grosso for about the next 3 hours; while the bass players would have super-fast fiddled through all their notes, since nobody needed their super-long fiddlesticks, and would be almost ready to leave the concert hall to play at another "party" in the neighbouring city, as their boss had promised...
Yes, this would be a just-[CONCURRENT] orchestration, where the resulting "performance" always depends on many local [states, parameters] and also heavily on externalities (the availability of a taxi, the actual traffic jam and its dynamics, situations like some resource being {under|over}-booked).
All of that makes a just-[CONCURRENT] execution far simpler to run (no strict coordination of resources is needed; a best-effort "do it if and when someone can" typically suffices), but non-deterministic in the ordering of its results.
Wolfgang Amadeus Mozart definitely designed his pieces of art in a true-[PARALLEL] fashion of orchestrating their performance. This is why we all love Amadeus, and why no one would ever dream of letting his work be executed in a just-[CONCURRENT] manner :o) Nobody could tell whether today's non-deterministically performed product was the same piece that was performed, under a different set of external and other conditions, last night or last week, so nobody could say whether it was Mozart's piece at all... God bless true-[PARALLEL] orchestration, which never permits such lovely pieces of art to be devastated and performs in such a way that (almost guaranteed) the same result is produced every time...

Android app dies with "Launch timeout has expired, giving up wake lock!" on certain phones

I am working on an Android app that displays a continuous custom-rendered animation on the title screen and thus never really enters the idle state once it's done loading. On most devices I've tested, everything runs fine, but Samsung's Galaxy S2 kills the app after a few seconds. I don't get a stack trace or any of the System.out output that I put into the onPause event handler and the default uncaught exception handler, so it doesn't seem to be a normal exit or an exception in my code.
The only output I get in LogCat is the following:
```
Launch timeout has expired, giving up wake lock!
Sending signal. PID: 22344 SIG: 3
handleActivityTimeout pid=[22344] cnt=10
Process ... (pid 22344) has died.
```
There are several related posts here on SO (1, 2, 3, 4), but they all seem to trigger the issue in slightly different ways (an alarm, recursive loops, network requests on the UI thread, ...). The last one links to a Google Groups discussion which says that this error message can simply be ignored, an approach I'd rather not take, since my app actually crashes on the Galaxy S2 (and maybe others?).
Basically, what I did was write a custom View that renders the next animation frame in its onDraw() method and then calls postInvalidate() right before returning from onDraw(). In case it matters: my first postInvalidate() call happens during onCreate(...). Something like the sketch below.
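
A minimal sketch of that pattern (the class and method names are illustrative, not my actual code):

```java
import android.content.Context;
import android.graphics.Canvas;
import android.view.View;

// Sketch of the described pattern: each onDraw() renders one frame and
// immediately schedules the next one via postInvalidate().
public class AnimationView extends View {
    private int frame = 0;

    public AnimationView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        renderFrame(canvas, frame++);
        // Schedules another draw pass as soon as the event loop is free,
        // so the main looper never goes idle while this view is visible.
        postInvalidate();
        // postInvalidateDelayed(16); // the workaround discussed below
    }

    private void renderFrame(Canvas canvas, int frame) {
        // custom frame rendering goes here
    }
}
```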
The rendering is very quick and runs at 40+ frames per second on that device and well over 60 fps on more modern phones. So control goes back to the event loop very frequently and the app is also very responsive. Yet, the Galaxy seems to think that it has crashed and kills it (if that is even the reason for my app dying there). The thing is: If I am quick enough to click on a menu-item in my app to end up on a screen without an animation to break out of the "tail-recursive" postInvalidate() once, everything runs fine. Even if I then go back to the title screen for a long time where the animation runs again.
So, of course, I could probably just use postInvalidateDelayed(...) once to break out of the start-up check, but that seems like a bit of a hacky solution, and I don't know whether there are other devices out there that might consider my app dead at a later stage (not just during start-up) and kill it.
Is there something fundamentally wrong with using postInvalidate() the way I do? Is there a way to fix it? I would like to avoid moving to a separate thread, since that opens a whole other can of worms as far as passing events back and forth between the UI and that thread is concerned. I know it wouldn't be the end of the world, and using a SurfaceView might even lead to a slight performance improvement, but it's really just not necessary from a user-experience point of view (everything runs perfectly smoothly), so I'd like to avoid the additional opportunities for issues it involves (multi-threading is notoriously difficult to debug).
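
For reference, the separate-thread SurfaceView alternative I want to avoid usually looks roughly like this; a bare-bones sketch with lifecycle and locking details simplified, not a drop-in solution:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

// Bare-bones sketch of the SurfaceView alternative: rendering happens on
// a dedicated thread, so the main looper stays idle between input events.
public class AnimationSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
    private Thread renderThread;
    private volatile boolean running;

    public AnimationSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(final SurfaceHolder holder) {
        running = true;
        renderThread = new Thread(() -> {
            while (running) {
                Canvas canvas = holder.lockCanvas();
                if (canvas == null) continue;
                try {
                    // custom frame rendering goes here
                } finally {
                    holder.unlockCanvasAndPost(canvas);
                }
            }
        });
        renderThread.start();
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        running = false;
        try {
            renderThread.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```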

Is context switching using up significant time?

I have been having a problem with an app (which uses Java, C++, and OpenCV) that appears to be very inconsistent in the amount of time it takes to perform various tasks. To help diagnose this, I wrote a Java function (called one_off_speed_test()) which does nothing but a series of integer maths problems in a loop that takes about half a second, and then prints the time taken to the log. If I call this function repeatedly from within onCreate(), the time taken for each call is very consistent (±3%), but if I call it from within onCameraFrame(), a function that OpenCV calls when it has an image ready from the camera, then the time taken for the maths test in each frame varies by up to a factor of two.

I decided to try the execution sampler in Eclipse/DDMS to see if I could work out what was happening. When I clicked on one_off_speed_test(), it listed the parents and children of that function, along with a line saying "(context switch)". On that row, under a column labelled "Incl Real Time", it says "66%". Now I'm not very expert in using DDMS, and I only have a hazy idea about context switching, but from the description so far, does it look like I have a problem with context switching taking up a lot of time? Or am I misunderstanding the DDMS output?
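
For reference, the benchmark function looks roughly like this (a reconstruction; the exact arithmetic and loop count are made up):

```java
import android.util.Log;

public class SpeedTest {
    // Reconstruction of the described benchmark: pure integer maths in a
    // loop sized to take roughly half a second, timed with the wall clock.
    public static void one_off_speed_test() {
        long start = System.nanoTime();
        long acc = 1;
        for (int i = 0; i < 200_000_000; i++) {   // tune the count for ~0.5 s
            acc = acc * 31 + i;                    // arbitrary integer maths
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Log acc too, so the optimizer cannot remove the loop entirely.
        Log.d("SpeedTest", "took " + elapsedMs + " ms (acc=" + acc + ")");
    }
}
```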
A context switch describes the time spent executing other threads. So, when your function is called from onCameraFrame(), it shares the CPU with other threads, and not necessarily threads that belong to your app.
See also answers https://stackoverflow.com/a/10969757/192373, https://stackoverflow.com/a/17902682/192373
In the posted example, onCameraFrame() spent 14.413665 s of wall-clock time, of which 4.814454 s was used by one_off_speed_test() (presumably across 10 frames) and 9.596984 s was spent waiting for other threads. This makes sense, because the onCameraFrame() callback competes for the CPU with the camera service, which runs in a separate system process.

Traceview profile: Handler.dispatchMessage using significant CPU time

I've just started to profile my app at its prototype stage using Traceview.
I've found that the main user of CPU time (90%) is Handler.dispatchMessage. The main users of real time (50% combined) are MessageQueue.next and MessageQueue.nativePollOnce. Calls made by my own methods account for, on average, 2% of real time each.
Currently, while I am still developing the app, I have toasts that appear when there is interaction with my service. I am assuming (I'm about to test the theory tonight) that these calls are down to my frequent use of Toast. Is this correct?
Since toasts appear on top of the activity while you are still using it, having their cost show up in Traceview is a bit misleading. Is there a way to filter out certain method calls in Traceview, or will I just have to comment out the Toast calls in my code, rebuild, and re-test? I suppose using a SharedPreference to toggle whether toasts are shown might be useful, along the lines of the sketch below.
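
One simple way to make the toasts switchable while profiling is to route them all through a single guarded helper (a sketch; the flag is a plain static field here, but it could just as well be read from a SharedPreference at startup):

```java
import android.content.Context;
import android.widget.Toast;

// Sketch: route every debug toast through one guarded helper, so the
// calls can be disabled in a single place while profiling.
public final class DebugToast {
    public static volatile boolean ENABLED = true;

    private DebugToast() { }

    public static void show(Context context, String message) {
        if (!ENABLED) return;
        Toast.makeText(context, message, Toast.LENGTH_SHORT).show();
    }
}
```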
Thanks for the help.

Measure application response time/wait for next activity ready in Android?

I am developing an automated test suite to get the timing information for some Android applications (whose source code I do not have access to).
I haven't decided whether to use MonkeyRunner or Robotium yet. My main concern is: after I perform an action on the UI (say, typing a URL), how do I determine when Android has fulfilled my request, all of the next activity's components are ready for use, and I am ready to read the result and take the next action (say, the page I requested is fully loaded, or an email is fully opened)?
For the web browser this is simple: I can just use onProgressChanged() or onPageFinished(). But I am looking for a more general way that works for all applications. I think Instrumentation.waitForIdleSync() or Instrumentation.waitForIdle() might be my best bet here, along the lines of the sketch below.
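
Roughly what I have in mind, as a sketch built on the Instrumentation APIs (the action and timing details are illustrative, and note that waitForIdleSync() only proves the main looper went idle, not that the content finished loading):

```java
import android.app.Instrumentation;
import android.os.SystemClock;
import android.util.Log;

// Sketch: time a UI action by waiting until the main looper goes idle.
// Assumes this runs inside an instrumentation-based test.
public class ResponseTimer {
    public static long timeAction(Instrumentation instrumentation, Runnable uiAction) {
        long start = SystemClock.uptimeMillis();
        instrumentation.runOnMainSync(uiAction);   // perform the UI action
        instrumentation.waitForIdleSync();         // block until the main looper is idle
        long elapsed = SystemClock.uptimeMillis() - start;
        Log.d("ResponseTimer", "action took " + elapsed + " ms");
        return elapsed;
    }
}
```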
However, as far as I can tell from the documentation, neither MonkeyRunner nor Robotium seems to integrate well with waitForIdle. In Robotium I can send some input and then read the output, but there doesn't seem to be a simple way to know when the output is ready, or to invoke a callback at that point. MonkeyRunner is similar in this respect.
So I wonder: is there a simple way to know when my request has been fulfilled (as perceived by the user) without re-implementing Robotium functionality all on my own?
Thanks a lot.
This can be very tricky, and it depends entirely on what exactly you asked monkeyrunner to do.
For example, if you have a monkeyrunner script and issue a command to launch the calculator app, you can have a Python subprocess monitor the output of adb logcat -b events to determine whether the calculator app has been launched or not. If you ask it to press a button in the calculator, you can sleep for 1 or 2 seconds.
But there is no direct way to determine whether Android has processed your event or not, simply because every operation differs and takes its own time.
You can put asserts in Robotium and then use System.nanoTime() before and after, like a timer.
This might be an easy way to get timing information; see the sketch below.
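
A minimal sketch of that approach, assuming Robotium's Solo class from the com.robotium.solo package (the button and result texts are made up):

```java
import com.robotium.solo.Solo;

// Sketch: bracket a Robotium action with System.nanoTime() to measure
// how long the app takes to show the expected result.
public class TimingHelper {
    public static long timeClick(Solo solo, String buttonText, String expectedText) {
        long start = System.nanoTime();
        solo.clickOnButton(buttonText);                     // perform the action
        boolean appeared = solo.waitForText(expectedText);  // wait for the result
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (!appeared) {
            throw new AssertionError("\"" + expectedText + "\" never appeared");
        }
        return elapsedMs;
    }
}
```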
