Android supports various log levels: Verbose, Debug, Info, Warn, and Error. I understand how the logging levels work; I'm more interested in the typical output expected for a given level.
For example, when developing an application I might be curious when a certain method is doing something (this often tends to be for debugging purposes). I'll look through the logs to make sure the methods are being called in the expected order, that the network response is what I think it should be, that the parser finds the right information, etc.
Why would someone use Verbose vs Debug vs Info?
From the perspective of a developer, for a first, second, or third party application, aren't all logs for debugging purposes? (Assuming devs don't stare at logs for fun... I'm not that sadistic.)
From the perspective of a consumer, when the s*** hits the fan and they need to contact customer support because their super important / business-critical application isn't working, a developer uses the log for debugging purposes.
The only reason I could see for using Verbose or Info is perhaps metrics gathering / data-warehouse-related operations. If so, why use Verbose vs Info?
Not sure if I'm overcomplicating this or if the Android framework is...
I basically follow what Tomasz Nurkiewicz has to say when considering logging levels (a quick sketch of how these map onto android.util.Log calls follows the list):
ERROR – something terribly wrong has happened and must be investigated immediately. No system can tolerate items logged at this level. Example: NPE, database unavailable, mission-critical use case cannot be continued.
WARN – the process might be continued, but take extra caution. Example: “Application running in development mode” or “Administration console is not secured with a password”. The application can tolerate warning messages, but they should always be justified and examined.
INFO – an important business process has finished. In an ideal world, an administrator or advanced user should be able to understand INFO messages and quickly find out what the application is doing. For example, if an application is all about booking airplane tickets, there should be only one INFO statement per ticket, saying "[Who] booked ticket from [Where] to [Where]". Another definition of an INFO message: each action that changes the state of the application significantly (database update, external system request).
DEBUG – developers' stuff.
VERBOSE – very detailed information, intended only for development. You might keep trace messages for a short period of time after deployment to a production environment, but treat these log statements as temporary, to be turned off eventually. The distinction between DEBUG and VERBOSE is the most difficult, but if you add a logging statement and remove it after the feature has been developed and tested, it should probably be on the VERBOSE level.
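As a rough illustration of how those levels typically map onto android.util.Log calls (the tag, method, and booking scenario below are made up to mirror the ticket example above):

    import android.util.Log;

    public class BookingLogger {
        private static final String TAG = "TicketBooking"; // hypothetical tag

        void bookTicket(String who, String from, String to, boolean devMode) {
            // VERBOSE: very detailed, temporary tracing while developing the feature
            Log.v(TAG, "entering bookTicket(" + who + ", " + from + ", " + to + ")");

            // DEBUG: developer-oriented details that may stay in the code
            Log.d(TAG, "looking up fare for route " + from + " -> " + to);

            if (devMode) {
                // WARN: the app can continue, but this should always be justified
                Log.w(TAG, "Application running in development mode");
            }

            try {
                // ... the actual booking work would happen here ...

                // INFO: one statement per significant business event
                Log.i(TAG, who + " booked ticket from " + from + " to " + to);
            } catch (RuntimeException e) {
                // ERROR: something is badly wrong and must be investigated immediately
                Log.e(TAG, "could not complete booking for " + who, e);
            }

            // WTF (API 8, Android 2.2+): "should never happen" situations
            // Log.wtf(TAG, "ticket booked for a flight that does not exist");
        }
    }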
My favorite level is WTF (2.2+), which is supposed to stand for "What a Terrible Failure": for situations that should never happen.
I normally use "info" for simple messages.
I'm using the app DevOptsHide to make other apps think that developer mode is disabled. To summarize, we are hooking into Settings.Global.getInt() and Settings.Secure.getInt() and overriding the values of the Settings.Global/Secure.ADB_ENABLED and Settings.Global/Secure.DEVELOPMENT_SETTINGS_ENABLED constants.
I found some points of interest, but I'm not an Android developer, and the app author isn't willing to spend much more time working on the app, so I'm asking the questions here in the hope of finding some insights.
The originating discussion can be found on DevOptsHide issue #17. The main code is at HideDevOpts.kt.
My questions are:
Am I right that, when hooking into Settings.Secure.getInt(), the name parameter passed to the hook is a Settings.Secure.* constant, so detection of access to the desired constants fails because only the Settings.Global.* names are currently checked?
Am I right that, since handleLoadPackage() is executed for each and every loaded package, Settings.Global.getInt() and Settings.Secure.getInt() end up with a huge stack of registered hooks?
But maybe there is some singleton mechanism?
On the other hand, if you confirm there is an issue, how could it be fixed? Could initZygote() be used instead (or does it run too soon)?
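For reference, here is a minimal sketch of the kind of hook being discussed, written in Java against the standard Xposed API rather than taken from the actual HideDevOpts.kt code; the class name, the overload hooked, and the exact set of checked constants are my own choices for illustration, not the app's real implementation:

    import android.content.ContentResolver;
    import android.provider.Settings;

    import de.robv.android.xposed.IXposedHookLoadPackage;
    import de.robv.android.xposed.XC_MethodHook;
    import de.robv.android.xposed.XposedHelpers;
    import de.robv.android.xposed.callbacks.XC_LoadPackage;

    // Sketch only: hides the developer-mode settings from the hooked process.
    public class HideDevOptsSketch implements IXposedHookLoadPackage {

        private final XC_MethodHook hideHook = new XC_MethodHook() {
            @Override
            protected void beforeHookedMethod(MethodHookParam param) {
                String name = (String) param.args[1]; // the settings key being queried
                // Compare against BOTH the Global and the Secure constants (question 1):
                if (Settings.Global.ADB_ENABLED.equals(name)
                        || Settings.Global.DEVELOPMENT_SETTINGS_ENABLED.equals(name)
                        || Settings.Secure.ADB_ENABLED.equals(name)) {
                    param.setResult(0); // pretend the option is disabled
                }
            }
        };

        @Override
        public void handleLoadPackage(XC_LoadPackage.LoadPackageParam lpparam) {
            // handleLoadPackage() runs for each loaded package (question 2), so every
            // call below registers another hook on the same method in that process.
            XposedHelpers.findAndHookMethod(Settings.Global.class, "getInt",
                    ContentResolver.class, String.class, int.class, hideHook);
            XposedHelpers.findAndHookMethod(Settings.Secure.class, "getInt",
                    ContentResolver.class, String.class, int.class, hideHook);
        }
    }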
I have a phone with a Snapdragon 632 Mobile Platform, and a random Android app that shows what your phone has inside (RAM, SoC, sensors, screen density, etc.) reports that it has 8 cores.
What does it mean from Android app developer perspective?
So can I (theoretically) start up to 8 independent processes which can do work in parallel? Or does this have to do with Java's Thread? Or neither, and I should find something else to study :) ?
Q : ...up to 8 independent processes which can do work in parallel?
Well, no. A process-based true-[PARALLEL] execution is far more complex than a just-[CONCURRENT] orchestration of processes (something well known to every serious multitasking / multiprocessing O/S designer).
Q : What does it mean from Android app developer perspective?
The SoC's 1.8 [GHz] 8-core CPU reported by your system is just one class of resources the O/S has to coordinate all processes' work among - RAM is the next, then storage, RTC device(s), a (global) source of randomness, the light sensor, gyro sensor(s), etc.
All this sharing of resources is the sign of a just-[CONCURRENT] orchestration of processes. Opportunistic scheduling lets a process go forward once some requested resource (CPU core, RAM, storage, ...) becomes free to use and the scheduler permits the next process in line to do a small part of its work. That process releases and returns the resources once its time quota expires, a signal request arrives, some async wait forces it to sit out an external, independently timed event (operations across a network are the typical case), or it was told to sleep - so why block others who do not need to wait and can do work in the meantime?
The O/S may further restrict which CPU cores a process is allowed to use. With such planning, a physically 8-core CPU may be reported as only a 6-core CPU to some processes, while the other 2 cores are affinity-mapped so that no user-level process will ever touch them; these remain, under any circumstances, free and ready to serve background processes without interfering with the user-level processing bottlenecks that may occur on the remaining, less restricted 6 cores, where both system-level and user-level processes may get scheduled for execution.
At the processor level, further details matter. Some CPUs have SIMD instructions that can process many data items, properly pre-aligned into SIMD registers, in one single CPU instruction. On the other hand, some 8+ core CPUs have to share just 2 physical FMA units that can multiply-add, each spending a couple of CPU clock cycles. So if all 8+ cores ask for that very same uop at once, well, "Houston, we have a small problem here..." - this is why CISC CPUs introduced superscalar pipelining with out-of-order instruction reordering (RISC designs follow a completely different philosophy to avoid getting into this), so the 2 FMA units process each step only a pair out of the pack of 8 requested FMA uops, interleaving them, at the uop level, with other, legally reordered work. Here you can see that a deeper level of detail can surprise you during execution, so HPC and hard real-time system designers have to pay attention even to this ordering if the system under test has to prove its robustness for field deployment.
Threads are in principle much lighter than a fully-fledged O/S process, so they are much cheaper to put on and take off a CPU core (cf. context switching), and are therefore typically used for in-process [CONCURRENT] code execution. Threads share the O/S-delivered quota of CPU time with their parent process: when many O/S processes sit in the scheduler's queue waiting for their turn on the shared CPU (cores), all of their respective threads wait as well - there is no sign of a thread being independent of its parent process. The same scheduling logic applies when an 8-core CPU has to execute 888 threads spawned from a single O/S process, among 999 other system processes, all waiting in the scheduler queue for their turn. Memory management is also much easier for threads: they "share" the address space inherited from their parent process and can freely access only that address space, under a homogeneous memory-access policy and its restrictions (they will not crash other O/S processes, yet they may devastate their own process's memory state... see thread-safety issues).
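From a Java/Android app developer's point of view, the practical way to use those cores from within a single process is a thread pool sized to the reported core count; a minimal sketch (the busy-work loop is just a placeholder) follows. Note that this is still just-[CONCURRENT] scheduling: the O/S decides which core runs which worker and when.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class CoreCountDemo {
        public static void main(String[] args) throws Exception {
            // Number of cores the O/S currently exposes to this process;
            // on big.LITTLE SoCs this value can even change over time.
            int cores = Runtime.getRuntime().availableProcessors();
            System.out.println("Available cores: " + cores);

            // One worker thread per core; the scheduler maps threads onto cores.
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            List<Future<Long>> results = new ArrayList<>();
            for (int i = 0; i < cores; i++) {
                final long seed = i;
                results.add(pool.submit(new Callable<Long>() {
                    @Override
                    public Long call() {
                        long sum = seed;
                        for (long j = 0; j < 10_000_000L; j++) {
                            sum += j; // placeholder CPU-bound busy work
                        }
                        return sum;
                    }
                }));
            }

            for (Future<Long> f : results) {
                System.out.println("worker finished with " + f.get());
            }
            pool.shutdown();
        }
    }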
Q : ...something else to study :) ?
The best place to learn from the masters is to dive into O/S design practices - the best engineering comes from real-time systems - yet how easy or hard it will be for you to follow and learn from them depends a lot on your endurance and experience.
Non-blocking, independent processes can work in a true-[PARALLEL] fashion, provided no blocking on resources appears and the results are deterministic in the time domain: all start, all execute, and all finish at the same time, like an orchestra performing a piece by W. A. Mozart.
If a just-[CONCURRENT] orchestration were permitted for the same piece of music, the violins might start only after they managed to borrow some or all bows from the viola players, who might still be waiting in the concert-hall basement because it was not yet their turn even to enter the dressing room; the piano soloist might still be stuck downtown in a traffic jam and unable to finish her part of the Concerto Grosso for about another 3 hours; while the bass players, whose extra-long bows nobody needed to borrow, have already raced through all their notes and are almost ready to leave the concert hall for another "party" in the neighbouring city that their boss promised them...
Yes, that would be a just-[CONCURRENT] orchestration, where the resulting "performance" always depends on many local [states, parameters] and also heavily on externalities (availability of a taxi, the actual traffic jam and its dynamics, situations like some resource being {under|over}-booked).
All that makes a just-[CONCURRENT] execution much simpler to run (no strict coordination of resources is needed; a best-effort "do it if and when someone can" typically suffices), but non-deterministic in the ordering of results.
Wolfgang Amadeus Mozart definitely designed his pieces of art in a true-[PARALLEL] fashion of orchestration - this is why we all love Amadeus, and no one would ever dream of having them executed in a just-[CONCURRENT] manner :o). No one would be able to tell whether today's non-deterministically performed rendition was the same piece that was performed, under a different set of external and other conditions, last night or last week, so no one could even say whether it was Mozart's piece at all... Thankfully, true-[PARALLEL] orchestration never devastates such lovely pieces of art and performs them so that (almost) the same result is produced every time.
I am trying to make a background service which measures the traffic usage of various applications, so as to be able to show the user which apps consume the most data traffic.
I found that the Spare Parts app does exactly that, but after installing it on an Android 1.6 Dell Streak device I always get "No battery usage data available" for "Network usage". Does this function work at all in Spare Parts?
Also, I couldn't find a working source code for Spare Parts.
https://android.googlesource.com/platform/development/+/froyo-release/apps/SpareParts
looks to be outdated or incomplete. (?)
But Spare Parts seems to measure e.g. CPU usage per app. How does it do that on an unrooted phone?
My general idea of how traffic per app could be measured is to regularly check the
"sys/class/net/" + sWiFiInterface + "/statistics/rx_bytes"
"sys/class/net/" + sWiFiInterface + "/statistics/tx_bytes"
"sys/class/net/" + sMobileInterface + "/statistics/rx_bytes"
"sys/class/net/" + sMobileInterface + "/statistics/tx_bytes"
files (a rough polling sketch follows below), and to see which app currently has focus and is thus most likely generating the network traffic.
Unfortunately I can't find out how to get the app that currently has focus.
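A rough sketch of that polling idea in Java; the interface names (wlan0, rmnet0) are placeholders that differ per device, and whether these files are readable at all varies by device and Android version:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class TrafficPoller {
        // Placeholder interface names; the real names differ per device (e.g. eth0, rmnet0).
        private static final String WIFI_IFACE = "wlan0";
        private static final String MOBILE_IFACE = "rmnet0";

        // Reads a single numeric value from a /sys statistics file, or -1 if unavailable.
        private static long readCounter(String iface, String counter) {
            String path = "/sys/class/net/" + iface + "/statistics/" + counter;
            BufferedReader reader = null;
            try {
                reader = new BufferedReader(new FileReader(path));
                String line = reader.readLine();
                return line == null ? -1 : Long.parseLong(line.trim());
            } catch (IOException | NumberFormatException e) {
                return -1; // interface down or file not readable on this device
            } finally {
                if (reader != null) {
                    try { reader.close(); } catch (IOException ignored) { }
                }
            }
        }

        public static void poll() {
            long wifiRx = readCounter(WIFI_IFACE, "rx_bytes");
            long wifiTx = readCounter(WIFI_IFACE, "tx_bytes");
            long mobileRx = readCounter(MOBILE_IFACE, "rx_bytes");
            long mobileTx = readCounter(MOBILE_IFACE, "tx_bytes");
            // Compare with the previous poll and attribute the delta to whichever
            // app is believed to be in the foreground at this moment.
            System.out.println("wifi rx=" + wifiRx + " tx=" + wifiTx
                    + " mobile rx=" + mobileRx + " tx=" + mobileTx);
        }
    }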
I found this:
Android, how to get information on which activity is currently showing (in foregorund)?
but it seems to be about testing, not about a 3rd-party service running on a non-rooted Android device.
We can get the currently running tasks with ActivityManager.getRunningTasks(), but any of them could be the one with focus. It seems the Android architects explicitly don't want 3rd-party apps to know which app has focus, because of security concerns
(see http://android.bigresource.com/Track/android-zb2mhvZX4/).
Is there a way around this?
Also, if I want to detect not only which activities eat up traffic but also which services do, I can get all currently running services with
ActivityManager.getRunningServices()
and even see for each one whether it is in foreground mode (unlikely to be killed if Android needs resources). But again, this doesn't get me very far.
Any ideas?
You can detect the currently foregrounded application with the ActivityManager.getRunningAppProcesses() call. It returns a list of RunningAppProcessInfo records. To determine which application is in the foreground, check the RunningAppProcessInfo.importance field for equality with RunningAppProcessInfo.IMPORTANCE_FOREGROUND.
But beware that the call to ActivityManager.getRunningAppProcesses() must NOT be performed on the UI thread. Just call it on a background thread (for example via AsyncTask) and it will return correct results. Check my post for additional details.
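A minimal sketch of that approach, run off the UI thread via an AsyncTask as suggested (the class and field names are mine):

    import android.app.ActivityManager;
    import android.app.ActivityManager.RunningAppProcessInfo;
    import android.content.Context;
    import android.os.AsyncTask;

    import java.util.List;

    // Sketch: finds the name of a process currently reported as foreground.
    public class ForegroundAppTask extends AsyncTask<Void, Void, String> {
        private final ActivityManager activityManager;

        public ForegroundAppTask(Context context) {
            this.activityManager =
                    (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        }

        @Override
        protected String doInBackground(Void... params) {
            // As noted above, call this off the UI thread.
            List<RunningAppProcessInfo> processes = activityManager.getRunningAppProcesses();
            if (processes == null) {
                return null;
            }
            for (RunningAppProcessInfo info : processes) {
                if (info.importance == RunningAppProcessInfo.IMPORTANCE_FOREGROUND) {
                    return info.processName; // usually the package name
                }
            }
            return null;
        }

        @Override
        protected void onPostExecute(String foregroundPackage) {
            // Attribute the traffic delta measured in the meantime to this package, if any.
        }
    }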
I've released my second game project on the Android Market this week, and immediately had multiple 1-star reports due to force closes. I tested it on many handsets and many emulators with zero issues. I'm completely at a loss for how to proceed and looking for advice.
I use Thread.setDefaultUncaughtExceptionHandler to intercept and report uncaught exceptions, then close gracefully. The people reporting force closes aren't getting to any of that, even though it is the first thing set in the application's main task constructor, and everything is wrapped in try/catches throughout. They are also reporting that there is no "Send Report" option in the force close popup (providing the Developer Console error reports), so I have absolutely no way of knowing what the problem is.
The app uses Android 2.0, with android:minSdkVersion="5". The only permission required is INTERNET.
(on Android market as 'Fortunes of War FREE' if you want to test)
I'm a bit surprised about the missing "Send report" button. What API level did you build the game with? I usually build against the minimum API level to make sure you're not using any API calls beyond it, but then switch back to the highest API level so you can use functionality like "install to SD".
I'm sure there's at least one user who wrote you a mail. Can you ask them to install LogCollector and mail you the log?
Btw, in general, I wouldn't use Thread.setDefaultUncaughtExceptionHandler, so that there IS the option to send a report. (It's ominously missing in your case, but normally it should be there.)
Btw btw, the exception handler applies to the current thread. If you have an OpenGL app, maybe the crash happens in the GL thread?
I'm not sure if I understood you correctly, but as far as I know Android only shows that report dialog if you use its default UncaughtExceptionHandler.
Try this:
In your UncaughtExceptionHandler's constructor, call Thread.getDefaultUncaughtExceptionHandler() and save the returned object in a field (let's call it defaultHandler). In your handler's uncaughtException(), do the things you want to do, then call defaultHandler.uncaughtException() afterwards.
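Roughly, that chaining could look like this (the class name and the reporting call are placeholders of mine, not a prescribed implementation):

    import java.lang.Thread.UncaughtExceptionHandler;

    public class ReportingExceptionHandler implements UncaughtExceptionHandler {
        private final UncaughtExceptionHandler defaultHandler;

        public ReportingExceptionHandler() {
            // Keep a reference to Android's default handler before replacing it.
            this.defaultHandler = Thread.getDefaultUncaughtExceptionHandler();
        }

        @Override
        public void uncaughtException(Thread thread, Throwable ex) {
            try {
                // Do your own reporting first (write to a file, queue for upload, ...).
                // Keep in mind the Context may no longer be usable at this point.
                logCrashSomewhere(thread, ex); // placeholder for your own reporting
            } finally {
                // Then delegate to the default handler so the normal force-close
                // dialog (and its "Send report" option) still appears.
                if (defaultHandler != null) {
                    defaultHandler.uncaughtException(thread, ex);
                }
            }
        }

        private void logCrashSomewhere(Thread thread, Throwable ex) {
            // Hypothetical reporting hook; intentionally left minimal here.
        }
    }

    // Installed once, early in the application's startup code:
    // Thread.setDefaultUncaughtExceptionHandler(new ReportingExceptionHandler());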
Maybe something you should know:
In my experience, your Context isn't functional anymore at uncaughtException(). So, you can't send broadcasts, etc anymore.
By the way, if you really wrapped everything in try/catch, that could be the reason why error reporting doesn't work as expected? :P
Good luck
Tom
Perhaps the force closes are caused by stalls, rather than exceptions. Users may not notice the difference. This kind of problem can occur more often if users have CPU hogging services running at the same time as your application, which explains why you're not seeing the issue in your testing.
The INTERNET permission sounds a lot like you transfer data over the network, which is very fast on your local LAN but all of a sudden becomes slow (and time-consuming) when people try it over their GSM connections.
If you then do the data transfer on the UI thread, that thread is blocked and the system detects the block. Normally this should end up as a "Did not respond" dialog, but I've seen one user report an error in the Market for my app that was caused by exactly such a slowdown.
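As a sketch of keeping such a transfer off the UI thread, assuming a plain AsyncTask and a placeholder URL:

    import android.os.AsyncTask;

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch: the slow network transfer happens in doInBackground(), never on the UI thread.
    public class DownloadTask extends AsyncTask<String, Void, String> {

        @Override
        protected String doInBackground(String... urls) {
            HttpURLConnection connection = null;
            try {
                connection = (HttpURLConnection) new URL(urls[0]).openConnection();
                connection.setConnectTimeout(10000); // fail fast on a bad GSM link
                connection.setReadTimeout(10000);
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(connection.getInputStream()));
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line).append('\n');
                }
                reader.close();
                return body.toString();
            } catch (IOException e) {
                return null; // slow or absent connection: report failure, don't block the UI
            } finally {
                if (connection != null) {
                    connection.disconnect();
                }
            }
        }

        @Override
        protected void onPostExecute(String body) {
            // Back on the UI thread: update views, or show an error if body == null.
        }
    }

    // Usage: new DownloadTask().execute("http://example.com/data"); // placeholder URL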
So I have read the LVL docs backward and forward, and have it working with my app. I have seen the questions about the response being cached. But it still leaves me wondering, based on some of the wording in the LVL docs, does Google want us to call the license checker every time the app is initialized? Is that the safest way to implement this? Using the ServerManagedPolicy like Google suggests, do we just call the license check, and either run our app or do whatever we choose if they fail? One of my small concerns is the use of network data. They drill into us the need to be cautious of using resources without informing the user, and it seems to me this is a use of network data without letting the user know.
To add to this, is anyone experiencing any type of delay to their app due to this code? Due to the nature of my app, opening it and then waiting every time for an ok to come through the network would definitely distract from its use. Should I cache the response myself, or am I way over thinking this?
You answered your own question; if you feel that calling the service every time you start would be disruptive (which it would be if, e.g., the user is out of coverage), then don't do it.
Google makes no recommendation about how often to use the licensing service; it's down to how paranoid you, as the application developer, are about piracy, balanced against how much you feel constantly checking would annoy the user.
OK, fair, only check it once in a while. But where can you "safely" store the information that you should only check it once a day?
E.g., the first time you start the app, you check it. The LVL result is valid, so you store the date of the last successful check. But where do you store it? Using SharedPreferences? Is this safe? Because if you have root access on your device you could access the preference and change the valid date (either way into the future; and yes, of course you can check for that in the code :-)).
PS. Sorry, could not make a comment :(
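For what it's worth, a rough sketch of the kind of caching being debated, using SharedPreferences (all names here are made up, and as noted above a rooted device can simply edit these values, so this is obfuscation at best):

    import android.content.Context;
    import android.content.SharedPreferences;

    public class LicenseCheckCache {
        private static final String PREFS_NAME = "license_cache";        // hypothetical name
        private static final String KEY_LAST_OK = "last_successful_check";
        private static final long ONE_DAY_MS = 24L * 60 * 60 * 1000;

        private final SharedPreferences prefs;

        public LicenseCheckCache(Context context) {
            this.prefs = context.getSharedPreferences(PREFS_NAME, Context.MODE_PRIVATE);
        }

        // Returns true if the last successful LVL check is recent enough to skip a new one.
        public boolean isRecentCheckValid() {
            long lastOk = prefs.getLong(KEY_LAST_OK, 0);
            return System.currentTimeMillis() - lastOk < ONE_DAY_MS;
        }

        // Call this from your license callback when the server says the user is licensed.
        public void recordSuccessfulCheck() {
            prefs.edit().putLong(KEY_LAST_OK, System.currentTimeMillis()).commit();
        }
    }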
Call it every time you start the app. The LVL library, as shipped by Google, will cache the response and use it the next time the user starts the app, thus not requiring a network connection if they restart the application within the cache valid time-frame.
What you likely want to do is change the amount of time the cache is valid. By default, Google ships with a fairly short cache-valid time, which resulted in some upset users who were outside of network coverage when the cache had expired.
Concerning LVL: although the SDK provides a sample implementation, Google themselves clearly recommend against using it "as-is".
http://www.google.com/events/io/2011/sessions/evading-pirates-and-stopping-vampires-using-license-verification-library-in-app-billing-and-app-engine.html
After watching that, I believe LVL is not an option for apps sold for $1-2. Furthermore, a failed LVL check (if no network is available) will piss off legitimate users.
While it is true that you can implement some kind of caching of LVL responses, it always boils down to the question of how far you want to protect against piracy at the expense of legitimate users.
And: developer time is limited, so maybe it is more worthwhile to put effort into improving the app instead of wasting too much time trying to cut down on illegal usage.