Just out of curiosity, thinking logically: does Android debugging mode slow down the performance of Android devices?
How can I prove to users that Android debugging does or does not slow down the Android?
P.S.: I need a specific answer, and a reliable source showing how I can prove it.
Yes. Attaching a debugger almost always slows down performance. The best way to prove any performance-related argument is always to run some tests. Set some timers in your code and gather the data empirically. Then you'll know not only which way is faster but by exactly how much.
For a 'specific answer': measure, and your tests will be the 'reliable source'.
Run the application with and without debugging and show the execution time difference. It's best to use an app that simply opens, does some calculations or something, and then exits; that way there's no user error from interacting with it.
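The measurement described above can be sketched as a small timing harness: run a fixed, deterministic workload and record the elapsed time, once with a debugger attached and once without, then compare. The workload and class names below are illustrative, not from any particular app.

```java
// Sketch: time a fixed workload so runs with and without a debugger
// attached can be compared. The workload here (naive Fibonacci) is
// just a deterministic CPU burner; substitute your app's real work.
public class TimingDemo {
    static long fibonacci(int n) {
        return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = fibonacci(30);   // fixed, deterministic workload
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("result=" + result + " elapsedMs=" + elapsedMs);
    }
}
```

On Android you could additionally log `Debug.isDebuggerConnected()` alongside the timing so each measurement records which condition it ran under.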
I work at a phone manufacturer (OEM).
One of our preloaded third-party apps has been complained about by our users that the performance is very bad on our mid-to-low end devices.
Without having access to the app's source code, how do I determine whether the performance problem is caused mostly by how the app was written or by the limitation of the device (e.g. CPU)?
This is an interesting question, even though answering it properly would normally require seeing code.
My take:
When it comes to performance, especially on Android, the issue almost always comes down to how the application was developed.
If an application is struggling, regardless of the device's capabilities, it is usually trying to do too much at once. There are many ways to improve performance with asynchronous background threads, and in modern development with coroutines.
People complain about performance for one reason: the application is visibly struggling. If you are targeting low-end devices, your views should be very simple and clean, and data loading should be asynchronous to prevent too much from happening at one time.
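The "do the work off the main thread" advice above can be sketched in plain Java with an `ExecutorService`. This is a minimal illustration, not Android-specific code: on Android you would post the result back to the UI thread (e.g. via a `Handler`), and the "expensive work" here is a stand-in loop.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: keep the foreground thread responsive by running heavy
// work on a background executor and consuming the result when ready.
public class BackgroundWork {
    // Daemon threads so the pool never keeps the process alive.
    private static final ExecutorService POOL =
            Executors.newFixedThreadPool(2, r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });

    static Future<Long> loadDataAsync(final int n) {
        return POOL.submit(() -> {
            long sum = 0;                  // stand-in for expensive I/O or parsing
            for (int i = 1; i <= n; i++) sum += i;
            return sum;
        });
    }

    public static void main(String[] args) throws Exception {
        Future<Long> pending = loadDataAsync(1_000_000);
        // ... the foreground thread stays free to draw and handle input ...
        System.out.println("sum=" + pending.get());
    }
}
```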
I've developed an application which seems to work on most tablets/phones I've tested it on (S2/S3/S4/Xoom/some emulator configurations etc)
However, I've noticed a few complaints around a "Pantech Burst" - I can't seem to find any of these phones to pick one up (possibly it's specific to the US) and thought perhaps I could simulate one.
I know it's 480 x 800 pixels and has 1GB of memory:
http://www.gsmarena.com/pantech_burst-4429.php
Is it possible to simulate this kind of phone?
Or are some phones inherently different based on hardware, such that you could never simulate them?
(I have a gut feeling it might be related to MP3s and SoundPools, but I'd rather prove it.)
Short answer: no. In my experience if you have device-specific problems really the best way to debug them is to get your hands on the specific device.
Failing that, I can recommend integrating some kind of crash-reporting framework into your app, if you haven't already. These really help in capturing, tracking, and sending errors (with stacktraces) to you and have helped me fix problems on devices I can't get my hands on.
One I use is bugsense, there is also ACRA and others.
http://www.bugsense.com/
https://github.com/ACRA/acra
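The mechanism those frameworks build on is Java's default uncaught-exception handler: capture the stack trace before the process dies, then persist or upload it. Below is a hedged, minimal sketch of that idea in plain Java; it is not ACRA's or BugSense's actual code, and the "reporting" step is just a print.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Sketch of the core mechanism behind crash reporters such as ACRA:
// install a default uncaught-exception handler that renders the
// stack trace to a string before the process terminates.
public class CrashCapture {
    static String stackTraceOf(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return sw.toString();
    }

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            String report = stackTraceOf(throwable);
            // A real framework would persist the report and send it
            // to a server on the next launch; here we just log it.
            System.err.println("captured crash on " + thread.getName());
            System.err.println(report);
        });
    }
}
```

A real integration also has to chain to the previously installed handler so the system's own crash dialog still appears.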
If you're having a problem on one particular device, then it is likely a hardware + software bug, and simply simulating the hardware configuration will not solve your problem.
That said, you can always duplicate the hardware configuration by setting the RAM, screen size, storage, etc. to its specifications. You probably won't get the same processing speed, though, because you're running on an emulator.
If getting the device is not really an option for you, you might want to consider using the Apkudo service, assuming they have the device your app is having trouble with.
You submit your app, and they run it on their set of devices using Monkey, returning to you a logcat and a stack trace when the application crashes on a particular device.
I want to create a couple of functional tests for an Android application to run them on a continuous integration server. As far as I understand, there are two main approaches: monkeyrunner and test cases via instrumentation.
At the moment, I can't see any advantages of monkeyrunner, but I might be missing something. What is it good for?
I like to use MonkeyRunner because it's really portable (Linux, Mac and Windows), easy to set up, and works easily across many different devices and emulators. Also, sometimes with instrumentation you get crashes that are unrelated to the app and are instead caused by the instrumentation implementation. With MonkeyRunner you will know what caused the crash.
From my experience, monkey testing is really good for detecting flaws of the application in terms of:
Memory leaks: sometimes it is impossible to track down the scenarios that generate excessive memory usage (say, rapid rotation, repeated button clicks, etc.).
Monkey also helps identify test cases: unintended, strange uses of the application that eventually lead to crashes.
Using monkey tests you can also, to some extent, measure the performance of the application when used by "heavy" users.
I would say that monkey testing does not stand in opposition to unit/instrumentation testing; it is yet another way to verify that your application works as intended.
Of course it also depends on the software being tested, but in my opinion it is not always easy to determine what happens if your button is clicked, then a point 9px above the button is touched, and eventually a phone activity is launched. :) That's what monkey tests are for...
I have been battling an ANR happening in one of my services for a while now. It is very hard to reproduce, and the UI seems to have full functionality right before it happens; 100% of the time there is never any noticeable lag or freezing. My service has a TimerTask and a few AsyncTasks that it runs, and that is it.
The stack traces I get when users report it through the Android Market on 2.2 are hard to read; there doesn't seem to be a direct reference to any of my code, only to classes in the SDK. Can anyone take a look at the stack trace and see if you can tell what is going on?
The print out is so large I opted to post it to pastebin, I hope that isn't against the rules.
http://pastebin.com/KHUD0UHW
Here is the Logcat log as well
http://pastebin.com/V5xSey36
It's possible that it's not your app's fault. An ANR shows up for whatever the 'current' app is, so if another app on the system was really hogging the CPU and starving your app, your app would get the ANR because it has the user's focus. If that is the situation, there is nothing you can do. Perhaps the best way to test for this is to watch for, or force, a system sync. Syncs are usually quite heavy and can cause bad lag on less powerful phones.
Another way to test, if you have a slower phone, is to install a big app from the Market. Once it goes into the 'Installing' phase after downloading, do something just a little bit intensive in your app. If the installing phase takes longer than 5 seconds or so, there is a decent chance your app will get the ANR. This is because of bad IO speeds while installing the app: IO blocks other apps from getting CPU time, and Android interprets that as your app being unresponsive.
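The detection idea behind an ANR can be sketched as a heartbeat watchdog: the main loop periodically records a timestamp, and a monitor checks whether the last heartbeat is older than some timeout. This is a simplified illustration of the concept, not Android's actual implementation, and the class name and thresholds are made up.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: a heartbeat watchdog that detects a blocked main loop,
// roughly analogous to how the platform decides an app is not
// responding. Names and timeouts are illustrative only.
public class AnrWatchdog {
    private final AtomicLong lastHeartbeat =
            new AtomicLong(System.currentTimeMillis());

    // Called regularly from the main loop while it is healthy.
    public void heartbeatAt(long nowMs) {
        lastHeartbeat.set(nowMs);
    }

    // Polled from a separate watchdog thread: true if the main loop
    // has not checked in within timeoutMs.
    public boolean isBlockedAt(long nowMs, long timeoutMs) {
        return nowMs - lastHeartbeat.get() > timeoutMs;
    }

    public void heartbeat() {
        heartbeatAt(System.currentTimeMillis());
    }

    public boolean isBlocked(long timeoutMs) {
        return isBlockedAt(System.currentTimeMillis(), timeoutMs);
    }
}
```

Taking "now" as an explicit parameter keeps the logic deterministic and testable; a watchdog thread would call the clock-based overloads.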
I have started developing Android applications, and I am testing with the Android emulator. Do I really need an Android phone before releasing an app for public usage?
Short answer: no. You can build and test an Android application package with the SDK and an emulator. But I would say there are many things it would be wise to test on a device.
Personally, I have noticed that the emulator does not give a good indication of response times for UI controls. It is usually necessary to move functionality with long processing times into background threads to maintain user interactivity and avoid the 'force close' pop-up. Testing the effectiveness of your UI responsiveness must be done on a phone to be meaningful.
Network connectivity is another aspect that can be vastly different on a phone, whether over 3G or Wi-Fi.
Device sizes and Android platform versions can be tested effectively on the emulator.
Some phones allow hot-swapping of the SD card (replacing the SD card without turning off the phone). I am not sure how to replicate this on the emulator.
There may be many more things which may only become apparent when using your application on a real device. I would strongly suggest to always test under real conditions when feasible for any commercial project.
From a technical perspective there's no reason why you can't develop purely on the emulator. You're not going to be able to test on every available device anyway, so there's always going to be the possibility of device-specific bugs that you've missed.
However, I'd strongly recommend getting an actual phone to test your application on.
For me the biggest difference between an actual device and the emulator is the difference between using the interface with your fingers and using a mouse. Interactions which make sense in the emulator sometimes don't work as well when you start using touch on the screen. So if you develop purely on an emulator, you'll miss lots of little improvements to your UI that would be obvious once you used your app on a phone.
You can't get a feel for a real app in your hands until you have a real phone. (I'm telling you this as an Android developer.)
So developing without a real phone is possible, but a real phone gives you a lot more experience, fun and usefulness.
It depends on what type of application you're developing, for serious ones you need at least one device to test it on. For complex applications you would need a range of devices, for example with or without hardware keyboard, different navigational button etc. For basic, simple applications you'll probably do fine with just the emulator.
I would imagine with games you would definitely need to test on real devices.
Thanks to you all. I am going to get an HTC Legend and test on it, so that I can hope my apps can be used by others :)
Would you guys suggest the HTC Desire or the HTC Legend?