Performance of app on Android Phone Emulator vs Actual Device?

I am working on a compute-intensive app for Android. I do not have any device yet to test performance on. However, the performance on the emulator is slower than expected.
I have seen an old question with some estimates of emulator vs. device performance. What are your experiences developing apps with the recent Froyo/2.2 SDK? Is the performance observed on the emulator slower than on an actual device?
Please share your experience and the specs of both your dev machine and your mobile devices.
Note: to get the virtual device's CPU speed, one can run cat /proc/cpuinfo from the adb shell. How does this compare to an actual device's CPU frequency?
Processor : ARM926EJ-S rev 5 (v5l)
BogoMIPS : 240.02

From a CPU standpoint, the emulator tends to be slower than actual hardware, presumably due to the overhead of translating ARM instructions to x86 ones on the fly.
From a graphics standpoint, the emulator tends to be dramatically slower than actual hardware, because the emulator lacks hardware graphics acceleration (regardless of the quality of the host computer's video card).
From a disk I/O standpoint, the emulator tends to be faster than actual hardware, particularly on write operations. Flash writes can be very slow, depending on a wide range of criteria (wear leveling, percentage of the flash storage that is in use, etc.). Brad Fitzpatrick covered this topic in his Writing zippy Android applications presentation at the 2010 Google I/O conference.
From a network standpoint, the emulator can be faster than actual hardware, because WiFi or wired Ethernet hooked up to broadband will typically be faster than a mobile data connection.
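If you want to put a rough number on the CPU gap for your own setup, one option is a small timing loop compiled into the same APK and run on both the emulator and a device. A minimal sketch in Java (the class name, iteration count, and workload here are arbitrary choices of mine, not a standard benchmark):

public class CpuBenchmark {
    // Times a fixed chunk of integer work and logs the result.
    // Run the same APK on the emulator and on a device, then
    // compare the logged milliseconds.
    public static long runOnce() {
        long start = System.nanoTime();
        long acc = 0;
        for (int i = 0; i < 10000000; i++) {
            acc += i % 7; // arbitrary busywork; 'acc' is logged so the loop isn't optimized away
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        android.util.Log.d("CpuBenchmark", "elapsed=" + elapsedMs + "ms (acc=" + acc + ")");
        return elapsedMs;
    }
}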

Yes, it is true that the emulator is slower than a real device. I have quite a decent development computer, and it is still slower.
Here are my specs.
CPU: AMD Phenom X4 940BE (3GHz)
RAM: 8GB Corsair (800MHz)
GFX: Nvidia 9800 GTX+
MBD: Asus M3A78T
HDD: Western Digital VelociRaptor SATA2 (10k RPM)
OS: Ubuntu Lucid Lynx 64bit
And still it is quite slow compared to a real device.

Quick answer: I have found the emulator to be slower than real devices, even on my relatively fast PC.

If you install an SSD (solid-state drive), that will help a lot. I can see that your CPU and RAM are decent, but the emulator is pretty heavy (lots of rendering, etc.), and a faster disk can be very helpful.

Related

Why does CPU usage differ so much between an AVD and a real device?

I have an Android app playing back HLS.
And I'm very curious about why Android Studio's CPU usage monitor shows such different loads:
AVD: Kernel: ~2% / User: ~0%
My Sony Z3: Kernel: ~5% / User: ~30%
And there are no other background services running to load my Z3 so heavily, I assure you.
Yep, the only obvious difference I see: the AVD uses the software decoder OMX.google.aac.decoder, while the Z3 uses OMX.qcom.audio.decoder.aac (hardware, I guess).
The AVD uses the CPU of your computer, which is much faster than your phone's. So the AVD needs only a few percent of the CPU's cycles, whereas the phone needs a much larger share.
The decoder implementation also makes a difference, but hardware decoders are generally faster than software decoders, so that should not be the cause here.
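One way to check which AAC decoders each environment actually exposes is to enumerate them with MediaCodecList. A minimal sketch, assuming API 16+ (the class name and log tag are mine; "audio/mp4a-latm" is the AAC MIME type):

public class DecoderLister {
    // Logs every decoder claiming AAC support. Names starting with
    // "OMX.google." are typically software codecs; vendor prefixes such
    // as "OMX.qcom." usually indicate hardware-backed implementations.
    public static void logAacDecoders() {
        for (int i = 0; i < android.media.MediaCodecList.getCodecCount(); i++) {
            android.media.MediaCodecInfo info = android.media.MediaCodecList.getCodecInfoAt(i);
            if (info.isEncoder()) {
                continue;
            }
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase("audio/mp4a-latm")) {
                    android.util.Log.d("DecoderLister", info.getName());
                }
            }
        }
    }
}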

Is the processing time on the emulator more than on a real device? (android, eclipse)

I am making an Android app. When I run it on the emulator, which has 512 MiB of RAM, a simple activity takes 3 seconds to complete. Will this time be lower on a real device? Three seconds is unacceptably long! If it will be lower, then by what factor (an estimate will do)? Thanks.
It really depends on exactly what you're doing, but as a rule of thumb, devices are much faster than emulators.
Emulators are slow because an entire ARM processor architecture is emulated in software, which adds a (big) extra layer between the Android system and your computer's processor.
Try using one of the Intel x86 images for a faster emulator, closer to device speeds.
However, no emulator will give you the same speed as a device.
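Whatever the factor turns out to be, it helps to measure the three seconds explicitly rather than eyeballing them, so the emulator and device numbers are directly comparable. A minimal sketch, assuming the slow work happens in onCreate (the class name and log tag are placeholders):

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class SlowActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        long start = System.nanoTime();
        // ... the work that currently appears to take ~3 seconds ...
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        Log.d("SlowActivity", "onCreate work took " + elapsedMs + " ms");
    }
}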

How CPU- and memory-intensive is Android development?

I've been programming for about two years (Android and Java for a few months), but I still don't really know what impact the processor (speed and cores) and the amount of RAM have on the "programming experience" (compilation time, responsiveness of tools, overall workflow, etc.).
To be specific (otherwise ignore this): I'm about to buy a 13" MacBook Pro and am trying to decide between the i5 (2.5 GHz dual-core) and the i7 (2.9 GHz dual-core), and I find this hard.
I don't program in Android that often, or even use Eclipse regularly, but I know that on my old 1GB laptop, Eclipse takes forever to load and is very sluggish, while on my new 8GB machine it loads almost instantly and runs almost flawlessly smoothly. The difference between the two laptops is not just the RAM or the CPU, though: the old laptop spins a 5400 RPM disk platter, while my new one uses an SSD.
Specifically with Android development, though, you will be running the emulator, and probably multiple instances of the emulator at the same time. These emulators are pretty memory-hungry, and Eclipse isn't lightweight by any measure either. You never want swapping: swap is only there so the system doesn't start killing processes if it runs out of memory, and it should never be filled during day-to-day usage. If your system starts swapping, that's a sign it severely needs more RAM.
In my personal experience, more and faster RAM generally contributes more to responsiveness than a faster CPU (though you still need at least a mid-range CPU). Compilation is usually I/O-bound (although this may differ depending on the codebase), so it's best if the OS can keep all the files you're currently working with in the disk cache in RAM. For loading times you want a fast hard drive, or even better an SSD, because a fast hard drive is much noisier, vibrates more, and hogs the battery, which is sometimes acceptable for a desktop tower below your desk but not for a laptop.
Other important considerations are power-saving features in the CPU and battery life if you're going to be using it on the go, plus the weight, the screen and keyboard size, and the "feel" of the touchpad (is it too slippery, or too rough, etc.; once you get used to it, a touchpad is much faster and more comfortable than a mouse, since it's much closer to the keyboard). Don't just compare the numbers.
I have three development computers. One is a Dell Latitude with an i7, 8GB RAM, and an SSD, running Windows 7. Another is a 17" MacBook Pro with an i7, 8GB RAM, and an SSD, running OS X.
The last one is an old HP small-form-factor machine with a Core 2 Duo, 2GB RAM, and a slow HDD.
All three are fine for Eclipse, and NONE will run the emulator as fast as I want. The emulator is sluggish, even on a $3000 laptop; the difference between the i5 and the i7 won't be that huge.
If I were you, I would opt for the cheaper one of them, and invest in a decent Android phone for running the software.

Will a faster video card improve the performance of the Android emulator?

This is a general question. I am trying to create apps for Google TV using the 3.1 Honeycomb OS, which is not yet available on Google TV, only on Android tablets. I know that using a device is much faster, but I cannot use one yet.
Therefore, until later this summer I must use the emulator.
The video card in my computer is pretty weak; I do not know what it is at the moment, but I can tell you it was a bargain-bin card. I have 1.5GB of DDR RAM, Win XP 32-bit, plenty of storage, and almost nothing installed on the computer (I just wiped it this week).
If anyone thinks getting a new video card would improve my performance, please say so.
Getting a new video card probably won't help much, AFAIK. I have a notebook with 128MB of discrete video RAM and a desktop with 512MB of video RAM. Both have approximately the same single-core CPU speed (2.5GHz dual-core vs. 2.66GHz quad-core), and they have comparable Android 3.1 emulator performance.
While graphics are at the heart of the performance issue with the Android 3.1 emulator, it is unclear how much QEMU uses hardware graphics acceleration, and the emulator itself has to do all its rendering in software, since it does not have access to the actual underlying hardware.
If you only have, say, $50 to spend, I'd bump up your RAM so you can allocate 1024MB to the device RAM setting of the AVD. That is known to improve matters incrementally. It is still slow, but not as horrible.
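For reference, that device RAM setting is stored in the AVD's config.ini, typically under ~/.android/avd/<avd-name>.avd/ (where <avd-name> is a placeholder for your AVD's name), as:

hw.ramSize=1024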
The video card and processor are both really important to emulator speed. I don't know what processor you have, so I can't say for sure whether upgrading only the video card will make a lot of difference, but it will definitely be better than the performance you have now.

Difference between iPhone Simulator and Android Emulator

What is the difference between the iPhone Simulator and the Android emulator? I have heard people say that an emulator really emulates the target device, which is not true in the case of a simulator.
I think the Android emulator mimics the target device's processing speed and memory usage, but a simulator does not emulate the device.
Disclaimer: I'm only an iPhone developer, not an Android developer.
You are correct: the difference between emulators and simulators is that emulators mimic both the software and hardware environments found on actual devices. Simulators, on the other hand, only mimic the software environment; they otherwise have access to all of the host system's hardware resources, such as disk space, memory, and processor speed.
Apple always harps on the importance of device testing because the iPhone Simulator does not emulate an iPhone's processor, disk drive, memory constraints, and whatnot. You hardly ever get memory warnings unless your Mac is struggling to manage its own resources, although you can simulate memory warnings from the Simulator's menu.
In fact, if you go to Settings > General > About, you'll see that the Simulator's disk capacity is the same as that of the filesystem of the Mac it's installed on.
Though the Android emulator emulates ARM processors and certain hardware, it still doesn't do a good job of matching CPU performance.
Being an emulator, it can match memory consumption well and can emulate certain simple devices well, but it fails when the devices get complicated, for example the mobile GPU and hardware media decoders. And since the emulation logic works by translating each ARM instruction to an x86 instruction and executing it, it is a performance hog and by design not cycle-accurate.
With the programming model being Java, this design doesn't buy the application developer anything, as performance on the emulator doesn't reflect performance on the device (due to the speed of emulation and the missing hardware devices), and more often it is an inconvenience, as debug cycles are slow. The only advantage is that an application compiled for the emulator works as-is on devices (no recompilation).
With the iPhone's simulator model, Apple realized they were not going to match the device perfectly, so they do not even try; in return they give developers fast cycle times and enhanced tool support to develop applications effectively.
