There are dual-core and now quad-core phones on the market. However, I don't really know what kind of apps truly make use of the feature. Can anyone provide some information on the apps that can really use the power of dual or quad cores in mobile devices?
The idea of having dual, quad, or more cores is not for specific apps to use them.
It just means having more processing power available at hand, which will only be used when necessary.
For example, when a workload can be handled by one core, which is usually the case for most apps, the other cores aren't needed. But for high-end games, or when more than one process has to run and lots of calculations are needed at a given time, the other cores may be used as well, once the first core can no longer keep up.
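To make this concrete, here is a minimal sketch of how an app can explicitly spread work across cores; the array-summing workload is purely illustrative. Apps that never split work like this stay on one core, which is why the extra cores usually sit idle.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelWork {
    public static void main(String[] args) throws Exception {
        // Ask the runtime how many cores are available; on a quad-core
        // phone this typically reports 4 (sometimes fewer, if cores are
        // powered down to save battery).
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Split a large array into one slice per core and sum each
        // slice concurrently.
        final long[] data = new long[1_000_000];
        List<Future<Long>> partials = new ArrayList<>();
        int sliceSize = data.length / cores;
        for (int i = 0; i < cores; i++) {
            final int start = i * sliceSize;
            final int end = (i == cores - 1) ? data.length : start + sliceSize;
            partials.add(pool.submit(new Callable<Long>() {
                @Override
                public Long call() {
                    long sum = 0;
                    for (int j = start; j < end; j++) sum += data[j];
                    return sum;
                }
            }));
        }

        // Combine the per-core partial sums.
        long total = 0;
        for (Future<Long> f : partials) total += f.get();
        System.out.println("Sum: " + total);
        pool.shutdown();
    }
}
```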
I was wondering what the maximum framerate is that can be achieved on iOS and Android devices with Unity3D. Can 60 fps, or even 100 fps, be reached?
What FPS should I provide:
Android as a platform aims to provide 60 fps as a standard. However, keep in mind this is for applications that come nowhere near the GPU requirements of a game.
If you can't do all of the calculations you require in 16 ms (60 fps), you should aim for 30 fps and give the user a consistent experience. Users will quickly detect variations in frame rate and interpret them as a performance issue with their phone.
Never over-promise and under-deliver.
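One way to check the frame pacing you actually deliver is Android's Choreographer API (API 16+), which reports the vsync timestamp of every frame. A minimal sketch, assuming it runs on the UI thread of an Android app:

```java
import android.util.Log;
import android.view.Choreographer;

// A small frame-pacing monitor. It logs whenever the gap between
// consecutive frames spans more than two vsync periods, i.e. at
// least one frame was dropped at 60 fps.
public class FrameMonitor implements Choreographer.FrameCallback {
    private static final long BUDGET_NS = 16_700_000L; // ~60 fps budget
    private long lastFrameTimeNanos = 0;

    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (lastFrameTimeNanos != 0) {
            long delta = frameTimeNanos - lastFrameTimeNanos;
            if (delta > BUDGET_NS * 2) {
                Log.w("FrameMonitor",
                        "Dropped frame(s): " + (delta / 1_000_000) + " ms");
            }
        }
        lastFrameTimeNanos = frameTimeNanos;
        // Re-register to keep observing subsequent frames.
        Choreographer.getInstance().postFrameCallback(this);
    }
}
```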
Modern phones claim quad-core processors and otherwise impressive hardware profiles. You are rarely taking advantage of a phone's full capabilities: the hardware and the Android platform are designed to use as little battery as possible and cut corners where they can.
Your user's phone is typically idling, and its full potential is activated only for a sparing number of milliseconds at a time to perform work and catch up on pending operations.
What is the max performance on Android:
You can search for Android benchmark tests using Unity; keep a very open mind about what each phone can push through, as there are more than 12,000 hardware configurations for Android.
Your development phone and those you test on should be expected to be significantly better than your users' phones.
I wonder if there is a penalty for running Dalvik+JIT on a multi-core ARM chip vs. a single-core chip.
E.g., if I disable multi-core support in my Android system build and run the entire phone on a single CPU core, will I get higher performance when running a single-threaded Java benchmark?
How much do memory barriers and synchronization cost on multi-core?
I am asking because I vaguely remember seeing single-threaded benchmark scores from single-core phones vs. dual-core phones. As long as the clock speed (MHz) is about the same, there is no big difference between the two phones. I had expected a slowdown on the dual-core phone...
The simple answer is "why don't you try it and find out?"
The complex answer is this:
There are costs to doing multicore synchronization, but there are also benefits to having multiple cores. You can undoubtedly devise a pathological case where a program suffers so much from the additional overhead of synchronization primitives that it is deeply affected by their performance. This is usually due to locking at too deep a level (inside your fast loop), as illustrated below. But in the general case, the fact that the dozen other system programs can get CPU time on other cores, and that the kernel services interrupts and I/O on them instead of interrupting your process, is likely to greatly outweigh the penalty incurred by MP synchronization.
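As a sketch of that pathological case, compare these two ways of protecting a shared counter (the counter workload is hypothetical, just to show the lock placement):

```java
// Locking inside the fast loop pays the synchronization cost on
// every iteration; hoisting the lock out pays it once per call.
public class LockGranularity {
    private final Object lock = new Object();
    private long total = 0;

    // Pathological: one lock acquisition per element.
    public void sumFineGrained(int[] values) {
        for (int v : values) {
            synchronized (lock) {
                total += v;
            }
        }
    }

    // Better: accumulate locally, synchronize once at the end.
    public void sumCoarseGrained(int[] values) {
        long local = 0;
        for (int v : values) local += v;
        synchronized (lock) {
            total += local;
        }
    }
}
```

On a single core the fine-grained version merely wastes cycles; on a multi-core system under contention it can also bounce the lock's cache line between cores, which is where barrier costs really start to show.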
In answer to your question, a DMB can take dozens of cycles, and a DSB is likely more costly. Depending on the implementation, exclusive load-store instructions can be very fast or very slow. WFE can consume several microseconds, though it shouldn't be needed if you are not experiencing contention.
Background: http://developer.android.com/training/articles/smp.html
Dalvik built for SMP does have additional overhead. The Java Memory Model requires that certain guarantees be enforced, which means issuing additional memory barriers, particularly when dealing with volatile fields and immutable objects.
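As a concrete illustration of where those barriers come from, here is the classic publish/consume pattern around a volatile field; the class and field names are hypothetical:

```java
// Marking "ready" volatile obliges the VM to emit memory barriers so
// that the write to "payload" becomes visible before the write to
// "ready". On SMP hardware this costs real barrier instructions; on
// a single core it is essentially free.
public class Publisher {
    private int payload;
    private volatile boolean ready = false;

    public void publish(int value) {
        payload = value;   // ordinary write
        ready = true;      // volatile write: barrier ordering the stores
    }

    public Integer tryConsume() {
        if (ready) {        // volatile read: barrier ordering the loads
            return payload; // guaranteed to see the published value
        }
        return null;
    }
}
```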
Whether or not the added overhead will be noticeable depends on what exactly you're doing and what device you're on, but generally speaking it's unlikely you'll notice it unless you're running a targeted benchmark.
If you build for UP and run Dalvik on a device with multiple cores, you may see flaky behavior -- see the "SMP failure example" appendix in the doc referenced above.
I want to control the aperture, shutter speed and ISO on my android phone. Is there a way in which I can access the hardware features?
I won't say it's impossible to do this, but it IS effectively impossible to do it in a way that's generalizable to all -- or even many -- Android phones. If you stray from the official path defined by the Android API, you're pretty much on your own, and this is basically an embedded hardware development project.
Let's start with the basics: you need a schematic of the camera subsystem and datasheets for everything in the image pipeline. For every phone you intend to support. In some cases, you might find a few phones with more or less identical camera subsystems (particularly when you're talking about slightly-different carrier-specific models sold in the US), and occasionally you might get lucky enough to have a lot of similarity between the phone you care about and a Nexus phone.
This is no small feat. As far as I know, not even Nexus phones have official schematics released. Popular phones (especially Samsung and HTC) usually get teardowns published, so everyone knows the broad details (camera module, video-encoding chipset, etc.), but there's still a lot of guesswork involved in figuring out how it's all wired together.
Make no mistake -- this isn't casual hacking territory. If terms like I2C, SPI, MMC, and iDCT mean nothing to you, you aren't likely to get very far. If you don't literally understand how CMOS image sensors are read serially, and how Bayer arrays are used to produce RGB images, you're almost certainly in over your head.
That doesn't mean you should throw in the towel and give up... but it DOES mean that trying to hack the camera on a commercial Android phone probably isn't the best place to start. There's a lot of background knowledge you're going to need in order to pull off a project like this, and you really need to acquire that knowledge from a hardware platform that YOU control & have proper documentation for. Make no mistake... on the hierarchy of "hard" Android software projects, this ranks pretty close to the top of the list.
My suggestion (simplified and condensed a bit): buy a Raspberry Pi, and learn how to light up an LED from a GPIO pin. Then learn how to selectively light up 8 LEDs through a 74HC595 shift register. Then buy a SPI-addressed flash chip on a breakout board, and learn how to write to it. At some point, buy a video image sensor with a "serial" (FYI, "serial" != "RS-232") interface from somebody like Sparkfun.com and learn how to read it one frame at a time, dumping the raw RGB data to flash. Learn how to use I2C to read and write the camera's control registers. At this point, you MIGHT be ready to tackle the camera in an Android phone for single photos.
If you're determined to start with an Android phone, at least stick to Nexus devices for now, and don't buy the phone (if you don't already own it) until you have the schematics, datasheets, and source code in your possession. Don't buy the phone thinking you'll be able to trace the schematic yourself. You won't. At least, not unless you're a grad student with one hell of a graduate-level electronics lab (with X-ray capabilities) at your disposal. Most of these chips and modules are micro-BGA. You aren't going to trace them with a multimeter, and every Android camera I'm aware of has most of its low-level driver logic hidden in loadable kernel modules whose source isn't available.
That said, I'd dearly love to see somebody pull a project like this off. :-)
Android has published online training that contains all the information you need:
You can find it here - Media APIs
However, there are limitations: not all hardware supports all kinds of parameters.
And if I recall correctly, you can't control the shutter speed or ISO.
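For reference, here is a sketch of roughly what the old camera API (android.hardware.Camera, pre-Camera2) does let you adjust; note the absence of any standard shutter-speed or ISO setter:

```java
import android.hardware.Camera;

public class CameraTuning {
    @SuppressWarnings("deprecation")
    public static void applySettings() {
        Camera camera = Camera.open();
        Camera.Parameters params = camera.getParameters();

        // Exposure compensation is supported, within a device-specific
        // range; (0, 0) means the device doesn't support it at all.
        int min = params.getMinExposureCompensation();
        int max = params.getMaxExposureCompensation();
        if (min != 0 || max != 0) {
            params.setExposureCompensation(Math.min(max, 2));
        }

        // White balance presets are supported on most devices; a real
        // app should check getSupportedWhiteBalance() first.
        params.setWhiteBalance(Camera.Parameters.WHITE_BALANCE_DAYLIGHT);

        camera.setParameters(params);
        camera.release();
    }
}
```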
I am trying to find the GPU clock speed in Android.
So far, no luck. Is that possible at all? I cannot find any way to query the hardware clock speed.
Android does not provide APIs for low level interaction with the GPU. Depending on the meaning of "Android" it is not entirely clear that there has to even be a GPU - the emulator would be a common example of something that does not, and basic ports to various development boards could be another.
It is possible, though sadly unlikely, that a given device vendor might choose to publicize some low-level programming information. Unfortunately, details of how to work with the GPU tend to be things that they hold quite closely and refuse to disclose - they argue it would give an advantage to their competitors - perhaps, but what it clearly does is prevent open source implementations of accelerated graphics drivers.
Even beyond the availability of information, there is the issue of access permission. The graphics hardware in Android is owned by system components such as surfaceflinger, and on secured devices not really made available for direct interaction by 3rd party application code.
Ultimately, though, even if you could find a number it would not mean much. The clock speed of the internal engine does not tell you the number of clock cycles needed to complete an operation, the number of parallel operations that can be in flight, the delays encountered moving data to and from memory, what caches are available, the efficiency of the algorithms, and so on. You might be better off running a benchmark that measures the performance you actually care about.
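For what it's worth, about the most you can portably learn about the GPU from an app is its vendor and renderer strings via OpenGL ES. A minimal sketch, assuming the calls run on the GL thread with a current context (for example inside a GLSurfaceView.Renderer):

```java
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.util.Log;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// Logs the GPU vendor/renderer/version strings once the GL surface
// exists. Clock speed is not exposed through any of these calls.
public class GpuInfoRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        Log.i("GpuInfo", "Vendor:   " + GLES20.glGetString(GLES20.GL_VENDOR));
        Log.i("GpuInfo", "Renderer: " + GLES20.glGetString(GLES20.GL_RENDERER));
        Log.i("GpuInfo", "Version:  " + GLES20.glGetString(GLES20.GL_VERSION));
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) { }

    @Override
    public void onDrawFrame(GL10 gl) { }
}
```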
I am planning to build an embedded system for processing the sound of my guitar, like a POD, with input and output and so on, running a program with presets, options, etc. on a small LCD screen; it should be multitouch for navigation.
Now I am at the very beginning and don't know where to start or what system I should use.
It should support the features I wrote above (like multitouch) and should be free.
Embedded Linux,
or
Android
or what?
Are you using off-the-shelf effects modules with some sort of interface to an embedded system, or are you planning on doing the effects in your program as well? I assume the latter in this response; please clarify if I have misunderstood the nature of the project:
Do your system engineering...
You are going to need to deal with the analog side of the inputs and outputs. Even digital inputs and outputs are analog in some respects, to keep the signals clean. Even optical is going to be analog between the optical interface and the processor's interface.
(I know this is long, keep reading it will converge on the answer to your question)
You will have some sort of hardware-to-software data-in interface. If you choose to support different interfaces, you will ideally want to normalize the data into a common form and data rate so that the effects processing only has to deal with it one way, avoiding a bunch of if-then-elses in the code (if bitrate is this then, else if bitrate is this then, else... if bitrate is this and data is unipolar then, else if bitrate is this and data is bipolar then, else...). A sketch of this normalization follows.
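A minimal sketch of that idea, assuming signed 16-bit PCM as the hypothetical internal format and a couple of illustrative source formats:

```java
// Normalize heterogeneous inputs into one internal format so the
// effects code has a single path instead of per-format branches.
public class SampleNormalizer {
    // Unsigned 8-bit (0..255, midpoint 128) -> signed 16-bit.
    public static short fromUnsigned8(int sample) {
        return (short) ((sample - 128) << 8);
    }

    // Signed 24-bit (-8388608..8388607) -> signed 16-bit,
    // truncating the low 8 bits.
    public static short fromSigned24(int sample) {
        return (short) (sample >> 8);
    }

    // Already signed 16-bit: pass through.
    public static short fromSigned16(short sample) {
        return sample;
    }
}
```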
The guts of the effects processing is as complicated as you want to make it: one effect at a time, or multiple? For each effect, define the parameters you are going to allow to be adjusted (I would start with the minimum number, which might be none, then add parameters later once it is all working). These parameters are going to need to be global in some form or fashion so that the user interface can get at them and modify them for the effects processing.
The output is the same as the input: a lot of analog work, converting from the normalized data stream into whatever the interface wants, needs, or you defined it to be.
Then there is the user interface... the easy part.
...
The guts of the software for the effects processing can be system-independent code, and it is probably more comfortably developed and tested on a desktop/laptop than on the target system, bearing in mind that the code should be written to be system- and operating-system-independent as well as embeddable (avoid floating point, divides, lots of local variables, etc.).
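For example, here is a minimal sketch of such an effect in fixed point; the echo delay length and feedback value are illustrative, not a recommendation:

```java
// A system-independent echo effect: no floating point, no divides,
// no allocation in the processing loop, so it can be unit-tested on
// a desktop and later moved to the target.
public class EchoEffect {
    private final short[] delayLine = new short[22050]; // ~0.5 s at 44.1 kHz
    private int writeIndex = 0;
    // Feedback of 0.5 expressed in Q15 fixed point (0.5 * 32768).
    private static final int FEEDBACK_Q15 = 16384;

    public void process(short[] buffer) {
        for (int i = 0; i < buffer.length; i++) {
            int delayed = delayLine[writeIndex];
            // out = in + feedback * delayed, computed in Q15.
            int out = buffer[i] + ((delayed * FEEDBACK_Q15) >> 15);
            // Saturate to 16 bits instead of wrapping.
            if (out > Short.MAX_VALUE) out = Short.MAX_VALUE;
            if (out < Short.MIN_VALUE) out = Short.MIN_VALUE;
            buffer[i] = (short) out;
            delayLine[writeIndex] = buffer[i];
            // Wrap without a modulo, keeping the loop divide-free.
            writeIndex++;
            if (writeIndex == delayLine.length) writeIndex = 0;
        }
    }
}
```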
Often, an enclosed system has some sort of user interface on the same black box: knobs or buttons, a screen of some sort, touch screens, etc. One system may manage the user interface while the other performs the task, with a connection between them. Not always, but it is a nice clean design, and it allows, for example, a product designed yesterday with buttons, knobs, and, say, a two-line LCD panel to be modernized to a touch screen at a fraction of the effort. And tomorrow sometime there may be some fiber that plugs directly into a socket in the back of your head, who knows.
Another reason to separate the processing tasks is that it is easier to ensure that the effects processor will never get bogged down by user-interface work. You don't want to turn a virtual knob on your touchscreen and have the graphics load of drawing the picture cause your audio to get garbled or turn into a nasty whine. Basically, the effects processor is real-time critical: you don't want to pick a string on the guitar and have the sound come out of the amp three seconds later because the processor is also drawing an animated background on your touchscreen panel. That processing needs to be tight, fast, and deterministic; every if-then-else in the code has to be accounted for and balanced. If you allow multiple effects in parallel, your processor needs the bandwidth to process all of the effects without a noticeable delay; if it is only one effect at a time, then the processor needs to be chosen to handle the one effect with the worst computational cost. The worst that could happen is that the input-to-output latency varies because of something the GUI processing is doing, making the music sound horrible.
So you can build the effects processor with its user interface being, for example, a serial interface and a protocol across that interface (which you define) for selecting effects and changing parameters. You can get the effects processor up, working, and tested using your desktop and/or laptop connected through the serial interface, with some ad hoc code (perhaps a command-line program) used to change parameters. A sketch of such a protocol handler follows.
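A minimal sketch of the effects-processor side of such a protocol; the command names and the parameter store are hypothetical, and the same handler would work whether the bytes arrive over serial, Bluetooth, or a socket:

```java
import java.util.HashMap;
import java.util.Map;

// Parses simple line-oriented text commands from the UI side, e.g.
// "SET echo.feedback 16384" or "SELECT echo", and updates the shared
// parameter store that the effects loop reads.
public class CommandParser {
    private final Map<String, Integer> parameters = new HashMap<>();
    private String activeEffect = "none";

    public String handleLine(String line) {
        String[] parts = line.trim().split("\\s+");
        if (parts.length == 3 && parts[0].equalsIgnoreCase("SET")) {
            try {
                parameters.put(parts[1], Integer.parseInt(parts[2]));
                return "OK";
            } catch (NumberFormatException e) {
                return "ERR bad value";
            }
        }
        if (parts.length == 2 && parts[0].equalsIgnoreCase("SELECT")) {
            activeEffect = parts[1];
            return "OK " + activeEffect;
        }
        return "ERR unknown command";
    }
}
```

Keeping the protocol this dumb is deliberate: the real-time effects loop never blocks on the UI, and any front end that can write a line of text can drive it.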
Now is where it becomes interesting. You can take an off-the-shelf embedded Linux system, for example, or embedded Android or whatever, write your app that uses the serial protocol, and if need be glue, bolt, tape, or mold this user-interface system on top of, around, or next to the effects-processor module. Note that you could have all of the platforms suggested: an Android version, a Linux (without Android) version, a Mac version, a Windows version, a DOS version, a QNX version, an Amiga version, you name it. You can try 100 different user-interface variations on the same OS: maybe I want the knobs to be sliders, or up/down push buttons, or a dial-looking thing that I rotate with a two-finger touch, or some other multi-touch gesture.
And it gets better: instead of, or in addition to, serial, you could use a Bluetooth module. Your user interface could be an iPhone app, an Android phone app, a Linux or Windows laptop app, your desktop computer, etc. All of these are (relatively) easy platforms for writing graphical user interfaces for selecting things.
Another approach, of course, could be Ethernet, in particular wireless Ethernet; then your user interface could be a web page, and the bulk of your user-interface work has already been done by the Firefox or Chrome or other browser team. (Wireless Ethernet, Bluetooth, ZigBee, or the like also allows the effects processor to be somewhere convenient; it doesn't have to be within arm's or foot's reach of you.)
...
Do your system engineering. Break the problem into a few big modules, define the interfaces between the modules, and then worry about the system engineering inside those modules, if necessary, until you get to easily digestible bites. The better the system engineering and the better defined the interfaces between modules, the easier the project will be to implement.
...
I would also investigate the XCore processors from XMOS; they have a very nice simulator with VCD waveform output that you can also use to accurately profile your effects processing. Personally, I would have a very tough time not choosing this platform for this project.
You should also investigate the OMAP from TI; this is what is on a BeagleBoard. You get a nice ARM that already has Linux and other things ported and running on it, but you also get a DSP block, and that DSP block could do your effects processing, likely in a way where the two don't interfere. You lose the ability to physically separate your user-interface processor and effects processor, but you gain elsewhere, and you can probably use a BeagleBoard off the shelf to develop a prototype (using analog audio in and out). I actually liked the HawkBoard better (with the HawkBoard you get a usable system out of the box; with the BeagleBoard you spend another BeagleBoard's worth of money on stuff that should have been on the board), but last I saw they had an instability flaw in the PCB design.
I am not up on the specs, but the Tegra (a number of upcoming phones are or will be Tegra-based), like the OMAP, should give some parallel processing with a lean toward audio/video as well as GUI. You only need the audio and the GUI (the easier two of the three). I think there is a development platform for sale that has a touchscreen on it and runs popular embedded OSes.
If you are trying to save money by making one of these things yourself: stop now, go to the store, and buy one. The homebrew one will cost a lot more, even if all the design tools are free; the hardware and the melted-down guitars and guitar amps are not. I speak from experience: many times I have spent many thousands of dollars on homebrew projects to avoid buying some off-the-shelf $300 item. I learned an awful lot, and personally the building of the thing is more fun than the using of it; I normally shelve a project once it is finally working. YMMV.
If I have misunderstood your question, please let me know and I will edit/remove/replace all of it with a different (short) answer.
In fact, it depends on what kind of hardware you want to run and interface with (and, as a consequence, how much you will have to work at the driver level... or not).
The problem with Android remains the same as with bare Linux. It could even be worse if there is no framework-level (Java) library, since you will have to manage both the C part (with JNI) and the Java part.
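To show what that split looks like, here is a minimal sketch of the Java side of a JNI binding; the library name "effects" and the method signature are hypothetical, and the matching C function would be compiled with the NDK:

```java
// The heavy audio work lives in a native library; Java only declares
// the entry points and loads the library.
public class NativeEffects {
    static {
        System.loadLibrary("effects"); // loads libeffects.so
    }

    // Processes one buffer of 16-bit samples in place, in C.
    public static native void process(short[] buffer, int length);
}
```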
Work out the specs... then you will choose wisely...
Reminder: Android is Linux-based.
Go for Android:
With any other embedded OS you will have too much integration work to deal with.
You can start by buying off-the-shelf hardware (a Galaxy Tab, an HTC phone, etc.) to begin your development and reach a prototype fast.