How to modify the android kernel process scheduler?

I want to modify the Android scheduler "CFS" by myself.
I want to assign a real-time priority to user-interactive tasks, distinguished by a heuristic or whatsoever.
So, I just want to modify the Android kernel, build my modified kernel, and research the performance.
How can I do this?

Modifying Android's kernel scheduling policy is unlikely to be allowed from a security point of view. But based on the various features of "realtime" you can always make your program meet these requirements:
a. Responsiveness: ensure the input loop is as efficient as possible and always responds to input as fast as possible. In the Linux kernel this is done through "voluntary preemption".
b. Low latency: break every job into as small a piece as possible so that control can be passed back to respond to input, or, in the case of audio, so that commands can be issued at a precise clock tick (SCHED_DEADLINE scheduling). Android does have some API for this:
http://source.android.com/devices/audio/latency_design.html
In general, changing priority is not an ideal way to meet realtime requirements (e.g., giving a higher priority to one process may end up making another process suffer in performance). What is actually done (e.g., in LynxOS, a realtime OS used in missile systems; it is not Linux, but some of its components, such as the TCP/IP stack, come from FreeBSD) is to tune the system so that it performs with lots of spare hardware capacity. So in LynxOS many of the system threshold limits are set very low, so that the hardware is always free enough to respond quickly to input events.
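If the goal is simply to give one interactive task a real-time priority, a user-space alternative to patching CFS is to change the thread's scheduling class. This is only a sketch; on stock Android the call will typically be refused (EPERM) unless the process is privileged (root or CAP_SYS_NICE):

    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    // Sketch: move one thread from CFS (SCHED_OTHER) to the SCHED_FIFO
    // real-time class. Expect this to be rejected on an unrooted device.
    static bool make_realtime(pthread_t thread, int priority /* 1..99 */) {
        sched_param param{};
        param.sched_priority = priority;
        int err = pthread_setschedparam(thread, SCHED_FIFO, &param);
        if (err != 0) {
            std::fprintf(stderr, "pthread_setschedparam failed: %d\n", err);
            return false;   // stay on the default CFS policy
        }
        return true;
    }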
https://github.com/keesj/gomo/wiki/AndroidScheduling
Android Low latency Audio using SoundPool
Low-latency audio playback on Android

Related

Surfaceflinger notification

In Android's display pipeline, SurfaceFlinger plays an important role.
By the way, is there any way to notice SurfaceFlinger starting or stopping at the application level?
Or any other display processes worth observing?
I want to know about the time difference between touching and displaying.
The SurfaceFlinger process does not start or stop while applications are running. If it does, the system restarts.
It sounds like you're interested in knowing the latency between when you touch the screen, and when the results of that touch are visible. You can use systrace to observe the various events, though this requires a fair understanding of the system. (Start with this doc.)
In general, an app can expect 2 to 2.5 frames of latency between cause and effect. On the N5, with the DispSync mechanism, this can be reduced to 1.5 - 2 frames.
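For a rough sense of scale, assuming a 60 Hz display (about 16.7 ms per frame), 2 to 2.5 frames works out to roughly 33 to 42 ms of latency, and 1.5 to 2 frames to roughly 25 to 33 ms.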
There's no public API for 3rd party apps to access the surface flinger. If you are working at the platform level (your own device or custom ROM) then you'd have to add your own timing mechanism between the input subsystem and the surface flinger.

High Performance Audio Cracking / Prevent a CPU core from downclocking

This may be very specific, still trying to ask:
I'm founder of Heat Synthesizer, a software music synthesizer for Android. (https://play.google.com/store/apps/details?id=com.nilsschneider.heat.demo)
This app generates audio signals in realtime and needs to do heavy math calculations to do so.
Having seen the talk on Google I/O 2013 about "High Performance Audio on Android" (http://www.youtube.com/watch?v=d3kfEeMZ65c), I was excited to implement it as they suggested, but I keep having problems with crackling.
I have a CPU usage of a single core of about 50% on a Nexus 7 (2012), everything seems to be okay so far. Locking has been reduced to a minimum and most of the code is done lock-free.
Using an app that is called Usemon, I can see that the core I use for processing is used only 50% and is even being downclocked by the kernel because my CPU usage is not high enough.
However, these core speed changes result in crackling of the audio, because the next audio block is not calculated fast enough while the core is underclocked.
Is there any way to prevent a core from changing its clock frequency?
FWIW, I recommend the use of systrace (docs, explanation, example) for this sort of analysis. On a rooted device you can enable the "freq" tags, which show the clock frequencies of various components. Works best on Android 4.3 and later.
The hackish battery-unfriendly way to deal with this is to start up a second thread that does nothing but spin while your computations are in progress. In theory this shouldn't work (since you're spinning on a different core), but in practice it usually gets the job done. Make sure you verify that the device has multiple cores (Runtime.getRuntime().availableProcessors() or NDK equivalent), as doing this on a single-core device would be bad.
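A rough sketch of that spinner workaround (the flag and helper names here are made up for illustration):

    #include <atomic>
    #include <chrono>
    #include <thread>
    #include <unistd.h>

    // Set to true by the audio code while a block is being rendered.
    std::atomic<bool> computing{false};

    void start_spinner() {
        // NDK-side equivalent of Runtime.getRuntime().availableProcessors().
        if (sysconf(_SC_NPROCESSORS_CONF) < 2)
            return;                       // never do this on a single-core device
        std::thread([] {
            for (;;) {
                while (computing.load(std::memory_order_relaxed)) {
                    // Busy-wait so the governor never sees an idle system.
                }
                std::this_thread::sleep_for(std::chrono::milliseconds(1));
            }
        }).detach();
    }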
Assuming your computations are performed asynchronously in a separate thread, you can do a bit better by changing the worker thread from a "compute, then wait for work" to a "compute, then poll for work" model. Again, far less efficient battery-wise, but if you never sleep then the kernel will assume you're working hard and needs to keep the core at full speed. Make sure you drop out of polling mode if there isn't any actual work to do (i.e. you hit the end of input).
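And a minimal sketch of the "compute, then poll for work" model (again, the names and the stop condition are illustrative):

    #include <atomic>

    std::atomic<int> pending_blocks{0};    // incremented by the audio callback
    std::atomic<bool> running{true};

    void render_next_block() { /* hypothetical: fill the next audio buffer */ }

    void worker_loop() {
        while (running.load(std::memory_order_relaxed)) {
            if (pending_blocks.load(std::memory_order_acquire) > 0) {
                render_next_block();
                pending_blocks.fetch_sub(1, std::memory_order_release);
            }
            // No sleep here: polling keeps the core clocked up, at the cost of
            // battery. Drop back to a blocking wait once real work stops.
        }
    }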

Threaded low latency audio on Android

The short version:
I'm developing a synth app and using OpenSL with low latency. I was doing all the audio calculation in the OpenSL callback function (I know I shouldn't, but I did anyway). Now the calculations take about 75% CPU time on my Nexus 4, so the next step is to do all the calculations in multiple threads instead.
The problem I ran into was that the audio started to stutter, since the callback thread obviously runs at a high priority while my new thread doesn't. If I use more/bigger buffers the problem goes away, but so does the realtime feel. Setting a higher priority on the new thread doesn't seem to work.
So, is it even possible to do threaded low-latency audio, or do I have to do everything in the callback for it to work?
I have a buffer of 256 samples, which is about 5 ms, and that should be ages for the thread-scheduler-thingie to run my calc thread.
I think the fundamental problem lies in the performance of your synth-engine. A decent channel count with a Cortex-A8 or -A9 CPU is achievable with a single core. What language have you implemented it in? If it happens to be Java, I recommend porting it to C++.
Using multiple threads for synthesis is certainly possible, but brings with it new problems - namely that each thread must synchronise before the generated audio can be mixed.
Unless you take an additional latency hit that would come from running the synthesis threads asynchronously, the likely set-up is that in your render call-back you'd signal the additional synthesis threads and then wait for them to complete before mixing the audio from all of them together.
(an obvious optimisation is that the render call-back runs some of the processing itself as it's already running on the CPU and would otherwise be doing nothing).
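A minimal sketch of that signal-and-wait arrangement, using plain C++ primitives (all names are illustrative, and the actual rendering and mixing are left out):

    #include <condition_variable>
    #include <mutex>

    struct SynthBlock {
        std::mutex m;
        std::condition_variable cv;
        unsigned generation = 0;   // bumped once per audio block
        int remaining = 0;         // synthesis threads still rendering
    };

    // Called from the audio render callback.
    void render_block(SynthBlock& b, int num_workers) {
        {
            std::lock_guard<std::mutex> lock(b.m);
            b.remaining = num_workers;
            ++b.generation;
        }
        b.cv.notify_all();                                   // wake the synth threads
        std::unique_lock<std::mutex> lock(b.m);
        b.cv.wait(lock, [&] { return b.remaining == 0; });   // block until all are done
        // ...mix the per-thread buffers into the output here...
    }

    // Each synthesis thread runs this loop.
    void synth_worker(SynthBlock& b) {
        unsigned seen = 0;
        for (;;) {
            std::unique_lock<std::mutex> lock(b.m);
            b.cv.wait(lock, [&] { return b.generation != seen; });
            seen = b.generation;
            lock.unlock();
            // ...render this worker's share of the block...
            lock.lock();
            if (--b.remaining == 0)
                b.cv.notify_all();                           // last one wakes the callback
        }
    }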
Herein lies the problem. Unless you can be certain that your synth render threads run with real-time priority, you can potentially take a scheduling hit each time the render callback runs, and potentially another if you block the callback thread waiting for the synth render threads to catch up.
Last time I looked at audio on Android, Bionic lacked a means of setting real-time thread priority (e.g. SCHED_FIFO). In any case, whether this is even allowed is a matter of operating system policy: on a desktop Linux system you either need to be root or have adjusted the appropriate ulimit (as root). I'm not sure what Android does here, but I very much suspect that downloaded apps aren't given this permission by default. Nor the other useful permission, which is to mlock() the code and its likely stack needs into physical memory.
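On a desktop Linux box the two things mentioned above can be checked or attempted roughly like this (whether Android's policy permits either for an ordinary app is another matter):

    #include <sys/mman.h>
    #include <sys/resource.h>
    #include <cstdio>

    void check_realtime_permissions() {
        rlimit rl{};
        if (getrlimit(RLIMIT_RTPRIO, &rl) == 0)
            std::printf("max real-time priority allowed: %llu\n",
                        (unsigned long long)rl.rlim_cur);   // 0 => SCHED_FIFO/RR not permitted
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)        // pin code/stack into physical memory
            std::perror("mlockall");                        // usually EPERM without privilege
    }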

GPU clock speed in Android

I am trying to find the GPU clock speed in Android.
So far no luck. Is that possible at all? I cannot find any instruction in order to get the hardware clock speed.
Android does not provide APIs for low-level interaction with the GPU. Depending on the meaning of "Android", it is not entirely clear that there even has to be a GPU: the emulator would be a common example of something that does not have one, and basic ports to various development boards could be another.
It is possible, though sadly unlikely, that a given device vendor might choose to publicize some low-level programming information. Unfortunately, details of how to work with the GPU tend to be things that they hold quite closely and refuse to disclose - they argue it would give an advantage to their competitors - perhaps, but what it clearly does is prevent open source implementations of accelerated graphics drivers.
Even beyond the availability of information, there is the issue of access permission. The graphics hardware in Android is owned by system components such as surfaceflinger, and on secured devices not really made available for direct interaction by 3rd party application code.
Ultimately though, even if you could find a number it would not mean much. The clock speed of the internal engine does not tell you the number of clock cycles needed to complete an operation, the number of parallel operations which can be in flight, what delays are encountered in moving data to/from memory, what caches are available, the efficiency of the algorithms, etc. You might be better off running an actual performance benchmark.

Embedded System: which OS should I use?

I am planning to build an embedded system for processing the sound of my guitar, like a POD, with input and output and so on, and a program with presets, options, etc. on a small LCD screen, which should be multitouch for navigation.
Now I am at the very beginning and don't know where to start or what system I should use.
It should support the features I wrote above (like multitouch) and should be free.
Embedded Linux,
or
Android
or what?
Are you using off-the-shelf effects modules with some sort of interface to an embedded system, or are you planning on doing the effects in your program as well? I assume the latter in this response; please clarify if I have misunderstood the nature of the project:
Do your system engineering...
You are going to need to deal with the analog side of the inputs and outputs. Even digital inputs and outputs are analog in some respects to keep the signals clean. Even optical is going to be analog between the optical interface and the processor's interface.
(I know this is long, keep reading it will converge on the answer to your question)
You will have some sort of hardware-to-software data-in interface. If you choose to support different interfaces, you will ideally want to normalize the data into a common form and data rate so that the effects processing only has to deal with it one way (avoiding a bunch of if-then-elses in the code: if bitrate is this then, else if bitrate is this then, else... if bitrate is this and data is unipolar then, else if bitrate is this and data is bipolar then, else...).
The guts of the effects processing is as complicated as you want to make it: one effect at a time or multiple? For each effect, define the parameters you are going to allow to be adjusted (I would start with the minimum number, which might be none, then add parameters later once it is all working). These parameters are going to need to be global in some form or fashion so that the user interface can get at them and modify them for the effects processing (a tiny sketch of this follows below).
The output: same as the input, a lot of analog work; convert from the normalized data stream into whatever the interface wants or needs, or whatever you defined it to be.
Then there is the user interface... the easy part.
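As a tiny illustration of making effect parameters visible to both the UI and the effects processing (the effect and parameter names are invented):

    #include <atomic>

    // One effect's adjustable parameters, written by the UI side and read by
    // the effects-processing loop without taking a lock.
    struct DistortionParams {
        std::atomic<float> drive{1.0f};
        std::atomic<float> tone{0.5f};
        std::atomic<bool>  enabled{false};
    };

    DistortionParams g_distortion;   // "global in some form or fashion"

    // UI side: called when the user turns a knob.
    void on_drive_changed(float value) {
        g_distortion.drive.store(value, std::memory_order_relaxed);
    }

    // Effects side: sampled once per processed block.
    float current_drive() {
        return g_distortion.drive.load(std::memory_order_relaxed);
    }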
...
The guts of the software for the effects processing can be system-independent code, and is probably more comfortably developed and tested on a desktop/laptop than on the target system, bearing in mind that the code should be written to be system- and operating-system-independent as well as embeddable (avoid floating point, divides, lots of local variables, etc.).
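For instance, a gain stage written in that "embeddable" style might use fixed-point arithmetic instead of floats (Q15 format here, purely as an illustration):

    #include <cstdint>

    // Multiply a 16-bit sample by a Q15 gain (0x7FFF is just under 1.0) using
    // only integer math: one multiply and a shift, no floating point, no divide.
    static inline int16_t apply_gain_q15(int16_t sample, int16_t gain_q15) {
        int32_t product = (int32_t)sample * (int32_t)gain_q15;
        return (int16_t)(product >> 15);
    }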
Sometimes, if not often, in an enclosed system with some sort of user interface on the same black box (knobs or buttons, a screen of some sort, touch screens, etc.), one system manages the user interface while the other performs the task, with a connection between them. Not always, but it is a nice clean design, and it allows, for example, a product designed yesterday with buttons and knobs and, say, a two-line LCD panel to be modernized with a touch screen at a fraction of the effort; and tomorrow sometime there may be some fiber that plugs directly into a socket in the back of your head, who knows.
Another reason to separate the processing tasks is so that it is easier to ensure that the effects processor will never get bogged down by user-interface stuff. You don't want to be turning a virtual knob on your touchscreen and have the graphics load to draw the picture cause your audio to get garbled or turn into a nasty whine. Basically, the effects processor is real-time critical: you don't want to pick the string on the guitar and have the sound come out of the amp three seconds later because the processor is also drawing an animated background on your touch screen panel. That processing needs to be tight, fast, and deterministic; every if-then-else in the code has to be accounted for and balanced. If you allow multiple effects in parallel, your processor needs to have the bandwidth to process all of the effects without a noticeable delay; otherwise, if only one effect runs at a time, the processor needs to be chosen to handle the one effect with the worst computational effort. The worst that could happen is that the input-to-output latency varies because of something the GUI processing is doing, causing the music to sound horrible.
So you can work the effects processor with its user interface being, for example, a serial interface and a protocol across that interface (which you define) for selecting effects and changing parameters. You can get the effects processor up and working and tested using your desktop and/or laptop connected through the serial interface, with some ad hoc code being used to change parameters, perhaps a command-line program.
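The protocol itself can be trivial; something along these lines (the commands are entirely made up) is enough to drive the effects processor from a command-line program during development:

    #include <cstdio>

    // Hypothetical line-based protocol, e.g. received over the serial port:
    //   "EFFECT 2"        select effect number 2
    //   "PARAM 0 0.75"    set parameter 0 of the current effect to 0.75
    void handle_command(const char* line) {
        int effect, param;
        float value;
        if (std::sscanf(line, "EFFECT %d", &effect) == 1) {
            // ...switch the active effect...
        } else if (std::sscanf(line, "PARAM %d %f", &param, &value) == 2) {
            // ...update the parameter that the effects code reads...
        } else {
            std::fprintf(stderr, "unknown command: %s\n", line);
        }
    }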
Now is where it becomes interesting. You can get an off-the-shelf embedded Linux system, for example, or embedded Android or whatever, write your app that uses the serial protocol, and if need be glue, bolt, tape, or mold this user interface system on top of, around, or next to the effects processor module. Note that you could have all of the platforms suggested: an Android version, a Linux (without Android) version, a Mac version, a Windows version, a DOS version, a QNX version, an Amiga version, you name it. You can try 100 different user interface variations on the same OS: maybe I want the knobs to be sliders, or up/down push buttons, or a dial-looking thing that I rotate with a two-finger touch, or some other multi-touch gesture.
And it gets better: instead of, or in addition to, serial you could use a Bluetooth module. Your user interface could be an iPhone app, or an Android phone app, or a laptop Linux or Windows app, or your desktop computer, etc. All of these are (relatively) easy platforms for writing graphical user interfaces for selecting things.
Another approach of course could be Ethernet, in particular wireless Ethernet; then your user interface could be a web page, and the bulk of your user interface work has already been done by the Firefox or Chrome or other team. (Wireless Ethernet or Bluetooth or ZigBee or whatever also allows the effects processor to be somewhere convenient, rather than within arm's/foot's reach of you.)
...
Do your system engineering. Break the problem into a few big modules, define the interfaces between the modules, and then, if necessary, worry about the system engineering inside those modules until you get to easily digestible bites. The better the system engineering and the better defined the interfaces between modules, the easier the project will be to implement.
...
I would also investigate the XCore processors from XMOS; they have a very nice simulator with VCD waveform output that you can also use to accurately profile your effects processing. Personally, I would have a very tough time not choosing this platform for this project.
You should also investigate the OMAP from TI; this is what is on a BeagleBoard. You get a nice ARM that already has Linux and other things ported and running on it, but you also get a DSP block; that DSP block could do your effects processing, and likely in a way that the two don't interfere. You lose the ability to physically separate your user interface processor and effects processor, but you gain elsewhere, and can probably use a BeagleBoard off the shelf to develop a prototype (using analog audio in and out). I actually liked the HawkBoard better (with the HawkBoard you get a usable system out of the box; with the BeagleBoard you spend another BeagleBoard's worth of money on stuff that should have been on the board), but last I saw they had an instability flaw in the PCB design.
I am not up on the specs, but the Tegra (a number of upcoming phones are or will be Tegra based), like the OMAP, should give some parallel processing with a lean toward audio/video as well as GUI. You only need the audio and GUI (the easier two of the three). I think there is a development platform for sale that has a touchscreen on it and popular embedded OSes.
If you are trying to save money by making one of these things yourself: stop now, go to the store, and buy one. The homebrew one will cost a lot more, even if all the design stuff is free; the hardware and melted-down guitars and guitar amps are not. I speak from experience: many times I have spent many thousands of dollars on a homebrew project to avoid buying some off-the-shelf $300 item. I learned an awful lot, and personally the building of the thing is more fun than the using of it; I normally shelve it once it is finally working. YMMV.
If I have misunderstood your question, please let me know and I will edit/remove/replace all of it with a different (short) answer.
In fact, it depends on what kind of hardware you want to run and interface with (and, as a consequence, how much you will have to work at the driver level... or not).
The problem with Android remains the same as with bare Linux. It could even be worse if there is no framework-level library (Java), since you will have to manage both the C part (via JNI) and the Java part.
Work the specs... then you will choose wisely...
Reminder: Android is Linux-based.
Go for Android:
With any other embedded OS you will have too much integration work to deal with.
You can start by buying off-the-shelf hardware (Galaxy Tab, HTC phone, etc.) to start your development and reach a prototype fast.
