Android threading with a single core

Just wondering if threading with one processor improves things for me.
I am building an application that performs data intensive calculations (fft on pcm data) while a UI is running and needs to run smoothly.
I have been looking at AsyncTask but was thinking:
If I have a single-core processor (the 600 MHz ARM11 in my Optimus One), will threading make a difference? I thought that for threads to run independently you would need multiple processors. Or have I gone wrong somewhere?

In order to guarantee responsiveness, it is imperative to leave the main (UI) thread free to do UI things; this excludes intensive drawing or 3D rendering in games. When you start to do computationally intensive things on your main thread, the user will see lag. A classic example:
On a button click, sleep(1000). Compare this with: on a button click, start an AsyncTask that sleeps(1000).
An AsyncTask (and other threading) allows the app to process the calculations and UI interactions "simultaneously".
As far as how concurrency works, context switching is the name of the game (as Dan posts).
Multithreading on a single core cpu will not increase your performance. In fact, the overhead associated with the context switching will actually decrease your performance. HOWEVER, who cares how fast your app is working, when the user gets frustrated with the UI and just closes the app?
AsyncTask is the way to go, for sure.
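The button-click idea above can be sketched with plain java.util.concurrent (which is what AsyncTask wraps under the hood). This is an illustrative sketch, not Android API code: `slowCalculation` stands in for the sleep(1000) or FFT work.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OffloadDemo {
    // Stand-in for the sleep(1000) / FFT-on-PCM work from the question.
    static int slowCalculation() throws InterruptedException {
        Thread.sleep(100);
        return 42;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();

        // The slow work runs on a background thread...
        Future<Integer> result = worker.submit(OffloadDemo::slowCalculation);

        // ...so the "UI" thread is NOT blocked: it can keep handling
        // events while the calculation runs, even on a single core.
        System.out.println("UI thread still responsive...");

        // Fetch the result once it is needed.
        System.out.println("Result: " + result.get());
        worker.shutdown();
    }
}
```

The point is not speed (the total CPU work is the same) but that the event-handling thread never stalls.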

Take a look at the Dev Guide article Designing for Responsiveness.
Android uses the Linux kernel plus other specialized software to form the Android operating system. It runs several processes, and each process has at least one thread. Multithreading on a single-processor (and multi-processor) hardware platform is accomplished by context switching. This gives the illusion of running more than one thread per processor at a time.
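That time-slicing illusion is easy to demonstrate: even with more runnable threads than CPU cores, every thread makes progress, because the kernel switches the core(s) between them. A minimal sketch:

```java
public class TimeSliceDemo {
    public static long[] runTwoThreads() throws InterruptedException {
        long[] counts = new long[2];

        // Two CPU-bound threads, both runnable at the same time.
        Thread a = new Thread(() -> { for (long i = 0; i < 5_000_000; i++) counts[0]++; });
        Thread b = new Thread(() -> { for (long i = 0; i < 5_000_000; i++) counts[1]++; });

        a.start();
        b.start();   // even on one core, the scheduler interleaves them
        a.join();
        b.join();    // join() makes the counters safely visible here
        return counts;
    }

    public static void main(String[] args) throws Exception {
        long[] c = runTwoThreads();
        System.out.println("thread A: " + c[0] + ", thread B: " + c[1]);
    }
}
```

Both loops finish regardless of core count; on a single core they simply take turns via context switches.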

Related

Is Main/UI Thread same as System UI Thread in Android?

This question is different from what's usually discussed about the main thread and UI thread in Android.
By "system thread" I mean the thread which handles system UI such as the status bar, notifications, and other ongoing system processes, say, the thread which handles the home button press, the recents menu, and so on.
By "main thread" I mean the app's thread which handles its UI (created when the process is launched).
I believe it's a separate thread, as a busy main thread in an app does not hang your device, and everything other than the app keeps working fine.
My question is this:
If the system can manage a separate thread for itself (to do UI work), distinct from any process's or app's main thread, then why can't apps have multiple threads that handle UI (no matter how complex that would be for devs)?
Please provide references as well when answering.
By "system thread" I mean the thread which handles system UI such as the status bar, notifications, and other ongoing system processes, say, the thread which handles the home button press, the recents menu, and so on.
There may well be several threads involved in this.
I believe it's a separate thread, as a busy main thread in an app does not hang your device, and everything other than the app keeps working fine.
More importantly, system UI is handled by separate OS processes, independent of the OS process for an app. As with most modern operating systems, in Android, a thread is owned by a process. Hence, by definition, separate processes have separate threads.
then why can't apps have multiple threads that handle UI (no matter how complex that would be for devs)?
That was an architectural decision, made close to 15 years ago, back when phone hardware was a lot more limited than it is today. Having a single "magic thread" is a common practice in constrained environments, as it avoids the overhead of constant checking for locks and other approaches to ensure thread-safety of data structures. While Google has done some things to try to improve on this (e.g., added a separate rendering thread), the core architectural limit remains, for backwards compatibility with older apps.
Also note that a fair amount of work is going into Jetpack Compose to try to allow for multi-threaded composables. Basically, since we now have a lot better hardware, Google is handling low-level synchronization for us in Compose, so composables can perhaps run on several threads without issue.

Context switching is too expensive

I am writing a video processing app and have come across the following performance issue:
Most of the methods in my app show large differences between CPU time and real time.
I have investigated using the DDMS TraceView and have discovered that the main culprit for these discrepancies is context switching in some base methods, such as MediaCodec.start() or MediaCodec.dequeueOutputBuffer().
MediaCodec.start(), for example, has 0.7 ms CPU time and 24.2 ms real time; 97% of that real time is taken up by the context switch.
This would not be a real problem, but the method is called quite often, and it is not the only one that presents this kind of symptom.
I also need to mention that all of the processing happens in a single AsyncTask, therefore on a single non-UI thread.
Is context switching a result of poor implementation, or an inescapable reality of threading?
I would very much appreciate any advice in this matter.
First, I doubt the time is actually spent context-switching. MediaCodec.start() is going to spend some amount of time waiting for the mediaserver process to talk to the video driver, and that's probably what you're seeing. (Unless you're using a software codec, your process doesn't do any of the actual work -- it sends IPC requests to mediaserver, which talks to the hardware codec.) It's possible traceview is just reporting its best guess at where the time went.
Second, AsyncTask threads are executed at a lower priority. Since MediaCodec should be doing all of the heavy lifting in the hardware codec, this won't affect throughput, but it's possible that it's having some effect on latency because other threads will be prioritized by the scheduler. If you're worried about performance, stop using AsyncTask. Either do the thread management yourself, or use the handy helpers in java.util.concurrent.
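The java.util.concurrent replacement suggested here might look like the sketch below. The codec calls themselves are omitted (they are placeholders); the point is a dedicated worker whose priority you control, instead of AsyncTask's background priority.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;

public class CodecWorker {
    // AsyncTask threads run at background priority, which can add
    // latency; this factory keeps normal priority instead.
    static final ThreadFactory NORMAL_PRIORITY = r -> {
        Thread t = new Thread(r, "codec-worker");
        t.setPriority(Thread.NORM_PRIORITY);
        t.setDaemon(true); // don't keep the JVM alive in this demo
        return t;
    };

    static final ExecutorService WORKER =
            Executors.newSingleThreadExecutor(NORMAL_PRIORITY);

    public static Future<String> process(String frameTag) {
        return WORKER.submit(() -> {
            // MediaCodec.start() / dequeueOutputBuffer() work would go here.
            return "processed:" + frameTag;
        });
    }

    public static void main(String[] args) throws Exception {
        System.out.println(process("frame-1").get());
    }
}
```

A single-threaded executor also preserves the ordering guarantee that a serial AsyncTask gave you.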
Third, if you really want to know what's happening when multiple threads and processes are involved, you should be using systrace, not traceview. An example of using systrace with custom trace markers (to watch CPU cores spin up) can be found here.

Bad performance in the UI thread with TextToSpeech

My app records video and uses TextToSpeech (android.speech.tts.TextToSpeech.speak()) at the same time.
On a high-end device (4 processors at 1.5 GHz) it works fine. But on a 2-processor 1.1 GHz device the UI thread becomes very slow, with freezes of 2-6 seconds.
I know the problem is TextToSpeech, because if I don't use it and only record video, the UI thread runs very fluently on the low-end device. If I use TextToSpeech while recording video, the UI thread stalls and the voice also freezes for 1-2 seconds.
Is there any way to improve performance of TextToSpeech.speak()?
You're using text to speech and video recording at the same time, and you're surprised it's slow? Both of these take a non-trivial amount of CPU resources. Some things just take processing power. Try not using them at the same time and you'll get better results.
If you need to use them at the same time- try using synthesizeToFile first to write the sound clip to a file, then playing the soundclip while recording. This way you aren't trying to generate the phonemes at the same time as recording.
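The precompute-then-play idea can be sketched in plain Java. Here `synthesize` is a stand-in for TextToSpeech.synthesizeToFile(), not an Android API: the expensive synthesis happens up front on a background thread, so the time-critical recording path only touches the cached result.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PrecomputeSpeech {
    // Stand-in for expensive phoneme generation / synthesizeToFile().
    static byte[] synthesize(String text) {
        return text.getBytes();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService bg = Executors.newSingleThreadExecutor();

        // Kick off synthesis before recording starts...
        Future<byte[]> clip = bg.submit(() -> synthesize("Recording started"));

        // ...so that during recording we only do cheap playback of
        // the already-generated clip.
        byte[] ready = clip.get();
        System.out.println("clip bytes: " + ready.length);
        bg.shutdown();
    }
}
```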
Assuming you mean 'cores' when you say 'processors': it seems like you are doing work that should run on three different threads.
The main thread should always be free. Try not to bog it down... ever!
Extend the AsyncTask class. AsyncTask will allow you to do something that will take some lengthy amount of time without blocking the main thread.
Although app code runs on a virtual machine (Dalvik, to be precise), its threads map to native threads scheduled by the kernel. This means that if you run 3 threads on two cores, the scheduler will decide which threads get processor cycles, and sometimes that means sharing cores.
I would say that if you ONLY plan on doing two heavy things at once, for a lower end device, you could implement this using the main thread for video, and a second thread for TextToSpeech. This isn't ideal because it potentially blocks the main thread. But since Video is the smoother of the two, it would be the first choice candidate for running on the Main UI Thread.
Ideally, you want minimum three threads, leaving the main UI Thread primarily unblocked. You can poll for results from both threads to detect completion.
If you happen to have 4 cores, then creating three threads should likely have more distributed performance over the available cores.
Some docs to get you going:
Android Multithreading - a Qualcomm article, and
Android: AsyncTask
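The three-thread layout described above can be sketched with Futures; the task bodies here are placeholders for the video and TTS work, and the main thread polls for completion without blocking.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreeThreadLayout {
    public static String runTasks() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Video and TTS each get their own thread.
        Future<String> video = pool.submit(() -> "video-done");
        Future<String> speech = pool.submit(() -> "speech-done");

        // The main thread stays unblocked: it polls isDone() and is
        // free to update the UI between checks.
        while (!video.isDone() || !speech.isDone()) {
            Thread.sleep(1);
        }
        String result = video.get() + "," + speech.get();
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTasks());
    }
}
```

In a real app you would post completion back to the main thread with a Handler rather than sleep-polling, but the thread layout is the same.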

Do std::async and std::future scale well on iOS and Android platforms?

I have a 2D particle system for a game engine in which I want to decouple the update loop of the particles from the main thread. I am using a thread pool implemented with boost::asio and am splitting up all the tasks into several stages and then combining the results on the main thread. This works well for me, and guarantees a limit on the total threads I have allowed for the pool (which is shared by all particle emitters which fire their own tasks independently).
I have read a lot of articles discussing std::async and std::future. This seems to be totally reasonable on Windows- or Linux-type systems, where thread creation is well managed by the OS and memory is bountiful. But very little focus is spent on discussing this kind of thing on mobile platforms, where thread creation may be costly. Because I am not sure how many threads might be made on these platforms, it seems hard to rely on std::async in contexts where I don't know how many async calls may be in action at a given time (if each creates its own thread).
https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/Multithreading/CreatingThreads/CreatingThreads.html#//apple_ref/doc/uid/10000057i-CH15-SW2
I would like to hear from an individual with multithreaded C++ mobile development experience. My question in summary is this:
If I create 20 particle effects, each with 4 concurrent async tasks, will this result in 80 threads, each taking 0.5 MB of RAM? Or does std::async have some magic support for being intelligent with thread creation and task sharing?
Alternatively, will async bog down the main thread (if I am not explicit about the thread-creation policy), effectively lagging my gameplay for relatively low-priority particle effects?
If so, in what context is std::async actually useful on mobile, as it seems thread creation is something you'd like to be a little more explicit about on these devices?

Worker Threads On Single Core Device

I have reposted my question from Android Enthusiasts here, as this is more of a programming question, and it was recommended.
Anyway. Here it is:
I am making an app that changes key values in the build.prop of a ROM. However, Android often gives me an ANR warning, as I am doing all the work on the UI thread. The Android documentation tells me that I should use worker threads and not do any work on the UI thread. But I am building this system app to go with a ROM for a single-core device.
Why would I want to use worker threads; isn't that less efficient? Android has to halt the UI thread, load the worker thread, and, when the UI is used again, halt the worker thread and load the UI thread again.
So, should I use worker threads (which slow the UI thread down anyway) or just do all of my work on the UI thread (even if the application UI is really slow)?
If your users were robots, your logic would make perfect sense. No context switching equals (very slightly) less overall computation time. You could benchmark it and see how much exactly.
However, in the present (and near future) your users will most likely be humans, and with that you need to start thinking about psychology: a moving progress bar, or responsiveness in general, will give your users the impression that the task is taking a shorter time than it would without any sort of feedback. The subjective speed is much higher with feedback.
There exist numerous papers on the subject of subjective speed, the first one I could find on the web has a nice comparison of progress bars in a video (basically, some bars seem to go faster than others, thus reducing the subjective overall wait time).
Use worker threads.
As you've said, doing everything on the UI thread locks your UI until the operation is completed. This means you can't update progress, can't handle input events (such as the user pressing a cancel button), etc.
Your concern about the speed of context switching is misplaced: this happens all the time anyway, as core system processes and other apps run in the background. Some quick Googling shows that context switching between threads within the same process is typically faster than a process-level context switch anyway. There is slightly more overhead introduced by creating the threads and then the subsequent context switches, but it's likely to be minute, especially if you only have one thread doing the work. For the reasons I've listed above alone (UI updates and the ability to accept user input), take the few-millisecond overall performance hit.
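The payoff of the worker thread can be sketched in plain Java: the "UI" thread stays free to read progress (and could honor a cancel button) while the work runs, none of which is possible if the work blocks the UI thread itself. The loop body is a placeholder for the build.prop edits.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class WorkerProgress {
    // Shared progress the UI thread can read at any time.
    static final AtomicInteger progress = new AtomicInteger(0);

    public static Thread startWork(int steps) {
        Thread worker = new Thread(() -> {
            for (int i = 1; i <= steps; i++) {
                // Edit one build.prop key here, then publish progress.
                progress.set(i * 100 / steps);
            }
        });
        worker.start();
        return worker;
    }

    public static void main(String[] args) throws Exception {
        Thread w = startWork(10);
        // The UI thread remains responsive; it could draw a progress
        // bar or handle a cancel press while the worker runs.
        w.join();
        System.out.println("progress: " + progress.get() + "%");
    }
}
```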
