I'm new to the android platform, and I wanted to develop an app that runs in the background and reads the microphone input, applies a transformation to it, and outputs the resulting audio to the speaker.
I'm wondering if there is any lag perceived by the user in this process, or if it's possible to do it in near-realtime so that the user can hear the transformed audio in sync with the ambient audio. Thanks!
Yes, users will hear a severe lag or echo in any attempt at real-time audio on current unmodified Android devices using the provided APIs.
The summary is that Android devices are configured with fairly long audio buffers, reported to be somewhere in the range of 100 to 400 milliseconds, depending on the particular device and the Android OS version it is running. (Shorter buffers might be possible on Android devices for which one can build and install a modified custom build of the OS with your own custom audio drivers.)
(Humans perceive echoes at delays of roughly 25 ms and above. Audio buffers on iOS can be as short as 5.8 ms, so you may have better luck developing your near-real-time audio processing on a different device platform.)
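If you want to see what you're up against on a particular device, API 17+ exposes the native output buffer size and sample rate through AudioManager. A minimal sketch of estimating one buffer's duration (the property strings may be null on some devices):

import android.content.Context;
import android.media.AudioManager;

// Returns one output buffer's duration in ms, or -1 if unknown (API 17+).
static double outputBufferMs(Context context) {
    AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    String frames = am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
    String rate = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
    if (frames == null || rate == null) return -1;
    // One buffer's worth of audio; the full mic -> app -> speaker round trip
    // typically spans several buffers per side, hence the 100-400 ms figures above.
    return 1000.0 * Integer.parseInt(frames) / Integer.parseInt(rate);
}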
Audio processing on Android isn't all that great; in fact, to be honest, it's pretty bad. The out-of-the-box latency on Android devices for this sort of thing is awful. You can, however, tinker with the NDK and try to put together something based on OpenSL ES, which will have significantly lower latency. The basic loop is sketched below.
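For reference, here is a minimal Java-level sketch of the record -> transform -> play loop the question describes, using AudioRecord and AudioTrack with minimum buffer sizes (requires the RECORD_AUDIO permission). It shows the plumbing only and will still suffer the device's full round-trip latency; an OpenSL ES version written in C against the NDK has the same structure but lower latency. The transform() call is a hypothetical placeholder for your own processing:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;

// Mic -> transform -> speaker passthrough; run this on its own thread.
void passthrough() {
    final int rate = 44100;
    int recBuf = AudioRecord.getMinBufferSize(rate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    int playBuf = AudioTrack.getMinBufferSize(rate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, recBuf);
    AudioTrack play = new AudioTrack(AudioManager.STREAM_MUSIC, rate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            playBuf, AudioTrack.MODE_STREAM);
    rec.startRecording();
    play.play();
    short[] buf = new short[recBuf / 2]; // 16-bit samples, so bytes / 2
    while (!Thread.interrupted()) {
        int n = rec.read(buf, 0, buf.length);
        if (n > 0) {
            transform(buf, n);      // hypothetical placeholder for your DSP
            play.write(buf, 0, n);  // blocking write; adds its own latency
        }
    }
    rec.stop(); rec.release();
    play.stop(); play.release();
}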
There is a similar StackOverflow question: Playing back sound coming from microphone in real-time
Some other helpful links:
http://arunraghavan.net/2012/01/pulseaudio-vs-audioflinger-fight/
http://www.musiquetactile.fr/android-is-far-behind-ios/
http://www.geardiary.com/2012/02/21/the-dismal-state-of-android-as-a-music-production-solution/
On the other side of the coin, Android mic quality is way better than iOS quality. I have a Galaxy S4 and a very low-end Huawei phone, and both have wonderful mic quality when recording.
Related
How do I configure WebRTC for the lowest latency when live-streaming video one-way from an Android phone's camera to Firefox on a PC?
The quality could be around 15-24 fps at perhaps 640 x 480.
My app needs to live-stream video from an Android phone and transport it to a PC as close to real time as possible, to be viewed in Firefox on the PC (using a P2P protocol). Think of an app for controlling a robot, like playing a live-streamed video game.
How good a result can I expect? Maybe it can be done with 50 ms latency over a 3G/4G network?
Thank you.
Maybe it can be done with 50 ms latency over a 3G/4G network?
Impossible. You can't get even a single packet across a mobile network with that little latency, let alone capture video, encode it, mux it with audio, send it, receive it, buffer it, demux it, decode it, and present it. A 50 ms budget per frame is barely above what you get with analog transmission!
You'll find that many phone cameras alone introduce that much lag before the system even has the data to work with.
Bear in mind that it takes a human roughly 200 ms just to react to a visual stimulus, and my TV takes at least 150 ms to display a frame from its lossless HDMI input.
Your project requirements are completely out of touch with reality. You should also take time to understand the tradeoffs that come with pushing digital video toward the extreme low-latency end; you're going to make some real sacrifices by going under 1 s or 500 ms or so. Consider reading my post here: https://stackoverflow.com/a/37475943/362536 Particularly the "why not [magic technology here]" section.
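To make the arithmetic concrete, here is a rough, illustrative latency budget (ballpark figures, not measurements; every stage varies by device and network): camera capture and delivery to the app, ~30-100 ms; hardware encode, ~10-50 ms; one-way transit over 3G/4G, ~40-100+ ms; receiver jitter buffer, ~50-200 ms; demux, decode, and render, ~20-50 ms. Even the most optimistic figures sum to roughly 150 ms, three times the 50 ms target.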
The audio recorded by MediaRecorder in an Android app has a lot of noise. How can I use noise suppression to remove the noise while recording?
I had the same problem of low audio quality while using MediaRecorder and finally figured out the correct working solution. Here are a few modifications you need to make for good-quality audio recordings:
Save the file using the .m4a extension, and configure the recorder like this:
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
// The original answer passed MediaRecorder.OutputFormat.AMR_NB here, which only
// works because that constant happens to share its integer value with
// MediaRecorder.AudioEncoder.AAC; use the correct constant instead.
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mRecorder.setAudioEncodingBitRate(16 * 44100);
mRecorder.setAudioSamplingRate(44100);
Many solutions on Stack Overflow suggest .setAudioEncodingBitRate(16), but 16 bits per second is so low as to be meaningless.
Source: Grant's answer on Stack Overflow: very poor quality of audio recorded on my droidx using MediaRecorder, why?
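For completeness, a minimal sketch of the full recorder lifecycle with those settings; outputPath is a hypothetical file path you supply (it should end in .m4a), and error handling is kept to the essentials:

import android.media.MediaRecorder;
import java.io.IOException;

// Record high-quality AAC audio into an .m4a file.
MediaRecorder mRecorder = new MediaRecorder();
mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mRecorder.setAudioEncodingBitRate(16 * 44100);
mRecorder.setAudioSamplingRate(44100);
mRecorder.setOutputFile(outputPath); // hypothetical path ending in .m4a
try {
    mRecorder.prepare();
    mRecorder.start();
} catch (IOException e) {
    // could not access the mic or the output file
}
// ... later, when finished recording:
mRecorder.stop();
mRecorder.release();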
Are you testing this with the emulator, or on an actual device (if so, which device)? The acoustic tuning (which includes gain control, noise reduction, etc.) is specific to a given platform and product, and is not something you can change.
Jellybean includes APIs to let applications apply certain acoustic filters on recordings, and a noise suppressor is one of those. However, by using that API you're limiting your app to only function correctly on devices running Jellybean or later (and not even all of those devices might actually implement this functionality).
Another possibility would be to include a noise suppressor in your app. Speex, for example, includes noise suppression functionality, but it's geared towards low-bitrate speech encoding.
https://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html
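A minimal sketch of using the NoiseSuppressor API linked above (API 16+). Note that it attaches to an AudioRecord session rather than a MediaRecorder, that it requires the RECORD_AUDIO permission, and that availability must be checked at runtime, since not every device implements it:

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.media.audiofx.NoiseSuppressor;

// Attach the platform noise suppressor to a recording session, if present.
int rate = 44100;
int bufSize = AudioRecord.getMinBufferSize(rate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
if (NoiseSuppressor.isAvailable()) {
    NoiseSuppressor suppressor = NoiseSuppressor.create(record.getAudioSessionId());
    if (suppressor != null) { // create() can return null even when isAvailable() is true
        suppressor.setEnabled(true);
    }
}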
android voice recording - voice with background noise issue
I'm making an app that needs to send a video feed from a single source to a server where it can be accessed by desktop browsers and mobile apps.
So far, I've been using Adobe Media Server 5 with a live RTMP stream. This gives me about a 2.5-second delay in desktop browsers, but no native support for iOS; it leaves me the option of using Air to export the app for iOS, which produces a minimum 5-6 second delay.
The iOS docs strongly recommend the use of HTTP Live Streaming, which segments the stream into chunks and serves them using a dynamic playlist in a .m3u8 file. Doing this produces a 15+ second delay in desktop browsers and on mobile devices. A Google search suggests that this is to be expected from HLS.
I need a maximum of 2-4 seconds of delay across all devices, if possible. I've gotten poor results with Wowza but am open to revisiting it. FFmpeg seems inefficient, but I'm open to that as well if someone has had good results with it. Anybody have any suggestions? Thanks in advance.
I haven't even begun to find the most efficient way to stream to Android, so any help in that department would be much appreciated.
EDIT: Just to be clear, my plan is to make an iOS app, whether it's written natively or in Air. Same goes for Android, but I've yet to start on that.
In the iOS browser, HLS is the only way to serve live video. The absolute lowest latency would be to use 2-second segments with a 2-segment window in the manifest. That gives you 4 seconds of latency on the client, since the player buffers the full window before starting, plus another 2 to 4 on the server, which has to finish writing a segment before it can publish it. There is no way to do better without writing an app.
A 15-second delay for HLS streams is actually pretty good; to get lower latency, you need to use a different streaming protocol.
RTP/RTSP will give you the lowest latency and is typically used for VoIP and video conferencing, but you will find it very difficult to use across the variety of mobile and WiFi networks out there (some of them block RTP, even if unintentionally).
If you can write an iOS app that supports RTMP, that is the easiest way to go, and it should work on Android too (only old Android versions support Flash/RTMP natively). Be aware that decoding in software will result in poor battery life. There are other iOS apps that don't use HLS for streaming, but I think you'd need to limit yours to your own service (not make a generic video player).
Also, please remember that higher latency buys you higher video quality, less buffering, and a better user experience, so don't reduce latency unnecessarily.
I wrote an app that records audio. Everything works. However, I am going to be using this app to record classroom notes. How can I boost the microphone input to better capture all the sound in the room? I wouldn't mind using root if I must, but I wasn't sure whether there is an API to do this.
Thanks all for reading!
If you are asking how to make the microphone itself more sensitive, I'm not sure. That would involve operating the microphone at a higher voltage and/or hacking the drivers, neither of which is doable programmatically, AFAIK. However, you could try amplifying the output by multiplying each sample by some value (say 1.1 for a 10% volume boost); see the sketch below. Of course, the more you "amplify" the output, the more you will saturate the signal (i.e., distort the audio). There are signal processing techniques you can try in order to remove background noise and isolate the particular audio of interest, but these are merely processing improvements, not hardware upgrades. You can always try plugging an external microphone into the headphone jack and using that to record the audio.
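A minimal sketch of that software gain, assuming 16-bit PCM samples such as those read from AudioRecord; the clamping at the ends of the range is exactly the saturation/distortion mentioned above:

// Apply a gain factor to 16-bit PCM samples in place, clamping to the
// valid range; samples pushed past the limits are clipped (distortion).
static void applyGain(short[] samples, int count, float gain) {
    for (int i = 0; i < count; i++) {
        int boosted = Math.round(samples[i] * gain);
        if (boosted > Short.MAX_VALUE) boosted = Short.MAX_VALUE;
        if (boosted < Short.MIN_VALUE) boosted = Short.MIN_VALUE;
        samples[i] = (short) boosted;
    }
}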
I know this isn't the answer you were hoping for, but I hope it helps.
I am trying to receive audio from the headset's microphone using AudioRecord and play it back in real time to the headphones using AudioTrack. I have implemented the required code, but the problem is that there is a disturbing echo. I'm not using speakers; I'm using headphones. So what's causing this echo? I used the device's echo canceller, which was introduced in API level 11, and the echo decreased but didn't go away. I'm aware of the audio latency on Android devices, but I can't understand how the delay could cause an echo while I'm using headphones. Please guide me in the right direction.
I don't think there is a generic solution to this problem. The reasons are:
1) The headphone quality may be bad, and there may be internal coupling between the mic and the headphones, as the wires run very close together inside a headset.
2) The echo canceler on Android is not mandatory for every device to implement. Query for it first before enabling it (see the sketch after this list). Also, the echo canceler implementation may vary from device to device.
3) Latency hurts the performance of the echo canceler a lot, as the algorithm has to adapt to the delay and buffer that much audio.
4) Lower versions of Android have horrible delay problems, acknowledged by Google itself. You may want to move to a higher Android version, as these things have been improved greatly.
In general, for any API with direct hardware access, like the mic or camera, performance varies from device to device and cannot be guaranteed.
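As a concrete version of point 2, here is a minimal sketch of querying for and enabling the platform echo canceler (android.media.audiofx.AcousticEchoCanceler, API 16+) on an AudioRecord session. Note that create() can return null even when isAvailable() is true:

import android.media.AudioRecord;
import android.media.audiofx.AcousticEchoCanceler;

// Attach the platform AEC to an existing AudioRecord's session, if possible.
static AcousticEchoCanceler enableAec(AudioRecord record) {
    if (!AcousticEchoCanceler.isAvailable()) {
        return null; // this device ships no echo canceler at all
    }
    AcousticEchoCanceler aec = AcousticEchoCanceler.create(record.getAudioSessionId());
    if (aec != null) {
        aec.setEnabled(true);
    }
    return aec; // may still be null; fall back to a software AEC library
}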
You may want to look at OpenSL ES for better audio performance and easier integration with an AEC library, if you are thinking of integrating one.
Please look at -
https://source.android.com/devices/latency_design.html
Low-latency audio playback on Android
https://www.youtube.com/watch?v=d3kfEeMZ65c
Hope this helps,
Regards,
Shrish