We have a WebRTC-enabled service with two different endpoints: a web app and a native Android app. The Android app is installed on an Android device with a USB camera.
Using the web app in Chrome/Firefox, PC-to-PC audio quality is almost perfect, but we want to improve PC-to-Android and Android-to-Android audio quality.
Chrome uses acoustic echo cancellation (AEC, the "conference" variant) for high-end devices, but on Android it forces AECM, a lightweight AEC for mobile. We are not happy with AECM's performance. For our native app, we modified the WebRTC source code to use AEC instead, but the result is even worse: it acts as if echo cancellation is totally disabled, and we end up with a lot of echo and feedback!
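For context, the change we attempted looks roughly like the sketch below, written against the legacy AudioProcessing interface from a KitKat-era WebRTC checkout (the header path and names are assumptions from that era and may differ in other revisions):

// Rough sketch of forcing the full AEC instead of AECM in the native pipeline,
// using the legacy AudioProcessing API (names/header assumed from a KitKat-era tree).
#include "webrtc/modules/audio_processing/include/audio_processing.h"

void ForceFullAec(webrtc::AudioProcessing* apm) {
  apm->echo_control_mobile()->Enable(false);   // turn AECM off
  apm->echo_cancellation()->Enable(true);      // turn the full (desktop) AEC on
  apm->echo_cancellation()->set_suppression_level(
      webrtc::EchoCancellation::kModerateSuppression);
}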
According to this issue, AEC should only work with 8 kHz and 16 kHz sample rates and only on high-end devices. That should be OK: we are using the PCMU codec, which has an 8 kHz sample rate, and I think our Android device is powerful enough to handle the additional computational load of AEC:
Quad-core ARM CPU @ 2 GHz
8-core Mali-450MP GPU @ 600 MHz
1 GB DDR3 RAM
Android KitKat
If needed, I'm happy to share plots of our echo cancellation performance.
Is it not possible to use AEC on mobile devices, or are we missing something?
Maybe you need to adjust the latency used for estimating the echo.
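If the full AEC is in use, that usually means reporting a realistic render-to-capture delay to the audio processing module before each captured frame is processed. A minimal sketch against the legacy API (the 150 ms figure is a placeholder, not a recommendation):

// Sketch: the full AEC relies on an externally reported delay estimate.
// The value is device-specific and should be measured, not hard-coded.
#include "webrtc/modules/audio_processing/include/audio_processing.h"

int ProcessCaptureFrame(webrtc::AudioProcessing* apm, webrtc::AudioFrame* frame) {
  apm->set_stream_delay_ms(150);   // placeholder: render-to-capture delay in ms
  return apm->ProcessStream(frame);
}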
I was wondering what the maximum framerate is that can be achieved on iOS and Android devices with Unity3D. Can 60 fps or 100 fps be reached?
What FPS should I provide:
Android as a platform aims to provide 60 fps as a standard. However, keep in mind this is for applications that come nowhere near the GPU requirements of a game.
If you can't do all of the calculations you require in 16 ms (60 fps), you should aim for 30 fps and give the user a consistent experience. Users will quickly detect variations in frame rate and interpret them as a performance issue with their phone.
Never over-promise and under-deliver.
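To make the budget concrete, here is a minimal, engine-agnostic C++ sketch of the fixed-budget idea (in Unity you would use the engine's own frame-rate settings; this only illustrates padding every frame to a steady 33 ms so the rate never visibly fluctuates):

#include <chrono>
#include <thread>

// Illustrative frame pacing: cap at a fixed 30 fps budget (33 ms) so frame
// times stay consistent even when the per-frame work fluctuates.
void RunFrames(void (*doFrameWork)()) {
    using clock = std::chrono::steady_clock;
    const auto budget = std::chrono::milliseconds(33);     // ~30 fps
    for (;;) {
        auto start = clock::now();
        doFrameWork();                                      // update + draw
        auto elapsed = clock::now() - start;
        if (elapsed < budget)
            std::this_thread::sleep_for(budget - elapsed);  // pad to a steady rate
    }
}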
Modern phones claim to have quad-core processors and other incredible hardware profiles. You are rarely taking advantage of the full capabilities of a phone; the hardware and the Android platform are designed to use as little battery as possible and to cut corners where they can.
Your user's phone is typically idling, and its full potential is only unleashed for brief bursts of milliseconds to perform work and catch up on pending operations.
What is the max performance on Android:
You can search for Android benchmark tests using Unity; keep a very open mind about what each phone can push through, as there are more than 12,000 hardware configurations for Android.
Your development and testing phones should be expected to be significantly better than your users' phones.
I have three Android devices: single-core, dual-core, and quad-core. I was able to make an app that uses V4L2 to grab pictures.
In standby mode, all three devices give me 30 FPS (as announced by the camera hardware provider). But as soon as I start some processing on the image and draw it on a canvas, the FPS of the dual-core and quad-core devices drops drastically. The single-core device reaches 28 FPS, which is acceptable, but the dual-core drops to 18 (on average) and the quad-core to 12.
I have used the CPU performance governor on all three devices. Without performance mode, the FPS is even lower.
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
Initially I thought it had something to do with the customized Linux kernel in Android, so I flashed PicUntu on the quad-core device, but I still saw a similar drop in FPS. On a desktop with Windows or Ubuntu there is no such issue with the camera, so I guess it has something to do with ARM.
Has anyone faced a similar problem with a UVC camera and the V4L2 driver on Android (ARM devices)? Is there any way to increase the FPS?
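For reference, a stripped-down sketch of an mmap-based V4L2 capture loop that only dequeues/requeues buffers and counts frames (device path, resolution, and pixel format are placeholders). It is useful for checking whether the camera's raw delivery rate itself drops or only the processed rate:

// Minimal V4L2 capture-rate check: no image processing, just count frames.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cstdio>
#include <ctime>

int main() {
    int fd = open("/dev/video0", O_RDWR);          // assumed device node
    if (fd < 0) { perror("open"); return 1; }

    v4l2_format fmt{};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 640;                        // placeholder resolution
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;    // common for UVC cameras
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    v4l2_requestbuffers req{};
    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

    void *buffers[4] = {};                          // frame data would be read from here
    for (unsigned i = 0; i < req.count; ++i) {
        v4l2_buffer buf{};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);
        buffers[i] = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, buf.m.offset);
        ioctl(fd, VIDIOC_QBUF, &buf);
    }

    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);

    int frames = 0;
    time_t start = time(nullptr);
    while (time(nullptr) - start < 10) {            // measure for ~10 seconds
        v4l2_buffer buf{};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_DQBUF, &buf) == 0) {   // blocking dequeue
            ++frames;                               // processing would happen here
            ioctl(fd, VIDIOC_QBUF, &buf);
        }
    }
    printf("captured %d frames in ~10 s (%.1f fps)\n", frames, frames / 10.0);

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    close(fd);
    return 0;
}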
EDIT: Adding some more information.
On the KitKat device, if I bind my USB task to a single core (taskset), the FPS improves by a small factor. On KitKat, overall performance is also better than on Jelly Bean.
I am not sure why USB speed suffers when the application goes into the background, even when overall CPU usage is small.
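If affinity is what helps, the same experiment can be done from inside the process instead of via taskset. A sketch (core index 0 is an assumption; pick the core that services the USB interrupt):

// Pin the calling thread to one core from inside the process (programmatic taskset).
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>
#include <cstdio>

int main() {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                                    // core 0 (assumption)
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) { // 0 = calling thread
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    // ... start the V4L2 capture / USB work here ...
    return 0;
}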
I am trying to design a hardware-accelerated video encoder based on Android. I have done research for some time, but I did not find much that is useful.
Anyway, I came across GStreamer (http://gstreamer.freedesktop.org/). It is said that it can provide a hardware video encoder. However, after I read the manual, I found nothing about encoders.
Does anyone know about this stuff? Thank you!
It's going to be dependent on your hardware. What device are you running on?
If your processor contains an IP core that implements video encoding/decoding, the manufacturer needs to either offer a driver so you can call this hardware, or ideally go a step further and offer a specific plugin for GStreamer that does it.
For example, the Freescale i.MX6 processor (used in the Wandboard and CuBox) has a driver maintained by Freescale: https://github.com/Freescale/gstreamer-imx
TI OMAP processors have support: http://processors.wiki.ti.com/index.php/GStreamer, also see TI Distributed Codec Engine.
Broadcom processors have support: https://packages.debian.org/wheezy/gstreamer0.10-crystalhd
There are also several standard interfaces to video accelerator hardware, including VDPAU, VAAPI, and OpenMax IL. If your processor is not one of the above, someone may have written a driver that maps one of these standard interfaces to your hardware.
The Raspberry Pi is apparently supported by the OpenMax IL plugin: http://gstreamer.freedesktop.org/releases/gst-omx/1.0.0.html
If you don't know whether your processor is supported, I'd search for the name and various combinations of "VDPAU", "VAAPI", etc.
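As a quick programmatic check, you can also ask the GStreamer registry whether a given hardware encoder element is present before building a pipeline. The element names below are only examples; which ones exist depends entirely on the platform plugin installed:

// Check whether candidate hardware encoder elements are registered.
#include <gst/gst.h>
#include <cstdio>

int main(int argc, char **argv) {
    gst_init(&argc, &argv);
    const char *candidates[] = {"omxh264enc", "imxvpuenc_h264", "vaapih264enc"};
    for (const char *name : candidates) {
        GstElementFactory *f = gst_element_factory_find(name);
        if (f) {
            printf("found encoder element: %s\n", name);
            gst_object_unref(f);
        } else {
            printf("not available: %s\n", name);
        }
    }
    return 0;
}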
There is a wide variety of encoding options in GStreamer for taking a raw stream and encoding it. Pretty much any element whose name ends in "enc" can be used to do the encoding. Here is a good example of a few encoding pipelines:
https://developer.ridgerun.com/wiki/index.php/TVP5146_GStreamer_example_pipelines
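In the same spirit, here is a minimal self-contained encode pipeline built with gst_parse_launch (GStreamer 1.0 syntax, software x264enc as a stand-in; substitute your platform's hardware element if its plugin is available):

// Encode a test source to an MP4 file; element names are illustrative.
#include <gst/gst.h>

int main(int argc, char **argv) {
    gst_init(&argc, &argv);
    GError *error = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc num-buffers=300 ! video/x-raw,width=640,height=480 "
        "! x264enc ! mp4mux ! filesink location=out.mp4", &error);
    if (!pipeline) {
        g_printerr("failed to build pipeline: %s\n", error->message);
        g_error_free(error);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until end-of-stream or an error is posted on the bus.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
    if (msg) gst_message_unref(msg);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}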
With that said, I'd caution that video encoding is extremely hardware-intensive. I would also look at getting a special-purpose hardware encoder rather than doing software encoding via GStreamer if your stream is of any substantial size.
I am doing a bit of research about SVC for the H.264 codec. As far as I know, SVC is an extension of the earlier AVC that uses a base layer so that it works on a mobile device (preferably Android).
My question is: is it possible to enhance this base layer on a mobile device using SVC? Is a mobile device powerful enough (memory, RAM, etc.) to perform this?
Thanks
Your question cannot really be answered; it depends...
FWIW, here's my $0.02:
Modern mobile phones, such as the Samsung Galaxy S2, have a 1.2 GHz dual-core processor and 1 GB of RAM. While other phones may have lower specifications, mobiles in general are constantly improving. I see no reason why such devices could not decode an SVC stream. However, this also depends on other factors such as the resolution and complexity of the video, the number of SVC layers, and, very importantly, the efficiency of the decoder implementation.
While Android does have an H.264 decoder, I suspect it may be some time until it supports SVC.
I'm not sure I completely understand the question, but I'll try to answer anyway.
An SVC stream is always composed of a base layer, which is H.264-compatible, and one or more enhancement layers (temporal, spatial, or quality), which can only be decoded by an SVC decoder.
Most mobile devices use a hardware accelerator to decode the H.264 stream, so the CPU is hardly loaded while decoding the base layer.
To decode the enhancement layer(s) on Android, you will need an SVC decoder for ARM, which I'm not sure exists at all. You can try to port open-source projects like OpenSVC yourself.
Since decoding of the enhancement layers is highly dependent on the base layer, you will not be able to use the H.264 hardware accelerator for the base layer, because the accelerator cannot supply the metadata needed for the enhancement-layer decoding process.
So in terms of processing power, you will need to load the CPU for both the base layer and the enhancement layers. Whether it will run depends on the following:
1. Performance of the SVC decoder code
2. Resolution and FPS of the video
3. Complexity of the content
4. Number and type of enhancement layers
Hope this answers your question.
So anybody worth their salt in the Android development community knows about issue 3434, relating to low-latency audio in Android. For those who don't, you can educate yourself here: http://code.google.com/p/android/issues/detail?id=3434
I'm looking for any sort of temporary workaround for my personal project. I've heard tell of exposing private interfaces to the NDK by rolling your own build of Android and modifying the NDK.
All I need is a way to access the low-level ALSA drivers that are already packaged with the standard 2.2 build. I'd like the ability to send PCM directly to the audio hardware on my device. I don't care that the resulting app won't be distributable through the marketplace and likely won't run on any device other than mine.
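Assuming the device really does expose ALSA and you can link against alsa-lib in a custom build (neither is a given on a stock image), the userspace side would look roughly like this sketch, which asks for a ~30 ms latency target and writes raw PCM:

// Sketch: write PCM straight to an ALSA device via alsa-lib.
// Device name, rate, and latency target are assumptions for illustration.
#include <alsa/asoundlib.h>
#include <cmath>
#include <vector>

int main() {
    snd_pcm_t *pcm = nullptr;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    // Ask for a ~30 ms latency; what you actually get depends on the driver.
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE, SND_PCM_ACCESS_RW_INTERLEAVED,
                       1 /* mono */, 48000, 1 /* allow resample */,
                       30000 /* latency, us */);

    // 100 ms of a 440 Hz test tone at 48 kHz.
    const double kPi = 3.14159265358979323846;
    std::vector<short> tone(4800);
    for (size_t i = 0; i < tone.size(); ++i)
        tone[i] = static_cast<short>(10000 * std::sin(2 * kPi * 440 * i / 48000.0));

    for (int n = 0; n < 50; ++n) {
        snd_pcm_sframes_t written = snd_pcm_writei(pcm, tone.data(), tone.size());
        if (written < 0)
            snd_pcm_recover(pcm, written, 0);   // handle underruns
    }
    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}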
Anybody have any useful ideas?
-Griff
EDIT: I should mention that I know AudioTrack provides this functionality, but I'd like much lower latency -- AudioTrack sits around 300 ms; I'd like somewhere around 20-30 ms.
Griff, that's just the problem: the NDK will not improve the known latency issue (that's even documented). The hardware abstraction layer in native code currently adds to the latency, so it's not just about access to the low-level drivers (by the way, you shouldn't rely on the ALSA drivers being there anyway).
"Android: sound API (deterministic, low latency)" covers the tradeoffs pretty well. TL;DR: the NDK gives you a minor benefit because the threads can run at higher priority, but this benefit is meaningless pre-Jelly Bean because the entire audio system is tuned for Java.
The Galaxy Nexus running 4.1 can get fairly close to 30 ms of output latency.