Maximum number of simultaneous MediaRecorder instances on android? - android

I created an Android app that records the device screen (using the MediaProjection API) and video from the camera at the same time. I use MediaRecorder in both cases. I need a way to find out whether a device is actually capable of recording two video streams simultaneously. I assume there is some limit on the number of streams that can be encoded simultaneously on a given device, but I cannot find any API on the Android platform to query for that information.
Things I discovered so far:
The documentation for MediaRecorder.release() advises releasing the MediaRecorder as soon as possible:
" Even if multiple instances of the same codec are supported, some performance degradation may be expected when unnecessary multiple instances are used at the same time."
This suggests there is a limit on the number of codec instances, which directly limits the number of MediaRecorders.
I wrote test code that creates MediaRecorders (configured to use MPEG4/H264) and starts them in a loop. On a Nexus 5 it always fails with java.io.IOException: prepare failed when calling prepare() on the 6th instance. This suggests you can have only 5 instances of MediaRecorder on the Nexus 5.
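For reference, here is a minimal sketch of the kind of probing loop I used; the Surface video source, output paths, and 720p size are illustrative choices, and error handling is trimmed:

// Probe how many MediaRecorder instances can be prepared and started before failure.
// Illustrative sketch only; output directory, video source and sizes are placeholders.
import android.media.MediaRecorder;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class RecorderLimitProbe {
    public static int probeMaxInstances(String outputDir) {
        List<MediaRecorder> recorders = new ArrayList<>();
        try {
            while (true) {
                MediaRecorder r = new MediaRecorder();
                recorders.add(r); // track it even if prepare() fails, so it gets released below
                r.setVideoSource(MediaRecorder.VideoSource.SURFACE);
                r.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
                r.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
                r.setVideoSize(1280, 720);
                r.setOutputFile(outputDir + "/probe_" + recorders.size() + ".mp4");
                r.prepare();   // on the Nexus 5 this throws IOException on the 6th instance
                r.start();
            }
        } catch (IOException | RuntimeException e) {
            // prepare()/start() failed: the concurrent codec limit has been reached
        } finally {
            for (MediaRecorder r : recorders) {
                try { r.stop(); } catch (RuntimeException ignored) { } // the last one may never have started
                r.release();
            }
        }
        return recorders.size() - 1; // instances that prepared and started successfully
    }
}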

I'm not aware of anything you can query for this information, though it's possible something went into Lollipop that I didn't see.
There is a limit on the number of hardware codec instances that is largely dependent on hardware bandwidth. It's not a simple question of how many streams the device can handle -- some devices might be able to encode two 720p streams but not two 1080p streams.
On some devices the codec may fall back to a software implementation if it runs out of hardware resources. Things will work but will be considerably slower. (I've seen this for H.264 decoding, but I don't know if it also happens for encoding.)
I don't believe there is a minimum system requirement in CTS. It would be useful to know that all devices could, say, decode two 1080p streams and encode one 1080p stream simultaneously, so that a video editor could be made for all devices, but I don't know if such a requirement has been added. (Some very inexpensive devices would struggle to meet it.)
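For what it's worth, API 23 later added MediaCodecInfo.CodecCapabilities.getMaxSupportedInstances(), which reports a per-codec concurrent-instance count, though it is documented as a hint rather than a hard guarantee. A minimal query sketch, assuming API 23+:

// Query the advertised concurrent-instance count for an AVC encoder (API 23+).
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;

public class CodecInstanceQuery {
    public static int maxAvcEncoderInstances() {
        MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (!info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase("video/avc")) {
                    // A hint about concurrent instances; actual capacity can still be
                    // lower under memory or bandwidth pressure, as described above.
                    return info.getCapabilitiesForType(type).getMaxSupportedInstances();
                }
            }
        }
        return -1; // no AVC encoder found
    }
}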

I think it really depends on the device and its RAM capacity. You can read the screen and camera buffers as much as you like, but I believe only one read happens at a time (not simultaneously) to prevent concurrency issues; honestly, though, I don't know for sure.

Related

How do some apps overcome phone recording restrictions?

Background
Phone call recording is not really supported on Android, yet some devices support it to some extent.
This made various call recording apps gather as much information as possible about devices and what needs to be done on each of them, and decide based on this what to do.
Some even offer root solutions.
One such example is boldbeast Call Recorder app, which offers a lot of various configurations to change:
"record mode" . Shows 14 modes for non-rooted devices, and up to 34 for rooted. Also shows "Alsa mode" as an option for it, for rooted devices.
Has "Tune Audio Effect ("auto tune a groupd of parameters") .
Has "Tune Audio Route", with the possible values of "Disabled", "Group1", "Group2", "Group3"
For rooted devices:
"change audio controls" ("auto change audio controls")
"change audio driver" (change audio drive settings to enable record mode 21,22,23,24,31,32,33,34")
For rooted devices: "start input stream"
The problem
If I need to create a call recording app, there is no way around finding the various workarounds for the various devices, but it seems other apps use terms that don't appear in the API.
For example, I can't find any of the terms used by the app I've mentioned.
What I've found
Other than tons of questions about how to record calls on Android, showing that it doesn't work on all devices, I found some interesting things. Here are my attempts and insights so far:
There are some audio recording sources we can use while preparing the recording (docs here), but sadly each device may behave differently. For some, VOICE_CALL works, and for others, different sources do. But at least we can try...
On a OnePlus 2 with Android 6.0.1, incoming calls can be recorded using VOICE_CALL, but I can't get outgoing calls to record there unless I use MIC as the audio source together with the speaker turned on. Somehow, the app I've mentioned records them without any issues. I'm sure I will see other issues on other Android devices, as I've tried to address this whole topic in the past. Update: I've found this sample project (also here), which for some reason sleeps for 2 seconds on the UI thread between the prepare and start calls of the MediaRecorder. It works fine, and when I did something similar (waiting 1 second using Handler.postDelayed), it worked fine too. The comment written there is "Sometimes prepare takes some time to complete". (A sketch of this workaround appears after this list.)
On Galaxy S7 with Android 8, I've failed to get sound of the other side for outgoing calls AND incoming calls (even with MIC and speaker), no matter what I did, yet the app I've mentioned worked fine.
To let you try my POC of call recording, I've published an open source github repository here, having a sample that will record a single call, and let you listen to the most recent one, if all works well.
This "ViktorDegtyarev - CallRecLib" SDK , which doesn't seem to work at all, and crashes on various Android versions
These 2 old sample projects : rvoix , esnyder-callrecorder , both fail to actually record. The second doesn't even seem to work on Android 6.0.1 device, which it's supposed to support.
aykuttasil - CallRecorder sample and axet - android-call-recorder sample - both, just like on my POC, don't have any tweaking except for AudioSource, and because of this they fails to record on some cases, such as OnePlus 2 output-audio of outgoing calls.
Most third party apps only offer the AudioSource tweaking, but some (like "boldbeast") do offer more. One example is "Automatic Call Recorder" which has "configuration" (10 values to choose from, first is "default") and "method" (5 vales to choose from, first is "default"). Those apps probably do not want others to understand what those configurations mean, so they put general names. Or, it's just too complicated for everyone (especially for users), so they generalize the names.
There is an API of "setMode" here, but it doesn't seem to change upon calling it. I was thinking of maybe change the "channel" of where the call is being used, this way, but it doesn't work. It stays on the value of "2" during call, which is MODE_IN_CALL.
There are customized parameters that are available for various devices (each OEM and its own parameters), which can be set here and maybe even via JNI (here and here) , but I don't get where to get this information from (meaning which pairs of key-value are available). I've searched in a lot of places, but couldn't find any website that talks about which possible parameters are available, and for which devices.
I was thinking of using AudioRecord instead of MediaRecorder class for recording, thinking that it's a bit low level, so it could give me more power and access to customized capabilities, but it seems to be very similar to MediaRecorder, and even use the same audio sources (example here).
Another try I had with low level API, was even further, of using JNI (OpenSL ES for Android). For this, I couldn't find much information (except here and here), and only found the 2 samples of Google here (called "audio echo" and "native audio"), which are not about recording sound, or at least I don't see them occur.
Android P might have an official way to record calls (read here and here). Testing on my Android P DP3 device (Pixel 2), I could record both sides fine in both incoming and outgoing calls, using "DEFAULT" as the audio source, so maybe the API will finally become official and work on all Android versions. I wrote about it here and here.
I was thinking that maybe the Visualizer class could be a workaround for recording, but according to a StackOverflow post (here) the quality is extremely low, so I decided against trying it. Plus, I couldn't find a sample of how to record from it.
I've found some parameters that might be available on some devices, here (found from here), all starting with "AUDIO_PARAMETER_", but testing on a Galaxy S7, all of them returned an empty string. I've also found this website, which gave me the idea of using audioManager.setParameters("noise_suppression=off") together with the MIC audio source, but this didn't seem to do anything in the case of the Galaxy S7.
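Below is a minimal sketch of the VOICE_CALL + delayed-start workaround referred to in the list above. It is purely illustrative: which audio source actually works (VOICE_CALL, VOICE_COMMUNICATION, MIC, ...), whether extra or privileged permissions are required, and how long to wait are all device-dependent assumptions, not guarantees.

// Illustrative call-recording sketch: VOICE_CALL source plus a delayed start().
// Requires RECORD_AUDIO; on many devices VOICE_CALL records silence or throws
// unless the app is privileged, which is exactly the per-device mess described above.
import android.media.MediaRecorder;
import android.os.Handler;
import android.os.Looper;

import java.io.IOException;

public class CallRecorderSketch {
    private MediaRecorder recorder;

    public void startRecording(String outputPath) throws IOException {
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_CALL); // or VOICE_COMMUNICATION / MIC, device-dependent
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        // "Sometimes prepare takes some time to complete": start a moment later.
        new Handler(Looper.getMainLooper()).postDelayed(() -> recorder.start(), 1000);
    }

    public void stopRecording() {
        if (recorder != null) {
            try { recorder.stop(); } catch (RuntimeException ignored) { }
            recorder.release();
            recorder = null;
        }
    }
}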
The questions
As opposed to other similar questions about this topic, I'm not asking how to record calls. I already know it's a very problematic and complex problem. I already know I will have to address various configurations, and that I will probably use a server to store all of them and find there the best match for each one.
What I want to ask is more about the tweaking and workarounds:
Is there a list of configurations for the various devices, Android versions, and what to choose for each?
Besides the audio source, which other configurations can be used?
Which parameters are possible for the various devices and Android versions? Are there any OEM websites describing them?
What are the various terms in the app I've mentioned? Where can I find information on how to change them?
Which tools are available for rooted devices?
Is it possible to know which devices support call recording and which don't, by using the API?
About the workaround for the OnePlus 2 (waiting a moment before starting the recording): why is it needed? Is it needed on all Android versions? Is it a known issue? Would 1 second be enough?
Why did I fail to record the other side on the Galaxy S7, even when using MIC & speaker?
EDIT: I've found this note about an accessibility service being able to help with call recording:
https://developer.android.com/guide/topics/media/sharing-audio-input#voice_call_ordinary_app
I'm not sure how to use it, though. It seems "ACR Phone Dialer" uses it. If anyone knows how it can be done, please let me know.
I spent many weeks working on a voice-call recording app, so I faced all of your issues/questions/problems.
Moreover, my project had a low priority, so I didn't spend much time on it every day; I ended up working on this app for many months while Android was changing under the hood (minor and major releases).
I was always developing on the same Galaxy Note 5 using its stock ROM (without root), but I discovered that on the same device the behaviour changed from one Android release to another without any explanation.
For example, from Nougat 7.0 to 7.1.2 I was unable to record a voice call using the same code as before.
Google has enforced or changed restrictions on voice-call recording many times.
In the beginning it was sufficient to use the VOICE_CALL AudioSource. Then manufacturers started to interpret this value however they wanted, and the result was that one implementation worked well while another did not.
Then reflection was needed to call undocumented/hidden methods to start voice-call recording.
Then Google added a runtime check, so calling them directly was no longer possible, even using reflection.
In any case, that method lacked stability, because there was no guarantee that a method had the same name on all devices.
Then I started to reverse-engineer apps that were still working on newer Android versions, and I discovered that they were using a completely different and more robust approach. This took me many weeks, because all of these apps use JNI libraries and try to hide the method inside assembler code.
When I successfully created a test app that recorded well, I tried the SAME code on many different devices and ROMs/versions, and surprisingly it worked well.
This means that all those different methods you can see in these apps' settings (I'm 98% sure about it) are just "fake", or refer to OLD methods that are no longer used.
A separate note should be made about rooted devices:
these devices can change audio routes, so a different approach can be used in that case.
[1] There isn't any list or website listing all supported devices or the best method for a successful voice-call recording.
[6] It's not possible to know which devices support voice-call recording just by using an API call. You have to try and catch exceptions...
[8] Recording with MIC+speaker suffers from many issues: (1) the caller will hear all of your ambient sound, so the privacy bug is a big issue; (2) the echo is a big problem; (3) the recording volume is very low, as is the quality of the recorded voice.
According to my tests, one way to improve this is to have an AccessibilityService active (it doesn't need to do anything at all) while choosing voice recognition as the audio source. It's also recommended to have the speaker turned on, because the recording still comes from the microphone.
This seems to exist in some call-recording apps.
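A rough sketch of that setup, assuming the user has enabled an (essentially empty) accessibility service and that VOICE_RECOGNITION happens to be the source that behaves best on the device; the class names and output path are illustrative:

// The service is declared in the manifest with BIND_ACCESSIBILITY_SERVICE and enabled by the
// user; it does not need to do anything, it just needs to be active while recording.
import android.accessibilityservice.AccessibilityService;
import android.media.MediaRecorder;
import android.view.accessibility.AccessibilityEvent;

import java.io.IOException;

public class EmptyAccessibilityService extends AccessibilityService {
    @Override public void onAccessibilityEvent(AccessibilityEvent event) { }
    @Override public void onInterrupt() { }
}

class VoiceRecognitionRecorder {
    static MediaRecorder start(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_RECOGNITION);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        recorder.start(); // turn the speaker on so the far side is picked up by the microphone
        return recorder;
    }
}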
The weird thing is that Google has written this as a rule on the Play Store:
"The Accessibility API is not designed and cannot be requested for remote call audio recording."
https://support.google.com/googleplay/android-developer/answer/11899428
No idea what the "remote" means here.
Anyway, I've updated the Github repository to include these additions.

Android - Choosing between MediaRecorder, MediaCodec and Ffmpeg

I am working on a video recording and sharing application for Android. The specifications of the app are as follows:-
Recording a 10 second (maximum) video from inside the app (not using the device's camera app)
No further editing on the video
Storing the video in a Firebase Cloud Storage (GCS) bucket
Downloading and playing of the said video by other users
From the research I did on SO and other sources, I have found the following (please correct me if I am wrong):-
The three options and their respective features are:-
1. FFmpeg
Capable of achieving the above goal and has extensive answers and explanations on sites like SO; however:
Increases the APK size by 20-30 MB (large library)
Runs the risk of not working properly on certain 64-bit devices
2. MediaRecorder
Reliable and supported by most devices
Will store files in .mp4 format (unless converted to h264)
Easier for playback (no decoding needed)
Adds the mp4 and 3gp headers
Increases latency according to this question
3. MediaCodec
Low level
Will require MediaCodec, MediaMuxer, and MediaExtractor
Output in H.264 (without using MediaMuxer for playback)
Good for video manipulations (though, not required in my use case)
Not supported by pre 4.3 (API 18) devices
More difficult to implement and code (my opinion - please correct me if I am wrong)
Unavailability of extensive information, tutorials, answers or samples (Bigflake.com being the only exception)
After spending days on this, I still can't figure out which approach suits my particular use case. Please elaborate on what I should do for my application. If there's a completely different approach, then I am open to that as well.
My biggest criteria are that the video encoding process be as efficient as possible and that the video stored in the cloud take up the least possible space without compromising video quality.
Also, I'd be grateful if you could suggest the appropriate format for saving and distributing the video in Firebase Storage, and point me to tutorials or samples of your suggested approach.
Thank you in advance! And sorry for the long read.
Your overview of this topic is accurate and to the point.
I'll just add my 2 cents on a few things you might have missed:
1. FFmpeg
+/-If you build your own .so you can reduce the size down to about 2-3 MB, depending on the use case of course. Editing a 6000-line build script takes time and effort, though
++Supports wide range of formats (almost everything)
++Results are the same for every device
++Any resolution supported
--High energy consumption due to SW en-/decoding, which also makes it slow. There is a plugin to support lib-stagefright, but it doesn't work on many devices (as of May 2016)
--Licensing can be problematic depending on your location and use case. I'm not a lawyer, but we had legal consulting on this topic and it's quite complex.
2. MediaRecorder
++Easiest to implement (simplified access to MediaCodec/libstagefright). Raw data gets passed to the encoder directly, so no messing around there
++HW Accelerated on most devices. Makes it fast and energy saving.
++Delay only applies to live streaming
--Dependent on implementation of HW-manufacturers
--Results may vary from device to device
++No licensing problems
3. MediaCodec
+/-Most of 2.MediaRecorder applies to this as well (apart from ease of use)
++Most flexible access to HW-en-/decoding
--Hard to use for cases that were not thought of (e.g. mixing videos from different sources)
+/-Delay for streaming can be eliminated (is tricky though)
--HW manufacturers sometimes don't implement things correctly (e.g. the Samsung Galaxy S5 sometimes produces a SIGSEGV if live data from some DSLRs is fed to the encoder. It works fine for a while, then all of a sudden it's SIGSEGV. This might be the DSLR's fault, but the crash is unavoidable and takes down the app, which in the end is the app developer's fault ;) )
--If used without MediaMuxer you need either good understanding of media containers or rely on 3rd party libraries
The list is obviously not complete and some points might not be correct. The last time I worked with video was almost half a year ago.
As for your use case, I would recommend using MediaRecorder, since it is the easiest to implement, supported on all devices, and offers a good range of quality/size options. FFmpeg produces better results for the same storage size, but takes longer (in an extreme case, the hardware path encoded DSLR live footage 30 times faster) and consumes more energy.
As far as I understand your use-case, there is no need to fiddle around with MediaCodec since you want to encode and decode only.
I suggest using VP8 or VP9, since you won't run into licensing problems. Again, I'm no lawyer, but distributing H.264 from your own server might make you a broadcasting station, or so I was told.
Hope this helps you in your decision making
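To make the recommendation concrete, here is a minimal sketch of the MediaRecorder route for a 10-second camera clip. It assumes the legacy android.hardware.Camera API and an already-created preview Surface; the resolution, bitrate, and output path are placeholder values, and WEBM/VP8 could be substituted for MP4/H.264 if licensing is a concern.

// Minimal 10-second clip recorder using the legacy Camera + MediaRecorder path.
// All numeric values are placeholders; CAMERA and RECORD_AUDIO permissions are required.
import android.hardware.Camera;
import android.media.MediaRecorder;
import android.view.Surface;

import java.io.IOException;

public class ClipRecorder {
    public MediaRecorder start(Camera camera, Surface previewSurface, String outputPath)
            throws IOException {
        camera.unlock(); // hand the camera over to MediaRecorder
        MediaRecorder recorder = new MediaRecorder();
        recorder.setCamera(camera);
        recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setVideoSize(1280, 720);
        recorder.setVideoEncodingBitRate(4_000_000);
        recorder.setMaxDuration(10_000);          // auto-stop after 10 seconds
        recorder.setOutputFile(outputPath);
        recorder.setPreviewDisplay(previewSurface);
        recorder.prepare();
        recorder.start();
        return recorder;
    }
}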

AVC HW encoder with MediaCodec Surface reliability?

I'm working on an Android app that uses MediaCodec to encode H.264 video using the Surface method. I am targeting Android 5.0, and I've followed all the examples and samples from bigflake.com (I started working on this project two years ago, so I've been through all the gotchas and other issues).
Everything works nicely on a Nexus 6 (which uses the Qualcomm hardware encoder for this), and I'm able to record 1440p video with AAC audio flawlessly in real time, to a multitude of outputs (from local MP4 files up to HTTP streaming).
But when I try to use the app on a Sony Android TV (running Android 5.1) that uses a Mediatek chipset, all hell breaks loose, starting at the encoding level. To be more specific:
It's basically impossible to make the hardware encoder ("OMX.MTK.VIDEO.ENCODER.AVC") work properly. With the most basic setup (which succeeds at the MediaCodec level), I almost never get output buffers out of it, only weird, spammy logcat error messages stating that the driver has encountered an error each time a frame should be encoded, like this:
01-20 05:04:30.575 1096-10598/? E/venc_omx_lib: VENC_DrvInit failed(-1)!
01-20 05:04:30.575 1096-10598/? E/MtkOmxVenc: [ERROR] cannot set param
01-20 05:04:30.575 1096-10598/? E/MtkOmxVenc: [ERROR] EncSettingH264Enc fail
Sometimes, trying to configure it to encode at a 360 by 640 pixel resolution will succeed in making the encoder actually encode something, but the first problem I notice is that it only ever creates one keyframe: the very first video frame. After that, no more keyframes are created, only P-frames. Of course, the i-frame interval was set to a decent value and works with no issues on other devices. Needless to say, this makes it impossible to create seekable MP4 files, or any kind of streamable solution on top.
Most of the time, after releasing the encoder, logcat will start spamming endlessly with "Waiting for input frame to be released...", which basically requires a reboot of the device, since nothing will work from that point on anyway.
In the cases where it doesn't go haywire after a simple release(), no problem: the hardware encoder just makes sure it cannot be created a second time, and everything falls back to the generic SOFTWARE AVC Google encoder, which of course is basically a mockup encoder that does little more than spit out an error when asked to encode anything larger than 160p video...
So, my question is: is there any hope of making this MediaCodec API actually work on such a device? My understanding was that there are CTS tests performed by Google/manufacturers (in this case, Sony) that would allow a developer to assume that an API is supported on a device that prides itself on running Android 5.1. Am I missing something obvious here? Has anyone actually tried doing this (a simple MediaCodec video encoding test) and succeeded? It's really frustrating!
PS: it's worth mentioning that not even Sony provides a recording capability for this TV set yet, which many people are complaining about anyway. So my guess is that this is more of a Mediatek problem, but still, what exactly is Android's CTS for in this case anyway?
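For reference, the setup being described is roughly the following bigflake-style Surface-input encoder configuration; the resolution, bitrate, frame rate, and i-frame interval below are illustrative values for the kind of configuration that works on the Nexus 6 but triggers the errors above on the MTK encoder:

// Surface-input AVC encoder setup (bigflake-style); values are illustrative.
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

import java.io.IOException;

public class SurfaceEncoderSketch {
    public static MediaCodec createEncoder(int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 6_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1); // request a keyframe every second

        MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface(); // created after configure(), before start()
        encoder.start();
        // ... render frames onto inputSurface, drain output buffers, feed them to a MediaMuxer ...
        return encoder;
    }
}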

expected live streaming lag from an android device

How much lag is expected when an android device's camera is streamed live?
I checked Miracast and it has a lag of around 150-300 ms (usually around 160-190 ms).
I have a Bluetooth app and a Wi-Fi Direct app, and both lag by around 400-550 ms. I was wondering if it would be possible to reproduce, or come closer to, Miracast's performance.
I am encoding the camera frames in H.264 and using my own custom protocol to transmit the encoded frames over a TCP connection (in the case of Wi-Fi Direct).
Video encoding is compute intensive, so any help you can get from the hardware usually speeds things up. It is worth making sure you are using codecs that leverage the hardware, i.e. avoid having your H.264 encoding done entirely in software.
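As a quick sanity check (not something spelled out in the original answer), you can list the device's AVC encoders and see whether any look hardware-backed. The name-prefix heuristic below is a convention rather than a guarantee, and it needs API 21+ for MediaCodecList; API 29+ exposes MediaCodecInfo.isHardwareAccelerated() directly:

// Rough check for a hardware-backed AVC encoder (API 21+ for MediaCodecList.getCodecInfos()).
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.os.Build;

public class HwEncoderCheck {
    public static boolean hasHardwareAvcEncoder() {
        MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (!info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if (!type.equalsIgnoreCase("video/avc")) continue;
                if (Build.VERSION.SDK_INT >= 29) {
                    if (info.isHardwareAccelerated()) return true;
                } else {
                    // "OMX.google." / "c2.android." prefixes conventionally mark software codecs.
                    String name = info.getName().toLowerCase();
                    if (!name.startsWith("omx.google.") && !name.startsWith("c2.android.")) return true;
                }
            }
        }
        return false;
    }
}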
There is a video here which discusses how to access HW-accelerated video codecs; it is a little old, but it applies to older devices:
https://www.youtube.com/watch?v=tLH-oGfiGaA
I believe that MediaCodec now provides a Java API to the Hardware Codecs (have not tried or seen speed comparison tests myself), which makes things easier:
http://developer.android.com/about/versions/android-4.1.html#Multimedia
See the note here also about using the camera's Surface preview as a speed aid:
https://stackoverflow.com/a/17243244/334402
As an aside, 500 ms does not seem that bad, and you are probably into diminishing returns, so the effort to reduce it further may be high if you can live with your current lag.
Also, I assume you are measuring the lag (or latency) on the server side? If so, you need to look at how the server is decoding and presenting as well, especially if you are comparing your own player with a third-party one.
It is worth looking at your jitter buffer on the receiving side, irrespective of the stream latency. In simple terms, waiting for a larger number of packets before you start playback may cause more startup delay, but it may also provide a better overall user experience, because the larger buffer is more tolerant of delayed packets before dropping into a 'buffering' mode that users tend not to like.
It's a balance, and your needs may dictate a bias one way or the other. If you were planning a video-chat-like application, for example, then the delay is very important, as it starts to become annoying to users above 200-300 ms. If you are providing a feed from a sports event, then delay may be less important and avoiding buffering pauses may give better perceived quality.

Android: sound API (deterministic, low latency)

I'm reviewing all kinds of Android sound APIs and I'd like to know which one I should use.
My goal is to get low latency audio or, at least, deterministic behavior regarding delay of playback.
We've had a lot of problems, and it seems that the Android sound API is crap, so I'm exploring possibilities.
The problem we have is that there is a significant delay between sound_out.write(sound_samples); and the actual sound coming from the speakers. Usually it is around 300 ms. The problem is that it's different on every device; some don't have the problem, but most are crippled (however, a CS call has zero latency). The biggest issue with this ridiculous delay is that on some devices it appears to be a random value (i.e. it's not always 300 ms).
I'm reading about OpenSL ES and I'd like to know if anybody has experience with it, or whether it's the same thing wrapped in a different package?
I prefer to have native access, but I don't mind a Java layer of indirection as long as I can get deterministic behavior: either the delay has to be constant (for a given device), or I'd like access to the current playback position instead of guessing it with an error range of ±300 ms...
EDIT: 1.5 years later I tried multiple Android phones to see how to get the best possible latency for real-time voice communication. Using specialized tools I measured the delay of the waveout path. The best results were just over 100 ms; most phones were in the 180 ms range. Anybody have ideas?
SoundPool is the lowest-latency interface on most devices, because the pool is stored in the audio process. All of the other audio paths require inter-process communication. OpenSL is the best choice if SoundPool doesn't meet your needs.
Why OpenSL? AudioTrack and OpenSL have similar latencies, with one important difference: AudioTrack buffer callbacks are serviced in Dalvik, while OpenSL callbacks are serviced in native threads. The current implementation of Dalvik isn't capable of servicing callbacks at extremely low latencies, because there is no way to suspend garbage collection during audio callbacks. This means that the minimum size for AudioTrack buffers has to be larger than the minimum size for OpenSL buffers to sustain glitch-free playback.
On most Android releases this difference between AudioTrack and OpenSL made no difference at all. But with Jellybean, Android now has a low-latency audio path. The actual latency is still device dependent, but it can be considerably lower than previously. For instance, http://code.google.com/p/music-synthesizer-for-android/ uses 384-frame buffers on the Galaxy Nexus for a total output latency of under 30ms. This requires the audio thread to service buffers approximately once every 8ms, which was not feasible on previous Android releases. It is still not feasible in a Dalvik thread.
This explanation assumes two things: first, that you are requesting the smallest possible buffers from OpenSL and doing your processing in the buffer callback rather than with a buffer queue. Second, that your device supports the low-latency path. On most current devices you will not see much difference between AudioTrack and OpenSL ES. But on devices that support Jellybean+ and low-latency audio, OpenSL ES will give you the lowest-latency path.
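Practical aside, not part of the original answer: to stay on that low-latency path you generally have to request the device's preferred sample rate and buffer size rather than arbitrary values. A minimal query sketch, assuming API 17+:

// Query the device's preferred output sample rate and frames-per-buffer (API 17+);
// feeding these values to OpenSL ES (or AudioTrack) helps hit the fast/low-latency path.
import android.content.Context;
import android.media.AudioManager;

public class LowLatencyParams {
    public static int[] query(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        String rate = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
        String frames = am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
        int sampleRate = (rate != null) ? Integer.parseInt(rate) : 44100;        // fallback guesses
        int framesPerBuffer = (frames != null) ? Integer.parseInt(frames) : 256;
        return new int[] { sampleRate, framesPerBuffer };
    }
}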
IIRC, OpenSL is passed through the same interface as AudioTrack, so at best it will match AudioTrack. (FWIW, I'm currently using OpenSL for "low latency" output)
The sad truth is there is no such thing as low latency audio on Android. There isn't even a proper way to flag and/or filter devices based on audio latency.
What interface you'll want to use to minimize latency is going to depend on what you are trying to do.
If you want to have an audio stream you'll be looking at either OpenSL or AudioTrack.
If you want to trigger some static oneshots you may want to use SoundPool. For static oneshots SoundPool will have low latency as the samples are preloaded to the hardware. I think it's possible to preload oneshots using OpenSL as well, but I haven't tried.
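A minimal sketch of the SoundPool one-shot approach (the API 21+ builder form; the resource id and audio attributes are placeholders):

// Preload a short sample into SoundPool and trigger it as a low-latency one-shot.
import android.content.Context;
import android.media.AudioAttributes;
import android.media.SoundPool;

public class OneShotPlayer {
    private final SoundPool pool;
    private final int soundId;
    private boolean loaded;

    public OneShotPlayer(Context context, int rawResId) {
        AudioAttributes attrs = new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_GAME)
                .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                .build();
        pool = new SoundPool.Builder().setMaxStreams(4).setAudioAttributes(attrs).build();
        pool.setOnLoadCompleteListener((p, sampleId, status) -> loaded = (status == 0));
        soundId = pool.load(context, rawResId, 1); // sample is decoded up front and kept in memory
    }

    public void play() {
        if (loaded) {
            pool.play(soundId, 1f, 1f, 1, 0, 1f); // left/right volume, priority, loop, rate
        }
    }
}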
The lowest latency you can get is from SoundPool. There's a limit on how big a sound you can play that way, but if you're under the limit (1 MB, IIRC) it's pretty low latency. Even that's probably not 40 ms, though.
But it is faster than what you can get by streaming, at least in my experience.
Caveat: you may see occasional crashes in SoundPool on Samsung devices. I have a theory that it only happens when you access the SoundPool from multiple threads, but I haven't verified this.
EDIT: OpenSL ES apparently has extremely HIGH latency on Kindle Fire, while SoundPool is much better, but the reverse may be true on other platforms.
About the problem of a deterministic/constant latency, here you can find an interesting article:
APPROACHES FOR CONSTANT AUDIO LATENCY ON ANDROID
The core of their investigation is this: because the Audio HAL, which is one of the deeper levels of the audio path and is responsible for the timing of the audio callback events, is vendor-implemented, the relative latencies can vary, especially on cheap hardware.
So they suggest two approaches to reduce the variance of the latency. One is to take care of the callback timing by inserting audio at fixed intervals; the other is to filter the callback times, estimating the time at which a constant-latency callback should have occurred by applying a smoothing filter.
With these two approaches they could significantly reduce the variance of the latency.
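As a toy illustration of the second idea (filtering the callback times), something along these lines could be used; the smoothing factor and correction term are illustrative choices, not the article's exact method:

// Smooth observed audio-callback timestamps to estimate when a constant-latency
// callback "should" have occurred, reducing the jitter of the raw callback times.
public class CallbackTimeFilter {
    private static final double ALPHA = 0.05;   // smoothing factor
    private long lastRawNs = -1;
    private double smoothedPeriodNs = -1;
    private double estimateNs = -1;

    // Feed each raw callback timestamp (e.g. System.nanoTime()); returns a de-jittered estimate.
    public long onCallback(long rawNs) {
        if (lastRawNs < 0) {
            lastRawNs = rawNs;
            estimateNs = rawNs;
            return rawNs;
        }
        long rawPeriod = rawNs - lastRawNs;
        lastRawNs = rawNs;
        smoothedPeriodNs = (smoothedPeriodNs < 0)
                ? rawPeriod
                : (1 - ALPHA) * smoothedPeriodNs + ALPHA * rawPeriod;
        // Advance by the smoothed period instead of the jittery measured one, then nudge the
        // estimate slightly toward the raw time so it cannot drift away indefinitely.
        double predicted = estimateNs + smoothedPeriodNs;
        estimateNs = predicted + ALPHA * (rawNs - predicted);
        return (long) estimateNs;
    }
}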
It should also be mentioned that there is a new native Android audio API, AAudio.
AAudio API
It's available/stable from Android Oreo 8.1 (API Level 27).
There is also a wrapper, which dynamically chooses between OpenSL ES and AAudio and is much easier to code against than OpenSL ES. It's still in developer preview.
Oboe Audio Library
The best way to get low latency for native code on Android is to use Oboe.
https://github.com/google/oboe
Oboe wraps AAudio on newer devices. AAudio offers the lowest possible latency paths. If AAudio is not available then Oboe calls OpenSL ES. Oboe is much easier to use than OpenSL ES.
AAudio either calls through AudioTrack or through a new MMAP data path. AAudio makes it easier to get a FAST track because you can leave some parameters unspecified. AAudio will then choose the right parameters needed for a FAST track.
