Background
Phone call recording is not really supported on Android, yet some devices support it to some extent.
This has led various call-recording apps to gather as much information as possible about devices and what needs to be done on each of them, and to decide accordingly what to do.
Some even offer root solutions.
One such example is the Boldbeast Call Recorder app, which offers a lot of configurations to change:
"Record mode": shows 14 modes for non-rooted devices, and up to 34 for rooted ones. Also shows "Alsa mode" as an option, for rooted devices.
"Tune Audio Effect" ("auto tune a group of parameters").
"Tune Audio Route", with the possible values "Disabled", "Group1", "Group2", "Group3".
For rooted devices:
"Change audio controls" ("auto change audio controls").
"Change audio driver" ("change audio driver settings to enable record modes 21,22,23,24,31,32,33,34").
"Start input stream".
The problem
If I need to create a call-recording app, there is no way around finding the various workarounds for the various devices, yet it seems other apps use terms that don't appear in the API.
For example, I can't find any of the terms used by the app I've mentioned.
What I've found
Other than tons of questions about how to record calls on Android (showing that it doesn't work on all devices), I could find some interesting things. Here are my attempts and insights so far:
There are some audio recording sources we can use while preparing the recording (docs here), but sadly the working one differs per device. For some, VOICE_CALL works; for others, a different source does. But at least we can try...
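For reference, the try-and-fall-back pattern I mean looks roughly like this (a minimal sketch; the candidate order, the output format, and the VOICE_COMMUNICATION fallback are my assumptions, not something taken from any of the apps above):

    import android.media.MediaRecorder;

    public class CallRecorderHelper {

        // The "right" source differs per device; this order is just a guess.
        private static final int[] CANDIDATE_SOURCES = {
                MediaRecorder.AudioSource.VOICE_CALL,
                MediaRecorder.AudioSource.VOICE_COMMUNICATION,
                MediaRecorder.AudioSource.MIC,
        };

        public static MediaRecorder startFirstWorkingSource(String outputPath) {
            for (int source : CANDIDATE_SOURCES) {
                MediaRecorder recorder = new MediaRecorder();
                try {
                    recorder.setAudioSource(source);
                    recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                    recorder.setOutputFile(outputPath);
                    recorder.prepare();
                    recorder.start();
                    return recorder; // this source works on this device
                } catch (Exception e) {
                    recorder.release(); // failed: try the next candidate
                }
            }
            return null; // no candidate source worked
        }
    }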
On a OnePlus 2 with Android 6.0.1, incoming calls can be recorded using VOICE_CALL, but I can't get outgoing calls recorded there unless I use MIC as the audio source together with the speaker turned on. Somehow, the app I've mentioned records them without any issues. I'm sure I will see other issues on other Android devices, as I've tried to address this whole topic in the past. Update: I've found this sample project (also here), which for some reason sleeps for 2 seconds on the UI thread between the prepare and start calls of the MediaRecorder. It works fine, and when I did something similar (waiting via Handler.postDelayed for 1 second), it worked fine too. The comment written there is "Sometimes prepare takes some time to complete".
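In code, the workaround boils down to this (a sketch; the audio source and the 1000 ms delay are just what happened to work for me on that device):

    import android.media.MediaRecorder;
    import android.os.Handler;
    import android.os.Looper;
    import java.io.IOException;

    public class DelayedStartRecorder {
        // "Sometimes prepare takes some time to complete": give it a moment
        // between prepare() and start() instead of starting right away.
        public static MediaRecorder startWithDelay(String outputPath) throws IOException {
            final MediaRecorder recorder = new MediaRecorder();
            recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_CALL);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            recorder.setOutputFile(outputPath);
            recorder.prepare();
            new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
                @Override
                public void run() {
                    recorder.start(); // 1 second later, as in my test
                }
            }, 1000L);
            return recorder;
        }
    }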
On Galaxy S7 with Android 8, I've failed to get sound of the other side for outgoing calls AND incoming calls (even with MIC and speaker), no matter what I did, yet the app I've mentioned worked fine.
To let you try my POC of call recording, I've published an open-source GitHub repository here, with a sample that records a single call and lets you listen to the most recent one, if all goes well.
This "ViktorDegtyarev - CallRecLib" SDK , which doesn't seem to work at all, and crashes on various Android versions
These 2 old sample projects : rvoix , esnyder-callrecorder , both fail to actually record. The second doesn't even seem to work on Android 6.0.1 device, which it's supposed to support.
aykuttasil - CallRecorder sample and axet - android-call-recorder sample - both, just like on my POC, don't have any tweaking except for AudioSource, and because of this they fails to record on some cases, such as OnePlus 2 output-audio of outgoing calls.
Most third-party apps only offer the AudioSource tweaking, but some (like "boldbeast") do offer more. One example is "Automatic Call Recorder", which has "configuration" (10 values to choose from, the first being "default") and "method" (5 values to choose from, the first being "default"). Those apps probably don't want others to understand what those configurations mean, so they use general names. Or it's just too complicated for everyone (especially for users), so they generalize the names.
There is the setMode API (here), but it doesn't seem to change anything when called. I was thinking of maybe changing the "channel" the call goes through this way, but it doesn't work: during a call it stays at the value 2, which is MODE_IN_CALL.
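What I tried is essentially this (minimal sketch):

    import android.content.Context;
    import android.media.AudioManager;
    import android.util.Log;

    public class ModeProbe {
        // During a call, getMode() keeps returning 2 (MODE_IN_CALL) for me,
        // no matter which mode is requested here.
        public static void tryChangingMode(Context context) {
            AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            Log.d("CallRec", "mode before: " + am.getMode());
            am.setMode(AudioManager.MODE_NORMAL);
            Log.d("CallRec", "mode after: " + am.getMode()); // still MODE_IN_CALL
        }
    }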
There are customized parameters available on various devices (each OEM has its own), which can be set here and maybe even via JNI (here and here), but I can't figure out where to get this information from (meaning which key-value pairs are available). I've searched in a lot of places, but couldn't find any website that says which parameters are available, and for which devices.
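Without documentation, all I can do is probe blindly, along these lines (a sketch; "noise_suppression" comes from the website mentioned below, while "voice_call_recording" is an entirely hypothetical key):

    import android.content.Context;
    import android.media.AudioManager;
    import android.util.Log;

    public class OemParameterProbe {
        public static void probe(Context context) {
            AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            // Unknown keys simply come back as an empty string.
            String[] candidateKeys = {"noise_suppression", "voice_call_recording"};
            for (String key : candidateKeys) {
                Log.d("CallRec", key + " = \"" + am.getParameters(key) + "\"");
            }
            // Setting a pair is equally blind without OEM documentation:
            am.setParameters("noise_suppression=off");
        }
    }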
I was thinking of using the AudioRecord class instead of MediaRecorder for recording, figuring that since it's a bit lower level, it could give me more power and access to customized capabilities, but it seems to be very similar to MediaRecorder, and it even uses the same audio sources (example here).
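For completeness, the AudioRecord variant looks like this (a minimal sketch; the narrow-band sample rate and buffer sizing are assumptions):

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public class RawCallRecorder {
        // Same AudioSource constants as MediaRecorder, but you get raw PCM
        // buffers to read yourself instead of an encoded file.
        public static AudioRecord startRawRecording() {
            int sampleRate = 8000; // voice-call audio is typically narrow-band
            int minBuf = AudioRecord.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.VOICE_CALL,
                    sampleRate, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, minBuf * 4);
            record.startRecording();
            // Pull PCM data with read(short[], int, int) from a worker thread.
            return record;
        }
    }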
Another try I had with low-level APIs went even further: using JNI (OpenSL ES for Android). For this I couldn't find much information (except here and here), and I only found the 2 samples from Google here (called "audio-echo" and "native-audio"), which are not about recording sound, or at least I don't see it happening there.
Android P might have an official way to record calls (read here and here). Testing on my Android P DP3 device (Pixel 2), I could record both sides fine in both incoming and outgoing calls, using "DEFAULT" as the audio source, so maybe the API will finally be official and work on all Android versions. I wrote about it here and here.
I was thinking that maybe the Visualizer class could serve as a recording workaround, but according to a StackOverflow post (here), the quality is extremely low, so I decided I probably shouldn't try it. Plus, I couldn't find a sample of how to record from it.
I've found some parameters that might be available on some devices, here (found from here), all starting with "AUDIO_PARAMETER_", but testing on the Galaxy S7, all of them returned an empty string. I've also found this website, which gave me the idea of using audioManager.setParameters("noise_suppression=off") together with the MIC audio source, but this didn't seem to do anything in the case of the Galaxy S7.
The questions
As opposed to other similar questions about this topic, I'm not asking how to record calls. I already know it's a very problematic and complex area. I already know I will have to address various configurations, and that I will probably use a server to store all of them and find the best match for each device.
What I want to ask is more about the tweaking and workarounds:
Is there a list of configurations for the various devices and Android versions, and what to choose for each?
Besides the audio source, which other configurations can be used?
Which parameters are possible on the various devices and Android versions? Are there any OEM websites describing them?
What are the various terms in the app I've mentioned? Where can I find information on how to change them?
Which tools are available for rooted devices?
Is it possible to know which devices support call recording and which don't, using the API?
About the OnePlus 2 workaround of waiting a moment before starting to record: why is it needed? Is it needed on all Android versions? Is it a known issue? Would 1 second be enough?
How come on the Galaxy S7 I failed to record the other side even when using MIC and speaker?
EDIT: I've found this about an accessibility service being able to help with call recording:
https://developer.android.com/guide/topics/media/sharing-audio-input#voice_call_ordinary_app
Not sure how to use it though. It seems "ACR Phone Dialer" uses it. If anyone knows how it can be done, please let me know.
I spent many weeks working on a voice-call recording app, so I faced all your issues/questions/problems.
Moreover, my project had a low priority, so I didn't spend much time on it every day; I worked on this app for many months while Android was changing under the hood (minor and major releases).
I was always developing on the same Galaxy Note 5 using its stock ROM (without root), but I discovered that on the same device the behaviour changed from one Android release to another without any explanation.
For example, from Nougat 7.0 to 7.1.2, I was unable to record a voice call using the same code as before.
Google has enforced or changed restrictions on voice-call recording many times.
At the beginning it was sufficient to use the VOICE_CALL AudioSource. Then manufacturers started to interpret this value as they wanted, and the result was that one implementation worked well while another did not.
Then reflection was needed to call undocumented/hidden methods to start voice-call recording.
Then Google added a runtime check, so calling them directly was no longer possible, even using reflection.
In any case, this approach lacked stability, because there was no guarantee that a method had the same name on all devices.
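To illustrate the pattern only (the method name below is hypothetical; as said, the real names differed between devices, and newer Android versions reject such calls at runtime anyway):

    import java.lang.reflect.Method;

    public class HiddenMethodCaller {
        public static boolean tryHiddenStart(Object audioService) {
            try {
                Method m = audioService.getClass()
                        .getDeclaredMethod("startVoiceCallRecording"); // hypothetical name
                m.setAccessible(true);
                m.invoke(audioService);
                return true;
            } catch (Exception e) {
                return false; // method missing or blocked: no recording this way
            }
        }
    }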
Then I started to reverse-engineer apps that were currently working on newer Android versions, and I discovered that they use a completely different and more secure approach. This took me many weeks, because all these apps use JNI libraries and try to hide the method inside assembler code.
When I successfully created a test app which recorded well, I tried the SAME code on many different devices and ROMs/versions, and surprisingly it worked well.
This means that all those different methods you can see in these apps' settings (I'm 98% sure about it) are just "fake", or refer to OLD methods that are no longer used.
A separate mention should be made of rooted devices: these devices can change audio routes, so a different approach can be used in their case.
[1] There isn't any list or website listing all supported devices or the best method for a successful voice-call recording.
[6] It's not possible to know whether a device supports voice-call recording just by using an API call. You have to try and catch exceptions...
[8] Recording via MIC+speaker suffers from many issues: (1) the caller will hear all your ambient sound, so the privacy bug is a big issue; (2) the echo is a big problem; (3) the recording volume is very low, as is the quality of the recorded voice.
According to my tests, one way to improve this is to have an AccessibilityService active (it doesn't need to do anything at all) while choosing voice recognition as the audio source. It's also recommended to have the speaker turned on, because the recording comes from the microphone. This seems to be what some call-recording apps do.
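A minimal sketch of that setup (the class name and output format are my choices; the service must also be declared in the manifest and enabled by the user in the accessibility settings):

    import android.accessibilityservice.AccessibilityService;
    import android.media.MediaRecorder;
    import android.view.accessibility.AccessibilityEvent;
    import java.io.IOException;

    // The service body can stay empty: it only needs to exist and be enabled.
    public class CallRecAccessibilityService extends AccessibilityService {
        @Override public void onAccessibilityEvent(AccessibilityEvent event) { }
        @Override public void onInterrupt() { }

        // While the service is enabled, record from VOICE_RECOGNITION; the
        // audio comes from the mic, hence the speaker-on recommendation.
        public static MediaRecorder startRecording(String outputPath) throws IOException {
            MediaRecorder recorder = new MediaRecorder();
            recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_RECOGNITION);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
            recorder.setOutputFile(outputPath);
            recorder.prepare();
            recorder.start();
            return recorder;
        }
    }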
Weird thing is that Google has written this as a rule on the Play Store:
The Accessibility API is not designed and cannot be requested for
remote call audio recording.
https://support.google.com/googleplay/android-developer/answer/11899428
No idea what the "remote" means here.
Anyway, I've updated the Github repository to include these additions.
I am working on a video recording and sharing application for Android. The specifications of the app are as follows:
Recording a 10 second (maximum) video from inside the app (not using the device's camera app)
No further editing on the video
Storing the video in a Firebase Cloud Storage (GCS) bucket
Downloading and playing of the said video by other users
From the research I did on SO and other sources, I have found the following (please correct me if I am wrong):
The three options and their respective features are:
1. FFmpeg
Capable of achieving the above goal and has extensive answers and explanations on sites like SO, however
Increases the APK size by 20-30 MB (it's a large library)
Runs the risk of not working properly on certain 64-bit devices
2. MediaRecorder
Reliable and supported by most devices
Will store files in .mp4 format (unless converted to H.264)
Easier for playback (no decoding needed)
Adds the mp4 and 3gp headers
Increases latency according to this question
3. MediaCodec
Low level
Will require MediaCodec, MediaMuxer, and MediaExtractor
Outputs raw H.264 (not playable without using MediaMuxer)
Good for video manipulation (though not required in my use case)
Not supported by pre-4.3 (API 18) devices
More difficult to implement and code (my opinion - please correct me if I am wrong)
Lack of extensive information, tutorials, answers, or samples (Bigflake.com being the only exception)
After spending days on this, I still can't figure out which approach suits my particular use case. Please elaborate on what I should do for my application. If there's a completely different approach, then I am open to that as well.
My biggest criteria are that the video-encoding process be as efficient as possible and that the video stored in the cloud use the lowest possible space without compromising video quality.
Also, I'd be grateful if you could suggest the appropriate format for saving and distributing the video in Firebase Storage, and point me to tutorials or samples of your suggested approach.
Thank you in advance! And sorry for the long read.
Your overview of this topic is accurate and to the point.
I'll just add my 2 cents on details you might have missed:
1. FFmpeg
+/-If you build your own .so, you can reduce the size down to about 2-3 MB, depending on the use case of course. Editing a 6000-line build script takes time and effort, though
++Supports a wide range of formats (almost everything)
++Results are the same on every device
++Any resolution supported
--High energy consumption due to SW en-/decoding, which also makes it slow. There is a plugin to support libstagefright, but it doesn't work on many devices (as of May 2016)
--Licensing can be problematic depending on your location and use case. I'm not a lawyer, but we had legal consulting on this topic and it's quite complex.
2. MediaRecorder
++Easiest to implement (simplified access to MediaCodec/libstagefright). Raw data gets passed to the encoder directly, so no messing around there
++HW-accelerated on most devices, which makes it fast and energy-saving
++The delay only applies to live streaming
--Dependent on the HW manufacturers' implementations
--Results may vary from device to device
++No licensing problems
3. MediaCodec
+/-Most of 2. MediaRecorder applies here as well (apart from the ease of use)
++Most flexible access to HW en-/decoding
--Hard to use for cases that were not thought of (e.g. mixing videos from different sources)
+/-The delay for streaming can be eliminated (it's tricky though)
--HW manufacturers sometimes don't implement things correctly (e.g. the Samsung Galaxy S5 sometimes produces a SIGSEGV if live data from some DSLRs is fed to the encoder: it works fine for a while, then all of a sudden it crashes. This might be the DSLR's fault, but the SIGSEGV is not avoidable and crashes the app, which in the end is the app developer's fault ;) )
--If used without MediaMuxer, you need either a good understanding of media containers or to rely on 3rd-party libraries
The list is obviously not complete and some points might not be correct. The last time I worked with video was almost half a year ago.
As for your use case, I would recommend using MediaRecorder, since it is the easiest to implement, is supported on all devices, and offers a good quality/size trade-off. FFmpeg produces better results for the same storage size, but takes longer (in an extreme case, the hardware path encoded DSLR live footage 30 times faster) and consumes more energy.
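To make that concrete, a rough sketch of the MediaRecorder route for your 10-second clips (old Camera API for brevity; the 720p profile and the auto-stop handling are my assumptions):

    import android.hardware.Camera;
    import android.media.CamcorderProfile;
    import android.media.MediaRecorder;
    import android.view.Surface;
    import java.io.IOException;

    public class ClipRecorder {
        public static MediaRecorder startTenSecondClip(Camera camera,
                Surface previewSurface, String outputPath) throws IOException {
            camera.unlock(); // hand the camera over to MediaRecorder
            MediaRecorder recorder = new MediaRecorder();
            recorder.setCamera(camera);
            recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
            recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
            recorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_720P));
            recorder.setOutputFile(outputPath);
            recorder.setMaxDuration(10000); // the 10-second cap
            recorder.setPreviewDisplay(previewSurface);
            recorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
                @Override
                public void onInfo(MediaRecorder mr, int what, int extra) {
                    if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
                        mr.stop(); // auto-stop at the 10-second mark
                    }
                }
            });
            recorder.prepare();
            recorder.start();
            return recorder;
        }
    }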
As far as I understand your use case, there is no need to fiddle around with MediaCodec, since you only want to encode and decode.
I suggest using VP8 or VP9, since you won't run into licensing problems. Again, I'm no lawyer, but distributing H.264 from your own server might make you a "broadcasting station", or so I was told.
Hope this helps you in your decision making.
I'm working on an Android app that uses MediaCodec to encode H.264 video using the Surface method. I am targeting Android 5.0, and I've followed all the examples and samples from bigflake.com (I started working on this project two years ago, so I've been through all the gotchas and other issues).
All is working nicely on a Nexus 6 (which uses the Qualcomm hardware encoder for this), and I'm able to record flawlessly, in real time, 1440p video with AAC audio to a multitude of outputs (from local MP4 files up to HTTP streaming).
But when I try to use the app on a Sony Android TV (running Android 5.1) that uses a Mediatek chipset, all hell breaks loose, starting at the encoding level. To be more specific:
It's basically impossible to make the hardware encoder ("OMX.MTK.VIDEO.ENCODER.AVC") work properly. With the most basic setup (which succeeds at MediaCodec's level), I will almost never get output buffers out of it, only weird, spammy logcat error messages stating that the driver has encountered errors each time a frame should be encoded, like this:
01-20 05:04:30.575 1096-10598/? E/venc_omx_lib: VENC_DrvInit failed(-1)!
01-20 05:04:30.575 1096-10598/? E/MtkOmxVenc: [ERROR] cannot set param
01-20 05:04:30.575 1096-10598/? E/MtkOmxVenc: [ERROR] EncSettingH264Enc fail
Sometimes, configuring it to encode at a 360 by 640 pixel resolution will succeed in making the encoder actually encode, but the first problem I notice is that it creates only one keyframe: the very first video frame. After that, no more keyframes are ever created, only P-frames. Of course, the i-frame interval was set to a decent value and works with no issues on other devices. Needless to say, this makes it impossible to create seekable MP4 files, or any kind of streamable solution on top.
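One thing worth trying here (I can't say whether the MTK driver honors it) is to explicitly request sync frames while encoding, which the API supports since 19:

    import android.media.MediaCodec;
    import android.os.Bundle;

    public class KeyframeHelper {
        // Ask the encoder to emit a sync frame as soon as possible; calling
        // this periodically can substitute for a broken i-frame interval.
        public static void requestSyncFrame(MediaCodec encoder) {
            Bundle params = new Bundle();
            params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
            encoder.setParameters(params);
        }
    }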
Most of the time, after releasing the encoder, logcat will start spamming endlessly with "Waiting for input frame to be released...", which basically requires a reboot of the device, since nothing will work from that point on anyway.
In the cases where it doesn't go haywire after a simple release(), no problem - the hardware encoder just makes sure it cannot be created a second time, and it falls back to the generic SOFTWARE Google AVC encoder, which of course is basically a mockup encoder that does little more than spit out an error when trying to make it encode anything larger than 160p video...
So, my question is: is there any hope of making the MediaCodec API actually work on such a device? My understanding was that there are CTS tests performed by Google/manufacturers (in this case, Sony) that would allow a developer to assume an API is supported on a device which prides itself on running Android 5.1. Am I missing something obvious here? Has anyone actually tried doing this (a simple MediaCodec video encoding test) on such a device and succeeded? It's really frustrating!
PS: it's worth mentioning that not even Sony provides a recording capability for this TV set yet, which many people are complaining about anyway. So my guess is that this is more of a Mediatek problem; but still, what exactly is Android's CTS for in this case anyway?
I created an Android app that records the device screen (using the MediaProjection API) and video from the camera at the same time. I use MediaRecorder in both cases. I need a way to find out whether a device is actually capable of recording two video streams simultaneously. I assume there is some limit on the number of streams that can be encoded simultaneously on a given device, but I cannot find any API on the Android platform to query for that information.
Things I discovered so far:
Documentation for MediaRecorder.release() advises releasing the MediaRecorder as soon as possible, as:
"Even if multiple instances of the same codec are supported, some performance degradation may be expected when unnecessary multiple instances are used at the same time."
This suggests there's a limit on the number of codec instances, which directly limits the number of MediaRecorders.
I've written testing code that creates MediaRecorders (configured to use MPEG-4/H.264) and starts them in a loop. On a Nexus 5 it always fails with java.io.IOException: prepare failed when calling prepare() on the 6th instance. This suggests you can have only 5 instances of MediaRecorder on the Nexus 5.
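The probe looks roughly like this (reconstructed sketch; the Surface video source and the 720p size are assumptions):

    import android.media.MediaRecorder;
    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    public class RecorderLimitProbe {
        // Keep preparing recorders until one fails; on my Nexus 5 the 6th
        // prepare() throws "prepare failed", so this returns 5.
        public static int countPreparableRecorders(File outputDir) {
            List<MediaRecorder> recorders = new ArrayList<>();
            try {
                for (int i = 0; ; i++) {
                    MediaRecorder r = new MediaRecorder();
                    try {
                        r.setVideoSource(MediaRecorder.VideoSource.SURFACE); // API 21+
                        r.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
                        r.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
                        r.setVideoSize(1280, 720);
                        r.setOutputFile(new File(outputDir, "probe" + i + ".mp4").getAbsolutePath());
                        r.prepare(); // fails once codec instances run out
                        recorders.add(r);
                    } catch (Exception e) {
                        r.release();
                        return recorders.size();
                    }
                }
            } finally {
                for (MediaRecorder r : recorders) {
                    r.release(); // free the codec instances we grabbed
                }
            }
        }
    }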
I'm not aware of anything you can query for this information, though it's possible something went into Lollipop that I didn't see.
There is a limit on the number of hardware codec instances that is largely dependent on hardware bandwidth. It's not a simple question of how many streams the device can handle -- some devices might be able to encode two 720p streams but not two 1080p streams.
On some devices the codec may fall back to a software implementation if it runs out of hardware resources. Things will work but will be considerably slower. (I've seen this for H.264 decoding, but I don't know if it also happens for encoding.)
I don't believe there is a minimum system requirement in CTS. It would be useful to know that all devices could, say, decode two 1080p streams and encode one 1080p simultaneously, so that a video editor could be made for all devices, but I don't know if such a thing has been added. (Some very inexpensive devices would struggle to meet that.)
I think it really depends on the device and its RAM capacity... You could read the buffers for the screen and the camera as much as you like, but only one read at a time, not simultaneously, I think, to prevent concurrency issues. But honestly, I don't know for sure.
Many tablets and some smartphones use an array of microphones for things like noise cancellation. For example, the Motorola Droid X uses a three-microphone array and even allows you to set "audio scenes". An example is discussed here.
I want to be able to record from all the microphones available on the tablet/phone at the same time. I found that using AudioSource we can choose the mic (I don't know which mic this is specifically, but it might be the one facing the user) or the mic in the same orientation as the video camera, but I could not find any way of accessing all the other mics in the array. Any help pointing me in the right direction to investigate this would be great. Thanks in advance for your time.
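For reference, the only mic selection the public API gives is this (a minimal sketch; the buffer sizing is arbitrary):

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public class MicSelector {
        // MIC vs CAMCORDER at best picks between the "front" mic and the one
        // near the camera; no constant addresses an individual array element.
        public static AudioRecord openMic(boolean nearCamera) {
            int source = nearCamera ? MediaRecorder.AudioSource.CAMCORDER
                                    : MediaRecorder.AudioSource.MIC;
            int minBuf = AudioRecord.getMinBufferSize(44100,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            return new AudioRecord(source, 44100, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);
        }
    }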
It seems like you've verified that there isn't a standard Android API for accessing specific mics in an array. I couldn't find anything either.
As is the case with custom additions to the Android system, it's up to the manufacturer to release developer APIs. Motorola has done this before. I took a look at all the ones they have listed, and it seems they simply don't expose it. Obviously, they have code somewhere which can do it (the "audio scenes" feature uses it).
So the quick answer: you're out of luck.
The more involved answer: you can go spelunking around the source code for the Droid X, since it's released as open source. If you can actually find it, understand that you'd be using an undocumented API which could be changed at any time. Plus, you'll have to do this for every device you want to support.