I have reports from several users on distinct Samsung devices (J6, S6, S7, ...) where recorded videos do not play and appear to be corrupted.
The playback seems stuck/frozen at the first frame, while the audio plays correctly.
The issue happens with videos recorded using Android's MediaRecorder API.
The information I could gather is that it happens after a device goes into deep sleep, i.e. the screen is turned off and the device is left unused for several minutes. When the device becomes active again, then, for reasons still unknown, a new recording produces an excessively large delta duration between the first and second frames, giving the impression on playback that the video is frozen or has only one frame.
I've found the issue reported on various sites around the internet, yet no proper solution. Has anyone found a workaround? Samsung doesn't seem to acknowledge the problem.
Further investigation has shown that the issue might be caused by a system bug in some Samsung models.
Inspecting the corrupted videos sent by some users, I could confirm that on all affected devices the first frame has an exaggeratedly large delta duration.
So, with an incorrect delta time, the video gives the impression of being frozen, when it is actually just showing the first frame on screen for its defined delta duration, which for the corrupted videos is tremendously long.
To fix these samples, I replaced the delta time of the first frame with the value from the second frame (only the first frame is affected). Then the video plays correctly as expected. I used IsoParser for this task.
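For reference, here is a minimal sketch of that repair step, assuming the mp4parser ("IsoParser") box classes; the class name and the size-ratio heuristic are my own, and the exact call for writing the container back out differs between library versions, so treat this as an outline rather than drop-in code.

    import com.coremedia.iso.IsoFile;
    import com.coremedia.iso.boxes.MovieBox;
    import com.coremedia.iso.boxes.TimeToSampleBox;
    import com.coremedia.iso.boxes.TrackBox;

    import java.io.FileOutputStream;
    import java.util.List;

    public class FirstFrameDeltaFixer {

        // Rewrites inputPath into outputPath with a sane first-sample duration.
        public static void fix(String inputPath, String outputPath) throws Exception {
            IsoFile isoFile = new IsoFile(inputPath);
            MovieBox moov = isoFile.getMovieBox();

            for (TrackBox trak : moov.getBoxes(TrackBox.class)) {
                // The stts (time-to-sample) box holds the per-sample delta durations.
                TimeToSampleBox stts = trak.getMediaBox()
                        .getMediaInformationBox()
                        .getSampleTableBox()
                        .getTimeToSampleBox();
                if (stts == null) {
                    continue;
                }
                List<TimeToSampleBox.Entry> entries = stts.getEntries();
                if (entries.size() >= 2) {
                    long first = entries.get(0).getDelta();
                    long second = entries.get(1).getDelta();
                    // Only the first sample is affected: if its delta is wildly larger
                    // than the second one's, copy the second value over it.
                    if (first > second * 10) {
                        entries.get(0).setDelta(second);
                    }
                }
            }

            // No in-place fix: write a new file and swap it in for the original afterwards.
            try (FileOutputStream out = new FileOutputStream(outputPath)) {
                // Depending on the isoparser version this is getBox(...) or writeContainer(...).
                isoFile.getBox(out.getChannel());
            }
            isoFile.close();
        }
    }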
But this is not a proper solution, as it means having to check every video and repackage it if affected; there is no way to fix it in place. The operation requires creating a new video file, copying the contents of the original with the corrected delta time, and replacing the original file with the fixed one.
The proper solution would be to find out how the MediaRecorder API computes delta times, and why, in some scenarios on the affected devices, it produces an invalid value for the first frame.
My only guess is that the MediaRecorder implementation might use the System.nanoTime clock; I've read in some StackOverflow posts that this clock can sometimes return a bizarre value when the device comes back from deep sleep. If that is the real issue, then the only real solution would be for Samsung to fix their implementation.
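If you want to test that theory on an affected device, a tiny diagnostic of my own (nothing from the MediaRecorder internals) is to log both monotonic clocks around a recording taken right after wake-up: System.nanoTime() is not guaranteed to keep counting during deep sleep, while SystemClock.elapsedRealtimeNanos() is, so a large divergence between the two across a sleep period would at least be consistent with that guess.

    import android.os.SystemClock;
    import android.util.Log;

    // Diagnostic helper only (my own naming): call logClocks("before prepare") and
    // logClocks("after start") around MediaRecorder usage, both before and after the
    // device has been in deep sleep, then compare how the two values drift apart.
    public final class ClockProbe {
        private static final String TAG = "ClockProbe";

        public static void logClocks(String where) {
            Log.d(TAG, where
                    + " nanoTime=" + System.nanoTime()
                    + " elapsedRealtimeNanos=" + SystemClock.elapsedRealtimeNanos());
        }

        private ClockProbe() {}
    }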
Background
Phone call recording is not really supported on Android, yet some devices support it to some extent.
This has led various call recording apps to gather as much information as possible about devices and what should be done on each, and to decide what to do based on that.
Some even offer root solutions.
One such example is the boldbeast Call Recorder app, which offers a lot of configurations to change:
"record mode": shows 14 modes for non-rooted devices, and up to 34 for rooted ones. Also offers "Alsa mode" as an option, for rooted devices.
Has "Tune Audio Effect" ("auto tune a group of parameters").
Has "Tune Audio Route", with the possible values "Disabled", "Group1", "Group2", "Group3".
For rooted devices:
"change audio controls" ("auto change audio controls").
"change audio driver" ("change audio driver settings to enable record modes 21, 22, 23, 24, 31, 32, 33, 34").
For rooted devices: "start input stream".
The problem
If I need to create a call recording app, there is no way around finding the various workarounds for the various devices, but it seems other apps use terms that don't appear in the API.
For example, I can't find any of the terms used by the app I've mentioned.
What I've found
Other than tons of questions about how to record calls on Android, showing that it doesn't work on all devices, I found some interesting things. Here are my attempts and insights so far:
There are several audio recording sources we can use while preparing the recording (docs here), but sadly the right one might differ per device. For some, VOICE_CALL works, and for others, different sources do. But at least we can try...
On a OnePlus 2 with Android 6.0.1, incoming calls can be recorded using VOICE_CALL, but I can't get outgoing calls recorded there unless I use MIC as the audio source together with the speaker turned on. Somehow, the app I've mentioned succeeds in recording them without any issues. I'm sure I will see other issues on other Android devices, as I've tried to address this whole topic in the past. Update: I've found this sample project (also here), which for some reason sleeps for 2 seconds on the UI thread between the prepare and start calls of the MediaRecorder. It works fine, and when I did something similar (waiting with Handler.postDelayed for 1 second), it worked fine too. The comment written there is "Sometimes prepare takes some time to complete". A sketch of this setup appears after this list.
On a Galaxy S7 with Android 8, I've failed to get the other side's sound for outgoing AND incoming calls (even with MIC and speaker), no matter what I did, yet the app I've mentioned worked fine.
To let you try my POC of call recording, I've published an open-source GitHub repository here, with a sample that records a single call and lets you listen to the most recent one, if all works well.
The "ViktorDegtyarev - CallRecLib" SDK doesn't seem to work at all, and crashes on various Android versions.
These 2 old sample projects, rvoix and esnyder-callrecorder, both fail to actually record. The second doesn't even seem to work on an Android 6.0.1 device, which it's supposed to support.
The aykuttasil - CallRecorder sample and the axet - android-call-recorder sample both, just like my POC, don't have any tweaking except for AudioSource, and because of this they fail to record in some cases, such as the output audio of outgoing calls on the OnePlus 2.
Most third-party apps only offer AudioSource tweaking, but some (like "boldbeast") do offer more. One example is "Automatic Call Recorder", which has "configuration" (10 values to choose from, the first being "default") and "method" (5 values to choose from, the first being "default"). Those apps probably do not want others to understand what those configurations mean, so they use generic names. Or it's just too complicated for everyone (especially for users), so they generalize the names.
There is a "setMode" API here, but the mode doesn't seem to change when calling it. I was thinking of maybe changing the "channel" the call is routed through this way, but it doesn't work. It stays at the value 2 during a call, which is MODE_IN_CALL.
There are custom parameters available on various devices (each OEM has its own), which can be set here and maybe even via JNI (here and here), but I don't understand where to get this information from (meaning which key-value pairs are available). I've searched in a lot of places, but couldn't find any website that says which parameters are available, and for which devices.
I was thinking of using the AudioRecord class instead of MediaRecorder for recording, reasoning that it's a bit lower level, so it could give me more power and access to custom capabilities, but it seems to be very similar to MediaRecorder, and it even uses the same audio sources (example here).
Another attempt, even lower level, was using JNI (OpenSL ES for Android). For this I couldn't find much information (except here and here), and only found the 2 Google samples here (called "audio echo" and "native audio"), which are not about recording sound, or at least I don't see it happening there.
Android P might have an official way to record calls (read here and here). Testing on my Android P DP3 device (Pixel 2), I could record both sides fine in both incoming and outgoing calls, using "DEFAULT" as the audio source, so maybe the API will finally be official and work on all Android versions. I wrote about it here and here.
I was thinking that maybe the Visualizer class could be a recording workaround, but according to a StackOverflow post (here), the quality is extremely low, so I decided maybe I shouldn't try it. Plus, I couldn't find a sample of how to record from it.
I've found some parameters that might be available on some devices, here (found from here), all starting with "AUDIO_PARAMETER_", but testing on the Galaxy S7, all of them returned an empty string. I've also found this website, which gave me the idea of using audioManager.setParameters("noise_suppression=off") together with the MIC audio source, but this didn't seem to do anything on the Galaxy S7 either (this call also appears in the sketch below).
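To make the items above concrete, here is a rough sketch of the recording setup I've been experimenting with: the caller passes the audio source (VOICE_CALL, or MIC plus speakerphone as a fallback), start() is delayed after prepare() as in the OnePlus 2 workaround, and the noise_suppression parameter from the last item is applied. The class name, output-file handling and the 1-second delay are my own choices, not anything the platform mandates.

    import android.content.Context;
    import android.media.AudioManager;
    import android.media.MediaRecorder;
    import android.os.Handler;
    import android.os.Looper;

    import java.io.File;

    public class CallRecorderSketch {

        private MediaRecorder recorder;

        // audioSource is typically MediaRecorder.AudioSource.VOICE_CALL, with MIC
        // (plus speakerphone turned on) as the fallback on devices where VOICE_CALL
        // records silence for one side.
        public void start(Context context, File outputFile, int audioSource) throws Exception {
            AudioManager audioManager =
                    (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            // From the website mentioned above; appears to be a no-op on some devices.
            audioManager.setParameters("noise_suppression=off");

            recorder = new MediaRecorder();
            recorder.setAudioSource(audioSource);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
            recorder.setOutputFile(outputFile.getAbsolutePath());
            recorder.prepare();

            // OnePlus 2 workaround: delay between prepare() and start().
            // 1 second was enough in my tests; the sample project slept for 2.
            new Handler(Looper.getMainLooper()).postDelayed(() -> recorder.start(), 1000);
        }

        public void stop() {
            if (recorder != null) {
                recorder.stop();
                recorder.release();
                recorder = null;
            }
        }
    }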
The questions
As opposed to other similar questions on this topic, I'm not asking how to record calls. I already know it's a very problematic and complex task. I already know I will have to handle various configurations, and that I will probably use a server to store them all and find the best match for each device there.
What I want to ask is more about the tweaking and workarounds:
Is there a list of configurations for the various devices, Android versions, and what to choose for each?
Besides the audio source, which other configurations can be used?
Which parameters are available for the various devices and Android versions? Are there any OEM websites describing them?
What do the various terms in the app I've mentioned mean? Where can I find information on how to change them?
Which tools are available for rooted devices?
Is it possible to know which devices support call recording and which don't, by using the API?
About the OnePlus 2 workaround of waiting a moment before starting to record: why is it needed? Is it needed on all Android versions? Is it a known issue? Would 1 second be enough?
How come on the Galaxy S7 I failed to record the other side even when using MIC & speaker?
EDIT: I've found this, about an accessibility service being able to help with call recording:
https://developer.android.com/guide/topics/media/sharing-audio-input#voice_call_ordinary_app
Not sure how to use it though. It seems "ACR Phone Dialer" uses it. If anyone knows how it can be done, please let me know.
I spent many weeks working on a voice-call recording app, so I faced all your issues/questions/problems.
Moreover, my project had a low priority, so I didn't spend much time on it every day; I worked on this app for many months while Android was changing under the hood (minor and major releases).
I was always developing on the same Galaxy Note 5 using its stock ROM (without root), but I discovered that on the same device the behaviour changed from one Android release to another without any explanation.
For example, going from Nougat 7.0 to 7.1.2 I was unable to record a voice call using the same code as before.
Google has enforced or changed restrictions on voice-call recording many times.
In the beginning it was sufficient to use the VOICE_CALL AudioSource. Then manufacturers started to interpret this value as they wanted, and the result was that one implementation worked well while another did not.
Then reflection was needed to invoke undocumented/hidden methods to start voice-call recording.
Then Google added a runtime check, so calling them directly was no longer possible, even using reflection.
However, this approach lacked stability anyway, because there was no guarantee that a method would have the same name on all devices.
Then I started to reverse-engineer apps that were still working on newer Android versions, and I discovered that they were using a completely different and more robust approach. This took me many weeks, because all these apps use JNI libraries that try to hide the method inside assembler code.
When I successfully created a test app that recorded well, I tried the SAME code on many different devices and ROMs/versions, and surprisingly it worked well.
This means that all those different methods you can see in these apps' settings (I'm 98% sure about it) are just "fake", or refer to OLD methods that are no longer used.
A separate mention should be made for rooted devices:
these devices can change audio routes, so a different approach can be used in that case.
[1] There isn't any list or website listing all supported devices or the best method to successfully record a voice call.
[6] It's not possible to know which devices support voice-call recording just by using an API call. You have to try, and catch exceptions...
[8] Recording via MIC + speaker suffers from many issues: (1) the caller will hear all your ambient sound, so privacy is a big issue; (2) echo is a big problem; (3) the recording volume is very low, as is the quality of the recorded voice.
According to my tests, one way to improve this is to have an AccessibilityService active (no need to actually do anything in it) while choosing voice recognition as the audio source. It's also recommended to have the speaker turned on, because the audio is recorded from the microphone.
This approach seems to be used by some call-recording apps.
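A minimal sketch of what I mean, using only the standard APIs: a do-nothing accessibility service (which has to be declared in the manifest with the BIND_ACCESSIBILITY_SERVICE permission and enabled by the user) kept active while the recorder itself uses VOICE_RECOGNITION as its source. The class name and the commented usage at the end are just an example.

    import android.accessibilityservice.AccessibilityService;
    import android.view.accessibility.AccessibilityEvent;

    // A do-nothing accessibility service; merely having it enabled seems to be what
    // lets call audio reach the recorder on some devices.
    public class CallRecordingAccessibilityService extends AccessibilityService {

        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) {
            // Intentionally empty.
        }

        @Override
        public void onInterrupt() {
            // Intentionally empty.
        }
    }

    // Elsewhere, while the service above is enabled by the user:
    //   MediaRecorder recorder = new MediaRecorder();
    //   recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_RECOGNITION);
    //   recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    //   recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    //   recorder.setOutputFile(outputPath);
    //   recorder.prepare();
    //   recorder.start();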
The weird thing is that Google has written this as a rule on the Play Store:
The Accessibility API is not designed and cannot be requested for
remote call audio recording.
https://support.google.com/googleplay/android-developer/answer/11899428
No idea what "remote" means here.
Anyway, I've updated the GitHub repository to include these additions.
My current task involves fixing an Android music player that consumes too much RAM. Taking a look at the "Apps" section of the system settings, I found a process that is mysterious to me, called Media. Here is a screenshot:
I would like to know more about this process and about the reason why it appears every time a song is played and disappears every time the playback stops. The only reference I have been able to find in the official documentation is this one.
What does the MediaProvider class do?
Do you have any clue about what may be causing that process to be kept alive during the playback?
I solved it myself. It turned out that a Cursor was being left open (I don't know whether accidentally or deliberately), which caused the Media process to appear.
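For anyone hitting the same thing: the fix boiled down to making sure every MediaStore query gets closed. A minimal sketch (the column and query are just an example), assuming the "Media" process is the one hosting MediaProvider:

    import android.content.ContentResolver;
    import android.database.Cursor;
    import android.provider.MediaStore;

    public final class MediaQueries {

        // Queries against MediaStore are served by the MediaProvider process; leaving
        // the returned Cursor open keeps that process alive. Closing it (here via
        // try-with-resources) lets the "Media" process go away once playback stops.
        public static int countSongs(ContentResolver resolver) {
            try (Cursor cursor = resolver.query(
                    MediaStore.Audio.Media.EXTERNAL_CONTENT_URI,
                    new String[] { MediaStore.Audio.Media._ID },
                    null, null, null)) {
                return cursor != null ? cursor.getCount() : 0;
            }
        }

        private MediaQueries() {}
    }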
Still, it is weird that there is no mention of anything MediaProvider-ish, neither in the official SDK documentation nor on the net.
Whenever I try to load at least 4 MediaPlayers, one of them will corrupt the video it's trying to load and trigger the Android OS message "Can't play this video".
Other information:
With 3 MediaPlayers everything works fine.
On Android versions other than 4.2, the same code with the same 4 videos works.
The 4 videos can be played independently on the device. There is no format problem.
After starting the program and getting the "Can't play this video" message, the video can no longer be played in any other application unless I reset the device.
I tried this both with VideoViews and with independent MediaPlayers displayed on SurfaceViews.
I replicated the error on more devices running Android 4.2.
On Android 4.1.2 and other Android 4 versions that I don't recall exactly, the code worked fine.
On Android, the idea is that everything related to media codecs is hidden from the developer, who has to use a single, consistent API: MediaPlayer.
When you play media, whether it is a stream or something located on external storage, the low-level codecs/parsers are instantiated every time an application needs them.
However, for particular reasons related to hardware decoding, some codecs cannot be instantiated more than once. As a matter of fact, every application must release resources (codec instances, for instance) when it no longer needs them, by calling MediaPlayer.release() from a valid state.
In fact, what I'm saying is illustrated in the documentation of release on the Android Developers website:
Releases resources associated with this MediaPlayer object. It is
considered good practice to call this method when you're done using
the MediaPlayer. In particular, whenever an Activity of an application
is paused (its onPause() method is called), or stopped (its onStop()
method is called), this method should be invoked to release the
MediaPlayer object, unless the application has a special need to keep
the object around. In addition to unnecessary resources (such as
memory and instances of codecs) being held, failure to call this
method immediately if a MediaPlayer object is no longer needed may
also lead to continuous battery consumption for mobile devices, and
playback failure for other applications if no multiple instances of
the same codec are supported on a device. Even if multiple instances
of the same codec are supported, some performance degradation may be
expected when unnecessary multiple instances are used at the same
time.
So, either you are not calling release() when you are done playing back, or another app is holding a reference to this kind of resource.
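In practice that means something like the following in every Activity that plays media (a minimal sketch; R.raw.sample is a placeholder, and whether you release in onPause() or onStop() depends on whether playback should survive being backgrounded):

    import android.app.Activity;
    import android.media.MediaPlayer;
    import android.os.Bundle;

    public class PlayerActivity extends Activity {

        private MediaPlayer mediaPlayer;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mediaPlayer = MediaPlayer.create(this, R.raw.sample); // placeholder resource
            mediaPlayer.start();
        }

        @Override
        protected void onStop() {
            super.onStop();
            // Give the (possibly unique) hardware codec instance back to the system
            // so other players, in this app or in others, can use it.
            if (mediaPlayer != null) {
                mediaPlayer.release();
                mediaPlayer = null;
            }
        }
    }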
EDIT:
If you need to render several videos in the same Activity, you have a couple of choices. As I said in my answer, what you originally wanted is not possible because of low-level limitations; nor is it possible on iOS, by the way.
What you can try to do, though, is:
If the media you are playing is not real-time streamed content, you could combine the 4 videos into a single one using one of the widely available free video editors. Then render that video full screen in your Activity; it will look like you have 4 views.
If it is real-time/non-recorded content, keep the first video as is. I assume every video is encoded using the same codec/container. What you might try is transcoding the 3 other videos so they use a different codec and a different container. Make sure you are transcoding to a codec/container that is supported by Android. This might force Android to use different decoders at the same time. I think this is overkill compared to the result you're expecting.
Lastly, you could use a different backend for decoding, such as MediaPlayer + FFmpeg or just FFmpeg. But again, even if it works, this will be, I think, huge overkill.
To sum this up, you have to make compromises in order for this to work.
I have an app that is streaming audio content and sometimes it just stops all of a sudden.
The logcat window shows:
AudioHardware pcm playback is going to standby
and that's it.
I saw on another thread (pun intended) that someone said it was because he was using too many threads. Could that really be causing this? Could I give the audio thread a higher priority?
Any way to prevent the audio hardware PCM from going to standby?
I too am having this problem, and have nearly exhausted all my resources trying to find a workaround. I found an article on how to prevent this with AudioRecord, and maybe the same principles he talks about there will apply to you as well:
http://hificorder.blogspot.com/
In my case, I am having lockups using SoundPool. Increasing the number of max streams allowed prevented this from happening in most cases, but it still happens in other situations. I really wish there were better information about how to handle this correctly, because as it is, this is a show-stopping bug in my app, which may force me to abandon it altogether.
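For what it's worth, the max-streams tweak is just the first argument of the pre-API-21 SoundPool constructor (the value 10 below is an arbitrary choice of mine; on API 21+ the SoundPool.Builder().setMaxStreams(...) equivalent applies):

    import android.media.AudioManager;
    import android.media.SoundPool;

    public final class SoundPoolFactory {

        // Constructor arguments: maxStreams, stream type, srcQuality (unused, pass 0).
        // Raising maxStreams made the standby lockups far less frequent in my app.
        public static SoundPool create() {
            return new SoundPool(10, AudioManager.STREAM_MUSIC, 0);
        }

        private SoundPoolFactory() {}
    }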
Be sure to post back if you have any findings to add.