Whenever I try to load at least 4 MediaPlayers, one of them corrupts the video it's trying to load and triggers the Android OS message "Can't play this video".
Other information:
For 3 MediaPlayers everything works fine.
On other Android versions, different from 4.2, the same code with the same 4 videos works.
The 4 videos can be played independently on the device; there is no format problem.
After starting the program and getting the "Can't play this video" message, the video can no longer be played in any other application unless I restart the device.
I tried this both with VideoViews and with independent MediaPlayers displayed on SurfaceViews.
I replicated the error on several devices running Android 4.2.
On Android 4.1.2 and other Android 4.x versions, as far as I recall, the code worked fine.
On Android, the idea is that everything related to media codecs is hidden from the developer, who has to use a single, consistent API: MediaPlayer.
When you play media, whether it is a stream or a file located on the device, the low-level codecs/parsers are instantiated every time an application needs them.
However, for particular reasons related to hardware decoding, some codecs cannot be instantiated more than once. As a matter of fact, every application must release resources (codec instances, for instance) when it no longer needs them, by calling MediaPlayer.release() in a valid state.
In fact, what I'm saying is illustrated in the documentation of release() on the Android Developers website:
Releases resources associated with this MediaPlayer object. It is considered good practice to call this method when you're done using the MediaPlayer. In particular, whenever an Activity of an application is paused (its onPause() method is called), or stopped (its onStop() method is called), this method should be invoked to release the MediaPlayer object, unless the application has a special need to keep the object around. In addition to unnecessary resources (such as memory and instances of codecs) being held, failure to call this method immediately if a MediaPlayer object is no longer needed may also lead to continuous battery consumption for mobile devices, and playback failure for other applications if no multiple instances of the same codec are supported on a device. Even if multiple instances of the same codec are supported, some performance degradation may be expected when unnecessary multiple instances are used at the same time.
So, either you are not calling release() when you are done playing back, or another app is holding a reference to these resources.
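For illustration, here is a minimal sketch of the release pattern the documentation describes, assuming a single mediaPlayer field in an Activity (the names are illustrative, not taken from your code):

import android.app.Activity;
import android.media.MediaPlayer;

public class VideoActivity extends Activity {

    private MediaPlayer mediaPlayer;

    @Override
    protected void onPause() {
        super.onPause();
        // Free the codec instance so other players (and other apps) can use it.
        if (mediaPlayer != null) {
            mediaPlayer.release();
            mediaPlayer = null;
        }
    }
}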
EDIT:
If you need to render several videos in the same Activity, you have two choices. As I said in my answer, what you originally wanted is not possible because of low-level constraints; nor is it possible on iOS, by the way.
What you can try instead:
If the media you are playing is not real-time streamed content, you could combine the 4 videos into a single one, using one of the widely available free video editors. Then render that video full screen in your Activity; it will look like you have 4 Views.
If they are real-time/non-recorded content, keep the first video as is. I assume every video is encoded using the same codec/container. What you could try is transcoding the 3 other videos so they use a different codec and a different container. Make sure you are transcoding to a codec/container that is supported by Android. This might force Android to use different decoders at the same time. I think this is overkill compared to the result you're expecting.
Lastly, you could use a different backend for decoding, such as MediaPlayer + FFMPEG or just FFMPEG. But again, even if it works, this will be, I think, huge overkill.
To sum this up, you have to make compromises in order for this to work.
Related
Whenever I play a sound effect in my LibGDX game on an Android device, the game stutters. I have tried the game on three Samsung devices:
On Galaxy S7 Edge (2016, Android 8) and Galaxy Tab S 10.5 (2014, Android 6.0.1) the game is still playable, but it does not run smoothly whenever multiple sound effects are being played (looping sound effects are not a problem).
However, on a Galaxy S20 Ultra (2020, Android 10) the game is unplayable: every call to Sound.play() takes 2...4 ms and causes an "AUDIO_OUTPUT_FLAG_FAST denied by server; frameCount 0 -> 54276" error. This error does not appear on the other devices, but Sound.play() still takes 1...2 ms, which of course is a considerable portion of a 16 ms frame.
So what I think is pretty clear is that the problem is in the Sound.play() method itself: not, for example, the number of concurrent sounds playing (which I have limited to 8 but have tried 4 as well), or that the Android device would be too slow to process the sounds (in which case a 6-year-old Galaxy Tab should not outperform this year's high-end S20), or that the sound effect files would be too large (the one I'm using for testing was originally a 3.8 kB WAV). And yes, I am using AssetManager to load the sounds in advance.
I have now spent two long days doing research, found about 15-20 topics on different forums about what I believe is the same or related issue, and tried out all the suggested fixes without any success:
Changing audio format from WAV to OGG
Different sample rates: 44.1k, 48k, 96k on both formats (with 96k, there is no stutter and no error, but no audible sound either)
Adding silence of 1 or 2 seconds to the end of the sound effect (which itself is 41 ms long), with all the combinations of the above formats and sample rates.
Some say that looping a silent sound clip "in the background" has solved the problem, but I already have another sound (a car engine) looping constantly in the game, and that seems to have no effect.
I have also seen suggestions to use the Music class instead of Sound, but it's not suitable for collision sound effects with Box2D because the pitch cannot be adjusted.
The only workaround that I found but have not tried yet is playing the sounds on a different thread. I have not tried it because I'm not familiar with multithreading and have not been able to find a comprehensive enough guide on how to do it (properly) in LibGDX. I also assume that this approach would be problematic for any sounds which may have to be paused, stopped or adjusted during playback by some actor from the main thread. Furthermore, according to https://github.com/libgdx/libgdx/wiki/Threading, "You should never perform multi-threaded operations on anything that is graphics or audio related".
Therefore, before I even start familiarizing myself with that topic (multithreading), I just wanted to ask once more: is there really no other solution? It just doesn't feel right that a high-end Android device from this year cannot start playback of a small WAV sound any faster than in 4 ms. There are lots of games in the Play Store with working sound effects and smooth gameplay; are they all really using multithreading?
I don't have a complete answer, but I'll share some ideas here.
My own anecdotal experience is that sound operations such as starting sound playback tend to be too time-consuming for a typical render thread on Android. I've tried a few different approaches (AudioTrack, SoundPool, etc.), and as best I can remember have gotten similar results in each case.
Putting the audio on a different thread seems like the most practical solution. I understand the hesitance if you're not familiar with multithreading, and I think you're right to be cautious, especially when using a third-party library. However, for simple tasks, Android supplies some fairly straightforward tools, like HandlerThread and Handler, that could perhaps be leveraged.
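As a rough sketch of that idea (soundThread and soundHandler are my names, and I'm assuming a LibGDX Sound field called sound; treat this as an experiment, not a verified recipe):

import android.os.Handler;
import android.os.HandlerThread;

// Set up once, e.g. when the game starts:
HandlerThread soundThread = new HandlerThread("SoundThread");
soundThread.start();
Handler soundHandler = new Handler(soundThread.getLooper());

// Then, instead of calling sound.play() directly on the render thread:
soundHandler.post(new Runnable() {
    @Override
    public void run() {
        sound.play(1.0f); // now runs on the dedicated audio thread
    }
});

// When shutting down:
soundThread.quitSafely();

Whether LibGDX's audio objects tolerate this is exactly the open question about the documentation discussed next.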
As for the LibGDX documentation saying not to perform multi-threaded operations on anything audio related, it's not clear to me whether that means don't do anything audio-related on a thread other than the render thread, or if it just means to keep all audio on a single thread, but that that thread doesn't have to be the render thread. If it's the latter, then putting audio on a separate thread might be an option.
I took a quick look at the LibGDX source code. I'd have to spend more time to better understand what's going on, but I see use of both AudioTrack and SoundPool, and I'm pretty sure I've run into this issue with both.
But, I also see some signs of asynchronous sound functionality. There are some classes with 'asynchronous' in the name that use a dedicated handler thread. I don't know if this functionality is documented (I couldn't find the documentation immediately) or otherwise supported, but it does seem to be present in the source code. The comments say there are some limitations, but it's not immediately clear to me what they are.
As for communication between the render thread and an audio thread, it would add some complexity, but you should be able to do it fairly straightforwardly using handlers or other similar tools. In fact, that's what the LibGDX code I looked at does - it creates a HandlerThread and uses a Handler (naturally) to post to it. It can still be difficult, especially when using a third-party library where you don't control where all audio operations occur. For example, LibGDX may always set up the audio objects on a specific thread (e.g. the render thread), which means if you use another thread, you'll be using the objects on a thread other than that on which they were created. I doubt that would be an issue, but it depends on the technology. (For example, the documentation for ExoPlayer says that instances should only be used from a single thread.)
In my own code I'm doing all audio myself, so I control it and can put everything on the same thread. That might be difficult or impossible with LibGDX, but the presence of the 'asynchronous' audio classes may be a hint that playing audio on a different thread is safe to do. (And maybe you can make use of those classes, assuming they're a supported part of the API.)
In case someone else has this issue: in your AndroidLauncher, override this method.
@Override
public AndroidAudio createAudio(Context context, AndroidApplicationConfiguration config) {
    // Use the asynchronous audio backend so play() calls don't block the render thread.
    return new AsynchronousAndroidAudio(context, config);
}
You MUST make sure you don't have any sound-id actions (e.g. sound.stop(soundId)), as those will not work with AsynchronousAndroidAudio and will crash the game. So check that before you publish your game.
I have an app with lots of short videos being played all over. Some of them use VideoView, some just a TextureView with a MediaPlayer. This works fine, except that after a while, any new attempt to play a video using either of those methods fails with MEDIA_ERROR_UNKNOWN and a seemingly random value for extra. It also seems to happen a lot earlier on lower-end devices. It starts working again after killing the app.
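For context, the error arrives through MediaPlayer's error callback, roughly like this (a minimal sketch, not my exact code; Log is android.util.Log):

mediaPlayer.setOnErrorListener(new MediaPlayer.OnErrorListener() {
    @Override
    public boolean onError(MediaPlayer mp, int what, int extra) {
        // what is MEDIA_ERROR_UNKNOWN (1); extra varies seemingly at random
        Log.e("VideoPlayback", "MediaPlayer error: what=" + what + " extra=" + extra);
        return true;
    }
});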
Some more info that might be useful:
Most of the videos are set to loop
There can sometimes be more than one video loaded at a time, though it still breaks even if I avoid those situations
I call release() on the MediaPlayers I use when the activity or fragment they're in is destroyed, so I'd assume it's not a problem with leaking resources or something like that
All the videos being played are in the app's local storage
On some devices, I sometimes get an IOException saying setDataSourceFD failed instead, though I couldn't reproduce it to post the exact message here
My min sdk is 21 and target is 29
From the googling I've done so far, the only thing that seemed remotely related was that the video might be in an unsupported format, but that can't be the case here, as it sometimes fails on videos that were played just fine before, sometimes even in the same run.
There's also this similar question, though that seems to be specific to a device, whereas I see this on anything I try it on, from emulators, through low-ends to high-ends, with the time it takes to break generally being longer the higher-end the device is.
This has been eluding me for a few days now, so any help would be much appreciated. Thanks.
Background
Phone call recording is not really supported on Android, yet some devices support it to some extent.
This has made various call recording apps gather as much information as possible about devices and what should be done on them, and decide based on this what to do.
Some even offer root solutions.
One such example is boldbeast Call Recorder app, which offers a lot of various configurations to change:
"record mode" . Shows 14 modes for non-rooted devices, and up to 34 for rooted. Also shows "Alsa mode" as an option for it, for rooted devices.
Has "Tune Audio Effect ("auto tune a groupd of parameters") .
Has "Tune Audio Route", with the possible values of "Disabled", "Group1", "Group2", "Group3"
For rooted devices:
"change audio controls" ("auto change audio controls")
"change audio driver" (change audio drive settings to enable record mode 21,22,23,24,31,32,33,34")
For rooted devices: "start input stream"
The problem
If I need to create a call recording app, there is no way other than finding the various workarounds for the various devices, but it seems other apps use terms that don't appear in the API.
I can't find any of the terms used by the app I've mentioned, for example.
What I've found
Other than tons of questions about how to record calls on Android, showing that it doesn't work on all devices, I found some interesting things. Here are my tries and insights so far:
There are some audio recording sources we can use while preparing the recording (docs here), but sadly the one that works might differ per device. For some, VOICE_CALL works, and for others, different sources do. But at least we can try...
On a OnePlus 2 with Android 6.0.1, incoming calls can be recorded using VOICE_CALL, but I can't get outgoing calls recorded there unless I use MIC as the audio source together with the speaker turned on. Somehow, the app I've mentioned succeeds in recording it without any issues. I'm sure I will see other issues with other Android devices, as I've tried to address this whole topic in the past. Update: I've found this sample project (also here), which for some reason sleeps for 2 seconds on the UI thread between the prepare and start calls of the MediaRecorder. It works fine, and when I did something similar (waiting via Handler.postDelayed for 1 second), it worked fine too (see the sketch after this list). The comment that was written there is "Sometimes prepare takes some time to complete".
On a Galaxy S7 with Android 8, I've failed to get the sound of the other side for outgoing calls AND incoming calls (even with MIC and speaker), no matter what I did, yet the app I've mentioned worked fine.
To let you try my POC of call recording, I've published an open-source GitHub repository here, containing a sample that will record a single call and let you listen to the most recent one, if all works well.
This "ViktorDegtyarev - CallRecLib" SDK , which doesn't seem to work at all, and crashes on various Android versions
These 2 old sample projects : rvoix , esnyder-callrecorder , both fail to actually record. The second doesn't even seem to work on Android 6.0.1 device, which it's supposed to support.
aykuttasil - CallRecorder sample and axet - android-call-recorder sample - both, just like on my POC, don't have any tweaking except for AudioSource, and because of this they fails to record on some cases, such as OnePlus 2 output-audio of outgoing calls.
Most third-party apps only offer AudioSource tweaking, but some (like "boldbeast") offer more. One example is "Automatic Call Recorder", which has "configuration" (10 values to choose from, the first being "default") and "method" (5 values to choose from, the first being "default"). Those apps probably do not want others to understand what those configurations mean, so they use generic names. Or it's just too complicated for everyone (especially for users), so they generalize the names.
There is a "setMode" API (here), but the mode doesn't seem to change upon calling it. I was thinking of maybe changing the "channel" the call goes through this way, but it doesn't work. It stays at the value 2 during a call, which is MODE_IN_CALL.
There are customized parameters available for various devices (each OEM has its own parameters), which can be set here, and maybe even via JNI (here and here), but I can't figure out where to get this information (meaning which key-value pairs are available). I've searched in a lot of places, but couldn't find any website that lists which parameters are available, and for which devices.
I was thinking of using the AudioRecord class instead of MediaRecorder for recording, thinking that since it's a bit lower level, it could give me more power and access to customized capabilities, but it seems to be very similar to MediaRecorder, and it even uses the same audio sources (example here).
Another try I had with low-level APIs went even further, using JNI (OpenSL ES for Android). For this, I couldn't find much information (except here and here), and only found the 2 samples from Google (called "audio echo" and "native audio"), which are not about recording sound, or at least I don't see it occurring there.
Android P might have an official way to record calls (read here and here). Testing on my Android P DP3 device (Pixel 2), I could record both sides fine in both incoming and outgoing calls, using "DEFAULT" as the audio source, so maybe the API will finally be official and work on all Android versions. I wrote about it here and here.
I was thinking that maybe the Visualizer class could be a workaround for recording, but according to a StackOverflow post (here), the quality is extremely low, so I decided I shouldn't try it. Plus, I couldn't find a sample of how to record from it.
I've found some parameters that might be available on some devices, here (found via here), all starting with "AUDIO_PARAMETER_", but testing on a Galaxy S7, all of them returned an empty string. I've also found this website, which gave me the idea of using audioManager.setParameters("noise_suppression=off") together with the MIC audio source, but this didn't seem to do anything on the Galaxy S7.
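To make the above concrete, here is a minimal sketch of the recording setup and the delayed-start workaround from the OnePlus 2 item above (outputFile is a placeholder for wherever you store recordings; the format/encoder choices are just examples):

final MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_CALL); // try MIC or other sources on devices where this fails
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setOutputFile(outputFile.getAbsolutePath());
try {
    recorder.prepare();
} catch (IOException e) {
    e.printStackTrace();
    return;
}
// "Sometimes prepare takes some time to complete": wait a moment before starting.
new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
    @Override
    public void run() {
        recorder.start();
    }
}, 1000);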
The questions
As opposed to other similar questions about this topic, I'm not asking how to record calls. I already know it's a very problematic and complex problem. I already know I will have to address various configurations, and that I will probably use a server to store all of them and find there the best match for each one.
What I want to ask is more about the tweaks and workarounds:
1. Is there a list of configurations for the various devices and Android versions, and what to choose for each?
2. Besides the audio source, which other configurations can be used?
3. Which parameters are possible for the various devices and Android versions? Are there any OEM websites describing them?
4. What are the various terms in the app I've mentioned? Where can I find information on how to change them?
5. Which tools are available for rooted devices?
6. Is it possible to know which devices support call recording and which do not, by using the API?
7. About the OnePlus 2 workaround of waiting a moment before starting to record: why is it needed? Is it needed on all Android versions? Is it a known issue? Would 1 second be enough?
8. Why did I fail to record the other side on the Galaxy S7, even when using MIC and speaker?
EDIT: I've found this note about an accessibility service being able to help with call recording:
https://developer.android.com/guide/topics/media/sharing-audio-input#voice_call_ordinary_app
Not sure how to use it though. It seems "ACR Phone Dialer" uses it. If anyone knows how it can be done, please let me know.
I spent many weeks working on a voice call recording app, so I faced all of your issues/questions/problems.
Moreover, my project had low priority, so I didn't spend much time on it every day; I worked on this app for many months while Android was changing under the hood (minor and major releases).
I was always developing on the same Galaxy Note 5 using its stock ROM (without root), but I discovered that on the same device the behaviour changed from one Android release to another without any explanation.
For example, from Nougat 7.0 to 7.1.2, I was unable to record a voice call using the same code as before.
Google has enforced or changed restrictions on voice call recording many times.
At the beginning it was sufficient to use the VOICE_CALL AudioSource. Then manufacturers started to interpret this value as they wanted, and the result was that one implementation worked well while another did not.
Then Reflection was needed to call undocumented/hidden methods to start voice call recording.
Then Google added a runtime check, so calling them directly was no longer possible, even using Reflection.
In any case, this method lacked stability, because there was no guarantee that a method had the same name on all devices.
Then I started to reverse-engineer apps that were still working on newer Android versions, and I discovered that they were using a completely different and more secure approach. This took me many weeks, because all these apps use JNI libraries and try to hide this method within assembly code.
When I successfully created a test app that recorded well, I tried the SAME code on many different devices and ROMs/versions, and surprisingly it worked well.
This means that all those different methods you can see in these apps' settings (I'm 98% sure about it) are just "fake", or just refer to OLD methods that are no longer used.
A separate mention should be made of rooted devices: these devices can change audio routes, so a different approach can be used in that case.
[1] There isn't any list or website listing all supported devices or the best method for a successful voice call recording.
[6] It's not possible to know whether a device supports voice call recording just by using an API call. You have to try and catch exceptions...
[8] Recording via MIC+speaker suffers from many issues: (1) the caller will hear all of your ambient sound, so privacy is a big issue; (2) the echo is a big problem; (3) the recording volume is very low, as is the quality of the recorded voice.
According to my tests, one way to improve this is to have an AccessibilityService active (it doesn't need to do anything at all) while choosing voice recognition as the audio source. It's also recommended to have the speaker turned on, because the audio is recorded from the microphone.
This seems to be what some call-recording apps do.
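A minimal sketch of that combination (the service class name is mine; the service does nothing and just needs to be enabled by the user in the accessibility settings):

import android.accessibilityservice.AccessibilityService;
import android.view.accessibility.AccessibilityEvent;

// An accessibility service that does nothing; its mere presence affects what can be captured.
public class CallRecordingAccessibilityService extends AccessibilityService {
    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) { }

    @Override
    public void onInterrupt() { }
}

// Elsewhere, while the service is enabled, record with the voice-recognition source:
recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_RECOGNITION);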
The weird thing is that Google has written this as a rule on the Play Store:
The Accessibility API is not designed and cannot be requested for remote call audio recording.
https://support.google.com/googleplay/android-developer/answer/11899428
No idea what "remote" means here.
Anyway, I've updated the Github repository to include these additions.
I have a use case where video from a MediaPlayer has to be delivered to two Surfaces. Unfortunately, the whole Android Surface API lacks that functionality (or at least, after studying the developer site, I'm unable to find it).
I've had a similar use case where the video was produced by a custom camera module; after a slight modification, I was able to retrieve a Bitmap from the camera, so I just used lockCanvas, drawBitmap and unlockCanvasAndPost on the two Surfaces. With MediaPlayer, I don't know how to retrieve a Bitmap while keeping playback at the proper timing.
Also, I've tried to use Allocation for that purpose, with one Allocation serving as USAGE_IO_INPUT and two as USAGE_IO_OUTPUT, using the ioReceive, copyFrom and ioSend methods. But that was also a dead end. For some unknown reason, the RenderScript engine is very unstable on my platform; I've had numerous errors like:
android.renderscript.RSInvalidStateException: Calling RS with no Context active.
when the context passed to RenderScript.create was the one from the Application class, or
Failed loading RS driver: dlopen failed: could not locate symbol .... falling back to default
(I've lost the full log somewhere...). In the end, I was not able to create a proper input Allocation type compatible with MediaPlayer. Given the mentioned flaws of RenderScript on my platform, I would consider it a last resort for solving this issue.
So, in conclusion: how do I play video (from an mp4 file) to two Surfaces, keeping the video in sync? Also, a more generic question: how do I play video to N Surfaces which can be dynamically added and removed during playback?
I've resolved my issue by having multiple instances of MediaPlayer with the same video file as their source. When doing basic player operations like pause/play/seek, I just apply them to every player.
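A rough sketch of what I mean, assuming a list of target Surfaces and a local video file (surfaces, context, videoFile and positionMs are illustrative names):

List<MediaPlayer> players = new ArrayList<>();
for (Surface surface : surfaces) {
    MediaPlayer player = MediaPlayer.create(context, Uri.fromFile(videoFile)); // same source for every player
    player.setSurface(surface);
    players.add(player);
}

// Apply every transport operation to all players so they stay in sync:
for (MediaPlayer player : players) {
    player.seekTo(positionMs);
    player.start();
}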
I created an Android app that records the device screen (using the MediaProjection API) and video from the camera at the same time. I use MediaRecorder in both cases. I need a way to find out whether a device is actually capable of recording two video streams simultaneously. I assume there is some limit on the number of streams that can be encoded simultaneously on a given device, but I cannot find any API on the Android platform to query for that information.
Things I discovered so far:
Documentation for MediaRecorder.release() advises to release MediaRecorder as soon as possible as:
" Even if multiple instances of the same codec are supported, some performance degradation may be expected when unnecessary multiple instances are used at the same time."
This suggests that there's a limit on the number of instances of a codec, which directly limits the number of MediaRecorders.
I wrote test code that creates MediaRecorders (configured to use MPEG4/H264) and starts them in a loop. On a Nexus 5 it always fails with java.io.IOException: prepare failed when calling prepare() on the 6th instance. This suggests you can have only 5 simultaneous instances of MediaRecorder on a Nexus 5.
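The probe looked roughly like this (a sketch; dir is some writable directory, and the size/frame-rate values are illustrative):

List<MediaRecorder> recorders = new ArrayList<>();
try {
    while (true) {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setVideoSize(1280, 720);
        recorder.setVideoFrameRate(30);
        recorder.setOutputFile(new File(dir, "probe" + recorders.size() + ".mp4").getAbsolutePath());
        recorder.prepare(); // throws IOException once the codec limit is reached
        recorder.start();
        recorders.add(recorder);
    }
} catch (IOException e) {
    Log.i("RecorderProbe", "Codec limit reached after " + recorders.size() + " recorders");
} finally {
    for (MediaRecorder r : recorders) {
        r.release();
    }
}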
I'm not aware of anything you can query for this information, though it's possible something went into Lollipop that I didn't see.
There is a limit on the number of hardware codec instances that is largely dependent on hardware bandwidth. It's not a simple question of how many streams the device can handle -- some devices might be able to encode two 720p streams but not two 1080p streams.
On some devices the codec may fall back to a software implementation if it runs out of hardware resources. Things will work but will be considerably slower. (I've seen this for H.264 decoding, but I don't know if it also happens for encoding.)
I don't believe there is a minimum system requirement in CTS. It would be useful to know that all devices could, say, decode two 1080p streams and encode one 1080p simultaneously, so that a video editor could be made for all devices, but I don't know if such a thing has been added. (Some very inexpensive devices would struggle to meet that.)
I think it really depends on the device and its RAM capacity... you could read the buffers for the screen and the camera as much as you like, but only one read at a time rather than simultaneously, I think, to prevent concurrency issues. But honestly, I don't know for sure.