I'm looking for a way to check whether other applications are using STREAM_VOICE_CALL (for example during a phone call, or while listening to a WhatsApp voice note with the phone held to the ear), though in general we should find a way to monitor any chosen stream. I need this for use in Tasker, but that should be irrelevant (mentioning it should help other people with the same problem find this).
Basically this was just perfect:
AudioSystem.isStreamActive(int stream, int inPastMs)
which could be used with any audio stream... in my case
AudioSystem.isStreamActive(STREAM_VOICE_CALL, 0)
Too bad AudioSystem has been replaced by the AudioManager class :( (there is some info about AudioSystem in this other thread: Android: Is there a way to detect if a sound from SoundPool is still playing)
Now we have AudioManager.isMusicActive, which I have tried and which works flawlessly, but of course it refers to STREAM_MUSIC.
Any suggestions?
Thanks
AudioSystem is a hidden API, so you need reflection to access it. Create an extension for AudioManager and then use it as:
audioManager.isStreamActive(AudioManager.STREAM_SYSTEM)
Implementation:
import android.media.AudioManager
import java.lang.reflect.Method

// AudioSystem is hidden, so look it up by its fully qualified name;
// plain "AudioSystem" would throw ClassNotFoundException.
private val AUDIO_SYSTEM_IS_STREAM_ACTIVE: Method by lazy {
    Class.forName("android.media.AudioSystem")
        .getDeclaredMethod(
            "isStreamActive",
            Integer::class.javaPrimitiveType,
            Integer::class.javaPrimitiveType
        )
        .also { it.isAccessible = true }
}

fun AudioManager.isStreamActive(stream: Int): Boolean {
    require(stream in -1..11) {
        "Stream type must be between -1 (DEFAULT) and 11 (ASSISTANT)"
    }
    return when (stream) {
        AudioManager.STREAM_MUSIC -> isMusicActive
        AudioManager.STREAM_RING ->
            // Treat the ring stream as inactive while the ringer is silenced.
            ringerMode != AudioManager.RINGER_MODE_SILENT
                && AUDIO_SYSTEM_IS_STREAM_ACTIVE.invoke(null, stream, 0) as Boolean
        // isStreamActive(stream, inPastMs) is static, hence the null receiver.
        else -> AUDIO_SYSTEM_IS_STREAM_ACTIVE.invoke(null, stream, 0) as Boolean
    }
}
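With the extension in place, the asker's original case becomes (a usage sketch, assuming an AudioManager instance named audioManager):

// Is any app currently playing on the voice-call stream?
val inVoiceCall = audioManager.isStreamActive(AudioManager.STREAM_VOICE_CALL)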
I want to send multiple tracks to a remote peer, for example videoTrack, audioTrack, and shareScreenTrack.
I use UNIFIED_PLAN, like this:
val rtcConfig = PeerConnection.RTCConfiguration(
    arrayListOf(PeerConnection.IceServer.builder("stun:stun.l.google.com:19302").createIceServer())
).apply { sdpSemantics = PeerConnection.SdpSemantics.UNIFIED_PLAN }
And I add tracks like this
peerConnection?.addTrack(videoTrack)
peerConnection?.addTrack(audioTrack)
peerConnection?.addTrack(captureScreenVideoTrack)
but only the first track goes through.
When I set a breakpoint in onTrack, it is hit only once, for videoTrack. It is never hit for audioTrack or captureScreenVideoTrack.
override fun onTrack(transceiver: RtpTransceiver?) {
    super.onTrack(transceiver)
    val track = transceiver?.receiver?.track() ?: return
    when (track.kind()) {
        MediaStreamTrack.VIDEO_TRACK_KIND -> {
            // videoTrack or captureScreenVideoTrack
        }
        MediaStreamTrack.AUDIO_TRACK_KIND -> {
            // audioTrack
        }
        else -> {}
    }
}
I found my problem; my fault. I was calling createOffer from onRenegotiationNeeded. onRenegotiationNeeded is triggered when the first track is added to the PeerConnection; since the second track is added right after that, onRenegotiationNeeded is triggered again.
I removed the createOffer call from onRenegotiationNeeded, made it run only on the first callback, and the problem was resolved. A sketch of that guard is below.
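A minimal sketch of that one-shot guard (peerConnection, sdpObserver, and the MediaConstraints usage here are assumptions, not my exact project code):

import java.util.concurrent.atomic.AtomicBoolean

private val offerCreated = AtomicBoolean(false)

override fun onRenegotiationNeeded() {
    // Create the initial offer exactly once; the tracks added right after
    // the first one are picked up by that same negotiation instead of
    // triggering a new offer each time.
    if (offerCreated.compareAndSet(false, true)) {
        peerConnection?.createOffer(sdpObserver, MediaConstraints())
    }
}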
Android 12 came up with new privacy settings to disable access to the camera and mic sensors, referred to as Toggles in the docs.
As it is mentioned in the docs:
the system reminds the user that the device-wide toggle is turned off
However, it seems that it only reminds the user when an app requests the Camera permission, and not when trying to authenticate the user using biometrics (face authentication on Pixel phones, which, guess what!? uses the camera). [I'm using the AndroidX Biometric library]
Is there any way to find out whether Camera access has been blocked by the user, without requesting any permission?
I guess the note in the docs didn't take into account that the app might use face authentication:
Note: The toggles mentioned in this section shouldn't require changes to your app's logic, as long as you follow privacy best practices.
Notes:
You can't register a new face in Settings while camera access is blocked; the Settings app shows no error, just a blank camera feed.
I am using a Pixel 4 (Android 12).
The 'Join Wi-Fi by scanning a QR code' feature does not work either, and shows no feedback to the user when camera access is blocked (Pixel 5).
So, I am also looking for a solution - I have a biometric library, and a few reports with the same problem have shown up in my DMs: Face Unlock doesn't work on the Pixel 4 when the camera is 'muted'.
For now there is still no fix, but maybe my research can help someone.
1. I checked the new API for privacy toggles.
Android 12 introduces a new SensorPrivacyManager with a supportsSensorToggle() method - it returns true if the device is able to 'mute' the camera or mic:
val sensorPrivacyManager = applicationContext
    .getSystemService(SensorPrivacyManager::class.java)

val supportsMicrophoneToggle = sensorPrivacyManager
    .supportsSensorToggle(SensorPrivacyManager.Sensors.MICROPHONE)
val supportsCameraToggle = sensorPrivacyManager
    .supportsSensorToggle(SensorPrivacyManager.Sensors.CAMERA)
If you look into SensorPrivacyManager, you can find that it provides some more useful methods, so I developed the following code:
fun isCameraAccessible(): Boolean {
    return !checkIsPrivacyToggled(SensorPrivacyManager.Sensors.CAMERA)
}

@SuppressLint("PrivateApi")
private fun checkIsPrivacyToggled(sensor: Int): Boolean {
    val sensorPrivacyManager: SensorPrivacyManager =
        appContext.getSystemService(SensorPrivacyManager::class.java)
    if (sensorPrivacyManager.supportsSensorToggle(sensor)) {
        // The public API has no getter for the toggle state, so read the
        // hidden isSensorPrivacyEnabled(sensor, userId) via reflection.
        val userHandleField = UserHandle::class.java.getDeclaredField("USER_CURRENT")
        userHandleField.isAccessible = true
        val userHandle = userHandleField.get(null) as Int
        val m = SensorPrivacyManager::class.java.getDeclaredMethod(
            "isSensorPrivacyEnabled",
            Int::class.javaPrimitiveType,
            Int::class.javaPrimitiveType
        )
        m.isAccessible = true
        return m.invoke(sensorPrivacyManager, sensor, userHandle) as Boolean
    }
    return false
}
Unfortunately, the service rejects this call with a SecurityException - missing android.permission.OBSERVE_SENSOR_PRIVACY - even if we declare that permission in the manifest. At least on the emulator.
2. We can try to detect the new "sensor-in-use" indicator:
fun checkForIndicator() {
    findViewById<View>(Window.ID_ANDROID_CONTENT)?.let {
        it.setOnApplyWindowInsetsListener { view, windowInsets ->
            val indicatorBounds = windowInsets.privacyIndicatorBounds
            if (indicatorBounds != null) {
                Toast.makeText(view.context, "Camera-in-use detected", Toast.LENGTH_LONG).show()
            }
            // change your UI to avoid overlapping
            windowInsets
        }
    }
}
I didn't test this code (no real device), but to me it's not very useful, because we can check the camera indicator only AFTER the biometric auth flow starts, while I need to know whether the camera is accessible BEFORE the biometric auth starts.
3. Because the privacy toggles are related to Quick Settings, I figured there might be a way to find out how the tiles determine the current toggle state.
But this API uses a very interesting solution - it does not use the Settings.Global or Settings.Secure sections; instead, all preferences are saved in "system/sensor_privacy.xml", which is not accessible to 3rd-party apps.
See SensorPrivacyService.java
I believe there is a way to find out that the camera is blocked, but it seems some deeper research is required.
UPDATED 28/10/2021
So after some digging in the AOSP sources, I found that the APP_OP_CAMERA app-op reflects the "blocked" state.
Just call if (SensorPrivacyCheck.isCameraBlocked()) { return } - this call also notifies the system, which then shows the "Unblock" dialog.
Example
Solution:
@TargetApi(Build.VERSION_CODES.S)
@RestrictTo(RestrictTo.Scope.LIBRARY)
object SensorPrivacyCheck {
    fun isMicrophoneBlocked(): Boolean {
        return Utils.isAtLeastS && checkIsPrivacyToggled(SensorPrivacyManager.Sensors.MICROPHONE)
    }

    fun isCameraBlocked(): Boolean {
        return Utils.isAtLeastS && checkIsPrivacyToggled(SensorPrivacyManager.Sensors.CAMERA)
    }

    @SuppressLint("PrivateApi", "BlockedPrivateApi")
    private fun checkIsPrivacyToggled(sensor: Int): Boolean {
        val sensorPrivacyManager: SensorPrivacyManager =
            AndroidContext.appContext.getSystemService(SensorPrivacyManager::class.java)
        if (sensorPrivacyManager.supportsSensorToggle(sensor)) {
            try {
                // Map the runtime permission to its app-op name.
                val permissionToOp: String =
                    AppOpCompatConstants.getAppOpFromPermission(
                        if (sensor == SensorPrivacyManager.Sensors.CAMERA)
                            Manifest.permission.CAMERA else Manifest.permission.RECORD_AUDIO
                    ) ?: return false
                val noteOp: Int = try {
                    AppOpsManagerCompat.noteOpNoThrow(
                        AndroidContext.appContext,
                        permissionToOp,
                        Process.myUid(),
                        AndroidContext.appContext.packageName
                    )
                } catch (ignored: Throwable) {
                    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT)
                        PermissionUtils.appOpPermissionsCheckMiui(
                            permissionToOp,
                            Process.myUid(),
                            AndroidContext.appContext.packageName
                        ) else AppOpsManagerCompat.MODE_IGNORED
                }
                // When the toggle blocks the sensor, the op is not MODE_ALLOWED.
                return noteOp != AppOpsManagerCompat.MODE_ALLOWED
            } catch (e: Throwable) {
                e.printStackTrace()
            }
        }
        return false
    }
}
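For reference, the same idea without my library's helpers (Utils, AndroidContext, AppOpCompatConstants, and PermissionUtils are all internal to it) can be sketched with platform APIs only - assuming API 31+, and noting that noteOpNoThrow(String, Int, String) is deprecated but still available:

import android.Manifest
import android.app.AppOpsManager
import android.content.Context
import android.hardware.SensorPrivacyManager
import android.os.Process

fun isCameraBlockedByToggle(context: Context): Boolean {
    val spm = context.getSystemService(SensorPrivacyManager::class.java)
    // Devices without the toggle can never block the camera this way.
    if (!spm.supportsSensorToggle(SensorPrivacyManager.Sensors.CAMERA)) return false

    val appOps = context.getSystemService(AppOpsManager::class.java)
    val op = AppOpsManager.permissionToOp(Manifest.permission.CAMERA) ?: return false
    // With the device-wide toggle off, the op is denied even when the
    // runtime CAMERA permission itself has been granted.
    val mode = appOps.noteOpNoThrow(op, Process.myUid(), context.packageName)
    return mode != AppOpsManager.MODE_ALLOWED
}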
I have an application that occasionally speaks via the system's text-to-speech (TTS) engine, but if there's a background service (like an audiobook or a music stream) running at the same time, they overlap.
I would like to pause the media, play my TTS, then unpause the media. I've looked, but can't find any solutions.
I believe that if I were to play actual audio from my app, it would pause the media until my playback was complete (if I understand what I've found correctly). But TTS doesn't seem to have the same effect. The speech is totally dynamic, so I can't just record all the options.
Using the latest Xamarin.Forms, I've looked into all the media NuGet packages I could find, and they all seem pretty centered on controlling media from files.
My only potential thought (I don't like it) is to play an empty audio file while the TTS is running, but I would like a more elegant solution if one exists.
(I don't care about iOS at the moment, so if it's an Android-only solution, I'm okay with it. And if it's native (Java/Kotlin), I can convert/incorporate it.)
Agreeing with what rbonestell said, you can use DependencyService and AudioFocus to achieve it. First, create an interface in the PCL.
public interface IControl
{
void StopBackgroundMusic();
}
When you record the audio, you can execute the DependencyService with the following code.
private void Button_Clicked(object sender, EventArgs e)
{
DependencyService.Get<IControl>().StopBackgroundMusic();
//record the audio
}
In the Android project, you can create a StopMusicService to achieve that.
[assembly: Dependency(typeof(StopMusicService))]
namespace TTSDemo.Droid
{
    public class StopMusicService : IControl
    {
        AudioManager audioMan;
        AudioManager.IOnAudioFocusChangeListener listener;

        public void StopBackgroundMusic()
        {
            audioMan = (AudioManager)Android.App.Application.Context.GetSystemService(Context.AudioService);
            listener = new MyAudioListener(this);
            var ret = audioMan.RequestAudioFocus(listener, Stream.Music, AudioFocus.Gain);
        }
    }

    internal class MyAudioListener : Java.Lang.Object, AudioManager.IOnAudioFocusChangeListener
    {
        private StopMusicService stopMusicService;

        public MyAudioListener(StopMusicService stopMusicService)
        {
            this.stopMusicService = stopMusicService;
        }

        public void OnAudioFocusChange([GeneratedEnum] AudioFocus focusChange)
        {
            // throw new NotImplementedException();
        }
    }
}
Thanks to Leon Lu - MSFT, I was able to go in the right direction. I took his implementation (which has some deprecated calls to the Android API), and updated it for what I needed.
I'll be doing a little more work making sure it's stable and functional. I'll also see if I can clean it up a little too. But here's what works on my first test:
[assembly: Dependency(typeof(MediaService))]
namespace ...Droid.Services
{
    public class MediaService : IMediaService
    {
        public async Task PauseBackgroundMusicForTask(Func<Task> onFocusGranted)
        {
            var manager = (AudioManager)Android.App.Application.Context.GetSystemService(Context.AudioService);
            // Transient focus with ducking: other audio pauses (or ducks)
            // while we hold the focus, then resumes once it is abandoned.
            var builder = new AudioFocusRequestClass.Builder(AudioFocus.GainTransientMayDuck);
            var focusRequest = builder.Build();
            var ret = manager.RequestAudioFocus(focusRequest);
            if (ret == AudioFocusRequest.Granted)
            {
                await onFocusGranted?.Invoke();
                manager.AbandonAudioFocusRequest(focusRequest);
            }
        }
    }
}
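Since the question notes that a native (Java/Kotlin) solution is fine to convert, here is a minimal Kotlin sketch of the same audio-focus pattern applied directly around TTS - assuming API 26+ and already-initialized audioManager and tts objects (all names here are illustrative):

import android.media.AudioFocusRequest
import android.media.AudioManager
import android.speech.tts.TextToSpeech
import android.speech.tts.UtteranceProgressListener

fun speakWithDucking(audioManager: AudioManager, tts: TextToSpeech, text: String) {
    // Transient focus: other apps pause (or duck) while we hold it.
    val focusRequest = AudioFocusRequest.Builder(AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK)
        .build()

    tts.setOnUtteranceProgressListener(object : UtteranceProgressListener() {
        override fun onStart(utteranceId: String?) {}
        override fun onError(utteranceId: String?) {
            audioManager.abandonAudioFocusRequest(focusRequest)
        }
        override fun onDone(utteranceId: String?) {
            // Hand focus back so the paused media can resume.
            audioManager.abandonAudioFocusRequest(focusRequest)
        }
    })

    if (audioManager.requestAudioFocus(focusRequest) == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
        tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "tts-utterance")
    }
}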
I would like to be able to determine whether my receiver (a CAF receiver) currently has captions displayed. This is so that I can rely on the receiver to tell the sender that captions are enabled, rather than saving the previous closed-caption state on the sender. Is there a method or a way of doing this using the remoteMediaClient?
I'm uncertain if you mean the sender or receiver, but I'll give you both :)
It is possible to get it on Android like so
private val SUB_TITLE_TYPES = intArrayOf(MediaTrack.SUBTYPE_SUBTITLES, MediaTrack.SUBTYPE_CAPTIONS)

fun getActiveMediaTracks(context: Context): LongArray =
    getRemoteMediaClient(context)?.mediaStatus?.activeTrackIds ?: longArrayOf()

fun getSubtitleTracks(context: Context): List<MediaTrack> =
    getActiveMediaTracks(context).filter {
        it.type == MediaTrack.TYPE_TEXT && it.subtype in SUB_TITLE_TYPES
    }
or on the Chromecast Receiver (TextTracksManager)
cast.framework.CastReceiverContext.getInstance().getTextTracksManager().getActiveTracks()
EDIT:
I can see that I mixed up the two functions when I copied the code from our IDE. There are active track ids and all media tracks (the latter include audio, video, and text). There might be a difference between the MediaTrack.SUBTYPE_* values; I guess that depends on the stream.
Here's how to find the active text tracks:
val remoteMediaClient = CastContext.getSharedInstance(context)
    .sessionManager.currentCastSession?.remoteMediaClient
val textTracks = remoteMediaClient?.mediaInfo?.mediaTracks?.filter {
    it.type == MediaTrack.TYPE_TEXT && it.subtype in SUB_TITLE_TYPES
} ?: emptyList()
// An active track id that matches one of the text tracks means captions
// are currently showing on the receiver.
val activeTextTrackIds = remoteMediaClient?.mediaStatus?.activeTrackIds
    ?.filter { activeTrackId -> textTracks.any { track -> track.id == activeTrackId } }
    ?: emptyList()
val captionsEnabled = activeTextTrackIds.isNotEmpty()
I am building a video chat app using the Linphone SDK.
There is an issue where the person who "receives" a video call has the loudspeaker off by default, so they hear the audio through the earpiece (the speaker used for phone calls) rather than the loudspeaker, while the person who makes the call has the loudspeaker on by default.
LinphoneManager.getInstance().routeAudioToSpeaker();
I thought this was the code with which Linphone turns on the loudspeaker, but actually it's not.
How do I turn on the loudspeaker by default when users receive video calls?
LinphoneCore has two handy methods for that:
enableSpeaker(boolean)
muteMic(boolean)
Just create helper functions inside LinphoneManager:
public void enableVoice() {
    getLc().muteMic(false);
    getLc().enableSpeaker(true);
}

public void disableVoice() {
    getLc().muteMic(true);
    getLc().enableSpeaker(false);
}
If you don't have access to LinphoneManager's internals, the functions above should call:
LinphoneManager.getLc().{method_call};
Another option is to set the speakerphone directly via AudioManager:

private AudioManager mAudioManager;
...
public LinphoneMiniManager(Context c) {
    mContext = c;
    mAudioManager = ((AudioManager) c.getSystemService(Context.AUDIO_SERVICE));
    mAudioManager.setSpeakerphoneOn(true);
    ...
With the newer Linphone API, you can switch the output device per call:

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    ....
    btnSpeaker.setOnClickListener {
        mIsSpeakerEnabled = !mIsSpeakerEnabled
        it.isSelected = mIsSpeakerEnabled
        toggleSpeaker()
    }
    ....
}

private fun toggleSpeaker() {
    val currentAudioDevice = core.currentCall?.outputAudioDevice
    val speakerEnabled = currentAudioDevice?.type == AudioDevice.Type.Speaker
    for (audioDevice in core.audioDevices) {
        if (speakerEnabled && audioDevice.type == AudioDevice.Type.Earpiece) {
            core.currentCall?.outputAudioDevice = audioDevice
            return
        } else if (!speakerEnabled && audioDevice.type == AudioDevice.Type.Speaker) {
            core.currentCall?.outputAudioDevice = audioDevice
            return
        }
        /* If we wanted to route the audio to a bluetooth headset
        else if (audioDevice.type == AudioDevice.Type.Bluetooth) {
            core.currentCall?.outputAudioDevice = audioDevice
        }*/
    }
}
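To make the loudspeaker the default when a call is received, the same device loop can be run once the call goes active (for example from a CoreListener's onCallStateChanged). A minimal sketch, assuming the same core instance and the modern Linphone API used above:

import org.linphone.core.AudioDevice
import org.linphone.core.Core

fun routeAudioToSpeaker(core: Core) {
    // Pick the first loudspeaker device and make it the call's output.
    core.audioDevices
        .firstOrNull { it.type == AudioDevice.Type.Speaker }
        ?.let { speaker -> core.currentCall?.outputAudioDevice = speaker }
}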