Android Chromecast sender SDK: how to switch audio tracks from the app (Android)

I'm able to cast and play video with Chromecast successfully; however, the video I'm playing has multiple audio tracks, and I would like to switch between them in real time. I tried the methods below, but they didn't work. Please let me know what needs to be done to switch audio tracks with Chromecast.
val audioTrack = MediaTrack.Builder(1, MediaTrack.TYPE_AUDIO)
    .setName(audioFormat.label)
    .setContentId(audioFormat.format.id)
    .setLanguage("english")
    .build()
audioFormats.add(audioTrack)

val mediaInfo: MediaInfo = MediaInfo.Builder(videoUrl)
    .setStreamType(MediaInfo.STREAM_TYPE_BUFFERED)
    .setContentType(MimeTypes.VIDEO_UNKNOWN)
    .setCustomData(customData)
    .setMediaTracks(audioFormats)
    .setMetadata(movieMetadata)
    .build()
return MediaQueueItem.Builder(mediaInfo).build()
override fun switchAudioTrack(languageId: String, langId: Long) {
    getRemoteMediaClient()?.setActiveMediaTracks(longArrayOf(langId))
        ?.setResultCallback { mediaChannelResult: RemoteMediaClient.MediaChannelResult ->
            if (!mediaChannelResult.status.isSuccess) {
                Log.e(
                    this.javaClass.simpleName,
                    "Failed with status code:" + mediaChannelResult.status.statusCode
                )
            }
        }
}
Any help is greatly appreciated. Thanks!
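For what it's worth, one common stumbling block with the Cast SDK is that setActiveMediaTracks expects the numeric IDs passed to MediaTrack.Builder, and those tracks must already be attached to the MediaInfo that was loaded. A minimal sketch (untested; names reuse the snippets above) that selects the audio track at load time:

```kotlin
// Hedged sketch: select the initial audio track when loading, using the same
// track ID (1) that was passed to MediaTrack.Builder above. Later switches via
// setActiveMediaTracks(longArrayOf(1)) must reuse these IDs, not language names.
val loadRequest = MediaLoadRequestData.Builder()
    .setMediaInfo(mediaInfo)            // mediaInfo built with setMediaTracks(...)
    .setActiveTrackIds(longArrayOf(1L)) // IDs from MediaTrack.Builder(id, ...)
    .build()
getRemoteMediaClient()?.load(loadRequest)
```

Note also that the Cast documentation recommends RFC 5646 language tags such as "en" for setLanguage, rather than "english".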

Related

How can I catch a stream from Webview and change it to VOICE_CALL Stream with AEC?

I am using Agora, and it has some issues. One of them is that the remote speaker's voice is played on the media stream.
The browser can't control the media volume, so I created an app to handle this. In the app, I dispatch the volume up/down buttons to control the media volume.
However, this approach created a howling issue. So I'd like to send the sound to STREAM_VOICE_CALL and use the AEC (Acoustic Echo Cancellation) API on Android, so that the sound comes out on the right stream and the echo problem is handled.
what I wrote,
private fun enableVoiceCallMode() {
    with(audioManager) {
        volumeControlStream = AudioManager.STREAM_VOICE_CALL
        setStreamVolume(
            AudioManager.STREAM_VOICE_CALL,
            audioManager.getStreamVolume(AudioManager.STREAM_VOICE_CALL),
            0
        )
    }
}
But this didn't work.
And also, I tried to apply AEC like this:
private fun enableEchoCanceler() {
    if (AcousticEchoCanceler.isAvailable() && aec == null) {
        aec = AcousticEchoCanceler.create(audioManager.generateAudioSessionId())
        aec?.enabled = true
    } else {
        aec!!.enabled = false
        aec!!.release()
        aec = null
    }
}

private fun releaseEchoCanceler() {
    aec!!.enabled = false
    aec?.release()
    aec = null
}
However, I don't know whether AcousticEchoCanceler.create(audioManager.generateAudioSessionId()) is the correct way to do this.
Please help me out.
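As a side note, AcousticEchoCanceler is normally attached to the audio session of an actual capture object; a freshly generated session ID that nothing records into has no effect. A minimal sketch of the usual pattern (an assumption on my part; Agora manages its own capture internally, so this only illustrates the API shape):

```kotlin
// Sketch, assuming you own the AudioRecord: attach the AEC to the recorder's
// own session ID instead of audioManager.generateAudioSessionId().
val minBuf = AudioRecord.getMinBufferSize(
    44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT
)
val recorder = AudioRecord(
    MediaRecorder.AudioSource.VOICE_COMMUNICATION, // source tuned for calls
    44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf
)
val aec: AcousticEchoCanceler? =
    if (AcousticEchoCanceler.isAvailable())
        AcousticEchoCanceler.create(recorder.audioSessionId)?.apply { enabled = true }
    else null
```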

About development using the Android camera

I'm trying to develop an application using the MediaRecorder API that runs on the HMT-1.
Android Studio is used for development, and the target environment is Android 10 or higher.
While shooting video with the MediaRecorder API, we are verifying whether the same microphone can be used by another process, such as the SpeechRecognizer API.
Recording alone with the MediaRecorder API and voice input alone with the SpeechRecognizer API both work without problems.
However, trying to record and capture voice input at the same time causes an error.
If there are any reference documents or samples on using the microphone input in multiple processes, please let me know.
MediaRecorder settings.
path = getExternalFilesDir(null)!!.path
mMediaRecorder = MediaRecorder()
mMediaRecorder!!.setAudioSource(MediaRecorder.AudioSource.MIC)
mMediaRecorder!!.setVideoSource(MediaRecorder.VideoSource.SURFACE)
mMediaRecorder!!.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
mMediaRecorder!!.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB)
mMediaRecorder!!.setAudioEncodingBitRate(16)
mMediaRecorder!!.setAudioSamplingRate(44100)
mMediaRecorder!!.setVideoSize(1024, 768)
mMediaRecorder!!.setVideoEncoder(MediaRecorder.VideoEncoder.H264)
mMediaRecorder!!.setVideoEncodingBitRate(10000000)
mMediaRecorder!!.setOutputFile(path + "/" + DateFormat.format("yyyyMMdd'-'kkmmss", Calendar.getInstance()) + ".mp4")
mMediaRecorder!!.setOnInfoListener(this)
mMediaRecorder!!.setMaxDuration(VIDEO_DURATION)
mMediaRecorder!!.setMaxFileSize(VIDEO_FILESIZE)
mMediaRecorder!!.setPreviewDisplay(mSurfaceHolder!!.surface)

val windowManager = getSystemService(WINDOW_SERVICE) as WindowManager
if (windowManager.defaultDisplay.rotation == Surface.ROTATION_180) {
    mMediaRecorder!!.setOrientationHint(180)
}

mMediaRecorder!!.prepare()
try {
    mMediaRecorder!!.start()
} catch (ex: IOException) {
    ex.printStackTrace()
    mMediaRecorder!!.release()
}
SpeechRecognizer settings.
mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(applicationContext)
mSpeechRecognizer?.setRecognitionListener(createRecognitionListenerStringStream { recognize_text_view.text = it })
public fun onStart(view: View) {
    mSpeechRecognizer?.startListening(Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH))
}

public fun onStop(view: View) {
    mSpeechRecognizer?.stopListening()
}

LibVLC Android: getting all tracks

I can't find much documentation on how to get all of the media tracks (video, audio and subtitles) using libVLC on Android.
From what I understand, I have to parse the media first, and I'm doing it like this:
Media media = new Media(libVLC, Uri.parse(url));
media.setEventListener(new IMedia.EventListener() {
    @Override
    public void onEvent(IMedia.Event event) {
        switch (event.type) {
            case IMedia.Event.ParsedChanged:
                if (event.getParsedStatus() == IMedia.ParsedStatus.Done) {
                    Log.i("App", "Parse done, track count " + media.getTrackCount());
                    Gson gson = new Gson();
                    for (int i = 0; i < media.getTrackCount(); i++) {
                        Log.i("App", "Track " + i + ": " + gson.toJson(media.getTrack(i)));
                    }
                }
                break;
        }
    }
});
media.parseAsync();
vlc.setMedia(media);
vlc.play();
The results I get from this are odd: sometimes I get only one track, the video track, but sometimes I also get the audio track, so two tracks in total.
The problem is that the media also has a subtitle track, so there must be a way for me to get all three tracks (playing the exact same media with VLC on Windows shows, indeed, all three tracks).
What am I doing wrong?
Edit: I need a way to get all tracks dynamically; the media could have n tracks, so I don't know the exact number. This is just a test, and I know there are three tracks.
Thanks
If you are not able to get the tracks from the Media object, use the VLC MediaPlayer object instead: it provides methods to get the audio, video and subtitle tracks.
mMediaPlayer!!.setEventListener { event ->
    when (event?.type) {
        MediaPlayer.Event.Opening -> {
            val audioTracks = mMediaPlayer!!.audioTracks
            val subtitleTracks = mMediaPlayer!!.spuTracks
            val videoTracks = mMediaPlayer!!.videoTracks
        }
    }
}
You can iterate over the lists to get individual tracks.
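For example (a sketch; the TrackDescription field names are taken from the libVLC Android bindings as I understand them), the returned arrays can be iterated, and the id can be fed back into setAudioTrack() / setSpuTrack() to switch tracks:

```kotlin
// Sketch: each entry is a MediaPlayer.TrackDescription with public id/name
// fields; pass the id to setAudioTrack()/setSpuTrack() to select that track.
mMediaPlayer!!.audioTracks?.forEach { track ->
    Log.i("App", "Audio track ${track.id}: ${track.name}")
}
mMediaPlayer!!.spuTracks?.forEach { track ->
    Log.i("App", "Subtitle track ${track.id}: ${track.name}")
}
```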

How to set sound playback from external speaker?

There is a kind of weird issue. I am using the oboe lib https://github.com/google/oboe for sound playback. You can choose the sound playback output according to the Android settings:
https://developer.android.com/reference/android/media/AudioDeviceInfo
So, if I need to set an exact output channel, I need to pass it to the oboe lib.
The output channel that I need is TYPE_BUILTIN_SPEAKER, but on some devices (sometimes, not constantly) I hear the sound from TYPE_BUILTIN_EARPIECE.
This is how I do it. I have this method to get the needed channel ID:
fun findAudioDevice(app: Application,
                    deviceFlag: Int,
                    deviceType: Int): AudioDeviceInfo?
{
    var result: AudioDeviceInfo? = null
    val manager = app.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    val adis = manager.getDevices(deviceFlag)
    for (adi in adis)
    {
        if (adi.type == deviceType)
        {
            result = adi
            break
        }
    }
    return result
}
This is how I use it:
val id = getAudioDeviceInfoId(getBuildInSpeakerInfo())

private fun getBuildInSpeakerInfo(): AudioDeviceInfo?
{
    return com.tetavi.ar.basedomain.utils.Utils.findAudioDevice(
        getApplication<Application>(),
        AudioManager.GET_DEVICES_OUTPUTS,
        AudioDeviceInfo.TYPE_BUILTIN_SPEAKER
    )
}

private fun getAudioDeviceInfoId(info: AudioDeviceInfo?): Int
{
    var result = -1
    if (info != null)
    {
        result = info.id
    }
    return result
}
And eventually I need to pass this ID to the oboe lib. The oboe lib is a native lib, so I pass the ID over JNI and set it:
oboe::Result oboe_engine::createPlaybackStream()
{
    oboe::AudioStreamBuilder builder;
    const oboe::SharingMode sharingMode = oboe::SharingMode::Exclusive;
    const int32_t sampleRate = mBackingTrack->getSampleRate();
    const oboe::AudioFormat audioFormat = oboe::AudioFormat::Float;
    const oboe::PerformanceMode performanceMode = oboe::PerformanceMode::PowerSaving;

    builder.setSharingMode(sharingMode)
        ->setPerformanceMode(performanceMode)
        ->setFormat(audioFormat)
        ->setCallback(this)
        ->setSampleRate(sampleRate);

    if (m_output_playback_chanel_id != EMPTY_NUM)
    {
        // set output playback channel (like internal or external speaker)
        builder.setDeviceId(m_output_playback_chanel_id); // <------------- THIS LINE
    }

    return builder.openStream(&mAudioStream);
}
So, the issue is that on some devices (sometimes, not constantly) I still hear the sound from the internal speaker TYPE_BUILTIN_EARPIECE, despite setting directly that I need TYPE_BUILTIN_SPEAKER.
I checked the flow a few times, from the moment I get this ID (it is actually 3) up to the moment I pass it as a parameter to the oboe lib, but I still sometimes hear sound from the internal speaker.
So, the question is: am I missing something here? Maybe some trick needs to be implemented, or something else?
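One avenue worth checking (a guess on my part, not a confirmed fix): some devices route call-oriented streams to the earpiece regardless of the requested device ID, and honour the platform's communication-device routing instead. A sketch of forcing the built-in speaker from the Kotlin side:

```kotlin
// Sketch: force routing to the built-in speaker. On API 31+ use the explicit
// communication-device API; on older versions fall back to speakerphone mode.
val am = app.getSystemService(Context.AUDIO_SERVICE) as AudioManager
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
    am.availableCommunicationDevices
        .firstOrNull { it.type == AudioDeviceInfo.TYPE_BUILTIN_SPEAKER }
        ?.let { am.setCommunicationDevice(it) }
} else {
    @Suppress("DEPRECATION")
    am.isSpeakerphoneOn = true
}
```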

How to check whether microphone is used by any background app

I have been searching for a couple of days now and haven't been able to find a suitable solution.
I am trying to check whether any app in the background is using the microphone, so my app can use it; otherwise I just want to show the message "Microphone in use by another app".
I tried checking all the applications in the background and their permissions, but that doesn't solve my problem, since there is a package, wearable.app, which asks for the permission but doesn't affect the audio, or is not using it.
I tried the other solutions I was able to find here or on Google, but none of them seems to be the proper way.
All I want is to check that the microphone is not being used, so my app can use it.
I will appreciate any suggestions.
After searching some more, I found the solution, and I am adding it here so anyone who needs it can find it more easily.
private boolean validateMicAvailability() {
    boolean available = true;
    AudioRecord recorder =
        new AudioRecord(MediaRecorder.AudioSource.MIC, 44100,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_DEFAULT, 44100);
    try {
        if (recorder.getRecordingState() != AudioRecord.RECORDSTATE_STOPPED) {
            available = false;
        }
        recorder.startRecording();
        if (recorder.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            recorder.stop();
            available = false;
        }
        recorder.stop();
    } finally {
        recorder.release();
        recorder = null;
    }
    return available;
}
You can do it the other way around.
Get the microphone in your app.
Get a list of the installed apps, who have a RECORD permission.
Then check if one of these apps is on the foreground and if there is one release the microphone so that the other app can use it (for example when a phone call occurs).
A bit dirty practice but I think it is what you are looking for.
Cheers!
This is how it's done:
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
if (am.getMode() == AudioManager.MODE_IN_COMMUNICATION) {
    // Mic is in use
}
MODE_NORMAL -> You good to go. Mic not in use
MODE_RINGTONE -> Incoming call. The phone is ringing
MODE_IN_CALL -> A phone call is in progress
MODE_IN_COMMUNICATION -> The Mic is being used by another application
Alternatively, use AudioManager.AudioRecordingCallback():
am.registerAudioRecordingCallback(new AudioManager.AudioRecordingCallback() {
    @Override
    public void onRecordingConfigChanged(List<AudioRecordingConfiguration> configs) {
        super.onRecordingConfigChanged(configs);
        try {
            isMicOn = configs.get(0) != null;
        } catch (Exception e) {
            isMicOn = false;
        }
        if (isMicOn) {
            // microphone is on
        } else {
            // microphone is off
        }
        Toast.makeText(context, isMicOn ? "Mic on" : "Mic off", Toast.LENGTH_SHORT).show();
    }
}, null);
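The callback above fires only on changes; for a one-shot check, the active recording configurations can also be polled directly (a sketch, API 24+; the helper name is mine):

```kotlin
// Sketch: the mic is considered in use if any app currently has an active
// recording configuration (available since API 24).
fun isMicInUse(context: Context): Boolean {
    val am = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    return am.activeRecordingConfigurations.isNotEmpty()
}
```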
I know this may sound a bit tedious, or the long way around... but have you considered recording a logcat? Record a log for both the kernel and apps, recreate the issue, then compare the two logs to see which program holds the mic when the kernel engages it.
Since audio-input sharing behaviour varies depending on the Android version, this answer aims to provide a complete solution based on the docs.
Pre-Android 10
Before Android 10 the input audio stream could only be captured by one app at a time. If some app was already recording or listening to audio, your app could create an AudioRecord object, but an error would be returned when you called AudioRecord.startRecording() and the recording would not start.
So, you can use this function to check if the mic is used by another app for pre Android 10 versions.
private fun isAnotherAppUsingMic(): Boolean {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) return false
    createRecorder().apply {
        try {
            startRecording()
            if (recordingState != AudioRecord.RECORDSTATE_RECORDING) {
                return true
            }
            stop()
            return false
        } catch (e: IllegalStateException) {
            return true
        } finally {
            release()
        }
    }
}

private fun createRecorder(): AudioRecord {
    return AudioRecord(
        MediaRecorder.AudioSource.MIC,
        SAMPLE_RATE_HZ,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        2 * AudioRecord.getMinBufferSize(
            SAMPLE_RATE_HZ,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT
        )
    )
}

const val SAMPLE_RATE_HZ = 44100
Android 10 and above
Android 10 imposes a priority scheme that can switch the input audio stream between apps while they are running. In most cases, if a new app acquires the audio input, the previously capturing app continues to run, but receives silence.
So, for Android 10 and higher, in most cases your app will take priority if another app, like a voice or screen recorder, is already running when you start using the mic. But you will need to check for a voice/video call, as it has higher priority and the mic won't be available to your app (it will receive silence). You can use the code below to check whether there is an active call:
private fun isVoiceCallActive(): Boolean {
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    return audioManager.mode in listOf(
        AudioManager.MODE_IN_CALL,
        AudioManager.MODE_IN_COMMUNICATION
    )
}
In summary, you can merge the above two functions to check whether the mic is available before you use it:
fun isMicAvailable() = !isAnotherAppUsingMic() && !isVoiceCallActive()
