I integrated the AppRTC code into my Android application for calling, and video & audio calling now work fine. My problem is that I need to achieve the following things:
1. Mute and unmute audio while calling.
2. Switch a video call to an audio call and vice versa while calling.
I have searched a lot and had no luck so far. It would be nice if you could give me any lead on these things. Thanks in advance.
1. Mute and unmute audio while calling.
This class is used to control the audio: https://chromium.googlesource.com/external/webrtc/+/b69ab79338bff71ea411b82f3dd59508617a11d5/talk/examples/android/src/org/appspot/apprtc/AppRTCAudioManager.java
You may need to add the mute functionality there explicitly.
2. Switch a video call to an audio call and vice versa while calling.
The PeerConnectionClient class in the AppRTC demo depends on the following class:
https://chromium.googlesource.com/external/webrtc/+/b69ab79338bff71ea411b82f3dd59508617a11d5/talk/app/webrtc/java/src/org/webrtc/VideoCapturerAndroid.java
To switch a video call to audio-only, you need to explicitly call stopCapture() on the capturer, and startCapture() again to switch back; see the sketch below.
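A minimal sketch of such a toggle, assuming a videoCapturer field created at call setup and the capturer API of recent WebRTC builds (startCapture(width, height, framerate) and stopCapture()); the resolution and frame rate below are placeholders:
private void setVideoCaptureEnabled(boolean enabled) {
    try {
        if (enabled) {
            // Placeholder values; reuse whatever you passed at call setup.
            videoCapturer.startCapture(1280, 720, 30);
        } else {
            videoCapturer.stopCapture(); // the call becomes audio-only
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}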
So far I have had no luck with switching between audio and video calls, but I did find a solution for muting the microphone while calling.
There is a setMicrophoneMute(boolean on) method in AppRTCAudioManager. We can use the same code to mute the microphone. I just created a toggle method like this:
public void toggleMicrophoneMute() {
    // Flip the current microphone mute state.
    boolean wasMuted = audioManager.isMicrophoneMute();
    audioManager.setMicrophoneMute(!wasMuted);
}
Just call this method wherever you need it.
Please refer to this Link for switching between audio and video - Link
Although it is in JavaScript, it can easily be adapted to Android.
When the connection is created via WebRTC, we receive a MediaStream. For switching, we can remove the audio track or the video track and then send the SDP from the offerer to the answerer again with the new configuration:
mediaStream.removeTrack(audioTrack); // remove the AudioTrack for audio-to-video switching
mediaStream.removeTrack(videoTrack); // remove the VideoTrack for video-to-audio switching
(org.webrtc.MediaStream exposes removeTrack() overloads for both AudioTrack and VideoTrack.)
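A hedged sketch of that renegotiation step; the field names localStream, localVideoTrack, peerConnection, and sdpObserver are assumptions standing in for whatever your calling code already holds:
// Drop the local video track, then renegotiate so the call becomes audio-only.
localStream.removeTrack(localVideoTrack);
peerConnection.createOffer(sdpObserver, new MediaConstraints());
// In sdpObserver.onCreateSuccess(), set the new local description and
// send it to the remote peer over your signaling channel.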
Use this to switch between mute and unmute, or video and audio.
You just need the local MediaStream, localMS:
// Disable video in the stream
VideoTrack currentTrack = localMS.videoTracks.get(0);
currentTrack.setEnabled(false);
// Disable audio in the stream (mute)
AudioTrack currentAudioTrack = localMS.audioTracks.get(0);
currentAudioTrack.setEnabled(false);
If you want to unmute, just pass true:
AudioTrack currentAudioTrack = localMS.audioTracks.get(0);
currentAudioTrack.setEnabled(true);
The same applies to video.
Related
I have created a WebRTC session from one device to another. The device should be able to control the volume for a music stream, but WebRTC was originally designed to stream voice calls, so it uses the voice-call channel, and using the call volume control is not good behavior for a non-call app.
I tried to change STREAM_VOICE_CALL to STREAM_MUSIC in the WebRTC source (WebRtcAudioTrack) to use the music stream volume, but the only change was that Android detects it as music; the volume still changes with the call volume.
I found the solution to this. You have to change the OpenSL ES player for this to happen.
Change this, from here:
// corresponds to android.media.AudioManager.STREAM_VOICE_CALL.
SLint32 stream_type = SL_ANDROID_STREAM_VOICE;
RETURN_ON_ERROR(
    (*player_config)
        ->SetConfiguration(player_config, SL_ANDROID_KEY_STREAM_TYPE,
                           &stream_type, sizeof(SLint32)),
    false);
to this:
// corresponds to android.media.AudioManager.STREAM_MUSIC.
SLint32 stream_type = SL_ANDROID_STREAM_MEDIA;
RETURN_ON_ERROR(
    (*player_config)
        ->SetConfiguration(player_config, SL_ANDROID_KEY_STREAM_TYPE,
                           &stream_type, sizeof(SLint32)),
    false);
Do this here too.
In my app, I am trying to alter the volume of both the left and right headphone music channels coming out of the device with a SeekBar.
The AudioManager class can access the music stream coming from the device:
AudioManager am = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
int maxValue = am.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
int curValue = am.getStreamVolume(AudioManager.STREAM_MUSIC);
But AudioManager.setStreamVolume() can only change the volume of both ears together.
I figured out how to set the volume separately with the MediaPlayer class, but how do I link the MediaPlayer class's setDataSource() method to the stream coming out of the Android device?
I looked everywhere and still haven't found an answer. Any help is appreciated!
You use setAudioStreamType() after setDataSource(), but before calling prepare().
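A minimal sketch of that call order, assuming a local file (the path below is a placeholder); setVolume(left, right) is what gives the independent per-channel control:
MediaPlayer player = new MediaPlayer();
try {
    player.setDataSource("/sdcard/sample.mp3"); // placeholder path
    player.setAudioStreamType(AudioManager.STREAM_MUSIC); // after setDataSource()
    player.prepare(); // stream type must be set before prepare()
    player.setVolume(0.2f, 1.0f); // left, right scalars in [0.0, 1.0]
    player.start();
} catch (IOException e) {
    e.printStackTrace();
}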
I am implementing an application that includes the functionality of saving recorded video into different video files based on a certain amount of time.
To achieve that, I implemented a custom camera and used MediaRecorder.stop() and MediaRecorder.start() in a loop.
But this approach creates a lag while restarting the MediaRecorder (stop and start). Is it possible to seamlessly stop and start recording using MediaRecorder or any third-party library?
Any help is highly appreciated.
I believe the best solution for implementing chunked recording is to set a maximum duration on the MediaRecorder object:
mMediaRecorder.setMaxDuration(CHUNK_TIME);
Then you can attach an info listener; it will notify you when the maximum chunk time is hit:
mMediaRecorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
            // restartVideo()
        }
    }
});
In restartVideo() you should first release the previous MediaRecorder object and then start recording again, as sketched below.
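A hypothetical restartVideo(), where prepareRecorder() stands in for your own setup code that configures the sources, output format, and the next chunk's file name:
private void restartVideo() {
    mMediaRecorder.reset();   // clear the finished recorder's state
    mMediaRecorder.release(); // free its native resources
    mMediaRecorder = prepareRecorder(); // hypothetical re-initialization helper
    mMediaRecorder.start();   // begin the next chunk
}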
You can create two instances of MediaRecorder which overlap slightly (i.e. when the stream is close to the end of the first chunk, you prepare and start the second one; a sketch follows below). It is possible to record 2 video files with 2 MediaRecorders at the same time if they capture only video. Unfortunately, sharing the mic between 2 MediaRecorder instances is not supported.
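A hedged sketch of that overlap, where createRecorder() is a hypothetical helper that builds a video-only MediaRecorder for a given output file, and current/next are the two instances:
try {
    MediaRecorder next = createRecorder(nextChunkFile); // hypothetical helper
    next.prepare();
    next.start();      // both recorders briefly run in parallel
    current.stop();    // now close out the previous chunk
    current.release();
    current = next;
} catch (IOException e) {
    e.printStackTrace();
}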
I have made a library to concatenate 2 videos using the mp4parser library.
With this I can pause and resume recording a video (after it records the second video, it appends it to the first one).
Now my boss told me to make a wrapper and use this only for the phones that do not have hardware support for pausing a video. For phones that do (the Samsung Galaxy S2 and Samsung Galaxy S1 can pause a video recording with their camera application), I need to do this with no libraries, so it would be fast.
How can I implement this natively if, as seen on the MediaRecorder state diagram, http://developer.android.com/reference/android/media/MediaRecorder.html , there is no pause state?
I have decompiled the Camera.apk app from a Samsung Galaxy Ace, and in CamcorderEngine.class the code has a method like this:
public void doPauseVideoRecordingSync()
{
    Log.v("CamcorderEngine", "doPauseVideoRecordingSync");
    if (this.mMediaRecorder == null)
    {
        Log.e("CamcorderEngine", "MediaRecorder is not initialized.");
        return;
    }
    if (!this.mMediaRecorderRecording)
    {
        Log.e("CamcorderEngine", "Recording is not started yet.");
        return;
    }
    try
    {
        this.mMediaRecorder.pause();
        enableAlertSound();
        return;
    }
    catch (RuntimeException localRuntimeException)
    {
        Log.e("CamcorderEngine", "Could not pause media recorder. ", localRuntimeException);
        enableAlertSound();
    }
}
If I try this.mMediaRecorder.pause(); in my code, it does not work. How is this possible when they use the same import (android.media.MediaRecorder)? Have they rewritten the whole code at a system level?
Is it possible to take the input stream of the second video (while recording it) and directly append this data to my first video?
My concatenate method takes 2 parameters (the 2 videos, both FileInputStreams); is it possible to take the InputStream from the recording function and pass it as the second parameter?
If I try this.mMediaRecorder.pause();
The MediaRecorder class does not have a pause() function, so it is obvious that there is a custom MediaRecorder class on this specific device. This is not unusual, as the only thing required from OEMs is to pass the Android compatibility tests on the device; there is no restriction on adding functionality.
Is it possible to take the input stream of the second video (while recording it), and directly append this data to my first video?
I am not sure you can do this, because the video stream is encoded data (codec headers, key frames, and so on), and just combining 2 streams into 1 file will not produce a valid video file, in my opinion.
Basically, what you can do is:
get raw image data from the camera preview surface (see Camera.setPreviewCallback()),
use an android.media.MediaCodec to encode the video,
and then use a FileOutputStream to write to the file.
This gives you the flexibility you want, since in this case your app decides which frames go into the encoder and which do not.
However, it may be overkill for your specific project, and some performance issues may arise.
P.S. By the way, take a look at MediaMuxer - maybe it can help you too: developer.android.com/reference/android/media/MediaMuxer.html. A short sketch of the muxer step follows below.
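A hedged sketch of writing one already-encoded sample into an MP4 with MediaMuxer; the format, buffer, and buffer info are assumed to come from your MediaCodec video encoder's output, and the output path is a placeholder (in real use you would loop writeSampleData() over every encoded buffer):
void writeSample(MediaFormat encoderOutputFormat, ByteBuffer encodedBuffer,
                 MediaCodec.BufferInfo bufferInfo) throws IOException {
    MediaMuxer muxer = new MediaMuxer("/sdcard/out.mp4", // placeholder path
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    int track = muxer.addTrack(encoderOutputFormat); // format from the encoder
    muxer.start();
    muxer.writeSampleData(track, encodedBuffer, bufferInfo);
    muxer.stop();
    muxer.release();
}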
I've got some .MP4 video files that must be read in a VideoView in an Android activity. These videos include several audio tracks, each corresponding to a user language (e.g. English, French, Japanese...).
I've got unexpected trouble finding any help or documentation to provide such a feature. I'm currently able to load the video and play it in a VideoView with a MediaController, but not to change audio tracks.
I'm not sure the Android SDK provides any easy way to do this, which leaves me quite clueless on how to solve my problem. I was thinking of extracting every audio track, loading the audio that I want into a MediaPlayer depending on the language, then make audio and video play together. But I fear that some sync issues could arise and prevent me from doing this.
If you have any clue, any advice to help me getting started with this problem, you're more than welcome.
No 3rd party library required:
mVideoView.setVideoURI(Uri.parse("")); // set video source
mVideoView.setOnInfoListener(new MediaPlayer.OnInfoListener() {
    @Override
    public boolean onInfo(MediaPlayer mp, int what, int extra) {
        MediaPlayer.TrackInfo[] trackInfoArray = mp.getTrackInfo();
        for (int i = 0; i < trackInfoArray.length; i++) {
            // you can switch out the language comparison logic to whatever works for you
            if (trackInfoArray[i].getTrackType() == MediaPlayer.TrackInfo.MEDIA_TRACK_TYPE_AUDIO
                    && trackInfoArray[i].getLanguage().equals(Locale.getDefault().getISO3Language())) {
                mp.selectTrack(i);
                break;
            }
        }
        return true;
    }
});
As far as I can tell, audio track languages should be encoded as 3-letter ISO 639-2 codes in order to be recognized correctly.
I haven't tested it myself yet, but it seems that the Vitamio library has support for multiple audio tracks (among other interesting features). It is API-compatible with the VideoView class from Android.
You would probably have to use Vitamio's VideoView.setAudioTrack() to set the audio track (for example, based on locale). See the Vitamio API docs for details.
Now you can play multiple audio tracks through ExoPlayer.
Here are the details:
https://exoplayer.dev/track-selection.html (ExoPlayer Track Selection)
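A minimal sketch using ExoPlayer 2.x's DefaultTrackSelector to prefer an audio language, assuming a context field; "fra" is a placeholder ISO 639-2 code:
DefaultTrackSelector trackSelector = new DefaultTrackSelector(context);
trackSelector.setParameters(
        trackSelector.buildUponParameters()
                .setPreferredAudioLanguage("fra")); // placeholder language code
SimpleExoPlayer player = new SimpleExoPlayer.Builder(context)
        .setTrackSelector(trackSelector)
        .build();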
The VideoView class can't support your requirement. You must parse out the audio stream data you want and play it with the AudioTrack class in the Java layer.