I am developing a native Android WebRTC client that is supposed to stream audio from a custom device (I receive the audio via Bluetooth from that device). I am using the libjingle library to implement WebRTC, and I wonder whether and how it is possible to hook a custom audio stream up to an audio track.
Currently I am adding the default audio track like this:
localMS = factory.createLocalMediaStream("ARDAMS");
AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
localMS.addTrack(factory.createAudioTrack("ARDAMSa0", audioSource));
I saw that there is WebRtcAudioRecord (https://github.com/pristineio/webrtc-android/blob/master/libjingle_peerconnection/src/main/java/org/webrtc/voiceengine/WebRtcAudioRecord.java) - is it possible to override it?
Has anybody tried doing something like that?
Your post led me to the code below, which I am going to try. I am trying to send one audio stream to the Watson API and one to WebRTC, but Android only allows one reader on the microphone's input stream at a time. I will update you if I get it to work.
private org.webrtc.MediaStream createMediaStream() {
    org.webrtc.MediaStream mediaStream = mFactory.createLocalMediaStream(ARDAMS);
    if (mEnableVideo) {
        mVideoCapturer = createVideoCapturer();
        if (mVideoCapturer != null) {
            mediaStream.addTrack(createVideoTrack(mVideoCapturer));
        } else {
            mEnableVideo = false;
        }
    }
    if (mEnableAudio) {
        createAudioCapturer();
        mediaStream.addTrack(mFactory.createAudioTrack(
                AUDIO_TRACK_ID,
                mFactory.createAudioSource(mAudioConstraints)));
    }
    return mediaStream;
}
/**
 * Creates an instance of WebRtcAudioRecord.
 */
private void createAudioCapturer() {
    if (mOption.getAudioType() == PeerOption.AudioType.EXTERNAL_RESOURCE) {
        WebRtcAudioRecord.setAudioRecordModuleFactory(new WebRtcAudioRecordModuleFactory() {
            @Override
            public WebRtcAudioRecordModule create() {
                AudioCapturerExternalResource module = new AudioCapturerExternalResource();
                module.setUri(mOption.getAudioUri());
                module.setSampleRate(mOption.getAudioSampleRate());
                module.setBitDepth(mOption.getAudioBitDepth());
                module.setChannel(mOption.getAudioChannel());
                return module;
            }
        });
    } else {
        WebRtcAudioRecord.setAudioRecordModuleFactory(null);
    }
}
Source:
https://www.programcreek.com/java-api-examples/?code=DeviceConnect/DeviceConnect-Android/DeviceConnect-Android-master/dConnectDevicePlugin/dConnectDeviceWebRTC/app/src/main/java/org/deviceconnect/android/deviceplugin/webrtc/core/MediaStream.java
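As an aside on the Bluetooth part of the original question: if the external device presents itself as a Bluetooth SCO headset, one way to avoid overriding WebRtcAudioRecord at all is to route the SCO link into the normal microphone path, so the default capture picks it up. This is only a rough sketch of that routing (plain AudioManager calls, nothing WebRTC-specific), and it assumes the device supports SCO:
import android.content.Context;
import android.media.AudioManager;

// Rough sketch: route a Bluetooth SCO device into the normal mic path so the
// default WebRTC capture (WebRtcAudioRecord) records it without any override.
// Requires the MODIFY_AUDIO_SETTINGS (and BLUETOOTH) permissions in the manifest.
public class BluetoothScoRouter {
    private final AudioManager audioManager;

    public BluetoothScoRouter(Context context) {
        audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    }

    public void start() {
        audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
        // startBluetoothSco() is asynchronous; in real code listen for
        // AudioManager.ACTION_SCO_AUDIO_STATE_UPDATED before relying on the route.
        audioManager.startBluetoothSco();
        audioManager.setBluetoothScoOn(true);
    }

    public void stop() {
        audioManager.setBluetoothScoOn(false);
        audioManager.stopBluetoothSco();
        audioManager.setMode(AudioManager.MODE_NORMAL);
    }
}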
I am using Agora, and it has some issues. One of them is that the remote speaker's voice is played on the media audio stream.
In the browser, the media volume cannot be controlled, so I created an app to handle this. In the app, I intercept the volume up/down buttons to control the media volume.
However, this approach created a howling (feedback) issue. So, I would like to send the sound to STREAM_VOICE_CALL and use Android's AEC (Acoustic Echo Cancellation) API, so that the sound comes out on the correct stream and the echo problem is handled.
This is what I wrote:
private fun enableVoiceCallMode() {
    with(audioManager) {
        volumeControlStream = AudioManager.STREAM_VOICE_CALL
        setStreamVolume(
            AudioManager.STREAM_VOICE_CALL,
            audioManager.getStreamVolume(AudioManager.STREAM_VOICE_CALL),
            0
        )
    }
}
But this didn't work.
I also tried to apply AEC like this:
private fun enableEchoCanceler() {
    if (AcousticEchoCanceler.isAvailable() && aec == null) {
        aec = AcousticEchoCanceler.create(audioManager.generateAudioSessionId())
        aec?.enabled = true
    } else {
        aec!!.enabled = false
        aec!!.release()
        aec = null
    }
}

private fun releaseEchoCanceler() {
    aec!!.enabled = false
    aec?.release()
    aec = null
}
However, I don't know whether AcousticEchoCanceler.create(audioManager.generateAudioSessionId()) is the correct approach.
Please help me out.
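For reference, the platform AEC is attached to the audio session of the AudioRecord (or MediaPlayer) that is actually capturing, so a session ID from generateAudioSessionId() has nothing behind it. A minimal Java sketch of that pattern with a plain AudioRecord (this is not Agora's internal capture pipeline, so treat it as an assumption about where the session ID should come from):
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.media.audiofx.AcousticEchoCanceler;

// Sketch: create the AEC on the session of the AudioRecord that does the capture.
public class AecHelper {
    private static final int SAMPLE_RATE = 16000;

    public static AudioRecord createVoiceRecorder() {
        int bufferSize = AudioRecord.getMinBufferSize(
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        // VOICE_COMMUNICATION is tuned for calls and lets the platform apply its own echo processing.
        return new AudioRecord(MediaRecorder.AudioSource.VOICE_COMMUNICATION,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);
    }

    public static AcousticEchoCanceler attachAec(AudioRecord record) {
        if (!AcousticEchoCanceler.isAvailable()) {
            return null; // no AEC implementation on this device
        }
        // Use the recorder's own session ID, not a freshly generated one.
        AcousticEchoCanceler aec = AcousticEchoCanceler.create(record.getAudioSessionId());
        if (aec != null) {
            aec.setEnabled(true);
        }
        return aec;
    }
}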
I'm trying to develop an application that uses the MediaRecorder API and runs on the HMT-1.
Android Studio is used for development, and the operating environment is Android 10 or higher.
While shooting video with the MediaRecorder API, we are verifying whether the same microphone can be used by another process, such as the SpeechRecognizer API.
Recording alone with the MediaRecorder API and voice input alone with the SpeechRecognizer API each work without problems.
However, when I try to record and do voice input at the same time, an error occurs.
If it is possible to use the microphone input for multiple processes, please let me know if there are any reference documents or samples.
MediaRecorder settings.
path = getExternalFilesDir(null)!!.path
mMediaRecorder = MediaRecorder()
mMediaRecorder!!.setAudioSource(MediaRecorder.AudioSource.MIC)
mMediaRecorder!!.setVideoSource(MediaRecorder.VideoSource.SURFACE)
mMediaRecorder!!.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
mMediaRecorder!!.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB)
mMediaRecorder!!.setAudioEncodingBitRate(16)
mMediaRecorder!!.setAudioSamplingRate(44100)
mMediaRecorder!!.setVideoSize(1024, 768)
mMediaRecorder!!.setVideoEncoder(MediaRecorder.VideoEncoder.H264)
mMediaRecorder!!.setVideoEncodingBitRate(10000000)
mMediaRecorder!!.setOutputFile(path + "/" + DateFormat.format("yyyyMMdd'-'kkmmss", Calendar.getInstance()) + ".mp4")
mMediaRecorder!!.setOnInfoListener(this)
mMediaRecorder!!.setMaxDuration(VIDEO_DURATION)
mMediaRecorder!!.setMaxFileSize(VIDEO_FILESIZE)
mMediaRecorder!!.setPreviewDisplay(mSurfaceHolder!!.surface)
val rotation = (getSystemService(WINDOW_SERVICE) as WindowManager)
if (rotation.defaultDisplay.rotation == 2) {
    mMediaRecorder!!.setOrientationHint(180)
}
mMediaRecorder!!.prepare()
try {
    mMediaRecorder!!.start()
} catch (ex: IOException) {
    ex.printStackTrace()
    mMediaRecorder!!.release()
}
SpeechRecognizer settings.
mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(applicationContext)
mSpeechRecognizer?.setRecognitionListener(createRecognitionListenerStringStream { recognize_text_view.text = it })
public fun onStart(View: View) {
    mSpeechRecognizer?.startListening(Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH))
}

public fun onStop(View: View) {
    mSpeechRecognizer?.stopListening()
}
I am currently developing a VoIP Android application, and for VoIP support I am using the open-source library Linphone.
Voice calling works, but video calling does not. After analyzing for a while, I found that by default, when the app is loaded, the LinphoneCore library uses the H264 video codec.
But the VoIP Asterisk server is configured with the VP8 video codec, and I cannot change the codec configured on the server. Due to this codec mismatch, no video data flows.
So how can I manually set the video codec to VP8 in LinphoneCore from my app once the app is loaded?
To set the video codec on LinphoneCore, once your LinphoneCore is ready you can retrieve the video codec payload types it supports, then enable the payload you want and disable the others, as shown in the code below.
private void enableVp8Codec() {
    LinphoneCore lc = LinphoneManager.getLcIfManagerNotDestroyedOrNull();
    if (lc != null) {
        PayloadType[] lPayLoadArr = lc.getVideoCodecs();
        for (final PayloadType pt : lPayLoadArr) {
            try {
                if (pt.getMime().equals("VP8")) {
                    lc.enablePayloadType(pt, true);
                } else {
                    lc.enablePayloadType(pt, false);
                }
            } catch (LinphoneCoreException e) {
                Log.e("tag", e.getMessage());
            }
        }
    }
}
You can probably call this method in onResume() of your Activity.
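For example (a minimal sketch, assuming enableVp8Codec() is defined in the same Activity):
@Override
protected void onResume() {
    super.onResume();
    // Re-apply the codec preference whenever the Activity returns to the foreground.
    enableVp8Codec();
}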
I have been searching for a couple of days now and haven't been able to find a suitable solution.
I am trying to check whether any app in the background is using the microphone, so that my app can use it; otherwise I just want to show the message "Microphone in use by another app".
I tried checking all the applications in the background and their permissions, but that doesn't solve my problem, since there is a package wearable.app which asks for the permission but doesn't affect or use the audio.
I tried the other solutions I was able to find here or on Google, but none of them seem to be the proper way.
All I want is to check that the microphone is not being used, so my app can use it.
I will appreciate any suggestion.
After searching some more I found the solution, and I am adding it here so anyone who needs it can find it more easily.
private boolean validateMicAvailability() {
    boolean available = true;
    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 44100,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_DEFAULT, 44100);
    try {
        if (recorder.getRecordingState() != AudioRecord.RECORDSTATE_STOPPED) {
            available = false;
        }
        recorder.startRecording();
        if (recorder.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            recorder.stop();
            available = false;
        }
        recorder.stop();
    } finally {
        recorder.release();
        recorder = null;
    }
    return available;
}
You can do it the other way around.
Get the microphone in your app.
Get a list of the installed apps that have the RECORD_AUDIO permission (see the sketch below).
Then check whether one of these apps is in the foreground and, if there is one, release the microphone so that the other app can use it (for example, when a phone call occurs).
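A rough sketch of that permission check (holding RECORD_AUDIO does not mean an app is recording right now, so this only narrows down the candidates):
import android.Manifest;
import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import java.util.ArrayList;
import java.util.List;

// Lists installed packages that declare the RECORD_AUDIO permission.
// Note: on Android 11+ package visibility rules may limit what getInstalledPackages() returns.
public final class RecordPermissionScanner {
    public static List<String> packagesWithRecordPermission(Context context) {
        List<String> result = new ArrayList<>();
        PackageManager pm = context.getPackageManager();
        for (PackageInfo info : pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)) {
            if (info.requestedPermissions == null) {
                continue;
            }
            for (String permission : info.requestedPermissions) {
                if (Manifest.permission.RECORD_AUDIO.equals(permission)) {
                    result.add(info.packageName);
                    break;
                }
            }
        }
        return result;
    }
}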
A bit of a dirty practice, but I think it is what you are looking for.
Cheers!
This is how it is done:
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
if (am.getMode() == AudioManager.MODE_IN_COMMUNICATION) {
    // Mic is in use
}
MODE_NORMAL -> You are good to go; the mic is not in use.
MODE_RINGTONE -> Incoming call; the phone is ringing.
MODE_IN_CALL -> A phone call is in progress.
MODE_IN_COMMUNICATION -> The mic is being used by another application.
You can also use AudioManager.AudioRecordingCallback():
am.registerAudioRecordingCallback(new AudioManager.AudioRecordingCallback() {
    @Override
    public void onRecordingConfigChanged(List<AudioRecordingConfiguration> configs) {
        super.onRecordingConfigChanged(configs);
        try {
            isMicOn = configs.get(0) != null;
        } catch (Exception e) {
            isMicOn = false;
        }
        if (isMicOn) {
            // microphone is on
        } else {
            // microphone is off
        }
        Toast.makeText(context, isMicOn ? "Mic on" : "Mic off", Toast.LENGTH_SHORT).show();
    }
}, null);
I know this may sound a bit tedious or like the long way around, but have you considered recording a logcat? Record a log for both the kernel and the apps, recreate the issue, and then compare the two logs to see which program holds the mic when the kernel shows it in use.
Since audio input sharing behaviour varies across Android versions, this answer aims to provide a complete solution based on the docs.
Pre-Android 10
Before Android 10 the input audio stream could only be captured by one
app at a time. If some app was already recording or listening to
audio, your app could create an AudioRecord object, but an error would
be returned when you called AudioRecord.startRecording() and the
recording would not start.
So, for pre-Android 10 versions, you can use this function to check whether the mic is being used by another app.
private fun isAnotherAppUsingMic(): Boolean {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) return false
    createRecorder().apply {
        try {
            startRecording()
            if (recordingState != AudioRecord.RECORDSTATE_RECORDING) {
                return true
            }
            stop()
            return false
        } catch (e: IllegalStateException) {
            return true
        } finally {
            release()
        }
    }
}
private fun createRecorder(): AudioRecord {
    return AudioRecord(
        MediaRecorder.AudioSource.MIC,
        SAMPLE_RATE_HZ,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        2 * AudioRecord.getMinBufferSize(
            SAMPLE_RATE_HZ,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT
        )
    )
}
const val SAMPLE_RATE_HZ = 44100
Android 10 and above
Android 10 imposes a priority scheme that can switch the input audio
stream between apps while they are running. In most cases, if a new
app acquires the audio input, the previously capturing app continues
to run, but receives silence.
So, for Android 10 and higher, in most cases your app will take priority: if another app such as a voice or screen recorder is already running and you then start using the mic, your app gets the input. However, you still need to check for a voice/video call, because calls have higher priority and the mic won't be available to your app (it will receive silence). You can use the code below to check whether there is an active call:
private fun isVoiceCallActive(): Boolean {
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    return audioManager.mode in listOf(
        AudioManager.MODE_IN_CALL,
        AudioManager.MODE_IN_COMMUNICATION
    )
}
In summary, you can merge the above two functions to check whether the mic is available before you use it:
fun isMicAvailable() = !isAnotherAppUsingMic() && !isVoiceCallActive()
I have heard about screen sharing on desktop using WebRTC, but for Android there does not seem to be much information.
My questions are:
1. Is it possible to use WebRTC for screen sharing on Android? I mean casting the current phone screen to another phone's screen.
2. If 1 is yes, how can I achieve this?
Thanks.
It is possible!
It can be done using the directions below.
I've used ScreenShareRTC in conjunction with ProjectRTC to stream the contents of the screen to a browser with decent quality and fairly low latency ~100ms.
I've added an example below that shows how to configure a screen share as a video source and add it as a track on a stream.
Get the VideoCapturer
@TargetApi(21)
private VideoCapturer createScreenCapturer() {
    if (mMediaProjectionPermissionResultCode != Activity.RESULT_OK) {
        report("User didn't give permission to capture the screen.");
        return null;
    }
    return new ScreenCapturerAndroid(
            mMediaProjectionPermissionResultData, new MediaProjection.Callback() {
                @Override
                public void onStop() {
                    report("User revoked permission to capture the screen.");
                }
            });
}
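The check above assumes mMediaProjectionPermissionResultCode and mMediaProjectionPermissionResultData were filled in by an earlier screen-capture permission request. A minimal sketch of obtaining them in the Activity (the request code is arbitrary; the field names are reused from the snippet above):
private static final int CAPTURE_PERMISSION_REQUEST_CODE = 1;

private void requestScreenCapturePermission() {
    MediaProjectionManager manager =
            (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
    // Shows the system dialog asking the user to allow screen capture.
    startActivityForResult(manager.createScreenCaptureIntent(), CAPTURE_PERMISSION_REQUEST_CODE);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == CAPTURE_PERMISSION_REQUEST_CODE) {
        // These are the fields createScreenCapturer() reads.
        mMediaProjectionPermissionResultCode = resultCode;
        mMediaProjectionPermissionResultData = data;
    }
}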
Initialize the capturer and add the tracks to the local media stream
private void initScreenCapturStream() {
    mLocalMediaStream = factory.createLocalMediaStream("ARDAMS");

    // Obtain the screen capturer created in the previous step.
    VideoCapturer videoCapturer = createScreenCapturer();

    MediaConstraints videoConstraints = new MediaConstraints();
    videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxHeight", Integer.toString(mPeerConnParams.videoHeight)));
    videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxWidth", Integer.toString(mPeerConnParams.videoWidth)));
    videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxFrameRate", Integer.toString(mPeerConnParams.videoFps)));
    videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("minFrameRate", Integer.toString(mPeerConnParams.videoFps)));

    mVideoSource = factory.createVideoSource(videoCapturer);
    videoCapturer.startCapture(mPeerConnParams.videoWidth, mPeerConnParams.videoHeight, mPeerConnParams.videoFps);

    VideoTrack localVideoTrack = factory.createVideoTrack(VIDEO_TRACK_ID, mVideoSource);
    localVideoTrack.setEnabled(true);
    mLocalMediaStream.addTrack(localVideoTrack);

    AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
    mLocalMediaStream.addTrack(factory.createAudioTrack("ARDAMSa0", audioSource));

    mListener.onStatusChanged("STREAMING");
}
For more information, this might be a good place to start: it's an Android project that connects to a ProjectRTC signalling server and shares the screen as video. I found it very helpful!
Android screen sharing project(Android client - Java)
https://github.com/Jeffiano/ScreenShareRTC
ProjectRTC(Node server)
https://github.com/pchab/ProjectRTC