I'm trying to use the native WebRTC SDK (libjingle) for Android. So far I can send streams from Android to the web (or other platforms) just fine, and I can also receive the remote MediaStream from a peer (in the onAddStream callback).
The project I'm working on requires only audio streams; no video tracks are created or sent to anyone.
My question is: how do I play the MediaStream object that I get from remote peers?
@Override
public void onAddStream(MediaStream mediaStream) {
    Log.d(TAG, "onAddStream: got remote stream");
    // Need to play the audio here
}
Again, the question is about audio. I'm not using video.
Apparently all the native WebRTC examples use video tracks, so I've had no luck finding any documentation or examples on the web.
Thanks in advance!
We can get the remote AudioTrack using the code below:
import org.webrtc.AudioTrack;
@Override
public void onAddStream(final MediaStream stream) {
    if (stream.audioTracks.size() > 0) {
        remoteAudioTrack = stream.audioTracks.get(0);
    }
}
Apparently all the native WebRTC examples use video tracks, so I had no luck finding any documentation or examples on the web.
Yes, as app developers we only have to take care of video rendering ourselves.
Once the remote audio track is received, it plays automatically through the default output (earpiece/loudspeaker/wired headset), based on the proximity and audio-routing settings.
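If you ever need to mute or scale that remote audio in code, the track object itself can be used. A minimal sketch, assuming remoteAudioTrack is the org.webrtc.AudioTrack captured in onAddStream above:
// Playback already happens automatically; these calls only adjust it.
remoteAudioTrack.setEnabled(true);   // false mutes the remote audio
remoteAudioTrack.setVolume(5.0);     // scaling factor; the exact range is implementation-defined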
Check the code below from AppRTCAudioManager.java to enable/disable the speakerphone:
/** Sets the speaker phone mode. */
private void setSpeakerphoneOn(boolean on) {
    boolean wasOn = audioManager.isSpeakerphoneOn();
    if (wasOn == on) {
        return;
    }
    audioManager.setSpeakerphoneOn(on);
}
Reference Source: AppRTCAudioManager.java
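For completeness, a hedged sketch of how that helper is typically driven during a call (the setMode() call reflects what AppRTC-style apps do; adapt it to your own audio setup):
AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
// Voice-call routing mode, so speakerphone/earpiece switching behaves as expected.
audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
setSpeakerphoneOn(true);   // route remote audio to the loudspeaker
setSpeakerphoneOn(false);  // route it back to the earpiece / headset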
Following up (or should have been preceding) my last post, I am now wondering if I am on the wrong path.
I want to send MIDI file data (not audio) to an external device, such as a MIDI-enabled keyboard, and have the MIDI data play the keyboard.
As per my last post, I have:
MediaPlayer mediaPlayer;
String music = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_MUSIC).getAbsolutePath();
mediaPlayer = MediaPlayer.create(MainActivity.this, Uri.parse(music + "/test.mid"));
mediaPlayer.start();
When I run this, it just plays the MIDI file as audio.
Later, the next part is sending this out (hopefully) over Bluetooth, i.e. code something like the suggestion from my last post:
AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
AudioDeviceInfo[] devices = audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS);
for (AudioDeviceInfo device : devices) {
    if (device.getType() == AudioDeviceInfo.TYPE_BLE_HEADSET) {
        mediaPlayer.setPreferredDevice(device);
        break;
    }
}
However, is the above sending MIDI data, or just the generated audio? Can MediaPlayer be used for this, or do I need something completely different such as java-midi or the MidiManager used in these examples?
I have not got any Bluetooth MIDI receiver device yet (i.e. to play into the keyboard), as I do not want to purchase something like this, for example this, though it only claims to support iOS; but isn't Bluetooth universal? At any rate, that is a separate topic; this question is just about actually playing out the MIDI data.
I only ever want to replay a file, never to actually generate the MIDI events myself, so I was hoping something simpler like MediaPlayer would do what I am after. Is this correct?
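For reference, the MidiManager route mentioned above looks roughly like this. This is only a sketch under assumptions (port index 0, a hard-coded note-on message); it sends raw MIDI bytes to an attached device, so replaying a .mid file would still require something to parse the file and schedule its events:
MidiManager midiManager = (MidiManager) getSystemService(Context.MIDI_SERVICE);
MidiDeviceInfo[] infos = midiManager.getDevices();
if (infos.length > 0) {
    midiManager.openDevice(infos[0], new MidiManager.OnDeviceOpenedListener() {
        @Override
        public void onDeviceOpened(MidiDevice device) {
            if (device == null) return;
            // Port 0 is a placeholder; a real keyboard may expose several input ports.
            MidiInputPort inputPort = device.openInputPort(0);
            byte[] noteOn = {(byte) 0x90, 60, 100}; // note on, middle C, velocity 100
            try {
                inputPort.send(noteOn, 0, noteOn.length);
            } catch (IOException e) {
                Log.e("Midi", "MIDI send failed", e);
            }
        }
    }, new Handler(Looper.getMainLooper()));
}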
I have an Android TV app that I'm integrating a speech recognizer into. Only today did I find out that Android TV supports the remote's microphone.
Detecting whether the device has a microphone is easy enough using requireActivity().packageManager.hasSystemFeature(PackageManager.FEATURE_MICROPHONE), but I don't think it accounts for remote microphones.
My question is whether it's possible to detect whether the remote I'm using has a microphone or not, so that I can display the "No Audio Signals" error if it doesn't.
The way I would do it is to use AudioRecord: call startRecording() and then check the recorder's state by invoking getRecordingState(). If recording started successfully, that proves there is a microphone attached to the device, whether internal or external. The function will return 3 (AudioRecord.RECORDSTATE_RECORDING); otherwise it will return 1 (AudioRecord.RECORDSTATE_STOPPED).
Below is the code in Kotlin:
private fun checkMicAvailability(audioRecord: AudioRecord): Boolean {
    audioRecord.startRecording()
    val micCheckAvailability = audioRecord.recordingState == AudioRecord.RECORDSTATE_RECORDING
    audioRecord.stop()
    audioRecord.release()
    return micCheckAvailability
}
Do note that you must create the recorder with the AudioRecord constructor. You can have a look at the AudioRecord documentation.
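A minimal construction sketch in Java (the sample rate and format here are common defaults, not requirements, and RECORD_AUDIO permission must already be granted):
int sampleRate = 44100;
int minBufferSize = AudioRecord.getMinBufferSize(
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord audioRecord = new AudioRecord(
        MediaRecorder.AudioSource.MIC,
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        minBufferSize);
// Then pass it to checkMicAvailability(audioRecord) above.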
I am trying to make an application that will allow a registered client to make an audio call to another registered client over Wi-Fi (it doesn't require internet).
I was able to successfully register and make call using SIP.
After the call is picked up, I don't know how to handle the RTP stream and connect it with the microphone and speaker of the phone (Android and iOS) to perform normal calling functionality.
I am using Xamarin and SIP Sorcery library. I am new to Xamarin and mobile application development.
Below is part of the code to explain myself a little better:
async Task Call()
{
    Console.WriteLine("Start of Calling section");
    rtpSession = new RTPMediaSession((int)SDPMediaFormatsEnum.PCMU, AddressFamily.InterNetwork);

    // Maybe something like this to connect the audio devices to the RTP session:
    // get microphone
    // get speaker
    // ConnectAudioDevicesToRtp(rtpSession, microphone, speaker);

    // Place the call and wait for the result.
    bool callResult = await userAgent.Call(DESTINATION, ssid, userName, registerPassword, domainHost, rtpSession);

    if (callResult)
    {
        Console.WriteLine("Call attempt successful. Start talking.");
        // I am reaching this point and need help with how to move forward from here
        // to support audio calling on both Android and iOS.
    }
    else
    {
        Console.WriteLine("Call attempt failed.");
    }
}
Any help or direction would be appreciated. Thank you.
I looked at the documentation from SIP Sorcery; there I found only an example for Windows (https://sipsorcery.github.io/sipsorcery/articles/sipuseragent.html), but not for iOS or Android.
Here is the description from SIP Sorcery for cross-platform support (https://sipsorcery.github.io/sipsorcery/). I think you need the SIPSorceryMedia.FFmpeg library.
I need to stream audio from an external Bluetooth device and video from the camera to a Wowza server so that I can then access the live stream through a web app.
I've been able to successfully send other streams to Wowza using the GoCoder library, but as far as I can tell, this library only sends streams that come from the device's camera and mic.
Does anyone have a good suggestion for implementing this?
In the GoCoder Android SDK, the setAudioSource method of WZAudioSource allows you to specify an audio input source other than the default. Here's the relevant API doc for this method:
public void setAudioSource(int audioSource)
Sets the actively configured input device for capturing audio.
Parameters:
audioSource - An identifier for the active audio source. Possible values are those listed at MediaRecorder.AudioSource. The default value is MediaRecorder.AudioSource.CAMCORDER. Note that setting this while audio is actively being captured will have no effect until a new capture session is started. Setting this to an invalid value will cause an error to occur at session begin.
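A hedged sketch of using it; this assumes you already have the WZAudioSource instance that your broadcast configuration uses (how that object is created varies by SDK version), and the Bluetooth SCO calls are plain Android APIs, not part of GoCoder:
// Capture from the microphone source instead of the default CAMCORDER source.
audioSource.setAudioSource(MediaRecorder.AudioSource.MIC);
// To pull audio from a Bluetooth headset mic, SCO routing generally has to be
// started at the platform level first.
AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
am.startBluetoothSco();
am.setBluetoothScoOn(true);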
I am currently using an app that uses the method exemplified in libstreaming-example-1 (libstreaming) to stream the camera from an Android device to an Ubuntu server (using OpenCV and libVLC). This way, my Android device acts as a server and waits for the client (the Ubuntu server) to send the play signal over RTSP, and then starts the streaming over UDP.
The problem I am facing with the streaming is that I am getting a delay of approximately 1.1s during the transmission and I want to get it down to 150ms maximum.
I tried to implement libstreaming-example-2 from libstreaming-examples, but I couldn't: I don't have access to detailed documentation and couldn't figure out how to send the right signal to display the stream on my server. Other than that, I've been trying to see what I can do with example 1 to bring the latency down, but nothing new so far.
PS: I am using a LAN, so network/bandwidth is not the problem.
Here are the questions:
What is the best way to get the lowest possible latency while streaming video from the camera?
How can I implement example-2?
Is example-2's method of streaming better for getting the latency down to 150 ms?
Is this latency related to the decompression of the video on the server side? (No frames are dropped, FPS: 30)
Thank you!
I had the same issue as you, with a huge stream delay (around 1.5-1.6 seconds).
My setup is an Android device that streams its camera over RTSP using libstreaming; the receiving side is an Android device using libVLC as the media player. I found a solution that decreases the delay to 250-300 ms. It was achieved by setting up libVLC with the following parameters:
mLibvlc = new LibVLC();
mLibvlc.setVout(LibVLC.VOUT_ANDROID_WINDOW);
mLibvlc.setDevHardwareDecoder(LibVLC.DEV_HW_DECODER_AUTOMATIC);
mLibvlc.setHardwareAcceleration(LibVLC.HW_ACCELERATION_DISABLED);
mLibvlc.setNetworkCaching(150);
mLibvlc.setFrameSkip(true);
mLibvlc.setChroma("YV12");
restartPlayer();
private void restartPlayer() {
    if (mLibvlc != null) {
        try {
            mLibvlc.destroy();
            mLibvlc.init(this);
        } catch (LibVlcException lve) {
            throw new IllegalStateException("LibVLC initialisation failed: " + LibVlcUtil.getErrorMsg());
        }
    }
}
You can play with setNetworkCaching(int networkCaching) to tune the delay a bit.
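If you are on a newer libVLC Android build where these setters no longer exist, roughly the same tuning is passed as option strings to the LibVLC constructor; a sketch (the exact option values are assumptions to experiment with):
ArrayList<String> options = new ArrayList<>();
options.add("--network-caching=150");
options.add("--clock-jitter=0");
options.add("--clock-synchro=0");
LibVLC libVLC = new LibVLC(context, options);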
Please let me know if this was helpful for you, or if you found a better solution with this or another environment.