I am working on WebRTC. Right now I am testing with the echo challenge, so I was thinking about microphone toggling techniques: for example, while user A is talking, switch off the microphone of user B, and vice versa. Does WebRTC have this built in? If not, how can I achieve it? Any help will be really appreciated.
WebRTC doesn't have this built in, because it really belongs to the media presentation layer. A good strategy, employed by SimpleWebRTC via the attachMediaStream module, is to simply attach the local participant's media with the video (or audio, for audio-only) element muted.
The relevant code, found in the main file of that module, is this:
if (!element) {
  element = document.createElement(opts.audio ? 'audio' : 'video');
} else if (element.tagName.toLowerCase() === 'audio') {
  opts.audio = true;
}

// Mute the element so the local participant's audio doesn't play back -
// do this only for the local participant, not the remote participants.
if (opts.muted) element.muted = true;

// Attach the stream to the element.
if (typeof element.srcObject !== 'undefined') {
  element.srcObject = stream;
}
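If you also need to toggle whether a participant's microphone is actually sent to the other peer (rather than just muting local playback), a common approach is to flip the enabled flag on the outgoing audio track. A minimal sketch, assuming localStream is the MediaStream you added to your peer connection (the helper name is just for illustration):

function setMicrophoneEnabled(localStream, enabled) {
  // When enabled is false, the track transmits silence instead of captured audio.
  localStream.getAudioTracks().forEach(function (track) {
    track.enabled = enabled;
  });
}

// For example, mute user B's microphone while user A talks:
setMicrophoneEnabled(localStream, false);

You would trigger this from your signaling channel, since WebRTC itself has no notion of whose turn it is to talk.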
I'm currently attempting to implement an audio player in React Native using Expo's Audio module. Everything works as expected on Android, but on iOS the playbackStatusUpdate() callback does not appear to fire. I've pasted the relevant snippet below.
Audio.setAudioModeAsync({
  playsInSilentModeIOS: true,
  interruptionModeIOS: INTERRUPTION_MODE_IOS_DO_NOT_MIX,
  interruptionModeAndroid: INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
});

const { sound } = await Audio.Sound.createAsync({ uri });
await sound.setProgressUpdateIntervalAsync(100);

sound.setOnPlaybackStatusUpdate((data) => {
  if (data.isLoaded) {
    setCurrentPosition(data.positionMillis);
    setIsPlaying(data.isPlaying);
    setClipDuration(data.durationMillis || 0);
    if (data.didJustFinish && !data.isLooping) {
      sound.setPositionAsync(0);
      options.onPlaybackComplete();
    }
  }
});
On iOS, the callback is simply never called, so none of the status values get set.
I'm using the current position to update a slider higher up in my component tree. Again, this all works exactly as expected on Android. Any idea what could be happening here?
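One variant that may be worth trying (a sketch, not something confirmed to change the iOS behaviour): expo-av also accepts the status callback as the third argument of Audio.Sound.createAsync, which registers it before playback can begin and rules out any race between creation and registration.

// Register the status listener at creation time; progressUpdateIntervalMillis
// replaces the separate setProgressUpdateIntervalAsync call.
const onStatus = (data) => {
  if (data.isLoaded) {
    setCurrentPosition(data.positionMillis);
  }
};
const { sound } = await Audio.Sound.createAsync(
  { uri },
  { progressUpdateIntervalMillis: 100 },
  onStatus
);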
I want to take audio input in my Unity application, which I am building for the Android platform. The code I have added in the Start function is as follows:
var audio = GetComponent<AudioSource>();
audio.clip = Microphone.Start("Built-in Microphone", true, 10, 44100);
audio.loop = true;
while (!(Microphone.GetPosition(null) > 0)) { }
audio.Play();
But it is showing the following error:
ArgumentException: Couldn't acquire device ID for device name Built-in Microphone
I'm referring to this post for adding the microphone. How can I resolve this? Also, is there any blog available that covers this end to end?
The error message clearly indicates that it can't find a microphone device named "Built-in Microphone", so you should first see which devices it can find.
Try running the following code in the Start method and see what output you get:
foreach (var device in Microphone.devices)
{
    Debug.Log("Name: " + device);
}
Once you have the list of devices, replace "Built-in Microphone" with the name of your desired device. If "Built-in Microphone" is in the list, or you get the same issue with a different device, then you're probably dealing with a permissions issue.
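Putting that together, a minimal sketch (assuming the GameObject also has an AudioSource attached): passing null as the device name tells Unity to use the default microphone, which avoids hard-coding a name like "Built-in Microphone".

using UnityEngine;

public class MicCapture : MonoBehaviour
{
    void Start()
    {
        // Bail out early if no microphone is available at all.
        if (Microphone.devices.Length == 0)
        {
            Debug.LogError("No microphone devices found.");
            return;
        }

        // null selects the default microphone device.
        var audio = GetComponent<AudioSource>();
        audio.clip = Microphone.Start(null, true, 10, 44100);
        audio.loop = true;
        // Wait until the microphone has started recording.
        while (!(Microphone.GetPosition(null) > 0)) { }
        audio.Play();
    }
}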
I would like to add media player functionality to my custom receiver.
On the Google developers website, I found a description of how to implement a sender and a styled media receiver application.
I have built this sample, and it works fine: I can cast an MP3 file hosted on Google Drive to my Chromecast device.
Now I have implemented a custom receiver (see below) which should be able to play a URL referring to an m3u8 file. For this, I am using the Media Player Library, as suggested by Google.
<body>
  <div>
    <p id='text'> </p>
    <video id='vid'> </video>
  </div>
  <script type="text/javascript" src="https://www.gstatic.com/cast/sdk/libs/receiver/2.0.0/cast_receiver.js"></script>
  <script type="text/javascript" src="https://www.gstatic.com/cast/sdk/libs/mediaplayer/1.0.0/media_player.js"></script>
  <script type="text/javascript">
    // If ?Debug=true is set in the URL, include debugging information.
    if (window.location.href.indexOf('Debug=true') != -1) {
      cast.receiver.logger.setLevelValue(cast.receiver.LoggerLevel.DEBUG);
      cast.player.api.setLoggerLevel(cast.player.api.LoggerLevel.DEBUG);
    }

    console.log("mediaElement set");
    var mediaElement = document.getElementById('vid');

    // Create the media manager. This will handle all media messages by default.
    window.mediaManager = new cast.receiver.MediaManager(mediaElement);

    // Remember the default value of the receiver's onLoad, so this sample can
    // play non-adaptive media as well.
    window.defaultOnLoad = mediaManager.onLoad;
    mediaManager.onLoad = function (event) {
      // The Media Player Library requires that you call player unload between
      // different invocations.
      if (window.player !== null) {
        player.unload(); // Must unload before starting again.
        window.player = null;
      }
      // This trivial parser is by no means best practice; it shows how to access
      // event data, and uses a string search of the suffix rather than looking
      // at the MIME type, which would be better. In practice, you will know what
      // content you are serving while writing your player.
      if (event.data['media'] && event.data['media']['contentId']) {
        console.log('Starting media application');
        var t = document.getElementById('text');
        t.innerHTML = event.data['media'];
        console.log("EventData: " + event.data);
        console.log("EventData-Media: " + event.data['media']);
        console.log("EventData-ContentID: " + event.data['media']['contentId']);
        var url = event.data['media']['contentId'];
        console.log("URL: " + url);

        // Create the Host - much of your interaction with the library uses
        // the Host and methods you provide to it.
        window.host = new cast.player.api.Host(
            {'mediaElement': mediaElement, 'url': url});
        var ext = url.substring(url.lastIndexOf('.'), url.length);
        var initStart = event.data['media']['currentTime'] || 0;
        var autoplay = event.data['autoplay'] || true;
        var protocol = null;
        mediaElement.autoplay = autoplay; // Make sure autoplay gets set.
        protocol = cast.player.api.CreateHlsStreamingProtocol(host);

        host.onError = function (errorCode) {
          console.log("Fatal Error - " + errorCode);
          if (window.player) {
            window.player.unload();
            window.player = null;
          }
        };

        // If you need cookies, set withCredentials = true and also set any
        // header information you need. If you don't need them, there can be
        // some unexpected effects from setting this value.
        // host.updateSegmentRequestInfo = function(requestInfo) {
        //   requestInfo.withCredentials = true;
        // };

        console.log("we have protocol " + ext);
        if (protocol !== null) {
          console.log("Starting Media Player Library");
          window.player = new cast.player.api.Player(host);
          window.player.load(protocol, initStart);
        } else {
          window.defaultOnLoad(event); // Do the default processing.
        }
      }
    };

    window.player = null;
    console.log('Application is ready, starting system');
    window.castReceiverManager = cast.receiver.CastReceiverManager.getInstance();
    castReceiverManager.start();
  </script>
</body>
I've figured out that only .m3u8, .ism and .mpd files can be cast with the Media Player Library. So I created an m3u8 file as follows, hosted it on Google Drive, and tried to cast it to my custom receiver.
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=173952
https://www.googledrive.com/host/0B1x31lLRAxTMRndtNkhSWVdGLVE
But it doesn't work. I used the Google Cast Developer Console to debug the custom receiver. When executing the
window.player.load(protocol, initStart);
command, I get a FATAL ERROR on the console.
I think the problem is in the custom receiver code, because the sender application from the Google documentation works fine with the styled media receiver.
Does anyone recognize this problem or see an issue in the custom receiver code? Does anyone have an idea how I could debug the styled media player? It would be much easier if I could see which messages are exchanged with the styled media player, but when I activate debugging I can't see the exchanged messages.
If you turn on debugging, you can see the message exchanges (see here, under the Debugging section). There is a full-fledged receiver sample project on our GitHub repo as well.
I am referring to, and going through, the source code of AppRTCDemo, a demo application for WebRTC.
What I am trying to do is:
Build my own WebRTC application that will do AV calls on an Android device.
Replace the existing https://apprtc.appspot.com/ server and related functionality.
To achieve the above, I want to understand the basic flow of WebRTC function calls and the steps to make/receive calls (which functions I need to call and in what order).
I have gone through the source code and understood a few things, but the code is pretty complicated to understand and has no documentation.
It would be a great help if someone could provide examples or documents explaining the steps for making/receiving AV calls (how we get/set the SDP, how to render local/remote video, etc.).
I have seen these posts and they are very helpful:
WebRTC java server trouble
https://www.webrtc-experiment.com/docs/WebRTC-PeerConnection.html
I am able to build and run the AppRTCDemo app.
Any help on this would be greatly appreciated!
There is no single timeline, since everything is asynchronous, but I will try to explain. There are two main flows: the offer/answer flow with SDP, and the ICE candidate flow.
Flow 1: SDP
Step 1 - Offer peer:
On the offer side, create an RTCPeerConnection (with STUN and TURN servers as parameters).
var STUN = {
  url: 'stun:stun.l.google.com:19302'
};
var TURN = {
  url: 'turn:homeo@turn.bistri.com:80',
  credential: 'homeo'
};
var iceServers = {
  iceServers: [STUN, TURN]
};
var peer = new RTCPeerConnection(iceServers);
Step 2 - Offer peer:
Call getUserMedia with your constraints. In the success callback, add the stream to the RTCPeerConnection using the addStream method. Then you can create the offer by calling createOffer on the peer connection object.
navigator.webkitGetUserMedia(
  {
    audio: false,
    video: {
      mandatory: {
        maxWidth: screen.width,
        maxHeight: screen.height,
        minFrameRate: 1,
        maxFrameRate: 25
      }
    }
  },
  gotStream,
  function (e) { console.log("getUserMedia error: ", e); });

function gotStream(stream) {
  // If you want to see your own camera
  vid.src = webkitURL.createObjectURL(stream);
  peer.addStream(stream);
  peer.createOffer(onSdpSuccess, onSdpError);
}
Step 3 - Offer peer:
In the callback of createOffer, set the parameter (the SDP offer) as the localDescription of the RTCPeerConnection (which will start gathering ICE candidates). Then send the offer to the other peer via the signaling server. (I will not describe the signaling server; it just passes data from one peer to the other.)
function onSdpSuccess(sdp) {
  console.log(sdp);
  peer.setLocalDescription(sdp);
  // I use socket.io for my signaling server
  socket.emit('offer', sdp);
}
Step 5 - Answer peer:
Each time the answer peer receives an offer, it creates an RTCPeerConnection with the STUN and TURN servers, calls getUserMedia, and in the callback adds the stream to the RTCPeerConnection. Then it calls setRemoteDescription with the SDP offer and triggers createAnswer.
In the success callback of createAnswer, use setLocalDescription with the parameter, and then send the answer SDP to the offer peer via the signaling server.
// Received on a socket.io socket.
// The callbacks are only useful for tracking.
socket.on('offer', function (sdp) {
  peer.setRemoteDescription(new RTCSessionDescription(sdp), onSdpSuccess, onSdpError);
  peer.createAnswer(function (sdp) {
    peer.setLocalDescription(sdp);
    socket.emit('answer', sdp);
  }, onSdpError);
});
Step 7 - Offer peer:
Receive the SDP answer and call setRemoteDescription on the RTCPeerConnection.
socket.on('answer', function (sdp) {
  peer.setRemoteDescription(new RTCSessionDescription(sdp),
    function () { console.log("Remote Description Success"); },
    function () { console.log("Remote Description Error"); });
});
Flow 2: ICE candidates
Both sides:
Each time the RTCPeerConnection fires onicecandidate, send the candidate to the other peer through the signaling server.
When an ICE candidate arrives from the signaling server, just add it to the RTCPeerConnection using addIceCandidate(new RTCIceCandidate(obj)).
peer.onicecandidate = function (event) {
  // event.candidate is null once gathering is complete, so guard against it.
  if (event.candidate) {
    console.log("New Candidate");
    console.log(event.candidate);
    socket.emit('candidate', event.candidate);
  }
};

socket.on('candidate', function (candidate) {
  console.log("New Remote Candidate");
  console.log(candidate);
  peer.addIceCandidate(new RTCIceCandidate({
    sdpMLineIndex: candidate.sdpMLineIndex,
    candidate: candidate.candidate
  }));
});
Finally:
If the two flows above work, use the onaddstream event on each RTCPeerConnection. Once the ICE candidates have been paired and the best peer-to-peer path has been found, the stream negotiated in the SDP is added and flows through the peer-to-peer connection. In this event handler you just need to attach the stream to a video tag, for example, and you're done.
peer.onaddstream = function (event) {
  vid.src = webkitURL.createObjectURL(event.stream);
  console.log("New Stream");
  console.log(event.stream);
};
I will edit tomorrow with some more code to help illustrate what I'm saying. If you have questions, go for it.
Here is my signaling server:
var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);

server.listen(3000);

app.get('/', function (req, res) {
  res.send('The cake is a lie');
});

io.on('connection', function (socket) {
  console.log('NEW CONNECTION');
  socket.on('offer', function (data) {
    console.log(data);
    socket.broadcast.emit("offer", data);
  });
  socket.on('answer', function (data) {
    console.log(data);
    socket.broadcast.emit("answer", data);
  });
  socket.on('candidate', function (data) {
    console.log(data);
    socket.broadcast.emit("candidate", data);
  });
});
So, I'm maintaining the main page of an archive located here, where an audio element gets inserted with a randomly chosen RELATIVE link to an audio file. It plays the audio fine in desktop browsers, but fails to play it in Android mobile browsers like Dolphin Jetpack and Opera Mobile. The code that creates the audio element:
var d = document,
    $ = function (a) { return d.querySelector(a); },
    o, a, b,
    audios = ["htmlfiles/Log4/Audio Files/1339041384627.png.audio02.ogg",
              "htmlfiles/Log4/Audio Files/1339039129463.png.audio01.ogg",
              "htmlfiles/Log5/Audio Files/s05_08.png.audio01.ogg",
              "htmlfiles/Log6/Audio files/s06_19.png.audio01.ogg",
              "htmlfiles/Log7P1/Audio Files/s07_01.png.audio01.ogg",
              "htmlfiles/Log10/Audio files/1343286991927.png.audio01.ogg",
              "htmlfiles/Log10/Audio files/1343293678793.gif.audio02.ogg",
              "AudioFiles/1343888663849.png.audio02.ogg",
              "AudioFiles/1345719774310.png.audio01.ogg",
              "AudioFiles/1346311163394.png.audio02.ogg",
              "AudioFiles/1346919244950.png.audio02.ogg",
              "AudioFiles/1347509276756.png.audio01.ogg",
              "AudioFiles/1347515470408.png.audio02.ogg",
              "AudioFiles/1348079866537.png.audio01.ogg",
              "AudioFiles/1349419913717.png.audio01.ogg",
              "AudioFiles/1350030423418.png.audio01.ogg",
              "AudioFiles/1350033736151.png.audio02.ogg",
              "AudioFiles/1351231673165.png.audio01.ogg",
              "AudioFiles/1343870457212.png.audio01.ogg"];

/* The code above is in the head tag; the one below is at the end of the body tag */
window.opera && (o = $('div:not([id])')).parentNode.removeChild(o);
var audio = d.createElement("audio")/*, source = d.createElement("source")*/;
audio.autoplay = audio.controls = audio.loop = true;
// source.type = "audio/ogg";
audio.src = /* source.src = */ audios[Math.floor(Math.random() * audios.length)];
// audio.appendChild(source);
audio.appendChild(d.createTextNode("Your browser does not support the audio element."));
$("div#container").insertBefore(audio, $("div#container > div:last-of-type").nextElementSibling);
I'd like to know what can cause such behaviour. I've tested both mobile browsers on w3schools' try-it page, and their audio worked fine there. I suspect it could be something to do with the HTTPS protocol.
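One quick thing worth checking, since all the files are Ogg Vorbis: whether the mobile browser actually reports support for that format. A small sketch using the standard canPlayType API (the fallback message is just illustrative):

var probe = document.createElement("audio");
// canPlayType returns "", "maybe", or "probably".
if (probe.canPlayType && probe.canPlayType('audio/ogg; codecs="vorbis"') !== "") {
  console.log("Ogg Vorbis should be playable in this browser.");
} else {
  console.log("No Ogg Vorbis support reported; an MP3 fallback would be needed.");
}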
Edit: I've reported the bug for Opera through the report wizard, and for Mobotap via an email with a link to this question.
Just getting rid of this zombie. In the end, I think updating the browsers at a later point fixed it.