I use libstreaming to create an RTSP server on an Android phone. Then I use another phone to connect to the server and play the live stream. I want the server to use its camera and microphone to capture video and audio and stream them to the client.
After connecting, the video plays properly, but there is no sound.
The following is part of my RTSP server's code:
mSession = SessionBuilder.getInstance()
.setSurfaceView(mSurfaceView)
.setPreviewOrientation(90)
.setContext(getApplicationContext())
.setAudioEncoder(SessionBuilder.AUDIO_AAC)
//.setAudioQuality(new AudioQuality(16000, 32000))
.setAudioQuality(new AudioQuality(8000, 16000))
.setVideoEncoder(SessionBuilder.VIDEO_H264)
//.setVideoQuality(new VideoQuality(320, 240, 20, 500000))
.build();
mSession.startPreview(); //camera preview on phone surface
mSession.start();
I searched for this problem, and some people said I should modify the destination ports in SessionBuilder.java.
I tried to modify it as follows, but it still did not work:
if (session.getAudioTrack() != null) {
Log.e("SessionBuilder", "Audio track != null");
AudioStream audio = session.getAudioTrack();
audio.setAudioQuality(mAudioQuality);
audio.setDestinationPorts(5008);
}
Does somebody know the reason for this problem?
By the way, I use VLC on another phone as the client.
I use the following URL to connect to the server:
rtsp://MY_IP:1234?h264=200-20-320-240
Thanks
I traced the source code and found that the server never received a SETUP request for the audio stream; it only received the request for the video stream.
After the connection is set up in RtspServer.java, the received trackId is 1.
(trackId=0 means AudioStream and trackId=1 means VideoStream)
public Response processRequest(Request request) throws IllegalStateException, IOException {
....
else if (request.method.equalsIgnoreCase("SETUP")) {
....
boolean streaming = isStreaming();
Log.e(TAG, "trackId: " + trackId);
// received trackId=1, which represents the video stream
mSession.syncStart(trackId);
....
}
....
}
I solved this problem by using a different URL:
rtsp://MY_IP:1234?trackID=0
Thanks
I had the same problem. Setting the streaming method worked for me.
mSession.getVideoTrack().setStreamingMethod(MediaStream.MODE_MEDIACODEC_API_2);
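For context, here is roughly where that call would go, as a sketch based on the SessionBuilder code from the question (the exact placement and the extra audio-track line are my assumptions, not part of the original answer):
mSession = SessionBuilder.getInstance()
        .setContext(getApplicationContext())
        .setAudioEncoder(SessionBuilder.AUDIO_AAC)
        .setVideoEncoder(SessionBuilder.VIDEO_H264)
        .build();
// Switch the packetizer implementation before starting the session.
mSession.getVideoTrack().setStreamingMethod(MediaStream.MODE_MEDIACODEC_API_2);
// If the audio stream also misbehaves, it can be switched the same way (assumption):
mSession.getAudioTrack().setStreamingMethod(MediaStream.MODE_MEDIACODEC_API_2);
mSession.start();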
Related
I want to share audio with WebRTC on Android. I tried with MediaProjection; video sharing works, but the captured audio has no audio track. How can I get an AudioTrack from an AudioRecord?
AudioPlaybackCaptureConfiguration config = new AudioPlaybackCaptureConfiguration.Builder(sMediaProjection)
.addMatchingUsage(AudioAttributes.USAGE_MEDIA)
.addMatchingUsage(AudioAttributes.USAGE_VOICE_COMMUNICATION)
.addMatchingUsage(AudioAttributes.USAGE_VOICE_COMMUNICATION_SIGNALLING)
.addMatchingUsage(AudioAttributes.USAGE_GAME)
.build();
AudioFormat audioFormat = new AudioFormat.Builder()
.setEncoding(AudioFormat.ENCODING_PCM_16BIT)
.setSampleRate(44100)
.setChannelMask(AudioFormat.CHANNEL_IN_MONO)
.build();
audioRecord = new AudioRecord.Builder()
.setAudioFormat(audioFormat)
.setBufferSizeInBytes(BUFFER_SIZE_IN_BYTES)
.setAudioPlaybackCaptureConfig(config)
.build();
audioRecord.startRecording();
//Other code
String audioTrackId = stateProvider.getNextTrackUUID();
AudioSource as = new AudioSource(audioRecord.getAudioSource());
tracks[0] = pcFactory.createAudioTrack(audioTrackId, as); // Not Working
I think you have to check the points below.
https://developer.android.com/reference/android/media/AudioPlaybackCaptureConfiguration
the usage value MUST be AudioAttributes#USAGE_UNKNOWN or AudioAttributes#USAGE_GAME or AudioAttributes#USAGE_MEDIA. All other usages CAN NOT be captured.
the capture policy set by the app (with AudioManager#setAllowedCapturePolicy) or on each player (with AudioAttributes.Builder#setAllowedCapturePolicy) must be AudioAttributes#ALLOW_CAPTURE_BY_ALL; if both are set, the most strict one applies.
You can check this with the adb command below.
Run your WebRTC app and enter the following command:
$ adb shell dumpsys audio
...
...
players:
AudioPlaybackConfiguration piid:15 type:android.media.SoundPool u/pid:1000/1619 state:idle attr:AudioAttributes: usage=USAGE_ASSISTANCE_SONIFICATION content=CONTENT_TYPE_SONIFICATION flags=0x800 tags= bundle=null
AudioPlaybackConfiguration piid:23 type:android.media.SoundPool u/pid:10222/2040 state:idle attr:AudioAttributes: usage=USAGE_ASSISTANCE_SONIFICATION content=CONTENT_TYPE_SONIFICATION flags=0x800 tags= bundle=null
...
...
allowed capture policies:
...
...
players: means
These are the audio attributes that are currently in use.
You can find the usage set by your WebRTC app by looking for its pid.
flags=0x800: means
This shows which value the app has set with setAllowedCapturePolicy.
Refer to the link below for information about the flags:
https://cs.android.com/android/platform/superproject/+/master:frameworks/base/media/java/android/media/AudioAttributes.java;drc=master;l=1532?q=capturePolicyToFlags&ss=android%2Fplatform%2Fsuperproject&hl=ko
allowed capture policies: means
This shows which apps have called setAllowedCapturePolicy.
Also, the app attribute android:allowAudioPlaybackCapture in the manifest MUST be set to true (it defaults to true only for apps targeting API 29 or higher).
If the WebRTC app doesn't set this value, you will see the warning log below:
ALOGW("%s: Playback capture is denied for uid %u as the manifest property could not be "
"retrieved from the package manager: %s", __func__, uid, status.toString8().c_str());
I want to take audio input in my Unity application, which I am building for the Android platform. The code I have added in the Start function is as follows:
var audio = GetComponent<AudioSource>();
audio.clip = Microphone.Start("Built-in Microphone", true, 10, 44100);
audio.loop = true;
while (!(Microphone.GetPosition(null) > 0)) { }
audio.Play();
But it is showing the following error:
ArgumentException: Couldn't acquire device ID for device name Built-in Microphone
I'm referring to this post to add the microphone. How do I resolve this? Also, is there any blog available for doing this end to end?
The error message clearly indicates that it can't find a Microphone device named "Built-in Microphone". So you should probably see what devices it can find.
Try running the following code in the Start method and see what output you get:
foreach (var device in Microphone.devices)
{
Debug.Log("Name: " + device);
}
Once you have a list of the devices, replace "Built-in Microphone" with the name of your desired device. If "Built-in Microphone" is in the list, or you get the same issue with a different device, then you're probably dealing with a permissions issue.
I know how to create a send-only offer by adding "OfferToReceiveVideo: false" and "OfferToReceiveAudio: false" to the MediaConstraints parameter of this method:
public void createOffer(SdpObserver observer, MediaConstraints constraints)
But how can I create a receive-only SDP offer? I tried to create one by adding no media stream to the peer connection, but that makes the SDP very short, it contains no "a=recvonly" line, and no ICE candidates are generated.
I want to create a WebRTC peer connection that receives a media stream but does not send one.
Solved.
Set "OfferToReceiveAudio" and "OfferToReceiveVideo" to "true" in the MediaConstraints, and do not add a local stream.
I am working on WebRTC. Right now I am dealing with an echo problem, so I was thinking about microphone toggling techniques. For example, user A talks while user B's microphone is switched off, and vice versa. Does WebRTC have this built in? If not, how can I achieve it? Any help will be really appreciated.
WebRTC doesn't have this built in, because it is really more related to the media presentation layer. A good strategy, employed by SimpleWebRTC via the attachMediaStream module, is to simply attach the local participant's media with the video (or audio, for audio-only) element muted.
The relevant code, found in the main file of that module, is this:
if (!element) {
element = document.createElement(opts.audio ? 'audio' : 'video');
} else if (element.tagName.toLowerCase() === 'audio') {
opts.audio = true;
}
// Mute the video element so the local participant's audio doesn't play - do this only for the local participant, not the remote participants
if (opts.muted) element.muted = true;
// attach the stream to the element
if (typeof element.srcObject !== 'undefined') {
element.srcObject = stream;
}
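If the goal is to actually stop a participant's microphone from being sent, rather than just not playing it back locally, a different approach (my suggestion, not part of the answer above) is to disable the audio track on the sending side. A minimal sketch, assuming the Android org.webrtc API and an existing AudioTrack named localAudioTrack:
// Disabling the track makes WebRTC send silence; enabling it resumes normal audio.
localAudioTrack.setEnabled(false); // "switch off" this user's microphone while the other talks
localAudioTrack.setEnabled(true);  // switch it back on
How the two sides agree on whose turn it is to talk is application-level signaling and not covered here.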
I have a stream downloading from an Icecast server, and I can grab the information in the headers by doing the following:
URLConnection cn = new URL(mediaUrl).openConnection();
cn.connect();
int pos=1;
String x;
String y;
while (cn.getHeaderField(pos) != null)
{
x=cn.getHeaderFieldKey(pos);
y = cn.getHeaderField(x);
Log.e(":::::",""+x+" : "+y);
pos++;
}
When I do this, all of the headers I receive are shown as:
content-type : audio/mpeg
icy-br : 64
ice-audio-info : ice-samplerate=22050;ice-bitrate=64;ice-channels=2
icy-br : 64
icy-description : RadioStation
icy-genre : Classical, New Age, Ambient
icy-name : RadioStation Example
icy-private : 0
icy-pub : 1
icy-url : http://exampleradio.com
server : Icecast 2.3.2
cache-control : no-cache
However, if I open my stream in mplayer, I get:
ICY Info: StreamTitle='artist - album - trackname'
and each time the song changes, the new track information is sent and appears the same way in mplayer.
On Android, when I attempt to read the ICY info, all I get back is null. Also, how would I go about retrieving the new track information while I am buffering from the stream? Even if I try to read a header I already know exists while buffering, such as:
Log.e(getClass().getName()," "+cn.getHeaderField("icy-br"));
All I get back is null.
I hope this makes sense; I can post more code on request.
I realize this question is old, but for others who are facing this challenge, I am using this project: http://code.google.com/p/streamscraper/ to get track information from an Icecast stream. I'm using it on Android and so far it works as expected.
All you need to do is call setDataSource() and pass the URL as a String, then call prepareAsync(), and with mp.setOnPreparedListener(this) (or similar) you will get notified when the MediaPlayer is done buffering; then all you need to do is call mp.start(). P.S.: Don't forget to mp.stop(), mp.reset() and mp.release() when destroying the application. ;) I'm still thinking of a way to read the ICY info... I must either make my own buffering mechanism and write to a buffer file (initializing the MediaPlayer with a FileDescriptor), or make a separate connection from time to time to check for ICY info tags and close the connection... Any better ideas, anyone?
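One possible better idea, sketched below under assumptions: Shoutcast/Icecast streams send StreamTitle in-band rather than in the HTTP headers, which is why reading the headers alone returns null. If the client sends an Icy-MetaData: 1 request header, the server replies with an icy-metaint header and injects a small metadata block into the stream every icy-metaint bytes. A minimal sketch in plain Java (variable names and the single-pass structure are illustrative only):
HttpURLConnection cn = (HttpURLConnection) new URL(mediaUrl).openConnection();
cn.setRequestProperty("Icy-MetaData", "1"); // ask the server to inject in-band metadata
cn.connect();

int metaInt = cn.getHeaderFieldInt("icy-metaint", -1); // metadata interval in bytes
DataInputStream in = new DataInputStream(cn.getInputStream());

if (metaInt > 0) {
    byte[] audio = new byte[metaInt];
    in.readFully(audio); // audio data; play it or write it to your own buffer

    int lengthByte = in.read(); // metadata length in 16-byte blocks (0 = no update)
    if (lengthByte > 0) {
        byte[] meta = new byte[lengthByte * 16];
        in.readFully(meta);
        // Typically looks like: StreamTitle='artist - album - trackname';
        Log.e("ICY", new String(meta, StandardCharsets.UTF_8).trim());
    }
}
In a real player this repeats: metaInt bytes of audio, then one metadata block, and so on.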