I want to share audio over WebRTC on Android. I tried MediaProjection and video sharing works, but I can't get an audio track out of my AudioRecord. How can I create a WebRTC AudioTrack from an AudioRecord?
AudioPlaybackCaptureConfiguration config = new AudioPlaybackCaptureConfiguration.Builder(sMediaProjection)
        .addMatchingUsage(AudioAttributes.USAGE_MEDIA)
        .addMatchingUsage(AudioAttributes.USAGE_VOICE_COMMUNICATION)
        .addMatchingUsage(AudioAttributes.USAGE_VOICE_COMMUNICATION_SIGNALLING)
        .addMatchingUsage(AudioAttributes.USAGE_GAME)
        .build();
AudioFormat audioFormat = new AudioFormat.Builder()
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
        .setSampleRate(44100)
        .setChannelMask(AudioFormat.CHANNEL_IN_MONO)
        .build();
audioRecord = new AudioRecord.Builder()
        .setAudioFormat(audioFormat)
        .setBufferSizeInBytes(BUFFER_SIZE_IN_BYTES)
        .setAudioPlaybackCaptureConfig(config)
        .build();
audioRecord.startRecording();
//Other code
String audioTrackId = stateProvider.getNextTrackUUID();
AudioSource as = new AudioSource(audioRecord.getAudioSource());
tracks[0] = pcFactory.createAudioTrack(audioTrackId, as); // Not Working
I think you have to check the items below:
https://developer.android.com/reference/android/media/AudioPlaybackCaptureConfiguration
the usage value MUST be AudioAttributes#USAGE_UNKNOWN or AudioAttributes#USAGE_GAME or AudioAttributes#USAGE_MEDIA. All other usages CAN NOT be captured.
the capture policy set by their app (with AudioManager#setAllowedCapturePolicy) or on each player (with AudioAttributes.Builder#setAllowedCapturePolicy) is AudioAttributes#ALLOW_CAPTURE_BY_ALL, whichever is the most strict.
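For the playing app's side, a rough sketch of what opting in looks like (assuming API 29+, and that this code runs in the app whose audio is being captured, not in the capturing app):

// Per-app policy, e.g. in Application#onCreate()
AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
audioManager.setAllowedCapturePolicy(AudioAttributes.ALLOW_CAPTURE_BY_ALL);

// Or a per-player policy, on the AudioAttributes of a single player
AudioAttributes attrs = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_MEDIA) // must be UNKNOWN, GAME or MEDIA to be capturable
        .setAllowedCapturePolicy(AudioAttributes.ALLOW_CAPTURE_BY_ALL)
        .build();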
You can check it with the adb command below.
Run your WebRTC app and enter the following command:
$ adb shell dumpsys audio
...
...
players:
AudioPlaybackConfiguration piid:15 type:android.media.SoundPool u/pid:1000/1619 state:idle attr:AudioAttributes: usage=USAGE_ASSISTANCE_SONIFICATION content=CONTENT_TYPE_SONIFICATION flags=0x800 tags= bundle=null
AudioPlaybackConfiguration piid:23 type:android.media.SoundPool u/pid:10222/2040 state:idle attr:AudioAttributes: usage=USAGE_ASSISTANCE_SONIFICATION content=CONTENT_TYPE_SONIFICATION flags=0x800 tags= bundle=null
...
...
allowed capture policies:
...
...
players: means
These are the audio attributes of the players that are currently active.
You can see which usage your WebRTC app is using by looking for its pid.
flags=0x800: means
This shows which value the app has set with setAllowedCapturePolicy.
Refer to the link below for information about the flags:
https://cs.android.com/android/platform/superproject/+/master:frameworks/base/media/java/android/media/AudioAttributes.java;drc=master;l=1532?q=capturePolicyToFlags&ss=android%2Fplatform%2Fsuperproject&hl=ko
allowed capture policies: means
This shows which apps have called setAllowedCapturePolicy.
the app's allowAudioPlaybackCapture attribute in its manifest MUST be set to true
If the WebRTC app doesn't set this value, you will see the warning log below.
ALOGW("%s: Playback capture is denied for uid %u as the manifest property could not be "
"retrieved from the package manager: %s", __func__, uid, status.toString8().c_str());
Related
I want to take audio input in my Unity application, which I am building for the Android platform. The code I have added to the Start function is as follows:
var audio = GetComponent<AudioSource>();
audio.clip = Microphone.Start("Built-in Microphone", true, 10, 44100);
audio.loop = true;
while (!(Microphone.GetPosition(null) > 0)) { }
audio.Play();
But it is showing the following error:
ArgumentException: Couldn't acquire device ID for device name Built-in Microphone
I'm referring to this post for adding the microphone. How do I resolve this? Also, is there any blog available that covers doing this end to end?
The error message clearly indicates that it can't find a Microphone device named "Built-in Microphone". So you should probably see what devices it can find.
Try running the following code in the Start method and see what output you get:
foreach (var device in Microphone.devices)
{
Debug.Log("Name: " + device);
}
Once you have a list of the devices, then replace "Built-in Microphone" with the name of your desired device. If "Built-in Microphone" is in the list or you get the same issue with a different device, then you're probably dealing with a permissions issue.
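As a side note, passing null as the device name to Microphone.Start() should make Unity fall back to the default microphone, which avoids hard-coding a device name at all.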
I'm trying to fix an existing project which casts video and audio to the web.
I need to create a local socket:
socketId = "my.application.media." + suffix + "-" + new
Random().nextInt();
localServerSocket = new LocalServerSocket(socketId);
receiver = new LocalSocket();
receiver.connect(new LocalSocketAddress(socketId));
receiver.setReceiveBufferSize(SOCKET_BUFFER_SIZE);
receiver.setSendBufferSize(SOCKET_BUFFER_SIZE);
sender = localServerSocket.accept();
sender.setReceiveBufferSize(SOCKET_BUFFER_SIZE);
sender.setSendBufferSize(SOCKET_BUFFER_SIZE);
and then create the media recorder:
mMediaRecorder = new MediaRecorder();
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.RAW_AMR);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
mMediaRecorder.setAudioEncodingBitRate((int) 7.95 * 1024);
mMediaRecorder.setAudioSamplingRate(8000);
mMediaRecorder.setAudioChannels(1);
mMediaRecorder.setOutputFile(sender.getFileDescriptor());
mMediaRecorder.prepare();
But I'm getting a java.lang.IllegalStateException after calling start() on mMediaRecorder. What am I missing? When I'm not using sender.getFileDescriptor() everything works correctly, so that is probably the problem. I know there are many libraries that provide this functionality, but I prefer to fix this one. Casting only the video works correctly; the only problem is with the audio. Thanks a lot for the help.
Order of executed methods:
I added logs to check the order of the methods and the thread they run on:
creating sockets: Socket opening thread
creating receiver: Socket opening thread
creating sender: Socket opening thread
setting audio source: Socket opening thread
setting properties: Socket opening thread
creating file descriptor: Socket opening thread
preparing media recorder: Socket opening thread
starting media recorder: Socket opening thread
I found that I'm also receiving errors:
2019-02-13 18:15:49.701 6176-13833/? E/StagefrightRecorder: Output file descriptor is invalid
2019-02-13 18:15:49.701 7851-9780/my.application E/MediaRecorder: start failed: -38
As stated here, this java.lang.IllegalStateException error occurs when
a method has been invoked at an illegal or inappropriate time.
So with that in mind, and with this article on how to use sockets in mind, you should put your socket-related stuff inside an AsyncTask (a separate thread) and use try/catch.
See the AsyncTask documentation and the Socket documentation if you want to expand your knowledge.
It also seems you are trying to use getFileDescriptor() before sender has data to pull out (or after the socket has been closed).
Try extracting the descriptor at an earlier point in the code into a variable and then use that variable instead.
Another possibility: the MediaRecorder documentation says
You must specify a file descriptor that represents an actual file
so be sure that what sender.getFileDescriptor() returns is the kind of file descriptor that mMediaRecorder.setOutputFile() can accept.
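If the LocalSocket descriptor keeps being rejected as invalid, one alternative worth sketching (this assumes the same mMediaRecorder setup as above; variable names are only illustrative) is to hand MediaRecorder the write end of a ParcelFileDescriptor pipe and read the encoded stream back from the read end:

// Sketch: replace the LocalServerSocket/LocalSocket pair with a ParcelFileDescriptor pipe.
// createPipe() throws IOException, so handle or declare it.
ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
ParcelFileDescriptor readSide = pipe[0];   // the streaming code reads the encoded audio here
ParcelFileDescriptor writeSide = pipe[1];  // MediaRecorder writes the encoded audio here

mMediaRecorder.setOutputFile(writeSide.getFileDescriptor());
mMediaRecorder.prepare();
mMediaRecorder.start();

// Read the AMR data on a background thread and push it to the network.
InputStream encodedAudio = new ParcelFileDescriptor.AutoCloseInputStream(readSide);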
I use libstreaming to create an RTSP server on an Android phone. Then I use another phone to connect to the server and play the live stream. I want the server to use its camera and microphone to record video and audio that then plays on the client.
After connecting, the video can play properly, but there is no sound.
The following is part of my RTSP server's code:
mSession = SessionBuilder.getInstance()
.setSurfaceView(mSurfaceView)
.setPreviewOrientation(90)
.setContext(getApplicationContext())
.setAudioEncoder(SessionBuilder.AUDIO_AAC)
//.setAudioQuality(new AudioQuality(16000, 32000))
.setAudioQuality(new AudioQuality(8000, 16000))
.setVideoEncoder(SessionBuilder.VIDEO_H264)
//.setVideoQuality(new VideoQuality(320, 240, 20, 500000))
.build();
mSession.startPreview(); //camera preview on phone surface
mSession.start();
I searched for this problem; some people said I should modify the destination ports in SessionBuilder.java.
I tried to modify it as follows, but it still did not work:
if (session.getAudioTrack() != null) {
Log.e("SessionBuilder", "Audio track != null");
AudioStream audio = session.getAudioTrack();
audio.setAudioQuality(mAudioQuality);
audio.setDestinationPorts(5008);
}
Does somebody know the reason for this problem?
By the way, I used VLC player on another phone as the client.
I use the following line to connect to the server
rtsp://MY_IP:1234?h264=200-20-320-240
Thanks
I traced the source code and found that the server never received a request for the audio stream; it only received a request for the video stream.
After the SETUP of the connection in RtspServer.java, the received trackID was 1.
(trackID=0 means AudioStream && trackID=1 means VideoStream)
public Response processRequest(Request request) throws IllegalStateException, IOException {
....
else if (request.method.equalsIgnoreCase("SETUP")) {
....
boolean streaming = isStreaming();
Log.e(TAG, "trackId: " + trackId);
// received trackID=1 which represent video stream
mSession.syncStart(trackId);
....
}
....
}
I solved this problem by using a different URL:
rtsp://MY_IP:1234?trackID=0
Thanks
I had the same problem. Setting the streaming method worked for me.
mSession.getVideoTrack().setStreamingMethod(MediaStream.MODE_MEDIACODEC_API_2);
I am working with WebRTC. Right now I am dealing with an echo challenge, so I was thinking about microphone toggling techniques. For example, while user A is talking, switch off the microphone of user B, and vice versa. Does WebRTC have this built in? If not, how can I achieve it? Any help will be really appreciated.
WebRTC doesn't have this built in, because this is really more related to the media presentation layer. A good strategy, employed by SimpleWebRTC via the attachMediaStream module, is to simply attach the local participant's media with the video (or audio, for audio-only) element muted.
The relevant code, found in the main file of that module here, is this:
if (!element) {
element = document.createElement(opts.audio ? 'audio' : 'video');
} else if (element.tagName.toLowerCase() === 'audio') {
opts.audio = true;
}
// Mute the video element so the local participant's audio doesn't play - do this only for the local participant, not the remote participants
if (opts.muted) element.muted = true;
// attach the stream to the element
if (typeof element.srcObject !== 'undefined') {
element.srcObject = stream;
}
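On the Android (org.webrtc) side, the same toggling idea can also be applied at the track level. A minimal sketch, assuming you kept a reference to the local audio track returned by PeerConnectionFactory#createAudioTrack (the name localAudioTrack is only illustrative):

// Disabling the local audio track stops sending this user's microphone to the remote peer.
localAudioTrack.setEnabled(false); // "mute" user B while user A is talking
// ...
localAudioTrack.setEnabled(true);  // re-enable when it is this user's turn to talk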
I have a stream downloading from an Icecast server, and I can grab the information in the headers by doing the following:
URLConnection cn = new URL(mediaUrl).openConnection();
cn.connect();
int pos=1;
String x;
String y;
while (cn.getHeaderField(pos) != null)
{
x=cn.getHeaderFieldKey(pos);
y = cn.getHeaderField(x);
Log.e(":::::",""+x+" : "+y);
pos++;
}
When I do this, all of the headers I receive are shown as:
content-type : audio/mpeg
icy-br : 64
ice-audio-info : ice-samplerate=22050;ice-bitrate=64;ice-channels=2
icy-br : 64
icy-description : RadioStation
icy-genre : Classical, New Age, Ambient
icy-name : RadioStation Example
icy-private : 0
icy-pub : 1
icy-url : http://exampleradio.com
server : Icecast 2.3.2
cache-control : no-cache
However, if I open my stream in mplayer I get:
ICY Info: StreamTitle='artist - album - trackname'
and each time the song changes, the new track information is sent and appears the same way in mplayer.
In Android, when I attempt to read the ICY info, all I get back is null. Also, how would I go about retrieving the new information from the headers while I am buffering the stream? Even if I try to read a header I already know exists while buffering, such as:
Log.e(getClass().getName()," "+cn.getHeaderField("icy-br"));
all I get back is null.
I hope this makes sense, I can post more code on request.
I realize this question is old, but for others who are facing this challenge: I am using this project: http://code.google.com/p/streamscraper/ to get track information from an Icecast stream. I'm using it on Android and so far it works as expected.
All you need is to call setDataSource() and pass the URL as a String, then call prepareAsync(). With mp.setOnPreparedListener(this) (or similar) you will be notified when the MediaPlayer is done buffering, and then all you need to do is call mp.start(). P.S.: Don't forget to call mp.stop(), mp.reset() and mp.release() when destroying the application. ;) I'm still thinking of a way to read the ICY info... I must either build my own buffering mechanism and write a buffer file (initializing the MediaPlayer with a FileDescriptor), or make a separate connection from time to time to check for ICY info tags and then close the connection... Any better ideas, anyone?
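A minimal sketch of that flow (assuming streamUrl holds the Icecast stream URL as a String; error handling omitted):

MediaPlayer mp = new MediaPlayer();
mp.setAudioStreamType(AudioManager.STREAM_MUSIC); // deprecated on newer APIs, kept to match older examples
mp.setDataSource(streamUrl);   // throws IOException
mp.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer player) {
        player.start();        // called once the MediaPlayer is done preparing/buffering
    }
});
mp.prepareAsync();             // prepare in the background so the UI thread is not blocked

// When destroying the application:
// mp.stop(); mp.reset(); mp.release();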