Device in question:
DJI RC Pro Enterprise used for controlling enterprise DJI drones
The OS is Android 10, most likely stock AOSP, as there are no Google Play services on the device.
When establishing a WebRTC session as a master from this Android 10 device, the audio that should be captured via the microphone does not get through to the viewer side. When configuring the transceiver on the viewer side, I do enforce both send and receive for the audio part via:
viewer.peerConnection.addTransceiver('audio', {direction: 'sendrecv'});
Looking at the chrome://webrtc-internals tab, the inbound-rtp stats for audio clearly show that the audio channel is open and that a small amount of data is coming through. But we can also clearly spot that:
Bytes received is only around 3 kbit/s, whereas another Android device, which runs a newer Android version and whose microphone audio actually comes through on the viewer side, hits around 30 kbit/s.
The audio level stays at 0 regardless of how loud I speak into the microphone.
On top of that, concealedSamples/totalSamplesReceived is 1 and totalAudioEnergy is 0, which suggests the decoder is emitting pure concealment/silence, i.e. the sender appears to be encoding silence rather than real microphone input.
Here are the inbound-rtp stats in text form:
inbound-rtp (kind=audio, mid=4, ssrc=3185575416, [codec]=opus (111, minptime=10;useinbandfec=1), id=IT31A3185575416)
Statistics IT31A3185575416
timestamp 2/16/2023, 2:51:44 PM
ssrc 3185575416
kind audio
trackId DEPRECATED_TI8
transportId T31
codecId CIT31_111_minptime=10;useinbandfec=1
[codec] opus (111, minptime=10;useinbandfec=1)
mediaType audio
jitter 0.063
packetsLost 0
trackIdentifier efe06737-ed24-448c-bf32-d002cef9171b
mid 4
packetsReceived 170
[packetsReceived/s] 13.550688631363338
packetsDiscarded 0
fecPacketsReceived 0
fecPacketsDiscarded 0
bytesReceived 5524
[bytesReceived_in_bits/s] 3523.179044154468
headerBytesReceived 4760
[headerBytesReceived_in_bits/s] 3035.354253425388
lastPacketReceivedTimestamp 1676555504833
[lastPacketReceivedTimestamp] 2/16/2023, 2:51:44 PM
jitterBufferDelay 77020.8
[jitterBufferDelay/jitterBufferEmittedCount_in_ms] 0
jitterBufferTargetDelay 270316.8
jitterBufferMinimumDelay 270240
jitterBufferEmittedCount 153600
totalSamplesReceived 665760
[totalSamplesReceived/s] 48782.47907290802
concealedSamples 505242
[concealedSamples/s] 48782.47907290802
[concealedSamples/totalSamplesReceived] 1
silentConcealedSamples 482592
[silentConcealedSamples/s] 48782.47907290802
concealmentEvents 14
insertedSamplesForDeceleration 6960
[insertedSamplesForDeceleration/s] 0
removedSamplesForAcceleration 0
[removedSamplesForAcceleration/s] 0
audioLevel 0
totalAudioEnergy 0
[Audio_Level_in_RMS] 0
totalSamplesDuration 13.869999999999749
jitterBufferFlushes* 2
delayedPacketOutageSamples* 452322
relativePacketArrivalDelay* 718.31
interruptionCount* 10
totalInterruptionDuration* 9.197
remoteId ROA3185575416
We have confirmed that the microphone works on this device, both with a third-party microphone test app and by simply doing a screen recording. We've also made sure on the master side that the correct audio mode is set and that the microphone is not muted, via:
audioManager.requestAudioFocus(null, AudioManager.STREAM_VOICE_CALL,
        AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);
// Start by setting MODE_IN_COMMUNICATION as default audio mode. It is
// required to be in this mode when playout and/or recording starts for
// best possible VoIP performance.
audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
audioManager.setMicrophoneMute(false);
Also, the following method, which checks whether the microphone is available just before streaming starts, returns true:
public static boolean getMicrophoneAvailable(Context context) {
    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.DEFAULT);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
    recorder.setOutputFile(new File(context.getCacheDir(), "MediaUtil#micAvailTestFile").getAbsolutePath());
    boolean available = true;
    try {
        // Actually starting a capture is the probe: prepare()/start() throw
        // if the microphone is busy or unavailable.
        recorder.prepare();
        recorder.start();
    } catch (Exception exception) {
        available = false;
    }
    // release() also stops the probe recording if start() succeeded.
    recorder.release();
    return available;
}
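One caveat with this probe: it uses AudioSource.MIC, whereas, as far as I know, libwebrtc's Android audio device module captures from VOICE_COMMUNICATION by default. An untested sketch of a variant that probes that source and checks whether the captured samples are all zero:

public static boolean voiceCommunicationDeliversAudio() {
    int rate = 16000;
    int minBuf = AudioRecord.getMinBufferSize(rate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.VOICE_COMMUNICATION,
            rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf);
    if (record.getState() != AudioRecord.STATE_INITIALIZED) {
        return false;
    }
    record.startRecording();
    short[] buffer = new short[minBuf / 2];
    boolean nonZero = false;
    for (int i = 0; i < 10 && !nonZero; i++) {  // inspect ~10 buffers
        int n = record.read(buffer, 0, buffer.length);
        for (int j = 0; j < n; j++) {
            if (buffer[j] != 0) {
                nonZero = true;
                break;
            }
        }
    }
    record.stop();
    record.release();
    return nonZero;  // false means the source opens but delivers pure silence
}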
The questions are therefore the following:
Does running stock Android without Google Play services somehow impact microphone usage in a WebRTC streaming session? I'm basing this assumption on the fact that Google is the main developer behind WebRTC, and perhaps they've built some features around their Google Play services library.
Given that the microphone is clearly available, could I manually start a recording and send the audio bytes over the already-open WebRTC audio channel?
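Regarding the second question, one idea I'm considering is overriding the capture source that libwebrtc uses on the master side (VOICE_COMMUNICATION by default, as mentioned above). An untested sketch, assuming the stock org.webrtc Android library and that PeerConnectionFactory.initialize() has already been called:

AudioDeviceModule adm = JavaAudioDeviceModule.builder(appContext)
        // Fall back to the plain MIC source in case the DJI firmware
        // mishandles VOICE_COMMUNICATION.
        .setAudioSource(MediaRecorder.AudioSource.MIC)
        // Hardware AEC/NS are a common cause of silent capture on exotic hardware.
        .setUseHardwareAcousticEchoCanceler(false)
        .setUseHardwareNoiseSuppressor(false)
        .createAudioDeviceModule();

PeerConnectionFactory factory = PeerConnectionFactory.builder()
        .setAudioDeviceModule(adm)
        .createPeerConnectionFactory();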
Related
I am trying to record both uplink and downlink voice on Android. Regarding the law: I am already aware of the legal situation, so please do not post comments about it.
The code below works fine, except that when I mute the microphone, it won't record the downlink voice.
I am using Android 8.1. I've tried a third-party app called ACR on the same device and it works fine: while I am muted, it still records the downlink voice.
val audioManager = applicationContext.getSystemService(Context.AUDIO_SERVICE) as AudioManager
val maximumVolume = audioManager.getStreamMaxVolume(AudioManager.STREAM_VOICE_CALL)
audioManager.setStreamVolume(AudioManager.STREAM_VOICE_CALL, maximumVolume, 0)

val audioSource = MediaRecorder.AudioSource.MIC
val mediaRecorder = MediaRecorder()
mediaRecorder.apply {
    setAudioSource(audioSource)
    setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
    setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
    setAudioChannels(audioChannels)
    setAudioSamplingRate(audioSamplingRate)
    setAudioEncodingBitRate(audioEncodingBitRate)
    setOutputFile(path)
    prepare()
    start()
}
This is not an issue. You set the MediaRecorder to use MIC as input, so if you mute the microphone it's obvious that the input signal is lost/muted. Since you used the word "downlink", I expected to see a different input source such as VOICE_CALL or VOICE_DOWNLINK instead of MIC. Trying to record a voice call using the MIC is wrong in my opinion because:
(1) you have to set the speaker to max volume and redirect the voice call through it;
(2) while recording a voice call from the MIC, the caller hears everything that happens around your device and everything you say to other people;
(3) this method records a lot of noise and echo.
The right way is to record from VOICE_CALL, but most new devices (running newer Android versions) prevent recording from this source and allow it only for system apps. ACR uses a workaround that calls hidden API methods, but that could stop working at any time due to Android updates.
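For illustration, the change relative to your snippet would only be the source constant (a minimal Java sketch; path stands for the output file, and on Android 9+ this source is typically blocked for non-system apps):

MediaRecorder recorder = new MediaRecorder();
// Capture the call stream directly instead of the handset microphone.
recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_CALL);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setOutputFile(path);
recorder.prepare();
recorder.start();
// ... later: recorder.stop(); recorder.release();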
The AudioRecord class allows recording of phone calls with one of the following options as the recording source:
VOICE_UPLINK: The audio transmitted from your end to the other party; in other words, what you speak into the microphone.
VOICE_DOWNLINK: The audio transmitted from the other party to your end.
VOICE_CALL: VOICE_UPLINK + VOICE_DOWNLINK.
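For reference, setting up a recorder for one of these sources looks roughly like this (a minimal sketch; on recent Android versions these call sources are restricted to privileged apps):

int sampleRate = 8000;
int minBuf = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
// Swap VOICE_CALL for VOICE_UPLINK or VOICE_DOWNLINK to capture one direction.
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.VOICE_CALL,
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, minBuf);
record.startRecording();
short[] buffer = new short[minBuf / 2];
int read = record.read(buffer, 0, buffer.length);  // pull PCM frames in a loop
record.stop();
record.release();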
I'd like to build an app that records both VOICE_UPLINK and VOICE_DOWNLINK and identifies the source of the voice.
When using VOICE_CALL as the AudioSource option, the uplink and downlink streams are bundled together into the received data buffer, which makes it hard to identify the source of the voice.
Using two AudioRecords with VOICE_UPLINK and VOICE_DOWNLINK does not work: the second AudioRecord fails to start because the first one locks the recording stream.
Is there any creative way to bypass the locking problem presented in case (2), thus enabling simultaneous recording of the VOICE_UPLINK and VOICE_DOWNLINK streams with easy identification of the source?
I need to stream audio from an external Bluetooth device and video from the camera to a Wowza server, so that I can then access the live stream through a web app.
I've been able to successfully send other streams to Wowza using the GoCoder library, but as far as I can tell, this library only sends streams that come from the device's camera and mic.
Does anyone have a good suggestion for implementing this?
In the GoCoder Android SDK, the setAudioSource method of WZAudioSource allows you to specify an audio input source other than the default. Here's the relevant API doc for this method:
public void setAudioSource(int audioSource)
Sets the actively configured input device for capturing audio.
Parameters:
audioSource - An identifier for the active audio source. Possible values are those listed at MediaRecorder.AudioSource. The default value is MediaRecorder.AudioSource.CAMCORDER. Note that setting this while audio is actively being captured will have no effect until a new capture session is started. Setting this to an invalid value will cause an error to occur at session begin.
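In practice that would be a single call before the broadcast starts (a sketch; goCoderAudioSource stands in for the WZAudioSource instance from your broadcast config):

// Switch capture away from the default CAMCORDER source; VOICE_COMMUNICATION
// follows the active voice routing (including Bluetooth SCO) on many devices.
goCoderAudioSource.setAudioSource(MediaRecorder.AudioSource.VOICE_COMMUNICATION);
// Per the docs above, this takes effect when the next capture session starts.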
I have a college assignment to build an Android app that communicates with Ubuntu (or any other Linux distribution) and streams audio via microphone and speakers on both the PC and the phone. Switching the direction of communication should be done on Android, and the script listening on the Bluetooth port on the PC should be written in Python or some other lightweight language. It does not have to be full-duplex; single-duplex is enough.
Is the answer in the BluetoothA2dp Android profile, or is there something else?
I'm familiar with making simple Android apps.
Thanks a lot!
Not sure if you still need the answer, but I am working on something similar.
Basically, I'm working with Python on Windows to record streaming audio from the laptop's microphone, process the sound for ANC (active noise cancellation), pass it through a band-pass filter, and then output the audio stream to a Bluetooth device.
I would like to ultimately port this to a smartphone, but for now I'm prototyping in Python as that's a lot easier.
While I'm still at an early stage of the project, here are two pieces that may be helpful:
1) Stream audio from the microphone to the speakers using sounddevice
Record external audio and play it back.
Refer to the sounddevice module installation details here:
http://python-sounddevice.readthedocs.org/en/0.3.1/
import sounddevice as sd

fs = 44100    # sample rate in Hz (assumed)
duration = 5  # seconds

myrecording = sd.rec(int(duration * fs), samplerate=fs, channels=2, dtype='float64')
print("Recording Audio")
sd.wait()
print("Audio recording complete, play audio")
sd.play(myrecording, fs)
sd.wait()
print("Play audio complete")
2) Communicate over Bluetooth
Refer to the details here:
https://people.csail.mit.edu/albert/bluez-intro/c212.html
import bluetooth

target_name = "My Phone"
target_address = None

nearby_devices = bluetooth.discover_devices()

for bdaddr in nearby_devices:
    if target_name == bluetooth.lookup_name(bdaddr):
        target_address = bdaddr
        break

if target_address is not None:
    print("found target bluetooth device with address", target_address)
else:
    print("could not find target bluetooth device nearby")
I know I am simply quoting examples from these sites; you can refer to them to gain more insight.
Once I have a working prototype, I will try to post it here.
I am connecting a mobile device running Android OS 4.1 to a Bluetooth device (device class = 1792), using Bluetooth SCO to route audio (voice). I've set up a BluetoothSocket using createRfcommSocketToServiceRecord successfully.
My settings:
Using AudioRecord and AudioTrack with frequency = 8000, MediaRecorder.AudioSource.MIC as the source for the AudioRecord, AudioManager.STREAM_VOICE_CALL for the AudioTrack, and trying both MODE_IN_COMMUNICATION and MODE_IN_CALL for the AudioManager mode, all without success: I don't get audio on my device.
My questions:
Should I use MODE_IN_COMMUNICATION or MODE_IN_CALL?
Do I need to switch to MODE_NORMAL or another mode in order to play audio on the device?
Can you suggest a code flow to make SCO audio play on a device?
Can you point out some working code to review?
Notes:
The "Media audio" profile (A2DP) is disabled on the device - only "Call audio" profile (HFP) is enabled.
Will gladly share some code, yet given existing SO Q&As it will probably look the same.
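A sketch of that flow (the RFCOMM setup and the AudioRecord capture loop are elided):

AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
am.setMode(AudioManager.MODE_IN_COMMUNICATION);  // also tried MODE_IN_CALL
am.startBluetoothSco();                          // bring up the SCO link (asynchronous;
                                                 // listen for ACTION_SCO_AUDIO_STATE_UPDATED)
am.setBluetoothScoOn(true);                      // route audio over SCO once connected

int rate = 8000;
int minBuf = AudioTrack.getMinBufferSize(rate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_VOICE_CALL, rate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();  // then write() the PCM data captured from the MIC AudioRecord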
Regards.