I'm building a React Native application and I'm trying to meter the current sound level (in decibels).
Libraries in use: react-native-audio and react-native-sound.
Is anybody familiar with this feature?
Thank you.
With the react-native-audio library, you can get the currentMetering value on iOS only.
For currentMetering on Android, you need to customize the native module. I have updated it; use the following in your package.json file instead of the stock dependency, and you will get the currentMetering value on Android as well as iOS.
"react-native-audio": "git+https://github.com/Harsh2402/react-native-audio.git",
You can use react-native-audio's currentMetering value to get the sound level in real time.
First, you will have to initialise your recorder (which I will assume you've done). I use prepareRecordingAtPath in a similar way to the below:
AudioRecorder.prepareRecordingAtPath(audioPath, {
  SampleRate: 22050,
  Channels: 1,
  AudioQuality: "Low",
  AudioEncoding: "aac",
  MeteringEnabled: true
});
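As an aside, if you have not set up audioPath or microphone permission yet, a minimal sketch might look like the following (AudioUtils.DocumentDirectoryPath and requestAuthorization ship with react-native-audio; the file name here is just an example):

import { AudioRecorder, AudioUtils } from 'react-native-audio';

// Example path only; any writable location will do.
const audioPath = AudioUtils.DocumentDirectoryPath + '/metering.aac';

// Microphone permission must be granted before preparing the recorder.
AudioRecorder.requestAuthorization().then(granted => {
  if (!granted) console.warn('microphone permission denied');
});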
Then call AudioRecorder.startRecording(). (Note that you also have access to pause and stop methods.)
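For instance, a minimal start/stop flow might look like this (in the versions I have used, the pause and stop counterparts are named pauseRecording and stopRecording):

// Start capturing; metering values will begin flowing to onProgress.
AudioRecorder.startRecording();

// Later, pause or stop as needed; metering stops with the recording.
AudioRecorder.pauseRecording();
AudioRecorder.stopRecording();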
Now, to handle the audio level, you will have to read the data passed to the onProgress callback. From what I remember, it includes a currentMetering value you can access. Note that this handler fires every time a new metering reading comes in, like so:
AudioRecorder.onProgress = data => {
  let decibels = Math.floor(data.currentMetering);
  // DO STUFF
};
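One caveat from my own use: on iOS, currentMetering is a negative dBFS value (0 is full scale, with a floor of about -160, going by AVAudioRecorder's documented range; I am assuming the Android fork reports something comparable). So to drive a 0-to-1 UI meter you would normalise it, roughly like this:

AudioRecorder.onProgress = data => {
  // Clamp to the assumed [-160, 0] dBFS range, then map it to [0, 1].
  const db = Math.max(-160, Math.min(0, data.currentMetering));
  const level = 1 + db / 160; // 0 = silence, 1 = full scale
  // Update your meter UI with `level` here.
};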
Hope this helps.
I'm using this library:
implementation 'com.mesibo.api:webrtc:1.0.5'
The problem is that when I end the first videoconference and then call again, the opponent's video is not displayed in the surfaceViewRenderer (and their audio is absent as well), even though the onAddStream callback inside my PeerConnection.Observer fires successfully when the other party accepts my new call.
Could you please help me with two things:
Why is my peer's MediaStream not rendered in the surfaceViewRenderer?
What steps are required on both peers' sides when ending a call (besides peerConnection.close()), e.g. what else to clear/close/dispose, in order to make everything ready for a new video call?
Thank you in advance!
I'm building a mobile app in Flutter which pipes the mic audio (mic_stream lib) to a WebSocket. I'm really struggling to close down the stream pipeline when I'm done with it. I'm getting various "Bad State" exceptions, such as Cannot add event while adding a stream. The particulars depend on how I set up the pipeline, but the root cause seems to be that the returned addStream future never completes. Any ideas what would cause that?
As said above, the source stream comes from the mic_stream lib, which pulls from native via Flutter's EventChannel.receiveBroadcastStream. The docs for this method say its returned stream will only close down when there are no more listeners. If I try closing my WebSocket I get a similar error for the same reason (the WebSocket's internal state is bad because addStream never completes). I've tried wrapping the mic stream in a StreamController and closing that, but I get the error mentioned above.
It's starting to feel like a bug. Maybe EventChannel's stream is special? Or is it related to it being a "broadcast" stream?
Feeling stuck. Any help appreciated... thanks.
Flutter makes this a little confusing by returning a stream from EventChannel that you can't really use in the normal pipeline-chaining way if you ever need to close it. Perhaps they should have done internally what I'm about to show as the workaround.
First, for clarity: when you use addStream on a StreamController (a StreamConsumer, rather), it blocks you from "manual" control via the add() method, and also from close(), until that stream completes. This makes sense, if you think about it, since the source stream should determine when it closes. That's why addStream() returns a Future: so you know when you can resume using those methods, or add another stream. Doing either beforehand triggers the Bad State errors mentioned above.
From the docs for EventChannel.receiveBroadcastStream():
Stream activation happens only when stream listener count changes from 0 to 1. Stream deactivation happens only when stream listener count changes from 1 to 0.
So we need to decide when the stream is done, and to do that we need to control its subscription ourselves rather than bury it in a pipeline or in a StreamController's private internals via addStream(). So instead we'll listen to it directly, capturing the subscription so we can cancel it when we're done, and proxy the data into a StreamController (or pipeline) manually via add():
import 'dart:async';
import 'dart:typed_data';

import 'package:mic_stream/mic_stream.dart';

const int AUDIO_SAMPLE_RATE = 16000; // the author's constant; the value here is assumed

Future<void> main() async {
  Stream<Uint8List> micStream = await MicStream.microphone(
      sampleRate: AUDIO_SAMPLE_RATE,
      channelConfig: ChannelConfig.CHANNEL_IN_MONO,
      audioFormat: AudioFormat.ENCODING_PCM_16BIT);
      // audioSource: AudioSource.MIC); // iOS only supports default at the mo'

  // Create the controller first, so the mic listener below can feed it safely.
  final s = StreamController<Uint8List>();

  // We need to control the listener subscription ourselves
  // so we can end the stream as per the docs' instructions.
  final micListener = micStream.listen((event) {
    print('emitting...');
    // Feed the StreamController manually.
    s.add(event);
  });

  // Let the controller's close() trigger the listener's cancel().
  s.onCancel = () {
    print('onCancel');
    micListener.cancel();
  };

  s.done.whenComplete(() {
    print('done');
  });

  // Further consumers use the _controller's_ stream,
  // _not_ the micStream above.
  s.stream.listen((event) => print('listening...'));

  // Now we can close the StreamController when we are done.
  Future.delayed(Duration(seconds: 3), () {
    s.close();
  });
}
I am trying to create a custom FollowMe mission by sending a vehicle's GPS data in Android Studio. I can send the vehicle coordinates, but updateFollowingTarget gives a timeout error. I'm using a Mavic 2 Zoom and DJI SDK v1.14. Did someone manage to fix this issue?
Thanks in advance.
It's a bug; it always returns a timeout. Just ignore the error and it will work. Note that the speed is limited to about 15 km/h, so don't expect too much from it.
Edited (in response to: "Do you know another function that I can use to follow a vehicle's GPS signal?"):
Yes, though it involves more programming. You have to use virtual stick to control the drone; it is the only way to control the drone programmatically.
I have done it here; the drone follows a tracker app running on a phone on my head:
https://www.youtube.com/watch?v=i3axYfIOHTY
I'm working on a Python API for DJI. In that framework, the top-level code looks like the below; the virtual stick calls are inside move_towards():
import copy  # for deepcopy below; api, Changeable and current_altitude come from the framework

while True:
    tracker_location = api.tracker.get_location()
    drone_target_location = copy.deepcopy(tracker_location).move_to(
        Changeable.radius.get_value(), Changeable.bearing.get_value())
    drone_location = api.drone.get_location()
    course_to_tracker = drone_location.get_course(tracker_location)
    heading = api.drone.smooth_heading(course_to_tracker)
    drone_target_distance, drone_speed, course = api.drone.move_towards(
        drone_target_location,
        height=current_altitude,
        heading=heading,
        lowest_altitude=Changeable.lowest_altitude.get_value(),
        deadzone=[Changeable.dead_zone0.get_value(), Changeable.dead_zone1.get_value()],
        k_distance=[float(Changeable.k_dist0.get_value()), float(Changeable.k_dist1.get_value())],
        speed_deadzone=Changeable.speed_deadzone.get_value())
I'm using react-native-sound to play back looped audio files (specifically MP3 format). The problem is that there is a gap between the loops.
Here is my code example:
loadSound = (s) => {
  let sound = new Sound(s, Sound.MAIN_BUNDLE, (error) => {
    if (error) {
      console.log('failed to load the sound', error);
      return;
    } else {
      sound.setNumberOfLoops(-1);
    }
  });
  return sound;
};

let sound = loadSound('campfire.mp3');
sound.play();
Is there any workaround to make the loops sound smooth?
Actually, this issue is open on GitHub, but there is no solution yet...
There are gaps between loops because of a bug in the Android MediaPlayer (which is used under the hood).
To fix the problem, I've created the react-native-audio-exoplayer module, which has pretty much the same functionality as react-native-sound but is based on Android's ExoPlayer. The API is a bit different, but more robust and modern (have a look at the docs).
Later, if I have time, I'll open a pull request to introduce ExoPlayer into react-native-sound. But for now, feel free to use react-native-audio-exoplayer as a good workaround (at the moment it is implemented only for Android).
Using the fork of react-native-sound below fixed the gap issue for me.
https://github.com/Menardi/react-native-sound-gapless.git
From the README of the git repo:
Simply setting a track to loop on Android does not make it gapless, instead leaving a short silence before restarting the track. However, Android does actually support gapless playback between different tracks. So, this fork essentially loads the same track twice, and gets Android to handle the gapless playback for us. The downside is that you will use more memory than before, because the track is loaded twice. For most cases, this shouldn't matter, but you can profile your app in Android Studio if you are concerned.
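To try it, you can point your package.json at the fork, in the same way as the fork earlier in this thread (a sketch; check the repo's README for the exact install instructions):

"react-native-sound": "git+https://github.com/Menardi/react-native-sound-gapless.git",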
I had the same issue with looping a sound. What I did to make it work was to call the same play-sound function again once playback completed. I hope this helps you.
function playBackground() {
  bgSound = new Sound(bgMusic, (error) => {
    if (error) {
      alert('error' + error.message);
      return;
    }
    bgSound.play((success) => {
      if (success) {
        // Play completed; start the same sound again from the top.
        playBackground();
      }
      // Release the old instance; playBackground() has already created a new one.
      bgSound.release();
    });
  });
}
We have created an app, and for some reason any sound played through Howler that is set to loop has a delay of 30 seconds or so before it actually begins when played on an Android device. It's as if the entire sound needs to be loaded prior to playing. The sound itself is stored locally on the device, and we are using .ogg files. This hasn't been an issue before; it only appeared after we updated Crosswalk to version 23+ (2.3.0).
Has anybody else come across this, or does anyone have a potential fix?
OK, so I found out the issue was to do with Howler, not Crosswalk. Essentially, when setting up a new Howl we need to pass the parameter html5: true.
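For a plain loop, that looks something like the sketch below (src, html5, loop and volume are standard Howler options; the file path is just an example):

const bgSound = new Howl({
  src: ['assets/audio/background-loop.ogg'], // example path
  html5: true, // stream via an HTML5 Audio tag instead of buffering the whole file
  loop: true,
  volume: 0.5
});
bgSound.play();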
Alternatively, this timer-based loop is what worked for me:
let gasLooper;
let gasSound = new Howl({
  src: require('./assets/audio/Gas-loop.mp3'),
  preload: true,
  autoplay: true,
  volume: 0.5,
  onplay: () => {
    // Queue the next play; the 450 ms delay presumably matches the clip's loop point.
    gasLooper = setTimeout(() => {
      gasSound.play();
    }, 450);
  },
  onstop: () => {
    clearTimeout(gasLooper);
  }
});