Good morning, everyone. I'm using the speech_to_text library to access the microphone and do voice recognition. I added a Timer to keep the microphone active for longer after the person stops speaking, but it only works on iOS; it seems Android already has a native timeout of its own. Does anyone know what I can do? Thank you!
@action
onPressedMic({bool continuous = false}) {
  this.initSpeech();
  if (this.hasSpeech) {
    try {
      stt.listen(
        localeId: LocaleUtils.getPtBR().localeId,
        onSoundLevelChange: _sttOnSoundLevelChange,
        onResult: _sttResultListener,
        cancelOnError: true,
      );
      // displays the wave indicating the audio capture
      status = StatusFooter.capturing_speech;
      if (Platform.isIOS) _startTimerListen();
      if (_startAudioCapture != null) _startAudioCapture();
    } catch (e) {
      status = StatusFooter.open_speech;
      print("Pressed Mic error: $e");
    }
  }
}
@computed
bool get canShowSuggestions => (this.suggestionChips?.length ?? 0) > 0;
@action
_sttResultListener(SpeechRecognitionResult result) async {
  // restart the timer if the stt stop command has not been issued yet
  if (Platform.isIOS && !result.finalResult) _startTimerListen();
  this.textCaptureAudio = result.recognizedWords;
  // send the question, since the stt stop command has been issued
  if (result.finalResult && this.textCaptureAudio.trim().isNotEmpty)
    this.sendQuestion(this.textCaptureAudio);
}
void _startTimerListen() {
  _cancelTimerListen();
  timerListen = Timer(Duration(seconds: 3), () {
    if (this.textCaptureAudio.trim().isNotEmpty) {
      stt.stop();
    } else {
      _defineOpenSpeechStatus();
    }
  });
}
As far as I know, there is no way for an app to extend the period of time the microphone listens on Android, neither with Flutter nor with native Android development. I tried to solve this problem for an app of my own a few years ago, but speech recognition on Android simply does not support it. I'm sorry, but I hope this helps clarify things.
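That said, recent versions of speech_to_text expose listenFor / pauseFor hints on listen(); whether the Android recognizer actually honors them is up to the platform, but they may be worth trying before adding your own Timer. A minimal sketch reusing the names from the question (the durations are illustrative):

// Sketch only: listenFor / pauseFor are hints; the Android speech
// recognizer may still stop earlier than requested.
stt.listen(
  localeId: LocaleUtils.getPtBR().localeId,
  onResult: _sttResultListener,
  listenFor: const Duration(seconds: 30), // upper bound on the whole session
  pauseFor: const Duration(seconds: 3),   // silence allowed before auto-stop
  cancelOnError: true,
);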
The app plays some background audio (WAV) in an endless loop. Audio playback was flawless using the just_audio package, but after switching to audioplayers, all looped audio now contains a short gap of silence before it restarts at the beginning. The question: how do I get rid of the gap?
The code to load and play the files is quite simple:
Future<void> loadAmbient() async {
  _audioAmbient = AudioPlayer();
  await _audioAmbient!.setSource(AssetSource('audio/ambient.wav'));
  await _audioAmbient!.setReleaseMode(ReleaseMode.loop);
}

Future<void> playAmbient() async {
  if (PlayerState.playing != _audioAmbient?.state) {
    await _audioAmbient?.seek(Duration.zero);
    _audioAmbient?.resume();
  }
}
According to the audioplayers docs, this seems to be a known issue:
The Getting Started docs state:
Note: there are caveats when looping audio without gaps. Depending on the file format and platform, when audioplayers uses the native implementation of the "looping" feature, there will be gaps between plays, which might not be noticeable for non-continuous SFX but will definitely be noticeable for looping songs. Please check out the Gapless Loop section on our Troubleshooting Guide for more details.
Following the link to the [Troubleshooting Guide](https://github.com/bluefireteam/audioplayers/blob/main/troubleshooting.md) reveals:
Gapless Looping
Depending on the file format and platform, when audioplayers uses the native implementation of the "looping" feature, there will be gaps between plays, which might not be noticeable for non-continuous SFX but will definitely be noticeable for looping songs.
TODO(luan): break down alternatives here, low latency mode, audio pool, gapless_audioplayer, ocarina, etc
The interesting part is "Depending on the file format and platform, ...", which sounds as if gapless loops could somehow be doable. The question is: how?
Any ideas on how to get audio to loop flawlessly (i.e., without a gap) are very much appreciated. Thank you.
You can pass this parameter:
_audioAmbient = AudioPlayer(mode: PlayerMode.LOW_LATENCY);
If that doesn't solve it, you can try something new like https://pub.dev/packages/ocarina
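If you are on audioplayers 1.x or newer, the constructor no longer takes a mode parameter (PlayerMode.LOW_LATENCY is the pre-1.0 spelling); the mode is set through a method instead. A minimal sketch of the same idea, assuming audioplayers 1.x:

import 'package:audioplayers/audioplayers.dart';

Future<AudioPlayer> loadAmbientLowLatency() async {
  final player = AudioPlayer();
  // Low-latency mode is intended for short clips and may reduce the loop gap.
  await player.setPlayerMode(PlayerMode.lowLatency);
  await player.setReleaseMode(ReleaseMode.loop);
  await player.setSource(AssetSource('audio/ambient.wav'));
  await player.resume();
  return player;
}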
I fixed it this way: I created two players, set the asset path on both of them, and created a method that starts the second player shortly before the first one completes.
import 'dart:async';
import 'dart:developer';

import 'package:just_audio/just_audio.dart' as audio;

audio.AudioPlayer player1 = audio.AudioPlayer();
audio.AudioPlayer player2 = audio.AudioPlayer();
bool isFirstPlayerActive = true;
StreamSubscription? subscription;
audio.AudioPlayer getActivePlayer() {
  return isFirstPlayerActive ? player1 : player2;
}

void setPlayerAsset(String asset) async {
  if (isFirstPlayerActive) {
    await player1.setAsset("assets/$asset");
    await player2.setAsset("assets/$asset");
  } else {
    await player2.setAsset("assets/$asset");
    await player1.setAsset("assets/$asset");
  }
  subscription?.cancel();
  _loopPlayer(getActivePlayer(), isFirstPlayerActive ? player2 : player1);
}
void _loopPlayer(audio.AudioPlayer current, audio.AudioPlayer next) async {
  Duration? duration = await current.durationFuture;
  subscription = current.positionStream.listen((event) {
    // Start the other player slightly (110 ms) before this one finishes,
    // so the two plays overlap and hide the gap.
    if (duration != null &&
        event.inMilliseconds >= duration.inMilliseconds - 110) {
      log('finished');
      next.play();
      isFirstPlayerActive = !isFirstPlayerActive;
      StreamSubscription? tempSubscription;
      tempSubscription = current.playerStateStream.listen((event) {
        if (event.processingState == audio.ProcessingState.completed) {
          log('completed');
          // Rewind and pause so this player is ready for the next round.
          current.seek(const Duration(seconds: 0));
          current.pause();
          tempSubscription?.cancel();
        }
      });
      subscription?.cancel();
      _loopPlayer(next, current);
    }
  });
}
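Hypothetical usage of the helpers above (names taken from the snippet; not part of the original answer):

setPlayerAsset('ambient.wav');
getActivePlayer().play();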
I am working on a personal project, using Flutter to develop a cross-platform app that reads the user's health data from Google Fit (Android) or Apple Health. I am using this package with the exact same code as in the documentation (I am currently only testing on Android):
Future fetchStepData() async {
  int? steps;
  // get steps for today (i.e., since midnight)
  final now = DateTime.now();
  final midnight = DateTime(now.year, now.month, now.day);
  bool requested = await health.requestAuthorization([HealthDataType.STEPS]);
  if (requested) {
    try {
      steps = await health.getTotalStepsInInterval(midnight, now);
    } catch (error) {
      print("Caught exception in getTotalStepsInInterval: $error");
    }
    print('Total number of steps: $steps');
    setState(() {
      _nofSteps = (steps == null) ? 0 : steps;
      _state = (steps == null) ? AppState.NO_DATA : AppState.STEPS_READY;
    });
  } else {
    print("Authorization not granted - error in authorization");
    setState(() => _state = AppState.DATA_NOT_FETCHED);
  }
}
I am calling this function with await, and I have also inserted the correct permission in all AndroidManifest files.
I also set up an OAuth2 client ID for the project and added my Google account as a test user.
But the function always sets the variable steps to null. The boolean variable "requested" is true, so it seems like the actual connection is working?
I'm really disappointed in myself, and I really need help - thank you!
I tried adding the correct Android permissions, asking for permissions explicitly, and different time intervals, but nothing worked for me; I always got a null value back.
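In case it helps anyone hitting the same wall: on Android 10+ the STEPS type also needs the ACTIVITY_RECOGNITION runtime permission, not just the manifest entry. A minimal sketch of requesting it explicitly with the permission_handler package (an assumption; this package and helper are not part of the original code):

import 'package:permission_handler/permission_handler.dart';

// Hypothetical helper: request the ACTIVITY_RECOGNITION runtime permission
// (required for step data on Android 10+) before calling
// health.requestAuthorization.
Future<bool> ensureActivityRecognitionPermission() async {
  final status = await Permission.activityRecognition.request();
  return status.isGranted;
}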
Guys, I'm building a music app in Flutter and I have a problem I don't know how to resolve, so it would be great if someone could help me out.
Here's the thing: I have songs stored on CloudFront, and they work perfectly:
play(song, context) async {
  int result = await audioPlayer.play(song); // path to the song
  setState(() {
    isPlaying = true;
  });
  // int seek = await audioPlayer.seek(Duration(minutes: 10, seconds: 10));
  if (result == 1) {
    return 'success';
  }
}
But when I give it another path, from Podbean, it will not start playing: "https://mcdn.podbean.com/mf/play/9i7qz8/Japji_Sahib_Pauri_3.mp3" (this is the link; you can play it in a browser).
The error message shows after some time on the initial play of the podcast.
Any idea what I can do about this problem?
Thanks in advance.
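For reference, the int-returning play(String) call above is the pre-1.0 audioplayers API; on audioplayers 1.x+ a remote file is played through a UrlSource instead. A sketch of the equivalent call, in case the behavior is version-specific (an assumption, since the error text isn't shown):

import 'package:audioplayers/audioplayers.dart';

final player = AudioPlayer();

// Play a remote MP3 by URL (audioplayers 1.x+ API).
Future<void> playRemote(String url) async {
  await player.play(UrlSource(url));
}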
I use navigator.mediaDevices.enumerateDevices to retrieve the list of all video devices (element.kind === 'videoinput') and then call navigator.mediaDevices.getUserMedia(constraints) to rotate through video devices (using deviceId as a constraint). Everything works fine on Windows with Chrome / Firefox, but on Android phones (tried Samsung, Asus, and Huawei with Android 8/9) this call fails for the back camera with NotReadableError / "Could not start video source" (Chrome) or AbortError / "Starting video failed" (Firefox).
Strangely, the same code works fine on iOS / Safari.
Also, this only happens when a WebRTC call is active in the browser. If there is no call, I can select any video device.
Likewise, if I select the back camera first and then try to establish the call, it does not work; I get a similar error.
I know it's far-fetched, but maybe someone has had the same or a similar issue?
All browser versions are up to date.
[UPDATE - code snippet and log]
switchCamera() {
  try {
    if (this.localStream) {
      const tracks = this.localStream.getTracks();
      console.log('switchCamera stopping this.localStream tracks', tracks);
      tracks.forEach((track: MediaStreamTrack) => {
        console.log('switchCamera stopping track', track);
        track.stop();
      });
      console.log('switchCamera stop stream');
    }
    // Note: the valid facingMode values are 'user' and 'environment'.
    const constraints = {
      audio: true,
      video: { facingMode: this.faceCamera ? 'environment' : 'user' }
    };
    this.faceCamera = !this.faceCamera;
    console.log('switchCamera constraints: ', constraints);
    navigator.mediaDevices.getUserMedia(constraints)
      .then(stream => {
        console.log('getUserMedia:', stream);
        this.logText('got stream');
        this.localVideo.srcObject = stream;
        const videoTracks = stream.getVideoTracks();
        const audioTracks = stream.getAudioTracks();
        console.log('videoTracks', videoTracks);
        if (videoTracks.length > 0) {
          console.log(`Using video device: ${videoTracks[0].label}`);
        }
        const videoTrack = videoTracks[0];
        const audioTrack = audioTracks[0];
        console.log('Replacing track for pc', videoTrack, audioTrack);
        const pc = this.session.sessionDescriptionHandler.peerConnection;
        const videoSender = pc.getSenders().find(s => {
          return s.track && s.track.kind === videoTrack.kind;
        });
        const audioSender = pc.getSenders().find(s => {
          return s.track && s.track.kind === audioTrack.kind;
        });
        if (videoSender) {
          console.log('videoSender.replaceTrack', videoTrack);
          videoSender.replaceTrack(videoTrack);
        }
        if (audioSender) {
          console.log('audioSender.replaceTrack', audioTrack);
          audioSender.replaceTrack(audioTrack);
        }
      })
      .catch(e => {
        console.log('getUserMedia error:', e.name, e.code, e.message);
      });
  } catch (e) {
    window.alert(e);
  }
}
From the Chrome remote device debug log: the error is "NotReadableError", "Could not start video source", which means the underlying device handle could not be obtained by Chrome.
Again, Safari on iOS works fine.
For mobile devices, there is a dedicated way to select between the front and back camera.
VideoFacingMode - https://www.w3.org/TR/mediacapture-streams/#dom-videofacingmodeenum
TL;DR
window.navigator.mediaDevices.enumerateDevices().then(devices => {
  if (devices.filter(device => device.kind === 'videoinput').length > 1) {
    navigator.mediaDevices.getUserMedia({video: {facingMode: 'user' /*'environment'*/}}).then(console.log.bind(this))
  }
})
It works in mobile Safari, Chrome, and Firefox.
NOTE
Remember to stop the previous video track before calling getUserMedia with video again; otherwise, you will get an exception.
OK, so I narrowed it down to calling navigator.mediaDevices.getUserMedia() in ngOnInit() (this is an Angular app).
Even if I remove all the code in the .then() handler, the effect is the same.
Only removing this call solves the issue.
I'm not sure yet why it behaves this way; I will investigate more thoroughly and update.
To switch between the front and back cameras on mobile, you need to stop the previous stream before opening a new one:
if (videoIn.srcObject) {
  videoIn.srcObject.getTracks().forEach((track) => {
    track.stop();
  });
}
I have an odd issue that I can't explain; maybe someone here can shed some light on it.
I have a ticket-scanning app in Xamarin.Forms, currently being tested on Android.
The interface allows you to:
type an order number and click the check order button
use the camera scanner to scan, which automatically triggers check order
use the barcode scanner to scan, which automatically triggers check order
After the check order validation, the user has to select the number of tickets from a drop-down list and press the confirm entry button.
What I'm trying to do: if only one seat is available on that ticket, automatically trigger the confirm entry functionality.
The problem I have is that some of my logic depends on setting the drop-down index in code, and for some reason it doesn't update, as seen in the debugger shot here.
This is the second time I've noticed this today; earlier it was a variable I was trying to assign a string to, and it kept coming up as null. Eventually I replaced that code.
Is this a bug in Xamarin?
The code has been simplified:
async void OnCheckOrderButtonClicked(object sender, EventArgs e)
{
    await ValidateOrderEntry();
}

private async void scanCameraButton_Clicked(object sender, EventArgs e)
{
    messageLabel.Text = string.Empty;
    var options = new ZXing.Mobile.MobileBarcodeScanningOptions();
    options.PossibleFormats = new List<ZXing.BarcodeFormat>() {
        ZXing.BarcodeFormat.QR_CODE, ZXing.BarcodeFormat.EAN_8, ZXing.BarcodeFormat.EAN_13
    };
    var scanPage = new ZXingScannerPage(options);
    scanPage.OnScanResult += (result) =>
    {
        // stop scanning
        scanPage.IsScanning = false;
        Device.BeginInvokeOnMainThread(async () =>
        {
            // pop the page and get the result
            await Navigation.PopAsync();
            orderNoEntry.Text = result.Text;
            // automatically trigger update
            await ValidateOrderEntry();
        });
    };
    await Navigation.PushAsync(scanPage);
}

private async Task ValidateOrderEntry()
{
    //...other code....
    checkInPicker.Items.Clear();
    if (availableTickets == 1)
    {
        checkInPickerStack.IsVisible = true;
        checkInPicker.SelectedIndex = 0;
        messageLabel.Text = "Ticket OK! - " + orderNoEntry.Text;
        messageLabel.TextColor = Color.Green;
        // select the only element
        checkInPicker.SelectedIndex = 0;
        await PostDoorEntry();
    }
    //...other code....
}

private async Task PostDoorEntry()
{
    int entryCount = checkInPicker.SelectedIndex + 1;
    //... more code...
    //...post api code..
}
Maybe I'm overlooking something, but you clear all the items a few lines above the one you are pointing at. That means there are no items in your Picker, and thus you can't set the SelectedIndex to anything other than -1, simply because there are no items.