I have an application that occasionally speaks via the system's text-to-speech (TTS) engine, but if a background service (like an audiobook or music stream) is running at the same time, they overlap.
I would like to pause the media, play my TTS, then unpause the media. I've looked, but can't find any solutions.
I believe that if I were to play actual audio from my app, it would pause the media until my playback was complete (if I understand what I've found correctly). But TTS doesn't seem to have the same effect. The speech is totally dynamic, so I can't just record all the options.
Using the latest Xamarin.Forms, I've looked into all the media NuGet packages I could find, and they all seem centered on playing media from files.
My only idea so far (and I don't like it) is to play an empty audio file while the TTS is running. I would prefer a more elegant solution if one exists.
(I don't care about iOS at the moment, so an Android-only solution is fine. And if it's native (Java/Kotlin), I can convert/incorporate it.)
I agree with what rbonestell said: you can use a DependencyService and AudioFocus to achieve it. First, create an interface in the PCL.
public interface IControl
{
    void StopBackgroundMusic();
}
When you play your audio, execute the DependencyService with the following code.
private void Button_Clicked(object sender, EventArgs e)
{
    DependencyService.Get<IControl>().StopBackgroundMusic();
    //play the TTS here
}
In the Android project, create a StopMusicService to implement it.
[assembly: Dependency(typeof(StopMusicService))]
namespace TTSDemo.Droid
{
    public class StopMusicService : IControl
    {
        AudioManager audioMan;
        AudioManager.IOnAudioFocusChangeListener listener;

        public void StopBackgroundMusic()
        {
            audioMan = (AudioManager)Android.App.Application.Context.GetSystemService(Context.AudioService);
            listener = new MyAudioListener(this);
            // Requesting audio focus on the music stream asks other apps to pause (or duck).
            var ret = audioMan.RequestAudioFocus(listener, Stream.Music, AudioFocus.Gain);
        }
    }

    internal class MyAudioListener : Java.Lang.Object, AudioManager.IOnAudioFocusChangeListener
    {
        private StopMusicService stopMusicService;

        public MyAudioListener(StopMusicService stopMusicService)
        {
            this.stopMusicService = stopMusicService;
        }

        public void OnAudioFocusChange([GeneratedEnum] AudioFocus focusChange)
        {
            // React to focus changes here if needed (e.g. another app takes focus back).
        }
    }
}
Thanks to Leon Lu - MSFT, I was able to go in the right direction. I took his implementation (which has some deprecated calls to the Android API) and updated it for what I needed.
I'll be doing a little more work to make sure it's stable and functional, and I'll see if I can clean it up a bit. But here's what works on my first test:
[assembly: Dependency(typeof(MediaService))]
namespace ...Droid.Services
{
    public class MediaService : IMediaService
    {
        public async Task PauseBackgroundMusicForTask(Func<Task> onFocusGranted)
        {
            var manager = (AudioManager)Android.App.Application.Context.GetSystemService(Context.AudioService);
            // GainTransientMayDuck: request focus temporarily; other apps may duck instead of pausing.
            var builder = new AudioFocusRequestClass.Builder(AudioFocus.GainTransientMayDuck);
            var focusRequest = builder.Build();
            var ret = manager.RequestAudioFocus(focusRequest);
            if (ret == AudioFocusRequest.Granted)
            {
                if (onFocusGranted != null)
                {
                    await onFocusGranted();
                }
                // Give focus back so the paused/ducked media can resume.
                manager.AbandonAudioFocusRequest(focusRequest);
            }
        }
    }
}
I want to build a mobile application with Flutter that also runs in the background. The application scans for Bluetooth devices and listens for events in order to launch a notification and/or start a ringtone.
I managed to do all this, and it works very well with the flutter_blue plugin. But my problem is that the application has to keep running in the background.
I came here to seek help.
The app does exactly what this app does: https://play.google.com/store/apps/details?id=com.antilost.app3&hl=fr&gl=US
There are two ways to do it.
The first is to write native code in Java/Kotlin for Android and Objective-C/Swift for iOS.
The best place to start with this is here.
If you follow the link above, you will be able to write a MethodChannel and an EventChannel, which are used to communicate between Flutter and native code. So if you are comfortable on the native side, it won't be a big deal for you.
// For example, if you want to start a service on Android,
// we write:

//rest of the activity code
onCreate(){
    startBluetoothService();
}

startBluetoothService(){
    //your code
}

// Then, on the Flutter side:
const MethodChannel msgChannel = MethodChannel("MyChannel");
msgChannel.invokeMethod("startBluetoothService");
// Native side
public class MainActivity extends FlutterActivity {
    private static final String CHANNEL = "MyChannel";

    @Override
    public void configureFlutterEngine(@NonNull FlutterEngine flutterEngine) {
        super.configureFlutterEngine(flutterEngine);
        new MethodChannel(flutterEngine.getDartExecutor().getBinaryMessenger(), CHANNEL)
            .setMethodCallHandler(
                (call, result) -> {
                    if (call.method.equals("startBluetoothService")) {
                        int response = startBluetoothService();
                        // then you can return the result based on your code execution
                        if (response != -1) {
                            result.success(response);
                        } else {
                            result.error("UNAVAILABLE", "Error while starting service.", null);
                        }
                    } else {
                        result.notImplemented();
                    }
                }
            );
    }
}
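On the Android side, startBluetoothService() would typically start a foreground service, which is what lets the scanning keep running in the background. A minimal sketch, with the class name and notification text as assumptions (minSdk 26 assumed):

import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

// Hypothetical foreground service that hosts the Bluetooth scanning work.
public class BluetoothScanService extends Service {
    private static final String CHANNEL_ID = "bluetooth_scan";

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        NotificationChannel channel = new NotificationChannel(
                CHANNEL_ID, "Bluetooth scan", NotificationManager.IMPORTANCE_LOW);
        getSystemService(NotificationManager.class).createNotificationChannel(channel);

        Notification notification = new Notification.Builder(this, CHANNEL_ID)
                .setContentTitle("Scanning for Bluetooth devices")
                .setSmallIcon(android.R.drawable.stat_sys_data_bluetooth)
                .build();
        startForeground(1, notification); // promote to foreground so the OS keeps it running

        // start your Bluetooth scanning / event listeners here
        return START_STICKY; // ask the system to restart the service if it gets killed
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started service, not bound
    }
}

startBluetoothService() itself can then just call startForegroundService(new Intent(this, BluetoothScanService.class)) and return a status code.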
In the same way, you can write the code for the iOS side.
The second way is to write your own plugin; for that, you can take inspiration from the alarm_manager or Background_location plugins.
I hope this helps you solve the problem.
I am working on a multiplayer third-person game, using Motion Controller for animations and Photon as the network manager. I have a problem: when I connect and join the room, the other players don't move on the other players' screens. They move only on their own devices. Here is what I deactivated:
using UnityEngine;
using com.ootii.Input;
using com.ootii.Actors;
using com.ootii.Actors.AnimationControllers;

public class netView : Photon.MonoBehaviour {
    public Camera cam;
    public UnityInputSource uis;
    public GameObject canvas;
    public ActorController ac;
    public MotionController mc;

    // Use this for initialization
    void Start () {
        if (photonView.isMine) {
            cam.enabled = true;
            uis._IsEnabled = true;
            canvas.SetActive(true);
            ac.enabled = true;
            mc.enabled = true;
        } else {
            cam.enabled = false;
            uis._IsEnabled = false;
            canvas.SetActive(false);
            ac.enabled = false;
            mc.enabled = false;
        }
    }
}
Here is a video: https://youtu.be/mOaAejsVX04 . In it I am playing in the editor and on my phone. On my device I move around, but the editor player does not move. Likewise, in the editor the player from the device just stays there and doesn't move, while on the phone it is moving around.
For input I am using the CrossPlatformManager class. How can I fix it?
In your case I think the problem is that you don't synchronize the transform to begin with. You need either a PhotonTransformView component attached to your network object, with a PhotonView observing that PhotonTransformView, or to manually write and read to that network object's stream inside your network behaviour.
I strongly encourage you to go through the basics tutorial, which will show you all of the above techniques step by step:
https://doc.photonengine.com/en-us/pun/current/demos-and-tutorials/pun-basics-tutorial/player-networking#trans_sync
https://doc.photonengine.com/en-us/pun/current/demos-and-tutorials/pun-basics-tutorial/player-networking#beams
It doesn't matter which input technique you use; what matters is the synchronization of the transform.
I'm currently working on an Android app that enables users to group-chat with each other via the OpenTok API. I want to add a feature to the app that automatically detects which user is talking right now, shows their video to the others, and minimizes the other users' videos until someone else talks.
I cannot find such a feature in OpenTok, so I was wondering if there's a workaround.
private void joinVideoCall(String sessionId, String sessionToken) {
    session = new Session.Builder(activity, OPENTOK_API_KEY, sessionId).build();
    session.setSessionListener(this);
    session.connect(sessionToken);
}

@Override
public void onConnected(Session session) {
    publisher = new Publisher.Builder(activity).build();
    publisher.setPublisherListener(this);
    publisherView.addView(publisher.getView());
    session.publish(publisher);
}

@Override
public void onStreamReceived(Session session, Stream stream) {
    subscriber = new Subscriber.Builder(activity, stream).build();
    session.subscribe(subscriber);
    subscriberView.addView(subscriber.getView());
}
...
In order to do that, you'll need to use a custom audio driver that detects the audio levels.
Take a look at this sample: https://github.com/opentok/opentok-android-sdk-samples/tree/master/Custom-Audio-Driver
And also take a look at the API documentation: https://tokbox.com/developer/sdks/android/reference/com/opentok/android/BaseAudioDevice.html
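If a full custom audio driver is more than you need, the Android SDK also reports per-subscriber audio levels via SubscriberKit. A rough sketch of using that callback to pick out the active speaker, where the threshold and the view-switching helper are assumptions, not part of the SDK:

// subscriber is the Subscriber created in onStreamReceived above
subscriber.setAudioLevelListener(new SubscriberKit.AudioLevelListener() {
    @Override
    public void onAudioLevelUpdated(SubscriberKit subscriber, float audioLevel) {
        // audioLevel is a value between 0 and 1 for this subscriber's stream
        if (audioLevel > 0.2f) { // hypothetical "is talking" threshold
            promoteToMainView(subscriber); // hypothetical helper that maximizes this user's video
        }
    }
});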
I am trying to write a DLNA application using the Cling Java library. I am able to discover all the media servers in the DLNA network and play their content. But I also need to discover the media renderers available in the network and play content on them, just like UPnPlay does.
Thanks in advance.
public class MyUpnpService extends AndroidUpnpServiceImpl {
    @Override
    protected AndroidUpnpServiceConfiguration createConfiguration(WifiManager wifiManager) {
        return new AndroidUpnpServiceConfiguration(wifiManager) {
            @Override
            public ServiceType[] getExclusiveServiceTypes() {
                return new ServiceType[] {
                    new UDAServiceType("AVTransport")
                };
            }
        };
    }
}
Searching for devices with the AVTransport service capability solved the issue of discovering media renderers for remote playback. For remote playback I found enough documentation from this.
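For reference, the discovery side can be driven from the control point. A rough sketch using Cling's registry listener, assuming upnpService is the bound AndroidUpnpService:

// Listen for devices as the registry discovers them, keeping only those
// that expose AVTransport (i.e. renderers we can play content on).
upnpService.getRegistry().addListener(new DefaultRegistryListener() {
    @Override
    public void remoteDeviceAdded(Registry registry, RemoteDevice device) {
        if (device.findService(new UDAServiceType("AVTransport")) != null) {
            // found a media renderer; send SetAVTransportURI/Play actions to it
        }
    }
});
// Actively search the network for AVTransport-capable devices.
upnpService.getControlPoint().search(new UDAServiceTypeHeader(new UDAServiceType("AVTransport")));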
How can you read out data, i.e. convert simple text strings to voice (speech), in Android?
Is there an API where I can do something like this:
TextToVoice speaker = new TextToVoice();
speaker.Speak("Hello World");
Using the TTS is a little bit more complicated than you might expect, but it's easy to write a wrapper that gives you the API you desire.
There are a number of issues you must overcome to get it to work nicely:
- Always set the UtteranceId (or else OnUtteranceCompleted will not be called)
- Only set the OnUtteranceCompleted listener after the speech system is properly initialized
public class TextSpeakerDemo implements OnInitListener
{
    private TextToSpeech tts;
    private Activity activity;

    private static HashMap<String, String> DUMMY_PARAMS = new HashMap<String, String>();
    static
    {
        DUMMY_PARAMS.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "theUtId");
    }

    private ReentrantLock waitForInitLock = new ReentrantLock();

    public TextSpeakerDemo(Activity parentActivity)
    {
        activity = parentActivity;
        tts = new TextToSpeech(activity, this);
        //don't speak until initialization finishes
        waitForInitLock.lock();
    }

    public void onInit(int status)
    {   //unlock it so that speech will happen
        waitForInitLock.unlock();
    }

    public void say(WhatToSay say)
    {
        say(say.toString());
    }

    public void say(String say)
    {
        tts.speak(say, TextToSpeech.QUEUE_FLUSH, null);
    }

    public void say(String say, OnUtteranceCompletedListener whenTextDone)
    {
        if (waitForInitLock.isLocked())
        {
            try
            {
                waitForInitLock.tryLock(180, TimeUnit.SECONDS);
            }
            catch (InterruptedException e)
            {
                Log.e("speaker", "interrupted");
            }
            //unlock it here so that it is never locked again
            waitForInitLock.unlock();
        }
        int result = tts.setOnUtteranceCompletedListener(whenTextDone);
        if (result == TextToSpeech.ERROR)
        {
            Log.e("speaker", "failed to add utterance listener");
        }
        //note: pass in the dummy params so onUtteranceCompleted gets called
        tts.speak(say, TextToSpeech.QUEUE_FLUSH, DUMMY_PARAMS);
    }

    /**
     * make sure to call this at the end
     */
    public void done()
    {
        tts.shutdown();
    }
}
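Usage of the wrapper then looks like the one-liner the question asked for (called from an Activity):

TextSpeakerDemo speaker = new TextSpeakerDemo(this); // e.g. in onCreate
speaker.say("Hello World");
// ...
speaker.done(); // call when finished, e.g. in onDestroy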
Here you go: a tutorial on using the library. The big downside is that it requires an SD card to store the voices.
A good working example of TTS usage can be found in the Pro Android 2 book. Have a look at their source code for chapter 15.
There are third-party text-to-speech engines. Rumor has it that Donut contains a text-to-speech engine, suggesting it will be available in future versions of Android. Beyond that, though, there is nothing built into Android for text-to-speech.
Donut has this: see the android.speech.tts package.
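For completeness, a minimal sketch of that package in use (Android 1.6+; the activity name is hypothetical):

import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;

// Minimal direct use of android.speech.tts, without a wrapper class.
public class TtsActivity extends Activity implements TextToSpeech.OnInitListener {
    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        tts = new TextToSpeech(this, this); // init is async; wait for onInit before speaking
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.speak("Hello World", TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    @Override
    protected void onDestroy() {
        tts.shutdown(); // release the engine when done
        super.onDestroy();
    }
}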