I can't find anything online, but how can I use the Web Audio API from a Chrome tab in an Android app so I can play sound during a phone call?
I went to this site, but when I play the sound during a phone call, the far end doesn't hear anything. I thought one feature of Web Audio was that it can change the sound of someone's voice during a phone call, so I assumed it had access to the call's audio stream.
Even here the tech says it's ready for Android, but I can't even get the audio recorder demo to work on Android.
While you do (with the user's permission) have access to the device's input, you only have access to the device's main output (internal speakers or headphones), which is represented as AudioContext.destination. The audio path of a phone call is (probably) a separate output that you simply don't have access to in Web Audio (and that's probably a good thing; imagine the security issues we'd have if apps were allowed to hijack calls!).
Related
TL;DR: Stream music from one Android device to another via Bluetooth, remote-controlling VLC or foobar2k or another app on the source Android, displaying play/pause/next/prev buttons and the track name: which app does that (besides the one that came with my car's Android radio)?
I've got an old no-name Chinese Android car radio (Android 7 from 2017). This device comes with an app just called A2DP (listed as "Bluetooth" v2.4.0 in the app list) that is magically able to control VLC music playback on my Android smartphone. It has just Play/Pause, Next, and Prev buttons, and it also shows the current track name from VLC.
I recently wrote myself a car dashboard app that displays the current track (via NotificationListenerService). It works with VLC running LOCALLY on my car's Android, but not with this Chinese app called A2DP open.
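For reference, the dashboard side is roughly this kind of minimal NotificationListenerService sketch (notification access must be granted by the user, and it only sees apps that actually post a media-style notification):

```java
import android.app.Notification;
import android.os.Bundle;
import android.service.notification.NotificationListenerService;
import android.service.notification.StatusBarNotification;
import android.util.Log;

// Minimal sketch: read the track title/text out of other apps'
// media notifications. Only works if the playing app posts one.
public class TrackListener extends NotificationListenerService {
    @Override
    public void onNotificationPosted(StatusBarNotification sbn) {
        Bundle extras = sbn.getNotification().extras;
        CharSequence title = extras.getCharSequence(Notification.EXTRA_TITLE);
        CharSequence text = extras.getCharSequence(Notification.EXTRA_TEXT);
        Log.d("Dashboard", sbn.getPackageName() + ": " + title + " / " + text);
    }
}
```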
From what I think I've found out, this app is probably not announcing/broadcasting the track name to the local car Android
(see e.g. send track informations via A2DP/AVRCP)
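If that theory is right, the missing piece is a MediaSession: publishing metadata on an active session is what lets both the local NotificationListener/MediaSessionManager side and the Bluetooth stack (via AVRCP) see the track name. A minimal sketch of what I'd expect a well-behaved player to do (API 21+, class and method names illustrative):

```java
import android.content.Context;
import android.media.MediaMetadata;
import android.media.session.MediaSession;
import android.media.session.PlaybackState;

public class TrackAnnouncer {
    // Publish title/artist on an active MediaSession so the system
    // (and the Bluetooth stack, via AVRCP) can pick it up.
    public static MediaSession announce(Context ctx, String title, String artist) {
        MediaSession session = new MediaSession(ctx, "TrackAnnouncer");
        session.setMetadata(new MediaMetadata.Builder()
                .putString(MediaMetadata.METADATA_KEY_TITLE, title)
                .putString(MediaMetadata.METADATA_KEY_ARTIST, artist)
                .build());
        session.setPlaybackState(new PlaybackState.Builder()
                .setState(PlaybackState.STATE_PLAYING, 0, 1.0f)
                .build());
        session.setActive(true);
        return session;
    }
}
```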
So to test my theory I just wanted to try ANOTHER app that does the same thing... and guess what: I searched the Play Store, F-Droid and more, with search terms like "bluetooth", "media controller", "avrcp/a2dp", "bluetooth music player", etc. ... I COULDN'T FIND A SINGLE APP THAT DOES WHAT THIS CHINESE APP DOES :)
So I'm now wondering if I'm missing something here... hope someone can point me in the right direction ;)
Thanks
I've installed almost every app on the Play Store that has to do with Bluetooth; nothing works. Even random .apk files...
I have two devices: one to play the video, and the other to be used as a remote. The latter device has a network connection and uses Thrift to communicate with the former device.
How can I control the former device to play/stop/pause from the latter device? The communication from the latter device to the former is fine, but once the message gets to the former device, how do I actually play/stop, etc.?
I found the answer in another post:
Control the default music player of android or any other music player
But can we retrieve the status of the media player with an intent?
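For reference, a minimal sketch of the kind of approach that post describes: simulating media button presses so whatever player is active reacts (assumes API 19+ for dispatchMediaKeyEvent; on older versions you would broadcast Intent.ACTION_MEDIA_BUTTON instead):

```java
import android.content.Context;
import android.media.AudioManager;
import android.view.KeyEvent;

public class MediaControl {
    // Send a play/pause "button press" to the active media player.
    public static void playPause(Context ctx) {
        AudioManager am =
                (AudioManager) ctx.getSystemService(Context.AUDIO_SERVICE);
        // Dispatch a down/up pair, as a physical button press would.
        am.dispatchMediaKeyEvent(new KeyEvent(
                KeyEvent.ACTION_DOWN, KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE));
        am.dispatchMediaKeyEvent(new KeyEvent(
                KeyEvent.ACTION_UP, KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE));
    }
}
```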
I want to make a speech recognizer app that transcribes the user's speech. I do not want any dialog while doing that, so startActivityForResult with RecognizerIntent is out of the question. (I know I can get the audio if I use that approach.)
I am using SpeechRecognizer for that and call startListening to listen to the user's audio. I am getting results with very good accuracy in onResults.
Now, I also need the user's audio stored on the SD card of my device. For that I have tried both MediaRecorder and AudioRecord, but without any success: I always get a network error in onError of the RecognitionListener. I can't find any way to overcome this issue. I have also tried to get the data from onBufferReceived, but in vain.
If anyone can throw some light on this, that would be great.
[Edit]
Guys, this is not a duplicate of record/save audio from voice recognition intent; it's slightly different. The answer given there is for Google Keep, and Keep uses the dialog to get the data. I do not want a dialog hanging on the screen.
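For context, the dialog-free setup in question is essentially this minimal SpeechRecognizer sketch (RECORD_AUDIO permission assumed):

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.util.Log;

public class NoDialogRecognizerActivity extends Activity {
    private SpeechRecognizer recognizer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                Log.d("ASR", "heard: " + results
                        .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION));
            }
            @Override public void onError(int error) {
                // ERROR_NETWORK is the failure described above.
                Log.d("ASR", "error code: " + error);
            }
            // Remaining callbacks left empty for brevity.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });

        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }

    @Override
    protected void onDestroy() {
        recognizer.destroy();
        super.onDestroy();
    }
}
```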
I have successfully accomplished this with the help of the Cloud Speech API.
You can find its demo at google speech.
The API recognizes over 80 languages and variants, to support your global user base. You can transcribe the text of users dictating to an application's microphone, enable command-and-control through voice, or transcribe audio files, among many other use cases. Recognize audio uploaded in the request, and integrate with your audio storage on Google Cloud Storage, by using the same technology Google uses to power its own products.
The demo uses an audio buffer to transcribe data with the help of the Google Speech API. I used the same buffer to store the audio recording with the help of AudioRecord.
So with this demo we can transcribe the user's speech in parallel with recording the audio.
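A minimal sketch of that buffer reuse (16 kHz mono PCM; the output path and the stop condition are illustrative, and sendToSpeechApi() stands in for the demo's streaming call):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

import java.io.FileOutputStream;
import java.io.IOException;

public class DualUseRecorder {
    private static final int SAMPLE_RATE = 16000; // rate the Speech API expects

    public void record(String outputPath) throws IOException {
        int bufSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufSize);
        byte[] buffer = new byte[bufSize];
        recorder.startRecording();
        try (FileOutputStream out = new FileOutputStream(outputPath)) {
            for (int i = 0; i < 500; i++) { // illustrative stop condition
                int read = recorder.read(buffer, 0, buffer.length);
                if (read > 0) {
                    out.write(buffer, 0, read);       // keep a copy on storage
                    // sendToSpeechApi(buffer, read); // hypothetical streaming call
                }
            }
        } finally {
            recorder.stop();
            recorder.release();
        }
    }
}
```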
I'm trying to send audio over my car's speakers without having the car's audio input set to Bluetooth. It supports the A2DP profile, and I also managed to get that to work, but only when I manually set the input to Bluetooth.
I would like to 'force' the audio to be sent over the speakers.
I previously had an iPhone. The Google Maps app would 'call' itself every time it pronounced the directions, so my car would see it as an incoming call and play the audio over the speakers.
I looked around the internet, and it seems I need to use the HFP (hands-free) profile to pull off the same trick.
The documentation states that HFP is supported, but it does not show me how to do it. I also found exactly what I needed in the documentation; it states the following:
NOTE: up to and including API version JELLY_BEAN_MR1, this method initiates a virtual voice call to the bluetooth headset. After API version JELLY_BEAN_MR2 only a raw SCO audio connection is established.
So initiating virtual voice calls used to be possible. I would like to know how to do that now. Any other ideas on how to do this would also be very helpful.
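For reference, the note quoted above comes from AudioManager.startBluetoothSco(), so I assume the modern equivalent is bringing up a raw SCO link and playing through the voice-call stream, roughly like this untested sketch (BLUETOOTH and MODIFY_AUDIO_SETTINGS permissions assumed; whether the head unit accepts audio on a bare SCO link is exactly what's in question):

```java
import android.content.Context;
import android.media.AudioManager;
import android.media.MediaPlayer;
import android.net.Uri;

import java.io.IOException;

public class ScoAudio {
    // Bring up the SCO (headset) link and play a sound through the
    // voice-call stream, so the car treats it like call audio.
    public static void playOverSco(Context ctx, Uri sound) throws IOException {
        AudioManager am =
                (AudioManager) ctx.getSystemService(Context.AUDIO_SERVICE);
        am.setMode(AudioManager.MODE_IN_COMMUNICATION);
        am.startBluetoothSco(); // raw SCO on JELLY_BEAN_MR2+, per the note
        am.setBluetoothScoOn(true);

        // A real app should wait for ACTION_SCO_AUDIO_STATE_UPDATED to
        // report SCO_AUDIO_STATE_CONNECTED before playing; omitted here.
        MediaPlayer player = new MediaPlayer();
        player.setAudioStreamType(AudioManager.STREAM_VOICE_CALL);
        player.setDataSource(ctx, sound);
        player.prepare();
        player.start();
    }
}
```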
I'm developing an AIR for Android application, and am currently sending audio to FMS servers via the standard NetStream/Microphone options. I (ignorantly) assumed that attaching a Bluetooth device would be pretty simple, and that connecting it would make it show up as a native "Microphone". Unfortunately, it does not.
I don't think it is even possible to use NetStream.publish to publish raw bytes, so the only hope is that there's a way to use NativeProcess + Java to create a native microphone "handle" that AIR can pick up on.
Has anyone run into this issue?
I think one possible solution would be using NetConnection.send() instead of NetStream.publish().
You need to get sound data from your BT microphone. I am not sure if you can do that in AIR; you may need an Android service that gets the sound data and feeds your AIR app via a file, a UDP port, an invoke, etc. (see the sketch after these steps).
When you get some sound data, encode it so Flash can play it (Speex, Nellymoser, etc.). You can do the encoding in your Android service as well.
Whenever your AIR app receives sound data, send it to your streaming server via NetConnection.send().
Extend your streaming server to process the sound data it receives. You can embed it into an FLV stream, or send it to other Flash clients if it is a chat app.
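A rough sketch of the Android-service side (capturing the mic while the SCO link to the BT headset is up, and pushing raw PCM to a local UDP port the AIR app could listen on; the port and stop logic are illustrative):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class BtMicForwarder implements Runnable {
    private static final int SAMPLE_RATE = 8000; // SCO audio is 8 kHz mono
    private static final int PORT = 41000;       // arbitrary local port

    private volatile boolean running = true;

    @Override
    public void run() {
        int bufSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufSize);
        byte[] buf = new byte[bufSize];
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress local = InetAddress.getByName("127.0.0.1");
            rec.startRecording(); // SCO must already be up (startBluetoothSco)
            while (running) {
                int read = rec.read(buf, 0, buf.length);
                if (read > 0) {
                    socket.send(new DatagramPacket(buf, read, local, PORT));
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            rec.stop();
            rec.release();
        }
    }

    public void stop() { running = false; }
}
```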
Other than that, I can't find a way to have a "microphone handle" for your BT microphone. I once thought of creating a virtual device on Android, but I couldn't find any solution.