I have created an app for Android that creates and stores a single tone, then plays it back using the AudioTrack class. Here's the issue: on my phone I can only play tones up to a frequency of about 11 kHz, while on a virtual phone run from my PC (same exact code) I can get frequencies up to about 14 kHz. What could cause this cutoff?
Using a tone-generator app from the market, my phone can produce signals up to 20 kHz, so I know it is not a hardware issue.
Thanks.
It might help if you provide some of the code for how you're generating the tone.
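For reference, a single-tone generator on Android usually looks something like the minimal sketch below (this is my own illustrative example, not your code; the class name, method name, and the 44100 Hz rate are assumptions). One thing worth checking in your version: the highest frequency a track can reproduce is half its sample rate (the Nyquist limit), so a track created at 22050 Hz cuts off right around the 11 kHz you're seeing, and a device that silently falls back to a lower rate would show the same symptom.

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    public class TonePlayer {
        // Nyquist limit: the highest representable frequency is SAMPLE_RATE / 2,
        // so a 22050 Hz track cannot reproduce tones above ~11 kHz.
        private static final int SAMPLE_RATE = 44100;

        public static void playTone(double freqHz, double durationSec) {
            int numSamples = (int) (durationSec * SAMPLE_RATE);
            short[] samples = new short[numSamples];
            for (int i = 0; i < numSamples; i++) {
                // One 16-bit sine sample per frame.
                samples[i] = (short) (Math.sin(2 * Math.PI * freqHz * i / SAMPLE_RATE)
                        * Short.MAX_VALUE);
            }
            AudioTrack track = new AudioTrack(
                    AudioManager.STREAM_MUSIC,
                    SAMPLE_RATE,
                    AudioFormat.CHANNEL_OUT_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    numSamples * 2,          // buffer size in bytes (2 bytes per sample)
                    AudioTrack.MODE_STATIC); // whole tone is written before play()
            track.write(samples, 0, numSamples);
            track.play();
        }
    }

You can also ask the device for its preferred rate via AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC) and compare it against the rate you actually requested.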
For audio stuff, you should go here http://music.columbia.edu/mailman/listinfo/andraudio, sign up, and then ask there. There's a great community of Android developers on that list, all dedicated to audio development.
Also, a bit of self-promotion: I run a forum website (relatively new and needing updates), and I plan to add an Android Audio forum to it once I get enough interested folks. If you're interested, sign up here
Related
I'm a biomedical engineering student, and I'm doing my final project on an assistive app for people with sensory issues. Unfortunately, I'm not familiar with Android app development (I have done some C and C++, plus a tiny bit of front-end JS).
Long story short, I'm looking to analyze every piece of audio that will be played to the user (through headphones or peripheral speakers) before it actually reaches them (it could be a video playing in any app, games, music, etc.) and filter out certain groups of frequencies that could cause overload or overstimulation. (The frequencies will be guessed from questions answered by the user; the goal is to drastically decrease those frequencies' amplitude.)
I have found open-source code for the frequency analyzer in both Kotlin and Java.
Now my issue is: can I get access to, and manipulate, other apps' audio output? (I have found that an audio stream can be paused or prioritized through audio focus, but I didn't find an answer for this specific need.)
Sorry it got long, and I thank everyone in advance!
I have a plan to develop an instrument app: when you shake the Android phone, it will produce an "angklung" (Google it) sound.
THE PROBLEM:
How can I make one Android phone share the sound it produces (via the shake gesture) with other Android phones that have my application?
The connection I want to use is the mobile data connection and Wi-Fi.
I think this person has the same problem, but I don't know how to contact him: Stream android to android
Unfortunately, there is no help there.
I need a solution/example/suggestion for this problem. So far I have succeeded in producing the "angklung" sound when the phone is shaken.
I have no idea how to start on the sharing part of this application. I've searched the internet, but there is no help :(
Thanks for your help.
My suggestion would be to stream the audio data to a server and beam it from there to the other Android devices (those registered to your app). Since the question/issue you've asked about is way bigger than a couple of lines of code, I'm pointing you to some good resources instead; dig into those deeply, and good luck. (A bare-bones sketch of the receiving side appears after the links.)
Live-stream video from one android phone to another over WiFi
Stream Live Android Audio to Server
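To make the streaming idea a bit more concrete, here is a bare-bones sketch of the receiving side, assuming raw 16-bit mono PCM arriving over a plain TCP socket (the host, port, and 44100 Hz rate are placeholders I chose); the sending phone would simply write its synthesized sample bytes to the corresponding OutputStream, either directly or via a relay server:

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;
    import java.io.InputStream;
    import java.net.Socket;

    public class PcmReceiver {
        // Reads raw 16-bit mono PCM from a socket and plays it as it arrives.
        public static void receiveAndPlay(String host, int port) throws Exception {
            final int sampleRate = 44100;
            int minBuf = AudioTrack.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                    minBuf, AudioTrack.MODE_STREAM);
            track.play();
            Socket socket = new Socket(host, port);
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[minBuf];
            int n;
            while ((n = in.read(buf)) != -1) {
                track.write(buf, 0, n); // blocking write paces the playback
            }
            track.stop();
            track.release();
            socket.close();
        }
    }

One design note: since every phone in the ensemble runs your app and can already synthesize the angklung sound locally, broadcasting a tiny "shake" event message and letting each receiver play its own local sound is much cheaper (in bandwidth and latency) than streaming the audio itself.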
Hello guys, I have a question.
I have to admit, before I ask it: I have never used the Android SDK before, but I have coded Java for a couple of years.
I have an FM radio app. It's an internet radio, and I want to record its output. Is it possible to use an external app to record some other app's output? And if yes: the radio app also has some pre-recorded shows that you can listen to within the app. They do not get saved to my device when I listen to them; however, is it possible to download those shows, i.e., to find the source of the audio and download it using my external app?
I'm pretty sure that the pre-recorded shows are downloaded from the internet. I know of some audio grabbers that exist as browser extensions on PC, so I'm asking whether such a thing is possible on Android as well.
See below:
https://stackoverflow.com/a/25741006/850347
It seems that there is currently no way to achieve this. I have read this article, and it suggests recompiling the Android source code with some changes.
Or, you can use the Visualizer.
https://stackoverflow.com/a/25816052/850347
The closest API available to you for this purpose is Visualizer, which only captures "partial and low quality audio content".
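For completeness, a minimal sketch of the Visualizer approach (my own illustrative example, not code from the linked answers; it requires the RECORD_AUDIO permission, and audio session 0 attaches to the global output mix):

    import android.media.audiofx.Visualizer;

    public class OutputCapture {
        public static Visualizer start() {
            Visualizer vis = new Visualizer(0); // session 0 = global output mix
            vis.setCaptureSize(Visualizer.getCaptureSizeRange()[1]); // max size, typically 1024
            vis.setDataCaptureListener(new Visualizer.OnDataCaptureListener() {
                @Override
                public void onWaveFormDataCapture(Visualizer v, byte[] waveform, int samplingRate) {
                    // 8-bit unsigned mono samples -- the "partial and low quality"
                    // content mentioned above; usable for metering, not real recording.
                }

                @Override
                public void onFftDataCapture(Visualizer v, byte[] fft, int samplingRate) {
                    // Unused here; set the fft flag below to true if you need it.
                }
            }, Visualizer.getMaxCaptureRate(), true, false);
            vis.setEnabled(true);
            return vis;
        }
    }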
This question might seem to be a repetition of questions such as the following:
How to play an audio file on a voice call in android
Background Audio for a Call in Progress - Possible?
The answers to these questions suggest that it is not possible to play a pre-recorded audio file into a voice call on Android. I want to know why it is not possible. What is the limitation (hardware/software)? Is it really a limitation, or was it done purposely? Can we alter the Android source code to make it possible?
I think this is a limitation, imposed for security reasons and restricted at the OS level.
Let's analyze the security threat first of all. If you were able to play custom audio files to the callee, a whole world of cons would open up: you could trick customer support, pretend to be someone else, give unauthorized purchase confirmations, and so on. For this reason, neither Android nor iOS allows this functionality.
On Android, you won't be able to do so programmatically, simply because the current APIs won't allow you to. This is stated in the official documentation as well, as pointed out here. If you dig into the source code, you could probably enable this feature by accessing the microphone output during a phone call, but that would require running your own custom version of Android. A good starting point would be the AudioTrack source, available here.
EDIT: a good example of an audio mod is enabling the Nexus 5 earpiece as a second loudspeaker (requires root); it can be found here.
After thorough research, what I have come to know is that there is more than one limitation/hurdle in the way of making this possible. These limitations/hurdles sit at three different levels.
The first limitation is at the API level: there is no high-level API for playing sound files into the conversation audio during a call, as mentioned in the official Android documentation.
The second limitation is at the Radio Interface Layer (RIL). The RIL passes complete control of the call to the radio daemon (rild), which in turn passes control on to the vendor RIL. That means we cannot manipulate the voice call from within the Android source code.
Even if we were able to remove these two limitations, we might still not be able to play an audio file into an ongoing voice call, because there is a third limitation: every vendor has its own RIL library that communicates with the radio daemon (rild). Changing that would require the vendor RIL to be open source, which it is not; hardware vendors do not usually make their device driver code available.
A detailed discussion of this topic can be found at this link.
This is software-related, due to the prioritization of audio routing in Android.
Take a look at the CallManager, where you can dig into the method setAudioMode(). After the audio mode has been set to MODE_IN_COMMUNICATION, the following code is called:
audioManager.requestAudioFocusForCall(AudioManager.STREAM_VOICE_CALL,
AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);
From this point on, the telephony service has the highest priority and won't let any other audio play in parallel.
Note: You can play back the audio data only to the standard output device. Currently, that is the mobile device speaker or a Bluetooth headset. You cannot play sound files in the conversation audio during a call.
See the official link:
http://developer.android.com/guide/topics/media/mediaplayer.html
By implementing AudioManager.OnAudioFocusChangeListener you can get the state of the AudioManager; that way, if any music is playing in the background, you can react to the AudioManager's state changes (playing and pausing are completely in the developer's hands).
Some native music players on Android devices handle this by pausing the music when the call state is TelephonyManager.EXTRA_STATE_OFFHOOK. So this scenario is also completely in the developer's hands (whether to handle it or not); if it is not handled, both will play in parallel.
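A minimal sketch of that listener pattern (the class and wiring are my own illustration; requestAudioFocus() shown here is the pre-API-26 form):

    import android.content.Context;
    import android.media.AudioManager;
    import android.media.MediaPlayer;

    public class FocusAwarePlayer implements AudioManager.OnAudioFocusChangeListener {
        private final MediaPlayer player;

        public FocusAwarePlayer(MediaPlayer player) {
            this.player = player;
        }

        public void start(Context context) {
            AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            int result = am.requestAudioFocus(this, AudioManager.STREAM_MUSIC,
                    AudioManager.AUDIOFOCUS_GAIN);
            if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
                player.start();
            }
        }

        @Override
        public void onAudioFocusChange(int focusChange) {
            switch (focusChange) {
                case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:
                    // A call (or similar) took focus: pause for now.
                    if (player.isPlaying()) player.pause();
                    break;
                case AudioManager.AUDIOFOCUS_GAIN:
                    // Focus came back (e.g., the call ended): resume.
                    player.start();
                    break;
                case AudioManager.AUDIOFOCUS_LOSS:
                    // Focus lost for good: stop playback entirely.
                    player.stop();
                    break;
            }
        }
    }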
I am working on my Glass app demo, and I used droidAtScreen to project the screencast from MyGlass for the presentation. The problem is that I cannot demonstrate the voice responses from Glass based on user input. My backup plan is to record a video for the demonstration and insert the voice output manually. Does anyone know a better way to capture both screen and audio for a Google Glass app demo? Thanks for the help.
Have you tried Android Screen Monitor? I always use ASM.jar for demonstrations, and it works fine for both audio and video demos.
The link to download ASM.jar is here.
A detailed description is here. If you're using Droid@Screen, then you probably already know how to run Android Screen Monitor (ASM.jar), but here is a reference link that explains the process in detail.
This is how I solved the problem initially.
1. Use Screencast-O-Matic to record a video of the screencast on my laptop. The screencast is done using Droid@Screen with the "highest frame rate possible" option checked; it gives a better frame rate than an ASM screencast. During the video recording session my voice was captured as well (so, in other words, choose a quiet place!).
2. To simulate the Text-to-Speech engine voice, I used the SitePal demo site; the voice is Julie (US). It's the closest voice I could find that matches the Google Glass speech engine. To record the voice, I used Audacity and exported it to a .wav audio file. The key is to play the video, find the exact time, and insert the audio file using any standard movie-maker software.
UPDATE
Just finished the demo presentation at IT Expo. To my surprise, the simplest solution worked best.
1. Create the video demo (under 2 minutes) as mentioned above, but insist on asking an audience member to try it out.
2. Ask that person to say what he/she heard from the Glass app as a response to the action (e.g., "The item is saved").