Android: Recognize incoming noise to the device

Android people,
I have already gone through the answers to a number of questions on Stack Overflow; many of them are somewhat similar to my problem, but I could not find a solution.
The problem is this:
in my application I want to rotate an image for as long as noise is coming in from the device's microphone, and I don't want to save the audio to a file.
How can I start implementing this? A link or a hint on how to approach it would be appreciated.
Thanks.

You can use the AudioRecord class to access the audio stream from the microphone. Then you can try to detect noise in that stream. After calling startRecording() on the AudioRecord you have to call read() to pull samples from the stream (and you have to do that fast enough that the internal buffer does not overflow). You'll get 16-bit integers from that stream, which you can use to analyze the noise level. Recognizing a specific sound, however, is not as easy.
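The level-analysis step can be sketched in plain Java. This assumes the buffer has already been filled by AudioRecord.read(short[], int, int); the class name NoiseDetector and the threshold value are illustrative, not part of any API, and the threshold would need tuning per device:

```java
public class NoiseDetector {
    // Illustrative threshold on a 16-bit PCM scale (0..32767); tune per device/mic.
    static final double THRESHOLD = 1000.0;

    // Root-mean-square amplitude of one buffer of 16-bit PCM samples,
    // e.g. a buffer filled by AudioRecord.read(short[], int, int).
    static double rms(short[] samples, int count) {
        long sumSquares = 0;
        for (int i = 0; i < count; i++) {
            sumSquares += (long) samples[i] * samples[i];
        }
        return Math.sqrt((double) sumSquares / count);
    }

    // True while "noise is coming": the app could keep rotating the image
    // as long as consecutive buffers report true.
    static boolean isNoisy(short[] samples, int count) {
        return rms(samples, count) > THRESHOLD;
    }
}
```

Since nothing is written to a file, this also satisfies the "don't save the audio" requirement: each buffer is analyzed and then discarded.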

Related

Is it possible in Android to play an audio file over a phone call so that the receiver hears it?

I searched a lot but found nothing. I just want to know whether it is possible or not, and if yes, then how?
No.
The simple fact is that the microphone / input audio stream during a phone call cannot be pre-processed or replaced with another stream of any kind.
So while one might think it is easy to play media into a call, it simply is not possible, because of the way the Android system routes in-call audio.

How to play audio by a specific audio device in Android?

I've looked around for a way to achieve mainly three different things with Android:
How to list all the audio output and input devices?
I've found that, according to the Android documentation, AudioManager instances have a method for this since API 23 (getDevices()), but there is no documented solution for earlier versions. This question remains unanswered here on Stack Overflow and in several other places online.
How to pick a specific audio output device to stream audio to?
What I want to accomplish is to select one of the available output devices and stream the audio through it. Is there a way to do so?
How to record from a specific input device?
The same as before, but the other way around: select one of the available microphones and use it to record audio.
Please help me with some guidance, source code, or ideas; all of them are very welcome.

how to get the pcm data of android?

I'm trying to get PCM data with the AudioRecord class. The audio source is the headset jack: a device is connected to it and sends a waveform to my app (I hope you can understand what I mean). A picture of the different devices' waves is at http://i.stack.imgur.com/5iZnY.png
Looking at the picture, for wave 1 and wave 2 I get the right result, because I can work out the points of one cycle. But on a Sony XL36H the wave I receive is not close to the real wave, even though the device actually sends a signal close to wave 1.
My question is: what causes this phenomenon, and how can I get a wave close to wave 1? I think Sony may have tweaked the low-level audio layer; if so, should I use the NDK to avoid that?
Should I use the NDK to avoid that?
No, you will get the same results with the NDK.
AudioRecord already provides access to the raw PCM data. The difference between the devices occurs because they use different audio modules. The modules have different hardware characteristics (low-pass filters, sensitivity) and you cannot disable them in software. The reason is that these features reduce noise.
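To inspect that raw PCM data, the byte buffers have to be decoded into 16-bit samples first. A minimal sketch, assuming ENCODING_PCM_16BIT buffers as filled by AudioRecord.read(byte[], int, int) (16-bit PCM on Android devices is in native, typically little-endian, byte order); the helper name PcmUtil is illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmUtil {
    // Decode a raw 16-bit little-endian PCM byte buffer, e.g. one filled by
    // AudioRecord.read(byte[], int, int) with ENCODING_PCM_16BIT, into samples.
    static short[] toSamples(byte[] raw, int length) {
        short[] out = new short[length / 2];
        ByteBuffer.wrap(raw, 0, length)
                  .order(ByteOrder.LITTLE_ENDIAN)
                  .asShortBuffer()
                  .get(out);
        return out;
    }
}
```

Plotting the decoded samples from each device would make the hardware filtering described above directly visible; the NDK would decode the very same bytes.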

Android: Record an audio stream and retrieve the latency information

I want to write a simple test program to record an audio stream from the input jack (microphone?).
I need to calculate the audio latency and display it.
Should I use the NDK or the SDK?
Is there simple sample code that does this?
In short, how can I start writing it?
This might help as a start:
http://developer.android.com/guide/topics/media/index.html
For the latency problem you might need to use the NDK, which would give you lower latency anyway. The last time I checked, though, audio was not really usable from the NDK, but that may have changed.

Android: Implementing a VoIP program

I have some design questions that I want to discuss with people interested in helping me. I am planning to develop a simple VoIP program that allows two Android phones on the same network to use VoIP. My goal is simply to capture sound, send the data over UDP, receive UDP data, and play the sound.
My current design has two threads: one captures the microphone and sends the data; the other receives bytes and plays them.
I started implementing this with MediaPlayer and MediaRecorder. The issue that came up is how to record and play the sound. Do I need to use a file, which seems slow, or is there a way to have the recording sent straight to my UDP socket?
Basically, I wonder if I have to record to a file in order to play it, or if I can just pass a socket (for both recording and playing).
Does anyone have any suggestions?
Thank you very much.
MediaRecorder needs a file descriptor, so you can use sockets as well. I don't see any issue with that. It all depends on how you design your system.
Don't use those classes for streaming audio; use AudioTrack and AudioRecord instead.
They provide the functionality you need for playing and recording raw audio data, without dealing with a file descriptor.
When you record a frame (either byte[] or short[]), wrap it in a UDP packet.
When you receive a UDP packet, unpack the relevant byte[] or short[] and play it.
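The wrap/unwrap steps above can be sketched as plain framing helpers. This is a sketch, not a full VoIP implementation: the class name VoipFraming is illustrative, little-endian byte order is an assumption, and in the real app the short[] frames would come from AudioRecord.read() on the sending side and go to AudioTrack.write() on the receiving side:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class VoipFraming {
    // Pack one recorded frame (e.g. filled by AudioRecord.read) into a UDP
    // payload, suitable for new DatagramPacket(payload, payload.length, addr, port).
    static byte[] pack(short[] frame, int count) {
        ByteBuffer buf = ByteBuffer.allocate(count * 2).order(ByteOrder.LITTLE_ENDIAN);
        for (int i = 0; i < count; i++) {
            buf.putShort(frame[i]);
        }
        return buf.array();
    }

    // Unpack a received UDP payload back into samples (e.g. for AudioTrack.write).
    static short[] unpack(byte[] payload, int length) {
        short[] frame = new short[length / 2];
        ByteBuffer.wrap(payload, 0, length)
                  .order(ByteOrder.LITTLE_ENDIAN)
                  .asShortBuffer()
                  .get(frame);
        return frame;
    }
}
```

With this shape, the capture thread loops read-pack-send and the playback thread loops receive-unpack-write, with no file in between.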
