Record and playback with Opus Codec in Android

I'm working on a project that needs to record and play back audio with the Opus codec. I've searched a lot but can't find any demo/example using that solution. I found a demo that has an encoder, but I can't find the decoder. I only found the source code of this codec in C. Can you help me?

Hello, that demo is a good place to start; the author was really close to solving it. However, each packet must be sent separately from the encoder to the decoder, instead of saving everything to a file and then reading it back with no regard for where each packet starts.
I modified the code to also write the number of encoded bytes, and when I decode, I first read the number of bytes in each packet and then the payload.
Here is the modified code in OpusEncoder.java
public void write(short[] buffer) throws IOException
{
    byte[] encodedBuffer = new byte[buffer.length];
    int lenEncodedBytes = this.nativeEncodeBytes(buffer, encodedBuffer);
    Log.i(TAG, "encoded " + lenEncodedBytes + " bytes");
    if (lenEncodedBytes > 0)
    {
        // Prefix each packet with its length so the decoder knows where it ends.
        // Note: OutputStream.write(int) writes a single byte, so this only works
        // for packets up to 255 bytes.
        this.out.write(lenEncodedBytes);
        this.out.write(encodedBuffer, 0, lenEncodedBytes);
    }
    else
    {
        Log.e(TAG, "Error during Encoding. Error Code: " + lenEncodedBytes);
        throw new IOException("Error during Encoding. Error Code: " + lenEncodedBytes);
    }
}
Here is the modified code in OpusDecoder.java
byte[] encodedBuffer;
int bytesEncoded = this.in.read();   // length prefix written by the encoder (one byte)
int bytesDecoded = 0;
Log.d(TAG, bytesEncoded + " bytes read from input stream");
if (bytesEncoded >= 0)
{
    encodedBuffer = new byte[bytesEncoded];
    // read() may return fewer bytes than requested, so keep reading until the packet is complete
    int bytesRead = 0;
    while (bytesRead < bytesEncoded)
    {
        int n = this.in.read(encodedBuffer, bytesRead, bytesEncoded - bytesRead);
        if (n < 0) break;
        bytesRead += n;
    }
    bytesDecoded = nativeDecodeBytes(encodedBuffer, buffer);
    Log.d(TAG, bytesDecoded + " bytes decoded");
}
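For the playback side, the decoded short[] samples can be fed to an AudioTrack in streaming mode. Here is a minimal sketch, assuming 48 kHz mono 16-bit PCM (match whatever the decoder is configured with); decoder.read(pcm) is a hypothetical wrapper around the decode snippet above that returns the number of decoded samples:
    // Playback sketch (assumption: 48 kHz mono PCM, one 20 ms Opus frame = 960 samples).
    int sampleRate = 48000;
    int minBuf = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBuf, AudioTrack.MODE_STREAM);
    track.play();

    short[] pcm = new short[960];
    int samplesDecoded;
    while ((samplesDecoded = decoder.read(pcm)) > 0) { // hypothetical read() wrapping the code above
        track.write(pcm, 0, samplesDecoded);           // blocking write of the decoded samples
    }
    track.stop();
    track.release();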

Try this GitHub demo. I compiled it, but it doesn't play the recorded sound.

Related

Android encoded stream (H264/AAC) audio does not play in Flash Player

I'm working on adding a live broadcasting feature to an Android app. I do so through RTMP and make use of the DailyMotion Android SDK, which in turn makes use of Kickflip.
Everything works perfectly, except for the playback of the audio on the website (which makes use of Flash). The audio does work in VLC, so it seems to be an issue with Flash being unable to decode the AAC audio.
For the audio I instantiate an encoder with the "audio/mp4a-latm" mime type. The Android developer docs state the following about this mime type: "audio/mp4a-latm" - AAC audio (note, this is raw AAC packets, not packaged in LATM!). I expect that my problem lies here, but so far I have not been able to find a solution for it.
Pretty much all my research, including this SO question about the matter, pointed me in the direction of adding an ADTS header to the audio byte array. That results in the following code in the writeSampleData method:
boolean isHeader = false;
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
    isHeader = true;
} else {
    pts = bufferInfo.presentationTimeUs - mFirstPts;
}
if (mFirstPts != -1 && pts >= 0) {
    pts /= 1000;
    byte data[] = new byte[bufferInfo.size + 7];
    addADTStoPacket(data, bufferInfo.size + 7);
    encodedData.position(bufferInfo.offset);
    encodedData.get(data, 7, bufferInfo.size);
    addDataPacket(new AudioPacket(data, isHeader, pts, mAudioFirstByte));
}
The addADTStoPacket method is identical to the one in the above mentioned SO post, but I will show it here regardless:
private void addADTStoPacket(byte[] packet, int packetLen) {
    int profile = 2;  // AAC LC
    // 39 = MediaCodecInfo.CodecProfileLevel.AACObjectELD
    int freqIdx = 4;  // 44.1 kHz
    int chanCfg = 1;  // CPE
    // fill in ADTS data
    packet[0] = (byte) 0xFF;
    packet[1] = (byte) 0xF9;
    packet[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
    packet[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));
    packet[4] = (byte) ((packetLen & 0x7FF) >> 3);
    packet[5] = (byte) (((packetLen & 7) << 5) + 0x1F);
    packet[6] = (byte) 0xFC;
}
The variables in the above method match the settings I have configured in the application, so I'm pretty sure that's fine.
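For reference, freqIdx comes from the standard ADTS sampling-frequency-index table and chanCfg is simply the channel count for 1-6 channels; a small helper like the following (hypothetical, not part of my code) shows how the value 4 corresponds to 44.1 kHz:
    // Hypothetical helper: map a sample rate to the ADTS sampling frequency index
    // (table from the MPEG-4 audio spec).
    private static int freqIdxFor(int sampleRate) {
        final int[] rates = {96000, 88200, 64000, 48000, 44100, 32000,
                             24000, 22050, 16000, 12000, 11025, 8000};
        for (int i = 0; i < rates.length; i++) {
            if (rates[i] == sampleRate) return i;   // e.g. 44100 -> 4, matching freqIdx above
        }
        throw new IllegalArgumentException("Unsupported AAC sample rate: " + sampleRate);
    }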
The data is written to the output stream in the following method of the AudioPacket class:
@Override
public void writePayload(OutputStream outputStream) throws IOException {
    outputStream.write(mFirstByte);
    outputStream.write(mIsAudioSpecificConfic ? 0 : 1);
    outputStream.write(mData);
}
Am I missing something here? I could present more code if necessary, but I think this covers the most related parts. Thanks in advance and I really hope someone is able to help, I've been stuck for a couple of days now...

How to mix / overlay two mp3 audio file into one mp3 file (not concatenate)

I want to merge two MP3 files into one MP3 file. For example, if the first file is 1 min and the second file is 30 sec, then the output should be one minute long, and in that one minute it should play both files.
First of all, in order to mix two audio files you need to manipulate their raw representation; since an MP3 file is compressed, you don't have direct access to the signal's raw representation. You need to decode the compressed MP3 stream in order to "understand" the wave form of your audio signals, and then you will be able to mix them.
Thus, in order to mix two compressed audio files into a single compressed audio file, the following steps are required:
decode the compressed files using a decoder to obtain the raw data (NO PUBLIC SYSTEM API available for this, you need to do it manually!);
mix the two raw uncompressed data streams, applying audio clipping if necessary; for this, you need to consider the raw data format obtained with your decoder (PCM) - see the sketch below;
encode the raw mixed data into a compressed MP3 file (as with the decoder, you need to do it manually using an encoder).
More info about MP3 decoders can be found here.
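As an illustration of step 2, here is a minimal sketch that mixes two 16-bit PCM buffers sample by sample with clipping; it assumes both inputs have already been decoded to the same sample rate and channel layout:
    // Mix two 16-bit PCM buffers of possibly different lengths, clipping the sum to the short range.
    static short[] mixPcm(short[] a, short[] b) {
        short[] out = new short[Math.max(a.length, b.length)];
        for (int i = 0; i < out.length; i++) {
            int sa = (i < a.length) ? a[i] : 0;
            int sb = (i < b.length) ? b[i] : 0;
            int sum = sa + sb;                                 // mix by summing the samples
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;  // clip
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (short) sum;
        }
        return out;
    }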
I am not sure if you want to do it on an Android phone (it looks like that because of your tags), but if I'm right, maybe try LoopStack; it's a mobile DAW (I did not try it myself).
If you are just "mixing" two files without adjusting the output volume, your output might clip. However, I am not sure if it's possible to "mix" two MP3 files without decoding them.
If it is okay for you to merge them on your PC, try Audacity; it's a free desktop DAW.
I have not done it on Android, but I have done it using Adobe Flex. I guess the logic remains the same. I followed these steps:
I extracted both MP3s into two byte arrays (song1ByteArray, song2ByteArray).
Find out the bigger byte array (let's say song1ByteArray is the larger one).
Create a function which returns the mixed byte array.
private ByteArray mix2Songs(ByteArray song1ByteArray, ByteArray song2ByteArray) {
    ByteArray returnResultArr = new ByteArray();
    int arrLength = song1ByteArray.length;
    // Increment by 8 because a stereo sample has both left and right channels:
    // 4 bytes (one float) for left + 4 bytes for right.
    for (int i = 0; i < arrLength; i += 8) {
        // read left and right channel values of the first song
        float source1_L = song1ByteArray.readFloat(); // I'm not sure readFloat() exists on Android, but there will be an equivalent
        float source1_R = song1ByteArray.readFloat();
        float source2_L = 0;
        float source2_R = 0;
        if (song2ByteArray.bytesAvailable > 0) {
            source2_L = song2ByteArray.readFloat(); // left channel of song2ByteArray
            source2_R = song2ByteArray.readFloat(); // right channel of song2ByteArray
        }
        returnResultArr.writeFloat((source1_L + source2_L) / 2); // average of the two left channels
        returnResultArr.writeFloat((source1_R + source2_R) / 2); // average of the two right channels
    }
    return returnResultArr;
}
1. Post on Audio mixing in Android
2. Another post on mixing audio in Android
3. You could leverage Java Sound to mix two audio files
Example:
// First convert each audio file to an AudioInputStream
audioInputStream = AudioSystem.getAudioInputStream(soundFile);
audioInputStream2 = AudioSystem.getAudioInputStream(soundFile2);
// Create one collection list object using ArrayList, then add both AudioInputStreams
Collection<AudioInputStream> list = new ArrayList<AudioInputStream>();
list.add(audioInputStream2);
list.add(audioInputStream);
// Then pass the audio format and the collection list to the MixingAudioInputStream constructor
MixingAudioInputStream mixer = new MixingAudioInputStream(audioFormat, list);
// Finally read data from the mixed AudioInputStream and give it to a SourceDataLine
nBytesRead = mixer.read(abData, 0, abData.length);
int nBytesWritten = line.write(abData, 0, nBytesRead);
4. Try AudioConcat, which has a -m option for mixing
java AudioConcat [ -D ] [ -c ] | [ -m ] | [ -f ] -o outputfile inputfile ...
Parameters.
-c
selects concatenation mode
-m
selects mixing mode
-f
selects float mixing mode
-o outputfile
The filename of the output file
inputfile
the name(s) of input file(s)
5. You could use an ffmpeg Android wrapper, using the syntax and approach explained here
This guy used the JLayer library in a project quite similar to yours. He also gives you a guide on how to integrate that library into your Android application by directly recompiling the jar.
Paraphrasing his code, it is very easy to accomplish your task:
public static byte[] decode(String path, int startMs, int maxMs)
        throws IOException, com.mindtherobot.libs.mpg.DecoderException {
    ByteArrayOutputStream outStream = new ByteArrayOutputStream(1024);
    float totalMs = 0;
    boolean seeking = true;
    File file = new File(path);
    InputStream inputStream = new BufferedInputStream(new FileInputStream(file), 8 * 1024);
    try {
        Bitstream bitstream = new Bitstream(inputStream);
        Decoder decoder = new Decoder();
        boolean done = false;
        while (!done) {
            Header frameHeader = bitstream.readFrame();
            if (frameHeader == null) {
                done = true;
            } else {
                totalMs += frameHeader.ms_per_frame();
                if (totalMs >= startMs) {
                    seeking = false;
                }
                if (!seeking) {
                    SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
                    if (output.getSampleFrequency() != 44100
                            || output.getChannelCount() != 2) {
                        throw new com.mindtherobot.libs.mpg.DecoderException("mono or non-44100 MP3 not supported");
                    }
                    short[] pcm = output.getBuffer();
                    for (short s : pcm) {
                        outStream.write(s & 0xff);
                        outStream.write((s >> 8) & 0xff);
                    }
                }
                if (totalMs >= (startMs + maxMs)) {
                    done = true;
                }
            }
            bitstream.closeFrame();
        }
        return outStream.toByteArray();
    } catch (BitstreamException e) {
        throw new IOException("Bitstream error: " + e);
    } catch (DecoderException e) {
        Log.w(TAG, "Decoder error", e);
        throw new com.mindtherobot.libs.mpg.DecoderException(e);
    } finally {
        IOUtils.safeClose(inputStream);
    }
}
public static byte[] mix(String path1, String path2)
        throws IOException, com.mindtherobot.libs.mpg.DecoderException {
    byte[] pcm1 = decode(path1, 0, 60000);
    byte[] pcm2 = decode(path2, 0, 60000);
    int len1 = pcm1.length;
    int len2 = pcm2.length;
    byte[] pcmL;  // the longer buffer
    byte[] pcmS;  // the shorter buffer
    int lenL;     // length of the longest
    int lenS;     // length of the shortest
    if (len2 > len1) {
        lenL = len2;
        pcmL = pcm2;
        lenS = len1;
        pcmS = pcm1;
    } else {
        lenL = len1;
        pcmL = pcm1;
        lenS = len2;
        pcmS = pcm2;
    }
    for (int idx = 0; idx < lenL; idx++) {
        int sample;
        if (idx >= lenS) {
            sample = pcmL[idx];
        } else {
            sample = pcmL[idx] + pcmS[idx];
        }
        sample = (int) (sample * .71);   // attenuate to leave headroom
        if (sample > 127) sample = 127;  // clip to the byte range
        if (sample < -128) sample = -128;
        pcmL[idx] = (byte) sample;
    }
    return pcmL;
}
Note that I added attenuation and clipping in the last lines: you always have to do both when mixing two waveforms.
If you don't have memory/time constraints, you can build an int[] of the sums of the samples and then work out the best attenuation to avoid clipping.
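A sketch of that two-pass idea, applied to the same byte-wise samples as mix() above: sum into an int[], find the peak, then apply just enough attenuation to stay within range:
    // Two-pass variant: sum first, then attenuate only as much as needed to avoid clipping.
    static byte[] mixNormalized(byte[] pcm1, byte[] pcm2) {
        int len = Math.max(pcm1.length, pcm2.length);
        int[] sum = new int[len];
        int peak = 1;
        for (int i = 0; i < len; i++) {
            int s1 = (i < pcm1.length) ? pcm1[i] : 0;
            int s2 = (i < pcm2.length) ? pcm2[i] : 0;
            sum[i] = s1 + s2;
            peak = Math.max(peak, Math.abs(sum[i]));
        }
        float gain = Math.min(1f, 127f / peak);   // attenuate only if the summed signal would clip
        byte[] out = new byte[len];
        for (int i = 0; i < len; i++) {
            out[i] = (byte) Math.round(sum[i] * gain);
        }
        return out;
    }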
To merge (overlap) two sound files, you can use this FFMPEG library.
Here is the documentation.
In their sample you can just enter the command you want, so let's talk about the command that we need:
-i [FIRST_FILE_PATH] -i [SECOND_FILE_PATH] -filter_complex amerge -ac 2 -c:a libmp3lame -q:a 4 [OUTPUT_FILE_PATH]
For the first and second file paths, you will get the absolute path of the sound file:
1- If it is on storage, it is a subfolder of Environment.getExternalStorageDirectory().getAbsolutePath()
2- If it is in assets, it should be a subfolder of file:///android_asset/
For the output path, make sure to add the extension, e.g.
String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/File Name.mp3";
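Putting it together, you can build the command from the resolved paths; the execute call at the end is a placeholder for whichever run/execute method the wrapper you choose exposes:
    // Build the amerge command from concrete paths (file names here are just examples).
    String first = Environment.getExternalStorageDirectory().getAbsolutePath() + "/first.mp3";
    String second = "file:///android_asset/second.mp3";
    String output = Environment.getExternalStorageDirectory().getAbsolutePath() + "/mixed.mp3";

    String cmd = "-i " + first + " -i " + second
            + " -filter_complex amerge -ac 2 -c:a libmp3lame -q:a 4 " + output;

    ffmpegWrapper.execute(cmd);   // placeholder: substitute the wrapper library's actual execute/run call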
I didn't find a clean solution, but we can do a trick here.. :)
You can assign the two MP3 files to two different MediaPlayer objects, then play both files at the same time with a button. Compare both MP3 files to find the longest duration, and after that use an AudioRecorder to record for that duration. That will solve your problem. I know it's not the right way, but I hope it helps you.. :)
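A minimal sketch of the two-player part, assuming both files are reachable through a Uri (the re-recording step is not shown):
    // Play both files at the same time and find out which one is longer.
    MediaPlayer p1 = MediaPlayer.create(context, Uri.parse(path1));
    MediaPlayer p2 = MediaPlayer.create(context, Uri.parse(path2));

    int longestMs = Math.max(p1.getDuration(), p2.getDuration());  // record for this duration

    p1.start();
    p2.start();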

Why does SKIA not use a custom FilterInputStream?

I'm trying to decode a bitmap from an extended FilterInputStream. I have to perform on-the-fly byte manipulation on the image data to provide a decodable image to SKIA; however, it seems like SKIA ignores my custom InputStream and initializes one of its own...
When I run my test application, attempting to load a 2 MB JPEG results in ObfuscatedInputStream.read([]) being called only once from BitmapFactory.decodeStream().
It seems like once the type of file is determined from the first 16 kB of data retrieved from my ObfuscatedInputStream, it initializes its own native stream and reads from that, effectively rendering all changes I make to how the input stream should work useless...
Here is the buffered read function in my extended FilterInputStream class. The Log.d at the top of the function is only executed once.
@Override
public int read(byte b[], int off, int len) throws IOException
{
    Log.d(TAG, "called read[] with aval + " + super.available() + " len " + len);
    int numBytesRead = -1;
    if (pos == 0)
    {
        numBytesRead = fill(b);
        if (numBytesRead < len)
        {
            int j;
            numBytesRead += ((j = super.read(b, numBytesRead, len - numBytesRead)) == -1) ? 0 : j;
        }
    }
    else
        numBytesRead = super.read(b, 0, len);
    if (numBytesRead > -1)
        pos += numBytesRead;
    Log.d(TAG, "actually read " + numBytesRead);
    return numBytesRead;
}
Has anyone ever encountered this issue? It seems like the only way to get my desired behavior is to rewrite portions of the SKIA library... I would really like to know what the point of the InputStream parameter is if the native implementation initializes a stream of its own...
It turns out that it wasn't able to detect that it was an actual image from the first 1024 bytes it takes in. If it doesn't detect that the file is an actual image, it will not bother decoding the rest, hence read[] only being called once.

RTP on Android MediaPlayer

I've implemented RTSP on Android MediaPlayer using VLC as RTSP server with this code:
# vlc -vvv /home/marco/Videos/pippo.mp4 --sout #rtp{dst=192.168.100.246,port=6024-6025,sdp=rtsp://192.168.100.243:8080/test.sdp}
and on the Android project:
Uri videoUri = Uri.parse("rtsp://192.168.100.242:8080/test.sdp");
videoView.setVideoURI(videoUri);
videoView.start();
This works fine, but I'd also like to play a live RTP stream, so I copied the SDP file onto the sdcard (/mnt/sdcard/test.sdp) and set up VLC with:
# vlc -vvv /home/marco/Videos/pippo.mp4 --sout #rtp{dst=192.168.100.249,port=6024-6025}
I tried to play the RTP stream by setting the path of the SDP file locally:
Uri videoUri = Uri.parse("/mnt/sdcard/test.sdp");
videoView.setVideoURI(videoUri);
videoView.start();
But I got an error:
D/MediaPlayer( 9616): Couldn't open file on client side, trying server side
W/MediaPlayer( 9616): info/warning (1, 26)
I/MediaPlayer( 9616): Info (1,26)
E/PlayerDriver( 76): Command PLAYER_INIT completed with an error or info PVMFFailure
E/MediaPlayer( 9616): error (1, -1)
E/MediaPlayer( 9616): Error (1,-1)
D/VideoView( 9616): Error: 1,-1
Does anyone know where the problem is? Am I wrong, or is it not possible to play RTP on MediaPlayer?
Cheers
Giorgio
I have a partial solution for you.
I'm currently working on an R&D project involving RTP streaming of media from a server to Android clients.
By doing this work, I contribute to my own library called smpte2022lib, which you may find here:
http://sourceforge.net/projects/smpte-2022lib/.
With the help of such a library (the Java implementation is currently the best one) you may be able to parse RTP multicast streams coming from professional streaming equipment, VLC RTP sessions, ...
I have already tested it successfully with streams coming from captured professional RTP streams with SMPTE-2022 2D-FEC and with simple streams generated with VLC.
Unfortunately I cannot put a code snippet here as the project using it is currently under copyright, but I assure you that you can use it simply by parsing UDP streams with the help of the RtpPacket constructor.
If the packets are valid RTP packets (the bytes), they will be decoded as such.
At the moment, I wrap the call to RtpPacket's constructor in a thread that stores the decoded payload as a media file. Then I call the VideoView with this file as a parameter.
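An independent sketch of the receive loop described above: read UDP datagrams and hand each one to the library's RTP parser. RtpPacket.fromBytes is an assumption here; check smpte2022lib for the actual constructor or factory method, and writePayloadToFile stands in for the code that appends the payload to the media file:
    // Receive UDP datagrams and parse each one as an RTP packet (parser call is an assumption).
    DatagramSocket socket = new DatagramSocket(6024);            // RTP port from the VLC example above
    byte[] buf = new byte[2048];
    DatagramPacket datagram = new DatagramPacket(buf, buf.length);
    while (running) {
        socket.receive(datagram);                                // blocks until a packet arrives
        byte[] raw = Arrays.copyOf(datagram.getData(), datagram.getLength());
        RtpPacket rtp = RtpPacket.fromBytes(raw);                // hypothetical smpte2022lib call
        writePayloadToFile(rtp);                                 // append the decoded payload to a media file
    }
    socket.close();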
Crossing fingers ;-)
Kind Regards,
David Fischer
It is possible in Android (not with MediaPlayer, but with other stuff further down the stack), but do you really want to pursue RTSP/RTP when the rest of the media ecosystem does not?
IMO there are far better media/stream approaches under the umbrella of HTML5/WebRTC. For example, look at what 'Ondello' is doing with streams.
That said, here is some old project code for Android/RTSP/SDP/RTP using 'netty' and 'efflux'. It will negotiate some portions of 'Sessions' with SDP file providers. I can't remember whether it would actually play the audio portion of YouTube/RTSP stuff, but that was my goal at the time. (I think it worked using the AMR-NB codec, but there were tons of issues and I dropped RTSP on Android like a bad habit!)
on Git....
@Override
public void mediaDescriptor(Client client, String descriptor)
{
// searches for control: session and media arguments.
final String target = "control:";
Log.d(TAG, "Session Descriptor\n" + descriptor);
int position = -1;
while((position = descriptor.indexOf(target)) > -1)
{
descriptor = descriptor.substring(position + target.length());
resourceList.add(descriptor.substring(0, descriptor.indexOf('\r')));
}
}
private int nextPort()
{
return (port += 2) - 2;
}
private void getRTPStream(TransportHeader transport){
String[] words;
// only want 2000 part of 'client_port=2000-2001' in the Transport header in the response
words = transport.getParameter("client_port").substring(transport.getParameter("client_port").indexOf("=") +1).split("-");
port_lc = Integer.parseInt(words[0]);
words = transport.getParameter("server_port").substring(transport.getParameter("server_port").indexOf("=") +1).split("-");
port_rm = Integer.parseInt(words[0]);
source = transport.getParameter("source").substring(transport.getParameter("source").indexOf("=") +1);
ssrc = transport.getParameter("ssrc").substring(transport.getParameter("ssrc").indexOf("=") +1);
// assume dynamic Packet type = RTP , 99
getRTPStream(session, source, port_lc, port_rm, 99);
//getRTPStream("sessiona", source, port_lc, port_rm, 99);
Log.d(TAG, "raw parms " +port_lc +" " +port_rm +" " +source );
// String[] words = session.split(";");
Log.d(TAG, "session: " +session);
Log.d(TAG, "transport: " +transport.getParameter("client_port")
+" " +transport.getParameter("server_port") +" " +transport.getParameter("source")
+" " +transport.getParameter("ssrc"));
}
private void getRTPStream(String session, String source, int portl, int portr, int payloadFormat ){
// what do u do with ssrc?
InetAddress addr;
try {
addr = InetAddress.getLocalHost();
// Get IP Address
// LAN_IP_ADDR = addr.getHostAddress();
LAN_IP_ADDR = "192.168.1.125";
Log.d(TAG, "using client IP addr " +LAN_IP_ADDR);
} catch (UnknownHostException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
final CountDownLatch latch = new CountDownLatch(2);
RtpParticipant local1 = RtpParticipant.createReceiver(new RtpParticipantInfo(1), LAN_IP_ADDR, portl, portl+=1);
// RtpParticipant local1 = RtpParticipant.createReceiver(new RtpParticipantInfo(1), "127.0.0.1", portl, portl+=1);
RtpParticipant remote1 = RtpParticipant.createReceiver(new RtpParticipantInfo(2), source, portr, portr+=1);
remote1.getInfo().setSsrc( Long.parseLong(ssrc, 16));
session1 = new SingleParticipantSession(session, payloadFormat, local1, remote1);
Log.d(TAG, "remote ssrc " +session1.getRemoteParticipant().getInfo().getSsrc());
session1.init();
session1.addDataListener(new RtpSessionDataListener() {
@Override
public void dataPacketReceived(RtpSession session, RtpParticipantInfo participant, DataPacket packet) {
// System.err.println("Session 1 received packet: " + packet + "(session: " + session.getId() + ")");
//TODO close the file, flush the buffer
// if (_sink != null) _sink.getPackByte(packet);
getPackByte(packet);
// System.err.println("Ssn 1 packet seqn: typ: datasz " +packet.getSequenceNumber() + " " +packet.getPayloadType() +" " +packet.getDataSize());
// System.err.println("Ssn 1 packet sessn: typ: datasz " + session.getId() + " " +packet.getPayloadType() +" " +packet.getDataSize());
// latch.countDown();
}
});
// DataPacket packet = new DataPacket();
// packet.setData(new byte[]{0x45, 0x45, 0x45, 0x45});
// packet.setSequenceNumber(1);
// session1.sendDataPacket(packet);
// try {
// latch.await(2000, TimeUnit.MILLISECONDS);
// } catch (Exception e) {
// fail("Exception caught: " + e.getClass().getSimpleName() + " - " + e.getMessage());
// }
}
//TODO below should collaborate with the audioTrack object and should write to the AT buffr
// audioTrack write was blocking forever
public void getPackByte(DataPacket packet) {
//TODO this is getting called but not sure why only one time
// or whether it is stalling in mid-exec??
//TODO on firstPacket write bytes and start audioTrack
// AMR-nb frames at 12.2 KB or format type 7 frames are handled .
// after the normal header, the getDataArray contains extra 10 bytes of dynamic header that are bypassed by 'limit'
// real value for the frame separator comes in the input stream at position 1 in the data array
// returned by
// int newFrameSep = 0x3c;
// bytes avail = packet.getDataSize() - limit;
// byte[] lbuf = new byte[packet.getDataSize()];
// if ( packet.getDataSize() > 0)
// lbuf = packet.getDataAsArray();
//first frame includes the 1 byte frame header whose value should be used
// to write subsequent frame separators
Log.d(TAG, "getPackByt start and play");
if(!started){
Log.d(TAG, " PLAY audioTrak");
track.play();
started = true;
}
// track.write(packet.getDataAsArray(), limit, (packet.getDataSize() - limit));
track.write(packet.getDataAsArray(), 0, packet.getDataSize() );
Log.d(TAG, "getPackByt aft write");
// if(!started && nBytesRead > minBufferSize){
// Log.d(TAG, " PLAY audioTrak");
// track.play();
// started = true;}
nBytesRead += packet.getDataSize();
if (nBytesRead % 500 < 375) Log.d(TAG, " getPackByte plus 5K received");
}
}
Actually it is possible to play RTSP/RTP streams on Android by using a modified version of ExoPlayer, which officially doesn't support RTSP/RTP (issue 55); however, there is an active pull request #3854 to add this support.
In the meantime, you can clone the original author's ExoPlayer fork, which does support RTSP (branch dev-v2-rtsp):
git clone -b dev-v2-rtsp https://github.com/tresvecesseis/ExoPlayer.git
I've tested it and it works perfectly. The authors are working actively to fix the issues reported by many users, and I hope that RTSP support at some point becomes part of the official ExoPlayer.
Unfortunately it is not possible to play an RTP stream with the Android MediaPlayer.
Solutions to this problem include decoding the RTP stream with ffmpeg. Tutorials on how to compile ffmpeg for Android can be found on the Web.

real time audio recording and sending

I'm trying to capture audio from the microphone, and no matter what buffer size (bufferSizeInBytes) I set at construction time of AudioRecord, when I do an AudioRecord.read I always get 8192 bytes of audio data (128 ms). I would like AudioRecord.read to be able to read 40 ms of data (2560 bytes). The only working sample code that I can find is Sipdroid (RtpStreamSender.java), which I haven't tried.
Here's the relevant piece of my code:
public void run()
{
    running = true;
    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
    try {
        frameSize = AudioRecord.getMinBufferSize(samplingRate,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
        Log.i(TAG, "trying to capture " + String.format("%d", frameSize) + " bytes");
        record = new AudioRecord(MediaRecorder.AudioSource.MIC, samplingRate,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, frameSize);
        record.startRecording();
        byte[] buffer = new byte[frameSize];
        while (running)
        {
            record.read(buffer, 0, frameSize);
            Log.i(TAG, "Captured " + String.format("%d", frameSize) + " bytes of audio");
        }
        record.stop();
        record.release();
    } catch (Throwable t) {
        Log.e(TAG, "Failed to capture audio");
    }
}
My questions/comments are:
Is this a limitation of the AudioRecord class and/or the particular device?
I don't see a method to query the supported sampling rates (or the granularity of bufferSizeInBytes) a given device can handle. It would be nice if there were one.
Can we use AudioRecord.OnRecordPositionUpdateListener, and how?
In the above code, encoding PCM in 8 bit is not working. Why?
If anyone has tried the Sipdroid code, please help me.
Thanks in advance,
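Regarding the fixed 40 ms reads: in blocking mode AudioRecord.read() blocks until it has read the number of bytes you request, independent of bufferSizeInBytes (which only sizes the internal buffer), so you can keep a large internal buffer and still pull 2560 bytes per call. A minimal sketch, assuming 32 kHz mono 16-bit PCM (so 40 ms = 2560 bytes); running and sendPacket are placeholders:
    // Read fixed 40 ms chunks: size the internal buffer generously, but ask read() for chunkBytes at a time.
    int samplingRate = 32000;                                    // 32 kHz, mono, 16-bit -> 40 ms = 2560 bytes
    int chunkBytes = 2560;
    int minBuf = AudioRecord.getMinBufferSize(samplingRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, samplingRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
            Math.max(minBuf, 4 * chunkBytes));                   // internal buffer size, not the read size
    record.startRecording();

    byte[] chunk = new byte[chunkBytes];
    while (running) {
        int n = record.read(chunk, 0, chunkBytes);               // blocks until 2560 bytes are available
        if (n > 0) {
            sendPacket(chunk, n);                                // placeholder: hand the 40 ms frame to the sender
        }
    }
    record.stop();
    record.release();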
