Encoding AAC Audio using AudioRecord and MediaCodec on Android

I am trying to encode AAC audio using Android's AudioRecord and MediaCodec. I have created an encoder class very similar to the one in Encoding H.264 from camera with Android MediaCodec. With this class, I create an instance of AudioRecord and tell it to read its byte[] data off to the AudioEncoder (audioEncoder.offerEncoder(Data)).
while(isRecording)
{
audioRecord.read(Data, 0, Data.length);
audioEncoder.offerEncoder(Data);
}
Here are the settings for my AudioRecord:
int audioSource = MediaRecorder.AudioSource.MIC;
int sampleRateInHz = 44100;
int channelConfig = AudioFormat.CHANNEL_IN_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int bufferSizeInBytes = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
I successfully collected some byte[] data and wrote it to a local file. Unfortunately, the file is not playable. I did some more searching online and found a related post (How to generate the AAC ADTS elementary stream with Android MediaCodec). Others who had a similar problem say the main issue is that "The MediaCodec encoder generates the raw AAC stream. The raw AAC stream needs to be converted into a playable format, such as the ADTS stream". So I tried to add the ADTS header. Nevertheless, after I added the ADTS header (commented out in the code below), my AudioEncoder wouldn't even write the output audio file.
Is there anything I'm missing? Is my setup correct?
Any suggestions, comments, and opinions are welcome and very much appreciated. Thanks, guys!
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.os.Environment;
import android.util.Log;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
public class AudioEncoder {
private MediaCodec mediaCodec;
private BufferedOutputStream outputStream;
private String mediaType = "audio/mp4a-latm";
public AudioEncoder() {
File f = new File(Environment.getExternalStorageDirectory(), "Download/audio_encoded.aac");
touch(f);
try {
outputStream = new BufferedOutputStream(new FileOutputStream(f));
Log.e("AudioEncoder", "outputStream initialized");
} catch (Exception e){
e.printStackTrace();
}
mediaCodec = MediaCodec.createEncoderByType(mediaType);
final int kSampleRates[] = { 8000, 11025, 22050, 44100, 48000 };
final int kBitRates[] = { 64000, 128000 };
MediaFormat mediaFormat = MediaFormat.createAudioFormat(mediaType,kSampleRates[3],1);
mediaFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, kBitRates[1]);
mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mediaCodec.start();
}
public void close() {
try {
mediaCodec.stop();
mediaCodec.release();
outputStream.flush();
outputStream.close();
} catch (Exception e){
e.printStackTrace();
}
}
// called with the data from AudioRecord's read()
public synchronized void offerEncoder(byte[] input) {
Log.e("AudioEncoder", input.length + " is coming");
try {
ByteBuffer[] inputBuffers = mediaCodec.getInputBuffers();
ByteBuffer[] outputBuffers = mediaCodec.getOutputBuffers();
int inputBufferIndex = mediaCodec.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0) {
ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(input);
mediaCodec.queueInputBuffer(inputBufferIndex, 0, input.length, 0, 0);
}
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo,0);
////trying to add a ADTS
// while (outputBufferIndex >= 0) {
// int outBitsSize = bufferInfo.size;
// int outPacketSize = outBitsSize + 7; // 7 is ADTS size
// ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
//
// outputBuffer.position(bufferInfo.offset);
// outputBuffer.limit(bufferInfo.offset + outBitsSize);
//
// byte[] outData = new byte[outPacketSize];
// addADTStoPacket(outData, outPacketSize);
//
// outputBuffer.get(outData, 7, outBitsSize);
// outputBuffer.position(bufferInfo.offset);
//
//// byte[] outData = new byte[bufferInfo.size];
// outputStream.write(outData, 0, outData.length);
// Log.e("AudioEncoder", outData.length + " bytes written");
//
// mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
// outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
//
// }
//Without ADTS header
while (outputBufferIndex >= 0) {
ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
byte[] outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
outputStream.write(outData, 0, outData.length);
Log.e("AudioEncoder", outData.length + " bytes written");
mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
}
} catch (Throwable t) {
t.printStackTrace();
}
}
/**
* Add ADTS header at the beginning of each and every AAC packet.
* This is needed as MediaCodec encoder generates a packet of raw
* AAC data.
*
* Note the packetLen must count in the ADTS header itself.
**/
private void addADTStoPacket(byte[] packet, int packetLen) {
int profile = 2; //AAC LC
//39=MediaCodecInfo.CodecProfileLevel.AACObjectELD;
int freqIdx = 4; //44.1KHz
int chanCfg = 2; //CPE
// fill in ADTS data
packet[0] = (byte)0xFF;
packet[1] = (byte)0xF9;
packet[2] = (byte)(((profile-1)<<6) + (freqIdx<<2) +(chanCfg>>2));
packet[3] = (byte)(((chanCfg&3)<<6) + (packetLen>>11));
packet[4] = (byte)((packetLen&0x7FF) >> 3);
packet[5] = (byte)(((packetLen&7)<<5) + 0x1F);
packet[6] = (byte)0xFC;
}
public void touch(File f)
{
try {
if(!f.exists())
f.createNewFile();
} catch (IOException e) {
e.printStackTrace();
}
}
}

You can use Android's MediaMuxer to package the raw streams created by MediaCodec into a .mp4 file. Bonus: AAC packets contained in a .mp4 don't require the ADTS header.
I've got a working example of this technique on Github.
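For reference, here is a minimal sketch of that approach (API 18+). This is not the author's Github example; the class and field names are illustrative. Note that the muxer needs real presentation timestamps, so the question's queueInputBuffer(..., 0, 0) call would have to pass a computed presentationTimeUs instead of 0.

import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.io.IOException;
import java.nio.ByteBuffer;

public class AacMp4Writer {
    private final MediaMuxer muxer;
    private int trackIndex = -1;
    private boolean started = false;

    public AacMp4Writer(String outputPath) throws IOException {
        muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    }

    // Call when dequeueOutputBuffer returns INFO_OUTPUT_FORMAT_CHANGED.
    public void onOutputFormatChanged(MediaFormat format) {
        trackIndex = muxer.addTrack(format);
        muxer.start();
        started = true;
    }

    // Call for each encoded output buffer instead of writing to a FileOutputStream.
    public void writeSample(ByteBuffer encodedData, MediaCodec.BufferInfo info) {
        if (started && info.size > 0
                && (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0) {
            muxer.writeSampleData(trackIndex, encodedData, info);
        }
    }

    public void release() {
        if (started) {
            muxer.stop();
        }
        muxer.release();
    }
}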

Check "testEncoder" method here for how to use MediaCodec as Encoder properly.
after that
In your code,
your input(audio recorder) is configured for single audio channel while your output(ADTS packet header) is set for two channels(chanCfg = 2).
also if you change your input samplerate (currently 44.1khz) you also have to change freqIdx flag in ADTS packet header. check this link for valid values.
And ADTS header profile flag is set to "AAC LC", you can also found this under
MediaCodecInfo.CodecProfileLevel.
you have set profile = 2 that is MediaCodecInfo.CodecProfileLevel.AACObjectLC
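Putting those corrections together, a version of addADTStoPacket matched to the question's mono, 44.1 kHz, AAC LC setup might look like this (same header layout as the question's code, only the channel configuration changed):

private void addADTStoPacket(byte[] packet, int packetLen) {
    int profile = 2;  // AAC LC = MediaCodecInfo.CodecProfileLevel.AACObjectLC
    int freqIdx = 4;  // 44.1 kHz in the MPEG-4 sampling-frequency table
    int chanCfg = 1;  // 1 = mono, matching CHANNEL_IN_MONO in the question
    // fill in ADTS data (packetLen must include the 7-byte header)
    packet[0] = (byte) 0xFF;
    packet[1] = (byte) 0xF9;
    packet[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
    packet[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));
    packet[4] = (byte) ((packetLen & 0x7FF) >> 3);
    packet[5] = (byte) (((packetLen & 7) << 5) + 0x1F);
    packet[6] = (byte) 0xFC;
}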

Related

Android : how to extract audio samples from downloading audio file with MediaExtractor

I'm making a very simple music app, and I'm trying to figure out how to do this:
download an audio file and record the file locally
at any moment, extract the audio frames from the file (during or after the download)
1) For the downloading part, I use Retrofit, following this example. To make it short, it allows me to download a file and record it locally while it's downloading (so I don't have to wait for the end of the download to access the file's data).
2) For the frame extracting part, I use MediaExtractor and MediaCodec like this:
MediaCodec codec;
MediaExtractor extractor;
MediaFormat format;
ByteBuffer[] codecInputBuffers;
ByteBuffer[] codecOutputBuffers;
Boolean sawInputEOS = false;
Boolean sawOutputEOS = false;
AudioTrack mAudioTrack;
MediaCodec.BufferInfo info;
File outputFile = null;
FileDescriptor fileDescriptor = null;
@Override
protected void onCreate(Bundle savedInstanceState) {
...
// the file being downloaded:
outputFile = new File(directory, "test.mp3");
try {
FileInputStream fileInputStream = new FileInputStream(outputFile);
fileDescriptor = fileInputStream.getFD();
}
catch (Exception e) {}
}
// Called once when there is enough data to extract.
private void onAudioFileReady() {
Thread thread = new Thread(new Runnable() {
@Override
public void run() {
// thread :
Process.setThreadPriority(Process.THREAD_PRIORITY_AUDIO);
// audio :
extractor = new MediaExtractor();
// the extractor only extracts the already downloaded part of the file:
try {
// extractor.setDataSource(url);
// extractor.setDataSource(outputFile.getAbsolutePath());
// extractor.setDataSource(MainActivity.this, Uri.parse(outputFile.getAbsolutePath()), null);
extractor.setDataSource(fileDescriptor);
}
catch (IOException e) {}
format = extractor.getTrackFormat(0);
String mime = format.getString(MediaFormat.KEY_MIME);
int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
try {
codec = MediaCodec.createDecoderByType(mime);
}
catch (IOException e) {}
codec.configure(format, null, null, 0);
codec.start();
codecInputBuffers = codec.getInputBuffers();
codecOutputBuffers = codec.getOutputBuffers();
extractor.selectTrack(0);
int minBufferSize = AudioTrack.getMinBufferSize(
sampleRate,
AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT);
mAudioTrack = new AudioTrack(
AudioManager.STREAM_MUSIC,
sampleRate,
AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT,
minBufferSize,
AudioTrack.MODE_STREAM
);
info = new MediaCodec.BufferInfo();
mAudioTrack.play();
do {
input();
output();
}
while (!sawInputEOS);
}
});
thread.start();
}
private void input() {
int inputBufferIndex = codec.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0) {
ByteBuffer byteBuffer = codecInputBuffers[inputBufferIndex];
int sampleSize = extractor.readSampleData(byteBuffer, 0);
long presentationTimeUs = 0;
if (sampleSize < 0) {
Log.w(LOG_TAG, "Saw input end of stream!");
sampleSize = 0;
}
else {
presentationTimeUs = extractor.getSampleTime();
}
codec.queueInputBuffer(inputBufferIndex,
0,
sampleSize,
presentationTimeUs,
sawInputEOS ? MediaCodec.BUFFER_FLAG_END_OF_STREAM : 0);
// doesn't seem to work:
extractor.advance();
}
}
private void output() {
final int res = codec.dequeueOutputBuffer(info, -1);
if (res >= 0) {
ByteBuffer buf = codecOutputBuffers[res];
final byte[] chunk = new byte[info.size];
buf.get(chunk);
buf.clear();
if (chunk.length > 0) {
mAudioTrack.write(chunk, 0, chunk.length);
}
codec.releaseOutputBuffer(res, false);
if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
sawOutputEOS = true;
}
}
else if (res == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
codecOutputBuffers = codec.getOutputBuffers();
}
else if (res == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
final MediaFormat oformat = codec.getOutputFormat();
mAudioTrack.setPlaybackRate(oformat.getInteger(MediaFormat.KEY_SAMPLE_RATE));
}
}
What it does:
When onAudioFileReady() is called, this code extracts and plays the audio samples of the file, but only the ones that have already been downloaded. When it reaches the end of the already-downloaded part, the MediaExtractor stops (it looks like extractor.advance() doesn't want to continue the extraction...), even if more data has become available since.
What I want to achieve:
I want to be able to continue the extraction of the audio samples of the file, as long as there is enough data for it of course.
IMPORTANT:
At that point, you may ask why I don't just use extractor.setDataSource(url). Here are the reasons why:
I want to save the audio file locally, so I can play it later
I want to be able to play the song, even long after the beginning of the download
Does anyone know how to achieve that? Thanks in advance for your help.
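No accepted solution appears here, but one workaround consistent with the behaviour described above would be to rebuild the extractor once it runs out of data while the download is still in progress. This is an untested sketch; isDownloadComplete() and lastSampleTimeUs are hypothetical names, and lastSampleTimeUs would have to be updated with extractor.getSampleTime() after every successful readSampleData() call in input():

private long lastSampleTimeUs = 0; // updated after each successful readSampleData()

// Returns true if extraction can continue on the grown file.
private boolean tryResumeExtraction() throws IOException {
    if (isDownloadComplete()) {
        return false; // genuine end of stream
    }
    extractor.release();
    extractor = new MediaExtractor();
    extractor.setDataSource(outputFile.getAbsolutePath());
    extractor.selectTrack(0);
    // Skip forward past the samples that were already queued; depending on
    // sync-sample granularity, a short overlap or gap is possible at the seam.
    extractor.seekTo(lastSampleTimeUs, MediaExtractor.SEEK_TO_NEXT_SYNC);
    return true;
}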

How to convert .pcm file to .wav or .mp3?

I am currently developing an Android Application that has audio recording and playing. I am new to dealing with audio and I'm having some trouble with encoding and formats.
I am able to record and play the audio in my application, but when I export it, I am not able to play the audio elsewhere. The only way I found was exporting my .pcm file and converting it with Audacity.
This is my code to record the audio:
private Thread recordingThread;
private AudioRecord mRecorder;
private boolean isRecording = false;
private short[] sData = new short[Constants.BufferElements2Rec]; // buffer that read() fills
private void startRecording() {
mRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
Constants.RECORDER_SAMPLERATE, Constants.RECORDER_CHANNELS,
Constants.RECORDER_AUDIO_ENCODING, Constants.BufferElements2Rec * Constants.BytesPerElement);
mRecorder.startRecording();
isRecording = true;
recordingThread = new Thread(new Runnable() {
public void run() {
writeAudioDataToFile();
}
}, "AudioRecorder Thread");
recordingThread.start();
}
private void writeAudioDataToFile() {
// Write the output audio in byte
FileOutputStream os = null;
try {
os = new FileOutputStream(mFileName);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
while (isRecording) {
// gets the voice output from microphone to byte format
mRecorder.read(sData, 0, Constants.BufferElements2Rec);
try {
// // writes the data to file from buffer
// // stores the voice buffer
byte bData[] = short2byte(sData);
os.write(bData, 0, Constants.BufferElements2Rec * Constants.BytesPerElement);
} catch (IOException e) {
e.printStackTrace();
}
}
try {
os.close();
} catch (IOException e) {
e.printStackTrace();
}
}
To play the recorded audio, the code is:
private void startPlaying() {
new Thread(new Runnable() {
public void run() {
try {
File file = new File(mFileName);
byte[] audioData = null;
InputStream inputStream = new FileInputStream(mFileName);
audioData = new byte[Constants.BufferElements2Rec];
mPlayer = new AudioTrack(AudioManager.STREAM_MUSIC, Constants.RECORDER_SAMPLERATE,
AudioFormat.CHANNEL_OUT_MONO, Constants.RECORDER_AUDIO_ENCODING,
Constants.BufferElements2Rec * Constants.BytesPerElement, AudioTrack.MODE_STREAM);
final float duration = (float) file.length() / Constants.RECORDER_SAMPLERATE / 2;
Log.i(TAG, "PLAYBACK AUDIO");
Log.i(TAG, String.valueOf(duration));
mPlayer.setPositionNotificationPeriod(Constants.RECORDER_SAMPLERATE / 10);
mPlayer.setNotificationMarkerPosition(Math.round(duration * Constants.RECORDER_SAMPLERATE));
mPlayer.play();
int i = 0;
while ((i = inputStream.read(audioData)) != -1) {
try {
mPlayer.write(audioData, 0, i);
} catch (Exception e) {
Log.e(TAG, "Exception: " + e.getLocalizedMessage());
}
}
} catch (FileNotFoundException fe) {
Log.e(TAG, "File not found: " + fe.getLocalizedMessage());
} catch (IOException io) {
Log.e(TAG, "IO Exception: " + io.getLocalizedMessage());
}
}
}).start();
}
The constants defined in a Constants class are:
public class Constants {
final static public int RECORDER_SAMPLERATE = 44100;
final static public int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
final static public int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
final static public int BufferElements2Rec = 1024; // 2048 bytes per read; with 2-byte elements that is 1024 shorts
final static public int BytesPerElement = 2; // 2 bytes in 16bit format
}
If I export the file as it is and convert it with Audacity, it plays. I do, however, need to export it in a format that can be played directly.
I've seen answers suggesting implementing LAME and am currently working on that. I've also found an answer that converts the file using:
private File rawToWave(final File rawFile, final String filePath) throws IOException {
File waveFile = new File(filePath);
byte[] rawData = new byte[(int) rawFile.length()];
DataInputStream input = null;
try {
input = new DataInputStream(new FileInputStream(rawFile));
input.read(rawData);
} finally {
if (input != null) {
input.close();
}
}
DataOutputStream output = null;
try {
output = new DataOutputStream(new FileOutputStream(waveFile));
// WAVE header
// see http://ccrma.stanford.edu/courses/422/projects/WaveFormat/
writeString(output, "RIFF"); // chunk id
writeInt(output, 36 + rawData.length); // chunk size
writeString(output, "WAVE"); // format
writeString(output, "fmt "); // subchunk 1 id
writeInt(output, 16); // subchunk 1 size
writeShort(output, (short) 1); // audio format (1 = PCM)
writeShort(output, (short) 1); // number of channels
writeInt(output, Constants.RECORDER_SAMPLERATE); // sample rate
writeInt(output, Constants.RECORDER_SAMPLERATE * 2); // byte rate
writeShort(output, (short) 2); // block align
writeShort(output, (short) 16); // bits per sample
writeString(output, "data"); // subchunk 2 id
writeInt(output, rawData.length); // subchunk 2 size
// Audio data (conversion big endian -> little endian)
short[] shorts = new short[rawData.length / 2];
ByteBuffer.wrap(rawData).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
ByteBuffer bytes = ByteBuffer.allocate(shorts.length * 2);
for (short s : shorts) {
bytes.putShort(s);
}
output.write(bytes.array());
} finally {
if (output != null) {
output.close();
}
}
return waveFile;
}
private void writeInt(final DataOutputStream output, final int value) throws IOException {
output.write(value >> 0);
output.write(value >> 8);
output.write(value >> 16);
output.write(value >> 24);
}
private void writeShort(final DataOutputStream output, final short value) throws IOException {
output.write(value >> 0);
output.write(value >> 8);
}
private void writeString(final DataOutputStream output, final String value) throws IOException {
for (int i = 0; i < value.length(); i++) {
output.write(value.charAt(i));
}
}
But this, when exported, plays with the correct duration but just white noise.
Some of the answers that I've tried but wasn't able to work:
Android:Creating Wave file using Raw PCM, the wave file does not play
How to convert PCM raw data to mp3 file?
converting pcm file to mp3 using liblame in android
Can anyone point out the best solution? Is it really implementing LAME, or can it be done in a more straightforward way? If so, why does the code sample convert the file to just white noise?
You've got most of the code correct. The only issue that I can see is the part where you write the PCM data to the WAV file. This should be quite simple to do because WAV = Metadata + PCM (in that order). This should work:
private void rawToWave(final File rawFile, final File waveFile) throws IOException {
byte[] rawData = new byte[(int) rawFile.length()];
DataInputStream input = null;
try {
input = new DataInputStream(new FileInputStream(rawFile));
input.read(rawData);
} finally {
if (input != null) {
input.close();
}
}
DataOutputStream output = null;
try {
output = new DataOutputStream(new FileOutputStream(waveFile));
// WAVE header
// see http://ccrma.stanford.edu/courses/422/projects/WaveFormat/
writeString(output, "RIFF"); // chunk id
writeInt(output, 36 + rawData.length); // chunk size
writeString(output, "WAVE"); // format
writeString(output, "fmt "); // subchunk 1 id
writeInt(output, 16); // subchunk 1 size
writeShort(output, (short) 1); // audio format (1 = PCM)
writeShort(output, (short) 1); // number of channels
writeInt(output, 44100); // sample rate
writeInt(output, RECORDER_SAMPLERATE * 2); // byte rate
writeShort(output, (short) 2); // block align
writeShort(output, (short) 16); // bits per sample
writeString(output, "data"); // subchunk 2 id
writeInt(output, rawData.length); // subchunk 2 size
// Audio data (conversion big endian -> little endian)
short[] shorts = new short[rawData.length / 2];
ByteBuffer.wrap(rawData).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
ByteBuffer bytes = ByteBuffer.allocate(shorts.length * 2);
for (short s : shorts) {
bytes.putShort(s);
}
output.write(fullyReadFileToBytes(rawFile));
} finally {
if (output != null) {
output.close();
}
}
}
byte[] fullyReadFileToBytes(File f) throws IOException {
int size = (int) f.length();
byte bytes[] = new byte[size];
byte tmpBuff[] = new byte[size];
FileInputStream fis= new FileInputStream(f);
try {
int read = fis.read(bytes, 0, size);
if (read < size) {
int remain = size - read;
while (remain > 0) {
read = fis.read(tmpBuff, 0, remain);
System.arraycopy(tmpBuff, 0, bytes, size - remain, read);
remain -= read;
}
}
} catch (IOException e){
throw e;
} finally {
fis.close();
}
return bytes;
}
private void writeInt(final DataOutputStream output, final int value) throws IOException {
output.write(value >> 0);
output.write(value >> 8);
output.write(value >> 16);
output.write(value >> 24);
}
private void writeShort(final DataOutputStream output, final short value) throws IOException {
output.write(value >> 0);
output.write(value >> 8);
}
private void writeString(final DataOutputStream output, final String value) throws IOException {
for (int i = 0; i < value.length(); i++) {
output.write(value.charAt(i));
}
}
How to use
It's quite simple to use. Just call it like this:
File f1 = new File("/sdcard/44100Sampling-16bit-mono-mic.pcm"); // The location of your PCM file
File f2 = new File("/sdcard/44100Sampling-16bit-mono-mic.wav"); // The location where you want your WAV file
try {
rawToWave(f1, f2);
} catch (IOException e) {
e.printStackTrace();
}
How all this works
As you can see, the WAV header is the only difference between the WAV and PCM file formats. The assumption is that you are recording 16-bit PCM mono audio (which, according to your code, you are). The rawToWave function just neatly adds the headers to the WAV file, so that music players know what to expect when your file is opened, and then, after the headers, it just writes the PCM data.
Cool Tip
If you want to shift the pitch of your voice, or make a voice-changer app, all you have to do is increase/decrease the value in writeInt(output, 44100); // sample rate in your code. Changing it tells the player to play the file at a different rate, thereby changing the output pitch. Just a little extra 'good to know' thing. :)
I know it is late and you got your code working with MediaRecorder, but I thought I'd share my answer, as it took me a good amount of time to find it. :)
When you record your audio, the data is read as shorts from your AudioRecord object and then converted to bytes before being stored in the .pcm file.
Now, when you write the .wav file, you're doing the short conversion again. This is not required. So, if you remove the following block from your code and write the rawData directly to the end of the .wav file, it will work just fine.
short[] shorts = new short[rawData.length / 2];
ByteBuffer.wrap(rawData).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
ByteBuffer bytes = ByteBuffer.allocate(shorts.length * 2);
for (short s : shorts) {
bytes.putShort(s);
}
This is the piece of code you'll get after removing the duplicate block:
writeInt(output, rawData.length); // subchunk 2 size
// removed the duplicate short conversion
output.write(rawData);
For the record, I solved my need to record audio playable in common players by using MediaRecorder instead of AudioRecord.
To start recording:
MediaRecorder mRecorder = new MediaRecorder();
mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
mRecorder.setOutputFile(Environment.getExternalStorageDirectory()
.getAbsolutePath() + "/recording.3gp");
mRecorder.prepare();
mRecorder.start();
And to play the recording:
mPlayer = new MediaPlayer();
mPlayer.setDataSource(Environment.getExternalStorageDirectory()
.getAbsolutePath() + "/recording.3gp");
mPlayer.prepare();
mPlayer.start();
I tried the above audio recording code with writeAudioDataToFile(). It records and converts the audio into .wav format perfectly, but when I played the recorded audio, it was too fast: 5 seconds of audio finished in 2.5 seconds. Then I observed it was because of the short2byte() function.
Those who have the same problem should not use short2byte() and should instead write sData directly, as in os.write(sData, 0, Constants.BufferElements2Rec * Constants.BytesPerElement);, where sData is a byte[].
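For illustration, a minimal sketch of that change under the question's Constants, reading straight into a byte[] so no conversion step is needed:

byte[] sData = new byte[Constants.BufferElements2Rec * Constants.BytesPerElement];
while (isRecording) {
    // AudioRecord can fill a byte[] directly; no short2byte() step required
    int read = mRecorder.read(sData, 0, sData.length);
    if (read > 0) {
        os.write(sData, 0, read);
    }
}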

Decoding raw AAC with MediaCodec without using MediaExtractor

I successfully decoded and played an mp4 (AAC) file using MediaExtractor and MediaCodec with the code below. I want to decode raw AAC (in another file, with the same encoding format) to PCM. The problem is that I don't know how to set sampleSize and presentationTimeUs without MediaExtractor. How can I set these parameters without using MediaExtractor?
//songwav.mp4 file is created from PCM with this format
MediaFormat outputFormat = MediaFormat.createAudioFormat(
"audio/mp4a-latm", 44100, 2);
outputFormat.setInteger(MediaFormat.KEY_AAC_PROFILE,
MediaCodecInfo.CodecProfileLevel.AACObjectLC);
outputFormat.setInteger(MediaFormat.KEY_BIT_RATE,
128000);
//decoding
String inputfilePath = Environment.getExternalStorageDirectory()
.getPath() + "/" + "songwav.mp4";
String outputFilePath = Environment.getExternalStorageDirectory()
.getPath() + "/" + "songwavmp4.pcm";
OutputStream outputStream = new FileOutputStream(outputFilePath);
MediaCodec codec;
AudioTrack audioTrack;
// extractor gets information about the stream
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(inputfilePath);
MediaFormat format = extractor.getTrackFormat(0);
String mime = format.getString(MediaFormat.KEY_MIME);
// the actual decoder
codec = MediaCodec.createDecoderByType(mime);
codec.configure(format, null /* surface */, null /* crypto */, 0 /* flags */);
codec.start();
ByteBuffer[] codecInputBuffers = codec.getInputBuffers();
ByteBuffer[] codecOutputBuffers = codec.getOutputBuffers();
// get the sample rate to configure AudioTrack
int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
// create our AudioTrack instance
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
AudioTrack.getMinBufferSize(sampleRate,
AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT), AudioTrack.MODE_STREAM);
// start playing, we will feed you later
audioTrack.play();
extractor.selectTrack(0);
// start decoding
final long kTimeOutUs = 10000;
MediaCodec.BufferInfo BufInfo = new MediaCodec.BufferInfo();
boolean sawInputEOS = false;
boolean sawOutputEOS = false;
int inputBufIndex;
int counter=0;
while (!sawOutputEOS) {
counter++;
if (!sawInputEOS) {
inputBufIndex = codec.dequeueInputBuffer(kTimeOutUs);
// Log.d(LOG_TAG, " bufIndexCheck " + bufIndexCheck);
if (inputBufIndex >= 0) {
ByteBuffer dstBuf = codecInputBuffers[inputBufIndex];
int sampleSize = extractor
.readSampleData(dstBuf, 0 /* offset */);
long presentationTimeUs = 0;
if (sampleSize < 0) {
sawInputEOS = true;
sampleSize = 0;
} else {
presentationTimeUs = extractor.getSampleTime();
}
// can throw illegal state exception (???)
codec.queueInputBuffer(inputBufIndex, 0 /* offset */,
sampleSize, presentationTimeUs,
sawInputEOS ? MediaCodec.BUFFER_FLAG_END_OF_STREAM
: 0);
if (!sawInputEOS) {
extractor.advance();
}
} else {
Log.e("sohail", "inputBufIndex " + inputBufIndex);
}
}
int res = codec.dequeueOutputBuffer(BufInfo, kTimeOutUs);
if (res >= 0) {
Log.i("sohail","decoding: deqOutputBuffer >=0, counter="+counter);
// Log.d(LOG_TAG, "got frame, size " + info.size + "/" +
// info.presentationTimeUs);
if (BufInfo.size > 0) {
// noOutputCounter = 0;
}
int outputBufIndex = res;
ByteBuffer buf = codecOutputBuffers[outputBufIndex];
final byte[] chunk = new byte[BufInfo.size];
buf.get(chunk);
buf.clear();
if (chunk.length > 0) {
// play
audioTrack.write(chunk, 0, chunk.length);
// write to file
outputStream.write(chunk);
}
codec.releaseOutputBuffer(outputBufIndex, false /* render */);
if ((BufInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
Log.i("sohail", "saw output EOS.");
sawOutputEOS = true;
}
} else if (res == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
codecOutputBuffers = codec.getOutputBuffers();
Log.i("sohail", "output buffers have changed.");
} else if (res == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
MediaFormat oformat = codec.getOutputFormat();
Log.i("sohail", "output format has changed to " + oformat);
} else {
Log.i("sohail", "dequeueOutputBuffer returned " + res);
}
}
Log.d(LOG_TAG, "stopping...");
// ////////closing
if (audioTrack != null) {
audioTrack.flush();
audioTrack.release();
audioTrack = null;
}
outputStream.flush();
outputStream.close();
codec.stop();
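No accepted answer appears here, but the two missing pieces can be derived from the ADTS framing itself: the 13-bit frame length lives in header bytes 3 to 5, and every AAC frame decodes to 1024 PCM samples, so the timestamp follows from the frame index. A hedged sketch, assuming AAC LC at 44.1 kHz stereo as in the question (the csd-0 bytes 0x12 0x10 encode exactly that AudioSpecificConfig):

import android.media.MediaCodec;
import android.media.MediaFormat;
import java.nio.ByteBuffer;

public class AdtsFeeder {
    private static final int SAMPLE_RATE = 44100;
    private long frameIndex = 0;

    // Decoder format built by hand instead of extractor.getTrackFormat(0).
    public static MediaFormat buildFormat() {
        MediaFormat format = MediaFormat.createAudioFormat("audio/mp4a-latm", SAMPLE_RATE, 2);
        format.setInteger(MediaFormat.KEY_IS_ADTS, 1);
        format.setByteBuffer("csd-0", ByteBuffer.wrap(new byte[] {0x12, 0x10}));
        return format;
    }

    // Queues the ADTS frame starting at 'offset' and returns the next offset.
    public int queueFrame(MediaCodec codec, byte[] data, int offset) {
        // The 13-bit frame length spans header bytes 3..5 and includes the 7-byte header.
        int frameLength = ((data[offset + 3] & 0x03) << 11)
                | ((data[offset + 4] & 0xFF) << 3)
                | ((data[offset + 5] & 0xE0) >>> 5);
        int index = codec.dequeueInputBuffer(10000);
        if (index < 0) {
            return offset; // no input buffer free yet; retry later
        }
        ByteBuffer buf = codec.getInputBuffers()[index];
        buf.clear();
        buf.put(data, offset, frameLength);
        // Each AAC frame carries 1024 PCM samples, so the timestamp is derived
        // from the frame index rather than from extractor.getSampleTime().
        long presentationTimeUs = frameIndex++ * 1024L * 1_000_000L / SAMPLE_RATE;
        codec.queueInputBuffer(index, 0, frameLength, presentationTimeUs, 0);
        return offset + frameLength;
    }
}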

Raw H.264 stream output by MediaCodec not playble

I am creating a raw H.264 stream output by MediaCodec. The problem is that the output file is not playable in the Android default player (API 16). How can it be that Android can export a file that is not playable in its own player, only in VLC on the PC? Maybe something is wrong with my code? My video is 384x288.
public class AvcEncoder {
private MediaCodec mediaCodec;
private BufferedOutputStream outputStream;
private File f;
public AvcEncoder(int w, int h, String file_name)
{
f = new File(file_name + ".mp4");
try {
outputStream = new BufferedOutputStream(new FileOutputStream(f));
} catch (Exception e){
e.printStackTrace();
}
String key_mime = "video/avc"; //video/mp4v-es, video/3gpp, video/avc
mediaCodec = MediaCodec.createEncoderByType(key_mime);
MediaFormat mediaFormat = MediaFormat.createVideoFormat(key_mime, w, h);
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, (w * h) << 3);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 25);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
mediaFormat.setInteger(MediaFormat.KEY_AAC_PROFILE,MediaCodecInfo.CodecProfileLevel.MPEG4ProfileMain);
mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mediaCodec.start();
}
public void close() {
try {
mediaCodec.stop();
mediaCodec.release();
outputStream.flush();
outputStream.close();
} catch (Exception e){
e.printStackTrace();
}
}
public void offerEncoder(byte[] input) {
try {
ByteBuffer[] inputBuffers = mediaCodec.getInputBuffers();
ByteBuffer[] outputBuffers = mediaCodec.getOutputBuffers();
int inputBufferIndex = mediaCodec.dequeueInputBuffer(0);
if (inputBufferIndex >= 0) {
ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(input);
mediaCodec.queueInputBuffer(inputBufferIndex, 0, input.length, 0, 0);
}
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
while (outputBufferIndex >= 0) {
ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
byte[] outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
outputStream.write(outData, 0, outData.length);
mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
}
} catch (Throwable t) {
t.printStackTrace(); // don't swallow encoder errors silently
}
}
}
The Android MediaPlayer doesn't handle raw H.264 streams.
One difficulty with such streams is that the H.264 NAL units don't have timestamp information, so unless the video frames are at a known fixed frame rate the player wouldn't know when to present them.
You can either create your own player with MediaCodec (see e.g. "Play video (TextureView)" in Grafika), or convert the raw stream to a .mp4 file. The latter requires MediaMuxer, available in API 18, or the use of a 3rd-party library like ffmpeg.
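As a concrete illustration of the timestamp point: a hand-rolled MediaCodec player for a raw stream has to synthesize presentation times from an assumed fixed frame rate, since the NAL units carry none. A sketch, using the 25 fps the question's encoder was configured for:

// Assumed fixed frame rate; a raw H.264 stream gives the player no clock.
private static final int FRAME_RATE = 25;

private long ptsForFrame(long frameIndex) {
    return frameIndex * 1_000_000L / FRAME_RATE; // microseconds per frame
}

// When queueing each access unit into the decoder:
// decoder.queueInputBuffer(index, 0, size, ptsForFrame(frameIndex++), 0);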

How to use Android MediaCodec to encode Camera data (YUV420sp)

Thank you for your attention!
I want to use the Android MediaCodec APIs to encode video frames acquired from the Camera. Unfortunately, I have not succeeded in doing that! I am still not familiar with the MediaCodec API.
The following is my code; I need your help to figure out what I should do.
1. The Camera settings:
Parameters parameters = mCamera.getParameters();
parameters.setPreviewFormat(ImageFormat.NV21);
parameters.setPreviewSize(320, 240);
mCamera.setParameters(parameters);
2. Set up the encoder:
private void initCodec() {
try {
fos = new FileOutputStream(mVideoFile, false);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
mMediaCodec = MediaCodec.createEncoderByType("video/avc");
MediaFormat mediaFormat = MediaFormat.createVideoFormat("video/avc",
320,
240);
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 125000);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
mMediaCodec.configure(mediaFormat,
null,
null,
MediaCodec.CONFIGURE_FLAG_ENCODE);
mMediaCodec.start();
inputBuffers = mMediaCodec.getInputBuffers();
outputBuffers = mMediaCodec.getOutputBuffers();
}
private void encode(byte[] data) {
int inputBufferIndex = mMediaCodec.dequeueInputBuffer(0);
if (inputBufferIndex >= 0) {
ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(data);
mMediaCodec.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
} else {
return;
}
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
int outputBufferIndex = mMediaCodec.dequeueOutputBuffer(bufferInfo, 0);
Log.i(TAG, "outputBufferIndex-->" + outputBufferIndex);
do {
if (outputBufferIndex >= 0) {
ByteBuffer outBuffer = outputBuffers[outputBufferIndex];
System.out.println("buffer info-->" + bufferInfo.offset + "--"
+ bufferInfo.size + "--" + bufferInfo.flags + "--"
+ bufferInfo.presentationTimeUs);
byte[] outData = new byte[bufferInfo.size];
outBuffer.get(outData);
try {
if (bufferInfo.offset != 0) {
fos.write(outData, bufferInfo.offset, outData.length
- bufferInfo.offset);
} else {
fos.write(outData, 0, outData.length);
}
fos.flush();
Log.i(TAG, "out data -- > " + outData.length);
mMediaCodec.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = mMediaCodec.dequeueOutputBuffer(bufferInfo,
0);
} catch (IOException e) {
e.printStackTrace();
}
} else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
outputBuffers = mMediaCodec.getOutputBuffers();
} else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
MediaFormat format = mMediaCodec.getOutputFormat();
}
} while (outputBufferIndex >= 0);
}
I guess the problem occurs in the encode method. The method is used in the Camera preview callback, like this:
initCodec();
//mCamera.setPreviewCallback(new MyPreviewCallback());
mCamera.setPreviewCallback(new PreviewCallback() {
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
encode(data);
}
});
I just have no idea how to do it correctly with the MediaCodec API. Can you give me some advice or links about it?
Thank you!
I have solved the problem, as follows:
private synchronized void encode(byte[] data)
{
inputBuffers = mMediaCodec.getInputBuffers(); // here is the change
outputBuffers = mMediaCodec.getOutputBuffers();
int inputBufferIndex = mMediaCodec.dequeueInputBuffer(-1);
Log.i(TAG, "inputBufferIndex-->" + inputBufferIndex);
//......
Next, you will find that your encoded video's colors are not right. For more information, please see MediaCodec and Camera: colorspaces don't match.
The YUV420 formats output by the camera are incompatible with the formats accepted by the MediaCodec AVC encoder. In the best case, it's essentially NV12 vs. NV21 (U and V planes are reversed), requiring a manual reordering. In the worst case, as of Android 4.2, the encoder input format may be device-specific.
You're better off using MediaRecorder to connect the camera hardware to the encoder.
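For the NV12-vs-NV21 case, the manual reordering mentioned above is a byte swap in the interleaved chroma plane. A minimal sketch (this only helps on devices where the encoder actually expects NV12; as noted, the input format can be device-specific):

// NV21 (camera) and NV12 differ only in the order of the interleaved
// V/U bytes that follow the Y plane.
public static byte[] nv21ToNv12(byte[] nv21, int width, int height) {
    byte[] nv12 = new byte[nv21.length];
    int ySize = width * height;
    System.arraycopy(nv21, 0, nv12, 0, ySize); // Y plane is identical
    for (int i = ySize; i + 1 < nv21.length; i += 2) {
        nv12[i] = nv21[i + 1];     // U
        nv12[i + 1] = nv21[i];     // V
    }
    return nv12;
}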
Update:
It's now possible to pass the camera's Surface preview to MediaCodec, instead of using the YUV data in the ByteBuffer. This is faster and more portable. See the CameraToMpegTest sample here.
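A sketch of that Surface input path (API 18+). The encoder settings mirror the question's; the EGL plumbing that actually renders camera frames into the surface is omitted (CameraToMpegTest shows the full flow):

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

public class SurfaceEncoder {
    public final MediaCodec encoder;
    public final Surface inputSurface;

    public SurfaceEncoder() throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 320, 240);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 125000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);

        encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // createInputSurface() must be called after configure() and before start().
        inputSurface = encoder.createInputSurface();
        encoder.start();
        // Hand inputSurface to the rendering side (SurfaceTexture + EGL),
        // then drain the encoder's output buffers as usual.
    }
}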
