I am trying to get the audio input stream from a file in the local file system on an Android device.
This is so that I can use this library to show a waveform for the audio file.
https://github.com/newventuresoftware/WaveformControl/blob/master/app/src/main/java/com/newventuresoftware/waveformdemo/MainActivity.java#L125
The example in the project uses a raw resource, like so:
InputStream is = getResources().openRawResource(R.raw.jinglebells);
This input stream is later converted into a byte array and passed somewhere that uses it to paint a waveform and play the sound.
However, when I instead did
InputStream is = new FileInputStream(new File(filePath));
it does not seem to work properly: the image generated is wrong, and the sound played is nothing like what the file actually is.
This is the body of the function in that library that gets the input stream and converts it into a short array:
private short[] getAudioSample() throws IOException {
    // If I replace this line with new FileInputStream(new File(filePath)),
    // the "samples" generated from it do not work properly with the library.
    InputStream is = getResources().openRawResource(R.raw.jinglebells);
    byte[] data;
    try {
        data = IOUtils.toByteArray(is);
    } finally {
        if (is != null) {
            is.close();
        }
    }
    ShortBuffer sb = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    short[] samples = new short[sb.limit()];
    sb.get(samples);
    return samples;
}
The sound file that I would like to have processed and passed to that library is created by a MediaRecorder with the following configuration:
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.DEFAULT);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
Basically, that library requires PCM samples, so feeding it the 3gp file produced by MediaRecorder directly is not going to cut it.
What I did was use MediaExtractor and MediaCodec to decode the 3gp audio into PCM, sample by sample, and then feed that into the library. Then everything worked =)
The decoding logic can be almost directly taken from this awesome GitHub repo.
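For reference, here is a minimal sketch of that decode step (the method name decodeToPcm is mine; it assumes API 21+, a single audio track at index 0, 16-bit little-endian PCM output, and the usual android.media / java.nio imports):

private short[] decodeToPcm(String filePath) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(filePath);
    MediaFormat format = extractor.getTrackFormat(0); // assuming the audio track is at index 0
    extractor.selectTrack(0);

    MediaCodec codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
    codec.configure(format, null, null, 0);
    codec.start();

    ByteArrayOutputStream pcm = new ByteArrayOutputStream();
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false, outputDone = false;
    while (!outputDone) {
        if (!inputDone) {
            int inIndex = codec.dequeueInputBuffer(10000);
            if (inIndex >= 0) {
                ByteBuffer inBuf = codec.getInputBuffer(inIndex);
                int size = extractor.readSampleData(inBuf, 0);
                if (size < 0) {
                    // No more compressed samples; signal end of stream to the decoder.
                    codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    codec.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                    extractor.advance();
                }
            }
        }
        int outIndex = codec.dequeueOutputBuffer(info, 10000);
        if (outIndex >= 0) {
            // Collect the decoded PCM bytes.
            ByteBuffer outBuf = codec.getOutputBuffer(outIndex);
            byte[] chunk = new byte[info.size];
            outBuf.get(chunk);
            pcm.write(chunk, 0, chunk.length);
            codec.releaseOutputBuffer(outIndex, false);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                outputDone = true;
            }
        }
    }
    codec.stop();
    codec.release();
    extractor.release();

    // Same byte-to-short conversion as in the library's getAudioSample().
    ShortBuffer sb = ByteBuffer.wrap(pcm.toByteArray()).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    short[] samples = new short[sb.limit()];
    sb.get(samples);
    return samples;
}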
Related
I am trying to use a portion of a pre-recorded video file for processing on Android. Currently I am working with the MediaCodec library on Android, using MediaExtractor to extract the data from the video file. The idea is to extract only the required data stream for processing. The code looks like:
extractor = new MediaExtractor();
File file = new File(mInputFile);
FileInputStream inputStream = new FileInputStream(file);
long len = file.length();
long start = 200L;
extractor.setDataSource(inputStream.getFD(),start,len);
inputStream.close();
The logcat output is:
java.io.IOException: Failed to instantiate extractor.
at android.media.MediaExtractor.setDataSource(Native Method).
It works well if I set the starting byte to 0. What am I doing wrong? Can we not extract a portion of a video using MediaExtractor?
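For what it's worth, the offset/length arguments to setDataSource() describe where the container itself starts within the file (for media embedded inside a larger file), not a playback start position, so skipping the first 200 bytes removes headers the extractor needs to parse the stream. A sketch of the whole-file-plus-seek alternative (the 200 ms seek position is purely illustrative):

MediaExtractor extractor = new MediaExtractor();
File file = new File(mInputFile);
FileInputStream inputStream = new FileInputStream(file);
// Offset 0 and the full length, so the extractor can parse the container headers.
extractor.setDataSource(inputStream.getFD(), 0, file.length());
inputStream.close();
extractor.selectTrack(0); // assuming track 0 is the stream of interest
// Seek by time (microseconds) rather than by byte offset; 200 ms is illustrative.
extractor.seekTo(200000, MediaExtractor.SEEK_TO_CLOSEST_SYNC);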
I've found lots of tutorials and posts showing how to use AudioTrack to play wav files in AudioTrack.MODE_STREAM, and I've successfully implemented this example.
However, I'm having performance issues when playing multiple audio tracks at once, and I'm thinking that I should first create the tracks using AudioTrack.MODE_STATIC, then just call play each time.
I can't find any resources on how to implement this. How can I do this?
Thanks
The two main sticking points for me were realizing that .write() comes first and that the instantiated player must have the size of the entire clip as its bufferSizeInBytes.
Assuming you have recorded a PCM file using AudioRecord, you can play it back with MODE_STATIC like so...
File file = new File(FILENAME);
int audioLength = (int) file.length();
byte[] filedata = new byte[audioLength];
try {
    InputStream inputStream = new BufferedInputStream(new FileInputStream(FILENAME));
    int lengthOfAudioClip = inputStream.read(filedata, 0, audioLength);
    inputStream.close();
    // bufferSizeInBytes is the whole clip; AUDIO_MODE must be AudioTrack.MODE_STATIC
    player = new AudioTrack(STREAM_TYPE, SAMPLE_RATE, CHANNEL_OUT_CONFIG, AUDIO_FORMAT, audioLength, AUDIO_MODE);
    player.write(filedata, OFFSET, lengthOfAudioClip); // write first...
    player.setPlaybackRate(playbackRate);
    player.play(); // ...then play
} catch (IOException e) {
    e.printStackTrace();
}
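To replay the same clip later without rewriting the buffer, stopping and rewinding the static track should be enough; a sketch, assuming player was created as above in MODE_STATIC:

player.stop();
player.reloadStaticData(); // rewind the MODE_STATIC buffer to the start
player.play();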
I have some H.264 frames already encoded with the Android encoder. Now I want to create an mp4 file and write them to it one by one. Please advise how to do this on Android in Java. I don't want to use OpenCV or native code.
mp4parser can't do this, as I understand it.
The native MPEG4Writer is too complicated to use.
I wonder why such a common and very useful thing as an mp4 writer for frames that do not come from the camera is not implemented in Android.
You may use my mp4parser project. You can mux H264 and AAC in pure Java with it:
// Parse the raw H.264 elementary stream into a track
H264TrackImpl h264Track = new H264TrackImpl(new FileInputStream("raw.h264").getChannel());
Movie m = new Movie();
m.addTrack(h264Track);
// Build the MP4 container and write it out
IsoFile out = new DefaultMp4Builder().build(m);
FileOutputStream fos = new FileOutputStream(new File("h264_output.mp4"));
out.getBox(fos.getChannel());
fos.close();
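The answer mentions AAC muxing too; the audio side would be parallel. A sketch only, since AACTrackImpl's exact constructor varies between mp4parser versions — this assumes it follows the same pattern as H264TrackImpl above, with a raw ADTS/AAC input file:

// Add a raw AAC audio track alongside the video, before calling build(m)
AACTrackImpl aacTrack = new AACTrackImpl(new FileInputStream("audio.aac").getChannel());
m.addTrack(aacTrack);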
I'm doing a conversion from PCM-16 to AMR using AmrInputStream. The details for AmrInputStream can be found here: http://hi-android.info/src/android/media/AmrInputStream.java.html
I'm quite new to programming, so while it talks about using JNI and stuff, I have no idea what JNI is, and I don't think it is required for this discussion. The AmrInputStream above is apparently found in neither the SDK nor the NDK, but I have been able to use it.
I've been searching around the internet for how to use the stream but have not found any examples. In the end I experimented and found it to be similar to just about any other InputStream. Here's a code snippet:
InputStream inStream = new FileInputStream("abc.wav");
AmrInputStream aStream = new AmrInputStream(inStream);

File file = new File("xyz.amr");
file.createNewFile();
OutputStream out = new FileOutputStream(file);

byte[] x = new byte[1024];
int len;
while ((len = aStream.read(x)) > 0) {
    out.write(x, 0, len);
}

out.close();
aStream.close();
I have tested this and it works, although the #!AMR\n header must be prepended to the output file before it will play.
My question is that I have only managed to get this to work on a PCM-16 file sampled at 8000 Hz; any higher sampling frequency in the original PCM-16 file results in undersampled output. There is a SAMPLES_PER_FRAME variable in the AmrInputStream.java file which I have tried playing with, but it does not seem to affect anything.
Any advice or related discussion is welcome!
SAMPLES_PER_FRAME is the block of data the AMR encoder acts on in one go; it maps to 20 ms of audio, i.e. 8000 Hz × 0.02 s = 160 samples per frame.
From the signatures of the AMR encoder functions (at the bottom of http://hi-android.info/src/android/media/AmrInputStream.java.html):
private static native int GsmAmrEncoderNew();
private static native void GsmAmrEncoderInitialize(int gae);
private static native int GsmAmrEncoderEncode(int gae,
        byte[] pcm, int pcmOffset, byte[] amr, int amrOffset) throws IOException;
private static native void GsmAmrEncoderCleanup(int gae);
private static native void GsmAmrEncoderDelete(int gae);
There doesn't seem to be a way to pass a sample rate to the encoder (gae is the native handle).
The sample rate is hardcoded to 8 kHz, at least with this API.
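So PCM recorded at a higher rate has to be brought down to 8 kHz before it reaches AmrInputStream. A naive decimation sketch for 16-bit mono samples (downsampleTo8k is a hypothetical helper, not part of any API; it assumes the source rate is an integer multiple of 8000, and real code would low-pass filter first to avoid aliasing):

// Hypothetical helper: keep every Nth sample to go from srcRate down to 8000 Hz.
static short[] downsampleTo8k(short[] input, int srcRate) {
    int step = srcRate / 8000; // e.g. 2 for 16 kHz, 4 for 32 kHz
    short[] output = new short[input.length / step];
    for (int i = 0; i < output.length; i++) {
        output[i] = input[i * step];
    }
    return output;
}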
Can anyone tell me how to combine/merge two media files into one?
I found some topics about AudioInputStream, but it is not supported on Android, and all the code is for desktop Java.
On Stack Overflow I found this link here, but I can't find a solution there; those links are only about streaming audio. Can anyone help?
P.S. And why can't I start a bounty? :(
import java.io.*;

public class TwoFiles
{
    public static void main(String args[]) throws IOException
    {
        FileInputStream fistream1 = new FileInputStream("C:\\Temp\\1.mp3"); // first source file
        FileInputStream fistream2 = new FileInputStream("C:\\Temp\\2.mp3"); // second source file
        SequenceInputStream sistream = new SequenceInputStream(fistream1, fistream2);
        FileOutputStream fostream = new FileOutputStream("C:\\Temp\\final.mp3"); // destination file

        int temp;
        while ((temp = sistream.read()) != -1)
        {
            // System.out.print((char) temp); // to print at DOS prompt
            fostream.write(temp); // to write to file
        }

        fostream.close();
        sistream.close();
        fistream1.close();
        fistream2.close();
    }
}
Consider two cases for .mp3 files:
Files with the same sampling frequency and number of channels
In this case, we can just append the second file to the end of the first. This can be achieved using the File classes available on Android, as in the SequenceInputStream example above.
Files with a different sampling frequency or number of channels
In this case, one of the clips has to be re-encoded so that both files have the same sampling frequency and number of channels. To do this, we would need to decode the MP3, get the PCM samples, process them to change the sampling frequency, and then re-encode to MP3. From what I know, Android does not have transcode or re-encode APIs, so one option is to use an external library like LAME or FFmpeg via JNI for the re-encoding. (A quick MediaExtractor check to tell the two cases apart is sketched below.)
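A minimal sketch of that check (sameAudioFormat and audioFormatOf are hypothetical helpers; it assumes MediaExtractor can parse the files and that track 0 is the audio track):

// Returns true when the simple append path (case 1) is safe to use.
static boolean sameAudioFormat(String pathA, String pathB) throws IOException {
    MediaFormat a = audioFormatOf(pathA);
    MediaFormat b = audioFormatOf(pathB);
    return a.getInteger(MediaFormat.KEY_SAMPLE_RATE) == b.getInteger(MediaFormat.KEY_SAMPLE_RATE)
            && a.getInteger(MediaFormat.KEY_CHANNEL_COUNT) == b.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
}

static MediaFormat audioFormatOf(String path) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(path);
    MediaFormat format = extractor.getTrackFormat(0); // assuming track 0 is the audio track
    extractor.release();
    return format;
}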