Can anyone tell me how to combine/merge two media files into one?
I found topics about AudioInputStream, but it's not supported on Android, and all the code is for desktop Java.
And on StackOverflow I found this link here,
but I couldn't find a solution there; those links are only about streaming audio. Can anyone help me?
P.S. And why can't I start a bounty? :(
import java.io.*;

public class TwoFiles
{
    public static void main(String args[]) throws IOException
    {
        FileInputStream fistream1 = new FileInputStream("C:\\Temp\\1.mp3");   // first source file
        FileInputStream fistream2 = new FileInputStream("C:\\Temp\\2.mp3");   // second source file
        SequenceInputStream sistream = new SequenceInputStream(fistream1, fistream2);
        FileOutputStream fostream = new FileOutputStream("C:\\Temp\\final.mp3");  // destination file

        int temp;
        while ((temp = sistream.read()) != -1)
        {
            // System.out.print((char) temp);  // to print at DOS prompt
            fostream.write(temp);              // to write to file
        }

        fostream.close();
        sistream.close();
        fistream1.close();
        fistream2.close();
    }
}
Consider two cases for .mp3 files:
Files with the same sampling frequency and number of channels
In this case, we can simply append the second file to the end of the first. This can be achieved using the File classes available on Android.
Files with a different sampling frequency or number of channels
In this case, one of the clips has to be re-encoded so that both files have the same sampling frequency and number of channels. To do this, we would need to decode the MP3, get the PCM samples, process them to change the sampling frequency, and then re-encode to MP3. From what I know, Android does not have transcode or re-encode APIs. One option is to use an external library like LAME or FFmpeg via JNI for the re-encode; a rough sketch of that approach follows.
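Purely as an illustration of that second case, here is a hedged sketch that assumes an ffmpeg executable has somehow been bundled with the app (for example through a JNI/prebuilt-binary wrapper); the binary path, file names, and target parameters are all hypothetical:

import java.io.IOException;

// Re-encode `input` so its sampling rate and channel count match the first
// clip; afterwards the two files can be byte-appended as in case 1.
static void reencodeToMatch(String ffmpegPath, String input, String output)
        throws IOException, InterruptedException {
    Process p = new ProcessBuilder(
            ffmpegPath, "-i", input,
            "-ar", "44100",   // target sampling rate (that of the first clip)
            "-ac", "2",       // target channel count (that of the first clip)
            output)
            .redirectErrorStream(true)
            .start();
    p.waitFor();              // block until ffmpeg finishes
}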
I am trying to get the audio input stream from a file in the local file system on an Android device.
This is so that I can use this library to show a waveform for the audio file.
https://github.com/newventuresoftware/WaveformControl/blob/master/app/src/main/java/com/newventuresoftware/waveformdemo/MainActivity.java#L125
The example in the project uses a raw resource, like so:
InputStream is = getResources().openRawResource(R.raw.jinglebells);
This input stream is later converted into a byte array and passed to something that uses it to paint a wave picture and play the sound.
However, when I did
InputStream is = new FileInputStream(new File(filePath));
it does not seem to work properly. The image generated is wrong, and the sound played is nothing like what the file actually is.
This is the body of the function in that library that gets the input stream and converts it into an array of samples:
private short[] getAudioSample() throws IOException {
    // If I replace this part with new FileInputStream(new File(filePath)),
    // the "samples" generated from it do not work properly with the library.
    InputStream is = getResources().openRawResource(R.raw.jinglebells);
    byte[] data;
    try {
        data = IOUtils.toByteArray(is);
    } finally {
        if (is != null) {
            is.close();
        }
    }

    ShortBuffer sb = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    short[] samples = new short[sb.limit()];
    sb.get(samples);
    return samples;
}
The sound file that I would like to have processed and passed to that library is created by a MediaRecorder with the following configuration:
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.DEFAULT);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
Basically, that library requires PCM samples. Feeding it the 3gp data generated by MediaRecorder through a FileInputStream directly is not gonna cut it.
What I did was use MediaExtractor and MediaCodec to convert the 3gp audio data into PCM, sample by sample, and then feed that into the library. Then everything worked =)
The logic for decoding the audio data can be taken almost directly from this awesome GitHub repo.
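For reference, here is a condensed sketch of that MediaExtractor/MediaCodec approach. It assumes API 21+, a single audio track at index 0, and 16-bit PCM decoder output; the method name is illustrative, and error handling plus the INFO_OUTPUT_FORMAT_CHANGED case are trimmed for brevity:

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

private short[] decodeToPcm(String filePath) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(filePath);
    MediaFormat format = extractor.getTrackFormat(0);   // assume track 0 is the audio track
    extractor.selectTrack(0);

    MediaCodec codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
    codec.configure(format, null, null, 0);
    codec.start();

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    ByteArrayOutputStream pcm = new ByteArrayOutputStream();
    boolean inputDone = false, outputDone = false;
    while (!outputDone) {
        if (!inputDone) {
            int inIndex = codec.dequeueInputBuffer(10000);
            if (inIndex >= 0) {
                ByteBuffer inBuf = codec.getInputBuffer(inIndex);
                int size = extractor.readSampleData(inBuf, 0);
                if (size < 0) {   // no more compressed samples: signal end of stream
                    codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    codec.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                    extractor.advance();
                }
            }
        }
        int outIndex = codec.dequeueOutputBuffer(info, 10000);
        if (outIndex >= 0) {
            ByteBuffer outBuf = codec.getOutputBuffer(outIndex);
            byte[] chunk = new byte[info.size];
            outBuf.get(chunk);
            pcm.write(chunk, 0, chunk.length);
            codec.releaseOutputBuffer(outIndex, false);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                outputDone = true;
            }
        }
    }
    codec.stop();
    codec.release();
    extractor.release();

    // same little-endian byte-to-short conversion the library already does
    ShortBuffer sb = ByteBuffer.wrap(pcm.toByteArray()).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    short[] samples = new short[sb.limit()];
    sb.get(samples);
    return samples;
}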
I am working on a project in Flash Mobile using ActionScript. I have a zipped WAV file that I need to be able to deserialize and play as needed on a button press. Below is the code for zipping the WAV file.
mic.removeEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);
btnRecord.setStyle("icon", recOff);
sampleCount++;

// save the raw PCM samples as a bare WAV file
var wav:ByteArray = new ByteArray();
var writer:WAVWriter = new WAVWriter();
writer.numOfChannels = 1;
writer.sampleBitRate = 16;
writer.samplingRate = 11025;
samples.position = 0;
writer.processSamples(wav, samples, 11025, 1);
wav.position = 0;

// zip the WAV file
var fzip:FZip = new FZip();
fzip.addFile(name + sampleCount.toString(), wav);
var zip:ByteArray = new ByteArray();
fzip.serialize(zip);

var recSpot:Object = {
    id: null,
    audio: zip,
    name: "New Audio File " + newRecNum,
    existsdb: "false"
};
newRecNum++;
recordings.addItem(recSpot);
}
What can I do to play this file? I really haven't had to play a zipped file before.
I'm not familiar with WAVWriter (which is probably somewhat beside the point), but here's what I do know.
Firstly, because of the nature of compression, you cannot (as far as I know) play a zipped audio file, period. You will need to unzip it first.
A quick Google search turned up THIS AS3 TUTORIAL on unzipping with FZip. The example program uses .PNGs, but I would assume you can adjust it to work with the raw .WAV file you zipped earlier. Skip down to Step 5 for the actual code. (You'll need to rewrite it to work with your interface, obviously.)
You won't need the DataProvider variable in Step 5, as that is specifically for components. You'll need to load your data into something else. If your method of playing WAV files is anything like mine (I use the as3WAVSound class), you'll probably want to load the data into a ByteArray and play off of that.
You also probably won't need the for loop he uses in Step 10, as your code appears to create a ZIP containing only one WAV file. That simplifies things considerably.
Anyway, I hope that answers your question!
My aim is to pause while recording to a file.
I see on the Android developer site that MediaRecorder does not have a pause option.
Java supports merging two audio files programmatically, but on Android it does not work:
Join two WAV files from Java?
I also used the default device audio recorder app, which is available on every device, but a few Samsung devices do not return the recording path.
Intent intent = new Intent(MediaStore.Audio.Media.RECORD_SOUND_ACTION);
startActivityForResult(intent, REQUESTCODE_RECORDING);
Can anyone help with voice recording with pause functionality?
http://developer.android.com/reference/android/media/MediaRecorder.html
MediaRecorder does not have pause and resume methods. You need to use stop and start methods instead.
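A hedged sketch of that stop/start workaround (all names and paths below are illustrative): each "pause" stops and releases the current recorder, each "resume" starts a fresh recorder on a new segment file, and the segment paths are collected so they can be merged afterwards, for example with mp4parser as shown in a later answer.

import android.media.MediaRecorder;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

List<String> segments = new ArrayList<>();

void pauseRecording(MediaRecorder recorder) {
    recorder.stop();       // MediaRecorder itself cannot pause below API 24
    recorder.release();
}

MediaRecorder resumeRecording(File outputDir) throws IOException {
    String path = new File(outputDir, "segment_" + segments.size() + ".3gp").getAbsolutePath();
    segments.add(path);    // remember every segment for the final merge

    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    recorder.setOutputFile(path);
    recorder.prepare();
    recorder.start();
    return recorder;
}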
I had such a requirement in one of my projects. What we did was create a raw file for saving the recorded data at the start of recording, using AudioRecord; then for each resume we append the data to the same file,
like:
FileOutputStream fos = new FileOutputStream(filename, true);
Here filename is the name of the raw file, and the new recording data is appended to it.
When the user stops the recording, we convert the entire raw file to .wav (or another format). Sorry that I can't post the entire code. Hope this gives you a direction to work in.
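A minimal sketch of that append idea, assuming 16-bit PCM from an already-configured AudioRecord; the names and the isRecording flag are illustrative:

import android.media.AudioRecord;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

volatile boolean isRecording;   // toggled from the UI thread

// Runs on a background thread for each recording session. Opening the stream
// in append mode (second argument true) makes every resume continue the same
// raw PCM file instead of overwriting it.
void recordChunk(AudioRecord recorder, File rawFile) throws IOException {
    byte[] buffer = new byte[4096];
    FileOutputStream fos = new FileOutputStream(rawFile, true);  // true = append
    try {
        recorder.startRecording();
        while (isRecording) {
            int read = recorder.read(buffer, 0, buffer.length);
            if (read > 0) {
                fos.write(buffer, 0, read);
            }
        }
        recorder.stop();
    } finally {
        fos.close();
    }
}

To make the finished raw file playable as .wav, a 44-byte WAV header only needs to be prepended; a sketch of building such a header appears at the end of the last answer on this page.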
You can refer to my answer here if you still have this issue. For API level >= 24, pause/resume methods are available in the Android MediaRecorder class.
For API level < 24:
Add the dependency below to your gradle file:
compile 'com.googlecode.mp4parser:isoparser:1.0.2'
The solution is to stop the recorder when the user pauses and start it again on resume, as already mentioned in many other answers on StackOverflow. Store all the audio/video files generated in an array and use the method below to merge all the media files. The example is taken from the mp4parser library and modified a little to suit my needs.
public static boolean mergeMediaFiles(boolean isAudio, String sourceFiles[], String targetFile) {
    try {
        String mediaKey = isAudio ? "soun" : "vide";
        List<Movie> listMovies = new ArrayList<>();
        for (String filename : sourceFiles) {
            listMovies.add(MovieCreator.build(filename));
        }
        List<Track> listTracks = new LinkedList<>();
        for (Movie movie : listMovies) {
            for (Track track : movie.getTracks()) {
                if (track.getHandler().equals(mediaKey)) {
                    listTracks.add(track);
                }
            }
        }
        Movie outputMovie = new Movie();
        if (!listTracks.isEmpty()) {
            outputMovie.addTrack(new AppendTrack(listTracks.toArray(new Track[listTracks.size()])));
        }
        Container container = new DefaultMp4Builder().build(outputMovie);
        FileChannel fileChannel = new RandomAccessFile(targetFile, "rw").getChannel();
        container.writeContainer(fileChannel);
        fileChannel.close();
        return true;
    } catch (IOException e) {
        Log.e(LOG_TAG, "Error merging media files. exception: " + e.getMessage());
        return false;
    }
}
Use the isAudio flag as true for audio files and false for video files.
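A hypothetical call, merging the segments recorded across pauses (paths are illustrative):

String[] parts = { "/sdcard/rec_part1.mp4", "/sdcard/rec_part2.mp4" };
boolean merged = mergeMediaFiles(true, parts, "/sdcard/rec_full.mp4");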
You can't do it using the Android API, but you can save a number of mp4 files and merge them using mp4parser: a powerful library written in Java. Also see my simple recorder with a "pause": https://github.com/lassana/continuous-audiorecorder.
I'm doing a conversion from PCM-16 to AMR using AmrInputStream. The details of AmrInputStream can be found here: http://hi-android.info/src/android/media/AmrInputStream.java.html
I'm quite new to programming, so while it talks about using JNI and such, I have no idea what JNI is, and I don't think it is required for this discussion. The AmrInputStream above is apparently found in neither the SDK nor the NDK, but I have been able to use it.
I've been searching around the internet for how to use the stream, but have not found any examples. In the end I experimented and found it to behave much like any other InputStream. Here's a code snippet:
InputStream inStream = new FileInputStream("abc.wav");
AmrInputStream aStream = new AmrInputStream(inStream);

File file = new File("xyz.amr");
file.createNewFile();
OutputStream out = new FileOutputStream(file);

byte[] x = new byte[1024];
int len;
while ((len = aStream.read(x)) > 0) {
    out.write(x, 0, len);
}

out.close();
aStream.close();
I have tested this and it has worked (it requires prepending the #!AMR\n tag to the output file before it will play).
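For example, the magic header can be written before copying the encoded frames:

out.write("#!AMR\n".getBytes());  // AMR magic number; the encoded frames follow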
My question is that I have only managed to get this to work on a PCM-16 file sampled at 8000 Hz. Any higher frequency used for the original PCM-16 file results in an undersampled (slowed-down) output. There is a SAMPLES_PER_FRAME variable in the AmrInputStream.java file which I have tried playing with, but it does not seem to affect anything.
Any advice or related discussion is welcome!
SAMPLES_PER_FRAME is the block of data the AMR encoder acts on in one go (it maps to 20 ms of audio).
From the signatures of the AMR encoder functions (at the bottom of http://hi-android.info/src/android/media/AmrInputStream.java.html):
private static native int GsmAmrEncoderNew();
private static native void GsmAmrEncoderInitialize(int gae);
private static native int GsmAmrEncoderEncode(int gae,
        byte[] pcm, int pcmOffset, byte[] amr, int amrOffset) throws IOException;
private static native void GsmAmrEncoderCleanup(int gae);
private static native void GsmAmrEncoderDelete(int gae);
There doesn't seem to be a way to pass a sample rate to the encoder (gae is the native handle).
The sample rate is hard-coded to 8 kHz, at least with this API, so any higher-rate PCM has to be brought down to 8000 Hz before encoding.
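A naive, hypothetical workaround is to resample the PCM yourself before handing it to AmrInputStream. The linear-interpolation sketch below assumes 16-bit mono samples and applies no low-pass filter, so some aliasing is to be expected:

// Resample 16-bit mono PCM from inputRate down to the 8000 Hz the encoder expects.
static short[] resampleTo8k(short[] input, int inputRate) {
    int outLen = (int) ((long) input.length * 8000 / inputRate);
    short[] out = new short[outLen];
    for (int i = 0; i < outLen; i++) {
        double pos = (double) i * inputRate / 8000;   // fractional position in the source
        int i0 = (int) pos;
        int i1 = Math.min(i0 + 1, input.length - 1);
        double frac = pos - i0;
        out[i] = (short) (input[i0] * (1 - frac) + input[i1] * frac);  // linear interpolation
    }
    return out;
}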
I use the following code to append all the WAV files present on the SD card to a single file. audFullPath is an ArrayList containing the paths of the audio files. Is it correct? When I play recordedaudio1 after doing this, it plays only the first file. I want it to play all the files. Any suggestions?
File file = new File("/sdcard/AudioRecorder/recordedaudio1.wav");
RandomAccessFile raf = new RandomAccessFile(file, "rw");
for (int i = 0; i < audFullPath.size(); i++) {
    f = new File(audFullPath.get(i));
    fileContent = new byte[(int) f.length()];   // note: this buffer is never actually filled
    System.out.println("Filecontent" + fileContent);
    raf.seek(raf.length());
    raf.writeBytes(audFullPath.get(i));         // note: this writes the path string, not the file's bytes
}
You can't append WAV files the way you do. That's because each WAV file has a specific format:
The simplest possible WAV file looks like this:
[RIFF HEADER]
...
totalFileSize
[FMT CHUNK]
...
audioFormat
frequency
bytesPerSample
numberOfChannels
...
[DATA CHUNK]
dataSize
<audio data>
What you need to do is:
Make sure that all the WAV files are compatible: same audioFormat, frequency, bits per sample, number of channels, etc.
Create a proper RIFF header with the total file size
Create a proper FMT header
Create a proper DATA header with the total audio data size (a header-building sketch follows below)
This algorithm will definitely work for LPCM, ULAW, ALAW audio formats. Not sure about others.
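As a minimal sketch of those header steps, assuming plain LPCM (audioFormat = 1); the helper name is illustrative:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Build the canonical 44-byte WAV header for LPCM data. Write this first,
// then the concatenated <audio data> of all the (compatible) input files.
static byte[] wavHeader(int dataSize, int sampleRate, int channels, int bitsPerSample) {
    ByteBuffer bb = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
    bb.put("RIFF".getBytes());
    bb.putInt(36 + dataSize);                              // totalFileSize minus the 8 bytes so far
    bb.put("WAVE".getBytes());
    bb.put("fmt ".getBytes());
    bb.putInt(16);                                          // FMT chunk size for plain PCM
    bb.putShort((short) 1);                                 // audioFormat: 1 = LPCM
    bb.putShort((short) channels);                          // numberOfChannels
    bb.putInt(sampleRate);                                  // frequency
    bb.putInt(sampleRate * channels * bitsPerSample / 8);   // byte rate
    bb.putShort((short) (channels * bitsPerSample / 8));    // block align
    bb.putShort((short) bitsPerSample);                     // bits per sample
    bb.put("data".getBytes());
    bb.putInt(dataSize);                                    // dataSize of the <audio data> chunk
    return bb.array();
}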