I am developing an app that will allow users to cast the screen of an Android phone directly to another phone. It is based on the libstreaming library with a few modifications.
I have set up MediaRecorder, but the output is not sent to a file but to a file descriptor. I have two file descriptors created with ParcelFileDescriptor.createPipe();
mMediaRecorder = new MediaRecorder();
...
FileDescriptor fd = mParcelWrite.getFileDescriptor();
mMediaRecorder.setOutputFile(fd);
... here I continue with prepare, start, etc.
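For completeness, mParcelRead and mParcelWrite come from a single pipe, roughly like this:
// Assumed setup matching the description above: one pipe, read end + write end
ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
mParcelRead = pipe[0];   // read side, consumed by the streaming code below
mParcelWrite = pipe[1];  // write side, handed to MediaRecorder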
Then the output is read from the second file descriptor. The stream is processed so that the headers that don't exist yet in the MP4 file can be skipped and the video can be streamed.
InputStream is = new ParcelFileDescriptor.AutoCloseInputStream(mParcelRead);
try {
byte buffer[] = new byte[4];
// Skip all atoms preceding mdat atom
while (!Thread.interrupted()) {
while (is.read() != 'm') ;
is.read(buffer, 0, 3);
if (buffer[0] == 'd' && buffer[1] == 'a' && buffer[2] == 't') break;
}
} catch (IOException e) {
Log.e(TAG, "Couldn't skip mp4 header :/");
stop();
throw e;
}
(After this point the stream is sent over the network)
The thing is that, apparently, after API 23 Android doesn't allow non-seekable file descriptors.
Any idea how I can overcome this problem?
Thanks.
I currently work on an app where I use the phone camera and OpenCV to process the frames. Now I thought it would be cool to be able to send the frames to another Android client. I thought sending them frame by frame over a stream could work, but I don't know how to set up the host, and I'm not sure whether it would be efficient. Any suggestions?
If you just want to send each frame as a raw set of data, you can use sockets.
The code below is old now, but it worked fine when last tested - it sends an entire video, but you can use the same approach to send whatever file you want:
//Send the video file to the helper over a Socket connection so the helper can compress the video file
Socket helperSocket = null;
try {
Log.d("VideoChunkDistributeTask doInBackground","connecting to: " + helperIPAddress + ":" + helperPort);
helperSocket = new Socket(helperIPAddress, helperPort);
BufferedOutputStream helperSocketBOS = new BufferedOutputStream(helperSocket.getOutputStream());
byte[] buffer = new byte[4096];
//Write the video chunk to the output stream
//Open the file
File videoChunkFile = new File(videoChunkFileName);
BufferedInputStream chunkFileIS = new BufferedInputStream(new FileInputStream(videoChunkFile));
//First send a long with the file length - wrap the BufferedOutputStream in a DataOutputStream to
//allow us to send a long directly
DataOutputStream helperSocketDOS = new DataOutputStream(helperSocketBOS);
long chunkLength = videoChunkFile.length();
helperSocketDOS.writeLong(chunkLength);
Log.d("VideoChunkDistributeTask doInBackground","chunkLength: " + chunkLength);
//Now loop through the video chunk file sending it to the helper via the socket - note this will simply
//do nothing if the file is empty
int readCount = 0;
int totalReadCount = 0;
while(totalReadCount < chunkLength) {
//read the next block of the file and write it to the output stream of the socket
readCount = chunkFileIS.read(buffer);
if (readCount == -1) break; //end of file reached before chunkLength bytes; stop sending
helperSocketDOS.write(buffer, 0, readCount);
totalReadCount += readCount;
}
Log.d("VideoChunkDistributeTask doInBackground","file sent");
chunkFileIS.close();
helperSocketDOS.flush();
} catch (UnknownHostException e) {
Log.d("VideoChunkDistributeTask doInBackground","unknown host");
e.printStackTrace();
return null;
} catch (IOException e) {
Log.d("VideoChunkDistributeTask doInBackground","IO exceptiont");
e.printStackTrace();
return null;
}
The full source code is at: https://github.com/mickod/ColabAndroid/tree/master/src/com/amodtech/colabandroid
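For what it's worth, the receiving side would mirror this length-prefixed protocol; a rough sketch (not taken from the linked project; clientSocket, receivedFileName etc. are placeholder names):
// Read the long that carries the file length, then read exactly that many bytes
DataInputStream helperSocketDIS = new DataInputStream(new BufferedInputStream(clientSocket.getInputStream()));
long expectedLength = helperSocketDIS.readLong();
FileOutputStream receivedFileOS = new FileOutputStream(receivedFileName);
byte[] buffer = new byte[4096];
long totalRead = 0;
while (totalRead < expectedLength) {
    int readCount = helperSocketDIS.read(buffer, 0, (int) Math.min(buffer.length, expectedLength - totalRead));
    if (readCount == -1) break; //connection closed early
    receivedFileOS.write(buffer, 0, readCount); //only write the bytes actually read
    totalRead += readCount;
}
receivedFileOS.close();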
You may also find there are more up to date socket libraries available which might be better for you to use, but the general principles should be similar.
If you want to stream your video so that the other app can play it like a regular video streamed from the web, then you would want to set up a web server on the 'sending' device. At that point it might be easier to send it to a server and stream from there instead.
I want to analyse an audio file (MP3 in particular) which the user can select, and determine what notes are played, when they're played, and at what frequency.
I already have some working code for my computer, but I want to be able to use this on my phone as well.
In order to do this, however, I need access to the bytes of the audio file. On my PC I could just open a stream, use AudioFormat to decode it, and then read() the bytes frame by frame.
Looking at the Android Developer Forums I can only find classes and examples for playing a file (without access to the bytes) or recording to a file (I want to read from a file).
I'm pretty confident that I can set up a file chooser, but once I have the Uri from that, I don't know how to get a stream or the bytes.
Any help would be much appreciated :)
Edit: Is a similar solution to this possible? Android - Read a File
I don't know if I could decode the audio file that way or if there would be any problems with the Android API...
So I solved it in the following way:
Get an InputStream with
final InputStream inputStream = getContentResolver().openInputStream(selectedUri);
Then pass it to this function and decode it using classes from JLayer:
private synchronized void decode(InputStream in)
throws BitstreamException, DecoderException {
ArrayList<Short> output = new ArrayList<>(1024);
Bitstream bitstream = new Bitstream(in);
Decoder decoder = new Decoder();
float total_ms = 0f;
float nextNotify = -1f;
boolean done = false;
while (! done) {
Header frameHeader = bitstream.readFrame();
if (total_ms > nextNotify) {
mListener.OnDecodeUpdate((int) total_ms);
nextNotify += 500f;
}
if (frameHeader == null) {
done = true;
} else {
total_ms += frameHeader.ms_per_frame();
SampleBuffer buffer = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream); // CPU intense
if (buffer.getSampleFrequency() != 44100 || buffer.getChannelCount() != 2) {
throw new DecoderException("mono or non-44100 MP3 not supported", null);
}
short[] pcm = buffer.getBuffer();
for (int i = 0; i < pcm.length-1; i += 2) {
short l = pcm[i];
short r = pcm[i+1];
short mono = (short) ((l + r) / 2f);
output.add(mono); // RAM intense
}
}
bitstream.closeFrame();
}
bitstream.close();
mListener.OnDecodeComplete(output);
}
The full project (in case you want to look up the particulars) can be found here:
https://github.com/S7uXN37/MusicInterpreterStudio/
I am trying to use MediaCodec to save a series of Images, saved as Byte Arrays in a file, to a video file. I have tested these images on a SurfaceView (playing them in series) and I can see them fine. I have looked at many examples using MediaCodec, and here is what I understand (please correct me if I am wrong):
Get InputBuffers from MediaCodec object -> fill it with your frame's image data -> queue the input buffer -> get coded output buffer -> write it to a file -> increase presentation time and repeat
However, I have tested this a lot and I end up with one of two cases:
All sample projects I tried to imitate caused the media server to die when calling queueInputBuffer for the second time.
I tried calling codec.flush() at the end (after saving the output buffer to file, although none of the examples I saw did this) and the media server did not die. However, I am not able to open the output video file with any media player, so something is wrong.
Here is my code:
MediaCodec codec = MediaCodec.createEncoderByType(MIMETYPE);
MediaFormat mediaFormat = null;
if(CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_720P)){
mediaFormat = MediaFormat.createVideoFormat(MIMETYPE, 1280 , 720);
} else {
mediaFormat = MediaFormat.createVideoFormat(MIMETYPE, 720, 480);
}
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 700000);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 10);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
codec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
codec.start();
ByteBuffer[] inputBuffers = codec.getInputBuffers();
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
boolean sawInputEOS = false;
int inputBufferIndex=-1,outputBufferIndex=-1;
BufferInfo info=null;
//loop to read YUV byte array from file
inputBufferIndex = codec.dequeueInputBuffer(WAITTIME);
if(bytesread<=0)sawInputEOS=true;
if(inputBufferIndex >= 0){
if(!sawInputEOS){
int samplesiz=dat.length;
inputBuffers[inputBufferIndex].put(dat);
codec.queueInputBuffer(inputBufferIndex, 0, samplesiz, presentationTime, 0);
presentationTime += 100;
info = new BufferInfo();
outputBufferIndex = codec.dequeueOutputBuffer(info, WAITTIME);
Log.i("BATA", "outputBufferIndex="+outputBufferIndex);
if(outputBufferIndex >= 0){
byte[] array = new byte[info.size];
outputBuffers[outputBufferIndex].get(array);
if(array != null){
try {
dos.write(array);
} catch (IOException e) {
e.printStackTrace();
}
}
codec.releaseOutputBuffer(outputBufferIndex, false);
inputBuffers[inputBufferIndex].clear();
outputBuffers[outputBufferIndex].clear();
if(sawInputEOS) break;
}
}else{
codec.queueInputBuffer(inputBufferIndex, 0, 0, presentationTime, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
info = new BufferInfo();
outputBufferIndex = codec.dequeueOutputBuffer(info, WAITTIME);
if(outputBufferIndex >= 0){
byte[] array = new byte[info.size];
outputBuffers[outputBufferIndex].get(array);
if(array != null){
try {
dos.write(array);
} catch (IOException e) {
e.printStackTrace();
}
}
codec.releaseOutputBuffer(outputBufferIndex, false);
inputBuffers[inputBufferIndex].clear();
outputBuffers[outputBufferIndex].clear();
break;
}
}
}
}
codec.flush();
try {
fstream2.close();
dos.flush();
dos.close();
} catch (IOException e) {
e.printStackTrace();
}
codec.stop();
codec.release();
codec = null;
return true;
}
My question is: how can I get a working video from a stream of images using MediaCodec? What am I doing wrong?
Another question (if I am not too greedy), I would like to add an Audio track to this video, can it be done with MediaCodec as well, or must I use FFmpeg?
Note: I know about MediaMuxer in Android 4.3; however, it is not an option for me as my app must work on Android 4.1+.
Update
Thanks to fadden's answer, I was able to reach EOS without the media server dying (the code above is after modification). However, the file I am getting is gibberish. Here is a snapshot of the video I get (it only plays as a .h264 file).
My input image format is YUV (NV21 from the camera preview). I can't get it to produce any playable format. I tried all the COLOR_FormatYUV420 formats and got the same gibberish output. And I still can't find a way (using MediaCodec) to add audio.
I think you have the right general idea. Some things to be aware of:
Not all devices support COLOR_FormatYUV420SemiPlanar. Some only accept planar. (Android 4.3 introduced CTS tests to ensure that the AVC codec supports one or the other.)
It's not the case that queueing an input buffer will immediately result in the generation of one output buffer. Some codecs may accumulate several frames of input before producing output, and may produce output after your input has finished. Make sure your loops take that into account (e.g. your inputBuffers[].clear() will blow up if it's still -1).
Don't try to submit data and send EOS with the same queueInputBuffer call. The data in that frame may be discarded. Always send EOS with a zero-length buffer (see the sketch after these points).
The output of the codecs is generally pretty "raw", e.g. the AVC codec emits an H.264 elementary stream rather than a "cooked" .mp4 file. Many players won't accept this format. If you can't rely on the presence of MediaMuxer you will need to find another way to cook the data (search around on stackoverflow for ideas).
It's certainly not expected that the mediaserver process would crash.
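To illustrate the draining and EOS points, here is a minimal sketch (using codec and WAITTIME from your code; this is not a complete encoder loop):
// Rough sketch of signalling EOS and then draining the encoder
int inIndex = codec.dequeueInputBuffer(WAITTIME);
if (inIndex >= 0) {
    // zero-length buffer + EOS flag: no frame data is submitted together with EOS
    codec.queueInputBuffer(inIndex, 0, 0, presentationTime, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
}
MediaCodec.BufferInfo bufInfo = new MediaCodec.BufferInfo();
while (true) {
    int outIndex = codec.dequeueOutputBuffer(bufInfo, WAITTIME);
    if (outIndex >= 0) {
        // write bufInfo.size bytes from the output buffer to your file here
        codec.releaseOutputBuffer(outIndex, false);
        if ((bufInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            break; // the encoder has emitted everything
        }
    }
    // outIndex may be INFO_TRY_AGAIN_LATER (-1) while the codec is still working; just loop again
}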
You can find some examples and links to the 4.3 CTS tests here.
Update: As of Android 4.3, MediaCodec and Camera have no ByteBuffer formats in common, so at the very least you will need to fiddle with the chroma planes. However, that sort of problem manifests very differently (as shown in the images for this question).
The image you added looks like video, but with stride and/or alignment issues. Make sure your pixels are laid out correctly. In the CTS EncodeDecodeTest, the generateFrame() method (line 906) shows how to encode both planar and semi-planar YUV420 for MediaCodec.
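For what it's worth, a minimal sketch of re-ordering NV21 camera data into the NV12 layout that COLOR_FormatYUV420SemiPlanar expects (it ignores stride/alignment, which can still differ between devices):
// NV21 is the Y plane followed by interleaved V/U; NV12 expects interleaved U/V.
// Assumes no row padding in either buffer.
static byte[] nv21ToNv12(byte[] nv21, int width, int height) {
    byte[] nv12 = new byte[nv21.length];
    int ySize = width * height;
    System.arraycopy(nv21, 0, nv12, 0, ySize);   // luma plane is identical
    for (int i = ySize; i < nv21.length - 1; i += 2) {
        nv12[i] = nv21[i + 1];       // U
        nv12[i + 1] = nv21[i];       // V
    }
    return nv12;
}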
The easiest way to avoid the format issues is to move the frames through a Surface (like the CameraToMpegTest sample), but unfortunately that's not possible in Android 4.1.
I want to merge two MP3 files into one MP3 file. For example, if the first file is 1 min long and the second is 30 sec, the output should be one minute long, and during that minute it should play both files.
First of all, in order to mix two audio files you need to manipulate their raw representation; since an MP3 file is compressed, you don't have direct access to the signal's raw representation. You need to decode the compressed MP3 stream in order to "understand" the wave form of your audio signals, and then you will be able to mix them.
Thus, in order to mix two compressed audio file into a single compressed audio file, the following steps are required:
decode the compressed file using a decoder to obtain the raw data (NO PUBLIC SYSTEM API available for this, you need to do it manually!).
mix the two raw uncompressed data streams (applying audio clipping if necessary; see the sketch after these steps). For this, you need to consider the raw data format obtained with your decoder (PCM)
encode the raw mixed data into a compressed MP3 file (as per the decoder, you need to do it manually using an encoder)
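To give an idea of the mixing step, here is a minimal sketch, assuming both files have already been decoded to 16-bit PCM short[] arrays with the same sample rate and channel count:
// Mix two decoded PCM streams sample by sample, clipping to the 16-bit range
static short[] mixPcm(short[] a, short[] b) {
    int len = Math.max(a.length, b.length);
    short[] out = new short[len];
    for (int i = 0; i < len; i++) {
        int sa = (i < a.length) ? a[i] : 0;
        int sb = (i < b.length) ? b[i] : 0;
        int mixed = sa + sb;                                    // sum the two samples
        if (mixed > Short.MAX_VALUE) mixed = Short.MAX_VALUE;   // clip
        if (mixed < Short.MIN_VALUE) mixed = Short.MIN_VALUE;
        out[i] = (short) mixed;
    }
    return out;
}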
More info about MP3 decoders can be found here.
I am not sure if you want to do it on an Android phone (it looks like that because of your tags), but if I'm right, maybe try LoopStack; it's a mobile DAW (I have not tried it myself).
If you are just "mixing" two files without adjusting the output volume, your output might clip. However, I am not sure if it's possible to "mix" two MP3 files without decoding them.
If it is okay for you to merge them on your PC, try Audacity; it's a free desktop DAW.
I have not done it in Android, but I have done it using Adobe Flex. I guess the logic remains the same. I followed these steps:
Extract both MP3s into two byte arrays (song1ByteArray, song2ByteArray).
Find out which byte array is bigger (let's say song1ByteArray is the larger one).
Create a function which returns the mixed byte array.
private ByteArray mix2Songs(ByteArray song1ByteArray, ByteArray song2ByteArray){
ByteArray returnResultArr=new ByteArray(); // holds the mixed output
int arrLength=song1ByteArray.length;
for(int i=0;i<arrLength;i+=8){ // we increment by 8 because stereo sound has both left and right channels: 4 bytes for left + 4 bytes for right
// read left and right channel values of the first song
float source1_L=song1ByteArray.readFloat(); // I'm not sure if a readFloat() function exists in Android, but there will be an equivalent one
float source1_R=song1ByteArray.readFloat();
float source2_L=0;
float source2_R=0;
if(song2ByteArray.bytesAvailable>0){
source2_L=song2ByteArray.readFloat(); //left channel of song2ByteArray
source2_R=song2ByteArray.readFloat(); //right channel of song2ByteArray
}
returnResultArr.writeFloat((source1_L+source2_L)/2); // average of the source 1 and 2 left channels
returnResultArr.writeFloat((source1_R+source2_R)/2); // average of the source 1 and 2 right channels
}
return returnResultArr;
}
1. Post on Audio mixing in Android
2. Another post on mixing audio in Android
3. You could leverage Java Sound to mix two audio files
Example:
// First convert audiofile to audioinputstream
audioInputStream = AudioSystem.getAudioInputStream(soundFile);
audioInputStream2 = AudioSystem.getAudioInputStream(soundFile2);
// Create one collection list object using ArrayList, then add all AudioInputStreams
List<AudioInputStream> list = new ArrayList<AudioInputStream>();
list.add(audioInputStream2);
list.add(audioInputStream);
// Then pass the audio format and the collection list to the MixingAudioInputStream constructor
MixingAudioInputStream mixer = new MixingAudioInputStream(audioFormat, list);
// Finally read data from the mixed AudioInputStream and give it to a SourceDataLine
nBytesRead =mixer.read(abData, 0,abData.length);
int nBytesWritten = line.write(abData, 0, nBytesRead);
4. Try AudioConcat, which has a -m option for mixing
java AudioConcat [ -D ] [ -c ] | [ -m ] | [ -f ] -o outputfile inputfile ...
Parameters:
-c            selects concatenation mode
-m            selects mixing mode
-f            selects float mixing mode
-o outputfile the filename of the output file
inputfile     the name(s) of the input file(s)
5. You could use the ffmpeg Android wrapper, using the syntax and approach explained here
This guy used the JLayer library in a project quite similar to yours. He also gives you a guide on how to integrate that library in your Android application by directly recompiling the JAR.
Paraphrasing his code, it is very easy to accomplish your task:
public static byte[] decode(String path, int startMs, int maxMs)
throws IOException, com.mindtherobot.libs.mpg.DecoderException {
ByteArrayOutputStream outStream = new ByteArrayOutputStream(1024);
float totalMs = 0;
boolean seeking = true;
File file = new File(path);
InputStream inputStream = new BufferedInputStream(new FileInputStream(file), 8 * 1024);
try {
Bitstream bitstream = new Bitstream(inputStream);
Decoder decoder = new Decoder();
boolean done = false;
while (! done) {
Header frameHeader = bitstream.readFrame();
if (frameHeader == null) {
done = true;
} else {
totalMs += frameHeader.ms_per_frame();
if (totalMs >= startMs) {
seeking = false;
}
if (! seeking) {
SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
if (output.getSampleFrequency() != 44100
|| output.getChannelCount() != 2) {
throw new com.mindtherobot.libs.mpg.DecoderException("mono or non-44100 MP3 not supported");
}
short[] pcm = output.getBuffer();
for (short s : pcm) {
outStream.write(s & 0xff);
outStream.write((s >> 8 ) & 0xff);
}
}
if (totalMs >= (startMs + maxMs)) {
done = true;
}
}
bitstream.closeFrame();
}
return outStream.toByteArray();
} catch (BitstreamException e) {
throw new IOException("Bitstream error: " + e);
} catch (DecoderException e) {
Log.w(TAG, "Decoder error", e);
throw new com.mindtherobot.libs.mpg.DecoderException(e);
} finally {
IOUtils.safeClose(inputStream);
}
}
public static byte[] mix(String path1, String path2) {
byte[] pcm1 = decode(path1, 0, 60000);
byte[] pcm2 = decode(path2, 0, 60000);
int len1=pcm1.length;
int len2=pcm2.length;
byte[] pcmL;
byte[] pcmS;
int lenL; // length of the longest
int lenS; // length of the shortest
if (len2>len1) {
lenL = len2; // pcm2 is the longer one
pcmL = pcm2;
lenS = len1;
pcmS = pcm1;
} else {
lenL = len1;
pcmL = pcm1;
lenS = len2;
pcmS = pcm2;
}
for (int idx = 0; idx < lenL; idx++) {
int sample;
if (idx >= lenS) {
sample = pcmL[idx];
} else {
sample = pcmL[idx] + pcmS[idx];
}
sample=(int)(sample*.71);
if (sample>127) sample=127;
if (sample<-128) sample=-128;
pcmL[idx] = (byte) sample;
}
return pcmL;
}
Note that I added attenuation and clipping in the last lines: you always have to do both when mixing two waveforms.
If you don't have memory/time constraints, you can build an int[] of the sums of the samples and work out the best attenuation to avoid clipping.
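A rough sketch of that idea, using the same byte-per-sample representation as the code above (this only illustrates the attenuation pass, not production code):
// Accumulate raw sums first, find the worst-case peak, then scale everything once
static byte[] mixWithAutoAttenuation(byte[] pcm1, byte[] pcm2) {
    int len = Math.max(pcm1.length, pcm2.length);
    int[] sum = new int[len];
    int peak = 1;
    for (int i = 0; i < len; i++) {
        int a = (i < pcm1.length) ? pcm1[i] : 0;
        int b = (i < pcm2.length) ? pcm2[i] : 0;
        sum[i] = a + b;
        peak = Math.max(peak, Math.abs(sum[i]));
    }
    float scale = (peak > 127) ? 127f / peak : 1f;  // attenuate only if the sum would clip
    byte[] out = new byte[len];
    for (int i = 0; i < len; i++) {
        out[i] = (byte) (sum[i] * scale);
    }
    return out;
}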
To merge (overlap) two sound files, you can use This FFMPEG library.
Here is the Documentation
In their sample you can just enter the command you want. So let's talk about the command that we need.
-i [FISRST_FILE_PATH] -i [SECOND_FILE_PATH] -filter_complex amerge -ac 2 -c:a libmp3lame -q:a 4 [OUTPUT_FILE_PATH]
For the first and second file paths, you will use the absolute path of the sound file.
1- If it is in storage, then it is a subfolder of Environment.getExternalStorageDirectory().getAbsolutePath()
2- If it is in assets, then it should be a subfolder of file:///android_asset/
For the output path, make sure to add the extension,
ex.
String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/File Name.mp3"
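Putting it together, the full command passed to the library might look roughly like this (paths are examples only):
-i /storage/emulated/0/first.mp3 -i /storage/emulated/0/second.mp3 -filter_complex amerge -ac 2 -c:a libmp3lame -q:a 4 /storage/emulated/0/mixed.mp3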
I didn't find a proper solution, but we can do a trick here. :)
You can assign the two MP3 files to two different MediaPlayer objects, then play both files at the same time with a button. Compare the two MP3 files to find the longer duration, and then use an audio recorder to record for that duration. It will solve your problem. I know it's not the right way, but I hope it will help you. :)
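A very rough sketch of that trick (context and the two path variables are placeholders; recording the mixed playback would still need to be wired up separately):
// Play both files at once; the longer duration tells you how long to record for
MediaPlayer player1 = MediaPlayer.create(context, Uri.parse(firstMp3Path));
MediaPlayer player2 = MediaPlayer.create(context, Uri.parse(secondMp3Path));
int recordDurationMs = Math.max(player1.getDuration(), player2.getDuration());
player1.start();
player2.start();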
I have an application that streams video/audio from an Android device to a server.
Streaming works fine, but when I save the streamed data from MediaRecorder I can't play the resulting file.
Android code:
String hostname = "000.000.000.000";
int port = 0000;
Socket socket = null;
try {
socket = new Socket(InetAddress.getByName(hostname), port);
} catch (UnknownHostException e1) {
e1.printStackTrace();
} catch (IOException e1) {
e1.printStackTrace();
}
ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(socket);
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
recorder.setOutputFile(pfd.getFileDescriptor());
recorder.start();
Server side:
Socket userSocket = socket.accept();
//DataInputStream dis;
dis = new DataInputStream(userSocket.getInputStream());
while(true){
dis.read(buf , 0 , buf.length);
saveBufferToFile(buf);
}
Right now I save the buffer using the FileOutputStream.write() method, but the output file can't be played at all.
After some research I understand that I need to add the MP4 headers to the file before I write the data to it, but I don't know how to do this.
Regards,
Firstly, the MPEG4 container is more than just "headers" to video data.
Secondly, the server side code does not look correct to me. I assume it is Java. Specifically:
dis = new DataInputStream(userSocket.getInputStream());
while(true){
dis.read(buf , 0 , buf.length);
saveBufferToFile(buf);
}
Specifically, DataInputStream.read() does not necessarily read the entire length of the buffer. The JavaDocs say:
An attempt is made to read as many as len bytes, but a smaller number may be read, possibly zero.
I suspect the file you are writing to disk is corrupted. You can confirm this by comparing the number of bytes sent by the client with the size of the file. My gut says the file will be significantly larger.
To fix it you'll need to take note of the number of bytes read in by read(), and only write that many bytes to disk. The rest of the buffer is effectively garbage; ignore it.
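A minimal sketch of the fix, assuming saveBufferToFile is changed to accept the number of valid bytes (dis and buf are as in your code):
// read() returns the number of bytes actually read, or -1 at end of stream
int bytesRead;
while ((bytesRead = dis.read(buf, 0, buf.length)) != -1) {
    saveBufferToFile(buf, bytesRead); // write only the bytes that were actually read (hypothetical two-arg variant)
}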