I'm compressing a video with audio on Android using MediaCodec. When I run the code in a try/catch, the output video is fine except that the last part of the audio is missing.
I've rewritten the code and converted it to Kotlin, but nothing has worked so far. I've seen a few similar issues on here, but in those the error is slightly different: they are audio problems, whereas this looks like an issue with the video extraction.
Here's the code where the error happens; the crash occurs exactly on the videoDecoder.queueInputBuffer line.
videoExtractorDone = !videoExtractor.advance()
if (videoExtractorDone) {
    if (false) Log.d(TAG, "video extractor: EOS")
    videoDecoder.queueInputBuffer(decoderInputBufferIndex, 0, 0, 0, videoDecoderOutputBufferInfo.flags)
}
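For comparison, the corresponding passage in the original ExtractDecodeEditEncodeMuxTest (Java) queues an empty buffer with an explicit BUFFER_FLAG_END_OF_STREAM instead of reusing another buffer's flags; a minimal sketch, assuming that sample's variable names:

// Once the extractor is exhausted, queue an empty input buffer whose flags
// explicitly mark end-of-stream for the decoder.
if (videoExtractorDone) {
    videoDecoder.queueInputBuffer(
            decoderInputBufferIndex,
            0,  // offset
            0,  // size: empty buffer carrying only the EOS flag
            0,  // presentation time
            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
}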
How can I solve this issue so that the clip has full audio? I can't lose any of the audio from this process.
I want to save the video every 5 seconds while the video recording is ON.
I have tried many solutions, but I am facing a glitch: the last saved frame remains in the preview for around 300 ms.
I think the reason is in the MediaRecorder class: "Once a recorder has been stopped, it will need to be completely reconfigured and prepared before being restarted."
Thanks
I think it's impossible to do that with MediaRecorder. A better approach could be to encode the video using MediaCodec and store the encoded content using MediaMuxer.
Grafika is a project on Google's GitHub account, a dumping ground for Android graphics & media hacks. In this project you can find good examples of using both the MediaCodec and MediaMuxer classes.
I forked the Grafika project and made some modifications to support sequential segmented recording. You can find it here. When you run the application, select the Show + capture camera item from the list, set Output Segment Duration to, for example, 5, and then press the Start recording button.
Please look at the source code of the VideoEncoderCore and CameraCaptureActivity classes to see how it works. You can find there how it segments the live camera feed into different files.
"I think the reason is in MediaRecorder class, "Once a recorder has been stopped, it will need to be completely reconfigured and prepared before being restarted"."
You can use multiple MediaMuxers to encode separate files.
The camera should send data to fill a MediaMuxer object (which itself produces an .mp4 file).
When needed, you can start writing the camera data to a second (different) MediaMuxer, thus automatically creating a second new .mp4 file (when that muxer begins to be used).
The first MediaMuxer can then close and save its file. Your first segment is ready...
If needed, study this code for a guide on using the camera with MediaMuxer:
https://bigflake.com/mediacodec/CameraToMpegTest.java.txt
So you have a function that handles things when the 5-second interval has passed? In that function, you could cycle the recording between two muxers, giving one a chance to close its file while the other records the next segment (and then vice versa); see the sketch after the stop/release snippet below.
Instead of something like below (using MediaRecorder.OutputFormat.MPEG_4):
this.mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
You will instead create a new muxer (with MUXER_OUTPUT_MPEG_4):
// create a new File to save into
File outputFile = new File(OUTPUT_FILENAME_DIR, "/yourFolder/Segment" + "-" + mySegmentNum + ".mp4");
String outputPath = outputFile.toString();
int format = MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4;
try {
    mMuxer = new MediaMuxer(outputPath, format);
} catch (IOException e) {
    Log.e(TAG, e.getLocalizedMessage());
}
And you stop a muxer with:
mMuxer1.stop(); mMuxer1.release();
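Putting it together, a minimal sketch of the swap-on-interval logic (hypothetical names: mActiveMuxer, mEncodedVideoFormat, onSegmentIntervalElapsed; error handling omitted). Note that each new muxer needs the encoder's output format (the one delivered via INFO_OUTPUT_FORMAT_CHANGED) before start(), and ideally each segment should begin on a sync frame:

// Hypothetical sketch: called on every 5-second tick.
// Assumes the writeSampleData() calls elsewhere always target mActiveMuxer.
private void onSegmentIntervalElapsed() throws IOException {
    MediaMuxer finished = mActiveMuxer;

    // Open the next segment before closing the old one, so no camera data is lost.
    mySegmentNum++;
    File outputFile = new File(OUTPUT_FILENAME_DIR,
            "/yourFolder/Segment-" + mySegmentNum + ".mp4");
    MediaMuxer next = new MediaMuxer(outputFile.toString(),
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    mTrackIndex = next.addTrack(mEncodedVideoFormat); // format from the encoder
    next.start();
    mActiveMuxer = next;

    // Finalize the previous segment.
    finished.stop();
    finished.release();
}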
PS:
Another option is to use Threads to run multiple MediaRecorders. It might help your situation. See the Android Background Process guide.
I have a server that encodes real-time voice into mono or stereo mp3 thanks to libmp3lame and sends it chunk by chunk through a WebSocket.
I'm trying to make an Android app that receives those MP3 chunks and plays them with the most appropriate audio player Android has. I went with AudioTrack, since it seems easy to feed chunks to the player and it is "stream" oriented (what I'm sending to the track is a byte array, not a full song stored locally on the phone).
Since AudioTrack does not support compressed audio formats (such as MP3), I have to decode those chunks into PCM to play them afterwards. I'm using the famous JLayer to do this real-time decoding. Thanks to that, I can play each sample through my AudioTrack and hear what the server is sending.
My problem is that the received/played audio is badly "hashed": I can understand everything the speaker says, but the quality is bad, as if the speaker had a robotic voice.
Here is the code I'm using to receive/decode/play those byte[].
public void addSample(byte[] data) throws BitstreamException, DecoderException, IOException {
    // JLayer decoder
    Decoder decoder = new Decoder();
    // Input stream wrapping the byte[] voice data
    InputStream bis = new ByteArrayInputStream(data);
    Bitstream bits = new Bitstream(bis);
    // Decode the MP3 data into PCM, in a PCM buffer
    SampleBuffer pcmBuffer = (SampleBuffer) decoder.decodeFrame(bits.readFrame(), bits);
    // Write the PCM buffer data into the AudioTrack to play it
    mTrack.write(pcmBuffer.getBuffer(), 0, pcmBuffer.getBufferLength());
    bits.closeFrame();
}
And here is my AudioTrack initialization
mTrack = new AudioTrack.Builder()
        .setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
                .build())
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(48000)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build())
        .setBufferSizeInBytes(AudioTrack.getMinBufferSize(48000,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT))
        .build();
mTrack.play();
So, to understand what was happening, I tried to log each value contained in the pcmBuffer. It turns out a huge part of those values were 0 at the very beginning of the buffer (I'd say 1/5 of the buffer is 0, all located at the beginning). So then I took an oscilloscope and captured the signal my Android phone was receiving. Here is the result:
As you can see, each frame is present, but with some "blank" or 0 data values. Those 0s at the beginning of each frame make the signal hashed and pretty annoying to listen to.
I have no idea whether this comes from the MP3 signal itself, the way I'm playing it, AudioTrack, JLayer, or the way I'm decoding it. So if anyone has an idea, that would be really awesome.
EDIT:
I found out something interesting. By decoding each frame header I can access a lot of information, such as the duration in ms of each frame. I logged it:
System.out.println(bits.readFrame().ms_per_frame());
I found out that each of my frames is 24 ms long. When I look back at the oscilloscope, I can see that each frame actually does take 24 ms, but the beginning/end of each frame is filled with 0s. So first of all, is it a decoding problem? If not, how can I get a clean signal without a small break-up in each frame?
I've been printing all the data that each frame gives me, and each frame starts with a lot of zeros. How am I supposed to get a clean signal if each frame has some kind of audio void?
If I print the MP3 data that I'm receiving for each frame (96 bits), the first four bytes (probably the header?) always have the same value:
"-1, -5, 20, -60"
Then I have a fifth byte that is always equal to 0, and sometimes a sixth byte that is also equal to 0. Should I remove those?
I am working on a project that needs to process a video using OpenGL on Android. I decided to use MediaCodec, and I managed to get it working with help from ExtractDecodeEditEncodeMuxTest. The result is quite good: it receives a video, extracts the tracks, decodes the video track, edits it with OpenGL, and encodes it to a video file.
The problem is that the resulting video plays well on Android, but when it comes to iOS, two-thirds of the screen is green.
I tried the suggestions from here, here, and here, and experimented with different formats for the encoder, but the problem remains the same.
Could someone suggest reasons that could cause this problem and how to fix it?
This is the video when it's played on iOS
This is the configuration for the encoder
MediaCodec mediaCodec = MediaCodec.createEncoderByType("video/avc");
MediaFormat mediaFormat = MediaFormat.createVideoFormat("video/avc", 540, 960);
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, CodecCapabilities.COLOR_FormatSurface);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Update
I wonder if I made a mistake with the video orientation, because the working portion of the output video has the same aspect ratio as the desired output resolution, but in landscape orientation. The input is recorded vertically, and so is the desired output.
Here is the code of the decoder configuration:
inputFormat.setInteger(MediaFormat.KEY_WIDTH, 540);
inputFormat.setInteger(MediaFormat.KEY_HEIGHT, 960);
inputFormat.setInteger("rotation-degrees", 90);
String mime = inputFormat.getString(MediaFormat.KEY_MIME);
MediaCodec decoder = MediaCodec.createDecoderByType(mime);
decoder.configure(inputFormat, surface, null, 0);
Update Dec 25: I've tried different resolutions and orientations when configuring both the encoder and the decoder to check whether the video's orientation is the problem, but the output video just gets rotated; the green problem is still there.
I also tried "video/mp4v-es" for the encoder; the resulting video is viewable on a Mac, but the iPhone cannot play it at all.
I've just solved it. The reason turns out to be MediaMuxer: it wraps the raw H.264 stream in some sort of container that iOS can't understand. So instead of using MediaMuxer, I write the raw H.264 stream from the encoder to a file and use mp4parser to mux it into an .mp4 file.
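For reference, a minimal sketch of that last muxing step with mp4parser (file names are placeholders, and package paths vary between mp4parser versions):

import com.coremedia.iso.boxes.Container;
import com.googlecode.mp4parser.FileDataSourceImpl;
import com.googlecode.mp4parser.authoring.Movie;
import com.googlecode.mp4parser.authoring.builder.DefaultMp4Builder;
import com.googlecode.mp4parser.authoring.tracks.H264TrackImpl;
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;

// Wrap the raw Annex-B H.264 stream produced by the encoder in an .mp4 container.
H264TrackImpl h264Track = new H264TrackImpl(new FileDataSourceImpl("raw.h264"));
Movie movie = new Movie();
movie.addTrack(h264Track);
Container mp4file = new DefaultMp4Builder().build(movie);
FileChannel channel = new FileOutputStream("video.mp4").getChannel();
mp4file.writeContainer(channel);
channel.close();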
I know the answer now: it has to do with the fps range. I changed the fps rate in my camera parameters and on the MediaCodec, and suddenly it worked!
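A minimal sketch of aligning the two settings with the old Camera API (the values are examples; query getSupportedPreviewFpsRange() for what the device actually supports):

// Lock the camera preview to ~30 fps (values are in fps * 1000).
Camera.Parameters params = camera.getParameters();
params.setPreviewFpsRange(30000, 30000);
camera.setParameters(params);

// ...and use a matching frame rate when configuring the encoder.
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);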
This is my first question so please let me know if I missed anything!
I want to copy a video using the Android MediaCodec class (from.mp4 -> to.mp4).
I managed to decode the video and audio, but I have no idea how to encode the video and audio again. I have already seen this page: http://bigflake.com/mediacodec/. But I can't find a simple example.
mFormat = MediaFormat.createVideoFormat("video/avc", mMetadata.V_WIDTH,mMetadata.V_HEIGHT);
mFormat.setInteger(MediaFormat.KEY_WIDTH, mMetadata.V_WIDTH);
mFormat.setInteger(MediaFormat.KEY_HEIGHT, mMetadata.V_HEIGHT);
mFormat.setInteger(MediaFormat.KEY_BIT_RATE, (int) mMetadata.BIT_RATE);
mFormat.setInteger(MediaFormat.KEY_FRAME_RATE, mMetadata.FRAME_RATE);
mFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
mFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, mMetadata.I_FRAME_INTERVAL);
videoEncoder = MediaCodec.createEncoderByType("video/avc");
videoEncoder.configure(mFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
videoEncoder.start();
I started the MediaCodec, but I don't know how to connect the raw video data to it. Please save me from this hell.
I have followed this example to convert raw audio data coming from AudioRecord to MP3, and it worked: if I store the data in an MP3 file and play it with a music player, it is audible.
Now, instead of storing the MP3 data in a file, I need to play it with AudioTrack. The data is coming from the Red5 media server as a live stream, but the problem is that AudioTrack can only play PCM data, so all I can hear from my data is noise.
Now I am using JLayer for the required task.
My code is as follows.
int readresult = recorder.read(audioData, 0, recorderBufSize);
int encResult = SimpleLame.encode(audioData,audioData, readresult, mp3buffer);
This mp3buffer data is sent to the other user via a Red5 stream.
The data received by the other user is in the form of a stream, so the code for playing it is:
Bitstream bitstream = new Bitstream(data.read());
Decoder decoder = new Decoder();
Header frameHeader = bitstream.readFrame();
SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
short[] pcm = output.getBuffer();
player.write(pcm, 0, pcm.length);
But my code freezes at bitstream.readFrame after 2-3 seconds, and no sound is produced before that.
Any guesses what the problem might be? Any suggestion is appreciated.
Note: I don't need to store the MP3 data, so I can't use MediaPlayer, as it requires a file or file descriptor.
Just a tip, but try calling
output.close();
bitstream.closeFrame();
after your write call. I'm processing MP3 the same way you do, but I close the buffers after use and I have no problems.
Second tip: do it in a Thread or some other background process. As for those silent 2 seconds you mentioned, the player may be waiting until you have processed the whole stream, because you are loading it on the same thread.
Try both tips (you should anyway). For the first, the problem could be in the internal buffers; for the second, you have probably filled the player's input buffer and locked the app (same thread: the full buffer cannot receive your input, and the code that plays it and releases that buffer is never invoked because the write blocks...).
Also, if you aren't doing it already, check for frameHeader == null to detect the end of the stream.
Good luck.
You need to loop through the frames like this:
Header frameHeader;
while ((frameHeader = bitstream.readFrame()) != null) {
    SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
    short[] pcm = output.getBuffer();
    player.write(pcm, 0, pcm.length);
    bitstream.closeFrame(); // release the current frame; close the Bitstream only after the loop
}
And make sure you are not running this on the main thread (this is probably the reason for the freezing).
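A minimal sketch of moving the loop off the main thread (decodeLoop() is a hypothetical method wrapping the while loop above):

new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            decodeLoop(); // runs the readFrame/decodeFrame/write loop shown above
        } catch (Exception e) {
            Log.e(TAG, "decode loop failed", e);
        }
    }
}).start();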