I've been working with the MediaRecorder API for a while; I thought all my problems were behind me, but I guess I was wrong.
I'm using the MediaRecorder API for recording video to a file.
When I use setProfile with high quality I get good quality, but when I try to set the parameters manually (as in the code below) the quality is bad, because for some reason the bitrate gets clipped.
I want to get 720p at 1 fps.
I keep getting the following warning:
WARN/AuthorDriver(268): Video encoding bit rate is set to 480000 bps
The code I'm running:
m_MediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
m_MediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
m_MediaRecorder.setVideoSize(1280, 720);
m_MediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
m_MediaRecorder.setVideoFrameRate(1);
m_MediaRecorder.setVideoEncodingBitRate(8000000);
Any idea?
Thanks a lot.
Found the solution... it's very weird, though.
Setting the bit rate before setting the encoder type somehow solved the problem.
The only question is whether this is a bug in Google's code or something else I don't understand.
Original:
m_MediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
m_MediaRecorder.setVideoFrameRate(1);
m_MediaRecorder.setVideoEncodingBitRate(8000000);
Solution:
m_MediaRecorder.setVideoEncodingBitRate(8000000);
m_MediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
m_MediaRecorder.setVideoFrameRate(1);
The documentation for setVideoEncodingBitRate() says:
Sets the video encoding bit rate for recording. Call this method
before prepare(). Prepare() may perform additional checks on the
parameter to make sure whether the specified bit rate is applicable,
and sometimes the passed bitRate will be clipped internally to ensure
the video recording can proceed smoothly based on the capabilities of
the platform.
Because the MediaRecorder API is dealing with a hardware encoding chip of some sort that differs from device to device, it can't always give you every combination of codec, frame size, frame rate, and encoding bitrate you ask for.
Your needs are somewhat unusual, in that you are trying to record at 1 fps. If you are developing your app for Honeycomb, there is a "time lapse" API for MediaRecorder along with an associated setCaptureRate() call that might be useful.
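If you go the time-lapse route, a configuration sketch might look like this (hedged: the output path is illustrative, the surrounding setup is abbreviated, and device support for the time-lapse profile varies):

```java
// Sketch of 1-fps time-lapse recording using the Honeycomb (API 11+)
// time-lapse profile and setCaptureRate(). Not a full recording setup.
MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_TIME_LAPSE_720P));
recorder.setCaptureRate(1.0);                     // capture one frame per second
recorder.setOutputFile("/sdcard/timelapse.mp4");  // illustrative path
recorder.prepare();
recorder.start();
```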
Related
I modify a video with glsl shaders, using a SurfaceTexture and OpenGL ES 2.0. I can also encode the result video with MediaCodec.
The problem is that the only way I've found to decode the video is with MediaPlayer and SurfaceTexture, but MediaPlayer doesn't have a frame by frame decoding option. So right now, it's like a live encoding/decoding, there is no pause.
I've also tried using seekTo / pause / start, but it never updates the texture.
So would it be possible to do a step-by-step decoding instead, to follow the encoding process? I'm afraid my current method is not very accurate.
Thanks in advance!
Yes, instead of using MediaPlayer, you need to use MediaExtractor and MediaCodec to decode it (into the same SurfaceTexture that you're already using with MediaPlayer).
An example of this would be ExtractMpegFramesTest at http://bigflake.com/mediacodec/, possibly also DecodeEditEncodeTest (or, for an Android 5.0+ async version of it, see https://github.com/mstorsjo/android-decodeencodetest).
EDIT: I was wrong. MediaPlayer's stream cannot be used frame by frame; it seems to only work at "real" speed.
I've actually managed to do it with MediaPlayer, following this answer:
stackoverflow - SurfaceTexture.OnFrameAvailableListener stops being called
Using counters, you can speed up or slow down the video stream, and synchronize it with the preview or the encoding.
But - If you want to do a real seek to a particular frame, then mstorsjo's solution is way better. In my case, I just wanted to make sure the encoding process is not going faster or slower than the video input stream.
I'm trying to use Android's MediaMuxer and MediaCodec to produce MP4 videos.
If I drain frames from the codec directly to the muxer by calling writeSampleData(), everything works fine and the correct video is produced.
But if I try to first store these frames in an array and decide later to send them to the muxer, I'm unable to produce a working video, even if the presentation timestamps are correct.
For some reason, it seems that the MediaMuxer output depends not only on the presentation timestamps, but also on the actual time writeSampleData() is called, although my understanding is that having correct timestamps should be enough.
Can anyone shed some light on this issue?
Thanks mstorsjo and fadden. I actually had a combination of errors which didn't allow me to understand what was really going on. Both your answers led me to the correct code and the conviction that using writeSampleData() was not time sensitive.
Yes, I was getting the wrong buffers at first. The problem was not initially noticeable because the muxer was writing the frames before the buffers got rewritten. When I introduced the delays and decided to duplicate the buffers' contents, I hit another issue (basically a race condition) and wrongly concluded that copying was not the answer.
What this code does (for the SmartPolicing project) is capture video and audio to create an MP4 file. I could use MediaRecorder (this was the initial solution), but we also wanted to intercept the frames and stream the video over the web, so we dropped MediaRecorder and created a custom solution.
Now it is running smoothly. Thanks a lot, guys.
Are you sure you actually store the complete data for the frames to be written, not only the buffer indices?
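To illustrate the failure mode: the ByteBuffer the codec hands you is reused once you release it, so storing the buffer (or just its index) stores data that will soon be overwritten. A deep copy of the bytes plus the presentation timestamp is what must be kept. A plain-Java sketch, where `FrameStore` and `StoredFrame` are hypothetical names (on Android the bytes and timestamp would come from the encoder's output buffer and its BufferInfo):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class FrameStore {
    // Hypothetical holder for one encoded frame: its bytes plus the
    // presentation timestamp that would normally live in BufferInfo.
    static class StoredFrame {
        final byte[] data;
        final long presentationTimeUs;
        StoredFrame(byte[] data, long ptsUs) {
            this.data = data;
            this.presentationTimeUs = ptsUs;
        }
    }

    private final List<StoredFrame> frames = new ArrayList<>();

    // Deep-copy the codec output buffer; the codec reuses the underlying
    // memory as soon as the buffer is released back to it.
    public void store(ByteBuffer encodedData, int offset, int size, long ptsUs) {
        byte[] copy = new byte[size];
        ByteBuffer dup = encodedData.duplicate(); // don't disturb caller's position
        dup.position(offset);
        dup.get(copy, 0, size);
        frames.add(new StoredFrame(copy, ptsUs));
    }

    public List<StoredFrame> frames() { return frames; }
}
```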
I am working on an implementation of one of the Android test cases regarding previewTexture recording with the new MediaCodec and MediaMuxer APIs of Android 4.3.
I've managed to record the preview stream at about 30 fps by setting the recordingHint in the camera parameters.
However, I ran into a delay/lag problem and don't really know how to fix it. When recording the camera preview with fairly standard quality settings (1280x720, bitrate of ~8,000,000 bps), the preview and the encoded material suffer from occasional lags. To be more specific: this lag occurs about every 2-3 seconds and lasts about 300-600 ms.
By tracing the delay I was able to figure out the delay comes from the following line of code in the "drainEncoder" method:
mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
This line is called in a loop whenever the encoder has data available for muxing. Currently I don't record audio, so only the H.264 stream is converted to MP4 format by the MediaMuxer.
I don't know if this has something to do with that delay, but it always occurs when the loop needs two iterations to dequeue all available data of the encoder (to be even more specific it occurs always in the first of these two iterations). In most cases one iteration is enough to dequeue the encoder.
Since there is not much information online about these new APIs, any help is very appreciated!
I suspect you're getting bitten by the MediaMuxer disk write. The best way to be sure is to run systrace during recording and see what's actually happening during the pause. (systrace docs, explanation, bigflake example -- as of right now only the latter is updated for Android 4.3)
If that's the case, you may be able to mitigate the problem by running the MediaMuxer instance on a separate thread, feeding the H.264 data to it through a synchronized queue.
Do these pauses happen regularly, every 5 seconds? The CameraToMpegTest example configures the encoder to output an I-frame every 5 seconds (with an expected frame rate of 30fps), which results in a full-sized frame being output rather than tiny deltas.
As @fadden points out, this is a disk write issue that occurs mostly on devices with slower flash write speeds, or if you try to write to the SD card.
I have written a solution on how to buffer MediaMuxer's write in a similar question here.
I am working on an application that does some real-time image processing on camera frames. For that, I use the preview callback's onPreviewFrame method. This works fine for cameras whose preview resolution is at least 640x480; when the camera does not support such a large preview resolution, the application is programmed to refuse to process the frames. Now, the problem I have is with phones like the Sony Xperia Go. It is a very nice device that can record video at up to 1280x720, but unfortunately its maximum camera preview size is 480x320, which is too small for my needs.
What I would like to know is how to obtain these larger camera frames (up to 1280x720 or more)? Obviously it has to be possible because camera application has the ability to record videos in that resolution - therefore this application somehow must be able to access those larger frames. How to do the same from my application?
Application has to support Android 2.1 and later, but I would be very happy even if I find the solution for my problem only for Android 4.0 or newer.
This question is similar to http://stackoverflow.com/questions/8839109/processing-android-video-frame-by-frame-while-recording, but I don't need to save the video - I only need those high resolution video frames...
It seems the only thing you can do is decode frames from the MediaRecorder data.
You may use ffmpeg to decode the recorder data from a LocalSocket.
Hope the following open source projects may help:
ipcamera-for-android: https://code.google.com/p/ipcamera-for-android/
spydroid-ipcamera: https://code.google.com/p/spydroid-ipcamera/
You should probably take a look at the OpenCV library.
It has methods that allow you to receive full frames.
My impression is that the video preview size is small, and the preview is slow; slower than the set video recording frame rate.
I once looked for solutions to this. It seems a better way is to get the video stream from the video recorder, then process the data from that stream directly.
You can find some examples in Android IP-camera projects.
You can use this library:
https://github.com/natario1/CameraView
This library has an addFrameProcessor listener whose process function receives a Frame parameter.
If you need to record video while processing frames, you need to use the takeVideoSnapshot function of CameraView; takeVideo stops frame processing until video recording completes (in the latest version I tested, 2.6.4).
Is there any way to record audio in high quality?
And how can I detect that the user is saying something? In the Audio Recording application you can see such an indicator (I don't know the right name for it).
At the moment, a big reason for poor audio quality recording on Android is the codec used by the MediaRecorder class (the AMR-NB codec). However, you can get access to uncompressed audio via the AudioRecord class, and record that into a file directly.
The Rehearsal Assistant app does this to save uncompressed audio into a WAV file - take a look at the RehearsalAudioRecord class source code.
The RehearsalAudioRecord class also provides a getMaxAmplitude method, which you can use to detect the maximum audio level since the last time you called the method (MediaRecorder also provides this method).
For recording and monitoring: You can use the sound recorder activity.
Here's a snippet of code:
Intent recordIntent = new Intent(
MediaStore.Audio.Media.RECORD_SOUND_ACTION);
startActivityForResult(recordIntent, REQUEST_CODE_RECORD);
For a perfect working example of how to record audio which includes an input monitor, download the open source Ringdroid project: https://github.com/google/ringdroid
Look at the screenshots and you'll see the monitor.
For making the audio higher quality, you'd need a better mic. The built-in mic can only capture so much (and what it captures is not that good). Again, look at the Ringdroid project and glean some info from there. At that point you could implement some normalization and amplification routines to improve the sound.
I'll give you a simple answer.
For the sample rate: in terms of quality, 48000 Hz is almost the same as 16000 Hz.
For the bit rate: in terms of quality, 96 kbps is much better than 16 kbps.
You can try stereo (channelCount = 2), but it makes little difference.
So, for Android phones, just set the audio bit rate higher and you will get better quality.
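As a configuration sketch of that advice with MediaRecorder (hedged: codec, sample rate, and bit rate support vary by device, and the output path is illustrative):

```java
// Sketch: higher-quality audio settings; exact support varies by device.
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setAudioSamplingRate(48000);     // Hz; 16000 sounds nearly as good
recorder.setAudioEncodingBitRate(96000);  // bps; the setting that matters most
recorder.setAudioChannels(2);             // stereo; makes little difference
recorder.setOutputFile("/sdcard/recording.m4a");  // illustrative path
```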