I am working on an implementation of one of the Android test cases for previewTexture recording with the new MediaCodec and MediaMuxer APIs of Android 4.3.
I've managed to record the preview stream at a frame rate of about 30 fps by setting the recordingHint in the camera parameters.
However, I ran into a delay/lag problem and don't really know how to fix it. When recording the camera preview with fairly standard quality settings (1280x720, bitrate of ~8,000,000), both the preview and the encoded material suffer from occasional lags. To be more specific: the lag occurs about every 2-3 seconds and lasts about 300-600 ms.
By tracing the delay I was able to figure out that it comes from the following line of code in the "drainEncoder" method:
mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
This line is called in a loop whenever the encoder has data available for muxing. Currently I don't record audio, so only the H.264 stream is converted to the MP4 format by the MediaMuxer.
I don't know if this has something to do with the delay, but it always occurs when the loop needs two iterations to dequeue all available data from the encoder (to be even more specific, it always occurs in the first of those two iterations). In most cases one iteration is enough to drain the encoder.
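For reference, the surrounding loop looks roughly like this (a trimmed sketch along the lines of the bigflake drainEncoder(); TIMEOUT_USEC, mEncoder, mMuxer, mTrackIndex and mBufferInfo are the names used there, and codec-config/end-of-stream handling is omitted):

ByteBuffer[] encoderOutputBuffers = mEncoder.getOutputBuffers();
while (true) {
    int encoderStatus = mEncoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
    if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
        break;  // no more output available right now
    } else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        encoderOutputBuffers = mEncoder.getOutputBuffers();
    } else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        mTrackIndex = mMuxer.addTrack(mEncoder.getOutputFormat());
        mMuxer.start();
    } else if (encoderStatus >= 0) {
        ByteBuffer encodedData = encoderOutputBuffers[encoderStatus];
        encodedData.position(mBufferInfo.offset);
        encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
        // this is the call that occasionally blocks for 300-600 ms
        mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
        mEncoder.releaseOutputBuffer(encoderStatus, false);
    }
}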
Since there is not much information online about these new APIs, any help is very much appreciated!
I suspect you're getting bitten by the MediaMuxer disk write. The best way to be sure is to run systrace during recording and see what's actually happening during the pause. (systrace docs, explanation, bigflake example -- as of right now only the latter is updated for Android 4.3)
If that's the case, you may be able to mitigate the problem by running the MediaMuxer instance on a separate thread, feeding the H.264 data to it through a synchronized queue.
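A rough sketch of that idea (the Sample wrapper and queue names are made up; mMuxer and mTrackIndex are assumed to be fields that are valid once the muxer has started):

import java.nio.ByteBuffer;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import android.media.MediaCodec;

// Deep-copies one encoded buffer plus its metadata so the codec's own buffer
// can be released immediately.
static class Sample {
    final ByteBuffer data;
    final MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();

    Sample(ByteBuffer src, MediaCodec.BufferInfo srcInfo) {
        data = ByteBuffer.allocateDirect(srcInfo.size);
        src.position(srcInfo.offset);
        src.limit(srcInfo.offset + srcInfo.size);
        data.put(src);
        data.flip();
        info.set(0, srcInfo.size, srcInfo.presentationTimeUs, srcInfo.flags);
    }
}

final BlockingQueue<Sample> muxQueue = new LinkedBlockingQueue<Sample>();

// In drainEncoder(), instead of calling writeSampleData() directly:
//     muxQueue.put(new Sample(encodedData, mBufferInfo));
//     mEncoder.releaseOutputBuffer(encoderStatus, false);

// Writer thread, started after mMuxer.start(); the potentially slow disk write
// now happens here instead of on the encoding loop.
Thread muxerThread = new Thread(new Runnable() {
    @Override public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Sample s = muxQueue.take();
                mMuxer.writeSampleData(mTrackIndex, s.data, s.info);
            }
        } catch (InterruptedException ie) {
            // stop requested
        }
    }
});
muxerThread.start();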
Do these pauses happen regularly, every 5 seconds? The CameraToMpegTest example configures the encoder to output an I-frame every 5 seconds (with an expected frame rate of 30fps), which results in a full-sized frame being output rather than tiny deltas.
As @fadden points out, this is a disk write issue that occurs mostly on devices with slower flash write speeds or when you try to write to the SD card.
I have written a solution on how to buffer MediaMuxer's writes in a similar question here.
Related
I'm using mp3 files to play sounds in my Android game, developed with libGDX. The sounds play fine when they happen every now and then, but when I play them in quick succession (footsteps in a running animation, for example) the game freezes/stutters.
Every time a sound is played, I get this in the logs:
W/AudioTrack: AUDIO_OUTPUT_FLAG_FAST denied by client; transfer 4, track 44100 Hz, output 48000 Hz
I use the libktx AssetStorage to store the sounds. I've been searching for this issue for a few days now and haven't had any luck with any of the following solutions:
Override createAudio in AndroidLauncher and use AsynchronousAndroidAudio
Convert mp3 to ogg (using Audacity)
Convert to 48k rate sample (using Audacity)
Add 1 or 2 seconds of silence to the file
I'm testing on my own device, a Samsung Galaxy S5, which is quite old and runs Android 6.0.1.
What can I do to resolve this error and stuttering?
Decoding compressed audio can be a significant processing load. If it's a short recording (e.g., one footstep that is being repeated), I'd either package the sound file as a .wav or decode it into PCM to be held in memory, and use it that way. I don't know if it's possible to output PCM directly with libgdx, though. I do recall inspecting and tinkering with an ogg utility to have it decode into an array, and outputting it with a SourceDataLine in a non-libgdx Java project. I realize that SourceDataLine output is not an option on Android, but Android does have provisions for playing back raw PCM.
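If raw PCM playback turns out to be viable for you, a minimal sketch with Android's own AudioTrack might look like this (assuming the clip has already been decoded to 16-bit PCM at 44100 Hz mono; playPcmClip is a made-up helper, and this is plain Android API rather than libgdx):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Plays a short, fully decoded clip from memory using AudioTrack's static mode,
// which avoids any decoding work at playback time.
static AudioTrack playPcmClip(byte[] pcmData) {
    AudioTrack track = new AudioTrack(
            AudioManager.STREAM_MUSIC,
            44100,                            // sample rate of the decoded data
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            pcmData.length,                   // static mode: buffer holds the whole clip
            AudioTrack.MODE_STATIC);
    track.write(pcmData, 0, pcmData.length);  // load the clip into the track
    track.play();
    return track;                             // call release() when you're done with it
}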
Another idea to explore is raising the priority of the thread that processes the audio to Thread.MAX_PRIORITY, if libgdx allows this. In theory the audio processing thread spends most of its time in a blocked state, so doing this shouldn't hurt performance unless you are really going overboard with your audio requests.
I just saw the mismatch of sample rates. It's wasteful to repeatedly do conversions on the fly when you can do the conversion once in Audacity. I doubt the difference between outputting at 48000 vs. 44100 adds much in terms of CPU load (or perceivable audio fidelity), so whichever one you pick, spend a little time making sure all the assets match that format.
I am using MediaCodec and MediaMuxer to encode videos, but the process is too slow: sometimes a 60-second video takes more than 90 seconds to encode. The encoding approach comes from ExtractDecodeEditEncodeMuxTest (bigflake), which I ported to the JNI layer. I don't know whether the slowness comes from using reflection in my code to call the Java API, or from the buffer swap between the GL display and the MediaCodec input surface. I use eglCreateWindowSurface to create the GL surface; I wonder if using eglCreatePbufferSurface to create an off-screen surface might speed up encoding?
Can anyone give some advice? Thanks!
I sped it up by encoding audio and video on different threads, and by enlarging the audio write buffer to speed up audio encoding.
I'm trying to use Android's MediaMuxer and MediaCodec to produce MP4 videos.
If I drain frames from the codec directly to the muxer by calling writeSampleData(), everything works fine and the correct video is produced.
But if I first store these frames in an array and decide to send them to the muxer later, I can't produce a working video, even though the presentation timestamps are correct.
For some reason, it seems that the MediaMuxer output depends not only on the presentation timestamps but also on the actual time writeSampleData() is called, although my understanding is that having correct timestamps should be enough.
Can anyone shed some light on this issue?
Thanks mstorsjo and fadden. I actually had a combination of errors that didn't allow me to understand what was really going on. Both your questions led me to the correct code and to the conviction that calling writeSampleData() is not time sensitive.
Yes, I was getting the wrong buffers at first. The problem wasn't initially noticeable because the muxer was writing the frames before the buffers got overwritten. When I introduced the delays and decided to duplicate the buffer contents, I hit another issue (basically a race condition) and concluded that wasn't the case.
What this code does (for the SmartPolicing project) is capture video and audio to create an MP4 file. I could have used MediaRecorder (that was the initial solution), but we also wanted to intercept the frames and stream the video over the web, so we dropped MediaRecorder and created a custom solution.
Now it is running smoothly. Thanks a lot, guys.
Are you sure you actually store the complete data for the frames to be written, not only the buffer indices?
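To illustrate the point: the ByteBuffer you get for an output buffer index belongs to the codec and is recycled after releaseOutputBuffer(), so deferring the mux means deep-copying both the bytes and the BufferInfo. Something along these lines (pendingFrames and storeFrame are made-up names, not your code):

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import android.media.MediaCodec;
import android.util.Pair;

List<Pair<ByteBuffer, MediaCodec.BufferInfo>> pendingFrames =
        new ArrayList<Pair<ByteBuffer, MediaCodec.BufferInfo>>();

void storeFrame(ByteBuffer encodedData, MediaCodec.BufferInfo info) {
    ByteBuffer copy = ByteBuffer.allocate(info.size);
    encodedData.position(info.offset);
    encodedData.limit(info.offset + info.size);
    copy.put(encodedData);          // copy the bytes out of the codec-owned buffer
    copy.flip();

    MediaCodec.BufferInfo infoCopy = new MediaCodec.BufferInfo();
    infoCopy.set(0, info.size, info.presentationTimeUs, info.flags);
    pendingFrames.add(Pair.create(copy, infoCopy));
}

// Later, once the muxer has been started:
// for (Pair<ByteBuffer, MediaCodec.BufferInfo> frame : pendingFrames) {
//     muxer.writeSampleData(trackIndex, frame.first, frame.second);
// }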
I'm trying to implement an Android app that records a video while also writing a file containing the times at which each frame was captured. I tried using MediaRecorder and got to a point where I can save the video, but I couldn't find a way to get the timestamps. I tried doing something like:
while (previous video file length != current video file length)
write current time to text file;
but this did not seem to work, as the file length doesn't seem to be updated frequently enough (or am I wrong?).
I then tried using OpenCV and managed to capture the images frame by frame (so getting timestamps was easy), but I couldn't find a way to join all the frames into one video. I saw answers referring to using the NDK and FFmpeg, but I feel like there should be an easier solution (perhaps similar to the one I tried at the top?).
You could use MediaCodec to capture the video, but that gets complicated quickly, especially if you want audio as well. You can recover timestamps from a video file by walking through it with MediaExtractor. Just call getSampleTime() on each frame to get the presentation time in microseconds.
This question is old, but I just needed the same thing. With the camera2 API you can stream to multiple targets, so you can stream from one and the same physical camera to the MediaRecorder and, at the same time, to an ImageReader. The ImageReader has an ImageReader.OnImageAvailableListener with an onImageAvailable callback you can use to query the frames for their timestamps:
Image image = reader.acquireLatestImage();  // latest frame delivered by the ImageReader
long cameraTime = image.getTimestamp();     // sensor timestamp in nanoseconds
image.close();                              // release the Image back to the reader
Not sure whether streaming to multiple targets requires certain hardware capabilities, though (such as multiple image processors). I guess modern devices have something like 2-3 of them.
EDIT: It looks like every camera2-capable device can stream to up to 3 targets.
EDIT: Apparently, you can use an ImageReader to capture the individual frames and their timestamps, and then use an ImageWriter to provide those images to downstream consumers such as a MediaCodec, which records the video for you.
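A rough sketch of the two-target setup (camera, recorder and backgroundHandler are assumptions for an already opened CameraDevice, a prepared MediaRecorder using VideoSource.SURFACE, and a background Handler):

import java.util.Arrays;
import java.util.List;
import android.graphics.ImageFormat;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.media.Image;
import android.media.ImageReader;
import android.view.Surface;

// Frames delivered here are only used to read timestamps; the actual recording
// is done through the MediaRecorder surface.
final ImageReader reader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 3);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image != null) {
            long timestampNs = image.getTimestamp();  // sensor timestamp in nanoseconds
            // store/log timestampNs here
            image.close();
        }
    }
}, backgroundHandler);

final Surface recorderSurface = recorder.getSurface();
List<Surface> targets = Arrays.asList(recorderSurface, reader.getSurface());
camera.createCaptureSession(targets, new CameraCaptureSession.StateCallback() {
    @Override public void onConfigured(CameraCaptureSession session) {
        try {
            CaptureRequest.Builder builder =
                    camera.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
            builder.addTarget(recorderSurface);
            builder.addTarget(reader.getSurface());
            session.setRepeatingRequest(builder.build(), null, backgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
    @Override public void onConfigureFailed(CameraCaptureSession session) {
        // handle configuration failure
    }
}, backgroundHandler);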
Expanding on fadden's answer, the easiest way to go is to record using MediaRecorder (the Camera2 example helped me a lot) and after that extracting timestamps via MediaExtractor. I want to add a working MediaExtractor code sample:
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(pathToVideo);
// Select the video track first; getSampleTime() returns -1 if no track is selected.
for (int i = 0; i < extractor.getTrackCount(); i++) {
    if (extractor.getTrackFormat(i).getString(MediaFormat.KEY_MIME).startsWith("video/")) {
        extractor.selectTrack(i);
        break;
    }
}
do {
    long sampleMillis = extractor.getSampleTime() / 1000;  // presentation time in ms
    // do something with sampleMillis
} while (extractor.advance() && extractor.getSampleTime() != -1);
extractor.release();
I'm writing an audio streaming app that buffers AAC file chunks, decodes those chunks to PCM byte arrays, and writes the PCM audio data to AudioTrack. Occasionally, I get the following error when I try to either skip to a different song, call AudioTrack.pause(), or AudioTrack.flush():
obtainbuffer timed out -- is cpu pegged?
And then what happens is that a split second of audio continues to play. I've tried reading a set of AAC files from the SD card and got the same result. The behavior I'm expecting is that the audio stops immediately. Does anyone know why this happens? I wonder if it's an audio latency issue with Android 2.3.
edit: The AAC audio contains an ADTS header. The header plus the audio payload constitute what I'm calling an ADTSFrame. These are fed to the decoder one frame at a time. The resulting PCM byte array returned from the C layer to the Java layer is fed to Android's AudioTrack API.
edit 2: I got my Nexus 7 (Android 4.1) today. Loaded the same app onto the device and didn't have any of these problems at all.
It is quite possibly a sample-rate issue: one of your devices might support the sample rate you used while the other does not. Please check it. I had the same issue, and it was the sample rate. Use 44.1 kHz (44100) and try again.
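For what it's worth, a minimal sketch of creating the AudioTrack at 44.1 kHz (the channel config and encoding here are assumptions and should match what your decoder actually produces):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

int sampleRate = 44100;
int channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
int encoding = AudioFormat.ENCODING_PCM_16BIT;
int minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfig, encoding);

AudioTrack track = new AudioTrack(
        AudioManager.STREAM_MUSIC,
        sampleRate,
        channelConfig,
        encoding,
        minBufferSize * 2,          // a little headroom over the minimum
        AudioTrack.MODE_STREAM);
track.play();
// then feed decoded PCM chunks with track.write(pcmBuffer, 0, bytesRead);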