I'm writing an audio streaming app that buffers AAC file chunks, decodes those chunks to PCM byte arrays, and writes the PCM audio data to AudioTrack. Occasionally, I get the following error when I skip to a different song, call AudioTrack.pause(), or call AudioTrack.flush():
obtainbuffer timed out -- is cpu pegged?
And then a split second of audio continues to play. I've tried reading a set of AAC files from the sdcard and got the same result. The behavior I'm expecting is that the audio stops immediately. Does anyone know why this happens? I wonder if it's an audio latency issue with Android 2.3.
edit: The AAC audio contains an ADTS Header. The header + audio payload constitute what I'm calling ADTSFrame. These are fed to the decoder one frame at a time. The resulting PCM byte array that gets returned from the C layer to the Java Layer gets fed to Android's AudioTrack API.
edit 2: I got my Nexus 7 (Android 4.1) today. Loaded the same app onto the device and didn't have any of these problems at all.
It is quite possibly a sample rate issue: one of your devices might support the sample rate you used while the other does not. Please check it. I had the same issue, and it turned out to be the sample rate. Use 44.1 kHz (44100) and try again.
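A minimal sketch of what I mean, assuming 16-bit stereo PCM written to AudioTrack in streaming mode (the stream type and buffer sizing are just examples, not taken from your code):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.util.Log;

AudioTrack createTrack() {
    // Many devices resample anything that isn't their native output rate;
    // 44100 Hz is supported almost everywhere.
    int nativeRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
    Log.d("Audio", "native output rate: " + nativeRate);

    int sampleRate = 44100;
    int channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
    int encoding = AudioFormat.ENCODING_PCM_16BIT;
    int minBuf = AudioTrack.getMinBufferSize(sampleRate, channelConfig, encoding);

    return new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfig,
            encoding, minBuf * 2, AudioTrack.MODE_STREAM);
}
```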
Related
I'm using mp3 files to play sounds in my Android game, developed in libGDX. The sounds play fine when they happen every now and then, but when I play them in quick succession (footsteps in a running animation, for example) the game freezes/stutters.
Every time a sound is played, I get this in the logs:
W/AudioTrack: AUDIO_OUTPUT_FLAG_FAST denied by client; transfer 4, track 44100 Hz, output 48000 Hz
I use libktx AssetStorage to store the sounds. I've been searching this issue for a few days now and haven't had any luck with any of the following solutions:
Override createAudio in AndroidLauncher and use AsynchronousAndroidAudio
Convert mp3 to ogg (using Audacity)
Convert to 48k rate sample (using Audacity)
Add 1 or 2 seconds of silence to the file
I test it on my own device, a Samsung Galaxy S5, which is quite old and runs Android 6.0.1.
What can I do to resolve this error and stuttering?
Decoding compressed audio can be a significant processing load. If it's a short recording (e.g., one footstep that is being repeated), I'd either package the sound file as a .wav or decode it into PCM to be held in memory, and use it that way. I don't know if it's possible to output PCM directly with libgdx, but I do recall inspecting and tinkering with an ogg utility to have it decode into an array, and outputting it with a SourceDataLine in a non-libgdx Java project. I realize SourceDataLine output is not an option on Android, but Android does have provisions for playing back raw PCM.
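If you go the raw-PCM route, here's a rough sketch of static-mode AudioTrack playback outside of libgdx (decodeFootstepToPcm() is a hypothetical helper standing in for however you decode the file; 44100 Hz mono is assumed):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Decode once, up front; "decodeFootstepToPcm" is a placeholder for your own decoder.
short[] footstepPcm = decodeFootstepToPcm();   // 16-bit PCM, 44100 Hz, mono

AudioTrack footstep = new AudioTrack(AudioManager.STREAM_MUSIC,
        44100,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        footstepPcm.length * 2,          // buffer size in bytes
        AudioTrack.MODE_STATIC);
footstep.write(footstepPcm, 0, footstepPcm.length);

// On every footstep: no decoding at play time, just rewind and replay.
footstep.stop();
footstep.reloadStaticData();
footstep.play();
```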
Another idea to explore is raising the priority of the thread that is processing the audio to Thread.MAX_PRIORITY if libgdx allows this. Theoretically, the audio processing thread spends most of its time in a blocked state, so doing this shouldn't hurt the performance, unless you are really going overboard with your audio requests.
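Something along these lines, assuming you control the thread doing the audio work (the Process call is Android-specific and optional):

```java
Thread audioThread = new Thread(() -> {
    // Android-specific: ask for audio-grade scheduling for this thread.
    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
    // ... playback / mixing loop goes here ...
});
audioThread.setPriority(Thread.MAX_PRIORITY);
audioThread.start();
```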
I just saw the mismatch of sample rates. It's wasteful to repeatedly convert on the fly when you can do the conversion once in Audacity. The difference between outputting at 48000 vs. 44100 probably isn't a big load either way: 44100 should be fine, and 48000 shouldn't add much in terms of CPU load (or perceivable audio fidelity). So, whichever one you pick, spend a little time making sure all the assets match the output format.
I have an app that makes calls using WebRTC. During a call, I need to record from the microphone. WebRTC has an object, WebRTCAudioRecord, to record the audio, but the resulting file is very large (raw 16-bit PCM). I want to record to a smaller size.
I've tried MediaRecorder, but it doesn't work because WebRTC is already recording and MediaRecorder does not get access to the microphone during the call.
Has anyone done this, or have any idea that could help me?
WebRTC is considered a comparatively strong pre-processing tool for audio and video.
WebRTC native development includes fully optimized native C and C++ classes in order to maintain good speech quality and intelligibility of audio and video.
Reference link: https://github.com/jitsi/webrtc/tree/master/examples
As the problem states:
"I want to record but to a smaller size. I've tried MediaRecorder and it doesn't work because WebRTC is recording and MediaRecorder does not have permission to record while calling."
First of all, to reduce or minimize the size of your recorded data (audio bytes), you should look at the various speech codecs, which reduce the size of the recorded data while maintaining sound quality at an acceptable level. Well-known speech codecs include:
OPUS
SPEEX
G.711 (G-series speech codecs)
As far as the size of the audio data is concerned, it basically depends on the sample rate and the duration of each recorded chunk (audio packet).
Suppose the sample rate is 8000 Hz (16-bit mono) and the chunk time is 40 ms; then the recorded data is 8000 × 0.04 × 2 = 640 bytes (or 320 shorts).
The size of the recorded data is **directly proportional** to both the time and the sample rate.
Sample rate = 8000 Hz, 16000 Hz, etc. (the greater the sample rate, the greater the size).
To see this in more detail, read up on the fundamentals of audio data representation. Note that WebRTC mainly processes audio in 10 ms frames for pre-processing, which keeps the packet size down to 160 bytes at 8000 Hz.
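In code form, the arithmetic above (assuming 16-bit mono PCM):

```java
int sampleRate = 8000;        // Hz
int bytesPerSample = 2;       // 16-bit PCM, mono

int bytesPer10ms = sampleRate * 10 / 1000 * bytesPerSample;   // 160 bytes (WebRTC frame)
int bytesPer40ms = sampleRate * 40 / 1000 * bytesPerSample;   // 640 bytes = 320 shorts
```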
Secondly, using multiple recorder instances (AudioRecord/MediaRecorder) at the same time is practically impossible. Since WebRTC is already recording from the microphone, a MediaRecorder instance would not be able to do anything, as this answer explains: audio-record-multiple-audio-at-a-time. WebRTC has the following methods to manage the audio bytes:
1. Push input PCM data into `ProcessCaptureStream` to process in place.
2. Get the processed PCM data back from `ProcessCaptureStream` and send it to the far end.
3. The far end pushes the received data into `ProcessRenderStream`.
I maintain a complete tutorial on audio processing with WebRTC; for more details see Android-Audio-Processing-Using-Webrtc.
There are two parts to the solution:
Get the raw PCM audio frames from WebRTC
Save them to a local file in a compressed format so that they can be played back later
For the first part, you have to attach a SamplesReadyCallback while creating the audio device module, by calling the setSamplesReadyCallback method of JavaAudioDeviceModule. This callback gives you the raw audio frames captured from the mic by WebRTC's AudioRecord.
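A rough sketch of that wiring, assuming the org.webrtc Android SDK (package and builder names can differ slightly between SDK versions, and context stands for your Android Context):

```java
import org.webrtc.audio.JavaAudioDeviceModule;

JavaAudioDeviceModule adm = JavaAudioDeviceModule.builder(context)
        .setSamplesReadyCallback(samples -> {
            // Raw 16-bit PCM captured by WebRTC's AudioRecord from the mic.
            byte[] pcm = samples.getData();
            int sampleRate = samples.getSampleRate();
            int channels = samples.getChannelCount();
            // Hand the buffer off to your encoder/writer thread here.
        })
        .createAudioDeviceModule();

// Then pass the module into the factory so WebRTC uses it for capture:
// PeerConnectionFactory.builder().setAudioDeviceModule(adm)...
```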
For the second part, you have to encode the raw frames and write them into a file. Check out this sample from Google on how to do it: https://android.googlesource.com/platform/frameworks/base/+/master/packages/SystemUI/src/com/android/systemui/screenrecord/ScreenInternalAudioRecorder.java#234
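If you prefer to sketch the encoder side by hand, it looks roughly like this with MediaCodec (sample rate, channel count and bit rate are example values, and the feed/drain loop is only outlined in comments):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

MediaFormat format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 48000, 1);
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1024);   // ~64 kbps, far smaller than raw PCM

MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();

// For each PCM buffer delivered by the SamplesReadyCallback:
//   - queue it into one of the encoder's input buffers,
//   - drain the encoder's output buffers,
//   - write each encoded buffer to a file (via MediaMuxer, or with ADTS headers),
// much like the linked ScreenInternalAudioRecorder example does.
```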
I am encoding raw audio to AAC with Android's MediaCodec API. The problem: I need to send the AAC stream to a server in chunks of one second, so I need to split the stream. Right now, since an AAC frame is 1024 samples, I take round(SAMPLE_RATE/1024) AAC frames for each chunk. However, because of "priming samples" this simple cutting of the AAC stream does not work.
More details follow. After sending a chunk to the server, a client receives it in the Chrome web browser and plays all received chunks using the Web Audio API. The playback is meant to be gapless: a large audio buffer is allocated up front, the received chunks are decoded and copied into the audio buffer, and the audio buffer is played.
Now, this does not work with AAC (it works with Ogg/Vorbis, though). With AAC I get artifacts in the generated sound: at the end of each second, the start of the next second is zero, and then the waveform gradually grows back to its normal size. This lasts for 10-20 milliseconds.
I believe the problem is caused by missing "priming samples". Maybe the Web Audio API expects "priming samples" at the start of each AAC chunk, does not find them, and therefore modifies the actual audio.
The question is: how can I split the original AAC stream and send "good" AAC chunks of one second?
From what I have understood, I should include at the start of each chunk the previous two frames (the last two frames of the previous chunk). However, this number may vary and there is not much documentation. Some expert advice would be appreciated.
I am using the following method. I am not an AAC expert, so I may be missing something, but experimentally it works.
Assuming that the Chrome decoder expects priming samples at the start of each chunk, I do the following: before sending a chunk to the server, I prepend the last 4 AAC frames of the previous chunk (skipping this for the first chunk). On the client side, I retrieve a chunk, decode it, and remove the first 4 × 1024 samples (1024 = samples in one AAC frame).
This is working.
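For illustration, a minimal sketch of the overlap logic described above, assuming each encoded AAC frame is already available as its own byte[] (the class and names are mine, not from the original code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.List;

class ChunkBuilder {
    static final int OVERLAP_FRAMES = 4;        // frames prepended from the previous chunk
    static final int SAMPLES_PER_FRAME = 1024;  // samples in one AAC frame

    /** Prepends the last OVERLAP_FRAMES frames of the previous chunk to the current one. */
    static byte[] buildChunk(List<byte[]> previous, List<byte[]> current) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        if (previous != null) {
            for (int i = Math.max(0, previous.size() - OVERLAP_FRAMES); i < previous.size(); i++) {
                out.write(previous.get(i));
            }
        }
        for (byte[] frame : current) {
            out.write(frame);
        }
        return out.toByteArray();
    }
}
// Client side: after decoding a chunk, drop the first
// OVERLAP_FRAMES * SAMPLES_PER_FRAME samples (per channel) before copying it
// into the playback buffer.
```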
I'm making an app which is using MediaCodec APIs.
The app runs on two phones. The first phone reads a video from the sdcard, uses the MediaCodec encoder to encode the frames in AVC format, and streams the frames to the other device. The second device runs a MediaCodec decoder, which decodes the frames and renders them on a Surface.
The code runs fine, but after some time, when the frames get larger, the first device is sometimes unable to stream the video and the encoder stalls, reporting the following log:
E/OMX-VENC-720p( 212): Poll timedout, pipeline stalled due to client/firmware ETB: 496, EBD: 491, FTB: 492, FBD: 492
So I want to implement frame skipping on the encoder side.
What's the best way to skip frames and not stream them to the other device?
P.S. On a separate note, if anyone can suggest another way of streaming video to the other device, that would be really nice.
Please try the Intel INDE Media Pack, with tutorials at https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials. It has camera, file and game streaming components that enable streaming with the help of Wowza, plus a set of samples demonstrating how to use it as a server and as a client.
I am working on an implementation of one of the Android Test Cases regarding previewTexture recording with the new MediaCodec and MediaMuxer API's of Android 4.3.
I've managed to record the preview stream at about 30 fps by setting the recordingHint in the camera parameters.
However, I ran into a delay/lag problem and don't really know how to fix it. When recording the camera preview with fairly standard quality settings (1280x720, bitrate of ~8,000,000), the preview and the encoded material suffer from occasional lags. To be more specific: the lag occurs about every 2-3 seconds and lasts about 300-600 ms.
By tracing the delay I was able to figure out that it comes from the following line of code in the "drainEncoder" method:
mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
This line is called in a loop when the encoder has data available for muxing. Currently I don't record audio, so only the H.264 stream is converted to an MP4 file by the MediaMuxer.
I don't know if this has something to do with the delay, but it always occurs when the loop needs two iterations to dequeue all available encoder data (to be more specific, it always occurs in the first of those two iterations). In most cases one iteration is enough to drain the encoder.
Since there is not much information online about these new API's any help is very appreciated!
I suspect you're getting bitten by the MediaMuxer disk write. The best way to be sure is to run systrace during recording and see what's actually happening during the pause. (systrace docs, explanation, bigflake example -- as of right now only the latter is updated for Android 4.3)
If that's the case, you may be able to mitigate the problem by running the MediaMuxer instance on a separate thread, feeding the H.264 data to it through a synchronized queue.
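A rough sketch of that idea (names and structure are mine, not from the original project): copy each encoded buffer out of the codec, hand it to a dedicated writer thread through a BlockingQueue, and let that thread block on the disk write instead of the encoder loop.

```java
import android.media.MediaCodec;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class MuxerThread extends Thread {
    static class Sample {
        final ByteBuffer data;
        final MediaCodec.BufferInfo info;
        Sample(ByteBuffer data, MediaCodec.BufferInfo info) { this.data = data; this.info = info; }
    }

    private final MediaMuxer muxer;
    private final int trackIndex;
    private final BlockingQueue<Sample> queue = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    MuxerThread(MediaMuxer muxer, int trackIndex) {
        this.muxer = muxer;
        this.trackIndex = trackIndex;
    }

    /** Called from drainEncoder: copy the codec buffer so it can be released immediately. */
    void enqueue(ByteBuffer encodedData, MediaCodec.BufferInfo info) {
        ByteBuffer copy = ByteBuffer.allocate(info.size);
        encodedData.position(info.offset);
        encodedData.limit(info.offset + info.size);
        copy.put(encodedData);
        copy.flip();
        MediaCodec.BufferInfo infoCopy = new MediaCodec.BufferInfo();
        infoCopy.set(0, info.size, info.presentationTimeUs, info.flags);
        queue.add(new Sample(copy, infoCopy));
    }

    void finish() { running = false; }

    @Override
    public void run() {
        try {
            while (running || !queue.isEmpty()) {
                Sample s = queue.poll(100, TimeUnit.MILLISECONDS);
                if (s != null) {
                    muxer.writeSampleData(trackIndex, s.data, s.info);  // the slow disk write happens here
                }
            }
        } catch (InterruptedException ignored) {
        }
    }
}
```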
Do these pauses happen regularly, every 5 seconds? The CameraToMpegTest example configures the encoder to output an I-frame every 5 seconds (with an expected frame rate of 30fps), which results in a full-sized frame being output rather than tiny deltas.
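For reference, a sketch of where that interval is set when configuring the video encoder format (the values match the question's settings; the color format and other required keys are omitted here):

```java
import android.media.MediaFormat;

MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);  // one I-frame every 5 seconds
// Key frames are much larger than delta frames, which is why they can show up
// as periodic write bursts in the muxer.
```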
As @fadden points out, this is a disk write issue that occurs mostly on devices with slower flash write speeds, or if you try to write to the SD card.
I have written up a solution for buffering MediaMuxer's writes in a similar question here.