Is there a reasonable way to play a memory buffer containing AAC (in an MP4 container) in Android? I cannot write this buffer to disk... it must be played from memory.
Seems that my only option is to decode to PCM manually and use AudioTrack since Android doesn't expose any API that plays non-PCM memory buffers. OpenSL ES has support for memory sources, but Android doesn't implement this.
The only way I can find to do this is (as indicated in my question) to decode the AAC manually. There are a few packages for Android that can do it; ffmpeg is perhaps the most well-tested and well-known. ffmpeg can be stripped down to just an AAC decoder, so the memory footprint is quite small (although it is frustrating to ship a decoder to a device that already has one natively).
Once the AAC is decoded into PCM, it can be played from Java (AudioTrack) or natively (OpenSL ES). The playback latency is notably lower for the native option, if that matters to you.
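For what it's worth, here is a condensed sketch of the native path: an OpenSL ES player fed from an Android simple buffer queue, playing a PCM buffer that is already in memory (e.g. the output of your AAC decoder). The 44.1 kHz / 16-bit / stereo format and the function name are assumptions for illustration, and all error checking is omitted.

```cpp
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

// Plays one interleaved 16-bit stereo PCM buffer that is already in memory.
// All SLresult checks are omitted for brevity.
void playPcmFromMemory(const void *pcm, SLuint32 pcmBytes) {
    SLObjectItf engineObj, outputMixObj, playerObj;
    SLEngineItf engine;
    SLPlayItf play;
    SLAndroidSimpleBufferQueueItf queue;

    // Engine and output mix.
    slCreateEngine(&engineObj, 0, NULL, 0, NULL, NULL);
    (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
    (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);
    (*engine)->CreateOutputMix(engine, &outputMixObj, 0, NULL, NULL);
    (*outputMixObj)->Realize(outputMixObj, SL_BOOLEAN_FALSE);

    // Source: a simple buffer queue fed with raw PCM (44.1 kHz, 16-bit, stereo assumed).
    SLDataLocator_AndroidSimpleBufferQueue locBq =
            { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
    SLDataFormat_PCM fmt = { SL_DATAFORMAT_PCM, 2, SL_SAMPLINGRATE_44_1,
            SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
            SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,
            SL_BYTEORDER_LITTLEENDIAN };
    SLDataSource src = { &locBq, &fmt };

    // Sink: the output mix.
    SLDataLocator_OutputMix locOut = { SL_DATALOCATOR_OUTPUTMIX, outputMixObj };
    SLDataSink sink = { &locOut, NULL };

    const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
    const SLboolean req[] = { SL_BOOLEAN_TRUE };
    (*engine)->CreateAudioPlayer(engine, &playerObj, &src, &sink, 1, ids, req);
    (*playerObj)->Realize(playerObj, SL_BOOLEAN_FALSE);
    (*playerObj)->GetInterface(playerObj, SL_IID_PLAY, &play);
    (*playerObj)->GetInterface(playerObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &queue);

    // Enqueue the decoded PCM and start playback. In a real player you would
    // register a queue callback and keep enqueueing chunks as they are decoded.
    (*queue)->Enqueue(queue, pcm, pcmBytes);
    (*play)->SetPlayState(play, SL_PLAYSTATE_PLAYING);
}
```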
Related: the documentation of OpenSL ES for Android, which states:
"Supported formats include WAV PCM, WAV alaw, WAV ulaw, MP3, Ogg Vorbis, AAC LC, HE-AACv1 (aacPlus), HE-AACv2 (enhanced aacPlus), AMR, and FLAC [provided these are supported by the overall platform, and AAC formats must be located within an MP4 or ADTS container]. MIDI is not supported. WMA is not part of the open source release, and compatibility with Android OpenSL ES has not been verified."
Elsewhere, in forums, I have read that OpenSL ES on Android does not support decoding any compressed format at all. Since implementing a decoder through the OpenSL ES API looks like a task of at least several hours, I would like to be sure, before putting all the required boilerplate in place, that I won't be surprised to find it cannot read any compressed format, especially Ogg.
Decoding Ogg Vorbis with OpenSL ES works. It even does sampling-rate conversion, which is handy. However, I have found that the end of the stream is not signaled with the SL_PLAYEVENT_HEADATEND event; for MP3 or WAV 'decoding' this event is dispatched. This is not necessarily a dealbreaker, because you can detect that decoding has finished in other ways.
I eventually added the Ogg Vorbis sources to my project because I wanted more control over the decoding: this way I can tell in advance how long the decoded clip is, for example, but I had to do the sample-rate conversion myself.
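For anyone attempting the same thing, the decode-to-PCM setup (available since Android 4.0 / API 14) looks roughly like the sketch below: the compressed file is the data source, an Android simple buffer queue declared as PCM is the sink, and the end of the stream is detected from the buffer-queue callbacks rather than from SL_PLAYEVENT_HEADATEND. Engine creation and error checks are omitted, and the URI, buffer size and PCM parameters are placeholders; see the NDK native-audio sample for the full boilerplate.

```cpp
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

// Sketch of the "decode to PCM buffer queue" path (API 14+).

static char pcmChunk[8192];   // filled by the decoder, one chunk at a time

// Called every time the decoder has filled one buffer we enqueued.
static void decodeCallback(SLAndroidSimpleBufferQueueItf bq, void *ctx) {
    // Consume pcmChunk here (copy it out, resample it, etc.), then hand the
    // buffer back to the decoder. When callbacks stop arriving, decoding is
    // finished -- for Ogg, SL_PLAYEVENT_HEADATEND may never be delivered.
    (*bq)->Enqueue(bq, pcmChunk, sizeof(pcmChunk));
}

void setupDecoder(SLEngineItf engine, const char *uri /* e.g. "file:///sdcard/clip.ogg" */) {
    // Source: the compressed file, container type left unspecified.
    SLDataLocator_URI locUri = { SL_DATALOCATOR_URI, (SLchar *)uri };
    SLDataFormat_MIME fmtMime = { SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED };
    SLDataSource src = { &locUri, &fmtMime };

    // Sink: an Android simple buffer queue declared as PCM. The decoder decides
    // the real sample rate / channel count; query the metadata extraction
    // interface if you need the exact output format.
    SLDataLocator_AndroidSimpleBufferQueue locBq =
            { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
    SLDataFormat_PCM fmtPcm = { SL_DATAFORMAT_PCM, 2, SL_SAMPLINGRATE_44_1,
            SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
            SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT, SL_BYTEORDER_LITTLEENDIAN };
    SLDataSink sink = { &locBq, &fmtPcm };

    SLObjectItf playerObj;
    SLAndroidSimpleBufferQueueItf queue;
    SLPlayItf play;
    const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
    const SLboolean req[] = { SL_BOOLEAN_TRUE };
    (*engine)->CreateAudioPlayer(engine, &playerObj, &src, &sink, 1, ids, req);
    (*playerObj)->Realize(playerObj, SL_BOOLEAN_FALSE);
    (*playerObj)->GetInterface(playerObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &queue);
    (*playerObj)->GetInterface(playerObj, SL_IID_PLAY, &play);

    (*queue)->RegisterCallback(queue, decodeCallback, NULL);
    (*queue)->Enqueue(queue, pcmChunk, sizeof(pcmChunk));
    (*play)->SetPlayState(play, SL_PLAYSTATE_PLAYING);   // "playing" here means decoding
}
```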
I am working on a video-conferencing project. We were using a software codec to encode and decode video frames, which works fine for lower resolutions (up to 320p). We now plan to support higher resolutions as well, up to 720p, and I have learned that hardware acceleration will do this job fairly well.
Since the hardware codec API, MediaCodec, is available from Jelly Bean onward, I have used it for encoding and decoding and it works fine. But my application has to be supported from 2.3 onward, so I need hardware-accelerated video decoding of 720p H.264 frames at 30 fps.
While researching, I came across the idea of using OMXCodec by modifying the Stagefright framework. I had read that the hardware decoder for H.264 is available from 2.1 and the encoder from 3.0. I have gone through many articles and questions on this site and confirmed that I can go ahead.
I had read about the Stagefright architecture here - architecture - and here - stagefright how it works.
And I read about OMXCodec here - use-android-hardware-decoder-with-omxcodec-in-ndk.
I am having trouble getting started and have some confusion about the implementation. I would like to have some information about the following:
To use OMXCodec in my code, should I build my project with the whole Android source tree, or can I do it by adding some files from the AOSP source (and if so, which ones)?
What are the steps I should follow from scratch to achieve this?
Can someone give me a guideline on this?
Thanks...
The best example of integrating OMXCodec in the native layer is the command-line utility stagefright, as can be observed here in Gingerbread itself. This example shows how an OMXCodec is created.
Some points to note:
The input to OMXCodec should be modeled as a MediaSource, so you should ensure that your application handles this requirement. An example of creating a MediaSource-based source can be found in the record utility file as DummySource.
The input to the decoder, i.e. the MediaSource, should provide the data through its read method, so your application should supply an individual frame for every read call.
The decoder can be created with a NativeWindow for output buffer allocation. In that case, if you wish to access the buffers from the CPU, you should probably refer to this query for more details. A rough sketch of the creation sequence is given below.
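For orientation only, here is that sketch. It is based on the Gingerbread/ICS-era Stagefright headers, which are private platform APIs whose signatures change between releases, so verify everything against the tree you actually build with. FrameSource is a hypothetical MediaSource, along the lines of DummySource, that wraps the frames your application receives.

```cpp
#include <binder/ProcessState.h>
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaDefs.h>
#include <media/stagefright/MediaErrors.h>
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MetaData.h>
#include <media/stagefright/OMXClient.h>
#include <media/stagefright/OMXCodec.h>

using namespace android;

// Hypothetical source that feeds one H.264 access unit per read() call.
struct FrameSource : public MediaSource {
    FrameSource(int32_t width, int32_t height) : mWidth(width), mHeight(height) {}

    virtual status_t start(MetaData * /*params*/) { return OK; }
    virtual status_t stop() { return OK; }

    virtual sp<MetaData> getFormat() {
        sp<MetaData> meta = new MetaData;
        meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
        meta->setInt32(kKeyWidth, mWidth);
        meta->setInt32(kKeyHeight, mHeight);
        return meta;
    }

    // Called by OMXCodec whenever it wants the next compressed frame.
    virtual status_t read(MediaBuffer **out, const ReadOptions * /*options*/) {
        // Fill a MediaBuffer with one encoded frame from your network/app queue
        // and set kKeyTime on its meta_data(). Return ERROR_END_OF_STREAM when done.
        return ERROR_END_OF_STREAM;  // placeholder
    }

private:
    int32_t mWidth, mHeight;
};

sp<MediaSource> createDecoder(const sp<ANativeWindow> &window) {
    ProcessState::self()->startThreadPool();   // binder threads for OMX callbacks

    OMXClient client;
    client.connect();

    sp<MediaSource> source = new FrameSource(1280, 720);
    // OMXCodec::Create picks a matching hardware component if one is available.
    sp<MediaSource> decoder = OMXCodec::Create(
            client.interface(), source->getFormat(), false /*createEncoder*/,
            source, NULL /*matchComponentName*/, 0 /*flags*/, window);

    decoder->start();
    // Then loop: decoder->read(&buffer) returns decoded frames (or, with a
    // native window, renders into the window's buffers).
    return decoder;
}
```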
I'm decoding an H.264 stream on Android 4.2 using MediaCodec. Unfortunately, the decoder always buffers 6-10 frames, which leads to annoying latency, and Android does not provide any API to adjust the buffer size. So my question is: how do I modify the Android source code (or the OMX driver) to reduce the buffer size for real-time video streaming?
Generally speaking, you don't. The number of buffers in the queue is determined by the codec. Different devices, and different codecs on the same device, can behave differently.
Unless you're using the software AVC codec, the codec implementation is provided as a binary by the hardware OEM, so there's no way to modify it (short of hex-editing).
I am receiving MPEG-TS (MPEG transport stream) packets with multiplexed H.264 video and AAC audio streams. I need to be able to play the audio and video on an Android phone. My assumption is that I need:
MPEG-TS de-multiplexer
AAC decoder
H.264 decoder
Synchronize the audio and video playback
Assuming that I am right, then (in Android 2.x) the MPEG-TS de-multiplexer is not part of the OS and must be ported. Both the AAC and the H.264 decoders are part of the Android OS, but I am not sure whether they have an interface that allows passing data in buffers, or whether they allow mutual timing synchronization. In the worst case those components must be ported as well.
Can you give me some advice on where to start? I was thinking about porting FFMPEG. Are there any other ways?
Regards,
STeN
Android 4.x has OpenMAX AL, which can play a TS stream with H.264 and AAC. You don't even need to worry about synchronization of audio and video.
Look at the nativemedia sample in the NDK.
If you want to support previous versions of Android, then ffmpeg might be a good choice, but the most it can give you is decoded video frames in RGB or another format and decoded audio in PCM. You will then have to implement the renderer and audio playback yourself. I would recommend reading this tutorial - http://dranger.com/ffmpeg/. It is not Android-specific, but it will give you an idea of how video playback works.
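To give an idea of what that looks like, here is a minimal libavcodec decode loop (using the current send/receive API; older FFmpeg trees use avcodec_decode_video2 instead). Demuxing, YUV-to-RGB conversion and rendering are still up to you, as the tutorial explains; the callback-based structure here is just an illustration.

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}

// Feed one demuxed H.264 packet to the decoder and hand any finished
// frames (still YUV) to a caller-supplied callback. Error handling is minimal.
void decodePacket(AVCodecContext *ctx, const AVPacket *pkt,
                  void (*onFrame)(const AVFrame *)) {
    if (avcodec_send_packet(ctx, pkt) < 0)
        return;

    AVFrame *frame = av_frame_alloc();
    while (avcodec_receive_frame(ctx, frame) == 0) {
        onFrame(frame);             // usually YUV420P; convert/render yourself
        av_frame_unref(frame);
    }
    av_frame_free(&frame);
}

AVCodecContext *openH264Decoder() {
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    avcodec_open2(ctx, codec, NULL);
    return ctx;
}
```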
You may refer to the android-ffmpeg project on github.
https://github.com/guardianproject/android-ffmpeg
In Gingerbread (2.3) there is actually an MPEG-TS parser in the Stagefright framework that you could use, and I believe it is well integrated with the H.264 and AAC decoders. The MPEG-TS parser is not advertised anywhere, but the support is quietly sitting there. I believe it was brought in to support Apple HTTP Live Streaming in Honeycomb or a later version, but the code is sitting in the Gingerbread (2.3) codebase as well. With a minor modification to the framework, you can play back HTTP Live Streaming (which actually sends TS packets). I hope the above information is helpful for you.
Vibgyor
(DISCLAIMER: I'm personally involved in developing the free and open source program linked below)
A static build of FFmpeg (both the library and the command-line tool) is provided by ZShaolin (http://dyne.org/software/zshaolin), which also contains other media-conversion tools.
Using it can make it easier to experiment with scripts without having to compile FFmpeg from scratch.
What I need to do is decode video frames and render them on a trapezoidal surface. I'm using Android 2.2 as my development platform.
I'm not using the mediaplayer service since I need access to the decoded frames.
Here's what I have so far:
I am using the Stagefright framework to extract decoded video frames.
Each frame is then converted from YUV420 to RGB format.
The converted frames are then copied to a texture and rendered to an OpenGL surface.
Note that I am using Processing and not using OpenGL calls directly.
So now my problems are:
I can only decode MP4 files with Stagefright.
The rendering is too slow, around 100 ms for a 320x420 frame.
There is no audio yet; I can only render video, and I still don't know how to synchronize audio playback.
So for my questions...
How can I support other video formats? Should I use Stagefright, or should I switch to ffmpeg?
How can I improve the performance? I should be able to support at least 720p.
Should I use OpenGL calls directly instead of Processing? Will this improve the performance?
How can I sync the audio frames during playback?
Adding other video formats and codecs to stagefright
If you have parsers for "other" video formats, then you need to implement a Stagefright media-extractor plug-in and integrate it into AwesomePlayer. Similarly, if you have OMX components for the required video codecs, you need to integrate them with the OMXCodec class.
Using FFMPEG components in Stagefright, or using an FFMPEG-based player instead of Stagefright, does not seem trivial.
However, if the required formats are already available in OpenCORE, then you can modify the Android stack so that OpenCORE gets chosen for those formats. You would need to port the logic for getting the YUV data to OpenCORE (and get your hands dirty with MIOs).
Playback performance
SurfaceFlinger, which is used for normal playback, uses overlays for rendering. It usually provides around 4-8 video buffers (from what I have seen so far). So check how many buffers you are getting in your OpenGL rendering path; increasing the number of buffers will definitely improve the performance.
Also, check the time taken for the YUV-to-RGB conversion. You can optimize it or use an open-source library to improve performance; a baseline conversion loop is sketched below.
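As a baseline for that profiling, a naive planar YUV420 (I420) to ARGB conversion looks like the loop below; a NEON-optimized routine or a library such as libyuv will be much faster, and the BT.601 video-range coefficients are an assumption about the source's color space.

```cpp
#include <stdint.h>
#include <algorithm>

// Naive I420 -> ARGB8888 conversion (BT.601, video range).
// A multiply-heavy per-pixel loop like this is usually the first thing to profile.
void i420ToArgb(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                int width, int height, uint32_t *argb) {
    for (int row = 0; row < height; ++row) {
        for (int col = 0; col < width; ++col) {
            int Y = y[row * width + col] - 16;
            int U = u[(row / 2) * (width / 2) + col / 2] - 128;
            int V = v[(row / 2) * (width / 2) + col / 2] - 128;

            int r = (298 * Y + 409 * V + 128) >> 8;
            int g = (298 * Y - 100 * U - 208 * V + 128) >> 8;
            int b = (298 * Y + 516 * U + 128) >> 8;

            r = std::min(255, std::max(0, r));
            g = std::min(255, std::max(0, g));
            b = std::min(255, std::max(0, b));
            argb[row * width + col] = 0xFF000000u | (r << 16) | (g << 8) | b;
        }
    }
}
```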
OpenGL is usually used for graphics rather than video rendering, so I am not sure about its performance here.
Audio Video Sync
Audio time is used as the reference clock. In Stagefright, AwesomePlayer uses AudioPlayer to play out the audio, and AudioPlayer implements an interface that provides the current time. AwesomePlayer uses this to schedule video rendering: essentially, a video frame is rendered when its presentation timestamp matches that of the audio sample currently being played.
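To make that concrete, a minimal audio-master sync loop looks like the sketch below. audioClockUs() and the frame-queue hooks are hypothetical; with AudioTrack you would derive the clock from the playback head position, and with OpenSL ES from the player's position interface.

```cpp
#include <stdint.h>
#include <unistd.h>

// Hypothetical hook: returns the media time (in microseconds) of the audio
// sample currently being played, e.g. frames written minus output latency.
int64_t audioClockUs();

// Hypothetical hooks for the video path.
bool nextDecodedFrame(int64_t *ptsUs);   // peek at the next frame's PTS
void renderNextFrame();                  // draw it (OpenGL, overlay, ...)
void dropNextFrame();                    // skip it without drawing

// Simple audio-master sync loop: render a frame when the audio clock has
// caught up with its presentation timestamp; drop frames that are very late.
void videoSyncLoop() {
    int64_t ptsUs;
    while (nextDecodedFrame(&ptsUs)) {
        int64_t lateUs = audioClockUs() - ptsUs;
        if (lateUs > 40000) {            // more than ~one frame late: drop
            dropNextFrame();
        } else if (lateUs >= 0) {        // due now: render
            renderNextFrame();
        } else {                         // early: wait until it is due
            usleep(static_cast<useconds_t>(-lateUs));
        }
    }
}
```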
Shash