In the Android 4.2 release, I observe that the Miracast implementation mandates that an OMX encoder support a new extension index, "OMX.google.android.index.prependSPSPPSToIDRFrames". However, having studied the subsequent implementation of MediaCodec, Converter, and WifiDisplaySource, it appears to me that there is enough support in the existing framework to provide this feature without adding another index to the OMX component.
Can someone please confirm whether my understanding is correct? If so, can you kindly provide some further information on the reasons/rationale behind this design?
Thanks.
I have found the answer to my question in the latest release, Android 4.2.2. As suggested in my question, Google has decided to support both an index inside the OMX component and Stagefright framework-level handling of prepending SPS and PPS to IDR frames. This way an OMX component needn't support the new index: component creation through the ACodec interface doesn't fail, and the framework takes up the responsibility of prepending.
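For illustration, here is a minimal sketch (not verbatim AOSP code; the values are made up) of how the Wi-Fi Display Converter requests this behaviour through the "prepend-sps-pps-to-idr-frames" format key, which ACodec translates into the OMX extension index above, falling back to framework-level prepending from 4.2.2 onwards:

```cpp
#include <media/stagefright/foundation/AMessage.h>

using namespace android;

// Sketch: encoder output format as set up by a Converter-like component.
static sp<AMessage> makeEncoderFormat() {
    sp<AMessage> format = new AMessage;
    format->setString("mime", "video/avc");
    format->setInt32("width", 1280);       // illustrative values
    format->setInt32("height", 720);
    format->setInt32("bitrate", 5000000);
    format->setInt32("frame-rate", 30);
    // The key in question: ask the encoder (or, failing that, the framework)
    // to prepend SPS/PPS to every IDR frame.
    format->setInt32("prepend-sps-pps-to-idr-frames", 1);
    return format;
}
```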
I have gone through these links and a few others:
khronos
OpenMax_Development_Guide
bellagio_openmax_il_open_source_implementation_enables_developers_to_create
but all of them just explain the calling sequence, show block diagrams, etc.; they don't explain how to write and build an OpenMAX component and plug it into Android. Even the Android building-and-porting link is complicated: it doesn't say whether you need the whole Android source tree to write and build an OpenMAX plugin, only part of it, or whether you can create one without the Android source at all.
I have a Firefly K3288 board running Android KitKat 4.4, which supports an HEVC hardware decoder, but I want to add an HEVC software decoder.
If anyone knows how to write and build an OpenMAX HEVC video decoder component and plug it into Android, please give some directions.
For the 1st question, how to develop an OMX component: you will have to write a new component, either from scratch or using a template of existing functions. Please do refer to the OpenMAX IL specification, specifically chapter 2.
I would recommend writing a component based on the Bellagio implementation, which can be found here. Please refer to omx_base_video_port.c, as this is essential for your decoder development.
An alternative could be to refer to the implementation from one of the vendors. In the AOSP tree, please refer to the qcom implementation here, which could provide a good reference for starting your development.
Note: The OMX wrapper is mainly concerned with state management, context management, and buffer management. How it interacts with your decoder, whether HW or SW, depends on your driver architecture, which you should decide on first. Once this driver architecture is finalized, integrating into OMX should be fairly easy.
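To make the shape of the work concrete, here is a bare skeleton of an OMX IL component entry point following the IL specification (the MyHevcDecoder_* names are hypothetical; Bellagio's base classes fill in most of this boilerplate for you):

```cpp
#include <OMX_Core.h>
#include <OMX_Component.h>

static OMX_ERRORTYPE MyHevcDecoder_SendCommand(
        OMX_HANDLETYPE hComp, OMX_COMMANDTYPE cmd,
        OMX_U32 param, OMX_PTR cmdData) {
    // Handle OMX_CommandStateSet, OMX_CommandFlush, port enable/disable, ...
    return OMX_ErrorNone;
}

static OMX_ERRORTYPE MyHevcDecoder_EmptyThisBuffer(
        OMX_HANDLETYPE hComp, OMX_BUFFERHEADERTYPE *pBuffer) {
    // Queue the input (bitstream) buffer for the decoding thread.
    return OMX_ErrorNone;
}

static OMX_ERRORTYPE MyHevcDecoder_FillThisBuffer(
        OMX_HANDLETYPE hComp, OMX_BUFFERHEADERTYPE *pBuffer) {
    // Queue the output buffer to be filled with decoded frames.
    return OMX_ErrorNone;
}

// Called by the OMX core when the component is instantiated: populate the
// component's function table and allocate your private context.
OMX_ERRORTYPE MyHevcDecoder_ComponentInit(OMX_HANDLETYPE hComponent) {
    OMX_COMPONENTTYPE *comp = (OMX_COMPONENTTYPE *)hComponent;
    comp->SendCommand     = MyHevcDecoder_SendCommand;
    comp->EmptyThisBuffer = MyHevcDecoder_EmptyThisBuffer;
    comp->FillThisBuffer  = MyHevcDecoder_FillThisBuffer;
    // Also wire up GetParameter/SetParameter, GetState, UseBuffer/
    // AllocateBuffer, SetCallbacks, etc., per chapter 3 of the spec.
    return OMX_ErrorNone;
}
```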
For the 2nd question, how to integrate the HEVC decoder, please refer to this question, which has the relevant details.
I have a new task of integrating a decoder (HEVC) from FFmpeg into Android's Stagefright. To do this, I first need to create an OMX component; my next step is to register my codec in media_codecs.xml and then register the OMX component in the OMX Core.
Is there any guide or steps to create an OMX component for a video decoder? Secondly, this decoder plays only elementary streams (.bin or .h265 files), so there is no container format here.
Can anyone provide some steps or guidelines to be followed while creating an OMX component for a video codec? Any sort of pointers will be really helpful to me.
Thanks in advance.
In general, you could follow the steps pointed out in this question for integrating a decoder into the OMX Core.
HEVC is not yet part of the OMX IL specification. Hence, you would have to introduce a new role like video_decoder.hevc for your component while registering it in media_codecs.xml. Please do check that your OMX core can support this new role.
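A hedged example of such a registration (the component name is hypothetical, and on KitKat you may also need to define the "video/hevc" MIME constant in MediaDefs yourself, since it only entered AOSP later):

```xml
<MediaCodecs>
    <Decoders>
        <!-- Hypothetical SW HEVC decoder; the quirks depend on your component. -->
        <MediaCodec name="OMX.mycompany.hevc.decoder" type="video/hevc">
            <Quirk name="requires-allocate-on-input-ports" />
            <Quirk name="requires-allocate-on-output-ports" />
        </MediaCodec>
    </Decoders>
</MediaCodecs>
```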
If you are trying to play only elementary streams, you can consider modifying the stagefright command-line utility to read the elementary stream data and feed the decoder.
Another option is to modify the current recordVideo utility to read frame data and create a decoder instead of the encoder. With these, I presume you should be able to drive your decoder from the command line.
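As a rough illustration of the feeding side, here is a sketch (against the KitKat-era libstagefright APIs; the class and the length-prefixed framing are hypothetical, and a real implementation would parse start codes from the .h265 file) of a MediaSource that hands access units to the decoder:

```cpp
#include <stdint.h>
#include <stdio.h>

#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaErrors.h>
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MetaData.h>

using namespace android;

struct ElementaryStreamSource : public MediaSource {
    ElementaryStreamSource(FILE *file, int32_t width, int32_t height)
        : mFile(file), mWidth(width), mHeight(height) {}

    virtual status_t start(MetaData * /*params*/) { return OK; }
    virtual status_t stop() { return OK; }

    virtual sp<MetaData> getFormat() {
        sp<MetaData> meta = new MetaData;
        meta->setCString(kKeyMIMEType, "video/hevc");  // your new MIME type
        meta->setInt32(kKeyWidth, mWidth);
        meta->setInt32(kKeyHeight, mHeight);
        return meta;
    }

    virtual status_t read(MediaBuffer **out, const ReadOptions * /*opts*/) {
        // Toy framing: each access unit is preceded by a 4-byte size field.
        uint32_t size;
        if (fread(&size, 1, sizeof(size), mFile) != sizeof(size)) {
            return ERROR_END_OF_STREAM;
        }
        MediaBuffer *buffer = new MediaBuffer(size);
        if (fread(buffer->data(), 1, size, mFile) != size) {
            buffer->release();
            return ERROR_END_OF_STREAM;
        }
        buffer->set_range(0, size);
        *out = buffer;
        return OK;
    }

private:
    FILE *mFile;
    int32_t mWidth, mHeight;
};
```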
EDIT: If you wish to build a new OMX component, I would recommend referring to the Bellagio Component Writer's Guide, which gives a pretty comprehensive walkthrough of building a new component. Please do ensure that you identify the dependencies between the Bellagio implementation and your core implementation.
Also, you could look at other public domain OMX implementations as here:
http://androidxref.com/4.4.2_r1/xref/hardware/ti/omap4xxx/domx/
http://androidxref.com/4.4.2_r1/xref/hardware/qcom/media/mm-video-v4l2/vidc/
I feel Bellagio could work as a good starting reference if you haven't built an OMX component before. The sources for Bellagio are available on SourceForge.
I need to develop a custom 'wrapper' video codec and integrate it into Android (JB for now, ICS later). We want to use some custom decryption keys from the SIM (don't ask!). The best method, one that would allow it to work alongside other, non-encrypted media and use the standard media player or others, seems to be to define our own MIME type, link that to a custom wrapper codec that does the custom decryption, and then pass the data on to a real codec. (Let's say the file type is .mp4 for now.)
(An alternative might be to write our own media player, but we'd rather not go down that route because we really want the media to appear seamlessly alongside other media)
I've been trying to follow this guide:
how to integrate a decoder into multimedia framework
I'm having trouble with the OMX Core registration. I can build libstagefright.so from the Android source by typing make stagefright, but the guide says to use libstagefrighthw.so, which seems appropriate for JB. I'm not sure how to build this; it doesn't seem to get built by make stagefright, unless I'm doing something wrong?
The other problem is that even if I do get the custom wrapper codec registered, I'm not sure how to go about passing the data on to a real codec.
If anyone has any suggestions (or can give some baby step by step instructions!), I'd really appreciate it - the deadline is quite tight for the proof of concept and I know very little about codecs or the media framework...
Many Thanks.
(p.s. I don't want to get into a mud fight about drm and analogue holes etc.., thanks)
In this post, I am using H.264 as an example, but the solutions can be extended to support other codecs like MPEG-4, VC-1, VP8, etc. There are two possible solutions to your problem, which I describe below, each with its own pros and cons, to help you make an informed decision.
Solution 1: Extend the codec to support new mode
In Jelly Bean, one could register the same OMX component, with the same MIME types, under two different component names, viz. OMX.ABC.XYZ and OMX.ABC.XYZ.secure. The former is used for normal playback and is the more commonly used component. The latter is used when the parser, i.e. MediaExtractor, indicates the presence of secure content. In OMXCodec::Create, after findMatchingCodecs returns a list of codecs, one can observe the choice to select the .secure component, as here.
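For reference, a hedged paraphrase (not verbatim AOSP code) of that selection logic in OMXCodec::Create:

```cpp
// After findMatchingCodecs() has produced the candidate list, the secure
// variant of each candidate is preferred when secure buffers are required.
for (size_t i = 0; i < matchingCodecs.size(); ++i) {
    const char *componentName = matchingCodecs[i].mName.string();
    AString tmp;
    if (flags & kUseSecureInputBuffers) {
        tmp = componentName;
        tmp.append(".secure");   // e.g. OMX.ABC.XYZ -> OMX.ABC.XYZ.secure
        componentName = tmp.c_str();
    }
    // ... try to allocate this component; fall through to the next on failure ...
}
```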
Steps to follow:
On your platform, register another component with a new extension like OMX.H264.DECODER.decrypt or something similar. The change is required only in media_codecs.xml. Whether to register a new factory method or reuse a common factory method is up to you.
From your parser, when you encounter the specific use case, set a new flag like kKeyDecryptionRequired. For this you will have to define a new key in MetaData.h and a corresponding quirk in OMXCodec.h.
Modify the OMXCodec::Create method to append a .decrypt suffix, similar to the .secure suffix shown above.
With all changes confined to the OMXCodec, MetaData, and MediaExtractor modules, you will only have to rebuild libstagefright.so and replace it on your platform.
Voila! Your integration should be complete. Now comes the main challenge, inside the component: as part of the component implementation, you should be able to differentiate between an ordinary component creation and a .decrypt component creation.
From a runtime perspective, assuming your component knows whether or not it is a .decrypt component, you could handle the decryption as part of the OMX_EmptyThisBuffer call, decrypting the data before passing it to the underlying codec, as sketched below.
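A minimal sketch of that idea (MyComponent, mIsDecryptVariant, decryptBuffer, and queueInputToDecoder are all hypothetical; the real hook points depend on how your component is structured):

```cpp
#include <OMX_Core.h>
#include <OMX_Component.h>

// Hypothetical component wrapper; only the decrypt hook is sketched here.
struct MyComponent {
    bool mIsDecryptVariant;  // set at creation time from the component name

    void decryptBuffer(OMX_U8 *data, OMX_U32 len);              // your SIM-based cipher
    OMX_ERRORTYPE queueInputToDecoder(OMX_BUFFERHEADERTYPE *);  // real decoder path

    // Called on the component's OMX_EmptyThisBuffer path.
    OMX_ERRORTYPE emptyThisBuffer(OMX_BUFFERHEADERTYPE *header) {
        if (mIsDecryptVariant) {
            // Decrypt in place before the underlying decoder sees the data.
            decryptBuffer(header->pBuffer + header->nOffset, header->nFilledLen);
        }
        // Hand the (now clear) buffer to the actual decoder path.
        return queueInputToDecoder(header);
    }
};
```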
Pros: Easy to integrate; minimal changes in the Android framework; scalable to other codecs; no new MIME type registration required.
Cons: You need to track future revisions of Android, specifically new quirks, flags, and the choice of the .decrypt extension. If Google decides to employ something similar, you will have to adapt/modify your solution accordingly.
Solution 2: Registration of new MIME Type
From your question, it is not clear whether you were able to define the MIME type, so I am capturing the steps for clarity.
Steps to follow:
Register a new MIME type in MediaDefs, as shown here. For example, you could employ a new MIME type such as const char *MEDIA_MIMETYPE_VIDEO_AVC_ENCRYPT = "video/avc-encrypt";
Register your new component with this new MIME type in media_codecs.xml. Please note that you will have to ensure the component quirks are also handled accordingly.
In the OMXCodec::setVideoOutputFormat method, you will have to introduce support for your new MIME type, as shown for H.264 here (a hedged sketch follows this list). Please note that similar changes are needed elsewhere in OMXCodec to support the new MIME type.
In MediaExtractor, you will have to signal the MIME type for the video track using the newly defined type. With these changes, your component will be selected and created.
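The setVideoOutputFormat change could look roughly like this (a paraphrase inside OMXCodec::setVideoOutputFormat, not verbatim AOSP code; the new MIME constant is the one defined in step 1):

```cpp
// Map the encrypted-AVC MIME type onto the same OMX compression format as
// plain AVC, so the rest of the port configuration stays unchanged.
OMX_VIDEO_CODINGTYPE compressionFormat = OMX_VIDEO_CodingUnused;
if (!strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC, mime)
        || !strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC_ENCRYPT, mime)) {
    compressionFormat = OMX_VIDEO_CodingAVC;
} else if (!strcasecmp(MEDIA_MIMETYPE_VIDEO_MPEG4, mime)) {
    compressionFormat = OMX_VIDEO_CodingMPEG4;
}  // ... remaining codecs unchanged ...
```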
However, the challenge remains: where do you perform the decryption? For this, you could employ the same solution as described in the previous section, i.e. handle it as part of the OMX_EmptyThisBuffer call.
Pros: None that I can think of.
Cons: First, the solution is not scalable. You will have to keep adding new MIME types and keep modifying the Stagefright framework. Next, the changes in OMXCodec require corresponding changes in MediaExtractor. Hence, even though your initial focus is on the MP4 extractor, if you wish to extend the solution to other container formats like AVI or MKV, you will have to add support for the new MIME types in those extractors too.
Lastly, some notes.
As the preferred solution, I would recommend Solution 1, as it is easy and simple.
I haven't touched upon the ACodec-based implementation of the codec. However, I do feel that Solution 1 would be far easier to implement even if such support is required in the future.
If you aren't modifying the OMX core, you shouldn't need to modify libstagefrighthw.so. Just FYI, this is typically implemented by vendors as part of their vendor-specific modules, as in vendor/<xyz>/hardware/.... You need to check with your platform provider for the sources of libstagefrighthw.so.
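For context, libstagefrighthw.so is the vendor library that exports createOMXPlugin(), which Stagefright's OMXMaster loads to reach the vendor's OMX core. A sketch of the interface it implements (see OMXPluginBase.h in AOSP; VendorOMXPlugin is a made-up name):

```cpp
#include <media/hardware/OMXPluginBase.h>

struct VendorOMXPlugin : public android::OMXPluginBase {
    // Instantiate a component (e.g. "OMX.H264.DECODER.decrypt") from the vendor core.
    virtual OMX_ERRORTYPE makeComponentInstance(
            const char *name, const OMX_CALLBACKTYPE *callbacks,
            OMX_PTR appData, OMX_COMPONENTTYPE **component);

    virtual OMX_ERRORTYPE destroyComponentInstance(OMX_COMPONENTTYPE *component);

    // Enumerate component names and report the roles each one supports.
    virtual OMX_ERRORTYPE enumerateComponents(
            OMX_STRING name, size_t size, OMX_U32 index);
    virtual OMX_ERRORTYPE getRolesOfComponent(
            const char *name, android::Vector<android::String8> *roles);
};

// The single symbol Stagefright looks up when dlopen()ing libstagefrighthw.so:
extern "C" android::OMXPluginBase *createOMXPlugin() {
    return new VendorOMXPlugin;
}
```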
Is there any documentation explaining the Android Stagefright architecture?
Can I get some pointers on these subjects?
A good explanation of Stagefright is provided at http://freepine.blogspot.com/2010/01/overview-of-stagefrighter-player.html.
There is a new playback engine implemented by Google that comes with Android 2.0 (i.e., Stagefright), which seems quite simple and straightforward compared with the OpenCORE solution:
MediaExtractor is responsible for retrieving track data and the corresponding metadata from the underlying file system or HTTP stream;
Leveraging OMX for decoding: there are currently two OMX plugins, adapting to PV's software codecs and vendors' hardware implementations respectively; and there is a local implementation of software codecs which encapsulates PV's decoder APIs directly;
AudioPlayer is responsible for rendering audio; it also provides the timebase for timing and A/V synchronization whenever an audio track is present;
Depending on which codec is picked, a local or remote renderer is created for video rendering; the system clock is used as the timebase for video-only playback;
AwesomePlayer works as the engine coordinating the above modules, and is finally connected into the Android media framework through the StagefrightPlayer adapter.
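Put together, the flow those modules implement looks roughly like this (a sketch against the native libstagefright APIs of that era, which were never a stable public surface; illustrative only):

```cpp
#include <media/stagefright/DataSource.h>
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaExtractor.h>
#include <media/stagefright/OMXCodec.h>

using namespace android;

void sketchPlaybackChain(const sp<IOMX> &omx, const char *uri) {
    sp<DataSource> source = DataSource::CreateFromURI(uri);        // file or http
    sp<MediaExtractor> extractor = MediaExtractor::Create(source); // demux
    sp<MediaSource> track = extractor->getTrack(0);                // pick a track

    // OMXCodec hides the OMX decoder (SW or HW) behind the MediaSource API;
    // AwesomePlayer drives read() and hands the frames to the renderer.
    sp<MediaSource> decoder = OMXCodec::Create(
            omx, track->getFormat(), false /* createEncoder */, track);

    decoder->start();
    MediaBuffer *frame;
    while (decoder->read(&frame) == OK) {
        // render / consume the decoded frame ...
        frame->release();
    }
    decoder->stop();
}
```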
Look at this post.
Also, the Android player is built on top of PacketVideo (PV) Player, and here are the docs about it (beware of really slow transfer speeds):
PVPlayer SDK Developer's Guide link 1, link 2
PVPlayer return codes link
Starting with Gingerbread, it is the Stagefright framework instead of the PV framework. The link above has good info about the framework. If you have some specific questions, I may be able to help you out.
Thanks, Dolphin
I am writing an app which needs to decode an H.264 (AVC) bitstream. I found that AVC codec sources exist in /frameworks/base/media/libstagefright/codecs/avc; does anyone know how one can get access to those codecs in an Android app? I guess it's through JNI, but I'm not clear on how this can be done.
After some investigation, I think one approach is to create my own classes and JNI interfaces in the Android source to enable using the codecs in an Android app.
Another way, which does not require any changes to the Android source, is to include the codec as a shared library in my application using the NDK. Any thoughts on these? Which way is better (if feasible)?
I didn't find much information about Stagefright; it would be great if anyone could point some out. I am developing on Android 2.3.3.
Any comments are highly appreciated. Thanks!
Stagefright does not support elementary H.264 decoding. However, it does have an H.264 decoder component; in theory, this library could be used, but in reality it will be tough to use as a standalone library due to its dependencies.
The best approach would be to use a JNI-wrapped, independent H.264 decoder (like the one available with FFmpeg).
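A minimal sketch of such a JNI wrapper around FFmpeg's decoder (the Java class/method names are hypothetical, error handling is elided, and the modern send/receive decode API is shown):

```cpp
#include <jni.h>

extern "C" {
#include <libavcodec/avcodec.h>
}

static AVCodecContext *gCtx;

// Hypothetical native method of com.example.AvcDecoder: set up the decoder.
extern "C" JNIEXPORT jint JNICALL
Java_com_example_AvcDecoder_nativeInit(JNIEnv *, jobject) {
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    gCtx = avcodec_alloc_context3(codec);
    return avcodec_open2(gCtx, codec, nullptr);  // 0 on success
}

// Feed one chunk of the elementary stream; drain any decoded frames.
extern "C" JNIEXPORT jint JNICALL
Java_com_example_AvcDecoder_nativeDecode(
        JNIEnv *env, jobject, jbyteArray input, jint size) {
    AVPacket *pkt = av_packet_alloc();
    av_new_packet(pkt, size);
    env->GetByteArrayRegion(input, 0, size, reinterpret_cast<jbyte *>(pkt->data));

    int ret = avcodec_send_packet(gCtx, pkt);
    AVFrame *frame = av_frame_alloc();
    while (ret >= 0 && avcodec_receive_frame(gCtx, frame) == 0) {
        // frame->data[] now holds a decoded YUV picture; hand it to your renderer.
    }
    av_frame_free(&frame);
    av_packet_free(&pkt);
    return ret;
}
```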