I'm very new to Android, and I need to add my own codec to it. Since I'm starting from the basics, can someone please explain the steps I need to take in order to add a new codec to Android?
This is virtually impossible to do in a portable way, because all audio and video codecs are compiled at the platform level (largely because they usually require hardware-specific acceleration).
If you only need this to work on a specific hardware platform and have an unlocked bootloader (so you can boot a custom build of Android), you can compile the full Android platform from scratch using the AOSP as a base.
Depending on which version of Android you're targeting, you're looking at adding code to either OpenCore or Stagefright (the subsystems Android uses for A/V decoding and parsing). There you can add audio decoders, audio encoders, video encoders, video decoders and container parsers.
Here is some discussion of adding to Stagefright:
http://freepine.blogspot.com/2010/01/overview-of-stagefrighter-player.html
http://groups.google.com/group/android-porting/msg/5d88e76845a22bbb
However, unless the encoding scheme you wish to support is very simple (what do you want to add?), it is likely to be too CPU-intensive for most Android devices to run without offloading some of the work to another subsystem (like the radio chipset or the GPU).
In the Android framework, MediaCodec is implemented on top of Stagefright, a native-level media playback engine with built-in software codecs for popular media formats. Stagefright also supports integration with custom hardware codecs provided as OpenMAX components. Here is an article that summarizes the steps.
CHECK HERE
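To make the OpenMAX route more concrete, below is a very rough skeleton of the plugin interface through which Stagefright discovers vendor codecs. This is not public SDK/NDK API; the header lives in the platform source, its exact path and the signatures vary between releases, and everything prefixed with "My" is a placeholder for your own component.

    // Hypothetical vendor codec plugin for Stagefright (sketch only).
    // OMXPluginBase comes from the platform's internal headers, not the NDK.
    #include <media/hardware/OMXPluginBase.h>
    #include <utils/String8.h>
    #include <utils/Vector.h>

    namespace android {

    struct MyOMXPlugin : public OMXPluginBase {
        virtual OMX_ERRORTYPE makeComponentInstance(
                const char *name, const OMX_CALLBACKTYPE *callbacks,
                OMX_PTR appData, OMX_COMPONENTTYPE **component) {
            // Instantiate your OMX IL component here when 'name' matches it.
            return OMX_ErrorComponentNotFound;
        }

        virtual OMX_ERRORTYPE destroyComponentInstance(
                OMX_COMPONENTTYPE *component) {
            return OMX_ErrorNone;
        }

        virtual OMX_ERRORTYPE enumerateComponents(
                OMX_STRING name, size_t size, OMX_U32 index) {
            // Report e.g. "OMX.myvendor.video_decoder.mycodec" for index 0.
            return OMX_ErrorNoMore;
        }

        virtual OMX_ERRORTYPE getRolesOfComponent(
                const char *name, Vector<String8> *roles) {
            // Fill in roles such as "video_decoder.mycodec".
            return OMX_ErrorNone;
        }
    };

    }  // namespace android

    // Stagefright looks for roughly this factory symbol in the vendor
    // library (libstagefrighthw.so) when the media server starts.
    extern "C" android::OMXPluginBase *createOMXPlugin() {
        return new android::MyOMXPlugin;
    }

On top of this, the component itself still has to be registered (for example in media_codecs.xml on newer releases) so the framework knows which MIME types it handles.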
What is the best (performance-wise) way to get and stream video from an Android device's camera to a PC?
I have seen this question asked here before, and there are a few open-source programs that do just that, but there are multiple approaches and I don't know which one is best!
For example:
Should the Android part be written in C++ or Java (performance/API-wise)?
Which API should I use to get the video from the camera?
What is the best way to stream the video?
I don't intend to support old Android versions (<4.x), so if the best way/API is relatively new, that's fine by me.
I'm not familiar with Android development but I'll try to answer.
I suppose that the actual encoding of the raw image data is probably done on a hardware chip (software encoding would otherwise probably kill your battery), and it looks like the MediaCodec class is exactly what you need. I suppose you want to implement some kind of live-streaming service and that latency is important. If so, you should stick to UDP-based transmission methods: the RTP protocol or the MPEG-TS container format would be the best choices for this purpose. Of course you can also use TCP-based methods for streaming, like HLS or DASH (both of them use HTTP).
You should also take a look at Table 1, "Core media format and codec support":
It tells us, for example, that the H.264 AVC encoder supports the MPEG-TS container, and that HLS version 3 is also supported on Android 4.0 and above.
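To make the MediaCodec part concrete, here is a minimal encoder-configuration sketch using the NDK AMediaCodec C API (available from API 21; the Java android.media.MediaCodec class exposes the same model from API 16). The bitrate, frame rate and color format below are assumptions you would tune for your device.

    // Minimal sketch: configure a hardware H.264 (AVC) encoder via the NDK.
    #include <media/NdkMediaCodec.h>
    #include <media/NdkMediaFormat.h>

    AMediaCodec *createAvcEncoder(int width, int height) {
        AMediaFormat *format = AMediaFormat_new();
        AMediaFormat_setString(format, AMEDIAFORMAT_KEY_MIME, "video/avc");
        AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_WIDTH, width);
        AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_HEIGHT, height);
        AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_BIT_RATE, 2000000);
        AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_FRAME_RATE, 30);
        AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_I_FRAME_INTERVAL, 1);
        // 21 == COLOR_FormatYUV420SemiPlanar; supported input color formats
        // are device-specific and should be queried at runtime.
        AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_COLOR_FORMAT, 21);

        AMediaCodec *codec = AMediaCodec_createEncoderByType("video/avc");
        AMediaCodec_configure(codec, format, NULL /* surface */,
                              NULL /* crypto */, AMEDIACODEC_CONFIGURE_FLAG_ENCODE);
        AMediaCodec_start(codec);
        AMediaFormat_delete(format);
        return codec;
    }

You then feed camera frames into the encoder's input buffers and read H.264 NAL units from its output buffers, which is what your RTP packetizer or MPEG-TS muxer would consume.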
I am working on a video conferencing project. We were using a software codec to encode and decode video frames, which works fine for lower resolutions (up to 320p). We have planned to support higher resolutions as well, up to 720p. I came to know that hardware acceleration will do this job fairly well.
Since the hardware codec API MediaCodec is available from Jelly Bean onward, I have used it for encoding and decoding and it works fine. But my application has to be supported from 2.3 onward, so I need hardware-accelerated video decoding of 720p H.264 frames at 30 fps.
While researching, I came across the idea of using OMXCodec by modifying the Stagefright framework. I had read that the hardware decoder for H.264 is available from 2.1 and the encoder from 3.0. I have gone through many articles and questions on this site and confirmed that I can go ahead.
I had read about the Stagefright architecture here: architecture, and here: stagefright how it works.
And I read about OMXCodec here: use-android-hardware-decoder-with-omxcodec-in-ndk.
I am having trouble getting started and some confusion about the implementation. I would like some info about it:
To use OMXCodec in my code, should I build my project within the whole Android source tree, or can I do it by adding some files from the AOSP source (if so, which ones)?
What steps should I follow, from scratch, to achieve this?
Can someone give me a guideline on this?
Thanks...
The best example of integrating OMXCodec at the native layer is the command-line utility stagefright, as can be observed here in Gingerbread itself. This example shows how an OMXCodec is created.
Some points to note:
The input to OMXCodec should be modeled as a MediaSource, so you should ensure that your application meets this requirement. An example of creating a MediaSource-based source can be found in the record utility file as DummySource.
The input to the decoder, i.e. the MediaSource, should provide its data through the read method, so your application should supply an individual frame for every read call (see the sketch after this list).
The decoder can be created with a NativeWindow for output buffer allocation. In that case, if you wish to access the buffers from the CPU, you should probably refer to this query for more details.
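As a rough illustration of the second point, here is a sketch of a minimal MediaSource that hands one H.264 access unit to OMXCodec per read() call, modeled loosely on the DummySource mentioned above. It is written against the internal Stagefright headers of that era (not public SDK/NDK API), and fillNextAccessUnit() is a placeholder for whatever produces your frames.

    // Hypothetical frame feeder for OMXCodec (sketch only).
    #include <media/stagefright/MediaBuffer.h>
    #include <media/stagefright/MediaBufferGroup.h>
    #include <media/stagefright/MediaDefs.h>
    #include <media/stagefright/MediaSource.h>
    #include <media/stagefright/MetaData.h>

    using namespace android;

    // Placeholder: copies the next compressed access unit into dst and
    // returns its size. Supplied by your application.
    size_t fillNextAccessUnit(void *dst);

    struct StreamSource : public MediaSource {
        StreamSource(int width, int height)
            : mWidth(width), mHeight(height), mTimeUs(0) {
            mGroup.add_buffer(new MediaBuffer(256 * 1024));
        }

        virtual sp<MetaData> getFormat() {
            sp<MetaData> meta = new MetaData;
            meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
            meta->setInt32(kKeyWidth, mWidth);
            meta->setInt32(kKeyHeight, mHeight);
            return meta;
        }

        virtual status_t start(MetaData * /* params */) { return OK; }
        virtual status_t stop() { return OK; }

        // OMXCodec pulls exactly one compressed frame per read() call.
        virtual status_t read(MediaBuffer **out, const ReadOptions * /* options */) {
            MediaBuffer *buffer;
            status_t err = mGroup.acquire_buffer(&buffer);
            if (err != OK) return err;

            size_t frameSize = fillNextAccessUnit(buffer->data());
            buffer->set_range(0, frameSize);
            buffer->meta_data()->setInt64(kKeyTime, mTimeUs);
            mTimeUs += 33333;  // ~30 fps

            *out = buffer;
            return OK;
        }

    private:
        MediaBufferGroup mGroup;
        int mWidth, mHeight;
        int64_t mTimeUs;
    };

OMXCodec::Create() would then be given an instance of this source (with the metadata it reports) instead of a file-based extractor track.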
I'm trying to decode a raw h264 stream on "older" Android versions.
I've tried the MediaPlayer class, and it does not seem to support the stream format.
I can see the stream on other Cam Viewer apps from the market, so I figure there must be a way to do it, probably using the NDK.
I've read about OpenMAX and Stagefright, but couldn't find a working example about streaming.
Could someone please point me in the right direction?
Also, I'm reading in several places about "frameworks/av/include/media/stagefright/MediaSource.h" and other sources, but they don't seem to be in either the regular SDK or the NDK.
Where is this source located? Is there another SDK?
Thanks in advance.
Update: I'm receiving an RTSP connection.
If you only wish to perform a simple experiment to verify certain functionality, you can consider using the command-line stagefright utility. Do keep in mind that your streaming input may not be supported by it.
If you wish to build a more comprehensive player pipeline, you can look at the handling of RTSP as in here, or of HTTP as in here. Please note that the NuCachedSource2 implementation is essential for streaming input, as it provides a page cache that acts as a jitter buffer for the streaming data.
Please do note one critical point: the command-line stagefright utility doesn't render to the screen. Hence, if you wish to render, you will have to implement the complete playback pipeline including rendering.
On a related note, if your input is a streaming input, the standard player implementation does have support for streaming, as can be observed here. Did you face any issues with it?
As fadden has already pointed out, your work is made far simpler by the introduction of MediaCodec in Android 4.x.
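For reference, on devices where MediaCodec is available, decoding a raw H.264 stream looks roughly like the sketch below. It uses the NDK AMediaCodec C API (added in API 21); on Android 4.1-4.4 the same calls exist only in the Java MediaCodec class, and neither option helps on the pre-4.1 devices the question targets. getNextNalUnit() and nextPresentationTimeUs() are placeholders for your RTSP depacketizer.

    // Sketch: feed raw H.264 access units to a hardware decoder (NDK, API 21+).
    #include <stdint.h>
    #include <android/native_window.h>
    #include <media/NdkMediaCodec.h>
    #include <media/NdkMediaFormat.h>

    // Placeholders supplied by your application / depacketizer.
    size_t getNextNalUnit(uint8_t *dst, size_t capacity);
    int64_t nextPresentationTimeUs();

    void decodeLoop(ANativeWindow *window, int width, int height) {
        AMediaFormat *format = AMediaFormat_new();
        AMediaFormat_setString(format, AMEDIAFORMAT_KEY_MIME, "video/avc");
        AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_WIDTH, width);
        AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_HEIGHT, height);

        AMediaCodec *codec = AMediaCodec_createDecoderByType("video/avc");
        AMediaCodec_configure(codec, format, window, NULL /* crypto */, 0);
        AMediaCodec_start(codec);
        AMediaFormat_delete(format);

        for (;;) {
            ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 10000);
            if (inIdx >= 0) {
                size_t capacity;
                uint8_t *in = AMediaCodec_getInputBuffer(codec, inIdx, &capacity);
                // Queue one access unit (SPS/PPS first, then frames).
                size_t size = getNextNalUnit(in, capacity);
                AMediaCodec_queueInputBuffer(codec, inIdx, 0, size,
                                             nextPresentationTimeUs(), 0);
            }

            AMediaCodecBufferInfo info;
            ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000);
            if (outIdx >= 0) {
                // 'true' renders the decoded frame to the configured window.
                AMediaCodec_releaseOutputBuffer(codec, outIdx, true);
            }
        }
    }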
You could use third-party libs like android-h264-decoder, which uses JNI to increase performance! Also look at this lib from Intel.
Update: MediaCodec wasn't exposed until API 16 (Android 4.1), so that won't work on a 2.3.3 device.
Stagefright and OpenMAX IL were (and still are) internal components of Android. You can find the code (including MediaSource.h) at https://android.googlesource.com/platform/frameworks/av/+/master (note that the media framework moved to a separate tree, frameworks/av, only recently; before that it was part of frameworks/base, e.g. https://android.googlesource.com/platform/frameworks/base/+/gingerbread/media/).
I know well that H.264 support is not the goal of WebRTC's current maintainers. However, while poking around the native code, I noticed some commented out bits referring to an H.264 RTP packetizer. The environment I'm working on is the OMAP4430, which has hardware-accelerated support for H.264 SVC encode/decode, so it'd be great if I could re-add H.264 support to native WebRTC for my application. (VP8 is extremely slow on my device.) Is starting with the packetizer currently in the project a good start? Has anyone done this / have recommendations for how to go about adding H.264 support? (I plan on sending the H.264 WebRTC data to Doubango's Media Breaker to provide support for regular WebRTC clients.)
If the above is absolutely not possible or very hard, can anyone at least recommend how I might get better VP8 performance on my device? It's a NEON-based ARM SoC, so I would imagine libvpx should automatically take advantage of that. Is there any way to know for sure?
"H.264 support is not the goal of WebRTC's current maintainers" is not correct at all.
The IETF has not made a decision as to whether VP8 or H.264 or both will be mandatory to implement yet.
Google, who hosts webrtc.org, obviously wants their own VP8 codec in there, so there is nary a mention of H.264 on their site or in their example code... but that doesn't mean that's how this will all end up.
I would visit ietf.org and sign up for the WebRTC email list - and ask for some help there. :-)
In Android ICS and later, a new OpenMAX IL API version is in use, making old binary blobs useless/unused. This leads older devices that otherwise run ICS just fine to have broken video playback (YouTube HQ and IMDb, for example), because Android's fallback software decoder is poor compared with what ffmpeg can do on the same device (I tested MXPlayer with the arm6vfp ffmpeg build, and a 720p movie played back great).
I am trying to dig through the Android source code to see where and what exactly I could add or replace to allow the ffmpeg library's awesomeness to be used. The problem is that I don't know exactly what code is used by, for example, the YouTube app to decode video, or how that's decided.
So I have two options as far as I can tell:
Figure out the current software decoder being used, and try to wrap its external interface around ffmpeg, effectively replacing the slow software decoder currently used. The end result would be a single .so I could push to the device.
Figure out how to make Android accept an OMX library based on ffmpeg (I have built one successfully for Android: limoa) and add it somewhere to the list of considered libraries (or better: replace the unusable hardware codec).
As an extension, I'd like to also make camcorder video encoding work through this, so a truly integrated solution would be very much wanted. The question is: how, where, and what? Searching the Android source tree gives numerous hits for "H264" and related terms in many different places. I need the lowest and simplest level possible, so I can simply wrap the hypothetical decode(buffer) function call to use ffmpeg (libavcodec).
It seems to me that this presentation ("Integrating a Hardware Video Codec into Android Stagefright using OpenMAX IL") is exactly what you'd like to do. Good luck with your project!
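For reference, the libavcodec side of the hypothetical decode(buffer) wrapper described in the question would look roughly like this, using the decoding API of the ffmpeg releases that were current around ICS (e.g. avcodec_decode_video2; symbol names such as CODEC_ID_H264 later became AV_CODEC_ID_H264). It is a sketch of the ffmpeg half only; hooking it into Stagefright or an OMX component is the hard part the question asks about.

    // Sketch: wrap libavcodec's H.264 decoder behind a decode(buffer, size) call.
    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    static AVCodecContext *ctx = NULL;
    static AVFrame *frame = NULL;

    bool decoder_init() {
        avcodec_register_all();
        AVCodec *codec = avcodec_find_decoder(CODEC_ID_H264);
        if (!codec) return false;
        ctx = avcodec_alloc_context3(codec);
        frame = avcodec_alloc_frame();
        return avcodec_open2(ctx, codec, NULL) >= 0;
    }

    // Returns true when a complete picture was produced; the YUV planes are
    // then in frame->data[] with strides in frame->linesize[].
    bool decode(uint8_t *buffer, int size) {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = buffer;
        pkt.size = size;
        int got_picture = 0;
        int used = avcodec_decode_video2(ctx, frame, &got_picture, &pkt);
        return used >= 0 && got_picture != 0;
    }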