I need to develop a custom 'wrapper' video codec and integrate it into Android (JB for now, ICS later). We want to use some custom decryption keys from the SIM (don't ask!). The best method (one that would let the content play seamlessly alongside non-encrypted media, in the standard media player or any other) seems to be to define our own MIME type, link that to a custom wrapper codec that does the custom decryption, and then pass the data on to a real codec. (Let's say the file type is .mp4 for now.)
(An alternative might be to write our own media player, but we'd rather not go down that route because we really want the media to appear seamlessly alongside other media)
I've been trying to follow this guide:
how to integrate a decoder into multimedia framework
I'm having trouble with the OMX core registration. I can build libstagefright.so from the Android source by typing make stagefright, but the guide says to use libstagefrighthw.so, which seems appropriate for JB. I'm not sure how to build that one; it doesn't seem to get built by make stagefright, unless I'm doing something wrong?
The other problem is that even if I do get the custom wrapper codec registered, I'm not sure how to go about passing the data off to a real codec.
If anyone has any suggestions (or can give some baby step-by-step instructions!), I'd really appreciate it. The deadline is quite tight for the proof of concept, and I know very little about codecs or the media framework...
Many Thanks.
(p.s. I don't want to get into a mud fight about DRM and analogue holes etc., thanks)
In this post I am using H.264 as an example, but the solutions can be extended to support other codecs like MPEG-4, VC-1, VP8, etc. There are two possible solutions to your problem, which I list below, each with its own pros and cons, to help you make an informed decision.
Solution 1: Extend the codec to support a new mode
In JellyBean, one can register the same OMX component with the same MIME types under two different component names, viz. OMX.ABC.XYZ and OMX.ABC.XYZ.secure. The former is used for normal playback and is the more commonly used component. The latter is used when the parser, i.e. MediaExtractor, indicates the presence of secure content. In OMXCodec::Create, after findMatchingCodecs returns a list of codecs, we can observe the choice to select the .secure component, as shown here.
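For reference, the selection logic looks roughly like this (a paraphrased sketch of JB's frameworks/av/media/libstagefright/OMXCodec.cpp, not the exact source):

    // Inside OMXCodec::Create, iterating over the candidates returned by
    // findMatchingCodecs(): a ".secure" suffix is appended to the component
    // name when the extractor flagged secure content.
    const char *componentName = matchingCodecs[i].string();
    AString tmp;
    if (flags & kUseSecureInputBuffers) {
        tmp = componentName;
        tmp.append(".secure");
        componentName = tmp.c_str();
    }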
Steps to follow:
In your platform, register another component with some new extension like OMX.H264.DECODER.decrypt or something similar; the change is required only in media_codecs.xml (see the sketch after this list). Whether to register a new factory method or share a common one is up to you.
From your parser, when you encounter the specific use case, set a new flag like kKeyDecryptionRequired. For this you will have to define a new flag in MetaData.h and a corresponding quirk in OMXCodec.h.
Modify the OMXCodec::Create method to append a .decrypt suffix, similar to the .secure suffix shown above.
With all changes confined to the OMXCodec, MetaData and MediaExtractor modules, you will only have to rebuild libstagefright.so and replace it on your platform.
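For illustration, the media_codecs.xml entry could look something like the following; the component name is hypothetical and entirely your choice, though it must match what your OMX core reports:

    <!-- hypothetical entry registering the wrapper decoder -->
    <MediaCodecs>
        <Decoders>
            <MediaCodec name="OMX.H264.DECODER.decrypt" type="video/avc" />
        </Decoders>
    </MediaCodecs>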
Voila! Your integration should be complete. Now comes the main challenge inside the component: as part of the component implementation, you should be able to differentiate between an ordinary component creation and a .decrypt component creation.
From a runtime perspective, assuming your component knows whether or not it is a .decrypt component, you could handle the decryption as part of the OMX_EmptyThisBuffer call, where you decrypt the data and then pass it on to the underlying codec. A minimal sketch follows.
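In the sketch below, WrapperComponent, mIsDecryptVariant, mRealDecoder and decryptWithSimKey() are hypothetical names standing in for your own component code; only OMX_EmptyThisBuffer and the buffer header fields are standard OpenMAX IL:

    // Intercept the input buffer, decrypt it in place, then forward the
    // clear bitstream to the real decoder instance that we wrap.
    OMX_ERRORTYPE WrapperComponent::emptyThisBuffer(OMX_BUFFERHEADERTYPE *header) {
        if (mIsDecryptVariant) {  // decided at creation time from the ".decrypt" name
            // decryptWithSimKey() is your SIM-based decryption routine
            decryptWithSimKey(header->pBuffer + header->nOffset, header->nFilledLen);
        }
        // OMX_EmptyThisBuffer is the standard OpenMAX IL macro that calls
        // through the component handle of the underlying codec.
        return OMX_EmptyThisBuffer(mRealDecoder, header);
    }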
Pros: Easy to integrate, minimal changes to the Android framework, scalable to other codecs, and no new MIME type registration required.
Cons: You need to track future revisions of Android, specifically the new quirks, flags and the choice of the .decrypt extension. If Google decides to employ something similar, you will have to adapt your solution accordingly.
Solution 2: Registration of a new MIME type
From your question it is not clear whether you were able to define the MIME type, hence I am capturing the steps for clarity.
Steps to follow:
Register a new MIME type in MediaDefs as shown here. For example, you could define a new MIME type as const char *MEDIA_MIMETYPE_VIDEO_AVC_ENCRYPT = "video/avc-encrypt";
Register your new component with this updated MIME type in media_codecs.xml. Please note that you will have to ensure that the component quirks are also handled accordingly.
In the OMXCodec::setVideoOutputFormat method, you will have to introduce support for handling your new MIME type, as shown for H.264 here (see the sketch after this list). Note that other MIME-specific paths in OMXCodec will need similar changes.
In your MediaExtractor, signal the MIME type for the video track using the newly defined type. With these changes, your component will be selected and created.
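As a hedged sketch of steps 3 and 4 (the exact surrounding code differs between releases):

    // In OMXCodec::setVideoOutputFormat: map the new MIME type onto the same
    // underlying compression format, since the payload is still AVC.
    if (!strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC, mime)
            || !strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC_ENCRYPT, mime)) {
        compressionFormat = OMX_VIDEO_CodingAVC;
    }

    // In your MediaExtractor: tag the video track with the new MIME type
    // so that your component gets selected.
    meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC_ENCRYPT);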
However, the challenge still remains: where do you perform the decryption? Here you could employ the same approach as described in the previous section, i.e. handle it as part of the OMX_EmptyThisBuffer call.
Pros: None that I can think of...
Cons: First, the solution is not scalable; you will have to keep adding new MIME types and modifying the Stagefright framework. Next, the changes in OMXCodec require corresponding changes in MediaExtractor. Hence, even though your initial focus is the MP4 extractor, if you wish to extend the solution to other container formats like AVI or MKV, you will have to add support for the new MIME types in those extractors as well.
Lastly, some notes.
As the preferred option, I would recommend Solution 1, as it is easy and simple.
I haven't touched upon the ACodec-based implementation of the codec. However, I do feel that Solution 1 would be a far easier solution to implement even if such support is required in the future.
If you aren't modifying the OMX core, you shouldn't need to modify libstagefrighthw.so. Just FYI, this is typically implemented by vendors as part of their vendor-specific modules, as in vendor/<xyz>/hardware/.... You need to check with your platform provider for the sources of libstagefrighthw.so.
Related
How to implement a custom audio HAL in Android?
What are the different approaches to implementing the audio HAL in Android (e.g. TinyHAL, UCM, etc.)? I am looking for various approaches to suit my requirements.
The first and most important question you should ask yourself is whether you really need to implement your own HAL.
Secondly, there is some mandatory reading: Audio Architecture in Android.
Next, check the source code, e.g. for one of the Nexus devices; this can be treated as a reference design.
Finally, after lots of code writing and debugging, pass the CTS test suite to make sure you are compliant with the Android OS.
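To make the reference-design pointer a bit more concrete: every audio HAL library exports a well-known module symbol that the framework looks up when it loads audio.primary.<board>.so. A minimal, hedged skeleton (see hardware/libhardware/include/hardware/audio.h for the full interface; adev_open() here is only a placeholder):

    #include <errno.h>
    #include <hardware/hardware.h>
    #include <hardware/audio.h>

    // Would allocate an audio_hw_device and fill in its function pointers.
    static int adev_open(const hw_module_t *module, const char *name,
                         hw_device_t **device) {
        return -ENOSYS;  // placeholder
    }

    static hw_module_methods_t hal_module_methods = { adev_open };

    // The framework resolves this symbol by name when loading the HAL.
    audio_module HAL_MODULE_INFO_SYM = {
        {
            HARDWARE_MODULE_TAG,           // tag
            AUDIO_MODULE_API_VERSION_0_1,  // module_api_version
            HARDWARE_HAL_API_VERSION,      // hal_api_version
            AUDIO_HARDWARE_MODULE_ID,      // id
            "Example audio HAL",           // name
            "Example author",              // author
            &hal_module_methods,           // methods
        },
    };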
I have checked this question, which is very similar to mine:
I want to record a video with the Android camera.
After that, I want to use a library to remove the background, which is a chroma-key background.
First I thought I should use the Android NDK, in order to escape the SDK memory limitation and use all of the available memory.
The video is short, only a few seconds, so maybe the SDK can handle it.
I would prefer to use an SDK implementation and set android:largeHeap="true", to avoid .so file architecture mismatches.
Any library suggestions for the SDK or NDK, please?
IMO you should prefer an NDK-based solution, since video processing is a CPU-intensive operation and Java code won't give you better performance. Moreover, the most popular and reliable media-processing libraries are often written in C or C++.
I'd recommend you take a look at FFmpeg. It offers rich capabilities for working with multimedia. The chromakey filter may help you remove the green background (or whatever color you want). Then you can use another video as the new background, if needed; see the blend filter docs.
Filters are a nice and powerful concept. They may be used either via the ffmpeg command-line tool or via the libavfilter API. For the former, you should find an ffmpeg binary compiled for Android and run it with the traditional Runtime.exec(); an example invocation follows. For the latter, you need to write native code that creates the proper filter graph and performs the processing; this code must be linked against the FFmpeg libraries.
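For the command-line route, an invocation along these lines should work (the filter parameters are illustrative; tune the color, similarity and blend values to your footage):

    ffmpeg -i foreground.mp4 -i background.mp4 -filter_complex \
        "[0:v]chromakey=0x00FF00:0.1:0.2[fg];[1:v][fg]overlay[out]" \
        -map "[out]" -map 0:a? output.mp4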
I am developing a voice / video calling system in which there is browser to browser, Android to Android and Android to browser calling. Although I have managed to get that all working, I have run into a problem with the cryptos being used to encrypt the audio / video packets being sent between two clients. My system requires a certain set of cryptos, and I have managed to get that set working with Android to Android calling. However, the default cryptos being used in WebRTC enabled browsers are significantly weaker than the alternate crypto set being used for Android to Android calling. Thus, I have to "dumb down" the cryptos in the system so that I can have Android to browser calling.
Since I have no access to the code for WebRTC enabled browsers (and definitely cannot modify it) my only recourse is to somehow select or tell the peerconnection object which crypto level / set to use. I swear I have heard of this being done before, but I cannot find where I saw it nor anywhere that talks about doing it. So, I was wondering if anyone knew:
Is such a thing possible?
If possible, how does one set the cryptos for the call?
What cryptos are supported in Chrome and Firefox?
If I am remembering what I saw correctly, it was done somewhere along the lines of passing a JSON looking something like: { 'crypto' : 'AES....'} to the constraints parameter of webkitRTCPeerConnection. However, I could potentially be imagining all of this.
You can enable DTLS by passing the following to the PeerConnection constructor:
{ 'optional': [{'DtlsSrtpKeyAgreement': 'true'}]}
However, that doesn't let you pick the crypto algorithm. For that, you could potentially munge the SDP, replacing the crypto line with the desired SRTP key-management parameters; an example line follows. However, I'm not sure offhand whether anything other than the default is supported in Chrome. That may be a good question for the discuss-webrtc list.
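For context, an SDES crypto line in the SDP looks like this (the canonical example from RFC 4568; the inline value carries the base64-encoded master key and salt):

    a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:PS1uQCVeeCFCanVmcjkpPywjNWhcYD0mXXtxaVBR|2^20|1:32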
Since Android API 12, RTP is supported in the SDK, which includes RtpStream as the base class, and AudioStream, AudioCodec, and AudioGroup. However, there is no documentation, examples, or tutorials to help me use these specific APIs to take input from the device's microphone, and output it to an RTP stream.
Where do I specify using the mic as the source, and not to use a speaker? Does it perform any RTCP? Can I extend the RtpStream base class to create my own VideoStream class (ideally I would like to use these for video streaming too)?
Any help out there on these new(ish) APIs please?
Unfortunately, these APIs are the thinnest necessary wrapper around the native code that performs the actual work. This means they cannot be extended in Java, and to extend them in C++ you would, I believe, need a custom Android build.
As far as I can see, the AudioGroup cannot actually be set not to output sound.
I don't believe it does any RTCP, but my use of it doesn't involve RTCP, so I wouldn't know.
My advice: if you want to be able to extend functionality or have greater flexibility, find a C or C++ native library that someone has written or ported to Android and use that instead. This should let you control what audio it uses and add video streaming and other such extensions.
Can somebody give me some direction on how to synthesize instrument sounds (piano, drums, guitar, etc.)?
I am not even sure what to look for.
Thanks
Not sure if this is still the case, but Android seems to have latency issues that prevent it from doing true sound synthesis. NanoStudio, in my opinion, is the best audio app on iOS, and the author has so far refused to make an Android version because the framework isn't there yet.
See these links:
http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=nanostudio+android#hl=en&q=+site:forums.blipinteractive.co.uk+nanostudio+android&bav=on.2,or.r_gc.r_pw.&fp=ee1cd411508a9e34&biw=1194&bih=939
It all depends on what kind of application you're making. If it's going to be an Akai APC firing off sounds, you could be alright. If you're after true synthesis (crafting waveforms so they replicate pianos, guitars, and drums), which is what JASS (mentioned in another answer) does, then Android might not be able to handle it.
If you're looking for a guide on emulating organic instruments via synthesis, check out the books by Fred Welsh: http://www.synthesizer-cookbook.com/
Synthesizing a guitar, piano, or natural drums would be difficult. Triggering samples that you pass through a synthesis engine less so. If you want to synthesize analog synth sounds that's easier.
Here is a project out there you might be able to grab code from:
https://sites.google.com/site/androidsynthesizer/
In the end, if you want to create a full synthesizer or multi-track application, you'll have to render your oscillators, filters, etc. into an audio stream that can be piped to the device's audio output; a toy sketch follows. You don't necessarily need MIDI to do that.
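As a toy illustration of that rendering step, here is a hedged C++ sketch that fills a 16-bit PCM buffer from a single decaying sine oscillator; renderSine() is a made-up helper, and a real synth would mix several oscillators through filters and envelopes:

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Render `seconds` of a decaying sine tone into a 16-bit mono PCM buffer,
    // the kind of raw stream you would hand to the platform's audio output.
    std::vector<int16_t> renderSine(double freqHz, double seconds,
                                    int sampleRate = 44100) {
        const double kPi = 3.14159265358979323846;
        std::vector<int16_t> pcm(static_cast<size_t>(seconds * sampleRate));
        for (size_t i = 0; i < pcm.size(); ++i) {
            double t = static_cast<double>(i) / sampleRate;
            double env = std::exp(-3.0 * t);  // simple exponential decay envelope
            double s = std::sin(2.0 * kPi * freqHz * t);
            pcm[i] = static_cast<int16_t>(32767.0 * 0.8 * env * s);
        }
        return pcm;
    }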
Here is one person's experience:
http://jazarimusic.com/2011/06/audio-on-android-a-developers-perspective/
Interesting read.
Two projects that might be worth looking at are JASS (Java Audio Synthesis System) and PureData. PureData is quite interesting, though probably the harder path.
MIDI support on Android sucks. (So does audio support in general, but that's another story.) There's an interesting blog post here that discusses the (lack of) MIDI capabilities on Android. Here's what he did to work around some of the limitations:
Personally, I solved the dynamic MIDI generation issue as follows: programmatically generate a MIDI file, write it to device storage, initiate a MediaPlayer with the file and let it play. This is fast enough if you just need to play a dynamic MIDI sound. I doubt it's useful for creating user-controlled MIDI stuff like sequencers, but for other cases it's great.
Android unfortunately took out MIDI support in the official Java SDK.
That is, you cannot play audio streams directly. You must use the provided MediaStream classes.
You will have to use some DSP (digital signal processing) knowledge and the NDK in order to do this.
I would not be surprised if there was a general package (not necessarily for Android) to allow you to do this.
I hope this pointed you in the right direction!