Is there a documentation explaining android Stagefright architecture?
Can I get some pointers on these subjects?
A good explanation of stagefright is provided at http://freepine.blogspot.com/2010/01/overview-of-stagefrighter-player.html.
There is a new playback engine implemented by Google that comes with Android 2.0 (i.e., Stagefright), and it seems quite simple and straightforward compared with the OpenCORE solution.
MediaExtractor is responsible for retrieving track data and the corresponding metadata from the underlying file system or HTTP stream.
Decoding is delegated to OMX: there are currently two OMX plugins, adapting PV's software codecs and the vendor's hardware implementations respectively. There is also a local implementation of software codecs that wraps PV's decoder APIs directly.
AudioPlayer is responsible for rendering audio; it also provides the timebase for timing and A/V synchronization whenever an audio track is present.
Depending on which codec is picked, a local or remote renderer is created for video rendering; the system clock is used as the timebase for video-only playback.
AwesomePlayer works as the engine that coordinates the above modules, and it is finally hooked into the Android media framework through the StagefrightPlayer adapter.
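To make the data flow concrete, here is a rough native-code sketch of how those pieces fit together, loosely modeled on the stagefright command-line utility in AOSP. The headers and method signatures below belong to the legacy (pre-MediaCodec) Stagefright C++ API and vary between Android versions, so treat this as an illustration rather than a copy-paste recipe:

    #include <binder/ProcessState.h>
    #include <media/stagefright/DataSource.h>
    #include <media/stagefright/MediaBuffer.h>
    #include <media/stagefright/MediaExtractor.h>
    #include <media/stagefright/MediaSource.h>
    #include <media/stagefright/MetaData.h>
    #include <media/stagefright/OMXClient.h>
    #include <media/stagefright/OMXCodec.h>

    using namespace android;

    int main(int argc, char **argv) {
        ProcessState::self()->startThreadPool();  // binder threads for the remote OMX node

        // MediaExtractor pulls track data and metadata out of the file/HTTP source.
        sp<DataSource> source = DataSource::CreateFromURI(argv[1]);
        sp<MediaExtractor> extractor = MediaExtractor::Create(source);
        sp<MediaSource> track = extractor->getTrack(0);  // assume track 0 is the one we want

        // OMXCodec picks an OMX component (SW or vendor HW) for the track's MIME type.
        OMXClient client;
        client.connect();
        sp<MediaSource> decoder = OMXCodec::Create(
                client.interface(), track->getFormat(),
                false /* createEncoder */, track);

        // AwesomePlayer normally drives this loop and hands buffers to the audio
        // player or video renderer; here we simply drain the decoder.
        decoder->start();
        MediaBuffer *buffer;
        while (decoder->read(&buffer) == OK) {
            // ... render or inspect the decoded buffer ...
            buffer->release();
        }
        decoder->stop();
        client.disconnect();
        return 0;
    }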
Look at this post.
Also, the Android player is built using the PacketVideo (PV) player, and here are the docs about it (beware of the really slow transfer speed :) ):
PVPlayer SDK Developer's Guide link 1, link 2
PVPlayer return codes link
Starting with Gingerbread, it is the Stagefright framework instead of the PV framework. The link above has good info about the framework. If you have some specific questions, I may be able to help you out.
Thanks, Dolphin
I am working on an Android VoIP application that does not need to work with the PSTN. I am a complete novice in this field, and any little help will be appreciated.
I started by researching how WhatsApp voice calling works and found out that it uses PJSIP, which is an open-source SIP stack library (source: What's up with WhatsApp and WebRTC? - webrtcHacks). I also found that codecs are used in VoIP to compress and then decompress the voice packets.
Knowing that, I am extremely confused about SIP libraries versus codecs. Does an Android VoIP app have to implement a SIP library? Every SIP library supports a few codecs.
Is there any general way to integrate any codec into my Android app, whether it is Opus or Speex or anything like that, independent of the SIP implementation?
Maybe I sound confused, but that is the truth. Even a lot of googling on this specific topic did not help me, and my last stop is this community. Any guidance will be appreciated.
Yes, usually every app implements the codecs on its own. Some codecs are available in the Android SDK, but even in those cases a proper implementation of your own is better.
G.711 (PCMU and PCMA) is very simple and can be implemented within a single Java class (or even a single function if you wish). The other codecs are more complicated, but you can find open-source implementations for almost all of them.
Also note that codecs are also implemented within PJSIP, so if you are using that library then you already have the most popular codecs available.
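To give a sense of how small G.711 really is, here is a minimal mu-law (PCMU) encoder/decoder sketched in C++; the same few lines translate directly into the single Java class mentioned above, and A-law (PCMA) is just as short. In a real VoIP app you would run this over the 8 kHz, 20 ms frames coming from your RTP/SIP stack:

    #include <cstdint>

    // Encode one 16-bit linear PCM sample to an 8-bit G.711 mu-law (PCMU) byte.
    uint8_t linear_to_ulaw(int16_t pcm) {
        const int BIAS = 0x84, CLIP = 32635;
        int sign = (pcm >> 8) & 0x80;          // keep the sign bit
        int magnitude = sign ? -pcm : pcm;
        if (magnitude > CLIP) magnitude = CLIP;
        magnitude += BIAS;

        int exponent = 7;                      // find the segment (exponent)
        for (int mask = 0x4000; (magnitude & mask) == 0 && exponent > 0; mask >>= 1)
            --exponent;
        int mantissa = (magnitude >> (exponent + 3)) & 0x0F;
        return ~(sign | (exponent << 4) | mantissa);   // mu-law bytes are stored inverted
    }

    // Decode one 8-bit mu-law byte back to a 16-bit linear PCM sample.
    int16_t ulaw_to_linear(uint8_t ulaw) {
        ulaw = ~ulaw;
        int sign     = ulaw & 0x80;
        int exponent = (ulaw >> 4) & 0x07;
        int mantissa = ulaw & 0x0F;
        int magnitude = (((mantissa << 3) + 0x84) << exponent) - 0x84;
        return sign ? -magnitude : magnitude;
    }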
I have gone through these links, and a few other links as well:
khronos
OpenMax_Development_Guide
bellagio_openmax_il_open_source_implementation_enables_developers_to_create
but all of them just explain the calling sequence, show block diagrams, etc.; they don't explain how to write and build an OpenMAX component and plug it into Android. Even the link on Android building and porting is complicated and doesn't explain whether you need the whole Android source tree to write and build an OpenMAX plugin, only part of the Android source, or whether you can create one without the Android source at all.
I have a Firefly K3288 board running Android KitKat 4.4, which supports an HEVC hardware decoder, but I want to add an HEVC software decoder.
If anyone knows how to write and build an OpenMAX HEVC video decoder component and plug it into Android, please give me some directions.
For the 1st question of how to develop an OMX component: you will have to write a new component, either from scratch or starting from a template of existing functions. Please do refer to the OMX IL specification, specifically chapter 2.
I would recommend writing a component based on the Bellagio implementation, which can be found here. Please refer to omx_base_video_port.c, as it is essential for your decoder development.
An alternative could be to refer to the implementation from one of the vendors. In the AOSP tree, please refer to the qcom implementation here, which could provide a good reference for starting your development.
Note: the OMX wrapper is mainly concerned with state management, context management, and buffer management. The interaction with your decoder, whether HW or SW, depends on your driver architecture, which you should decide on. Once this driver architecture is finalized, integrating into OMX should be fairly easy.
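For orientation, here is a bare-bones C++ sketch of what an OMX component boils down to: a handle whose function table you populate in the component's init entry point (see chapter 2 of the IL spec). OMX_ComponentInit is the conventional entry point used by Bellagio-style loaders; vendor cores use their own factory/registration glue, and everything here except the standard OMX_* types and the OMX_ComponentInit name is a placeholder:

    #include <OMX_Core.h>
    #include <OMX_Component.h>

    static OMX_ERRORTYPE hevc_SendCommand(OMX_HANDLETYPE hComp, OMX_COMMANDTYPE cmd,
                                          OMX_U32 param, OMX_PTR cmdData) {
        // Handle OMX_CommandStateSet, OMX_CommandFlush, port enable/disable, ...
        return OMX_ErrorNone;
    }

    static OMX_ERRORTYPE hevc_EmptyThisBuffer(OMX_HANDLETYPE hComp,
                                              OMX_BUFFERHEADERTYPE *pBuffer) {
        // Queue the input (HEVC bitstream) buffer for the decoder thread.
        return OMX_ErrorNone;
    }

    static OMX_ERRORTYPE hevc_FillThisBuffer(OMX_HANDLETYPE hComp,
                                             OMX_BUFFERHEADERTYPE *pBuffer) {
        // Queue the output (YUV) buffer to be filled with decoded frames.
        return OMX_ErrorNone;
    }

    // Init entry point: populate the component handle's function table.
    extern "C" OMX_ERRORTYPE OMX_ComponentInit(OMX_HANDLETYPE hComponent) {
        OMX_COMPONENTTYPE *comp = (OMX_COMPONENTTYPE *)hComponent;
        comp->SendCommand     = hevc_SendCommand;
        comp->EmptyThisBuffer = hevc_EmptyThisBuffer;
        comp->FillThisBuffer  = hevc_FillThisBuffer;
        // Also required: GetParameter/SetParameter (port definitions),
        // GetState, UseBuffer/AllocateBuffer/FreeBuffer, SetCallbacks,
        // ComponentRoleEnum (reporting "video_decoder.hevc"), ComponentDeInit, ...
        return OMX_ErrorNone;
    }

The real work (the state machine, port and buffer management, and the decoder thread that hands buffers to your SW or HW codec) sits behind these callbacks, which is exactly the part the Bellagio base classes implement for you.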
For the 2nd question on how to integrate the hevc decoder, please refer to this question which has relevant details.
I am trying to write a native Android application in which I want to play a couple of audio streams. In order to have proper synchronization with other audio streams in the rest of the Android system, I need to manage playback of these streams properly. When programming in Java, the Android framework provides APIs like AudioManager.requestAudioFocus and AudioManager.abandonAudioFocus, and it also provides the appropriate callbacks according to the behavior of other audio streams.
So, is there any way I can call these methods from native code?
It seems one more option is to use the OpenSL ES APIs. Does OpenSL ES provide methods similar to requestAudioFocus / abandonAudioFocus?
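For reference, here is roughly the kind of JNI call I have in mind (an untested sketch: it assumes the application Context jobject and a Java-side OnAudioFocusChangeListener are passed down to the native code, and the numeric constants simply mirror AudioManager.STREAM_MUSIC and AudioManager.AUDIOFOCUS_GAIN):

    #include <jni.h>

    // Request audio focus from native code by calling back into the Java API.
    static jint requestAudioFocusNative(JNIEnv *env, jobject context, jobject listener) {
        // AudioManager am = (AudioManager) context.getSystemService("audio");
        jclass contextCls = env->GetObjectClass(context);
        jmethodID getSystemService = env->GetMethodID(
                contextCls, "getSystemService", "(Ljava/lang/String;)Ljava/lang/Object;");
        jstring audioService = env->NewStringUTF("audio");  // Context.AUDIO_SERVICE
        jobject audioManager = env->CallObjectMethod(context, getSystemService, audioService);

        // int result = am.requestAudioFocus(listener, STREAM_MUSIC, AUDIOFOCUS_GAIN);
        jclass amCls = env->FindClass("android/media/AudioManager");
        jmethodID requestFocus = env->GetMethodID(
                amCls, "requestAudioFocus",
                "(Landroid/media/AudioManager$OnAudioFocusChangeListener;II)I");
        const jint STREAM_MUSIC = 3;
        const jint AUDIOFOCUS_GAIN = 1;
        return env->CallIntMethod(audioManager, requestFocus,
                                  listener, STREAM_MUSIC, AUDIOFOCUS_GAIN);
    }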
Thanks in advance
I have a new task of integrating an HEVC decoder from FFmpeg into Android's Stagefright. To do this, I first need to create an OMX component; my next step is to register my codec in media_codecs.xml and then register the OMX component in the OMX Core.
Is there any guide or set of steps for creating an OMX component for a video decoder? Secondly, this decoder plays only elementary streams (.bin or .h265 files), so there is no container format here.
Can anyone provide some steps or guidelines to follow while creating the OMX component for a video codec? Any sort of pointers will be really helpful to me.
Thanks in advance.
In general, you could follow the steps pointed out in this question for integrating a decoder into the OMX Core.
HEVC is not yet part of the OMX IL specification. Hence, you would have to introduce a new role like video_decoder.hevc for your component while registering in media_codecs.xml. Please do check that your OMX core can support this new role.
If you are trying to play only elementary streams, you can consider modifying the stagefright command line utility to read the elementary stream data and feed the decoder.
Another option is to modify the current recordVideo utility to read frame data and create a decoder instead of an encoder. With these, I presume you should be able to exercise your decoder from the command line.
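To illustrate the second option, the decoder can be fed from a hand-rolled source that serves one access unit per read() call. The class below is a hypothetical C++ sketch against the legacy libstagefright MediaSource interface (start/stop/getFormat/read); the start-code parsing that splits a raw .h265 file into frames is omitted, and the "video/hevc" MIME string assumes your OMX core registered the new video_decoder.hevc role under it:

    #include <cstdio>
    #include <media/stagefright/MediaBuffer.h>
    #include <media/stagefright/MediaErrors.h>
    #include <media/stagefright/MediaSource.h>
    #include <media/stagefright/MetaData.h>

    using namespace android;

    // Hypothetical source that hands HEVC access units to the decoder.
    struct ElementaryStreamSource : public MediaSource {
        ElementaryStreamSource(const char *path, int32_t width, int32_t height)
            : mFile(fopen(path, "rb")), mWidth(width), mHeight(height) {}

        virtual status_t start(MetaData * /*params*/) { return OK; }
        virtual status_t stop() { fclose(mFile); return OK; }

        virtual sp<MetaData> getFormat() {
            sp<MetaData> meta = new MetaData;
            meta->setCString(kKeyMIMEType, "video/hevc");
            meta->setInt32(kKeyWidth, mWidth);
            meta->setInt32(kKeyHeight, mHeight);
            return meta;
        }

        virtual status_t read(MediaBuffer **out, const ReadOptions * /*options*/) {
            // TODO: scan mFile for the next start code, copy one access unit
            // into a MediaBuffer, and return it; return ERROR_END_OF_STREAM at EOF.
            return ERROR_END_OF_STREAM;
        }

    private:
        FILE   *mFile;
        int32_t mWidth, mHeight;
    };

An instance of this can then be handed to the decoder in place of an extractor track, which is essentially how the command-line utilities wire up their own test sources.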
EDIT: If you wish to build a new OMX component, I would recommend referring to the Bellagio Component Writer's Guide, which gives a pretty comprehensive walkthrough of building a new OMX component. Please do ensure that you identify the dependencies between the Bellagio implementation and your core implementation.
Also, you could look at other public-domain OMX implementations, such as these:
http://androidxref.com/4.4.2_r1/xref/hardware/ti/omap4xxx/domx/
http://androidxref.com/4.4.2_r1/xref/hardware/qcom/media/mm-video-v4l2/vidc/
I feel Bellagio could work as a good starting reference if you haven't built an OMX component before. The sources for Bellagio are available on SourceForge.
Can somebody give me some direction on how to synthesize the sounds of instruments (piano, drums, guitar, etc.)?
I am not even sure what to look for.
Thanks
Not sure if this is still the case, but Android seems to have latency issues that prevent it from doing true sound synthesis. NanoStudio, in my opinion, is the best audio app on iOS, and the author so far refuses to make an Android version because the framework isn't there yet.
See these links:
http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=nanostudio+android#hl=en&q=+site:forums.blipinteractive.co.uk+nanostudio+android&bav=on.2,or.r_gc.r_pw.&fp=ee1cd411508a9e34&biw=1194&bih=939
It all depends on what kind of application you're making. If it's going to be an Akai APC-style app firing off sounds, you could be alright. If you're after true synthesis (crafting waveforms so they replicate pianos, guitars, and drums), which is what JASS (mentioned below) does, then Android might not be able to handle it.
If you're looking for a guide on emulating organic instruments via synthesis, check out the books by Fred Welsh: http://www.synthesizer-cookbook.com/
Synthesizing a guitar, piano, or natural drums would be difficult. Triggering samples that you pass through a synthesis engine is less so. If you want to synthesize analog-synth sounds, that's easier.
Here is a project out there you might be able to grab code from:
https://sites.google.com/site/androidsynthesizer/
In the end, if you want to create a full synthesizer or multi-track application, you'll have to render your oscillators, filters, etc. into an audio stream that can be written to the audio output (an AudioTrack rather than MediaPlayer, which is meant for files and URIs, not raw PCM). You don't necessarily need MIDI to do that.
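As a tiny example of the "render your oscillators into a stream" step, here is a self-contained C++ sketch that fills a buffer with one second of a 440 Hz sawtooth shaped by a simple decay envelope; this is the kind of 16-bit PCM block you would then write to the audio output (AudioTrack via JNI, or OpenSL ES in the NDK):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Render one second of a decaying 440 Hz sawtooth as 16-bit mono PCM.
    std::vector<int16_t> renderNote(int sampleRate = 44100,
                                    double freq = 440.0,
                                    double seconds = 1.0) {
        std::vector<int16_t> pcm(static_cast<size_t>(sampleRate * seconds));
        double phase = 0.0;
        const double phaseInc = freq / sampleRate;                 // cycles per sample
        for (size_t i = 0; i < pcm.size(); ++i) {
            double saw = 2.0 * phase - 1.0;                        // naive sawtooth in [-1, 1]
            double env = std::exp(-3.0 * double(i) / pcm.size());  // exponential decay
            pcm[i] = static_cast<int16_t>(saw * env * 32000);
            phase += phaseInc;
            if (phase >= 1.0) phase -= 1.0;
        }
        return pcm;
    }

A real instrument patch layers several oscillators, filters, and envelopes (the kind of recipes the Welsh books above walk through), but the Android plumbing stays the same: generate blocks like this on a background thread and feed them to the audio sink.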
Here is one person's experience:
http://jazarimusic.com/2011/06/audio-on-android-a-developers-perspective/
Interesting read.
Two projects that might be worth looking at are JASS (Java Audio Synthesis System) and PureData. PureData is quite interesting, though probably the harder path.
MIDI support on Android sucks. (So does audio support in general, but that's another story.) There's an interesting blog post here that discusses the (lack of) MIDI capabilities on Android. Here's what he did to work around some of the limitations:
Personally I solved the dynamic midi generation issue as follows: programmatically generate a midi file, write it to the device storage, initiate a mediaplayer with the file and let it play. This is fast enough if you just need to play a dynamic midi sound. I doubt it’s useful for creating user controlled midi stuff like sequencers, but for other cases it’s great.
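As a small illustration of that workaround (a hypothetical helper, not code from the linked post), here is a C++ routine that writes a one-note, format-0 Standard MIDI File; you would save these bytes to app storage and point MediaPlayer at the resulting file:

    #include <cstdint>
    #include <fstream>
    #include <vector>

    static void put16(std::vector<uint8_t> &v, uint16_t x) {
        v.push_back(x >> 8); v.push_back(x & 0xFF);
    }
    static void put32(std::vector<uint8_t> &v, uint32_t x) {
        for (int s = 24; s >= 0; s -= 8) v.push_back((x >> s) & 0xFF);
    }

    // Write a format-0 SMF containing a single middle C (note 60), one beat long.
    void writeOneNoteMidi(const char *path) {
        std::vector<uint8_t> trk;
        trk.insert(trk.end(), {0x00, 0x90, 60, 100});     // delta 0: note-on ch0, note 60, vel 100
        trk.insert(trk.end(), {0x60, 0x80, 60, 0});       // delta 96 ticks: note-off
        trk.insert(trk.end(), {0x00, 0xFF, 0x2F, 0x00});  // end-of-track meta event

        std::vector<uint8_t> out;
        out.insert(out.end(), {'M', 'T', 'h', 'd'});
        put32(out, 6);       // header length
        put16(out, 0);       // format 0
        put16(out, 1);       // one track
        put16(out, 96);      // 96 ticks per quarter note
        out.insert(out.end(), {'M', 'T', 'r', 'k'});
        put32(out, trk.size());
        out.insert(out.end(), trk.begin(), trk.end());

        std::ofstream f(path, std::ios::binary);
        f.write(reinterpret_cast<const char *>(out.data()), out.size());
    }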
Android unfortunately took out MIDI support in the official Java SDK.
That is, you cannot play audio streams directly; you must use the provided media classes (MediaPlayer, SoundPool, AudioTrack, and so on).
You will have to use some DSP (digital signal processing) knowledge and the NDK in order to do this.
I would not be surprised if there was a general package (not necessarily for Android) to allow you to do this.
I hope this pointed you in the right direction!