I'm trying to set the video bitrate in ExoPlayer. I had already set it like this:
trackSelector = new DefaultTrackSelector(factory);
DefaultTrackSelector.Parameters parameters = trackSelector.getParameters();
parameters.withMaxVideoBitrate(maxBitrate);
parameters.withExceedRendererCapabilitiesIfNecessary(false);
parameters.withExceedVideoConstraintsIfNecessary(false);
trackSelector.setParameters(parameters);
but it doesn't work. Everywhere I've found something about this, people were using HlsChunkSource, which is private in ExoPlayer 2.6. Can anyone help me, please?
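One thing worth noting about the snippet above: in the 2.x API the with* methods on DefaultTrackSelector.Parameters return modified copies rather than mutating the instance, so unless the returned objects are chained into setParameters() the constraints are silently dropped. A minimal sketch of that, assuming the same with* methods exist in your version:
trackSelector.setParameters(
        trackSelector.getParameters()
                .withMaxVideoBitrate(maxBitrate)
                .withExceedVideoConstraintsIfNecessary(false)
                .withExceedRendererCapabilitiesIfNecessary(false));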
For those who need to set the HLS quality according to their needs, this is how it can be done. There are several posts on SO about this topic, but none of them is very clear.
As I write this in 2019, I assume everyone is using ExoPlayer 2.
This is the solution that gave us the best results.
DataSource.Factory dataSourceFactory = new DefaultDataSourceFactory(Objects.requireNonNull(getContext()),
Util.getUserAgent(this.getContext(), getResources().getString(R.string.app_name)));
trackSelector = new CustomTrackSelector();
videoSource = new HlsMediaSource.Factory(dataSourceFactory).createMediaSource(mp4VideoUri);
player = ExoPlayerFactory.newSimpleInstance(this.getContext(), trackSelector);
So what you should do is just override the behavior of the custom track selector by overriding its selectVideoTrack method:
public class CustomTrackSelector extends DefaultTrackSelector
{
public CustomTrackSelector()
{
super();
}
protected @Nullable
TrackSelection selectVideoTrack(
TrackGroupArray groups,
int[][] formatSupports,
int mixedMimeTypeAdaptationSupports,
Parameters params,
@Nullable TrackSelection.Factory adaptiveTrackSelectionFactory)
throws ExoPlaybackException
{
AdaptiveTrackSelection adaptiveTrackSelection = null;
if (groups.length > 0)
{
for (int groupIndex = 0; groupIndex < groups.length; groupIndex++)
{
TrackGroup trackGroup = groups.get(groupIndex);
int[] tracks = new int[trackGroup.length];
//creation of indexes array
for (int i = 0; i < trackGroup.length; i++)
{
tracks[i] = i;
}
adaptiveTrackSelection = new AdaptiveTrackSelection(
trackGroup,
tracks,
new DefaultBandwidthMeter(),
AdaptiveTrackSelection.DEFAULT_MIN_DURATION_FOR_QUALITY_INCREASE_MS,
AdaptiveTrackSelection.DEFAULT_MAX_DURATION_FOR_QUALITY_DECREASE_MS,
AdaptiveTrackSelection.DEFAULT_MIN_DURATION_TO_RETAIN_AFTER_DISCARD_MS,
AdaptiveTrackSelection.DEFAULT_BANDWIDTH_FRACTION,
AdaptiveTrackSelection.DEFAULT_BUFFERED_FRACTION_TO_LIVE_EDGE_FOR_QUALITY_INCREASE,
AdaptiveTrackSelection.DEFAULT_MIN_TIME_BETWEEN_BUFFER_REEVALUTATION_MS,
Clock.DEFAULT);
for (int i = 0; i < tracks.length; i++)
{
Format format = trackGroup.getFormat(tracks[i]);
if (format.width < MIN_WIDTH)
{
Logger.log(this, "Video track blacklisted with width = " + format.width);
adaptiveTrackSelection.blacklist(tracks[i], BLACKLIST_DURATION);
} else
{
Logger.log(this, "Video track NOT blacklisted with width = " + format.width);
}
}
}
}
return adaptiveTrackSelection;
}
}
The above method just blacklists the tracks that you don't want to be selected, allowing the player to choose only among those that are not blacklisted.
We blacklisted tracks according to the width parameter, but obviously you can filter them by bitrate instead.
With this behavior the player will start with a track you allow it to use, and after a period of time (BLACKLIST_DURATION) it can switch back to using all the tracks if needed.
If you want to exclude a track permanently, just use Integer.MAX_VALUE as the blacklist duration.
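As an example of the bitrate-based filter mentioned above, the check inside the same loop could look roughly like this (MAX_BITRATE is a hypothetical constant in bits per second, analogous to MIN_WIDTH):
Format format = trackGroup.getFormat(tracks[i]);
if (format.bitrate != Format.NO_VALUE && format.bitrate > MAX_BITRATE)
{
    // blacklist renditions above the bitrate ceiling instead of below a width floor
    adaptiveTrackSelection.blacklist(tracks[i], BLACKLIST_DURATION);
}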
I hope this will help whoever is searching for this feature.
Currently, I have a server that streams four RTMP MediaSources: one with 720p video, one with 360p video, one with 180p video, and one audio-only. If I want to switch resolutions, I have to stop the ExoPlayer instance, prepare the track I want to switch to, then play.
The code I use to prepare the ExoPlayer instance:
TrackSelection.Factory adaptiveTrackSelectionFactory = new AdaptiveTrackSelection.Factory(bandwidthMeter);
TrackSelector trackSelector = new DefaultTrackSelector(adaptiveTrackSelectionFactory);
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory(bandwidthMeter);
ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
factory = new AVControlExtractorMediaSource.Factory(rtmpDataSourceFactory);
factory.setExtractorsFactory(extractorsFactory);
createSource();
//noinspection deprecation
mPlayer = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector, new DefaultLoadControl(
new DefaultAllocator(true, C.DEFAULT_BUFFER_SEGMENT_SIZE),
1000, // min buffer
2000, // max buffer
1000, // playback
1000, //playback after rebuffer
DefaultLoadControl.DEFAULT_TARGET_BUFFER_BYTES,
true
));
vwExoPlayer.setPlayer(mPlayer);
mPlayer.addAnalyticsListener(mAnalyticsListener);
With createSource() being:
private void createSource() {
factory.setTrackPlaybackFlag(AVControlExtractorMediaSource.PLAYBACK_BOTH_AV);
mMediaSource180 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_180()));
mMediaSource180.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource180"));
mMediaSource360 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_360()));
mMediaSource360.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource360"));
mMediaSource720 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_720()));
mMediaSource720.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource720"));
factory.setTrackPlaybackFlag(AVControlExtractorMediaSource.PLAYBACK_AUDIO_ONLY);
mMediaSourceAudio = factory.createMediaSource(Uri.parse(API.GAME_AUDIO_STREAM_URL()));
mMediaSourceAudio.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSourceAudio"));
}
private void releaseSource() {
mMediaSource180.releaseSource(null);
mMediaSource360.releaseSource(null);
mMediaSource720.releaseSource(null);
mMediaSourceAudio.releaseSource(null);
}
And the code I currently use to switch between these MediaSources is:
private void changeTrack(MediaSource source) {
if (currentMediaSource == source) return;
try {
this.currentMediaSource = source;
mPlayer.stop(true);
mPlayer.prepare(source, true, true);
mPlayer.setPlayWhenReady(true);
if (source == mMediaSourceAudio) {
if (!audioOnly) {
try {
TransitionManager.beginDelayedTransition(rootView);
} catch (Exception ignored) {
}
layAudioOnly.setVisibility(View.VISIBLE);
vwExoPlayer.setVisibility(View.INVISIBLE);
audioOnly = true;
try {
GameQnAFragment fragment = findFragment(GameQnAFragment.class);
if (fragment != null) {
fragment.signAudioOnly();
}
} catch (Exception e) {
Trace.e(e);
}
try {
GamePollingFragment fragment = findFragment(GamePollingFragment.class);
if (fragment != null) {
fragment.signAudioOnly();
}
} catch (Exception e) {
Trace.e(e);
}
}
} else {
if (audioOnly) {
TransitionManager.beginDelayedTransition(rootView);
layAudioOnly.setVisibility(View.GONE);
vwExoPlayer.setVisibility(View.VISIBLE);
audioOnly = false;
}
}
} catch (Exception ignore) {
}
}
I wanted to implement seamless switching between these MediaSources so that I don't need to stop and re-prepare, but it appears that this feature is not supported by ExoPlayer.
In addition, logging each MediaSource structure with the following code:
MappingTrackSelector.MappedTrackInfo info = ((DefaultTrackSelector)trackSelector).getCurrentMappedTrackInfo();
if(info != null) {
for (int i = 0; i < info.getRendererCount(); i++) {
TrackGroupArray trackGroups = info.getTrackGroups(i);
if (trackGroups.length != 0) {
for(int j = 0; j < trackGroups.length; j++) {
TrackGroup tg = trackGroups.get(j);
for(int k = 0; k < tg.length; k++) {
Log.i("track_info_"+i+"-"+j+"-"+k, tg.getFormat(k)+"");
}
}
}
}
}
Just nets me one video format and one audio format for each source.
My current workaround is to prepare another ExoPlayer instance in the background, replace the currently running instance with it once preparation is complete, and release the old instance. That reduces the lag between the MediaSources somewhat, but it doesn't come close to achieving seamless resolution changes like YouTube does.
Should I implement my own TrackSelector and jam-pack all the 4 sources into that, should I implement another MediaSource that handles all 4 sources, or should I just tell the colleague who maintains the streams to switch to just one RTMP MediaSource with a sort of manifest that lists all the resolutions available for the AdaptiveTrackSelection to switch between them?
Adaptive bit rate streaming is designed to allow easy switching between different bit rate streams, but it requires the streams to be segmented and the player to download the video segment by segment.
This way the player can decide which bit rate to choose for the next segment depending on the current network conditions (and the device display size and type), and it can move seamlessly from one bit rate to another, apart from the visible difference in quality.
See here for some more info: https://stackoverflow.com/a/42365034/334402
All the above relies on a delivery protocol which supports this segmentation and different bit rate streams. The most common ones today are HLS and MPEG-DASH.
The easiest way to support what I think you are looking for would be for your colleague who is supplying the stream to supply it using HLS and/or DASH.
Note that at the moment both HLS and DASH are required, as Apple devices require HLS while other devices tend to default to DASH. Traditionally HLS used TS as the container for the video in the segments and DASH used fragmented MP4, but there is now a move for both to use CMAF, which is essentially fragmented MP4.
So in theory a single set of bit rate videos can be used for both HLS and DASH now. In practice this will depend on whether your content is encrypted, as Apple used one encryption mode for HLS and everyone else used another in the past. This is also changing, but it will take time before all devices support the new approach in which they can all use the same encryption mode, so if your streams are encrypted this is an added complication at the moment.
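To make that concrete on the player side: once the stream is republished as a single HLS (or DASH) URL, the four separate MediaSources collapse into one adaptive source and ExoPlayer's AdaptiveTrackSelection does the switching for you. A rough sketch using the same ExoPlayer 2.x APIs as the question (the manifest URL is hypothetical; mActivity and bandwidthMeter are the objects from the question's snippet):
DataSource.Factory dataSourceFactory =
        new DefaultDataSourceFactory(mActivity, Util.getUserAgent(mActivity, "app"));
DefaultTrackSelector trackSelector =
        new DefaultTrackSelector(new AdaptiveTrackSelection.Factory(bandwidthMeter));
SimpleExoPlayer player = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector);
MediaSource hlsSource = new HlsMediaSource.Factory(dataSourceFactory)
        .createMediaSource(Uri.parse("https://example.com/game/master.m3u8"));
player.prepare(hlsSource);
player.setPlayWhenReady(true);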
We have a native Android app that uses WebRTC, and we need to find out what video codecs are supported by the host device. (VP8 is always supported but H.264 is subject to the device having a compatible chipset.)
The idea is to create an offer and get the supported video codecs from the SDP. We can do this in a web app as follows:
const pc = new RTCPeerConnection();
if (pc.addTransceiver) {
pc.addTransceiver('video');
pc.addTransceiver('audio');
}
pc.createOffer(...);
Is there a way to do something similar on Android? It's important that we don't need to request camera access to create the offer.
Create a VideoEncoderFactory object and call getSupportedCodecs(). This will return a list of codecs that can be used. Be sure to create the PeerConnectionFactory first.
PeerConnectionFactory.InitializationOptions initializationOptions =
PeerConnectionFactory.InitializationOptions.builder(this)
.setEnableVideoHwAcceleration(true)
.createInitializationOptions();
PeerConnectionFactory.initialize(initializationOptions);
EglBase eglBase = EglBase.create(); // the hardware encoder factory needs an EGL context
VideoEncoderFactory videoEncoderFactory =
        new DefaultVideoEncoderFactory(eglBase.getEglBaseContext(), true, true);
for (int i = 0; i < videoEncoderFactory.getSupportedCodecs().length; i++) {
Log.d("Codecs", "Supported codecs: " + videoEncoderFactory.getSupportedCodecs()[i].name);
}
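If the goal is specifically to detect H.264 support (as in the question), the returned array can be scanned by name; getSupportedCodecs() returns VideoCodecInfo objects whose name field holds values such as "VP8" or "H264". A small sketch:
boolean h264Supported = false;
for (VideoCodecInfo codec : videoEncoderFactory.getSupportedCodecs()) {
    if ("H264".equalsIgnoreCase(codec.name)) {
        h264Supported = true;
        break;
    }
}
Log.d("Codecs", "H.264 encoder available: " + h264Supported);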
I think this is what you are looking for:
private static void codecs() {
MediaCodecInfo[] codecInfos = new MediaCodecList(MediaCodecList.ALL_CODECS).getCodecInfos();
for(MediaCodecInfo codecInfo : codecInfos) {
Log.i("Codec", codecInfo.getName());
for(String supportedType : codecInfo.getSupportedTypes()){
Log.i("Codec", supportedType);
}
}
}
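Note that this lists every codec on the device, decoders and audio codecs included. If you only care about video encoders, one way to narrow it down (just a sketch using the same MediaCodecList API) is:
for (MediaCodecInfo codecInfo : new MediaCodecList(MediaCodecList.ALL_CODECS).getCodecInfos()) {
    if (!codecInfo.isEncoder()) {
        continue; // skip decoders
    }
    for (String type : codecInfo.getSupportedTypes()) {
        if (type.startsWith("video/")) {
            Log.i("Codec", codecInfo.getName() + " encodes " + type);
        }
    }
}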
You can check example on https://developer.android.com/reference/android/media/MediaCodecInfo.html
I am developing an Android app for TV using the Leanback library. I have an HLS video stream with an srt subtitle from a URI. I am using ExoPlayer version 2.5.4 as used in this example app. I created my MediaSource using:
private MediaSource onCreateMediaSource(Uri uri, Uri subtitleUri) {
String userAgent = Util.getUserAgent(mContext, "ExoPlayerAdapter");
MediaSource videoSource = new HlsMediaSource(uri,
new DefaultDataSourceFactory(mContext, userAgent),
null,
null);
Format subtitleFormat = Format.createTextSampleFormat(
"1", MimeTypes.APPLICATION_SUBRIP, 0, "en");
MediaSource subtitleSource = new SingleSampleMediaSource(
subtitleUri,
new DefaultDataSourceFactory(mContext, userAgent),
subtitleFormat, C.TIME_UNSET);
MergingMediaSource mergedSource =
new MergingMediaSource(videoSource, subtitleSource);
return mergedSource;
}
In my PlaybackTransportControlGlue, I have a PlaybackControlsRow.ClosedCaptioningAction. When this button is clicked, what do I write in the action dispatcher to toggle the closed captions?
I tried this:
@Override
public void onActionClicked(Action action) {
if (action == mClosedCaptioningAction) {
DefaultTrackSelector trackSelector = mAdapter.getTrackSelector();
int rendererIndex = 0;
if (mClosedCaptioningAction.getIndex() == PlaybackControlsRow.ClosedCaptioningAction.INDEX_ON) {
MappingTrackSelector.MappedTrackInfo mappedTrackInfo = trackSelector.getCurrentMappedTrackInfo();
TrackGroupArray textGroups = mappedTrackInfo.getTrackGroups(rendererIndex);
int groupIndex = 0;
trackSelector.setRendererDisabled(rendererIndex, false);
MappingTrackSelector.SelectionOverride override =
new MappingTrackSelector.SelectionOverride(mTrackFactory, groupIndex, 0);
trackSelector.setSelectionOverride(rendererIndex, textGroups, override);
} else {
trackSelector.setRendererDisabled(rendererIndex, true);
trackSelector.clearSelectionOverrides();
}
}
super.onActionClicked(action);
}
I got this error:
E/gralloc: unregister FBM buffer
Okay, I just answered a question that got text tracks working in a simple way here. That works for adaptive files (like HLS), but I would have to modify it to get it working with progressive files (like an .mp4 that you've merged with an .srt file).
It's at least a starting point.
I'd like to circle back around and get it fully working for you, but I think it may be a matter of working with the track index and tweaking the override so that it doesn't use the adaptive factory (from the line below).
TrackSelection.Factory factory = tracks.length == 1
? new FixedTrackSelection.Factory()
: new AdaptiveTrackSelection.Factory(BANDWIDTH_METER);
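In other words, for the text renderer you would build the override with a FixedTrackSelection.Factory rather than the adaptive one. Roughly (the renderer, group, and track indices are placeholders you would look up from getCurrentMappedTrackInfo(), as in the question's own code):
TrackSelection.Factory fixedFactory = new FixedTrackSelection.Factory();
MappingTrackSelector.SelectionOverride override =
        new MappingTrackSelector.SelectionOverride(fixedFactory, groupIndex, trackIndexInGroup);
trackSelector.setSelectionOverride(textRendererIndex, textGroups, override);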
We have it fully working in our code for both HLS and progressive, but our implementation is wrapped in a lot of additional architecture which might make it even harder to understand the core components.
I'm building an Android app using Xamarin. The requirement of the app is to capture video from the camera and encode it to send it across to a server.
Initially, I was using an encoder library on the server side to encode the recorded video, but it was proving to be extremely unreliable and inefficient, especially for large video files. I have posted my issues in another thread here
I then decided to encode the video on the client side and then send it to the server. I've found encoding to be a bit complicated, and there isn't much information available on how it can be done, so I went with the only approach I knew: using FFmpeg. I've found some solutions. There's a project on GitHub that demonstrates how FFmpeg is used inside a Xamarin Android project. However, running the solution doesn't give any output. The project has a binary FFmpeg file which is installed to the phone directory using the code below:
_ffmpegBin = InstallBinary(XamarinAndroidFFmpeg.Resource.Raw.ffmpeg, "ffmpeg", false);
Below is the example code for encoding video into a different set of outputs:
_workingDirectory = Android.OS.Environment.ExternalStorageDirectory.AbsolutePath;
var sourceMp4 = "cat1.mp4";
var destinationPathAndFilename = System.IO.Path.Combine (_workingDirectory, "cat1_out.mp4");
var destinationPathAndFilename2 = System.IO.Path.Combine (_workingDirectory, "cat1_out2.mp4");
var destinationPathAndFilename4 = System.IO.Path.Combine (_workingDirectory, "cat1_out4.wav");
if (File.Exists (destinationPathAndFilename))
File.Delete (destinationPathAndFilename);
CreateSampleFile(Resource.Raw.cat1, _workingDirectory, sourceMp4);
var ffmpeg = new FFMpeg (this, _workingDirectory);
var sourceClip = new Clip (System.IO.Path.Combine(_workingDirectory, sourceMp4));
var result = ffmpeg.GetInfo (sourceClip);
var br = System.Environment.NewLine;
// There are callbacks based on Standard Output and Standard Error when ffmpeg binary is running as a process:
var onComplete = new MyCommand ((_) => {
RunOnUiThread(() =>_logView.Append("DONE!" + br + br));
});
var onMessage = new MyCommand ((message) => {
RunOnUiThread(() =>_logView.Append(message + br + br));
});
var callbacks = new FFMpegCallbacks (onComplete, onMessage);
// 1. The idea of this first test is to show that video editing is possible via FFmpeg:
// It results in a 150x150 movie that eventually zooms on a cat ear. This is desaturated, and there's a fade-in.
var filters = new List<VideoFilter> ();
filters.Add (new FadeVideoFilter ("in", 0, 100));
filters.Add(new CropVideoFilter("150","150","0","0"));
filters.Add(new ColorVideoFilter(1.0m, 1.0m, 0.0m, 0.5m, 1.0m, 1.0m, 1.0m, 1.0m));
var outputClip = new Clip (destinationPathAndFilename) { videoFilter = VideoFilter.Build (filters) };
outputClip.H264_CRF = "18"; // It's the quality coefficient for H264 - Default is 28. I think 18 is pretty good.
ffmpeg.ProcessVideo(sourceClip, outputClip, true, new FFMpegCallbacks(onComplete, onMessage));
//2. This is a similar version in command line only:
string[] cmds = new string[] {
"-y",
"-i",
sourceClip.path,
"-strict",
"-2",
"-vf",
"mp=eq2=1:1.68:0.3:1.25:1:0.96:1",
destinationPathAndFilename2,
"-acodec",
"copy",
};
ffmpeg.Execute (cmds, callbacks);
// 3. This lists codecs:
string[] cmds3 = new string[] {
"-codecs",
};
ffmpeg.Execute (cmds3, callbacks);
// 4. This converts to WAV
// Note that the cat movie just has some silent house noise.
ffmpeg.ConvertToWaveAudio(sourceClip, destinationPathAndFilename4, 44100, 2, callbacks, true);
I have tried different commands but no output file is generated. I have tried to use another project found here, but it has the same issue: I don't get any errors, yet no output file is generated. I'm really hoping someone can help me find a way to use FFmpeg in my project, or some other way to compress video before transporting it to the server.
I would really appreciate it if someone could point me in the right direction.
Just figured out how to get the output: add the permission to the AndroidManifest file.
android.permission.WRITE_EXTERNAL_STORAGE
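For reference, that is the standard declaration inside AndroidManifest.xml:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />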
Please read the update on the repository; it says that there is a second package, Xamarin.Android.MP4Transcoder, for Android 6.0 onwards.
Install NuGet https://www.nuget.org/packages/Xamarin.Android.MP4Transcoder/
await Xamarin.MP4Transcoder.Transcoder
.For720pFormat()
.ConvertAsync(inputFile, ouputFile, f => {
onProgress?.Invoke((int)(f * (double)100), 100);
});
return ouputFile;
For previous Android versions
Source code: https://github.com/neurospeech/xamarin-android-ffmpeg
Install-Package Xamarin.Android.FFmpeg
Use this as a template; it lets you log output as well as calculate progress.
You can take a look at the source; it downloads the ffmpeg binary and verifies its SHA-1 hash on first use.
public class VideoConverter
{
public VideoConverter()
{
}
// async Task<File> (System.Threading.Tasks) is required because FFMpegLibrary.Run is awaited below
public async Task<File> ConvertFile(Context context,
    File inputFile,
    Action<string> logger = null,
    Action<int, int> onProgress = null)
{
File ouputFile = new File(inputFile.CanonicalPath + ".mpg");
ouputFile.DeleteOnExit();
List<string> cmd = new List<string>();
cmd.Add("-y");
cmd.Add("-i");
cmd.Add(inputFile.CanonicalPath);
MediaMetadataRetriever m = new MediaMetadataRetriever();
m.SetDataSource(inputFile.CanonicalPath);
string rotate = m.ExtractMetadata(Android.Media.MetadataKey.VideoRotation);
int r = 0;
if (!string.IsNullOrWhiteSpace(rotate)) {
r = int.Parse(rotate);
}
cmd.Add("-b:v");
cmd.Add("1M");
cmd.Add("-b:a");
cmd.Add("128k");
switch (r)
{
case 270:
cmd.Add("-vf scale=-1:480,transpose=cclock");
break;
case 180:
cmd.Add("-vf scale=-1:480,transpose=cclock,transpose=cclock");
break;
case 90:
cmd.Add("-vf scale=480:-1,transpose=clock");
break;
case 0:
cmd.Add("-vf scale=-1:480");
break;
default:
break;
}
cmd.Add("-f");
cmd.Add("mpeg");
cmd.Add(ouputFile.CanonicalPath);
string cmdParams = string.Join(" ", cmd);
int total = 0;
int current = 0;
await FFMpeg.Xamarin.FFMpegLibrary.Run(
context,
cmdParams
, (s) => {
logger?.Invoke(s);
int n = Extract(s, "Duration:", ",");
if (n != -1) {
total = n;
}
n = Extract(s, "time=", " bitrate=");
if (n != -1) {
current = n;
onProgress?.Invoke(current, total);
}
});
return ouputFile;
}
int Extract(String text, String start, String end)
{
int i = text.IndexOf(start);
if (i != -1)
{
text = text.Substring(i + start.Length);
i = text.IndexOf(end);
if (i != -1)
{
text = text.Substring(0, i);
return parseTime(text);
}
}
return -1;
}
public static int parseTime(String time)
{
time = time.Trim();
String[] tokens = time.Split(':');
int hours = int.Parse(tokens[0]);
int minutes = int.Parse(tokens[1]);
float seconds = float.Parse(tokens[2]);
// return the time in centiseconds (1 hour = 360000 cs, 1 minute = 6000 cs)
int s = (int)(seconds * 100);
return hours * 360000 + minutes * 6000 + s;
}
}
Is there a way to get true mic streaming input?
The example code I have at the moment looks to be getting the mic input data, saving it to a Sound object, and playing it right away.
Is there a way to stream it properly?
If not, in my example, is there a way to get the mic input data but mute the audio, since it is causing a feedback loop (despite setLoopBack being set to false)?
Code below:
import flash.display.*;
import flash.events.*;
import flash.media.*;
import flash.media.SoundTransform;
import flash.utils.*;
var _soundBytes:ByteArray = new ByteArray();
var _micBytes:ByteArray;
var son:Sound;
var sc:SoundChannel;
var pow:int = 0;
var myBar:Sprite;
stage.quality = "LOW";
// this code ended up muting the mic input oddly?
//SoundMixer.soundTransform = new SoundTransform(0);
init();
function init()
{
myBar = new Sprite;
micInit();
soundInit();
addEventListener(Event.ENTER_FRAME, drawLines);
}
function micInit()
{
var mic:Microphone = Microphone.getMicrophone();
if(mic != null) {
//mic.setUseEchoSuppression(true);
mic.setLoopBack(false);
mic.setSilenceLevel(0);
mic.rate = 44;
mic.gain = 60;
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);
}
}
function micSampleDataHandler(event:SampleDataEvent):void
{
_micBytes = event.data;
sc = son.play();
}
function soundInit():void {
son = new Sound();
son.addEventListener(SampleDataEvent.SAMPLE_DATA, soundSampleDataHandler);
}
function soundSampleDataHandler(event:SampleDataEvent):void {
for (var i:int = 0; i < 8192 && _micBytes.bytesAvailable > 0; i++) {
var sample:Number = _micBytes.readFloat();
event.data.writeFloat(sample);
event.data.writeFloat(sample);
}
}
function drawLines(e:Event):void{
SoundMixer.computeSpectrum(_soundBytes, true);
myBar.graphics.clear();
myBar.graphics.lineStyle(2,0xabc241);
for (var i=0; i < 256; i++) {
pow = _soundBytes.readFloat()*200;
pow = Math.abs(pow);
myBar.graphics.drawRect(i*2, 0, 2, pow);
addChild(myBar);
}
}
Hope you can help!
To use acoustic echo cancellation, call Microphone.getEnhancedMicrophone() to get a reference to an enhanced Microphone object, and set the Microphone.enhancedOptions property to an instance of the MicrophoneEnhancedOptions class. Here is an article that discusses it all: Article about enhanced microphone options at Adobe.
EDIT: I spoke too soon. I have used the enhanced mic many times before, but I decided to read the article myself to see if there was anything new I could learn from it... and I found this near the end:
AEC is computationally expensive. Currently, only desktop platforms are supported for Flash Player and AIR
Although I just looked at the date... it's from last year, so maybe give it a try; it might be supported now.