How to do hardware decoding of video on Android?

How does an Android tablet do hardware decoding of HD video? The CPU is ARM, and I assume the hardware decoding in the system is done through the Mali module, but I don't know how to use it. Any help is much appreciated.

In Android 4.1 (API 16) or later, use the MediaCodec API to encode or decode video from Java.
Mali is the GPU, but it doesn't contain a VDU (video decoder unit); that is chipset-specific for each SoC. Chipset vendors like TI, Qualcomm, etc. each have their own, and each provides its own OpenMAX (OMX) components to do this; you will see some of those mentioned in the Android adb logs.
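To see which of those OMX components your particular device actually exposes, you can also enumerate them from Java instead of grepping logcat. A minimal sketch (API 16 style, using the older static MediaCodecList calls) that logs every H.264-capable decoder:

import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

public class CodecLister {
    // Logs every decoder that advertises H.264 ("video/avc") support.
    // On most devices the name reveals the vendor OMX component,
    // e.g. OMX.qcom.video.decoder.avc on Qualcomm chipsets.
    public static void logAvcDecoders() {
        for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (info.isEncoder()) {
                continue; // only interested in decoders here
            }
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase("video/avc")) {
                    Log.i("CodecLister", "H.264 decoder: " + info.getName());
                }
            }
        }
    }
}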
Sample code: http://dpsm.wordpress.com/2012/07/28/android-mediacodec-decoded/
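In rough terms, the decode path is: pull encoded samples out of a MediaExtractor, feed them to the codec's input buffers, and let the codec render decoded frames to a Surface. A minimal sketch along those lines (API 16 style buffer access, no error handling; videoFilePath and surface are placeholders you supply):

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;
import java.nio.ByteBuffer;

public class HwDecodeSketch {
    // Decodes the first video track of a file straight to the given Surface.
    // Assumes the file actually contains a video track.
    public static void decodeToSurface(String videoFilePath, Surface surface) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(videoFilePath);

        // Find and select the first video track.
        MediaFormat format = null;
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat f = extractor.getTrackFormat(i);
            if (f.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
                extractor.selectTrack(i);
                format = f;
                break;
            }
        }

        MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
        decoder.configure(format, surface, null, 0); // output goes straight to the Surface
        decoder.start();

        ByteBuffer[] inputBuffers = decoder.getInputBuffers(); // pre-API-21 buffer access
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false;
        boolean outputDone = false;

        while (!outputDone) {
            if (!inputDone) {
                int inIndex = decoder.dequeueInputBuffer(10000);
                if (inIndex >= 0) {
                    int size = extractor.readSampleData(inputBuffers[inIndex], 0);
                    if (size < 0) {
                        decoder.queueInputBuffer(inIndex, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int outIndex = decoder.dequeueOutputBuffer(info, 10000);
            if (outIndex >= 0) {
                decoder.releaseOutputBuffer(outIndex, true); // true = render to the Surface
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    outputDone = true;
                }
            }
            // Negative return values (format/buffers changed, try again later) are ignored here.
        }
        decoder.stop();
        decoder.release();
        extractor.release();
    }
}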

Related

Why does WebRTC only support H264 in Chrome but not in a native application on some devices

If I use the official sample to create an offer SDP in Android Chrome, I can find a=rtpmap:100 H264/90000 in it, which means it supports H264.
But if I build AppRTC (the official Android sample) with the official prebuilt libraries version 1.0.25821, call createOffer, and then receive the SDP in SdpObserver::onCreateSuccess, the SDP does not contain H264.
My test device is an Oppo R15 (MTK Helio P60, Android 8.1).
So why does WebRTC only support H264 in Chrome but not in a native application on some Android devices?
The Chrome build uses OpenH264, which is not used by regular WebRTC. By "regular" I mean that there is a variant with a software H.264 encoder from the Chrome build which you may use, but I wouldn't recommend it.
On Android, WebRTC supports H.264 only if
the device hardware supports it, AND
the WebRTC hardware-encoder glue logic supports that hardware encoder. Currently only QCOM and EXYNOS devices are supported, so on any other device, even one that has an H.264 hardware encoder, the encoder won't be used, won't be added to the codec factory, and won't show up in the SDP generated by the WebRTC sample apps.
At the Java level, you can see this in HardwareVideoEncoderFactory.java, which checks for QCOM and EXYNOS devices in the isHardwareSupportedInCurrentSdkH264 function.
Interestingly, if you are using native code, even the QCOM and EXYNOS hardware encoders are not supported (there is a bug filed on the WebRTC issue tracker). This is because of the tight integration of the hardware-encoding code with the JNI code; definitely not good, modular code.
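For illustration, the gist of that check (a simplified paraphrase, not the exact upstream source) is a codec-name prefix test gated on the SDK level, roughly:

import android.media.MediaCodecInfo;
import android.os.Build;

public class H264HwCheckSketch {
    // Simplified paraphrase of the idea behind
    // HardwareVideoEncoderFactory.isHardwareSupportedInCurrentSdkH264():
    // accept an H.264 encoder only if its name carries the Qualcomm or
    // Exynos OMX prefix and the SDK level is high enough. Anything else,
    // e.g. the MTK encoder on a Helio P60, is rejected, so H.264 never
    // makes it into the offered SDP.
    static boolean isHardwareSupportedH264(MediaCodecInfo info) {
        String name = info.getName();
        return (name.startsWith("OMX.qcom.")
                        && Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT)
                || (name.startsWith("OMX.Exynos.")
                        && Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP);
    }
}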

How to do hardware H.264 video encoding on the Android platform

I'm trying to do hardware H.264 video encoding on the Android platform. I've learned that MediaCodec seems to support hardware video decoding, but does it also support hardware video encoding?
Some search results from Google suggest I should look for different solutions for different chips, depending on the user's Android device. Does this mean I should go to each chip provider's website and search for a separate solution?
Thanks for any advice.
The MediaCodec class also supports video encoding. MediaCodec was explicitly designed for multi-device hardware-accelerated media processing, so that the same code runs on every device (from experience, I can tell you it won't).
Good reading about this topic: http://developer.android.com/reference/android/media/MediaCodec.html
http://bigflake.com/mediacodec/
Remember that MediaCodec's minimum SDK version is 16 (I recommend targeting API 18, e.g. for Surface input and the MediaMuxer class), so if you're targeting devices with API < 16, MediaCodec won't do. If you want to target those devices you'll have to use libstagefright and OpenMAX, which I do not recommend.
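As a starting point, here is a minimal sketch of configuring an H.264 encoder with Surface input (API 18+); the resolution, bitrate, frame rate and keyframe interval below are arbitrary placeholder values:

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

public class HwEncodeSketch {
    public final MediaCodec encoder;
    public final Surface inputSurface; // draw frames into this (camera preview, GL, ...)

    // Configures an H.264 ("video/avc") encoder; on most devices this
    // resolves to the vendor's hardware OMX encoder.
    public HwEncodeSketch(int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface); // Surface input, API 18+
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);    // placeholder: 2 Mbit/s
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);       // placeholder: 30 fps
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);  // keyframe every 5 seconds

        encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        inputSurface = encoder.createInputSurface(); // must be called between configure() and start()
        encoder.start();
        // Drain encoder.dequeueOutputBuffer(...) and feed the encoded buffers
        // to a MediaMuxer (API 18+) to write an .mp4 file.
    }
}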

Android built-in Camera encoder vs FFMPEG --- Speed

How is it that recording 1080p, H.264-encoded video in the Android camera application is real-time fast, but encoding a video of the same size on Android using FFMPEG is slow?
I know FFMPEG is a software-level encoder and it won't use any hardware features.
I know camera applications get buffer data directly from the camera driver.
But where exactly does the difference come from?
Why is the camera application real-time fast?
Does it use the GPU and OpenGL features of the phone to encode the video, which makes it so fast?
Both the camera application and FFMPEG run on the same phone, so why does only the camera encode H264 in real time?
I know FFMPEG is a software-level encoder and it won't use any hardware features.
You have basically answered this question for yourself. Many devices have hardware codecs that don't rely on the usual CPU instructions for any encoding. FFmpeg won't take advantage of these. (I believe there are hardware optimizations you can build into FFmpeg, though I am not sure of their availability on Android.)
FFMPEG does enable NEON optimisations by default on ARM platforms, so the difference is not very visible at resolutions like QVGA or VGA. But the on-chip hardware for encoding video is much faster at higher resolutions like 1080p, while using minimal ARM cycles. Note that video encoders use different hardware than the OpenGL engines.
ffmpeg may use the optional x264 encoder if it is configured that way; note that this has dire licensing implications. x264 is very good and efficient, and when it is built to use sliced multithreading it can achieve 25 fps for WVGA video on modern devices like the Samsung S4.
ffmpeg can be compiled with libstagefright, which makes use of the built-in hardware decoder, but unfortunately it does not include an encoder.
I also ran into this problem, and it bothered me for a long time. I solved it like this:
AVDictionary *param = NULL;
if (pCodecCtx->codec_id == AV_CODEC_ID_H264) {
    // x264 encoding-speed presets: ultrafast, superfast, veryfast, faster,
    // fast, medium, slow, slower, veryslow, placebo.
    // av_dict_set(&param, "preset", "slow", 0);
    av_dict_set(&param, "preset", "superfast", 0);
    av_dict_set(&param, "tune", "zerolatency", 0);
}
if (avcodec_open2(pCodecCtx, pCodec, &param) < 0) {
    loge("Failed to open encoder!\n");
    return -1;
}
You need to set the preset to superfast or ultrafast.

Hardware accelerated video decode for H.264 in android prior to Jelly Bean

I am working on a video conferencing project. We were using a software codec to encode and decode video frames, which does fine at lower resolutions (up to 320p). We plan to support our application at higher resolutions too, up to 720p, and I have learned that hardware acceleration will do this job fairly well.
Since the hardware codec API, MediaCodec, is only available from Jelly Bean onward, I have used it for encoding and decoding and it works fine. But my application has to be supported from 2.3, so I need hardware-accelerated video decoding of H.264 frames at 720p, 30 fps.
While researching, I came across the idea of using OMXCodec by modifying the stagefright framework. I had read that the hardware decoder for H.264 is available from 2.1 and the encoder from 3.0. I have gone through many articles and questions on this site and confirmed that I can go ahead.
I had read about stage fright architecture here -architecture and here- stagefright how it works
And I read about OMX codec here- use-android-hardware-decoder-with-omxcodec-in-ndk.
I am having trouble getting started and have some confusion about the implementation. I would like some info about it:
To use OMXCodec in my code, should I build my project within the whole Android source tree, or can I do it by adding some files from the AOSP source (if so, which ones)?
What steps should I follow from scratch to achieve this?
Can someone give me a guideline on this?
Thanks...
The best example describing the integration of OMXCodec in the native layer is the stagefright command-line utility, as can be observed here in Gingerbread itself. This example shows how an OMXCodec is created.
Some points to note:
The input to OMXCodec should be modeled as a MediaSource, so you should ensure that your application handles this requirement. An example of creating a MediaSource-based source can be found in the record utility file as DummySource.
The input to the decoder, i.e. the MediaSource, should provide its data through the read method, so your application should provide an individual frame for every read call.
The decoder can be created with a NativeWindow for output buffer allocation. In that case, if you wish to access the buffer from the CPU, you should probably refer to this query for more details.

Select output colorspace in ffmpeg's stagefright codec

I'm new here, so first of all I'd like to say that this is an awesome community :-). Well, let's get to my question.
Currently, I'm working with embedded systems (Freescale i.MX6 and Exynos 4412 based processors). The main idea is to develop a VoIP app with HD video (1080p) for Android. The video is captured via an H264 hardware webcam (Logitech C920). So far, I'm able to use ffmpeg's libstagefright codec; it works really well and fast, but I have the problem that lots of people have: the colorspace conversion.
As I can see in the code,
outFormat = (*s->decoder)->getFormat();
outFormat->findInt32(kKeyColorFormat, &colorFormat);
We can get the output colorspace format, but my question is:
Could I define a different output colorspace format? How?
If I could do this via stagefright (since the vendors provide the hardware acceleration this way), I could avoid the colorspace-conversion time penalty I currently pay when doing this task via OpenGL.
Thank you!
Regards
