How can the Android camera application record 1080p, H.264-encoded video in real time, while encoding a video of the same size on Android with FFmpeg is slow?
I know FFmpeg is a software-level encoder and it won't use any hardware features.
I know camera applications get buffer data directly from the camera driver.
But where exactly does the difference come from?
Why is the camera application real-time fast?
Does it use the GPU and OpenGL features of the phone to encode the video so fast?
Both the camera application and FFmpeg run on the same phone, yet only the camera encodes H.264 in real time.
I know FFmpeg is a software-level encoder and it won't use any hardware features.
You have basically answered this question yourself. Many devices have hardware codecs that don't rely on the usual CPU instructions for encoding, and FFmpeg won't take advantage of them. (I believe there are hardware optimizations you can build into FFmpeg, though I am not sure of their availability on Android.)
FFmpeg does enable NEON optimisations by default on ARM platforms, so the difference is unlikely to be visible at low resolutions like QVGA or VGA. But the on-chip encoding hardware is much faster at higher resolutions like 1080p, while using minimal ARM CPU time. Note that video encoders use different hardware than the OpenGL engines.
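The software/hardware split is visible from the MediaCodec side: on a device, MediaCodecList enumerates every codec, and the software implementations that ship with Android are conventionally named "OMX.google.*" (OpenMAX) or "c2.android.*" (Codec2), while vendor hardware codecs carry names like "OMX.qcom.*" or "OMX.Exynos.*". As a rough sketch (the name prefixes are only a heuristic; API 29+ adds MediaCodecInfo.isHardwareAccelerated() for a direct answer):

```java
public class CodecNameCheck {
    // Heuristic: software implementations bundled with Android are named
    // "OMX.google.*" or "c2.android.*"; anything else ("OMX.qcom.*",
    // "OMX.Exynos.*", "c2.qti.*", ...) is the vendor's hardware codec.
    public static boolean isSoftwareCodec(String name) {
        return name.startsWith("OMX.google.") || name.startsWith("c2.android.");
    }

    public static void main(String[] args) {
        System.out.println(isSoftwareCodec("OMX.qcom.video.encoder.avc")); // hardware -> false
        System.out.println(isSoftwareCodec("OMX.google.h264.encoder"));    // software -> true
    }
}
```

On-device you would apply this to the names returned by new MediaCodecList(MediaCodecList.REGULAR_CODECS).getCodecInfos().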
ffmpeg may use the optional x264 encoder if configured that way; note that this has dire licensing implications. x264 is very good and efficient, and when it is built to use sliced multithreading it can achieve 25 fps for WVGA video on modern devices like the Samsung Galaxy S4.
ffmpeg can be compiled with libstagefright, which makes use of the built-in hardware decoder, but unfortunately it does not include an encoder.
I also ran into this problem, and it bothered me for a long time. I solved it like this:
AVDictionary *param = NULL;
// H.264
if (pCodecCtx->codec_id == AV_CODEC_ID_H264) {
    // av_dict_set(&param, "preset", "slow", 0);
    /*
     * x264 encoding-speed presets, fastest to slowest:
     * ultrafast, superfast, veryfast, faster, fast, medium,
     * slow, slower, veryslow, placebo
     */
    av_dict_set(&param, "preset", "superfast", 0);
    av_dict_set(&param, "tune", "zerolatency", 0);
}
if (avcodec_open2(pCodecCtx, pCodec, &param) < 0) {
    loge("Failed to open encoder!\n");
    return -1;
}
You need to set the preset to superfast or ultrafast.
Related
I am using the OpenTok SDK for video calling on iOS and Android devices with a Node.js server.
It is a group-call scenario with at most 4 people; when we stream for more than 10 minutes, both devices get too hot.
Does anyone have a solution for this?
We can't degrade the video quality.
This is likely because you are using the default video codec, VP8, which is not hardware-accelerated. You can change the codec per publisher to either H.264 or VP8, but there are trade-offs to this approach.
Their lack of H.264 SVC support is disappointing, but that might be okay depending on your use case. If you've read this whole post and still want more guidance, I'd recommend reaching out to their developer support team and/or posting more about your use case here.
Here's some more context from the OpenTok Documentation, but I recommend you read the whole page to understand where you need to make compromises:
The VP8 real-time video codec is a software codec. It can work well at lower bitrates and is a mature video codec in the context of WebRTC. As a software codec it can be instantiated as many times as is needed by the application within the limits of memory and CPU. The VP8 codec supports the OpenTok Scalable Video feature, which means it works well in large sessions with supported browsers and devices.
The H.264 real-time video codec is available in both hardware and software forms depending on the device. It is a relatively new codec in the context of WebRTC although it has a long history for streaming movies and video clips over the internet. Hardware codec support means that the core CPU of the device doesn’t have to work as hard to process the video, resulting in reduced CPU load. The number of hardware instances is device-dependent with iOS having the best support. Given that H.264 is a new codec for WebRTC and each device may have a different implementation, the quality can vary. As such, H.264 may not perform as well at lower bit-rates when compared to VP8. H.264 is not well suited to large sessions since it does not support the OpenTok Scalable Video feature.
I'm developing an app for applying effects to the camera image in real time. Currently I'm using the MediaMuxer class in combination with MediaCodec. Those classes were introduced in Android 4.3.
Now I want to redesign my app and make it compatible with more devices. The only thing I found on the internet was a combination of FFmpeg and OpenCV, but I read that the frame rate is not very good at high resolutions. Is there any way to encode video in real time while capturing the camera image, without using MediaMuxer and MediaCodec?
PS: I'm using GLSurfaceView for OpenGL fragment-shader effects, so this is a must-have.
Real-time encoding of large frames at a moderate frame rate is not going to happen with software codecs.
MediaCodec was introduced in 4.1, so you can still take advantage of hardware-accelerated compression as long as you can deal with its various quirks. You'd still need an alternative to MediaMuxer if you want a .mp4 file at the end.
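One workaround for the missing MediaMuxer is possible because MediaCodec's H.264 output is an Annex-B elementary stream: NAL units separated by 00 00 01 (often 00 00 00 01) start codes. You can write the raw encoder buffers to a .h264 file and mux to .mp4 afterwards (for example with ffmpeg). A hypothetical helper that finds the NAL unit boundaries in such a buffer:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: locate Annex-B start codes (00 00 01, optionally
// preceded by an extra 00) in an H.264 elementary stream, so the raw
// MediaCodec output can be split into NAL units before writing it out.
public class AnnexB {
    public static List<Integer> nalOffsets(byte[] s) {
        List<Integer> offsets = new ArrayList<>();
        for (int i = 0; i + 2 < s.length; i++) {
            if (s[i] == 0 && s[i + 1] == 0 && s[i + 2] == 1) {
                offsets.add(i + 3); // first byte of the NAL unit itself
                i += 2;             // skip past this start code
            }
        }
        return offsets;
    }

    public static void main(String[] args) {
        // 00 00 00 01 <SPS byte> 00 00 01 <PPS byte>
        byte[] stream = {0, 0, 0, 1, 0x67, 0, 0, 1, 0x68};
        System.out.println(nalOffsets(stream)); // prints [4, 8]
    }
}
```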
Some commercial game recorders, such as Kamcord and Everyplay, claim to work on Android 4.1+, so it's technically possible, though I don't know whether they use non-public APIs to feed surfaces directly into the video encoder.
In pre-Jellybean Android it only gets harder.
(For anyone interested in recording GL in >= 4.3, see EncodeAndMuxTest or Grafika's "Record GL app".)
I am developing a player on Android using FFmpeg. However, I found that avcodec_decode_video2 is very slow: sometimes it takes 0.1 or even 0.2 seconds to decode one frame of a 1920×1080 video.
How can I improve the speed of avcodec_decode_video2() ?
If your device has necessary hardware+firmware capabilities, you could use ffmpeg with libstagefright support.
Update: here is an easy procedure to decide whether it is worthwhile to switch to libstagefright on your device for a given class of videos. Use ffmpeg on your PC to convert a representative video stream into mp4:
ffmpeg -i your_video -an -vcodec copy test.mp4
and try to open the resulting file with the stock video player on your device. If the video plays with reasonable quality, you can use libstagefright with ffmpeg to improve your player app. If you see "Cannot Play Video", your device's hardware+firmware does not support the video.
That sounds about right. HD video takes a lot of CPU. Some codecs support multithreaded decoding if your device has multiple cores, but that will consume massive amounts of battery and heat up the device. This is why most mobile devices use specialized hardware decoders instead of the CPU. On Android, using the MediaCodec API instead of libavcodec should invoke the hardware decoder.
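If you do stay on the software path, make sure the decoder actually uses those extra cores: with libavcodec you would set thread_count on the codec context before avcodec_open2() (0 lets it auto-detect). A trivial, hypothetical Java-side helper for choosing a count to pass down over JNI, leaving one core free for UI/audio work:

```java
public class DecodeThreads {
    // Leave one core free for the UI/audio threads; never return less than 1.
    public static int decodeThreadCount(int cores) {
        return Math.max(1, cores - 1);
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(decodeThreadCount(cores));
    }
}
```

Even with all cores busy, expect roughly linear scaling at best; it will not match a dedicated hardware decoder on battery life or heat.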
I'm new here, so first of all I'd like to say that this is an awesome community :-). Well, let's start with my question.
Currently I'm working with embedded systems (Freescale i.MX6 and Exynos 4412 based processors). The main idea is to develop a VoIP app with HD video (1080p) for Android. The video is captured via an H.264 hardware webcam (Logitech C920). So far I'm able to use FFmpeg's libstagefright codec; it works really well and fast, but I have the problem that lots of people have: the colorspace conversion.
As I can see in the code,
outFormat = (*s->decoder)->getFormat();
outFormat->findInt32(kKeyColorFormat, &colorFormat);
We can get the output colorspace format, but my question is:
Could I define other output colorspace format? How?
If I could do this via Stagefright (since the vendors provide hardware acceleration that way), I could avoid the colorspace-conversion time penalty I currently pay when performing it via OpenGL.
Thank you!
Regards
How does an Android tablet hardware-decode HD video? The CPU is ARM, and I assume the hardware decoding in the system is done by the Mali module, but I don't know how it works. Any help is much appreciated.
In Android 4.1 or later, use the MediaCodec API to encode or decode video from Java.
Mali is the GPU, but it doesn't contain a VDU (video decoder unit); that is chipset-specific for each SoC. Chipset vendors like TI, QCOM, etc. each have their own, and each provides its own OpenMAX (OMX) components to drive it; you will see some of those mentioned in the Android adb logs.
Sample code: http://dpsm.wordpress.com/2012/07/28/android-mediacodec-decoded/