I know that H.264 support is not a goal of WebRTC's current maintainers. However, while poking around the native code, I noticed some commented-out bits referring to an H.264 RTP packetizer. The platform I'm working on is the OMAP4430, which has hardware-accelerated H.264 SVC encode/decode, so it would be great if I could re-add H.264 support to native WebRTC for my application. (VP8 is extremely slow on my device.) Is the packetizer currently in the project a good starting point? Has anyone done this, or does anyone have recommendations for how to go about adding H.264 support? (I plan on sending the H.264 WebRTC data to Doubango's Media Breaker to provide support for regular WebRTC clients.)
If the above is absolutely not possible or very hard, can anyone at least recommend how I might get better VP8 performance on my device? It's a NEON-based ARM SoC, so I would imagine libvpx should automatically take advantage of that. Is there any way to know for sure?
"H.264 support is not the goal of WebRTC's current maintainers" is not correct at all.
The IETF has not made a decision as to whether VP8 or H.264 or both will be mandatory to implement yet.
Google, which hosts webrtc.org, obviously wants its own VP8 codec in there, so there is nary a mention of H.264 on their site or in their example code... but that doesn't mean that's how this will all end up.
I would visit ietf.org and sign up for the WebRTC mailing list - and ask for some help there. :-)
Related
I am using the OpenTok SDK for video calling on iOS and Android devices with a Node.js server.
It is a group-call scenario with at most 4 people; when we stream for more than 10 minutes, both devices get too hot.
Does anyone have a solution for this?
We can't degrade the video quality.
This is likely because you are using the default video codec, VP8, which is not hardware accelerated. You can change the codec per publisher to either H.264 or VP8, but there are some trade-offs to this approach.
Their lack of H.264 SVC support is disappointing, but might be okay depending on your use case. If you read this whole post and still want more guidance, I'd recommend reaching out to their developer support team, and/or posting more about your use case here.
Here's some more context from the OpenTok Documentation, but I recommend you read the whole page to understand where you need to make compromises:
The VP8 real-time video codec is a software codec. It can work well at lower bitrates and is a mature video codec in the context of WebRTC. As a software codec it can be instantiated as many times as is needed by the application within the limits of memory and CPU. The VP8 codec supports the OpenTok Scalable Video feature, which means it works well in large sessions with supported browsers and devices.
The H.264 real-time video codec is available in both hardware and software forms depending on the device. It is a relatively new codec in the context of WebRTC although it has a long history for streaming movies and video clips over the internet. Hardware codec support means that the core CPU of the device doesn’t have to work as hard to process the video, resulting in reduced CPU load. The number of hardware instances is device-dependent with iOS having the best support. Given that H.264 is a new codec for WebRTC and each device may have a different implementation, the quality can vary. As such, H.264 may not perform as well at lower bit-rates when compared to VP8. H.264 is not well suited to large sessions since it does not support the OpenTok Scalable Video feature.
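Since the hardware/software split described above is what drives the CPU load (and heat), it can be worth checking what a given device actually exposes. Below is a minimal sketch, not OpenTok-specific (class and method names are mine), that walks Android's MediaCodecList for H.264 encoders; the name-prefix test for software codecs is only a common heuristic (MediaCodecInfo.isHardwareAccelerated() on API 29+ is the reliable check):

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;

// Sketch: list H.264 ("video/avc") encoders and guess which are software.
// The "OMX.google."/"c2.android." prefix check is only a heuristic.
public class CodecCheck {
    public static void listAvcEncoders() {
        for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (!info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase("video/avc")) {
                    boolean likelySoftware = info.getName().startsWith("OMX.google.")
                            || info.getName().startsWith("c2.android.");
                    System.out.println(info.getName()
                            + (likelySoftware ? " (software)" : " (likely hardware)"));
                }
            }
        }
    }
}
```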
What is the best (performance-wise) way to get and stream video from an Android device's camera to a PC?
I have seen this question asked here before, and there are a few open-source programs that do just that, but there are multiple approaches and I don't know which one is best!
For example:
Should the Android part be written in C++ or Java (performance/API-wise)?
Which API should I use to get the video from the camera?
What is the best way to stream the video?
I don't intend to support old Android versions (<4.x), so if the best way/API is relatively new, that's fine by me.
I'm not familiar with Android development but I'll try to answer.
I suppose that the actual encoding of the raw image data is probably done on a hardware chip (otherwise software encoding would probably kill your battery), and it looks like the MediaCodec class is exactly what you need. I suppose you want to implement some kind of live streaming service where latency is important. If so, then you should stick to UDP-based transmission methods. Using the RTP protocol or the MPEG-TS container format would be the best choice for this purpose. Of course you can also use TCP-based methods for streaming, like HLS or DASH (both of them use HTTP).
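As a rough illustration of that suggestion, here is a minimal sketch of configuring a hardware H.264 encoder with MediaCodec (the class name and the bitrate/frame-rate values are placeholders; Surface input needs API 18+, on API 16/17 you would queue raw YUV buffers instead):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

// Sketch: configure an H.264 encoder whose output buffers you would then
// packetize into RTP or an MPEG-TS container for transmission.
public class CameraEncoder {
    public static MediaCodec createAvcEncoder(int width, int height) throws java.io.IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface); // Surface input, API 18+
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);        // ~2 Mbps, tune for your link
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2);        // keyframe every 2 seconds
        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // encoder.createInputSurface() can then be fed by the camera preview;
        // the encoded output buffers are what you stream.
        return encoder;
    }
}
```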
You should also take a look at "Table 1. Core media format and codec support" in the Android documentation:
It tells us, for example, that the H.264 AVC encoder supports the MPEG-TS container, and that HLS version 3 is also supported on Android 4.0 and above.
I'm trying to decode a raw H.264 stream on "older" Android versions.
I've tried the MediaPlayer class and it does not seem to support the stream format.
I can see the stream on other Cam Viewer apps from the market, so I figure there must be a way to do it, probably using the NDK.
I've read about OpenMAX and Stagefright, but couldn't find a working example about streaming.
Could someone please point me in the right direction?
Also, I'm reading in several places about "frameworks/av/include/media/stagefright/MediaSource.h" and other sources, but they don't seem to be in either the regular SDK or the NDK.
Where is this source located? Is there another SDK?
Thanks in advance.
Update: I'm receiving the stream over an RTSP connection.
If you wish to perform only a simple experiment to verify certain functionality, you can consider employing the command-line stagefright utility. Do bear in mind, though, that your streaming input may not be supported by it.
If you wish to build a more comprehensive player pipeline, you can look at the handling for RTSP as in here or HTTP as in here. Please note that the NuCachedSource2 implementation is essential for streaming input, as it provides a page-cache implementation which acts as a jitter buffer for the streaming data.
Please note one critical point: the command-line stagefright utility doesn't render to the screen. Hence, if you wish to render, you will have to implement the complete playback pipeline, including rendering.
On a related note, if your input is a streaming input, the standard player implementation does have support for streaming inputs, as can be observed here. Did you face any issues with it?
As fadden has already pointed out, your work is made far simpler with the introduction of MediaCodec in Android 4.x.
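For the API 16+ path, a minimal sketch of a MediaCodec-based decoder for a raw H.264 stream might look like the following (the SPS/PPS byte arrays are placeholders; for an RTSP source they usually come from the SDP):

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import java.nio.ByteBuffer;

// Sketch: configure a "video/avc" decoder that renders straight to a Surface.
public class RawH264Decoder {
    public static MediaCodec createDecoder(Surface surface, byte[] sps, byte[] pps,
                                           int width, int height) throws java.io.IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setByteBuffer("csd-0", ByteBuffer.wrap(sps));  // sequence parameter set
        format.setByteBuffer("csd-1", ByteBuffer.wrap(pps));  // picture parameter set
        MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
        decoder.configure(format, surface, null, 0);
        decoder.start();
        // Feed access units via dequeueInputBuffer()/queueInputBuffer(), then call
        // releaseOutputBuffer(index, true) to render each decoded frame.
        return decoder;
    }
}
```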
You could use third-party libs like android-h264-decoder, which uses JNI to increase performance! Also look at this lib from Intel.
Update: MediaCodec wasn't exposed until API 16 (Android 4.1), so that won't work for a 2.3.3 device.
Stagefright and OpenMAX IL were (and still are) internal components of Android. You can find the code (including MediaSource.h) at https://android.googlesource.com/platform/frameworks/av/+/master. Note that the media framework moved to a separate "tree", frameworks/av, only recently. Before that it was part of frameworks/base, e.g. https://android.googlesource.com/platform/frameworks/base/+/gingerbread/media/
I'm very new to Android. Now I need to work on adding my own codec to Android. I mean, I want to know all the steps I need to take to add my own codec to Android.
I'm very new to this and need some basics, so can someone please explain the steps I need to take in order to add a new codec to Android?
This is virtually impossible to do in a portable way, as all audio and video codecs are compiled at the platform level (due to the fact that most of the time they require hardware-specific acceleration).
If you are only interested in this working on a specific hardware platform and have an unlocked bootloader (so you can boot a custom build of Android), you can compile the full Android platform from scratch using the AOSP as a base.
Depending on which version of Android you're targeting, you're looking at adding code to either OpenCore or Stagefright (the subsystems that Android uses for A/V decoding and parsing). There you can add audio decoders, audio encoders, video encoders, video decoders, and container parsers.
Here is some discussion of adding to Stagefright:
http://freepine.blogspot.com/2010/01/overview-of-stagefrighter-player.html
http://groups.google.com/group/android-porting/msg/5d88e76845a22bbb
However, unless the encoding scheme you wish to support is very simple (what are you wanting to add?), it is likely to be too CPU-intensive for most Android devices to run without being able to offload some of the work to another system (like the radio chipset or the GPU).
In the Android framework, MediaCodec is implemented on top of Stagefright, a media playback engine at the native level that has built-in software codecs for popular media formats. Stagefright also supports integration with custom hardware codecs provided as OpenMAX components. Here is an article which summarizes the steps.
CHECK HERE
I know Android doesn't support MJPEG natively, but are there any JAR files/drivers available that can be added to a project to make it possible?
There is a View available to display MJPEG streams:
Android and MJPEG Topic
Hardly, unless it's your Android platform (i.e. you are the integrator of special-purpose devices running Android).
A good place to start looking on how the Android framework handles video streams is here:
http://opencore.net/files/opencore_framework_capabilities.pdf
If you want to cook up something entirely incompatible, I guess you could do that with the NDK, jam ffmpeg into there, and with a bit of luck (and a nightmare supporting different Android devices) you can have it working.
What is the root problem you are trying to solve? Perhaps we could work something out.
You can of course write or port software to handle any documented video format; the problem is that you won't have the same degree of hardware-optimized code as the built-in video codecs, and won't have as efficient low-level access to the framebuffer. So your code likely won't be able to play back at full speed. Sometimes that might be okay, if you just want to get a sense of something. Also, MJPEG compresses frames individually, so it should be trivial to write something that just skips a lot of frames and only decodes whatever fraction of them it can keep up with.
I think that some people have managed to build ffmpeg or mplayer using the optional features of the CPUs in some phones and get to full frame rate for some videos, but it's tricky and device-specific.
I'm probably stating the obvious here, but MJPEG consists simply of multiple JPEGs. If you can grab the frames by cutting out data, you can probably get that data to be displayed as any other image.
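Building on that observation, here is a minimal sketch (class/method names are mine) that scans a raw MJPEG byte stream for the JPEG start/end-of-image markers and decodes each frame with BitmapFactory; a real MJPEG-over-HTTP stream also carries multipart boundaries and headers, which this deliberately ignores:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: extract one JPEG frame at a time from an MJPEG stream by scanning
// for the SOI (0xFF 0xD8) and EOI (0xFF 0xD9) markers.
public class MjpegFrameReader {
    public static Bitmap readNextFrame(InputStream in) throws IOException {
        ByteArrayOutputStream frame = new ByteArrayOutputStream();
        boolean inJpeg = false;
        int prev = in.read();
        int curr;
        while ((curr = in.read()) != -1) {
            if (!inJpeg && prev == 0xFF && curr == 0xD8) { // start of image
                inJpeg = true;
                frame.write(0xFF);
            }
            if (inJpeg) {
                frame.write(curr);
                if (prev == 0xFF && curr == 0xD9) {        // end of image
                    byte[] jpeg = frame.toByteArray();
                    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
                }
            }
            prev = curr;
        }
        return null; // stream ended without a complete frame
    }
}
```

You could call this in a loop on a background thread and post each Bitmap to an ImageView, skipping frames whenever decoding falls behind.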
I couldn't find any information on when exactly this was implemented, but as of now (testing on Android 8) you can view an MJPEG stream just fine using a WebView.
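For what it's worth, that approach needs nothing more than pointing the WebView at the stream URL; a tiny sketch (the URL is hypothetical):

```java
import android.webkit.WebView;

// Sketch: display an MJPEG stream in a WebView. Only reported to work on
// newer Android versions (the answer above tested Android 8).
public class MjpegWebViewHelper {
    public static void showStream(WebView webView, String streamUrl) {
        webView.getSettings().setLoadWithOverviewMode(true);
        webView.getSettings().setUseWideViewPort(true);
        webView.loadUrl(streamUrl); // e.g. "http://192.168.1.10/video.mjpg" (placeholder)
    }
}
```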