I am using the OpenTok SDK for video calling on iOS and Android devices, with a Node.js server.
It is a group-call scenario with a maximum of 4 people. When we stream for more than 10 minutes, both devices get too hot.
Does anyone have a solution for this?
We can't degrade the video quality.
This is likely because you are using the default video codec, VP8, which is not hardware accelerated. You can change the codec per publisher to either H.264 or VP8, but there are some trade-offs to this approach.
Their lack of H.264 SVC support is disappointing, but might be okay depending on your use case. If you read this whole post and still want more guidance, I'd recommend reaching out to their developer support team, and/or posting more about your use case here.
Here's some more context from the OpenTok Documentation, but I recommend you read the whole page to understand where you need to make compromises:
The VP8 real-time video codec is a software codec. It can work well at lower bitrates and is a mature video codec in the context of WebRTC. As a software codec it can be instantiated as many times as is needed by the application within the limits of memory and CPU. The VP8 codec supports the OpenTok Scalable Video feature, which means it works well in large sessions with supported browsers and devices.
The H.264 real-time video codec is available in both hardware and software forms depending on the device. It is a relatively new codec in the context of WebRTC although it has a long history for streaming movies and video clips over the internet. Hardware codec support means that the core CPU of the device doesn’t have to work as hard to process the video, resulting in reduced CPU load. The number of hardware instances is device-dependent with iOS having the best support. Given that H.264 is a new codec for WebRTC and each device may have a different implementation, the quality can vary. As such, H.264 may not perform as well at lower bit-rates when compared to VP8. H.264 is not well suited to large sessions since it does not support the OpenTok Scalable Video feature.
Related
We are developing an application which involves audio/video decoding and encoding. In some cases we need multiple decoders to be open at the same time.
Problem:
Some devices don't support multiple (2 or more) decoders being open at the same time. This happens mostly with high-resolution (1080p) videos.
Assumptions
We think this is happening because of hardware limitations of the devices.
We need to know whether there is any API that tells us media codec capabilities on Android, such as the maximum number of codec instances that can be open at the same time on a given device. We are fine even if the API is at the native level.
How about MediaCodecInfo.CodecCapabilities.getMaxSupportedInstances()?
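A rough sketch of querying it for an AVC decoder (requires API 23+, where this method was added; the MIME type and log tag are illustrative):

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

public final class CodecQuery {
    // Logs how many concurrent instances each AVC decoder reports it supports.
    // The value is a lower bound the device commits to, not a hard cap, and
    // getMaxSupportedInstances() is only available on API 23 (Android 6.0) and up.
    public static void logAvcDecoderLimits() {
        MediaCodecList codecList = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
        for (MediaCodecInfo info : codecList.getCodecInfos()) {
            if (info.isEncoder()) continue; // decoders only
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase("video/avc")) {
                    MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType(type);
                    Log.d("CodecQuery", info.getName()
                            + " max concurrent instances: " + caps.getMaxSupportedInstances());
                }
            }
        }
    }
}
```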
I know well that H.264 support is not the goal of WebRTC's current maintainers. However, while poking around the native code, I noticed some commented-out bits referring to an H.264 RTP packetizer. The environment I'm working on is the OMAP4430, which has hardware-accelerated support for H.264 SVC encode/decode, so it'd be great if I could re-add H.264 support to native WebRTC for my application. (VP8 is extremely slow on my device.) Is the packetizer currently in the project a good starting point? Has anyone done this, or have recommendations for how to go about adding H.264 support? (I plan on sending the H.264 WebRTC data to Doubango's Media Breaker to provide support for regular WebRTC clients.)
If the above is absolutely not possible or very hard, can anyone at least recommend how I might get better VP8 performance on my device? It's a NEON-based ARM SoC, so I would imagine libvpx should automatically take advantage of that. Is there any way to know for sure?
"H.264 support is not the goal of WebRTC's current maintainers" is not correct at all.
The IETF has not yet decided whether VP8, H.264, or both will be mandatory to implement.
Google, who hosts webrtc.org, obviously wants their own VP8 codec in there, so there is nary a mention of H.264 on their site or in their example code... but that doesn't mean that's how this will all end up.
I would visit ietf.org and sign up for the WebRTC email list - and ask for some help there. :-)
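As for your libvpx/NEON question: libvpx's NEON paths are enabled at build time (optionally with runtime CPU detection), so one crude device-side check is to confirm the hardware advertises NEON at all. A quick sketch reading /proc/cpuinfo (this verifies the SoC, not how your libvpx was built):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public final class NeonCheck {
    // Crude device-side check: does the CPU advertise NEON in /proc/cpuinfo?
    // This only confirms the hardware; whether libvpx actually uses NEON
    // depends on how it was built (e.g. runtime CPU detection enabled).
    public static boolean cpuAdvertisesNeon() {
        try (BufferedReader reader = new BufferedReader(new FileReader("/proc/cpuinfo"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.startsWith("Features") && line.contains("neon")) {
                    return true;
                }
            }
        } catch (IOException e) {
            // Unreadable cpuinfo: treat as unknown.
        }
        return false;
    }
}
```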
Working on Android 4.0 and above.
I am in the process of analyzing ways to live-stream my camera video to a Windows PC using RTP, encoding in MPEG-2.
1. Is there a readily available "rtp-server" in Android 4.0+?
2. Is the following true: "The Android platform lacks support for streaming protocol, which makes it difficult to stream live audio / video to Android enabled devices." (extracted from a website)?
3. So far I have analyzed ffserver from the ffmpeg libraries, but the FPS is < 5, which is far too slow. Has anyone explored another solution with a higher FPS?
4. Has anybody tried using Stagefright for the same? That is, capturing raw data from the camera, sending it to the Stagefright framework for encoding, and then streaming it using RTP?
Many Thanks.
The answers to your questions are below. Though the links are related to Android 4.2.2, the same holds for Android 4.0.
1. Yes, there is an RTP transmitter available. You could look at the example in MyTransmitter as a starting point, or you can consider using the standard recorder as in startRTPRecording.
2. You can stream data via RTP from an Android device to an external sink, or you could have a different use case as in Miracast, a.k.a. Wi-Fi Display. However, streaming from one Android device to another through Wi-Fi Direct is still not completely enabled; that statement mainly stems from the Miracast scenario.
3. You can use the standard Android software, which is capable of high-resolution recording and transmission (see the sketch after this list). This depends mainly on the underlying hardware, as the overhead from the software stack is not very high.
4. Yes. This is already answered in Q1 above.
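For item 3, a minimal sketch of the standard recorder path (the RTP hookup shown in startRTPRecording is platform-internal, so this shows only the capture/encode side; the size, frame rate, and output path are illustrative):

```java
import android.media.MediaRecorder;
import java.io.IOException;

public final class RecorderSketch {
    // Minimal high-resolution recording via the standard recorder; the H.264
    // encoding is done by the device's hardware codec, which keeps the
    // software-stack overhead low.
    public static MediaRecorder startRecording() throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setVideoSize(1920, 1080);
        recorder.setVideoFrameRate(30);
        recorder.setOutputFile("/sdcard/capture.mp4"); // placeholder path
        recorder.prepare();
        recorder.start();
        return recorder; // caller stops and releases when done
    }
}
```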
I am developing an Android application that needs to send short (under 60 seconds) voice messages to a server.
File size is very important because we don't want to eat up data plans. Sound quality matters to the point that the message is recognizable, but it can be at a significantly lower bitrate/quality than music files.
Which of the standard Android audio encoders (http://developer.android.com/reference/android/media/MediaRecorder.AudioEncoder.html) and file formats (http://developer.android.com/reference/android/media/MediaRecorder.OutputFormat.html) are likely to be best for this application?
Any hints on good starting places for bit rates, etc. would be welcome as well.
We need to ultimately be able to play them on Windows and iOS, but it's okay if that takes some back-end conversion. There doesn't seem to be an efficient cross-platform format/encoder so that's where we'll put in the work.
AMR is aimed precisely at speech compression, and is the codec most commonly used for normal circuit-switched voice calls. The narrow-band variant (AMR-NB, 8 kHz sample rate) is still the most widely used and should be supported on pretty much any mobile phone you can find. The wide-band variant (AMR-WB, 16 kHz sample rate) offers better quality and is preferred if the target device supports it and you can spare the bandwidth.
Typical bitrates for AMR range from around 6 to 14 kbit/s.
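As a concrete starting point, here's a minimal sketch using MediaRecorder with AMR-NB in a .3gp container (the bitrate is illustrative; error handling omitted):

```java
import android.media.MediaRecorder;
import java.io.IOException;

public final class VoiceMessageRecorder {
    // Records a short voice message as AMR-NB inside a .3gp container.
    public static MediaRecorder start(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setAudioSamplingRate(8000);     // AMR-NB is defined at 8 kHz
        recorder.setAudioEncodingBitRate(12200); // 12.2 kbit/s, the top AMR-NB mode
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        recorder.start();
        return recorder; // caller stops and releases when the message is done
    }
}
```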
I'm not sure if there are any media players for Windows that handle .3GP files with AMR audio directly (VLC might). There are converters that can be used, though.
HE-AAC (v1) could also be used for speech encoding; however, this page suggests that encoding support on Android is limited to Android 4.1 and above. Suitable rates might be 16 kHz / 64 kbps.
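If you go that route, only the container, encoder, sample rate, and bitrate lines change relative to the AMR-NB sketch above (again a sketch, using the 16 kHz / 64 kbps values suggested here):

```java
// HE-AAC variant (Android 4.1+ for encoding support); only these lines differ
// from the AMR-NB sketch above.
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.HE_AAC);
recorder.setAudioSamplingRate(16000);    // 16 kHz
recorder.setAudioEncodingBitRate(64000); // 64 kbps
```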
I'm very new to Android, and I need to add my own codec to the platform.
Since I'm starting from the basics, can someone please explain the steps I need to take in order to add a new codec to Android?
This is virtually impossible to do in a portable way, as all audio and video codecs are compiled at the platform level (due to the fact that most of the time they require hardware-specific acceleration).
If you are only interested in this working on a specific hardware platform and have an unlocked bootloader (so you can boot a custom build of Android), you can compile the full Android platform from scratch using the AOSP as a base.
Depending on which version of Android you're targeting, you're looking at adding code to either OpenCore or Stagefright (the subsystems that Android uses for A/V decoding and parsing). There you can add audio decoders, audio encoders, video encoders, video decoders, and container parsers.
Here is some discussion of adding to Stagefright:
http://freepine.blogspot.com/2010/01/overview-of-stagefrighter-player.html
http://groups.google.com/group/android-porting/msg/5d88e76845a22bbb
However, unless the encoding scheme you wish to support is very simple (what are you wanting to add?), it is likely to be too CPU-intensive for most Android devices to run without being able to offload some of the work to another system (like the radio chipset or the GPU).
In the Android framework, MediaCodec is implemented on top of Stagefright, a media playback engine at the native level that has built-in software-based codecs for popular media formats. Stagefright also supports integration with custom hardware codecs provided as OpenMAX components. Here is an article which summarizes the steps.
CHECK HERE
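For context on where a registered codec surfaces on the application side: the public MediaCodec API resolves a MIME type to whatever component the platform's codec list maps it to, so a successfully integrated codec becomes reachable like any built-in one. A sketch (the "video/avc" MIME type and dimensions are illustrative):

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import java.io.IOException;

public final class DecoderSketch {
    // Apps reach any registered codec through the MediaCodec API by MIME type;
    // the framework resolves it to the matching Stagefright/OMX component
    // (hardware or software).
    public static void runDecoder() throws IOException {
        MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        decoder.configure(format, null /* surface */, null /* crypto */, 0 /* flags */);
        decoder.start();
        // ... queue input buffers / dequeue output buffers ...
        decoder.stop();
        decoder.release();
    }
}
```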