I am creating an application that receives an RTSP stream that is H.264 encoded, and the application has to decode it using MediaCodec so it can finally be displayed. Something similar to the one mentioned in this other post.
I wonder if someone knows whether MediaCodec performance is affected by a change of resolution in the RTSP stream, specifically the latency and ARM consumption.
I know MediaCodec is hardware accelerated, so I would expect fairly stable latency and ARM consumption, but if someone has some performance numbers it would be great to compare against, in case my application is doing something silly that causes extra load on the ARM.
I can't test this myself right now because I am still trying to figure out the correct format for the buffers that MediaCodec expects for H.264 once I get the RTSP payload.
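From what I've gathered so far, the decoder expects plain Annex-B NAL units (each prefixed with a 00 00 00 01 start code), with the SPS and PPS from the SDP's sprop-parameter-sets supplied up front as csd-0/csd-1. Something like this sketch is what I'm aiming for; the class and method names are just mine, and I'm assuming API 21+ and that the RTP de-packetizing (FU-A reassembly etc.) is already done:

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import java.nio.ByteBuffer;

public class H264Decoder {
    private MediaCodec codec;

    // sps/pps must already include the 00 00 00 01 start code.
    public void start(Surface surface, byte[] sps, byte[] pps,
                      int width, int height) throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setByteBuffer("csd-0", ByteBuffer.wrap(sps));
        format.setByteBuffer("csd-1", ByteBuffer.wrap(pps));
        codec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        codec.configure(format, surface, null, 0); // decode straight to the Surface
        codec.start();
    }

    // 'nal' is one reassembled NAL unit with its start code prepended.
    public void feed(byte[] nal, long ptsUs) {
        int inIndex = codec.dequeueInputBuffer(10_000); // 10 ms timeout
        if (inIndex >= 0) {
            ByteBuffer in = codec.getInputBuffer(inIndex);
            in.clear();
            in.put(nal);
            codec.queueInputBuffer(inIndex, 0, nal.length, ptsUs, 0);
        }
        // Drain any ready output; render=true pushes the frame to the Surface.
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = codec.dequeueOutputBuffer(info, 0);
        while (outIndex >= 0) {
            codec.releaseOutputBuffer(outIndex, true);
            outIndex = codec.dequeueOutputBuffer(info, 0);
        }
    }
}
```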
I am using the OpenTok SDK for video calling on iOS and Android devices, with a Node.js server.
It is a group call scenario with a maximum of 4 people; when we stream for more than 10 minutes, both devices get too hot.
Does anyone have a solution for this?
We can't degrade the video quality.
This is likely because you are using the default video codec, VP8, which is not hardware accelerated. You can change the codec per publisher to either H.264 or VP8, but there are some trade-offs to this approach.
Their lack of H.264 SVC support is disappointing, but might be okay depending on your use case. If you read this whole post and still want more guidance, I'd recommend reaching out to their developer support team, and/or post more about your use case here.
Here's some more context from the OpenTok Documentation, but I recommend you read the whole page to understand where you need to make compromises:
The VP8 real-time video codec is a software codec. It can work well at lower bitrates and is a mature video codec in the context of WebRTC. As a software codec it can be instantiated as many times as is needed by the application within the limits of memory and CPU. The VP8 codec supports the OpenTok Scalable Video feature, which means it works well in large sessions with supported browsers and devices.
The H.264 real-time video codec is available in both hardware and software forms depending on the device. It is a relatively new codec in the context of WebRTC although it has a long history for streaming movies and video clips over the internet. Hardware codec support means that the core CPU of the device doesn’t have to work as hard to process the video, resulting in reduced CPU load. The number of hardware instances is device-dependent with iOS having the best support. Given that H.264 is a new codec for WebRTC and each device may have a different implementation, the quality can vary. As such, H.264 may not perform as well at lower bit-rates when compared to VP8. H.264 is not well suited to large sessions since it does not support the OpenTok Scalable Video feature.
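If you want to see which side of that hardware/software trade-off a particular Android device falls on, you can enumerate its H.264 decoders and check whether they are hardware backed. A rough sketch (the CodecSurvey name is mine; isHardwareAccelerated() requires API 29, and on older devices the usual heuristic is that names starting with "OMX.google." are software codecs):

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

public class CodecSurvey {
    // Logs every H.264 decoder on the device and whether it is
    // hardware accelerated.
    public static void listAvcDecoders() {
        MediaCodecList list = new MediaCodecList(MediaCodecList.ALL_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase("video/avc")) {
                    Log.d("CodecSurvey", info.getName()
                            + " hw=" + info.isHardwareAccelerated());
                }
            }
        }
    }
}
```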
I found that MediaPlayer cannot play videos encoded with the H.264 Main Profile, and I tried ExoPlayer and Vitamio, but neither of them solved my problem. Finally, I found that the best solution is converting the videos to the H.264 Baseline Profile. FFmpeg is almost 9 MB, which is too heavy for my project, so I don't want to use it to convert videos to that profile via commands. My friend suggested converting the videos on the server side, but we both know that performs badly. What should I do? What is the best solution to this problem?
Android technically only supports H.264 Baseline, but many newer (usually high-end) devices will play H.264 Main Profile too; the Nexus 4, 5, 6, 7, and 10 all do, for example. So you have a few options: either use H.264 Main and accept that older devices won't play it, or convert on the server side. Doing the conversion on the device is a bad idea: if a device doesn't support H.264 Main, that was probably a performance decision, and converting on the device and then decoding would crush the CPU.
Worth noting: ExoPlayer will use the same device codecs as MediaPlayer, because it is just a wrapper around MediaCodec. Vitamio is a wrapper around FFmpeg, and it might be possible to provide an H.264 Main decoder with a custom FFmpeg build, but again, if support isn't there in the first place, performance was probably the reason.
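Rather than maintaining a list of known-good devices, you can also ask at runtime whether any decoder on the device advertises Main Profile support. A minimal sketch (class and method names are mine, assuming API 21+):

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecInfo.CodecCapabilities;
import android.media.MediaCodecInfo.CodecProfileLevel;
import android.media.MediaCodecList;

public class ProfileCheck {
    // Returns true if any H.264 decoder on the device advertises
    // support for Main Profile.
    public static boolean supportsAvcMain() {
        MediaCodecList list = new MediaCodecList(MediaCodecList.ALL_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if (!type.equalsIgnoreCase("video/avc")) continue;
                CodecCapabilities caps = info.getCapabilitiesForType(type);
                for (CodecProfileLevel pl : caps.profileLevels) {
                    if (pl.profile == CodecProfileLevel.AVCProfileMain) {
                        return true;
                    }
                }
            }
        }
        return false;
    }
}
```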
I need to create a VoIP app and I'm using OpenSL ES. I need to capture and play PCM audio at an 8 kHz sampling rate on all Android devices. But when I capture audio at 8 kHz and play it back at the same time (voice communication), it produces noise and the audio is distorted on some devices, such as the Samsung Galaxy S3 and S4. I know there is a specific preferred sampling rate for each device, and I want to know whether there is any workaround or any way to work with the 8 kHz sampling rate only, without any distortion.
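(For reference, the preferred rate I'm referring to can be queried on API 17+ roughly like this; the class name is just a placeholder:)

```java
import android.content.Context;
import android.media.AudioManager;
import android.util.Log;

public class AudioProps {
    // Query the device's native output sample rate and buffer size
    // (API 17+). Feeding OpenSL a stream at this native rate is what
    // enables the low-latency "fast path" on many devices.
    public static void logNativeAudioParams(Context context) {
        AudioManager am =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        String rate = am.getProperty(
                AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
        String frames = am.getProperty(
                AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
        Log.d("AudioProps", "native rate=" + rate
                + " Hz, frames/buffer=" + frames);
    }
}
```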
I tried increasing the buffer size and many other things, but failed to find an optimal and generic solution. I need the audio data sampled at 8 kHz for my encoder and decoder. My second thought was to resample the audio before it is passed to my encoder or decoder, but that's not the solution I'm looking for.
I found that CSipSimple uses OpenSL and I went through some of its code too, but I still couldn't find a solution; maybe I failed to understand where to concentrate.
I'm stuck here!
Here's how I solved my problem:
I was working on audio streaming for Android using OpenSL ES, and this tutorial helped me a lot. I followed the instructions there and got the thing working. Then I found that audio streaming with this approach doesn't work very well on some devices (mostly Samsung devices). I tried many things, like increasing the buffer size, disabling environmental reverb, etc. I found this answer very useful for improving streaming performance.
Finally, I found that the audio was distorted because of the thread locks I had used to synchronize the buffer switches. Using a lock-free structure is suggested for better audio performance, so I went with another approach, from Victor Lazzarini, which is lock-free audio I/O. His article on lock-free audio I/O with OpenSL ES on Android helped a lot in implementing a lock-free structure along with better audio performance.
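To give an idea of the structure (in Java terms for brevity; the article's actual code is C driven by the OpenSL ES buffer-queue callbacks, and this sketch is mine, not Lazzarini's): a single-producer/single-consumer ring buffer where each thread only advances its own index, so neither side ever takes a lock.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Single-producer/single-consumer ring buffer: the audio callback
// consumes on one thread while another thread produces, with no
// locks on either side. Capacity must be a power of two so that
// index wrap-around is a simple bit mask.
public class SpscRingBuffer {
    private final short[] data;
    private final int mask;
    private final AtomicInteger writePos = new AtomicInteger(0);
    private final AtomicInteger readPos = new AtomicInteger(0);

    public SpscRingBuffer(int capacityPow2) {
        if (Integer.bitCount(capacityPow2) != 1) {
            throw new IllegalArgumentException("capacity must be a power of two");
        }
        data = new short[capacityPow2];
        mask = capacityPow2 - 1;
    }

    // Called by the producer thread only.
    // Returns the number of samples actually enqueued.
    public int write(short[] in, int count) {
        int w = writePos.get();
        int free = data.length - (w - readPos.get());
        int n = Math.min(free, count);
        for (int i = 0; i < n; i++) {
            data[(w + i) & mask] = in[i];
        }
        writePos.lazySet(w + n); // publish only after the samples are stored
        return n;
    }

    // Called by the consumer (audio callback) thread only.
    // Returns the number of samples copied into 'out'.
    public int read(short[] out) {
        int r = readPos.get();
        int available = writePos.get() - r;
        int n = Math.min(available, out.length);
        for (int i = 0; i < n; i++) {
            out[i] = data[(r + i) & mask];
        }
        readPos.lazySet(r + n); // free the slots only after copying out
        return n;
    }
}
```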
I'm trying to decode (and render) live H.264 over RTSP in an Android app.
Assuming there are no network latency issues, the latency should not exceed a few seconds.
The first try was to use MediaPlayer, which was fine, but the internal buffering of the infrastructure causes delays of 10-15 seconds.
Right now the main dilemma is between using the new MediaCodec APIs and FFmpeg.
I know there are many tutorials/samples out there talking about FFmpeg, but I didn't see any comparison.
I think I understand most of the pros/cons of each, but before spending ages making one of them work I would like to be sure.
I haven't seen much info on the MediaCodec API. I do know that FFmpeg gives you considerably better quality and latency than the built-in RTSP functionality in Android.
Has anybody had any luck streaming high-quality video (over 1000 kbps) to Android through RTSP?
We currently have low-quality video streams (around 200 kbps) that work wonderfully over 3G. Now we are trying to serve a high-quality stream for when the user has a faster connection. The high-quality videos play smoothly in VLC, but Android playback seems to drop frames and get blocky, even on a 4-megabit connection.
It seems like the YouTube app uses a plain HTTP download for its high-quality videos. This works well and plays smoothly, but it will not work for streaming live video. Has anybody had luck streaming high-quality video to Android through RTSP?
The videos are encoded with H.264 at 1500 kbps, 24 fps, and 720x480 resolution. In the app, we are using a VideoView to play the videos. We are using Darwin Streaming Server, but we are open to other options if necessary.
Update 6/23/2011
Looking through Darwin some more today. So far, I am just logging the request and session information in a Darwin module.
The original Droid tries to use these settings: 3GPP-Adaptation:...size=131072;target-time=4000. That means it wants 4 seconds of buffer, but 131072 bytes is only about 1.05 megabits (131072 × 8 bits), which holds roughly 0.87 seconds of playback at 1200 kbps. I understand that 1200 kbps is large, but it is necessary for high-quality video (minimal compression at 720x480).
I am trying to force the client to buffer more, but I haven't figured out how to do that yet. I'm just looking through the Darwin Streaming Server source, trying to figure out how they do things. Any Darwin experts out there?
Update 6/24/2011
As it turns out, using plain old HTTP for on-demand videos works well with no loss of quality. When we get to live streaming, we will have to look more into RTSP.
Well, even if the network is able to transmit at that rate, you still need to decode it. What are you using for decoding? You will probably need to use a NEON-accelerated video decoder to get a proper frame rate, and a decent-sized buffer... the graphics processor is only as good as the bus it is on... Also, what are your encoding settings and resolution?
Edit: You are encoding at much too high a bitrate; half of that will do fine. You also need to work out where the issue lies. Is MediaPlayer getting the data but failing to play at a decent frame rate? In that case you have to replace the MediaPlayer code with your own player. If it's a network issue, then the only solution is to lower the bitrate; 600 kbps would be just fine (or 500 kbps video, 128 kbps audio). That's 3x your 200k stream, and on a screen this small the difference is not noticeable.