I am trying to live-stream the desktop to an Android tablet. Here is what I did:
1) Capture the screen using ffmpeg:
ffmpeg -y -f x11grab -s 800x600 -r 20 -i :0 http://x.x.x.x:8090/feed1.ffm
2) Stream using ffserver
Here is the relevant part of the ffserver.conf file:
<Stream test>
Feed feed1.ffm
Format flv
NoAudio
VideoSize 800x600
</Stream>
3) Play the stream on the tablet (Android 4.3) using the URL "http://x.x.x.x:8090/test"
I am able to see the desktop on the tablet, but with a few issues:
1) There is around 6 to 8 seconds of delay in the video.
2) The player shows a warning: "first frame is no keyframe".
When I changed "Format flv" to "Format mpegts" in the ffserver.conf file, the warning went away, but the delay is still there.
Is there a way to reduce the delay?
Am I using the correct format?
I want to achieve at most a 2 second delay for my desktop streaming.
What are you using on the Android device to view the video? The question is quite generic.
Are you just sending raw frames to the receiver? In that case they can be quite heavy and take some time to process. See if you can encode them before streaming them over the network.
Secondly, it also depends on network latency: how good is your network? Try it on a WLAN first, and then between two public IP addresses.
What is the size of the jitter buffer at the receiver? Players typically wait until the buffer is filled to some percentage before playback starts, so a large jitter buffer can take a long time to fill and cause a long initial delay. For your tests, disable (or minimize) the jitter buffer.
The decoding capabilities of your receiver device could also be to blame.
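Concretely, most of the delay usually comes from three places: encoder lookahead and B-frames, a long GOP (the player has to wait for a keyframe, which is also what the "first frame is no keyframe" warning hints at), and player-side buffering. A minimal sketch for the capture side, assuming libx264 is available and bypassing ffserver by pushing MPEG-TS directly (udp://x.x.x.x:1234 is a placeholder for your receiver's address and port):
ffmpeg -f x11grab -video_size 800x600 -framerate 20 -i :0 -c:v libx264 -preset ultrafast -tune zerolatency -g 20 -f mpegts udp://x.x.x.x:1234
-tune zerolatency disables B-frames and lookahead, and -g 20 forces a keyframe every second so a client can join quickly. Even then, the player's own buffer usually dominates: 2 seconds is reachable on a LAN, but only if the player-side buffer is cut down as well.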
Related
How do I configure WebRTC for the lowest latency when streaming live video one way, from an Android phone camera to Firefox on a PC?
The quality can be modest: maybe 15-24 fps at 640x480.
My app needs to capture live video on an Android phone and transport it to a PC as close to real time as possible, to be viewed in Firefox (over a P2P protocol). Think of controlling a robot, or playing a live-streamed video game.
What is the best I can expect? Can it be done with 50 ms latency over a 3G/4G network?
Thank you.
"Can it be done with 50 ms latency over a 3G/4G network?"
Impossible. You can't get a single packet across a mobile network with that little latency, let alone capture video, encode it, mux it with audio, send it, receive it, buffer it, demux it, decode it and present it. 50 ms of latency per frame is not much more than what you get with analog transmission!
You'll find that even many phone cameras have that much lag by the time the system gets the data to work with it.
You realize it can take ~200 ms for a human to even react to a visual stimulus? My TV takes at least 150 ms to display a frame from its lossless HDMI input.
Your project requirements are completely out of touch with reality. You should also take time to understand the tradeoffs that come with pushing digital video to the extreme low-latency end; you will make some real sacrifices by going under 1 s or 500 ms or so. Consider reading my post here: https://stackoverflow.com/a/37475943/362536 , particularly the "why not [magic technology here]" section.
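For a sense of scale, here is a rough back-of-envelope budget for one-way mobile streaming (illustrative numbers, not measurements):
capture + sensor readout: ~33 ms (one frame at 30 fps)
encode (aggressive low-latency settings): ~5-30 ms
3G/4G network, one way: ~40-100 ms
receiver jitter buffer: ~30-100 ms
decode + render: ~15-50 ms
total: roughly 120-300 ms
Even with every stage tuned aggressively, the network leg alone typically exceeds the entire 50 ms target.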
I'm having trouble with my current project, which requires video processing: basically a crop function (the video should be square), trimming (the video shouldn't be longer than 30 seconds) and quality reduction (the bitrate should equal 713K).
I've successfully embedded FFmpeg into the application, and all functions work quite well except for one major detail: the processing, according to my boss, takes too long. For a video of around 52 MB and 36 seconds, it takes 50 seconds to perform all the operations (I trim the video to 30 seconds before any other operation, obviously). The problem is that the parallel project on iOS processes even bigger files in something like 10-15 seconds. I assume it's related to the fact that they're using Apple's QuickTime format, which obviously was developed by Apple, so it's not surprising that it works quite fast.
So, that was the introduction; now my question: is there any way on Android to process any video at any quality (for now we can assume all videos are H.264) in 10-15 seconds (no more than 30 seconds, as my boss said)? Some alternative to FFmpeg that can perform the operations faster? I'm pretty sure there is no way to do such work in such a short time, since I already feel like I've searched through the whole Internet, but I want to make sure. If anyone can point me to a solution faster than FFmpeg, or confirm that no such solution exists, I will be very grateful.
Update
Thanks to Alex Cohn, I've resolved this with MediaCodec. After a while, I got processing down to 20 seconds for a 52 MB video, with cropping to square and lowering the bitrate. For any future Googlers out there, I can suggest taking a look at this repository:
Many stuff about MediaCodec
and more precisely at this file: Extract, edit and encode again, video and audio
If the video has been recorded on the same device, you have a very good chance that MediaCodec and native Android media APIs will be much faster (running both decoder and encoder in HW). Otherwise, you can try to decode the video with MediaCodec, and fall back to FFmpeg software decoder if it fails. Even then, if you can use MediaCodec for compression, this alone may deliver performance that will satisfy your boss.
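A minimal sketch of that fallback idea, using the NDK's AMediaCodec API (available from API 21); ffmpeg_software_decode() is a hypothetical placeholder for whatever FFmpeg wrapper the app already embeds:
#include <string.h>
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaExtractor.h>

extern int ffmpeg_software_decode(const char *path); /* hypothetical FFmpeg JNI fallback */

int decode_with_hw_fallback(const char *path) {
    AMediaExtractor *ex = AMediaExtractor_new();
    if (AMediaExtractor_setDataSource(ex, path) != AMEDIA_OK) {
        AMediaExtractor_delete(ex);
        return -1;
    }
    /* find the first video track and its MIME type */
    AMediaFormat *format = NULL;
    const char *mime = NULL;
    for (size_t i = 0; i < AMediaExtractor_getTrackCount(ex); i++) {
        AMediaFormat *f = AMediaExtractor_getTrackFormat(ex, i);
        if (AMediaFormat_getString(f, AMEDIAFORMAT_KEY_MIME, &mime) && !strncmp(mime, "video/", 6)) {
            format = f;
            AMediaExtractor_selectTrack(ex, i);
            break;
        }
        AMediaFormat_delete(f);
    }
    AMediaCodec *codec = format ? AMediaCodec_createDecoderByType(mime) : NULL;
    if (!codec || AMediaCodec_configure(codec, format, NULL, NULL, 0) != AMEDIA_OK) {
        /* no usable hardware decoder for this stream: fall back to software FFmpeg */
        return ffmpeg_software_decode(path);
    }
    AMediaCodec_start(codec);
    /* ... feed input buffers from the extractor, drain decoded output buffers ... */
    return 0;
}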
There exists a (deprecated) project called libstagefright that builds FFmpeg with support for the hardware codec, and it was designed to work on API 10+.
Don't forget to compare the CPU characteristics of your Android device and the iOS device (if that's a 6S, it has a significant advantage). Consider multithreaded encoding and decoding.
BTW: Note that FFmpeg does not come with an H.264 encoder, and the typical FFmpeg+x264 bundle is GPL, which requires your whole app to be open sourced (or you pay a hefty license fee for x264, but are still forced to comply with the LGPL of FFmpeg).
Note that you can make square video by manipulating the MP4 headers only, without transcoding!
I'm working on a project for a courier service. I have developed an Android application for our employees which works as a camera app; they can take photos and record video with it. When the photo or video file is ready, the app automatically uploads it to the server (if a WiFi connection is active). The server maintains a web site where we can see each employee's daily work, including links to photo and video files. Of course, there is no problem accessing photos through any browser, but there is such a problem with video files.
We don't restrict which Android devices our employees work with, only that the device runs Android 2.3.3 or later (and certainly has a camera). Video is recorded with the CamcorderProfile.QUALITY_LOW setting, so its format is up to the device's decision of what CamcorderProfile.QUALITY_LOW means. There is no problem viewing video files from the various devices on a desktop (Windows): my chief and I have browser plugins, so these files can be opened in a browser. But we want a video viewing solution on the site that does not require any browser plugin or additional software on the client's side.
So the questions are:
What is the best video format to enable online video viewing without downloading the whole file to the client's side (like on YouTube)? Video recordings can be very long (an hour, two hours or even more).
What tool do I need for universal conversion of Android-recorded video files to that format, without manually specifying the input file format (as it can vary quite a bit)? Our server is powered by "SMP Debian 3.2.63-2+deb7u1 x86_64".
I may have missed something. I'm sorry; this is my first project dealing with video, and I lack knowledge.
I'll answer my own question, as it all seems settled already. After further research I came to the decision that we will serve everything on our site with the HTML5 <video> tag, with .mp4 (H.264) as the only source, as we think 99.9% of our users will have no problem with it. I'm a Windows user, and I have no problem watching H.264 video in Google Chrome, Mozilla Firefox and Microsoft Internet Explorer, and most of our clients will be in the same position.
As for video conversion into H.264, ffmpeg will do the job. I have already done some tests, all great. The real problem was managing video rotation: all video is recorded on Android phones in portrait mode, and it turns out that Android does this by recording frames in landscape orientation (the camera's native orientation) while setting a 90-degree rotation attribute in the metadata. A great number of video players, and Firefox, refuse to honor it. The problem can be solved with "transpose" and the rotate metadata in a command like this:
ffmpeg -i 1.avi -vf "transpose=1" -r 25 -b:a 64k -b:v 256k -metadata:s:v:0 rotate=0 r1.mp4
-vf "transpose=1" physically rotates frames 90 degrees
-metadata:s:v:0 rotate=0 clears rotation attribute in meta data because rotation is already done
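One more flag is worth adding, since the question asks for viewing without downloading the whole file: -movflags +faststart moves the MP4 index (the moov atom) to the front of the file, so the browser can begin playback while the file is still downloading:
ffmpeg -i 1.avi -vf "transpose=1" -r 25 -b:a 64k -b:v 256k -metadata:s:v:0 rotate=0 -movflags +faststart r1.mp4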
What's left is tuning the parameters for output quality, writing a batch conversion script, putting it on crontab, and writing the HTML and JS that let people watch the video.
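For the playback page itself, a plain <video> element should be enough; no plugin is needed for H.264 MP4 in the browsers mentioned above:
<video controls preload="metadata" width="640">
  <source src="r1.mp4" type="video/mp4">
</video>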
I'm not an expert in video editing, but I want to understand the logic of WhatsApp's video processing.
First of all, I have noticed that whatever the file is, WhatsApp limits uploaded videos to 16 MB, after which it trims the video so it does not exceed the limit. Is this a convention, or a personal choice?
Secondly, when a video is recorded with the camera it's not compressed by default, so WhatsApp compresses it, using FFmpeg I guess, and it takes no time. (I tried a 1-minute 1920x1080 video of 125 MB; it became 640x360 at 5 MB in no time, and the upload started automatically.) How might they do this, and why the choice of 640x360? It seems very fast for two asynchronous tasks: compression + upload.
When I run the compression command ffmpeg -y -i in.mp4 -codec:v libx264 -crf 23 -preset medium -codec:a libfdk_aac -vbr 4 -vf scale=-1:640,format=yuv420p out.mp4 it takes approximately 1 minute, and the video gets rotated!! :D
Finally, when we download a video from YouTube it's already compressed (I guess), and WhatsApp doesn't even try to compress it. So I think it automatically detects that the video is already compressed. How can we detect this?
Thank you.
Here are possible answers to your questions:
Quest. 1: It's a personal choice. The WhatsApp team is trying to offer the best user experience (UX) they can to users of their app, which is why they keep a 16 MB limit for video files. Imagine how long it would take to upload a file of about 125 MB. Hence, the app compresses the file for a quicker upload and a seamless experience.
Quest. 2: I guess you already answered this question yourself: asynchronous processing. The large video file you feed it is encoded into a compressed format, and the upload is pipelined with the encoding, so chunks are uploaded while later parts of the video are still being compressed, instead of waiting for the whole output file. On top of that, an app like this will typically use the device's hardware encoder rather than a software encoder, which is how a 125 MB clip can be squeezed down to 640x360 almost instantly. After this, the upload takes over.
Quest. 3&Finally: Codecs are standards. If you encode a video file to MPEG4, then try to re-encode it again to MPEG4 even using another program, you will get the same result as far as both programs are using same encoding standards, i.e. they didn't implement a specific algorithm for their programs(this takes years of work). So, when your Whatsapp tries to encode the file, it gives the same result.
Hope I have been able to answer your questions.
MichVeline
Use MediaCodec instead of FFmpeg for better performance. If your use case is only to compress the video, MediaCodec is the best option on Android. It lets you write the code asynchronously and also gives you a lot of freedom to optimize your algorithm.
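A minimal sketch of configuring a hardware H.264 encoder through the NDK's AMediaCodec API (API 21+), aimed at the 640x360 / 713K target from the question; feeding frames and muxing the output are omitted:
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaFormat.h>

AMediaCodec *create_hw_encoder(void) {
    AMediaFormat *fmt = AMediaFormat_new();
    AMediaFormat_setString(fmt, AMEDIAFORMAT_KEY_MIME, "video/avc");
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_WIDTH, 640);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_HEIGHT, 360);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_BIT_RATE, 713000);    /* target from the question */
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_FRAME_RATE, 30);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_I_FRAME_INTERVAL, 1); /* one keyframe per second */
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_COLOR_FORMAT, 21);    /* YUV420SemiPlanar; device dependent */

    AMediaCodec *enc = AMediaCodec_createEncoderByType("video/avc");
    if (!enc || AMediaCodec_configure(enc, fmt, NULL, NULL, AMEDIACODEC_CONFIGURE_FLAG_ENCODE) != AMEDIA_OK) {
        AMediaFormat_delete(fmt);
        return NULL; /* no suitable hardware encoder on this device */
    }
    AMediaCodec_start(enc);
    AMediaFormat_delete(fmt);
    /* caller: decode input, scale frames to 640x360, queue them into the encoder's
       input buffers, and drain compressed output into an AMediaMuxer */
    return enc;
}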
I am writing a video player on Android. So far I have been able to decode the frames with the help of av_read_frame and avcodec_decode_video2, and to display them with SDL 2.0. I have followed dranger's tutorial02.c: http://dranger.com/ffmpeg/
The main loop, following tutorial02.c (fmt_ctx, codec_ctx, frame, pkt, texture, renderer and video_stream_index are set up beforehand as in the tutorial), looks like this:
while (1) {
    if (av_read_frame(fmt_ctx, &pkt) < 0)          // 1. read the next packet
        break;
    if (pkt.stream_index == video_stream_index) {  // 2. video packet?
        avcodec_decode_video2(codec_ctx, frame, &got_frame, &pkt);
        if (got_frame)                             // 2.1 upload the decoded frame
            SDL_UpdateYUVTexture(texture, NULL,
                                 frame->data[0], frame->linesize[0],
                                 frame->data[1], frame->linesize[1],
                                 frame->data[2], frame->linesize[2]);
    }
    av_free_packet(&pkt);
    while (SDL_PollEvent(&event)) { /* 3. handle SDL events */ }
    SDL_RenderClear(renderer);                     // 4. clear the renderer
    SDL_RenderCopy(renderer, texture, NULL, NULL);
    SDL_RenderPresent(renderer);                   // 5. present the renderer
}
I wonder, do I need to take care of video synchronization and DTS/PTS calculation when all I need to do is display the video?
This scenario works well on Samsung devices, but not on other phones.
What would be your advice?
It depends. If you're OK with the fact that your video will (a) play as fast as the device can decode it, and (b) play at a different speed on different devices, and even on the same device depending on other processes, then you don't need to synchronize, and you can just display the frames as soon as they're decoded.
Otherwise you still need to synchronize the video output to the PTS. Since you don't have audio, and therefore won't have an audio clock, your only option is to synchronize the video to the system clock, which actually makes it simpler.
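A minimal sketch of that system-clock option, in the spirit of the dranger tutorials; it assumes frame->pkt_pts is valid (AV_NOPTS_VALUE handling is omitted) and that start_time is initialized to a negative value before the first frame:
#include <SDL2/SDL.h>
#include <libavformat/avformat.h>
#include <libavutil/time.h>

/* call once per decoded frame, just before presenting it */
static void wait_for_frame(AVFrame *frame, AVStream *video_st, double *start_time) {
    double pts = frame->pkt_pts * av_q2d(video_st->time_base); /* frame PTS in seconds */
    double now = av_gettime() / 1000000.0;                     /* wall clock in seconds */
    if (*start_time < 0)
        *start_time = now - pts;        /* anchor the clock at the first frame */
    double due = *start_time + pts;     /* wall-clock time this frame should appear */
    if (due > now)
        SDL_Delay((Uint32)((due - now) * 1000)); /* sleep until the frame is due */
    /* frames that are already late fall through and are shown immediately */
}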