Video compression estimated size - android

I am developing an Android video compression application.
I have a 1280*720 video; its duration is 4 seconds and its size is 7 MB. How can I get an estimated video size for different resolutions?
I want to find the estimated video size for 854*504, 640*360, and other resolutions. Please let me know if there is any formula for calculating the estimated video size.
Thanks :)

You cannot estimate file size from resolution alone. A file with little motion will compress much smaller than a file with a lot of action. If you need to know the size before compression, decide what size you want, divide that number by the video duration, and use the result as your bitrate when encoding.
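For example, a minimal sketch of that calculation in Java, using the 7 MB / 4 s figures from the question above (audio and container overhead are ignored):

// Derive the encoder bitrate from a desired output size.
long targetBytes = 7L * 1024 * 1024;  // desired file size: 7 MB
int durationSeconds = 4;              // clip length from the question

// size ≈ bitrate * duration, so bitrate = size / duration;
// multiply by 8 to convert bytes to bits.
int bitrateBps = (int) (targetBytes * 8 / durationSeconds);
System.out.println("Encode at ~" + bitrateBps + " bit/s");  // ~14.7 Mbit/s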

Related

Compressing a video file with multiple resolution format options in Android

I am using the MediaCodec native API to compress video. My requirement is that I also need to show the user a list of available resolution formats for video compression (assuming that if the user selects any of the resolution formats, the output file should be less than 5 MB). So I need to be able to calculate the compressed video size, based on the resolution option chosen by the user, before the actual compression. Is this possible in Android? I have searched extensively but have been unable to find an answer. Any leads would be very helpful. Thanks!
You can calculate the output size by multiplying the output bitrate (bits per second) by the length (in seconds) of the video; divide by 8 to get bytes. The resolution only affects how good the video looks at that bitrate, not the size itself.
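As a sketch of applying that to the 5 MB budget from the question, assuming MediaCodec's MediaFormat API (the clip duration and the 640x360 resolution are placeholders; audio and container overhead are ignored):

import android.media.MediaCodecInfo;
import android.media.MediaFormat;

// bitrate = size_in_bits / duration_in_seconds, per the answer above.
long budgetBytes = 5L * 1024 * 1024;  // 5 MB cap from the question
int durationSeconds = 4;              // hypothetical clip length
int bitrate = (int) (budgetBytes * 8 / durationSeconds);

// The resolution the user picks affects quality at this bitrate, but the
// output size is governed by bitrate * duration, not by width/height.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 360);
format.setInteger(MediaFormat.KEY_BIT_RATE, bitrate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);  // Surface input assumed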

What video encoder gives best performance on an Android device for given quality?

I'm trying to determine the best encoder or encoding parameters to play video on an Android phone at as high a resolution (quality) as possible. I do not care much about file size; it can be triple the size of a "properly compressed" video as long as it plays smoothly. By default, all encoders are optimized for the best quality in as small a file as possible, at the expense of the computing power needed to decode the video. I'd like to optimize for computing power at the expense of file size.
So essentially I'd like to know how to effectively unburden the decoder at the expense of increasing the file size so the video plays without any artifacts or freezes.
Can anyone recommend a technique to achieve this?
To clarify: I have a locally available file in very high quality (1440p) which I'd like to transcode to the highest playable resolution/quality possible (1080p+), without caring about file size.
Thank you.
For encoding video, the general recommendation is to use H.264 with Baseline Profile for broad compatibility. There are a variety of parameters for optimizing for particular video content (animation vs. static lecture vs. action/sports), but it generally resolves down to bitrate.
Any device which has Google Play must conform to the Android Compatibility Definition Document, which spells out the expected frame rates and bitrates for various video sizes:
http://source.android.com/compatibility/7.0/android-7.0-cdd.html#5_3_4_h_264
Android device implementations with H.264 decoders:
MUST support Main Profile Level 3.1 and Baseline Profile.
Support for ASO (Arbitrary Slice Ordering), FMO (Flexible Macroblock Ordering)
and RS (Redundant Slices) is OPTIONAL.
MUST be capable of decoding videos with the SD (Standard Definition)
profiles listed in the following table and encoded with the Baseline Profile and
Main Profile Level 3.1 (including 720p30).
SHOULD be capable of decoding videos with the HD (High Definition) profiles
as indicated in the following table.
In addition, Android Television devices:
MUST support High Profile Level 4.2 and the HD 1080p60 decoding profile.
MUST be capable of decoding videos with both HD profiles as indicated
in the following table and encoded with either the Baseline Profile, Main
Profile, or the High Profile Level 4.2.
                  SD (Low quality)   SD (High quality)   HD 720p         HD 1080p
Video resolution  320 x 240 px       720 x 480 px        1280 x 720 px   1920 x 1080 px
Video frame rate  30 fps             30 fps              30 fps          30 fps
Video bitrate     800 Kbps           2 Mbps              8 Mbps          20 Mbps
While the CDD makes SD decoding a MUST, HD decoding is only a SHOULD, though it is implemented on most high-end devices.
With regard to power usage - hardware decoding being relatively common on high-end devices - the screen is still the most power-hungry part of playing a video, so any thoughts about 'compression' should focus on which settings provide the most visually acceptable content while staying as small as possible. Given variations in content, the 'right' settings usually require a bit of experimentation.
In addition, if you are delivering to a device, you should let the client pick the resolution/quality that makes sense - i.e. there is no reason to deliver a 1080p file to a 640x480 device.
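As a sketch of that client-side check (API 21+; the candidate resolution list is an assumption), you can ask MediaCodecList what the device's AVC decoders actually support and serve the largest resolution that passes:

import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

// Candidate resolutions to offer, largest first.
int[][] candidates = { {1920, 1080}, {1280, 720}, {720, 480} };

outer:
for (MediaCodecInfo info : new MediaCodecList(MediaCodecList.REGULAR_CODECS).getCodecInfos()) {
    if (info.isEncoder()) continue;  // we only care about decoders
    for (String type : info.getSupportedTypes()) {
        if (!type.equalsIgnoreCase("video/avc")) continue;
        MediaCodecInfo.VideoCapabilities caps =
                info.getCapabilitiesForType(type).getVideoCapabilities();
        for (int[] c : candidates) {
            if (caps.isSizeSupported(c[0], c[1])) {  // largest supported wins
                Log.i("Delivery", "Serve " + c[0] + "x" + c[1] + " via " + info.getName());
                break outer;
            }
        }
    }
}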

How WebRTC modify my video's resolution?

I am dealing with WebRTC on Android. My problem is that I cannot send video with a resolution higher than 1280x1280. Even if I set the video resolution to 1920x1080, WebRTC sends at most 1280x1080. I see these results in the StatsReport output.
It gives me these values when I set the video to 1920x1080:
name: googFrameWidthInput, value: 1920
name: googFrameWidthSent, value: 1280
name: googFrameHeightSent, value: 1080
name: googFrameHeightInput, value: 1080
I have 3 questions here.
1) Does WebRTC support full HD video (1920x1080)?
2) How does it modify my video resolution? Does it just decrease it randomly?
As seen here, it doesn't keep my video's aspect ratio; isn't that wrong?
3) As far as I know, WebRTC decreases the video resolution when CPU usage increases or network quality decreases. When one of these cases occurs, what will my new video resolution and aspect ratio be? Does it decrease according to a rule?
1) Does WebRTC support full HD video (1920x1080)?
Yes, but both the local camera and the remote peer need to support it, or else a lower resolution will be chosen.
2) How does it modify my video resolution? Does it just decrease it randomly? As seen here, it doesn't keep my video's aspect ratio; isn't that wrong?
Again, it is decided by the combination of your local camera and the resolutions the peer advertises as supported.
3) As far as I know, WebRTC decreases the video resolution when CPU usage increases or network quality decreases. When one of these cases occurs, what will my new video resolution and aspect ratio be? Does it decrease according to a rule?
The variation in bitrate does not change the resolution. It is the codec being set to an adjustable bitrate and responding to the amount of motion in the scene: when there is more motion, the bitrate will be higher.
1) A few days ago I explored the native WebRTC code and found some resolution parameters, where the maximum was only HD. But maybe WebRTC can transform the data into a suitable stream.
2) As you can see from the sample, all parameters are passed to the PeerConnectionParameters constructor. I think you have already tried this solution; if not, check the sample.
PeerConnectionParameters peerConnectionParameters =
    new PeerConnectionParameters(
        /* many different parameters, including resolution */);
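For reference, a lower-level sketch using recent versions of the org.webrtc capturer API directly; factory, context, and surfaceTextureHelper are assumed to be initialized elsewhere, and WebRTC may still send a lower resolution than the one requested:

import org.webrtc.Camera2Enumerator;
import org.webrtc.VideoCapturer;
import org.webrtc.VideoSource;

// Pick a back-facing camera (assumes the device supports Camera2).
Camera2Enumerator enumerator = new Camera2Enumerator(context);
VideoCapturer capturer = null;
for (String name : enumerator.getDeviceNames()) {
    if (enumerator.isBackFacing(name)) {
        capturer = enumerator.createCapturer(name, null);
        break;
    }
}

VideoSource source = factory.createVideoSource(capturer.isScreencast());
capturer.initialize(surfaceTextureHelper, context, source.getCapturerObserver());
// Ask for full HD at 30 fps; under CPU or bandwidth pressure WebRTC
// can still downscale the stream that is actually sent.
capturer.startCapture(1920, 1080, 30);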

Load and play ultra high resolution videos on Android

Does anyone know how to programmatically load a high resolution video on Android, such as 3000 x 3000, and display only a portion of it, such as 1000 x 1000?
I tried using the official Android SDK MediaPlayer with a TextureView, but this approach seems to have media size limitations: the video plays, but the TextureView stays black.
I appreciate the help.
Media size limitations exist for a reason. 3000 x 3000 video is huge for a device as small as a phone. Consider that you cannot decode only a small portion of a video frame; that is not how video works. You would need to decode the whole large frame (which is reconstructed from an I-frame and the following P-frames), take a snapshot of it, crop the part you are interested in, and present that on your TextureView - all in real time. Think about the device's memory and CPU. In my opinion it's not possible with such limited resources.
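One way to check this up front (API 21+, sketched below) is to ask MediaCodecList whether any decoder on the device accepts a 3000 x 3000 H.264 format; a null result means no codec advertises support:

import android.media.MediaCodecList;
import android.media.MediaFormat;

// Probe whether any installed decoder claims to handle 3000x3000 AVC.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 3000, 3000);
String decoder = new MediaCodecList(MediaCodecList.REGULAR_CODECS)
        .findDecoderForFormat(format);
if (decoder == null) {
    // No decoder supports this size, which would explain the black
    // TextureView; fall back to a smaller rendition of the video.
}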

Android:where to store large arrays of frames before encoding?

I am facing a programming problem.
I am trying to encode video from camera frames that I have merged with frames retrieved from another layer (like a Bitmap/GLSurface).
When I use 320x240, I can do the merge in real time at a decent FPS (~10), but when I increase the frame size I get less than 6 FPS.
That makes sense, as my merging function depends on the frame size.
So what I'm asking is: how can I store these arrays of frames for later processing (encoding)?
I don't know how to store such large arrays.
Just a quick calculation:
if I need to store 10 frames per second,
and each frame is 960x720 pixels,
then for a 40-second video I need to store 40 x 10 x 960 x 720 x (3/2, the Android YUV factor) ≈ 415 MB.
That is far too much for the heap.
Any ideas?
You can simply record the camera input as video - 40 seconds will not be a large file even at 720p resolution - and then, offline, decode, merge, and encode again. The big trick is that MediaRecorder uses the hardware encoder, so encoding will be really fast. Being compressed, the video can be written to the sdcard or local file system in real time, and reading it back for decoding is not an issue either.
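A minimal sketch of that first recording step with MediaRecorder (the output path and bitrate are assumptions; camera preview wiring and IOException handling are omitted):

import android.media.MediaRecorder;

// Record the camera straight to a compressed 720p H.264 file; the
// hardware encoder keeps this cheap enough for real-time capture.
MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setVideoSize(1280, 720);
recorder.setVideoFrameRate(30);
recorder.setVideoEncodingBitRate(8_000_000);    // ~8 Mbps
recorder.setOutputFile("/sdcard/capture.mp4");  // hypothetical path
recorder.prepare();
recorder.start();
// ... record the ~40 s clip, then recorder.stop() and recorder.release();
// afterwards decode, merge, and re-encode offline with MediaCodec.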
