Android: where to store large arrays of frames before encoding?

I am facing a programming problem.
I am trying to encode video from camera frames that I merge with frames retrieved from another layer (such as a bitmap or a GLSurfaceView).
At 320x240 I can do the merge in real time at a decent frame rate (~10 FPS), but when I increase the frame size I get less than 6 FPS.
That makes sense, since the cost of my merging function depends on the number of pixels.
So my question is: how can I store these arrays of frames for later processing (encoding)?
I don't know how to store such large arrays.
Just a quick calculation:
If I need to store 10 frames per second,
and each frame is 960x720 pixels,
then for a 40-second video I need to store 40 x 10 x 960 x 720 x 1.5 (the Android YUV 4:2:0 factor) = ~415 MB.
That is far too much for the heap.
Any ideas?

You can simply record the camera input as video - 40 seconds will not be a large file, even at 720p - and then decode, merge, and re-encode offline. The big trick is that MediaRecorder uses the hardware encoder, so encoding will be really fast. Since the stream is compressed, it can be written to the SD card or local file system in real time, and reading it back for decoding is not an issue either.
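For example, a minimal sketch of the record-first approach using the legacy Camera API (the camera and outputPath parameters are assumptions, not code from the question):

    import android.hardware.Camera;
    import android.media.MediaRecorder;
    import java.io.IOException;

    public final class RecordFirst {
        // Record the raw camera stream with the hardware encoder first;
        // decode, merge and re-encode offline afterwards.
        public static MediaRecorder startRecording(Camera camera, String outputPath)
                throws IOException {
            camera.unlock();                                  // hand the camera over to MediaRecorder
            MediaRecorder recorder = new MediaRecorder();
            recorder.setCamera(camera);
            recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
            recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
            recorder.setVideoSize(960, 720);                  // the asker's target resolution
            recorder.setVideoFrameRate(10);
            recorder.setOutputFile(outputPath);               // compressed, so real-time disk writes are fine
            recorder.prepare();
            recorder.start();                                 // hardware-accelerated on most devices
            return recorder;                                  // caller stops/releases it after ~40 s
        }
    }

After recorder.stop(), feed the file to MediaExtractor/MediaCodec for the offline merge pass.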

Related

Android NDK Camera Sample JPEG at 30 Hz

I have been using the Android NDK Camera sample, with which one can read frames in the AIMAGE_FORMAT_YUV_420_888 format via the yuvreader_ inside DrawFrame at 30 Hz. I validated that 30 Hz is achieved by recording and printing the timestamp of each image. I am using a Samsung Galaxy S9.
I am now trying to obtain JPEG images instead of the YUV ones, also at 30 Hz, but have not yet succeeded and was wondering if someone could help.
From what I understand, the capture session in this sample creates a request for both a "preview" and a "still capture", where YUV is used for the preview and JPEG for the still capture. What I did was set jpgReader_ as the preview reader as well, and then check the timestamps of the frames captured in the ImageCallback here (I commented out the write-to-file step and just called AImage_delete(image) to free the buffer instead). However, I get frames at intervals of 33, 66, 99, and 133 ms, fairly evenly distributed, so many frames get skipped.
Any ideas what the problem could be?
Many camera devices simply cannot produce 30 JPEG images per second - that is why the camera API explicitly uses the YUV (or private) format for preview and video, and why a typical video recording session involves an H.264 or VP8 encoder rather than JPEG stills.
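The same split expressed in Java camera2 terms (a sketch, assuming both readers were already registered as outputs when the session was created; the NDK sample makes the equivalent native calls): keep the repeating request on the YUV reader and route only one-off still captures to the JPEG reader.

    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCaptureSession;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.cam2.CaptureRequest;
    import android.media.ImageReader;
    import android.os.Handler;

    public final class YuvPreviewJpegStill {
        public static void configure(CameraDevice device, CameraCaptureSession session,
                                     ImageReader yuvReader, ImageReader jpegReader,
                                     Handler handler) throws CameraAccessException {
            // The repeating request feeds the YUV reader at the sensor rate (30 Hz here).
            CaptureRequest.Builder preview =
                    device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            preview.addTarget(yuvReader.getSurface());
            session.setRepeatingRequest(preview.build(), null, handler);

            // JPEG is only produced for explicit one-off captures, not per frame.
            CaptureRequest.Builder still =
                    device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            still.addTarget(jpegReader.getSurface());
            session.capture(still.build(), null, handler);
        }
    }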

How can I reduce the video recording size in Flutter?

I'm using the Flutter camera plugin to record video, but the recorded video is too big - around 20 MB for 1 minute. How can I reduce the size (for example, by reducing the resolution)? I have also changed my VideoEncodingBitRate to 3000000, like this: mediaRecorder.setVideoEncodingBitRate(3000000);.
To reduce the size, you can employ either or both of these two methods:
Resolution
You can see it in the example: controller = CameraController(cameras[0], ResolutionPreset.medium); - change this to ResolutionPreset.low or some other custom value (it does not have to be a preset).
Encoding
You can use a different encoder, for example FFmpeg via this plugin: https://pub.dartlang.org/packages/flutter_ffmpeg. See also this question and its answers: how to reduce size of video before upload to server programmatically in android
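For reference, 3000000 b/s already implies roughly 3,000,000 x 60 / 8 = ~22.5 MB per minute, which matches the ~20 MB you are seeing, so the bitrate has to come down further. At the platform level these are the two knobs (a plain-Android sketch with illustrative values; the plugin configures the underlying MediaRecorder similarly):

    import android.media.MediaRecorder;

    public final class SmallerRecordings {
        // Both knobs shrink the file: fewer pixels let the encoder spend its
        // bits better, and a lower bitrate caps the file size directly.
        public static void applySizeKnobs(MediaRecorder recorder) {
            recorder.setVideoSize(640, 480);              // illustrative lower resolution
            recorder.setVideoFrameRate(30);
            recorder.setVideoEncodingBitRate(1_000_000);  // 1 Mbps is ~7.5 MB per minute
        }
    }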

Compressing video files with multiple resolution format options in Android

I am using the MediaCodec native API to compress video. My requirement is that I also need to show the user a list of available resolution formats for video compression (assuming that if the user selects any of the resolution formats, the output file should be less than 5 MB). So I need to be able to calculate the compressed video size, based on the resolution the user chooses, before the actual compression. Is this possible in Android? I have searched extensively but have been unable to find an answer. Any leads would be very helpful. Thanks!
You can calculate the output size by multiplying the output bitrate (bits per second) by the length (seconds) of the video, then dividing by 8 to get bytes.
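A worked version of that formula (a sketch; include the audio track's bitrate if there is one):

    public final class SizeEstimate {
        // Estimated output size in bytes for a given encoder configuration.
        public static long estimateBytes(int videoBitrateBps, int audioBitrateBps,
                                         double durationSec) {
            return (long) ((videoBitrateBps + audioBitrateBps) * durationSec / 8.0);
        }

        // Inverse: the total bitrate that keeps the file under maxBytes.
        public static int bitrateForTarget(long maxBytes, double durationSec) {
            return (int) (maxBytes * 8 / durationSec);
        }

        public static void main(String[] args) {
            // 2 Mbps video + 128 kbps audio for 20 s -> 5,320,000 bytes (~5.3 MB)
            System.out.println(estimateBytes(2_000_000, 128_000, 20));
            // A 5 MB cap over 20 s allows about 2,000,000 b/s in total
            System.out.println(bitrateForTarget(5_000_000, 20));
        }
    }

Muxer overhead is small, so this estimate is usually close; resolution only matters insofar as you should pick a bitrate appropriate for it.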

Load and play ultra-high-resolution videos on Android

Does anyone know how to programmatically load a high-resolution video on Android, such as 3000 x 3000, and display only a portion of it, such as 1000 x 1000?
I tried using the official Android SDK MediaPlayer with a TextureView, but this approach seems to hit media size limitations: the video plays, but the TextureView stays black.
I appreciate the help.
Media size limitations exist for a reason: a 3000 x 3000 video is huge for a device as small as a phone. Also consider that you cannot decode only a small portion of a video frame - that is not how video works. You would have to decode the whole large frame (which is reconstructed from an I-frame and the following P-frames), then crop out the part you are interested in and present it on your TextureView, all in real time. Given the device's memory and CPU, in my opinion this is not possible with such limited resources.

Android MediaMuxer & MediaCodec too slow for saving video

My Android app does live video processing using OpenGL. I'm trying to save the output to video using MediaMuxer and MediaCodec.
The performance is not good enough. Each cycle, the screen is updated and the frame is saved to file. The screen is smooth, but the video file is horrible: major motion blur whenever things change quickly, and a frame rate that appears to be 1/2 or 1/3 of what it should be.
It seems to be a limitation due to internal clamping of settings. I can't get it to produce a video with a bitrate greater than 288 kbps. I think it is clamping the requested parameters rather than failing to keep up, because there is no difference in frame rate between 1024x1024, 480x480, and 240x240; if it were having trouble keeping up, it should at least improve when the number of pixels drops by a factor of more than 10.
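For context (the app's actual code is not shown), the standard way a bitrate is requested from MediaCodec is via MediaFormat, and KEY_BIT_RATE is only a request: some encoder implementations clamp it, which would match the 288 kbps ceiling described above. A hedged sketch:

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import java.io.IOException;

    public final class EncoderConfig {
        public static MediaCodec createEncoder(int width, int height) throws IOException {
            MediaFormat format =
                    MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface); // OpenGL input path
            format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);        // requested, not guaranteed
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
            MediaCodec codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
            codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            return codec;
        }
    }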
The app is here: https://play.google.com/store/apps/details?id=com.matthewjmouellette.snapdat.
I would love to post a code sample, but the program is 10K lines of code, and a lot of it is relevant to this problem.
Any help would be greatly appreciated.
EDIT:
I've tried 10+ different things and I'm out of ideas right now. I wish I could just save the video uncompressed; the storage should be able to keep up with a small enough image size and a medium frame rate.
It seems the encoding method just doesn't work for my video: the frames differ too much for the motion-compensation trick of "moving" parts of the previous frame, so I need full frames throughout. I am thinking something along the lines of M-JPEG would work really well. JPEGs tend to be about 1/10th the size of a bitmap, so this should give a reasonable file size with relatively little processing, since it is image compression rather than video compression. I wish I had a good library for this.
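A minimal sketch of that M-JPEG idea using the built-in bitmap codec (assuming the frames are available as android.graphics.Bitmap; a real player would need a container that understands MJPEG):

    import android.graphics.Bitmap;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public final class MjpegWriter {
        // Compress each frame independently - no inter-frame prediction,
        // so fast-changing content costs no more than static content.
        public static void appendFrame(Bitmap frame, FileOutputStream out) throws IOException {
            // Quality ~70 usually lands near the 1/10th-of-a-bitmap size mentioned above.
            frame.compress(Bitmap.CompressFormat.JPEG, 70, out);
        }
    }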
