Images to Video converter in Android

I would like to convert multiple images (frames) to a video (MP4) on an Android device. I would also like to convert a video (MP4) into multiple images (one for each frame). I have limited knowledge of FFmpeg, and integrating FFmpeg into Android may consume more time. I would like to ask experienced engineers to suggest a better strategy that can take less time to complete this task. Please point me to some open source code which I may modify to finish this task quickly.

First you need to convert each image to YUV format using an image decoder.
Then you can feed each YUV image as video input to the MediaRecorder engine.
Go through the MediaRecorder source code to get more info.
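The MediaRecorder route above assumes raw YUV input, so the first concrete step is converting each decoded image into a YUV buffer. Below is a minimal sketch of that conversion in Java, assuming ARGB_8888 bitmaps with even width and height and NV21 (YUV420SP) as the target layout; the method name is only for illustration.

    import android.graphics.Bitmap;

    // Converts an ARGB_8888 Bitmap to NV21 (YUV420SP) bytes using integer BT.601 coefficients.
    // Assumes width and height are both even.
    public static byte[] bitmapToNv21(Bitmap bitmap) {
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        int[] argb = new int[width * height];
        bitmap.getPixels(argb, 0, width, 0, 0, width, height);

        byte[] yuv = new byte[width * height * 3 / 2];
        int yIndex = 0;
        int uvIndex = width * height;

        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                int p = argb[j * width + i];
                int r = (p >> 16) & 0xFF;
                int g = (p >> 8) & 0xFF;
                int b = p & 0xFF;

                int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
                yuv[yIndex++] = (byte) Math.max(0, Math.min(255, y));

                // One V/U pair per 2x2 block of pixels (NV21 stores V before U).
                if (j % 2 == 0 && i % 2 == 0) {
                    int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                    int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                    yuv[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                    yuv[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
                }
            }
        }
        return yuv;
    }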

Related

Is it possible to render frames in Exoplayer?

I am pulling H.264 and AAC frames, and at the moment I am feeding them to MediaCodec, decoding and rendering them myself, but the code is getting too complicated and I need to cover all cases. I was wondering if it's possible to set up an ExoPlayer instance and feed the frames to it as a source.
I can only find that it supports normal files and streams, but not separate frames. Do I need to mux the frames myself, and if so, is there an easy way to do it?
If you mean that you are extracting frames from a video file or a live stream and then want to work on them or display them individually, you may find that OpenCV suits your use case.
You can fairly simply open a stream or file, go frame by frame, and do what you want with each resulting decoded bitmap.
This answer has a Python and Android example that might be useful: https://stackoverflow.com/a/58921325/334402
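If OpenCV fits, the frame-by-frame loop itself is short. A rough sketch with the OpenCV Java bindings follows; whether a given file or stream URL can actually be opened depends on how the OpenCV library was built for Android, and the method is only for illustration.

    import org.opencv.core.Mat;
    import org.opencv.videoio.VideoCapture;

    // Walks a video file or stream frame by frame with the OpenCV Java bindings.
    public static int countFrames(String pathOrUrl) {
        VideoCapture capture = new VideoCapture(pathOrUrl);
        if (!capture.isOpened()) {
            throw new IllegalStateException("Could not open: " + pathOrUrl);
        }
        Mat frame = new Mat();
        int frames = 0;
        try {
            while (capture.read(frame)) {   // returns false when the stream ends
                // Work on the decoded frame here, e.g. convert it to a Bitmap
                // with org.opencv.android.Utils.matToBitmap(frame, bitmap).
                frames++;
            }
        } finally {
            frame.release();
            capture.release();
        }
        return frames;
    }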

Tensorflow Android support for videos

Is there any existing support in TensorFlow on Android for locally saved videos? The provided demo is tightly coupled with the camera, and porting it to work with videos would be non-trivial and time-consuming, at the very least. The task it is intended for is to process raw frames from a stream being broadcast live.
You have to extract bitmaps for each frame of the video using MediaMetadataRetriever or something similar and then pass them to the TensorFlow library for image recognition.
Currently there is no existing support for a video stream itself in TensorFlow AFAIK; even the demo takes screenshots of the camera preview to recognize.
If you really want to recognize the video stream itself, then you have to build a model of your own.
Otherwise, the process to analyze a video would be as follows, assuming you already have your graph and label file and do not need to play the video (if you want to show the video during the analysis, you should implement a SurfaceView or TextureView in your activity):
1. Initialize TensorFlow and load the desired video using MediaMetadataRetriever.
2. Extract bitmaps for the desired frames using getFrameAtTime and scale each bitmap to the appropriate size.
3. Patternize the bitmap and run the inference method (you can pass the bitmap directly if you copy and use TensorFlowImageClassifier.class from the demo).
4. Store the result and loop back to the next frame (steps 2-4).
It's a somewhat simplified overall process, but I hope you can get a hint from it; a rough sketch of the frame loop follows below.
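A rough sketch of steps 2-4 in Java, assuming a local file, a fixed sampling interval, a square model input size, and a hypothetical classifier wrapper such as TensorFlowImageClassifier from the demo:

    import android.graphics.Bitmap;
    import android.media.MediaMetadataRetriever;

    // Grabs a frame every intervalUs microseconds, scales it to the model's
    // input size and hands it to an image classifier.
    public static void classifyVideo(String videoPath, long intervalUs, int inputSize) {
        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        retriever.setDataSource(videoPath);

        long durationUs = Long.parseLong(retriever.extractMetadata(
                MediaMetadataRetriever.METADATA_KEY_DURATION)) * 1000L;

        for (long timeUs = 0; timeUs < durationUs; timeUs += intervalUs) {
            // OPTION_CLOSEST_SYNC is fast; OPTION_CLOSEST is exact but slower.
            Bitmap frame = retriever.getFrameAtTime(timeUs,
                    MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
            if (frame == null) continue;

            Bitmap scaled = Bitmap.createScaledBitmap(frame, inputSize, inputSize, true);
            // Hypothetical classifier wrapper, e.g. TensorFlowImageClassifier from the demo:
            // List<Recognition> results = classifier.recognizeImage(scaled);
            // ... store the results for this timestamp ...
            scaled.recycle();
            frame.recycle();
        }
        retriever.release();
    }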

encoding images to video with ffmpeg

We are working on Android 3D Animation App.
We need to identify images, then save and encode them to video using FFmpeg (since the Android API does not support this). Once the video is generated, audio is appended to it.
We are facing two problems with this.
First is a memory leak issue at the time of saving the identified images for encoding; the emulator's CPU is getting overloaded. Is FFmpeg called every time an image is selected? How do we resolve this issue?
Second (in case we get through the first one), we are not able to encode the selected images, since this generates a green-colored video. What could be the reason for this?
Is there any tool other than FFmpeg for encoding images to H.264 video?
Will the image type (raster or vector) impact this video encoding?
Does the Android OS version need to be considered?
Any valuable inputs on this will be greatly appreciated.
Thanks
I have also played with that idea, using ffmpeg on an Android phone, but I would suggest doing it on a server, which has much more power. On a server you don't need to think about the CPU load of a smartphone.
In general, to improve your ffmpeg run you need to post the ffmpeg calls you are making; ffmpeg is quite complex, and the order of the parameters directly affects the efficiency.
I don't know which container format you prefer, but maybe a simple MJPEG codec could work for you. AFAIK it is just the JPEG frames concatenated one after another, which should be much simpler than encoding the video to H.264/x264 (ffmpeg uses the latter).
A combination of both might be to generate an MJPEG stream which is converted on the server side to an H.264 video that can then be downloaded by the client, but that really depends on the length of the video, if you don't want to waste your customers' traffic.
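A raw MJPEG stream is essentially consecutive JPEG frames, so generating one on the device is simple. A minimal sketch, assuming you already have the frames as Bitmaps in memory and picking an arbitrary JPEG quality; a server-side ffmpeg could then transcode the resulting file to H.264:

    import android.graphics.Bitmap;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.List;

    // Writes frames as one raw MJPEG stream (JPEG images back to back).
    public static void writeMjpeg(List<Bitmap> frames, String outputPath) throws IOException {
        try (FileOutputStream out = new FileOutputStream(outputPath)) {
            for (Bitmap frame : frames) {
                // Each compress() call appends a complete JPEG image to the stream.
                frame.compress(Bitmap.CompressFormat.JPEG, 80, out);
            }
        }
    }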

How to record the http live streaming from an IP Cam

I have created an application in which the client can view an IP camera that serves an HTTP live stream of MJPEG, using this link:
Android ICS and MJPEG using AsyncTask
Now I want the user to be able to record the video to the memory card.
I have googled for a while and only two approaches came to mind:
1. I keep storing the JPEG images, and when the user clicks stop recording I somehow clip all the images together into a 3GP video or some other file format. But I don't know how to create the video from all the images, or whether this would be an efficient approach.
2. I use FFmpeg, in which case I will have to deal with the NDK, and that seems to be a longer path which may lead to nowhere :P
So is FFmpeg the better option? If yes, please share some links; or is the first option better?
Thanks in advance
FFmpeg is the better option, but you'll probably get stuck with a fairly poor encoding resolution/compression. Maybe some low-quality MPEG-4 like Xvid will work, but even that might demand more performance than the CPU can deliver.
Android doesn't have an API to access the video encoder logic in the SoC, so a native implementation is pretty much your only choice. If so, FFmpeg through the NDK is probably the easiest.

Transcode/Convert Video to Mp4 on Android

I have a requirement where I need to transcode small video clips shot from the native camera app to a lower bitrate/resolution MP4 that is shareable via email etc.
What is the best way to transcode/convert the video on the device itself: FFmpeg or some other library?
P.S. I know this is overkill for the device, but the client leaves me with no option. He doesn't care about battery or the time it takes. I'm targeting quad-cores, where the CPU is not a problem.
Your best bet would be to use something like FFmpeg, which has been ported to Android (see this SO post: ffmpeg for android (using tutorial: "ffmpeg and Android.mk"), and the ffmpeg port for Android which is here: http://bambuser.com/opensource). You'll have to use JNI etc., but that will save you the hassle of dealing with the byte stream yourself.
Haven't tried it on Android myself, so YMMV:
Is there a Java API for mp4 files?
http://code.google.com/p/mp4parser/
If you're recording on-device, why not set the expected format from your code? It appears the API lets you set video size, frame rate, etc. in the MediaRecorder class.
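If recording at the target quality up front is acceptable, a rough MediaRecorder setup for a small, shareable MP4 could look like the sketch below; the resolution, frame rate and bitrate values are assumptions, and the camera/preview surface wiring is omitted.

    import android.media.MediaRecorder;
    import java.io.IOException;

    // Configures MediaRecorder for a small, shareable MP4 (H.264 video + AAC audio).
    // The camera must be unlocked and handed to the recorder, and a preview
    // surface set, before calling prepare()/start().
    public static MediaRecorder buildRecorder(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setVideoSize(640, 480);              // low resolution keeps the file small
        recorder.setVideoFrameRate(24);
        recorder.setVideoEncodingBitRate(1000000);    // ~1 Mbit/s, assumed value
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        return recorder;
    }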
