I would like to know if there is a way to convert a video from RGB to grayscale in a VideoView, without using any external library (such as OpenCV).
I haven't found anything about it.
Everything I found involves manipulating the video before displaying it: with FFmpeg, with OpenCV (which I don't need for this project), or by drawing on a SurfaceView with OpenGL.
Is there any way to do the manipulation directly on the VideoView?
Update
I found a solution to this problem: just use this library.
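For anyone who does end up processing frames manually (e.g. frames extracted from the video, or an Android ColorMatrix with saturation set to 0 rendered through a custom view), the per-pixel math is simple. A minimal plain-Java sketch of the standard BT.601 luma formula; the class and method names here are mine, not from any Android API:

```java
public class GrayscaleDemo {
    // Convert one ARGB pixel to gray using the ITU-R BT.601 luma weights.
    static int toGray(int argb) {
        int a = (argb >>> 24) & 0xFF;
        int r = (argb >>> 16) & 0xFF;
        int g = (argb >>> 8) & 0xFF;
        int b = argb & 0xFF;
        int y = (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
        // Replicate the luma value into all three color channels.
        return (a << 24) | (y << 16) | (y << 8) | y;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(toGray(0xFFFFFFFF))); // ffffffff
        System.out.println(Integer.toHexString(toGray(0xFFFF0000))); // ff4c4c4c
    }
}
```

Applying this to every pixel of every frame is exactly what a saturation-zero color matrix does in hardware, which is why the GPU-based (OpenGL/SurfaceView) approaches are usually preferred for live playback.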
Related
The goal is to analyse a video frame by frame and apply painting to each of those frames, then save the video and be able to play it back.
Do you know if such a thing is possible in Flutter? I've heard about the ffmpeg package, but I'm not sure whether it can do this.
I would recommend implementing this natively on each platform. It will be more performant that way, and you get complete control over the video generation.
On iOS, the AVMutableComposition class is the way to go.
I don't know about Android.
I have this interesting problem. I need to convert animated SVG or just plain SVG to a video format either on the client side (iOS/Android) or on the server side (.NET).
There does not seem to be a way to accomplish it.
My initial approach was to animate SVG using Core Animation (on iOS) and convert that to a video, but a similar solution does not exist in the Android world.
My next approach was to use FFmpeg, but that too fell short. Then I shifted my focus to .NET, but nothing came of that either.
I'm looking for some general direction. Can it be done?
The idea is to take multiple SVGs and convert them to a video file along with some audio.
Thanks in advance!
I'm creating an Android app that uses OpenCV to implement augmented reality. One of the required features is saving the processed video. I can't seem to find any sample code on saving in real time while using OpenCV.
If the above scenario isn't possible, another option is to save the video first and have it post-processed by OpenCV and saved back as a new file. But I can't find any sample code for this either.
Could someone be kind enough to point me to either direction, or give me an alternative? It's ok if the alternative doesn't use OpenCV.
The typical OpenCV flow is: you receive frames from the camera, convert them to RGB format, perform matrix operations, then return them to the activity to display in a view. You can store the modified frames as images somewhere on the SD card and use JCodec to create your MP4 out of those images. See Android make animated video from list of images.
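The "store the modified frames as images" step could be sketched like this. For illustration it uses desktop-Java ImageIO (on Android you would substitute Bitmap.compress), and the zero-padded naming scheme is just an assumption that keeps the frames in order for a tool like JCodec to assemble later:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class FrameDump {
    // Write one processed frame as a numbered JPEG (frame_00000.jpg, ...).
    // Zero-padding keeps lexical and frame order identical for the muxer.
    static File saveFrame(BufferedImage frame, File dir, int index) throws Exception {
        File out = new File(dir, String.format("frame_%05d.jpg", index));
        ImageIO.write(frame, "jpg", out);
        return out;
    }

    public static void main(String[] args) throws Exception {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        BufferedImage frame = new BufferedImage(64, 48, BufferedImage.TYPE_INT_RGB);
        File saved = saveFrame(frame, dir, 0);
        System.out.println(saved.getName() + " written: " + (saved.length() > 0));
    }
}
```

Writing every frame to storage and re-encoding afterwards is slow, so this post-processing route trades real-time performance for simplicity.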
How do I generate a GIF frame by frame, based on a canvas?
I'm writing a chess game and want to generate a GIF file from every move the user makes. Are there any APIs provided by the SDK?
Thanks.
Unfortunately that is not possible with any built-in function; the Android SDK does not include the ImageIO classes. You can make use of this animated GIF encoder:
http://www.java2s.com/Code/Java/2D-Graphics-GUI/AnimatedGifEncoder.htm
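That encoder is not part of the JDK, so the sketch below covers only the stdlib half of the job: rendering each board state into an off-screen BufferedImage, one image per move. The encoder calls shown in the comment follow that class's usual start/setDelay/addFrame/finish pattern, but verify them against the source you download:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class BoardFrames {
    // Render an 8x8 chessboard into an off-screen image. In the real game
    // you would also draw the pieces for the current move here.
    static BufferedImage renderBoard(int cell) {
        BufferedImage img = new BufferedImage(8 * cell, 8 * cell, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        for (int row = 0; row < 8; row++) {
            for (int col = 0; col < 8; col++) {
                g.setColor((row + col) % 2 == 0 ? Color.WHITE : Color.DARK_GRAY);
                g.fillRect(col * cell, row * cell, cell, cell);
            }
        }
        g.dispose();
        return img;
    }

    public static void main(String[] args) {
        BufferedImage frame = renderBoard(16);
        System.out.println(frame.getWidth() + "x" + frame.getHeight());
        // With the downloaded encoder, roughly:
        // e.start("game.gif"); e.setDelay(500); e.addFrame(frame); ... e.finish();
    }
}
```

On Android you would draw into a Canvas backed by a Bitmap instead of a Graphics2D, but the frame-per-move structure is the same.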
I'd like to write an app that merges multiple images into a movie on Android. JMF has a basic implementation (JpegImagesToMovie), but JMF isn't supported on Dalvik.
Is there an alternative library I can use for this? Or, if no library is available, does anyone have pointers on what I would need to research to implement it myself?
Rgds, Kevin.
I'm not aware of any pure-Java video encoders, and the built-in video encoder in Android appears to be limited to capturing video from the camera, rather than accepting a custom input source.
You could look at writing a multipart-JPEG writer (quite rare but well supported), or even an MJPEG encoder (the format used by many digital cameras).
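To give an idea of why an MJPEG encoder is approachable: a raw MJPEG stream is essentially complete JPEG frames written back to back. A minimal sketch, using desktop-Java ImageIO for illustration (not available on Android, where Bitmap.compress would take its place); the temp-file name is arbitrary:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;

public class MjpegWriter {
    // Encode one frame as a complete, self-contained JPEG.
    static byte[] encodeFrame(BufferedImage frame) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ImageIO.write(frame, "jpg", buf);
        return buf.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        File out = File.createTempFile("clip", ".mjpeg");
        try (FileOutputStream fos = new FileOutputStream(out)) {
            for (int i = 0; i < 3; i++) {
                BufferedImage frame = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
                fos.write(encodeFrame(frame)); // one JPEG per frame, back to back
            }
        }
        System.out.println("bytes written: " + out.length());
    }
}
```

The trade-off is file size: every frame is a full keyframe, so MJPEG is much larger than MPEG-4, but it needs no inter-frame motion estimation and several players can decode a raw stream like this.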