I'm trying to decide which library to choose in order to create an app that can
filter a video, for example with beautify or clarity effects.
During my search I came across two candidates: OpenCV and FFmpeg, and I found a complete framework only for FFmpeg (so +1 for it).
I couldn't find a complete comparison between the two, so if someone has tried them and can post an answer it would be really helpful.
Edit:
Another candidate is the Marvin framework (a Java project) for Android - https://code.google.com/p/android-image-filtering/
OpenCV is a framework for Computer Vision, and it's very limited for what you need because it requires you to write most of the cool filters yourself. Nevertheless, it provides a few techniques to blur images, change contrast, convert to grayscale, flip, crop, threshold, erode, dilate, resize, rotate, isolate colors, composite, and a few other things. Just so you have an idea of how to implement filters, I recently implemented a Displacement Map Filter using OpenCV.
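For instance, a couple of those built-in operations look like this with the OpenCV Java bindings (a minimal sketch: the file names are placeholders, and on Android you would normally get the Mat from the camera callback rather than from imread):

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    public class OpenCvFilterSketch {
        public static void main(String[] args) {
            // Load the native OpenCV library before calling any API.
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            Mat src = Imgcodecs.imread("input.jpg"); // placeholder path
            Mat gray = new Mat();
            Mat blurred = new Mat();

            // Two of the built-in techniques mentioned above:
            Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);   // grayscale
            Imgproc.GaussianBlur(gray, blurred, new Size(9, 9), 0); // blur

            Imgcodecs.imwrite("output.jpg", blurred);
        }
    }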
FFmpeg has a few filters as well, but it's meant to be a cross-platform solution to record, convert and stream audio and video, which means it doesn't really offer many filter effects.
Nevertheless, both APIs can read video (files and stream from camera) on Android and provide access to the video frames so you can execute your custom filters.
I believe the technology that can really help you bring a large collection of filters to your application is ImageMagick. Note that ImageMagick doesn't handle videos, so you can use Android's native API, OpenCV or FFmpeg for this part. Here are a few examples of what you can do with an image using ImageMagick from the command line, a program interface, or a script: charcoal drawings, oil-paint and sketch effects, sepia toning, polaroid-style frames, and many more.
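For instance, here is a hedged sketch of driving ImageMagick's command-line tool from Java (assuming ImageMagick is installed and convert is on the PATH; the file names are placeholders):

    import java.io.IOException;

    public class MagickSketch {
        public static void main(String[] args)
                throws IOException, InterruptedException {
            // -charcoal is one of ImageMagick's built-in effects; the number
            // controls the stroke thickness of the simulated drawing.
            int exit = new ProcessBuilder(
                    "convert", "input.png",
                    "-charcoal", "2",
                    "output.png")
                .inheritIO()
                .start()
                .waitFor();
            System.out.println("convert exited with code " + exit);
        }
    }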
There is a very useful OpenGL ES 2.0 library for video processing with many filters for iOS:
GPUImage for iOS
The Android wrapper is here:
GPUImage Wrapper for Android
The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.
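Basic usage of the Android wrapper looks roughly like this (a sketch; package and filter class names vary between versions of the library, so check the version you depend on):

    import android.content.Context;
    import android.graphics.Bitmap;
    import jp.co.cyberagent.android.gpuimage.GPUImage;
    import jp.co.cyberagent.android.gpuimage.filter.GPUImageGrayscaleFilter;

    public final class GpuImageSketch {
        // Applies a GPU-accelerated grayscale filter to a bitmap off-screen.
        public static Bitmap toGrayscale(Context context, Bitmap source) {
            GPUImage gpuImage = new GPUImage(context);
            gpuImage.setImage(source);
            gpuImage.setFilter(new GPUImageGrayscaleFilter());
            return gpuImage.getBitmapWithFilterApplied();
        }
    }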
Related
I have successfully implemented face detection in my app using Android's Camera.FaceDetectionListener (following the Android Developers guide), but unfortunately some devices do not support this feature. Is there another way to achieve the same result?
I usually work with OpenCV for image processing algorithms.
http://opencv.org/platforms/android.html
Its algorithms are much better than Android's face detection; besides, if you download the SDK you get a faceDetection example.
Here are the downloads:
http://opencv.org/downloads.html
The SDK handles the Camera2 API, which works at 30 fps, with a wrapper if you want to process video frames. There are also samples showing how to mix Java OpenCV code with JNI code to make your algorithm much faster.
Unfortunately, these examples are built as Eclipse projects, but they are not difficult to migrate into an Android Studio project.
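For reference, the core of the face-detection sample boils down to something like this (a sketch: the cascade XML ships with the SDK, and the path here is a placeholder; the file must first be copied somewhere the app can read):

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.objdetect.CascadeClassifier;

    public final class FaceDetectSketch {
        // Detects faces in a single (grayscale) camera frame.
        public static Rect[] detectFaces(Mat grayFrame, String cascadePath) {
            CascadeClassifier detector = new CascadeClassifier(cascadePath);
            MatOfRect faces = new MatOfRect();
            detector.detectMultiScale(grayFrame, faces);
            return faces.toArray();
        }
    }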
I hope that these references are useful
Cheers.
I'm trying to build an AR Android app that uses Vuforia + jPCT-AE.
jPCT is being used because it makes it easier to use objects exported from Blender, and it dramatically reduces code verbosity (compared with standalone Vuforia).
I would like to add the ability to display a video along with different objects (e.g. a banana and a monkey) rendered using jPCT-AE, but I haven't found any clues (documentation) on how to do this, so I'm asking for your help and knowledge.
Thanks in advance!
Vuforia's video playback is shown in this link:
Advanced Topics
The samples below show how to implement sophisticated rendering techniques in Unity and OpenGL ES to enrich your app with creative effects. A project to show you how to work with C++ on Android is also available.
...
Video Playback
I developed iOS app using GPUImage including GPUImageBilateralFilter.
Now I am going to port the iOS app to Android, but I've found there is no GPUImageBilateralFilter in the Android GPUImage library.
How can I port this filter in Android?
Well, you do have the entire source code for that filter right here. I talk a little more about that here and here, but it's a relatively simple bilateral blur. It uses a hardcoded set of 9 Gaussian samples about a central pixel, split into horizontal and vertical passes.
All you need to do is take the vertex and fragment shaders from the GPUImage repository and write your own implementation of a two-pass filter that does this within whatever Android framework you're using. If they've already got a separated Gaussian blur, that shouldn't be too hard to do.
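A rough skeleton of such a port, assuming the Android library's two-pass texture-sampling base class (class and package names differ between library versions, so treat this purely as a sketch, not the library's actual filter):

    import jp.co.cyberagent.android.gpuimage.GPUImageTwoPassTextureSamplingFilter;

    public class GPUImageBilateralFilter extends GPUImageTwoPassTextureSamplingFilter {

        // Placeholders: paste the vertex and fragment shader sources copied
        // from the iOS GPUImage repository here.
        private static final String BILATERAL_VERTEX_SHADER = "...";
        private static final String BILATERAL_FRAGMENT_SHADER = "...";

        public GPUImageBilateralFilter() {
            // The same shaders are used for both passes; the base class sets
            // the texel offset uniforms so the first pass samples horizontally
            // and the second vertically.
            super(BILATERAL_VERTEX_SHADER, BILATERAL_FRAGMENT_SHADER,
                  BILATERAL_VERTEX_SHADER, BILATERAL_FRAGMENT_SHADER);
        }
    }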
I'm attempting to find an image processing library that can add filters and things of that nature. Something like ImageMagick (which I have tried, but I couldn't get PNG support working, and ran into other issues specific to Android).
The main requirement is that it produces the same images given the same filters on iOS, Linux and Android.
Have a look at the android.graphics package. It contains all that you need to perform image transformations and filters, and there are a lot of examples on the web.
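For example, a simple grayscale filter using only android.graphics (standard API, no extra dependencies):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.ColorMatrix;
    import android.graphics.ColorMatrixColorFilter;
    import android.graphics.Paint;

    public final class GraphicsFilterSketch {
        public static Bitmap toGrayscale(Bitmap src) {
            Bitmap out = Bitmap.createBitmap(
                    src.getWidth(), src.getHeight(), Bitmap.Config.ARGB_8888);
            ColorMatrix cm = new ColorMatrix();
            cm.setSaturation(0f); // 0 = fully desaturated, i.e. grayscale
            Paint paint = new Paint();
            paint.setColorFilter(new ColorMatrixColorFilter(cm));
            new Canvas(out).drawBitmap(src, 0f, 0f, paint);
            return out;
        }
    }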
Scenario:
I am working on an Android project where, on one particular OpenGL page, I display videos.
FFmpeg is used to obtain frames from the videos (as OpenGL does not support video as a texture), and I am using those frames to achieve a video effect.
I am using pre-compiled FFmpeg binaries in the project.
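For context, the frame extraction shells out to the pre-compiled binary, along these lines (a sketch; paths are placeholders, and on Android the binary would live in the app's private files directory):

    import java.io.IOException;

    public final class FrameDumpSketch {
        // Dumps every frame of the video as a numbered PNG that can then be
        // uploaded as an OpenGL texture.
        public static void dumpFrames(String ffmpegPath, String video, String outDir)
                throws IOException, InterruptedException {
            new ProcessBuilder(
                    ffmpegPath, "-i", video,
                    outDir + "/frame%04d.png")
                .inheritIO()
                .start()
                .waitFor();
        }
    }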
I was not aware of the legal implications of using the FFmpeg library.
My superior brought this to my notice: FFmpeg legal reference
Problem:
I am no legal expert, so the only thing I understood is that using FFmpeg in a commercial, free-to-download app (where the service itself must be paid for) is going to get me and my company into trouble :(
The source, or any part of it, cannot be released in any way. (The client is very strict about this.)
Questions:
1) Is there any alternative to FFmpeg (which uses Apache or MIT license) that I can use to obtain video frames?
2) In OpenGL, is getting the video frames and looping through them the only way to play a video? Is there an alternative way to achieve this functionality?
IANAL, but LGPL means that if you compile and use FFmpeg as a shared library (.so file) or a standalone executable, then you are fine, even in a closed-source application that you sell for money.