Hi everyone! I have a question. I need a video recorder (library project) that offers the possibility to set the maximum output file size and the resolution of the recorded video programmatically. I know that the native Android video recorder allows setting a max output file size, but it only offers two quality levels (best and worst). I need at least three different video resolutions. Does anyone know of a library that could help me solve this problem?
Also good to have:
-zoom;
-autofocus;
-flash;
Thanks!
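For reference, this is roughly what the stock MediaRecorder API itself exposes when you record directly (the best/worst limitation applies to the capture Intent's EXTRA_VIDEO_QUALITY). A minimal sketch; which profiles and sizes are actually available is device- and API-level-dependent, and the camera, preview surface and output path are placeholders:

    import android.hardware.Camera;
    import android.media.CamcorderProfile;
    import android.media.MediaRecorder;
    import android.view.Surface;

    // Sketch: record directly with MediaRecorder, picking a specific resolution.
    void startRecording(Camera camera, Surface previewSurface) throws Exception {
        camera.unlock();                                   // hand the camera over to MediaRecorder
        MediaRecorder recorder = new MediaRecorder();
        recorder.setCamera(camera);
        recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_720P)); // or QUALITY_480P, QUALITY_1080P, ...
        recorder.setMaxFileSize(10L * 1024 * 1024);        // max output file size in bytes
        recorder.setOutputFile("/sdcard/video.mp4");       // placeholder path
        recorder.setPreviewDisplay(previewSurface);
        recorder.prepare();
        recorder.start();
    }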
The best libraries I have found:
CWAC-Camera library (https://github.com/commonsguy/cwac-camera);
OpenCamera (http://opencamera.sourceforge.net/). But Open Camera is released under the GPL v3. The source code is available from https://sourceforge.net/projects/opencamera/files/.
To anyone reading this recently: CWAC-Camera, mentioned in the accepted answer, has been discontinued; the author has since reworked it into a newer library (cwac-cam2), which is found here. And the demo is here.
I also found Material Camera, which is under the Apache License v2.0. Here is the sample project.
There is also ragnraok/RxCamera, based on android.hardware.Camera, although it is said to still be at a very early stage; note also that the Android reference says android.hardware.camera2 is the recommended API now.
The best libraries I have found:
FFmpegVideoRecorder
The library provides a way to record multiple videos using MediaRecorder and merge them together using FFmpeg Recorder from JavaCV. It is designed to allow maximum customization of the video encoding and recording.
It has built-in activities for easy recording and previewing, but it also exposes basic components that can be used to customize your own UI and logic.
General Features
Able to record multiple clips and combine them into one video
Camera preview image is scaled, cropped, and padded to exactly how it will be recorded
Can generate a thumbnail image for the video
Can set recording parameters such as (see the JavaCV sketch after this list):
video codec
video width
video height
video frame rate
...etc
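Since the recording is ultimately done by JavaCV's FFmpeg recorder, those parameters correspond to what FFmpegFrameRecorder exposes. A rough sketch of setting them directly with JavaCV (the import package differs between JavaCV versions, and the values are only examples):

    import org.bytedeco.javacpp.avcodec;
    import org.bytedeco.javacv.FFmpegFrameRecorder;

    // Sketch: codec, size, frame rate and bitrate for the recorded/merged output.
    FFmpegFrameRecorder recorder = new FFmpegFrameRecorder("/sdcard/output.mp4", 640, 480, 1);
    recorder.setFormat("mp4");
    recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);  // video codec
    recorder.setFrameRate(30);                         // video frame rate
    recorder.setVideoBitrate(1000000);                 // ~1 Mbit/s
    recorder.start();
    // ... recorder.record(frame) for each frame ...
    recorder.stop();
    recorder.release();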
To anyone reading this recently: the author has mentioned that he plans to DISCONTINUE the cwac-cam2 project as well.
Here: https://github.com/commonsguy/cwac-cam2/issues/336
I don't know how to get the video frame, so I can't save the image.
Can anyone give me some tips? Thanks a lot.
As the canvas and the rest of the usual facilities are not available in this case, we can work around the situation by taking screenshots and introducing an animation in our app's UI. The screenshot image can be stored at a configured location and reused later for exchanging it with the other party (a sketch follows below).
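A minimal sketch of that screenshot idea for an ordinary View (note that this does not capture SurfaceView/GLSurfaceView content, as explained in the edit below; the output path is a placeholder):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.View;
    import java.io.FileOutputStream;

    // Draw the view hierarchy into a bitmap and store it for later reuse.
    public static void saveScreenshot(View view, String path) throws Exception {
        Bitmap bitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(),
                Bitmap.Config.ARGB_8888);
        view.draw(new Canvas(bitmap));                 // render the view into the bitmap
        FileOutputStream out = new FileOutputStream(path);
        bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.close();
    }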
Edit: One can take AppRTC as a reference for capturing the SurfaceView:
https://codereview.webrtc.org/1257043004/
Capturing a GLSurfaceView directly will not work, as the WebRTC library holds the camera and the screen. One has to build an extended class to get the VideoRenderer and take a snapshot of a frame; once done, one can display the frame using the customized displayFrame() API mentioned by cferran in the OpenTok Android examples.
You can also use the OpenTok library, but it is chargeable, whereas WebRTC is not.
If you are interested in using a third party library here is an example on how to implement this use case: https://github.com/opentok/opentok-android-sdk-samples/tree/master/Live-Photo-Capture
If you prefer to use WebRTC directly, here you can find generic information about how to build WebRTC on Android: https://webrtc.org/native-code/android/
This question might have been asked many times. I searched everywhere but could not find the correct answer. I am using ExoPlayer in my project to play HLS videos.
I want to give the user the option to select the bandwidth/quality of the video, something like what YouTube does. Any idea how this can be achieved using ExoPlayer?
From ExoPlayer issue tracker:
ExoPlayer currently selects the first variant listed in the master
playlist. If I remember correctly, this is what Apple
recommends/specifies as correct client behavior. If you want to start
in the lowest quality, you should technically have your server
generate the master playlist with the lowest quality listed first.
The above aside, we do agree that it makes more sense for the client
to make the initial variant selection locally, as opposed to the
recommended behavior. We'll be moving HLS over to use FormatEvaluator
in ExoPlayer V2, which will give more control over the initial
selection (and over the adaptive algorithm in general).
And as the solution, this comment:
Have a look at AdaptiveTrackSelection.Factory. Its parameters may
provide enough customization for your case.
I think this should give you either the answer you need, or it will guide you in the right direction.
Note that you have to provide the streams in lower qualities to be able to use this. If you ONLY have the video in HD, ExoPlayer can't downsample it; that's not its job and not what it is intended for. ExoPlayer can only switch smoothly between multiple resolutions of the video when they are provided (a sketch follows below).
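As a rough sketch of that suggestion (the exact constructors and parameter builders have changed between ExoPlayer 2.x releases, so treat this as the general shape rather than copy-paste code; context is your Activity/Application context):

    import com.google.android.exoplayer2.ExoPlayerFactory;
    import com.google.android.exoplayer2.SimpleExoPlayer;
    import com.google.android.exoplayer2.trackselection.AdaptiveTrackSelection;
    import com.google.android.exoplayer2.trackselection.DefaultTrackSelector;
    import com.google.android.exoplayer2.trackselection.TrackSelection;
    import com.google.android.exoplayer2.upstream.DefaultBandwidthMeter;

    // Adaptive selection across the HLS variants by default.
    DefaultBandwidthMeter bandwidthMeter = new DefaultBandwidthMeter();
    TrackSelection.Factory adaptiveFactory = new AdaptiveTrackSelection.Factory(bandwidthMeter);
    DefaultTrackSelector trackSelector = new DefaultTrackSelector(adaptiveFactory);
    SimpleExoPlayer player = ExoPlayerFactory.newSimpleInstance(context, trackSelector);

    // Later, when the user picks a quality (e.g. "480p") from your own menu,
    // constrain the selector so only variants at or below that size are chosen.
    trackSelector.setParameters(
            trackSelector.buildUponParameters().setMaxVideoSize(854, 480));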
I am working on an app like Instagram where I have to apply filters to an already created video stored on the SD card. I have searched a lot, and at the end of the day I found one library named FFmpeg, but didn't get any help with it. I am a newbie in video filtering. I have set up the NDK but don't know how to use this library. Is there any other way of applying filters to a video and creating a new video?
Well, if you have a problem configuring FFmpeg for Android, the FFmpeg wiki and this popular question have good explanations of it. Apart from that, to apply colour effects to a video you need to know which properties of the video need to be changed.
Here you can find some of the filters that can be used, with their property values. You can use those values with FFmpeg. When converting those CSS values to the FFmpeg context, you can use the comprehensive documentation provided by the W3C for CSS. Further, you can play around and create fancy filters with FFmpeg. I faced the same kind of issue, and here I have explained the solution for some of it. Applying this kind of change to a video requires re-encoding, so the process will take significant time compared to applying the same effect to an image. So bear with it.
FFmpeg comprises several different filters for manipulating colour levels and related properties like brightness, saturation, etc. You can find those filters in the FFmpeg documentation. Always try to follow the documentation, as most of the time it provides the solution to our problems. An example command is shown below.
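For illustration, applying one of the stock colour filters from the command line looks like this (the eq filter and its options are from the FFmpeg filter documentation; file names and values are placeholders):

    ffmpeg -i input.mp4 -vf "eq=brightness=0.06:contrast=1.1:saturation=1.5" -c:a copy output.mp4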
Hope this helps!
I want to apply some effects to live video taken by the camera. By browsing through some links, I found that this could be achieved by using video filter libraries. Can anyone suggest any such libraries that fit my case?
The same kind of question has already been asked here and there; however, there is no response to them.
Take a look at OpenCV. There are some example demos using OpenCV on Android. I believe you should be able to build some video filters with OpenCV.
Also, take a look at some of the augmented reality libraries; they contain some of the same mechanisms that video filters use as well. A rough OpenCV sketch follows below.
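A minimal sketch of a per-frame effect with OpenCV's Android camera view, using the standard CvCameraViewListener2 callback (the grayscale conversion is just an example effect):

    import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;

    // In an Activity that implements CameraBridgeViewBase.CvCameraViewListener2:
    @Override
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();                             // current camera frame
        Mat gray = new Mat();
        Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);    // example "filter"
        Imgproc.cvtColor(gray, rgba, Imgproc.COLOR_GRAY2RGBA);    // back to RGBA for display
        return rgba;                                              // returned frame is rendered
    }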
This question may sound a little bit complex or ambiguous, but I'll try to make it as clear as I can. I have done lots of Googling and spent lots of time but didn't find anything relevant for windows.
I want to play two videos on a single screen. One as full screen in background and one on top of it in a small window or small width/height in the right corner. Then I want an output which consists of both videos playing together on a single screen.
So basically one video overlays another and then I want that streamed as output so the user can play that stream later.
I am not asking you to write the whole code, just tell me what to do or how to do it or which tool or third party SDK I have to use to make it happen.
Update:
I have tried a lot of solutions.
1. Xuggler - doesn't support Android.
2. JavaCV or JJMPEG - not able to find any tutorial that shows how to do it.
3. Now looking at FFmpeg - searched for a long time but not able to find any tutorial that shows how to do it in code; I only found the command-line way of doing it.
So can anyone suggest or point to a tutorial for FFmpeg, or tell me any other way to achieve this?
I would start with JavaCV. It's quite good and flexible. It should allow you to grab frames, composite them, and write them back to a file. Use the FFmpegFrameGrabber and FFmpegFrameRecorder classes; the composition can be done manually (see the sketch after the questions below).
The rest of the answer depends on a few things:
do you want to read from a file/mem/url?
do you want to save to a file/mem/url?
do you need realtime processing?
do you need something more than simple picture-in-picture?
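For the simple picture-in-picture case, a rough sketch of the JavaCV route (class names are JavaCV's; the compositing is done by hand on Android Bitmaps, the paths are placeholders, and the JavaCV calls throw checked exceptions, so run this on a background thread inside a try/catch):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import org.bytedeco.javacv.AndroidFrameConverter;
    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.FFmpegFrameRecorder;
    import org.bytedeco.javacv.Frame;

    AndroidFrameConverter converter = new AndroidFrameConverter();
    FFmpegFrameGrabber big = new FFmpegFrameGrabber("/sdcard/background.mp4");
    FFmpegFrameGrabber small = new FFmpegFrameGrabber("/sdcard/overlay.mp4");
    big.start();
    small.start();

    FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
            "/sdcard/pip.mp4", big.getImageWidth(), big.getImageHeight());
    recorder.setFrameRate(big.getFrameRate());
    recorder.start();

    Frame bigFrame;
    while ((bigFrame = big.grabImage()) != null) {
        Frame smallFrame = small.grabImage();
        Bitmap composed = converter.convert(bigFrame).copy(Bitmap.Config.ARGB_8888, true);
        Canvas canvas = new Canvas(composed);
        if (smallFrame != null) {
            // Draw the second video at quarter size in the top-right corner.
            Bitmap overlay = Bitmap.createScaledBitmap(converter.convert(smallFrame),
                    composed.getWidth() / 4, composed.getHeight() / 4, true);
            canvas.drawBitmap(overlay, composed.getWidth() * 3 / 4, 0, null);
        }
        recorder.record(converter.convert(composed));
    }
    recorder.stop();
    recorder.release();
    big.stop();
    small.stop();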
You could use OpenGL to do the trick. Please note, however, that you will need two render steps: one rendering the first video into an FBO, and then a second rendering the second video, using the FBO as TEXTURE0 and the second video as an EXTERNAL_TEXTURE.
Blending and all the other effects you want would be done by OpenGL.
You can check the source code here: Using SurfaceTexture in Android, and some important information here: Android OpenGL combination of SurfaceTexture (external image) and ordinary texture.
The only thing I'm not sure about is what happens when two instances of MediaPlayer are running in parallel. I guess it should not be a problem.
FFmpeg is a very active project, with lots of changes and releases all the time.
You should look at the Xuggler project; it provides a Java API for what you want to do, and it has tight integration with FFmpeg.
http://www.xuggle.com/xuggler/
Should you choose to go down the Runtime.exec() path, this Red5 thread should be useful:
http://www.nabble.com/java-call-ffmpeg-ts15886850.html
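If you go down that Runtime.exec() path, the picture-in-picture itself maps onto FFmpeg's overlay filter. A sketch, assuming an ffmpeg binary is actually available to execute (paths and offsets are placeholders):

    // Second input is drawn 10 px in from the bottom-right corner of the first.
    String[] cmd = {
        "ffmpeg",
        "-i", "/sdcard/background.mp4",
        "-i", "/sdcard/overlay.mp4",
        "-filter_complex", "[0:v][1:v]overlay=main_w-overlay_w-10:main_h-overlay_h-10",
        "-c:a", "copy",
        "/sdcard/pip.mp4"
    };
    Process process = Runtime.getRuntime().exec(cmd);
    int exitCode = process.waitFor();   // 0 means ffmpeg finished successfully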