I am trying to build the FFmpeg library to use in my Android app with the NDK. The reason for this is that I am using Android's native video capture feature, because I really don't want to write my own video recorder. However, native video capture only allows either high-quality or low-quality encoding. I want something in between, and I believe the solution is to use the FFmpeg library to re-encode the high-quality video to be lighter.
So far I have been able to build the FFmpeg library according to this guide: http://www.roman10.net/how-to-build-ffmpeg-for-android/ and with a few tweaks I have been able to get it to work.
However, everything I've found seems to be about writing your own encoder, which seems like overkill to me. All I really want to do is send a string in command-line format to FFmpeg's main() function and re-encode my video. However, I can't figure out how to build FFmpeg so that it gives me access to the main method. I found this post: Compile ffmpeg.c and call its main() via JNI, which links to a project doing more or less what I want, but for the life of me I cannot figure out what is going on. It also seems like he is compiling more than I need, and I would really like to keep my application as lightweight as possible.
Some additional direction would be extremely helpful. Thank you.
In the Android NDK there is no main() in your application in the typical sense, so you cannot do what you want directly. However, you can still call FFmpeg's main() yourself and provide all the necessary parameters to it; a minimal JNI sketch follows the two options below. Here are two possibilities for getting the parameters:
An Android Activity receives an Intent after creation. You can pass the parameters through the Intent while starting your activity and then extract them like this:
// Get the Intent that started this Activity and read the
// command-line parameters passed through its data Uri.
Intent commandLine = this.getIntent();
Uri uri = commandLine.getData();
You can read the parameters from a file you create somewhere on the SD card and pass them to FFmpeg.
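For the FFmpeg call itself, here is a minimal sketch of the JNI route. All names here are hypothetical - FFmpeg does not ship a Java wrapper, so you have to write a thin native layer yourself that converts the String[] to argc/argv and forwards it to the main() in ffmpeg.c:

// Hypothetical bridge class: "ffmpegjni" and runCommand() are
// illustrative names, not part of FFmpeg itself.
public class FFmpegBridge {
    static {
        // Your .so built from ffmpeg.c plus a small JNI wrapper.
        System.loadLibrary("ffmpegjni");
    }

    public static native int runCommand(String[] args);

    // Example: re-encode a captured high-quality video at a lower bitrate.
    public static int reencode(String inPath, String outPath) {
        return runCommand(new String[] {
            "ffmpeg", "-i", inPath, "-b:v", "1000k", outPath
        });
    }
}

One caveat: ffmpeg.c calls exit() on errors and at the end of a run, so it is common to patch that out before building - otherwise a failed command can take your whole app down with it.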
Related
I'm working on a feature in which I want to add a picture over a video and save it to the SD card.
In general, the user selects an image with a semi-transparent background and puts that image above the video; after the user presses the save button, he gets a new video with the image overlaid on it.
I have heard about ffmpeg and have seen some of the commands it provides, but I don't know where to start. Can anyone provide an example for this?
Thank you.
One common approach is to use an ffmpeg wrapper to access ffmpeg functionality from your Android app.
There are several fairly widely used wrappers available on GitHub - the ones below are particularly well featured and documented (note: I have not used these, as they were not so mature when I was looking at this previously, but if I were doing something like this again now I would definitely build on one of these):
http://writingminds.github.io/ffmpeg-android-java/
https://github.com/guardianproject/android-ffmpeg
Using one of these well supported and widely used libraries will take care of some common issues you might otherwise encounter - having to load different binaries for different processor types, and some tricky issues with native library reloading to avoid crashes on subsequent invocations of the wrapper.
Because this approach uses the standard ffmpeg command-line syntax, it also means you should be able to find help easily for many different operations (anyone using ffmpeg in 'normal' mode will use the same syntax for the ffmpeg command itself).
For example, for your image-overlay case, here are some results from a quick search (ffmpeg syntax can change over time, so it is worth doing a current check):
https://stackoverflow.com/a/32250369/334402
https://superuser.com/a/678171
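As an illustration of what such a command looks like, here is a sketch: the file paths and overlay position below are placeholders, and the exact wrapper call for running it differs between libraries and versions, so check the wrapper's own docs.

// Illustrative only: paths and coordinates are placeholders.
// The overlay filter draws the second input (a PNG with transparency
// preserved) on top of the first at x=10, y=10; audio is copied unchanged.
String[] cmd = {
    "-i", "/sdcard/input.mp4",
    "-i", "/sdcard/watermark.png",
    "-filter_complex", "overlay=10:10",
    "-codec:a", "copy",
    "/sdcard/output.mp4"
};
// Pass cmd to the wrapper's execute method (see its documentation
// for the exact signature).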
I am working on a MATLAB project where I add effects to audio files (mp3, wav). To do this, I load the files into arrays using the MATLAB function audioread(..).
Now I want to port this to Android. I read that the best way is to use MATLAB Coder to convert the MATLAB code to C/C++ (or Java) and then bring that into the Android app (more or less).
However, the functions audioplayer (and play) are unsupported (that's what the code generation readiness report says).
What can I do? One idea was to play the sounds directly from C++ code (so after the code generation). But how do you play sounds from arrays in C++?
Or if you have other ideas that don't involve touching C++ code (i.e., fixing the problem directly in MATLAB), I would be glad to hear them!
Thanks and have a good day!
Typically what I recommend in cases like this is to factor your code into two pieces:
The part that does the audio file I/O and audio playing (namely the OS-specific part)
The computational kernel for which you will generate code using MATLAB Coder. This piece usually takes numeric arrays representing the image or audio data as arguments.
I've used this approach to leverage MATLAB Coder generated code to do image filtering on Android.
To do part (1), as Navan says, you'll need to use Android APIs to read in audio files, write data back to files, and to play them as desired. Note, I haven't done extensive Android development, so doing these tasks may take some research or be difficult.
Once you have the data in a format suitable for the function(s) in (2), likely a numeric array, then you can call your generated code using JNI to add the desired effects. The generated code would return the data back to the Java code and you can then encode it, play it, or do as you please with it using the Android APIs.
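As a rough sketch of the Java side of this, Android's AudioTrack can play a PCM array directly; applyEffect() and the library name below are hypothetical stand-ins for your MATLAB Coder generated kernel called through JNI:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class EffectPlayer {
    static {
        // Hypothetical .so built from the MATLAB Coder generated C code.
        System.loadLibrary("matlabkernel");
    }

    // Hypothetical JNI entry point for the generated effect kernel.
    private static native short[] applyEffect(short[] samples, int sampleRate);

    public static void playWithEffect(short[] pcm, int sampleRate) {
        short[] processed = applyEffect(pcm, sampleRate);
        AudioTrack track = new AudioTrack(
                AudioManager.STREAM_MUSIC,
                sampleRate,
                AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                processed.length * 2,     // buffer size in bytes
                AudioTrack.MODE_STATIC);  // whole buffer written before play()
        track.write(processed, 0, processed.length);
        track.play();
    }
}

MODE_STATIC fits the process-then-play pattern here; for long recordings you would use MODE_STREAM and write the array in chunks instead.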
Playing audio normally uses platform-dependent libraries. In the DSP System Toolbox there is an audio player object called dsp.AudioPlayer which supports C code generation. But I believe it uses platform-dependent libraries in the generated code, and it will not be straightforward to make it work on Android. You will be better off finding an audio player library for Android and hooking that in manually after generating the code.
I am trying to build a video recording system on Android 4.2.2. I've done the encoding part, which uses OMX. Now I am working on the muxer part; since the video bitstream can come out a little different if I use FFmpeg, I wish to use the exact same muxer tool as the original system.
So I want to extract the muxer part of StagefrightRecorder, compile it into a .so file, and then call it via JNI in my application. But there is a lot of stuff in StagefrightRecorder, and I am confused.
Can this way work? Can I just extract the code relevant to MPEG4Writer? Can anyone give me any instructions?
Thanks!
If you are compiling within the context of the framework, you could simply include the relevant header files and create the MPEG4Writer object directly. A very good example of this is the command-line utility recordVideo, as can be observed from this file.
If you wish to write a separate application, then you need to link with libstagefright.so and include the relevant header files and their path.
Note: If you wish to work with the standard MPEG4Writer, its source (i.e., the encoder feeding the writer) should be modeled as a MediaSource. The writer pulls the metadata and the actual bitstream through the read method, so it is recommended to employ a standard built-in object such as OMXCodec or ACodec for the encoder.
I've been trying to use an ffmpeg binary with command-line access for a while now and getting nowhere (using Runtime.exec).
It looks like the only way I'll be able to get it to work is using a wrapper in C to access the built ffmpeg libraries using JNI...
Main problem: I haven't coded C for more than one and a half decades now and wouldn't know where to begin...
I just need three operations: add audio to a video file, concatenate two video files, and if possible rotate a clip by 90 degrees (but I could do without the last one)...
Does anyone have any example code that could work for me, or some good places to start? (I've already exhausted the first pages of various Google results to no avail...)
Any help would be greatly appreciated!
There are many open source projects available, but for simplicity you can start from here:
I believe this is what you are looking for:
https://github.com/hoary/JavaAV
Multiple platforms are supported, so your code will be more portable.
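Whichever route you take, your three operations map onto fairly standard ffmpeg invocations. These are untested sketches with placeholder file names, and ffmpeg syntax varies between versions, so double-check against current docs:

Add audio to a video: ffmpeg -i video.mp4 -i audio.mp3 -map 0:v -map 1:a -c:v copy out.mp4
Concatenate two videos: ffmpeg -i a.mp4 -i b.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" out.mp4
Rotate 90 degrees clockwise: ffmpeg -i in.mp4 -vf transpose=1 out.mp4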
We need an Android app that can encode a folder of images to a video. I have been looking for solutions for a while now but cannot find anything good. The Android API does not support it. We are trying ffmpeg but cannot get it to work. We need a working solution; using ffmpeg is not mandatory. A pure Java Android solution is also a possibility, since it would work on all Android devices, possibly at the cost of some performance.
The app also needs to be able to add an audio track to the movie if the user chooses to do this.
Any help would be appreciated.
Kind regards,
Aäron
From the FFmpeg FAQ entry "How do I encode single pictures into movies?":
First, rename your pictures to follow a numerical sequence. For example, img1.jpg, img2.jpg, img3.jpg,... Then you may run:
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
Adding an audio track should just involve adding another input (e.g., -i audio.mp3), but it could also require explicit -map options with older versions.
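For example (untested; -shortest ends the output when the shorter of the two inputs runs out):
ffmpeg -f image2 -i img%d.jpg -i audio.mp3 -shortest /tmp/a.mpg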