I am new to Android. I have two files of the same length: one is an audio file and the other is a video file with no audio. I want to make a video with audio by combining these two files. Please help me achieve this.
I assume you have a native Android app, are familiar with Java (or know how to port the code to native C), and are willing to use other open-source classes in your project.
This is what might give you a head start. Since the project is no longer actively maintained, you may have to fork it and reuse its logic in your own code (a rough sketch of the merge follows the links below).
https://github.com/tqnst/MP4ParserMergeAudioVideo
Another alternative is an ffmpeg port for Android (however, I am not sure how well this performs natively).
https://github.com/WritingMinds/ffmpeg-android-java
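For the mp4parser route, a rough Kotlin sketch of the merge might look like this (a minimal sketch, assuming the com.googlecode.mp4parser "isoparser" artifact is on the classpath and that the audio file is in an MP4-compatible container such as M4A/AAC; the paths are placeholders and error handling is omitted):

```kotlin
import com.googlecode.mp4parser.authoring.Movie
import com.googlecode.mp4parser.authoring.builder.DefaultMp4Builder
import com.googlecode.mp4parser.authoring.container.mp4.MovieCreator
import java.io.FileOutputStream

// Mux the video track(s) from one file together with the audio track(s)
// from another. Both inputs are assumed to have the same duration.
fun mergeAudioVideo(videoPath: String, audioPath: String, outputPath: String) {
    val video: Movie = MovieCreator.build(videoPath)   // video-only MP4
    val audio: Movie = MovieCreator.build(audioPath)   // audio-only MP4 (e.g. AAC/M4A)

    // Add every audio track to the video movie; no re-timing is done here.
    audio.tracks.forEach { video.addTrack(it) }

    val container = DefaultMp4Builder().build(video)
    FileOutputStream(outputPath).channel.use { channel ->
        container.writeContainer(channel)
    }
}
```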
What are the standard ways of reading and writing audio files on Android / Kotlin?
I am very confused. I've found plenty of posts that discuss this at some level, but they all either give a third-party answer (someone's own implementation, like https://medium.com/@rizveeredwan/working-with-wav-files-in-android-52e9500297e, https://stackoverflow.com/a/43569709/4959635 or https://gist.github.com/kmark/d8b1b01fb0d2febf5770) or use some Java class whose relationship to the Android SDK I don't know (https://stackoverflow.com/a/26598862/4959635, https://gist.github.com/niusounds/3e49013a8e942cdba3fbfe1c336b61fc, https://github.com/google/oboe/issues/548#issuecomment-502758633).
I cannot find a standard way in the Android documentation. One answer said to use https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.io/java.io.-input-stream/read-bytes.html for reading, but I'm quite sure that doesn't parse the file header.
So what's the standard way of processing audio files on Android / Kotlin?
I'm already using dr_wav just fine on desktop, so I am actually thinking of just using that through the NDK and maybe writing a wrapper for it.
Your use case is not clear from the question.
Assuming that you need to process raw audio data (PCM samples), the standard way is to read the (compressed) input file with MediaExtractor and decode the packets with MediaCodec. Note that the documentation includes some example code.
The MediaCodec outputs ByteBuffers containing raw PCM samples. The binary format is described here.
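To make that pipeline concrete, here is a minimal, synchronous Kotlin sketch (assuming API 21+ and that the file actually contains an audio track; a production decoder would use the asynchronous callback API, handle output-format changes, and add proper error handling):

```kotlin
import android.media.MediaCodec
import android.media.MediaExtractor
import android.media.MediaFormat

// Extract the first audio track and feed its packets to a MediaCodec decoder;
// raw PCM (16-bit by default) comes out of the codec's output buffers.
fun decodeToPcm(path: String, onPcm: (ByteArray) -> Unit) {
    val extractor = MediaExtractor().apply { setDataSource(path) }
    val trackIndex = (0 until extractor.trackCount).first {
        extractor.getTrackFormat(it).getString(MediaFormat.KEY_MIME)!!.startsWith("audio/")
    }
    extractor.selectTrack(trackIndex)
    val format = extractor.getTrackFormat(trackIndex)
    val codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME)!!)
    codec.configure(format, null, null, 0)
    codec.start()

    val info = MediaCodec.BufferInfo()
    var inputDone = false
    var outputDone = false
    while (!outputDone) {
        if (!inputDone) {
            val inIndex = codec.dequeueInputBuffer(10_000L)
            if (inIndex >= 0) {
                val inBuf = codec.getInputBuffer(inIndex)!!
                val size = extractor.readSampleData(inBuf, 0)
                if (size < 0) {
                    codec.queueInputBuffer(inIndex, 0, 0, 0L, MediaCodec.BUFFER_FLAG_END_OF_STREAM)
                    inputDone = true
                } else {
                    codec.queueInputBuffer(inIndex, 0, size, extractor.sampleTime, 0)
                    extractor.advance()
                }
            }
        }
        val outIndex = codec.dequeueOutputBuffer(info, 10_000L)
        if (outIndex >= 0) {
            val outBuf = codec.getOutputBuffer(outIndex)!!
            val pcm = ByteArray(info.size).also { outBuf.get(it) }
            onPcm(pcm)   // hand the raw PCM chunk to the caller
            codec.releaseOutputBuffer(outIndex, false)
            if ((info.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) outputDone = true
        }
    }
    codec.stop(); codec.release(); extractor.release()
}
```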
Well, there is no strict standard.
In production you usually choose a stable third-party library, or your company's reusable internal solution, for this kind of task. You can still implement it yourself, but it will cost you time, since the implementation will most likely run to hundreds of lines of code and you will probably end up creating yet another variation of the solutions already available on the internet.
I'm working on a feature in which I want to add a picture over a video and save the result to the SD card.
In general, the user selects an image with a semi-transparent background and places that image over the video; after pressing the save button, the user gets a new video, but with the image overlaid on it.
I have heard about ffmpeg and have seen some of the commands it provides, but I don't know where I should initialize it. Can anyone provide an example of this?
Thank you.
One common approach is to use an ffmpeg wrapper to access ffmpeg functionality from your Android app.
There are several fairly well-used wrappers available on GitHub; the ones below are particularly well featured and documented (note: I have not used these myself, as they were not so mature when I was looking at this previously, but if I were doing something like this again now I would definitely build on one of them):
http://writingminds.github.io/ffmpeg-android-java/
https://github.com/guardianproject/android-ffmpeg
Using one of the well-supported and widely used libraries will take care of some common issues that you might otherwise encounter: having to load different binaries for different processor types, and some tricky native-library reloading issues needed to avoid crashes on subsequent invocations of the wrapper.
Because this approach uses the standard ffmpeg command-line syntax, it also means you should be able to search for and find help easily on many different operations (since anyone using ffmpeg in 'normal' mode will use the same syntax for the ffmpeg command itself).
For example, for your add-an-image case, here are some results from a quick search (ffmpeg syntax can change over time, so it is worth doing a current check); a sketch of what a wrapper call might look like follows the links:
https://stackoverflow.com/a/32250369/334402
https://superuser.com/a/678171
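For illustration only, a call through the ffmpeg-android-java wrapper might look roughly like this (a hedged sketch: the overlay position, codec options, and file paths are examples, the wrapper's binary must already have been loaded with loadBinary(), and the library's API may have changed since this answer was written):

```kotlin
import android.content.Context
import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler
import com.github.hiteshsondhi88.libffmpeg.FFmpeg

// Overlay a (semi-transparent) PNG on top of a video using the
// ffmpeg-android-java wrapper. Assumes loadBinary() has already succeeded.
fun overlayImage(context: Context, videoPath: String, imagePath: String, outPath: String) {
    val cmd = arrayOf(
        "-i", videoPath,
        "-i", imagePath,
        "-filter_complex", "overlay=10:10",  // place the image 10px from the top-left corner
        "-codec:a", "copy",                  // keep the original audio untouched
        outPath
    )
    FFmpeg.getInstance(context).execute(cmd, object : ExecuteBinaryResponseHandler() {
        override fun onSuccess(message: String?) { /* video written to outPath */ }
        override fun onFailure(message: String?) { /* inspect ffmpeg's output here */ }
    })
}
```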
I am working on a MATLAB project in which I add effects to audio files (MP3, WAV). To do so, I load the files into arrays using the MATLAB function audioread(..).
Now I want to port this to Android. I read that the best way is to use MATLAB Coder to generate C/C++ (or Java) code from the MATLAB code and then bring that into Android (more or less).
However, the function calls audioplayer (and play) are unsupported (that's what the code generation readiness report says).
What can I do? One idea was to play the sounds directly from C++ code (i.e., after code generation). But how do I play sounds from arrays in C++?
If you have other ideas that avoid touching C++ code (i.e., fixing the problem directly in MATLAB), I would be glad to hear them!
Thanks, and have a good day!
Typically what I recommend in cases like this is to factor your code into two pieces:
The part that does the audio file I/O and audio playing (namely the OS-specific part)
The computational kernel for which you will generate code using MATLAB Coder. This piece usually takes numeric arrays representing the image or audio data as arguments.
I've used this approach to leverage MATLAB Coder generated code to do image filtering on Android.
To do part (1), as Navan says, you'll need to use Android APIs to read in audio files, write data back to files, and play them as desired. Note that I haven't done extensive Android development, so these tasks may take some research or be difficult.
Once you have the data in a format suitable for the function(s) in (2), likely a numeric array, you can call your generated code through JNI to apply the desired effects. The generated code returns the data to the Java code, and you can then encode it, play it, or do whatever you please with it using the Android APIs.
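A very rough sketch of how the Kotlin/Java side of piece (2) might be declared is below; the library name, function name, and signature are hypothetical placeholders for whatever MATLAB Coder generates plus a thin hand-written JNI glue layer that marshals the arrays:

```kotlin
// Hypothetical JNI bridge: "audiofx" and applyEffects() are placeholder names.
// The real entry point depends on the code MATLAB Coder generates and on the
// hand-written JNI glue that converts Java arrays to/from the generated C types.
object AudioEffects {
    init {
        System.loadLibrary("audiofx")   // loads libaudiofx.so from the app's native-lib dir
    }

    // Takes raw samples (e.g. decoded PCM) and returns the processed samples
    // produced by the MATLAB Coder generated kernel.
    external fun applyEffects(samples: DoubleArray, sampleRate: Int): DoubleArray
}
```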
Playing audio normally relies on platform-dependent libraries. The DSP System Toolbox has an audio player object called dsp.AudioPlayer which supports C code generation, but I believe it uses platform-dependent libraries in the generated code, and it will not be straightforward to make that work on Android. You will be better off finding an audio player library for Android and hooking that in manually after generating code.
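If you do the playback on the Android side, AudioTrack is the usual low-level API for playing raw sample arrays; a minimal sketch for 16-bit mono PCM (assuming API 23+ for the Builder) might look like this:

```kotlin
import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioTrack

// Play a 16-bit mono PCM buffer with AudioTrack in static mode:
// write the whole buffer first, then start playback.
fun playPcm(samples: ShortArray, sampleRate: Int = 44_100) {
    val track = AudioTrack.Builder()
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build()
        )
        .setAudioFormat(
            AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(sampleRate)
                .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                .build()
        )
        .setTransferMode(AudioTrack.MODE_STATIC)
        .setBufferSizeInBytes(samples.size * 2)   // 2 bytes per 16-bit sample
        .build()

    track.write(samples, 0, samples.size)
    track.play()   // call track.release() once playback has finished
}
```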
I have to modify the HTTP Live Streaming implementation of the Android media player.
The implementation is in the stagefright library:
http://androidxref.com/4.0.4/xref/frameworks/base/media/libstagefright/httplive/LiveDataSource.cpp
I think this library is compiled into libstagefright.so, which is part of the Android system.
My question: if I make some changes to this library and compile a new libstagefright.so, then load that new libstagefright.so in my application and call up the media player, will it use the code in my new libstagefright.so?
You will not be able to replace the original library: when you call loadLibrary, the library is loaded from /system/lib. So unless you replace that copy (which is not possible on unrooted devices), you won't be able to load your custom code.
https://github.com/android/platform_system_core/blob/66ed50af6870210ce013a5588a688434a5d48ee9/rootdir/init.environ.rc.in sets LD_LIBRARY_PATH by default, and libraries are loaded from those paths when available. Only if a library is not found there is your application's lib directory searched; it does not work the other way around.
I tried this with libwebkit.so in the past on various mainstream devices and had no luck getting my copy to load instead of the one in /system/lib.
You can learn more by looking at:
doLoad from here https://android.googlesource.com/platform/libcore/+/41d00b744b7772f9302fdb94dddadb165b951220/luni/src/main/java/java/lang/Runtime.java
findLibrary here http://developer.android.com/reference/dalvik/system/BaseDexClassLoader.html#findLibrary(java.lang.String)
I'm pretty sure you can't replace the default class loader either for security reasons.
What you can do, though, is a straightforward fork of the media player that loads your modified libstagefright-modified.so. There could be other solutions; I haven't looked at the media player's code.
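For that fork approach, the modified build has to be given a different name and loaded explicitly, since (as described above) a library with the stock name resolves to the copy in /system/lib; a tiny sketch:

```kotlin
// Load a renamed copy of the modified library bundled with the APK.
// System.loadLibrary("stagefright") would resolve to the stock /system/lib copy,
// so the custom build must ship under a different name.
object ModifiedStagefright {
    init {
        // "stagefright-modified" maps to libstagefright-modified.so in the app's
        // native-lib directory (e.g. jniLibs/<abi>/).
        System.loadLibrary("stagefright-modified")
    }
}
```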
Knowing that all you want to do is parse the data before it gets to the MediaPlayer, I suggest not trying to alter the Android libraries. As soulseekah mentioned, it's not going to work without a rooted device. There are other options, although they both have drawbacks.
1) If you are only targeting recent versions (4.2 or later, I believe), you can take a look at new classes added to the android.media package, like MediaExtractor and MediaCodec. I'm not greatly familiar with those because they aren't available on the hardware with which I work, but they could be useful in getting to the raw data. Here is a decent sample of using them to play video. The drawback is those classes aren't available in earlier versions.
2) The other option is to put a local proxy on the device. Connect the MediaPlayer to the proxy and make the request to the media server yourself. See my answer here for a little more info on that. With a proxy, you will see all the data that comes through, giving you a chance to parse the ID3 tags. There is the drawback that you will have to parse the TS packets to put together an elementary stream (essentially doing the demuxer's job), but it will work with any version of Android. TS streams aren't difficult to disassemble, and ID3 tags aren't time consuming to parse, so I think this is a reasonable approach.
I want to make an app that downloads a specific MP3 file, allows the user to crop/trim it, and re-uploads it to our server. How can I trim MP3 files inside my app? Do I need to do this in C and then port it / use the NDK? If MP3 is tough, I can switch to other, easier formats too.
It would be great if anyone could point me in the right direction.
I think that, of all the alternatives, using something like LAME and compiling it for the NDK is your best option. Be warned that there are patent restrictions on the MP3 format (most of which I don't understand), and you may run into problems if you distribute your app. YMMV.
I haven't tested it on Android yet, but there is a pure Java MP3 decoder/player I wrote a while ago. Maybe you can use parts of it. The code is based on JLayer from JavaZoom (also LGPL).
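If it helps, the core of a JLayer-style decode loop looks roughly like this in Kotlin (a hedged sketch using the javazoom.jl.decoder classes from JLayer; not tested on Android):

```kotlin
import javazoom.jl.decoder.Bitstream
import javazoom.jl.decoder.Decoder
import javazoom.jl.decoder.SampleBuffer
import java.io.FileInputStream

// Read MP3 frames one by one and hand the decoded PCM of each frame to the caller.
fun decodeMp3(path: String, onFrame: (samples: ShortArray, sampleRate: Int) -> Unit) {
    val bitstream = Bitstream(FileInputStream(path).buffered())
    val decoder = Decoder()
    while (true) {
        val header = bitstream.readFrame() ?: break          // null at end of stream
        val output = decoder.decodeFrame(header, bitstream) as SampleBuffer
        onFrame(output.buffer.copyOf(output.bufferLength), output.sampleFrequency)
        bitstream.closeFrame()                               // advance to the next frame
    }
    bitstream.close()
}
```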
This library helps to trim MP3, AAC/MP4, WAV, and 3GPP/AMR files:
http://code.google.com/p/ringdroid/