I am using ffmpeg to decode a file and play it back on an android device. I have this working and would now like to decode two streams at the same time. I have read some comments regarding needing to use av_lockmgr_register() call with ffmpeg, unfortunately I am not sure how to use these and how the flow would work when using these locks.
Currently I have separate threads on the Java side making requests through JNI to native code that communicates with ffmpeg.
Do the threads need to be on the native (NDK) side, or can I manage them on the Java side? And do I need to do any locking, and if so, how does that work with ffmpeg?
***UPDATE
I have this working now; it appears that setting up the threads at the Java SDK level carries over into separate threads at the native level. With that I was able to create a struct holding my variables, and then pass a value to the native layer to specify which struct to use for each video. So far I have not needed any mutexes or locks at the native level, and I haven't had any issues.
Does anyone know of potential gotchas I may encounter by not doing so with ffmpeg?
I'll answer this myself: the approach from my latest update appears to be working. By controlling the threads from the Java layer and making my native calls on separate threads, everything works and I have not encountered any issues.
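For reference, here is a rough sketch of the kind of per-stream context I mean. The names (PlayerContext, nativeOpen, MAX_STREAMS, the Java package) are placeholders, not a real API; each context is only ever touched by the Java thread that owns it, which is why no native mutexes were needed:

#include <jni.h>
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

#define MAX_STREAMS 2

// One self-contained decoder state per video.
struct PlayerContext {
    AVFormatContext* fmt;     // demuxer state for this file
    AVCodecContext*  codec;   // decoder for the chosen video stream
    int              video_stream;
};

static PlayerContext players[MAX_STREAMS];

// Each Java thread passes its own id, so the threads never share a context.
extern "C" JNIEXPORT jint JNICALL
Java_com_example_player_Native_nativeOpen(JNIEnv* env, jobject,
                                          jint id, jstring jpath) {
    if (id < 0 || id >= MAX_STREAMS) return -1;
    PlayerContext* p = &players[id];

    // On older FFmpeg builds, also call av_register_all() once at startup.
    const char* path = env->GetStringUTFChars(jpath, nullptr);
    int err = avformat_open_input(&p->fmt, path, nullptr, nullptr);
    env->ReleaseStringUTFChars(jpath, path);
    if (err < 0) return err;

    if (avformat_find_stream_info(p->fmt, nullptr) < 0) return -1;
    // ...find the video stream and open its decoder exactly as in the
    // single-stream version, storing everything in *p.
    return 0;
}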
Related
I have checked this question.
It is very similar:
I want to record a video with the Android camera.
After that I want to remove the background with a library, using chroma key.
At first I thought I should use the Android NDK in order to escape the SDK memory limitation and use all of the available memory.
The video is short, only a few seconds, so maybe the SDK can handle it.
I would prefer an SDK implementation and to set android:largeHeap="true", to avoid problems with mismatched .so file architectures.
Any library suggestions for the SDK or NDK, please?
IMO you should prefer an NDK-based solution, since video processing is a CPU-intensive operation and Java code won't give you better performance. Moreover, the most popular and reliable media-processing libraries are often written in C or C++.
I'd recommend you take a look at FFmpeg. It offers rich capabilities for working with multimedia. The chromakey filter may help you remove the green background (or whatever color you want). Then you can use another video as the new background, if needed. See the blend filter docs.
Filters are a nice and powerful concept. They can be used either via the ffmpeg command-line tool or via the libavfilter API. For the former, you should find an ffmpeg binary compiled for Android and run it with the traditional Runtime.exec(). For the latter, you need to write native code that creates the proper filter graph and performs the processing; that code must be linked against the FFmpeg libraries.
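As an illustration only (file names, the binary path, the key color and the similarity/blend values are placeholders), this is roughly what driving a bundled ffmpeg binary with a chromakey-plus-overlay filter graph could look like; the same filter string works when you launch the binary from Java with Runtime.exec():

#include <cstdio>
#include <cstdlib>

int main() {
    // Hypothetical location of an ffmpeg binary bundled with the app.
    const char* ffmpeg = "/data/data/com.example.app/files/ffmpeg";

    // [1:v] is the camera clip: key out green, then overlay it on the new background.
    const char* filter =
        "[1:v]chromakey=0x00FF00:0.1:0.2[fg];[0:v][fg]overlay[out]";

    char cmd[1024];
    std::snprintf(cmd, sizeof(cmd),
                  "%s -i background.mp4 -i camera.mp4 "
                  "-filter_complex \"%s\" -map \"[out]\" result.mp4",
                  ffmpeg, filter);

    // Runs the external ffmpeg process; on Android you would more likely
    // launch it from the Java side instead of system().
    return std::system(cmd);
}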
How is native code generated in Titanium? I have read the documentation on the internet, and from it I can only understand the high-level architecture, but I need more details about the in-depth workings. For example, when we create a button in Titanium using Ti.UI.createButton(), how is this bound to native code, and how do we get the same button we would get using native code?
Is a UIButton object created and returned (talking only about iOS), or is the execution flow different? Also, where should I look in the native code for a better understanding?
First of all, how it works is different for each platform, so it is impossible to generalize effectively since the platforms are so specific.
For iOS, Titanium uses native bridge wrapper objects called KrollObjects. These proxy objects form a bridge from a JavaScript object to the native object in native code. For your UIButton use case, the UIButton is created but is not returned to the JavaScript; you control it through the Kroll bridge. (As a side note, Kroll is the process of refining the material titanium; punny.)
Really, you don't need to know the intrinsic details of how it works to write modules, especially since that requires a huge amount of native platform knowledge (in which case there's no reason for you to use Titanium).
Here is a great video on how it all works from the last Codestrong. If you really want to know how garbage collection and the lifecycle of objects work, study this video.
I have a question about the limitations of what you can do in native code on the Android platform.
Basically I have developed a library in native C code that uses UDP sockets for SIP/RTP and uses OpenAL for audio recording/playback - basically the whole application.
The idea is to have as much as possible in native C code rather than Java code. I want to do this because I am going to use it on other platforms as well.
My question then is simply: is it possible to just use Java for the GUI and do all the processing in native code?
What will happen when my native code tries to create a socket, bind it, record audio, play it, etc.? Since it is in native code, do I need to set up permissions for it (such as the application accessing the microphone and so on), or will it just bypass this stuff since it's native code?
Can native code do pretty much anything it wants on Android like on PCs?
Sorry if it's unclear; just tell me and I'll try to improve it.
Thanks
You can do pretty much anything you want in native code, but the only OS-level things really supported are OpenGL, OpenSL, and some number-crunching libraries (compression, math, etc.).
However, at any time you're free to use the JNI to call a Java method, so you could use the standard Android API for networking (classes like Socket, etc). Obviously, since the call is going through the Java API, all the normal Android permissions apply (like android.permission.INTERNET).
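A minimal sketch of that idea is below (the host and port are placeholders, and error handling is reduced to null checks): it opens a java.net.Socket from native code through JNI, so the connection goes through the normal SDK stack and its permission checks.

#include <jni.h>

// Creates a connected java.net.Socket from native code. Returns a local
// reference, or null with a pending Java exception on failure.
jobject open_socket(JNIEnv* env, const char* host, jint port) {
    jclass cls = env->FindClass("java/net/Socket");
    if (cls == nullptr) return nullptr;
    jmethodID ctor = env->GetMethodID(cls, "<init>", "(Ljava/lang/String;I)V");
    if (ctor == nullptr) return nullptr;

    jstring jhost = env->NewStringUTF(host);
    // The constructor connects immediately and throws IOException on failure,
    // which is left pending for the Java side to see.
    jobject socket = env->NewObject(cls, ctor, jhost, port);
    env->DeleteLocalRef(jhost);
    return socket;
}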
EDIT: As elaborated in the comments, the standard libraries that are part of the NDK do have support for sockets.
You still need your application to have permissions. For example, your native sockets will not work without android.permission.INTERNET in the manifest.
<manifest xmlns:android...>
...
<uses-permission android:name="android.permission.INTERNET"></uses-permission>
</manifest>
Another option is to create the socket at the Java layer and pass it down. Here's an example of interacting with the socket in native land; see the method org_..._OpenSSLSocketImpl_connect():
http://www.netmite.com/android/mydroid/dalvik/libcore/x-net/src/main/native/org_apache_harmony_xnet_provider_jsse_OpenSSLSocketImpl.cpp
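That code essentially boils down to reading the integer file descriptor out of the java.io.FileDescriptor that was passed down. A hedged sketch of that step is below; note that the "descriptor" field is an implementation detail of Android's class library, not a public API, so treat this as fragile:

#include <jni.h>

// Extracts the raw fd from a java.io.FileDescriptor passed in from Java.
// Returns -1 if the (non-public) "descriptor" field cannot be found.
int fd_from_java(JNIEnv* env, jobject java_fd) {
    jclass cls = env->FindClass("java/io/FileDescriptor");
    if (cls == nullptr) return -1;
    jfieldID fid = env->GetFieldID(cls, "descriptor", "I");
    if (fid == nullptr) {
        env->ExceptionClear();   // field name differs on some runtimes
        return -1;
    }
    return env->GetIntField(java_fd, fid);
}

// The returned fd can then be used with ordinary socket calls
// (send(), recv(), setsockopt(), ...) in native code.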
The Native Android API is a nice article about the NDK.
is it possible to just use the Java for the GUI and then all processing in native code?
Yes. And you need to set the appropriate permissions in your AndroidManifest.
record audio, play it,
You need to use the OpenSL ES API for recording and playing audio on the native side. That means your application must target Android 2.3 or later.
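A minimal sketch of bringing up OpenSL ES on the native side (error handling trimmed to early returns; a recorder or player object is then created and realized through the same pattern):

#include <SLES/OpenSLES.h>

// Global engine objects; link the shared library against libOpenSLES.
static SLObjectItf engine_obj;
static SLEngineItf engine;
static SLObjectItf output_mix;

SLresult init_opensl() {
    // Create and realize the engine object, then fetch its engine interface.
    SLresult r = slCreateEngine(&engine_obj, 0, nullptr, 0, nullptr, nullptr);
    if (r != SL_RESULT_SUCCESS) return r;
    r = (*engine_obj)->Realize(engine_obj, SL_BOOLEAN_FALSE);
    if (r != SL_RESULT_SUCCESS) return r;
    r = (*engine_obj)->GetInterface(engine_obj, SL_IID_ENGINE, &engine);
    if (r != SL_RESULT_SUCCESS) return r;

    // An output mix is the usual sink for an audio player object.
    r = (*engine)->CreateOutputMix(engine, &output_mix, 0, nullptr, nullptr);
    if (r != SL_RESULT_SUCCESS) return r;
    return (*output_mix)->Realize(output_mix, SL_BOOLEAN_FALSE);
}

Recording through an OpenSL ES recorder object still requires android.permission.RECORD_AUDIO in the manifest, just like the INTERNET permission for sockets.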
Alternatively, NVIDIA provides a framework that allows you to develop in C++ for Android events, sensors, audio and so on, even on Android 2.2 or earlier.
Tegra Resources - Android SDK & NDK sample applications and documentation
You may like to check out csipsimple, which is an example of using the pjsip SIP library (which is written in C) in a Java Android application.
I haven't looked into how it does the socket communication, but it should give you a more complete example of what you are trying to do.
I've worked with OpenCV in combination with Android before, and what I always tried to do was use as few calls as possible between my native code and my Java code. When I look at the OpenCV port for Android, though, it seems like they just create a wrapper function for every native function and call these from Java. Now, unless I totally misunderstand the principle of SWIG wrappers and the whole idea of this port, won't this be a lot slower than doing the actual coding in native code? I noticed that passing data between native code and Java code is really slow, so I don't get why it seems to be the most normal thing in this port.
I did use it myself, but I decided to ignore all the wrappers, use the code as it is, and create my own wrapper in the usual way presented by the Android tutorials.
So my question is: am I just wrong about the disadvantages? Or are they actually there, and what is the real advantage of using OpenCV in this way? I know these questions are somewhat informal, but I hope you guys can help me out.
I can't give you the answer you're looking for, but here's what I think: there are many examples of JNI layers that wrap every native function - OpenGL, Android's Canvas, etc. Calling through JNI is slower than working entirely in native code, but the question is: does it make any difference for a concrete application? I believe in the majority of cases this time penalty is negligible compared with the time spent inside the native functions. However, I favor doing as much work as possible in native code for Android apps, not mainly because of the faster execution, but because Java is an awkward language compared to C and C++.
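For what it's worth, a common way to keep the crossings cheap is to pass the cv::Mat's native address across JNI once and do all of the work on the C++ side. A rough sketch, with placeholder class, package and method names:

#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// The Java side passes mat.getNativeObjAddr() for both Mats, so no pixel data
// is copied across the JNI boundary; only two jlongs cross per frame.
extern "C" JNIEXPORT void JNICALL
Java_com_example_vision_Processor_nativeProcess(JNIEnv*, jobject,
                                                jlong src_addr, jlong dst_addr) {
    cv::Mat& src = *reinterpret_cast<cv::Mat*>(src_addr);
    cv::Mat& dst = *reinterpret_cast<cv::Mat*>(dst_addr);

    // All the heavy image processing happens here, in native code.
    cv::GaussianBlur(src, dst, cv::Size(5, 5), 0);
}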
The Android NDK has just been significantly expanded to include support for writing Android applications entirely in native C/C++ code. One can now capture input events from the keyboard and touch screen using native code, and also implement the application lifecycle in C/C++ using the new NativeActivity class.
Given all the expanded native capabilities, would it be worthwhile to completely bypass Java and write Android application in native code?
The NDK is not native per se. It is to a large extent a JNI wrapper around the Android SDK. Using NativeActivity gives you a convenient way of dealing with certain app-lifecycle events and adding your own native code on top. ALooper, AInputQueue, etc. are all JNI wrappers of their Java SDK counterparts, some with additional code that is private and inaccessible to real apps.
When it comes to Android development, there's no such thing as writing an application entirely in native C++ - you will (in every real app case that I can think of) always need to use the Android APIs, which are to a huge extent pure Java. Whether you use these through wrappers provided by the NDK or wrappers that you create yourself doesn't really change this.
So, to answer your question: no, it wouldn't be worthwhile, because you would end up writing JNI wrappers for SDK calls instead of writing JNI wrappers for your own Java methods that do the same thing with less code, simpler code and faster code. For example, showing a dialog using "pure C++" involves quite a few JNI calls. Just calling a Java method through JNI that does the same thing will give you faster code (one JNI call) and, arguably, code that is easier to maintain.
To fully understand what you can do, you really must examine the Android source code. Start with native_app_glue.c, which is available in the NDK, then continue with the OS implementation of ANativeActivity, ALooper, AInputQueue, etc. Google Code Search is a great help with this. :-)
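To give you a feel for it, this is roughly the skeleton that native_app_glue expects from an application (a sketch only, with the handlers left empty); the lifecycle and input events still arrive from the Java side of NativeActivity:

#include <android_native_app_glue.h>

// Called for lifecycle events (APP_CMD_INIT_WINDOW, APP_CMD_TERM_WINDOW, ...).
static void handle_cmd(android_app* app, int32_t cmd) {
    // React to window creation/destruction, focus changes, etc.
}

// Called for touch and key events; return 1 if the event was handled.
static int32_t handle_input(android_app* app, AInputEvent* event) {
    return 0;
}

void android_main(android_app* state) {
    state->onAppCmd = handle_cmd;
    state->onInputEvent = handle_input;

    while (true) {
        int events;
        android_poll_source* source;
        // Block until an event arrives; use a timeout of 0 instead of -1
        // when rendering continuously so the loop can also draw frames.
        while (ALooper_pollAll(-1, nullptr, &events,
                               reinterpret_cast<void**>(&source)) >= 0) {
            if (source != nullptr) source->process(state, source);
            if (state->destroyRequested != 0) return;
        }
    }
}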
If it is easy to do in Java, and includes many calls, call a method through JNI that does it all, rather than writing all the extra code to do it with multiple JNI calls. Preserve as much of your existing C++ code as is reasonable.
Not if you are just making a standard application. The Java SDK is more complete than its native counterpart right now, so you would still be making things more difficult for yourself.
If you are not doing something that requires the NDK (read: real time performance sensitive) then stick with Java.
Just some food for thought, but if you have an app on both iOS and Android, some C/C++ code might be shareable. Obviously the iOS Objective-C and platform-specific code won't work elsewhere (ditto for the Android-specific stuff), but you might be able to have some shared code that's platform neutral.
If you can, stick with the java style apps until versions of Android supporting native activities constitute a significant fraction of the installed base.
For things that were hard to do before - particularly ports of existing code - this will probably be a big help.
It's not entirely clear yet what has changed vs. just writing your own thin Java wrapper. For example, is there still a copy of the Dalvik VM hanging around?