I am a little new to all this, so please bear with me if the question sounds a little dumb. I am doing a project comparing the extent to which the GPU can be used for map visualization and spatial analysis on mobile devices (Android, basically). I have decided to build on the JTS Topology Suite, which offers a variety of analyses (triangulation, point-in-polygon, and the like), and have implemented these functions in Android without the GPU (running them purely on the CPU).
However, I would now like to move these functions onto the GPU through RenderScript, but I have been unable to reference the relevant types in RenderScript: classes such as GeometryFactory, Point, Polygon, and Coordinate that I want to use in the RenderScript C file.
Hence, should I download the C++ port of JTS (GEOS, basically) and use it in RenderScript? And if so, how should I go about implementing it? (I am not exactly competent in C.) Or is there a way to set the different variables in the RenderScript C file from Java?
Should you require any details:
I am using Android Developer Tools (Eclipse) with JTS 1.13.
Thank you!
As an example, I would like to do something like this (in Java):
import jtslibrary.*;
but implement it in RenderScript so that the script can recognize those variable types.
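To make it concrete, here is roughly what I picture on the Java side: flattening the JTS geometry into primitive arrays, since (as far as I can tell) a RenderScript kernel can only see primitives and structs, never Java classes. ScriptC_pip and the globals of its pip.rs file are hypothetical names, just a sketch of what I would like to achieve:

    import android.content.Context;
    import android.renderscript.Allocation;
    import android.renderscript.Element;
    import android.renderscript.RenderScript;
    import com.vividsolutions.jts.geom.Coordinate;

    public class PointInPolygonGpu {

        // Pack JTS Coordinates into interleaved x,y floats; the JTS
        // classes themselves cannot cross into the .rs file.
        private static float[] flatten(Coordinate[] coords) {
            float[] xy = new float[coords.length * 2];
            for (int i = 0; i < coords.length; i++) {
                xy[2 * i]     = (float) coords[i].x;
                xy[2 * i + 1] = (float) coords[i].y;
            }
            return xy;
        }

        // ScriptC_pip would be generated from a pip.rs file whose kernel
        // ray-casts each query point against the flat polygon ring.
        public static byte[] contains(Context ctx, Coordinate[] ring, Coordinate[] queries) {
            RenderScript rs = RenderScript.create(ctx);

            Allocation ringAlloc = Allocation.createSized(rs, Element.F32_2(rs), ring.length);
            ringAlloc.copyFrom(flatten(ring));
            Allocation in = Allocation.createSized(rs, Element.F32_2(rs), queries.length);
            in.copyFrom(flatten(queries));
            Allocation out = Allocation.createSized(rs, Element.I8(rs), queries.length);

            ScriptC_pip script = new ScriptC_pip(rs);  // hypothetical generated class
            script.bind_ring(ringAlloc);               // a float2 *ring global in pip.rs
            script.set_ringLen(ring.length);
            script.forEach_contains(in, out);          // kernel: char contains(float2 p)

            byte[] result = new byte[queries.length];
            out.copyTo(result);
            return result;
        }
    }

Is something along these lines the right approach, or is there a better way to get JTS-style data into the script?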
What I have: a trained recurrent neural network in TensorFlow.
What I want: a mobile application that can run this network as fast as possible (inference mode only, no training).
I believe there are multiple ways to accomplish my goal, but I would like your feedback, corrections, and additions, because I have never done this before.
1. TensorFlow Lite. Pro: straightforward and available on both Android and iOS. Contra: probably not the fastest method, right?
2. TensorRT. Pro: very fast, and I can write custom C code to make it faster. Contra: it targets Nvidia devices, so there is no easy way to run it on Android and iOS, right?
3. Custom code + libraries like OpenBLAS. Pro: probably very fast, and I can link to it on both Android and iOS (if I am not mistaken). Contra: is it of much use for recurrent neural networks? Does it really work well on Android and iOS?
4. Re-implement everything. I could also rewrite the whole computation in C/C++, which shouldn't be too hard with recurrent neural networks. Pro: probably the fastest method, because I can optimize everything. Contra: it will take a long time, and if the network changes I have to update my code as well (although I am willing to do it this way if it really is the fastest). Also, how fast can I make calls to C/C++ libraries on Android? Am I limited by the Java interfaces?
Some details about the mobile application: it will take a sound recording from the user, do some processing (like speech-to-text), and output the text. I do not want a solution that is merely "fast enough"; I want the fastest option, because this will run over very large sound files, so almost every speed improvement counts. Do you have any advice on how I should approach this problem?
Last question: if I try to hire somebody to help me out, should I look for an Android/iOS, embedded, or TensorFlow type of person?
1. TensorFlow Lite
Pros: it uses GPU optimizations on Android; it is fairly easy to incorporate into a Swift/Objective-C app, and very easy into Java/Android (just add one line to build.gradle); and you can convert a TF model to CoreML.
Cons: if you use the C++ library you will have some issues adding TFLite as a library to your Android/Java JNI build (there is no native way to build such a library without JNI); and there is no GPU support on iOS (though the community is working on MPS integration).
Also, here is a reference to a TFLite speech-to-text demo app; it could be useful.
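And the Java side really is small. A rough sketch, assuming a model file named model.tflite bundled in assets and a single float input/output tensor (the shapes are illustrative):

    // build.gradle (the one line mentioned above):
    //     implementation 'org.tensorflow:tensorflow-lite:+'
    import android.content.Context;
    import android.content.res.AssetFileDescriptor;
    import java.io.FileInputStream;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import org.tensorflow.lite.Interpreter;

    public class TfliteRunner {
        public static float[][] run(Context ctx, float[][] input, int outputSize) throws Exception {
            // Memory-map the model straight out of the APK's assets.
            AssetFileDescriptor fd = ctx.getAssets().openFd("model.tflite");
            FileChannel ch = new FileInputStream(fd.getFileDescriptor()).getChannel();
            MappedByteBuffer model = ch.map(
                    FileChannel.MapMode.READ_ONLY, fd.getStartOffset(), fd.getDeclaredLength());

            float[][] output = new float[1][outputSize];
            try (Interpreter tflite = new Interpreter(model)) {
                tflite.run(input, output);  // one blocking inference call
            }
            return output;
        }
    }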
2. TensorRT
TensorRT uses cuDNN, which uses the CUDA library. There is CUDA for Android, but I am not sure whether it supports the full functionality.
3. Custom code + Libraries
I would recommend the Android Neural Networks API (NNAPI) and CoreML; if you need to go deeper, you can use the Eigen library for linear algebra. However, writing your own custom code is not beneficial in the long term: you would need to support, test, and improve it, which is a huge commitment, arguably more important than performance.
4. Re-implement everything
This option is very similar to the previous one; implementing your own RNN (LSTM) should be fine as long as you know what you are doing. Just use one of the linear algebra libraries (e.g., Eigen).
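Just to show how little code a single step actually is (in C++ a library like Eigen would replace the inner loops), here is one LSTM cell step in plain Java; the weight layout and names are illustrative only:

    public class LstmCell {
        // One LSTM time step: gates = sigmoid/tanh(W*x + U*h + b).
        // W* are [hidden][inputSize], U* are [hidden][hidden].
        static double[] matvec(double[][] m, double[] v) {
            double[] r = new double[m.length];
            for (int i = 0; i < m.length; i++)
                for (int j = 0; j < v.length; j++)
                    r[i] += m[i][j] * v[j];
            return r;
        }

        static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

        // h (hidden state) and c (cell state) are updated in place;
        // x is the input vector for the current time step.
        static void step(double[] x, double[] h, double[] c,
                         double[][] Wi, double[][] Ui, double[] bi,
                         double[][] Wf, double[][] Uf, double[] bf,
                         double[][] Wo, double[][] Uo, double[] bo,
                         double[][] Wg, double[][] Ug, double[] bg) {
            double[] xi = matvec(Wi, x), hi = matvec(Ui, h);
            double[] xf = matvec(Wf, x), hf = matvec(Uf, h);
            double[] xo = matvec(Wo, x), ho = matvec(Uo, h);
            double[] xg = matvec(Wg, x), hg = matvec(Ug, h);
            for (int k = 0; k < h.length; k++) {
                double i = sigmoid(xi[k] + hi[k] + bi[k]);  // input gate
                double f = sigmoid(xf[k] + hf[k] + bf[k]);  // forget gate
                double o = sigmoid(xo[k] + ho[k] + bo[k]);  // output gate
                double g = Math.tanh(xg[k] + hg[k] + bg[k]); // candidate
                c[k] = f * c[k] + i * g;
                h[k] = o * Math.tanh(c[k]);
            }
        }
    }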
The overall recommendation would be:
1. Try to do it server-side: use some lossy compression and server-side speech2text.
2. Try TensorFlow Lite: measure performance, find the bottlenecks, and try to optimize.
3. If some parts of TFLite are too slow, re-implement them as custom operations (and make a PR to TensorFlow).
4. If the bottlenecks are at the hardware level, go back to suggestion 1.
Maybe you should try this library; it can run on Android and iOS devices.
https://github.com/Tencent/TNN
Two years ago I developed an augmented reality framework on android-7 (Eclair). Since AR applications are computationally intensive, I developed a JNI C++ library, used by a Java activity, to render and register the virtual environment. The sensor readings acquired in Java are passed down to the C++ library to compute the registration of the virtual environment. Three-dimensional objects are rendered by a native draw function called from a GLSurfaceView. This results in a lot of JNI calls.
Now I would like to port the application to android-15 (Ice Cream Sandwich). Starting from android-9 (Gingerbread), Android allows the use of NativeActivity.
I would like to understand which is the better way to develop an AR application. Since every JNI call introduces overhead, it would be much better to avoid them. Is that possible using NativeActivity? I haven't found an exhaustive guide explaining how NativeActivity works, but reading this document it seems to result in a lot of JNI calls anyway. Is there any architectural document that explains how NativeActivity works? Is NativeActivity just a "JNI wrapper" to avoid Java code? Concerning performance, are there any advantages to using NativeActivity over a JNI library as I did before?
Thanks a lot.
NativeActivity will not give your framework a performance boost. It still uses JNI to communicate with the system, only under the covers.
Moreover, there are good reasons not to use it. If I understand your purpose correctly, you want other applications to take advantage of your code. By forcing them to use NativeActivity you seriously reduce their freedom and require that they struggle with a less familiar environment. There are also a number of limitations: for example, NativeActivity cannot load more than one JNI library.
Finally, I would suggest a completely different direction if you are looking to optimize your AR framework: you can use the new setPreviewTexture() API.
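A minimal sketch of that path; glTextureId must be the id of a GL_TEXTURE_EXTERNAL_OES texture created on your GL thread:

    import android.graphics.SurfaceTexture;
    import android.hardware.Camera;
    import java.io.IOException;

    public class GpuPreview {
        // Camera frames land directly in an OpenGL texture (API 11+),
        // with no per-frame buffer copies through Java code.
        public static Camera start(int glTextureId) throws IOException {
            SurfaceTexture tex = new SurfaceTexture(glTextureId);
            Camera camera = Camera.open();
            camera.setPreviewTexture(tex);
            camera.startPreview();
            // Per frame: the SurfaceTexture's onFrameAvailable listener fires;
            // call tex.updateTexImage() on the GL thread before drawing.
            return camera;
        }
    }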
As far as I understand it, you are still bound to JNI when using NativeActivity. This class can be used as a starting point and encapsulates some functionality for your convenience, but the underlying technology for accessing native code has not changed and is still JNI. So in my opinion you can only run some benchmarks to check whether NativeActivity is more efficient for some reason (maybe the folks at Google know some hacks that make it faster than your solution).
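Independent of which activity type you pick, one thing that usually does pay off is batching data so that each JNI crossing carries more work. A sketch, with hypothetical names:

    public final class ArBridge {
        static { System.loadLibrary("ar_native"); }  // hypothetical .so name

        // One JNI crossing per batch of sensor samples (x, y, z interleaved)
        // instead of one call per reading; primitive arrays cross cheaply.
        public static native void pushSensorBatch(float[] samples, int count);

        // Likewise, one native call per rendered frame, not per object.
        public static native void drawFrame(long timestampNanos);
    }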
I would like to create a DLL (shared library) containing definitions of some functions, so that I would be able to use those functions in both Objective-C and Java environments.
Is it even possible to do this?
Thanks!
Write it in C or C++. You'll be able to link it to Objective-C/Cocoa via the magic of Objective-C++, and to Java on Android via the NDK and JNI. That's what I do in my project. The compiler is GCC in both cases; the RTL is not identical, but similar enough.
Avoid hairy data structures in the interface, stick to primitives and primitive arrays. And, naturally, you'll probably need to abstract away some of the platform.
You might want to compile your code with -fshort-wchar. It so happens that a 16-bit short is the native character format in both Cocoa and JNI. You'll lose the wide-string functions of the RTL, though, but they're of no use with Cocoa strings and Java strings anyway. Or you can use UTF-8 on the library/platform boundary, at the cost of conversion overhead on every call.
Note: if you just want to reuse some minor helper functions, it's just easier to write them twice, or copy/paste then adjust the syntax. Debugging NDK code is notoriously tricky. Only go this way if the shared bits constitute 25-30% of the project or more. In my case, it's more like 60% shared.
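To make the "primitives and primitive arrays" advice concrete, the Java-facing half of such a library might look like this (names are hypothetical; on iOS the same C functions would sit behind a thin Objective-C++ wrapper):

    public final class SharedCore {
        static { System.loadLibrary("sharedcore"); }  // built from the common C/C++ sources

        // Only primitives and primitive arrays cross the boundary;
        // strings travel as UTF-8 bytes (see the -fshort-wchar note above).
        public static native int processSamples(double[] in, double[] out, int n);
        public static native byte[] transformUtf8(byte[] utf8Input);
    }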
EDIT: if you go this way, further porting to some other mobile platforms will be a snap, and to some, not so much. Specifically:
Samsung bada - snap (also C++ with GCC, yay)
Mobile Qt (Meego, etc) - snap (same as above)
Windows Mobile 6.5 and under - relatively easy (compiler difference between GCC and MSVC might get in the way)
Windows 8 tablets (AKA WinRT, Metro) - same as above
Blackberry Playbook - possible in theory, never tried
Old school Blackberry - impossible, it's all Java
Windows Phone 7 - impossible, it's all C#/VB.NET
If you want to create an application for iOS and Android and reuse the business logic (not the UI), take a look at Xamarin.
You can develop in C# and create Android and/or iOS apps.
From the Xamarin web site:
"Save time by sharing data structures and non-UI code between iOS and Android."
Hope it helps.
As you can see from the Android architecture diagram, the platform is built in layers:
Applications are developed in Java.
The Application Framework is written in Java (according to my understanding).
The Libraries are in C/C++.
For some insane reason I have to work with devices like the accelerometer, compass, and camera from C/C++, which means accessing them directly in the third layer, i.e. the Libraries. According to my understanding, the Application Framework itself consumes the Libraries to access these devices and then provides APIs to Applications.
I am looking for any documentation, tutorials, or demos that can help me in this regard, i.e. how to access and use devices like the camera, accelerometer, and compass from C/C++ code; in other words, how to work with these devices directly from the Libraries layer.
My last option would be to get the Android source code and dig deep into it to find what I am looking for, but I would like an easier way, in the form of documentation, a demo, a tutorial, or anything that can make this a bit easier for me.
"I am looking for any documentation, tutorials, or demos that can help me in this regard, i.e. how to access and use devices like the camera, accelerometer, and compass from C/C++ code; in other words, how to work with these devices directly from the Libraries layer."
You don't. You access them from Java code. Reorganize your C/C++ code to support your Java code.
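For example, the accelerometer is read through SensorManager in Java, and each reading can then be handed down to your native code (nativeOnAccel is a hypothetical JNI method):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class AccelReader implements SensorEventListener {
        public void start(SensorManager sm) {
            Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            sm.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME);
        }

        @Override
        public void onSensorChanged(SensorEvent e) {
            // e.values = {x, y, z} in m/s^2; forward to C/C++ over JNI.
            nativeOnAccel(e.values[0], e.values[1], e.values[2]);
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }

        private native void nativeOnAccel(float x, float y, float z);  // hypothetical
    }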
For the camera, you can use OpenCV to access the frames from a C++ library. For the accelerometer, I am still looking for a way to access it from C++.
I had been using OpenCV for some time for Android programming, and I now see that the GIMP library is much stronger. Where can I find a starting point to learn GIMP?
I also want to know the basic concepts behind GIMP plug-ins. In the past I used the C APIs of OpenCV. How could I write such code for Android?
Also, what packages do I need to install on Windows to start using GIMP?
Although GIMP does have some standalone libraries that perform image manipulation, most image manipulation is done either by GIMP's core program or through GIMP's plug-ins. Both approaches require the entire program to be installed and running (though not necessarily using a display).
I know nothing about Android programming and don't know how one installs ordinary native C code and calls it from Android apps; if you are very familiar with that, you might have a chance in your attempt.
However, GIMP itself relies on an extensive ecosystem of libraries, including, but not limited to, glib, GTK+, cairo, pango, and GEGL, and each of these in turn might have its own prerequisites. Since Windows does not have a working package manager to automatically install the libraries and header files of these various projects, working with them natively on Windows is very hard, even though the code of each is multiplatform and can run on Windows and other OSes. So hard, in fact, that the people who build GIMP for Windows do so in a Linux environment, from which they cross-compile GIMP for Windows.
Making all of these libraries work on Android is probably not hard if you are using the GNU ecosystem around Android's Linux kernel, rather than just the bare Android environment (I don't know enough about Android to even know whether that is possible).
All in all: it will be tough for you, and it will demand a whole lot of research.
One of GIMP's libraries, GEGL (Generic Graphics Library), has far fewer prerequisites and can be used as an ordinary library. I think you can probably build it with just glib and babl as prerequisites. This is the library that will replace GIMP's current core and reimplement the operations of most existing plug-ins, so it might be enough for you.
If you can get GEGL running and usable from an Android system, share that with the world: it would be, in itself, worthy of a Google Summer of Code project. (And it would still be about an order of magnitude easier than getting GIMP's code in there to be used as a library by other applications.)
Finally, if you want just a couple of GIMP's effects, and the effect is implemented as a plug-in, the plug-in code is quite straightforward. So, while it would be hard to get the whole GIMP environment inside Android, copying the functions that actually perform the pixel manipulation from GIMP's source tree and converting them to work in a Java method inside your app would not be hard. Just remember to comply with the license in that case: GIMP's plug-in code is under GPLv3 (the GEGL library is only LGPL).
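As an illustration of what such a ported per-pixel loop ends up looking like in Java, here is a generic brightness adjustment (not actual GIMP code, so no GPLv3 strings attached):

    import android.graphics.Bitmap;

    public class PixelOps {
        // Add 'delta' (0..255) to each RGB channel of a mutable ARGB_8888 bitmap.
        public static void brighten(Bitmap bmp, int delta) {
            int w = bmp.getWidth(), h = bmp.getHeight();
            int[] px = new int[w * h];
            bmp.getPixels(px, 0, w, 0, 0, w, h);
            for (int i = 0; i < px.length; i++) {
                int c = px[i];
                int r = Math.min(255, ((c >> 16) & 0xFF) + delta);
                int g = Math.min(255, ((c >> 8) & 0xFF) + delta);
                int b = Math.min(255, (c & 0xFF) + delta);
                px[i] = (c & 0xFF000000) | (r << 16) | (g << 8) | b;  // keep alpha
            }
            bmp.setPixels(px, 0, w, 0, 0, w, h);
        }
    }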
In short: no, you can't use GIMP's "libraries" as native code from an Android app. If you can use OpenCV, you have a good chance of being able to use GEGL instead; and porting only the pixel-manipulation algorithms of certain plug-ins into your app would be easier still.
However, if your application can delegate image processing to an internet-based server, setting up an HTTP application that receives an image, uses GIMP to process it, and streams it back would be a simple thing to do.
(You could not apply effects in real time, but it would allow one, for example, to take a photo, select a series of effects from menus, and send it to the server for processing.)
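The client side of that round trip is small. A sketch, assuming a hypothetical endpoint that accepts a raw JPEG body and returns the processed image:

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RemoteFilter {
        // POST JPEG bytes to the (hypothetical) processing endpoint
        // and read the processed image back from the response body.
        public static byte[] process(byte[] jpeg, String endpoint) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            try {
                conn.setDoOutput(true);
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "image/jpeg");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(jpeg);
                }
                try (InputStream in = conn.getInputStream();
                     ByteArrayOutputStream buf = new ByteArrayOutputStream()) {
                    byte[] chunk = new byte[8192];
                    int n;
                    while ((n = in.read(chunk)) != -1) buf.write(chunk, 0, n);
                    return buf.toByteArray();
                }
            } finally {
                conn.disconnect();
            }
        }
    }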
GIMP uses quite a bit of memory when loading brushes. If you drop all of the useless plug-ins and build it from source, you may be able to get it working, but you will have to build ALL of the linked libraries directly into the executable.
In other words: build the linked libraries directly into the code as a static build. In this manner things may function properly, unless one of those linked libraries calls another linked library.
Getting the libraries themselves to work on the OS may give additional programs the opportunity to use them. Note, though, that GTK+ (the GIMP Toolkit), GIMP's interface library, is rather bloated and ugly.
If all else fails, you'll simply have to settle for a smaller program with the features you're looking for (levels, curves, the clone tool, dodge and burn, etc.). Layers are also nice, but editing a large multi-megapixel image begins to eat up memory rather quickly, and most Android devices don't have a swap partition.