Are there any Android APIs that implement the Fourier Transform using the
device's DSP? Or are there any APIs that permit using the device's DSP?
Thanks.
No, there is no public API for performing hardware-accelerated FFT.
You can optimize native code by targeting the armeabi-v7a ABI in order to use the FPU. That's very useful for floating-point FFT.
See CPU-ARCH-ABIS in the docs/ directory of the Android NDK.
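For reference, here is a minimal sketch of what that looks like in an ndk-build project; selecting the ABI is a one-line setting in jni/Application.mk, after which the compiler emits FPU instructions for your float-heavy code:

    # jni/Application.mk -- build only for ARMv7-A so the FPU (VFP) is used
    APP_ABI := armeabi-v7a
    APP_OPTIM := release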
Sorry this is a bit late, but if you want to calculate the Fourier transform in Android for real or complex numbers, you are better off using either JTransforms or libgdx's FFT class. libgdx uses KissFFT as its backend; I'm not quite sure what JTransforms uses.
Check out this example on how to implement libgdx:
http://www.digiphd.com/android-java-simple-fft-libgdx/
Hope this helps.
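For what it's worth, here is a minimal sketch of a forward FFT of a real signal using JTransforms (assuming the library is on the classpath; depending on the version the package is either edu.emory.mathcs.jtransforms.fft or org.jtransforms.fft):

    import org.jtransforms.fft.DoubleFFT_1D;

    public class FftExample {
        public static void main(String[] args) {
            int n = 1024;
            double[] signal = new double[n];
            for (int i = 0; i < n; i++) {
                // Toy input: a sine wave at bin 50, just to have something to transform.
                signal[i] = Math.sin(2 * Math.PI * 50 * i / n);
            }

            // In-place real-to-complex FFT; the packed result layout
            // (interleaved Re/Im pairs) is documented in the JTransforms javadoc.
            new DoubleFFT_1D(n).realForward(signal);

            // Magnitude of bin k (for 0 < k < n/2): sqrt(re^2 + im^2)
            int k = 50;
            double re = signal[2 * k];
            double im = signal[2 * k + 1];
            System.out.println("bin " + k + " magnitude = " + Math.hypot(re, im));
        }
    }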
Firstly, not all devices have a DSP. Most in fact just have a CPU and GPU.
As of today, you probably can't really do what you want without a custom ROM/firmware. The good news is that they are working on it. Look at Renderscript which is available starting with Honeycomb. It currently only runs on CPUs (though it can use multiple cores), but the plan is for a future release to allow execution on the GPU (and maybe DSP) as well, with little-to-no code changes on your part. See this post for more info.
The Visualizer class is said to be able to export FFT data from audio that is playing:
http://developer.android.com/reference/android/media/audiofx/Visualizer.html
but it may not be exactly what you are looking for.
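For completeness, a minimal sketch of pulling one FFT snapshot from the Visualizer (session 0 attaches to the global output mix, which requires the RECORD_AUDIO permission; error handling is omitted):

    import android.media.audiofx.Visualizer;

    public final class FftCapture {
        /** Grabs one FFT snapshot from whatever is currently playing (output mix). */
        public static byte[] captureOnce() {
            Visualizer visualizer = new Visualizer(0); // session 0 = global output mix
            try {
                visualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]); // max size
                visualizer.setEnabled(true);
                byte[] fft = new byte[visualizer.getCaptureSize()];
                visualizer.getFft(fft);
                // Layout per the docs: fft[0] = Re(DC), fft[1] = Re(Nyquist),
                // then interleaved Re/Im pairs for the remaining bins.
                return fft;
            } finally {
                visualizer.setEnabled(false);
                visualizer.release();
            }
        }
    }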
I'm planning to run an app based on (GNOME) libclutter on Android 9 (Pie). I'm quite new to graphics-related stuff and have been wondering about these things, so I'm seeking guidance/direction or any information that could help me understand this better.
As per the Android Graphics documentation, Android uses OpenGL ES and Vulkan at a low level to render objects. And as per the GNOME Clutter documentation, it can only be compiled against the back-ends mentioned there (please check the embedded link for platform details).
I don't see OpenGL ES or Vulkan support, so am I missing something, or can it not be done?
[Clutter maintainer, here]
Yes, Clutter supports OpenGL ES—it uses Cogl, a library that abstracts GL and GLES concepts.
No, Clutter does not support Vulkan at the moment.
No, Clutter and Cogl do not support Android; there was an experimental port, but it was abandoned in 2012.
Additionally, Clutter is in deep maintenance mode: no new development releases, no new features, and only minimal/security/crasher bug fixes are allowed.
I would not recommend using Clutter in a newly written project.
Okay... after spending a few more hours, I was able to figure out an answer! (Yay!)
As per the Clutter Project website (somehow I had missed this info previously! :p):
Clutter uses OpenGL for rendering (and optionally OpenGL ES for use on mobile and embedded platforms), but wraps an easy to use, efficient, flexible API around GL's complexity.
So, as per my requirement, I should be able to cross-compile the Clutter library source and integrate it.
PS: I will try to integrate & build libclutter on Android 9 and will update this answer later with additional information.
==========================================================================
Update:
As pointed out by @ebassi in another answer, I have dropped the idea of the integration and will instead use the Android graphics stack directly for the implementation.
Thanks @ebassi!
What I have: A trained recurrent neural network in Tensorflow.
What I want: A mobile application that can run this network as fast as possible (inference mode only, no training).
I believe there are multiple ways I can accomplish my goal, but I would like your feedback/corrections and additions because I have never done this before.
1. TensorFlow Lite. Pro: straightforward, available on Android and iOS. Contra: probably not the fastest method, right?
2. TensorRT. Pro: very fast, and I can write custom C code to make it faster. Contra: it targets Nvidia devices, so there is no easy way to run it on Android and iOS, right?
3. Custom code + libraries like OpenBLAS. Pro: probably very fast, and I can link to it on Android and iOS (if I am not mistaken). Contra: is there much benefit for recurrent neural networks? Does it really work well on Android + iOS?
4. Re-implement everything. I could also rewrite the whole computation in C/C++, which shouldn't be too hard for recurrent neural networks. Pro: probably the fastest method because I can optimize everything. Contra: it will take a long time, and if the network changes I have to update my code as well (although I am willing to do it this way if it really is the fastest). Also, how fast can I make calls to libraries (C/C++) on Android? Am I limited by the Java interfaces?
Some details about the mobile application. The application will take a sound recording of the user, do some processing (like speech-to-text) and output the text. I am not looking for a solution that is merely "fast enough" but for the fastest option, because this will run over very large sound files. So almost every speed improvement counts. Do you have any advice on how I should approach this problem?
Last question: If I try to hire somebody to help me out, should I look for an Android/iOS-, Embedded- or Tensorflow- type of person?
1. TensorFlow Lite
Pro: it uses GPU optimizations on Android; it is fairly easy to incorporate into a Swift/Objective-C app, and very easy into Java/Android (just add one line to build.gradle); you can also convert a TF model to CoreML.
Cons: if you use the C++ library, you will have some issues adding TFLite as a library to your Android/Java-JNI build (there is no native way to build such a library without JNI); there is no GPU support on iOS (though the community is working on MPS integration).
Also, here is a reference to a TFLite speech-to-text demo app; it could be useful.
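To make the "one line in build.gradle" point concrete, here is a minimal sketch of loading a .tflite model and running inference from Java (the dependency version, model file and tensor shapes are example values and must match your converted model):

    // build.gradle (app module):
    //   implementation 'org.tensorflow:tensorflow-lite:2.9.0'   // pick a current version

    import org.tensorflow.lite.Interpreter;
    import java.io.File;

    public final class TfliteRunner {
        public static float[][] run(File modelFile, float[][] input) {
            // Example output shape; it must match the model's output tensor.
            float[][] output = new float[1][128];
            Interpreter interpreter = new Interpreter(modelFile);
            try {
                interpreter.run(input, output);
            } finally {
                interpreter.close();
            }
            return output;
        }
    }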
2. TensorRT
TensorRT uses cuDNN, which uses the CUDA library. There is CUDA for Android, but I am not sure whether it supports the full functionality.
3. Custom code + Libraries
I would recommend using the Android Neural Networks API and CoreML; if you need to go deeper, you can use the Eigen library for linear algebra. However, writing your own custom code is not beneficial in the long term: you would need to support/test/improve it, which is a huge undertaking and matters more than performance.
4. Re-implement everything
This option is very similar to the previous one. Implementing your own RNN (LSTM) should be fine as long as you know what you are doing; just use one of the linear algebra libraries (e.g. Eigen).
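To give an idea of what "re-implementing the RNN yourself" involves, here is a plain-Java sketch of a single LSTM step. The weight layout and names are made up for the example; a real port would load them from the trained TensorFlow checkpoint and use an optimized linear algebra library instead of the naive loops below.

    final class LstmCell {
        private final int inputSize, hiddenSize;
        private final double[][] wI, wF, wG, wO; // gate weights, [hiddenSize][inputSize + hiddenSize]
        private final double[] bI, bF, bG, bO;   // gate biases

        LstmCell(int inputSize, int hiddenSize,
                 double[][] wI, double[][] wF, double[][] wG, double[][] wO,
                 double[] bI, double[] bF, double[] bG, double[] bO) {
            this.inputSize = inputSize;
            this.hiddenSize = hiddenSize;
            this.wI = wI; this.wF = wF; this.wG = wG; this.wO = wO;
            this.bI = bI; this.bF = bF; this.bG = bG; this.bO = bO;
        }

        /** One time step: updates the cell state c and hidden state h in place. */
        void step(double[] x, double[] h, double[] c) {
            double[] xh = new double[inputSize + hiddenSize];
            System.arraycopy(x, 0, xh, 0, inputSize);
            System.arraycopy(h, 0, xh, inputSize, hiddenSize);
            for (int j = 0; j < hiddenSize; j++) {
                double i = sigmoid(dot(wI[j], xh) + bI[j]);   // input gate
                double f = sigmoid(dot(wF[j], xh) + bF[j]);   // forget gate
                double g = Math.tanh(dot(wG[j], xh) + bG[j]); // candidate values
                double o = sigmoid(dot(wO[j], xh) + bO[j]);   // output gate
                c[j] = f * c[j] + i * g;
                h[j] = o * Math.tanh(c[j]);
            }
        }

        private static double dot(double[] a, double[] b) {
            double s = 0;
            for (int k = 0; k < a.length; k++) s += a[k] * b[k];
            return s;
        }

        private static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }
    }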
The overall recommendation would be to:
- try to do it server side: use some lossy compression and server-side speech2text;
- try using TensorFlow Lite; measure performance, find bottlenecks, try to optimize;
- if some parts of TFLite are too slow, reimplement them as custom operations (and make a PR to TensorFlow);
- if the bottlenecks are at the hardware level, go back to the first suggestion.
Maybe you should try this library; it can run on Android and iOS devices.
https://github.com/Tencent/TNN
I found this answer, which gave me the idea that, instead of using the compiled TensorFlow graph, you might be able to use Kivy on your Android phone. That way you could talk to the TensorFlow graph directly using python-for-android.
A possible advantage would be being able to train/adapt the model on the fly. As far as I know, otherwise you can only use the final trained model (but this is currently unanswered on Stack Overflow). Also, cross-compiling to Windows Phone might be possible, which currently isn't (see here).
I don't know the technical limitations. Can anyone confirm that this is possible, and perhaps what would be necessary?
Although Windows Phone could be a possibility in the future, right now almost no one cares, as there isn't really much interest in porting it. However, there is work in progress on using ANGLE to translate OpenGL to DirectX, so it could become possible later. There's still the awkward app packaging, though, so it will eat a lot of time.
Still, I think it might be possible to use one of those unofficial APK-to-Windows-Phone-app converters. Regarding TensorFlow: to me it looks like only a recipe is missing, so try writing one. :P
I am just starting to learn OpenCV for Android; I have played around with it a bit and it all works fine.
I installed the NDK and managed to run some of the sample apps that include it.
I am not clear on what we need the NDK for. I could not find it referenced anywhere in the documentation.
Is there OpenCV functionality which is not available in the regular 2.4.8 Java library?
Is it just so we can use modules written in C++ that others have made available, without rewriting them in Java?
I have been using the NDK for my application, and the following are my observations.
1. Yes, it gives you the advantage of using the many C++ modules that are made available.
2. If you already have some experience with computer vision programming using the OpenCV library in C++, you don't have to learn new syntax in Java (which can be quite irritating at times).
3. Using the NDK for your app can give you a slight upper hand in terms of performance when the image processing you do is computationally demanding. (I am not quite sure about this, because the OpenCV library made available for Android is just a wrapper around the same header files and should give almost the same performance as the NDK. I have never really compared the performance myself, but I have read in various blogs that the NDK is faster.)
4. One thing you really have to be careful about when using the NDK is calls from the Java side to the NDK side or the other way round; these calls can be really expensive in terms of performance and need careful planning.
5. Passing parameters such as an array of Mat from Java to the NDK is a bit of a headache, but you can find workarounds (see the sketch after this answer).
Based on these and other factors you might have found from various sources, and also on your own programming strengths, you can decide whether you want to use the NDK or not. There was never really a set of guidelines I could find that says you can use the NDK if such-and-such conditions are satisfied; people just start with whichever programming style they are more comfortable with.
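As an illustration of the Mat-passing workaround mentioned in point 5, a common pattern is to hand the native side the Mat's internal pointer instead of copying the pixel data. This is a minimal sketch: the library and method names are made up for the example, and the matching C++ JNI function would cast the received long back to a cv::Mat*.

    import org.opencv.core.Mat;

    public final class NativeBridge {
        static {
            // Hypothetical name of your NDK-built shared library (libnative_processing.so).
            System.loadLibrary("native_processing");
        }

        // Implemented in C++ via JNI; it receives the address of the underlying cv::Mat.
        private static native void nativeProcess(long matAddr);

        /** Processes the frame in place on the native side; no pixel copy crosses JNI. */
        public static void process(Mat frame) {
            // getNativeObjAddr() returns the address of the native cv::Mat,
            // so only a single long crosses the JNI boundary.
            nativeProcess(frame.getNativeObjAddr());
        }
    }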
In the Android documentation they advise using the NDK for "CPU-intensive operations that don't allocate much memory, such as signal processing, physics simulation, and so on."
If everything you want to do can be computed with OpenCV built-in functions, you may not need the NDK, as OpenCV's processing routines are already written in C/C++.
However, if you have to process the image matrices intensively (I mean direct access to pixels), you will improve performance by using the NDK.
I'm trying to make an Android game, but I'm not sure what I need to learn: J2ME or Flash development?
Also, if Flash, is there a difference between Flash development (coding) and Flash animation?
I'd really appreciate any help here.
Maybe a good read for you
If you want to go the easy route, I'd recommend Flash/Flex/ActionScript. Flash, Flex and ActionScript programming can all be commingled to a certain extent. Flash animations are typically done in a more graphical environment, but you can absolutely create your objects programmatically and make them do the same things through Flex/ActionScript.
Also, you may want to look into the controversy over HTML5 and JavaScript. That might be a viable alternative for you. Plus, it would double as getting familiar with designing sites as well.
Java would be the way to go, because Java has greater flexibility and is easier to start off with in certain respects.
Flash/Flex/ActionScript can be easier for designing graphics, but the scripting can get a bit complex and tedious.
http://androidforums.com/android-games/217545-should-i-use-flash-air-java-develop-games.html
Google yields many results.
Android game development tutorials
Java is the only way to go. Adobe has abandoned mobile Flash, and even before that, only a limited subset of Android devices supported Flash.
The only sensible option is Java with the Android SDK.