I'm writing an Android app that involves some form of pattern recognition to count the number of similar objects in an image. The app would be designed to work with a specific type of object and would not involve machine learning.
Is on-device computation and processing feasible for such a scenario, or would it be better to send the image over to a remote server?
If the computation can be handled by the device, would a first-generation device running version 2.2 with a 528 MHz CPU and 288 MB of RAM be able to return an output within a reasonable amount of time?
It completely depends on your algorithm. There's no universal pattern recognition/image processing algorithm, even for your somewhat specific case of counting similar objects.
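As a rough illustration of how cheap the simplest variant can be, here is a minimal OpenCV-for-Android sketch that counts bright, well-separated blobs; the fixed threshold and the blob assumption are mine, not something from your question:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class BlobCounter {
    // Counts bright, well-separated objects in a grayscale image.
    // The threshold of 128 is illustrative only; a real app would tune it
    // (or use adaptive thresholding) for its specific object type.
    public static int countObjects(Mat gray) {
        Mat binary = new Mat();
        Imgproc.threshold(gray, binary, 128, 255, Imgproc.THRESH_BINARY);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binary, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        return contours.size(); // one external contour per object
    }
}
```

A threshold-and-contours pipeline like this is light enough that even old hardware can usually handle a modest-resolution image in acceptable time; per-pixel template or feature matching is a different story.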
I'm developing an image processing app on Android phones, which is expected to run 24/7. I've managed to do the following:
Use the Camera2 API to gain better fps.
Grab raw frames, use RenderScript to convert them to RGB (sketched below), and do the image processing with OpenCV in a background service (no preview). I get around 20 fps after conversion to RGB at 1280x960 on an LG G4.
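For reference, a typical version of that RenderScript conversion step looks like the following sketch (yuvBytes and rgbBitmap are placeholder names):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicYuvToRGB;

public class YuvConverter {
    // Converts an NV21 camera frame to RGB via the YuvToRGB intrinsic.
    // The caller supplies the raw bytes and a target bitmap of matching size.
    public static void yuvToRgb(Context context, byte[] yuvBytes, Bitmap rgbBitmap) {
        RenderScript rs = RenderScript.create(context);
        ScriptIntrinsicYuvToRGB script =
                ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

        Allocation in = Allocation.createSized(rs, Element.U8(rs), yuvBytes.length);
        Allocation out = Allocation.createFromBitmap(rs, rgbBitmap);

        in.copyFrom(yuvBytes);  // upload the raw frame
        script.setInput(in);
        script.forEach(out);    // accelerated YUV -> RGBA conversion
        out.copyTo(rgbBitmap);  // copy the result back into the bitmap
    }
}
```

In a long-running service the RenderScript context, script, and allocations should be created once and reused, not rebuilt per frame as in this simplified sketch.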
So my questions are:
Is there anything else I need to optimize to minimize the memory and CPU usage?
Any chance that this application can run 24/7? Is delegating all the camera operations and processing to the background service sufficient to allow it to run 24/7? When I leave it running, I can still feel heat from the camera and its surrounding area.
Any suggestion would be appreciated. Thanks.
UPDATE 1
The app runs on an LG G4 using the Camera2 interface, doing image processing in the background with the screen off. It got too hot, and the phone turned itself off after a few hours. What can I do to overcome this?
About the second question: I think the app cannot run 24/7, because the phone will shut itself down due to the heat.
Before answering your question, I must say that I am also new to image processing with Android (but not to the image processing field).
For question one:
Maybe yes. Because image processing tasks are memory intensive, you may need to optimize your app to avoid things like memory leaks (even though the Android runtime performs routine garbage collection).
Check the following link; it may be useful:
link one
When it comes to pixel-level operations (that is, when you avoid the built-in functions of OpenCV, or whatever library you are using, and access and process pixels manually), it will be too slow. I am saying this based on my experience on my laptop. Assuming you are using OpenCV for your app, take a look at the following OpenCV page (it is for Python, but you can get the idea):
take an idea from this
And also this SO answer: SO answer
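On Android specifically, the usual trap is one Mat.get()/Mat.put() JNI round-trip per pixel. A hedged before/after sketch (inversion is just a stand-in operation, and the input is assumed to be a CV_8U grayscale Mat):

```java
import org.opencv.core.Mat;

public class PixelAccess {
    // Slow: one JNI round-trip per pixel.
    static void invertSlow(Mat gray) {
        for (int r = 0; r < gray.rows(); r++) {
            for (int c = 0; c < gray.cols(); c++) {
                double[] px = gray.get(r, c);
                gray.put(r, c, 255 - px[0]);
            }
        }
    }

    // Much faster: pull the whole image across JNI once, work on the array.
    static void invertFast(Mat gray) {
        byte[] data = new byte[(int) (gray.total() * gray.channels())];
        gray.get(0, 0, data);                 // single bulk copy out
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) (255 - (data[i] & 0xFF));
        }
        gray.put(0, 0, data);                 // single bulk copy back
    }
}
```

(For inversion itself, Core.bitwise_not would be better still; the point is bulk get/put versus per-pixel calls.)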
A tip I can remember: reduce Mat copying (copying Mat objects to other Mat objects) as much as you can, for example as in the sketch below.
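A minimal sketch of that tip, assuming a per-frame pipeline (the Canny step is just an example operation):

```java
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class FrameProcessor {
    // Allocated once; the OpenCV calls below write into these every frame,
    // so no new full-frame buffers (or clone()s) are created per call.
    private final Mat gray = new Mat();
    private final Mat edges = new Mat();

    public void processFrame(Mat rgba) {
        Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);
        Imgproc.Canny(gray, edges, 80, 100);
        // Avoid: Mat copy = rgba.clone(); // fresh allocation on every frame
    }
}
```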
Question number two:
I will go with the answer given by user7746903. That answer is also related to the memory that will be consumed by your background app. There may be other, more memory-intensive apps running in the background, so it depends. Thank you.
For the first question:
I feel it's worth mentioning that you should bypass Java as much as possible, i.e., use Java only as the interface layer and use JNI/C for the processing loop.
E.g.: get the texture from Camera2 > supply the texture to a C function > call RenderScript/compute shaders and other processing functions from C > call a Java function to render to the screen.
This improves CPU performance and reduces memory warnings (especially under rapid allocation and freeing of memory).
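A minimal sketch of the Java side of that layout; the library name and function signature here are hypothetical, and the C implementation would live in an NDK-built .so:

```java
public class NativeProcessor {
    static {
        // Loads libnativeproc.so built with the NDK ("nativeproc" is a made-up name).
        System.loadLibrary("nativeproc");
    }

    // Java only hands the frame across the JNI boundary; the per-pixel work,
    // RenderScript/compute-shader dispatch, etc. all happen on the C side.
    public static native void processFrame(byte[] yuvFrame, int width, int height);
}
```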
Right now I am trying to learn TensorFlow, but I am not sure I understand it correctly, i.e., whether TensorFlow will work for what I want to do.
I have an Android app which collects data from the device, trains a model using Weka, and stores this model.
Instead of Weka, I want to use TensorFlow.
As far as I understood here, I have to train the model beforehand.
Can't I train a model in the Android app using TensorFlow?
In theory you can train a model on the device. However, it generally requires huge amounts of processing power (and/or a GPU), memory (RAM) and disk space to train a model. Nobody recommends attempting to do this on a mobile device, due to the hardware and battery life constraints.
If you were doing only a limited amount of training, you might be able to do it on the device. You could also consider only training the model when the phone is plugged into a power cable and is otherwise idle (in this case you might have problems if Doze mode kicks in).
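For the plugged-in check, a minimal sketch using the sticky battery-status broadcast (whether and how you then kick off training is up to your own code):

```java
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.BatteryManager;

public class TrainingGate {
    // Returns true when the phone is on power, i.e. heavy background
    // training is less likely to hurt the battery.
    public static boolean isCharging(Context context) {
        IntentFilter filter = new IntentFilter(Intent.ACTION_BATTERY_CHANGED);
        Intent batteryStatus = context.registerReceiver(null, filter);
        if (batteryStatus == null) return false;
        int status = batteryStatus.getIntExtra(BatteryManager.EXTRA_STATUS, -1);
        return status == BatteryManager.BATTERY_STATUS_CHARGING
                || status == BatteryManager.BATTERY_STATUS_FULL;
    }
}
```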
The other problem is that almost all the tutorials and code labs assume you are training the model on a powerful computer, then embedding that trained model in the application (e.g. here are some blog posts I wrote). If you do find any good examples of training a model on an Android device please share them in the comments!
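For completeness, the embed-and-run half of that workflow is simple on the device. A hedged sketch using TensorFlow Lite; the model file and the 1x3 output shape are assumptions that must match whatever you trained on the desktop:

```java
import org.tensorflow.lite.Interpreter;

import java.io.File;

public class Classifier {
    // Runs one forward pass of a pre-trained .tflite model on-device.
    // Input/output shapes here are illustrative only.
    public static float[][] classify(File modelFile, float[][] input) {
        float[][] output = new float[1][3];
        Interpreter interpreter = new Interpreter(modelFile);
        try {
            interpreter.run(input, output);
        } finally {
            interpreter.close();
        }
        return output;
    }
}
```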
I think you can run the TensorFlow APK first (size: 106 MB):
https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-android/TF_BUILD_CONTAINER_TYPE=ANDROID,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=NO_PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=android-slave/
I think if we know how TensorFlow works, we can hand the training job off to a remote service such as AWS or something similar. Our Android phone would just send the data and receive the result, right?
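That pattern is just plain request/response; a minimal sketch of the phone side, where the endpoint URL and payload format are entirely made up:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RemotePredictor {
    // POSTs data to a hypothetical prediction endpoint and returns the
    // response body. Must be called off the main thread on Android.
    public static String predict(String json) throws Exception {
        URL url = new URL("https://example.com/predict"); // made-up endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(json.getBytes(StandardCharsets.UTF_8));
            }
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    conn.getInputStream(), StandardCharsets.UTF_8))) {
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) sb.append(line);
                return sb.toString();
            }
        } finally {
            conn.disconnect();
        }
    }
}
```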
I have a project that is an image processing app for Android devices. For working with images I chose the OpenCV Android framework. The whole project consists of some general stages, such as splitting the input image into blocks, computing the DCT of each block, sorting the results, comparing the features obtained from each block, and finally showing some results.
I wrote this project, but it contains a lot of heavy computation (DCT, sorting, etc.), so I can't even run it on my emulator: it takes so long that my laptop shuts down in the middle of processing. I decided to optimize the processing using parallel computing and GPU programming (it is obvious that some parts, like computing the DCT of the blocks, can be parallelized, but I am not sure about other parts like sorting); however, I can't find any straightforward tutorial for doing this.
Here is the question: is there any way to do that or not? I need it to work on most Android devices, not just one particular device!
Or, besides GPU programming and parallel computing, is there any other way to speed the processing up? (Maybe there are libraries better than OpenCV!)
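As a concrete illustration of the one part that is clearly parallel: each block's DCT is independent, so a plain Java thread pool already spreads the work across cores. A sketch, assuming a single-channel CV_32F input and an even blockSize (cv::dct requires even sizes):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Rect;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BlockDct {
    // Computes the DCT of each blockSize x blockSize tile in parallel.
    public static List<Mat> dctBlocks(Mat gray, int blockSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<Mat>> futures = new ArrayList<>();
        for (int y = 0; y + blockSize <= gray.rows(); y += blockSize) {
            for (int x = 0; x + blockSize <= gray.cols(); x += blockSize) {
                final Rect roi = new Rect(x, y, blockSize, blockSize);
                futures.add(pool.submit(() -> {
                    Mat coeffs = new Mat();
                    Core.dct(gray.submat(roi), coeffs); // independent per block
                    return coeffs;
                }));
            }
        }
        List<Mat> results = new ArrayList<>();
        for (Future<Mat> f : futures) results.add(f.get());
        pool.shutdown();
        return results;
    }
}
```

CPU-side threading like this also travels across Android devices far better than GPU offload (OpenCL availability, in particular, varies a lot between handsets).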
All the intro texts to OpenGL ES repeat that, since it's based on OpenGL, it's designed around a client/server model, though these two things tend to be on the same machine.
Well, I would like to put them on separate machines (on the same local network). Is this possible in Android? How can it be done? Extra kudos if you can figure out how to work this into a libgdx scenario (which is the gaming library I use).
(Long-winded and perhaps unnecessary further information: my use case is faster prototyping of Android games for phones. It's pretty straightforward to get finger taps, accelerometer data, and whatnot and send them over the network to a PC. If I can have the PC send GL calls to the phone, then I can effectively run the entire game from the PC while it appears to run on the phone. This lets me test whether a game/game-idea will work on the phone/phone-GPU, with the advantage of far superior RAM/CPU/compile-times/hot-swap-code, and just see what works on a phone, before worrying about fitting everything into the RAM and CPU footprint and logistics of a handset device. I know I can do this by deconstructing rendered frames, sending byte[] arrays to the device, and using libgdx Pixmap or Android BitmapFactory to get the image and render it; but if it's simple to stream GL calls instead, I'd rather do that, especially since it's a more realistic test of the phone GPU's rendering ability.)
There is a difference between a protocol supporting remote operation and an implementation of a server or client that does the remote operation. I don't think there are existing Android implementations that support anything like this. I suspect any of the "remote desktop" apps just forward 2D images, and don't do anything with OpenGL.
That said, there isn't anything particularly preventing you from implementing a new libGDX backend that would "remote" OpenGL calls to a server running on the phone, which would forward those operations to the local OpenGL backend. (I can only say this with confidence because I have not looked at it in any detail....)
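To make "remoting the calls" concrete, such a backend would essentially serialize each GL entry point over a socket; a sketch in which the opcode and wire format are invented:

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Illustrative only: a real remoting backend would have to cover the whole
// GL20 interface, plus buffer/texture uploads and a decoder on the phone.
public class RemoteGL {
    private static final int OP_GL_CLEAR = 1; // hypothetical opcode

    private final DataOutputStream out;

    public RemoteGL(Socket socket) throws IOException {
        out = new DataOutputStream(new BufferedOutputStream(socket.getOutputStream()));
    }

    public void glClear(int mask) throws IOException {
        out.writeInt(OP_GL_CLEAR); // which call
        out.writeInt(mask);        // its argument
        out.flush();               // the phone-side server decodes and replays it
    }
}
```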
However, given that one of the bigger bottlenecks in OpenGL performance is (generally) the bandwidth between the client and the GPU (e.g., uploading textures, vertex data, shaders, etc.), adding a network is only going to exacerbate that problem and will make it hard to reason about actual performance on a phone.
You're probably better off running on your desktop and using profiling to make sure you only use a "reasonable" amount of CPU and GPU resources.
There are dual-core and now quad-core phones on the market. However, I really don't know what kinds of apps truly make use of this. Can anyone provide some information on the apps that can really use the power of dual/quad cores in mobile devices?
The idea of having dual, quad, or more cores is not for specific apps to use them.
It just means having more processing power available at hand, which will only be used when necessary.
For example, when there is a workload that can be handled by one core, which is usually the case for most apps, the other cores aren't needed. But for high-end games, or when more than one process has to run and a lot of calculation is needed at a given time, the other cores may also be used once the first core has no headroom left.
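In practice an app only exercises the extra cores when its own work is split into parallel tasks, e.g. via a thread pool sized to the core count (a sketch; the tasks themselves are whatever the app computes):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelWork {
    // Sizes the pool to the core count, so a quad-core phone can genuinely
    // run four tasks at once; on a single-core phone the same code simply
    // time-slices.
    public static void runAll(List<Runnable> tasks) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (Runnable task : tasks) {
            pool.submit(task); // independent tasks may land on any core
        }
        pool.shutdown();
    }
}
```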