I am looking forward to starting a project that will use OCR, object tracking, and other image processing algorithms on Android, and I want to accelerate these algorithms with external hardware accelerators on an FPGA connected through the Open Accessory API.
Do image processing apps perform badly enough to need custom hardware for acceleration? Is there a resource describing the performance of image processing algorithms on smartphones and embedded systems, so I don't have to write and benchmark one myself?
It's only viable to do optimization after you have done measurements for your particular case.
If you need HW acceleration you might want to check out RenderScript. It gives you access to the GPU to perform generic computations.
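As a rough illustration of what that looks like from Java, here is a minimal sketch using one of the built-in RenderScript intrinsics (a Gaussian blur as a stand-in for whatever computation you actually need); the class and method names are illustrative, not from any particular tutorial:

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicBlur;

public final class RsBlur {
    // Blurs the bitmap in place using the RenderScript blur intrinsic, which the
    // runtime may schedule on the GPU or across multiple CPU cores.
    public static void blur(Context context, Bitmap bitmap, float radius) {
        RenderScript rs = RenderScript.create(context);
        Allocation in = Allocation.createFromBitmap(rs, bitmap);
        Allocation out = Allocation.createTyped(rs, in.getType());
        ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
        script.setRadius(radius);      // radius must be in (0, 25]
        script.setInput(in);
        script.forEach(out);
        out.copyTo(bitmap);            // copy the result back into the Bitmap
        rs.destroy();
    }
}
```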
JavaCV offers a number of image processing algorithms. It is basically a Java wrapper for OpenCV. I found this post regarding OpenCV and OCR: https://stackoverflow.com/questions/1284214/simple-ocr-programming-tutorials-articles. Performance really depends on the size of the image and the processor on the device. Not sure about using an FPGA. Have you considered using the "cloud" to offload processing?
You can use OpenCV for image processing on Android. The best tutorial I can find is the OpenCV setup on Eclipse. However, if you do the image processing in Java the results will be slow, so use the second part of the tutorial to write it in C/C++ and call that code on Android through JNI.
Even then, a lot of image resizing and defining of a region of interest is needed to make the program run in real time if you do object recognition.
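To make the Java-to-native hand-off concrete, here is a minimal sketch of the usual OpenCV-on-Android pattern, where the Java side passes only the native address of a Mat to a C++ function; the class, library, and function names are illustrative:

```java
import org.opencv.core.Mat;

public class NativeProcessor {
    static {
        // Name of your own NDK-built native library (illustrative).
        System.loadLibrary("native_processing");
    }

    // Implemented in C++ as:
    //   JNIEXPORT void JNICALL
    //   Java_NativeProcessor_processFrame(JNIEnv*, jclass, jlong addr) {
    //       cv::Mat& frame = *(cv::Mat*) addr;   // work on the frame in place
    //   }
    private static native void processFrame(long matAddress);

    // Only the Mat's native pointer crosses JNI, so no pixel data is copied.
    public static void process(Mat frame) {
        processFrame(frame.getNativeObjAddr());
    }
}
```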
Related
What is a suggested implementation approach for a real time scrolling raster plot on Android?
I'm not looking for a full source code dump or anything, just some implementation guidance or an outline on the "what" and "how".
what: Should I use built-in Android drawing components or go straight to OpenGL ES 2? Or maybe something else I haven't heard of. This is my first bout with graphics of any sort, but I'm not afraid to get my hands dirty with OpenGL.
how: Given a certain set of drawing components, how would I approach the implementation? I feel like the plot is basically a texture that needs updating and translating (a sketch of that idea is below).
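For what it's worth, one way to read the texture idea: keep a W x H single-channel texture as a ring buffer over time, upload each new column with glTexSubImage2D, and let the shader offset the x coordinate so the newest column always sits at the right edge. This is a hedged sketch with illustrative names, assuming the calls run on the GL thread (e.g. inside a GLSurfaceView.Renderer), not a definitive implementation:

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class RasterScroller {
    private final int width, height;   // width = time columns, height = frequency bins
    private final int textureId;
    private int writeColumn = 0;       // next column to overwrite (ring-buffer index)

    public RasterScroller(int width, int height) {
        this.width = width;
        this.height = height;
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        textureId = tex[0];
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
        // Allocate an empty single-channel texture; each texel is one magnitude sample.
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, width, height, 0,
                GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, null);
    }

    // Upload one new column of magnitudes (height bytes) into the ring buffer.
    public void pushColumn(byte[] column) {
        ByteBuffer buf = ByteBuffer.allocateDirect(column.length).order(ByteOrder.nativeOrder());
        buf.put(column).position(0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, writeColumn, 0, 1, height,
                GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, buf);
        writeColumn = (writeColumn + 1) % width;
        // The fragment shader can offset (and wrap) the x texture coordinate by
        // writeColumn / (float) width so the newest column is drawn at the right edge.
    }
}
```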
Background
I need to design an Android application that, as part of its functionality, displays a real-time scrolling raster plot (i.e. a spectrogram or waterfall plot). The data will first be coming out of libUSB and passing through native C++ where the signal processing will happen. Then, I assume, the plotting can happen either in C++ or Kotlin, depending on what is easier and on whether passing the data over the JNI boundary is a big enough bottleneck.
My main concern is drawing the base raster itself in real time and not so much extra things such as zooming, axes, or other added functionality. I'm trying to start simple.
Constraints
I'm limited to free software.
Platform: Android version 7.0+ on modern device
GPU hardware acceleration is preferred, as the CPU will already be doing a good amount of number crunching to bring streaming data to the plot.
Thanks in advance!
I'm developing an image processing app on Android phones, which is expected to run 24/7. I've managed to do the following:
Use the Camera2 API to gain better fps.
Grab raw frames, use RenderScript to convert them to RGB, and do the image processing with OpenCV in a background service (no preview). I get around 20 fps after conversion to RGB at 1280x960 on an LG G4.
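For reference, the RenderScript YUV-to-RGB step described above usually looks roughly like this; this is a generic sketch (assuming an NV21 byte array, e.g. repacked from the ImageReader planes), not the poster's actual code:

```java
import android.content.Context;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicYuvToRGB;
import android.renderscript.Type;

public final class YuvToRgb {
    private final RenderScript rs;
    private final ScriptIntrinsicYuvToRGB script;
    private Allocation in, out;

    public YuvToRgb(Context context) {
        rs = RenderScript.create(context);
        script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    }

    // Converts one NV21 frame into an RGBA byte array (allocations are reused).
    public byte[] convert(byte[] nv21, int width, int height) {
        if (in == null) {
            Type yuvType = new Type.Builder(rs, Element.U8(rs)).setX(nv21.length).create();
            Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
                    .setX(width).setY(height).create();
            in = Allocation.createTyped(rs, yuvType, Allocation.USAGE_SCRIPT);
            out = Allocation.createTyped(rs, rgbaType, Allocation.USAGE_SCRIPT);
        }
        in.copyFrom(nv21);
        script.setInput(in);
        script.forEach(out);
        byte[] rgba = new byte[width * height * 4];
        out.copyTo(rgba);
        return rgba;   // can then be wrapped in a Mat for OpenCV processing
    }
}
```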
So my questions are:
Is there anything else I need to optimize to minimize the memory and CPU usage?
Any chance that this application can run 24/7? Is delegating all the camera operations and processing to the background service sufficient to allow it to run 24/7? When I leave it running, I can still feel the heat from the camera and its surrounding area.
Any suggestion would be appreciated. Thanks.
UPDATE 1
The app runs on the LG G4 using the Camera2 interface and does the image processing in the background with the screen off. It got too hot and the phone turned itself off after a few hours. What can I do to overcome this?
About the second question: I think the app cannot run 24/7, because the phone will shut itself down due to the heat.
Before answering your question I must say that I am also new to image processing on Android (but not to the image processing field).
For question one:
Maybe yes. Because image processing tasks are very memory intensive, you may need to optimize your app to avoid things like memory leaks (even though the Android runtime performs routine garbage collection).
Check the following link, it may be useful: link one
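On the memory-leak point, one concrete habit that helps with OpenCV on Android is releasing native Mat buffers explicitly, since the Java garbage collector does not track the native heap they live on. A minimal sketch (the method name and processing steps are illustrative):

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public final class FrameStep {
    // Native Mat memory is not tracked by the Java GC, so release temporaries
    // explicitly instead of waiting for finalizers.
    public static void processFrame(Mat rgba) {
        Mat gray = new Mat();
        try {
            Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);
            Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
            // ... further processing on 'gray' ...
        } finally {
            gray.release();   // frees the native buffer immediately
        }
    }
}
```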
When it comes to pixel-level operations (when you avoid the built-in functions of OpenCV, or whatever library you are using, and access and process pixels manually), it will be too slow. I am saying this based on my experience on my laptop. Assuming you are using OpenCV for your app, take a look at the following OpenCV page (it is for Python but you can get the idea):
take an idea from this
and also this SO answer: SO answer
A tip I can remember: try to reduce Mat copying (copying Mat objects to other Mat objects) as much as you can.
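To illustrate that tip, here is a hedged example of the difference; submat() returns a view that shares the underlying pixel buffer, while clone() (or copyTo()) duplicates it:

```java
import org.opencv.core.Mat;
import org.opencv.core.Rect;

public final class MatCopyTip {
    public static void roiExample(Mat frame) {
        // Shares the pixel buffer: cheap, no new allocation for the pixels.
        Mat roiView = frame.submat(new Rect(0, 0, 100, 100));

        // Duplicates the pixel buffer: a full copy. Avoid it unless you really
        // need an independent image (e.g. the original must stay unchanged).
        Mat roiCopy = roiView.clone();

        // ... work on roiView in place where possible ...
        roiCopy.release();   // remember to free the copies you do make
    }
}
```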
Question number two:
I will go with the answer given by user7746903. That answer is also linked to the memory that will be consumed by your background app. There may be other, more memory-intensive apps running in the background, so it depends. Thank you.
For the first question:
I feel it's worth mentioning that you should bypass Java as much as possible, i.e. use Java only as the interface layer and JNI/C as the main call loop.
e.g.:
Get the texture from Camera2 > supply the texture to a C function > call RenderScript / compute shaders and other processing functions from C > call a Java function to render to the screen.
This improves CPU performance and reduces memory warnings (especially with rapid allocation and freeing of memory).
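As a hedged sketch of that kind of pipeline, one common trick is to hand the camera buffer to native code as a direct ByteBuffer, so the pixels never pass through the Java heap; the class, library, and function names below are illustrative:

```java
import android.media.Image;
import java.nio.ByteBuffer;

public final class NativePipeline {
    static {
        System.loadLibrary("native_pipeline");   // illustrative library name
    }

    // Implemented in C++; GetDirectBufferAddress() gives the native side a raw
    // pointer to the camera buffer, so no pixel copy crosses the JNI boundary.
    private static native void processLuma(ByteBuffer yPlane, int width, int height, int rowStride);

    // Called from an ImageReader.OnImageAvailableListener with a YUV_420_888 image.
    public static void onFrame(Image image) {
        Image.Plane y = image.getPlanes()[0];   // luminance plane
        ByteBuffer buffer = y.getBuffer();      // direct buffer owned by the camera
        processLuma(buffer, image.getWidth(), image.getHeight(), y.getRowStride());
        image.close();                          // release the buffer back to the camera
    }
}
```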
I'm about to start working on an augmented reality project that will involve using the EPSON Moverio AR glasses or similar. Assuming we go for those glasses, they run Android 4.0.4 - API 15. The app will involve near real-time analysis of video (frames from the glasses' camera) for feature detection / tracking with markers and overlaying 3D objects in the 'real world' based on the markers.
So far then, on the technical side, it looks like we'll be dealing with:
API 15
OpenCV
OpenGLES2
Considering the above, I'm wondering if it's worth doing it all through the NDK, using a single NativeActivity with the android_native_app_glue code. When I say worth it, I mean performance-wise.
Sure, doing it all on the C/C++ side has, for instance, the advantage that the code could then potentially be ported with minimal modification to other environments. But OpenCV does have Java bindings for Android, and GL can also be used to a certain extent from Java. So I'm just wondering whether performance-wise it's worth it, or whether it would be about the same as, say, using a GLSurfaceView.
I work in augmented reality. The vast majority of applications I've seen have been native. Google recommends avoiding native applications unless the gains are absolutely necessary. I think AR is one of the relatively few cases where it is necessary. The benefits I'm aware of are:
Native camera access will allow you to get a higher capture framerate. Passing the data to the Java layer considerably slows this down. Even OpenCV's non-native capture can be slower in Java because OpenCV primarily maintains the data in native objects. Your capture framerate is a limiting factor on how fast you can update the pose information for your objects. Beware though, OpenCV's native camera will not work on devices running Qualcomm's MSM-optimized fork of Android - this includes many Snapdragon devices.
Every call to an OpenGL method in Java not only has a cost related to dropping into native code, but also performs quite a few additional checks. Look through GLES20.cpp, which contains the native implementation of the GLES20 class's methods. You'll see that you could bypass quite a lot of logic by using native calls. This is fine in most mobile applications, but 3D rendering often gets a significant benefit from bypassing those checks and the JNI overhead. This is even more important in AR because you will already be swamping the system with CV processing.
You will very likely want your detection-related code in native. OpenCV has samples if you want to see the difference between native and Java detection performance. The former will use fewer resources and be more consistent. Using a native application means that you can call your native functions without paying the cost of passing large amounts of data from Java to native.
Native sensor access is more efficient, and the sampling rate stays far more consistent in native code thanks to the lack of garbage collection and JNI. This is relevant if you will be using IMU data in interesting ways.
You may be able to build a non-native application that has most of its code in native libraries and runs well despite being Java-based, but it is considerably more difficult.
I have a project that is an image processing app for Android devices. For working with images I chose the OpenCV Android framework. The whole project consists of some general parts: splitting the input image into blocks, computing the DCT of each block, sorting the results, comparing the features obtained from each block, and finally showing some results.
I have written this project, but it contains so much heavy computation (DCT, sorting, etc.) that I can't even run it on my emulator: it takes too long and my laptop shuts down in the middle of processing. I decided to optimize the processing using parallel computing and GPU programming (it is obvious that some parts, like computing the DCT of the blocks, can be parallelized, but I am not sure about other parts like sorting). The problem is that I can't find any straightforward tutorial for doing this.
Here is the question: is there any way to do that or not? I need it to work on most Android devices, not on one special device!
Or, besides GPU programming and parallel computing, is there any way to speed the processing up? (Maybe there are libraries better suited than OpenCV!)
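For the "parallelize the per-block DCT" idea mentioned above, a plain Java thread pool is one portable option that works on most devices without GPU programming. This is only a rough sketch using OpenCV's Core.dct; it assumes a CV_32F grayscale Mat and an even block size such as 8, and all names are illustrative:

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class BlockDct {
    // Computes the DCT of each block x block tile of the image in parallel on the CPU.
    public static List<Mat> dctBlocks(Mat grayFloat, int block) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<Mat>> futures = new ArrayList<>();
        for (int y = 0; y + block <= grayFloat.rows(); y += block) {
            for (int x = 0; x + block <= grayFloat.cols(); x += block) {
                final Rect tile = new Rect(x, y, block, block);
                futures.add(pool.submit(() -> {
                    Mat dst = new Mat(block, block, CvType.CV_32F);
                    Core.dct(grayFloat.submat(tile), dst);   // per-block DCT
                    return dst;
                }));
            }
        }
        List<Mat> results = new ArrayList<>();
        for (Future<Mat> f : futures) results.add(f.get());
        pool.shutdown();
        return results;
    }
}
```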
I'm new to image processing.
I have a camera (not the one built into the smartphone) that would use a smartphone (likely Android) as its processing unit. The camera will be placed on the back, or maybe the roof, of a car (let's call this car X), and the smartphone should alert if any other car approaches car X or if another car drives strangely (weaves right and left)...
My question is: can I use a smartphone as the processing unit for this kind of purpose, or will I need a server that processes the images and sends the results to the smartphone?
1 - If you think that a smartphone (likely Android) could NOT manage this kind of image processing, please tell me why.
2 - If you think that a smartphone (likely Android) COULD manage this, what tools can I use for this purpose?
It certainly can be done. I've used an Eee PC (1.4 GHz Atom processor) for image processing (3D reconstruction) and it worked very well. The system as a whole wasn't powerful enough, but the issue there was other stuff not directly related to the image processing portion (path finding, etc.). Depending on what you're going to do, you shouldn't have any issues processing images at 15, 30 or even 60 Hz.
As a note: Ever checked Android's camera app (the default one)? Newer versions offer a "background" mode for video recordings, replacing the actual backdrop with other videos. This is essentially image processing.
As for tools: I'm not sure if there's an OpenCV port yet, but this really depends on what (and how) you want to do it. Simple tracking, depth detection, etc. can definitely be done without such libraries and without having to rewrite too much.
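To give a flavour of what "simple tracking without heavy libraries" can look like for the approaching-car alert, here is a hedged sketch of plain frame differencing on grayscale frames; the thresholds are arbitrary, and it is only a starting point, not a working detector:

```java
public final class MotionAlert {
    private byte[] previous;   // previous grayscale frame, 1 byte per pixel

    // Returns true when enough pixels changed between consecutive frames,
    // a crude proxy for "something large is moving in front of the camera".
    public boolean frameChanged(byte[] gray, int width, int height) {
        boolean alert = false;
        if (previous != null) {
            int changed = 0;
            for (int i = 0; i < width * height; i++) {
                int diff = Math.abs((gray[i] & 0xFF) - (previous[i] & 0xFF));
                if (diff > 30) changed++;               // per-pixel threshold (arbitrary)
            }
            alert = changed > (width * height) / 20;    // >5% of pixels changed (arbitrary)
        }
        previous = gray.clone();
        return alert;
    }
}
```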