I would like to know how Processing sketches perform on Android. Here is a link with more info about Processing for Android: http://wiki.processing.org/w/Android#Instructions
I don't really know at which level Processing sits in Android or how it is implemented. That's why I would like to know how the performance of a Processing sketch embedded in an Android app compares with a normal Canvas from the Android API.
Processing lets us create programs relatively easily, and if the performance were good I'm sure we could save a lot of time drawing certain parts of our app with Processing (or at least for a beginner like me, Processing's language seems much easier than the Java used in Android, since we can easily call drawing functions, etc.).
So I would like your opinion on whether Processing sketches can be as efficient (in terms of performance/optimization) as using the Android Java API directly.
Thanks
I've done some tests with the examples that ship with Processing, and I thought the results could be useful to others. So here they are:
Device : Samsung Galaxy S II : Android 2.3.6, 1GB RAM, Dual-core 1.2 GHz Cortex-A9.
Tests : (on Processing 2.0a4)
No = too much lag to do anything (around 5 FPS)
Soso = we can see what the sketch is doing, but still a lot of lag (around 10-15 FPS)
OK = working (around 25 FPS or more)
Basics:
Pointillism=OK
Sprite=OK
... most of the basic examples work correctly
Topics:
Interaction:
Follow examples =OK
Animation:
Sequential=OK
Effects:
Unlimited Sprites=OK
Motion:
Brownian=OK
Bouncy Bubbles=OK
Simulate:
Fluid=Soso
Flocking=OK (the FPS sometimes drops a bit, but it's acceptable)
Simple Particle System=OK
Smoke Particle System=OK
Spring=OK
Multiple Particle Systems=OK
Chain=OK
OpenGL:
Birds: without PShape3D=Soso, with PShape3D=OK
Earth=OK
Rocket=OK
Extrusion=NO
Electric=OK
CameraLight=OK
YellowTail=OK
Planets=OK
Contributed Libraries:
Fisica:
Bubbles=Soso
Droppings=Soso
Joints=OK
Buttons=OK
Polygons=OK
Restitutions=OK
PBox2D: couldn't get it working
Some Sketches from OpenProcessing.org
http://www.openprocessing.org/visuals/?visualID=3330 = OK
http://www.openprocessing.org/visuals/?visualID=1247 = OK
http://www.openprocessing.org/visuals/?visualID=8168 = OK
http://www.openprocessing.org/visuals/?visualID=5671 = OK
http://www.openprocessing.org/visuals/?visualID=10109 = NO
http://www.openprocessing.org/visuals/?visualID=7631 = NO
http://www.openprocessing.org/visuals/?visualID=7327 = NO
Note: I ran all sketches at their original size; I didn't rescale them to fit my SGSII (which has a resolution of 480 x 800), so I guess performance may vary with the size of the sketch.
Conclusion: Processing is really interesting as a graphics library for Android. Most of the examples that ship with Processing work very well and smoothly on my phone (including the OpenGL examples). Still, it is not as optimized as on PC: simulations like Smoke or Vortex, where many particles are involved, are really laggy.
The Fisica library works well on Android, which is a really good point.
Voila :)
I'm developing an image processing app on Android phones that is expected to run 24/7. I've managed to do the following:
Use the Camera2 interface to gain better fps.
Grab raw frames, use RenderScript to convert them to RGB, and do the image processing with OpenCV in a background service (no preview). I get around 20 fps after conversion to RGB at 1280x960 on an LG G4.
So my questions are:
Is there anything else I need to optimize to minimize the memory and CPU usage?
Any chance this application can run 24/7? Is delegating all the camera operations and processing to the background service sufficient to allow it to run 24/7? When I leave it running, I can still feel heat from the camera and its surrounding area.
Any suggestion would be appreciated. Thanks.
UPDATE 1
The app runs on an LG G4 using the Camera2 interface and does image processing in the background with the screen off. It got too hot and the phone turned itself off after a few hours. What can I do to overcome this?
Regarding the second question: I think the app cannot run 24/7, because the phone will shut itself down due to the heat.
Before answering your question I must say that I am also new to image processing on Android (but not to the image processing field).
For question one:
Maybe, yes. Because image processing tasks are memory intensive, you may need to optimize your app to avoid things like memory leaks (even though the Android runtime performs routine garbage collection).
Check the following link, which may be useful: link one
When it comes to pixel-level operations (i.e. when you avoid the built-in functions of OpenCV, or whatever library you are using, and access and process pixels manually), it will be too slow. I am saying this based on my experience on my laptop. Assuming you are using OpenCV for your app, take a look at the following OpenCV page (it is for Python, but you can get the idea):
take an idea from this
and also this SO answer: SO answer
One tip I can remember: try to reduce Mat copying (copying Mat objects to other Mat objects) as much as you can.
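To illustrate that tip with a small sketch (plain Java arrays standing in for OpenCV Mats, since the same distinction applies: Mat assignment shares the underlying pixel buffer, while Mat.clone() allocates and copies a new one):

```java
public class CopyDemo {
    public static void main(String[] args) {
        // A tiny "frame" as a plain int array (a stand-in for a Mat).
        int[] frame = {10, 20, 30};

        int[] shared = frame;          // no pixel copy: both names see the same buffer
        int[] copied = frame.clone();  // full copy: a new buffer is allocated

        frame[0] = 99;
        System.out.println(shared[0]); // 99 (the shared buffer reflects the write)
        System.out.println(copied[0]); // 10 (the copy is unaffected)
    }
}
```

The expensive case is the second one: each clone allocates a full buffer, so doing it per frame creates constant garbage-collection pressure.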
Question number two:
I will go with the answer given by user7746903. That answer also ties in with the memory consumed by your background app: there may be other memory-intensive apps running in the background, so it depends. Thank you.
For the first question:
I feel it's worth mentioning that you should bypass Java as much as possible, i.e. use Java only as the interfacial layer and JNI/C for the processing loop.
eg:
Get the texture from Camera2 > supply the texture to a C function > call RenderScript / compute shaders and other processing functions from C > call a Java function to render to screen.
This improves CPU performance and reduces memory warnings (especially under rapid allocation and freeing of memory).
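Even before dropping down to JNI, one cheap way to reduce allocation pressure is to preallocate the working buffers once and reuse them for every frame, instead of allocating per frame. A minimal sketch of the pattern (plain Java; the buffer size and the +1 "conversion" are dummies, a real frame would be e.g. 1280*960 pixels):

```java
public class FrameProcessor {
    // Preallocated once and reused for every frame (dummy size for illustration).
    private final byte[] rgbBuffer = new byte[16];

    // "Convert" a raw frame into the shared buffer; here just a dummy transform.
    byte[] process(byte[] raw) {
        for (int i = 0; i < raw.length && i < rgbBuffer.length; i++) {
            rgbBuffer[i] = (byte) (raw[i] + 1);
        }
        return rgbBuffer; // caller reads, but must not keep, the shared buffer
    }

    public static void main(String[] args) {
        FrameProcessor p = new FrameProcessor();
        byte[] frame1 = p.process(new byte[]{1, 2, 3});
        byte[] frame2 = p.process(new byte[]{5, 6, 7});
        // Same buffer object every time: no per-frame garbage for the GC to chase.
        System.out.println(frame1 == frame2); // true
    }
}
```

The trade-off is that the output of one frame is overwritten by the next, so the consumer has to finish with the buffer (or copy out what it needs) before the next frame is processed.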
I created an application with Starling. On new mobile devices it performs amazingly well, but on older devices (e.g. iPhone 4) I encounter a very odd lag.
I have as far as I can tell a completely static situation:
There are quite a few display objects added to the stage; many of them are buttons, in case it matters. Their properties are not changed at all after initialization (x, y, rotation, etc.).
There are no enterframes / timeouts / intervals / requests of any kind in the background.
I'm not allocating / deallocating any memory.
In this situation I average 10 FPS out of 30, which is very odd.
Since Starling is a well-established framework, I imagine it's me who's doing something wrong / not understanding something / not aware of something.
Any idea what might be causing it?
Has anyone else experienced this sort of problem?
Edit:
After reading a little, I've optimized in every possible way according to this thread:
http://wiki.starling-framework.org/manual/performance_optimization
I reduced the draw calls from around 90 to 12, flattened sprites, set the blend mode to none in specific cases to ease the load on the CPU, and so on...
To my surprise when I tested again, the FPS was unaffected:
fps: 6 / 60
mem: 19
drw: 12
Is it even possible to get normal fps with Starling on mobile? What am I missing?
I am using big textures that are scaled down to the size of the device; is it possible that this affects the fps that much?
Regarding "Load textures from files/URLs": I'm downloading different piles of assets for different situations, so I assumed compiling each pile into a SWF would be much faster than sending a separate request for each file. The problem is that for that I can only use [Embed], which apparently uses twice the memory. Do you have a solution in mind to enjoy the best of both worlds?
Instead of downloading your assets over the wire and manually caching them for reuse, you can package the assets into your app bundle as files (rather than using [Embed]) and then use the Starling AssetManager to load the textures at the resolution/scale you need for the device:
i.e.:
assets.enqueue(
    appDir.resolvePath("audio"),
    appDir.resolvePath(formatString("fonts/{0}x", scaleFactor)),
    appDir.resolvePath(formatString("textures/{0}x", scaleFactor))
);
Ref: https://github.com/Gamua/Starling-Framework/blob/master/samples/scaffold_mobile/src/Scaffold_Mobile.as
Your application bundle gets bigger, of course, but you do not take the 2x RAM hit of using [Embed].
Misc perf ideas from my comment:
Are you testing FPS in "Release" mode?
Are you using textures that are scaled down to match the resolution of the device before loading them?
Are you mixing BLEND modes that are causing additional draw calls?
Ref: the Performance Optimization page is great reading for optimizing your usage of Starling.
Starling is not a miracle solution for mobile devices. There's quite a lot of code running behind the scenes to make the GPU display anything, and you, the coder, have to keep the number of draw calls to a minimum: the weaker the device, the fewer draw calls you can afford. It's not rare to see people use Starling without paying any attention to their draw calls.
The size of the graphics matters mainly for GPU upload time, not so much for GPU display time. All relevant textures therefore need to be uploaded before any scene is displayed; you simply cannot upload new textures while a scene is playing, as even a small texture upload will cause idling.
Displaying everything with Starling is not always a smart choice. In direct render mode the GPU takes on a lot of the work, but the CPU still has capacity to spare. You can reduce GPU uploads and GPU load by displaying static UI elements with the classic display list (which is where the Starling framework's design falls short). Starling was originally built in a way that makes it very difficult to use both display systems together; that's one of the downsides of the framework, and most professionals I know, including myself, don't use Starling for that reason.
Your system must be flexible: embed your assets on mobile, avoid external SWFs as much as possible, and be able to switch to another system for the web. If you expect to use a single asset system for the mobile/desktop/web versions of your app, you are setting yourself up for failure. Embedding on mobile is critical for memory management, because the AIR platform internally manages the cache of those embedded assets; thanks to that, memory consumption stays under control when you create new instances of them. If you don't embed, you are on your own.
Regarding overall performance: a very weak Android device will probably never get past 10 fps with Starling or any Stage3D framework, because of the amount of code those frameworks need to run (draw calls) behind the scenes. On a weak device that amount of code is already enough to completely overload the CPU. On the other hand, you can still get good performance and a good user experience on a weak device by using GPU mode instead of direct render mode (so no Stage3D) and displaying mostly raster graphics.
IN RESPONSE TO YOUR EDIT:
12 draw calls is very good (90 was pretty high).
That you still get low FPS on some devices is not that surprising. Low-end Android devices in particular will always have low FPS in direct render mode with a Stage3D framework, because of the amount of code those frameworks have to run to render one frame. The size of the textures you are using should not affect the FPS much (that's the point of Stage3D), although reducing the size of those graphics would help with GPU upload time.
Optimization is the key, and optimizing on a low-end device with low FPS is the best way to go, since whatever you do will have a great effect on better devices as well. Start by running tests that display only static graphics, with no (or very little) code of your own, just to see how far the Stage3D framework can go on its own on those weak devices without losing FPS, and then optimize from there. The number of objects displayed on screen plus the number of draw calls is what affects FPS with these Stage3D frameworks, so keep a count of both and always look for ways to reduce them. On some low-end devices it's not practical to try to hold 60 fps, so switch to 30 and adjust your rendering accordingly.
I have a project that is an image processing app for Android devices. For working with images I chose the OpenCV Android framework. The whole project consists of some general parts, such as splitting the input image into blocks, computing the DCT of each block, sorting the results, comparing the features obtained from each block, and finally showing some results.
I wrote this project, but it contains so much heavy computation (DCT, sorting, etc.) that I can't even run it on my emulator: it takes a long time, and my laptop shut down in the middle of processing. I decided to optimize the processing using parallel computing and GPU programming (it is obvious that some parts, like computing the DCT of the blocks, can be parallelized, but I am not sure about other parts, like sorting). However, I can't find any straightforward tutorial for doing this.
So here is the question: is there any way to do this or not? I need it to work across most Android devices, not just one particular device!
Or, besides GPU programming and parallel computing, is there any other way to speed up the processing? (Maybe there are libraries better than OpenCV!)
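Since the per-block work described above is independent, the CPU-parallel part at least needs very little code: Java's parallel streams can spread the blocks across cores. A minimal sketch of the idea (blockEnergy is a made-up stand-in for a real per-block DCT, and the tiny 4-value blocks stand in for 8x8 image blocks):

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class BlockDct {
    // Placeholder per-block transform; a real app would compute a DCT here.
    static double blockEnergy(double[] block) {
        double s = 0;
        for (double v : block) s += v * v;
        return s;
    }

    public static void main(String[] args) {
        // Four tiny "blocks" standing in for the blocks of an image.
        double[][] blocks = {
            {1, 0, 0, 0}, {1, 1, 0, 0}, {1, 1, 1, 0}, {1, 1, 1, 1}
        };
        double[] result = new double[blocks.length];
        // Each block is independent, so the loop parallelizes trivially;
        // each index is written by exactly one task, so this is thread-safe.
        IntStream.range(0, blocks.length)
                 .parallel()
                 .forEach(i -> result[i] = blockEnergy(blocks[i]));
        System.out.println(Arrays.toString(result)); // [1.0, 2.0, 3.0, 4.0]
    }
}
```

This only covers multi-core CPU parallelism; portable GPU programming across most Android devices is a harder problem (at the time, RenderScript was the usual Android-wide option).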
I'm new to image processing.
I have a camera (not the one built into a smartphone) that would use a smartphone (likely Android) as its processing unit. The camera will be placed on a car's back, or maybe its roof (call this car X), and the smartphone should alert if any other car approaches car X, or if another car drives strangely (weaving right and left)...
My question is: can I use a smartphone as the processing unit for this kind of purpose, or will I need a server that processes the images and sends the result to the smartphone?
1 - If you think that a smartphone (likely Android) could NOT manage this kind of image processing, please tell me why.
2 - If you think that a smartphone (likely Android) COULD manage this, what tools can I use for this purpose?
It certainly can be done. I've used an Eee PC (1.4 GHz Atom processor) for image processing (3D reconstruction) and it worked very well. The system as a whole wasn't powerful enough, but the issue there was other stuff not directly related to the image processing portion (path finding, etc.). Depending on what you're going to do, you shouldn't have any issues processing images at 15, 30 or even 60 Hz.
As a note: have you ever checked Android's camera app (the default one)? Newer versions offer a "background" mode for video recordings, replacing the actual backdrop with other videos. That is essentially image processing.
As for tools: I'm not sure if there's an OpenCV port yet, but this really depends on what (and how) you want to do it. Simple tracking, depth detection, etc. can definitely be done without such libraries and without having to rewrite too much.
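As a rough illustration of how far you can get without a library: a crude motion alert can be built by differencing consecutive frames and counting pixels that changed by more than a threshold. A minimal sketch (plain Java, tiny made-up grayscale frames; real frames would come from the camera and need noise handling):

```java
public class MotionDetect {
    // Count pixels whose intensity changed by more than a threshold
    // between the previous and the current frame.
    static int changedPixels(int[] prev, int[] curr, int threshold) {
        int changed = 0;
        for (int i = 0; i < prev.length; i++) {
            if (Math.abs(curr[i] - prev[i]) > threshold) changed++;
        }
        return changed;
    }

    public static void main(String[] args) {
        int[] prev = {10, 10, 10, 10};
        int[] curr = {10, 80, 10, 90}; // two pixels changed significantly
        System.out.println(changedPixels(prev, curr, 30)); // 2
    }
}
```

If the changed-pixel count in a region of interest exceeds some fraction of the frame, something is moving there; classifying "approaching car" vs. noise is of course where the real work starts.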
Hi all
I'm having a problem with my JNI library.
The execution time of the same code changes from one phone to another.
I thought it was just because we were testing on an old phone, but recently I ran it on an HTC Legend and all the JNI code was slow...
I ran the profiler and it's really a night and day difference:
on some phones the JNI functions take 15% to 20%, while on other phones they take 40% to 50% under the same conditions...
Does anybody have an explanation?
If one of the phones uses the JIT (Just-In-Time) compiler added in Froyo (2.2), then that one should be much faster than your older ones. Are you testing with the same Android version?
Apart from that, some devices are better at floating-point math than others. Devices that do not implement an FPU will emulate floating-point operations. Here you can find a nice blog post about it: http://www.badlogicgames.com/wordpress/?p=71.
There are plenty of sources on how to implement floating-point behavior using fixed-point arithmetic: http://en.wikipedia.org/wiki/Fixed-point_arithmetic
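The core idea of fixed-point arithmetic is to store each number scaled by a power of two and use only integer operations, so FPU-less devices never touch float emulation. A minimal 16.16 sketch (shown in Java for readability; the JNI/C version is the same bit manipulation):

```java
public class FixedPoint {
    // 16.16 fixed point: the low 16 bits hold the fractional part.
    static final int SHIFT = 16;

    static int toFixed(double d)  { return (int) (d * (1 << SHIFT)); }
    static double toDouble(int f) { return f / (double) (1 << SHIFT); }

    // Multiply two 16.16 numbers; widen to long so the
    // 32x32-bit intermediate product doesn't overflow.
    static int mul(int a, int b)  { return (int) (((long) a * b) >> SHIFT); }

    public static void main(String[] args) {
        int a = toFixed(1.5);
        int b = toFixed(2.25);
        System.out.println(toDouble(mul(a, b))); // 3.375, using only integer math
    }
}
```

The trade-off is range and precision: 16.16 covers roughly +/-32768 with ~1/65536 resolution, which is usually plenty for game-style math but not for everything.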
Processors are certainly not created equal; they have different feeds, speeds, caching and such. The obvious explanation is that it is the processor.
Additionally, system-wide things may impact processing: if you are, say, processing an image taken by the camera via JNI, the size of the image may be device-specific.
Also check whether you are measuring thread time or wall-clock time; if you look at timings relative to the parts of the code that are Java, you might be seeing a relative speed-up in the Java (e.g. JIT in Android 2.2) rather than a slow-down in the JNI.
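To separate the two clocks, on a desktop JVM you can compare System.nanoTime() (wall-clock) against ThreadMXBean.getCurrentThreadCpuTime() (per-thread CPU time); thread CPU time support is JVM-dependent, and on Android the analogous call would be SystemClock.currentThreadTimeMillis(). A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class Timing {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();

        long wallStart = System.nanoTime();
        long cpuStart  = bean.getCurrentThreadCpuTime();

        // Busy work: advances both the wall clock and thread CPU time...
        double x = 0;
        for (int i = 0; i < 1_000_000; i++) x += Math.sqrt(i);

        // ...while sleeping advances only the wall clock.
        Thread.sleep(50);

        long wall = System.nanoTime() - wallStart;
        long cpu  = bean.getCurrentThreadCpuTime() - cpuStart;
        // wall will be noticeably larger than cpu because of the sleep.
        System.out.printf("wall=%dms cpu=%dms%n", wall / 1_000_000, cpu / 1_000_000);
    }
}
```

If the JNI portion's share looks larger on one phone, comparing these two clocks tells you whether the native code actually got slower or the surrounding Java simply got faster.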