Android TextureView.getBitmap() is very slow

I have an Android app with the camera configured to send its preview to a TextureView. In the onSurfaceTextureUpdated method of my SurfaceTextureListener I pull out the preview frame as a bitmap using:
textureView.getBitmap(existingBitmap);
It works fine, but takes a very long time (about 200-250 ms for a 720x1280 image). It seems like this should go much faster. Any thoughts on how I can improve the performance of this operation?

Getting the preview this way generates a new bitmap from the frame, and that copy is what is taking the application so long. My suggestion would be to perform this operation in a background thread, as explained in this link:
http://developer.android.com/reference/android/os/AsyncTask.html
It also lets you chain any work that depends on the bitmap by putting it in the onPostExecute method.
This keeps the main UI thread free and responsive to anything else the user may need to do during this time.
This may not be the perfect answer you are looking for, but I hope it helps.
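A minimal sketch of that approach, assuming the bitmap is still grabbed on the UI thread in onSurfaceTextureUpdated and only the downstream processing moves onto the AsyncTask (FrameProcessTask and processFrame are hypothetical names, not from the original post):

import android.graphics.Bitmap;
import android.os.AsyncTask;

// Runs the heavy per-frame work off the main thread; the getBitmap() call
// itself still happens on the UI thread before execute() is invoked.
public class FrameProcessTask extends AsyncTask<Bitmap, Void, Bitmap> {

    @Override
    protected Bitmap doInBackground(Bitmap... frames) {
        return processFrame(frames[0]); // hypothetical heavy processing
    }

    @Override
    protected void onPostExecute(Bitmap result) {
        // Back on the UI thread: safe to touch views or queue dependent work.
    }

    private Bitmap processFrame(Bitmap frame) {
        return frame; // placeholder for the real image processing
    }
}

You would kick it off from onSurfaceTextureUpdated with new FrameProcessTask().execute(textureView.getBitmap(existingBitmap)); note that getBitmap() itself still costs the 200-250 ms, so this only keeps that cost from also stalling work queued behind it.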

Related

Why can't UI thread and Render thread run at the same time?

I have been a mediocre Android developer for years. I like Android, but there's a big problem: frame drops. Even the most powerful devices can stutter frequently, while iOS devices run at a constant 60 fps. I just can't understand why, and I want to know. So the first thing I did was watch an I/O presentation about performance, and there is one thing I didn't really understand. Why can't the UI and render threads run at the same time? Yes, I know the basics, like the render thread can't know what to render while the UI thread is doing its thing, but why can't the render thread render the previous frame? You can see the video here:
https://youtu.be/9HtTL_RO2wI?t=491
And here's a diagram of what I am asking for:
You get the idea. I don't know about the low-level parts of Android, so can anyone explain this like I'm five?
Your process' main thread is responsible for rendering the frames that will be presented to the user, so you should keep the code running there as fast and light as possible. If you have to do some heavy processing or access any IO (network, sdcard, etc.), that may hurt the fluidity of the application, since the thread may be stuck waiting for a response.
As good practice, you should start that IO access/heavy processing on another thread to run in the background and let the system decide its priority. If necessary, it is recommended to present some feedback to the user, like a ProgressBar, to indicate that something is being processed.
Also, the Render Thread needs to know what to render before it does it, so the UI Thread has to work out which information the app would like to present to the user.
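A minimal sketch of that practice, using a plain Thread plus a main-looper Handler (loadFromNetwork and showResult are hypothetical stand-ins for the IO and the UI update):

import android.os.Handler;
import android.os.Looper;

public class BackgroundWork {

    private final Handler mainHandler = new Handler(Looper.getMainLooper());

    public void start() {
        // Heavy IO/processing runs off the main thread so rendering stays fluid.
        new Thread(() -> {
            final String result = loadFromNetwork(); // hypothetical slow call
            // Hand the result back to the main thread for the UI update.
            mainHandler.post(() -> showResult(result));
        }).start();
    }

    private String loadFromNetwork() {
        return "data"; // placeholder for real network/disk access
    }

    private void showResult(String result) {
        // Update views (or hide the ProgressBar) here, on the main thread.
    }
}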
As @JonGoodwin points out, they both run in parallel, but usually on two cores of the same processor, as phones nowadays have at least two cores. Both threads run on the CPU, and the RenderThread sends rendering commands to the GPU. Note that this has been true since API 21 (the RenderThread is what enables things like the ripple effect).
The problem, though, is what @LucianoFerruzzi points out: usually poor code that does too many things on the UI thread (the RenderThread is not accessible, at least not through standard mechanisms).
Also, see the following episode of Android Developers Backstage: Episode 74: Graphics

Can an Android phone run an image processing app 24/7?

I'm developing an image processing app on Android phones, which is expected to run 24/7. I've managed to do the following:
Use the Camera2 API to get a better frame rate.
Grab raw frames, use RenderScript to convert them to RGB, and do the image processing with OpenCV in a background service (no preview). I get around 20 fps after conversion to RGB at 1280x960 on an LG G4.
So my questions are:
Is there anything else I need to optimize to minimize the memory and CPU usage?
Any chance that this application can run 24/7? Is delegating all the camera operations and processing to the background service sufficient to allow it to run 24/7? When I leave it running, I can still feel the heat from the camera and its surrounding area.
Any suggestion would be appreciated. Thanks.
UPDATE 1
The app runs on an LG G4 using the Camera2 API and does image processing in the background with the screen off. It got too hot and the phone turned itself off after a few hours. What can I do to overcome this?
About the second question: I think the app cannot run 24/7, because the phone will shut itself down due to the heat.
Before answering your question, I must say that I am also new to image processing on Android (but not to the image processing field).
For question one:
Maybe. Because image processing tasks are very memory intensive, you may need to optimize your app to avoid things like memory leaks (even though the Android runtime performs routine garbage collection).
Check the following link, which may be useful: link one
When it comes to pixel-level operations (when you avoid the built-in functions of OpenCV, or whatever library you are using, and access and process pixels manually), it will be too slow. I am saying this based on my experience on my laptop. Assuming you are using OpenCV for your app, take a look at the following OpenCV page (it is for Python, but you can get the idea):
take an idea from this
and also this SO answer: SO answer
A tip I can remember: try to reduce Mat copying (copying Mat objects to other Mat objects) as much as you can.
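A small sketch of both tips using OpenCV's Java API, assuming RGBA input frames: the two Mats are allocated once and reused, and each step writes into a preallocated destination instead of producing a fresh copy per frame:

import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class FramePipeline {

    // Allocated once and reused for every frame: no per-frame Mat copies.
    private final Mat gray = new Mat();
    private final Mat blurred = new Mat();

    public Mat process(Mat rgbaFrame) {
        // Built-in OpenCV functions run optimized native code; a manual
        // per-pixel Java loop over the same data would be far slower.
        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);
        Imgproc.GaussianBlur(gray, blurred, new Size(5, 5), 0);
        return blurred;
    }
}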
Question number two:
I will go with the answer given by user7746903. This also ties in with the memory that will be consumed by your background app. There may be other, much more memory-intensive apps running in the background, so it depends. Thank you.
For the first question:
I feel it's worth mentioning that you should bypass Java as much as possible, i.e. use Java only as the interface layer and JNI C for the processing loop.
e.g.:
Get the texture from Camera2 > supply the texture to a C function > call RenderScript/compute shaders and the other processing functions from C > call a Java function to render to the screen.
This improves CPU performance and reduces memory warnings (especially with rapid allocation and freeing of memory).
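A minimal sketch of the Java side of that layering; the library name and method signature are hypothetical, with the actual per-frame loop living in C behind the native method:

public class NativeFrameProcessor {

    static {
        // Hypothetical native library containing the C processing loop.
        System.loadLibrary("frameproc");
    }

    // Java stays a thin interface layer; the heavy work happens in C.
    public static native void processTexture(int textureId, int width, int height);
}

The matching C function (exported following the Java_..._processTexture JNI naming convention) would receive the texture, run the RenderScript/compute-shader steps, and call back into Java only to present the result.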

Syncing OpenGL rendering with a C++ game loop on Android

I am building a game-like application using the Android NDK and OpenGL ES 2.0.
So far I understand the concepts of vertices, shaders, and programs.
The main game loop would be a loop in a single thread, as follows:
Step 1. Read all the user input.
Step 2. Update game objects (if needed) based on the input.
Step 3. Make draw calls for all the objects.
Step 4. Call glSwapBuffers.
Then loop back to step 1.
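For reference, here is the shape of that loop sketched in Java (the question targets the NDK, but the structure is identical; every method body is a hypothetical stub):

// A minimal single-threaded game loop mirroring steps 1-4.
public class GameLoop implements Runnable {

    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            readInput();    // step 1: drain queued touch/key events
            updateWorld();  // step 2: advance game objects based on the input
            drawScene();    // step 3: issue the GL draw calls
            swapBuffers();  // step 4: present the frame (eglSwapBuffers on Android)
        }
    }

    public void stop() { running = false; }

    private void readInput() { /* poll the input queue */ }
    private void updateWorld() { /* apply input, physics, animation */ }
    private void drawScene() { /* glClear, glDrawArrays, ... */ }
    private void swapBuffers() { /* eglSwapBuffers(display, surface) */ }
}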
But I ran into various confusions regarding sync and threading, so I'm listing all the questions together.
1. Since OpenGL draw calls are asynchronous, the draw calls and glSwapBuffers may get called many times before the GPU has actually rendered even a single frame from the previous iteration of the loop. Will this be problematic? Buffer overflow or tearing?
2. Assuming VSYNC is enabled, does point 1 still cause a problem?
3. Since all calls are async, how do I measure the time spent rendering each frame? glSwapBuffers returns immediately, so how can I know when the frame was actually done?
4. Loading textures will occupy space in RAM. Is checking free memory before loading a texture the standard way, or should I keep loading textures until I reach OUT_OF_MEMORY_ERROR?
5. If I switch to a multithreaded approach, calling just glSwapBuffers at a fixed 60 times per second without any regard to the thread that is processing input and issuing draw calls, what is supposed to happen?
Also, how do I control the fps in the game loop? I know the exact fps depends on a large number of factors, but how can you get close to a target?
The SwapBuffers() will not be executed out of order. Issuing it after all of the draw commands for the frame is fine. The driver will take care of it; you don't need to sync anything. You can only screw this up by using multiple threads or multiple contexts, and even that would take a lot of effort.
There is no problem with 1, and VSYNC does not directly change anything here.
The calls might be asynchronous, but the driver will not queue up an unlimited amount of work. Sooner or later it will have to block if you try to issue too many calls in advance. When vsync is on, the typical behavior is that the driver will queue up at most a few frames (or just one, depending on the driver settings), and SwapBuffers() will block when that limit is reached. So the timing statistics you get there are accurate after the first few frames. Note that this is still much better than completely flushing the queue, as the driver unblocks as soon as the first pending buffer swap has been carried out.
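A sketch of what such a measurement can look like in Java with a GLSurfaceView.Renderer (the question uses the NDK, but the principle is the same): GLSurfaceView swaps right after onDrawFrame returns, so once the driver's queue is full the spacing between onDrawFrame calls converges on the real per-frame time:

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class TimedRenderer implements GLSurfaceView.Renderer {

    private long lastFrameNanos;

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) { }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        long now = System.nanoTime();
        if (lastFrameNanos != 0) {
            float frameMs = (now - lastFrameNanos) / 1_000_000f;
            // Log or accumulate frameMs; ignore the first few frames, which
            // only measure how fast the driver can queue work.
        }
        lastFrameNanos = now;

        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); // real draw calls go here
    }
}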
This is a totally new topic, which probably belongs into another question. However: It is very unlikely that you get any of the current desktop GL implementations to ever generate GL_OUT_OF_MEMORY. The driver will automatically page textures (and other objects) between VRAM and system RAM (and the OS might even page that to disk). The GL also provides no means to query the available memory.
In that scenario you will need to synchronize manually. That approach does not make the slightest sense and seems like trying to solve a problem which does not exist. If you want your game to use multithreading, still put all the GL rendering (and SwapBuffers) in the same thread. You can use different threads for input processing, sound, physics, scene updates, general game logic and whatever else. But you should use a single-thread/single-context approach for the GL. That way it also won't hurt you when SwapBuffers() blocks your render thread, as your game logic and input handling still run, and the render thread simply renders new frames with the newest available data at the frequency the display needs (with vsync on) or as fast as the CPU and GPU can work (if vsync is off).
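A sketch of that split, assuming a logic thread that publishes immutable snapshots which the single GL thread picks up once per frame (GameState and its fields are hypothetical):

import java.util.concurrent.atomic.AtomicReference;

// One thread owns all GL work; other threads only publish data for it to read.
public class SceneSync {

    // Immutable snapshot of whatever the renderer needs for one frame.
    public static final class GameState {
        public final float playerX, playerY;
        public GameState(float playerX, float playerY) {
            this.playerX = playerX;
            this.playerY = playerY;
        }
    }

    private final AtomicReference<GameState> latest =
            new AtomicReference<>(new GameState(0f, 0f));

    // Called from the game-logic thread whenever an update completes.
    public void publish(GameState state) {
        latest.set(state);
    }

    // Called from the GL thread once per frame; if no update arrived in
    // time, it simply re-renders the previous state.
    public GameState snapshotForRender() {
        return latest.get();
    }
}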

Splitting the cameracontrol OpenCv4Android sample into two threads

I have imported into Eclipse Juno the OpenCv4Android sample, ver. 2.4.5, called "cameracontrol". It can be found here: Camera Control OpenCv4Android Sample.
Now I want to use this project as the base for mine. I want to process every frame with image-processing techniques, so, in order to improve performance, I want to split the main activity of the project into two classes: one that is merely the activity and one (a thread) that is responsible for the preview.
How can I do this? Are there any examples of it?
This might not be a complete answer as I'm only learning about this myself at the moment, but I'll provide as much info as I can.
You will likely have to grab the images from the camera yourself and dispatch them to threads. This is because your activity in the example gets called with a frame from the camera and has to return the frame to be displayed (immediately) as the return value. You can't get two or more frames to process in parallel without showing a blank screen in the meantime or some other hacky stuff. You'll probably want to allocate a (fixed-size) buffer somewhere, then start processing a frame on a worker thread when you get one (this would be the dispatcher). Once a worker thread is done, it notifies the dispatcher, which gives the image to the view. If frames come from the camera while all worker threads are busy (i.e. there are no free slots in the buffer), the frame is dropped. Once a space in the buffer frees up again, the next frame is accepted and processed.
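One way to sketch that dispatcher, using a bounded ThreadPoolExecutor whose DiscardPolicy silently drops frames when every slot is busy (processFrame and onFrameReady are hypothetical stand-ins):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FrameDispatcher {

    // Two workers plus two queued slots; anything beyond that is dropped,
    // which implements "drop frames while all workers are busy".
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 2, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(2),
            new ThreadPoolExecutor.DiscardPolicy());

    // Called from the camera callback with each incoming frame.
    public void dispatch(byte[] frame) {
        pool.execute(() -> {
            byte[] processed = processFrame(frame); // hypothetical heavy work
            onFrameReady(processed);                // hand back to the view
        });
    }

    private byte[] processFrame(byte[] frame) { return frame; }

    private void onFrameReady(byte[] frame) { /* notify the view/UI thread */ }
}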
You can look at the code of the initializeCamera function of JavaCameraView and NativeCameraView to get an idea of how to do this (Google should help too, as this is how apps without OpenCV have to do it as well). For me the native camera performs significantly better, though (even without heavy processing it's just much smoother), but YMMV...
I can't help with actual details of the implementation since I'm not that far into it myself yet. I hope this provides some ideas, though.

Android: OpenGL rendering pauses when a heavy background task is running

I am rendering objects via OpenGL and get a nice smooth frame rate of 60 fps in most situations.
UNTIL I do something heavy in a background thread, like fetching stuff from a REST API, processing it, and adding objects to the graph (low-priority stuff; I care more about UI fluidity). The renderer will then pause for a very long period, up to 1 second (roughly as long as the background thread runs), and then resume as if nothing had happened. I noticed this because an animation is started at the same time, and it gets stuck for this period. The background thread is set to minimum priority, and garbage collection does take up to 100-200 ms, but not the whole second. When I set a breakpoint anywhere in the background task, rendering continues just fine, without any delays.
Is it possible that my heavy background thread starves the OpenGL thread? If so, what can I do?
Of course! A GPU needs to be fed data, and that's done by the CPU. So if something in the system bottlenecks, like I/O or CPU processing, the GPU can't get fed. For example, animation is traditionally done on the CPU. This is why a lot of PC games get higher frame rates with the same graphics chip but a different CPU.
I also agree that profiling is a very good idea. If you can, I would suggest profiling to make sure it's actually the REST call, or whether the REST call is just one of many contributors.
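For example, you could bracket the suspect stages with android.os.Trace markers and look at the result in systrace; the section names here are just illustrative:

import android.os.Trace;

public final class Profiling {

    public static void fetchAndProcess() {
        // Each section shows up as a labeled block in the systrace timeline,
        // so you can see which stage overlaps the rendering stalls.
        Trace.beginSection("rest_call");
        try {
            // network fetch here
        } finally {
            Trace.endSection();
        }

        Trace.beginSection("parse_and_update_graph");
        try {
            // parsing / object creation here
        } finally {
            Trace.endSection();
        }
    }
}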
One thing I noticed about REST processing, and this happened to me: since REST often involves processing a lot of strings, if you don't use StringBuilder you can end up triggering a lot of garbage collection. However, it doesn't sound like you are hitting this.
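For illustration, the difference between the two: concatenating in a loop allocates a new String every pass, while a StringBuilder grows one buffer:

import java.util.List;

public final class StringJoin {

    // Allocates a new String (and copies the old contents) on every
    // iteration, creating heavy garbage-collection pressure.
    static String badJoin(List<String> parts) {
        String out = "";
        for (String p : parts) {
            out += p;
        }
        return out;
    }

    // One growable buffer and a single final String: far fewer allocations.
    static String goodJoin(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }
}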
