I'm developing an application which requires heavy image processing using camera input and real-time results display. I've decided to use OpenGL and OpenCV along with Android's normal camera API. So far it has become a bit of a multithreading nightmare, and unfortunately I feel very restricted by the lack of documentation on the onPreviewFrame() callback.
I am aware from the documentation that onPreviewFrame() is called on the thread which acquires the camera using Camera.open(). What confuses me is how this callback is scheduled - it seems to be at a fixed framerate. My current architecture relies on the onPreviewFrame() callback to initiate the image processing/display cycle, and it seems to go into deadlock when I block the camera callback thread for too long, so I suspect that the callback is inflexible when it comes to scheduling. I'd like to slow down the framerate to test this, but my device doesn't support this.
I started with the code over at http://maninara.blogspot.ca/2012/09/render-camera-preview-using-opengl-es.html. This code is not very parallel, and it is only meant to display exactly the data which the camera returns. For my needs, I adapted the code to draw bitmaps, and I use a dedicated thread to buffer the camera data to another dedicated heavy-lifting image processing thread (all outside of the OpenGL thread).
Here is my code (simplified):
CameraSurfaceRenderer.java
class CameraSurfaceRenderer implements GLSurfaceView.Renderer, SurfaceTexture.OnFrameAvailableListener,
Camera.PreviewCallback
{
static int[] surfaceTexPtr;
static CameraSurfaceView cameraSurfaceView;
static Context rendererContext; // assigned in the constructor below
static FloatBuffer pVertex;
static FloatBuffer pTexCoord;
static int hProgramPointer;
static Camera camera;
static SurfaceTexture surfaceTexture;
static Bitmap procBitmap;
static int[] procBitmapPtr;
static boolean updateSurfaceTex = false;
static ConditionVariable previewFrameLock;
static ConditionVariable bitmapDrawLock;
// MarkerFinder extends CameraImgProc
static MarkerFinder markerFinder = new MarkerFinder();
static Thread previewCallbackThread;
static
{
previewFrameLock = new ConditionVariable();
previewFrameLock.open();
bitmapDrawLock = new ConditionVariable();
bitmapDrawLock.open();
}
CameraSurfaceRenderer(Context context, CameraSurfaceView view)
{
rendererContext = context;
cameraSurfaceView = view;
// … // Load pVertex and pTexCoord vertex buffers
}
public void close()
{
// … // This code usually doesn’t have the chance to get called
}
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config)
{
// … // Initialize a texture object for the bitmap data
surfaceTexPtr = new int[1];
surfaceTexture = new SurfaceTexture(surfaceTexPtr[0]);
surfaceTexture.setOnFrameAvailableListener(this);
//Initialize camera on its own thread so preview frame callbacks are processed in parallel
previewCallbackThread = new Thread()
{
@Override
public void run()
{
try {
camera = Camera.open();
} catch (RuntimeException e) {
// … // Complain to the user through a Toast on the UI thread
}
assert camera != null;
//Callback set on CameraSurfaceRenderer class, but executed on worker thread
camera.setPreviewCallback(CameraSurfaceRenderer.this);
try {
camera.setPreviewTexture(surfaceTexture);
} catch (IOException e) {
Log.e(Const.TAG, "Unable to set preview texture");
}
Looper.prepare();
Looper.loop();
}
};
previewCallbackThread.start();
// … // More OpenGL initialization stuff
}
@Override
public void onDrawFrame(GL10 unused)
{
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
synchronized (this)
{
surfaceTexture.updateTexImage();
}
// Binds bitmap data to texture
bindBitmap(procBitmap);
// … // Acquire shader program attributes, render
GLES20.glFlush();
}
@Override
public synchronized void onFrameAvailable(SurfaceTexture surfaceTexture)
{
cameraSurfaceView.requestRender();
}
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
Bitmap bitmap = markerFinder.exchangeRawDataForProcessedImg(data, null, camera);
// … // Check for null bitmap
previewFrameLock.block();
procBitmap = bitmap;
previewFrameLock.close();
bitmapDrawLock.open();
}
void bindBitmap(Bitmap bitmap)
{
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, procBitmapPtr[0]);
bitmapDrawLock.block();
if (bitmap != null && !bitmap.isRecycled())
{
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
}
bitmapDrawLock.close();
previewFrameLock.open();
}
@Override
public void onSurfaceChanged(GL10 unused, int width, int height)
{
GLES20.glViewport(0, 0, width, height);
// … // Set camera parameters
camera.startPreview();
}
void deleteTexture()
{
GLES20.glDeleteTextures(1, surfaceTexPtr, 0);
}
}
CameraImgProc.java (abstract class)
public abstract class CameraImgProc
{
CameraImgProcThread thread = new CameraImgProcThread();
Handler handler;
ConditionVariable bufferSwapLock = new ConditionVariable(true);
Runnable processTask = new Runnable()
{
@Override
public void run()
{
imgProcBitmap = processImg(lastWidth, lastHeight, cameraDataBuffer, imgProcBitmap);
bufferSwapLock.open();
}
};
int lastWidth = 0;
int lastHeight = 0;
Mat cameraDataBuffer;
Bitmap imgProcBitmap;
public CameraImgProc()
{
thread.start();
handler = thread.getHandler();
}
protected abstract Bitmap allocateBitmapBuffer(int width, int height);
public final Bitmap exchangeRawDataForProcessedImg(byte[] data, Bitmap dirtyBuffer, Camera camera)
{
Camera.Parameters parameters = camera.getParameters();
Camera.Size size = parameters.getPreviewSize();
// Wait for worker thread to finish processing image
bufferSwapLock.block();
bufferSwapLock.close();
Bitmap freshBuffer = imgProcBitmap;
imgProcBitmap = dirtyBuffer;
// Reallocate buffers if size changes to avoid overflow
assert size != null;
if (lastWidth != size.width || lastHeight != size.height)
{
lastHeight = size.height;
lastWidth = size.width;
if (cameraDataBuffer != null) cameraDataBuffer.release();
//YUV format requires 1.5 times as much information in vertical direction
cameraDataBuffer = new Mat((lastHeight * 3) / 2, lastWidth, CvType.CV_8UC1);
imgProcBitmap = allocateBitmapBuffer(lastWidth, lastHeight);
// Buffers had to be resized, therefore no processed data to return
cameraDataBuffer.put(0, 0, data);
handler.post(processTask);
return null;
}
// If program did not pass a buffer
if (imgProcBitmap == null)
imgProcBitmap = allocateBitmapBuffer(lastWidth, lastHeight);
// Exchange data
cameraDataBuffer.put(0, 0, data);
// Give img processing task to worker thread
handler.post(processTask);
return freshBuffer;
}
protected abstract Bitmap processImg(int width, int height, Mat cameraData, Bitmap dirtyBuffer);
class CameraImgProcThread extends Thread
{
volatile Handler handler;
@Override
public void run()
{
Looper.prepare();
handler = new Handler();
Looper.loop();
}
Handler getHandler()
{
while (handler == null)
{
try {
Thread.sleep(5);
} catch (InterruptedException e) {
//Do nothing
}
}
return handler;
}
}
}
I want an application which is robust, no matter how long it takes for the CameraImgProc.processImg() function to finish. Unfortunately, the only possible solution when camera frames are being fed in at a fixed rate is to drop frames when the image processing hasn't finished yet, or else I'll quickly have a buffer overflow.
My questions are as follows:
Is there any way to slow down the Camera.PreviewCallback frequency on demand?
Is there an existing Android API for getting frames on demand from the camera?
Are there existing solutions to this problem which I can refer to?
onPreviewFrame() is called on the thread which acquires the camera
using Camera.open()
That's a common misunderstanding. The key word missing from that description is "event". To schedule the camera callbacks onto a non-UI thread, you need an "event thread", a synonym for HandlerThread. Please see my explanation and sample elsewhere on SO. Opening the camera on an ordinary thread, as in your code, is not useless, because the call itself may take a few hundred milliseconds on some devices, but an event thread is much, much better.
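For illustration, here is a minimal sketch of that pattern (the camera and previewCallback names are placeholders, not taken from your code):

// Sketch only: open the camera from a HandlerThread so that onPreviewFrame()
// is delivered on that thread's Looper instead of the UI thread.
HandlerThread cameraThread = new HandlerThread("CameraThread");
cameraThread.start();
Handler cameraHandler = new Handler(cameraThread.getLooper());
cameraHandler.post(new Runnable() {
    @Override
    public void run() {
        camera = Camera.open();                      // may take a few hundred ms
        camera.setPreviewCallback(previewCallback);  // callbacks now arrive on cameraThread
    }
});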
Now let me address your questions: no, you cannot control the schedule of camera callbacks.
You can use setOneShotPreviewCallback() if you want to receive callbacks at 1 FPS or less. Your mileage may vary, and it depends on the device, but if you want to check the camera more often, I would recommend using setPreviewCallbackWithBuffer() and simply returning from onPreviewFrame() for the frames you don't need. The performance hit from these empty callbacks is minor.
Note that even when you offload the callbacks to a background thread, they are blocking: if it takes 200 ms to process a preview frame, the camera will wait. Therefore, I usually send the byte[] to a worker thread and release the callback thread quickly. I don't recommend slowing down the flow of preview callbacks by processing them in blocking mode, because after you release the thread, the next callback will deliver a frame with an undefined timestamp: maybe a fresh one, maybe one buffered a while ago.
You can schedule the callbacks indirectly on later platform releases (>4.0): you set up the buffers that the callback will use to deliver the data. Typically you set up two buffers, one to be written by the camera HAL while you read from the other. No new frame will be delivered to you (by calling your onPreviewFrame()) until you return a buffer that the camera can write to; this also means that the camera will drop frames.
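Here is a rough sketch of that buffered setup, assuming the default NV21 preview format (the names are illustrative):

// Sketch: caller-managed callback buffers. The camera delivers a frame only
// when a buffer is available, so frames are dropped while you are busy.
Camera.Parameters params = camera.getParameters();
Camera.Size size = params.getPreviewSize();
int bufferSize = size.width * size.height * 3 / 2;  // NV21: 1.5 bytes per pixel
camera.addCallbackBuffer(new byte[bufferSize]);
camera.addCallbackBuffer(new byte[bufferSize]);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        // Hand data off to a worker thread and return quickly; when the worker
        // is done, it must give the buffer back to the camera:
        // cam.addCallbackBuffer(data);
    }
});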
Related
I've encountered a situation where I have to display images in a slideshow that switches image very fast. The sheer number of images makes me want to store the JPEG data in memory and decode them when I want to display them. To ease on the Garbage Collector, I'm using BitmapFactory.Options.inBitmap to reuse bitmaps.
Unfortunately, this causes rather severe tearing. I've tried different solutions, such as synchronization, semaphores, and alternating between 2-3 bitmaps, but none of them fixes the problem.
I've set up an example project which demonstrates this issue on GitHub: https://github.com/Berglund/android-tearing-example
I've got a thread which decodes the bitmap, sets it on the UI thread, and sleeps for 5 ms:
Runnable runnable = new Runnable() {
@Override
public void run() {
while(true) {
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1;
if(bitmap != null) {
options.inBitmap = bitmap;
}
bitmap = BitmapFactory.decodeResource(getResources(), images.get(position), options);
runOnUiThread(new Runnable() {
@Override
public void run() {
imageView.setImageBitmap(bitmap);
}
});
try {
Thread.sleep(5);
} catch (InterruptedException e) {}
position++;
if(position >= images.size())
position = 0;
}
}
};
Thread t = new Thread(runnable);
t.start();
My idea is that ImageView.setImageBitmap(Bitmap) draws the bitmap on the next vsync; however, we're probably already decoding the next bitmap when that happens, and as such we've started modifying its pixels. Am I thinking in the right direction?
Has anyone got any tips on where to go from here?
You should use the onDraw() method of the ImageView since that method is called when the view needs to draw its content on screen.
I created a new class named MyImageView, which extends ImageView and overrides the onDraw() method to trigger a callback letting the listener know that this view has finished drawing:
public class MyImageView extends ImageView {
private OnDrawFinishedListener mDrawFinishedListener;
public MyImageView(Context context, AttributeSet attrs) {
super(context, attrs);
}
@Override
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
if (mDrawFinishedListener != null) {
mDrawFinishedListener.onOnDrawFinish();
}
}
public void setOnDrawFinishedListener(OnDrawFinishedListener listener) {
mDrawFinishedListener = listener;
}
public interface OnDrawFinishedListener {
public void onOnDrawFinish();
}
}
In the MainActivity, define three bitmaps: one referencing the bitmap currently used by the ImageView to draw, one for decoding, and one referencing the bitmap that is recycled for the next decoding. I reuse the synchronized block from vminorov's answer, but placed in different spots, with explanations in the code comments:
public class MainActivity extends Activity {
private Bitmap mDecodingBitmap;
private Bitmap mShowingBitmap;
private Bitmap mRecycledBitmap;
private final Object lock = new Object();
private volatile boolean ready = true;
ArrayList<Integer> images = new ArrayList<Integer>();
int position = 0;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
images.add(R.drawable.black);
images.add(R.drawable.blue);
images.add(R.drawable.green);
images.add(R.drawable.grey);
images.add(R.drawable.orange);
images.add(R.drawable.pink);
images.add(R.drawable.red);
images.add(R.drawable.white);
images.add(R.drawable.yellow);
final MyImageView imageView = (MyImageView) findViewById(R.id.image);
imageView.setOnDrawFinishedListener(new OnDrawFinishedListener() {
@Override
public void onOnDrawFinish() {
/*
* The ImageView has finished its drawing, now we can recycle
* the bitmap and use the new one for the next drawing
*/
mRecycledBitmap = mShowingBitmap;
mShowingBitmap = null;
synchronized (lock) {
ready = true;
lock.notifyAll();
}
}
});
final Button goButton = (Button) findViewById(R.id.button);
goButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Runnable runnable = new Runnable() {
@Override
public void run() {
while (true) {
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1;
if (mDecodingBitmap != null) {
options.inBitmap = mDecodingBitmap;
}
mDecodingBitmap = BitmapFactory.decodeResource(
getResources(), images.get(position),
options);
/*
* If you want the images displayed in order and none
* of them skipped, then you should stay here and
* wait until the ImageView finishes displaying the
* last bitmap; if not, remove the synchronized block.
*
* It's better if we put the lock here (after the
* decoding is done) so that the image is ready to
* pass to the ImageView when this thread resumes.
*/
synchronized (lock) {
while (!ready) {
try {
lock.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
ready = false;
}
if (mShowingBitmap == null) {
mShowingBitmap = mDecodingBitmap;
mDecodingBitmap = mRecycledBitmap;
}
runOnUiThread(new Runnable() {
@Override
public void run() {
if (mShowingBitmap != null) {
imageView
.setImageBitmap(mShowingBitmap);
/*
* At this point, nothing has been drawn
* yet; we are only passing the data to the
* ImageView and triggering the view to
* invalidate.
*/
}
}
});
try {
Thread.sleep(5);
} catch (InterruptedException e) {
}
position++;
if (position >= images.size())
position = 0;
}
}
};
Thread t = new Thread(runnable);
t.start();
}
});
}
}
As an alternative to your current approach, you might consider keeping the JPEG data as you are doing, but also creating a separate Bitmap for each of your images, and using the inPurgeable and inInputShareable flags. These flags allocate the backing memory for your bitmaps on a separate heap that is not directly managed by the Java garbage collector, and allow Android itself to discard the bitmap data when it has no room for it and re-decode your JPEGs on demand when required. Android has all this special-purpose code to manage bitmap data, so why not use it?
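For illustration, a minimal sketch of decoding with those flags (jpegData is a hypothetical byte array holding one image's JPEG bytes):

// Sketch: purgeable, shareable bitmap memory. Android may evict the pixels
// under memory pressure and re-decode them from jpegData on demand.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPurgeable = true;       // backing pixels can be reclaimed and re-decoded
options.inInputShareable = true;  // share jpegData rather than copying it
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length, options);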
You need to do the following things in order to get rid of this problem.
Add an extra bitmap to prevent situations where the UI thread draws a bitmap while another thread is modifying it.
Implement thread synchronization to prevent situations where the background thread tries to decode a new bitmap before the previous one has been shown by the UI thread.
I've modified your code a bit and now it works fine for me.
package com.example.TearingExample;
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import java.util.ArrayList;
public class MainActivity extends Activity {
ArrayList<Integer> images = new ArrayList<Integer>();
private Bitmap[] buffers = new Bitmap[2];
private volatile Bitmap current;
private final Object lock = new Object();
private volatile boolean ready = true;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
images.add(R.drawable.black);
images.add(R.drawable.blue);
images.add(R.drawable.green);
images.add(R.drawable.grey);
images.add(R.drawable.orange);
images.add(R.drawable.pink);
images.add(R.drawable.red);
images.add(R.drawable.white);
images.add(R.drawable.yellow);
final ImageView imageView = (ImageView) findViewById(R.id.image);
final Button goButton = (Button) findViewById(R.id.button);
goButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Runnable runnable = new Runnable() {
@Override
public void run() {
int position = 0;
int index = 0;
while (true) {
try {
synchronized (lock) {
while (!ready) {
lock.wait();
}
ready = false;
}
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1;
options.inBitmap = buffers[index];
buffers[index] = BitmapFactory.decodeResource(getResources(), images.get(position), options);
current = buffers[index];
runOnUiThread(new Runnable() {
@Override
public void run() {
imageView.setImageBitmap(current);
synchronized (lock) {
ready = true;
lock.notifyAll();
}
}
});
position = (position + 1) % images.size();
index = (index + 1) % buffers.length;
Thread.sleep(5);
} catch (InterruptedException ignore) {
}
}
}
};
Thread t = new Thread(runnable);
t.start();
}
});
}
}
In the BM.decode(resource… call, is the network involved?
If yes, then you need to optimize the look-ahead connection and data transport across the network connection, as well as your work optimizing bitmaps and memory. That can mean becoming adept at low-latency or async transport over your connection protocol (HTTP, I guess). Make sure that you don't transport more data than you need: a bitmap decode can often discard 80% of the pixels when creating an optimized object to fill a local view.
If the data intended for the bitmaps is already local and there are no concerns about network latency, then just focus on reserving a collection-type data structure (a list array) to hold the fragments that the UI will swap on the page-forward and page-back events.
If your JPEGs (PNGs are lossless with bitmap operations, IMO) are around 100 KB each, you can just use a standard adapter to load them into fragments. If they are a lot larger, then you will have to figure out which bitmap 'compress' option to use with the decode in order not to waste a lot of memory on your fragment data structure.
If you need a thread pool to optimize the bitmap creation, then do that to remove any latency involved at that step.
I'm not sure that it works, but if you want to get more complicated, you could look at putting a circular buffer or something underneath the list array that collaborates with the adapter.
IMO, once you have the structure, the transaction switching among fragments as you page should be very fast. I have direct experience with about six pictures in memory, each around 200 KB, and paging forward and back is fast.
I used this app as a framework, focusing on the 'page-viewer' example.
It covers image caching, AsyncTask processing, background downloading from the net, and so on.
Please read this page:
http://developer.android.com/training/displaying-bitmaps/index.html
If you download and look into the sample project bitmapfun on that page, I trust it will solve all your problems. It's a perfect sample.
I am using OpenCV to attempt to do some live video processing. Since the processing is fairly heavy, it delays the output frames significantly, making the live stream look choppy.
I'd like to offload some of the processing into an AsyncTask. I've tried it, and it actually makes the video much smoother. However, it ends up starting a large number of tasks at once, which then slowly start returning results.
Is there any way to slow this down and wait for a result, either by using synchronized statements or some other method?
On each camera frame, I start one of these tasks. doImgProcessing() does the long processing and returns a string result.
private class LongOperation extends AsyncTask<Mat, Void, String> {
@Override
protected String doInBackground(Mat... params) {
Mat inputFrame = params[0];
cropToCenter(inputFrame);
return doImgProcessing(inputFrame);
}
@Override
protected void onPostExecute(String result) {
Log.d(TAG, "on post execute: "+result);
}
@Override
protected void onPreExecute() {
Log.d(TAG, "on pre execute");
}
}
public Mat onCameraFrame(Mat inputFrame) {
inputFrame.copyTo(mRgba);//this will be used for the live stream
LongOperation op = new LongOperation();
op.execute(inputFrame);
return mRgba;
}
I would do something like this:
// Example value for a timeout.
private static final long TIMEOUT = 1000L;
private BlockingQueue<Mat> frames = new LinkedBlockingQueue<Mat>();
Thread worker = new Thread() {
@Override
public void run() {
while (running) {
Mat inputFrame;
try {
// Wrapping poll() in try/catch also lets the wait be cut short via Thread.interrupt()
inputFrame = frames.poll(TIMEOUT, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
break;
}
if (inputFrame == null) {
continue; // timed out, no frame available
}
cropToCenter(inputFrame);
String result = doImgProcessing(inputFrame);
}
}
};
worker.start();
public Mat onCameraFrame(Mat inputFrame) {
inputFrame.copyTo(mRgba);//this will be used for the live stream
frames.offer(inputFrame); // offer() does not block or throw, unlike put()
return mRgba;
}
onCameraFrame() puts the frame on the queue, and the worker thread polls it from the queue.
This decouples receiving a frame from processing it. You can monitor the growth of the queue using frames.size().
This is a typical producer-consumer example.
If you're doing this on each frame, it sounds like you need a thread instead. An AsyncTask is for when you want to do a one-off activity on another thread. Here you want to do it repeatedly. Just create a thread, and when it finishes a frame have it post a message to a handler to run the post step on the UI thread. It can wait on a semaphore at the top of its loop for the next frame to be ready.
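A minimal sketch of that pattern, with illustrative names (pendingFrame would be a volatile Mat field written by onCameraFrame() just before releasing the semaphore):

// Sketch: one long-lived worker; onCameraFrame() publishes a frame and
// releases the semaphore, the worker processes it and posts the result
// back to the UI thread.
final Semaphore frameReady = new Semaphore(0);
final Handler uiHandler = new Handler(Looper.getMainLooper());

Thread worker = new Thread(new Runnable() {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                frameReady.acquire();       // wait for the next frame
            } catch (InterruptedException e) {
                return;
            }
            final String result = doImgProcessing(pendingFrame);
            uiHandler.post(new Runnable() { // run the post step on the UI thread
                @Override
                public void run() {
                    Log.d(TAG, "result: " + result);
                }
            });
        }
    }
});
worker.start();

// In onCameraFrame(): pendingFrame = inputFrame; frameReady.release();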
This is my first post here. Sorry if it is not well done and thanks in advance.
There is something that is driving me crazy.
I try to draw something and show it slowly on screen. For this I use Thread.sleep, as shown in the code, but it shows two different versions of the canvas on alternating iterations.
Can anybody explain what is wrong?
I only want to draw something and after a few seconds draw something else. And so on.
This is my code:
public class Vista extends SurfaceView implements Callback {
private Hilo hilo;
private SurfaceHolder sf;
class Hilo extends Thread {
public boolean running;
private Vista view;
Canvas c = null;
Bitmap btm = null;
public Hilo(SurfaceHolder holder, Vista view) {
sf = holder;
btm = Bitmap.createBitmap(480, 800, Config.ARGB_8888);
c = new Canvas(btm);
}
@Override
public void run() {
Paint paint = new Paint();
paint.setStyle(Style.FILL_AND_STROKE);
paint.setColor(Color.BLUE);
int k = 0;
while (running) {
c = sf.lockCanvas();
k+= 10;
c.drawText(String.valueOf(k/10), k, 20, paint);
sf.unlockCanvasAndPost(c);
try {
sleep(1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}
public Vista(Context context) {
super(context);
SurfaceHolder holder = getHolder();
holder.addCallback(this);
hilo = new Hilo(holder, this);
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
// TODO Auto-generated method stub
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
hilo.running = true;
hilo.start();
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
boolean retry = true;
hilo.running = false;
while (retry) {
try {
hilo.join();
retry = false;
} catch (InterruptedException e) {
}
}
}
}
Your code is sleep()ing the Main thread, also known as the UI thread. This is something you absolutely mustn't do, as the main thread is responsible for handling all "stuff" in Android: UI rendering, UI event dispatch, etcetera. You can read the Processes and Threads guide on the Android Developer website for more information. Choice quote:
Thus, there are simply two rules to Android's single thread model:
Do not block the UI thread
Do not access the Android UI toolkit from outside the UI thread
The question then becomes, how do you animate a view without sleeping the main thread? The trick is to tell Android to go do something else and come back to you (redraw you) after a little while. Each time you draw, you calculate by how much to advance your animation.
One (crude) way to do this is with postInvalidateDelayed(long), which does exactly that: it redraws your view after the given number of milliseconds. It's not a precise delay; it will fluctuate a bit, so you can't assume that's exactly how much time passed. Instead, store System.nanoTime() in an instance field, so that the next time you're drawn you can take the difference against a new reading of nanoTime(). That difference is how much time has passed, and thus tells you by how much to advance your animation state. If your animation is supposed to last 1 second in total and 100 milliseconds went by since the last time you drew your scene, you must advance everything by 10%.
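For example, a crude sketch of that bookkeeping (all names here are illustrative):

// Sketch: advance the animation by however much time actually passed,
// then ask to be redrawn again shortly.
class AnimView extends View {
    private long lastNanos = System.nanoTime();
    private float progress;                        // 0..1 over a 1-second cycle

    public AnimView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        long now = System.nanoTime();
        float elapsedSec = (now - lastNanos) / 1e9f;
        lastNanos = now;
        progress = (progress + elapsedSec) % 1f;   // 100 ms elapsed -> advance by 10%
        // ... draw the scene for the current progress value ...
        postInvalidateDelayed(16);                 // schedule the next redraw
    }
}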
Android provides a lot of utilities to make this easier.
I'm implementing a SurfaceView subclass, where I run a separate thread to draw onto a SurfaceHolder's Canvas.
I'm measuring the time before and after the call to lockCanvas(), and I'm getting from about 70 ms to 100 ms.
Could anyone point out why I'm getting such high timings?
Here the relevant part of the code:
public class TestView extends SurfaceView implements SurfaceHolder.Callback {
....
boolean created;
public void surfaceChanged(SurfaceHolder holder, int format, int width,
int height) {
mThread = new DrawingThread(mHolder, true);
mThread.onWindowResize(width, height);
mThread.start();
}
public void surfaceCreated(SurfaceHolder holder) {
created = true;
}
public void surfaceDestroyed(SurfaceHolder holder) {
created = false;
}
class DrawingThread extends Thread {
public void run() {
while (created) {
Canvas canvas = null;
try {
long t0 = System.currentTimeMillis();
canvas = holder.lockCanvas(null);
long t1 = System.currentTimeMillis();
Log.i(TAG, "Timing: " + (t1 - t0));
} finally {
holder.unlockCanvasAndPost(canvas);
}
}
}
}
}
You're creating a thread every time the surface is changed. You should start your thread in surfaceCreated and kill it in surfaceDestroyed. surfaceChanged is for when the dimensions of your surface changes.
From SurfaceView.surfaceCreated docs:
This is called immediately after the surface is first created. Implementations of this should start up whatever rendering code they desire. Note that only one thread can ever draw into a Surface, so you should not draw into the Surface here if your normal rendering will be in another thread.
The multiple threads are probably getting you throttled. From SurfaceHolder.lockCanvas docs:
If you call this repeatedly when the Surface is not ready (before Callback.surfaceCreated or after Callback.surfaceDestroyed), your calls will be throttled to a slow rate in order to avoid consuming CPU.
However, I'm not convinced this is the only problem. Does surfaceChanged actually get called multiple times?
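A minimal sketch of that lifecycle change, reusing the names from your code:

// Sketch: start the render thread when the surface exists, stop it when the
// surface goes away, and let surfaceChanged only propagate the new size.
public void surfaceCreated(SurfaceHolder holder) {
    created = true;
    mThread = new DrawingThread(mHolder, true);
    mThread.start();
}

public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    mThread.onWindowResize(width, height);
}

public void surfaceDestroyed(SurfaceHolder holder) {
    created = false;               // makes the while (created) loop exit
    try {
        mThread.join();            // wait for the thread before the surface is gone
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}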
This is related to how lockCanvas is actually implemented in the Android graphics framework.
You should probably already know that lockCanvas returns a free piece of memory that you will use to draw into. By "free", it means this memory is not currently being used for composition or display. Internally, simply speaking, a SurfaceView is backed by a double buffer: one for drawing, one for composition/display. The double buffer is managed by a BufferQueue. If composition/display is slower than drawing, we have to wait until a free buffer is available.
read this:
What does lockCanvas mean (elaborate)
I've got 30+ single bitmaps (320x240 pixels) that I would like to display one after another in full screen on Android devices, resulting in an animation. Currently I implement the animation using an ImageView and a Timer that sets the next frame and then sends a message to apply it. The resulting frame rate is very low: under 2 fps.
The timer:
animationTimer.scheduleAtFixedRate(new TimerTask() {
@Override
public void run() {
Drawable frame = getNextFrame();
if (frame != null) {
Message message = animationFrameHandler.obtainMessage(1, frame);
animationFrameHandler.sendMessage(message);
}
}
}, 0, (int) (1000.0d / fps));
The handler:
final Handler animationFrameHandler = new Handler() {
@Override
public void handleMessage(Message message) {
setImageDrawable((Drawable) message.obj);
}
};
Since I want to achieve frame rates of up to 30 fps, I have to use another mechanism; I've heard of Canvas.drawBitmapMesh() and OpenGL.
If possible I would like to avoid using OpenGL.
Thank you very much for sharing your experiences!
My now working approach is the following:
Before starting the animation, load every frame into a List<Bitmap>. Important: call System.gc() if you're getting OutOfMemoryErrors; that really helps with loading more bitmaps into memory. Then have a thread running that posts the next frame to a View instance, which then updates its canvas.
Loading the frames and starting the animation
// Loading the frames before starting the animation
List<Bitmap> frames = new ArrayList<Bitmap>();
for (int i = 0; i < 30; i++) {
// Load next frame (e. g. from drawable or assets folder)
frames.add(...);
// Do garbage collection every 3rd frame; really helps loading all frames into memory
if (i % 3 == 0) {
System.gc();
}
}
// Start animation
frameIndex = 0;
animationThread.start();
Thread that applies the next frame
private final class AnimationThread extends Thread {
@Override
public void run() {
while (!isInterrupted()) {
// Post next frame to be displayed
animationView.postFrame(frames.get(frameIndex));
// Apply next frame (restart if last frame has reached)
frameIndex++;
if (frameIndex >= frames.size()) {
frameIndex = 0;
}
try {
sleep(33); // delay between frames in msec (33 msec means ~30 fps)
} catch (InterruptedException e) {
break;
}
}
}
}
The animation view
class AnimationView extends View {
Bitmap frame = null;
public void postFrame(Bitmap frame) {
Message message = frameHandler.obtainMessage(0, frame);
frameHandler.sendMessage(message);
}
protected final Handler frameHandler = new Handler() {
@Override
public void handleMessage(Message message) {
if (message.obj != null) {
frame = (Bitmap) message.obj;
} else {
frame = null;
}
invalidate();
}
};
@Override
protected void onDraw(Canvas canvas) {
if (frame == null) return;
canvas.drawARGB(0, 0, 0, 0);
canvas.drawBitmap(frame, null, null, null);
}
}
You should look at frame animation with the AnimationDrawable class; http://developer.android.com/guide/topics/graphics/2d-graphics.html#frame-animation shows how to do frame animation with Android's animation framework.
Though that might still be too slow.
The other alternative, if you don't want to use OpenGL ES, is to draw to the Canvas as you've mentioned. But just use drawBitmap(), not drawBitmapMesh(). Create a SurfaceView that owns a thread, and have that thread draw to your Canvas at whatever interval you want.
It's pretty straightforward, just read the Android docs, the information is all there.
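For illustration, the inner loop of such a SurfaceView thread might look like this (frames, frameIndex, holder and running are assumed fields, not from your code):

// Sketch: draw one preloaded Bitmap per tick from the SurfaceView's thread.
while (running) {
    Canvas canvas = holder.lockCanvas();
    if (canvas != null) {
        canvas.drawBitmap(frames.get(frameIndex), 0, 0, null);
        holder.unlockCanvasAndPost(canvas);
    }
    frameIndex = (frameIndex + 1) % frames.size();
    try {
        Thread.sleep(33);          // ~30 fps
    } catch (InterruptedException e) {
        break;                     // stop when the thread is interrupted
    }
}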
I'll let someone else go into the best way of doing this, but one thing that immediately jumps out from your post: using a TimerTask is a terrible way to do this; it is not meant for animation.
Probably won't help with performance, but if those bitmaps are resources, you might want to consider using an AnimationDrawable. If not, try extending Drawable and implementing the Animatable interface. Views already have built-in support for animating drawables; there is no need to use a handler for that.
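For instance, a short sketch of assembling the frame animation in code (the resource ids are hypothetical):

// Sketch: an AnimationDrawable assembled from drawable resources; the view
// animates it without any handler or timer.
AnimationDrawable anim = new AnimationDrawable();
anim.addFrame(getResources().getDrawable(R.drawable.frame_0), 33);  // 33 ms per frame, ~30 fps
anim.addFrame(getResources().getDrawable(R.drawable.frame_1), 33);
// ... add the remaining frames ...
anim.setOneShot(false);
imageView.setImageDrawable(anim);
anim.start();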
One way to improve performance might be to match the bit depth of the drawables to that of your current window. Romain Guy once did a keynote on this and animations in general: http://www.youtube.com/watch?v=duefsFTJXzc