Drawing to Surface from JNI while moving SurfaceView around - android

In my application I have a SurfaceView in my Activity layout, so its surface is not affected by any Fragment lifecycles. I use the surface to draw to it from JNI using the following code (some of the variables are global):
void onDraw() {
    int height = 0, width = 0;
    ANativeWindow_Buffer nativeBuffer;
    ARect redrawRect;
    if (nativeWindow != NULL) {
        height = ANativeWindow_getHeight(nativeWindow);
        width = ANativeWindow_getWidth(nativeWindow);
        redrawRect.left = 0;
        redrawRect.top = 0;
        redrawRect.right = width;
        redrawRect.bottom = height;
        if (ANativeWindow_lock(nativeWindow, &nativeBuffer, &redrawRect) == 0) {
            height = (redrawRect.bottom - redrawRect.top);
            width = (redrawRect.right - redrawRect.left);
            bufferSize = height * width * 4;
            memcpy(nativeBuffer.bits, pbuf, bufferSize);
            ANativeWindow_unlockAndPost(nativeWindow);
        }
    }
}
As long as I do not change the size and position of the SurfaceView, this works great. However, when I start drawing, adjust the SurfaceView's size and position (the content adjusts fine), stop drawing and then start again, I get the following error:
E/BufferQueue: [SurfaceView] connect: already connected (cur=2, req=2)
What I understand from the documentation so far is that my SurfaceView is the consumer and the ANativeWindow the producer. The SurfaceView holds a BufferQueue which sends the display data (copied by memcpy) from the producer side to the consumer side. So after the size and location change, one of them (I'm not sure which) is trying to connect to that BufferQueue again. Since my code never calls queue/acquire etc. explicitly, I'm not sure how to prevent this.
What I do not know: is connect called from ANativeWindow_lock or from ANativeWindow_fromSurface on the NDK side? Could the buffer received by the lock method somehow leak and prevent the connect the next time? I also added log messages for the lock and unlock results and found that, after the size and location change, ANativeWindow_lock fails. This seems odd to me because my unlock is only performed when the lock was successful, and I'm not locking the surface from the Java side at all. If I then try to unlock it from JNI I get:
Surface::unlockAndPost failed, no locked buffer
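For context, the Java side hands the surface to native code roughly like this (a simplified sketch; nativeSetSurface() and nativeReleaseSurface() stand in for my actual JNI functions, which call ANativeWindow_fromSurface() and ANativeWindow_release()):
surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        nativeSetSurface(holder.getSurface()); // native side calls ANativeWindow_fromSurface()
    }
    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        // size/position changes arrive here; the drawing itself keeps working
    }
    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        nativeReleaseSurface(); // native side calls ANativeWindow_release()
    }
});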
I also thought about other memory leaks and analysed the app with LeakCanary, which does not find any leaks, even when I trigger a heap dump manually, though I'm not sure how well it works with native code.
If I do not draw at all, this error does not appear. Do you have any ideas?
If it matters, I'm developing on Android KitKat (4.4.2). Any help is highly appreciated.

Related

Android - Enable 4K Activity with 4K GLSurfaceView

I'm upgrading my Android OpenGL app to run on my new 4K phone and intend to have a fullscreen activity whose dimensions are both logically and physically 4K. However, without special-casing for 4K, I'm running into the evidently expected scenario that my activity is only 1080p in size, and I wish to bump it up to 4K.
I have followed the available documentation on querying the display modes, which thankfully does enumerate both 1080p and 4K as available display modes. The pseudocode getBestDisplayModeId() below represents this step, which I consider to be functioning accurately; it can be assumed to correctly return the display mode id of the 4K display mode. I then set the preferredDisplayModeId in the LayoutParams for the window to that optimal display mode id and use setAttributes() to hopefully apply it to the window.
This is executed during the onCreate() for my fullscreen Activity. Note that the activity was created via the fullscreen activity wizard in Android Studio and it essentially contains just a GLSurfaceView and associated Renderer (the typical OpenGL setup).
My (GL) Renderer class was temporarily modified (for debugging purposes) to display a Toast (via IPC) showing its dimensions whenever its overridden onSurfaceChanged() method is called. The Toast always reports that the dimensions are 1080p, rather than the intended 4K.
How can I correctly apply the preferredDisplayModeId and then get an Activity/View/GLSurfaceView that is truly the intended 4K size?
int getBestDisplayModeId()
{
    // 1. query available display modes for display 0 via DisplayManager
    // 2. return the id of the display mode with the highest resolution
}

FullscreenActivity.onCreate()
{
    Window w = getWindow();
    WindowManager.LayoutParams p = w.getAttributes();
    p.preferredDisplayModeId = getBestDisplayModeId(); // get 4K mode id
    w.setAttributes(p); // request the 4K resolution
    // when does the switch to 4K occur?
    mGLView = new MyGLSurfaceView(this);
    setContentView(mGLView);
}

MyRenderer.onSurfaceChanged(int w, int h)
{
    // only shows 1080p resolution, never 4K
}
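For completeness, the real implementation is along these lines (a sketch; it picks the supported mode with the largest pixel area via Display.getSupportedModes(), available since API 23):
private int getBestDisplayModeId() {
    Display display = getWindowManager().getDefaultDisplay();
    Display.Mode best = display.getMode();
    for (Display.Mode mode : display.getSupportedModes()) {
        long area = (long) mode.getPhysicalWidth() * mode.getPhysicalHeight();
        long bestArea = (long) best.getPhysicalWidth() * best.getPhysicalHeight();
        if (area > bestArea) {
            best = mode; // keep the mode with the largest pixel area
        }
    }
    return best.getModeId();
}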

play-services-vision: How do I sync the Face Detection speed with the Camera Preview speed?

I have some code that allows me to detect faces in a live camera preview and draw a few GIFs over their landmarks using the play-services-vision library provided by Google.
It works well enough when the face is static, but when the face moves at moderate speed, the face detector takes longer than one camera frame to find the landmarks at the face's new position. I know it might have something to do with the bitmap draw speed, but I took steps to minimize the lag there.
(Basically I get complaints that the GIFs' repositioning isn't 'smooth enough')
EDIT: I did try getting the coordinate detection code...
List<Landmark> landmarksList = face.getLandmarks();
for (int i = 0; i < landmarksList.size(); i++) {
    Landmark current = landmarksList.get(i);
    //canvas.drawCircle(translateX(current.getPosition().x), translateY(current.getPosition().y), FACE_POSITION_RADIUS, mFacePositionPaint);
    //canvas.drawCircle(current.getPosition().x, current.getPosition().y, FACE_POSITION_RADIUS, mFacePositionPaint);
    if (current.getType() == Landmark.LEFT_EYE) {
        //Log.i("current_landmark", "l_eye");
        leftEyeX = translateX(current.getPosition().x);
        leftEyeY = translateY(current.getPosition().y);
    }
    if (current.getType() == Landmark.RIGHT_EYE) {
        //Log.i("current_landmark", "r_eye");
        rightEyeX = translateX(current.getPosition().x);
        rightEyeY = translateY(current.getPosition().y);
    }
    if (current.getType() == Landmark.NOSE_BASE) {
        //Log.i("current_landmark", "n_base");
        noseBaseY = translateY(current.getPosition().y);
        noseBaseX = translateX(current.getPosition().x);
    }
    if (current.getType() == Landmark.BOTTOM_MOUTH) {
        botMouthY = translateY(current.getPosition().y);
        botMouthX = translateX(current.getPosition().x);
        //Log.i("current_landmark", "b_mouth " + translateX(current.getPosition().x) + " " + translateY(current.getPosition().y));
    }
    if (current.getType() == Landmark.LEFT_MOUTH) {
        leftMouthY = translateY(current.getPosition().y);
        leftMouthX = translateX(current.getPosition().x);
        //Log.i("current_landmark", "l_mouth " + translateX(current.getPosition().x) + " " + translateY(current.getPosition().y));
    }
    if (current.getType() == Landmark.RIGHT_MOUTH) {
        rightMouthY = translateY(current.getPosition().y);
        rightMouthX = translateX(current.getPosition().x);
        //Log.i("current_landmark", "r_mouth " + translateX(current.getPosition().x) + " " + translateY(current.getPosition().y));
    }
}
eyeDistance = (float) Math.sqrt(Math.pow((double) Math.abs(rightEyeX - leftEyeX), 2) + Math.pow(Math.abs(rightEyeY - leftEyeY), 2));
eyeCenterX = (rightEyeX + leftEyeX) / 2;
eyeCenterY = (rightEyeY + leftEyeY) / 2;
noseToMouthDist = (float) Math.sqrt(Math.pow((double) Math.abs(leftMouthX - noseBaseX), 2) + Math.pow(Math.abs(leftMouthY - noseBaseY), 2));
...in a separate thread within the View draw method, but it just nets me a SIGSEGV error.
My questions:
Is syncing the Face Detector's processing speed with the Camera Preview framerate the right thing to do in this case, or is it the other way around, or is it some other way?
As the Face Detector finds the faces in a camera preview frame, should I drop the frames that the preview feeds before the FD finishes? If so, how can I do it?
Should I just use setClassificationMode(NO_CLASSIFICATIONS) and setTrackingEnabled(false) in a camera preview just to make the detection faster?
Does the play-services-vision library use OpenCV, and which is actually better?
EDIT 2:
I read one research paper claiming that, using OpenCV, face detection and the other functions available in OpenCV run faster on Android. I was wondering whether I can leverage that to speed up the face detection.
There is no way you can guarantee that face detection will be fast enough to show no visible delay, even when the head motion is moderate. Even if you manage to optimize it heavily on your development device, you will surely find another model among the thousands out there that is too slow.
Your code should be resilient to such situations. You can predict the face position a second ahead, assuming that it moves smoothly. If the user decides to twitch their head or the device, no algorithm can help.
If you use the deprecated Camera API, you should pre-allocate a buffer and use setPreviewCallbackWithBuffer(). This way you can guarantee that the frames arrive at your image processor one at a time. You should also not forget to open the Camera on a background thread, so that the [onPreviewFrame()](http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html#onPreviewFrame(byte[], android.hardware.Camera)) callback, where your heavy image processing takes place, does not block the UI thread.
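A minimal sketch of that buffered setup (deprecated android.hardware.Camera API; processFrame() is a placeholder for your own image processing):
Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();
Camera.Size size = params.getPreviewSize();
int bufSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
camera.addCallbackBuffer(new byte[bufSize]); // pre-allocated buffer
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        processFrame(data);          // heavy image processing (placeholder)
        cam.addCallbackBuffer(data); // hand the buffer back for the next frame
    }
});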
Yes, OpenCV face detection may be faster in some cases, but more importantly it is more robust than the Google face detector.
Yes, it's better to turn the classifier off if you don't care about smiles and open eyes. The performance gain may vary.
I believe that turning tracking off will only slow the Google face detector down, but you should make your own measurements, and choose the best strategy.
The most significant gain can be achieved by turning setProminentFaceOnly() on, but again I cannot predict the actual effect of this setting for your device.
There's always going to be some lag, since any face detector takes some amount of time to run. By the time you draw the result, you will usually be drawing it over a future frame in which the face may have moved a bit.
Here are some suggestions for minimizing lag:
The CameraSource implementation provided by Google's vision library automatically handles dropping preview frames when needed so that it can keep up the best that it can. See the open source version of this code if you'd like to incorporate a similar approach into your app: https://github.com/googlesamples/android-vision/blob/master/visionSamples/barcode-reader/app/src/main/java/com/google/android/gms/samples/vision/barcodereader/ui/camera/CameraSource.java#L1144
Using a lower camera preview resolution, such as 320x240, will make face detection faster.
If you're only tracking one face, using the setProminentFaceOnly() option will make face detection faster. Using this and LargestFaceFocusingProcessor as well will make this even faster.
To use LargestFaceFocusingProcessor, set it as the processor of the face detector. For example:
Tracker<Face> tracker = *your face tracker implementation*
detector.setProcessor(
        new LargestFaceFocusingProcessor.Builder(detector, tracker).build());
Your tracker implementation will receive face updates for only the largest face that it initially finds. In addition, it will signal back to the detector that it only needs to track that face for as long as it is visible.
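A skeleton tracker along those lines (a sketch; Tracker's callbacks default to no-ops, so only the ones relevant here are overridden):
Tracker<Face> tracker = new Tracker<Face>() {
    @Override
    public void onUpdate(Detector.Detections<Face> detections, Face face) {
        // Called with updates for the single largest face while it stays visible.
    }
    @Override
    public void onDone() {
        // The tracked face has left the frame; the detector starts searching again.
    }
};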
If you don't need to detect smaller faces, setting setMinFaceSize() larger will make face detection faster. It's faster to detect only larger faces, since the detector doesn't need to spend time looking for smaller ones.
You can turn off classification if you don't need eyes-open or smile indication. However, this only gives you a small speed advantage.
Using the tracking option will make this faster as well, but at some cost in accuracy. It uses a predictive algorithm for some intermediate frames, to avoid the expense of running full face detection on every frame.
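Putting several of these suggestions together, a detector configured for speed might look like this (a sketch building on the processor example above; actual gains vary by device):
FaceDetector detector = new FaceDetector.Builder(context)
        .setProminentFaceOnly(true)                              // only look for one large face
        .setMinFaceSize(0.2f)                                    // skip faces smaller than 20% of the frame
        .setClassificationType(FaceDetector.NO_CLASSIFICATIONS)  // no smile/eyes-open classification
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)             // still needed to position the GIFs
        .setMode(FaceDetector.FAST_MODE)
        .setTrackingEnabled(true)                                // predictive tracking between frames
        .build();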

Android FaceDetector.findFaces 100% CPU Time

I got this simple function
private PointF getFaceCenter(Bitmap faceBitmap) {
    PointF faceCenter = new PointF(faceBitmap.getWidth() / 2, faceBitmap.getHeight() / 2);
    Face[] faces = new Face[1];
    mFaceDetector = new FaceDetector(faceBitmap.getWidth(), faceBitmap.getHeight(), 1);
    int detected = mFaceDetector.findFaces(faceBitmap, faces);
    if (detected > 0) {
        faces[0].getMidPoint(faceCenter);
    }
    return faceCenter;
}
I use it to get the face center so I know where to draw my picture. I noticed that my UI thread gets stuck every time redrawing happens.
So I moved this calculation into onMeasure(), but the UI still slows down every time onMeasure() is called.
I started profiling:
I saw that mFaceDetector.findFaces takes 100% CPU Time!!
I removed the face detection code and my app started running super fast.
Anything I'm doing wrong?
Any workaround?
You are doing that on the main thread, the same thread that processes UI events. That's the reason the UI blocks. Try using a different thread for the computation (see the AsyncTask Android documentation).
As for CPU usage: it has to process a lot of data (especially if you have a 10-megapixel camera :)), so that is normal.
The next pitfall is that the current implementation only works for RGB_565 bitmaps (again, see the Android docs), so it might be worth checking the bitmap config.
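A minimal sketch of moving the detection off the UI thread (onFaceCenterFound() is a placeholder for whatever updates your view):
new AsyncTask<Bitmap, Void, PointF>() {
    @Override
    protected PointF doInBackground(Bitmap... bitmaps) {
        return getFaceCenter(bitmaps[0]); // heavy findFaces() call runs here
    }
    @Override
    protected void onPostExecute(PointF center) {
        onFaceCenterFound(center); // back on the UI thread (placeholder)
    }
}.execute(faceBitmap.copy(Bitmap.Config.RGB_565, false)); // FaceDetector needs RGB_565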
EDIT:
I just checked this on a Galaxy S3: for a picture taken with the back camera, it takes around 16 seconds to analyse the picture.

Background image taking too long to draw (Canvas) Jerky Sprites......?

Hey all, I'm at a crossroads with the app I've been working on.
It's a game, an 'arcade/action' one at that, but I've coded it using SurfaceView rather than OpenGL (it just turned out that way as the game changed drastically from its original design).
I find myself plagued with performance issues, and not even in the game itself, but in the first activity, which is an animated menu (a full-screen background with about 8 sprites floating across the screen).
Even with this small amount of sprites, I can't get perfectly smooth movement. They move smoothly for a while and then it goes 'choppy' or 'jerky' for a split second.
I noticed that (from what I can tell) the background (a pre-scaled image) takes about 7 to 8 ms to draw. Is this reasonable? I've experimented with different ways of drawing, such as:
canvas.drawBitmap(scaledBackground, 0, 0, null);
The above code produces roughly the same results as:
canvas.drawBitmap(scaledBackground, null, screen, null);
However, if I change my holder to:
getHolder().setFormat(PixelFormat.RGBA_8888);
Then the drawing of the bitmap shoots up to about 13 ms (I assume this is because it then has to convert to the RGBA_8888 format).
The strange thing is that the rendering and logic run at a very steady 30 fps; it doesn't drop any frames and there is no garbage collection happening at run-time.
I've tried pretty much everything I can think of to get my sprites moving smoothly. I recently incorporated interpolation into my game loop:
float interpolation = (float)(System.nanoTime() + skipTicks - nextGameTick)
        / (float)(skipTicks);
I then pass this into my draw() method:
onDraw(interpolation);
I have had some success with this and it has really helped smooth things out, but I'm still not happy with the results.
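For reference, I apply the factor when drawing each sprite, roughly like this (prevX/prevY hold the sprite's position from the previous logic update):
// Interpolate between the last two logic updates when drawing a sprite.
float drawX = prevX + (x - prevX) * interpolation;
float drawY = prevY + (y - prevY) * interpolation;
canvas.drawBitmap(spriteBitmap, drawX, drawY, null);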
Can anyone give me any final tips on reducing the time taken to draw my bitmaps, or on what else may be causing this? Or do you think it's simply a case of SurfaceView not being up to the task, in which case should I scrap the app as it were and start again with OpenGL?
This is my main game loop:
int TICKS_PER_SECOND = 30;
int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
int MAX_FRAMESKIP = 10;

long next_game_tick = GetTickCount();
int loops;
bool game_is_running = true;

while (game_is_running) {
    loops = 0;
    while (GetTickCount() > next_game_tick && loops < MAX_FRAMESKIP) {
        update_game();
        next_game_tick += SKIP_TICKS;
        loops++;
    }
    interpolation = float(GetTickCount() + SKIP_TICKS - next_game_tick)
            / float(SKIP_TICKS);
    display_game(interpolation);
}
Thanks
You shouldn't use Canvas to draw fast sprites, especially if you're drawing a fullscreen image; it takes way too long, I tell you from experience. I believe Canvas is not hardware accelerated, which is the main reason you'll never get good performance out of it. Even simple sprites start to move slowly when there are ~15 on screen. Switch to OpenGL, make an orthographic projection and for every sprite make a textured quad. Believe me, I did it, and it's worth the effort.
EDIT: Actually, instead of a SurfaceView, the OpenGL way is to use a GLSurfaceView. You create your own class, derive from it, implement surfaceCreated, surfaceDestroyed and surfaceChanged, then you derive from Renderer too and connect both. The Renderer handles the onDrawFrame() function, which is what will render; the GLSurfaceView manages how you will render (bit depth, render modes, etc.).
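Roughly, the wiring looks like this (a sketch; the class names are made up, and it uses the fixed-function GL10 pipeline for a 2D orthographic setup):
public class GameSurfaceView extends GLSurfaceView {
    public GameSurfaceView(Context context) {
        super(context);
        setRenderer(new GameRenderer()); // renders continuously by default
    }

    private static class GameRenderer implements GLSurfaceView.Renderer {
        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            // load sprite textures here
        }
        @Override
        public void onSurfaceChanged(GL10 gl, int w, int h) {
            gl.glViewport(0, 0, w, h);
            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            gl.glOrthof(0, w, h, 0, -1, 1); // orthographic projection, top-left origin
        }
        @Override
        public void onDrawFrame(GL10 gl) {
            // draw each sprite as a textured quad
        }
    }
}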

Preventing "flickering" when calling Drawable.draw()

I have a little experimentation app (essentially a very cut-down version of the LunarLander demo in the Android SDK), with a single SurfaceView. I have a Drawable "sprite" which I periodically draw into the SurfaceView's Canvas object in different locations, without attempting to erase the previous image. Thus:
private class MyThread extends Thread {
    SurfaceHolder holder; // Initialised in ctor (acquired via getHolder())
    Drawable sprite;      // Initialised in ctor
    Rect bounds;          // Initialised in ctor
    ...

    @Override
    public void run() {
        while (true) {
            Canvas c = holder.lockCanvas();
            synchronized (bounds) {
                sprite.setBounds(bounds);
            }
            sprite.draw(c);
            holder.unlockCanvasAndPost(c);
        }
    }

    /**
     * Periodically called from activity thread
     */
    public void updatePos(int dx, int dy) {
        synchronized (bounds) {
            bounds.offset(dx, dy);
        }
    }
}
Running in the emulator, what I'm seeing is that after a few updates have occurred, several old "copies" of the image begin to flicker, i.e. appearing and disappearing. I initially assumed that perhaps I was misunderstanding the semantics of a Canvas, and that it somehow maintains "layers", and that I was thrashing it to death. However, I then discovered that I only get this effect if I try to update faster than roughly every 200 ms. So my next best theory is that this is perhaps an artifact of the emulator not being able to keep up, and tearing the display. (I don't have a physical device to test on, yet.)
Is either of these theories correct?
Note: I don't actually want to do this in practice (i.e. draw hundreds of overlaid copies of the same thing). However, I would like to understand why this is happening.
Environment:
Eclipse 3.6.1 (Helios) on Windows 7
JDK 6
Android SDK Tools r9
App is targeting Android 2.3.1
Tangential question:
My run() method is essentially a stripped-down version of how the LunarLander example works (with all the excess logic removed). I don't quite understand why this isn't going to saturate the CPU, as there seems to be nothing to prevent it running at full pelt. Can anyone clarify this?
Ok, I've butchered Lunar Lander in a similar way to you, and having seen the flickering I can tell you that what you are seeing is a simple artefact of the double-buffering mechanism that every Surface has.
When you draw anything on a Canvas attached to a Surface, you are drawing to the 'back' buffer (the invisible one). And when you unlockCanvasAndPost() you are swapping the buffers over... what you drew suddenly becomes visible as the "back" buffer becomes the "front", and vice versa. And so your next frame of drawing is done to the old "front" buffer...
The point is that you always draw to separate buffers on alternate frames. I guess there's an implicit assumption in graphics architecture that you're always going to be writing every pixel.
Having understood this, I think the real question is why doesn't it flicker on hardware? Having worked on graphics drivers in years gone by, I can guess at the reasons but hesitate to speculate too far. Hopefully the above will be sufficient to satisfy your curiosity about this rendering artefact. :-)
You need to clear the previous position of the sprite, as well as the new position. This is what the View system does automatically. However, if you use a Surface directly and do not redraw every pixel (either with an opaque color or using a SRC blending mode) you must clear the content of the buffer yourself. Note that you can pass a dirty rectangle to lockCanvas() and it will do the union for you of the previous dirty rectangle and the one you are passing (this is the mechanism used by the UI toolkit.) It will also set the clip rect of the Canvas to be the union of these two rectangles.
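Applied to the thread in the question, that looks roughly like this (a sketch; the dirty-rectangle variant would pass a Rect to lockCanvas()):
Canvas c = holder.lockCanvas(); // or holder.lockCanvas(dirtyRect)
if (c != null) {
    c.drawColor(Color.BLACK); // erase the sprite's previous position
    synchronized (bounds) {
        sprite.setBounds(bounds);
    }
    sprite.draw(c);
    holder.unlockCanvasAndPost(c);
}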
As for your second question, unlockCanvasAndPost() will do a vsync wait, so you will never draw at more than ~60 fps (most devices that I've seen have a display refresh rate set around 55 Hz).
