I am using the HelloAR demo application and I want to capture the screen of my Samsung Galaxy Tab S5e.
In onDrawFrame I call my screenshot function:
@Override
public void onDrawFrame(GL10 gl) {
    // Capture the screen
    createBitmapFromGLSurface(gl);
    ....
}
Here is the createBitmapFromGLSurface function:
public void createBitmapFromGLSurface(GL10 gl) {
    // w and h are the surface width/height in pixels (set elsewhere, e.g. in onSurfaceChanged)
    ByteBuffer buffer = ByteBuffer.allocate(w * h * 4);
    GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);
    Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buffer);
    // Do something with the bitmap (e.g. save it to disk)
}
This works (image gets saved to disk and is perfect), but it is absolutely dog slow! So, I thought to offload it to a background thread... and tried to wrap it in an AsyncTask:
public void onDrawFrame(GL10 gl) {
    AsyncTask.execute(new Runnable() {
        @Override
        public void run() {
            createBitmapFromGLSurface(gl);
        }
    });
}
However, this just gives me a totally blank (transparent) image when I save the bitmap to disk.
How can I speed this up, get it to work on a background thread or (preferably) both?
OpenGL stuff runs on its own dedicated thread, which owns the GL context - you're probably getting a blank image because the AsyncTask runs on another thread, without access to that context?
weston's version runs on the same thread, which should be why it works! But you're doing a bunch of allocations (and IO if you're writing the bitmap out) which you really want to avoid in onDrawFrame. You have 16ms to get everything done to maintain a smooth 60fps. Is that what you mean by slow, the display chugs?
Personally I'd do as little as possible on the GL thread - write the data to the buffer, then hand that buffer off to a worker thread to create a bitmap, write to disk etc. Ideally allocate the buffer once and reuse it (but you'll have to manage concurrency, so you're not writing to your buffer on one thread while another thread is trying to read it).
I wouldn't use an AsyncTask personally but it should still work - you're only meant to call execute() from the UI thread though, so if you're messing around with Loopers that might slow it down a bit too.
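For what it's worth, here is a minimal sketch of that hand-off, assuming hypothetical pixelBuffer/surfaceWidth/surfaceHeight/captureRequested fields and a plain single-thread executor instead of AsyncTask; the buffer is copied before the hand-off to sidestep the concurrency issue mentioned above, so treat it as one possible structure rather than the HelloAR code:

private final ExecutorService worker = Executors.newSingleThreadExecutor();
private ByteBuffer pixelBuffer;            // allocate once, e.g. in onSurfaceChanged, as a direct buffer of w*h*4 bytes
private volatile boolean captureRequested; // set from wherever the screenshot is triggered
private int surfaceWidth, surfaceHeight;   // cached in onSurfaceChanged

@Override
public void onDrawFrame(GL10 gl) {
    // ... render the frame first ...
    if (captureRequested) {
        captureRequested = false;
        pixelBuffer.rewind();
        GLES20.glReadPixels(0, 0, surfaceWidth, surfaceHeight,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
        // Copy the pixels so the GL thread can keep reusing pixelBuffer next frame.
        pixelBuffer.rewind();
        final ByteBuffer copy = ByteBuffer.allocate(pixelBuffer.capacity());
        copy.put(pixelBuffer);
        copy.rewind();
        final int w = surfaceWidth, h = surfaceHeight;
        worker.execute(new Runnable() {
            @Override
            public void run() {
                // Bitmap creation and disk IO happen off the GL thread.
                Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
                bitmap.copyPixelsFromBuffer(copy);
                // save the bitmap to disk here
            }
        });
    }
}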
Related
So I am basically creating a 2D physics engine I want to use for other projects in the future.
The only problem here is the performance of my Surface View.
I get a constant 60 logic-updates per second, but only 6 frames per second no matter what I do.
If I call the draw() method in my calculating thread, I get 6 fps.
If I create a whole new thread which only draws, without any delay in the loop:
@Override
public void run() {
    while (running) {
        if (holder.getSurface().isValid()) {
            canvas = holder.lockCanvas();
            canvas.drawColor(Color.DKGRAY);
            for (GameObject object : objects) {
                object.draw(canvas);
            }
            holder.unlockCanvasAndPost(canvas);
        }
        gs.totalFrames++;
    }
}
(gs = Game Surface) I get 6 fps. The draw() function of my game object is pretty simple:
public void draw(Canvas canvas) {
canvas.drawCircle((float) x, (float) y, 10, paint);
}
Why is the performance so bad? (I only have 5 game objects, so that's 5 circles, which should not be a performance problem at all!)
I also tried a Runnable posting itself delayed, but that was just as bad.
Calling the draw() function in the GameSurface from the thread, instead of doing the drawing directly in it, didn't change anything. A tiny delay didn't improve it either.
Do I somehow use the SurfaceView in a way it's not supposed to be used in?
What is the problem?
I'm developing a simple application which should do some image processing and display the progress to the user. Because I want the user to see the whole process, it's not possible to set the bitmap to the ImageView after the process completes, or to do the work on the UI thread.
I ended up with the following solution:
Obtain the bitmap from the source
Set the bitmap to the ImageView
Run the image processing task on a background thread (using AsyncTask)
Here is simplified source code:
public class ImageProcessingTask extends AsyncTask<Params, Void, Bitmap>
{
    private Listener listener = null;

    public ImageProcessingTask(Listener listener)
    {
        this.listener = listener;
    }

    @Override
    protected Bitmap doInBackground(Params... params)
    {
        // Do work; calling setPixel in a cycle without Thread.sleep() is enough.
        // doInBackground has to return the processed Bitmap (assuming processImage returns it).
        return ImageProcessing.processImage(params[0]);
    }

    @Override
    protected void onPostExecute(Bitmap bitmap)
    {
        if (listener != null)
        {
            listener.onImageProcessingTaskPostExecute(bitmap);
        }
    }
}
Everything works just great, except that once in a while, at approximately half of the image (randomly), the process stops and I get this:
E/OpenGLRenderer﹕ Cannot generate texture from bitmap
In this thread I found out that the problem is related to hardware acceleration, so I turned it off:
imageView.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
The error disappeared from the console, but all I achieved was that the bitmap disappeared from the ImageView at exactly the same spot, instead of freezing.
I found several discussions throughout the internet, but I haven't discovered any useful solution to this problem. I tried to synchronize access to the bitmap using a synchronized statement and java.util.concurrent.Semaphore, but with no success.
And one last interesting fact I noticed: I have a simple color detection algorithm that computes the number of individual colors in the picture. If I run it after the image processing function, it returns different values, as if the bitmap hadn't changed colors at all (physically, so it probably isn't just an OpenGL problem).
List of approaches I used:
Disable hardware acceleration for the ImageView (see above)
Synchronize access to the bitmap (no change)
setPixels instead of setPixel (no change)
copyPixelsFromBuffer + copyPixelsToBuffer instead of setPixel (no change)
Move setPixel to the UI thread using a Handler (no change)
much more...
(This is due to the limitations of the server software I will be using; if I could change it, I would.)
I am receiving a sequence of 720x480 JPEG files (about 6kb in size), over a socket. I have benchmarked the network, and have found that I am capable of receiving those JPEGs smoothly, at 60FPS.
My current drawing operation is on a Nexus 10 display of 2560x1600, and here's my decoding method, once I have received the byte array from the socket:
public static void decode(byte[] tmp, Long time) {
try {
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferQualityOverSpeed = false;
options.inDither = false;
Bitmap bitmap = BitmapFactory.decodeByteArray(tmp, 0, tmp.length, options);
Bitmap background = Bitmap.createScaledBitmap
(bitmap, MainActivity.screenwidth, MainActivity.screenheight, false);
background.setHasAlpha(false);
Canvas canvas = MainActivity.surface.getHolder().lockCanvas();
canvas.drawColor(Color.BLACK);
canvas.drawBitmap(background, 0, 0, new Paint());
MainActivity.surface.getHolder().unlockCanvasAndPost(canvas);
} catch (Exception e) {
e.printStackTrace();
}
}
As you can see, I am clearing the canvas from a SurfaceView, and then drawing the Bitmap to the SurfaceView. My issue is that it is very, very slow.
Some tests based on adding System.currentTimeMillis() before and after the lock operation result in approximately a 30ms difference between getting the canvas, drawing the bitmap, and then pushing the canvas back. The displayed SurfaceView is very laggy, sometimes it jumps back and forth, and the frame rate is terrible.
Is there a referred method for drawing like this? Again, I can't modify what I'm getting from the server, but I'd like the bitmaps to be displayed at 60FPS when possible.
(I've tried setting the contents of an ImageView, and am receiving similar results). I have no other code in the SurfaceView that could impact this. I have set the holder to the RGBA_8888 format:
getHolder().setFormat(PixelFormat.RGBA_8888);
Is it possible to convert this stream of Bitmaps into a VideoView? Would that be faster?
Thanks.
Whenever you run into performance questions, use Traceview to figure out exactly where your problem lies. Using System.currentTimeMillis() is like attempting to trim a steak with a hammer.
The #1 thing here is to get the bitmap decoding off the main application thread. Do that in a background thread. Your main application thread should just be drawing the bitmaps, pulling them off of a queue populated by that background thread. Android has the main application thread set to render on a 60fps basis as of Android 4.1 (a.k.a., "Project Butter"), so as long as you can draw your Bitmap in a couple of milliseconds, and assuming that your network and decoding can keep your queue current, you should get 60fps results.
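Here is a rough sketch of that split, where readFrameFromSocket(), running, and the queue size are placeholders rather than anything from the question:

// Background thread: decode network frames and queue them.
private final BlockingQueue<Bitmap> frameQueue = new ArrayBlockingQueue<>(3);
private volatile boolean running = true;

void decodeLoop() {                          // runs on a background thread
    while (running) {
        byte[] jpeg = readFrameFromSocket(); // hypothetical network read
        Bitmap frame = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
        frameQueue.offer(frame);             // drop the frame if the queue is full
    }
}

void drawLoop(SurfaceHolder holder) {        // runs on the drawing thread
    while (running) {
        try {
            Bitmap frame = frameQueue.take();
            Canvas canvas = holder.lockCanvas();
            if (canvas != null) {
                canvas.drawBitmap(frame, 0, 0, null); // scale into a dst Rect here as needed
                holder.unlockCanvasAndPost(canvas);
            }
        } catch (InterruptedException e) {
            return;
        }
    }
}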
Also, always use inBitmap with BitmapFactory.Options on Android 3.0+ when you have images of consistent size, as part of your problem will be GC stealing CPU time. Work off a pool of Bitmap objects that you rotate through, so that you generate less garbage and do not fragment your heap so much.
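A sketch of that reuse (not the asker's code), assuming the frames really are all 720x480; before API 19 the reused bitmap has to be exactly the same size with inSampleSize == 1:

// Reuse one mutable 720x480 bitmap as the decode target instead of allocating a new one per frame.
private final BitmapFactory.Options reuseOptions = new BitmapFactory.Options();
private Bitmap reusable;

Bitmap decodeInto(byte[] jpeg) {
    if (reusable == null) {
        reusable = Bitmap.createBitmap(720, 480, Bitmap.Config.ARGB_8888);
    }
    reuseOptions.inMutable = true;
    reuseOptions.inBitmap = reusable;
    // decodeByteArray fills 'reusable' rather than allocating fresh memory.
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length, reuseOptions);
}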
I suspect that you are better served letting Android scale the image for you in an ImageView (or just by drawing to a View canvas) than you are in having BitmapFactory scale the image, as Android can take advantage of hardware graphics acceleration for rendering, which BitmapFactory cannot. Again, Traceview is your friend here.
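If you do stay on the SurfaceView, one way to at least drop the per-frame createScaledBitmap is to let the draw call itself scale into a destination rect; this is a sketch based on the question's decode(), not necessarily the fastest possible path:

// Let the draw call scale 720x480 up to the surface size, instead of building
// a full-screen Bitmap with createScaledBitmap every frame.
Rect dst = new Rect(0, 0, MainActivity.screenwidth, MainActivity.screenheight);
Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);

Canvas canvas = MainActivity.surface.getHolder().lockCanvas();
if (canvas != null) {
    canvas.drawBitmap(bitmap, null, dst, paint); // null src rect means the whole bitmap
    MainActivity.surface.getHolder().unlockCanvasAndPost(canvas);
}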
With regards to:
and have found that I am capable of receiving those JPEGs smoothly, at 60FPS.
that will only be true sometimes. Mobile devices tend to be mobile. Assuming that by "6kb" you mean 6KB (six kilobytes), 60 of those frames per second works out to roughly 360KB/s, so you are assuming a ~3Mbps (three megabits per second) connection, and that's far from certain.
With regards to:
Is it possible to convert this stream of Bitmaps into a VideoView?
VideoView is a widget that plays videos, and you do not have a video.
If push comes to shove, you might need to drop down to the NDK and do this in native code, though I would hope not.
I'm having this strange issue when using a GL10 object outside of the overridden Renderer functions.
For example, for the purpose of picking a geometry via color codes I tried to read out the color buffer via glReadPixels.
@Override
public void onDrawFrame(GL10 gl) {
    ...
    ByteBuffer pixel = ByteBuffer.allocateDirect(4);
    pixel.order(ByteOrder.nativeOrder());
    gl.glReadPixels(0, 0, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
    while (pixel.hasRemaining()) {
        Log.v(TAG, "" + (int) (pixel.get() & 0xFF));
    }
}
This works and gives me the color values in range 0..255 for the pixel in the bottom left corner.
Now when I take my GL10 object and make it available to the whole class as a field, it doesn't seem to work anymore:
@Override
public void update(Observable observable, Object data) {
    Log.v(TAG, "update Observer glsurfaceviewrenderer");
    if (data instanceof MotionEvent) {
        MotionEvent event = (MotionEvent) data;
        ByteBuffer pixel = ByteBuffer.allocateDirect(4);
        pixel.order(ByteOrder.nativeOrder());
        gl.glReadPixels(0, 0, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
        while (pixel.hasRemaining()) {
            Log.v(TAG, "" + (int) (pixel.get() & 0xFF));
        }
    }
}
This doesn't work; all colors have the value 0. The only difference is that I used the gl object via a field and not via a function argument. I checked the memory pointer of the gl object by printing it to Log, and both have the same address.
I'm really stumped right now... anybody have an idea?
Two problems:
1) You can only make OpenGL calls from the thread to which the context is bound. onDrawFrame runs in a thread created by GLSurfaceView, while I assume your update method is called from the main UI thread.
2) glReadPixels reads from the buffer you are currently rendering to. After onDrawFrame returns, GLSurfaceView will call eglSwapBuffers. You will no longer be able to read the buffer you were drawing to.
You'll need to reorganize your code so that you know what pixel you need to read at the time onDrawFrame is called. Your only other option is to fetch the entire frame every time.
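A rough sketch of that reorganization, using illustrative pickRequested/pickX/pickY/surfaceHeight fields rather than your actual classes: record the requested coordinates from the touch event on the UI thread, and do the actual glReadPixels inside onDrawFrame, before GLSurfaceView swaps the buffers.

private volatile boolean pickRequested = false;
private volatile int pickX, pickY;

@Override
public void update(Observable observable, Object data) {
    if (data instanceof MotionEvent) {
        MotionEvent event = (MotionEvent) data;
        pickX = (int) event.getX();
        pickY = surfaceHeight - (int) event.getY(); // GL's origin is bottom-left
        pickRequested = true;                       // just flag the request here
    }
}

@Override
public void onDrawFrame(GL10 gl) {
    // ... draw the scene ...
    if (pickRequested) {
        pickRequested = false;
        ByteBuffer pixel = ByteBuffer.allocateDirect(4);
        pixel.order(ByteOrder.nativeOrder());
        // Still on the GL thread and before eglSwapBuffers, so this reads the buffer just rendered.
        gl.glReadPixels(pickX, pickY, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
        while (pixel.hasRemaining()) {
            Log.v(TAG, "" + (pixel.get() & 0xFF));
        }
    }
}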
I've read quite a few tutorials on game programming on Android,
and all of them provide basically the same solution for drawing the game, that is, having a dedicated thread spinning like this:
public void run() {
while(true) {
if(!surfaceHolder.getSurface().isValid()) continue;
Canvas canvas = surfaceHolder.lockCanvas();
drawGame(canvas); /* do actual drawing here */
surfaceHolder.unlockCanvasAndPost(canvas);
}
}
Now I'm wondering, isn't this wasteful? Suppose I have a game with very simple graphics, so that the actual time spent in drawGame is small;
then I'm going to draw the same things over and over, stealing CPU from the other threads.
A possibility could be skipping the drawing and sleeping a bit if the game state hasn't changed,
which I could check by having the state-update thread maintain a suitable status flag.
But maybe there are other options. For example, couldn't it be possible to synchronize with the rendering,
so that I don't post updates too often? Or am I missing something, and is that precisely what lockCanvas does,
that is, it blocks and burns no CPU until the proper time?
Thanks in advance
L.
I would say the tutorials you have seen are wrong; you really want to wait in the main loop. 16 milliseconds would be the target frame time in the example below:
public void run() {
while(true) {
long start = System.currentTimeMillis();
if(!surfaceHolder.getSurface().isValid()) continue;
Canvas canvas = surfaceHolder.lockCanvas();
drawGame(canvas); /* do actual drawing here */
surfaceHolder.unlockCanvasAndPost(canvas);
long frameTime = System.currentTimeMillis() - start;
try {
Thread.sleep(Math.max(0, 16 - ( frameTime )));
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
You don't need to draw the canvas from a loop in a thread; you can do this on request, like when moving a finger over the screen.
If the animation is not intensive, you can just use a custom view and then invalidate() the view from some user input event, as in the sketch below.
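A minimal sketch of that invalidate() approach, with illustrative names (GameView is not from the questions above):

// Redraw only when something changed: call invalidate() from the event,
// and let the framework schedule onDraw().
public class GameView extends View {
    private float cx, cy;
    private final Paint paint = new Paint();

    public GameView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        cx = event.getX();
        cy = event.getY();
        invalidate();          // request a single redraw; no drawing loop needed
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawColor(Color.DKGRAY);
        canvas.drawCircle(cx, cy, 10, paint);
    }
}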
It is also possible to stop the thread and then create and start it again, as many times as needed within the same SurfaceView class.
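A common way to structure that stop/start pattern is to tie the drawing thread to the surface callbacks; this is just a sketch, assuming the SurfaceView subclass implements SurfaceHolder.Callback and Runnable, with illustrative running and drawThread fields:

// Create the thread when the surface appears, and join it when the surface goes away.
@Override
public void surfaceCreated(SurfaceHolder holder) {
    running = true;
    drawThread = new Thread(this); // 'this' implements Runnable with the draw loop
    drawThread.start();
}

@Override
public void surfaceDestroyed(SurfaceHolder holder) {
    running = false;               // let the draw loop fall out of its while(running)
    try {
        drawThread.join();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}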