I'm having a strange issue when using a GL10 object outside of the overridden Renderer methods.
For example, for the purpose of picking a geometry via color codes I tried to read out the color buffer via glReadPixels.
@Override
public void onDrawFrame(GL10 gl) {
    ...
    ByteBuffer pixel = ByteBuffer.allocateDirect(4);
    pixel.order(ByteOrder.nativeOrder());
    gl.glReadPixels(0, 0, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
    while (pixel.hasRemaining()) {
        Log.v(TAG, "" + (int) (pixel.get() & 0xFF));
    }
}
This works and gives me the color values in the range 0..255 for the pixel in the bottom-left corner.
Now when I store the GL10 object in a field so it is available to the whole class, it doesn't seem to work anymore:
@Override
public void update(Observable observable, Object data) {
    Log.v(TAG, "update Observer glsurfaceviewrenderer");
    if (data instanceof MotionEvent) {
        MotionEvent event = (MotionEvent) data;
        ByteBuffer pixel = ByteBuffer.allocateDirect(4);
        pixel.order(ByteOrder.nativeOrder());
        gl.glReadPixels(0, 0, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
        while (pixel.hasRemaining()) {
            Log.v(TAG, "" + (int) (pixel.get() & 0xFF));
        }
    }
}
This doesn't work: all color values are 0. The only difference is that I access the gl object through a field instead of a method argument. I checked the gl object by printing it to the log, and both references show the same address.
I'm really stumped right now... does anybody have an idea?
Two problems:
1) You can only make OpenGL calls from the thread to which the context is bound. onDrawFrame runs in a thread created by GLSurfaceView, while I assume your update method is called from the main UI thread.
2) glReadPixels reads from the buffer you are currently rendering to. After onDrawFrame returns, GLSurfaceView will call eglSwapBuffers. You will no longer be able to read the buffer you were drawing to.
You'll need to reorganize your code so that you know which pixel you need to read at the time onDrawFrame is called. Your only other option is to fetch the entire frame every time.
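For example, a minimal sketch of that reorganization (the fields pickRequested, pickX, pickY, and surfaceHeight are made up here, and the color decoding is left out): record the touch position in update(), then do the actual read on the GL thread inside onDrawFrame(), before the frame is swapped.
private volatile boolean pickRequested = false;
private volatile int pickX, pickY;

@Override
public void update(Observable observable, Object data) {
    if (data instanceof MotionEvent) {
        MotionEvent event = (MotionEvent) data;
        // Convert view coordinates (origin top-left) to GL coordinates (origin bottom-left).
        pickX = (int) event.getX();
        pickY = surfaceHeight - (int) event.getY();
        pickRequested = true;  // picked up by the GL thread on the next frame
    }
}

@Override
public void onDrawFrame(GL10 gl) {
    // ... render the scene with the color-code shading ...
    if (pickRequested) {
        pickRequested = false;
        ByteBuffer pixel = ByteBuffer.allocateDirect(4);
        pixel.order(ByteOrder.nativeOrder());
        gl.glReadPixels(pickX, pickY, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
        // decode the picked geometry from the 4 bytes in 'pixel'
    }
}
GLSurfaceView.queueEvent(Runnable) also runs code on the GL thread, but it does not help with the second problem: after eglSwapBuffers the contents of the back buffer are no longer guaranteed.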
Related
I am using the HelloAR demo application and I want to capture the screen of my Samsung Galaxy Tab S5e.
In onDrawFrame I call my screenshot function:
@Override
public void onDrawFrame(GL10 gl) {
    // Capture the screen
    createBitmapFromGLSurface(gl);
    ....
}
Here is the createBitmapFromGLSurface function:
public void createBitmapFromGLSurface(GL10 gl) {
    ByteBuffer buffer = ByteBuffer.allocate(w * h * 4);
    GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);
    Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buffer);
    // Do something with the bitmap
}
This works (the image gets saved to disk and looks perfect), but it is absolutely dog slow! So I thought I'd offload it to a background thread and tried to wrap it in an AsyncTask:
public void onDrawFrame(GL10 gl) {
    AsyncTask.execute(new Runnable() {
        @Override
        public void run() {
            createBitmapFromGLSurface(gl);
        }
    });
}
However, this just gives me a totally blank (transparent) image when I save the bitmap to disk.
How can I speed this up, get it to work on a background thread or (preferably) both?
OpenGL work runs on its own dedicated thread, which owns the GL context - you're probably getting a blank image because the AsyncTask runs on another thread, without access to that context.
weston's version runs on the same thread, which is why it works! But you're doing a bunch of allocations (and I/O if you're writing the bitmap out) which you really want to avoid in onDrawFrame. You have 16ms to get everything done to maintain a smooth 60fps. Is that what you mean by slow, that the display chugs?
Personally I'd do as little as possible on the GL thread - write the data to the buffer, then hand that buffer off to a worker thread to create a bitmap, write to disk etc. Ideally allocate the buffer once and reuse it (but you'll have to manage concurrency, so you're not writing to your buffer on one thread while another thread is trying to read it).
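A rough sketch of that hand-off, reusing the w, h, and takeScreenshot fields from the question (the executor and the file-writing step are placeholders):
private final ExecutorService saveExecutor = Executors.newSingleThreadExecutor();

@Override
public void onDrawFrame(GL10 gl) {
    // ... draw the frame ...
    if (takeScreenshot) {
        takeScreenshot = false;
        final ByteBuffer buffer = ByteBuffer.allocateDirect(w * h * 4).order(ByteOrder.nativeOrder());
        // The read must happen on the GL thread, before the buffers are swapped.
        GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);
        saveExecutor.execute(new Runnable() {
            @Override
            public void run() {
                // Bitmap creation and disk I/O stay off the GL thread.
                Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
                buffer.rewind();
                bitmap.copyPixelsFromBuffer(buffer);
                // write 'bitmap' to disk here
            }
        });
    }
}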
I wouldn't use an AsyncTask personally but it should still work - you're only meant to call #execute from the UI thread though, so if you're messing around with Loopers that might slow it down a bit too.
To save an image processed by OpenGL ES, I wrote the following code, and it works well.
ByteBuffer bb = ByteBuffer.allocate(mWidth * mHeight * 4);
mGL.glReadPixels(0, 0, mWidth, mHeight, GL_RGBA, GL_UNSIGNED_BYTE, bb);
try {
    TJCompressor tjCompressor = new TJCompressor(bb.array(), 0, 0, mWidth, 0, mHeight, TJ.PF_RGB);
    tjCompressor.setJPEGQuality(100);
    tjCompressor.setSubsamp(TJ.SAMP_444);
    return tjCompressor.compress(0);
} catch (Exception e) {
    e.printStackTrace();
}
After that, to get 24-bit color information without the alpha channel (to save memory and processing time), I changed the first two lines of the code as follows.
ByteBuffer bb = ByteBuffer.allocate(mWidth * mHeight * 3);
mGL.glReadPixels(0, 0, mWidth, mHeight, GL_RGB, GL_UNSIGNED_BYTE, bb);
Additionally, I removed EGL_ALPHA_SIZE from the EGLConfig of mGL (the GL10 instance), and I passed GLES20.GL_RGB as the internal format parameter when calling GLUtils.texImage2D().
However, something is wrong: the resulting image is completely black, and when I inspected the bb buffer after the glReadPixels() call, all of the data was zero. I need advice. Please help.
In core GLES2, the only valid format/type combos for glReadPixels are:
GL_RGBA/GL_UNSIGNED_BYTE
Optional framebuffer-specific format/type queried via glGetIntegerv with GL_IMPLEMENTATION_COLOR_READ_FORMAT and GL_IMPLEMENTATION_COLOR_READ_TYPE respectively
In GLES2 without extensions, if GL_IMPLEMENTATION_COLOR_READ_FORMAT/GL_IMPLEMENTATION_COLOR_READ_TYPE don't yield something useful, you're stuck with GL_RGBA/GL_UNSIGNED_BYTE, unfortunately.
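If you want to try the implementation-specific combo before falling back, here is a sketch of that query using the GLES20 bindings (bb would have to be sized to match whichever format you end up using):
int[] readFormat = new int[1];
int[] readType = new int[1];
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_FORMAT, readFormat, 0);
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_TYPE, readType, 0);
if (readFormat[0] == GLES20.GL_RGB && readType[0] == GLES20.GL_UNSIGNED_BYTE) {
    // This implementation happens to support tightly-packed RGB reads.
    GLES20.glPixelStorei(GLES20.GL_PACK_ALIGNMENT, 1);  // avoid row-padding surprises with 3-byte pixels
    GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, bb);
} else {
    // Fall back to the combination that is always supported.
    GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
}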
With GLES3, you can glReadPixels into the bound GL_PIXEL_PACK_BUFFER and glMapBufferRange that, though again, you're limited by format.
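A sketch of that GLES3 path with the Android GLES30 bindings (buffer sizing and error handling omitted):
// One-time setup: create a pixel pack buffer big enough for a full RGBA frame.
int[] pbo = new int[1];
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, mWidth * mHeight * 4, null, GLES30.GL_DYNAMIC_READ);

// Per read: with the PBO bound, the last argument is an offset into the buffer, not a client pointer.
GLES30.glReadPixels(0, 0, mWidth, mHeight, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
ByteBuffer pixels = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, mWidth * mHeight * 4, GLES30.GL_MAP_READ_BIT);
// ... read the pixel data from 'pixels' ...
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);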
I'll note that drivers are prone to emulating tightly-packed rgb8 24-bit formats, instead implementing only the better aligned formats like rgba8, rgb565, and rgba4. A renderable format exposed as "rgb8" is potentially just rgbx8 behind the scenes.
Highly driver dependent, but if you don't care about keeping the contents of the framebuffer around, you might be able to win back some memory bandwidth with EXT_discard_framebuffer (or glInvalidateFramebuffer in GLES3).
You might also look into EGL_KHR_lock_surface3.
I have a basic OpenGL ES 2.0 application running on a GLSurfaceView that has been added like this:
GLSurfaceView view = new GLSurfaceView(this);
view.setRenderer(new OpenGLRenderer());
setContentView(view);
Basically I am trying get a screenshot with the following method:
private static Bitmap getScreenshot(View v)
{
    Bitmap b = Bitmap.createBitmap(v.getWidth(), v.getHeight(),
            Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(b);
    v.draw(c);
    return b;
}
But it seems the bitmap is transparent. The view I am passing in is:
View content = m_rootActivity.getWindow().getDecorView().getRootView();
Does anyone have a solution for taking a screenshot of OpenGL ES content without resorting to the onDrawFrame method, which I have seen in other solutions?
Maybe pass in a reference to the renderer? Any help would be appreciated.
Update:
I was exploring rendering the bitmap from onDrawFrame (Display black screen while capture screenshot of GLSurfaceView).
However, I was wondering if there is a better solution, since I won't have access to the renderer or the surface view. I could pass in references to them, but I would prefer a solution that just captures the entire view, as mentioned earlier.
See this question.
You can get a screenshot with:
@Override
public void onDrawFrame(GL10 gl) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    // draw ...
    if (takeScreenshot) {
        int screenshotSize = width * height;
        ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
        bb.order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        int[] pixelsBuffer = new int[screenshotSize];
        bb.asIntBuffer().get(pixelsBuffer);
        bb = null;
        for (int i = 0; i < screenshotSize; ++i) {
            // The alpha and green channels' positions are preserved while the red and blue are swapped
            pixelsBuffer[i] = ((pixelsBuffer[i] & 0xff00ff00))
                    | ((pixelsBuffer[i] & 0x000000ff) << 16)
                    | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
        }
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0, width, height);
        // save bitmap...
    }
}
There is no way to get the buffer data from the GPU to the CPU without reading the pixels. This is not the same pipeline as with views: the buffer is filled on the GPU and then sent directly to the display, or nowhere at all.
So the answer is no: you cannot simply grab a screenshot, because the concept of a screenshot does not really exist here. There is only raw (usually RGBA) data in the GPU buffer, and that data only contains everything you have drawn once the frame is complete. If you read it at an arbitrary time, the buffer might just have been cleared, it might be half drawn, or, if you are lucky, fully drawn.
That is why you take the screenshot inside the drawing pipeline: you must ensure the buffer is filled with the data.
There are generally two smart ways of intercepting the drawing pipeline, best done just before presenting the buffer. One is to pass a flag indicating that a screenshot should be taken, in which case the engine itself creates the screenshot; this is nice because it already has all the buffer data at hand. The second is to register a callback through which the engine notifies the owner every time a frame is fully drawn; the owner can then do additional drawing, create a screenshot, or count frames per second... this also has many benefits, but you do at least need to pass the buffer dimensions to do anything with the buffer.
Also note that reading the pixels is extremely slow, and in some cases the image you receive will be upside-down.
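A minimal sketch of the flag/callback variant (the interface and the readFramebufferAsBitmap() helper are made-up names; the helper would be something like the glReadPixels code above):
public interface FrameCallback {
    void onFrameCaptured(Bitmap bitmap);
}

private volatile FrameCallback pendingCapture;  // set from any thread, consumed on the GL thread

public void requestScreenshot(FrameCallback callback) {
    pendingCapture = callback;
}

@Override
public void onDrawFrame(GL10 gl) {
    // ... draw the whole scene first, so the buffer is completely filled ...
    FrameCallback callback = pendingCapture;
    if (callback != null) {
        pendingCapture = null;
        callback.onFrameCaptured(readFramebufferAsBitmap());  // hypothetical helper doing the glReadPixels work
    }
}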
I'm trying to port an emulator that I have written in Java to Android. Things have been going nicely; I was able to port most of my code with minor changes. However, due to how emulation works, I need to render the image at the pixel level.
On desktop Java I use
int[] pixelsA = ((DataBufferInt) src.getRaster().getDataBuffer()).getData();
which gives me a reference to the pixel buffer so I can update it on the fly (minimizing object creation).
Currently this is what my emulator for Android does every frame:
@Override
public void onDraw(Canvas canvas)
{
    buffer = Bitmap.createBitmap(pixelsA, 256, 192, Bitmap.Config.RGB_565);
    canvas.drawBitmap(buffer, 0, 0, null);
}
pixelsA is an int[] array that contains all the colour information, so every frame it has to create a Bitmap object by doing
buffer = Bitmap.createBitmap(pixelsA, 256, 192, Bitmap.Config.RGB_565);
which I believe is quite expensive and slow.
Is there any way to draw pixels efficiently with canvas?
One quite low-level method, but it works fine for me (with native code):
Create a Bitmap object as big as your visible screen.
Also create a View object and implement the onDraw method.
Then in native code you'd load the libjnigraphics.so library and look up the functions AndroidBitmap_lockPixels and AndroidBitmap_unlockPixels.
These functions are defined in Android source in bitmap.h.
Then you'd call lock/unlock on the bitmap, receiving the address of the raw pixels. You must interpret the pixels' RGB format according to what it really is (16-bit 565 or 32-bit 8888).
After changing the content of the bitmap, you want to present it on screen.
Call View.invalidate() on your View. In its onDraw, blit your bitmap into the given Canvas.
This method is very low level and depends on the actual implementation of Android, but it's very fast; you can get 60fps without a problem.
bitmap.h has been part of the Android NDK since platform version 8, so this IS the official way to do it from Android 2.2 on.
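For what it's worth, the Java side of that setup is tiny; here is a sketch (nativeRenderInto is a made-up name for whatever your JNI entry point is):
public class EmulatorView extends View {
    private final Bitmap frame = Bitmap.createBitmap(256, 192, Bitmap.Config.RGB_565);

    public EmulatorView(Context context) {
        super(context);
    }

    public void renderFrame() {
        nativeRenderInto(frame);  // native code locks the bitmap, writes the raw pixels, unlocks it
        postInvalidate();         // schedule onDraw on the UI thread
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(frame, 0, 0, null);
    }

    private native void nativeRenderInto(Bitmap bitmap);
}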
You can use the drawBitmap overload that avoids creating a Bitmap each time, or, as a last resort, draw the pixels one by one with drawPoint.
Don't recreate the bitmap every single time. Try something like this:
Bitmap buffer = null;

@Override
public void onDraw(Canvas canvas)
{
    if (buffer == null) buffer = Bitmap.createBitmap(256, 192, Bitmap.Config.RGB_565);
    buffer.copyPixelsFromBuffer(IntBuffer.wrap(pixelsA)); // copyPixelsFromBuffer expects a Buffer, not an int[]
    canvas.drawBitmap(buffer, 0, 0, null);
}
EDIT: as pointed out, you need to update the pixel buffer. And the bitmap must be mutable for that to happen.
If pixelsA is already an array of pixels (which is what I infer from your statement about it containing colors), then you can render them directly without converting:
canvas.drawBitmap(pixelsA, 0, 256, 0, 0, 256, 192, false, null);
I have implemented a tile-based layer in my game, but after it has been drawn for the first time, the first time I try to update it (to add tiny decals like blood splats, craters, etc. that are added to the map and that I don't want to draw separately every loop) I get a huge hiccup of ~3 seconds.
After some tests, I have found the single call that hangs:
gl.glTexSubImage2D(GL10.GL_TEXTURE_2D, 0, x, y, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixelBuffer);
The decal is really tiny (32 * 32 pixels), there is no OpenGL error after this call, and I really don't get it (I mean... creating the whole tile layer takes much less than 1 second and is done with a thousand glTexSubImage2D calls on a large blank texture... 3 seconds is huge).
I already have a workaround (a "fake" update just before the splash screen goes away), but I really want to understand this odd (for me, at least) behaviour.
(I'm deeply sorry for my English, I hope it's understandable.)
METHOD:
public static void replaceSubImg(GL10 gl, int hwdId, int x, int y, int width, int height, Buffer pixelBuffer) {
    gl.glFinish();
    gl.glBindTexture(GL10.GL_TEXTURE_2D, hwdId);
    long time = System.currentTimeMillis();
    gl.glTexSubImage2D(GL10.GL_TEXTURE_2D, 0, x, y, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixelBuffer);
    Log.d("testApps", "openglUtil,######################### subImg replaced in " + (System.currentTimeMillis() - time) + " ms");
}
LOG:
DEBUG/testApps(3809): openglUtil,######################### subImg replaced in 2811 ms
DEBUG/testApps(3809): openglUtil,######################### subImg replaced in 1 ms (this is the 2nd run)
After a ton of tests, I have isolated the whole thing, and this looks like a bug in the OpenGL implementation in the latest CyanogenMod (or just Gingerbread, I haven't tested that yet...):
CyanogenMod (Gingerbread, 7 RC2):
uploading the texture: ~1 ms
altering the texture: ~1 ms
drawing the texture: ~1 ms
the first edit of any texture, even 1px by 1px: a huge delay (depending on how big the texture is), in my case ~3000 ms
any further edit, regardless of size: back to ~1 ms
On Froyo (stock, FRF91):
uploading the texture: ~1 ms
altering the texture: ~1 ms
drawing the texture: ~1 ms
the first edit of any texture, even 1px by 1px: a small delay (depending on how big the texture is), in my case ~80 ms
any further edit, regardless of size: back to ~0 ms
There is still a small delay, but an understandable one this time.
I'm not sure if I'm understanding you correctly, but:
Do you call gl.glTexSubImage2D(GL10.GL_TEXTURE_2D, 0, x, y, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixelBuffer); in every iteration? That call is pretty expensive and shouldn't be made every frame (it's like creating the texture every time you draw).
You need to load your textures in the onSurfaceCreated callback, store your GL pointer and assign that pointer to a Texture object (an object that just holds the pointer into OpenGL memory).
After that you can just bind it using glBindTexture(GL10.GL_TEXTURE_2D, yourPointer);.
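Roughly like this (a sketch; tileBitmap stands in for whatever Bitmap holds your tile data):
private int textureId;  // the "pointer" (texture name) OpenGL hands back

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    int[] names = new int[1];
    gl.glGenTextures(1, names, 0);
    textureId = names[0];
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, tileBitmap, 0);  // upload once, here
}

@Override
public void onDrawFrame(GL10 gl) {
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);  // just bind, no re-upload
    // ... draw, and call glTexSubImage2D only when the tile layer actually changes ...
}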
OpenGL instructions can be pipelined and may not be executed immediately. It seems likely that the glTexSubImage2D call itself is fast, but has to wait for some other operation to complete.
To figure this out, it might be instructive to call glFinish after some major operations, and see when that takes a long time. You may find that the actual time is spent elsewhere... for example, in a thousand glTexSubImage2D calls?
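For example, a small helper like this (a sketch built around the question's own logging) would show which stage the stall is actually attributed to:
private static void finishAndTime(GL10 gl, String label) {
    long start = System.currentTimeMillis();
    gl.glFinish();  // force every queued command to complete before taking the measurement
    Log.d("testApps", label + ": pipeline drained in " + (System.currentTimeMillis() - start) + " ms");
}

// Usage around the suspect call:
finishAndTime(gl, "before glTexSubImage2D");
gl.glTexSubImage2D(GL10.GL_TEXTURE_2D, 0, x, y, width, height,
        GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixelBuffer);
finishAndTime(gl, "after glTexSubImage2D");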