Android OpenGL application crashes after a few onDrawFrame calls - android

I am programming a little application for Android in OpenGL ES 2.0. Everything was going fine, but today I implemented 2D text drawing. I just create a normal canvas, write text on it, and then load this bitmap as a 2D texture and draw it. This is the method I use to change the value of the text.
public void setText(String text) {
    if (!this.text.equals(text)) {
        this.text = text;
        // Render the text into an off-screen bitmap.
        Bitmap bitmap = Bitmap.createBitmap(256, 256, Bitmap.Config.ARGB_4444);
        Canvas canvas = new Canvas(bitmap);
        bitmap.eraseColor(0);
        Paint textPaint = new Paint();
        textPaint.setTextSize(32);
        textPaint.setAntiAlias(true);
        textPaint.setARGB(0xff, 0x00, 0x00, 0x00);
        textPaint.getTextBounds(text, 0, text.length(), bounds);
        canvas.drawText(text, 0, bounds.height(), textPaint);
        // Delete the old texture and upload the new bitmap as a texture.
        GLES20.glDeleteTextures(1, new int[]{textureHandle}, 0);
        this.setTexture(bitmap);
        bitmap.recycle();
    }
}
I wanted to try it out, so I started counting the number of onDrawFrame calls. It works well, but around the 1045th call it freezes, then continues for a few more frames, and then the app just crashes.
I concluded that it might be happening because of a lack of free memory, so I added GLES20.glDeleteTextures(1, new int[]{textureHandle}, 0); to free the unneeded texture, but it hasn't changed anything.
Any ideas where the problem might be?
Thanks, Toneks

If setText() is being called from a different thread than the rest of your OpenGL ES code, then that is the problem. All OpenGL ES calls must be made from a single thread on Android (for a GLSurfaceView, that is the renderer thread). This article gives more detail on this:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1
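If that is the case, one common fix is to marshal the update onto the GL thread with GLSurfaceView.queueEvent(). A minimal sketch, assuming you keep references to your GLSurfaceView and your renderer (updateLabel, glSurfaceView and textRenderer are illustrative names, not from the question):

// Hypothetical wrapper: runs the texture update on the renderer thread,
// where GLES20 calls are safe. glSurfaceView and textRenderer are assumed fields.
public void updateLabel(final String newText) {
    glSurfaceView.queueEvent(new Runnable() {
        @Override
        public void run() {
            textRenderer.setText(newText); // setText() from the question, now on the GL thread
        }
    });
}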

Related

create a mask from bitmap

I have an image as a bitmap (example image not shown).
I want to create a mask programmatically from that bitmap, like this (target mask image not shown).
I searched online but did not find any solution.
2003 Java Q and A about Masking Images
The question posed on that website seems similar to yours, and the answers should help you out. Their code was written in Java back in 2003, but judging from your tags I believe you are asking about something you plan to program in Android Studio, presumably in Java. Your question is a bit vague, but maybe that website will be a good starting point. There are longer solutions with complete code written out, but I'll post one of the solutions listed.
One of the solutions posted on that forum is this:
// get the image pixel
int imgPixel = image.getRGB(x, y);
// get the mask pixel
int maskPixel = mask.getRGB(x, y);
// now, get rid of everything but the blue channel
// and shift the blue channel into the alpha channel's sample space
maskPixel = (maskPixel & 0xFF) << 24;
// now, merge image and mask pixels and copy them back to the image
image.setRGB(x, y, imgPixel | maskPixel);
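That snippet uses AWT's BufferedImage.getRGB()/setRGB(). A rough, untested translation to Android's Bitmap API (slow because it works per pixel, and assuming the mask bitmap's blue channel carries the mask, as above; applyMask is an illustrative name) could look like this:

// Hypothetical Android version of the AWT loop above: shift the mask's
// blue channel into the alpha channel and merge it into the image.
Bitmap applyMask(Bitmap image, Bitmap mask) {
    Bitmap result = image.copy(Bitmap.Config.ARGB_8888, true); // mutable copy
    for (int y = 0; y < result.getHeight(); y++) {
        for (int x = 0; x < result.getWidth(); x++) {
            int imgPixel = result.getPixel(x, y);
            int maskPixel = (mask.getPixel(x, y) & 0xFF) << 24; // blue -> alpha
            result.setPixel(x, y, imgPixel | maskPixel);        // merge, as in the AWT code
        }
    }
    return result;
}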
I found a solution to my problem: I tinted my bitmap using a PorterDuffColorFilter in the following way.
public Bitmap tintBitmap(Bitmap bitmap, int color) {
    // SRC_IN keeps the bitmap's alpha but replaces its colour with the tint.
    Paint paint = new Paint();
    paint.setColorFilter(new PorterDuffColorFilter(color, PorterDuff.Mode.SRC_IN));
    Bitmap bitmapResult = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmapResult);
    canvas.drawBitmap(bitmap, 0, 0, paint);
    return bitmapResult;
}
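For example (hypothetical usage; originalBitmap is whatever bitmap you want to mask), a solid black silhouette that keeps the bitmap's alpha would be:

// Tint every opaque pixel black, preserving the original alpha channel.
Bitmap mask = tintBitmap(originalBitmap, Color.BLACK);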

Screenshot of an Android OpenGL ES application

I have a basic OpenGL ES 2.0 application running on a GLSurfaceView that has been added like this:
GLSurfaceView view = new GLSurfaceView(this);
view.setRenderer(new OpenGLRenderer());
setContentView(view);
Basically I am trying to get a screenshot with the following method:
private static Bitmap getScreenshot(View v)
{
    Bitmap b = Bitmap.createBitmap(v.getWidth(), v.getHeight(),
            Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(b);
    v.draw(c);
    return b;
}
But it seems the bitmap is transparent. The view I am passing in is:
View content = m_rootActivity.getWindow().getDecorView().getRootView();
Does anyone have a solution for how to take a screenshot of OpenGL ES content without resorting to going into the onDrawFrame method, which I have seen in other solutions?
Maybe pass in a reference to the renderer? Any help would be appreciated.
Update:
I was exploring rendering the bitmap from onDrawFrame (Display black screen while capture screenshot of GLSurfaceView).
However, I was wondering if there is a better solution, since I won't have access to the renderer or the surface view. I can pass in their references, but I would like a solution where we can just capture the entire view, as mentioned earlier.
See this question.
You can get a screenshot with:
@Override
public void onDrawFrame(GL10 gl) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    // draw ...
    if (takeScreenshot) {
        int screenshotSize = width * height;
        ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
        bb.order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        int[] pixelsBuffer = new int[screenshotSize];
        bb.asIntBuffer().get(pixelsBuffer);
        bb = null;
        for (int i = 0; i < screenshotSize; ++i) {
            // The alpha and green channels' positions are preserved while the red and blue are swapped
            pixelsBuffer[i] = (pixelsBuffer[i] & 0xff00ff00)
                    | ((pixelsBuffer[i] & 0x000000ff) << 16)
                    | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
        }
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        // Offset at the last row plus a negative stride flips the image vertically.
        bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0, width, height);
        // save bitmap...
    }
}
You cannot get the buffer data from the GPU to the CPU without reading the pixels. You should understand that this is not the same pipeline as with views: the data in the buffer is filled on the GPU and is then sent directly to the display, or nowhere.
So the answer is no, you cannot simply "get a screenshot", because the concept of a screenshot does not even exist here. There is only raw (usually RGBA) data in the GPU buffer, and that data must be filled by your drawing to contain everything you have drawn; if you were to read it at an arbitrary time, the buffer might be cleared, half drawn, or, if you are lucky, fully drawn.
That is why you take the screenshot inside the drawing pipeline, where you can ensure the buffer is filled with the data.
There are generally two smart ways of intercepting the drawing pipeline, best done just before presenting the buffer. One is to pass a flag indicating that a screenshot should be taken; the engine itself then creates the screenshot, which is nice because it has all the buffer data on the fly. The second is to register a callback that the engine invokes every time a frame has been fully drawn; the owner can then do additional drawing, create a screenshot, count frames per second, and so on. This again has many benefits, but you need at least the buffer dimensions to do anything with the buffer.
Also note that reading the pixels is extremely slow, and in some cases the image you receive will be upside-down.
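For illustration, a minimal sketch of the flag-plus-callback idea described above, written as part of a renderer (ScreenshotCallback, requestScreenshot and readFramebufferIntoBitmap are made-up names; readFramebufferIntoBitmap would wrap the glReadPixels code shown earlier):

// Hypothetical renderer snippet: a screenshot is requested from any thread
// and serviced at the end of the next fully drawn frame.
public interface ScreenshotCallback {
    void onScreenshot(Bitmap bitmap);
}

private volatile ScreenshotCallback pendingScreenshot;

public void requestScreenshot(ScreenshotCallback callback) {
    pendingScreenshot = callback; // picked up by onDrawFrame on the GL thread
}

@Override
public void onDrawFrame(GL10 gl) {
    // ... draw the whole frame first, so the buffer is fully filled ...
    ScreenshotCallback cb = pendingScreenshot;
    if (cb != null) {
        pendingScreenshot = null;
        cb.onScreenshot(readFramebufferIntoBitmap()); // e.g. the glReadPixels code above
    }
}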

Drawing bitmaps with alpha channel: please advise... (some solutions and speed issues)

I have a rather large number of small bitmaps (100+, about 40x40 each); each one has some opaque and some transparent parts, and I need to paint them respecting these areas.
The bitmaps are in ARGB format: 8 bits per RGB channel plus an 8-bit (256-level) alpha channel, standard as in the PNG format.
The only (working) way I found to draw them is the following approach:
Create a bitmap (ARGB_8888)
Fill the bitmap with the raw data
Extract the alpha layer from the bitmap
Create a BitmapShader (RGB_565) based on the original bitmap
Create a Paint for the bitmap which uses the created shader
Then paint the alpha mask using that Paint with the BitmapShader.
The initialization code is run only once, of course:
void initializeTile(int t) {
    // Allocate the bitmap:
    Bitmap original_data = Bitmap.createBitmap(tile_w, tile_h, Bitmap.Config.ARGB_8888);
    // Fill with raw data (this is actually native C++ code):
    populateBitmap(original_data);
    // Get the alpha mask:
    tile_mask[t] = original_data.extractAlpha();
    // Create an RGB_565 copy to back the shader:
    tile_data = original_data.copy(Bitmap.Config.RGB_565, false);
    // Create the shader:
    BitmapShader shader = new BitmapShader(tile_data, CLAMP, CLAMP);
    // Create the paint:
    tile_paint[t] = new Paint();
    tile_paint[t].setDither(true);
    tile_paint[t].setAntiAlias(true);
    tile_paint[t].setFilterBitmap(true);
    tile_paint[t].setShader(shader);
}
And the painting code is as simple as possible; it's in the main draw loop:
void paintTile(int t) {
    canvas.drawBitmap(tile_mask[t], tile_x[t], tile_y[t], tile_paint[t]);
}
Now, on phones like the Ideos (Android 2.2) it runs smooth and fine, but on other phones like the top-end Samsung Galaxy S II (Android 2.3) it's choppy and slow. This does not make much sense to me...
So, what do you think of this approach? Are there better, faster ways to achieve the same result?
And why do you think it's so slow on modern, fast hardware? Is there any way to improve it?
OK, after some work I found a better solution. I cannot answer my own question, so please do if you know more than I do.
But in case more people need this, I am posting my new solution, which is much faster albeit a bit more complicated. The key idea is to use the shader approach ONLY during initialization and not for painting.
To do this, I create a new bitmap which will contain the "clipped" bitmap (with all the transparent areas cleared) using the shader approach, then paint that clipped bitmap without any shader in the draw code.
void initializeTile(int t) {
    // Allocate the bitmap:
    Bitmap original_data = Bitmap.createBitmap(tile_w, tile_h, Bitmap.Config.ARGB_8888);
    // Fill with raw data (this is actually native C++ code):
    populateBitmap(original_data);
    // Now make a new bitmap to be clipped:
    Bitmap clipped_data = Bitmap.createBitmap(tile_w, tile_h, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(clipped_data);
    Paint clip_paint = new Paint();
    clip_paint.setDither(true);
    clip_paint.setAntiAlias(true);
    clip_paint.setFilterBitmap(true);
    clip_paint.setShader(new BitmapShader(original_data, CLAMP, CLAMP));
    // Paint the clipped bitmap:
    canvas.drawBitmap(tile_mask[t], 0, 0, clip_paint);
    // Use the clipped bitmap as the original bitmap:
    tile_data[t] = clipped_data;
}
And also the drawing code:
void paintTile(int t) {
    canvas.drawBitmap(tile_data[t], tile_x[t], tile_y[t], null);
}
Overall, this is much faster.
Still, it's unclear to me WHY Android would not paint my alpha-channelled bitmaps properly without all this mess!

android fast pixel access and manipulation

I'm trying to port an emulator that I have written in Java to Android. Things have been going nicely; I was able to port most of my code with minor changes. However, due to how emulation works, I need to render the image at the pixel level.
On desktop Java I use
int[] pixelsA = ((DataBufferInt) src.getRaster().getDataBuffer()).getData();
which allows me to get a reference to the pixel buffer and update it on the fly (minimizing object creation).
Currently this is what my emulator for Android does every frame:
@Override
public void onDraw(Canvas canvas)
{
    buffer = Bitmap.createBitmap(pixelsA, 256, 192, Bitmap.Config.RGB_565);
    canvas.drawBitmap(buffer, 0, 0, null);
}
pixelsA is an int[] array that contains all the colour information, so every frame it has to create a bitmap object by doing
buffer = Bitmap.createBitmap(pixelsA, 256, 192, Bitmap.Config.RGB_565);
which I believe is quite expensive and slow.
Is there any way to draw pixels efficiently with canvas?
One quite low-level method, but it works fine for me (with native code):
Create a Bitmap object as big as your visible screen.
Also create a View object and implement its onDraw method.
Then in native code you load the libjnigraphics.so library and look up the functions AndroidBitmap_lockPixels and AndroidBitmap_unlockPixels.
These functions are defined in the Android source in bitmap.h.
Then you call lock/unlock on the bitmap, receiving the address of its raw pixels. You must interpret the RGB format of the pixels according to what it really is (16-bit 565 or 32-bit 8888).
After changing the content of the bitmap, you present it on screen by calling View.invalidate() on your View. In its onDraw, blit your bitmap into the given Canvas.
This method is very low-level and dependent on the actual implementation of Android; however, it's very fast, and you may get 60 fps with no problem.
bitmap.h has been part of the Android NDK since platform version 8, so this IS the official way to do this from Android 2.2 onwards.
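A minimal Java-side sketch of this approach (the native half that calls AndroidBitmap_lockPixels/AndroidBitmap_unlockPixels is left out; EmulatorView and nativeRender are illustrative names, not part of any API):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

// Hypothetical view that shares one Bitmap with native code.
public class EmulatorView extends View {
    private final Bitmap frame;

    public EmulatorView(Context context, int width, int height) {
        super(context);
        frame = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    }

    // Implemented in C/C++: locks the bitmap, writes the pixels, unlocks it.
    private native void nativeRender(Bitmap target);

    public void renderFrame() {
        nativeRender(frame);   // fill the pixels via the NDK bitmap API
        postInvalidate();      // schedule onDraw on the UI thread
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(frame, 0, 0, null); // blit the shared bitmap
    }
}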
You can use the Canvas.drawBitmap() overload that takes the pixel array directly and avoids creating a Bitmap each time, or even, as a last resort, draw the pixels one by one with drawPoint().
Don't recreate the bitmap every single time. Try something like this:
Bitmap buffer = null;

@Override
public void onDraw(Canvas canvas)
{
    if (buffer == null) buffer = Bitmap.createBitmap(256, 192, Bitmap.Config.RGB_565);
    // copyPixelsFromBuffer copies raw data in the bitmap's own format,
    // so pixelsA must already hold RGB_565-packed pixel data here.
    buffer.copyPixelsFromBuffer(IntBuffer.wrap(pixelsA));
    canvas.drawBitmap(buffer, 0, 0, null);
}
EDIT: as pointed out, you need to update the pixel buffer. And the bitmap must be mutable for that to happen.
If pixelsA is already an array of pixels (which is what I would infer from your statement about it containing colours), then you can just render them directly, without converting, with:
canvas.drawBitmap(pixelsA, 0, 256, 0, 0, 256, 192, false, null);

Android OpenGL ES 2.0 -- glReadPixels() and glTexImage2D() drawing a black texture?

I'm working on some Android code for caching and redrawing a framebuffer object's color buffer between the loss and recreation of EGL contexts. Development is primarily happening on a Xoom tablet running Honeycomb. Anyway, what I'm trying to do is store the result of calling glReadPixels() on the FBO in a direct ByteBuffer, then use that buffer with glTexImage2D() and draw it back into the (now cleared) framebuffer. All of this seems to work fine: the ByteBuffer contains the right values ([-1, 0, 0, -1] etc. for a pixel, owing to Java's lack of unsigned bytes), no GL errors seem to be reported, and the quad is drawn to the right part of the screen (currently the top-left quarter of the framebuffer, for testing purposes).
However, no matter what I try, glTexImage2D() always outputs a plain black texture. I've had some issues with this before — when displaying Bitmaps, I eventually gave up trying to use the basic GLES20.glTexImage2D() with Buffers and skipped to using GLUtils.glTexImage2D(), which processes the Bitmap for you. Unfortunately, that's less of an option here (I did actually try converting the ByteBuffer to a Bitmap so I could use GLUtils, without much success), so I've really run out of ideas.
Can anyone think of anything that could be causing glTexImage2D() to not correctly process a perfectly good ByteBuffer? Any and all suggestions would be welcome.
ByteBuffer pixelBuffer;

void storePixels() {
    try {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbuf);
        pixelBuffer = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixelBuffer);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        gfx.checkGlError("store Pixels");
    } catch (OutOfMemoryError e) {
        pixelBuffer = null;
    }
}

void redrawPixels() {
    GLES20.glBindFramebuffer(GL20.GL_FRAMEBUFFER, fbuf);
    int[] texId = new int[1];
    GLES20.glGenTextures(1, texId, 0);
    int bufferTex = texId[0];
    GLES20.glBindTexture(GL20.GL_TEXTURE_2D, bufferTex);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, repeatX ? GL20.GL_REPEAT
            : GL20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, repeatY ? GL20.GL_REPEAT
            : GL20.GL_CLAMP_TO_EDGE);
    GLES20.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_RGBA, width, height, 0, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixelBuffer);
    gfx.drawTexture(bufferTex, width, height, Transform.IDENTITY, width/2, height/2, false, false, 1);
    GLES20.glDeleteTextures(1, IntBuffer.wrap(new int[] {bufferTex}));
    pixelBuffer = null;
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}
gfx.drawTexture() builds a quad and draws it to the currently bound framebuffer, by the way. That code has been well-tested in other parts of my project — it shouldn't be the issue here.
For those of you playing along at home, this code is in fact totally valid. Remember when I swore blind that gfx.drawTexture() "has been well-tested and shouldn't be the issue here"? Yeah, it was totally the issue. I was buffering vertices to draw without actually flushing them through a glDrawElements() call. Whoops.
