I have a basic OpenGL ES 2.0 application running on a GLSurfaceView that has been added as the content view:
GLSurfaceView view = new GLSurfaceView(this);
view.setRenderer(new OpenGLRenderer());
setContentView(view);
Basically I am trying to get a screenshot with the following method:
private static Bitmap getScreenshot(View v)
{
    Bitmap b = Bitmap.createBitmap(v.getWidth(), v.getHeight(),
            Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(b);
    v.draw(c);
    return b;
}
But it seems the bitmap is transparent. The view I am passing in is:
View content = m_rootActivity.getWindow().getDecorView().getRootView();
Does anyone have a solution for capturing a screenshot of OpenGL ES content without resorting to the onDrawFrame method, which I have seen in other solutions? Maybe by passing in a reference to the renderer? Any help would be appreciated.
Update:
I was exploring rendering the bitmap from onDrawFrame (Display black screen while capture screenshot of GLSurfaceView).
However, I was wondering if there is a better solution, since I won't have access to the renderer or the surface view. I can pass in their references, but I would prefer a solution where we can just capture the entire view, as mentioned earlier.
See this question.
You can get a screenshot with:
@Override
public void onDrawFrame(GL10 gl) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    // draw ...
    if (takeScreenshot) {
        int screenshotSize = width * height;
        ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
        bb.order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        int[] pixelsBuffer = new int[screenshotSize];
        bb.asIntBuffer().get(pixelsBuffer);
        bb = null;
        for (int i = 0; i < screenshotSize; ++i) {
            // the alpha and green channels' positions are preserved while the red and blue are swapped
            pixelsBuffer[i] = (pixelsBuffer[i] & 0xff00ff00)
                    | ((pixelsBuffer[i] & 0x000000ff) << 16)
                    | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
        }
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        // the offset and negative stride flip the image vertically while copying
        bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0, width, height);
        // save bitmap...
    }
}
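For the "// save bitmap..." step, one common option (the output path here is just an example) is to compress the bitmap to a PNG file:
FileOutputStream out = null;
try {
    out = new FileOutputStream("/sdcard/screenshot.png");
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (out != null) {
        try { out.close(); } catch (IOException ignored) {}
    }
}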
There is no way at all to get the buffer data from the GPU back to the CPU without reading the pixels. You should understand that this is not the same pipeline as with views: the data in the buffer is filled on the GPU and is then sent directly to the display, or nowhere at all.
So, that being said, the answer is no: you cannot simply "get a screenshot", as the concept of a screenshot does not even exist here. There is only raw (usually RGBA) data in the GPU buffer, and that data must be filled with everything you draw before it represents the full frame. If you were simply to read it at an arbitrary time, the buffer might just have been cleared, it might be half drawn, or, if you are lucky, fully drawn.
That is the reason you take those screenshots inside the drawing pipeline: you must ensure the buffer is filled with the data.
There are generally two smart ways of intercepting the drawing pipeline, best done just before presenting the buffer. One is to pass a flag indicating that a screenshot should be taken; the engine then creates the screenshot itself, which is nice since it has all the buffer data on the fly. The second is to create a callback handle through which the engine notifies its owner of every fully drawn frame; the owner can then do some additional drawing, create a screenshot, count frames per second, and so on. This again has many bonuses, but you do need at least the buffer dimensions to do anything with the buffer.
Also note that reading the pixels back is extremely slow, and in some cases the image you receive will be upside-down.
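For illustration, a minimal sketch of the callback variant (the listener interface and names are made up for this example, not an existing API):
// the owner registers a listener; the renderer fires it once the frame is fully
// drawn, so a glReadPixels() call inside the callback sees complete buffer data
interface FrameDrawnListener {
    void onFrameDrawn(int width, int height); // invoked on the GL thread
}

class ScreenshotRenderer implements GLSurfaceView.Renderer {
    private volatile FrameDrawnListener listener;
    private int width, height;

    public void setFrameDrawnListener(FrameDrawnListener l) {
        listener = l;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) { }

    @Override
    public void onSurfaceChanged(GL10 gl, int w, int h) {
        width = w;
        height = h;
        GLES20.glViewport(0, 0, w, h);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // ... draw the scene ...
        FrameDrawnListener l = listener;
        if (l != null) {
            l.onFrameDrawn(width, height); // the buffer is guaranteed to be filled here
        }
    }
}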
Related
Using JavaCV to consume a multicast stream, I want to render the video frames in a GLSurfaceView. The frames are grabbed using the FFmpegFrameGrabber class; I have successfully output the captured frames to the sdcard and to a non-GL surface for visual debugging. I have looked all over for a solution or clue, to no avail; here is the section of code where help is needed:
// get the frame
opencv_core.IplImage img = capture.grab();
if (img != null) {
    opencv_core.CvMat rgbaImg = opencv_core.CvMat.create(height, width, CV_8U, 4);
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    // convert colorspace
    cvCvtColor(img, rgbaImg, CV_BGR2RGBA);
    bitmap.copyPixelsFromBuffer(rgbaImg.getByteBuffer());
    Rect rect = new Rect(x, y, width, height);
    Canvas c = surface.lockCanvas(rect);
    c.drawBitmap(bitmap, 0, 0, null);
    surface.unlockCanvasAndPost(c);
    if (bitmap != null) {
        bitmap.recycle();
    }
    if (rgbaImg != null) {
        rgbaImg.release();
    }
}
Also if there is a more optimal way to do anything above, let me know.
Edit: Since there's not much action on the first part of this question, would a "workaround" of rendering on the SurfaceTexture that is used to create the Surface be a possibility instead?
SurfaceTexture surfaceTexture = new SurfaceTexture(textureId);
surfaceTexture.setOnFrameAvailableListener(this);
surface = new Surface(surfaceTexture);
Note: I am forced to stick with Android 4.2.2 for now.
ffmpeg with wild video formats:
For your first method, you can speed it up by sharing the bitmap and the other resources across frames; that will eliminate the per-frame memory allocations and speed it up a lot.
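A rough sketch of that reuse, built from the question's own code (assuming the frame size stays constant):
// allocate once, outside the frame loop
opencv_core.CvMat rgbaImg = opencv_core.CvMat.create(height, width, CV_8U, 4);
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

// per frame: reuse both objects instead of creating and recycling them
opencv_core.IplImage img = capture.grab();
if (img != null) {
    cvCvtColor(img, rgbaImg, CV_BGR2RGBA);
    bitmap.copyPixelsFromBuffer(rgbaImg.getByteBuffer());
    // ... lock the canvas and draw the bitmap as before ...
}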
As for rendering FFmpeg results to a GLSurfaceView, you should look here:
(I have used both JJmpeg and JavaCV)
https://code.google.com/p/jjmpeg/source/browse/#svn%2Fbranches%2Fffmpeg-0.10-android%2Fjjmpeg%2Fsrc%2Fau%2Fnotzed%2Fjjmpeg%2Fmediaplayer
Most of the gems are here: (GLESVideoRenderer.onDrawFrame method)
https://code.google.com/p/jjmpeg/source/browse/branches/ffmpeg-0.10-android/jjmpeg/src/au/notzed/jjmpeg/mediaplayer/GLESVideoRenderer.java
The basic idea is to load the frames into a 2D texture array and then draw it.
You can modify FFmpegFrameGrabber into a renderer for the GLSurfaceView; frame rates will vary between devices.
If you know the video format:
What you really should do, since you are already on Android 4.2.2, is to use MediaCodec from the SDK and push the frames directly onto a Surface.
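A minimal sketch of that route, assuming the stream is H.264 (the MIME type and dimensions here are illustrative):
// decode directly to the Surface; decoded frames never pass through Java memory
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
MediaCodec codec = MediaCodec.createDecoderByType("video/avc");
codec.configure(format, surface, null, 0); // 'surface' as in the question's code
codec.start();
// feed demuxed input via dequeueInputBuffer()/queueInputBuffer(), then render
// each decoded frame with releaseOutputBuffer(index, true)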
I am new at all of this: asking questions, Android development, and the NDK. I hope I am clear enough.
I need to be able to create multiple surfaces/bitmaps.
e.g.
Surface s = new Surface (width, height)
They should be able to copy between each other:
s->copy (s2) - copy surface s to s2 (including format conversion between RGBA and alpha-only surfaces, and resizing/scaling)
use fill (x, y, w, h, color) - fill a rectangle with a color (something like glClear)
As far as I understand, you have only one ANativeWindow, which is supplied to you via the android_app->window variable, and with EGL I can create up to one EGLSurface. I need to be able to create many surfaces (around 100, for instance). How is this possible? And how do I then blit all of them to the window framebuffer?
There is also android/bitmap.h, but I am not getting exactly how to work with it. It does not seem to offer an API to create a surface, just to access an already created one, or something like that?
You can create a bitmap through JNI calls:
// set up the Bitmap class
jclass bitmap_class = (jclass) env->FindClass("android/graphics/Bitmap");
// set up the createBitmap method
jmethodID bitmap_create_method = env->GetStaticMethodID(bitmap_class, "createBitmap",
        "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
// get_enum_value returns a jobject corresponding in our case to Bitmap.Config.ARGB_8888
// (the implementation is irrelevant here)
jobject bitmap_config_ARGB = get_enum_value("android/graphics/Bitmap$Config", "ARGB_8888");
// Do not forget to call DeleteLocalRef where appropriate.
// Create the bitmap by calling the createBitmap method, i.e.
// Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
jobject bitmap = env->CallStaticObjectMethod(bitmap_class, bitmap_create_method,
        width, height, bitmap_config_ARGB);
// at the end, of course, clean-up must be done
env->DeleteLocalRef(bitmap);
You can access some bitmap properties and the raw pixels through the API in android/bitmap.h.
AndroidBitmap_getInfo gives information about the format (ARGB_8888 or alpha-only), the dimensions, and the stride (pitch).
AndroidBitmap_lockPixels gives you the raw pixels. After you have finished manipulating the pixels, you MUST call AndroidBitmap_unlockPixels.
To implement fill (color, dimensions), JNI can help. This can be written through JNI calls (I will use Java here because it is easier for me to write and clearer to read):
canvas.save();
canvas.setBitmap(bitmap);
canvas.clipRect(left, top, right, bottom, Region.Op.REPLACE);
canvas.drawColor(color, PorterDuff.Mode.SRC);
canvas.restore();
To copy one bitmap over another, copy (src_bitmap, src_rect, dest_rect):
canvas.save();
canvas.setBitmap(dest_bitmap);
canvas.clipRect(left, top, right, bottom, Region.Op.REPLACE);
canvas.drawBitmap(src_bitmap, src_rect, dest_rect, null);
canvas.restore();
You can create Bitmaps and use the jnigraphics library (android/bitmap.h), or you can use multiple EGL textures.
Using Bitmaps, you'll have to implement fill yourself, because Bitmap only has pixel-based getters and setters (see setPixels(..)).
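For example, a fill can be built on top of a Canvas wrapping the Bitmap (a minimal sketch; the helper name is made up):
// fill a rectangle of a mutable Bitmap with a solid color
static void fill(Bitmap bitmap, int left, int top, int right, int bottom, int color) {
    Canvas canvas = new Canvas(bitmap);
    canvas.clipRect(left, top, right, bottom);
    canvas.drawColor(color, PorterDuff.Mode.SRC); // overwrite, including alpha
}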
I come from the Qt world and I am porting an application to Android. I am a bit confused; I have been banging my head for a few days now on something that must be so trivial that I cannot find why it's not working.
Some background: I have a C++ engine which I use through the NDK and JNI. This engine creates some bitmaps and passes them to the Java side; the Java side must display them on a View and let the user interact with them (drag and such).
The engine works properly, because I use it under Qt with full success. This is the workflow:
1- Java loads a big Bitmap from a custom data file (the C++ engine expects it to be in ARGB format, but it's compressed JPG data):
Bitmap.Config fmt = Bitmap.Config.ARGB_8888;
Bitmap bitmap = BitmapFactory.decodeByteArray(buffer, 0, size).copy(fmt, false);
2- Initialize the C++ engine, passing the bitmap. The C++ engine "breaks" the bitmap into smaller tiles. For each tile it builds a rather complex alpha mask and stores it in the first byte of each pixel (the "a" byte). This alpha mask only uses two values: 0xFF for opaque and 0x00 for transparent.
init_C_engine( this.fullImage );
3- The Java side then allocates all the tile bitmaps. I do it in two steps because before init I don't know what size the tiles will be. The engine populates the tile_width and tile_height arrays:
Bitmap.Config fmt = Bitmap.Config.ARGB_8888;
for (int t = 0; t < this.puzzle_size; t++) {
    tile_data[t] = Bitmap.createBitmap(tile_width[t], tile_height[t], fmt);
}
4- Last step: inside the C++ engine, all the tile bitmaps are filled:
for (int n = 0; n < nBitmaps; n++)
{
    jobject bitmap = env->GetObjectArrayElement(bitmaps, n);
    AndroidBitmap_getInfo(env, bitmap, &info);
    AndroidBitmap_lockPixels(env, bitmap, reinterpret_cast<void **>(&pixels));
    game->getTileBitmap(n, (unsigned char *) pixels);
    AndroidBitmap_unlockPixels(env, bitmap);
    env->SetObjectArrayElement(bitmaps, n, bitmap);
}
Now, in my custom View:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    canvas.drawColor(Color.BLACK);
    for (int tile = 0; tile < board.nTiles; tile++) {
        canvas.drawBitmap(tile_data[tile], tile_x[tile], tile_y[tile], paint);
    }
}
What I expect is to see tiles with transparent areas on my View. What I get instead is weird behaviour: on the black background I see the ENTIRE tile, as if the alpha bytes were all set to opaque; but when I move the tiles on top of one another, the "transparent" areas get combined in some strange way, as if the colors were XORed or multiplied somehow! When I move one tile over another I can see the areas where the alpha bytes are set to transparent, but the colors get mangled instead of being transparent!
Basically, I expect pixels whose alpha is set to 0 to be drawn as transparent... I looked on the internet but could not find anything useful to help me out...
Does somebody have ideas? Anything will be appreciated!
Thanks.
Shouldn't you use the index t instead of tile inside the for loop in onDraw? Like this:
canvas.drawBitmap(tile_data[t], tile_x[t], tile_y[t], paint);
I'm trying to port an emulator that I have written in Java to Android. Things have been going nicely; I was able to port most of my code with minor changes. However, due to how emulation works, I need to render the image at the pixel level.
For desktop Java, I use
int[] pixelsA = ((DataBufferInt) src.getRaster().getDataBuffer()).getData();
which allows me to get a reference to the pixel buffer and update it on the fly (minimizing object creation).
Currently, this is what my emulator for Android does for every frame:
@Override
public void onDraw(Canvas canvas)
{
    buffer = Bitmap.createBitmap(pixelsA, 256, 192, Bitmap.Config.RGB_565);
    canvas.drawBitmap(buffer, 0, 0, null);
}
pixelsA is an int[] array containing all the colour information, so every frame a new Bitmap object has to be created by doing
buffer = Bitmap.createBitmap(pixelsA, 256, 192, Bitmap.Config.RGB_565);
which I believe is quite expensive and slow.
Is there any way to draw pixels efficiently with canvas?
One quite low-level method, but it works fine for me (with native code):
Create a Bitmap object as big as your visible screen. Also create a View object and implement its onDraw method.
Then, in native code, load the libjnigraphics.so library and look up the functions AndroidBitmap_lockPixels and AndroidBitmap_unlockPixels.
These functions are defined in the Android source in bitmap.h.
Then call lock/unlock on the bitmap, receiving the address of the raw pixels. You must interpret the RGB format of the pixels according to what it really is (16-bit 565 or 32-bit 8888).
After changing the content of the bitmap, you want to present it on screen:
call View.invalidate() on your View, and in its onDraw, blit your bitmap into the given Canvas (a sketch of this Java side follows below).
This method is very low-level and dependent on the actual implementation of Android; however, it's very fast, and you can get 60 fps no problem.
bitmap.h has been part of the Android NDK since platform version 8, so this IS an official way to do this from Android 2.2 onward.
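A minimal sketch of that Java side, assuming the native code writes into the shared Bitmap between frames (the class and field names are illustrative):
// a View that blits a Bitmap shared with native code; the native side fills
// the pixels via AndroidBitmap_lockPixels/AndroidBitmap_unlockPixels
public class FrameView extends View {
    private final Bitmap frame; // the same Bitmap object passed down to native code

    public FrameView(Context context, Bitmap frame) {
        super(context);
        this.frame = frame;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(frame, 0, 0, null); // blit the latest frame
    }
}
// after the native code updates the pixels, trigger a redraw with
// frameView.invalidate() (UI thread) or frameView.postInvalidate() (any thread)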
You can use the drawBitmap variant that takes a pixel array and avoids creating a Bitmap each time, or, as a last resort, draw the pixels one by one with drawPoint.
Don't recreate the bitmap every single time. Try something like this:
Bitmap buffer = null;

@Override
public void onDraw(Canvas canvas)
{
    if (buffer == null) buffer = Bitmap.createBitmap(256, 192, Bitmap.Config.RGB_565);
    // setPixels converts the ARGB ints in pixelsA into the bitmap's RGB_565 format;
    // the bitmap created above is mutable, which setPixels requires
    buffer.setPixels(pixelsA, 0, 256, 0, 0, 256, 192);
    canvas.drawBitmap(buffer, 0, 0, null);
}
EDIT: as pointed out, you need to copy the updated pixels into the bitmap each frame, and the bitmap must be mutable for that to happen.
If pixelsA is already an array of pixels (which is what I would infer from your statement about it containing colours), then you can render them directly, without converting, with:
canvas.drawBitmap(pixelsA, 0, 256, 0, 0, 256, 192, false, null);
I'm working on some Android code for caching and redrawing a framebuffer object's color buffer between the loss and recreation of EGL contexts. Development is primarily happening on a Xoom tablet running Honeycomb. Anyway, what I'm trying to do is store the result of calling glReadPixels() on the FBO in a direct ByteBuffer, then use that buffer with glTexImage2D() and draw it back into the (now cleared) framebuffer. All of this seems to work fine: the ByteBuffer contains the right values ([-1, 0, 0, -1] etc. for a pixel, owing to Java's inability to understand unsigned bytes), no GL errors seem to be thrown, and the quad is drawn to the right part of the screen (currently the top-left quarter of the framebuffer, for testing purposes).
However, no matter what I try, glTexImage2D() always outputs a plain black texture. I've had some issues with this before: when displaying Bitmaps, I eventually gave up trying to use the basic GLES20.glTexImage2D() with Buffers and skipped to using GLUtils.texImage2D(), which processes the Bitmap for you. Unfortunately, that's less of an option here (I did actually try converting the ByteBuffer to a Bitmap so I could use GLUtils, without much success), so I've really run out of ideas.
Can anyone think of anything that could be causing glTexImage2D() to not correctly process a perfectly good ByteBuffer? Any and all suggestions would be welcome.
ByteBuffer pixelBuffer;

void storePixels() {
    try {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbuf);
        pixelBuffer = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixelBuffer);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        gfx.checkGlError("store Pixels");
    } catch (OutOfMemoryError e) {
        pixelBuffer = null;
    }
}
void redrawPixels() {
    GLES20.glBindFramebuffer(GL20.GL_FRAMEBUFFER, fbuf);
    int[] texId = new int[1];
    GLES20.glGenTextures(1, texId, 0);
    int bufferTex = texId[0];
    GLES20.glBindTexture(GL20.GL_TEXTURE_2D, bufferTex);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S,
            repeatX ? GL20.GL_REPEAT : GL20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T,
            repeatY ? GL20.GL_REPEAT : GL20.GL_CLAMP_TO_EDGE);
    GLES20.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_RGBA, width, height, 0,
            GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixelBuffer);
    gfx.drawTexture(bufferTex, width, height, Transform.IDENTITY, width / 2, height / 2, false, false, 1);
    GLES20.glDeleteTextures(1, IntBuffer.wrap(new int[] { bufferTex }));
    pixelBuffer = null;
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}
gfx.drawTexture() builds a quad and draws it to the currently bound framebuffer, by the way. That code has been well-tested in other parts of my project, so it shouldn't be the issue here.
For those of you playing along at home, this code is in fact totally valid. Remember when I swore blind that "gfx.drawTexture() has been well-tested and shouldn't be the issue here"? Yeah, it was totally the issue there. I was buffering vertices to draw without actually flushing them through a glDrawElements() call. Whoops.