How to transfer textures from one OpenGL context to another - android

Background:
The Android native camera app uses an OpenGL ES 1.0 context to display the camera preview and gallery pictures. Now I want to add a live filter to the native camera preview.
Adding a live filter to my own camera app's preview is simple --- just use OpenGL ES 2.0 to do the image processing and display. But OpenGL ES 1.0 doesn't support this kind of image processing, and it is what the Android native camera app happens to use for display. *So I now want to create a new GL context based on OpenGL ES 2.0 for image processing and pass the processed image to the other GL context, based on OpenGL ES 1.0, for display.*
Problem:
The problem is how to transfer the processed image from the GL-context-process (based on OpenGL ES 2.0) to the GL-context-display (based on OpenGL ES 1.0). I have tried using an FBO: first copy the image pixels out of the texture in GL-context-process, then set them back into another texture in GL-context-display. But copying pixels out of a texture is quite slow, typically hundreds of milliseconds. That is far too slow for a camera preview.
*Is there a better way to transfer textures from one GL context to another? Especially when one GL context is based on OpenGL ES 2.0 while the other is based on OpenGL ES 1.0.*

I have found a solution using EGLImage. Just in case someone finds it useful:
Thread #1 that loads a texture:
EGLContext eglContext1 = eglCreateContext(eglDisplay, eglConfig, EGL_NO_CONTEXT, contextAttributes);
EGLSurface eglSurface1 = eglCreatePbufferSurface(eglDisplay, eglConfig, NULL); // pbuffer surface is enough, we're not going to use it anyway
eglMakeCurrent(eglDisplay, eglSurface1, eglSurface1, eglContext1);
int textureId; // texture to be used on thread #2
// ... OpenGL calls skipped: create and specify texture
//(glGenTextures, glBindTexture, glTexImage2D, etc.)
glBindTexture(GL_TEXTURE_2D, 0);
EGLint imageAttributes[] = {
    EGL_GL_TEXTURE_LEVEL_KHR, 0, // mip map level to reference
    EGL_IMAGE_PRESERVED_KHR, EGL_FALSE,
    EGL_NONE
};
EGLImageKHR eglImage = eglCreateImageKHR(eglDisplay, eglContext1, EGL_GL_TEXTURE_2D_KHR, reinterpret_cast<EGLClientBuffer>(textureId), imageAttributes);
Thread #2 that displays 3D scene:
// it will use eglImage created on thread #1 so make sure it has access to it + proper synchronization etc.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
// texture parameters are not stored in EGLImage so don't forget to specify them (especially when no additional mip map levels will be used)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImage);
// the texture is now set up as if you had called glTexImage2D on it
Reference:
http://software.intel.com/en-us/articles/using-opengl-es-to-accelerate-apps-with-legacy-2d-guis
https://groups.google.com/forum/#!topic/android-platform/qZMe9hpWSMU
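Worth noting as an alternative (this is not part of the answer above): if you can create both contexts yourself, and the implementation allows sharing between an ES 1.x and an ES 2.0 context, EGL context sharing lets the two contexts use the same texture names directly, with no copy and no EGLImage. A minimal Java/EGL14 sketch, where displayConfig and processConfig are assumed to be EGLConfigs already chosen with eglChooseConfig():
EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] gles1Attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 1, EGL14.EGL_NONE };
int[] gles2Attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
// Display context (ES 1.x), created first with no share context.
EGLContext displayContext = EGL14.eglCreateContext(display, displayConfig, EGL14.EGL_NO_CONTEXT, gles1Attribs, 0);
// Processing context (ES 2.0), sharing texture objects with the display context.
EGLContext processContext = EGL14.eglCreateContext(display, processConfig, displayContext, gles2Attribs, 0);
// A texture filled in processContext can then be bound in displayContext; call glFinish()
// (or use a fence) in the producing context before the other context samples it.
This only helps if you can change how both contexts are created; otherwise the EGLImage route above remains the practical option.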

Related

SurfaceTexture in Android plugin doesn't work in Unity

I can't get the texture tied to a SurfaceTexture to display in Unity.
Update 4: Based on the pipeline in update 1 (surface -> external texture via SurfaceTexture -> FBO -> texture 2D) I know the SurfaceTexture isn't properly converting its surface to a texture. I can get correctly drawn pictures from its surface via PixelCopy, and I can confirm my FBO-drawing-to-texture2D pipeline works with some test colors. So the question is: why can't the SurfaceTexture convert its surface to a texture?
I generate a Texture in Java and pass its pointer back to Unity:
public void initGLTexture() {
    Log.d("Unity", "initGLTexture");
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    mTextureId = textures[0];
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureId);
    GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
}
I create a SurfaceTexture from the id (in Java):
mSurfaceTexture = new SurfaceTexture(mTextureId);
mSurfaceTexture.setDefaultBufferSize(512, 512);
I use a third-party library, GeckoView, to render onto the Surface of the SurfaceTexture. I call the following method from Unity's OnRenderObject() to keep all GL rendering on the same thread:
mSurfaceTexture.updateTexImage();
I know the above code allows proper drawing onto the surface.
I call the following in Unity to load the texture:
_imageTexture2D = Texture2D.CreateExternalTexture(
    512, 512, TextureFormat.RGBA32, false, true, (IntPtr) mTextureId);
_rawImage.texture = _imageTexture2D;
Why does the RawImage with the texture applied show only this sprite-looking thing, which should be a webpage?
Update 1: So I've been working on this hypothesis: use Gecko to draw to the Surface, and use a SurfaceTexture to render that surface to a GL_TEXTURE_EXTERNAL_OES. Since I can't display this in Unity (not sure why), I am drawing this texture to a framebuffer and copying the pixels in the framebuffer to a GL_TEXTURE_2D. I am getting a web page in the texture_2d (in the emulator, with an ImageView and glReadPixels). However, when I import the work into Unity to test whether the pipeline is okay thus far, I just get a black screen. I CAN get images of the surface via the PixelCopy API.
Here is my FBO overview code - my rendering code comes from grafika's texture2D program:
// bind display buffer
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferId);
GlUtil.checkGlError("glbindframebuffer");
// unbind external texture to make sure it's fresh
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
GlUtil.checkGlError("glunbindexternaltex");
// bind source texture (done in drawFrame as well )
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mOffscreenTextureId);
GlUtil.checkGlError("glBindFramebuffer");
// draw to frame buffer
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // again, only really need to
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); // clear pixels outside rect
mFullScreen.drawFrame(mOffscreenTextureId, mIdentityMatrix);
// unbind source texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D,0);
GlUtil.checkGlError("glBindTexture2d");
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
GlUtil.checkGlError("glunbindexternaltex");
// make sure we're still bound to fbo
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferId);
GlUtil.checkGlError("glBindTexture2d");
// copy pixels from frame buffer to display texture
GLES20.glCopyTexImage2D(GLES20.GL_TEXTURE_2D,0,GLES20.GL_RGBA,0,0,512,512,0);
// read pixels from the display buffer to imageview for debugging
BitmapDisplay.mBitmap = SavePixels(0,0,512,512);
// unbind texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D,0);
Here are my Player Settings > Other values (screenshot omitted).
Update 2: Possible pipeline to try: call the draw function of the external texture to FBO (attached to Unity's texture_2d) in C++ via this interface.
Update 3: Calling the Java functions responsible for drawing the texture from the SurfaceTexture to the FBO to Unity's texture from native code via GL.IssuePluginEvent produces a black texture, as in the first update. It shows images in the emulator but not in Unity.
I had to do a similar task a couple of months ago and found out that the correct pipeline is creating a texture in Unity, obtaining a native pointer in C and finally updating it in the Java layer.
Please take a look at this sample project, it should give you a different perspective.
https://github.com/robsondepaula/unity-android-native-camera
Regards
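For illustration, a rough Java-side sketch of that pipeline (names here are hypothetical, not taken from the linked project): Unity creates the Texture2D and hands its GetNativeTexturePtr() down to the plugin as unityTexId; fboId is a framebuffer created once on the GL thread; drawExternalTexture() stands in for a full-screen quad drawn with a samplerExternalOES shader (as in grafika). The method should run on Unity's rendering thread, e.g. from a GL.IssuePluginEvent callback:
void renderToUnityTexture(SurfaceTexture surfaceTexture, int oesTexId, int unityTexId, int fboId) {
    surfaceTexture.updateTexImage();   // latch the newest frame written to the Surface
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, unityTexId, 0);        // render into Unity's texture
    GLES20.glViewport(0, 0, 512, 512);
    drawExternalTexture(oesTexId);     // hypothetical helper: full-screen quad, samplerExternalOES
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);  // restore Unity's framebuffer binding
}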

how to assign one texture to another efficiently in OpenGL ES for Android

I want to copy texture1 to texture2. The background is that I generate 15 empty texture ids and bind them to GLES11Ext.GL_TEXTURE_EXTERNAL_OES; the following is my code:
int[] textures = new int[15];
GLES20.glGenTextures(15, textures, 0);
GlUtil.checkGlError("glGenTextures");
for(int i=0;i<15;i++)
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[i]);
Then I always transfer the camera preview to textures[0]. I want to copy the texture from textures[0] to textures[1] in order to keep the frame content at timestamp 1, copy the texture from textures[0] to textures[2] to keep the frame content at timestamp 2, and so on. In effect I want to buffer some texture data in the GPU and render some of it later. So I want to know: is there any way to do this? And can I just use textures[2] = textures[0] to copy the texture data?
There isn't a very direct way to copy texture data in ES 2.0. The easiest way is probably using glCopyTexImage2D(). To use it, you have to create an FBO and attach the source texture to it. Say srcTexId is the id of the source texture and dstTexId the id of the destination texture:
GLuint fboId = 0;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, srcTexId, 0);
glBindTexture(GL_TEXTURE_2D, dstTexId);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
That being said, from your description, I don't believe that this is really what you should be doing, for the following reasons:
I don't think copying texture data as shown above will work for the external textures you are using.
Copying texture data will always be expensive, and sounds completely unnecessary to solve your problem.
It sounds like you want to keep the 15 most recent camera images. To do this, you can simply track which of your 15 textures contains the most recent image, and treat the list of the 15 texture ids as a circular buffer.
Say initially you create your texture ids:
int[] textures = new int[15];
GLES20.glGenTextures(15, textures, 0);
int newestIdx = 0;
Then every time you receive a new frame, you write it to the next entry in your list of texture ids, wrapping around at 15:
newestIdx = (newestIdx + 1) % 15;
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[newestIdx]);
// Capture new frame into currently bound texture.
Then, every time you want to use the ith frame, with 0 referring to the most recent, 1 to the frame before that, etc., you bind it with:
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[(newestIdx - i + 15) % 15]);
So the textures never get copied. You just keep track of which texture contains which frame, and access them accordingly.
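Put together, a small helper along these lines would do it (a sketch only; the capture and draw calls are assumed to happen elsewhere):
class FrameRing {
    private final int[] textures = new int[15];
    private int newestIdx = 0;

    FrameRing() {
        GLES20.glGenTextures(15, textures, 0);
    }

    // Bind the slot that the next camera frame should be captured into.
    int bindForNewFrame() {
        newestIdx = (newestIdx + 1) % 15;
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[newestIdx]);
        return textures[newestIdx];
    }

    // Bind the i-th most recent frame (0 = newest, 1 = the frame before that, ...).
    void bindFrame(int i) {
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[(newestIdx - i + 15) % 15]);
    }
}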

Unexpected pixel data layout when reading from GraphicBuffer

I am currently working on a platform in the native Android framework where I use GraphicBuffer to allocate memory and then create an EGLImage from it. This is then used as a texture in OpenGL which I render to (with a simple fullscreen quad).
The problem is that when I read the rendered pixel data back from the GraphicBuffer, I expect it to be in a linear RGBA format in memory, but the result is a texture which contains three parallel smaller clones of the image with overlapping pixels. That description may not say much, but the point is that the actual pixel data makes sense while the memory layout seems to be something other than linear RGBA. I assume this is because the graphics driver stores the pixels in an internal format other than linear RGBA.
If I render to a standard OpenGL texture and read with glReadPixels everything works fine, so I assume the problem lies with my custom memory allocation with GraphicBuffer.
If the reason is the drivers' internal memory layout, is there any way of forcing the layout to linear RGBA? I have tried most of the usage flags supplied to the GraphicBuffer constructor with no success. If not, is there a way to output the data differently in the shader to "cancel out" the memory layout?
I am building Android 4.4.3 for Nexus 5.
//Allocate graphicbuffer
outputBuffer = new GraphicBuffer(outputFormat.width, outputFormat.height, outputFormat.bufferFormat,
        GraphicBuffer::USAGE_SW_READ_OFTEN |
        GraphicBuffer::USAGE_HW_RENDER |
        GraphicBuffer::USAGE_HW_TEXTURE);
/* ... */
//Create EGLImage from graphicbuffer
EGLint eglImageAttributes[] = {EGL_WIDTH, outputFormat.width, EGL_HEIGHT, outputFormat.height,
        EGL_MATCH_FORMAT_KHR, outputFormat.eglFormat,
        EGL_IMAGE_PRESERVED_KHR, EGL_FALSE, EGL_NONE};
EGLClientBuffer nativeBuffer = outputBuffer->getNativeBuffer();
eglImage = _eglCreateImageKHR(display, EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, nativeBuffer, eglImageAttributes);
/* ... */
//Create output texture
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImage);
/* ... */
//Create target fbo
glGenFramebuffers(1, &targetFBO);
glBindFramebuffer(GL_FRAMEBUFFER, targetFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* ... */
//Read from graphicbuffer
const Rect lockBoundsOutput(quadRenderer->outputFormat.width, quadRenderer->outputFormat.height);
status_t statusgb = quadRenderer->getOutputBuffer()->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &result);
I managed to find the answer myself, and I was wrong all along. The simple reason was that although I was rendering a 480x1080 texture, the memory allocated was padded to 640x1080, so I just needed to remove the padding after each row and the output texture made sense.
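For illustration, the row-by-row de-padding looks roughly like this (a Java sketch; the original code is native C++). Here padded is assumed to be a ByteBuffer wrapping the locked GraphicBuffer memory, and stride is the padded row width reported by GraphicBuffer::getStride():
int width = 480, height = 1080, stride = 640, bpp = 4;    // sizes from this question
byte[] tight = new byte[width * height * bpp];            // tightly packed RGBA output
byte[] row = new byte[width * bpp];
for (int y = 0; y < height; y++) {
    padded.position(y * stride * bpp);   // start of each padded row
    padded.get(row);                     // read only the visible pixels of the row
    System.arraycopy(row, 0, tight, y * width * bpp, width * bpp);
}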

android read pixels from GL_TEXTURE_EXTERNAL_OES

I am trying to read pixels/data from an OpenGL texture which is bound to GL_TEXTURE_EXTERNAL_OES.
The reason for binding the texture to that target is because in order get live camera feed on android a SurfaceTexture needs to be created from an OpenGL texture which is bound to GL_TEXTURE_EXTERNAL_OES.
Since android uses OpenGL ES I can't use glGetTexImage() to read the image data.
Therefore I am attaching the texture to an FBO and then reading it back using glReadPixels(). This is my code:
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
//Attach 2D texture to this FBO
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_EXTERNAL_OES, cameraTexture, 0);
status("glFramebufferTexture2D() returned error %d", glGetError());
However I am getting error 1282 (GL_INVALID_OPERATION) for some reason.
I think this might be the problem:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_EXTERNAL_OES, cameraTexture, 0);
You should not attach the cameraTexture to the framebuffer; instead, you should generate a new texture with target GL_TEXTURE_2D:
glGenTextures(1, mTextureHandle, 0);
glBindTexture(GL_TEXTURE_2D, mTextureHandle[0]);
...
cameraTexture is the one you get from SurfaceTexture, and it is the source used for rendering. This new texture is the one you should render to (which could be used later in the rendering pipeline). Then do this:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, mTextureHandle[0], 0);
cameraTexture is then drawn into the framebuffer's attached texture using a simple shader program; bind the cameraTexture while that shader program is in use:
glBindTexture(GL_TEXTURE_EXTERNAL_OES, cameraTexture);
The GL_TEXTURE_EXTERNAL_OES texture target is usually in a YUV color space and glReadPixels() requires the target to be RGB. It probably does not do the color space conversion automatically. However, you could do the conversion in your own fragment shader which renders RGB into another texture and then use glReadPixels() to read that.
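A consolidated sketch of that approach, in Java GLES20 for illustration. drawExternalTexture() is an assumed helper that draws a full-screen quad with a samplerExternalOES fragment shader (the YUV-to-RGB conversion happens when that shader samples the external texture), and cameraTexture must have a fresh frame latched via SurfaceTexture.updateTexImage():
int width = 640, height = 480;                 // example preview size
int[] ids = new int[2];
GLES20.glGenTextures(1, ids, 0);
GLES20.glGenFramebuffers(1, ids, 1);
int rgbaTex = ids[0], fbo = ids[1];

// Plain 2D RGBA texture to render into and read back from.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, rgbaTex);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, rgbaTex, 0);     // attach the 2D texture, not cameraTexture

GLES20.glViewport(0, 0, width, height);
drawExternalTexture(cameraTexture);            // assumed helper: renders the OES texture as RGBA

ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);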
texture for YUV420 to RGB conversion in OpenGL ES

android - Can't read pixels from GraphicBuffer at adreno GPU, by Karthik's method(Hacky alternatives of glReadPixels)

Since July, I have been developing an Android application to edit video files like .avi and .flv. I use FFmpeg and OpenGL ES 2.0 to implement this application.
Because it requires too many calculations to execute a filter effect like "Blur" on the CPU, I decided to use OpenGL ES 2.0 to apply filter effects to a frame of video using the GPU and shaders.
What I am trying to do is 'use a shader to apply a filter effect to a frame of video and get the pixels which are stored in the framebuffer'.
So I have to use glReadPixels, the only OpenGL ES 2.0 method that can be used to get pixels from the framebuffer. But according to many GPU development guides, using glReadPixels is not recommended, and the guides warn of the potential risks of using it. Also, the performance of glReadPixels differs depending on GPU version and vendor. I could not firmly decide to use glReadPixels and tried to find another method for getting the pixels which are the result of the GPU calculation.
After a few days, I found a hacky method for getting the pixel data by using Android's GraphicBuffer.
Here is the link.
From this link, I applied Karthik's method to my code.
The only difference is:
//render method I made.
void renderFrame(){
    /* some codes to init */
    glBindFramebuffer(GL_FRAMEBUFFER, iFBO);

    /* Set the viewport according to the FBO's texture. */
    glViewport(0, 0, mTexWidth , mTexHeight);

    /* Clear screen on FBO. */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Different Code compare to Karthik's.
    contents->setTexture();
    contents->draw(mPositionVarIndex, mTextrueCoIndex);
    contents->releaseData();

    /* And unbind the FrameBuffer Object so subsequent drawing calls are to the EGL window surface. */
    glBindFramebuffer(GL_FRAMEBUFFER,0);

    LOGI("Read Graphic Buffer");
    // Just in case the buffer was not created yet
    void* vaddr;
    // Lock the buffer and retrieve a pointer where we are going to write the data
    buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);
    if (vaddr == NULL) {
        LOGE("lock error");
        buffer->unlock();
        return;
    }
    /* some codes that use the pixels from GraphicBuffer...*/
}

void setTexture(){
    glGenTextures(1, mTexture);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mData);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}

void releaseData(){
    glDeleteTextures(1, mTexture);
    glDeleteBuffers(1, mVbo);
}

void draw(int positionIndex, int textureIndex){
    mVbo[0] = create_vbo(lengthOfArray*sizeOfFloat*2, NULL, GL_STATIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, lengthOfArray*sizeOfFloat, this->vertexData);
    glEnableVertexAttribArray(positionIndex);
    // checkGlError("glEnableVertexAttribArray");
    glVertexAttribPointer(positionIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    // checkGlError("glVertexAttribPointer");
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, lengthOfArray*sizeOfFloat, lengthOfArray*sizeOfFloat, this->mImgTextureData);
    glEnableVertexAttribArray(textureIndex);
    glVertexAttribPointer(textureIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(lengthOfArray*sizeOfFloat));
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
    checkGlError("glDrawArrays");
}
I use a texture and render a frame to fill the buffer. I have two test phones: a Samsung Galaxy S2, whose renderer is Mali-400MP, and an LG Optimus G Pro, whose renderer is Adreno(TM) 320. The Galaxy S2 works well with the above code and Karthik's method, but on the LG smartphone there are some problems:
E/libgenlock(17491): perform_lock_unlock_operation: GENLOCK_IOC_DREADLOCK failed (lockType0x1,err=Connection timed out fd=47)
E/gralloc(17491): gralloc_lock: genlock_lock_buffer (lockType=0x2) failed
W/GraphicBufferMapper(17491): lock(...) failed -22 (Invalid argument)
According to this link,
On Qualcomm hardware pre-Android-4.2, a Qualcomm-specific mechanism,
named Genlock, is used.
The only errors I could see were related to Genlock, so I carefully guessed at some problem between GraphicBuffer and the Qualcomm GPU. After that, I searched and read the code of Gralloc.cpp, GraphicBufferMapper.cpp, GraphicBuffer.cpp and the *.h files to find the reason for those errors, but failed.
My questions are:
Is this the right approach to getting a filter effect from GPU calculation? If not, how can I get a filter effect like "Blur", which requires so many calculations?
Is Karthik's method not working on Qualcomm GPUs? I want to know why those errors occur only on the Qualcomm GPU, Adreno.
Make sure your GraphicBuffer allocation has GRALLOC_USAGE_SW_READ_OFTEN specified. Without it you may not be able to lock the buffer from code running on the CPU.
Unrelated but possibly suggestive of a better approach: see the CameraToMpegTest example, which does a trivial edit to live camera input using a GLES 2.0 shader.
Update: there's now an example of applying filters with the GPU in Grafika. You can see a screenrecorded demo here.
