GStreamer video to OpenGL texture - Android

I'm trying to render AV frames grabbed and converted from an MPEG-4 video using GStreamer to an OpenGL texture on Android (2.2). I've pretty much exhausted Google and not found an answer.
Basically, I am using GStreamer's uridecodebin to decode the frames, converting each frame to RGB, and then using glTexSubImage2D() to create an OpenGL texture from it, but I can't seem to get anything to work. The texture does take on a color once I receive the decoded RGB data from GStreamer.
The video size is 320 x 256 and my texture size is 512 x 256, and I am drawing with glDrawTexiOES(0, 0, videowidth, videoheight). I get no OpenGL errors, but the texture stays blank (just frames of different solid colors), though the audio plays fine.
Here is my code (native OnDraw):
if (theGStPixelBuffer != 0) {
    glBindTexture(GL_TEXTURE_2D, s_texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, theTexWidth,
                    theTexHeight, GL_RGB, GL_UNSIGNED_BYTE,
                    GST_BUFFER_DATA(theGStPixelBuffer));
    check_gl_error("glTexSubImage2D");
    theGStPixelBuffer = 0;
}
glDrawTexiOES(0, 0, 0, theTexWidth, theTexHeight);
check_gl_error("glDrawTexiOES");

I have encountered the same problem; you can get the bitmap and use the Matrix class to resize the bitmap.

Related

How to create a texture with 16-bit depth and one component in OpenGL ES 3.0 on Android?

I'm working on displaying YUV pictures with OpenGL ES 3.0 on Android. I convert the YUV pixels to RGB in a fragment shader. First, I need to pass the YUV pixel data to OpenGL as a texture.
When the YUV data has 8-bit depth, I set it up as below and it works:
GLenum glError;
GLuint tex_y;
glGenTextures(1, &tex_y);
glBindTexture(GL_TEXTURE_2D, tex_y);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// pix_y is y component data.
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, y_width, y_height, 0, GL_LUMINANCE,GL_UNSIGNED_BYTE, pix_y);
glGenerateMipmap(GL_TEXTURE_2D);
glError = glGetError();
if (glError != GL_NO_ERROR) {
    LOGE(TAG, "y, glError: 0x%x", glError);
}
However, there are YUV formats with higher bit depth, such as yuv420p10le. I don't want to lose the benefit of the extra depth, so I convert YUV data with more than 8 bits per component to 16 bits by shifting it (for example, for yuv420p10le: y_new = y_old << 6). Now I want to create a 16-bit-depth texture, but I always get GL_INVALID_OPERATION. Below is the code that creates the 16-bit texture:
// The rest is the same as for the 8-bit texture.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, y_width, y_height, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, pix_y);
I've tried many of the format combinations listed at https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml, but none of them succeed.
By the way, I also tested on macOS with OpenGL 3.3 and it worked; I just pass the data as one channel of an RGB texture.
// Code on macOS OpenGL 3.3. dataFormat depends on the Y bit depth: GL_UNSIGNED_BYTE or
// GL_UNSIGNED_SHORT. With this configuration I can read the data from the texture's RED
// channel, normalized to [0.0f, 1.0f]. However, this configuration doesn't work on OpenGL ES 3.0.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, y_width, y_height, 0, GL_RED, dataFormat, y);
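For comparison, here is a minimal sketch of how an unsigned-integer 16-bit texture is typically set up in OpenGL ES 3.0 (pix_y, y_width and y_height are the names from the question above). Two constraints are worth noting: integer internal formats such as GL_R16UI are not texture-filterable, so calling glGenerateMipmap on them raises GL_INVALID_OPERATION and filtering has to be GL_NEAREST, and the fragment shader has to sample them through a usampler2D:
GLuint tex_y16;
glGenTextures(1, &tex_y16);
glBindTexture(GL_TEXTURE_2D, tex_y16);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Integer textures cannot be linearly filtered; use nearest sampling.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Each texel is 2 bytes, so rows of odd width are only 2-byte aligned.
glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, y_width, y_height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, pix_y);
// No glGenerateMipmap here: it is invalid for non-filterable integer formats.
// In the shader: uniform highp usampler2D tex_y;
//                float y = float(texture(tex_y, uv).r) / 65535.0;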

Surface texture using OpenGL ES 3.0 on Android

I'm using OpenGL ES 3.0 to do video processing.
I found that it's possible to get frames from a video by using the Android MediaExtractor and MediaCodec APIs together with a surface texture, like below.
glGenTextures( 1, &textureId );
glBindTexture(GL_TEXTURE_EXTERNAL_OES, textureId);
glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_EXTERNAL_OES, 0);
But there is no such API in OpenGL ES 3.0.
My question is: is there an equivalent API in OpenGL ES 3.0 to create and use a surface texture? Or is there some other Android API to get all frames from a video?
I tried getFrameAtTime() in MediaMetadataRetriever and FFmpegMediaMetadataRetriever but I could only get the key frames rather than all frames, even with MediaMetadataRetriever.OPTION_CLOSEST.
It seems that the ES 1.1 and 2.0 extensions are still available in ES 3.0, so to use GL_TEXTURE_EXTERNAL_OES you only need to include gl2ext.h.
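For what it's worth, a minimal sketch of that approach (GL_TEXTURE_EXTERNAL_OES comes from GLES2/gl2ext.h, sampling an external texture from a #version 300 es fragment shader relies on the device supporting the GL_OES_EGL_image_external_essl3 extension, and uVideoFrame is just a placeholder uniform name):
#include <GLES3/gl3.h>
#include <GLES2/gl2ext.h>   // defines GL_TEXTURE_EXTERNAL_OES

GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_EXTERNAL_OES, textureId);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_EXTERNAL_OES, 0);
// Hand textureId to a SurfaceTexture on the Java side as usual. In the shader:
//   #version 300 es
//   #extension GL_OES_EGL_image_external_essl3 : require
//   uniform samplerExternalOES uVideoFrame;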

Unexpected pixel data layout when reading from GraphicBuffer

I am currently working on a platform in the native Android framework where I use GraphicBuffer to allocate memory and then create an EGLImage from it. This is then used as a texture in OpenGL which I render to (with a simple fullscreen quad).
The problem is that when I read the rendered pixel data back from the GraphicBuffer, I expect it to be in linear RGBA format in memory, but the result is a texture that contains three parallel, smaller clones of the image with overlapping pixels. Maybe that description doesn't say much, but the point is that the actual pixel data makes sense while the memory layout seems to be something other than linear RGBA. I assume this is because the graphics drivers store the pixels in an internal format other than linear RGBA.
If I render to a standard OpenGL texture and read with glReadPixels everything works fine, so I assume the problem lies with my custom memory allocation with GraphicBuffer.
If the reason is the drivers' internal memory layout, is there any way of forcing the layout to linear RGBA? I have tried most of the usage flags supplied to the GraphicBuffer constructor with no success. If not, is there a way to output the data differently in the shader to "cancel out" the memory layout?
I am building Android 4.4.3 for Nexus 5.
//Allocate graphicbuffer
outputBuffer = new GraphicBuffer(outputFormat.width, outputFormat.height, outputFormat.bufferFormat,
                                 GraphicBuffer::USAGE_SW_READ_OFTEN |
                                 GraphicBuffer::USAGE_HW_RENDER |
                                 GraphicBuffer::USAGE_HW_TEXTURE);
/* ... */
//Create EGLImage from graphicbuffer
EGLint eglImageAttributes[] = {EGL_WIDTH, outputFormat.width, EGL_HEIGHT, outputFormat.height, EGL_MATCH_FORMAT_KHR,
                               outputFormat.eglFormat, EGL_IMAGE_PRESERVED_KHR, EGL_FALSE, EGL_NONE};
EGLClientBuffer nativeBuffer = outputBuffer->getNativeBuffer();
eglImage = _eglCreateImageKHR(display, EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, nativeBuffer, eglImageAttributes);
/* ... */
//Create output texture
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImage);
/* ... */
//Create target fbo
glGenFramebuffers(1, &targetFBO);
glBindFramebuffer(GL_FRAMEBUFFER, targetFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* ... */
//Read from graphicbuffer
const Rect lockBoundsOutput(quadRenderer->outputFormat.width, quadRenderer->outputFormat.height);
status_t statusgb = quadRenderer->getOutputBuffer()->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &result);
I managed to find the answer myself, and I was wrong all along. The simple reason was that although I was rendering a 480x1080 texture, the allocated memory was padded to 640x1080, so I just needed to skip the padding after each row and the output texture made sense.
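In case it helps anyone else, here is a minimal sketch of the stride-aware readback (assuming an RGBA8888 buffer; GraphicBuffer::getStride() reports the padded row length in pixels, 640 in my case, while getWidth() returns the logical 480):
uint8_t* mapped = nullptr;
status_t status = outputBuffer->lock(GraphicBuffer::USAGE_SW_READ_OFTEN,
                                     reinterpret_cast<void**>(&mapped));
if (status == NO_ERROR) {
    const uint32_t width  = outputBuffer->getWidth();   // 480
    const uint32_t height = outputBuffer->getHeight();  // 1080
    const uint32_t stride = outputBuffer->getStride();  // 640, in pixels
    std::vector<uint8_t> packed(width * height * 4);    // tightly packed RGBA output
    for (uint32_t row = 0; row < height; ++row) {
        // Copy the real pixels of each row and skip the padding at its end.
        memcpy(&packed[row * width * 4], mapped + row * stride * 4, width * 4);
    }
    outputBuffer->unlock();
}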

Black artifacts with OpenGL ES 2.0 texture rendering on certain devices

While rendering a texture on some devices (only the Galaxy S3 Mini confirmed so far), I get dark areas flickering on the texture, as described in this thread:
Black Artifacts on Android in OpenGL ES 2
I'm not allowed to comment on that thread (not enough reputation), but I would like clarification from the author who solved the issue:
Could you explain a little more how you use glTexImage2D() and glTexSubImage2D() to solve this?
In my code I have these lines to load the bitmap:
(As you can see, I'm using texImage2D to load the bitmap; the Android documentation for glTexImage2D only lists the parameter types and gives no explanation.)
...
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;
final Bitmap bitmap = BitmapFactory.decodeResource(
        context.getResources(), resourceId, options);
if (bitmap == null) {
    if (LoggerConfig.ON) {
        Log.w(TAG, "Resource ID " + resourceId + " could not be decoded.");
    }
    glDeleteTextures(1, textureObjectIds, 0);
    return 0;
}
glBindTexture(GL_TEXTURE_2D, textureObjectIds[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
texImage2D(GL_TEXTURE_2D, 0, bitmap, 0);
glGenerateMipmap(GL_TEXTURE_2D);
...
Edit:
I tried to implement the solution from the thread linked above, but no luck; the same flickering effect remains.
New code to load the bitmap:
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(bitmap.getWidth() * bitmap.getHeight() * 4);
byteBuffer.order(ByteOrder.BIG_ENDIAN);
IntBuffer ib = byteBuffer.asIntBuffer();
int[] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
// Rotate each ARGB int from the Bitmap into RGBA byte order for OpenGL.
for (int i = 0; i < pixels.length; i++) {
    ib.put(pixels[i] << 8 | pixels[i] >>> 24);
}
bitmap.recycle();
byteBuffer.position(0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap.getWidth(), bitmap.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, null);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, bitmap.getWidth(), bitmap.getHeight(), GL_RGBA, GL_UNSIGNED_BYTE, byteBuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Illustration of odd behavior, see the black area to the middle right in the image:
https://dl.dropboxusercontent.com/u/61092317/blackflickering.jpg
The issue in the other question that you've referenced appears to be this: the author specified mipmaps when originally loading his textures, and the mipmaps didn't work properly. There's a possibility that he wasn't using glTexImage2D properly.
The artifact he got was this: when the texture unit tried to move from one mipmap level to the next, there was no information there, and it rendered a blank (apparently black) texture instead. So at least one level of his mipmap did get loaded.
I'm unsure whether that author actually resolved his issue properly. OpenGL can be incredibly specific about how you load textures into it, and even the smallest error in your code can cause a problem. The problem may only occur on some platforms, so you get the impression that something is wrong with the device, but it may still be the code.
The best place to start is the Red Book: http://www.glprogramming.com/red/chapter09.html
And then: http://www.opengl.org/sdk/docs/man3/xhtml/glGenerateMipmap.xml
All of the functions that you want to know about are outlined there.
There are a few details that aren't answered in your question. Are you creating your own mipmaps by hand? Do you want OpenGL to generate the mipmaps automatically?
I would suggest starting with a simple texture that has no mipmaps, just one level, and seeing if you can make that work. If that fixes the problem, then start moving toward using mipmaps.
As for the original question, there isn't really a difference in the data you load with glTexImage2D versus glTexSubImage2D. glTexImage2D is used to specify a texture (i.e., the width/height of the texture, which is different for each mipmap level), whereas glTexSubImage2D is used to update all or part of a texture that has already been specified by glTexImage2D. In the second case, you're just updating it with new texels.
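To make the specify/update relationship concrete, here is a small sketch in plain GL calls (the Java GLES20 bindings are identical); the 256 x 256 size and the pixels pointer are placeholders, not values from the question:
// glTexImage2D specifies the texture: its size, format, and (optionally) initial data.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// glTexSubImage2D only replaces texels inside storage that has already been specified.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// With a mipmapped minification filter such as GL_LINEAR_MIPMAP_LINEAR, every level
// must exist, otherwise the texture is incomplete and samples as black.
glGenerateMipmap(GL_TEXTURE_2D);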

Android drawing texture with OpenGL ES 2.0 slow on some devices

In my Android 4.3 application, I would like to load a texture from a local PNG onto a TextureView. I do not know OpenGL, so I am using the code from the GLTextureActivity hardware-acceleration test. I am also pasting the texture-loading part here:
private int loadTexture(int resource) {
    int[] textures = new int[1];
    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, textures, 0);
    checkGlError();
    int texture = textures[0];
    glBindTexture(GL_TEXTURE_2D, texture);
    checkGlError();
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    Bitmap bitmap = BitmapFactory.decodeResource(mResources, resource);
    GLUtils.texImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap, GL_UNSIGNED_BYTE, 0);
    checkGlError();
    bitmap.recycle();
    return texture;
}
I am running the code on two devices, a Nexus 7 and a Galaxy Nexus phone, and I notice a huge speed difference between the two. On the Nexus 7 the drawing part takes about 170 ms, while on the Galaxy Nexus it takes 459 ms. The most time-consuming operation is the loading of the texture, and especially the texImage2D call. I have read that there are devices with chips that are slow on the texImage2D/texSubImage2D functions, but how can someone tell which devices those are, and how can I avoid using those functions while achieving the same result?
Thank you in advance.
// EDIT: the glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) call also seems to be significantly slower on the phone. Why is this happening? How could I avoid it?
Are you loading textures on each frame redraw? That's just not right: you should load textures only once, before the main rendering loop. You won't get instant texture loading even on the fastest possible device; you have to load the resource, decode a bitmap from it, and then upload it to the GPU, and that takes some time.
