In my Android 4.3 application, I would like to load a texture from a local PNG onto a TextureView. I do not know OpenGL, so I am using the code from the GLTextureActivity hardware acceleration test. I am also pasting the texture-loading part here:
private int loadTexture(int resource) {
int[] textures = new int[1];
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, textures, 0);
checkGlError();
int texture = textures[0];
glBindTexture(GL_TEXTURE_2D, texture);
checkGlError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Bitmap bitmap = BitmapFactory.decodeResource(mResources, resource);
GLUtils.texImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap, GL_UNSIGNED_BYTE, 0);
checkGlError();
bitmap.recycle();
return texture;
}
I am running the code on two devices, a Nexus 7 and a Galaxy Nexus phone, and I notice a huge speed difference between the two. On the Nexus 7 the drawing part takes about 170 ms, whereas on the Galaxy Nexus it takes 459 ms. The most time-consuming operation is the loading of the texture, and especially the texImage2D call. I have read that there are devices with chips that are slow on the texImage2D/texSubImage2D functions, but how can one tell which devices those are, and how can I avoid using those functions while still achieving the same result?
Thank you in advance.
// EDIT: the glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) call also seems to be significantly slower on the phone. Why is this happening, and how could I avoid it?
Are you loading textures on each frame redraw? That's just not right: you should load textures only once, before the main rendering loop. You won't get instant texture loading even on the fastest possible device, because you load the resource, decode a bitmap from it, and then upload it to the GPU. This takes some time.
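For example, here is a minimal GLSurfaceView.Renderer sketch of that split; the resource id is a placeholder and loadTexture() is assumed to be a helper like the one in the question (the same idea applies to the TextureView setup used there):
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class SingleUploadRenderer implements GLSurfaceView.Renderer {
    private int mTextureId;

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // The expensive part (decoding the PNG and calling texImage2D) happens exactly once here.
        mTextureId = loadTexture(R.drawable.background); // hypothetical resource and helper
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Per frame we only clear, bind the already-uploaded texture and draw.
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureId);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4); // shader/vertex setup omitted
    }
}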
Related
I'm working on displaying YUV pictures with OpenGL ES 3.0 on Android. I convert YUV pixels to RGB in a fragment shader. First, I need to pass the YUV pixel data to OpenGL as a texture.
When the YUV data has 8-bit depth, I do the following and it works:
GLenum glError;
GLuint tex_y;
glGenTextures(1, &tex_y);
glBindTexture(GL_TEXTURE_2D, tex_y);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// pix_y is y component data.
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, y_width, y_height, 0, GL_LUMINANCE,GL_UNSIGNED_BYTE, pix_y);
glGenerateMipmap(GL_TEXTURE_2D);
glError = glGetError();
if (glError != GL_NO_ERROR) {
LOGE(TAG, "y, glError: 0x%x", glError);
}
However, there are YUV formats with more depth, such as yuv420p10le. I don't want to lose the benefit of the extra depth, so I convert YUV data with more than 8 bits of depth to 16 bits by shifting it (for example, for yuv420p10le: y_new = y_old << 6). Now I want to generate a 16-bit-depth texture, but it always fails with GL_INVALID_OPERATION. Below is the code to create a 16-bit texture:
// rest part is the same with 8 bit texture.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, y_width, y_height, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, pix_y);
I've tried many of the format combinations in https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml, but none of them succeeded.
By the way, I also tested this on macOS with OpenGL 3.3 and it worked; I just pass the data as the single RED channel of an RGB texture.
// Code on macOS, OpenGL 3.3. The data type depends on the Y depth: GL_UNSIGNED_BYTE or
// GL_UNSIGNED_SHORT. With this configuration, I can access the data in the RED channel of the
// texture, normalized to [0.0f, 1.0f]. However, this configuration doesn't work on OpenGL ES 3.0.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, y_width, y_height, 0, GL_RED, dataFormat, y);
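Per the ES 3.0 spec, the GL_R16UI / GL_RED_INTEGER / GL_UNSIGNED_SHORT triple above is legal; what is not allowed is filtering or mipmapping an integer texture (GL_LINEAR leaves it incomplete, and glGenerateMipmap raises GL_INVALID_OPERATION on non-filterable formats), so if "the rest is the same as the 8-bit texture" that is a plausible source of the error. Below is a minimal sketch under those assumptions, written in Java with GLES30 rather than the native code above:
import android.opengl.GLES30;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

// pixY: y_width * y_height unsigned 16-bit luma samples (e.g. 10-bit values shifted left by 6).
static int createR16uiTexture(int yWidth, int yHeight, short[] pixY) {
    int[] tex = new int[1];
    GLES30.glGenTextures(1, tex, 0);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, tex[0]);

    // Integer textures are not filterable: NEAREST only, and no glGenerateMipmap.
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_NEAREST);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_NEAREST);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_CLAMP_TO_EDGE);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_CLAMP_TO_EDGE);

    // 16-bit rows are 2-byte aligned; the default unpack alignment of 4 breaks odd widths.
    GLES30.glPixelStorei(GLES30.GL_UNPACK_ALIGNMENT, 2);

    ShortBuffer data = ByteBuffer.allocateDirect(pixY.length * 2)
            .order(ByteOrder.nativeOrder())
            .asShortBuffer();
    data.put(pixY).position(0);

    GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_R16UI, yWidth, yHeight, 0,
            GLES30.GL_RED_INTEGER, GLES30.GL_UNSIGNED_SHORT, data);
    return tex[0];
}
In the fragment shader such a texture is sampled with a usampler2D and comes back as a uvec4, so it has to be normalised manually, e.g. float y = float(texelFetch(yTex, coord, 0).r) / 65535.0.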
Within my Android app, which is an OpenGL ES 2.0 game, I've included four sets of graphic resources, like so:
ldpi
mdpi
hdpi
xhdpi
Now, within my xhdpi folder, the largest asset I have is 2560 x 1838, as I'm targeting large screens here (for example the Google Nexus 10 tablet, which gets its resources from the xhdpi folder). Up until now, everything has worked on the devices I've tested on (Google Nexus 10, Samsung Galaxy S3, S4 and S5, Nexus 4 and 5, etc.).
I've recently heard from a user who has encountered problems running this on an HTC One X+ handset (I believe there are other reasons for this besides the one I'm talking about here so I have a separate question open for the other issues).
The thing is, according to this, the maximum texture size for this phone is 2048 x 2048, but then, according to this, the phone gets its resources from the xhdpi folder.
So it won't display the textures from this atlas (it's an atlas containing 6 separate backgrounds of 1280 x 612).
Now I have two options that I am aware of to fix this:
Reduce the size of the backgrounds. Ruled out, as this would compromise quality on larger-screen devices (like the Nexus 10).
Split into 2 atlases. I would rather not do this, as I would like to keep all backgrounds in one atlas to optimise loading speed (and just to keep things tidy).
Are there any other options? Am I able to provide, within a subfolder of xhdpi, another set of assets that fits within the One X+'s limits? And if so, how do I get the device to grab its resources from there?
I'm doing this blind, as I don't have a One X+ to test on, but from what I've read, I believe having assets larger than 2048 x 2048 is going to cause problems on this device.
This is my code for applying textures:
public static int LoadTexture(GLSurfaceView view, Bitmap imgTex){
//Array for texture
int textures[] = new int[1];
try {
//texture name
GLES20.glGenTextures(1, textures, 0);
//Bind textures
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
//Set parameters for texture
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
//Apply texture
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, imgTex, 0);
//clamp texture
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,GLES20.GL_CLAMP_TO_EDGE);
} catch (Exception e){
}
//Increase texture count
textureCount++;
//Return texture
return textures[0];
}
If you don't know whether a texture load will work, you can do all the same operations, except use GL_PROXY_TEXTURE_2D (or 1D, 3D, etc.) instead of GL_TEXTURE_2D, to check whether the load will work for a given texture size and set of parameters. OpenGL attempts to perform the load, and it will set all the texture state to 0 if it doesn't work or there is another problem with a texture parameter. Then, if the texture load fails (in your case because the image is too big), have your program load a smaller, scaled texture.
EDIT: I use iOS and my gl.h doesn't include GL_PROXY_TEXTURE_*, and I can't find any reference to it in the OpenGL ES 3 specification, so I'm not sure this will work for you with OpenGL ES.
Alternatively, get your maximum texture size (per dimension, e.g. width, height or depth) and load a suitably sized image, using:
glGetIntegerv( GL_MAX_TEXTURE_SIZE, &size );
Afterwards, check that the upload worked:
GLuint name;
int width = 10000; // really bad width
int height = 512;
GLint size;
glGenTextures( 1, &name );
glBindTexture( GL_TEXTURE_2D, name );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, 0);
GLenum error = glGetError();
if( error ) {
printf("Failed!");
}
else {
printf("Succeeded!");
}
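On Android specifically, one way to act on that check is to downscale the decoded Bitmap until it fits the reported limit before uploading it. A rough Java sketch follows; the method name is illustrative, and it assumes a texture has already been generated and bound on the GL thread:
import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Shrink the bitmap so neither dimension exceeds GL_MAX_TEXTURE_SIZE, then upload it.
static void uploadWithinMaxSize(Bitmap bitmap) {
    int[] maxSize = new int[1];
    GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxSize, 0); // e.g. 2048 on the One X+

    int w = bitmap.getWidth();
    int h = bitmap.getHeight();
    if (w > maxSize[0] || h > maxSize[0]) {
        float scale = Math.min(maxSize[0] / (float) w, maxSize[0] / (float) h);
        bitmap = Bitmap.createScaledBitmap(bitmap,
                Math.max(1, Math.round(w * scale)),
                Math.max(1, Math.round(h * scale)), true);
    }
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
}
This trades some quality on the limited device for not failing outright, which matches the "load a smaller scaled texture" fallback described above.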
I am currently working on a platform in the native Android framework where I use GraphicBuffer to allocate memory and then create an EGLImage from it. This is then used as a texture in OpenGL which I render to (with a simple fullscreen quad).
The problem is that when I read the rendered pixel data from the GraphicBuffer, I expect it to be in a linear RGBA format in memory, but the result is a texture which contains three parallel, smaller clones of the image with overlapping pixels. Maybe that description doesn't say much, but the point is that the actual pixel data makes sense, while the memory layout seems to be something other than linear RGBA. I assume this is because the graphics driver stores the pixels in an internal format other than linear RGBA.
If I render to a standard OpenGL texture and read with glReadPixels everything works fine, so I assume the problem lies with my custom memory allocation with GraphicBuffer.
If the reason is the drivers' internal memory layout, is there any way of forcing the layout to linear RGBA? I have tried most of the usage flags supplied to the GraphicBuffer constructor with no success. If not, is there a way to output the data differently in the shader to "cancel out" the memory layout?
I am building Android 4.4.3 for Nexus 5.
//Allocate graphicbuffer
outputBuffer = new GraphicBuffer(outputFormat.width, outputFormat.height, outputFormat.bufferFormat,
GraphicBuffer::USAGE_SW_READ_OFTEN |
GraphicBuffer::USAGE_HW_RENDER |
GraphicBuffer::USAGE_HW_TEXTURE);
/* ... */
//Create EGLImage from graphicbuffer
EGLint eglImageAttributes[] = {EGL_WIDTH, outputFormat.width, EGL_HEIGHT, outputFormat.height, EGL_MATCH_FORMAT_KHR,
outputFormat.eglFormat, EGL_IMAGE_PRESERVED_KHR, EGL_FALSE, EGL_NONE};
EGLClientBuffer nativeBuffer = outputBuffer->getNativeBuffer();
eglImage = _eglCreateImageKHR(display, EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, nativeBuffer, eglImageAttributes);
/* ... */
//Create output texture
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImage);
/* ... */
//Create target fbo
glGenFramebuffers(1, &targetFBO);
glBindFramebuffer(GL_FRAMEBUFFER, targetFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* ... */
//Read from graphicbuffer
const Rect lockBoundsOutput(quadRenderer->outputFormat.width, quadRenderer->outputFormat.height);
status_t statusgb = quadRenderer->getOutputBuffer()->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &result);
I managed to find the answer myself, and I was wrong all along. The simple reason was that although I was rendering a 480x1080 texture, the memory allocated was padded to 640x1080, so I just needed to remove the padding after each row and the output texture made sense.
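For anyone hitting the same thing, here is a small sketch of what removing the per-row padding can look like, assuming RGBA8888 data; the 480-pixel image width and 640-pixel row stride are the values from this case, and in general the stride should be read from the buffer itself rather than hard-coded:
// Copy a tightly packed RGBA image out of a row-padded buffer.
// widthPx = visible width in pixels (480 here), stridePx = allocated row width in pixels (640 here).
static byte[] stripRowPadding(byte[] padded, int widthPx, int heightPx, int stridePx) {
    final int bytesPerPixel = 4; // RGBA8888
    byte[] packed = new byte[widthPx * heightPx * bytesPerPixel];
    for (int row = 0; row < heightPx; row++) {
        System.arraycopy(padded, row * stridePx * bytesPerPixel,  // start of padded source row
                packed, row * widthPx * bytesPerPixel,            // start of packed destination row
                widthPx * bytesPerPixel);                         // copy only the visible pixels
    }
    return packed;
}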
While rendering a texture on some devices (only the Galaxy S3 Mini confirmed so far), I get a dark, flickering area on the texture, as described in this thread:
Black Artifacts on Android in OpenGL ES 2
I'm not allowed to comment on that thread (not enough reputation), but I would like clarification from the author who solved this issue:
Could you explain a little more how you use glTexImage2D() and glTexSubImage2D() to solve this?
In my code I have these lines to load the bitmaps:
(As you can see, I'm using texImage2D to load the bitmap; the Android documentation for glTexImage2D only lists the parameter types but gives no explanation.)
...
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;
final Bitmap bitmap = BitmapFactory.decodeResource(
context.getResources(), resourceId, options);
if (bitmap == null) {
if (LoggerConfig.ON) {
Log.w(TAG, "Resource ID " + resourceId + " could not be decoded.");
}
glDeleteTextures(1, textureObjectIds, 0);
return 0;
}
glBindTexture(GL_TEXTURE_2D, textureObjectIds[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
texImage2D(GL_TEXTURE_2D, 0, bitmap, 0);
glGenerateMipmap(GL_TEXTURE_2D);
...
Edit:
I tried to implement the solution from the link at the top, but no luck; the same flickering effect appears.
New code to load the bitmap:
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(bitmap.getWidth() * bitmap.getHeight() * 4);
byteBuffer.order(ByteOrder.BIG_ENDIAN);
IntBuffer ib = byteBuffer.asIntBuffer();
int[] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
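// Repack each pixel from Android's ARGB int layout into the RGBA byte order OpenGL expects.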
for(int i=0; i<pixels.length; i++){
ib.put(pixels[i] << 8 | pixels[i] >>> 24);
}
bitmap.recycle();
byteBuffer.position(0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap.getWidth(), bitmap.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, null);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, bitmap.getWidth(), bitmap.getHeight(), GL_RGBA, GL_UNSIGNED_BYTE, byteBuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
Illustration of the odd behavior; see the black area in the middle right of the image:
https://dl.dropboxusercontent.com/u/61092317/blackflickering.jpg
The issue in the other question that you've referenced appears to be this: the author specified MipMaps when originally loading up his textures, and the MipMaps didn't work properly. There's a possibility that he wasn't using glTexImage2D properly.
The artifact that he got was: when the texture unit tried to move from one MipMap level to the next, there was no information there, and it rendered a blank (apparently black) texture instead. So, at least one level of his MipMap did get loaded.
I'm unsure if that author actually resolved his issue properly. OpenGL can be incredibly specific about how you load textures into it, and even the smallest error in your code can cause a problem. The problem may only occur on some platforms, so you get the impression that something is wrong with the device, but it still may be the code.
The best place to start is the Red Book: http://www.glprogramming.com/red/chapter09.html
And then: http://www.opengl.org/sdk/docs/man3/xhtml/glGenerateMipmap.xml
All of the functions that you want to know about are outlined there.
There are a few details that aren't answered in your question. Are you creating your own MipMaps by hand? Do you want OpenGL to generate the MipMaps automatically?
I would suggest starting with a simple texture that has no MipMaps, just one level. See if you can make that work. If that fixes the problem, then start moving towards using MipMaps.
As for the original question, there isn't a fundamental difference in the texel data you end up with between loading it via glTexImage2D and updating it via glTexSubImage2D. glTexImage2D is used to specify a texture (i.e. the width/height of the texture, which is different for each MipMap level), whereas glTexSubImage2D is used to update all or part of a texture that has already been specified by glTexImage2D. In the second case, you're just updating it with new texels.
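As a small illustration of that split (sizes and names are placeholders): the storage is defined once with glTexImage2D and a null data pointer, and new texels are then pushed into it with glTexSubImage2D, for example once per frame:
import android.opengl.GLES20;
import java.nio.Buffer;

// One-time call: defines the size and format of level 0, without supplying any data yet.
static void allocateTexture(int width, int height) {
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
}

// Repeated call: overwrites texels inside the level that was already allocated above.
static void updateTexture(int width, int height, Buffer pixels) {
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
}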
I'm trying to render AV frames, grabbed and converted from an MPEG-4 video using GStreamer, to an OpenGL texture on Android 2.2. I've pretty much exhausted Google and not found an answer.
Basically, I am using GStreamer's uridecodebin to decode the frame, then convert the frame to RGB, and then use glTexSubImage2D() to create an OpenGL texture from it, but I can't seem to get anything to work. The texture does get coloured once I receive the decoded RGB data from GStreamer.
The video size I get is 320 x 256 and my texture size is 512 x 256, and I am using glDrawTexiOES(0, 0, videowidth, videoheight). I am not getting any OpenGL-related errors, but the texture is blank (just differently coloured frames), though the audio works fine.
Here is my code (native onDraw):
if (theGStPixelBuffer != 0) {
glBindTexture (GL_TEXTURE_2D, s_texture);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glPixelStorei( GL_UNPACK_ALIGNMENT, 2);
glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, theTexWidth,
theTexHeight, GL_RGB, GL_UNSIGNED_BYTE,
GST_BUFFER_DATA(theGStPixelBuffer));
check_gl_error("glTexSubImage2D");
theGStPixelBuffer = 0;
}
glDrawTexiOES(0, 0, 0, theTexWidth, theTexHeight);
check_gl_error("glDrawTexiOES")
I have encountered the same problem; you can get the bitmap and use the Matrix class to resize it.
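A rough sketch of that suggestion, assuming the decoded RGB frame has first been wrapped in a Bitmap; the 320 x 256 source and 512 x 256 target sizes are the ones from the question, and whether stretching the frame up to the texture size is the right fix here is untested:
import android.graphics.Bitmap;
import android.graphics.Matrix;

// Scale a decoded 320x256 frame to the 512x256 texture size with a Matrix.
static Bitmap scaleFrameToTexture(Bitmap frame) {
    Matrix matrix = new Matrix();
    matrix.postScale(512f / frame.getWidth(), 256f / frame.getHeight());
    return Bitmap.createBitmap(frame, 0, 0, frame.getWidth(), frame.getHeight(), matrix, true);
}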