How to check if an OpenGL texture name is valid - Android

I have an Android application and, under certain situations, some, but not all, of our textures seem to become unbound. (That is, when I use glBindTexture and draw it, it is rendered as a blank texture)
I've tried looking for an error from glBindTexture, and tried using glGet with GL_TEXTURE_BINDING_2D, but nothing has helped thus far.
Is there any way to discover if a texture name is still valid/pointing to valid data?
My last resort is to save some small amount of pixel data and then, when these events happen, bind and use glReadPixels and see if they're still there... But that seems really... non-optimal...
This is OpenGL ES 1.0/1.1.

Sounds like glIsTexture is what you're looking for.
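A minimal sketch of such a check against the ES 1.1 headers (glIsTexture is not available in ES 1.0), with the caveat raised in the next reply:

#include <GLES/gl.h>

// Sketch: ask the GL whether 'name' currently names a texture object.
// glIsTexture only reports whether the name has been bound at least once and
// not deleted; it says nothing about whether the texel data is still intact.
bool textureNameIsValid(GLuint name)
{
    return glIsTexture(name) == GL_TRUE;
}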

glIsTexture is not a valid answer to this question. glIsTexture returns true even when the texture name is no longer valid. This is a problem with the Android implementation of OpenGL. I have no solution for this.

Related

On Android, determining whether glBlendFuncSeparateOES is supported

I have some code that uses glBlendFuncSeparateOES and glBlendEquationSeparateOES in order to render onto framebuffers with alpha.
However, I've found that a couple of my target devices do NOT appear to support these functions. They fail silently, and all that happens is that the blend state doesn't get set. My Kindle Fire cheapie tablet and an older Samsung both exhibit this behavior.
Is there a good way, on Android, to query whether they're actually implemented? I have tried eglGetProcAddress, but it returns an address for any string you throw at it!
Currently, I just have the game, on startup, do a quick render on a small FBO to see if the transparency is correct or if it has artifacts. It works, but it's a very kludgy method.
I'd much prefer if there was something like glIsBlendFuncSeparateSupported().
You can get a list of all available extensions using glGetString(GL_EXTENSIONS). This returns a space-separated list of supported extension names. For more details, see the Khronos specification.
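For example, a sketch of such a check in a C++ plugin might look like this; the extension names in the usage note are the likely ones for the *SeparateOES entry points, but verify them against the Khronos registry:

#include <GLES/gl.h>
#include <cstring>

// Sketch: check whether an extension token appears in the GL_EXTENSIONS string.
static bool hasExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!ext)
        return false;

    // Match whole tokens only, so "GL_OES_foo" doesn't match "GL_OES_foo_bar".
    const std::size_t len = std::strlen(name);
    for (const char* p = std::strstr(ext, name); p != nullptr; p = std::strstr(p + 1, name))
    {
        const bool startOk = (p == ext) || (p[-1] == ' ');
        const bool endOk = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk)
            return true;
    }
    return false;
}

// Usage (assumed extension names):
// bool separateBlend = hasExtension("GL_OES_blend_func_separate")
//                   && hasExtension("GL_OES_blend_equation_separate");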

Most efficient way of creating large textures at runtime in OpenGL ES for Android

I'm working on an Android app built in Unity3D that needs to create new textures at runtime every so often, based off different images' pixel data.
Since Unity for Android uses OpenGL ES and my app is a graphical one that needs to run at ideally a solid 60 frames per second, I've created a C++ plugin that makes OpenGL calls directly instead of going through Unity's slow Texture2D texture construction. The plugin allows me to upload the pixel data to a new OpenGL texture, then let Unity know about it through Texture2D's CreateExternalTexture() function.
Since the version of OpenGL ES running in this setup is unfortunately single-threaded, in order to keep things running in frame I call glTexImage2D() with an already-generated texture ID but with null data on the first frame, and then call glTexSubImage2D() with a section of my buffer of pixel data over multiple subsequent frames to fill out the whole texture, essentially doing the texture creation synchronously but chunking the operation up over multiple frames!
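For concreteness, a rough sketch of that incremental upload (the RGBA8 format and the rows-per-frame chunking here are just placeholders):

#include <GLES2/gl2.h>
#include <algorithm>

// Frame 1: allocate storage only (null data), so no pixel upload happens yet.
void allocateTextureStorage(GLuint tex, int width, int height)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
}

// Subsequent frames: upload a band of rows per call until the image is full.
// Returns true once the whole texture has been filled.
bool uploadSomeRows(GLuint tex, const unsigned char* pixels,
                    int width, int height, int& nextRow, int rowsPerFrame)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    const int rows = std::min(rowsPerFrame, height - nextRow);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, nextRow, width, rows,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels + nextRow * width * 4);
    nextRow += rows;
    return nextRow >= height;
}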
Now, the problem I'm having is that every time I create a new texture with large dimensions, that very first glTexImage2D() call will still lead to a frame-out, even though I'm putting null data into it. I'm guessing that the reason for this is that there is still a pretty large memory allocation going on in the background with that first glTexImage2D() call, even though I'm not filling in the image until later.
Unfortunately, these images that I'm creating textures for are of varying sizes that I don't know beforehand, so I can't just create a bunch of textures up front on load; I need to specify a new width and height with each new texture every time. =(
Is there any way I can avoid this memory allocation, maybe by allocating a huge block of memory at the start and using it as a pool for new textures? I've read around and people seem to suggest using FBOs instead. I may have misunderstood, but it seems to me that you still need a glTexImage2D() call to allocate the texture before attaching it to the FBO?
Any and all advice is welcome, thanks in advance! =)
PS: I don't come from a Graphics background, so I'm not aware of best practices with OpenGL or other graphics libraries, I'm just trying to create new textures at runtime without framing out!
I haven't dealt with the specific problem you've faced, but I've found texture pools to be immensely useful in OpenGL in terms of getting efficient results without having to put much thought into it.
In my case the problem was that I couldn't use the same texture as an input to a deferred shader and as the texture the results are rendered into. Yet I often wanted to do just that:
// Make the texture blurry.
blur(texture);
Instead I was having to create 11 different textures of varying resolutions and swap between them as inputs and outputs for horizontal/vertical blur shaders with FBOs to get a decent-looking blur. I never liked GPU programming very much, because some of the most complex state management I've ever encountered lives there. It felt incredibly wrong that I had to go back to the drawing board just to figure out how to minimize the number of textures allocated, all because of the fundamental requirement that a texture used as a shader input cannot also be used as its output.
So I created a texture pool and OMG, it simplified things so much! It made it so I could just create temporary texture objects left and right and not worry about it, because destroying a texture object doesn't actually call glDeleteTextures; it simply returns the texture to the pool. So I was finally able to just do:
blur(texture);
... as I wanted all along. And for some funny reason, when I started using the pool more and more, it sped up frame rates. I guess even with all the thought I put into minimizing the number of textures being allocated, I was still allocating more than I needed, in ways the pool eliminated (note that the actual real-world example does a whole lot more than blurs, including DOF, bloom, hipass, lowpass, CMAA, etc., and the GLSL code is actually generated on the fly based on a visual programming language users can employ to create new shaders).
So I really recommend starting with exploring that idea. It sounds like it would be helpful for your problem. In my case I used this:
struct GlTextureDesc
{
...
};
... and it's a pretty hefty structure given how many texture parameters we can specify (pixel format, number of color components, LOD level, width, height, etc. etc.).
Yet the structure is comparable and hashable, and it ends up being used as a key in a hash table (like std::unordered_multimap), with the actual texture handle as the associated value.
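To make the idea concrete, a hypothetical, heavily trimmed stand-in for that key and its hash might look like this (the field names are invented; the real GlTextureDesc carries many more parameters):

#include <GLES2/gl2.h>
#include <cstddef>

// Hypothetical trimmed version of the descriptor described above.
struct GlTextureDesc
{
    GLenum internalFormat;
    GLenum format;
    GLenum type;
    GLsizei width;
    GLsizei height;

    bool operator==(const GlTextureDesc& o) const
    {
        return internalFormat == o.internalFormat && format == o.format &&
               type == o.type && width == o.width && height == o.height;
    }
};

// Hash functor so the descriptor can serve as an unordered_multimap key.
struct GlTextureDescHash
{
    std::size_t operator()(const GlTextureDesc& d) const
    {
        std::size_t h = 0;
        auto mix = [&h](std::size_t v) { h = h * 31 + v; };
        mix(d.internalFormat);
        mix(d.format);
        mix(d.type);
        mix(static_cast<std::size_t>(d.width));
        mix(static_cast<std::size_t>(d.height));
        return h;
    }
};

// e.g. std::unordered_multimap<GlTextureDesc, GLuint, GlTextureDescHash>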
That allows us to then do this:
// Provides a pool of textures. This allows us to conveniently rapidly create,
// and destroy texture objects without allocating and freeing an excessive number
// of textures.
class GlTexturePool
{
public:
// Creates an empty pool.
GlTexturePool();
// Cleans up any textures which haven't been accessed in a while.
void cleanup();
// Allocates a texture with the specified properties, retrieving an existing
// one from the pool if available. The function returns a handle to the
// allocated texture.
GLuint allocate(const GlTextureDesc& desc);
// Returns the texture with the specified key and handle to the pool.
void free(const GlTextureDesc& desc, GLuint texture);
private:
...
};
At which point we can create temporary texture objects left and right without worrying about excessive calls to glTexImage2D and glDeleteTextures. I found it enormously helpful.
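A hypothetical sketch of how allocate and free might sit on top of that hash table follows; the member name pool_ and the PooledTexture entry are invented for illustration, extending the map sketched earlier with a timestamp per entry (the real code, per the description below, also tracks when each texture was returned so cleanup can evict idle ones):

#include <GLES2/gl2.h>
#include <chrono>
#include <unordered_map>

// Hypothetical pool entry: the texture handle plus the time it was returned
// to the pool, so cleanup() can evict textures that sit idle too long.
struct PooledTexture
{
    GLuint handle;
    std::chrono::steady_clock::time_point returnedAt;
};

// Assumed private member of GlTexturePool:
//   std::unordered_multimap<GlTextureDesc, PooledTexture, GlTextureDescHash> pool_;

GLuint GlTexturePool::allocate(const GlTextureDesc& desc)
{
    auto it = pool_.find(desc);
    if (it != pool_.end())
    {
        // Reuse an idle texture with matching properties; no glTexImage2D needed.
        const GLuint tex = it->second.handle;
        pool_.erase(it);
        return tex;
    }

    // Nothing suitable in the pool: create and allocate a new texture.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, desc.internalFormat, desc.width, desc.height,
                 0, desc.format, desc.type, nullptr);
    return tex;
}

void GlTexturePool::free(const GlTextureDesc& desc, GLuint texture)
{
    // Return the texture to the pool instead of deleting it.
    pool_.emplace(desc, PooledTexture{texture, std::chrono::steady_clock::now()});
}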
Finally, note the cleanup function above. When I store textures in the hash table, I put a timestamp on them (using system real time). Periodically I call this cleanup function, which scans through the textures in the hash table and checks the timestamp. If a certain period of time has passed while they're just sitting there idling in the pool (say, 8 seconds), I call glDeleteTextures and remove them from the pool. I use a separate thread along with a condition variable to build up a list of textures to remove the next time a valid context is available, by periodically scanning the hash table; but if your application is all single-threaded, you might just invoke this cleanup function every few seconds in your main loop.
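Under those same hypothetical assumptions, a single-threaded version of that cleanup pass might look roughly like this (it must be called with a valid GL context current):

// Evict textures that have sat unused in the pool for longer than maxIdle.
void GlTexturePool::cleanup()
{
    const auto now = std::chrono::steady_clock::now();
    const auto maxIdle = std::chrono::seconds(8);  // idle threshold used in the text

    for (auto it = pool_.begin(); it != pool_.end(); )
    {
        if (now - it->second.returnedAt > maxIdle)
        {
            glDeleteTextures(1, &it->second.handle);
            it = pool_.erase(it);
        }
        else
        {
            ++it;
        }
    }
}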
That said, I work in VFX which doesn't have quite as tight realtime requirements as, say, AAA games. There's more of a focus on offline rendering in my field and I'm far from a GPU wizard. There might be better ways to tackle this problem. However, I found it enormously helpful to start with this texture pool and I think it might be helpful in your case as well. And it's fairly trivial to implement (just took me half an hour or so).
This could still end up allocating and deleting lots and lots of textures if the texture sizes and formats and parameters you request to allocate/free are all over the place. There it might help to unify things a bit, like at least using POT (power of two) sizes and so forth and deciding on a minimum number of pixel formats to use. In my case that wasn't that much of a problem since I only use one pixel format and the majority of the texture temporaries I wanted to create are exactly the size of a viewport scaled up to the ceiling POT.
As for FBOs, I'm not sure how they help your immediate problem with excessive texture allocation/freeing either. I use them primarily for deferred shading, doing post-processing for effects like DOF after rendering geometry in multiple passes, in a compositing-style way applied to the resulting 2D textures. I use FBOs for that naturally, but I can't think of how FBOs immediately reduce the number of textures you have to allocate/deallocate, unless you can just use one big texture attached to an FBO and render your many inputs into sections of that single offscreen output texture. In that case it wouldn't be the FBO helping directly so much as just being able to create one huge texture whose sections you can use as input/output instead of many smaller ones.

Image processing in Android using OpenGL. glReadPixels is slow and I don't understand how to get EGL_KHR_image_base included and working in my project

So I'm trying to get the camera pixel data, monitor any major changes in luminosity, and then save the image. I have decided to use OpenGL as I figured it would be quicker to do the luminosity checks in the fragment shader.
I bind a surface texture to the camera to get the image to the shader and am currently using glReadPixels to get the pixels back which I then put in a bitmap and save.
The bottleneck on glReadPixels is crazy, so I looked into other options and saw that EGL_KHR_image_base was probably my best bet as I'm using OpenGL ES 2.0.
Unfortunately I have no experience with extensions and don't know where to find exactly what I need. I've downloaded the ndk but am pretty stumped. Could anyone point me in the direction of some documentation and help explain it if I don't understand fully?
Copying pixels with glReadPixels() can be slow, though it may vary significantly depending on the specific device and pixel format. Some tests using glReadPixels() to save frames from video data (which also starts out as YUV) found that, on a Nexus 5, 96.5% of the time was spent in PNG compression and file I/O.
In some cases, the time required goes up substantially if the source and destination formats don't match. On one particular device I found that copying to RGBA, instead of RGB, reduced the time required.
The EGL calls can work but require non-public API calls. And it's a bit tricky; see e.g. this answer. (I think the comment in the edit would allow it to work, but I never got back around to trying it, and I'm not in a position to do so now.)
The other solution would be to use a pixel buffer object (PBO), which makes the read asynchronous. However, to actually benefit from the asynchronous read, you need two PBOs and use them as a ping-pong buffer.
I refer to http://www.jianshu.com/p/3bc4db687546, with which I reduced the read time for 1080p from 40 ms to 20 ms.
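A sketch of that ping-pong scheme (this assumes an OpenGL ES 3.0 context, since GL_PIXEL_PACK_BUFFER is not in core ES 2.0; the buffer count, RGBA format, and function names are choices made here for illustration):

#include <GLES3/gl3.h>
#include <cstring>

static GLuint pbo[2];
static int frameIndex = 0;

// Create two pack PBOs big enough for one RGBA frame each.
void initPbos(int width, int height)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i)
    {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Call once per frame after rendering; 'out' receives the previous frame's pixels.
void readPixelsAsync(int width, int height, void* out)
{
    const int writeIdx = frameIndex % 2;        // PBO receiving this frame
    const int readIdx  = (frameIndex + 1) % 2;  // PBO filled last frame
    ++frameIndex;

    // Kick off an asynchronous read into the "write" PBO (returns immediately).
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[writeIdx]);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // Map the other PBO; its transfer from a previous frame should be done by now.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[readIdx]);
    void* src = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height * 4,
                                 GL_MAP_READ_BIT);
    if (src)
    {
        std::memcpy(out, src, static_cast<size_t>(width) * height * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}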

Simple particle system on Android using OpenGL ES 1.0

I'm trying to put a particle system together in Android, using OpenGL. I want a few thousand particles, most of which will probably be offscreen at any given time. They're fairly simple particles visually, and my world is 2D, but they will be moving, changing colour (not size; they're 2x2), and I need to be able to add and remove them.
I currently have an array which I iterate through, handling velocity changes, managing lifecycles (killing old ones, adding new ones), and plotting them using glDrawArrays. What OpenGL is pointing at for this call, though, is a single vertex; I glTranslatex it to the relevant co-ords for each particle I want to plot, one at a time, set the colour with glColor4x, then glDrawArrays it. It works, but it's a bit slow and only works for a few hundred particles. I'm handling the clipping myself.
I've written a system to support static particles, which I have loaded into a vertex/colour array and plot using glDrawArrays, but this approach only seems suitable for particles which will never change relative location (i.e. I move all of them using glTranslate) or colour, and where I don't need to add/remove particles. A few tests on my phone (HTC Desire) suggest that trying to alter the contents of those arrays (which are ByteBuffers, pointed to by OpenGL) is extremely slow.
Perhaps there's some way of manually writing to the screen myself with the CPU. If I'm just plotting 1x1/2x2 dots on the screen, and I'm purely interested in writing and not doing any blending/antialiasing, is this an option? Would it be quicker than whatever OpenGL is doing?
(200 or so particles on a 1 GHz machine with megabytes of RAM. This is way slower than I was getting 20 years ago on a 7 MHz machine with <500 KB of RAM! I appreciate I'm using Java here, but surely there must be a better solution. Do I have to use the NDK to get the power of C++, or is what I'm after possible?)
I've been hoping somebody might answer this definitively, as I'll be needing particles on Android myself. (I'm working in C++, though; currently using glDrawArrays(), but I haven't pushed particles to the limit yet.)
I found this thread on gamedev.stackexchange.com (not Android-specific), and nobody can agree on the best approach there, but you might want to try a few things out and see for yourself.
I was going to suggest glDrawArrays(GL_POINTS, ...) with glPointSize(), but the guy asking the question there seemed unhappy with it.
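As a rough starting point, a sketch of that approach in ES 1.x might look like the following (the array sizes and names are made up; positions and colours would be updated on the CPU each frame):

#include <GLES/gl.h>

// Keep all particle positions and colours in two client-side arrays and draw
// the whole live batch with one glDrawArrays(GL_POINTS, ...) call.
static const int kMaxParticles = 2048;
static GLfloat positions[kMaxParticles * 2];   // x, y per particle
static GLubyte colours[kMaxParticles * 4];     // RGBA per particle

void drawParticles(int liveCount)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    glVertexPointer(2, GL_FLOAT, 0, positions);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, colours);

    glPointSize(2.0f);                         // 2x2 particles, as in the question
    glDrawArrays(GL_POINTS, 0, liveCount);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}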
Let us know if you find a good solution!

OpenGL ES2 for Android - Weird random juts

I'm just starting to learn OpenGL ES2 for Android, and have come across a problem where sometimes a weird jut will be rendered from my objects (see pic). This doesn't always happen, which is strange, so I'm wondering if anyone has any experience with this sort of thing and how to fix it.
http://img717.imageshack.us/i/device2h.png/
The problem here was that I was calling GLES20.glDrawArrays(type, first, count) with an incorrect count. Setting the variable correctly fixed it.
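For anyone hitting the same symptom, the count argument is the number of vertices, not the number of floats in the buffer. A sketch, shown here as native GL calls (the GLES20 Java binding has the same shape); the attribute index and the xyz-only layout are assumptions:

#include <GLES2/gl2.h>

// With a packed array of x, y, z floats, divide by the components per vertex.
void drawTriangles(const GLfloat* vertexData, int floatCount)
{
    const int coordsPerVertex = 3;                     // assumption: positions only
    const int vertexCount = floatCount / coordsPerVertex;

    glVertexAttribPointer(0, coordsPerVertex, GL_FLOAT, GL_FALSE, 0, vertexData);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);        // vertex count, not float count
}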
