In OpenGL ES 1.1, I would like to take multiple texture IDs and combine them into a single texture ID. Then I would be able to reuse this resulting texture multiple times in the future. My texture sources could be transparent PNGs that I want to stack together. This would be a huge optimization, since I wouldn't have to render multiple textures every frame.
I have seen examples like the wiki Texture_Combiners, but it doesn't seem like the results are reusable.
Also, if there is a way to mask an image with another into a reusable texture, that would be extremely helpful too.
What you want to do is render to texture. If you're writing for iOS you're guaranteed that the OES framebuffer extension will be available, so you can use that. If you're writing for Android or another platform then the extension may be available but isn't guaranteed. If it isn't available you can fall back on glCopyTexImage2D.
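A quick way to check for the extension at context creation, assuming gl is the GL10 instance your renderer receives:

    // e.g. in GLSurfaceView.Renderer.onSurfaceCreated()
    String extensions = gl.glGetString(GL10.GL_EXTENSIONS);
    boolean hasFbo = extensions != null
            && extensions.contains("GL_OES_framebuffer_object");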
So in the first case you'd create a frame buffer which has a texture as its colour buffer. Render to that, then switch to another frame buffer, and you can henceforth draw from the texture.
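On Android, the ES 1.1 framebuffer extension functions live on the GL11ExtensionPack interface. A minimal sketch of the first case, assuming texId is a texture you've already allocated at the right size with glTexImage2D:

    GL11ExtensionPack ext = (GL11ExtensionPack) gl;

    // Create a framebuffer and attach the texture as its colour buffer
    int[] fb = new int[1];
    ext.glGenFramebuffersOES(1, fb, 0);
    ext.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
    ext.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
            GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES,
            GL10.GL_TEXTURE_2D, texId, 0);

    // ... draw the stacked PNG layers here; the output lands in texId ...

    // Switch back to the default framebuffer; texId is now reusable
    ext.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);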
In the second you'd draw into whatever frame buffer you have, then use glCopyTexImage2D to copy from the current colour buffer into a texture. This will be a little slower because it's a copy, but it'll still probably be a lot faster than reading back the rendered content and then uploading it yourself.
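A sketch of that fallback, assuming you've just drawn the combined layers into the current colour buffer:

    // Copy width x height pixels from the colour buffer into texId
    gl.glBindTexture(GL10.GL_TEXTURE_2D, texId);
    gl.glCopyTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA,
            0, 0, width, height, 0);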
ES 2.0 makes the functionality contained in the framebuffer extension mandatory, so GPUs capable of ES 2.0 are very likely to support the extension under ES 1.1 as well.
I'm trying to understand graphics memory usage/flow in Android, specifically with respect to encoding frames from the camera using MediaCodec. In order to do that I'm having to understand a bunch of graphics, OpenGL, and Android terminology/concepts that are unclear to me. I've read the Android graphics architecture material, a bunch of SO questions, and a bunch of source, but I'm still confused, primarily because terms seem to have different meanings in different contexts.
I've looked at CameraToMpegTest from fadden's site here. My specific question is how MediaCodec::createInputSurface() works in conjunction with Camera::setPreviewTexture(). It seems that an OpenGL texture is created and then used to create an Android SurfaceTexture, which can then be passed to setPreviewTexture(). My questions:
What does calling setPreviewTexture() actually do, in terms of which memory buffer the frames from the camera go to?
From my understanding an OpenGL texture is a chunk of memory that is accessible by the GPU. On Android this has to be allocated using gralloc with the correct usage flags. The Android description of SurfaceTexture mentions that it allows you to "stream images to a given OpenGL texture": https://developer.android.com/reference/android/graphics/SurfaceTexture.html#SurfaceTexture(int). What does a SurfaceTexture do on top of an OpenGL texture?
MediaCodec::createInputSurface() returns an Android Surface. As I understand it, an Android Surface represents the producer side of a buffer queue, so it may be backed by multiple buffers. The API reference mentions that "the Surface must be rendered with a hardware-accelerated API, such as OpenGL ES". How do the frames captured by the camera get from the SurfaceTexture to this Surface that is input to the encoder? I see that CameraToMpegTest creates an EGLSurface using this Surface somehow, but not knowing much about EGL, I don't get this part.
Can someone clarify the usage of "render"? I see phrases such as "render to a surface" and "render to the screen", among other usages that seem to mean different things in different contexts.
Edit: Follow-up to mstorsjo's responses:
I dug into the code for SurfaceTexture and CameraClient::setPreviewTarget() in CameraService some more to try to understand the inner workings of Camera::setPreviewTexture() better, and have some more questions. To my original question about understanding the memory allocation: it seems like SurfaceTexture creates a BufferQueue, and CameraService passes the associated IGraphicBufferProducer to the platform camera HAL implementation. The camera HAL can then set the gralloc usage flags appropriately (e.g. GRALLOC_USAGE_SW_READ_RARELY | GRALLOC_USAGE_SW_WRITE_NEVER | GRALLOC_USAGE_HW_TEXTURE) and also dequeue buffers from this BufferQueue. So the buffers that the camera captures frames into are gralloc-allocated buffers with some special usage flags like GRALLOC_USAGE_HW_TEXTURE. I work on ARM platforms with unified memory architectures, so the GPU and CPU can access the same memory. What kind of impact would the GRALLOC_USAGE_HW_TEXTURE flag have on how the buffer is allocated?
The OpenGL (ES) part of SurfaceTexture seems to mainly be implemented as part of GLConsumer, and the magic seems to be in updateTexImage(). Are additional buffers allocated for the OpenGL (ES) texture, or can the same gralloc buffer that the camera filled be used? Does some memory copying have to happen here to get the camera pixel data from the gralloc buffer into the OpenGL (ES) texture? I guess I don't understand what calling updateTexImage() does.
It means that the camera provides the output frames via an opaque handle instead of in a user-provided buffer within the application's address space (if using setPreviewCallback or setPreviewCallbackWithBuffer). This opaque handle, the texture, can be used within OpenGL drawing.
Almost. In this case, the OpenGL texture is not a physical chunk of memory, but a handle to a variable chunk of memory within an EGL context. In this case, the sample code itself doesn't actually allocate or size the texture, it only creates a "name"/handle for a texture using glGenTextures - it's basically just an integer. Within normal OpenGL (ES), you'd use OpenGL functions to allocate the actual storage for the texture and fill it with content. In this setup, SurfaceTexture provides an Android-level API/abstraction to populate the texture with data (i.e. allocate storage for it with the right flags, provide it with a size and content) - allowing you to pass the SurfaceTexture to other classes that can fill it with data (either Camera, which takes a SurfaceTexture directly, or wrapped in the Surface class so it can be used in other contexts). This allows filling the OpenGL texture with content efficiently, without having to pass a buffer of raw data to your application's process and having your app upload it to OpenGL.
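For illustration, the typical sequence looks roughly like this (camera is assumed to be an open android.hardware.Camera; the external texture target is what the camera path uses):

    // Create a texture "name" - no storage is allocated at this point
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

    // Wrap it so other components can supply its content
    SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
    camera.setPreviewTexture(surfaceTexture);  // frames now feed this texture
                                               // (declare or catch the IOException)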
(Answering points 3 and 4 in reverse order.) OpenGL (ES) is a generic API for drawing. In the normal/original setup (consider a game), you'd have a number of textures for different parts of the game content (backgrounds, props, actors, etc), and then draw these to the screen with OpenGL APIs. The textures could either be more or less just copied as such to the screen, or be wrapped around a 3D object built out of triangles. This is the process called "rendering": taking the input textures and set of triangles and drawing them. In the simplest cases, you would render content straight to the screen. The GPU can usually do the same rendering into any other output buffer as well. In games, it is common to render some scene into a texture, and use that prerendered texture as part of the final render which actually ends up displayed on the screen.
An EGL context is created for passing the output from the camera into the encoder input. An EGL context is basically a context for doing OpenGL rendering. The target for the rendering is the Surface from the encoder. That is, whatever graphics are drawn using OpenGL end up in the encoder input buffer instead of on the screen. Now the scene that is drawn using OpenGL could be any sequence of OpenGL function calls, rendering a game scene into the encoder. (This is what the Android Breakout game recorder example does.) Within the context, a texture handle is created. Instead of filling the texture with content by loading a picture from disk, as in normal game graphics rendering, this is made into a SurfaceTexture, to allow Camera to fill it with the camera picture. The SurfaceTexture class provides a callback, giving a signal when the Camera has updated the content. When this callback is received, the EGL context is activated and one frame is rendered into the EGL context output target (which is the encoder input). The rendering itself doesn't do anything fancy, but more or less copies the input texture as-is straight into the output.
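Sketched in code, the per-frame handoff looks roughly like this (the names are placeholders; the real flow is in CameraToMpegTest):

    surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
        @Override
        public void onFrameAvailable(SurfaceTexture st) {
            // Signal the EGL thread (needs proper synchronization in real code)
            frameAvailable = true;
        }
    });

    // On the thread where the EGL context targeting the encoder's Surface is current:
    surfaceTexture.updateTexImage();               // latch the newest camera frame
    surfaceTexture.getTransformMatrix(texMatrix);  // per-frame texture coord transform
    drawFullscreenQuad(texMatrix);                 // placeholder: copies texture to target
    EGL14.eglSwapBuffers(eglDisplay, encoderSurface);  // submit the frame to the encoder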
This might all sound quite roundabout, but it does give a few benefits:
The actual raw bits of the camera frames never need to be handled directly within the application code (and potentially never within the application's process and address space at all). For low resolutions, this isn't much of an issue, but the setPreviewCallback API is a bottleneck when it comes to higher resolutions.
You can do color adjustments and anything else you can do within OpenGL, almost for free with GPU acceleration.
I'm programming an Android game. To reduce the number of textures that need to be loaded (OpenGL ES 2.0) I've created several spritesheets of size 1024x1024. Some frames of the same animation are on different spritesheets. Now my question is whether that is bad for performance, since I have to bind (glBindTexture()) a different texture for each animation frame.
Yes, changing the texture binding has some performance impact compared to not doing it. How much would probably be best determined by empirical testing.
If you can switch to OpenGL ES 3, you can use a 2D texture array (GL_TEXTURE_2D_ARRAY) rather than separate textures.
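A rough sketch of that, assuming every sheet is 1024x1024 RGBA and sheetPixels holds the decoded pixel data:

    // Allocate one array texture with a layer per sprite sheet
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D_ARRAY, texId);
    GLES30.glTexImage3D(GLES30.GL_TEXTURE_2D_ARRAY, 0, GLES30.GL_RGBA8,
            1024, 1024, sheetCount, 0,
            GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, null);
    for (int layer = 0; layer < sheetCount; layer++) {
        // The shader can then select a layer by index, with no rebinds
        GLES30.glTexSubImage3D(GLES30.GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                1024, 1024, 1, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE,
                sheetPixels[layer]);
    }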
However, if that's not an option, why not simply bind all your sprite sheets at once? If you have fewer sheets than GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, you don't need to change the texture binding; just provide some way of letting the shader know which bound texture it should use.
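In ES 2.0 fragment shaders you can't index a sampler array dynamically, but you can keep every sheet bound to its own texture unit and just repoint a single sampler uniform, which is cheaper than a rebind. A sketch, assuming uTexture is the sampler uniform's location:

    // At load time: bind each sheet to its own texture unit, once
    for (int i = 0; i < sheetIds.length; i++) {
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0 + i);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, sheetIds[i]);
    }

    // Per draw: select the sheet with a uniform update instead of glBindTexture
    GLES20.glUniform1i(uTexture, sheetIndexForCurrentFrame);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);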
When I decode a video to a surface, I want to save the frames I want as bitmap/JPEG files. I don't want to draw on the screen; I just want to save the content of the SurfaceTexture as an image file.
You have to render the texture.
If it were a normal texture, and you were using GLES 2 or later, you could attach it to an FBO and read directly from that. A SurfaceTexture is backed by an "external texture", and might be in a format that the GL driver doesn't support a full set of operations on, so you can't do that. You need to render it, and read the result.
FWIW, the way you go about saving the frame can have a significant performance impact. A full example demonstrating the use of MediaExtractor, MediaCodec, glReadPixels(), and PNG file creation is now up on bigflake (ExtractMpegFramesTest).
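The core of the readback looks roughly like this (a sketch with error handling omitted; note that GL's origin is bottom-left, so the result comes out vertically flipped):

    // Read the rendered frame back from the current framebuffer
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    buf.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();

    // RGBA bytes match ARGB_8888's in-memory layout, so a raw copy works
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.copyPixelsFromBuffer(buf);
    try (FileOutputStream out = new FileOutputStream(fileName)) {
        bmp.compress(Bitmap.CompressFormat.PNG, 100, out);
    }
    bmp.recycle();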
I've been looking at this lately, on the Android platform. Summing up the various options and why they are/aren't applicable.
glReadPixels()
The only option Android Java coders currently really have. Said to be slow. Reads from a framebuffer, not a texture (so one must render the texture to an internal frame buffer first, unless one wants to record the screen itself). Okay. Got things to work.
EGL_KHR_image_base()
An extension that seems to be available at the native (NDK) level, but not in Java.
glGetTexImage()
Looked promising, but not available in the OpenGL ES 2.0 variant.
Pixel Buffer Objects
Probably the 'right' way to do things, but requires OpenGL ES 3.0 (i.e. selected Android 4.3+ devices).
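For completeness, a rough sketch of the PBO route on ES 3.0 (the readback goes into a buffer object, which you map later instead of stalling immediately):

    // One-time setup: a pixel-pack buffer big enough for one frame
    int[] pbo = new int[1];
    GLES30.glGenBuffers(1, pbo, 0);
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
    GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, width * height * 4,
            null, GLES30.GL_STREAM_READ);

    // With a pack buffer bound, glReadPixels takes an offset and returns quickly
    GLES30.glReadPixels(0, 0, width, height,
            GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);

    // Later (ideally a frame or two on), map the buffer to reach the pixels
    Buffer mapped = GLES30.glMapBufferRange(GLES30.GL_PIXEL_PACK_BUFFER,
            0, width * height * 4, GLES30.GL_MAP_READ_BIT);
    // ... copy out of `mapped` ...
    GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);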
I'm not saying this is adding any info that wouldn't be available elsewhere. But having so many seemingly similar options (that still wouldn't work) was confusing. I'm not an OpenGL expert, so corrections to any mistakes above are welcome.
I'm working on a photo app that uses OpenGL ES 2.0 with a Renderer, an off-screen GLSurfaceView and some shader scripts (*.fsh and *.vsh).
After loading the shader scripts from the Assets folder, preparing the GL surface and context, etc., we finally call GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4); and it works quite nicely, generating the bitmaps with the effects.
The problem, OF COURSE, is the memory limitation: any large enough bitmap will produce an OutOfMemoryError, with the threshold varying by device (not so big for old Gingerbread phones, very large images for the Nexus 10).
I'm not so knowledgeable in OpenGL, and the way I know to deal with very large amounts of data is to use a stream, so it's not necessary to hold it all in memory.
So the question is: is there a way to apply an OpenGL shader/renderer through a Stream instead of an in-memory Bitmap? If yes, any pointer to a link or basic procedure?
Not exactly sure what you mean by Stream, but here's another solution: split rendering up into multiple passes. For instance, if you have a 512x512 texture and a corresponding quad to texture, but can only afford to upload a 256x256 texture due to memory restrictions, do the following:
split up the texture into 4 chunks
create a single, fitting texture object
for each chunk
upload the current chunk into the texture object's data store
draw 1/4 of the quad (e.g. the top-left quarter) and texture it accordingly
Note that the above example assumes a 512x512 texture and screen size. In any case, I think you get the idea; a rough sketch of the loop follows.
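Here loadChunk() and drawQuarterQuad() are hypothetical helpers (BitmapRegionDecoder is one way to decode just a region of a large source image):

    // One reusable 256x256 texture object
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 256, 256, 0,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);

    for (int ty = 0; ty < 2; ty++) {
        for (int tx = 0; tx < 2; tx++) {
            // Hypothetical: decodes only this 256x256 region of the source
            Buffer chunk = loadChunk(tx, ty);
            GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, chunk);
            // Hypothetical: draws the quarter of the quad this tile covers
            drawQuarterQuad(tx, ty);
        }
    }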
Obviously, this is the usual memory/performance trade-off. You circumvent memory restrictions by using more bandwidth for transfers and doing more rendering.
Note: I'm a desktop GL guy and I'm not quite sure how memory is split up between the GPU and the rest, or if there even is some dedicated VRAM. I assume you've got a limited amount available for GL resources which is even smaller than the overall system memory.
Is there any difference in terms of rendering overhead, if we simply use a color (e.g., green) or load a texture image file (e.g., green.png file) for a 3D object?
Shouldn't OpenGL ES ultimately create a texture even for a colored object?
I am using Android API level 8 along with the emulator, and the target is an actual Android phone.
Why would it finally create a texture for a colored object?
I can't speak to every conceivable implementation, but it seems to me that the uniform would be a much better solution. The uniform value is likely cached very locally in the datapath with quick access, versus a texture, which has to interpolate the texture coordinates, retrieve the texture from VRAM, and sample it. I'm not sure how that could be as fast as just reading a couple of floats from the uniform.
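For illustration, here are the two fragment shader variants as Java string constants (hypothetical, but representative of each path):

    // Solid colour: one uniform read per fragment
    static final String SOLID_COLOR_FS =
            "precision mediump float;\n"
            + "uniform vec4 uColor;\n"
            + "void main() { gl_FragColor = uColor; }\n";

    // Textured: interpolated coordinates plus a memory fetch per fragment
    static final String TEXTURED_FS =
            "precision mediump float;\n"
            + "uniform sampler2D uTexture;\n"
            + "varying vec2 vTexCoord;\n"
            + "void main() { gl_FragColor = texture2D(uTexture, vTexCoord); }\n";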