I've written code to convert YUV to RGB using OpenGL ES 3.0. Instead of displaying the converted image on the glViewport surface, I want to store it in a Bitmap in memory.
I used an OpenGL framebuffer and renderbuffer (render to texture), and I'm trying to get the output with glReadPixels(). I am getting output, but I don't know how to get that output into a Bitmap. Please help.
If you have image data in client (system) memory and want to send it to texture memory, you would use glTexImage2D (I'm assuming you have a 2D image).
However, if I'm understanding your use case, it's a bit strange to render to a renderbuffer, use glReadPixels to read the results into system memory, and then use that data to create a new texture. Generally, you would just attach a texture as the color output of the framebuffer and use that directly. This bypasses the round trip through system memory, which is expensive.
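As a rough sketch of what that setup can look like in Java (using the android.opengl.GLES30 bindings), assuming a current GLES 3.0 context on the calling thread; the class and method names here are only illustrative:

```java
import android.opengl.GLES30;

public final class RenderTargetSketch {
    // Sketch: create a framebuffer whose color output is a texture, so the
    // converted result stays in texture memory and can be sampled directly.
    // Returns {framebuffer id, texture id}.
    public static int[] createColorTarget(int width, int height) {
        int[] ids = new int[2];
        GLES30.glGenTextures(1, ids, 0);
        GLES30.glGenFramebuffers(1, ids, 1);
        int texId = ids[0];
        int fboId = ids[1];

        // Allocate an empty RGBA texture to receive the YUV->RGB output.
        GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, texId);
        GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA, width, height, 0,
                GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, null);
        GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER,
                GLES30.GL_LINEAR);
        GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER,
                GLES30.GL_LINEAR);

        // Attach the texture as the framebuffer's color output (no renderbuffer needed).
        GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, fboId);
        GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0,
                GLES30.GL_TEXTURE_2D, texId, 0);
        if (GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER)
                != GLES30.GL_FRAMEBUFFER_COMPLETE) {
            throw new RuntimeException("Framebuffer is not complete");
        }
        return new int[] { fboId, texId };
    }
}
```

After drawing the YUV-to-RGB pass with this framebuffer bound, the texture holds the RGB result and can be used by later passes without ever touching system memory.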
Related
I am currently trying to develop a video player on Android, but am struggling with color formats.
Context: I extract and decode a video through the standard combination of MediaExtractor/MediaCodec. Because I need the extracted frames to be available as OpenGL ES textures (RGB), I set up my decoder (MediaCodec) so that it feeds an external GLES texture (GL_TEXTURE_EXTERNAL_OES) through a SurfaceTexture. I know the data output by my HW decoder is in the NV12 (YUV420SemiPlanar) format, and I need to convert it to RGB by rendering it (with a fragment shader doing the conversion).
MediaCodec ---> GLES External Texture (NV12) [1] ---> Rendering ---> GLES Texture (RGB)
The point where I struggle is: how do I access the specific Y, U, and V values contained in the GLES External Texture ([1])? I have no idea how the GLES texture memory is laid out, nor how to access it (apart from the texture() and texelFetch() GLSL functions).
Is there a way to access the data as I would access a simple array (pointer + offset)?
Did I overthink the whole thing?
Does either Surface or SurfaceTexture take care of the conversion? (I don't think so.)
Does either Surface or SurfaceTexture change the memory layout of the data while populating the GLES External Texture ([1]), so that components can be accessed through GLES texture access functions?
Yes, I would say you're overthinking it. Did you test things and run into an actual issue that you could describe, or is this only theoretical so far?
Even though the raw decoder itself outputs NV12, this detail is hidden when you access it via a SurfaceTexture; you get to access it like any RGB texture. Since the physical memory layout of the texture is hidden, you don't really know whether it actually is converted all at once before you get it, or whether the texture accessors do an on-the-fly conversion each time you sample it. As far as I know, the implementation is free to do it either way, and the implementation details of how it is done are not observable through the public APIs at all.
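For illustration, a fragment shader that samples such an external texture looks just like a plain RGB lookup; this is only a sketch of the idea, with vTexCoord assumed to come from your vertex shader:

```java
// The external texture is declared as samplerExternalOES, but sampling it
// returns RGB(A) just like a regular sampler2D -- the NV12 layout never shows up.
private static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES uTexture;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(uTexture, vTexCoord);\n" +
        "}\n";
```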
I am trying to get each frame from a TextureView; unfortunately, calling:
textureView.getBitmap();
results in slow performance. Is there a faster way to obtain a bitmap? Would it be better to use the NDK instead?
Looking for actual examples
A TextureView receives frames on a SurfaceTexture, which takes frames sent to its Surface and converts them to a GLES texture. To get the pixel data out, the texture must be rendered to a framebuffer, then read out with glReadPixels(). The pixel data can then be wrapped with a Bitmap object (which may or may not involve copying the pixel data).
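A rough sketch of that readback path, assuming a current GLES context and that the frame has already been rendered into the currently bound framebuffer (class and method names are illustrative):

```java
import android.graphics.Bitmap;
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public final class FrameGrabSketch {
    // Sketch: read the current framebuffer back into client memory and wrap
    // the pixel data in a Bitmap.
    public static Bitmap readFrame(int width, int height) {
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
        buf.rewind();

        // ARGB_8888 Bitmaps store RGBA bytes in memory, so the buffer can be
        // copied in directly. GL rows come back bottom-up, so the result may
        // need a vertical flip before display.
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);
        return bmp;
    }
}
```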
Using the NDK isn't going to do you much good, as all of the code that needs to run quickly is already implemented natively.
You may see some improvement by sending the data directly to a SurfaceTexture and doing the GLES work yourself, but presumably you want to display the incoming frames in the TextureView, so all you'd potentially save is the Bitmap overhead (which may or may not be significant).
It might help if you explained in your question where the frames are coming from and what it is you want to do with them.
I'm making an Android game with OpenGL ES. I want to capture what is rendered onto the screen via a FloatBuffer and save it for later use. For example, if this is the output:
I want this as the result (as a PNG image):
How can I do this?
What is on screen won't be a floating-point buffer; it's typically RGBA8 unorm, 32 bits per pixel.
Capture via glReadPixels() to fetch the raw RGBA data; you'll have to supply the raw-data-to-PNG save functionality yourself, as that's not part of OpenGL ES.
Note that this is a relatively expensive operation, especially at high screen resolutions, so don't expect to do this at interactive frame rates.
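Once the pixels have been read back and wrapped in a Bitmap (for example with the glReadPixels approach sketched earlier), the PNG encoding itself can be done with the standard Android APIs; a minimal sketch, with the output path and names purely illustrative:

```java
import android.graphics.Bitmap;
import java.io.FileOutputStream;
import java.io.IOException;

public final class PngSaveSketch {
    // Sketch: encode an already-captured Bitmap as a PNG file.
    public static void savePng(Bitmap bmp, String path) {
        try (FileOutputStream out = new FileOutputStream(path)) {
            // PNG is lossless; the quality argument is ignored for this format.
            bmp.compress(Bitmap.CompressFormat.PNG, 100, out);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```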
The DecodeEditEncodeTest.java example on bigflake.com demonstrates simple editing (swapping the color channels with an OpenGL fragment shader).
Here, I want to do more complicated image processing (such as adding something to the frame) on each frame.
Does that mean I cannot use a Surface and need to use a buffer instead?
But EncodeDecodeTest.java says:
(1) Buffer-to-buffer. Buffers are software-generated YUV frames in ByteBuffer objects, and decoded to the same. This is the slowest (and least portable) approach, but it allows the application to examine and modify the YUV data.
(2) Buffer-to-surface. Encoding is again done from software-generated YUV data in ByteBuffers, but this time decoding is done to a Surface. Output is checked with OpenGL ES, using glReadPixels().
(3) Surface-to-surface. Frames are generated with OpenGL ES onto an input Surface, and decoded onto a Surface. This is the fastest approach, but may involve conversions between YUV and RGB.
If I use buffer-to-buffer, which the above says is the slowest and least portable approach, how slow would it actually be?
Or should I use surface-to-surface and read the pixels out from the surface?
Which way is more feasible?
Any example available?
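For orientation only, the buffer-to-buffer path from (1) boils down to an output loop like the following sketch, where each decoded YUV frame is visible to the application as a ByteBuffer; the exact layout is device dependent, and the names here are illustrative:

```java
import android.media.MediaCodec;
import java.nio.ByteBuffer;

public final class BufferOutputSketch {
    // Sketch of the buffer-to-buffer output side: the decoder is configured
    // WITHOUT an output Surface, so each decoded YUV frame is handed back as
    // a ByteBuffer the application can inspect or modify.
    public static void drainOneFrame(MediaCodec decoder) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int index = decoder.dequeueOutputBuffer(info, 10_000 /* timeout, microseconds */);
        if (index >= 0) {
            ByteBuffer yuvFrame = decoder.getOutputBuffer(index);
            // ... examine/modify the YUV data here; the exact layout (planar vs.
            //     semi-planar, stride, padding) depends on the reported color format ...
            decoder.releaseOutputBuffer(index, false /* render=false: no Surface */);
        }
    }
}
```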
I'm working on a photo app that uses OpenGL ES 2.0 with a Renderer, an off-screen GLSurfaceView, and some shader scripts (*.fsh and *.vsh).
After loading the shader scripts from the assets folder, preparing the GL surface and context, etc., we finally call GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4); it works quite nicely and generates the bitmaps with the effects applied.
The problem, of course, is memory: any large enough bitmap (the threshold depends on the device; not very big on old Gingerbread phones, very large images on a Nexus 10) will produce an OutOfMemoryError.
I'm not very knowledgeable in OpenGL, and the way I know to deal with very large amounts of data is to use a stream, so it's not necessary to hold it all in memory.
So the question is: is there a way to apply an OpenGL shader/renderer through a stream instead of an in-memory Bitmap? If yes, any pointer to a link or basic procedure?
Not exactly sure what you mean by stream, but here's another solution: split rendering up into multiple passes. For instance, if you have a 512x512 texture and a corresponding quad to texture, but can only afford to upload 256x256 at a time due to memory restrictions, do the following:
split the texture into 4 chunks
create a single, fitting texture object
for each chunk:
    upload the current chunk into the texture object's data store
    draw the corresponding 1/4 of the quad (e.g. the top-left quarter) and texture it accordingly
Note that the above example assumes a 512x512 texture and screen size. In any case, I think you get the idea.
Obviously, this is the usual memory/performance trade-off: you circumvent memory restrictions by using more bandwidth for transfers and doing more rendering.
Note: I'm a desktop GL guy, and I'm not quite sure how memory is split up between the GPU and the rest, or whether there even is some dedicated VRAM. I assume you've got a limited amount available for GL resources, which is even smaller than the overall system memory.
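As a sketch of that tiling loop on Android (assuming the source image can be decoded region by region with BitmapRegionDecoder; drawQuadSection() is a placeholder for your existing draw code):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import android.opengl.GLES20;
import android.opengl.GLUtils;
import java.io.IOException;

public final class TiledUploadSketch {
    // Sketch: decode and upload the source image one chunk at a time so the
    // full-resolution Bitmap is never held in memory all at once.
    public static void renderInChunks(String imagePath) throws IOException {
        BitmapRegionDecoder regionDecoder = BitmapRegionDecoder.newInstance(imagePath, false);
        int chunkW = regionDecoder.getWidth() / 2;
        int chunkH = regionDecoder.getHeight() / 2;

        // One fitting texture object, re-used for every chunk.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        for (int row = 0; row < 2; row++) {
            for (int col = 0; col < 2; col++) {
                Rect region = new Rect(col * chunkW, row * chunkH,
                        (col + 1) * chunkW, (row + 1) * chunkH);
                Bitmap chunk = regionDecoder.decodeRegion(region, null);

                // Upload this chunk into the texture object's data store.
                GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, chunk, 0);
                chunk.recycle();

                // Placeholder: draw only the quarter of the quad that matches this chunk.
                drawQuadSection(row, col);
            }
        }
        regionDecoder.recycle();
    }

    private static void drawQuadSection(int row, int col) {
        // Placeholder: issue your GLES20.glDrawArrays(...) call for this quarter,
        // with vertex/texture coordinates set up for (row, col).
    }
}
```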