How to read pixel data from the screen in Android using GLES 3

I am using GLES 3 to create 3D textures and render data in a SurfaceView. Is it possible to read this data back from the screen/texture somehow?

You can read what you've rendered with glReadPixels(), but that tends to be slow.
Depending on what you're trying to do, you may get better results by rendering to an FBO.
You can find some example code in Grafika; see for example EglSurfaceBase#saveFrame().
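For reference, here is a minimal Java sketch of the glReadPixels() path, assuming a GLES 3 context is current on the calling thread and that width and height match the surface you rendered to:

import android.opengl.GLES30;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Call this on the GL thread after drawing and before eglSwapBuffers(),
// so the buffer you read from still holds the rendered frame.
public static ByteBuffer readFrame(int width, int height) {
    ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
            .order(ByteOrder.nativeOrder());
    // GL_RGBA / GL_UNSIGNED_BYTE is the combination every implementation must support.
    GLES30.glReadPixels(0, 0, width, height,
            GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, pixels);
    pixels.rewind();
    return pixels;  // Rows come back bottom-up (GL's origin is the lower-left corner).
}

The call stalls until the GPU has finished rendering the frame, which is where most of the slowness comes from.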


Can I reuse a SurfaceView to encode after displaying to the screen?

I am rendering with OpenGL ES 2 and can either display the output on a SurfaceView or encode it to MP4 using MediaCodec.
However, I can only do one at a time. I can obviously draw with OpenGL ES 2 onto two separate surfaces, but that would be a really inefficient use of the GPU.
What I want is some sort of reference counting so I can reuse the buffer to both draw on the screen and encode the single OpenGL ES 2 output, like the camera service does in the Shared Surfaces concept.
How can one do both display and encoding of a buffer? Is there some sort of tee element (like in GStreamer) in Android?
There is no tee component available at the moment.
But you can avoid rendering twice by drawing to a Framebuffer Object and then copying the frame both to the screen and to the encoder.
Here's an example (pretty old).
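To make the idea concrete, here is a rough Java sketch of the render-once, draw-twice approach. The EGL surfaces, the FBO fields, and the helpers drawScene() and drawTexturedQuad() are placeholders for your own setup and rendering code, not an existing API:

import android.opengl.EGL14;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLExt;
import android.opengl.EGLSurface;
import android.opengl.GLES20;

// Existing EGL/GL state, created during setup (not shown here):
private EGLDisplay mEglDisplay;
private EGLContext mEglContext;
private EGLSurface mScreenSurface;   // eglCreateWindowSurface() on the SurfaceView's Surface
private EGLSurface mEncoderSurface;  // eglCreateWindowSurface() on MediaCodec.createInputSurface()
private int mFboId, mFboTextureId;   // FBO with a color texture attachment

void renderFrame(long presentationTimeNs) {
    // 1. Render the scene once, into the FBO-backed texture.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFboId);
    drawScene();                                   // hypothetical: your normal drawing code
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

    // 2. Draw a full-screen quad with that texture to the display surface.
    EGL14.eglMakeCurrent(mEglDisplay, mScreenSurface, mScreenSurface, mEglContext);
    drawTexturedQuad(mFboTextureId);               // hypothetical: blit helper
    EGL14.eglSwapBuffers(mEglDisplay, mScreenSurface);

    // 3. Draw the same texture again to the encoder's input surface.
    EGL14.eglMakeCurrent(mEglDisplay, mEncoderSurface, mEncoderSurface, mEglContext);
    drawTexturedQuad(mFboTextureId);
    EGLExt.eglPresentationTimeANDROID(mEglDisplay, mEncoderSurface, presentationTimeNs);
    EGL14.eglSwapBuffers(mEglDisplay, mEncoderSurface);
}

The expensive scene rendering happens only once; the two extra draws are cheap full-screen texture copies.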
You can't make your surfaceView bigger than the screen.
There are several indirect ways to do this, but you cannot directly reuse a SurfaceView to encode after displaying to the screen.

Extract point clouds with colour using Project Tango, i.e. getting the current camera frame

I am trying to produce a point cloud where each point has a colour. I can get just the point cloud or I can get the camera to take a picture, but I need them to be as simultaneous as possible. If I could look up an RGB image with a timestamp or call a function to get the current frame when onXYZijAvailable() is called I would be done. I could just go over the points, find out where it would intersect with the image plane and get the colour of that pixel.
As it is now I have not found any way to get the pixel info of an image or get coloured points. I have seen AR apps where the camera is connected to the CameraView and then things are rendered on top, but the camera stream is never touched by the application.
According to this post it should be possible to get the data I want and synchronize the point cloud and the image plane by a simple transformation. This post is also saying something similar. However, I have no idea how to get the RGB data. I can't find any open-source projects or tutorials.
The closest I have gotten is finding out when a frame is ready by using this:
public void onFrameAvailable(final int cameraId) {
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        // Get the new RGB frame somehow.
    }
}
I am working with the Java API and I would very much like to not delve into JNI and the NDK if at all possible. How can I get the frame that most closely matches the timestamp of my current point cloud?
Thank you for your help.
Update:
I implemented a CPU version of it and, even after optimising it a bit, I only managed to get 0.5 FPS on a small point cloud. This is also due to the fact that the colours have to be converted from the Android-native NV21 colour space to the GPU-native RGBA colour space. I could have optimised it further, but I am not going to get a real-time effect with this. The CPU on an Android device simply cannot perform well enough. If you want to do this on more than a few thousand points, go for the extra hassle of using the GPU or do it in post.
Tango normally delivers color pixel data directly to an OpenGLES texture. In Java, you create the destination texture and register it with Tango.connectTextureId(), then in the onFrameAvailable() callback you update the texture with Tango.updateTexture(). Once you have the color image in a texture, you can access it using OpenGLES drawing calls and shaders.
If your goal is to color a Tango point cloud, the most efficient way to do this is in the GPU. That is, instead of pulling the color image out of the GPU and accessing it in Java, you instead pass the point data into the GPU and use OpenGLES shaders to transform the 3D points into 2D texture coordinates and look up the colors from the texture. This is rather tricky to get right if you're doing it for the first time but may be required for acceptable performance.
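As a sketch of what that shader pair might look like (illustrative only: u_DepthToColor and the intrinsics uniforms u_Fx/u_Fy/u_Cx/u_Cy/u_Width/u_Height are placeholder names you would fill in from Tango's pose data and TangoCameraIntrinsics, and the projection shown is a plain pinhole model without distortion):

// Vertex shader: transform each depth point into the color camera's frame,
// project it with the pinhole intrinsics, and hand the resulting texture
// coordinate to the fragment shader.
private static final String VERTEX_SHADER =
        "uniform mat4 u_Mvp;\n" +                  // for placing the point on screen
        "uniform mat4 u_DepthToColor;\n" +         // depth frame -> color camera frame
        "uniform float u_Fx, u_Fy, u_Cx, u_Cy, u_Width, u_Height;\n" +
        "attribute vec4 a_Position;\n" +
        "varying vec2 v_TexCoord;\n" +
        "void main() {\n" +
        "  vec4 p = u_DepthToColor * a_Position;\n" +
        "  float u = (p.x / p.z) * u_Fx + u_Cx;\n" +
        "  float v = (p.y / p.z) * u_Fy + u_Cy;\n" +
        "  v_TexCoord = vec2(u / u_Width, v / u_Height);\n" +
        "  gl_PointSize = 3.0;\n" +
        "  gl_Position = u_Mvp * a_Position;\n" +
        "}\n";

// Fragment shader: sample the camera image (an external OES texture) at the
// projected coordinate.
private static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "uniform samplerExternalOES u_CameraTexture;\n" +
        "varying vec2 v_TexCoord;\n" +
        "void main() {\n" +
        "  gl_FragColor = texture2D(u_CameraTexture, v_TexCoord);\n" +
        "}\n";

The point of doing it this way is that the colour image never leaves the GPU.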
If you really want direct access to pixel data without using the C API,
you need to render the texture into a buffer and then read the color data from the buffer. It's kind of tricky if you aren't used to OpenGL and writing shaders, but there is an Android Studio app that demonstrates that here, and is further described in this answer. This project demonstrates both how to draw the camera texture to the screen, and how to draw to an offscreen buffer and read RGBA pixels.
If you really want direct access to pixel data but decide that the NDK might be less painful than OpenGLES, the C API has TangoService_connectOnFrameAvailable() which gives you pixel data directly, i.e. without going through OpenGLES. Note, however, that the format of the pixel data is NV21, not RGB or RGBA.
I am doing this now by capturing depth with onXYZijAvailable() and images with onFrameAvailable(). I am using native code, but the same should work in Java. For every onFrameAvailable() I get the image data and put it in a preallocated ring buffer. I have 10 slots and a counter/pointer. Each new image increments the counter, which loops back from 9 to 0. The counter is an index into an array of images. I save the image timestamp in a similar ring buffer. When I get a depth image, onXYZijAvailable(), I grab the data and the timestamp. Then I go back through the images, starting with the most recent and moving backwards, until I find the one with the closest timestamp to the depth data. As you mentioned, you know that the image data will not be from the same frame as the depth data because they use the same camera. But, using these two calls (in JNI) I get within +/- 33msec, i.e. the previous or next frame, on a consistent basis.
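A rough Java sketch of that ring-buffer matching (the field names and the synchronized wrappers are my own; the native version works the same way):

// Shared ring buffer, written from onFrameAvailable() and read from onXYZijAvailable().
private static final int SLOTS = 10;
private final byte[][] imageSlots = new byte[SLOTS][];
private final double[] imageTimestamps = new double[SLOTS];
private int writeIndex = 0;

// From onFrameAvailable(): copy the frame data and remember its timestamp.
synchronized void storeImage(byte[] frameData, double timestamp) {
    imageSlots[writeIndex] = frameData.clone();
    imageTimestamps[writeIndex] = timestamp;
    writeIndex = (writeIndex + 1) % SLOTS;
}

// From onXYZijAvailable(): walk back from the newest slot and return the image
// whose timestamp is closest to the depth timestamp.
synchronized byte[] findClosestImage(double depthTimestamp) {
    int bestSlot = -1;
    double bestDiff = Double.MAX_VALUE;
    for (int i = 0; i < SLOTS; i++) {
        int slot = (writeIndex - 1 - i + SLOTS) % SLOTS;
        if (imageSlots[slot] == null) {
            continue;
        }
        double diff = Math.abs(imageTimestamps[slot] - depthTimestamp);
        if (diff < bestDiff) {
            bestDiff = diff;
            bestSlot = slot;
        }
    }
    return bestSlot >= 0 ? imageSlots[bestSlot] : null;
}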
I have not checked how close it would be to just naively use the most recently updated RGB image frame, but that should be pretty close.
Just make sure to use onXYZijAvailable() to drive the timing, because depth updates more slowly than RGB.
I have found that writing individual images to the file system using OpenCV::imwrite() does not keep up with the real-time rate of the camera. I have not tried streaming to a file using the video codec; that should be much faster. Depending on what you plan to do with the data in the end, you will need to be careful about how you store your results.

How to save SurfaceTexture as bitmap

When I decode a video to a surface, I want to save the frames I choose as bitmap/JPEG files. I don't want to draw on the screen; I just want to save the content of the SurfaceTexture as an image file.
You have to render the texture.
If it were a normal texture, and you were using GLES 2 or later, you could attach it to an FBO and read directly from that. A SurfaceTexture is backed by an "external texture", and might be in a format that the GL driver doesn't support a full set of operations on, so you can't do that. You need to render it, and read the result.
FWIW, the way you go about saving the frame can have a significant performance impact. A full example demonstrating the use of MediaExtractor, MediaCodec, glReadPixels(), and PNG file creation is now up on bigflake (ExtractMpegFramesTest).
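For illustration, here is a rough Java sketch of the read-back-and-save step, loosely modeled on Grafika's EglSurfaceBase#saveFrame(). It assumes the frame has already been rendered into whatever surface or FBO is current, and it skips the vertical flip (GL's origin is bottom-left, so the saved image is upside-down without one):

import android.graphics.Bitmap;
import android.opengl.GLES20;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Read the current framebuffer and write it out as a PNG file.
public static void savePixels(String filePath, int width, int height) throws IOException {
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4)
            .order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();

    // On little-endian devices GL's RGBA byte order matches ARGB_8888's
    // in-memory layout, so the buffer can be copied into the Bitmap directly.
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.copyPixelsFromBuffer(buf);
    try (FileOutputStream out = new FileOutputStream(filePath)) {
        bmp.compress(Bitmap.CompressFormat.PNG, 100, out);
    } finally {
        bmp.recycle();
    }
}

Swap PNG for Bitmap.CompressFormat.JPEG if JPEG output is what you want.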
I've been looking at this lately, on the Android platform. Summing up the various options and why they are/aren't applicable.
glReadPixels()
The only option Android Java coders currently really have. Said to be slow. Reads from a framebuffer, not a texture (so one must render the texture to an internal frame buffer first, unless one wants to record the screen itself). Okay. Got things to work.
EGL_KHR_image_base()
An extension that seems to be available at the native (NDK) level, but not in Java.
glGetTexImage()
Looked promising, but not available in the OpenGL ES 2.0 variant.
Pixel Buffer Objects
Probably the 'right' way to do things, but requires OpenGL ES 3.0 (i.e. selected Android 4.3+ devices); see the sketch after this list.
I'm not saying this is adding any info that wouldn't be available elsewhere. But having so many seemingly similar options (that still wouldn't work) was confusing. I'm not an OpenGL expert so any mistakes above are gladly corrected.
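To flesh out that last option a bit, here is a rough GLES 3.0 sketch of a PBO read-back. Note that the Java binding for the offset variant of glReadPixels() only appeared in API level 24, so on older 4.3+ devices this particular call has to be made from native code; width and height are assumed to match the framebuffer being read:

// One-time setup: create a PBO big enough for one RGBA frame.
int[] pbo = new int[1];
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, width * height * 4, null,
        GLES30.GL_STREAM_READ);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);

// Per frame: start an asynchronous read into the PBO...
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glReadPixels(0, 0, width, height,
        GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);   // 0 = offset into the PBO

// ...then, ideally a frame or two later, map the buffer and copy the pixels out.
ByteBuffer mapped = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, width * height * 4,
        GLES30.GL_MAP_READ_BIT);
// ... consume 'mapped' here ...
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);

Deferring the map by a frame is what avoids the pipeline stall that plain glReadPixels() causes.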

Is it possible to use GPU resources on Gingerbread for an app?

I'm creating an image-processing app and would like to use the GPU available on my phone (running Gingerbread). Is it possible to do so?
I googled a lot on this issue but, much to my surprise, couldn't find any answers.
Yes, if the device supports GLES2, you can create a shader that implements the functionality. It would go something like:
Create a fragment shader that implements the image processing.
Create a texture holding the image you want to process and pass it to the shader.
Pass parameters through uniforms or pack them into another texture and pass to the shader.
Render to a framebuffer.
Get the resulting image out via glReadPixels().
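For example, the fragment shader from the first step could be as simple as a grayscale conversion (the uniform and varying names here are arbitrary; v_TexCoord is passed through from a trivial vertex shader that draws a full-screen quad):

// Grayscale example: sample the input image and weight the channels with the
// usual Rec. 601 luminance coefficients.
private static final String GRAYSCALE_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D u_Texture;\n" +
        "varying vec2 v_TexCoord;\n" +
        "void main() {\n" +
        "  vec4 c = texture2D(u_Texture, v_TexCoord);\n" +
        "  float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" +
        "  gl_FragColor = vec4(y, y, y, c.a);\n" +
        "}\n";

The last step is then a plain glReadPixels() call on the framebuffer you rendered into.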

How to display multiple pictures on a GLSurfaceView using the Android NDK

I am able to display an image using OpenGL ES in the Android NDK. Now I want to display two or four images, using multithreading, in OpenGL ES through the Android NDK.
I have searched extensively and came to understand that a SurfaceView can only have one picture. So what is the way to display multiple pictures on a GLSurfaceView?
Can anybody please tell me how it can be done?
Thanks in advance.
It seems there are several issues here.
First of all, if you are trying to display "pictures" through OpenGL (ES), you mean textures (the OpenGL-readable format for "pictures" or "images"), right? If you are not sure what I am talking about, find a tutorial on displaying images using OpenGL ES. Learn how to display just one and you will be able to display four.
a Surfaceview can only have one picture
You may have misunderstood something. A GLSurfaceView can draw as many textures as your video memory can handle.
Basically, to display your textures, you will draw 2 or 4 quads and bind the appropriate textures to them.
About the multithreading, I guess you gather your pictures asynchronously. Just wait for a complete picture, and while in the OpenGL thread, create a texture and bind it to a quad.
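A minimal sketch of the idea in Java (the equivalent GL calls exist in the NDK); drawQuad() is a hypothetical helper that draws one textured quad covering the current viewport:

import android.opengl.GLES20;

// Draw four textures into the four quadrants of the view by moving the viewport.
void drawFourPictures(int viewWidth, int viewHeight, int[] textureIds) {
    int w = viewWidth / 2;
    int h = viewHeight / 2;
    for (int i = 0; i < 4; i++) {
        int x = (i % 2) * w;
        int y = (i / 2) * h;      // GL's viewport origin is the bottom-left corner
        GLES20.glViewport(x, y, w, h);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIds[i]);
        drawQuad();               // hypothetical: quad covering the current viewport
    }
    GLES20.glViewport(0, 0, viewWidth, viewHeight);  // restore the full viewport
}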
