In my application, I am using the Android Camera API to access the device camera. I receive camera frames as a byte array in the onPreviewFrame() callback. I have to process the image/byte array and hand it to OpenGL for display.
I'm not configuring the camera with setPreviewDisplay(holder) or setPreviewTexture(surface), so the frames are not rendered to a view directly.
I have been googling but still haven't found a useful reference. Please suggest useful information or sources for rendering an image buffer with OpenGL.
What you are looking for is how to render a texture, which in this case is your image byte array data. You will want to create a vertex buffer object with two attributes: position and texture coordinates. Basically you will be creating a quad and mapping the corners of the texture to its corners. You can find a good example here.
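As a rough sketch (assuming GLES20, an already-compiled shader program with a_Position and a_TexCoord attributes, and a texture you have uploaded from your byte array elsewhere), the quad setup could look something like this:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import android.opengl.GLES20;

    public class TexturedQuad {
        // x, y interleaved with u, v for a full-screen quad (triangle strip).
        private static final float[] VERTICES = {
            -1f, -1f,  0f, 1f,   // bottom-left  -> texture (0,1)
             1f, -1f,  1f, 1f,   // bottom-right -> texture (1,1)
            -1f,  1f,  0f, 0f,   // top-left     -> texture (0,0)
             1f,  1f,  1f, 0f,   // top-right    -> texture (1,0)
        };

        private final FloatBuffer vertexBuffer = ByteBuffer
                .allocateDirect(VERTICES.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();

        public TexturedQuad() {
            vertexBuffer.put(VERTICES).position(0);
        }

        /** program: your compiled shader; textureId: texture holding the camera image. */
        public void draw(int program, int textureId) {
            GLES20.glUseProgram(program);
            int aPosition = GLES20.glGetAttribLocation(program, "a_Position");
            int aTexCoord = GLES20.glGetAttribLocation(program, "a_TexCoord");

            // Stride is 4 floats (x, y, u, v) = 16 bytes per vertex.
            vertexBuffer.position(0);
            GLES20.glVertexAttribPointer(aPosition, 2, GLES20.GL_FLOAT, false, 16, vertexBuffer);
            GLES20.glEnableVertexAttribArray(aPosition);
            vertexBuffer.position(2);
            GLES20.glVertexAttribPointer(aTexCoord, 2, GLES20.GL_FLOAT, false, 16, vertexBuffer);
            GLES20.glEnableVertexAttribArray(aTexCoord);

            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
            GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
        }
    }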
Related
In Android development, I have obtained the YUV video stream as a byte array and I want to perform real-time video recognition. I have tried this: first convert the byte array into a Bitmap with a yuvByteArrayToBitmap method, then run the yolov5ncnn.Detect method on it; the returned object contains data such as the coordinates of the detected targets. Finally, a showObjects method uses these data to display the video stream on an ImageView component and draw the detection rectangles for the corresponding targets. Object detection does produce rectangular box results, but the whole process is laggy and not smooth.
I have tried commenting out the object detection code and only displaying the video stream on the ImageView component, but it is still very laggy, so I think displaying the video stream through an ImageView is the slow part. I heard that it is possible to use a Surface and SurfaceTexture. Is it possible to modify each frame in the onSurfaceTextureUpdated method so that it outputs an image with the detection rectangle drawn on it? I am not familiar with how to implement this.
Now I want to perform object detection more smoothly and draw the rectangles of detected objects on the UI. I already have the byte array of the YUV video stream, and the coordinates of the target rectangles can be obtained from the detection algorithm, but I don't know how to display this on the UI smoothly.
Hope to get your help, thank you all.
I am performing a video effect that requires dual-pass rendering (the texture needs to be passed through multiple shader programs). Attaching a SurfaceTexture to a GL_TEXTURE_EXTERNAL_OES texture that is passed to its constructor does not seem to be a solution, since the displayed result is only rendered once.
One solution I am aware of is that the first rendering can be done to a FrameBuffer, and then the resulting texture can be rendered to where it actually gets displayed.
However, it seems that a SurfaceTexture must be attached to a GL_TEXTURE_EXTERNAL_OES texture, and not to a FrameBuffer. I'm not sure if there is a workaround for this, or if there is a different approach I should take.
Thank you.
SurfaceTexture receives a buffer of graphics data and essentially wraps it up as an "external" texture. If it helps to see source code, start in updateTexImage(). Note the name of the class ("GLConsumer") is a more accurate description of the function than "SurfaceTexture": it consumes frames of graphic data and makes them available to GLES.
SurfaceTexture is expected to work with formats that OpenGL ES doesn't "naturally" work with, notably YUV, so it always uses external textures.
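For the dual-pass question above, the usual workaround is exactly the FBO route: render the external texture into a regular GL_TEXTURE_2D attached to a framebuffer in the first pass, then sample that texture in the second pass. A minimal sketch, assuming GLES20, two already-linked shader programs, and a hypothetical drawQuad() helper that feeds the vertex data:

    import android.graphics.SurfaceTexture;
    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;

    public class TwoPassRenderer {
        private final int[] fbo = new int[1];
        private final int[] fboTexture = new int[1];
        private int fboWidth;
        private int fboHeight;

        /** Create a framebuffer with a regular 2D texture as its color attachment. */
        public void setup(int width, int height) {
            fboWidth = width;
            fboHeight = height;

            GLES20.glGenTextures(1, fboTexture, 0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fboTexture[0]);
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                    0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

            GLES20.glGenFramebuffers(1, fbo, 0);
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
            GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                    GLES20.GL_TEXTURE_2D, fboTexture[0], 0);
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        }

        public void drawFrame(SurfaceTexture surfaceTexture, int externalTextureId,
                              int firstPassProgram, int secondPassProgram,
                              int viewWidth, int viewHeight) {
            surfaceTexture.updateTexImage();   // latch the latest camera frame

            // Pass 1: external (camera) texture -> FBO texture.
            // The first-pass fragment shader samples samplerExternalOES.
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
            GLES20.glViewport(0, 0, fboWidth, fboHeight);
            GLES20.glUseProgram(firstPassProgram);
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, externalTextureId);
            drawQuad(firstPassProgram);

            // Pass 2: FBO texture -> default framebuffer (the display surface).
            // The second-pass fragment shader samples a plain sampler2D.
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
            GLES20.glViewport(0, 0, viewWidth, viewHeight);
            GLES20.glUseProgram(secondPassProgram);
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fboTexture[0]);
            drawQuad(secondPassProgram);
        }

        private void drawQuad(int program) {
            // Hypothetical: binds the position/texcoord attributes for the given
            // program and issues glDrawArrays for a full-screen quad.
        }
    }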
Currently I'm showing the camera preview on the screen by providing a preview texture with camera.setPreviewTexture(...) (rendering it with OpenGL, of course).
I have a native library that takes a byte[] image and returns a byte[] - the result image derived from the input image. I want to call it and then draw both the input image and the result to the screen, one on top of the other.
I know that in OpenGL, in order to get texture data back on the CPU, we must read it with glReadPixels, and after processing I will have to upload the result to a texture again - doing that every frame will have a big performance impact.
I thought about using camera.setPreviewCallback(...): there I get the frame (call the processing method and transfer the result to my SurfaceView), and in parallel keep using the preview-texture technique for drawing on the screen. But then I'm worried about synchronization between the frames I get in the preview callback and the ones I get in the texture.
Am I missing anything? Or is there no easy way to solve this?
One approach that may be useful is to direct the output of the Camera to an ImageReader, which provides a Surface. Each frame sent to the Surface is made available as YUV data without a copy, which makes it faster than some of the alternatives. The variations in color formats (stride, alignment, interleave) are handled by ImageReader.
Since you want the camera image to be presented simultaneously with the processing output, you can't send frames down two independent paths.
When the frame is ready, you will need to do a color-space conversion and upload the pixels with glTexImage2D(). This will likely be the performance-limiting factor.
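A rough sketch of the ImageReader side, assuming ImageFormat.YUV_420_888 and a hypothetical processAndUpload() that does the color-space conversion and the glTexImage2D upload on your GL thread:

    import android.graphics.ImageFormat;
    import android.media.Image;
    import android.media.ImageReader;
    import android.os.Handler;
    import android.view.Surface;

    public class FrameSource {
        private ImageReader imageReader;

        /** Create a Surface that the camera can render into. */
        public Surface createInputSurface(int width, int height, Handler handler) {
            imageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 3);
            imageReader.setOnImageAvailableListener(reader -> {
                Image image = reader.acquireLatestImage();   // skip straight to the newest frame
                if (image == null) {
                    return;
                }
                try {
                    // Planes 0/1/2 are Y/U/V; row and pixel strides describe the layout.
                    Image.Plane[] planes = image.getPlanes();
                    processAndUpload(planes);                // hypothetical: convert + glTexImage2D
                } finally {
                    image.close();                           // always release the buffer
                }
            }, handler);
            return imageReader.getSurface();
        }

        private void processAndUpload(Image.Plane[] planes) {
            // Assumed: YUV -> RGB conversion followed by a glTexImage2D upload on the GL thread.
        }
    }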
From the comments it sounds like you're familiar with image filtering using a fragment shader; for anyone else who finds this, you can see an example here.
In Android, I need an efficient way of modifying the camera stream before displaying it on the screen. This post discusses a couple of ways of doing so and I was able to implement the first one:
1. Get frame buffer from onPreviewFrame
2. Convert frame to YUV
3. Modify frame
4. Convert modified frame to jpeg
5. Display frame to ImageView which was placed on SurfaceView used for the preview
That worked, but it brought down the 30 fps I was getting with a regular camera preview to 5 fps or so. Converting frames back and forth between different image formats is also power-hungry, which I don't want.
Are there examples on how to get access to the raw frames directly and not have to go through so many conversions? Is using OpenGL the right way of doing this? It must be a very common thing to do but I can't find good examples.
Note: I'd rather avoid using the Camera2 APIs for backward compatibility's sake.
The most efficient form of your CPU-based pipeline would look something like this:
1. Receive frames from the Camera on a Surface, rather than as byte[]. With Camera2 you can send the frames directly to an ImageReader; that will get you CPU access to the raw YUV data from the Camera without copying it or converting it. (I'm not sure how to rig this up with the old Camera API, as it wants either a SurfaceTexture or a SurfaceHolder, and ImageReader doesn't provide those. You can run the frames through a SurfaceTexture and get RGB values from glReadPixels(), but I don't know if that'll buy you anything.)
2. Perform your modifications on the YUV data.
3. Convert the YUV data to RGB.
4. Either convert the RGB data into a Bitmap or a GLES texture. glTexImage2D will be more efficient, but OpenGL ES comes with a steep learning curve. Most of the pieces you need are in Grafika (e.g. the texture upload benchmark) if you decide to go that route; see the sketch after this list.
5. Render the image. Depending on what you did in step #4, you'll either render the Bitmap through a Canvas on a custom View, or render the texture with GLES on a SurfaceView or TextureView.
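For step 4's GLES route, the upload itself might look roughly like this (a sketch that allocates the texture once with glTexImage2D and then reuses the storage with glTexSubImage2D per frame; it assumes the converted frame is already RGBA in a direct ByteBuffer):

    import java.nio.ByteBuffer;
    import android.opengl.GLES20;

    public final class TextureUpload {
        private TextureUpload() {}

        /** Create a 2D texture sized for the camera frames. Call once on the GL thread. */
        public static int createTexture(int width, int height) {
            int[] tex = new int[1];
            GLES20.glGenTextures(1, tex, 0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            // Allocate storage once; later frames reuse it via glTexSubImage2D.
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                    0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
            return tex[0];
        }

        /** Upload one converted RGBA frame. Call on the GL thread for every frame. */
        public static void uploadFrame(int textureId, int width, int height, ByteBuffer rgba) {
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
            rgba.position(0);
            GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgba);
        }
    }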
I think the most significant speedup will be from eliminating the JPEG compression and uncompression, so you should probably start there. Convert the output of your frame editor to a Bitmap and just draw it on the Canvas of a TextureView or custom View, rather than converting to JPEG and using ImageView. If that doesn't get you the speedup you want, figure out what's slowing you down and work on that piece of the pipeline.
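As an illustration of that Canvas route, a sketch assuming the edited frame has already been converted to an RGB Bitmap:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.TextureView;

    public class BitmapPreview {
        private final TextureView textureView;

        public BitmapPreview(TextureView textureView) {
            this.textureView = textureView;
        }

        /** Draw one processed frame; call this from your frame callback. */
        public void drawFrame(Bitmap frame) {
            if (!textureView.isAvailable()) {
                return;                       // surface not ready yet
            }
            Canvas canvas = textureView.lockCanvas();
            if (canvas == null) {
                return;
            }
            try {
                canvas.drawBitmap(frame, 0, 0, null);   // no JPEG encode/decode involved
            } finally {
                textureView.unlockCanvasAndPost(canvas);
            }
        }
    }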
If you're restricted to the old camera API, then using a SurfaceTexture and doing your processing in a GPU shader may be most efficient.
This assumes whatever modifications you want to do can be expressed reasonably as a GL fragment shader, and that you're familiar enough with OpenGL to set up all the boilerplate necessary to render a single quadrilateral into a frame buffer, using the texture from a SurfaceTexture.
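For reference, the shaders for that single quad might look like this (a sketch written as Java string constants; the grayscale tweak is just a placeholder for whatever modification you actually need, and the SurfaceTexture transform matrix is omitted for brevity):

    public final class ShaderSources {
        private ShaderSources() {}

        public static final String VERTEX_SHADER =
                "attribute vec4 a_Position;\n" +
                "attribute vec2 a_TexCoord;\n" +
                "varying vec2 v_TexCoord;\n" +
                "void main() {\n" +
                "    gl_Position = a_Position;\n" +
                "    v_TexCoord = a_TexCoord;\n" +
                "}\n";

        // The external-texture extension is required to sample a SurfaceTexture.
        public static final String FRAGMENT_SHADER =
                "#extension GL_OES_EGL_image_external : require\n" +
                "precision mediump float;\n" +
                "uniform samplerExternalOES u_Texture;\n" +
                "varying vec2 v_TexCoord;\n" +
                "void main() {\n" +
                "    vec4 color = texture2D(u_Texture, v_TexCoord);\n" +
                "    // Example modification: simple grayscale; replace with your own effect.\n" +
                "    float y = dot(color.rgb, vec3(0.299, 0.587, 0.114));\n" +
                "    gl_FragColor = vec4(vec3(y), 1.0);\n" +
                "}\n";
    }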
You can then read back the results with glReadPixels from the final rendering output, and save that as a JPEG.
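The readback and JPEG encode might look roughly like this (a sketch; note that glReadPixels returns rows bottom-up, so the saved image may be vertically flipped depending on how you rendered):

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import android.graphics.Bitmap;
    import android.opengl.GLES20;

    public final class FrameSaver {
        private FrameSaver() {}

        /** Read the current framebuffer and save it as a JPEG. Call on the GL thread. */
        public static void saveFrame(int width, int height, String path) throws IOException {
            ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                    .order(ByteOrder.nativeOrder());
            GLES20.glReadPixels(0, 0, width, height,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
            pixels.rewind();

            Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            bitmap.copyPixelsFromBuffer(pixels);   // rows arrive bottom-up relative to the screen

            try (FileOutputStream out = new FileOutputStream(path)) {
                bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
            } finally {
                bitmap.recycle();
            }
        }
    }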
Note that the shader will provide you with RGB data, not YUV, so if you really need YUV, you'll have to convert back to a YUV colorspace before processing.
If you can use camera2, as fadden says, ImageReader or Allocation (for Java/JNI-based processing or Renderscript, respectively) become options as well.
And if you're only using the JPEG to get to a Bitmap to place on an ImageView, and not because you want to save it, then again as fadden says you can skip the encode/decode step and draw to a view directly. For example, if using the Camera->SurfaceTexture->GL path, you can just use a GLSurfaceView as the output destination and render directly into it, if that's all you need to do with the data.
I'm trying to find a way to draw part of a texture in OpenGL (for example, for a sprite I need to draw different parts of the image) and I can't find one. In the questions I have been looking at, people talk about glDrawTexfOES, but from what I understand it's a shorthand way to draw a rectangular texture.
Thanks in advance.
Yes, those texture coordinates are the ones. You can change them at runtime, but I'd need some info about your pipeline: how and where do you push vertex and texture coordinates to GL? If you do that every frame with something like glTexCoordPointer, you just need your buffer not to be constant, and you can change the values whenever you want. If you use GPU-side buffers, you will need to retrieve the buffer pointer and change the values there. In both cases it would be wise to do that on the same thread as your draw method.
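For instance, with the GLES 1.x client-array path mentioned above, drawing a sub-rectangle of a sprite sheet just means rebuilding the UV buffer whenever the frame changes (a sketch; frame coordinates and texture size are assumed to be in pixels):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import javax.microedition.khronos.opengles.GL10;

    public class SpriteRegion {
        private final FloatBuffer texCoords = ByteBuffer
                .allocateDirect(8 * 4)                 // 4 vertices * 2 floats * 4 bytes
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();

        /** Select the sub-rectangle of the texture to draw (all values in pixels). */
        public void setFrame(int frameX, int frameY, int frameW, int frameH,
                             int texW, int texH) {
            float u0 = frameX / (float) texW;
            float v0 = frameY / (float) texH;
            float u1 = (frameX + frameW) / (float) texW;
            float v1 = (frameY + frameH) / (float) texH;
            // Order matches a triangle-strip quad: bottom-left, bottom-right, top-left, top-right.
            texCoords.clear();
            texCoords.put(new float[] { u0, v1,  u1, v1,  u0, v0,  u1, v0 });
            texCoords.position(0);
        }

        /** Point GL at the current UVs; the position array and texture bind are assumed elsewhere. */
        public void apply(GL10 gl) {
            gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
            gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoords);
        }
    }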