I am working with a GLSurfaceView activity to display the camera frame on an Android device. As I am a newbie in OpenGL ES, I wonder how I can get the image buffer, modify it, and then display the modified frame on the phone.
In my Renderer class, which implements GLSurfaceView.Renderer, I call a native function:
public class Renderer implements GLSurfaceView.Renderer {
    public void onDrawFrame(GL10 gl) {
        MyJNINative.render();
    }
    ...
}
The API I am working with provides a connectCallBack method that enables accessing the image buffer via something like onFrameAvailableNow.
So I already have the image buffer, which is unfortunately const, so my modifications to it will not be reflected.
Now my question is: how can I add some GL calls to modify the image buffer so that the changes are reflected on the display?
My native renderer:
JNIEXPORT void JNICALL
Java_com_project_MyJNINative_render(
    JNIEnv*, jobject) {
  // Let's say I have an image buffer here called "uint8_t* buffer"
  glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
  glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
  glViewport(0, 0, width, height);
  // UpdateTexture()
  api_handler.UpdateTexture();
  gl_vid_obj->Render(glm::mat4(1.0f), glm::mat4(1.0f));
  /// I NEED SOME CODE HERE TO set gl buffer
}
As fadden explained, you cannot change the preview buffer that is connected to a SurfaceTexture. But you can obtain the preview buffers with onPreviewFrame(), modify them, and push the result to OpenGL via glTexSubImage2D(). There are two pitfalls: you should hide the actual preview (probably by connecting it to a texture that will not be visible on your GL surface), and you should do all processing fast enough (20 FPS at least for the "preview" to look natural).
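To make that concrete, here is a minimal sketch of the approach, assuming an ordinary GL_TEXTURE_2D of the preview size already exists as mTextureId; convertNV21ToRgba(), previewWidth/previewHeight, latestFrame and glSurfaceView are hypothetical placeholders for your own code, not part of any API:

// Sketch only: grab the preview bytes, modify them, and upload them to a GL texture.
mCamera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // data is NV21; modify it (or the converted RGBA) as needed
        byte[] rgba = convertNV21ToRgba(data, previewWidth, previewHeight); // hypothetical helper
        latestFrame = ByteBuffer.wrap(rgba);   // hand off to the GL thread
        glSurfaceView.requestRender();
    }
});

// On the GL thread (e.g. in onDrawFrame), push the modified pixels to the texture:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureId);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0,
        previewWidth, previewHeight,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, latestFrame);

The hand-off between the callback thread and the GL thread is elided here; in practice you would double-buffer or copy the frame before handing it over.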
I can't get the texture tied to a SurfaceTexture to display in Unity.
Update 4: Based on the pipeline in Update 1 (Surface -> external texture via SurfaceTexture -> FBO -> texture 2D), I know the SurfaceTexture isn't properly converting its Surface to a texture. I can get correctly drawn pictures from its Surface via PixelCopy, and I can confirm that my FBO-to-texture-2D drawing pipeline works with some test colors. So the question is: why can't the SurfaceTexture convert its Surface to a texture?
I generate a Texture in Java and pass its pointer back to Unity:
public void initGLTexture()
{
    Log.d("Unity", "initGLTexture");
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    mTextureId = textures[0];
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureId);
    GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
}
I create a SurfaceTexture from the id (in Java):
mSurfaceTexture = new SurfaceTexture(mTextureId);
mSurfaceTexture.setDefaultBufferSize(512, 512);
I use a third-party library, GeckoView, to render onto the Surface of the SurfaceTexture. I call the following method from Unity's OnRenderObject() to keep all GL rendering on the same thread:
mSurfaceTexture.updateTexImage();
I know the above code allows proper drawing onto the surface.
I call the following in Unity to load the texture:
_imageTexture2D = Texture2D.CreateExternalTexture(
        512, 512, TextureFormat.RGBA32, false, true, (IntPtr) mTextureId);
_rawImage.texture = _imageTexture2D;
Why does the RawImage with the texture applied show only this sprite-looking thing, which should be a webpage?
Update 1: So I've been working on the following hypothesis: use Gecko to draw to the Surface, and use a SurfaceTexture to render this Surface to a GL_TEXTURE_EXTERNAL_OES. Since I can't display this in Unity (not sure why), I am drawing this texture to a framebuffer and copying the pixels in the framebuffer to a GL_TEXTURE_2D. I am getting a web page in the texture 2D (in the emulator, with an ImageView and glReadPixels). However, when I import the work into Unity to test whether the pipeline is okay thus far, I just get a black screen. I CAN get images of the Surface via the PixelCopy API.
Here is my FBO overview code - my rendering code comes from grafika's texture2D program:
// bind display buffer
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferId);
GlUtil.checkGlError("glbindframebuffer");
// unbind external texture to make sure it's fresh
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
GlUtil.checkGlError("glunbindexternaltex");
// bind source texture (done in drawFrame as well )
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mOffscreenTextureId);
GlUtil.checkGlError("glBindTexture");
// draw to frame buffer
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // again, only really need to
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); // clear pixels outside rect
mFullScreen.drawFrame(mOffscreenTextureId, mIdentityMatrix);
// unbind source texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D,0);
GlUtil.checkGlError("glBindTexture2d");
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
GlUtil.checkGlError("glunbindexternaltex");
// make sure we're still bound to fbo
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferId);
GlUtil.checkGlError("glBindTexture2d");
// copy pixels from frame buffer to display texture
GLES20.glCopyTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 0, 0, 512, 512, 0);
// read pixels from the display buffer to imageview for debugging
BitmapDisplay.mBitmap = SavePixels(0,0,512,512);
// unbind texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D,0);
Here's my player settings > other:
Update 2: Possible pipeline to try: call the draw function of the external texture to FBO (attached to Unity's texture_2d) in C++ via this interface.
Update 3: Calling the Java functions responsible for drawing the texture from the SurfaceTexture to the FBO to Unity's texture from native code via GL.IssuePluginEvent produces a black texture, as in the first update. It shows images in the emulator but not in Unity.
I had to do a similar task a couple of months ago and found out that the correct pipeline is to create a texture in Unity, obtain its native pointer in C, and finally update it in the Java layer.
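As a rough illustration of that last step (not taken from the sample project below, just an assumed shape of the Java layer), the update boils down to binding the texture whose GL name Unity handed over and refreshing its pixels on Unity's rendering thread:

// Hypothetical sketch: unityTextureId is the GL texture name obtained from
// Texture2D.GetNativeTexturePtr() on the Unity side; rgbaFrame holds the latest
// camera frame already converted to RGBA.
// Must run on Unity's rendering thread (e.g. triggered via GL.IssuePluginEvent).
public void updateUnityTexture(int unityTextureId, int width, int height, ByteBuffer rgbaFrame) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, unityTextureId);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgbaFrame);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
}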
Please take a look at this sample project; it should give you a different perspective.
https://github.com/robsondepaula/unity-android-native-camera
Regards
I am trying to overlay stickers on faces using OpenCV and OpenGL.
I am getting the ByteBuffer inside onDrawFrame:
@Override
public void onDrawFrame(GL10 unused) {
    if (VERBOSE) {
        Log.d(TAG, "onDrawFrame tex=" + mTextureId);
    }
    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);

    byteBuffer.rewind();
    GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, byteBuffer);

    mat.put(0, 0, byteBuffer.array());

    if (mCascadeClassifier != null) {
        mFaces.empty();
        mCascadeClassifier.detectMultiScale(mat, mFaces);
        Log.d(TAG, "No. of faces detected : " + mFaces.toArray().length);
    }

    drawFrame(mTextureId, mSTMatrix);
}
My mat object is initialized with the camera preview width and height:
mat = new Mat(height, width, CvType.CV_8UC3);
The log returns 0 face detections. I have two questions:
What am I missing here for face detection using OpenCV?
Also, how can I improve the performance/efficiency of the video frame rendering while doing real-time face detection? glReadPixels takes time to execute and slows down the rendering.
You are calling glReadPixels() on the GLES frame buffer before you've rendered anything. You'd need to do it after drawFrame() if you were hoping to read back the SurfaceTexture rendering. You may want to consider rendering the texture offscreen to a pbuffer EGLSurface instead, and reading back from that.
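In terms of the code in the question, that is just a reordering; a sketch keeping the question's field names:

@Override
public void onDrawFrame(GL10 unused) {
    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);

    // Render the current camera frame first...
    drawFrame(mTextureId, mSTMatrix);

    // ...then read back what was just drawn.
    byteBuffer.rewind();
    GLES20.glReadPixels(0, 0, mWidth, mHeight,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, byteBuffer);
    // hand the buffer to OpenCV as before
}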
There are a few different ways to get the pixel data from the Camera:
Use the Camera byte[] APIs. Generally involves a software copy, so it tends to be slow.
Send the output to an ImageReader. This gives you immediate access to the raw YUV data (see the sketch after this list).
Send the output to a SurfaceTexture, render the texture, read RGB data out with glReadPixels() (which is what I believe you are trying to do). This is generally very fast, but on some devices and versions of Android it can be slow.
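A minimal sketch of the ImageReader option from the list above (Camera2-style setup assumed; backgroundHandler is your own handler and error handling is omitted):

// Sketch: receive camera frames as YUV_420_888 without a GPU round trip (API 19+).
ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;
    ByteBuffer yPlane = image.getPlanes()[0].getBuffer();   // Y plane; U/V are planes 1 and 2
    // ... process the pixel data here ...
    image.close();
}, backgroundHandler);
// Add reader.getSurface() as an output target of your camera capture session.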
I am trying to apply face detection on camera preview frames. I am using OpenGL and OpenCV to process these camera frames at run-time.
#Override
public void onDrawFrame(GL10 unused) {
if (VERBOSE) {
Log.d(TAG, "onDrawFrame tex=" + mTextureId);
}
mSurfaceTexture.updateTexImage();
mSurfaceTexture.getTransformMatrix(mSTMatrix);
// TODO: need to implement
//JniCppManager.processFrame();
drawFrame(mTextureId, mSTMatrix);
}
I am trying to write a C++ implementation of processFrame(). How can I get a Mat object in C++ from the transformation matrix? Could anyone give me some pointers to a solution?
Your pipeline is currently:
Camera (produces frame)
SurfaceTexture (receives frame, converts to GLES "external" texture)
[missing stuff]
Array of RGB bytes passed to C++
What you need to do for [missing stuff] is render the pixels to an off-screen pbuffer and read them back with glReadPixels(). You can do this from code written in Java or native; for the former you'd want to read them into a "direct" ByteBuffer so you can easily access the pixels from native code. The EGL context used by GLES is held in thread-local storage, so the native code running on the GLSurfaceView render thread will be able to access it.
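Setting up the off-screen pbuffer looks roughly like this (a standalone sketch using EGL14 from Java; in a GLSurfaceView app you would typically share the existing EGL context instead of creating a new one, and all error checking and imports are omitted):

// Sketch: create an off-screen pbuffer surface to render into and read back from.
EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] version = new int[2];
EGL14.eglInitialize(display, version, 0, version, 1);

int[] configAttribs = {
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
        EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8,
        EGL14.EGL_BLUE_SIZE, 8, EGL14.EGL_ALPHA_SIZE, 8,
        EGL14.EGL_NONE };
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext context = EGL14.eglCreateContext(display, configs[0],
        EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

int[] surfaceAttribs = { EGL14.EGL_WIDTH, width, EGL14.EGL_HEIGHT, height, EGL14.EGL_NONE };
EGLSurface pbuffer = EGL14.eglCreatePbufferSurface(display, configs[0], surfaceAttribs, 0);
EGL14.eglMakeCurrent(display, pbuffer, pbuffer, context);

// ... render the external texture into the pbuffer here, then read it back:
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

Because pixels is a direct ByteBuffer, native code can access the same memory without an extra copy.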
An example of this can be found in the bigflake ExtractMpegFramesTest, which differs primarily in that it's grabbing frames from a video rather than a Camera.
For API 19+, if you can process frames in YV12 or NV21 rather than RGB, you can feed the Camera to an ImageReader and get access to the data without having to copy/convert it.
I was trying to connect my callback function via TangoService_connectOnFrameAvailable. I was able to connect it and access the TangoImageBuffer. However, I noticed that the buffer is const and cannot be updated. I need to modify the image data for some image processing purposes, like contour detection, and then display the result.
So my question is: how can we change the TangoJNINative_render method to update the GL buffer?
Here is what the renderer function looks like:
JNIEXPORT void JNICALL
Java_com_project_TangoJNINative_render(
    JNIEnv*, jobject) {
  // Let's say I have an image buffer here called "uint8_t* buffer"
  glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
  glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
  glViewport(0, 0, screen_width, screen_height);
  // UpdateTexture()
  tango_handler.UpdateColorTexture();
  /// I NEED SOME CODE HERE TO set gl buffer
  video_overlay->Render(glm::mat4(1.0f), glm::mat4(1.0f));
}
Thanks for your help.
Similar to the regular Camera API, you can receive the TangoImageBuffer, manipulate the pixels, assign them to your own texture (not the one provided by Tango), and display this texture instead of TextureRenderer and the like.
On Android, I'm trying to perform some OpenGL processing on camera frames, show those frames in the camera preview and then encode the frames in a video file. I'm trying to do this with OpenGL, using the GLSurfaceView and GLSurfaceView.Renderer and with FFMPEG for video encoding.
I've successfully processed the image frames using a shader. Now I need to encode the processed frames to video. The GLSurfaceView.Renderer provides the onDrawFrame(GL10 ..) method. It's in this method that I'm attempting to read the image frames using just glReadPixels() and then place the frames on a queue for encoding to video. On its own, glReadPixels() is much too slow - my frame rate is in the single digits. I'm attempting to speed this up using Pixel Buffer Objects. This is not working. After plugging in the PBO, the frame rate is unchanged. This is my first time using OpenGL and I do not know where to begin looking for the problem. Am I doing this right? Can anyone give me some direction? Thanks in advance.
public class MainRenderer implements GLSurfaceView.Renderer, SurfaceTexture.OnFrameAvailableListener {
    .
    .
    public void onDrawFrame(GL10 gl10) {
        // Create a buffer to hold the image frame
        ByteBuffer byte_buffer = ByteBuffer.allocateDirect(this.width * this.height * 4);
        byte_buffer.order(ByteOrder.nativeOrder());

        // Generate a pointer to the frame buffers
        IntBuffer image_buffers = IntBuffer.allocate(1);
        GLES20.glGenBuffers(1, image_buffers);

        // Create the buffer
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, image_buffers.get(0));
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, byte_buffer.limit(), byte_buffer, GLES20.GL_STATIC_DRAW);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, image_buffers.get(0));

        // Read the pixel data into the buffer
        gl10.glReadPixels(0, 0, this.width, this.height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, byte_buffer);

        // Encode the frame to video
        enQueueForEncoding(byte_buffer);

        // Unbind the buffer
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    }
    .
    .
}
I have never tried something like that before (OpenGL + video encoding), but I can tell you that reading from device memory is SLOW. Try double buffering; this may help since the GPU can keep rendering to the second buffer while the DMA controller reads back stuff.
Load a profiler (check your device's GPU vendor); this may give you some idea. Another thing that may help is setting the internal pbuffer format to something else; try lower bit depths and dropping a channel (alpha).
EDIT: If you feel like it, you can encode the video on the GPU; that is going to help your application both memory- and processing-wise.
As I remember, glBufferData() does not map your internal buffer onto GPU memory; it just copies data from your memory into the buffer (initializes it).
To get access to the memory allocated by glBufferData(), you should use glMapBufferRange(). That function returns a Java Buffer object that you can read.
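For reference, the readback path through a PBO would then look roughly like this (a sketch, not a drop-in fix for the code above: it binds GL_PIXEL_PACK_BUFFER, which is the target meant for readbacks, and requires an OpenGL ES 3.0 context since that target and glMapBufferRange() are exposed via GLES30; width and height are assumed frame dimensions):

// Sketch: readback through a pixel pack buffer (GLES 3.0+).
int[] pbo = new int[1];
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, width * height * 4, null,
        GLES30.GL_DYNAMIC_READ);

// With the PBO bound, glReadPixels writes into the buffer object (at offset 0)
// instead of client memory, so it can return without a full stall.
GLES30.glReadPixels(0, 0, width, height,
        GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);

// Later (ideally a frame or two later), map the buffer and read the pixels.
ByteBuffer mapped = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, width * height * 4,
        GLES30.GL_MAP_READ_BIT);
// ... copy/encode the pixels from `mapped` here ...
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);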