I was trying to connect my callback function via TangoService_connectOnFrameAvailable. I was able to connect it and access the TangoImageBuffer. However, I noticed that the buffer is const and cannot be updated. I need to modify the image data for image processing purposes, such as contour detection, and then display the result.
So my question is: how can I change the TangoJNINative_render method to update the GL buffer?
Here is what the renderer function looks like:
JNIEXPORT void JNICALL
Java_com_project_TangoJNINative_render(
    JNIEnv*, jobject) {
  // Let's say I have an image buffer here called "uint8_t* buffer"
  glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
  glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
  glViewport(0, 0, screen_width, screen_height);
  // UpdateTexture()
  tango_handler.UpdateColorTexture();
  /// I NEED SOME CODE HERE TO set gl buffer
  video_overlay->Render(glm::mat4(1.0f), glm::mat4(1.0f));
}
Thanks for your help.
Similar to the regular Camera API, you can receive the TangoImageBuffer, manipulate the pixels, upload them to your own texture (not the one provided by Tango), and display that texture instead of using TextureRenderer and the like.
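A minimal Java-side sketch of that approach, assuming you have already copied the frame out of the const TangoImageBuffer into a mutable RGBA ByteBuffer; the class name, sizes, and processing step are placeholders, not Tango API:

import android.opengl.GLES20;
import java.nio.ByteBuffer;

public class ProcessedFrameTexture {
    private final int[] tex = new int[1];
    private final int width, height;

    // Call on the GL thread: create the texture once and allocate its storage.
    public ProcessedFrameTexture(int width, int height) {
        this.width = width;
        this.height = height;
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    }

    // Call per frame: upload the processed pixels (e.g. after contour detection).
    public void upload(ByteBuffer rgbaPixels) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgbaPixels);
    }

    public int id() { return tex[0]; } // bind this in your own draw call
}

You would then draw a full-screen quad textured with id() instead of the Tango-provided video overlay texture.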
Related
I am using the ArCore-android-SDK augmented faces sample from the google-ar repository.
The fox face shows up fine.
Now I have implemented a button and added several new masks to the assets. How can I change the texture shown on the user's face at runtime?
The default image is freckles.png, and I want to change it to my own texture, called freckles_1.png.
Here is the object with the face texture, created in onSurfaceCreated:
augmentedFaceRenderer.createOnGlThread(this, "models/freckles.png");
augmentedFaceRenderer.setMaterialProperties(0.0f, 1.0f, 0.1f, 6.0f);
And it is drawn in onDrawFrame:
face.getCenterPose().toMatrix(modelMatrix, 0);
augmentedFaceRenderer.draw(
projectionMatrix, viewMatrix, modelMatrix, colorCorrectionRgba, face);
I have tried to recreate the object by calling createOnGlThread again, but I get a fatal exception:
java.lang.RuntimeException: Error creating shader.
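One plausible cause, offered as an assumption rather than a confirmed diagnosis: createOnGlThread issues OpenGL calls, so invoking it from a button's click handler on the UI thread fails because no GL context is current there. A minimal sketch of scheduling the reload on the renderer thread via GLSurfaceView.queueEvent (the button id is hypothetical; surfaceView and TAG are the sample's fields):

// Recreate the renderer's GL resources on the GL thread, where a context
// is current, instead of directly in the UI thread's click handler.
findViewById(R.id.change_mask_button).setOnClickListener(v ->
        surfaceView.queueEvent(() -> {
            try {
                augmentedFaceRenderer.createOnGlThread(this, "models/freckles_1.png");
                augmentedFaceRenderer.setMaterialProperties(0.0f, 1.0f, 0.1f, 6.0f);
            } catch (IOException e) {
                Log.e(TAG, "Failed to load models/freckles_1.png", e);
            }
        }));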
I have a FloatBuffer as output from a neural network, where the RGB channels are encoded as values in [-1 .. +1]. I would like to render them on-screen using GLSurfaceView. What is the best way to handle this?
I could dump the buffer into an SSBO and write a compute shader that maps it to a ByteBuffer in the [0 .. 255] range, then somehow bind that to a regular texture. Or maybe I can set up my compute shader to output directly to some texture buffer? Or am I supposed to read my SSBO directly from the fragment shader (and implement my own linear interpolation)?
So, what is the best way to render this via OpenGL ES? Please help.
You can try to load it with glTexSubImage2D, but whether that is fast enough depends on how many updates per second you need; that is something to test on your machine.
First bind your texture (you must create one yourself); then, when your input buffer is ready, use:
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height, GLES20.GL_RGB, GLES20.GL_FLOAT, InputFloatBuffer);
This works well with a ByteBuffer; I did not try it with float data, but there is no signed-float texture format.
Use a kernel to convert the signed floats to bytes.
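A minimal CPU-side sketch of that conversion, assuming a tightly packed RGB FloatBuffer (names and sizes are placeholders); the result can then be uploaded with GL_UNSIGNED_BYTE instead of GL_FLOAT:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Map [-1, +1] floats to [0, 255] bytes before uploading the texture.
static ByteBuffer floatsToBytes(FloatBuffer floats, int width, int height) {
    ByteBuffer bytes = ByteBuffer.allocateDirect(width * height * 3)
            .order(ByteOrder.nativeOrder());
    floats.rewind();
    while (floats.hasRemaining()) {
        float v = Math.max(-1f, Math.min(1f, floats.get())); // clamp to [-1, 1]
        bytes.put((byte) Math.round((v + 1f) * 0.5f * 255f));
    }
    bytes.rewind();
    return bytes;
}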
I am working with a GLSurfaceView activity to display the camera frame on an Android device. As I am a newbie in OpenGL ES, I wonder how I can get the image buffer, modify it, and then display the modified frame on the phone.
In my Renderer class, which implements GLSurfaceView.Renderer, I call a native function:
public class Renderer implements GLSurfaceView.Renderer {
    @Override
    public void onDrawFrame(GL10 gl) {
        MyJNINative.render();
    }
    ...
}
The API I am working with provides a connectCallBack method that enables access to the image buffer via something like onFrameAvailableNow.
So I already have the image buffer, which is unfortunately const, so my modifications to it will not be reflected.
Now my question is: how can I add some GL calls to modify the image buffer so that the changes are reflected on the display?
My native renderer:
JNIEXPORT void JNICALL
Java_com_project_MyJNINative_render(
    JNIEnv*, jobject) {
  // Let's say I have an image buffer here called "uint8_t* buffer"
  glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
  glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
  glViewport(0, 0, width, height);
  // UpdateTexture()
  api_handler.UpdateTexture();
  gl_vid_obj->Render(glm::mat4(1.0f), glm::mat4(1.0f));
  /// I NEED SOME CODE HERE TO set gl buffer
}
As fadden explained, you cannot change the preview buffer that is connected to the SurfaceTexture. But you can obtain the preview buffers with onPreviewFrame(), modify them, and push the result to OpenGL via glTexSubImage2D(). There are two pitfalls: you should hide the actual preview (probably by connecting it to a texture that will not be visible on your GL surface), and you must do all processing fast enough (at least 20 FPS for the "preview" to look natural).
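A minimal sketch of that pipeline, assuming the old android.hardware.Camera API and, for simplicity, uploading only the luminance (Y) plane of the NV21 preview; the field names are illustrative and the texture is assumed to be pre-allocated with glTexImage2D:

// Camera thread: receive each preview frame and modify its pixels.
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // NV21: the first width * height bytes are the Y (luminance) plane.
        // Invert them here as a stand-in for real processing.
        for (int i = 0; i < width * height; i++) {
            data[i] = (byte) (255 - (data[i] & 0xFF));
        }
        latestFrame = ByteBuffer.wrap(data, 0, width * height); // volatile field
        glSurfaceView.requestRender();
    }
});

// GL thread (e.g. in onDrawFrame): upload into the pre-allocated
// GL_LUMINANCE texture, then draw a textured quad with it.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, latestFrame);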
iOS Code:
I have working code on iOS which prepares a 3D transformation for a UIView's layer:
CATransform3D t = CATransform3DIdentity;
t.m34 = -1.0f/500.0f;
t = CATransform3DTranslate(t, 10.0f, 20.0f, 0.0f);
t = CATransform3DRotate(t, 0.25f * M_PI, -1.0f, 0.0f, 0.0f);
I'm trying to port the above code to Android by preparing an android.view.animation.Transformation t that does the same thing. It will be executed by ViewGroup.getChildStaticTransformation(View v, Transformation t).
Unfinished Android Code:
t.clear();
t.setTransformationType(Transformation.TYPE_MATRIX);
android.graphics.Camera camera = new android.graphics.Camera();
// set perspective (m34) here.. how??
camera.translate(10.0f, 20.0f, 0.0f);
camera.rotateX(-1.0f * Math.toDegrees(0.25 * Math.PI));
camera.getMatrix(t.getMatrix());
My main issue:
The main problem is that I'm not sure how to set the perspective t.m34 = -1.0f/500.0f in Android. The docs are rather cryptic, and my best bet is using Camera.setLocation(). Also, the docs say nothing about units, so what would be an appropriate value?
Another issue is that setLocation() is only available from API 12, so I would really need to set it manually in the Matrix instead (or via some transformation). Any ideas how?
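For what it's worth, here is a hedged sketch of both routes. The 72-units-per-inch factor and the default camera depth of 8 inches come from Skia's Camera3D; the pre-API-12 rescaling of the perspective row is my assumption about how the entries scale with depth, not documented behavior:

// Route 1 (API 12+): setLocation() works in inches, at 72 units per inch.
// The default location is (0, 0, -8), i.e. a depth of 8 * 72 = 576 units.
// To emulate m34 = -1/500, request a depth of 500 units:
camera.setLocation(0.0f, 0.0f, -500.0f / 72.0f);

// Route 2 (pre-API 12, assumed workaround): the perspective row of the
// resulting 3x3 Matrix scales inversely with depth, so rescale it from
// the default 576 to the desired 500 after camera.getMatrix().
Matrix m = t.getMatrix();
camera.getMatrix(m);
float[] v = new float[9];
m.getValues(v);
v[Matrix.MPERSP_0] *= 576.0f / 500.0f;
v[Matrix.MPERSP_1] *= 576.0f / 500.0f;
m.setValues(v);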
Final comment:
I'm aware that there are probably more issues, like the translate() units, the transformation order, and generally the fact that we transform the camera in Android but the object in iOS. I will get to all of these later :)
I'm having a strange issue when using a GL10 object outside of the overridden Renderer functions.
For example, for the purpose of picking a geometry via color codes, I tried to read out the color buffer via glReadPixels:
@Override
public void onDrawFrame(GL10 gl) {
    ...
    ByteBuffer pixel = ByteBuffer.allocateDirect(4);
    pixel.order(ByteOrder.nativeOrder());
    gl.glReadPixels(0, 0, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
    while (pixel.hasRemaining()) {
        Log.v(TAG, "" + (int) (pixel.get() & 0xFF));
    }
}
This works and gives me the color values in the range 0..255 for the pixel in the bottom-left corner.
Now, when I make my GL10 object available to the whole class as a field, it doesn't seem to work anymore:
@Override
public void update(Observable observable, Object data) {
    Log.v(TAG, "update Observer glsurfaceviewrenderer");
    if (data instanceof MotionEvent) {
        MotionEvent event = (MotionEvent) data;
        ByteBuffer pixel = ByteBuffer.allocateDirect(4);
        pixel.order(ByteOrder.nativeOrder());
        gl.glReadPixels(0, 0, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
        while (pixel.hasRemaining()) {
            Log.v(TAG, "" + (int) (pixel.get() & 0xFF));
        }
    }
}
This doesn't work; all colors have the value 0. The only difference is that I use the gl object via a field rather than via a function argument. I checked the memory address of the gl object by printing it to the log, and both addresses are the same.
I'm really stumped right now... does anybody have an idea?
Two problems:
1) You can only make OpenGL calls from the thread to which the context is bound. onDrawFrame runs in a thread created by GLSurfaceView, while I assume your update method is called from the main UI thread.
2) glReadPixels reads from the buffer you are currently rendering to. After onDrawFrame returns, GLSurfaceView will call eglSwapBuffers. You will no longer be able to read the buffer you were drawing to.
You'll need to reorganize your code so that you know which pixel to read at the time onDrawFrame is called. Your only other option is to fetch the entire frame every time.
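A minimal sketch of that reorganization, with the touch recorded on the UI thread and the read performed on the GL thread before the buffer swap; the field names are illustrative:

// Set from the UI thread, consumed on the GL thread.
private volatile boolean pickRequested = false;
private volatile int pickX, pickY;

@Override
public void update(Observable observable, Object data) {
    if (data instanceof MotionEvent) {
        MotionEvent event = (MotionEvent) data;
        pickX = (int) event.getX();
        pickY = surfaceHeight - (int) event.getY(); // GL's origin is bottom-left
        pickRequested = true;
    }
}

@Override
public void onDrawFrame(GL10 gl) {
    // ... draw the color-coded picking pass ...
    if (pickRequested) {
        pickRequested = false;
        ByteBuffer pixel = ByteBuffer.allocateDirect(4);
        pixel.order(ByteOrder.nativeOrder());
        gl.glReadPixels(pickX, pickY, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
        // Decode the color id here, still before eglSwapBuffers.
    }
}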