I am using the ARCore Android SDK Augmented Faces sample from the google-ar repository. The fox face renders correctly.
Now I have implemented a button and added several new masks to the assets. How can I change the texture that is shown on the user's face at runtime?
The default image is freckles.png, and I want to change it to my own texture, called freckles_1.png.
Here is the object with the face texture, created in onSurfaceCreated:
augmentedFaceRenderer.createOnGlThread(this, "models/freckles.png");
augmentedFaceRenderer.setMaterialProperties(0.0f, 1.0f, 0.1f, 6.0f);
And it is drawn in onDrawFrame:
face.getCenterPose().toMatrix(modelMatrix, 0);
augmentedFaceRenderer.draw(
projectionMatrix, viewMatrix, modelMatrix, colorCorrectionRgba, face);
I have tried to recreate the object by calling createOnGlThread again, but I get a fatal exception:
java.lang.RuntimeException: Error creating shader.
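That failure is consistent with calling createOnGlThread off the GL thread, where no GL context is current, so shader creation fails. A minimal sketch of a runtime swap, assuming the sample's surfaceView field (its GLSurfaceView) and that augmentedFaceRenderer is made non-final; changeMaskButton and TAG are illustrative names:
changeMaskButton.setOnClickListener(v ->
    surfaceView.queueEvent(() -> {
        try {
            // Build a fresh renderer on the GL thread, then swap it in.
            AugmentedFaceRenderer newRenderer = new AugmentedFaceRenderer();
            newRenderer.createOnGlThread(this, "models/freckles_1.png");
            newRenderer.setMaterialProperties(0.0f, 1.0f, 0.1f, 6.0f);
            augmentedFaceRenderer = newRenderer; // picked up by the next onDrawFrame
        } catch (IOException e) {
            Log.e(TAG, "Could not load models/freckles_1.png", e);
        }
    }));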
I can't get the texture tied to a SurfaceTexture to display in Unity.
Update 4: Based on the pipeline in update 1 (Surface -> external texture via SurfaceTexture -> FBO -> texture 2D), I know the SurfaceTexture isn't properly converting its Surface to a texture. I can get correctly drawn pictures from its Surface via PixelCopy, and I can confirm that my FBO-to-texture-2D pipeline works with some test colors. So the question is: why can't the SurfaceTexture convert its Surface to a texture?
I generate a texture in Java and pass its id back to Unity:
public void initGLTexture() {
    Log.d("Unity", "initGLTexture");
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    mTextureId = textures[0];
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureId);
    GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
}
I create a SurfaceTexture from the id (in Java):
mSurfaceTexture = new SurfaceTexture(mTextureId);
mSurfaceTexture.setDefaultBufferSize(512, 512);
I use a third-party library, GeckoView, to render onto the Surface of the SurfaceTexture. I call the following method from Unity's OnRenderObject() to keep all GL rendering on the same thread:
mSurfaceTexture.updateTexImage();
I know the above code allows proper drawing onto the surface.
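The check itself is a PixelCopy request; a minimal sketch, assuming mSurface is the Surface built around mSurfaceTexture (API 24+):
Bitmap check = Bitmap.createBitmap(512, 512, Bitmap.Config.ARGB_8888);
PixelCopy.request(mSurface, check, result -> {
    // SUCCESS here proves GeckoView drew into the Surface correctly.
    Log.d("Unity", result == PixelCopy.SUCCESS
            ? "Surface frame captured" : "PixelCopy failed: " + result);
}, new Handler(Looper.getMainLooper()));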
I call the following in Unity to load the texture:
_imageTexture2D = Texture2D.CreateExternalTexture(
        512, 512, TextureFormat.RGBA32, false, true, (IntPtr) mTextureId);
_rawImage.texture = _imageTexture2D;
Why does the RawImage with the texture applied show only this sprite-looking thing, when it should be a webpage?
Update 1: So I've been working on the following hypothesis: use Gecko to draw to the Surface, and use a SurfaceTexture to render this Surface to a GL_TEXTURE_EXTERNAL_OES. Since I can't display this in Unity (not sure why), I am drawing this texture to a framebuffer and copying the pixels in the framebuffer to a GL_TEXTURE_2D. I am getting a web page in the texture_2d (in the emulator, with an ImageView and glReadPixels). However, when I import the work into Unity to test whether the pipeline is okay thus far, I just get a black screen. I CAN get images of the Surface via the PixelCopy API.
Here is an overview of my FBO code; my rendering code comes from grafika's texture2D program:
// bind the framebuffer
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferId);
GlUtil.checkGlError("glBindFramebuffer");
// unbind the external texture to make sure it's fresh
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
GlUtil.checkGlError("glBindTexture external (unbind)");
// bind the source texture (done in drawFrame as well)
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mOffscreenTextureId);
GlUtil.checkGlError("glBindTexture external");
// draw to the framebuffer
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // only really need to
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);  // clear pixels outside the rect
mFullScreen.drawFrame(mOffscreenTextureId, mIdentityMatrix);
// unbind the source texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
GlUtil.checkGlError("glBindTexture 2D (unbind)");
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
GlUtil.checkGlError("glBindTexture external (unbind)");
// make sure we're still bound to the FBO
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferId);
GlUtil.checkGlError("glBindFramebuffer");
// copy pixels from the framebuffer into the display texture
GLES20.glCopyTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 0, 0, 512, 512, 0);
// read pixels from the framebuffer into an ImageView for debugging
BitmapDisplay.mBitmap = SavePixels(0, 0, 512, 512);
// unbind the texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
Here are my Player Settings > Other Settings (screenshot).
Update 2: A possible pipeline to try: call the draw function of the external texture to the FBO (attached to Unity's texture_2d) in C++ via this interface.
Update 3: Calling the Java functions responsible for drawing the texture from the SurfaceTexture through the FBO to Unity's texture from native code via GL.IssuePluginEvent produces a black texture, as in the first update. It shows images in the emulator but not in Unity.
I had to do a similar task a couple of months ago and found out that the correct pipeline is creating the texture in Unity, obtaining its native pointer in C, and finally updating it in the Java layer.
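In outline, the Java side of that pipeline can look like the sketch below. The names are illustrative, not code from the project linked underneath; mUnityTextureId is assumed to be the value of Unity's Texture2D.GetNativeTexturePtr() handed down through the C layer, and mFrameBufferId/mFullScreen come from a grafika-style setup as in the question.
public void updateUnityTexture() {
    mSurfaceTexture.updateTexImage();                   // latch the newest frame
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferId);
    mFullScreen.drawFrame(mTextureId, mIdentityMatrix); // external OES -> FBO
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mUnityTextureId);
    // copy the FBO contents into the texture Unity created
    GLES20.glCopyTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}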
Please take a look at this sample project; it should give you a different perspective.
https://github.com/robsondepaula/unity-android-native-camera
Regards
I was trying to connect my callback function via TangoService_connectOnFrameAvailable. I was able to connect it and access the TangoImageBuffer. However, I noticed that the buffer is const and cannot be updated. I need to modify the image data for image-processing purposes, like contour detection, and display the result.
So my question is: how can we change the TangoJNINative_render method to update the GL buffer?
Here is what the renderer function looks like:
JNIEXPORT void JNICALL
Java_com_project_TangoJNINative_render(JNIEnv*, jobject) {
  // Let's say I have an image buffer here called "uint8_t* buffer"
  glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
  glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
  glViewport(0, 0, screen_width, screen_height);
  // UpdateTexture()
  tango_handler.UpdateColorTexture();
  // I NEED SOME CODE HERE to set the GL buffer
  video_overlay->Render(glm::mat4(1.0f), glm::mat4(1.0f));
}
Thanks for your help.
Similar to the regular Camera API, you can receive the TangoImageBuffer, manipulate the pixels, and assign them to your own texture (not the one provided by Tango), then display that texture instead, with a TextureRenderer and the like.
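A minimal sketch of that copy-process-upload pattern in Java; onFrameAvailable, the RGBA layout, and mMyTextureId are all assumptions standing in for however you receive and convert the Tango frame:
void onFrameAvailable(ByteBuffer constFrame, int width, int height) {
    // The incoming buffer is read-only, so copy it before touching the pixels.
    ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
    pixels.put(constFrame);
    pixels.rewind();
    // ... run contour detection / other image processing on "pixels" here ...
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mMyTextureId);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
}
The upload must happen on the thread that owns the GL context, so in practice you would copy in the callback and do the glTexSubImage2D in the render function.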
I'm a newbie in Android Vuforia AR development. After Googling and searching the Vuforia forum without results, I've come here for your suggestions. I successfully replaced a teapot with my own 3D object; now I need to add more teapots to the "stones" target, like in this image link. Have you ever worked on this case? Please give me some pointers to begin.
Thanks and best regards!
Are you using Unity? Here are two suggestions:
You can programmatically instantiate prefabs on an image target following this code; just add additional transforms:
https://developer.vuforia.com/forum/faq/unity-how-can-i-dynamically-attach-my-3d-model-image-target
Alternatively, in your Scene Hierarchy, you can make additional GameObjects children of the ImageTarget prefab (probably the easiest way), and adjust their position using the Scene Editor.
First, grab a fresh copy of the modelview matrix before transforming it. Second, bind your modelViewProjectionMatrix before using it.
modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(trackable->getPose());
SampleUtils::rotatePoseMatrix(5.0f, 0.0f, 0.0f, 1.0f, &modelViewMatrix.data[0]);
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                             &modelViewMatrix.data[0]);
SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0],
                            &modelViewProjection.data[0]);
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
                   (GLfloat*)&modelViewProjection.data[0]);
SampleUtils::checkGlError("ImageTargets renderFrame");
glDrawElements(GL_TRIANGLES, NUM_TEAPOT_OBJECT_INDEX, GL_UNSIGNED_SHORT,
               (const GLvoid*)&teapotIndices[0]);
I'm learning OpenGL ES and would like a more intuitive interface with 3D objects than the one suggested by Google in the TouchRotateActivity sample.
In order to do that, I would like to multiply my modelview matrix by the modelview matrix from the previous state.
But I encounter the following problem: glGetFloatv returns 0 values in my float array, and I don't understand why (my modelview matrix is not empty: if it were, I wouldn't get my cube on the screen).
Could someone help me figure out what the problem is? Here are the changes in the code.
private float[] previous;

public CubeRenderer() {
    mCube = new Cube();
    previous = new float[16];
}

public void onDrawFrame(GL10 gl) {
    GL11 gl11 = (GL11) gl;
    gl11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    gl11.glMatrixMode(GL11.GL_MODELVIEW);
    gl11.glLoadIdentity();
    gl11.glTranslatef(0, 0, -3.0f);
    gl11.glRotatef(mAngleX, 0, 1, 0);
    gl11.glRotatef(mAngleY, 1, 0, 0);
    gl11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
    gl11.glEnableClientState(GL11.GL_COLOR_ARRAY);
    /*if (!previous.equals(new float[16]))
          gl11.glMultMatrixf(previous, 0);*/
    gl11.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, previous, 0);
    Log.d("matrix size", Integer.toString(previous.length));
    for (int i = 0; i < previous.length; i++)
        Log.d(Integer.toString(i), Float.toString(previous[i]));
    mCube.draw(gl11);
}
Thank you in advance.
Depending on your device, you might be using the PixelFlinger software GL renderer, which unfortunately does not implement glGetFloatv, at least as of version 1.2. Checking the logcat output should reveal messages to this effect if this is the case.
The solution is to handle the matrices yourself, so there's no need to retrieve them from OpenGL in the first place. Like so.
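A minimal sketch of that approach with android.opengl.Matrix (field names are from the question; the matrix math mirrors the glTranslatef/glRotatef calls above):
public void onDrawFrame(GL10 gl) {
    GL11 gl11 = (GL11) gl;
    gl11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    // Build the modelview in our own array instead of asking GL for it back.
    float[] modelView = new float[16];
    Matrix.setIdentityM(modelView, 0);
    Matrix.translateM(modelView, 0, 0, 0, -3.0f);
    Matrix.rotateM(modelView, 0, mAngleX, 0, 1, 0);
    Matrix.rotateM(modelView, 0, mAngleY, 1, 0, 0);
    gl11.glMatrixMode(GL11.GL_MODELVIEW);
    gl11.glLoadMatrixf(modelView, 0);                // hand GL our copy
    gl11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
    gl11.glEnableClientState(GL11.GL_COLOR_ARRAY);
    System.arraycopy(modelView, 0, previous, 0, 16); // no glGetFloatv needed
    mCube.draw(gl11);
}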
I don't program in Java, so for all I know your problem could be in the way the memory is being passed to glGetFloatv. In any case, I found this page floating around out there; maybe it will help you.
I've been doing some simple shaders and I'm encountering an error that happens randomly: when I start rendering my scene, sometimes the mesh is rendered with extra vertices, and if I kill the activity and then open it again, it sometimes renders without them.
My guess is that the memory on the GPU is not completely wiped when I kill the activity. What's weirder is that these extra polygons are sometimes rendered using my shader logic and other times as if they were filled with random squares.
I'm going crazy; I've reviewed all the code, from where I read the OBJ file to where I set the vertex attributes. If you have seen this before, please let me know. BTW, I'm using a Motorola Milestone with Android 2.1.
This is the code where I create a simple triangle and set the attributes of its vertices:
//This is where I create the mesh
mMesh = new Mesh();
mMesh.setVertices(new float[]{-0.5f, 0f, 0.5f,
0.5f, 0f, -0.5f,
-0.5f, 0f, -0.5f});
ArrayList<VertexAttribute> attributes = new ArrayList<VertexAttribute>();
attributes.add(new VertexAttribute(Usage.Position, 3, ProgramShader.POSITION_ATTRIBUTE));
VertexAttributes vertexAttributes = new VertexAttributes(attributes.toArray(new VertexAttribute[attributes.size()]));
mMesh.setVertexAttributes(vertexAttributes);
...
// This is where I send the mesh to OpenGL
for (VertexAttribute attr : mVertexAttributes.getAttributes().values()) {
    mVertexBuffer.position(attr.offset);
    int handler = shader.getHandler(attr.alias);
    if (handler != -1) {
        try {
            GLES20.glVertexAttribPointer(handler, attr.numComponents, GLES20.GL_FLOAT,
                    false, mVertexAttributes.vertexSize, mVertexBuffer);
            GLES20.glEnableVertexAttribArray(handler);
        } catch (RuntimeException e) {
            Log.d("CG", attr.alias);
            throw e;
        }
    }
}
// (length = 3 for a triangle)
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, length);
Here are some screenshots for you to see the issue:
Screenshots
Also, here is a link to a video I took while running the app on the phone.
Video
So... I found the problem; it was a really dumb thing I was doing:
// On this line I was sending length, where length was the length of the
// vertices array for the triangle: it was 9 (x, y, z for each vertex)
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, length);

// I had to divide that by the number of components of each vertex,
// so when a vertex only has position attributes (x, y, z) it is divided by 3;
// with more, e.g. normals, it is divided by 6 (x, y, z, normalX, normalY, normalZ)
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, length / mVertexAttributes.vertexNumComponents);
I hope this helps others.
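One way to keep this class of bug out (an illustrative helper, not from the original code): derive the draw count from the data and the attribute layout in a single place, instead of passing raw float-array lengths around.
// Hypothetical helper around the same Mesh/VertexAttributes types as above.
private static int vertexCount(float[] vertices, VertexAttributes attrs) {
    return vertices.length / attrs.vertexNumComponents;
}

// usage at the draw call:
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount(vertices, mVertexAttributes));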