I am using a stencil mask on a 3D object in the hello_ar Java demo, but I am running into some unexpected behaviour. My stencil mask correctly occludes the plane renderer, but the 3D object (Andy) does not react as expected. Instead he seems to get flipped, as shown in the picture. I am not sure how to approach fixing this issue. Attached is the code snippet doing the stencil masking.
Image of stencil correctly working on plane buffer but failing on 3d model
GLES20.glClear(GLES20.GL_STENCIL_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_STENCIL_TEST);
GLES20.glColorMask(false, false, false, false);
GLES20.glDepthMask(false);
GLES20.glStencilFunc(GLES20.GL_NEVER, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_REPLACE, GLES20.GL_KEEP, GLES20.GL_KEEP);
GLES20.glStencilMask(0xFF);
GLES20.glClear(GLES20.GL_STENCIL_BUFFER_BIT);
// controls how pixels are rendered in the stencil mask
quadDrawer.draw();
GLES20.glColorMask(true, true, true, true);
GLES20.glDepthMask(true);
GLES20.glStencilMask(0xFF);
GLES20.glStencilFunc(GLES20.GL_EQUAL, 0, 0xFF);
// Visualize planes.
// reacts correctly to the stencil mask
planeRenderer.drawPlanes(
session.getAllTrackables(Plane.class), camera.getDisplayOrientedPose(), projmtx);
// Visualize anchors created by touch.
float scaleFactor = 1.0f;
for (ColoredAnchor coloredAnchor : anchors) {
if (coloredAnchor.anchor.getTrackingState() != TrackingState.TRACKING) {
continue;
}
// Get the current pose of an Anchor in world space. The Anchor pose is updated
// during calls to session.update() as ARCore refines its estimate of the world.
coloredAnchor.anchor.getPose().toMatrix(anchorMatrix, 0);
// Update and draw the model and its shadow.
// does not react correctly to the stencil mask
virtualObject.updateModelMatrix(anchorMatrix, scaleFactor);
virtualObjectShadow.updateModelMatrix(anchorMatrix, scaleFactor);
virtualObject.draw(viewmtx, projmtx, colorCorrectionRgba, coloredAnchor.color);
virtualObjectShadow.draw(viewmtx, projmtx, colorCorrectionRgba, coloredAnchor.color);
}
GLES20.glDisable(GLES20.GL_STENCIL_TEST);
Forgot to update this with the "solution" I found. Instead of applying the stencil mask to the 3D object, I render the background again, passing it through the stencil mask. The background then overlays the 3D object, achieving the desired "masking" effect.
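In code, the idea looks roughly like this. This is a sketch, not the exact sample code: `backgroundRenderer`, `frame`, and the stencil reference value 1 are assumptions based on the hello_ar sample and on the mask-writing snippet above.

```java
// After drawing the virtual objects, draw the camera background once more,
// but only where the stencil buffer was marked by the mask quad.
GLES20.glEnable(GLES20.GL_STENCIL_TEST);
GLES20.glStencilMask(0x00);                      // don't modify the stencil buffer
GLES20.glStencilFunc(GLES20.GL_EQUAL, 1, 0xFF);  // pass only inside the mask
GLES20.glDepthMask(false);                       // the background shouldn't write depth
backgroundRenderer.draw(frame);                  // re-draw the camera image over Andy
GLES20.glDepthMask(true);
GLES20.glDisable(GLES20.GL_STENCIL_TEST);
```

Because the background passes the stencil test only inside the mask, it covers whatever part of the model falls inside that region, which reads visually as the model being masked out.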
I have been working with the ARCore demo code provided by Google, in Android Studio; I would like to avoid using Unity if I can to complete this task.
By default the plane is shown as white triangles with transparent negative space. I would like to change that plane to instead use a texture that can be tiled throughout the environment; an example of this would be a grass texture.
The default image the plane uses is a file called trigrid.png and that is defined in the HelloArActivity.java.
https://github.com/google-ar/arcore-android-sdk/blob/master/samples/java_arcore_hello_ar/app/src/main/java/com/google/ar/core/examples/java/helloar/HelloArActivity.java
I tried to replace that with an image file that was just a grass texture, called floor.png. This just appears all white and doesn't display the grass at all.
try {
  mPlaneRenderer.createOnGlThread(/*context=*/ this, "floor.png");
} catch (IOException e) {
  Log.e(TAG, "Failed to read plane texture");
}
I have tried adding
GLES20.glEnable(GLES20.GL_BLEND);
in the drawPlanes function, but that didn't seem to help. I also commented out some of the color changes in drawPlanes as well.
//GLES20.glClearColor(1, 1, 1, 1);
//GLES20.glColorMask(false, false, false, true);
//GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
//GLES20.glColorMask(true, true, true, true);
I'm not sure what is required to make the texture show. It could have to do with the plane_fragment.shader file, but I don't have any experience with shaders.
Any insight would be helpful.
The shaders are very important. If you want to do any graphics programming in OpenGL, you need to know at least a little about shaders. They are programs that run on the graphics processing unit (GPU) to determine the color of every pixel in every frame. A quick intro to shaders is here: https://youtu.be/AyNZG_mqGVE.
To answer your question, you can use a new fragment shader which just draws your texture and does not mix in other colors. This is a quick and dirty solution; in the long term you definitely want to clean up the code so it no longer references the uniform variables that are unused.
Specifically:
Create a new file called plane_simple_fragment.shader in the src/main/assets/raw directory.
Open it in the editor and add the following code:
precision highp float;
uniform sampler2D u_Texture;
varying vec3 v_TexCoordAlpha;
void main() {
  gl_FragColor = texture2D(u_Texture, v_TexCoordAlpha.xy);
}
Then in PlaneRenderer change to the new shader by replacing R.raw.plane_fragment with R.raw.plane_simple_fragment.
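One more thing worth checking for a tiled texture: if your grass tile doesn't repeat across the plane, the texture's wrap mode may be the cause. You can set it to GL_REPEAT after binding the texture during setup (a sketch; `planeTextureId` is a hypothetical variable name, and note that GL_REPEAT requires power-of-two texture dimensions in OpenGL ES 2.0):

```java
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, planeTextureId);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
    GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
    GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
```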
I'm extremely new to Unity and VR and I've just been finishing up Unity tutorials from YouTube. Unfortunately there isn't much documentation on the process one should follow to make a VR app with Unity.
What I need is to be able to replicate the Photosphere app in the Cardboard app for Android. I need to do this using Unity if possible.
The photospheres were taken with a Nexus 4's camera using the photosphere option and look like the image below:
I tried following this really nice walkthrough, which attaches a cubemap skybox to the lighting. The problem is that the top and bottom parts of the cube don't seem to show the proper image.
I tried doing it with a six-sided skybox too, but I'm pretty lost about how I should proceed with that, primarily because I've got just one photosphere image and the six-sided skybox has six input texture parameters.
I also tried following this link, but the information there is slightly overwhelming.
Any help or pointers in the right direction would be extremely appreciated!
Thank you :)
I also went through the tutorial in the link you provided, but it seems that they did it "manually".
Since you have Unity 3D and the Cardboard SDK for Unity, you don't have to configure the cameras yourself.
Please follow this tutorial.
There is an alternative way: create a Sphere and put the camera inside it:
http://zhvillues.tumblr.com/post/126331275376/creating-a-360-viewer-using-unity-3d
You have to apply a custom shader to the Sphere to render the inside of it.
Shader "Custom/sphereShader" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _Color ("Main Color", Color) = (1,1,1,0.5)
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        Cull Front

        CGPROGRAM
        #pragma surface surf Lambert vertex:vert

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
            float4 color : COLOR;
        };

        // Flip the normals so the inside of the sphere is lit and rendered.
        void vert(inout appdata_full v)
        {
            v.normal.xyz = v.normal * -1;
        }

        void surf (Input IN, inout SurfaceOutput o) {
            fixed3 result = tex2D(_MainTex, IN.uv_MainTex);
            o.Albedo = result.rgb;
            o.Alpha = 1;
        }
        ENDCG
    }
    Fallback "Diffuse"
}
In my app I use several DecalSprites as part of my scene. They all have transparency (PNG textures). When they overlap, some of them show a black background instead of transparency. The DecalSprites have different Z coordinates, so they should appear one behind another.
Please note also the line on the border of a texture. This is also something that I'm struggling to remove.
Update 1: I use a PerspectiveCamera in the scene, but all the decals are positioned to face the camera as in 2D mode. This "black" background appears only in certain cases, e.g. when the camera moves right (and all those decals end up on the left of the scene). I also use the CameraGroupStrategy.
Solved! The reason was that CameraGroupStrategy, when ordering decals (from farthest to closest to the camera), uses the combined vector distance between the camera and each Decal. When my camera panned left or right, the distance to the Z-farthest Decal became LESS than that to the Z-closer Decal. This produced the artifact. Fix:
GroupStrategy strategy = new CameraGroupStrategy(cam , new ZStrategyComparator());
And the Comparator:
private class ZStrategyComparator implements Comparator<Decal> {
    @Override
    public int compare(Decal o1, Decal o2) {
        // Compare by Z distance only, ignoring the X/Y offset to the camera.
        float dist1 = cam.position.dst(0, 0, o1.getPosition().z);
        float dist2 = cam.position.dst(0, 0, o2.getPosition().z);
        return (int) Math.signum(dist2 - dist1);
    }
}
Thanks to all guys who tried to help. Especially Xoppa. He sent me into the right direction in libGDX IRC.
I want to use libGDX as a solution to scaling apps based on the screen's aspect ratio.
I've found this link, and I find it really useful:
http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/
I'm quite rusty with OpenGL (I haven't written for it in years) and I wish to use the example in this link so that it would be easy to place images and shapes.
Sadly, the origin is in the middle. I want to use it like on many other platforms: the top-left corner should be (0,0) and the bottom-right corner should be (targetWidth-1, targetHeight-1).
From what I remember, I need to move (translate) and rotate the camera in order to achieve this, but I'm not sure.
Here's my modified code of the link's example for the create() method:
@Override
public void create()
{
    camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
    // camera.translate(VIRTUAL_WIDTH / 2, VIRTUAL_HEIGHT / 2, 0);
    // camera.rotate(90, 0, 0, 1);
    camera.update();

    font = new BitmapFont(Gdx.files.internal("data/fonts.fnt"), false);
    font.setColor(Color.RED);

    screenQuad = new Mesh(true, 4, 4,
        new VertexAttribute(Usage.Position, 3, "attr_position"),
        new VertexAttribute(Usage.ColorPacked, 4, "attr_color"));
    Point bottomLeft = new Point(0, 0);
    Point topRight = new Point(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
    screenQuad.setVertices(new float[] {
        bottomLeft.x, bottomLeft.y, 0f, Color.toFloatBits(255, 0, 0, 255),
        topRight.x, bottomLeft.y, 0f, Color.toFloatBits(255, 255, 0, 255),
        bottomLeft.x, topRight.y, 0f, Color.toFloatBits(0, 255, 0, 255),
        topRight.x, topRight.y, 0f, Color.toFloatBits(0, 0, 255, 255)});
    screenQuad.setIndices(new short[] {0, 1, 2, 3});

    bottomLeft = new Point(VIRTUAL_WIDTH / 2 - 50, VIRTUAL_HEIGHT / 2 - 50);
    topRight = new Point(VIRTUAL_WIDTH / 2 + 50, VIRTUAL_HEIGHT / 2 + 50);
    quad = new Mesh(true, 4, 4,
        new VertexAttribute(Usage.Position, 3, "attr_position"),
        new VertexAttribute(Usage.ColorPacked, 4, "attr_color"));
    quad.setVertices(new float[] {
        bottomLeft.x, bottomLeft.y, 0f, Color.toFloatBits(255, 0, 0, 255),
        topRight.x, bottomLeft.y, 0f, Color.toFloatBits(255, 255, 0, 255),
        bottomLeft.x, topRight.y, 0f, Color.toFloatBits(0, 255, 0, 255),
        topRight.x, topRight.y, 0f, Color.toFloatBits(0, 0, 255, 255)});
    quad.setIndices(new short[] {0, 1, 2, 3});

    texture = new Texture(Gdx.files.internal(IMAGE_FILE));
    spriteBatch = new SpriteBatch();
    spriteBatch.getProjectionMatrix().setToOrtho2D(0, 0, VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
}
So far, I've succeeded in using this code to get scaled coordinates while still keeping the aspect ratio (which is great), but I didn't succeed in moving the origin (0,0) to the top-left corner.
Please help me.
EDIT: OK, after some testing, I've found out that the reason it isn't working is that I use the spriteBatch; I think it ignores the camera. This code occurs in the render part. No matter what I do to the camera, it will still show the same results.
@Override
public void render()
{
    if (Gdx.input.isKeyPressed(Keys.ESCAPE) || Gdx.input.justTouched())
        Gdx.app.exit();
    // update camera
    // camera.update();
    // camera.apply(Gdx.gl10);
    // set viewport
    Gdx.gl.glViewport((int) viewport.x, (int) viewport.y, (int) viewport.width, (int) viewport.height);
    // clear previous frame
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

    final String msg = "test";
    final TextBounds textBounds = font.getBounds(msg);
    spriteBatch.begin();
    screenQuad.render(GL10.GL_TRIANGLE_STRIP, 0, 4);
    quad.render(GL10.GL_TRIANGLE_STRIP, 0, 4);
    Gdx.graphics.getGL10().glEnable(GL10.GL_TEXTURE_2D);
    spriteBatch.draw(texture, 0, 0, texture.getWidth(), texture.getHeight(), 0, 0, texture.getWidth(), texture.getHeight(), false, false);
    spriteBatch.draw(texture, 0, VIRTUAL_HEIGHT - texture.getHeight(), texture.getWidth(), texture.getHeight(), 0, 0, texture.getWidth(), texture.getHeight(), false, false);
    font.draw(spriteBatch, msg, VIRTUAL_WIDTH - textBounds.width, VIRTUAL_HEIGHT);
    spriteBatch.end();
}
These lines:
camera.translate(VIRTUAL_WIDTH/2, VIRTUAL_HEIGHT/2, 0);
camera.rotate(90, 0, 0, 1);
should move the camera so that it does what you want. However, my code has an additional
camera.update()
call after it calls translate, and it looks like you're missing that. The doc for Camera.update says:
update
public abstract void update()
Recalculates the projection and view matrix of this camera and the Frustum planes. Use this after you've manipulated any of the attributes of the camera.
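Putting that together with the spriteBatch issue from your EDIT: the batch only honors the camera if you hand it the camera's combined matrix. A minimal sketch, assuming the same VIRTUAL_WIDTH/VIRTUAL_HEIGHT fields as your code; note that OrthographicCamera.setToOrtho(true, ...) is a built-in way to get a top-left (0,0) origin without any translate/rotate calls:

```java
// In create():
camera = new OrthographicCamera();
// yDown = true puts (0,0) at the top-left corner, with y increasing downwards.
camera.setToOrtho(true, VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
camera.update();

// In render(), make the SpriteBatch use the camera instead of its own default matrix:
spriteBatch.setProjectionMatrix(camera.combined);
spriteBatch.begin();
// ... draw calls ...
spriteBatch.end();
```

setProjectionMatrix must be called before begin() (or after end()); otherwise the batch keeps drawing with whatever matrix it had when the current batch started.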