Number of Textures drawn with call to glDrawElements - android

Overview
I am having trouble rendering the texture on the sides of my cube. I have successfully rendered textures on the top and bottom of my cube, but am unable to render on the sides.
What I have
I have a texture buffer of 48 elements (4*2 elements per face times 6 faces is 48), and it is filled with good coordinates.
The cube shape itself is drawing, but the sides are not being textured.
The image I am drawing is simply an image with the numbers 1-9, as you can see from the top of the cube. The textureBuffer repeats the same pattern over and over again...
texture[0] = 0;
texture[1] = 0;
texture[2] = 1;
texture[3] = 0;
texture[4] = 1;
texture[5] = 1;
texture[6] = 0;
texture[7] = 1;
texture[8] = 0;
texture[9] = 0;
texture[10] = 1;
texture[11] = 0;
texture[12] = 1;
texture[13] = 1;
texture[14] = 0;
texture[15] = 1;
texture[16] = 0f;
texture[17] = 0f;
texture[18] = 1f;
texture[19] = 0f;
texture[20] = 1f;
texture[21] = 1f;
texture[22] = 0f;
texture[23] = 1f;
which simply loads the texture buffer so that the full texture is rendered on each face.
Possible Problem
It appears that only the first 16 texture coordinates are actually being used, because only the top and bottom faces of the cube are textured. I've debugged it, though, and the textureBuffer really does contain 48 elements when I populate it.
Render Code
@Override
public void draw(GL10 gl)
{
super.draw(gl);
//gl.glColor4f(255, 0, 0, 150);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glEnable(GL10.GL_ALPHA_TEST);
gl.glAlphaFunc(GL10.GL_GREATER, 0.0f);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glTexCoordPointer(2,GL10.GL_FLOAT,0,textureBuffer);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureID);
gl.glBlendFunc(GL10.GL_SRC_ALPHA,GL10.GL_ONE_MINUS_SRC_ALPHA);
gl.glEnable(GL10.GL_BLEND);
gl.glFrontFace(GL10.GL_CCW);
gl.glEnable(GL10.GL_CULL_FACE);
gl.glCullFace(GL10.GL_BACK);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3,GL10.GL_FLOAT,0,vertexBuffer);
gl.glDrawElements(GL10.GL_TRIANGLES,indexBuffer.capacity(),GL10.GL_UNSIGNED_SHORT,indexBuffer);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glDisable(GL10.GL_CULL_FACE);
gl.glDisable(GL10.GL_ALPHA_TEST);
gl.glDisable(GL10.GL_TEXTURE_2D);
gl.glColor4f(255, 255, 255, 255);
}
Creating the variable textureBuffer
The texture array passed in contains 48 elements.
public void constructTextureBuffer(float[] texture)
{
ByteBuffer vbb = ByteBuffer.allocateDirect(texture.length*4);
vbb.order(ByteOrder.nativeOrder());
textureBuffer = vbb.asFloatBuffer();
textureBuffer.put(texture);
textureBuffer.position(0);
}
The vertexBuffer is correctly set up and, together with the index buffer, renders the cube. Do you know why the sides of the cube are not being textured?
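A likely explanation (based on how OpenGL ES client arrays work, not confirmed in the post): with glDrawElements, a single index selects the same slot in every enabled array, so a cube built from 8 shared vertices can only ever reference the first 8 texture-coordinate pairs, which is exactly the 16 floats observed above. Per-face texture coordinates require duplicating vertices so that each face owns 4 entries in both the vertex and texture arrays. A minimal sketch of that layout:
// Sketch (assumption): 24 duplicated vertices, 4 per face, so every face
// can carry its own (u, v) pair per corner.
float[] vertices = new float[24 * 3]; // 6 faces * 4 corners * xyz (filled per face)
float[] texture = new float[24 * 2]; // 6 faces * 4 corners * uv (filled per face)
short[] indices = new short[6 * 6]; // 6 faces * 2 triangles * 3 indices
for (short face = 0; face < 6; face++) {
short base = (short) (face * 4);
indices[face * 6 + 0] = base; // triangle 1: corners 0-1-2
indices[face * 6 + 1] = (short) (base + 1);
indices[face * 6 + 2] = (short) (base + 2);
indices[face * 6 + 3] = base; // triangle 2: corners 0-2-3
indices[face * 6 + 4] = (short) (base + 2);
indices[face * 6 + 5] = (short) (base + 3);
}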
NEW!!
So I tried creating a shape by hand, and I am running into the same situation with the texture buffer. I can render two faces but not the third! It appears that anything past 8 texture vertices does not work.
This picture shows my new shape. Notice the horizontal extension. No matter what I do to those texture coordinates, that texture does not change. That is also the third face of my random object.

I have yet to get index buffers and textures to work together. I've tried almost everything! So instead (and unfortunately) I am building lots of triangles. In order to parse the .obj file generated by Blender, I wrote this function, which creates the vertex buffer and texture buffer for me.
public static Mesh createMesh(int resourceID)
{
Mesh m = new Mesh();
Scanner s = null;
BufferedReader inputStream = null;
ArrayList<Float> readInVerticies = new ArrayList<Float>();
ArrayList<Float> readInTextures = new ArrayList<Float>();
ArrayList<Short> readInVertexIndicies = new ArrayList<Short>();
ArrayList<Short> readInTextureIndicies = new ArrayList<Short>();
int numberFaces = 0;
try
{
inputStream = new BufferedReader(new InputStreamReader(context.getResources().openRawResource(resourceID)));
s = new Scanner(inputStream);
String line = null;
/*
* Read the header part of the file
*/
line = inputStream.readLine();
line = inputStream.readLine();
line = inputStream.readLine();
line = inputStream.readLine();
line = inputStream.readLine();
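// NOTE: this assumes a fixed five-line header before the vertex data, which
// matches this particular Blender export but not .obj files in general.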
while(line.charAt(0) == 'v' && line.charAt(1) != 't')
{
s = new Scanner(line);
s.next();
readInVerticies.add(s.nextFloat());
readInVerticies.add(s.nextFloat());
readInVerticies.add(s.nextFloat());
line = inputStream.readLine();
}
while(line.charAt(0)=='v' && line.charAt(1)=='t')
{
s = new Scanner(line);
s.next(); //read in "vt"
readInTextures.add(s.nextFloat());
readInTextures.add(s.nextFloat());
line = inputStream.readLine();
}
line = inputStream.readLine();
line = inputStream.readLine();
while(line != null && line.charAt(0) == 'f')
{
s = new Scanner(line);
s.useDelimiter("[ /\n]");
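// A face line in this Blender export presumably looks like "f 1/1 2/2 3/3 4/4"
// (vertex/texture index pairs for a quad); the delimiter above splits it into
// the pairs read below.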
String buffer = s.next();
short vi1,vi2,vi3,vi4;
short ti1,ti2,ti3,ti4;
vi1 = s.nextShort();
ti1 = s.nextShort();
vi2 = s.nextShort();
ti2 = s.nextShort();
vi3 = s.nextShort();
ti3 = s.nextShort();
vi4 = s.nextShort();
ti4 = s.nextShort();
readInVertexIndicies.add(vi1);
readInVertexIndicies.add(vi2);
readInVertexIndicies.add(vi3);
readInVertexIndicies.add(vi4);
readInTextureIndicies.add(ti1);
readInTextureIndicies.add(ti2);
readInTextureIndicies.add(ti3);
readInTextureIndicies.add(ti4);
numberFaces = numberFaces + 1;
line = inputStream.readLine();
}
/*
* Constructing our vertices. Each face becomes 2 triangles of 3
* vertices each, so 6 vertices per face, and each vertex has 3
* coordinates, hence numberFaces * 6 * 3.
*
* The texture array uses the same vertex count, but with only 2
* coordinates per vertex.
*/
float verticies[] = new float[numberFaces * 6 * 3];
float textures[] = new float[numberFaces * 6 * 2];
for(int i=0;i<numberFaces;i++)
{
verticies[i*18+0] = readInVerticies.get((readInVertexIndicies.get(i*4+0)-1)*3+0);
verticies[i*18+1] = readInVerticies.get((readInVertexIndicies.get(i*4+0)-1)*3+1);
verticies[i*18+2] = readInVerticies.get((readInVertexIndicies.get(i*4+0)-1)*3+2);
textures[i*12+0] = readInTextures.get((readInTextureIndicies.get(i*4+0)-1)*2+0);
textures[i*12+1] = readInTextures.get((readInTextureIndicies.get(i*4+0)-1)*2+1);
verticies[i*18+3] = readInVerticies.get((readInVertexIndicies.get(i*4+1)-1)*3+0);
verticies[i*18+4] = readInVerticies.get((readInVertexIndicies.get(i*4+1)-1)*3+1);
verticies[i*18+5] = readInVerticies.get((readInVertexIndicies.get(i*4+1)-1)*3+2);
textures[i*12+2] = readInTextures.get((readInTextureIndicies.get(i*4+1)-1)*2+0);
textures[i*12+3] = readInTextures.get((readInTextureIndicies.get(i*4+1)-1)*2+1);
verticies[i*18+6] = readInVerticies.get((readInVertexIndicies.get(i*4+2)-1)*3+0);
verticies[i*18+7] = readInVerticies.get((readInVertexIndicies.get(i*4+2)-1)*3+1);
verticies[i*18+8] = readInVerticies.get((readInVertexIndicies.get(i*4+2)-1)*3+2);
textures[i*12+4] = readInTextures.get((readInTextureIndicies.get(i*4+2)-1)*2+0);
textures[i*12+5] = readInTextures.get((readInTextureIndicies.get(i*4+2)-1)*2+1);
verticies[i*18+9] = readInVerticies.get((readInVertexIndicies.get(i*4+0)-1)*3+0);
verticies[i*18+10] = readInVerticies.get((readInVertexIndicies.get(i*4+0)-1)*3+1);
verticies[i*18+11] = readInVerticies.get((readInVertexIndicies.get(i*4+0)-1)*3+2);
textures[i*12+6] = readInTextures.get((readInTextureIndicies.get(i*4+0)-1)*2+0);
textures[i*12+7] = readInTextures.get((readInTextureIndicies.get(i*4+0)-1)*2+1);
verticies[i*18+12] = readInVerticies.get((readInVertexIndicies.get(i*4+2)-1)*3+0);
verticies[i*18+13] = readInVerticies.get((readInVertexIndicies.get(i*4+2)-1)*3+1);
verticies[i*18+14] = readInVerticies.get((readInVertexIndicies.get(i*4+2)-1)*3+2);
textures[i*12+8] = readInTextures.get((readInTextureIndicies.get(i*4+2)-1)*2+0);
textures[i*12+9] = readInTextures.get((readInTextureIndicies.get(i*4+2)-1)*2+1);
verticies[i*18+15] = readInVerticies.get((readInVertexIndicies.get(i*4+3)-1)*3+0);
verticies[i*18+16] = readInVerticies.get((readInVertexIndicies.get(i*4+3)-1)*3+1);
verticies[i*18+17] = readInVerticies.get((readInVertexIndicies.get(i*4+3)-1)*3+2);
textures[i*12+10] = readInTextures.get((readInTextureIndicies.get(i*4+3)-1)*2+0);
textures[i*12+11] = readInTextures.get((readInTextureIndicies.get(i*4+3)-1)*2+1);
}
m.constructVertexBuffer(verticies);
m.constructTextureBuffer(textures);
}
catch (FileNotFoundException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
return m;
}
So if you ever have trouble parsing a .obj file, feel free to use this as a reference or guide! Blender exports faces with 4 vertices (quads), and this code turns each quad into 2 triangles of 3 vertices each.

Related

How do you make shading work between several 3D objects using rajawali?

I made a 3D environment with a few 3D objects using Rajawali. I set up a directional light, everything displays, and I added a diffuse material to every object.
But I cannot get the shadow of one object to fall on another.
The objects shade themselves depending on how they are oriented within the directional light, but they do not seem to receive shadows from other objects. I also tried with a spotlight, but it didn't work.
What can I do? Is there a feature to enable, or something similar?
Here is my code:
public class BasicRenderer extends Renderer {
private Sphere mEarthSphere;
private DirectionalLight mDirectionalLight;
private SpotLight mSpotLight;
private Object3D[][][] mCubes;
private Object3D mRootCube;
public BasicRenderer(Context context) {
super(context);
setFrameRate(60);
}
@Override
protected void initScene() {
getCurrentScene().setBackgroundColor(0, 0.5f, 1.0f, 1.0f);
getCurrentScene().setFog(new FogMaterialPlugin.FogParams(FogMaterialPlugin.FogType.LINEAR, 0x999999, 50, 100));
mSpotLight = new SpotLight();
mSpotLight.setPower(20);
mSpotLight.enableLookAt();
mSpotLight.setPosition(new Vector3(2, 1, 0));
mSpotLight.setLookAt(0,0,0);
//getCurrentScene().addLight(mSpotLight);
mDirectionalLight = new DirectionalLight(-1f, -2f, -1.0f);
mDirectionalLight.setColor(1.0f, 1.0f, 1.0f);
mDirectionalLight.setPower(1.5f);
getCurrentScene().addLight(mDirectionalLight);
SpecularMethod.Phong phongMethod = new SpecularMethod.Phong(0xeeeeee, 200);
Material material = new Material();
material.enableLighting(true);
material.setDiffuseMethod(new DiffuseMethod.Lambert());
//material.setSpecularMethod(phongMethod);
Texture earthTexture = new Texture("Earth", R.drawable.earthtruecolor_nasa_big);
NormalMapTexture earthNormal = new NormalMapTexture("earthNormal", R.drawable.earthtruecolor_nasa_big_n);
earthTexture.setInfluence(.5f);
try{
material.addTexture(earthTexture);
material.addTexture(earthNormal);
} catch (ATexture.TextureException error){
Log.d("BasicRenderer" + ".initScene", error.toString());
}
material.setColorInfluence(0);
mEarthSphere = new Sphere(1, 24, 24);
mEarthSphere.setMaterial(material);
getCurrentScene().addChild(mEarthSphere);
getCurrentCamera().setZ(4.2f);
mCubes = new Object3D[30][30][2];
Material cubeMaterial = new Material();
cubeMaterial.enableLighting(true);
cubeMaterial.setDiffuseMethod(new DiffuseMethod.Lambert(1));
//cubeMaterial.setSpecularMethod(phongMethod);
cubeMaterial.enableTime(true);
cubeMaterial.setColorInfluence(0);
Texture cubeTexture = new Texture("Stone", R.drawable.stone);
try{
cubeMaterial.addTexture(cubeTexture);
} catch (ATexture.TextureException error){
Log.d("BasicRenderer" + ".initScene", error.toString());
}
cubeMaterial.addPlugin(new DepthMaterialPlugin());
mRootCube = new Cube(1);
mRootCube.setMaterial(cubeMaterial);
mRootCube.setY(-1f);
// -- similar objects with the same material, optimize
mRootCube.setRenderChildrenAsBatch(true);
getCurrentScene().addChild(mRootCube);
mCubes[0][0][0] = mRootCube;
for(int z = 0; z < 2; z++) {
for(int y = 0; y < 5; y++) {
for (int x = 0; x < 30; x++) {
Object3D cube = mRootCube.clone(true);
cube.setY(-5 + y);
cube.setX(-15 + x);
cube.setZ(z);
mRootCube.addChild(cube);
mCubes[x][y][z] = cube;
}
}
}
Object3D cube = mRootCube.clone(true);
cube.setY(0);
cube.setX(-15 + 10);
cube.setZ(1);
mRootCube.addChild(cube);
mCubes[5][10][1] = cube;
getCurrentScene().addChild(mCubes[5][10][1]);
// -- create a chase camera
// the first parameter is the camera offset
// the second parameter is the interpolation factor
ChaseCamera chaseCamera = new ChaseCamera(new Vector3(0, 3, 16), mEarthSphere);
// -- tell the camera which object to chase
// -- set the far plane to 1000 so that we actually see the sky sphere
chaseCamera.setFarPlane(1000);
getCurrentScene().replaceAndSwitchCamera(chaseCamera, 0); //<--Also the only change!!!
}
@Override
public void onRender(final long elapsedTime, final double deltaTime) {
super.onRender(elapsedTime, deltaTime);
mEarthSphere.rotate(Vector3.Axis.Y, 1.0);
}
}
As you can see in the next picture, there is no shadow on the ground next to the cube on the left, but I think there should be.
https://i.imgur.com/qtl6mZf.jpg

Android ImageReader YUV 420 888 Repeating Data

I am trying to convert an Image received from an ImageReader using the Camera2 API into an OpenCV matrix and display it on screen using CameraBridgeViewBase, more specifically its deliverAndDrawFrame function. The ImageFormat for the reader is YUV_420_888 which, as far as I understand, has a Y plane with one grayscale value per pixel, and a second plane with interleaved U/V samples, one chroma pair for every four Y pixels. However, when I try to display this image, it appears as if the image is repeating and is rotated 90 degrees. The code below is supposed to put the YUV data into an OpenCV matrix (just grayscale for now, not rgba):
/**
* Takes an {@link Image} in the {@link ImageFormat#YUV_420_888} format and puts it into a provided {@link Mat} in rgba format.
*
* @param yuvImage {@link Image} in the {@link ImageFormat#YUV_420_888} format.
*/
public static void yuv420888imageToRgbaMat(final Image yuvImage, final Mat rgbaMat) {
final Image.Plane
Yp = yuvImage.getPlanes()[0],
UandVp = yuvImage.getPlanes()[1];
final ByteBuffer
Ybb = Yp .getBuffer(),
UandVbb = UandVp.getBuffer();
Ybb .get(mYdata , 0, 480*640 );
UandVbb.get(mUandVData, 0, 480*640 / 2 - 8);
for (int i = 0; i < 640*480; i++) {
for (int j = 0; j < 4; j++) {
mRawRGBAFrameData[i + 640*480*j] = mYdata[i];
}
mRawRGBAFrameData[i*4 ] = mYdata[i];
mRawRGBAFrameData[i*4+1] = mYdata[i];
mRawRGBAFrameData[i*4+2] = mYdata[i];
mRawRGBAFrameData[i*4+3] = -1;
}
}
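As an aside (based on the documented YUV_420_888 layout, not on anything in the post): the chroma plane has to be addressed through its row and pixel strides rather than assumed to be tightly packed. A minimal sketch, assuming yuvImage is the Image from the reader:
// Sketch: read the U chroma sample for pixel (x, y), honoring strides.
static byte chromaU(Image yuvImage, int x, int y) {
Image.Plane uPlane = yuvImage.getPlanes()[1];
ByteBuffer uBuffer = uPlane.getBuffer();
int rowStride = uPlane.getRowStride();
int pixelStride = uPlane.getPixelStride(); // often 2 when U and V interleave
return uBuffer.get((y / 2) * rowStride + (x / 2) * pixelStride);
}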
Here is my code for the OpenCV frame:
private class CameraFrame implements CvCameraViewFrame {
private Mat mRgba;
@Override
public Mat gray() {
return null;
}
@Override
public Mat rgba() {
mRgbaMat.put(0, 0, mRawRGBAFrameData);
return mRgba;
}
public CameraFrame(final Mat rgba) {
super();
mRgba = rgba;
}
}
The code for receiving and drawing the frame:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
final Image yuvImage = reader.acquireLatestImage();
yuv420888imageToRgbaMat(yuvImage, mRgbaMat);
deliverAndDrawFrame(mFrame);
yuvImage.close();
}
};
And, this is the code for making the image reader:
mRgbaMat = new Mat(mFrameHeight, mFrameWidth, CvType.CV_8UC4);
mFrame = new CameraFrame(mRgbaMat);
mImageReader = ImageReader.newInstance(mFrameWidth, mFrameHeight, ImageFormat.YUV_420_888, 1);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
AllocateCache();
This is the initialization of the arrays:
protected static byte[] mRawRGBAFrameData = new byte[640*480*4], mYdata = new byte[640*480], mUandVData = new byte[640*480 / 2];
Notes: mFrameWidth is 480 and mFrameHeight is 640. One weird thing is that the height and width for ImageReader and the Image received from it have inverted dimensions.
Here is the image with the code above: https://i.stack.imgur.com/lcdzf.png
Here is the image with this instead in yuv420888imageToRgbaMat https://i.stack.imgur.com/T2MOI.png
for (int i = 0; i < 640*480; i++) {
mRawRGBAFrameData[i] = mYdata[i];
}
We can see that the data is repeating in the Y frame, and for some reason this gives an actual good-looking image.
For anyone having the same problem of trying to use OpenCV with the Camera 2 API, I have come up with a solution. The first thing I discovered was that there is padding in the ByteBuffer that the ImageReader supplies, which can distort the output if you do not account for it. Another thing I chose to do was to create my own SurfaceView and draw to it using a Bitmap instead of using CameraBridgeViewBase, and so far it has worked out great. OpenCV has a function Utils.matToBitmap that takes a BGR matrix and converts it to an Android Bitmap, so that has been useful. I obtain the BGR matrix by putting the information from the first two Image.Planes supplied by the ImageReader into a one-channel OpenCV matrix formatted as YUV 420, and then using Imgproc.cvtColor with Imgproc.COLOR_YUV420p2BGR. The important thing to know is that the Y plane has one full sample per pixel, while the second U/V plane has interleaved samples that each map to four Y pixels, so the total length of the U/V plane is half that of the Y plane. See here. Anyways, here is some code:
Initialization of matrices
m_BGRMat = new Mat(Constants.VISION_IMAGE_HEIGHT, Constants.VISION_IMAGE_WIDTH, CvType.CV_8UC3);
m_Yuv420FrameMat = new Mat(Constants.VISION_IMAGE_HEIGHT * 3 / 2, Constants.VISION_IMAGE_WIDTH, CvType.CV_8UC1);
Every frame:
// Convert image to YUV 420 matrix
ImageUtils.imageToMat(image, m_Yuv420FrameMat, m_RawFrameData, m_RawFrameRowData);
// Convert YUV matrix to BGR matrix
Imgproc.cvtColor(m_Yuv420FrameMat, m_BGRMat, Imgproc.COLOR_YUV420p2BGR);
// Flip width and height then mirror vertically
Core.transpose(m_BGRMat, m_BGRMat);
Core.flip(m_BGRMat, m_BGRMat, 0);
// Draw to Surface View
m_PreviewView.drawImageMat(m_BGRMat);
Here is the conversion to YUV 420 matrix:
/**
* Takes an Android {@link Image} in the {@link ImageFormat#YUV_420_888} format and puts it into the provided OpenCV {@link Mat}.
*
* @param image {@link Image} in the {@link ImageFormat#YUV_420_888} format
*/
public static void imageToMat(final Image image, final Mat mat, byte[] data, byte[] rowData) {
ByteBuffer buffer;
int rowStride, pixelStride, width = image.getWidth(), height = image.getHeight(), offset = 0;
Image.Plane[] planes = image.getPlanes();
if (data == null || data.length != width * height) data = new byte[width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8];
if (rowData == null || rowData.length != planes[0].getRowStride()) rowData = new byte[planes[0].getRowStride()];
for (int i = 0; i < planes.length; i++) {
buffer = planes[i].getBuffer();
rowStride = planes[i].getRowStride();
pixelStride = planes[i].getPixelStride();
int
w = (i == 0) ? width : width / 2,
h = (i == 0) ? height : height / 2;
for (int row = 0; row < h; row++) {
int bytesPerPixel = ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
if (pixelStride == bytesPerPixel) {
int length = w * bytesPerPixel;
buffer.get(data, offset, length);
// Advance buffer the remainder of the row stride, unless on the last row.
// Otherwise, this will throw an IllegalArgumentException because the buffer
// doesn't include the last padding.
if (h - row != 1)
buffer.position(buffer.position() + rowStride - length);
offset += length;
} else {
// On the last row only read the width of the image minus the pixel stride
// plus one. Otherwise, this will throw a BufferUnderflowException because the
// buffer doesn't include the last padding.
if (h - row == 1)
buffer.get(rowData, 0, width - pixelStride + 1);
else
buffer.get(rowData, 0, rowStride);
for (int col = 0; col < w; col++)
data[offset++] = rowData[col * pixelStride];
}
}
}
mat.put(0, 0, data);
}
And finally, drawing
/**
* Given a {@link Mat} that represents a BGR image, draw it on the surface canvas.
* Uses the OpenCV helper function {@link Utils#matToBitmap(Mat, Bitmap)} to create a {@link Bitmap}.
*
* @param bgrMat BGR frame {@link Mat}
*/
public void drawImageMat(final Mat bgrMat) {
if (m_HolderReady) {
// Create bitmap from BGR matrix
Utils.matToBitmap(bgrMat, m_Bitmap);
// Obtain the canvas and draw the bitmap on top of it
final SurfaceHolder holder = getHolder();
final Canvas canvas = holder.lockCanvas();
canvas.drawBitmap(m_Bitmap, null, new Rect(0, 0, m_HolderWidth, m_HolderHeight), null);
holder.unlockCanvasAndPost(canvas);
}
}
This way works, but I imagine the best way to do it is to set up an OpenGL rendering context and write some sort of simple shader to display the matrix.

Vuforia 6.0.117 0x501 error when rendering texture

I'm trying to get Vuforia 6.0.117 working in my Android app. I'm using this specific version since it's the last version that supports FrameMarkers. The detection of FrameMarkers is working fine, but when I try to render a texture over the FrameMarker on my phone, I get an error stating:
After operation FrameMarkers render frame got glError 0x501
My renderFrame method:
// Clear color and depth buffer
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
// Get the state from Vuforia and mark the beginning of a rendering
// section
State state = Renderer.getInstance().begin();
// Explicitly render the Video Background
Renderer.getInstance().drawVideoBackground();
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendEquation(GLES20.GL_FUNC_ADD);
// GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
// We must detect if background reflection is active and adjust the
// culling direction.
// If the reflection is active, this means the post matrix has been
// reflected as well,
// therefore standard counter clockwise face culling will result in
// "inside out" models.
GLES20.glEnable(GLES20.GL_CULL_FACE);
GLES20.glCullFace(GLES20.GL_BACK);
if (Renderer.getInstance().getVideoBackgroundConfig().getReflection() == VIDEO_BACKGROUND_REFLECTION.VIDEO_BACKGROUND_REFLECTION_ON) {
GLES20.glFrontFace(GLES20.GL_CW); // Front camera
} else {
GLES20.glFrontFace(GLES20.GL_CCW); // Back camera
}
// Did we find any trackables this frame?
if (mActivity.isHelpVisible() || state.getNumTrackableResults() == 0) {
// no marker scanned
mActivity.hideInfoButton();
} else {
// Get the trackable:
TrackableResult trackableResult = state.getTrackableResult(0);
float[] modelViewMatrix = Tool.convertPose2GLMatrix(trackableResult.getPose()).getData();
// Check the type of the trackable:
MarkerResult markerResult = (MarkerResult) trackableResult;
Marker marker = (Marker) markerResult.getTrackable();
if (markerId != marker.getMarkerId()) {
markerId = marker.getMarkerId();
tag = DataManager.getInstance().getTagByMarkerId(markerId);
if (tag != null) {
texture = Texture.loadTexture(tag.getTexture());
setupTexture(texture);
tag.addToDB();
}
}
if (tag != null) {
String poiReference = tag.getPoiReference();
if (!poiReference.isEmpty()) {
mActivity.showInfoButton(poiReference);
}
// Select which model to draw:
Buffer vertices = planeObject.getVertices();
Buffer normals = planeObject.getNormals();
Buffer indices = planeObject.getIndices();
Buffer texCoords = planeObject.getTexCoords();
int numIndices = planeObject.getNumObjectIndex();
float[] modelViewProjection = new float[16];
float scale = (float) tag.getScale();
Matrix.scaleM(modelViewMatrix, 0, scale, scale, scale);
Matrix.multiplyMM(modelViewProjection, 0, vuforiaAppSession.getProjectionMatrix().getData(), 0, modelViewMatrix, 0);
GLES20.glUseProgram(shaderProgramID);
GLES20.glVertexAttribPointer(vertexHandle, 3, GLES20.GL_FLOAT, false, 0, vertices);
GLES20.glVertexAttribPointer(normalHandle, 3, GLES20.GL_FLOAT, false, 0, normals);
GLES20.glVertexAttribPointer(textureCoordHandle, 2, GLES20.GL_FLOAT, false, 0, texCoords);
GLES20.glEnableVertexAttribArray(vertexHandle);
GLES20.glEnableVertexAttribArray(normalHandle);
GLES20.glEnableVertexAttribArray(textureCoordHandle);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture.mTextureID[0]);
GLES20.glUniformMatrix4fv(mvpMatrixHandle, 1, false, modelViewProjection, 0);
GLES20.glUniform1i(texSampler2DHandle, 0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, numIndices, GLES20.GL_UNSIGNED_SHORT, indices);
GLES20.glDisableVertexAttribArray(vertexHandle);
GLES20.glDisableVertexAttribArray(normalHandle);
GLES20.glDisableVertexAttribArray(textureCoordHandle);
SampleUtils.checkGLError("FrameMarkers render frame");
}
}
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
Renderer.getInstance().end();
}
I'm loading a texture of size 640x482, which is loaded as follows:
public class Texture {
public int mWidth; // The width of the texture.
public int mHeight; // The height of the texture.
public int mChannels; // The number of channels.
public ByteBuffer mData; // The pixel data.
public int[] mTextureID = new int[1];
public boolean mSuccess = false;
public static Texture loadTexture(String fileName) {
try {
InputStream inputStream = new FileInputStream(fileName);
BufferedInputStream bufferedStream = new BufferedInputStream(inputStream);
Bitmap bitMap = BitmapFactory.decodeStream(bufferedStream);
bufferedStream.close();
inputStream.close();
int[] data = new int[bitMap.getWidth() * bitMap.getHeight()];
bitMap.getPixels(data, 0, bitMap.getWidth(), 0, 0, bitMap.getWidth(), bitMap.getHeight());
return loadTextureFromIntBuffer(data, bitMap.getWidth(), bitMap.getHeight());
} catch (IOException e) {
Log.e(Constants.DEBUG, "Failed to load texture '" + fileName + "' from APK");
Log.i(Constants.DEBUG, e.getMessage());
return null;
}
}
public static Texture loadTextureFromIntBuffer(int[] data, int width, int height) {
// Convert:
int numPixels = width * height;
byte[] dataBytes = new byte[numPixels * 4];
for (int p = 0; p < numPixels; ++p) {
int colour = data[p];
dataBytes[p * 4] = (byte) (colour >>> 16); // R
dataBytes[p * 4 + 1] = (byte) (colour >>> 8); // G
dataBytes[p * 4 + 2] = (byte) colour; // B
dataBytes[p * 4 + 3] = (byte) (colour >>> 24); // A
}
Texture texture = new Texture();
texture.mWidth = width;
texture.mHeight = height;
texture.mChannels = 4;
texture.mData = ByteBuffer.allocateDirect(dataBytes.length).order(ByteOrder.nativeOrder());
int rowSize = texture.mWidth * texture.mChannels;
for (int r = 0; r < texture.mHeight; r++) {
texture.mData.put(dataBytes, rowSize * (texture.mHeight - 1 - r), rowSize);
}
texture.mData.rewind();
texture.mSuccess = true;
return texture;
}
}
Anybody got an idea why I'm getting this error and how to fix it?
I cannot go over your entire code right now, and even if I could, I'm not sure it would help. You first need to narrow down the problem, so I will give you the method for doing that, in the hope that it will serve you in other cases as well.
You managed to find out that there was an error, but you are checking for it only at the end of the rendering function. What you need to do is place the checkGLError call in several places inside the rendering code (each printing a different message), until you can pinpoint the exact line after which the error first appears. Then, if you cannot understand the problem, comment here with the problematic line and I will try to help.
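For example, a tiny helper along these lines (a sketch, not Vuforia's SampleUtils) lets you tag each call site; note that 0x501 is GL_INVALID_VALUE:
// Sketch: call after each suspect GL call with a distinct tag to find the
// first call that raises the error.
static void checkGl(String tag) {
int error;
while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
Log.e("GLCheck", tag + ": glError 0x" + Integer.toHexString(error));
}
}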
UPDATE:
After looking at the shader code, and following your report that normalHandle is -1, I came to the following conclusions:
The error, which indicates that the variable vertexNormal cannot be found in the shader, is probably due to that variable being optimized out during shader compilation, since it is not really required.
Explanation: in the vertex shader (CUBE_MESH_VERTEX_SHADER), vertexNormal is assigned to a varying called normal (a variable that is passed to the fragment shader). In the fragment shader, this varying is declared but never used.
Therefore, you can actually delete the variables vertexNormal and normal from the shader, and delete all usages of normalHandle in your code.
This should eliminate the error.
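Alternatively (a defensive sketch, not part of the original sample), you can leave the shader untouched and simply skip any attribute whose location comes back as -1:
// Sketch: guard attribute setup; passing -1 to glVertexAttribPointer or
// glEnableVertexAttribArray raises GL_INVALID_VALUE (0x501).
int normalHandle = GLES20.glGetAttribLocation(shaderProgramID, "vertexNormal");
if (normalHandle != -1) {
GLES20.glVertexAttribPointer(normalHandle, 3, GLES20.GL_FLOAT, false, 0, normals);
GLES20.glEnableVertexAttribArray(normalHandle);
}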

How to load and display .obj file in Android with OpenGL-ES 2

I am trying to load an .obj file into my Android application and display it using OpenGL 2.
You can find the file here: EDIT: I removed the file; you can use any .obj file that contains the value types mentioned below for testing.
There are a lot of similar questions on stackoverflow but I did not find a simple solution that does not require some large library.
The file only contains the following value types:
g
v
vt
vn
f
I tried libgdx, which worked OK, but it is a bit of overkill for what I need.
I tried the oObjLoader https://github.com/seanrowens/oObjLoader without LWJGL. The parsing seems to work, but how can I display the values in a simple scene?
The next step is to attach an image as a texture to the object. But for now I would be happy to display the file as it is.
I am open to different solutions like pre-converting the file, because it will only be this one ever within the application.
Thanks!
Status update
Basic loading and displaying works now, as shown in my own answer.
I ended up writing a new parser; it can be used like this to build FloatBuffers for use in your Renderer:
ObjLoader objLoader = new ObjLoader(context, "Mug.obj");
numFaces = objLoader.numFaces;
// Initialize the buffers.
positions = ByteBuffer.allocateDirect(objLoader.positions.length * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
positions.put(objLoader.positions).position(0);
normals = ByteBuffer.allocateDirect(objLoader.normals.length * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
normals.put(objLoader.normals).position(0);
textureCoordinates = ByteBuffer.allocateDirect(objLoader.textureCoordinates.length * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
textureCoordinates.put(objLoader.textureCoordinates).position(0);
and here's the parser:
public final class ObjLoader {
public final int numFaces;
public final float[] normals;
public final float[] textureCoordinates;
public final float[] positions;
public ObjLoader(Context context, String file) {
Vector<Float> vertices = new Vector<>();
Vector<Float> normals = new Vector<>();
Vector<Float> textures = new Vector<>();
Vector<String> faces = new Vector<>();
BufferedReader reader = null;
try {
InputStreamReader in = new InputStreamReader(context.getAssets().open(file));
reader = new BufferedReader(in);
// read file until EOF
String line;
while ((line = reader.readLine()) != null) {
String[] parts = line.split(" ");
switch (parts[0]) {
case "v":
// vertices
vertices.add(Float.valueOf(parts[1]));
vertices.add(Float.valueOf(parts[2]));
vertices.add(Float.valueOf(parts[3]));
break;
case "vt":
// textures
textures.add(Float.valueOf(parts[1]));
textures.add(Float.valueOf(parts[2]));
break;
case "vn":
// normals
normals.add(Float.valueOf(parts[1]));
normals.add(Float.valueOf(parts[2]));
normals.add(Float.valueOf(parts[3]));
break;
case "f":
// faces: vertex/texture/normal
faces.add(parts[1]);
faces.add(parts[2]);
faces.add(parts[3]);
break;
}
}
} catch (IOException e) {
// cannot load or read file
} finally {
if (reader != null) {
try {
reader.close();
} catch (IOException e) {
//log the exception
}
}
}
numFaces = faces.size();
this.normals = new float[numFaces * 3];
textureCoordinates = new float[numFaces * 2];
positions = new float[numFaces * 3];
int positionIndex = 0;
int normalIndex = 0;
int textureIndex = 0;
for (String face : faces) {
String[] parts = face.split("/");
int index = 3 * (Short.valueOf(parts[0]) - 1);
positions[positionIndex++] = vertices.get(index++);
positions[positionIndex++] = vertices.get(index++);
positions[positionIndex++] = vertices.get(index);
index = 2 * (Short.valueOf(parts[1]) - 1);
textureCoordinates[textureIndex++] = textures.get(index++);
// NOTE: Bitmap gets y-inverted
textureCoordinates[textureIndex++] = 1 - textures.get(index);
index = 3 * (Short.valueOf(parts[2]) - 1);
this.normals[normalIndex++] = normals.get(index++);
this.normals[normalIndex++] = normals.get(index++);
this.normals[normalIndex++] = normals.get(index);
}
}
}
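To complete the picture, a hedged sketch of drawing these buffers with GLES20; aPositionHandle and aTexCoordHandle are assumed attribute locations from your own shader program (obtained via GLES20.glGetAttribLocation), not something the parser provides:
// Sketch: draw the expanded, non-indexed triangles produced by ObjLoader.
// Note that numFaces actually counts face vertices (3 per triangle).
positions.position(0);
GLES20.glVertexAttribPointer(aPositionHandle, 3, GLES20.GL_FLOAT, false, 0, positions);
GLES20.glEnableVertexAttribArray(aPositionHandle);
textureCoordinates.position(0);
GLES20.glVertexAttribPointer(aTexCoordHandle, 2, GLES20.GL_FLOAT, false, 0, textureCoordinates);
GLES20.glEnableVertexAttribArray(aTexCoordHandle);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, numFaces);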
Try this project that I found on GitHub:
https://github.com/WenlinMao/android-3d-model-viewer
This is a demo of OpenGL ES 2.0. It is an Android application with a 3D engine that can load Wavefront OBJ, STL, DAE & glTF files. The application is based on andresoviedo's project, which can be found here, with the additional ability to load and render the glTF format.
The purpose of the application is to learn and share how to draw using OpenGL ES and Android. As this is my first Android app, it is highly probable that there are bugs, but I will try to keep improving the app and adding more features.
This project is open source and contains classes that can solve your problem!
Try this code when you have the 'obj' contents, to get all the vertices, indices, normals and texture coords.
I hope it will give some help. Here's the code:
void readObj(String objContents) {
final float[] vertices;
final float[] normals;
final float[] textureCoords;
final short[] indices;
Vector<Float> verticesTemp = new Vector<>();
Vector<Float> normalsTemp = new Vector<>();
Vector<Float> textureCoordsTemp = new Vector<>();
Vector<String> facesTemp = new Vector<>();
String[] lines = objContents.split("\n");
for (String line : lines) {
String[] parts = line.split(" ");
switch (parts[0]) {
case "v":
verticesTemp.add(Float.parseFloat(parts[1]));
verticesTemp.add(Float.parseFloat(parts[2]));
verticesTemp.add(Float.parseFloat(parts[3]));
break;
case "vn":
normalsTemp.add(Float.parseFloat(parts[1]));
normalsTemp.add(Float.parseFloat(parts[2]));
normalsTemp.add(Float.parseFloat(parts[3]));
break;
case "vt":
textureCoordsTemp.add(Float.parseFloat(parts[1]));
textureCoordsTemp.add(Float.parseFloat(parts[2]));
break;
case "f":
facesTemp.add(parts[1]);
facesTemp.add(parts[2]);
facesTemp.add(parts[3]);
break;
}
}
vertices = new float[verticesTemp.size()];
normals = new float[normalsTemp.size()];
textureCoords = new float[textureCoordsTemp.size()];
indices = new short[facesTemp.size()];
for (int i = 0, l = verticesTemp.size(); i < l; i++) {
vertices[i] = verticesTemp.get(i);
}
for (int i = 0, l = normalsTemp.size(); i < l; i++) {
normals[i] = normalsTemp.get(i);
}
for (int i = 0, l = textureCoordsTemp.size(); i < l; i++) {
textureCoords[i] = textureCoordsTemp.get(i);
}
for (int i = 0, l = facesTemp.size(); i < l; i++) {
indices[i] = (short) (Short.parseShort(facesTemp.get(i).split("/")[0]) - 1);
}
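// NOTE: only the vertex index of each face element is kept; texture and normal
// indices are dropped, so textureCoords/normals stay in file order and will only
// line up with 'indices' if the .obj happens to use matching index values.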
// now all vertices, normals, textureCoords and indices are ready
}

how to get the coordinates of a contour

I am using OpenCV4Android version 2.4.11, and I am detecting any rectangular shapes in frames retrieved by the camera. As shown in image_1 below, I draw a contour in black around the detected object, and what I am trying to do is get all the coordinates of that drawn contour, the one that is ONLY drawn in black. What I attempted is shown in code_1 below: I get the largest contour and the index of the largest contour and save them in "largestContour" and "largest_contour_index" respectively. Then I draw the contour using
Imgproc.drawContours(mMatInputFrame, contours, largest_contour_index, new Scalar(0, 0, 0), 2, 8, hierachy, 0, new Point());
and then I pass the points of the largest contour to the class FindCorners, because I want to find the specific coordinates of the contour drawn in black, as follows:
this.mFindCorners = new FindCorners(largestContour.toArray());
double[] cords = this.mFindCorners.getCords();
The following line of code:
double[] cords = this.mFindCorners.getCords();
should give me the smallest x-coordinate, smallest y-coordinate, largest x-coordinate and largest y-coordinate. But when I draw circles at the coordinates I get from "this.mFindCorners.getCords();", I get something like image_2 below, which shows just the corners of the boundingRect.
I do not want any coordinates from the boundingRect; I want access to the coordinates of the contour that is drawn around the detected object in black.
Please let me know how to get the coordinates of the contour itself.
code_1:
if (contours.size() > 0) {
for (int i = 0; i < contours.size(); i++) {
contour2f = new MatOfPoint2f(contours.get(i).toArray());
approxDistance = Imgproc.arcLength(contour2f, true) * .01;//.02
approxCurve = new MatOfPoint2f();
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
points = new MatOfPoint(approxCurve.toArray());
double area = Math.abs(Imgproc.contourArea(points, true));
if (points.total() >= 4 && area >= 40000 && area <= 200000) {
if (area > largest_area) {
largest_area = area;
largest_contour_index = i;
pointsOfLargestContour = points;
largestContour = contours.get(i);
}
}
}
if (largest_area > 0) {
Imgproc.drawContours(mMatInputFrame, contours, largest_contour_index, new Scalar(0, 0, 0), 2, 8, hierachy, 0, new Point());
this.mFindCorners = new FindCorners(largestContour.toArray());
double[] cords = this.mFindCorners.getCords();
Core.circle(mMatInputFrame, new Point(cords[0], cords[1]), 10, new Scalar(255, 0, 0));
Core.circle(mMatInputFrame, new Point(cords[2], cords[3]), 10, new Scalar(255, 255, 0));
}
FindCorners:
public class FindCorners {
private final static String TAG = FragOpenCVCam.class.getSimpleName();
private ArrayList<Double> mlistXCords = null;
private ArrayList<Double> mlistYCords = null;
private double mSmallestX;
private double mSmallestY;
private double mLargestX;
private double mLargestY;
private double[] mCords = null;
public FindCorners(Point[] points) {
this.mlistXCords = new ArrayList<>();
this.mlistYCords = new ArrayList<>();
this.mCords = new double[4];
Log.d(TAG, "points.length: " + points.length);
for (int i = 0; i < points.length; i++) {
this.mlistXCords.add(points[i].x);
this.mlistYCords.add(points[i].y);
}
//ascending
Collections.sort(this.mlistXCords);
Collections.sort(this.mlistYCords);
this.mSmallestX = this.mlistXCords.get(0);
this.mSmallestY = this.mlistYCords.get(0);
this.mLargestX = this.mlistXCords.get(this.mlistXCords.size() - 1);
this.mLargestY = this.mlistYCords.get(this.mlistYCords.size() - 1);
this.mCords[0] = this.mSmallestX;
this.mCords[1] = this.mSmallestY;
this.mCords[2] = this.mLargestX;
this.mCords[3] = this.mLargestY;
}
public double[] getCords() {
return this.mCords;
}
}
image_1:
image_2:
Update
I do not want the coordinates of the bounding rect; what I want is the exact coordinates of the black contour. As shown in image_3, the coordinates I am getting from my code are where the red and yellow circles are, but I am looking for access to the coordinates of the black line (the contour) so I can draw some circles on it, as shown in image_3. The spots in green are just to show you where I want to have coordinates.
image_3:
Your problem is that you sorted the x's and y's separately, so the paired values no longer correspond to actual contour points; that is why your algorithm finds the red and yellow corner positions.
I can suggest the following algorithm:
double min_x = Double.MAX_VALUE, min_y = Double.MAX_VALUE;
double max_x = -Double.MAX_VALUE, max_y = -Double.MAX_VALUE;
int min_x_index = -1, min_y_index = -1, max_x_index = -1, max_y_index = -1;
for (int i = 0; i < points.length; i++)
{
if (points[i].x < min_x) { min_x = points[i].x; min_x_index = i; }
if (points[i].x > max_x) { max_x = points[i].x; max_x_index = i; }
if (points[i].y < min_y) { min_y = points[i].y; min_y_index = i; }
if (points[i].y > max_y) { max_y = points[i].y; max_y_index = i; }
}
// These four extreme points lie on the contour itself:
Point corner1 = points[min_x_index];
Point corner2 = points[min_y_index];
Point corner3 = points[max_x_index];
Point corner4 = points[max_y_index];
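If the goal is the green spots from image_3 (dots along the black contour itself), a simple sketch using the objects already in the question is to sample every Nth contour point:
// Sketch: draw a small filled circle at every 10th point of the contour.
Point[] contourPoints = largestContour.toArray();
for (int i = 0; i < contourPoints.length; i += 10) {
Core.circle(mMatInputFrame, contourPoints[i], 4, new Scalar(0, 255, 0), -1);
}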
