I have an Android 1.5 application with a GLSurfaceView that shows a simple square polygon on the screen. I want to add the ability to move the square by touching it: when the user touches the square and drags a finger, the square should follow the finger until it is lifted from the screen.
I'm trying to use gluUnProject to obtain the OpenGL coordinates that match the exact position of the finger; then I will apply a glTranslatef to the polygon to move it to that position (I hope).
The problem is that something goes wrong with gluUnProject: it throws java.lang.IllegalArgumentException: length - offset < n on the call to gluUnProject.
First of all, I'm passing 0 as the window Z coordinate because I don't know what to pass there; window coordinates only have X and Y. I also tested passing 1 as the Z coordinate and got the same exception.
float[] outputCoords = getOpenGLCoords(event.getX(), event.getY(), 0);
x = outputCoords[0];
y = outputCoords[1];
z = outputCoords[2];
...
public float[] getOpenGLCoords(float xWin, float yWin, float zWin)
{
    int screenW = SectionManager.instance.getDisplayWidth();
    int screenH = SectionManager.instance.getDisplayHeight();

    // code for translating from screen coordinates to OpenGL coordinates
    mg.getCurrentProjection(MyGl);
    mg.getCurrentModelView(MyGl);
    float[] modelMatrix = mg.mModelView;
    float[] projMatrix = mg.mProjection;

    int[] mView = new int[4];
    mView[0] = 0;
    mView[1] = 0;
    mView[2] = screenW; // width
    mView[3] = screenH; // height

    float[] outputCoords = new float[3];
    GLU.gluUnProject(xWin, yWin, zWin, modelMatrix, 0, projMatrix, 0, mView, 0, outputCoords, 0);
    return outputCoords;
}
I answered the same question here; basically the gluUnProject function expects your outputCoords array to have size 4 instead of 3. Note that these are homogeneous coordinates, so you still have to divide the first three components by the fourth one if you're using a perspective projection.
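For reference, a minimal sketch of the corrected call, reusing the names from the getOpenGLCoords method above (untested, but it follows the Android GLU API):

float[] outputCoords = new float[4]; // 4 elements: gluUnProject writes homogeneous coordinates
GLU.gluUnProject(xWin, yWin, zWin, modelMatrix, 0, projMatrix, 0, mView, 0, outputCoords, 0);
// for a perspective projection, divide by w to get Cartesian coordinates
float objX = outputCoords[0] / outputCoords[3];
float objY = outputCoords[1] / outputCoords[3];
float objZ = outputCoords[2] / outputCoords[3];
// note: touch coordinates have their origin at the top-left while the GL viewport's
// origin is at the bottom-left, so you may also need yWin = screenH - event.getY()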
I am currently working on a Worms-like game. I generate random levels, which hold an array of points giving a y value for each x. I also have two arrays of x values where I place trees and huts. I use an OrthographicCamera and its translate method to move the camera when the user touches the screen. To support big levels, I render only the part of the map that is currently visible. For that I have a BackgroundActor, which reads the current position of the camera and uses it to fetch the corresponding part of the surface array from the level class. I then render that information with a ShapeRenderer, followed by the props (trees and huts). The problem is that the props become misaligned with the surface when I drag the screen. For example: I move the map to the left, and the surface moves left faster than the props. I already tried setting the projection matrix on both the SpriteBatch and the ShapeRenderer, but it did not help.
Code:
@Override
public void draw(Batch batch, float parentAlpha) {
    setBounds(); // gets the index for my map array from the camera
    ShapeRenderer shapeRenderer = AndroidLauncher.gameScreen.shapeRenderer;
    batch.end(); // needed, because otherwise the props do not render
    shapeRenderer.begin(ShapeRenderer.ShapeType.Filled);
    for (int x = 0; x < ScreenValues.screenWidth; x++) {
        int y = level.getYForX(x + leftBound);
        shapeRenderer.setColor(level.getUndergroundColor());
        shapeRenderer.rectLine(x, 0, x, y - level.getSurfaceThickness(), 1);
        shapeRenderer.setColor(level.getSurfaceColor());
        shapeRenderer.rectLine(x, y - level.getSurfaceThickness(), x, y, 1);
    }
    shapeRenderer.end();
    batch.begin();
    for (int x = 0; x < ScreenValues.screenWidth; x++) {
        int y = level.getYForX(x + leftBound);
        if (level.getPropForX(x) != Level.PROP_NONE) {
            if (level.getPropForX(x) == Level.PROP_TREE) y -= 10;
            Image imageToDraw = getImageFromPropId(level.getPropForX(x)); // Images are set up in the create method of my Listener
            imageToDraw.setPosition(x, y);
            imageToDraw.draw(batch, parentAlpha);
        }
    }
}
I fixed the issue myself: in the for loop for the props, x needs to run from leftBound to ScreenValues.screenWidth + leftBound. This still gives me texture popping when the props reach the left side of the screen, because the prop's x position is off screen, but that will be a small fix.
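For reference, a sketch of what the corrected prop loop might look like, reconstructed from the description above (untested):

for (int x = leftBound; x < ScreenValues.screenWidth + leftBound; x++) {
    int y = level.getYForX(x);
    if (level.getPropForX(x) != Level.PROP_NONE) {
        if (level.getPropForX(x) == Level.PROP_TREE) y -= 10;
        Image imageToDraw = getImageFromPropId(level.getPropForX(x));
        imageToDraw.setPosition(x, y); // x is now in map coordinates, matching the surface lookup
        imageToDraw.draw(batch, parentAlpha);
    }
}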
I am using LibGDX and Box2d to build my first Android game. Yay!
But I am having some serious problems with Box2d.
I have a simple stage with a rectangular Box2d body at the bottom representing the ground, and two other rectangular Box2d bodies at the left and right representing the walls.
A Screenshot
Another Screenshot
I also have a box. It can be touched, and it moves using applyLinearImpulse, as if it were kicked. It is a DynamicBody.
What happens is that in the draw() code of my Box object, the Box2d body gives me a wrong value for the X axis; the value for the Y axis is fine.
The blue "dots" in the screenshots are small textures that I draw at the box corners that body.getPosition() gives me. Note how in one screenshot the dots are aligned with the actual DebugRenderer rectangle and in the other they are not.
This is what happens: when the box moves, the alignment is lost during the movement.
Collisions between the box, the ground and the walls occur precisely within the area that the DebugRenderer renders, but body.getPosition() and fixture.testPoint() consider the area inside those blue dots.
So, somehow, Box2d is "maintaining" two different areas for the same body.
I thought this could be some kind of loss of precision in my conversion between pixels and meters (I scale by a factor of 100), but the Y axis uses the same technique and it's fine.
So I figured I must be missing something.
Edit 1
I am converting from Box2d coordinates to world coordinates. As you can see from the blue debug sprites in the screenshots, they form the box almost perfectly.
public static final float WORLD_TO_BOX = 0.01f;
public static final float BOX_TO_WORLD = 100f;
The box render code:
public void draw(Batch batch, float alpha) {
    x = (body.getPosition().x - width / 2) * TheBox.BOX_TO_WORLD;
    y = (body.getPosition().y - height / 2) * TheBox.BOX_TO_WORLD;
    float xend = (body.getPosition().x + width / 2) * TheBox.BOX_TO_WORLD;
    float yend = (body.getPosition().y + height / 2) * TheBox.BOX_TO_WORLD;
    batch.draw(texture, x, y);
    batch.draw(texture, x, yend);
    batch.draw(texture, xend, yend);
    batch.draw(texture, xend, y);
}
Edit 2
I am starting to suspect the camera. I have a DebugRenderer and a scene2d Stage. Here is the code:
My screen resolution (Nexus 5, portrait):
public static final int SCREEN_WIDTH = 1080;
public static final int SCREEN_HEIGHT = 1920;
At startup:
// ...
stage = new Stage(SCREEN_WIDTH, SCREEN_HEIGHT, true);
camera = new OrthographicCamera();
camera.setToOrtho(false, SCREEN_WIDTH, SCREEN_HEIGHT);
debugMatrix = camera.combined.cpy();
debugMatrix.scale(BOX_TO_WORLD, BOX_TO_WORLD, 1.0f);
debugRenderer = new Box2DDebugRenderer();
// ...
Now, the render() code:
public void render() {
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    camera.update();
    world.step(1 / 45f, 6, 6);
    world.clearForces();
    stage.act(Gdx.graphics.getDeltaTime());
    stage.draw();
    debugRenderer.render(world, debugMatrix);
}
Looks like the answer to that one was fairly simple:
stage.setCamera(camera);
I was not setting the OrthographicCamera on the stage, so the stage was using some default camera that wasn't aligned with my setup.
It had nothing to do with Box2d in the end. Box2d was returning healthy values, but those values corresponded to the wrong places on my screen because of the wrong stage camera.
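For completeness, the startup code with the fix applied might look like this (assuming the same pre-Viewport libGDX API used above):

stage = new Stage(SCREEN_WIDTH, SCREEN_HEIGHT, true);
camera = new OrthographicCamera();
camera.setToOrtho(false, SCREEN_WIDTH, SCREEN_HEIGHT);
stage.setCamera(camera); // the missing line: make the stage use the same camera as the debug renderer
debugMatrix = camera.combined.cpy();
debugMatrix.scale(BOX_TO_WORLD, BOX_TO_WORLD, 1.0f);
debugRenderer = new Box2DDebugRenderer();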
I know that what I am going to ask has already been discussed a few times, but after going through those discussions I couldn't find a complete answer, so I am asking a new question.
When I integrated JPCT-AE with QCAR, everything went as expected: I got my model-view matrix from renderFrame in the JNI code and successfully transferred it to Java, and the model was shown perfectly. But when I tried to pass this matrix to the JPCT world camera, my model disappeared.
My code, in onSurfaceChanged:
world = new World();
world.setAmbientLight(20, 20, 20);
sun = new Light(world);
sun.setIntensity(250, 250, 250);
cube = Primitives.getCube(1);
cube.calcTextureWrapSpherical();
cube.strip();
cube.build();
world.addObject(cube);
cam = world.getCamera();
cam.moveCamera(Camera.CAMERA_MOVEOUT, 10);
cam.lookAt(cube.getTransformedCenter());
SimpleVector sv = new SimpleVector();
sv.set(cube.getTransformedCenter());
sv.y -= 100;
sv.z -= 100;
sun.setPosition(sv);
MemoryHelper.compact();
And in onDraw:
com.threed.jpct.Matrix mResult = new com.threed.jpct.Matrix();
mResult.setDump(modelviewMatrix); // modelviewMatrix is the matrix I get from QCAR
cube.setRotationMatrix(mResult);
cam.setBack(mResult);
fb.clear(back);
world.renderScene(fb);
world.draw(fb);
fb.display();
After some research I found that QCAR uses a right-handed coordinate system, meaning positive X goes right, positive Y goes up, and positive Z comes out of the screen, whereas in the JPCT coordinate system positive X goes right, positive Y goes down, and positive Z goes into the screen.
QCAR coordinate system: (figure omitted)
I know that the matrix QCAR gives is a 4x4 matrix containing 3x3 rotation values and a translation vector.
I am posting the matrices to be clearer:
modelviewmatrix:
1.512537 -159.66255 -10.275316 0.0
-89.86529 -1.1592013 4.7839375 0.0
-8.619186 10.179538 -159.44305 0.0
59.182976 93.205956 437.2832 1.0
The model-view matrix after inversion, using cam.setBack(modelviewmatrix.invert(modelviewmatrix)):
5.9083453E-5 -0.01109448 -3.3668696E-4 0.0
0.0040540528 -3.8752193E-4 0.0047518034 0.0
-0.004756433 -4.6811014E-4 0.0040459237 0.0
0.7533285 0.4116795 2.7063704 0.9999999
If I remove the 13th, 14th, and 15th matrix elements, assuming a 3x3 rotation matrix, the model rotates properly, but the translation (the in-and-out movement of the image) is gone.
In the end, I don't know what change to the translation vector is needed.
So please suggest what I am missing here.
The fix is to invert and then transpose the model-view matrix on the native side:

QCAR::Matrix44F inverseMatrix = SampleMath::Matrix44FInverse(modelViewMatrix);
QCAR::Matrix44F invTransposeMatrix = SampleMath::Matrix44FTranspose(inverseMatrix);

Then pass the invTransposeMatrix value to Java:

env->SetFloatArrayRegion(modelviewArray, 0, 16, invTransposeMatrix.data);
env->CallVoidMethod(obj, method, modelviewArray);
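On the Java side, the callback receiving that array might then look like this; the method name is hypothetical, but the setDump/setBack calls mirror the onDraw code from the question:

// hypothetical callback invoked from the native code above
public void updateModelViewMatrix(float[] modelviewMatrix) {
    com.threed.jpct.Matrix mResult = new com.threed.jpct.Matrix();
    mResult.setDump(modelviewMatrix); // now holds the inverted, transposed QCAR matrix
    cam.setBack(mResult);             // drives the JPCT camera; no need to rotate the cube itself
}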
I have been trying to make a cylinder in RenderScript. This is the code I've tried:
public Mesh cylinder(){
    float radius = 1.25f, halfLength = 5;
    int slices = 16;
    Mesh.TriangleMeshBuilder mbo = new TriangleMeshBuilder(mRSGL, 3, Mesh.TriangleMeshBuilder.TEXTURE_0);
    for (int i = 0; i < slices; i++) {
        float theta = (float) (((float) i) * 2.0 * Math.PI);
        float nextTheta = (float) (((float) i + 1) * 2.0 * Math.PI);
        /* vertex at middle of end */
        mbo.addVertex(0.0f, halfLength, 0.0f);
        /* vertices at edges of circle */
        mbo.addVertex((float) (radius * Math.cos(theta)), halfLength, (float) (radius * Math.sin(theta)));
        mbo.addVertex((float) (radius * Math.cos(nextTheta)), halfLength, (float) (radius * Math.sin(nextTheta)));
        /* the same vertices at the bottom of the cylinder */
        mbo.addVertex((float) (radius * Math.cos(nextTheta)), -halfLength, (float) (radius * Math.sin(nextTheta)));
        mbo.addVertex((float) (radius * Math.cos(theta)), halfLength, (float) (radius * Math.sin(theta)));
        mbo.addVertex(0.0f, -halfLength, 0.0f);
        mbo.addTriangle(0, 1, 2);
        mbo.addTriangle(3, 4, 5);
    }
    return mbo.create(true);
}
But this code gives me a rectangle of length 5. Any ideas where I'm going wrong?
You actually have a few problems here. First, your angles are always equal to multiples of 2π: you need to divide by the number of sectors when you calculate them. The explicit type conversion in that step is also unnecessary; Java handles the conversion of an integer to a double for you.
Second, you keep adding the same two triangles to the mesh, and you never add triangles for the side of the cylinder, only the two end faces. When calling addTriangle() in your loop, you should use running indices, for example addTriangle(n, n+1, n+2).
Finally, you were missing a negative sign when you created your fourth vertex, so it was actually at halfLength, not -halfLength.
Try this:
public Mesh cylinder(){
    float radius = 1.25f, halfLength = 5;
    int slices = 16;
    Mesh.TriangleMeshBuilder mbo = new TriangleMeshBuilder(mRSGL, 3, Mesh.TriangleMeshBuilder.TEXTURE_0);
    /* vertices at the middle of each end, created only once */
    mbo.addVertex(0.0f, halfLength, 0.0f);
    mbo.addVertex(0.0f, -halfLength, 0.0f);
    for (int i = 0; i < slices; i++) {
        float theta = (float) (i * 2.0 * Math.PI / slices);
        float nextTheta = (float) ((i + 1) * 2.0 * Math.PI / slices);
        /* vertices at edges of circle */
        mbo.addVertex((float) (radius * Math.cos(theta)), halfLength, (float) (radius * Math.sin(theta)));
        mbo.addVertex((float) (radius * Math.cos(nextTheta)), halfLength, (float) (radius * Math.sin(nextTheta)));
        /* the same vertices at the bottom of the cylinder */
        mbo.addVertex((float) (radius * Math.cos(nextTheta)), -halfLength, (float) (radius * Math.sin(nextTheta)));
        mbo.addVertex((float) (radius * Math.cos(theta)), -halfLength, (float) (radius * Math.sin(theta)));
        /* Add the faces for the ends, ordered for back-face culling. The offsets adjust
           for the first two indices being the center points; the sector number i is
           multiplied by 4 because this mesh adds 4 vertices per sector. */
        mbo.addTriangle(4 * i + 3, 4 * i + 2, 0);
        mbo.addTriangle(4 * i + 5, 4 * i + 4, 1);
        /* Add the faces for the side */
        mbo.addTriangle(4 * i + 2, 4 * i + 4, 4 * i + 5);
        mbo.addTriangle(4 * i + 4, 4 * i + 2, 4 * i + 3);
    }
    return mbo.create(true);
}
I have also added a slight optimization: the vertices for the centers of the circles are created only once, saving memory. The index order here is for back-face culling; reverse it if you want front-face culling. Should you eventually need a more efficient method, allocation builders allow using trifans and tristrips, but for a mesh of this complexity the ease of triangle meshes is merited. I have run this code on my own system to verify that it works.
My requirement is to create a 3D surface plot (it should also display the x, y, and z axes) from a list of data points with (x, y, z) values. The 3D visualization needs to run on Android.
My inputs: I am currently planning to use OpenGL 1.0 and Java. I am also considering Adore3d, min3d, and the rgl package, which use OpenGL 1.0. I am good at Java, but a novice at 3D programming.
Time frame: 2 months.
I would like to know the best way to go about it. Is OpenGL 1.0 good for 3D surface plotting? Are there any other packages/libraries that can be used with Android?
Well, you can plot the surface using OpenGL 1.0 or OpenGL 2.0. All you need to do is draw the axes as lines and the surface as triangles. If you have your heightfield data, you would do something like this:
float[][] surface;
int width, height; // 2D surface data and its dimensions

// display the axes
GL.glBegin(GL.GL_LINES);
GL.glVertex3f(0, 0, 0);      // line starting at 0, 0, 0
GL.glVertex3f(width, 0, 0);  // line ending at width, 0, 0
GL.glVertex3f(0, 0, 0);      // line starting at 0, 0, 0
GL.glVertex3f(0, 0, height); // line ending at 0, 0, height
GL.glVertex3f(0, 0, 0);      // line starting at 0, 0, 0
GL.glVertex3f(0, 50, 0);     // line ending at 0, 50, 0 (50 is the maximal value in surface[])
GL.glEnd();

// display the data
GL.glBegin(GL.GL_TRIANGLES);
for (int x = 1; x < width; ++x) {
    for (int y = 1; y < height; ++y) {
        // get four points on the surface (they form a quad)
        float a = surface[x - 1][y - 1];
        float b = surface[x][y - 1];
        float c = surface[x][y];
        float d = surface[x - 1][y];
        // draw triangle abc
        GL.glVertex3f(x - 1, a, y - 1);
        GL.glVertex3f(x, b, y - 1);
        GL.glVertex3f(x, c, y);
        // draw triangle acd
        GL.glVertex3f(x - 1, a, y - 1);
        GL.glVertex3f(x, c, y);
        GL.glVertex3f(x - 1, d, y);
    }
}
GL.glEnd();
This draws simple axes and the heightfield, all in white. It should be pretty straightforward to extend it from here.
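One caveat: the snippet above uses desktop-style immediate mode (glBegin/glEnd), which does not exist in OpenGL ES on Android, so treat it as pseudocode. A rough OpenGL ES 1.0 equivalent of the surface pass could use a vertex array instead; the sketch below assumes the same surface/width/height data, a GL10 instance named gl from the renderer callbacks, and imports from java.nio:

// pack two triangles per grid cell into a direct FloatBuffer, then draw them in one call
float[] verts = new float[(width - 1) * (height - 1) * 6 * 3]; // 6 vertices per cell, 3 floats each
int v = 0;
for (int x = 1; x < width; ++x) {
    for (int y = 1; y < height; ++y) {
        float a = surface[x - 1][y - 1], b = surface[x][y - 1];
        float c = surface[x][y], d = surface[x - 1][y];
        // triangle abc
        verts[v++] = x - 1; verts[v++] = a; verts[v++] = y - 1;
        verts[v++] = x;     verts[v++] = b; verts[v++] = y - 1;
        verts[v++] = x;     verts[v++] = c; verts[v++] = y;
        // triangle acd
        verts[v++] = x - 1; verts[v++] = a; verts[v++] = y - 1;
        verts[v++] = x;     verts[v++] = c; verts[v++] = y;
        verts[v++] = x - 1; verts[v++] = d; verts[v++] = y;
    }
}
FloatBuffer vb = ByteBuffer.allocateDirect(verts.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
vb.put(verts).position(0);

gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vb);
gl.glDrawArrays(GL10.GL_TRIANGLES, 0, verts.length / 3);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);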
Re the second part of your question:
Any other packages/libraries that can be used with Android?
Yes, it's now possible to draw an Android 3D Surface Plot with SciChart.
Link to Android Chart features page
Link to Android 3D Surface Plot example
Lots of configurations are possible, including wireframe drawing, gradient colour maps, contours, and real-time updates.
Disclosure: I'm the tech lead on the SciChart team.