Firstly, I'm really new to OpenGL on Android and I'm still looking through courses online.
I'm trying to make a simple app that has a square in the middle of the screen, and on touch it moves to the touch event coordinates.
This is my surface view:
public class MyGLSurfaceView extends GLSurfaceView {
private final MyGLRenderer mRenderer;
public MyGLSurfaceView(Context context) {
super(context);
// Set the Renderer for drawing on the GLSurfaceView
mRenderer = new MyGLRenderer();
setRenderer(mRenderer);
// Render the view only when there is a change in the drawing data
setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
}
@Override
public boolean onTouchEvent(MotionEvent e)
{
float x =e.getX();
float y =e.getY();
if(e.getAction()==MotionEvent.ACTION_MOVE)
{
mRenderer.setLoc(x,y);
requestRender();
}
return true;
}
}
and this is my renderer class:
public class MyGLRenderer implements GLSurfaceView.Renderer {
private float mAngle;
public PVector pos;
public Rectangle r;
public Square sq;
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
// Set the background frame color
gl.glClearColor(1f, 1f, 1f, 1.0f);
pos=new PVector();
r=new Rectangle(0.5f,0.4f);
sq=new Square(0.3f);
}
@Override
public void onDrawFrame(GL10 gl) {
// Draw background color
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Set GL_MODELVIEW transformation mode
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity(); // reset the matrix to its default state
// When using GL_MODELVIEW, you must set the view point
GLU.gluLookAt(gl, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
gl.glTranslatef(pos.x,pos.y,0f);
//r.draw(gl);
sq.draw(gl);
}//rend
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
// Adjust the viewport based on geometry changes
// such as screen rotations
gl.glViewport(0, 0, width, height);
// make adjustments for screen ratio
float ratio = (float) width / height;
gl.glMatrixMode(GL10.GL_PROJECTION); // set matrix to projection mode
gl.glLoadIdentity(); // reset the matrix to its default state
gl.glFrustumf(-ratio, ratio, -1, 1, 3, 7); // apply the projection matrix
}
/**
* Returns the rotation angle of the triangle shape (mTriangle).
*
* @return - A float representing the rotation angle.
*/
public float getAngle() {
return mAngle;
}
/**
* Sets the rotation angle of the triangle shape (mTriangle).
*/
public void setAngle(float angle) {
mAngle = angle;
}
public void setLoc(float x,float y)
{
pos.x=x; pos.y=y;
}
}
When asking a question you should state what your current result is and what your expected result is.
From a short glance at your code I would expect that the square is drawn correctly until you touch the screen, after which it disappears completely (unless you actually press on the top-left part of the screen).
If this is the case, your problem is only that you are not transforming the touch coordinates into the OpenGL coordinate system. The OpenGL coordinate system is by default in the range [-1, 1] on all axes, but you may change it (as you do) with matrices. The two most common calls are glFrustum and glOrtho; both accept four border coordinates, left, right, bottom and top, which define what value lies at the corresponding border of the view.
So to compute x from the touch, for instance, you would first normalize it to see what part of the screen you pressed, relativeX = touch.x / view.size.width, and then turn that into an OpenGL coordinate with glX = left + (right - left) * relativeX. The vertical coordinate works the same way.
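For instance, a minimal sketch of that formula (a hypothetical helper; viewWidth/viewHeight are the GLSurfaceView size and left/right/bottom/top are whatever you passed to glFrustumf or glOrthof):
// Hypothetical helper following the formula above.
private float[] touchToGl(float touchX, float touchY, float viewWidth, float viewHeight,
                          float left, float right, float bottom, float top) {
    float relativeX = touchX / viewWidth;        // which fraction of the screen was pressed
    float relativeY = 1f - touchY / viewHeight;  // flipped, because view y grows downwards
    float glX = left + (right - left) * relativeX;
    float glY = bottom + (top - bottom) * relativeY;
    return new float[] { glX, glY };
}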
But in your case it would be better to go 2D and use glOrtho with view coordinates. This means replacing the frustum call with glOrtho and setting left = 0, right = viewWidth, top = 0, bottom = viewHeight. Now you have the same coordinate system in OpenGL as you do on the view. You will need to increase the square size to see it, since it is very small at that scale. You should also remove the lookAt and just use identity + translate.
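A rough sketch of those changes, reusing the names from your renderer (untested, so treat it as a starting point):
// onSurfaceChanged: 2D projection in view/pixel coordinates (top = 0, bottom = viewHeight)
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(0f, width, height, 0f, -1f, 1f); // left, right, bottom, top, near, far

// onDrawFrame: no gluLookAt, just identity + translate to the touch position
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(pos.x, pos.y, 0f);
sq.draw(gl); // the square now needs to be a few hundred units big to be visible
With this setup the x and y from onTouchEvent can go straight into setLoc without any conversion.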
I would like to render spheres like in the image below attached to anchors.
Unfortunately all examples are based on Sceneform, which I don't want to use. The spheres should be free in the air without being bound to a flat surface.
With the Hello_AR example from Google I was able to render a 3D sphere into the space and fix it by attaching it to an anchor.
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
...
backgroundRenderer.createOnGlThread(this);
virtualObject.createOnGlThread(this, "models/sphere.obj", "models/sphere.png");
virtualObject.setMaterialProperties(0.0f, 0.0f, 0.0f, 0.0f);
...
}
@Override
public void onDrawFrame(GL10 gl) {
...
// Get projection matrix.
float[] projmtx = new float[16];
camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);
// Get camera matrix and draw.
float[] viewmtx = new float[16];
camera.getViewMatrix(viewmtx, 0);
// Compute lighting from average intensity of the image.
// The first three components are color scaling factors.
// The last one is the average pixel intensity in gamma space.
final float[] colorCorrectionRgba = new float[] {255f, 0, 0, 255f};
frame.getLightEstimate().getColorCorrection(colorCorrectionRgba, 0);
// Visualize anchors created by touch.
float scaleFactor = 1.0f;
for (Anchor anchor : anchors) {
if (anchor.getTrackingState() != TrackingState.TRACKING) {
continue;
}
anchor.getPose().toMatrix(anchorMatrix, 0);
virtualObject.updateModelMatrix(anchorMatrix, scaleFactor);
float[] objColor = new float[] { 255f, 255f, 255f, 0 };
virtualObject.draw(viewmtx, projmtx, colorCorrectionRgba, objColor);
}
}
With that I am able to create a black sphere 1 meter away from the camera in the air.
My questions:
Is this a good / correct way to do it?
How do I change the color of the sphere, since the color values have no effect on the object?
How do I make it transparent?
Thank you very much.
You need to attach it to an anchor. You don't need to use Sceneform; Sceneform is only one of the two methods.
In terms of color and transparency it depends on the way you serve your object. In your code I see that you're using a material, so it's hard to change the color.
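For transparency specifically, one common low-level option is to enable alpha blending around the draw call. This is only a sketch with plain GLES20 calls, independent of how the sample's ObjectRenderer works internally, and it only helps if the shader actually writes an alpha below 1.0:
// import android.opengl.GLES20;
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
virtualObject.draw(viewmtx, projmtx, colorCorrectionRgba, objColor);
GLES20.glDisable(GLES20.GL_BLEND);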
I'm trying to create a children's game using the LibGDX framework. What I want to accomplish is to tilt an image of a balloon that will be used to collect points. So far I have added code to move the balloon up/down. I'm unable to figure out how to tilt it to the right or left. Here is the code I have so far. Can someone please help?
public class balloongame extends ApplicationAdapter {
SpriteBatch batch;
Texture background;
Texture balloon;
private float renderX;
@Override
public void create () {
batch = new SpriteBatch();
background = new Texture("bg.png");
balloon = new Texture("final.png");
renderX = 100;
}
@Override
public void render () {
renderX += Gdx.input.getAccelerometerX();
if(renderX < 0) renderX = 0;
if(renderX > Gdx.graphics.getWidth() - 200) renderX = Gdx.graphics.getWidth() - 200;
batch.begin();
batch.draw(background, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.draw(balloon,renderX, Gdx.graphics.getWidth());
batch.end();
}
@Override
public void dispose () {
batch.dispose();
}
}
I know two easy ways to do it.
The Affine2 class has a shear() method in it, along with similar transformation methods.
Affine2.shear()
After you set up the affine matrix you can draw it with
draw(TextureRegion region, float width, float height, Affine2 transform)
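A rough sketch of that approach (untested; balloonRegion, the draw position and the tilt factor are made-up placeholders you would tune):
// import com.badlogic.gdx.math.Affine2;
// import com.badlogic.gdx.graphics.g2d.TextureRegion;
Affine2 transform = new Affine2();
TextureRegion balloonRegion = new TextureRegion(balloon);

// inside render():
float tilt = Gdx.input.getAccelerometerX() * 0.02f;   // tune this factor to taste
transform.idt()
         .translate(renderX, 100f)                    // placeholder y position
         .shear(tilt, 0f);                            // skew the balloon left/right
batch.begin();
batch.draw(balloonRegion, balloon.getWidth(), balloon.getHeight(), transform);
batch.end();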
I prefer changing the vertex attributes in the draw method.
draw(Texture texture,
float[] spriteVertices,
int offset,
int count)
There must be 4 vertices, each made up of 5 elements in this order: x, y, color, u, v
batch.draw(textures[0], new float[]{
0, 0, Color.RED.toFloatBits(), 0f, 1f,
textures[0].getWidth(), 50, Color.BLUE.toFloatBits(), 1f, 1f,
textures[0].getWidth(), 50 + textures[1].getHeight(), Color.GREEN.toFloatBits(), 1f, 0f,
0, textures[0].getHeight(), Color.GOLD.toFloatBits(), 0f, 0f}, 0, 20);
You can set individual colors for each vertex or simply set the color to white.
You can change it however you want.
I have an OpenGL scene with a sphere having a radius of 1, and the camera being at the center of the sphere (it's a 360° picture viewer). The user can rotate the sphere by panning.
Now I need to display 2D pins "attached" to some parts of the picture. To do so, I want to convert the 3D coordinates of my pins into 2D screen coordinates, so I can draw the pin image at those screen coordinates.
I'm using GLU.gluProject and the following classes from android-apidemo:
MatrixGrabber
MatrixStack
MatrixTrackingGL
I save the projection matrix in the onSurfaceChanged method and the model-view matrix in the onDrawFrame method (after having drawn my sphere). Then I feed them to GLU.gluProject when the user rotates the sphere to update the pins' positions.
When I pan horizontally, the pins pan correctly, but when I pan vertically, the texture pans "faster" than the pin image (as if the pin were closer to the camera than the sphere).
Here are some relevant parts of my code:
public class CustomRenderer implements GLSurfaceView.Renderer {
MatrixGrabber mMatrixGrabber = new MatrixGrabber();
private float[] mModelView = null;
private float[] mProjection = null;
[...]
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
// Get the sizes:
float side = Math.max(width, height);
int x = (int) (width - side) / 2;
int y = (int) (height - side) / 2;
// Set the viewport:
gl.glViewport(x, y, (int) side, (int) side);
// Set the perspective:
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluPerspective(gl, FIELD_OF_VIEW_Y, 1, Z_NEAR, Z_FAR);
// Grab the projection matrix:
mMatrixGrabber.getCurrentProjection(gl);
mProjection = mMatrixGrabber.mProjection;
// Set to MODELVIEW mode:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
@Override
public void onDrawFrame(GL10 gl) {
// Load the texture if needed:
if(mTextureToLoad != null) {
mSphere.loadGLTexture(gl, mTextureToLoad);
mTextureToLoad = null;
}
// Clear:
gl.glClearColor(0.5f, 0.5f, 0.5f, 0.0f);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
// Rotate the scene:
gl.glRotatef( (1 - mRotationY + 0.25f) * 360, 1, 0, 0); // 0.25 is used to adjust the texture position
gl.glRotatef( (1 - mRotationX + 0.25f) * 360, 0, 1, 0); // 0.25 is used to adjust the texture position
// Draw the sphere:
mSphere.draw(gl);
// Grab the model-view matrix:
mMatrixGrabber.getCurrentModelView(gl);
mModelView = mMatrixGrabber.mModelView;
}
public float[] getScreenCoords(float x, float y, float z) {
if(mModelView == null || mProjection == null) return null;
float[] result = new float[3];
int[] view = new int[] {0, 0, (int) mSurfaceViewSize.getWidth(), (int) mSurfaceViewSize.getHeight()};
GLU.gluProject(x, y, z,
mModelView, 0,
mProjection, 0,
view, 0,
result, 0);
result[1] = mSurfaceViewSize.getHeight() - result[1];
return result;
}
}
I use the result of the getScreenCoords method to display my pins. The y value is wrong.
What am I doing wrong?
I'm trying to draw a simple line drawing connecting several vertices in OpenGL ES. However, the line is drawn inverted or in a different position from where it should be drawn. I've attached the class for the line drawing below
ConnectingPath.java
--------------------
public class ConnectingPath {
int positionBufferId;
PointF[] verticesList;
public float vertices[];
public FloatBuffer vertexBuffer;
public ConnectingPath(LinkedList<PointF> verticesList, float[] colors)
{
List<PointF> tempCorners = verticesList;
int i = 0;
this.verticesList = new PointF[tempCorners.size()];
for (PointF corner : tempCorners) {
this.verticesList[i++] = corner;
}
}
public float[] getTransformedVertices()
{
float z;
List<Float> finalVertices = new ArrayList<Float>();
finalVertices.clear();
for(PointF point : verticesList){
finalVertices.add(point.x);
finalVertices.add(point.y);
finalVertices.add(0.0f);
}
int i = 0;
float[] verticesArray = new float[finalVertices.size()];
for (Float f : finalVertices) {
verticesArray[i++] = (f != null ? f : Float.NaN);
}
return verticesArray;
}
public void initBooth(){
vertices = this.getTransformedVertices();
for(Float f : vertices){
Log.d("Mapsv3--", f + "");
}
ByteBuffer bb = ByteBuffer.allocateDirect(vertices.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(vertices);
vertexBuffer.position(0);
int[] buffers = new int[1];
GLES11.glGenBuffers(1, buffers, 0);
GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, buffers[0]);
GLES11.glBufferData(GLES11.GL_ARRAY_BUFFER, 4 * vertices.length, vertexBuffer, GLES11.GL_STATIC_DRAW);
positionBufferId = buffers[0];
}
public void Render(GL10 gl){
GLES11.glPushMatrix();
GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, positionBufferId);
GLES11.glEnableClientState(GL10.GL_VERTEX_ARRAY);
GLES11.glVertexPointer(3, GL10.GL_FLOAT, 0, 0);
GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, 0);
GLES11.glFrontFace(GL10.GL_CW);
GLES11.glLineWidth(10.0f);
GLES11.glColor4f(0.0f,0.0f,0.0f,1.0f);
GLES11.glDrawArrays(GL10.GL_LINE_STRIP, 0, verticesList.length);
GLES11.glDisableClientState(GL10.GL_VERTEX_ARRAY);
GLES11.glPopMatrix();
}
}
Drawing code :
Renderer.java
--------------
// Variables here
public void onSurfaceChanged(GL10 gl, int width, int height) {
viewWidth = width;
viewHeight = height;
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glEnable(GL10.GL_TEXTURE_2D); //Enable Texture Mapping
gl.glShadeModel(GL10.GL_SMOOTH); //Enable Smooth Shading
gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f); //White Background
gl.glClearDepthf(1.0f); //Depth Buffer Setup
gl.glEnable(GL10.GL_DEPTH_TEST); //Enables Depth Testing
gl.glDepthFunc(GL10.GL_LEQUAL);
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
}
public void onDrawFrame(GL10 gl) {
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluOrtho2D(gl, -viewWidth/2, viewWidth/2, -viewHeight/2,viewHeight/2);
gl.glTranslatef(center.x,center.y,0);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0,0, 0);
gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
gl.glEnable(GL10.GL_CULL_FACE);
gl.glCullFace(GL10.GL_FRONT);
if(connectingPath!=null){
connectingPath.Render(gl);
}
gl.glDisable(GL10.GL_CULL_FACE);
gl.glLoadIdentity();
}
Screenshot :
The drawing in OpenGL seems to be inverted for you due to the way OpenGL defines its screen coordinates. In contrast to most 2D drawing APIs, the origin is located in the bottom-left corner, which means that the y axis values increase when moving upwards. A very nice explanation is available in the OpenGL common pitfalls (number 12):
Given a sheet of paper, people write from the top of the page to the bottom. The origin for writing text is at the upper left-hand margin of the page (at least in European languages). However, if you were to ask any decent math student to plot a few points on an X-Y graph, the origin would certainly be at the lower left-hand corner of the graph. Most 2D rendering APIs mimic writers and use a 2D coordinate system where the origin is in the upper left-hand corner of the screen or window (at least by default). On the other hand, 3D rendering APIs adopt the mathematically minded convention and assume a lower left-hand origin for their 3D coordinate systems.
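In practice that usually means flipping the y coordinate once when you convert between view/touch space and GL space, or baking the flip into the projection (a sketch; viewWidth/viewHeight are your surface size):
// One-line fix when mapping a view-space y (origin top-left) to a GL y (origin bottom-left):
float glY = viewHeight - viewY;

// ...or flip the projection instead, so that view coordinates can be used directly:
GLU.gluOrtho2D(gl, 0, viewWidth, viewHeight, 0); // left, right, bottom, top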
I need a little help with this:
android developers, Tutorials: OpenGLES10.
a link
It all works fine for the first triangle, until I put in the code for Projection & Camera View. This should resize the OpenGL ES square view to match the phone's screen, so objects stay in proportion.
To a newbie's eye the code looks fine, and I have checked against the reference files that there isn't a missing parameter or something like that. But now I'm lost! I can't see what's wrong.
If the Projection and Camera code is applied, there is no triangle, but the app is running and the view with the background color is shown.
Here is my code:
package notme.helloopengles10;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;
import android.opengl.GLU;
public class HelloOpenGLES10Renderer implements GLSurfaceView.Renderer {
// Set the background frame color
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
// initialize the triangle vertex array
initShapes();
//enable use of vertex arrays
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
}
public void onDrawFrame(GL10 gl) {
// Redraw background color
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
/* // set GL_MODELVIEW transformation mode (if this is commented out from here to after GLU.gluLookAt(), it works, provided the block further down in the code is commented out as well!
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity(); // reset Matrix to its default state
// when using GL_MODELVIEW, you must set the view point
GLU.gluLookAt(gl, 0, 0, -5, 0f, 0f, 0f, 0f, 1.0f, 0.0f); */
//Draw Triangel
gl.glColor4f(0.63671875f, 0.76953125f, 0.22265625f, 0.0f);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, triangleVB);
gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
}
// Redraw on orientation changes // adjust for screen size ratio
public void onSurfaceChanged(GL10 gl, int width, int height) {
gl.glViewport(0, 0, width, height);
// Make adjustments for screen ratio
/* (if this is commented out from here to after gl.glFrustumf(), it works!
float ratio = (float) width / height;
gl.glMatrixMode(GL10.GL_PROJECTION); // set matrix to projection mode
gl.glLoadIdentity(); // reset the matrix to its default state
gl.glFrustumf(-ratio, ratio, -1, 1, 3, 7); // apply the projection */
}
/*
* Draw a shape, a triangle. first add new member variable to contain
* the vertices of a triangle
*/
private FloatBuffer triangleVB;
// Create a method, initShapes(), which populates the member variable
private void initShapes(){
// create an array
float triangleCoords[] = {
// X, Y, Z
-0.5f, -0.25f, 0,
0.5f, -0.25f, 0,
0.0f, 0.559016994f, 0
};
// initialize vertex Buffer for triangle
ByteBuffer vbb= ByteBuffer.allocateDirect(
//(# of coordinates values * 4 bytes per float)
triangleCoords.length * 4 );
vbb.order(ByteOrder.nativeOrder()); // use device hardware's native byte order
triangleVB = vbb.asFloatBuffer(); //create floating point buffer from the ByteBuffer
triangleVB.put(triangleCoords); // add coordinates to the FloatBuffer
triangleVB.position(0); // set the buffer to read the first coordinate
}
} // end
I hope someone can tell me where things go wrong.
DevTool: Eclipse.
I had the same problem with this tutorial and it got solved when I changed the order of multiplying in the vertex shader code in the Triangle class. So instead of having uMVPMatrix * vPosition, replace it with vPosition * uMVPMatrix. I guess the reason for this is because vPosition is a row vector.
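For reference, the line in question sits in the vertex shader string of the Triangle class in the GLES 2.0 version of the training guide (quoted from memory, so treat it as a sketch):
private final String vertexShaderCode =
        "uniform mat4 uMVPMatrix;" +
        "attribute vec4 vPosition;" +
        "void main() {" +
        // original line:  "  gl_Position = uMVPMatrix * vPosition;" +
        "  gl_Position = vPosition * uMVPMatrix;" +
        "}";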
The code looks reasonable (if you uncomment the parts that are currently commented out). Your matrix modification code is correct and all transformations are applied to the correct matrices.
But at the moment you are looking from the point (0, 0, -5) towards the point (0, 0, 0), and therefore along the +z axis. Since the default OpenGL view looks along the -z axis, you effectively rotate the view 180 degrees around the y axis. While this in itself is no problem, you now see the back side of the triangle. So could it be that you have back-face culling enabled and this back side is simply culled away? Try disabling back-face culling by calling glDisable(GL_CULL_FACE), or change the -5 in the gluLookAt call to a 5 so that you look along the -z axis.
You can also try to use gluPerspective(45, ratio, 3, 7) instead of the glFrustum call, but your arguments to glFrustum look quite reasonable. Of course, keep in mind that both calls create a perspective view, with farther objects getting smaller, like in reality. If you actually want a parallel/orthographic view (where size on screen is independent of depth) you should replace the glFrustum with a glOrtho, though the parameters can stay the same.
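Concretely, either of these two one-liners should make the triangle reappear (sketches against the tutorial's renderer):
// Option 1: disable back-face culling, e.g. in onSurfaceCreated
gl.glDisable(GL10.GL_CULL_FACE);

// Option 2: move the eye to +5 on the z axis so you look down -z and see the front face
GLU.gluLookAt(gl, 0, 0, 5, 0f, 0f, 0f, 0f, 1.0f, 0.0f);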
Your call to gluLookAt trashes your modelview matrix. You should call this function with the projection matrix active.
http://www.opengl.org/sdk/docs/man/xhtml/gluLookAt.xml
This code shows the triangle for me:
public void onDrawFrame(GL10 gl) {
// Redraw background color
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
// when using GL_MODELVIEW, you must set the view point
GLU.gluLookAt(gl, 0, 0, -5, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// set GL_MODELVIEW transformation mode (If outline from here to after GLU.gluLookAt() - it works when also outlines further down i code!
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity(); // reset Matrix to its default state
//Draw Triangel
gl.glColor4f(0.63671875f, 0.76953125f, 0.22265625f, 0.0f);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, triangleVB);
gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
}