I am using AndEngine.
Now the problem is that I want to draw Bezier curves on the scene, and I know there is no built-in functionality to do this.
So I computed the points in a custom fashion, but now I am stuck on how to actually draw the lines on the scene.
I overrode the method:
myScene = new Scene()
{
    @Override
    protected void onManagedDraw(GL10 pGL, Camera pCamera) {
        log("Draw", "in Draw");
        super.onManagedDraw(pGL, pCamera);
    }
};
The log output shows up fine. This is the code I use for line drawing:
public static void DrawQuadBezier(GL10 gl, CGPoint origin, CGPoint control,
        CGPoint destination, int segments) {
    FastFloatBuffer vertices = getVertices(2 * (segments + 1));
    float t = 0.0f;
    for (int i = 0; i < segments; i++) {
        // Quadratic Bezier: B(t) = (1-t)^2 * P0 + 2(1-t)t * P1 + t^2 * P2
        float x = (float) Math.pow(1 - t, 2) * origin.x + 2.0f * (1 - t) * t * control.x + t * t * destination.x;
        float y = (float) Math.pow(1 - t, 2) * origin.y + 2.0f * (1 - t) * t * control.y + t * t * destination.y;
        vertices.put(x);
        vertices.put(y);
        t += 1.0f / segments;
    }
    vertices.put(destination.x);
    vertices.put(destination.y);
    vertices.position(0);

    gl.glDisable(GL10.GL_TEXTURE_2D);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glDisableClientState(GL10.GL_COLOR_ARRAY);

    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertices.bytes);
    gl.glDrawArrays(GL10.GL_LINE_STRIP, 0, segments + 1);

    // restore default state
    gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glEnable(GL10.GL_TEXTURE_2D);
}
Now when I call this method with the required parameters, nothing is drawn on the screen. Can anyone help me with it? I will be very thankful.
I am developing an Android application that needs to detect collisions between a 3D node and a 2D object detected by TensorFlow Lite. To do that I tried to transform my node's 3D coordinates to 2D screen coordinates, but with no success so far. Does anyone have an idea how to do that?
Before running all these methods, you should check that getTrackingState() == TrackingState.TRACKING.
You should obtain the projection matrix of the AR camera. I'll refer to this matrix as proj below.
Note: in this case the offset should be 0 (because we want to write values from the beginning of the array); the near and far values define the viewing frustum. In a nutshell, objects closer than near or farther than far are clipped. I usually set near = 0.1f and far = 100.0f.
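For reference, a minimal sketch of these first two steps, written in Java here (the snippets below are in Kotlin, but the ARCore calls are the same); frame is assumed to be the current com.google.ar.core.Frame:
    import com.google.ar.core.Camera;
    import com.google.ar.core.Frame;
    import com.google.ar.core.TrackingState;

    // Hedged sketch: obtain the projection matrix of the AR camera for the current frame.
    // Returns null while tracking is lost.
    static float[] getCameraProjection(Frame frame) {
        Camera camera = frame.getCamera();
        if (camera.getTrackingState() != TrackingState.TRACKING) {
            return null;
        }
        float[] proj = new float[16];
        // offset = 0, near = 0.1f, far = 100.0f, as discussed above
        camera.getProjectionMatrix(proj, 0, 0.1f, 100.0f);
        return proj;
    }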
Obtain the surfaceView width and height.
You may override the onSurfaceChanged method inside your renderer (which should implement the GLSurfaceView.Renderer interface), add width and height fields to your class, and update them whenever onSurfaceChanged is called. Like:
class YourRenderer : GLSurfaceView.Renderer {
    private var width: Int = -1
    private var height: Int = -1
    ...
    override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
        this.width = width
        this.height = height
    }
    ...
}
Get the pose of your anchor using the getPose() method and obtain its matrix using the Pose.toMatrix method. I'll refer to the anchor pose as anchorPose and to this matrix as anchorMat below.
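A quick sketch of that step (again in Java; anchor is assumed to be your com.google.ar.core.Anchor):
    import com.google.ar.core.Pose;

    // Hedged sketch: read the anchor pose and expand it into a 4x4 column-major matrix.
    Pose anchorPose = anchor.getPose();
    float[] anchorMat = new float[16];
    anchorPose.toMatrix(anchorMat, 0); // offset 0: write from the start of the array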
Then you need to convert the world-space coordinates into window (2D) coordinates. We implement a method called project (from the word projection).
// Dummy model matrix (identity)
val model = floatArrayOf(
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 1.0f
)
// Anchor pose obtained in the 4th step
val tX: Float =
    model[0] * anchorPose.tx() + model[4] * anchorPose.ty() + model[8] * anchorPose.tz() + model[12] * 1f
val tY: Float =
    model[1] * anchorPose.tx() + model[5] * anchorPose.ty() + model[9] * anchorPose.tz() + model[13] * 1f
val tZ: Float =
    model[2] * anchorPose.tx() + model[6] * anchorPose.ty() + model[10] * anchorPose.tz() + model[14] * 1f
val tW: Float =
    model[3] * anchorPose.tx() + model[7] * anchorPose.ty() + model[11] * anchorPose.tz() + model[15] * 1f

var tmpX: Float = proj.get(0) * tX + proj.get(4) * tY + proj.get(8) * tZ + proj.get(12) * tW
var tmpY: Float = proj.get(1) * tX + proj.get(5) * tY + proj.get(9) * tZ + proj.get(13) * tW
var tmpZ: Float = proj.get(2) * tX + proj.get(6) * tY + proj.get(10) * tZ + proj.get(14) * tW
val tmpW: Float = proj.get(3) * tX + proj.get(7) * tY + proj.get(11) * tZ + proj.get(15) * tW

tmpX /= tmpW
tmpY /= tmpW
tmpZ /= tmpW

tmpX = tmpX * 0.5f + 0.5f
tmpY = tmpY * 0.5f + 0.5f
tmpZ = tmpZ * 0.5f + 0.5f

tmpX = tmpX * width + 0.0f // to obtain width and height, see the third step
tmpY = tmpY * height + 0.0f
Here is what you wanted: tmpX and tmpY are the window-space coordinates. You may ignore tmpZ.
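To relate this back to the collision check from the question, a point-in-box test could then be as simple as the sketch below (an assumption on my part, in Java syntax: detectionBox is an android.graphics.RectF in screen coordinates; note that the Y computed above runs bottom-up, while view coordinates usually run top-down, so you may need to flip it first):
    import android.graphics.RectF;

    // Hedged sketch: does the projected anchor land inside a detected 2D box?
    float screenY = height - tmpY; // flip if the detection uses top-down view coordinates
    boolean collides = detectionBox.contains(tmpX, screenY);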
If you have a lot of math like this, you might consider migrating your project to C++, because there is a popular library called GLM that implements most of the necessary math. It is header-only (which means the size of your compiled app won't increase much) and tends to be faster, since the C++ code is compiled.
I have to develop an equirectangular image viewer, like the one in the Ricoh Theta app.
I'm doing it on Android, with OpenGL ES (1.0, but I can change to 2.0 if needed).
For now, I have managed to create the half sphere (based on this answer), with this code:
public class HalfSphere {
// ---------------------------------------------------------------------------------------------
// region Attributes
    private final int[] mTextures = new int[1];
    float[][] mVertices;
    int mNbStrips;
    int mNbVerticesPerStrips;
    private final List<FloatBuffer> mVerticesBuffer = new ArrayList<>();
    private final List<ByteBuffer> mIndicesBuffer = new ArrayList<>();
    private final List<FloatBuffer> mTextureBuffer = new ArrayList<>();
// endregion
// ---------------------------------------------------------------------------------------------
// ---------------------------------------------------------------------------------------------
// region Constructor
    public HalfSphere(int nbStrips, int nbVerticesPerStrips, float radius) {
        // Generate the vertices:
        mNbStrips = nbStrips;
        mNbVerticesPerStrips = nbVerticesPerStrips;
        mVertices = new float[mNbStrips * mNbVerticesPerStrips][3];
        for (int i = 0; i < mNbStrips; i++) {
            for (int j = 0; j < mNbVerticesPerStrips; j++) {
                mVertices[i * mNbVerticesPerStrips + j][0] = (float) (radius * Math.cos(j * 2 * Math.PI / mNbVerticesPerStrips) * Math.cos(i * Math.PI / mNbStrips));
                mVertices[i * mNbVerticesPerStrips + j][1] = (float) (radius * Math.sin(i * Math.PI / mNbStrips));
                mVertices[i * mNbVerticesPerStrips + j][2] = (float) (radius * Math.sin(j * 2 * Math.PI / mNbVerticesPerStrips) * Math.cos(i * Math.PI / mNbStrips));
            }
        }
        // Populate the buffers:
        for (int i = 0; i < mNbStrips - 1; i++) {
            for (int j = 0; j < mNbVerticesPerStrips; j++) {
                byte[] indices = {
                        0, 1, 2, // first triangle (bottom left - top left - top right)
                        0, 2, 3  // second triangle (bottom left - top right - bottom right)
                };
                float[] p1 = mVertices[i * mNbVerticesPerStrips + j];
                float[] p2 = mVertices[i * mNbVerticesPerStrips + (j + 1) % mNbVerticesPerStrips];
                float[] p3 = mVertices[(i + 1) * mNbVerticesPerStrips + (j + 1) % mNbVerticesPerStrips];
                float[] p4 = mVertices[(i + 1) * mNbVerticesPerStrips + j];
                float[] quad = {
                        p1[0], p1[1], p1[2],
                        p2[0], p2[1], p2[2],
                        p3[0], p3[1], p3[2],
                        p4[0], p4[1], p4[2]
                };
                mVerticesBuffer.add(floatArrayToFloatBuffer(quad));
                mTextureBuffer.add(floatArrayToFloatBuffer(quad));
                mIndicesBuffer.add(byteArrayToByteBuffer(indices));
            }
        }
    }
// endregion
// ---------------------------------------------------------------------------------------------
// ---------------------------------------------------------------------------------------------
// region Draw
    public void draw(final GL10 gl) {
        // Bind the previously generated texture.
        gl.glBindTexture(GL10.GL_TEXTURE_2D, this.mTextures[0]);
        // Point to our buffers.
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        // Set the face rotation, clockwise in this case.
        gl.glFrontFace(GL10.GL_CW);
        for (int i = 0; i < mVerticesBuffer.size(); i++) {
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer.get(i));
            gl.glTexCoordPointer(3, GL10.GL_FLOAT, 0, mTextureBuffer.get(i));
            gl.glDrawElements(GL10.GL_TRIANGLE_STRIP, 6, GL10.GL_UNSIGNED_BYTE, mIndicesBuffer.get(i)); // GL_TRIANGLE_STRIP / GL_LINE_LOOP
        }
        // Disable the client state before leaving.
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    }
// endregion
// ---------------------------------------------------------------------------------------------
// ---------------------------------------------------------------------------------------------
// region Utils
    public void loadGLTexture(GL10 gl, Bitmap texture) {
        // Generate one texture pointer, and bind it to the texture array.
        gl.glGenTextures(1, this.mTextures, 0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, this.mTextures[0]);
        // Create nearest filtered texture.
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        // Use Android GLUtils to specify a two-dimensional texture image from our bitmap.
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, texture, 0);
        texture.recycle();
    }

    public FloatBuffer floatArrayToFloatBuffer(float[] array) {
        ByteBuffer vbb = ByteBuffer.allocateDirect(array.length * 4);
        vbb.order(ByteOrder.nativeOrder()); // use the device hardware's native byte order
        FloatBuffer fb = vbb.asFloatBuffer(); // create a floating point buffer from the ByteBuffer
        fb.put(array); // add the coordinates to the FloatBuffer
        fb.position(0); // set the buffer to read the first coordinate
        return fb;
    }
    public ByteBuffer byteArrayToByteBuffer(byte[] array) {
        ByteBuffer vbb = ByteBuffer.allocateDirect(array.length); // one byte per index value
        vbb.order(ByteOrder.nativeOrder()); // use the device hardware's native byte order
        vbb.put(array); // add the indices to the ByteBuffer
        vbb.position(0); // set the buffer to read the first index
        return vbb;
    }
// endregion
// ---------------------------------------------------------------------------------------------
}
Of course, the texture is not applied correctly, as I'm using the coordinates of my vertices. Does someone see how to do it correctly? I'll also need to be able to "move" the texture when the user pans.
EDIT: as suggested by codetiger, using lat/180 and lon/360, and then normalizing to [0..1], worked. Now I'm trying to add panning. It works when panning on longitude (horizontally):
But not when panning on latitude (vertically):
I'm simply adding values between 0..1 when the user pans. I tried to use the formula given here with no success. Any idea?
If it helps, this is what I want (obtained with the Ricoh Theta app):
In order to make the sphere a full 360-degree sphere, you can replace the vertex computation with the lines below.
mVertices[i * mNbVerticesPerStrips + j][0] = (float) (radius * Math.cos(j * 2 * Math.PI / mNbVerticesPerStrips) * Math.cos(2 * i * Math.PI / mNbStrips));
mVertices[i * mNbVerticesPerStrips + j][1] = (float) (radius * Math.sin(2 * i * Math.PI / mNbStrips));
mVertices[i * mNbVerticesPerStrips + j][2] = (float) (radius * Math.sin(j * 2 * Math.PI / mNbVerticesPerStrips) * Math.cos(2 * i * Math.PI / mNbStrips));
The only change is using 2 * Math.PI / mNbStrips for the second angle instead of Math.PI / mNbStrips.
And to rotate the image, you can rotate the sphere using:
gl.glRotatef(angle, 1.0f, 0.0f, 0.0f);
Update:
To get correct texture coordinates for the sphere, for a standard-distortion (equirectangular) sphere texture you can use (lat/180, lon/360) and normalise it to get [0..1], as mentioned here: https://stackoverflow.com/a/10395141/409315
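As a rough sketch of how that could look in the question's HalfSphere constructor (this is an assumption about how the texture buffer might be filled, not code from the answer; mTexCoords is a hypothetical array, and i/j are the strip/vertex loop indices already used in the question):
    // Hedged sketch: equirectangular (lat/long) texture coordinates for the vertex grid.
    // u follows longitude, v follows latitude, both normalised to [0..1].
    float[][] mTexCoords = new float[mNbStrips * mNbVerticesPerStrips][2];
    for (int i = 0; i < mNbStrips; i++) {
        for (int j = 0; j < mNbVerticesPerStrips; j++) {
            float u = (float) j / mNbVerticesPerStrips; // longitude / 360, normalised
            float v = (float) i / (mNbStrips - 1);      // latitude / 180, normalised
            mTexCoords[i * mNbVerticesPerStrips + j][0] = u;
            mTexCoords[i * mNbVerticesPerStrips + j][1] = v;
        }
    }
These per-vertex (u, v) pairs would then go into mTextureBuffer (two floats per vertex, drawn with glTexCoordPointer(2, ...)) instead of reusing the vertex positions.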
I have created a sound visualization using the felixpalmer library. Now I have to modify his code to look like this. How could I achieve empty spaces between the bars?
I will handle the color scheme, but I don't know how to add the empty spaces between the bars and the shadow effect.
BarGraphRenderer
public class BarGraphRenderer extends Renderer
{
    private int mDivisions;
    private Paint mPaint;
    private boolean mTop;

    /**
     * Renders the FFT data as a series of lines, in histogram form
     * @param divisions - must be a power of 2. Controls how many lines to draw
     * @param paint - Paint to draw lines with
     * @param top - whether to draw the lines at the top of the canvas, or the bottom
     */
    public BarGraphRenderer(int divisions, Paint paint, boolean top)
    {
        super();
        mDivisions = divisions;
        mPaint = paint;
        mTop = top;
    }

    @Override
    public void onRender(Canvas canvas, AudioData data, Rect rect)
    {
        // Do nothing, we only display FFT data
    }

    @Override
    public void onRender(Canvas canvas, FFTData data, Rect rect)
    {
        for (int i = 0; i < data.bytes.length / mDivisions; i++)
        {
            mFFTPoints[i * 4] = i * 4 * mDivisions;
            mFFTPoints[i * 4 + 2] = i * 4 * mDivisions;
            byte rfk = data.bytes[mDivisions * i];
            byte ifk = data.bytes[mDivisions * i + 1];
            float magnitude = (rfk * rfk + ifk * ifk);
            int dbValue = (int) (10 * Math.log10(magnitude));
            if (mTop)
            {
                mFFTPoints[i * 4 + 1] = 0;
                mFFTPoints[i * 4 + 3] = (dbValue * 2 - 10);
            }
            else
            {
                mFFTPoints[i * 4 + 1] = rect.height();
                mFFTPoints[i * 4 + 3] = rect.height() - (dbValue * 2 - 10);
            }
        }
        canvas.drawLines(mFFTPoints, mPaint);
    }
}
I have generated an n-sided polygon using the code below:
public class Vertex
{
    public FloatBuffer floatBuffer; // buffer holding the vertices
    public ShortBuffer indexBuffer;
    public int numVertices;
    public int numIndeces;

    public Vertex (float[] vertex)
    {
        this.setVertices(vertex);
    }

    public Vertex (float[] vertex, short[] indices)
    {
        this.setVertices(vertex);
        this.setIndices(indices);
    }

    private void setVertices(float vertex[])
    {
        // a float has 4 bytes so we allocate for each coordinate 4 bytes
        ByteBuffer factory = ByteBuffer.allocateDirect (vertex.length * 4);
        factory.order (ByteOrder.nativeOrder ());
        // allocates the memory from the byte buffer
        floatBuffer = factory.asFloatBuffer ();
        // fill the vertexBuffer with the vertices
        floatBuffer.put (vertex);
        // set the cursor position to the beginning of the buffer
        floatBuffer.position (0);
        numVertices = vertex.length;
    }

    protected void setIndices(short[] indices)
    {
        ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
        ibb.order(ByteOrder.nativeOrder());
        indexBuffer = ibb.asShortBuffer();
        indexBuffer.put(indices);
        indexBuffer.position(0);
        numIndeces = indices.length;
    }
}
Then to create a n-sided polygon:
public class Polygon extends Mesh
{
    public Polygon(int lines)
    {
        this(lines, 1f, 1f);
    }

    public Polygon(int lines, float xOffset, float yOffset)
    {
        float vertices[] = new float[lines * 3];
        float texturevertices[] = new float[lines * 2];
        short indices[] = new short[lines + 1];
        for (int i = 0; i < lines; i++)
        {
            vertices[i * 3] = (float) (xOffset * Math.cos(2 * Math.PI * i / lines));
            vertices[(i * 3) + 1] = (float) (yOffset * Math.sin(2 * Math.PI * i / lines));
            vertices[(i * 3) + 2] = 0.0f; // z
            indices[i] = (short) i;
            texturevertices[i * 2] = (float) (Math.cos(2 * Math.PI * i / lines) / 2 + 0.5f);
            texturevertices[(i * 2) + 1] = (float) (Math.sin(2 * Math.PI * i / lines) / 2 + 0.5f);
        }
        indices[lines] = indices[0];
        shape = new Vertex(vertices, indices);
        texture = new Vertex(texturevertices, indices);
    }
}
and as you can see I am setting up the indices in order so that I can render them as a line strip. Now I wish to texture the polygon. How do I do this?
I have tried implementing this:
from here: http://en.wikipedia.org/wiki/UV_mapping
But the result is really poor. How do I go through the coordinates and determine the correct ordering for texturing?
A related reference can be found here: How to draw a n sided regular polygon in cartesian coordinates?
EDIT: I updated my code according to the answer given by Matic Oblak below, and this is the result:
The rotation is of no concern.
This is very close... but no cigar just yet. The original texture is as follows:
If I am reading this correctly, you are trying to create a circle from an n-sided polygon. There are many ways to use different types of textures and paste them onto a shape; the most direct would be to have a texture with the whole shape drawn on it (for a large n that would be a circle), and the texture coordinates would then be those of a circle with center (.5, .5) and radius .5:
//for your case:
u = Math.cos(2*Math.PI*i/lines)/2 + .5
v = Math.sin(2*Math.PI*i/lines)/2 + .5
//the center coordinate should be set to (.5, .5) though
The equations you posted are meant for a sphere and are a bit more complicated, since it is hard to even imagine how to lay a sphere out as a 2D image.
EDIT (from comments):
Creating these triangles is not exactly the same as drawing the line strip. You should use a triangle fan, not a triangle strip, AND you need to set the first point to the center of the shape.
public Polygon(int lines, float xOffset, float yOffset)
{
    float vertices[] = new float[(lines + 1) * 3]; // number of angles + center
    float texturevertices[] = new float[(lines + 1) * 2];
    short indices[] = new short[lines + 2]; // number of vertices + closing

    vertices[0 * 3] = .0f; // set 1st to center
    vertices[(0 * 3) + 1] = .0f;
    vertices[(0 * 3) + 2] = .0f;
    indices[0] = 0;
    texturevertices[0] = .5f;
    texturevertices[1] = .5f;

    for (int i = 0; i < lines; i++)
    {
        vertices[(i + 1) * 3] = (float) (xOffset * Math.cos(2 * Math.PI * i / lines));
        vertices[((i + 1) * 3) + 1] = (float) (yOffset * Math.sin(2 * Math.PI * i / lines));
        vertices[((i + 1) * 3) + 2] = 0.0f; // z
        indices[i + 1] = (short) (i + 1); // rim vertex i + 1 (index 0 is the center)
        texturevertices[(i + 1) * 2] = (float) (Math.cos(2 * Math.PI * i / lines) / 2 + 0.5f);
        texturevertices[((i + 1) * 2) + 1] = (float) (Math.sin(2 * Math.PI * i / lines) / 2 + 0.5f);
    }
    indices[lines + 1] = indices[1]; // closing vertex is the same as for i = 0

    shape = new Vertex(vertices, indices);
    texture = new Vertex(texturevertices, indices);
}
Now you just need to draw up to the index count with a triangle FAN. Just a note about your "offsets": you are using xOffset and yOffset as elliptic parameters, not as offsets. If you do want to use them as offsets, e.g. vertices[(i+1)*3] = (float) (xOffset + Math.cos(2*Math.PI*i/lines)); (note the '+' instead of '*'), then the 1st vertex should be at the offset instead of (0,0), while the texture coordinates remain the same.
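A minimal sketch of what that draw call could look like with GL ES 1.x client-side arrays (assuming shape and texture are the Vertex objects built above and gl is the GL10 instance passed to your renderer):
    // Hedged sketch: render the polygon as a triangle fan.
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, shape.floatBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texture.floatBuffer);
    // numIndeces == lines + 2: the center, every rim vertex, and the closing vertex
    gl.glDrawElements(GL10.GL_TRIANGLE_FAN, shape.numIndeces, GL10.GL_UNSIGNED_SHORT, shape.indexBuffer);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);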
I'm trying to create a simple Android OpenGL 2.0 game to get my feet wet. I referred to Android's OpenGL tutorial, got it up and running, moved my square to where I want it, and now I'm trying to translate it on touch.
I've read that I have to unproject the current square... but I don't understand this. Below is my code, in case it helps with performing a translation on the square.
private float mPreviousY;

@Override
public boolean onTouchEvent(MotionEvent e) {
    // MotionEvent reports input details from the touch screen
    // and other input controls. In this case, you are only
    // interested in events where the touch position changed.
    float y = e.getY();

    switch (e.getAction()) {
        case MotionEvent.ACTION_MOVE:
            float dy = y - mPreviousY;
            // reverse direction of rotation to left of the mid-line
            if (y < getHeight() / 2) {
                dy = dy * -1;
            }
            mRenderer.mOffSet += dy;
            requestRender();
    }
    mPreviousY = y;
    return true;
}
my onDrawFrame:
@Override
public void onDrawFrame(GL10 unused) {
    // Draw background color
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Set the camera position (View matrix)
    Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -50, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
    Matrix.translateM(mModleViewProjMatrix, 0, 0, mOffSet, 0);

    // Calculate the projection and view transformation
    Matrix.multiplyMM(mModleViewProjMatrix, 0, mProjMatrix, 0, mViewMatrix, 0);

    // Draw square
    mPaddle.draw(mModleViewProjMatrix);
}
Unprojecting means reversing the process a vertex undergoes when being transformed. The forward transform is:
v_eye = Modelview · v
v_clip = Projection · v_eye
v_ndc = v_clip / v_clip.w
Now what you have to do is reverse this process. I suggest you take a look at the source code of Mesa's GLU function gluUnProject, which can be found here: http://cgit.freedesktop.org/mesa/glu/tree/src/libutil/project.c
Update
Unprojecting is essentially reversing the process.
Let's look at Mesa's GLU gluUnProject code:
GLint GLAPIENTRY
gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
             const GLdouble modelMatrix[16],
             const GLdouble projMatrix[16],
             const GLint viewport[4],
             GLdouble *objx, GLdouble *objy, GLdouble *objz)
{
    double finalMatrix[16];
    double in[4];
    double out[4];
First the compound transformation Projection · Modelview is evaluated…
__gluMultMatricesd(modelMatrix, projMatrix, finalMatrix);
…and inverted, i.e. reversed;
if (!__gluInvertMatrixd(finalMatrix, finalMatrix)) return(GL_FALSE);
in[0]=winx;
in[1]=winy;
in[2]=winz;
in[3]=1.0;
Then the window/viewport coordinates are mapped back into NDC coordinates
/* Map x and y from window coordinates */
in[0] = (in[0] - viewport[0]) / viewport[2];
in[1] = (in[1] - viewport[1]) / viewport[3];
/* Map to range -1 to 1 */
in[0] = in[0] * 2 - 1;
in[1] = in[1] * 2 - 1;
in[2] = in[2] * 2 - 1;
And multiplied with the inverse of the compound projection · modelview matrix:
__gluMultMatrixVecd(finalMatrix, in, out);
Then it is checked that the so-called homogeneous component is nonzero…
if (out[3] == 0.0) return(GL_FALSE);
…and the homogeneous divide is inverted:
out[0] /= out[3];
out[1] /= out[3];
out[2] /= out[3];
This results in the original vertex position prior to the projection process:
*objx = out[0];
*objy = out[1];
*objz = out[2];
return(GL_TRUE);
}
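For an Android project like yours, the same steps can be written with android.opengl.Matrix instead of porting the GLU code. This is a rough sketch under the assumption that projMatrix and modelViewMatrix are the matrices you already build in your renderer and viewport is {0, 0, width, height}:
    import android.opengl.Matrix;

    // Hedged sketch: gluUnProject-style unprojection using android.opengl.Matrix.
    // winX/winY are window coordinates (winY measured from the bottom of the view,
    // so flip MotionEvent's Y first); winZ is a depth in [0..1].
    static float[] unProject(float winX, float winY, float winZ,
                             float[] modelViewMatrix, float[] projMatrix, int[] viewport) {
        float[] pm = new float[16];
        float[] invPM = new float[16];
        // compound transformation Projection · Modelview, then its inverse
        Matrix.multiplyMM(pm, 0, projMatrix, 0, modelViewMatrix, 0);
        if (!Matrix.invertM(invPM, 0, pm, 0)) {
            return null;
        }
        // map window coordinates back to NDC in [-1..1]
        float[] ndc = {
            (winX - viewport[0]) / viewport[2] * 2f - 1f,
            (winY - viewport[1]) / viewport[3] * 2f - 1f,
            winZ * 2f - 1f,
            1f
        };
        float[] obj = new float[4];
        Matrix.multiplyMV(obj, 0, invPM, 0, ndc, 0);
        if (obj[3] == 0f) {
            return null; // degenerate homogeneous component
        }
        // invert the homogeneous divide
        return new float[] { obj[0] / obj[3], obj[1] / obj[3], obj[2] / obj[3] };
    }
Unprojecting the touch point at winZ = 0 and winZ = 1 gives two points of a ray in object space; intersecting that ray with the plane your square lies in tells you where to translate it.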