For an OpenGL (ES) game, the 'camera' moves along the x, y, and z axes in response to user input via gluLookAt. This works so far. However, I need to implement a heads-up display that sits in a static position in the corner of the screen. Knowing the current location of the camera, I draw the HUD. When the camera moves, the HUD jostles for a frame or two before returning to its proper location. Is there some good way of drawing a HUD in OpenGL that is not affected by the camera?
Why not just reset the model-view-projection (MVP) matrix to use a fixed camera position before you draw the HUD? And given that the camera and view for the HUD are fixed, you only need to calculate it once. I've done this on GLES 2.0, but it should work in earlier versions as well.
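One way to see why a single fixed matrix suffices: the HUD can use its own orthographic projection, computed once and reused every frame, regardless of where the scene camera goes. A minimal sketch of building such a column-major matrix, mirroring what glOrtho produces (the `HudMatrix` name is mine, not from any API):

```java
public class HudMatrix {
    // Build a column-major 4x4 orthographic projection matrix, as glOrtho
    // would: maps (left..right, bottom..top, near..far) into clip space.
    // Computed once, since the HUD camera never moves.
    public static float[] ortho(float l, float r, float b, float t,
                                float n, float f) {
        float[] m = new float[16];          // all other entries stay 0
        m[0]  = 2f / (r - l);
        m[5]  = 2f / (t - b);
        m[10] = -2f / (f - n);
        m[12] = -(r + l) / (r - l);
        m[13] = -(t + b) / (t - b);
        m[14] = -(f + n) / (f - n);
        m[15] = 1f;
        return m;
    }
}
```

You would upload this matrix (instead of the scene's camera matrix) just before drawing the HUD quad, then restore the scene matrices afterwards.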
When you start drawing a frame, remember your last position (save it in a variable) and use that data. This should remove the problem.
Related
I want to place an object at a real-world coordinate.
Is it possible without surface detection?
Yes, you can place it relative to another point, for example the position of the camera, i.e. the centre of the current view.
See this example, which uses the original, now deprecated Sceneform: https://stackoverflow.com/a/56681538/334402
Sceneform is now being updated and maintained here, and the same basic approach will still work: https://github.com/thomasgorisse/sceneform-android-sdk
It is possible but not recommended, as anchors are supposed to be attached to a trackable surface. What you can do is shoot a ray from your camera and place an anchor where the ray intersects a Trackable plane. You can set your app to do this a set number of seconds after the user starts it (make sure to let the user know they need to look around to help with plane detection).
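The ray cast itself reduces to a ray/plane intersection. A minimal sketch for a horizontal plane (in a real ARCore app you would more likely use the framework's own hit-test results; the `RayPlane` helper below is purely illustrative):

```java
public class RayPlane {
    // Intersect a ray (origin o, direction d) with a horizontal plane
    // y = planeY. Returns the hit point {x, y, z}, or null if the ray is
    // parallel to the plane or the plane lies behind the ray origin.
    public static float[] hitHorizontalPlane(float[] o, float[] d, float planeY) {
        if (Math.abs(d[1]) < 1e-6f) return null;   // parallel to the plane
        float t = (planeY - o[1]) / d[1];          // distance along the ray
        if (t < 0f) return null;                   // plane is behind the camera
        return new float[] { o[0] + t * d[0], planeY, o[2] + t * d[2] };
    }
}
```

The resulting point is where you would create the anchor once a plane has been detected under it.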
Now I have a circle and an image on the canvas:
A ping-pong board (the image) and a ping-pong ball (drawn by drawCircle).
The position of the ball will depend on the accelerometer.
Is it possible to detect whether the ball is outside the board or not?
Or do I need to draw the board programmatically without using the image?
What you are looking for is called 'collision detection': a technique used in game programming where you define boundaries (the area within which an object can be hit) and then check whether another object enters those positions.
You can do this simply by saying that the boundaries are anything within the height/width of the image on the canvas, but I suspect that in your game you will want a subsection of that.
You will need to define a related object for your image that holds the 'collision boundary'. In a 2D game that will be the starting X, Y plus the height and width, while in a 3D game you will also need to store the Z position.
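In Java the 2D case might look like this (a sketch; the `Bounds` class is illustrative, not from any framework). The ball has left the board when the board's boundary no longer contains the ball's:

```java
public class Bounds {
    // Collision boundary as described above: starting X, Y plus width and height.
    public final float x, y, width, height;

    public Bounds(float x, float y, float width, float height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }

    // True if this boundary fully contains the other (ball still on the board).
    public boolean contains(Bounds b) {
        return b.x >= x && b.y >= y
            && b.x + b.width <= x + width
            && b.y + b.height <= y + height;
    }

    // True if the two boundaries overlap at all (a classic hit test).
    public boolean intersects(Bounds b) {
        return x < b.x + b.width && b.x < x + width
            && y < b.y + b.height && b.y < y + height;
    }
}
```

You would update the ball's `Bounds` from the accelerometer-driven position each frame and test it against the board's `Bounds`.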
This is probably quite confusing to start with, but I found you a little guide that explains it in more detail than I have space for here:
http://www.kilobolt.com/day-4-collision-detection-part-1.html
Let me know if you have any questions; the game sounds exciting!
I'm writing an Android app using OpenCV for my master's that will be something like a game. The main goal is to detect a car in a selected area. The "prize" will be triggered randomly while detecting cars. When the user hits the proper car, I want to display a 3D object overlaid on the screen, attach it to the middle of the car and keep it there, so that when the user changes the angle of their view on the car, the object will also be seen from a different angle.
At the moment I have EVERYTHING besides attaching the object. I've created the detection, I'm drawing the 3D overlay, and I've created functions that allow me to rotate the camera, etc. BUT I do not have any clue how I can attach the overlay to a specific point. Because I don't have this, I have no reference from which to recalculate the renderer to change the overlay's perspective.
Please, I really need some help; even a small idea will be fine:
How can I attach the overlay to a specific point in the real world?
(Sorry, I couldn't comment. Need at least 50 points to do that ... :P )
I assume your image of the car is coming from a camera feed and you are drawing the 3D car in OpenGL. If so, then you can try this:
Set the pixel format of the OpenGL layer to RGBA_8888, so that you can set the background of the OpenGL camera to a transparent color.
Use a RelativeLayout as the layout of your activity.
First add the OpenCV camera layer to it at full height and width.
Then add the OpenGL layer at full height and width.
Get the position of the real car from the OpenCV layer as a pixel value, or however you did it.
Then scale it to your OpenGL parameters so that you can draw it in the right spot.
It worked for me; hope it works for you too.
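The scaling in the last step, from an OpenCV pixel position to OpenGL normalized device coordinates, might be sketched like this (assuming both layers cover the full screen; the `LayerMapper` name is illustrative):

```java
public class LayerMapper {
    // Map a pixel position reported by the OpenCV layer into OpenGL
    // normalized device coordinates (-1..1 on both axes), assuming the
    // two layers are exactly the same size.
    public static float[] pixelToNdc(float px, float py,
                                     float screenW, float screenH) {
        float ndcX = (px / screenW) * 2f - 1f;
        float ndcY = 1f - (py / screenH) * 2f;   // pixel y grows down, NDC y grows up
        return new float[] { ndcX, ndcY };
    }
}
```

The flipped y axis is the usual gotcha: screen pixels count down from the top-left, while OpenGL's NDC counts up from the centre.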
For over a week I've been struggling with a feature that draws a picture on the screen (overlaying the camera preview) and then moves the picture according to the motion of the camera. An example: I hold my phone in a landscape position (the app is always in that orientation) and look at my table through the camera. Then I touch the screen to draw a box on that table, and afterwards point the phone camera at the ceiling; this should make the box go in the opposite direction, which is down (creating an illusion of the box really being there).
I am able to draw an object in a SurfaceView with a transparent background, which overlays the camera preview, and move the object up and down using Sensor.TYPE_GRAVITY (Z-axis readings). It works really smoothly and perfectly mimics the up/down motion.
The problem starts when I want to move it horizontally: I could not find a sensor type which would help me achieve the desired effect.
To move the picture on the screen, I first save the current sensor reading when the user touched the screen (to paint a picture):
temporaryX = ?? - (this I did not figure out yet).
temporaryY = accelerometerHandler.getAccelZ;
Then subtract that value from the new reading:
differenceX = newX - temporaryX;
differenceY = accelerometerHandler.getAccelZ - temporaryY;
And then draw the picture on the canvas using those offsets:
canvas.drawBitmap(image, posX + differenceX, posY + differenceY, null);
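The save-then-subtract scheme in the snippets above can be gathered into one small helper (a sketch; all names here are mine, and the pixels-per-unit scale is a tuning constant you would calibrate by hand):

```java
public class OffsetTracker {
    private float refX, refY;          // sensor readings captured at touch time
    private final float pixelsPerUnit; // screen pixels per sensor unit (tuning constant)

    public OffsetTracker(float pixelsPerUnit) {
        this.pixelsPerUnit = pixelsPerUnit;
    }

    // Called once, when the user touches the screen to place the picture.
    public void setReference(float sensorX, float sensorY) {
        refX = sensorX;
        refY = sensorY;
    }

    // Called each frame: turn the change in sensor readings into a pixel offset
    // to add to the picture's original position.
    public float[] offset(float sensorX, float sensorY) {
        return new float[] { (sensorX - refX) * pixelsPerUnit,
                             (sensorY - refY) * pixelsPerUnit };
    }
}
```

The open question in the post, which sensor axis to feed in as `sensorX`, is unchanged by this refactoring; the helper only isolates the bookkeeping.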
I tried Sensor.TYPE_ROTATION_VECTOR, which allowed me to move horizontally with quite good results, but the sensor readings also change when I look up or down, which messes with the vertical motion (the object moves at an angle).
Should I stick with sensors, or is there a different approach to the described problem? Even though I searched a lot for a similar question, I could not find any topics that covered this problem. Thank you for all your answers!
I'm struggling with something I would expect to be straightforward with libgdx.
In short this is a "finger paint" app where I want to draw a path of a certain width where the user touches the screen.
I've earlier done this using a plain Android android.view.View. I had an android.graphics.Path in which I stored the coordinates of the user's current touch. In the onDraw() method of the view, I drew the path to the android.graphics.Canvas. Whenever the user released a finger, I drew the path to an offscreen canvas/android.graphics.Bitmap, which was also drawn in the onDraw() method. Plain and simple.
How can that be done using libgdx?
I have tried using a com.badlogic.gdx.graphics.Pixmap that I draw a line to whenever the user moves a finger. This works well except that I'm unable to control the width of the line using Gdx.gl.glLineWidth(). I know I can draw a rectangle instead of a line to set the width, but Pixmap doesn't seem to have any means of rotating, so I don't see how this can be done.
I can use a com.badlogic.gdx.graphics.glutils.ShapeRenderer for drawing lines (or rectangles) in com.badlogic.gdx.Screen.render(). As far as I can see, I would then need to store every single touch point of the current touch and draw all the lines on render. Whenever the user releases a finger, I guess I can store the screen as-is with something like com.badlogic.gdx.utils.ScreenUtils.getFrameBufferPixmap(). Hopefully there is an easier way to achieve what I want.
I ended up drawing circles on a pixmap where the line should be:
One circle on touchDown
Several circles from the last touch point to the next touch point reported to touchDragged
I'm not very happy with this solution, but it kinda works.
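The interpolation from the last touch point to the next can be sketched as placing circle centres at a fixed spacing along the segment, so fast drags leave no gaps (the `Stroke` class is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class Stroke {
    // Place circle centres at roughly the given spacing along the segment
    // from the last touch point (x0, y0) to the new one (x1, y1),
    // including both endpoints.
    public static List<float[]> centres(float x0, float y0,
                                        float x1, float y1, float spacing) {
        List<float[]> out = new ArrayList<>();
        float dx = x1 - x0, dy = y1 - y0;
        float len = (float) Math.sqrt(dx * dx + dy * dy);
        int steps = Math.max(1, (int) (len / spacing));
        for (int i = 0; i <= steps; i++) {
            float t = (float) i / steps;
            out.add(new float[] { x0 + t * dx, y0 + t * dy });
        }
        return out;
    }
}
```

Each returned centre would then be passed to Pixmap's circle-fill call with the stroke's half-width as radius.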
Maybe you can calculate the line's corner coordinates manually and use triangles to draw them on the pixmap? For example, use two triangles to form a (rotated) box?
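That idea can be sketched as follows: compute the four corners of a width-wide box around the segment by offsetting both endpoints along the perpendicular, then fill it as two triangles (the `Quad` helper is illustrative; on a Pixmap you would feed the corners to its triangle-fill method):

```java
public class Quad {
    // Corners of a rectangle of the given width whose spine runs from
    // (x0, y0) to (x1, y1): offset both endpoints by the half-width-long
    // perpendicular. Fill it as two triangles, e.g. (c0, c1, c2) and
    // (c2, c1, c3). Assumes the segment has nonzero length.
    public static float[][] corners(float x0, float y0,
                                    float x1, float y1, float width) {
        float dx = x1 - x0, dy = y1 - y0;
        float len = (float) Math.sqrt(dx * dx + dy * dy);
        float nx = -dy / len * width / 2f;   // perpendicular, half-width long
        float ny =  dx / len * width / 2f;
        return new float[][] {
            { x0 + nx, y0 + ny }, { x0 - nx, y0 - ny },
            { x1 + nx, y1 + ny }, { x1 - nx, y1 - ny },
        };
    }
}
```

This gives a properly rotated box for any segment direction, which is exactly what a plain rectangle-draw on Pixmap cannot do.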