I'm currently developing an Android app for blind people that uses the device's camera.
The app shows the preview of the camera's captured frames in a SurfaceView. When the screen is touched, the app gets the screen coordinates with event.getX() and event.getY(). However, what I really want is the coordinates of the touched point relative to the frame.
With view.getLocationOnScreen() or view.getLocationInWindow() I can get the position of the top-left corner of the SurfaceView, but the frame does not occupy the whole surface of the SurfaceView.
How can I transform the screen coordinates into coordinates relative to the frame? Or how can I find out where the frame is located inside the SurfaceView? When I use getMarginLeft() or similar, I get 0.
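For reference, a minimal sketch of one way to invert the mapping, assuming the frame is drawn centred and uniformly scaled inside the SurfaceView (letterboxed). The names previewWidth/previewHeight and screenToFrame are mine, not from any API:

// Hypothetical helper: map a touch point on the SurfaceView to frame coordinates.
// Assumes the frame is scaled uniformly and centred inside the view (letterboxing);
// previewWidth/previewHeight are the dimensions of the camera frame.
float[] screenToFrame(float touchX, float touchY,
                      int viewWidth, int viewHeight,
                      int previewWidth, int previewHeight) {
    float scale = Math.min((float) viewWidth / previewWidth,
                           (float) viewHeight / previewHeight);
    float offsetX = (viewWidth  - previewWidth  * scale) / 2f;
    float offsetY = (viewHeight - previewHeight * scale) / 2f;
    float frameX = (touchX - offsetX) / scale;
    float frameY = (touchY - offsetY) / scale;
    // Values outside [0, previewWidth) / [0, previewHeight) mean the touch
    // landed in the letterbox bars, not on the frame itself.
    return new float[] { frameX, frameY };
}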
Related
I want to place an object at a real-world coordinate.
Is this possible without surface detection?
Yes, you can place it relative to another point, for example the position of the camera, i.e. the centre of the current view.
See this example, which uses the original, now deprecated Sceneform: https://stackoverflow.com/a/56681538/334402
Sceneform is now being updated and maintained here, and the same basic approach will still work: https://github.com/thomasgorisse/sceneform-android-sdk
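As a rough sketch of the idea with the plain ARCore API (the helper name is mine), you can compose a pose one metre in front of the current camera and anchor it there, with no plane required:

// Sketch: anchor a pose 1 m in front of the current camera, assuming you
// already have an ARCore Session and the latest Frame.
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;

Anchor anchorInFrontOfCamera(Session session, Frame frame) {
    // Translate 1 m along -Z in camera space (the camera looks down -Z).
    Pose cameraPose = frame.getCamera().getPose();
    Pose oneMetreAhead = cameraPose.compose(Pose.makeTranslation(0f, 0f, -1f));
    return session.createAnchor(oneMetreAhead);
}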
It is possible but not recommended, as anchors are supposed to be attached to a trackable surface. What you can do is shoot a ray from your camera and place an anchor where the ray intersects a Trackable plane. You can have your app do this a set number of seconds after the user starts it (make sure to let the user know they need to look around to help plane detection).
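A minimal sketch of that hit-test approach with the ARCore API (the helper name and the choice of the screen centre are mine):

// Sketch: fire a ray through the screen centre and anchor on the first
// Trackable plane it hits. Returns null if no plane was hit yet.
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;

Anchor anchorOnPlaneAtScreenCentre(Frame frame, int viewWidth, int viewHeight) {
    for (HitResult hit : frame.hitTest(viewWidth / 2f, viewHeight / 2f)) {
        if (hit.getTrackable() instanceof Plane
                && ((Plane) hit.getTrackable()).isPoseInPolygon(hit.getHitPose())) {
            return hit.createAnchor();
        }
    }
    return null; // no trackable plane under the ray yet - keep scanning
}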
I am developing an app where, on image detection, I play a video as an anchor node. Everything works perfectly except stopping the video when the camera moves off the detected image: I can still hear the video's audio even when I move the camera away from the image. I have tried using the augmented image's tracking state to stop it, but that did not help.
Is there any callback or observer where I can check whether the camera is no longer looking at the augmented image?
AFAIK there is no callback available at this time - this type of thing has been discussed on the ARCore issues list, and it was noted that it is outside the scope of ARCore: https://github.com/google-ar/arcore-android-sdk/issues/78
You can, however, check yourself by mapping your node's world position to a screen point and then checking whether it is within the arSceneView.scene.camera view bounds - see below for the x and y values that indicate the point is outside the view, from the ARCore documentation: https://developers.google.com/ar/reference/java/sceneform/reference/com/google/ar/sceneform/Camera#worldToScreenPoint(com.google.ar.sceneform.math.Vector3)
public Vector3 worldToScreenPoint (Vector3 point)
Convert a point from world space into screen space.
The X value is negative when the point is left of the viewport, between 0 and the width of the SceneView when the point is within the viewport, and greater than the width when the point is to the right of the viewport.
The Y value is negative when the point is below the viewport, between 0 and the height of the SceneView when the point is within the viewport, and greater than the height when the point is above the viewport.
The Z value is always 0 since the return value is a 2D coordinate.
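Putting that together, a rough check along those lines might look like this (Sceneform; the helper name is mine):

// Sketch: returns true if the node's origin maps inside the SceneView bounds.
import com.google.ar.sceneform.ArSceneView;
import com.google.ar.sceneform.Camera;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.math.Vector3;

boolean isNodeOnScreen(ArSceneView arSceneView, Node node) {
    Camera camera = arSceneView.getScene().getCamera();
    Vector3 screenPoint = camera.worldToScreenPoint(node.getWorldPosition());
    return screenPoint.x >= 0 && screenPoint.x <= arSceneView.getWidth()
        && screenPoint.y >= 0 && screenPoint.y <= arSceneView.getHeight();
}

You could run this from a Scene.OnUpdateListener so it is evaluated every frame, and pause the video player when it returns false.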
I am developing an application where I draw a point on the camera preview using a Canvas. Now I want the point to stay fixed even when the camera is moved up/down or left/right.
My question is:
Is it possible to fix the point like that on the preview screen?
For over a week I've been struggling with a feature that draws a picture on the screen (overlaying the camera preview) and then moves the picture according to the motion of the camera. An example: I hold my phone in a landscape position (the app is always in that orientation) and look at my table with the camera. I then touch the screen to draw a box on the table; when I look at the ceiling with the camera, the box should move in the opposite direction - down (creating the illusion of the box really being there).
I am able to draw an object in a SurfaceView with a transparent background, which overlays the camera preview, and to move the object up and down using Sensor.TYPE_GRAVITY (Z-axis readings). It works really smoothly and perfectly mimics the up/down motion.
The problem starts when I want to move it horizontally - I could not find a sensor type that would help me achieve the desired effect.
To move the picture on the screen, I first save the current sensor reading when the user touches the screen (to paint the picture):
temporaryX = ??; // this I did not figure out yet
temporaryY = accelerometerHandler.getAccelZ;
Then subtract that value from the new reading:
differenceX = newX - temporaryX;
differenceY = accelerometerHandler.getAccelZ - temporaryY;
And then I draw the picture on the canvas using those offsets:
canvas.drawBitmap(image, posX + differenceX, posY + differenceY, null);
I tried Sensor.TYPE_ROTATION_VECTOR, which allowed me to move horizontally with quite good results, but its readings also change when I look up or down, which interferes with the vertical motion (the object moves at an angle).
Should I stick with sensors, or is there a different approach to the described problem? Even though I searched a lot for a similar question, I could not find any topics covering this problem. Thank you for all your answers!
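For what it's worth, one way to keep the horizontal and vertical axes from interfering is to convert the rotation vector into an orientation and use azimuth for the horizontal offset and pitch for the vertical one. A minimal sketch, to go inside your SensorEventListener (the scaling from radians to pixels is up to you):

// Sketch: derive azimuth (horizontal) and pitch (vertical) from
// Sensor.TYPE_ROTATION_VECTOR so the two axes no longer mix.
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorManager;

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;
    float[] rotationMatrix = new float[9];
    float[] orientation = new float[3];
    SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
    SensorManager.getOrientation(rotationMatrix, orientation);
    float azimuth = orientation[0]; // rotation about the vertical axis -> use for X
    float pitch   = orientation[1]; // up/down tilt -> use for Y
    // Save these on touch as temporaryX/temporaryY, then draw with the
    // differences, as in the snippet above (scaling radians to pixels).
}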
For an OpenGL ES game, the 'camera' moves about the x, y and z axes in response to user input via gluLookAt. This works so far. However, I need to implement a heads-up display (HUD) that stays in a static position in the corner of the screen. Knowing the current location of the camera, I draw the HUD. When the camera moves, the HUD jostles for a frame or two before returning to its proper location. Is there a good way of drawing a HUD in OpenGL that is not affected by the camera?
Why not just reset the model-view-projection (MVP) matrix to a fixed camera position before you draw the HUD? Given that the camera and view for the HUD are fixed, you only need to calculate it once. I've done this on GLES 2.0, but it should work in earlier versions as well.
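A sketch of that idea on GLES 2.0 (the uniform handles and the drawScene/drawHud placeholders are mine; the handles would come from glGetUniformLocation):

// Sketch: per-frame draw order for a camera-independent HUD (GLES 2.0).
// The HUD uses a fixed orthographic MVP computed once; the scene uses the
// camera-dependent MVP every frame.
import android.opengl.GLES20;
import android.opengl.Matrix;

private final float[] hudMvp = new float[16];

void onSurfaceChanged(int width, int height) {
    // Screen-space projection for the HUD: computed once, never changes with the camera.
    Matrix.orthoM(hudMvp, 0, 0f, width, 0f, height, -1f, 1f);
}

void drawFrame(float[] sceneMvp) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    // 1. World pass: upload the camera-dependent MVP and draw the scene.
    GLES20.glUniformMatrix4fv(sceneMvpHandle, 1, false, sceneMvp, 0);
    drawScene(); // placeholder for your scene geometry

    // 2. HUD pass: upload the fixed orthographic MVP, so camera motion cannot affect it.
    GLES20.glDisable(GLES20.GL_DEPTH_TEST); // draw the HUD on top
    GLES20.glUniformMatrix4fv(hudMvpHandle, 1, false, hudMvp, 0);
    drawHud(); // placeholder for your HUD quads
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
}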
When you start drawing a frame, remember your last camera position (save it in a variable) and use that same data throughout the frame. This should remove all problems.