I am trying to develop an Android app that draws a perfectly straight line directly in front of me. My reference point is my phone, which means the line has to be parallel to my phone's left side.
I have the following issues:
I am using Sceneform, so there is no "onSurfaceCreated" callback (?).
I assume that the white dots show the detected surface. I would then expect that, once a surface is detected, I can place a Shape on it, but sometimes that is not possible. And sometimes I can place a Shape even when there are no visible white dots.
When I try to draw a line between the points (0,0,0) and (1,0,0), it is not always parallel to the left side of my phone. I assume the reason is related to the following fact:
the angle between the left-bottom corner and the left-top corner of the detected surface is not zero (if we take the phone's left side as the y-axis and its bottom as the x-axis), and this angle changes each time I reopen the app.
These are more theory questions than implementation questions, so I need someone to confirm or refute them, or give me a guideline.
1) There is no method like onSurfaceCreated in Sceneform.
2) Not all detected planes are covered with white dots. This is intended: if every detected plane were rendered with white dots, it would confuse users.
3) When you talk about the points (0,0,0) and (1,0,0), is that a world position or a local position? Either way, you cannot draw a line parallel to the left side of your phone with that approach: the world coordinate system is fixed when the session starts and is not aligned with your phone's current orientation, which is also why the angle you measured changes every time you reopen the app.
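If you want a line that follows the phone's left edge, derive its direction from the current camera pose rather than from fixed world coordinates. A minimal sketch, assuming the ARCore APIs and that arFragment is your ArFragment (the variable names are mine):

```java
// Pose whose +Y axis points "up" along the screen in the current
// display orientation, i.e. along the phone's left edge in portrait.
Frame frame = arFragment.getArSceneView().getArFrame();
Pose cameraPose = frame.getCamera().getDisplayOrientedPose();

// Rotate the local +Y axis into world space to get the edge direction.
float[] edgeDir = cameraPose.rotateVector(new float[] {0f, 1f, 0f});

// Two world-space endpoints one meter apart along that direction;
// 'start' is an assumed float[3] anchor position for the line.
float[] end = {
        start[0] + edgeDir[0],
        start[1] + edgeDir[1],
        start[2] + edgeDir[2]
};
```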
Context: I'm currently working on an Augmented Reality (AR) application using OpenGL ES 2.0 and AR glasses running on Android. My goal is to display a virtual cursor at the tip of a real object: a screwdriver. Both the glasses and the screwdriver are tracked by a fixed external camera. The left image just below should give you an idea of the setup.
Things that are working: So far, I'm able to display a virtual 3D object (for example a cube) at a given location in space. For example, I am able to position it at (more or less 1 cm from) the tip of the tracked screwdriver. When I just rotate my head, the virtual cube gives the impression of "staying at the same place" in the real world, which is nice. This behavior is what I expected, and it is consistent with the cube's real-world anchor.
Issue: However, when I translate my head (and thus translate the OpenGL camera), the cube shows a strange spatial offset, as if it were shifted away from the object's tip (case 2 in the drawing above). This shift can be pretty significant (up to 5 or 6 cm) and is inconsistent with the real world. But if I align the object exactly with one of the camera axes, the cube seems well placed at the tip of the object, which confuses me.
Question: Is it just a strange visual perspective effect? How can it work for head rotations but not head translations? Did I miss something about perspective projection in OpenGL ES?
Implementation details: The fixed external camera is the origin of world coordinates. It is really precise and gives me both the world-space position and rotation of each tracked object (including the glasses and the screwdriver). To be more precise, it continuously sends this data over Bluetooth to my Android program to keep what the user sees up to date.
In case 1, this works like a charm: the camera correctly detects that the screwdriver is at position (0, 0, 1 m), with whatever rotation, I display a cube centered at that position, and it appears correctly placed. But after a head translation (case 2), the screwdriver is still detected at the correct position (it didn't move, after all), yet the cube is shifted in a way that does not make sense to me.
If it were a small offset, I would put it down to an accumulation of small errors, but here it seems too big to be the only explanation. Depending on the head translation I make, the cube gains a different offset and overall gives the impression of not having a single fixed position in the world.
I am using a perspective projection with the FOV and aspect ratio of the AR glasses. The position of the OpenGL camera is set to the position of the AR glasses, and the look-at values are computed from the direction the head is currently facing.
If I modify the FOV, I lose the expected behavior for head rotations and correct positioning. Finally, I am using the glasses as a stereo display.
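For reference, the camera setup described above looks roughly like this (a minimal sketch using android.opengl.Matrix; glassesPos, forward, and up are placeholders for the values coming from the external tracker):

```java
// View matrix: eye at the tracked glasses position, looking along the
// direction the head is currently facing.
float[] viewM = new float[16];
Matrix.setLookAtM(viewM, 0,
        glassesPos[0], glassesPos[1], glassesPos[2],  // eye position
        glassesPos[0] + forward[0],                   // look-at target =
        glassesPos[1] + forward[1],                   //   eye + facing
        glassesPos[2] + forward[2],                   //   direction
        up[0], up[1], up[2]);                         // up vector

// Projection matrix from the glasses' FOV and aspect ratio.
float[] projM = new float[16];
Matrix.perspectiveM(projM, 0, fovYDegrees, aspectRatio, 0.1f, 100f);
```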
Could someone explain to me how a convex path is calculated? I need to draw some cubics plus some lines, but then the path is reported as non-convex. However, when I leave only the lines or only the cubics, it is convex. The problem is that I need an irregularly shaped background and a convex path for the shadow outline, but I can't figure out how to combine cubics with lines to make a convex path, if that is even possible.
A path is convex if it has a single contour, and only ever curves in a single direction.
Convex means it keeps bending/rotating in one direction, and one direction only. You really have to make sure that all your angles and curves add up. If your curve connects to a line, it has to have the same angle or be "more convex". I hope the following two images will clear this up.
The picture below is not convex, and that's likely your problem too. The line connects to a curve, but the curve has a different angle than the line, so it changes direction where they connect. See how the line goes down, but instead of continuing the downwards motion the contour suddenly goes up again. Instead of keeping one direction, it changes for a moment where the line and curve meet.
The above image is exaggerated for clarity, but even small errors in the connection between the line and the curve will trigger the error.
The next line connects to a curve with a steeper angle. This is convex and won't be a problem. See how the whole contour keeps a single motion in one direction: whichever way you follow it, it keeps turning to the same side.
I answered because I was facing a similar issue recently and I feel your pain. I recommend pen and paper to double- and triple-check the math, and a small epsilon value to account for rounding errors. You really have to nail the math, because if the connection between your line and curve is off by just a little, it will throw that exception.
Sorry for my bad paint skills
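To make this concrete, here is a minimal sketch (with made-up coordinates) of a single-contour android.graphics.Path that mixes lines with a cubic and stays convex. The trick is that the cubic's control points continue the tangents of the adjoining lines, so the turning direction never flips:

```java
Path path = new Path();
path.moveTo(0f, 100f);
path.lineTo(0f, 40f);     // left edge, heading straight up
// The first control point (0,10) lies on the left edge's direction and the
// second control point (100,10) lies on the right edge's direction, so both
// line-curve joins are tangent-continuous and the contour keeps turning the
// same way.
path.cubicTo(0f, 10f, 100f, 10f, 100f, 40f);
path.lineTo(100f, 100f);  // right edge, heading straight down
path.close();             // bottom edge back to the start
// path.isConvex() should report true for this contour (API 21+).
```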
The problem I am trying to solve is to detect cubes and get their colours. I use live camera images captured by an Android phone, and recognition has to be fast (<1 s). Example of a cube:
I also have differently coloured cubes. They can be placed randomly (for example, touching each other).
I can easily detect one cube, and in some cases even two, but the problem is when I have 3 or more, or when 2 cubes are really close to each other.
Currently, processing looks like this (a sketch of the pipeline follows the list):
blur the image with a Gaussian
convert to HSV and use only the S channel
detect edges with Canny
dilate and erode the edges
use HoughLinesP to get lines
from the lines (rejecting ones that are too long or too short), calculate intersection points and from those get the corners of the cubes
knowing the corners (which must be precise), get the colours
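In the OpenCV Java bindings, the pipeline above looks roughly like this (a sketch; the numeric thresholds are placeholder values that I tune):

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

class CubePipeline {
    // Returns the HoughLinesP output: each row of 'lines' is (x1, y1, x2, y2).
    static Mat detectLines(Mat bgr) {
        Mat blurred = new Mat();
        Imgproc.GaussianBlur(bgr, blurred, new Size(5, 5), 0);

        // Convert to HSV and keep only the saturation (S) channel.
        Mat hsv = new Mat();
        Imgproc.cvtColor(blurred, hsv, Imgproc.COLOR_BGR2HSV);
        List<Mat> channels = new ArrayList<>();
        Core.split(hsv, channels);
        Mat s = channels.get(1);

        // Canny edges, then dilate/erode to close small gaps.
        Mat edges = new Mat();
        Imgproc.Canny(s, edges, 50, 150);
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
        Imgproc.dilate(edges, edges, kernel);
        Imgproc.erode(edges, edges, kernel);

        // Probabilistic Hough transform to extract line segments.
        Mat lines = new Mat();
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 50, 30, 10);
        return lines;
    }
}
```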
Example results: in one frame nothing is detected; in another, 2 cubes are detected (red and orange points are corners, cyan points are intersection points, black lines are the lines found by HoughLinesP); in a third, nothing is detected even though some lines are found.
Basically what I need is to find the correct corners of the cubes. I tried using Imgproc.goodFeaturesToTrack and Imgproc.cornerHarris, but they find too many corners, and usually not the most important ones.
I also tried using findContours, without success even for two objects; findContours was also crashing my app after a minute of running. At some point I tried Feature Matching + Homography to match a grayscale image of a cube against the camera image, but the results were messy. Template Matching didn't give me good results either.
Do you have any idea how to make detection more reliable and precise?
Thanks for your help.
I'm writing an Android app using OpenCV for my master's that will be something like a game. The main goal is to detect a car in a selected area. A "prize" will be triggered randomly while detecting cars. When the user hits the proper car, I want to display a 3D object overlaid on the screen, attach it to the middle of the car, and keep it there, so that when the user changes the angle from which he views the car, the object is also seen from a different angle.
At the moment I have EVERYTHING besides attaching the object. I've created the detection, I'm drawing the 3D overlay, and I've created functions that allow me to rotate the camera, etc. BUT I have no clue how to attach the overlay to a specific point. Because I don't have this, I have no reference point from which to recalculate the renderer and change the overlay's perspective.
Please, I really need some help; even a small idea will be fine:
How can I attach the overlay to a specific point in the real world?
(Sorry, I couldn't comment. Need at least 50 points to do that ... :P )
I assume your image of the car comes from a camera feed and you are drawing the 3D object in OpenGL. If so, you can try this (a minimal sketch of the layout follows the steps):
Set the pixel format of the OpenGL layer to RGBA_8888 so that you can make the background of the OpenGL view a transparent color.
Use a RelativeLayout as the layout of your activity.
First, add the OpenCV camera view to it at full height and width.
Then add the OpenGL layer at full height and width.
Get the position of the real car from the OpenCV layer as a pixel value (or however you obtained it).
Then scale it to your OpenGL coordinates so that you can draw the overlay at the right spot.
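Something like this, assuming it runs inside your Activity's onCreate() and that myRenderer is your existing overlay renderer:

```java
RelativeLayout root = new RelativeLayout(this);

// OpenCV camera view at full size (org.opencv.android.JavaCameraView).
JavaCameraView cameraView = new JavaCameraView(this, 0);
root.addView(cameraView, new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.MATCH_PARENT,
        RelativeLayout.LayoutParams.MATCH_PARENT));

// Transparent OpenGL layer on top, also at full size.
GLSurfaceView glView = new GLSurfaceView(this);
glView.setEGLContextClientVersion(2);
glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);         // RGBA_8888 config
glView.getHolder().setFormat(PixelFormat.TRANSLUCENT); // see-through background
glView.setZOrderOnTop(true);                           // draw above the camera feed
glView.setRenderer(myRenderer);
root.addView(glView, new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.MATCH_PARENT,
        RelativeLayout.LayoutParams.MATCH_PARENT));

setContentView(root);
```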
It worked for me. Hope it works for you too.
I'm struggling with something I would expect to be straightforward with libgdx.
In short, this is a "finger paint" app where I want to draw a path of a certain width wherever the user touches the screen.
I've done this before using a plain android.view.View. I had an android.graphics.Path in which I stored the coordinates of the user's current touch. In the onDraw() method of the view, I drew the path to the android.graphics.Canvas. Whenever the user released a finger, I drew the path to an offscreen canvas/android.graphics.Bitmap, which was also drawn in the onDraw() method. Plain and simple.
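For reference, that plain-Android approach looked roughly like this (a simplified sketch; the class and field names are mine):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.MotionEvent;
import android.view.View;

public class FingerPaintView extends View {
    private final Path currentPath = new Path();  // stroke in progress
    private final Paint paint = new Paint();
    private final Bitmap offscreen;               // committed strokes
    private final Canvas offscreenCanvas;

    public FingerPaintView(Context context, int width, int height) {
        super(context);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(12f);                // the line width I want
        offscreen = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        offscreenCanvas = new Canvas(offscreen);
    }

    @Override
    public boolean onTouchEvent(MotionEvent e) {
        switch (e.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
                currentPath.moveTo(e.getX(), e.getY());
                break;
            case MotionEvent.ACTION_MOVE:
                currentPath.lineTo(e.getX(), e.getY());
                break;
            case MotionEvent.ACTION_UP:
                // Commit the finished stroke to the offscreen bitmap.
                offscreenCanvas.drawPath(currentPath, paint);
                currentPath.reset();
                break;
        }
        invalidate();
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(offscreen, 0, 0, null); // committed strokes
        canvas.drawPath(currentPath, paint);      // stroke in progress
    }
}
```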
How can that be done using libgdx?
I have tried using a com.badlogic.gdx.graphics.Pixmap that I draw a line to whenever the user moves a finger. This works well, except that I'm unable to control the width of the line using Gdx.gl.glLineWidth(). I know I could draw a rectangle instead of a line to get the width, but Pixmap doesn't seem to have any means of rotating, so I don't see how this can be done.
I can use a com.badlogic.gdx.graphics.glutils.ShapeRenderer to draw lines (or rectangles) in com.badlogic.gdx.Screen.render(). As far as I can see, I would then need to store every single touch point of the current touch and draw all the lines on each render. Whenever the user releases a finger, I guess I could store the screen as-is with something like com.badlogic.gdx.utils.ScreenUtils.getFrameBufferPixmap(). Hopefully there is an easier way to achieve what I want.
I ended up drawing circles on a pixmap where the line should be:
One circle on touchDown
Several circles from the last touch point to the next touch point reported to touchDragged
I'm not very happy with this solution, but it kinda works.
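The stamping looks roughly like this (a sketch of the touchDragged case; pixmap and radius come from my own setup):

```java
// Stamp overlapping circles between the last and the current touch point,
// so fast drags still produce a continuous stroke (libgdx Pixmap API).
void stampSegment(Pixmap pixmap, int x0, int y0, int x1, int y1, int radius) {
    float dx = x1 - x0, dy = y1 - y0;
    float dist = (float) Math.sqrt(dx * dx + dy * dy);
    int steps = Math.max(1, (int) (dist / (radius * 0.5f)));
    for (int i = 0; i <= steps; i++) {
        int x = Math.round(x0 + dx * i / steps);
        int y = Math.round(y0 + dy * i / steps);
        pixmap.fillCircle(x, y, radius);
    }
}
```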
Maybe you can calculate the line's corner coordinates manually and use triangles to draw it on the pixmap? Like using two triangles to form a (rotated) box, as in this sketch (untested, just the idea):
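```java
// Draw a thick line segment on a Pixmap as two filled triangles that
// together form a box rotated to match the segment's direction.
void drawThickLine(Pixmap pixmap, int x0, int y0, int x1, int y1, float width) {
    float dx = x1 - x0, dy = y1 - y0;
    float len = (float) Math.sqrt(dx * dx + dy * dy);
    if (len == 0f) return;
    // Unit normal to the segment, scaled to half the stroke width.
    float nx = -dy / len * (width / 2f);
    float ny =  dx / len * (width / 2f);
    int ax = Math.round(x0 + nx), ay = Math.round(y0 + ny);
    int bx = Math.round(x0 - nx), by = Math.round(y0 - ny);
    int cx = Math.round(x1 - nx), cy = Math.round(y1 - ny);
    int ex = Math.round(x1 + nx), ey = Math.round(y1 + ny);
    pixmap.fillTriangle(ax, ay, bx, by, cx, cy);  // one half of the box
    pixmap.fillTriangle(ax, ay, cx, cy, ex, ey);  // the other half
}
```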