Find width of an object from camera and sensors - android

I need to find the width of an object seen through the camera. I have read many posts here on SO, but none of them explain the width calculation; they only cover the distance and height of the object. Could someone please explain, theoretically and with trigonometric formulas, how to find the width of an object? I have only one input: the height of the camera above the ground.
Thanks in advance.

I am answering my own question to help others with the same problem.
To find the width, follow the steps below. It helps to picture the setup in 3D:
1) The user needs to enter the height of the point of observation (the camera's height above the ground) manually.
2) The object whose width is to be measured must be at the same ground level as the person standing.
3) Point the camera at the bottom of the object's start point from that height and capture the tilt angle.
4) Then, keeping the device at the same height, tilt it to the bottom of the object's end point and capture that angle.
5) From each angle and the camera's height above the ground, find the ground distance to each side using the tan rule:
tan(θ) = opposite / adjacent
tan(θ) = height_of_camera / distance, so distance = height_of_camera / tan(θ)
6) From the distances calculated in the previous step and the height of the camera, use the Pythagorean theorem to find the slant distances from the camera to the object's start and end points (imagine this in a 3D view).
7) Now that we have two sides and the angle between them, use the following formula (cosine rule) to get the third side, which is the width of the object (see the sketch after these steps):
Width = sqrt((s1*s1) + (s2*s2) − 2(s1)(s2)cos(θ))
where s1 and s2 are the two slant distances from step 6, and θ is the angle between the two lines of sight.
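A minimal Java sketch of steps 5-7, assuming the two tilt angles and the angle between the two lines of sight have already been captured (e.g. from the orientation sensors) and are given in radians; the names are illustrative:

static double objectWidth(double cameraHeight,
                          double tiltToStart, double tiltToEnd,
                          double angleBetweenSightings) {
    // Step 5: ground distance to each edge from the tan rule
    double d1 = cameraHeight / Math.tan(tiltToStart);
    double d2 = cameraHeight / Math.tan(tiltToEnd);
    // Step 6: slant distance from the camera to each edge (Pythagorean theorem)
    double s1 = Math.sqrt(cameraHeight * cameraHeight + d1 * d1);
    double s2 = Math.sqrt(cameraHeight * cameraHeight + d2 * d2);
    // Step 7: cosine rule on the triangle formed by the two lines of sight
    return Math.sqrt(s1 * s1 + s2 * s2
            - 2 * s1 * s2 * Math.cos(angleBetweenSightings));
}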

Related

Angle of view based on Ellipse

I need to calculate the angle of view from the camera to an object, preferably in a 180-degree plane (angles ranging from 0 to 180), giving me its direction.
So far I have tried using the ellipse angle, or computing one from the proportion of width to height multiplied by 90, but neither is accurate enough (basically I see both failure and success in the subsequent calculation at the same angle).
Attached are multiple views of a circle. The grid in the background (present only in the test) helps show the orientation. No such guidance is available in real-life conditions, but the radius of the object is known.
(I can't post the image due to insufficient reputation.)
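One geometric relation that may help here (a sketch, not from the original thread): a circle viewed at tilt angle t projects to an ellipse whose minor-to-major axis ratio equals cos(t), so the angle can be recovered with acos rather than a linear proportion. The axis lengths are assumed to come from whatever ellipse fit produced the "ellipse angle" above:

static double tiltAngleDegrees(double minorAxis, double majorAxis) {
    // cos(tilt) = minor / major for a circle viewed obliquely
    return Math.toDegrees(Math.acos(minorAxis / majorAxis));
}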

Android GestureOverlayView parameters

I started experimenting with custom gestures and the GestureOverlayView and noticed a few variables where I am not sure what they are for or what range of values can and should be assigned; the docs seem somewhat vague on these:
//Minimum curve angle a stroke must contain before it is recognized as a gesture.
android:gestureStrokeAngleThreshold
I assumed this is in degrees, and that with a value of "25" the gesture would have to contain a sharp edge, but it is still detected if I draw a circle or a perfect square.
//Minimum length of a stroke before it is recognized as a gesture.
android:gestureStrokeLengthThreshold
Is this in dp? It seems like the gesture is harder to trigger on smaller screens...
//Squareness threshold of a stroke before it is recognized as a gesture.
android:gestureStrokeSquarenessThreshold
What is this?
EDIT:
OK, I just realized that every prediction has a score value, which should be used to determine whether the performed gesture actually meets the requirements, so I added a check that the prediction's score is greater than 1.
Still I am curious what those variables in GestureOverlayView are doing, so enlighten me :)
gestureStrokeLengthThreshold is definitely not density-independent; it apparently uses pixels. If you want to set a density-independent threshold, you can calculate the gestureStrokeLengthThreshold at runtime, like this:
private float getGestureLengthThreshold() {
    DisplayMetrics metrics = getResources().getDisplayMetrics(); // android.util.DisplayMetrics
    float normalizedScreenSize = (metrics.heightPixels + metrics.widthPixels) / 2.0f;
    return normalizedScreenSize * GESTURE_LENGTH_THRESHOLD;
}
GESTURE_LENGTH_THRESHOLD is a value representing how long the gesture should be. A value of 1.0 roughly corresponds to the size of the screen (averaged from its width and height).
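Applied at runtime it might look like this (the view id R.id.gestures is a placeholder):

GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gestures);
overlay.setGestureStrokeLengthThreshold(getGestureLengthThreshold());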
Still I am interested in what those other variables in GestureOverlayView do, so if you know more - enlighten me :)

How is the scale function in a vector-based drawing app implemented?

I am trying to implement a hand-drawing app on Android (like Infinite Design)
and decided to use vectors because they can scale without distortion.
After thinking about it a lot, I am trying to use a viewport-horizon-world model:
Viewport: depends on the size of the phone (e.g. 1080x1920); it is the part you see and touch.
Horizon: the region of the world that is currently displayed in the viewport.
World: the real coordinates of the points that make up the paths (lines, Bezier curves, etc.).
The model works like this:
First, you touch the screen of the phone, and the horizon translates the point into real-world coordinates (accounting for any move or scale) and saves the value to the world.
Second, you can move and scale by gestures; this changes the attributes of the horizon. For example, if you move left 100 and down 100, the horizon knows its offset is now (100,100) and its bounds change from ((0,0),(1080,1920)) to ((100,100),(1180,2020)).
Last, when drawing, I find the paths contained in the horizon (by comparing the bounds of the horizon with the bounds of each path), then calculate the display coordinates based on the horizon and draw the path with canvas.draw() etc.
Now the problem: when I just offset the horizon, calculating the display coordinates only requires adding the offset values, but when scaling it becomes difficult. For example, with a path bounded by ((0,0),(100,100)) and the horizon scaled by 0.5 around the point (500,500), I don't know the new position of the bounds, nor how to calculate the anchor of the new path, its size, or its stroke width (maybe just multiply by the scale factor?).
The function I want to implement is like the viewport in SVG.
I think it should use coordinate mapping, but how?
Please give me some clue.
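A minimal sketch of such a coordinate mapping with android.graphics.Matrix, using the example's scale of 0.5 around (500,500) and offset (100,100); the canvas, path, paint, and touch variables are placeholders. Scaling by s around a pivot (px,py) maps x to (x - px) * s + px, which resolves the bounds question above:

// World -> screen: scale around the pivot, then apply the horizon offset.
Matrix worldToScreen = new Matrix();
worldToScreen.postScale(0.5f, 0.5f, 500f, 500f); // scale 0.5 around (500,500)
worldToScreen.postTranslate(100f, 100f);         // horizon offset from the example

// Drawing: let the canvas apply the transform to world-space paths.
canvas.concat(worldToScreen);
canvas.drawPath(path, paint);

// Touch handling: invert the transform to map screen points back to world.
Matrix screenToWorld = new Matrix();
worldToScreen.invert(screenToWorld);
float[] pt = { touchX, touchY };
screenToWorld.mapPoints(pt); // pt now holds world coordinates

Note that canvas.concat() also scales the stroke width for you; if you instead transform points yourself with mapPoints(), multiply the stroke width by the scale factor.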

camera: image projection

I'd like to project images onto a wall using the camera. Essentially, the images must scale according to the distance between the camera and the wall.
First, I calculated the distance using right-triangle trigonometry (visionHeight * Math.tan(a)). It's not 100% exact, but close to the real values.
Second, knowing the distance, we can try to figure out the full panorama height using isosceles-triangle trigonometry; splitting the triangle into two right triangles gives height = 2 * distance * tan(A / 2), where
A = mCamera.getParameters().getVerticalViewAngle();
The results are about 30% greater than the actual object height, which is kind of weird.
double panoramaHeight = 2 * distance * Math.tan(Math.toRadians(mCamera.getParameters().getVerticalViewAngle() / 2));
I've also tried figuring out the angles using the same isosceles-triangle formula, now knowing the distance and the height: I got angles of 28 and 48 degrees.
Does this mean that the Android camera doesn't render everything it captures? And what other solutions can you suggest?
Web search shows that the values returned by getVerticalViewAngle() cannot be blindly trusted on all devices; also note that you should take the zoom level and aspect ratio into account, see Determine angle of view of smartphone camera.
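Given that, one workaround (a sketch, not a library API) is to calibrate the effective vertical view angle yourself from an object of known height at a known distance, by inverting the height formula above:

static double effectiveVerticalViewAngle(double objectHeight, double distanceToObject) {
    // Invert height = 2 * distance * tan(A / 2) to solve for A, in degrees.
    return Math.toDegrees(2 * Math.atan(objectHeight / (2 * distanceToObject)));
}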

In OpenGL, how can I get the relation between pixels and gl.glTranslatef(float x, y, z)?

I am trying to learn OpenGL on Android. With the gl.glTranslatef(x, y, z) call, I am shifting my texture by some units in the positive x direction, but I am unable to find out how many pixels 1 unit of x corresponds to.
Here is what I am doing:
I call gl.glViewport(0, 0, width, height); // This sets my rectangle with (0,0) as the lower-left corner, extended to the given width and height.
Then
I call gl.glFrustumf(-5, 5, -7, 7, 3, 7); // I am a little confused about how this call uses the dimensions I set in gl.glViewport.
How do the -5 to 5 units from left to right in the above call translate to pixels on the Android screen?
I mean, if width = 320 and height = 533 pixels, how many pixels will be occupied on the screen as a result of the gl.glFrustumf call?
I have experimented with the gl.glTranslatef call, specifying an x shift of 5.0, but it does not move the bitmap to the right or left edge of the screen; when I increase it to 6, part of it is still visible on the screen.
Thanks
Siddhesh
In short, I am looking for the maximum number of units (along x) that corresponds to the extreme edges of my Android phone's screen.
glViewport tells it what rectangle (in pixels) your OpenGL output should be displayed in.
glFrustum tells it what coordinates in your "world" units should be mapped to that viewport.
An important point: your glFrustum call includes not only a height and width, but also a depth. Since you are specifying a frustum, not a cube, anything with a z coordinate anywhere but at the very front of your frustum will be scaled down appropriately for its distance from the viewer.
As such, when you do a glTranslatef, the distance by which a particular object moves (in terms of pixels) depends on its distance from the viewer: the further away it is, the fewer pixels a given sideways or up/down shift translates to.
Depending on what else you're doing, one easy way to deal with this might be to use glOrtho instead of glFrustum. glOrtho gives orthographic mode, which means no perspective scaling is done, so a given X or Y distance will translate to the same number of pixels, regardless of distance from the viewer.
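To put numbers on the question's example (a sketch, assuming the geometry is drawn at the near plane):

// Android GL10 sketch of the calls from the question:
gl.glViewport(0, 0, 320, 533);            // output rectangle, in pixels
gl.glFrustumf(-5f, 5f, -7f, 7f, 3f, 7f);  // left, right, bottom, top, near, far

// At depth z, the visible half-width is 5 * z / 3 world units:
//   z = 3 (near plane): x in [-5, 5]         -> 320 px / 10 units  = 32 px per unit
//   z = 7 (far plane):  x in [-11.67, 11.67] -> 320 px / 23.3 units ~ 13.7 px per unit
// So a glTranslatef of 5 units at the near plane moves an object half the
// screen width (160 px), while the same shift farther back moves fewer pixels,
// which is why a shift of 6 can still leave part of the bitmap visible.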
