I'm trying to determine the degree size of the field-of-view of a Droid Incredible smartphone's camera. I need to know this value for an application that I'm developing. Does anyone know how I can find out/calculate it programmatically?
The Camera.Parameters getHorizontalViewAngle() and getVerticalViewAngle() functions provide you with the base view angles. I say "base", because these apply only to the Camera itself in an unzoomed state, and the values returned by these functions do not change even when the view angle itself does.
Camera.Parameters p = camera.getParameters();
double thetaV = Math.toRadians(p.getVerticalViewAngle());
double thetaH = Math.toRadians(p.getHorizontalViewAngle());
Two things cause your "effective" view angle to change: zoom, and using a preview aspect ratio that does not match the camera aspect ratio.
Basic Math
The trigonometry of field-of-view (Θ) is fairly simple:
tan(Θ/2) = x / (2z)
x = 2z tan(Θ/2)
x is the linear distance viewable at distance z; i.e., if you held up a ruler at distance z=1 meter, you would be able to see x meters of that ruler.
For instance, on my camera the horizontal field of view is 52.68° while the vertical field of view is 40.74°. Convert these to radians and plug them into the formula with an arbitrary z value of 100 m, and you get x values of 99.0 m (horizontal) and 74.2 m (vertical). This is a 4:3 aspect ratio.
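For reference, a minimal sketch of that calculation in Java (the 52.68°/40.74° angles are just the example values above):
// Linear extent x viewable at distance z for a field of view of `degrees`.
static double viewableAt(double degrees, double z) {
    return 2d * z * Math.tan(Math.toRadians(degrees) / 2d);
}
double xH = viewableAt(52.68, 100); // ~99.0 m (horizontal)
double xV = viewableAt(40.74, 100); // ~74.2 m (vertical)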
Zoom
Applying this math to zoom levels is only slightly harder. Now x remains constant and it is z that changes in a known ratio; we must determine the new angle Θ'.
tan (Θ/2) = x / (2z)
tan (Θ'/2) = x / (2z')
Θ' = 2 atan((z / z') tan(Θ/2))
Where z is the base zoom level (100), z' is the current zoom level (from Camera.Parameters.getZoomRatios()), Θ is the base horizontal/vertical field of view, and Θ' is the effective field of view. Adding on the degree-to-radian conversions makes this rather verbose.
private static double zoomAngle(double degrees, int zoom) {
    double theta = Math.toRadians(degrees);
    return 2d * Math.atan(100d * Math.tan(theta / 2d) / zoom);
}
Camera.Parameters p = camera.getParameters();
int zoom = p.getZoomRatios().get(p.getZoom()).intValue();
double thetaH = zoomAngle(p.getHorizontalViewAngle(), zoom);
double thetaV = zoomAngle(p.getVerticalViewAngle(), zoom);
Aspect Ratio
While the typical camera is a 4:3 aspect ratio, the preview may also be available in 5:3 and 16:9 ratios and this seems to be accomplished by actually extending the horizontal field of view. This appears to be undocumented, hence unreliable, but by assuming that's how it works we can calculate the field of view.
The math is similar to the zoom calculations; however, in this case z remains constant and it is x that changes. By assuming that the vertical view angle remains unchanged while the horizontal view angle is varied as the aspect ratio changes, it's possible to calculate the new effective horizontal view angle.
tan(Θ/2) = v / (2z)
tan(Θ'/2) = h / (2z)
2z = v / tan(Θ/2)
Θ' = 2 atan((h/v) tan(Θ/2))
Here h/v is the aspect ratio and Θ is the base vertical field of view, while Θ' is the effective horizontal field of view.
Camera.Parameters p = camera.getParameters();
int zoom = p.getZoomRatios().get(p.getZoom()).intValue();
Camera.Size sz = p.getPreviewSize();
double aspect = (double) sz.width / (double) sz.height;
double thetaV = Math.toRadians(p.getVerticalViewAngle());
double thetaH = 2d * Math.atan(aspect * Math.tan(thetaV / 2));
thetaV = 2d * Math.atan(100d * Math.tan(thetaV / 2d) / zoom);
thetaH = 2d * Math.atan(100d * Math.tan(thetaH / 2d) / zoom);
As I said above, since this appears to be undocumented, it is simply a guess that it will apply to all devices; it should be considered a hack. The correct solution would be for the API to provide a separate pair of functions, getCurrentHorizontalViewAngle() and getCurrentVerticalViewAngle().
Unless there's some API call for that (I'm not an Android programmer, I wouldn't know), I would just snap a picture of a ruler from a known distance away, see how much of the ruler is shown in the picture, then use trigonometry to find the angle like this:
Now you have the two distances l and d from the figure. With some simple trigonometry, one can obtain:
tan(α/2) = (l/2) / d,
hence
α = 2 * atan(l / (2d))
So with this formula you can calculate the horizontal field of view of your camera. Measuring the vertical field of view works exactly the same way, except that you hold the ruler vertically.
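For instance, a minimal helper for that calculation (the 1.0 m visible length and 2.0 m distance below are made-up example measurements):
// Field of view in degrees from a measured visible length at a known distance.
static double measuredFov(double visibleLength, double distance) {
    return Math.toDegrees(2d * Math.atan(visibleLength / (2d * distance)));
}
double horizontalFov = measuredFov(1.0, 2.0); // e.g. 1.0 m of ruler visible from 2.0 m away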
Then you can hard-code it as a constant in your program. (A named constant, of course, so it'd be easy to change :-p)
I have a Droid Incredible as well. Android 2.2 introduced the functions you are looking for. In my code, I have:
public double getHVA() {
    return camera.getParameters().getHorizontalViewAngle();
}

public double getVVA() {
    return camera.getParameters().getVerticalViewAngle();
}
However, these require that you have the camera open. I'd be interested to know if there is a "best practices" way to not have to open the camera each time to get those values.
@David Zaslavsky - how? What is the mathematical relationship between the zoom levels? I can't find it anywhere (I asked in this question: What do the Android camera zoom numbers mathematically represent?)
Related
I want to find a way to get the dimensions of an object using the camera. It sounds like a duplicate of
How to measure height, width and distance of object using camera?
But the solution doesn't help me out. From the above link I did get some idea of how to find the distance (Measure Distance).
Can somebody suggest how I'm supposed to get the width as well as the height of an object? Simple math or any idea would be really helpful.
Is it possible to achieve this using OpenCV?
Measure Height and Width
What I have tried so far:
Assuming a fixed distance, we can calculate the angle of elevation:
tan(α/2) = (l/2) / d,
hence
α = 2 * atan(l / (2d))
But we still don't know the value of l (the length of the object).
Another way to find the view angle:
double thetaV = Math.toRadians(camera.getParameters().getVerticalViewAngle());
double thetaH = Math.toRadians(camera.getParameters().getHorizontalViewAngle());
It doesn't seem to work!
The actual physics of a lens are explained for example on this website of Georgia State University.
See this illustration which explains how you can use either the linear magnification or focal length relations to find out object size from image size:
In particular, -i / h' = o / h, and this relation holds true for all similar triangles (that is, an object of size 2h at distance 2o has the same size h' on the picture). So as you can see, even with the full equation you can't know both the distance o and the size h of an object; however, one will give you the other.
On the other hand, two objects at the same distance o will see their sizes h1' and h2' on the image be proportional to their sizes in real life h1 and h2, since h1' / h1 = M = h2' / h2.
Hence if you know both o and h for one object, you know M; thus, knowing an object's size on film, you can deduce its size from its distance and vice versa.
The -i / h' value is naturally expressed for the maximal h'. If an object exactly fills the image, it fills the field of view, and the ratio of its size to its distance is tan(α/2) = (l / 2) / d (note that in the conventions of the image below, d = o and l = 2 * h).
This α is what you name theta in your example. Now, from the image size you can get under what angle you see the image -- that is, what size l would the image be if it were at distance d. From there, you can deduce the size of the object from its distance and vice versa.
Algorithm steps:
get ratio r = size of object in image (in px) / total size of image (in px).
Do this along the axis for which you know or plan to get the real object size, of course.
get the corresponding view angle and multiply r by the tangent of half that angle:
r *= Math.tan(Math.toRadians(camera.getParameters().getXXXXViewAngle()) / 2)
r is now the tangent of the half-angle under which you see the object, hence the following relations are true: r = (l / 2) / d = h / o (with the respective drawing's notations).
If you know the distance d to the object, its size is l = 2 * r * d
If you know the size l of the object, it is at distance is d = l / (2 * r)
This works for objects that are actually pointed at by the camera; if they aren't centred, the maths may be off.
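A minimal sketch of these steps in Java (the pixel counts, distance, and real size below are made-up example values, and camera is assumed to be an open android.hardware.Camera):
// Hypothetical measurement: the object spans 400 px of a 1920 px wide image.
double r = 400d / 1920d;
// Tangent of the half-angle under which the object is seen.
double halfAngle = Math.toRadians(camera.getParameters().getHorizontalViewAngle()) / 2d;
r *= Math.tan(halfAngle);
// If the distance to the object is known (say 3.0 m), its size is:
double size = 2d * r * 3.0;
// Conversely, if its real size is known (say 0.5 m), its distance is:
double dist = 0.5 / (2d * r);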
I'm hoping someone can help me out. I'm making an image manipulation app, and I found I needed a better way to load in large images.
My plan is to iterate through the "hypothetical" pixels of an image (a for loop over the width/height of the base image, so each iteration represents a pixel), scale/translate/rotate each pixel's position relative to the view, use this information to determine which pixels are being displayed in the view itself, and then use a combination of BitmapRegionDecoder and BitmapFactory.Options to load in only the section of the image that the output actually needs rather than the full (even if scaled) image.
So far I seem to have covered scale and translation of the image properly, but I can't figure out how to calculate rotation. Since it's not a real Bitmap pixel I can't use Matrix.rotate. Here are the image transformations in the onDraw of the view; imgPosX and imgPosY hold the center point of the image:
m.setTranslate(-userImage.getWidth() / 2.0f, -userImage.getHeight() / 2.0f);
m.postScale(curScale, curScale);
m.postRotate(angle);
m.postTranslate(imgPosX, imgPosY);
mCanvas.drawBitmap(userImage.get(), m, paint);
and here is the math so far for how I'm trying to determine whether an image's pixel is on the screen:
for (int j = 0; j < imageHeight; j++) {
    for (int i = 0; i < imageWidth; i++) {
        // image starts completely centered in the view; assume image is original size for simplicity
        // this is the original starting position for each pixel
        int x = Math.round(((float) viewSizeWidth / 2.0f) - ((float) newImageWidth / 2.0f) + i);
        int y = Math.round(((float) viewSizeHeight / 2.0f) - ((float) newImageHeight / 2.0f) + j);

        // first we scale the pixel here, easy operation
        x = Math.round(x * imageScale);
        y = Math.round(y * imageScale);

        // now we translate; we do this by determining how many pixels
        // our image's x/y coordinates have differed from its original
        // starting point (imgPosX and imgPosY start at the center of the view)
        x = x + Math.round(imgPosX - ((float) viewSizeWidth / 2.0f));
        y = y + Math.round(imgPosY - ((float) viewSizeHeight / 2.0f));

        // TODO need rotation here
    }
}
So, assuming my math up until rotation is correct (probably not, but it appears to be working so far), how would I then calculate the rotation from that pixel's position? I've tried other similar questions like:
Link 1
Link 2
Link 3
Without using rotation, the pixels I expect to actually be on the screen are represented (I made a text file that outputs the results in 1's and 0's so I can have a visual representation of what's on the screen), but with the formula found in those questions the information isn't what I expect. (Scenario: I've rotated an image so only the top left corner is visible in the view. Using the info from Here to rotate the pixel, I should expect to see a triangular set of 1's in the upper left corner of the output file, but that's not the case.)
So how would I calculate a pixel's position after rotation without using the Android Matrix, but still get the same results?
And if I've just messed it up entirely my apologies =( Any help would be appreciated, this project has gone on for so long and I want to finally be done lol
If you need any more information I will provide as much as I possibly can =) Thank you for your time
I realize this question is particularly difficult so I will be posting a bounty as soon as SO allows.
You do not need to create your own Matrix, use the existing one.
http://developer.android.com/reference/android/graphics/Matrix.html
You can map bitmap coordinates to screen coordinates by using
float[] coords = {x, y};
m.mapPoints(coords);
float sx = coords[0];
float sy = coords[1];
If you want to map screen to bitmap coordinates, you can create the inverse matrix:
Matrix inverse = new Matrix();
m.invert(inverse);
inverse.mapPoints(...)
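For the original goal of feeding BitmapRegionDecoder, one way to use that inverse (a sketch; m is assumed to be the same matrix built in onDraw, and viewW/viewH and bitmapWidth/bitmapHeight are the view and bitmap dimensions) is to map the view rectangle back into bitmap space:
// Map the visible view rectangle into bitmap coordinates.
RectF visible = new RectF(0, 0, viewW, viewH);
inverse.mapRect(visible); // bounding box of the (possibly rotated) view in bitmap space
// Clamp to the bitmap bounds and round outward to whole pixels.
Rect region = new Rect();
visible.roundOut(region);
region.intersect(0, 0, bitmapWidth, bitmapHeight);
// region can now be passed to BitmapRegionDecoder.decodeRegion(...).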
I think your overall approach is going to be slow, as doing the pixel manipulation on the CPU from Java has a lot of overhead. When drawing bitmaps normally, the pixel manipulation is done on the GPU.
I have a game that I made at 480x320 resolution (I set it in the build settings) in Unity, but I would like to publish it for every Android device at every resolution. How can I tell Unity to scale my game up to the device's resolution? Is that possible?
Thanks in advance!
The answer to your question largely depends on how you've implemented the game. If you've created it using GUI textures, then it largely depends on how you've placed/sized your objects versus screen size, which makes things a little tricky.
If the majority of your game is done using objects (such as planes, cubes, etc.), then there are two methods I usually choose between.
1) The first method is very easy to implement, though it doesn't always look too good. You can simply change the camera's aspect ratio to match the one you've designed your game around. So in your case, since you've designed your game at 4:3, you'd do something like this:
Camera.main.aspect = 4f / 3f;
However, if someone's playing on a screen meant for 16:9, the game will end up looking distorted and stretched.
2) The second method isn't as easy, requiring quite a bit of work and calculations, but will give a much cleaner-looking result. If you're using an orthographic camera, one important thing to keep in mind is that regardless of the screen resolution, the orthographic camera keeps the height fixed and only changes the width. For example, with an orthographic size of 10, the view's height will be 20 world units. With this in mind, you'd need to compensate for the widest possible camera within each level (for example, have a wide background), or dynamically change the camera's orthographic size until its width matches what you've designed for.
If you've done a 3D game with a perspective camera, screen resolution shouldn't really affect how it looks, but I guess that depends on the game, so more info would be required.
The way I did it is to change the camera viewport according to the device's aspect ratio.
Say you made the game for 800x1280.
Then you can do this in any one of your scripts:
float xFactor = Screen.width / 800f;
float yFactor = Screen.height / 1280f;
Camera.main.rect = new Rect(0, 0, 1, xFactor / yFactor);
and this works like magic
An easy way to do this is to consider your target; for example, if you're making a game for the iPhone 5, then the aspect ratio is 9:16 (portrait) or 16:9 (landscape).
public float targetRatio = 9f / 16f; // the aspect ratio you designed the game for

void Start()
{
    Camera cam = GetComponent<Camera>();
    cam.aspect = targetRatio;
}
Here is my script for scaling the orthographic camera in 2D games:
public float screenHeight = 1920f;
public float screenWidth = 1080f;
public float targetAspect = 9f / 16f;
public float orthographicSize;

private Camera mainCamera;

// Use this for initialization
void Start () {
    // Initialize variables
    mainCamera = Camera.main;
    orthographicSize = mainCamera.orthographicSize;

    // Calculate the orthographic width
    float orthoWidth = orthographicSize / screenHeight * screenWidth;

    // Adjust for the aspect ratio
    orthoWidth = orthoWidth / (targetAspect / mainCamera.aspect);

    // Set the size
    Camera.main.orthographicSize = (orthoWidth / Screen.width * Screen.height);
}
Assuming it's 2D instead of 3D, this is what I do:
Create a Canvas object
Set the Canvas Scaler to Scale with Screen Size
Set the Reference Resolution to for example: 480x320
Set the Screen Match Mode to match width or height
Set the match to 1 if your current screen width is smaller (0 if height is smaller)
Create an Image as background inside the Canvas
Add Aspect Ratio Fitter script
Set the Aspect Mode to Fit in Parent (so the UI anchor can be anywhere)
Set the Aspect Ratio to 480/320 = 1.5
And add this snippet in the main Canvas' Awake method:
var canvasScaler = GetComponent<CanvasScaler>();
var ratio = Screen.height / (float) Screen.width;
var rr = canvasScaler.referenceResolution;
canvasScaler.matchWidthOrHeight = (ratio < rr.x / rr.y) ? 1 : 0;
// Make sure to add using UnityEngine.UI; at the top of your script!
For 3D objects you can use any of the answers above
The best solution for me is to use the theorem of intersecting lines, so that there is neither a cut-off on the sides nor a distortion of the game view. That means you have to move the camera back or forward depending on the aspect ratio.
If you like, I have an asset on the Unity asset store which automatically corrects the camera distance so you never have a distortion or a cut off no matter which handheld device you are using.
I am trying to build a game and was wondering how would one go about supporting different resolution and screen sizes. For position of sprite I've implemented a basic function which sets the position according to a certain ratio, which I get by getting the screen width and height from sharedDirector's winSize method.
But this approach is untested, as I have yet to work out how to calculate the scaling factor for sprites depending on the resolution of the device. Can somebody suggest a method for correctly calculating the scaling of sprites, and a way to avoid pixelation of sprites if I do apply such scaling?
I searched on Google and found that Cocos2d-x supports different resolutions and sizes but I am bound to use Cocos2d only.
EDIT: I am a bit confused as this is my first game. Please point out any mistakes that I may have made.
Okay, I finally did this by getting the device display density:
getResources().getDisplayMetrics().densityDpi
and based on it, multiplying my PTM ratio by 0.75, 1.0, 1.5, or 2.0 for ldpi, mdpi, hdpi, and xhdpi respectively.
I am also changing the scale of the sprites accordingly. For positioning, I've kept 320x480 as my base and multiply each coordinate by the ratio of the current resolution in pixels to the base resolution.
EDIT: Adding some code for better understanding:
public class MainLayer extends CCLayer {

    CGSize size; // this is where we hold the size of the current display
    float scaleX, scaleY; // these are the ratios that we need to compute

    public MainLayer() {
        size = CCDirector.sharedDirector().winSize();
        scaleX = size.width / 480f; // assuming all assets are made for a 320x480 (landscape) resolution
        scaleY = size.height / 320f;

        CCSprite somesprite = CCSprite.sprite("some.png");

        // if you want to set the scale without maintaining the sprite's aspect ratio
        somesprite.setScaleX(scaleX);
        somesprite.setScaleY(scaleY);

        // to set a position that is the same for every resolution
        somesprite.setPosition(80f * scaleX, 250f * scaleY); // these positions are relative to 320x480

        // if you want to maintain the sprite's aspect ratio, scale like this instead
        somesprite.setScale(aspect_Scale(somesprite, scaleX, scaleY));
    }

    public float aspect_Scale(CCSprite sprite, float scaleX, float scaleY) {
        float sourcewidth = sprite.getContentSize().width;
        float sourceheight = sprite.getContentSize().height;
        float targetwidth = sourcewidth * scaleX;
        float targetheight = sourceheight * scaleY;
        float scalex = targetwidth / sourcewidth;
        float scaley = targetheight / sourceheight;
        return Math.min(scalex, scaley);
    }
}
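The density-to-multiplier mapping mentioned above could look something like this (a sketch; BASE_PTM_RATIO is a hypothetical constant of your own):
// Pick a scale multiplier from the device's display density bucket.
float densityMultiplier;
switch (getResources().getDisplayMetrics().densityDpi) {
    case DisplayMetrics.DENSITY_LOW:    densityMultiplier = 0.75f; break;
    case DisplayMetrics.DENSITY_MEDIUM: densityMultiplier = 1.0f;  break;
    case DisplayMetrics.DENSITY_HIGH:   densityMultiplier = 1.5f;  break;
    default:                            densityMultiplier = 2.0f;  break; // xhdpi and above
}
float ptmRatio = BASE_PTM_RATIO * densityMultiplier;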
I'd like to project images onto a wall using the camera. The images, essentially, must scale according to the distance between the camera and the wall.
Firstly, I calculated the distance using right-triangle trigonometry (visionHeight * Math.tan(a)). It's not 100% exact, but close to the real values.
Secondly, knowing the distance, we can try to figure out the whole panorama height using the isosceles-triangle trigonometry formula: c = a * tan(A);
A = mCamera.getParameters().getVerticalViewAngle();
The results are about 30% greater than the actual object height and it's kinda weird.
double panoramaHeight = (distance * Math.tan(Math.toRadians(mCamera.getParameters().getVerticalViewAngle()) / 2)) * 2;
I've also tried figuring out those angles using the same isosceles-triangle formula, but now knowing the distance and the height. I got angles of 28 and 48 degrees.
Does it mean that the Android camera doesn't render everything it captures? And what other solutions can you suggest?
A web search shows that the values returned by getVerticalViewAngle() cannot be blindly trusted on all devices; also note that you should take into account the zoom level and aspect ratio; see Determine angle of view of smartphone camera.
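Putting that together, a sketch that reuses the zoomAngle() helper from the first answer above to get a zoom-corrected vertical angle (distance is assumed to have been measured already):
Camera.Parameters p = mCamera.getParameters();
int zoom = p.getZoomRatios().get(p.getZoom()).intValue();
// Effective vertical view angle (in radians), corrected for the current zoom level.
double thetaV = zoomAngle(p.getVerticalViewAngle(), zoom);
// Height of the wall area covered by the frame at the measured distance.
double panoramaHeight = 2d * distance * Math.tan(thetaV / 2d);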