I'm developing an Android "radar" that will show GPS coordinates on the screen, so I have to convert latitude and longitude to the pixel coordinates (x, y) of the device screen. My idea was to treat the center of the screen as (0, 0) and transform each (lat, lng) into screen pixels (x, y).
The problem is that I don't know how to implement this. I think the idea would work, but I don't know how the screen and its pixel coordinate system behave.
Any tips, please?
Thanks
Android supports many different screen sizes and orientations. Because of this, it is highly inadvisable to ever rely on absolute pixel values.
The best way to accomplish what you are trying to do is likely through the OpenGL ES API.
If all you want to do is get the "center" of the display, you can use the following:
// From inside an Activity; getDefaultDisplay() must be called on a WindowManager instance
int centX = getWindowManager().getDefaultDisplay().getWidth() / 2;
int centY = getWindowManager().getDefaultDisplay().getHeight() / 2;
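For the lat/lng-to-pixel step itself, here is a minimal sketch (my own addition, not part of the answer above; the class, method, and the meters-per-pixel scale are assumptions). For the small ranges a radar covers, an equirectangular approximation works: treat small lat/lng differences as distances in meters, then scale those to pixels around the screen center.

```java
public class RadarProjection {
    static final double METERS_PER_DEG_LAT = 111_320.0; // rough global average

    // Convert a (lat, lng) point to screen pixels, with the radar center
    // at (centerX, centerY) and a chosen scale in meters per pixel.
    public static double[] toScreen(double lat, double lng,
                                    double centerLat, double centerLng,
                                    double centerX, double centerY,
                                    double metersPerPixel) {
        // Equirectangular approximation: fine for small distances.
        double dxMeters = (lng - centerLng) * METERS_PER_DEG_LAT
                          * Math.cos(Math.toRadians(centerLat));
        double dyMeters = (lat - centerLat) * METERS_PER_DEG_LAT;
        // Screen y grows downward, so north (positive dy) moves up (negative y).
        return new double[] {
            centerX + dxMeters / metersPerPixel,
            centerY - dyMeters / metersPerPixel
        };
    }

    public static void main(String[] args) {
        // A point ~111 m north of the center, at 10 m/px, on a 400x300 screen.
        double[] p = toScreen(0.001, 0.0, 0.0, 0.0, 200, 150, 10.0);
        System.out.printf("x=%.1f y=%.1f%n", p[0], p[1]);
    }
}
```

The centX/centY values above would be passed in as centerX/centerY; only the metersPerPixel scale (the radar's "zoom") is left for you to pick.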
Hope this helps!
I would like to convert 2D screen coordinates to 3D world coordinates.
I put together a hack, but I would like a solid solution. If possible, please apply your equation/algorithm to the example below (if there is already a link, could you use it to show the solution to the problem below).
My environment is Java/C++ for Android and iOS, using OpenGL ES 2.0.
Problem to Solve
Screen: (0,0) is the top-left corner and the bottom-right corner is (screenWidth, screenHeight).
screenWidth=667; screenHeight=375;
//Camera's position, the point it looks at, and its up vector
CamPos (0.0f,0.0f,11.0f); CamLookAt (0.0f, 0.0f, 10.5f); CamDir(0.0f,1.0f,0.0f);
//The Frustum
left =1.3f; right=1.3f; bottom=1.0f; top=1.0f; near=3.0f; far=1000.0f;
Obj
//object's position in 3d space x,y,z and scaling
objPos(1.0f, -.5f, 5.0f); objScale(.1f,.1f,0.0f);
The problem is how to convert the (600, 200) screen coordinates (with scaling of (.1f, .1f, 0.0f)) to the object's (1.0f, -.5f, 5.0f) world coordinates, to check whether they are colliding (I am doing simple box collision; I just need the 2D screen coordinates converted to world). I will put my hack below, but I am sure there is a better answer. Thank you in advance for any help. Please use the numbers above to show how your algorithm works.
Hack
//this is 0,0 in screen coordinates//-Cam_Offset =hardcoded to camera's pos
float convertedZeroY= (((mCam->right)*(Cam_Offset))); //2
float convertedZeroX= -(((Cam_Offset)+convertedZeroY));
convertedZeroY=convertedZeroY/2;
convertedZeroX=convertedZeroX/2;
//this is the length from top to bottom in game coordinates
float normX=convertedZeroX*2;
float normY=convertedZeroY*2;
//this is the screen coordinates converted to 3d world coordinates
mInput->mUserInputConverted->mPos.x=(-(((mInput->x)/mCam->mScreenW)*normX)+convertedZeroX);
mInput->mUserInputConverted->mPos.y=(-(((mInput->y)/mCam->mScreenH)*(normY))+convertedZeroY);
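In place of the hack, one standard route (a sketch of my own, not the asker's code) is to undo the projection by hand: convert the pixel to normalized device coordinates, then scale by the frustum's half-extents at the object's depth. This assumes the symmetric frustum and camera given above (camera at z = 11 looking down the negative z axis, near = 3, right = 1.3, top = 1.0, object plane at z = 5):

```java
public class ScreenToWorld {
    // Convert a screen point to world x/y on the plane z = planeZ, for a
    // camera at camZ looking down -z with a symmetric frustum
    // (right/top are the half-extents of the near plane).
    public static double[] toWorld(double screenX, double screenY,
                                   double screenW, double screenH,
                                   double right, double top, double near,
                                   double camZ, double planeZ) {
        // Normalized device coordinates in [-1, 1]; screen y points down.
        double ndcX = 2.0 * screenX / screenW - 1.0;
        double ndcY = 1.0 - 2.0 * screenY / screenH;
        // Frustum half-extents grow linearly with distance from the camera.
        double dist  = camZ - planeZ;          // 11 - 5 = 6
        double halfW = right * dist / near;    // 1.3 * 6 / 3 = 2.6
        double halfH = top * dist / near;      // 1.0 * 6 / 3 = 2.0
        return new double[] { ndcX * halfW, ndcY * halfH };
    }

    public static void main(String[] args) {
        // The question's numbers: (600, 200) on a 667x375 screen.
        double[] w = toWorld(600, 200, 667, 375, 1.3f, 1.0f, 3.0f, 11.0f, 5.0f);
        System.out.printf("world x=%.3f y=%.3f%n", w[0], w[1]);
    }
}
```

With these numbers (600, 200) lands at roughly (2.08, -0.13) in world space, which you can then box-test against the object at (1.0, -0.5) with its 0.1 scale; in this example they would not collide.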
I have created an Android game using a canvas. When testing, I found that the speed and distance of movements such as flying up or falling down are set right on a phone with a resolution of 1920x1080 (401 ppi), but on a smaller phone with a resolution of 480x854 (196 ppi) my sprites move a lot quicker, which is affecting the gameplay. E.g. the main character sprite jumps a lot higher than I want it to.
Is there any way of keeping the speed and distance the same across all device sizes and types?
Here is some code on how I have implemented the movement:
A sprite class.
// class variables
private int GRAVITY_LIMIT = -30;
public int gravity = 0;

// gravity
if (gravity > GRAVITY_LIMIT) {
    gravity = gravity - 2;
}

// fall
y = y - gravity;
Drawing the sprite
canvas.drawBitmap(bmp1, x, y, null);
When onTouch is triggered (Jumping)
bird.gravity=30;
You should base your movement on world coordinates. For example, set your world to be 10 meters x 10 meters, so that when you jump, you jump 1 m. You then need to map that world to screen pixels.
float worldHeight = 10f;
float worldToPixels = screenHeight/worldHeight;
y = bird.y * worldToPixels;
So, on a 500px height screen, you would jump 50px and on a 1000px height screen you would jump 100px.
Gravity and other forces need to be based on the world as well for it to work on all devices.
Lastly, if you're trying to make a game for multiple devices, it would be better to use a library like libGDX. There are lots of helpful classes like ViewPorts to make this easier.
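The mapping above can be sketched end to end as a small self-contained example (the 10 m world height is the value suggested earlier; the class and method names are mine):

```java
public class WorldToPixels {
    // Map a y position in world units (meters) to screen pixels.
    public static float worldYToPixels(float worldY, float worldHeight,
                                       float screenHeightPx) {
        float worldToPixels = screenHeightPx / worldHeight;
        return worldY * worldToPixels;
    }

    public static void main(String[] args) {
        // A 1 m jump on two different screens.
        System.out.println(worldYToPixels(1f, 10f, 500f));   // 50.0
        System.out.println(worldYToPixels(1f, 10f, 1000f));  // 100.0
    }
}
```

The same jump covers the same fraction of every screen, regardless of resolution.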
I have found another solution.
This one works well.
y= y-(gravity * game.getResources().getDisplayMetrics().density);
I'm developing a cross-platform mobile application that stores positions as geographic coordinates in a DB; each of these points represents the top-left corner of a graphical object.
The application contains a map control (RadMap from Telerik) that supports geographic coordinates, so you just add the rectangle and it draws it correctly.
Due to a licensing problem I cannot use the map control in the Android version (Google doesn't let us use the map control for business applications - https://developers.google.com/maps/faq#tos_commercial; for iOS I still haven't checked), so I will use an image as the background.
I've got the top-left and bottom-right geographic coordinates of the "map/image", and I've got to draw the graphical objects inside the area defined by those corners.
Could anyone help me figure out how to convert the geographic coordinates to screen coordinates?
Data example (X,Y):
GEO
Top left corner:
-0.00939846 -0.00504255
Bottom right corner:
0.009398461 -0.01281023
Points to draw:
-0.00464558,-0.00799298
-0.0046509,-0.00845432
-0.00386774,-0.00860988
-0.00344932,-0.00860452
SCREEN
Top left corner:
0 0
Bottom right corner:
? ?
Points to draw:
? ?
Screen size
1024 * 768
Thanks for the help,
Luis Pinho
I'm most fluent in iOS, so that's what my answer is in. I tried to use as many variables as possible so that you can manually or programmatically override them without changing much of the other code.
- (CGPoint)convertGeoPoint:(CGPoint)point toView:(UIView *)view {
    CGPoint geoTopLeft = CGPointMake(-0.00939846, -0.00504255);
    CGPoint geoBottomRight = CGPointMake(0.009398461, -0.01281023);
    CGFloat geoWidth = geoBottomRight.x - geoTopLeft.x;
    CGFloat geoHeight = geoBottomRight.y - geoTopLeft.y;

    // This is the block you would change to suit your needs
    CGPoint viewTopLeft = CGPointMake(view.frame.origin.x, view.frame.origin.y);
    CGFloat viewWidth = view.frame.size.width;
    CGFloat viewHeight = view.frame.size.height;

    return CGPointMake(viewTopLeft.x + (point.x - geoTopLeft.x) * viewWidth / geoWidth,
                       viewTopLeft.y + (point.y - geoTopLeft.y) * viewHeight / geoHeight);
}
The basic idea is that you need to convert the point into the new coordinate system. To do this, you need to divide by the appropriate geographic dimension and multiply by the appropriate view dimension.
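Since the Android version can't use the map control, the same conversion ports directly to Java. Here is a sketch using the question's corner data (the class and method names are mine):

```java
public class GeoToScreen {
    // Geographic bounds of the background image (from the question).
    static final double GEO_LEFT   = -0.00939846;
    static final double GEO_TOP    = -0.00504255;
    static final double GEO_RIGHT  =  0.009398461;
    static final double GEO_BOTTOM = -0.01281023;

    // Linearly interpolate a geo point into a viewW x viewH pixel area.
    public static double[] toScreen(double geoX, double geoY,
                                    double viewW, double viewH) {
        double geoWidth  = GEO_RIGHT - GEO_LEFT;    // spans left -> right
        double geoHeight = GEO_BOTTOM - GEO_TOP;    // negative: geo y decreases downward
        double x = (geoX - GEO_LEFT) / geoWidth  * viewW;
        double y = (geoY - GEO_TOP)  / geoHeight * viewH;
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // First "point to draw" on the 1024x768 screen.
        double[] p = toScreen(-0.00464558, -0.00799298, 1024, 768);
        System.out.printf("%.1f %.1f%n", p[0], p[1]);
    }
}
```

With these numbers the first point lands at roughly (259, 292) on the 1024x768 screen; because geoHeight is negative, points further "down" in geo space correctly map to larger pixel y values.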
I'd like to project images on a wall using the camera. The images must scale with the distance between the camera and the wall.
First, I calculated the distance using right-triangle trigonometry (visionHeight * Math.tan(a)). It's not 100% exact, but close to the real values.
Second, knowing the distance, we can try to figure out the full panorama height using the isosceles-triangle formula c = a * tan(A), where
A = mCamera.getParameters().getVerticalViewAngle();
The results are about 30% greater than the actual object height, which is kind of weird.
// 0.0174532925 = PI / 180, i.e. degrees to radians
double panoramaHeight = (distance * Math.tan(mCamera.getParameters().getVerticalViewAngle() / 2 * 0.0174532925)) * 2;
I've also tried figuring out the angles from the same isosceles-triangle formula, now knowing the distance and the height, and got angles of 28 and 48 degrees.
Does this mean the Android camera doesn't render everything it captures? What other solutions can you suggest?
Web search shows that the values returned by getVerticalViewAngle() cannot be blindly trusted on all devices; also note that you should take into account the zoom level and aspect ratio, see Determine angle of view of smartphone camera
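For reference, the geometry itself is just h = 2 * d * tan(fov / 2). Written with Math.toRadians instead of a magic constant, and with a hypothetical 40-degree angle standing in for getVerticalViewAngle() (a standalone sketch, not device code):

```java
public class PanoramaHeight {
    // Height of the visible wall area at a given distance, for a
    // vertical field of view given in degrees.
    public static double visibleHeight(double distanceMeters, double vFovDegrees) {
        return 2.0 * distanceMeters * Math.tan(Math.toRadians(vFovDegrees / 2.0));
    }

    public static void main(String[] args) {
        // e.g. 2 m from the wall with a 40-degree vertical view angle
        System.out.printf("%.3f m%n", visibleHeight(2.0, 40.0));
    }
}
```

If the device reports an inflated view angle (or the angle applies to a different aspect ratio or zoom level than the active preview), this result will be proportionally off, which would match the ~30% error observed.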
Assume I have an Android phone with a screen where max x = 400 and max y = 300.
I have a 5000 x 5000 px bitmap and I display a 400 x 300 part of it on screen, scrolling it around using touch events. I want to draw a smaller bitmap (30 x 30 px) onto that larger bitmap at location (460, 370), but I can't do it statically (as my game requires); I want to draw and undraw that smaller image according to the player's input or collision detection.
What I am confused about: suppose (50, 50) to (450, 350) of the larger image is currently displayed on the screen, and now, due to a requirement of the game at this moment, I have to draw that 30 x 30 bitmap at (460, 370). But this point exists in two systems: in the big bitmap's coordinate system it's (460, 370), but when it comes inside the visible area of the screen it will have values somewhere between (0, 0) and (400, 300), depending on its position on the screen as per the player's movements.
So how do I track that image, i.e. in which coordinate system, and how do I go about doing all this?
I know I've put this in a very confusing way, but that's the best way I can put it; I'd be really glad and grateful if somebody could help me with this.
Terminology:
Let's call the coordinates on the big map world coordinates
Let's call the coordinates on the screen view coordinates. You are talking to the drawing system (canvas?) in view coordinates.
Your currently displayed region of the bigger map is 100% defined by an offset, let's call this (offsetX, offsetY). Formally, this offset maps your world coordinates onto your view coordinates.
If you have the small image displayed at position (x, y) in world coordinates, all you need to do is subtract your offset to get to view coordinates, i.e.
viewX = worldX - offsetX;
viewY = worldY - offsetY;
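A minimal sketch of that subtraction plus a visibility test, using the question's numbers (the class and method names are mine):

```java
public class WorldView {
    static final int VIEW_W = 400, VIEW_H = 300;

    // Convert world coordinates (big bitmap) to view coordinates (screen).
    public static int[] worldToView(int worldX, int worldY,
                                    int offsetX, int offsetY) {
        return new int[] { worldX - offsetX, worldY - offsetY };
    }

    // Only draw the sprite when its w x h box intersects the visible region.
    public static boolean isVisible(int viewX, int viewY, int w, int h) {
        return viewX + w > 0 && viewX < VIEW_W
            && viewY + h > 0 && viewY < VIEW_H;
    }

    public static void main(String[] args) {
        // Screen currently shows world (50,50)..(450,350); sprite at (460,370).
        int[] v = worldToView(460, 370, 50, 50);   // -> (410, 320)
        System.out.println(v[0] + "," + v[1]
                           + " visible=" + isVisible(v[0], v[1], 30, 30));
    }
}
```

Track the sprite in world coordinates only, and convert to view coordinates at draw time; with the offset at (50, 50) the sprite at world (460, 370) maps to view (410, 320), which is off-screen, so it is simply not drawn that frame.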