Android - size of the screen that is currently displayed

I am a developer in Korea, and my English is weak, so please bear with me.
I am trying to draw an ocean map using OpenGL ES, with direct touch manipulation for panning plus a zoom in / zoom out function.
To compute the pan movement correctly, I need to know the width and height of the visible area, which differ according to the current zoom state.
Is there a way in Android to get the internal (world) coordinates of the top-left and bottom-right corners of the screen as it is currently projected?
Since I cannot explain this well in English, I can supplement the description with a picture.
Please help me; I am stuck.

I hope this will help you:
// Screen size in pixels; note these Display methods are deprecated since API 13
// in favor of Display.getSize(Point).
getWindow().getWindowManager().getDefaultDisplay().getWidth();
getWindow().getWindowManager().getDefaultDisplay().getHeight();
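Note that this only gives the screen size in pixels, not the projected world coordinates the question asks about. For those, here is a minimal sketch using android.opengl.GLU.gluUnProject; it assumes OpenGL ES 1.1 (GL11), where the current matrices can be read back with glGetFloatv (on ES 1.0 you would have to track them yourself):

import android.opengl.GLU;
import javax.microedition.khronos.opengles.GL11;

public final class ScreenToWorld {
    // Maps a screen point on the near plane back to world coordinates.
    // winX/winY are Android touch coordinates (origin at the top-left).
    public static float[] unproject(GL11 gl, float winX, float winY,
                                    int screenW, int screenH) {
        float[] modelview  = new float[16];
        float[] projection = new float[16];
        int[]   viewport   = {0, 0, screenW, screenH};
        float[] result     = new float[4];

        gl.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, modelview, 0);
        gl.glGetFloatv(GL11.GL_PROJECTION_MATRIX, projection, 0);

        // GL window coordinates have their origin at the bottom-left, so flip Y.
        GLU.gluUnProject(winX, screenH - winY, 0f,
                         modelview, 0, projection, 0, viewport, 0,
                         result, 0);

        // Android's gluUnProject does not divide by w, so do it here.
        float w = result[3];
        return new float[] { result[0] / w, result[1] / w, result[2] / w };
    }
}

Calling unproject(gl, 0, 0, w, h) and unproject(gl, w, h, w, h) then yields the top-left and bottom-right world coordinates at the current zoom.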

Related

Transform Latitude,Longitude-Position on screen in augmented reality app

This is my first post on this forum and I'm very new to programming. I want to build an application where I can see exactly where some GPS values are on my phone's screen. I know a lot of applications, like junaio, mixare and others, but they only show the direction to the objects and are not very accurate (they don't aim to project them at the exact position on screen), so I want to build it myself. I program on Android, but I think it would be the same on iPhone.
I followed the steps suggested by dabhaid:
There are three steps.
1) Determine your position and orientation using sensors.
2) Convert from GPS coordinate space to a planar coordinate space by determining the relative position and bearing of known GPS coordinates using e.g. great circle distance and bearing. (Your device stays at the origin of the coordinate space with this scheme.)
3) Do a perspective projection http://en.wikipedia.org/wiki/3D_projection#Perspective_projection to figure out where on the plane that is your display (ok, your camera sensor) the objects should appear, so you can augment them.
Step 1: easy, I have the GPS position and all orientations from my mobile device (x, y, z). For further refinement, I can use some algorithm to smooth these values (average, low-pass filter, whatever).
Step 2: I don't know what exactly is meant by a planar coordinate space. I have some different approaches for converting my GPS coordinate space. One of them is ECEF (earth-centered), where 0,0,0 is the center of the earth. Somehow this doesn't look good to me, because every little change on ONE axis results in changes on the other two axes. So if I change the altitude, all three axes will change. I don't know if I can follow step 3 with this coordinate system.
In step 2, the haversine formula is mentioned - this would give me the distance to the point, but I don't get x, y, z from it. Do I have to calculate x and y using trigonometry (bearing (alpha) + distance (hypotenuse))?
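A minimal sketch of what that trigonometry could look like (haversine distance plus initial bearing to planar x/y metres; the class and method names are illustrative):

public final class GpsToPlane {
    private static final double R = 6371000; // mean Earth radius in metres

    // Returns {x east, y north} of the target relative to the origin, in metres.
    public static double[] toPlane(double lat0, double lon0,
                                   double lat1, double lon1) {
        double phi0 = Math.toRadians(lat0), phi1 = Math.toRadians(lat1);
        double dPhi = Math.toRadians(lat1 - lat0);
        double dLam = Math.toRadians(lon1 - lon0);

        // haversine great-circle distance
        double a = Math.sin(dPhi / 2) * Math.sin(dPhi / 2)
                 + Math.cos(phi0) * Math.cos(phi1)
                 * Math.sin(dLam / 2) * Math.sin(dLam / 2);
        double d = 2 * R * Math.asin(Math.sqrt(a));

        // initial bearing from origin to target (0 = north, clockwise positive)
        double y = Math.sin(dLam) * Math.cos(phi1);
        double x = Math.cos(phi0) * Math.sin(phi1)
                 - Math.sin(phi0) * Math.cos(phi1) * Math.cos(dLam);
        double bearing = Math.atan2(y, x);

        // hypotenuse + angle -> planar coordinates
        return new double[] { d * Math.sin(bearing), d * Math.cos(bearing) };
    }
}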
Step 3: This method looks really cool! If I have my coordinate space from step 2, I can calculate d_x, d_y, d_z using the formula on Wikipedia. But after this step I'm not finished yet, because I just have the 3D coordinates, and for projecting onto my screen I only need two coordinates. The Wikipedia text continues by calculating b_x, b_y using e_x, e_y, e_z, which is the viewer's position relative to the display surface -> how can I get these values from my mobile device (Android/iOS)? Another approach suggested on Wikipedia is calculating b_x, b_y using the formula that involves s_x, s_y, which is the screen size, and r_x, r_y, which is the recording surface size. Again, how can I get the recording surface size from my mobile device?
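For concreteness, a minimal sketch of step 3 written as a standard pinhole projection rather than the exact Wikipedia notation - the e_z term reduces to a focal length in pixels, which can be derived from the camera's horizontal field of view (on Android, Camera.Parameters.getHorizontalViewAngle()); all names here are illustrative:

public final class PerspectiveProjection {
    // d = {x, y, z} of the object in camera space (z > 0 means in front).
    // Returns {screenX, screenY} in pixels, or null if behind the camera.
    public static float[] project(float[] d, int screenW, int screenH,
                                  float hFovDegrees) {
        if (d[2] <= 0) return null; // behind the viewer, not visible

        // focal length in pixels, derived from the horizontal field of view
        float f = (screenW / 2f)
                / (float) Math.tan(Math.toRadians(hFovDegrees) / 2);

        float sx = screenW / 2f + f * d[0] / d[2];
        float sy = screenH / 2f - f * d[1] / d[2]; // screen y grows downward
        return new float[] { sx, sy };
    }
}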
I can't find anything about it on the internet. It seems that nobody on Android/iOS has ever implemented a perspective projection before...
Thank you very much for all of your answers! Links to useful sites would also help!
I think you can find many answers in this other thread: Transform GPS-Points to Screen-Points with Perspective Projection in Android.
Hope it helped, bye!
Here's a simple solution I came up with for this issue.
A: Mapping GPS locations on the camera preview in Android
Hope it helped. :D

Android Touch Event Direction

I would like to know how to detect which part of the screen the user touched - nothing too specific, just the direction (NORTH, SOUTH, EAST, WEST) from the middle point of the screen. Would the orientation of the screen affect it at all? I am using landscape orientation.
Fetch the coordinates of the touch (getX() and getY()) and compare them to the center point of your screen. This should give you a nice hint about the 'direction' of your touch.
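A minimal sketch of that comparison (the names are my own) - with the origin moved to the screen centre, the larger absolute delta decides whether the touch is primarily vertical or horizontal:

enum Direction { NORTH, SOUTH, EAST, WEST }

static Direction directionFromCenter(float touchX, float touchY,
                                     int screenW, int screenH) {
    float dx = touchX - screenW / 2f;
    float dy = touchY - screenH / 2f; // screen y grows downward
    if (Math.abs(dx) > Math.abs(dy)) {
        return dx > 0 ? Direction.EAST : Direction.WEST;
    }
    return dy > 0 ? Direction.SOUTH : Direction.NORTH;
}

This works the same in landscape orientation, since getX()/getY() are always relative to the current view.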
Hope I understood the question.
JQCorreia

In game scrolling and zooming

I'm in the process of developing an Android game. I have an activity with a custom class that extends View, where everything is drawn. Everything works fine, I have implemented a way to draw levels, and it looks good.
The problem I have is that the levels are clearly too big for just one screen, and re-designing them is not an option as it would hurt the user experience. The only solution I see is making the screen scrollable so that you can move around the rendered content, plus zooming. What I'm looking for is double tapping anywhere to zoom in and out, and scrolling to move around the map.
What I need help with is how to do this. I know how to detect that the user has scrolled or double tapped, but I don't know what I should do to actually zoom in and out and scroll (if a scroll is detected).
I have been looking around and saw some very simple tutorials, but all of them deal with zooming in/out of an image, which is not what I need. My level is rendered using many different bitmaps, so I know I need to redraw all of them when updating the screen (zoom or scroll).
Is my case the same as having a single image? When it comes to scrolling, I think what needs to be done is calculating how much the screen has "moved", then updating and redrawing the view's bitmaps with the new scaled coordinates - is this correct? What about zooming?
Any help would be much appreciated. Thanks
Zooming should be easy once you have detected where you currently are (i.e. the zoom level).
// Grows or shrinks the drawable symmetrically around the screen centre.
image.setBounds((getWidth()/2)-zoomControler, (getHeight()/2)-zoomControler, (getWidth()/2)+zoomControler, (getHeight()/2)+zoomControler);
Something like that should help you zoom in/out on the image. When the zoom level crosses a certain threshold, you can consider swapping in an image that is more detailed than the one you are currently rendering.
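Since the level is drawn from many bitmaps in a custom View, another option (a sketch of my own, not from the answer above) is to apply the pan and zoom once to the Canvas rather than rescaling each bitmap; offsetX, offsetY and zoom are hypothetical fields your gesture code updates:

import android.content.Context;
import android.graphics.Canvas;
import android.view.View;

class LevelView extends View {
    private float offsetX, offsetY; // level-space point shown at the screen's top-left
    private float zoom = 1f;        // 1 = normal, 2 = zoomed in, etc.

    LevelView(Context context) { super(context); }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.save();
        canvas.scale(zoom, zoom);             // e.g. toggled by double tap
        canvas.translate(-offsetX, -offsetY); // scroll offset in level coordinates
        drawLevel(canvas);                    // existing per-bitmap drawing code
        canvas.restore();
    }

    private void drawLevel(Canvas canvas) { /* draw the level's bitmaps as before */ }
}

A scroll gesture then just adds its distance (divided by zoom) to offsetX/offsetY and calls invalidate().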

Get actual image coordinates of image in Android webview onTouch event

I'm loading floor plans into an Android application. I need the user to be able to identify problem areas on the floor plan with a single click. My thought process is to detect the click, then reload the WebView's HTML with my marker asset added at the specified location.
I've loaded the image into a WebView to take advantage of its zoom capabilities. I can get the X,Y coordinates of the click within the WebView, but I can't figure out how to get the current zoom level. Also, I'm not sure what math is required to translate X, Y, and scale into actual pixel coordinates.
Is the WebView the right view for me to use?
EDIT:
Using the link suggested below, I got the zoom functionality working. I still can't figure out how to place a drawable marker on top of the TouchImageView, much less get the coordinates that were actually touched on the image.
For your zoom question as well as the coordinates, How can I get zoom functionality for images? should help you.
The accepted answer does use a WebView, but the other answer uses a better approach for enabling zoom. It also provides the necessary math required for translating your X and Y coordinates into usable points.
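As a sketch of that math (my own, not from the linked answer): if the image is drawn through an android.graphics.Matrix that encodes the current zoom and pan, as TouchImageView does internally, inverting that matrix maps a screen touch back to bitmap pixels. The imageMatrix parameter here is a hypothetical handle to that matrix:

import android.graphics.Matrix;

static float[] screenToImage(Matrix imageMatrix, float touchX, float touchY) {
    Matrix inverse = new Matrix();
    imageMatrix.invert(inverse);  // undo the current zoom + pan
    float[] point = { touchX, touchY };
    inverse.mapPoints(point);     // now in bitmap pixel coordinates
    return point;                 // { pixelX, pixelY }
}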

Enlarging Textures in Android OpenGL ES?

I have a small issue.
I need to enlarge (zoom) textures when I hold & drag at their corners.
I am using glOrthof() to set up the projection:
gl.glOrthof(0.0f, screen_width, -screen_height, 0, -1.0f, 1.0f); // map pixels 1:1 to world coordinates
I am able to do hit-testing and detect the corners of the images (textures) on the screen.
Now I need to enlarge (zoom) the image (texture). I have offset values, i.e. how far I moved on the screen in the X and Y directions.
If I need to use glScalef(), I think it accepts values as a percentage. How can I map the offset values to this percentage value?
Or is there another way to zoom (i.e. by enlarging the background polygon's vertices so that the mapped texture gets zoomed automatically)? In this method, I fix the polygon sides at surface-creation time.
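One possible mapping, as a minimal sketch (not from the question; drawTexturedQuad and the parameters are illustrative): glScalef() actually takes multipliers rather than percentages (1.0f = unchanged), so dividing the dragged size by the original size gives the factor directly:

import javax.microedition.khronos.opengles.GL10;

static void drawZoomed(GL10 gl, float left, float right, float dragOffsetX,
                       float centerX, float centerY) {
    float oldWidth = right - left;            // current quad width in pixels
    float newWidth = oldWidth + dragOffsetX;  // corner dragged by dragOffsetX pixels
    float scale    = newWidth / oldWidth;     // e.g. 1.25f means 25% larger

    gl.glPushMatrix();
    gl.glTranslatef(centerX, centerY, 0f);    // scale around the image centre...
    gl.glScalef(scale, scale, 1f);
    gl.glTranslatef(-centerX, -centerY, 0f);  // ...so the centre stays fixed
    drawTexturedQuad(gl);                     // your existing quad-drawing code
    gl.glPopMatrix();
}

static void drawTexturedQuad(GL10 gl) { /* existing drawing code */ }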
Please help me with this.
Thanks in advance. Your help is really appreciated.
OK... I am answering my own question.
I did this by changing (increasing/decreasing) the vertex values according to the stylus's movement on the screen.
Now I am able to zoom the image in the specified direction.
But I am not getting the exact behavior: the center of the image (x, y) is off (I calculate the image boundaries from the image's center values), so the hit-test is not functioning properly.
I suspect that changing the vertex values may not be the correct way to zoom an image.
I still need to explore this. If anybody has any ideas, please feel free to share.
Thanks in advance.
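One hedged guess at the drifting centre: if the four corner vertices are moved independently, the quad's centre shifts, which breaks any hit-test computed from it. A minimal sketch of scaling the corners uniformly around a fixed centre instead (names are illustrative):

// verts holds {x0,y0, x1,y1, x2,y2, x3,y3}; the centre (cx, cy) never moves.
static void scaleQuadAroundCenter(float[] verts, float cx, float cy, float factor) {
    for (int i = 0; i < verts.length; i += 2) {
        verts[i]     = cx + (verts[i]     - cx) * factor;
        verts[i + 1] = cy + (verts[i + 1] - cy) * factor;
    }
}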
