I'm building a solution in which an image is shown in the app; if the phone is pulled closer to the user, the image zooms in, and if it's pushed away, the image zooms out.
I tried a number of acceleration-detector apps to understand the values, and I could see that the movement is almost entirely along the z-axis.
Now I'm stuck on deriving a scale factor (to zoom the image in or out) from this acceleration value. Please help.
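For concreteness, this is roughly the kind of mapping I have in mind. It is only a sketch: the sensitivity and clamp constants are made up, and the double integration drifts quickly, which is exactly where I'm stuck.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Sketch: integrate z-axis acceleration twice to estimate how far the phone
// has moved toward/away from the user, then map that displacement to a scale.
public class ZoomFromAccel implements SensorEventListener {
    private static final float SENSITIVITY = 2.0f; // scale change per metre (made up)
    private float velocityZ = 0f;      // m/s, first integral of acceleration
    private float displacementZ = 0f;  // m, second integral (drifts quickly!)
    private long lastTimestampNs = 0;

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION) return;
        if (lastTimestampNs != 0) {
            float dt = (event.timestamp - lastTimestampNs) * 1e-9f; // ns -> s
            float az = event.values[2];           // z-axis acceleration in m/s^2
            velocityZ += az * dt;
            displacementZ += velocityZ * dt;
            // Clamp the resulting scale so the image can't zoom without bound.
            float scale = Math.max(0.5f, Math.min(3.0f, 1.0f + SENSITIVITY * displacementZ));
            // imageView.setScaleX(scale); imageView.setScaleY(scale);
        }
        lastTimestampNs = event.timestamp;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}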
I'm making an Android project where I want to show the user a map of a place (the place doesn't have any detail in Google Maps, just a vague outline). I have some physical architectural sketches of the place that I scanned and cropped a bit. Unfortunately they still have a wide gap between the edges of the image and the actual lines of the sketch, as well as potential distortions, mainly from trimming.
The main problem is that I can only get accurate locations (longitude and latitude) for some parts in the middle of the image (the lines themselves), not for its edges or its exact center. When I come to add the image as a ground overlay, it only lets me input a center, a width (and maybe a length) and a bearing. However, I have no idea how to transform those 4 coordinates into a center, width and bearing. Can anyone here help me with this?
I tried measuring the physical copies of the image with a ruler and doing some algebra, but it's very slow, prone to mistakes, and I don't know how accurate it is. I thought of maybe entering the coordinates manually and showing the image to check whether it's aligned, then tweaking until it works, but when I tried implementing it I realized I don't know how to calculate the required arguments (center, width and bearing) from the coordinates.
I'm now thinking of manually entering the anchor, bearing and width and checking by eye, but that would be slow, tedious and very inaccurate, so hopefully I can get help before too long.
Some more info: I have a bird's-eye view and GIS data of the area, but it's a building with several floors, so I can only fix locations on the outside. I don't know the locations of the boundaries, since they are arbitrary on the sketch. I also have no real idea what the orientation is; I really only have some longitude/latitude pairs. They also don't form a rectangle; it's just that I can associate them with certain points on the image.
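For what it's worth, here is the calculation I've been trying to pin down, as a sketch. It assumes the scanned sketch is drawn to a uniform scale, uses SphericalUtil from the android-maps-utils library, and takes two pixel-to-LatLng correspondences (all names here are mine):

import com.google.android.gms.maps.model.GroundOverlayOptions;
import com.google.android.gms.maps.model.LatLng;
import com.google.maps.android.SphericalUtil; // com.google.maps.android:android-maps-utils

// Sketch: derive center, width and bearing from two points whose positions
// are known both on the image (pixels) and on Earth (LatLng).
GroundOverlayOptions buildOverlay(LatLng fixA, double axPx, double ayPx,
                                  LatLng fixB, double bxPx, double byPx,
                                  double imageWidthPx, double imageHeightPx) {
    // Metres per pixel, from the one distance we can measure both ways.
    double distM = SphericalUtil.computeDistanceBetween(fixA, fixB);
    double distPx = Math.hypot(bxPx - axPx, byPx - ayPx);
    double metresPerPixel = distM / distPx;

    // Bearing = heading of A->B on Earth minus the compass-style angle of
    // A->B on the image ("image north" is up; pixel y grows downward).
    double headingEarth = SphericalUtil.computeHeading(fixA, fixB);
    double angleImage = Math.toDegrees(Math.atan2(bxPx - axPx, -(byPx - ayPx)));
    double bearing = headingEarth - angleImage;

    // Center: walk from fixA to the image center, rotated into the Earth frame.
    double dx = imageWidthPx / 2.0 - axPx;
    double dy = imageHeightPx / 2.0 - ayPx;
    double centerDistM = Math.hypot(dx, dy) * metresPerPixel;
    double centerAngleImage = Math.toDegrees(Math.atan2(dx, -dy));
    LatLng center = SphericalUtil.computeOffset(fixA, centerDistM, centerAngleImage + bearing);

    // Remember to call .image(...) on the result before adding it to the map.
    return new GroundOverlayOptions()
            .position(center, (float) (imageWidthPx * metresPerPixel))
            .bearing((float) bearing);
}

With a third correspondence this could be cross-checked, since any disagreement would measure the distortion from trimming.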
I am thinking of putting markers on images taken from the camera output, similar to what the Google PhotoScan application does. As far as I can see, the PhotoScan app overlays four solid circles on the image, then has you move the central hollow circle toward each of the four solid circles and captures four images, which it stitches together to create a high-quality image.
Screenshots for reference (the solid dots you can see are always there, even on a uniformly colored background; even if you move the camera around and back to the initial position, they display at the same position):
I am very curious how they are able to keep those four solid circles stable. Are they using an optical flow algorithm? Or motion sensors? I tested the application on a white or uniformly colored background and the dots stayed stable.
I implemented this functionality using an optical flow algorithm (the Lucas–Kanade method in OpenCV), but my points are not stable on a white or uniformly colored background (essentially, if the Lucas–Kanade algorithm cannot find a feature, it shifts that point). Here is a screenshot of my implementation:
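In code, the core of my tracking step looks roughly like this (simplified from my implementation):

import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.video.Video;

// Simplified tracking step: propagate the four marker positions from the
// previous grayscale frame to the current one with pyramidal Lucas-Kanade.
MatOfPoint2f trackMarkers(Mat prevGray, Mat currGray, MatOfPoint2f prevPts) {
    MatOfPoint2f nextPts = new MatOfPoint2f();
    MatOfByte status = new MatOfByte();  // 1 = point found, 0 = point lost
    MatOfFloat err = new MatOfFloat();
    Video.calcOpticalFlowPyrLK(prevGray, currGray, prevPts, nextPts, status, err);
    // On a uniform background most status entries come back 0 and nextPts
    // drifts -- which is exactly the instability described above.
    return nextPts;
}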
You are close. Using a single sensor alone, whether gyroscope or compass, will not work; by combining the results of several, we can achieve your requirement.
Ingredient 1: Accelerometer
Accelerometers in mobile phones are used to detect the orientation of the phone. The gyroscope, or gyro for short, adds an additional dimension to the information supplied by the accelerometer by tracking rotation or twist. An accelerometer measures linear acceleration of movement.
Ingredient 2: Gyroscope
In practice, an accelerometer will measure the directional movement of a device but will not be able to resolve its lateral orientation or tilt during that movement accurately unless a gyro is there to fill in that info.
Ingredient 3: Digital compass
The digital compass, usually based on a magnetometer sensor, gives a mobile phone a simple orientation relative to the Earth's magnetic field. As a result, your phone always knows which way north is, so it can auto-rotate your digital maps to match your physical orientation.
With an accelerometer you can either get a really "noisy" output that is responsive, or a "clean" output that is sluggish. But when you combine the 3-axis accelerometer with a 3-axis gyro, you get an output that is both clean and responsive at the same time.
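To make the combination concrete, here is a minimal complementary-filter sketch using Android's SensorManager; the 0.98 weight is a typical starting point, not a tuned value:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorManager;

// Minimal complementary filter: the gyro's responsive-but-drifting rotation is
// continually corrected by the accelerometer + magnetometer's absolute azimuth.
public class OrientationFusion {
    private static final float ALPHA = 0.98f; // trust the gyro 98%, accel/mag 2%
    private final float[] rotation = new float[9];
    private final float[] orientation = new float[3];
    private float[] lastAccel, lastMag;
    private long lastGyroTimestampNs = 0;
    private float fusedAzimuthRad = 0f;

    public void onSensor(SensorEvent event) {
        switch (event.sensor.getType()) {
            case Sensor.TYPE_ACCELEROMETER: lastAccel = event.values.clone(); break;
            case Sensor.TYPE_MAGNETIC_FIELD: lastMag = event.values.clone(); break;
            case Sensor.TYPE_GYROSCOPE:
                if (lastGyroTimestampNs != 0 && lastAccel != null && lastMag != null) {
                    float dt = (event.timestamp - lastGyroTimestampNs) * 1e-9f;
                    // Azimuth grows clockwise; the gyro's z rate is counterclockwise,
                    // hence the minus sign when integrating.
                    float gyroAzimuth = fusedAzimuthRad - event.values[2] * dt;
                    SensorManager.getRotationMatrix(rotation, null, lastAccel, lastMag);
                    SensorManager.getOrientation(rotation, orientation);
                    float magAzimuth = orientation[0]; // absolute but noisy
                    fusedAzimuthRad = ALPHA * gyroAzimuth + (1 - ALPHA) * magAzimuth;
                }
                lastGyroTimestampNs = event.timestamp;
                break;
        }
    }
}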
Coming back to your question: either the Lucas–Kanade results from OpenCV arrive late, causing the glitch, or the sensors on your device are not giving accurate readings.
It's more of a CV problem.
I really appreciate @Jeek Axio's answer. You can use multiple sensors on an Android device as supporting inputs to the CV problem.
However, with state-of-the-art CV methods it's possible to solve this tracking problem with very good accuracy.
You may use methods such as EKLT or PointTrack to track the feature points.
There's also a full-featured toolbox called FTK.
I am a developer in Korea, and my English is unfortunately weak.
I am trying to draw an ocean map using OpenGL ES.
It has direct manipulation of movement: panning when you touch the screen, plus a zoom in / zoom out function.
To get the movement values, I need to know the width and height of the visible region for each zoom state.
Is there a way in Android to find the world coordinates that are projected at the bottom-right and the top-left corners of the screen?
Since I cannot say it well in English, please let the picture complement the description.
Please help me. I am unable to make progress.
I hope this will help you:
// Size of the default display, in pixels (deprecated in newer APIs in favor of Display.getSize()):
getWindow().getWindowManager().getDefaultDisplay().getWidth();  // screen width
getWindow().getWindowManager().getDefaultDisplay().getHeight(); // screen height
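If what you actually need is the world coordinates that land on those screen corners, one possible approach (a sketch, assuming you keep float[16] copies of your model-view and projection matrices) is GLU.gluUnProject:

import android.opengl.GLU;

// Sketch: unproject a screen corner back into world space.
float[] worldAtCorner(float screenX, float screenY,
                      float[] modelView, float[] projection,
                      int viewWidth, int viewHeight) {
    int[] viewport = {0, 0, viewWidth, viewHeight};
    float[] obj = new float[4];
    // GL window coordinates grow upward, so flip the screen y.
    // winZ = 0 gives the near plane; use a value in (0, 1] for deeper points.
    GLU.gluUnProject(screenX, viewHeight - screenY, 0f,
            modelView, 0, projection, 0, viewport, 0, obj, 0);
    // Android's gluUnProject does not divide by w, so do it here.
    return new float[]{obj[0] / obj[3], obj[1] / obj[3], obj[2] / obj[3]};
}

// e.g. worldAtCorner(0, 0, ...) for the top-left corner and
// worldAtCorner(viewWidth, viewHeight, ...) for the bottom-right.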
I am trying to emulate point sprites in Android using OpenGL, specifically the characteristic of the size staying the same if the view is zoomed.
I do not want to use point sprites themselves as they pop out of the frustum when the "point" reaches the edge, regardless of size. I do not want to go down the route of Orthographic projection either.
Say I have a billboard square with a size of 1: when the user zooms in, I would need to decrease the size of the square so it looks the same size, and when the user zooms out, I increase it. I have the projection and model matrices to hand if these are required, as well as the FOV. My head just goes blank every time I sit down and think about it! Any ideas on the necessary algorithm?
OK. Since I zoom into the environment by changing the field of view, I divide the quad size by (max_fov / current_fov). It works for me.
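For later readers, the compensation fits in a couple of lines. A sketch (the names are mine): the first function is the linear approximation above, the second follows the projection math, where on-screen scale is proportional to 1 / tan(fov / 2).

// Linear approximation from the answer above: shrink the quad as the FOV narrows.
float quadSize(float baseSize, float maxFovDeg, float currentFovDeg) {
    return baseSize / (maxFovDeg / currentFovDeg); // == baseSize * current / max
}

// Slightly more exact: the projection scales by 1 / tan(fov / 2), so the
// world-space size should grow with tan(fov / 2) to look constant on screen.
float quadSizeExact(float baseSize, float maxFovDeg, float currentFovDeg) {
    double halfCur = Math.toRadians(currentFovDeg) / 2.0;
    double halfMax = Math.toRadians(maxFovDeg) / 2.0;
    return (float) (baseSize * Math.tan(halfCur) / Math.tan(halfMax));
}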
I've been trying to implement a limit to prevent the user from scaling the image too much in my multitouch zoom app. The problem is that when I set the max zoom level by dumping the matrix, the image starts to translate downward once the overall scale hits my limit. I believe it does this because the matrix is still being affected by postScale(theScaleFactorX, theScaleFactorY, myMidpointX, myMidpointY), where theScaleFactorX/Y is the amount by which to multiply the overall scale of the image (so if theScaleFactorX/Y is 1.12 and the image is at 0.60 of its original size, the overall zoom is now 0.67). It seems like some math is going on that creates this translation, and I was wondering if anyone knew what it was, so I can prevent the translation and only allow the user to zoom back out.
I'm still unsure how postScale affects translation, but I fixed it with an if statement: as long as we are within the zoom limit I set, postScale as normal; otherwise, divide the zoom limit by the saved global zoom level recorded on ACTION_DOWN and use that as the scale factor, which keeps the image at the proper zoom level.
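In code form, the fix amounts to clamping the scale factor before postScale ever overshoots the limit (a sketch; names such as currentZoom are mine):

import android.graphics.Matrix;

// Sketch: never let the cumulative zoom exceed maxZoom. currentZoom is the
// overall scale tracked since ACTION_DOWN.
void applyScale(Matrix matrix, float scaleFactor, float midX, float midY,
                float currentZoom, float maxZoom) {
    if (currentZoom * scaleFactor > maxZoom) {
        // Replace the requested factor with exactly the one that lands on the
        // limit, so postScale never overshoots and never drags the image with
        // an unwanted translation.
        scaleFactor = maxZoom / currentZoom;
    }
    matrix.postScale(scaleFactor, scaleFactor, midX, midY);
}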