GroundOverlay images with dimension lengths of power 2 - android

I'm using a GroundOverlay to display images over a Google Map on Android. The images I'm using are being pulled from a server and aren't guaranteed to have width/height with a value that is a power of 2.
According to the docs, if I don't make the side lengths powers of 2, the API will do the conversion for me:
Note: When the image is added to the map it will be converted to an image with sides that are powers of two. You can avoid this conversion by using an original image with dimensions that are powers of two — for example, 128x512 or 1024x1024.
Does anyone on the Google Maps for Android team have any information on whether the images are stretched, cropped, etc. in order to give them power-of-two side lengths?
If they are stretched/cropped, is this taken into account when rendering the bitmap if I specify a certain LatLngBounds as the region to overlay the image onto (e.g. are the bounds increased to match the new width/height)?
Also, is the aspect ratio of the image preserved? Is that even possible if the original source image's aspect ratio doesn't allow resizing to power-of-two sides?
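For what it's worth, the workaround I've been sketching (a helper of my own, not something from the Maps API) is to pad the bitmap up to the next power-of-two size with transparent pixels, so the API's conversion becomes a no-op; the LatLngBounds would then have to be expanded by the same width/height ratios to keep the content georeferenced:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;

    // Sketch only: pad the source bitmap to the next power-of-two dimensions
    // with transparent pixels (no stretching), so there is nothing left for
    // the API to convert.
    static int nextPow2(int n) {
        return n <= 1 ? 1 : Integer.highestOneBit(n - 1) << 1;
    }

    static Bitmap padToPowerOfTwo(Bitmap src) {
        int w = nextPow2(src.getWidth());
        int h = nextPow2(src.getHeight());
        if (w == src.getWidth() && h == src.getHeight()) {
            return src;  // already power-of-two sized
        }
        Bitmap padded = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        new Canvas(padded).drawBitmap(src, 0, 0, null);  // original pixels unscaled
        return padded;
    }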
Thanks,
Andy.

Related

How to align depth and color in ARCore?

I'm writing a small Android app for my P30 Pro in which I want to find an object in the color image (face, visual marker, ...) and then get its relative position by using the dense depth image from the time-of-flight camera. To simultaneously track the camera, I run ARCore and use a SharedSession to access the color image as well as the depth image via frame.acquireDepthImage(). (Both work well: I get a high-resolution color image and a 120x160 depth image.)
For the color image, I get the intrinsic calibration via camera.getImageIntrinsics, so I can map between a pixel and the corresponding ray.
However, I found no corresponding function for the depth camera, so I can't create a point cloud from the depth image or get the corresponding depth for a pixel in the color image.
So: "How can I find the corresponding 3D Point for a given pixel in the color image by using the dense depth image?"
I previously worked on a project that used the P20 Pro's dense depth maps, and I found they were already aligned with the high-resolution color image, even though they were at a much lower resolution.
In other words, you should just upsample the depth map so it matches the color image's resolution. Once you do that, you should find that the interpolated depth value at (r,c) in the upsampled depth image corresponds to the pixel at (r,c) in the color image.
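A minimal sketch of that lookup (a helper of my own, assuming the alignment described above and Android's DEPTH16 layout, where each 16-bit sample packs 3 confidence bits above 13 bits of depth in millimeters):

    // Sketch only, not ARCore API: bilinearly sample the small depth map at the
    // location corresponding to a color-image pixel. Assumes the depth map is
    // pre-aligned with the color image.
    static float depthForColorPixel(short[] depthRaw, int depthW, int depthH,
                                    int colorX, int colorY, int colorW, int colorH) {
        // Map the color pixel into depth-map coordinates.
        float dx = colorX * (depthW - 1f) / (colorW - 1f);
        float dy = colorY * (depthH - 1f) / (colorH - 1f);
        int x0 = (int) dx, y0 = (int) dy;
        int x1 = Math.min(x0 + 1, depthW - 1), y1 = Math.min(y0 + 1, depthH - 1);
        float fx = dx - x0, fy = dy - y0;

        // Strip the 3 confidence bits to get depth in millimeters.
        float d00 = depthRaw[y0 * depthW + x0] & 0x1FFF;
        float d10 = depthRaw[y0 * depthW + x1] & 0x1FFF;
        float d01 = depthRaw[y1 * depthW + x0] & 0x1FFF;
        float d11 = depthRaw[y1 * depthW + x1] & 0x1FFF;

        // Bilinear interpolation of the four neighbouring samples.
        float top = d00 + (d10 - d00) * fx;
        float bot = d01 + (d11 - d01) * fx;
        return top + (bot - top) * fy;
    }

With that depth in hand, the 3D point follows from the intrinsics you already have: unproject the pixel through the focal length and principal point from camera.getImageIntrinsics and scale the resulting ray by the depth.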

Google Maps - TileOverlay - Stretch tiles for higher zoom levels

I have built a custom TileProvider from a map image, but the original image does not cover the required map area for the resolution corresponding to the highest zoom level.
By default, the provider returns no tiles if I do not create images for the corresponding zoom levels. Is it possible to zoom in on the existing tiles instead? I could create these zoomed tiles myself, which would basically be stretched and cropped versions of the highest-resolution tiles I have, but this seems redundant and would take unnecessary disk space/processing.
Is there a way to stretch tiles when none is available for a high zoom level, rather than creating those tiles explicitly? I could always set the maxZoom property on the map, but I have different overlays with different resolutions. I could also add some smart processing in the provider to return a subsampled version of a tile at lower resolution on the fly, but I am hoping there is a built in way to do this.
You cannot stretch the tiles per zoom level automatically, but you can wrap your tile provider with one that customizes the behaviour. For example, some years ago I wrote a custom tile provider that cached the tiles in a specific folder on the phone (better and longer-lived caching than the Google Maps one). It would also be possible to check the zoom level (z) and, if it is higher than a specific value, retrieve the tile for a lower zoom, cut out the matching quadrant, and scale it up.
The result will be quite poor though; honestly I find it better to create the tiles server-side (are you perhaps using a WMS provider?).
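A sketch of that wrapping approach (class and field names are mine, not part of the Maps API): above the highest zoom you actually have tiles for, fetch the ancestor tile, crop the quadrant this tile falls in, and scale it back up to tile size:

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import com.google.android.gms.maps.model.Tile;
    import com.google.android.gms.maps.model.TileProvider;
    import java.io.ByteArrayOutputStream;

    // Sketch only: above maxNativeZoom, fetch the ancestor tile at the highest
    // zoom we actually have, crop the matching quadrant, and scale it up.
    // As noted above, visual quality degrades quickly.
    public class StretchingTileProvider implements TileProvider {
        private static final int TILE_SIZE = 256;
        private final TileProvider delegate;   // the provider with real tiles
        private final int maxNativeZoom;

        public StretchingTileProvider(TileProvider delegate, int maxNativeZoom) {
            this.delegate = delegate;
            this.maxNativeZoom = maxNativeZoom;
        }

        @Override
        public Tile getTile(int x, int y, int zoom) {
            if (zoom <= maxNativeZoom) {
                return delegate.getTile(x, y, zoom);
            }
            int diff = zoom - maxNativeZoom;
            Tile parent = delegate.getTile(x >> diff, y >> diff, maxNativeZoom);
            if (parent == null || parent.data == null) {
                return NO_TILE;
            }
            Bitmap full = BitmapFactory.decodeByteArray(parent.data, 0, parent.data.length);
            int sub = full.getWidth() >> diff;        // sub-region halves per extra level
            if (sub == 0) {
                return NO_TILE;                       // zoomed too far past the source
            }
            int subX = (x & ((1 << diff) - 1)) * sub; // offset of this tile in the parent
            int subY = (y & ((1 << diff) - 1)) * sub;
            Bitmap cropped = Bitmap.createBitmap(full, subX, subY, sub, sub);
            Bitmap scaled = Bitmap.createScaledBitmap(cropped, TILE_SIZE, TILE_SIZE, true);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            scaled.compress(Bitmap.CompressFormat.PNG, 100, out);
            return new Tile(TILE_SIZE, TILE_SIZE, out.toByteArray());
        }
    }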

get real size based on pixel size

I am trying to get the true object size in an Android app using the OpenCV library.
I've created an app that is able to recognize a car (looking at its side). Now I need to get its true width and height.
What I have:
I have the height and width of the object in pixels (I'm wrapping the car in a rectangle), and I have the resolution of the camera that is used.
Can I somehow convert pixels to true size? I can't use the distance from me to the object because I don't know it.
Can I somehow convert pixels to true size? I can't use the distance from me to the object because I don't know it
No, you can't without knowing the distance. The perspective projection of real world objects (the car) onto the 2D image plane results in an information loss. For example, take a small toy car near the camera and a normal car far from the camera. Both could result in the same projected size in pixels although their true sizes are different.
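To put this in formula form: under the standard pinhole model, with f the focal length in pixels, W the object's true width, and Z its distance from the camera, the projected width in pixels is

    \[
      w_{\mathrm{px}} = f \cdot \frac{W}{Z}
      \qquad\Longrightarrow\qquad
      W = \frac{w_{\mathrm{px}} \cdot Z}{f}
    \]

W and Z enter only through the ratio W/Z, so scaling both by the same factor leaves the image unchanged; without Z, or an object of known size in the scene to calibrate against, W cannot be recovered from pixel measurements and camera resolution alone.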

What camera viewport width and height should I use with orthographic camera

I am a complete beginner at game programming and Libgdx, and I am really confused about what camera viewport size I should use. In some articles I found that they used 480x800, which is the same as the target device size. In other articles they used meters, e.g. a 5x5 meter viewport.
So which method is better, and why (what are the benefits of each)?
If I use meter units for the camera viewport, which dimension is mapped first, width or height?
If I use a 5x5 meter viewport on a 480x800 pixel device, is the visible area of the world
width = 5 meters (the 480 px side) and
height = 800/480 * 5 = 8.33 meters,
or
height = 5 meters (the 800 px side) and
width = 480/800 * 5 = 3 meters?
Is this the correct calculation of the visible world size, and which of the two is used?
I get confused when they start using meters for sizes everywhere instead of pixels, e.g. an actor's size is 1x1 meters even though it is only 64x64 px. It is really difficult for me to estimate positions and sizes that way.
Please link to any good articles about cameras and camera units.
Whatever dimensions you specify, they'll be mapped to the entire screen by default. If the aspect ratios don't match, the displayed graphics will be stretched. So if you set your camera to a 5x5 coordinate system on a non-square screen without changing the drawing area, it'll be heavily distorted. If you render it in a square desktop window, it'll be fine.
The advantage of using smaller coordinate systems is that it's easier to calculate with, and possibly more meaningful in the context of a game - e.g. you can think of them as meters, as you said. It's useful in cases where the content matters more than the exact positions on the screen - like drawing the game world.
Using larger coordinates which match the resolution of some devices can be more useful when you're drawing UI. You can see how large you should make each image, for example, if you target that resolution. (Scaling can cause distortions.)
But ultimately, it's a matter of preference. Personally, I like smaller coordinate systems more, so I recently coded my level select menu in a 20*12 system. (I did run into problems when rendering a BitmapFont though - they were not very well made for scaling like this.) Others might prefer to work with resolution-sized coordinates for gameplay rendering as well. What matters is that you should make sure you're not distorting the graphics too much by badly matching aspect ratios.
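As a minimal sketch of the small-coordinate-system approach (the 20x12 world size is just an example), LibGDX's FitViewport sidesteps the distortion problem by letterboxing instead of stretching when the aspect ratios don't match:

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.graphics.OrthographicCamera;
    import com.badlogic.gdx.utils.viewport.FitViewport;
    import com.badlogic.gdx.utils.viewport.Viewport;

    // Sketch: a camera working in 20x12 world units, independent of screen pixels.
    public class WorldUnitsExample extends ApplicationAdapter {
        private OrthographicCamera camera;
        private Viewport viewport;

        @Override
        public void create() {
            camera = new OrthographicCamera();
            viewport = new FitViewport(20f, 12f, camera);  // world units, not pixels
        }

        @Override
        public void resize(int width, int height) {
            // Recompute the world-to-pixel mapping; 'true' re-centers the camera.
            viewport.update(width, height, true);
        }
    }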
Try 136 for the width and 204 for the height.

Influence the Tile size in Android Maps TileProvider?

I have been playing around with the TileOverlay in Android Maps v2, and I have built a custom TileProvider very similar to this one.
But there is something that strikes me as odd. No matter which numbers I pass to the Tile constructor, the image on the screen is always the same: 4 to 9 tiles sharing the screen space evenly.
Of course this is something you would expect from reading the documentation:
The coordinates of the tiles are measured from the top left (northwest) corner of the map. At zoom level N, the x values of the tile coordinates range from 0 to 2^N - 1 and increase from west to east, and the y values range from 0 to 2^N - 1 and increase from north to south.
But you might guess that there is in fact such functionality from looking at the constructor's documentation:
Constructs a Tile.
Parameters:
width - the width of the image in pixels
height - the height of the image in pixels
data - a byte array containing the image data. The image will be created from this data by calling decodeByteArray(byte[], int, int).
So obviously I misunderstood something here. My personal guess is that a tile has to cover an entire "map tile" and can therefore not be shrunk.
My goal would be to make my tiles cover about 10dp of the screen. Therefore, again, my question to you:
Can I realize this with TileOverlay or will I end up using custom Markers?
The size of the tile specified in the constructor is the size of (every) bitmap tile you are supplying to the map. This allows you to provide tiles at different densities for different screens if you have such resources.
It will not change the size of the image that is drawn on the map. The physical size of a map tile is defined by the zoom level, where a zoom level of 0 is a single tile covering the entire world, 1 is 2x2 tiles, etc. This is part of an open web map standard for map tiles, not defined by Google.
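For illustration, a sketch of what the constructor's width/height are actually for (loadTilePng is a hypothetical loader): supplying higher-density bitmap data for the same fixed map-tile footprint:

    // Sketch: the Tile dimensions describe the bitmap you supply, not its
    // footprint on the map. Returning 512x512 data gives a crisper image on
    // high-DPI screens, but the map still stretches it over exactly one
    // map-tile area for the given x/y/zoom.
    @Override
    public Tile getTile(int x, int y, int zoom) {
        byte[] png = loadTilePng(x, y, zoom);  // hypothetical loader
        return (png == null) ? NO_TILE : new Tile(512, 512, png);
    }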
API docs:
https://developers.google.com/maps/documentation/android/tileoverlay
Ref:
http://www.maptiler.org/google-maps-coordinates-tile-bounds-projection/
