Displaying and navigating large custom maps offline using PhoneGap - Android

My question is: how can I efficiently display large custom maps in an offline PhoneGap application, allowing them to be panned and zoomed smoothly while still supporting older mobile devices?
I’m developing a mobile application that involves using geolocation to navigate walking routes in remote areas where it’s likely the user won’t have a signal and therefore an internet connection. It’s important that the app works well with Android 2.2+ (so SVG is not an option) as well as iOS4+.
I’ve drawn custom vector maps using Adobe Illustrator at resolutions appropriate to each route, the average being about 2000x2000 pixels and the largest of which so far results in an image 4000x2400 pixels.
I’ve chosen to go with Phonegap/JQM rather than native simply because I come from a web programming background and it seemed the fastest way to get a user interface up and running without needing to delve into native code too much, although I’ve written a couple of Phonegap plugins using native code for the purposes of power and screen management.
The application needs to allow the user to pan around the map (by dragging) and zoom in/out (by pinching) between about 25% to 200% of the original image size.
Most of the testing I’ve done has been on an HTC Desire running Android 2.3.3 and an HTC Wildfire running Android 2.2 since these are likely to be some of the lowest spec devices the app is going to have to run on.
I’ve tried out various approaches to display the map (detailed below), but so far each has proved unfit for purpose, either because the memory usage of the app is too great, the storage space required makes the app too large to download, or the CPU usage is too intensive, causing lag when panning/zooming.
Any suggestions much appreciated. Thanks in advance.
Approaches I’ve tried:
1. Display map as a raster PNG using an <img> tag
This was the first approach I tried. Exporting the 4000x2400 pixel image from Illustrator as a 128 colour PNG-8 resulted in a 746Kb file. I panned the image by absolutely positioning it relative to the viewport and zoomed it by scaling the width/height attributes of the <img> tag.
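For illustration, the pan/zoom handling in this approach was roughly along these lines (a simplified sketch; the element ID and hard-coded dimensions are illustrative, not the actual app code):

// Sketch of approach 1: the map <img> is absolutely positioned inside the
// viewport and zoomed by rescaling the <img> element itself.
var zoom = 1.0;           // allowed range: 0.25 to 2.0
var panX = 0, panY = 0;   // current pan offset in CSS pixels

function redraw() {
    var img = document.getElementById('map');    // e.g. <img id="map" src="map.png">
    img.style.left = panX + 'px';
    img.style.top = panY + 'px';
    img.width = Math.round(4000 * zoom);          // rescaling the full-size image
    img.height = Math.round(2400 * zoom);         // is what made memory usage balloon
}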
The problem with this approach was that even at a 1:1 zoom level, the Android application used 60Mb of RAM for the image, and zooming in to 200% caused this to increase to 120Mb, crashing the app on the HTC Wildfire.
2. Display portions of raster PNG using HTML5 canvas
To avoid the problem of zooming-in causing a proportional increase in memory usage, I tried loading the image via JS then copying the portion of the image to be displayed to a canvas the size of the viewport, something like:
var canvas = $('canvas#mycanvas');
canvas[0].width = $(window).width();    // size the canvas element to the viewport
canvas[0].height = $(window).height();
...
var img = new Image();
img.src = 'map.png';
...
var context = canvas[0].getContext('2d');
// copy only the visible portion of the source image into the viewport-sized canvas
context.drawImage(img, x, y, w, h, 0, 0, canvas[0].width, canvas[0].height);
where x,y is the top-left corner within the source image defined by panning
and w,h is the area size within the source image determined by zoom level
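To make that concrete, the source rectangle can be derived from the pan offset and zoom level roughly like this (a sketch; the variable names are illustrative):

// Map the current pan/zoom state to drawImage()'s source rectangle.
// panX/panY are scroll offsets in screen pixels; zoom runs from 0.25 to 2.0.
function sourceRect(panX, panY, zoom, viewW, viewH) {
    return {
        x: panX / zoom,   // top-left corner of the visible area, in source-image pixels
        y: panY / zoom,
        w: viewW / zoom,  // at 200% zoom, half as many source pixels fill the viewport
        h: viewH / zoom
    };
}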
The problem here was that large map images were somehow losing quality while in memory (I can only assume there's some upper memory limit that results in dithering), causing the maps to look distorted in the app: see here for an example screenshot
3. Display map as vector using HTML5 canvas
A bit of Googling led me to discover ai2canvas, an Illustrator plugin that enables you to export artwork as vectors displayed in an HTML5 canvas. The result of the export is an html file containing a chunk of JS which represents all the paths in illustrator as bezier curves. Exporting my 4000x2400 map resulted in a 550Kb html file containing the vector paths.
In my test app, I rendered the entire map to an in-memory canvas (not attached to the DOM) of 4000x2400 pixels, then copied the relevant portions of it to a viewport-sized canvas using context.drawImage() with the in-memory canvas as the source.
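A stripped-down sketch of that setup (drawMap() is a stand-in for the ai2canvas-generated drawing code, not a real ai2canvas API):

// Render the whole map once into an in-memory canvas, then copy portions of it.
var buffer = document.createElement('canvas');   // in memory only, never attached to the DOM
buffer.width = 4000;
buffer.height = 2400;
drawMap(buffer.getContext('2d'));                // draw every bezier path once, up front

// Copy the visible portion of the buffer into the on-screen, viewport-sized canvas.
function render(viewCtx, src) {                  // src = visible source rectangle
    viewCtx.drawImage(buffer, src.x, src.y, src.w, src.h,
                      0, 0, viewCtx.canvas.width, viewCtx.canvas.height);
}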
On the HTC Wildfire, although the initial render of all the bezier curves to the in-memory canvas took around 2000ms, copying between canvases was fast enough to allow smooth panning and zooming. The problem was that, once all the vectors had rendered, the app was using 120Mb of memory for the in-memory canvas.
I tried a second approach using the vector map: instead of rendering all the vectors to a large in-memory canvas, I made the app calculate which vector paths were visible within the viewport at the current pan position/zoom level during each drag/pinch event, and only drew the visible vectors to the viewport-sized canvas. While this reduced the required memory usage to 10Mb, the CPU cycles required to perform these calculations on every drag/pinch made the app lag so much on the old Android phones that it was unusable.
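The per-frame culling looked roughly like this (a sketch; paths[i].bounds and paths[i].draw() are hypothetical helpers wrapping the ai2canvas output, not a real API):

// Only draw paths whose bounding box overlaps the visible source rectangle.
function intersects(a, b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

function renderVisible(ctx, paths, view) {       // view = visible source rectangle
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    for (var i = 0; i < paths.length; i++) {
        if (intersects(paths[i].bounds, view)) { // bounding boxes precomputed per path
            paths[i].draw(ctx, view);            // draw translated/scaled into the viewport
        }
    }
}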
4. Display map using offline tiling
Using MapTiler, I created PNG tiles for my maps at zoom levels from 25% to 100%. In my test app, I was then able to lazy-load the tiles on demand, reducing memory usage and producing a smooth pan/zoom experience even on the HTC Wildfire. I thought I'd found the solution until I looked at the size of the APK produced: for my 4000x2400 map, MapTiler produced 4Mb of tile images. Most of my maps are around 2000x2000 pixels, resulting in about 2Mb of tiles each. The code of my proper application plus the Phonegap overhead is another 2Mb.
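The lazy loading itself was straightforward, something like this sketch (the tiles/{zoom}/{x}/{y}.png layout and 256px tile size are illustrative, not necessarily MapTiler's exact output):

// Load tiles on demand and cache them so each PNG is only decoded once.
var tileCache = {};

function getTile(z, tx, ty) {
    var key = z + '/' + tx + '/' + ty;
    if (!tileCache[key]) {
        var img = new Image();
        img.src = 'tiles/' + key + '.png';       // packaged locally with the app
        tileCache[key] = img;
    }
    return tileCache[key];
}

// Draw only the tiles that overlap the viewport at the current zoom level.
function drawTiles(ctx, z, firstTx, firstTy, lastTx, lastTy, offsetX, offsetY) {
    for (var ty = firstTy; ty <= lastTy; ty++) {
        for (var tx = firstTx; tx <= lastTx; tx++) {
            var img = getTile(z, tx, ty);
            if (img.complete) {                  // skip tiles that are still loading
                ctx.drawImage(img,
                              (tx - firstTx) * 256 - offsetX,
                              (ty - firstTy) * 256 - offsetY);
            }
        }
    }
}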
My intention is to release a series of apps on the Android/Apple markets, each with a set of around 10 maps, but with tiling each map weighs in at between 1Mb and 4Mb, so the resulting app becomes a very large download.

In case this is of interest to anyone else, I solved this by using map tiling in the end, using a tool called pngnq to create 8-bit PNGs constrained to 256 colours. The resulting set of tiles for my 4000x2000 map was about 800K in size, as opposed to 4Mb for PNG-24, which was an acceptable size for the assets in my Android and iOS applications.

Related

Augmented image getting detected but not tracked

I am working on the augmented image example in ARCore, where I am able to detect the image but the image is not getting tracked and the object is not getting placed. I am referring to the augmented image example from the codelabs. I have changed the image (a hand-made image), whose arcoreimg score is 100, and have also made the following changes to the code. It's getting detected continuously but not tracked.
config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
config.setFocusMode(Config.FocusMode.AUTO);
For successful detection and tracking of Augmented Images in ARCore, follow these basic rules:
In ARCore 1.15+, if your image doesn't move (like a poster on a wall), you should attach a global anchor to the image to increase the tracking's stability.
The physical image has to occupy at least 1/4 of the camera feed.
The smallest image resolution should be 300 x 300 pixels.
You must track your image under appropriate lighting conditions; a barely lit room is not a good environment for an AR user experience.
It's much better to specify an expected physical size of a tracked image. Additional metadata improves tracking performance, especially for large physical images (more than 75 cm in size).
When ARCore detects a desired image with no expected physical size specified, its tracking state will automatically be paused. For the user this means that ARCore has recognised the image, but hasn't gathered enough data to estimate its location in 3D space. Do not use the image's pose and size estimates until the image's tracking state is TRACKING.
Augmented Images support .png and .jpeg. However, avoid heavy compression for .jpeg.
Use images with high-contrast content; it doesn't matter whether they are colour or black-and-white.
Avoid images with repetitive patterns (like polka dots) and sparse features.
Andy's answer is correct, but maybe insufficiently specific. I had this issue as well and as soon as I added an expected width in meters, it started working almost immediately.
Instead of augmentedImageDatabase.addImage(DEFAULT_IMAGE_NAME, augmentedImageBitmap);
Use augmentedImageDatabase.addImage(DEFAULT_IMAGE_NAME, augmentedImageBitmap, <width in meters>);
Then it'll start tracking almost as soon as it is detected and you won't have to deal with these PAUSED shenanigans. It worked great for me with a 7cm image with a score of 95, and even with an image scoring 40. A 40-score image with a set width works better than a 100-score image without one.

Optimizing performance in Android graphics

I'm creating a maps application that needs to display a very large bitmap of a world map which is bigger than most screens and consumes a lot of memory.
To solve the problem, I'm using a similar idea to the Google Maps app: splitting the map into smaller 256x256 pieces and then calculating which pieces fall in the view area, which on my device is 12 pieces at any one time.
The pieces are all stored in the assets folder, loaded using the AssetManager, decoded into a Bitmap using BitmapFactory, and then drawn onto my view's canvas.
This is very slow, and even after going further into the literature and having it run in a separate thread, the graphics are jerky when scrolling to new locations.
How do other games and apps (like Google Maps) display graphics with such smooth scrolling?
After many attempts, it seems the problem was with reading from the assets folder. I solved it by calculating the maximum number of map pieces visible at one time from the user's screen size. The Draw procedure first searches for pieces in the array that are now off the screen and nulls those slots. It then looks for pieces that are now visible and loads them into an empty slot.
The app then uses the array as its source of data, which has reduced the assets-folder reads to 6, as opposed to the direct method's 20 piece loads per Draw.
The game now runs with the speed of popular mobile games!

Android: pixel quality reduction in Images loaded in WebView

I am building a JavaScript application for mobile browsers (not a wrapped-as-native app).
I noticed that Android (tested on a 2.3 emulator and a Galaxy S device) reduces the quality of loaded images if the image dimensions exceed a certain threshold (width above 1400 px or so). This makes it impossible to load big bitmap images (2000 x 2000 px) without the quality becoming unusable.
I tested this by
Loading one big image and drawing it on a <canvas> - I got pixel garbage out. If I draw grid lines on the canvas using lineTo() they have perfect quality, so the problem must be in the image pixel data
Slicing the big image into 100 x 100 slices and drawing them to a canvas - this is the only method I found that results in no quality reduction. However, slicing is cumbersome, adds an extra image-preprocessing step, and page loading times suffer
Trying to load the image with a new Image() object, an <img> tag and a CSS background: everything suffers from the reduced quality, so I suspect the problem is in the image loader itself
I also tried everything with CSS image-rendering https://developer.mozilla.org/En/CSS/Image-rendering - no luck
The viewport meta tag seems to have no effect on the image loading - the data is already garbage when you try to touch the loaded pixel data. I tried all possible values suggested in Android's SDK documentation http://developer.android.com/reference/android/webkit/WebView.html
I also tested Firefox Mobile, desktop browsers and iOS: everything is fine there.
So, what is going on - Android WebView simply can't load big images?
Android unconditionally resamples images and reduces quality if a certain threshold of memory usage is exceeded.
https://android.googlesource.com/platform/external/webkit/+/android-3.2.4_r1/WebCore/platform/graphics/android/ImageSourceAndroid.cpp
There is no way to access the original image data intact.
I posted a question about this to the android-developers Google Group, asking whether some kind of flag could be provided to opt out of this behaviour.
Meanwhile, if you are developing HTML5 web apps and you might use big images, you simply need to preprocess them on the server side by slicing, send the smaller images to the device, and then reconstruct the original image by drawing the slices onto a <canvas> or by putting many <img> tags inside a container element.
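A sketch of the client-side reassembly, assuming the server produces fixed-size slices named baseUrl_row_col.png (the naming scheme is an assumption about your preprocessing step):

// Stitch pre-sliced images back together on a <canvas>; each slice stays small
// enough to avoid the WebView's downsampling.
function assemble(canvas, cols, rows, sliceSize, baseUrl) {
    canvas.width = cols * sliceSize;
    canvas.height = rows * sliceSize;
    var ctx = canvas.getContext('2d');
    for (var row = 0; row < rows; row++) {
        for (var col = 0; col < cols; col++) {
            (function (row, col) {               // capture the loop variables
                var img = new Image();
                img.onload = function () {
                    ctx.drawImage(img, col * sliceSize, row * sliceSize);
                };
                img.src = baseUrl + '_' + row + '_' + col + '.png';
            })(row, col);
        }
    }
}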
Another option would be to load the image "manually" by writing a PNG decoder that draws the image data directly to a <canvas>, bypassing the ImageSourceAndroid class.
The question is old, so a few things have probably changed, but if you are having image quality issues with WebView then consider converting your image into PNG format.
Somehow, when I load the JPEG version of the image the quality is low, while loading the PNG image with the same resolution gives high quality.

Is Android (read typical devices) fast enough for a game that requires plotting pixel by pixel rather than blitting

I have an idea for an Android game which is a little different from the typical game that moves sprites (bitmaps) around the screen. I'd want to plot lots of little pixels to create my visuals.
PROS
no bitmaps required
pixel plotting of stuff like "fire" can react to wind.
no need to scale bitmaps; works with any screen resolution (let's pretend the device can handle more drawing because it's got a bigger screen).
CONS
slower to plot pixels than blit bitmaps
need a lot of animation frames.
WISHES
I'd like to update my game in real time; more is better, 30fps is good but not essential, and 15fps is enough.
PERFORMANCE Q...
Is the typical Android device fast enough to plot, say, half a screenful of pixels with a default background?
If full screen is not practical, what window size should be able to handle such refreshes?
You can achieve it in native code using C/C++.
You would create an offscreen bitmap, get its memory address, draw raw RGB pixels into it, and then blit the entire offscreen bitmap to the screen in the View.onDraw method. You can get nice framerates in fullscreen.
By the way, Android also manipulates pixels at a low level in native code (using the Skia library). Java classes like Canvas, Bitmap etc. just jump to native code to get the work done.
Using Android's Canvas in Java to paint pixels would be unrealistically slow.

handling large bitmaps in OpenGL ES and Android

I created a map app that uses a very large image as my map. It shows high-resolution tiles when the user zooms in to a certain degree and a lower-res bitmap of the whole image when zoomed out past 50%. It works OK; on my original Droid I sometimes see a very slight lag caused by new tiles being drawn in and by the garbage collector.
I'm now thinking that OpenGL ES would be a better way to render. I've never touched it before, but from what I've been reading in different tutorials, it seems I could create a quad that has 8 faces or so, and enable culling so it only draws the images that are currently visible on the screen.
Would this eliminate the lag completely? In my test app I currently have tiles being loaded on app startup, but I can tell the lag is due to those tiles being drawn for the first time. Thanks.
I have written an OpenGL-based map view and it's bloody awesome if I say so myself. Sadly it's for a commercial project so I can't offer code. I can however tell you that it has 1 rendering thread, a pool of 8 tile downloading threads, and (most pertinent to your question) 1 storage thread that loads and saves tiles to the NAND flash or the SD card. Rendering is done one tile at a time, each tile being 2 triangles (there are no quads in ES). It's still blisteringly quick.
