XMLHttpRequest and Crosswalk on Android (edited after I found a solution!) - android

Original post (before the fix):
I am developing an application (transformer.html) with the Intel XDK (Three.js and WebGL) to control another application. The communication is established via a web service (VB ASP.NET). The first version of this transformer.html had NO WebGL and no Crosswalk. In debug and emulator mode all was fine. It also compiled as a legacy hybrid mobile APK and was published to my Samsung Galaxy Tab 3 without any problems. The problems popped up when I implemented Crosswalk AND XMLHttpRequest in my app.
The same-origin policy does not seem to be the problem. I placed this on the web server side (in web.config):
<webServices>
    <protocols>
        <add name="HttpGet"/>
        <add name="HttpPost"/>
    </protocols>
</webServices>
As said, the constellation Transformer app on tablet -> WebService -> target web application runs perfectly in my local network!
The Problem:
The problem came up when I implemented a simple WebGL graphic. Everything runs perfectly in simulation and debug, and also when I run it in a Firefox browser. But when I build an APK, this combination (Crosswalk & XMLHttpRequest) fails!
kristina/14.03.15: It's working now. I consistently used this XDK example: https://github.com/gomobile/sample-webgl-threejs
I added my HTTP request to it and it worked fine!
Before:
XMLHttpRequest (and even POST via jQuery!) worked fine under an Android legacy build. I could get the APK running, but no Crosswalk WebGL graphic was visible in my app. So the HTTP POST was OK, but not Crosswalk. I wonder whether it is possible to build a legacy app with Crosswalk at all; even with the XDK's own Crosswalk demo, I was not able to build a hybrid legacy APK.
Crosswalk was OK in my app when I built with Crosswalk for Android, but in that case XMLHttpRequest seemed not to work: my connection to the WebService failed with a 404. But, as I said, all the communication should be there, since in the other modes (legacy, emulation, browser, whatever...) it works.
kristina/14.03.15: The XDK team recommends building it as Android/Crosswalk. This is working NOW. The trick is to set specific host parameters in the build section!
<access origin="*"/>
(Handle with care, e.g. reduce this to a limited set of hosts! My setup only works in my small local environment; this way it's OK for me.)
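For reference, a restricted whitelist in the intelxdk.config.additions.xml file might look like the sketch below; the hostnames are placeholders, not from my actual setup:

```xml
<!-- intelxdk.config.additions.xml: restrict network access to known hosts
     instead of the wide-open "*" wildcard.
     The origins below are illustrative placeholders. -->
<access origin="http://192.168.2.10" subdomains="false"/>
<access origin="http://my-webservice.local" subdomains="true"/>
```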
This info was very helpful during the error tracking:
*https://software.intel.com/en-us/xdk/docs/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file
https://software.intel.com/en-us/xdk/docs/using-the-cordova-for-android-ios-etc-build-option
http://www.ilinsky.com/articles/XMLHttpRequest/#usage
This was NOT very helpful, as in XDK/Cordova/Crosswalk the manifest.json does not seem to affect anything(!?):
https://crosswalk-project.org/documentation/manifest/content_security_policy.html
As so often, it was the same-origin topic I struggled with.
Many thanks to Paul (Intel), who gave me the final hint :-)
Now the construction
Crosswalk - WebGL - XMLHttpRequest - WebService is working perfectly.
My Setup:
XDK 1826 (latest release I got automatically!)
Samsung Galaxy Tab 3
If you need more info please let me know.
THIS code is running after the fixes:
main.js:
/*jslint browser:true, devel:true, white:true, vars:true, eqeq:true */
/*global THREE:false, requestAnimationFrame:false*/
/*
* Based on http://threejs.org/examples/canvas_geometry_cube.html
*/
document.addEventListener('DOMContentLoaded', function () {
    var camera, scene, renderer;
    var cube, plane;
    var targetRotation = 0;
    var targetRotationOnMouseDown = 0;
    var mouseX = 0;
    var mouseXOnMouseDown = 0;
    var windowHalfX = window.innerWidth / 2;
    var windowHalfY = window.innerHeight / 2;
    var auto_timer = 0;

    init();
    animate();

    function init() {
        renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true, devicePixelRatio: 1 });
        renderer.setSize(window.innerWidth, window.innerHeight);
        document.body.appendChild(renderer.domElement);
        camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 1, 1000);
        camera.position.y = 150;
        camera.position.z = 500;
        scene = new THREE.Scene();
        // Cube
        var geometry_cube = new THREE.CubeGeometry(200, 200, 200);
        var texture = THREE.ImageUtils.loadTexture('textures/crosswalk.png'); // works on mobile Android, NOT in browser or Intel XDK
        texture.anisotropy = renderer.getMaxAnisotropy();
        var material_cube = new THREE.MeshBasicMaterial({ map: texture });
        cube = new THREE.Mesh(geometry_cube, material_cube);
        cube.position.y = 150;
        scene.add(cube);
        // Plane
        var geometry_plane = new THREE.PlaneGeometry(180, 180);
        geometry_plane.applyMatrix(new THREE.Matrix4().makeRotationX(-Math.PI / 2));
        var material_plane = new THREE.MeshBasicMaterial({ color: 0xde613e });
        plane = new THREE.Mesh(geometry_plane, material_plane);
        scene.add(plane);
        document.addEventListener('mousedown', onDocumentMouseDown, false);
        document.addEventListener('touchstart', onDocumentTouchStart, false);
        document.addEventListener('touchmove', onDocumentTouchMove, false);
        // Generic setup
        window.addEventListener('resize', onWindowResize, false);
    }

    function onWindowResize() {
        windowHalfX = window.innerWidth / 2;
        windowHalfY = window.innerHeight / 2;
        camera.aspect = window.innerWidth / window.innerHeight;
        camera.updateProjectionMatrix();
        renderer.setSize(window.innerWidth, window.innerHeight);
    }

    function stopAutoRotate() {
        if (auto_timer) {
            window.clearTimeout(auto_timer);
        }
        auto_timer = window.setTimeout(startAutoRotate, 1000);
    }

    function startAutoRotate() {
        auto_timer = 0;
    }

    function animate() {
        requestAnimationFrame(animate);
        plane.rotation.y = cube.rotation.y += (targetRotation - cube.rotation.y) * 0.05;
        if (auto_timer === 0) {
            targetRotation += 0.025;
        }
        renderer.render(scene, camera);
    }

    function onDocumentMouseDown(e) {
        e.preventDefault();
        document.addEventListener('mousemove', onDocumentMouseMove, false);
        document.addEventListener('mouseup', onDocumentMouseUp, false);
        document.addEventListener('mouseout', onDocumentMouseOut, false);
        mouseXOnMouseDown = e.clientX - windowHalfX;
        targetRotationOnMouseDown = targetRotation;
        stopAutoRotate();
    }

    function onDocumentMouseMove(e) {
        mouseX = e.clientX - windowHalfX;
        targetRotation = targetRotationOnMouseDown + (mouseX - mouseXOnMouseDown) * 0.02;
        stopAutoRotate();
    }

    function onDocumentMouseUp(e) {
        document.removeEventListener('mousemove', onDocumentMouseMove, false);
        document.removeEventListener('mouseup', onDocumentMouseUp, false);
        document.removeEventListener('mouseout', onDocumentMouseOut, false);
        stopAutoRotate();
    }

    function onDocumentMouseOut(e) {
        document.removeEventListener('mousemove', onDocumentMouseMove, false);
        document.removeEventListener('mouseup', onDocumentMouseUp, false);
        document.removeEventListener('mouseout', onDocumentMouseOut, false);
        stopAutoRotate();
    }

    function onDocumentTouchStart(e) {
        if (e.touches.length === 1) {
            e.preventDefault();
            miniHttpTest();
            getSphereParametersWSxhr();
            mouseXOnMouseDown = e.touches[0].pageX - windowHalfX;
            targetRotationOnMouseDown = targetRotation;
            stopAutoRotate();
        }
    }

    function onDocumentTouchMove(e) {
        if (e.touches.length === 1) {
            e.preventDefault();
            mouseX = e.touches[0].pageX - windowHalfX;
            targetRotation = targetRotationOnMouseDown + (mouseX - mouseXOnMouseDown) * 0.05;
            stopAutoRotate();
        }
    }
});
function XHRObject() {
    // NOTE: 'onstart' and 'success' are not standard XMLHttpRequest members;
    // only 'onerror' (and onload/onreadystatechange) will actually fire.
    var xhr = new XMLHttpRequest();
    xhr.onerror = function () {};
    xhr.onstart = function () {};
    xhr.success = function () {};
    return xhr;
}

function getSphereParametersWSxhr() {
    var url = "http://192.444.2.444/transporter.asmx/getVideoCubeParameters";
    var xhr = new XMLHttpRequest(); // declared locally; was an implicit global
    var params = "";
    console.log(xhr);
    alert("----------------------------- getSphereParametersWSxhr - before open POST : " + xhr);
    xhr.open("POST", url, true);
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    // forbidden: xhr.setRequestHeader("Content-length", params.length);
    alert("after open POST : " + xhr);
    try {
        xhr.onreadystatechange = function () {
            alert("xhr.readyState == " + xhr.readyState + " xhr.status == " + xhr.status + " xhr.statusText: " + xhr.statusText + " xhr.responseText: " + xhr.responseText);
            if (xhr.readyState == 2 && xhr.status == 404) {
                console.log("404 page not found: " + xhr);
                alert("404 page not found: " + xhr);
            }
            if (xhr.readyState == 3) {
                console.log("ready state 3: " + xhr.statusText + " " + xhr.status);
                alert("ready state 3: " + xhr.statusText + " " + xhr.status);
            }
            if (xhr.readyState == 4) { // && xhr.status == 200
                console.log("ready state 4: " + xhr.statusText + " " + xhr.responseText);
                alert("ready state 4: " + xhr.statusText + " " + xhr.responseText);
                var erg1 = xhr.responseXML.getElementsByTagName("videoCubeSizeX")[0].textContent;
                var stringList = erg1.split(";");
                console.log(erg1);
                alert("videoCubeSizeX: " + erg1);
                alert(xhr.responseText);
            }
        };
        xhr.send(params);
    } catch (e) {
        console.log("XHR Post : " + e);
        alert("XHR Post : " + e);
    }
}
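The response handling at ready state 4 splits a semicolon-separated payload; that parsing can be pulled into a small pure helper. The exact payload format returned by the web service is an assumption based on the `erg1.split(";")` call above:

```javascript
// Split a semicolon-separated parameter payload such as "200;150;90"
// into an array of numbers. The payload format is an assumption based
// on the erg1.split(";") call in getSphereParametersWSxhr above.
function parseCubeParams(payload) {
    return payload
        .split(';')
        .filter(function (s) { return s.length > 0; })
        .map(Number);
}
```

Keeping the parsing separate from the XHR plumbing makes it testable without a device or a running web service.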
function miniHttpTest() {
    alert("miniHttpTest: with GET");
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "http://crosswalk-project.org/", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) {
            alert("ready state 4: " + xhr.statusText + " " + xhr.responseText);
        }
    };
    xhr.send();
}

Related

How to use webVR-polyfill by default on Chrome?

Device info: Android 9; HUAWEI COL-AL10; Chrome 80.0.3987.99
I am working on applying A-Frame to my mobile-phone-based HMD with my customized camera orientation and distortion.
However, after I modified the cardboard distorter (CardboardDistorter.prototype.computeMeshVertices_, line 63069 in aframe.js) and ran the samples, the distortion still uses Cardboard's distortion. When I modified the same distorter in the webvr-polyfill js, the new distortion worked for me.
And when I use other phones to test my A-Frame demo, the new distortion also works...
It seems A-Frame treats my phone as a native VR device, so it triggers the Google VR service in the backend...
How can I enable the new distortion I changed in CardboardDistorter? I need to run webVR-polyfill DIRECTLY on my phone.
I have built some Unity VR games based on Google VR recently; maybe they made my phone a "native Google VR device"?
Cardboard distortion (screenshot):
[cardboard distortion]
My test distortion (when using the polyfill it should display like this; screenshot):
[new distortion]
Here is my new distortion for test:
CardboardDistorter.prototype.computeMeshVertices_ = function (width, height, deviceInfo) {
    var vertices = new Float32Array(2 * width * height * 5);
    var lensFrustum = deviceInfo.getLeftEyeVisibleTanAngles();
    var noLensFrustum = deviceInfo.getLeftEyeNoLensTanAngles();
    var viewport = deviceInfo.getLeftEyeVisibleScreenRect(noLensFrustum);
    var vidx = 0;
    for (var e = 0; e < 2; e++) {
        for (var j = 0; j < height; j++) {
            for (var i = 0; i < width; i++, vidx++) {
                var u = i / (width - 1);
                var v = j / (height - 1);
                var s = u;
                var t = v;
                var x = lerp(lensFrustum[0], lensFrustum[2], u);
                var y = lerp(lensFrustum[3], lensFrustum[1], v);
                var d = Math.sqrt(x * x + y * y);
                var r = deviceInfo.distortion.distortInverse(d);
                var p = x * r / d;
                var q = y * r / d;
                // test distortion
                u = u - 0.5;
                v = v - 0.5;
                u = u * (v + 0.7);
                v = v * 0.7;
                u = u + 0.5;
                v = v + 0.5;
                //u = (p - noLensFrustum[0]) / (noLensFrustum[2] - noLensFrustum[0]);
                //v = (q - noLensFrustum[3]) / (noLensFrustum[1] - noLensFrustum[3]);
                u = (viewport.x + u * viewport.width - 0.5) * 2.0;
                v = (viewport.y + v * viewport.height - 0.5) * 2.0;
                vertices[vidx * 5 + 0] = u;
                vertices[vidx * 5 + 1] = v;
                vertices[vidx * 5 + 2] = s;
                vertices[vidx * 5 + 3] = t;
                vertices[vidx * 5 + 4] = e;
            }
        }
        var w = lensFrustum[2] - lensFrustum[0];
        lensFrustum[0] = -(w + lensFrustum[0]);
        lensFrustum[2] = w - lensFrustum[2];
        w = noLensFrustum[2] - noLensFrustum[0];
        noLensFrustum[0] = -(w + noLensFrustum[0]);
        noLensFrustum[2] = w - noLensFrustum[2];
        viewport.x = 1 - (viewport.x + viewport.width);
    }
    return vertices;
};
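The "test distortion" block in the middle of the loop is a pure (u, v) mapping, so it can be factored out and sanity-checked on its own (the function name is illustrative):

```javascript
// The experimental distortion from the mesh loop above, as a pure function.
// It maps texture coordinates in [0, 1] x [0, 1]; the center (0.5, 0.5)
// is a fixed point of the mapping.
function testDistort(u, v) {
    u = u - 0.5;
    v = v - 0.5;
    u = u * (v + 0.7); // horizontal squeeze depends on vertical position
    v = v * 0.7;       // uniform vertical squeeze
    return { u: u + 0.5, v: v + 0.5 };
}
```

Checking such a mapping in isolation (e.g. that the center stays fixed) is a quick way to tell whether the distorter you edited is actually the one being executed.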
Here is what I have tried:
Line 3511:
function shouldUseNative() {
    return false;
}
Line 80341:
window.hasNativeWebVRImplementation = false;
window.hasNativeWebXRImplementation = false;
[UPDATE]
I found that A-Frame 0.9.2 uses the polyfill directly, and that since 1.0.0 it uses the native Google VR service for display.
enterVR: {
    value: function (useAR) {
        var self = this;
        var vrDisplay;
        var vrManager = self.renderer.xr;
        // Don't enter VR if already in VR.
        if (this.is('vr-mode')) { return Promise.resolve('Already in VR.'); }
        // Has VR.
        if (this.checkHeadsetConnected() || this.isMobile) {
            vrDisplay = utils.device.getVRDisplay();
            vrManager.enabled = true;
            vrManager.setDevice(vrDisplay);
            if (this.hasWebXR) {
                // XR API.
                if (this.xrSession) {
                    this.xrSession.removeEventListener('end', this.exitVRBound);
                }
                navigator.xr.requestSession(useAR ? 'immersive-ar' : 'immersive-vr', {
                    requiredFeatures: ['local-floor'],
                    optionalFeatures: ['bounded-floor']
                }).then(function requestSuccess (xrSession) {
                    self.xrSession = xrSession;
                    vrManager.setSession(xrSession);
                    xrSession.addEventListener('end', self.exitVRBound);
                    if (useAR) {
                        self.addState('ar-mode');
                    }
                    enterVRSuccess();
                });
            } else {
                vrDisplay = utils.device.getVRDisplay();
                vrManager.setDevice(vrDisplay);
                if (vrDisplay.isPresenting &&
                    !window.hasNativeWebVRImplementation) {
                    enterVRSuccess();
                    return Promise.resolve();
                }
                var rendererSystem = this.getAttribute('renderer');
                var presentationAttributes = {
                    highRefreshRate: rendererSystem.highRefreshRate,
                    foveationLevel: rendererSystem.foveationLevel
                };
                return vrDisplay.requestPresent([{
                    source: this.canvas,
                    attributes: presentationAttributes
                }]).then(enterVRSuccess, enterVRFailure);
            }
            return Promise.resolve();
        }
        // No VR.
        enterVRSuccess();
        return Promise.resolve();
        // Callback that happens on enter VR success or enter fullscreen (any API).
        function enterVRSuccess () {
            // vrdisplaypresentchange fires only once when the first requestPresent is completed;
            // the first requestPresent could be called from ondisplayactivate and there is no way
            // to setup everything from there. Thus, we need to emulate another vrdisplaypresentchange
            // for the actual requestPresent. Need to make sure there are no issues with firing the
            // vrdisplaypresentchange multiple times.
            var event;
            if (window.hasNativeWebVRImplementation && !window.hasNativeWebXRImplementation) {
                event = new CustomEvent('vrdisplaypresentchange', {detail: {display: utils.device.getVRDisplay()}});
                window.dispatchEvent(event);
            }
            self.addState('vr-mode');
            self.emit('enter-vr', {target: self});
            // Lock to landscape orientation on mobile.
            if (!isWebXRAvailable && self.isMobile && screen.orientation && screen.orientation.lock) {
                screen.orientation.lock('landscape');
            }
            self.addFullScreenStyles();
            // On mobile, the polyfill handles fullscreen.
            // TODO: 07/16 Chromium builds break when `requestFullscreen`ing on a canvas
            // that we are also `requestPresent`ing. Until then, don't fullscreen if headset
            // connected.
            if (!self.isMobile && !self.checkHeadsetConnected()) {
                requestFullscreen(self.canvas);
            }
            self.renderer.setAnimationLoop(self.render);
            self.resize();
        }
        function enterVRFailure (err) {
            if (err && err.message) {
                throw new Error('Failed to enter VR mode (`requestPresent`): ' + err.message);
            } else {
                throw new Error('Failed to enter VR mode (`requestPresent`).');
            }
        }
    },
    writable: true
},
I found that in 0.9.2 this.hasXR = false, while in 1.0.0 this.hasXR = true. So in 1.0.0 I set this value to false, and then in the else {} branch it says vrDisplay.isPresenting is not defined. It seems that vrDisplay = utils.device.getVRDisplay(); cannot get the right value as it does in 0.9.2.
Then I found this code:
function getVRDisplay () { return vrDisplay; }
To make it return the right vrDisplay, I tried changing the code above it:
if (false) {
    var updateEnterInterfaces = function () {
        var sceneEl = document.querySelector('a-scene');
        if (sceneEl.hasLoaded) {
            sceneEl.components['vr-mode-ui'].updateEnterInterfaces();
        } else {
            sceneEl.addEventListener('loaded', updateEnterInterfaces);
        }
    };
    var errorHandler = function (err) {
        error('WebXR session support error: ' + err.message);
    };
    if (navigator.xr.isSessionSupported) {
        // Current WebXR spec uses a boolean-returning isSessionSupported promise
        navigator.xr.isSessionSupported('immersive-vr').then(function (supported) {
            supportsVRSession = supported;
            updateEnterInterfaces();
        }).catch(errorHandler);
        navigator.xr.isSessionSupported('immersive-ar').then(function (supported) {
            supportsARSession = supported;
            updateEnterInterfaces();
        }).catch(function () {});
    } else if (navigator.xr.supportsSession) {
        // Fallback for implementations that haven't updated to the new spec yet,
        // the old version used supportsSession which is rejected for missing
        // support.
        navigator.xr.supportsSession('immersive-vr').then(function () {
            supportsVRSession = true;
            updateEnterInterfaces();
        }).catch(errorHandler);
        navigator.xr.supportsSession('immersive-ar').then(function () {
            supportsARSession = true;
            updateEnterInterfaces();
        }).catch(function () {});
    } else {
        error('WebXR has neither isSessionSupported or supportsSession?!');
    }
} else {
    console.log('navigator.getVRDisplays', !!navigator.getVRDisplays);
    if (navigator.getVRDisplays) {
        navigator.getVRDisplays().then(function (displays) {
            console.log('displays[0]', displays[0]);
            var sceneEl = document.querySelector('a-scene');
            vrDisplay = displays.length && displays[0];
            console.log(vrDisplay);
            if (sceneEl) { sceneEl.emit('displayconnected', {vrDisplay: vrDisplay}); }
        });
    }
}
Then vrDisplay has the right value (CardboardVRDisplay). However, this way, when I enter VR mode the screen goes black. After comparing vrDisplay between 0.9.2 and 1.0.0, I found that the distorter of vrDisplay is null in 1.0.0.
I think the key point is vrDisplay = utils.device.getVRDisplay() in enterVR, but I don't know how to solve it. I hope someone sees this question and can help this poor newbie.

Select camera on Android Chrome

I came across several questions on this subject. I'm trying to select the rear camera on an Android device running Chrome.
So, after some reading:
var selector = document.getElementById('video-source-selector');
navigator.mediaDevices.enumerateDevices()
    .then(function (devices) {
        var videoDevices = devices.map(function (item) {
            if (item.kind === 'videoinput') {
                return item;
            }
        }).filter(function (element) {
            return element !== undefined;
        });
        var max = videoDevices.length;
        videoDevices.forEach(function (device, i) {
            var html = '';
            var div = document.createElement('div');
            if (i === max - 1) { // last element reached
                html += '<option value="' + device.deviceId + '" selected>' + device.label + '</option>';
            } else {
                html += '<option value="' + device.deviceId + '">' + device.label + '</option>';
            }
            div.innerHTML = html;
            selector.appendChild(div.childNodes[0]);
            console.log(device.kind + ": " + device.label + " id = " + device.deviceId);
        });
    })
    .catch(function (err) {
        console.log(err.name + ": " + err.message);
    });
selector.addEventListener("change", function () {
    console.log(selector.value); // Works as supposed: returns the ID of the selected device
});
Then, as I'm using Three.js in this app, I'm binding this ID to Jerome Etienne's three.js extension WebcamGrabbing (https://github.com/jeromeetienne/threex.webar):
var videoGrabbing = new THREEx.WebcamGrabbing(selector.value);
Then I had to modify the THREEx.WebcamGrabbing class this way (I removed the irrelevant parts):
THREEx.WebcamGrabbing = function (sourceDeviceId) {
    ...
    console.log('webcamgrabbing : ', sourceDeviceId); // returns the expected ID
    var constraints = {
        video: {
            optional: [{
                sourceId: sourceDeviceId
            }]
        }
    };
    // try to get user media
    navigator.getUserMedia(constraints, function (stream) {
        domElement.src = URL.createObjectURL(stream);
    }, function (error) {
        console.error("Cant getUserMedia()! due to ", error);
    });
    ...
};
But Chrome on Android still gives me the stream from the front camera, whatever device I select...
What am I missing?
EDIT: Based on this topic (GetUserMedia - facingmode), I came up with some logs to see what's happening here:
var constraints = {
    audio: false,
    video: { facingMode: { exact: "environment" } }
};
console.log('Try to get stream with constraints:', constraints);
navigator.getUserMedia(constraints, function (stream) {
    var videoTracks = stream.getVideoTracks();
    console.log('Got stream with constraints:', constraints); // OK
    console.log('Using video device: ' + videoTracks[0].label); // > Using video device: camera 0, facing back
    for (var i = 0; i < videoTracks.length; i++) {
        console.log('Found video device with constraints: ', videoTracks[i].label); // camera 0, facing back
    }
    domElement.src = URL.createObjectURL(stream);
}, function (error) {
    console.error("Cant getUserMedia()! due to ", error);
});
An alternative way to select the back camera on Chrome is to use the enumerateDevices method.
First, get all the video input IDs:
navigator.mediaDevices.enumerateDevices().then(function (devices) {
    devices.forEach(function (device) {
        if (device.kind == "videoinput") {
            // If the device is a video input, add its ID to an array.
        }
    });
});
Then the first element of the array will typically contain the ID of the front camera and the second element the ID of the back camera, though the order is not guaranteed, so check device.label where it is available.
Finally, pass the ID of the camera that you want to use:
navigator.getUserMedia({ audio: false, video: { sourceId: VideoId } }, successCallback, errorCallback);
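The filtering step above can be isolated into a small pure helper, which keeps the device-ID logic testable apart from the browser APIs (the function name is illustrative; the object shape follows what enumerateDevices returns):

```javascript
// Collect the deviceIds of all video inputs, in enumeration order.
// On many Android phones the front camera is listed first and the back
// camera second, but the order is not guaranteed by the spec, so prefer
// checking device.label when labels are available.
function videoInputIds(devices) {
    return devices
        .filter(function (d) { return d.kind === 'videoinput'; })
        .map(function (d) { return d.deviceId; });
}
```

In the browser you would call it as `navigator.mediaDevices.enumerateDevices().then(videoInputIds)`.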

How to schedule data sync in PhoneGap app, retrieving JSON

I am building a PhoneGap app and currently have set up some data sources with a JSON feed that I pull from to populate my app with data. Right now it only pulls that data once, the first time the app is run.
I would like to download data every time the app opens, and then, if it stays open for longer than 15 minutes, update again. The JSON feed can be queried with a last_mod_day parameter in the URL so it pulls only the data that has changed.
What would be the recommended way to go about this, and how do I check for a WiFi/data connection on the phone so the sync fails quietly when there is none?
Below is the code for my current function to grab the feed.
function downloadFileError(evt) {
    console.log('downloadFileError: ');
    console.log(evt.target.error);
}

function downloadFile(ep) {
    window.requestFileSystem(
        LocalFileSystem.PERSISTENT,
        0,
        function onFileSystemSuccess(fileSystem) {
            fileSystem.root.getFile("dummy.json", {
                create: true,
                exclusive: false
            },
            function gotFileEntry(fileEntry) {
                var filename = cordova.file.dataDirectory + 'assets/json/' + ep + '.json';
                var fileTransfer = new FileTransfer();
                fileEntry.remove();
                console.log('looking at ' + filename);
                fileTransfer.download(
                    encodeURI("http://www.myURL.com/theApp?ep=" + ep),
                    filename,
                    function (theFile) {
                        console.log("download complete: " + theFile.toURL());
                    },
                    function (error) {
                        console.log("DLERR: src=" + error.source + " t=" + error.target);
                    }
                );
            },
            function (evt) {
                console.log(evt);
                console.log('fn: ' + filename); // NOTE: filename is declared inside gotFileEntry and is out of scope here
            });
        },
        downloadFileError);
}
function downloadDynamicPages() {
    var deferred = $.Deferred();
    var pages = ['setOne', 'setTwo', 'setThree', 'setFour', 'setFive', 'setSix'];
    var cnt = 0;
    var total_pages = pages.length;
    //checkConnection();
    $.each(pages, function (k, v) {
        console.log('looking at ' + v);
        // NOTE: downloadFile is asynchronous, so the deferred below resolves
        // when the loop finishes, not when the files have actually arrived.
        downloadFile(v);
        cnt++;
        if (cnt >= total_pages) {
            deferred.resolve('all done with files');
        }
    });
    return deferred.promise();
}
Any help on any part of these questions would help me greatly. If needed, I can answer any questions. Thank you Stack.

Android + JQM + JSONP not working consistently

I am currently working on a hybrid Android app using jQuery Mobile + PhoneGap.
I am doing a JSONP request with AJAX.
It works well in Chrome on the PC, and also works well in my Android application when WiFi is connected,
but when the network connection changes to 3G the AJAX request does not respond.
I found a related post by #BruceHill that says the following:
"mobile operators do content modification before delivering it to the phone and this modification breaks jQuery"
Jquery mobile page not loading correctly on Netherlands 3G connections
Although I don't live in Holland, I tried what he suggests by locating all the JS files on a remote server and loading them via CDN, but unfortunately it didn't help.
I will be happy to get some help on this one...
This is my AJAX request:
$.mobile.allowCrossDomainPages = true;
$('#expertsListPage').live('pageshow', function (event) {
    $.mobile.showPageLoadingMsg();
    getExpertsList();
});

var catId;
var catName;

function getExpertsList() {
    $('#expertsList li').remove();
    catId = getUrlVars()["id"];
    catName = getUrlVars()["cat"];
    $('h1').text(unescape(catName));
    var url = 'http://apis.mydomain.com/mydata.svc/jsonp';
    $.ajax({
        cache: true,
        type: 'GET',
        url: url,
        dataType: 'jsonp',
        jsonp: 'callback',
        success: api_do
    });
}
var expertss;

function api_do(obj) {
    $('#expertsList li').remove();
    expertss = obj.experts;
    $.each(expertss, function (index, expert) {
        $('#expertsList').append('<li><a href="ExpertDetails.html?id=' + expert.id + '&catid=' + catId + '&catName=' + catName + '">' +
            '<img style="width:160px;height:160px;" src="' + expert.thumbnail + '"/>' +
            '<h4>' + expert.name + '</h4>' +
            '<p>' + expert.description + '</p>' +
            '</a></li>');
    });
    $('#expertsList').listview('refresh');
    $.mobile.hidePageLoadingMsg();
}
function getUrlVars() {
    var varsExperts = [], hash;
    var hashes = window.location.href.slice(window.location.href.indexOf('?') + 1).split('&');
    for (var i = 0; i < hashes.length; i++) {
        hash = hashes[i].split('=');
        varsExperts.push(hash[0]);
        varsExperts[hash[0]] = hash[1];
    }
    return varsExperts;
}
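getUrlVars above returns an array with named properties bolted onto it, which works but is easy to misread. An equivalent that takes the URL as a parameter and returns a plain object might look like this (a sketch; the name parseQuery is illustrative):

```javascript
// Parse the query string of a URL into a plain object, e.g.
// 'ExpertDetails.html?id=3&cat=Food' -> { id: '3', cat: 'Food' }.
function parseQuery(href) {
    var out = {};
    var q = href.indexOf('?');
    if (q === -1) { return out; }
    href.slice(q + 1).split('&').forEach(function (pair) {
        var kv = pair.split('=');
        out[kv[0]] = decodeURIComponent(kv[1] || '');
    });
    return out;
}
```

Passing the URL in (e.g. `parseQuery(window.location.href)`) also makes the parsing testable outside the browser.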
Try adding this code to your JavaScript; it might help:
$(document).on("mobileinit", function () {
    // Make your jQuery Mobile framework configuration changes here!
    $.support.cors = true;
    $.mobile.allowCrossDomainPages = true;
});

Infobox on polygons in Bing Maps Android SDK

I'm using the Bing Maps Android SDK and I'm looking for a way to click on the polygons that I have created and show an infobox. I've been able to accomplish this for a pushpin, but not for a polygon. I have seen this answer, but my app will create hundreds of such polygons, so I'm looking for a faster solution using the addHandler method on polygons. I know this is possible in the AJAX v7 flavour of the SDK, which is the underlying base of the Android SDK.
The code I tried for the AJAX version (tested using this emulator):
map.entities.clear();
latlon = map.getCenter();
var polygon = new Microsoft.Maps.Polygon([
    new Microsoft.Maps.Location(latlon.latitude, latlon.longitude - 0.15),
    new Microsoft.Maps.Location(latlon.latitude + 0.1, latlon.longitude - 0.05),
    new Microsoft.Maps.Location(latlon.latitude + 0.1, latlon.longitude + 0.05),
    new Microsoft.Maps.Location(latlon.latitude, latlon.longitude + 0.15),
    new Microsoft.Maps.Location(latlon.latitude - 0.1, latlon.longitude + 0.05),
    new Microsoft.Maps.Location(latlon.latitude - 0.1, latlon.longitude - 0.05),
    new Microsoft.Maps.Location(latlon.latitude, latlon.longitude - 0.15)
], null);
Microsoft.Maps.Events.addHandler(polygon, 'click', DisplayInfo);
map.setView({ zoom: 10 });
map.entities.push(polygon);

function DisplayInfo(e) {
    var vertices = e.target.getLocations();
    var verticeCenter = new Microsoft.Maps.Location(0, 0);
    // Calculating location of center
    for (i = 0; i < vertices.length - 1; i++) {
        verticeCenter.latitude = verticeCenter.latitude + vertices[i].latitude;
        verticeCenter.longitude = verticeCenter.longitude + vertices[i].longitude;
    }
    verticeCenter.latitude = verticeCenter.latitude / (vertices.length - 1);
    verticeCenter.longitude = verticeCenter.longitude / (vertices.length - 1);
    defaultInfobox = new Microsoft.Maps.Infobox(verticeCenter, { width: 200, height: 50 });
    map.entities.push(defaultInfobox);
}
However, when I pushed similar code to the BingMapsAndroid.js in the assets folder of the SDK, it doesn't work. The handler is attached, as I checked using the hasHandler method. Touches are recorded and their lat/long values are sent to the log, but the polygon click event is not invoked even when the touch lies inside a polygon.
Polygon test function in BingMapsAndroid.js:
this.PolygonTest = function () {
    _map.entities.clear();
    latlon = new Microsoft.Maps.Location(1, 1);
    console.log("Polygon test function");
    var polygon = new Microsoft.Maps.Polygon([
        new Microsoft.Maps.Location(latlon.latitude, latlon.longitude - 0.15),
        new Microsoft.Maps.Location(latlon.latitude + 0.1, latlon.longitude - 0.05),
        new Microsoft.Maps.Location(latlon.latitude + 0.1, latlon.longitude + 0.05),
        new Microsoft.Maps.Location(latlon.latitude, latlon.longitude + 0.15),
        new Microsoft.Maps.Location(latlon.latitude - 0.1, latlon.longitude + 0.05),
        new Microsoft.Maps.Location(latlon.latitude - 0.1, latlon.longitude - 0.05),
        new Microsoft.Maps.Location(latlon.latitude, latlon.longitude - 0.15)
    ], null);
    try {
        Microsoft.Maps.Events.addHandler(polygon, 'click', function (e) {
            console.log("Polygon click!"); // This is never invoked
        });
        Microsoft.Maps.Events.addHandler(_map, 'click', function (e) {
            var point = new MM.Point(e.getX(), e.getY());
            var loc = e.target.tryPixelToLocation(point);
            console.log("lat: " + loc.latitude + ", lon: " + loc.longitude);
        });
    } catch (e) {
        alert("Error");
    }
    _map.setView({ zoom: 10 });
    _map.entities.push(polygon);
    if (Microsoft.Maps.Events.hasHandler(polygon, 'click')) {
        console.log("Polygon has click handler."); // This works
    }
    // This function should be added to the click handler for polygon. I'll add it when I know the handler works.
    function DisplayInfo(e) {
        console.log("Polygon has been clicked.");
        var vertices = e.target.getLocations();
        var verticeCenter = new Microsoft.Maps.Location(0, 0);
        for (i = 0; i < vertices.length - 1; i++) {
            verticeCenter.latitude = verticeCenter.latitude + vertices[i].latitude;
            verticeCenter.longitude = verticeCenter.longitude + vertices[i].longitude;
        }
        verticeCenter.latitude = verticeCenter.latitude / (vertices.length - 1);
        verticeCenter.longitude = verticeCenter.longitude / (vertices.length - 1);
        defaultInfobox = new Microsoft.Maps.Infobox(verticeCenter, { width: 200, height: 50 });
        _map.entities.push(defaultInfobox);
    }
};
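The center calculation in DisplayInfo is a vertex average that skips the last location (since the ring repeats the first vertex to close the polygon). Factored out as a plain function, it can be checked without a map (the function name is illustrative):

```javascript
// Average a polygon ring's vertices, skipping the last entry because the
// ring repeats the first location at the end to close the polygon.
// Each location is expected to have .latitude and .longitude properties.
function vertexCenter(locations) {
    var lat = 0, lon = 0;
    var n = locations.length - 1;
    for (var i = 0; i < n; i++) {
        lat += locations[i].latitude;
        lon += locations[i].longitude;
    }
    return { latitude: lat / n, longitude: lon / n };
}
```

Note this is the average of the vertices, not a true area centroid; for convex polygons like the hexagon above it is a reasonable infobox anchor.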
After much testing, I found that the issue lies in Android. I tried the code in the Bing Maps Interactive SDK for AJAX v7, and these are my results:
Desktop browsers: works (as written in the question)
iPhone 4S: works
Android 2.3.4 with default browser: does not work
Android 2.3.4 with Opera: works
Android ICS with Opera: works
Android ICS with default browser: does not work
