Can anyone elaborate on which mobile browsers support multitouch, in particular the ability to press multiple "buttons" (or joysticks) at the same time? A game I'm making requires it, and I'd like to know so I can switch to a native app if necessary, though I'd prefer not to.
If the answer is that it's generally not supported, does anyone happen to know a library/framework that can build screens from an XML-like format (similar to HTML) in a cross-platform and cross-resolution way?
// get the coordinates of the first 3 touch points
function touch(e){
    var touch1 = {x: e.changedTouches[0].clientX, y: e.changedTouches[0].clientY};
    var touch2 = {x: e.changedTouches[1].clientX, y: e.changedTouches[1].clientY};
    var touch3 = {x: e.changedTouches[2].clientX, y: e.changedTouches[2].clientY};
}
// attach the handler to a touch event, e.g.
document.addEventListener('touchmove', touch, false);
You can get the coordinates of each touch point this way.
Multitouch is supported on Android Honeycomb and higher, and on iOS 4 and higher.
The ARCore sceneform sample project "hellosceneform" is cool and works really well.
Problem is the requirement to move the phone around in order to get a surface on which to place anchors. It's too slow.
My application does not require anything to show up on a vertical plane (a wall), only ever on the floor. Is there any way I can skip the "move the phone around" step, or at least speed it up?
I've tried:
session.getConfig().setPlaneFindingMode(Config.PlaneFindingMode.HORIZONTAL);
My thinking was that if I remove the need to look for vertical planes, it would all work faster... it's not quite fast enough, it seems.
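One thing I should double-check about that line: ARCore's Session.getConfig() returns a copy of the session config, so the modified config has to be handed back with session.configure() or the change never takes effect. In Kotlin that would look roughly like this (assuming `session` is the app's com.google.ar.core.Session):
val config = session.config                                   // getConfig() returns a copy
config.planeFindingMode = Config.PlaneFindingMode.HORIZONTAL
session.configure(config)                                     // apply the modified copy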
Thanks!
Unfortunately the framework is limited by (read: enabled by) the computer vision models that it uses to detect planes. The plane discovery controller (i.e. the "move the phone around" step) is a nudge to the user to provide the models with the depth information through the camera that they need to detect those planes. Removing this step won't speed up the process, it'll just leave the user without any instructions.
Without improvements to the core plane detection models I wouldn't expect that there's a way to make this faster. The best that we can do is come up with UX nudges that encourage the user to move the phone laterally more efficiently.
To hide the animation that shows users how they should move their phone, use:
arFragment.planeDiscoveryController.hide()
arFragment.planeDiscoveryController.setInstructionView(null)
Speeding up plane detection in ARCore is quite easy. Here's a code snippet:
class MainActivity : AppCompatActivity() {

    lateinit var arFrag: ArFragment

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_ux)

        arFrag = supportFragmentManager.findFragmentById(R.id.ux_fragment) as ArFragment
        arFrag.planeDiscoveryController.hide()
        arFrag.planeDiscoveryController.setInstructionView(null)
        arFrag.arSceneView.planeRenderer.isEnabled = false
        arFrag.arSceneView.scene.setOnUpdateListener(::onFrame)
    }

    // ........................................................
}
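For reference, one possible shape for the onFrame callback that is elided above. This is only a sketch of the idea (placing an anchor on the first tracked floor plane); everything inside it is my assumption, not part of the original snippet:
// assumed imports: com.google.ar.core.Plane, com.google.ar.core.TrackingState,
// com.google.ar.sceneform.AnchorNode, com.google.ar.sceneform.FrameTime
private fun onFrame(frameTime: FrameTime) {
    val frame = arFrag.arSceneView.arFrame ?: return
    for (plane in frame.getUpdatedTrackables(Plane::class.java)) {
        if (plane.type == Plane.Type.HORIZONTAL_UPWARD_FACING &&
            plane.trackingState == TrackingState.TRACKING) {
            // anchor to the centre of the first tracked floor plane
            val anchorNode = AnchorNode(plane.createAnchor(plane.centerPose))
            anchorNode.setParent(arFrag.arSceneView.scene)
            // attach your renderable to anchorNode here
            break
        }
    }
}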
Hope this helps.
I am working with minko and seem to be facing a light issue with Android.
I managed to compile a modified version of the code (based on the tutorials provided by Minko) for linux64, Android and HTML. I simply load and rotate 4 .obj files (the pirate one provided and 3 found on turbosquid, for demo purposes only).
The correct result is rendered in the linux64 and HTML versions, but the Android one has a "reddish" light thrown into it, even though the binaries are generated from the same C++ code.
Here are some pics to demonstrate the problem:
linux64 :
http://tinypic.com/r/qzm2s5/8
Android version :
http://tinypic.com/r/23mn0p3/8
(Couldn’t link the html version but it is close to the linux64 one.)
Here is the part of the code related to the light :
// create the spot light node
auto spotLightNode = scene::Node::create("spotLight");
// change the spot light position
//spotLightNode->addComponent(Transform::create(Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 2.f, 0.f)))); //ok linux - html
spotLightNode->addComponent(Transform::create(Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 8.f, 0.f))));
// create the spot light component
auto spotLight = SpotLight::create(.15f, .4f); //ok linux and html
// update the spot light component attributes
spotLight->diffuse(4.5f); //ori - ok linux - html
// add the component to the spot light node
spotLightNode->addComponent(spotLight);
//sets a red color to our spot light
//spotLightNode->component<SpotLight>()->color()->setTo(2.0f, 1.0f, 1.0f);
// add the node to the root of the scene graph
rootNode->addChild(spotLightNode);
As you can see, the color()->setTo call has been commented out, and the result is correct everywhere except on Android (after a clean and rebuild). Any idea what might be the source of the problem here?
Any pointer would be much appreciated.
Thx.
Can you test it on other Android devices or with a more recent ROM and give us the result? The LG-D855 (LG G3) is powered by an Adreno 330: those GPUs are known to have GLSL compiling defects, especially with loops and/or structs like the ones we use in Phong.fragment.glsl on the master branch.
The Phong.fragment.glsl on the dev branch has been heavily refactored to fix this (for directional lights only for now).
You could try the dev branch and a directional light and see if that fixes the issue. Be careful though: the dev branch introduces beta 3, with some API changes. The biggest ones are the math API, which now uses GLM, and the *.effect file format. The best way to go is simply to update your math code to use the new API; everything else should be straightforward.
I'm interested in AR applications on mobile devices, and naturally I would like to make better use of the compass.
The issue I've been working against isn't how twitchy the compass is (angular smoothing seems to solve that just fine). My main issue is that when the device is held vertically, the compass values start freaking out, causing an on-screen compass to flip about all over the place. I don't have a lot of experience with mobile application development, so I'm not sure what would be causing this: whether it's a Unity issue or just a limitation of the digital compass. I know other apps do seem to be able to use the compass fine in any orientation, but this is all stupidly new to me.
I've definitely tried moving the phone in a figure of 8. The device I have to play around with is a Nexus 4.
using UnityEngine;
using System.Collections;

public class Compass : MonoBehaviour {

    // Use this for initialization
    void Start () {
        Input.location.Start ();
        Input.compass.enabled = true;
    }

    // Update is called once per frame
    void Update ()
    {
        var heading = Input.compass.trueHeading;
        transform.eulerAngles = new Vector3 (0, 0, heading);
    }
}
Preamble :)
First off, I'm not an expert (unfortunately) in the subjects I'm about to talk about. But still, I've decided to share my thoughts.
Theory
The problem can be generalized in the following way. You want a continuous function that takes a 3D vector (the device orientation, in your case) and returns another vector orthogonal to the original one. The hairy ball theorem says that any such function must return the zero vector for some inputs (formally: every continuous tangent vector field on a sphere has to vanish somewhere). When that function is a compass, the zero vector shows up when the device is held vertically, which feels quite natural if you have ever used an ordinary compass.
Practice
Sometimes you want your app to tell which direction the phone's back (the rear camera) is pointing.
Or maybe you even want a combined approach:
If the phone is held flat, show which direction the phone's top is pointing.
If the phone is held vertically, show which direction the phone's back is pointing.
In both cases you need to use the gyroscope (or the fused orientation sensors) in addition to the compass; a rough sketch of that idea follows.
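Outside of Unity, the same idea in plain Android terms looks roughly like the sketch below. This is my own illustration, not the asker's Unity setup, and all names in it are made up: read the fused rotation-vector sensor, remap the axes for a phone held upright, and take the azimuth from the remapped rotation matrix.
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Sketch of a heading listener for a vertically held phone; all names are illustrative.
class HeadingListener(private val sensorManager: SensorManager) : SensorEventListener {
    private val rotation = FloatArray(9)
    private val remapped = FloatArray(9)
    private val orientation = FloatArray(3)

    fun start() {
        val sensor = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR)
        sensorManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_GAME)
    }

    override fun onSensorChanged(event: SensorEvent) {
        SensorManager.getRotationMatrixFromVector(rotation, event.values)
        // Remap so the axis looking out of the back camera acts as "up";
        // this keeps the azimuth stable while the phone is held upright.
        SensorManager.remapCoordinateSystem(
            rotation, SensorManager.AXIS_X, SensorManager.AXIS_Z, remapped)
        SensorManager.getOrientation(remapped, orientation)
        val headingDegrees = Math.toDegrees(orientation[0].toDouble())
        // feed headingDegrees into whatever draws the on-screen compass
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) {}
}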
I've been trying to sort out an issue for a week or so now. Googled to no avail. I'm currently working on an iOS/Android app that has a feature in the game to take a screenshot and have it show up in the mobile device's gallery.
I'm using the CameraRoll object, and the issue is that some objects on screen have smoothing applied; however, the CameraRoll screenshot ignores this, which makes some objects in the resulting screenshot have jaggies.
I've found a number of cries for help on the same issue while googling, but no answers.
Any help is much appreciated.
Jaggies in Flash are common, since smoothing on bitmaps is disabled by default (it's more CPU intensive). I'd recommend creating a new bitmap from the CameraRoll MediaEvent.SELECT event. Its event.data is a MediaPromise object, and inside that you'll find a read-only file property pointing at the image you need to load.
Then it's just a matter of creating your new image with smoothing.
// load the image from the MediaPromise first (AIR), then re-display it with smoothing
var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, function(e:Event):void {
    var img:Bitmap = loader.content as Bitmap;
    img.smoothing = true;
    addChild(img);
});
loader.loadFilePromise(event.data); // event.data is the MediaPromise from MediaEvent.SELECT
I've never tried this on mobile before, but it's a common issue which I believe you're encountering.
Addendum:
If you're having an issue with the system based screenshot services, you could create your own using pure AS3. The logic being, AS3 should do a pixel-by-pixel block copy of the stage (thereby respecting the smoothing values of your images).
Try this:
var myBitmapData:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight);
myBitmapData.draw(stage);
// then hand the captured frame to the device gallery
new CameraRoll().addBitmapData(myBitmapData);
In my current project I need my users to be able to scroll over and zoom in on large SVG images. A major problem I encountered, though, is the limit the Android WebView class puts on zooming in and out. Is there any way I can remove or change these limits to my own liking?
The standard zoom controls do not seem to support releasing these boundaries.
If my question is unclear, or if I need to elaborate, do not hesitate to ask.
Greets,
Wottah
Since no one seems to have come up with a different solution than using reflection - I'm not aware of any alternatives at this point - I wrote up a quick code snippet that illustrates how to bypass the upper limit on the zoom-in action.
Note that the code below will only work on ICS, and possibly Honeycomb, but I currently don't have a tablet lying around to inspect if the inner workings rely on the same ZoomManager class. Gingerbread, Froyo and Eclair all appear to implement the zooming functionality more or less directly in the WebView class. With the example below it should be fairly easy to add some code to also take those operating systems into account.
// just set an Activity's content view to a single WebView for this test
WebView mWebview = new WebView(this);
setContentView(mWebview);

try {
    // retrieve the ZoomManager from the WebView
    Class<?> webViewClass = mWebview.getClass();
    Field mZoomManagerField = webViewClass.getDeclaredField("mZoomManager");
    mZoomManagerField.setAccessible(true);
    Object mZoomManagerInstance = mZoomManagerField.get(mWebview);

    // modify the "default max zoom scale" value, which controls the upper limit,
    // and set it to something very large; e.g. Float.MAX_VALUE
    Class<?> zoomManagerClass = Class.forName("android.webkit.ZoomManager");
    Field mDefaultMaxZoomScaleField = zoomManagerClass.getDeclaredField("mDefaultMaxZoomScale");
    mDefaultMaxZoomScaleField.setAccessible(true);
    mDefaultMaxZoomScaleField.set(mZoomManagerInstance, Float.MAX_VALUE);
} catch (Exception e) {
    // the reflection calls throw checked exceptions and may fail on other releases
    e.printStackTrace();
}