I am currently creating a game in HTML5 using canvas. Right now it works pretty well in all browsers except Google Chrome for Android, which refuses to display my fillText commands...
I noticed that the game works when I disable Chrome for Android's 2D acceleration through chrome://flags/... but I obviously cannot ask users to disable 2D acceleration in Chrome before playing. Does anybody have a solution to get fillText to display under Google Chrome for Android?
You will find the code below: basically, I can see the text in all browsers except Chrome for Android... it draws the background but not the text.
//Get the canvas
var canvas2 = document.getElementById("layer2");
var ctx2 = canvas2.getContext("2d");

//Rendering function (draw background and draw image)
var render = function () {
    ctx2.drawImage(background, 0, 0);
    ctx2.fillText("Lolo", 400, 400);
};

// main loop
var main = function () {
    var now = Date.now();
    var delta = now - then;
    update(delta / 1000);
    render();
    then = now;
};

var then = Date.now();
setInterval(main, 1);
Thank you!
Laurent
This appears to be a bug in the software renderer; you will need to make your canvas at least 256 pixels in size to force it onto the GPU path. Paul Lewis faced a similar problem and blogged about it.
If you can provide a reduced demo, I will file the bug against Chrome (or you can do it at http://m.crbug.com/new).
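As an illustration (not from the original post), here is a minimal sketch of that workaround: bump the canvas backing store to at least 256 pixels in each dimension while keeping the displayed (CSS) size unchanged, so the layer is pushed onto the GPU path.
// Workaround sketch: make the backing store at least 256px in each dimension
// so Chrome for Android takes the GPU path, but keep the on-screen size.
var canvas2 = document.getElementById("layer2");
var cssWidth = canvas2.width;
var cssHeight = canvas2.height;
if (canvas2.width < 256 || canvas2.height < 256) {
    var scale = Math.max(256 / canvas2.width, 256 / canvas2.height);
    canvas2.width = Math.ceil(cssWidth * scale);
    canvas2.height = Math.ceil(cssHeight * scale);
    canvas2.style.width = cssWidth + "px";    // displayed size stays the same
    canvas2.style.height = cssHeight + "px";
    canvas2.getContext("2d").scale(scale, scale); // drawing coordinates stay the same
}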
Quick Summary:
I'm working on a VR application, and we want to render a website in 3D space in our VR scene, preferably on a texture. There are many ways to accomplish this on desktop (PC/Mac), but not on Android devices.
Details:
We have a working prototype on Windows that relies on Zen Fulcrum's Embedded Browser plugin, but we want to support Android devices such as Google Daydream, Gear VR, and eventually the Android-based standalone headsets that will be released within the next year.
Here's a list of all the Unity webpage rendering solutions I've found so far:
Embedded Browser by Zen Fulcrum - supports Windows and OSX
UniWebView by Yumigi - They do not support rendering webpages to textures. Webpages are rendered on a flat layer on top of the graphics engine's rendering.
In-App Web Browser by Piotr Zmudzinski - Appears to have the same limitation
Webkit for iOS by Chestnut Games - Only works for iOS, their company website doesn't seem to work.
HTML Engine for NGUI & Unity GUI by ZHing - Doesn't appear to be a real web browser. It just seems like someone made some scripts that convert HTML into UI elements.
EasyWebViewTexture For Android by JaeYunLee - I know some devs who used this but the project was mysteriously taken down and discontinued.
Awesomium for Unity (Beware: dead/suspicious link) by Khrona Software - Their wiki articles for Unity are mysteriously blank both on their wiki site and on their GitHub. The entire Awesomium website has been shut down as well, and so has the Khrona Software company website. They have a GitHub repo for their Unity integration, but their readme says it only supports Windows & Mac.
uWebKit3 by Mythos Labs - No longer available. Their original website no longer exists. Previous versions of uWebKit claimed to support Android. Their announcement on the Unity forums about them closing down was very mysterious and abrupt. I found something on GitHub that claims to be uWebKit3, but the readme claims to only support PC and Mac.
Unreal Engine 4's Web Browser Widget - This is an experimental widget that's being built into UE4, and on desktop it renders webpages in the 3D world with few problems. Unfortunately, even though it claims to support Android, when you actually deploy to a device you'll find that the browser just gets rendered flat on top of the game engine's rendering, not in the 3D world.
I really can't find any info on why this is no longer supported on Android when it used to be. Maybe something changed in the Android stack? If I were to try messing with Chromium myself to get it working on Android, would I just run into whatever dead ends killed all these projects?
The Android SDK offers the native WebView component, which is rendered as an independent on-screen element that we can't really hook into. Google just released a preview for the Chrome VR browser in Daydream, but it's very early in development and I really don't think they plan to provide a solution for VR devs to use anytime soon. Oculus has the experimental Carmel Browser but it seems more focused on rendering WebVR than providing tools for VR devs to hook into. I've been in touch with someone from the Oculus Web Browser team (met him at PAX, lol), and they plan to release a feature to make it easier for devs to launch web pages that open in Gear VR's built-in Oculus Web Browser. But that's an app-switching scenario, it doesn't let you render a webpage in your own 3D scene.
One possibility that I'm considering exploring: what if we got a server to render webpages for users and stream that content back to their devices as texture data? It'd kinda be like what OnLive did with videogames, except you could tolerate more latency at times. It could use Selenium (or some other webdev visual regression testing tool) to handle the rendering... Eh, it sounds like a total pain to make, though, not sure if our company can afford to spend months on something like that. -_-
Any suggestions? Thanks!
I looked around for webview plugins that support video, and the Unity 3D WebView plugin for Android is the only one I could find that does. I've been using it for a while now and would recommend it. Here's an example of using it:
using UnityEngine;
using Vuplex.WebView;

class SceneController : MonoBehaviour {

    // I set this in the editor to reference a webview in the scene
    public WebViewPrefab webPrefab;

    void Start() {
        webPrefab.Init(1.5f, 1.0f);
        webPrefab.Initialized += (sender, args) => {
            // load the video so the user can interact with it
            webPrefab.WebView.LoadUrl("https://www.youtube.com/watch?v=dQw4w9WgXcQ");
        };
    }
}
UPDATE: I updated the repo to support video, and it is now a fully functioning 3D browser based on the GeckoView browser engine. It relies on the OVROverlay from Oculus to render frames generated in an Android plugin onto a Unity3D texture.
I'm not sure whether this question or the newer one is the duplicate, but this is a repo I made in the hope that we can implement a nice in-game browser. It's a bit buggy/slow, but it works (most of the time).
It uses a Java plugin that renders an Android WebView to a Bitmap by overriding the view's draw method, converts that to a PNG, and passes it to a Unity RawImage. There is plenty of work to do, so feel free to improve it!
How to use it:
At the repo you can find the plugin (unitylibrary-debug.aar), which you need to import into Assets/Plugins/Android/, as well as BrowserView.cs and UnityThread.cs, which you can use to convert an Android WebView into a texture that Unity's RawImage can display. Fill in BrowserView.cs's public fields appropriately, and make sure your API level is set to 25 in Unity's player settings.
Code samples
Here's how to override the WebView's draw method to create the bitmap and PNG, and initialize the variables you need:
public class BitmapWebView extends WebView {

    // stream, array, bm and bmCanvas are fields of this class;
    // outputWindowWidth/outputWindowHeight and UnityBitmapCallback
    // are set up elsewhere in the plugin.
    private void init() {
        stream = new ByteArrayOutputStream();
        array = new ReadData(new byte[]{});
        bm = Bitmap.createBitmap(outputWindowWidth,
                outputWindowHeight, Bitmap.Config.ARGB_8888);
        bmCanvas = new Canvas(bm);
    }

    @Override
    public void draw(Canvas canvas) {
        // draw onto our own canvas backed by the bitmap
        super.draw(bmCanvas);
        bm.compress(Bitmap.CompressFormat.PNG, 100, stream);
        array.Buffer = stream.toByteArray();
        UnityBitmapCallback.onFrameUpdate(array,
                bm.getWidth(),
                bm.getHeight(),
                canGoBack,
                canGoForward);
        stream.reset();
    }
}
// you need this class to communicate properly with Unity
public class ReadData {
    public byte[] Buffer;

    public ReadData(byte[] buffer) {
        Buffer = buffer;
    }
}
Then we pass the PNG to a Unity RawImage.
Here's the Unity receiving side:
// class used for the callback with the texture
class AndroidBitmapPluginCallback : AndroidJavaProxy
{
    public AndroidBitmapPluginCallback() : base("com.unityexport.ian.unitylibrary.PluginInterfaceBitmap") { }
    public BrowserView BrowserView;

    public void onFrameUpdate(AndroidJavaObject jo, int width, int height, bool canGoBack, bool canGoForward)
    {
        AndroidJavaObject bufferObject = jo.Get<AndroidJavaObject>("Buffer");
        byte[] bytes = AndroidJNIHelper.ConvertFromJNIArray<byte[]>(bufferObject.GetRawObject());
        if (bytes == null)
            return;
        if (BrowserView != null)
        {
            UnityThread.executeInUpdate(() => BrowserView.SetTexture(bytes, width, height, canGoBack, canGoForward));
        }
        else
            Debug.Log("TestAndroidPlugin is not set");
    }
}
public class BrowserView : MonoBehaviour {

    // Browser view needs a RawImage component to display webpages
    private Texture2D _imageTexture2D;
    private RawImage _rawImage;
    private AndroidJavaObject _ajc;

    void Start () {
        _imageTexture2D = new Texture2D(Screen.width, Screen.height, TextureFormat.ARGB32, false);
        _rawImage = gameObject.GetComponent<RawImage>();
        _rawImage.texture = _imageTexture2D;

#if !UNITY_EDITOR && UNITY_ANDROID
        // Get your Java class and create a new instance
        var tempAjc = new AndroidJavaClass("YOUR_LIBRARY.YOUR_CLASS");
        _ajc = tempAjc.CallStatic<AndroidJavaObject>("CreateInstance");
        // send the callback object to Java to get frame updates
        AndroidBitmapPluginCallback androidPluginCallback = new AndroidBitmapPluginCallback {BrowserView = this};
        _ajc.Call("SetUnityBitmapCallback", androidPluginCallback);
#endif
    }

    // Android callback to change our browser view texture
    public void SetTexture(byte[] bytes, int width, int height, bool canGoBack, bool canGoForward)
    {
        if (width != _imageTexture2D.width || height != _imageTexture2D.height)
            _imageTexture2D = new Texture2D(width, height, TextureFormat.ARGB32, false);
        _imageTexture2D.LoadImage(bytes);
        _imageTexture2D.Apply();
        _rawImage.texture = _imageTexture2D;
    }
}
I have worked so hard on an app that displays perfectly on my Galaxy Note 3. However, it does not display right on an iPhone or on one other Android device I tested. My issue is with addChild() and then resizing the child to fit the screen. For some reason, when I add the background (addBG();) the screen size works, but when I addChild content to the BG it works great on my Note 3 but not on the iPhone or the other Android device.
My issue is with the screenX and screenY vars I created. They seem to be different across devices. Or it is a "rendering order issue", I am not sure. I would be so grateful for help fixing this so it looks great on each phone. I have read some tutorials on this subject, but they are confusing. I think my code is close, I hope, and maybe it just needs a tweak.
Here is a shot of the about screen on an iPhone. See that the white does not fill the whole screen.
And here is a shot from my Android Note 3.
Declared vars in the package:
This is not my full code, of course, but only what I believe is relevant.
public var screenX:int;
public var screenY:int;

public function Main()
{
    if (stage)
    {
        setStage();
        addBG();
    }
}
public function setStage()
{
    stage.scaleMode = StageScaleMode.NO_SCALE;
    stage.align = StageAlign.TOP_LEFT;

    if (flash.system.Capabilities.screenResolutionX > stage.stageWidth)
    {
        screenX = stage.stageWidth;
        screenY = stage.stageHeight;
    }
    else
    {
        screenX = flash.system.Capabilities.screenResolutionX;
        screenY = flash.system.Capabilities.screenResolutionY;
    }
}
This works: addBG();
public function addBG()
{
    theBG = new BG();
    addChild(theBG);
    theBG.width = screenX;
    theBG.height = screenY;
}
This does not: addAbout();
public function addAbout()
{
    About = new viewAbout();
    About.width = screenX;
    About.height = screenY;
    theBG.addChild(About);
    TweenMax.fromTo(About, 1, {alpha:0}, {alpha:1, ease:Expo.easeOut});
}
UPDATE: Yet another, more complex load is also called from a button and has the same issue. I hope you can see the logic of what I am trying to do: first fit the BG to the device, then load and resize content to fit proportionally into the BG. The reason is that the BG may be distorted to different proportions and that's OK, but the content cannot be. So here is the example that loads content into the BG and then more content into that container.
public function addRosaryApp()
{
    Rosary = new viewRosaryApp();
    Rosary.width = screenX;
    Rosary.height = screenY;
    theBG.addChild(Rosary);
    TweenMax.fromTo(Rosary, 1, {alpha:0}, {alpha:1, ease:Expo.easeOut});

    contentRosary = new contentRosaryApp();
    theBG.addChild(contentRosary);
    contentRosary.width = screenX;
    contentRosary.scaleY = contentRosary.scaleX;
    contentRosary.x = screenX/2 - contentRosary.width/2;
    contentRosary.y = menuBanner.height;
}
Have you tried adding the child to the stage first, and then setting the size? That's the only difference I can see between addBG and addAbout:
About = new viewAbout();
theBG.addChild(About); // do this first
About.width = screenX; // then set width
About.height = screenY; // and height
I think your problem may have to do with one of several things.
My findings so far from my own devices (an iPad Air, an iPhone 4S, an LG G2 and an Acer Iconia A500) are that the only device sizes reported correctly at all times are stage.fullScreenWidth and stage.fullScreenHeight.
First, Android reports Capabilities.screenResolutionX as the LONG side of your Android device.
iOS devices, however, report the SHORT side of your device to Capabilities.screenResolutionX.
Second, stage.stageWidth and stage.stageHeight seem to report the wrong size on both Android and iOS.
In fact, if you check these before and after setting stage.scaleMode and stage.align, the values will differ completely (although Android will show it almost correctly).
My suggestion is that you set a base size (width and height) for your app and then compare it with the actual stage.fullScreenWidth and stage.fullScreenHeight reported by the device. That way you can get some nice scale ratios to use when scaling your display objects. That's what I'm currently doing in my apps and it scales just fine :-)
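For illustration only (the 768x1024 base size and the use of the About clip are placeholder assumptions), here is a minimal sketch of that scale-ratio idea:
// Sketch: pick a base design size, derive a scale ratio from the real
// screen size, and scale content uniformly so proportions are preserved.
var baseWidth:Number = 768;    // hypothetical design width
var baseHeight:Number = 1024;  // hypothetical design height
var scaleRatio:Number = Math.min(stage.fullScreenWidth / baseWidth,
                                 stage.fullScreenHeight / baseHeight);
About.scaleX = About.scaleY = scaleRatio;
// center the scaled content horizontally
About.x = (stage.fullScreenWidth - About.width) / 2;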
I'm developing a feature in a web app which lets the user take a photo with whatever camera is attached to the device. This is mostly to be used on Android 4.0+ phones with Google Chrome v28+.
On both desktop and phone I can set up a video tag to properly display video from the device's camera using getUserMedia and createObjectURL. My problem is that when I try to draw a snapshot from the video element, nothing gets copied to the canvas:
var oVideo = jQuery('#myVideo');
var oCanvas = jQuery('#myCanvas');
var oContexto = oCanvas[0].getContext("2d");
var nAncho = oVideo.width();
var nAlto = oVideo.height();
//resizes the canvas: css
oCanvas.width(nAncho);
oCanvas.height(nAlto);
//resizes the canvas: image resolution
oCanvas[0].width = nAncho;
oCanvas[0].height = nAlto;
oContexto.fillRect(20, 20, 40, 40);
oContexto.drawImage(oVideo[0], 0, 0, nAncho, nAlto);
oContexto.fillRect(80, 80, 40, 40);
I added the two fillRect calls just to be sure that the code was being executed. The result is that the two black rectangles are drawn but the snapshot is not.
The problem only occurs in Google Chrome v28 for Android; it works properly in Google Chrome 28 (and Firefox 22) for Windows.
Is it a Google Chrome bug (I couldn't find it in http://code.google.com/p/chromium/)? Is there a workaround? Or am I simply doing something wrong?
I'll appreciate any insight to help me understand what is going on.
The problem of not being able to draw video frames to a canvas is mentioned in the following issues:
https://code.google.com/p/chromium/issues/detail?id=174642
https://code.google.com/p/chromium/issues/detail?id=181037
Both suggest the problem has been solved, but as I'm not familiar with the development process I don't know if this means it will work on our phones any time soon.
Edit: I just tried this in Chrome Beta and it works fine.
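Independent of the Chrome bug, it is generally worth making sure the video element actually has frame data before calling drawImage. Here is a minimal sketch of such a guard (it reuses the oVideo, oContexto, nAncho and nAlto variables from the question, and is only a suggestion, not a confirmed fix for the bug):
// Guard sketch: only draw once the video reports at least one decoded frame.
// HAVE_CURRENT_DATA corresponds to readyState >= 2.
function drawSnapshot() {
    var video = oVideo[0];
    if (video.readyState >= video.HAVE_CURRENT_DATA) {
        oContexto.drawImage(video, 0, 0, nAncho, nAlto);
    } else {
        // the stream hasn't delivered a frame yet, try again shortly
        setTimeout(drawSnapshot, 100);
    }
}
drawSnapshot();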
I have to write a simulation of balls falling in a container for Android. First, I tried using Box2DWeb in an HTML5 canvas, but with 3 solid bodies and 50 balls it performs really slowly, even on a desktop computer with Firefox (curiously, with Chrome it performs really well). Here is the live demo.
And here is the code.
var b2Vec2 = Box2D.Common.Math.b2Vec2
, b2BodyDef = Box2D.Dynamics.b2BodyDef
, b2Body = Box2D.Dynamics.b2Body
, b2FixtureDef = Box2D.Dynamics.b2FixtureDef
, b2Fixture = Box2D.Dynamics.b2Fixture
, b2World = Box2D.Dynamics.b2World
, b2PolygonShape = Box2D.Collision.Shapes.b2PolygonShape
, b2CircleShape = Box2D.Collision.Shapes.b2CircleShape
, b2DebugDraw = Box2D.Dynamics.b2DebugDraw
;
var world = new b2World(
new b2Vec2(0, 10) //gravity
, true //allow sleep
);
var fixDef = new b2FixtureDef;
fixDef.density = 1.0;
fixDef.friction = 0.5;
fixDef.restitution = 0.2;
var bodyDef = new b2BodyDef;
//create ground
var canvas = $('#canvas'),
offsetX = (canvas.width() / 30) / 4,
offsetY = (canvas.height() / 30) / 5; //center the machine on the screen.
bodyDef.type = b2Body.b2_staticBody;
fixDef.shape = new b2PolygonShape;
fixDef.shape.SetAsBox(5, 0.5);
bodyDef.position.Set(5 + offsetX, 10 + offsetY);
world.CreateBody(bodyDef).CreateFixture(fixDef);
fixDef.shape.SetAsBox(0.5, 7);
bodyDef.position.Set(0 + offsetX, 3 + offsetY);
world.CreateBody(bodyDef).CreateFixture(fixDef);
bodyDef.position.Set(10 + offsetX, 3 + offsetY);
world.CreateBody(bodyDef).CreateFixture(fixDef);
//create some objects
var numObjects = 50;
bodyDef.type = b2Body.b2_dynamicBody;
for(var i = 0; i < numObjects; ++i) {
    fixDef.shape = new b2CircleShape(
        0.6 //Math.random() + 0.1 //radius
    );
    bodyDef.position.x = Math.random() * 9 + offsetX;
    bodyDef.position.y = Math.random() * 9 - 2;
    world.CreateBody(bodyDef).CreateFixture(fixDef);
}
//setup debug draw
var debugDraw = new b2DebugDraw();
debugDraw.SetSprite(document.getElementById("canvas").getContext("2d"));
debugDraw.SetDrawScale(30.0);
debugDraw.SetFillAlpha(0.5);
debugDraw.SetLineThickness(1.0);
debugDraw.SetFlags(b2DebugDraw.e_shapeBit | b2DebugDraw.e_jointBit);
world.SetDebugDraw(debugDraw);
var rate = 60;
window.requestAnimFrame = (function(){
    return window.requestAnimationFrame ||
        window.webkitRequestAnimationFrame ||
        window.mozRequestAnimationFrame ||
        function( callback ){
            window.setTimeout(callback, 1000 / rate);
        };
})();

//update
(function update() {
    requestAnimFrame(update);
    world.Step(1 / rate, 10, 10);
    world.DrawDebugData();
    world.ClearForces();
})();
My question is: what if I write a native implementation using the Android canvas (not the HTML5 one) and Box2D? Will I achieve smooth movement for the balls?
And the hidden question is: is the performance so poor because of the drawing or because of so many physics calculations? Usually, how much performance can I gain by going native when physics calculations are involved?
The main difference is that with HTML5 and Box2DWeb your game is limited by the browser's optimizations and your own code optimizations.
Some browsers don't have a hardware-accelerated canvas, or their JavaScript engines are not optimized enough. You can see that difference even in desktop browsers. Google Chrome, for instance, does a lot of optimization behind the scenes inside its V8 engine.
Because there are so many differences between browsers' JavaScript engines (as you noticed with Firefox and Chrome), it's hard to optimize code for all of them.
Since mobile hardware is usually very limited and mobile browsers are not evolved enough, making the optimizations needed to achieve high frame rates is very painful and may not be possible at all.
For instance, the code you provided might suffer in browsers that have no native requestAnimationFrame. Also, drawing shapes on the fly is too expensive for low-end devices. So the answer to your last question might be: both the drawing and the physics calculations are killing the performance, but the major bottleneck is the drawing for sure.
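As an illustration of reducing the drawing cost (a sketch, not part of the original question), you could pre-render each ball once to an offscreen canvas and blit it with drawImage in your own render loop, instead of letting the debug draw rebuild the circle paths every frame. The 30 px-per-metre scale and 0.6 m radius match the question's code; the bodies array is an assumed list of the dynamic ball bodies.
// Sketch: pre-render a ball sprite once to an offscreen canvas...
var scale = 30, radius = 0.6;
var ballSprite = document.createElement("canvas");
ballSprite.width = ballSprite.height = Math.ceil(radius * 2 * scale);
var sctx = ballSprite.getContext("2d");
sctx.fillStyle = "rgba(100, 150, 250, 0.5)";
sctx.beginPath();
sctx.arc(radius * scale, radius * scale, radius * scale, 0, Math.PI * 2);
sctx.fill();

// ...then blit it in the render step instead of calling world.DrawDebugData()
function drawBalls(ctx, bodies) {
    for (var i = 0; i < bodies.length; i++) {
        var p = bodies[i].GetPosition(); // Box2D position, in metres
        ctx.drawImage(ballSprite,
            p.x * scale - radius * scale,
            p.y * scale - radius * scale);
    }
}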
Using the native Android canvas allows quick responses, since the game will use the device's hardware more efficiently than a browser can.
In addition, Box2D for Android is much more efficient than Box2DWeb (a nice port of the original Box2D, but one that still suffers from performance gaps).
Bottom line: if your target is primarily Android devices, you should go with the native implementation. But if you want to target a huge range of browsers and devices without rewriting code for every platform, go with HTML5. (Every choice has consequences; you have to choose what best suits your needs.)
If you decide to go with the HTML5 canvas, see this answer. (It's your own question, by the way :) )
And if you are really committed, learn a little about WebGL and OpenGL ES.
When you hide the address bar in web apps you get nearly the feel of a native app. Unfortunately, hiding the address bar in jQuery Mobile only works if the content is large enough. If the content is too small it does not work, even with fullscreen and a fixed footer; see http://jquerymobile.com/demos/1.2.0/docs/toolbars/bars-fixed.html. The reason appears to be a bug in the Android browser: if the content is too small, the jQuery function $(window).height() reports only 450px as the screen height (I'm using Android 2.3.6 with a Galaxy S2), because the height of the address bar is missing. If the content is big enough, the function reports 508px, as expected. Another quite good approach, http://www.semicomplete.com/blog/tags/jquery%20mobile, does not work either. I found a solution that works but needs a delay of 500 ms, which makes the address bar flicker briefly when a page loads. Is there another approach that avoids this flickering of the address bar?
But maybe someone has an idea of how this could work even better. And does it work well on iOS? It would be great if someone could test this with an iPhone.
Thanks in advance.
Here is the code:
var fixgeometry = function() {
    /* Calculate the geometry that our content area should take */
    var header = $(".header:visible");
    var footer = $(".footer:visible");
    var content = $(".content:visible");
    var viewport_height = $(window).height() + 60; /* Here is the idea! Make it bigger here and make it smaller later */
    var content_height = viewport_height - header.outerHeight() - footer.outerHeight();

    /* Trim margin/border/padding height */
    content_height -= (content.outerHeight() - content.height());
    content.height(content_height);

    setTimeout(function(){
        window.scrollTo(0, 1);
        content.height(content_height - 60);
    }, 500);
};

$(document).ready(function(){
    $(window).bind("orientationchange resize pageshow", fixgeometry);
});
The scrollTo function is implemented in jQuery, but it doesn't work after an orientation change or after input in a text field, so it's necessary to call it here inside the setTimeout function.