Render website as texture in Unity3D on Android

Quick Summary:
I'm working on a VR application, and we want to render a website in 3D space in our VR scene, preferably on a texture. There are many ways to accomplish this on desktop (PC/Mac), but not on Android devices.
Details:
We have a working prototype on Windows that relies on Zen Fulcrum's Embedded Browser plugin, but we want to support Android devices such as Google Daydream, Gear VR, and eventually the Android-based standalone headsets that will be released within the next year.
Here's a list of all the Unity webpage rendering solutions I've found so far:
Embedded Browser by Zen Fulcrum - supports Windows and OSX
UniWebView by Yumigi - They do not support rendering webpages to textures. Webpages are rendered on a flat layer on top of the graphics engine's rendering.
In-App Web Browser by Piotr Zmudzinski - Appears to have the same limitation
Webkit for iOS by Chestnut Games - Only works on iOS; their company website doesn't seem to work.
HTML Engine for NGUI & Unity GUI by ZHing - Doesn't appear to be a real web browser. It just seems like someone made some scripts that convert HTML into UI elements.
EasyWebViewTexture For Android by JaeYunLee - I know some devs who used this but the project was mysteriously taken down and discontinued.
Awesomium for Unity (Beware: dead/suspicious link) by Khrona Software - Their wiki articles for Unity are mysteriously blank both on their wiki site and on their GitHub. The entire Awesomium website has been shut down, and so has the Khrona Software company website. They have a GitHub repo for their Unity integration, but its readme says it only supports Windows & Mac.
uWebKit3 by Mythos Labs - No longer available. Their original website no longer exists. Previous versions of uWebKit claimed to support Android. Their announcement on the Unity forums about them closing down was very mysterious and abrupt. I found something on GitHub that claims to be uWebKit3, but the readme claims to only support PC and Mac.
Unreal Engine 4's Web Browser Widget - This is an experimental widget being built into UE4, and on desktop it renders webpages in the 3D world with few problems. Unfortunately, even though it claims to support Android, when you actually deploy to a device you'll find that the browser just gets rendered flat on top of the game engine's rendering, not in the 3D world.
I really can't find any info on why this is no longer supported on Android when it used to be. Maybe something changed in the Android stack? If I were to try messing with Chromium myself to get it to work on Android, would I just run into whatever dead ends killed all these projects?
The Android SDK offers the native WebView component, which is rendered as an independent on-screen element that we can't really hook into. Google just released a preview for the Chrome VR browser in Daydream, but it's very early in development and I really don't think they plan to provide a solution for VR devs to use anytime soon. Oculus has the experimental Carmel Browser but it seems more focused on rendering WebVR than providing tools for VR devs to hook into. I've been in touch with someone from the Oculus Web Browser team (met him at PAX, lol), and they plan to release a feature to make it easier for devs to launch web pages that open in Gear VR's built-in Oculus Web Browser. But that's an app-switching scenario, it doesn't let you render a webpage in your own 3D scene.
One possibility that I'm considering exploring: what if we got a server to render webpages for users and stream that content back to their devices as texture data? It'd kinda be like what OnLive did with videogames, except you could tolerate more latency at times. It could use Selenium (or some other webdev visual regression testing tool) to handle the rendering... Eh, it sounds like a total pain to make, though, not sure if our company can afford to spend months on something like that. -_-
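Just to make that idea a bit more concrete, the server-side piece might look roughly like this (a rough sketch assuming Selenium's Java bindings and headless Chrome; the class and method names here are made up):
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class PageRenderer {
    // hypothetical helper: render a URL off-screen and return it as PNG bytes
    public static byte[] renderPageAsPng(String url) {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless"); // render on the server, no visible window
        ChromeDriver driver = new ChromeDriver(options);
        try {
            driver.get(url);
            // these bytes would then be streamed to the device and loaded into a texture
            return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
        } finally {
            driver.quit();
        }
    }
}
The device would just download those bytes and push them into a texture, but forwarding input back to the server (clicks, scrolling, text entry) would be a whole separate problem.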
Any suggestions? Thanks!

I looked around for webview plugins that support video and the Unity 3D webview plugin for Android is the only one I could find that does. I've been using it for a while now and would recommend it. Here's an example of using it:
using UnityEngine;
using Vuplex.WebView;

class SceneController : MonoBehaviour {

    // I set this in the editor to reference a webview in the scene
    public WebViewPrefab webPrefab;

    void Start() {
        webPrefab.Init(1.5f, 1.0f);
        webPrefab.Initialized += (sender, args) => {
            // load the video so the user can interact with it
            webPrefab.WebView.LoadUrl("https://www.youtube.com/watch?v=dQw4w9WgXcQ");
        };
    }
}

UPDATE: I updated the repo to support video, and it is now a fully functioning 3D browser based on the GeckoView browser engine. It relies on Oculus's OVROverlay to render frames generated in an Android plugin onto a Unity3D texture.
I'm not sure if this question is a duplicate or the newer one is, but this is a repo I made in the hopes we can implement a nice in-game browser. It's a bit buggy/slow but it works (most of the time).
It uses a Java plugin that renders an Android WebView to a Bitmap by overriding the view's draw method, converts that to a PNG, and passes it to a Unity RawImage. There is plenty of work left to do, so feel free to improve it!
How to use it:
In the repo you can find the plugin (unitylibrary-debug.aar), which you need to import into Assets/Plugins/Android/, as well as BrowserView.cs and UnityThread.cs, which you can use to convert an Android WebView to a texture that Unity's RawImage can display. Fill in BrowserView.cs's public fields appropriately, and make sure your API level is set to 25 in Unity's player settings.
Code samples
Here's the override of the WebView's draw method that creates the bitmap and PNG, along with the initialization of the variables you need:
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.webkit.WebView;
import java.io.ByteArrayOutputStream;

public class BitmapWebView extends WebView {

    // fields assigned in init(); outputWindowWidth/outputWindowHeight and
    // canGoBack/canGoForward are set elsewhere in the plugin
    private ByteArrayOutputStream stream;
    private ReadData array;
    private Bitmap bm;
    private Canvas bmCanvas;

    private void init() {
        stream = new ByteArrayOutputStream();
        array = new ReadData(new byte[]{});
        bm = Bitmap.createBitmap(outputWindowWidth,
                outputWindowHeight, Bitmap.Config.ARGB_8888);
        bmCanvas = new Canvas(bm);
    }

    @Override
    public void draw(Canvas canvas) {
        // draw onto our own bitmap-backed canvas instead of the screen
        super.draw(bmCanvas);
        bm.compress(Bitmap.CompressFormat.PNG, 100, stream);
        array.Buffer = stream.toByteArray();
        UnityBitmapCallback.onFrameUpdate(array,
                bm.getWidth(),
                bm.getHeight(),
                canGoBack,
                canGoForward);
        stream.reset();
    }
}

// you need this class to communicate properly with Unity
public class ReadData {
    public byte[] Buffer;

    public ReadData(byte[] buffer) {
        Buffer = buffer;
    }
}
Then we pass the PNG to a Unity RawImage.
Here's the Unity receiving side:
using UnityEngine;
using UnityEngine.UI;

// class used for the callback with the texture
class AndroidBitmapPluginCallback : AndroidJavaProxy
{
    public AndroidBitmapPluginCallback() : base("com.unityexport.ian.unitylibrary.PluginInterfaceBitmap") { }
    public BrowserView BrowserView;

    public void onFrameUpdate(AndroidJavaObject jo, int width, int height, bool canGoBack, bool canGoForward)
    {
        AndroidJavaObject bufferObject = jo.Get<AndroidJavaObject>("Buffer");
        byte[] bytes = AndroidJNIHelper.ConvertFromJNIArray<byte[]>(bufferObject.GetRawObject());
        if (bytes == null)
            return;
        if (BrowserView != null)
        {
            UnityThread.executeInUpdate(() => BrowserView.SetTexture(bytes, width, height, canGoBack, canGoForward));
        }
        else
            Debug.Log("TestAndroidPlugin is not set");
    }
}
public class BrowserView : MonoBehaviour {

    // Browser view needs a RawImage component to display webpages
    private Texture2D _imageTexture2D;
    private RawImage _rawImage;
    private AndroidJavaObject _ajc;

    void Start () {
        _imageTexture2D = new Texture2D(Screen.width, Screen.height, TextureFormat.ARGB32, false);
        _rawImage = gameObject.GetComponent<RawImage>();
        _rawImage.texture = _imageTexture2D;

#if !UNITY_EDITOR && UNITY_ANDROID
        // Get your Java class and create a new instance
        var tempAjc = new AndroidJavaClass("YOUR_LIBRARY.YOUR_CLASS");
        _ajc = tempAjc.CallStatic<AndroidJavaObject>("CreateInstance");
        // send the callback object to Java to get frame updates
        AndroidBitmapPluginCallback androidPluginCallback = new AndroidBitmapPluginCallback { BrowserView = this };
        _ajc.Call("SetUnityBitmapCallback", androidPluginCallback);
#endif
    }

    // Android callback to change our browser view texture
    public void SetTexture(byte[] bytes, int width, int height, bool canGoBack, bool canGoForward)
    {
        if (width != _imageTexture2D.width || height != _imageTexture2D.height)
            _imageTexture2D = new Texture2D(width, height, TextureFormat.ARGB32, false);
        _imageTexture2D.LoadImage(bytes);
        _imageTexture2D.Apply();
        _rawImage.texture = _imageTexture2D;
    }
}

Related

OpenSceneGraph integration into Qt Quick

I want to integrate an OSG scene into my Qt Quick application.
It seems that the proper way to do it is to use QQuickFramebufferObject class and call osgViewer::Viewer::frame() inside QQuickFramebufferObject::Renderer::render(). I've tried to use https://bitbucket.org/leon_manukyan/qtquick2osgitem/overview.
However, it seems this approach doesn't work correctly in all cases. For example, on the Android platform this code renders only the first frame.
I think the problem is that QQuickFramebufferObject uses the same OpenGL context both for Qt Quick Scene Graph and code called within QQuickFramebufferObject::Renderer::render().
So I'm wondering: is it possible to integrate OpenSceneGraph into Qt Quick using QQuickFramebufferObject correctly, or is it better to use an implementation based on QQuickItem and a separate OpenGL context, such as https://github.com/podsvirov/osgqtquick?
Is it possible to integrate OpenSceneGraph into Qt Quick using QQuickFramebufferObject correctly, or is it better to use an implementation based on QQuickItem and a separate OpenGL context?
The easiest way would be to use QQuickPaintedItem, which is derived from QQuickItem. While by default it offers raster-image style drawing, you can switch its render target to an OpenGL framebuffer object:
QPainter paints into a QOpenGLFramebufferObject using the GL paint engine. Painting can be faster as no texture upload is required, but anti-aliasing quality is not as good as if using an image. This render target allows faster rendering in some cases, but you should avoid using it if the item is resized often.
MyQQuickItem::MyQQuickItem(QQuickItem* parent) : QQuickPaintedItem(parent)
{
    // unless we set this, the render target would be slow raster painting,
    // but we can use the GL paint engine just by doing this:
    this->setRenderTarget(QQuickPaintedItem::FramebufferObject);
}
How do we render with this OpenGL target, then? The answer is still good old QPainter filled with the image, called on update/paint:
void MyQQuickItem::presentImage(const QImage& img)
{
    m_image = img;
    update();
}

// must implement
// virtual void QQuickPaintedItem::paint(QPainter *painter) = 0
void MyQQuickItem::paint(QPainter* painter)
{
    // or we can precalculate the required output rect
    painter->drawImage(this->boundingRect(), m_image);
}
While the QOpenGLFramebufferObject used behind the scenes here is not QQuickFramebufferObject, its semantics are pretty much what the question is about, and we've confirmed with the question author that a QImage can be used as the source to render in OpenGL.
P.S. I've been using this technique successfully since Qt 5.7 on desktop PCs and a single-board touchscreen Linux device. I'm just a bit unsure about Android.

How to create an Android 2D game?

I'm a dev still learning Android. I've created a few apps so far: an alarm clock, a widget, and a password manager that uses databases, so I have a little bit of experience, but I'd like to create a 2D side-scroller game. I checked the web and there are different tutorials, but what's the best way to start working on it? I've read about libgdx but I'm not sure if it's outdated.
I've seen that all the games are made in Java and then ported to Android; is this correct? I would appreciate some guidance, thanks!
You have multiple options: you can go for AndEngine (which to me seemed extremely under-documented and random), make your own "native" Android game by extending SurfaceView (which isn't impossible, but it certainly doesn't make your life easy, especially when handling images and especially sound; here's a setup for it: Using a custom SurfaceView and thread for Android game programming (example)), or there's LibGDX.
I personally recommend LibGDX; I even made a fairly simple 4-player multiplayer game in it, and it certainly was not difficult. I'd recommend the following tutorial on how to get into it: http://www.gamefromscratch.com/page/LibGDX-Tutorial-series.aspx
And the basics are the following:
When you create a project, the first thing you want to do is change the ApplicationAdapter to Game so you'll have access to the setScreen(Screen) delegation function, so that you can separate the display and logic of your game into Screens (see the minimal sketch after these steps).
You'll want to handle elapsed time in your Screen, which is done as described here: How to track time in Libgdx (android)
You probably want to make a menu, which of course can be done with pretty pictures and BitmapFonts, but I'll point you to the official wiki ( https://github.com/libgdx/libgdx/wiki ) for that. You can use Scene2D, although I found it slightly difficult, so I personally made a menu out of rectangles, and it worked fairly well: LibGDX - Custom Click Listener?
A slightly more "click-oriented" guide on how I handled touch events using LibGDX: https://stackoverflow.com/a/24511980/2413303
Afterwards, it's literally just implementing game logic, timers, data models, behavior.
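To make the first two points concrete, here's a minimal sketch (class names like MyGame and MenuScreen are placeholders of mine, not from the tutorials above):
import com.badlogic.gdx.Game;
import com.badlogic.gdx.ScreenAdapter;

// Entry point: extend Game instead of ApplicationAdapter so setScreen() is available
public class MyGame extends Game {
    @Override
    public void create() {
        setScreen(new MenuScreen(this)); // delegate display/logic to a Screen
    }
}

// A Screen receives the frame's delta time directly in render()
class MenuScreen extends ScreenAdapter {
    private final MyGame game;
    private float elapsed; // accumulated time, e.g. for timers or animations

    MenuScreen(MyGame game) {
        this.game = game;
    }

    @Override
    public void render(float delta) {
        elapsed += delta; // delta is the time in seconds since the last frame
        // ... update game logic and draw here ...
    }
}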
The way I solved stretching, rather than using a StretchViewport or the built-in cameras, was the following:
public class Resources
{
    public static Texture texture;
    public static SpriteBatch batch;
    public static Matrix4 normalProjection;
    public static BitmapFont bitmapFont;
    public static ShapeRenderer shapeRenderer;
    ....
}

public static void initialize()
{
    int width = Gdx.graphics.getWidth();
    int height = Gdx.graphics.getHeight();
    Resources.bitmapFont = new BitmapFont();
    Resources.shapeRenderer = new ShapeRenderer();
    Gdx.gl.glLineWidth((width < 640 && height < 480) ? 2.5f : 6f);
    //camera = new OrthographicCamera(1, h / w); //I didn't use this at all
    Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    loadTextures();
    Resources.batch = new SpriteBatch();
    Resources.normalProjection = new Matrix4().setToOrtho2D(0, 0, 480, 320); //model is 480x320
    Resources.batch.setProjectionMatrix(Resources.normalProjection);
    Resources.shapeRenderer.setProjectionMatrix(Resources.normalProjection);
}

public class InputTransform
{
    private static int appWidth = 480;
    private static int appHeight = 320;

    public static float getCursorToModelX(int screenX, int cursorX)
    {
        return (((float)cursorX) * appWidth) / ((float)screenX);
    }

    public static float getCursorToModelY(int screenY, int cursorY)
    {
        return ((float)(screenY - cursorY)) * appHeight / ((float)screenY);
    }
}
Make sure to dispose resources that need disposing, in the Game's dispose() callback.
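For instance, the Game subclass's dispose() might look something like this (a sketch based on the Resources fields above; the exact list depends on what you actually create):
@Override
public void dispose() {
    // everything that holds native resources needs an explicit dispose()
    Resources.batch.dispose();
    Resources.bitmapFont.dispose();
    Resources.shapeRenderer.dispose();
    Resources.texture.dispose();
    super.dispose();
}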
LibGDX is not outdated and is, IMHO, the best way to program for Android. The reason is that you can develop 99% of the game on desktop (of course, think about the controls, which won't be a keyboard on Android) and then have a working Android app with only a few extra lines.
If you instead develop for Android directly, you need to use the very slow emulator or send the app to a test phone just to debug your code. This is a lot slower than debugging on desktop directly.
LibGDX is very efficient, easy to use (as soon as you understand how it works), and has very good documentation.
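Those "few lines" are essentially the per-platform launchers that wrap the same Game class; roughly something like this (a sketch, with MyGame standing in for your Game subclass):
// Desktop launcher (desktop project)
import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class DesktopLauncher {
    public static void main(String[] args) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        new LwjglApplication(new MyGame(), config); // same MyGame as on Android
    }
}

// Android launcher (android project)
import android.os.Bundle;
import com.badlogic.gdx.backends.android.AndroidApplication;
import com.badlogic.gdx.backends.android.AndroidApplicationConfiguration;

public class AndroidLauncher extends AndroidApplication {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        initialize(new MyGame(), new AndroidApplicationConfiguration());
    }
}
Everything else (game logic, rendering, input via the LibGDX abstractions) carries over unchanged between the two.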
For tutorials: I wrote an answer here on SO which seemed to help some people. It is a short "tutorial" that shows only the very basics, and I've added links to some tutorials that helped me learn it. I hope it helps you too ^^

Why can't I copy HTML5 video (from camera) to canvas in Google Chrome for Android

I'm developing a feature in a web app which enables the user to take a photo with whatever camera is attached to the device. This is mostly to be used on Android 4.0+ phones with Google Chrome v28+.
On both desktop and phone I can set up a video tag to properly display video from the device's camera using getUserMedia and createObjectURL. My problem is that when I try to draw a snapshot from the video element, nothing gets copied to the canvas:
var oVideo = jQuery('#myVideo');
var oCanvas = jQuery('#myCanvas');
var oContexto = oCanvas[0].getContext("2d");
var nAncho = oVideo.width();
var nAlto = oVideo.height();
//resizes the canvas: css
oCanvas.width(nAncho);
oCanvas.height(nAlto);
//resizes the canvas: image resolution
oCanvas[0].width = nAncho;
oCanvas[0].height = nAlto;
oContexto.fillRect(20, 20, 40, 40);
oContexto.drawImage(oVideo[0], 0, 0, nAncho, nAlto);
oContexto.fillRect(80, 80, 40, 40);
I added the two fillRect just to be sure that the code was being executed. The result is that the two black rectangles are being drawn but the snapshot is not.
The problem only occurs in Google Chrome v28 for Android; it works properly in Google Chrome 28 (and Firefox 22) for Windows.
Is it a Google Chrome bug (I couldn't find it at http://code.google.com/p/chromium/)? Is there a workaround? Or am I simply doing something wrong?
I'll appreciate any insight to help me understand what is going on.
The problem of not being able to draw video frames to a canvas is mentioned in the following issues:
https://code.google.com/p/chromium/issues/detail?id=174642
https://code.google.com/p/chromium/issues/detail?id=181037
Both suggest the problem has been solved, but as I'm not familiar with the development process I don't know if this means it will work on our phones any time soon.
Edit: I just tried this in Chrome Beta and it works fine.

Chrome for Android issue regarding HTML5 canvas and text / fillText

I am currently creating a game in HTML5 using canvas. Right now it works pretty well in all browsers except Google Chrome for Android, which refuses to display my fillText commands...
I noticed that the game worked when I disabled Chrome for Android's 2D acceleration through chrome://flags/... but I obviously cannot ask users to disable Chrome's 2D acceleration feature prior to playing. Does anybody have a solution to display my fillText under Google Chrome for Android?
You will find the code below: basically, I can see the text in all browsers except Chrome for Android... it draws the background but not the text.
//Get the canvas
var canvas2 = document.getElementById("layer2");
var ctx2 = canvas2.getContext("2d");

//Rendering function (draw background and draw image)
var render = function () {
    ctx2.drawImage(background, 0, 0);
    ctx2.fillText("Lolo", 400, 400);
};

// main loop
var main = function () {
    var now = Date.now();
    var delta = now - then;
    update(delta / 1000);
    render();
    then = now;
};

var then = Date.now();
setInterval(main, 1);
Thank you!
Laurent
This appears to be a bug in the software renderer; you will need to make your canvas at least 256 pixels in size to force it onto the GPU path. Paul Lewis faced a similar problem and blogged about it.
If you can provide a reduced demo, I will file the bug against Chrome (or you can do it at http://m.crbug.com/new).

AS3 AIR (ios and android) CameraRoll issue

I've been trying to sort out an issue for a week or so now. Googled to no avail. I'm currently working on an iOS/Android app that has a feature in the game to take a screenshot and have it show up in the mobile device's gallery.
I'm using the CameraRoll object, and the issue is that some objects on screen have smoothing applied; however, the CameraRoll screenshot ignores this, which leaves some objects in the resulting screenshot with jaggies.
I've found a number of cries for help on the same issue while googling, but no answers.
Any help is much appreciated.
Jaggies in Flash are common since smoothing on bitmaps is disabled by default (it's more CPU intensive). I'd recommend creating a new bitmap from the CameraRoll's MediaEvent.SELECT event. It should return event.data, which is a MediaPromise object; inside that, you should find a read-only file property where you should be able to find the image.
Then it's just a matter of creating your new image with smoothing.
var img:Bitmap = new Bitmap();
img.bitmapData = file.bitmapData;
img.smoothing = true;
addChild(img);
I've never tried this on mobile before, but it's a common issue which I believe you're encountering.
Addendum:
If you're having an issue with the system based screenshot services, you could create your own using pure AS3. The logic being, AS3 should do a pixel-by-pixel block copy of the stage (thereby respecting the smoothing values of your images).
Try this:
var myBitmapData:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight);
myBitmapData.draw(stage);
