Empty view when displaying WebView (WebGL and video) inside 3D app - Android

I inserted a WebView into my 3D game. I manage to retrieve a URL, display the page on a canvas, and use it as a texture in my 3D rendering.
Everything works fine except for video, WebGL, and three.js rendering.
In these three cases I cannot see anything except the two plain canvases from the three.js page (the frame-information canvas and the checkbox options); the views where the WebGL or video content should be rendered are empty.
For video, I found out that I can see the image only when I touch the seek bar under the play button and audio button.
So the rendering view seems to be hidden, or not visible, or something else.
So I tried to get the view rendered by the WebView to check its status, but a WebView exposes only one view, and it is not possible to get at the views inside it to check them.
So I am not sure whether the problem comes from the view status or from a conflict with the 3D environment.
I would like to know if someone has an idea about the problem, or could tell me how to retrieve the detailed views from a WebView, if that is possible.
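For reference, a WebView is a ViewGroup, so the view tree can be walked like any other; a minimal sketch of such a check (method and tag names are mine) shows that no useful child views for the web content are exposed:

```java
import android.util.Log;
import android.view.View;
import android.view.ViewGroup;

public final class ViewTreeDump {
    // Recursively log the class of every view under `view`; for a WebView
    // this typically reports no children that correspond to page elements.
    public static void dumpViewTree(View view, int depth) {
        StringBuilder indent = new StringBuilder();
        for (int i = 0; i < depth; i++) indent.append("  ");
        Log.d("ViewTreeDump", indent + view.getClass().getName());
        if (view instanceof ViewGroup) {
            ViewGroup group = (ViewGroup) view;
            for (int i = 0; i < group.getChildCount(); i++) {
                dumpViewTree(group.getChildAt(i), depth + 1);
            }
        }
    }
}
// Usage: ViewTreeDump.dumpViewTree(myWebView, 0);
```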
NEW INFORMATION
I think that the problem could come from running WebGL (OpenGL 1.4) from the WebView inside a GvrActivity (OpenGL ES 2); rendering both Android OpenGL contexts at the same time may cause a conflict.
Concerning media and audio: I am also running voice recognition (Context.AUDIO_SERVICE) and a MediaPlayer to play the loaded video.
LAST TESTING:
If I run the WebView in another activity, everything is fine. The problem is that I would then need access to that activity in order to display the WebView's view in the main activity's layout.
Is that possible?
LAST TESTING:
Changing the activity context used to start the WebView does not resolve the problem.
I found out that if I attach the WebView via getWindow().addContentView() there is no problem, but if I add the view to MyGLSurfaceView (which extends GvrView) I get the problem. I think it is because I already use the MediaPlayer to render video onto a 3D mesh and OpenGL to draw the 3D scene, but I am not sure of that.
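A minimal sketch of the two attachment paths being compared (the URL and layout size are placeholders, and MyGLSurfaceView is the asker's own class):

```java
import android.view.ViewGroup;
import android.webkit.WebView;

// Inside the activity's onCreate(), after setContentView(...):
WebView webView = new WebView(this);
webView.getSettings().setJavaScriptEnabled(true);
webView.loadUrl("https://example.com"); // placeholder URL

// Path that works: the WebView becomes a sibling of the GL content,
// composited by the window on top of it.
getWindow().addContentView(webView,
        new ViewGroup.LayoutParams(800, 600)); // arbitrary size

// Path that shows the empty video/WebGL areas in the asker's tests:
// myGLSurfaceView.addView(webView);  // MyGLSurfaceView extends GvrView
```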
LAST TESTING: ("je rame", French for "I'm struggling")
I have tried everything I could think of, and after 3 weeks I am starting to run out of ideas.
Concerning the audio from the WebView: I am sure I have a conflict with the voice recognition's mAudioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);, which I think the WebView also uses, because I can use it when displaying video inside a canvas in 3D OpenGL ES.
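One way to at least coordinate the two audio consumers is the audio-focus API. This is only a sketch of my assumption (using the pre-API-26 requestAudioFocus overload), not something confirmed to fix the conflict:

```java
import android.content.Context;
import android.media.AudioManager;

// Ask for transient focus before starting voice recognition, and abandon
// it afterwards so the WebView's media playback can resume.
AudioManager audioManager =
        (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

AudioManager.OnAudioFocusChangeListener listener = focusChange -> {
    if (focusChange == AudioManager.AUDIOFOCUS_LOSS) {
        // e.g. stop or pause the recognizer here
    }
};

int result = audioManager.requestAudioFocus(listener,
        AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);
if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    // safe to start voice recognition
}
// Later, when recognition is done:
// audioManager.abandonAudioFocus(listener);
```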
Concerning the video from the WebView: I need to touch the progress bar to see anything, but as the WebView is a black box, there is no way to get the progress-bar view and act on it.
Concerning WebGL and three.js: I can display web text and images, but nothing related to OpenGL; I get a white area (not transparent; I set transparency to check). The GL content can only be displayed outside of the OpenGL environment, i.e. outside of Surface and SurfaceTexture.
Concerning the WebView: there is only one output view, so all the view rendering seems to be done internally, using the tag parser for positioning and canvas construction, with no child views exposed. My own interpretation is that the output view of the WebView is a complete bitmap composed from all the tag canvases (using the tag tree gives fast rendering access and a clean, structured design). But in that case I wonder why the WebGL and three.js content cannot be copied using surfaceTexture.updateTexImage(), since everything is a 2D canvas once it is displayed inside a view.
So, three weeks trying to find the answer; I hope that someone will find it.
I was planning to build a VR art gallery where anyone could watch video or 3D art. I could make it work for video and 360° video, but not for WebGL and three.js.
3D inside 3D is the state of the art in art technology: imagine being able to walk into any 3D shop, or open a web URL to any artist's video. The latter is possible, but not the WebGL and three.js part. Watching YouTube 360° VR is easy; I can make it work very well with voice commands (it just takes a hack to get the download address). Why block this, when the best thing is to be open and let imagination create tomorrow's tools?
So I am giving up and going back to my OpenCL recognition application, which is nearly finished; shape recognition could be a great new tool.
By the way, do not hesitate to ask me for my APK if you want to check for yourself. Stack Overflow is allowed to contact me, as they have my email address.
LAST TEST (07/07/2020)
I think I have understood the problem of the inaccessibility of the video and WebGL display. For video, the intention is surely to prevent copying of the video and to optimize the rendering. For WebGL there is no need to prevent copying, but the rendering has to be optimized to avoid redrawing the whole WebView picture when only one part is modified. So it looks like the rendering of video and WebGL is done in another thread, independently of the other HTML tags.
Some say that because it is done in another thread it cannot be accessed. That is false: the canvas where the rendering is done could be made accessible if they wanted it to be. It is just a bitmap for the video, and a framebuffer (viewport) for WebGL. I agree that this is done by the GPU and not in software, but the resulting canvas (rectangle) could still be accessible, because in the end it is displayed. The problem is that it is no longer part of the WebView's final view; they just send the position where the content must be displayed to the GPU as viewport coordinates.
So when I call SurfaceTexture.updateTexImage() I just get parts of the web page, with no video and no WebGL.
Here is an example to illustrate:
Load the URL "https://webglfundamentals.org/webgl/webgl-2d-triangle-with-position-for-color.html" and look at the code in your browser.
I can see and manipulate the slider without problems, but the GL result is not visible: the canvas is an OpenGL context, processed by the GPU and copied directly into the final framebuffer at the right place, so it is not part of the WebView's final (single) view. For the video, if I can grab the slider I can get some pictures, but I would have to grab it and track its changes; hard work, and I am not really interested in video, since I found another way to watch it in OpenGL.
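The copy path being described can be sketched roughly as follows (texture size and names are placeholders; the page text and images come through this path, while the video and WebGL rectangles stay empty):

```java
import android.graphics.Canvas;
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

// Create an external-OES texture on the GL thread and wrap it in a
// SurfaceTexture / Surface pair.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
surfaceTexture.setDefaultBufferSize(1024, 1024); // placeholder size
Surface surface = new Surface(surfaceTexture);

// Draw the WebView into the surface with a software canvas; the GPU-composited
// video/WebGL areas are not part of this view, so they come out empty.
Canvas canvas = surface.lockCanvas(null);
webView.draw(canvas);
surface.unlockCanvasAndPost(canvas);

// Back on the GL thread: latch the new frame into the external texture.
surfaceTexture.updateTexImage();
```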
So I think that Android could make an effort to give access to the canvas used for WebGL rendering, and find a way to expose it inside the WebView. 3D inside 3D is already a fourth dimension; it could be good, but it is not available at the moment. It will be.
I will keep looking at it, just out of curiosity, but I think there is little chance.

Here is the only answer I have found:
https://groups.google.com/a/chromium.org/forum/#!topic/chromium-dev/3wrULcul8lw
Today (24/09/2020) I read this:
DRM-protected video can be presented only on an overlay plane. Video players that support protected content must be implemented with SurfaceView. Software running on unprotected hardware can't read or write the buffer; hardware-protected paths must appear on the Hardware Composer overlay (that is, protected videos disappear from the display if Hardware Composer switches to OpenGL ES composition).
30/09/2020:
I am able to see and manipulate the WebGL display using Surface.lockHardwareCanvas, but I lose the focus during manipulation and sometimes it does not work; a small problem ;)).
However, when I use lockHardwareCanvas to display a YouTube web page, with or without video, everything freezes: I can neither update the view nor scroll it.
But it's making progress ("mais ça avance") ;))

Here is my solution.
WebGL and three.js can be displayed using Surface.lockHardwareCanvas.
The problem is using Surface.lockHardwareCanvas with HTML other than accelerated views: in that case there is no frame to listen for, so the view is never refreshed, and it must be refreshed some other way.
Concerning OpenGL, WebGL, and three.js: it depends on the application; some pages are not displayed well for various reasons (no GL_TEXTURE_EXTERNAL_OES, a canvas size that is not a power of 2, a bad view size). And since there is no new onFrameAvailable callback, you need another way to refresh the view.
It took quite a long time to understand, but it is working, even in a GvrActivity ;))
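The working setup described above can be sketched like this. The Choreographer loop is one possible answer to the "no onFrameAvailable" problem mentioned, and lockHardwareCanvas requires API 23+; webView and surfaceTexture are assumed to exist already:

```java
import android.graphics.Canvas;
import android.view.Choreographer;
import android.view.Surface;

final Surface surface = new Surface(surfaceTexture);

// Redraw the WebView into the surface on every vsync, since plain HTML
// produces no frame callback by itself.
Choreographer.FrameCallback frameCallback = new Choreographer.FrameCallback() {
    @Override
    public void doFrame(long frameTimeNanos) {
        if (surface.isValid()) {
            Canvas canvas = surface.lockHardwareCanvas(); // accelerated path:
            webView.draw(canvas);                         // WebGL/three.js render
            surface.unlockCanvasAndPost(canvas);
        }
        Choreographer.getInstance().postFrameCallback(this); // keep refreshing
    }
};
Choreographer.getInstance().postFrameCallback(frameCallback);

// On the GL thread, call surfaceTexture.updateTexImage() as frames arrive
// to pull each frame into the external-OES texture used by the 3D scene.
```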
Regards.

Related

Using OCR mobile vision to anchor image to detected text

I am using Text Recognition (Mobile Vision / ML Kit) by Google to detect text in the camera feed. Once I detect text and ensure it equals "HERE WE GO", I draw a heart shape beside the detected text using the returned boundaries.
The problem I am facing is that the shape jumps and lags behind. I want it to be more like anchored to the detected text. Is there something I can do to improve that?
I heard about the ARCore library, but it seems to be based on existing images to determine the anchor, whereas in my case it can be any text that matches "HERE WE GO".
Any suggestions?
I believe you are trying to overlay text on the camera preview in real time. There will be a small delay between the camera input and the detection: since the API is asynchronous, by the time the output returns you are already showing another frame.
To alleviate that, you can either make the processing part synchronous using a lock/mutex, or overlay another image that only refreshes after the processing is done.
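One minimal way to sketch that coordination (my own illustration, not from the linked samples) is a gate that drops camera frames while a detection is still in flight, so the overlay is only refreshed when the async result that matches it comes back:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Drop frames while the detector is busy; release() from the detector's
// success/failure callback, then redraw the overlay for that frame.
class DetectionGate {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    /** Returns true if this camera frame should be sent to the detector. */
    boolean tryAcquire() {
        return busy.compareAndSet(false, true);
    }

    /** Call when the async detection result has been handled. */
    void release() {
        busy.set(false);
    }
}
```

With this in the camera callback, at most one frame is ever being processed, so the drawn shape always corresponds to the most recently analyzed frame rather than trailing several frames behind.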
We have some examples here: https://github.com/firebase/quickstart-android/tree/master/mlkit
I also fixed a similar problem on iOS by using DispatchGroup: https://github.com/googlecodelabs/mlkit-ios/blob/master/translate/TranslateDemo/CameraViewController.swift#L245
Option 1: refer to the TensorFlow Android sample here:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android
especially these classes:
1. Object tracker: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/tracking/ObjectTracker.java
2. Overlay: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/OverlayView.java
3. Camera activity and camera fragment: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/CameraActivity.java
Option 2: sample code can be found in the codelab below; they do something similar for barcodes.
https://codelabs.developers.google.com/codelabs/barcodes/index.html?index=..%2F..index#0

UI/Canvas appears in front of objects, even though it's set to World Space

In my Google Cardboard project, I have several canvases attached to walls in a room. These canvases have buttons the player can interact with. I set each canvas to World Space, but for some reason, the buttons are rendering in front of objects that should appear in front of the buttons.
update1:
The UI appears behind the cube in the Game and Scene windows when not running; it's only when I hit Play that the image appears in front of the cube. I am adding images to the UI buttons programmatically, but the problem happens even if I add the images only in the editor.
update2:
If I disable the cardboard elements in my scene (i.e. use a standard fps camera setup), I do not get the issue.
Picture below: the checkerboard is the UI, the gray block is a 3D block. I want the UI behind the block, on the wall.
Manually move the parent UI object behind the gray object.
Someone on answers.unity3D made this suggestion:
The problem may come from the shader used by the UI element having a ZTest value set to Off, or from the Queue tag of the UI element's shader being "higher" than the tag of the cube's shader.
Turns out (and this is apparent in the 3rd screenshot) that my UI buttons did not have a material. Once I provided one, I used a standard UI shader (I initially tried unlit, but that made the buttons disappear).
I guess there are still some issues with shaders and Google Cardboard. Thanks, guys.

Display Watch Face in android activity

I'd like to display a watch face I've developed in my app and have it appear live, as though it were on a watch. The class and engine already exist, so I feel like it shouldn't be too hard to get it to appear within an activity. Does anyone have experience with this, or a suggestion as to which path to take?
It should be fairly easy to achieve. What you need to do is this:
extract all the drawing logic, i.e. whatever code is interacting with the Canvas;
create a custom View and, in View.onDraw(Canvas), use the extracted code to draw the watch face.
In the end everything draws on a Canvas, so you can (more or less) transfer functionality from View objects to a WallpaperService; the View system is an abstraction on top of Canvas.
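The suggested refactor can be sketched as follows. WatchFaceRenderer is a hypothetical class holding the drawing code extracted from the watch face engine; it is not a framework class:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.util.AttributeSet;
import android.view.View;

// A plain View that renders the same watch face the wallpaper engine does,
// so it can be placed in any activity layout as a live preview.
public class WatchFacePreviewView extends View {

    // Hypothetical holder of the drawing logic shared with the engine.
    private final WatchFaceRenderer renderer = new WatchFaceRenderer();

    public WatchFacePreviewView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        renderer.drawWatchFace(canvas, System.currentTimeMillis());
        postInvalidateDelayed(1000); // redraw every second to keep it "live"
    }
}
```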

Cocos2d android Texture issue on Motorola xoom

I am developing a game on Android with the Cocos2d framework, using the latest build from GitHub (Weikuan Zhou).
In my game I use lots of images (the total image size is around 11 MB).
Problem:
I get black boxes instead of images when I play my game more than 3 times.
What steps will reproduce the problem?
1. Play the game more than 3 times via the "Play Again" functionality.
What is the expected output? What do you see instead?
- Images should be displayed properly instead of black boxes.
In my logcat I see the heap memory go to around 13 MB.
I already release textures via
CCTextureCache.sharedTextureCache().removeAllTextures();
I also tried to remove sprites manually, e.g. with the removeChild() method.
But so far I have not succeeded in finding a solution.
If anyone has a solution, please let me know.
From what you're describing, you're hitting exactly the same problem I did: cocos2d for Android is really buggy when dealing with lots of single sprites loaded individually.
The best route to resolve this is to get hold of (unless you've got a Mac) the free Flash version of Zwoptex from http://zwopple.com/zwoptex/static/downloads/zwoptex-flashversion.zip
This will let you build up spritesheets; I suggest packing as many sprites as you can onto each sheet, while keeping them sensibly grouped.
The gain is mainly down to cocos doing a single render pass for ALL sprites in a spritesheet, rather than one render per sprite as with normal sprites, hence massively reduced processing time and much less memory usage.
You can then load the spritesheet with code such as the following (I can't guarantee this code will compile as-is, since I'm grabbing snippets from a structured project, but it will lead you to the right solution):
CCSpriteFrameCache.sharedSpriteFrameCache().addSpriteFrames("menus.plist"); // load the spritesheet into the frame cache
CCSpriteSheet menuSpriteSheet = CCSpriteSheet.spriteSheet("menus.png", 20); // load the spritesheet from the cache, ready for use
// ... (menu is a CCLayer)
CCSprite sprite = CCSprite.sprite(CCSpriteFrameCache.sharedSpriteFrameCache().spriteFrameByName("name of sprite from inside spritesheet.png"));
menuSpriteSheet.addChild(sprite, 1, 1); // add the sprite to its spritesheet
menu.addChild(menuSpriteSheet, 1); // then add the spritesheet to the layer
This problem happens when you load resources at run time, so it is better to load resources before the scene starts. You can do it as follows:
Use sprite sheets for the resources in your game.
Build your UI in the constructor of your class.
Implement your functionality in the overridden onEnter() method of your layer.
Unload your sprite sheets after finishing your scene.
This is the procedure that I follow.
Thanks.

set the origin (x,y) of a view inside of a RelativeLayout

I have some game pawns on the screen inside a RelativeLayout. When the user touches a pawn, I would like them to be able to drag it under their finger. I have the MotionEvent captured, but can't seem to find how to adjust the origin of the pawn.
I've seen posts saying to adjust the margins, but that seems questionable: I still want to do hit tests on the pawns after they've been moved, and I don't understand how to work with the margins in that case.
Thanks!
I would recommend not using a RelativeLayout at all: a Canvas is a much better option.
Or, if you really want to use a layout, an AbsoluteLayout is possibly a better option.
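If you do stay with a layout, a margin-free way to move the pawn (my suggestion, assuming API 11+ where View.setX()/setY() exist) is to reposition it directly in the touch listener; getX()/getY() then remain usable for later hit tests:

```java
import android.view.MotionEvent;
import android.view.View;

// Drag `pawn` under the finger by translating the view itself,
// without touching its layout margins.
pawn.setOnTouchListener(new View.OnTouchListener() {
    private float dX, dY; // offset between the view origin and the finger

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
                dX = v.getX() - event.getRawX();
                dY = v.getY() - event.getRawY();
                return true;
            case MotionEvent.ACTION_MOVE:
                v.setX(event.getRawX() + dX);
                v.setY(event.getRawY() + dY);
                return true;
            default:
                return false;
        }
    }
});
```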
Using a layout for a game may prove unsatisfactory as you proceed. I can recommend the free and open-source game engine AndEngine for making 2D games. The problems you have with collision detection and x,y positioning are trivially easy to solve with it. I've made two games and a visualization view within an app with it so far.
Check it out here: http://www.andengine.org/
You can download the demo app to your Android device and see its out-of-the-box capabilities (they include sprites, sound, animation, and more).
This is one game I made with it:
https://market.android.com/details?id=com.plonzogame&hl=en
