I am working on my first VR project, in which I am displaying satellite data inside a sphere. The camera/observer is placed in the middle of the sphere and "looks up" at the satellite data, which is rendered in all directions. I am doing this under Unity 2021 using the latest Cardboard SDK, running on a Pixel 3 with Android 12. After some tinkering I managed to get the scene to render, but the observer is MUCH too close to the scene. I am aware that the FOV is fixed by the device, but it seems to me that I should be able to scale the scene to "zoom out". However, nothing I have tried works, including the following:
1. Simply changing the size of the sphere (which is just a single "flip-normalled" object).
2. Changing the camera parameters. (Note: I now understand that these have zero effect in VR, as the device sets the FOV.)
3. Placing the camera object (embedded in an XRRig prefab in my case) inside an arbitrary parent GameObject and re-scaling that parent (as specified here).
4. As in 3, but placing every object in the scene inside the parent GameObject.
None of these have any effect on the eventual scene as built on the device. I am at a loss. Surely what I am attempting is possible? I really just want a tiny observer, i.e. to make the "sky" seem much farther away. Any/all help appreciated.
Cheers.
Perhaps you should make the satellites, or the images, smaller when rendering them onto the sphere. Scaling the sphere itself will probably just make everything larger or smaller along with it, since the rendered content scales to match the size of the sphere.
I am new to Android development. I sent my app out for testing on a few devices; a couple were fine, but a couple had the game halfway off the screen. In the game scene everything looks fine, so what could be causing this on some devices?
I am using one main camera set to orthographic and it is a simple 2d game.
I guess one of my mistakes was not using the canvas properly, and I will make improvements there, but why does this happen in the game scene, where I cannot use a canvas? Do I also have to fix the position of the main camera?
Thank you for any help with this.
Try playing around with your canvas' Screen Match Mode; I would try the Expand or Shrink option. This is pretty self-explanatory: it will either shrink or expand your canvas to match the width or the height of your display (depending on your settings for the canvas). If you're stuck, take a look at the doc for the Canvas Scaler:
http://docs.unity3d.com/Manual/script-CanvasScaler.html
And the last thing is to play around with your UI elements' anchor positions. Here's the doc for the anchor positions:
http://docs.unity3d.com/Manual/UIBasicLayout.html
Hope this helps!
I'm developing an Android game using the Canvas API. I have many graphic elements (sprites) drawn on a large game map. These elements are drawn with standard graphics calls like drawLine, drawPath, drawArc, etc.
It's not hard to test whether they are on screen or not, so if they are off screen I can skip their drawing routines completely. But even this check has a CPU cost. I wonder if the Android graphics library can do this faster than I can?
In short: should I draw everything, even elements completely outside the screen coordinates, trusting the Android graphics library to clip them cheaply, or should I check each element's drawing rectangle myself and skip the drawing routines when it is completely off screen? Which is the proper way, and which is supposed to be faster?
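The kind of check I have in mind is roughly this (just a sketch; sprite, spriteX and spriteY stand in for my actual drawing data):

```java
// Sketch of a manual visibility check before drawing (placeholder names).
Rect screenRect = new Rect(0, 0, canvas.getWidth(), canvas.getHeight());
Rect spriteRect = new Rect(spriteX, spriteY,
        spriteX + sprite.getWidth(), spriteY + sprite.getHeight());

if (Rect.intersects(screenRect, spriteRect)) {
    // Only issue the draw call when the sprite overlaps the screen.
    canvas.drawBitmap(sprite, spriteX, spriteY, null);
}
```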
p.s: I'm targeting Android v2.1 and above.
From a not-entirely-scientific test I did, drawing Bitmaps tiled across an area greater than the screen, I found that checking beforehand whether the Bitmap was on screen doesn't seem to make a considerable difference.
In one test I set a Rect to the screen size, set another Rect to the position of the Bitmap, and checked Rect.intersects() before drawing. In the other test I just drew the Bitmap. After 300-ish draws there wasn't a visible trend: some runs went one way, others went another. I ran the 300-draw test every frame, and the variation from frame to frame was much greater than the difference between checked and unchecked drawing.
From that I think it's safe to say Android checks bounds in its native code, or you'd expect a considerable difference. I'd share the code of my test, but I think it makes sense for you to do your own test in the context of your situation. It's possible points behave differently than Bitmaps, or some other feature of your paint or canvas changes things.
Hope that helps you (or anyone else who stumbles across this thread with the same question, as I did).
I would be very grateful if anyone can advise me on how to solve this problem.
I need to draw a large and rather complicated structure (a railway track layout). In order to get smooth scrolling, I wanted to draw the layout into a bitmap and then just copy the necessary part onto the screen canvas in the onDraw method.
The problem is that the layout is much larger than 2048x2048 (the maximum allowed texture size on my Asus Prime), and I'm getting 'Bitmap too large to be uploaded into a texture'.
And that's even without zoom.
The layout is just a set of 2D geometric primitives, so maybe it's possible to work at the geometry level rather than the bitmap level, but how would I implement smooth scrolling and zooming then?
What are common ways of solving this issue?
Thanks in advance.
You should use a tiled approach. Divide your large map into small tiles and render only the ones in view as you move, like Google Maps does.
You could use the TMX format: http://www.mapeditor.org/
AndEngine http://www.andengine.org/ has an implementation for it.
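If you want to roll this yourself on a plain Canvas rather than pulling in AndEngine, the basic idea is to pre-render the layout into fixed-size tile bitmaps once, then each frame draw only the tiles that intersect the visible window. A rough sketch (TILE_SIZE, the tiles array and the scroll offsets are assumed names):

```java
// Sketch: draw only the tiles visible in the current viewport.
// 'tiles' is a pre-rendered grid of TILE_SIZE x TILE_SIZE bitmaps (assumed).
void drawVisibleTiles(Canvas canvas, Bitmap[][] tiles, int scrollX, int scrollY) {
    final int TILE_SIZE = 256; // assumed tile edge length in pixels

    int firstCol = Math.max(0, scrollX / TILE_SIZE);
    int firstRow = Math.max(0, scrollY / TILE_SIZE);
    int lastCol = Math.min(tiles[0].length - 1, (scrollX + canvas.getWidth()) / TILE_SIZE);
    int lastRow = Math.min(tiles.length - 1, (scrollY + canvas.getHeight()) / TILE_SIZE);

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            // Position each tile relative to the current scroll offset.
            canvas.drawBitmap(tiles[row][col],
                    col * TILE_SIZE - scrollX,
                    row * TILE_SIZE - scrollY, null);
        }
    }
}
```

Zoom can then be handled by calling canvas.scale() before drawing, or by keeping pre-rendered tile sets for a few zoom levels, the way map apps do.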
I'm trying to determine the "best" way to scroll a background composed of tiled Bitmaps on an Android SurfaceView. I've actually been successful in doing so, but wanted to find out whether there is a more efficient technique, or whether my technique might not work on all Android phones.
Basically, I create a new, mutable Bitmap slightly larger than the dimensions of my SurfaceView. Specifically, my Bitmap accommodates an extra line of tiles on the top, bottom, left, and right. I create a canvas around my new bitmap and draw my bitmap tiles to it. Then I can scroll up to a tile length in any direction simply by drawing a "SurfaceView-sized" subset of my background Bitmap to the SurfaceHolder's canvas.
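In code, the core of my technique looks roughly like this (a simplified sketch; buffer, offsetX/offsetY and the size variables are my own names):

```java
// One-time setup: the buffer is one tile larger than the SurfaceView on every side.
Bitmap buffer = Bitmap.createBitmap(viewWidth + 2 * tileSize,
        viewHeight + 2 * tileSize, Bitmap.Config.ARGB_8888);
Canvas bufferCanvas = new Canvas(buffer);
// ... draw the background tiles into bufferCanvas here ...

// Each frame: copy a SurfaceView-sized window out of the buffer.
Canvas screen = surfaceHolder.lockCanvas();
if (screen != null) {
    Rect src = new Rect(offsetX, offsetY, offsetX + viewWidth, offsetY + viewHeight);
    Rect dst = new Rect(0, 0, viewWidth, viewHeight);
    screen.drawBitmap(buffer, src, dst, null);
    surfaceHolder.unlockCanvasAndPost(screen);
}
```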
My questions are:
Is there a better bit blit technique than drawing a background bitmap to the canvas of my SurfaceHolder?
What is the best course of action when I scroll to the edge of my background bitmap, and wish to shift the map one tile length?
As I see it, my options are to:
a. Redraw all the tiles in my background individually, shifted a tile length in one direction. (This strikes me as being inefficient, as it would entail many small Bitmap draws).
b. Simply make the background bitmap so large that it will encompass the entire scrolling world. (This could require an extremely large bitmap, yet it would only need to be created once.)
c. Copy the background bitmap, draw it onto itself but shifted a tile length in the direction we are scrolling, and draw the newly revealed row or column of tiles with a few individual bitmap draws. (Here I am making the assumption that one large bitmap draw is more efficient than multiple small ones covering the same expanse.)
Thank you for reading all this, and I would be most grateful for any advice.
I originally used a technique similar to yours in my 'Box Fox' platformer game and RTS, but found it caused quite noticeable delays if you scroll enough that the bitmap needs to be redrawn.
My current method in these games is similar to your Option C. I draw my tiled map layers onto a grid of big bitmaps (about 7x7) covering an area larger than the screen. When the user scrolls onto the edge of this grid, I shift all the bitmaps in the grid over (moving the end bitmaps to the front), change the offset of the grid, and then just redraw the new edge.
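The shift itself looks roughly like this (a sketch of the idea rather than my actual game code; redrawGridCell is a hypothetical helper that renders the map layers for one grid cell, and the grid bitmaps are assumed to be mutable):

```java
// Sketch: the user scrolled one grid-bitmap width to the right, so recycle the
// leftmost column of big bitmaps as the new rightmost column and redraw it.
void shiftGridRight(Bitmap[][] grid, int gridOffsetX) {
    int cols = grid[0].length;
    for (int row = 0; row < grid.length; row++) {
        Bitmap recycled = grid[row][0];
        // Move every column one slot to the left.
        System.arraycopy(grid[row], 1, grid[row], 0, cols - 1);
        grid[row][cols - 1] = recycled;
        // Redraw only the recycled bitmap with the newly exposed map area.
        redrawGridCell(new Canvas(recycled), row, gridOffsetX + cols);
    }
    // The caller then increments its grid offset by one column.
}
```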
I'm not quite sure which is faster with software rendering (your Option C or my current method). I think my method may be faster if you ever change to OpenGL rendering, as you wouldn't have to upload as much texture data to the graphics card as the user scrolls.
I wouldn't recommend Option A because, as you suggest, the hundreds of small bitmap draws for a tiled map kill performance, and it gets pretty bad on larger screens. Option B may not even be possible on many devices, as it's quite easy to hit a 'bitmap size exceeds VM budget' error since the heap space limit is set quite low on many phones.
Also, if you don't need transparency on your map/background, try to use RGB_565 bitmaps, as they are quite a lot faster to draw in software and use less memory.
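For example, when decoding the tile images you can ask for RGB_565 up front (a sketch, assuming you're inside an Activity or View so getResources() is available; R.drawable.tile is a placeholder resource):

```java
// Decode tiles as RGB_565 to save memory and speed up software drawing
// (only for images that don't need an alpha channel).
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap tile = BitmapFactory.decodeResource(getResources(), R.drawable.tile, opts);
```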
By the way, I get capped at 60fps on both my phone and 10" tablet in my RTS with the method above, rendered in software, and can scroll across the map smoothly. So you can definitely get some decent speed out of the android software renderer. I have a 2D OpenGL wrapper built for my game but haven't yet needed to switch to it.
My solution in a mapping app relies on a two-level cache. First, tile objects are created with a bitmap and a position; these are stored either on disk or in a Vector (synchronisation is important for me, as I have multithreaded HTTP comms all over the place).
When I need to draw the background, I detect the visible area and get a list of all the tiles I need (this is heavily optimised as it gets called so often), then either pull the tiles from memory or load them from disk. I get very reasonable performance even on slightly older phones, and nice smooth scrolling with no hiccups.
As a caveat, I allow tiles not to be ready and swap in a loading image until they are. I don't know if this would work for you, but if you ship all the tiles in the APK you should be fine.
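Stripped right down, the lookup path is something like this (a sketch with stand-in names: TileKey, memoryCache, loadFromDisk, requestDownload and loadingBitmap are all hypothetical, and I've written the memory level as a Map here for clarity):

```java
// Sketch of the two-level lookup: memory first, then disk, otherwise a placeholder.
Bitmap getTile(TileKey key) {
    synchronized (memoryCache) {          // other threads may be inserting tiles
        Bitmap cached = memoryCache.get(key);
        if (cached != null) {
            return cached;
        }
    }
    Bitmap fromDisk = loadFromDisk(key);  // hypothetical disk loader
    if (fromDisk != null) {
        synchronized (memoryCache) {
            memoryCache.put(key, fromDisk);
        }
        return fromDisk;
    }
    requestDownload(key);                 // kick off the HTTP fetch in the background
    return loadingBitmap;                 // placeholder shown until the real tile arrives
}
```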
I think one efficient way to do this would be to use canvas.translate.
On the first draw, the entire canvas has to be filled with tiles. Newer Android phones can do this easily and quickly.
When the background is scrolled, I would call canvas.translate(scrollX, scrollY) and then draw the tiles one by one to fill the gaps. BUT, I would use canvas.drawBitmap(tileImage[i], fromRect, toRect, null), which draws only the parts of the tiles that actually need to be shown, by setting fromRect and toRect to correspond to scrollX and scrollY.
So everything is done with mathematics, and no new bitmaps are created for the background, which saves some memory.
EDIT:
However, there is a problem using canvas.translate with a SurfaceView: it is double-buffered, and canvas.translate only affects the buffer you are currently drawing into, not the other one, so this alternating of buffers has to be taken into account if you rely on the SurfaceView to preserve the previously drawn image.
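Focusing on the fromRect/toRect part, here is a sketch of drawing just the visible portion of one tile (the tile position, size and scroll/view variables are assumed names):

```java
// Sketch: draw only the visible part of one tile via source/destination rects.
void drawPartialTile(Canvas canvas, Bitmap tileImage, int tileLeft, int tileTop,
                     int tileSize, int scrollX, int scrollY,
                     int viewWidth, int viewHeight) {
    // Tile bounds in screen coordinates after scrolling.
    int screenLeft = tileLeft - scrollX;
    int screenTop = tileTop - scrollY;

    // Clip the tile against the visible area of the view.
    int visLeft = Math.max(screenLeft, 0);
    int visTop = Math.max(screenTop, 0);
    int visRight = Math.min(screenLeft + tileSize, viewWidth);
    int visBottom = Math.min(screenTop + tileSize, viewHeight);
    if (visLeft >= visRight || visTop >= visBottom) {
        return; // the tile is completely off screen
    }

    // fromRect selects the visible part of the tile bitmap;
    // toRect places it at the matching position on screen.
    Rect fromRect = new Rect(visLeft - screenLeft, visTop - screenTop,
            visRight - screenLeft, visBottom - screenTop);
    Rect toRect = new Rect(visLeft, visTop, visRight, visBottom);
    canvas.drawBitmap(tileImage, fromRect, toRect, null);
}
```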
I am using your original method to draw a perspective scrolling background. I came up with this idea entirely by accident a few days ago while messing around with an easy technique for a perspective-scrolling star field simulation. The app can be found here: Aurora2D.apk
Just tilt your device or shake it to make the background scroll (excuse the 2 bouncing sprites - they are there to help me with an efficient method to display trails). Please let me know if you find a better way to do it, since I have coded several different methods over the years and this one seems to be superior. Simply mail me if you want to compare code.