I am building my game for a resolution of 800x480.
I would like to know whether the engine will automatically scale the scene when it runs on a smaller or larger device.
Also, how do you set a default width and height for new RoboVM projects?
I am using Box2D and am not too sure how screen support works across different devices, such as iPhone and Android.
P.S. I know scaling isn't the best option, but for my case it will be fine.
A piece of advice: if you are using Box2D, then with this approach you are going to face a lot of problems with your physics behaviour, because your values are too large for the camera. For example, bodies will pass through each other without colliding, and other such issues.
To solve this problem, set the camera values like this:
camera.viewportHeight = scrh/40f;
camera.viewportWidth = scrw/40f;
camera.position.set(camera.viewportWidth/2f,camera.viewportHeight/2f, 0f);
camera.update();
and while drawing any asset, set its size to its original size divided by 40,
so that the world step calculations work in terms of 20 x 12 rather than 800 x 480, which improves your physics behaviour.
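As an illustration, here is a minimal sketch of that pixels-per-meter idea as a complete libGDX application (the PPM constant, the 64x64 px texture and its file name are assumptions for the example, not code from the question):
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
public class PpmExample extends ApplicationAdapter {
    static final float PPM = 40f; // pixels per meter
    OrthographicCamera camera;
    SpriteBatch batch;
    Texture texture;
    @Override
    public void create() {
        camera = new OrthographicCamera();
        camera.viewportWidth = 800f / PPM;  // 20 world units
        camera.viewportHeight = 480f / PPM; // 12 world units
        camera.position.set(camera.viewportWidth / 2f, camera.viewportHeight / 2f, 0f);
        camera.update();
        batch = new SpriteBatch();
        texture = new Texture("player.png"); // hypothetical asset
    }
    @Override
    public void render() {
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        // The 64x64 px texture is drawn at 64/40 = 1.6 world units per side.
        batch.draw(texture, 0f, 0f, 64f / PPM, 64f / PPM);
        batch.end();
    }
}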
For non-Box2D screens, Sandeep's solution will always work fine for you.
Yes, the engine will scale it, but for that you have to set a viewport.
In your create method, do this:
float scrw = 800;
float scrh = 480;
camera = new OrthographicCamera();
camera.viewportHeight = scrh;
camera.viewportWidth = scrw;
camera.position.set(camera.viewportWidth * .5f,camera.viewportHeight * .5f, 0f);
camera.update();
Yes, you have to do this in each screen...
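To actually see the scaling take effect, the camera's projection also has to be applied when drawing; a minimal sketch, assuming a SpriteBatch field named batch created alongside the camera:
public void render() {
    camera.update();
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    // Draw everything in 800x480 virtual coordinates here.
    batch.end();
}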
I have read a few articles about resolutions, screens, viewports and cameras on mobile phones, but I am much more confused now than I was before. Could you please help me understand the issue and handle it? I am currently working on a mobile game, but without any success. I am using libGDX.
Following the answer below, I changed my program (thanks for the explanation, Xoppa :)
New piece of code:
@Override
public void create () {
orthographicCamera = new OrthographicCamera();
fillViewport = new FillViewport(960, 600, orthographicCamera);
orthographicCamera.position.set(orthographicCamera.viewportWidth * 0.5f, orthographicCamera.viewportHeight * 0.5f, 0);
fillViewport.apply();
}
@Override
public void render () {
Gdx.gl.glClearColor(0.22f, 0.22f, 0.22f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
...
}
@Override
public void resize (int width, int height) {
fillViewport.update(960, 600);
orthographicCamera.position.set(960 * 0.5f, 600 * 0.5f, 0);
}
But the result is the same.
Output:
Two small dots are my players. :(
Even if I change the viewport resolution, the size of my players does not change. The only thing that changed was the visible resolution of the viewport; outside of it I do not see my players. I sketched it for a better idea (the values are just for illustration).
Physics body of my players:
public Character(Vector2 startPosition) {
BodyDef bodyDef = new BodyDef();
bodyDef.type = BodyType.DynamicBody;
bodyDef.position.set(startPosition);
// Create our body in the world using our body definition
body = Physic.gameWorld.createBody(bodyDef);
// Create a circle shape and set its radius
CircleShape circle = new CircleShape();
circle.setPosition(new Vector2());
circle.setRadius(0.39f);
// Create a fixture definition to apply our shape to
FixtureDef fixtureDef = new FixtureDef();
fixtureDef.shape = circle;
fixtureDef.density = 1f;
// Create our fixture and attach it to the body
body.createFixture(fixtureDef);
circle.dispose();
}
Even if I change the radius of the circle, the body needs much more energy to be moved, and it is still not big enough. Of course, I cannot set the radius to a value of more than 10f, as the Box2D documentation recommends against it.
But I do not see anything when I run it; either the created physics world objects are too small or they are flattened (the physics world configuration and initialization are fine, I think; the radius of the circular physics bodies is 0.39). Or am I missing something in the code, some statements, or anything else?
But I think my real problem is a correct understanding of the issues mentioned above.
Could you please help me with this or explain it?
First make sure to understand what a camera and viewport is and does. Perhaps this example might help:
Imagine that you're in the park and take a photo with your camera/smartphone of a tree and a bench. The bench is e.g. 3 meters wide and half a meter in height and depth. The tree might be e.g. half a meter wide and 10 meters in height. Now if you look at the photo on the screen of your camera/smartphone, you'll see that the bench is no longer 3 meters wide; instead it is just a few millimeters wide. The actual size depends on how you've set up your camera (e.g. zoom) as well as the resolution of the photo (in pixels) and the density of the screen you're viewing it on.
So practically in the example above the bench and the tree have two different sizes: the actual size in the park in the physical world and the size on the screen you're watching the photo on. Of course the tree and the bench don't actually shrink depending on the photo, they stay the same size. The size on the photo is only the size of the projection of the tree and the bench.
Projection is practically transforming world objects onto the screen, based on various camera settings (like zoom, position, etc.).
The park is much bigger than only the portion you've taken the photo of. When you took the photo you decided which portion of the park (the physical world) you want to project onto the photo. Let's call this the park's viewport.
Likewise you also don't have infinite storage to project the photo, you'll have to define the portion (the resolution of the photo) you want to project onto. Let's call this the photo's viewport.
The park's viewport is expressed in real world units, like meters or inches e.g. The photo's viewport is expressed in pixels (the resolution you've set it to).
When making a game you typically aren't making a photo but rendering to screen. Therefore the photo's viewport is then called the screen viewport. And when making a game your park might not even exist; your game world is virtual. Therefore the park's viewport is then called the virtual viewport.
On a small side-note: pixels are always integers, therefore the screen viewport is always expressed in int. World units like meters or inches can be fractional, therefore the virtual viewport is always expressed in float.
The Camera class of libGDX does exactly what is described above: it performs the projection of your virtual world onto the screen. However, in practice that comes with some problems. E.g. not every screen has the same aspect ratio. Therefore you need to define how you want to cope with differences in aspect ratio, e.g. add black bars, expand the virtual world, stretch it, etc.
The Viewport class of libGDX solves that problem by implementing various strategies you can choose from for defining your virtual viewport and screen viewport. To do this it encapsulates (and manages) a camera for you.
In your code you've given it an OrthographicCamera in the constructor to manage, which is typically used for 2D games. The virtual viewport you've set the OrthographicCamera to in the constructor is overwritten, though, because you've chosen to use a ScreenViewport. This is an implementation that makes the virtual viewport (the portion of the park) the same as the screen viewport (the portion of the screen).
It is very unlikely that you want to use that viewport implementation. Instead you probably want to use e.g. a FillViewport:
public void create () {
viewport = new FillViewport(50f, 50f);
}
This creates a virtual viewport that grows if needed to maintain the aspect ratio. The 50 by 50 is in world units, e.g. meters or inches.
Now we need to tell the viewport which screen viewport to use. This depends on the size of the screen of the device and is therefore best set in the resize method.
public void resize (int width, int height) {
viewport.update(width, height);
}
All other code in these methods can be removed; you don't need it. You don't need to update the camera in your render method. You can just use the viewport as you normally would. And if you need access to the camera directly (not to modify it, only to read e.g. its projection matrix), then you can use:
viewport.getCamera()
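Putting the pieces together, a minimal complete sketch along the lines of this answer (the 50 x 50 world size is the example value from above; the SpriteBatch usage and the camera-centering flag are assumptions):
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.utils.viewport.FillViewport;
import com.badlogic.gdx.utils.viewport.Viewport;
public class ViewportExample extends ApplicationAdapter {
    Viewport viewport;
    SpriteBatch batch;
    @Override
    public void create () {
        viewport = new FillViewport(50f, 50f); // world units
        batch = new SpriteBatch();
    }
    @Override
    public void resize (int width, int height) {
        viewport.update(width, height, true); // true also centers the camera
    }
    @Override
    public void render () {
        batch.setProjectionMatrix(viewport.getCamera().combined);
        batch.begin();
        // ... draw in world units here ...
        batch.end();
    }
}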
I know this is one of the most repeated questions; however, I have found no working solution anywhere, even after putting in a lot of effort.
This is a really frustrating issue, though it might be a simple one for the experts.
I am working with OpenCV Haar cascade classifiers (e.g. face detection, eye pair detection).
I have taken the face-detection sample code from the "OpenCV-2.4.9-android-sdk" samples.
This sample code is set up in landscape mode, and everything works fine.
However, I want to make the classifiers work in portrait mode. I know the Haar classifiers were not made to work in portrait mode.
As OpenCV uses "CameraBridgeViewBase", I don't have full control over the camera resolution or over displaying the images back on the screen (the preview).
Now, the moment I set the screen orientation to android:screenOrientation="portrait", the image is rotated 90 degrees clockwise.
What have I tried:
To preview the portrait image without rotation, I modified "deliverAndDrawFrame" in "CameraBridgeViewBase" by adding:
Matrix matrix = new Matrix();
matrix.preTranslate((canvas.getWidth() - mCacheBitmap.getWidth()) / 2,(canvas.getHeight() - mCacheBitmap.getHeight()) / 2);
if(getResources().getConfiguration().orientation == Configuration.ORIENTATION_PORTRAIT)
matrix.postRotate(90f,(canvas.getWidth()) / 2,(canvas.getHeight()) / 2);
canvas.drawBitmap(mCacheBitmap, matrix, null);
To make the classifier work with portrait mode:
I played with all kinds of permutations and combinations of transpose and flip to rotate the greyscale image that I pass to detectMultiScale in "onCameraFrame".
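(For reference, a typical transpose-and-flip combination that rotates a Mat 90 degrees clockwise is sketched below; mGray is the greyscale frame from the OpenCV sample, and org.opencv.core.Core and org.opencv.core.Mat are assumed to be imported. This shows the rotation itself, not a guaranteed fix for the detection.)
// Core.transpose swaps rows and columns; flipping around the y axis
// (flipCode = 1) then completes the 90-degree clockwise rotation.
Mat rotated = new Mat();
Core.transpose(mGray, rotated);
Core.flip(rotated, rotated, 1);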
What is my issue?
I am not able to get images at the native resolution, which is higher than what I get in "onCameraFrame".
How do I get the original, actual, native quality of the images, up to the camera sensor's capacity?
As I have already put a lot of effort into making the classifier work in portrait mode without any luck, any kind of working suggestions/sample code/references would help me a lot.
I found a solution:
Matrix matrix = new Matrix();
matrix.preTranslate((canvas.getWidth() - mCacheBitmap.getWidth()) / 2,(canvas.getHeight() - mCacheBitmap.getHeight()) / 2);
matrix.postRotate(90f,(canvas.getWidth()) / 2,(canvas.getHeight()) / 2);
float scale = (float) canvas.getWidth() / (float) mCacheBitmap.getHeight();
matrix.postScale(scale, scale, canvas.getWidth()/2 , canvas.getHeight()/2 );
canvas.drawBitmap(mCacheBitmap, matrix, new Paint());
This delivers portrait mode with the CORRECT scale :D Try it, please. I'm using OpenCV 3.4.
I am developing a game in libGDX where I need to zoom a popup in and out when clearing each stage of the game. Could you please guide me on how to do zoom effects in libGDX?
Just a note: I am doing this for an Android device.
Kindly assist.
You can zoom using the property of the same name on OrthographicCamera.
camera.zoom = 1; //Normal zoom (default)
camera.zoom = 2; //Zoomed out (shows twice as much of the world)
camera.zoom = 0.5f; //Zoomed in
If you mean "zooming" a particular image, then just make it bigger and smaller.
To make your sprite larger or smaller, use sprite.setScale, where 1 is the default size, 2 is twice as big, 0.5 is half, and so on.
But if you want something like a screen transition, it is better to use the camera zoom, as described above.
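For the popup effect from the question, a minimal sketch of animating the zoom over time (the class name, speed and target values are illustrative assumptions):
import com.badlogic.gdx.graphics.OrthographicCamera;
// Ease camera.zoom toward a target each frame; call update() from render()
// with Gdx.graphics.getDeltaTime() as the delta argument.
public class ZoomTween {
    private static final float ZOOM_SPEED = 2f; // higher = faster easing
    private float targetZoom = 1f;
    public void zoomTo(float target) { targetZoom = target; }
    public void update(OrthographicCamera camera, float delta) {
        camera.zoom += (targetZoom - camera.zoom) * Math.min(1f, ZOOM_SPEED * delta);
        camera.update();
    }
}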
I have a game that I made at 480x320 resolution (I have set it in the build settings) in Unity, but I would like to publish my game for every Android device at every resolution. How can I tell Unity to scale my game up to the device's resolution? Is that possible?
Thanks in advance!
The answer to your question largely depends on how you've implemented the game. If you've created it using GUI textures, then it largely depends on how you've placed/sized your objects versus screen size, which makes things a little tricky.
If the majority of your game is done using objects (such as planes, cubes, etc) then there's two methods I usually choose to use.
1) The first method is very easy to implement, though it doesn't always look too good. You can simply change the camera's aspect ratio to match the one you've designed your game around. So in your case, since you've designed your game at 4:3, you'd do something like this:
Camera.main.aspect = 4f/3f;
However, if someone's playing on a screen meant for 16:9, the game will end up looking distorted and stretched.
2) The second method isn't as easy, requiring quite a bit of work and calculations, but will give a much cleaner-looking result. If you're using an orthographic camera, one important thing to keep in mind is that, regardless of the screen resolution, the orthographic camera keeps the height fixed and only changes the width. The orthographic size is half the view's height in world units, so a camera with a size of 10 always shows 20 world units vertically. With this in mind, what you'd need to do is compensate for the widest possible camera within each level (for example, have a wide background) or dynamically change the orthographic size of the camera until its width matches what you've created.
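To make that compensation concrete, here is a small sketch of the arithmetic in plain Java (not Unity API; all names and values are illustrative assumptions):
// Illustrative arithmetic only: grow the orthographic size when the
// device is narrower than the aspect ratio the game was designed for.
public class OrthoSizeSketch {
    public static void main(String[] args) {
        float designOrthoSize = 5f;    // half the vertical view, in world units
        float designAspect = 4f / 3f;  // aspect ratio the level was authored for
        float screenAspect = 16f / 9f; // actual device aspect (width / height)
        // Width shown = 2 * orthoSize * aspect. If the device is narrower
        // than the design, enlarge the size so the authored width still fits.
        float orthoSize = screenAspect < designAspect
                ? designOrthoSize * designAspect / screenAspect
                : designOrthoSize;
        System.out.println("orthographicSize = " + orthoSize);
    }
}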
If you've made a 3D game with a perspective camera, screen resolution shouldn't really affect how it looks, but I guess that depends on the game, so more info would be required.
The way I did it is to change the camera viewport according to the device's aspect ratio.
Say you made the game for 800x1280.
Then you can do this in any one of your scripts:
float xFactor = Screen.width / 800f;
float yFactor = Screen.height / 1280f;
Camera.main.rect = new Rect(0, 0, 1, xFactor / yFactor);
and this works like magic
An easy way to do this is to consider your target. I mean, if you're making a game for the iPhone 5, then the aspect ratio is 9:16 vertical or 16:9 horizontal.
public float targetRatio = 9f/16f; //The aspect ratio you designed the game for.
void Start()
{
Camera cam = GetComponent<Camera>();
cam.aspect = targetRatio;
}
Here is my script for scaling the orthographic camera in 2D games:
public float screenHeight = 1920f;
public float screenWidth = 1080f;
public float targetAspect = 9f / 16f;
public float orthographicSize;
private Camera mainCamera;
// Use this for initialization
void Start () {
// Initialize variables
mainCamera = Camera.main;
orthographicSize = mainCamera.orthographicSize;
// Calculating orthographic width
float orthoWidth = orthographicSize / screenHeight * screenWidth;
// Setting aspect ratio
orthoWidth = orthoWidth / (targetAspect / mainCamera.aspect);
// Setting Size
Camera.main.orthographicSize = (orthoWidth / Screen.width * Screen.height);
}
I assume it's 2D rather than 3D; this is what I do:
Create a Canvas object
Set the Canvas Scaler to Scale with Screen Size
Set the Reference Resolution to for example: 480x320
Set the Screen Match Mode to match width or height
Set the match to 1 if your current screen width is smaller (0 if height is smaller)
Create an Image as background inside the Canvas
Add Aspect Ratio Fitter script
Set the Aspect Mode to Fit in Parent (so the UI anchor can be anywhere)
Set the Aspect Ratio to 480/320 = 1.5
And add this snippet on main Canvas' Awake method:
var canvasScaler = GetComponent<CanvasScaler>();
var ratio = Screen.height / (float) Screen.width;
var rr = canvasScaler.referenceResolution;
canvasScaler.matchWidthOrHeight = (ratio < rr.x / rr.y) ? 1 : 0;
//Make sure to add "using UnityEngine.UI;" at the top of your Aspect Ratio script!
For 3D objects you can use any of the answers above
The best solution for me is to use the intercept theorem (similar triangles), so that there is neither a cut-off at the sides nor a distortion of the game view. That means you have to move the camera back or forward depending on the aspect ratio.
If you like, I have an asset on the Unity Asset Store which automatically corrects the camera distance, so you never have distortion or a cut-off no matter which handheld device you are using.
I just got started on a renderer for my cross-platform framework (iOS and Android) using OpenGL ES. When I got to the viewport code (which is needed for split-screen support), I noticed there is a difference between iOS and Android. Here are two images.
Android
There is actually another glitch: it seems to wrap.
iOS
My question:
Which of the two is correct? I have no transformations applied except one to bring the drawn quad back a bit: glTranslatef(0.0f, 0.0f, -5.f);
Initialisation code:
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH); //Enable Smooth Shading
glClearColor(0.f, 0.f, 0.f, 1.0f); //Black Background
glClearDepthf(1.0f); //Depth Buffer Setup
glEnable(GL_DEPTH_TEST); //Enables Depth Testing
glDepthFunc(GL_LEQUAL); //The Type Of Depth Testing To Do
//Really Nice Perspective Calculations
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
Viewport and projection code:
glViewport(viewportX, viewportY, viewportW, viewportH);
glEnable(GL_SCISSOR_TEST);
glScissor(viewportX, viewportY, viewportW, viewportH);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
... And finally the frustum is calculated and set with glFrustumf.
I have also used this code:
float widthH = width * .1f;
float heightH = height * .1f;
glOrthof(-widthH, widthH, -heightH, heightH, .1f, 100.f);
glScalef(widthH, heightH, 1.f);
Maybe Android or iOS has something set by default? I am clueless.
Answering my own question for those who have the same issue.
I use GLKView, which apparently calls glViewport on each render call, resetting what I set in the previous frame. So if you use GLKView, make sure to call glViewport every frame! ... Or roll your own EAGLView to have some real control, which I think I am about to do.
This looks like you are not accounting for the scale factor of the iOS device. Bear in mind that the most recent iOS devices have retina displays with an extremely high ppi. You can see this artifact in the bottom left of the iOS screenshot. It is only displaying the bottom 25% of your texture because the entire view has a scale factor of 2.
In short, ensure you account for the scaleFactor on iOS and use this factor in your glScalef call.