I'm using the gdx-freetype library on Android to generate a BitmapFont from a TrueType font located at assets/fonts/arial.ttf.
This is how I show some text on screen:
generator = new FreeTypeFontGenerator(Gdx.files.internal("fonts/arial.ttf"));
font30 = generator.generateFont(60, "ConectigTsrv", false);
generator.dispose();
Label l = new Label("Connecting to server...", new LabelStyle(font30, Color.BLUE));
l.setX(400 - l.getWidth()/2f);
l.setY(480 - l.getHeight() - 10);
stage.addActor(l);
The text is displayed on screen just fine.
But after a while, or when I press the Home button and then come back to the application, all characters turn into black rectangles.
Any idea what's going on?
The problem is that the textures created by FreeTypeFontGenerator were not managed until this recent commit. The fact that textures are unmanaged means that they have to be reloaded following a loss of the OpenGL context, which occurs in scenarios such as the one you described.
If you upgrade libgdx to the latest nightlies, the problem will probably go away.
For more information, the problem with unmanaged textures is described very well in this article.
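If upgrading is not an option, a common workaround (my own sketch, not from the original answer) is to regenerate the font whenever the application resumes, since resume() runs after Android has recreated the OpenGL context:
// Hedged sketch, assuming an older libgdx where FreeType-generated fonts are unmanaged:
// regenerate the font in resume(), which runs after the GL context has been recreated.
@Override
public void resume() {
    if (font30 != null) {
        font30.dispose(); // free the texture that belonged to the lost context
    }
    FreeTypeFontGenerator generator =
            new FreeTypeFontGenerator(Gdx.files.internal("fonts/arial.ttf"));
    font30 = generator.generateFont(60, "ConectigTsrv", false);
    generator.dispose();
    // Any LabelStyle built from the old font must be pointed at the new one as well.
}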
In case anyone is still having this problem: using LibGDX 1.9.2, I had this problem as well.
Run the game, navigate 'back' to the Android home screen, go back to the game, and the fonts would be black rectangles.
It turned out I was loading all textures statically, which loads them only once at game start and never again:
//THIS IS WRONG
public class Styles {
    public static final BitmapFont HEADER_FONT;
    public static final FreeTypeFontGenerator _freeTypeFontGenerator = ...

    static {
        FreeTypeFontGenerator.FreeTypeFontParameter params = ...
        HEADER_FONT = _freeTypeFontGenerator.generateFont(params);
    }
}
This causes trouble when the game is reloaded in memory: as far as I understand, the final fields then refer to texture data that no longer exists. To fix that, I got rid of the final modifiers and load the assets in the create() function instead, recreating all assets each time the game is reloaded in memory:
@Override
public void create() {
    Styles.loadAssets();
}
And in Styles:
//STATIC RESOURCES CAN CAUSE TROUBLE, KEEP IT IN MIND
public class Styles {
    public static BitmapFont HEADER_FONT;
    public static FreeTypeFontGenerator FONT_GENERATOR = ...

    public static void loadAssets() {
        FreeTypeFontGenerator.FreeTypeFontParameter params = ...
        HEADER_FONT = FONT_GENERATOR.generateFont(params);
    }
}
I prefer keeping my read-only assets static to be memory-friendly. However, the use of static resources may still create problems I'm not aware of, as per the manual.
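One detail worth adding (my own assumption, not something the original answer covers): if loadAssets() can run more than once in the same process, disposing the previous font before regenerating it avoids leaking the old texture:
public static void loadAssets() {
    // Hypothetical guard: if loadAssets() runs again in the same process,
    // dispose the previous font so its texture is not leaked.
    if (HEADER_FONT != null) {
        HEADER_FONT.dispose();
    }
    FreeTypeFontGenerator.FreeTypeFontParameter params =
            new FreeTypeFontGenerator.FreeTypeFontParameter();
    params.size = 32; // example size, not from the original answer
    HEADER_FONT = FONT_GENERATOR.generateFont(params);
}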
Related
I'm an Android programmer. Because of the many objects in the scene, my game is lagging.
I have a theory for removing the lag from my game: if I can control rendering in Unity, I can remove the lagging.
using UnityEngine;
using System.Collections;

public class Enemy : MonoBehaviour {
    private GameObject object2;

    void Start() {
        // Start hidden; the renderer is enabled again once the target is close enough.
        GetComponent<Renderer>().enabled = false;
    }

    void Update() {
        object2 = GameObject.Find("TR");
        var distance = Vector3.Distance(gameObject.transform.position, object2.transform.position);
        print(distance);
        if (distance <= 80) {
            GetComponent<Renderer>().enabled = true;
        }
    }
}
It doesn't work. How can I have a boolean so that an object is rendered when there's a collision
and not rendered otherwise?
I want a zone such that every object inside the zone is rendered and everything outside it is not.
void OnTriggerEnter(Collider collision)
{
    if (collision.gameObject.tag == "zone")
    {
        GetComponent<Renderer>().enabled = true;
    }
    else
    {
        GetComponent<Renderer>().enabled = false;
    }
}
This doesn't work, and neither does:
void OnTriggerEnter(Collider collision)
{
    if (collision.gameObject.tag == "zone")
    {
        gameObject.SetActive(false);
    }
    else
    {
        gameObject.SetActive(true);
    }
}
Either this is already implemented in Unity, or implementing it yourself is a bad idea, because raycasts are expensive and you would need a lot of them. Try finding the other problems that cause the lag in your game: disable feature by feature and note how many frames you get; this will give you the best overview of what the problem is. Look online for which methods are expensive (Instantiate, Destroy, try merging all the models you have, use a smaller number of shaders, faster shaders, fewer textures to load, FindGameObjectByName (or tag...)).
Here you will find a great document about optimization. It's prepared for mobile devices, but I hope you will find what you need: Unity Optimization Guide for x86 Android.
I would recommend keeping your blue blobs in an object pool and disabling the ones that leave your screen.
You know your position and you know the positions of the objects in the pool, so you can compute the distance in one direction, for instance behind you, and disable an object after a certain amount.
Raycasting or collision checks are unnecessary for this.
In your terrain generation scripts, check for disabled pool objects; if one exists, it should be moved ahead in the level and repositioned, or whatever logic you have there.
Don't instantiate and destroy unless you really need to; do it on level load instead of on the fly.
(It's expensive.)
There are some really good tutorials on the Unity page; have a look there.
They cover things like endless runners.
I'm a developer still learning Android. I've created a few apps so far (an alarm clock, a widget and a pass manager using databases), so I have a little bit of experience, but now I'd like to create a 2D side-scroller game. I've checked the web and there are different tutorials, but what's the best way to start working on it? I've read about libgdx, but I'm not sure if it's outdated.
I've seen that all the games are made in Java and then ported to Android; is this correct? I would appreciate some guidance, thanks!
You have multiple options: you can go for AndEngine (which to me seemed extremely under-documented and random), make your own "native" Android game by extending a SurfaceView (which isn't impossible, but it certainly doesn't make your life easy, especially when handling images and sound; here's a setup for it: Using a custom SurfaceView and thread for Android game programming (example)), or use LibGDX.
I personally recommend LibGDX; I even made a fairly simple 4-player multiplayer game in it and it certainly was not difficult. I'd recommend the following tutorial on how to get into it: http://www.gamefromscratch.com/page/LibGDX-Tutorial-series.aspx
And the basics are the following:
When you create a project, the first thing you want to do is change the ApplicationAdapter to Game so you'll have access to the setScreen(Screen) delegation function; that way you can separate the display and logic of your game into Screens (a minimal sketch follows below).
You want to handle elapsed time in your Screen, which is done as the following: How to track time in Libgdx(android)
You probably want to make a menu, which of course can be done with pretty pictures and BitmapFonts, but I'll point you to the official wiki ( https://github.com/libgdx/libgdx/wiki ) for that. You can use Scene2D, although I found it slightly difficult, so I personally made a menu out of rectangles, and it worked fairly well: LibGDX - Custom Click Listener?
A bit more "click oriented" guide on how I handled touch events using LibGDX: https://stackoverflow.com/a/24511980/2413303
Afterwards, it's literally just implementing game logic, timers, data models, behavior.
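To make those basics concrete, here is a minimal sketch of the Game/Screen split and the elapsed-time handling mentioned above (class names are just examples):
public class MyGame extends Game {
    @Override
    public void create() {
        // Delegate rendering to a Screen so menu and gameplay logic stay separate.
        setScreen(new MenuScreen(this));
    }
}

// In a separate file:
public class MenuScreen extends ScreenAdapter {
    private final MyGame game;
    private float elapsed; // accumulated time, fed by the delta passed to render()

    public MenuScreen(MyGame game) {
        this.game = game;
    }

    @Override
    public void render(float delta) {
        elapsed += delta; // elapsed-time tracking, as mentioned above
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        // draw the menu here; switch screens with game.setScreen(...)
    }
}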
The way I solved the stretching, rather than using a StretchViewport or the built-in cameras, was the following:
public class Resources
{
    public static Texture texture;
    public static SpriteBatch batch;
    public static Matrix4 normalProjection;
    public static BitmapFont bitmapFont;
    public static ShapeRenderer shapeRenderer;
    ....
}

public static void initialize()
{
    int width = Gdx.graphics.getWidth();
    int height = Gdx.graphics.getHeight();
    Resources.bitmapFont = new BitmapFont();
    Resources.shapeRenderer = new ShapeRenderer();
    Gdx.gl.glLineWidth((width < 640 && height < 480) ? 2.5f : 6f);
    //camera = new OrthographicCamera(1, h / w); //I didn't use this at all
    Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    loadTextures();
    Resources.batch = new SpriteBatch();
    Resources.normalProjection = new Matrix4().setToOrtho2D(0, 0, 480, 320); //model is 480x320
    Resources.batch.setProjectionMatrix(Resources.normalProjection);
    Resources.shapeRenderer.setProjectionMatrix(Resources.normalProjection);
}
public class InputTransform
{
    private static int appWidth = 480;
    private static int appHeight = 320;

    public static float getCursorToModelX(int screenX, int cursorX)
    {
        return (((float)cursorX) * appWidth) / ((float)screenX);
    }

    public static float getCursorToModelY(int screenY, int cursorY)
    {
        return ((float)(screenY - cursorY)) * appHeight / ((float)screenY);
    }
}
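For context, this is roughly how that transform could be called from an InputProcessor (my own usage example, not part of the original answer):
@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    // Convert device pixels into the fixed 480x320 model space used above.
    float modelX = InputTransform.getCursorToModelX(Gdx.graphics.getWidth(), screenX);
    float modelY = InputTransform.getCursorToModelY(Gdx.graphics.getHeight(), screenY);
    // hit-test menu rectangles etc. in model coordinates here
    return true;
}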
Make sure to dispose of resources that need disposing in the Game's dispose() callback.
Libgdx is not outdated and is, IMHO, the best way to program for Android. The reason is that you can develop 99% on the desktop (of course, think about the controls, which won't be a keyboard on Android) and then you have a working Android app with only a few extra lines.
If you instead develop for Android directly, you need to use the very slow emulator or you have to send the app to a test phone just to debug your code. This is a lot slower than debugging on the desktop directly.
Libgdx is very efficient, easy to use (as soon as you understand how it works) and has very good documentation.
For tutorials: I wrote an answer here on SO which seemed to help some people. It is a short "tutorial" which shows only the very basics, and I have added links to some tutorials which helped me learn it. I hope it helps you too.
EDIT:
More debugging led me to the fact that glGetAttribLocation returns -1, except on the first start of the application. The program ID is valid (I guess?); it was 12 in my testing right now. I also tried to retrieve the attribute locations again right before drawing, but that did not work either.
My shader "architecture" now looks like this:
I've turned the shader into a singleton, i.e. there is only one instance. Using it:
public void useProgram() {
    GLES20.glUseProgram(iProgram);
    getUniformLocations();
    getAttributeLocations();
}
That is, the program is sent to OpenGL, and afterwards I retrieve the uniform and attribute locations for all my variables; they are stored in a HashMap (one for each shader):
@Override
protected void getAttributeLocations() {
    addToGLMap(A_NORMAL, GLES20.glGetAttribLocation(iProgram, A_NORMAL));
    addToGLMap(A_POSITION, GLES20.glGetAttribLocation(iProgram, A_POSITION));
    addToGLMap(A_COLOR, GLES20.glGetAttribLocation(iProgram, A_COLOR));
}
I don't understand why the program's ID is, for example, 12, but all the attribute locations are non-existent on the second and subsequent runs of my application...
In my application, I am loading a Wavefront object, and I am also drawing several lines and cubes, just to try things out. After starting the application "clean", i.e. after rebooting or installing it, everything looks as intended. But if I close the application and re-open it, it looks weird; a screenshot is at the bottom.
What I'm currently doing:
onSurfaceCreated:
Taking care of culling, clear color, etc, etc.
Clear all loaded objects (just for testing, will of course not delete memory in later phase).
Reload objects (threaded).
My objects are stored like this:
public class WavefrontObject {
    private FloatBuffer mPositionBuffer = null;
    private FloatBuffer mColorBuffer = null;
    private FloatBuffer mNormalBuffer = null;
    private ShortBuffer mIndexBuffer = null;
}
Buffers are filled upon creation of the element.
They are drawn:
mColorBuffer.position(0);
mNormalBuffer.position(0);
mIndexBuffer.position(0);
mPositionBuffer.position(0);
GLES20.glVertexAttribPointer(mShader.getGLLocation(BaseShader.A_POSITION), 3, GLES20.GL_FLOAT, false,
0, mPositionBuffer);
GLES20.glEnableVertexAttribArray(mShader.getGLLocation(BaseShader.A_POSITION));
// etc...
GLES20.glDrawElements(GLES20.GL_TRIANGLES, mIndexBuffer.capacity(), GLES20.GL_UNSIGNED_SHORT, mIndexBuffer);
Do I need to disable the VertexAttribArrays after drawing them? I am currently overwriting the buffers in each drawing loop, but could they be interfering with other models being drawn?
The model I am loading displays a small toy-plane. After restarting the Application, it looks like this (loading the object, all colors are set to white (for testing)):
So to me it looks like the buffers either have left-over stuff in them? What's the "best practice" for using these buffers? Disable the arrays? Does OpenGL ES2.0 offer some sort of "clear buffer" method that I can use before putting my values in them?
What was expected to be drawn: At the point where the "weird triangles" and colors origin from, there should be the plane-model. All in white.
When your application goes to the background, its OpenGL context is destroyed.
So all objects (programs and their uniform/attribute handles, etc.) are invalidated.
When reopening, you have to clear/invalidate all singleton objects like yours and recreate them...
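In practice that means recompiling the shader program and re-querying the handles inside onSurfaceCreated(), instead of reusing values cached from the previous context. A rough sketch (the method names on the shader singleton are assumptions, not the poster's actual API):
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
    GLES20.glClearColor(0f, 0f, 0f, 1f);
    GLES20.glEnable(GLES20.GL_CULL_FACE);

    // The old program id (e.g. 12) belonged to the destroyed context, so rebuild the
    // singleton shader from source and fetch fresh attribute/uniform locations.
    MyShader shader = MyShader.getInstance(); // hypothetical singleton accessor
    shader.recompile();   // assumed: compiles, attaches and links a new program
    shader.useProgram();  // re-populates the uniform/attribute HashMap
}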
I'm trying out libgdx as an OpenGL wrapper, and I have some issues with its graphical rendering:
For some reason, all images (textures) on the Android device look a little blurred with libgdx. This also includes text (fonts).
For text, I thought it was because I use bitmap fonts, but I can't find an alternative. I've found that there is a library called "gdx-stb-truetype", but I can't find out how to download and use it.
For normal images, even when I show the entire image without any scaling, I expect it to look as sharp as it does on a computer screen, especially since the device has such a good screen (it's a Galaxy Nexus).
I've tried to turn anti-aliasing off using the following code:
final AndroidApplicationConfiguration androidApplicationConfiguration=new AndroidApplicationConfiguration();
androidApplicationConfiguration.numSamples=0; //tried the value of 1 too.
...
I've also tried setting the scaling method to various filters, but with no luck. Example:
texture.setFilter(TextureFilter.Nearest,TextureFilter.Nearest);
As a test, I found a sharp image that exactly matches the visible resolution of the device (720x1184 for the Galaxy Nexus, because of the buttons bar), and I set it as the background of the libgdx app. Of course, I had to add extra blank space for the texture to be loadable, so the final size of the image (content plus empty space) is still a power of two in both width and height (1024x2048 in this case).
On the desktop app it looks fine. On the device it looks blurred.
A weird thing I've noticed is that when I change the device's orientation (horizontal <=> vertical), for the very short time before the rotation animation starts, I can see both the image and the text very well.
Surely libgdx can handle this, since the OpenGL part of Android's api-tests project shows images just fine.
Can anyone please help me?
@user1130529: I do use SpriteBatch. Also, here's what I do to set the viewport; it happens whether or not I choose to keep the aspect ratio.
public static final int VIRTUAL_WIDTH =720;
public static final int VIRTUAL_HEIGHT =1280-96;
private static final float ASPECT_RATIO =(float)VIRTUAL_WIDTH/(float)VIRTUAL_HEIGHT;
...
#Override
public void resize(final int width,final int height)
{
// calculate new viewport
if(!KEEP_ASPECT_RATIO)
{
_viewport=new Rectangle(0,0,Gdx.app.getGraphics().getWidth(),Gdx.app.getGraphics().getHeight());
Gdx.app.log("DEBUG","size:"+_viewport);
return;
}
final float currentAspectRatio=(float)width/(float)height;
float scale=1f;
final Vector2 crop=new Vector2(0f,0f);
if(currentAspectRatio>ASPECT_RATIO)
{
scale=(float)height/(float)VIRTUAL_HEIGHT;
crop.x=(width-VIRTUAL_WIDTH*scale)/2f;
}
else if(currentAspectRatio<ASPECT_RATIO)
{
scale=(float)width/(float)VIRTUAL_WIDTH;
crop.y=(height-VIRTUAL_HEIGHT*scale)/2f;
}
else scale=(float)width/(float)VIRTUAL_WIDTH;
final float w=VIRTUAL_WIDTH*scale;
final float h=VIRTUAL_HEIGHT*scale;
_viewport=new Rectangle(crop.x,crop.y,w,h);
Gdx.app.log("DEBUG","viewport:"+_viewport+" originalSize:"+VIRTUAL_WIDTH+","+VIRTUAL_HEIGHT+" aspectRatio:"+ASPECT_RATIO+" currentAspectRatio:"+currentAspectRatio);
}
Try this:
textureRegion.getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);
Try the following:
texture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
There are several types of TextureFilter. I assume that the Linear one (is that the default?) is what causes the blurring.
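If the blur affects a BitmapFont rather than a plain texture, the same idea can be applied to the font's backing texture (a small sketch; whether Nearest or Linear looks better depends on how much the image ends up scaled):
// Unscaled, pixel-exact art usually looks sharpest with Nearest;
// art that gets scaled usually looks better with Linear.
Texture background = new Texture(Gdx.files.internal("data/background.png")); // example path
background.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);

BitmapFont font = new BitmapFont();
font.getRegion().getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);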
If you have the Chainfire 3D application, or another one that reduces textures or changes them to 16-bit, turn it off; that worked for me, and I had the same problem.
I have created a little screen manager (to handle multiple scenes), where every class extends a custom class called Screen and does the following (for example) in its load method:
public Scene load() {
    BitmapTextureAtlas mBitmapTextureAtlas = new BitmapTextureAtlas(512, 1024, TextureOptions.BILINEAR_PREMULTIPLYALPHA);
    SceneManager.loadTexture(mBitmapTextureAtlas);
    scene.attachChild(bgSprite);
    return scene;
}
The problem is that sometimes, if I move fast between screens, some sprites are not rendered, and sometimes they are (it depends on how fast I switch between scenes).
I guess the problem might be that I'm attaching the sprites to the scene when they have not yet been fully loaded into memory. Could that be it? Any idea how to solve this problem?
Yes, this happens when you move across scenes, so you can set boolean flags for your sprites and only perform operations when they are true. It is especially useful when performing collision detection.
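A minimal sketch of that flag idea (the field and method names are made up for illustration; scene, bgSprite and SceneManager come from the question's own setup):
private boolean spritesLoaded = false; // flipped only once the atlas has been loaded

public Scene load() {
    BitmapTextureAtlas mBitmapTextureAtlas = new BitmapTextureAtlas(512, 1024,
            TextureOptions.BILINEAR_PREMULTIPLYALPHA);
    SceneManager.loadTexture(mBitmapTextureAtlas);
    scene.attachChild(bgSprite);
    spritesLoaded = true; // from here on it is safe to operate on the sprites
    return scene;
}

public void update() {
    if (!spritesLoaded) {
        return; // skip collision detection and sprite logic until assets are ready
    }
    // collision detection, sprite logic, ...
}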