Objects Hidden Using ScrollRect Blocking/Triggering Input Events - android

The project is written in AS3, packaged using AIR 28.0, and run on an Android 7 device (when run in a Windows desktop environment it works fine).
Objects that are hidden from view by a scrollRect are still receiving mouse events on my Android 7 device (a Galaxy S7), but not when I run the exact same application in a desktop environment.
Is this an Android or AIR bug? Or am I doing something wrong?
Example:
EDIT: (The original code example was incomplete; see the edit below.)
// Green 50x50 "button" at the top-left of the stage.
var button:Sprite = new Sprite();
button.graphics.beginFill(0x00FF11, 1);
button.graphics.drawRect(0, 0, 50, 50);
button.graphics.endFill();
addChild(button);
button.addEventListener(MouseEvent.CLICK, buttonClick);
// Container clipped to 50x50 by a scrollRect, placed just below the button.
var outer:Sprite = new Sprite();
var square:Sprite = new Sprite();
outer.cacheAsBitmap = true;
outer.addChild(square);
var rect:Rectangle = new Rectangle(0, 0, 50, 50);
outer.scrollRect = rect;
addChild(outer);
outer.y = 50;
// Black 100x100 square; only its lower half falls inside the scrollRect.
square.graphics.lineStyle();
square.graphics.beginFill(0x000000, 1);
square.graphics.drawRect(0, 0, 100, 100);
square.graphics.endFill();
square.addEventListener(MouseEvent.MOUSE_DOWN, squareDown);
square.y = -50;
// Handler stubs (bodies omitted in the original question).
function buttonClick(e:MouseEvent):void
{
    trace("buttonClick");
}
function squareDown(e:MouseEvent):void
{
    trace("squareDown");
}
In this example the rectangle drawn in the square object is only visible within outer.scrollRect (so 50x50 instead of 100x100) and the button object is visible above it. When a click happens over the visible button, however, the squareDown handler fires, not buttonClick.
It seems that the portion of the square hidden by the applied scrollRect is only hidden visually and still blocks and receives input events.
The application is being compiled and installed through my IDE (Flash Builder). I have tried different versions of my IDE and AIR SDK, with the same results.
Any help would be greatly appreciated.
EDIT:
I found instances where scrollRect worked properly, so I did further testing to narrow down the bug.
Removing the line:
outer.cacheAsBitmap = true;
... fixed the issue. Applying cacheAsBitmap to the square object itself instead also worked fine.
For some reason applying cacheAsBitmap to the object with the scrollRect causes this bug, but applying cacheAsBitmap to objects further down the display list works as it should.
I'm not experienced enough to know the performance implications, or why it is suggested to add cacheAsBitmap to the object with the scrollRect. Maybe someone else can explain.

The scrollRect property is probably the most performant way (with regular Flash display-list content) to clip some content with a rectangular mask.
The cacheAsBitmap property relieves Flash Player of the need to re-render the content of the object unless something changes inside the object.
To put it simply, imagine a page of a richly illustrated fairy-tale book: an overcomplicated vector picture with lots of details, some text in fancy fonts, and so on. You put it on stage, and Flash Player needs to render all of it: vector fills and strokes, fancy fonts. Then you don't change anything over the course of several frames, and Flash Player does not need to render anything, which is good. Then you shift this page 1px to the right and, guess what, Flash Player needs to render the whole thing again.
Any change under that page, over that page, or to that page - anything within that page's bounding box - will make Flash Player render the page's whole content again, which takes a pretty heavy toll on performance.
Using cacheAsBitmap lets Flash Player render it only once, so only internal changes to that page force a re-render; anything else makes Flash Player reuse the cached version, which is faster than rendering it over and over again. That, I agree, can be a solution for mobile devices, as they are slow and CPU-starved.
Hence if your app has elements that:
contain complicated vector shapes and text
are static or change quite rarely
are not overly large
then you can cache them with cacheAsBitmap = true to gain some performance at the expense of RAM.
However.
If you are building a store-grade product, not just a school project or something for personal amusement, I advise you to look into Gamua Starling, because you'll never get your app anywhere close to running smoothly with regular Flash content. As soon as you need to redraw the whole screen, the FPS will drop to the bottom of the sea, because Flash is CPU-hungry while mobile devices are CPU-starved. Starling, on the other hand, utilizes Stage3D and the device's GPU, so you can have something overly rich and animated running at 60 FPS with no sweat.

Related

AS3 rendering bitmaps in GPU mode

Flash Pro CC, AS3, AIR for Android (v17), rendering mode GPU, stage quality LOW, FPS: 60, testing device: an old Nexus One smartphone (Android 2.3.3).
The guides say that GPU mode makes rendering bitmaps cheap; somehow I can't grasp how exactly it works.
So I have 49 separate bitmap squares covering the stage and one MovieClip in the middle with a bitmap inside, tweened to move up and down (a jumping ball). Pretty simple, right?
This is the view: http://i.stack.imgur.com/EKcJ6.png
All graphics are bitmaps (not vectors). Yet I get 55 fps (it varies around 53-57).
Then I select all 49 squares and put them inside a symbol (MovieClip). Visually nothing changes. It seems to increase the FPS a tiny bit; the average fps is now ~57 (55-59).
Then I take the MovieClip (with all the squares inside it) and set cacheAsBitmap=true. Voila, now I have 60 fps!
What is happening in all 3 cases? Why do I need to put the bitmaps into one MovieClip and cache that MovieClip as a bitmap - aren't the squares already bitmaps?
I have also tried making each square a MovieClip and caching it as a bitmap, but I still got 55 fps.
Is it possible to keep the squares separate at 60 fps?
In my real project I have many MovieClips on the stage (~100), but in most cases only one of them is animated at a time. Yet somehow it seems that the mere presence of the other MovieClips reduces the performance (fps). Obviously, I cannot put them all into one MovieClip and cache it as a bitmap as in the simplified example above.
How can I solve this, what should I do?
Thanks!
I think it relates to this best practice recommendation:
Limit the numbers of items visible on stage. Each item takes some time to render and composite with the other items around it. When you no longer want to display a display object, set its visible property to false. Do not simply move it off stage, hide it behind another object, or set its alpha property to 0. If the display object is no longer needed at all, remove it from the stage with removeChild().
By putting all the bitmaps in a single container and setting cacheAsBitmap=true, you are essentially turning them into a single bitmap as far as the compositor is concerned, which tends to be faster to composite than many individual bitmaps. Setting cacheAsBitmap=true on an individual bitmap (or on a container holding a single bitmap) has no effect because it is already a bitmap.
Also note that GPU mode isn't recommended anymore; it was Adobe's first attempt at GPU-accelerating the display, and they basically gave up on that path in favor of the newer Stage3D rendering pipeline. While GPU render mode can work really well when used just right, it can be somewhat unpredictable and confusing, so I would highly recommend you check out Stage3D.
Hope that helps.

Andengine background image performance

I'm having some trouble using a sprite as background for my scene. I'm setting the background as follows:
Sprite bg = new Sprite(SCENE_WIDTH / 2, SCENE_HEIGHT / 2, this.mParallaxBackRegion, getVertexBufferObjectManager());
bg.setCullingEnabled(true);
mScene.setBackground(new SpriteBackground(bg));
Loading of the texture:
this.mParallaxBack = new AssetBitmapTexture(this.getTextureManager(), this.getAssets(), "gfx/_fixed.png", TextureOptions.BILINEAR);
this.mParallaxBackRegion = TextureRegionFactory.extractFromTexture(this.mParallaxBack);
this.mParallaxBack.load();
The png I'm loading is a completely black 960x640 image (same as my scene size), for testing purposes. However, setting the background causes my fps to drop from 60 (when not using the background) to 45 on my HTC Desire. I've tried multiple ways of setting the background, but all seem to be causing the same performance hit. Why does this affect the performance so drastically, and is there something I can do about this?
It is strange that you should get such a big performance hit, but here's one thing to try. Since it seems you likely have other things drawn on screen besides this background, add the background to the same texture. OpenGL works faster when there are fewer textures; when OpenGL has to draw from another texture it switches context (citation needed), which is a slow operation. So having all your sprites on a single texture makes GL draw calls more efficient.
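In the GLES2 branch of AndEngine (which your getVertexBufferObjectManager() call suggests you are on), packing everything into one atlas might look roughly like the sketch below; the 1024x1024 size, the "player.png" asset and its placement are made-up placeholders, not taken from your project:
// One atlas that holds the background plus the other game sprites,
// so OpenGL does not have to switch textures between draw calls.
BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/");
BitmapTextureAtlas atlas = new BitmapTextureAtlas(
        this.getTextureManager(), 1024, 1024, TextureOptions.BILINEAR);
ITextureRegion backgroundRegion = BitmapTextureAtlasTextureRegionFactory
        .createFromAsset(atlas, this, "_fixed.png", 0, 0);
// "player.png" is a hypothetical second asset packed below the background.
ITextureRegion playerRegion = BitmapTextureAtlasTextureRegionFactory
        .createFromAsset(atlas, this, "player.png", 0, 640);
atlas.load();
You would then build the background sprite and the other sprites from regions of this single atlas.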
This alone is not likely to explain the performance problem.
It could also be your antialiasing setting: TextureOptions.BILINEAR.
Bilinear is the highest-quality setting. Try using DEFAULT or NEAREST_PREMULTIPLYALPHA and see if that helps.
Also, set your background to ignore updates.
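If your AndEngine build exposes IEntity.setIgnoreUpdate() (the GLES2 branch does), that is a one-liner, sketched here with the variables from your snippet:
// The background never changes, so skip its per-frame update pass.
bg.setIgnoreUpdate(true);
mScene.setBackground(new SpriteBackground(bg));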
One last thought:
The HTC Desire is a fairly old phone at this point (released in 2010); the performance will be partly down to it being an old phone with an old version of Android. I test on an HTC Incredible, of about the same vintage, and you have to be very conservative with your image sizes on that device.
Did you know that you can make your AndEngine native size half of the screen size and scale it up using a RatioResolutionPolicy? I used that approach for my first AndEngine (GLES1) project and had great success on that generation of devices: https://play.google.com/store/apps/details?id=com.plonzogame&hl=en
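A sketch of that half-resolution setup in GLES2-style AndEngine (the numbers assume your 960x640 scene; GLES1 has an equivalent EngineOptions/RatioResolutionPolicy pairing, but the exact constructor differs):
// Render internally at half size (480x320); RatioResolutionPolicy scales the
// result up to fill the 960x640 display while keeping the aspect ratio.
final Camera camera = new Camera(0, 0, 480, 320);
final EngineOptions engineOptions = new EngineOptions(true,
        ScreenOrientation.LANDSCAPE_FIXED,
        new RatioResolutionPolicy(480, 320), camera);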

Cocos2d android Texture issue on Motorola xoom

I am developing a game for Android with the Cocos2d framework, using the latest build from GitHub (Weikuan Zhou).
In my game I use lots of images (the total size of the images is around 11 MB).
Problem:
I am getting black boxes instead of the images when I play my game more than 3 times.
What steps will reproduce the problem?
1. When I play my game more than 3 times via "Play Again" functionality of my game.
What is the expected output? What do you see instead?
- Images should be displayed properly instead of a "BLACK BOX".
In my logcat, I see the heap memory grow to around 13 MB.
I already release textures via the method below:
CCTextureCache.sharedTextureCache().removeAllTextures();
I also tried removing sprites manually, e.g. with the removeChild() method.
But so far I have not succeeded in finding any solution.
If anyone has a solution for this, please let me know.
From what you're describing, you're hitting exactly the same problem I did, which is that cocos2d for Android is really buggy when dealing with lots of single sprites loaded individually.
The best route to resolving this problem is to get hold of the free Flash version of Zwoptex (unless you've got a Mac) from here: http://zwopple.com/zwoptex/static/downloads/zwoptex-flashversion.zip
This will let you build up sprite sheets; I suggest packing as many sprites as you can onto each sheet, while trying to keep them sensibly grouped.
This is mainly down to Cocos doing a single render for ALL sprites in a sprite sheet, rather than one render per sprite for normal sprites, hence massively reduced processing time and much less memory usage.
You can then load the sprite sheet with code such as the following (I can't guarantee this code will execute as-is since I'm grabbing snippets from a structured project, but it will lead you to the right solution):
CCSpriteFrameCache.sharedSpriteFrameCache().addSpriteFrames("menus.plist"); // loads the spritesheet into the frame cache
CCSpriteSheet menuSpriteSheet = CCSpriteSheet.spriteSheet("menus.png", 20); // loads the spritesheet from the cache ready for use
.... // menu is a CCLayer
CCSprite sprite = CCSprite.sprite(CCSpriteFrameCache.sharedSpriteFrameCache().spriteFrameByName("name of sprite from inside spritesheet.png"));
menuSpriteSheet.addChild(sprite, 1, 1); // you add the sprite to its spritesheet
menu.addChild(menuSpriteSheet, 1); // then add the spritesheet to the layer
This problem happens when you load resources at runtime, so it is better to load resources before the scene starts. You can do it as follows.
Using the sprite sheets for resources in your game.
Implement your Ui in constructor of your class.
Implement your functionality in overridden method onEnter() in your layer.
You must unload your sprite sheets after finishing your scene.
This is the procedure that I am following.
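A rough sketch of that flow in cocos2d-android follows; the sheet name is a placeholder, and I am assuming the standard CCNode onEnter()/onExit() callbacks are available in your build:
public class GameLayer extends CCLayer {

    public GameLayer() {
        // 1. Load the sprite sheet once, before the scene is shown,
        //    so no textures are decoded while the game is running.
        CCSpriteFrameCache.sharedSpriteFrameCache().addSpriteFrames("game.plist");
        // 2. Build the UI here, using spriteFrameByName(...) on the cached frames.
    }

    @Override
    public void onEnter() {
        super.onEnter();
        // 3. Start the gameplay logic here, once the layer is on stage.
    }

    @Override
    public void onExit() {
        super.onExit();
        // 4. Unload what this scene loaded, so textures don't pile up
        //    between "Play Again" runs.
        CCTextureCache.sharedTextureCache().removeAllTextures();
    }
}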
Thanks.

AndEngine custom sprites

I started using AndEngine yesterday but I'm pretty confused. I want to make a custom character for each player, so I want to make a database of parts inside the app in assets/gfx, and if, for example, the player chooses different eyes or a different nose, the character will change. Is there any way to build something like this without making different sprites and setting up the positions and all of that? (There are some computer games that do what I want to do with my app, like MapleStory, LaTale, Gust Online, etc.)
Thanks!
I am not sure it is done this way (I never had a game where I used it, nor tried it), but here is an idea that came to my mind just now:
Let's say we have a game with character-appearance editing like MapleStory. To make it simple, a character is just a circle, or a 2D ball, and you can change its color and its eye color. So you have these folders:
assets/gfx/circles
And
assets/gfx/eyes
Now, let's say we have this circle:
And we have these eyes:
And we want to combine them.
You could do it like this:
BitmapTextureAtlas playerTextureAtlas = new BitmapTextureAtlas(256, 256, TextureOptions.BILINEAR_PREMULTIPLYALPHA);
TextureRegion playerTextureRegion = BitmapTextureAtlasTextureRegionFactory.createFromAsset(playerTextureAtlas, this, "circles/redcircle.png", 0, 0);
//By executing the next line, we place the eyes over the player texture area.
//There is NO need to keep a reference to the texture region this returns to us, because technically this one and playerTextureRegion are THE SAME - they both hold the same region in the texture (As long as they have the same sizes, of course)
BitmapTextureAtlasTextureRegionFactory.createFromAsset(playerTextureAtlas, this, "eyes/yelloweyes.png", 0, 0);
Remember - the eyes image's background has to be transparent so it won't override the circle! Play around with the TextureOptions parameter; I'm not sure if the one I used will fill this purpose - maybe another one will.
And lastly, you should keep the eyes and circle images the same size, since this way it is easier to test whether they fit. If you make the eyes just a small rectangle, you will have to mess with it until you find the place where you should position it over the circle. A waste of time...
Now you can just load different bodies/eyes/hair and so on, place them, and you have a customized player!
I am afraid Jong's solution won't work, at least not in the GLES1 version of AndEngine. When I tried to combine sprites this way, the last one loaded simply overwrote anything that was under it. In this case, only the eyes would appear on the screen.

Weird performance issue with Galaxy Tab

I am working on a 2d tutorial and was able to test my current tutorial part on a Samsung Galaxy Tab.
The tutorial simply moves the default icon randomly over the screen. With a tap I create a new moving icon. Everything works fine (constantly 60fps) on the Galaxy as long as I have 25 elements or less on the screen.
With the 26th element the frame rate drops to 25fps.
When I change the size/dimensions of the image to a much bigger one, I reach less than 25 fps before the 26th element. That's OK. But at some not really reproducible number of elements the frame rate drops from (mostly more than) 10 fps to 1 fps.
On my Nexus One I can add 150 elements and still have 50 fps.
What I have done: I changed the bitmap variable to a static one, so that not every element has its own image but all use the same one. That removed the behavior, but I doubt this solution is a good one. The magic number of 25 would suggest that I can only use 25 different images that way.
Does someone have any idea what can cause this behavior? Is it a bug in Samsung's modified Android version?
My sample Eclipse project is available. I would appreciate it if some Samsung owners would check their performance with the sample.
EDIT:
A colleague found a solution. He changed the way the bitmap is loaded from
mBitmap = BitmapFactory.decodeResource(res, R.drawable.icon);
to
mBitmap = BitmapFactory.decodeStream(new BufferedInputStream(res.openRawResource(R.drawable.icon)));
But we still don't really understand why it works this way...
Well, I've been looking at your project and everything seems to be fine, but I have one idea about what might be causing the frame-rate drop.
You're allocating objects during runtime. If you don't do that, you will have to create all objects at start, and you should then notice a significant drop right away (if my suggestion doesn't solve your problem).
That said, I'm not sure whether an object pool will solve your problem, but you can try. Initialize your objects in a constructor and, instead of making this call in onTouchEvent():
new Element(getResources(), (int) event.getX(), (int) event.getY())
You should have something like mElement.add(objectPool.allocate()), where the object pool finds an unused object in the pool. You should also have a specified number of objects in that object pool; from there you can check whether it is the allocation that is causing this error or whether it is something else.
With the 26th element the frame rate drops to 25fps.
When (or if) you implement this, you should see the frame rate drop right away (if this doesn't solve your problem, that is), since the object pool makes you allocate a fixed number of elements (e.g. maybe 100?) at start (even though you're not using them all visually).
Also, I have used the memory pool pattern (object pool) in one of my sample applications for Android. In that sample, I add a line to the Canvas in onTouchEvent() using an object pool (without allocating during runtime). In that source code you can easily change the total number of objects, so check it out and try it yourself. Write a comment if you want to look at my sample application (and source code) and I will gladly share it, since it's not public yet. My comments are in Swedish, but I think you should be able to understand them, since the variables and methods are in English. :)
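For illustration, a minimal element pool might look like the sketch below. Element, its constructor and its setPosition() are assumptions here, not the asker's actual class:
import java.util.ArrayList;
import java.util.List;

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Hypothetical pool: every Element is created once up front, and
// onTouchEvent() only hands out an unused instance instead of calling new.
public class ElementPool {
    private final List<Element> free = new ArrayList<Element>();

    public ElementPool(Resources res, int size) {
        // One shared Bitmap for all elements (see the side note below).
        Bitmap icon = BitmapFactory.decodeResource(res, R.drawable.icon);
        for (int i = 0; i < size; i++) {
            free.add(new Element(icon, 0, 0)); // assumed Element(Bitmap, x, y) constructor
        }
    }

    // Returns a pooled element moved to (x, y), or null if the pool is exhausted.
    public Element allocate(int x, int y) {
        if (free.isEmpty()) {
            return null;
        }
        Element e = free.remove(free.size() - 1);
        e.setPosition(x, y); // assumed setter on Element
        return e;
    }

    public void release(Element e) {
        free.add(e);
    }
}
onTouchEvent() would then call something like mElement.add(pool.allocate((int) event.getX(), (int) event.getY())) (with a null check) instead of new, so nothing is allocated after startup.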
Side note: You wrote that you tried (and even succeeded) removing the behaviour by making your Bitmap static. As it is right now, your elements have different instances of a Bitmap, which makes you allocate a new Bitmap every time you construct a new object. That means every object points to a different Bitmap even though they use the same resource. static is a fully valid solution (although a magic number of 25 seems weird).
This Bitmap case can be compared to the OpenGL system. If you have 20 objects which should all use the same resource, there are two possible solutions: they can all point to the same VRAM texture, or they can each point to a different VRAM texture (like your case when you're not using static), even though it is still the same resource.
EDIT:
Here is my sample application for Android that demonstrates the Memory Pool.
Regarding your solution with BitmapFactory, that probably depends on how that class works. I'm not sure, but I think that one of the decode...() methods generates a new Bitmap even if it is the same resource. It could be that new BufferedInputStream(res.openRawResource(R.drawable.icon)) reuses the BufferedInputStream from memory, although that is a big guess.
What you should do (in that case) is decode the resource once, store a reference to it in the Panel class, and pass that reference into new Element(bitmapReference, ...) instead. That way you only allocate it once and every element points to the same Bitmap in memory.
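Sketched out below; the Element constructor signature and the Panel internals are assumed here, since I don't have your real classes in front of me:
import java.util.ArrayList;
import java.util.List;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.view.MotionEvent;
import android.view.SurfaceView;

public class Panel extends SurfaceView {
    private final Bitmap mIcon;
    private final List<Element> mElements = new ArrayList<Element>();

    public Panel(Context context) {
        super(context);
        // Decode the resource exactly once...
        mIcon = BitmapFactory.decodeResource(getResources(), R.drawable.icon);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // ...and hand every new element the same Bitmap instance.
        mElements.add(new Element(mIcon, (int) event.getX(), (int) event.getY()));
        return true;
    }
}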
I have tried your code on an HTC Desire HD and the frame rate drops to unusable after adding the 20th image when targeting Android 2.2. When I exported the same code targeting Android 2.1, it worked fine and could handle over 200 instances!
I suspect that it has to do with creating instances of your GraphicObject class on 2.2, but I'm not quite sure...
I believe I can shed some light on this problem, at least on my Galaxy S running Gingerbread 2.3.5.
The first snippet loads my test.png into a Bitmap with Bitmap.Config = ARGB_8888, while the second loads it with Bitmap.Config = RGB_565. The strange thing is that, while Gingerbread is by default supposed to create a 32-bit surface, the RGB_565 version 'renders' (I profiled and compared the native calls to drawBitmap) much faster.
Hence the second idea, more appropriate for your example as a whole: an ARGB_8888 Bitmap has an alpha channel, so rendering 25+ overlapping sprites could create a bottleneck in the alpha-blending algorithm, while an RGB_565 image would be fine and fast.
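If the config difference really is the culprit, you can also force the cheaper config explicitly with the standard BitmapFactory.Options API instead of relying on what decodeStream happens to pick:
// Ask the decoder for a 16-bit bitmap instead of the default ARGB_8888.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inPreferredConfig = Bitmap.Config.RGB_565;
mBitmap = BitmapFactory.decodeResource(res, R.drawable.icon, opts);
// mBitmap.getConfig() tells you which config you actually got.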
