Why does my OpenGL output differ across devices? - android

I have created an Android app for drawing lines, circles, and so on, using GLSurfaceView with OpenGL ES 2.0, much like an AutoCAD app.
The app works well on the Google Nexus 7, in the sense that if we draw a line and then a circle, the line is not erased from the surface view. On the Samsung Galaxy Note II it behaves entirely differently.
The line drawn before a circle gets erased; that is, each time we draw a new line or circle, the previous one is erased, so I can only draw one shape at a time. What I need on the Samsung Galaxy Note II is the same output I get on the Google Nexus 7, i.e. I want to draw more than one shape in the GLSurfaceView at a time.
Note:
Both the Google Nexus 7 and the Samsung Galaxy Note II run Android Jelly Bean 4.2, but they have different GPUs: the Nexus 7 has a ULP GeForce and the Galaxy Note II a Mali-400 MP.
Would this be an issue in the rendering output of the SurfaceView?
Should we take the GPU into account while coding?
Can anyone tell me why the output differs between devices?

Should we take the GPU into account while coding? No way, the OpenGL API is a layer between your application and the hardware.
This is largely correct for desktop graphics, where all GPUs are immediate renderers; however, it is NOT the case in mobile graphics.
The Mali GPUs use tile-based immediate-mode rendering.
For this type of rendering, the framebuffer is divided into tiles of size 16 by 16 pixels. The Polygon List Builder (PLB) organizes input data from the application into polygon lists. There is a polygon list for each tile. When a primitive covers part of a tile, an entry, called a polygon list command, is added to the polygon list for the tile.
The pixel processor takes the polygon list for one tile and computes values for all pixels in that tile before starting work on the next tile. Because this tile-based approach uses a fast, on-chip tile buffer, the GPU only writes the tile buffer contents to the framebuffer in main memory at the end of each tile. Non-tile-based, immediate-mode renderers generally require many more framebuffer accesses. The tile-based method therefore consumes less memory bandwidth, and supports operations such as depth testing, blending and anti-aliasing efficiently.
Another difference is the treatment of rendered buffers. Immediate renderers will "save" the content of your buffer, effectively allowing you to draw only the differences in the rendered scene on top of what previously existed. This IS available on Mali; however, it is not enabled by default, as it can cause undesired effects if used incorrectly.
There is an example in the Mali GLES2 SDK showing how to use "EGL Preserve" correctly, available in the GLES2 SDK here
The reason the GeForce ULP-based Nexus 7 works as intended is that, as an immediate-mode renderer, it defaults to preserving the buffers, whereas Mali does not.
From the Khronos EGL specification:
EGL_SWAP_BEHAVIOR
Specifies the effect on the color buffer of posting a surface with eglSwapBuffers. A value of EGL_BUFFER_PRESERVED indicates that color buffer contents are unaffected, while EGL_BUFFER_DESTROYED indicates that color buffer contents may be destroyed or changed by the operation.
*The initial value of EGL_SWAP_BEHAVIOR is chosen by the implementation.*
The default value of EGL_SWAP_BEHAVIOR on the Mali platform is EGL_BUFFER_DESTROYED. This is due to the performance hit of fetching the previous buffer from memory before rendering the new frame and storing it again at the end, as well as the bandwidth consumed (which is also incredibly bad for battery life on mobile devices). I am unable to comment with certainty on the default behavior of the Tegra SoCs; however, it is apparent to me that their default is EGL_BUFFER_PRESERVED.
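If you rely on the previous frame's contents, you can request preserved behavior explicitly rather than depending on the implementation default. A minimal sketch, assuming API level 17+ (android.opengl.EGL14), a current EGL context on the calling thread, and an EGL config that supports EGL_SWAP_BEHAVIOR_PRESERVED_BIT; the variable names are illustrative:

    import android.opengl.EGL14;
    import android.opengl.EGLDisplay;
    import android.opengl.EGLSurface;

    EGLDisplay display = EGL14.eglGetCurrentDisplay();
    EGLSurface surface = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
    // Ask EGL to keep the color buffer intact across eglSwapBuffers. The call
    // returns false if the surface's config cannot preserve the buffer.
    boolean preserved = EGL14.eglSurfaceAttrib(display, surface,
            EGL14.EGL_SWAP_BEHAVIOR, EGL14.EGL_BUFFER_PRESERVED);

Keep in mind that the bandwidth cost described above still applies once preservation is enabled.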
To clarify Mali's position with regard to the Khronos GLES specifications - Mali is fully compliant.

I have not seen your code, but you are probably doing something wrong.
Maybe you swap or clear the buffer where you must not.
Should we take the GPU into account while coding?
No way, the OpenGL API is a layer between your application and the hardware.

Related

Ubiquitous support for degenerate triangle strips across Android devices?

I'm working with the OpenGL ES 3.0 API on Android, writing a bitmap text renderer. It works flawlessly on two of my devices, both Samsung, but on my Kindle Fire HD 10 (7th generation) tablet it's all messed up: it samples from the wrong portions of the texture atlas, displays wrong glyphs, sometimes won't display the entire message, and sometimes, when I switch messages via mapped buffers, it briefly displays the entire message at once before starting its animation. I suspect it's related to the degenerate triangle strips I'm using, so I ask: is support for them not ubiquitous across all Android devices supporting OpenGL ES 3.0? I've had trouble with the Kindle Fire before in shader-related areas; it won't work at all if I don't specify a precision, both for floats and for sampler2DArray, as far as I've discovered.
A degenerate triangle "should just work"; there is no feature here in the API for the hardware not to support. They are just triangles that happen to have zero area because two of their vertices coincide.
They have historically been really common in any content using triangle strips, so to be honest I'd be surprised if they were broken.
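For context, a hedged sketch of what such content typically looks like (the indices are illustrative): two quads merged into one triangle strip by repeating the last index of the first quad and the first index of the second, which yields four zero-area triangles that the rasterizer discards:

    // Two quads in one glDrawElements(GL_TRIANGLE_STRIP, ...) call.
    short[] stripIndices = {
            0, 1, 2, 3,   // first quad (two triangles)
            3, 4,         // degenerate bridge: triangles (2,3,3), (3,3,4), (3,4,4), (4,4,5)
            4, 5, 6, 7    // second quad
    };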
It won't work at all if I don't specify a precision, both for floats and for sampler2DArray, as far as I've discovered.
That's not a bug; it's what the specification requires you to do. See "4.7.4. Default Precision Qualifiers" in the OpenGL ES 3.2 Shading Language specification.
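For reference, a minimal GLSL ES 3.00 fragment-shader preamble (written here as a Java string constant; the names are illustrative) declaring the defaults the spec leaves undefined:

    // Fragment shaders have no default precision for float, and sampler2DArray
    // never has a default precision, so both must be declared explicitly.
    static final String FRAG_HEADER =
            "#version 300 es\n"
          + "precision mediump float;\n"
          + "precision mediump sampler2DArray;\n"
          + "uniform sampler2DArray uAtlas;\n";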

Can I have two ping-pong framebuffers of screen size on Android?

I need two ping-pong framebuffers in my OpenGL ES app on Android, both the same size as the device screen. Neither a depth buffer nor a stencil buffer will be attached to them, only an RGBA8888 color buffer.
I'm planning to use them for adding some Photoshop-like blending modes (color burn, overlay, etc.).
Can I afford this on most modern devices (say, above Android 3.0 with OpenGL ES 2.0)? If not, why? And how do I determine whether I can create them, prior to actually creating them?
I think this should be reasonably safe to assume for devices above Android 3.0, which should all be running OpenGL ES 2.0.
Your bottlenecks will be memory (though you're unlikely to run out here) and the complexity of whatever you need to ping-pong these buffers for. I ported this fluid simulation to Android using OpenGL ES 2.0, which relies heavily on ping-ponging framebuffers to perform simulation updates.
I had six or so full-screen framebuffers on a Samsung Galaxy S4 and a Nexus 7 (2012), all of which were created just fine. My problem was that the simulation itself was very complex and these devices just couldn't keep up.
I can't imagine memory being an issue for 2 full-screen buffers, and performance will probably be OK depending on how often you render to them.
Not sure how much this helps, as it all depends on your usage.
As for checking up front: glFramebufferTexture2D returns void in ES 2.0, so attach the texture to your FBO and then check glCheckFramebufferStatus to see whether the device can handle it.
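A sketch of that check, assuming OpenGL ES 2.0 (android.opengl.GLES20) and a current GL context; call it twice to build the ping-pong pair:

    // Creates one screen-sized FBO backed by an RGBA8888 texture and verifies
    // completeness, which is the reliable signal that the device can handle it.
    static int createScreenFbo(int width, int height) {
        int[] tex = new int[1], fbo = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

        GLES20.glGenFramebuffers(1, fbo, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, tex[0], 0);

        int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        if (status != GLES20.GL_FRAMEBUFFER_COMPLETE) {
            throw new RuntimeException("FBO incomplete: 0x" + Integer.toHexString(status));
        }
        return fbo[0];
    }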

Android OpenGL ES 2.0 Texture setting does not seem to work

I have a problem with an older device running an Android version I still like to support (2.3.5), where the textures only sometimes work.
I have 5 textures loaded in memory from the start of the game (they do not change and are not reloaded). Everything shows up fine in the tutorial, but in the game it does not. The rendering process and object loading are exactly the same, and they work perfectly on my newer device (Nexus 4) in all game modes and the tutorial.
I load 4 textures of 1024x1024 and 1 texture of 512x512. The textures that are not working properly are the last ones loaded and bound. So it could be a memory issue, but how can I find that out? The OpenGL error function does not report any error during gameplay, even though the textures are not shown correctly.
Both devices support OpenGL ES 2.0.
The third and fourth textures do work in the tutorial part of the game, so the device is able to load at least the first four textures, which indicates the number of textures is not the problem.
The old device supports 1024x1024 textures according to the specs.
Changing all the textures to 512x512 shows the same issue. If memory were the problem, this should have worked, because four of those textures fit into one 1024x1024 texture, which already works perfectly on all devices (1x512 plus 1x1024 occupies the same memory as 5x512).
It works perfectly on the Nexus 4, so a coding error is unlikely.
OpenGL does not report any error through glGetError in the loading/setup/rendering calls, so all the OpenGL state should be fine.
The loading of my texture pool, the creation and loading of the objects (pool), and the render function (including the shader) use exactly the same code in all modes, so that cannot explain the difference. I have debugged all the objects rendered with OpenGL to see if some data is corrupt or incorrect, but it is all correct when passed to the render pipeline. The rendering code passes the integer '3' to the shader as the texture ID. The texture loaded at '3' should be the one I need, but somewhere OpenGL decides to use texture '1' instead at those moments, while in the tutorial the same data is passed and OpenGL uses texture ID '3' as intended...
Posting code is an issue due to the complexity of the engine, which handles all the loading/rendering etc. of the graphics part of the game. Posting all the code of my game/engine seems a little overkill, so if some part is needed to solve this issue I will post it.
I am basically out of ideas for solving this issue :( Does anyone have any idea or suggestion as to what I can try, or maybe a solution?
Fixed this problem (and apparently also my other posted one): it was a bug in the GPU's driver software where the texture unit ID was only taken once. More info: http://androidblog.reindustries.com/hack-bad-gpu-fix-not-using-correct-texture-opengl/
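The usual workaround for this class of driver bug (a hedged sketch, not necessarily the exact code from the linked post; textureHandle and uSamplerLocation are illustrative) is to stop switching texture units and instead rebind the desired texture to one fixed unit before every draw:

    // Sample everything from unit 0 and rebind per draw call, sidestepping
    // drivers that latch the sampler uniform's unit only once.
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
    GLES20.glUniform1i(uSamplerLocation, 0);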

OpenGL ES 2.0 texture not showing on some device

I found a 3D graphics framework for Android called Rajawali and I am learning how to use it. I followed the most basic tutorial, which renders a sphere object with a 1024x512 JPG image as the texture. It worked fine on the Galaxy Nexus, but it didn't work on the Galaxy Player GB70.
When I say it didn't work, I mean that the object appears but the texture is not rendered. Eventually, I changed some of the parameters I use when creating textures with the Rajawali framework and got it to work. Here is what I found out.
The cause was the value that GL_TEXTURE_MIN_FILTER was being set to. Among the following four values
GLES20.GL_LINEAR_MIPMAP_LINEAR
GLES20.GL_NEAREST_MIPMAP_NEAREST
GLES20.GL_LINEAR
GLES20.GL_NEAREST
the texture is only rendered when GL_TEXTURE_MIN_FILTER is not set to a filter that uses mipmaps, i.e. it works when GL_TEXTURE_MIN_FILTER is set to either of the last two.
Now here is what I don't understand and am curious about. When I shrink the texture image to 512x512, the GL_TEXTURE_MIN_FILTER setting does not matter: all four min filter settings work.
So my question is: is there a requirement on the dimensions of the image when using a min filter for the texture? For instance, am I required to use a square image? Could other things, such as the wrap mode or the configuration of the mag filter, be a problem?
Or does it seem like an OpenGL implementation bug on the device?
Good morning, this is a typical example of non-power-of-two textures.
Textures traditionally need power-of-two dimensions for a multitude of reasons; this is a very common mistake, and everybody has fallen into this pitfall at some point, me too.
The fact that non-power-of-two textures work smoothly on some devices/GPUs depends purely on the OpenGL driver implementation: some GPUs support them cleanly, some others don't. I strongly suggest you go for power-of-two textures in order to guarantee correct behavior on all devices.
Last but not least, using non-power-of-two textures can lead to a catastrophic scenario in GPU memory utilization, since most drivers that accept non-power-of-two textures need to rescale them in memory to the next higher power-of-two size. For instance, a 520x520 texture could lead to an actual memory allocation of 1024x1024.
This is something you don't want, because in the real world "size matters", especially on mobile devices.
You can find quite a good explanation in the OpenGL ES 2.0 Programming Guide (the "Gold Book"):
In OpenGL ES 2.0, textures can have non-power-of-two (npot) dimensions. In other words, the width and height do not need to be a power of two. However, OpenGL ES 2.0 does have a restriction on the wrap modes that can be used if the texture dimensions are not power of two. That is, for npot textures, the wrap mode can only be GL_CLAMP_TO_EDGE and the minification filter can only be GL_NEAREST or GL_LINEAR (in other words, not mipmapped). The extension GL_OES_texture_npot relaxes these restrictions and allows wrap modes of GL_REPEAT and GL_MIRRORED_REPEAT and also allows npot textures to be mipmapped with the full set of minification filters.
I suggest you evaluate this book, since it gives quite decent coverage of this topic.
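In practice, keeping an npot texture legal in core ES 2.0 (without GL_OES_texture_npot) looks something like this sketch, assuming android.opengl.GLES20 and an illustrative textureHandle:

    // Core ES 2.0 requires clamp-to-edge wrapping and a non-mipmapped
    // minification filter for npot textures.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);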

Hardware Acceleration pre-Honeycomb

I'm playing around with the Android API with the long-term prospect of developing a 2D game. Normally, I could live with just using the Canvas to draw the sprites for my game. I'd like to be able to perform lots of drawing for visual effects, but it seems that Android versions prior to Honeycomb (3.0, API level 11) don't support hardware acceleration.
I'm not sure what that means exactly, though. I can't bring myself to believe that the drawing is done by the CPU, pixel by pixel!? If I end up having effects like glow, lens effects, etc., I'll be drawing over each pixel quite a few times. Am I right to believe that a typical smartphone CPU will not be able to cope with that at ~30 FPS?
I don't have the luxury of targeting only Android versions >= 3.0, as they constitute 8% of the already not-so-big Android market. Should I take the time to go the OpenGL way (I'm a beginner at OpenGL)? If I do, do you think I'll gain anything by overlaying a GLSurfaceView that takes care of the effects on top of a custom Android view that uses a Canvas for the rest of the drawing? Or is mixing the two a bad idea for some reason?
Oh God, yes. Especially if you're targeting pre-Android-3 devices, going from SurfaceView (with Canvas.drawXxx() calls) to GLSurfaceView works great. Not only do you get faster frames (updates) per second, but memory consumption is A LOT better.
Here are some of the points I've noticed:
If you want to do (multi-)sprite animations, doing them with the images loaded as OpenGL textures and displayed on OpenGL quads gives you a lot more memory headroom. This is because while regular Bitmap objects are capped by the Android per-process memory limit (which is something like 16-48 MB, depending on device and Android version), creating OpenGL textures out of those images (and clearing the images right after) doesn't have this limitation; you're only limited by the total memory on the device, which is A LOT more than 16-48 MB (see the sketch after the next point).
Secondly, but still related: with Android 2 and below, tracking how much memory a Bitmap instance takes is a lot trickier, since those instances are not reported against the Java heap; they are allocated in some other memory space. In short, one less hassle if you use OpenGL.
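A sketch of that texture-upload pattern (android.opengl.GLUtils and GLES20 assumed; bitmap is an already-decoded android.graphics.Bitmap):

    // Upload the bitmap into a GL texture, then free the Java-side copy so it
    // no longer counts against the per-process Bitmap limit.
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle(); // pixels now live in GL memory, not on the Java heap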
Simple animations such as rotating an image become a breeze with OpenGL: you just texture a quad, then rotate it any way you want. The SurfaceView equivalent is to sequentially display different (rotated) versions of the image. OpenGL is better here for both memory consumption and speed.
If you're doing a 2D-like game, using OpenGL's orthographic projection not only spares you a lot of the (in this case useless) hassle of a regular OpenGL perspective projection, but it actually eliminates A LOT of the issues you'd hit with a regular SurfaceView when scaling all your graphical elements so they look the same size on different screen resolutions/proportions. With an ortho projection you effectively create a fixed area of the desired width and height and then have OpenGL project it onto the device screen automatically (a sketch follows).
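A sketch of such a fixed virtual canvas using android.opengl.Matrix (the 800x480 size is illustrative):

    float[] projection = new float[16];
    // Map a fixed 800x480 coordinate space onto whatever screen the device has:
    // arguments are left, right, bottom, top, near, far.
    Matrix.orthoM(projection, 0, 0f, 800f, 0f, 480f, -1f, 1f);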
It goes without saying that simple effects, such as a pulsating light affecting some graphic element, are a lot easier to do with OpenGL (where you just make the light pulsate and everything is lit accordingly) than to simulate with SurfaceView and baked-in sprites.
I actually started a small asteroid-defence-like game with SurfaceView and Canvas and then quickly switched to OpenGL for the reasons mentioned above. Long story short, the game now looks better, and while it initially ran at 16 UPS on a Samsung Teos and 30 UPS on an LG Optimus 2X, it now runs at 33 UPS on the Teos and about 80 UPS on the 2X.
