I am trying to create an analog meter in OpenGL ES 1.x on Android. I have drawn a simple green circle using the midpoint circle algorithm. Now I want to place some numbers around the circle for the meter readings. Since OpenGL has no native text rendering API, I am trying to load a PNG image like this with the readings on it, where the rest of the image is transparent.
What parameters must I pass to glBlendFunc() to achieve this?
I have tried many different combinations, but nothing works.
This is a common problem on Android. You can't use the Android Bitmap class to load textures that have transparent areas. This is because Bitmap follows the Porter-Duff specification for alpha blending, and it optimizes images with per-pixel alpha by storing them in premultiplied format (A, R*A, G*A, B*A). This is great for Porter-Duff but not for OpenGL, which requires a non-premultiplied (ARGB) format. This means that the Bitmap class can only be used with textures that are completely opaque (or have no per-pixel alpha). This article gives the details:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1
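One common workaround is to keep the premultiplied data a Bitmap gives you and pick the blend function to match. A minimal sketch, assuming GL ES 1.x (the method name and labelPng parameter are hypothetical):

import javax.microedition.khronos.opengles.GL10;
import android.graphics.Bitmap;
import android.opengl.GLUtils;

// GLUtils.texImage2D() uploads the Bitmap's premultiplied pixels, so the
// source color is already multiplied by alpha. Blending with GL_ONE
// (instead of GL_SRC_ALPHA) avoids multiplying by alpha a second time.
void loadMeterLabelTexture(GL10 gl, Bitmap labelPng) {
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, labelPng, 0);
    gl.glEnable(GL10.GL_BLEND);
    gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA);
}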
Just try passing the alpha blending values, which are:
glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
I hope it helps.
I don't think alpha PNGs (even on 2.2) are a problem for the Android Bitmap loading algorithm.
I have imported a model (e.g. a teapot) using Rajawali into my scene. What I would like is to label parts of the model (e.g. the lid, body, foot, handle and the spout) using plain Android views, but I have no idea how this could be achieved. Specifically, positioning the labels in the right place seems challenging. The idea is that when I transform my model's position in the scene, the tips of the labels remain correctly positioned.

The Rajawali tutorials show how Android views can be placed on top of the scene here: https://github.com/Rajawali/Rajawali/wiki/Tutorial-08-Adding-User-Interface-Elements. I also understand how, using the transformation matrices, a 3D coordinate on the model can be transformed into a 2D coordinate on the screen, but I have no idea how to determine the exact 3D coordinates on the model itself. The model is exported to OBJ format using Blender, so I assume there is some clever way of determining the coordinates in Blender and exporting them to a separate file, or including them somehow in the OBJ file (not rendering those points, only including them as metadata), but I have no idea how I could do that.

Any ideas are much appreciated! :)
I would use a screenquad, not a view. This is a general GL solution, and will also work with iOS.
You must determine the indices of the desired model vertices. Using the text-rendering approach below, you can just fiddle with them until you hit the right ones.
1. Create a reasonable ARGB bitmap with the same aspect ratio as the screen.
2. Create the screenquad texture using this bitmap.
3. Create a canvas using this bitmap.
The rest happens in onDrawFrame():
4. Clear the canvas using a clear paint.
5. Use the MVP matrix to convert the desired model vertices to canvas coordinates (see the sketch below).
6. Draw your desired text at the canvas coordinates.
7. Update the texture.
Your text will render very precisely at the vertices you specified. The GL thread will double-buffer and loop you back to step 4. Super smooth 3D text animation!
Use double-precision floating-point math to avoid loss of precision during the coordinate conversion, which results in wobbly text. You could even use the z value of the vertex to scale the text. Fancy!
The performance bottleneck is step 7, since the entire bitmap must be copied to GL texture memory every frame. Try to keep the bitmap as small as possible while maintaining the aspect ratio. Maybe let the user toggle the labels.
Note that the copy to GL texture memory is redundant since in OpenGL-ES, GL memory is just regular memory. For compatibility reasons, a redundant chunk of regular memory is reserved to artificially enforce the copy.
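A minimal sketch of steps 4-7, assuming a GL ES 2.0 renderer; labelBitmap, labelCanvas, labelTextureId, mvpMatrix and labelVertex are hypothetical fields you would have set up in steps 1-3:

import android.graphics.Paint;
import android.graphics.PorterDuff;
import android.graphics.PorterDuffXfermode;
import android.opengl.GLES20;
import android.opengl.GLUtils;
import android.opengl.Matrix;

private final float[] clip = new float[4];
private final Paint clearPaint = new Paint();
{ clearPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR)); }
private final Paint textPaint = new Paint(Paint.ANTI_ALIAS_FLAG);

public void onDrawFrame(javax.microedition.khronos.opengles.GL10 unused) {
    // ... draw the 3D scene first ...

    // Step 4: clear the label bitmap.
    labelCanvas.drawPaint(clearPaint);

    // Step 5: project a model vertex (x, y, z, 1) to canvas coordinates.
    Matrix.multiplyMV(clip, 0, mvpMatrix, 0, labelVertex, 0);
    float ndcX = clip[0] / clip[3];                          // perspective divide
    float ndcY = clip[1] / clip[3];
    float x = (ndcX * 0.5f + 0.5f) * labelBitmap.getWidth();
    float y = (1f - (ndcY * 0.5f + 0.5f)) * labelBitmap.getHeight(); // flip Y

    // Step 6: draw the label text at the projected position.
    labelCanvas.drawText("lid", x, y, textPaint);

    // Step 7: copy the bitmap into the screenquad texture.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, labelTextureId);
    GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, labelBitmap);

    // ... then draw the screenquad with blending enabled ...
}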
I can't seem to get blending working properly in OpenGL ES 2 on Android. I have textures with alpha channels that I want rendered with the corresponding alpha. The blending appears additive even when the top drawn object has an alpha of 1.0. In my fragment shader I hard-coded a value of 1.0 for the alpha, and realized it seems to be using the color, not the alpha values.
For example, the result looks additive (first screenshot) instead of properly blended (second screenshot).
I am drawing in the correct order; in this example, the blue should be fully opaque on top of the gray square. I have tried multiple blending modes (one, one), (alpha, alpha), etc., multiple draw orders, and drawing with and without the depth test. I have tried random blend modes that yield incorrect results, so the blending does change when I set it.
I believe the problem is that OpenGL is blending the colors additively. (Alpha, Alpha) makes sense to me, and when I explicitly set alpha to 1.0 in the shader, I would expect to get a square (the actual shape the texture is projected on) with a blue circle on it. That this does not happen puzzles me; I guess I don't understand the sfactor and dfactor blending parameters well enough.
Are you using the Android Bitmap class to load your textures?
Using GLUtils.texImage2D() to load alpha textures from a Bitmap on Android is broken. This is a problem that Google really should document better. The problem is that the Bitmap class converts all images into pre-multiplied format, but that does not work with OpenGL ES unless the image is completely opaque.
This article gives more detail on the issue.
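If that is the cause, one workaround, sketched below under the assumption that you can target API 19 or later, is to ask BitmapFactory not to premultiply (alternatively, keep the premultiplied data and blend with GL_ONE instead of GL_SRC_ALPHA). The method name and R.drawable.circle are placeholders:

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLUtils;

void loadStraightAlphaTexture(Resources res) {
    // API 19+: decode without premultiplying, so the uploaded texture
    // keeps straight (non-premultiplied) alpha.
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inPremultiplied = false;
    Bitmap bmp = BitmapFactory.decodeResource(res, R.drawable.circle, opts);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);

    // The usual straight-alpha blend function now behaves as expected.
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
}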
I'm trying to get my point sprites to display with the correct opacity.
Originally, I was getting my sprite texture on a black square.
So, I added the following to my fragment shader:
"if(color.a < 0.5) "+
"discard;"+
Now, this does seem to work, in that my sprite displays without the black background. However, my texture itself is partially transparent, and it isn't showing this partial transparency; it appears solid. It's a bit difficult to explain, but I hope you understand what I mean. If I draw the same texture using a Canvas/SurfaceView, it displays correctly.
Basically I'm trying to get my textures to display in their original format (i.e. as they do in the software in which they were created: Gimp, Photoshop, etc.).
Would appreciate any help - thanks
First, make sure your textures are loaded from transparent PNGs through a Bitmap with either the ARGB_8888 or ARGB_4444 configuration, so you don't lose the alpha channel.
Second you need to enable GL_BLEND with the glEnable() command. On Android you will write it like this: GLES20.glEnable(GLES20.GL_BLEND);. This allows you to blend the already drawn color with the new color, achieving a transparent look.
The blend function should be set to GL_ONE, GL_ONE_MINUS_SRC_ALPHA for the premultiplied alpha an Android Bitmap gives you: glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); or to GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA for straight (non-premultiplied) alpha: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Finally, you do not need to use discard; just set gl_FragColor to a 4-component vector with the alpha in the fourth channel (which is what you get when reading a texture from a sampler). For example, you could just do gl_FragColor = texture2D(sampler, texCoord); if you wanted to.
You will most likely have to turn off depth-testing with glDisable(GL_DEPTH_TEST) to avoid problems with unsorted triangles.
You can read a little bit more about transparency here.
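Putting the pieces together, a minimal sketch for point sprites, assuming a GL ES 2.0 renderer (sTexture is a hypothetical uniform name; point sprites sample the texture via gl_PointCoord):

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLES20;

static final String FRAGMENT_SHADER =
    "precision mediump float;\n" +
    "uniform sampler2D sTexture;\n" +
    "void main() {\n" +
    // No discard: the texel's own alpha drives the blending.
    "  gl_FragColor = texture2D(sTexture, gl_PointCoord);\n" +
    "}\n";

public void onSurfaceCreated(GL10 unused, EGLConfig config) {
    GLES20.glEnable(GLES20.GL_BLEND);
    // GLUtils.texImage2D() uploads premultiplied data from a Bitmap,
    // so blend with GL_ONE rather than GL_SRC_ALPHA:
    GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
    GLES20.glDisable(GLES20.GL_DEPTH_TEST); // avoid unsorted-geometry artifacts
}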
My Android app needs to display a full-screen bitmap as a background, then on top of that display some dynamic 3D graphics using OpenGL ES (either 1.1 or 2.0; not decided yet). The background image is a snapshot of a WebView component in the same app, so its dimensions already fit the screen perfectly.
I'm new to OpenGL, but I know that the regular way to display a bitmap involves scaling it into a POT texture (glTexImage2D), configuring the matrices, creating some vertices for the rectangle and displaying that with glDrawArrays. That seems to be a lot of extra work (with loss of quality when down-scaling the image to a POT size) when all that's needed is just to draw a bitmap to the screen at 1:1 scale.
The "desktop" GL has glDrawPixels(), which seems to do exactly what's needed in this situation, but that's apparently missing in GLES. Is there any way to copy pixels to the screen buffer in GLES, circumventing the 3D pipeline? Or is there any way to draw OpenGL graphics on top of a "flat" background drawn by regular Android means? Or to make a translucent GLView (there is RSTextureView for RenderScript-based display, but I couldn't find an equivalent for GL)?
but I know that the regular way to display a bitmap involves scaling it into a POT texture (glTexImage2D)
Then your knowledge is outdated. Modern OpenGL (version 2 and later) is fine with arbitrary image dimensions for its textures.
The "desktop" GL has glDrawPixels(), which seems to do exactly what's needed in this situation, but that's apparently missing in GLES.
Well, modern "desktop" OpenGL, namely version 3 core and later, doesn't have glDrawPixels either.
However appealing this function is/was, it offers only poor performance and has so many caveats that it's rarely used and its use is avoided whenever possible.
Just upload your unscaled image into a texture, disable mipmapping and draw it onto a fullscreen quad.
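A minimal sketch of that approach, assuming GL ES 2.0 (which allows NPOT texture dimensions as long as filtering is non-mipmapped and wrapping is CLAMP_TO_EDGE); loadBackgroundTexture is a hypothetical helper:

import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLUtils;

int loadBackgroundTexture(Bitmap snapshot) {
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);

    // No mipmaps: NPOT textures in core ES 2.0 only support these modes.
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

    // Upload the snapshot unscaled, 1:1.
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, snapshot, 0);
    return tex[0];
}

// Fullscreen quad in normalized device coordinates (triangle strip),
// so no matrices are needed in the vertex shader:
static final float[] FULLSCREEN_QUAD = {
    //  x,   y,   u,  v
    -1f, -1f,  0f, 1f,
     1f, -1f,  1f, 1f,
    -1f,  1f,  0f, 0f,
     1f,  1f,  1f, 0f,
};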
I would like to have a better understanding of how the components of Android's (2D) Canvas drawing pipeline fit together.
For example, how do XferMode, Shader, MaskFilter and ColorFilter interact? The reference docs for these classes are pretty sparse and the docs for Canvas and Paint don't really add any useful explanation.
It's also not entirely clear to me how drawing operations that have intrinsic colors (eg: drawBitmap, versus the "vector" primitives like drawRect) fit into all of this -- do they always ignore the Paint's color and use their intrinsic color instead?
I was also surprised by the fact that one can do something like this:
Paint eraser = new Paint();
eraser.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
canvas.drawOval(rectF, eraser);
This erases an oval. Before I noticed this my mental-model was that drawing to a canvas (conceptually) draws to a separate "layer" and then that layer is composed with the Canvas's Bitmap using the Paint's transfer mode. If it were as simple as that then the above code would erase the entire Bitmap (within the clipping region) as CLEAR always sets the color (and alpha) to 0 regardless of the source's alpha. So this implies that there's an additional sort of masking going on to constrain the erasing to an oval.
I did find the API demos but each demo works "in a vacuum" and doesn't show how the thing it focusses on (eg: XferModes) interacts with other stuff (eg: ColorFilters).
With enough time and effort I could empirically figure out how these pieces relate or go decipher the source, but I'm hoping that someone else has already worked this out, or better yet that there's some actual documentation of the pipeline/drawing-model that I missed.
This question was inspired by seeing the code in this answer to another SO question.
Update
While looking around for some documentation it occurred to me that since much of the stuff I'm interested in here seems to be a pretty thin veneer on top of Skia, maybe there's some Skia documentation that would be helpful. The best thing I could find is the documentation for SkPaint, which says:
There are 6 types of effects that can be assigned to a paint:
SkPathEffect - modifications to the geometry (path) before it generates an alpha mask (e.g. dashing)
SkRasterizer - composing custom mask layers (e.g. shadows)
SkMaskFilter - modifications to the alpha mask before it is colorized and drawn (e.g. blur, emboss)
SkShader - e.g. gradients (linear, radial, sweep), bitmap patterns (clamp, repeat, mirror)
SkColorFilter - modify the source color(s) before applying the xfermode (e.g. color matrix)
SkXfermode - e.g. porter-duff transfermodes, blend modes
It isn't stated explicitly, but I'm guessing that the order of the effects here is the order they appear in the pipeline.
Like Romain Guy said, "This question is difficult to answer on StackOverflow". There wasn't really any complete documentation, and complete documentation would be kind of large to include here.
I ended up reading through the source and doing a bunch of experiments. I took notes along the way, and ended up turning them into a document which you can see here:
Android's 2D Canvas Rendering Pipeline
as well as this diagram: http://imgur.com/0X5Yqod
It's "unofficial", obviously, so the normal caveats apply.
Based on the above, here are answers to some of the "sub-questions":
It's also not entirely clear to me how drawing operations that have intrinsic colors (eg: drawBitmap, versus the "vector" primitives like drawRect) fit into all of this -- do they always ignore the Paint's color and use their intrinsic color instead?
The "source colors" come from the Shader. In drawBitmap, the Shader is temporarily replaced by a BitmapShader if a non-ALPHA_8 Bitmap is used. In other cases, if no Shader is specified, a Shader that just generates a solid color (the Paint's color) is used.
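A hypothetical snippet to illustrate (canvas, bitmap and rectF are assumed to exist; this is not from the original answer):

import android.graphics.Bitmap;
import android.graphics.BitmapShader;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.RectF;
import android.graphics.Shader;

void demo(Canvas canvas, Bitmap bitmap, RectF rectF) {
    Paint paint = new Paint();
    // With a BitmapShader installed, source colors come from the bitmap
    // (effectively what drawBitmap does for non-ALPHA_8 bitmaps);
    // the Paint's color is ignored.
    paint.setShader(new BitmapShader(bitmap,
            Shader.TileMode.CLAMP, Shader.TileMode.CLAMP));
    paint.setColor(Color.RED);      // no visible effect here
    canvas.drawRect(rectF, paint);

    // Without a Shader, the solid Paint color is the source color.
    paint.setShader(null);
    canvas.drawRect(rectF, paint);  // draws red
}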
I was also surprised by the fact that one can do something like this:
Paint eraser = new Paint();
eraser.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
canvas.drawOval(rectF, eraser);
This erases an oval. Before I noticed this my mental-model was that drawing to a canvas (conceptually) draws to a separate "layer" and then that layer is composed with the Canvas's Bitmap using the Paint's transfer mode. If it were as simple as that then the above code would erase the entire Bitmap (within the clipping region) as CLEAR always sets the color (and alpha) to 0 regardless of the source's alpha. So this implies that there's an additional sort of masking going on to constrain the erasing to an oval.
The XferMode applies to the "source colors" (from the Shader) and the "destination colors" (from the Canvas's Bitmap). The result is then blended with the destination using the mask computed in Rasterization. See the Transfer phase in the above document for more details.
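(My paraphrase of that Transfer phase, writing M for the per-pixel mask value produced by rasterization: result ≈ M * Xfermode(src, dst) + (1 - M) * dst. Inside the oval M is 1 and CLEAR takes effect; outside it M is 0 and the destination is left untouched.)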
This question is difficult to answer on StackOverflow. Before I get started however, note that shapes (drawRect() for instance) do NOT have an intrinsic color. The color information always comes from the Paint object.
This erases an oval. Before I noticed this my mental-model was that drawing to a canvas (conceptually) draws to a separate "layer" and then that layer is composed with the Canvas's Bitmap using the Paint's transfer mode. If it were as simple as that then the above code would erase the entire Bitmap (within the clipping region) as CLEAR always sets the color (and alpha) to 0 regardless of the source's alpha. So this implies that there's an additional sort of masking going on to constrain the erasing to an oval.
Your model is a bit off. The oval is not drawn into a separate layer (unless you call Canvas.saveLayer()), it is drawn directly onto the Canvas' backing bitmap. The Paint's transfer mode is applied to every pixel drawn by the primitive. In this case, only the result of the rasterization of an oval affects the Bitmap. There's no special masking going on, the oval itself is the mask.
Anyhow, here is a simplified view of the pipeline:
Primitive (rect, oval, path, etc.)
PathEffect
Rasterization
MaskFilter
Color/Shader/ColorFilter
Xfermode
(I just saw your update and yes, what you found describes the stages of the pipeline in order.)
The pipeline becomes just a little bit more complicated when using layers (Canvas.saveLayer()), as the pipeline doubles. You first go through the pipeline to render your primitive(s) inside an offscreen bitmap (the layer), and the offscreen bitmap is then applied to the Canvas by going through the pipeline.
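A hypothetical snippet contrasting the two cases (not from the original answer; rectF is assumed to exist):

import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.PorterDuff;
import android.graphics.PorterDuffXfermode;
import android.graphics.RectF;

void demo(Canvas canvas, RectF rectF) {
    Paint clearPaint = new Paint();
    clearPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));

    // Direct draw: CLEAR is applied only where the oval rasterizes,
    // so just the oval is erased from the backing bitmap.
    canvas.drawOval(rectF, clearPaint);

    // With a layer, the pipeline runs twice: the oval is first drawn into
    // a transparent offscreen bitmap, and on restore() the whole layer is
    // composited onto the canvas using the layer paint's xfermode. With
    // CLEAR, that erases the entire layer bounds, not just the oval.
    canvas.saveLayer(null, clearPaint, Canvas.ALL_SAVE_FLAG);
    canvas.drawOval(rectF, new Paint());
    canvas.restore();
}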