I am drawing some Bitmaps to a canvas. Some (most) of these bitmaps utilize the alpha channel, and transparency/translucency is critical for the image to look correct. This is necessary due to some image manipulation I perform throughout the Activity.
Eventually the user is finished with their task, and I take the canvas and save it to a PNG using this method:
Bitmap.createBitmap(this.getWidth(), this.getHeight(), Bitmap.Config.ARGB_8888);
At this point I no longer make any modifications to the canvas/image but I do display the image to a Canvas in another Activity. I want it to look exactly the same as in the previous Activity, so does it matter if I use Bitmap.Config.RGB_565 instead or will I lose information?
You will definitely lose information if you use Bitmap.Config.RGB_565. It may not be discernible on a phone display, but you're going from 24 bits of color information (not counting alpha) to 16 bits when you go from ARGB_8888 to RGB_565. Instead of 256 distinct red, green, and blue values in ARGB_8888, there are only 64 distinct green values and 32 distinct red and blue values in RGB_565. This may cause banding or other quantization artifacts in your RGB_565 image. More importantly for your case, RGB_565 has no alpha channel at all, so any transparency or translucency would be lost outright.
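If keeping the translucency is the point, stay with ARGB_8888 end to end and save with the PNG format, which is lossless and stores the alpha channel. A minimal sketch (width, height, and outputFile are assumed, and error handling is omitted):

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
// ... draw the layered, translucent bitmaps here ...
FileOutputStream out = new FileOutputStream(outputFile);
bitmap.compress(Bitmap.CompressFormat.PNG, 100, out); // PNG keeps the full ARGB data
out.close();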
I am trying to create an analog meter in OpenGL ES 1.x on Android. I have created a simple green circle using the midpoint circle algorithm. Now I want to place some numbers around the circle for the meter readings. Since OpenGL has no native text-rendering API, I am trying to load a PNG image like this with the readings on it, with the rest of the image transparent.
Now, what parameters must I pass to the glBlendFunc() function to achieve this?
I have tried many different combinations but nothing works.
This is a common problem on Android. You can't use the Android Bitmap class to load textures that have transparent areas in them. This is because Bitmap follows the Porter-Duff specification for alpha blending, and Bitmap optimizes images with per-pixel alpha by storing them in the premultiplied format (A, R*A, G*A, B*A). For example, a 50%-translucent pure-red pixel (A=128, R=255, G=0, B=0) is stored premultiplied as (128, 128, 0, 0). This is great for Porter-Duff but not for OpenGL, which requires a non-premultiplied (ARGB) format. This means that the Bitmap class can only be used with textures that are completely opaque (or have no per-pixel alpha). This article has the details:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1
Just try passing the standard alpha-blending values:
glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
I hope it helps.
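One thing that is easy to miss: blending is disabled by default, so the blend function has no effect until you enable it. A minimal setup sketch (assuming a GL10 instance named gl):

gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
// ... bind the PNG texture and draw the quad ...
// If the texture was uploaded premultiplied (e.g. via GLUtils.texImage2D),
// GL_ONE / GL_ONE_MINUS_SRC_ALPHA may be the correct pair instead.
gl.glDisable(GL10.GL_BLEND);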
I don't think alpha PNGs are a problem for the Android Bitmap loading algorithm, even at 2.2.
I have read several internet articles about drawing fluids. They refer to taking a bitmap, blurring it and then applying a threshold. From what I can determine it looks like it might be some type of color replacement. Is that true?
I am not seeing any Android Bitmap or Paint method called "threshold". So my question is: "What is a bitmap threshold?" and/or "Does Android have an equivalent function?"
I think I understand what you are talking about. Imagine an image with several circles that are close to each other (but not necessarily touching). When the image gets blurred, the blurred parts of the new image may touch, merge, and generally look like an amorphous blob of fluid. When you threshold the image, you effectively choose a brightness value below which all image data is discarded.
So, for example, if you wanted to threshold the image at 50%, all RGB pixel values that are greater than 50% will be kept. All others are discarded. The threshold function in this case would sum the Red, Green, and Blue colors and divide by 3. If the value is greater than 0xFF/2 the pixel is kept.
Adjusting how much the image gets blurred and the threshold level will make the image look more or less connected.
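Android has no built-in "threshold" method on Bitmap or Paint, but you can implement one with getPixels()/setPixels(). A minimal sketch of the 50% rule described above (the method name and cutoff are illustrative):

public static Bitmap threshold(Bitmap src)
{
    Bitmap out = src.copy(Bitmap.Config.ARGB_8888, true);
    int width = out.getWidth();
    int height = out.getHeight();
    int[] pixels = new int[width * height];
    out.getPixels(pixels, 0, width, 0, 0, width, height);
    for (int i = 0; i < pixels.length; i++)
    {
        int c = pixels[i];
        // Average the red, green, and blue channels.
        int avg = (Color.red(c) + Color.green(c) + Color.blue(c)) / 3;
        // Keep pixels above 0xFF/2; discard the rest.
        pixels[i] = (avg > 0xFF / 2) ? c : Color.TRANSPARENT;
    }
    out.setPixels(pixels, 0, width, 0, 0, width, height);
    return out;
}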
If you always have to draw the same rectangle, is it faster to do it with a static bitmap or with canvas.drawRect()?
For this example, there are four layered rectangles. So a border with a fill color, and then a border between a middle color and the fill color.
So: four paint.setColor() calls and four canvas.drawRect() calls, or one canvas.drawBitmap()?
I strongly recommend drawRect().
Bitmaps take up a huge chunk of memory and can lead to out-of-memory errors if not used correctly.
From the Android documentation:
Bitmaps take up a lot of memory, especially for rich images like photographs. For example, the camera on the Galaxy Nexus takes photos up to 2592x1936 pixels (5 megapixels). If the bitmap configuration used is ARGB_8888 (the default from Android 2.3 onward) then loading this image into memory takes about 19MB of memory (2592*1936*4 bytes), immediately exhausting the per-app limit on some devices.
To prevent headaches and unexpected crashes, use drawRect().
If you are doing these four draws on a regular basis for different objects, consider writing a method that does all four for you, so you avoid massive repetition.
For example:
public void drawMyRect(Canvas canvas, int x, int y, Paint paint)
{
    canvas.drawRect(x, y, x + 15, y + 40, paint);
    // Draw its border lines etc.
}
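For instance, a hypothetical expansion covering the four layered rectangles from the question (the colors and the 2-pixel border width are illustrative, and a single reused Paint field is assumed):

private final Paint paint = new Paint();

public void drawLayeredRect(Canvas canvas, float left, float top, float right, float bottom)
{
    paint.setColor(Color.BLACK);  // outer border
    canvas.drawRect(left, top, right, bottom, paint);
    paint.setColor(Color.DKGRAY); // middle color
    canvas.drawRect(left + 2, top + 2, right - 2, bottom - 2, paint);
    paint.setColor(Color.BLACK);  // border between middle color and fill
    canvas.drawRect(left + 4, top + 4, right - 4, bottom - 4, paint);
    paint.setColor(Color.WHITE);  // fill color
    canvas.drawRect(left + 6, top + 6, right - 6, bottom - 6, paint);
}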
Alternatively, if you do go for drawing a bitmap (it does have advantages):
See this epic link from Android on how to properly use Bitmaps.
The performance difference is probably negligible. The bitmap will use more memory; the canvas draw calls will use slightly more CPU. You can probably use a ShapeDrawable if you want to reduce the calls without the overhead of a bitmap.
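If you do try the ShapeDrawable route, a minimal sketch (the bounds and color are illustrative) would be:

ShapeDrawable rect = new ShapeDrawable(new RectShape());
rect.getPaint().setColor(Color.BLUE);
rect.setBounds(x, y, x + 15, y + 40); // set once, then just draw() each frame
rect.draw(canvas);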
I would like to have a better understanding of how the components of Android's (2D) Canvas drawing pipeline fit together.
For example, how do XferMode, Shader, MaskFilter and ColorFilter interact? The reference docs for these classes are pretty sparse and the docs for Canvas and Paint don't really add any useful explanation.
It's also not entirely clear to me how drawing operations that have intrinsic colors (eg: drawBitmap, versus the "vector" primitives like drawRect) fit into all of this -- do they always ignore the Paint's color and use their intrinsic color instead?
I was also surprised by the fact that one can do something like this:
Paint eraser = new Paint();
eraser.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
canvas.drawOval(rectF, eraser);
This erases an oval. Before I noticed this my mental-model was that drawing to a canvas (conceptually) draws to a separate "layer" and then that layer is composed with the Canvas's Bitmap using the Paint's transfer mode. If it were as simple as that then the above code would erase the entire Bitmap (within the clipping region) as CLEAR always sets the color (and alpha) to 0 regardless of the source's alpha. So this implies that there's an additional sort of masking going on to constrain the erasing to an oval.
I did find the API demos but each demo works "in a vacuum" and doesn't show how the thing it focuses on (eg: XferModes) interacts with other stuff (eg: ColorFilters).
With enough time and effort I could empirically figure out how these pieces relate or go decipher the source, but I'm hoping that someone else has already worked this out, or better yet that there's some actual documentation of the pipeline/drawing-model that I missed.
This question was inspired by seeing the code in this answer to another SO question.
Update
While looking around for some documentation it occurred to me that since much of the stuff I'm interested in here seems to be a pretty thin veneer on top of skia, maybe there's some skia documentation that would be helpful. The best thing I could find is the documentation for SkPaint which says:
There are 6 types of effects that can be assigned to a paint:
SkPathEffect - modifications to the geometry (path) before it generates an alpha mask (e.g. dashing)
SkRasterizer - composing custom mask layers (e.g. shadows)
SkMaskFilter - modifications to the alpha mask before it is colorized and drawn (e.g. blur, emboss)
SkShader - e.g. gradients (linear, radial, sweep), bitmap patterns (clamp, repeat, mirror)
SkColorFilter - modify the source color(s) before applying the xfermode (e.g. color matrix)
SkXfermode - e.g. porter-duff transfermodes, blend modes
It isn't stated explicitly, but I'm guessing that the order of the effects here is the order they appear in the pipeline.
Like Romain Guy said, "This question is difficult to answer on StackOverflow". There wasn't really any complete documentation, and complete documentation would be kind of large to include here.
I ended up reading through the source and doing a bunch of experiments. I took notes along the way, and ended up turning them into a document which you can see here:
Android's 2D Canvas Rendering Pipeline
as well as this diagram:
It's "unofficial", obviously, so the normal caveats apply.
Based on the above, here are answers to some of the "sub-questions":
It's also not entirely clear to me how drawing operations that have intrinsic colors (eg: drawBitmap, versus the "vector" primitives like drawRect) fit into all of this -- do they always ignore the Paint's color and use their intrinsic color instead?
The "source colors" come from the Shader. In drawBitmap the Shader is temporarily replaced by a BitmapShader if a non-ALPHA_8 Bitmap is used. In other cases, if no Shader is specified a Shader that just generates a solid color, the Paint's color, is used.
I was also surprised by the fact that one can do something like this:
Paint eraser = new Paint();
eraser.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
canvas.drawOval(rectF, eraser);
This erases an oval. Before I noticed this my mental-model was that drawing to a canvas (conceptually) draws to a separate "layer" and then that layer is composed with the Canvas's Bitmap using the Paint's transfer mode. If it were as simple as that then the above code would erase the entire Bitmap (within the clipping region) as CLEAR always sets the color (and alpha) to 0 regardless of the source's alpha. So this implies that there's an additional sort of masking going on to constrain the erasing to an oval.
The XferMode applies to the "source colors" (from the Shader) and the "destination colors" (from the Canvas's Bitmap). The result is then blended with the destination using the mask computed in Rasterization. See the Transfer phase in the above document for more details.
This question is difficult to answer on StackOverflow. Before I get started however, note that shapes (drawRect() for instance) do NOT have an intrinsic color. The color information always comes from the Paint object.
This erases an oval. Before I noticed this my mental-model was that drawing to a canvas (conceptually) draws to a separate "layer" and then that layer is composed with the Canvas's Bitmap using the Paint's transfer mode. If it were as simple as that then the above code would erase the entire Bitmap (within the clipping region) as CLEAR always sets the color (and alpha) to 0 regardless of the source's alpha. So this implies that there's an additional sort of masking going on to constrain the erasing to an oval.
Your model is a bit off. The oval is not drawn into a separate layer (unless you call Canvas.saveLayer()), it is drawn directly onto the Canvas' backing bitmap. The Paint's transfer mode is applied to every pixel drawn by the primitive. In this case, only the result of the rasterization of an oval affects the Bitmap. There's no special masking going on, the oval itself is the mask.
Anyhow, here is a simplified view of the pipeline:
Primitive (rect, oval, path, etc.)
PathEffect
Rasterization
MaskFilter
Color/Shader/ColorFilter
Xfermode
(I just saw your update and yes, what you found describes the stages of the pipeline in order.)
The pipeline becomes just a little bit more complicated when using layers (Canvas.saveLayer()), as the pipeline doubles. You first go through the pipeline to render your primitive(s) inside an offscreen bitmap (the layer), and the offscreen bitmap is then applied to the Canvas by going through the pipeline.
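A hypothetical illustration of that doubling, reusing the oval-erasing example from the question (rectF is assumed):

int checkpoint = canvas.saveLayer(null, null, Canvas.ALL_SAVE_FLAG);
Paint eraser = new Paint();
eraser.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
canvas.drawOval(rectF, eraser);    // first pass: clears pixels in the layer only
canvas.restoreToCount(checkpoint); // second pass: the layer is composited onto the canvas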
SkPathEffect - modifications to the geometry (path) before it generates an alpha mask (e.g. dashing)
SkRasterizer - composing custom mask layers (e.g. shadows)
SkMaskFilter - modifications to the alpha mask before it is colorized and drawn (e.g. blur)
SkShader - e.g. gradients (linear, radial, sweep), bitmap patterns (clamp, repeat, mirror)
SkColorFilter - modify the source color(s) before applying the xfermode (e.g. color matrix)
SkXfermode - e.g. porter-duff transfermodes, blend modes
http://imgur.com/0X5Yqod
I'm developing an android app and I'm facing a weird issue.
I'm doing some image processing on a SurfaceView. I'm drawing the processed image using a canvas and the following method:
canvas.drawBitmap(image, x, y, paint)
My SurfaceView has a colored background (#3500ffff, kind of a very dark green) and once the image is drawn, I can notice that its original colors are not preserved. It has a very slight dark-green tint, as if the bitmap's alpha had been changed.
Did anyone already encounter this issue? Would you have an idea on how to fix this?
This would happen with a 16-bit destination. 16-bit buffers encode pixels in 565 format, which gives you higher precision in the green channel and sometimes results in greenish tints. A 32-bit destination/bitmap would solve this issue.
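If it helps, one way to request a 32-bit surface for a SurfaceView (a sketch, assuming it runs during the view's setup):

// Ask for a 32-bit pixel format so no 565 quantization happens.
getHolder().setFormat(PixelFormat.RGBA_8888);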
Presuming that your image is not transparent: how did you define paint? It should not be a transparent colour or use some special effect. Try using null for the paint.
The other thing is: what are you drawing first, the image or the background? Just wondering if your drawing order is correct.
If you set your surface to be non-transparent, will the image change colour then?
Another thing I noticed, which I think is connected with event synchronisation, is that sometimes drawing on the surface creates a semi-transparent sprite when moving the finger very fast over the screen, which initialises drawing.