Draw quality of Single Bitmap vs Multiple Bitmaps on Canvas (Android)

I have a set of small images. If I draw these images individually on the canvas, the draw quality is significantly lower than when I first draw them onto a single screen-sized bitmap and then draw that bitmap on the canvas. In particular, the lines get distorted (see the right-hand side of the comparison image).
As shown in the code below, the canvas also supports zooming (scaling). The issue occurs at small scale factors.
The question is how to improve the draw quality of multiple small images to the standard of the single large image.
This is the code that draws multiple bitmaps on the canvas:
canvas.scale(game.mScaleFactor, game.mScaleFactor);
canvas.translate(game.mPosX, game.mPosY);
for (int i = 0; i < game.clusters.size(); i++) {
    Cluster cluster = game.clusters.get(i);
    canvas.drawBitmap(cluster.Picture, cluster.left,
            cluster.top, canvasPaint);
}
This is the code for the single bitmap; game.board is a screen-sized image onto which all the small bitmaps have been drawn.
canvas.scale(game.mScaleFactor, game.mScaleFactor);
canvas.translate(game.mPosX, game.mPosY);
canvas.drawBitmap(game.board, matrix, canvasPaint);
The paint has the following properties set. All bitmaps are Bitmap.Config.ARGB_8888.
canvasPaint.setAntiAlias(true);
canvasPaint.setFilterBitmap(true);
canvasPaint.setDither(true);

I can think of a couple of approaches, depending on how you are drawing the borders of the puzzle pieces.
The problem you are having is that when the single image is scaled, the lines are filtered together with the rest of the image and look smooth (the blending is correct). When the puzzle is drawn per piece, the filtering can only read pixels belonging to that piece, so at the edges it blends against the piece's own border pixels instead of the neighbouring pieces.
Approach 1
The first approach (an easy one) is to render to an FBO (render-to-texture) at the logical size of the game and then scale the whole texture to the canvas with a fullscreen quad. This gives the same result as the single bitmap, because the pixel blending now involves the neighbouring pieces.
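Since your question is drawing with Canvas rather than GL, here is a minimal sketch of the same idea using an offscreen Bitmap in place of the FBO; logicalWidth and logicalHeight are assumed names for the unscaled size of your board:
Bitmap offscreen = Bitmap.createBitmap(logicalWidth, logicalHeight, Bitmap.Config.ARGB_8888);
Canvas off = new Canvas(offscreen);
for (int i = 0; i < game.clusters.size(); i++) {
    Cluster cluster = game.clusters.get(i);
    off.drawBitmap(cluster.Picture, cluster.left, cluster.top, canvasPaint);   // pieces composed 1:1, unscaled
}

// Scale and translate only once, so the filter can blend neighbouring pieces together.
canvas.save();
canvas.scale(game.mScaleFactor, game.mScaleFactor);
canvas.translate(game.mPosX, game.mPosY);
canvas.drawBitmap(offscreen, 0, 0, canvasPaint);
canvas.restore();
The trade-off is the memory for one extra board-sized bitmap, plus redrawing it whenever a piece moves.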
Approach 2
Use bleeding to solve the issue. When you cut out a puzzle piece, include the overlapping section of the adjacent pieces. Instead of setting the discarded pixels to zero, set only their alpha to zero. This makes the filtering pick up the same colour values as if the piece were still part of the single image. Also, double the lines for the border, but set the alpha of the outside border to zero. A sketch of the cutting step follows.
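A rough sketch of that cutting step with the Android Bitmap API, assuming each piece is cut from the full board image and insideMask() is a hypothetical lookup for the piece's shape:
Bitmap piece = Bitmap.createBitmap(board, left, top, pieceW, pieceH)
        .copy(Bitmap.Config.ARGB_8888, true);            // mutable copy so setPixels() works
int[] px = new int[pieceW * pieceH];
piece.getPixels(px, 0, pieceW, 0, 0, pieceW, pieceH);
for (int i = 0; i < px.length; i++) {
    if (!insideMask(i)) {                                // hypothetical mask lookup for this piece's shape
        px[i] &= 0x00FFFFFF;                             // alpha -> 0, RGB of the neighbouring piece kept
    }
}
piece.setPixels(px, 0, pieceW, 0, 0, pieceW, pieceH);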
Approach 3
This last one is the most complicated, but it will stay smooth at any scale factor.
Turn the alpha channel of your puzzle piece into a Signed Distance Field and render using a specialized shader that will smooth the output at any distance. Also, SDF allows you to draw the outline with a shader during rendering, and the outline will be smooth.
In fact, your SDF can be a separate texture and you can load it into the second texture stage. Bind the source image as tex unit 0, the sdf puzzle piece cutout(s) on tex unit 1 and use the SDF shader to determine the alpha from the SDF and the color from tex0, then mix in the outline as calculated from the SDF.
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
https://github.com/Chlumsky/msdfgen
http://catlikecoding.com/sdf-toolkit/docs/texture-generator/
An SDF is generated from a boolean map: your puzzle piece cutouts will need to start as monochrome cutouts and then be turned into SDFs (offline) using one of the tools listed above. Valve and libGDX have example SDF shaders, as do the tools listed above.
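For illustration only, a rough sketch of the SDF alpha step as a GLES 2.0 fragment shader embedded in a Java string (the usual Android pattern); u_texture, u_sdf and the fixed smoothing constant are assumptions, and a real version would also mix in the outline colour from the SDF:
private static final String SDF_FRAGMENT_SHADER =
        "precision mediump float;\n"
        + "uniform sampler2D u_texture;   // source image, tex unit 0\n"
        + "uniform sampler2D u_sdf;       // puzzle-piece cutout SDF, tex unit 1\n"
        + "varying vec2 v_texCoord;\n"
        + "const float smoothing = 0.02;  // could instead be derived from the current scale factor\n"
        + "void main() {\n"
        + "    float dist = texture2D(u_sdf, v_texCoord).a;\n"
        + "    float alpha = smoothstep(0.5 - smoothing, 0.5 + smoothing, dist);\n"
        + "    vec4 color = texture2D(u_texture, v_texCoord);\n"
        + "    gl_FragColor = vec4(color.rgb, color.a * alpha);\n"
        + "}\n";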

Related

Selecting rotated rectangular zones of a bitmap and drawing them unrotated efficiently

I need some advice regarding efficient use of canvas and matrices
I have a source bitmap "B0" loaded in memory, which is WxH
I have a bitmap B1, onto which I draw with a canvas.
There is a rotated (with an arbitrary angle) rectangular portion
(w_p*h_p) of B0. I need to get this portion and draw it, once unrotated, onto B1
I would like to do it with normal Views, canvas and matrices; not SurfaceViews, not OpenGL.
An "already working" approach is:
Rotate the bitmap B0 to compensate for the selected zone rotation --> we get B0_r
Calculate the translated rectangle, which will now be unrotated. We have srcRect_u
With a canvas, draw the selected rectangle of B0_r (srcRect_u) onto B1
However, if B0 is large enough, the rotation operation is very expensive, since it applies to the whole bitmap. It also means creating an intermediate bitmap each time.
I need to repeat this step in a game loop, where the rectangle (srcRect) can be rotated and translated, so it must be a "cheap" operation.
My question: is there a better approach in terms of efficiency, using canvas, matrices and "normal" Views?
EDIT
To better illustrate what I mean, I have added some pics.
B0, with the rotated selection zone
B0 rotated. Now the selection zone is unrotated
unrotated selection zone drawn onto a part of B1
Yes. Rotate the canvas you're going to be drawing to via canvas.rotate(). Then draw the subset of the bitmap you want via canvas.drawBitmap(). This has the effect of drawing a rotated bitmap without rotating the source bitmap or creating an intermediate one. If you want to draw some things rotated and some unrotated, save the matrix (via canvas.save()), rotate, draw the bitmap, then restore (via canvas.restore()).
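A hedged sketch of that idea, assuming the selection in B0 has centre (cx, cy), size wp x hp and rotation angleDegrees, and should land axis-aligned at (dx, dy) in B1:
Canvas c = new Canvas(b1);                    // draw straight into B1
c.save();
c.clipRect(dx, dy, dx + wp, dy + hp);         // only touch the selection-sized target area
c.translate(dx, dy);                          // where the unrotated selection should land in B1
c.rotate(-angleDegrees, wp / 2f, hp / 2f);    // undo the selection's rotation about its centre
c.translate(wp / 2f - cx, hp / 2f - cy);      // map the selection centre onto the target centre
c.drawBitmap(b0, 0, 0, paint);                // B0 itself is never rotated or copied
c.restore();
Depending on how the selection angle is measured, the sign passed to rotate() may need to be flipped.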

Android OpenGL2.0 intersection between two textures

I'm making a game in OpenGL ES 2.0 and I want to check whether two sprites intersect, but I don't just need to check the intersection between two rectangles. I have two sprites with textures; some parts of the textures are transparent, some are not. I need to check the intersection between the sprites only on the non-transparent parts.
Example: http://i.stack.imgur.com/ywGN5.png
The easiest way to determine the intersection between two sprites is the bounding-box method.
Object 1 Bounding Box:
vec3 min1 = {Xmin, Ymin, Zmin}
vec3 max1 = {Xmax, Ymax, Zmax}
Object 2 Bounding Box:
vec3 min2 = {Xmin, Ymin, Zmin}
vec3 max2 = {Xmax, Ymax, Zmax}
You must precompute the bounding box by traversing through the vertex buffer array for your sprites.
http://en.wikibooks.org/wiki/OpenGL_Programming/Bounding_box
Then during each render frame check if the bounding boxes overlap (compute on CPU).
a) First convert the Mins & Maxs to world space.
min1WorldSpace = modelViewMatrix * min1
b) Then check their overlap.
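The overlap test itself is just a per-axis comparison; a minimal 2D sketch, assuming the minima and maxima are already in the same (world) space:
// Axis-aligned overlap test for 2D sprites; arrays hold {x, y} minima and maxima.
static boolean boxesOverlap(float[] min1, float[] max1, float[] min2, float[] max2) {
    return min1[0] <= max2[0] && max1[0] >= min2[0]
            && min1[1] <= max2[1] && max1[1] >= min2[1];
}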
I need to check intersection between sprites only on the non-transparent part.
Checking this case may be complicated, depending on your scene. You may have to segment the transparent parts of your sprites into separate sprites and compute their bounding boxes.
In your example it looks like the transparent object is encapsulated inside an opaque object, so it's easy: just compute two bounding boxes.
I don't think there's a very elegant way of doing this with ES 2.0. ES 2.0 is a very minimal version of OpenGL, and you're starting to push the boundaries of what it can do. For example in ES 3.0, you could use queries, which would be very helpful in solving this nicely and efficiently.
What can be done in ES 2.0 is draw the sprites in a way so that only pixels in the intersection of the two end up producing color. This can be achieved with either using a stencil buffer, or with blending (see details below). But then you need to find out if any pixels were rendered, and there's no good mechanism in ES 2.0 that I can think of to do this. I believe you're pretty much stuck with reading back the result, using glReadPixels(), and then checking for non-black pixels on the CPU.
One idea I had to avoid reading back the whole image was to repeatedly downsample it until it reaches a size of 1x1. It would originally render to a texture, and then in each step, sample the current texture with linear sampling, rendering to a texture of half the size. I believe this would work, but I'm not sure if it would be more efficient than just reading back the whole image.
I won't provide full code for the proposed solution, but the outline looks like this. This is using blending for drawing only the pixels in the intersection.
Set up an FBO with an RGBA texture attached as a color buffer. The size does not necessarily have to be the same as your screen resolution. It just needs to be big enough to give you enough precision for your intersection.
Clear FBO with black clear color.
Render first sprite with only alpha output, and no blending.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glDisable(GL_BLEND);
// draw sprite 1
This leaves the alpha values of sprite 1 in the alpha of the framebuffer.
Render the second sprite with destination alpha blending. The transparent pixels will need to have black in their RGB components for this to work correctly. If that's not already the case, change the fragment shader to create pre-multiplied colors (multiply rgb of the output by a).
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glBlendFunc(GL_DST_ALPHA, GL_ZERO);
glEnable(GL_BLEND);
// draw sprite 2
This renders sprite 2 with color output only where the alpha of sprite 1 was non-zero.
Read back the result using glReadPixels(). The region being read needs to cover at least the bounding box of the two sprites.
Add up all the RGB values of the pixels that were read.
There was overlap between the two sprites if the resulting color is not black.
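Still not full code, but the read-back step might look roughly like this in Java (android.opengl.GLES20 and java.nio assumed; x, y, w and h describe a region covering both sprites' bounding boxes):
ByteBuffer pixels = ByteBuffer.allocateDirect(w * h * 4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(x, y, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

boolean intersects = false;
for (int i = 0; i < w * h * 4; i += 4) {
    // any non-black RGB byte means sprite 2 drew where sprite 1's alpha was non-zero
    if (pixels.get(i) != 0 || pixels.get(i + 1) != 0 || pixels.get(i + 2) != 0) {
        intersects = true;
        break;
    }
}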

Can setTextSize() be density independent?

If I were to initialize a Paint and set its text size like so:
Paint paint = new Paint();
paint.setAntiAlias(true);
paint.setARGB(255, 255, 0, 0);
paint.setTextSize(screenWidth / 100);
// screenWidth is the width of the screen in pixels, given via display metrics.
And then draw text to the canvas like so:
String text = "Hello"
canvas.drawText(text, (screenWidth/13), (screenHeight/5), Paint);
Would the text show up in the same relative spot, at the same relative size, regardless of screen metrics? I ask because I only have one device and the emulator doesn't run very well on my multi-core machine.
What I've been doing up until this point is simply using a bitmap with the text written over a background, but my memory usage is getting quite heavy, so I'm looking to cut down on the number of bitmaps loaded.
My other option is to save the text as a bitmap with a transparent background and overlay it on a single bitmap background. But that seems only half as productive, since it actually creates one more bitmap and just reduces the total size of all the bitmaps stored. I also don't like this idea because I'd eventually like to take more control over the object life cycle, and this would make that less effective.
Also, is there any method of adding styles to text (such as complicated fonts and color patterns besides using pre-made Drawables) so that the text can be drawn to canvas? (As cheaply as possible)
NVM, solved. By poking around all day I figured out that dp units can be defined in the res folder and give a fairly uniform position for the text, and that Paint is not as customization-friendly as I would wish.
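For reference, setTextSize() always takes raw pixels, so one common way to make it density-independent is to convert an sp (or dp) value through the display metrics. A small sketch, assuming it runs inside a View or other Context; 16 is just an example size:
// Convert 16sp into pixels for this device; COMPLEX_UNIT_DIP works the same way for dp.
float textSizePx = TypedValue.applyDimension(
        TypedValue.COMPLEX_UNIT_SP, 16, getResources().getDisplayMetrics());

Paint paint = new Paint();
paint.setAntiAlias(true);
paint.setARGB(255, 255, 0, 0);
paint.setTextSize(textSizePx);   // setTextSize() always takes pixels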

how to use background image for texture in openGL

I want to use a background image as a texture, but textures require power-of-two sizes, so I resized the image (from 800x480 to 512x512).
But now the image is shown with some blank space.
How can I show the image on the entire screen? I also want it to be horizontally scrollable.
Well, if this is on Android, that means you don't have glDrawPixels, so the only way you could be "showing" an image would be to render a textured quad. So just make the quad whatever size you need.
You can also draw multiple textures, just by drawing one texture to one location, then drawing another texture to another location.
You need to use the right texture coordinates when rendering your background. Compute them so that there is an offset along the axis perpendicular to the black borders.
I guess you resized your 800x480 image so that it fits in 512x512 pixels, making you end up with the actual background image (inside the 512x512 texture) being 512x307?
This would mean that the texture coordinate offset along that axis would need to be (512.0 - 307.0) / 2.0 / 512.0 ≈ 0.2, so along that axis your texture coordinates would need to run from roughly 0.2 to 0.8 instead of 0.0 to 1.0.
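As a hypothetical illustration (triangle-strip vertex order and vertical centring of the image inside the texture are assumptions), the texture-coordinate array for the background quad would then look something like:
float offset = (512f - 307f) / 2f / 512f;   // ~0.2 of blank texture above and below the image
float[] texCoords = {
        0f, offset,        // top-left
        1f, offset,        // top-right
        0f, 1f - offset,   // bottom-left
        1f, 1f - offset,   // bottom-right
};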

Dynamically create / draw images to put in android view

I'm not sure I'm doing this the "right" way, so I'm open to other options as well. Here's what I'm trying to accomplish:
I want a view which contains a graph. The graph should be dynamically created by the app itself. The graph should be zoom-able, and will probably start out larger than the screen (800x600 or so)
I'm planning on starting out simple, just a scatter plot. Eventually, I want a scatter plot with a fit line and error bars with axis that stay on the screen while the graph is zoomed ... so that probably means three images overlaid with zoom functions tied together.
I've already built a view that can take a drawable, can use focused pinch-zoom and drag, can auto-scale images, can switch images dynamically, and takes images larger than the screen. Tying the images together shouldn't be an issue.
I can't, however, figure out how to dynamically draw simple images.
For instance: do I get a Bitmap object and draw on it pixel by pixel? I wanted to work with some of the ShapeDrawables, but it seems they can only draw a shape onto a canvas ... how then do I get a bitmap of all those shapes into my view? Or alternately, do I have to dynamically redraw /all/ of the image I want to portray in the "onDraw" routine of my view every time it moves or zooms?
I think the "perfect" solution would be to use the ShapeDrawable (or something like it to draw lines and label them) to draw the axis with the onDraw method of the view ... keep them current and at the right level ... then overlay a pre-produced image of the data points / fit curve / etc that can be zoomed and moved. That should be possible with white set to an alpha on the graph image.
PS: The graph image shouldn't actually /change/ while on the view. It's just zooming and being dragged. The axis will probably actually change with movement. So pre-producing the graph before (or immediately upon) entering the view would be optimal. But I've also noticed that scaling works really well with vector images ... which also sounds appropriate (rather than a bitmap?).
So I'm looking for some general guidance. I've tried reading up on the Bitmap, ShapeDrawable, Drawable, etc. classes and just can't seem to find the right fit. That makes me think I'm barking up the wrong tree and that someone with more experience can point me in the right direction. Hopefully I didn't waste my time building the zoom-able view I put together yesterday :).
First off, it is never a waste of time writing code if you learned something from it. :-)
There is unfortunately still no support for drawing vector images in Android. So bitmap is what you get.
I think the bit you are missing is that you can create a Canvas any time you want to draw on a bitmap. You don't have to wait for onDraw to give you one.
So at some point (from onCreate, when data changes etc), create your own Bitmap of whatever size you want.
Here is some pseudo code (not tested):
Bitmap mGraph;

void init() {
    // see Bitmap.Config for the available pixel formats
    mGraph = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(mGraph);
    // use Canvas draw routines to draw your graph
}

// Then in onDraw you can draw to the on-screen Canvas from your bitmap.
protected void onDraw(Canvas canvas) {
    Rect dstRect = new Rect(0, 0, viewWidth, viewHeight);
    Rect sourceRect = new Rect();
    // do something creative here to pick the source rect from your graph bitmap
    // based on zoom and pan
    sourceRect.set(10, 10, 100, 100);
    // draw to the screen
    canvas.drawBitmap(mGraph, sourceRect, dstRect, graphPaint);
}
Hope that helps a bit.
