I use androidsvg-1.2.1.jar to render an SVG image. The original size of the image is 260 pixels in width and 100 pixels in height. I tried to set the width of the image in proportion to the width of the display as follows:
Display display = getWindowManager().getDefaultDisplay();
Point size = new Point();
display.getSize(size);
int width = size.x;
int height = size.y;
height = (int) (width / 2.6);
svg.setDocumentHeight(height);
svg.setDocumentWidth(width);
svg.setDocumentViewBox(0, 0, width, height);
The docs say that setDocumentHeight, setDocumentWidth and setDocumentViewBox accept input values in pixels. But in this case the viewBox had the expected size, while the picture itself sat in the top-left corner of the viewBox and was much smaller than the viewBox (roughly four times smaller).
When I changed the last line of the code to
svg.setDocumentViewBox(0, 0, width/4, height/4);
the size of the picture became almost equal to the size of the viewBox, but it still remained a little smaller. Why is this happening? And what values should be passed to the setDocumentViewBox method?
The viewBox is meant to describe the limits of the contents of the SVG. In other words, the bounding box around the graphical elements in the file. That's how the renderer knows how much it needs to scale the SVG to fill the area of the SVG viewport. The viewport is the rectangle you specify with width and height (setDocumentWidth() and setDocumentHeight()).
To get a perfect fit, you need to set the viewBox to the exact dimensions of your contents. You haven't provided or linked your SVG, so I can't tell you exactly what that is in your case.
But for example, say your SVG was a rectangle that was at 0,0 and was 100 wide and 20 high. You would need to do setDocumentViewBox(0,0,100,20). If your SVG was a circle of radius 50 at 80,60, you would do setDocumentViewBox(30,10,100,100).
In your case, it looks as if the light gray rectangle at the back defines the limits of your content, so you would probably be using the dimensions of that for your view box.
You say the original size of your SVG is 260x100. If that corresponds to the size of that grey rectangle, then you would set the viewBox with setDocumentViewBox(0,0,260,100).
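Putting that together, a minimal sketch (assuming the content really does occupy 0,0 to 260,100 in the file's own user units, and that svg is your already-parsed SVG instance) might look like:
// Viewport: the on-screen rectangle, derived from the display width
Display display = getWindowManager().getDefaultDisplay();
Point size = new Point();
display.getSize(size);
int width = size.x;
int height = (int) (width / 2.6f);   // keep the 260:100 aspect ratio

svg.setDocumentWidth(width);         // viewport size in pixels
svg.setDocumentHeight(height);

// viewBox: the bounds of the drawing in its own user units, NOT screen pixels
svg.setDocumentViewBox(0, 0, 260, 100);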
Related
I have an ImageView which displays an image (e.g. 2000x1000 pixels) and I have a coordinate (X, Y) on that image (not on the ImageView). The canvas of my ImageView is 600x800, for example. How can I convert the point (X, Y) to screen coordinates so that I can draw a path with them in the onDraw(...) method of the ImageView? Any help is appreciated! Thank you.
Update: If I use a matrix to draw the path between coordinates, it works, but the path and objects I draw become really small. Here is the code I used:
final Matrix matrix = canvas.getMatrix();
matrix.preConcat(_view.getImageMatrix());
matrix.preScale(1.0f / _inSampleSize, 1.0f / _inSampleSize);
canvas.setMatrix(matrix);
// I draw the path here
Update: I added a picture to show the effect of using the matrix to draw the path. I would like the 4 lines and the 4 corner balls to be their normal size. The red line is the boundary of the ImageView which holds the picture.
I think that might depend on how exactly you are displaying your image. Your ImageView (600x800) is not the same aspect ratio as your bitmap (2000x1000).
Are you keeping the bitmap's aspect ratio stable as you scale it down? If so, which dimension (height or width) takes up the full view and which has black (or whatever else) as padding? This will help you determine your scale factor.
scale_factor = goal_height/height1; //if height is what you are scaling by
scale_factor = goal_width/width1; //if width is what you are scaling by.
I would try:
x_goal = x1 * scale_factor;
y_goal = y1 * scale_factor;
That is, if you have a point (1333, 900) in your image, and your image takes up the full width, you would multiply both x and y by 600/2000 to get (399.9, 270). (you might want to round that decimal).
If you are NOT keeping the bitmap's aspect ratio stable (that is, you're squeezing it to fit), then you'd have a height scale factor and a width scale factor. So you'd take (1333, 900) and multiply x by 600/2000 and y by 800/1000 to get (399.9, 720).
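As a rough sketch of the uniform-scale case (the sizes below are just the example values from the question, and it assumes the bitmap is scaled to fill the view's width and is top-aligned):
// example sizes from the question
float bitmapWidth = 2000f, viewWidth = 600f;
// bitmap-space point to convert
float imageX = 1333f, imageY = 900f;

float scale = viewWidth / bitmapWidth;   // 600 / 2000 = 0.3
float viewX = imageX * scale;            // 399.9
float viewY = imageY * scale;            // 270

// Alternatively, let the ImageView's own matrix do the mapping for any scaleType:
// float[] pts = { imageX, imageY };
// imageView.getImageMatrix().mapPoints(pts);
// then use pts[0], pts[1] as the view coordinates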
I am trying to render a Sprite onto my phone screen. My world has a size of 100x100 units and I would like to split it into 10 equal rectangles (10 x 100 units each).
Each of them will be viewed as full screen, and I want the camera to be able to scroll from one to another according to the character's movement (as the character in the game reaches the halfway point of the rectangle's width).
The problem is that the camera zooms in too much to the Sprite area and the Sprite rendered doesn't respect the aspect ratio of the PNG file.
Should I use a ShapeRenderer object such as a rectangle that would be the same size as the phone screen, fill the rectangle with parts of the Sprite, and then somehow scale that rectangle in order to preserve the aspect ratio of the PNG file?
Please advise me as to what is best.
If you do not specify units, then OrthographicCamera has an accessible zoom field. But it is always best to specify exactly what you want.
If you want to have 10 "things" next to each other, all fitting on the camera, I would just specify that:
int thingsWidth = 1; // 1 could stand for one meter
int amountOfThings = 10;
// give your texture/image/sprite the width of "thingsWidth"

@Override
public void resize(int width, int height)
{
    float camWidth = thingsWidth * amountOfThings;
    // you probably want to keep the aspect ratio of the window
    float camHeight = camWidth * ((float) height / (float) width);
    camera.viewportWidth = camWidth;
    camera.viewportHeight = camHeight;
    camera.update();
}
This is basically how the camera works with a regular ScreenViewport, since we did not specify a specific viewport.
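For the scrolling part of the question (moving the camera along the 100-unit world as the character advances), a minimal sketch might be the following; characterX is a placeholder for however you track the player, batch is your SpriteBatch, and the exact trigger (the halfway point mentioned in the question) is left out:
// in render(), before drawing:
int slice = (int) (characterX / camera.viewportWidth);   // which 10-unit slice the character is in
camera.position.x = slice * camera.viewportWidth + camera.viewportWidth / 2f;
camera.update();
batch.setProjectionMatrix(camera.combined);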
I'm not sure exactly what you want to achieve, but a Scene2D Table could work in your favor too. You just set table.setFillParent(true); and then add your 10 images to the table using something like table.add(someActor).expand().fill(). Now all your actors will fully expand, fill vertically and share the horizontal space, and it no longer matters how you set up your camera, since the table takes care of the layout.
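A minimal sketch of that Scene2D approach (the regions array is a placeholder for whatever textures your 10 slices use):
Stage stage = new Stage(new ScreenViewport());
Table table = new Table();
table.setFillParent(true);
for (int i = 0; i < 10; i++) {
    // each slice fully expands, fills vertically and shares the horizontal space
    table.add(new Image(regions[i])).expand().fill();
}
stage.addActor(table);
// then in render(): stage.act(delta); stage.draw();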
This is more of a conceptual Android question.
Say I have this static image that I want to load into an ImageView -
Say the image is 600px by 200px (width by height).
Say the ImageView I try fitting it into is 300px by 200px (width by height).
I just want to scale by height and cut off the left end of the image so that it fits into the ImageView. Also, if no cut needs to take place (it already fits), I don't want to cut any of it off.
So in the end the ImageView (if it was 300px by 200px) would hold this image
(basically so the F doesn't get distorted)
I've looked at Scale To Fit, but none of the scale types seems to achieve this custom effect. Does anyone know how I would go about this? In my case, I wouldn't want to maintain the original aspect ratio.
You should crop the Bitmap so that it always fits into your ImageView. If you want to keep the bottom-right corner (cropping away the top-left), you could do something similar to this:
if (needsCropping()) {
    // keep the bottom-right 300x200 region of the original image
    int startX = originalImage.getWidth() - 300;
    int startY = originalImage.getHeight() - 200;
    Bitmap croppedBitmap = Bitmap.createBitmap(originalImage, startX, startY, 300, 200);
    // TODO set the cropped bitmap on the ImageView
}
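Since the question asks to scale by height and only cut off the left end when the image is too wide, a variant sketch (assuming the 300x200 view from the example, and that original / imageView are the source bitmap and target view) could be:
int viewW = 300, viewH = 200;

// scale uniformly so the height matches the view
float scale = (float) viewH / original.getHeight();
int scaledW = Math.round(original.getWidth() * scale);
Bitmap scaled = Bitmap.createScaledBitmap(original, scaledW, viewH, true);

Bitmap result;
if (scaledW > viewW) {
    // too wide: drop the left part, keep the right-most viewW pixels
    result = Bitmap.createBitmap(scaled, scaledW - viewW, 0, viewW, viewH);
} else {
    // already fits: nothing to cut
    result = scaled;
}
imageView.setImageBitmap(result);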
FIT_END would do this:
Compute a scale that will maintain the original src aspect ratio, but will also ensure that src fits entirely inside dst. At least one axis (X or Y) will fit exactly. END aligns the result to the right and bottom edges of dst.
You can use ScaleType.CENTER_CROP, but that crops around the center, so both edges get trimmed rather than just the left one.
Any combination of those is impossible; for that you must have your own scale method (subclassing ImageView).
Or, the easy way would be to use 9-patch PNGs (.9.png) as the image source; they fit any space without distorting.
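If you would rather not create new bitmaps, one possible sketch of the "own scale method" idea is to switch the view to ScaleType.MATRIX and align the image to the right edge yourself (the view and drawable sizes below are example values; in practice they come from the measured view and the drawable's intrinsic size):
float viewW = 300f, viewH = 200f;           // measured view size (example)
float drawableW = 600f, drawableH = 200f;   // drawable's intrinsic size (example)

imageView.setScaleType(ImageView.ScaleType.MATRIX);
Matrix m = new Matrix();
float scale = viewH / drawableH;            // scale by height only
float scaledW = drawableW * scale;
float dx = Math.min(0f, viewW - scaledW);   // shift left so the right edge stays visible
m.setScale(scale, scale);
m.postTranslate(dx, 0f);
imageView.setImageMatrix(m);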
I want to draw a rectangle using the Rect method from android.graphics, but I want to specify the values in dp so that it can fit any screen size. I presume by default it's in pixels, or just x,y coordinates.
How can I draw it to fit any screen size using the Rect method?
You can use DisplayMetrics to obtain the screen's size and pixel density, and then calculate your rectangle's width and height in relation to those characteristics.
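A minimal sketch of that inside a custom View's onDraw (the 100dp x 50dp size and the half-screen variant are just examples):
DisplayMetrics metrics = getResources().getDisplayMetrics();
Paint paint = new Paint();
paint.setColor(Color.RED);

// Option 1: express the rectangle in dp and convert to pixels
float density = metrics.density;
canvas.drawRect(0, 0, 100 * density, 50 * density, paint);   // 100dp x 50dp

// Option 2: size it as a fraction of the screen so it adapts to any display
// canvas.drawRect(0, 0, metrics.widthPixels * 0.5f, metrics.heightPixels * 0.25f, paint);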
My app that I am trying to create is a board game. It will have one bitmap as the board and pieces that will move to different locations on the board. The general design of the board is square, has a certain number of columns and rows and has a border for looks. Think of a chess board or scrabble board.
Before using bitmaps, I first created the board and border by drawing it manually with drawLine & drawRect. I decided how many pixels wide the border would be based on the screen width and height passed in to onSizeChanged. The remaining screen I divided by the number of columns or rows I needed.
For examples sake, let's say the screen dimensions are 102 x 102.
I may have chosen to set the border at 1 and set the number of rows & columns at 10. That would leave 100 x 100 (reduced by two to account for the top & bottom border as well as the left/right border). Then with columns and rows set to 10, that would leave 10 pixels of both height and width for each square.
No matter what screen size is passed in, I store exactly how many pixels wide the border is and the height & width of each square on the board. I know exactly what location on the screen to move the pieces to based on a simple formula, and I know exactly what cell a user touched to make a move.
Now how does that work with bitmaps? Meaning, if I create 3 different background bitmaps, one for each density, won't they still be resized to fit each device's screen resolution? From what I read there are not just 3 screen resolutions but 5, and now with tablets even more. If I or Android scale the bitmaps up or down to fit the current device's screen size, how will I know how wide the border is scaled to and the dimensions of each square, in order to figure out where to move a piece or to calculate where a player touched? So far the examples I have looked at just show how to scale the overall bitmap and get the overall bitmap's width and height, but I don't see how to tell how many pixels wide or tall each part of the board would be after it is scaled. When I draw each line and rectangle myself based on the screen dimensions from onSizeChanged, I always know these dimensions.
If anyone has any sample code or a URL to point me to that I can a read about this with bitmaps, I would appreciate it.
BTW, here is some sample code (very simplified) on how I know the dimensions of my game board (border and squares) no matter the screen size. Now I just need to know how to do this with the board as a bitmap that gets scaled to any screen size.
@Override
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
    intScreenWidth = w;
    intScreenHeight = h;

    // Set border width - my real code changes this value based on the dimensions of w
    // and h that are passed in. In other words, bigger screens get a slightly larger border.
    intOuterBorder = 1;

    // Reserve part of the screen for the board game and part for player controls & score.
    // My real code forces this to be square, but this is good enough to get the point across.
    floatBoardHeight = intScreenHeight / 4f * 3;

    // My real code actually causes floatCellWidth and floatCellHeight to be equal (square).
    floatCellWidth = (intScreenWidth - intOuterBorder * 2) / (float) intNumColumns;
    floatCellHeight = (floatBoardHeight - intOuterBorder * 2) / intNumRows;

    super.onSizeChanged(w, h, oldw, oldh);
}
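With those stored dimensions, mapping a touch back to a board square is just a division; a rough sketch (onTouchEvent shown only for illustration):
@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        // which column/row was touched, using the sizes computed in onSizeChanged
        int col = (int) ((event.getX() - intOuterBorder) / floatCellWidth);
        int row = (int) ((event.getY() - intOuterBorder) / floatCellHeight);
        if (col >= 0 && col < intNumColumns && row >= 0 && row < intNumRows) {
            // handle a move in square (row, col)
        }
    }
    return true;
}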
I think I found the answer. I might not be able to find the exact width/height and location of each playable square within a single scaled bitmap, but looking at the Snake example in the SDK, I see it doesn't create one bitmap for the entire board and scale it based on the screen dimensions - instead it creates a bitmap for each tile and then scales the tile based on the screen resolution and the number of tiles wanted on the screen - just like I do when I draw the board manually. With this method, I should be able to find the exact pixel boundaries for all of the playable squares on the board; I just have to break the board into multiple bitmaps, one for each square. I will probably have to take a similar approach for the borders, so I can detect their width/height after scaling as well.
Now I will test it to verify, but I expect it to work based on what I saw in the Snake SDK example.
--Mike
I tested a way to do what I was asking and it seems to work. Here is what I did:
I created a 320 x 320 bitmap for a board. It was made up of a border and squares (like a chess board). The border was 10 pixels in width all the way around the board. The squares were 20 x 20 pixels.
I detected the width and height of the screen through onSizeChanged. On a 480 x 800 display, I would set the new size of the board to 480 x 480 and use the following code to scale the whole thing:
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
floatBoardWidth = w;
floatBoardHeight = floatBoardWidth;
bitmapScaledBoard = Bitmap.createScaledBitmap(bitmapBoard, (int)floatBoardWidth, (int)floatBoardHeight, true);
super.onSizeChanged(w, h, oldw, oldh);
}
Now, in order to detect how many pixels wide the border was scaled to and how many pixels in height & width the squares were scaled to, I first calculated how much the overall image was scaled. I knew the bitmap was 320 x 320, since I created it. I used the following formula to calculate how much the image was scaled:
floatBoardScale = floatScreenWidth / 320;
In the case of a 480-pixel-wide screen, floatBoardScale equals 1.5. Then, to calculate what my border within the full bitmap was scaled to, I did:
floatBorderWidth = 10 * floatBoardScale;
10 was the original border width in my 320 x 320 bitmap. In my final code I won't hardcode values, I will use variables. Anyway, with this formula the new calculated border width comes out to 15.
When I applied the same scale factor to the board squares (which were 20 x 20 in the original bitmap), I got new values of 30 x 30. When I used those values in my formulas to calculate which square a person touched, it worked. I touched every corner of the squares and their centers, and it always calculated the right location. That matters because, no matter what the screen resolution, I know where the user wanted to move a piece and it visually shows up in the right location.
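In code, the whole chain might be sketched like this (320, 10 and 20 are the design-time sizes of my bitmap from above; touchX/touchY would come from a touch event):
// how much the 320 x 320 source bitmap was stretched to fill the screen width
float floatBoardScale = floatScreenWidth / 320f;

// border and square sizes after scaling
float floatBorderWidth = 10 * floatBoardScale;   // 15 on a 480-wide screen
float floatCellSize    = 20 * floatBoardScale;   // 30 on a 480-wide screen

// which square a touch landed on
int col = (int) ((touchX - floatBorderWidth) / floatCellSize);
int row = (int) ((touchY - floatBorderWidth) / floatCellSize);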
I hope this helps anyone who may have had the same question. Also, if anyone has a better method of accomplishing the same thing, please post it.
A couple things. First, start reading about how to support multiple screens. Pay close attention to learning about dips and how they work.
Next, watch this video (at least the first 15-20 minutes of it).
This subject isn't a cakewalk to grasp. I found it best to start playing around inside my code. I would suggest creating a SurfaceView and experimenting with some bitmaps, different emulators (screen sizes and densities), and the different types of drawable folders.
Unfortunately, there is more to this topic than I think Google wants to admit, and while it's definitely doable, it isn't simple to get started on for some types of applications.
Finally, you should consider boiling down your question to be more straightforward if you aren't looking for an abstract answer (like this one).
Good luck!