I am developing a photography app in which I want to select a particular section of a photo and stretch only that portion. How can I do that?
I have tried to stretch a photo using Canvas but failed. Is it possible with the android.graphics.NinePatch class?
Any suggestions?
You can use a Matrix to apply new dimensions to your bitmap.
You could use the setScale/postScale methods of a Matrix object.
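For example, a minimal sketch (assuming original is your source Bitmap, sx/sy are the scale factors you want, and imageView is the target ImageView):
Matrix matrix = new Matrix();
matrix.setScale(sx, sy);                      // or postScale(...) to combine with an existing transform
Bitmap scaled = Bitmap.createBitmap(original, 0, 0,
        original.getWidth(), original.getHeight(), matrix, true);
imageView.setImageBitmap(scaled);             // imageView is an assumed target view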
A rather ugly solution would be to use an image cropping library: temporarily crop a portion of the image, load it into another ImageView, and then scale it up.
You can also get this done without a CropImageView. The idea is that whenever the user touches the original ImageView, you are given the (x, y) at which the user's finger resides, so you can extract a bitmap centered at (x, y) with a given radius.
To apply a magnifier effect, you could use a round/circular ImageView to show the magnified portion of the image.
Something like this:
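A rough sketch of the extraction step, assuming original is the full Bitmap, (x, y) the touch point, radius an assumed magnifier radius in pixels, and magnifierView the circular ImageView:
int left = Math.max(0, x - radius);
int top = Math.max(0, y - radius);
// Clamp the extracted square so it stays inside the bitmap bounds
int size = Math.min(radius * 2, Math.min(original.getWidth() - left, original.getHeight() - top));
Bitmap region = Bitmap.createBitmap(original, left, top, size, size);
// Scale the region up and show it in the round magnifier view
Bitmap magnified = Bitmap.createScaledBitmap(region, size * 2, size * 2, true);
magnifierView.setImageBitmap(magnified);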
Finally, I found a solution to my problem, shown below:
// normalImage is the source bitmap, w and h are its width and height,
// and progress is the number of pixels by which to stretch the selected band
Bitmap stretchImage = Bitmap.createBitmap(w, h + progress, Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(stretchImage);
// draw the top strip unchanged
c.drawBitmap(normalImage, new Rect(0, 0, w, 75), new Rect(0, 0, w, 75), null);
// draw the middle strip, stretched vertically by "progress" pixels
c.drawBitmap(normalImage, new Rect(0, 75, w, 150), new Rect(0, 75, w, 150 + progress), null);
// draw the bottom strip, shifted down by "progress" pixels
c.drawBitmap(normalImage, new Rect(0, 150, w, 225), new Rect(0, 150 + progress, w, 225 + progress), null);
myImage.setImageBitmap(stretchImage);
Related
So this is my scenario:
I have an SVG image that contains all the music notes on the staff (a sort of sprite sheet).
I have two SVG images that contain the clefs (maybe I can merge them together anyway).
All of them are converted to Android VectorDrawables.
What I want to do is select the note from the first SVG, select the clef, and then show them next to each other (the two images should be aligned).
What I have managed to achieve so far is selecting the portion of the SVG for the note (I still need to refine the rect size). But what I'm still having problems with is showing both of them on the same line.
private Paint paint;
private VectorDrawable musicClef;
private VectorDrawable musicNotes;

public MusicScore(Context context) {
    super(context);
    paint = new Paint();
    paint.setColor(Color.BLACK);
    Resources res = context.getResources();
    musicClef = (VectorDrawable) res.getDrawable(R.drawable.ic_bassclef, null);
    musicNotes = (VectorDrawable) res.getDrawable(R.drawable.ic_musicnotes, null);
    // Note: getWidth()/getHeight() are still 0 here, since the view has not been laid out yet
    int width = getWidth();
    int height = getHeight();
    Log.i("MUSIC", "H: " + musicClef.getMinimumHeight() + " W: " + musicClef.getMinimumWidth());
}

@Override
protected void onDraw(Canvas canvas) {
    Log.i("MUSIC", "Called");
    int left = getWidth() / 2;
    int top = getHeight() / 2;
    musicClef.setBounds(0, 0, musicClef.getIntrinsicWidth(), musicClef.getIntrinsicHeight());
    // Render both vector drawables into offscreen bitmaps so a portion can be selected later
    Bitmap source = Bitmap.createBitmap(musicNotes.getIntrinsicWidth(), musicNotes.getIntrinsicHeight(), Bitmap.Config.ARGB_8888);
    Bitmap clefSource = Bitmap.createBitmap(musicClef.getIntrinsicWidth(), musicClef.getIntrinsicHeight(), Bitmap.Config.ARGB_8888);
    Canvas newcanvas = new Canvas(source);
    Canvas clefCanvas = new Canvas(clefSource);
    int notesLeft = musicClef.getIntrinsicWidth();
    int notesTop = musicClef.getIntrinsicHeight();
    musicNotes.setBounds(0, 0, musicNotes.getIntrinsicWidth(), musicNotes.getIntrinsicHeight());
    musicNotes.draw(newcanvas);
    musicClef.draw(clefCanvas);
    // First Rect: the portion of the notes bitmap to show; second Rect: where to place it next to the clef
    Rect rect = new Rect(1150, 0, 1700, musicNotes.getIntrinsicHeight());
    Rect rect2 = new Rect(notesLeft, 0, notesLeft + 450, musicNotes.getIntrinsicHeight());
    Rect clefRect = new Rect(0, 0, musicClef.getIntrinsicWidth(), musicClef.getIntrinsicHeight());
    canvas.drawBitmap(clefSource, null, clefRect, null);
    canvas.drawBitmap(source, rect, rect2, null);
}
With that code I can show a portion of the notes drawable after converting it to a Bitmap. I can also draw both of them, and the horizontal position is aligned. The problem is that the vertical position is not aligned.
I know my code is not correct (I'm pretty new to Canvas in Android and trying to figure out what to do). What I have learned so far is:
In order to select a portion of the image, I need to convert it to a Bitmap (I haven't found any way to do it directly with the VectorDrawable).
When drawing a Bitmap on the canvas with drawBitmap, the first Rect is the portion of the source I want to show, and the second is the size and position of the destination area (see the short sketch after this list).
To get the drawable's pixels into a Bitmap, I need to create a Canvas backed by that Bitmap and draw the drawable into it. Is that correct?
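A short illustration of that two-Rect overload (src is an assumed, already-rendered Bitmap, and this would run inside a View's onDraw); it copies the left half of src into the right half of the view:
Rect srcRect = new Rect(0, 0, src.getWidth() / 2, src.getHeight());   // portion of the source to show
Rect dstRect = new Rect(getWidth() / 2, 0, getWidth(), getHeight());  // where (and how large) to draw it
canvas.drawBitmap(src, srcRect, dstRect, null);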
So I have several questions:
I would like to understand how to vertically align the images, so that they both start from the same y (top).
I'm not sure the vector image is the best idea; maybe it is better to convert it into density-dependent PNGs? Or is the canvas size relative to the screen anyway? Or maybe I need to make my code screen-independent (I suppose getIntrinsicWidth/Height return the real size of the image, so maybe I need to scale it)?
Why do I need to explicitly call setBounds on both images to have them displayed?
UPDATE #1
So I think I understand why my images are probably not aligned on the same line. It looks like the sizes are "adapted" when creating a vector asset; I'm not sure whether it is the Android system or Android Studio doing that. In any case, after loading two images with the same height in pixels, I found that each reports a different height (one nearly double the other), which explains why the image is shifted toward the bottom.
So what I tried is making a single image with both the notes and the clefs, and it sort of works in the emulator. But when testing on a real device I get the error:
W/OpenGLRenderer: Bitmap too large to be uploaded into a texture (11732x1168, max=8192x8192)
Bitmap too large to be uploaded into a texture (11732x1168, max=8192x8192)
I understand what the error means, but the image size is 2933x292 pixels. Why are getIntrinsicWidth and getIntrinsicHeight returning those dimensions? What units do they use?
I'm wondering whether the vector drawable is maybe not the best choice. Would it be better to convert it into density-dependent PNGs and use those?
I am trying to add an image on top of another image. The one on the top is a blurred image. I am using the following code to try and achieve this.
// First image as a background (full size)
mCanvas.drawBitmap(canvasBackImage, 0, 0, null); // draws fine
Rect rectangle = new Rect(0, 0, 200, 200);
// Second image on top: a blurred 200x200 px rectangle
mCanvas.drawBitmap(blurBuilder.blur(appContext, canvasBackImage, mX, mY), null, rectangle, null);
The image is drawn fine with the above code at coordinate (0, 0) of the canvas. However, if I change the Rect in the code above to the following, it does not add the image at the (100, 100) coordinate of the canvas.
Rect rectangle = new Rect(100,100,200,200);
I also tried it with the (50, 50) coordinate and it works, so changing it to the following works too.
Rect rectangle = new Rect(50,50,200,200);
I have no idea why this is not working as I expect it to. Am I doing something wrong?
My ultimate objective is to blur the image at the exact location that the user touched, so if the user touches the middle of the screen, that part of the image should be blurred out.
I moved the above code into the onDraw method and it seems to be working as expected now.
Before, I had it in a different method that was called manually on a button click.
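A minimal sketch of that pattern, reusing the names from the question (canvasBackImage, blurBuilder, appContext, mX, mY) plus an assumed BLUR_SIZE constant; this is one way it might look inside a custom View:
@Override
public boolean onTouchEvent(MotionEvent event) {
    mX = (int) event.getX();
    mY = (int) event.getY();
    invalidate();   // request a redraw; the actual drawing happens in onDraw
    return true;
}

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    // First image as the full-size background
    canvas.drawBitmap(canvasBackImage, 0, 0, null);
    // Rect takes (left, top, right, bottom), so build the destination square around the touch point
    Rect rectangle = new Rect(mX, mY, mX + BLUR_SIZE, mY + BLUR_SIZE);
    canvas.drawBitmap(blurBuilder.blur(appContext, canvasBackImage, mX, mY), null, rectangle, null);
}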
I'm trying to dynamically create images in android by taking an existing Bitmap and removing the centre of it in order to make a "cropped" version. The resulting image's height would naturally be smaller than the original, something like the attached example.
I've got a rough way of doing this by creating two new Bitmaps from the original, one containing the top of the image above the crop section (e.g. the android's head in the example) and the other containing the remaining image below the crop section (the android's feet) using the Bitmap.createBitmap(source, x, y, width, height) method, then drawing both of these bitmaps onto a canvas of a size equal to the original image minus the removed space.
This feels a bit clunky, and as I could be calling this method several times a second, it seems wasteful to create two bitmaps each time.
I was wondering if there is a more efficient way of doing this, something like drawing the original Bitmap onto a canvas using a Path whose Paint's xfermode is set to a new PorterDuffXfermode(Mode.DST_OUT) in order to cut out the portion of the image I wish to delete. But this seems to just clear that area rather than shrink the image down, i.e. it leaves a big empty gap in the Android's middle.
Any suggestions greatly appreciated!
Why do you create two bitmaps? You only need to create one bitmap and then do canvas.drawBitmap() twice.
Bitmap bmpOriginal;   // the source image
// One derived bitmap whose height is the original height minus the removed band
Bitmap bmpDerived = Bitmap.createBitmap(...);
Canvas canvas = new Canvas(bmpDerived);
// Draw the part above the cut and the part below the cut into the derived bitmap
canvas.drawBitmap(bmpOriginal, rectTopSrc, rectTopDst, null);
canvas.drawBitmap(bmpOriginal, rectBottomSrc, rectBottomDst, null);
Done.
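For concreteness, a sketch of how the four rects could be built, assuming the band to remove runs from cropTop to cropBottom (both assumed values) in the original:
int w = bmpOriginal.getWidth();
int h = bmpOriginal.getHeight();
int removed = cropBottom - cropTop;                        // height of the band being cut out
// bmpDerived should be created with size w x (h - removed)
Rect rectTopSrc = new Rect(0, 0, w, cropTop);              // everything above the cut
Rect rectTopDst = new Rect(0, 0, w, cropTop);              // drawn unchanged at the top
Rect rectBottomSrc = new Rect(0, cropBottom, w, h);        // everything below the cut
Rect rectBottomDst = new Rect(0, cropTop, w, h - removed); // shifted up to close the gap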
I am new to OpenGL and need some help.
I have a screen and was able to draw an image on it. Now I want to create a mirror image of that image, i.e. I want the screen to be divided into two parts (horizontally), with the actual image at the bottom and a duplicate at the top, just like a mirror, so that if a change is made to the bottom image it is reflected in the top image too.
Please give suggestions. (I do not want Canvas mirror-image code.)
Try this:
Bitmap sprite = BitmapFactory.decodeResource(appContext.getResources(), R.drawable.spritegfx);
Matrix temp1 = new Matrix();
// A negative X scale mirrors the bitmap horizontally; use preScale(1.0f, -1.0f) to mirror vertically instead
temp1.preScale(-1.0f, 1.0f);
Bitmap flippedSprite = Bitmap.createBitmap(sprite, 0, 0, sprite.getWidth(), sprite.getHeight(), temp1, false);
canvas.drawBitmap(flippedSprite, 0, 0, null);
http://www.opengl.org/archives/resources/faq/technical/transformations.htm#tran0170
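If you want to stay in OpenGL as the question asks, here is a rough sketch along the lines of the linked FAQ (OpenGL ES 1.x; gl, screenWidth, screenHeight, and drawScene() are assumed names):
// Bottom half of the screen: draw the scene normally
gl.glViewport(0, 0, screenWidth, screenHeight / 2);
drawScene(gl);

// Top half of the screen: draw the same scene again with Y negated, so it appears mirrored
gl.glViewport(0, screenHeight / 2, screenWidth, screenHeight / 2);
gl.glPushMatrix();
gl.glScalef(1.0f, -1.0f, 1.0f);
gl.glFrontFace(GL10.GL_CW);   // winding order reverses under a mirror transform (per the FAQ)
drawScene(gl);
gl.glFrontFace(GL10.GL_CCW);
gl.glPopMatrix();
Since both halves render the same scene, any change made to the bottom image automatically shows up in the mirrored copy on top.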
I'm not sure I'm doing this the "right" way, so I'm open to other options as well. Here's what I'm trying to accomplish:
I want a view that contains a graph. The graph should be dynamically created by the app itself, should be zoomable, and will probably start out larger than the screen (800x600 or so).
I'm planning on starting out simple, with just a scatter plot. Eventually, I want a scatter plot with a fit line and error bars, with axes that stay on the screen while the graph is zoomed ... so that probably means three images overlaid with their zoom functions tied together.
I've already built a view that can take a drawable, can use focused pinch-zoom and drag, can auto-scale images, can switch images dynamically, and takes images larger than the screen. Tying the images together shouldn't be an issue.
I can't, however, figure out how to dynamically draw simple images.
For instance: do I get a Bitmap object and draw on it pixel by pixel? I wanted to work with some of the ShapeDrawables, but it seems they can only draw a shape onto a canvas ... how then do I get a bitmap of all those shapes into my view? Or alternatively, do I have to dynamically redraw /all/ of the image I want to portray in the onDraw routine of my view every time it moves or zooms?
I think the "perfect" solution would be to use a ShapeDrawable (or something like it that can draw lines and label them) to draw the axes within the onDraw method of the view ... keep them current and at the right level ... then overlay a pre-produced image of the data points / fit curve / etc. that can be zoomed and moved. That should be possible with white set to an alpha on the graph image.
PS: The graph image shouldn't actually /change/ while on the view; it's just being zoomed and dragged. The axes will probably actually change with movement. So pre-producing the graph before (or immediately upon) entering the view would be optimal. But I've also noticed that scaling works really well with vector images ... which also sounds appropriate (rather than a bitmap?).
So I'm looking for some general guidance. I've tried reading up on the Bitmap, ShapeDrawable, Drawable, etc. classes and just can't seem to find the right fit. That makes me think I'm barking up the wrong tree and that someone with more experience can point me in the right direction. Hopefully I didn't waste my time building the zoomable view I put together yesterday :).
First off, it is never a waste of time writing code if you learned something from it. :-)
There is unfortunately still no support for drawing vector images in Android. So bitmap is what you get.
I think the bit you are missing is that you can create a Canvas any time you want to draw on a bitmap. You don't have to wait for onDraw to give you one.
So at some point (from onCreate, when the data changes, etc.), create your own Bitmap of whatever size you want.
Here is some pseudo code (not tested):
Bitmap mGraph;
Paint graphPaint = new Paint();   // paint used when blitting the graph bitmap

void init() {
    // look at Bitmap.Config to determine the config type
    mGraph = Bitmap.createBitmap(width, height, config);
    Canvas c = new Canvas(mGraph);
    // use Canvas draw routines to draw your graph into mGraph here
}

// Then in onDraw you can draw to the on-screen Canvas from your bitmap.
@Override
protected void onDraw(Canvas canvas) {
    Rect dstRect = new Rect(0, 0, viewWidth, viewHeight);
    Rect sourceRect = new Rect();
    // do something creative here to pick the source rect from your graph bitmap,
    // based on zoom and pan
    sourceRect.set(10, 10, 100, 100);
    // draw to the screen
    canvas.drawBitmap(mGraph, sourceRect, dstRect, graphPaint);
}
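One usage note on this approach: if the bitmap itself never changes while zooming and panning (as described in the question), it is enough to call invalidate() from the gesture handlers so that onDraw runs again and picks a new sourceRect.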
Hope that helps a bit.