Right now I have a layout that contains two ImageViews: one in the background and one in front that is an icon. Both of them are merged into a single image and saved to the SD card. The icon image can be moved around before the user saves it.
What I need now is to let the user pinch the icon image so they can scale it evenly without losing its proportions.
This is how I allow the icon image to move around; the listener is set up in onCreate():
case MotionEvent.ACTION_MOVE:
    if (isImageMoving)
    {
        // Centre the icon under the finger (raw coordinates are relative to the screen)
        x = event.getRawX() - img_additionalImage.getWidth() / 2;
        y = event.getRawY() - img_additionalImage.getHeight() / 2;
        img_additionalImage.setX(x);
        img_additionalImage.setY(y);
    }
    break;
I want the user to freely scale the icon image to the desired size and then let them save it to the SD card.
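A minimal sketch of one way the pinch could be handled on top of the existing drag logic, using Android's ScaleGestureDetector. The scaleDetector and iconScale names and the clamp range are assumptions added for illustration, not part of the original code:

// Hypothetical fields alongside the existing isImageMoving / x / y fields.
private ScaleGestureDetector scaleDetector;
private float iconScale = 1f;

// In onCreate(), after img_additionalImage has been found:
scaleDetector = new ScaleGestureDetector(this,
        new ScaleGestureDetector.SimpleOnScaleGestureListener() {
            @Override
            public boolean onScale(ScaleGestureDetector detector) {
                // Accumulate the pinch factor and clamp it to an arbitrary range.
                iconScale *= detector.getScaleFactor();
                iconScale = Math.max(0.3f, Math.min(iconScale, 5f));
                // Apply the same factor to X and Y so the proportions are preserved.
                img_additionalImage.setScaleX(iconScale);
                img_additionalImage.setScaleY(iconScale);
                return true;
            }
        });

// Inside the existing onTouch(), before the switch statement:
scaleDetector.onTouchEvent(event);

You may also want to skip the ACTION_MOVE repositioning while scaleDetector.isInProgress() returns true, so a two-finger pinch does not also drag the icon around.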
Related
I am creating a layout for finger selection. In it I am trying to achieve click events for each individual finger. The layout should look uniform on any screen resolution.
My approach:
Inside a RelativeLayout, I am assigning radio buttons (not a RadioGroup, but individual buttons) to each finger inside the hand image using margins and padding, but they do not rest properly over the finger image; they shift slightly left or right.
The problem with this: the radio button positions change when the screen resolution changes.
I failed to find a library for such click events, and I didn't find any related questions on SO either. Can someone point me to a library, an example, or a better approach than this?
Several years ago I worked on a similar task. Unfortunately, I don't remember the whole solution, but the idea was pretty simple. In my case it was an image of a map where a user could select a district by tapping it. I knew the resolution of the original image that I displayed in the UI. I encoded each district as its boundary, which gave me a list of coordinate pairs per district. I attached a touch listener to the ImageView that displayed the map. Every time the user tapped the map, I took the position of the tap and multiplied it by a scale factor (calculated from the size of the original image and the size Android scaled it to). Then I checked whether that point lay inside any of the polygons.
So to make it more clear:
Let width, height = size of the original image
x, y = the user's touch position on the displayed image
scaleWidth, scaleHeight = size of the image as displayed by Android on the user's device
scaleX = width / scaleWidth, scaleY = height / scaleHeight
originalX = scaleX * x, originalY = scaleY * y
Then check whether (originalX, originalY) falls inside any of the polygons. In your case those polygons could simply be rectangles around each finger.
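A rough sketch of how that could look for the finger-selection case, assuming the ImageView uses scaleType fitXY so the hand image fills the view exactly; the finger rectangles, the image size and the onFingerSelected callback are hypothetical values for illustration:

// Hypothetical finger hit areas, defined in the ORIGINAL image's pixel coordinates.
final Rect[] fingerRects = {
        new Rect(120, 40, 200, 260),   // e.g. index finger
        new Rect(220, 20, 300, 240),   // e.g. middle finger
        // ...
};
final int originalWidth = 1024;   // assumed native size of the hand image
final int originalHeight = 768;

handImageView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() != MotionEvent.ACTION_DOWN) return false;
        // Map the touch from view coordinates back to original-image coordinates.
        float scaleX = (float) originalWidth / v.getWidth();
        float scaleY = (float) originalHeight / v.getHeight();
        int originalX = (int) (event.getX() * scaleX);
        int originalY = (int) (event.getY() * scaleY);
        for (int i = 0; i < fingerRects.length; i++) {
            if (fingerRects[i].contains(originalX, originalY)) {
                onFingerSelected(i);   // hypothetical callback for finger i
                return true;
            }
        }
        return false;
    }
});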
Here is my problem:
I have an Android application that displays an image. The image itself is resized to 480 x 640 regardless of its original size.
The user can click on multiple points of the image. Based on where the user clicks, some warping is applied to the bitmap.
So let's say the original image is 1000 x 2000 (using whole numbers to make it simpler).
Once the image is loaded into the ImageView, it is scaled to display properly in the view.
This obviously differs between phones with different resolutions.
Now when the user clicks on different points, I ultimately want to pass those points to my WCF Service along with the bitmap data to perform some image manipulation.
So the problem for me is how to take the points where the user touched on the phone and convert those to points that are relative to the normal unscaled bitmap.
Summary:
Bitmap is scaled to fit. User Clicks at 100,100. 100,100 is the point relative to the scaled image...not the actual bitmap itself. I'm looking for guidance on how to convert that 100,100 to the point on the actual bitmap.
Thanks in advance for any help you can give.
ok, so the Android ImageView has a default ScaleType of FIT_CENTER, so that means:
public static final Matrix.ScaleToFit CENTER
Compute a scale that will maintain the original src aspect ratio, but
will also ensure that src fits entirely inside dst. At least one axis
(X or Y) will fit exactly. The result is centered inside dst.
so if your whole ImageView has 480x640 to show the image, and for example your image is 1000x2000, then:
2000 / 640 = scaleFactor = 3.125
so the width scales down to 1000 / 3.125 = 320, leaving (480 - 320) / 2 = 80 empty pixels on either side, so the aspect ratio is maintained.
// The buffers are the empty margins FIT_CENTER leaves around the scaled image.
// xBuffer will be 80 in this example: (480 - 1000 / 3.125) / 2
int xBuffer = (int) ((imageViewWidth - (realImageWidth / scaleFactor)) / 2);
// yBuffer will be 0 in this example: (640 - 2000 / 3.125) / 2
int yBuffer = (int) ((imageViewHeight - (realImageHeight / scaleFactor)) / 2);
int imageViewX = 0; // x coord where the image view was clicked
int imageViewY = 0; // y coord where the image view was clicked
if (imageViewX < xBuffer || imageViewX > imageViewWidth - xBuffer)
{
    // ignore the click, it landed on the empty margin left/right of the image
}
else if (imageViewY < yBuffer || imageViewY > imageViewHeight - yBuffer)
{
    // ignore the click, it landed on the empty margin above/below the image
}
else
{
    // Subtract the margin, then scale back up to the original bitmap's coordinates.
    float realImageX = (imageViewX - xBuffer) * scaleFactor;
    float realImageY = (imageViewY - yBuffer) * scaleFactor;
    // save click somehow..
    saveClick(realImageX, realImageY);
}
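An alternative that avoids doing the FIT_CENTER math by hand is to invert the ImageView's image matrix, which already encodes both the scale and the centering offset. A minimal sketch, assuming imageView is the ImageView the touch listener is attached to and event is the MotionEvent received in onTouch():

// Map a touch point in ImageView coordinates back to bitmap coordinates
// by inverting the matrix the ImageView used to draw the bitmap.
float[] point = new float[] { event.getX(), event.getY() };

Matrix inverse = new Matrix();
if (imageView.getImageMatrix().invert(inverse)) {
    inverse.mapPoints(point);
    float bitmapX = point[0];
    float bitmapY = point[1];
    // bitmapX/bitmapY may fall outside the bitmap if the touch hit the empty
    // margin, so clamp or discard out-of-range values before using them.
    saveClick(bitmapX, bitmapY);
}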
I have two images: image A, which is the big background at the back, and image B, a small icon that will be merged on top of image A.
How it works
The user takes a photo with the camera, and this photo becomes image A.
The user selects the icon from the layout, and that becomes image B.
After selecting image B, the user can move it around the layout to adjust the position where it will overlay image A.
When the user presses save, the canvas merges the two images, B on top of A, at the position the user chose, and saves the result to the SD card.
Problem
I have managed to get image B to move around the layout, but I do not know how to merge it onto image A at that position.
This is what I did to get image B to move around the layout.
img_additionalImage = (ImageView) findViewById(R.id.img_additionalImage);
img_additionalImage.setOnTouchListener(new OnTouchListener()
{
    @SuppressLint("NewApi")
    @Override
    public boolean onTouch(View v, MotionEvent event)
    {
        switch (event.getAction())
        {
            case MotionEvent.ACTION_DOWN:
                isImageMoving = true;
                break;
            case MotionEvent.ACTION_MOVE:
                if (isImageMoving)
                {
                    // Centre the icon under the finger while it is dragged.
                    x = event.getRawX() - img_additionalImage.getWidth() / 2;
                    y = event.getRawY() - img_additionalImage.getHeight() / 2;
                    img_additionalImage.setX(x);
                    img_additionalImage.setY(y);
                }
                break;
            case MotionEvent.ACTION_UP:
                isImageMoving = false;
                break;
        }
        return true;
    }
});
I do not know how to merge the two images together at the position the user chose.
If you have used a RelativeLayout or LinearLayout as the parent layout of these two ImageViews, then you can capture that view this way:
view.setDrawingCacheEnabled(true);
Bitmap b = view.getDrawingCache();
try (FileOutputStream out = new FileOutputStream("/some/location/image.jpg")) {
    b.compress(Bitmap.CompressFormat.JPEG, 95, out);
}
Where view is your View, 95 is the JPEG compression quality, and the file output stream is just that.
What does setDrawingCacheEnabled do?
Enables or disables the drawing cache. When the drawing cache is
enabled, the next call to getDrawingCache() or buildDrawingCache()
will draw the view in a bitmap. Calling draw(android.graphics.Canvas)
will not draw from the cache when the cache is enabled. To benefit
from the cache, you must request the drawing cache by calling
getDrawingCache() and draw it on screen if the returned bitmap is not
null.
Enabling the drawing cache is similar to setting a layer when hardware
acceleration is turned off. When hardware acceleration is turned on,
enabling the drawing cache has no effect on rendering because the
system uses a different mechanism for acceleration which ignores the
flag. If you want to use a Bitmap for the view, even when hardware
acceleration is enabled, see setLayerType(int, android.graphics.Paint)
for information on how to enable software and hardware layers.
from Android Docs
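If you would rather not rely on the drawing cache (it has been deprecated on newer API levels), the same merge can be done directly with a Canvas. A minimal sketch under some assumptions: bitmapA is the background photo, bitmapB is the icon bitmap, outputFile is wherever you want to save, and iconX/iconY are the icon's coordinates already converted into bitmapA's pixel space (for example by scaling the icon's on-screen position by bitmapA.getWidth() / (float) backgroundView.getWidth(), where backgroundView is a hypothetical name for the background ImageView):

// Draw icon B onto a mutable copy of background A at the chosen position.
Bitmap merged = bitmapA.copy(Bitmap.Config.ARGB_8888, true);
Canvas canvas = new Canvas(merged);
canvas.drawBitmap(bitmapB, iconX, iconY, null);

try (FileOutputStream out = new FileOutputStream(outputFile)) {
    merged.compress(Bitmap.CompressFormat.JPEG, 95, out);
}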
ok so this is a hard one (in my head).
mframe, sframe1, sphoto1, sframe2, sphoto2 has its own scales and dimensions:
a width and a height is the dimensions that these objects have.
The plan:
Sframe2 gets dragged on to sframe1. When I let go of the mouse sphoto2's dimensions (which are scaled within the boundaries of sframe2) need to be dropped in the scaled location of sphoto1 (which resides scaled within sframe1).
to be able to drop sframe2 within sframe1 on the location I let go I need to be able to correlate the location it was dropped on to the scaled image as I want to merge sphoto2 with sphoto1.
sframe1 and sframe2 have coordinates on mframe. sphoto1 and sphoto2 only have private coordinates (such as I can merge the images to and x and y position on them.).
The problem is that because the photos inside are scaled differently to these frames I have figure out the scaling factors to be able to correctly merge sphoto2 with sphoto1 with photo2 at the correct size and position on sphoto1.
so the question is... How can I do that?
Below is a diagram to assist in visually representing the problem.
Here is also a video to show you what it should not do. The image inside the frame needs to scale and merge on the other image correctly.
http://www.youtube.com/watch?v=N17Rrs1dSz0&feature=youtu.be
My mind is fried. Can you figure out what needs to scale what?
You can set layout params for that image and multiply by a screen ratio, for example:
imageview.setLayoutParams(new LinearLayout.LayoutParams(
        (int) (250 * config.ratio), (int) (280 * config.ratio)));
But if you use this solution, you must calculate the screen ratio before scaling your image.
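The answer does not show how config.ratio is obtained; one way it might be computed is as the ratio between the device's actual screen width and the baseline width the layout was designed for. A sketch, where the helper name and the 1024-pixel baseline are assumptions:

// Hypothetical helper: ratio between the actual screen width and the design baseline.
public static float screenRatio(Context context) {
    final float DESIGN_WIDTH = 1024f; // assumed width the layout was designed against
    DisplayMetrics metrics = context.getResources().getDisplayMetrics();
    return metrics.widthPixels / DESIGN_WIDTH;
}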
I have an image of size 1024x768 with hotspots mapped so that it works on a 10" tablet, but when I run my application on a Kindle Fire the image is reduced in size and the hotspots no longer line up. Is there a formula to map coordinates on the smaller image to the corresponding coordinates on the larger image?
You don't give enough detail to provide a precise answer.
What are you using to display the image? ImageView?
Is the image scaled/cropped to maintain the aspect ratio?
Are you getting X and Y in a touch listener?
Are your hot spots implemented with some kind of hit tester comparing touched X,Y with the hot spot definitions?
Assuming an ImageView and a touch listener getting X and Y, you need to scale the hot spots to whatever resolution your image is shown at. I've done this recently by extending the ImageView class and overriding the onMeasure() callback. In onMeasure(), determine whether the image is landscape or portrait, then calculate the scaling factor between your image's native size (the size for which you specified the hot spots) and the display size.
Something like this:
float scaleFactor;
if (this.getWidth() > this.getHeight()) {
    scaleFactor = (float) this.getWidth() / (float) this.originalBitmapWidth;
} else {
    scaleFactor = (float) this.getHeight() / (float) this.originalBitmapHeight;
}
for (Hotspot hotspot : hotspots) {
    hotspot.setScale(scaleFactor);
}
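The Hotspot type above is the answerer's own class and isn't shown; a hypothetical version of it might store its rectangle in the image's native coordinates and apply the scale during hit testing:

// Hypothetical Hotspot class: holds a rectangle in native-image coordinates
// and divides touch coordinates by the scale factor when hit testing.
public class Hotspot {
    private final RectF nativeBounds;  // defined against the image's native size
    private float scale = 1f;

    public Hotspot(RectF nativeBounds) {
        this.nativeBounds = nativeBounds;
    }

    public void setScale(float scale) {
        this.scale = scale;
    }

    // True if the touched point (in view coordinates) lands inside this hot spot.
    public boolean contains(float touchX, float touchY) {
        return nativeBounds.contains(touchX / scale, touchY / scale);
    }
}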