I am selecting an area of an image displayed in an ImageView, then processing the selected pixels of the bitmap behind it. But the pixels that end up modified are not the ones the user selected. I know I have to map coordinates from the displayed image to the actual bitmap. See the images below for a better understanding of my problem.
The user selects some pixels using a circular selector.
After processing, the affected pixels are different from the ones the user selected.
I have tried something like this to get accurate pixel coordinates:
Bitmap image = EyeColorChangerActivity.getImageBitmap();
int displayedWidth = image.getWidth();
int displayedHeight = image.getHeight();
x = x * ((float) displayedWidth / mScreenWidth);
y = y * ((float) displayedHeight / mScreenHeight);
Set an OnTouchListener on your ImageView.
Inside its onTouch() method, use the following code to get the selected bitmap pixel:
@Override
public boolean onTouch(View v, MotionEvent event) {
    int x = (int) event.getX();
    int y = (int) event.getY();
    int pixel = mBitmap.getPixel(x, y);
    // process a new color for the selected pixel
    // ...
    // set the new color back on the bitmap pixel
    return false;
}
Related
We are developing image mapping for education.
The teacher can add a question with an image.
The answer scheme is based on the regions of the image the teacher touches.
For example:
"Which areas represent districts that have gold?"
The teacher then marks the correct answer (districts B and E) by pressing those regions of the picture as the answer scheme.
My questions are:
As a teacher, how do I define the answer scheme by touching the image, and what value do I store in the database?
As a student, how does the student press the correct answer?
Can anyone suggest or help me?
I'm a newbie in Android.
Thanks!
You should use two different images. The first is the original image; the second is a "map" image that contains zones of different colors. The map image must be saved with lossless compression (i.e. PNG), or the zone colors will shift.
Show the original image in the ImageView, and decode the map image to a Bitmap.
final ImageView imageView = ...; //TODO: bind imageView
imageView.setImageResource(R.drawable.original_image);
final Bitmap map = ...; //TODO: load map bitmap
imageView.setOnTouchListener((v, event) -> {
    final int x = (int) event.getX();
    final int y = (int) event.getY();
    final float scale = ...; //TODO: calc image scale
    final int realX = (int) (x * scale);
    final int realY = (int) (y * scale);
    final int color = map.getPixel(realX, realY);
    if (color == Color.RED) {
        // correct answer!
    } else {
        // something else
    }
    return true; // consume the touch
});
Sorry for my english.
In my app I have a scanner view which must scan two barcodes with the zxing library. The top barcode is in PDF417 format, the bottom one in DataMatrix. I used https://github.com/dm77/barcodescanner as a base, but with one main difference: I have to use image coordinates as the scan area. The main algorithm:
1. Depending on the current step, the scan activity passes the current scan area to the scanner view in screen coordinates. These coordinates are calculated as follows:
public static Rect getScanRectangle(View view) {
int[] l = new int[2];
view.measure(0, 0);
view.getLocationOnScreen(l);
return new Rect(l[0], l[1], l[0] + view.getMeasuredWidth(), l[1] + view.getMeasuredHeight());
}
2. In the scanner view's onPreviewFrame method, the camera preview size is received from the camera parameters. When I translated the byte data from the camera into an in-memory bitmap, I saw that it is rotated 90 degrees clockwise, and the camera resolution does not equal the screen resolution. So I have to map screen coordinates into camera (or surface view) coordinates:
private Rect normalizeScreenCoordinates(Rect input, int cameraWidth, int cameraHeight) {
    if (screenSize == null) {
        screenSize = new Point();
        Display display = activity.getWindowManager().getDefaultDisplay();
        display.getSize(screenSize);
    }
    // The camera frame is rotated 90 degrees relative to the screen,
    // so width and height are deliberately swapped here.
    int height = screenSize.x;
    int width = screenSize.y;
    float widthCoef = (float) cameraWidth / width;
    float heightCoef = (float) cameraHeight / height;
    // For the same reason, the rect's left/top and right/bottom are swapped.
    return new Rect((int) (input.top * widthCoef), (int) (input.left * heightCoef),
            (int) (input.bottom * widthCoef), (int) (input.right * heightCoef));
}
After that, the translated coordinates are passed into zxing, and on most test devices everything works fine. But not on a Nexus 5X. First, there is a serious gap between the display size and the size of activity.getWindow().getDecorView(). Maybe this is related to the status bar, which is translucent, so for some reason its height may not be accounted for. But even after I added a vertical offset, something is still wrong with the scan area. What may be the reason for this error?
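The 90-degree swap in normalizeScreenCoordinates is easy to get subtly wrong, so it helps to check it in isolation. Below is a plain-Java version of the same mapping that avoids the Android Rect so it runs off-device; the {left, top, right, bottom} array representation and the names are mine.

```java
public class ScanAreaMapper {
    // Map a rect given in portrait screen coordinates {left, top, right, bottom}
    // into the coordinate space of a camera frame that is rotated 90 degrees
    // clockwise relative to the screen. screenW/screenH is the portrait screen
    // size; cameraW/cameraH is the landscape preview size.
    public static int[] screenToCameraFrame(int[] r, int screenW, int screenH,
                                            int cameraW, int cameraH) {
        // In the rotated frame, the camera's width runs along the screen's height.
        float widthCoef = (float) cameraW / screenH;
        float heightCoef = (float) cameraH / screenW;
        return new int[] {
            (int) (r[1] * widthCoef),   // left   <- top
            (int) (r[0] * heightCoef),  // top    <- left
            (int) (r[3] * widthCoef),   // right  <- bottom
            (int) (r[2] * heightCoef)   // bottom <- right
        };
    }

    public static void main(String[] args) {
        // 1080x1920 portrait screen, 1920x1080 camera preview: both
        // coefficients are 1, so only the axis swap is visible.
        int[] out = screenToCameraFrame(new int[]{100, 200, 300, 400},
                1080, 1920, 1920, 1080);
        System.out.println(java.util.Arrays.toString(out));
    }
}
```

Feeding known corner cases through a helper like this is one way to separate the rotation math from device-specific issues such as the Nexus 5X decor-view gap.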
I have a set of ImageButtons placed within a RelativeLayout; each ImageButton contains a visible shape, while the rest of it is transparent. The buttons slightly overlap, and I am trying to code it so that when the transparent part of one button is pressed, that button is ignored and the button underneath it is checked instead.
I am currently using onTouch() with an OnTouchListener to get the x and y coordinates of the touch, but as far as I can tell those are relative to the whole screen. Is there a way to combine event.getX() and event.getY() with the button's position on screen, and then check whether the touched spot on the button is transparent?
Use View.getLocationOnScreen() and/or getLocationInWindow().
https://stackoverflow.com/a/2226048/1979882
To check the alpha channel, I would first capture the view into a bitmap:
public static Bitmap loadBitmapFromView(View v) {
    // Note: the drawing-cache API is deprecated on newer API levels;
    // there you can instead draw the view into a Canvas backed by a new Bitmap.
    v.setDrawingCacheEnabled(true);
    Bitmap bitmap = Bitmap.createBitmap(v.getDrawingCache());
    v.setDrawingCacheEnabled(false);
    return bitmap;
}
and then read the ARGB value of a particular pixel:
int pixel = bitmap.getPixel(x,y);
Now you can get each channel with:
int alphaValue = Color.alpha(pixel);
int redValue = Color.red(pixel);
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);
https://stackoverflow.com/a/31775271/1979882
My activity has a 1280x800 background image, which I set using android:scaleType="centerCrop".
There's a flagstaff depicted in the background image, and I need to position another image (a "flag") on top of the flagstaff.
If the device's screen were exactly 1280x800, the flag's position would be (850, 520). But the screen size can vary, and Android scales and shifts the background image according to the centerCrop flag. Hence I somehow need to derive the scale and shift for the flag image so that it lands nicely on top of the flagstaff.
I have examined ImageView.java and found that scaleType is used to set a private Matrix mDrawMatrix, but I have no read access to this field as it's private.
So, given
@Override
public void onGlobalLayout()
{
ImageView bg = ...;
ImageView flag = ...;
int bgImageWidth = 1280;
int bgImageHeight = 800;
int flagPosX = 850;
int flagPosY = 520;
// What should I do here to place flag nicely?
}
You can get the size of the screen (context.getResources().getDisplayMetrics().widthPixels and .heightPixels) and calculate what part of the image is visible. For example, you could do something like this (not really tested, but you should get the idea):
private void placeFlag(Context context) {
    ImageView bg = new ImageView(context);
    ImageView flag = new ImageView(context);
    int bgImageWidth = 1280;
    int bgImageHeight = 800;
    int flagPosX = 850;
    int flagPosY = 520;
    int screenWidth = context.getResources().getDisplayMetrics().widthPixels;
    int screenHeight = context.getResources().getDisplayMetrics().heightPixels;
    // scale needed to fit the bg to the screen in each dimension
    double widthScale = (double) screenWidth / (double) bgImageWidth;
    double heightScale = (double) screenHeight / (double) bgImageHeight;
    // centerCrop uses the larger of the two, so the image fills the screen
    double realScale = Math.max(widthScale, heightScale);
    // the scaled image is centered, so the overflow is cropped equally on both sides
    double offsetX = (screenWidth - bgImageWidth * realScale) / 2;
    double offsetY = (screenHeight - bgImageHeight * realScale) / 2;
    // position for the flag in screen coordinates
    int flagRealX = (int) (flagPosX * realScale + offsetX);
    int flagRealY = (int) (flagPosY * realScale + offsetY);
}
Also, you don't necessarily have to do this in onGlobalLayout(); you could do it in onCreate(), or inside the constructor if you want a custom view.
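One way to sanity-check the math off-device is to isolate the centerCrop mapping into a pure helper. This is my own sketch (class and method names are mine): the image is uniformly scaled by the larger screen/image ratio, then centered, so the overflowing dimension gets a negative offset.

```java
public class CenterCropMapper {
    // Map a point given in the source image's pixel coordinates to screen
    // coordinates under centerCrop: the image is uniformly scaled so it
    // fills the screen, then centered, and the overflow is cropped.
    public static int[] imageToScreen(int imgX, int imgY,
                                      int imgW, int imgH,
                                      int screenW, int screenH) {
        float scale = Math.max((float) screenW / imgW, (float) screenH / imgH);
        float dx = (screenW - imgW * scale) / 2f; // horizontal centering offset (<= 0)
        float dy = (screenH - imgH * scale) / 2f; // vertical centering offset (<= 0)
        return new int[] { (int) (imgX * scale + dx), (int) (imgY * scale + dy) };
    }

    public static void main(String[] args) {
        // 1280x800 image on an 800x800 screen: scale = 1, dx = -240, dy = 0,
        // so image point (850, 520) appears at screen (610, 520).
        System.out.println(java.util.Arrays.toString(
                imageToScreen(850, 520, 1280, 800, 800, 800)));
    }
}
```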
You can use a LayerDrawable approach here when the layout is static: build a single drawable (a custom_drawable.xml) that layers the icon on top of the background image, and use that one drawable in the activity. For reference, see the LayerDrawable documentation on the Android developer site.
For the scaling issue across device resolutions, put appropriately sized images in the different drawable folders, and design separate layouts where needed.
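As a sketch, a minimal custom_drawable.xml for the LayerDrawable approach might look like this (the drawable names and inset values are placeholders, not from the question):

```xml
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- background image, drawn first -->
    <item android:drawable="@drawable/background_image" />
    <!-- flag icon drawn on top; insets position it over the flagstaff -->
    <item
        android:drawable="@drawable/flag_icon"
        android:left="100dp"
        android:top="50dp" />
</layer-list>
```

Note that dp-based insets do not track centerCrop's scaling, so this only works when the background is not being cropped differently per device.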
I've drawn 5 bitmaps from .png files on a canvas - a head, a body and two arms and legs.
How can I detect which of these has been touched in onTouch? And, more specifically, can I detect whether the touch was within the actual shape of the body part?
What I mean is: the .pngs themselves are obviously rectangular, but does Android know, or can I tell it, to ignore the transparent parts of the image?
You could get the colour of the pixel touched and compare it to the colour of the background pixel at those coordinates.
EDIT: ok, ignore that; you can't read the colour of a pixel straight off the canvas. Instead, get the x,y of the touch and check whether any of the body-part images were touched. If so, subtract the image's position from the touch x,y and read that pixel of the image's bitmap, which will be either transparent or coloured.
public boolean onTouchEvent(MotionEvent event)
{
    int x = (int) event.getX();
    int y = (int) event.getY();
    int offsetx, offsety;
    for (int i = 0; i < NUM_OF_BODY_PARTS; i++)
    {
        if (bodyPartRect[i].intersects(x, y, x + 1, y + 1))
        {
            // translate screen coordinates into this bitmap's coordinates
            offsetx = x - bodyPartRect[i].left;
            offsety = y - bodyPartRect[i].top;
            if (bodyPartBMP[i].getPixel(offsetx, offsety) == Color.TRANSPARENT)
            {
                // touch landed on a transparent part: skip this body part
            }
        }
    }
    return true;
}
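The transparency test at the heart of this can be checked off-device by treating the bitmap as the flat ARGB pixel array that Bitmap.getPixels() would give you. The class and method names below are mine:

```java
public class HitTester {
    // Treat a bitmap as a flat ARGB pixel array (row-major, as returned by
    // Bitmap.getPixels()) and report whether the pixel at (x, y) is fully
    // transparent, i.e. its alpha byte is zero.
    public static boolean isTransparentAt(int[] pixels, int width, int x, int y) {
        int argb = pixels[y * width + x];
        return (argb >>> 24) == 0; // alpha is the top 8 bits of the ARGB int
    }

    public static void main(String[] args) {
        // 2x2 image: opaque red at (0,0), fully transparent elsewhere.
        int[] px = { 0xFFFF0000, 0x00000000, 0x00000000, 0x00000000 };
        System.out.println(isTransparentAt(px, 2, 0, 0)); // opaque pixel
        System.out.println(isTransparentAt(px, 2, 1, 0)); // transparent pixel
    }
}
```

On-device you would populate the array once per body part with Bitmap.getPixels() rather than calling getPixel() on every touch.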