Android memory handling with Bitmap in Xamarin.Android

Fellow developers, I seek your help.
I have a memory-related problem. I don't really know how to tackle it, so I will just present my code snippets. Bear in mind that, although this is Xamarin.Android, it also applies to plain Android.
I use a library that simply starts a camera intent. The library (or component) I am using is http://components.xamarin.com/view/xamarin.mobile. That is not really relevant, but maybe you can point me to other insights on why I should or shouldn't be using it.
Anyway, to start the camera and capture the input, I use the following code:
private void StartCamera() {
    var picker = new MediaPicker (this);
    if (!picker.IsCameraAvailable) {
        Toast.MakeText (this, "No available camera found!", ToastLength.Short).Show ();
    } else {
        var intent = picker.GetTakePhotoUI (new StoreCameraMediaOptions {
            Name = "photo.jpg",
            Directory = "photos"
        });
        StartActivityForResult (intent, 1);
    }
}
The OnActivityResult() method is called when I return from the camera intent. In this method, I do the following:
protected override async void OnActivityResult (int requestCode, Result resultCode, Intent data)
{
    // User canceled
    if (resultCode == Result.Canceled)
        return;

    System.GC.Collect ();

    dialog = new ProgressDialog (this);
    dialog.SetProgressStyle (ProgressDialogStyle.Spinner);
    dialog.SetIconAttribute (Android.Resource.Attribute.DialogIcon);
    dialog.SetTitle (Resources.GetString (Resource.String.dialog_picture_sending_title));
    dialog.SetMessage (Resources.GetString (Resource.String.dialog_picture_sending_text));
    dialog.SetCanceledOnTouchOutside (false);
    dialog.SetCancelable (false);
    dialog.Show ();

    MediaFile file = await data.GetMediaFileExtraAsync (this);
    await ConvertingAndSendingTask (file);
    dialog.Hide ();
    await SetupView ();
}
Then, in ConvertingAndSendingTask(), I convert the picture to the desired dimensions via a scaled bitmap. The code is as follows:
public async Task ConvertingAndSendingTask(MediaFile file) {
    try {
        System.GC.Collect();
        int targetW = 1600;
        int targetH = 1200;

        BitmapFactory.Options options = new BitmapFactory.Options();
        options.InJustDecodeBounds = true;
        Bitmap b = BitmapFactory.DecodeFile (file.Path, options);
        int photoW = options.OutWidth;
        int photoH = options.OutHeight;

        int scaleFactor = Math.Min(photoW/targetW, photoH/targetH);
        options.InJustDecodeBounds = false;
        options.InSampleSize = scaleFactor;
        options.InPurgeable = true;
        Bitmap bitmap = BitmapFactory.DecodeFile(file.Path, options);

        float resizeFactor = CalculateInSampleSize (options, 1600, 1200);
        Bitmap bit = Bitmap.CreateScaledBitmap(bitmap, (int)(bitmap.Width/resizeFactor), (int)(bitmap.Height/resizeFactor), false);
        bitmap.Recycle();
        System.GC.Collect();

        byte[] data = BitmapToBytes(bit);
        bit.Recycle();
        System.GC.Collect();

        await app.api.SendPhoto (data, app.ChosenAlbum.foreign_id);
        bitmap.Recycle();
        System.GC.Collect();
    } catch(Exception e) {
        System.Diagnostics.Debug.WriteLine (e.StackTrace);
    }
}
Well, this method works fine on newer devices with more memory, but on lower-end devices it ends up in an OutOfMemory error, or at least very close to one. Sometimes it goes well, but when I want to take a second or third picture it always ends in an OOM error.
I realize that what I am doing is memory intensive. For example:
First I need the initial width and height of the original image.
Then the image is sampled down (I don't really know if I am doing that correctly).
Then I load the sampled bitmap into memory.
Once it is loaded, my scaled bitmap has to be in memory as well before I can Recycle() the first bitmap.
Ultimately I need a byte[] to send the bitmap over the web, but I have to create it before releasing my scaled bitmap.
Then I release my scaled bitmap and send the byte[].
As a final step, the byte[] needs to be released from memory as well. I already do that in my BitmapToBytes() method shown below, but I include it here in case it offers other insights.
static byte[] BitmapToBytes(Bitmap bitmap) {
    byte[] data = new byte[0];
    using (MemoryStream stream = new MemoryStream ())
    {
        bitmap.Compress (Bitmap.CompressFormat.Jpeg, 90, stream);
        stream.Close ();
        data = stream.ToArray ();
    }
    return data;
}
Does anybody see any places where I can optimize this process? I know I am loading too much into memory, but I can't think of another way.
It should be mentioned that I always want my images to be 1600x1200 (landscape) or 1200x1600 (portrait). I calculate that value as follows:
public static float CalculateInSampleSize(BitmapFactory.Options options,
        int reqWidth, int reqHeight) {
    // Raw height and width of image
    int height = options.OutHeight;
    int width = options.OutWidth;
    float inSampleSize = 1;

    if (height > reqHeight || width > reqWidth) {
        // Calculate ratios of height and width to requested height and width
        float heightRatio = ((float) height / (float) reqHeight);
        float widthRatio = ((float) width / (float) reqWidth);

        // Choose the smallest ratio as inSampleSize value, this will guarantee
        // a final image with both dimensions larger than or equal to the
        // requested height and width.
        inSampleSize = heightRatio < widthRatio ? heightRatio : widthRatio;
    }

    if (height < reqHeight || width < reqWidth) {
        // Calculate ratios of height and width to requested height and width
        float heightRatio = ((float) reqHeight / (float) height);
        float widthRatio = ((float) reqWidth / (float) width);

        // Choose the smallest ratio as inSampleSize value, this will guarantee
        // a final image with both dimensions larger than or equal to the
        // requested height and width.
        inSampleSize = heightRatio < widthRatio ? heightRatio : widthRatio;
    }

    return inSampleSize;
}
Does anybody have any recommendations or an alternative workflow?
Any help would be much appreciated!

This might be a very delayed reply, but it may help somebody who runs into the same issue.
Use the calculated InSampleSize value in the options when decoding the file to a bitmap. This by itself produces the scaled-down bitmap, instead of decoding at full size and then calling CreateScaledBitmap (see the sketch below).
If you are expecting a high-resolution image as output, it is difficult to avoid OutOfMemory issues, because an image with a higher resolution than the view does not provide any visible benefit, yet it still takes up precious memory and incurs additional performance overhead due to the extra scaling performed by the view. [Xamarin docs]
Calculate the target bitmap width and height from the ImageView's width and height; you can get these from the MeasuredHeight and MeasuredWidth properties. [Note: these are only valid after the view has been laid out.]
Consider using an async method to decode the file instead of running on the main thread (DecodeFileAsync).
For more detail, see http://appliedcodelog.blogspot.in/2015/07/avoiding-imagebitmap.html
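A minimal sketch of the first and last points, in Xamarin.Android C#. This is only an illustration, not the original poster's code: DecodeFileAsync is the generated async wrapper mentioned above, and CalculateInSampleSize here is a power-of-two helper along the lines of the Google version quoted in the related answers further down.
using System.IO;
using System.Threading.Tasks;
using Android.Graphics;

static int CalculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight)
{
    // Largest power of two that keeps both dimensions at or above the requested size.
    int inSampleSize = 1;
    while ((options.OutHeight / (inSampleSize * 2)) >= reqHeight &&
           (options.OutWidth / (inSampleSize * 2)) >= reqWidth)
    {
        inSampleSize *= 2;
    }
    return inSampleSize;
}

static async Task<byte[]> DecodeAndCompressAsync(string path, int reqWidth, int reqHeight)
{
    // Pass 1: read only the image dimensions; no pixel data is allocated.
    var options = new BitmapFactory.Options { InJustDecodeBounds = true };
    await BitmapFactory.DecodeFileAsync(path, options);

    // Pass 2: decode already sampled down, so the full-size bitmap never exists
    // in memory and no CreateScaledBitmap call is needed.
    options.InSampleSize = CalculateInSampleSize(options, reqWidth, reqHeight);
    options.InJustDecodeBounds = false;

    using (var bitmap = await BitmapFactory.DecodeFileAsync(path, options))
    using (var stream = new MemoryStream())
    {
        bitmap.Compress(Bitmap.CompressFormat.Jpeg, 90, stream);
        bitmap.Recycle();
        return stream.ToArray();
    }
}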

Related

Why is Bitmap loading still so slow even with Google's implementation?

I use these two functions, slightly modified from Google's code in the Android docs to use filepaths:
public static int calculateInSampleSize(
        BitmapFactory.Options options, int reqWidth, int reqHeight) {
    // Raw height and width of image
    final int height = options.outHeight;
    final int width = options.outWidth;
    int inSampleSize = 1;

    if (height > reqHeight || width > reqWidth) {
        final int halfHeight = height / 2;
        final int halfWidth = width / 2;

        // Calculate the largest inSampleSize value that is a power of 2 and keeps both
        // height and width larger than the requested height and width.
        while ((halfHeight / inSampleSize) >= reqHeight
                && (halfWidth / inSampleSize) >= reqWidth) {
            inSampleSize *= 2;
        }
    }

    return inSampleSize;
}
public static Bitmap decodeSampledBitmapFromFilePath(String pathName, int reqWidth, int reqHeight) {
    // First decode with inJustDecodeBounds=true to check dimensions
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeFile(pathName, options);

    // Calculate inSampleSize
    options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);

    // Decode bitmap with inSampleSize set
    options.inJustDecodeBounds = false;
    return BitmapFactory.decodeFile(pathName, options);
}
The idea is to map a scaled-down version of the Bitmap onto the ImageView rather than the full-size image, which would be wasteful.
mImageView.setImageBitmap(decodeSampledBitmapFromFilePath(pathToFile, 100, 100));
I implemented a feature where you press a button and it rotates to the next image, but there's still a significant lag on my phone compared to my emulator (it takes a moment for the ImageView to populate). Occasionally the app also crashes on my phone, and I can't replicate that on my emulator.
Is there a problem with the code I've posted above? Is there a problem with the way I am using the code?
Example:
public void reloadPic() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            final Bitmap bm = decodeSampledBitmapFromFilePath(filepath, mImageViewWidth, mImageViewHeight);
            getActivity().runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    mImageView.setImageBitmap(bm);
                }
            });
        }
    }).start();
}
Your code is jumping between threads several times. First you launch a thread. Then you wait for it to be scheduled. You decode on that thread. Then you post a command to the UI thread and wait for it to be scheduled. Then you post a draw command to the UI thread (that's part of what setImageBitmap does). Then you have to process any other commands that came in first. Then you actually draw the screen. There are really only three ways to speed this up:
1) Get rid of the thread. You shouldn't decode lots of images on the UI thread, but decoding one isn't too bad.
2) Store the images in the right size to begin with. This may mean creating thumbnails of the images ahead of time. Then you don't need to scale.
3) Preload your images. If there's only one button and you know which image it will load, load it before you need it, so that when the button is pressed you have it ready. This wastes a bit of memory, but only one image's worth. (It isn't a viable solution if you have a lot of possible next images.) See the sketch below.
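As a rough illustration of options 1 and 3, kept in Xamarin.Android C# for consistency with the main question (the Java version is analogous). DecodeSampledBitmapFromFilePath is assumed to be a port of the helper above, and imageView plus the target sizes are placeholders.
using System.Threading.Tasks;
using Android.Graphics;
using Android.Widget;

class ImageSwitcherSketch
{
    ImageView imageView;            // the view being populated (assumed to exist)
    int reqWidth = 100, reqHeight = 100;

    Bitmap preloaded;               // next image, decoded ahead of time
    string preloadedPath;

    public async Task PreloadAsync(string nextFilePath)
    {
        // Decode on a thread-pool thread so the UI stays responsive.
        preloaded = await Task.Run(() =>
            DecodeSampledBitmapFromFilePath(nextFilePath, reqWidth, reqHeight));
        preloadedPath = nextFilePath;
    }

    public void ShowImage(string filePath)
    {
        if (preloaded != null && preloadedPath == filePath)
        {
            // Already decoded: no thread hop, the ImageView fills immediately.
            imageView.SetImageBitmap(preloaded);
        }
        else
        {
            // Fallback (option 1): decoding a single image on the UI thread is acceptable.
            imageView.SetImageBitmap(DecodeSampledBitmapFromFilePath(filePath, reqWidth, reqHeight));
        }
    }

    static Bitmap DecodeSampledBitmapFromFilePath(string path, int reqWidth, int reqHeight)
    {
        // Same two-pass decode as shown above; omitted here for brevity.
        throw new System.NotImplementedException();
    }
}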

How to get the MvxGridView to be efficient and performant?

Using Xamarin and MvvmCross, I'm writing an Android application that is loading the images from an album into an MvxGridView with a custom binding:
<MvxGridView
    android:id="@+id/grid_Photos"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:gravity="center"
    android:numColumns="3"
    android:verticalSpacing="4dp"
    android:horizontalSpacing="4dp"
    android:stretchMode="columnWidth"
    android:fastScrollEnabled="true"
    local:MvxBind="ItemsSource AllPhotos"
    local:MvxItemTemplate="@layout/item_photo_thumbnail" />
Which uses item_photo_thumbnail.axml:
<ImageView
    local:MvxBind="PicturePath PhotoPath"
    style="@style/ImageView_Thumbnail" />
Here is the binding class:
public class PicturePathBinding : MvxTargetBinding
{
    private readonly ImageView _imageView;

    public PicturePathBinding(ImageView imageView)
        : base(imageView)
    {
        _imageView = imageView;
    }

    public override MvxBindingMode DefaultMode
    {
        get { return MvxBindingMode.OneWay; }
    }

    public override Type TargetType
    {
        get { return typeof(string); }
    }

    public override void SetValue(object value)
    {
        if (value == null)
        {
            return;
        }

        string path = value as string;
        if (!string.IsNullOrEmpty(path))
        {
            Java.IO.File imgFile = new Java.IO.File(path);
            if (imgFile.Exists())
            {
                // First decode with inJustDecodeBounds=true to check dimensions
                BitmapFactory.Options options = new BitmapFactory.Options();
                options.InJustDecodeBounds = true;
                BitmapFactory.DecodeFile(imgFile.AbsolutePath, options);

                // Calculate inSampleSize
                options.InSampleSize = CalculateInSampleSize(options, 100, 100);

                // Decode bitmap with inSampleSize set
                options.InJustDecodeBounds = false;
                Bitmap myBitmap = BitmapFactory.DecodeFile(imgFile.AbsolutePath, options);

                _imageView.SetImageBitmap(myBitmap);
            }
        }
    }

    protected override void Dispose(bool isDisposing)
    {
        if (isDisposing)
        {
            var target = Target as ImageView;
            if (target != null)
            {
                target.Dispose();
                target = null;
            }
        }
        base.Dispose(isDisposing);
    }

    private int CalculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight)
    {
        // Raw height and width of image
        int height = options.OutHeight;
        int width = options.OutWidth;
        int inSampleSize = 1;

        if (height > reqHeight || width > reqWidth)
        {
            int halfHeight = height / 2;
            int halfWidth = width / 2;

            // Calculate the largest inSampleSize value that is a power of 2 and keeps both
            // height and width larger than the requested height and width.
            while ((halfHeight / inSampleSize) > reqHeight &&
                   (halfWidth / inSampleSize) > reqWidth)
            {
                inSampleSize *= 2;
            }
        }

        return inSampleSize;
    }
}
The problem I'm having is that it is very sluggish and slow. I would love for each image to load asynchronously, but I don't know how to do that. In .NET (XAML), the GridView control does everything automatically (with virtualization), but I'm realizing that in Android this might have to be handled manually.
Can someone help me with this?
I am currently working with remote images, instead of local, so I have been using the Download Cache plugin that gives you the MvxImageView class. That in itself may give you some benefit.
So far my experience with Android is that everything runs in the foreground by default, for the most part. Right now, with all of the calculation code inside the binding class, that work is almost certainly being done in the foreground.
What I would do to make this run faster is:
Use something like an ObservableCollection for your ItemsSource.
Kick off another thread in the start (or Start) of your view model to add your items to the ObservableCollection. You can accomplish this easily with Task.Run().
Try to process as much as possible on that background thread for each item before adding it to the ObservableCollection.
When updating the ObservableCollection from the background thread, the update itself has to be done on the UI thread. This is easy if you are using MvxViewModel as the base for your view model:
this.InvokeOnMainThread(() => myObservableCollection.Add(myItem) );
Following that pattern (sketched below) should also help your Windows-based clients.
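A rough sketch of that pattern follows. This is not the poster's actual view model: PhotoItemViewModel, LoadThumbnail and _photoPaths are invented placeholders, and the MvxViewModel namespace depends on your MvvmCross version.
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Threading.Tasks;
using Android.Graphics;
using Cirrious.MvvmCross.ViewModels;   // namespace varies by MvvmCross version

public class PhotoItemViewModel
{
    public Bitmap Thumbnail { get; set; }
}

public class AlbumViewModel : MvxViewModel
{
    readonly List<string> _photoPaths = new List<string>();   // filled elsewhere

    public ObservableCollection<PhotoItemViewModel> AllPhotos { get; private set; }

    public AlbumViewModel()
    {
        AllPhotos = new ObservableCollection<PhotoItemViewModel>();
    }

    public void StartLoading()
    {
        // Call this from Start()/Init(); it pushes the heavy work off the UI thread.
        Task.Run(() => LoadPhotos());
    }

    void LoadPhotos()
    {
        foreach (var path in _photoPaths)
        {
            // Do as much as possible here, on the background thread:
            // decode the scaled-down thumbnail, read metadata, and so on.
            var item = new PhotoItemViewModel { Thumbnail = LoadThumbnail(path) };

            // Only the collection update itself has to run on the UI thread.
            InvokeOnMainThread(() => AllPhotos.Add(item));
        }
    }

    Bitmap LoadThumbnail(string path)
    {
        // Decode with InSampleSize, as in the binding code above; omitted here.
        throw new System.NotImplementedException();
    }
}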

Android : Maximum allowed width & height of bitmap

I'm creating an app that needs to decode large images to bitmaps to be displayed in an ImageView.
If I just try to decode them straight to a bitmap, I get the following error:
"Bitmap too large to be uploaded into a texture (1944x2592, max=2048x2048)"
So to be able to show images with too high a resolution, I'm using:
Bitmap bitmap = BitmapFactory.decodeFile(path);
if (bitmap.getHeight() >= 2048 || bitmap.getWidth() >= 2048) {
    DisplayMetrics metrics = new DisplayMetrics();
    getWindowManager().getDefaultDisplay().getMetrics(metrics);
    int width = metrics.widthPixels;
    int height = metrics.heightPixels;
    bitmap = Bitmap.createScaledBitmap(bitmap, width, height, true);
}
This works, but I don't really want to hardcode the maximum value of 2048 as I have in the if-statement now; I just can't find out how to get the maximum allowed bitmap size for a device.
Any ideas?
Another way of getting the maximum allowed size would be to loop through all EGL10 configurations and keep track of the largest size.
public static int getMaxTextureSize() {
    // Safe minimum default size
    final int IMAGE_MAX_BITMAP_DIMENSION = 2048;

    // Get EGL Display
    EGL10 egl = (EGL10) EGLContext.getEGL();
    EGLDisplay display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);

    // Initialise
    int[] version = new int[2];
    egl.eglInitialize(display, version);

    // Query total number of configurations
    int[] totalConfigurations = new int[1];
    egl.eglGetConfigs(display, null, 0, totalConfigurations);

    // Query actual list of configurations
    EGLConfig[] configurationsList = new EGLConfig[totalConfigurations[0]];
    egl.eglGetConfigs(display, configurationsList, totalConfigurations[0], totalConfigurations);

    int[] textureSize = new int[1];
    int maximumTextureSize = 0;

    // Iterate through all the configurations to locate the maximum texture size
    for (int i = 0; i < totalConfigurations[0]; i++) {
        // Only need to check the width since OpenGL textures are always squared
        egl.eglGetConfigAttrib(display, configurationsList[i], EGL10.EGL_MAX_PBUFFER_WIDTH, textureSize);

        // Keep track of the maximum texture size
        if (maximumTextureSize < textureSize[0])
            maximumTextureSize = textureSize[0];
    }

    // Release
    egl.eglTerminate(display);

    // Return largest texture size found, or default
    return Math.max(maximumTextureSize, IMAGE_MAX_BITMAP_DIMENSION);
}
From my testing, this is pretty reliable and doesn't require you to create an instance.
Performance-wise, this took 18 milliseconds to execute on my Note 2 and only 4 milliseconds on my G3.
This limit should be coming from the underlying OpenGL implementation. If you're already using OpenGL in your app, you can use something like this to get the maximum size:
int[] maxSize = new int[1];
gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
// maxSize[0] now contains max size (in both dimensions)
This shows that both my Galaxy Nexus and my Galaxy S2 have a maximum of 2048x2048.
Unfortunately, if you're not already using OpenGL, the only way to get an OpenGL context to call this from is to create one (including the surface view, etc.), which is a lot of overhead just to query a maximum size.
This will decode and scale the image before it is loaded into memory; just change the landscape and portrait values to the size you actually want:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeFile(path, options);
int imageHeight = options.outHeight;
int imageWidth = options.outWidth;
String imageType = options.outMimeType;

if (imageWidth > imageHeight) {
    options.inSampleSize = calculateInSampleSize(options, 512, 256); // if landscape
} else {
    options.inSampleSize = calculateInSampleSize(options, 256, 512); // if portrait
}
options.inJustDecodeBounds = false;
bitmap = BitmapFactory.decodeFile(path, options);
The method for calculating the sample size:
public static int calculateInSampleSize(
        BitmapFactory.Options options, int reqWidth, int reqHeight) {
    // Raw height and width of image
    final int height = options.outHeight;
    final int width = options.outWidth;
    int inSampleSize = 1;

    if (height > reqHeight || width > reqWidth) {
        // Calculate ratios of height and width to requested height and width
        final int heightRatio = Math.round((float) height / (float) reqHeight);
        final int widthRatio = Math.round((float) width / (float) reqWidth);

        // Choose the smallest ratio as inSampleSize value, this will guarantee
        // a final image with both dimensions larger than or equal to the
        // requested height and width.
        inSampleSize = heightRatio < widthRatio ? heightRatio : widthRatio;
    }

    return inSampleSize;
}
If you're on API level 14+ (ICS) you can use the getMaximumBitmapWidth and getMaximumBitmapHeight functions on the Canvas class. This would work on both hardware accelerated and software layers.
I believe the Android hardware must at least support 2048x2048, so that would be a safe lowest value. On software layers, the max size is 32766x32766.
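In Xamarin.Android C# (matching the main question) this could look roughly like the sketch below, assuming the binding exposes those getters as the MaximumBitmapWidth / MaximumBitmapHeight properties on Canvas, which is how simple Java getters are normally mapped. Reading them from the Canvas handed to OnDraw means a hardware-accelerated window reports the real GPU limit:
using Android.Content;
using Android.Graphics;
using Android.Util;
using Android.Views;

public class MaxBitmapProbeView : View
{
    // Safe defaults until the first draw pass has run.
    public int MaxBitmapWidth { get; private set; }
    public int MaxBitmapHeight { get; private set; }

    public MaxBitmapProbeView(Context context) : base(context)
    {
        MaxBitmapWidth = 2048;
        MaxBitmapHeight = 2048;
    }

    protected override void OnDraw(Canvas canvas)
    {
        base.OnDraw(canvas);

        // On a hardware-accelerated window this reflects the GPU texture limit;
        // on a software canvas it is the much larger software limit.
        MaxBitmapWidth = canvas.MaximumBitmapWidth;
        MaxBitmapHeight = canvas.MaximumBitmapHeight;
        Log.Debug("MaxBitmapProbeView", "max bitmap: " + MaxBitmapWidth + "x" + MaxBitmapHeight);
    }
}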
The 2048x2048 limit is for the Galaxy Nexus. The Galaxy Nexus is an xhdpi device, and perhaps you put the image in the wrong density bucket. I moved a 720x1280 image from drawable to drawable-xhdpi and it worked.
Thanks go to Romain Guy for the answer; here's the link to his answer.

Change image to a picture in Gallery

I am working on an application that needs to obtain a random picture from the Android device's built-in picture gallery and then display it on the screen. Here's what I have to work with:
-An ImageView object called picture
-The ID, TITLE, DATA, MIME_TYPE, and SIZE of the picture I want to display
I think the problem is I don't know what information I need to put in this line:
picture.setImageResource(???);
Here's all my code to give you some idea of what I'm trying to do:
public void generateImage() {
    // Get list of images accessible by cursor
    ContentResolver cr = getActivity().getContentResolver();
    String[] columns = new String[] {
            ImageColumns._ID,
            ImageColumns.TITLE,
            ImageColumns.DATA,
            ImageColumns.MIME_TYPE,
            ImageColumns.SIZE };
    Cursor cursor = cr.query(MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
            columns, null, null, null);

    // Collect picture IDs
    cursor.moveToFirst();
    ArrayList<Integer> picList = new ArrayList<Integer>();
    while (!cursor.isAfterLast()) {
        picList.add(cursor.getInt(0));
        cursor.moveToNext();
    } // end while

    // Generate random number
    int imageCount = picList.size() - 1;
    Log.d("NUMBER OF IMAGES", "Image Count = " + imageCount);
    Random random = new Random();
    int randomInt = random.nextInt(imageCount);

    // Extract the image
    int picID = picList.get(randomInt);
    picture.setImageResource(picID);
} // end generateImage
Anybody have any idea what I need to do in order to set the picture object to the picture that I have from the gallery (preferably using the information I've already obtained)?
Looks like you are saving an array of cursor positions... that will probably not give you much to work with. I think you'd rather populate an ArrayList with ImageColumns._ID values and then use each ID to build a URI to open the image (see the sketch after the snippet below).
// Extracting the image
columnIndex = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.DATA);
cursor.moveToPosition(randomInt);
filePath = cursor.getString(columnIndex);
imageView.setImageBitmap(decodeSampledBitmapFromResource(filePath, width, height));
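Alternatively, the _ID-based route mentioned above could look roughly like this sketch (written in Xamarin.Android C# to match the main question; the Java calls are the same). The method name and the pre-filled options are placeholders:
using System.IO;
using Android.Content;
using Android.Graphics;
using Android.Provider;

// Build a content:// URI from the stored _ID and decode through the ContentResolver
// instead of relying on the DATA file path.
Bitmap LoadPictureById(ContentResolver resolver, long imageId, BitmapFactory.Options options)
{
    var uri = ContentUris.WithAppendedId(MediaStore.Images.Media.ExternalContentUri, imageId);
    using (Stream stream = resolver.OpenInputStream(uri))
    {
        // options can carry an InSampleSize computed beforehand, as in the helpers below.
        return BitmapFactory.DecodeStream(stream, null, options);
    }
}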
And for using bitmaps efficiently so that you do not run into OutOfMemory exceptions, here are two functions straight from the Android developer page: http://developer.android.com/training/displaying-bitmaps/load-bitmap.html
public Bitmap decodeSampledBitmapFromResource(String path, int reqWidth, int reqHeight) {
    // First decode with inJustDecodeBounds = true to check dimensions
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeFile(path, options);

    // Calculate inSampleSize
    options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);

    // Decode bitmap with inSampleSize set
    options.inJustDecodeBounds = false;
    return BitmapFactory.decodeFile(path, options);
}
public static int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight) {
    // Raw height and width of image
    final int height = options.outHeight;
    final int width = options.outWidth;
    int inSampleSize = 1;

    if (height > reqHeight || width > reqWidth) {
        // Calculate ratios of height and width to requested height and width
        final int heightRatio = Math.round((float) height / (float) reqHeight);
        final int widthRatio = Math.round((float) width / (float) reqWidth);

        // Choose the smallest ratio as inSampleSize value, this will guarantee
        // a final image with both dimensions larger than or equal to the
        // requested height and width.
        inSampleSize = heightRatio < widthRatio ? heightRatio : widthRatio;
    }

    return inSampleSize;
}

Android imageview get pixel color from scaled image

My home automation app has a feature where people can upload images to their phone with floorplans and dashboards that they can use to control their home automation software. I have them upload two images: one visible image with the graphics that they want displayed, and a second color map with solid colors corresponding to the areas of the visible image they want to target. Both images have to be the same size, pixel-wise.
When they tap the screen, I want to get the color from the color-map overlay and then do whatever action has been associated with that color. The problem is, the scaling of the image is screwing me up. The images they use can be larger than the device screen, so I scale them to fit within the display. I don't really need pinch-to-zoom capability right now, but I might implement it later. For now, I just want the image to be displayed at the largest size that fits on the screen.
So, my question is, how can I modify this code so that I get the correct touch-point color from the scaled image? The image scaling itself seems to be working fine: it is scaled and displays correctly. I just can't get the right touch point.
final Bitmap bm = decodeSampledBitmapFromFile(visible_image, size, size);
if (bm != null) {
    imageview.setScaleType(ImageView.ScaleType.CENTER_INSIDE);
    imageview.setImageBitmap(bm);
}

final Bitmap bm2 = decodeSampledBitmapFromFile(image_overlay, size, size);
if (bm2 != null) {
    overlayimageview.setScaleType(ImageView.ScaleType.CENTER_INSIDE);
    overlayimageview.setImageBitmap(bm2);

    imageview.setOnTouchListener(new OnTouchListener() {
        @Override
        public boolean onTouch(View v, MotionEvent mev) {
            DecodeActionDownEvent(v, mev, bm2);
            return false;
        }
    });
}
private void DecodeActionDownEvent(View v, MotionEvent ev, Bitmap bm2)
{
    xCoord = Integer.valueOf((int) ev.getRawX());
    yCoord = Integer.valueOf((int) ev.getRawY());
    try {
        // Here's where the trouble is.
        // Returns the value of the unscaled pixel at the scaled touch point?
        colorTouched = bm2.getPixel(xCoord, yCoord);
    } catch (IllegalArgumentException e) {
        colorTouched = Color.WHITE; // nothing happens when touching white
    }
}
private static Bitmap decodeSampledBitmapFromFile(String fileName,
        int reqWidth, int reqHeight) {
    // code from
    // http://developer.android.com/training/displaying-bitmaps/load-bitmap.html
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeFile(fileName, options);

    // Calculate inSampleSize
    options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);

    // Decode bitmap with inSampleSize set
    options.inJustDecodeBounds = false;
    return BitmapFactory.decodeFile(fileName, options);
}
private static int calculateInSampleSize(
        BitmapFactory.Options options, int reqWidth, int reqHeight) {
    // code from
    // http://developer.android.com/training/displaying-bitmaps/load-bitmap.html
    // Raw height and width of image
    final int height = options.outHeight;
    final int width = options.outWidth;
    int inSampleSize = 1;

    if (height > reqHeight || width > reqWidth) {
        if (width > height) {
            inSampleSize = Math.round((float) height / (float) reqHeight);
        } else {
            inSampleSize = Math.round((float) width / (float) reqWidth);
        }
    }

    return inSampleSize;
}
I figured it out. I replaced
xCoord = Integer.valueOf((int)ev.getRawX());
yCoord = Integer.valueOf((int)ev.getRawY());
with
Matrix inverse = new Matrix();
v.getImageMatrix().invert(inverse);
float[] touchPoint = new float[] {ev.getX(), ev.getY()};
inverse.mapPoints(touchPoint);
xCoord = Integer.valueOf((int)touchPoint[0]);
yCoord = Integer.valueOf((int)touchPoint[1]);
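For Xamarin users (as in the main question), the same fix would look roughly like the sketch below; the API calls mirror the Java ones above, and v is assumed to be the ImageView that received the touch.
using Android.Graphics;
using Android.Views;
using Android.Widget;

// Map the touch point from view coordinates back into bitmap coordinates.
void DecodeActionDownEvent(ImageView v, MotionEvent ev, Bitmap overlay)
{
    var inverse = new Matrix();
    v.ImageMatrix.Invert(inverse);

    var touchPoint = new float[] { ev.GetX(), ev.GetY() };
    inverse.MapPoints(touchPoint);

    int xCoord = (int) touchPoint[0];
    int yCoord = (int) touchPoint[1];

    // Bounds checking omitted; the original code catches IllegalArgumentException instead.
    int color = overlay.GetPixel(xCoord, yCoord);
}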
After 7 years I have to provide another answer, because after upgrading from an LG G6 to an LG G8s the code from the accepted answer stopped returning the correct color.
This solution works on all the phones I found at home, but who knows... maybe even this one is not universal.
(Using Xamarin C#, but the principle is easy to understand.)
First we have to store the actual ImageView size as float so that we get floating-point division:
private float imageSizeX;
private float imageSizeY;

private void OnCreate(...) {
    ...
    FindViewById<ImageView>(Resource.Id.colpick_Image).LayoutChange += OnImageLayout;
    FindViewById<ImageView>(Resource.Id.colpick_Image).Touch += Image_Touch;
    ...
}

// Event called every time the layout of our ImageView is updated
private void OnImageLayout(object sender, View.LayoutChangeEventArgs e) {
    imageSizeX = Image.Width;
    imageSizeY = Image.Height;
    Image.LayoutChange -= OnImageLayout; // Just need it once, then unsubscribe
}
// Called every time user touches/is touching the ImageView
private void Image_Touch(object sender, View.TouchEventArgs e) {
    MotionEvent m = e.Event;
    ImageView img = sender as ImageView;
    int x = Convert.ToInt32(m.GetX(0));
    int y = Convert.ToInt32(m.GetY(0));

    Bitmap bmp = (img.Drawable as BitmapDrawable).Bitmap;

    // The scale value (how many times is the image bigger)
    // I only use it with images where Width == Height, so I get
    // scaleX == scaleY
    float scaleX = bmp.Width / imageSizeX;
    float scaleY = bmp.Height / imageSizeY;

    x = (int)(x * scaleX);
    y = (int)(y * scaleY);

    // Check for invalid values/outside of image bounds etc...

    // Get the correct Color ;)
    int color = bmp.GetPixel(x, y);
}
