Android 4.3 ImageView ScaleType.MATRIX

Today I set up the new Android JB 4.3 on my Nexus 7 and tried to run my application.
Everything works as it should, except for one little thing with ImageViews using ScaleType.MATRIX.
Basically, my application has an ImageView as a background and, in response to ViewPager callbacks, I move the focused part of the image by updating the Matrix I gave to the ImageView via setImageMatrix( Matrix matrix ).
The problem seems to be that I can't update the matrix in place anymore; I have to instantiate a new one and pass it to the ImageView.
I managed to work around it by instantiating a new Matrix every time, but that seems awfully memory-expensive compared to the old approach.
Is this a BUG?
Is there a way to update the Matrix? (By the way, I already tried invalidate() on the ImageView, etc.)
NOT WORKING
private void updateMatrix( final int page, final double offset ) {
    double pagePosition = page + offset;
    Matrix matrix = imageView.getImageMatrix();
    matrix.setScale( scale, scale );
    matrix.postTranslate( -(float) ( pagePosition * pageWidth ), 0 );
    imageView.setImageMatrix( matrix );
    imageView.invalidate();
}
WORKING
private void updateMatrix( final int page, final double offset ) {
    double pagePosition = page + offset;
    Matrix matrix = new Matrix();
    matrix.setScale( scale, scale );
    matrix.postTranslate( -(float) ( pagePosition * pageWidth ), 0 );
    imageView.setImageMatrix( matrix );
    imageView.invalidate();
}
EDIT:
In the first case the image is shown at the top-left corner of the ImageView without any scale or translation applied, as if the matrix had been reset to identity.

Just keep your Matrix as a field instead of retrieving it from the ImageView and you'll be happy :)
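For example, a minimal sketch of that approach, reusing the names from the question (scale, pageWidth and imageView are assumed to exist as in the original code):
// Keep the Matrix as a field and reuse it instead of calling getImageMatrix()
private final Matrix pagerMatrix = new Matrix();

private void updateMatrix( final int page, final double offset ) {
    double pagePosition = page + offset;
    pagerMatrix.setScale( scale, scale );   // setScale resets the matrix, so no stale state accumulates
    pagerMatrix.postTranslate( -(float) ( pagePosition * pageWidth ), 0 );
    imageView.setImageMatrix( pagerMatrix );
}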

There may be a bug with ImageView scaling starting with 4.3. See my question and answer about this bug.

Related

BitmapTransformation in the Glide library not working as expected

I'm new to the Glide library, following the Transformations guide found here: https://github.com/bumptech/glide/wiki/Transformations
I'm trying to create a custom transformation, but when I place a breakpoint in the Transformation class's transform method, I can see that it is never called.
Below is my code:
private static class CustomTransformation extends BitmapTransformation {
    private Context aContext;

    public CustomTransformation(Context context) {
        super(context);
        aContext = context;
    }

    @Override
    protected Bitmap transform(BitmapPool pool, Bitmap toTransform, int outWidth, int outHeight) {
        return bitmapChanger(toTransform, 1080, (int) aContext.getResources().getDimension(R.dimen.big_image));
    }

    @Override
    public String getId() {
        return "some_id";
    }
}
private static Bitmap bitmapChanger(Bitmap bitmap, int desiredWidth, int desiredHeight) {
    float originalWidth = bitmap.getWidth();
    float originalHeight = bitmap.getHeight();
    float scaleX = desiredWidth / originalWidth;
    float scaleY = desiredHeight / originalHeight;
    // Use the larger of the two scales to maintain aspect ratio
    float scale = Math.max(scaleX, scaleY);
    Matrix matrix = new Matrix();
    matrix.postScale(scale, scale);
    // If the scaleY is greater, we need to center the image
    if (scaleX < scaleY) {
        float tx = (scale * originalWidth - desiredWidth) / 2f;
        matrix.postTranslate(-tx, 0f);
    }
    return Bitmap.createBitmap(bitmap, 0, 0, (int) originalWidth, (int) originalHeight, matrix, true);
}
I've tried initiating Glide in two ways:
Glide.with(this).load(url).asBitmap().transform(new CustomTransformation(this)).into(imageView);
and
Glide.with(this).load(url).bitmapTransform(new CustomTransformation(this)).into(imageView);
But neither works. Any ideas? Again, I'm not looking for advice on the Matrix itself; I just don't understand why transform(...) isn't being called at all. Thanks!
You're most likely experiencing caching issues. The first time you compiled and executed your code, the result of the transformation was cached, so the next time it doesn't have to be applied to the same source image.
Each transformation has a getId() method which is used to determine whether the transformation result has changed. Usually transformations don't change; they are either applied or not. You can change the id on every build while developing, but it could be tedious.
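For example, one low-tech way to use that mechanism while iterating (the id string below is just an illustration, not from the original code):
@Override
public String getId() {
    // Bump the suffix whenever the transform logic changes so old cached results are not reused
    return "com.example.CustomTransformation.v2";
}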
To work around this problem you can add the following two calls to your Glide load line:
// TODO remove after transformation is done
.diskCacheStrategy(SOURCE) // override default RESULT cache and apply transform always
.skipMemoryCache(true) // do not reuse the transformed result while running
The first one can be changed to NONE, but then you would have to wait for the url to load from the internet every time instead of just reading the image from the phone. The second one is useful if you can navigate to and away from the transformation in question and want to debug it, for example. It saves you from needing a restart after every load to clear the memory cache.
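Put together, the load line from the question would look roughly like this while debugging (a sketch assuming the Glide 3.x API used in the question, with SOURCE written out as DiskCacheStrategy.SOURCE):
Glide.with(this)
     .load(url)
     .asBitmap()
     .transform(new CustomTransformation(this))
     .diskCacheStrategy(DiskCacheStrategy.SOURCE) // TODO remove after transformation is done
     .skipMemoryCache(true)                       // TODO remove after transformation is done
     .into(imageView);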
Don't forget to remove these after you're done with the Transformation's development, because they affect production performance a lot and should be used after much consideration only, if at all.
Note
It looks like you're trying to resize your image to a certain size before loading; you can use .override(width, height) in combination with .centerCrop()/.fitCenter()/.dontTransform() for that.
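For instance, a rough sketch of that alternative, reusing the 1080 width and R.dimen.big_image from the question and assuming the call runs in the same Activity as the original load calls (whether these are the right target sizes is an assumption):
int targetHeight = getResources().getDimensionPixelSize(R.dimen.big_image);
Glide.with(this)
     .load(url)
     .override(1080, targetHeight) // resize while decoding instead of using a custom transformation
     .centerCrop()                 // or .fitCenter(), depending on the crop behaviour you want
     .into(imageView);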

Converting Camera Coordinates to Custom View Coordinates

I am trying to make a simple face detection app consisting of a SurfaceView (essentially a camera preview) and a custom View (for drawing purposes) stacked on top. The two views are essentially the same size, stacked on one another in a RelativeLayout. When a person's face is detected, I want to draw a white rectangle on the custom View around their face.
The Camera.Face.rect object returns the face bound coordinates using the coordinate system explained here and the custom View uses the coordinate system described in the answer to this question. Some sort of conversion is needed before I can use it to draw on the canvas.
Therefore, I wrote an additional method, ScaleFacetoView(), in my custom view class (below). I redraw the custom view every time a face is detected by overriding the onFaceDetection() method. The result is that the white box appears correctly when a face is in the center. The problem I noticed is that it does not correctly track my face when it moves to other parts of the screen.
Namely, if I move my face:
Up - the box goes left
Down - the box goes right
Right - the box goes upwards
Left - the box goes down
I seem to have incorrectly mapped the values when scaling the coordinates. The Android docs provide this method of converting using a matrix, but it is rather confusing and I have no idea what it is doing. Can anyone provide some code showing the correct way of converting Camera.Face coordinates to View coordinates?
Here's the code for my ScaleFacetoView() method.
public void ScaleFacetoView(Face[] data, int width, int height, TextView a) {
    // Extract data from the face object and account for the 1000 value offset
    mLeft = data[0].rect.left + 1000;
    mRight = data[0].rect.right + 1000;
    mTop = data[0].rect.top + 1000;
    mBottom = data[0].rect.bottom + 1000;
    // Compute the scale factors
    float xScaleFactor = 1;
    float yScaleFactor = 1;
    if (height > width) {
        xScaleFactor = (float) width / 2000.0f;
        yScaleFactor = (float) height / 2000.0f;
    } else if (height < width) {
        xScaleFactor = (float) height / 2000.0f;
        yScaleFactor = (float) width / 2000.0f;
    }
    // Scale the face parameters
    mLeft = mLeft * xScaleFactor;     // X-coordinate
    mRight = mRight * xScaleFactor;   // X-coordinate
    mTop = mTop * yScaleFactor;       // Y-coordinate
    mBottom = mBottom * yScaleFactor; // Y-coordinate
}
As mentioned above, I call the custom view like so:
@Override
public void onFaceDetection(Face[] arg0, Camera arg1) {
    if (arg0.length == 1) {
        // Get aspect ratio of the screen
        View parent = (View) mRectangleView.getParent();
        int width = parent.getWidth();
        int height = parent.getHeight();
        // Modify xy values in the view object
        mRectangleView.ScaleFacetoView(arg0, width, height);
        mRectangleView.setInvalidate();
        // Toast.makeText( cc, "Redrew the face.", Toast.LENGTH_SHORT).show();
        mRectangleView.setVisibility(View.VISIBLE);
        // rest of code
Using the explanation Kenny gave, I managed to do the following.
This example works with the front-facing camera.
RectF rectF = new RectF(face.rect);
Matrix matrix = new Matrix();
matrix.setScale(1, 1);
matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);
matrix.mapRect(rectF);
The rectangle returned by the matrix has all the right coordinates to draw on the canvas.
If you are using the back camera, I think it is just a matter of changing the scale to:
matrix.setScale(-1, 1);
But I haven't tried that.
The Camera.Face class returns the face bound coordinates using the coordinate space of the image frame that the phone would save to its internal storage, rather than that of the image displayed in the camera preview. In my case, the images were saved in a different orientation from the camera preview, resulting in an incorrect mapping. I had to manually account for the discrepancy by taking the coordinates, rotating them counter-clockwise 90 degrees and flipping them on the y-axis prior to scaling them to the canvas used for the custom view.
EDIT:
It would also appear that you can't change the way the face bound coordinates are returned by modifying the camera capture orientation using the Camera.Parameters.setRotation(int) method either.
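For reference, the matrix-based conversion that the Android documentation describes (and that the question found confusing) looks roughly like the sketch below; displayOrientation and cameraInfo are assumed to be the same values already used to configure the preview, and view is the custom overlay View:
// Map driver coordinates (-1000..1000 on both axes) into view coordinates
Matrix matrix = new Matrix();
boolean mirror = (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT);
matrix.setScale(mirror ? -1 : 1, 1);    // mirror for the front camera
matrix.postRotate(displayOrientation);  // same value passed to setDisplayOrientation()
matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);
RectF faceRect = new RectF(face.rect);
matrix.mapRect(faceRect);               // faceRect is now in view coordinates, ready to draw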

Custom ImageView, ImageMatrix.mapPoints() and invert() inaccurate?

I wrote a custom ImageView (Android) for my app which works fine. For several purposes I need to map view coordinates (e.g. the coordinates of a click) to image coordinates (where in the image the click happened), as the image can be zoomed and scrolled. So far the methods below seemed to work fine:
private PointF viewPointToImgPointF(PointF viewPoint) {
    final float[] coords = new float[] { viewPoint.x, viewPoint.y };
    Matrix matrix = new Matrix();
    getImageMatrix().invert(matrix);
    matrix.mapPoints(coords); // --> PointF in image as scaled originally
    if (coords != null && coords.length > 1) {
        return new PointF(coords[0] * inSampleSize, coords[1] * inSampleSize);
    } else {
        return null;
    }
}

private PointF imgPointToViewPointF(PointF imgPoint) {
    final float[] coords = new float[] { imgPoint.x / inSampleSize, imgPoint.y / inSampleSize };
    getImageMatrix().mapPoints(coords);
    if (coords != null && coords.length > 1) {
        return new PointF(coords[0], coords[1]);
    } else {
        return null;
    }
}
In most cases I can use these methods without problems, but now I have noticed that they are not 100% accurate. I tried out the following:
PointF a = new PointF(100,100);
PointF b = imgPointToViewPointF(a);
PointF a2 = viewPointToImgPointF(b);
PointF b2 = imgPointToViewPointF(a2);
PointF a3 = viewPointToImgPointF(b2);
and got these values:
a: Point(100, 100)
b: Point(56, 569)
a2: Point(99, 99)
b2: Point(55, 568)
a3: Point(97, 97)
If everything worked correctly, all a-values and all b-values should have stayed the same.
I also found out that this little difference decreases towards the center of the image. If (100, 100) were the center of the image, the methods would deliver correct results!
Has anybody experienced something similar and maybe even has a solution? Or am I doing something wrong?
(inSampleSize is 1 for the image I tested; it represents the factor by which the image was downsampled to save memory.)
I haven't checked this exact scenario but it looks like your problem is with repeatedly casting the float back to an int and then using that value as a float again in the second iteration. The loss of detail in the cast will be compounded each time.
OK, it seems it really was a bug in Android < 4.4.
It works now on devices that were updated, but it still happens on devices with 4.3 and older.
So I assume there is not much I can do to avoid it on older devices (except doing the whole matrix calculation myself, which I surely won't...).
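For completeness, a minimal sketch of what doing that calculation manually could look like on the affected devices, assuming the image matrix only contains scale and translation (the helper name below is hypothetical and not part of the original view):
private PointF viewPointToImgPointManual(PointF viewPoint) {
    final float[] v = new float[9];
    getImageMatrix().getValues(v);
    // Invert an affine matrix that only scales and translates:
    // viewX = scaleX * imgX + transX  =>  imgX = (viewX - transX) / scaleX
    final float imgX = (viewPoint.x - v[Matrix.MTRANS_X]) / v[Matrix.MSCALE_X];
    final float imgY = (viewPoint.y - v[Matrix.MTRANS_Y]) / v[Matrix.MSCALE_Y];
    return new PointF(imgX * inSampleSize, imgY * inSampleSize);
}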

Android canvas.drawBitmap() rendering bug

[UPDATED with additional code]
I'm having a major problem with Android correctly rendering some bitmaps in my custom view's onDraw() method on some devices (Nexus 7 and 10, that I know of) but not all. It renders properly on the Android phones I have for testing. Here is the relevant snippet:
/* set up mImagePaint earlier */
mImagePaint.setAntiAlias(true);
mImagePaint.setFilterBitmap(true);
mImagePaint.setDither(true);
mImagePaint.setStyle(Paint.Style.FILL_AND_STROKE);
mImagePaint.setStrokeWidth(0);
mImagePaint.setColor(Color.WHITE);
protected void onDraw(Canvas canvas) {
    final float vw = getWidth();
    final float vh = getHeight();
    final float bw = mBitmap.getWidth();
    final float bh = mBitmap.getHeight();
    final float ba = bw / bh;
    final float va = vw / vh;
    if (va > ba) {
        final float top = (bh - ba / va * bh) / 2;
        mSrcRect.set(0, (int) top, (int) bw, (int) (bh - top));
    } else {
        final float left = (bw - va / ba * bw) / 2;
        mSrcRect.set((int) left, 0, (int) (bw - left), (int) bh);
    }
    mContentRect.set(0, 0, vw, vh);
    canvas.drawBitmap(mBitmap, mSrcRect, mContentRect, mImagePaint);
}
The results on the Nexus 7 and 10 are incorrect and render a wide white border, as shown below. The border is part of the bitmap rendering but not part of the original bitmap or source rect.
The correct (desired) result on a Samsung Galaxy phone:
The image and code shown in both examples above are exactly the same. I've already tried variations using a null paint, a null srcRect, and even the alternate method drawBitmap(bitmap, 0, 0, null), and I get the same results. Looking at the framework code, drawBitmap() of course calls directly into native methods whose source code I can't view.
Only a small number of images seem to exhibit this problem, and it seems as though mostly square images exhibit it. But here is one other non-square image that exhibits the problem:
Most of these images are slightly rotated by the custom view, and it now occurs to me that the rotation might be part of the problem, but maybe not, since the canvas itself isn't rotated, only when it's copied to the parent's backing bitmap by Android.
Any ideas? This is nuts!
Are you sure the pictures are large enough to display without you having to scale up on those devices?

Android problem with Image Rotate and Matrix

Hopefully this is an easy one because I've been trying all sorts of different ways to get rid of this.
I am making an android app which incorporates a clock animation. I've got everything working really well except one very annoying thing.
I have a second hand on the clock and I'm using the following code to rotate it around the second hand's center point. As you'll probably notice, I'm trying to make this look like an analogue second hand, so it sweeps instead of just ticking.
public float secdegrees, secondwidth, secondheight;
secondMatrix = new Matrix();
secondwidth = secondHand.getWidth();
secondheight = secondHand.getHeight();
secdegrees = anglepersec * secnow;
secdegrees += anglepluspermilli * millis;
secondMatrix.setRotate(secdegrees, secondwidth/2, secondheight / 2);
newSecond = Bitmap.createBitmap(secondHand, 0, 0,
(int) secondwidth, (int) secondheight, secondMatrix, true);
c.drawBitmap(newSecond, (centrex - newSecond.getWidth()/2),
((face.getHeight()/2) - newSecond.getHeight()/2), null);
It actually does just the job I want... almost.
The problem is the hand shakes/jiggles around the center point ever so slightly, but it's really noticeable and really spoils the aesthetics.
I pretty much suspect that it's the way that it's rounding the float value, but I was hoping that someone had experienced this before and had any ideas on how to get rid of it.
For reference, the second hand image was originally 74 px x 28 px and is (currently) a 74 x 74 pixel .png with the middle of the second hand exactly on the centre point. I've also tried making it 75 x 75 so that there is actually a central pixel too, but no luck.
Any help at all would be appreciated.
** UPDATE
I've tried to change the code in case the decimals were getting dropped, but still no luck I'm afraid. Here is option 2, which I've also tried and failed with:
secondMatrix = new Matrix();
secondwidth = secondHand.getWidth();
secondheight = secondHand.getHeight();
secdegrees = anglepersec * secnow;
secdegrees += anglepluspermilli * millis;
secondMatrix.setRotate(secdegrees, secondwidth/2, secondheight / 2);
newSecond = Bitmap.createBitmap(secondHand, 0, 0, (int) secondwidth,
(int) secondheight, secondMatrix, true);
float secW = newSecond.getWidth()/2;
float secH = newSecond.getHeight()/2;
// NEW CODE HERE
float calcdeg = secdegrees % 90;
calcdeg = (float) Math.toRadians(calcdeg);
float NegY = (float) ((secondwidth*Math.cos(calcdeg)) +
(secondwidth * Math.sin(calcdeg)));
c.drawBitmap(newSecond, centrex - NegY/2,
((face.getHeight()/2) - NegY/2), null);
I understand your problem. I have never encountered it myself, but it sounds pretty obvious to me: since the rotation changes the width and height of the image, your imprecision comes from centrex - NegY/2.
I have not tested, but I suggest you try:
Matrix matrix = new Matrix();
// first translate to place the center of the hand at Point(0,0)
matrix.setTranslate(-secondWidth/2, -secondHeight/2);
// rotate around Point(0,0)
matrix.postRotate(secdegrees);
// now place the pivot Point(0,0) at its expected location
matrix.postTranslate(centreX, centreY);
newSecond = Bitmap.createBitmap(secondHand, 0, 0, secondWidth, secondHeight, matrix, false);
c.drawBitmap(newSecond, 0, 0, null);
Of course, this is suboptimal, since the newSecond bitmap is much larger than it actually needs to be. So if your centrex and centrey are big, you might want to translate less than that, and then draw with a translation of the difference.
// now place the pivot at a position where the hand can be fully drawn, without imprecision on the future location of Point(0,0)
matrix.postTranslate(secondWidth, secondHeight);
newSecond = Bitmap.createBitmap(secondHand, 0, 0, secondWidth, secondHeight, matrix, false);
c.drawBitmap(newSecond, centrex - secondWidth, centrey - secondHeight, null);
Hope this helps.
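As a side note not taken from the original answer: a common way to avoid the jitter entirely is to rotate the canvas instead of creating a rotated bitmap every frame, which keeps sub-pixel precision and allocates nothing per draw. A rough sketch, assuming centrex/centrey are the clock centre used elsewhere in this thread:
c.save();
c.rotate(secdegrees, centrex, centrey);          // rotate the canvas around the clock centre
c.drawBitmap(secondHand,
        centrex - secondHand.getWidth() / 2f,    // draw the unrotated hand centred on the pivot
        centrey - secondHand.getHeight() / 2f,
        null);
c.restore();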
