I need to know why this extra space is added as a left margin. Because of it, the wallpaper is not set properly in my app: if I try to set the leftmost portion of the image as the wallpaper, the centre portion gets set instead, due to this extra-space margin.
cropImageAndSetWallpaper(android.net.Uri uri, com.android.wallpapercropper.WallpaperCropActivity$OnBitmapCroppedHandler onBitmapCroppedHandler, boolean finishActivityWhenDone)
boolean centerCrop = getResources().getBoolean(R.bool.center_crop);
// Get the crop
boolean ltr = mCropView.getLayoutDirection() == View.LAYOUT_DIRECTION_LTR;
Display d = getWindowManager().getDefaultDisplay();
Point displaySize = new Point();
d.getSize(displaySize);
boolean isPortrait = displaySize.x < displaySize.y;
Point defaultWallpaperSize = getDefaultWallpaperSize(getResources(),
getWindowManager());
// Get the crop
RectF cropRect = mCropView.getCrop();
Point inSize = mCropView.getSourceDimensions();
int cropRotation = mCropView.getImageRotation();
float cropScale = mCropView.getWidth() / (float) cropRect.width();
Matrix rotateMatrix = new Matrix();
rotateMatrix.setRotate(cropRotation);
float[] rotatedInSize = new float[] { inSize.x, inSize.y };
rotateMatrix.mapPoints(rotatedInSize);
rotatedInSize[0] = Math.abs(rotatedInSize[0]);
rotatedInSize[1] = Math.abs(rotatedInSize[1]);
// Due to rounding errors in the cropview renderer the edges can be slightly offset
// therefore we ensure that the boundaries are sanely defined
cropRect.left = Math.max(0, cropRect.left);
cropRect.right = Math.min(rotatedInSize[0], cropRect.right);
cropRect.top = Math.max(0, cropRect.top);
cropRect.bottom = Math.min(rotatedInSize[1], cropRect.bottom);
// ADJUST CROP WIDTH
// Extend the crop all the way to the right, for parallax
// (or all the way to the left, in RTL)
float extraSpace;
if (centerCrop) {
extraSpace = 2f * Math.min(rotatedInSize[0] - cropRect.right, cropRect.left);
} else {
extraSpace = ltr ? rotatedInSize[0] - cropRect.right : cropRect.left;
}
// Cap the amount of extra width
float maxExtraSpace = defaultWallpaperSize.x / cropScale - cropRect.width();
extraSpace = Math.min(extraSpace, maxExtraSpace);
if (centerCrop) {
cropRect.left -= extraSpace / 2f;
cropRect.right += extraSpace / 2f;
} else {
if (ltr) {
cropRect.right += extraSpace;
} else {
cropRect.left -= extraSpace;
}
}
// ADJUST CROP HEIGHT
if (isPortrait) {
cropRect.bottom = cropRect.top + defaultWallpaperSize.y / cropScale;
} else { // LANDSCAPE
float extraPortraitHeight =
defaultWallpaperSize.y / cropScale - cropRect.height();
float expandHeight =
Math.min(Math.min(rotatedInSize[1] - cropRect.bottom, cropRect.top),
extraPortraitHeight / 2);
cropRect.top -= expandHeight;
cropRect.bottom += expandHeight;
}
final int outWidth = (int) Math.round(cropRect.width() * cropScale);
final int outHeight = (int) Math.round(cropRect.height() * cropScale);
Runnable onEndCrop = new Runnable() {
public void run() {
if (finishActivityWhenDone) {
setResult(Activity.RESULT_OK);
finish();
}
}
};
BitmapCropTask cropTask = new BitmapCropTask(this, uri,
cropRect, cropRotation, outWidth, outHeight, true, false, onEndCrop);
if (onBitmapCroppedHandler != null) {
cropTask.setOnBitmapCropped(onBitmapCroppedHandler);
}
cropTask.execute();
Extra space is added to the wallpaper to allow for a parallax effect. In case your phone uses a right-to-left layout, this space is added to the left of the crop you chose. Google's launcher starts out on the rightmost page in RTL layouts.
If you choose a crop on the left side of the picture, the expansion will take place on the right side of the crop, and you will mainly see this extension on the main page of the launcher.
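If you instead want the wallpaper to match exactly the crop the user selected, one option (a sketch against the code above, not the stock behaviour) is to force the extension to zero so that the "ADJUST CROP WIDTH" block becomes a no-op:
// Hypothetical modification: replace the extraSpace computation so that
// no extra width is appended on either side of the chosen crop.
float extraSpace = 0f;
// Everything that follows stays unchanged; since extraSpace is 0,
// cropRect is left untouched and the output size is derived purely
// from the selected crop:
final int outWidth = (int) Math.round(cropRect.width() * cropScale);
final int outHeight = (int) Math.round(cropRect.height() * cropScale);
Keep in mind that without the extra width the launcher has nothing to scroll across, so the wallpaper will appear static (no parallax) when swiping between pages.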
I am trying to take a picture and save it to a specified path. I have attached the script to a RawImage. Initially I tried Bart's answer, but the image had a different rotation and was flipped, so I added some code to adjust the rotation and flipping to correct the view. Even though the camera view now looks correct, the video feed coming from the camera looks too wide and is not clear.
Attaching screenshot and code.
private WebCamTexture camTexture;
// public RawImage Img;
// Start is called before the first frame update
void Start()
{
camTexture = new WebCamTexture();
WebCamDevice[] devices = WebCamTexture.devices;
if (devices.Length > 0)
{
camTexture.Play();
//Code below to adjust rotation
float rotationangle = (360 - camTexture.videoRotationAngle);
Quaternion rotQuaternion = new Quaternion();
rotQuaternion.eulerAngles = new Vector3(0, 0, rotationangle);
this.transform.rotation = rotQuaternion;
}
}
// Update is called once per frame
void Update()
{
GetComponent<RawImage>().texture = camTexture;
//CODE TO FLIP
float scaleY = camTexture.videoVerticallyMirrored ? -1f : 1f;
this.GetComponent<RawImage>().rectTransform.localScale = new Vector3(1f, scaleY, 1f);
}
public void PicTake()
{
TakePhoto();
}
How can I correct this?
I had similar troubles when I was testing with Android, iOS, Mac and PC devices. Below is the script I used to solve the scaling and rotation problems.
It uses a Unity Quad as a background plane and fills the screen.
void CalculateBackgroundQuad()
{
Camera cam = Camera.main;
ScreenRatio = (float)Screen.width / (float)Screen.height;
BackgroundQuad.transform.SetParent(cam.transform);
BackgroundQuad.transform.localPosition = new Vector3(0f, 0f, cam.farClipPlane / 2f);
float videoRotationAngle = webCamTexture.videoRotationAngle;
BackgroundQuad.transform.localRotation = baseRotation * Quaternion.AngleAxis(webCamTexture.videoRotationAngle, Vector3.forward);
float distance = cam.farClipPlane / 2f;
float frustumHeight = 2.0f * distance * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
BackgroundQuad.transform.localPosition = new Vector3(0f, 0f, distance);
Vector3 QuadScale = new Vector3(1f, frustumHeight, 1f);
//adjust the scaling for portrait Mode & Landscape Mode
if (videoRotationAngle == 0 || videoRotationAngle == 180)
{
//landscape mode
TextureRatio = (float)(webCamTexture.width) / (float)(webCamTexture.height);
if (ScreenRatio > TextureRatio)
{
float SH = ScreenRatio / TextureRatio;
float TW = TextureRatio * frustumHeight * SH;
float TH = frustumHeight * (webCamTexture.videoVerticallyMirrored ? -1 : 1) * SH;
QuadScale = new Vector3(TW, TH, 1f);
}
else
{
float TW = TextureRatio * frustumHeight;
QuadScale = new Vector3(TW, frustumHeight * (webCamTexture.videoVerticallyMirrored ? -1 : 1), 1f);
}
}
else
{
//portrait mode
TextureRatio = (float)(webCamTexture.height) / (float)(webCamTexture.width);
if (ScreenRatio > TextureRatio)
{
float SH = ScreenRatio / TextureRatio;
float TW = frustumHeight * -1f * SH;
float TH = TW * (webCamTexture.videoVerticallyMirrored ? 1 : -1) * SH;
QuadScale = new Vector3(TW, TH, 1f);
}
else
{
float TW = TextureRatio * frustumHeight;
QuadScale = new Vector3(frustumHeight * -1f, TW * (webCamTexture.videoVerticallyMirrored ? 1 : -1), 1f);
}
}
BackgroundQuad.transform.localScale = QuadScale;
}
The above script should work on all devices; it is just a simple math solution.
I have a rotated TextView and I want to drag and drop this view.
The problem is that the drag shadow has no rotation.
I found a solution for Android in Java, but it does not work for me.
Maybe I translated the code wrong:
How to drag a rotated DragShadow?
class CustomDragShdowBuilder : View.DragShadowBuilder
{
private View _view;
public CustomDragShdowBuilder(View view)
{
_view = view;
}
public override void OnDrawShadow(Canvas canvas)
{
double rotationRad = Math.ToRadians(_view.Rotation);
int w = (int)(_view.Width * _view.ScaleX);
int h = (int)(_view.Height * _view.ScaleY);
double s = Math.Abs(Math.Sin(rotationRad));
double c = Math.Abs(Math.Cos(rotationRad));
int width = (int)(w * c + h * s);
int height = (int)(w * s + h * c);
canvas.Scale(_view.ScaleX, _view.ScaleY, width / 2, height / 2);
canvas.Rotate(_view.Rotation, width / 2, height / 2);
canvas.Translate((width - _view.Width) / 2, (height - _view.Height) / 2);
base.OnDrawShadow(canvas);
}
public override void OnProvideShadowMetrics(Point shadowSize, Point shadowTouchPoint)
{
shadowTouchPoint.Set(shadowSize.X / 2, shadowSize.Y / 2);
base.OnProvideShadowMetrics(shadowSize, shadowTouchPoint);
}
}
I found a solution for Android in Java, but it does not work for me. Maybe I translated the code wrong.
Yes, you are translating it wrong: you changed the code in OnDrawShadow, but you didn't pay attention to OnProvideShadowMetrics, which is responsible for setting the size of the canvas drawing area. You need to pass it the same width and height that you calculate for the rotated view:
Here is the modified version of the DragShadowBuilder:
public class MyDragShadowBuilder : DragShadowBuilder
{
private int width, height;
// Defines the constructor for myDragShadowBuilder
public MyDragShadowBuilder(View v) : base(v)
{
}
// Defines a callback that sends the drag shadow dimensions and touch point back to the system.
public override void OnProvideShadowMetrics(Android.Graphics.Point outShadowSize, Android.Graphics.Point outShadowTouchPoint)
{
double rotationRad = Java.Lang.Math.ToRadians(View.Rotation);
int w = (int)(View.Width * View.ScaleX);
int h = (int)(View.Height * View.ScaleY);
double s = Java.Lang.Math.Abs(Java.Lang.Math.Sin(rotationRad));
double c = Java.Lang.Math.Abs(Java.Lang.Math.Cos(rotationRad));
//calculate the size of the canvas
//width = view's width*cos(rad)+height*sin(rad)
width = (int)(w * c + h * s);
//height = view's width*sin(rad)+height*cos(rad)
height = (int)(w * s + h * c);
outShadowSize.Set(width, height);
// Sets the touch point's position to be in the middle of the drag shadow
outShadowTouchPoint.Set(outShadowSize.X / 2, outShadowSize.Y / 2);
}
// Defines a callback that draws the drag shadow in a Canvas that the system constructs
// from the dimensions passed in onProvideShadowMetrics().
public override void OnDrawShadow(Canvas canvas)
{
canvas.Scale(View.ScaleX, View.ScaleY, width/2 , height/2);
//canvas.DrawColor(Android.Graphics.Color.White);
canvas.Rotate(View.Rotation,width/2, height / 2);
canvas.Translate((width - View.Width)/2, (height - View.Height) / 2);
base.OnDrawShadow(canvas);
}
}
And here is the complete sample: RotatedTextViewSample
In my Android app I want to build an image-straightening edit feature using the android-gpuimage library, but GPUImageView doesn't offer getBitmap() or setMatrix(), so how can this be done? Here is the code to review:
@Override
public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
if(isStraightenEffectEnabled){
float angle = (progress - 45);
float width = mGPUImageView.getWidth();
float height = mGPUImageView.getHeight();
if (width > height) {
width = mGPUImageView.getHeight();
height = mGPUImageView.getWidth();
}
float a = (float) Math.atan(height/width);
// the length from the center to the corner of the green
float len1 = (width / 2) / (float) Math.cos(a - Math.abs(Math.toRadians(angle)));
// the length from the center to the corner of the black
float len2 = (float) Math.sqrt(Math.pow(width/2,2) + Math.pow(height/2,2));
// compute the scaling factor
float scale = len2 / len1;
Matrix matrix = mGPUImageView.getMatrix();
if (mMatrix == null) {
mMatrix = new Matrix(matrix);
}
matrix = new Matrix(mMatrix);
float newX = (mGPUImageView.getWidth() / 2) * (1 - scale);
float newY = (mGPUImageView.getHeight() / 2) * (1 - scale);
matrix.postScale(scale, scale);
matrix.postTranslate(newX, newY);
matrix.postRotate(angle, mGPUImageView.getWidth() / 2, mGPUImageView.getHeight() / 2);
// This is what I would like to do, but setMatrix() does not exist:
mGPUImageView.setMatrix(matrix);
}
}
NOW HERE, getMatrix() is a method of GPUImageView, but setMatrix() and getBitmap() are not available on the GPUImageView class. Are there any workarounds?
Add a getGPUImage() method to the GPUImageView class:
public GPUImage getGPUImage() {
return mGPUImage;
}
Then you can get the bitmap like this:
mGPUImageView.getGPUImage().getBitmapWithFilterApplied();
You can also get the bitmap like this:
Bitmap bitmap = mGPUImageView.capture();
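If the goal is to apply the straightening Matrix from the question, one workaround (a sketch building on the getGPUImage() addition above, not part of the stock GPUImageView API) is to pull the filtered bitmap out and transform it with the standard Bitmap API:
// "matrix" is the straightening Matrix computed in onProgressChanged().
Bitmap filtered = mGPUImageView.getGPUImage().getBitmapWithFilterApplied();
Bitmap straightened = Bitmap.createBitmap(
        filtered, 0, 0, filtered.getWidth(), filtered.getHeight(),
        matrix, true /* bilinear filtering */);
mGPUImageView.setImage(straightened); // show the transformed result
For a true straighten effect you would still need to crop the rotated result back to the view's aspect ratio afterwards.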
You can also get the filtered bitmap through a GPUImageRenderer and a PixelBuffer. This might be helpful for you:
GPUImageLookupFilter amatorka = new GPUImageLookupFilter();
amatorka.setBitmap(BitmapFactory.decodeResource(getResources(), getResources().getIdentifier("fil_" + position, "drawable", getPackageName())));
GPUImageRenderer renderer = new GPUImageRenderer(amatorka);
renderer.setImageBitmap(bitmap, false);
PixelBuffer buffer = new PixelBuffer(80, 80);
buffer.setRenderer(renderer);
buffer.getBitmap();
Hi everyone. I am currently working on scanning QR codes in my app and I have used the ZXing library. It works well, but on my Galaxy S4 the scanning area is very small.
Please help me, thanks in advance.
I know it is very late, but this may help others.
Just go to the CameraManager class and replace the given method with this code.
It works for all types of screens.
public Rect getFramingRect() {
if (framingRect == null) {
if (camera == null) {
return null;
}
Point screenResolution = configManager.getScreenResolution();
int width = screenResolution.x * 3 / 4;
int height = screenResolution.y * 3 / 4;
Log.v("Framing rect is : ", "width is "+width+" and height is "+height);
int leftOffset = (screenResolution.x - width) / 2;
int topOffset = (screenResolution.y - height) / 2;
framingRect = new Rect(leftOffset, topOffset, leftOffset + width, topOffset + height);
Log.d(TAG, "Calculated framing rect: " + framingRect);
}
return framingRect;
}
public Rect getFramingRect() {
if (framingRect == null) {
if (camera == null) {
return null;
}
Point screenResolution = configManager.getScreenResolution();
int screenx = screenResolution.x;
int screeny = screenResolution.y;
int width, height, left, top;
if (screenx > screeny) {
width = (int) (screenx * 12.5 / 100);
height = (int) (screeny * 25 / 100);
left = (int) screenx * 83 / 100;
top = (int) screeny * 75 / 100;
} else {
left = (int) (screenx * 12.5 / 100);
top = (int) (screeny * 25 / 100);
width = (int) screenx * 83 / 100;
height = (int) screeny * 75 / 100;
}
framingRect = new Rect(left,top, width, height);
Log.d(TAG, "Calculated framing rect: " + framingRect);
}
return framingRect;
}
Replace the above code in the CameraManager.java file.
This worked for me, so try it out.
The CameraManager class has two constants defined, MIN_FRAME_WIDTH and MIN_FRAME_HEIGHT. Modify them as desired and everything should work:
private static final int MIN_FRAME_WIDTH = 240; // (your desired value here)
private static final int MIN_FRAME_HEIGHT = 240; // (your desired value here)
If you are calling this from another android app, use intent extras SCAN_WIDTH and SCAN_HEIGHT for this.
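For example, a minimal sketch of such an intent (the width/height values here are just placeholders):
// Launch the ZXing Barcode Scanner app with a custom scan window.
Intent intent = new Intent("com.google.zxing.client.android.SCAN");
intent.putExtra("SCAN_WIDTH", 800);   // desired framing-rect width in pixels
intent.putExtra("SCAN_HEIGHT", 600);  // desired framing-rect height in pixels
startActivityForResult(intent, 0);    // result is delivered to onActivityResult()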
If you happen to be using phonegap-plugin-barcodescanner (3.0.0 or later), then passing the same extras, e.g. xxxxx.scan(onSuccessFunc, onFailFunc, {SCAN_HEIGHT: 111, SCAN_WIDTH: 222}), will produce the same result, 111 being the height and 222 the width.
If you are using ZxingScannerView, you can override createViewFinderView() and increase or decrease the framing rectangle size via the viewFinderView, like this:
scannerView = object : ZXingScannerView(requireContext()) {
override fun createViewFinderView(context: Context?): IViewFinder {
val viewfinderView = super.createViewFinderView(context)
viewfinderView.setViewFinderOffset(-90) // increase size of framing rectangle
return viewfinderView;
}
}
yourLayout.addView(scannerView)
scannerView.startCamera()
I have a SurfaceView that is responsible for drawing a Bitmap as a background and another one that will be used as an overlay. So I've decided to do all transformations using a Matrix that can be used for both bitmaps, as it is (I think) one of the fastest ways to do it without using OpenGL.
I've been able to implement panning around and zooming, but I have some problems with what I've come up with:
1. I wasn't able to find a way to keep the focus on the center of the two fingers while zooming; the image always resets to its initial state (that is, without panning or scaling) before the new scale is applied. Besides looking wrong, that doesn't allow the user to zoom out to see the whole image and then zoom in on the part that is important.
2. After the scaling operation the image won't be at the same place after the new draw pass, because the translation value will be different.
Is there a way to achieve that using a Matrix or is there another solution?
Code is below (I use a SurfaceHolder in a separate thread to lock the SurfaceView canvas and call its doDraw method):
public class MapSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
public void doDraw(Canvas canvas) {
canvas.drawColor(Color.BLACK);
canvas.drawBitmap(mBitmap, mTransformationMatrix, mPaintAA);
}
@Override
public boolean onTouchEvent(MotionEvent event) {
switch (event.getAction() & MotionEvent.ACTION_MASK) {
case MotionEvent.ACTION_POINTER_DOWN: {
if (event.getPointerCount() == 2) {
mOriginalDistance = MathUtils.distanceBetween(event.getX(0), event.getX(1), event.getY(0), event.getY(1));
mScreenMidpoint = MathUtils.midpoint(event.getX(0), event.getX(1), event.getY(0), event.getY(1));
mImageMidpoint = MathUtils.midpoint((mXPosition+event.getX(0))/mScale, (mXPosition+event.getX(1))/mScale, (mYPosition+event.getY(0))/mScale, (mYPosition+event.getY(1))/mScale);
mOriginalScale = mScale;
}
}
case MotionEvent.ACTION_DOWN: {
mOriginalTouchPoint = new Point((int)event.getX(), (int)event.getY());
mOriginalPosition = new Point(mXPosition, mYPosition);
break;
}
case MotionEvent.ACTION_MOVE: {
if (event.getPointerCount() == 2) {
final double currentDistance = MathUtils.distanceBetween(event.getX(0), event.getX(1), event.getY(0), event.getY(1));
if (mIsZooming || currentDistance - mOriginalDistance > mPinchToZoomTolerance || mOriginalDistance - currentDistance > mPinchToZoomTolerance) {
final float distanceRatio = (float) (currentDistance / mOriginalDistance);
float tempZoom = mOriginalScale * distanceRatio;
mScale = Math.min(10, Math.max(Math.min((float)getHeight()/(float)mBitmap.getHeight(), (float)getWidth()/(float)mBitmap.getWidth()), tempZoom));
mScale = (float) MathUtils.roundToDecimals(mScale, 1);
mIsZooming = true;
mTransformationMatrix = new Matrix();
mTransformationMatrix.setScale(mScale, mScale);//, mImageMidpoint.x, mImageMidpoint.y);
} else {
System.out.println("Dragging");
mIsZooming = false;
final int deltaX = (int) ((int) (mOriginalTouchPoint.x - event.getX()));
final int deltaY = (int) ((int) (mOriginalTouchPoint.y - event.getY()));
mXPosition = mOriginalPosition.x + deltaX;
mYPosition = mOriginalPosition.y + deltaY;
validatePositions();
mTransformationMatrix = new Matrix();
mTransformationMatrix.setScale(mScale, mScale);
mTransformationMatrix.postTranslate(-mXPosition, -mYPosition);
}
}
break;
}
case MotionEvent.ACTION_UP:
case MotionEvent.ACTION_POINTER_UP: {
mIsZooming = false;
validatePositions();
mTransformationMatrix = new Matrix();
mTransformationMatrix.setScale(mScale, mScale);
mTransformationMatrix.postTranslate(-mXPosition, -mYPosition);
}
}
return true;
}
private void validatePositions() {
// Lower right corner
mXPosition = Math.min(mXPosition, (int)((mBitmap.getWidth() * mScale)-getWidth()));
mYPosition = Math.min(mYPosition, (int)((mBitmap.getHeight() * mScale)-getHeight()));
// Upper left corner
mXPosition = Math.max(mXPosition, 0);
mYPosition = Math.max(mYPosition, 0);
// Image smaller than the container, should center it
if (mBitmap.getWidth() * mScale <= getWidth()) {
mXPosition = (int) -((getWidth() - (mBitmap.getWidth() * mScale))/2);
}
if (mBitmap.getHeight() * mScale <= getHeight()) {
mYPosition = (int) -((getHeight() - (mBitmap.getHeight() * mScale))/2);
}
}
}
Instead of resetting the transformation matrix every time with new Matrix(), try updating it using the post*() methods. This way you only perform operations relative to the screen, and it is easier to think in terms of "zoom to this point on the screen".
Now some code. Having calculated mScale in zooming part:
...
mScale = (float) MathUtils.roundToDecimals(mScale, 1);
float ratio = mScale / mOriginalScale;
mTransformationMatrix.postScale(ratio, ratio, mScreenMidpoint.x, mScreenMidpoint.y);
It might be even better to recalculate mScreenMidpoint on each zooming touch event. This would allow the user to change the focus point a bit while zooming. For me, that is more natural than having the focus point frozen after the first two-finger touch.
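A sketch of that idea, reusing mScreenMidpoint and MathUtils.midpoint() from the question's code:
// Recompute the focus point on every two-finger ACTION_MOVE so the zoom
// follows the fingers instead of the initial touch midpoint.
if (event.getPointerCount() == 2) {
    mScreenMidpoint = MathUtils.midpoint(
            event.getX(0), event.getX(1), event.getY(0), event.getY(1));
}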
During dragging, you translate using deltaX and deltaY instead of absolute points:
mTransformationMatrix.postTranslate(-deltaX, -deltaY);
Of course, now you have to change your validatePositions() method to either:
- ensure deltaX and deltaY do not make the image move too much, or
- use the transformation matrix to check whether the image is off screen and then move it to counter that.
I will describe the second method, as it is more flexible and also allows you to validate zooming.
We calculate how much the image is off screen and then move it by those amounts:
void validate() {
RectF rect = new RectF(0, 0, mBitmap.getWidth(), mBitmap.getHeight());
mTransformationMatrix.mapRect(rect);
float height = rect.height();
float width = rect.width();
float deltaX = 0, deltaY = 0;
// Vertical delta
if (height < mScreenHeight) {
deltaY = (mScreenHeight - height) / 2 - rect.top;
} else if (rect.top > 0) {
deltaY = -rect.top;
} else if (rect.bottom < mScreenHeight) {
deltaY = mScreenHeight - rect.bottom;
}
// Horizontal delta
if (width < mScreenWidth) {
deltaX = (mScreenWidth - width) / 2 - rect.left;
} else if (rect.left > 0) {
deltaX = -rect.left;
} else if (rect.right < mScreenWidth) {
deltaX = mScreenWidth - rect.right;
}
mTransformationMatrix.postTranslate(deltaX, deltaY);
}
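A usage sketch under the question's setup: call validate() after every pan or zoom update, so the drawing thread picks up the corrected matrix on the next doDraw() pass (mScreenWidth and mScreenHeight are assumed to hold the SurfaceView's dimensions):
// After applying the incremental pan to the matrix...
mTransformationMatrix.postTranslate(-deltaX, -deltaY);
// ...clamp it so the bitmap stays on screen (and is centered when smaller).
validate();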