I use the FrescoImageViewer library in my app and everything works perfectly, but I have a problem with rotating images. By default the library rotates the image automatically when the device's screen rotation is on, but I don't want to rely on that.
What I actually want to know is how to rotate the image by 90 degrees on a touch or a button click, and it's important that this works with this library.
This is my code for showing the image:
ImagePipelineConfig config = ImagePipelineConfig.newBuilder(mContext)
.setProgressiveJpegConfig(new SimpleProgressiveJpegConfig())
.setResizeAndRotateEnabledForNetwork(true)
.setDownsampleEnabled(true)
.build();
Fresco.initialize(mContext, config);
ImageViewer.Builder builder = new ImageViewer.Builder<>(mContext, images);
builder.setFormatter(new ImageViewer.Formatter<Image>() {
    @Override
    public String format(Image customImage) {
        return customImage.getLarge();
    }
}).setOverlayView(overlayView)
  .show();
https://github.com/stfalcon-studio/FrescoImageViewer
Example for a 90 degree rotation:
ImageRequest imageRequest = ImageRequestBuilder.newBuilderWithSource(URI)
.setRotationOptions(RotationOptions.forceRotation(RotationOptions.ROTATE_90))
.build();
See also this example in Fresco's Showcase sample app.
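To trigger this from a button click rather than only at load time, one approach is to rebuild the image request with a new forced rotation and swap the drawee's controller. Below is a minimal Java sketch of that idea for a SimpleDraweeView you control yourself; the draweeView, imageUri and currentRotationDegrees names are assumptions for illustration, and FrescoImageViewer itself does not expose a public rotate call as far as I know.

import android.net.Uri;
import com.facebook.drawee.backends.pipeline.Fresco;
import com.facebook.drawee.interfaces.DraweeController;
import com.facebook.drawee.view.SimpleDraweeView;
import com.facebook.imagepipeline.common.RotationOptions;
import com.facebook.imagepipeline.request.ImageRequest;
import com.facebook.imagepipeline.request.ImageRequestBuilder;

// Hypothetical fields, named for illustration only.
private SimpleDraweeView draweeView;
private Uri imageUri;
private int currentRotationDegrees = 0; // 0, 90, 180 or 270

// Call this from the button's OnClickListener to rotate by another 90 degrees.
private void rotateByNinetyDegrees() {
    currentRotationDegrees = (currentRotationDegrees + 90) % 360;

    int angle;
    switch (currentRotationDegrees) {
        case 90:  angle = RotationOptions.ROTATE_90;  break;
        case 180: angle = RotationOptions.ROTATE_180; break;
        case 270: angle = RotationOptions.ROTATE_270; break;
        default:  angle = RotationOptions.NO_ROTATION;
    }

    // Build a request that forces the desired rotation.
    ImageRequest request = ImageRequestBuilder.newBuilderWithSource(imageUri)
            .setRotationOptions(RotationOptions.forceRotation(angle))
            .build();

    // Replace the drawee's controller so the image is decoded and drawn again.
    DraweeController controller = Fresco.newDraweeControllerBuilder()
            .setImageRequest(request)
            .setOldController(draweeView.getController())
            .build();
    draweeView.setController(controller);
}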
Images loaded using Picasso.get().load(url) are rotated. I tried to use .apply { rotate(90F) }, which applies the rotation to every image loaded from the URL.
If the picture was taken in portrait, rotating by 90 works fine, but if it was taken in landscape it does not.
Is there a way I could rotate the Picasso image based on its EXIF data?
This is the code I have at the moment:
Picasso.get().load(url)
.apply { placeholderRes?.let { placeholder(it) } }
.apply { if (resizeWidthRes != null && resizeHeightRes != null) resizeDimen(resizeWidthRes, resizeHeightRes) }
.apply { if (convertToCircle) transform(CircleTransformation()) }
.apply { rotate(90F) } //DOES NOT WORK FOR LANDSCAPE PICTURES
.into(imageView)
Picture taken in landscape
Picture loaded in the image view, which is rotated
Any suggestions on how to use EXIF data to rotate a Picasso image loaded from a URL would be very helpful.
Thanks
R
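In case it helps, here is a rough Java sketch of one workaround (the snippet above is Kotlin): read the EXIF orientation of the image yourself and pass the resulting angle to rotate() instead of a fixed 90F. ExifRotation, readExifRotation and loadWithExifRotation are hypothetical names, the stream must be opened off the main thread, and the ExifInterface(InputStream) constructor needs API 24+ or the AndroidX ExifInterface.

import android.media.ExifInterface;
import android.widget.ImageView;
import com.squareup.picasso.Picasso;
import java.io.InputStream;
import java.net.URL;

final class ExifRotation {

    // Hypothetical helper: reads the EXIF orientation tag of the image behind
    // `url` and maps it to a rotation angle in degrees. Opens a network stream,
    // so it must run on a background thread.
    static float readExifRotation(String url) {
        try (InputStream in = new URL(url).openStream()) {
            ExifInterface exif = new ExifInterface(in);
            int orientation = exif.getAttributeInt(
                    ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
            switch (orientation) {
                case ExifInterface.ORIENTATION_ROTATE_90:  return 90f;
                case ExifInterface.ORIENTATION_ROTATE_180: return 180f;
                case ExifInterface.ORIENTATION_ROTATE_270: return 270f;
                default:                                   return 0f;
            }
        } catch (Exception e) {
            return 0f; // fall back to no extra rotation
        }
    }

    // Hypothetical usage: apply the measured angle instead of a hard-coded 90F.
    static void loadWithExifRotation(String url, ImageView imageView, float degrees) {
        Picasso.get()
                .load(url)
                .rotate(degrees)
                .into(imageView);
    }
}

The obvious drawback is that the image bytes are fetched twice (once for the EXIF header, once by Picasso), so in practice the angle should be cached or read through the same HTTP client; the sketch is only meant to show the mapping from EXIF orientation to rotate().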
I need an ImageView that supports pinch zoom in/out and panning. Each time I touch the ImageView, I need to know the touch position in the source image. For example, if the image resolution is 1280*720, then even when the image in the ImageView is zoomed in, I still want to know exactly the touched position in the image (not the touch position in the ImageView).
Thanks.
Why don't you skip all that and use an existing library?
Try this: PhotoView works perfectly for me on API 23; with just a few lines of code you will be able to zoom in/out and pan.
compile 'com.github.chrisbanes:PhotoView:1.3.0'
repositories {
...
maven { url "https://jitpack.io" }
}
Example
ImageView mImageView;
PhotoViewAttacher mAttacher;

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    // Any implementation of ImageView can be used!
    mImageView = (ImageView) findViewById(R.id.iv_photo);

    // Set the Drawable displayed
    Drawable bitmap = getResources().getDrawable(R.drawable.wallpaper);
    mImageView.setImageDrawable(bitmap);

    // Attach a PhotoViewAttacher, which takes care of all of the zooming functionality.
    // (not needed unless you are going to change the drawable later)
    mAttacher = new PhotoViewAttacher(mImageView);
}

// If you later call mImageView.setImageDrawable/setImageBitmap/setImageResource/etc then you just need to call
mAttacher.update();
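Regarding the original question about the position in the source image: PhotoView zooms and pans by changing the ImageView's image matrix (it switches the view to matrix scaling under the hood), so as far as I understand you can invert that matrix to map a touch point back to drawable pixels. A minimal sketch of that idea, assuming the mImageView above; the touch coordinates would come from your own OnTouchListener or tap callback:

import android.graphics.Matrix;
import android.widget.ImageView;

// Maps a touch point on the view (e.g. event.getX()/getY()) to pixel
// coordinates in the source drawable, using the current image matrix.
// Works while zoomed or panned because the matrix already encodes both.
private static float[] touchToSourcePixel(ImageView imageView, float touchX, float touchY) {
    Matrix inverse = new Matrix();
    imageView.getImageMatrix().invert(inverse);

    float[] point = new float[] { touchX, touchY };
    inverse.mapPoints(point); // now in drawable (source image) coordinates

    // Clamp in case the touch landed outside the image.
    point[0] = Math.max(0f, Math.min(point[0], imageView.getDrawable().getIntrinsicWidth()));
    point[1] = Math.max(0f, Math.min(point[1], imageView.getDrawable().getIntrinsicHeight()));
    return point;
}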
Finally, I used this open-source project:
https://github.com/sephiroth74/ImageViewZoom
I modified the code; there are still quite a few bugs in this open-source project, and I have fixed several of them.
The following function was added to the class ImageViewTouch.GestureListener. It converts a touch point (in screen coordinates) to a point in the real image, no matter whether the image is zoomed in or out.
private PointF calculatePositionInSourceImage(PointF touchPointF) {
    // point relative to imageRect
    PointF touchPointRelativeToImageRect = new PointF();
    RectF imageRect = getBitmapRect();
    touchPointRelativeToImageRect.set(touchPointF.x - imageRect.left,
            touchPointF.y - imageRect.top);

    // real image resolution
    int imageWidth = getDrawable().getIntrinsicWidth();
    int imageHeight = getDrawable().getIntrinsicHeight();

    // touch point in image
    PointF touchPointRelativeToImage = new PointF();
    touchPointRelativeToImage.set(
            touchPointRelativeToImageRect.x / imageRect.width() * imageWidth,
            touchPointRelativeToImageRect.y / imageRect.height() * imageHeight);
    if (touchPointRelativeToImage.x < 0 || touchPointRelativeToImage.y < 0) {
        touchPointRelativeToImage.set(0, 0);
    }
    return touchPointRelativeToImage;
}
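For completeness, this is roughly how the function can be called from a tap. The sketch assumes that GestureListener extends GestureDetector.SimpleOnGestureListener; adjust it to whichever callback the library actually exposes in your version.

@Override
public boolean onSingleTapConfirmed(MotionEvent e) {
    // Inside ImageViewTouch.GestureListener, so the helper above is in scope.
    PointF sourcePoint = calculatePositionInSourceImage(new PointF(e.getX(), e.getY()));
    Log.d("ImageViewTouch", "Touched source pixel: " + sourcePoint.x + ", " + sourcePoint.y);
    return super.onSingleTapConfirmed(e);
}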
I use the camera to take a picture of an object in the app. After taking the photo, I want to know the height and width of the object.
Please guide me.
Create the Bitmap first in the onActivityResult method, then get the height and width of the bitmap:
java.io.File file = new java.io.File(
        this.application.getDirectory(),
        this.application.getFileName());
if (file.exists()) {
    Bitmap bitmapPhoto = BitmapFactory.decodeFile(file.getAbsolutePath());
    height = bitmapPhoto.getHeight();
    width = bitmapPhoto.getWidth();
}
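If only the pixel dimensions are needed, a cheaper variant is to decode just the bounds instead of the whole bitmap. A small sketch using the same file; note this still gives the size of the photo in pixels, not the real-world size of the object in it:

// Read only the image header to get its dimensions, without allocating the pixels.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeFile(file.getAbsolutePath(), options);
int photoWidth = options.outWidth;   // width in pixels
int photoHeight = options.outHeight; // height in pixels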
I think I have what you need.
First, integrate the OpenCV library into your application, then use object detection in OpenCV to detect the object whose height and width you want to measure.
Using the formula below you can get what you need:
realWorldHeight = rect.size().height / imageMat.size().height * realWorldDistanceToObject * Math.cos( realWorldAngleOfObjectHeight / 2);
I'm planning to calculate a disparity map by taking two pictures with the two back cameras of the HTC Evo 3D. However, I am only able to use one camera. I tried different indices:
0 gives me the left camera (one of the back cameras)
1 gives me the front camera
-1 gives me the left camera (one of the back cameras)
I once got the other camera using index -1, but it doesn't work anymore. I'm using CameraBridgeViewBase.
I have seen on the android-opencv Google group that people have successfully used both cameras of the Evo 3D. How can I do this? Is there some other index, or some other way to achieve it?
P.S. Native Camera doesn't work. (Android 4.0.3).
The stereoscopic camera ID in Android changed from 2 to 100 with the ICS upgrade. This is the constant used by the Android Camera.open call. I don't think there was ever any official way to get one camera or the other. You can only get one image or both images.
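For reference, a minimal sketch of trying that ID with the legacy Camera API; whether ID 100 is actually accepted depends on the HTC firmware, so treat it as something to verify on the device:

import android.hardware.Camera;
import android.util.Log;

// Attempts to open the stereoscopic camera using the ID mentioned above.
// Returns null if the device/firmware does not expose that ID.
private Camera openStereoCamera() {
    try {
        return Camera.open(100); // stereoscopic camera ID on post-ICS HTC firmware
    } catch (RuntimeException e) {
        Log.e("StereoCamera", "Stereoscopic camera ID 100 not available", e);
        return null;
    }
}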
As the answer above suggests, I used 100 as the camera index, but it didn't work with OpenCV, so I tried using Android's Camera SDK and got some errors. Since this functionality is part of the HTC OpenSense SDK, I installed that SDK in Eclipse and used http://www.htcdev.com/devcenter/opensense-sdk/stereoscopic-3d/s3d-sample-code/ . I took the base file of the S3D Camera Demo and added a few more functions so that I could access the camera image data and convert it to an OpenCV Mat.
I made a few changes to the onTouchEvent function in that code, and added the preview callback below.
@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getAction()) {
        case MotionEvent.ACTION_DOWN:
            // toggle();
            //Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
            //startActivityForResult(cameraIntent, 1337);
            int bufferSize = width * height * 3;
            byte[] mPreviewBuffer = null;

            // New preview buffer.
            mPreviewBuffer = new byte[bufferSize + 4096];

            // with buffer requires addbuffer.
            camera.addCallbackBuffer(mPreviewBuffer);
            camera.setPreviewCallbackWithBuffer(mCameraCallback);
            break;
        default:
            break;
    }
    return true;
}
private final Camera.PreviewCallback mCameraCallback = new Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera c) {
        Log.d(TAG, "ON Preview frame");
        img = new Mat(height, width, CvType.CV_8UC1);
        gray = new Mat(height, width, CvType.CV_8UC1);
        img.put(0, 0, data);
        Imgproc.cvtColor(img, gray, Imgproc.COLOR_YUV420sp2GRAY);
        String pixvalue = String.valueOf(gray.get(300, 400)[0]);
        String pixval1 = String.valueOf(gray.get(300, 400 + width / 2)[0]);
        Log.d(TAG, pixvalue);
        Log.d(TAG, pixval1);
        // to do the camera image split processing using "data"
    }
};
The image you get from the camera is in YUV420sp format, and I initially had problems accessing the data because I had created a 4-channel Mat. It actually needs only a 1-channel Mat.
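The two Log calls in the callback sample a pixel from each half of the frame, which suggests the stereoscopic preview packs the left and right views side by side. Under that assumption (which is worth verifying on the device), the split mentioned in the comment could look like this:

// Assumption: left and right views are packed side by side in the frame,
// so each half of the columns is one view. submat() only creates a view
// into the same pixel data; call clone() if the halves must outlive `gray`.
Mat leftView  = gray.submat(0, gray.rows(), 0, gray.cols() / 2);
Mat rightView = gray.submat(0, gray.rows(), gray.cols() / 2, gray.cols());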
I am trying to create a simple collage designer for Android (ActionScript/AIR). Each image can be moved, rotated and scaled. I use this code:
var os:Sprite = new Sprite();
os.cacheAsBitmap = true;
os.cacheAsBitmapMatrix = new Matrix();
Multitouch.inputMode = MultitouchInputMode.GESTURE;
if (Multitouch.supportsGestureEvents) {
    os.addEventListener(TransformGestureEvent.GESTURE_ROTATE, onRotate);
    os.addEventListener(TransformGestureEvent.GESTURE_ZOOM, onZoom);
    os.addEventListener(TransformGestureEvent.GESTURE_PAN, onPan);
}
os.addEventListener(MouseEvent.MOUSE_DOWN, onDown);
os.addEventListener(MouseEvent.MOUSE_UP, onUp);

protected function onRotate(event:TransformGestureEvent):void
{
    event.target.rotation += event.rotation;
}

protected function onZoom(event:TransformGestureEvent):void
{
    event.target.scaleX *= event.scaleX;
    event.target.scaleY *= event.scaleY;
}

protected function onPan(event:TransformGestureEvent):void
{
    event.target.x = event.offsetX;
    event.target.y = event.offsetY;
}

protected function onDown(e:MouseEvent):void
{
    os.startDrag();
    e.stopPropagation();
}

protected function onUp(e:MouseEvent):void
{
    os.stopDrag();
}
However, scaling is not smooth: the image changes size in sudden jumps and the movement lags, even though I have quite a powerful device for testing. I cannot use the standard approach with corner markers, because the images are quite small and hitting a marker with a finger would be difficult.
Please suggest code examples of how this can be implemented.
Are you using "gpu" renderMode to test?
And try using a Bitmap instead.