I have normal portrait orientation in the emulator, and the int rotation has a value of zero. Everything is fine:
void bindPreview(@NonNull ProcessCameraProvider cameraProvider) {
    int rotation = cameraView.getDisplay().getRotation(); // 0

    // Preview
    Preview preview = new Preview.Builder()
            .setTargetAspectRatio(AspectRatio.RATIO_4_3)
            .build();

    // Camera
    CameraSelector cameraSelector = new CameraSelector.Builder()
            .requireLensFacing(CameraSelector.LENS_FACING_BACK)
            .build();

    // Create image capture
    imageCapture = new ImageCapture.Builder()
            .setTargetResolution(new Size(1200, 720))
            .setTargetRotation(rotation)
            .build();

    // ViewPort
    Rational aspectRatio = new Rational(cameraView.getWidth(), cameraView.getHeight());
    ViewPort viewPort = new ViewPort.Builder(aspectRatio, rotation).build();

    // Use case group
    UseCaseGroup useCaseGroup = new UseCaseGroup.Builder()
            .addUseCase(preview)
            .addUseCase(imageCapture)
            .setViewPort(viewPort)
            .build();

    // Bind the group (missing in the original snippet; assuming this class is a LifecycleOwner)
    cameraProvider.bindToLifecycle(this, cameraSelector, useCaseGroup);
}
But afterwards, in the imageCapture.takePicture callback:
@Override
public void onCaptureSuccess(@NonNull ImageProxy image) {
    imageRotationDegrees = image.getImageInfo().getRotationDegrees(); // 90
}
imageRotationDegrees returns 90! That means the image must be rotated to reach its natural orientation, but it is not rotated (the image is already upright). Shouldn't its value be 0?
Is it normal?
Update:
On a device I get 0 for imageRotationDegrees.
On the emulator I get 90 for imageRotationDegrees.
But all images come out in the correct orientation regardless of this value. How do I know which images I should rotate, if I have to at all?
Yes, it is normal for the imageRotationDegrees value to be different from the value of rotation in the bindPreview method.
The rotation value represents the rotation of the device's screen, while the imageRotationDegrees value represents the orientation of the image as captured by the camera. These values can be different because the camera and the screen are not necessarily oriented in the same way.
For example, if the device is in portrait orientation with the camera facing the user, the rotation value will be 0, but the imageRotationDegrees value will be 90, because the camera sensor captures the image rotated 90 degrees relative to the device's portrait orientation.
To get the natural orientation of the image, you can rotate the image by the value of imageRotationDegrees. For example, if you are using Android's Bitmap class to represent the image, you can use a Matrix with Bitmap.createBitmap to rotate the image by the correct amount:
Matrix matrix = new Matrix();
matrix.postRotate(imageRotationDegrees);
Bitmap rotatedImage = Bitmap.createBitmap(image, 0, 0, image.getWidth(), image.getHeight(), matrix, true);
OK. If this is not right, please correct me:
Hardware devices have various sensor rotations.
"imageRotationDegrees = 0" (as on my device) means screen_height = sensor_height.
"imageRotationDegrees = 90" (as on the emulator) means screen_height = sensor_width.
Depending on the rotation I get different values from image.getCropRect().
So I check imageRotationDegrees and calculate the correct frame for cropping:
if (imageRotationDegrees == 0 || imageRotationDegrees == 180) {
    // buffer is upright: use the crop rect dimensions as-is
} else {
    // buffer is rotated: swap width and height when building the crop frame
}
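The branch logic above can be sketched as a small pure function. This is only an illustration of the swap rule, with hypothetical names; it is not CameraX API:

```java
public class CropFrameMath {
    // Returns {width, height} of the buffer as seen in the display's
    // natural (portrait) orientation, given getRotationDegrees().
    public static int[] uprightSize(int rotationDegrees, int bufferWidth, int bufferHeight) {
        if (rotationDegrees == 0 || rotationDegrees == 180) {
            // Buffer is already upright: use it as-is.
            return new int[] { bufferWidth, bufferHeight };
        } else {
            // Buffer is rotated 90/270: swap width and height.
            return new int[] { bufferHeight, bufferWidth };
        }
    }
}
```

With the emulator's rotationDegrees of 90, a 1920x1080 buffer maps to a 1080x1920 upright frame, which is why the crop rect values differ between the device and the emulator.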
Additional info:
getRotationDegrees is different on imageCapture and imageAnalysis on device
Related
I'm trying to use Firebase's MLKit for face detection with CameraX. I'm having a hard time getting the image analysis ImageProxy size to match the PreviewView's size. For both image analysis and PreviewView, I've set setTargetResolution() to the PreviewView width and height. However, when I check the size of the ImageProxy in the analyzer, it gives me 1920 as width and 1080 as height. My PreviewView is 1080 in width and 2042 in height. When I swap the width and the height in setTargetResolution() for image analysis, I get 1088 for both width and height in the ImageProxy. My PreviewView is also locked to portrait mode.
Ultimately, I need to feed the raw imageproxy data and the face point data into an AR code. So scaling up just the graphics overlay that draws the face points will not work for me.
Q: If there are no way to fix this within the camerax libraries, How to scale the imageproxy that returns from the analyzer to match the previewview?
I'm using Java and the latest Camerax libs:
def camerax_version = "1.0.0-beta08"
It's quite difficult to ensure the preview and image analysis use cases have the same output resolution, since different devices support different resolutions, and image analysis has a hard limit on the maximum resolution of its output (as mentioned in the documentation).
To make converting coordinates between the image analysis frames and the UI/PreviewView easier, you can set preview, ImageAnalysis, and PreviewView to the same aspect ratio, for instance AspectRatio.RATIO_4_3 (for PreviewView, by wrapping it inside a ConstraintLayout, for example, and setting a constraint on its width/height ratio). With this, mapping the coordinates of detected faces from the analyzer to the UI becomes more straightforward; you can take a look at this sample.
Alternatively, you could use CameraX's ViewPort API, which, I believe, is still experimental. It allows defining a field of view for a group of use cases, so their outputs match and you get WYSIWYG behavior. You can find an example of its usage here. For your case, you'd write something like this:
Preview preview = ...
preview.setSurfaceProvider(previewView.getSurfaceProvider());

ImageAnalysis imageAnalysis = ...
imageAnalysis.setAnalyzer(...);

ViewPort viewPort = new ViewPort.Builder(
        new Rational(previewView.getWidth(), previewView.getHeight()),
        previewView.getDisplay().getRotation())
        .build();

UseCaseGroup useCaseGroup = new UseCaseGroup.Builder()
        .setViewPort(viewPort)
        .addUseCase(preview)
        .addUseCase(imageAnalysis)
        .build();

cameraProvider.bindToLifecycle(
        lifecycleOwner,
        cameraSelector,
        useCaseGroup);
In this scenario, every ImageProxy your analyzer receives will contain a crop rect that matches what PreviewView displays. So you just need to crop your image, then pass it to the face detector.
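As I understand it, the crop rect a view port produces is essentially a center crop of the buffer down to the view port's aspect ratio. A rough sketch of that geometry follows; the names are hypothetical and this is not the actual CameraX implementation:

```java
public class ViewPortMath {
    // Returns {left, top, right, bottom} of a centered crop of an
    // imageWidth x imageHeight buffer matching the aspectW:aspectH ratio.
    public static int[] centerCrop(int imageWidth, int imageHeight, int aspectW, int aspectH) {
        // Compare aspect ratios via cross-multiplication to avoid float error.
        if (imageWidth * aspectH > imageHeight * aspectW) {
            // Buffer is wider than the view port: crop the left/right sides.
            int cropWidth = imageHeight * aspectW / aspectH;
            int left = (imageWidth - cropWidth) / 2;
            return new int[] { left, 0, left + cropWidth, imageHeight };
        } else {
            // Buffer is taller than the view port: crop the top/bottom.
            int cropHeight = imageWidth * aspectH / aspectW;
            int top = (imageHeight - cropHeight) / 2;
            return new int[] { 0, top, imageWidth, top + cropHeight };
        }
    }
}
```

For a 1600x1200 buffer and a square view port, this yields a 1200x1200 crop offset 200 px from the left, which matches what a center-cropping PreviewView would display.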
This answer is derived from @Husayn's answer. I have added the relevant sample code.
CameraX image sizes for preview and analysis vary for several reasons (for example, device-specific display size/hardware/camera, or app-specific views and processing).
However, there are ways to map the processed image size and the resulting x/y coordinates to the preview size and preview x/y coordinates.
Set up the layout with a 3:4 DimensionRatio for both the preview and the analysis overlay.
Example:
<androidx.camera.view.PreviewView
    android:id="@+id/view_finder"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintDimensionRatio="3:4"
    app:layout_constraintTop_toTopOf="parent"/>

<com.loa.sepanex.scanner.view.GraphicOverlay
    android:id="@+id/graphic_overlay"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintDimensionRatio="3:4"
    app:layout_constraintTop_toTopOf="parent"/>
Set up the preview and analysis use cases with AspectRatio.RATIO_4_3.
Example:
viewFinder = view.findViewById(R.id.view_finder)
graphicOverlay = view.findViewById(R.id.graphic_overlay)
//...
preview = Preview.Builder()
.setTargetAspectRatio(AspectRatio.RATIO_4_3)
.setTargetRotation(rotation)
.build()
imageAnalyzer = ImageAnalysis.Builder()
.setTargetAspectRatio(AspectRatio.RATIO_4_3)
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.setTargetRotation(rotation)
.build()
    .also {
        it.setAnalyzer(cameraExecutor, ImageAnalysis.Analyzer { image ->
            //val rotationDegrees = image.imageInfo.rotationDegrees
            try {
                val mediaImage: Image? = image.image
                if (mediaImage != null) {
                    val imageForFaceDetectionProcess = InputImage.fromMediaImage(mediaImage, image.imageInfo.rotationDegrees)
                    //...
                }
            } finally {
                image.close() // the analyzer must close the ImageProxy to receive the next frame
            }
        })
    }
Define scale and translate APIs to map analysis-image x/y coordinates to preview x/y coordinates, as shown below.
val preview = viewFinder.getChildAt(0)
var previewWidth = preview.width * preview.scaleX
var previewHeight = preview.height * preview.scaleY
val rotation = preview.display.rotation
if (rotation == Surface.ROTATION_90 || rotation == Surface.ROTATION_270) {
val temp = previewWidth
previewWidth = previewHeight
previewHeight = temp
}
val isImageFlipped = lensFacing == CameraSelector.LENS_FACING_FRONT
val rotationDegrees: Int = imageProxy.getImageInfo().getRotationDegrees()
if (rotationDegrees == 0 || rotationDegrees == 180) {
graphicOverlay!!.setImageSourceInfo(
imageProxy.getWidth(), imageProxy.getHeight(), isImageFlipped)
} else {
graphicOverlay!!.setImageSourceInfo(
imageProxy.getHeight(), imageProxy.getWidth(), isImageFlipped)
}
:::
:::
float viewAspectRatio = (float) previewWidth / previewHeight;
float imageAspectRatio = (float) imageWidth / imageHeight;
postScaleWidthOffset = 0;
postScaleHeightOffset = 0;
if (viewAspectRatio > imageAspectRatio) {
// The image needs to be vertically cropped to be displayed in this view.
scaleFactor = (float) previewWidth / imageWidth;
postScaleHeightOffset = ((float) previewWidth / imageAspectRatio - previewHeight) / 2;
} else {
// The image needs to be horizontally cropped to be displayed in this view.
scaleFactor = (float) previewHeight / imageHeight;
postScaleWidthOffset = ((float) previewHeight * imageAspectRatio - previewWidth) / 2;
}
transformationMatrix.reset();
transformationMatrix.setScale(scaleFactor, scaleFactor);
transformationMatrix.postTranslate(-postScaleWidthOffset, -postScaleHeightOffset);
if (isImageFlipped) {
transformationMatrix.postScale(-1f, 1f, previewWidth / 2f, previewHeight / 2f);
}
:::
:::
public float scale(float imagePixel) {
return imagePixel * overlay.scaleFactor;
}
public float translateX(float x) {
if (overlay.isImageFlipped) {
return overlay.getWidth() - (scale(x) - overlay.postScaleWidthOffset);
} else {
return scale(x) - overlay.postScaleWidthOffset;
}
}
public float translateY(float y) {
return scale(y) - overlay.postScaleHeightOffset;
}
Use the translateX and translateY methods to plot analysis-image-based data onto the preview.
Example:
for (FaceContour contour : face.getAllContours()) {
for (PointF point : contour.getPoints()) {
canvas.drawCircle(translateX(point.x), translateY(point.y), FACE_POSITION_RADIUS, facePositionPaint);
}
}
I have a PreviewView that occupies the whole screen except for the toolbar.
The camera preview works great, but when I capture the image, the aspect ratio is completely different.
I would like to show the image to the user after it is successfully captured at the same size as the PreviewView, so I don't have to crop or stretch it.
Is it possible to change the aspect ratio so that on every device it matches the size of the PreviewView, or do I have to set it to a fixed value?
You can set the aspect ratio of the Preview and ImageCapture use cases while building them. If you set the same aspect ratio to both use cases, you should end up with a captured image that matches the camera preview output.
Example: Setting Preview and ImageCapture's aspect ratios to 4:3
Preview preview = new Preview.Builder()
.setTargetAspectRatio(AspectRatio.RATIO_4_3)
.build();
ImageCapture imageCapture = new ImageCapture.Builder()
.setTargetAspectRatio(AspectRatio.RATIO_4_3)
.build();
By doing this, you'll most likely still end up with a captured image that doesn't match what PreviewView is displaying. Assuming you don't change the default scale type of PreviewView, it'll be equal to ScaleType.FILL_CENTER, meaning that unless the camera preview output has an aspect ratio that matches that of PreviewView, PreviewView will crop parts of the preview (the top and bottom, or the right and left sides), resulting in the captured image not matching what PreviewView displays. To solve this issue, you should set PreviewView's aspect ratio to the same aspect ratio as the Preview and ImageCapture use cases.
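For intuition, FILL_CENTER scales the buffer uniformly until it covers the view, so whichever dimension overflows gets cropped. A small sketch of that math, with illustrative names (this is not PreviewView API):

```java
public class FillCenterMath {
    // With FILL_CENTER, the buffer is scaled uniformly until it covers the
    // view; the overflowing dimension is cropped. Returns the portion of the
    // buffer that remains visible, as {visibleWidth, visibleHeight} in
    // buffer pixels.
    public static int[] visibleRegion(int viewW, int viewH, int bufW, int bufH) {
        float scale = Math.max((float) viewW / bufW, (float) viewH / bufH);
        return new int[] { Math.round(viewW / scale), Math.round(viewH / scale) };
    }
}
```

For a 1000x2000 view and a 1000x1500 buffer (portrait 4:3 content in a taller view), only a 750-px-wide band of the buffer is visible, i.e., the left and right sides are cropped away.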
Example: Setting PreviewView's aspect ratio to 4:3
<androidx.constraintlayout.widget.ConstraintLayout
android:layout_width="match_parent"
android:layout_height="match_parent">
<androidx.camera.view.PreviewView
android:layout_width="match_parent"
android:layout_height="0dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintDimensionRatio="3:4"
app:layout_constraintTop_toTopOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
I was working with CameraX and faced the same issue. I followed the excellent answer given by @Husayn Hakeen. However, my app needs the PreviewView to match the device width and height precisely, so I made some changes. I added:
Preview preview = new Preview.Builder()
.setTargetAspectRatio(AspectRatio.RATIO_16_9)
.build();
ImageCapture imageCapture = new ImageCapture.Builder()
.setTargetAspectRatio(AspectRatio.RATIO_16_9)
.build();
And my xml has:
<androidx.camera.view.PreviewView
    android:id="@+id/viewFinder"
    android:layout_width="0dp"
    android:layout_height="0dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent"
    tools:background="@color/colorLight"
    />
From Kotlin, I measured the device's actual height and width in pixels:
val metrics: DisplayMetrics = DisplayMetrics().also { viewFinder.display.getRealMetrics(it) }
val deviceWidthPx = metrics.widthPixels
val deviceHeightPx = metrics.heightPixels
Using these values, I used some basic coordinate geometry to draw some lines on the captured photo.
The yellow rectangle indicates the device preview, i.e., what you see on screen. I needed to perform a center crop, which is why I drew the blue square.
After that, I cropped the preview:
@JvmStatic
fun cropPreviewBitmap(previewBitmap: Bitmap, deviceWidthPx: Int, deviceHeightPx: Int): Bitmap {
    // crop the image
    var cropHeightPx = 0f
    var cropWidthPx = 0f
    if (deviceHeightPx > deviceWidthPx) {
        cropHeightPx = 1.0f * previewBitmap.height
        cropWidthPx = 1.0f * deviceWidthPx / deviceHeightPx * cropHeightPx
    } else {
        cropWidthPx = 1.0f * previewBitmap.width
        cropHeightPx = 1.0f * deviceHeightPx / deviceWidthPx * cropWidthPx
    }
    val cx = previewBitmap.width / 2
    val cy = previewBitmap.height / 2
    val minimumPx = Math.min(cropHeightPx, cropWidthPx)
    val left = cx - minimumPx / 2
    val top = cy - minimumPx / 2
    return Bitmap.createBitmap(previewBitmap, left.toInt(), top.toInt(), minimumPx.toInt(), minimumPx.toInt())
}
This worked for me.
The easiest solution is to use a UseCaseGroup, where you add both the preview and image capture use cases to the same group and set the same view port.
Beware that with this solution you need to start the camera in the onCreate method once the view is ready (otherwise your app will crash):
viewFinder.post {
startCamera()
}
I want to be able to get the exact same image in the analysis step as I get in the preview.
I have preview use case:
val metrics = DisplayMetrics().also { binding.codeScannerView.display.getRealMetrics(it) }
val screenAspectRatio = Rational(metrics.widthPixels, metrics.heightPixels)
val previewConfig = PreviewConfig.Builder().apply {
setTargetAspectRatio(screenAspectRatio)
}.build()
And next to it I have analysis use case config:
val analyzerConfig = ImageAnalysisConfig.Builder().apply {
setTargetResolution(Size(metrics.heightPixels, metrics.widthPixels))
setTargetAspectRatio(screenAspectRatio)
val analyzerThread = HandlerThread(
"QrCodeReader").apply { start() }
setCallbackHandler(Handler(analyzerThread.looper))
setImageReaderMode(
ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
}.build()
My preview is fullscreen, so its size is 1440x2560. But if I try to get the dimensions from the ImageProxy in the analyzer, I get 1920x1050, which seems to have incorrect dimensions with the width and height switched. Why is that, and how can I force my analysis step to have the same dimensions as the full screen?
Intro:
implementation 'androidx.camera:camera-core:1.0.0-alpha10'
implementation 'androidx.camera:camera-camera2:1.0.0-alpha10'
implementation "androidx.camera:camera-lifecycle:1.0.0-alpha10"
implementation "androidx.camera:camera-view:1.0.0-alpha07"
implementation 'com.google.firebase:firebase-ml-vision:24.0.1'
implementation 'com.google.firebase:firebase-ml-vision-barcode-model:16.0.2'
I solved this issue via the method FirebaseVisionImage.fromBitmap(bitmap), where bitmap is a manually cropped and rotated image matching the preview configuration.
The steps are:
When you set up ImageAnalysis.Builder() and Preview.Builder(), obtain the on-screen rendered size of the androidx.camera.view.PreviewView element:
previewView.measure(View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED)
val previewSize = Size(previewView.width, previewView.height)
Then pass the size into your own ImageAnalysis.Analyzer implementation (let's say it becomes a viewFinderSize variable; usage below).
When override fun analyze(mediaImage: ImageProxy) { fires, manually crop the received ImageProxy. I use the snippet from another SO question about the distorted YUV_420_888 image: https://stackoverflow.com/a/45926852/2118862
When you do the cropping, keep in mind that the ImageAnalysis use case receives an image that is vertically centered relative to your preview use case. In other words, the received image, after rotation, is vertically centered as if it were inside your preview area (even if your preview area is smaller than the image passed into analysis). So the crop area should be calculated in both directions from the vertical center: up and down.
The vertical size of the crop (height) should be calculated from the horizontal size. When you receive an image for analysis, it spans the full horizontal size of your preview area (100% of the width inside the preview equals 100% of the width inside analysis), so there are no hidden zones in the horizontal dimension. This opens the way to calculating the size of the vertical crop. I did this with the following code:
var bitmap = ... <- obtain the bitmap as suggested in the SO link above
val matrix = Matrix()
matrix.postRotate(90f)
bitmap = Bitmap.createBitmap(bitmap, 0, 0, image.width, image.height, matrix, true)
val cropHeight = if (bitmap.width < viewFinderSize!!.width) {
// if preview area larger than analysing image
val koeff = bitmap.width.toFloat() / viewFinderSize!!.width.toFloat()
viewFinderSize!!.height.toFloat() * koeff
} else {
// if preview area smaller than analysing image
val prc = 100 - (viewFinderSize!!.width.toFloat()/(bitmap.width.toFloat()/100f))
viewFinderSize!!.height + ((viewFinderSize!!.height.toFloat()/100f) * prc)
}
val cropTop = (bitmap.height/2)-((cropHeight)/2)
bitmap = Bitmap.createBitmap(bitmap, 0, cropTop.toInt(), bitmap.width, cropHeight.toInt())
The final value in the bitmap variable is the cropped image, ready to pass into FirebaseVisionImage.fromBitmap(bitmap).
PS: feel free to improve the suggested approach.
Maybe I am late, but here is the code that works for me.
Sometimes cropTop comes out negative; whenever that happens you should process the image from the ImageProxy directly, and in other cases you can process the cropped bitmap:
val mediaImage = imageProxy.image ?: return
var bitmap = ImageUtils.convertYuv420888ImageToBitmap(mediaImage)
val rotationDegrees = imageProxy.imageInfo.rotationDegrees
val matrix = Matrix()
matrix.postRotate(rotationDegrees.toFloat())
bitmap =
Bitmap.createBitmap(bitmap, 0, 0, mediaImage.width, mediaImage.height, matrix, true)
val cropHeight = if (bitmap.width < previewView.width) {
// if preview area larger than analysing image
val koeff = bitmap.width.toFloat() / previewView.width.toFloat()
previewView.height.toFloat() * koeff
} else {
// if preview area smaller than analysing image
val prc = 100 - (previewView.width.toFloat() / (bitmap.width.toFloat() / 100f))
previewView.height + ((previewView.height.toFloat() / 100f) * prc)
}
val cropTop = (bitmap.height / 2) - ((cropHeight) / 2)
if (cropTop > 0) {
Bitmap.createBitmap(bitmap, 0, cropTop.toInt(), bitmap.width, cropHeight.toInt())
.also { process(it, imageProxy) }
} else {
imageProxy.image?.let { process(it, imageProxy) }
}
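The crop arithmetic used in the snippet above can be isolated into a pure helper for clarity. This is the same math extracted as-is, with hypothetical names; it does not depend on any Android class:

```java
public class AnalysisCropMath {
    // Returns {cropTop, cropHeight} for the rotated analysis bitmap,
    // using the same proportions as the snippet above. A negative
    // cropTop means the crop window exceeds the bitmap, i.e., fall back
    // to processing the uncropped image.
    public static float[] cropWindow(int bitmapW, int bitmapH, int previewW, int previewH) {
        float cropHeight;
        if (bitmapW < previewW) {
            // Preview area is wider than the analysis image.
            cropHeight = previewH * ((float) bitmapW / previewW);
        } else {
            // Preview area is narrower than (or equal to) the analysis image.
            float prc = 100f - (previewW / (bitmapW / 100f));
            cropHeight = previewH + (previewH / 100f) * prc;
        }
        float cropTop = bitmapH / 2f - cropHeight / 2f;
        return new float[] { cropTop, cropHeight };
    }
}
```

For a 1080x1920 rotated bitmap and a 1080x1620 preview, the crop keeps the middle 1620 px of the bitmap's height, starting 150 px from the top.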
In my app I have a scanner view, which must scan two barcodes with the zxing library. The top barcode is in PDF417 format, the bottom one in DataMatrix. I used https://github.com/dm77/barcodescanner as a base, but with one main difference: I have to use image coordinates as the scan area. The main algorithm:
1. Depending on the current step, the scan activity passes the current scan area to the scanner view in screen coordinates. These coordinates are calculated as follows:
public static Rect getScanRectangle(View view) {
int[] l = new int[2];
view.measure(0, 0);
view.getLocationOnScreen(l);
return new Rect(l[0], l[1], l[0] + view.getMeasuredWidth(), l[1] + view.getMeasuredHeight());
}
2. In the scanner view, in the onPreviewFrame method, the camera preview size is received from the camera parameters. When I translated the byte data from the camera into a bitmap image in memory, I saw that it was rotated 90 degrees clockwise and that the camera resolution did not equal the screen resolution. So I have to map screen coordinates into camera (or surface view) coordinates:
private Rect normalizeScreenCoordinates(Rect input, int cameraWidth, int cameraHeight) {
if(screenSize == null) {
screenSize = new Point();
Display display = activity.getWindowManager().getDefaultDisplay();
display.getSize(screenSize);
}
int height = screenSize.x;
int width = screenSize.y;
float widthCoef = (float)cameraWidth/width;
float heightCoef = (float)cameraHeight/height;
Rect result = new Rect((int)(input.top * widthCoef), (int)(input.left * heightCoef), (int)(input.bottom * widthCoef), (int)(input.right * heightCoef));
return result;
}
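The mapping above can be reduced to pure arithmetic, assuming the camera buffer is rotated 90 degrees clockwise relative to the portrait screen. This is an illustrative helper, not part of any API, and it mirrors the coefficient logic in normalizeScreenCoordinates:

```java
public class CoordMapper {
    // Maps a rect {left, top, right, bottom} in portrait-screen coordinates
    // into landscape camera-buffer coordinates, assuming the buffer is
    // rotated 90 degrees clockwise relative to the screen.
    public static int[] screenToCamera(int[] screen, int camW, int camH,
                                       int screenW, int screenH) {
        float wCoef = (float) camW / screenH; // camera width spans screen height
        float hCoef = (float) camH / screenW; // camera height spans screen width
        return new int[] {
            (int) (screen[1] * wCoef), (int) (screen[0] * hCoef),
            (int) (screen[3] * wCoef), (int) (screen[2] * hCoef)
        };
    }
}
```

A full-screen rect on a 1080x1920 display maps to the full 1920x1080 camera frame, which is a quick sanity check for the coefficients.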
After that, the translated coordinates are passed into zxing, and on most test devices everything works fine. But not on the Nexus 5X. First, there is a serious gap between the display size and the activity.getWindow().getDecorView() sizes. Maybe this is related to the status bar, which is translucent and whose height may not be included for some reason. But even after I added a vertical offset, something is still wrong with the scan area. What may be the reason for this error?
I have a service in which I have to capture an image without a SurfaceView. Everything works perfectly except for the orientation of the resulting image, which comes out at the wrong angle. On a small device like an HTC, I found it had a rotation issue, so I set the rotation manually and it worked:
if (camInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
    parameters.setRotation(270);
} else if (camInfo.facing == Camera.CameraInfo.CAMERA_FACING_BACK) {
    parameters.setRotation(90);
}
But when testing on Samsung and HTC One (large) devices, it produces the wrong angle. I found some posts where you take the image path and then set the rotation, but that didn't work for me (e.g., this one), as I am taking the picture without a SurfaceView and immediately posting it to the server. I also tried Google's snippet for setCameraOrientation(), but it requires an activity view to work, so I failed there too.
All I need is to fix the angle of the image before it is sent to the server, without any SurfaceView or activity.
setRotation() may choose only to set the EXIF orientation flag. The result is an image that is still rotated 90 degrees, but carries a flag describing the correct orientation. Not all viewers correctly account for this flag; specifically, BitmapFactory ignores it. You can draw the bitmap rotated on a canvas, rotate the bitmap acquired from BitmapFactory.decodeFile(), or manipulate the JPEG data before writing it into outStream using a third-party lib, e.g. MediaUtil (an Android port is on GitHub).
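For reference, the EXIF orientation tag maps to rotation angles roughly like this. The numeric values 6, 3, and 8 are the standard EXIF codes (the same values ExifInterface's ORIENTATION_ROTATE_90/180/270 constants carry); the helper name is hypothetical:

```java
public class ExifRotation {
    // Translates a raw EXIF orientation tag value into the clockwise
    // rotation (in degrees) needed to display the image upright.
    public static int exifToDegrees(int exifOrientation) {
        switch (exifOrientation) {
            case 6:  return 90;   // ExifInterface.ORIENTATION_ROTATE_90
            case 3:  return 180;  // ExifInterface.ORIENTATION_ROTATE_180
            case 8:  return 270;  // ExifInterface.ORIENTATION_ROTATE_270
            default: return 0;    // normal or undefined orientation
        }
    }
}
```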
You can access the image orientation information through an ExifInterface object. It will give you different values depending on the phone and on whether you capture the image in landscape or portrait mode. You can then use a Matrix to rotate the image according to the ExifInterface information, and finally send it to your server.
Knowing the path of your image (imagePath), use the following code:
Matrix matrix = new Matrix();
try {
    ExifInterface exif = new ExifInterface(imagePath);
    int orientation = exif.getAttributeInt(
            ExifInterface.TAG_ORIENTATION,
            ExifInterface.ORIENTATION_NORMAL);
    switch (orientation) {
        case ExifInterface.ORIENTATION_ROTATE_90:
            matrix.postRotate(90);
            break;
        case ExifInterface.ORIENTATION_ROTATE_180:
            matrix.postRotate(180);
            break;
        case ExifInterface.ORIENTATION_ROTATE_270:
            matrix.postRotate(270);
            break;
    }
} catch (IOException e) {
    e.printStackTrace();
}
Then use your Matrix object to rotate your bitmap:
rotatedBitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(),matrix, true);
You would have to save the bitmap somewhere (external or internal temp storage) before sending it to the server.
Hope this helps.
Your rotation-setting code is a bit too simple, so on some devices it will do the wrong thing. These hard-coded values are not guaranteed to be the right rotations; the correct answer is a function of the device's current orientation and the orientation of the camera sensor on the device.
Take a look at the sample code for Camera.Parameters.setRotation:
public void onOrientationChanged(int orientation) {
if (orientation == ORIENTATION_UNKNOWN) return;
android.hardware.Camera.CameraInfo info =
new android.hardware.Camera.CameraInfo();
android.hardware.Camera.getCameraInfo(cameraId, info);
orientation = (orientation + 45) / 90 * 90;
int rotation = 0;
if (info.facing == CameraInfo.CAMERA_FACING_FRONT) {
rotation = (info.orientation - orientation + 360) % 360;
} else { // back-facing camera
rotation = (info.orientation + orientation) % 360;
}
mParameters.setRotation(rotation);
}
If you don't have an activity, you'll have to figure out how to get the device's current orientation some other way (for example with an OrientationEventListener), but you do need to include info.orientation in your calculation.
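The arithmetic from the sample above can be expressed framework-free. This sketch mirrors the formula; FACING_FRONT is assumed to carry the same value, 1, as Camera.CameraInfo.CAMERA_FACING_FRONT:

```java
public class JpegRotation {
    public static final int FACING_FRONT = 1; // same value as CameraInfo.CAMERA_FACING_FRONT

    // Mirrors the setRotation() sample: combine the sensor's mounting
    // orientation with the device's physical orientation.
    public static int computeRotation(int facing, int sensorOrientation, int deviceOrientation) {
        deviceOrientation = (deviceOrientation + 45) / 90 * 90; // snap to nearest 90 degrees
        if (facing == FACING_FRONT) {
            return (sensorOrientation - deviceOrientation + 360) % 360;
        }
        return (sensorOrientation + deviceOrientation) % 360; // back-facing camera
    }
}
```

For a typical back camera mounted at 90 degrees and a device held upright (orientation near 0), this yields the 90 you hard-coded, but a device whose sensor is mounted at 270 degrees would need a different value, which is why the hard-coded approach breaks on some models.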