Android Picasso auto-rotates image

I am using Picasso to load images from the web in my application. I have noticed that some images are shown rotated by 90 degrees, although when I open the image in my browser I see it correctly oriented. I assume that these images have EXIF data. Is there any way to instruct Picasso to ignore EXIF?

As we know, Picasso applies EXIF rotation to images loaded from local storage; this is done via Android's internal utilities. Providing the same behavior for network loads is not as easy, because Picasso can be backed by custom HTTP loading libraries.
My solution is simple: we override caching and apply the EXIF rotation before the item is cached.
OkHttpClient client = new OkHttpClient.Builder()
        .addNetworkInterceptor(chain -> {
            Response originalResponse = chain.proceed(chain.request());
            byte[] body = originalResponse.body().bytes();
            ResponseBody newBody = ResponseBody
                    .create(originalResponse.body().contentType(), ImageUtils.processImage(body));
            return originalResponse.newBuilder().body(newBody).build();
        })
        .cache(cache)
        .build();
Here we add a network interceptor that transforms the response before it gets cached.
public class ImageUtils {

    public static byte[] processImage(byte[] originalImg) {
        int orientation = Exif.getOrientation(originalImg);
        if (orientation != 0) {
            Bitmap bmp = BitmapFactory.decodeByteArray(originalImg, 0, originalImg.length);
            ByteArrayOutputStream stream = new ByteArrayOutputStream();
            rotateImage(orientation, bmp).compress(Bitmap.CompressFormat.PNG, 100, stream);
            return stream.toByteArray();
        }
        return originalImg;
    }

    private static Bitmap rotateImage(int angle, Bitmap bitmapSrc) {
        Matrix matrix = new Matrix();
        matrix.postRotate(angle);
        return Bitmap.createBitmap(bitmapSrc, 0, 0,
                bitmapSrc.getWidth(), bitmapSrc.getHeight(), matrix, true);
    }
}
The Exif helper class:
public class Exif {
    private static final String TAG = "Exif";

    // Returns the degrees in clockwise. Values are 0, 90, 180, or 270.
    public static int getOrientation(byte[] jpeg) {
        if (jpeg == null) {
            return 0;
        }
        int offset = 0;
        int length = 0;
        // ISO/IEC 10918-1:1993(E)
        while (offset + 3 < jpeg.length && (jpeg[offset++] & 0xFF) == 0xFF) {
            int marker = jpeg[offset] & 0xFF;
            // Check if the marker is a padding.
            if (marker == 0xFF) {
                continue;
            }
            offset++;
            // Check if the marker is SOI or TEM.
            if (marker == 0xD8 || marker == 0x01) {
                continue;
            }
            // Check if the marker is EOI or SOS.
            if (marker == 0xD9 || marker == 0xDA) {
                break;
            }
            // Get the length and check if it is reasonable.
            length = pack(jpeg, offset, 2, false);
            if (length < 2 || offset + length > jpeg.length) {
                Log.e(TAG, "Invalid length");
                return 0;
            }
            // Break if the marker is EXIF in APP1.
            if (marker == 0xE1 && length >= 8 &&
                    pack(jpeg, offset + 2, 4, false) == 0x45786966 &&
                    pack(jpeg, offset + 6, 2, false) == 0) {
                offset += 8;
                length -= 8;
                break;
            }
            // Skip other markers.
            offset += length;
            length = 0;
        }
        // JEITA CP-3451 Exif Version 2.2
        if (length > 8) {
            // Identify the byte order.
            int tag = pack(jpeg, offset, 4, false);
            if (tag != 0x49492A00 && tag != 0x4D4D002A) {
                Log.e(TAG, "Invalid byte order");
                return 0;
            }
            boolean littleEndian = (tag == 0x49492A00);
            // Get the offset and check if it is reasonable.
            int count = pack(jpeg, offset + 4, 4, littleEndian) + 2;
            if (count < 10 || count > length) {
                Log.e(TAG, "Invalid offset");
                return 0;
            }
            offset += count;
            length -= count;
            // Get the count and go through all the elements.
            count = pack(jpeg, offset - 2, 2, littleEndian);
            while (count-- > 0 && length >= 12) {
                // Get the tag and check if it is orientation.
                tag = pack(jpeg, offset, 2, littleEndian);
                if (tag == 0x0112) {
                    // We do not really care about type and count, do we?
                    int orientation = pack(jpeg, offset + 8, 2, littleEndian);
                    switch (orientation) {
                        case 1:
                            return 0;
                        case 3:
                            return 180;
                        case 6:
                            return 90;
                        case 8:
                            return 270;
                    }
                    Log.i(TAG, "Unsupported orientation");
                    return 0;
                }
                offset += 12;
                length -= 12;
            }
        }
        Log.i(TAG, "Orientation not found");
        return 0;
    }

    private static int pack(byte[] bytes, int offset, int length, boolean littleEndian) {
        int step = 1;
        if (littleEndian) {
            offset += length - 1;
            step = -1;
        }
        int value = 0;
        while (length-- > 0) {
            value = (value << 8) | (bytes[offset] & 0xFF);
            offset += step;
        }
        return value;
    }
}
This solution is experimental and should be tested for leaks, and it can probably be improved. In most cases Samsung and iOS devices return a 90-degree rotation, and this solution works; other cases still need to be tested.
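To actually route Picasso's downloads through this client, you still need to hand it over when building the Picasso instance. A minimal wiring sketch (assuming the picasso 2.71828 artifact and its bundled OkHttp3Downloader; adjust to your versions):

// Hand the interceptor-equipped client to Picasso so every network load
// passes through ImageUtils.processImage() before it is cached.
Picasso picasso = new Picasso.Builder(context)
        .downloader(new OkHttp3Downloader(client))
        .build();
Picasso.setSingletonInstance(picasso); // optional: use it as the global instance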

Can you post the image you're using?
Because, as this thread says, EXIF orientation is ignored for images loaded from the web (it is honored only for content providers and local files).
I also tried to display this image with Picasso 2.5.2; the real orientation of the image faces right (the bottom code in the image faces right). The EXIF orientation is 90 degrees clockwise. Try opening it in Chrome (Chrome honors EXIF rotation): the image will face down (the bottom code in the image faces down).

Based on @ph0en1x's response, this version uses the Google EXIF library and Kotlin. Add this interceptor to the OkHttpClient used by Picasso:
addNetworkInterceptor {
    val response = it.proceed(it.request())
    val body = response.body
    if (body?.contentType()?.type == "image") {
        val bytes = body.bytes()
        val degrees = bytes.inputStream().use { input ->
            when (ExifInterface(input).getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL)) {
                ExifInterface.ORIENTATION_ROTATE_270 -> 270
                ExifInterface.ORIENTATION_ROTATE_180 -> 180
                ExifInterface.ORIENTATION_ROTATE_90 -> 90
                else -> 0
            }
        }
        if (degrees != 0) {
            val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
            ByteArrayOutputStream().use { output ->
                Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, Matrix().apply { postRotate(degrees.toFloat()) }, true)
                    .compress(Bitmap.CompressFormat.PNG, 100, output)
                response.newBuilder().body(output.toByteArray().toResponseBody(body.contentType())).build()
            }
        } else {
            response.newBuilder().body(bytes.toResponseBody(body.contentType())).build()
        }
    } else {
        response
    }
}

Related

When the image is saved, the image gets rotated by 90 degrees (using CameraX)

private void startCamera() {
    CameraX.unbindAll();
    Rational aspectRatio = new Rational(textureView.getWidth(), textureView.getHeight());
    Size screen = new Size(textureView.getWidth(), textureView.getHeight()); // size of the screen
    PreviewConfig pConfig = new PreviewConfig.Builder()
            .setTargetAspectRatio(aspectRatio)
            .setTargetResolution(screen)
            .setLensFacing(CameraX.LensFacing.BACK)
            .build();
    Preview preview = new Preview(pConfig);
    preview.setOnPreviewOutputUpdateListener(
            new Preview.OnPreviewOutputUpdateListener() {
                @Override
                public void onUpdated(Preview.PreviewOutput output) {
                    ViewGroup parent = (ViewGroup) textureView.getParent();
                    parent.removeView(textureView);
                    parent.addView(textureView, 0);
                    textureView.setSurfaceTexture(output.getSurfaceTexture());
                    int a = output.getRotationDegrees();
                    Log.d(TAG, "output Rota3: " + String.valueOf(a));
                    updateTransform();
                }
            });
    ImageCaptureConfig imageCaptureConfig = new ImageCaptureConfig.Builder()
            .setCaptureMode(ImageCapture.CaptureMode.MIN_LATENCY)
            .setTargetRotation(getWindowManager().getDefaultDisplay().getRotation())
            .setFlashMode(FlashMode.OFF)
            .build();
    imgCap = new ImageCapture(imageCaptureConfig);
    imgCapture.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            File file = new File(Environment.getExternalStorageDirectory() + "/" + "IMG_" + System.currentTimeMillis() + ".png");
            imgCap.takePicture(file, new ImageCapture.OnImageSavedListener() {
                @Override
                public void onImageSaved(@NonNull File file) {
                    String msg = "Pic captured at " + file.getAbsolutePath();
                    Toast.makeText(getBaseContext(), msg, Toast.LENGTH_LONG).show();
                }

                @Override
                public void onError(@NonNull ImageCapture.UseCaseError useCaseError, @NonNull String message, @Nullable Throwable cause) {
                    String msg = "Pic capture failed : " + message;
                    Toast.makeText(getBaseContext(), msg, Toast.LENGTH_LONG).show();
                    if (cause != null) {
                        cause.printStackTrace();
                    }
                }
            });
        }
    });
    CameraX.bindToLifecycle((LifecycleOwner) this, preview, imgCap);
}

private void updateTransform() {
    Matrix mx = new Matrix();
    float w = textureView.getMeasuredWidth();
    float h = textureView.getMeasuredHeight();
    float cX = w / 2f;
    float cY = h / 2f;
    int rotationDgr;
    int rotation = (int) textureView.getRotation();
    Log.d(TAG, "rotation: " + rotation);
    switch (rotation) {
        case Surface.ROTATION_0:
            rotationDgr = 0;
            break;
        case Surface.ROTATION_90:
            rotationDgr = 90;
            break;
        case Surface.ROTATION_180:
            rotationDgr = 180;
            break;
        case Surface.ROTATION_270:
            rotationDgr = 270;
            break;
        default:
            return;
    }
    Log.d(TAG, "rotationDGR: " + rotationDgr);
    mx.postRotate((float) rotationDgr, cX, cY);
    textureView.setTransform(mx);
}
Used CameraX.
Working on an application to capture an image and save it after capture.
But unfortunately, getTargetRotation() always returns 0 on every device, even on devices where the image comes out rotated by 90 degrees.
The problem is that the image gets rotated when it is saved.
I tried some solutions suggested on Stack Overflow but couldn't get the problem resolved.
Use this class:
public class DeviceOrientation {
    private static final String TAG = "CameraExif";

    // Returns the degrees in clockwise. Values are 0, 90, 180, or 270.
    public static int getOrientation(byte[] jpeg) {
        if (jpeg == null) {
            return 0;
        }
        int offset = 0;
        int length = 0;
        // ISO/IEC 10918-1:1993(E)
        while (offset + 3 < jpeg.length && (jpeg[offset++] & 0xFF) == 0xFF) {
            int marker = jpeg[offset] & 0xFF;
            // Check if the marker is a padding.
            if (marker == 0xFF) {
                continue;
            }
            offset++;
            // Check if the marker is SOI or TEM.
            if (marker == 0xD8 || marker == 0x01) {
                continue;
            }
            // Check if the marker is EOI or SOS.
            if (marker == 0xD9 || marker == 0xDA) {
                break;
            }
            // Get the length and check if it is reasonable.
            length = pack(jpeg, offset, 2, false);
            if (length < 2 || offset + length > jpeg.length) {
                Log.e(TAG, "Invalid length");
                return 0;
            }
            // Break if the marker is EXIF in APP1.
            if (marker == 0xE1 && length >= 8 &&
                    pack(jpeg, offset + 2, 4, false) == 0x45786966 &&
                    pack(jpeg, offset + 6, 2, false) == 0) {
                offset += 8;
                length -= 8;
                break;
            }
            // Skip other markers.
            offset += length;
            length = 0;
        }
        // JEITA CP-3451 Exif Version 2.2
        if (length > 8) {
            // Identify the byte order.
            int tag = pack(jpeg, offset, 4, false);
            if (tag != 0x49492A00 && tag != 0x4D4D002A) {
                Log.e(TAG, "Invalid byte order");
                return 0;
            }
            boolean littleEndian = (tag == 0x49492A00);
            // Get the offset and check if it is reasonable.
            int count = pack(jpeg, offset + 4, 4, littleEndian) + 2;
            if (count < 10 || count > length) {
                Log.e(TAG, "Invalid offset");
                return 0;
            }
            offset += count;
            length -= count;
            // Get the count and go through all the elements.
            count = pack(jpeg, offset - 2, 2, littleEndian);
            while (count-- > 0 && length >= 12) {
                // Get the tag and check if it is orientation.
                tag = pack(jpeg, offset, 2, littleEndian);
                if (tag == 0x0112) {
                    // We do not really care about type and count, do we?
                    int orientation = pack(jpeg, offset + 8, 2, littleEndian);
                    switch (orientation) {
                        case 1:
                            return 0;
                        case 3:
                            return 180;
                        case 6:
                            return 90;
                        case 8:
                            return 270;
                    }
                    Log.i(TAG, "Unsupported orientation");
                    return 0;
                }
                offset += 12;
                length -= 12;
            }
        }
        Log.i(TAG, "Orientation not found");
        return 0;
    }

    private static int pack(byte[] bytes, int offset, int length, boolean littleEndian) {
        int step = 1;
        if (littleEndian) {
            offset += length - 1;
            step = -1;
        }
        int value = 0;
        while (length-- > 0) {
            value = (value << 8) | (bytes[offset] & 0xFF);
            offset += step;
        }
        return value;
    }
}
and call this method after taking the photo:
int orientation = DeviceOrientation.getOrientation(bytes);
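DeviceOrientation.getOrientation() only reports the angle; rotating is still up to you. A minimal sketch of applying it (assuming bytes is the captured JPEG from the callback):

int orientation = DeviceOrientation.getOrientation(bytes);
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
if (orientation != 0) {
    // Undo the rotation recorded by the sensor before saving
    Matrix matrix = new Matrix();
    matrix.postRotate(orientation);
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
}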

Take picture with drawable/paint on face using Vision API

What am I trying to do?
I am trying to take a picture with a drawable/paint on the face, but I am not able to get both in the same picture.
What have I tried?
I have tried using CameraSource.takePicture, but I just get the face without any drawable/paint on it.
mCameraSource.takePicture(shutterCallback, new CameraSource.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] bytes) {
        try {
            String mainpath = getExternalStorageDirectory() + separator + "TestXyz" + separator + "images" + separator;
            File basePath = new File(mainpath);
            if (!basePath.exists())
                Log.d("CAPTURE_BASE_PATH", basePath.mkdirs() ? "Success" : "Failed");
            String path = mainpath + "photo_" + getPhotoTime() + ".jpg";
            File captureFile = new File(path);
            captureFile.createNewFile();
            if (!captureFile.exists())
                Log.d("CAPTURE_FILE_PATH", captureFile.createNewFile() ? "Success" : "Failed");
            FileOutputStream stream = new FileOutputStream(captureFile);
            stream.write(bytes);
            stream.flush();
            stream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});
I also tried using:
mPreview.setDrawingCacheEnabled(true);
Bitmap drawingCache = mPreview.getDrawingCache();
try {
    String mainpath = getExternalStorageDirectory() + separator + "TestXyz" + separator + "images" + separator;
    File basePath = new File(mainpath);
    if (!basePath.exists())
        Log.d("CAPTURE_BASE_PATH", basePath.mkdirs() ? "Success" : "Failed");
    String path = mainpath + "photo_" + getPhotoTime() + ".jpg";
    File captureFile = new File(path);
    captureFile.createNewFile();
    if (!captureFile.exists())
        Log.d("CAPTURE_FILE_PATH", captureFile.createNewFile() ? "Success" : "Failed");
    FileOutputStream stream = new FileOutputStream(captureFile);
    drawingCache.compress(Bitmap.CompressFormat.PNG, 100, stream);
    stream.flush();
    stream.close();
} catch (IOException e) {
    e.printStackTrace();
}
In this case I only get what I draw on the face. Here, mPreview is the CameraSourcePreview.
I just added a capture button and added the above code to this Google example.
You are very close to achieving what you need :)
You have:
An image from the Camera of the face (First code snippet)
An image from the Canvas of the eyes overlay (Second code snippet)
What you need:
An image that has the face with the eyes overlay on top - A merged image.
How to merge?
To merge 2 images simply use a canvas, like so:
public Bitmap mergeBitmaps(Bitmap face, Bitmap overlay) {
    // Create a new image with the target size
    int width = face.getWidth();
    int height = face.getHeight();
    Bitmap newBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Rect faceRect = new Rect(0, 0, width, height);
    Rect overlayRect = new Rect(0, 0, overlay.getWidth(), overlay.getHeight());
    // Draw the face and then the overlay (make sure the rects are as needed)
    Canvas canvas = new Canvas(newBitmap);
    canvas.drawBitmap(face, faceRect, faceRect, null);
    canvas.drawBitmap(overlay, overlayRect, faceRect, null);
    return newBitmap;
}
Then you can save the new image, as you are doing now.
Full code would look like:
mCameraSource.takePicture(shutterCallback, new CameraSource.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] bytes) {
        // Generate the face bitmap
        BitmapFactory.Options options = new BitmapFactory.Options();
        Bitmap face = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, options);
        // Generate the eyes-overlay bitmap
        mPreview.setDrawingCacheEnabled(true);
        Bitmap overlay = mPreview.getDrawingCache();
        // Generate the final merged image
        Bitmap result = mergeBitmaps(face, overlay);
        // Save the result image to a file
        try {
            String mainpath = getExternalStorageDirectory() + separator + "TestXyz" + separator + "images" + separator;
            File basePath = new File(mainpath);
            if (!basePath.exists())
                Log.d("CAPTURE_BASE_PATH", basePath.mkdirs() ? "Success" : "Failed");
            String path = mainpath + "photo_" + getPhotoTime() + ".jpg";
            File captureFile = new File(path);
            captureFile.createNewFile();
            if (!captureFile.exists())
                Log.d("CAPTURE_FILE_PATH", captureFile.createNewFile() ? "Success" : "Failed");
            FileOutputStream stream = new FileOutputStream(captureFile);
            result.compress(Bitmap.CompressFormat.PNG, 100, stream);
            stream.flush();
            stream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});
Note that the above is just an example code.
You should probably move the merging and saving to a file to a background thread.
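For example, a sketch using a single-threaded executor (ioExecutor and saveToFile are illustrative names, not from the original answer):

// Create once, e.g. in onCreate(), and shut down in onDestroy()
ExecutorService ioExecutor = Executors.newSingleThreadExecutor();

// inside onPictureTaken(byte[] bytes), after grabbing the overlay on the UI thread:
ioExecutor.execute(() -> {
    Bitmap face = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    Bitmap result = mergeBitmaps(face, overlay);
    saveToFile(result); // the file-writing code from the snippet above
});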
I was able to capture an image with the drawable/paint on it using the solution below:
private void captureImage() {
    mPreview.setDrawingCacheEnabled(true);
    Bitmap drawingCache = mPreview.getDrawingCache();
    mCameraSource.takePicture(shutterCallback, new CameraSource.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] bytes) {
            int orientation = Exif.getOrientation(bytes);
            Bitmap temp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
            Bitmap picture = rotateImage(temp, orientation);
            Bitmap overlay = Bitmap.createBitmap(mGraphicOverlay.getWidth(), mGraphicOverlay.getHeight(), picture.getConfig());
            Canvas canvas = new Canvas(overlay);
            Matrix matrix = new Matrix();
            matrix.setScale((float) overlay.getWidth() / (float) picture.getWidth(), (float) overlay.getHeight() / (float) picture.getHeight());
            // mirror by inverting scale and translating
            matrix.preScale(-1, 1);
            matrix.postTranslate(canvas.getWidth(), 0);
            Paint paint = new Paint();
            canvas.drawBitmap(picture, matrix, paint);
            canvas.drawBitmap(drawingCache, 0, 0, paint);
            try {
                String mainpath = getExternalStorageDirectory() + separator + "MaskIt" + separator + "images" + separator;
                File basePath = new File(mainpath);
                if (!basePath.exists())
                    Log.d("CAPTURE_BASE_PATH", basePath.mkdirs() ? "Success" : "Failed");
                String path = mainpath + "photo_" + getPhotoTime() + ".jpg";
                File captureFile = new File(path);
                captureFile.createNewFile();
                if (!captureFile.exists())
                    Log.d("CAPTURE_FILE_PATH", captureFile.createNewFile() ? "Success" : "Failed");
                FileOutputStream stream = new FileOutputStream(captureFile);
                overlay.compress(Bitmap.CompressFormat.PNG, 100, stream);
                stream.flush();
                stream.close();
                picture.recycle();
                drawingCache.recycle();
                mPreview.setDrawingCacheEnabled(false);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    });
}
Sometimes an orientation issue also occurs on some devices. For that I used the Exif class and the rotateImage() function.
Exif class (reference from here):
public class Exif {
    private static final String TAG = "CameraExif";

    // Returns the EXIF orientation value (3, 6, or 8; 0 for normal/unknown).
    // These match the ExifInterface constants used in rotateImage() below.
    public static int getOrientation(byte[] jpeg) {
        if (jpeg == null) {
            return 0;
        }
        int offset = 0;
        int length = 0;
        // ISO/IEC 10918-1:1993(E)
        while (offset + 3 < jpeg.length && (jpeg[offset++] & 0xFF) == 0xFF) {
            int marker = jpeg[offset] & 0xFF;
            // Check if the marker is a padding.
            if (marker == 0xFF) {
                continue;
            }
            offset++;
            // Check if the marker is SOI or TEM.
            if (marker == 0xD8 || marker == 0x01) {
                continue;
            }
            // Check if the marker is EOI or SOS.
            if (marker == 0xD9 || marker == 0xDA) {
                break;
            }
            // Get the length and check if it is reasonable.
            length = pack(jpeg, offset, 2, false);
            if (length < 2 || offset + length > jpeg.length) {
                Log.e(TAG, "Invalid length");
                return 0;
            }
            // Break if the marker is EXIF in APP1.
            if (marker == 0xE1 && length >= 8 &&
                    pack(jpeg, offset + 2, 4, false) == 0x45786966 &&
                    pack(jpeg, offset + 6, 2, false) == 0) {
                offset += 8;
                length -= 8;
                break;
            }
            // Skip other markers.
            offset += length;
            length = 0;
        }
        // JEITA CP-3451 Exif Version 2.2
        if (length > 8) {
            // Identify the byte order.
            int tag = pack(jpeg, offset, 4, false);
            if (tag != 0x49492A00 && tag != 0x4D4D002A) {
                Log.e(TAG, "Invalid byte order");
                return 0;
            }
            boolean littleEndian = (tag == 0x49492A00);
            // Get the offset and check if it is reasonable.
            int count = pack(jpeg, offset + 4, 4, littleEndian) + 2;
            if (count < 10 || count > length) {
                Log.e(TAG, "Invalid offset");
                return 0;
            }
            offset += count;
            length -= count;
            // Get the count and go through all the elements.
            count = pack(jpeg, offset - 2, 2, littleEndian);
            while (count-- > 0 && length >= 12) {
                // Get the tag and check if it is orientation.
                tag = pack(jpeg, offset, 2, littleEndian);
                if (tag == 0x0112) {
                    // We do not really care about type and count, do we?
                    int orientation = pack(jpeg, offset + 8, 2, littleEndian);
                    switch (orientation) {
                        case 1:
                            return 0;
                        case 3:
                            return 3;
                        case 6:
                            return 6;
                        case 8:
                            return 8;
                    }
                    Log.i(TAG, "Unsupported orientation");
                    return 0;
                }
                offset += 12;
                length -= 12;
            }
        }
        Log.i(TAG, "Orientation not found");
        return 0;
    }

    private static int pack(byte[] bytes, int offset, int length, boolean littleEndian) {
        int step = 1;
        if (littleEndian) {
            offset += length - 1;
            step = -1;
        }
        int value = 0;
        while (length-- > 0) {
            value = (value << 8) | (bytes[offset] & 0xFF);
            offset += step;
        }
        return value;
    }
}
rotateImage function:
private Bitmap rotateImage(Bitmap bm, int i) {
    Matrix matrix = new Matrix();
    switch (i) {
        case ExifInterface.ORIENTATION_NORMAL:
            return bm;
        case ExifInterface.ORIENTATION_FLIP_HORIZONTAL:
            matrix.setScale(-1, 1);
            break;
        case ExifInterface.ORIENTATION_ROTATE_180:
            matrix.setRotate(180);
            break;
        case ExifInterface.ORIENTATION_FLIP_VERTICAL:
            matrix.setRotate(180);
            matrix.postScale(-1, 1);
            break;
        case ExifInterface.ORIENTATION_TRANSPOSE:
            matrix.setRotate(90);
            matrix.postScale(-1, 1);
            break;
        case ExifInterface.ORIENTATION_ROTATE_90:
            matrix.setRotate(90);
            break;
        case ExifInterface.ORIENTATION_TRANSVERSE:
            matrix.setRotate(-90);
            matrix.postScale(-1, 1);
            break;
        case ExifInterface.ORIENTATION_ROTATE_270:
            matrix.setRotate(-90);
            break;
        default:
            return bm;
    }
    try {
        Bitmap bmRotated = Bitmap.createBitmap(bm, 0, 0, bm.getWidth(), bm.getHeight(), matrix, true);
        bm.recycle();
        return bmRotated;
    } catch (OutOfMemoryError e) {
        e.printStackTrace();
        return null;
    }
}
You can achieve the effect that you want by breaking it into smaller steps.
Take the picture
Send the bitmap to Google Mobile Vision to detect the "landmarks" in the face and the probability that each eye is open
Paint the appropriate "eyes" onto your image
When using Google Mobile Vision's FaceDetector, you'll get back a SparseArray of Face objects (which may contain more than one face, or which may be empty). So you'll need to handle these cases. But you can loop through the SparseArray and find the Face object that you want to play with.
static Bitmap processFaces(Context context, Bitmap picture) {
    // Create a "face detector" object, using the builder pattern
    FaceDetector detector = new FaceDetector.Builder(context)
            .setTrackingEnabled(false) // disable tracking to improve performance
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .build();
    // Create a "Frame" object, again using a builder pattern (and passing in our picture)
    Frame frame = new Frame.Builder().setBitmap(picture).build(); // build frame
    // Get a sparse array of face objects
    SparseArray<Face> faces = detector.detect(frame); // detect the faces
    // This example just deals with a single face for the sake of simplicity,
    // but you can change this to deal with multiple faces.
    if (faces.size() != 1) return picture;
    // Make a mutable copy of the background image that we can modify
    Bitmap bmOverlay = Bitmap.createBitmap(picture.getWidth(), picture.getHeight(), picture.getConfig());
    Canvas canvas = new Canvas(bmOverlay);
    canvas.drawBitmap(picture, 0, 0, null);
    // Get the Face object that we want to manipulate, and process it
    Face face = faces.valueAt(0);
    processFace(face, canvas);
    detector.release();
    return bmOverlay;
}
Once you've got a Face object, you can find the features that interest you like this
private static void processFace(Face face, Canvas canvas) {
    // The Face object can tell you the probability that each eye is open.
    // I'm comparing this probability to an arbitrary threshold of 0.6 here,
    // but you can vary it between 0 and 1 as you please.
    boolean leftEyeClosed = face.getIsLeftEyeOpenProbability() < .6;
    boolean rightEyeClosed = face.getIsRightEyeOpenProbability() < .6;
    // Loop through the face's "landmarks" (eyes, nose, etc) to find the eyes.
    // landmark.getPosition() gives you the (x,y) coordinates of each feature.
    for (Landmark landmark : face.getLandmarks()) {
        if (landmark.getType() == Landmark.LEFT_EYE)
            overlayEyeBitmap(canvas, leftEyeClosed, landmark.getPosition().x, landmark.getPosition().y);
        if (landmark.getType() == Landmark.RIGHT_EYE)
            overlayEyeBitmap(canvas, rightEyeClosed, landmark.getPosition().x, landmark.getPosition().y);
    }
}
Then you can add your paint!
private static void overlayEyeBitmap(Canvas canvas, boolean eyeClosed, float cx, float cy) {
    float radius = 40;
    // draw the eye's background circle with the appropriate color
    Paint paintFill = new Paint();
    paintFill.setStyle(Paint.Style.FILL);
    if (eyeClosed)
        paintFill.setColor(Color.YELLOW);
    else
        paintFill.setColor(Color.WHITE);
    canvas.drawCircle(cx, cy, radius, paintFill);
    // draw a black border around the eye
    Paint paintStroke = new Paint();
    paintStroke.setColor(Color.BLACK);
    paintStroke.setStyle(Paint.Style.STROKE);
    paintStroke.setStrokeWidth(5);
    canvas.drawCircle(cx, cy, radius, paintStroke);
    if (eyeClosed) {
        // draw a horizontal line across the closed eye
        canvas.drawLine(cx - radius, cy, cx + radius, cy, paintStroke);
    } else {
        // draw a big off-center pupil on the open eye
        paintFill.setColor(Color.BLACK);
        float cxPupil = cx - 10;
        float cyPupil = cy + 10;
        canvas.drawCircle(cxPupil, cyPupil, 25, paintFill);
    }
}
In the snippet above, I just hardcoded the eye radii to show a proof of concept. You'll probably want to do some more flexible scaling, using some percentage of face.getWidth() to determine the appropriate values.
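For instance, a proportional radius (an illustrative sketch, not from the original answer) might look like:

// Scale the eye radius to the detected face instead of hardcoding 40;
// the 0.08 factor here is an arbitrary illustrative choice.
float radius = face.getWidth() * 0.08f;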
Some more details about the Mobile Vision API are here, and Udacity's current Advanced Android course has a nice walkthrough of this stuff (taking a picture, sending it to Mobile Vision, and adding a bitmap onto it). The course is free, or you can just look at what they did on Github.

camerasource.takePicture() saves rotated images on some devices

I am using the Vision API for tracking a face, and I apply a mask based on the face position. When I take a picture from the front camera, I call camerasource.takePicture() to save the image. I am facing an image rotation issue on some devices like Samsung: the captured image shows the mask and the face in different positions. I use an Exif class to get the orientation of the images, but it always returns 0, so I am unable to rotate the image.
I am using the following class to get the orientation and rotate the image.
public class ExifUtils {

    public Bitmap rotateBitmap(String src, Bitmap bitmap) {
        try {
            int orientation = getExifOrientation(src);
            if (orientation == 1) {
                return bitmap;
            }
            Matrix matrix = new Matrix();
            switch (orientation) {
                case 2:
                    matrix.setScale(-1, 1);
                    break;
                case 3:
                    matrix.setRotate(180);
                    break;
                case 4:
                    matrix.setRotate(180);
                    matrix.postScale(-1, 1);
                    break;
                case 5:
                    matrix.setRotate(90);
                    matrix.postScale(-1, 1);
                    break;
                case 6:
                    matrix.setRotate(90);
                    break;
                case 7:
                    matrix.setRotate(-90);
                    matrix.postScale(-1, 1);
                    break;
                case 8:
                    matrix.setRotate(-90);
                    break;
                default:
                    return bitmap;
            }
            try {
                Bitmap oriented = Bitmap.createBitmap(bitmap, 0, 0,
                        bitmap.getWidth(), bitmap.getHeight(), matrix, true);
                bitmap.recycle();
                return oriented;
            } catch (OutOfMemoryError e) {
                e.printStackTrace();
                return bitmap;
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return bitmap;
    }

    private int getExifOrientation(String src) throws IOException {
        int orientation = 1;
        try {
            if (Build.VERSION.SDK_INT >= 5) {
                Class<?> exifClass = Class.forName("android.media.ExifInterface");
                Constructor<?> exifConstructor = exifClass.getConstructor(new Class[]{String.class});
                Object exifInstance = exifConstructor.newInstance(new Object[]{src});
                Method getAttributeInt = exifClass.getMethod("getAttributeInt",
                        new Class[]{String.class, int.class});
                Field tagOrientationField = exifClass.getField("TAG_ORIENTATION");
                String tagOrientation = (String) tagOrientationField.get(null);
                orientation = (Integer) getAttributeInt.invoke(exifInstance,
                        new Object[]{tagOrientation, 1});
            }
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SecurityException e) {
            e.printStackTrace();
        } catch (NoSuchMethodException e) {
            e.printStackTrace();
        } catch (IllegalArgumentException e) {
            e.printStackTrace();
        } catch (Fragment.InstantiationException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e) {
            e.printStackTrace();
        } catch (InvocationTargetException e) {
            e.printStackTrace();
        } catch (NoSuchFieldException e) {
            e.printStackTrace();
        } catch (java.lang.InstantiationException e) {
            e.printStackTrace();
        }
        return orientation;
    }
}
I found this issue with the Vision API. Is there any solution?
I solved my problem myself: I get the orientation from the byte data, then rotate the image accordingly.
private CameraSource.PictureCallback mPicture = new CameraSource.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] bytes) {
        int orientation = Exif.getOrientation(bytes);
        Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
        switch (orientation) {
            case 90:
                bitmapPicture = rotateImage(bitmap, 90);
                break;
            case 180:
                bitmapPicture = rotateImage(bitmap, 180);
                break;
            case 270:
                bitmapPicture = rotateImage(bitmap, 270);
                break;
            case 0:
                // if orientation is zero we don't need to rotate
            default:
                break;
        }
        // write your code here to save the bitmap
    }
};
public static Bitmap rotateImage(Bitmap source, float angle) {
    Matrix matrix = new Matrix();
    matrix.postRotate(angle);
    return Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
}
The class below is used to get the orientation from the byte[] data.
public class Exif {
    private static final String TAG = "CameraExif";

    // Returns the degrees in clockwise. Values are 0, 90, 180, or 270.
    public static int getOrientation(byte[] jpeg) {
        if (jpeg == null) {
            return 0;
        }
        int offset = 0;
        int length = 0;
        // ISO/IEC 10918-1:1993(E)
        while (offset + 3 < jpeg.length && (jpeg[offset++] & 0xFF) == 0xFF) {
            int marker = jpeg[offset] & 0xFF;
            // Check if the marker is a padding.
            if (marker == 0xFF) {
                continue;
            }
            offset++;
            // Check if the marker is SOI or TEM.
            if (marker == 0xD8 || marker == 0x01) {
                continue;
            }
            // Check if the marker is EOI or SOS.
            if (marker == 0xD9 || marker == 0xDA) {
                break;
            }
            // Get the length and check if it is reasonable.
            length = pack(jpeg, offset, 2, false);
            if (length < 2 || offset + length > jpeg.length) {
                Log.e(TAG, "Invalid length");
                return 0;
            }
            // Break if the marker is EXIF in APP1.
            if (marker == 0xE1 && length >= 8 &&
                    pack(jpeg, offset + 2, 4, false) == 0x45786966 &&
                    pack(jpeg, offset + 6, 2, false) == 0) {
                offset += 8;
                length -= 8;
                break;
            }
            // Skip other markers.
            offset += length;
            length = 0;
        }
        // JEITA CP-3451 Exif Version 2.2
        if (length > 8) {
            // Identify the byte order.
            int tag = pack(jpeg, offset, 4, false);
            if (tag != 0x49492A00 && tag != 0x4D4D002A) {
                Log.e(TAG, "Invalid byte order");
                return 0;
            }
            boolean littleEndian = (tag == 0x49492A00);
            // Get the offset and check if it is reasonable.
            int count = pack(jpeg, offset + 4, 4, littleEndian) + 2;
            if (count < 10 || count > length) {
                Log.e(TAG, "Invalid offset");
                return 0;
            }
            offset += count;
            length -= count;
            // Get the count and go through all the elements.
            count = pack(jpeg, offset - 2, 2, littleEndian);
            while (count-- > 0 && length >= 12) {
                // Get the tag and check if it is orientation.
                tag = pack(jpeg, offset, 2, littleEndian);
                if (tag == 0x0112) {
                    // We do not really care about type and count, do we?
                    int orientation = pack(jpeg, offset + 8, 2, littleEndian);
                    switch (orientation) {
                        case 1:
                            return 0;
                        case 3:
                            return 180;
                        case 6:
                            return 90;
                        case 8:
                            return 270;
                    }
                    Log.i(TAG, "Unsupported orientation");
                    return 0;
                }
                offset += 12;
                length -= 12;
            }
        }
        Log.i(TAG, "Orientation not found");
        return 0;
    }

    private static int pack(byte[] bytes, int offset, int length, boolean littleEndian) {
        int step = 1;
        if (littleEndian) {
            offset += length - 1;
            step = -1;
        }
        int value = 0;
        while (length-- > 0) {
            value = (value << 8) | (bytes[offset] & 0xFF);
            offset += step;
        }
        return value;
    }
}
I have encountered a similar problem with Samsung devices; ExifInterface doesn't seem to work correctly with images saved by them. To solve the problem I used code from the Glide image library, which seems to handle checking the original image rotation correctly.
Check out this link: Glide source.
The getOrientation method there seems to do the job most of the time.
Sounds like an issue with EXIF tags to me. Basically, modern cameras save images in the same orientation but also save a tag that tells you what the original orientation was.
You could use ExifInterface, which comes bundled with the Android framework. I prefer Alessandro Crugnola's Android-Exif-Interface library, which doesn't require you to keep file paths around.
How I used Android-Exif-Interface in my project:
ExifInterface exif = new ExifInterface();
Matrix matrix = new Matrix();
try {
    exif.readExif(context.getContentResolver().openInputStream(fileUri), ExifInterface.Options.OPTION_ALL);
    ExifTag tag = exif.getTag(ExifInterface.TAG_ORIENTATION);
    int orientation = tag.getValueAsInt(1);
    switch (orientation) {
        case 3: /* 180° */
            matrix.postRotate(180);
            break;
        case 6: /* 90° */
            matrix.postRotate(90);
            break;
        case 8: /* 270° */
            matrix.postRotate(-90);
            break;
    }
} catch (IOException e) {
    Log.i("INFO", "expected behaviour: IOException");
    // Not every picture comes from the phone. Should that be the case,
    // we can't get EXIF tags anyway, since those aren't transmitted
    // via HTTP (at least I think so; I'd need to save the picture on the
    // SD card to confirm that, and I don't want to do that).
} catch (NullPointerException e) {
    Log.i("INFO", "expected behaviour: NullPointerException");
    // Same as above: not every picture comes from the phone.
}
In many cases, the PictureCallback receives a JPEG with an undefined orientation tag, but you can calculate the actual device orientation either by looking at the display rotation or by running an orientation listener, since Camera.takePicture returns a rotated byte array.
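A sketch of the orientation-listener approach (mDeviceRotation is an illustrative field name): round the reported angle to the nearest 90 degrees so it can later be fed into Camera.Parameters.setRotation() before calling takePicture().

OrientationEventListener listener = new OrientationEventListener(context) {
    @Override
    public void onOrientationChanged(int orientation) {
        if (orientation == ORIENTATION_UNKNOWN) return;
        // round to the nearest multiple of 90 in [0, 360)
        mDeviceRotation = ((orientation + 45) / 90 * 90) % 360;
    }
};
listener.enable();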

Android camera/picture orientation issues with Samsung Galaxy S3, S4, S5

I am developing a camera application for Android API 16 to 21 whose main and only purpose is to take portrait photos. I am able to take pictures with several devices (Nexus 4, Nexus 5, HTC...) and have them correctly oriented (meaning that my preview equals the taken picture in both size and orientation).
However, I have tested my application on several other devices and some of them are giving me a lot of trouble: the Samsung Galaxy S3/S4/S5.
On these three devices, the preview is correctly displayed; however, the pictures returned by the method onPictureTaken(final byte[] jpeg, Camera camera) are always sideways.
This is the Bitmap created from byte[] jpeg and displayed in the ImageView to my user just before saving it to disk:
And here is the image once saved to disk:
As you can see, the image is completely stretched in the preview and wrongly rotated once saved to disk.
Here is my CameraPreview class (I obfuscated other methods since they had nothing to do with camera parameters):
public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {

    private SurfaceHolder surfaceHolder;
    private Camera camera;

    // Removed unnecessary code

    public void surfaceCreated(SurfaceHolder holder) {
        try {
            camera.setPreviewDisplay(holder);
        } catch (IOException e) {
            // setPreviewDisplay throws IOException; handling was elided in the original post
            e.printStackTrace();
        }
        setCameraParameters();
        camera.startPreview();
    }

    private void setCameraParameters() {
        Camera.Parameters parameters = camera.getParameters();
        Camera.CameraInfo info = new Camera.CameraInfo();
        Camera.getCameraInfo(Camera.CameraInfo.CAMERA_FACING_BACK, info);
        DisplayMetrics metrics = new DisplayMetrics();
        WindowManager windowManager = (WindowManager) getContext().getSystemService(Context.WINDOW_SERVICE);
        windowManager.getDefaultDisplay().getMetrics(metrics);
        int rotation = windowManager.getDefaultDisplay().getRotation();
        int degrees = 0;
        switch (rotation) {
            case Surface.ROTATION_0:
                degrees = 0;
                break;
            case Surface.ROTATION_90:
                degrees = 90;
                break;
            case Surface.ROTATION_180:
                degrees = 180;
                break;
            case Surface.ROTATION_270:
                degrees = 270;
                break;
        }
        int rotate = (info.orientation - degrees + 360) % 360;
        parameters.setRotation(rotate);
        // Save parameters
        camera.setDisplayOrientation(90);
        camera.setParameters(parameters);
    }
}
How come this exact piece of code works for other devices but not Samsung's?
I tried to find answers in the following SO posts, but nothing has helped me so far:
this one and this other one.
EDIT
Implementing Joey Chong's answer does not change anything:
public void onPictureTaken(final byte[] data, Camera camera) {
    try {
        File pictureFile = new File(...);
        Bitmap realImage = BitmapFactory.decodeByteArray(data, 0, data.length);
        FileOutputStream fos = new FileOutputStream(pictureFile);
        realImage.compress(Bitmap.CompressFormat.JPEG, 100, fos);
        int orientation = -1;
        ExifInterface exif = new ExifInterface(pictureFile.toString());
        int exifOrientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
        switch (exifOrientation) {
            case ExifInterface.ORIENTATION_ROTATE_270:
                orientation = 270;
                break;
            case ExifInterface.ORIENTATION_ROTATE_180:
                orientation = 180;
                break;
            case ExifInterface.ORIENTATION_ROTATE_90:
                orientation = 90;
                break;
            case ExifInterface.ORIENTATION_NORMAL:
                orientation = 0;
                break;
            default:
                break;
        }
        fos.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Here are the EXIF results I get for a working device:
Orientation: 0
And here are the results for the S4:
Orientation: 0
It is because the phone still saves the image in landscape and writes the metadata as 90 degrees.
You can try checking the EXIF data and rotating the bitmap before putting it in the image view. To check the EXIF data, use something like below:
int orientation = -1;
ExifInterface exif = new ExifInterface(imagePath);
int exifOrientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION,
        ExifInterface.ORIENTATION_NORMAL);
switch (exifOrientation) {
    case ExifInterface.ORIENTATION_ROTATE_270:
        orientation = 270;
        break;
    case ExifInterface.ORIENTATION_ROTATE_180:
        orientation = 180;
        break;
    case ExifInterface.ORIENTATION_ROTATE_90:
        orientation = 90;
        break;
    case ExifInterface.ORIENTATION_NORMAL:
        orientation = 0;
        break;
    default:
        break;
}
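Then rotate the decoded bitmap by the angle you computed before putting it in the image view; a minimal sketch (assuming bitmap holds the decoded image):

if (orientation > 0) {
    // Undo the sensor rotation recorded in the EXIF tag
    Matrix matrix = new Matrix();
    matrix.postRotate(orientation);
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
}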
I had a similar problem regarding the saved image.
I used something similar to what is described here https://github.com/googlesamples/android-vision/issues/124 by user kinghsumit (the comment from Sep 15, 2016).
I'll copy it here, just in case.
private CameraSource.PictureCallback mPicture = new CameraSource.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] bytes) {
        int orientation = Exif.getOrientation(bytes);
        Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
        switch (orientation) {
            case 90:
                bitmapPicture = rotateImage(bitmap, 90);
                break;
            case 180:
                bitmapPicture = rotateImage(bitmap, 180);
                break;
            case 270:
                bitmapPicture = rotateImage(bitmap, 270);
                break;
            case 0:
                // if orientation is zero we don't need to rotate
            default:
                break;
        }
        // write your code here to save the bitmap
    }
};
public static Bitmap rotateImage(Bitmap source, float angle) {
    Matrix matrix = new Matrix();
    matrix.postRotate(angle);
    return Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
}
The class below is used to get the orientation from the byte[] data.
public class Exif {
    private static final String TAG = "CameraExif";

    // Returns the degrees in clockwise. Values are 0, 90, 180, or 270.
    public static int getOrientation(byte[] jpeg) {
        if (jpeg == null) {
            return 0;
        }
        int offset = 0;
        int length = 0;
        // ISO/IEC 10918-1:1993(E)
        while (offset + 3 < jpeg.length && (jpeg[offset++] & 0xFF) == 0xFF) {
            int marker = jpeg[offset] & 0xFF;
            // Check if the marker is a padding.
            if (marker == 0xFF) {
                continue;
            }
            offset++;
            // Check if the marker is SOI or TEM.
            if (marker == 0xD8 || marker == 0x01) {
                continue;
            }
            // Check if the marker is EOI or SOS.
            if (marker == 0xD9 || marker == 0xDA) {
                break;
            }
            // Get the length and check if it is reasonable.
            length = pack(jpeg, offset, 2, false);
            if (length < 2 || offset + length > jpeg.length) {
                Log.e(TAG, "Invalid length");
                return 0;
            }
            // Break if the marker is EXIF in APP1.
            if (marker == 0xE1 && length >= 8 &&
                    pack(jpeg, offset + 2, 4, false) == 0x45786966 &&
                    pack(jpeg, offset + 6, 2, false) == 0) {
                offset += 8;
                length -= 8;
                break;
            }
            // Skip other markers.
            offset += length;
            length = 0;
        }
        // JEITA CP-3451 Exif Version 2.2
        if (length > 8) {
            // Identify the byte order.
            int tag = pack(jpeg, offset, 4, false);
            if (tag != 0x49492A00 && tag != 0x4D4D002A) {
                Log.e(TAG, "Invalid byte order");
                return 0;
            }
            boolean littleEndian = (tag == 0x49492A00);
            // Get the offset and check if it is reasonable.
            int count = pack(jpeg, offset + 4, 4, littleEndian) + 2;
            if (count < 10 || count > length) {
                Log.e(TAG, "Invalid offset");
                return 0;
            }
            offset += count;
            length -= count;
            // Get the count and go through all the elements.
            count = pack(jpeg, offset - 2, 2, littleEndian);
            while (count-- > 0 && length >= 12) {
                // Get the tag and check if it is orientation.
                tag = pack(jpeg, offset, 2, littleEndian);
                if (tag == 0x0112) {
                    // We do not really care about type and count, do we?
                    int orientation = pack(jpeg, offset + 8, 2, littleEndian);
                    switch (orientation) {
                        case 1:
                            return 0;
                        case 3:
                            return 180;
                        case 6:
                            return 90;
                        case 8:
                            return 270;
                    }
                    Log.i(TAG, "Unsupported orientation");
                    return 0;
                }
                offset += 12;
                length -= 12;
            }
        }
        Log.i(TAG, "Orientation not found");
        return 0;
    }

    private static int pack(byte[] bytes, int offset, int length, boolean littleEndian) {
        int step = 1;
        if (littleEndian) {
            offset += length - 1;
            step = -1;
        }
        int value = 0;
        while (length-- > 0) {
            value = (value << 8) | (bytes[offset] & 0xFF);
            offset += step;
        }
        return value;
    }
}
It worked for me, except for the Nexus 5X, but that's because that device has a peculiar issue due to its construction (its camera sensor is mounted upside down).
I hope this helps you!
I used this AndroidCameraUtil.
It helped me a lot on this issue.
You can try to use Camera parameters to fix the rotation issue.
Camera.Parameters parameters = camera.getParameters();
parameters.set("orientation", "portrait");
parameters.setRotation(90);
camera.setParameters(parameters);
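Note that hardcoding 90 only covers the back camera in portrait. The Camera.Parameters#setRotation documentation derives the value from the sensor orientation and the device orientation instead; roughly (deviceOrientation coming from an OrientationEventListener, rounded to 0/90/180/270):

Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info);
int rotation;
if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
    rotation = (info.orientation - deviceOrientation + 360) % 360;
} else { // back-facing camera
    rotation = (info.orientation + deviceOrientation) % 360;
}
parameters.setRotation(rotation);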

Android: convert raw data to PNG image, color is not good

I have tried to convert pure raw data to a PNG image, but I am not able to get the correct output image with the exact colors.
For reference I have attached both the raw file and the image. Please advise me on how to get the image with correct colors. I have added both the image and the raw file.
CODE
File screensPath = new File(SCREENSHOT_FOLDER);
screensPath.mkdirs();
// construct the screenshot file name
StringBuilder sb = new StringBuilder();
sb.append(SCREENSHOT_FOLDER);
sb.append(Math.abs(UUID.randomUUID().hashCode())); // the hash code of a UUID should be quite random yet short
sb.append(".png");
String file = sb.toString();
// fetch the screen and save it
Screenshot ss = null;
try {
    ss = retreiveRawScreenshot();
} catch (Exception e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
if (ss != null) {
    writeImageFile(ss, file);
}
incre++;
private Screenshot retreiveRawScreenshot() throws Exception {
    try {
        InputStream is = new FileInputStream("/mnt/sdcard/screenshots/ss" + incre + ".raw");
        // retrieve the response -- first the size and BPP of the screenshot
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = is.read()) != -1) {
            if (c == 0) break;
            sb.append((char) c);
        }
        // ========================================= not used =====================================
        // parse it
        String[] screenData = sb.toString().split(" ");
        if (screenData.length >= 3) {
            System.out.println("$$$$$$$$$$$$$$$$$$$$$$$$$$$ ");
            Screenshot ss = new Screenshot();
            ss.width = Integer.parseInt(screenData[0]);
            ss.height = Integer.parseInt(screenData[1]);
            ss.bpp = Integer.parseInt(screenData[2]);
            System.out.println("$$$$$$$$$$$$$$$$$$$$$$$$$$$ ");
            // retrieve the screenshot
            // (this method - via ByteBuffer - seems to be the fastest)
            ByteBuffer bytes = ByteBuffer.allocate(ss.width * ss.height * ss.bpp / 8);
            is = new BufferedInputStream(is); // buffering is very important apparently
            byte[] rgbsnew = null;
            toRGB565(bytes.array(), ss.width, ss.height, rgbsnew);
            // is.read(bytes.array()); // reading all at once for speed
            is.read(rgbsnew); // reading all at once for speed
            bytes.position(0); // reset position to the beginning of the ByteBuffer
            ss.pixels = ByteBuffer.wrap(rgbsnew);
            // convert byte-buffer to integer
            return ss;
        }
        // ========================================= not used ==========================================
        Screenshot ss = new Screenshot();
        ss.width = 320;
        ss.height = 480;
        ss.bpp = 16;
        ByteBuffer bytes = ByteBuffer.allocate(ss.width * ss.height * ss.bpp / 8);
        is = new BufferedInputStream(is); // buffering is very important apparently
        is.read(bytes.array()); // reading all at once for speed
        bytes.position(0); // reset position to the beginning of the ByteBuffer
        ss.pixels = bytes;
        // ============================= newly tried to set raw to image view ==============================
        /* mRawImage = new RawImage();
        mRawImage.readHeader(1, bytes);
        // Receive framebuffer data.
        byte[] data = new byte[mRawImage.size];
        bytes = ByteBuffer.wrap(data);
        mRawImage.data = data;
        Bitmap bmp = BitmapFactory.decodeByteArray(mRawImage.data, 0, mRawImage.data.length);
        imageView1.setImageBitmap(bmp); */
        // ============================ newly tried to set raw to image view ===============================
        return ss;
    } catch (Exception e) {
        // throw new Exception(e);
        return null;
    }
}
class Screenshot {
    public Buffer pixels;
    public int width;
    public int height;
    public int bpp;

    public boolean isValid() {
        if (pixels == null || pixels.capacity() == 0 || pixels.limit() == 0) return false;
        if (width <= 0 || height <= 0) return false;
        return true;
    }
}
private void writeImageFile(Screenshot ss, String file) {
    //if (ss == null || !ss.isValid()) throw new IllegalArgumentException();
    //if (file == null || file.length() == 0) throw new IllegalArgumentException();
    // resolve the screenshot's BPP to an actual bitmap pixel format
    Bitmap.Config pf;
    switch (ss.bpp) {
        case 16: pf = Config.RGB_565; break;
        case 32: pf = Config.ARGB_8888; break;
        default: pf = Config.ARGB_8888; break;
    }
    // =====================================================================
    /* int[] rgb24 = new int[ss.pixels.capacity()];
    int i = 0;
    for (; i < 320 * 480; i++) {
        //uint16_t pixel16 = ((uint16_t *)gr_framebuffer[0].data)[i];
        //int pixel16=(IntBuffer)
        int pixel16 = Integer.parseInt(ss.pixels.position(i).toString());
        // RRRRRGGGGGGBBBBBB -> RRRRRRRRGGGGGGGGBBBBBBBB
        // in rgb24 the color max is 2^8 per channel (*255/32 *255/64 *255/32)
        rgb24[3 * i + 2] = (255 * (pixel16 & 0x001F)) / 32;         // Blue
        rgb24[3 * i + 1] = (255 * ((pixel16 & 0x07E0) >> 5)) / 64;  // Green
        rgb24[3 * i]     = (255 * ((pixel16 & 0xF800) >> 11)) / 32; // Red
    }
    //ss.pixels=rgb24;
    */
    // =====================================================================
    // create an appropriate bitmap and fill it with the data
    Bitmap bmp = Bitmap.createBitmap(ss.width, ss.height, pf);
    bmp.copyPixelsFromBuffer(ss.pixels);
    // handle the screen rotation
    int rot = getScreenRotation();
    if (rot != 0) {
        Matrix matrix = new Matrix();
        matrix.postRotate(-rot);
        bmp = Bitmap.createBitmap(bmp, 0, 0, bmp.getWidth(), bmp.getHeight(), matrix, true);
    }
    // save it in PNG format
    FileOutputStream fos;
    try {
        fos = new FileOutputStream(file);
    } catch (FileNotFoundException e) {
        throw new InvalidParameterException();
    }
    bmp.compress(CompressFormat.PNG, 100, fos);
}
private int getScreenRotation() {
    WindowManager wm = (WindowManager) getSystemService(WINDOW_SERVICE);
    Display disp = wm.getDefaultDisplay();
    // check whether we operate under Android 2.2 or later
    try {
        Class<?> displayClass = disp.getClass();
        Method getRotation = displayClass.getMethod("getRotation");
        int rot = ((Integer) getRotation.invoke(disp)).intValue();
        switch (rot) {
            case Surface.ROTATION_0: return 0;
            case Surface.ROTATION_90: return 90;
            case Surface.ROTATION_180: return 180;
            case Surface.ROTATION_270: return 270;
            default: return 0;
        }
    } catch (NoSuchMethodException e) {
        // no getRotation() method -- fall back to getOrientation()
        int orientation = disp.getOrientation();
        // Sometimes you may get an undefined orientation (value 0).
        // Simple logic solves the problem: compare the screen
        // X,Y coordinates and determine the orientation in such cases.
        if (orientation == Configuration.ORIENTATION_UNDEFINED) {
            Configuration config = getResources().getConfiguration();
            orientation = config.orientation;
            if (orientation == Configuration.ORIENTATION_UNDEFINED) {
                // if the height and width of the screen are equal, then
                // it is square orientation
                if (disp.getWidth() == disp.getHeight()) {
                    orientation = Configuration.ORIENTATION_SQUARE;
                } else { // if the width is less than the height, it is portrait
                    if (disp.getWidth() < disp.getHeight()) {
                        orientation = Configuration.ORIENTATION_PORTRAIT;
                    } else { // if it is not any of the above, it will definitely be landscape
                        orientation = Configuration.ORIENTATION_LANDSCAPE;
                    }
                }
            }
        }
        return orientation == 1 ? 0 : 90; // 1 for portrait, 2 for landscape
    } catch (Exception e) {
        return 0; // bad, I know ;P
    }
}
//===========================================================================
/**
 * Converts semi-planar YUV420, as generated for camera preview, into RGB565
 * format for use as an OpenGL ES texture. It assumes that both the input
 * and output data are contiguous and start at zero.
 *
 * @param yuvs   the array of YUV420 semi-planar data
 * @param width  the number of pixels horizontally
 * @param height the number of pixels vertically
 * @param rgbs   an array into which the RGB565 data will be written
 */
// we tackle the conversion two pixels at a time for greater speed
private void toRGB565(byte[] yuvs, int width, int height, byte[] rgbs) {
    // the end of the luminance data
    final int lumEnd = width * height;
    // points to the next luminance value pair
    int lumPtr = 0;
    // points to the next chrominance value pair
    int chrPtr = lumEnd;
    // points to the next byte output pair of RGB565 values
    int outPtr = 0;
    // the end of the current luminance scanline
    int lineEnd = width;
    while (true) {
        // skip back to the start of the chrominance values when necessary
        if (lumPtr == lineEnd) {
            if (lumPtr == lumEnd) break; // we've reached the end
            // division here is a bit expensive, but it's only done once per scanline
            chrPtr = lumEnd + ((lumPtr >> 1) / width) * width;
            lineEnd += width;
        }
        // read the luminance and chrominance values
        final int Y1 = yuvs[lumPtr++] & 0xff;
        final int Y2 = yuvs[lumPtr++] & 0xff;
        final int Cr = (yuvs[chrPtr++] & 0xff) - 128;
        final int Cb = (yuvs[chrPtr++] & 0xff) - 128;
        int R, G, B;
        // generate the first RGB components
        B = Y1 + ((454 * Cb) >> 8);
        if (B < 0) B = 0; else if (B > 255) B = 255;
        G = Y1 - ((88 * Cb + 183 * Cr) >> 8);
        if (G < 0) G = 0; else if (G > 255) G = 255;
        R = Y1 + ((359 * Cr) >> 8);
        if (R < 0) R = 0; else if (R > 255) R = 255;
        // NOTE: this assumes little-endian encoding
        rgbs[outPtr++] = (byte) (((G & 0x3c) << 3) | (B >> 3));
        rgbs[outPtr++] = (byte) ((R & 0xf8) | (G >> 5));
        // generate the second RGB components
        B = Y2 + ((454 * Cb) >> 8);
        if (B < 0) B = 0; else if (B > 255) B = 255;
        G = Y2 - ((88 * Cb + 183 * Cr) >> 8);
        if (G < 0) G = 0; else if (G > 255) G = 255;
        R = Y2 + ((359 * Cr) >> 8);
        if (R < 0) R = 0; else if (R > 255) R = 255;
        // NOTE: this assumes little-endian encoding
        rgbs[outPtr++] = (byte) (((G & 0x3c) << 3) | (B >> 3));
        rgbs[outPtr++] = (byte) ((R & 0xf8) | (G >> 5));
    }
}
Thanks.
RAW file
