Mirror the front facing camera in Android

When you take a picture with the front-facing camera in Android, the preview is reflected along the Y axis so that the image appears as if the user were looking in a mirror. I want to undo this effect (apply a second reflection) or just stop the one that's applied automatically.
I thought to use this:
Camera mCamera;
....
mCamera.setPreviewCallback(...);
But I don't really know what to do when overriding
onPreviewFrame(byte[] data, Camera camera){...}
What's the best way to achieve what I've described?
Note: I am trying to apply this effect to the live preview, not to images that have already been taken.

First, when you open your camera instance, open the front camera with Camera.open(getSpecialFacingCamera()) rather than the default Camera.open():
private int getSpecialFacingCamera() {
    int cameraId = -1;
    // Search for the front-facing camera
    int numberOfCameras = Camera.getNumberOfCameras();
    for (int i = 0; i < numberOfCameras; i++) {
        Camera.CameraInfo info = new Camera.CameraInfo();
        Camera.getCameraInfo(i, info);
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            cameraId = i;
            break;
        }
    }
    return cameraId;
}
Then, in the callback method where the camera data is converted into an image, you can use this code to keep it normal:
public void onPictureTaken(byte[] data, Camera camera) {
    Bitmap newImage = null;
    Bitmap cameraBitmap;
    if (data != null) {
        cameraBitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
        if (getResources().getConfiguration().orientation == Configuration.ORIENTATION_PORTRAIT) {
            // Use a matrix to reverse the image data and keep it normal
            Matrix mtx = new Matrix();
            // This will undo the mirror effect
            mtx.preScale(-1.0f, 1.0f);
            // Post-rotate by 90 because the raw image will likely be in landscape
            mtx.postRotate(90.f);
            // Rotate the bitmap to create the real image that we want
            newImage = Bitmap.createBitmap(cameraBitmap, 0, 0, cameraBitmap.getWidth(), cameraBitmap.getHeight(), mtx, true);
        } else { // LANDSCAPE MODE
            // No need to reverse width and height
            // (screenWidth/screenHeight are your target display dimensions)
            newImage = Bitmap.createScaledBitmap(cameraBitmap, screenWidth, screenHeight, true);
            cameraBitmap = newImage;
        }
    }
}
You can pass newImage to a Canvas, create a JPEG image, and save it on the device.
Do not forget that the Camera API is deprecated as of API level 21...
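To make the matrix operations above concrete, here is a plain-Java sketch (no Android classes; the class and helper names are mine) of the same mirror-then-rotate mapping applied to a tiny pixel array:

```java
import java.util.Arrays;

public class MirrorRotateDemo {
    // Mirror horizontally: pixel (x, y) moves to (w - 1 - x, y),
    // which is what mtx.preScale(-1.0f, 1.0f) does to pixel positions.
    static int[][] mirror(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y][w - 1 - x] = img[y][x];
        return out;
    }

    // Rotate 90 degrees clockwise: pixel (x, y) moves to (h - 1 - y, x),
    // the pixel-level equivalent of mtx.postRotate(90.f).
    static int[][] rotate90(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[w][h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[x][h - 1 - y] = img[y][x];
        return out;
    }

    public static void main(String[] args) {
        int[][] img = {{1, 2}, {3, 4}};        // 2x2 test image
        int[][] fixed = rotate90(mirror(img));
        System.out.println(Arrays.deepToString(fixed)); // prints [[4, 2], [3, 1]]
    }
}
```

Matrix.createBitmap composes both steps in one pass; the sketch separates them only to show the order in which they apply.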

You can use Matrix to flip the image data, something like:
byte[] baImage = null;
Size size = camera.getParameters().getPreviewSize();
ByteArrayOutputStream os = new ByteArrayOutputStream();
YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 100, os);
baImage = os.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(baImage, 0, baImage.length);
Matrix matrix = new Matrix();
matrix.preScale(-1.0f, 1.0f);
Bitmap mirroredBitmap = Bitmap.createBitmap(bitmap, 0, 0, size.width, size.height, matrix, false);
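Since the question is about the live preview, another option is to flip each NV21 frame inside onPreviewFrame before using it. Below is a hedged plain-Java sketch of that horizontal flip (the class and method names are mine; it assumes the standard NV21 layout of a full-resolution Y plane followed by h/2 rows of interleaved V,U bytes):

```java
import java.util.Arrays;

public class Nv21Mirror {
    // Mirror an NV21 frame horizontally in place.
    // NV21 layout: w*h luma (Y) bytes, then h/2 rows of interleaved
    // V,U pairs at half horizontal resolution (w bytes per chroma row).
    static void mirrorNv21(byte[] frame, int w, int h) {
        // Flip each luma row.
        for (int y = 0; y < h; y++) {
            int row = y * w;
            for (int x = 0; x < w / 2; x++) {
                byte tmp = frame[row + x];
                frame[row + x] = frame[row + w - 1 - x];
                frame[row + w - 1 - x] = tmp;
            }
        }
        // Flip each chroma row, swapping whole V,U pairs so the
        // two components stay together.
        int chromaStart = w * h;
        for (int y = 0; y < h / 2; y++) {
            int row = chromaStart + y * w;
            for (int x = 0; x < w / 2; x += 2) {
                int left = row + x;
                int right = row + w - 2 - x;
                byte v = frame[left];
                byte u = frame[left + 1];
                frame[left] = frame[right];
                frame[left + 1] = frame[right + 1];
                frame[right] = v;
                frame[right + 1] = u;
            }
        }
    }

    public static void main(String[] args) {
        // Tiny 4x2 frame: two luma rows plus one chroma row (V0,U0,V1,U1).
        byte[] frame = {1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13};
        mirrorNv21(frame, 4, 2);
        System.out.println(Arrays.toString(frame));
        // prints [4, 3, 2, 1, 8, 7, 6, 5, 12, 13, 10, 11]
    }
}
```

Flipping the raw frame avoids the JPEG round trip entirely if you only need the mirrored preview data.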

Related

Face Detection API - coordinates

So I am using the API to detect faces in images, and it has been working well for me so far. However, I have not been able to figure out how to crop the image to the face. I know how to crop the Bitmap, but it requires the top-left position of the face in the Bitmap plus the width and height. I query for the top-left position using:
points = face.getPosition();
Bitmap bmp = Bitmap.createBitmap(bit,(int)points.x,(int)(-1.0*points.y),(int)face.getWidth(),(int)face.getHeight());
But when I look at points, I notice that y is -63.5555 and x is 235.6666; I don't understand why there is a negative y coordinate. I did some debugging and looked inside the face object; I found that it contained a PointF object that already had positive x and y coordinates. So why is a negative y coordinate being returned in this case?
The bounding box estimates the dimensions of the head, even if it is not entirely visible in the photo. The coordinates may be negative if the face is cropped by the top or left of the image (e.g., if the top of the head is cut off at the top of the picture, the resulting y coordinate is negative).
The difference that you see while debugging is due to the fact that the implementation internally uses the head center position (approximately the mid-point between the eyes) to represent the position, but the API translates this to the top-left position when you call getPosition, for your convenience.
Also note that the bounding box is not necessarily a tight bound on the face. If you want a tighter fit, you should enable landmark detection and compute your desired level of cropping relative to the returned landmarks.
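Because the returned box can extend past the image, it is worth clamping the crop rectangle before calling Bitmap.createBitmap, which throws IllegalArgumentException when the rectangle leaves the bitmap. A minimal plain-Java sketch of such clamping (the class and helper names are mine):

```java
import java.util.Arrays;

public class CropClamp {
    // Clamp a face box (x, y, width, height) to the image bounds so
    // Bitmap.createBitmap never receives out-of-range coordinates.
    // A fuller version might also shift the box when x or y is negative;
    // this sketch simply trims it.
    static int[] clamp(float x, float y, float w, float h,
                       int imgW, int imgH) {
        int left = Math.max(0, Math.round(x));
        int top = Math.max(0, Math.round(y));
        int width = Math.min(imgW - left, Math.round(w));
        int height = Math.min(imgH - top, Math.round(h));
        return new int[]{left, top, width, height};
    }

    public static void main(String[] args) {
        // The face box from the question (negative y), in a 480x640 image.
        int[] r = clamp(235.6666f, -63.5555f, 300f, 300f, 480, 640);
        System.out.println(Arrays.toString(r)); // prints [236, 0, 244, 300]
    }
}
```

The clamped values can then be fed straight into Bitmap.createBitmap(bit, left, top, width, height).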
I have used the same API before and was able to successfully crop the face. Try:
//Crop face option
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
//Bitmap bitmap = BitmapFactory.decodeFile(pictureFile.getAbsolutePath(), options);
Bitmap bitmap = getRotatedImageToUpload(pictureFile.getAbsolutePath());
Bitmap faceBitmap = Bitmap.createBitmap(bitmap, (int) faceCentre.x, (int) faceCentre.y, (int) faceWidth, (int) faceHeight);
FileOutputStream out = null;
try {
    out = new FileOutputStream(getOutputMediaFile());
    // PNG is a lossless format, so the compression factor (100) is ignored
    faceBitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
} catch (Exception e) {
    e.printStackTrace();
} finally {
    try {
        if (out != null) {
            out.close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
//End of Crop face option
And the code for getRotatedImageToUpload is:
public Bitmap getRotatedImageToUpload(String filePath) {
    try {
        String file = filePath;
        BitmapFactory.Options bounds = new BitmapFactory.Options();
        bounds.inJustDecodeBounds = true;
        BitmapFactory.decodeFile(file, bounds);
        BitmapFactory.Options opts = new BitmapFactory.Options();
        Bitmap bm = BitmapFactory.decodeFile(file, opts);
        ExifInterface exif = new ExifInterface(file);
        String orientString = exif.getAttribute(ExifInterface.TAG_ORIENTATION);
        int orientation = orientString != null ? Integer.parseInt(orientString) : ExifInterface.ORIENTATION_NORMAL;
        int rotationAngle = 0;
        if (orientation == ExifInterface.ORIENTATION_ROTATE_90) rotationAngle = 90;
        if (orientation == ExifInterface.ORIENTATION_ROTATE_180) rotationAngle = 180;
        if (orientation == ExifInterface.ORIENTATION_ROTATE_270) rotationAngle = 270;
        Matrix matrix = new Matrix();
        matrix.setRotate(rotationAngle, (float) bm.getWidth() / 2, (float) bm.getHeight() / 2);
        Bitmap rotatedBitmap = Bitmap.createBitmap(bm, 0, 0, bounds.outWidth, bounds.outHeight, matrix, true);
        return rotatedBitmap;
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}

Rotate a bitmap using render script android

When I use the following code, it ends with an OutOfMemory exception. After doing research, RenderScript looks like a good candidate. Where can I find sample code for a similar operation, and how can I integrate it into my project?
public Bitmap rotateBitmap(Bitmap image, int angle) {
    if (image != null) {
        Matrix matrix = new Matrix();
        matrix.postRotate(angle, (image.getWidth()) / 2,
                (image.getHeight()) / 2);
        return Bitmap.createBitmap(image, 0, 0, image.getWidth(),
                image.getHeight(), matrix, true);
    }
    return null;
}
Basically, rotating a bitmap is the task of rotating a 2D array without using additional memory, and there is a correct RenderScript implementation here: Android: rotate image without loading it to memory.
But this is not necessary if all you want is to display a rotated Bitmap. You can simply extend ImageView and rotate the Canvas while drawing on it:
canvas.save();
canvas.rotate(angle, X + (imageW / 2), Y + (imageH / 2));
canvas.drawBitmap(imageBmp, X, Y, null);
canvas.restore();
As for ScriptIntrinsic: since these are just built-in RenderScript kernels for common operations, you cannot go beyond the already implemented functions: ScriptIntrinsic3DLUT, ScriptIntrinsicBLAS, ScriptIntrinsicBlend, ScriptIntrinsicBlur, ScriptIntrinsicColorMatrix, ScriptIntrinsicConvolve3x3, ScriptIntrinsicConvolve5x5, ScriptIntrinsicHistogram, ScriptIntrinsicLUT, ScriptIntrinsicResize, ScriptIntrinsicYuvToRGB. They do not currently include functionality to rotate a bitmap, so you would have to write your own ScriptC script.
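If you do write your own kernel, the core of a 90-degree rotation is just an index mapping. Here is that mapping in plain Java (a sketch of the per-pixel logic a ScriptC kernel would apply, not actual RenderScript; the class name is mine):

```java
import java.util.Arrays;

public class Rotate2D {
    // Rotate a 2D pixel array 90 degrees clockwise:
    // source pixel at (row y, col x) lands at (row x, col h-1-y).
    static int[][] rotate90(int[][] src) {
        int h = src.length, w = src[0].length;
        int[][] dst = new int[w][h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst[x][h - 1 - y] = src[y][x];
        return dst;
    }

    public static void main(String[] args) {
        int[][] src = {{1, 2, 3}, {4, 5, 6}};   // 2 rows x 3 cols
        System.out.println(Arrays.deepToString(rotate90(src)));
        // prints [[4, 1], [5, 2], [6, 3]]
    }
}
```

A RenderScript kernel would run this same mapping once per output pixel, with the allocation living outside the Java heap, which is what avoids the OutOfMemory error.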
Try this code:
private Bitmap RotateImage(Bitmap _bitmap, int angle) {
    Matrix matrix = new Matrix();
    matrix.postRotate(angle);
    _bitmap = Bitmap.createBitmap(_bitmap, 0, 0, _bitmap.getWidth(), _bitmap.getHeight(), matrix, true);
    return _bitmap;
}
Use this code when selecting an image from the gallery, like this:
File _file = new File(file_name);
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1;
Bitmap bitmap = BitmapFactory.decodeFile(file_name, options);
try {
    ExifInterface exif = new ExifInterface(file_name);
    int orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, 1);
    if (orientation == ExifInterface.ORIENTATION_ROTATE_90) {
        bitmap = RotateImage(bitmap, 90);
    } else if (orientation == ExifInterface.ORIENTATION_ROTATE_270) {
        bitmap = RotateImage(bitmap, 270);
    }
} catch (Exception e) {
    e.printStackTrace();
}
image_holder.setImageBitmap(bitmap);

Combine two images into one orientation issue

I want to make an app where the user can take a picture with the back camera and then one with the front camera.
After that, I get two bitmaps and I want to combine them into one image.
This is the code I use for the front camera parameters:
//Set up picture orientation for saving...
Camera.Parameters parameters = theCamera.getParameters();
parameters.setRotation(90);
frontCamera.setParameters(parameters);
//Set up camera preview and set orientation to PORTRAIT
frontCamera.stopPreview();
frontCamera.setDisplayOrientation(90);
frontCamera.setPreviewDisplay(holder);
frontCamera.startPreview();
This is the code I use for taking a picture with the front camera:
cameraObject.takePicture(shutterCallback, rawCallback, jpegCallback);
Callback for taking a picture with the back camera:
PictureCallback jpegCallback = new PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        backBitmap = decodeBitampFromByte(data, 0, 800, 800);
        frontCameraObject.release();
        initFrontCamera();
    }
};
NOTE: Similar code is used for taking a picture with the front camera. I get two bitmaps and then try to combine them with the code below, but the saved bitmap has the wrong orientation.
This is the code I use for combining the two bitmaps, frontBitmap and backBitmap:
public Bitmap combineImages(Bitmap c, Bitmap s, String loc) {
    Bitmap cs = null;
    int w = c.getWidth() + s.getWidth();
    int h;
    if (c.getHeight() >= s.getHeight()) {
        h = c.getHeight();
    } else {
        h = s.getHeight();
    }
    cs = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    Canvas comboImage = new Canvas(cs);
    comboImage.drawBitmap(c, 0f, 0f, null);
    comboImage.drawBitmap(s, c.getWidth(), 0f, null);
    String tmpImg = String.valueOf(System.currentTimeMillis()) + ".png";
    OutputStream os = null;
    try {
        os = new FileOutputStream(loc + tmpImg);
        cs.compress(CompressFormat.PNG, 100, os);
    } catch (IOException e) {
        Log.e("combineImages", "problem combining images", e);
    }
    return cs;
}
NOTE: The image with the bottle of water was taken with the back camera, and the other one with the front camera.
Try changing comboImage.drawBitmap(s, c.getWidth(), 0f, null); to
comboImage.drawBitmap(s, 0f, c.getHeight(), null);
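Note that stacking the second bitmap below the first also changes the canvas shape that combineImages should allocate: the width becomes the larger of the two widths and the height becomes their sum. A plain-Java sketch of that size computation (the class and helper names are mine):

```java
import java.util.Arrays;

public class StackSize {
    // Canvas size needed to stack two images vertically:
    // the first is drawn at (0, 0) and the second at (0, h1).
    static int[] verticalStackSize(int w1, int h1, int w2, int h2) {
        return new int[]{Math.max(w1, w2), h1 + h2};
    }

    public static void main(String[] args) {
        // e.g. two 800x600 camera bitmaps stacked vertically
        System.out.println(Arrays.toString(verticalStackSize(800, 600, 800, 600)));
        // prints [800, 1200]
    }
}
```

If you keep the original w = c.getWidth() + s.getWidth() sizing while drawing the second image below the first, half of the canvas stays blank and part of the second bitmap may be clipped.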

Getting Out of Memory Error onPictureTaken BitmapFactory.decodeByteArray

I am getting a weird OOM error when decoding the byte array I receive after taking a picture. My code is below. Please tell me if there is a better way to do this.
@Override
public void onPictureTaken(byte[] data, Camera camera) {
    // TODO something with the image data
    // Restart the preview and re-enable the shutter button so that we can take another picture
    camera.startPreview();
    inPreview = true;
    shutterButton.setEnabled(true);
    System.gc();
    Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
    if (bitmap != null) {
        Matrix matrix = new Matrix();
        matrix.postRotate(90);
        Bitmap resizedBitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(),
                bitmap.getHeight(), matrix, false);
        boolean fileSaved = Funcs.saveImage(getActivity(), resizedBitmap, "moizali");
        if (fileSaved) {
            ((BaseActivity) getActivity()).goToPhotoEditActivity("moizali");
        }
    }
}
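Rather than relying on System.gc(), the usual fix for this OOM is to decode a subsampled bitmap via BitmapFactory.Options.inSampleSize. The power-of-two computation can be sketched in plain Java (the class and helper names are mine, following the approach from the Android "Loading Large Bitmaps Efficiently" guide):

```java
public class SampleSize {
    // Largest power-of-two inSampleSize such that the decoded image
    // is still at least reqW x reqH pixels.
    static int calculateInSampleSize(int rawW, int rawH, int reqW, int reqH) {
        int inSampleSize = 1;
        if (rawH > reqH || rawW > reqW) {
            int halfH = rawH / 2;
            int halfW = rawW / 2;
            while ((halfH / inSampleSize) >= reqH
                    && (halfW / inSampleSize) >= reqW) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // A 4000x3000 photo decoded for an 800x800 target
        System.out.println(calculateInSampleSize(4000, 3000, 800, 800)); // prints 2
    }
}
```

On Android you would first decode with options.inJustDecodeBounds = true to read the raw dimensions cheaply, set options.inSampleSize from this helper, then decode the byte array a second time for the real bitmap.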

Android FaceDetector.findFaces doesn't find any faces

This is code I have written to run Android's face detector. Unfortunately, it doesn't find any faces. I have put this in onPreviewFrame(data, camera).
Camera.Parameters parameters = camera.getParameters();
Camera.Size size = parameters.getPreviewSize();
YuvImage image = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
Rect rectangle = new Rect(0, 0, size.width, size.height);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
int quality = 100;
image.compressToJpeg(rectangle, quality, stream);
Bitmap bitmap = BitmapFactory.decodeByteArray(stream.toByteArray(), 0, stream.size());
FaceDetector detector = new FaceDetector(size.width, size.height, 5);
FaceDetector.Face[] faces = new FaceDetector.Face[5];
int numFaces = detector.findFaces(bitmap, faces);
textView.setText("numFaces = " + numFaces);
Any ideas or fixes?
I think you should first check whether your bitmap data is correct. You must call setPreviewSize before startPreview; check whether that size matches your preview data, or hard-code it to (640, 480) for a test. If your bitmap is correct, the API may be failing because of the input data quality. You can dump some data and try another app or library, such as OpenCV, to verify. Good luck.
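One concrete thing to check: the android.media.FaceDetector documentation states that the bitmap passed to findFaces must be in RGB_565 config, and that the image width given to the FaceDetector constructor must be even. A plain-Java sketch of the width fix-up (the class and helper names are mine; on Android the config conversion itself would be bitmap.copy(Bitmap.Config.RGB_565, false)):

```java
public class FaceDetectorPrecheck {
    // android.media.FaceDetector requires an even image width.
    // Dropping the lowest bit rounds an odd width down to the
    // nearest even value; the bitmap would then be cropped or
    // scaled to this width before detection.
    static int evenWidth(int width) {
        return width & ~1;
    }

    public static void main(String[] args) {
        System.out.println(evenWidth(640)); // prints 640
        System.out.println(evenWidth(641)); // prints 640
    }
}
```

If your preview size has an odd width, or the decoded bitmap ends up in ARGB_8888, findFaces can silently return zero faces even on a perfectly good image.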
