I have read many posts here, but I haven't found the right answer.
I tried something like this:
@Override
public void onPictureTaken(byte[] paramArrayOfByte, Camera paramCamera) {
    try {
        Bitmap bitmap = BitmapFactory.decodeByteArray(paramArrayOfByte, 0,
                paramArrayOfByte.length);
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        FileOutputStream os = new FileOutputStream(Singleton.mPushFilePath);
        Matrix matrix = new Matrix();
        matrix.postRotate(90);
        Bitmap resizedBitmap = Bitmap.createBitmap(bitmap, 0, 0, width,
                height, matrix, false);
        resizedBitmap.compress(Bitmap.CompressFormat.JPEG, 95, os);
        os.close();
...
Is there a way to rotate the picture without using BitmapFactory? I want to rotate the picture without loss of quality!
Perhaps you can capture the picture already rotated as you desire using Camera.setDisplayOrientation? Check "Android camera rotate". Further, investigate Camera.Parameters.setRotation(). One of these techniques should do the trick for you.
Otherwise your code looks fine, except for the quality parameter 95 on Bitmap.compress: use 100 for the highest JPEG quality (note that JPEG remains lossy even at quality 100; use PNG if you need truly lossless output).
To avoid an out-of-memory exception, use Camera.Parameters.setPictureSize() to take a lower-resolution picture (e.g. 3 MP: do you really need an 8 MP photo?). Make sure to use Camera.Parameters.getSupportedPictureSizes() to determine which sizes your device supports.
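The orientation bookkeeping behind setDisplayOrientation() can be sketched as plain code. The formula below follows the example in the Android Camera.setDisplayOrientation() documentation; the helper class and method names are my own illustration, not part of any API:

```java
// Sketch: computing the display orientation for the camera preview,
// following the formula from the Android Camera.setDisplayOrientation() docs.
// cameraOrientation comes from Camera.CameraInfo.orientation; displayDegrees
// from the current display rotation (0, 90, 180, or 270).
public class RotationHelper {
    static int displayOrientation(int cameraOrientation, int displayDegrees,
                                  boolean frontFacing) {
        if (frontFacing) {
            int result = (cameraOrientation + displayDegrees) % 360;
            return (360 - result) % 360; // compensate for the front-camera mirror
        }
        return (cameraOrientation - displayDegrees + 360) % 360;
    }

    public static void main(String[] args) {
        // Typical back camera mounted at 90° with the device held in portrait:
        System.out.println(displayOrientation(90, 0, false)); // → 90
    }
}
```

With this value you would call camera.setDisplayOrientation(result) (or pass a similarly computed value to Camera.Parameters.setRotation() for the saved JPEG, whose formula in the docs differs slightly for the front camera).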
I am using Android Studio and I am creating an app that starts a camera preview and processes the image returned in the onPreviewFrame(byte[] data, Camera camera) callback of android.hardware.Camera. The data that is returned is in NV21 format (the default for the Camera class). In this method I receive a raw image that I need to convert to RGB in order to process it, since my application needs to work with the colors in the image. I am using the following method to convert the byte[] array into a bitmap and then use the bitmap appropriately.
private Bitmap getBitmap(byte[] data, Camera camera) {
    // Compress the NV21 preview frame to JPEG, then decode it to a Bitmap
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    Camera.Size previewSize = camera.getParameters().getPreviewSize();
    YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, previewSize.width, previewSize.height, null);
    yuvimage.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), 80, baos);
    byte[] jdata = baos.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);
    // Rotate the image (mtx is a pre-configured android.graphics.Matrix field)
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), mtx, true);
    return bitmap;
}
This method works well and I get the desired results. But since my preview size is set to the maximum supported by the device the application runs on (currently 1920x1088 on my phone), this method takes too long, and as a result I can only process 1 or 2 images per second. If I remove the conversion, I can see that onPreviewFrame is called 10 to 12 times per second, meaning I can receive that many frames per second, but I can only process 1 or 2 because of the conversion.
Is there a faster way to obtain an RGB matrix from the byte[] array that is passed in?
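One common alternative (an assumption on my part, not something from this post) is to skip the YuvImage-to-JPEG round trip entirely and decode NV21 to ARGB directly in Java. The loop below is the widely circulated decodeYUV420SP-style integer conversion; the class name is hypothetical:

```java
// Sketch: direct NV21 -> ARGB conversion without the JPEG round trip.
// NV21 layout: a full-resolution Y plane followed by interleaved V/U bytes
// at half resolution in both dimensions.
public class Nv21Decoder {
    public static int[] nv21ToArgb(byte[] nv21, int width, int height) {
        int[] argb = new int[width * height];
        int frameSize = width * height;
        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width;
            int u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = (0xFF & nv21[yp]) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) { // one V/U pair covers two horizontal pixels
                    v = (0xFF & nv21[uvp++]) - 128;
                    u = (0xFF & nv21[uvp++]) - 128;
                }
                int y1192 = 1192 * y;
                int r = y1192 + 1634 * v;
                int g = y1192 - 833 * v - 400 * u;
                int b = y1192 + 2066 * u;
                // Clamp to the 18-bit intermediate range before packing
                r = Math.min(Math.max(r, 0), 262143);
                g = Math.min(Math.max(g, 0), 262143);
                b = Math.min(Math.max(b, 0), 262143);
                argb[yp] = 0xFF000000 | ((r << 6) & 0xFF0000)
                        | ((g >> 2) & 0xFF00) | ((b >> 10) & 0xFF);
            }
        }
        return argb;
    }
}
```

This avoids allocating and parsing a JPEG per frame; if you only need a few sample pixels, you can go further and convert just those coordinates instead of the whole frame.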
Actually, in my case it was sufficient to remove the rotation of the bitmap, because decoding the bitmap took 250-350 ms and rotating it took ~500 ms. So I removed the rotation and changed the orientation of the scanning instead. Fortunately, this isn't hard at all. Suppose I have a function that checks a given pixel for its color, and it looked like this:
boolean foo(int X, int Y) {
// statements
// ...
}
Now it looks like this:
boolean foo(int X, int Y) {
int oldX = X;
int oldY = Y;
Y = bitmap.getHeight() - oldX;
X = oldY;
// statements
// ...
}
Hope this helps. :)
I'm attempting to get a bitmap from a camera preview in Android, then examine the bitmap and draw something to the screen based on what the camera is seeing. This all has to be done live due to the nature of the project I'm working on.
At the moment I'm using a SurfaceView to display the live preview, and I'm getting the bitmap using the following code, which I found in a separate question here.
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    snipeCamera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters parameters = camera.getParameters();
            int width = parameters.getPreviewSize().width;
            int height = parameters.getPreviewSize().height;
            ByteArrayOutputStream outstr = new ByteArrayOutputStream();
            Rect rect = new Rect(0, 0, width, height);
            YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
            yuvimage.compressToJpeg(rect, 100, outstr);
            Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
            bit = bmp.copy(Bitmap.Config.ARGB_8888, true);
        }
    });
}
bit is declared as:
public static Bitmap bit;
Whenever I try to access this bitmap anywhere else, I get a NullPointerException. I'm guessing it has something to do with the fact that it's being set inside setPreviewCallback, but I don't know enough about Android to fix it. Is there something I can do to get access to this bitmap? Or is there another way I can work with a live bitmap of what the camera is seeing?
Is there something I can do to get access to this bitmap?
You already have access to your Bitmap. You are getting it from decodeByteArray().
(note that I am assuming that the code leading up to and including decodeByteArray() is correct — there could be additional problems lurking in there)
You just need to consume the Bitmap in your onPreviewFrame() method. If your work to do is quick (sub-millisecond), probably just do that work right there. If your work to do is not-so-quick, you'll now need to work out background threading plans, arranging to update your UI with the results of that work on the main application thread, and related issues.
If, to consume the Bitmap, you need access to other objects in your camera UI, just make sure that the SurfaceHolder.Callback has references to those other objects. Your onPreviewFrame() method is in an anonymous inner class inside that callback, and onPreviewFrame() has access to everything the enclosing class has.
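The not-so-quick case above (do the work off the callback thread, then publish the result) can be sketched in plain Java; the class and method names below are hypothetical, and on Android you would additionally post UI updates back to the main thread:

```java
// Sketch: hand each preview frame to a single background worker so the
// camera callback returns quickly, keeping only the latest result.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicReference;

public class FrameWorker {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final AtomicReference<int[]> latestResult = new AtomicReference<>();

    // Called from the camera callback; returns immediately.
    public Future<?> submitFrame(final byte[] data) {
        return worker.submit(new Runnable() {
            @Override
            public void run() {
                latestResult.set(process(data));
            }
        });
    }

    // Stand-in for the real decode/processing step.
    int[] process(byte[] data) {
        int[] out = new int[data.length];
        for (int i = 0; i < data.length; i++) out[i] = data[i] & 0xFF;
        return out;
    }

    // Blocks until all frames submitted so far have been processed.
    public void awaitQuiescent() {
        try {
            worker.submit(new Runnable() { @Override public void run() {} }).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public int[] latest() { return latestResult.get(); }

    public void shutdown() { worker.shutdown(); }
}
```

A single-thread executor also gives you natural back-pressure: if processing is slower than the camera, you can drop frames by checking whether the previous Future is done before submitting another.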
I am trying to take a picture at the maximum camera size and then scale it to 1080x1776 at 80% quality, saving the bitmap as a .jpeg. Even though I am pretty sure I am rotating the matrix the wrong way, since the only way to get 1080x1776 is to pass the parameters the other way around to createScaledBitmap, this code works on my Nexus 5.
When I tried the same code on a OnePlus, I got a really low-resolution result. I don't understand why: since the code works on a Nexus 5, which has similar specs, it should also work on the OnePlus. On the OnePlus the picture is scaled to 1080x1776 correctly, but the quality is bad.
Does anyone know why? Same code, different phones, different results? I have also tried the code on a Nexus 7 and I still get bad pictures. Why would this code only work on my Nexus 5?
Bitmap image = BitmapFactory.decodeByteArray(data, 0, data.length);
Bitmap imageScaled = Bitmap.createScaledBitmap(image, 1776, 1080, true);
// Override Android's default landscape orientation and save as portrait
Matrix matrix = new Matrix();
if (currentCameraId == Camera.CameraInfo.CAMERA_FACING_FRONT) {
    float[] mirrorY = { -1, 0, 0, 0, 1, 0, 0, 0, 1 };
    Matrix matrixMirrorY = new Matrix();
    matrixMirrorY.setValues(mirrorY);
    matrix.postConcat(matrixMirrorY);
}
matrix.postRotate(90);
Bitmap rotatedScaledImage = Bitmap.createBitmap(imageScaled, 0, 0,
        imageScaled.getWidth(), imageScaled.getHeight(), matrix, true);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
rotatedScaledImage.compress(Bitmap.CompressFormat.JPEG, 80, bos);
My application has a "photobooth" feature that lets the user take a picture with the camera while an overlay image is shown on top of the camera view. After the picture is taken, I need to save what the user saw while taking the picture to the filesystem.
I have run into one big problem while developing a solution: capturing an image with dimensions compatible with the overlay image, so that the composite matches what the user saw while taking the picture.
It seems I cannot capture an image from the camera at arbitrary dimensions; I basically have to pick from a list of supported sizes, and some phones can only produce certain dimensions.
Since I cannot choose the size of the captured image, it seems I will be required to include many different sizes of the overlay image and attach the best match to the captured image. I can't just slap any old overlay on top of the camera image and expect it to look right.
Questions:
Am I over-complicating this "camera image + overlay image" composition process?
What suggestions do you have for completing this task without needing to include several different sizes of overlay images?
Edit:
Here is my solution (brief). Please realize this is not perfect and maybe not the most efficient way to do it, but it works. Some things may be unnecessary/redundant, but whatever!
Notes:
this doesn't work too well on tablet devices.
the overlay image needs to be rotated into landscape mode (even though you will be taking the picture holding the phone in portrait)
the overlay size is 480x320
you need to force the activity into landscape mode while taking the picture (now the overlay looks like it's portrait!)
I add the overlay ImageView using addContentView(overlayImageView, new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT));
...
final Camera.PictureCallback jpegCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        Bitmap mutableBitmap = null;
        try {
            // for a PORTRAIT overlay, taking the picture holding the phone in PORTRAIT mode
            mutableBitmap = BitmapFactory.decodeByteArray(data, 0, data.length, options).copy(Bitmap.Config.RGB_565, true);
            Matrix matrix = new Matrix();
            int width = mutableBitmap.getWidth();
            int height = mutableBitmap.getHeight();
            int newWidth = overlayImage.getDrawable().getBounds().width();
            int newHeight = overlayImage.getDrawable().getBounds().height();
            float scaleWidth = ((float) newWidth) / width;
            float scaleHeight = ((float) newHeight) / height;
            matrix.postScale(scaleWidth, scaleHeight);
            matrix.postRotate(90);
            Bitmap resizedBitmap = Bitmap.createBitmap(mutableBitmap, 0, 0, mutableBitmap.getWidth(), mutableBitmap.getHeight(), matrix, true);
            finalBitmap = resizedBitmap.copy(Bitmap.Config.RGB_565, true);
            Canvas canvas = new Canvas(finalBitmap);
            Bitmap overlayBitmap = BitmapFactory.decodeResource(getResources(), overlay);
            matrix = new Matrix();
            matrix.postRotate(90);
            Bitmap resizedOverlay = Bitmap.createBitmap(overlayBitmap, 0, 0, overlayBitmap.getWidth(), overlayBitmap.getHeight(), matrix, true);
            canvas.drawBitmap(resizedOverlay, 0, 0, new Paint());
            canvas.scale(50, 0);
            canvas.save();
            // finalBitmap is the image with the overlay on it
        } catch (OutOfMemoryError e) {
            // fail
        }
    }
};
I think this is a question of how you manipulate your overlays. You can crop the overlay according to the captured image size and resize it to fit while preserving its aspect ratio. By comparing its ratio to the background's ratio, you can place the overlay at its optimal position.
I would keep the overlays big enough, with a wide border (bleed), so they can easily be sized down to any image, using filtering to draw them at good quality. I guess the overlays are something you design with transparent parts, like an image of a clown without a face, so the user can snap somebody else's face into it?
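The fit-while-preserving-ratio step described above can be sketched as a small helper; the class name and the {left, top, width, height} return convention are my own illustration, not from the answer:

```java
// Sketch: compute the largest centered rectangle with the overlay's aspect
// ratio that fits inside the captured photo, so one big overlay asset can
// be scaled down to any supported capture size.
public class OverlayFit {
    // Returns {left, top, width, height} of where to draw the overlay.
    public static int[] fitCentered(int photoW, int photoH, int overlayW, int overlayH) {
        // Uniform scale: the tighter of the two axis ratios preserves aspect
        float scale = Math.min((float) photoW / overlayW, (float) photoH / overlayH);
        int w = Math.round(overlayW * scale);
        int h = Math.round(overlayH * scale);
        return new int[] { (photoW - w) / 2, (photoH - h) / 2, w, h };
    }
}
```

On Android, the resulting rectangle would go straight into Canvas.drawBitmap(bitmap, null, new Rect(left, top, left + width, top + height), paint), which also applies the filtering mentioned above when the Paint has filtering enabled.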
I have a picture (bitmap) and I want to draw some shapes and rotated text on it.
This works fine as long as the picture doesn't get too large. However, when using a picture (2560x1920 pixels) taken with the built-in camera of my Android 2.1 phone, the result is distorted.
It looks like the rotation back, after drawing the rotated text, has not been completed. Also, the distortion point is not always the same; it seems to depend on CPU usage.
You can see some resulting pictures here:
http://dl.dropbox.com/u/4751612/Result1.png
http://dl.dropbox.com/u/4751612/Result2.png
The code is executed inside an AsyncTask. The strange thing is that this code works fine in one Activity but not in another. In both activities the AsyncTask is executed when a button is clicked.
These are some excerpts of the code I'm using.
// Load the image from the MediaStore
c = MediaStore.Images.Media.query(context.getContentResolver(),
        Uri.parse(drawing.fullImage), new String[] { MediaColumns.DATA });
if (c != null && c.moveToFirst()) {
    imageFilePath = c.getString(0);
    bitmap = ImageUtil.getBitmap(new File(imageFilePath), 10000);
}
c.close();

// Create a canvas to draw on
drawingBitmap = Bitmap.createBitmap(bitmap.getWidth(),
        bitmap.getHeight(), Bitmap.Config.ARGB_8888);
canvas = new Canvas(drawingBitmap);

// Draw the image
canvas.drawBitmap(bitmap, 0, 0, MeasureFactory.getMeasurePaint(context));

// Calculate the text width
rect = new Rect();
paint.getTextBounds(text, 0, text.length(), rect);

// Draw rotated text
canvas.save();
canvas.rotate(-angle, centerPoint.x, centerPoint.y);
canvas.drawText(text, centerPoint.x - Math.abs(rect.exactCenterX()),
        Math.abs(centerPoint.y - rect.exactCenterY()), paint);
canvas.restore();

// Upload the bitmap to the Media Library
Uri uri = getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
OutputStream outStream = getContentResolver().openOutputStream(uri);
drawingBitmap.compress(Bitmap.CompressFormat.JPEG, 90, outStream);
outStream.flush();
outStream.close();
Thanks in advance for any help.
Since it works as long as the resolution isn't too high, I would just rescale all images to something that works.
You can accomplish this using
Bitmap scaledBitmap = Bitmap.createScaledBitmap(bitmap, 800 /* width */, 600 /* height */, true);
This turned out to be a memory problem, although no OutOfMemoryError was visible in the log.
So, I "solved" it by scaling the image down if the resolution is too high, as suggested by ingo. The problem is that I don't know how to determine the limits of a device; I suppose they are different for every device and depend on the current memory usage.
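One possible heuristic for the "how do I determine the limits" question (my own assumption, not from the answer): derive a byte budget from the heap the runtime reports and pick a BitmapFactory.Options.inSampleSize that keeps the decoded bitmap under it. The class name and the quarter-of-heap budget are illustrative choices:

```java
// Sketch: choose an inSampleSize so the decoded ARGB_8888 bitmap stays
// under a fraction of the available heap, instead of hard-coding one size.
public class SampleSizeChooser {
    // bytesPerPixel is 4 for ARGB_8888, 2 for RGB_565.
    public static int chooseInSampleSize(int width, int height,
                                         long budgetBytes, int bytesPerPixel) {
        int sample = 1;
        while ((long) (width / sample) * (height / sample) * bytesPerPixel > budgetBytes) {
            sample *= 2; // BitmapFactory prefers powers of two for inSampleSize
        }
        return sample;
    }

    public static void main(String[] args) {
        // Budget: a quarter of the max heap (on Android, see also
        // ActivityManager.getMemoryClass() for the per-app heap limit).
        long budget = Runtime.getRuntime().maxMemory() / 4;
        System.out.println(chooseInSampleSize(2560, 1920, budget, 4));
    }
}
```

You would set the result on BitmapFactory.Options.inSampleSize before decoding, after reading the real dimensions with inJustDecodeBounds = true.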