I have to create a greyscale live camera preview. It is working fine, but I want to make the code faster.
private android.hardware.Camera.PreviewCallback previewCallback = new android.hardware.Camera.PreviewCallback()
{
    public void onPreviewFrame(byte[] abyte0, Camera camera)
    {
        Size size = cameraParams.getPreviewSize();
        int[] rgbData = YuvUtils.decodeGreyscale(abyte0, size.width, size.height);
        // First Bitmap: the raw greyscale frame
        Bitmap bitmapung = Bitmap.createBitmap(rgbData, size.width, size.height, Bitmap.Config.ARGB_8888);
        // Second Bitmap: a transformed copy (via matrix) that is actually drawn
        Bitmap bitmapToSet = Bitmap.createBitmap(bitmapung, 0, 0, widthPreview, heightPreview, matrix, true);
        MyActivity.View.setBitmapToDraw(bitmapToSet);
    }
};
As you can see, I am creating the Bitmap object twice. Can I do this job with a single Bitmap object?
Where (in which method, such as onResume or onCreate) should I get the camera preview size (width and height) once, so that I don't need to fetch it in every callback?
I know I should use an AsyncTask for this; I will do that after solving my current issue.
EDIT
I can create the bitmap once, when I get the camera size (width and height), so a single Bitmap instance is associated with my class. But then I have to call setPixels on it:
bitmapung.setPixels(rgbData, offset, stride, x, y, size.width, size.height);
What values should I use for stride, offset, x and y, and what do they mean?
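For a full-frame copy those values are simple: offset is the index in rgbData to start reading from (0), stride is the number of pixels per row in the source array (the preview width), and x/y are the top-left corner of the destination region (0, 0). A minimal sketch of the single-Bitmap version, assuming the size is cached once when the camera parameters are first available (field and method names here are illustrative, not from the original code):

    // Created once, e.g. where the camera is configured (onResume/surfaceChanged)
    private Bitmap previewBitmap;
    private int previewWidth, previewHeight;

    private void initPreviewBitmap(Camera.Parameters params) {
        Camera.Size size = params.getPreviewSize();
        previewWidth = size.width;
        previewHeight = size.height;
        previewBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Bitmap.Config.ARGB_8888);
    }

    public void onPreviewFrame(byte[] data, Camera camera) {
        int[] rgbData = YuvUtils.decodeGreyscale(data, previewWidth, previewHeight);
        // offset = 0: start at the first element of rgbData
        // stride = previewWidth: each source row is previewWidth pixels wide
        // x = 0, y = 0: write into the bitmap starting at its top-left corner
        previewBitmap.setPixels(rgbData, 0, previewWidth, 0, 0, previewWidth, previewHeight);
        MyActivity.View.setBitmapToDraw(previewBitmap);
    }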
Also have a look at these related questions:
Create custom Color Effect
Add thermal effect to yuvImage
Related
I am using Android Studio and I am creating an app that starts a camera preview and processes the image returned in the onPreviewFrame(byte[] data, Camera camera) callback of android.hardware.Camera. The data returned is in NV21 format (the default for the Camera class). In this method I receive a raw image that I need to convert to an RGB one in order to process it, since my application needs to work with the colors in the image. I am using the following method to convert the byte[] array into a bitmap and then use the bitmap appropriately.
private Bitmap getBitmap(byte[] data, Camera camera) {
    // Compress the NV21 frame to an in-memory JPEG, then decode that back into a Bitmap.
    // previewSize, yuvimage, jdata and mtx are fields of the enclosing class.
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    previewSize = camera.getParameters().getPreviewSize();
    yuvimage = new YuvImage(data, ImageFormat.NV21, previewSize.width, previewSize.height, null);
    yuvimage.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), 80, baos);
    jdata = baos.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);
    // Rotate the image using the pre-built rotation matrix
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), mtx, true);
    return bitmap;
}
This method works well and I get the desired results. But since my preview size is set to the maximum supported by the device the application is running on (currently 1920x1088 on my phone), this method takes too long, and as a result I can only process 1 to 2 images per second. If I remove the conversion, I can see that onPreviewFrame is called 10 to 12 times per second, meaning I receive that many frames per second but can only process 1 or 2 because of the conversion.
Is there a faster way that I can use in order to receive a RGB matrix from the byte[] array that is passed?
Actually in my case it was sufficient to remove the rotation of the bitmap, because decoding the bitmap took 250-350 ms and rotating it took ~500 ms. So I removed the rotation and changed the orientation of the scanning. Fortunately this isn't hard at all. If I had a function that checks a given pixel for a color, and it looked like this:
boolean foo(int X, int Y) {
    // statements
    // ...
}
Now it looks like this:
boolean foo(int X, int Y) {
    // Map the requested (X, Y) into the unrotated frame's coordinate system
    // (depending on your indexing you may need getHeight() - 1 - oldX here)
    int oldX = X;
    int oldY = Y;
    Y = bitmap.getHeight() - oldX;
    X = oldY;
    // statements
    // ...
}
Hope this helps. :)
The new Camera2 API is very different from the old one. The part of the pipeline where the manipulated camera frames are shown to the user confuses me. I know there is a very good explanation in Camera preview image data processing with Android L and Camera2 API, but showing the frames is still not clear. My question is: what is the way to show frames on screen that come from the ImageReader's callback function after some processing, while preserving efficiency and speed in the Camera2 API pipeline?
Example flow:
camera.addTarget(imageReader.getSurface()) -> do some processing in the ImageReader's callback -> (show that processed image on screen?)
Workaround idea: send bitmaps to an ImageView every time a new frame is processed.
Edit after clarification of the question; original answer at bottom
Depends on where you're doing your processing.
If you're using RenderScript, you can connect a Surface from a SurfaceView or a TextureView to an Allocation (with setSurface), and then write your processed output to that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
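A loose sketch of that Allocation-to-Surface hookup (rs, displaySurface, inputAlloc, previewWidth/Height and the forEach_process kernel are placeholder names of my own, not the HDR Viewfinder demo's code):

    // Build an RGBA output Allocation that can push its contents to a Surface
    Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(previewWidth)
            .setY(previewHeight)
            .create();
    Allocation outputAlloc = Allocation.createTyped(rs, rgbaType,
            Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
    outputAlloc.setSurface(displaySurface);   // Surface from your SurfaceView/TextureView

    // Per frame: run your script so its result lands in outputAlloc, then send it to the screen
    myScript.forEach_process(inputAlloc, outputAlloc);   // hypothetical kernel name
    outputAlloc.ioSend();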
If you're doing EGL shader-based processing, you can connect a Surface to an EGLSurface with eglCreateWindowSurface, with the Surface as the native_window argument. Then you can render your final output to that EGLSurface and when you call eglSwapBuffers, the buffer will be sent to the screen.
If you're doing native processing, you can use the NDK ANativeWindow methods to write to a Surface you pass from Java and convert to an ANativeWindow.
If you're doing Java-level processing, that's really slow and you probably don't want to. But you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.
Or as you say, draw to an ImageView every frame, but that'll be slow.
Original answer:
If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.
If you are capturing YUV_420_888 images, then you need to write your own conversion code from the 3-plane YCbCr 4:2:0 format to something you can display, such as an int[] of RGB values to create a Bitmap from; unfortunately there's not yet a convenient API for this.
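As a rough, pure-Java sketch of such a conversion (slow, per-pixel, using BT.601-style coefficients; only meant to show how the plane row/pixel strides are used, a real implementation would use RenderScript or native code):

    static Bitmap yuv420ToBitmap(Image image) {
        int width = image.getWidth();
        int height = image.getHeight();
        Image.Plane yPlane = image.getPlanes()[0];
        Image.Plane uPlane = image.getPlanes()[1];
        Image.Plane vPlane = image.getPlanes()[2];
        ByteBuffer yBuf = yPlane.getBuffer();
        ByteBuffer uBuf = uPlane.getBuffer();
        ByteBuffer vBuf = vPlane.getBuffer();
        int[] argb = new int[width * height];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int y = yBuf.get(row * yPlane.getRowStride() + col * yPlane.getPixelStride()) & 0xFF;
                // The chroma planes are subsampled 2x2, hence row/2 and col/2
                int u = (uBuf.get((row / 2) * uPlane.getRowStride() + (col / 2) * uPlane.getPixelStride()) & 0xFF) - 128;
                int v = (vBuf.get((row / 2) * vPlane.getRowStride() + (col / 2) * vPlane.getPixelStride()) & 0xFF) - 128;
                int r = clamp((int) (y + 1.402f * v));
                int g = clamp((int) (y - 0.344f * u - 0.714f * v));
                int b = clamp((int) (y + 1.772f * u));
                argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
    }

    static int clamp(int value) {
        return value < 0 ? 0 : (value > 255 ? 255 : value);
    }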
If you are capturing RAW_SENSOR images (Bayer-pattern unprocessed sensor data), then you need to do a whole lot of image processing or just save a DNG.
I had the same need, and wanted a quick and dirty manipulation for a demo. I was not worried about efficient processing for a final product. This was easily achieved using the following Java solution.
My original code to connect the Camera2 preview to a TextureView was commented out and replaced with a surface from an ImageReader:
// Get the surface of the TextureView on the layout
//SurfaceTexture texture = mTextureView.getSurfaceTexture();
//if (null == texture) {
// return;
//}
//texture.setDefaultBufferSize(mPreviewWidth, mPreviewHeight);
//Surface surface = new Surface(texture);
// Capture the preview to the memory reader instead of a UI element
mPreviewReader = ImageReader.newInstance(mPreviewWidth, mPreviewHeight, ImageFormat.JPEG, 1);
Surface surface = mPreviewReader.getSurface();
// This part stays the same regardless of where we render
mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mCaptureRequestBuilder.addTarget(surface);
mCameraDevice.createCaptureSession(...
Then I registered a listener for the image:
mPreviewReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image != null) {
            Image.Plane plane = image.getPlanes()[0];
            ByteBuffer buffer = plane.getBuffer();
            byte[] bytes = new byte[buffer.capacity()];
            buffer.get(bytes);
            Bitmap preview = BitmapFactory.decodeByteArray(bytes, 0, buffer.capacity());
            image.close();
            if (preview != null) {
                // This gets the canvas for the same mTextureView we would have connected to the
                // Camera2 preview directly above.
                Canvas canvas = mTextureView.lockCanvas();
                if (canvas != null) {
                    float[] colorTransform = {
                            0, 0, 0, 0, 0,
                            .35f, .45f, .25f, 0, 0,
                            0, 0, 0, 0, 0,
                            0, 0, 0, 1, 0};
                    ColorMatrix colorMatrix = new ColorMatrix();
                    colorMatrix.set(colorTransform); // Apply the monochrome green
                    ColorMatrixColorFilter colorFilter = new ColorMatrixColorFilter(colorMatrix);
                    Paint paint = new Paint();
                    paint.setColorFilter(colorFilter);
                    canvas.drawBitmap(preview, 0, 0, paint);
                    mTextureView.unlockCanvasAndPost(canvas);
                }
            }
        }
    }
}, mBackgroundPreviewHandler);
I'm attempting to get a bitmap from a camera preview in Android, then examine the bitmap and draw something to the screen based on what the camera is seeing. This all has to be done live due to the nature of the project I'm working on.
At the moment I'm using a surfaceview to display the live preview and I'm getting the bitmap using the following code I found on a separate question on here.
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    snipeCamera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters parameters = camera.getParameters();
            int width = parameters.getPreviewSize().width;
            int height = parameters.getPreviewSize().height;
            ByteArrayOutputStream outstr = new ByteArrayOutputStream();
            Rect rect = new Rect(0, 0, width, height);
            YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
            yuvimage.compressToJpeg(rect, 100, outstr);
            Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
            bit = bmp.copy(Bitmap.Config.ARGB_8888, true);
        }
    });
}
bit is declared as:
public static Bitmap bit;
Whenever I try to access this bitmap anywhere else I get a null pointer exception. I'm guessing it has something to do with the fact that it's being set inside setPreviewCallback, but I don't know enough about Android to fix it. Is there something I can do to get access to this bitmap? Or is there another way I can work with a live bitmap of what the camera is seeing?
Is there something I can do to get access to this bitmap?
You already have access to your Bitmap. You are getting it from decodeByteArray().
(note that I am assuming that the code leading up to and including decodeByteArray() is correct — there could be additional problems lurking in there)
You just need to consume the Bitmap in your onPreviewFrame() method. If your work to do is quick (sub-millisecond), probably just do that work right there. If your work to do is not-so-quick, you'll now need to work out background threading plans, arranging to update your UI with the results of that work on the main application thread, and related issues.
If, to consume the Bitmap, you need access to other objects in your camera UI, just make sure that the class holding your SurfaceHolder.Callback has references to those other objects. Your onPreviewFrame() method is in an anonymous inner class inside that class, so onPreviewFrame() has access to everything it has.
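As a rough sketch of the not-so-quick case (purely illustrative: backgroundHandler, decodeFrame() and the assumption that this lives in your Activity are mine, not part of your code):

    // Illustrative only: backgroundHandler is a Handler backed by a HandlerThread, and
    // decodeFrame() stands in for whatever YUV-to-Bitmap work you already do.
    @Override
    public void onPreviewFrame(final byte[] data, final Camera camera) {
        backgroundHandler.post(new Runnable() {
            @Override
            public void run() {
                final Bitmap frame = decodeFrame(data, camera);  // heavy work off the main thread
                runOnUiThread(new Runnable() {                   // assumes this is inside your Activity
                    @Override
                    public void run() {
                        bit = frame;   // now other UI code can safely read the latest frame
                        // ...update your views based on the processed frame...
                    }
                });
            }
        });
    }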
I have read many posts here, but I didn't find the correct answer.
I am trying to do something like this:
@Override
public void onPictureTaken(byte[] paramArrayOfByte, Camera paramCamera) {
    try {
        Bitmap bitmap = BitmapFactory.decodeByteArray(paramArrayOfByte, 0,
                paramArrayOfByte.length);
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        FileOutputStream os = new FileOutputStream(Singleton.mPushFilePath);
        Matrix matrix = new Matrix();
        matrix.postRotate(90);
        Bitmap resizedBitmap = Bitmap.createBitmap(bitmap, 0, 0, width,
                height, matrix, false);
        resizedBitmap.compress(Bitmap.CompressFormat.JPEG, 95, os);
        os.close();
        ...
Is there a way to rotate the picture without using BitmapFactory? I want to rotate the picture without loss of quality!
Perhaps you can take the picture already rotated as you desire using Camera.setDisplayOrientation? Check Android camera rotate. Further, investigate Camera.Parameters.setRotation(). One of these techniques should do the trick for you.
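As a rough illustration of the setRotation() route (following the pattern shown in the Android documentation; cameraId, camera and the orientation value from an OrientationEventListener are assumed to exist, and note that on some devices setRotation() only sets the EXIF orientation tag rather than rotating the pixels):

    public void onOrientationChanged(int orientation) {
        if (orientation == OrientationEventListener.ORIENTATION_UNKNOWN) return;
        Camera.CameraInfo info = new Camera.CameraInfo();
        Camera.getCameraInfo(cameraId, info);
        orientation = (orientation + 45) / 90 * 90;   // snap to the nearest 90 degrees
        int rotation;
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            rotation = (info.orientation - orientation + 360) % 360;
        } else {  // back-facing camera
            rotation = (info.orientation + orientation) % 360;
        }
        Camera.Parameters params = camera.getParameters();
        params.setRotation(rotation);   // the JPEG from onPictureTaken then arrives already rotated
        camera.setParameters(params);
    }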
Otherwise your code looks fine, except that you pass quality 95 to Bitmap.compress; use 100 for the highest JPEG quality (though JPEG is still not truly lossless).
To avoid out-of-memory exception, use Camera.Parameters.setPictureSize() to take a lower resolution picture (e.g. 3Mpx). i.e. do you really need an 8Mpx photo? Make sure to use Camera.Parameters.getSupportedPictureSizes() to determine the supported sizes on your device.
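A small sketch of that size-selection step (illustrative only): walk getSupportedPictureSizes() and pick the largest size at or below a target pixel count, roughly 3 Mpx here.

    Camera.Parameters params = camera.getParameters();
    long targetPixels = 3000000L;                 // ~3 Mpx budget
    Camera.Size best = null;
    for (Camera.Size s : params.getSupportedPictureSizes()) {
        long px = (long) s.width * s.height;
        if (px <= targetPixels && (best == null || px > (long) best.width * best.height)) {
            best = s;                             // largest size that fits the budget
        }
    }
    if (best != null) {
        params.setPictureSize(best.width, best.height);
        camera.setParameters(params);
    }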
My application has a "photobooth" feature which will allow the user to take a picture with the camera and at the same time show an overlay image on top of the camera view. After the picture is taken, I need to save what the user saw while taking the picture to the filesystem.
I have run into one big problem while developing a solution to this: capturing an image with dimensions compatible with the overlay image, so that the result is what the user saw while taking the picture.
It seems I cannot capture an image from the camera with arbitrary dimensions (I basically have to pick from a list of them), and some phones can only produce certain dimensions.
Since I cannot choose the size of the captured image, it seems as though I will be required to include many different sizes of the overlay image and attach the best match to the captured image. I can't just slap any old overlay on top of the camera image and make it look right.
Questions:
Am I over-complicating this "camera image + overlay image creation" process?
What suggestions do you have for completing this task without needing to include several differently sized overlay images?
Edit:
Here is my solution (brief). Please realize this is not a perfect and maybe not the most efficient way to do this, but it works. Some things may be unnecessary or redundant, but whatever!
Notes:
This doesn't work too well on tablet devices.
The overlay image needs to be rotated into landscape mode (even though you will be taking the image holding the phone in portrait).
The overlay size is 480x320.
You need to force the activity into landscape mode while taking the picture (now the overlay looks like it's portrait!).
I add the overlay image view using addContentView(overlayImageView, new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT));
...
final Camera.PictureCallback jpegCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        Bitmap mutableBitmap = null;
        try {
            // For a PORTRAIT overlay, taking the image while holding the phone in PORTRAIT mode
            mutableBitmap = BitmapFactory.decodeByteArray(data, 0, data.length, options).copy(Bitmap.Config.RGB_565, true);
            Matrix matrix = new Matrix();
            int width = mutableBitmap.getWidth();
            int height = mutableBitmap.getHeight();
            int newWidth = overlayImage.getDrawable().getBounds().width();
            int newHeight = overlayImage.getDrawable().getBounds().height();
            float scaleWidth = ((float) newWidth) / width;
            float scaleHeight = ((float) newHeight) / height;
            matrix.postScale(scaleWidth, scaleHeight);
            matrix.postRotate(90);
            Bitmap resizedBitmap = Bitmap.createBitmap(mutableBitmap, 0, 0, mutableBitmap.getWidth(), mutableBitmap.getHeight(), matrix, true);
            finalBitmap = resizedBitmap.copy(Bitmap.Config.RGB_565, true);
            Canvas canvas = new Canvas(finalBitmap);
            Bitmap overlayBitmap = BitmapFactory.decodeResource(getResources(), overlay);
            matrix = new Matrix();
            matrix.postRotate(90);
            Bitmap resizedOverlay = Bitmap.createBitmap(overlayBitmap, 0, 0, overlayBitmap.getWidth(), overlayBitmap.getHeight(), matrix, true);
            canvas.drawBitmap(resizedOverlay, 0, 0, new Paint());
            canvas.scale(50, 0);
            canvas.save();
            // finalBitmap is the image with the overlay on it
        } catch (OutOfMemoryError e) {
            // fail
        }
    }
};
I think this is a question of how you manipulate your overlays. You can crop an overlay according to the captured image size and resize it to fit while preserving its ratio. By comparing its ratio to the background ratio, you can place the overlay in its optimal position.
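A minimal sketch of that resize-to-fit idea (photo and overlay are assumed Bitmaps; the names are illustrative): scale the overlay by the smaller of the two ratios so it fits inside the photo without distortion, then draw it centred.

    float scale = Math.min((float) photo.getWidth() / overlay.getWidth(),
                           (float) photo.getHeight() / overlay.getHeight());
    int w = Math.round(overlay.getWidth() * scale);
    int h = Math.round(overlay.getHeight() * scale);
    Bitmap scaledOverlay = Bitmap.createScaledBitmap(overlay, w, h, true);
    Bitmap combined = photo.copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(combined);
    // Centre the scaled overlay on the captured photo
    canvas.drawBitmap(scaledOverlay, (photo.getWidth() - w) / 2f, (photo.getHeight() - h) / 2f, null);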
I would keep overlays big enough, with a wide border (bleed), so they can easily be sized to an image and drawn with good quality using filters. I guess overlays are something you would design with transparent parts, like an image of a clown without a face, so the user can snap somebody else's face into it?