I am using Android Studio to build an app that starts a camera preview and processes each frame delivered to the onPreviewFrame(byte[] data, Camera camera) callback of android.hardware.Camera. The data arrives in NV21 format (the default for the Camera class), so I need to convert each raw frame to RGB before my application can process its colors. I am using the following method to convert the byte[] array into a Bitmap and then use the Bitmap appropriately.
private Bitmap getBitmap(byte[] data, Camera camera) {
    // Compress the NV21 preview frame to an in-memory JPEG
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    previewSize = camera.getParameters().getPreviewSize();
    yuvimage = new YuvImage(data, ImageFormat.NV21, previewSize.width, previewSize.height, null);
    yuvimage.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), 80, baos);
    jdata = baos.toByteArray();
    // Decode the JPEG back into an RGB Bitmap
    Bitmap bitmap = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);
    // Rotate the image (mtx is a pre-configured android.graphics.Matrix field)
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), mtx, true);
    return bitmap;
}
This method works well and I get the desired results. But since my preview size is set to the maximum supported by the device the application runs on (currently 1920x1088 on my phone), this method takes too long, and as a result I can only process 1 to 2 images per second. If I remove the conversion, I can see that onPreviewFrame is called 10 to 12 times per second, meaning I can receive that many frames per second but can only process 1 or 2 because of the conversion.
Is there a faster way that I can use in order to receive a RGB matrix from the byte[] array that is passed?
Actually, in my case it was sufficient to remove the rotation of the bitmap: decoding the bitmap took 250-350 ms, while rotating it took ~500 ms. So I removed the rotation and changed the orientation of the scanning instead. Fortunately, this isn't hard at all. Say I have a function that checks a given pixel for its color, and it looked like this:
boolean foo(int X, int Y) {
    // statements
    // ...
}
Now it looks like this:
boolean foo(int X, int Y) {
    // Remap the coordinates instead of rotating the bitmap:
    // (X, Y) in the rotated frame maps to (Y, height - 1 - X) in the raw frame
    int oldX = X;
    int oldY = Y;
    Y = bitmap.getHeight() - 1 - oldX;
    X = oldY;
    // statements
    // ...
}
Hope this helps. :)
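For completeness: another common way to speed this up is to decode NV21 to ARGB directly in Java, skipping the YuvImage/JPEG round trip altogether. Below is a minimal sketch of the widely circulated decodeYUV420SP routine (illustrative, not taken from the original post); the resulting int[] holds packed ARGB pixels that can be processed directly as an RGB matrix or passed to Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888).
public static void decodeNV21(int[] argb, byte[] nv21, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        // the VU plane starts after the Y plane; one VU pair covers a 2x2 block
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & nv21[yp]) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) { // NV21 interleaves V then U
                v = (0xff & nv21[uvp++]) - 128;
                u = (0xff & nv21[uvp++]) - 128;
            }
            // fixed-point YUV -> RGB conversion
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            r = Math.max(0, Math.min(r, 262143));
            g = Math.max(0, Math.min(g, 262143));
            b = Math.max(0, Math.min(b, 262143));
            argb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}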
Related
I am trying to do face detection and add a mask (graphic overlay) using the Google Vision API. The problem is that I cannot get the output from the camera after detecting and adding the mask. So far I have tried the solution from this GitHub issue, https://github.com/googlesamples/android-vision/issues/24. Based on that issue I added a custom detector class (Mobile Vision API - concatenate new detector object to continue frame processing) and added this to my detector class (How to create Bitmap from grayscaled byte buffer image?).
MyDetectorClass
class MyFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    public SparseArray<Face> detect(Frame frame) {
        // *** add your custom frame processing code here
        ByteBuffer byteBuffer = frame.getGrayscaleImageData();
        byte[] bytes = byteBuffer.array();
        int w = frame.getMetadata().getWidth();
        int h = frame.getMetadata().getHeight();
        YuvImage yuvimage = new YuvImage(bytes, ImageFormat.NV21, w, h, null);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        yuvimage.compressToJpeg(new Rect(0, 0, w, h), 100, baos); // where 100 is the quality of the generated JPEG
        byte[] jpegArray = baos.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
        Log.e("got bitmap", "bitmap val " + bitmap);
        return mDelegate.detect(frame);
    }

    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
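For reference, a sketch (with illustrative, hypothetical variable names) of how such a wrapper is typically handed to a CameraSource:
// wrap the stock detector so the custom detect() sees every frame
FaceDetector faceDetector = new FaceDetector.Builder(context).build();
MyFaceDetector myFaceDetector = new MyFaceDetector(faceDetector);
CameraSource cameraSource = new CameraSource.Builder(context, myFaceDetector)
        .setRequestedPreviewSize(640, 480)
        .build();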
I am getting a rotated bitmap, and it is without the mask (graphic overlay) I added. How can I get the camera output with the mask?
Thanks in advance.
The simple answer is: You can't.
Why? The Android camera outputs its frames as an NV21 ByteBuffer, while you must generate your masks from the landmark points into a separate Bitmap, and then join the two.
Sorry, but that's how the Android Camera API works; nothing can be done about it. You have to compose the result manually.
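For illustration, a minimal sketch of that manual join, assuming a cameraBitmap decoded from the frame and a maskBitmap built from the landmarks (all names here are hypothetical):
// draw the mask on top of a mutable copy of the camera frame
Bitmap result = cameraBitmap.copy(Bitmap.Config.ARGB_8888, true);
Canvas canvas = new Canvas(result);
canvas.drawBitmap(maskBitmap, maskLeft, maskTop, null); // position computed from the face landmarks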
Also, I wouldn't grab the camera preview, convert it to a YuvImage, and then to a Bitmap. That process consumes a lot of resources and makes the preview very slow. Instead, I would use the method below, which is much faster and rotates your preview internally, so you don't lose time doing it yourself:
outputFrame = new Frame.Builder()
        .setImageData(mPendingFrameData, mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.NV21)
        .setId(mPendingFrameId)
        .setTimestampMillis(mPendingTimeMillis)
        .setRotation(mRotation)
        .build();
mDetector.receiveFrame(outputFrame);
All the code can be found in CameraSource.java
I have to create a greyscale live camera preview. It is working fine, but I want to make the code faster.
private android.hardware.Camera.PreviewCallback previewCallback = new android.hardware.Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] abyte0, Camera camera) {
        Size size = cameraParams.getPreviewSize();
        int[] rgbData = YuvUtils.decodeGreyscale(abyte0, size.width, size.height);
        Bitmap bitmapung = Bitmap.createBitmap(rgbData, size.width, size.height, Bitmap.Config.ARGB_8888);
        Bitmap bitmapToSet = Bitmap.createBitmap(bitmapung, 0, 0, widthPreview, heightPreview, matrix, true);
        MyActivity.View.setBitmapToDraw(bitmapToSet);
    }
};
1. As I am creating the Bitmap object twice, can I do this job with a single Bitmap object?
2. Where (in which method, e.g. onResume or onCreate) should I get the camera size (width and height) just once, so that I don't have to fetch it in every callback?
3. I know I should use an AsyncTask for this; I will do that after solving my current issue.
EDIT
I can create the bitmap while getting the camera size (width, height), so a single Bitmap instance is associated with my class, but then I have to call setPixels on it:
bitmapung.setPixels(rgbData, offset, stride, x, y, size.width, size.height);
What should I set the values of stride, offset, and x, y to? Please also explain what they mean.
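For reference, a minimal sketch assuming rgbData is a tightly packed width*height array of ARGB ints: offset is the index in the array where reading starts, stride is the number of ints per row of the array (equal to the width when rows are packed back to back), and x/y are the top-left corner of the region written into the bitmap.
// full-frame copy from a packed pixel array (illustrative values)
bitmapung.setPixels(rgbData,
        0,           // offset: start at the beginning of the array
        size.width,  // stride: ints per row; equals the width for a packed array
        0, 0,        // x, y: write from the bitmap's top-left corner
        size.width, size.height);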
Also have a look at these questions:
Create custom Color Effect
Add thermal effect to yuvImage
I am creating an Android app that needs skew angle detection: when an image arrives in a slanted position, after processing it should end up in an upright position.
To achieve that, I use Leptonica (via the tess-two library) and its findSkew() function. Here is my piece of source code:
// get bitmap picture
BitmapFactory.Options PictureOptions = new BitmapFactory.Options();
PictureOptions.inSampleSize = 2;
Image = BitmapFactory.decodeByteArray(data, 0, data.length, PictureOptions);
// get skew angle value
float SkewValue = Skew.findSkew(ReadFile.readBitmap(Image));
// rotate bitmap using matrix
int w = Image.getWidth();
int h = Image.getHeight();
Matrix MatrixSkew = new Matrix();
MatrixSkew.postRotate(SkewValue);
Bitmap BitmapSkew = Bitmap.createBitmap(Image, 0, 0, w, h, MatrixSkew, true);
// set BitmapSkew to imageview
OutputImage.setImageBitmap(BitmapSkew);
But when it runs, the rotation does not happen; the picture is still in a tilted position. What is my mistake? Would you help me fix it, or do you have other ways to rotate tilted images automatically? Thank you.
I have read many posts here, but I didn't find a correct answer.
I am trying to do something like this:
@Override
public void onPictureTaken(byte[] paramArrayOfByte, Camera paramCamera) {
    try {
        Bitmap bitmap = BitmapFactory.decodeByteArray(paramArrayOfByte, 0,
                paramArrayOfByte.length);
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        FileOutputStream os = new FileOutputStream(Singleton.mPushFilePath);
        Matrix matrix = new Matrix();
        matrix.postRotate(90);
        Bitmap resizedBitmap = Bitmap.createBitmap(bitmap, 0, 0, width,
                height, matrix, false);
        resizedBitmap.compress(Bitmap.CompressFormat.JPEG, 95, os);
        os.close();
        ...
Is there a way to rotate the picture without using BitmapFactory? I want to rotate the picture without loss of quality!
Perhaps you can take the picture already rotated as you desire using Camera.setDisplayOrientation? Check "Android camera rotate". Further, investigate Camera.Parameters.setRotation(). One of these techniques should do the trick for you.
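For instance, a minimal sketch of the setRotation() route (mCamera here is a hypothetical Camera field; depending on the device this either rotates the JPEG itself or only sets the EXIF orientation tag):
Camera.Parameters params = mCamera.getParameters();
params.setRotation(90); // must be 0, 90, 180 or 270
mCamera.setParameters(params);
// the byte[] delivered to onPictureTaken() should now arrive rotated (or EXIF-tagged)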
Otherwise your code looks fine, except for the quality parameter of 95 on Bitmap.compress; use 100 for the highest quality (note that JPEG remains lossy even at 100).
To avoid an out-of-memory exception, use Camera.Parameters.setPictureSize() to take a lower-resolution picture (e.g. 3 Mpx); do you really need an 8 Mpx photo? Make sure to use Camera.Parameters.getSupportedPictureSizes() to determine the supported sizes on your device.
I am reading a raw image from the network. This image has been read by an image sensor, not from a file.
These are the things I know about the image:
~ Height & Width
~ Total size (in bytes)
~ 8-bit grayscale
~ 1 byte/pixel
I'm trying to convert this image to a bitmap to display it in an ImageView.
Here's what I tried:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.outHeight = shortHeight; //360
opt.outWidth = shortWidth;//248
imageBitmap = BitmapFactory.decodeByteArray(imageArray, 0, imageSize, opt);
decodeByteArray returns null, since it cannot decode my image.
I also tried reading it directly from the input stream, without converting it to a byte array first:
imageBitmap = BitmapFactory.decodeStream(imageInputStream, null, opt);
This returns null as well.
I've searched on this & other forums, but cannot find a way to achieve this.
Any ideas?
EDIT: I should add that the first thing I did was to check that the stream actually contains the raw image. I did this using other applications (iPhone/Windows MFC), and they are able to read it and display the image correctly. I just need to figure out a way to do this in Java/Android.
Android does not support grayscale bitmaps. So, first thing, you have to extend every byte into a 32-bit ARGB int: alpha is 0xff, and R, G, and B are copies of the source pixel's byte value. Then create the bitmap on top of that array.
Also (see comments), it seems that this device treats 0 as white and 1 as black, so we have to invert the source bits.
So, let's assume that the source image is in a byte array called src. Here's the code:
byte[] src; // comes from somewhere...
byte[] bits = new byte[src.length * 4]; // that's where the RGBA array goes
int i;
for (i = 0; i < src.length; i++) {
    bits[i * 4] =
    bits[i * 4 + 1] =
    bits[i * 4 + 2] = (byte) ~src[i]; // copy the inverted source value into R, G and B
    bits[i * 4 + 3] = (byte) 0xff;    // the alpha
}
// Now put these nice RGBA pixels into a Bitmap object
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(bits));
Once I did something like this to decode the byte stream obtained from the camera preview callback. Note that this createBitmap overload takes an int[] of packed ARGB pixels, so the preview bytes have to be converted to ints first:
// imagePixels must be an int[] of packed ARGB values
Bitmap.createBitmap(imagePixels, previewWidth, previewHeight,
        Bitmap.Config.ARGB_8888);
Give it a try.
for (i = 0; i < src.length; i++) {
    bits[i * 4] = bits[i * 4 + 1] = bits[i * 4 + 2] = (byte) ~src[i]; // invert the source bits
    bits[i * 4 + 3] = (byte) 0xff; // the alpha
}
The conversion loop above can take a lot of time: converting a 640x800 8-bit image to RGBA can take more than 500 ms. A quicker solution is to use the ALPHA_8 format for the bitmap and apply a color filter:
// set up a color filter to invert the alpha; in my case this was needed
float[] mx = new float[] {
        1.0f, 0, 0, 0, 0,    // red
        0, 1.0f, 0, 0, 0,    // green
        0, 0, 1.0f, 0, 0,    // blue
        0, 0, 0, -1.0f, 255  // alpha
};
ColorMatrixColorFilter cf = new ColorMatrixColorFilter(mx);
imageView.setColorFilter(cf);
// now only the alpha channel of the image needs to be set; skipping the conversion step makes it a lot faster
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ALPHA_8);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(src)); // src is not modified; it's just the 8-bit grayscale array
imageView.setImageBitmap(bm);
Use Drawable.createFromStream(). Here's how to do it with an HttpResponse, but you can get the InputStream any way you want:
InputStream stream = response.getEntity().getContent();
Drawable drawable = Drawable.createFromStream(stream, "Get Full Image Task");
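To display it (imageView here is a hypothetical ImageView):
imageView.setImageDrawable(drawable);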