It's well documented that Android's camera preview data is returned in NV21 (YUV 420). Android 2.2 added the YuvImage class for decoding that data. The problem I've encountered is that the data decoded via YuvImage appears corrupt or incorrect. I used the RenderScript sample app called HelloCompute, which transforms a Bitmap into a monochrome Bitmap. I used two methods for decoding the preview data into a Bitmap and passing it as input to the RenderScript:
Method 1 - Android YuvImage Class:
YuvImage preview = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream mJpegOutput = new ByteArrayOutputStream(data.length);
preview.compressToJpeg(new Rect(0, 0, width, height), 100, mJpegOutput);
mBitmapIn = BitmapFactory.decodeByteArray( mJpegOutput.toByteArray(), 0, mJpegOutput.size());
// pass mBitmapIn to RS
Method 2 - Posted Decoder Method:
As posted over here by David Pearlman
// workaround for YUV format
mBitmapIn = Bitmap.createBitmap(
        ImageUtil.decodeYUV420SP(data, width, height),
        width,
        height,
        Bitmap.Config.ARGB_8888);
// pass mBitmapIn to RS
When the image is processed by the RenderScript and displayed, Method 1 is very grainy and not monochrome, while Method 2 produces the expected output: a monochrome image of the preview frame. Am I doing something wrong, or is the YuvImage class not usable? I'm testing this on a Xoom running Android 3.1.
Furthermore, I displayed the bitmaps produced by both methods on screen before passing them to the RenderScript. The bitmap from Method 1 shows noticeable differences in lighting (I suspect this is due to the JPEG compression), while Method 2's bitmap is identical to the preview frame.
There is no justification for using a JPEG encode/decode round trip just to convert a YUV image to a grayscale bitmap (I believe you want grayscale, not a monochrome black-and-white bitmap, after all). You can find many code samples that produce the result you need. You may use this one: Converting preview frame to bitmap.
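For illustration, a minimal sketch of that idea, assuming an NV21 buffer named data with the width and height from the question: the first width * height bytes are the luminance plane, so a grayscale bitmap can be built from them directly, with no compression step.
int[] pixels = new int[width * height];
for (int i = 0; i < width * height; i++) {
    int y = data[i] & 0xFF;                             // luminance byte from the Y plane
    pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y;  // opaque gray ARGB pixel
}
Bitmap gray = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);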
Related
I'm using the Android Camera2 API to take still-capture images and display them on a TextureView (for later image editing).
I have been scouring the web for a faster method to:
Decode Camera Image buffer into bitmap
Scale bitmap to size of screen and rotate it (since it comes in rotated 90 degrees)
Display it on a texture view
Currently I've managed an execution time of around 0.8s for the above, but this is too long for my particular application.
A few solutions I've considered were:
Simply taking a single frame of the preview (timing-wise this was fast, except that I had no control over auto flash)
Getting a YUV_420_888-formatted image instead and then somehow turning that into a bitmap (there's a lot of material online that might help, but my initial attempts have borne no fruit so far)
Simply requesting a reduced-quality image from the camera itself, but from what I've read it looks like the JPEG_QUALITY parameter in CaptureRequest does nothing! I've also tried setting BitmapFactory.Options.inSampleSize, but without any noticeable improvement in speed
Finding some way to directly manipulate the JPEG byte array from the image buffer to transform it and then converting to a bitmap, all in one shot
For your reference, the following code takes the image buffer, decodes and transforms it, and displays it on the TextureView:
Canvas canvas = mTextureView.lockCanvas();

// obtain image bytes (JPEG) from the Image in the camera fragment
// mFragment.getImage() returns an Image object
ByteBuffer buffer = mFragment.getImage().getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);

// decoding process takes several hundred milliseconds
Bitmap src = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
mFragment.getImage().close();

// resize horizontally oriented images
if (src.getWidth() > src.getHeight()) {
    // transformation matrix that scales and rotates
    Matrix matrix = new Matrix();
    if (CameraLayout.getFace() == CameraCharacteristics.LENS_FACING_FRONT) {
        matrix.setScale(-1, 1);
    }
    matrix.postRotate(90);
    matrix.postScale(((float) canvas.getWidth()) / src.getHeight(),
            ((float) canvas.getHeight()) / src.getWidth());

    // bitmap creation process takes another several hundred millis!
    Bitmap resizedBitmap = Bitmap.createBitmap(
            src, 0, 0, src.getWidth(), src.getHeight(), matrix, true);
    canvas.drawBitmap(resizedBitmap, 0, 0, null);
} else {
    canvas.drawBitmap(src, 0, 0, null);
}

// post canvas to texture view
mTextureView.unlockCanvasAndPost(canvas);
This is my first question on Stack Overflow, so I apologize if I haven't quite followed common conventions.
Thanks in advance for any feedback.
If all you're doing with this is drawing it into a View, and you won't be saving it, have you tried simply requesting JPEGs at a lower resolution than the maximum, one that better matches the screen dimensions?
Alternatively, if you need the full-size image, JPEG images typically contain a thumbnail - extracting that and displaying it is a lot faster than processing the full-resolution image.
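A sketch of that thumbnail route, assuming the androidx ExifInterface library (androidx.exifinterface.media.ExifInterface) is available, and reusing bytes, the JPEG byte array from the code above:
try {
    // pull the embedded EXIF thumbnail out of the JPEG bytes; null if none is present
    ExifInterface exif = new ExifInterface(new ByteArrayInputStream(bytes));
    byte[] thumb = exif.getThumbnail();
    if (thumb != null) {
        // decoding the small thumbnail is much cheaper than decoding the full JPEG
        Bitmap small = BitmapFactory.decodeByteArray(thumb, 0, thumb.length);
    }
} catch (IOException e) {
    // fall back to decoding the full-resolution JPEG
}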
In terms of your current code, if possible, you should avoid having to create a second Bitmap with the scaling. Could you instead place an ImageView on top of your TextureView when you want to display the image, and then rely on its built-in scaling?
Or use Canvas.concat(Matrix) instead of creating the intermediate Bitmap?
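A rough sketch of the concat approach, reusing the question's names (canvas, src): note that Bitmap.createBitmap(src, ..., matrix, true) re-origins the rotated result automatically, so drawing through the canvas transform needs an explicit translate after the rotation.
Matrix matrix = new Matrix();
matrix.postRotate(90);                       // rotate clockwise, as in the question
matrix.postTranslate(src.getHeight(), 0);    // shift the rotated image back on-screen
matrix.postScale((float) canvas.getWidth() / src.getHeight(),
        (float) canvas.getHeight() / src.getWidth());
canvas.concat(matrix);                       // transform applies to subsequent draw calls
canvas.drawBitmap(src, 0, 0, null);          // no intermediate Bitmap allocation
// (front-camera mirroring from the question omitted for brevity)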
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(byteBuffer);
// bitmap is valid and can be displayed
I extracted a byte array from the valid byteBuffer, but decodeByteArray() returns null when I try to decode it. Can someone explain why this is the case?
byteBuffer.rewind();
byteBuffer.get(byteArray, 0, byteBuffer.capacity());
Bitmap bitmap = BitmapFactory.decodeByteArray(byteArray, 0 , byteArray.length);
// returns null
I believe the two functions do different things and expect different data.
copyPixelsFromBuffer()
is used to import raw pixel information into an existing Bitmap image which already has size and pixel depth configured.
BitmapFactory.decodeByteArray()
is used to create a Bitmap from a byte array containing a complete encoded image file (such as JPEG or PNG), not just raw pixels. That's why the function doesn't take (or need) size and pixel-depth information: it gets all of that from the bytes passed to it.
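A minimal sketch of that distinction, with assumed dimensions for illustration; the raw-pixel path needs a Bitmap whose size and config already match the buffer, while the decode path needs a complete encoded file:
int width = 640, height = 480;                       // assumed dimensions

// 1) Raw ARGB_8888 pixels -> copyPixelsFromBuffer()
ByteBuffer rawPixels = ByteBuffer.allocate(width * height * 4);  // 4 bytes per pixel
Bitmap raw = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
raw.copyPixelsFromBuffer(rawPixels);                 // works: buffer holds raw pixels

// 2) Encoded image bytes -> BitmapFactory.decodeByteArray()
ByteArrayOutputStream baos = new ByteArrayOutputStream();
raw.compress(Bitmap.CompressFormat.PNG, 100, baos);  // produce a real PNG byte stream
byte[] encoded = baos.toByteArray();
Bitmap decoded = BitmapFactory.decodeByteArray(encoded, 0, encoded.length);  // non-null

// Passing the raw pixel bytes to decodeByteArray() instead returns null,
// because they are not a recognizable image file format.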
I'm working on an app that gets an NV21 buffer from the onPreviewFrame() callback, passes it through a JNI layer, and then converts it to RGB using OpenCV in C++. Below is the sample code:
Mat yuv(height+height/2, width, CV_8UC1, inBuffer);
Mat rgb(height, width, CV_8UC3);
cvtColor(yuv, rgb, COLOR_YUV2RGB_NV21);
Now, in the Android app, I get the RGB buffer back and try to display it by generating a bitmap from it:
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(imageBuffer));
However, it doesn't display a proper image. Does anyone know what I'm missing here?
In your Bitmap configuration, change Bitmap.Config.RGB_565 to Bitmap.Config.ARGB_8888.
From the Android developer docs:
Bitmap.Config RGB_565
Each pixel is stored on 2 bytes and only the RGB channels are encoded: red is stored with 5 bits of precision (32 possible values), green is stored with 6 bits of precision (64 possible values), and blue is stored with 5 bits of precision.
Also, in your native function, keep a 4-channel Mat and convert with COLOR_YUV2RGBA_NV21.
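For completeness, a minimal sketch of the matching Java side, assuming the native code now fills imageBuffer with width * height * 4 bytes of RGBA data:
// With COLOR_YUV2RGBA_NV21 on the native side, the buffer layout matches
// ARGB_8888's in-memory RGBA byte order, so a direct pixel copy works.
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(imageBuffer));
// bitmap can now be drawn on a Canvas or set on an ImageView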
I am trying to load images into a Mat in OpenCV for Android for face recognition.
The images are in JPEG format at a size of 640 x 480.
I am using Eclipse, and this code is in a .cpp file.
This is my code:
while (getline(file, line)) {
    stringstream liness(line);
    getline(liness, path, ',');
    getline(liness, classlabel);
    if (!path.empty() && !classlabel.empty()) {
        images.push_back(imread(path, 0));
        labels.push_back(atoi(classlabel.c_str()));
    }
}
However, I am getting an error saying "The matrix is not continuous, thus its number of rows cannot be changed in function cv::Mat cv::Mat::reshape(int, int) const".
I tried the solution in "OpenCV 2.0 C++ API using imshow: returns unhandled exception and bad-flag",
but that one is for Visual Studio.
Any help would be greatly appreciated.
Conversion of the image from the camera preview.
The image is converted to grayscale from the camera preview data:
Mat matRgb = new Mat();
Imgproc.cvtColor(matYuv, matRgb, Imgproc.COLOR_YUV420sp2RGB, 4);
try {
    Mat matGray = new Mat();
    Imgproc.cvtColor(matRgb, matGray, Imgproc.COLOR_RGB2GRAY, 0);
    resultBitmap = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(matGray, resultBitmap);
} catch (Exception e) {
    e.printStackTrace();
}
Saving the image:
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmFace[0].compress(Bitmap.CompressFormat.JPEG, 100, stream);
byte[] flippedImageByteArray = stream.toByteArray();
The 'Mat not continuous' error is not at all related to the link you posted.
If you're trying Fisherfaces or Eigenfaces, the images have to be 'flattened' to a single row for the PCA.
This is not possible if the data has 'gaps' or was padded to make the row size a multiple of 4; some image editors do that to your data.
Also, imho your images are by far too large (PCA works best when the data matrix is almost square, i.e. the row size (num_pixels) is similar to the column size (num_images)).
So my proposal would be to resize the training images (and also the test images later) to something like 100x100 when loading them, as sketched below; this will also give you a continuous data block.
(And again, avoid JPEGs for anything image-processing related; too many compression artefacts!)
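A sketch of that resize-on-load idea; the question's loader is C++, but the same change expressed with OpenCV's Java bindings (with images as a List<Mat> and labels as a List<Integer>, both assumptions here) would look roughly like this:
// Hypothetical loader tweak: read each face as grayscale, then resize to a fixed
// 100x100 so every training sample is a small, continuous Mat of equal size.
Mat face = Imgcodecs.imread(path, Imgcodecs.IMREAD_GRAYSCALE);
Mat small = new Mat();
Imgproc.resize(face, small, new Size(100, 100));
images.add(small);
labels.add(Integer.parseInt(classlabel));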
I'm trying to create an Android application that will process camera frames in real time. To start off with, I just want to display a grayscale version of what the camera sees. I've managed to extract the appropriate values from the byte array in the onPreviewFrame method. Below is just a snippet of my code:
byte[] pic;
int pic_size;
Bitmap picframe;

public void onPreviewFrame(byte[] frame, Camera c)
{
    pic_size = mCamera.getParameters().getPreviewSize().height
            * mCamera.getParameters().getPreviewSize().width;
    pic = new byte[pic_size];
    for (int i = 0; i < pic_size; i++)
    {
        pic[i] = frame[i];
    }
    picframe = BitmapFactory.decodeByteArray(pic, 0, pic_size);
}
The first width * height values of the byte[] frame array are the luminance (greyscale) values. Once I've extracted them, how do I display them on the screen as an image? It's not a 2D array either, so how would I specify the width and height?
You can get extensive guidance from the OpenCV4Android SDK. Look into their available examples, specifically the basic camera sample (Tutorial 1 Basic - 0. Android Camera).
But, as it was in my case, for intensive image processing, this will get slower than acceptable for a real-time image processing application.
A good replacement for converting their onPreviewFrame byte array to a YuvImage:
YuvImage yuvImage = new YuvImage(frame, ImageFormat.NV21, width, height, null);
Create a rectangle the same size as the image.
Create a ByteArrayOutputStream and pass this, the rectangle and the compression value to compressToJpeg():
Rect imageSizeRectangle = new Rect(0, 0, width, height);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(imageSizeRectangle, 100, baos);
byte[] imageData = baos.toByteArray();
Bitmap previewBitmap = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);
Rendering these preview frames on a surface, and the best practices involved, is a whole new dimension. =)
This very old post has caught my attention now.
The API available in 2011 was much more limited. Today one can use SurfaceTexture (see example) to preview the camera stream after (some) manipulation.
This is not an easy task to achieve with the current Android tools/APIs. In general, real-time image processing is better done at the NDK level. Just to show black and white, though, you can still do it in Java. The byte array containing the frame data is in YUV format, where the Y plane comes first. So, if you take just the Y plane alone (the first width x height bytes), it already gives you the black-and-white image.
I did achieve this through extensive work and trials. You can view the app at google:
https://play.google.com/store/apps/details?id=com.nm.camerafx