Using the raw camera byte[] array for augmented reality - Android

I'm developing an Augmented Reality app, so I need to capture the camera preview, add visual effects to it, and display it on screen. I would like to do this using the onPreviewFrame method of PreviewCallback. This gives me a byte[] variable containing raw image data (YUV420 encoded) to work with.
Even though I have searched for a solution for many hours, I cannot find a way to convert this byte[] variable to any image format I can work with or even draw on the screen.
Preferably, I would convert the byte[] data to some RGB format that can be used both for computations and drawing.
Is there a proper way to do this?

I stumbled upon the same issue a few months back when I had to do some
edge detection on the camera frames. This works perfectly for me.
Try it out.
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height)
{
    camera.setPreviewCallback(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters parameters = camera.getParameters();
            int width = parameters.getPreviewSize().width;
            int height = parameters.getPreviewSize().height;
            ByteArrayOutputStream outstr = new ByteArrayOutputStream();
            Rect rect = new Rect(0, 0, width, height);
            YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
            yuvimage.compressToJpeg(rect, 100, outstr);
            Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
        }
    });
}
You can now use the bitmap for all your processing purposes. Grab the pixel you are
interested in and you can comfortably do your RGB or HSV work on it.
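For example, a minimal sketch of reading one pixel back out of that bitmap (bmp is the bitmap from the callback above; x and y are whatever pixel you care about):
int pixel = bmp.getPixel(x, y);
int r = Color.red(pixel);
int g = Color.green(pixel);
int b = Color.blue(pixel);
float[] hsv = new float[3];
Color.RGBToHSV(r, g, b, hsv); // if you would rather work in HSV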

Imran Nazar has written a two-part tutorial on augmented reality which you may find useful. Although he eventually uses the NDK, the first part and most of the second part detail what you need using just Java.
I believe Bitmap.createBitmap is the method you need.
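If you would rather skip the JPEG round-trip shown above, a minimal sketch of the commonly used decodeYUV420SP routine fed into Bitmap.createBitmap looks like this (data, width and height are the values from onPreviewFrame and getPreviewSize(); the conversion assumes the standard NV21 layout):
public static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
// inside onPreviewFrame:
int[] rgb = new int[width * height];
decodeYUV420SP(rgb, data, width, height);
Bitmap bmp = Bitmap.createBitmap(rgb, width, height, Bitmap.Config.ARGB_8888);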

Related

How to use CameraSource to detect a custom visual code which needs color information

I want to use CameraSource to detect some visual code (which is not any kind of barcode). I implement Detector and its detect(Frame frame) method. However, when I call frame.getBitmap() in the detect method, it always returns null. I know Frame has another method, getGrayscaleImageData(), but detecting the code needs color information. It seems that CameraSource only passes the grayscale image data to its underlying detector.
So, is there a way to detect this code by CameraSource? Or should I abandon CameraSource and find another way?
In the current release, CameraSource actually does return the full color information for the image from getGrayscaleImageData. The leading bytes of what is returned are the grayscale layer of the image (the Y channel), but the bytes beyond that hold the color information. The format details depend upon what image format you specified in setting up the CameraSource (the default is NV21 format).
Found it :D
The code below returns a colored bitmap quite fast, but if it's the front camera you may have to flip/rotate it according to the device.
public SparseArray detect(Frame frame) {
    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();
    // despite its name, getGrayscaleImageData() holds the full NV21 frame
    byte[] bytes = frame.getGrayscaleImageData().array();
    YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, byteArrayOutputStream);
    byte[] jpegArray = byteArrayOutputStream.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length); // this bitmap is colored
    return null;
}
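If the front camera frames do come out mirrored or sideways, a rough post-processing sketch (the angle is device-dependent, so treat these values as placeholders):
Matrix m = new Matrix();
m.postScale(-1, 1); // undo the front camera mirroring
m.postRotate(270);  // device-dependent; in real code derive this from the camera/display rotation
Bitmap upright = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), m, true);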

Android: Display camera still capture on TextureView Quickly?

I'm using the Android Camera2 API to take still capture images and displaying them on a TextureView (for later image editing).
I have been scouring the web for a faster method to:
Decode Camera Image buffer into bitmap
Scale bitmap to size of screen and rotate it (since it comes in rotated 90 degrees)
Display it on a texture view
Currently I've managed an execution time of around 0.8s for the above, but this is too long for my particular application.
A few solutions I've considered were:
Simply taking a single frame of the preview (timing-wise this was fast, except that I had no control over auto flash)
Trying to get instead a YUV_420_888 formatted image and then somehow turning that into a bitmap (there's a lot of stuff online that might help but my initial attempts bore no fruit as of yet)
Simply sending a reduced quality image from the camera itself, but from what I've read it looks like the JPEG_QUALITY parameter in CaptureRequests does nothing! I've also tried setting BitmapFactory options inSampleSize but without any noticeable improvement in speed.
Finding some way to directly manipulate the jpeg byte array from image buffer to transform it and then converting to bitmap, all in one shot
For your reference, the following code takes the image buffer, decodes and transforms it, and displays it on the textureview:
Canvas canvas = mTextureView.lockCanvas();
// obtain image bytes (jpeg) from image in camera fragment
// mFragment.getImage() returns Image object
ByteBuffer buffer = mFragment.getImage().getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
// decoding process takes several hundred milliseconds
Bitmap src = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
mFragment.getImage().close();
// resize horizontally oriented images
if (src.getWidth() > src.getHeight()) {
    // transformation matrix that scales and rotates
    Matrix matrix = new Matrix();
    if (CameraLayout.getFace() == CameraCharacteristics.LENS_FACING_FRONT) {
        matrix.setScale(-1, 1);
    }
    matrix.postRotate(90);
    matrix.postScale(((float) canvas.getWidth()) / src.getHeight(),
            ((float) canvas.getHeight()) / src.getWidth());
    // bitmap creation process takes another several hundred millis!
    Bitmap resizedBitmap = Bitmap.createBitmap(
            src, 0, 0, src.getWidth(), src.getHeight(), matrix, true);
    canvas.drawBitmap(resizedBitmap, 0, 0, null);
} else {
    canvas.drawBitmap(src, 0, 0, null);
}
// post canvas to texture view
mTextureView.unlockCanvasAndPost(canvas);
This is my first question on stack overflow, so I apologize if I haven't quite followed common conventions.
Thanks in advance for any feedback.
If all you're doing with this is drawing it into a View, and you won't be saving it, have you tried simply requesting JPEGs at a lower resolution than the maximum, one that matches the screen dimensions better?
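A rough sketch of what that could look like with Camera2 (characteristics, screenWidth and the size-selection loop are illustrative, not taken from your code):
// pick a JPEG output size close to the screen instead of the sensor maximum
StreamConfigurationMap map =
        characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size[] jpegSizes = map.getOutputSizes(ImageFormat.JPEG);
Size target = jpegSizes[0];
for (Size s : jpegSizes) {
    if (s.getWidth() >= screenWidth && s.getWidth() < target.getWidth()) {
        target = s; // smallest advertised size that still covers the screen width
    }
}
ImageReader reader = ImageReader.newInstance(
        target.getWidth(), target.getHeight(), ImageFormat.JPEG, 2);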
Alternatively, if you need the full-size image, JPEG images typically contain a thumbnail - extracting that and displaying it is a lot faster than processing the full-resolution image.
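A sketch of the thumbnail route, assuming you already have the JPEG byte array in hand (this uses the androidx ExifInterface, or the framework class on API 24+; jpegBytes is just a placeholder name):
ExifInterface exif = new ExifInterface(new ByteArrayInputStream(jpegBytes)); // constructor throws IOException
byte[] thumb = exif.getThumbnail(); // null if the capture carries no embedded thumbnail
Bitmap quickPreview = (thumb != null)
        ? BitmapFactory.decodeByteArray(thumb, 0, thumb.length)
        : null;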
In terms of your current code, if possible, you should avoid having to create a second Bitmap with the scaling. Could you instead place an ImageView on top of your TextureView when you want to display the image, and then rely on its built-in scaling?
Or use Canvas.concat(Matrix) instead of creating the intermediate Bitmap?
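For the Canvas.concat route, something along these lines avoids the intermediate Bitmap entirely. Note the extra translate: unlike Bitmap.createBitmap, the canvas does not re-origin the rotated image for you (a sketch based on your snippet, front-camera flip omitted):
Matrix matrix = new Matrix();
matrix.postRotate(90); // image now lies at negative x
matrix.postScale(((float) canvas.getWidth()) / src.getHeight(),
        ((float) canvas.getHeight()) / src.getWidth());
matrix.postTranslate(canvas.getWidth(), 0); // shift it back into view
canvas.concat(matrix);
canvas.drawBitmap(src, 0, 0, null); // drawn scaled and rotated, no second Bitmap allocated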

OpenCV for Android face recognition shows "mat not continuous" error

I am trying to load images into a Mat in OpenCV for Android for face recognition.
The images are in JPEG format of size 640 x 480.
I am using Eclipse and this code is in a .cpp file.
This is my code.
while (getline(file, line)) {
    stringstream liness(line);
    getline(liness, path, ',');
    getline(liness, classlabel);
    if(!path.empty() && !classlabel.empty()) {
        images.push_back(imread(path, 0));
        labels.push_back(atoi(classlabel.c_str()));
    }
}
However, I am getting an error saying that "The matrix is not continuous, thus its number of rows cannot be changed in function cv::Mat cv:Mat:reshape(int,int)const"
I tried using the solution in OpenCV 2.0 C++ API using imshow: returns unhandled exception and "bad-flag"
but it's in Visual Studio.
Any help would be greatly appreciated.
Conversion of the image from the camera preview.
The camera preview data is converted to grayscale.
Mat matRgb = new Mat();
Imgproc.cvtColor(matYuv, matRgb, Imgproc.COLOR_YUV420sp2RGB, 4);
try {
    Mat matGray = new Mat();
    Imgproc.cvtColor(matRgb, matGray, Imgproc.COLOR_RGB2GRAY, 0);
    resultBitmap = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(matGray, resultBitmap);
} catch (Exception e) {
    e.printStackTrace();
}
Saving image.
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmFace[0].compress(Bitmap.CompressFormat.JPEG, 100, stream);
byte[] flippedImageByteArray = stream.toByteArray();
The 'Mat not continuous' error is not at all related to the link you have there.
If you're trying Fisherfaces or Eigenfaces, the images have to get 'flattened' to a single row for the PCA.
This is not possible if the data has 'gaps' or was padded to make the row size a multiple of 4; some image editors do that to your data.
Also, IMHO your images are by far too large (PCA works best when the matrix is almost square, i.e. the row size (num_pixels) is similar to the column size (num_images)).
So my proposal would be to resize the train images (and also the test images later) to something like 100x100 when loading them; this will also give you a continuous data block.
(And again, avoid JPEGs for anything image-processing related, too many compression artefacts!)
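On the Java side of the same project, the equivalent resize is a one-liner (a sketch; 100x100 is just the ballpark suggested above, and Size here is org.opencv.core.Size):
// Imgproc.resize writes a new, continuous Mat, which also side-steps the reshape error
Mat small = new Mat();
Imgproc.resize(matGray, small, new Size(100, 100));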

Android YuvImage class format incorrect?

It's well documented that Android's camera preview data is returned in NV21 (YUV 420). Android 2.2 added a YuvImage class for decoding the data. The problem I've encountered is that the YuvImage class data appears corrupt or incorrect. I used the Renderscript sample app called HelloCompute which transforms a Bitmap into a mono-chrome Bitmap. I used two methods for decoding the preview data into a Bitmap and passing it as input to the Renderscript:
Method 1 - Android YuvImage Class:
YuvImage preview = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream mJpegOutput = new ByteArrayOutputStream(data.length);
preview.compressToJpeg(new Rect(0, 0, width, height), 100, mJpegOutput);
mBitmapIn = BitmapFactory.decodeByteArray( mJpegOutput.toByteArray(), 0, mJpegOutput.size());
// pass mBitmapIn to RS
Method 2 - Posted Decoder Method:
As posted over here by David Pearlman
// work around for Yuv format
mBitmapIn = Bitmap.createBitmap(
        ImageUtil.decodeYUV420SP(data, width, height),
        width,
        height,
        Bitmap.Config.ARGB_8888);
// pass mBitmapIn to RS
When the image is processed by the Renderscript and displayed Method 1 is very grainy and not mono-chrome, while Method 2 produces the expected output, a mono-chrome image of the preview frame. Am I doing something wrong or is the YuvImage class not usable? I'm testing this on a Xoom running 3.1.
Furthermore, I displayed the bitmaps produced by both methods on screen prior to passing them to the RS. The bitmap from Method 1 has noticeable differences in lighting (I suspected this was due to the JPEG compression), while Method 2's bitmap is identical to the Preview Frame.
There is no justification for using JPEG encode/decode just to convert a YUV image to a grayscale bitmap (I believe you want grayscale, not a monochrome b/w bitmap, after all). You can find many code samples that produce the result you need; you may use this one: Converting preview frame to bitmap.

Processing Android camera frames in real time

I'm trying to create an Android application that will process camera frames in real time. To start off with, I just want to display a grayscale version of what the camera sees. I've managed to extract the appropriate values from the byte array in the onPreviewFrame method. Below is just a snippet of my code:
byte[] pic;
int pic_size;
Bitmap picframe;
public void onPreviewFrame(byte[] frame, Camera c)
{
    pic_size = mCamera.getParameters().getPreviewSize().height * mCamera.getParameters().getPreviewSize().width;
    pic = new byte[pic_size];
    for(int i = 0; i < pic_size; i++)
    {
        pic[i] = frame[i];
    }
    picframe = BitmapFactory.decodeByteArray(pic, 0, pic_size);
}
The first [width*height] values of the byte[] frame array are the luminance (greyscale) values. Once I've extracted them, how do I display them on the screen as an image? It's not a 2D array either, so how would I specify the width and height?
You can get extensive guidance from the OpenCV4Android SDK. Look into their available examples, specifically 'Tutorial 1 Basic - 0. Android Camera'.
But, as it was in my case, for intensive image processing this will get slower than acceptable for a real-time image-processing application.
A good replacement for their conversion is to turn onPreviewFrame's byte array into a YuvImage:
YuvImage yuvImage = new YuvImage(frame, ImageFormat.NV21, width, height, null);
Create a rectangle the same size as the image.
Rect imageSizeRectangle = new Rect(0, 0, width, height);
Create a ByteArrayOutputStream and pass this, the rectangle and the compression value to compressToJpeg():
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(imageSizeRectangle, 100, baos);
byte[] imageData = baos.toByteArray();
Bitmap previewBitmap = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);
Rendering these preview frames on a surface, and the best practices involved, is a whole other dimension. =)
This very old post has caught my attention now.
The API available in '11 was much more limited. Today one can use SurfaceTexture (see example) to preview the camera stream after (some) manipulations.
This is not an easy task to achieve with the current Android tools/APIs. In general, real-time image processing is better done at the NDK level. To just show the black and white, though, you can still do it in Java. The byte array containing the frame data is in YUV format, where the Y plane comes first. So, if you take just the Y plane alone (the first width x height bytes), it already gives you the black and white.
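A minimal sketch of that Y-plane trick, assuming frame, width and height come from onPreviewFrame and getPreviewSize():
int[] pixels = new int[width * height];
for (int i = 0; i < width * height; i++) {
    int y = frame[i] & 0xff; // luminance, 0..255
    pixels[i] = 0xff000000 | (y << 16) | (y << 8) | y; // grey ARGB pixel
}
Bitmap greyFrame = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);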
I did achieve this through extensive work and trials. You can view the app at google:
https://play.google.com/store/apps/details?id=com.nm.camerafx
