Processing Android camera frames in real time

I'm trying to create an Android application that will process camera frames in real time. To start off with, I just want to display a grayscale version of what the camera sees. I've managed to extract the appropriate values from the byte array in the onPreviewFrame method. Below is just a snippet of my code:
byte[] pic;
int pic_size;
Bitmap picframe;

public void onPreviewFrame(byte[] frame, Camera c)
{
    pic_size = mCamera.getParameters().getPreviewSize().height * mCamera.getParameters().getPreviewSize().width;
    pic = new byte[pic_size];
    for (int i = 0; i < pic_size; i++)
    {
        pic[i] = frame[i];
    }
    picframe = BitmapFactory.decodeByteArray(pic, 0, pic_size);
}
The first [width*height] values of the byte[] frame array are the luminance (greyscale) values. Once I've extracted them, how do I display them on the screen as an image? It's not a 2D array either, so how would I specify the width and height?

You can get extensive guidance from the OpenCV4Android SDK. Look into their available examples, specifically the "Tutorial 1 Basic - 0. Android Camera" sample.
But, as it was in my case, for intensive image processing this will become slower than acceptable for a real-time image-processing application.
A good alternative is to convert the byte array from onPreviewFrame into a YuvImage:
YuvImage yuvImage = new YuvImage(frame, ImageFormat.NV21, width, height, null);
Create a rectangle the same size as the image:
Rect imageSizeRectangle = new Rect(0, 0, width, height);
Create a ByteArrayOutputStream and pass this, the rectangle and the compression value to compressToJpeg():
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(imageSizeRectangle, 100, baos);
byte[] imageData = baos.toByteArray();
Bitmap previewBitmap = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);
Rendering these preview frames on a surface, and the best practices involved, is a whole other topic. =)

This very old post has caught my attention now.
The API available in '11 was much more limited. Today one can use SurfaceTexture (see example) to preview the camera stream after (some) manipulation.
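A minimal sketch of that idea (not from the original answer; oesTextureId is assumed to be a GL_TEXTURE_EXTERNAL_OES texture id created elsewhere):
// Sketch: route camera frames into a SurfaceTexture (e.g. backed by an OpenGL
// texture) so they can be manipulated before being drawn to the screen.
SurfaceTexture surfaceTexture = new SurfaceTexture(oesTextureId);
Camera camera = Camera.open();
try {
    camera.setPreviewTexture(surfaceTexture);
} catch (IOException e) {
    e.printStackTrace();
}
camera.startPreview();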

This is not an easy task to achieve with the current Android tools/APIs. In general, real-time image processing is better done at the NDK level. To just show black and white, you can still do it in Java. The byte array containing the frame data is in YUV format, where the Y plane comes first. So, if you take just the Y plane (the first width x height bytes), it already gives you the black-and-white image.
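A minimal sketch of that idea, assuming width and height are the preview dimensions and frame is the NV21 byte array from onPreviewFrame:
// Sketch: turn the Y plane of an NV21 frame into a grayscale ARGB bitmap.
int[] pixels = new int[width * height];
for (int i = 0; i < width * height; i++) {
    int y = frame[i] & 0xFF;                           // luminance value, 0..255
    pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y; // opaque grey pixel
}
Bitmap grey = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);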
I did achieve this through extensive work and trial and error. You can see the app on Google Play:
https://play.google.com/store/apps/details?id=com.nm.camerafx

Related

Android camera2 API - Display processed frame in real time

I'm trying to create an app that processes camera images in real time and displays them on screen. I'm using the camera2 API. I have created a native library to process the images using OpenCV.
So far I have managed to set up an ImageReader that receives images in YUV_420_888 format, like this:
mImageReader = ImageReader.newInstance(
        mPreviewSize.getWidth(),
        mPreviewSize.getHeight(),
        ImageFormat.YUV_420_888,
        4);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mImageReaderHandler);
From there I'm able to get the image planes (Y, U and V), get their ByteBuffer objects and pass them to my native function. This happens in the mOnImageAvailableListener:
Image image = reader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes();
Image.Plane YPlane = planes[0];
Image.Plane UPlane = planes[1];
Image.Plane VPlane = planes[2];
ByteBuffer YPlaneBuffer = YPlane.getBuffer();
ByteBuffer UPlaneBuffer = UPlane.getBuffer();
ByteBuffer VPlaneBuffer = VPlane.getBuffer();
myNativeMethod(YPlaneBuffer, UPlaneBuffer, VPlaneBuffer, w, h);
image.close();
On the native side I'm able to get the data pointers from the buffers, create a cv::Mat from the data and perform the image processing.
The next step would be to show the processed output on screen, but I'm unsure how to do that. Any help would be greatly appreciated.
Generally speaking, you need to send the processed image data to an Android view.
The most performant option is to get an android.view.Surface object to draw into - you can get one from a SurfaceView (via SurfaceHolder) or a TextureView (via SurfaceTexture). Then you can pass that Surface through JNI to your native code (a sketch of the Java side is shown below, after these notes), and there use the NDK methods:
ANativeWindow_fromSurface to get an ANativeWindow
The various ANativeWindow methods to set the output buffer size and format, and then draw your processed data into it.
Use ANativeWindow_setBuffersGeometry() to configure the output size and format, then ANativeWindow_lock() to get an ANativeWindow_Buffer. Write your image data to ANativeWindow_Buffer.bits, and then send the buffer off with ANativeWindow_unlockAndPost().
Generally, you should probably stick to RGBA_8888 as the most compatible format; technically only it and two other RGB variants are officially supported. So if your processed image is in YUV, you'd need to convert it to RGBA first.
You'll also need to ensure that the aspect ratio of your output view matches that of the dimensions you set; by default, Android's Views will just scale those internal buffers to the size of the output View, possibly stretching it in the process.
You can also set the format to one of Android's internal YUV formats, but this is not guaranteed to work!
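On the Java side, obtaining the Surface and handing it to native code could look roughly like this minimal sketch (setNativeSurface() is a hypothetical native method you would declare yourself, not an Android API; the native side would wrap the Surface with ANativeWindow_fromSurface()):
surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // Hand the Surface to native code, where ANativeWindow_fromSurface() can wrap it.
        setNativeSurface(holder.getSurface());
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        setNativeSurface(null); // let the native side release its ANativeWindow
    }
});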
I've tried the ANativeWindow approach, but it's a pain to set up and I haven't managed to do it correctly. In the end I just gave up and imported the OpenCV4Android library, which simplifies things by converting camera data to an RGBA Mat behind the scenes.

How to use CameraSource to detect a custom visual code which needs color information

I want to use CameraSource to detect some visual code (which is not any kind of barcode). I implemented Detector and its detect(Frame frame) method. However, when I call frame.getBitmap() in the detect method, it always returns null. I know Frame has another method, getGrayscaleImageData(), but detecting the code needs color information. It seems that CameraSource only passes the grayscale image data to its underlying detector.
So, is there a way to detect this code with CameraSource? Or should I abandon CameraSource and find another way?
In the current release, CameraSource actually does return the full color information for the image from getGrayscaleImageData. The leading bytes of what is returned are the grayscale layer of the image (the Y channel), but the bytes beyond that have the color information. The format details depend upon what image format you specified in setting up the CameraSource (the default is NV21).
Found it :D
This code returns a colored bitmap quickly, but if it's the front camera you may have to flip/rotate it depending on the device.
public SparseArray detect(Frame frame) {
    byte[] bytes = frame.getGrayscaleImageData().array();
    YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, frame.getMetadata().getWidth(), frame.getMetadata().getHeight(), null);
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, frame.getMetadata().getWidth(), frame.getMetadata().getHeight()), 100, byteArrayOutputStream);
    byte[] jpegArray = byteArrayOutputStream.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length); // this bitmap is colored
    return null;
}

Cocos2dx: Garbled bitmap image from Android

Note: Using the latest version of cocos2d-x 2.x
Hi there. I'm trying to pass a bitmap through JNI and convert it to a sprite to display, but the image created using CCTexture2D's initWithData is completely garbled. I've looked at other forum posts here, but even after following those as closely as possible, nothing works. Note that we are not going to use a path for the image (and it isn't an option).
There are a few issues that I will break down:
What kind of data is necessary?
The initWithData method has no documentation as to what kind of data it even takes: a byte array? A pixel array? It seems other people have used a byte array and got it working, so that is what we have gone with. This is the code to get the byte array, but are we supposed to use PNG or JPEG format?
ByteArrayOutputStream bAOS = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG,100, bAOS);
byte[] bytes = bAOS.toByteArray();
Getting the data to cocos
We are using some other convenience methods to write JNI messages as JSON data which gets converted to a CCDictionary. The byte array gets turned into a CCArray of a bunch of ints, which we can then loop through and assign... to what kind of array? "byte" isn't a valid primitive in C++. Online I've seen an int array:
CCArray *bytes = (CCArray*)dictionary->objectForKey("bytes");
int byteArray[bytes->count()];
for (int i = 0; i < bytes->count(); i++) {
    CCNumber *v = (CCNumber*)bytes->objectAtIndex(i);
    byteArray[i] = v->getIntValue();
}
and I've tried a char array as well, but neither worked.
CCTexture2D? CCImage?
OK, so now that we have our supposed byte array, we can render the image. Do we use CCTexture2D or CCImage?
For CCTexture2D (we know the image is 20x20 px):
CCTexture2D *texture = new CCTexture2D();
texture->initWithData(byteArray, kCCTexture2DPixelFormat_RGBA8888, 20, 20, CCSizeMake(20, 20));
The image is garbled.
For CCImage... what is the length? I've seen the length calculated as image width * image height, or as the actual length of the byte array. Your guess is as good as mine, as I have never gotten a CCImage to work.
CCImage *image = new CCImage();
image->initWithImageData(byteArray, length, CCImage::kFmtPng, 20, 20, 8);
Once we have a CCImage, we can set it to a texture using CCTexture2D's initWithImage function.
Now that we have the Texture, making a sprite with it should be simple:
CCSprite *sprite = CCSprite::createWithTexture(texture);
Which works, insofar as a sprite is created. The image is garbled, though.
So, what exactly is wrong here?
How do we get the data from the bitmap?
How do we make the data something that cocos can render?
CCTexture2D or CCImage? What are the parameters?
For kicks, here is the image we are testing:
And here is the byte array as Android reports it:
[-119,80,78,71,13,10,26,10,0,0,0,13,73,72,68,82,0,0,0,40,0,0,0,40,8,2,0,0,0,3,-100,47,58,0,0,0,3,115,66,73,84,8,8,8,-37,-31,79,-32,0,0,0,-71,73,68,65,84,88,-123,-19,-44,-79,13,-125,48,16,-123,-31,-97,8,-118,-48,96,37,-108,40,-70,34,3,-48,-91,-124,81,24,-115,77,48,27,-64,6,-98,32,50,19,-112,5,48,-123,69,20,41,-36,-55,-35,-45,-13,-41,-100,-114,21,-30,-34,64,-45,48,-60,-74,-41,11,63,26,-123,21,86,88,97,-123,21,86,-8,-60,112,106,-101,-56,-90,-61,11,83,48,-10,6,39,44,38,-108,39,-51,16,9,11,69,-117,8,-127,-81,-89,-102,-66,99,-82,67,-11,116,108,35,97,88,-124,121,-81,109,-4,78,120,-66,-27,82,88,-31,-81,77,26,-35,-12,20,19,66,-32,-128,92,51,-71,21,46,47,-19,-15,-80,67,122,58,-61,-10,109,-86,114,-9,122,-40,-22,-19,-114,-121,23,-52,76,13,-19,102,-6,-52,-20,-67,112,73,57,-122,-22,-25,91,46,-123,21,-2,63,-8,3,120,-72,71,35,-19,75,10,-28,0,0,0,0,73,69,78,68,-82,66,96,-126]
And here is the result (the image is not 20 x 20, though; probably scaled by cocos to fit density):
Any help would be greatly appreciated

OpenCV for Android face recognition shows "mat not continuous" error

I am trying to load images into a Mat in OpenCV for Android for face recognition.
The images are in JPEG format, of size 640 x 480.
I am using Eclipse and this code is in a .cpp file.
This is my code:
while (getline(file, line)) {
    stringstream liness(line);
    getline(liness, path, ',');
    getline(liness, classlabel);
    if (!path.empty() && !classlabel.empty()) {
        images.push_back(imread(path, 0));
        labels.push_back(atoi(classlabel.c_str()));
    }
}
However, I am getting an error saying "The matrix is not continuous, thus its number of rows cannot be changed in function cv::Mat cv::Mat::reshape(int, int) const".
I tried using the solution in "OpenCV 2.0 C++ API using imshow: returns unhandled exception and bad-flag", but it's for Visual Studio.
Any help would be greatly appreciated.
Conversion of the image from the camera preview.
The image is converted to grayscale from the camera preview data:
Mat matRgb = new Mat();
Imgproc.cvtColor(matYuv, matRgb, Imgproc.COLOR_YUV420sp2RGB, 4);
try {
    Mat matGray = new Mat();
    Imgproc.cvtColor(matRgb, matGray, Imgproc.COLOR_RGB2GRAY, 0);
    resultBitmap = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(matGray, resultBitmap);
Saving the image:
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmFace[0].compress(Bitmap.CompressFormat.JPEG, 100, stream);
byte[] flippedImageByteArray = stream.toByteArray();
The 'Mat not continuous' error is not at all related to the link you have there.
If you're trying Fisherfaces or Eigenfaces, the images have to get 'flattened' to a single row for the PCA.
This is not possible if the data has 'gaps' or was padded to make the row size a multiple of 4; some image editors do that to your data.
Also, IMHO your images are by far too large (PCA works best when the data matrix is almost square, i.e. the row size (num_pixels) is similar to the column size (num_images)).
So my proposal would be to resize the train images (and also the test images later) to something like 100x100 when loading them; this will also give you a continuous data block (see the sketch below).
(And again, avoid JPEGs for anything image-processing related; too many compression artefacts!)
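On the Java side, that resize could look like this minimal sketch (assuming the matGray from the conversion snippet above; Size is org.opencv.core.Size, and 100x100 simply follows the suggestion above):
// Sketch: resize the grayscale Mat to the fixed size used for training;
// this also yields a continuous data block.
Mat small = new Mat();
Imgproc.resize(matGray, small, new Size(100, 100));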

Using the raw camera byte[] array for augmented reality

I'm developing an Augmented Reality app, so I need to capture the camera preview, add visual effects to it, and display it on screen. I would like to do this using the onPreviewFrame method of PreviewCallback. This gives me a byte[] variable containing raw image data (YUV420 encoded) to work with.
Even though I searched for a solution for many hours, I cannot find a way to convert this byte[] variable to any image format I can work with or even draw on the screen.
Preferably, I would convert the byte[] data to some RGB format that can be used both for computations and drawing.
Is there a proper way to do this?
I stumbled upon the same issue a few months back when I had to do some edge detection on the camera frames. This works perfectly for me. Try it out.
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height)
{
    camera.setPreviewCallback(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters parameters = camera.getParameters();
            int width = parameters.getPreviewSize().width;
            int height = parameters.getPreviewSize().height;
            ByteArrayOutputStream outstr = new ByteArrayOutputStream();
            Rect rect = new Rect(0, 0, width, height);
            YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
            yuvimage.compressToJpeg(rect, 100, outstr);
            Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
        }
    });
}
You can use the bitmap for all your processing purposes now. Get the pixels you're interested in and you can comfortably do your RGB or HSV work on them.
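For example, a minimal sketch (x and y are just placeholder coordinates, and bmp is the bitmap decoded above):
// Sketch: read one pixel from the decoded bitmap and split it into RGB / HSV.
int pixel = bmp.getPixel(x, y);
int r = Color.red(pixel);
int g = Color.green(pixel);
int b = Color.blue(pixel);
float[] hsv = new float[3];
Color.colorToHSV(pixel, hsv); // hsv[0] = hue, hsv[1] = saturation, hsv[2] = value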
Imran Nazar has written a two-part tutorial on augmented reality which you may find useful. Although he eventually uses the NDK, the first part and most of the second part detail what you need using just Java.
I believe Bitmap.createBitmap is the method you need.
