I have a C function that I'm calling from Java using the Android NDK. Essentially it takes the camera data and converts it from YUV to RGB format. The problem is that I'm not sure what object type imageOut should be on the Java side, since in C it's simply declared as jobject. This is the snippet of code I have (unfortunately I have nothing else to go by):
JNIEXPORT void JNICALL Java_com_twothreetwo_zoomplus_ZoomPlus_yuvrgb(JNIEnv *env, jobject obj, jbyteArray imageIn, jint widthIn, jint heightIn, jobject imageOut, jint widthOut, jint heightOut)
{
    LOGI("width is %d; height is %d;", widthIn, heightIn);
    jbyte *cImageIn = (*env)->GetByteArrayElements(env, imageIn, NULL);
    jbyte *cImageOut = (jbyte *)(*env)->GetDirectBufferAddress(env, imageOut);
    unsigned int *rgbs = (unsigned int *)cImageOut;

    int half_widthIn = widthIn >> 1;
    //the end of the luminance data, counted in 16-bit units
    int lumEnd = (widthIn * heightIn) >> 1;
    //points to the next luminance value pair
    int lumPtr = 0;
    //points to the next chrominance value pair
    int chrPtr = lumEnd;
    //the end of the current luminance scanline
    int lineEnd = half_widthIn;
    //read the input two bytes at a time
    unsigned short *yuvs = (unsigned short *)cImageIn;
    int x, y;
    for (y = 0; y < heightIn; y++) {
        int yPosOut = (y * widthOut) >> 1;
        for (x = 0; x < half_widthIn; x++) {
            //read the luminance and chrominance values
            int Y1 = yuvs[lumPtr++];
            int Y2 = (Y1 >> 8) & 0xff;
            Y1 = Y1 & 0xff;
            int Cr = yuvs[chrPtr++];
            int Cb = ((Cr >> 8) & 0xff) - 128;
            Cr = (Cr & 0xff) - 128;
            int R, G, B;
            //generate first RGB components
            B = Y1 + ((454 * Cb) >> 8);
            if (B < 0) B = 0; else if (B > 255) B = 255;
            G = Y1 - ((88 * Cb + 183 * Cr) >> 8);
            if (G < 0) G = 0; else if (G > 255) G = 255;
            R = Y1 + ((359 * Cr) >> 8);
            if (R < 0) R = 0; else if (R > 255) R = 255;
            int val = ((R & 0xf8) << 8) | ((G & 0xfc) << 3) | (B >> 3);
            //generate second RGB components from the second luma sample
            B = Y2 + ((454 * Cb) >> 8);
            if (B < 0) B = 0; else if (B > 255) B = 255;
            G = Y2 - ((88 * Cb + 183 * Cr) >> 8);
            if (G < 0) G = 0; else if (G > 255) G = 255;
            R = Y2 + ((359 * Cr) >> 8);
            if (R < 0) R = 0; else if (R > 255) R = 255;
            rgbs[yPosOut + x] = val | ((((R & 0xf8) << 8) | ((G & 0xfc) << 3) | (B >> 3)) << 16);
        }
        //skip back to the start of the chrominance values when necessary
        chrPtr = lumEnd + ((lumPtr >> 1) / half_widthIn) * half_widthIn;
        lineEnd += half_widthIn;
    }
    (*env)->ReleaseByteArrayElements(env, imageIn, cImageIn, JNI_ABORT);
}
I'm calling the function from the onPreviewFrame callback:
public native void yuvrgb(byte[] yuvImageIn, int widthIn, int heightIn, Bitmap imageOut, int widthOut, int heightOut);

public void onPreviewFrame(byte[] data, Camera camera)
{
    yuvrgb(data, 480, 640, bitmapWip, 480, 640);
    cameraImageView.setImageBitmap(bitmapWip);
}
As you can see, I'm currently declaring imageOut as a Bitmap, which is where I think I'm going wrong, as I just guessed the type.
I don't get any errors; the app simply crashes instantly. Does anyone know what I'm doing wrong?
Your C code fetches cImageOut with GetDirectBufferAddress, at the top of your C function:
jbyte *cImageOut = (jbyte*)(*env)->GetDirectBufferAddress(env, imageOut);
GetDirectBufferAddress only works on a direct java.nio.Buffer. If you pass a Bitmap (or a plain byte array) it returns NULL, and writing through that pointer crashes the app. Declare imageOut as a direct ByteBuffer (created with ByteBuffer.allocateDirect) in your Java code, and after the native call copy the raw RGB565 pixels into your Bitmap with Bitmap.copyPixelsFromBuffer. Note that BitmapFactory.decodeByteArray is only for compressed image data such as JPEG or PNG, not raw pixels:
http://developer.android.com/reference/android/graphics/BitmapFactory.html#decodeByteArray%28byte[],%20int,%20int%29
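A minimal sketch of the corrected Java side, assuming bitmapWip is a 480x640 Bitmap created with Bitmap.Config.RGB_565 (the names and sizes below just mirror the question):

public native void yuvrgb(byte[] yuvImageIn, int widthIn, int heightIn, ByteBuffer imageOut, int widthOut, int heightOut);

// 2 bytes per RGB565 pixel; allocate the direct buffer once and reuse it per frame
private final ByteBuffer frameOut = ByteBuffer.allocateDirect(480 * 640 * 2);

public void onPreviewFrame(byte[] data, Camera camera)
{
    yuvrgb(data, 480, 640, frameOut, 480, 640);
    frameOut.rewind();
    bitmapWip.copyPixelsFromBuffer(frameOut); // raw RGB565 bytes -> Bitmap
    cameraImageView.setImageBitmap(bitmapWip);
}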
Related
I took over an Android project that already has a yuv2rgb method. Now I have to write the inverse method, rgb2yuv. Please help me, thank you!
public static int[] yuv2rgb(byte[] pYUV, int width, int height) {
    int[] pRGB = new int[width * height];
    int i, j, yp;
    int hfWidth = width >> 1;
    int size = width * height;
    int qtrSize = size >> 2;
    for (i = 0, yp = 0; i < height; i++) {
        int uvp = size + (i >> 1) * hfWidth, u = 0, v = 0;
        for (j = 0; j < width; j++, yp++) {
            int y = (0xff & pYUV[yp]) - 16;
            if ((j & 1) == 0) {
                u = (0xff & pYUV[uvp + (j >> 1)]) - 128;
                v = (0xff & pYUV[uvp + qtrSize + (j >> 1)]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            pRGB[i * width + j] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
    return pRGB;
}
My colleague helped me find a way, and it works!
public static byte[] colorconvertRGB_IYUV_I420(int[] aRGB, int width, int height) {
    final int frameSize = width * height;
    final int chromasize = frameSize / 4;
    int yIndex = 0;
    int uIndex = frameSize;
    int vIndex = frameSize + chromasize;
    byte[] yuv = new byte[width * height * 3 / 2];
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            //a = (aRGB[index] & 0xff000000) >> 24; //not using it right now
            R = (aRGB[index] & 0xff0000) >> 16;
            G = (aRGB[index] & 0xff00) >> 8;
            B = (aRGB[index] & 0xff) >> 0;
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            yuv[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv[uIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                yuv[vIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
            }
            index++;
        }
    }
    return yuv;
}
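For reference, a hedged sketch of feeding a Bitmap through this converter (src is an illustrative name; width and height should be even for I420):

int w = src.getWidth(), h = src.getHeight();
int[] argb = new int[w * h];
src.getPixels(argb, 0, w, 0, 0, w, h);   // bulk-read the ARGB pixels
byte[] i420 = colorconvertRGB_IYUV_I420(argb, w, h);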
In onPreviewFrame using android.hardware.Camera, if I use the default NV21 format, I can use YuvImage to compress to JPEG, and that works great. If I change the format with setPreviewFormat(ImageFormat.YV12), it no longer works, because YuvImage does not support the YV12 format. I've found only one solution somewhere, and it converts a Bitmap to YV12; I want to do the opposite and get a JPEG out of these bytes. Is there a library to do this?
If YUV420 to JPEG is what you're looking for, then:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    decodeYUV420(pixels, data, previewSize.width, previewSize.height);
    mBitmap = Bitmap.createBitmap(pixels, previewSize.width, previewSize.height, Config.ARGB_8888);
    mBitmap.compress(CompressFormat.JPEG, 25, out);
    .......
where the decodeYUV420 method goes as follows:
public void decodeYUV420(int[] rgb, byte[] yuv420, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420[uvp++]) - 128;
                u = (0xff & yuv420[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
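To round off the part elided above, a hedged sketch of what might follow the compress call (the file path is purely illustrative):

byte[] jpeg = out.toByteArray();
try (FileOutputStream fos = new FileOutputStream("/sdcard/frame.jpg")) { // hypothetical path
    fos.write(jpeg); // write the compressed JPEG bytes to disk
} catch (IOException e) {
    e.printStackTrace();
}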
I've created a similar project to this on GitHub; check here, and the code implementation here. And yes, it works!
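Alternatively, since YuvImage does accept NV21, one hedged option is to repack the YV12 planes into NV21 and compress directly; the sketch below assumes even dimensions and no row padding (real YV12 buffers may use aligned strides):

// YV12 is planar: Y plane, then V, then U (each chroma plane a quarter the size).
// NV21 is semi-planar: Y plane, then interleaved V/U byte pairs.
public static byte[] yv12ToNv21(byte[] yv12, int width, int height) {
    final int frameSize = width * height;
    final int quarter = frameSize / 4;
    byte[] nv21 = new byte[frameSize * 3 / 2];
    System.arraycopy(yv12, 0, nv21, 0, frameSize); // Y plane copies straight over
    for (int i = 0; i < quarter; i++) {
        nv21[frameSize + 2 * i] = yv12[frameSize + i];                // V
        nv21[frameSize + 2 * i + 1] = yv12[frameSize + quarter + i];  // U
    }
    return nv21;
}

Then compress as usual:

YuvImage img = new YuvImage(yv12ToNv21(data, w, h), ImageFormat.NV21, w, h, null);
img.compressToJpeg(new Rect(0, 0, w, h), 90, out);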
I'm no expert with image formats; I'm testing the camera's frame-rate performance.
When I convert the data from YUV to RGB, which RGB format does the result use: RGB565 or ARGB8888?
And why does createBitmap take so long? Does it add information to the raw data?
This is the RGB conversion code:
public int[] YUV_NV21_TO_RGB(byte[] yuv, int width, int height) {
    final int frameSize = width * height;
    int[] argb = new int[width * height];
    final int ii = 0;
    final int ij = 0;
    final int di = +1;
    final int dj = +1;
    int a = 0;
    for (int i = 0, ci = ii; i < height; ++i, ci += di) {
        for (int j = 0, cj = ij; j < width; ++j, cj += dj) {
            int y = (0xff & ((int) yuv[ci * width + cj]));
            int v = (0xff & ((int) yuv[frameSize + (ci >> 1) * width + (cj & ~1) + 0]));
            int u = (0xff & ((int) yuv[frameSize + (ci >> 1) * width + (cj & ~1) + 1]));
            y = y < 16 ? 16 : y;
            int a0 = 1192 * (y - 16);
            int a1 = 1634 * (v - 128);
            int a2 = 832 * (v - 128);
            int a3 = 400 * (u - 128);
            int a4 = 2066 * (u - 128);
            int r = (a0 + a1) >> 10;
            int g = (a0 - a2 - a3) >> 10;
            int b = (a0 + a4) >> 10;
            r = r < 0 ? 0 : (r > 255 ? 255 : r);
            g = g < 0 ? 0 : (g > 255 ? 255 : g);
            b = b < 0 ? 0 : (b > 255 ? 255 : b);
            argb[a++] = 0xff000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}
The odd thing is that if I use createBitmap with the RGB_565 option, it is at least 10 ms faster than with ARGB_8888.
If RGB_565 is a sort of compression (a loss of data), shouldn't it be the opposite, with createBitmap faster for ARGB_8888 than for RGB_565?
I am using a convolution matrix in my Android app to emboss images.
I have defined the class for it as:
public class ConvolutionMatrix {
    public static final int SIZE = 3;

    public double[][] Matrix;
    public double Factor = 1;
    public double Offset = 1;

    public ConvolutionMatrix(int size) {
        Matrix = new double[size][size];
    }

    public void setAll(double value) {
        for (int x = 0; x < SIZE; ++x) {
            for (int y = 0; y < SIZE; ++y) {
                Matrix[x][y] = value;
            }
        }
    }

    public void applyConfig(double[][] config) {
        for (int x = 0; x < SIZE; ++x) {
            for (int y = 0; y < SIZE; ++y) {
                Matrix[x][y] = config[x][y];
            }
        }
    }

    public static Bitmap computeConvolution3x3(Bitmap src, ConvolutionMatrix matrix) {
        int width = src.getWidth();
        int height = src.getHeight();
        Bitmap result = Bitmap.createBitmap(width, height, src.getConfig());

        int A, R, G, B;
        int sumR, sumG, sumB;
        int[][] pixels = new int[SIZE][SIZE];

        for (int y = 0; y < height - 2; ++y) {
            for (int x = 0; x < width - 2; ++x) {
                // get pixel matrix
                for (int i = 0; i < SIZE; ++i) {
                    for (int j = 0; j < SIZE; ++j) {
                        pixels[i][j] = src.getPixel(x + i, y + j);
                    }
                }
                // get alpha of center pixel
                A = Color.alpha(pixels[1][1]);
                // init color sum
                sumR = sumG = sumB = 0;
                // get sum of RGB on matrix
                for (int i = 0; i < SIZE; ++i) {
                    for (int j = 0; j < SIZE; ++j) {
                        sumR += (Color.red(pixels[i][j]) * matrix.Matrix[i][j]);
                        sumG += (Color.green(pixels[i][j]) * matrix.Matrix[i][j]);
                        sumB += (Color.blue(pixels[i][j]) * matrix.Matrix[i][j]);
                    }
                }
                // get final Red
                R = (int) (sumR / matrix.Factor + matrix.Offset);
                if (R < 0) { R = 0; } else if (R > 255) { R = 255; }
                // get final Green
                G = (int) (sumG / matrix.Factor + matrix.Offset);
                if (G < 0) { G = 0; } else if (G > 255) { G = 255; }
                // get final Blue
                B = (int) (sumB / matrix.Factor + matrix.Offset);
                if (B < 0) { B = 0; } else if (B > 255) { B = 255; }
                // apply new pixel
                result.setPixel(x + 1, y + 1, Color.argb(A, R, G, B));
            }
        }
        // final image
        return result;
    }
}
It gives me the proper result, but it takes too much time to compute. Is there any way to make the calculation faster and more efficient?
The core of your slowdown is:
// apply new pixel
result.setPixel(x + 1, y + 1, Color.argb(A, R, G, B));
Setting pixels one by one is a fair amount of work on every iteration; those calls aren't free in the Bitmap class. It's far better to call getPixels() once, work with the raw pixels in the array, and put them back with a single setPixels() call when you're done.
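A minimal sketch of that pattern, assuming a mutable Bitmap named src:

int w = src.getWidth(), h = src.getHeight();
int[] px = new int[w * h];
src.getPixels(px, 0, w, 0, 0, w, h);   // one bulk read
// ... run the convolution over px in place ...
src.setPixels(px, 0, w, 0, 0, w, h);   // one bulk write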
You could also hardcode the emboss: with that kernel, most of the time you're grabbing a bunch of data and multiplying it by zero, so you can easily cheat and grab just the three pixels you care about.
private static int hardEmboss(int[] pixels, int stride, int index, int[][] matrix, int parts) {
    //ignoring the matrix
    int p1 = pixels[index];
    int p2 = pixels[index + stride + 1];
    int p3 = pixels[index + stride + stride + 2];
    int r = 2 * ((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF) - ((p3 >> 16) & 0xFF);
    int g = 2 * ((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF) - ((p3 >> 8) & 0xFF);
    int b = 2 * ((p1) & 0xFF) - ((p2) & 0xFF) - ((p3) & 0xFF);
    return 0xFF000000 | ((crimp(r) << 16) | (crimp(g) << 8) | (crimp(b)));
}
Assuming your emboss kernel is:
int[][] matrix = new int[][] {
    { 2,  0,  0},
    { 0, -1,  0},
    { 0,  0, -1}
};
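A hedged sketch of wiring hardEmboss over a bulk pixel array (bmp, w, and h are illustrative names; the two-pixel margin keeps the kernel inside the array):

int[] px = new int[w * h];
bmp.getPixels(px, 0, w, 0, 0, w, h);
int[] out = new int[w * h];
for (int y = 0; y < h - 2; y++) {
    for (int x = 0; x < w - 2; x++) {
        out[y * w + x] = hardEmboss(px, w, y * w + x, matrix, 0);
    }
}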
Also, unbeknownst to almost everybody, there's a critical flaw in the standard convolution algorithm: returning the result pixel to the center of the kernel is what forces you into a second buffer, since an in-place write would clobber data the kernel still needs. If you return it to the upper-left corner instead, you can process all the data in the same memory footprint, going left to right, top to bottom, in a scanline operation.
public static int crimp(int v) {
    return (v > 255) ? 255 : ((v < 0) ? 0 : v);
}

public static void applyEmboss(int[] pixels, int stride) {
    //stride should be equal to width here, and pixels.length == bitmap.height * bitmap.width;
    int pos = 0;
    try {
        while (true) {
            int p1 = pixels[pos];
            int p2 = pixels[pos + stride + 1];
            int p3 = pixels[pos + stride + stride + 2];
            int r = 2 * ((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF) - ((p3 >> 16) & 0xFF);
            int g = 2 * ((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF) - ((p3 >> 8) & 0xFF);
            int b = 2 * ((p1) & 0xFF) - ((p2) & 0xFF) - ((p3) & 0xFF);
            pixels[pos++] = 0xFF000000 | ((crimp(r) << 16) | (crimp(g) << 8) | (crimp(b)));
        }
    } catch (ArrayIndexOutOfBoundsException e) {
        // running off the end of the array terminates the loop
    }
}
The disadvantage is that the image appears to shift left and up by one pixel; though if you do another scanline pass backwards, you could shift it back. All the garbage ends up as two rows along the right and bottom edges (some of it filled with embossed nonsense, because I didn't slow the loop down to check for those places). So if you want to cut that off when you write the pixels back, reduce the height and width by 2 and leave the stride at the original width. Since all the good data sits at the top of the array, you don't have to fiddle with the offset at all.
Also, you could just use RenderScript.
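For reference, a hedged sketch of the RenderScript route using the built-in 3x3 convolution intrinsic (ctx, src, and result are illustrative names; result is a Bitmap with the same dimensions and config as src):

RenderScript rs = RenderScript.create(ctx);
Allocation in = Allocation.createFromBitmap(rs, src);
Allocation out = Allocation.createTyped(rs, in.getType());
ScriptIntrinsicConvolve3x3 conv = ScriptIntrinsicConvolve3x3.create(rs, Element.U8_4(rs));
conv.setCoefficients(new float[] { 2, 0, 0, 0, -1, 0, 0, 0, -1 }); // the emboss kernel above
conv.setInput(in);
conv.forEach(out);   // run the convolution on the GPU/CPU as RenderScript sees fit
out.copyTo(result);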
Take a look at Convolution Demo: an app which compares a convolution implementation written in Java against one in C++.
Needless to say, the C++ variant runs more than 10x faster.
So if you want speed, implement it either via the NDK or via shaders.
How can I read the preview frames from the actual camera source code? I am trying to modify the source code of the Android camera application to read the preview frames. Can anyone help me with this?
You shouldn't have to change the actual source code for reading frames. Implementing the Camera.PreviewCallback interface should suffice. It returns raw data from the camera.
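A minimal sketch of registering that callback (assuming the old android.hardware.Camera API, as used throughout this page):

Camera camera = Camera.open();
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        // data holds one raw frame in the preview format (NV21 by default)
    }
});
camera.startPreview();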
Before messing with the source of the camera app, try the example here:
CameraPreview
Then implement Camera.PreviewCallback.
The captured frame arrives in YUV420SP, so you have to convert it to RGB in order to build a colored bitmap and show it on screen.
Like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    int imageWidth = camera.getParameters().getPreviewSize().width;
    int imageHeight = camera.getParameters().getPreviewSize().height;
    int[] RGBData = new int[imageWidth * imageHeight];
    byte[] mYUVData = new byte[data.length];
    System.arraycopy(data, 0, mYUVData, 0, data.length);
    decodeYUV420SP(RGBData, mYUVData, imageWidth, imageHeight);
    Bitmap bitmap = Bitmap.createBitmap(imageWidth, imageHeight, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(RGBData, 0, imageWidth, 0, 0, imageWidth, imageHeight);
}
static public void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}