I am working on TensorFlow image stylization. The problem I am facing is that it resizes my actual image; I want to apply the style to the whole image at its original resolution. For example, if my image resolution is 1280x960, it should stay 1280x960 after I apply the style.
I am not using the default INPUT_SIZE value of 256; with the default value it works fine. Here is the code I am using to prevent the image from being resized.
private TensorFlowInferenceInterface inferenceInterface;

private void applyStyle() {
    inferenceInterface = new TensorFlowInferenceInterface(mActivity.getAssets(), "bossK_float.pb");
    Bitmap bitmap = getBitmapFromPath();
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true); // "matrix" is defined elsewhere in the class
    INPUT_SIZE_WIDTH = bitmap.getWidth();
    INPUT_SIZE_HEIGHT = bitmap.getHeight();
    mStyledBitmap = stylizeImage(bitmap);
}
private Bitmap stylizeImage(Bitmap bitmap) {
    Bitmap scaledBitmap = scaleBitmap(bitmap, INPUT_SIZE_WIDTH, INPUT_SIZE_HEIGHT);
    intValues = new int[INPUT_SIZE_WIDTH * INPUT_SIZE_HEIGHT];
    floatValues = new float[INPUT_SIZE_WIDTH * INPUT_SIZE_HEIGHT * 3];
    scaledBitmap.getPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0, scaledBitmap.getWidth(), scaledBitmap.getHeight());
    scaledBitmap = scaledBitmap.copy(Bitmap.Config.ARGB_8888, true);

    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        floatValues[i * 3 + 0] = ((val >> 16) & 0xFF) * 1.0f;
        floatValues[i * 3 + 1] = ((val >> 8) & 0xFF) * 1.0f;
        floatValues[i * 3 + 2] = (val & 0xFF) * 1.0f;
    }

    Trace.beginSection("feed");
    inferenceInterface.feed(INPUT_NAME, floatValues, INPUT_SIZE_WIDTH, INPUT_SIZE_HEIGHT, 3);
    Trace.endSection();

    Trace.beginSection("run");
    inferenceInterface.run(new String[] {OUTPUT_NAME});
    Trace.endSection();

    Trace.beginSection("fetch");
    inferenceInterface.fetch(OUTPUT_NAME, floatValues);
    Trace.endSection();

    for (int i = 0; i < intValues.length; ++i) {
        intValues[i] = 0xFF000000
                | (((int) (floatValues[i * 3 + 0])) << 16)
                | (((int) (floatValues[i * 3 + 1])) << 8)
                | ((int) (floatValues[i * 3 + 2]));
    }

    scaledBitmap.setPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0, scaledBitmap.getWidth(), scaledBitmap.getHeight());
    return scaledBitmap;
}
private Bitmap scaleBitmap(Bitmap origin, int newWidth, int newHeight) {
    if (origin == null) {
        return null;
    }
    int height = origin.getHeight();
    int width = origin.getWidth();
    float scaleWidth = ((float) newWidth) / width;
    float scaleHeight = ((float) newHeight) / height;
    Matrix matrix = new Matrix();
    matrix.postScale(scaleWidth, scaleHeight);
    return Bitmap.createBitmap(origin, 0, 0, width, height, matrix, false);
}
When I change my INPUT_SIZE values to INPUT_SIZE_WIDTH and INPUT_SIZE_HEIGHT, my application stops without an error message. When I debug, it gets stuck on this piece of code and kills my app:
Trace.beginSection("run");
inferenceInterface.run(new String[]{OUTPUT_NAME});
Trace.endSection();
Please let me know how I can style the whole image using TensorFlow.
Thank You!
Your code stops there because of the difference in sizes; you are probably getting an ArrayIndexOutOfBoundsException.
The model is trained to accept images of one particular size, so whenever you stylize, the image has to be reduced to that size.
That input size is baked in when the pb/lite/tflite file is created: the training data is converted to the same size you specify at model creation. Stylizing at the model's native size and scaling the result back up will not affect it to a large extent. You can give that a try.
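As a sketch of that workaround (assuming the model's trained input size is the default 256, and reusing the stylizeImage/scaleBitmap helpers from your code): stylize a downscaled copy, then scale the styled output back up to the original resolution.
// Sketch: run the network at its trained input size, then upscale the result.
// Assumes INPUT_SIZE = 256 (the size the model was trained with) and the
// stylizeImage()/scaleBitmap() helpers from the question.
private Bitmap applyStyleFullResolution(Bitmap original) {
    int originalWidth = original.getWidth();   // e.g. 1280
    int originalHeight = original.getHeight(); // e.g. 960
    INPUT_SIZE_WIDTH = INPUT_SIZE;  // feed the size the model expects
    INPUT_SIZE_HEIGHT = INPUT_SIZE;
    Bitmap styledSmall = stylizeImage(original); // stylizeImage() scales down internally
    // Scale the styled result back to the original resolution.
    return scaleBitmap(styledSmall, originalWidth, originalHeight);
}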
Related
I am doing image processing which requires converting an RGB bitmap image to the YCbCr color space. I retrieve the RGB values for each pixel and apply the conversion matrix to them.
public void convertRGB(View v) {
    if (imageLoaded) {
        int width = inputBM.getWidth();
        int height = inputBM.getHeight();
        int pixel;
        int alpha, red, green, blue;
        int Y, Cb, Cr;
        outputBM = Bitmap.createBitmap(width, height, inputBM.getConfig());
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                pixel = inputBM.getPixel(x, y);
                alpha = Color.alpha(pixel);
                red = Color.red(pixel);
                green = Color.green(pixel);
                blue = Color.blue(pixel);
                Y = (int) (0.299 * red + 0.587 * green + 0.114 * blue);
                Cb = (int) (128 - 0.169 * red - 0.331 * green + 0.500 * blue);
                Cr = (int) (128 + 0.500 * red - 0.419 * green - 0.081 * blue);
                // Note: setPixel interprets this int as ARGB, not YCbCr.
                int p = (Y << 24) | (Cb << 16) | (Cr << 8);
                outputBM.setPixel(x, y, p);
            }
        }
        comImgView.setImageBitmap(outputBM);
    }
}
The problem is that the output color is different from the original. I tried to use BufferedImage, but it does not work on Android.
Original:
After Conversion:
May I know the correct way to handle a YCbCr image in Android Java?
Try using the code below:
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(your_yuv_data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 50, out);
byte[] imageBytes = out.toByteArray();
Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
iv.setImageBitmap(image);
Check the documentation for a detailed description of the YuvImage class.
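If the goal is to process YCbCr values rather than display them, note that Bitmap pixels are always interpreted as ARGB, so packing Y/Cb/Cr into the int passed to setPixel is what distorts the colors. A minimal sketch that keeps the three planes in plain arrays, using the same conversion constants as the question:
// Sketch: keep YCbCr in separate arrays; Bitmaps can only display ARGB,
// so a raw YCbCr triple must not be passed to setPixel directly.
int width = inputBM.getWidth();
int height = inputBM.getHeight();
int[] Y = new int[width * height];
int[] Cb = new int[width * height];
int[] Cr = new int[width * height];
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        int pixel = inputBM.getPixel(x, y);
        int r = Color.red(pixel), g = Color.green(pixel), b = Color.blue(pixel);
        int i = y * width + x;
        Y[i]  = (int) (0.299 * r + 0.587 * g + 0.114 * b);
        Cb[i] = (int) (128 - 0.169 * r - 0.331 * g + 0.500 * b);
        Cr[i] = (int) (128 + 0.500 * r - 0.419 * g - 0.081 * b);
    }
}
// Process Y/Cb/Cr here; convert back to ARGB only when displaying.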
I've spent a lot of time trying to figure this out but can't see what I am doing wrong.
This is my original image
Image recaptured 5 times
Recapturing the image multiple times clearly shows that something is not right. Capturing it once is just about OK, but twice is enough to clearly see the difference.
I found these similar issues on stackoverflow:
Bitmap quality using glReadPixels with frame buffer objects
Extract pixels from TextureSurface using glReadPixels resulting in bad image Bitmap
(sorry, I'm limited in how many links I can add)
Unfortunately none of the proposed suggestions/solutions fixed my issue.
This is my code:
private Bitmap createSnapshot(int x, int y, int w, int h) {
    int[] bitmapBuffer = new int[w * h];
    int[] bitmapSource = new int[w * h];
    IntBuffer intBuffer = IntBuffer.wrap(bitmapBuffer);
    intBuffer.position(0);
    try {
        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, intBuffer);
        int offset1, offset2;
        for (int i = 0; i < h; i++) {
            offset1 = i * w;
            offset2 = (h - i - 1) * w;
            for (int j = 0; j < w; j++) {
                // Swap the R and B channels and flip the image vertically.
                int texturePixel = bitmapBuffer[offset1 + j];
                int blue = (texturePixel >> 16) & 0xff;
                int red = (texturePixel << 16) & 0x00ff0000;
                int pixel = (texturePixel & 0xff00ff00) | red | blue;
                bitmapSource[offset2 + j] = pixel;
            }
        }
    } catch (GLException e) {
        return null;
    }
    return Bitmap.createBitmap(bitmapSource, w, h, Bitmap.Config.ARGB_8888);
}
I'm using OpenGL ES 2. For bitmap compression I am using PNG; I tested with JPEG (quality 100) and the result is the same, only slightly worse.
There is also a slight yellowish tint added to the image.
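As an aside, the manual channel swizzle can be avoided entirely: ARGB_8888 bitmaps are laid out as R,G,B,A bytes in memory, which matches GL_RGBA, so the read buffer can be copied straight into the Bitmap and then flipped. A sketch (assuming GLES20; this alone may not explain the recapture degradation):
// Sketch: read GL pixels directly into a Bitmap, avoiding manual swizzling.
// ARGB_8888 bitmaps store pixels as R,G,B,A bytes, matching GL_RGBA.
private Bitmap createSnapshotDirect(int x, int y, int w, int h) {
    ByteBuffer buffer = ByteBuffer.allocateDirect(w * h * 4);
    buffer.order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(x, y, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);
    buffer.rewind();

    Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buffer);

    // OpenGL rows are bottom-up; flip vertically for a top-down Bitmap.
    Matrix flip = new Matrix();
    flip.postScale(1, -1, w / 2f, h / 2f);
    return Bitmap.createBitmap(bitmap, 0, 0, w, h, flip, false);
}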
This is not a straightforward problem, please read through!
I want to manipulate a JPEG file and save it again as JPEG. The problem is that even without manipulation there's significant (visible) quality loss.
Question: what option or API am I missing to be able to re-compress JPEG without quality loss (I know it's not exactly possible, but I think what I describe below is not an acceptable level of artifacts, especially with quality=100).
Control
I load it as a Bitmap from the file:
BitmapFactory.Options options = new BitmapFactory.Options();
// explicitly state everything so the configuration is clear
options.inPreferredConfig = Config.ARGB_8888;
options.inDither = false; // shouldn't be used anyway since 8888 can store HQ pixels
options.inScaled = false;
options.inPremultiplied = false; // no alpha, but disable explicitly
options.inSampleSize = 1; // make sure pixels are 1:1
options.inPreferQualityOverSpeed = true; // doesn't make a difference
// I'm loading the highest possible quality without any scaling/sizing/manipulation
Bitmap bitmap = BitmapFactory.decodeFile("/sdcard/image.jpg", options);
Now, to have a control image to compare to, let's save the plain Bitmap bytes as PNG:
bitmap.compress(PNG, 100/*ignored*/, new FileOutputStream("/sdcard/image.png"));
I compared this to the original JPEG image on my computer and there's no visual difference.
I also saved the raw int[] from getPixels and loaded it as a raw ARGB file on my computer: there's no visual difference to the original JPEG, nor the PNG saved from Bitmap.
I checked the Bitmap's dimensions and config, they match the source image and the input options: it's decoded as ARGB_8888 as expected.
The above two control checks prove that the pixels in the in-memory Bitmap are correct.
Problem
I want to have JPEG files as a result, so the above PNG and RAW approaches won't work; let's try to save as JPEG 100% first:
// 100% still expected lossy, but not this amount of artifacts
bitmap.compress(JPEG, 100, new FileOutputStream("/sdcard/image.jpg"));
I'm not sure the measure is actually percent, but it's easier to read and discuss that way, so I'm going to use it.
I'm aware that JPEG with the quality of 100% is still lossy, but it shouldn't be so visually lossy that it's noticeable from afar. Here's a comparison of two 100% compressions of the same source.
Open them in separate tabs and click back and forth between them to see what I mean. The difference images were made using Gimp: original as bottom layer, re-compressed middle layer with "Grain extract" mode, top layer full white with "Value" mode to enhance badness.
The below images are uploaded to Imgur, which also compresses the files, but since all of the images are compressed the same way, the original unwanted artifacts remain visible exactly as I see them when opening my original files.
Original [560k]:
Imgur's difference to original (not relevant to problem, just to show that it's not causing any extra artifacts when uploading the images):
IrfanView 100% [728k] (visually identical to original):
IrfanView 100%'s difference to original (barely anything)
Android 100% [942k]:
Android 100%'s difference to original (tinting, banding, smearing)
In IrfanView I have to go below 50% [50k] to see remotely similar effects. At 70% [100k] in IrfanView there's no noticeable difference, but the size is a ninth of Android's.
Background
I created an app that takes a picture with the Camera API; the image comes as a byte[] that is an encoded JPEG blob. I saved that file via the OutputStream.write(byte[]) method; that was my original source file. decodeByteArray(data, 0, data.length, options) decodes the same pixels as reading from a File (tested with Bitmap.sameAs), so it's irrelevant to the issue.
I was using my Samsung Galaxy S4 with Android 4.4.2 to test things out.
Edit: while investigating further I also tried Android 6.0 and N preview emulators and they reproduce the same issue.
After some investigation I found the culprit: Skia's YCbCr conversion. Repro, code for investigation and solutions can be found at TWiStErRob/AndroidJPEG.
Discovery
After not getting a positive response to this question (nor from http://b.android.com/206128) I started digging deeper. I found numerous half-informed SO answers which helped me tremendously in discovering bits and pieces. One such answer was https://stackoverflow.com/a/13055615/253468, which made me aware of YuvImage, which converts a YUV NV21 byte array into a JPEG-compressed byte array:
YuvImage yuv = new YuvImage(yuvData, ImageFormat.NV21, width, height, null);
yuv.compressToJpeg(new Rect(0, 0, width, height), 100, jpeg);
There's a lot of freedom going into creating the YUV data, with varying constants and precision. From my question it's clear that Android uses an incorrect algorithm.
While playing around with the algorithms and constants I found online, I always got a bad image: either the brightness changed or it had the same banding issues as in the question.
Digging deeper
YuvImage is actually not used when calling Bitmap.compress, here's the stack for Bitmap.compress:
libjpeg/jpeg_write_scanlines(jcapistd.c:77)
skia/rgb2yuv_32(SkImageDecoder_libjpeg.cpp:913)
skia/writer(=Write_32_YUV).write(SkImageDecoder_libjpeg.cpp:961)
[WE_CONVERT_TO_YUV is unconditionally defined]
SkJPEGImageEncoder::onEncode(SkImageDecoder_libjpeg.cpp:1046)
SkImageEncoder::encodeStream(SkImageEncoder.cpp:15)
Bitmap_compress(Bitmap.cpp:383)
Bitmap.nativeCompress(Bitmap.java:1573)
Bitmap.compress(Bitmap.java:984)
app.saveBitmapAsJPEG()
and the stack for using YuvImage
libjpeg/jpeg_write_raw_data(jcapistd.c:120)
YuvToJpegEncoder::compress(YuvToJpegEncoder.cpp:71)
YuvToJpegEncoder::encode(YuvToJpegEncoder.cpp:24)
YuvImage_compressToJpeg(YuvToJpegEncoder.cpp:219)
YuvImage.nativeCompressToJpeg(YuvImage.java:141)
YuvImage.compressToJpeg(YuvImage.java:123)
app.saveNV21AsJPEG()
By using the constants in rgb2yuv_32 from the Bitmap.compress flow I was able to recreate the same banding effect using YuvImage. Not an achievement, just confirmation that it's indeed the YUV conversion that is messed up. I double-checked that the problem is not in YuvImage calling libjpeg: converting the Bitmap's ARGB to YUV and back to RGB, then dumping the resulting pixel blob as a raw image, showed that the banding was already there.
While doing this I realized that the NV21/YUV420SP layout is lossy: it samples the color information only every 4th pixel, but it keeps the value (brightness) of every pixel. This means some color info is lost, but most of the information our eyes pick up is in the brightness anyway. Take a look at the example on Wikipedia: the Cb and Cr channels make barely recognisable images, so lossy sampling on them doesn't matter much.
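To make the layout concrete, a small sketch of how NV21 maps pixel coordinates to buffer offsets (assuming even width and height): a full-resolution Y plane followed by an interleaved, quarter-resolution VU plane.
// NV21 layout, assuming even width/height:
//   [ Y Y Y ... (width*height bytes) ][ V U V U ... (width*height/2 bytes) ]
// Every pixel has its own Y; each 2x2 block of pixels shares one V and one U.
int yOffset(int x, int y, int width) {
    return y * width + x;
}
int vOffset(int x, int y, int width, int height) {
    return width * height + (y / 2) * width + (x / 2) * 2; // V comes first
}
int uOffset(int x, int y, int width, int height) {
    return vOffset(x, y, width, height) + 1;                // U follows V
}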
Solution
So, at this point I knew that libjpeg does the right conversion when it is passed the right raw data. This is when I set up the NDK and integrated the latest libjpeg from http://www.ijg.org. I was able to confirm that passing the RGB data from the Bitmap's pixel array indeed yields the expected result. I like to avoid native components when not absolutely necessary, so instead of going for a native library that encodes a Bitmap, I found a neat workaround. I've essentially taken the rgb_ycc_convert function from jcolor.c and rewritten it in Java using the skeleton from https://stackoverflow.com/a/13055615/253468. The code below is optimized for readability, not speed; some constants were removed for brevity, you can find them in the libjpeg code or my example project.
private static final int JSAMPLE_SIZE = 255 + 1;
private static final int CENTERJSAMPLE = 128;
private static final int SCALEBITS = 16;
private static final int CBCR_OFFSET = CENTERJSAMPLE << SCALEBITS;
private static final int ONE_HALF = 1 << (SCALEBITS - 1);
private static final int[] rgb_ycc_tab = new int[TABLE_SIZE];

static { // rgb_ycc_start
    for (int i = 0; i < JSAMPLE_SIZE; i++) { // one entry per possible sample value (0..255)
        rgb_ycc_tab[R_Y_OFFSET + i] = FIX(0.299) * i;
        rgb_ycc_tab[G_Y_OFFSET + i] = FIX(0.587) * i;
        rgb_ycc_tab[B_Y_OFFSET + i] = FIX(0.114) * i + ONE_HALF;
        rgb_ycc_tab[R_CB_OFFSET + i] = -FIX(0.168735892) * i;
        rgb_ycc_tab[G_CB_OFFSET + i] = -FIX(0.331264108) * i;
        rgb_ycc_tab[B_CB_OFFSET + i] = FIX(0.5) * i + CBCR_OFFSET + ONE_HALF - 1;
        rgb_ycc_tab[R_CR_OFFSET + i] = FIX(0.5) * i + CBCR_OFFSET + ONE_HALF - 1;
        rgb_ycc_tab[G_CR_OFFSET + i] = -FIX(0.418687589) * i;
        rgb_ycc_tab[B_CR_OFFSET + i] = -FIX(0.081312411) * i;
    }
}
static void rgb_ycc_convert(int[] argb, int width, int height, byte[] ycc) {
    int[] tab = LibJPEG.rgb_ycc_tab;
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int index = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int r = (argb[index] & 0x00ff0000) >> 16;
            int g = (argb[index] & 0x0000ff00) >> 8;
            int b = (argb[index] & 0x000000ff) >> 0;
            byte Y = (byte) ((tab[r + R_Y_OFFSET] + tab[g + G_Y_OFFSET] + tab[b + B_Y_OFFSET]) >> SCALEBITS);
            byte Cb = (byte) ((tab[r + R_CB_OFFSET] + tab[g + G_CB_OFFSET] + tab[b + B_CB_OFFSET]) >> SCALEBITS);
            byte Cr = (byte) ((tab[r + R_CR_OFFSET] + tab[g + G_CR_OFFSET] + tab[b + B_CR_OFFSET]) >> SCALEBITS);
            ycc[yIndex++] = Y;
            if (y % 2 == 0 && index % 2 == 0) {
                ycc[uvIndex++] = Cr; // NV21 interleaves V (Cr) first...
                ycc[uvIndex++] = Cb; // ...then U (Cb)
            }
            index++;
        }
    }
}
static byte[] compress(Bitmap bitmap) {
    int w = bitmap.getWidth();
    int h = bitmap.getHeight();
    int[] argb = new int[w * h];
    bitmap.getPixels(argb, 0, w, 0, 0, w, h);
    byte[] ycc = new byte[w * h * 3 / 2];
    rgb_ycc_convert(argb, w, h, ycc);
    argb = null; // let GC do its job
    ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
    YuvImage yuvImage = new YuvImage(ycc, ImageFormat.NV21, w, h, null);
    yuvImage.compressToJpeg(new Rect(0, 0, w, h), quality, jpeg); // "quality" is a constant elided for brevity
    return jpeg.toByteArray();
}
The magic key seems to be ONE_HALF - 1; the rest looks an awful lot like the math in Skia. That's a good direction for future investigation, but for me the above is sufficiently simple to be a good workaround for Android's built-in weirdness, albeit slower. Note that this solution uses the NV21 layout, which loses 3/4 of the color info (from Cr/Cb), but this loss is much smaller than the errors created by Skia's math. Also note that YuvImage doesn't support odd-sized images; for more info see NV21 format and odd image dimensions.
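For completeness, a usage sketch of the workaround above (assuming the elided "quality" constant is defined somewhere, e.g. private static final int quality = 100; the file paths are illustrative):
// Usage sketch: encode a decoded Bitmap through the Java YCC conversion.
Bitmap bitmap = BitmapFactory.decodeFile("/sdcard/image.jpg");
byte[] jpegBytes = compress(bitmap); // uses the elided "quality" constant
try (FileOutputStream out = new FileOutputStream("/sdcard/image-ycc.jpg")) {
    out.write(jpegBytes);
} catch (IOException e) {
    e.printStackTrace();
}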
Please use the following method:
public String convertBitmaptoSmallerSizetoString(String image) {
    File imageFile = new File(image);
    Bitmap bitmap = BitmapFactory.decodeFile(imageFile.getAbsolutePath());
    int nh = (int) (bitmap.getHeight() * (512.0 / bitmap.getWidth()));
    Bitmap scaled = Bitmap.createScaledBitmap(bitmap, 512, nh, true);
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    scaled.compress(Bitmap.CompressFormat.PNG, 90, stream); // PNG is lossless; the quality argument is ignored
    byte[] imageByte = stream.toByteArray();
    String img_str = Base64.encodeToString(imageByte, Base64.NO_WRAP);
    return img_str;
}
Below is my Code:
public static String compressImage(Context context, String imagePath) {
    final float maxHeight = 1024.0f;
    final float maxWidth = 1024.0f;
    Bitmap scaledBitmap = null;

    // First pass: decode only the bounds to compute the target size.
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    Bitmap bmp = BitmapFactory.decodeFile(imagePath, options);
    int actualHeight = options.outHeight;
    int actualWidth = options.outWidth;

    float imgRatio = (float) actualWidth / (float) actualHeight;
    float maxRatio = maxWidth / maxHeight;
    if (actualHeight > maxHeight || actualWidth > maxWidth) {
        if (imgRatio < maxRatio) {
            imgRatio = maxHeight / actualHeight;
            actualWidth = (int) (imgRatio * actualWidth);
            actualHeight = (int) maxHeight;
        } else if (imgRatio > maxRatio) {
            imgRatio = maxWidth / actualWidth;
            actualHeight = (int) (imgRatio * actualHeight);
            actualWidth = (int) maxWidth;
        } else {
            actualHeight = (int) maxHeight;
            actualWidth = (int) maxWidth;
        }
    }

    // Second pass: decode subsampled pixels close to the target size.
    options.inSampleSize = calculateInSampleSize(options, actualWidth, actualHeight);
    options.inJustDecodeBounds = false;
    options.inDither = false;
    options.inPurgeable = true;
    options.inInputShareable = true;
    options.inTempStorage = new byte[16 * 1024];
    try {
        bmp = BitmapFactory.decodeFile(imagePath, options);
    } catch (OutOfMemoryError exception) {
        exception.printStackTrace();
    }
    try {
        scaledBitmap = Bitmap.createBitmap(actualWidth, actualHeight, Bitmap.Config.RGB_565);
    } catch (OutOfMemoryError exception) {
        exception.printStackTrace();
    }

    // Scale the decoded bitmap onto the exactly-sized target canvas.
    float ratioX = actualWidth / (float) options.outWidth;
    float ratioY = actualHeight / (float) options.outHeight;
    float middleX = actualWidth / 2.0f;
    float middleY = actualHeight / 2.0f;
    Matrix scaleMatrix = new Matrix();
    scaleMatrix.setScale(ratioX, ratioY, middleX, middleY);
    assert scaledBitmap != null;
    Canvas canvas = new Canvas(scaledBitmap);
    canvas.setMatrix(scaleMatrix);
    canvas.drawBitmap(bmp, middleX - bmp.getWidth() / 2f, middleY - bmp.getHeight() / 2f,
            new Paint(Paint.FILTER_BITMAP_FLAG));
    if (bmp != null) {
        bmp.recycle();
    }

    // Honor the EXIF orientation of the source image.
    ExifInterface exif;
    try {
        exif = new ExifInterface(imagePath);
        int orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, 0);
        Matrix matrix = new Matrix();
        if (orientation == 6) {
            matrix.postRotate(90);
        } else if (orientation == 3) {
            matrix.postRotate(180);
        } else if (orientation == 8) {
            matrix.postRotate(270);
        }
        scaledBitmap = Bitmap.createBitmap(scaledBitmap, 0, 0,
                scaledBitmap.getWidth(), scaledBitmap.getHeight(), matrix, true);
    } catch (IOException e) {
        e.printStackTrace();
    }

    FileOutputStream out = null;
    String filepath = getFilename(context);
    try {
        out = new FileOutputStream(filepath);
        scaledBitmap.compress(Bitmap.CompressFormat.JPEG, 80, out);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
    return filepath;
}
public static int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight) {
    final int height = options.outHeight;
    final int width = options.outWidth;
    int inSampleSize = 1;
    if (height > reqHeight || width > reqWidth) {
        final int heightRatio = Math.round((float) height / (float) reqHeight);
        final int widthRatio = Math.round((float) width / (float) reqWidth);
        inSampleSize = heightRatio < widthRatio ? heightRatio : widthRatio;
    }
    final float totalPixels = width * height;
    final float totalReqPixelsCap = reqWidth * reqHeight * 2;
    while (totalPixels / (inSampleSize * inSampleSize) > totalReqPixelsCap) {
        inSampleSize++;
    }
    return inSampleSize;
}
public static String getFilename(Context context) {
    File mediaStorageDir = new File(Environment.getExternalStorageDirectory()
            + "/Android/data/"
            + context.getApplicationContext().getPackageName()
            + "/Files/Compressed");
    if (!mediaStorageDir.exists()) {
        mediaStorageDir.mkdirs();
    }
    String mImageName = "IMG_" + String.valueOf(System.currentTimeMillis()) + ".jpg";
    return (mediaStorageDir.getAbsolutePath() + "/" + mImageName);
}
I have a camera (the "deprecated" Camera API) and a PreviewCallback in which I get frames. I use that to take pictures (not takePicture or PictureCallback, because I want speed over quality) and save them as JPEG; code snippet below:
@Override
public synchronized void onPreviewFrame(byte[] frame, Camera camera) {
    YuvImage yuv = new YuvImage(frame, previewFormat, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 50, out);
    out.toByteArray(); // this is my JPEG
    // ... save this JPEG
}
I want to draw text, a timestamp, into the frame. I know how to do this by converting to a Bitmap, drawing the text with a Canvas, and converting back again, but this method is relatively slow (though that's not the biggest issue), and I also need the frame byte array for other things, e.g. putting it into a video. I would like to know if there is a method or library to write text directly into that frame. I don't use the preview (or rather, I use a dummy preview); I'm asking how to change the frame byte array itself. I bet I'd need to do some byte-level juggling to make it work. The frame is in a standard(?) format (YUV420/NV21).
Edit:
I managed to get a working function (getNV21). Of course it is far from efficient, as it creates a new Bitmap every frame and draws the text onto it, but at least I have something I can work with that writes directly into the YUV image. Still, answers would be appreciated.
private Bitmap drawText(String text, int rotation) {
    Bitmap bitmap = Bitmap.createBitmap(maxWidth, maxHeight, Bitmap.Config.ARGB_8888);
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    Canvas canvas = new Canvas(bitmap);
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setColor(Color.rgb(255, 255, 255));
    paint.setStrokeWidth(height / 36);
    paint.setTextSize(height / 36);
    paint.setShadowLayer(5f, 0f, 0f, Color.BLACK);
    paint.setTypeface(Typeface.MONOSPACE);
    if (rotation == 0 || rotation == 180) {
        canvas.rotate(rotation, width / 2, height / 2);
    } else {
        canvas.translate(Math.abs(width - height), 0);
        int w = width / 2 - Math.abs(width / 2 - height / 2);
        int h = height / 2;
        canvas.rotate(-rotation, w, h);
    }
    canvas.drawText(text, 10, height - 10, paint);
    return bitmap;
}
byte[] getNV21(byte[] yuv, int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    scaled.recycle();
    return yuv;
}
private static void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++, index++, yIndex++) {
            a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;
            if (R == 0 && G == 0 && B == 0) {
                // Black is treated as "transparent": keep the camera pixel.
                if (j % 2 == 0 && index % 2 == 0) uvIndex += 2;
                continue;
            }
            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // NV21 has a plane of Y and interleaved planes of VU each sampled by a
            // factor of 2, meaning for every 4 Y pixels there are 1 V and 1 U.
            // Note the sampling is every other pixel AND every other scanline.
            yuv420sp[yIndex] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                // The index must advance here, otherwise U would overwrite V.
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
        }
    }
}
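One way to skip the full ARGB round-trip entirely is to render the text once into a small mask bitmap and blend only its pixels into the Y plane, leaving the VU plane untouched; white text only changes luma, so skipping chroma is visually acceptable. A sketch of this alternative (not tested against the code above):
// Sketch: stamp text into the NV21 Y plane only, avoiding the ARGB round-trip.
// Renders the text into a small mask once, then blends luma where the mask is
// opaque. Chroma (the VU plane) is left untouched, which is fine for white text.
void stampTextIntoY(byte[] nv21, int frameWidth, int frameHeight,
                    String text, int left, int top) {
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setColor(Color.WHITE);
    paint.setTextSize(frameHeight / 36f);

    Rect bounds = new Rect();
    paint.getTextBounds(text, 0, text.length(), bounds);
    Bitmap mask = Bitmap.createBitmap(bounds.width(), bounds.height(),
            Bitmap.Config.ARGB_8888);
    new Canvas(mask).drawText(text, -bounds.left, -bounds.top, paint);

    for (int my = 0; my < mask.getHeight(); my++) {
        for (int mx = 0; mx < mask.getWidth(); mx++) {
            int a = Color.alpha(mask.getPixel(mx, my));
            if (a == 0) continue; // transparent: keep the camera pixel
            int fx = left + mx, fy = top + my;
            if (fx < 0 || fy < 0 || fx >= frameWidth || fy >= frameHeight) continue;
            int i = fy * frameWidth + fx;
            int luma = nv21[i] & 0xFF;
            // Blend white (Y = 255) over the existing luma by the text alpha.
            nv21[i] = (byte) ((a * 255 + (255 - a) * luma) / 255);
        }
    }
    mask.recycle();
}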
I am trying to draw the 'Y' component as greyscale from the image I get from the Camera via onPreviewFrame.
I am using the version of Canvas.drawBitmap that takes an array of 'colors' as a parameter. The Android docs don't mention what format the Color is in, so I'm assuming ARGB 8888.
I do get an image showing up, but it has an odd yellow tint.
Here is my code below:
public void onPreviewFrame(byte[] bytes, Camera camera) {
    Canvas canvas = null;
    try {
        synchronized (mSurfaceHolder) {
            canvas = mSurfaceHolder.lockCanvas();
            Size size = camera.getParameters().getPreviewSize();
            int width = size.width;
            int height = size.height;
            if (mHeight * mWidth != height * width) {
                mColors = new int[width * height];
                mHeight = height;
                mWidth = width;
                Log.i(TAG, "preview size = " + width + " x " + height);
            }
            for (int x = 0; x < width; x++) {
                for (int y = 0; y < height; y++) {
                    int yval = bytes[x + y * width];
                    mColors[x + y * width] = (0xFF << 24) | (yval << 16) | (yval << 8) | yval;
                }
            }
            canvas.drawBitmap(mColors, 0, width, 0.f, 0.f, width, height, false, null);
        }
    } finally {
        if (canvas != null) {
            mSurfaceHolder.unlockCanvasAndPost(canvas);
        }
    }
}
I've also tried using another version of Canvas.drawBitmap that takes a Bitmap as a parameter. I constructed the Bitmap in a similar way from the same array and told it to use ARGB explicitly, but it still ended up tinted yellow!
What am I doing wrong here?
It's a different approach, but the following works and doesn't suffer from the color problems of your solution:
YuvImage yuv = new YuvImage(data, previewFormat, size.width, size.height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 50, out);
byte[] bytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
canvas.drawBitmap(bitmap, null, new Rect(0, 0, size.width, size.height), null);
Note: This probably needs to allocate data for every frame, which your solution doesn't.
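If you want to keep the grayscale approach instead, the yellow tint most likely comes from sign extension: Java bytes are signed, so luma values above 127 become negative and corrupt the shifted channels. A minimal sketch of the loop with the usual & 0xFF mask (the cause is an educated guess, but this is a common pitfall):
// Sketch: mask the signed byte before shifting, otherwise Y values > 127
// sign-extend and corrupt the packed ARGB channels.
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        int yval = bytes[x + y * width] & 0xFF; // mask off the sign extension
        mColors[x + y * width] = (0xFF << 24) | (yval << 16) | (yval << 8) | yval;
    }
}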