Android camera: write text directly into the frame

I have a camera (the "deprecated" API) and a Camera.PreviewCallback in which I get frames. I use that to take pictures (not takePicture or PictureCallback, because I want speed over quality) and save them as JPEG; code snippet below:
@Override
public synchronized void onPreviewFrame(byte[] frame, Camera camera) {
    YuvImage yuv = new YuvImage(frame, previewFormat, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 50, out);
    out.toByteArray(); // this is my JPEG
    // ... save this JPEG
}
I want to draw text, a timestamp, into the frame. I know how to do this by converting to a Bitmap, drawing the text with a Canvas, and converting back again. But that method is relatively slow (and slowness isn't even the biggest issue); I also need the frame byte array for other things, e.g. putting it into a video, so I would like to know if there is some method or library to write text directly into that frame. I don't use the preview (or rather, I use a dummy preview); I'm asking how to change the frame byte array itself. I bet I'd need to do some mumbo-jumbo on the byte array to make it work. The frame is in a standard(?) format (YUV420/NV21).
Edit:
I managed to get a working function (getNV21). Of course it is far from efficient, as it creates a new Bitmap every frame and draws text onto it, but at least I have something I can work with that writes directly into the YUV image. Still, answers would be appreciated.
private Bitmap drawText(String text, int rotation) {
    Bitmap bitmap = Bitmap.createBitmap(maxWidth, maxHeight, Bitmap.Config.ARGB_8888);
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    Canvas canvas = new Canvas(bitmap);
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setColor(Color.rgb(255, 255, 255));
    paint.setStrokeWidth(height / 36);
    paint.setTextSize(height / 36);
    paint.setShadowLayer(5f, 0f, 0f, Color.BLACK);
    paint.setTypeface(Typeface.MONOSPACE);
    if (rotation == 0 || rotation == 180) {
        canvas.rotate(rotation, width / 2, height / 2);
    } else {
        canvas.translate(Math.abs(width - height), 0);
        int w = width / 2 - Math.abs(width / 2 - height / 2);
        int h = height / 2;
        canvas.rotate(-rotation, w, h);
    }
    canvas.drawText(text, 10, height - 10, paint);
    return bitmap;
}
byte[] getNV21(byte[] yuv, int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    scaled.recycle();
    return yuv;
}
private static void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++, index++, yIndex++) {
            a = (argb[index] & 0xff000000) >>> 24; // a is not used
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff);
            // Black pixels in the overlay bitmap act as "transparent":
            // skip them, keeping the original frame data there.
            if (R == 0 && G == 0 && B == 0) {
                if (j % 2 == 0 && index % 2 == 0) uvIndex += 2;
                continue;
            }
            // well-known RGB to YUV conversion
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // NV21 has a plane of Y and an interleaved VU plane subsampled
            // by a factor of 2: for every 4 Y pixels there is 1 V and 1 U.
            // Note the sampling is every other pixel AND every other scanline.
            yuv420sp[yIndex] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                // V and U must land in consecutive slots of the VU plane.
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
        }
    }
}
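One lighter-weight direction (a minimal sketch, untested against the code above; stampLuma and its arguments are hypothetical names): since white text only needs the luma to change, render the text once with drawText and then, per frame, copy just the bright mask pixels into the Y plane, leaving the interleaved VU plane untouched:
private void stampLuma(byte[] nv21, int frameWidth, Bitmap mask, int left, int top) {
    int w = mask.getWidth(), h = mask.getHeight();
    int[] px = new int[w * h];
    mask.getPixels(px, 0, w, 0, 0, w, h);
    for (int y = 0; y < h; y++) {
        int row = (top + y) * frameWidth + left;
        for (int x = 0; x < w; x++) {
            // Same convention as encodeYUV420SP: black means "transparent".
            if ((px[y * w + x] & 0x00ffffff) != 0) {
                // Force the luma to white; the VU plane is untouched, so
                // the text inherits the chroma of the pixels underneath.
                nv21[row + x] = (byte) 255;
            }
        }
    }
}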

Related

Android: bitmap to RGBA and back

I'm trying to write a couple of methods to convert an Android Bitmap to an RGBA byte array and then back to a Bitmap. The problem is that I can't seem to get the formula right, because the colors always come back wrong. I have tried several different assumptions, but to no avail.
So, this is the method to convert from Bitmap to RGBA that I think is fine:
public static byte[] bitmapToRgba(Bitmap bitmap) {
    int[] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
    byte[] bytes = new byte[pixels.length * 4];
    bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    int i = 0;
    for (int pixel : pixels) {
        // Get components, assuming the pixel is ARGB
        int A = (pixel >> 24) & 0xff;
        int R = (pixel >> 16) & 0xff;
        int G = (pixel >> 8) & 0xff;
        int B = pixel & 0xff;
        bytes[i++] = (byte) R;
        bytes[i++] = (byte) G;
        bytes[i++] = (byte) B;
        bytes[i++] = (byte) A;
    }
    return bytes;
}
And this is the method aimed at creating a bitmap back from those bytes, which is not working as expected:
public static Bitmap bitmapFromRgba(int width, int height, byte[] bytes) {
    int[] pixels = new int[bytes.length / 4];
    int j = 0;
    // It turns out Bitmap.Config.ARGB_8888 is in reality RGBA_8888!
    // Source: https://stackoverflow.com/a/47982505/1160360
    // Now, according to my own experiments, it seems it is ABGR... this sucks.
    // So we have to change the order of the components
    for (int i = 0; i < pixels.length; i++) {
        byte R = bytes[j++];
        byte G = bytes[j++];
        byte B = bytes[j++];
        byte A = bytes[j++];
        int pixel = (A << 24) | (B << 16) | (G << 8) | R;
        pixels[i] = pixel;
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(IntBuffer.wrap(pixels));
    return bitmap;
}
That's my last implementation, though I have tried several different ones without success. I'm assuming createBitmap expects ABGR in spite of specifying ARGB_8888 because I have done experiments hardcoding all the pixels to things like:
0xff_ff_00_00 -> got blue
0xff_00_ff_00 -> got green
0xff_00_00_ff -> got red
Anyway maybe that assumption is wrong and a consequence of some other mistaken one before.
I think the main problem may be related to the use of signed numeric values, since there are no unsigned ones in Java (well, there's something in Java 8+ but on one hand I don't think it should be necessary to use these, and on the other it is not supported by older Android versions that I need to support).
Any help will be much appreciated.
Thanks a lot in advance!
I solved it myself. There are a number of issues, but they all began with this line:
bitmap.copyPixelsFromBuffer(IntBuffer.wrap(pixels));
That seems to mix up the color components in the wrong way. Maybe it's something related to byte order (little/big endian stuff); in any case, I worked around it by using setPixels instead:
bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
This is the final code that's working as expected, just in case it's useful for someone else:
public static byte[] bitmapToRgba(Bitmap bitmap) {
    if (bitmap.getConfig() != Bitmap.Config.ARGB_8888)
        throw new IllegalArgumentException("Bitmap must be in ARGB_8888 format");
    int[] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
    byte[] bytes = new byte[pixels.length * 4];
    bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    int i = 0;
    for (int pixel : pixels) {
        // Get components, assuming the pixel is ARGB
        int A = (pixel >> 24) & 0xff;
        int R = (pixel >> 16) & 0xff;
        int G = (pixel >> 8) & 0xff;
        int B = pixel & 0xff;
        bytes[i++] = (byte) R;
        bytes[i++] = (byte) G;
        bytes[i++] = (byte) B;
        bytes[i++] = (byte) A;
    }
    return bytes;
}
public static Bitmap bitmapFromRgba(int width, int height, byte[] bytes) {
    int[] pixels = new int[bytes.length / 4];
    int j = 0;
    for (int i = 0; i < pixels.length; i++) {
        int R = bytes[j++] & 0xff;
        int G = bytes[j++] & 0xff;
        int B = bytes[j++] & 0xff;
        int A = bytes[j++] & 0xff;
        int pixel = (A << 24) | (R << 16) | (G << 8) | B;
        pixels[i] = pixel;
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
    return bitmap;
}
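For reference, a quick round-trip sanity check (source here stands for any ARGB_8888 bitmap; the restored copy should be pixel-identical):
// Convert to an RGBA byte array and back again.
byte[] rgba = bitmapToRgba(source);
Bitmap restored = bitmapFromRgba(source.getWidth(), source.getHeight(), rgba);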

Android RGB to YCbCr Conversion and output to imageView

I am doing image processing which requires converting an RGB bitmap image to the YCbCr color space. I retrieve the RGB value for each pixel and apply the conversion matrix to it.
public void convertRGB(View v) {
    if (imageLoaded) {
        int width = inputBM.getWidth();
        int height = inputBM.getHeight();
        int pixel;
        int alpha, red, green, blue;
        int Y, Cb, Cr;
        outputBM = Bitmap.createBitmap(width, height, inputBM.getConfig());
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                pixel = inputBM.getPixel(x, y);
                alpha = Color.alpha(pixel);
                red = Color.red(pixel);
                green = Color.green(pixel);
                blue = Color.blue(pixel);
                Y = (int) (0.299 * red + 0.587 * green + 0.114 * blue);
                Cb = (int) (128 - 0.169 * red - 0.331 * green + 0.500 * blue);
                Cr = (int) (128 + 0.500 * red - 0.419 * green - 0.081 * blue);
                int p = (Y << 24) | (Cb << 16) | (Cr << 8);
                outputBM.setPixel(x, y, p);
            }
        }
        comImgView.setImageBitmap(outputBM);
    }
}
The problem is that the output color is different from the original. I tried to use BufferedImage, but it does not work on Android.
(Screenshots omitted: the original image vs. the image after conversion.)
May I know the correct way to handle a YCbCr image in Android Java?
Try setting it using the code below:
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(your_yuv_data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 50, out);
byte[] imageBytes = out.toByteArray();
Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
iv.setImageBitmap(image);
Check the documentation for a detailed description of the YuvImage class.
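A hedged aside on the original snippet: the line (Y << 24) | (Cb << 16) | (Cr << 8) packs Y into the alpha channel of an ARGB int, which is why the displayed colors are off. If the goal is just to view the converted image, one option is to map YCbCr back to RGB inside the loop before calling setPixel, using the inverse of the matrix above (a sketch, reusing the question's variables):
int r = (int) (Y + 1.402 * (Cr - 128));
int g = (int) (Y - 0.344 * (Cb - 128) - 0.714 * (Cr - 128));
int b = (int) (Y + 1.772 * (Cb - 128));
// Clamp to the valid 0..255 range before packing.
r = Math.max(0, Math.min(255, r));
g = Math.max(0, Math.min(255, g));
b = Math.max(0, Math.min(255, b));
outputBM.setPixel(x, y, Color.argb(alpha, r, g, b));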

Android TensorFlow - prevent resize Image

I am working on TensorFlow stylize image. But the problem I am facing is that it resizes my actual image. I want to apply the style to the whole image itself. For example, if my image resolution is 1280x960, it should stay 1280x960 after I apply the style.
I am not using the default INPUT_SIZE value of 256. Using the default value it works fine. Here is the code I am using to prevent the image resize.
private TensorFlowInferenceInterface inferenceInterface;

private void applyStyle() {
    inferenceInterface = new TensorFlowInferenceInterface(mActivity.getAssets(), "bossK_float.pb");
    Bitmap bitmap = getBitmapFromPath();
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
    INPUT_SIZE_WIDTH = bitmap.getWidth();
    INPUT_SIZE_HEIGHT = bitmap.getHeight();
    mStyledBitmap = stylizeImage(bitmap);
}
private Bitmap stylizeImage(Bitmap bitmap) {
    Bitmap scaledBitmap = scaleBitmap(bitmap, INPUT_SIZE_WIDTH, INPUT_SIZE_HEIGHT);
    intValues = new int[INPUT_SIZE_WIDTH * INPUT_SIZE_HEIGHT];
    floatValues = new float[INPUT_SIZE_WIDTH * INPUT_SIZE_HEIGHT * 3];
    scaledBitmap.getPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0, scaledBitmap.getWidth(), scaledBitmap.getHeight());
    scaledBitmap = scaledBitmap.copy(Bitmap.Config.ARGB_8888, true);
    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        floatValues[i * 3 + 0] = ((val >> 16) & 0xFF) * 1.0f;
        floatValues[i * 3 + 1] = ((val >> 8) & 0xFF) * 1.0f;
        floatValues[i * 3 + 2] = (val & 0xFF) * 1.0f;
    }
    Trace.beginSection("feed");
    inferenceInterface.feed(INPUT_NAME, floatValues, INPUT_SIZE_WIDTH, INPUT_SIZE_HEIGHT, 3);
    Trace.endSection();
    Trace.beginSection("run");
    inferenceInterface.run(new String[]{OUTPUT_NAME});
    Trace.endSection();
    Trace.beginSection("fetch");
    inferenceInterface.fetch(OUTPUT_NAME, floatValues);
    Trace.endSection();
    for (int i = 0; i < intValues.length; ++i) {
        intValues[i] = 0xFF000000
                | (((int) (floatValues[i * 3 + 0])) << 16)
                | (((int) (floatValues[i * 3 + 1])) << 8)
                | ((int) (floatValues[i * 3 + 2]));
    }
    scaledBitmap.setPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0, scaledBitmap.getWidth(), scaledBitmap.getHeight());
    return scaledBitmap;
}
private Bitmap scaleBitmap(Bitmap origin, int newWidth, int newHeight) {
    if (origin == null) {
        return null;
    }
    int height = origin.getHeight();
    int width = origin.getWidth();
    float scaleWidth = ((float) newWidth) / width;
    float scaleHeight = ((float) newHeight) / height;
    Matrix matrix = new Matrix();
    matrix.postScale(scaleWidth, scaleHeight);
    Bitmap newBitmap = Bitmap.createBitmap(origin, 0, 0, width, height, matrix, false);
    return newBitmap;
}
When I change my INPUT_SIZE values to INPUT_SIZE_WIDTH and INPUT_SIZE_HEIGHT, my application stops without an error message. I debugged this code, but it gets stuck on this piece of code and stops my app:
Trace.beginSection("run");
inferenceInterface.run(new String[]{OUTPUT_NAME});
Trace.endSection();
Please let me know how I can style the whole image using TensorFlow.
Thank you!
Your code stops there because of the difference in size. You are probably getting an ArrayIndexOutOfBoundsException.
The model is trained to accept images of a particular size, so whenever you classify, the image has to be reduced to that particular size.
Even your training data, when you create the pb/lite/tflite file, is converted to the same image size you specify at model creation. Scaling the result back up will not affect it to a large extent. You can give that a try.
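If the aim is a full-resolution result, one common workaround is a scale-down/stylize/scale-up pipeline (a minimal sketch, reusing the question's own methods; MODEL_SIZE is a hypothetical constant for whatever input size the graph was trained with, e.g. 256):
// Run the model at its trained size, then scale the styled output
// back up to the original resolution.
INPUT_SIZE_WIDTH = MODEL_SIZE;
INPUT_SIZE_HEIGHT = MODEL_SIZE;
Bitmap original = getBitmapFromPath();
Bitmap styledSmall = stylizeImage(original); // scales internally via scaleBitmap
Bitmap styledFull = Bitmap.createScaledBitmap(
        styledSmall, original.getWidth(), original.getHeight(), true);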

Android Low image quality / altered image when using glReadPixels

I've spent a lot of time trying to figure this out but can't see what I am doing wrong.
(Screenshots omitted: the original image, and the same image recaptured 5 times.)
Recapturing the image multiple times clearly shows that there is something not right. Capturing it once is just ok but twice is enough to clearly see the difference.
I found these similar issues on stackoverflow:
Bitmap quality using glReadPixels with frame buffer objects
Extract pixels from TextureSurface using glReadPixels resulting in bad image Bitmap
(sorry, I'm limited in the number of links I can add)
Unfortunately none of the proposed suggestions/solutions fixed my issue.
This is my code:
private Bitmap createSnapshot(int x, int y, int w, int h) {
    int bitmapBuffer[] = new int[w * h];
    int bitmapSource[] = new int[w * h];
    IntBuffer intBuffer = IntBuffer.wrap(bitmapBuffer);
    intBuffer.position(0);
    try {
        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, intBuffer);
        int offset1, offset2;
        // GL rows are bottom-up, so flip vertically while swapping the
        // R and B channels (GL returns RGBA, Bitmap expects ARGB).
        for (int i = 0; i < h; i++) {
            offset1 = i * w;
            offset2 = (h - i - 1) * w;
            for (int j = 0; j < w; j++) {
                int texturePixel = bitmapBuffer[offset1 + j];
                int blue = (texturePixel >> 16) & 0xff;
                int red = (texturePixel << 16) & 0x00ff0000;
                int pixel = (texturePixel & 0xff00ff00) | red | blue;
                bitmapSource[offset2 + j] = pixel;
            }
        }
    } catch (GLException e) {
        return null;
    }
    return Bitmap.createBitmap(bitmapSource, w, h, Bitmap.Config.ARGB_8888);
}
I'm using OpenGL ES 2. For bitmap compression I am using PNG. I tested it using JPEG (quality 100) and the result is similar, but slightly worse.
There is also a slight yellowish tint added to the image.
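One possible culprit (an assumption, not confirmed in this thread): banding and tint like this often come from rendering into a 16-bit (RGB565) default EGL surface with dithering, so each capture re-quantizes the colors. Requesting a full 32-bit configuration on the GLSurfaceView before setRenderer() may help; glSurfaceView stands for whatever view hosts the renderer:
// Ask for 8 bits per color channel (plus a 16-bit depth buffer, no
// stencil) so glReadPixels sees true 8-bit color instead of dithered 565.
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
glSurfaceView.getHolder().setFormat(PixelFormat.RGBA_8888);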

Android: converting Image into byteArray

In my project I have a bitmap image. I need to convert this picture to a byteArray in order to manipulate some bytes and afterwards save it as an image.
With this code: image = BitmapFactory.decodeResource(context.getResources(), R.drawable.tasnim); I have access to the width and height, but how can I access the bytes of this image?
Thanks
AFAIK the most correct way is:
ByteBuffer copyToBuffer(Bitmap bitmap) {
    int size = bitmap.getHeight() * bitmap.getRowBytes();
    ByteBuffer buffer = ByteBuffer.allocateDirect(size);
    bitmap.copyPixelsToBuffer(buffer);
    return buffer;
}
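To push edited bytes back into the bitmap afterwards (a sketch under the same assumptions; the bitmap must be mutable for this to work):
// Rewind so the copy starts at the beginning of the buffer, then
// write the (possibly modified) bytes back into the bitmap.
buffer.rewind();
bitmap.copyPixelsFromBuffer(buffer);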
I'm assuming the OP wants to manipulate the pixels, not the header information of the Image...
Assuming your image is a Bitmap
int w = image.getWidth(), h = image.getHeight();
int[] rgbStream = new int[w * h];
image.getPixels(rgbStream, 0, w, 0, 0, w, h);
Of course, this gets you pixel values as integers... but you can always convert them again.
int t = w * h;
for (int i = 0; i < t; i++) {
    int pixel = rgbStream[i];     // get pixel value (ARGB)
    int A = (pixel >> 24) & 0xFF; // isolate alpha value...
    int R = (pixel >> 16) & 0xFF; // isolate red channel value...
    int G = (pixel >> 8) & 0xFF;  // isolate green channel value...
    int B = pixel & 0xFF;         // isolate blue channel value...
    // NOTE: A, R, G, B can be cast as bytes...
}
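To complete the round trip the question asks about (manipulate the bytes, then save as an image), a short sketch; outputFile is a placeholder for wherever the result should go:
// Write the (possibly modified) pixels back into a mutable copy and
// compress it to disk as a PNG.
Bitmap mutable = image.copy(Bitmap.Config.ARGB_8888, true);
mutable.setPixels(rgbStream, 0, w, 0, 0, w, h);
try (FileOutputStream fos = new FileOutputStream(outputFile)) {
    mutable.compress(Bitmap.CompressFormat.PNG, 100, fos);
}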
