I am trying to draw the 'Y' component as greyscale from the image I get from the Camera via onPreviewFrame.
I am using the version of Canvas.drawBitmap that takes an array of 'colors' as a parameter. The Android docs don't mention what format the Color is in, so I'm assuming ARGB 8888.
I do get an image, but it shows up with an odd yellow tint.
Here is my code:
public void onPreviewFrame(byte[] bytes, Camera camera) {
    Canvas canvas = null;
    try {
        synchronized (mSurfaceHolder) {
            canvas = mSurfaceHolder.lockCanvas();
            Size size = camera.getParameters().getPreviewSize();
            int width = size.width;
            int height = size.height;
            if (mHeight * mWidth != height * width) {
                mColors = new int[width * height];
                mHeight = height;
                mWidth = width;
                Log.i(TAG, "preview size = " + width + " x " + height);
            }
            for (int x = 0; x < width; x++) {
                for (int y = 0; y < height; y++) {
                    int yval = bytes[x + y * width];
                    mColors[x + y * width] = (0xFF << 24) | (yval << 16) | (yval << 8) | yval;
                }
            }
            canvas.drawBitmap(mColors, 0, width, 0.f, 0.f, width, height, false, null);
        }
    } finally {
        if (canvas != null) {
            mSurfaceHolder.unlockCanvasAndPost(canvas);
        }
    }
}
I've also tried using another version of Canvas.drawBitmap that takes a Bitmap as a parameter. I constructed the Bitmap in a similar way from the same array and told it to use ARGB explicitly, but it still ended up tinted yellow!
What am I doing wrong here?
It's a different approach, but the following works and doesn't suffer from the color problems of your solution:
YuvImage yuv = new YuvImage(data, previewFormat, size.width, size.height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 50, out);
byte[] bytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
canvas.drawBitmap(bitmap, null, new Rect(0, 0, size.width, size.height), null);
Note: This probably needs to allocate data for every frame, which your solution doesn't.
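For reference, the odd tint in your per-pixel version is most likely sign extension: Java bytes are signed, so luma values above 127 come out negative and corrupt the high color channels when shifted. A minimal sketch of the corrected loop body, using the same variables as your code:

    // Mask the signed byte into the 0..255 range before building the pixel;
    // without the & 0xFF, yval sign-extends and bleeds into other channels.
    int yval = bytes[x + y * width] & 0xFF;
    mColors[x + y * width] = 0xFF000000 | (yval << 16) | (yval << 8) | yval;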
Related
I am doing image processing that requires converting an RGB bitmap image to the YCbCr color space. I retrieve the RGB values for each pixel and apply the conversion matrix to them.
public void convertRGB(View v) {
    if (imageLoaded) {
        int width = inputBM.getWidth();
        int height = inputBM.getHeight();
        int pixel;
        int alpha, red, green, blue;
        int Y, Cb, Cr;
        outputBM = Bitmap.createBitmap(width, height, inputBM.getConfig());
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                pixel = inputBM.getPixel(x, y);
                alpha = Color.alpha(pixel);
                red = Color.red(pixel);
                green = Color.green(pixel);
                blue = Color.blue(pixel);
                Y = (int) (0.299 * red + 0.587 * green + 0.114 * blue);
                Cb = (int) (128 - 0.169 * red - 0.331 * green + 0.500 * blue);
                Cr = (int) (128 + 0.500 * red - 0.419 * green - 0.081 * blue);
                int p = (Y << 24) | (Cb << 16) | (Cr << 8);
                outputBM.setPixel(x, y, p);
            }
        }
        comImgView.setImageBitmap(outputBM);
    }
}
The problem is that the output color is different from the original. I tried to use BufferedImage, but it does not work on Android.
Original: [image]
After conversion: [image]
What is the correct way to handle YCbCr images in Android Java?
Try using the code below:
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(your_yuv_data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 50, out);
byte[] imageBytes = out.toByteArray();
Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
iv.setImageBitmap(image);
Check the documentation for a detailed description of the YuvImage class.
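Note also that Bitmap.setPixel() always interprets its int argument as ARGB, so packing Y/Cb/Cr into that int, as in the question's loop, is what produces the wrong colors. A minimal sketch, if you only want to sanity-check the conversion, is to replace the packing line and render the luma channel as grayscale:

    // Replaces 'int p = (Y << 24) | (Cb << 16) | (Cr << 8);' in the loop.
    // setPixel() expects ARGB, so write Y into all three color channels to
    // preview the luma plane; alpha and Y come from the surrounding loop.
    int p = Color.argb(alpha, Y, Y, Y);
    outputBM.setPixel(x, y, p);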
I am working on TensorFlow image stylization. The problem I am facing is that it resizes my actual image. I want to apply the style to the whole image itself; for example, if my image resolution is 1280x960, it should stay 1280x960 after I apply the style.
I am not using the default INPUT_SIZE value of 256; using the default value it works fine. Here is the code I am using to prevent resizing the image:
private TensorFlowInferenceInterface inferenceInterface;

private void applyStyle() {
    inferenceInterface = new TensorFlowInferenceInterface(mActivity.getAssets(), "bossK_float.pb");
    Bitmap bitmap = getBitmapFromPath();
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
    INPUT_SIZE_WIDTH = bitmap.getWidth();
    INPUT_SIZE_HEIGHT = bitmap.getHeight();
    mStyledBitmap = stylizeImage(bitmap);
}

private Bitmap stylizeImage(Bitmap bitmap) {
    Bitmap scaledBitmap = scaleBitmap(bitmap, INPUT_SIZE_WIDTH, INPUT_SIZE_HEIGHT);
    intValues = new int[INPUT_SIZE_WIDTH * INPUT_SIZE_HEIGHT];
    floatValues = new float[INPUT_SIZE_WIDTH * INPUT_SIZE_HEIGHT * 3];
    scaledBitmap.getPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0, scaledBitmap.getWidth(), scaledBitmap.getHeight());
    scaledBitmap = scaledBitmap.copy(Bitmap.Config.ARGB_8888, true);
    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        floatValues[i * 3 + 0] = ((val >> 16) & 0xFF) * 1.0f;
        floatValues[i * 3 + 1] = ((val >> 8) & 0xFF) * 1.0f;
        floatValues[i * 3 + 2] = (val & 0xFF) * 1.0f;
    }
    Trace.beginSection("feed");
    inferenceInterface.feed(INPUT_NAME, floatValues, INPUT_SIZE_WIDTH, INPUT_SIZE_HEIGHT, 3);
    Trace.endSection();
    Trace.beginSection("run");
    inferenceInterface.run(new String[]{OUTPUT_NAME});
    Trace.endSection();
    Trace.beginSection("fetch");
    inferenceInterface.fetch(OUTPUT_NAME, floatValues);
    Trace.endSection();
    for (int i = 0; i < intValues.length; ++i) {
        intValues[i] =
                0xFF000000
                | (((int) (floatValues[i * 3 + 0])) << 16)
                | (((int) (floatValues[i * 3 + 1])) << 8)
                | ((int) (floatValues[i * 3 + 2]));
    }
    scaledBitmap.setPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0, scaledBitmap.getWidth(), scaledBitmap.getHeight());
    return scaledBitmap;
}

private Bitmap scaleBitmap(Bitmap origin, int newWidth, int newHeight) {
    if (origin == null) {
        return null;
    }
    int height = origin.getHeight();
    int width = origin.getWidth();
    float scaleWidth = ((float) newWidth) / width;
    float scaleHeight = ((float) newHeight) / height;
    Matrix matrix = new Matrix();
    matrix.postScale(scaleWidth, scaleHeight);
    Bitmap newBitmap = Bitmap.createBitmap(origin, 0, 0, width, height, matrix, false);
    return newBitmap;
}
When I change my INPUT_SIZE values to INPUT_SIZE_WIDTH and INPUT_SIZE_HEIGHT, my application stops without an error message. I debugged this code; it gets stuck on this piece of code and stops my app:
Trace.beginSection("run");
inferenceInterface.run(new String[]{OUTPUT_NAME});
Trace.endSection();
Please let me know how I can style the whole image using TensorFlow.
Thank you!
Your code stops there because of the difference in sizes. You are probably getting an ArrayIndexOutOfBoundsException.
The model is trained to accept images of one particular size, so whenever you run it, the image has to be reduced to that size.
Even your training data, when the pb/lite/tflite file is created, is converted to the same image size specified at model creation. Scaling the result back up afterwards will not affect it to a large extent. You can give that a try.
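A minimal sketch of that suggestion, assuming the .pb graph was exported for a fixed INPUT_SIZE of 256 and reusing stylizeImage() from the question: stylize at the model's size, then scale the result back up.

    // INPUT_SIZE = 256 is an assumption; use the size the model was built for.
    Bitmap original = getBitmapFromPath();
    INPUT_SIZE_WIDTH = INPUT_SIZE;
    INPUT_SIZE_HEIGHT = INPUT_SIZE;
    Bitmap styledSmall = stylizeImage(original);   // scales down internally
    Bitmap styledFull = Bitmap.createScaledBitmap(
            styledSmall, original.getWidth(), original.getHeight(), true);

Style-transfer output usually tolerates this kind of interpolation well, so the upscaled result should stay close to the look of the styled frame.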
I use the saveFrame method from grafika, but I found there is an orientation issue.
The left one is from saveFrame and the right one is what I see.
The code is as follows:
/**
 * Saves the EGL surface to a file.
 * <p>
 * Expects that this object's EGL surface is current.
 */
public void saveFrame(File file) throws IOException {
    if (!mEglCore.isCurrent(mEGLSurface)) {
        throw new RuntimeException("Expected EGL context/surface is not current");
    }

    // glReadPixels fills in a "direct" ByteBuffer with what is essentially big-endian RGBA
    // data (i.e. a byte of red, followed by a byte of green...). While the Bitmap
    // constructor that takes an int[] wants little-endian ARGB (blue/red swapped), the
    // Bitmap "copy pixels" method wants the same format GL provides.
    //
    // Ideally we'd have some way to re-use the ByteBuffer, especially if we're calling
    // here often.
    //
    // Making this even more interesting is the upside-down nature of GL, which means
    // our output will look upside down relative to what appears on screen if the
    // typical GL conventions are used.

    String filename = file.toString();
    int width = getWidth();
    int height = getHeight();
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    buf.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    GlUtil.checkGlError("glReadPixels");
    buf.rewind();

    BufferedOutputStream bos = null;
    try {
        bos = new BufferedOutputStream(new FileOutputStream(filename));
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);
        bmp.compress(Bitmap.CompressFormat.PNG, 90, bos);
        bmp.recycle();
    } finally {
        if (bos != null) bos.close();
    }
    Log.d(TAG, "Saved " + width + "x" + height + " frame as '" + filename + "'");
}
So how do I deal with the orientation issue?
Use the following code to solve the problem:
IntBuffer ib = IntBuffer.allocate(width * height);
IntBuffer ibt = IntBuffer.allocate(width * height);
gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);

// Convert the upside-down, mirror-reversed image to a right-side-up normal image.
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        ibt.put((height - i - 1) * width + j, ib.get(i * width + j));
    }
}

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(ibt);
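An alternative sketch, assuming you keep saveFrame() as-is and only want to flip the bitmap it builds: let a Matrix do the vertical flip instead of copying the rows by hand (bmp is the bitmap created inside saveFrame() before it is compressed to PNG).

    // Flip vertically: GL rows run bottom-to-top, Bitmap rows top-to-bottom.
    Matrix m = new Matrix();
    m.preScale(1.0f, -1.0f);
    Bitmap flipped = Bitmap.createBitmap(bmp, 0, 0, bmp.getWidth(), bmp.getHeight(), m, false);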
I want to convert a white background to a transparent background in an Android bitmap.
My situation:
Original image: I cannot post an image.
public Bitmap replaceColor(Bitmap src) {
    if (src == null)
        return null;
    int width = src.getWidth();
    int height = src.getHeight();
    int[] pixels = new int[width * height];
    src.getPixels(pixels, 0, width, 0, 0, width, height);
    for (int x = 0; x < pixels.length; ++x) {
        pixels[x] = ~(pixels[x] << 8 & 0xFF000000) & Color.BLACK;
    }
    Bitmap result = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
    return result;
}
After processing: [image]
It detects the pixels one by one. That works, but the bitmap does not keep its original colors. So I added code to filter on white:
    if (pixels[x] == Color.WHITE)
public Bitmap replaceColor(Bitmap src) {
    if (src == null)
        return null;
    int width = src.getWidth();
    int height = src.getHeight();
    int[] pixels = new int[width * height];
    src.getPixels(pixels, 0, width, 0, 0, width, height);
    for (int x = 0; x < pixels.length; ++x) {
        if (pixels[x] == Color.WHITE) {
            pixels[x] = ~(pixels[x] << 8 & 0xFF000000) & Color.BLACK;
        }
    }
    Bitmap result = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
    return result;
}
After processing: [image]
But this does not completely remove the white color, so it is not pretty.
I really want to remove the white background from the Android bitmap.
My code below follows this Stack Overflow article:
Android bitmap mask color, remove color
public Bitmap replaceColor(Bitmap src) {
    if (src == null)
        return null;
    int width = src.getWidth();
    int height = src.getHeight();
    int[] pixels = new int[width * height];
    src.getPixels(pixels, 0, 1 * width, 0, 0, width, height);
    for (int x = 0; x < pixels.length; ++x) {
        // pixels[x] = ~(pixels[x] << 8 & 0xFF000000) & Color.BLACK;
        if (pixels[x] == Color.WHITE) pixels[x] = 0;
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}
Just replace one line as in the code above and it should do what you want: replace the white color with transparent.
Works on Mi Note 7 (Oreo).
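If comparing against Color.WHITE still leaves a halo of near-white pixels, as in the question, a hedged variant is to clear "almost white" pixels too; the threshold of 240 below is an assumption to tune for your images.

    // Anti-aliased edges are rarely exactly 0xFFFFFFFF, so pure equality
    // misses them; clear any pixel whose R, G and B are all near 255.
    for (int x = 0; x < pixels.length; ++x) {
        int r = Color.red(pixels[x]);
        int g = Color.green(pixels[x]);
        int b = Color.blue(pixels[x]);
        if (r > 240 && g > 240 && b > 240) {
            pixels[x] = Color.TRANSPARENT;
        }
    }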
I use the tesseract library for Android to capture certain text from an image. I know that the captured image is not saved anywhere; it gets recycled. I need to find the original color bitmap, but all I could find was a grayscale bitmap:
Bitmap bitmap = activity.getCameraManager().buildLuminanceSource(data, width, height).renderCroppedGreyscaleBitmap();
When I save this bitmap to the SD card, I get a grayscale image. The renderCroppedGreyscaleBitmap() method is as follows:
public Bitmap renderCroppedGreyscaleBitmap() {
    int width = getWidth();
    int height = getHeight();
    int[] pixels = new int[width * height];
    byte[] yuv = yuvData;
    int inputOffset = top * dataWidth + left;
    for (int y = 0; y < height; y++) {
        int outputOffset = y * width;
        for (int x = 0; x < width; x++) {
            int grey = yuv[inputOffset + x] & 0xff;
            pixels[outputOffset + x] = 0xFF000000 | (grey * 0x00010101);
        }
        inputOffset += dataWidth;
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
    return bitmap;
}
I would greatly appreciate it if someone could tell me how to get the original color image that was captured. Do I have to change this method to get the color (RGB) image?
/**
 * YUV to bitmap.
 * @param data The YUV preview frame.
 * @param width The width of the preview frame.
 * @param height The height of the preview frame.
 * @return The decoded, rotated bitmap.
 */
public static Bitmap byteArray2Bitmap(byte[] data, int width, int height) {
    YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvimage.compressToJpeg(new Rect(0, 0, width, height), 100, baos);
    byte[] rawImage = baos.toByteArray();
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inPreferredConfig = Bitmap.Config.RGB_565;
    Bitmap bitmap = BitmapFactory.decodeByteArray(rawImage, 0, rawImage.length, options);
    Matrix matrix = new Matrix();
    matrix.postRotate(90);
    return Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
}
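A hedged usage sketch, assuming the bytes come from a Camera preview callback in the default NV21 format:

    // Inside onPreviewFrame(byte[] data, Camera camera):
    Camera.Size size = camera.getParameters().getPreviewSize();
    Bitmap colorBitmap = byteArray2Bitmap(data, size.width, size.height);
    // colorBitmap is the full-color, rotated frame; crop it with
    // Bitmap.createBitmap() if you only need the region tesseract scanned.

Note that RGB_565 in byteArray2Bitmap trades some color depth for memory; switch inPreferredConfig to ARGB_8888 if you need the full color range.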