Convert RGB565 to Greyscale (8 bits) - android

I am trying to convert an RGB565 image (the video stream from the Android phone camera) into an 8-bit greyscale image.
So far I have the following code (the conversion is computed in native code using the Android NDK). Note that my input image is 640*480 and I want to crop it so that it fits in a 128*128 buffer.
#define RED(a)   ((((a) & 0xf800) >> 11) << 3)
#define GREEN(a) ((((a) & 0x07e0) >> 5) << 2)
#define BLUE(a)  (((a) & 0x001f) << 3)

typedef unsigned char byte;

void toGreyscale(byte *rgbs, int widthIn, int heightIn, byte *greyscales)
{
    const int textSize = 128;
    int x, y;
    short *rgbPtr = (short*)rgbs;
    byte *greyPtr = greyscales;

    // rgbs arrives in RGB565 (16 bits) format
    for (y = 0; y < textSize; y++)
    {
        for (x = 0; x < textSize; x++)
        {
            short pixel = *(rgbPtr++);
            int r = RED(pixel);
            int g = GREEN(pixel);
            int b = BLUE(pixel);
            *(greyPtr++) = (byte)((r + g + b) / 3);
        }
        rgbPtr += widthIn - textSize;
    }
}
The image is sent to the function like this
jbyte* cImageIn = env->GetByteArrayElements(imageIn, &b);
jbyte* cImageOut = (jbyte*)env->GetDirectBufferAddress(imageOut);
toGreyscale((byte*)cImageIn, widthIn, heightIn, (byte*)cImageOut);
The result I get is a horizontally-reversed image (no idea why... the UVs used to display the result are correct), but the biggest problem is that only the red channel is actually correct when I display the channels separately. The green and blue channels are completely messed up and I have no idea why. I checked on the Internet and all the resources I found show that the masks I am using are correct. Any idea where the mistake could be?
Thanks!

Maybe an endianness issue?
You could check quickly by reversing the two bytes of your 16-bit word before shifting out the RGB components.
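For example, a minimal sketch of that check inside the inner loop of toGreyscale (reusing the question's rgbPtr and macros; treating the pixel as unsigned also rules out sign-extension surprises):
unsigned short pixel = (unsigned short)*(rgbPtr++);
pixel = (unsigned short)((pixel >> 8) | (pixel << 8)); // swap the two bytes: 0xAABB -> 0xBBAA
int r = RED(pixel);
int g = GREEN(pixel);
int b = BLUE(pixel);
If the colours look right after the swap, the incoming buffer is byte-swapped relative to what the shifts expect.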

Related

Android NDK converting YUV420 AIMAGE to RGB

I'm working on streaming camera frames using the NDK Camera2 API. The format of my AIMAGE_READER is YUV420; however, I would like to convert it to RGB. I looked up some Java examples that do the same, but for some reason it doesn't work well and the image is distorted.
The frame resolution is 640x480.
Can someone tell me what I am doing wrong here?
void NativeCamera::previewImageCallback(void *context, AImageReader *reader)
{
    Log::Debug(TAG) << "previewImageCallback" << Log::Endl;

    AImage *previewImage = nullptr;
    auto status = AImageReader_acquireLatestImage(reader, &previewImage);
    if (status != AMEDIA_OK)
    {
        return;
    }

    std::thread processor([=]()
    {
        uint8_t *dataY = nullptr;
        uint8_t *dataU = nullptr;
        uint8_t *dataV = nullptr;
        int lenY = 0;
        int lenU = 0;
        int lenV = 0;

        AImage_getPlaneData(previewImage, 0, (uint8_t**)&dataY, &lenY);
        AImage_getPlaneData(previewImage, 1, (uint8_t**)&dataU, &lenU);
        AImage_getPlaneData(previewImage, 2, (uint8_t**)&dataV, &lenV);

        uchar buff[lenY + lenU + lenV];
        memcpy(buff + 0, dataY, lenY);
        memcpy(buff + lenY, dataV, lenV);
        memcpy(buff + lenY + lenV, dataU, lenU);

        cv::Mat yuvMat(480 + 240, 640, CV_8UC1, &buff);
        cv::Mat rgbMat;
        cv::cvtColor(yuvMat, rgbMat, cv::COLOR_YUV2RGB_NV21, 3);

        // colorBuffer defined elsewhere
        memcpy((char*)colorBuffer, rgbMat.data, 640 * 480 * 3);
That happens because the YUV420p format has 6 bytes per 4 pixels, while RGB has 12 bytes per 4 pixels. So you have to repack your array first; cv::cvtColor does not do it for you. Check here about YUV420p packing.
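In practice the YUV_420_888 planes delivered by AImageReader are also usually not tightly packed: each plane can have a row stride larger than the image width, and the chroma planes often have a pixel stride of 2. A rough sketch of such a repacking step into a tight NV21 buffer (full Y plane followed by interleaved V/U) before calling cv::cvtColor, reusing the dataY/dataU/dataV pointers from the question and assuming a 640x480 frame, could look like this:
// Rough sketch: repack the 640x480 YUV_420_888 planes into a tightly packed NV21 buffer.
// dataY/dataU/dataV are the plane pointers obtained with AImage_getPlaneData above.
const int width = 640, height = 480;
std::vector<uint8_t> nv21(width * height * 3 / 2);

int32_t yRowStride = 0, uvRowStride = 0, uvPixelStride = 0;
AImage_getPlaneRowStride(previewImage, 0, &yRowStride);
AImage_getPlaneRowStride(previewImage, 1, &uvRowStride);
AImage_getPlanePixelStride(previewImage, 1, &uvPixelStride);

// Copy the luma plane row by row, dropping any per-row padding.
for (int row = 0; row < height; ++row)
    memcpy(nv21.data() + row * width, dataY + row * yRowStride, width);

// Interleave the chroma samples as V,U pairs (NV21 order), honouring their strides.
uint8_t *vu = nv21.data() + width * height;
for (int row = 0; row < height / 2; ++row) {
    for (int col = 0; col < width / 2; ++col) {
        const int src = row * uvRowStride + col * uvPixelStride;
        *vu++ = dataV[src];
        *vu++ = dataU[src];
    }
}

cv::Mat yuvMat(height * 3 / 2, width, CV_8UC1, nv21.data());
cv::Mat rgbMat;
cv::cvtColor(yuvMat, rgbMat, cv::COLOR_YUV2RGB_NV21);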

How to fix the image preprocessing difference between tensorflow and android studio?

I'm trying to build a classification model with Keras and deploy the model to my Android phone. I use the code from this website to deploy my own converted model, which is a .pb file, to my Android phone. I load an image from my phone and everything works fine, but the prediction result is totally different from the result I got on my PC.
The procedure for testing on my PC is:
load the image with cv2, and convert to np.float32
use the keras resnet50 'preprocess_input' python function to preprocess the image
expand the image dimension for batching (batch size is 1)
forward the image to model and get the result
Relevant code:
img = cv2.imread('./my_test_image.jpg')
x = preprocess_input(img.astype(np.float32))
x = np.expand_dims(x, axis=0)
net = load_model('./my_model.h5')
prediction_result = net.predict(x)
And I noticed that the image preprocessing part on Android is different from the method I used in Keras, which uses mode caffe (convert the images from RGB to BGR, then zero-center each color channel with respect to the ImageNet dataset). It seems that the original code is for mode tf (which scales pixels between -1 and 1).
So I modified the following code of 'preprocessBitmap' to what I think it should be, and used a 3-channel RGB image with pixel value [127,127,127] to test it. The code predicted the same result as the .h5 model did. But when I load an image to classify, the prediction result is different from the .h5 model.
Does anyone have any idea? Thank you very much.
I have tried the following:
Load a 3-channel RGB image on my phone with pixel value [127,127,127] and use the modified code below; it gives me a prediction result that is the same as the prediction result using the .h5 model on my PC.
Test the converted .pb model on my PC using the tensorflow gfile module with an image; it gives me a correct prediction result (compared to the .h5 model). So I think the converted .pb file does not have any problem.
Entire section of preprocessBitmap
// code of the 'preprocessBitmap' section in TensorflowImageClassifier.java
TraceCompat.beginSection("preprocessBitmap");
// Preprocess the image data from 0-255 int to normalized float based
// on the provided parameters.
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
for (int i = 0; i < intValues.length; ++i) {
    // This is an ARGB format, so we need to mask the least significant 8 bits to get blue,
    // the next 8 bits to get green and the next 8 bits to get red.
    // Since we have an opaque image, alpha can be ignored.
    final int val = intValues[i];
    // original
    /*
    floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - imageMean) / imageStd;
    floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - imageMean) / imageStd;
    floatValues[i * 3 + 2] = ((val & 0xFF) - imageMean) / imageStd;
    */
    // what I think it should be to do the same thing as mode caffe in keras
    floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - (float)123.68);
    floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - (float)116.779);
    floatValues[i * 3 + 2] = ((val & 0xFF) - (float)103.939);
}
TraceCompat.endSection();
This question is old, but remains the top Google result for preprocess_input for ResNet50 on Android. I could not find an answer for implementing preprocess_input for Java/Android, so I came up with the following based on the original python/keras code:
/*
  Preprocesses an RGB bitmap IAW keras/imagenet.
  Port of https://github.com/tensorflow/tensorflow/blob/v2.3.1/tensorflow/python/keras/applications/imagenet_utils.py#L169
  with data_format='channels_last', mode='caffe'.
  Converts the image from RGB to BGR, then zero-centers each color channel with respect to the ImageNet dataset, without scaling.
  Returns a 3D float array.
*/
static float[][][] imagenet_preprocess_input_caffe(Bitmap bitmap) {
    // https://github.com/tensorflow/tensorflow/blob/v2.3.1/tensorflow/python/keras/applications/imagenet_utils.py#L210
    final float[] imagenet_means_caffe = new float[]{103.939f, 116.779f, 123.68f};
    float[][][] result = new float[bitmap.getHeight()][bitmap.getWidth()][3]; // assuming RGB
    for (int y = 0; y < bitmap.getHeight(); y++) {
        for (int x = 0; x < bitmap.getWidth(); x++) {
            final int px = bitmap.getPixel(x, y);
            // RGB --> BGR, then subtract the means. No scaling.
            result[y][x][0] = (Color.blue(px) - imagenet_means_caffe[0]);
            result[y][x][1] = (Color.green(px) - imagenet_means_caffe[1]);
            result[y][x][2] = (Color.red(px) - imagenet_means_caffe[2]);
        }
    }
    return result;
}
Usage with a tensorflow-lite input of shape (1,224,224,3):
Bitmap bitmap = <your bitmap of size 224x224x3>;
float[][][][] imgValues = new float[1][bitmap.getHeight()][bitmap.getWidth()][3];
imgValues[0]=imagenet_preprocess_input_caffe(bitmap);
... <prep tfInput, tfOutput> ...
tfLite.run(tfInput, tfOutput);
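As a quick sanity check of the means, using the asker's [127,127,127] test pixel: that RGB value should map to approximately [23.061, 10.221, 3.32] in the returned BGR-ordered array (127 - 103.939, 127 - 116.779 and 127 - 123.68), so both the Keras preprocess_input output and this Java port can be compared against the same numbers.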

Huge negative values extracted by using getPixel() method

I am having a problem with an image processing app I am developing (newbie here). I am trying to extract the value of specific pixels by using the getPixel() method.
I am having a problem though. The number I get from this method is a huge negative number, something like -1298383. Is this normal? How can I fix it?
Thanks.
I'm not an expert, but to me it looks like you are getting the whole packed colour value. Perhaps you want something more understandable, like the value of each RGB channel.
To unpack a pixel into its RGB values you should do something like:
private short[][] red;
private short[][] green;
private short[][] blue;

/**
 * Map each intensity of an RGB colour into its respective colour channel.
 */
private void unpackPixel(int pixel, int row, int col) {
    red[row][col] = (short) ((pixel >> 16) & 0xFF);
    green[row][col] = (short) ((pixel >> 8) & 0xFF);
    blue[row][col] = (short) ((pixel >> 0) & 0xFF);
}
And after changes in each channel you can pack the pixel back.
/**
 * Create an RGB colour pixel.
 */
private int packPixel(int red, int green, int blue) {
    return (red << 16) | (green << 8) | blue;
}
Sorry if it is not what you are looking for.
You can get the pixel from the view like this:
ImageView imageView = ((ImageView)v);
Bitmap bitmap = ((BitmapDrawable)imageView.getDrawable()).getBitmap();
int pixel = bitmap.getPixel(x,y);
Now you can get each channel with:
int redValue = Color.red(pixel);
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);
getPixel() returns the Color at the specified location, and throws an exception if x or y are out of bounds (negative or >= the width or height respectively).
The returned color is a non-premultiplied ARGB value. Because an opaque pixel has alpha 0xFF in the most significant byte, the packed int is negative when read as a signed 32-bit value, so huge negative numbers are normal: for example, -1298383 is 0xFFEC3031, i.e. alpha 255, red 236, green 48, blue 49.

How to create Android bitmap from RGBA buffer in native code?

I have a BMP as an RGBA buffer (I'm able to save it as a BMP in native code and view it as a .bmp image) and I need to pass it to Android from native code. I've found similar questions and answers and this is one of the solutions:
create android bitmap object in android
pass it to native code
set pixels buffer in native code
return bitmap back to android side
This is not suitable for me because:
the pixels array is created in native code
if I create it on the Android side with a specified width and height, this makes Android allocate a second buffer, and that's not good since I'm going to have 24 bitmaps a second (streaming video).
I need something like this:
pass Buffer from native code and Bitmap.createFromBuffer(Buffer buffer, int width, int height, int format)
create android bitmap object in native code, set pixels buffer and return back to android
Any suggestions/thoughts?
If you want to create a java Bitmap object from native code, you should do something like this:
In native code, read your buffer, then convert every pixel in the buffer to ARGB format. If you have RGBA, you can do something like this:
int a = 0xFF & yourPixelInt;
int r = 0xFF & (yourPixelInt >> 24);
int g = 0xFF & (yourPixelInt >> 16);
int b = 0xFF & (yourPixelInt >> 8);
unsigned int newPixel = (a << 24) | (r << 16) | (g << 8) | (b);
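For the whole frame that per-pixel step just goes in a loop over the buffer; here is a minimal sketch, assuming the frame is packed as one 32-bit RGBA value per pixel (rgbaToArgb, rgbaBuffer and pixelCount are placeholder names, not from the original code):
// Hedged sketch: convert a packed RGBA buffer to ARGB in place.
void rgbaToArgb(uint32_t *rgbaBuffer, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i) {
        const uint32_t px = rgbaBuffer[i];
        const uint32_t r = (px >> 24) & 0xFF;
        const uint32_t g = (px >> 16) & 0xFF;
        const uint32_t b = (px >> 8) & 0xFF;
        const uint32_t a = px & 0xFF;
        rgbaBuffer[i] = (a << 24) | (r << 16) | (g << 8) | b;
    }
}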
Once you have done this for all pixels and the buffer is in ARGB order, you can create a java Bitmap from native code:
// 'array' is a jintArray with origBufferSize elements, e.g. created earlier with env->NewIntArray(origBufferSize)
jint *bytes = env->GetIntArrayElements(array, NULL);
if (bytes != NULL) {
    memcpy(bytes, buffer, origBufferSize * sizeof(unsigned int));
    env->ReleaseIntArrayElements(array, bytes, 0);
}

jclass bitmapClass = env->FindClass("android/graphics/Bitmap");
jmethodID methodid = env->GetStaticMethodID(bitmapClass, "createBitmap",
        "([IIIIILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
jclass bitmapConfig = env->FindClass("android/graphics/Bitmap$Config");
jfieldID argb8888FieldID = env->GetStaticFieldID(bitmapConfig, "ARGB_8888",
        "Landroid/graphics/Bitmap$Config;");
jobject argb8888Obj = env->GetStaticObjectField(bitmapConfig, argb8888FieldID);
jobject java_bitmap = env->CallStaticObjectMethod(bitmapClass, methodid, array, 0,
        bitmapwidth, bitmapwidth, bitmapheight, argb8888Obj);
Don't forget to release the local references to avoid memory leaks:
env->DeleteLocalRef(array);
env->DeleteLocalRef(bitmapClass);
env->DeleteLocalRef(bitmapConfig);
env->DeleteLocalRef(argb8888Obj);

VideoSurfaceView - render to file

I'm using VideoSurfaceView to render filtered video. I'm doing it by changing the fragment shader according to my needs. Now I would like to save/render the video after the changes to a file of the same format (e.g. mp4 - h264), but I couldn't find how to do it.
PS - saving a texture as a bitmap, and the bitmap to a file, is easy, but I couldn't find how to do it with videos...
Any experts here?
As you already found out and said in the comments, OpenGL can't export multiple frames as a video.
Though if you simply want to filter/process each frame of a video, then you don't need OpenGL at all, and you don't need a fragment shader; you can simply loop through all the pixels yourself.
Now let's say that you process your video one frame at a time and each frame is a BufferedImage. You can of course use whatever you want or are provided with, as long as you have the option to get and set pixels.
I'm simply supplying you with a way of calculating and applying a filter; you will have to do the decoding and encoding of the video file yourself.
But back to the BufferedImage: first we want to get all the pixels in our BufferedImage, which we do using the following.
BufferedImage bi = ...; // Here you would get a frame from the video
int width = bi.getWidth();
int height = bi.getHeight();
int[] pixels = ((DataBufferInt) bi.getRaster().getDataBuffer()).getData();
Be aware that depending on the type of image and whether the image contains transparency, the DataBuffer might be a DataBufferInt, a DataBufferByte, etc. You can read about the different DataBuffers in the Oracle Docs, click here.
Now, simply by looping through the pixels of the image, we can apply any kind of effect or filter.
Let's say we want to create a grayscale effect, also called a black-and-white effect; you would do that as follows.
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        final int index = x + y * width;
        final int pixel = pixels[index];

        final int alpha = (pixel >> 24) & 0xFF;
        final int red = (pixel >> 16) & 0xFF;
        final int green = (pixel >> 8) & 0xFF;
        final int blue = pixel & 0xFF;

        final int gray = (red + green + blue) / 3;

        pixels[index] = alpha << 24 | gray << 16 | gray << 8 | gray;
    }
}
Now you can simply save the image again, or do anything else you would like with it. You can also keep using and drawing the BufferedImage, because writing to the pixel array provided by the BufferedImage changes the BufferedImage itself as well.
Important: if you want to perform a blur effect, then store each calculated pixel in a separate output array, because a blur needs the surrounding pixels. If you overwrite the old values while you are still calculating, some pixels will use already-blurred values instead of the original ones.
The above code of course works for still images as well.
Extra
If you want to get the RGBA values stored in a single int, you can do the following.
int pixel = 0xFFFF8040; // This is a random testing value
int alpha = (pixel >> 24) & 0xFF; // Would equal 255 using the testing value
int red = (pixel >> 16) & 0xFF; // ... 255 ...
int green = (pixel >> 8) & 0xFF; // ... 128 ...
int blue = pixel & 0xFF; // ... 64 ...
Then if you have the RGBA values and want to combine them into a single int, you can do the following.
int alpha = 255;
int red = 255;
int green = 128;
int blue = 64;
int pixel = alpha << 24 | red << 16 | green << 8 | blue;
If you only have the RGB values, then you just do either red << 16 | green << 8 | blue, or 255 << 24 | red << 16 | green << 8 | blue to include a fully opaque alpha.
