Optimize or replace Bitmap.setPixel - Android

I'm creating a heat-zone bitmap from raw data. To do that, I convert each value of my raw data into a color, then assign it to a pixel of a bitmap:
for (int i = 0; i < this.heatDatas.length; i++) {
    for (int j = 0; j < this.maxY; j++) {
        ratio = this.heatDatas[i][j] / (double) this.maxValue;
        ratio = ratio * this.nbIndexColors;
        idxColor1 = (int) Math.floor(ratio);
        idxColor2 = idxColor1 + 1;
        distance = ratio - idxColor1;
        r = (int) ((colors[idxColor2][0] - colors[idxColor1][0]) * distance + colors[idxColor1][0]);
        g = (int) ((colors[idxColor2][1] - colors[idxColor1][1]) * distance + colors[idxColor1][1]);
        b = (int) ((colors[idxColor2][2] - colors[idxColor1][2]) * distance + colors[idxColor1][2]);
        bmp.setPixel(i, j, Color.argb(this.alpha, r, g, b));
    }
}
This works, but it's really slow (around 800 ms for a 512x512 bitmap on a Nexus 5). After some investigation, it seems that bmp.setPixel(i, j, Color.argb(this.alpha, r, g, b)) takes almost 50% of the total execution time; Color.argb() seems to be negligible.
What should I do to get better performance?
Thanks
Note: the aim of this code is to display a heat zone over an ImageView.

setPixel has a big overhead. It is usually much faster to get a copy of the bitmap's pixels (getPixels) or build an int array from scratch, modify it, and copy it back in a single call (setPixels).
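Applied to the heat-zone loop from the question, that would look something like the following minimal sketch, reusing the question's fields (heatDatas, maxY, maxValue, nbIndexColors, colors, alpha) and keeping its interpolation unchanged:

int width = this.heatDatas.length;
int height = this.maxY;
int[] pixels = new int[width * height];
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        // Same color interpolation as in the question.
        double ratio = this.heatDatas[i][j] / (double) this.maxValue * this.nbIndexColors;
        int idx1 = (int) Math.floor(ratio);
        double d = ratio - idx1;
        int r = (int) ((colors[idx1 + 1][0] - colors[idx1][0]) * d + colors[idx1][0]);
        int g = (int) ((colors[idx1 + 1][1] - colors[idx1][1]) * d + colors[idx1][1]);
        int b = (int) ((colors[idx1 + 1][2] - colors[idx1][2]) * d + colors[idx1][2]);
        // Pixel (x=i, y=j) lives at index j * width + i in the packed array.
        pixels[j * width + i] = Color.argb(this.alpha, r, g, b);
    }
}
// One JNI crossing instead of width * height of them.
bmp.setPixels(pixels, 0, width, 0, 0, width, height);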

Related

Applying color gradient on an image in Android

I am trying to replicate the functionality of the QGIS software in an Android application. I want to apply a color gradient (a red-to-green LUT) to an image for further efficient analysis.
The image shown below is generated after applying the NDVI (Normalised Difference Vegetation Index) indicator to it. The value of NDVI ranges from -1 to +1.
The image shown below is generated after applying the color gradient to it. This image can now be analysed efficiently because it shows the plants as green and the ground as red.
I tried the following code in Android, where NDVI is the NDVI value ranging from -1 to +1. I have not included the code for finding the maximum and minimum NDVI values, but for my image the minimum NDVI is 0.11 and the maximum NDVI is 0.57:
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        final int pixel1 = src1[i][j];
        final int pixel2 = src2[i][j];
        final double X = pixel2 - pixel1;
        final double Y = pixel2 + pixel1;
        final double NDVI = (X / Y);
        R = (int) ((255 * NDVI) / maxNDVI);
        G = (int) ((255 * (maxNDVI - NDVI)) / maxNDVI);
        B = 0;
        pixels[i][j] = Color.argb(Color.alpha(pixel1), R, G, B);
    }
}
But I am not getting the appropriate result. Kindly help me out with this.
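For reference, the usual way to map a bounded index onto a two-color gradient is to normalize it against the observed range first; a minimal sketch, where minNDVI and maxNDVI are assumed names for the precomputed minimum and maximum:

// Normalize NDVI from [minNDVI, maxNDVI] into [0, 1], then interpolate red -> green.
double t = (NDVI - minNDVI) / (maxNDVI - minNDVI);
t = Math.max(0.0, Math.min(1.0, t)); // clamp in case a pixel falls outside the range
int R = (int) (255 * (1 - t));       // low NDVI (ground) -> red
int G = (int) (255 * t);             // high NDVI (plants) -> green
int B = 0;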

How to efficiently get and set bitmap pixels

I am currently making an app that involves altering the RGB values of pixels in a bitmap and creating a new bitmap after.
My problem is that I need help increasing the speed of this process (it can take minutes to process a bitmap with inSampleSize = 2, and forever with inSampleSize = 1). Right now I am using the getPixel and setPixel methods to alter the pixels, and I believe these two methods are the root of the problem, as they are very inefficient. The getPixels method didn't seem suitable because I am not altering the pixels in order (e.g. getting a pixel and changing a radius of 5 pixels around it to the same colour), unless anyone knows of a way to use getPixels for this (perhaps by putting the pixels into a 2D array).
This is part of my code:
public static Bitmap alteredBitmap(Bitmap bp)
{
    //initialize variables
    // ..................
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    for (int x = 0; x < width; x++) {
        int left = Math.max(0, x - RADIUS);
        int right = Math.min(x + RADIUS, width - 1);
        for (int y = 0; y < height; ++y) {
            int top = Math.max(0, y - RADIUS);
            int bottom = Math.min(y + RADIUS, height - 1);
            int maxIndex = -1;
            for (int j = top; j <= bottom; j++) {
                for (int i = left; i <= right; i++) {
                    pixelColor = bitmap.getPixel(i, j);
                    //get rgb values
                    //make changes to those values
                }
            }
            //set new rgb values
            bitmap.setPixel(x, y, Color.rgb(r, g, b));
        }
    }
    //return new bitmap
}
Much thanks in advance!
Consider looking at RenderScript, which is Android's high-performance compute framework. As you are iterating over width x height pixels and altering each one, which on a modern device could be around a million pixels or more, doing it in a single thread can take minutes. RenderScript can parallelize operations over the CPU or the GPU where possible.
http://android-developers.blogspot.com/2012/01/levels-in-renderscript.html
http://developer.android.com/guide/topics/renderscript/index.html
Google IO 2013 session:
https://youtu.be/uzBw6AWCBpU
RenderScript compatibility library: http://android-developers.blogspot.com/2013/09/renderscript-in-android-support-library.html
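As an illustration of the plumbing involved, here is a minimal sketch (using android.renderscript.*) that runs one of the built-in intrinsics over a bitmap from Java; a custom per-pixel filter would live in a .rs kernel instead of the intrinsic, but the Allocation setup looks the same, and the variable names here are placeholders:

RenderScript rs = RenderScript.create(context);
Allocation in = Allocation.createFromBitmap(rs, srcBitmap);
Allocation out = Allocation.createTyped(rs, in.getType());
// Built-in intrinsic standing in for a custom kernel; it runs in parallel
// across the CPU/GPU instead of a single Java thread.
ScriptIntrinsicColorMatrix script = ScriptIntrinsicColorMatrix.create(rs, Element.U8_4(rs));
script.setGreyscale(); // example transform; substitute your own color matrix
script.forEach(in, out);
out.copyTo(dstBitmap);
rs.destroy();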

Color Image in the Google Tango Leibniz API

I am trying to capture the image data in the onFrameAvailable method from a Google Tango. I am using the Leibniz release. The header file says the buffer contains HAL_PIXEL_FORMAT_YV12 pixel data, the release notes say the buffer contains YUV420SP, but the documentation says the pixels are in RGBA8888 format. I am a little confused; additionally, I don't really get image data, but a lot of magenta and green. Right now I am trying to convert from YUV to RGB similar to this one. I guess there is something wrong with the stride, too. Here is the code of the onFrameAvailable method:
int size = (int)(buffer->width * buffer->height);
for (int i = 0; i < buffer->height; ++i)
{
    for (int j = 0; j < buffer->width; ++j)
    {
        float y = buffer->data[i * buffer->stride + j];
        float v = buffer->data[(i / 2) * (buffer->stride / 2) + (j / 2) + size];
        float u = buffer->data[(i / 2) * (buffer->stride / 2) + (j / 2) + size + (size / 4)];

        const float Umax = 0.436f;
        const float Vmax = 0.615f;

        y = y / 255.0f;
        u = (u / 255.0f - 0.5f);
        v = (v / 255.0f - 0.5f);

        TangoData::GetInstance().color_buffer[3 * (i * width + j)] = y;
        TangoData::GetInstance().color_buffer[3 * (i * width + j) + 1] = u;
        TangoData::GetInstance().color_buffer[3 * (i * width + j) + 2] = v;
    }
}
I am doing the yuv to rgb conversion in the fragment shader.
Has anyone ever obtained an RGB image for the Google Tango Leibniz release? Or had someone similar problems when converting from YUV to RGB?
YUV420SP (aka NV21) is correct for the time being. An explanation is here. In this format you have a width x height array where each element is a Y byte, followed by a width/2 x height/2 array where each element is a V byte and a U byte. Your code is implementing YV12, which has separate planes for V and U instead of interleaving them in one array.
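To make the difference concrete, here is a minimal Java-style sketch of NV21 indexing for pixel (j, i), assuming a tightly packed buffer (stride == width); the VU plane starts right after the Y plane and interleaves V and U:

int ySize = width * height;
int y = data[i * width + j] & 0xFF;
// Each 2x2 block of pixels shares one V byte and one U byte, stored V-first.
int vuIndex = ySize + (i / 2) * width + (j & ~1);
int v = data[vuIndex] & 0xFF;
int u = data[vuIndex + 1] & 0xFF;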
You mention that you are doing YUV to RGB conversion in a fragment shader. If all you want to do with the camera images is draw them, then you can use TangoService_connectTextureId() and TangoService_updateTexture() instead of TangoService_connectOnFrameAvailable(). This approach delivers the camera image to you already in an OpenGL texture that gives your fragment shader RGB values without bothering with the pixel format details. You will need to bind to GL_TEXTURE_EXTERNAL_OES (instead of GL_TEXTURE_2D), and your fragment shader would look something like this:
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec4 v_t;
uniform samplerExternalOES colorTexture;

void main() {
    gl_FragColor = texture2D(colorTexture, v_t.xy);
}
If you really do want to pass YUV data to a fragment shader for some reason, you can do so without preprocessing it into floats. In fact, you don't need to unpack it at all - for NV21 just define a 1-byte texture for Y and a 2-byte texture for VU, and load the data as-is. Your fragment shader will use the same texture coordinates for both.
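A minimal sketch of that upload from Java with GLES20, assuming data holds the NV21 buffer, width/height are the image size, and yTexId/vuTexId are already-generated texture names:

ByteBuffer yPlane = ByteBuffer.wrap(data, 0, width * height);
ByteBuffer vuPlane = ByteBuffer.wrap(data, width * height, width * height / 2);

// Y plane: one byte per texel.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTexId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
        width, height, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yPlane);

// Interleaved VU plane: two bytes per texel at half resolution.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vuTexId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE_ALPHA,
        width / 2, height / 2, 0, GLES20.GL_LUMINANCE_ALPHA,
        GLES20.GL_UNSIGNED_BYTE, vuPlane);

The fragment shader then samples both textures with the same coordinates and does the YUV-to-RGB matrix math itself.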
By the way, in case someone else experienced problems capturing image data on the Leibniz release: one of the developers told me there is a bug concerning the camera and that it should be fixed in the Nash release.
The bug caused my buffer to be null, but when I used the Nash update I got data again. However, right now the problem is that the data I am getting doesn't make sense. I guess/hope the cause is that the tablet hasn't received the OTA update yet (there can be a gap between the actual release date and the OTA software update).
Just try the following code:
//C#
public bool YV12ToPhoto(byte[] data, int width, int height, out Texture2D photo)
{
    photo = new Texture2D(width, height);
    int uv_buffer_offset = width * height;
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            int x_index = j;
            if (j % 2 != 0)
            {
                x_index = j - 1;
            }
            // Get the YUV color for this pixel.
            int yValue = data[(i * width) + j];
            int uValue = data[uv_buffer_offset + ((i / 2) * width) + x_index + 1];
            int vValue = data[uv_buffer_offset + ((i / 2) * width) + x_index];
            // Convert the YUV value to RGB.
            float r = yValue + (1.370705f * (vValue - 128));
            float g = yValue - (0.689001f * (vValue - 128)) - (0.337633f * (uValue - 128));
            float b = yValue + (1.732446f * (uValue - 128));
            Color co = new Color();
            co.b = b < 0 ? 0 : (b > 255 ? 1 : b / 255.0f);
            co.g = g < 0 ? 0 : (g > 255 ? 1 : g / 255.0f);
            co.r = r < 0 ? 0 : (r > 255 ? 1 : r / 255.0f);
            co.a = 1.0f;
            photo.SetPixel(width - j - 1, height - i - 1, co);
        }
    }
    return true;
}
I have succeeded.

Android Histogram equalization algorithm gives me really bright or red image

I am doing histogram equalization on an image. I first get the RGB image and convert it to YUV. I run the histogram equalization algorithm on the Y channel of YUV and then convert back to RGB. Is it me, or does the image look weird? Am I doing this correctly? This image is pretty bright; other images come out a little red.
Here are the before/after images:
The algorithm (the commented-out values are ones I used previously for the conversion; both yield pretty much the same results):
public static void createContrast(Bitmap src) {
    int width = src.getWidth();
    int height = src.getHeight();
    Bitmap processedImage = Bitmap.createBitmap(width, height, src.getConfig());
    int A = 0, R, G, B;
    int pixel;
    float[][] Y = new float[width][height];
    float[][] U = new float[width][height];
    float[][] V = new float[width][height];
    int[] histogram = new int[256];
    Arrays.fill(histogram, 0);
    int[] cdf = new int[256];
    Arrays.fill(cdf, 0);
    float min = 257;
    float max = 0;

    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            pixel = src.getPixel(x, y);
            //Log.i("TEST","("+x+","+y+")");
            A = Color.alpha(pixel);
            R = Color.red(pixel);
            G = Color.green(pixel);
            B = Color.blue(pixel);
            /*Log.i("TESTEST","R: "+R);
            Log.i("TESTEST","G: "+G);
            Log.i("TESTEST","B: "+B);*/
            // convert to YUV
            /*Y[x][y] = 0.299f * R + 0.587f * G + 0.114f * B;
            U[x][y] = 0.492f * (B - Y[x][y]);
            V[x][y] = 0.877f * (R - Y[x][y]);*/
            Y[x][y] = 0.299f * R + 0.587f * G + 0.114f * B;
            U[x][y] = 0.565f * (B - Y[x][y]);
            V[x][y] = 0.713f * (R - Y[x][y]);
            // create a histogram
            histogram[(int) Y[x][y]] += 1;
            // get min and max values
            if (Y[x][y] < min) {
                min = Y[x][y];
            }
            if (Y[x][y] > max) {
                max = Y[x][y];
            }
        }
    }

    cdf[0] = histogram[0];
    for (int i = 1; i <= 255; i++) {
        cdf[i] = cdf[i - 1] + histogram[i];
        //Log.i("TESTEST","cdf of: "+i+" = "+cdf[i]);
    }

    float minCDF = cdf[(int) min];
    float denominator = width * height - minCDF;

    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            //Log.i("TEST","("+x+","+y+")");
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            Y[x][y] = ((cdf[(int) Y[x][y]] - minCDF) / denominator) * 255;
            /*R = minMaxCalc(Y[x][y] + 1.140f * V[x][y]);
            G = minMaxCalc(Y[x][y] - 0.395f * U[x][y] - 0.581f * V[x][y]);
            B = minMaxCalc(Y[x][y] + 2.032f * U[x][y]);*/
            R = minMaxCalc(Y[x][y] + 1.140f * V[x][y]);
            G = minMaxCalc(Y[x][y] - 0.344f * U[x][y] - 0.714f * V[x][y]);
            B = minMaxCalc(Y[x][y] + 1.77f * U[x][y]);
            /*Log.i("TESTEST","R: "+R);
            Log.i("TESTEST","G: "+G);
            Log.i("TESTEST","B: "+B);*/
            processedImage.setPixel(x, y, Color.argb(A, R, G, B));
        }
    }
}
My next step is to graph the histograms before and after. I just want to get an opinion here.
The question is a little bit old, but let me answer.
The reason is the way histogram equalization works. The algorithm tries to use the entire 0-255 range instead of the given image's actual range.
So if you give it a dark image, it will shift the relatively brighter pixels toward white and the relatively darker ones toward black.
If you give it a bright image, it will get darkened for the same reason.
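Concretely, the equalization step amounts to a lookup table built from the cumulative histogram; a minimal sketch in Java, assuming histogram[] has been filled as in the question's first pass and n is the pixel count (width * height):

int[] cdf = new int[256];
cdf[0] = histogram[0];
for (int i = 1; i < 256; i++) cdf[i] = cdf[i - 1] + histogram[i];

// First non-zero CDF entry, i.e. the darkest value actually present.
int cdfMin = 0;
for (int i = 0; i < 256; i++) { if (cdf[i] > 0) { cdfMin = cdf[i]; break; } }

int[] remap = new int[256];
for (int v = 0; v < 256; v++) {
    // A dark image saturates its CDF early, so mid tones map to near-white.
    remap[v] = Math.round(255f * (cdf[v] - cdfMin) / (n - cdfMin));
}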

Android: Efficiently ensuring that the pixels in a scaled bitmap have only two colors

I have a bitmap whose pixels contain only two ARGB values: pure black and pure transparent. When I scale the bitmap up in Android, it ends up with many ARGB values: pure black, pure transparent, and black with various levels of transparency (i.e. half-transparent black); this is due to the interpolation done automatically by Android. I would like the bitmap's pixels to contain only the original two ARGB values.
Currently I accomplish this with the following process:
my_bitmap = Bitmap.createScaledBitmap(
        BitmapFactory.decodeResource(context.getResources(), R.drawable.my_resource),
        new_width, new_height, false);
for (int i = 0; i < my_bitmap.getWidth(); i++) {
    for (int j = 0; j < my_bitmap.getHeight(); j++) {
        if (my_bitmap.getPixel(i, j) != Color.TRANSPARENT) {
            my_bitmap.setPixel(i, j, Color.BLACK);
        }
    }
}
This is achingly slow on a cheaper phone even for a small bitmap. Does anyone know how to either A) do this much faster or B) scale a bitmap up with no new ARGB values appearing?
I think the answer here is an algorithmic one.
Bitmap operations are very expensive; perhaps you can cache the bitmap somewhere and only redraw it when the interpolation is specifically requested?
My other idea would be to group some number of pixels together and give each group a "hasChanged" flag, set to true whenever something in the group changes, so the system knows it only has to redraw the groups that changed. This way you don't redraw things more often than necessary.
Hope this helps!
Do the scaling in code, since the built-in algorithm isn't what you want. You'll avoid the interpolation you don't want, and you won't have to undo it. (Excuse any coding errors; I wrote this without access to an IDE or compiler.)
my_bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.my_resource);
int srcWidth = my_bitmap.getWidth();
int srcHeight = my_bitmap.getHeight();
int[] src = new int[srcWidth * srcHeight];
my_bitmap.getPixels(src, 0, srcWidth, 0, 0, srcWidth, srcHeight);
int[] dst = new int[new_width * new_height];
// Cast to float so upscaling doesn't truncate the ratio to zero.
float scaleX = srcWidth / (float) new_width;
float scaleY = srcHeight / (float) new_height;
for (int y = 0; y < new_height; y++) {
    for (int x = 0; x < new_width; x++) {
        int srcY = (int) (y * scaleY);
        int srcX = (int) (x * scaleX);
        // Nearest-neighbour sample; rows are packed width ints apart.
        dst[y * new_width + x] = src[srcY * srcWidth + srcX];
    }
}
Bitmap newBitmap = Bitmap.createBitmap(dst, 0, new_width, new_width, new_height, Bitmap.Config.ARGB_8888);
