I have a YUV_420_888 image I got from the camera. I want to crop a rectangle out of grayscale of this image to feed to an image processing algorithm. This is what I have so far:
public static byte[] YUV_420_888toCroppedY(Image image, Rect cropRect) {
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    int ySize = yBuffer.remaining();
    byte[] yData = new byte[ySize];
    yBuffer.get(yData, 0, ySize);
    if (cropRect != null) {
        int cropArea = (cropRect.right - cropRect.left) * (cropRect.bottom - cropRect.top);
        byte[] croppedY = new byte[cropArea];
        int cropIndex = 0;
        // from the top of the rectangle to the bottom, sequentially add rows to the output array, croppedY
        for (int y = cropRect.top; y < cropRect.top + cropRect.height(); y++) {
            // (2x+W) * y + x
            int rowStart = (2 * cropRect.left + cropRect.width()) * y + cropRect.left;
            // (2x+W) * y + x + W
            int rowEnd = (2 * cropRect.left + cropRect.width()) * y + cropRect.left + cropRect.width();
            for (int x = rowStart; x < rowEnd; x++) {
                croppedY[cropIndex] = yData[x];
                cropIndex++;
            }
        }
        return croppedY;
    }
    return yData;
}
This function runs without error but the image I get out of it is garbage - it looks something like this:
I'm not sure how to solve this problem or what I'm doing wrong.
Your rowStart/rowEnd calculations are wrong.

You need to calculate the row start location from the source image's row stride, not from your crop window dimensions. And I'm not sure where you get the factor of 2 from; there's 1 byte per pixel in the Y channel of the image.

They should be roughly:

int yRowStride = image.getPlanes()[0].getRowStride();
...
int rowStart = y * yRowStride + cropRect.left;
int rowEnd = y * yRowStride + cropRect.left + cropRect.width();
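The same stride-aware indexing can be checked without any Android classes by running it on a plain byte array. This is a minimal sketch with hypothetical names (`YPlaneCrop`, `cropY`), assuming the Y plane has a pixel stride of 1:

```java
public class YPlaneCrop {
    // yData holds `height` rows of `rowStride` bytes each; only the first
    // `width` bytes of each row are real pixels, the rest is row padding.
    public static byte[] cropY(byte[] yData, int rowStride,
                               int left, int top, int cropW, int cropH) {
        byte[] out = new byte[cropW * cropH];
        for (int row = 0; row < cropH; row++) {
            // the row start is measured in source stride units, not crop units
            int srcPos = (top + row) * rowStride + left;
            System.arraycopy(yData, srcPos, out, row * cropW, cropW);
        }
        return out;
    }
}
```

Using `System.arraycopy` per row instead of a per-pixel inner loop is also noticeably faster on large frames.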
I need to crop every frame I get from the camera's onPreviewFrame. I want to crop it by operating directly on the byte array, without converting it to a bitmap, because the operation must be fast and cheap. After the crop I will use the output byte array directly, so I don't need any bitmap conversion.
int frameWidth;
int frameHeight;

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // Crop the data array directly.
    cropData(data, frameWidth, frameHeight, startCropX, startCropY, outputWidth, outputHeight);
}
private static byte[] cropNV21(byte[] img, int imgWidth, @NonNull Rect cropRect) {
    // 1.5 means 1.0 byte per pixel for Y plus 0.25 each for U and V
    int croppedImgSize = (int) Math.floor(cropRect.width() * cropRect.height() * 1.5);
    byte[] croppedImg = new byte[croppedImgSize];
    // Start point of the interleaved UV plane
    int imgYPlaneSize = (int) Math.ceil(img.length / 1.5);
    int croppedImgYPlaneSize = cropRect.width() * cropRect.height();
    // Y plane copy
    for (int w = 0; w < cropRect.height(); w++) {
        int imgPos = (cropRect.top + w) * imgWidth + cropRect.left;
        int croppedImgPos = w * cropRect.width();
        System.arraycopy(img, imgPos, croppedImg, croppedImgPos, cropRect.width());
    }
    // UV plane copy
    // U and V are subsampled 2x2, so each interleaved UV row is the same
    // width as a Y row, and there are height/2 UV rows
    for (int w = 0; w < (int) Math.floor(cropRect.height() / 2.0); w++) {
        int imgPos = imgYPlaneSize + (cropRect.top / 2 + w) * imgWidth + cropRect.left;
        int croppedImgPos = croppedImgYPlaneSize + (w * cropRect.width());
        System.arraycopy(img, imgPos, croppedImg, croppedImgPos, cropRect.width());
    }
    return croppedImg;
}
NV21 structure:
P.S.: Also, if the starting position of your rectangle is odd, the U and V will be swapped. To handle this case, I snap the origin to even coordinates:

if (cropRect.left % 2 == 1) {
    cropRect.left -= 1;
}
if (cropRect.top % 2 == 1) {
    cropRect.top -= 1;
}
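The layout math is easy to sanity-check on a tiny synthetic NV21 buffer. Below is a standalone copy of the crop routine (hypothetical name `Nv21CropDemo`, plain ints instead of an Android `Rect`) so it can run outside Android:

```java
public class Nv21CropDemo {
    // Standalone copy of the NV21 crop logic above (left/top must be even).
    static byte[] cropNV21(byte[] img, int imgWidth,
                           int left, int top, int w, int h) {
        byte[] out = new byte[w * h * 3 / 2];
        int yPlaneSize = img.length * 2 / 3;  // Y occupies 2/3 of an NV21 buffer
        // Y rows
        for (int row = 0; row < h; row++) {
            System.arraycopy(img, (top + row) * imgWidth + left,
                             out, row * w, w);
        }
        // interleaved VU rows: one per two Y rows, same width as a Y row
        for (int row = 0; row < h / 2; row++) {
            System.arraycopy(img, yPlaneSize + (top / 2 + row) * imgWidth + left,
                             out, w * h + row * w, w);
        }
        return out;
    }
}
```

On a 4x4 frame where the Y bytes are 0..15 and the VU bytes are 100..107, a 2x2 crop at (2, 2) should pick Y values {10, 11, 14, 15} and the VU pair {106, 107}, which the test below confirms.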
I've created an application that takes an image captured by the Android device and displays it in an ImageView. The user can then press a button to either blur or deblur the image. When I run the application on my Android device I can take an image with the camera and display it without any problems. The problem occurs when I press the blur button, which runs some code to blur the image: the application freezes and I get an OutOfMemoryError on a line of my code that allocates a new object and stores it in an array inside a nested for loop.
This is the code for the nested for loop:
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int xTranslated = (x + width / 2) % width;
        int yTranslated = (y + height / 2) % height;
        double real = temp[2 * (xTranslated + yTranslated * width)];
        double imaginary = temp[2 * (xTranslated + yTranslated * width) + 1];
        degradation[2 * (x + y * width)] = real;
        degradation[2 * (x + y * width) + 1] = imaginary;
        Complex c = new Complex(real, imaginary);
        complex[y * width + x] = c;
    }
}
This nested for loop deals with data extracted from the input image, which is stored as a Bitmap.
Here is the full method that applies the motion blur:
public Complex[] motionBlur(double[] degradation, int width, int height,
        double alpha, double gamma, double sigma) {
    Complex[] complex = new Complex[width * height];
    double[] temp = new double[2 * width * height];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            double teta = Math.PI * ((((x - width / 2) % width) * gamma)
                    + (((y - height / 2) % height) * sigma));
            Sinc sinc = new Sinc();
            double real = (Math.cos(teta) * sinc.value(teta)) * alpha;
            double imaginary = (Math.sin(teta) * sinc.value(teta)) * alpha;
            Complex cConj = new Complex(real, imaginary).conjugate();
            temp[2 * (x + y * width)] = cConj.getReal();
            temp[2 * (x + y * width) + 1] = cConj.getImaginary();
        }
    }
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int xTranslated = (x + width / 2) % width;
            int yTranslated = (y + height / 2) % height;
            double real = temp[2 * (xTranslated + yTranslated * width)];
            double imaginary = temp[2 * (xTranslated + yTranslated * width) + 1];
            degradation[2 * (x + y * width)] = real;
            degradation[2 * (x + y * width) + 1] = imaginary;
            Complex c = new Complex(real, imaginary);
            complex[y * width + x] = c;
        }
    }
    return complex;
}
Here is a link to what is being output by the logcat when I try to run the application: http://pastebin.com/ysbN9A3s
MainActivity.java:373 corresponds to the line,
Complex c = new Complex(real, imaginary);
Here is a nice talk about Android crashes.

Pierre-Yves discusses OOM (the OutOfMemory error) at 11:39. The key point is that with OOM, the line where the error is thrown is usually not the place where the memory is actually being consumed. You should profile your app to find where most memory is allocated. I have to admit that OOM is one of the hardest errors to resolve.

Good luck!
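In this case a rough back-of-the-envelope estimate already shows why the allocation fails: `motionBlur` allocates one `Complex` object per pixel plus a `double[2 * width * height]` scratch array. This is a hypothetical helper; the per-reference and per-object sizes are illustrative assumptions (they vary by JVM and compressed-oops settings):

```java
public class OomEstimate {
    // Rough lower bound, in bytes, for the allocations in motionBlur():
    // - double[2*w*h] scratch array (8 bytes per double)
    // - Complex[w*h] reference array (assume 4 bytes per reference)
    // - w*h Complex objects (assume ~32 bytes each: header + two doubles)
    static long motionBlurBytes(long w, long h) {
        long temp = 2 * w * h * 8;
        long refs = w * h * 4;
        long objects = w * h * 32;
        return temp + refs + objects;
    }
}
```

For a full 8 MP camera image (3264 x 2448) this comes to roughly 400 MB, far beyond a typical Android per-app heap of 64-512 MB, which is why downscaling the bitmap before processing (or using a primitive `double[]` layout instead of `Complex` objects) is usually necessary.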
Background
I've made a tiny Android library for handling bitmaps using JNI (link here)
In the long past, I've made some code of Bilinear Interpolation as a possible algorithm for scaling of images. The algorithm is a bit complex and uses pixels around to form the target pixel.
The problem
Even though there are no errors (no compilation errors and no runtime errors), the output image looks like this (width scaled by x2):
The code
Basically the original Java code used SWT and supported only RGB, but the logic is the same for the alpha channel. It previously worked perfectly (though now that I look at it, it seems to create a lot of objects along the way).
Here's the Java code:
/** Class for resizing imageData using the Bilinear Interpolation method */
public class BilinearInterpolation {
    /** Resizes the imageData using the Bilinear Interpolation algorithm */
    public static void resize(final ImageData inputImageData, final ImageData newImageData,
            final int oldWidth, final int oldHeight, final int newWidth, final int newHeight) {
        // position of the top left pixel of the 4 pixels to use interpolation on
        int xTopLeft, yTopLeft;
        int x, y, lastTopLefty;
        final float xRatio = (float) newWidth / (float) oldWidth,
                yratio = (float) newHeight / (float) oldHeight;
        // Y color ratio to use on top and bottom pixels for interpolation
        float ycRatio2 = 0, ycRatio1 = 0;
        // pixel target in the src
        float xt, yt;
        // X color ratio to use on left and right pixels for interpolation
        float xcRatio2 = 0, xcratio1 = 0;
        // copy data from source image to RGB values:
        RGB rgbTopLeft, rgbTopRight, rgbBottomLeft = null, rgbBottomRight = null,
                rgbTopMiddle = null, rgbBottomMiddle = null;
        RGB[][] startingImageData = new RGB[oldWidth][oldHeight];
        for (x = 0; x < oldWidth; ++x)
            for (y = 0; y < oldHeight; ++y) {
                rgbTopLeft = inputImageData.palette.getRGB(inputImageData.getPixel(x, y));
                startingImageData[x][y] = new RGB(rgbTopLeft.red, rgbTopLeft.green, rgbTopLeft.blue);
            }
        // do the resizing:
        for (x = 0; x < newWidth; x++) {
            xTopLeft = (int) (xt = x / xRatio);
            // when meeting the right edge, move left a little
            if (xTopLeft >= oldWidth - 1)
                xTopLeft--;
            if (xt <= xTopLeft + 1) {
                // we are between the left and right pixel
                xcratio1 = xt - xTopLeft;
                // color ratio in favor of the right pixel color
                xcRatio2 = 1 - xcratio1;
            }
            for (y = 0, lastTopLefty = Integer.MIN_VALUE; y < newHeight; y++) {
                yTopLeft = (int) (yt = y / yratio);
                // when meeting the bottom edge, move up a little
                if (yTopLeft >= oldHeight - 1)
                    yTopLeft--;
                if (lastTopLefty == yTopLeft - 1) {
                    // we went down only one rectangle
                    rgbTopLeft = rgbBottomLeft;
                    rgbTopRight = rgbBottomRight;
                    rgbTopMiddle = rgbBottomMiddle;
                    rgbBottomLeft = startingImageData[xTopLeft][yTopLeft + 1];
                    rgbBottomRight = startingImageData[xTopLeft + 1][yTopLeft + 1];
                    rgbBottomMiddle = new RGB(
                            (int) (rgbBottomLeft.red * xcRatio2 + rgbBottomRight.red * xcratio1),
                            (int) (rgbBottomLeft.green * xcRatio2 + rgbBottomRight.green * xcratio1),
                            (int) (rgbBottomLeft.blue * xcRatio2 + rgbBottomRight.blue * xcratio1));
                } else if (lastTopLefty != yTopLeft) {
                    // we moved to a totally different rectangle (happens at every loop start,
                    // and can happen more often when shrinking the picture)
                    rgbTopLeft = startingImageData[xTopLeft][yTopLeft];
                    rgbTopRight = startingImageData[xTopLeft + 1][yTopLeft];
                    rgbTopMiddle = new RGB(
                            (int) (rgbTopLeft.red * xcRatio2 + rgbTopRight.red * xcratio1),
                            (int) (rgbTopLeft.green * xcRatio2 + rgbTopRight.green * xcratio1),
                            (int) (rgbTopLeft.blue * xcRatio2 + rgbTopRight.blue * xcratio1));
                    rgbBottomLeft = startingImageData[xTopLeft][yTopLeft + 1];
                    rgbBottomRight = startingImageData[xTopLeft + 1][yTopLeft + 1];
                    rgbBottomMiddle = new RGB(
                            (int) (rgbBottomLeft.red * xcRatio2 + rgbBottomRight.red * xcratio1),
                            (int) (rgbBottomLeft.green * xcRatio2 + rgbBottomRight.green * xcratio1),
                            (int) (rgbBottomLeft.blue * xcRatio2 + rgbBottomRight.blue * xcratio1));
                }
                lastTopLefty = yTopLeft;
                if (yt <= yTopLeft + 1) {
                    // color ratio in favor of the bottom pixel color
                    ycRatio1 = yt - yTopLeft;
                    ycRatio2 = 1 - ycRatio1;
                }
                // all pixels prepared, so finally set the new pixel data
                newImageData.setPixel(x, y, inputImageData.palette.getPixel(new RGB(
                        (int) (rgbTopMiddle.red * ycRatio2 + rgbBottomMiddle.red * ycRatio1),
                        (int) (rgbTopMiddle.green * ycRatio2 + rgbBottomMiddle.green * ycRatio1),
                        (int) (rgbTopMiddle.blue * ycRatio2 + rgbBottomMiddle.blue * ycRatio1))));
            }
        }
    }
}
And here's the C/C++ code I've tried to make from it:
typedef struct
{
    uint8_t alpha, red, green, blue;
} ARGB;

// pack/unpack must agree on the channel order; here red is the most
// significant byte, matching convertIntToArgb below (the original pack
// function put the channels in a different order than the unpack function,
// which was one source of the garbled colors)
int32_t convertArgbToInt(ARGB argb)
{
    return (argb.red << 24) | (argb.green << 16) | (argb.blue << 8)
            | (argb.alpha);
}

void convertIntToArgb(uint32_t pixel, ARGB* argb)
{
    argb->red = ((pixel >> 24) & 0xff);
    argb->green = ((pixel >> 16) & 0xff);
    argb->blue = ((pixel >> 8) & 0xff);
    argb->alpha = (pixel & 0xff);
}
...
/**scales the image using a high-quality algorithm called "Bilinear Interpolation" */ //
JNIEXPORT void JNICALL Java_com_jni_bitmap_1operations_JniBitmapHolder_jniScaleBIBitmap(
JNIEnv * env, jobject obj, jobject handle, uint32_t newWidth,
uint32_t newHeight)
{
JniBitmap* jniBitmap = (JniBitmap*) env->GetDirectBufferAddress(handle);
if (jniBitmap->_storedBitmapPixels == NULL)
return;
uint32_t oldWidth = jniBitmap->_bitmapInfo.width;
uint32_t oldHeight = jniBitmap->_bitmapInfo.height;
uint32_t* previousData = jniBitmap->_storedBitmapPixels;
uint32_t* newBitmapPixels = new uint32_t[newWidth * newHeight];
// position of the top left pixel of the 4 pixels to use interpolation on
int xTopLeft, yTopLeft;
int x, y, lastTopLefty;
float xRatio = (float) newWidth / (float) oldWidth, yratio =
(float) newHeight / (float) oldHeight;
// Y color ratio to use on left and right pixels for interpolation
float ycRatio2 = 0, ycRatio1 = 0;
// pixel target in the src
float xt, yt;
// X color ratio to use on left and right pixels for interpolation
float xcRatio2 = 0, xcratio1 = 0;
ARGB rgbTopLeft, rgbTopRight, rgbBottomLeft, rgbBottomRight, rgbTopMiddle,
rgbBottomMiddle, result;
for (x = 0; x < newWidth; ++x)
{
xTopLeft = (int) (xt = x / xRatio);
// when meeting the most right edge, move left a little
if (xTopLeft >= oldWidth - 1)
xTopLeft--;
if (xt <= xTopLeft + 1)
{
// we are between the left and right pixel
xcratio1 = xt - xTopLeft;
// color ratio in favor of the right pixel color
xcRatio2 = 1 - xcratio1;
}
for (y = 0, lastTopLefty = -30000; y < newHeight; ++y)
{
yTopLeft = (int) (yt = y / yratio);
// when meeting the most bottom edge, move up a little
if (yTopLeft >= oldHeight - 1)
--yTopLeft;
if (lastTopLefty == yTopLeft - 1)
{
// we went down only one rectangle
rgbTopLeft = rgbBottomLeft;
rgbTopRight = rgbBottomRight;
rgbTopMiddle = rgbBottomMiddle;
//rgbBottomLeft=startingImageData[xTopLeft][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth) + xTopLeft],
&rgbBottomLeft);
//rgbBottomRight=startingImageData[xTopLeft+1][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth)
+ (xTopLeft + 1)], &rgbBottomRight);
rgbBottomMiddle.alpha = rgbBottomLeft.alpha * xcRatio2
+ rgbBottomRight.alpha * xcratio1;
rgbBottomMiddle.red = rgbBottomLeft.red * xcRatio2
+ rgbBottomRight.red * xcratio1;
rgbBottomMiddle.green = rgbBottomLeft.green * xcRatio2
+ rgbBottomRight.green * xcratio1;
rgbBottomMiddle.blue = rgbBottomLeft.blue * xcRatio2
+ rgbBottomRight.blue * xcratio1;
}
else if (lastTopLefty != yTopLeft)
{
// we went to a totally different rectangle (happens in every loop start,and might happen more when making the picture smaller)
//rgbTopLeft=startingImageData[xTopLeft][yTopLeft];
convertIntToArgb(previousData[(yTopLeft * oldWidth) + xTopLeft],
&rgbTopLeft);
//rgbTopRight=startingImageData[xTopLeft+1][yTopLeft];
convertIntToArgb(
        previousData[(yTopLeft * oldWidth) + (xTopLeft + 1)],
        &rgbTopRight);
rgbTopMiddle.alpha = rgbTopLeft.alpha * xcRatio2
+ rgbTopRight.alpha * xcratio1;
rgbTopMiddle.red = rgbTopLeft.red * xcRatio2
+ rgbTopRight.red * xcratio1;
rgbTopMiddle.green = rgbTopLeft.green * xcRatio2
+ rgbTopRight.green * xcratio1;
rgbTopMiddle.blue = rgbTopLeft.blue * xcRatio2
+ rgbTopRight.blue * xcratio1;
//rgbBottomLeft=startingImageData[xTopLeft][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth) + xTopLeft],
&rgbBottomLeft);
//rgbBottomRight=startingImageData[xTopLeft+1][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth)
+ (xTopLeft + 1)], &rgbBottomRight);
rgbBottomMiddle.alpha = rgbBottomLeft.alpha * xcRatio2
+ rgbBottomRight.alpha * xcratio1;
rgbBottomMiddle.red = rgbBottomLeft.red * xcRatio2
+ rgbBottomRight.red * xcratio1;
rgbBottomMiddle.green = rgbBottomLeft.green * xcRatio2
+ rgbBottomRight.green * xcratio1;
rgbBottomMiddle.blue = rgbBottomLeft.blue * xcRatio2
+ rgbBottomRight.blue * xcratio1;
}
lastTopLefty = yTopLeft;
if (yt <= yTopLeft + 1)
{
// color ratio in favor of the bottom pixel color
ycRatio1 = yt - yTopLeft;
ycRatio2 = 1 - ycRatio1;
}
// prepared all pixels to look at, so finally set the new pixel data
result.alpha = rgbTopMiddle.alpha * ycRatio2
+ rgbBottomMiddle.alpha * ycRatio1;
result.blue = rgbTopMiddle.blue * ycRatio2
+ rgbBottomMiddle.blue * ycRatio1;
result.red = rgbTopMiddle.red * ycRatio2
+ rgbBottomMiddle.red * ycRatio1;
result.green = rgbTopMiddle.green * ycRatio2
+ rgbBottomMiddle.green * ycRatio1;
newBitmapPixels[(y * newWidth) + x] = convertArgbToInt(result);
}
}
//get rid of old data, and replace it with new one
delete[] previousData;
jniBitmap->_storedBitmapPixels = newBitmapPixels;
jniBitmap->_bitmapInfo.width = newWidth;
jniBitmap->_bitmapInfo.height = newHeight;
}
The question
What am I doing wrong?
Is it also possible to make the code a bit more readable? I'm a bit rusty in C/C++, and I was always more of a C developer than a C++ developer.

EDIT: it now works fine. I've edited and fixed the code.

The only thing left to help with is tips on how to make it better.

It all started with a bad conversion of the colors, then moved on to the usage of pointers, and finally to the basic issue of where to put the pixels.

The code I've written now works fine (all of the needed fixes are included).

Soon you will all be able to use the new code in the GitHub project.
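Independent of SWT or JNI, the core of bilinear interpolation is a weighted blend of four neighboring pixels. A minimal single-channel sketch (hypothetical names `Bilerp`/`sample`, plain Java) that is easy to verify by hand:

```java
public class Bilerp {
    // Bilinearly interpolate one channel at fractional source position (xt, yt).
    // src is row-major, width columns; caller keeps xt < width-1, yt < height-1.
    static double sample(int[] src, int width, double xt, double yt) {
        int x0 = (int) xt, y0 = (int) yt;
        double fx = xt - x0, fy = yt - y0; // fractional parts = blend weights
        // blend horizontally along the top and bottom rows, then vertically
        double top = src[y0 * width + x0] * (1 - fx)
                + src[y0 * width + x0 + 1] * fx;
        double bottom = src[(y0 + 1) * width + x0] * (1 - fx)
                + src[(y0 + 1) * width + x0 + 1] * fx;
        return top * (1 - fy) + bottom * fy;
    }
}
```

Writing the inner step this explicitly makes index mistakes like the `rgbTopRight` one above easy to spot: each of the four fetches uses a distinct (row, column) pair.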
I am doing histogram equalization on an image. I first get the RGB image and convert it to YUV. I run the histogram equalization algorithm on the Y channel and then convert back to RGB. Is it just me, or does the image look weird? Am I doing this correctly? This image is pretty bright; other images come out a little red.
Here are the before/after images:
The algorithm (the commented values are values that I used previously for conversion. Both yield pretty much the same results) :
public static void createContrast(Bitmap src) {
    int width = src.getWidth();
    int height = src.getHeight();
    Bitmap processedImage = Bitmap.createBitmap(width, height, src.getConfig());
    int A = 0, R, G, B;
    int pixel;
    float[][] Y = new float[width][height];
    float[][] U = new float[width][height];
    float[][] V = new float[width][height];
    int[] histogram = new int[256];
    Arrays.fill(histogram, 0);
    int[] cdf = new int[256];
    Arrays.fill(cdf, 0);
    float min = 257;
    float max = 0;
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            R = Color.red(pixel);
            G = Color.green(pixel);
            B = Color.blue(pixel);
            // convert to YUV
            /*Y[x][y] = 0.299f * R + 0.587f * G + 0.114f * B;
            U[x][y] = 0.492f * (B - Y[x][y]);
            V[x][y] = 0.877f * (R - Y[x][y]);*/
            Y[x][y] = 0.299f * R + 0.587f * G + 0.114f * B;
            U[x][y] = 0.565f * (B - Y[x][y]);
            V[x][y] = 0.713f * (R - Y[x][y]);
            // create a histogram
            histogram[(int) Y[x][y]] += 1;
            // get min and max values
            if (Y[x][y] < min) {
                min = Y[x][y];
            }
            if (Y[x][y] > max) {
                max = Y[x][y];
            }
        }
    }
    cdf[0] = histogram[0];
    for (int i = 1; i <= 255; i++) {
        cdf[i] = cdf[i - 1] + histogram[i];
    }
    float minCDF = cdf[(int) min];
    float denominator = width * height - minCDF;
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            Y[x][y] = ((cdf[(int) Y[x][y]] - minCDF) / denominator) * 255;
            /*R = minMaxCalc(Y[x][y] + 1.140f * V[x][y]);
            G = minMaxCalc(Y[x][y] - 0.395f * U[x][y] - 0.581f * V[x][y]);
            B = minMaxCalc(Y[x][y] + 2.032f * U[x][y]);*/
            R = minMaxCalc(Y[x][y] + 1.140f * V[x][y]);
            G = minMaxCalc(Y[x][y] - 0.344f * U[x][y] - 0.714f * V[x][y]);
            B = minMaxCalc(Y[x][y] + 1.77f * U[x][y]);
            processedImage.setPixel(x, y, Color.argb(A, R, G, B));
        }
    }
}
My next step is to graph the histograms before and after. I just want to get an opinion here.
The question is a little old, but let me answer.

This is simply how histogram equalization works: the algorithm tries to use the full 0-255 range instead of the given image's range. So if you give it a dark image, it pushes the relatively brighter pixels towards white and the relatively darker ones towards black. If you give it a bright image, it will be darkened for the same reason.
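The remapping step on its own is easy to check on a tiny grayscale example. This sketch (hypothetical names `HistEq`/`equalize`, plain Java) uses the same `(cdf - cdfMin) / (N - cdfMin) * 255` formula as the code above:

```java
public class HistEq {
    // Map each gray level (0..255) through the normalized CDF:
    // newY = (cdf[y] - cdfMin) / (N - cdfMin) * 255
    static int[] equalize(int[] gray) {
        int[] hist = new int[256];
        for (int v : gray) hist[v]++;
        int[] cdf = new int[256];
        cdf[0] = hist[0];
        for (int i = 1; i < 256; i++) cdf[i] = cdf[i - 1] + hist[i];
        // cdfMin = CDF at the darkest level actually present in the image
        int cdfMin = 0;
        for (int i = 0; i < 256; i++) {
            if (hist[i] > 0) { cdfMin = cdf[i]; break; }
        }
        int n = gray.length;
        int[] out = new int[n];
        for (int i = 0; i < n; i++)
            out[i] = Math.round((cdf[gray[i]] - cdfMin) * 255f / (n - cdfMin));
        return out;
    }
}
```

Feeding it a low-contrast patch like {100, 100, 101, 102} stretches the levels to {0, 0, 128, 255}, which is exactly the full-range behavior described above: the darkest levels go to black and the brightest to white, regardless of where the input range sat.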
I have the following routine in a subclass of view:
It calculates an array of points that make up a line, then erases the previous lines, then draws the new lines ("impact" refers to the width in pixels, drawn as multiple adjacent lines). The curve is your basic bell curve, squeezed or stretched by variance and xFactor.
Unfortunately, nothing shows on the screen. A previous version with drawPoint() and no array worked, and I've verified the array contents are being loaded correctly, and I can see that my onDraw() is being triggered.
Any ideas why it might not be drawn? Thanks in advance!
protected void drawNewLine(int maxx, int maxy, Canvas canvas, int impact,
        double variance, double xFactor, int color) {
    // impact = 2 to 8; xFactor between 4 and 20; variance between 0.2 and 5
    double x = 0;
    double y = 0;
    int cx = maxx / 2;
    int cy = maxy / 2;
    int mu = cx;
    int index = 0;
    points[maxx << 1][1] = points[maxx << 1][0];
    for (x = 0; x < maxx; x++) {
        points[index][1] = points[index][0];
        points[index][0] = (float) x;
        Log.i(DEBUG_TAG, "x: " + x);
        index++;
        double root = 1.0 / (Math.sqrt(2 * Math.PI * variance));
        double exponent = -1.0 * (Math.pow(((x - mu) / maxx * xFactor), 2) / (2 * variance));
        double ePow = Math.exp(exponent);
        y = Math.round(cy * root * ePow);
        points[index][1] = points[index][0];
        points[index][0] = (float) (maxy - y - OFFSET);
        index++;
    }
    points[maxx << 1][0] = (float) impact;
    for (int line = 0; line < points[maxx << 1][1]; line++) {
        for (int pt = 0; pt < (maxx << 1); pt++) {
            pointsToPaint[pt] = points[pt][1];
        }
        for (int skip = 1; skip < (maxx << 1); skip = skip + 2)
            pointsToPaint[skip] = pointsToPaint[skip] + line;
        myLinePaint.setColor(Color.BLACK);
        canvas.drawLines(pointsToPaint, bLinePaint); // draw over old lines w/blk
    }
    for (int line = 0; line < points[maxx << 1][0]; line++) {
        for (int pt = 0; pt < maxx << 1; pt++) {
            pointsToPaint[pt] = points[pt][0];
        }
        for (int skip = 1; skip < maxx << 1; skip = skip + 2)
            pointsToPaint[skip] = pointsToPaint[skip] + line;
        myLinePaint.setColor(color);
        canvas.drawLines(pointsToPaint, myLinePaint); // new color
    }
}
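As a sanity check, the bell-curve formula in the loop can be evaluated in isolation (plain Java, hypothetical names `BellCurve`/`yValue`): the scaled Gaussian should peak at x = mu and fall off symmetrically, so if the array contents are correct the problem lies in the drawing path, not the math.

```java
public class BellCurve {
    // Same formula as in drawNewLine: a scaled Gaussian of (x - mu),
    // squeezed/stretched by xFactor and variance, scaled by cy.
    static double yValue(double x, double mu, double maxx,
                         double xFactor, double variance, double cy) {
        double root = 1.0 / Math.sqrt(2 * Math.PI * variance);
        double exponent = -Math.pow((x - mu) / maxx * xFactor, 2) / (2 * variance);
        return cy * root * Math.exp(exponent);
    }
}
```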
update: Replaced the drawLines() with drawPoint() in loop, still no joy
for (int p = 0; p < pointsToPaint.length; p = p + 2) {
    Log.i(DEBUG_TAG, "x " + pointsToPaint[p] + " y " + pointsToPaint[p + 1]);
    canvas.drawPoint(pointsToPaint[p], pointsToPaint[p + 1], myLinePaint);
}
// canvas.drawLines(pointsToPaint, myLinePaint);
I was attempting to draw from within onCreate() and onStart(). The View and its Canvas are never actually rendered for the first time until the end of onStart().
Aren't you supposed to call invalidate (like with a MapView) to force the view to redraw?

YourView.invalidate() (or maybe postInvalidate(), depending on where you are: main thread or not).

Here are the details.