Android AvoidXfermode is deprecated since API 16, is there a replacement? - android

I need to draw a bitmap on top of another bitmap, but I only want to draw over pixels that have a specific color (transparent, in this case).
I understand that AvoidXfermode could do that, but it has been deprecated since API 16.
Is there a different way to do this now?
Thank you.

I received the right answer in my private inbox, so I will share it here:
There is no replacement method.
AvoidXfermode was deprecated because it is not supported by hardware-accelerated rendering. However, it can still be used when drawing onto bitmaps (i.e., software canvases).
The other method, of course, is to go pixel by pixel and replace the ones with a specific color, but that's not what I was looking for.
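For completeness, that pixel-by-pixel fallback is simple to sketch in plain Java. The arrays below stand in for data obtained with Bitmap.getPixels() and written back with Bitmap.setPixels(); the class and method names are illustrative, not from the original post:

```java
public class TransparentBlend {
    // Copies src pixels into dst, but only where dst is fully transparent
    // (alpha byte == 0). Both arrays hold ARGB_8888 pixels of equal length,
    // e.g. obtained from Bitmap.getPixels() and written back with setPixels().
    public static void blendOntoTransparent(int[] dst, int[] src) {
        for (int i = 0; i < dst.length; i++) {
            if ((dst[i] >>> 24) == 0) { // alpha == 0 -> transparent pixel
                dst[i] = src[i];
            }
        }
    }
}
```

This is exactly the approach dismissed above as "not what I was looking for", but it works everywhere, including hardware-accelerated views.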

Here we go, assuming the bitmap sizes are the same (this snippet is HTML canvas JavaScript, but the same per-pixel idea applies):
var image1, image2; // assumed to be already-loaded images of equal size
var newCanvas = document.createElement('canvas');
var newContext = newCanvas.getContext('2d');
newCanvas.width = image1.width;
newCanvas.height = image1.height;
newContext.drawImage(image1, 0, 0);
var imgData = newContext.getImageData(0, 0, newCanvas.width, newCanvas.height);
newContext.drawImage(image2, 0, 0);
var imgData2 = newContext.getImageData(0, 0, newCanvas.width, newCanvas.height);
for (var i = 0; i < imgData.data.length; i += 4) {
    if (imgData.data[i] < 20            // if Red is less than 20
        && imgData.data[i + 1] == 40    // if Green is exactly 40
        && imgData.data[i + 2] >= 240   // if Blue is at least 240
        && imgData.data[i + 3] >= 240) { // if Alpha is at least 240
        imgData.data[i] = imgData2.data[i];
        imgData.data[i + 1] = imgData2.data[i + 1];
        imgData.data[i + 2] = imgData2.data[i + 2];
        imgData.data[i + 3] = imgData2.data[i + 3];
    }
}
imgData2 = null;
newContext.putImageData(imgData, 0, 0);
That simply copies the second image's pixel over as-is wherever a pixel meets those conditions.
The end result is either imgData, a byte array of pixel data that you can draw using putImageData, or the canvas itself (which you can essentially use as a new Image).

Related

Color values have the same value all over the image

I have the contour of an object and its center coordinates. Now I'm trying to detect the point lying on the border of the contour that is at the same height (y) as the center. The background is set to black, so I calculate the threshold image and iterate over the pixels, checking whether they are set to white or not. The problem is that, according to my code, all values are 0.
I tried working on the original image as well. The problem still occurs; I always get 0/255/255, no matter whether I'm inside the object or on the background.
private fun calcPointOnContour(point: Point, image: Mat): Point {
    var pointOnContour = Point()
    val ycrcb = getCbComponent(image)
    val imageThresh = getThresholdImage(ycrcb)
    for (i in point.x.toInt() until image.cols()) {
        val pixel = imageThresh.get(i, point.y.toInt())
        if (pixel[0] < 255) {
            pointOnContour = Point(i.toDouble(), point.y)
            break
        }
    }
    return pointOnContour
}

private fun getCbComponent(mat: Mat): Mat {
    val ycrcb = Mat(mat.rows(), mat.cols(), CvType.CV_8UC3)
    val lYCrCb = ArrayList<Mat>(3)
    Imgproc.cvtColor(mat, ycrcb, Imgproc.COLOR_RGB2YCrCb)
    Core.split(mat, lYCrCb)
    return lYCrCb[2]
}

private fun getThresholdImage(mat: Mat): Mat {
    val imageThresh = Mat.zeros(mat.rows(), mat.cols(), CvType.CV_8UC1)
    Imgproc.threshold(mat, imageThresh, 100.0, 255.0, Imgproc.THRESH_BINARY)
    return imageThresh
}
Imgproc.threshold is a method that converts your image to binary based on the threshold values you provide: https://docs.opencv.org/2.4/doc/tutorials/imgproc/threshold/threshold.html
Since you passed maxVal as 255, every pixel comes out as either 0 or 255.
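To illustrate that mapping, here is a minimal plain-Java sketch of what THRESH_BINARY does on a single-channel 8-bit plane (illustrative only; in the real code OpenCV's Imgproc.threshold does this work):

```java
public class BinaryThreshold {
    // Mimics Imgproc.threshold(src, dst, thresh, maxVal, THRESH_BINARY)
    // on a single-channel 8-bit plane: pixels strictly above thresh
    // become maxVal, everything else becomes 0.
    public static int[] apply(int[] gray, int thresh, int maxVal) {
        int[] out = new int[gray.length];
        for (int i = 0; i < gray.length; i++) {
            out[i] = gray[i] > thresh ? maxVal : 0;
        }
        return out;
    }
}
```

So getting only 0 or 255 back from the thresholded Mat is the expected behavior, not the bug.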
====
Imgproc.cvtColor(mat, ycrcb, Imgproc.COLOR_RGB2YCrCb)
Core.split(mat, lYCrCb)
I think it should be Core.split(ycrcb, lYCrCb): you convert into ycrcb, but then split the original mat, so the channel you return never comes from the converted image.
Also, can you add the code where you create the original Mat from the image? Maybe there's an issue there.

Android YUV to grayscale performance optimization

I'm trying to convert a YUV image to grayscale; basically I just need the Y values.
To do so I wrote this little piece of code (with frame being the YUV image):
imageConversionTime = System.currentTimeMillis();
size = frame.getSize();
byte nv21ByteArray[] = frame.getImage();
int lol;
for (int i = 0; i < size.width; i++) {
    for (int j = 0; j < size.height; j++) {
        lol = size.width * j + i;
        yMatrix.put(j, i, nv21ByteArray[lol]);
    }
}
bitmap = Bitmap.createBitmap(size.width, size.height, Bitmap.Config.ARGB_8888);
Utils.matToBitmap(yMatrix, bitmap);
imageConversionTime = System.currentTimeMillis() - imageConversionTime;
However, this takes about 13,500 ms. I need it to be a LOT faster (on my computer the same conversion takes 8.5 ms in Python). I'm working on a Motorola Moto E 4G 2nd generation; not super powerful, but it should be enough for converting images, right?
Any suggestions?
Thanks in advance.
First of all, I would assign size.width and size.height to local variables. I don't think the compiler will optimize the repeated field accesses by default, but I am not sure about this.
Furthermore, create an int[] representing the result instead of using a Matrix.
Then you could do something like this:
int[] grayScalePixels = new int[size.width * size.height];
int cntPixels = 0;
In your inner loop, set:
grayScalePixels[cntPixels] = nv21ByteArray[lol];
cntPixels++;
To get your final image, do the following:
Bitmap grayScaleBitmap = Bitmap.createBitmap(grayScalePixels, size.width, size.height, Bitmap.Config.ARGB_8888);
Hope it works properly (I have not tested it), but at least the principle shown should be applicable: rely on a plain array instead of per-element Matrix puts.
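One caveat with the sketch above: Bitmap.createBitmap(int[], ...) with Config.ARGB_8888 expects packed ARGB values, so storing raw Y bytes directly would not render as gray. A hedged plain-Java sketch of the conversion, replicating each luma byte into the R, G and B channels (not tested on a device):

```java
public class Nv21ToGray {
    // Converts the Y plane of an NV21 frame (the first width*height bytes)
    // into packed ARGB_8888 grayscale pixels, suitable for
    // Bitmap.createBitmap(int[], width, height, Config.ARGB_8888).
    public static int[] yPlaneToArgb(byte[] nv21, int width, int height) {
        int[] argb = new int[width * height];
        for (int i = 0; i < argb.length; i++) {
            int y = nv21[i] & 0xFF; // read the luma byte as unsigned
            // Opaque alpha, with the luma value in all three color channels.
            argb[i] = 0xFF000000 | (y << 16) | (y << 8) | y;
        }
        return argb;
    }
}
```

This also avoids the column-major traversal of the original loop; walking the byte array in order is much friendlier to the cache.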
Probably 2 years too late, but anyway ;)
To convert to grayscale, all you need to do is set the U/V values to a neutral value and leave the Y values as-is. Note that this code is for the YUY2 format. You can refer to this document for other formats.
private void convertToBW(byte[] ptrIn, String filePath) {
    // change all U and V values to 127 (because 128 doesn't fit in a signed byte without a cast)
    byte[] ptrOut = Arrays.copyOf(ptrIn, ptrIn.length);
    for (int i = 0, ptrInLength = ptrOut.length; i < ptrInLength; i++) {
        if (i % 2 != 0) {
            ptrOut[i] = (byte) 127;
        }
    }
    convertToJpeg(ptrOut, filePath);
}
For NV21/NV12, the chroma plane is the last third of the buffer (it starts after the first width*height luma bytes), so I think the loop would change to:
for (int i = ptrOut.length * 2 / 3; i < ptrOut.length; i++) {
    ptrOut[i] = (byte) 127;
}
Note: I didn't try this myself.
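Sketching that NV21 variant in full (a hedged, untested illustration; the interleaved VU plane occupies everything after the width*height luma bytes):

```java
import java.util.Arrays;

public class Nv21Grayscale {
    // Returns a copy of an NV21 frame with the chroma (VU) plane set to a
    // neutral value, leaving only luma information, i.e. a grayscale frame.
    // 127 matches the YUY2 snippet above.
    public static byte[] neutralizeChroma(byte[] nv21, int width, int height) {
        byte[] out = Arrays.copyOf(nv21, nv21.length);
        for (int i = width * height; i < out.length; i++) {
            out[i] = (byte) 127;
        }
        return out;
    }
}
```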
Also, I would suggest profiling your utils method and the createBitmap call separately.

Loading multiple textures in an array in Android Andengine

I'm using a for loop to implement the loading and handling of Sprite objects to display a keyboard for a game of hangman. The loop makes it through to the 4th iteration and then crashes. The error it gives me says:
Texture must not exceed the bounds of the atlas
This should actually work, as all the images are 64x64 and the atlas is declared as such:
this.mAtlas[i] = new BitmapTextureAtlas(this.getTextureManager(), 256, 256, TextureOptions.BILINEAR);
I'm using an array of atlases and an array of textures into which I load the images, and then I load the atlas. After that I pass the texture into a custom class that extends Sprite. Finally I attach the loaded sprite to the scene. Here is the whole code for the loop:
for (int i = 0; i < 28; i++)
{
    String name = Integer.toString(i);
    name += ".png";
    this.mAtlas[i] = new BitmapTextureAtlas(this.getTextureManager(), 256, 256, TextureOptions.BILINEAR);
    this.mTexture[i] = BitmapTextureAtlasTextureRegionFactory.createFromAsset(this.mAtlas[i], this, name, (i * 64) + 5, 0);
    this.mAtlas[i].load();
    if (i % 13 == 0)
    {
        yPos -= 64;
    }
    if (i < 26)
    {
        letterPass = alphabet.substring(i);
    }
    else if (i == 26)
    {
        letterPass = "BackSpace";
    }
    else if (i == 27)
    {
        letterPass = "return";
    }
    letters[i] = new Letter((i * 64) + 5.0f, yPos, this.mTexture[i].getHeight(), this.mTexture[i].getHeight(), this.mTexture[i], this.mEngine.getVertexBufferObjectManager());
    letters[i].setLetter(letterPass);
    mScene.attachChild(letters[i]);
}
The line where the crash occurs is:
this.mTexture[i] = BitmapTextureAtlasTextureRegionFactory.createFromAsset(this.mAtlas[i], this, name, (i*64) + 5,0);
I cannot seem to figure out why it's crashing, and I'd appreciate any help.
Your texture atlas is 256x256 pixels. Your sprites are 64x64 pixels, and you create a whole atlas for each of them, which wastes a lot of space. And it doesn't even work, because on this line:
this.mTexture[i] = BitmapTextureAtlasTextureRegionFactory.createFromAsset(this.mAtlas[i], this, name, (i*64) + 5, 0);
you are placing the texture onto the atlas at position [i * 64 + 5, 0]. I bet it fails on the 4th texture: 3 * 64 + 5 + 64 = 261, so you are out of bounds.
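That arithmetic is easy to check in isolation; a small illustrative helper (names are mine, not from AndEngine) confirming where the placement first leaves a 256-pixel atlas:

```java
public class AtlasBoundsCheck {
    // Returns the first sprite index whose placement x = i*spriteSize + 5
    // plus the sprite width exceeds the atlas width, or -1 if all fit.
    // Mirrors the createFromAsset(..., (i*64) + 5, 0) calls in the loop above.
    public static int firstOverflow(int spriteCount, int spriteSize, int atlasSize) {
        for (int i = 0; i < spriteCount; i++) {
            int x = i * spriteSize + 5;
            if (x + spriteSize > atlasSize) {
                return i;
            }
        }
        return -1;
    }
}
```

With 28 sprites of 64px on a 256px atlas, the first failing index is 3, i.e. exactly the 4th iteration reported in the question.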

Dealing with Android's texture size limit

I have a requirement to display somewhat big images in an Android app.
Right now I'm using an ImageView with a source Bitmap.
I understand OpenGL has a certain device-dependent limit on how big an image's dimensions can be for it to be processed.
Is there ANY way to display these images (with fixed width, without cropping) regardless of this limit, other than splitting the image into multiple ImageView elements?
Thank you.
UPDATE 01 Apr 2013
Still no luck; so far all suggestions were to reduce image quality. One suggested it might be possible to bypass this limitation by using the CPU to do the processing instead of the GPU (though it might take more time).
I don't understand: is there really no way to display long images with a fixed width without reducing image quality? I bet there is, and I'd love it if anyone would at least point me in the right direction.
Thanks everyone.
You can use BitmapRegionDecoder to break apart larger bitmaps (requires API level 10). I've written a method that uses this class and returns a single Drawable that can be placed inside an ImageView:
private static final int MAX_SIZE = 1024;

private Drawable createLargeDrawable(int resId) throws IOException {
    InputStream is = getResources().openRawResource(resId);
    BitmapRegionDecoder brd = BitmapRegionDecoder.newInstance(is, true);
    try {
        if (brd.getWidth() <= MAX_SIZE && brd.getHeight() <= MAX_SIZE) {
            return new BitmapDrawable(getResources(), is);
        }
        int rowCount = (int) Math.ceil((float) brd.getHeight() / (float) MAX_SIZE);
        int colCount = (int) Math.ceil((float) brd.getWidth() / (float) MAX_SIZE);
        BitmapDrawable[] drawables = new BitmapDrawable[rowCount * colCount];
        for (int i = 0; i < rowCount; i++) {
            int top = MAX_SIZE * i;
            int bottom = i == rowCount - 1 ? brd.getHeight() : top + MAX_SIZE;
            for (int j = 0; j < colCount; j++) {
                int left = MAX_SIZE * j;
                int right = j == colCount - 1 ? brd.getWidth() : left + MAX_SIZE;
                Bitmap b = brd.decodeRegion(new Rect(left, top, right, bottom), null);
                BitmapDrawable bd = new BitmapDrawable(getResources(), b);
                bd.setGravity(Gravity.TOP | Gravity.LEFT);
                drawables[i * colCount + j] = bd;
            }
        }
        LayerDrawable ld = new LayerDrawable(drawables);
        for (int i = 0; i < rowCount; i++) {
            for (int j = 0; j < colCount; j++) {
                ld.setLayerInset(i * colCount + j, MAX_SIZE * j, MAX_SIZE * i, 0, 0);
            }
        }
        return ld;
    }
    finally {
        brd.recycle();
    }
}
The method checks whether the drawable resource is smaller than MAX_SIZE (1024) on both axes. If it is, it just returns the drawable. If it's not, it breaks the image apart, decodes chunks of it, and places them in a LayerDrawable.
I chose 1024 because I believe most available phones support textures at least that large. If you want to find the actual texture size limit for a phone, you have to do some funky stuff through OpenGL, and it's not something I wanted to dive into.
I wasn't sure how you were accessing your images, so I assumed they were in your drawable folder. If that's not the case, it should be fairly easy to refactor the method to take whatever parameter you need.
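The row/column math in the method can be exercised on its own; an illustrative plain-Java sketch of the same tiling arithmetic (no Android types, names are mine):

```java
public class TileGrid {
    // How many MAX_SIZE-pixel tiles are needed along one dimension.
    public static int tileCount(int dimension, int maxSize) {
        return (int) Math.ceil((double) dimension / maxSize);
    }

    // Pixel bounds {left, top, right, bottom} of tile (row, col),
    // clamping the last row/column to the image edge, exactly as the
    // decodeRegion() rectangles above are computed.
    public static int[] tileBounds(int row, int col, int width, int height, int maxSize) {
        int rows = tileCount(height, maxSize);
        int cols = tileCount(width, maxSize);
        int top = maxSize * row;
        int bottom = row == rows - 1 ? height : top + maxSize;
        int left = maxSize * col;
        int right = col == cols - 1 ? width : left + maxSize;
        return new int[] { left, top, right, bottom };
    }
}
```

For example, a 2500x1500 image with MAX_SIZE 1024 decomposes into a 2x3 grid, and the bottom-right tile covers the leftover 452x476 strip.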
You can use BitmapFactory.Options to reduce the size of the picture. You can use something like this:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 3; // reduce size 3 times (note: the decoder may round this down to the nearest power of 2)
Have you seen how your maps app works? I once made a renderer for maps; you can use the same trick to display your image.
Divide your image into square tiles (e.g. 128x128 pixels). Create a custom ImageView that supports rendering from tiles. Your ImageView knows which part of the bitmap it should show at the moment and displays only the required tiles, loading them from your SD card. Using such a tile map you can display endless images.
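The core bookkeeping for such a tiled view is mapping the visible pixel window to a range of tile indices; a minimal illustrative sketch for one axis (names are mine, not from any library):

```java
public class TileWindow {
    // For one axis: given the visible window [offset, offset + viewSize)
    // in image pixels and a tile edge length, return the inclusive range
    // {first, last} of tile indices that must be loaded and drawn.
    public static int[] visibleTiles(int offset, int viewSize, int tileSize) {
        int first = offset / tileSize;
        int last = (offset + viewSize - 1) / tileSize; // index of the last visible pixel's tile
        return new int[] { first, last };
    }
}
```

Running this for both axes on every scroll gives the set of tiles to keep in memory; everything outside the range can be recycled.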
It would help if you gave us the dimensions of your bitmap.
Please understand that OpenGL runs against natural mathematical limits.
For instance, there is a very good reason a texture in OpenGL traditionally had to be a power of 2: it is really the only way the math of repeated downscaling works cleanly without any remainder.
So if you give us the exact dimensions of the smallest bitmap that's giving you trouble, some of us may be able to tell you what kind of actual limit you're running up against.

ALPHA_8 bitmaps and getPixel

I am trying to load a movement map from a PNG image. In order to save memory, after I load the bitmap I do something like this:
Bitmap mapBmp = tempBmp.copy(Bitmap.Config.ALPHA_8, false);
If I draw mapBmp I can see the map, but when I use getPixel() I always get 0 (zero).
Is there a way to retrieve the ALPHA information from a bitmap other than with getPixel()?
Seems to be an Android bug in the handling of ALPHA_8. I also tried copyPixelsToBuffer, to no avail. The simplest workaround is to waste lots of memory and use ARGB_8888.
Issue 25690
I found this question from Google, and I was able to extract the pixels using the copyPixelsToBuffer() method that Mitrescu Catalin ended up using. This is what my code looks like, in case anyone else finds this too:
public byte[] getPixels(Bitmap b) {
    int bytes = b.getRowBytes() * b.getHeight();
    ByteBuffer buffer = ByteBuffer.allocate(bytes);
    b.copyPixelsToBuffer(buffer);
    return buffer.array();
}
If you are coding for API level 12 or higher you could use getByteCount() instead to get the total number of bytes to allocate. However if you are coding for API level 19 (KitKat) you should probably use getAllocationByteCount() instead.
I was able to find a nice and fairly clean way to create boundary maps. I create an ALPHA_8 bitmap from the start and paint my boundary map with paths. Then I use copyPixelsToBuffer() to transfer the bytes into a ByteBuffer, and I use that buffer to "getPixels" from.
I think this is a good solution, since you can scale the path() up or down and draw the boundary map at the desired screen resolution, with no IO + decode operations.
Bitmap.getPixel() is useless for ALPHA_8 bitmaps, it always returns 0.
I developed a solution with the PNGJ library: read the image from assets, then create a Bitmap with Config.ALPHA_8.
import ar.com.hjg.pngj.IImageLine;
import ar.com.hjg.pngj.ImageLineHelper;
import ar.com.hjg.pngj.PngReader;

public Bitmap getAlpha8BitmapFromAssets(String file) {
    Bitmap result = null;
    try {
        PngReader pngr = new PngReader(getAssets().open(file));
        int channels = pngr.imgInfo.channels;
        if (channels < 3 || pngr.imgInfo.bitDepth != 8)
            throw new RuntimeException("This method is for RGB8/RGBA8 images");
        int bytes = pngr.imgInfo.cols * pngr.imgInfo.rows;
        ByteBuffer buffer = ByteBuffer.allocate(bytes);
        for (int row = 0; row < pngr.imgInfo.rows; row++) {
            IImageLine l1 = pngr.readRow();
            for (int j = 0; j < pngr.imgInfo.cols; j++) {
                int original_color = ImageLineHelper.getPixelARGB8(l1, j);
                byte x = (byte) Color.alpha(original_color);
                buffer.put(row * pngr.imgInfo.cols + j, x ^= 0xff); // invert alpha
            }
        }
        pngr.end();
        result = Bitmap.createBitmap(pngr.imgInfo.cols, pngr.imgInfo.rows, Bitmap.Config.ALPHA_8);
        result.copyPixelsFromBuffer(buffer);
    } catch (IOException e) {
        Log.e(LOG_TAG, e.getMessage());
    }
    return result;
}
I also invert the alpha values because of my particular needs. This code is only tested on API 21.
