Android Bitmap Color Modify in Real Time

I have an ImageView whose color I want to change based on a user choice.
From examples on the internet, the only way I can see to really do this is by going through and modifying each pixel; however, it seems to be EXTREMELY slow.
If I add this into my code, it takes long enough that the user is prompted to force close or wait:
for (int i = 0; i < mBitmap.getHeight(); ++i) {
    for (int g = 0; g < mBitmap.getWidth(); ++g) {
        // per-pixel work goes here, e.g. mBitmap.getPixel(g, i) and mBitmap.setPixel(g, i, ...)
    }
}
What is the best way to change the color of the image?
The image is small (320x100) and mostly transparent, with a smaller image inside it; it is that smaller image whose color I want to change.

The problem lies in using getPixel(x, y). Grabbing each pixel one by one is a very slow process. Instead, use getPixels:
void getPixels(int[] pixels, int offset, int stride, int x, int y, int width, int height)
Returns in pixels[] a copy of the data in the bitmap.
It returns an array of integers with the pixel values; operate on that array and then write it back with setPixels, and it will be much faster (although it requires more memory).
For a small image this method will do. The stride is equal to the image width:
mBitmap.getPixels(pixels, 0, mBitmap.getWidth(), 0, 0, mBitmap.getWidth(), mBitmap.getHeight());
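A minimal sketch of the full read-modify-write cycle, assuming mBitmap is mutable and newColor is the user-chosen ARGB color (hypothetical name):
int width = mBitmap.getWidth();
int height = mBitmap.getHeight();
int[] pixels = new int[width * height];
mBitmap.getPixels(pixels, 0, width, 0, 0, width, height);
for (int i = 0; i < pixels.length; i++) {
    // recolor every non-transparent pixel, keeping its original alpha
    if (Color.alpha(pixels[i]) != 0) {
        pixels[i] = (pixels[i] & 0xFF000000) | (newColor & 0x00FFFFFF);
    }
}
mBitmap.setPixels(pixels, 0, width, 0, 0, width, height);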

In order of complexity:
Check the samples in the API Demos for the usage of ColorFilter and ColorMatrix (see the sketch after this list). Since you describe it as an image within an image that you want to modify, this may not apply.
Put your processing code on its own thread to avoid the Application Not Responding dialog. Look into AsyncTask. You may need to show a wait animation while it's processing.
Consider OpenGL ES 1.x. Use the image as a texture and overlay a color with alpha to get the effect. Although this would perform better, the complexity of adding UI elements would need to be taken into account (i.e. you would build your own).
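For the first option, a minimal sketch, assuming the bitmap is shown in an ImageView called imageView and chosenColor is the user-selected ARGB color (hypothetical names); a PorterDuffColorFilter is one ready-made ColorFilter that tints only the non-transparent pixels:
// SRC_IN keeps the image's alpha and replaces the color of every opaque pixel
imageView.setColorFilter(new PorterDuffColorFilter(chosenColor, PorterDuff.Mode.SRC_IN));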

Related

Drawing on the android screen with a coordinate system

My question is probably going to be very confusing to read at first, so just bear with me. I'll start off with a little preface for context:
Preface:
I have an app that will be using an array for path-finding from a map.
That is very vague, so to clarify: there will be an array of characters representing walls, stairs, etc., and a function that finds the best path.
I want to display that path on the Android screen.
The path-finding function will generate characters (probably 'x' or something) that represent the generated path.
So, to make it clearer: there will be a "path" of 'x's in the array, and those 'x's represent the path that is going to show up on the Android screen.
My actual question:
How do I translate an 'x' in the array into a line on the screen? My idea was to use a for loop with an if statement that checks whether there is an 'x', and if there is, to display a little red dot/line at the corresponding position in a second array that represents the actual screen.
I tried to research this, but it's such an awkward thing to type into Google that I came up with nothing.
Is there some sort of built-in Android function that lets you assign different colours to different coordinates?
This is roughly what I want to appear on the screen: if this were the app, the blue would be represented by 'x's in the first array.
There are several ways to achieve this effect of having a coordinate system map onto a matrix that describes a path.
Depending on the size of the array and the frequency of update calls (it sounds like the path-finding runs once, with a single render afterwards), it probably wouldn't be too expensive to just loop through it. What I would personally do is start by looking at how to draw on a canvas, get the screen size, and adjust the bounds accordingly.
Get screen dimensions in pixels - How to get screen dimensions
http://danielnadeau.blogspot.com/2012/01/android-canvas-beginners-tutorial.html - A nice tutorial on canvases
Once you can draw to a scaled canvas, it is simply a matter of running a loop that looks something like:
float scale_x = (float) screen_width / columns;   // pixels per grid square
float scale_y = (float) screen_height / rows;
for (int x = 0; x < columns; x++) {
    for (int y = 0; y < rows; y++) {
        // if a path cell is found, draw a colored square covering it
        if (data[x][y] == 'x')
            canvas.drawRect(x * scale_x, y * scale_y, (x + 1) * scale_x, (y + 1) * scale_y, paint);
    }
}
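For context, a minimal sketch of where the canvas and screen size above might come from, assuming the grid is handed to a custom View (all names hypothetical):
public class PathView extends View {
    private final char[][] data;            // path-finding result: 'x' marks a path cell
    private final Paint paint = new Paint();

    public PathView(Context context, char[][] data) {
        super(context);
        this.data = data;
        paint.setColor(Color.RED);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        float scale_x = (float) getWidth() / data.length;       // columns
        float scale_y = (float) getHeight() / data[0].length;   // rows
        for (int x = 0; x < data.length; x++)
            for (int y = 0; y < data[0].length; y++)
                if (data[x][y] == 'x')
                    canvas.drawRect(x * scale_x, y * scale_y,
                            (x + 1) * scale_x, (y + 1) * scale_y, paint);
    }
}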

Using a rounded corners drawable

There is a nice post by the well-known Google developer Romain Guy that shows how to use a rounded-corners drawable (called "StreamDrawable" in his code) on a view efficiently.
The sample itself works very well on my Galaxy S3 when in portrait mode, but I have a few issues with it:
If the screen is small (for example on QVGA screens), the shown images get cropped.
If the input bitmap is smaller than the size I wish to show it at, the output image has its edges smeared. Even on the Galaxy S3, when you run the sample code in landscape, it looks awful.
I'm still not sure about this (since I work around it by scaling the image before using the sample code), but I think even this solution is a bit slow when used in a ListView. Maybe there is a RenderScript solution for this?
It doesn't matter whether I use setImageDrawable or setBackgroundDrawable; it must be something in the drawable itself.
I've tried playing with the variables and the BitmapShader, but nothing worked. Sadly, TileMode doesn't have a value for just stretching the image, only for tiling it in some way.
As a workaround I can create a new scaled bitmap, but it's just a workaround. Surely there is a better way that also doesn't use more memory than it should.
How do I fix those issues and use this great code?
I think the solution presented on this website works well.
Unlike other solutions, it doesn't cause memory leaks, even though it is based on Romain Guy's solution.
EDIT: the support library now also offers RoundedBitmapDrawable (created via RoundedBitmapDrawableFactory).
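For reference, a minimal sketch of that support-library route, assuming a Bitmap called bitmap and a corner radius in pixels called cornerRadiusPx (hypothetical names):
RoundedBitmapDrawable rounded = RoundedBitmapDrawableFactory.create(getResources(), bitmap);
rounded.setCornerRadius(cornerRadiusPx);
imageView.setImageDrawable(rounded);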
I had some size issues with this code, and I solved them. Maybe this will help you, too:
1) In the constructor, store the bitmap in a field (e.g. private Bitmap bmp;).
2) Override two more methods:
@Override
public int getIntrinsicWidth() {
    return bmp.getWidth();
}

@Override
public int getIntrinsicHeight() {
    return bmp.getHeight();
}
Best regards,
DaRolla
The underlying problem is that the BitmapShader's TileMode doesn't have a scaling option. You'll note in the source that it's set to Shader.TileMode.CLAMP, and the docs describe that as:
replicate the edge color if the shader draws outside of its original bounds
To work around this, there are three solutions:
Constrain the size of the view in which the drawable is used to the size of the bitmap.
Constrain the drawing region; for instance, change:
int width = bounds.width() - mMargin;
int height = bounds.height() - mMargin;
mRect.set(mMargin, mMargin, width, height);
To:
int width = Math.min(mBitmap.getWidth(), bounds.width()) - mMargin;
int height = Math.min(mBitmap.getHeight(), bounds.height()) - mMargin;
mRect.set(mMargin, mMargin, width, height);
Scale the bitmap to the size of the drawable. I've moved creating the shader into onBoundsChange() and have opted to create a new bitmap from here:
bitmap = Bitmap.createScaledBitmap(mBitmap, width, height, true);
mBitmapShader = new BitmapShader(bitmap,
Shader.TileMode.CLAMP, Shader.TileMode.CLAMP);
Note that this is a potentially slow operation and it will run on the main thread. You might want to consider carefully how you implement it before going for this last solution.

OpenGL ES - glReadPixels

I am taking a screenshot with glReadPixels to perform a "cross-over" effect between two images.
On the Marmalade SDK simulator the screenshot is taken just fine and the "cross-over" effect works a treat.
However, on iOS and Android devices the screenshot comes out corrupted.
I always read the screen as RGBA, 1 byte per channel, since the documentation says that format is ALWAYS accepted.
Here is the code used to take the screenshot:
uint8* Gfx::ScreenshotBuffer(int& deviceWidth, int& deviceHeight, int& dataLength) {
    /// width/height
    deviceWidth = IwGxGetDeviceWidth();
    deviceHeight = IwGxGetDeviceHeight();
    int rowLength = deviceWidth * 4; /// data always returned by GL as RGBA, 1 byte each
    dataLength = rowLength * deviceHeight;

    // set the target framebuffer to read
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);

    uint8* buffer = new uint8[dataLength];
    glReadPixels(0, 0, deviceWidth, deviceHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    return buffer;
}

void Gfx::ScreenshotImage(CIwImage* img, uint8*& pbuffer) {
    int deviceWidth, deviceHeight, dataLength;
    pbuffer = ScreenshotBuffer(deviceWidth, deviceHeight, dataLength);

    img->SetFormat(CIwImage::ABGR_8888);
    img->SetWidth(deviceWidth);
    img->SetHeight(deviceHeight);
    img->SetBuffers(pbuffer, dataLength, 0, 0);
}
That is a driver bug. Simple as that.
The driver got the pitch of the surface in video memory wrong. You can clearly see this in the upper lines. Also, the garbage at the lower part of the image is memory where the driver thinks the image is stored, but there is different data there; textures or vertex data, maybe.
And sorry, I know of no way to fix that. You may have better luck with a different surface-format or by enabling/disabling multisampling.
In the end, it was a lack of memory: "new uint8[dataLength]" never returned a valid pointer, so the whole process produced corrupted output.
TomA, your idea of clearing the buffer actually helped me solve the problem. Thanks.
I don't know about Android or the SDK you're using, but on iOS, when I take a screenshot, I have to make the buffer the size of the next power-of-two (POT) texture, something like this:
int x = NextPot((int)screenSize.x * retina);
int y = NextPot((int)screenSize.y * retina);
void* buffer = malloc(x * y * 4);
glReadPixels(0, 0, x, y, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
The function NextPot just gives me the next POT size, so if the screen size was 320x480, x and y would be 512x512.
Maybe what you're seeing is the wrap-around of the buffer, because it's expecting a bigger buffer size?
This could also be a reason why it works in the simulator and not on the device: my graphics card doesn't have the POT size limitation, and I get a similar (weird-looking) result.
What I assume is happening is that you are trying to use glReadPixels on a window that is covered. If the view area is covered, the result of glReadPixels is undefined.
See How do I use glDrawPixels() and glReadPixels()? and The Pixel Ownership Problem.
As said here:
The solution is to make an offscreen buffer (FBO) and render to the FBO.
Another option is to make sure the window is not covered when you use glReadPixels.
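For reference, a minimal Android (GLES20, Java) sketch of the FBO route; width and height are the capture dimensions, error checking and texture parameters are omitted, and all names are hypothetical:
int[] fbo = new int[1];
int[] tex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
// ... draw the scene into the FBO here ...
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the default framebuffer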
I am taking screenshots of my Android game without any problems on Android devices using glReadPixels.
I am not sure yet what the problem is in your case; I need more information. So let's start:
1. I would recommend that you not specify the PixelStore format. I am worried about your 1-byte alignment: do you really use it, and do you know what it does? It seems you get exactly what you specify: one extra byte (look at your image, there is one extra pixel all the time!) instead of a fully packed image. So try removing this:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
2. I am not sure about the C code, as I have only worked in Java, but this looks like a possible culprit:
// width/height
deviceWidth = IwGxGetDeviceWidth();
deviceHeight = IwGxGetDeviceHeight();
Are you getting the device size here? You should use your OpenGL surface size instead, like this:
public void onSurfaceChanged(GL10 gl, int width, int height) {
    int surfaceWidth = width;
    int surfaceHeight = height;
}
3. What are you doing next with the captured image? Are you aware that the memory block you get from OpenGL is RGBA, but all non-OpenGL image operations expect ARGB?
For example, here in your code you expect alpha to be the first channel, not the last:
img->SetFormat(CIwImage::ABGR_8888);
4. If 1, 2 and 3 did not help, you might want to save the captured screen to the phone's SD card to examine later. I have a program that converts an OpenGL RGBA block to a normal bitmap for examination on a PC; I may share it with you.
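For reference, a minimal Java sketch of such a conversion, assuming rgba is the byte array returned by glReadPixels for a width x height capture (hypothetical names); note that glReadPixels also returns rows bottom-up, so a vertical flip may still be needed:
int[] argb = new int[width * height];
for (int i = 0; i < width * height; i++) {
    int r = rgba[i * 4]     & 0xFF;
    int g = rgba[i * 4 + 1] & 0xFF;
    int b = rgba[i * 4 + 2] & 0xFF;
    int a = rgba[i * 4 + 3] & 0xFF;
    argb[i] = (a << 24) | (r << 16) | (g << 8) | b; // repack RGBA bytes as an ARGB int
}
Bitmap bmp = Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);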
I don't have a solution for fixing glReadPixels. My suggestion is to change your algorithm to avoid the need to read data back from the screen.
Take a look at this page. These guys have done a page-flip effect entirely in Flash. It's all in 2D; the illusion is achieved just with shadow gradients.
I think you can use a similar approach, but a little better, in 3D. Basically, you have to split the effect into three parts: the front-facing top page (the clouds), the bottom page (the girl), and the back side of the front page. You have to draw each part separately. You can easily draw the front-facing top page and the bottom page together on the same screen; you just need to invoke the drawing code for each with a preset clipping region aligned with the split line where the top page bends. After you have the top and bottom page sections drawn, you can draw the grey back-facing portion on top, also aligned to the split line.
With this approach the only thing you lose is a little bit of deformation where the clouds image starts to bend up; no deformation will occur with this method. Hopefully that will not diminish the effect; I think the shadows are far more important for giving the depth effect and will hide this minor inconsistency.

Android Drawable looks ugly due to scaling

I am writing a View that should show a drawable that seems to never end.
It should be two or three times the display size and move slowly across the display.
To do that, I studied some sample code by Google and found the important lines:
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    canvasWidth = width;
    canvasHeight = height;
    float sf = backgroundImage.getWidth() / canvasWidth;
    backgroundImage = Bitmap.createScaledBitmap(backgroundImage,
            (int) (canvasWidth * sf), canvasHeight, true);
}
to rescale the image, and then:
// decrement the far background
backgroundXPos = backgroundXPos - DELTAMOVE;
// calculate the wrap factor for matching image draw
int newFarX = backgroundImage.getWidth() - (-backgroundXPos);
// if we have scrolled all the way, reset to start
if (newFarX <= 0) {
    backgroundXPos = 0;
    // only need one draw
    canvas.drawBitmap(backgroundImage, backgroundXPos, 0, null);
} else {
    // need to draw original and wrap
    canvas.drawBitmap(backgroundImage, backgroundXPos, 0, null);
    canvas.drawBitmap(backgroundImage, newFarX, 0, null);
}
to draw the moving image. The image is already moving; that part works fine.
But, and this is the point of my question, the image looks very ugly. The original is 960x190 pixels at 240 ppi. It should be drawn inside a view that is 80dip high with "fill_parent" width.
It should look the same (and good) on all devices. I have tried a lot, but I don't know how to make the picture look nice.
Thanks for your help.
Best regards,
Till
Since you're saying it's a never-ending drawable, you're probably writing a game of some sort. If your image is pixel art, then you don't want any scaling at all: pixel-art images cannot be scaled and keep their crisp look (you can try nearest-neighbour interpolation and scaling to an integer multiple of the original, which sometimes works, but sometimes you will still need manual tweaks; see the sketch below). This is the rare case where you actually need a different image resource for each screen resolution.
Otherwise you might want to use a vector image, but if, as you said, your original is a high-resolution bitmap, a vector image probably won't help much here.
By the way, you probably want to show a screenshot; "looks ugly" is about as helpful as saying "my code does not work".
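A minimal sketch of that integer-multiple, nearest-neighbour scaling, assuming src is the source bitmap and factor is a whole-number multiplier (hypothetical names):
// filter = false keeps nearest-neighbour sampling, preserving crisp pixel edges
Bitmap scaled = Bitmap.createScaledBitmap(src, src.getWidth() * factor, src.getHeight() * factor, false);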
Just a guess, but instead of passing a null paint to your drawBitmap() calls, try making a paint with bitmap filtering disabled:
Paint p = new Paint();
p.setFilterBitmap(false);
canvas.drawBitmap(backgroundImage, backgroundXPos, 0, p);
Hope that helps.

Is there a way to load and draw partially a bitmap from file in Android?

Say I have a somewhat large bitmap on disk (i.e. one that does not fit in most phones' memory). I want to draw only parts of it on the screen, unscaled (i.e. inSampleSize == 1).
Is there a way to load/draw just the part I want, given a Rect specifying the area, without loading the entire bitmap content?
I'm quite confident this is possible, since you can load a really large bitmap file into an ImageView without problems, so there must be some built-in way to handle large bitmaps. After a few attempts, I've found a solution:
Instead of loading the entire bitmap and drawing it manually yourself, load it as a Drawable instead:
InputStream mapInput = getResources().openRawResource(R.drawable.transit_map);
_map = Drawable.createFromStream(mapInput, "transit_map");
_map.setBounds(0, 0, _mapDimension.width(), _mapDimension.height());
I'm using a resource file, but since you can use Drawable.createFromStream to load an image from any InputStream, it should work with arbitrary bitmaps.
Then use the Drawable.draw method to draw it onto the desired canvas, like so:
int left = -(int) contentOffset.x;
int top = -(int) contentOffset.y;
int right = (int) (zoom * _mapDimension.width() - contentOffset.x);
int bottom = (int) (zoom * _mapDimension.height() - contentOffset.y);
_map.setBounds(left, top, right, bottom);
_map.draw(canvas);
As in the case above, you can also scale and translate the bitmap by manipulating the drawable's bounds; only the relevant parts of the bitmap will be loaded and drawn onto the Canvas.
The result is a pinch-zoomable view from a single 200 KB bitmap file. I've also tested this with a 22 MB PNG file and it still works without any OutOfMemoryError, including when the screen orientation changes.
These days the most relevant API for this is BitmapRegionDecoder.
Note: it is available since Android API level 10.
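For reference, a minimal sketch of how it might be used, assuming the file path and Rect are valid and exception handling is omitted (hypothetical names):
BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance("/sdcard/large_image.png", false);
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1; // no downscaling, as the question requires
Bitmap region = decoder.decodeRegion(new Rect(0, 0, 320, 240), options);
imageView.setImageBitmap(region);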
It can easily be done by using RapidDecoder.
import rapid.decoder.BitmapDecoder;
Rect bounds = new Rect(10, 20, 30, 40);
Bitmap bitmap = BitmapDecoder.from("your-file.png")
.region(bounds)
.decode();
imageView.setImageBitmap(bitmap);
It supports down to Android 2.2 (API Level 8).
Generally speaking, that isn't possible, particularly since most image formats are compressed, so you don't even know which bytes to read until you've extracted the uncompressed form.
Break your image up into small tiles and load just the tiles you need to cover the region you want to display at runtime. To avoid jittery scrolling, you might also want to preload tiles that are just out of sight (the ones that border the visible tiles) on a background thread.
