Draw bitmap over bitmap and set some pixels transparent - android

I have a custom View and two Bitmaps. I draw one on top of the other like this:
canvas.drawBitmap(backImage,0,0,null);
canvas.drawBitmap(frontImage,0,0,null);
Before drawing, I set some pixels in frontImage to transparent using Bitmap's setPixel(...) method:
frontImage.setPixel(x, y, Color.TRANSPARENT);
Instead of seeing backImage's pixels at (x, y), I see black...

There may be a really simple solution to this. What is your source material for the images? If you are loading them from files, all you may need to do is convert the source files to PNGs. PNGs maintain transparency information, and most rendering engines will take this into account when layering them together on the screen.
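As a quick sanity check that PNG preserves per-pixel alpha, here is a small desktop-Java sketch. It uses javax.imageio rather than Android's Bitmap/BitmapFactory, so treat it as an analogue for verifying the file format, not as Android code:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

public class PngAlphaDemo {
    // Writes an ARGB image containing a fully transparent pixel to PNG and
    // reads it back; returns the alpha of that pixel after the round trip.
    public static int roundTripAlpha() {
        try {
            BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
            img.setRGB(0, 0, 0xFFFF0000); // opaque red
            img.setRGB(1, 0, 0x00000000); // fully transparent
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(img, "png", out);
            BufferedImage back = ImageIO.read(new ByteArrayInputStream(out.toByteArray()));
            return back.getRGB(1, 0) >>> 24; // alpha channel of the transparent pixel
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The method returns 0, i.e. the transparency survived the round trip; a JPEG source, by contrast, has no alpha channel at all, which is one way to end up with black where you expected transparency.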

Another possibility: I use this technique quite a bit in Java games on the PC. It uses a little transparency class I stumbled across a number of years ago at this location:
/*************************************************************************
 * The Transparency class was also developed by a third party. Info
 * on its use can be found at:
 *
 * http://www.rgagnon.com/javadetails/java-0265.html
 *
 *************************************************************************/
// Transparency is a "static" inner class that will set a given color
// as transparent in a given image.
class Transparency {
    public static Image set(Image im, final Color color) {
        ImageFilter filter = new RGBImageFilter() { // inner-inner class -- very bad
            // The color we are looking for... alpha bits are set to opaque.
            public int markerRGB = color.getRGB() | 0xFF000000;

            public final int filterRGB(int x, int y, int rgb) {
                if ((rgb | 0xFF000000) == markerRGB) {
                    // Mark the alpha bits as zero - transparent.
                    return 0x00FFFFFF & rgb;
                } else {
                    // Nothing to do.
                    return rgb;
                }
            }
        };
        // Apply the filter created above to the image.
        ImageProducer ip = new FilteredImageSource(im.getSource(), filter);
        return Toolkit.getDefaultToolkit().createImage(ip);
    }
}
Taking a Java Image object as input, it will take whatever color you give it and perform the math on the image, turning that color transparent.
Good luck.

Related

How to change shade of color dynamically android?

I'm using the Palette class to programmatically get the most dominant color from an image which I then want to use for my status bar and toolbar. According to the material design guidelines, the status bar color should be two shades darker than the toolbar color.
Bitmap bitmap = ((BitmapDrawable) ((ImageView)mImageView).getDrawable()).getBitmap();
if (bitmap != null) {
palette = Palette.generate(bitmap);
vibrantSwatch = palette.getVibrantSwatch();
darkVibrantSwatch = palette.getDarkVibrantSwatch();
}
For the darker color I'm using the darkVibrantSwatch and for the lighter color I'm using the vibrantSwatch. But in most cases these turn out to be very different from each other, which essentially makes them unusable. Is there any workaround for this?
Maybe it's possible to get just one color, say darkVibrantSwatch, and then programmatically generate a color that is two shades lighter?
I'm not sure about getting exactly 2 shades lighter, but you can play around with SHADE_FACTOR (a constant between 0 and 1, e.g. 0.9f) and see if you can achieve what you want.
private int getDarkerShade(int color) {
    return Color.rgb((int) (SHADE_FACTOR * Color.red(color)),
            (int) (SHADE_FACTOR * Color.green(color)),
            (int) (SHADE_FACTOR * Color.blue(color)));
}
Code snippet taken from here
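For completeness, the same arithmetic can be sketched in plain Java without the Android Color helpers (which just extract bytes from the packed ARGB int). The 0.9f value for SHADE_FACTOR is an assumption to play with, not something from the guidelines:

```java
public class Shade {
    // Assumed factor; closer to 1.0 means a subtler darkening.
    static final float SHADE_FACTOR = 0.9f;

    // Scales each RGB channel of a packed ARGB color, leaving alpha untouched.
    public static int darker(int color) {
        int r = (int) (SHADE_FACTOR * ((color >> 16) & 0xFF));
        int g = (int) (SHADE_FACTOR * ((color >> 8) & 0xFF));
        int b = (int) (SHADE_FACTOR * (color & 0xFF));
        return (color & 0xFF000000) | (r << 16) | (g << 8) | b;
    }
}
```

Calling it twice approximates "two shades darker" in this scheme.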
An approach that works well is to modify the brightness value in the color's HSV representation:
import android.graphics.Color;

public static int modifyBrightness(int color, float factor) {
    float[] hsv = new float[3];
    Color.colorToHSV(color, hsv);
    hsv[2] *= factor;
    return Color.HSVToColor(hsv);
}
To get a suitably darker color for the status bar, use a factor of 0.8:
int darkerColor = modifyBrightness(color, 0.8f);
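The same idea can be tried outside Android with java.awt.Color's HSB helpers, where hsb[2] plays the role of hsv[2]. This is a desktop analogue for experimentation, not the Android API:

```java
import java.awt.Color;

public class Brightness {
    // Scales the brightness (V) component of a packed 0xRRGGBB color.
    public static int modifyBrightness(int rgb, float factor) {
        float[] hsb = Color.RGBtoHSB((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF, null);
        return Color.HSBtoRGB(hsb[0], hsb[1], Math.min(1f, hsb[2] * factor));
    }
}
```

Darkening pure red by a factor of 0.8 drops the red channel from 255 to 204 while leaving hue and saturation unchanged, which is exactly the "same color, darker shade" effect wanted here.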

How to implement "Loading images" pattern (Opacity, Exposure and Saturation) from Google's new Material design guidelines

Has anyone looked into implementing the Loading images pattern from Google's latest Material Design guide?
The guidelines recommend that "illustrations and photographs may load and transition in three phases at staggered durations", those phases being opacity, exposure, and saturation:
I'm currently using the Volley NetworkImageView (actually a derived class from this).
I'm sure it's got to be some variant of the answer to this question. I'm just not sure which classes/code to use for both the saturation and animation curves that are described.
Thanks to @mttmllns! Previous answer.
Since the previous answer shows an example written in C# and I was curious, I ported it to Java. Complete GitHub Example
It outlines a three-step process where a combination of opacity, contrast/luminosity, and saturation is used in concert to help salvage our poor users' eyesight.
For a detailed explanation read this article.
EDIT:
See the excellent answer provided by @DavidCrawford.
BTW: I fixed the linked GitHub project to support pre-Lollipop devices (since API Level 11).
The Code
AlphaSatColorMatrixEvaluator.java
import android.animation.TypeEvaluator;
import android.graphics.ColorMatrix;
public class AlphaSatColorMatrixEvaluator implements TypeEvaluator {
    private ColorMatrix colorMatrix;
    float[] elements = new float[20];

    public AlphaSatColorMatrixEvaluator() {
        colorMatrix = new ColorMatrix();
    }

    public ColorMatrix getColorMatrix() {
        return colorMatrix;
    }

    @Override
    public Object evaluate(float fraction, Object startValue, Object endValue) {
        // There are 3 phases, so we multiply the fraction by that amount.
        float phase = fraction * 3;

        // Compute the alpha change over the period [0, 2].
        float alpha = Math.min(phase, 2f) / 2f;
        // elements[19] = (float) Math.round(alpha * 255);
        elements[18] = alpha;

        // We subtract to make the picture look darker; it will automatically clamp.
        // This is spread over the period [0, 2.5].
        final int MaxBlacker = 100;
        float blackening = (float) Math.round((1 - Math.min(phase, 2.5f) / 2.5f) * MaxBlacker);
        elements[4] = elements[9] = elements[14] = -blackening;

        // Finally we desaturate over [0, 3], taken from ColorMatrix.setSaturation.
        float invSat = 1 - Math.max(0.2f, fraction);
        float R = 0.213f * invSat;
        float G = 0.715f * invSat;
        float B = 0.072f * invSat;

        elements[0] = R + fraction; elements[1] = G; elements[2] = B;
        elements[5] = R; elements[6] = G + fraction; elements[7] = B;
        elements[10] = R; elements[11] = G; elements[12] = B + fraction;

        colorMatrix.set(elements);
        return colorMatrix;
    }
}
Here is how you can set it up:
ImageView imageView = (ImageView) findViewById(R.id.imageView);
final BitmapDrawable drawable = (BitmapDrawable) getResources().getDrawable(R.drawable.image);
imageView.setImageDrawable(drawable);

AlphaSatColorMatrixEvaluator evaluator = new AlphaSatColorMatrixEvaluator();
final ColorMatrixColorFilter filter = new ColorMatrixColorFilter(evaluator.getColorMatrix());
drawable.setColorFilter(filter);

ObjectAnimator animator = ObjectAnimator.ofObject(filter, "colorMatrix", evaluator, evaluator.getColorMatrix());
animator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
    @Override
    public void onAnimationUpdate(ValueAnimator animation) {
        drawable.setColorFilter(filter);
    }
});
animator.setDuration(1500);
animator.start();
And here is the result:
Please note that this answer, as it stands, works on Lollipop only. The reason is that the colorMatrix property is not available to animate on the ColorMatrixColorFilter class before then (it doesn't provide getColorMatrix and setColorMatrix methods). To see this in action, try the code; in the logcat output you should see a warning message like this:
Method setColorMatrix() with type class android.graphics.ColorMatrix not found on target class class android.graphics.ColorMatrixColorFilter
That being said, I was able to get this to work on older Android versions (pre-Lollipop) by creating the following class (not the best name, I know):
private class AnimateColorMatrixColorFilter {
    private ColorMatrixColorFilter mFilter;
    private ColorMatrix mMatrix;

    public AnimateColorMatrixColorFilter(ColorMatrix matrix) {
        setColorMatrix(matrix);
    }

    public ColorMatrixColorFilter getColorFilter() {
        return mFilter;
    }

    public void setColorMatrix(ColorMatrix matrix) {
        mMatrix = matrix;
        mFilter = new ColorMatrixColorFilter(matrix);
    }

    public ColorMatrix getColorMatrix() {
        return mMatrix;
    }
}
Then, the setup code would look something like the following. Note that I have this "setup" in a class derived from ImageView, so I'm doing it in the overridden method setImageBitmap.
@Override
public void setImageBitmap(Bitmap bm) {
    final Drawable drawable = new BitmapDrawable(getContext().getResources(), bm);
    setImageDrawable(drawable);

    AlphaSatColorMatrixEvaluator evaluator = new AlphaSatColorMatrixEvaluator();
    final AnimateColorMatrixColorFilter filter = new AnimateColorMatrixColorFilter(evaluator.getColorMatrix());
    drawable.setColorFilter(filter.getColorFilter());

    ObjectAnimator animator = ObjectAnimator.ofObject(filter, "colorMatrix", evaluator, evaluator.getColorMatrix());
    animator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
        @Override
        public void onAnimationUpdate(ValueAnimator animation) {
            drawable.setColorFilter(filter.getColorFilter());
        }
    });
    animator.setDuration(1500);
    animator.start();
}
Following up on rnrneverdies's excellent answer, I'd like to offer a small fix to this animation logic.
My problem with this implementation is with PNG images that contain transparency (for example, circular images or custom shapes). For these images, the colour filter will draw the transparent areas of the image as black, rather than just leaving them transparent.
The problem is with this line:
elements [19] = (float)Math.round(alpha * 255);
What's happening here is that the colour matrix is telling the bitmap that the alpha value of each pixel is equal to the current phase of the animation. This is obviously not ideal, since pixels that were already transparent will lose their transparency and appear as black.
To fix this, instead of applying the alpha to the "additive" alpha field in the colour matrix, apply it to the "multiplicative" field:
Rm |  0 |  0 |  0 | Ra
 0 | Gm |  0 |  0 | Ga
 0 |  0 | Bm |  0 | Ba
 0 |  0 |  0 | Am | Aa

Xm = multiplicative field
Xa = additive field
So instead of applying the alpha value on the "Aa" field (elements[19]), apply it on the "Am" field (elements[18]), and use the 0-1 value rather than the 0-255 value:
//elements [19] = (float)Math.round(alpha * 255);
elements [18] = alpha;
Now the transition will multiply the original alpha value of the bitmap with the alpha phase of the animation and not force an alpha value when there shouldn't be one.
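The difference between the two fields is easy to check numerically. The alpha row of a colour matrix computes a' = Am * a + Aa (with Aa in 0-255 units); a tiny sketch of just that row shows why the multiplicative field preserves existing transparency:

```java
public class AlphaMatrix {
    // Applies the alpha row of a 4x5 color matrix: a' = Am * a + Aa, clamped to [0, 255].
    public static int resultAlpha(float am, float aa, int alpha) {
        int out = Math.round(am * alpha + aa);
        return Math.max(0, Math.min(255, out));
    }
}
```

Halfway through the fade, the additive form (Am = 0, Aa = 128, as in the original elements[19] line) forces every pixel to alpha 128, including already-transparent ones, while the multiplicative form (Am = 0.5, Aa = 0) keeps transparent pixels at 0 and halves the opaque ones instead.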
Hope this helps
Was just wondering this same thing. I found a blog post detailing how to go about it with an example written in Xamarin:
http://blog.neteril.org/blog/2014/11/23/android-material-image-loading/
The gist for someone writing in Java: use ColorMatrix and ColorMatrixColorFilter.
The post also mentions an important optimization: using the Lollipop hidden setColorMatrix() API to avoid GC and GPU churn.
Have you seen it in use in any Google apps yet? If you end up implementing it I'd love to see your source.

Image segmentation by background color - OpenCV Android

I'm trying to segment business cards and split them by background color to treat them as different regions of interest.
For example a card of this sort:
should be able to be split into two images, as there are 2 background colors. Are there any suggestions on how to tackle this? I've tried doing some contour analysis, which didn't turn out too successful.
Other example cards:
This card should give 3 segmentations, as there are three portions even though it's only 2 colors (though 2 colors will be okay).
The above card should give just one segmentation as it is just one background color.
I'm not trying to think of gradient backgrounds just yet.
It depends on how the other cards look, but if the images are all of that quality, it should not be too hard.
In the example you posted, you could just collect the colors of the border pixels (leftmost column, rightmost column, first row, last row) and treat what you find as possible background colors. Perhaps check whether there are enough pixels with roughly the same color. You need some kind of distance measure; one easy solution is to use the Euclidean distance in RGB color space.
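A minimal sketch of that distance measure over packed RGB ints:

```java
public class ColorDistance {
    // Euclidean distance between two packed 0xRRGGBB colors.
    public static double distance(int c1, int c2) {
        int dr = ((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF);
        int dg = ((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF);
        int db = (c1 & 0xFF) - (c2 & 0xFF);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }
}
```

Two border pixels would then count as "roughly the same color" when their distance falls below some tolerance you pick (the maximum possible distance, black to white, is sqrt(3) * 255, about 441.7).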
A more generic solution would be to find clusters in the color histograms of the whole image and treat every color (again with tolerance) that has more than x% of the overall pixel amount as a background color. But what you define as background depends on what you want to achieve and how your images look.
If you need further suggestions, you could post more images and tag which parts of the images you want detected as a background color and which not.
Edit: Your two new images also show the same pattern. Background colors occupy a big part of the image, there is no noise, and there are no color gradients. So a simple approach could look like the following:
Calculate the histogram of the image: see http://docs.opencv.org/modules/imgproc/doc/histograms.html#calchist and http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html
Find the most prominent colors in the histogram. If you do not want to iterate over the Mat yourself, you can use minMaxLoc (http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#minmaxloc) as shown in the calcHist documentation (see above). If a color takes up a large enough percentage of the pixel count, save it and set the corresponding bin in the histogram to zero. Repeat until your percentage threshold is no longer reached. You will then have saved a list of the most prominent colors: your background colors.
Threshold the image for every background color you have. See: http://docs.opencv.org/doc/tutorials/imgproc/threshold/threshold.html
On the resulting thresholded images, find the corresponding region for every background color. See: http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html
If you have examples that do not work with this approach, just post them.
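The histogram idea above can also be sketched without OpenCV. Here pixels are assumed to come in as packed 0xRRGGBB ints (e.g. from Bitmap.getPixels), and each channel is quantized to its top 3 bits to build a coarse histogram; the names and the quantization level are illustrative choices, not part of the OpenCV approach:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DominantColors {
    // Returns the quantized bin colors that cover at least minFraction of all
    // pixels, most frequent first: a crude stand-in for the background colors.
    public static List<Integer> dominant(int[] pixels, double minFraction) {
        Map<Integer, Integer> hist = new HashMap<>();
        for (int p : pixels) {
            hist.merge(p & 0xE0E0E0, 1, Integer::sum); // keep top 3 bits per channel
        }
        List<Integer> result = new ArrayList<>();
        hist.entrySet().stream()
                .filter(e -> e.getValue() >= minFraction * pixels.length)
                .sorted((a, b) -> b.getValue() - a.getValue())
                .forEach(e -> result.add(e.getKey()));
        return result;
    }
}
```

Each returned color would then be turned into a threshold range and handed to the contour step.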
As an approach for also finding backgrounds with color gradients in them, one could use Canny edge detection. The following code (yes, not Android, I know, but the result should be the same if you port it) works fine with the three example images you posted so far. If you have other images that do not work with this, please let me know.
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

Mat src;
Mat src_gray;
int canny_thresh = 100;
int max_canny_thresh = 255;
int size_per_mill = 120;
int max_size_per_mill = 1000;
RNG rng(12345);

bool cmp_contour_area_less(const vector<Point>& lhs, const vector<Point>& rhs)
{
    return contourArea(lhs) < contourArea(rhs);
}

void Segment()
{
    Mat canny_output;
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    Canny(src_gray, canny_output, canny_thresh, canny_thresh * 2, 3);

    // Draw a rectangle around the Canny image to also get regions touching the edges.
    rectangle(canny_output, Point(1, 1), Point(src.cols - 2, src.rows - 2), Scalar(255));
    namedWindow("Canny", CV_WINDOW_AUTOSIZE);
    imshow("Canny", canny_output);

    // Find the contours.
    findContours(canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    // Remove the largest contour, because it always represents the whole image.
    sort(contours.begin(), contours.end(), cmp_contour_area_less);
    contours.resize(contours.size() - 1);
    reverse(contours.begin(), contours.end());

    // Maximum contour size.
    int image_pixels(src.cols * src.rows);
    cout << "image_pixels: " << image_pixels << "\n";

    // Filter the contours, keeping just the large enough ones.
    vector<vector<Point> > background_contours;
    for (size_t i(0); i < contours.size(); ++i)
    {
        double area(contourArea(contours[i]));
        double min_size((size_per_mill / 1000.0) * image_pixels);
        if (area >= min_size)
        {
            cout << "Background contour " << i << ") area: " << area << "\n";
            background_contours.push_back(contours[i]);
        }
    }

    // Draw the large contours.
    Mat drawing = Mat::zeros(canny_output.size(), CV_8UC3);
    for (size_t i(0); i < background_contours.size(); ++i)
    {
        Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        drawContours(drawing, background_contours, i, color, 1, 8, hierarchy, 0, Point());
    }
    namedWindow("Contours", CV_WINDOW_AUTOSIZE);
    imshow("Contours", drawing);
}

void size_callback(int, void*)
{
    Segment();
}

void thresh_callback(int, void*)
{
    Segment();
}

int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        cout << "Please provide an image file.\n";
        return -1;
    }
    src = imread(argv[1]);
    if (!src.data) // check the load before using src
    {
        cout << "Unable to load " << argv[1] << ".\n";
        return -2;
    }
    cvtColor(src, src_gray, CV_BGR2GRAY);
    blur(src_gray, src_gray, Size(3, 3));
    namedWindow("Source", CV_WINDOW_AUTOSIZE);
    imshow("Source", src);
    createTrackbar("Canny thresh:", "Source", &canny_thresh, max_canny_thresh, thresh_callback);
    createTrackbar("Size thresh:", "Source", &size_per_mill, max_size_per_mill, thresh_callback);
    Segment();
    waitKey(0);
    return 0;
}

Android Add image to text (in text View)?

first post here=)
I've been looking for this answer and haven't been able to find it.
What I want to do is have some text, then add an image, then the rest of the text. For example:
____
| |
Hi there, this is the photo |___|, hope you like it..
I've been looking, but all I can find is adding text to an image or adding an image to an ImageView, and I don't think that's what I want because the app is mainly text but with images in it.
So my question is: how do I add an image to text?
thanks
UPDATE:
I used the advice R.daneel.olivaw gave me and it worked nice=)
But I have a problem. Let's say I have "a b c", where I set the position of b as spannable. If I delete the text and remove b, the next time I write something it will become the image I used in the spannable. How do I correct this? Anyone got any advice?
thanks=)
I think you are looking for the Spannable interface. By using this you can add images to a text view.
This link might help.
Your best option is to create your own view and override the onDraw() method, using canvas.drawText() for text and canvas.drawBitmap() for images.
See the Canvas doc : http://developer.android.com/reference/android/graphics/Canvas.html
Here is an example which draws some text at the center of the screen:
public class OverlayView extends View {

    public OverlayView(final Context context) {
        super(context);
    }

    /**
     * Draw camera target.
     */
    @Override
    protected void onDraw(final Canvas canvas) {
        // view size
        int width = getWidth();
        int height = getHeight();
        float square_side = height - width * 0.8f; // size of the target square

        Paint paint = new Paint();
        paint.setAntiAlias(true);
        paint.setStyle(Paint.Style.FILL);
        // text size is 5% of the screen height
        paint.setTextSize(height * 0.05f);

        // draw the message depending on its width
        String message = getResources().getString(R.string.photo_target_text);
        float message_width = paint.measureText(message);
        paint.setColor(getResources().getColor(R.color.color_foreground));
        canvas.drawText(message, (width - message_width) / 2,
                (height - square_side) / 4, paint);

        super.onDraw(canvas);
    }
}
As far as I know, there's no way to do it directly.
But if you use a WebView and generated HTML, you can achieve something similar:
generate HTML from your contents, load it into the WebView, and set the WebView settings to disable the zoom controls.
This can be done with an image getter: Html.ImageGetter
You will have to use an HTML img tag for each image.
<string name="text">Hi there, this is the photo &lt;img src="img.jpg"/&gt;, hope you like it.</string>
Add this to your strings and it will work.

Android Color Picking Not Getting Correct Colors

I implemented a simple color picker for my application. It is working correctly, except that the objects are not being drawn with their exact color IDs. So color IDs of 22,0,0, 23,0,0, and 24,0,0 might all be read back by glReadPixels as 22,0,0. I also tried disabling dithering, but I'm not sure if there is another GL setting I have to disable or enable to get the objects to draw with their exact color IDs.
if (picked)
{
    GLES10.glClear(GLES10.GL_COLOR_BUFFER_BIT | GLES10.GL_DEPTH_BUFFER_BIT);
    GLES10.glDisable(GLES10.GL_TEXTURE_2D);
    GLES10.glDisable(GLES10.GL_LIGHTING);
    GLES10.glDisable(GLES10.GL_FOG);
    GLES10.glPushMatrix();
    Camera.Draw(gl);
    for (Actor actor : ActorManager.actors)
    {
        actor.Picking();
    }
    ByteBuffer pixels = ByteBuffer.allocate(4);
    GLES10.glReadPixels(x, (int) _height - y, 1, 1, GLES10.GL_RGBA, GLES10.GL_UNSIGNED_BYTE, pixels);
    for (Actor actor : ActorManager.actors)
    {
        if (actor._colorID[0] == (pixels.get(0) & 0xff)
                && actor._colorID[1] == (pixels.get(1) & 0xff)
                && actor._colorID[2] == (pixels.get(2) & 0xff))
        {
            actor._location.y += -1;
        }
    }
    GLES10.glPopMatrix();
    picked = false;
}
public void Picking()
{
    GLES10.glPushMatrix();
    GLES10.glTranslatef(_location.x, _location.y, _location.z);
    GLES10.glVertexPointer(3, GLES10.GL_FLOAT, 0, _vertexBuffer);
    GLES10.glColor4f((_colorID[0] & 0xff) / 255.0f, (_colorID[1] & 0xff) / 255.0f,
            (_colorID[2] & 0xff) / 255.0f, 1.0f);
    GLES10.glDrawElements(GLES10.GL_TRIANGLES, _numIndicies,
            GLES10.GL_UNSIGNED_SHORT, _indexBuffer);
    GLES10.glPopMatrix();
}
It looks like Android doesn't default to an 8888 surface format, and this was causing the colors not to be read back exactly. By setting the correct format in the activity, I solved the problem for now, as long as the device supports it. I will probably have to find a way later to draw to a separate buffer or texture if I want my program to support more devices, but this will work for now.
_graphicsView.getHolder().setFormat(PixelFormat.RGBA_8888);
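The whole scheme hinges on the ID surviving an exact 8-bit round trip through the framebuffer. The packing itself can be sketched as follows (class and method names are illustrative, not from the question's code):

```java
public class PickingIds {
    // Splits a 24-bit picking ID into the R, G, B bytes fed to glColor4f.
    public static int[] encode(int id) {
        return new int[] { (id >> 16) & 0xFF, (id >> 8) & 0xFF, id & 0xFF };
    }

    // Rebuilds the ID from the bytes returned by glReadPixels.
    public static int decode(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }
}
```

With an RGBA_8888 surface, decode(encode(id)) is the identity for any 24-bit ID. On a 565 surface the low bits of each channel are quantized away, which is exactly why IDs like 22,0,0 and 23,0,0 collided: 22 and 23 share the same top 5 red bits.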
