I've searched the web for help, and although I found something that looked like a solution, it turned out not to work very well. For starters, I'd like to say I've just jumped into Android programming (this is my first day). I really learn by trial and error, so if you can help, I'd prefer hints rather than having the code pasted in front of me.
my tileset
http://img217.imageshack.us/img217/4654/tileseth.png
the result
http://img199.imageshack.us/img199/7913/resultx.png
The issues I'm having: 1. It is obviously not splitting the image into 32 by 32 pieces. What I'm trying to achieve is to take my big image and split it into 9* smaller images, each a 32 by 32 portion. 2. The image quality gets distorted and I can't work out why.
*I don't want to use a 9-patch, as there will be more than 9 images soon; it's just a fluke that I have 9 images at the moment.
my code (evidently plagiarized from the internet)
// Scale the source image to 96x96 so it divides evenly into a 3x3 grid of 32px tiles
tilesetSliced = Bitmap.createScaledBitmap(bmp, 96, 96, true);
// createBitmap(source, x, y, width, height) copies out one 32x32 tile
tileset[0] = Bitmap.createBitmap(tilesetSliced, 0, 0, 32, 32);
tileset[1] = Bitmap.createBitmap(tilesetSliced, 32, 0, 32, 32);
tileset[2] = Bitmap.createBitmap(tilesetSliced, 64, 0, 32, 32);
tileset[3] = Bitmap.createBitmap(tilesetSliced, 0, 32, 32, 32);
tileset[4] = Bitmap.createBitmap(tilesetSliced, 32, 32, 32, 32);
tileset[5] = Bitmap.createBitmap(tilesetSliced, 64, 32, 32, 32);
I'll make it more efficient once I get it working >.< Any help would be great.
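(For reference, the "more efficient" version I have in mind is roughly a nested loop like the sketch below; it's only a sketch, assuming the sheet divides evenly into 32x32 tiles, and it reuses tilesetSliced from the snippet above.)

int tileSize = 32;
// work out how many columns/rows of tiles the sheet actually contains
int cols = tilesetSliced.getWidth() / tileSize;   // 3 for a 96px-wide sheet
int rows = tilesetSliced.getHeight() / tileSize;  // 3 for a 96px-tall sheet
Bitmap[] tileset = new Bitmap[cols * rows];
for (int row = 0; row < rows; row++) {
    for (int col = 0; col < cols; col++) {
        // createBitmap(source, x, y, width, height) copies out one tile
        tileset[row * cols + col] = Bitmap.createBitmap(
                tilesetSliced, col * tileSize, row * tileSize, tileSize, tileSize);
    }
}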
The onDraw:
public void onDraw(Canvas canvas) {
    //update();
    for (int x = 0; x <= mapWidth; x++) {
        for (int y = 0; y <= mapHeight; y++) {
            canvas.drawBitmap(tileset[map[x][y]], x * 32, y * 32, null);
        }
    }
}
OK, some more debugging has shed light on something. 1. I removed the createScaledBitmap call and that stopped the quality from being destroyed (I originally had it there due to bugs). However, I found out that for some reason it thinks the width of my tileset is 64 when it's 96. Any help on this would be nice.
You may have more luck with BitmapFactory to generate your tilesetSliced. Its Options class allows you to set the sample size (inSampleSize), which can be used to scale down your image. It may not be precise enough for your needs, however.
Your images are likely distorted due to the scaling down process. Are you able to create these images with the right dimensions or pre-scale them?
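As a rough sketch of that idea (names are assumed; R.drawable.tileseth is a placeholder for the OP's tileset resource): decoding with BitmapFactory.Options and inScaled = false, an option not mentioned above, also avoids the automatic density scaling that can make a 96px-wide drawable come back as 64px, which matches the width problem described above.

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inScaled = false;      // don't rescale the bitmap for the device's screen density
// opts.inSampleSize = 2;   // optional: decode at half resolution if downscaling is wanted
// R.drawable.tileseth is a placeholder; substitute your own resource id
Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.tileseth, opts);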
Not sure if this is the right way to ask, but please help. I have an image of a dented car. I have to process it, highlight the dents, and return the number of dents. I was able to do it reasonably well with the following result:
The MATLAB code is:
img2 = rgb2gray(i1);                       % convert to grayscale
imshow(img2);
img3 = imtophat(img2, strel('disk', 15));  % top-hat filtering with a disk structuring element
img4 = imadjust(img3);                     % stretch the intensity range
layer = img4(:,:,1);
img5 = layer > 100 & layer < 250;          % keep pixels in a mid-intensity band
img6 = imfill(img5, 'holes');              % fill holes inside the blobs
img7 = bwareaopen(img6, 5);                % drop blobs smaller than 5 pixels
[L, ans] = bwlabeln(img7);                 % label connected components
imshow(img7);
I = imread(i1);
Ians = CarDentIdentification(I);
However, when I try to do this using opencv, I get this:
With the following code:
Imgproc.cvtColor(source, middle, Imgproc.COLOR_RGB2GRAY);
Imgproc.equalizeHist(middle, middle);
Imgproc.threshold(middle, middle, 150, 255, Imgproc.THRESH_OTSU);
Please tell me how I can obtain better results in OpenCV, and also how to count the dents. I tried findContours() but it gives a very large number. I tried it on other images as well, but I'm not getting proper results.
Please help.
So basically, from the MATLAB documentation, what imtophat does is: top-hat filtering computes the morphological opening of the image (using imopen) and then subtracts the result from the original image.
You could do this in OpenCV with the following steps:
Step 1: Get the disk structuring element
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
Step 2: Compute opening of the image and then subtract the result from the original image
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)  # 'gray' is the grayscale input image
This gives the following result -
Step 3 - Now you could just manually threshold it or use Otsu -
ret, thresh = cv2.threshold(tophat, 17, 255, 0)
which gives you the following image -
Since the OP wants the code in Java, here is roughly the equivalent in Java:
private Mat topHat(Mat image)
{
    // 15x15 elliptical (disk-like) structuring element, default anchor at the centre
    Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(15, 15));
    Mat dst = new Mat();
    Imgproc.morphologyEx(image, dst, Imgproc.MORPH_TOPHAT, element);
    return dst;
}
Make sure you do this on a grayscale image (CvType.CV_8UC1), and then you can threshold it suitably.
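To also answer the "how do I count the dents" part: one rough option (assuming the OpenCV 3.x Java bindings) is connected-component labelling on the thresholded image, roughly what bwlabeln does in the MATLAB version. 'binary' below is assumed to be the 8-bit single-channel result of the thresholding step.

Mat labels = new Mat();
// connectedComponents returns the number of labels including the background,
// so subtract 1 to get the number of blobs (candidate dents)
int numLabels = Imgproc.connectedComponents(binary, labels);
int numDents = numLabels - 1;

To mimic bwareaopen (dropping tiny blobs), Imgproc.connectedComponentsWithStats gives you per-component areas you can filter on before counting.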
I am attempting to create a simple game using LibGDX. I am trying to use 9 Patch images as the backgrounds for the buttons on the menu; however, it appears the 9 Patch qualities of the images are being ignored.
I have two images, "active.9.png" and "rest.9.png". These are square images that represent the button in its active or rest state. I used this tool to create them: http://romannurik.github.io/AndroidAssetStudio/nine-patches.html so I am sure they meet the 9 Patch requirements. Below is a picture of "active.9.png":
Because I am using LibGDX and there will be many assets, I wanted to use a TextureAtlas to store my button images. After running the TexturePacker, things still seem to be working, because the images have "split" defined, which I think suggests they have been recognised as 9 Patch files. Below is "buttons.pack":
buttons.png
format: RGBA8888
filter: Nearest,Nearest
repeat: none
active
rotate: false
xy: 1, 1
size: 226, 225
split: 59, 57, 58, 58
orig: 226, 225
offset: 0, 0
index: -1
rest
rotate: false
xy: 229, 1
size: 226, 225
split: 59, 57, 58, 58
orig: 226, 225
offset: 0, 0
index: -1
Next I tried to create a TextureAtlas from this pack, create a Skin, and load the images into the Skin.
TextureAtlas buttonAtlas = new TextureAtlas("images/buttons/buttons.pack");
skin = new Skin();
skin.addRegions(buttonAtlas);
skin.add("rest", buttonAtlas.createPatch("rest"));
skin.add("active", buttonAtlas.createPatch("active"));
Finally I tried to apply this Skin to the button. I have tried two different ways:
Method 1:
TextButtonStyle buttonStyle = new TextButtonStyle();
buttonStyle.up = new NinePatchDrawable(buttonAtlas.createPatch("rest"));
buttonStyle.down = new NinePatchDrawable(buttonAtlas.createPatch("active"));
Output 1:
Method 2:
TextButtonStyle buttonStyle = new TextButtonStyle();
buttonStyle.up = new NinePatchDrawable(new NinePatch(new Texture(Gdx.files.internal("images/buttons/rest.9.png"))));
buttonStyle.down = new NinePatchDrawable(new NinePatch(new Texture(Gdx.files.internal("images/buttons/active.9.png"))));
Output 2:
Whilst output 2 looks better, it actually seems as though the 9 Patch qualities are ignored and the image has simply been stretched to fit.
I would really appreciate any help with this; I am completely stumped, and there doesn't seem to be any up-to-date tutorial or documentation available.
Thanks for your time
I think one of the mistakes may be in the image's guide lines.
I use draw9patch; I don't know if that is the tool you use.
The tool can be found at: your-android-sdk/tools/draw9patch
For example:
// class fields:
private TextureAtlas buttonsAtlas;
private NinePatch buttonUpNine;
private TextButtonStyle textButtonStyle;
private TextButton textButton;
private BitmapFont font;

// add this in show() or create(), for example:
buttonsAtlas = new TextureAtlas("data/ninePatch9/atlasUiScreenOverflow.atlas");
buttonUpNine = buttonsAtlas.createPatch("buttonUp");
font = new BitmapFont(); //** default font, for testing **//
font.setColor(0, 0, 1, 1); //** blue font **//
font.setScale(2); //** 2 times the size **//
textButtonStyle = new TextButtonStyle();
textButtonStyle.up = new NinePatchDrawable(buttonUpNine);
textButtonStyle.font = font;
textButton = new TextButton("test", textButtonStyle);
textButton.setBounds(100, 250, 250, 250);

// add it to the stage, for example:
yourStage.addActor(textButton);
E.g., if your button is larger than or equal to the size of the NinePatch, it looks like this (sorry for my English, I hope you understand me well):
textButton.setBounds(100, 250, 250, 250);
It looks good.
But if it is smaller than the NinePatch, for example a height of 100, it looks like this:
textButton.setBounds(100, 150, 250, 100);
The asset I used for testing, in case you need it:
//atlasUiScreenOverflow.atlas
atlasUiScreenOverflow.png
size: 232,231
format: RGBA8888
filter: Nearest,Nearest
repeat: none
buttonUp
rotate: false
xy: 2, 2
size: 230, 229
split: 49, 51, 49, 52
orig: 230, 229
offset: 0, 0
index: -1
//atlasUiScreenOverflow.png
There are several ways to use a NinePatch, but this is the one that occurred to me for this case. I hope it is clear and helps you.
Edit: I just tested the tool you use, and it works well.
Are you using textButton = new TextButton("test", textButtonStyle); ?
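One more note on the OP's Method 2: as far as I know, the plain NinePatch(Texture) constructor does not read the 1-pixel guide border of a .9.png, so the image is simply stretched. If you load the raw file directly, you have to pass the split sizes yourself, something like the sketch below (split values borrowed from the pack file in the question; note the raw .9.png still contains the guide border, so the numbers may need adjusting):

Texture restTexture = new Texture(Gdx.files.internal("images/buttons/rest.9.png"));
// left, right, top, bottom split sizes in pixels
NinePatch restPatch = new NinePatch(restTexture, 59, 57, 58, 58);
buttonStyle.up = new NinePatchDrawable(restPatch);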
I wanted to achieve something like this: http://www.leptonica.com/binarization.html
While searching for solutions, most of the answers were general instructions, such as advice to look at adaptive filters, Gaussian blur, dilation and erosion, but none of them provided any sample code to start with (so I could play around with the values).
I know different images require different methods and values to achieve optimum clarity, but I just need some general filter so that the image is at least slightly sharper and less noisy compared to the original, before doing any OCR on it.
This is what I've tried so far:
Mat imageMat = new Mat();
Utils.bitmapToMat(photo, imageMat);
Imgproc.cvtColor(imageMat, imageMat, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(imageMat, imageMat, new Size(3, 3), 0);
Imgproc.adaptiveThreshold(imageMat, imageMat, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 5, 4);
but being an image processing newb, obviously I don't know what I'm doing XD
original image:
after applying the above:
How to do it correctly?
UPDATE: got it much closer thanks to metsburg, berak and Aurelius
Using the medianBlur method since cvSmooth with CV_MEDIAN is deprecated and replaced with medianBlur:
Imgproc.medianBlur(imageMat, imageMat, 3);
Imgproc.threshold(imageMat, imageMat, 0, 255, Imgproc.THRESH_OTSU);
Result:
Using back the GaussianBlur method, the result actually is slightly better:
Imgproc.GaussianBlur(imageMat, imageMat, new Size(3, 3), 0);
Imgproc.threshold(imageMat, imageMat, 0, 255, Imgproc.THRESH_OTSU);
Result:
For this image the difference is not noticeable, so I tried another image, which is a photo taken of a computer screen. A computer screen produces a lot of noise (wavy lines), so it is very hard to remove.
Example original image:
Directly applying Otsu:
Using medianBlur before Otsu:
Using GaussianBlur before Otsu:
Seems like Gaussian blur is slightly better; however, I'm still playing with the settings.
If anyone can advise on how to improve the computer screen photo further, please, let us know :)
One more thing: using this method on the image from the link at the top yields horrible results :( See it here: http://imgur.com/vOZAaE0
Well, you're almost there. Just try these modifications:
Instead of
Imgproc.GaussianBlur(imageMat, imageMat, new Size(3, 3), 0);
try:
cvSmooth(imageMat, imageMat, CV_MEDIAN, new Size(3, 3), 0);
(check the syntax, it may not match exactly)
The link you posted uses thresholding of Otsu, so try this:
Imgproc.threshold(imageMat, imageMat, 0, 255, Imgproc.THRESH_OTSU);
for thresholding.
Try tweaking the parameters here and there, you should get something pretty close to your desired outcome.
Instead of using Imgproc.THRESH_BINARY_INV, use Imgproc.THRESH_BINARY only, as _INV inverts your image after binarisation, which is what produced the output shown above in your example.
Correct code:
Imgproc.adaptiveThreshold(imageMat, imageMat, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 5, 4);
var targetSize = Math.max($(window).width(), $(window).height());
var canvas = $("#canvas")[0];
canvas.setAttribute('width', $(window).width());
canvas.setAttribute('height', $(window).height());
var context = canvas.getContext("2d");
var img = new Image(); // Create new img element
img.onload = function () {
var angle = Math.PI / 4;
context.setTransform(1, 0, 0, 1, 0, 0); //attempt to reset transform
context.drawImage(img, 0, 0);
};
img.src = '../../Images/FloorPlans/GroundFloor.jpg'; // Set source path
This code produces the first image below on Firefox 17 on my Nexus 7. The original image is NOT angled; all of the lines should be north-south and east-west. It appears correctly in Firefox and Chrome on the desktop, and in Chrome on my Nexus 7.
If I try to "un-skew" the image using...
context.setTransform(1, 0, Math.tan(angle), 1, 0, 0);
I get the second output below! My target platform has to be Firefox on the Nexus 7 :(
How can this be fixed? Or is this a firefox bug?
I've kinda worked it out.
The original image was a jpg, 1859px * 1568px ~ 501kb.
I resized it down to 50% and it worked! I then went back to the full size image (still not working) before resizing down 5% at a time. All of the images failed until I got to 75% of the original size (1394px * 1176px ~ 342kb) which worked perfectly!
So, the issue is either one of image size or file size.
Happy hunting!
UPDATE!
Thanks to Edward Falk's comment below, we have a definitive answer. Yes, shaving a single pixel off the width of the image (thus making the width an even number of pixels) fixed the problem entirely.
Firefox canvas requires (some?) images to have an even pixel width and height, otherwise they may render incorrectly.
Okay, this is quite simple to understand, but for some bizarre reason I can't get it working.. I've simplified this example from the actual code.
InputStream is = context.getResources().openRawResource(R.raw.someimage);
Bitmap bitmap = BitmapFactory.decodeStream(is);
try
{
    int[] pixels = new int[32 * 32];
    bitmap.getPixels(pixels, 0, 800, 0, 0, 32, 32);
}
catch (ArrayIndexOutOfBoundsException ex)
{
    Log.e("testing", "ArrayIndexOutOfBoundsException", ex);
}
Why on earth do I keep getting an ArrayIndexOutOfBoundsException? The pixels array is 32x32, and as far as I'm aware I'm using getPixels correctly. The image dimensions are 800x800 and I am attempting to retrieve a 32x32 section. The image is a 32-bit PNG, which is being reported as ARGB_8888.
Any ideas? even if I'm being an idiot! I'm about to throw the keyboard out of the window :D
Use the width of the region you are reading as the stride, in your case 32:
bitmap.getPixels(pixels, 0, 32, 0, 0, 32, 32);
The 800-entry gap between rows causes your pixel array to go out of bounds.
"I'm about to throw the keyboard out of the window " funny lol
You're overflowing the destination buffer because you're asking for a stride of 800 entries between rows.
http://developer.android.com/reference/android/graphics/Bitmap.html#getPixels%28int[],%20int,%20int,%20int,%20int,%20int,%20int%29
You're getting the OutOfBounds exception because the stride is applied to the pixels array, not to the original bitmap, so in your case getPixels writes into the array as if it had 32 rows of 800 entries (roughly 32*800 slots), which doesn't fit into your 32*32 array.
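To make the stride behaviour concrete, here is a small sketch (reusing the bitmap from the question): the stride is the row pitch of the destination array, so either make it equal to the region width, or size the array for the stride you ask for.

// Variant 1: destination row pitch == region width (the usual fix)
int[] pixels = new int[32 * 32];
bitmap.getPixels(pixels, 0, 32, 0, 0, 32, 32);

// Variant 2: keeping a stride of 800 works too, but then the array must be
// big enough for 32 rows that each start 800 entries after the previous one
int[] wide = new int[800 * 32];
bitmap.getPixels(wide, 0, 800, 0, 0, 32, 32);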