I am working with OpenCV4Android version 3.0.0 and I am trying to remove the background from a video stream with a non-static background. I want to do this because I have a problem detecting the edges of a card: as explained in my question here, edge detection depends on the card's color and the background color.
After referring to some posts I wrote the code below, but at run time, when I display the "mask" image I get a completely grey image, and when I display the "output" image after applying the mask to it, I get the same preview that is displayed on the camera.
Is there any way to remove a non-static background from a video stream?
Code:
mask = new Mat();
BackgroundSubtractorMOG2 mog2 = Video.createBackgroundSubtractorMOG2();
// learning rate close to zero, so the background model updates very slowly
mog2.apply(mInputFrame, mask, 0.000005);

// copy the input frame through the foreground mask
output = new Mat();
mInputFrame.copyTo(output, mask);

final Bitmap bitmap = Bitmap.createBitmap(mInputFrame.cols(), mInputFrame.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(output, bitmap);
getActivity().runOnUiThread(new Runnable() {
    @Override
    public void run() {
        mIVEdges.setImageBitmap(bitmap);
    }
});
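A note on the mask semantics: copyTo(output, mask) copies every pixel where the mask is non-zero, and MOG2 marks shadow pixels as grey (127) when shadow detection is on, so a mostly grey mask passes nearly the whole frame through. Below is a minimal sketch of binarizing the mask before applying it; the threshold of 200 is an arbitrary choice.

// Hedged sketch: keep only confident foreground (255) and drop shadow pixels (127).
// Requires: import org.opencv.imgproc.Imgproc;
Mat binaryMask = new Mat();
Imgproc.threshold(mask, binaryMask, 200, 255, Imgproc.THRESH_BINARY);
output = new Mat();
mInputFrame.copyTo(output, binaryMask);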
I am using AForge.portable on an Android tablet to create a way to convert a picture of a piece of paper into what might pass as a scanned image. My code is as follows:
public static Bitmap ConvertToScan(Bitmap img)
{
    BlobCounterBase bc = new BlobCounter();
    bc.ObjectsOrder = ObjectsOrder.Area;

    var bm = ((System.Drawing.Bitmap)img).Clone(System.Drawing.Imaging.PixelFormat.Format24bppRgb);

    // Convert to grayscale using the BT.709 luminance coefficients.
    Grayscale filter = new Grayscale(0.2125, 0.7154, 0.0721);
    var grayImage = filter.Apply(bm);

    SobelEdgeDetector sobelfilter = new SobelEdgeDetector();
    sobelfilter.ApplyInPlace(grayImage);

    bc.ProcessImage(grayImage);
    Blob[] blobs = bc.GetObjectsInformation();

    // I assume the first/largest blob is what I want.
    List<IntPoint> edgePoints = bc.GetBlobsEdgePoints(blobs[0]);
    List<IntPoint> corners = PointsCloud.FindQuadrilateralCorners(edgePoints);

    QuadrilateralTransformation qfilter = new QuadrilateralTransformation(corners, img.Width, img.Height);
    Bitmap newImage = (Bitmap)qfilter.Apply((System.Drawing.Bitmap)img);
    return newImage;
}
If I display grayImage after it has gone through the SobelEdgeDetector, I get a nice highlight of the boundary of the piece of paper. I was thinking that passing this image through the BlobCounter would give me the edgePoints of the largest blob, which should be the paper. What actually happens is that it comes up with a single blob covering the entire image. I suspect that since the paper is just an outline and not a solid shape, the BlobCounter is not selecting it. Does anyone know how to use AForge to detect the corners of the largest object outline in an image?
I am using OpenCV4Android version 3.1.0 and I want to remove the background in each frame taken from the Android camera. I referred to some posts, and what I understood is that since the background to be removed is non-static (the camera moves), I should use createBackgroundSubtractorMOG2.
Following an example, I am using createBackgroundSubtractorMOG2 as shown in the code below. But at run time, regardless of the changing background in the frames retrieved from the camera, the mask fgmask is always a completely white image.
Please let me know how to use createBackgroundSubtractorMOG2 correctly.
Code:
// use createBackgroundSubtractorMOG2
fgmask = new Mat();
BackgroundSubtractorMOG2 bgs = Video.createBackgroundSubtractorMOG2(30, 16, false);
bgs.apply(mMatInputFrame, fgmask, 0);

// display the mask
final Bitmap bitmap = Bitmap.createBitmap(this.mMatInputFrame.cols(), this.mMatInputFrame.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(this.fgmask, bitmap);
getActivity().runOnUiThread(new Runnable() {
    @Override
    public void run() {
        mIVEdges.setImageBitmap(bitmap);
    }
});
As @Miki says, you cannot use this method if your background is not static.
BackgroundSubtractorMOG2 uses a Gaussian Mixture Model to model the background, so it can adapt to minor changes in it (illumination, new static objects, etc.), but it cannot adapt to a fully dynamic background.
But if you still want to try it, here is how you can use it:
public class MOG2Subtractor {
    private final static double LEARNING_RATE = 0.01;

    private BackgroundSubtractorMOG2 mog;
    private Mat foreground;

    public MOG2Subtractor() {
        mog = Video.createBackgroundSubtractorMOG2();
        foreground = new Mat();
        // You can configure some parameters. For example:
        mog.setDetectShadows(false);
    }

    public Mat process(Mat inputImage) {
        mog.apply(inputImage, foreground, LEARNING_RATE);
        return foreground;
    }
}
Here you have all the parameters and their meaning: BackgroundSubtractorMOG2
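As a rough sketch, configuring the main parameters looks like this; the values shown are the documented defaults, not recommendations:

BackgroundSubtractorMOG2 mog = Video.createBackgroundSubtractorMOG2();
mog.setHistory(500);         // number of recent frames that shape the background model
mog.setVarThreshold(16.0);   // pixel-to-model distance threshold for marking foreground
mog.setDetectShadows(false); // when true, shadow pixels are marked as grey (127)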
I also had the same issue (only a grey screen was displayed).
My problem was that I created a new BackgroundSubtractorMOG2 object for EVERY frame, so initialization of the object should happen before the while loop.
The code below is not working code, but it shows what I mean:
// ### PLACE HERE! ###
BackgroundSubtractorMOG2 bgs = Video.createBackgroundSubtractorMOG2();

while (true) {
    // ### NOT HERE! ###
    // BackgroundSubtractorMOG2 bgs = Video.createBackgroundSubtractorMOG2();
    fgmask = new Mat();
    bgs.apply(inputFrame, fgmask);
    // mat to bitmap and so on ...
}
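In an OpenCV4Android camera pipeline, that placement looks roughly like the sketch below, assuming the CvCameraViewListener2 interface (imports from org.opencv.android, org.opencv.core and org.opencv.video; the class name is a placeholder):

public class CameraListener implements CameraBridgeViewBase.CvCameraViewListener2 {
    private BackgroundSubtractorMOG2 bgs;
    private Mat fgmask;

    @Override
    public void onCameraViewStarted(int width, int height) {
        // created once, so the model can accumulate history across frames
        bgs = Video.createBackgroundSubtractorMOG2();
        fgmask = new Mat();
    }

    @Override
    public void onCameraViewStopped() {
        fgmask.release();
    }

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        // apply() both updates the model and produces the foreground mask
        bgs.apply(inputFrame.rgba(), fgmask);
        return fgmask;
    }
}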
I am having problems with dynamic loading of textures.
When a user double taps on the screen, the background and other sprites are changed. No error is produced, but sometimes the textures are cleared and the new textures are just not loaded.
This is my initial onCreateResource:
ITextureRegion BackgroundTextureRegion;
BitmapTextureAtlas MainTexture1;
//Initiate Textures
MainTexture1 = new BitmapTextureAtlas(this.getTextureManager(),1000,1000, TextureOptions.BILINEAR);
//Clear Textures
MainTexture1.addEmptyTextureAtlasSource(0, 0, 1000,1000);
//Assign Image Files to TextureRegions
BackgroundTextureRegion = BitmapTextureAtlasTextureRegionFactory.createFromAsset(MainTexture1, this, "Evening.jpg",0,0);
//Loading the Main Texture to memory
MainTexture1.load();
There is no problem up to this point. After this, when the user double taps or swipes the background, I change the texture dynamically. Here is the code:
MainTexture1.clearTextureAtlasSources();
MainTexture1.addEmptyTextureAtlasSource(0, 0, 1000,1000);
BitmapTextureAtlasTextureRegionFactory.createFromAsset(MainTexture1, this, "WinterNight.jpg",0,0);
This usually changes the texture and I get the desired result. But on some devices (e.g. the Samsung Tab 2), about 1 time in 10 MainTexture1 is cleared but not loaded with the new image, so it just gives a black screen. How do I correct this?
MainTexture1.clearTextureAtlasSources();
// MainTexture1.addEmptyTextureAtlasSource(0, 0, 1000, 1000);
BitmapTextureAtlasTextureRegionFactory.createFromAsset(MainTexture1, this, "WinterNight.jpg", 0, 0);
// call load() again after swapping the atlas source
MainTexture1.load();
Try this
runOnUiThread(new Runnable() {
    @Override
    public void run() {
        MainTexture1.clearTextureAtlasSources();
        MainTexture1.addEmptyTextureAtlasSource(0, 0, 1024, 1024);
        // inside the Runnable, "this" would be the Runnable itself, so pass the
        // enclosing activity as the context (MainActivity is a placeholder name)
        BitmapTextureAtlasTextureRegionFactory.createFromAsset(MainTexture1, MainActivity.this, "WinterNight.jpg", 0, 0);
        MainTexture1.load();
    }
});
Like this.
If that does not work, try creating another texture atlas instead of reusing the same one.
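A minimal sketch of that alternative, assuming AndEngine's GLES2 API (getTextureManager() comes from the game activity):

// Unload the old atlas and build a fresh one instead of reusing it.
MainTexture1.unload();
BitmapTextureAtlas freshAtlas = new BitmapTextureAtlas(getTextureManager(), 1024, 1024, TextureOptions.BILINEAR);
BackgroundTextureRegion = BitmapTextureAtlasTextureRegionFactory.createFromAsset(freshAtlas, this, "WinterNight.jpg", 0, 0);
freshAtlas.load();
MainTexture1 = freshAtlas;

Any sprite that was built from the old BackgroundTextureRegion would also need to be pointed at the new region.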
What I want: the text is moved with a finger touch over the image, and on a button click the existing image is redrawn into a new one with the text pasted on it.
It works fine on v3.1 as well as on the emulator, but when I tried to test on a v2.2 device it force closes, even though the code is supposed to support those devices. Can you help me out here? It's going to be crucial in a few weeks. Thanks in advance.
// Redrawing the image & touch-move of the canvas with text
public void redrawImage(String path, float sizeValue, String textValue, int colorValue) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    try {
        options.inMutable = true; // note: inMutable requires API level 11+
    } catch (Exception e) {
        System.out.println("#############Error is======" + e.getMessage());
    }
    Bitmap bm = BitmapFactory.decodeFile(path, options);

    proxy = Bitmap.createBitmap(bm.getWidth(), bm.getHeight(), Config.ARGB_8888);
    Canvas c = new Canvas(proxy);

    // Here, we draw the background image.
    c.drawBitmap(bm, new Matrix(), null);

    Paint paint = new Paint();
    paint.setColor(colorValue);   // text color
    paint.setStrokeWidth(30);
    paint.setTextSize(sizeValue); // text size
    System.out.println("Values passing==========" + someGlobalXvariable + ", " + someGlobalYvariable + ", "
            + sizeValue + ", " + textValue);

    // Here, we draw the text where the user last touched.
    c.drawText(textValue, someGlobalXvariable, someGlobalYvariable, paint);
    popImgae.setImageBitmap(proxy);
}
It would help to know when the force close happens: right after the app starts, or as soon as you touch, before the text ever draws?
Debug on the device
A pretty easy and foolproof technique is running the code in debug mode on the actual device. Add a breakpoint at the beginning of your function and step over each line until it force closes.
Possibly OOM
If you are calling redrawImage repeatedly, for example every frame during a touch, then allocating a new bitmap each time may eat a lot of memory quickly and cause the crash:
Bitmap bm = BitmapFactory.decodeFile(path,options);
In that case the force close might happen after a bit. Try changing bm to a method parameter or a member field that is allocated and read from the file only once.
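A sketch of that change (the field and method names are placeholders):

private Bitmap cachedBm; // decoded from file once, reused on every redraw

private Bitmap getSourceBitmap(String path) {
    if (cachedBm == null) {
        cachedBm = BitmapFactory.decodeFile(path);
    }
    return cachedBm;
}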
EDIT:
After playing around with it for a few hours, I came to believe that the problem is image quality. For example, the first image is how it came from the camera; the decoder can't read it. The second image is turned into B/W with adjusted contrast, and the decoder reads it great.
Since the demo app that came with zxing is able to read the first image off the monitor in a few seconds, I think the problem might be in some setting deep within the zxing library. It doesn't wait long enough to process the image, but spits out NotFoundException almost instantly.
I'm making a simple QR-reader app. Here's a screenshot.
The top black area is a SurfaceView that shows frames from the camera. It works fine; you just can't see it in the screenshot.
Then, when I press the button, a bitmap is taken from that SurfaceView, placed on an ImageView below, and the zxing library attempts to read it.
Yet it throws a NotFoundException. :/
10-17 19:53:15.382: WARN/System.err(2238): com.google.zxing.NotFoundException
10-17 19:53:15.382: WARN/dalvikvm(2238): getStackTrace() called but no trace available
On the other hand, if I crop the QR image from this screenshot, place it into the ImageView (instead of the camera feed) and try to decode it, it works fine. Therefore the QR image itself and its quality are OK... but then why doesn't it decode in the first scenario?
Thanks!
public void dec(View v)
{
    ImageView ivCam2 = (ImageView) findViewById(R.id.imageView2);
    ivCam2.setImageBitmap(bm);
    BitmapDrawable drawable = (BitmapDrawable) ivCam2.getDrawable();
    Bitmap bMap = drawable.getBitmap();

    TextView textv = (TextView) findViewById(R.id.mytext);
    LuminanceSource source = new RGBLuminanceSource(bMap);
    BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
    Reader reader = new MultiFormatReader();
    try {
        Result result = reader.decode(bitmap);
        Global.text = result.getText();
        byte[] rawBytes = result.getRawBytes();
        BarcodeFormat format = result.getBarcodeFormat();
        ResultPoint[] points = result.getResultPoints();
        textv.setText(Global.text);
    } catch (NotFoundException e) {
        textv.setText("NotFoundException");
    } catch (ChecksumException e) {
        textv.setText("ChecksumException");
    } catch (FormatException e) {
        textv.setText("FormatException");
    }
}
How the bitmap is created:
@Override
public void surfaceCreated(SurfaceHolder holder)
{
    try
    {
        this.camera = Camera.open();
        this.camera.setPreviewDisplay(this.holder);
        this.camera.setPreviewCallback(new PreviewCallback() {
            public void onPreviewFrame(byte[] _data, Camera _camera) {
                Camera.Parameters params = _camera.getParameters();
                int w = params.getPreviewSize().width;
                int h = params.getPreviewSize().height;
                int format = params.getPreviewFormat();
                YuvImage image = new YuvImage(_data, format, w, h, null);

                // compress the YUV preview frame to JPEG, then decode it back to a Bitmap
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                Rect area = new Rect(0, 0, w, h);
                image.compressToJpeg(area, 50, out);
                bm = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());
            }
        });
    }
    catch (IOException ioe)
    {
        ioe.printStackTrace(System.out);
    }
}
I wrote this code. Returning quickly isn't a problem; decoding is very fast on a mobile device, and very, very fast on a desktop.
The general answer to this type of question is that some images just aren't going to decode. That's life -- the heuristics don't always get it right. But I don't think that is the problem here.
QR codes don't decode without a minimal white "quiet zone" around them. The image beyond its borders is considered white for this purpose, but in your raw camera image there's little border around the code, and I'd bet it's not all considered white by the binarizer.
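One way to test that theory is to paste the captured frame onto a slightly larger white bitmap before decoding, so the code is guaranteed a quiet zone (a sketch; the 40-pixel margin is an arbitrary choice):

// Add a white border around the captured frame to act as a quiet zone.
int margin = 40;
Bitmap padded = Bitmap.createBitmap(bm.getWidth() + 2 * margin,
        bm.getHeight() + 2 * margin, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(padded);
canvas.drawColor(Color.WHITE);
canvas.drawBitmap(bm, margin, margin, null);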
Still, there's more you can do. Set the TRY_HARDER hint to the decoder, for one, to have it spend a lot more CPU trying to decode. You can also try a different Binarizer implementation than the default HybridBinarizer.
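Both suggestions together look roughly like this sketch, using the zxing core decode overload that accepts hints:

// Ask the decoder to spend more CPU, and swap in the alternative binarizer.
Map<DecodeHintType, Object> hints = new EnumMap<DecodeHintType, Object>(DecodeHintType.class);
hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);

BinaryBitmap bitmap = new BinaryBitmap(new GlobalHistogramBinarizer(source));
Reader reader = new MultiFormatReader();
Result result = reader.decode(bitmap, hints); // still throws NotFoundException on failure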
(The rest looks just fine. I assume that RGBLuminanceSource is getting data in the format it expects; it ought to, coming from a Bitmap.)
See this: http://zxing.org/w/docs/javadoc/com/google/zxing/NotFoundException.html The exception means that a barcode wasn't found in the image. My suggestion would be to use the workaround that works instead of trying to decode the un-cropped image.
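If you take that route, cropping the center of the preview bitmap before decoding is one way to reproduce the crop that worked (a sketch; the half-size crop is an arbitrary choice):

// Crop the middle portion of the frame, where the user centers the QR code.
int cropW = bm.getWidth() / 2;
int cropH = bm.getHeight() / 2;
int x = (bm.getWidth() - cropW) / 2;
int y = (bm.getHeight() - cropH) / 2;
Bitmap cropped = Bitmap.createBitmap(bm, x, y, cropW, cropH);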