I am using OpenCV4Android version 3.1.0 and I want to remove the background from each frame taken from the Android camera. From the posts I referred to, my understanding is that since the background is non-static (the frames come from the moving Android camera), I should use createBackgroundSubtractorMOG2.
Following an example, I use createBackgroundSubtractorMOG2 as shown in the code below. But at run time, regardless of how the background changes in the frames retrieved from the camera, the mask fgmask is always a completely white image.
Please let me know how to use createBackgroundSubtractorMOG2 correctly.
Code:
// use createBackgroundSubtractorMOG2
fgmask = new Mat();
BackgroundSubtractorMOG2 bgs = Video.createBackgroundSubtractorMOG2(30, 16, false);
bgs.apply(mMatInputFrame, fgmask, 0);

// to display the mask
final Bitmap bitmap = Bitmap.createBitmap(this.mMatInputFrame.cols(), this.mMatInputFrame.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(this.fgmask, bitmap);
getActivity().runOnUiThread(new Runnable() {
    @Override
    public void run() {
        mIVEdges.setImageBitmap(bitmap);
    }
});
As @Miki says, you cannot use this method if your background is not static.
BackgroundSubtractorMOG2 uses a Gaussian Mixture Model to model the background, so it can adapt to minor changes in it (illumination, new static objects, etc.), but it cannot adapt to a fully dynamic background.
But if you still want to try it, here is how you can use it:
public class MOG2Subtractor {

    private static final double LEARNING_RATE = 0.01;

    private BackgroundSubtractorMOG2 mog;
    private Mat foreground;

    public MOG2Subtractor() {
        mog = Video.createBackgroundSubtractorMOG2();
        foreground = new Mat();
        // You can configure some parameters. For example:
        mog.setDetectShadows(false);
    }

    public Mat process(Mat inputImage) {
        mog.apply(inputImage, foreground, LEARNING_RATE);
        return foreground;
    }
}
Here you have all the parameters and their meaning: BackgroundSubtractorMOG2
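In case it helps, here is a rough sketch of how you might feed camera frames into this helper from an OpenCV camera callback. It assumes a CvCameraViewListener2 implementation; mSubtractor is just an illustrative field name, not something from your code.

// Minimal sketch, assuming the camera view is wired up elsewhere.
// mSubtractor is a hypothetical field holding the helper class defined above.
private MOG2Subtractor mSubtractor = new MOG2Subtractor();

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    Mat mask = mSubtractor.process(rgba);   // foreground mask for this frame

    // Optionally keep only the foreground pixels.
    Mat output = new Mat();
    rgba.copyTo(output, mask);
    return output;
}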
I also had the same issue (the mask only showed a gray screen).
My problem was that I created a new BackgroundSubtractorMOG2 object EVERY frame.
The object should be initialized once, before the while loop.
The code below is not complete working code, but it should make clear what I mean:
// ### PLACE HERE! ###
BackgroundSubtractorMOG2 bgs = Video.createBackgroundSubtractorMOG2();

while (true) {
    // ### NOT HERE! ###
    // BackgroundSubtractorMOG2 bgs = Video.createBackgroundSubtractorMOG2();

    fgmask = new Mat();
    bgs.apply(inputFrame, fgmask);
    // mat to bitmap and so on ..
}
Related
I am working with OpenCV4Android version 3.0.0 and I am trying to remove the background from a video stream with a non-static background. I want to do this because I have a problem detecting the edges of a card: edge detection depends on the card's color and the background color, as explained in my question here.
After referring to some posts I wrote the code below, but at run time, when I display the "mask" image I get a completely grey image, and when I display the "output" image after applying the "mask" to it, I get the same preview that is shown on the camera.
Is there any way to remove a non-static background from a video stream?
Code:
mask = new Mat();
BackgroundSubtractorMOG2 mog2 = Video.createBackgroundSubtractorMOG2();
mog2.apply(mInputFrame, mask, .000005);

output = new Mat();
mInputFrame.copyTo(output, mask);

final Bitmap bitmap = Bitmap.createBitmap(mInputFrame.cols(), mInputFrame.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(output, bitmap);
getActivity().runOnUiThread(new Runnable() {
    @Override
    public void run() {
        mIVEdges.setImageBitmap(bitmap);
    }
});
When I try to render the texture and transformation matrix to the EGLSurface, nothing is displayed in the view.
As a follow-up to this issue, I have slightly modified the code by following grafika/fadden's continuous-capture sample code.
Here is my code:
This is the draw method, which runs on the RenderThread.
It is invoked properly whenever data is produced at the producer end from native code.
public void drawFrame() {
    mOffScreenSurface.makeCurrent();
    mCameraTexture.updateTexImage();
    mCameraTexture.getTransformMatrix(mTmpMatrix);

    mSurfaceWindowUser.makeCurrent();
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    mSurfaceWindowUser.swapBuffers();
}
The run method of the RenderThread:
public void run() {
    Looper.prepare();
    mHandler = new RenderHandler(this);
    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);

    mOffScreenSurface = new OffscreenSurface(mEglCore, 640, 480);
    mOffScreenSurface.makeCurrent();
    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    // This surface is sent to native code, which gets an ANativeWindow reference
    // and copies the data into it using the post method (the producer).
    mCameraSurface = new Surface(mCameraTexture);
    mCameraTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
        @Override
        public void onFrameAvailable(SurfaceTexture surfaceTexture) {
            Log.d(TAG, "Long breath.. data is pumped by the native-layer producer..");
            mHandler.frameReceivedFromProducer();
        }
    });

    // mSurfaceUser is the surface received from the MainActivity TextureView.
    mSurfaceWindowUser = new WindowSurface(mEglCore, mSurfaceUser, false);
}
To confirm that the producer on the native side is producing data: if I pass the user surface directly, without any EGL configuration, the frames are rendered to the screen.
At the native level:
geometryResult = ANativeWindow_setBuffersGeometry(userNaiveWindow, 640, 480, WINDOW_FORMAT_RGBA_8888);
To render each frame I use ANativeWindow_lock() and ANativeWindow_unlockAndPost() to write the frame directly into the buffer.
I cannot figure out what could be wrong or where I should dig deeper.
Thanks, fadden, for your help.
I am creating an app using AndEngine for the first time. My app is animation-oriented, i.e. it has a number of images that animate when clicked. Most of these animations are frame-by-frame animations. I am using AndEngine because I need animations with a particle system, gravity and other effects. Can someone help me with simple on-click animation code in AndEngine, or point me to a good tutorial? All the AndEngine tutorials I have found are game tutorials that do not cover frame-by-frame animation. Any help would be appreciated.
Before starting: note that this answer uses the TexturePacker extension for AndEngine.
For my frame-by-frame animations I use a program called TexturePacker, which is supported by AndEngine. You just drag all your images into it, and it exports the three files you need in your project: a .xml, a .java and a .png.
Doing that creates one big bitmap (try to stay at or below 2048x2048) with all the frames inside it.
Assuming you have those files created, copy them into your project: the .png and .xml go into the same directory, most likely under assets/gfx/ ..., and the .java file goes into the src directory with the rest of your classes.
Now let's look at some code.
First of all we need to load the textures from those files; we will do that with the following code.
These are the variables we will use to create our animatable object:
private TexturePack dustTexturePack;
private TexturePackTextureRegionLibrary dustTexturePackLibrary;
public TiledTextureRegion dust;
The following code loads the individual textures from the bitmap into our variables:
try {
    dustTexturePack = new TexturePackLoader(activity.getTextureManager(), "gfx/Animations/Dust Animation/")
            .loadFromAsset(activity.getAssets(), "dust_anim.xml");
    dustTexturePack.loadTexture();
    dustTexturePackLibrary = dustTexturePack.getTexturePackTextureRegionLibrary();
} catch (TexturePackParseException e) {
    Debug.e(e);
}

TexturePackerTextureRegion[] obj = new TexturePackerTextureRegion[dustTexturePackLibrary.getIDMapping().size()];
for (int i = 0; i < dustTexturePackLibrary.getIDMapping().size(); i++) {
    obj[i] = dustTexturePackLibrary.get(i);
}

dust = new TiledTextureRegion(dustTexturePack.getTexture(), obj);
As you can see, we are using a TiledTextureRegion object. What we have done so far is load the textures and give our TiledTextureRegion all the information it needs about the regions of the smaller images inside our big bitmap.
Later on, to use this in any part of the game, we can do the following (note that my "dust" variable lives inside a ResourceManager class and is therefore public, which explains the next snippet):
AnimatedSprite dustAnimTiledSprite = new AnimatedSprite(500, 125, resourcesManager.dust, vbom);
myScene.attachChild(dustAnimTiledSprite);
Finally, to animate the object at a given time, we just use the animate method, like this:
dustAnimTiledSprite.animate(40, 0);
(In this case, the duration of each frame is 40, and 0 loops means it will be animated once.)
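Since the question asks for an on-click animation, here is a rough, untested sketch of how you might trigger the animation from a touch on the sprite. It reuses the same objects as above; overriding onAreaTouched and registering the sprite as a touch area is one common way to do this in AndEngine.

// Sketch: play the animation when the sprite is tapped.
AnimatedSprite dustAnimTiledSprite = new AnimatedSprite(500, 125, resourcesManager.dust, vbom) {
    @Override
    public boolean onAreaTouched(TouchEvent pSceneTouchEvent, float pTouchAreaLocalX, float pTouchAreaLocalY) {
        if (pSceneTouchEvent.isActionDown()) {
            animate(40, 0); // play all frames once on tap
        }
        return true;
    }
};
myScene.attachChild(dustAnimTiledSprite);
myScene.registerTouchArea(dustAnimTiledSprite); // needed so the sprite receives touch events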
** I'm not too sure what the difference is between AnimatedSprite and TiledSprite, but this is how I show simple animations in my game.
I hope this is what you were looking for. Good luck!
This is for a sprite sheet with 8 frames:
Player.sprite.animate(new long[] { 100, 100 }, 7, 8, false,
        new IAnimationListener() {
            public void onAnimationStarted(AnimatedSprite pAnimatedSprite, int pInitialLoopCount) {
            }

            public void onAnimationLoopFinished(AnimatedSprite pAnimatedSprite, int pRemainingLoopCount, int pInitialLoopCount) {
            }

            public void onAnimationFrameChanged(AnimatedSprite pAnimatedSprite, int pOldFrameIndex, int pNewFrameIndex) {
            }

            public void onAnimationFinished(AnimatedSprite pAnimatedSprite) {
                Player.sprite.animate(new long[] { 100, 100, 100, 100, 100, 100, 100 }, 0, 6, true);
            }
        });
I'm creating a simple drawing prototype to be used on Android where the user can drag his finger across the screen and draw basic lines/shapes etc. I'm having some performance issues when drawing over the same areas and after a while performance drops considerably.
I am wondering if there is any way, after a line has been drawn (after a touch-begin, touch-move, touch-end event chain), to store the newly drawn line in a bitmap containing the rest of the drawing.
I've had a look at bitmap.merge() but this would create problems when it came to mixing colors. I simply want any new 'drawings' to be saved on top of everything drawn previously.
// To hold the current 'drawing'
var clip:Shape = new Shape();
// To hold past 'drawings'
var drawing:Bitmap = new Bitmap();

public function Main()
{
    Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;

    addChild(drawing);
    addChild(clip);

    addEventListener(TouchEvent.TOUCH_BEGIN, tBegin);
    addEventListener(TouchEvent.TOUCH_MOVE, tMove);
    addEventListener(TouchEvent.TOUCH_END, tEnd);
}

private function tBegin(e:TouchEvent):void
{
    clip.graphics.lineStyle(28, 0x000000);
    clip.graphics.moveTo(mouseX, mouseY);
}

private function tMove(e:TouchEvent):void
{
    clip.graphics.lineTo(mouseX, mouseY);
}

private function tEnd(e:TouchEvent):void
{
    // Save new graphics and merge with drawing
}
Just keep drawing into your clip shape, and in tEnd draw clip into a BitmapData assigned to your Bitmap:
// To hold the current 'drawing'
var bmpData:BitmapData = new BitmapData(800, 800); // put your desired size here
var clip:Shape = new Shape();
// To hold past 'drawings'
var drawing:Bitmap = new Bitmap(bmpData);

public function Main()
{
    Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;

    addChild(drawing);
    addChild(clip);

    addEventListener(TouchEvent.TOUCH_BEGIN, tBegin);
    addEventListener(TouchEvent.TOUCH_MOVE, tMove);
    addEventListener(TouchEvent.TOUCH_END, tEnd);
}

private function tBegin(e:TouchEvent):void
{
    clip.graphics.lineStyle(28, 0x000000);
    clip.graphics.moveTo(mouseX, mouseY);
}

private function tMove(e:TouchEvent):void
{
    clip.graphics.lineTo(mouseX, mouseY);
}

private function tEnd(e:TouchEvent):void
{
    // Save the new graphics into the bitmap and clear the vector shape
    bmpData.draw(clip);
    clip.graphics.clear();
}
I need to erase what has been painted. I draw the paint strokes using a SurfaceView, and inside the erase button I use the code below. When I click the erase button all the drawn strokes are erased, but if I then try to draw again, the new strokes are not visible. Can anyone please help me?
public void onClick(View view) {
    if (view == erasebtn) {
        if (!currentDrawingPath.isEmpty()) {
            currentPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
            action = true;
        }
    }
}
If you want to completely erase all drawing you have to fill it with the "empty" color.
Assuming you have a canvas in which you draw:
canvas.drawColor(Color.WHITE);
If you have drawn lines etc. into a Canvas that you keep adding drawings to, then you need a way to restore an older version of it. Changing the Paint you used will not change the things you have already drawn; it only affects how any future drawing is done.
There are several possibilities to do that e.g. the following should work:
Bitmap bitmap = Bitmap.createBitmap(400, 400, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
Paint paint = new Paint();
ByteBuffer buffer = ByteBuffer.allocate(bitmap.getByteCount());

// save the state
bitmap.copyPixelsToBuffer(buffer);

// draw something
canvas.drawLine(0, 0, 400, 400, paint);

// restore the state
buffer.rewind(); // reset the buffer position before reading it back
bitmap.copyPixelsFromBuffer(buffer);
That way you can go back one state. If you need to undo more steps, think about saving the bitmaps to disk, since keeping them in memory will consume quite a lot of it.
Another possibility is to record every drawing step in a list (like a vector graphic) so that you can redraw the full image up to a certain point; then you can undo by redrawing only the first part of the list onto a fresh image.
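As a rough illustration of that list-based idea (the method and field names here are hypothetical, not taken from your code): each stroke is stored as a Path plus its Paint, and the whole list is replayed onto a cleared canvas after an undo.

// Minimal sketch of command-list undo, assuming android.util.Pair and java.util collections.
private final List<Pair<Path, Paint>> strokes = new ArrayList<>();

void addStroke(Path path, Paint paint) {
    strokes.add(new Pair<>(path, paint));
}

void undoLastStroke() {
    if (!strokes.isEmpty()) {
        strokes.remove(strokes.size() - 1);
    }
}

void redrawAll(Canvas canvas) {
    canvas.drawColor(Color.WHITE); // start again from the "empty" color
    for (Pair<Path, Paint> stroke : strokes) {
        canvas.drawPath(stroke.first, stroke.second);
    }
}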
Edit: Would it work if you add this to the code and use it instead of undo()?
// add me to the code that has undo()
public void undoAll() {
    final int length = currentStackLength();
    for (int i = length - 1; i >= 0; i--) {
        final DrawingPath undoCommand = currentStack.get(i);
        currentStack.remove(i);
        undoCommand.undo();
        redoStack.add(undoCommand);
    }
}