I want to find out whether a smaller image (sub-image) is part of a larger image or not, and if it is, I want to know its coordinates.
To do that, I was using OpenCV's template matching method. It works fine if the sub-image is exactly the same size as the part of the main image where it matches. But if I change the size/scale of the sub-image, then template matching is not able to do what I want. Does anyone know other methods for doing this? If possible, please give me the code for it. Any help will be appreciated.
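For reference, a common scale-tolerant variant is to run template matching at several scales of the sub-image and keep the best score. The sketch below uses OpenCV's Java API; the scale range and the confidence threshold are assumptions to tune, not values from this thread.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class MultiScaleMatcher {
    // Returns the top-left corner of the best match, or null if no scale
    // reaches the (assumed) confidence threshold.
    public static Point findSubImage(Mat mainImage, Mat template) {
        double bestScore = 0.6;   // assumed threshold for TM_CCOEFF_NORMED
        Point bestLocation = null;

        // Try the template at several scales (assumed range 0.5x .. 1.5x).
        for (double scale = 0.5; scale <= 1.5; scale += 0.1) {
            Mat scaled = new Mat();
            Imgproc.resize(template, scaled, new Size(), scale, scale, Imgproc.INTER_LINEAR);
            if (scaled.cols() > mainImage.cols() || scaled.rows() > mainImage.rows()) {
                continue;  // template larger than the image at this scale
            }

            Mat result = new Mat();
            Imgproc.matchTemplate(mainImage, scaled, result, Imgproc.TM_CCOEFF_NORMED);
            Core.MinMaxLocResult mm = Core.minMaxLoc(result);

            if (mm.maxVal > bestScore) {
                bestScore = mm.maxVal;
                bestLocation = mm.maxLoc;  // top-left corner in mainImage coordinates
            }
        }
        return bestLocation;
    }
}
```

If the scale difference is large, or the sub-image can be rotated, feature matching (e.g. ORB keypoints plus a homography, as in the stitching answer further down) is the usual alternative.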
Related
I need the code to add an invisible watermark to another image in Android.
As the comments mentioned, Stack Overflow isn't a free coding service. I will provide you with high-level design advice from which you can implement your own code.
An invisible watermark could just be metadata. The point is to make your particular photo unique and identifiable, right? I would recommend looking into image metadata manipulation for a simple solution.
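A minimal sketch of that metadata route, assuming the AndroidX ExifInterface library and a made-up marker string:

```java
import androidx.exifinterface.media.ExifInterface;
import java.io.IOException;

public class MetadataWatermark {
    // Writes a custom marker into the JPEG's Exif "UserComment" field.
    public static void tagImage(String jpegPath, String marker) throws IOException {
        ExifInterface exif = new ExifInterface(jpegPath);
        exif.setAttribute(ExifInterface.TAG_USER_COMMENT, marker);
        exif.saveAttributes();  // rewrites the file with the new tag
    }

    // Checks whether the file still carries the marker.
    public static boolean isTagged(String jpegPath, String marker) throws IOException {
        ExifInterface exif = new ExifInterface(jpegPath);
        return marker.equals(exif.getAttribute(ExifInterface.TAG_USER_COMMENT));
    }
}
```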
That being said, if you are looking for some high-tech stealthy watermarking, then you might be looking for pixel manipulation. You can change a few of the pixel colors so that, compared with the original image by the naked eye, it looks identical, but if you compare their base64 encodings you can see a difference. Simply create your own pattern as some sort of signature to attach to images so you can identify them.
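A hedged sketch of that pixel route, hiding a short signature in the least significant bit of the blue channel along the first row (this particular bit layout is an assumption, not a standard scheme):

```java
import android.graphics.Bitmap;
import android.graphics.Color;

public class PixelWatermark {
    // Embeds each bit of the signature into the least significant bit of the
    // blue channel of consecutive pixels in the first row; visually invisible.
    // Assumes signature.length * 8 <= source.getWidth().
    public static Bitmap embed(Bitmap source, byte[] signature) {
        Bitmap out = source.copy(Bitmap.Config.ARGB_8888, true);  // mutable copy
        int x = 0;
        for (byte b : signature) {
            for (int bit = 7; bit >= 0; bit--, x++) {
                int pixel = out.getPixel(x, 0);
                int blue = (Color.blue(pixel) & 0xFE) | ((b >> bit) & 1);
                out.setPixel(x, 0, Color.argb(Color.alpha(pixel),
                        Color.red(pixel), Color.green(pixel), blue));
            }
        }
        return out;
    }
}
```

Note that a mark like this only survives lossless formats such as PNG; JPEG re-compression would destroy it.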
Both methods allow you to determine whether an image is yours thanks to the "watermark" you leave on it.
I need to add, to the user interface of an Android application using the Camera API, a frame showing the ID card position with specific dimensions on the screen while the user is taking a picture.
like this:
Any suggestions?
Thank you.
You will have to make your own camera and process each frame to find and highlight edges.
It's not an easy task :)
https://www.tensorflow.org/ or OpenCV might be of interest to you.
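With OpenCV4Android, the per-frame edge highlighting can be sketched roughly like this (the Canny thresholds are assumptions to tune):

```java
// Sketch: inside an Activity implementing CameraBridgeViewBase.CvCameraViewListener2
// (OpenCV4Android). Uses org.opencv.android.CameraBridgeViewBase,
// org.opencv.core.Mat and org.opencv.imgproc.Imgproc.
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();                      // current preview frame
    Mat edges = new Mat();
    Imgproc.Canny(inputFrame.gray(), edges, 80, 100);  // assumed thresholds
    // Draw the detected edges into the frame that will be shown on screen.
    Imgproc.cvtColor(edges, rgba, Imgproc.COLOR_GRAY2RGBA, 4);
    return rgba;
}
```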
I don't think you can use the Android API (Android.Camera) alone for doing that. You can use OpenCV to do this kind of thing in your application.
I have to stitch two images and I use openCV4Android. I read the docs and some threads about stitching images, for example: Panorama – Image Stitching in OpenCV, Homography between images using OpenCV for Android, Stitch multiple images, Error matching with ORB in Android, and others. At first it seems easy, but the result is strange! Below, you can see the two images that I used for the test and the result:
Here is "image1":
This is "image2":
You can see the drawn features:
And this is the result of warping image1:
What did I do wrong? Or maybe I did not understand something correctly?
Quick answer:
I would say that you don't have enough overlap between your images. If you look at your matches (what you call "drawn features"), most of them are wrong. As a first test, try to stitch two images that have, say, 80% overlap.
More details:
Big picture:
When you stitch two images, you assume that there exists a perspective transform (your "homography") that will project features from one image onto the other one. When you know this transform, then you know the relative position of your images and you can "put them together". If the homography transform that you find is bad, then the stitching will be bad as well.
How do we find the homography transform, then?
First of all, you detect features (with your FeatureDetector) on both images.
Then, you describe them (with your DescriptorExtractor). Basically this creates a representation of your features, so that you can compare two features and see how similar they are.
You match (using your DescriptorMatcher) features from the first image to the features from the second image. It means that for each feature in the first image, you try to find the most similar one in the second image. Those are your "drawn features".
From those matches, you use an algorithm called "RANSAC" to find the homography transform corresponding to your data. The idea is that you try to find, among all your "drawn features", a set of matches that makes sense geometrically.
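Put together, those four steps look roughly like this in the older OpenCV4Android Java API (FeatureDetector/DescriptorExtractor, which OpenCV 4.x later removed); this is a sketch under assumptions, not the asker's actual code:

```java
import java.util.LinkedList;
import java.util.List;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.*;
import org.opencv.imgproc.Imgproc;

public class HomographySketch {
    public static Mat warpFirstOntoSecond(Mat img1, Mat img2) {
        // 1. Detect features on both images.
        FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
        MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
        detector.detect(img1, kp1);
        detector.detect(img2, kp2);

        // 2. Describe them.
        DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
        Mat desc1 = new Mat(), desc2 = new Mat();
        extractor.compute(img1, kp1, desc1);
        extractor.compute(img2, kp2, desc2);

        // 3. Match descriptors from the first image to the second image.
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(desc1, desc2, matches);

        // 4. Collect the matched point pairs and estimate the homography with RANSAC.
        List<Point> pts1 = new LinkedList<>(), pts2 = new LinkedList<>();
        KeyPoint[] k1 = kp1.toArray(), k2 = kp2.toArray();
        for (DMatch m : matches.toArray()) {
            pts1.add(k1[m.queryIdx].pt);
            pts2.add(k2[m.trainIdx].pt);
        }
        MatOfPoint2f src = new MatOfPoint2f(); src.fromList(pts1);
        MatOfPoint2f dst = new MatOfPoint2f(); dst.fromList(pts2);
        Mat homography = Calib3d.findHomography(src, dst, Calib3d.RANSAC, 3.0);

        // Warp image1 into image2's coordinate frame; blending is left out.
        Mat warped = new Mat();
        Imgproc.warpPerspective(img1, warped, homography,
                new Size(img1.cols() + img2.cols(), img2.rows()));
        return warped;
    }
}
```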
But why doesn't it work here?
If you look at your "drawn features", you will see that only a few of them, on the "Go" part of "Google" and some in the bookmarks, correspond correctly, while the others are wrong. It means that most of your matches are bad, which makes it possible to find a homography that fits this data but is nevertheless wrong.
In order to have a better homography, you would need much more "good" matches. Consequently, you probably need to have more overlap between your images.
NOTE: try your code with the images used in "Panorama – Image Stitching in OpenCV"
Nowadays I am doing a project related to image processing with the following features:
1) Stretch 2) Scale 3) Twist
I do not understand how to achieve this in Android.
Here I am putting some screenshots related to this project to make my question clearer.
The above image is the real image. I want to apply image processing to this image to make it like the below image.
Please give me any suggestions, helpful URLs, tutorials, or anything else to achieve this task.
You need to find a function which, when applied to pixel coordinates, outputs new pixel coordinates producing the twist effect you're looking for.
It may help to take a look at some of the functions listed at http://reference.wolfram.com/mathematica/ref/ImageTransformation.html (esp. the section "Neat examples").
Once you have defined the function, you'd need to implement in Android the equivalent of the ImageTransformation command. Basically, for each pixel in the output image, call the function to know where to sample in the input image; use windowing when sampling the input image so that you limit artifacts and get a smoother result.
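To make that inverse-mapping idea concrete, here is a hedged sketch of a swirl-style twist on an Android Bitmap; the twist formula, the strength constant, and the nearest-neighbour sampling are simplifications of what a production version (with proper filtering, as suggested above) would do:

```java
import android.graphics.Bitmap;

public class TwistEffect {
    // For every output pixel, compute where to sample in the input image
    // (inverse mapping), here with a simple swirl around the image centre.
    public static Bitmap twist(Bitmap src, double strength) {
        int w = src.getWidth(), h = src.getHeight();
        Bitmap out = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        double cx = w / 2.0, cy = h / 2.0;
        double maxRadius = Math.min(cx, cy);

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double dx = x - cx, dy = y - cy;
                double r = Math.sqrt(dx * dx + dy * dy);
                // Rotate pixels more strongly near the centre (assumed formula).
                double angle = Math.atan2(dy, dx)
                        + strength * Math.max(0.0, 1.0 - r / maxRadius);
                int sx = (int) Math.round(cx + r * Math.cos(angle));
                int sy = (int) Math.round(cy + r * Math.sin(angle));
                if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                    out.setPixel(x, y, src.getPixel(sx, sy));
                }
            }
        }
        return out;
    }
}
```

With this formula, twist(bitmap, 2.0) rotates pixels by up to roughly 2 radians at the centre and leaves the border untouched; stretch and scale are just different choices of mapping function.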
I am new to Android. I want to display a picture (which is on my C: drive) using the ImageView control. Please give me some ideas related to this.
This covers that in detail. Let us know if you have specific questions.
http://developer.android.com/guide/topics/graphics/2d-graphics.html#drawables-from-images
Displaying an image in an ImageView is quite simple. Check out this family of files from API Demos:
ImageView1.java
image_view_1.xml
ImageView1.java is quite simple: it just loads the xml layout file. You can change the #drawable/... references in image_view_1.xml to point to your own resources and see the effects you get with different styles of ImageView.
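As a hedged sketch of the same idea outside of the API Demos files (the resource name and file path below are made up for the example):

```java
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.widget.ImageView;

public class ShowPictureActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        ImageView imageView = new ImageView(this);

        // Simplest route: copy the picture from your PC into res/drawable and
        // reference it as a resource ("my_picture" is a hypothetical name).
        imageView.setImageResource(R.drawable.my_picture);

        // Alternative: decode an image file that has been copied onto the device
        // (Android cannot read your PC's C: drive directly); path is an assumption.
        // Bitmap bmp = BitmapFactory.decodeFile(getFilesDir() + "/my_picture.jpg");
        // imageView.setImageBitmap(bmp);

        setContentView(imageView);
    }
}
```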
I recommend exploring the API Demos code, as it covers a large portion of the Android framework and will give you an idea about what is possible.