How to equalize brightness, contrast and histograms between two images using EMGU CV - Android

What I am doing is attempting to use EMGU to perform an AbsDiff of two images.
Given the following conditions:
User starts their webcam and with the webcam stationary takes a picture.
User moves into the frame and takes another picture (WebCam has NOT moved).
AbsDiff works well, but what I'm finding is that the ISO and white-balance adjustments made by certain cameras (even on Android and iPhone) are uncontrollable to a degree.
Therefore instead of fighting a losing battle I'd like to attempt some image post processing to see if I can equalize the two.
I found the following thread but it's not helping me much: How do I equalize contrast & brightness of images using opencv?
Can anyone offer specific details of what functions/methods/approach to take using EMGUCV?
I've tried using things like _EqualizeHist(). This yields very poor results.
Instead of equalizing the histograms for each image individually, I'd like to compare the brightness/contrast values and come up with an average that gets applied to both.
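Something like the following is what I have in mind, as a language-agnostic sketch in plain Python on flat pixel lists rather than actual EMGU CV types (the function names here are purely illustrative): compute each image's mean (brightness) and standard deviation (contrast), average the two, and remap both images onto the shared target statistics.

```python
def mean_std(pixels):
    # brightness = mean, contrast = standard deviation of the pixel values
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean, var ** 0.5

def match_to(pixels, target_mean, target_std):
    # shift and scale every pixel so the image takes on the target statistics
    mean, std = mean_std(pixels)
    scale = target_std / std if std else 1.0
    return [min(255, max(0, (p - mean) * scale + target_mean)) for p in pixels]

# two toy "images" of the same scene with different exposure/contrast
a = [10, 40, 90, 160, 220]
b = [30, 55, 95, 150, 200]

ma, sa = mean_std(a)
mb, sb = mean_std(b)
tm, ts = (ma + mb) / 2, (sa + sb) / 2   # averaged target statistics

a2 = match_to(a, tm, ts)
b2 = match_to(b, tm, ts)
# a2 and b2 now share the same brightness and contrast
```

In EMGU CV I would expect the per-channel statistics to be obtainable with something like AvgSdv before applying the linear remap, but treat that as an assumption to verify against the API docs.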
I'm not looking for someone to do the work for me (although a code example would CERTAINLY be appreciated). I'm looking for either exact guidance or some way to point the ship in the right direction.
Thanks for your time.

Related

Android steganography detection LSB

I am trying to detect LSB steganography using the real-time camera on a mobile phone. So far I haven't had much luck detecting the LSB steganography, whether on printed material or on a PC screen.
I tried using OpenCV to convert each frame to RGB and then read the bits from each pixel, but that never detects the steganography.
I also tried using the Camera functionality and checking in onFrame, pixel by pixel, whether the starting string is recognized, so I can read the actual hidden data in the remaining pixels.
This gave a positive result a few times, but then reading the data was impossible.
Any suggestions how to approach this?
A little more information on the hidden data:
1. It is all over the image, and I know the algorithm works, since if I just read the exact image through a Bitmap in the app, the steganography is detected and decoded; but when I try to use the camera, no such luck.
2. It is in a grid, 8x5 pixels, all over the image, so it is not confined to one specific area that the camera view might miss.
I can post some code as well if needed.
Thanks.
You still haven't clarified the specifics of how you do it, but I assume you do some flavour of the following:
embed a secret in a digital image,
print this stego image or have it displayed on a pc, and
take a photograph of that and detect the embedded secret.
For all practical purposes, this can't work. LSB pixel-embedding steganography is a very fragile technique. You require a perfect copy of the stego image's pixels for extraction to work. Even a simple digital manipulation is enough to destroy your secret: scaling, cropping and rotation, to name a few. Then you have to worry about the angle you take the photo at and the ambient light. And we're not even touching upon the colours that are shown on a PC monitor or in the printed photo.
The only reason you get positives for the starting sequence is because you use a short one and you're bound to be lucky. Assuming the photographed stego image results in random deviations for each pixel from its true value, you'll still get lucky sometimes. Imagine the first pixel had the value 250 and after photographed it's 248. Well, the LSB in both cases is still 0.
On top of that, some sequences are more likely to come up. In most photos neighbouring pixels are correlated, because the colour gradient is smooth. This means that if the top left of a photo is dark and the top right is bright, the colour will change slowly. For example, the first 4 pixels have the value 10, then the next few have 11, and so on. In terms of LSBs, you have the pattern 00001111 and as I've just explained, that's likely to come up fairly frequently regardless of what image you photograph out there.
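To make the fragility concrete, here is a small self-contained simulation in plain Python (the random per-pixel shift is a crude stand-in for what photographing a screen actually does to pixel values):

```python
import random

def embed_lsb(pixels, bits):
    # set the least significant bit of each pixel to one payload bit
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    return [p & 1 for p in pixels]

random.seed(1)
cover = [random.randrange(256) for _ in range(64)]
payload = [random.randrange(2) for _ in range(64)]

stego = embed_lsb(cover, payload)
assert extract_lsb(stego) == payload   # perfect digital copy: extraction works

# simulate photographing the screen: each pixel shifts by a small random amount
photographed = [min(255, max(0, p + random.choice([-2, -1, 0, 1, 2])))
                for p in stego]
recovered = extract_lsb(photographed)
errors = sum(x != y for x, y in zip(recovered, payload))
# roughly 40% of the payload bits flip: the secret is gone
```

Every odd-sized shift flips the recovered bit, so even this gentle noise model (far milder than a real camera pipeline) destroys the payload.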

Real time mark recognition on Android

I'm building an Android app that has to identify, in realtime, a mark/pattern which will be on the four corners of a visiting card. I'm using a preview stream of the rear camera of the phone as input.
I want to overlay a small circle on the screen where the mark is present. This is similar to how reference dots will be shown on screen by a QR reader at the corner points of the QR code preview.
I'm aware of how to get the frames from the camera using the native Android SDK, but I have no clue about the processing that needs to be done, or about optimization for real-time detection. I tried messing around with OpenCV and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on which algorithm to use to find the pattern.
The image below shows my pattern along with some details (ratios) about it (it is the same as the one used in QR codes, but I'm having it at 4 corners instead of 3).
I think one approach is to find black and white pixels in the ratio mentioned below to detect the mark and find the coordinates of its center, but I have no idea how to code that in Android. I'm looking for an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
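The ratio-scanning idea from the question can be sketched in a few lines of plain Python. This assumes a 1:1:3:1:1 black:white:black:white:black run-length ratio as in the QR finder pattern; a real implementation would binarize the frame first, scan many rows and columns, and intersect the candidates:

```python
def run_lengths(row):
    # collapse a binarized row (0 = black, 1 = white) into [value, length] runs
    runs = []
    for p in row:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def find_marks(row, tol=0.5):
    """Return centre x-coordinates where five consecutive runs
    black:white:black:white:black match the 1:1:3:1:1 ratio."""
    positions, x = [], 0
    for v, n in run_lengths(row):
        positions.append((v, n, x))
        x += n
    centres = []
    for i in range(len(positions) - 4):
        window = positions[i:i + 5]
        if [v for v, _, _ in window] != [0, 1, 0, 1, 0]:
            continue
        lengths = [n for _, n, _ in window]
        unit = sum(lengths) / 7.0   # the whole pattern spans 7 modules
        if all(abs(n - e * unit) <= tol * unit
               for n, e in zip(lengths, [1, 1, 3, 1, 1])):
            start = window[0][2]
            centres.append(start + sum(lengths) / 2.0)
    return centres

# synthetic scanline: white margin, one 1:1:3:1:1 mark (unit = 2 px), white margin
row = [1]*6 + [0]*2 + [1]*2 + [0]*6 + [1]*2 + [0]*2 + [1]*6
```

Running on raw camera bytes (the Y plane of the NV21 preview) with an approach like this is cheap enough for real time, since it touches each pixel once.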
Detecting patterns on four corners of a visiting card:
Assuming the background is white, you can simply try this method.
On the processing that needs to be done and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV.
In this example, the image shown on the tablet has a delay (Bluetooth); the Google Glass preview is much faster than the tablet's, but there is still some lag.

Scan for Object with Android Camera

I need to scan a special object within my android application.
I thought about using OpenCV, but it scans all objects inside the view of the camera. I only need the camera to recognize a rectangular piece of paper.
How can I do that?
My first thought was: how do barcode scanners work? They are able to recognize the barcode area and automatically take a picture when the barcode is inside a predefined area of the screen and when it's sharp. I guess it must be possible to transfer that to my problem (tell me if I'm wrong).
So step by step:
Open custom camera application
Scan objects inside the view of the camera
Recognize the rectangular piece of paper
If paper is inside a predefined area and sharp -> take a picture
I would combine this with audio: if the camera recognizes the paper, play a sound like a beep, and the better the object fits the predefined area, the faster the beep is repeated. That would make taking pictures possible for blind people.
I hope someone has ideas on that.
OpenCV is an image-processing framework/library. It does not "scan all objects inside the view of the camera". By itself it does nothing, but it provides a number of useful functions, many of which could be used for your application.
If the image is not cluttered and nothing is on the paper, I would look into using edge detection (i.e. Canny or similar) or even colour blobs (even though colour is rarely a good idea, if your application always deals with white, uncovered paper it should work robustly).
OpenCV does add some overhead, but it would allow you to quickly use functions for a simple solution.
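As a toy illustration of the idea, here is a plain-Python sketch on a synthetic grayscale frame (in a real app you would use OpenCV's Canny + findContours + approxPolyDP and check for a 4-vertex contour instead): a sheet of paper shows up as a bright blob that almost completely fills its own bounding box.

```python
def find_paper(img, thresh=200):
    """Locate a bright, roughly rectangular region in a grayscale image
    given as a list of rows. Returns (top, left, bottom, right), or None
    if the bright blob does not fill its bounding box well enough."""
    coords = [(y, x) for y, row in enumerate(img)
              for x, p in enumerate(row) if p >= thresh]
    if not coords:
        return None
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    top, bottom = min(ys), max(ys)
    left, right = min(xs), max(xs)
    area = (bottom - top + 1) * (right - left + 1)
    # an axis-aligned rectangle (the paper) fills nearly all of its bounding box
    if len(coords) / area < 0.9:
        return None
    return top, left, bottom, right

# synthetic frame: dark background with a white "sheet of paper"
img = [[30] * 12 for _ in range(10)]
for y in range(3, 8):
    for x in range(2, 9):
        img[y][x] = 250
```

Once you have the rectangle per frame, the "is it inside the predefined area and sharp?" check reduces to comparing the returned coordinates against a target box, and sharpness can be estimated from edge contrast.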

To eliminate gray stripes when taking photos of computer displays using the Android SDK

Maybe a comparison of pictures best illustrates the problem.
This is the original picture:
Using Android SDK, I managed to take this photo from my Android phone:
You may see that there are lots of gray stripes on the photo.
Although the main shapes are there, these gray stripes completely ruin the results, since I'm processing these photos for an image recognition project.
It seems that the built-in photo app would automatically eliminate them (Edit: it does not), but I don't know how to do it manually in my app. This seems to be caused by the display having a different refresh rate.
What you're seeing is happening because cameras have a small advantage over the human eye when taking the photo.
The refresh rate for most displays is 50Hz or 60Hz, which is too fast for our eyes to notice.
However, a camera sensor takes the image much faster than the human eye, and can see the scan lines created by the refreshing of the image on the display. You can work around this by using a longer exposure time, closer to the human eyes' speed, but you may not be able to control that on most Android devices.
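You can convince yourself of this with a crude simulation (the on/off square-wave panel model and the 60 Hz rate below are simplifying assumptions, not how any particular display actually behaves): sensor rows exposed for only a fraction of a refresh period see very different brightness depending on when they start (the stripes), while rows exposed for a full period all see the same average.

```python
REFRESH_HZ = 60.0
PERIOD = 1.0 / REFRESH_HZ

def display_brightness(t):
    # crude model: the panel is "lit" for the first half of each refresh cycle
    return 1.0 if (t % PERIOD) < PERIOD / 2 else 0.0

def row_exposure(start, exposure, steps=1000):
    # average brightness a sensor row accumulates while it is exposed
    dt = exposure / steps
    return sum(display_brightness(start + i * dt) for i in range(steps)) / steps

# rows of a rolling-shutter sensor start exposing at staggered times
row_starts = [i * PERIOD / 8 for i in range(8)]

short = [row_exposure(s, PERIOD / 8) for s in row_starts]   # fast shutter: stripes
long_ = [row_exposure(s, PERIOD) for s in row_starts]       # one full refresh: uniform
```

With the fast shutter the simulated rows swing between fully lit and fully dark, which is exactly the banding in the photo; with an exposure of one full refresh period every row integrates the same amount of light.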
I suggest you use your operating system's built-in screenshot utility instead.

Trouble recognizing digits in Tesseract - android

I was hoping someone could tell me why my Tesseract has trouble recognizing some images with digits, and whether there is something I can do about it.
Everything is working according to the tests, and since it is only digits I need, I thought I could manage with the English language data until I had to start with the 7-segment display as well.
I am having a lot of trouble with the appended images, though. I'd like to know whether I should start working on my own recognition algorithms, or whether I could build my own datasets for Tesseract and it would then work. Does anyone know where the limitations of Tesseract lie?
Things tried:
I tried setting psm to one_line, one_word and one_char (and chopping up the picture).
With one_line and one_word there was no significant change.
With one_char it did recognize a bit better, but sometimes, due to big spacing, it attached an extra number, which then screwed it up; if you look at the attached image, it resulted in "04".
I have also tried doing the binarization myself; this resulted in poorer recognition and was very resource-consuming.
I have tried inverting the pictures; this makes no difference at all to Tesseract.
I have attached the pictures i'd need, among others, to be processed.
Explanation of the images:
The first is an image that Tesseract has no trouble recognizing, though it was made in Word for the convenience of building the app around a working image.
The second is a real-life image matching image_seven, but Tesseract cannot recognize it.
The third is another image I'd like it to recognize; yes, I know it can't be skewed, and I did deskew it ("deskew" meaning straightening) when testing.
I know of some options that might help you:
Add extra space between the image border and the text. Tesseract works poorly if the text in the image is positioned right at the edge.
Duplicate your image. For example, if you're performing OCR on a word 'foobar', clone the image and send 'foobar foobar foobar foobar foobar' to tesseract, results would be better.
Google for font training and image binarization for tesseract.
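The border trick from the first point can be sketched like this in plain Python, on an image held as a simple list of rows (with a real bitmap you would use your imaging library's padding equivalent, e.g. something like OpenCV's copyMakeBorder):

```python
def add_border(img, margin, white=255):
    """Pad a grayscale image (list of rows) with a white margin so the
    text does not touch the edges -- Tesseract copes badly with that."""
    w = len(img[0])
    blank = [white] * (w + 2 * margin)
    padded = [blank[:] for _ in range(margin)]
    for row in img:
        padded.append([white] * margin + list(row) + [white] * margin)
    padded += [blank[:] for _ in range(margin)]
    return padded

glyph = [[0, 0],
         [0, 0]]          # a 2x2 black mark touching every edge
padded = add_border(glyph, 10)   # same mark, now surrounded by white space
```

The duplication trick from the second point is the same kind of preprocessing: tile the padded word image horizontally a few times before handing it to Tesseract.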
Keep in mind that the built-in cameras in mobile devices mostly produce low-quality images (blurred, noisy, skewed, etc.). OCR itself is a resource-consuming process, and if you add worthy image preprocessing on top of that, low-end and mid-range mobile devices (which are likely to run Android) could face unexpectedly slow performance or even a lack of resources. That's OK for free/study projects, but if you're planning a commercial app, consider using a better SDK.
Have a look at this question for details: OCR for android
Tesseract doesn't do segmentation for you. Tesseract does a thresholding of the image prior to the actual Tesseract algorithm. After thresholding, there may be some edges and artefacts remaining in the image.
Try to manually modify your images to black and white colors and see what tesseract returns as output.
Try to threshold (automatically) your images and see what tesseract returns as output. The output of thresholding may be too bad causing tesseract to give bad output.
Your 4th image will probably fail due to thresholding (you have 3 colors: black background, greyish background and white letters), and the threshold may fall between the black background and the greyish background.
Generally Tesseract wants nice black and white images. Preprocessing of your images may be needed for better results.
For your first image (with the result "04"), try to see the box result (char + coordinates of box that contains the recognized char). The "0" may be a small artefact - like a 4 by 4 blob of pixels.
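To see how the thresholding can go wrong on that 4th image, here is a minimal Otsu-style global threshold in plain Python (used here only as a stand-in for Tesseract's internal binarisation) applied to a synthetic three-level histogram. The best split lands between the black and greyish backgrounds, so the greyish background ends up on the same side as the white letters:

```python
def otsu_threshold(pixels):
    """Pick the threshold that maximises between-class variance (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]                 # weight of the "dark" class (pixels <= t)
        if w0 == 0:
            continue
        w1 = total - w0               # weight of the "bright" class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mean0 = sum0 / w0
        mean1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mean0 - mean1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# three populations: black background, greyish background, white letters
pixels = [5] * 500 + [120] * 400 + [250] * 100
t = otsu_threshold(pixels)
# t falls below the greyish level, so grey (120) and letters (250)
# land in the same "white" class and the letters are swallowed
```

This is why manually flattening the greyish background to pure black (or white) before OCR usually helps more than fiddling with Tesseract options.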
You may give javaocr a try (http://sourceforge.net/projects/javaocr/ ; yes, I'm a developer).
There is no official release though, and you will have to build from the sources (good news: there is a working Android sample, including a sampler, offline trainer and recognizer application).
If you only use one font, you can get pretty good results with it (I reached recognition rates of up to 99.96% on digits of the same font).
PS: it is pure Java and uses invariant moments to perform matching (so no problems with scaling and rotation). There is also a pretty effective binarisation.
See it in action:
https://play.google.com/store/apps/details?id=de.pribluda.android.ocrcall&feature=search_result#?t=W251bGwsMSwxLDEsImRlLnByaWJsdWRhLmFuZHJvaWQub2NyY2FsbCJd
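For reference, the invariant-moments matching mentioned above boils down to something like this minimal sketch in Python of the first Hu moment (javaocr's actual implementation will differ; this just shows why such features survive translation and rotation):

```python
def hu1(img):
    """First Hu invariant moment (eta20 + eta02) of a binary image,
    invariant to translation, scale and rotation."""
    pts = [(x, y) for y, row in enumerate(img) for x, v in enumerate(row) if v]
    m00 = len(pts)                      # area = zeroth moment
    cx = sum(x for x, _ in pts) / m00   # centroid, so the feature is
    cy = sum(y for _, y in pts) / m00   # translation-invariant
    mu20 = sum((x - cx) ** 2 for x, _ in pts)
    mu02 = sum((y - cy) ** 2 for _, y in pts)
    # normalised central moments: eta_pq = mu_pq / m00**(1 + (p+q)/2)
    return (mu20 + mu02) / m00 ** 2

blob = [[0, 1, 1, 0],
        [1, 1, 1, 1],
        [0, 1, 1, 0]]
shifted = [[0] * 6] + [[0, 0] + row for row in blob]   # same blob, translated
rotated = [list(r) for r in zip(*blob[::-1])]           # same blob, rotated 90 degrees
# hu1 returns the same value for all three
```

A recognizer then just compares a glyph's moment vector against the trained set, which is why scaling and rotation of the digits are not a problem.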
