Can a moving camera read the QR code? - android

I want to use a QR code as a landmark in my project, where the phone's camera is moving. I am using an Android phone, so I am wondering how fast a QR code can be read and recognized.
Normal use takes quite a long time, around 2-3 seconds.
Can the recognition process be accelerated to, say, 0.5 seconds?
Is it possible for a moving camera to catch the QR code?

Have you already tried this library? https://code.google.com/p/zxing/
In my experience, if the camera is in focus, it recognizes the QR code in 1 second or less.
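For reference, here is a minimal sketch of decoding a single camera preview frame with the ZXing core library, assuming the frame arrives in the NV21 format that Android preview callbacks deliver; the helper name decodeFrame is made up for illustration. Restricting the formats to QR_CODE is the main thing that keeps per-frame decoding fast on a moving camera.

import com.google.zxing.BarcodeFormat
import com.google.zxing.BinaryBitmap
import com.google.zxing.DecodeHintType
import com.google.zxing.MultiFormatReader
import com.google.zxing.NotFoundException
import com.google.zxing.PlanarYUVLuminanceSource
import com.google.zxing.common.HybridBinarizer

// Hypothetical helper: decode one NV21 preview frame with ZXing core.
// Returns the QR payload, or null if no code was found in this frame.
fun decodeFrame(data: ByteArray, width: Int, height: Int): String? {
    val source = PlanarYUVLuminanceSource(data, width, height, 0, 0, width, height, false)
    val bitmap = BinaryBitmap(HybridBinarizer(source))
    val reader = MultiFormatReader().apply {
        // Only look for QR codes; scanning for every format slows each frame down.
        setHints(mapOf(DecodeHintType.POSSIBLE_FORMATS to listOf(BarcodeFormat.QR_CODE)))
    }
    return try {
        reader.decodeWithState(bitmap).text
    } catch (e: NotFoundException) {
        null
    } finally {
        reader.reset()
    }
}

Called once per preview callback, the decoder itself is rarely the bottleneck; in practice the limiting factors tend to be focus and motion blur.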

It depends on the camera's sensitivity (ISO), shutter speed, and aperture, as well as on external factors: the luminosity, the viewing angle, and the relative speed of the QR code being scanned.

Related

ML Kit Barcode Scanning doesn't detect QR codes in the photo of the monitor screen

I am using com.google.mlkit:barcode-scanning:17.0.2 to detect QR codes in pictures.
After getting a URI from the gallery, I create an InputImage and then process this image with BarcodeScanner to find QR codes. When I select a photo of a QR code printed on paper, the code is found. But when I take a photo of a QR code shown on a monitor screen, the code is never found. What should I do to be able to detect a QR code in a photo of a monitor screen?
(When I use the same scanner with CameraX to do live QR code detection, it finds the code on the monitor screen.)
val image = InputImage.fromFilePath(context, uri)
val scanOptions = BarcodeScannerOptions.Builder()
    .setBarcodeFormats(Barcode.FORMAT_QR_CODE)
    .build()
val scanner = BarcodeScanning.getClient(scanOptions)
scanner.process(image)
    .addOnSuccessListener { barcodes ->
        val code = barcodes.getOrNull(0)?.rawValue
        if (code == null) {
            // code NOT found
        } else {
            // code was found
        }
    }
Example of QR code on paper which is found
Example of QR code on the monitor screen which is NOT found
Chances are that you're fighting against the Moiré effect. Depending on the QR detection algorithm, the high frequencies introduced by the Moiré pattern can throw the detector off its track. Frustratingly, it is often the better QR code detectors that are defeated by Moiré patterns.
A good workaround is:
take the picture at the highest resolution you can
blur the picture
increase the contrast to the maximum, if possible
(optionally) run a sigma thresholding, or simply rewrite all pixels with a luma component below 32 to 0 and all those above 224 to 255.
Another way of doing approximately the same operation is:
take the picture at the highest resolution you can
increase the contrast to the maximum, if possible
downsample the picture to a much lower resolution
The second method gives worse results, but it can usually be implemented with device primitives.
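As a rough illustration of the idea (not an exact implementation of either method), here is what a downsample-plus-threshold pre-processing step could look like with plain Android Bitmap APIs; the function name preprocessForQr and the scale factor of 4 are arbitrary assumptions, while the 32/224 cut-offs are the ones mentioned in the first method.

import android.graphics.Bitmap
import android.graphics.Color

// Downsample to suppress Moiré frequencies, then clamp near-black and
// near-white pixels. Slow (per-pixel), but fine for a single still photo.
fun preprocessForQr(src: Bitmap): Bitmap {
    val scaled = Bitmap.createScaledBitmap(src, src.width / 4, src.height / 4, true)
    val out = scaled.copy(Bitmap.Config.ARGB_8888, true)
    for (y in 0 until out.height) {
        for (x in 0 until out.width) {
            val p = out.getPixel(x, y)
            // Approximate luma from RGB.
            val luma = (0.299 * Color.red(p) + 0.587 * Color.green(p) + 0.114 * Color.blue(p)).toInt()
            val v = when {
                luma < 32 -> 0
                luma > 224 -> 255
                else -> luma
            }
            out.setPixel(x, y, Color.rgb(v, v, v))
        }
    }
    return out
}

The result can then be fed back to the same ML Kit scanner with InputImage.fromBitmap(preprocessForQr(bitmap), 0) instead of InputImage.fromFilePath.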
Another source of problems with monitors (not in your picture, as far as I can see) is the refresh rate. Sometimes you'll find that the QR code is actually an overexposed QR code in the upper half of the picture and an underexposed QR code in the bottom half, and neither is recognized. This effect comes from the monitor's refresh rate and strategy and is not easy to solve: you can try lowering the monitor's luminosity so that the exposure time increases until it exceeds 1/50th or 1/25th of a second, or take the picture from farther away and use digital zoom. Modern monitors have higher refresh rates and keep pixels lit for longer than a single refresh, so this should not happen; with old analog monitors, however, it will happen every time.
A third, crazy way
This was discovered half by chance, but it works really well even on cheap hardware, provided the QR SDK or library supplies a few small extras.
Take a video of about 1 second in length at the highest frame rate you can get (25 fps?).
From the middle frame (e.g. the 13th), extract the three QR "waypoints" - there might be a low-level function in your SDK, something like "containsQRCode()", that does this. If it returns true, the waypoints were found and their coordinates are returned, which lets you estimate scale and position. It might also return a confidence figure ("this picture seems to contain a QR code with probability X%"). These are the APIs apps use to draw a frame or red dots around candidate QR codes. If your SDK doesn't have these APIs, sorry... you're out of luck.
Get the frames immediately before and after (the 12th and 14th), then the 11th and 15th, and so on. If any of these returns a valid QR code, you're home free.
If the QR code is found (even if not correctly decoded) in enough frames, but the waypoint coordinates vary a lot, the hand is not steady - tell the user so.
If you have enough frames whose coordinates vary little, you can center and align on those and average the frames, then run the real QR code recognition on the resulting image. This gets rid of the Moiré effect entirely and also drastically reduces monitor dwell noise, with next to no information loss. The results are far better than changing the resolution, which isn't easy to do on (some) devices that reset the camera upon a resolution change.
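A minimal sketch of that averaging step, assuming the frames have already been centered/aligned on the waypoints and converted to same-sized ARGB_8888 bitmaps (the helper name averageFrames is made up):

import android.graphics.Bitmap
import android.graphics.Color

// Average the luma of aligned frames; Moiré and refresh noise average out,
// while the (static) QR modules reinforce each other.
fun averageFrames(frames: List<Bitmap>): Bitmap {
    val w = frames[0].width
    val h = frames[0].height
    val acc = IntArray(w * h)
    val pixels = IntArray(w * h)
    for (frame in frames) {
        frame.getPixels(pixels, 0, w, 0, 0, w, h)
        for (i in pixels.indices) {
            val p = pixels[i]
            acc[i] += (Color.red(p) + Color.green(p) + Color.blue(p)) / 3
        }
    }
    val out = IntArray(w * h)
    for (i in out.indices) {
        val v = acc[i] / frames.size
        out[i] = Color.rgb(v, v, v)
    }
    return Bitmap.createBitmap(out, w, h, Bitmap.Config.ARGB_8888)
}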
This worked on a $19 ESP32 IoT device operating in a noisy, vibration-rich environment (it acquires QR codes from a camera image of carton boxes on a moving conveyor belt).

Real time mark recognition on Android

I'm building an Android app that has to identify, in real time, a mark/pattern that will be on the four corners of a visiting card. I'm using the preview stream of the phone's rear camera as input.
I want to overlay a small circle on the screen where the mark is present. This is similar to how a QR reader shows reference dots on screen at the corner points of the QR code preview.
I know how to get the frames from the camera using the native Android SDK, but I have no clue about the processing that needs to be done, or how to optimize it for real-time detection. I tried messing around with OpenCV, and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on an algorithm to find the pattern.
The image below shows my pattern along with some details (ratios) about it (the same as the one used in QR codes, but I'm putting it at 4 corners instead of 3).
I think one approach is to find black and white pixels in the ratio mentioned below to detect the mark and find the coordinates of its center, but I have no idea how to code this on Android. I'm looking for an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
Detecting patterns on four corners of a visiting card:
Assuming the background is white, you can simply try this method.
Processing and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV.
In this example, the image shown on the tablet has a delay (Bluetooth); the Google Glass preview is much faster than the tablet's, but there is still some lag.
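If you do want to code the ratio-based idea from the question by hand, the usual trick (the same one QR finder-pattern detectors use) is to walk each binarized row, collect run lengths of alternating black/white pixels, and test every window of five runs against the 1:1:3:1:1 ratio. The sketch below assumes you already have a thresholded row (true = black); the function names and the 50% tolerance are illustrative choices, not a fixed recipe.

import kotlin.math.abs

// True if five consecutive run lengths roughly match 1:1:3:1:1.
fun looksLikeFinderPattern(runs: IntArray): Boolean {
    val total = runs.sum()
    if (total < 7) return false              // too small to hold 7 modules
    val module = total / 7.0
    val tolerance = module / 2.0
    val expected = doubleArrayOf(1.0, 1.0, 3.0, 1.0, 1.0)
    for (i in 0 until 5) {
        if (abs(runs[i] - expected[i] * module) > expected[i] * tolerance) return false
    }
    return true
}

// Scan one row and return x-coordinates of candidate mark centers.
fun findCandidateCenters(row: BooleanArray): List<Int> {
    val runs = mutableListOf<Int>()
    val starts = mutableListOf<Int>()
    var i = 0
    while (i < row.size) {
        val color = row[i]
        val start = i
        while (i < row.size && row[i] == color) i++
        runs.add(i - start)
        starts.add(start)
    }
    val centers = mutableListOf<Int>()
    for (w in 0..runs.size - 5) {
        // A mark must start with a black run.
        if (row[starts[w]] && looksLikeFinderPattern(runs.subList(w, w + 5).toIntArray())) {
            centers.add(starts[w + 2] + runs[w + 2] / 2)   // center of the wide middle run
        }
    }
    return centers
}

Cross-checking candidate columns against the same test run vertically gives you the mark centers to draw your circles on; a single pass over the frame like this should be cheap enough to run on every preview frame without OpenCV.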

Reading a 1D barcode regardless of Orientation

I am trying to use ZXing to read 1D barcodes, and I want to be able to read a barcode regardless of its orientation, since I assume the person may not be looking at the image. I noticed that ZXing can read a barcode rotated up to about 45 degrees. Is there a reason it doesn't test both orientations of the image, and is it possible to make it do this?
If not, are there alternatives that can?
The reason is just that 99.9% of the time people scan a barcode in its natural orientation (or upside down). Scanning for vertical barcodes would usually just be a waste of time when you could be moving on to the next frame to scan. But it's easy to do: just add an extra chunk of code to rotate and re-scan the image.
@user117: it is not necessary to try all orientations. Any rotation for which a horizontal line still passes through the whole barcode works. You would only have to try additional rotations to cover the cases beyond those, and it turns out that at most 4 rotations are needed to cover any orientation.
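As a sketch of the "rotate and re-scan" idea with ZXing core and a plain Android Bitmap (the helper names and the single 90-degree retry are assumptions for illustration, not ZXing API):

import android.graphics.Bitmap
import android.graphics.Matrix
import com.google.zxing.BinaryBitmap
import com.google.zxing.MultiFormatReader
import com.google.zxing.RGBLuminanceSource
import com.google.zxing.ReaderException
import com.google.zxing.common.HybridBinarizer

// Try the natural orientation first; if nothing is found, rotate 90 degrees and retry.
fun decodeAnyOrientation(src: Bitmap): String? =
    decodeBitmap(src) ?: decodeBitmap(rotate90(src))

private fun rotate90(src: Bitmap): Bitmap {
    val m = Matrix().apply { postRotate(90f) }
    return Bitmap.createBitmap(src, 0, 0, src.width, src.height, m, true)
}

private fun decodeBitmap(bmp: Bitmap): String? {
    val pixels = IntArray(bmp.width * bmp.height)
    bmp.getPixels(pixels, 0, bmp.width, 0, 0, bmp.width, bmp.height)
    val source = RGBLuminanceSource(bmp.width, bmp.height, pixels)
    return try {
        MultiFormatReader().decode(BinaryBitmap(HybridBinarizer(source))).text
    } catch (e: ReaderException) {
        null    // no barcode found in this orientation
    }
}

Since horizontal scans already cover upside-down codes (as the answer above notes), one extra 90-degree pass covers vertical barcodes, at roughly double the worst-case decode time.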

Location using QR code in Android

I want to use a QR code to get the smartphone's location (either UTM or lat/lon). Reading this article, it looks like it is possible to get the position of the smartphone. In addition, I want to render some 3D models on the camera screen. Is that possible? I actually have no clue where to start.
Can anyone help me out with this?
Thanks.
If you read that article carefully, all it suggests for getting the location of the phone is to simply encode the lat/lon in the QR code itself. This will only work if the locations of the displayed QR codes are fixed (e.g. a sticker on a wall rather than something printed on a flyer).
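A tiny sketch of that approach, assuming you encode a standard geo: URI (e.g. "geo:48.8584,2.2945") as the QR payload; the data class and function name are made-up illustrations:

// Expected QR payload: "geo:<lat>,<lon>" with optional extra parameters.
data class LandmarkLocation(val lat: Double, val lon: Double)

fun parseGeoPayload(payload: String): LandmarkLocation? {
    if (!payload.startsWith("geo:")) return null
    // Drop optional parts such as ";u=10" or "?q=..." before splitting.
    val coords = payload.removePrefix("geo:")
        .substringBefore(';')
        .substringBefore('?')
        .split(',')
    if (coords.size < 2) return null
    val lat = coords[0].toDoubleOrNull() ?: return null
    val lon = coords[1].toDoubleOrNull() ?: return null
    return LandmarkLocation(lat, lon)
}

Whatever string the scanner returns as the decoded payload goes straight into parseGeoPayload; the position you get back is of course the sticker's fixed position, not a live GPS fix, which is exactly the limitation described above.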
Is it possible to render 3D models on a camera screen? Sure. It wouldn't be the default camera app; you'd have to make your own. It would involve a fair bit of math if you wanted to position the 3D model relative to the QR code. You'd probably build planes based on the sides of the squares.

Better quality for camera preview frame

I'm writing an application for a Motorola Xoom tablet running Android 3.1 for my master's thesis. It scans multiple QR codes in real time with the tablet's camera and displays additional information on screen over the recognized QR codes.
The recognition is done with the ZXing Android app (http://code.google.com/p/zxing/); I basically just changed the code of the ZXing app so that it can recognize multiple QR codes at the same time and scan continuously, without freezing after a successful scan like the original app does. So my app is basically the ZXing app with continuous scanning of multiple QR codes.
But I'm facing a problem:
The recognition rate of QR codes with the built-in camera is not very good. The ZXing app uses the pictures that it gets from the camera preview, but these pictures do not have very good quality. Is there any possibility of making the camera preview produce better quality pictures?
P.S. I also tried to take real snapshots with camera.takePicture() to get better quality, but taking the picture takes too long, so the real-time experience for the user is lost.
Any help is highly appreciated!
Thanks.
Well, the question would be... why is the image quality that bad? Does the image have a low resolution? Is the preview out of focus? I've worked with the ZXing Android app before, and I know that it has a mechanism to keep the camera auto-focusing on the live scene.
If the auto-focus mechanism is running, then you are possibly decoding some images that are out of focus. Rationally, it would make sense to decode only when the camera is in focus, but that would delay the decoding process, since it would have to wait for focusing before the image-processing phase. However, I wouldn't worry too much about this, for several reasons: 1) the auto-focus is very quick, so there will be very few blurry images (if any at all); 2) the camera keeps focus for long enough to allow a couple of decodings; 3) QR codes typically do not require perfect images to be detected and decoded - they were designed that way.
If this is a problem for you, then disable the continuous auto-focus and set the focus parameter to whatever suits you.
If the problem comes from low-resolution frames, well, increase the resolution... but QR codes were also designed to be identified even at small resolutions. Also, keep in mind that increasing the resolution will also increase decoding time.
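For what it's worth, here is a small sketch of both suggestions using the legacy android.hardware.Camera API (the API the ZXing app used on that generation of Android). Which focus modes and preview sizes are actually available varies per device, so treat this as a starting point rather than a recipe.

// Pick the largest supported preview size and switch from continuous
// auto-focus to plain auto focus. Check the supported lists first:
// setting an unsupported value can make setParameters() throw.
@Suppress("DEPRECATION")
fun configurePreview(camera: android.hardware.Camera) {
    val params = camera.parameters
    // More preview pixels per QR module helps the decoder.
    val best = params.supportedPreviewSizes.maxByOrNull { it.width * it.height }
    if (best != null) params.setPreviewSize(best.width, best.height)
    if (android.hardware.Camera.Parameters.FOCUS_MODE_AUTO in params.supportedFocusModes) {
        params.focusMode = android.hardware.Camera.Parameters.FOCUS_MODE_AUTO
    }
    camera.parameters = params
}

Remember that a bigger preview size means more bytes per frame for ZXing to binarize, so there is a trade-off between recognition rate and frame rate.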
