I am writing an application for a Motorola Xoom tablet running Android 3.1 as part of my master's thesis. It scans multiple QR Codes in real time with the tablet's camera and overlays additional information on the display over the recognised QR Codes.
The recognition is done with the ZXing Android app (http://code.google.com/p/zxing/). I basically changed the ZXing app's code so that it can recognise multiple QR Codes at the same time and scan continuously, without freezing after a successful scan the way the original app does. So my app is essentially the ZXing app with continuous scanning of multiple QR Codes.
But I'm facing a problem:
The recognition rate of QR Codes with the built-in camera is not very good. The ZXing app uses the pictures that it gets from the camera preview, but these pictures do not have very good quality. Is there any way to make the camera preview produce better-quality pictures?
P.S. I also tried taking real snapshots with camera.takePicture() to get better quality, but it takes too long to take the picture, so the real-time experience for the user is lost.
Any help is highly appreciated!
Thanks.
Well, the question would be: why is the image quality that bad? Do the images have low resolution? Is the preview out of focus? I've worked with the ZXing Android app before and I know that it has a mechanism to keep the camera auto-focusing on the live scene.
If the auto-focus mechanism is running, then you are possibly decoding some images that are out of focus. Rationally, it would make sense to decode only when the camera is in focus, but that would delay the decoding process, since it would have to wait for focusing before the image-processing phase. However, I wouldn't be too worried about this, for several reasons: 1) the auto-focus is very quick, so there will be very few blurry images (if any at all), 2) the camera keeps focus for long enough to allow a couple of decodings, 3) QR Codes typically do not require perfect images to be detected and decoded; they were designed that way.
If this is a problem for you, then disable the continuous auto-focus and set the focus parameter to whatever suits you.
If the problem comes from low-resolution frames, then increase the preview resolution, but QR Codes were also designed to be identifiable even at small resolutions. Also, keep in mind that increasing the resolution will also increase decoding time.
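As a rough illustration, here is a minimal sketch of both adjustments using the old android.hardware.Camera API that the ZXing app of that era relies on. Where exactly you hook this in depends on your copy of the ZXing camera code (the original app configures parameters in CameraConfigurationManager), so the setup around it is an assumption:

    // imports: android.hardware.Camera, java.util.List
    Camera camera = Camera.open();                      // deprecated API, shown only for illustration
    Camera.Parameters params = camera.getParameters();

    // Switch from continuous auto-focus to plain auto-focus, if the device supports it.
    List<String> focusModes = params.getSupportedFocusModes();
    if (focusModes != null && focusModes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
        params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
    }

    // Pick the largest supported preview size instead of the default one.
    Camera.Size best = null;
    for (Camera.Size size : params.getSupportedPreviewSizes()) {
        if (best == null || size.width * size.height > best.width * best.height) {
            best = size;
        }
    }
    if (best != null) {
        params.setPreviewSize(best.width, best.height);
    }

    camera.setParameters(params);

Larger preview frames give the decoder more pixels per module, at the cost of slower decoding, so it is worth measuring both on the Xoom itself.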
Related
I want to scan a small QR Code (1 cm x 1 cm). I am trying to scan it using this same code. It detects the large QR Code perfectly but does not detect the small one. Is there any way I can solve this issue?
The smartphone camera has to read each and every data module to be able to decode a QR Code, and the quality of the camera varies widely across different smartphones. Some of them are very good and can scan even very small QR Codes, but others simply can't. So try another device to check whether the issue is in your code or with the device.
I want to configure both the front and back cameras with the Android camera2 API, to take pictures and videos from both cameras simultaneously. I have created two TextureViews. Whenever I open one camera (front or back) my code works fine, but whenever I try to open both cameras simultaneously, the code breaks on creating the session and I get a CameraAccessException: configure stream: method not implemented.
I want to save the images captured by the front and back cameras as one image, and both videos as one video.
It would be very helpful if you could share some sample code or a sample link.
I am using a OnePlus 6. I recently downloaded an app, "Dual camera fron back Camera", and with it I am able to capture an image from the front and back cameras at the same time. So if somebody wants to suggest there is no hardware support, that may be valid for other phones, but in my case I think I am missing something in the code. So far, from searching Google, it looks like there is some problem with session creation for the second camera. I debugged my code and it fails during creation of the second camera's session, so if you have any idea about that, please share.
Thanks
Rakesh
The camera API is fine with it, but most Android devices do not have sufficient hardware resources to run both cameras at once, so you will generally get an error trying to open the second camera.
Both image sensors are typically connected to the same image signal processor (ISP), and that ISP can only operate one camera at a time. Some high-end devices have ISPs with multiple processing pipelines which can in theory run more than one camera at a time, but they often require using multiple pipelines to handle advanced functionality or very high resolutions for the main (back) camera.
So on those devices, multiple cameras may be usable at once, but not at maximum resolution or other similar restrictions.
Some manufacturers include multi-camera features in their own camera app, since they know exactly what the limitations are and can write application code to work within them. They may not make multi-camera available to normal apps, due to concerns about performance, thermal limits, or just lack of time to verify more than the exact use case they implement in their own app.
The Android camera API does not currently have a way to query if multiple cameras can be used at once, or if they can be, what the restrictions are. So the only thing you can do is try, and handle the errors in case that isn't feasible.
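To make "try and handle the errors" concrete, here is a minimal sketch under the assumption that the first camera is already open; cameraManager, secondCameraId and backgroundHandler are placeholders for whatever your app already holds:

    // imports: android.hardware.camera2.* (CameraManager, CameraDevice, CameraAccessException)
    try {
        cameraManager.openCamera(secondCameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(CameraDevice camera) {
                // Second camera opened; go on to create its capture session here.
            }

            @Override
            public void onDisconnected(CameraDevice camera) {
                camera.close();
            }

            @Override
            public void onError(CameraDevice camera, int error) {
                // ERROR_MAX_CAMERAS_IN_USE and similar codes land here on devices
                // that cannot run both cameras at once; fall back to one camera.
                camera.close();
            }
        }, backgroundHandler);
    } catch (CameraAccessException e) {
        // Failures can also surface later, at session-creation time, as you are seeing;
        // treat either case as "simultaneous use not supported" and fall back.
    }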
I am using this project, android-camera2-secret-picture-taker, to capture an image without opening a camera view, but the captured images are very bad, like this:
Any help to make this better?
Thanks.
[Edit]
I tried other phones and it works fine; I get these bad images on the Huawei Y6II only and I don't know why. The phone's camera is 13 MP and works fine with the native camera app.
Did you issue only a single capture request to the camera device? (No free-running preview or such).
Generally, the auto-exposure, focus, and white-balance routines take a second or so of streaming before they stabilize to good values.
Even if you don't want a preview on screen, you need to request 10-30 frames of data from the camera to start before you save a final image. Or to be more robust, set a repeating request targeting some low-resolution SurfaceTexture, and wait until the CaptureResult CONTROL_AE_STATE / AWB_STATE fields reach CONVERGED, and the AF_STATE field is what you want as well (depends on what AF mode you're using). Then capture your image.
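As a sketch of that convergence check (the field names are from the camera2 API; the repeating request and the still-capture call around it are assumed to exist elsewhere in your code):

    // imports: android.hardware.camera2.* (CameraCaptureSession, CaptureRequest, CaptureResult, TotalCaptureResult)
    CameraCaptureSession.CaptureCallback waitFor3A = new CameraCaptureSession.CaptureCallback() {
        @Override
        public void onCaptureCompleted(CameraCaptureSession session,
                                       CaptureRequest request,
                                       TotalCaptureResult result) {
            Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
            Integer awbState = result.get(CaptureResult.CONTROL_AWB_STATE);
            boolean aeReady = aeState == null
                    || aeState == CaptureResult.CONTROL_AE_STATE_CONVERGED;
            boolean awbReady = awbState == null
                    || awbState == CaptureResult.CONTROL_AWB_STATE_CONVERGED;
            if (aeReady && awbReady) {
                // 3A has settled; issue the single high-resolution still capture now.
                issueStillCapture();   // placeholder for your existing capture code
            }
        }
    };
    // Pass waitFor3A as the callback of the low-resolution repeating request.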
This is a wildly blind guess, but hey, worth a try.
If you used some code snippet from the web that suggests getting the list of supported image sizes and just picking the first one - well, this has backfired for me on Huawei devices (more than one model), because Huawei seems to provide the list in ascending order of resolution (i.e. smallest first), whereas most other devices I've seen do it in descending order (i.e. largest first).
So if this is a resolution issue, it might be worth a check.
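A small sketch of the defensive fix, assuming camera2 as in the linked project: never trust the ordering of the size list and pick the largest entry explicitly (characteristics stands in for the CameraCharacteristics you already query):

    // imports: android.hardware.camera2.CameraCharacteristics,
    //          android.hardware.camera2.params.StreamConfigurationMap,
    //          android.graphics.ImageFormat, android.util.Size, java.util.*
    StreamConfigurationMap map =
            characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    Size[] jpegSizes = map.getOutputSizes(ImageFormat.JPEG);
    Size largest = Collections.max(Arrays.asList(jpegSizes),
            (a, b) -> Long.compare((long) a.getWidth() * a.getHeight(),
                                   (long) b.getWidth() * b.getHeight()));
    // Use `largest` when creating the ImageReader instead of jpegSizes[0].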
I was trying to implement a burst-mode camera in my app, which can take multiple pictures at a rate of 5-10 (or more) snaps per second.
FYI, I already saw the previous questions here, here and here - I tried them and failed on speed. Also, those questions are old and there are no comprehensive answers addressing all the concerns, such as how to manage the heap.
I would really appreciate if someone can help with useful pointers, best practice or maybe an SSCCE.
Update:
I successfully tried pulling preview frames at 15+ snaps/sec, but the problem is that the preview size is limited. On a Nexus 5 I can get only 1920x1080, which is ~2 MP, whereas the full-resolution picture possible on the N5 is 8 MP :-(
I think a big part of the problem is the question: How does burst mode work in current phones? A couple of blogs point out that Google has confirmed that they will be adding a burst mode API.
I suspect current implementations work by setting the exposure time to minimum and calling takePicture in a loop, or by using Camera.PreviewCallback.
I played around with the latter for some computer vision projects and happened to look into writing a burst mode camera using this API. You could store the buffers you receive from Camera.PreviewCallback in memory and process them on a background thread.
If I remember correctly, the resolution was lower than the actual camera resolution, so this may not be ideal.
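For what it's worth, a rough sketch of that preview-frame approach with the old android.hardware.Camera API (the camera setup, background thread, and file output are assumed to exist already; real code should recycle buffers via setPreviewCallbackWithBuffer/addCallbackBuffer to reduce GC pressure):

    // imports: android.hardware.Camera, android.graphics.YuvImage,
    //          android.graphics.ImageFormat, android.graphics.Rect, java.util.*
    final List<byte[]> burstFrames = Collections.synchronizedList(new ArrayList<byte[]>());

    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // The camera reuses this array, so copy it before keeping a reference.
            burstFrames.add(data.clone());
        }
    });

    // Later, on a background thread: convert each NV21 frame (the default preview
    // format) to JPEG and write it out.
    Camera.Size size = camera.getParameters().getPreviewSize();
    for (byte[] frame : burstFrames) {
        YuvImage yuv = new YuvImage(frame, ImageFormat.NV21, size.width, size.height, null);
        // yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 90, outputStream);
    }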
Short of device-specific APIs offered by their manufacturers, the only way you can get a "burst mode" that has a shot of working across devices will be to use the preview frames as the images. takePicture() has no guarantees of when you will be able to call takePicture() again.
In my application I am using the ZXing library for decoding barcodes. "Motorola Xoom" and "Samsung" are the target devices. The company for which I am developing this application uses Code 39 barcodes for their products.
ZXing decodes short barcodes fine, but when I try to decode lengthy Code 39 barcodes it keeps trying and produces no result. For a clearer image I increased the scanning rectangle area, which worked for the Samsung but not for the Motorola. Is there any way I can make it work on the Motorola? Any feedback will be highly appreciated.
Often the problem is a difference in minimum focal distance. That is, if the Motorola device can't focus as closely, then widening the rectangle may make the user hold the barcode so close as to be too close to focus. I would look at this first.
Otherwise you're looking at improving the image processing for this case. The challenge is that the app does simple thresholding, which works well in common cases. It falls down when you have dense 1D barcodes whose bar width nears 1 pixel. Because each pixel is either black or white you lose proportionally a lot of detail about exactly where the bars are.
If that's really the issue you could look at rewriting your app to use a full-resolution capture from the camera, instead of preview. In normal cases, more resolution doesn't help; in these cases it might. You would not be able to have a continuous-scan app this way.
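If you try that route, a hedged sketch of decoding a full-resolution still with the old android.hardware.Camera API plus ZXing's core classes might look like this (error handling and memory management omitted; decoding a full-size bitmap this way is memory-hungry, so downsampling may be needed in practice):

    // imports: android.hardware.Camera, android.graphics.Bitmap, android.graphics.BitmapFactory,
    //          com.google.zxing.* (RGBLuminanceSource, BinaryBitmap, MultiFormatReader, ...),
    //          com.google.zxing.common.HybridBinarizer
    camera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera camera) {
            Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
            int[] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
            bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0,
                    bitmap.getWidth(), bitmap.getHeight());
            LuminanceSource source =
                    new RGBLuminanceSource(bitmap.getWidth(), bitmap.getHeight(), pixels);
            BinaryBitmap binary = new BinaryBitmap(new HybridBinarizer(source));
            try {
                Result result = new MultiFormatReader().decode(binary);
                // result.getText() holds the decoded Code 39 contents
            } catch (NotFoundException e) {
                // no barcode found in this shot
            }
            camera.startPreview();   // preview stops after takePicture()
        }
    });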
I am one of the Barcode Scanner devs, and maintain a (for-pay) enhanced version called Barcode Scanner+. It has a different image processing algorithm that finds boundaries at sub-pixel resolution, which works better for codes like these. You may want to see how it does -- and if that works well, at least that tells you the kind of approach that works better. I can't send you that code but can describe what it does, if you want to investigate that sort of image processing.