Can't read a QR code from camera - android

EDIT:
After playing around with it for a few hours, I came to believe that the problem is the image quality. For example, the first image is how it came from the camera; the decoder can't read it. The second image is turned into B/W with adjusted contrast, and the decoder reads it fine.
Since the demo app that came with zxing is able to read the first image off the monitor in a few seconds, I think the problem might be some setting deep within the zxing library. It doesn't wait long enough to process the image, but spits out NotFound almost instantly.
I'm making a simple QR-reader app. Here's a screenshot.
The top black area is a SurfaceView that shows frames from the camera. It works fine; you just can't see it in the screenshot.
Then, when I press the button, a bitmap is taken from that SurfaceView, placed on an ImageView below, and the zxing library attempts to read it.
Yet it throws a NotFoundException. :/
10-17 19:53:15.382: WARN/System.err(2238): com.google.zxing.NotFoundException
10-17 19:53:15.382: WARN/dalvikvm(2238): getStackTrace() called but no trace available
On the other hand, if I crop the QR image from this screenshot, place it into the ImageView (instead of a camera feed) and try to decode it, it works fine. Therefore the QR image itself and its quality are OK... but then why doesn't it decode in the first scenario?
Thanks!
public void dec(View v)
{
    ImageView ivCam2 = (ImageView) findViewById(R.id.imageView2);
    ivCam2.setImageBitmap(bm);
    BitmapDrawable drawable = (BitmapDrawable) ivCam2.getDrawable();
    Bitmap bMap = drawable.getBitmap();
    TextView textv = (TextView) findViewById(R.id.mytext);
    LuminanceSource source = new RGBLuminanceSource(bMap);
    BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
    Reader reader = new MultiFormatReader();
    try {
        Result result = reader.decode(bitmap);
        Global.text = result.getText();
        byte[] rawBytes = result.getRawBytes();
        BarcodeFormat format = result.getBarcodeFormat();
        ResultPoint[] points = result.getResultPoints();
        textv.setText(Global.text);
    } catch (NotFoundException e) {
        textv.setText("NotFoundException");
    } catch (ChecksumException e) {
        textv.setText("ChecksumException");
    } catch (FormatException e) {
        textv.setText("FormatException");
    }
}
how the bitmap is created:
@Override
public void surfaceCreated(SurfaceHolder holder)
{
    try
    {
        this.camera = Camera.open();
        this.camera.setPreviewDisplay(this.holder);
        this.camera.setPreviewCallback(new PreviewCallback() {
            public void onPreviewFrame(byte[] _data, Camera _camera) {
                Camera.Parameters params = _camera.getParameters();
                int w = params.getPreviewSize().width;
                int h = params.getPreviewSize().height;
                int format = params.getPreviewFormat();
                YuvImage image = new YuvImage(_data, format, w, h, null);
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                Rect area = new Rect(0, 0, w, h);
                image.compressToJpeg(area, 50, out);
                bm = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());
            }
        });
    }
    catch (IOException ioe)
    {
        ioe.printStackTrace(System.out);
    }
}

I wrote this code. Returning quickly isn't a problem. Decoding is very fast on a mobile, and very very fast on a desktop.
The general answer to this type of question is that some images just aren't going to decode. That's life -- the heuristics don't always get it right. But I don't think that is the problem here.
QR codes don't decode without a minimal white "quiet zone" around them. The image beyond its borders is considered white for this purpose. But in your raw camera image, there's little border around the code and it's not all considered white by the binarizer, I'd bet.
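One quick way to test that theory is to pad the captured bitmap with a white margin before handing it to the decoder. A rough sketch (the helper name and the 40 px margin are my own placeholders, not anything from zxing):

// Pad the frame with a white border so the binarizer sees a proper quiet zone.
private Bitmap addWhiteBorder(Bitmap src, int margin) {
    Bitmap padded = Bitmap.createBitmap(src.getWidth() + 2 * margin,
            src.getHeight() + 2 * margin, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(padded);
    canvas.drawColor(Color.WHITE);                 // all-white background = quiet zone
    canvas.drawBitmap(src, margin, margin, null);  // original frame in the centre
    return padded;
}

// e.g. in dec(): LuminanceSource source = new RGBLuminanceSource(addWhiteBorder(bMap, 40));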
Still, there's more you can do. Set the TRY_HARDER hint to the decoder, for one, to have it spend a lot more CPU to try to decode. You can also try a different Binarizer implementation than the default HybridBinarizer.
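For example, the decode step in dec() above could be changed roughly like this (a sketch; DecodeHintType.TRY_HARDER and GlobalHistogramBinarizer are standard zxing core classes, and source is the variable you already build):

// Spend more CPU on the decode attempt and try the simpler global binarizer.
Map<DecodeHintType, Object> hints = new EnumMap<DecodeHintType, Object>(DecodeHintType.class);
hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);

BinaryBitmap bitmap = new BinaryBitmap(new GlobalHistogramBinarizer(source));
Reader reader = new MultiFormatReader();
Result result = reader.decode(bitmap, hints);  // decode(BinaryBitmap, Map) accepts the hints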
(The rest looks just fine. I assume that RGBLuminanceSource is getting data in the format it expects; it ought to from Bitmap)

See this: http://zxing.org/w/docs/javadoc/com/google/zxing/NotFoundException.html The exception means that a barcode wasn't found in the image. My suggestion would be to use the workaround that works instead of trying to decode the un-cropped image.
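If you go that route, the crop itself is a one-liner; a sketch (the crop rectangle values are placeholders you would derive from where the code sits in your preview layout):

// Crop the region of interest out of the preview bitmap before decoding.
// cropLeft/cropTop/cropWidth/cropHeight are placeholders for your own layout values.
Bitmap cropped = Bitmap.createBitmap(bm, cropLeft, cropTop, cropWidth, cropHeight);
LuminanceSource source = new RGBLuminanceSource(cropped);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));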

Related

AndroidSVG fuzzy edges on image

I want to display a barcode on Android. As input I get an SVG string. As the SVG library I use AndroidSVG. I used the sample code from the library website and everything seemed to be fine. But when I zoom in on the image, I get distorted edges (anti-aliasing?). I tried disabling all the flags, but the image still has fuzzy edges. What could be wrong with my code?
Picture:
Try to zoom to max, you will see the fuzzy edges.
Code:
private void loadQRCode(String svgString) {
    SVG svg = null;
    try {
        svg = SVG.getFromString(svgString);
    } catch (SVGParseException e) {
        e.printStackTrace();
    }
    if (svg.getDocumentWidth() != -1) {
        int widthPx = Utils.pxFromDp(400);
        int heightDp = Utils.pxFromDp(300);
        svg.setDocumentWidth(widthPx);
        svg.setDocumentHeight(heightDp);
        int width = (int) Math.ceil(svg.getDocumentWidth());
        int height = (int) Math.ceil(svg.getDocumentHeight());
        Bitmap newBM = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Canvas bmcanvas = new Canvas(newBM);
        final DrawFilter filter = new PaintFlagsDrawFilter(Paint.ANTI_ALIAS_FLAG | Paint.FILTER_BITMAP_FLAG | Paint.DITHER_FLAG, 0);
        bmcanvas.setDrawFilter(filter);
        barcode.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
        bmcanvas.drawRGB(255, 255, 255);
        svg.renderToCanvas(bmcanvas);
        barcode.setImageBitmap(newBM);
    }
}
If the edges of the bars do not lie exactly on pixel boundaries, you will get anti-aliasing. On a high resolution screen, this should not normally be visible.
However, in your code, you are rendering the SVG to a bitmap and setting the bitmap on an ImageView. If that ImageView has a size larger than the bitmap, i.e. greater than 400 x 300, then the anti-aliased pixels in that bitmap will likely be rendered larger and thus be more visible.
One solution is to avoid using a bitmap. Use a Picture/PictureDrawable instead. That way the barcode will be rendered at highest quality no matter what size it is. As vector graphics are supposed to be.
Follow the example on this page:
http://bigbadaboom.github.io/androidsvg/use_with_ImageView.html
So your code should probably look something like the following:
private void loadQRCode(String svgString) {
    try {
        SVG svg = SVG.getFromString(svgString);
        barcode.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
        Drawable drawable = new PictureDrawable(svg.renderToPicture());
        barcode.setImageDrawable(drawable);
    } catch (SVGParseException e) {
        e.printStackTrace();
    }
}
If for some reason you need to use bitmaps - maybe you are caching them or something - then you should watch for changes in the size of the ImageView and then recreate the bitmap at the new size. So the bitmap is always the same size as the ImageView to which it is assigned.
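If you do stay with bitmaps, a layout-change listener is one way to keep the bitmap matched to the view; a sketch (assumes svg and barcode are reachable here, e.g. as fields, and reuses the rendering steps from the question):

// Re-render the SVG whenever the ImageView changes size, so the bitmap is never upscaled.
barcode.addOnLayoutChangeListener(new View.OnLayoutChangeListener() {
    @Override
    public void onLayoutChange(View v, int left, int top, int right, int bottom,
                               int oldLeft, int oldTop, int oldRight, int oldBottom) {
        int w = right - left;
        int h = bottom - top;
        if (w <= 0 || h <= 0 || (w == oldRight - oldLeft && h == oldBottom - oldTop)) {
            return; // size unchanged or not laid out yet
        }
        svg.setDocumentWidth(w);
        svg.setDocumentHeight(h);
        Bitmap bmp = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bmp);
        canvas.drawRGB(255, 255, 255);   // white background, as in the question
        svg.renderToCanvas(canvas);
        barcode.setImageBitmap(bmp);
    }
});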

Restoring the image from Native Memory using NDK returns Black Image with No display

I am trying to restore an image from native memory (using the NDK, C/C++), but it returns a black image.
What I am doing:
1) Get the image from a Drawable.
2) Apply rotation to the image.
3) After rotation, apply a grayscale effect to the image.
4) At the end, save the grayscale image to the SD card.
For all the above steps, I am referring to this awesome lib, which has native methods to store and restore the images.
Please note the image is being stored on the SD card, but when I try to view it, it's totally black with no display at all.
My Java implementation:
public boolean onOptionsItemSelected(MenuItem item)
{
    switch (item.getItemId())
    {
        case R.id.item_rotate_90:
            options.inPreferredConfig = Config.ARGB_8888;
            bitmapOrig = BitmapFactory.decodeResource(this.getResources(), R.drawable.sample_cam, options);
            storeBitmap(bitmapOrig);
            bitmapOrig.recycle();
            rotateBitmap(90, _handler);
            tempBmp = getBitmapAndFree();
            bitmapWip = Bitmap.createBitmap(bitmapOrig.getWidth(), bitmapOrig.getHeight(), Config.ALPHA_8);
            jniConvertToGray(tempBmp, bitmapWip);
            if (bitmapWip != null)
            {
                try
                {
                    Bitmap b = Bitmap.createBitmap(bitmapWip.getWidth(), bitmapWip.getHeight(), Bitmap.Config.ARGB_8888);
                    Canvas c = new Canvas(b);
                    Paint paint = new Paint();
                    ColorMatrix cm = new ColorMatrix();
                    ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
                    paint.setColorFilter(f);
                    c.drawBitmap(bitmapWip, 0, 0, paint);
                    storeBitmap(b);
                    SaveGrayScaledImage(b);
                    b.recycle();
                    tempBmp.recycle();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                ivDisplay.setImageBitmap(bitmapWip);
            }
            break;
    }
}
I have not made any changes to the native methods (i.e. I use the same methods the lib has for storing and restoring the image).
Saving the image to the SD card:
private void SaveGrayScaledImage(Bitmap finalBitmap) throws IOException
{
    String imageFileName = "Temp" + "_gray";
    File albumF = new File("/mnt/sdcard/", "gray_img");
    if (!albumF.exists())
    {
        albumF.mkdirs();
    }
    // File imageF = File.createTempFile(imageFileName, JPEG_FILE_SUFFIX, albumF);
    File imageF = new File(albumF, imageFileName + ".jpeg");
    if (imageF.exists()) {
        imageF.delete();
        imageF.createNewFile();
    }
    try {
        FileOutputStream out = new FileOutputStream(imageF);
        finalBitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
        out.flush();
        out.close();
    } catch (Exception e) {
        e.printStackTrace();
        imageF = null;
    }
}
While googling, I found that (maybe I am wrong) the image returned from native memory has the ALPHA_8 bitmap config, so I convert the config from ALPHA_8 to ARGB_8888, but the result is the same.
Conversion of the bitmap from ALPHA_8 to ARGB_8888:
Bitmap b = Bitmap.createBitmap(bitmapWip.getWidth(),bitmapWip.getHeight(),Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(b);
Paint paint = new Paint();
ColorMatrix cm = new ColorMatrix();
ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
paint.setColorFilter(f);
c.drawBitmap(bitmapWip, 0, 0, paint);
The storeBitmap function:
public void storeBitmap(final Bitmap bitmap)
{
    if (_handler != null)
        freeBitmap();
    _handler = jniStoreBitmapData(bitmap);
}
I have no clue where I went wrong. I have checked the lib's methods and implementation again and again to find the issue.
I have spent many hours on this small issue and it is really frustrating me.
Let me know if you need anything else from my side.
Please help me resolve this issue.
Many thanks in advance.
EDIT:
bitmapHolder = new JniBitmapHolder();
final Options options = new Options();
BitmapFactory.decodeFile(picPath, options);
options.inJustDecodeBounds = true;
options.inPreferredConfig = Config.ARGB_8888;
prepareForDownsampling(options, 192, 256);
System.gc();
bmpGrayscale = BitmapFactory.decodeFile(picPath, options);
int width = bmpGrayscale.getWidth();
int height = bmpGrayscale.getHeight();
bitmapHolder.storeBitmap(bmpGrayscale);
bmpGrayscale.recycle();
Bitmap thumbnail = null;
int rotationInDegrees = 0;
if (picPath != null) {
    Uri uri = Uri.parse(picPath);
    ExifInterface exif = null;
    try {
        exif = new ExifInterface(uri.getPath());
    } catch (IOException e) {
        e.printStackTrace();
    }
    int rotation = exif.getAttributeInt(
            ExifInterface.TAG_ORIENTATION,
            ExifInterface.ORIENTATION_NORMAL);
    rotationInDegrees = exifToDegrees(rotation);
}
rotationInDegrees = 90;
ByteBuffer _handler = null;
switch (rotationInDegrees)
{
    case 90:
        bitmapHolder.rotateBitmapCw90();
        break;
    case 180:
        bitmapHolder.rotateBitmap180();
        break;
}
Bitmap bitmapWip = Bitmap.createBitmap(width, height, Config.ALPHA_8);
bitmapHolder.bitmapGrayScale(bitmapWip);
if (bitmapWip != null) {
    File CurrentFile = saveGrayScaledIamge(bitmapWip, takePhotoFile);
}
I have followed your suggestions/steps but the result is the same: a black image with no display.
OK, I've found multiple problems and tips for improvement:
The first createBitmap is run with width*height on a bitmap that got rotated, instead of height*width. It should be:
rotateBitmap(90, _handler);
tempBmp = getBitmapAndFree();
bitmapWip = Bitmap.createBitmap(bitmapOrig.getHeight(), bitmapOrig.getWidth(), Config.ALPHA_8);
When saving the file you don't get the correct path (you use a hardcoded path, and Lint warns about it).
jniConvertToGray doesn't really need to go over arrays and can just use a pointer, as it just runs on a single pixel. You store the bitmap into JNI twice instead of once (just do: store, rotate, grayscale, restore & free).
You don't use the new bitmap after you have finished working on it, so if I call rotation multiple times, it doesn't seem to do anything.
You already have bitmapWip rotated and grayscaled. Why do you need to make a new bitmap that has its content in it, do a grayscale on it, and then save it?
Functions should be named with a lowercase letter at the beginning of their names.
And finally, the most important thing: you use ALPHA_8 for the image that you show and need to save to file. This configuration has no color; it's a mask. In order to see the problem, you should set a background color on the ImageView:
ivDisplay.setBackgroundColor(0xFFff0000);
Before choosing the rotation, you see nothing red. After choosing it, everything you think is white has actually become red. That's because it's transparent...
If at any phase of your development you've succeeded in saving the image to a file and thought it was a black image (yet the size is not 0), try to edit it and put a background behind it. Maybe you got lucky and just got transparent pixels...
Adding the fact that you save the file in JPEG format, which doesn't support transparency, might also contribute to unexpected behavior.
In order to solve this, you should use the same technique I've used: use a single bitmap all the time. There's no need to create so many. Only one should exist in the Java world, and it should support colors.
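As a quick check of the transparency point above, you can composite the ALPHA_8 result over a solid background into an ARGB_8888 bitmap before compressing to JPEG; a sketch (bitmapWip and SaveGrayScaledImage are the names from the question, the black-on-white colours are just an assumption):

// An ALPHA_8 bitmap is only a mask; draw it with a coloured Paint over a solid
// background so the pixels become visible instead of transparent in the JPEG.
Bitmap visible = Bitmap.createBitmap(bitmapWip.getWidth(), bitmapWip.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(visible);
canvas.drawColor(Color.WHITE);     // background that replaces the transparency
Paint paint = new Paint();
paint.setColor(Color.BLACK);       // the paint supplies the colour, the mask supplies the alpha
canvas.drawBitmap(bitmapWip, 0, 0, paint);
try {
    SaveGrayScaledImage(visible);  // same save helper as in the question
} catch (IOException e) {
    e.printStackTrace();
}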

Image appearing 90 degree tilt when downloaded from server in Android?

I'm working on an application in which we capture images and upload them to a server. The application exists for Android and iPhone. When we post an image from Android it is in the kilobyte range, but when we post an image from iPhone it is megabytes in size.
When we view the images posted from an iPhone 5 via their URL in a browser, they appear as they are supposed to, but when we download that image on an Android device and show it in an ImageView, it appears tilted 90 degrees to the left.
I am not using any rotation code after downloading the images, on Android or on iPhone.
On iPhone the images appear fine.
Images captured from Android are also displayed straight. Low-resolution images captured from iPhone are also displayed straight on Android.
Image uploaded from Android:
https://s3.amazonaws.com/WeddingApp/Weddingimage/933_6_stan.jpg
Image uploaded from I Phone:
https://s3.amazonaws.com/WeddingApp/Weddingimage/937_6_stan.jpg
public static boolean downloadFile(final String fileURL, File directory, Context CONTEXT) {
    try {
        URL url = new URL(fileURL);
        URLConnection ucon = url.openConnection();
        ucon.setReadTimeout(35000);
        ucon.setConnectTimeout(10000);
        InputStream is = ucon.getInputStream();
        BufferedInputStream inStream = new BufferedInputStream(is, 1024 * 5);
        File file = directory;
        if (file.exists()) {
            file.delete();
        }
        file.createNewFile();
        FileOutputStream outStream = new FileOutputStream(file);
        byte[] buff = new byte[5 * 1024];
        int len;
        while ((len = inStream.read(buff)) != -1) {
            outStream.write(buff, 0, len);
        }
        outStream.flush();
        outStream.close();
        inStream.close();
    } catch (IOException e) { // e.g. the SD card does not exist
        e.printStackTrace();
        return false;
    } catch (Exception e) {
        e.printStackTrace();
        return false;
    }
    return true;
}
Please advise.
This is because of iPhone's silly new camera implementation. I found below on research (sources given below):
1. The iPhone camera's interpretation of "up" direction is rotated 90 degrees from the actual "up" direction we know. What you and I call "up", is iphone's "left" or "right". The camera's orientation has been changed. This was done so that they could support taking pictures with the volume button.
2. But, being aware that other devices and cameras have normal orientation, and so that those devices and browsers display the images properly, iOS adds EXIF data to the uploaded images. This is added on image upload. EXIF, as mentioned in the link, is a metadata that contains all the information of how the image is actually supposed to look.
3. The target platform on which the image is downloaded is supposed to read the EXIF data, and then show the correct representation of the image. So, if the target platform reads the EXIF, it will realize that the image received is 90 degrees rotated, and that it should rely on EXIF data, and not on what's received. This is why iPhone camera apps can show the image properly.
4. However, not everybody reads EXIF. Apps and browsers that are aware of EXIF read the EXIF data of the image and show it properly. But those that are not aware of EXIF don't read that data and show the image exactly as received --> 90 degrees rotated.
5. People have worked around this problem by rotating their iPhone 90 degrees before taking a picture (so that the camera is oriented right before the picture is taken).
Resources:
1. Source 1
2. Source 2
Others with same problem:
1. SO Post 1
2. SO Post 2
The problem seems to be related to EXIF data found in the images. We must process it before displaying the image. It appears that not every camera outputs EXIF data, because this issue only happens to me on some Android handsets.
Take a look at: http://commons.wikimedia.org/wiki/Commons:Exif#Orientation_.28rotation_and_mirroring.29
EDIT:
We could implement something like:
public Bitmap getBitmap(String path) {
    Bitmap tmpBitmap = BitmapFactory.decodeFile(path);
    Bitmap bitmap = null;
    if (path != null) {
        try {
            ExifInterface ei = new ExifInterface(path);
            int orientation = ei.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
            Matrix mtx = new Matrix();
            int w = tmpBitmap.getWidth();
            int h = tmpBitmap.getHeight();
            switch (orientation) {
                case ExifInterface.ORIENTATION_ROTATE_90:
                    // rotate CCW
                    mtx.preRotate(-90);
                    bitmap = Bitmap.createBitmap(tmpBitmap, 0, 0, w, h, mtx, true);
                    break;
                case ExifInterface.ORIENTATION_ROTATE_270:
                    // rotate CW
                    mtx.preRotate(90);
                    bitmap = Bitmap.createBitmap(tmpBitmap, 0, 0, w, h, mtx, true);
                    break;
                // CONSIDER OTHER CASES HERE....
                default:
                    bitmap = tmpBitmap;
                    break;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    return bitmap;
}
Regards.

Getting smaller data from camera preview in Android

Hi, I am developing a real-time image processing application on Android. I am using PreviewCallback to get an image every frame. On tablet devices the returned data is very big, so it's too hard to work with such large data in real time.
My question is: is there any way to get smaller-resolution data from the camera preview?
CAMERA PREVIEW CODE:
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Parameters params = camera.getParameters();
    Log.v("image format", Integer.toString(params.getPreviewFormat()));
    // Frame capturing via frameManager
    frameManager.initCamFrame(params.getPreviewSize().width, params.getPreviewSize().height,
            data);
}
});
You can call parameters.setPreviewSize(width, height), but you need to do it before the camera preview starts, and you need to use a supported value (see the previous answer).
You also should not call camera.getParameters() every frame; do that once and save the values to a variable. You have limited time in onPreviewFrame, because byte[] data is overwritten on each frame, so try to do only the important stuff there.
And you should use setPreviewCallbackWithBuffer; it quite improves performance. Check this post.
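A sketch of the buffered callback (assumes the default NV21 preview format for the buffer-size calculation, and reuses the frameManager call from the question):

// Read the parameters once, outside the callback, and hand the camera a
// pre-allocated buffer so it does not allocate a new byte[] for every frame.
Camera.Parameters params = camera.getParameters();
final Camera.Size size = params.getPreviewSize();
int bitsPerPixel = ImageFormat.getBitsPerPixel(params.getPreviewFormat()); // 12 for NV21
byte[] buffer = new byte[size.width * size.height * bitsPerPixel / 8];

camera.addCallbackBuffer(buffer);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        frameManager.initCamFrame(size.width, size.height, data); // your existing processing
        camera.addCallbackBuffer(data); // give the buffer back for the next frame
    }
});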
Are you aware you can get a list of supported preview sizes from the camera parameters by calling getSupportedPreviewSizes()? The devices I've tried this on all returned a sorted list, although sometimes in ascending and sometimes in descending order. You'll probably want to manually iterate the list to find the 'smallest' preview size, or sort it first and grab the first item.
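Roughly like this, before the preview is started (a sketch; camera is assumed to be your already-opened Camera instance):

// Pick the smallest supported preview size and apply it before startPreview().
Camera.Parameters params = camera.getParameters();
List<Camera.Size> sizes = params.getSupportedPreviewSizes();
Camera.Size smallest = sizes.get(0);
for (Camera.Size s : sizes) {
    if (s.width * s.height < smallest.width * smallest.height) {
        smallest = s;
    }
}
params.setPreviewSize(smallest.width, smallest.height);
camera.setParameters(params);
camera.startPreview();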
you can try this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        byte[] baos = convertYuvToJpeg(data, camera);
        StringBuilder dataBuilder = new StringBuilder();
        dataBuilder.append("data:image/jpeg;base64,").append(Base64.encodeToString(baos, Base64.DEFAULT));
        mSocket.emit("newFrame", dataBuilder.toString());
    } catch (Exception e) {
        Log.d("########", "ERROR");
    }
}
};

public byte[] convertYuvToJpeg(byte[] data, Camera camera) {
    YuvImage image = new YuvImage(data, ImageFormat.NV21,
            camera.getParameters().getPreviewSize().width, camera.getParameters().getPreviewSize().height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int quality = 20; // set quality
    image.compressToJpeg(new Rect(0, 0, camera.getParameters().getPreviewSize().width, camera.getParameters().getPreviewSize().height), quality, baos); // this line decreases the image quality
    return baos.toByteArray();
}

Android: How to save a preview frame as jpeg image?

I would like to save a preview frame as a jpeg image.
I have tried to write the following code:
public void onPreviewFrame(byte[] _data, Camera _camera)
{
    if (settings.isRecording())
    {
        Camera.Parameters params = _camera.getParameters();
        params.setPictureFormat(PixelFormat.JPEG);
        _camera.setParameters(params);
        String path = "ImageDir" + frameCount;
        fileRW.setPath(path);
        fileRW.WriteToFile(_data);
        frameCount++;
    }
}
but it's not possible to open a saved file as a jpeg image. Does anyone know how to save preview frames as jpeg images?
Thanks
Check out this code; I hope it helps.
camera.setPreviewCallback(new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Parameters parameters = camera.getParameters();
        Size size = parameters.getPreviewSize();
        YuvImage image = new YuvImage(data, ImageFormat.NV21,
                size.width, size.height, null);
        Rect rectangle = new Rect();
        rectangle.bottom = size.height;
        rectangle.top = 0;
        rectangle.left = 0;
        rectangle.right = size.width;
        ByteArrayOutputStream out2 = new ByteArrayOutputStream();
        image.compressToJpeg(rectangle, 100, out2);
        try {
            // The original snippet used a DataInputStream here, which cannot be written to.
            // Write the JPEG bytes to a file instead (adjust the path as needed).
            FileOutputStream out = new FileOutputStream("/sdcard/frame.jpg");
            out.write(out2.toByteArray());
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});
camera.startPreview();
You have to convert it manually, there are some examples on the android-developers list if you browse the archive - mostly dealing with the format (luminance/chrominance,etc) conversion, then writing the image to a bitmap, then saving to a file.
It's all a bit of a pain really.
I set the preview format with Camera.Parameters.setPreviewFormat(PixelFormat.JPEG) before starting the preview, but it seems that it can't really set the preview format...
By the way, the default format of the preview is YCbCr_420_SP...
You must first check what the supported preview formats are for your device by calling getSupportedPreviewFormats(). Make sure JPEG is supported before calling setPreviewFormat(PixelFormat.JPEG).
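Something along these lines (a sketch using the ImageFormat constants; on most devices the list only contains the YUV formats, so expect the JPEG branch to be skipped):

// Only request JPEG previews if the device actually reports support for it.
Camera.Parameters params = camera.getParameters();
List<Integer> formats = params.getSupportedPreviewFormats();
if (formats.contains(ImageFormat.JPEG)) {
    params.setPreviewFormat(ImageFormat.JPEG);
    camera.setParameters(params);
} else {
    // Fall back to the default (NV21) and convert frames manually, e.g. with YuvImage.
}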
JPEG is not a format for Camera Preview. As official documentation says,
"Only ImageFormat.NV21 and ImageFormat.YUY2 are supported for now"
In order to get a picture from Camera Preview, you need to define preview format, as below:
Camera.Parameters parameters;
parameters.setPreviewFormat(ImageFormat.NV21); // or ImageFormat.YUY2
After that, you compress & save JPEG as in Dany's example.
_data probably isn't in JPEG format. Did you call Camera.Parameters.setPreviewFormat(PixelFormat.JPEG) before calling startPreview()?
