ZXing convert Bitmap to BinaryBitmap - android

I am using OpenCV and ZXing, and I'd like to add 2D code scanning. I have a few types of images that I could send. Probably the best is Bitmap (the other option is an OpenCV Mat).
It looks like you used to be able to convert like this:
Bitmap frame = ...; // this is the frame coming in
LuminanceSource source = new RGBLuminanceSource(frame);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
// then I can use reader.decode(bitmap) to decode the BinaryBitmap
However, RGBLuminanceSource no longer appears to take a Bitmap as input. So how else can I convert an input image to a BinaryBitmap?
Edit:
OK, I believe I've made some progress, but I'm still having an issue. I think I have the code that converts the Bitmap into the correct format, however I am now getting an ArrayIndexOutOfBoundsException:
public void zxing() {
    Bitmap bMap = Bitmap.createBitmap(frame.width(), frame.height(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(frame, bMap);
    byte[] array = BitmapToArray(bMap);
    LuminanceSource source = new PlanarYUVLuminanceSource(array, bMap.getWidth(), bMap.getHeight(),
            0, 0, bMap.getWidth(), bMap.getHeight(), false);
    BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
    Reader reader = new DataMatrixReader();
    String sResult = "";
    try {
        Result result = reader.decode(bitmap);
        sResult = result.getText();
        Log.i("Result", sResult);
    } catch (NotFoundException e) {
        Log.d(TAG, "Code Not Found");
        e.printStackTrace();
    }
}

public byte[] BitmapToArray(Bitmap bmp) {
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bmp.compress(Bitmap.CompressFormat.JPEG, 50, stream);
    return stream.toByteArray();
}
I get the error:
02-14 10:19:27.469: E/AndroidRuntime(29736): java.lang.ArrayIndexOutOfBoundsException: length=33341; index=34560
02-14 10:19:27.469: E/AndroidRuntime(29736): at com.google.zxing.common.HybridBinarizer.calculateBlackPoints(HybridBinarizer.java:199)
I have logged the size of the byte[], and it is the length shown above. I can't figure out why ZXing is expecting it to be bigger.

OK, I got it. As Sean Owen said, PlanarYUVLuminanceSource is only for the default Android camera format, which I guess OpenCV does not use. So in short, here is how you would do it:
// (note: mTwod is the OpenCV Mat that contains my Data Matrix code)
Bitmap bMap = Bitmap.createBitmap(mTwod.width(), mTwod.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(mTwod, bMap);
int[] intArray = new int[bMap.getWidth() * bMap.getHeight()];
// copy pixel data from the Bitmap into the intArray
bMap.getPixels(intArray, 0, bMap.getWidth(), 0, 0, bMap.getWidth(), bMap.getHeight());
LuminanceSource source = new RGBLuminanceSource(bMap.getWidth(), bMap.getHeight(), intArray);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
Reader reader = new DataMatrixReader();
// ...do the actual reading
Result result = reader.decode(bitmap);
So that's it, not hard at all. Just had to convert the Android Bitmap to an integer array, and it's a piece of cake.

public static String readQRImage(Bitmap bMap) {
    String contents = null;
    int[] intArray = new int[bMap.getWidth() * bMap.getHeight()];
    // copy pixel data from the Bitmap into the intArray
    bMap.getPixels(intArray, 0, bMap.getWidth(), 0, 0, bMap.getWidth(), bMap.getHeight());
    LuminanceSource source = new RGBLuminanceSource(bMap.getWidth(), bMap.getHeight(), intArray);
    BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
    Reader reader = new MultiFormatReader(); // use this, otherwise you get a ChecksumException
    try {
        Result result = reader.decode(bitmap);
        contents = result.getText();
        //byte[] rawBytes = result.getRawBytes();
        //BarcodeFormat format = result.getBarcodeFormat();
        //ResultPoint[] points = result.getResultPoints();
    } catch (NotFoundException e) {
        e.printStackTrace();
    } catch (ChecksumException e) {
        e.printStackTrace();
    } catch (FormatException e) {
        e.printStackTrace();
    }
    return contents;
}

Bitmap is an Android class. Android's default image format from the camera is a planar YUV format. That is why only PlanarYUVLuminanceSource is needed and exists for Android. RGBLuminanceSource would have to be ported.
You are putting completely the wrong kind of data into the class. It is expecting pixels in YUV planar format. You are passing compressed bytes of a JPEG file.
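For reference, here is the kind of input PlanarYUVLuminanceSource is built for: the raw NV21 buffer delivered by the Android camera. A minimal sketch, assuming data, width, and height come from onPreviewFrame and the preview size:

// data is the raw NV21 preview buffer, so its first width*height bytes
// are exactly the luminance (Y) plane that ZXing reads.
LuminanceSource source = new PlanarYUVLuminanceSource(
        data, width, height,  // full buffer and its dimensions
        0, 0, width, height,  // crop rectangle (here: the whole frame)
        false);               // reverseHorizontal
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));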

Related

Android opencv byte[] to mat to byte[]

My goal is to add an overlay on the camera preview that will find book edges. For that, I override onPreviewFrame, where I do the following:
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Parameters parameters = camera.getParameters();
    int width = parameters.getPreviewSize().width;
    int height = parameters.getPreviewSize().height;
    Mat mat = new Mat((int) (height * 1.5), width, CvType.CV_8UC1);
    mat.put(0, 0, data);
    byte[] bytes = new byte[(int) (height * width * 1.5)];
    mat.get(0, 0, bytes);
    if (!test) { // to only do this once
        File pictureFile = getOutputMediaFile();
        try {
            FileOutputStream fos = new FileOutputStream(pictureFile);
            fos.write(bytes);
            fos.close();
            Uri picUri = Uri.fromFile(pictureFile);
            updateGallery(picUri);
            test = true;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
For now I simply want to take one of the previews and save it after the conversion to Mat.
After spending countless hours getting the above to look right, the saved picture cannot be viewed on my testing phone (LG Leon). I can't seem to find the issue. Am I mixing up the height/width because I'm taking pictures in portrait mode? I tried switching them and it still doesn't work. Where is the problem?
The fastest method I managed to find is described HERE in my recently asked question. You can find the method to extract the image in the answer I wrote to my question below. The thing is that the image you get through onPreviewFrame() is NV21. After receiving this image, you may need to convert it to RGB (it depends on what you want to achieve; this is also done in the answer I gave you previously).
Seems quite inefficient but it works for me (for now):
// get the camera parameters
Camera.Parameters parameters = camera.getParameters();
int width = parameters.getPreviewSize().width;
int height = parameters.getPreviewSize().height;

// convert the byte[] to Bitmap through YuvImage;
// make sure the previewFormat is NV21 (I set it so somewhere before)
YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 70, out);
Bitmap bmp = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());

// convert Bitmap to Mat; note the Bitmap.Config.ARGB_8888 conversion that
// allows you to use other image processing methods and still save at the end
Mat orig = new Mat();
bmp = bmp.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(bmp, orig);

// here you do whatever you want with the Mat

// Mat to Bitmap to OutputStream to byte[] to File
Utils.matToBitmap(orig, bmp);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 70, stream);
byte[] bytes = stream.toByteArray();
File pictureFile = getOutputMediaFile();
try {
    FileOutputStream fos = new FileOutputStream(pictureFile);
    fos.write(bytes);
    fos.close();
} catch (IOException e) {
    e.printStackTrace();
}
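If the YuvImage-to-JPEG round trip above turns out to be too slow, a leaner route is to let OpenCV convert the NV21 buffer directly. A sketch, assuming the preview format really is NV21 and the OpenCV Java bindings are initialized:

// Wrap the NV21 buffer in a single-channel Mat: height*1.5 rows of width bytes.
Mat yuv = new Mat((int) (height * 1.5), width, CvType.CV_8UC1);
yuv.put(0, 0, data);
// Convert NV21 to RGBA in one native call, with no JPEG compression involved.
Mat rgba = new Mat();
Imgproc.cvtColor(yuv, rgba, Imgproc.COLOR_YUV2RGBA_NV21);
// Process rgba as needed, then turn it into a Bitmap for saving or display.
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Utils.matToBitmap(rgba, bmp);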

byte[] to image using Xamarin.Android

I know this is an old question, but I've got problems decoding a byte[] into a bitmap...
Background: I'm writing an Android app which receives picture bytes via UDP, decodes them into a bitmap, and displays the picture in an image view.
Since my functions didn't work, I cancelled the UDP connection for testing and wrote all the image bytes into a huge variable, so they're all correct...
The function returns null.
The function I'm using:
public Bitmap ByteArrayToImage(byte[] imageData)
{
    var bmpOutput = BitmapFactory.DecodeByteArray(imageData, 0, imageData.Length);
    return bmpOutput;
}
Another function I tried:
public Bitmap ByteArrayToImage2(byte[] imageData)
{
    Bitmap bmpReturn;
    bmpReturn = (Android.Graphics.Bitmap) Android.Graphics.Bitmap.FromArray<byte>(imageData);
    return bmpReturn;
}
A function I found on the internet:
public static Bitmap bytesToUIImage(byte[] bytes)
{
    if (bytes == null)
        return null;

    Bitmap bitmap;
    var documentsFolder = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
    // Create a folder for the images if it does not exist
    System.IO.Directory.CreateDirectory(System.IO.Path.Combine(documentsFolder, "images"));
    string imatge = System.IO.Path.Combine(documents, "images", "image.jpg");
    System.IO.File.WriteAllBytes(imatge, bytes.Concat(new Byte[]{(byte)0xD9}).ToArray());
    bitmap = BitmapFactory.DecodeFile(imatge);
    return bitmap;
}
Most unfortunately, the last function didn't work either, though here I have to admit that I was a bit confused about the 'documents' in
string imatge = System.IO.Path.Combine (documents, "images", "image.jpg");
I got an error and changed it to documentsFolder, since I guess that should (or could) be right...
Thank you in advance for your help
It seems I found the error...
I had stored public Bitmap ByteArrayToImage(byte[] imageData) in another class. I don't know why, but when I decode the byte array in the class that also receives the array, everything works fine...
If someone knows the reason, feel welcome to let me know, but for now I'm happy ;-)
I did something similar.
On the sender side:
Camera.Parameters parameters = camera.getParameters();
if (parameters.getPreviewFormat() == ImageFormat.NV21) {
    Rect rect = new Rect(0, 0, parameters.getPreviewSize().width, parameters.getPreviewSize().height);
    YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, parameters.getPreviewSize().width, parameters.getPreviewSize().height, null);
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    yuvimage.compressToJpeg(rect, 75, os);
    byte[] videoFrame = os.toByteArray();
    // send the video frame to the receiver
}
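Note that the receiver below reads a length prefix before the payload, which the sender snippet above does not show being written. A minimal sketch of the matching send, assuming a DataOutputStream over the same socket:

DataOutputStream dOut = new DataOutputStream(socket.getOutputStream());
dOut.writeInt(videoFrame.length); // the length prefix the receiver's readInt() expects
dOut.write(videoFrame);           // then the JPEG bytes themselves
dOut.flush();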
On the receiving side:
DataInputStream dIn = new DataInputStream(socket.getInputStream());
int length = dIn.readInt();
if (length > 0) {
    byte[] message = new byte[length];
    dIn.readFully(message, 0, message.length);
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inSampleSize = 4;
    final Bitmap bitmap = BitmapFactory.decodeByteArray(message, 0, message.length, options);
    ReceiverActivity.this.runOnUiThread(new Runnable() {
        @Override
        public void run() {
            imgPreview.setImageBitmap(bitmap);
        }
    });
}
There is a built-in method to decode a byte array into a bitmap. The problem comes when we are talking about big images. With small ones you can use:
Bitmap bmp = BitmapFactory.DecodeByteArray(data, 0, data.Length);
Be aware: those bitmaps are not mutable, so you will not be able to use canvases on them. To make them mutable, see: BitmapFactory.decodeResource returns a mutable Bitmap in Android 2.2 and an immutable Bitmap in Android 1.6
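For illustration, the plain-Android (Java) form of that mutability fix on API 11+ is the inMutable flag; in Xamarin the equivalent property should be Options.InMutable. A sketch:

BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true; // request a mutable Bitmap up front (API 11+)
Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length, options);
Canvas canvas = new Canvas(bmp); // now legal, since bmp is mutable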

QR code decoding images using zxing android

I am doing a simple application on Android which is the following: putting a QR code image in the drawable folder of the application; on a button click, it should be decoded and the result displayed (using the ZXing library).
I have made the same application in Java (the decoding then used the BufferedImageLuminanceSource class).
In my Android application, I used the RGBLuminanceSource class as follows:
LuminanceSource source = new RGBLuminanceSource(width, height, pixels);
BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
The problem I am facing here is that the image has to be very small to be decoded by the Android application (and I had to try many sizes before finally finding one where the QR code image is decoded). Meanwhile, the same images were decoded easily using BufferedImageLuminanceSource in the Java application without any need to be resized.
What can I do to avoid this resizing problem?
It's too late, but it may help others. This is how we can get the QR code info from a Bitmap using the ZXing library:
Bitmap generatedQRCode; // the Bitmap containing the QR code (assumed initialized elsewhere)
int width = generatedQRCode.getWidth();
int height = generatedQRCode.getHeight();
int[] pixels = new int[width * height];
generatedQRCode.getPixels(pixels, 0, width, 0, 0, width, height);
RGBLuminanceSource source = new RGBLuminanceSource(width, height, pixels);
BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
Reader reader = new MultiFormatReader();
Result result = null;
try {
    result = reader.decode(binaryBitmap);
} catch (NotFoundException e) {
    e.printStackTrace();
} catch (ChecksumException e) {
    e.printStackTrace();
} catch (FormatException e) {
    e.printStackTrace();
}
if (result != null) { // decoding failed above if result is still null
    String text = result.getText();
    textViewQRCode.setText(" CONTENT: " + text);
}
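On the original resizing complaint: ZXing also accepts decode hints, and TRY_HARDER tells the reader to spend more effort on large or low-quality images. A sketch using the standard hint map (it may help, though it is not guaranteed to fix every image):

Map<DecodeHintType, Object> hints = new EnumMap<>(DecodeHintType.class);
hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE); // optimize for accuracy over speed
Result result = new MultiFormatReader().decode(binaryBitmap, hints);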

Rotate byte array of JPEG after onPictureTaken

Is there a way to rotate a byte array without decoding it to a Bitmap?
Currently, in the JPEG PictureCallback, I just write the byte array directly to a file, but the pictures come out rotated. I would like to rotate them without decoding to a Bitmap, in the hope that this will conserve memory.
BitmapFactory.Options o = new BitmapFactory.Options();
o.inJustDecodeBounds = true;
BitmapFactory.decodeByteArray(data, 0, data.length, o);
int orientation;
if (o.outHeight < o.outWidth) {
    orientation = 90;
} else {
    orientation = 0;
}

File photo = new File(tmp, "demo.jpeg");
FileOutputStream fos;
BufferedOutputStream bos = null;
try {
    fos = new FileOutputStream(photo);
    bos = new BufferedOutputStream(fos);
    bos.write(data);
    bos.flush();
} catch (IOException e) {
    Log.e(TAG, "Failed to save photo", e);
} finally {
    IOUtils.closeQuietly(bos);
}
Try this; it will serve the purpose.
Bitmap storedBitmap = BitmapFactory.decodeByteArray(data, 0, data.length, null);
Matrix mat = new Matrix();
mat.postRotate(angle); // angle is the desired rotation in degrees
storedBitmap = Bitmap.createBitmap(storedBitmap, 0, 0, storedBitmap.getWidth(), storedBitmap.getHeight(), mat, true);
You can set the JPEG rotation via the Exif header without decoding the image. This is the most efficient method, but some viewers may still show a rotated image.
Alternatively, you can use JPEG lossless rotation. Unfortunately, I am not aware of free Java implementations of this algorithm.
Update: on SourceForge there is an open-source Java class, LLJTran. The Android port is on GitHub.
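For the Exif route on Android specifically, android.media.ExifInterface can rewrite the orientation tag in place once the JPEG is on disk. A minimal sketch, assuming photo is the saved file (and, as noted above, not every viewer honors the tag):

ExifInterface exif = new ExifInterface(photo.getAbsolutePath());
exif.setAttribute(ExifInterface.TAG_ORIENTATION,
        String.valueOf(ExifInterface.ORIENTATION_ROTATE_90)); // display rotated by 90 degrees
exif.saveAttributes(); // rewrites the header; the compressed image data is untouched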
I don't think there is such a possibility. The byte order depends on the picture encoding (PNG, JPEG), so you are forced to decode the image to do anything with it.
Try it like this:
private byte[] rotateImage(byte[] data, int angle) {
    Log.d("labot_log_info", "CameraActivity: Inside rotateImage");
    Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length, null);
    Matrix mat = new Matrix();
    mat.postRotate(angle);
    bmp = Bitmap.createBitmap(bmp, 0, 0, bmp.getWidth(), bmp.getHeight(), mat, true);
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bmp.compress(Bitmap.CompressFormat.JPEG, 100, stream);
    return stream.toByteArray();
}
You can call rotateImage by providing the image data received in the onPictureTaken method and an angle of rotation.
E.g.: rotateImage(data, 90);

Android Rotate Picture before saving

I just finished my camera activity and it's wonderfully saving the data.
What I do after the picture is taken:
protected void savePictureData() {
    try {
        FileOutputStream fs = new FileOutputStream(this.photo);
        fs.write(this.lastCamData);
        fs.close(); // okay, wonderful! file is just written to the sdcard
        //---------------------
        // TODO in here: don't save just the file but ROTATE the image and then save it!
        //---------------------
        Intent data = new Intent(); // just a simple intent returning some data...
        data.putExtra("picture_name", this.fname);
        data.putExtra("byte_data", this.lastCamData);
        this.setResult(SAVED_TOOK_PICTURE, data);
        this.finish();
    } catch (IOException e) {
        e.printStackTrace();
        this.IOError();
    }
}
What I want is already given as a comment in the code above: I don't want the image just to be saved to a file, but to be rotated and then saved! Thanks!
EDIT: what I am currently up to (works, but still runs into memory issues with large images):
byte[] pictureBytes;
Bitmap thePicture = BitmapFactory.decodeByteArray(this.lastCamData, 0, this.lastCamData.length);
Matrix m = new Matrix();
m.postRotate(90);
thePicture = Bitmap.createBitmap(thePicture, 0, 0, thePicture.getWidth(), thePicture.getHeight(), m, true);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
thePicture.compress(CompressFormat.JPEG, 100, bos);
pictureBytes = bos.toByteArray();
FileOutputStream fs = new FileOutputStream(this.photo);
fs.write(pictureBytes);
fs.close();
Intent data = new Intent();
data.putExtra("picture_name", this.fname);
data.putExtra("byte_data", pictureBytes);
this.setResult(SAVED_TOOK_PICTURE, data);
this.finish();
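One common way to ease those memory issues, sketched under the assumption that a downsampled copy is acceptable for your use case, is to decode at reduced resolution before rotating:

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = 2; // decode at half width/height, roughly a quarter of the memory
Bitmap smaller = BitmapFactory.decodeByteArray(this.lastCamData, 0, this.lastCamData.length, opts);
Matrix m = new Matrix();
m.postRotate(90);
smaller = Bitmap.createBitmap(smaller, 0, 0, smaller.getWidth(), smaller.getHeight(), m, true);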
Read the path from the SD card and paste the following code... It'll replace the existing photo after rotating it.
Note: Exif doesn't work on most devices (it returns incorrect data), so it's good to hard-code the rotation before saving to whatever degree you want; just change the angle value in postRotate.
String photopath = tempphoto.getPath().toString();
Bitmap bmp = BitmapFactory.decodeFile(photopath);
Matrix matrix = new Matrix();
matrix.postRotate(90);
bmp = Bitmap.createBitmap(bmp, 0, 0, bmp.getWidth(), bmp.getHeight(), matrix, true);
FileOutputStream fOut;
try {
    fOut = new FileOutputStream(tempphoto);
    bmp.compress(Bitmap.CompressFormat.JPEG, 85, fOut);
    fOut.flush();
    fOut.close();
} catch (FileNotFoundException e1) {
    e1.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
Before you create your FileOutputStream, you can create a new Bitmap from the original, transformed using a Matrix. To do that you would use this method:
createBitmap(Bitmap source, int x, int y, int width, int height, Matrix m, boolean filter)
where m defines the matrix that will transform your original bitmap.
For an example on how to do this look at this question:
Android: How to rotate a bitmap on a center point
bitmap = RotateBitmap(bitmap, 90);

public static Bitmap RotateBitmap(Bitmap source, float angle) {
    Matrix matrix = new Matrix();
    matrix.postRotate(angle);
    return Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
}
