I am building a simple Android application that does the following:
A QR code image is placed in the application's drawable resources. On a button click, it should be decoded and the result displayed (using the ZXing library).
I have already made the same application in plain Java (there, the decoding used the BufferedImageLuminanceSource class).
In my Android application, I used the RGBLuminanceSource class as follows:
LuminanceSource source = new RGBLuminanceSource(width, height, pixels);
BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
The problem I am facing is that the image has to be very small before the Android application can decode it (I had to try many sizes before I finally found one at which the QR code image was decoded). Meanwhile, the same images were decoded easily by the Java application using BufferedImageLuminanceSource, without any resizing.
What can I do to avoid this resizing problem?
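For reference, the Java (desktop) version followed roughly this pattern; this is a minimal sketch using the ZXing javase module, and the helper name is mine:
import com.google.zxing.BinaryBitmap;
import com.google.zxing.LuminanceSource;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.Result;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Decode a QR code from an image file on the desktop (javase) side.
static String decodeQrFromFile(File imageFile) throws Exception {
    BufferedImage image = ImageIO.read(imageFile);
    LuminanceSource source = new BufferedImageLuminanceSource(image);
    BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
    Result result = new MultiFormatReader().decode(binaryBitmap);
    return result.getText();
}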
It's a bit late, but it may help others.
Here is how we can get the QR code info from a Bitmap using the ZXing library:
Bitmap generatedQRCode; // the Bitmap containing the QR code (e.g. loaded from a drawable)
int width = generatedQRCode.getWidth();
int height = generatedQRCode.getHeight();
int[] pixels = new int[width * height];
generatedQRCode.getPixels(pixels, 0, width, 0, 0, width, height);
RGBLuminanceSource source = new RGBLuminanceSource(width, height, pixels);
BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
Reader reader = new MultiFormatReader();
Result result = null;
try {
result = reader.decode(binaryBitmap);
} catch (NotFoundException e) {
e.printStackTrace();
} catch (ChecksumException e) {
e.printStackTrace();
} catch (FormatException e) {
e.printStackTrace();
}
// Guard against a failed decode: result stays null if an exception was thrown above.
if (result != null) {
String text = result.getText();
textViewQRCode.setText("CONTENT: " + text);
}
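Since the original question starts from an image in res/drawable, the Bitmap above can be obtained with BitmapFactory. This is just a sketch inside an Activity, and R.drawable.qr_code is a placeholder for your actual resource:
// Decode the drawable resource into a Bitmap before extracting its pixels.
// inScaled = false avoids density scaling, which can blur the QR modules.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;
Bitmap generatedQRCode = BitmapFactory.decodeResource(getResources(), R.drawable.qr_code, options);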
I have PDF files that each contain a single icon.
I need to render these icons in a GridView.
On Android 5 I'm using the PdfRenderer system class and it works great.
On Android 4, I'm using the "Android-Pdf-Viewer-Library":
https://github.com/jblough/Android-Pdf-Viewer-Library
It works, but the quality is poor.
Can you please suggest any alternatives for rendering a PDF as a Bitmap?
Or maybe there is a way to increase the rendering quality with that library?
Here is my code:
private static Bitmap renderToBitmap(Context context, InputStream inStream) {
Bitmap bitmap = null;
try {
byte[] decode = IOUtils.toByteArray(inStream);
ByteBuffer buf = ByteBuffer.wrap(decode);
PDFPage mPdfPage = new PDFFile(buf).getPage(0, true);
float width = mPdfPage.getWidth();
float height = mPdfPage.getHeight();
bitmap = mPdfPage.getImage((int) (width), (int) (height), null, true,
true);
} catch (IOException e) {
e.printStackTrace();
} finally {
try {
inStream.close();
} catch (IOException e) {
// ignore errors while closing the stream
}
}
return bitmap;
}
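One thing that may be worth trying with that library (an assumption on my part, not something I have verified): render the page at a multiple of its nominal size, since the blurriness often comes from rasterizing at the PDF's point dimensions and then upscaling. A sketch reusing the same getImage call as above:
// Render at 2x the page's nominal size; adjust the factor as needed.
float scale = 2.0f;
bitmap = mPdfPage.getImage((int) (width * scale), (int) (height * scale), null, true, true);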
On Android I am drawing into an android.graphics.Picture and then saving the Picture to a file. Later I reload the Picture into memory and draw it to the canvas. I noticed that Bitmaps were never drawn, and after much debugging I narrowed the problem down to Picture.writeToStream and Picture.createFromStream. It seems that Bitmaps drawn into the Picture don't get reloaded properly. Below is sample code I wrote to show the problem. In this sample my canvas is not hardware accelerated.
So my questions are as follows:
Am I doing something wrong?
Is this an Android bug? I filed the bug report https://code.google.com/p/android/issues/detail?id=54896 because I think it is.
Any known workaround?
@Override
protected void onDraw(Canvas canvas)
{
try
{
Picture picture = new Picture();
// Create a bitmap
Bitmap bitmap = Bitmap.createBitmap( 100, 100, Config.ARGB_8888);
Canvas bitmapCanvas = new Canvas(bitmap);
bitmapCanvas.drawARGB(255, 0, 255, 0);
// Draw the bitmap to the picture's canvas.
Canvas pictureCanvas = picture.beginRecording(canvas.getWidth(), canvas.getHeight());
RectF dstRect = new RectF(0, 0, 200, 200);
pictureCanvas.drawBitmap(bitmap, null, dstRect, null);
picture.endRecording();
// Save the Picture to a file.
File file = File.createTempFile("cache", ".pic");
FileOutputStream os = new FileOutputStream(file);
picture.writeToStream(os);
os.close();
// Read the picture back in
FileInputStream in = new FileInputStream(file);
Picture cachedPicture = Picture.createFromStream(in);
// Draw the cached picture to the view's canvas. This won't draw the bitmap!
canvas.drawPicture(cachedPicture);
// Uncomment the following line to see that Drawing the Picture without reloading
// it from disk works fine.
//canvas.drawPicture(picture);
}
catch (Exception e)
{
// exceptions are swallowed to keep the sample short; log them in real code
}
}
I did find an answer to this question after looking at the native Android code that backs Bitmap. Android can only save certain types of bitmaps into a Picture, because the underlying SkBitmap class only supports certain kinds of input that produce a bitmap which can be written to a Picture. So in this case I can work around the problem by providing those magical inputs: use a bitmap that is saved to disk and call BitmapFactory.decodeFileDescriptor to create it.
private Bitmap createReusableBitmap(Bitmap inBitmap)
{
Bitmap reuseableBitmap = null;
if (inBitmap == null)
return null;
try
{
// The caller is responsible for deleting the file.
File tmpBitmapFile = File.createTempFile("bitmap", ".png");
setBitmapPath(tmpBitmapFile.getAbsolutePath());
FileOutputStream out = new FileOutputStream(tmpBitmapFile);
boolean compressed = inBitmap.compress(CompressFormat.PNG, 100, out);
out.close();
if (compressed)
{
// Have to create a purgeable bitmap b/c that is the only kind that works right when drawing into a
// Picture. After digging through the android source I found decodeFileDescriptor will create the one we need.
// See https://github.com/android/platform_frameworks_base/blob/master/core/jni/android/graphics/BitmapFactory.cpp
// In short we have to give the options inPurgeable=true inInputShareable=true and call decodeFileDescriptor
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
options.inInputShareable = true;
options.inPurgeable = true;
options.inSampleSize = 1;
options.inScaled = false;
options.inMutable = false;
options.inTempStorage = DraftRenderer.tempStorage;
FileInputStream inStream = new FileInputStream(tmpBitmapFile);
FileDescriptor fd = inStream.getFD();
reuseableBitmap = BitmapFactory.decodeFileDescriptor(fd, null, options);
inStream.close();
}
} catch (Exception e) {
// log or handle the failure as appropriate
}
return reuseableBitmap;
}
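A hypothetical call site, plugging the workaround into the recording code from the question (originalBitmap is a placeholder; picture, canvas, and dstRect are the names used in the onDraw sample above):
// Draw the disk-backed, purgeable copy instead of the in-memory bitmap.
Bitmap safeBitmap = createReusableBitmap(originalBitmap);
Canvas pictureCanvas = picture.beginRecording(canvas.getWidth(), canvas.getHeight());
pictureCanvas.drawBitmap(safeBitmap, null, dstRect, null);
picture.endRecording();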
Note: according to the documentation for Picture.createFromStream(InputStream stream), a Picture created from an input stream cannot be replayed on a hardware accelerated canvas.
You can use canvas.isHardwareAccelerated() to check whether the canvas is hardware accelerated.
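A small illustrative sketch of that check inside onDraw, reusing the cachedPicture name from the sample above:
@Override
protected void onDraw(Canvas canvas) {
    if (canvas.isHardwareAccelerated()) {
        // A Picture loaded via createFromStream cannot be replayed here;
        // fall back to another drawing path or a software layer.
    } else {
        canvas.drawPicture(cachedPicture); // safe on a software canvas
    }
}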
I am using OpenCV and ZXing, and I'd like to add 2D code scanning. I have a few types of images that I could send; probably the best is Bitmap (the other option is an OpenCV Mat).
It looks like you used to be able to convert like this:
Bitmap frame = //this is the frame coming in
LuminanceSource source = new RGBLuminanceSource(frame);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
//then I can use reader.decode(bitmap) to decode the BinaryBitmap
However, RGBLuminanceSource no longer seems to take a Bitmap as input. So how else can I convert an input image to a BinaryBitmap?
Edit:
OK, so I believe I've made some progress, but I'm still having an issue. I think I have code that converts the Bitmap into the correct format; however, I am now getting an ArrayIndexOutOfBoundsException.
public void zxing(){
Bitmap bMap = Bitmap.createBitmap(frame.width(), frame.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(frame, bMap);
byte[] array = BitmapToArray(bMap);
LuminanceSource source = new PlanarYUVLuminanceSource(array, bMap.getWidth(), bMap.getHeight(), 0, 0, bMap.getWidth(), bMap.getHeight(), false);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
Reader reader = new DataMatrixReader();
String sResult = "";
try {
Result result = reader.decode(bitmap);
sResult = result.getText();
Log.i("Result", sResult);
}
catch (NotFoundException e) {
Log.d(TAG, "Code Not Found");
e.printStackTrace();
}
}
public byte[] BitmapToArray(Bitmap bmp){
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 50, stream);
byte[] byteArray = stream.toByteArray();
return byteArray;
}
I get the error
02-14 10:19:27.469: E/AndroidRuntime(29736): java.lang.ArrayIndexOutOfBoundsException: length=33341; index=34560
02-14 10:19:27.469: E/AndroidRuntime(29736): at com.google.zxing.common.HybridBinarizer.calculateBlackPoints(HybridBinarizer.java:199)
I have logged the size of the byte[], and it is the length shown above. I can't figure out why ZXing expects it to be bigger.
OK, I got it. As Sean Owen said, PlanarYUVLuminanceSource is only for the default Android camera format, which OpenCV apparently does not use. So, in short, here is how you would do it:
//(note, mTwod is the CV Mat that contains my datamatrix code)
Bitmap bMap = Bitmap.createBitmap(mTwod.width(), mTwod.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(mTwod, bMap);
int[] intArray = new int[bMap.getWidth()*bMap.getHeight()];
//copy pixel data from the Bitmap into the 'intArray' array
bMap.getPixels(intArray, 0, bMap.getWidth(), 0, 0, bMap.getWidth(), bMap.getHeight());
LuminanceSource source = new RGBLuminanceSource(bMap.getWidth(), bMap.getHeight(),intArray);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
Reader reader = new DataMatrixReader();
// ... doing the actual reading
Result result = reader.decode(bitmap);
So that's it, not hard at all. You just have to convert the Android Bitmap to an integer array, and it's a piece of cake.
public static String readQRImage(Bitmap bMap) {
String contents = null;
int[] intArray = new int[bMap.getWidth()*bMap.getHeight()];
//copy pixel data from the Bitmap into the 'intArray' array
bMap.getPixels(intArray, 0, bMap.getWidth(), 0, 0, bMap.getWidth(), bMap.getHeight());
LuminanceSource source = new RGBLuminanceSource(bMap.getWidth(), bMap.getHeight(), intArray);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
Reader reader = new MultiFormatReader(); // use MultiFormatReader here, otherwise you may get a ChecksumException
try {
Result result = reader.decode(bitmap);
contents = result.getText();
//byte[] rawBytes = result.getRawBytes();
//BarcodeFormat format = result.getBarcodeFormat();
//ResultPoint[] points = result.getResultPoints();
} catch (NotFoundException e) { e.printStackTrace(); }
catch (ChecksumException e) { e.printStackTrace(); }
catch (FormatException e) { e.printStackTrace(); }
return contents;
}
Bitmap is an Android class. Android's default image format from the camera is a planar YUV format. That is why only PlanarYUVLuminanceSource is needed and exists for Android. RGBLuminanceSource would have to be ported.
You are putting completely the wrong kind of data into the class. It is expecting pixels in YUV planar format. You are passing compressed bytes of a JPEG file.
I am working with PDF files and want to implement a page viewer for them. My idea is to convert the PDF file into bitmap images and then use a ViewPager, but I am stuck at converting the PDF to a bitmap. Any suggestions?
Include this dependency in your Gradle file:
compile 'com.github.barteksc:android-pdf-viewer:2.8.1'
Use the following function to convert a PDF page to a bitmap image:
private Bitmap generateImageFromPdf(String assetFileName, int pageNumber, int width, int height) {
PdfiumCore pdfiumCore = new PdfiumCore(mActivity);
try {
File f = FileUtils.fileFromAsset(mActivity, assetFileName);
ParcelFileDescriptor fd = ParcelFileDescriptor.open(f, ParcelFileDescriptor.MODE_READ_ONLY);
PdfDocument pdfDocument = pdfiumCore.newDocument(fd);
pdfiumCore.openPage(pdfDocument, pageNumber);
//int width = pdfiumCore.getPageWidthPoint(pdfDocument, pageNumber);
//int height = pdfiumCore.getPageHeightPoint(pdfDocument, pageNumber);
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
pdfiumCore.renderPageBitmap(pdfDocument, bmp, pageNumber, 0, 0, width, height);
//saveImage(bmp, filena);
pdfiumCore.closeDocument(pdfDocument);
return bmp;
} catch(Exception e) {
// TODO: handle the exception
}
return null;
}
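A hypothetical call site (the asset name, page index, dimensions, and the imageView field are placeholders):
// Render the first page of an asset PDF at 600x800 and display it.
Bitmap page = generateImageFromPdf("sample.pdf", 0, 600, 800);
if (page != null) {
    imageView.setImageBitmap(page);
}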
This is a way to render a PDF to an image.
It takes 20 to 25 seconds to render the PDF file to an image.
It works on Android 10, Android 9, and all lower versions.
private void generateImageFromPdf() {
try {
PDDocument doc = PDDocument.load(new File(fileurl));
PDFRenderer pdfRenderer = new PDFRenderer(doc);
Bitmap bffim = pdfRenderer.renderImageWithDPI(0, 100, Bitmap.Config.RGB_565);
String fileName = "image-" + 0 + ".png";
img.setImageBitmap(bffim);
} catch (IOException e) {
e.printStackTrace();
}
}
When trying to convert the byte[] from Camera.onPreviewFrame to a Bitmap using BitmapFactory.decodeByteArray, I get the error SkImageDecoder::Factory returned null.
Following is my code:
public void onPreviewFrame(byte[] data, Camera camera) {
Bitmap bmp=BitmapFactory.decodeByteArray(data, 0, data.length);
}
This was hard to find! But since API 8, there is a YuvImage class in android.graphics. It's not an Image descendant, so all you can do with it is save it as JPEG, but you could save it to a memory stream and then load that into a Bitmap if that's what you need.
import android.graphics.YuvImage;
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
try {
Camera.Parameters parameters = camera.getParameters();
Size size = parameters.getPreviewSize();
YuvImage image = new YuvImage(data, parameters.getPreviewFormat(),
size.width, size.height, null);
File file = new File(Environment.getExternalStorageDirectory()
.getPath() + "/out.jpg");
FileOutputStream filecon = new FileOutputStream(file);
image.compressToJpeg(
new Rect(0, 0, image.getWidth(), image.getHeight()), 90,
filecon);
} catch (FileNotFoundException e) {
Toast toast = Toast
.makeText(getBaseContext(), e.getMessage(), 1000);
toast.show();
}
}
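If you want the Bitmap in memory instead of a file, the same YuvImage approach can compress into a ByteArrayOutputStream and decode from there. This sketch reuses the data, parameters, and size locals from the code above; the quality value of 90 is arbitrary:
// Convert the YUV preview frame to a Bitmap entirely in memory.
YuvImage image = new YuvImage(data, parameters.getPreviewFormat(), size.width, size.height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
image.compressToJpeg(new Rect(0, 0, size.width, size.height), 90, out);
byte[] jpegBytes = out.toByteArray();
Bitmap bmp = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);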
Since Android 3.0 you can use a TextureView and a SurfaceTexture to display the camera preview, and then call mTextureView.getBitmap() to retrieve a friendly RGB preview frame.
A very skeletal example of how to do this is given in the TextureView docs. Note that you'll have to set your application or activity to be hardware accelerated by putting android:hardwareAccelerated="true" in the manifest.
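Here is a skeletal sketch along those lines (not the exact docs example; field names are mine, and you would attach the listener with mTextureView.setSurfaceTextureListener(mListener) inside your Activity):
private TextureView mTextureView;
private Camera mCamera;

private final TextureView.SurfaceTextureListener mListener =
        new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        mCamera = Camera.open();
        try {
            mCamera.setPreviewTexture(surface);
            mCamera.startPreview();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        mCamera.stopPreview();
        mCamera.release();
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {
        // Called for every new preview frame; getBitmap() returns an RGB copy.
        Bitmap frame = mTextureView.getBitmap();
        // ... process the frame ...
    }
};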
I found the answer after a long time. Here it is...
Instead of using BitmapFactory, I used my own method to decode this byte[] data into a valid image format. To do that, you need to know which format the camera is using by calling camera.getParameters().getPreviewFormat(). This returns a constant defined in ImageFormat. Once you know the format, use the appropriate conversion to turn the data into an image.
In my case, the byte[] data was in the YUV format, so I looked for YUV to BMP conversion and that solved my problem.
You can try this. This example sends camera frames to a server:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
try {
byte[] baos = convertYuvToJpeg(data, camera);
StringBuilder dataBuilder = new StringBuilder();
dataBuilder.append("data:image/jpeg;base64,").append(Base64.encodeToString(baos, Base64.DEFAULT));
mSocket.emit("newFrame", dataBuilder.toString());
} catch (Exception e) {
Log.d("########", "ERROR");
}
}
}; // end of the anonymous Camera.PreviewCallback
public byte[] convertYuvToJpeg(byte[] data, Camera camera) {
YuvImage image = new YuvImage(data, ImageFormat.NV21,
camera.getParameters().getPreviewSize().width, camera.getParameters().getPreviewSize().height, null);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
int quality = 20; // JPEG quality; lower values give smaller, lower-quality frames
image.compressToJpeg(new Rect(0, 0, camera.getParameters().getPreviewSize().width, camera.getParameters().getPreviewSize().height), quality, baos); // compresses the frame at the chosen quality
return baos.toByteArray();
}