I'm creating an app that displays a QR code. The code renders correctly, but it loads a bit slowly, about 3-5 seconds after I tap the menu.
Can I make it faster, or is it normal for the page to take that long? Other parts of the app load in a second or less. The app also works offline, so no internet connection is needed.
Here is my code to generate the QR code:
ImageView imageViewBarcode = (ImageView) findViewById(R.id.imageViewBarcode);
try {
    bitmap = TextToImageEncode(barcode_user);
    imageViewBarcode.setImageBitmap(bitmap);
} catch (WriterException e) {
    e.printStackTrace();
}
The code above runs inside onCreate(), so the barcode is generated while the page loads.
Here is the function that creates the barcode:
Bitmap TextToImageEncode(String Value) throws WriterException {
    BitMatrix bitMatrix;
    try {
        bitMatrix = new MultiFormatWriter().encode(
                Value,
                BarcodeFormat.QR_CODE,
                QRcodeWidth, QRcodeWidth, null
        );
    } catch (IllegalArgumentException illegalArgumentException) {
        return null;
    }
    int bitMatrixWidth = bitMatrix.getWidth();
    int bitMatrixHeight = bitMatrix.getHeight();
    int[] pixels = new int[bitMatrixWidth * bitMatrixHeight];
    for (int y = 0; y < bitMatrixHeight; y++) {
        int offset = y * bitMatrixWidth;
        for (int x = 0; x < bitMatrixWidth; x++) {
            pixels[offset + x] = bitMatrix.get(x, y)
                    ? getResources().getColor(R.color.colorBlack)
                    : getResources().getColor(R.color.colorWhite);
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(bitMatrixWidth, bitMatrixHeight, Bitmap.Config.ARGB_4444);
    bitmap.setPixels(pixels, 0, bitMatrixWidth, 0, 0, bitMatrixWidth, bitMatrixHeight);
    return bitmap;
}
You are calling getResources().getColor() inside a double loop, i.e. when your image is 100x100 pixels this gets called 10,000 times. Instead, assign the color values to variables outside of the loops and use those variables inside the loops:
int color_black = getResources().getColor(R.color.colorBlack);
int color_white = getResources().getColor(R.color.colorWhite);
for (int y = 0; y < bitMatrixHeight; y++) {
    int offset = y * bitMatrixWidth;
    for (int x = 0; x < bitMatrixWidth; x++) {
        pixels[offset + x] = bitMatrix.get(x, y) ? color_black : color_white;
    }
}
EDIT: added code example
Found this: zxing generate QR on another thread here. It solved a similar problem for me.
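For reference, a minimal sketch of that idea, assuming the TextToImageEncode() method, barcode_user, and ImageView from the question above; the thread handling shown here is illustrative, not the linked answer's exact code:
// Generate the QR bitmap off the UI thread, then post the result back to the ImageView.
final ImageView imageViewBarcode = (ImageView) findViewById(R.id.imageViewBarcode);
new Thread(new Runnable() {
    @Override
    public void run() {
        Bitmap qr = null;
        try {
            qr = TextToImageEncode(barcode_user); // the slow part now runs in the background
        } catch (WriterException e) {
            e.printStackTrace();
        }
        final Bitmap result = qr;
        runOnUiThread(new Runnable() { // view updates must happen on the main thread
            @Override
            public void run() {
                if (result != null) {
                    imageViewBarcode.setImageBitmap(result);
                }
            }
        });
    }
}).start();
This way onCreate() returns immediately and the page appears at once; the barcode pops in as soon as encoding finishes.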
I have a function which receives camera frames and applies contrast/brightness adjustments to them. When I use it as follows...
void applyContrastBrightnessToFrame(Mat &frame, float contrast, int brightness)
{
    for (int i = 0; i < frame.rows; i++) {
        uchar *basePixel = frame.ptr(i);
        for (int j = 0; j != frame.cols * frame.channels(); j += frame.channels()) {
            int channelsToBlend = min(3, frame.channels()); // never adjust alpha channel
            for (int c = 0; c < channelsToBlend; c++) {
                basePixel[j + c] = saturate_cast<uchar>(basePixel[j + c] * contrast + brightness);
            }
        }
    }
}
It works perfectly.
But when I convert the image to HLS so that I can make these adjustments without ruining the saturation, the pixel manipulations fail...
void applyContrastBrightnessToFrame(Mat &frame, float contrast, int brightness)
{
    cvtColor(frame, frame, CV_RGBA2RGB);
    cvtColor(frame, frame, CV_RGB2HLS);
    assert(frame.channels() == 3);
    for (int i = 0; i < frame.rows; i++) {
        uchar *basePixel = frame.ptr(i);
        for (int j = 0; j != frame.cols * frame.channels(); j += frame.channels()) {
            int lumaChannel = 1;
            // all pixel manipulations fail....
            basePixel[j + lumaChannel] = 0; // setting to a constant
            saturate_cast<uchar>(basePixel[j + lumaChannel] + brightness); // adjusting
        }
    }
    cvtColor(frame, frame, CV_HLS2RGB);
    cvtColor(frame, frame, CV_BGR2RGBA);
    assert(frame.channels() == 4);
}
Here's what I know: The conversions are successful. When I capture an image from the camera and run it through the same function, the pixel manipulations succeed - this is especially weird since the processing of frames and captured images is identical.
What could be going wrong?
I can see that you are trying to alter the brightness/contrast of a frame pixel-wise.
Instead of iterating through every pixel of every channel of the frame, you can first split the HLS channels, perform the operations, and merge them back.
void applyContrastBrightnessToFrame(Mat &frame, float contrast, int brightness)
{
    cvtColor(frame, frame, CV_RGBA2RGB);
    cvtColor(frame, frame, CV_RGB2HLS);
    vector<Mat> hlsChannels(3);
    split(frame, hlsChannels);
    hlsChannels[1] += brightness; // adding brightness to channel index 1 (the lightness channel)
    merge(hlsChannels, frame);
    cvtColor(frame, frame, CV_HLS2RGB);
    cvtColor(frame, frame, CV_BGR2RGBA);
}
You can also try looping over the pixels in the lightness channel alone.
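If you prefer that route, here is a rough sketch of isolating the lightness plane, shown with OpenCV's Java bindings (org.opencv.core classes) since most of this page is Android code; the original question is C++, so treat the names and calls below purely as an illustration of the idea:
// Pull out the lightness plane (channel 1 of an HLS Mat), adjust it, and put it back.
Mat lightness = new Mat();
Core.extractChannel(hlsFrame, lightness, 1);             // channel 1 = L in HLS
Core.add(lightness, Scalar.all(brightness), lightness);  // saturating add on 8-bit data
Core.insertChannel(lightness, hlsFrame, 1);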
Hope this helps!
In order to align the intensity values of two grayscale images (as a first step for further processing) I wrote a Java method that:
converts the bitmaps of the two images into two int[] arrays containing the bitmaps' intensities (I just take the red component here, since it's grayscale, i.e. r = g = b).
public static int[] bmpToData(Bitmap bmp) {
    int width = bmp.getWidth();
    int height = bmp.getHeight();
    int anzpixel = width * height;
    int[] pixels = new int[anzpixel];
    int[] data = new int[anzpixel];
    bmp.getPixels(pixels, 0, width, 0, 0, width, height);
    for (int i = 0; i < anzpixel; i++) {
        int p = pixels[i];
        int r = (p & 0xff0000) >> 16;
        //int g = (p & 0xff00) >> 8;
        //int b = p & 0xff;
        data[i] = r;
    }
    return data;
}
aligns the cumulative intensity distribution of Bitmap 2 to that of Bitmap 1
// aligns the intensity distribution of a moving grayscale picture (given by int[] data2)
// to the intensity distribution of a fixed reference picture (given by int[] data1)
public static int[] histMatch(int[] data1, int[] data2){
int anzpixel = data1.length;
int[] histogram_fixed = new int[256];
int[] histogram_moving = new int[256];
int[] cumhist_fixed = new int[256];
int[] cumhist_moving = new int[256];
int i=0;
int j=0;
//read intensities of fixed und moving in histogram
for (int n = 0; n < anzpixel; n++) {
histogram_fixed[data1[n]]++;
histogram_moving[data2[n]]++;
}
// calc cumulated distributions
cumhist_fixed[0]=histogram_fixed[0];
cumhist_moving[0]=histogram_moving[0];
for ( i=1; i < 256; ++i ) {
cumhist_fixed[i] = cumhist_fixed[i-1]+histogram_fixed[i];
cumhist_moving[i] = cumhist_moving[i-1]+histogram_moving [i];
}
// look-up-table lut[]. For each quantile i of the moving picture search the
// value j of the fixed picture where the quantile is the same as that of moving
int[] lut = new int[anzpixel];
j=0;
for ( i=0; i < 256; ++i ){
while(cumhist_fixed[j]< cumhist_moving[i]){
j++;
}
// check, whether the distance to the next-lower intensity is even lower, and if so, take this value
if ((j!=0) && ((cumhist_fixed[j-1]- cumhist_fixed[i]) < (cumhist_fixed[j]- cumhist_fixed[i]))){
lut[i]= (j-1);
}
else {
lut[i]= (j);
}
}
// apply the lut[] to moving picture.
i=0;
for (int n = 0; n < anzpixel; n++) {
data2[n]=(int) lut[data2[n]];
}
return data2;
}
converts the int[] arrays back to Bitmap.
public static Bitmap dataToBitmap(int[] data, int width, int heigth) {
int index=0;
Bitmap bmp = Bitmap.createBitmap(width, heigth, Bitmap.Config.ARGB_8888);
for (int x = 0; x < width; x++) {
for (int y = 0; y < heigth; y++) {
index=y*width+x;
int c = data[index];
bmp.setPixel(x,y,Color.rgb(c, c, c));
}
}
return bmp;
}
While the core procedure 2) is straightforward and fast, the conversion steps 1) and 3) are rather inefficient. It would be more than cool to do the whole thing in Renderscript. But, honestly, I am completely lost in doing so because of the missing documentation, and while there are many impressive examples of what Renderscript COULD do, I don't see a way to benefit from these possibilities (no books, no docs). Any advice is highly appreciated!
As a starting point, use Android Studio to "Import Sample..." and select Basic Render Script. This will give you a working project that we will now modify.
First, let's add more Allocation references to MainActivity. We will use them to communicate image data, histograms and the LUT between Java and Renderscript.
private Allocation mInAllocation;
private Allocation mInAllocation2;
private Allocation[] mOutAllocations;
private Allocation mHistogramAllocation;
private Allocation mHistogramAllocation2;
private Allocation mLUTAllocation;
Then in onCreate() load another image, which you will also need to add to /res/drawable/.
mBitmapIn2 = loadBitmap(R.drawable.cat_480x400);
In createScript() create additional allocations:
mInAllocation2 = Allocation.createFromBitmap(mRS, mBitmapIn2);
mHistogramAllocation = Allocation.createSized(mRS, Element.U32(mRS), 256);
mHistogramAllocation2 = Allocation.createSized(mRS, Element.U32(mRS), 256);
mLUTAllocation = Allocation.createSized(mRS, Element.U32(mRS), 256);
And now the main part (in RenderScriptTask):
/*
* Invoke histogram kernel for both images
*/
mScript.bind_histogram(mHistogramAllocation);
mScript.forEach_compute_histogram(mInAllocation);
mScript.bind_histogram(mHistogramAllocation2);
mScript.forEach_compute_histogram(mInAllocation2);
/*
* Variables copied verbatim from your code.
*/
int []histogram_fixed = new int[256];
int []histogram_moving = new int[256];
int[] cumhist_fixed = new int[256];
int[] cumhist_moving = new int[256];
int i=0;
int j=0;
// copy computed histograms to Java side
mHistogramAllocation.copyTo(histogram_fixed);
mHistogramAllocation2.copyTo(histogram_moving);
// your code again...
// calc cumulated distributions
cumhist_fixed[0]=histogram_fixed[0];
cumhist_moving[0]=histogram_moving[0];
for ( i=1; i < 256; ++i ) {
cumhist_fixed[i] = cumhist_fixed[i-1]+histogram_fixed[i];
cumhist_moving[i] = cumhist_moving[i-1]+histogram_moving [i];
}
// look-up-table lut[]. For each quantile i of the moving picture search the
// value j of the fixed picture where the quantile is the same as that of moving
int[] lut = new int[256];
j=0;
for ( i=0; i < 256; ++i ){
while(cumhist_fixed[j]< cumhist_moving[i]){
j++;
}
// check, whether the distance to the next-lower intensity is even lower, and if so, take this value
if ((j!=0) && ((cumhist_fixed[j-1]- cumhist_fixed[i]) < (cumhist_fixed[j]- cumhist_fixed[i]))){
lut[i]= (j-1);
}
else {
lut[i]= (j);
}
}
// copy the LUT to Renderscript side
mLUTAllocation.copyFrom(lut);
mScript.bind_LUT(mLUTAllocation);
// Apply LUT to the destination image
mScript.forEach_apply_histogram(mInAllocation2, mInAllocation2);
/*
* Copy to bitmap and invalidate image view
*/
//mOutAllocations[index].copyTo(mBitmapsOut[index]);
// copy back to Bitmap in preparation for viewing the results
mInAllocation2.copyTo((mBitmapsOut[index]));
A couple of notes:
In your part of the code I also fixed the LUT allocation size - only 256 entries are needed.
As you can see, I left the computation of cumulative histogram and LUT on Java side. These are rather difficult to efficiently parallelize due to data dependencies and small scale of the calculations, but considering the latter I don't think it's a problem.
Finally, the Renderscript code. The only non-obvious part is the use of rsAtomicInc() to increase values in histogram bins - this is necessary due to potentially many threads attempting to increase the same bin concurrently.
#pragma version(1)
#pragma rs java_package_name(com.example.android.basicrenderscript)
#pragma rs_fp_relaxed
int32_t *histogram;
int32_t *LUT;
void __attribute__((kernel)) compute_histogram(uchar4 in)
{
volatile int32_t *addr = &histogram[in.r];
rsAtomicInc(addr);
}
uchar4 __attribute__((kernel)) apply_histogram(uchar4 in)
{
uchar val = LUT[in.r];
uchar4 result;
result.r = result.g = result.b = val;
result.a = in.a;
return(result);
}
Hello, I want to convert the colors in an image. I'm using a per-pixel method, but it seems very slow:
src.getPixels(pixels, 0, width, 0, 0, width, height);
// RGB values
int R;
for (int i = 0; i < pixels.length; i++) {
    // Get RGB values as ints
    // Set pixel color
    pixels[i] = color;
}
// Set pixels
src.setPixels(pixels, 0, width, 0, 0, width, height);
My question: is there any way I can do this using OpenCV, i.e. change each pixel to the color I want?
I recommend this excellent article on how to access/modify an OpenCV image buffer. In particular,
"the efficient way":
int i, j;
uchar* p;
for (i = 0; i < nRows; ++i)
{
    p = I.ptr<uchar>(i);
    for (j = 0; j < nCols; ++j)
    {
        p[j] = table[p[j]];
    }
}
Or "the iterator-safe method":
MatIterator_<Vec3b> it, end;
for (it = I.begin<Vec3b>(), end = I.end<Vec3b>(); it != end; ++it)
{
    (*it)[0] = table[(*it)[0]];
    (*it)[1] = table[(*it)[1]];
    (*it)[2] = table[(*it)[2]];
}
For further optimizations, using cv::LUT() (where possible) can give huge speedups, but it is more intensive to design/code.
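Since the question is about Android, here is a rough sketch of the same LUT idea through OpenCV's Java bindings; the table contents and variable names are only placeholders:
// Remap every 8-bit value through a 256-entry lookup table with Core.LUT.
byte[] table = new byte[256];
for (int v = 0; v < 256; v++) {
    table[v] = (byte) (255 - v); // example mapping (invert); replace with the mapping you want
}
Mat lut = new Mat(1, 256, CvType.CV_8UC1);
lut.put(0, 0, table);
Mat dst = new Mat();
Core.LUT(src, lut, dst); // applies the table to every channel of the 8-bit Mat src
For setting everything to one fixed color, though, Mat.setTo() (optionally with a mask) is usually simpler and just as fast.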
You can access pixels by using:
img.at<Type>(y, x);
So to change an RGB Value you can use:
// read color
Vec3b intensity = img.at<Vec3b>(y, x);
// compute new color using intensity.val[0] etc. to access color values
// write new color
img.at<Vec3b>(y, x) = intensity;
@Boyko mentioned an article from OpenCV concerning fast access to the image pixels if you want to iterate over all of them. The method I would prefer from this article is the iterator method, as it is only slightly slower than direct pointer access but safer to use.
Example Code:
Mat& AssignNewColors(Mat& img)
{
// accept only char type matrices
CV_Assert(img.depth() == CV_8U);
const int channels = img.channels();
switch(channels)
{
// case 1: skipped here
case 3:
{
// Read RGB pixels
Mat_<Vec3b> _img = img;
for( int i = 0; i < img.rows; ++i)
for( int j = 0; j < img.cols; ++j )
{
_img(i,j)[0] = computeNewColor(_img(i,j)[0]);
_img(i,j)[1] = computeNewColor(_img(i,j)[1]);
_img(i,j)[2] = computeNewColor(_img(i,j)[2]);
}
img = _img;
break;
}
}
return img;
}
While generating a QR code for Android using the ZXing library, is it possible to set the version number, e.g. version 4 or any other version?
Any guidance or link would be appreciated.
Thank you.
Yes, check the EncodeHintType map:
private Bitmap stringToQRCode(String text, int width, int height) {
BitMatrix bitMatrix;
try {
HashMap<EncodeHintType, Object> map = new HashMap<>();
map.put(EncodeHintType.CHARACTER_SET, "utf-8");
map.put(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.M);
map.put(EncodeHintType.QR_VERSION, 9); // (1-40)
map.put(EncodeHintType.MARGIN, 2); // pixels
bitMatrix = new MultiFormatWriter().encode(text, BarcodeFormat.QR_CODE, width, height, map);
int bitMatrixWidth = bitMatrix.getWidth();
int bitMatrixHeight = bitMatrix.getHeight();
int[] pixels = new int[bitMatrixWidth * bitMatrixHeight];
int colorWhite = 0xFFFFFFFF;
int colorBlack = 0xFF000000;
for (int y = 0; y < bitMatrixHeight; y++) {
int offset = y * bitMatrixWidth;
for (int x = 0; x < bitMatrixWidth; x++) {
pixels[offset + x] = bitMatrix.get(x, y) ? colorBlack : colorWhite;
}
}
Bitmap bitmap = Bitmap.createBitmap(bitMatrixWidth, bitMatrixHeight, Bitmap.Config.ARGB_4444);
bitmap.setPixels(pixels, 0, bitMatrixWidth, 0, 0, bitMatrixWidth, bitMatrixHeight);
return bitmap;
} catch (Exception i) {
i.printStackTrace();
return null;
}
}
No. There would be no real point to this. The version can't be lower than what is required to encode the data, and setting it higher just makes a denser QR code that's slightly harder to read.
I want to get the dominant color in an Android CvCameraViewFrame object. I use the following OpenCV Android code to do that. This code was converted from OpenCV C++ code to OpenCV Android code. In the code below I loop through all the pixels in my camera frame, find the color of each pixel, and store the counts in a HashMap so I can find the dominant color at the end of the loop. Looping through every pixel takes about 30 seconds, which is unacceptable for me. Could somebody please review this code and point out how I can find the dominant color in a camera frame?
private String[] colors = {"cBLACK", "cWHITE", "cGREY", "cRED", "cORANGE", "cYELLOW", "cGREEN", "cAQUA", "cBLUE", "cPURPLE", "cPINK", "cRED"};
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
mRgba = inputFrame.rgba();
if (mIsColorSelected) {
Imgproc.cvtColor(mRgba, mRgba, Imgproc.COLOR_BGR2HSV);
int h = mRgba.height(); // Pixel height
int w = mRgba.width(); // Pixel width
int rowSize = (int)mRgba.step1(); // Size of row in bytes, including extra padding
float initialConfidence = 1.0f;
Map<String, Integer> tallyColors = new HashMap<String, Integer>();
byte[] pixelsTotal = new byte[h*rowSize];
mRgba.get(0,0,pixelsTotal);
//This for loop takes about 30 seconds to process for my camera frame
for (int y=0; y<h; y++) {
for (int x=0; x<w; x++) {
// Get the HSV pixel components
int hVal = (int)pixelsTotal[(y*rowSize) + x + 0]; // Hue
int sVal = (int)pixelsTotal[(y*rowSize) + x + 1]; // Saturation
int vVal = (int)pixelsTotal[(y*rowSize) + x + 2]; // Value (Brightness)
// Determine what type of color the HSV pixel is.
String ctype = getPixelColorType(hVal, sVal, vVal);
// Keep count of these colors.
int totalNum = 0;
try{
totalNum = tallyColors.get(ctype);
} catch(Exception ex){
totalNum = 0;
}
totalNum++;
tallyColors.put(ctype, totalNum);
}
}
int tallyMaxIndex = 0;
int tallyMaxCount = -1;
int pixels = w * h;
for (int i=0; i<colors.length; i++) {
String v = colors[i];
int pixCount;
try{
pixCount = tallyColors.get(v);
} catch(Exception e){
pixCount = 0;
}
Log.i(TAG, v + " - " + (pixCount*100/pixels) + "%, ");
if (pixCount > tallyMaxCount) {
tallyMaxCount = pixCount;
tallyMaxIndex = i;
}
}
float percentage = initialConfidence * (tallyMaxCount * 100 / pixels);
Log.i(TAG, "Color of currency note: " + colors[tallyMaxIndex] + " (" + percentage + "% confidence).");
}
return mRgba;
}
private String getPixelColorType(int H, int S, int V)
{
String color;
if (V < 75)
color = "cBLACK";
else if (V > 190 && S < 27)
color = "cWHITE";
else if (S < 53 && V < 185)
color = "cGREY";
else { // Is a color
if (H < 14)
color = "cRED";
else if (H < 25)
color = "cORANGE";
else if (H < 34)
color = "cYELLOW";
else if (H < 73)
color = "cGREEN";
else if (H < 102)
color = "cAQUA";
else if (H < 127)
color = "cBLUE";
else if (H < 149)
color = "cPURPLE";
else if (H < 175)
color = "cPINK";
else // full circle
color = "cRED"; // back to Red
}
return color;
}
Thank you very much.
OpenCV has a histogram function which counts all image colors. After the histogram is calculated, all you have to do is choose the bin with the biggest count...
Check here for a tutorial (C++): Histogram Calculation.
You might also check this Stack Overflow answer, which shows an example of how to use Android's histogram function Imgproc.calcHist().
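For reference, a minimal sketch of that approach on the Android side (org.opencv.core / org.opencv.imgproc classes); it assumes an HSV Mat such as the question's mRgba after the cvtColor call, and the variable names and bin count are only illustrative:
// Histogram the hue channel and take the tallest bin as the dominant hue.
List<Mat> images = new ArrayList<>();
images.add(hsv); // CV_8UC3 HSV image
Mat hist = new Mat();
Imgproc.calcHist(images,
        new MatOfInt(0),            // channel 0 = hue
        new Mat(),                  // no mask
        hist,
        new MatOfInt(180),          // 180 hue bins (8-bit hue in OpenCV runs 0-179)
        new MatOfFloat(0f, 180f));
Core.MinMaxLocResult mm = Core.minMaxLoc(hist);
int dominantHueBin = (int) mm.maxLoc.y; // hist is a 180x1 column, so the bin index is the row
A single call like this runs in native code and replaces the whole per-pixel Java loop; mapping the winning hue range back to a color name can then reuse thresholds similar to the question's getPixelColorType().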
Think about resizing your images first; you can then multiply the resulting counts by the same scale:
resize(largeImage, smallerImage, Size(), 0.25, 0.25, INTER_CUBIC); // e.g. quarter size in each dimension
Or,
you may check these solutions:
You could find the dominant color using the k-means clustering method.
This link will be useful:
https://www.youtube.com/watch?v=f54-x3PckH8
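A rough sketch of that k-means idea with OpenCV's Java bindings; the cluster count, the variable names, and the assumption that `rgb` is a CV_8UC3 frame are all illustrative:
// Cluster the pixels into k colors and take the center of the largest cluster as the dominant color.
Mat samples = rgb.reshape(1, rgb.rows() * rgb.cols()); // one row per pixel, 3 columns
Mat samples32f = new Mat();
samples.convertTo(samples32f, CvType.CV_32F);

int k = 4; // number of color clusters to try
Mat labels = new Mat();
Mat centers = new Mat();
TermCriteria criteria = new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 10, 1.0);
Core.kmeans(samples32f, k, labels, criteria, 3, Core.KMEANS_PP_CENTERS, centers);

// Count how many pixels landed in each cluster and pick the biggest one.
int[] labelArr = new int[(int) labels.total()];
labels.get(0, 0, labelArr); // labels are CV_32S, one entry per pixel
int[] counts = new int[k];
for (int label : labelArr) {
    counts[label]++;
}
int dominant = 0;
for (int i = 1; i < k; i++) {
    if (counts[i] > counts[dominant]) dominant = i;
}
// The three columns of this row are the dominant color's components,
// in the same channel order as the input Mat.
double c0 = centers.get(dominant, 0)[0];
double c1 = centers.get(dominant, 1)[0];
double c2 = centers.get(dominant, 2)[0];
Running this on a downscaled copy of the frame (see the resize suggestion above) keeps it fast enough for per-frame use.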