How to retrieve a Bitmap ArrayList from the SD card - Android

I break an image into 9*9 pieces and place all the pieces in an ArrayList<Bitmap>. I then store the contents of that ArrayList on the SD card successfully. My question is how to retrieve those pieces (the broken-up images) from the SD card back into an ArrayList. The details are below.
ArrayList<Bitmap> cut = new ArrayList<Bitmap>(number_bixs);
Bitmap Bb = Bitmap.createScaledBitmap(Bsbmp, width, height, true);
rows = cols = (int) Math.sqrt(number_bixs);
hgt = Bb.getHeight() / rows;
wdt = Bb.getWidth() / cols;
int yaxis = 0;
for (int x = 0; x < rows; x++) {
    int xaxis = 0;
    for (int y = 0; y < cols; y++) {
        cut.add(Bitmap.createBitmap(Bb, xaxis, yaxis, wdt, hgt));
        xaxis += wdt;
    }
    yaxis += hgt; // move down to the next row of tiles
}
The code below stores the ArrayList contents on the SD card:
File FracturedDirectory = new File(
        Environment.getExternalStorageDirectory()
        + "/Fractured photoes/");
FracturedDirectory.mkdirs();
try {
    FileOutputStream out = new FileOutputStream(Environment
            .getExternalStorageDirectory().toString()
            + "/Fractured photoes/" + cut);
    selectedphoto_bitmap.compress(Bitmap.CompressFormat.PNG, 90, out);
Now, how can I retrieve the ArrayList "cut" from the SD card and load the pieces into another ArrayList?
Any suggestions are appreciated; thanks in advance.

You are doing a few things wrong.
First, you should give a name to each bitmap in the ArrayList (it could be as simple as 0.png, 1.png, 2.png, etc., depending on the position of the bitmap in the ArrayList) and pass each individual name to the FileOutputStream() constructor instead of passing cut. Passing cut makes no sense because that constructor expects a single file name.
Don't forget to call out.close() after selectedphoto_bitmap.compress(...).
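For example, a minimal sketch of such a saving loop under that naming scheme (here each piece in cut is compressed to its own file; the directory follows the question's code and the error handling is only illustrative):
File dir = new File(Environment.getExternalStorageDirectory(), "Fractured photoes");
dir.mkdirs();
for (int i = 0; i < cut.size(); i++) {
    FileOutputStream out = null;
    try {
        // one file per piece: 0.png, 1.png, 2.png, ...
        out = new FileOutputStream(new File(dir, i + ".png"));
        cut.get(i).compress(Bitmap.CompressFormat.PNG, 90, out);
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (out != null) {
            try { out.close(); } catch (IOException ignored) { }
        }
    }
}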
In order to read the bitmaps you can do something similar to this:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
for (int i = 0; i < 81; i++) {
    // build each file name the same way it was saved: 0.png, 1.png, ...
    String photoPath = Environment.getExternalStorageDirectory()
            + "/Fractured photoes/" + i + ".png";
    Bitmap bitmap = BitmapFactory.decodeFile(photoPath, options);
    cut.add(bitmap);
}

Related

Tensorflow Lite - Input shape must be 5 dimensional error

I am trying to port a TensorFlow model to TensorFlow Lite to use it in an Android application. The conversion is successful and everything runs except for Internal error: Failed to run on the given Interpreter: input must be 5-dimensional. The input in the original model was input_shape=(20, 320, 240, 1), i.e. 20 grayscale images of 320 x 240 pixels (hence the trailing 1). Here is the important code:
List<Mat> preprocessedFrames = preprocFrames(buf);
// has length of 20 -> no problem there (shouldn't affect dimensionality either...)
int[] output = new int[2];
float[][][] inputMatrices = new float[preprocessedFrames.toArray().length][320][240];
for (int i = 0; i < preprocessedFrames.toArray().length; i++) {
    Mat inpRaw = preprocessedFrames.get(i);
    Bitmap data = Bitmap.createBitmap(inpRaw.cols(), inpRaw.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(inpRaw, data);
    int[][] pixels = pixelsFromBitmap(data);
    float[][] inputMatrix = inputMatrixFromIntPixels(pixels);
    // returns float[][] with floats from 0 to 1
    inputMatrices[i] = inputMatrix;
}
try {
    detector.run(inputMatrices, output);
    Debug("results: " + output.toString());
}
The model gives me an output of 2 neurons translating into 2 labels.
The model code is the following:
model = tf.keras.Sequential(name='detector')
model.add(tf.keras.layers.Conv3D(filters=(56), input_shape=(20, 320, 240, 1), strides=(2,2,2), kernel_size=(3,11,11), padding='same', activation="relu"))
model.add(tf.keras.layers.AveragePooling3D(pool_size=(1,4,4)))
model.add(tf.keras.layers.Conv3D(filters=(72), kernel_size=(4,7,7), strides=(1,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(81), kernel_size=(2,4,4), strides=(2,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(100), kernel_size=(1,2,2), strides=(3,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(128), kernel_size=(1,2,2), padding='same'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(768, activation='tanh', kernel_regularizer=tf.keras.regularizers.l2(0.011)))
model.add(tf.keras.layers.Dropout(rate=0.1))
model.add(tf.keras.layers.Dense(256, activation='sigmoid', kernel_regularizer=tf.keras.regularizers.l2(0.012)))
model.add(tf.keras.layers.Dense(2, activation='softmax'))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001),
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])
EDIT: I printed out the first input tensor as follows:
int[] shape = detector.getInputTensor(0).shape();
for (int r = 0; r < shape.length; r++) {
    Log.d("********" + r, "*******: " + r + " : " + shape[r]);
}
With that I first get the output [1, 20, 320, 240, 1] and after that I only get [20, 320, 240]. I am really quite desperate now...
So, I figured it out by myself: I really only had to make the input 5-dimensional by putting the whole content into a first (batch) dimension of size 1 and every single pixel into a fifth (channel) dimension. That matches the shape [1, 20, 320, 240, 1] reported by getInputTensor(0).shape(): the interpreter expects a batch dimension in front of the model's input_shape=(20, 320, 240, 1).
float[][] output = new float[1][2];
float[][][][][] inputMatrices = new float[1][preprocessedFrames.toArray().length][320][240][1];
for (int i = 0; i < preprocessedFrames.toArray().length; i++) {
    Mat inpRaw = preprocessedFrames.get(i);
    Bitmap data = Bitmap.createBitmap(inpRaw.cols(), inpRaw.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(inpRaw, data);
    int[][] pixels = pixelsFromBitmap(data);
    float[][] inputMatrix = inputMatrixFromIntPixels(pixels);
    for (int j = 0; j < inputMatrix.length - 1; j++) {
        for (int k = 0; k < inputMatrix[0].length - 1; k++) {
            inputMatrices[0][i][k][j][0] = inputMatrix[j][k];
        }
    }
}

How to load and get pixel value of a 16-bit single channel PNG image?

I tried to load a 16-bit single-channel PNG image in Android and get the value of each pixel.
I used BitmapFactory.decodeStream and read the pixel values directly, but I got values like "-16250872".
How can I get the correct values for this type of image in Android?
InputStream ist = null;
try {
    ist = getAssets().open(pic_PATH2);
} catch (IOException e) {
    e.printStackTrace();
}
Bitmap bmp2 = BitmapFactory.decodeStream(ist);
int width = bmp2.getWidth();   // get the width of the image
int height = bmp2.getHeight(); // get the height of the image
float[][] Px2 = new float[width][height];
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        Px2[j][i] = bmp2.getPixel(j, i);
    }
}
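Note that Bitmap.getPixel() returns a packed 32-bit ARGB color int, which is why values like -16250872 show up (that value is 0xFF080808, a dark gray), and BitmapFactory normally decodes PNGs to 8 bits per channel, so the full 16-bit precision is usually not preserved. A minimal sketch of unpacking such a value, reusing bmp2, Px2, i and j from the snippet above (android.graphics.Color assumed):
int pixel = bmp2.getPixel(j, i);   // packed as 0xAARRGGBB
int alpha = Color.alpha(pixel);    // bits 24-31
int red   = Color.red(pixel);      // bits 16-23
int green = Color.green(pixel);    // bits 8-15
int blue  = Color.blue(pixel);     // bits 0-7
// for a grayscale image r == g == b, so a single channel gives the intensity
Px2[j][i] = red;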

Histogram Matching in Renderscript

In order to align the intensity values of two grayscale images (as a first step for further processing) I wrote Java code that:
1) converts the bitmaps of the two images into two int[] arrays containing the bitmaps' intensities (I just take the red component here, since it's grayscale, i.e. r=g=b):
public static int[] bmpToData(Bitmap bmp) {
    int width = bmp.getWidth();
    int height = bmp.getHeight();
    int anzpixel = width * height;
    int[] pixels = new int[anzpixel];
    int[] data = new int[anzpixel];
    bmp.getPixels(pixels, 0, width, 0, 0, width, height);
    for (int i = 0; i < anzpixel; i++) {
        int p = pixels[i];
        int r = (p & 0xff0000) >> 16;
        //int g = (p & 0xff00) >> 8;
        //int b = p & 0xff;
        data[i] = r;
    }
    return data;
}
2) aligns the cumulated intensity distribution of Bitmap 2 to that of Bitmap 1:
// aligns the intensity distribution of a grayscale picture "moving" (given by int[] data2)
// to the intensity distribution of a reference picture "fixed" (given by int[] data1)
public static int[] histMatch(int[] data1, int[] data2) {
    int anzpixel = data1.length;
    int[] histogram_fixed = new int[256];
    int[] histogram_moving = new int[256];
    int[] cumhist_fixed = new int[256];
    int[] cumhist_moving = new int[256];
    int i = 0;
    int j = 0;
    // read intensities of fixed and moving into the histograms
    for (int n = 0; n < anzpixel; n++) {
        histogram_fixed[data1[n]]++;
        histogram_moving[data2[n]]++;
    }
    // calc cumulated distributions
    cumhist_fixed[0] = histogram_fixed[0];
    cumhist_moving[0] = histogram_moving[0];
    for (i = 1; i < 256; ++i) {
        cumhist_fixed[i] = cumhist_fixed[i-1] + histogram_fixed[i];
        cumhist_moving[i] = cumhist_moving[i-1] + histogram_moving[i];
    }
    // look-up-table lut[]. For each quantile i of the moving picture search the
    // value j of the fixed picture where the quantile is the same as that of moving
    int[] lut = new int[anzpixel];
    j = 0;
    for (i = 0; i < 256; ++i) {
        while (cumhist_fixed[j] < cumhist_moving[i]) {
            j++;
        }
        // check whether the distance to the next-lower intensity is even lower, and if so, take this value
        if ((j != 0) && ((cumhist_fixed[j-1] - cumhist_fixed[i]) < (cumhist_fixed[j] - cumhist_fixed[i]))) {
            lut[i] = (j - 1);
        } else {
            lut[i] = (j);
        }
    }
    // apply the lut[] to the moving picture.
    i = 0;
    for (int n = 0; n < anzpixel; n++) {
        data2[n] = (int) lut[data2[n]];
    }
    return data2;
}
3) converts the int[] arrays back to Bitmap:
public static Bitmap dataToBitmap(int[] data, int width, int heigth) {
    int index = 0;
    Bitmap bmp = Bitmap.createBitmap(width, heigth, Bitmap.Config.ARGB_8888);
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < heigth; y++) {
            index = y * width + x;
            int c = data[index];
            bmp.setPixel(x, y, Color.rgb(c, c, c));
        }
    }
    return bmp;
}
While the core procedure 2) is straightforward and fast, the conversion steps 1) and 3) are rather inefficient. It would be more than cool to do the whole thing in Renderscript. But, honestly, I am completely lost here because of the missing documentation; while there are many impressive examples of what Renderscript COULD do, I don't see a way to benefit from those possibilities (no books, no docs). Any advice is highly appreciated!
As a starting point, use Android Studio to "Import Sample..." and select Basic Render Script. This will give you a working project that we will now modify.
First, let's add more Allocation references to MainActivity. We will use them to communicate image data, histograms and the LUT between Java and Renderscript.
private Allocation mInAllocation;
private Allocation mInAllocation2;
private Allocation[] mOutAllocations;
private Allocation mHistogramAllocation;
private Allocation mHistogramAllocation2;
private Allocation mLUTAllocation;
Then in onCreate() load another image, which you will also need to add to /res/drawable/.
mBitmapIn2 = loadBitmap(R.drawable.cat_480x400);
In createScript() create additional allocations:
mInAllocation2 = Allocation.createFromBitmap(mRS, mBitmapIn2);
mHistogramAllocation = Allocation.createSized(mRS, Element.U32(mRS), 256);
mHistogramAllocation2 = Allocation.createSized(mRS, Element.U32(mRS), 256);
mLUTAllocation = Allocation.createSized(mRS, Element.U32(mRS), 256);
And now the main part (in RenderScriptTask):
/*
* Invoke histogram kernel for both images
*/
mScript.bind_histogram(mHistogramAllocation);
mScript.forEach_compute_histogram(mInAllocation);
mScript.bind_histogram(mHistogramAllocation2);
mScript.forEach_compute_histogram(mInAllocation2);
/*
* Variables copied verbatim from your code.
*/
int[] histogram_fixed = new int[256];
int[] histogram_moving = new int[256];
int[] cumhist_fixed = new int[256];
int[] cumhist_moving = new int[256];
int i = 0;
int j = 0;
// copy computed histograms to Java side
mHistogramAllocation.copyTo(histogram_fixed);
mHistogramAllocation2.copyTo(histogram_moving);
// your code again...
// calc cumulated distributions
cumhist_fixed[0] = histogram_fixed[0];
cumhist_moving[0] = histogram_moving[0];
for (i = 1; i < 256; ++i) {
    cumhist_fixed[i] = cumhist_fixed[i-1] + histogram_fixed[i];
    cumhist_moving[i] = cumhist_moving[i-1] + histogram_moving[i];
}
// look-up-table lut[]. For each quantile i of the moving picture search the
// value j of the fixed picture where the quantile is the same as that of moving
int[] lut = new int[256];
j = 0;
for (i = 0; i < 256; ++i) {
    while (cumhist_fixed[j] < cumhist_moving[i]) {
        j++;
    }
    // check whether the distance to the next-lower intensity is even lower, and if so, take this value
    if ((j != 0) && ((cumhist_fixed[j-1] - cumhist_fixed[i]) < (cumhist_fixed[j] - cumhist_fixed[i]))) {
        lut[i] = (j - 1);
    } else {
        lut[i] = (j);
    }
}
// copy the LUT to Renderscript side
mLUTAllocation.copyFrom(lut);
mScript.bind_LUT(mLUTAllocation);
// Apply LUT to the destination image
mScript.forEach_apply_histogram(mInAllocation2, mInAllocation2);
/*
* Copy to bitmap and invalidate image view
*/
//mOutAllocations[index].copyTo(mBitmapsOut[index]);
// copy back to Bitmap in preparation for viewing the results
mInAllocation2.copyTo((mBitmapsOut[index]));
A couple of notes:
In your part of the code I also fixed the LUT allocation size - only 256 entries are needed.
As you can see, I left the computation of cumulative histogram and LUT on Java side. These are rather difficult to efficiently parallelize due to data dependencies and small scale of the calculations, but considering the latter I don't think it's a problem.
Finally, the Renderscript code. The only non-obvious part is the use of rsAtomicInc() to increase values in histogram bins - this is necessary due to potentially many threads attempting to increase the same bin concurrently.
#pragma version(1)
#pragma rs java_package_name(com.example.android.basicrenderscript)
#pragma rs_fp_relaxed

int32_t *histogram;
int32_t *LUT;

void __attribute__((kernel)) compute_histogram(uchar4 in)
{
    volatile int32_t *addr = &histogram[in.r];
    rsAtomicInc(addr);
}

uchar4 __attribute__((kernel)) apply_histogram(uchar4 in)
{
    uchar val = LUT[in.r];
    uchar4 result;
    result.r = result.g = result.b = val;
    result.a = in.a;
    return(result);
}

how to get pixel color using byte array in Android

In my Android project, here is my code:
for (int x = 0; x < targetBitArray.length; x += weight) {
    for (int y = 0; y < targetBitArray[x].length; y += weight) {
        targetBitArray[x][y] = bmp.getPixel(x, y) == mSearchColor;
    }
}
but this code wastes a lot of time, so I need something faster than Bitmap.getPixel().
I'm trying to get the pixel color from a byte array converted from the bitmap, but I can't.
How can I replace Bitmap.getPixel()?
Each Bitmap.getPixel invocation is relatively expensive, so you need to reduce the number of calls in order to improve the performance of your code.
My suggestion is:
Read the image data row-by-row with Bitmap.getPixels method into a local array
Iterate along your local array
e.g.
int[] rowData = new int[bitmapWidth];
for (int row = 0; row < bitmapHeight; row++) {
    // Load one row of pixels
    bitmap.getPixels(rowData, 0, bitmapWidth, 0, row, bitmapWidth, 1);
    for (int column = 0; column < bitmapWidth; column++) {
        targetBitArray[column][row] = rowData[column] == mSearchColor;
    }
}
This will be a great improvement to the performance of your code.
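If memory allows, you can go one step further and read the whole image with a single Bitmap.getPixels call, removing the per-row calls as well; a minimal sketch of that variant, reusing bitmapWidth, bitmapHeight, targetBitArray and mSearchColor from above:
int[] allPixels = new int[bitmapWidth * bitmapHeight];
// read every pixel in one call; stride == bitmapWidth, starting at (0, 0)
bitmap.getPixels(allPixels, 0, bitmapWidth, 0, 0, bitmapWidth, bitmapHeight);
for (int row = 0; row < bitmapHeight; row++) {
    for (int column = 0; column < bitmapWidth; column++) {
        targetBitArray[column][row] = allPixels[row * bitmapWidth + column] == mSearchColor;
    }
}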

How to add an image to ArrayList<String> in Android?

I need to add an image into "item". item is an XML layout with a TextView...
item = new ArrayList<String>();
item.add("an image");
Try this code
ArrayList<Bitmap> mBit = new ArrayList<Bitmap>(9);
for (int i = 0; i < 9; i++)
{
    mBit.add(Bitmap.createBitmap(bitmapOrg, (i % 3) * newWidth, (i / 3) * newHeight, newWidth, newHeight));
}
Collections.shuffle(mBit);
for (int i = 0; i < mBit.size(); i++)
{
    Bitmap bitmap = mBit.get(i);
    // Do something here
}
You could create an ArrayList of Object; you can put anything you want in it and manipulate it like this:
ArrayList<Object> array = new ArrayList<Object>();
array.add("A string");
array.add(yourbitmap);
String string = (String) array.get(0);
Bitmap bitmap = (Bitmap) array.get(1);
You must cast when you call get() because it is a list of Object.
If by image you mean an image File rather than an Image object, then store its path with
add(fileObject.toString())
and, while retrieving, recreate the File object from that stored String, e.g.
new File(array.get(0)).getPath()
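Tying this back to the question, a minimal sketch under the assumption that the images live in files on external storage (the directory and file name here are only illustrative):
ArrayList<String> item = new ArrayList<String>();
File imageFile = new File(Environment.getExternalStorageDirectory(), "Fractured photoes/0.png");
item.add(imageFile.toString());   // store the file path as a String

// later: recreate the File from the stored String and decode it back into a Bitmap
File restored = new File(item.get(0));
Bitmap bitmap = BitmapFactory.decodeFile(restored.getPath());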
