Running tflite sample segmentation app with different model - android

I am trying to run the sample app from tensorflow for image segmentation with a different model.
I would like to run it with the model shufflenetv2 with dpc.
So I copied the model, and changed imageSize to 225 in ImageSegmentationModelExecutor.kt.
Then I am getting the error:
something went wrong: y + height must be <= bitmap.height()
Doing some small adjustments in the function scaleBitmapAndKeepRatio of ImageUtils.kt works around the problem (I just changed targetBmp.width to targetBmp.height twice: once in the matrix and a second time in the return statement).
This brings up the next error:
something went wrong: Cannot convert between a TensorFlowLite buffer with 202500 bytes and a Java Buffer with 4252500 bytes.
The ratio of these two numbers is NUM_CLASSES (4252500 / 202500 = 21). I'm not sure if this is the right way to get it running, or how to continue from here.
Any ideas or suggestions?

You seem to have two unrelated problems:
1) The method scaleBitmapAndKeepRatio seems buggy.
By replacing width with height you're not solving the problem, only changing the moment when it will happen. See the reference for createBitmap: it throws an IllegalArgumentException
if the x, y, width, height values are outside of the dimensions of the source bitmap, or width is <= 0, or height is <= 0, or if the source bitmap has already been recycled
so to get a squared, adjusted bitmap, you'd be better off getting the smallest dimension, like this:
val dimension = min(targetBmp.height, targetBmp.width)
and replacing width with dimension in both places, as in the sketch below.
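For reference, here's what the adjusted helper could look like in Kotlin, assuming the sample's original signature (this is my reading of the fix, not the sample's exact code):

    import android.graphics.Bitmap
    import android.graphics.Matrix
    import android.graphics.RectF
    import kotlin.math.min

    fun scaleBitmapAndKeepRatio(targetBmp: Bitmap, reqHeightInPixels: Int, reqWidthInPixels: Int): Bitmap {
        if (targetBmp.height == reqHeightInPixels && targetBmp.width == reqWidthInPixels) {
            return targetBmp
        }
        // Crop to the smallest dimension so the region always fits inside the
        // source, whether the bitmap is portrait or landscape.
        val dimension = min(targetBmp.height, targetBmp.width)
        val matrix = Matrix().apply {
            setRectToRect(
                RectF(0f, 0f, dimension.toFloat(), dimension.toFloat()),
                RectF(0f, 0f, reqWidthInPixels.toFloat(), reqHeightInPixels.toFloat()),
                Matrix.ScaleToFit.FILL
            )
        }
        // A dimension x dimension region never violates y + height <= bitmap.height()
        return Bitmap.createBitmap(targetBmp, 0, 0, dimension, dimension, matrix, true)
    }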
2) I believe the input node of your tflite model is not compatible with the tflite segmentation example.
Have a look at the default model Google provides, using Netron.
You can see that the default model's input node is a quantized float32. It seems likely that you've converted your model to tflite using the default instructions in quantize.md, which gives you a model expecting a quantized uint8 input node, so the data types mismatch. Remember that the tflite examples in the repository are tailor-made for a specific input and are not very generic.
You'd rather do the conversion shown below to get a quantized float32 input node:
    import tensorflow as tf

    # Load the frozen TensorFlow model
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file=MODEL_FILE,
        # For the Xception model it needs to be 'sub_7'; for MobileNet it is 'sub_2'
        input_arrays=['sub_2'],
        # For the Xception model it needs to be 'ResizeBilinear_3'; for MobileNet it is 'ResizeBilinear_2'
        output_arrays=['ResizeBilinear_2'],
        input_shapes={'sub_2': [1, 257, 257, 3]}
    )
    tflite_model = converter.convert()

Related

Tensorflow.lite model produces wrong (different) results in Android app?

I've made an image classification model and converted it to tflite format.
Then I verified the tflite model in Python using tf.lite.Interpreter, and it produces the same results for my test image as the original model. Here's a colab link to verify.
Then I embedded it into a sample Android app, using Android Studio ML Model Binding and the exact example code from Android Studio.
Here's the main activity code, you can also use this link to navigate to the full android project.
val assetManager = this.assets
val istr = assetManager.open("test_image.JPG") //The same image
val b = BitmapFactory.decodeStream(istr)
val model = Model2.newInstance(this) //Model definition generated by Android Studio
// Creates inputs for reference.
val image = TensorImage.fromBitmap(b)
// Runs model inference and gets result.
val outputs = model.process(image)
val probability = outputs.probabilityAsCategoryList
probability.sortByDescending { it.score }
val top9 = probability.take(9)
this.findViewById<TextView>(R.id.results_text).text = top9.toString()
And then I'm getting completely different results on Android for the same model and the same input image.
Here are the results matching my initial model in Python, versus the wrong results I'm getting in the Android app (screenshots omitted):
Links to the model and the test image are there in both examples, but I'll post them into the question once again:
tflite model
test image
I guess it has something to do with the input/output formats of the model, or the image being interpreted differently in Python and Android, or the metadata I added to the model being somehow wrong. Anyway, I've tried everything to localize the issue and now I'm stuck.
How do I fix my model or Android code so it produces the same results as my python code?
I've managed to find and fix the issue:
My model from this tutorial included a built-in image normalization layer. Image normalization is when you transform standard 0-255 image color values to 0.0-1.0 float values, suitable for machine learning.
But the metadata I used for the tflite model included 2 parameters for external normalization: mean and std.
Formula for each value being: normalized_value = (value - mean) / std
Since my model handles its own normalization, I need to turn off external normalization by setting mean = 0 and std = 1.
This way I'll get normalized_value = value.
So, setting the tflite metadata parameters to these:
image_min=0,
image_max=255.0,
mean=[0.0],
std=[1.0]
fixed the double normalization issue, and my model now produces correct results in the Android app.
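For anyone preprocessing manually with the TFLite Support library instead of relying on metadata and ML Model Binding, the same external normalization is expressed with NormalizeOp. A minimal Kotlin sketch (the 224x224 input size is an assumption for illustration; b is the bitmap from the code above):

    import org.tensorflow.lite.support.common.ops.NormalizeOp
    import org.tensorflow.lite.support.image.ImageProcessor
    import org.tensorflow.lite.support.image.TensorImage
    import org.tensorflow.lite.support.image.ops.ResizeOp

    // NormalizeOp(mean, std) computes (value - mean) / std per channel.
    // mean = 0f, std = 1f is the identity, which is what a model with a
    // built-in normalization layer needs.
    val processor = ImageProcessor.Builder()
        .add(ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
        .add(NormalizeOp(0f, 1f)) // no-op: the model normalizes internally
        .build()
    val input = processor.process(TensorImage.fromBitmap(b))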

How to customize parameters used on renderscript root function?

Background
I'm new to RenderScript, and I would like to try some experiments with it (small ones, not the complex ones found in the SDK), so I thought of an exercise to try out, which is based on a previous question of mine (using the NDK).
What I want to do
In short, I would like to pass bitmap data to RenderScript, and then have it copy the data to another bitmap whose dimensions are the opposite of the original's, so that the second bitmap is a rotation of the first.
For illustration:
From this bitmap (width:2 , height:4):
01
23
45
67
I would like it to rotate (counter-clockwise, by 90 degrees) to:
1357
0246
The problem
I've noticed that when I try to change the signature of the root function, Eclipse gives me errors about it.
Even making new functions creates new errors. I've even tried the same code shown on Google's blog (here), but I couldn't find out how the author created the functions he used, and why I can't change the filter function to take the input and output bitmap arrays.
What can I do in order to customize the parameters I send to renderscript, and use the data inside it?
Is it ok not to use "filter" or "root" functions (API 11 and above)? What can I do in order to have more flexibility about what I can do there?
You are asking a bunch of separate questions here, so I will answer them in order.
1) You want to rotate a non-square bitmap. Unfortunately, the bitmap model for Renderscript won't allow you to do this easily. The reason is that the input and output allocations must have the same shape (i.e. the same number of dimensions and values of those dimensions, even if the Types are different). In order to get the effect you want, you should use a root function that only has an output allocation of the new shape (i.e. input columns x input rows). You can create an rs_allocation global variable for holding your input bitmap (which you can then create/bind on the Java side). The kernel then merely needs to set the output cell to the result of rsGetElementAt(globalInAlloc, y, x). A host-side sketch follows.
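Here's a rough Kotlin sketch of the host side under those assumptions. ScriptC_rotate, the script global gIn, and the kernel (a root function with only an output allocation, writing rsGetElementAt_uchar4(gIn, y, x) into each cell) are illustrative names, not code from any sample:

    import android.graphics.Bitmap
    import android.renderscript.Allocation
    import android.renderscript.RenderScript

    val rs = RenderScript.create(context)
    val script = ScriptC_rotate(rs) // generated from a hypothetical rotate.rs

    val src: Bitmap = loadSourceBitmap() // however you obtain the input
    // The output has the transposed shape: src.height x src.width.
    val dst = Bitmap.createBitmap(src.height, src.width, Bitmap.Config.ARGB_8888)

    val inAlloc = Allocation.createFromBitmap(rs, src)
    val outAlloc = Allocation.createFromBitmap(rs, dst)

    script.set_gIn(inAlloc)       // bind the input bitmap to the rs_allocation global
    script.forEach_root(outAlloc) // the kernel only has an output allocation
    outAlloc.copyTo(dst)          // dst is now the rotated bitmap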
2) If you are using API 11, you can't adjust the signature of the root() function (you can pass null allocations as input/output on the Java side if you are not using them). You also can't create more than one kernel per source file on these older API levels, so you are forced to have only a single root() function. If you want to use more kernels per source file, consider targeting a higher API level.

create big arrays from jni (ndk) and return them to android

I am developing an application where I need to read large image files (6000x6000), apply some filtering (like blurring and color effects), and then save the image.
The filtering library is a 3rd-party library written in Java that takes something like this as input:
    /**
     * rgbArray : an array of pixels
     */
    public int[] ImageFiltering(int[] rgbArray, int Width, int Height);
The problem is that if I load the image in memory (6000 x 6000 x 4 bytes = 137.33 MB), Android throws an OutOfMemoryError.
After reading some documentation, and knowing that memory allocated from the NDK is not part of the application heap, I got an interesting idea:

Open the image from the NDK
Read its contents and save them in an array
Pass the array back to Java
Apply the filter to the array data
Return the array to the NDK
Save the data array into a new image and release the array memory

Here is an example of an NDK function which returns the big, fat array:
    jint*
    Java_com_example_hellojni_HelloJni_stringFromJNI(JNIEnv* env, jobject thiz, jint w, jint h)
    {
        int* pixels = (int*)malloc(w * h * 4);
        read_image_into_array("image.jpg", pixels);
        return pixels;
    }
The goal is to reserve the memory natively in order to avoid the OutOfMemoryError, and to pass a reference to that memory to Java in order to work with it.
Since I am not a C developer and have never touched JNI, does all this make sense, and how could it be implemented in the NDK?
Use a direct ByteBuffer, using the allocated memory as its backing buffer. You can allocate a direct byte buffer from JNI (using NewDirectByteBuffer()) and return it.
You'll need to provide a complementary method for disposing of the memory, or otherwise indicating to the native side that it is no longer in use. A rough Kotlin-side sketch is below.
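What the managed side could look like in Kotlin; the native method names here are made up for illustration (openImage() would malloc the pixels and wrap them with NewDirectByteBuffer(), releaseImage() would free() them):

    import java.nio.ByteBuffer
    import java.nio.ByteOrder

    class NativeImage {
        companion object {
            init { System.loadLibrary("hellojni") }
        }

        // Implemented in C: malloc the pixel array, read the image into it,
        // and return NewDirectByteBuffer(pixels, w * h * 4).
        external fun openImage(path: String, w: Int, h: Int): ByteBuffer

        // Complementary disposal method: free() the backing memory.
        external fun releaseImage(buffer: ByteBuffer)
    }

    fun filterImage(path: String, w: Int, h: Int, filter: (IntArray, Int, Int) -> IntArray) {
        val native = NativeImage()
        val buffer = native.openImage(path, w, h).order(ByteOrder.nativeOrder())
        try {
            // The 3rd-party filter still needs an int[], so one Java-heap copy
            // remains; the decoded image itself stays in native memory.
            val pixels = IntArray(w * h)
            buffer.asIntBuffer().get(pixels)
            val result = filter(pixels, w, h) // e.g. the library's ImageFiltering
            buffer.asIntBuffer().put(result)  // write the filtered pixels back
        } finally {
            native.releaseImage(buffer) // tell the native side it can free() now
        }
    }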

Converting a 32bpp image in android (Bitmap instance of type ARGB_8888) to 8bpp, 4bpp and 2bpp

I need to convert 32bpp images in Android, i.e. instances of the Bitmap class with a Bitmap.Config of ARGB_8888.
1. How can I reduce the color depth image to 8bpp and 4bpp?
2. Does android provide any java helper classes to achieve the same?
Use the copy() method of your bitmap. There you can specify the resulting color depth as one of the options available through Bitmap.Config (16 or 8 bpp; I have seen a few other configurations in various fields in Android, but the only ones that seem compatible with Bitmap are the ones in Bitmap.Config). See the sketch below.
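A minimal Kotlin sketch of that approach. Note that Bitmap.Config has no 4bpp or 2bpp option, so for those depths the quantization route in the next answer applies:

    import android.graphics.Bitmap

    fun reduceDepth(src: Bitmap): Pair<Bitmap?, Bitmap?> {
        // copy() re-encodes the pixels with a new config; it may return
        // null if the conversion is unsupported.
        val rgb565 = src.copy(Bitmap.Config.RGB_565, false) // 16bpp
        val alpha8 = src.copy(Bitmap.Config.ALPHA_8, false) // 8bpp, alpha channel only
        return Pair(rgb565, alpha8)
    }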
You need to use an external image-processing library to do that kind of color quantization. I would prefer to use Leptonica. It's written in C, but you can find Android Java bindings in this project.

Is there a way to import a 3D model into Android?

Is it possible to create a simple 3D model (for example in 3DS MAX) and then import it to Android?
That's where I got to:
I've used Google's APIDemos as a starting point - there are rotating cubes in there, each specified by two arrays: vertices and indices.
I've built my model using Blender and exported it as an OFF file: a text file that lists all the vertices and then the faces in terms of those vertices (indexed geometry)
Then I created a simple C++ app that takes that OFF file and writes it out as two XMLs containing arrays (one for vertices and one for indices)
These XML files are then copied to res/values and this way I can assign the data they contain to arrays like this:
int vertices[] = context.getResources().getIntArray(R.array.vertices);
I also need to manually change the number of faces to be drawn in here: gl.glDrawElements(GL10.GL_TRIANGLES, 212*6, GL10.GL_UNSIGNED_SHORT, mIndexBuffer); you can find that number (212 in this case) at the top of the OFF file. A buffer-setup sketch is below.
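For completeness, a rough Kotlin sketch of turning those resource arrays into the direct NIO buffers GLES 1.x wants (gl is the GL10 instance from your renderer; GL_FIXED is assumed for the integer vertex data, as in the APIDemos cubes):

    import java.nio.ByteBuffer
    import java.nio.ByteOrder
    import javax.microedition.khronos.opengles.GL10

    val vertices = context.resources.getIntArray(R.array.vertices)
    val indices = context.resources.getIntArray(R.array.indices)

    // Vertex data as a direct IntBuffer (GL_FIXED is 16.16 fixed point)
    val vertexBuffer = ByteBuffer.allocateDirect(vertices.size * 4)
        .order(ByteOrder.nativeOrder())
        .asIntBuffer()
        .apply { put(vertices); position(0) }

    // Index data as a direct ShortBuffer for GL_UNSIGNED_SHORT
    val indexBuffer = ByteBuffer.allocateDirect(indices.size * 2)
        .order(ByteOrder.nativeOrder())
        .asShortBuffer()
        .apply { indices.forEach { put(it.toShort()) }; position(0) }

    gl.glVertexPointer(3, GL10.GL_FIXED, 0, vertexBuffer)
    gl.glDrawElements(GL10.GL_TRIANGLES, indices.size, GL10.GL_UNSIGNED_SHORT, indexBuffer)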
Here you can find my project page, which uses this solution: Github project > vsiogap3d
You may export it to ASE format.
From ASE, you can convert it to your code manually or programmatically.
You will need the vertex data for the vertices array and the face data for the indices in Android.
don't forget you have to set
gl.glFrontFace(GL10.GL_CCW);
because 3ds Max's default winding is counter-clockwise.
It should be possible. You can ship the file as a data file with your program (it will be pushed onto the emulator and packaged for installation onto an actual device). Then you can write a model loader and viewer in Java using the Android and GLES libraries to display the model.
Specific resources on this are probably limited, though. 3ds is a proprietary format, so 3rd-party loaders are in short supply and mostly reverse-engineered. Other formats (such as Blender or Milkshape) are more open, and you should be able to find details on writing a loader for them in Java fairly easily.
Have you tried min3d for Android? It supports 3ds Max, OBJ and MD2 models.
Not sure about Android specifically, but generally speaking you need a script in 3DS Max that manually writes out the formatting you need from the model.
As to whether one exists for Android or not, I do not know.
You can also convert a 3DS MAX model with the 3D Object Converter:
http://web.t-online.hu/karpo/
This tool can convert a 3ds object to text/XML format or 'C' code.
Please note that the tool is not free. You can try it for a 30-day trial period. 'C' code and XML converters are available.
'c' OpenGL output example:
    glDisable(GL_TEXTURE_2D);
    glEnable(GL_LIGHTING);
    glEnable(GL_NORMALIZE);

    GLfloat Material_1[] = { 0.498039f, 0.498039f, 0.498039f, 1.000000f };

    glBegin(GL_TRIANGLES);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, Material_1);
    glNormal3d(0.452267, 0.000000, 0.891883);
    glVertex3d(5.108326, 1.737655, 2.650969);
    glVertex3d(9.124107, -0.002484, 0.614596);
    glVertex3d(9.124107, 4.039649, 0.614596);
    glEnd();
Or direct 'c' output:
    Point3 Object1_vertex[] = {
        {5.108326, 1.737655, 2.650969},
        {9.124107, -0.002484, 0.614596},
        {9.124107, 4.039649, 0.614596}};

    long Object1_face[] = {
        3,0,1,2,
        3,3,4,5,
        3,6,3,5};
You can then migrate those collections of objects to your Java code.
