TensorFlow Lite model produces wrong (different) results in Android app?

I've made an image classification model and converted it to tflite format.
Then I verified the tflite model in Python using tf.lite.Interpreter: it produces the same results for my test image as the original model. Here's a Colab link to verify.
Then I embedded it in a sample Android app, using Android Studio ML Model Binding and the exact example code from Android Studio.
Here's the main activity code; you can also use this link to navigate to the full Android project.
val assetManager = this.assets
val istr = assetManager.open("test_image.JPG") // the same image used in the Python test
val b = BitmapFactory.decodeStream(istr)

val model = Model2.newInstance(this) // model wrapper generated by Android Studio

// Creates the input for inference.
val image = TensorImage.fromBitmap(b)

// Runs model inference and gets the result.
val outputs = model.process(image)
val probability = outputs.probabilityAsCategoryList

// Takes the top 9 categories by score and shows them.
val top9 = probability.sortedByDescending { it.score }.take(9)
this.findViewById<TextView>(R.id.results_text).text = top9.toString()
And then I'm getting completely different results on Android for the same model and the same input image.
Here are the results matching my initial model in Python:
Here are the wrong results I'm getting in the Android app:
Links to the model and the test image are in both examples, but I'll post them in the question once again:
tflite model
test image
I guess it has something to do with the input/output formats of the model. Or the image is interpreted differently in Python and in Android. Or the metadata I added to the model is somehow wrong. Anyway, I've tried everything to localize the issue and now I'm stuck.
How do I fix my model or Android code so it produces the same results as my python code?

I've managed to find and fix the issue:
My model from this tutorial included a built-in image normalization layer. Image normalization is the transformation of standard 0-255 image color values into 0.0-1.0 float values suitable for machine learning.
But the metadata I used for the tflite model included 2 parameters for external normalization: mean and std.
The formula for each value is: normalized_value = (value - mean) / std
Since my model handles its own normalization, I need to turn off external normalization by setting mean = 0 and std = 1.
This way I get normalized_value = value, i.e. the pixel values pass through the external step unchanged.
So, setting the tflite metadata parameters to these:
image_min=0,
image_max=255.0,
mean=[0.0],
std=[1.0]
fixed the double normalization issue, and my model now produces correct results in the Android app.

Related

My custom tflite image classifier performs well in Python, but when I launch it in Android Studio it doesn't work at all

We were doing a course project and faced this issue. The idea is simple: three categories (cats, dogs, and cars), and the app should classify a photo into one of these. There are lots of tutorials on how to deploy an already trained net like ImageNet, but we needed a custom one. I used TensorFlow Keras, saved the model as h5, and then converted it to tflite with metadata, in which I specified resize and normalize options. Unfortunately, it didn't work at all.
The official TensorFlow documentation says you need to preprocess the image like this:
import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;

// Initialization code
// Create an ImageProcessor with all ops required. For more ops, please
// refer to the ImageProcessor Architecture section in this README.
ImageProcessor imageProcessor =
    new ImageProcessor.Builder()
        .add(new ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
        .build();

// Create a TensorImage object. This creates the tensor of the corresponding
// tensor type (uint8 in this case) that the TensorFlow Lite interpreter needs.
TensorImage tensorImage = new TensorImage(DataType.UINT8);

// Analysis code for every frame
// Preprocess the image
tensorImage.load(bitmap);
tensorImage = imageProcessor.process(tensorImage);
We tried many times, and as I understand it this manual step isn't needed if you specified metadata, because without it the app works a little bit better.
I guess the problem is that the preprocessing happens automatically, but I haven't found anywhere which interpolation method it uses or how I can override that interpolation method. In Keras I use ImageDataGenerator and flow_from_directory to load data, and it uses nearest neighbour by default. I have no opportunity to train the model again, so it would be better if you could tell me how to override the interpolation in Android Studio.
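For reference, ResizeOp does take an interpolation method as its third argument, so if the preprocessing could be applied manually rather than via metadata, I'd expect a nearest-neighbour version to look something like this untested sketch:

import org.tensorflow.lite.support.image.ImageProcessor
import org.tensorflow.lite.support.image.ops.ResizeOp

// Untested sketch: pin the resize interpolation to nearest neighbour to match
// Keras's flow_from_directory default, instead of relying on the metadata defaults
val nearestProcessor = ImageProcessor.Builder()
    .add(ResizeOp(224, 224, ResizeOp.ResizeMethod.NEAREST_NEIGHBOR))
    .build()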
Thank you in advance

Best way to compare images for similarity in Android

How do I compare two images to know whether they are 100% similar?
I was getting the paths of all images from MediaStore, then converting them to bitmaps and comparing them using bitmap.sameAs(bitmapToCompare), but it takes too much memory and I got an OutOfMemory exception.
Now I am trying to use the OpenCV library:
val img1: Mat = Imgcodecs.imread(it)
val img2: Mat = Imgcodecs.imread(it1)
val result = Mat()
Core.compare(img1, img2, result, Core.CMP_EQ)
val ines = Core.countNonZero(result)
if (ines == 0) {
    // similar images
}
but I get the following error in Core.countNonZero:
cv::Exception: OpenCV(4.5.3) /home/quickbirdstudios/opencv/releases/opencv-4.5.3/modules/core/src/count_non_zero.dispatch.cpp:128: error: (-215:Assertion failed) cn == 1 in function 'countNonZero'
So what is the best way to compare two images?
First off, let's correct you. Neither your OpenCV snippet nor Android can directly compare whether two images are "similar". They can compare whether they are exactly the same. That's not the same thing; you'd have to decide if that's good enough.
Secondly, OpenCV is overkill for this. If the two images are Bitmaps in memory, just loop over their pixel data. If they're on disk, just compare the two files byte by byte.
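For example, a byte-by-byte file comparison could be as simple as this (my sketch, helper name made up):

import java.io.File

// Sketch: compare two files byte by byte without ever decoding them as bitmaps
fun filesIdentical(a: File, b: File): Boolean {
    if (a.length() != b.length()) return false
    a.inputStream().buffered().use { s1 ->
        b.inputStream().buffered().use { s2 ->
            var byte1 = s1.read()
            while (byte1 != -1) {
                if (byte1 != s2.read()) return false
                byte1 = s1.read()
            }
        }
    }
    return true // same length and every byte matched
}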
You said you "got the paths of all images, then converted to Bitmap". Yeah, that would take a ton of memory. Instead, if you want to compare all the files, do this:
val map = mutableMapOf<String, String>()
fileNames.forEach {
    val hash = hash_file(it)
    if (map.containsKey(hash)) {
        // In this case, the two files stored in it and map[hash] are the same
    } else {
        map[hash] = it
    }
}
Here hash_file is any well-known hash function; MD5 would work fine.
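A minimal hash_file sketch with java.security.MessageDigest (assuming the file names are full paths):

import java.io.File
import java.security.MessageDigest

// Sketch: MD5 digest of a file's contents, hex-encoded
fun hash_file(path: String): String {
    val md = MessageDigest.getInstance("MD5")
    File(path).inputStream().use { input ->
        val buffer = ByteArray(8192)
        var read = input.read(buffer)
        while (read != -1) {
            md.update(buffer, 0, read)
            read = input.read(buffer)
        }
    }
    return md.digest().joinToString("") { "%02x".format(it) }
}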
Now if you actually want similarity: good luck, you're going to need to learn a lot of AI and machine learning to determine that. Or find someone who already has a model for an appropriate training set.

Running the tflite sample segmentation app with a different model

I am trying to run the sample image segmentation app from TensorFlow with a different model.
I would like to run it with the ShuffleNetV2 model with DPC.
So I copied the model and changed imageSize to 225 in ImageSegmentationModelExecutor.kt.
Then I am getting the error
something went wrong: y + height must be <= bitmap.height()
Doing some small adjustments in the function scaleBitmapAndKeepRatio of ImageUtils.kt solves the problem. (I just changed targetBmp.width to targetBmp.height twice: once in the matrix and a second time in the return.)
This brings up the next error:
something went wrong: Cannot convert between a TensorFlowLite buffer with 202500 bytes and a Java Buffer with 4252500 bytes.
The ratio of these two numbers (4252500 / 202500 = 21) is exactly NUM_CLASSES. I'm not sure if this is the right way to get it running or how to continue from here.
Any ideas or suggestions?
You seem to have two unrelated problems.
1) The method scaleBitmapAndKeepRatio seems buggy.
By replacing width with height you're not solving the problem, only changing the moment when it will happen. See the reference for createBitmap:
if the x, y, width, height values are outside of the dimensions of the source bitmap, or width is <= 0, or height is <= 0, or if the source bitmap has already been recycled
So, to get a square adjusted bitmap, you'd be better off taking the smallest dimension, like this
val dimension = min(targetBmp.height, targetBmp.width)
and replacing width with dimension, roughly as sketched below.
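Putting both together, a hypothetical replacement (my sketch, not the sample's actual code) could be:

import android.graphics.Bitmap
import kotlin.math.min

// Sketch: crop the bitmap to a centered square whose side is the smaller
// dimension, so createBitmap never reads outside the source bounds
fun centerSquare(targetBmp: Bitmap): Bitmap {
    val dimension = min(targetBmp.height, targetBmp.width)
    val x = (targetBmp.width - dimension) / 2
    val y = (targetBmp.height - dimension) / 2
    return Bitmap.createBitmap(targetBmp, x, y, dimension, dimension)
}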
2) I believe the input node of your tflite model is not compatible with the tflite segmentation example.
Have a look at the default model Google provides, using Netron.
You can see that the default model's input node is a quantized float32. It seems likely that you've converted your model to tflite using the default instructions in quantize.md. This gets you a model expecting a quantized uint8 input node, so the datatypes mismatch. Remember that the tflite examples in the repository are tailor-made for a specific input and not very generic.
You'd rather do the conversion shown below to get a quantized float32 input node:
import tensorflow as tf

# Load the TensorFlow model
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file=MODEL_FILE,
    input_arrays=['sub_2'],              # For the Xception model it needs to be `sub_7`, for MobileNet it would be `sub_2`
    output_arrays=['ResizeBilinear_2'],  # For the Xception model it needs to be `ResizeBilinear_3`, for MobileNet it would be `ResizeBilinear_2`
    input_shapes={'sub_2': [1, 257, 257, 3]}
)

# Convert and write out the .tflite flatbuffer
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

Securing a tensorflow-lite model

I'm developing an Android app that will hold a tensorflow-lite model for offline inference.
I know that it is impossible to completely prevent someone from stealing my model, but I would like to make it hard for anyone trying.
I thought to keep my .tflite model inside the .apk but without the weights of the last layer. Then, at execution time, I could download the weights of the last layer and load them in memory.
So, if someone tries to steal my model, they would get a useless model that can't be used because of the missing last-layer weights.
Is it possible to generate a tflite model without the weights of the last layer?
Is it possible to load those weights into an already loaded model in memory?
This is how I load my .tflite model:
tflite = new Interpreter(loadModelFile(), tfliteOptions);

// Loads the tflite graph from a memory-mapped file in assets
private MappedByteBuffer loadModelFile() throws IOException {
    AssetFileDescriptor fileDescriptor = mAssetManager.openFd(chosen);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}
Are there other approaches to make my model safer? I really need to run inference locally.
If we are talking about Keras models (or any other model in TF), we can easily remove the last layer and then convert the rest to a TF Lite model with tf.lite.TFLiteConverter. That should not be a problem.
Now, in Python, get the last layer's weights and write them to a JSON file. This JSON file could be hosted in the cloud (like Firebase Cloud Storage) and downloaded by the app.
The weights can be parsed into an array. The activations from the truncated TF Lite model are then dot-multiplied with the weights parsed from the JSON. Lastly, we apply an activation function to produce the predictions we actually need.
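On the Android side, that recombination step might look roughly like this (my sketch with assumed shapes: the truncated model emits a flat feature vector, and the JSON holds a weight matrix plus a bias vector):

import kotlin.math.exp

// Sketch: apply downloaded last-layer weights to the feature vector produced
// by the truncated TFLite model. weights[i][j] connects feature i to class j.
fun lastLayer(features: FloatArray, weights: Array<FloatArray>, bias: FloatArray): FloatArray {
    val logits = FloatArray(bias.size) { j ->
        var sum = bias[j]
        for (i in features.indices) sum += features[i] * weights[i][j]
        sum
    }
    // Softmax activation turns the logits into class probabilities
    val max = logits.maxOrNull() ?: 0f
    val exps = logits.map { exp((it - max).toDouble()) }
    val total = exps.sum()
    return FloatArray(logits.size) { i -> (exps[i] / total).toFloat() }
}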
The model is trained so specifically that it could rarely be reused for any other use case, so I think we do not need to worry about that.
Also, it would be better to use a cloud hosting platform that serves the model through requests and an API instead of directly shipping the raw model.

TFLite Conversion changing model weights

I have a custom-built TensorFlow graph implementing MobileNetV2-SSDLite, which I implemented myself. It is working fine on the PC.
However, when I convert the model to TFLite (all float, no quantization), the model weights are changed drastically.
To give an example, a filter which was initially:
0.13172674179077148,
2.3185202252437188e-32,
-0.003990101162344217
becomes:
4.165565013885498,
-2.3981268405914307,
-1.1919032335281372
The large weight values are completely throwing off my on-device inferences. Need help! :(
What command are you using to convert to tflite? For instance, are you using toco, and if so, what parameters are you using? While I haven't been looking at the filters, here are my default instructions for fine-tuning MobileNetV2-SSD and SSDLite graphs, and the model has been performing well.
