TensorFlow Lite on Android buffer size error

I'm trying to build an image classifier Android app. I built my model using Keras.
The model is as follows:
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.models import Sequential

model = Sequential()
model.add(MobileNetV2(include_top=False, weights='imagenet', input_shape=(224, 224, 3)))
model.add(GlobalAveragePooling2D())
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
model.layers[0].trainable = False
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
Output:
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
mobilenetv2_1.00_224 (Functi (None, 7, 7, 1280) 2257984
_________________________________________________________________
global_average_pooling2d_2 ( (None, 1280) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 1280) 0
_________________________________________________________________
dense_1 (Dense) (None, 3) 3843
=================================================================
Total params: 2,261,827
Trainable params: 3,843
Non-trainable params: 2,257,984
After training, I convert the model using:
model = tf.keras.models.load_model('model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("myModel.tflite", "wb").write(tflite_model)
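As a sanity check, the exact input the converted model expects can be printed with the Python interpreter. A quick diagnostic sketch, not part of the original question:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="myModel.tflite")
interpreter.allocate_tensors()
# For this model the input should be shape [1, 224, 224, 3] with dtype
# float32, i.e. 1 * 224 * 224 * 3 * 4 = 602,112 bytes.
print(interpreter.get_input_details())
print(interpreter.get_output_details())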
On the Android side, the code is:
make_prediction.setOnClickListener(View.OnClickListener {
    var resized = Bitmap.createScaledBitmap(bitmap, 224, 224, true)
    val model = MyModel.newInstance(this)
    var tbuffer = TensorImage.fromBitmap(resized)
    var byteBuffer = tbuffer.buffer

    // Creates inputs for reference.
    val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
    inputFeature0.loadBuffer(byteBuffer)

    // Runs model inference and gets result.
    val outputs = model.process(inputFeature0)
    val outputFeature0 = outputs.outputFeature0AsTensorBuffer
    var max = getMax(outputFeature0.floatArray)
    text_view.setText(labels[max])

    // Releases model resources if no longer used.
    model.close()
})
But whenever I try to run my app, it crashes and I get this error in logcat:
java.lang.IllegalArgumentException: The size of byte buffer and the shape do not match.
If I change the input shape from 224 to 300, retrain the model at that shape, and plug it into Android, I get another error:
java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 1080000 bytes and a Java Buffer with 150528 bytes
Any kind of help will be really appreciated.

What is happening: TensorImage.fromBitmap() returns a UINT8 tensor image, so its backing buffer holds 224 * 224 * 3 = 150,528 bytes, while the FLOAT32 TensorBuffer you create expects 224 * 224 * 3 * 4 = 602,112 bytes, hence the size mismatch. Construct the TensorImage with DataType.FLOAT32 instead. Use it like:
make_prediction.setOnClickListener(View.OnClickListener {
    val resized = Bitmap.createScaledBitmap(bitmap, 224, 224, true)
    val model = MyModel.newInstance(this)

    // Load the bitmap into a FLOAT32 TensorImage so the backing buffer
    // uses 4 bytes per channel value, matching the model input.
    val tImage = TensorImage(DataType.FLOAT32)
    tImage.load(resized)
    val byteBuffer = tImage.buffer

    // Creates inputs for reference.
    val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
    inputFeature0.loadBuffer(byteBuffer)

    // Runs model inference and gets result.
    val outputs = model.process(inputFeature0)
    val outputFeature0 = outputs.outputFeature0AsTensorBuffer
    val max = getMax(outputFeature0.floatArray)
    text_view.setText(labels[max])

    // Releases model resources if no longer used.
    model.close()
})
Then check with the debugger whether the problem persists, or whether
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
causes another one. Ping me if you need more help.
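As an alternative, the Support Library's ImageProcessor can handle the resize and the float conversion in one place. A minimal sketch, assuming the org.tensorflow:tensorflow-lite-support dependency; the NormalizeOp parameters are an assumption, since the correct scaling depends on how the model was trained:

// Resize to 224x224 and scale [0, 255] ints to [0, 1] floats.
val imageProcessor = ImageProcessor.Builder()
    .add(ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
    .add(NormalizeOp(0f, 255f)) // assumption: model trained on [0, 1] inputs
    .build()

var tImage = TensorImage(DataType.FLOAT32)
tImage.load(bitmap)
tImage = imageProcessor.process(tImage)

val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
inputFeature0.loadBuffer(tImage.buffer)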

Related

How to run a channel-first tflite model on Android

I am able to run my custom tflite model on Android, but the output is totally wrong. I suspect this is because my model needs input shape [1, 3, 640, 640], while the code builds a channel-last ByteBuffer. I created the tensor buffer like this: TensorBuffer.createFixedSize(intArrayOf(1, 3, 640, 640), DataType.FLOAT32), but I still suspect that inside the for loop the channels are not laid out properly in the flat input (ByteBuffer).
I copied this code from an example where the required model shape was [1, 32, 32, 3] (channel-last), which is the reason for my doubt.
Below is my code:
val model = YoloxPlate.newInstance(applicationContext)
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 3, 640, 640), DataType.FLOAT32)
val input = ByteBuffer.allocateDirect(640 * 640 * 3 * 4).order(ByteOrder.nativeOrder())
for (y in 0 until 640) {
    for (x in 0 until 640) {
        val px = bitmap.getPixel(x, y)
        // Get channel values from the pixel value.
        val r = Color.red(px)
        val g = Color.green(px)
        val b = Color.blue(px)
        // This model takes raw [0, 255] values, so no scaling is applied.
        // Other models may require values normalized to [0.0, 1.0] or
        // [-1.0, 1.0] instead.
        val rf = r / 1f
        val gf = g / 1f
        val bf = b / 1f
        input.putFloat(bf)
        input.putFloat(gf)
        input.putFloat(rf)
    }
}
inputFeature0.loadBuffer(input)
val outputs = model.process(inputFeature0)
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
val flvals = outputFeature0.floatArray
After working it out on a whiteboard and setting the matrix dimensions manually, I figured it out.
The model also expects BGR channel order instead of RGB.
It's working perfectly now; here is the code (the multiple loops still need optimizing):
val model = YoloxPlate.newInstance(applicationContext)
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 3, 640, 640), DataType.FLOAT32)
val input = ByteBuffer.allocateDirect(640 * 640 * 3 * 4).order(ByteOrder.nativeOrder())

// Channel-first (planar) layout: write the full B plane, then G, then R.
for (y in 0 until 640) {
    for (x in 0 until 640) {
        val px = bitmap.getPixel(x, y)
        input.putFloat(Color.blue(px) / 1f)
    }
}
for (y in 0 until 640) {
    for (x in 0 until 640) {
        val px = bitmap.getPixel(x, y)
        input.putFloat(Color.green(px) / 1f)
    }
}
for (y in 0 until 640) {
    for (x in 0 until 640) {
        val px = bitmap.getPixel(x, y)
        input.putFloat(Color.red(px) / 1f)
    }
}
inputFeature0.loadBuffer(input)
val outputs = model.process(inputFeature0)
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
val flvals = outputFeature0.floatArray
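The three passes over the bitmap can be collapsed into one by reading all pixels with getPixels() and writing each channel plane at its own offset through a FloatBuffer view. An optimization sketch of the accepted code, not something the author posted:

// Single pass: absolute writes into the B, G and R planes.
val size = 640
val plane = size * size
val pixels = IntArray(plane)
bitmap.getPixels(pixels, 0, size, 0, 0, size, size)

val input = ByteBuffer.allocateDirect(plane * 3 * 4).order(ByteOrder.nativeOrder())
val floats = input.asFloatBuffer()
for (i in 0 until plane) {
    val px = pixels[i]
    floats.put(i, Color.blue(px).toFloat())
    floats.put(plane + i, Color.green(px).toFloat())
    floats.put(2 * plane + i, Color.red(px).toFloat())
}
inputFeature0.loadBuffer(input)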

Error: The size of byte buffer and the shape do not match

I am making my first ML-integrated Android app and am trying to add this model to it, but I am facing this error:
var resized = Bitmap.createScaledBitmap(bitmap, 416, 416, true)
val model = Yolov4416Fp16.newInstance(this)
var tbuffer = TensorImage.fromBitmap(resized)
var byteBuffer = tbuffer.buffer
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 416, 416, 3), DataType.FLOAT32)
inputFeature0.loadBuffer(byteBuffer)
val outputs = model.process(inputFeature0)
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
var max = getMax(outputFeature0.floatArray)
text_view.text = labels[max]
model.close()
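This looks like the same mismatch as in the question above: TensorImage.fromBitmap() produces a UINT8 buffer (416 * 416 * 3 = 519,168 bytes), while the FLOAT32 TensorBuffer expects four bytes per value. A sketch of the same fix, under the assumption that the YOLOv4 model really takes FLOAT32 input:

// Load the bitmap as FLOAT32 so the buffer sizes line up.
val tImage = TensorImage(DataType.FLOAT32)
tImage.load(resized)
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 416, 416, 3), DataType.FLOAT32)
inputFeature0.loadBuffer(tImage.buffer)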

Android tflite convert output location to bounding box on screen

I'm trying to convert my tflite output locations to positions the user can see on screen.
I don't know what I'm doing wrong, but the model is supposed to give me locations (left, right, top, bottom) between 0 and 1; instead it gives output between 0 and 320 (my model's input size).
This is my image analyzer class (it's live detection and I use CameraX):
// Initializing the jaw detection model
private val jawDetectionModel = JawDetection.newInstance(context)
val imageProcessor = ImageProcessor.Builder()
.add(ResizeWithCropOrPadOp(320, 320))
.add(ResizeOp(320, 320, ResizeOp.ResizeMethod.NEAREST_NEIGHBOR))
.add(NormalizeOp(0f, 1f))
.build()
// Convert Image to Bitmap then to TensorImage
val tfImage = TensorImage.fromBitmap(imageProxy.convertImageProxyToBitmap(context))
// Process the image using the trained model, sort and pick out the top results
val outputs = jawDetectionModel.process(imageProcessor.process(tfImage))
.detectionResultList.take(MaxResultDisplay)
val left = outputs.first().locationAsRectF.left
val top = outputs.first().locationAsRectF.top
val right = outputs.first().locationAsRectF.right
val bottom = outputs.first().locationAsRectF.bottom
and this converts the image proxy to a bitmap:
@SuppressLint("UnsafeExperimentalUsageError")
fun ImageProxy.convertImageProxyToBitmap(context: Context): Bitmap? {
    val yuvToRgbConverter = YuvToRgbConverter(context)
    val image = this.image ?: return null

    // Initialise buffer
    if (!::bitmapBuffer.isInitialized) {
        rotationMatrix = Matrix()
        rotationMatrix.postRotate(this.imageInfo.rotationDegrees.toFloat())
        bitmapBuffer = Bitmap.createBitmap(
            this.width, this.height, Bitmap.Config.ARGB_8888
        )
    }

    // Convert the YUV image to RGB
    yuvToRgbConverter.yuvToRgb(image, bitmapBuffer)

    // Create the Bitmap in the correct orientation
    return Bitmap.createBitmap(
        bitmapBuffer,
        0,
        0,
        bitmapBuffer.width,
        bitmapBuffer.height,
        rotationMatrix,
        false
    )
}
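Since the model reports coordinates in its 320x320 input space rather than in [0, 1], one way to draw the box is to scale each edge by viewSize / 320. A minimal sketch, ignoring the offset introduced by ResizeWithCropOrPadOp; viewWidth, viewHeight, previewWidth and previewHeight are placeholders for the size of the view the box is drawn on:

// Map a box from the model's 320x320 space into view coordinates.
fun toViewRect(box: RectF, viewWidth: Float, viewHeight: Float): RectF {
    val scaleX = viewWidth / 320f
    val scaleY = viewHeight / 320f
    return RectF(box.left * scaleX, box.top * scaleY, box.right * scaleX, box.bottom * scaleY)
}

val viewRect = toViewRect(outputs.first().locationAsRectF, previewWidth, previewHeight)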

How to get output tensor from a specific layer?

I would like to figure out whether it is possible to get the output of a specific layer using TensorFlow Lite in an Android environment.
At the moment I know that interpreter.run() returns the "standard" output, but this is not what I'm looking for.
Thanks for any advice.
@Simo, I will write here a workaround for this problem. How about saving just the part of the model you want to a .tflite file? Let me explain. Instead of doing the following and saving the whole model:
# WHOLE MODEL
tflite_model = tf.keras.models.load_model('face_recog.weights.best.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(tflite_model)
tflite_save = converter.convert()
open("face_recog.tflite", "wb").write(tflite_save)
You can print the layers of your keras model:
print([layer.name for layer in keras_model.layers])
Output:
['anchor', 'positive', 'negative', 'model', 'lambda']
print([layer.name for layer in keras_model.get_layer('model').layers])
Output:
['input_1', 'Conv1_pad', 'Conv1', 'bn_Conv1', 'Conv1_relu', 'expanded_conv_depthwise', 'expanded_conv_depthwise_BN', 'expanded_conv_depthwise_relu', 'expanded_conv_project', 'expanded_conv_project_BN', 'block_1_expand', 'block_1_expand_BN', 'block_1_expand_relu', 'block_1_pad', 'block_1_depthwise', 'block_1_depthwise_BN', 'block_1_depthwise_relu', 'block_1_project', 'block_1_project_BN', 'block_2_expand', 'block_2_expand_BN', 'block_2_expand_relu', 'block_2_depthwise', 'block_2_depthwise_BN', 'block_2_depthwise_relu', 'block_2_project', 'block_2_project_BN', 'block_2_add', 'block_3_expand', 'block_3_expand_BN', 'block_3_expand_relu', 'block_3_pad', 'block_3_depthwise', 'block_3_depthwise_BN', 'block_3_depthwise_relu', 'block_3_project', 'block_3_project_BN', 'block_4_expand', 'block_4_expand_BN', 'block_4_expand_relu', 'block_4_depthwise', 'block_4_depthwise_BN', 'block_4_depthwise_relu', 'block_4_project', 'block_4_project_BN', 'block_4_add', 'block_5_expand', 'block_5_expand_BN', 'block_5_expand_relu', 'block_5_depthwise', 'block_5_depthwise_BN', 'block_5_depthwise_relu', 'block_5_project', 'block_5_project_BN', 'block_5_add', 'block_6_expand', 'block_6_expand_BN', 'block_6_expand_relu', 'block_6_pad', 'block_6_depthwise', 'block_6_depthwise_BN', 'block_6_depthwise_relu', 'block_6_project', 'block_6_project_BN', 'block_7_expand', 'block_7_expand_BN', 'block_7_expand_relu', 'block_7_depthwise', 'block_7_depthwise_BN', 'block_7_depthwise_relu', 'block_7_project', 'block_7_project_BN', 'block_7_add', 'block_8_expand', 'block_8_expand_BN', 'block_8_expand_relu', 'block_8_depthwise', 'block_8_depthwise_BN', 'block_8_depthwise_relu', 'block_8_project', 'block_8_project_BN', 'block_8_add', 'block_9_expand', 'block_9_expand_BN', 'block_9_expand_relu', 'block_9_depthwise', 'block_9_depthwise_BN', 'block_9_depthwise_relu', 'block_9_project', 'block_9_project_BN', 'block_9_add', 'block_10_expand', 'block_10_expand_BN', 'block_10_expand_relu', 'block_10_depthwise', 'block_10_depthwise_BN', 'block_10_depthwise_relu', 'block_10_project', 'block_10_project_BN', 'block_11_expand', 'block_11_expand_BN', 'block_11_expand_relu', 'block_11_depthwise', 'block_11_depthwise_BN', 'block_11_depthwise_relu', 'block_11_project', 'block_11_project_BN', 'block_11_add', 'block_12_expand', 'block_12_expand_BN', 'block_12_expand_relu', 'block_12_depthwise', 'block_12_depthwise_BN', 'block_12_depthwise_relu', 'block_12_project', 'block_12_project_BN', 'block_12_add', 'block_13_expand', 'block_13_expand_BN', 'block_13_expand_relu', 'block_13_pad', 'block_13_depthwise', 'block_13_depthwise_BN', 'block_13_depthwise_relu', 'block_13_project', 'block_13_project_BN', 'block_14_expand', 'block_14_expand_BN', 'block_14_expand_relu', 'block_14_depthwise', 'block_14_depthwise_BN', 'block_14_depthwise_relu', 'block_14_project', 'block_14_project_BN', 'block_14_add', 'block_15_expand', 'block_15_expand_BN', 'block_15_expand_relu', 'block_15_depthwise', 'block_15_depthwise_BN', 'block_15_depthwise_relu', 'block_15_project', 'block_15_project_BN', 'block_15_add', 'block_16_expand', 'block_16_expand_BN', 'block_16_expand_relu', 'block_16_depthwise', 'block_16_depthwise_BN', 'block_16_depthwise_relu', 'block_16_project', 'block_16_project_BN', 'Conv_1', 'Conv_1_bn', 'out_relu', 'global_average_pooling2d', 'predictions', 'dense', 'dense_1']
And then you can take whatever layer you want from your model and save it to .tflite:
# PART OF MODEL
tflite_model = tf.keras.models.load_model('face_recog.weights.best.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(tflite_model.get_layer('model'))
tflite_save = converter.convert()
open("face_recog.tflite", "wb").write(tflite_save)
So using the above code, the .tflite file will have input tensor "input_1" and output tensor "dense_1".
Then, inside Android, you have to feed inputs for the specific layer "model", and you will get outputs of the corresponding shape, as when you print the output details in Python:
interpreter = tf.lite.Interpreter('face_recog.tflite')
print(interpreter.get_output_details())
interpreter.get_tensor_details()
Android part:
// Initialize interpreter
@Throws(IOException::class)
private suspend fun initializeInterpreter(app: Application) = withContext(Dispatchers.IO) {
    // Load the TF Lite model from the assets folder and initialize the
    // TF Lite interpreter without NNAPI enabled.
    val assetManager = app.assets
    val model = loadModelFile(assetManager, "face_recog_model_layer.tflite")
    val options = Interpreter.Options()
    options.setUseNNAPI(false)
    interpreter = Interpreter(model, options)

    // Reads type and shape of input and output tensors, respectively.
    val imageTensorIndex = 0
    val imageShape: IntArray = interpreter.getInputTensor(imageTensorIndex).shape()
    Log.i("INPUT_TENSOR_WHOLE", Arrays.toString(imageShape))
    val imageDataType: DataType = interpreter.getInputTensor(imageTensorIndex).dataType()
    Log.i("INPUT_DATA_TYPE", imageDataType.toString())

    val probabilityTensorIndex = 0
    val probabilityShape: IntArray = interpreter.getOutputTensor(probabilityTensorIndex).shape()
    Log.i("OUTPUT_TENSOR_SHAPE", Arrays.toString(probabilityShape))
    val probabilityDataType: DataType = interpreter.getOutputTensor(probabilityTensorIndex).dataType()
    Log.i("OUTPUT_DATA_TYPE", probabilityDataType.toString())

    Log.i(TAG, "Initialized TFLite interpreter.")
}

@Throws(IOException::class)
private fun loadModelFile(assetManager: AssetManager, filename: String): MappedByteBuffer {
    val fileDescriptor = assetManager.openFd(filename)
    val inputStream = FileInputStream(fileDescriptor.fileDescriptor)
    val fileChannel = inputStream.channel
    val startOffset = fileDescriptor.startOffset
    val declaredLength = fileDescriptor.declaredLength
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength)
}
I hope this will help someone. And of course if you need anything else tag me :)

How to declare a small image for test in openCV Android?

The documentation is minimal or nonexistent. I've read OpenCV's official docs and "declare Mat in OpenCV java" from Stack Overflow.
I've done some research on an image-processing algorithm in a Python Jupyter notebook, and now I want to verify it on Android.
For the Python part:
import cv2
from numpy import asarray

dimg = cv2.imread('story/IMG_0371.PNG')
dimg_small = dimg[571:572, 401:402]
dimg_small_gamma = adjust_gamma(dimg_small)
data = asarray(dimg_small)
data_gamma = asarray(dimg_small_gamma)
# print data
array([[[52, 45, 44]]], dtype=uint8)
# print data_gamma
array([[[115, 107, 105]]], dtype=uint8)
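adjust_gamma is not defined in the question; a common LUT-based implementation is sketched below. The gamma value is an assumption on my part, but gamma = 2.0 happens to reproduce the printed values, e.g. int((52 / 255) ** 0.5 * 255) = 115:

import cv2
import numpy as np

def adjust_gamma(image, gamma=2.0):
    # Map each value in [0, 255] through an inverse-gamma lookup table.
    inv_gamma = 1.0 / gamma
    table = np.array(
        [((i / 255.0) ** inv_gamma) * 255 for i in range(256)]
    ).astype("uint8")
    return cv2.LUT(image, table)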
The image of dimg_small looks like this: (image)
Meanwhile, the image of dimg_small_gamma looks like this: (image)
So far, I want to create images with the same data, [52, 45, 44] and [115, 107, 105], in Android to verify my algorithm; either Java or Kotlin would be fine.
private fun genTestMat(): Mat { // upper one
    val img = Mat(3, 1, CvType.CV_8UC3)
    img.put(3, 1, 52.0, 45.0, 44.0)
    return img
}

private fun genTestMat2(): Mat { // under one
    val img = Mat(3, 1, CvType.CV_8UC3)
    img.put(3, 1, 115.0, 107.0, 105.0)
    return img
}
private fun opencvMatToBitmap(mat: Mat): Bitmap {
    val bitmap: Bitmap = Bitmap.createBitmap(mat.width(), mat.height(), Bitmap.Config.ARGB_8888)
    Utils.matToBitmap(mat, bitmap)
    return bitmap
}
...
val testImg = opencvMatToBitmap(genTestMat())
val testImg2 = opencvMatToBitmap(genTestMat2())
image_under.setImageBitmap(testImg2)
image_upper.setImageBitmap(testImg)
This is the image shown on my Android device: (image)
I was wondering how to make Android show the same image as in the Python Jupyter notebook?
Updated:
I tried Mat.put using an array as follows:
private fun genTestMat(): Mat { // upper one
    val img = Mat(3, 1, CvType.CV_8UC3)
    val data = floatArrayOf(52f, 45f, 44f)
    img.put(1, 1, data)
    return img
}
Unfortunately it crashed:
Caused by: java.lang.UnsupportedOperationException: Mat data type is not compatible: 16
at org.opencv.core.Mat.put(Mat.java:801)
at com.example.cap2hist.MainActivity.genTestMat(MainActivity.kt:131)
After chatting with folks on the OpenCV forum, I got the answer:
private fun genTestMat(): Mat {
    val inputMat = Mat(1, 1, CvType.CV_8UC3)
    val rgbMat = Mat()
    Imgproc.cvtColor(inputMat, rgbMat, Imgproc.COLOR_BGR2RGBA, 3)
    rgbMat.setTo(Scalar(52.0, 45.0, 44.0))
    return rgbMat
}
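A small refactor sketch of the accepted helper, taking the channel values as parameters so both test pixels can be produced without duplicating the function (the cvtColor step is kept exactly as in the answer):

private fun genTestMat(v0: Double, v1: Double, v2: Double): Mat {
    val inputMat = Mat(1, 1, CvType.CV_8UC3)
    val rgbMat = Mat()
    Imgproc.cvtColor(inputMat, rgbMat, Imgproc.COLOR_BGR2RGBA, 3)
    rgbMat.setTo(Scalar(v0, v1, v2))
    return rgbMat
}

val upperImg = opencvMatToBitmap(genTestMat(52.0, 45.0, 44.0))
val underImg = opencvMatToBitmap(genTestMat(115.0, 107.0, 105.0))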
