How to get output tensor from a specific layer? - android
I would like to figure out whether it is possible to get the output of a specific layer using TensorFlow Lite in an Android environment.
At the moment I know that calling interpreter.run() gives us the "standard" (final) output, but this is not what I'm looking for.
Thanks for any advice.
@Simo I will write here a workaround for this problem: how about saving just the part of the model you want to a .tflite file? Let me explain. Instead of doing the following, which saves the whole model:
# WHOLE MODEL
keras_model = tf.keras.models.load_model('face_recog.weights.best.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
tflite_save = converter.convert()
open("face_recog.tflite", "wb").write(tflite_save)
You can print the layers of your keras model:
print([layer.name for layer in keras_model.layers])
Output:
['anchor', 'positive', 'negative', 'model', 'lambda']
print([layer.name for layer in keras_model.get_layer('model').layers])
Output:
['input_1', 'Conv1_pad', 'Conv1', 'bn_Conv1', 'Conv1_relu', 'expanded_conv_depthwise', 'expanded_conv_depthwise_BN', 'expanded_conv_depthwise_relu', 'expanded_conv_project', 'expanded_conv_project_BN', 'block_1_expand', 'block_1_expand_BN', 'block_1_expand_relu', 'block_1_pad', 'block_1_depthwise', 'block_1_depthwise_BN', 'block_1_depthwise_relu', 'block_1_project', 'block_1_project_BN', 'block_2_expand', 'block_2_expand_BN', 'block_2_expand_relu', 'block_2_depthwise', 'block_2_depthwise_BN', 'block_2_depthwise_relu', 'block_2_project', 'block_2_project_BN', 'block_2_add', 'block_3_expand', 'block_3_expand_BN', 'block_3_expand_relu', 'block_3_pad', 'block_3_depthwise', 'block_3_depthwise_BN', 'block_3_depthwise_relu', 'block_3_project', 'block_3_project_BN', 'block_4_expand', 'block_4_expand_BN', 'block_4_expand_relu', 'block_4_depthwise', 'block_4_depthwise_BN', 'block_4_depthwise_relu', 'block_4_project', 'block_4_project_BN', 'block_4_add', 'block_5_expand', 'block_5_expand_BN', 'block_5_expand_relu', 'block_5_depthwise', 'block_5_depthwise_BN', 'block_5_depthwise_relu', 'block_5_project', 'block_5_project_BN', 'block_5_add', 'block_6_expand', 'block_6_expand_BN', 'block_6_expand_relu', 'block_6_pad', 'block_6_depthwise', 'block_6_depthwise_BN', 'block_6_depthwise_relu', 'block_6_project', 'block_6_project_BN', 'block_7_expand', 'block_7_expand_BN', 'block_7_expand_relu', 'block_7_depthwise', 'block_7_depthwise_BN', 'block_7_depthwise_relu', 'block_7_project', 'block_7_project_BN', 'block_7_add', 'block_8_expand', 'block_8_expand_BN', 'block_8_expand_relu', 'block_8_depthwise', 'block_8_depthwise_BN', 'block_8_depthwise_relu', 'block_8_project', 'block_8_project_BN', 'block_8_add', 'block_9_expand', 'block_9_expand_BN', 'block_9_expand_relu', 'block_9_depthwise', 'block_9_depthwise_BN', 'block_9_depthwise_relu', 'block_9_project', 'block_9_project_BN', 'block_9_add', 'block_10_expand', 'block_10_expand_BN', 'block_10_expand_relu', 'block_10_depthwise', 'block_10_depthwise_BN', 'block_10_depthwise_relu', 'block_10_project', 'block_10_project_BN', 'block_11_expand', 'block_11_expand_BN', 'block_11_expand_relu', 'block_11_depthwise', 'block_11_depthwise_BN', 'block_11_depthwise_relu', 'block_11_project', 'block_11_project_BN', 'block_11_add', 'block_12_expand', 'block_12_expand_BN', 'block_12_expand_relu', 'block_12_depthwise', 'block_12_depthwise_BN', 'block_12_depthwise_relu', 'block_12_project', 'block_12_project_BN', 'block_12_add', 'block_13_expand', 'block_13_expand_BN', 'block_13_expand_relu', 'block_13_pad', 'block_13_depthwise', 'block_13_depthwise_BN', 'block_13_depthwise_relu', 'block_13_project', 'block_13_project_BN', 'block_14_expand', 'block_14_expand_BN', 'block_14_expand_relu', 'block_14_depthwise', 'block_14_depthwise_BN', 'block_14_depthwise_relu', 'block_14_project', 'block_14_project_BN', 'block_14_add', 'block_15_expand', 'block_15_expand_BN', 'block_15_expand_relu', 'block_15_depthwise', 'block_15_depthwise_BN', 'block_15_depthwise_relu', 'block_15_project', 'block_15_project_BN', 'block_15_add', 'block_16_expand', 'block_16_expand_BN', 'block_16_expand_relu', 'block_16_depthwise', 'block_16_depthwise_BN', 'block_16_depthwise_relu', 'block_16_project', 'block_16_project_BN', 'Conv_1', 'Conv_1_bn', 'out_relu', 'global_average_pooling2d', 'predictions', 'dense', 'dense_1']
Then you can take whichever layer you want from your model (here the nested 'model' layer, which is itself a Keras model) and save it on its own to .tflite:
# PART OF MODEL
keras_model = tf.keras.models.load_model('face_recog.weights.best.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model.get_layer('model'))
tflite_save = converter.convert()
open("face_recog.tflite", "wb").write(tflite_save)
Using the above code, the resulting .tflite file will have input tensor "input_1" and output tensor "dense_1".
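If the layer you are interested in is not itself a nested model, a similar trick should work by building a truncated Keras model that ends at that layer and converting it. The snippet below is only a sketch of that idea and is not part of the workflow above; the layer name 'block_5_add' is just an example taken from the printed layer list, and the output file name is hypothetical:

# Sketch: convert a sub-model that ends at an arbitrary intermediate layer.
# 'block_5_add' is only an example layer name from the list printed above.
import tensorflow as tf

keras_model = tf.keras.models.load_model('face_recog.weights.best.hdf5')
backbone = keras_model.get_layer('model')

# Build a truncated model whose output is the intermediate layer's output.
truncated = tf.keras.Model(
    inputs=backbone.input,
    outputs=backbone.get_layer('block_5_add').output)

converter = tf.lite.TFLiteConverter.from_keras_model(truncated)
open("face_recog_block_5_add.tflite", "wb").write(converter.convert())

The resulting .tflite then has that intermediate tensor as its only output, so interpreter.run() on Android returns exactly that layer's activations.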
Then, inside Android, you have to feed inputs matching this specific sub-model ("model"), and you will get outputs with the same shape you see when you print the output details in Python:
interpreter = tf.lite.Interpreter('face_recog.tflite')
print(interpreter.get_output_details())
print(interpreter.get_tensor_details())
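As a sanity check before moving to Android, it may help to run the converted file once with the Python interpreter and confirm the output shape. This is a rough sketch (not part of the original workflow) that just feeds a zero-filled input of whatever shape get_input_details() reports:

# Sketch: verify the sub-model's output shape with a dummy input.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter('face_recog.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Zero-filled input with the reported shape and dtype.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

# This shape is what the Android code below should also report and receive.
print(interpreter.get_tensor(output_details[0]['index']).shape)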
Android part:
// Initialize interpreter
@Throws(IOException::class)
private suspend fun initializeInterpreter(app: Application) = withContext(Dispatchers.IO) {
    // Load the TF Lite model from the asset folder and initialize the TF Lite Interpreter without NNAPI enabled.
    val assetManager = app.assets
    val model = loadModelFile(assetManager, "face_recog_model_layer.tflite")
    val options = Interpreter.Options()
    options.setUseNNAPI(false)
    interpreter = Interpreter(model, options)

    // Reads type and shape of input and output tensors, respectively.
    val imageTensorIndex = 0
    val imageShape: IntArray =
        interpreter.getInputTensor(imageTensorIndex).shape()
    Log.i("INPUT_TENSOR_WHOLE", Arrays.toString(imageShape))
    val imageDataType: DataType =
        interpreter.getInputTensor(imageTensorIndex).dataType()
    Log.i("INPUT_DATA_TYPE", imageDataType.toString())

    val probabilityTensorIndex = 0
    val probabilityShape: IntArray =
        interpreter.getOutputTensor(probabilityTensorIndex).shape()
    Log.i("OUTPUT_TENSOR_SHAPE", Arrays.toString(probabilityShape))
    val probabilityDataType: DataType =
        interpreter.getOutputTensor(probabilityTensorIndex).dataType()
    Log.i("OUTPUT_DATA_TYPE", probabilityDataType.toString())

    Log.i(TAG, "Initialized TFLite interpreter.")
}
@Throws(IOException::class)
private fun loadModelFile(assetManager: AssetManager, filename: String): MappedByteBuffer {
    val fileDescriptor = assetManager.openFd(filename)
    val inputStream = FileInputStream(fileDescriptor.fileDescriptor)
    val fileChannel = inputStream.channel
    val startOffset = fileDescriptor.startOffset
    val declaredLength = fileDescriptor.declaredLength
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength)
}
I hope this will help someone. And of course if you need anything else tag me :)
Related
Can I get the Exif data from an Android camera preview without saving to file?
I want to use the Android camera to report lighting and colour information from a sampled patch on the image preview. The CameraX preview generates ImageProxy images, and I can get the average LUV data for a patch. I would like to turn this data into absolute light levels using the exposure information and the camera white balance. The exposure data is in the Exif information, and maybe the white balance information too. I would like this information, however we get it. Exif seems a very likely route, but any other non-Exif solutions are welcome.

At first sight, it looks as if Exif is always read from a file. However, ExifInterface can be created from an InputStream, and one of the streamType options is STREAM_TYPE_EXIF_DATA_ONLY. This looks promising - it seems something makes and streams just the EXIF data, and a camera preview could easily do just that. Or maybe we can get Exif from the ImageProxy somehow.

I found many old threads on how to get at Exif data to find out the camera orientation. About 4 years ago these people were saying Exif is only read from a file. Is this still so?

Reply to comment: With due misgiving, I attach my dodgy code...

private class LuvAnalyzer(private val listener: LuvListener) : ImageAnalysis.Analyzer {

    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind()    // Rewind the buffer to zero
        val data = ByteArray(remaining())
        get(data)   // Copy the buffer into a byte array
        return data // Return the byte array
    }

    override fun analyze(image: ImageProxy) {
        // Sum for 1/5 width square of YUV_420_888 image
        val YUV = DoubleArray(3)
        val w = image.width
        val h = image.height
        val sq = kotlin.math.min(h, w) / 5
        val w0 = ((w - sq) / 4) * 2
        val h0 = ((h - sq) / 4) * 2
        var ySum = 0
        var uSum = 0
        var vSum = 0

        val y = image.planes[0].buffer.toByteArray()
        val stride = image.planes[0].rowStride
        var offset = h0 * stride + w0
        for (row in 1..sq) {
            var o = offset
            for (pix in 1..sq) {
                ySum += y[o++].toInt() and 0xFF
            }
            offset += stride
        }
        YUV[0] = ySum.toDouble() / (sq * sq).toDouble()

        val uv = image.planes[1].buffer.toByteArray()
        offset = (h0 / 2) * stride + w0
        for (row in 1..sq / 2) {
            var o = offset
            for (pix in 1..sq / 2) {
                uSum += uv[o++].toInt() and 0xFF
                vSum += uv[o++].toInt() and 0xFF
            }
            offset += stride
        }
        YUV[1] = uSum.toDouble() / (sq * sq / 4).toDouble()
        YUV[2] = vSum.toDouble() / (sq * sq / 4).toDouble()

        // val exif = Exif.createFromImageProxy(image)

        listener(YUV)
        image.close()
    }
}

private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

    cameraProviderFuture.addListener({
        // Used to bind the lifecycle of cameras to the lifecycle owner
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Preview
        val preview = Preview.Builder()
            .build()
            .also {
                it.setSurfaceProvider(binding.viewFinder.surfaceProvider)
            }

        imageCapture = ImageCapture.Builder()
            .build()

        // Image analyser
        val imageAnalyzer = ImageAnalysis.Builder()
            .build()
            .also {
                it.setAnalyzer(cameraExecutor, LuvAnalyzer { LUV ->
                    // Log.d(TAG, "Average LUV: %.1f %.1f %.1f".format(LUV[0], LUV[1], LUV[2]))
                    luvText = "Average LUV: %.1f %.1f %.1f".format(LUV[0], LUV[1], LUV[2])
                })
            }

        // Select back camera as a default
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()

            // Bind use cases to camera
            cameraProvider.bindToLifecycle(
                this, cameraSelector, preview, imageCapture, imageAnalyzer)

        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }

    }, ContextCompat.getMainExecutor(this))
}

I am doing my image averaging from an ImageProxy. I am currently trying to get the Exif data from the same ImageProxy because I am not saving images to files; this is intended to provide a stream of colour values. And there is an intriguing Exif.createFromImageProxy(image) (now commented out) which I discovered after writing the original note, but I can't get it to do anything.

I might get the Exif information if I saved an image to a .jpg file and then read it back in again. The camera is putting out a stream of preview images, and the exposure settings may be changing all the time, so I would have to save a stream of images. If I was really stuck, I might try that. But I feel there are enough Exif bits and pieces to get the information live from the camera.

Update: the Google camerax-developers suggest getting the exposure information using the Camera2 Extender. I have got it working enough to see the numbers go up and down roughly as they should. This feels a lot better than the Exif route. I am tempted to mark this as the solution, as it is the solution for me, but I shall leave it open as my original question in the title may have an answer.

val previewBuilder = Preview.Builder()
val previewExtender = Camera2Interop.Extender(previewBuilder)
// Turn AWB off
previewExtender.setCaptureRequestOption(CaptureRequest.CONTROL_AWB_MODE,
    CaptureRequest.CONTROL_AWB_MODE_DAYLIGHT)
previewExtender.setSessionCaptureCallback(
    object : CameraCaptureSession.CaptureCallback() {
        override fun onCaptureCompleted(
            session: CameraCaptureSession,
            request: CaptureRequest,
            result: TotalCaptureResult
        ) {
            result.get(CaptureResult.SENSOR_EXPOSURE_TIME)
            result.get(CaptureResult.SENSOR_SENSITIVITY)
            result.get(CaptureResult.COLOR_CORRECTION_GAINS)
            result.get(CaptureResult.COLOR_CORRECTION_TRANSFORM)
        }
    }
)
TensorFlow Lite on Android buffer size error
I'm trying to build an image classifier Android app. I've built my model using Keras. The model is as follows:

model.add(MobileNetV2(include_top=False, weights='imagenet', input_shape=(224, 224, 3)))
model.add(GlobalAveragePooling2D())
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))

model.layers[0].trainable = False
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

Output:

Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
mobilenetv2_1.00_224 (Functi (None, 7, 7, 1280)        2257984
_________________________________________________________________
global_average_pooling2d_2 ( (None, 1280)              0
_________________________________________________________________
dropout_2 (Dropout)          (None, 1280)              0
_________________________________________________________________
dense_1 (Dense)              (None, 3)                 3843
=================================================================
Total params: 2,261,827
Trainable params: 3,843
Non-trainable params: 2,257,984

After training I'm converting the model using:

model = tf.keras.models.load_model('model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open(f"myModel.tflite", "wb").write(tflite_model)

For Android the code is:

make_prediction.setOnClickListener(View.OnClickListener {
    var resized = Bitmap.createScaledBitmap(bitmap, 224, 224, true)
    val model = MyModel.newInstance(this)
    var tbuffer = TensorImage.fromBitmap(resized)
    var byteBuffer = tbuffer.buffer

    // Creates inputs for reference.
    val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
    inputFeature0.loadBuffer(byteBuffer)

    // Runs model inference and gets result.
    val outputs = model.process(inputFeature0)
    val outputFeature0 = outputs.outputFeature0AsTensorBuffer

    var max = getMax(outputFeature0.floatArray)
    text_view.setText(labels[max])

    // Releases model resources if no longer used.
    model.close()
})

But whenever I try to run my app it closes, and I get this error in the logcat:

java.lang.IllegalArgumentException: The size of byte buffer and the shape do not match.

If I change the input shape of my image from 224 to 300, train my model on a 300 input shape and plug it into Android, I get another error:

java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 1080000 bytes and a Java Buffer with 150528 bytes

Any kind of help will be really appreciated.
Use it like:

make_prediction.setOnClickListener(View.OnClickListener {
    var resized = Bitmap.createScaledBitmap(bitmap, 224, 224, true)
    val model = MyModel.newInstance(this)
    var tImage = TensorImage(DataType.FLOAT32)
    var tensorImage = tImage.load(resized)
    var byteBuffer = tensorImage.buffer

    // Creates inputs for reference.
    //val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
    //inputFeature0.loadBuffer(byteBuffer)

    // Runs model inference and gets result.
    val outputs = model.process(byteBuffer)
    val outputFeature0 = outputs.outputFeature0AsTensorBuffer

    var max = getMax(outputFeature0.floatArray)
    text_view.setText(labels[max])

    // Releases model resources if no longer used.
    model.close()
})

Then check with the debugger if the problem persists or val outputFeature0 = outputs.outputFeature0AsTensorBuffer causes another one. Ping me if you need more help.
Firebase Local Model throws "Didn't find op for builtin opcode 'CONV_2D' version '2'"
I got a model which looks like this and trained it:

model = keras.Sequential([
    keras.layers.Conv2D(128, (3,3), activation='relu', input_shape=(150,150,3)),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Dropout(0.5),
    keras.layers.Conv2D(256, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Conv2D(512, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(280, activation='relu'),
    keras.layers.Dense(4, activation='softmax')
])

I converted it to .tflite with the following code:

import tensorflow as tf
converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("model.h5")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.post_training_quantize=True
converter.allow_custom_ops=True
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)

Then I want to use it with a local Firebase model:

val bitmap = Bitmap.createScaledBitmap(image, 150, 150, true)

val batchNum = 0
val input = Array(1) { Array(150) { Array(150) { FloatArray(3) } } }
for (x in 0..149) {
    for (y in 0..149) {
        val pixel = bitmap.getPixel(x, y)
        input[batchNum][x][y][0] = (Color.red(pixel) - 127) / 255.0f
        input[batchNum][x][y][1] = (Color.green(pixel) - 127) / 255.0f
        input[batchNum][x][y][2] = (Color.blue(pixel) - 127) / 255.0f
    }
}

val localModel = FirebaseCustomLocalModel.Builder()
    .setAssetFilePath("model.tflite")
    .build()

val interpreter = FirebaseModelInterpreter.getInstance(FirebaseModelInterpreterOptions.Builder(localModel).build())

val inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 150, 150, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 4))
    .build()

val inputs = FirebaseModelInputs.Builder()
    .add(input)
    .build()

interpreter?.run(inputs, inputOutputOptions)
    ?.addOnSuccessListener { result ->
        val output = result.getOutput<Array<FloatArray>>(0)
        val probabilities = output[0]
    }

But it throws this error:

Internal error: Cannot create interpreter: Didn't find op for builtin opcode 'CONV_2D' version '2'

Does somebody know what I'm doing wrong? I'm using tensorflow-gpu and tensorflow-estimator 2.3.0.
Check the version of TensorFlow you use while training, and use the same version in your Android build.gradle (app). I use TensorFlow 2.4.0 (the latest at the time of writing) for training, so I put implementation 'org.tensorflow:tensorflow-lite:2.4.0' in my Android build.gradle (app).
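If you are not sure which version you trained and converted with, a quick check in Python (just a small sketch) tells you which org.tensorflow:tensorflow-lite version to match in build.gradle:

# Print the TensorFlow version used for training/conversion so the Android
# tensorflow-lite dependency can be pinned to the same version.
import tensorflow as tf
print(tf.__version__)  # e.g. 2.4.0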
TFLite operators have different versions. It looks like you have converted the model with a newer version of Conv2D and your current interpreter does not support it. I have met this issue on Android when trying converter.optimizations = [tf.lite.Optimize.DEFAULT]. So I would suggest you drop optimization and custom ops at the beginning:

import tensorflow as tf
converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("model.h5")
converter.optimizations = []
converter.post_training_quantize=True
converter.allow_custom_ops=False
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)

Edit: Also make sure you are using the same version of TensorFlow while converting the model and in your application. The version mismatch is caused by an older version of the interpreter which does not support the new op version.

Edit 2 (maybe more useful): Try to convert your model with an older version of TensorFlow, let's say 2.1, with:

import tensorflow as tf
converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("model.h5")
converter.optimizations = []
converter.allow_custom_ops=False
converter.experimental_new_converter = True
tflite_model = converter.convert()
I fixed it with the following changes.

I saved my model like this (tf-gpu 2.2.0), or via my callback (.pb):

tf.saved_model.save(trainedModel, path)

In build.gradle I added:

implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly'

I updated my TensorFlow version (only for the converter) to tf-nightly (2.5.0) by running:

pip3 install tf-nightly

And used this code (thanks to Alex K.):

new_model = tf.keras.models.load_model(filepath=path)
converter = tf.lite.TFLiteConverter.from_keras_model(new_model)
converter.optimizations = []
converter.allow_custom_ops=False
converter.experimental_new_converter = True
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)

That's it.
Kotlin - convert map result to typed arraylist
I am having a hard time converting the result of map into its model type:

val options = message.options.map {{
    it.lastShare = user?.lastShare
    it.lastWatch = user?.lastWatch
}}

I want options to be returned as type ArrayList<Option>, but I can't find anything on the internet. message.options is of type List. When I do .map it returns List<() -> Unit>.
Please try the following code:

val options = message.options.map {
    it.lastShare = user?.lastShare
    it.lastWatch = user?.lastWatch
    it
}

Or, in your case, I think you don't need the map function at all; you can simply use forEach to update the properties of each object in the list:

message.options.forEach {
    it.lastShare = user?.lastShare
    it.lastWatch = user?.lastWatch
}

val optionsArrayList: ArrayList<Option> = arrayListOf(*message.options.toTypedArray())

* is the spread operator. Or a simpler way to create an ArrayList is to just use its constructor:

val optionsArrayList: ArrayList<Option> = ArrayList(message.options)
Just write (note the single braces and returning it, so the lambda yields the option rather than another lambda):

val options = message.options.map {
    it.lastShare = user?.lastShare
    it.lastWatch = user?.lastWatch
    it
}.toMutableList()
Mapbox custom style for line join
I want to customize the line join style as in the picture below:

How can I do it? My source code:

val lineString = LineString.fromLngLats(tempCoordinateList)
val geoJsonSource = GeoJsonSource(DASHED_LINE_SOURCE_ID, Feature.fromGeometry(lineString))

mapboxMap?.addSource(geoJsonSource)

val lineLayer = LineLayer(DASHED_LINE_LAYER_ID, DASHED_LINE_SOURCE_ID).apply {
    setProperties(
        PropertyFactory.lineDasharray(arrayOf(2f, 1f)),
        PropertyFactory.lineCap(Property.LINE_CAP_SQUARE),
        PropertyFactory.lineJoin(Property.LINE_JOIN_ROUND),
        PropertyFactory.lineWidth(3f),
        PropertyFactory.lineColor(Color.parseColor("#e55e5e")))
}

mapboxMap?.addLayer(lineLayer)
I didn't find a way to achieve this with the standard LineLayer properties. Therefore, I decided to add these points in a separate layer with markers:

val markerCoordinates = arrayListOf<Feature>()
tempCoordinateList
    .forEach {
        val feature = Feature.fromGeometry(
            Point.fromLngLat(it.longitude, it.latitude))
        markerCoordinates.add(feature)
    }

val geoJsonSource = GeoJsonSource(MARKER_POINTS_SOURCE_ID,
    FeatureCollection.fromFeatures(markerCoordinates))
mapboxMap?.addSource(geoJsonSource)

mapboxMap?.addImage(MARKER_POINTS_IMAGE_ID, pointIcon)

val markers = SymbolLayer(MARKER_POINTS_LAYER_ID, MARKER_POINTS_SOURCE_ID)
    .withProperties(PropertyFactory.iconImage(MARKER_POINTS_IMAGE_ID))
mapboxMap?.addLayer(markers)