ValueError: Invalid tensors 'input' were found - android

I'm not able to convert .pb to tflite
Here is the command I'm executing to generate the .pb file; generating it succeeds.
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_1_1.0_${IMAGE_SIZE}"
python retrain.py \
--bottleneck_dir=tf_files/bottlenecks \
--how_many_training_steps=500 \
--model_dir=tf_files/models/ \
--summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
--output_graph=tf_files/retrained_graph.pb \
--output_labels=tf_files/retrained_labels.txt \
--architecture="${ARCHITECTURE}" \
--image_dir=tf_files/flower_photos
When I try to convert that .pb to .tflite, it fails with the error "ValueError: Invalid tensors 'input' were found."
tflite_convert \
--output_file=foo.tflite \
--graph_def_file=retrained_graph.pb \
--input_arrays=input \
--output_arrays=MobilenetV1/Predictions/Reshape_1

I just followed this Google codelab demo:
https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0
This works fine:
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_1.0_${IMAGE_SIZE}"
python -m scripts.retrain \
--bottleneck_dir=tf_files/bottlenecks \
--how_many_training_steps=500 \
--model_dir=tf_files/models/ \
--summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
--output_graph=tf_files/retrained_graph.pb \
--output_labels=tf_files/retrained_labels.txt \
--architecture="${ARCHITECTURE}" \
--image_dir=tf_files/flower_photos
tflite_convert \
--graph_def_file=tf_files/retrained_graph.pb \
--output_file=tf_files/optimized_graph.tflite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,224,224,3 \
--input_array=input \
--output_array=final_result \
--inference_type=FLOAT \
--input_data_type=FLOAT
The only change I made was the MobileNet architecture string: ARCHITECTURE="mobilenet_1.0_${IMAGE_SIZE}" instead of "mobilenet_1_1.0_${IMAGE_SIZE}".

I got the same error with the TFLite converter Python API.
It is caused by the value passed as input_arrays.
input_arrays expects the tensor name defined by tf.placeholder(..., name=...), not the proto map key string used in build_signature_def(inputs={"input": tensor_info_proto}, outputs=...).
Here is a simple example.
x = tf.placeholder(tf.float32, [None], name="input_x")
...
builder = tf.saved_model.builder.SavedModelBuilder(saved_model_path)
input_tensor_info = {"input": tf.saved_model.build_tensor_info(x)}
output_tensor_info = ...
signature_def = tf.saved_model.build_signature_def(inputs=input_tensor_info,
                                                   outputs=...,
                                                   method_name=...)
builder.add_meta_graph_and_variables(...)
builder.save()
# Convert the saved_model to tflite format.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path,
                                                     input_arrays=["input"],
                                                     ...)
...
Running code like this raises "ValueError: Invalid tensors 'input' were found."
If we make the small change below, it succeeds.
# a small change when convert
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path,
                                                     input_arrays=["input_x"],
                                                     ...)
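If you are not sure what the placeholder's name is in a frozen graph, a minimal sketch like the following (TF 1.x API; the file path is a placeholder) lists the Placeholder ops, whose names are what input_arrays expects:
import tensorflow as tf

path_to_pb = "retrained_graph.pb"  # path to the frozen graph to inspect

graph_def = tf.GraphDef()
with tf.gfile.GFile(path_to_pb, "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# Placeholder ops are the graph inputs; their op names (e.g. "input_x")
# are the values input_arrays expects.
for op in graph.get_operations():
    if op.type == "Placeholder":
        print(op.name, op.outputs[0].shape)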

Related

Keras model tensorflow lite conversion input shape

I'm trying to convert my simple Keras model's frozen graph to TensorFlow Lite, but I'm not sure what the input shape should be.
toco \
--input_file='my_model.pb' \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--output_file=/tmp/my_model.tflite \
--inference_type=FLOAT \
--input_type=FLOAT \
--input_arrays=input_tensor \
--output_arrays=output_tensor \
--input_shapes=0,4,2851
My model is:
# create model
model = Sequential()
model.add(Dense(50, activation="tanh", input_dim=4, kernel_initializer="random_uniform", name="input_tensor"))
model.add(Dense(50, activation="tanh", kernel_initializer="random_uniform"))
model.add(Dense(1, activation="linear", kernel_initializer='random_uniform', name="output_tensor"))
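No answer is included above, but one way to find the shape (and the exact tensor names to pass to the converter) is to ask the Keras model itself before freezing it; a minimal sketch, assuming the model above has been built:
# With the Sequential model defined above, the input placeholder has
# shape (None, 4) because of input_dim=4, so a batch-1 input shape is 1,4.
print(model.input.name, model.input.shape)
print(model.output.name, model.output.shape)
# The printed names (minus the ":0" suffix) are the values that
# --input_arrays / --output_arrays expect for the frozen graph.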

Tensorflow lite accuracy drop on mobile device

I followed both Tensorflow for Poets tutorials:
Tensorflow for Poets 1 and Tensorflow for Poets 2.
My retrained model gives accurate results in a test on my laptop, but after converting it to a .tflite file and classifying the same image on my Android device, the accuracy drops to under 1%.
I used the following commands to retrain and convert:
python retrain.py \
--bottleneck_dir=tf_files/bottlenecks \
--how_many_training_steps=500 \
--model_dir=tf_files/models/ \
--summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
--output_graph=tf_files/retrained_graph.pb \
--output_labels=tf_files/retrained_labels.txt \
--architecture="${ARCHITECTURE}" \
--image_dir=tf_files/flower_photos
toco \
--input_file=tf_files/retrained_graph.pb \
--output_file=tf_files/optimized_graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,224,224,3 \
--input_array=Placeholder \
--output_array=final_result \
--inference_type=FLOAT \
--input_data_type=FLOAT
Strangely, the optimized file is almost as large as the original (both around 80 MB).
Using Tensorflow 1.9.0 and Python 3.6.6.
Any help or tip is appreciated!
Well, I figured it out. Apparently the ARCHITECTURE variable was not set to the right value. So if anyone encounters the same problem, first of all check that this variable is set correctly.
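Another quick check before blaming the Android side is to run the generated .tflite file with the Python interpreter on the laptop and compare its output with the frozen graph's. A minimal sketch (paths are examples; on TF 1.9 the class lives at tf.contrib.lite.Interpreter rather than tf.lite.Interpreter):
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="tf_files/optimized_graph.lite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one image preprocessed exactly as during retraining
# (1, 224, 224, 3, float32); a zero tensor stands in here.
image = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
If the desktop output matches the .pb results, the problem is likely on the device side (for example, image preprocessing) rather than in the conversion itself.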

Custom implementation for Dequantize when converting .pb to .tflite

I'm trying to convert a quantized TensorFlow .pb file to .lite using toco. The command for creating the .pb file is:
retrain.py is here and here.
python retrain.py \
--bottleneck_dir=/mobilenet_q/bottlenecks \
--how_many_training_steps=4000 \
--output_graph=/mobilenet_q/retrained_graph_mobilenet_q_1_224.pb \
--output_labels=/mobilenet_q/retrained_labels_mobilenet_q_1_224.txt \
--image_dir=/data \
--architecture=mobilenet_1.0_224_quantized
When I try to convert the .pb file to .tflite using the toco command:
bazel run --config=opt //tensorflow/contrib/lite/toco:toco \
-- --input_file=retrained_graph_mobilenet_q_1_224.pb \
--output_file=retrained_graph_mobilenet_q_1_224.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,224,224,3 \
--input_array=input \
--output_array=final_result \
--inference_type=FLOAT \
--input_data_type=FLOAT
I'm getting the error:
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.contrib.lite.toco_convert(). Here is a list of operators for which you will need custom implementations: Dequantize.
I've searched on GitHub and Stack Overflow but haven't come across a satisfactory answer.
The discussion and the solution are here.
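The linked discussion is not reproduced here. For reference, the error message itself offers one workaround (--allow_custom_ops / allow_custom_ops=True); the other common route for a graph retrained with the quantized architecture is to convert it as a fully quantized model instead of FLOAT. A rough sketch with the TF 1.x Python converter (file names come from the commands above; the mean/std values are assumptions you must match to your preprocessing):
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "retrained_graph_mobilenet_q_1_224.pb",
    input_arrays=["input"],
    output_arrays=["final_result"],
    input_shapes={"input": [1, 224, 224, 3]})

# Convert the fake-quantized graph as a quantized model instead of FLOAT.
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
converter.quantized_input_stats = {"input": (128.0, 128.0)}  # assumed (mean, std_dev)
# Alternatively: converter.allow_custom_ops = True, as the error message suggests.

tflite_model = converter.convert()
with open("retrained_graph_mobilenet_q_1_224.lite", "wb") as f:
    f.write(tflite_model)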

TFLite: Unable to obtain Inference using MobilenetV2 on custom Dataset

I have followed this link and successfully created a frozen graph for MobilenetV2_1.4_224 by fine-tuning on my custom dataset.
Then, I followed tensorflow-for-poets:tflite to create the tflite graph with toco, using the following command.
IMAGE_SIZE=224
toco \
--input_file=frozen_mobilenet_v2.pb \
--output_file=optimized_graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=10,${IMAGE_SIZE},${IMAGE_SIZE},3 \
--input_array=input \
--output_array=MobilenetV2/Predictions/Softmax \
--inference_type=FLOAT \
--input_data_type=FLOAT
The lite graph was created successfully, but during inference, while running the tflite Interpreter, I get the following error, so I am not getting any inferences.
Input error: Failed to get input dimensions. 0-th input should have 6021120 bytes, but found 602112 bytes.
Have you tried the argument --input_array=Predictions?
You can also try using a quantized model.
Input error: Failed to get input dimensions.
The command-line flag should be --input_arrays and not --input_array (plural instead of singular).
Ditto --output_arrays instead of --output_array. That should resolve your error. So the command should be:
IMAGE_SIZE=224
toco \
--input_file=frozen_mobilenet_v2.pb \
--output_file=optimized_graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=10,${IMAGE_SIZE},${IMAGE_SIZE},3 \
--input_arrays=input \
--output_arrays=MobilenetV2/Predictions/Softmax \
--inference_type=FLOAT \
--input_data_type=FLOAT
Additional debugging tip
For future reference, if your flags are all correct, the next step is to check that your input and output tensor names are correct. You can visualize the inputs and outputs of your graph in TensorBoard, or convert it to pbtxt as follows and read it out:
import tensorflow as tf
path_to_pb = '...'
output_file = '...'
g = tf.GraphDef()
with open(path_to_pb, 'rb') as f:
    g.ParseFromString(f.read())
with open(output_file, 'w') as f:
    f.write(str(g))

How to convert tensorflow model file to tensorflow lite? graph.pb to graph.lite

(base) C:\tensorflow-master>bazel run --config=opt \ //tensorflow/contrib/lite/toco:toco -- \ --input_file=optimized_graph.pb \ --output_file=abc.tflite \ --input_format=TENSORFLOW_GRAPHDEF \ --output_format=TFLITE \ --inference_type=FLOAT \ --input_shape=1,128,128,3 \ --input_array=input \ --output_array=final_result
WARNING: Config values are not defined in any .rc file: opt
ERROR: No targets found to run
INFO: Elapsed time: 11.002s
FAILED: Build did NOT complete successfully (0 packages loaded)
ERROR: Build failed. Not running target
The error message suggests that you failed to build TOCO.
The C++ toco converter has been deprecated in favor of the Python API.
See these docs.
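A minimal sketch of the Python route (TF 1.x API; the file names and the input/output array names are taken from the command above, so verify they match your graph):
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="optimized_graph.pb",
    input_arrays=["input"],
    output_arrays=["final_result"],
    input_shapes={"input": [1, 128, 128, 3]})

tflite_model = converter.convert()
with open("abc.tflite", "wb") as f:
    f.write(tflite_model)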
