Interpreting TF Lite output from the Object Detection API on Android

I am using the Object Detection API to train on my custom data for a 2-class problem.
I am using SSD MobileNet v2. I converted the model to TF Lite and I am trying to execute it with the Python interpreter.
The score and class values are somewhat confusing to me and I am unable to make a valid justification for them. I am getting the following values for score:
[[ 0.9998122 0.2795332 0.7827836 1.8154384 -1.1171713 0.152002
-0.90076405 1.6943774 -1.1098632 0.6275915 ]]
I am getting the following values for class:
[[ 0. 1.742706 0.5762139 -0.23641224 -2.1639721 -0.6644413
-0.60925585 0.5485272 -0.9775026 1.4633082 ]]
How can I get a score greater than 1 or less than 0, e.g. -1.1098632 or 1.6943774?
Also, the classes should ideally be integers, 1 or 2, as this is a 2-class object detection problem.
I am using the following code:
import numpy as np
import tensorflow as tf
import cv2
# Load TFLite model and allocate tensors.
interpreter = tf.contrib.lite.Interpreter(model_path="C://Users//Admin//Downloads//tflitenew//detect.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
print(output_details)
input_shape = input_details[0]['shape']
print(input_shape)
# change the following line to feed into your own data.
#input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
input_data = cv2.imread("C:/Users/Admin/Pictures/fire2.jpg")
#input_data = cv2.imread("C:/Users/Admin/Pictures/images4.jpg")
#input_data = cv2.imread("C:\\Users\\Admin\\Downloads\\FlareModels\\lessimages\\video5_image_178.jpg")
input_data = cv2.resize(input_data, (300, 300))
input_data = np.expand_dims(input_data, axis=0)
input_data = (2.0 / 255.0) * input_data - 1.0
input_data = input_data.astype(np.float32)
interpreter.reset_all_variables()
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data_scores = interpreter.get_tensor(output_details[2]['index'])
print(output_data_scores)
output_data_class = interpreter.get_tensor(output_details[1]['index'])
print(output_data_class)

Looks like the problem is caused by the wrong input image channel order. OpenCV's imread reads images in BGR format. You can try adding
input_data = cv2.cvtColor(input_data, cv2.COLOR_BGR2RGB)
to get an RGB image and then see whether the result is reasonable.
Reference: ref
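For clarity, here is a minimal sketch of the question's preprocessing with the conversion added (the file name, 300x300 size, and [-1, 1] scaling are taken from the code above):
import cv2
import numpy as np

input_data = cv2.imread("fire2.jpg")                      # OpenCV loads BGR
input_data = cv2.cvtColor(input_data, cv2.COLOR_BGR2RGB)  # convert to RGB
input_data = cv2.resize(input_data, (300, 300))
input_data = np.expand_dims(input_data, axis=0)
input_data = (2.0 / 255.0) * input_data - 1.0             # scale to [-1, 1]
input_data = input_data.astype(np.float32)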

The output of the TFLite model requires post-processing. The model returns a fixed number of detections (here, 10) by default. Use the output tensor at index 3 to get the number of valid detections, num_det (i.e. the top num_det detections are valid; ignore the rest).
num_det = int(interpreter.get_tensor(output_details[3]['index']))
boxes = interpreter.get_tensor(output_details[0]['index'])[0][:num_det]
classes = interpreter.get_tensor(output_details[1]['index'])[0][:num_det]
scores = interpreter.get_tensor(output_details[2]['index'])[0][:num_det]
Next, the box coordinates need to be scaled to the image size and clipped so that each box lies within the image (some visualization APIs require this).
import pandas as pd

df = pd.DataFrame(boxes)
df['ymin'] = df[0].apply(lambda y: max(1,(y*img_height)))
df['xmin'] = df[1].apply(lambda x: max(1,(x*img_width)))
df['ymax'] = df[2].apply(lambda y: min(img_height,(y*img_height)))
df['xmax'] = df[3].apply(lambda x: min(img_width,(x * img_width)))
boxes_scaled = df[['ymin', 'xmin', 'ymax', 'xmax']].to_numpy()
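As a quick visual check, you can draw the kept detections on the image before displaying it (a sketch; img, boxes_scaled, classes, and scores are the variables from the snippets above, and cv2 is assumed to be imported):
for (ymin, xmin, ymax, xmax), cls, score in zip(boxes_scaled, classes, scores):
    cv2.rectangle(img, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 255, 0), 2)
    cv2.putText(img, f"{int(cls)}: {score:.2f}", (int(xmin), int(ymin) - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)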
Here's a link to an inference script with input preprocessing, output post-processing and mAP evaluation.

Related

Shape does not match size of AI model in Android Studio

I am trying to get a text classification model to work on Android. The model was trained using the Python code below and then converted into a tflite file. I am trying to figure out what the shape should be. The input text is processed on a server into a bag-of-words format, which is passed into the model. Is there any guidance on what I can do to get it to work?
import json
import pickle
import random
import numpy as np
import nltk
from nltk.stem import WordNetLemmatizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.optimizers import SGD

# create an object of WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
# importing the GL Bot corpus file for pre-processing
words = []
classes = []
documents = []
ignore_words = ['?', '!']
data_file = open("higherCompText.json").read()
intents = json.loads(data_file, strict=False)
# preprocessing the json data
# tokenization
nltk.download('punkt')
nltk.download('wordnet')
for intent in intents['intents']:
    for pattern in intent['patterns']:
        # tokenize each word
        w = nltk.word_tokenize(pattern)
        words.extend(w)
        # add documents in the corpus
        documents.append((w, intent['tag']))
        # add to our classes list
        if intent['tag'] not in classes:
            classes.append(intent['tag'])
# lemmatize, lower each word and remove duplicates
words = [lemmatizer.lemmatize(w.lower()) for w in words if w not in ignore_words]
words = sorted(list(set(words)))
# sort classes
classes = sorted(list(set(classes)))
# documents = combination between patterns and intents
print(len(documents), "documents")
# classes = intents
print(len(classes), "classes", classes)
# words = all words, vocabulary
print(len(words), "unique lemmatized words", words)
# creating a pickle file to store the Python objects which we will use while predicting
pickle.dump(words, open('words.pkl', 'wb'))
pickle.dump(classes, open('classes.pkl', 'wb'))
# create our training data
training = []
# create an empty array for our output
output_empty = [0] * len(classes)
# training set, bag of words for each sentence
for doc in documents:
    # initialize our bag of words
    bag = []
    # list of tokenized words for the pattern
    pattern_words = doc[0]
    # lemmatize each word - create base word, in attempt to represent related words
    pattern_words = [lemmatizer.lemmatize(word.lower()) for word in pattern_words]
    # create our bag of words array with 1, if word match found in current pattern
    for w in words:
        bag.append(1) if w in pattern_words else bag.append(0)
    # output is a '0' for each tag and '1' for current tag (for each pattern)
    output_row = list(output_empty)
    output_row[classes.index(doc[1])] = 1
    training.append([bag, output_row])
# shuffle features and convert into numpy arrays
random.shuffle(training)
training = np.array(training, dtype=object)
# create train and test lists
train_x = list(training[:, 0])
print(len(train_x))
train_y = list(training[:, 1])
print("Training data created")
# Create NN model to predict the responses
model = Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation='softmax'))
# Compile model. Stochastic gradient descent with Nesterov accelerated gradient gives good results for this model
sgd = SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# fitting and saving the model
hist = model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
model.save('chatbot.h5', hist)  # we will pickle this model to use in the future
print("\n")
print("*" * 50)
print("\nModel Created Successfully!")
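For context, the conversion step mentioned above would look roughly like this in TF 2.x (a sketch; chatbot.tflite is a hypothetical output name):
import tensorflow as tf

model = tf.keras.models.load_model('chatbot.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('chatbot.tflite', 'wb') as f:
    f.write(tflite_model)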
Here is the Android (Kotlin) code that consumes the model:
fun toTheArray(someString: String): IntArray {
    var someArray = someString.removeSurrounding("[", "]")
        .takeIf(String::isNotEmpty) // this handles the case of "[]"
        ?.split(", ")
        ?: emptyList() // in the case of "[]"
    var intArray: MutableList<Int> = mutableListOf()
    Log.d("TRYING", "${someArray[0].split(",")}")
    someArray = someArray[0].split(",")
    Log.d("I AM TRYING", "$someArray")
    for (each in someArray) {
        Log.d("I AM TRYING", "$each")
        intArray.add(each.toInt())
    }
    return intArray.toIntArray()
}

fun classifyTag(intArray: IntArray, context: Context) {
    Log.d("array?", "${intArray.size}")
    var count = 0
    var byteBuffer = ByteBuffer.allocate(39122)
    for (each in intArray) {
        Log.d("array?", "$count")
        count++
        byteBuffer.putFloat(each.toFloat())
        Log.d("array?", "$byteBuffer")
    }
    byteBuffer.rewind()
    val model = Chatbot.newInstance(context)
    val inputFeature0 = TensorBuffer.createFrom(byteBuffer, DataType.FLOAT32)
    Log.d("shape", byteBuffer.toString())
    Log.d("shape", inputFeature0.buffer.toString())
    Log.d("size", intArray.size.toString())
    Log.d("shape", inputFeature0.shape.size.toString())
    inputFeature0.loadArray(intArray)
    //byteBuffer = inputFeature0.buffer
    inputFeature0.loadBuffer(byteBuffer)
    // Runs model inference and gets result.
    val outputs = model.process(inputFeature0).outputFeature0AsTensorBuffer
    Log.d("Disaster", "$outputs")
    // Releases model resources if no longer used.
    model.close()
}
I've tried everything I can think of and can't seem to find any answers online, as the examples I've found all use image classification models.
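One way to find out the shape the model actually expects is to inspect the converted file with the TFLite Python interpreter before touching the Android side (a sketch; chatbot.tflite is a hypothetical name for the converted model):
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="chatbot.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['shape'])   # e.g. [1, len(words)]
print(interpreter.get_output_details()[0]['shape'])  # e.g. [1, len(classes)]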

integrating audio classification model in android

I have an audio classification model trained with TensorFlow, and I want to integrate it into an Android application.
How can I reshape the 1-D float array to the 1 * 236 * 40 input dimensions?
I tried the following code, but it didn't work:
var byteBuffer: ByteBuffer = ByteBuffer.allocate(4 * 236 * 40)
for (element in meanMFCCValues) {
    val valArray = element
    val inpShapeDim: IntArray = intArrayOf(1, meanMFCCValues[0].size, 1)
    val valInTnsrBuffer: TensorBuffer = TensorBuffer.createDynamic(imageDataType)
    valInTnsrBuffer.loadArray(valArray, inpShapeDim)
    val valInBuffer: ByteBuffer = valInTnsrBuffer.getBuffer()
    byteBuffer.put(valInBuffer)
}
byteBuffer.rewind()
Is there any way to help me understand the conversion of the model input?
The error is: java.lang.IllegalArgumentException: The size of the array to be loaded does not match the specified shape.
Thanks for your help
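The IllegalArgumentException says the element count does not match the declared shape: a 1 x 236 x 40 input holds exactly 1 * 236 * 40 = 9440 floats. As a sanity check on the desktop side, a minimal NumPy sketch (the zero array is a hypothetical stand-in for the MFCC features):
import numpy as np

# Hypothetical stand-in for the 1-D MFCC feature array from the question.
mean_mfcc_values = np.zeros(236 * 40, dtype=np.float32)

# The model input holds exactly 1 * 236 * 40 = 9440 floats; sizes must match.
assert mean_mfcc_values.size == 1 * 236 * 40
input_tensor = mean_mfcc_values.reshape(1, 236, 40)
print(input_tensor.shape)  # (1, 236, 40)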

How to test a TensorFlow Lite model with multiple inputs?

I created a simple MLP regression Keras model with 4 inputs and one output. I converted this model to TFLite, and now I'm just trying to find out how to test it in Android Studio. How can I input multiple 4-D objects to test it in Java?
The following gives an error when trying to run the model:
try {
    tflite = new Interpreter(loadModelFile());
} catch (Exception ex) {
    ex.printStackTrace();
}

double[][] inp = new double[1][4];
inp[0][0] = 0;
inp[0][1] = 0;
inp[0][2] = 0;
inp[0][3] = -2.01616982303105;
double[] output = new double[100];
tflite.run(inp, output);
EDIT:
Here is the model I originally created:
# create model
model = Sequential()
model.add(Dense(50, activation="tanh", input_dim=4,
                kernel_initializer="random_uniform", name="input_tensor"))
model.add(Dense(50, activation="tanh",
                kernel_initializer="random_uniform"))
model.add(Dense(1, activation="linear",
                kernel_initializer='random_uniform', name="output_tensor"))
If your inputs are actually 4 separate tensors, then you should use the Interpreter.runForMultipleInputsOutputs API, which allows multiple separate inputs. See also this example from the TensorFlow Lite repository. For example:
double[] input0 = {...};
double[] input1 = {...};
Object[] inputs = {input0, input1};
double[] output = new double[100];
Map<Integer, Object> outputs = new HashMap<>();
outputs.put(0, output);
interpreter.runForMultipleInputsOutputs(inputs, outputs);
This is my code:
Object[] inputArray = {iArray[0],iArray[1]};
tflite.runForMultipleInputsOutputs(inputArray,outputMap);
The 1st input works fine, but the 2nd one fails in Tensor getInputTensor(int index) on this condition:
if(index >= 0 && index < this.inputTensors.length)
But the index is 1. Is there any problem with my code?
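The failing bounds check means getInputTensor(1) is out of range, i.e. the converted model most likely exposes a single input tensor of shape [1, 4] rather than four separate inputs, so plain Interpreter.run applies. You can confirm this from Python before writing any Java code (a sketch; model.tflite is a hypothetical file name):
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
print(len(input_details))  # 1 -> use run(), 4 -> use runForMultipleInputsOutputs()
for detail in input_details:
    print(detail['name'], detail['shape'])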

I keep receiving this error "filename = sys.argv[1] IndexError: list index out of range". Any idea what the issue could be?

The code is supposed to display an animated image walking in front of a background. I received this code from my professor and I'm not sure what the issue is.
import sys, os, math
sys.path.append("./")
from livewires import games
import spriteUtils
from spriteUtils import *
filename = sys.argv[1]
x = int(sys.argv[2])
y = int(sys.argv[3])
##print(filename, "\t", x, "\t", y)
games.init(screen_width = 1152, screen_height = 864, fps = 50)
nebula_image = games.load_image(os.path.join('.', "race_track.jpg"), transparent = 0)
games.screen.background = nebula_image
anim_list = load_2d_sheets(x, y, filename)
anim = games.Animation(images = anim_list,
                       x = games.screen.width/2,
                       y = 2*games.screen.height/4,
                       n_repeats = 15,
                       repeat_interval = 10)
games.screen.add(anim)
games.screen.mainloop()
I'd first print sys.argv so that you can see what it represents. For example:
$ ./myscript.py
>>> sys.argv
['./myscript.py']
>>> sys.argv[1]
IndexError
You probably want to pass additional command line arguments to your script, like:
$ ./myscript.py firstvar secondvar
sys.argv[1] is the first argument that you pass to your script, so you need to run your script like this:
python my_script.py arg1
sys.argv[0] is the name of your script itself, in this example my_script.py
Looks like this script requires 3 arguments, and you aren't giving it any. The syntax for calling it on the command line is python script_name.py <file_name> <x> <y> where script_name.py is the name of the actual program, <file_name> is a string, and <x> and <y> are integers.
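As a defensive addition, the script could check the argument count itself and print a usage message instead of crashing (a sketch):
import sys

if len(sys.argv) < 4:
    sys.exit("usage: %s <file_name> <x> <y>" % sys.argv[0])
filename = sys.argv[1]
x = int(sys.argv[2])
y = int(sys.argv[3])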

cell2mat error on Matlab from accelerometer txt data and how to plot it

I am a newbie at Matlab, and I am trying to plot the data from the txt file written by an Android application ( https://play.google.com/store/apps/details?id=com.lul.accelerometer&hl=it ).
I cleaned the file, and I have only the 4 columns with values separated by " ":
X Y Z time_from_previous_sample(ms)
e.g.
-1.413 6.572 6.975 0
-1.2 6.505 7.229 5
-1.047 6.341 7.26 5
-1.024 6.305 7.295 5
-1.154 6.318 7.247 5
-1.118 6.444 7.104 5
-1.049 6.225 7.173 5
-1.098 6.063 6.939 5
-0.769 6.53 6.903 5
fileID = fopen ('provamatlav.txt');
C = textscan (fileID, '%s %s %s %s');
fclose (fileID);
celldisp(C)
After importing the data, I created three new variables:
X = C{1};
Y = C{2};
Z = C{3};
The error occurs when I try to convert the cell array X into an ordinary array:
xx = cell2mat('X')
The error is the following:
Cell contents reference from a non-cell array object.
Error in cell2mat (line 36)
if isnumeric(c{1}) || ischar(c{1}) || islogical(c{1}) || isstruct(c{1})
Analyzing the code:
% Copyright 1984-2010 The MathWorks, Inc.

% Error out if there is no input argument
if nargin==0
    error(message('MATLAB:cell2mat:NoInputs'));
end

% short circuit for simplest case
elements = numel(c);
if elements == 0
    m = [];
    return
end
if elements == 1
    if isnumeric(c{1}) || ischar(c{1}) || islogical(c{1}) || isstruct(c{1})
        m = c{1};
        return
    end
end

% Error out if cell array contains mixed data types
cellclass = class(c{1});
ciscellclass = cellfun('isclass',c,cellclass);
if ~all(ciscellclass(:))
    error(message('MATLAB:cell2mat:MixedDataTypes'));
end
What did I do wrong?
After solving this, what would be the next step to plot the X, Y, and Z data in the same window, but in separate graphs?
Thank you so much!
When using cell2mat, you do not need quotes around the input argument; the quotes are the reason for the error you got. Generally speaking, you would call it like so:
xx = cell2mat(X)
But you will run into a different error with this in your code, because the cell elements in your case are strings, and cell2mat requires the cell contents to be numeric. So you need to convert them to numeric format, e.g. like this:
xx = cellfun(@str2num, X)
Please try the code line above; it worked in my small test case.
