Cannot run LSTM in tensorflow lite 1.15 - android

TL;DR: Can someone show how to create an LSTM, convert it to TFLite, and run it on Android with TensorFlow Lite 1.15?
I am trying to create a simple LSTM model and run it in an Android application with TensorFlow v1.15.
** It is the same case when using GRU and SimpleRNN layers **
Creating a simple LSTM model
I am working in Python, trying two TensorFlow/Keras versions: the latest (2.4.1, with built-in Keras) and 1.15 (with Keras 2.2.4 installed).
I create this simple model:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
model.add(layers.LSTM(128))
model.add(layers.Dense(10))
model.summary()
Saving it
I save it in both "SavedModel" and "h5" formats:
model.save(f'output_models/simple_lstm_saved_model_format_{tf.__version__}', save_format='tf')
model.save(f'output_models/simple_lstm_{tf.__version__}.h5', save_format='h5')
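As a quick check that both files were written correctly, they can be loaded back before conversion (a small sketch using the same paths as above):
reloaded_saved_model = keras.models.load_model(f'output_models/simple_lstm_saved_model_format_{tf.__version__}')
reloaded_h5 = keras.models.load_model(f'output_models/simple_lstm_{tf.__version__}.h5')
reloaded_h5.summary()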
Converting to TFLite
I try to create & save the model in both v1.15 and v2.
Then, I try to convert it to TFLite in several ways.
In TF2:
I try to convert from the Keras model:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open(f"output_models/simple_lstm_tf_v{tf.__version__}.tflite", 'wb') as f:
    f.write(tflite_model)
I try to convert from saved model:
converter_saved_model = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
tflite_model_from_saved_model = converter_saved_model.convert()
with open(f"{saved_model_path}_converted_tf_v{tf.__version__}.tflite", 'wb') as f:
    f.write(tflite_model_from_saved_model)
I try to convert from the Keras saved model (h5) - I try to use both tf.compat.v1.lite.TFLiteConverter and tf.lite.TFLiteConverter.
converter_h5 = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(h5_model_path)
# converter_h5 = tf.lite.TFLiteConverter.from_keras_model_file(h5_model_path) # option 2
tflite_model_from_h5 = converter_h5.convert()
with open(f"{h5_model_path.replace('.h5','')}_converted_tf_v1_lite_from_keras_model_file_v{tf.__version__}.tflite", 'wb') as f:
    f.write(tflite_model_from_h5)
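Before moving to Android, a converted file can be sanity-checked on the desktop with the Python TFLite interpreter (a minimal sketch; the path is an assumption, use whichever .tflite file was written above):
import numpy as np
import tensorflow as tf

# Load the converted model and run one dummy inference to confirm it works outside Android.
interpreter = tf.lite.Interpreter(model_path=f"output_models/simple_lstm_tf_v{tf.__version__}.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print("input:", input_details[0]['shape'], input_details[0]['dtype'])

# Feed dummy token indices matching the reported input shape and dtype.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print("output:", interpreter.get_tensor(output_details[0]['index']).shape)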
Android Application
build.gradle (Module: app)
When I want to use v2, I use:
implementation 'org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-task-text:0.0.0-nightly'
When I want to use v1.15, I use the following in build.gradle:
implementation 'org.tensorflow:tensorflow-lite:1.15.0'
Then, I follow the common TFLite loading code in Android:
private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(getModelPath());
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}

LoadLSTM(Activity activity) {
    try {
        tfliteModel = loadModelFile(activity);
    } catch (IOException e) {
        e.printStackTrace();
    }
    tflite = new Interpreter(tfliteModel, tfliteOptions);
    Log.d(TAG, "*** Loaded model *** " + getModelPath());
}
When I use v2, the model is loaded.
When I use v1.15, with ALL of the options I've tried, I receive errors like the following:
A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x70 in tid 17686 (CameraBackgroun), pid 17643 (flitecamerademo)
I need a simple outcome: create an LSTM and make it work on Android with TensorFlow Lite 1.15.
What am I missing? Thanks

Related

Issue in invoking tflite model in Android

I am trying to use an already trained model as a tflite model in Android, but I get the error below when executing the tflite model for the output:
**A/libc: Fatal signal 8 (SIGFPE), code 1 (FPE_INTDIV), fault addr 0xb7bd4543 in tid 12009 (ing.tensorflow3), pid 12009 (ing.tensorflow3)**
Below is the code:
//calling
bitmap = getBitmapFromAsset("aval1.png");
imageViewInput.setImageBitmap(bitmap);
testFunctionInference(bitmap);

//method body
public void testFunctionInference(Bitmap strName){
    try {
        //____________________________________
        ImageProcessor imageProcessor =
                new ImageProcessor.Builder()
                        .add(new ResizeOp(1, 1, ResizeOp.ResizeMethod.BILINEAR))
                        .build();
        Log.w("testFunc:", "after image processor");

        // Create a TensorImage object. This creates the tensor of the corresponding
        // tensor type (uint8 in this case) that the TensorFlow Lite interpreter needs.
        TensorImage tensorImage = new TensorImage(DataType.FLOAT32);

        // Analysis code for every frame
        // Preprocess the image
        tensorImage.load(strName);
        Log.w("testFunc:", "265 L no.");
        tensorImage = imageProcessor.process(tensorImage);
        Log.w("testFunc:", "before inputBuffer0");

        // Creates inputs for reference.
        TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 640*480*3}, DataType.FLOAT32);
        MappedByteBuffer tfliteModel = FileUtil.loadMappedFile(this, "converted_model.tflite");
        Interpreter tflite = new Interpreter(tfliteModel);
        Object a = tensorImage.getBuffer();
        Log.w("testFunc:", "278");
        tflite.run(tensorImage.getBuffer(), inputFeature0.getBuffer());
    } catch (IOException e) {
        // TODO Handle the exception
    }
}
Can anyone please assist in getting this issue resolved?
To get a detailed log, you can use the debug version of the nightly SNAPSHOT.
https://www.tensorflow.org/lite/guide/android#use_the_tensorflow_lite_aar_from_mavencentral
dependencies {
    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly-debug-SNAPSHOT'
}
But maybe it's better to check whether you provided the inputs correctly: since you used DataType.FLOAT32, your model should expect float32 inputs.
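One way to confirm what the model expects is to inspect the .tflite file with the Python TFLite interpreter before running it on Android (a small sketch, assuming the same converted_model.tflite file used in your code):
import tensorflow as tf

# Print the input/output tensor details of the converted model.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    print("input:", detail['shape'], detail['dtype'])
for detail in interpreter.get_output_details():
    print("output:", detail['shape'], detail['dtype'])
The buffer you build in Java (a 1x1 resized image vs. a 1 x 640*480*3 tensor) should match what these details report.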

TensorFlow Lite 2.0 advanced GPU usage on Android with C++

I am new to TensorFlow. I built the TensorFlow Lite libraries from sources. I am trying to use TensorFlow for face recognition; this is one part of my project. I have to use GPU memory for input/output, e.g. input data: an OpenGL texture, output data: an OpenGL texture. Unfortunately, this information is outdated: https://www.tensorflow.org/lite/performance/gpu_advanced. I tried to use gpu::gl::InferenceBuilder for building a gpu::gl::InferenceRunner, and I have a problem: I don't understand how I can get the model in GraphFloat32 (Model) format and the TfLiteContext.
Example of my experimental code:
using namespace tflite::gpu;
using namespace tflite::gpu::gl;

const TfLiteGpuDelegateOptionsV2 options = {
    .inference_preference = TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED,
    .is_precision_loss_allowed = 1 // FP16
};
tfGPUDelegate = TfLiteGpuDelegateV2Create(&options);
if (interpreter->ModifyGraphWithDelegate(tfGPUDelegate) != kTfLiteOk) {
    __android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "GPU Delegate hasn't been created");
    return;
} else {
    __android_log_print(ANDROID_LOG_INFO, "Tensorflow", "GPU Delegate has been created");
}

InferenceEnvironmentOptions envOption;
InferenceEnvironmentProperties properties;
auto envStatus = NewInferenceEnvironment(envOption, &env, &properties);
if (envStatus.ok()) {
    __android_log_print(ANDROID_LOG_INFO, "Tensorflow", "Inference environment has been created");
} else {
    __android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Inference environment hasn't been created");
    __android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Message: %s", envStatus.error_message().c_str());
}

InferenceOptions builderOptions;
builderOptions.usage = InferenceUsage::SUSTAINED_SPEED;
builderOptions.priority1 = InferencePriority::MIN_LATENCY;
builderOptions.priority2 = InferencePriority::AUTO;
builderOptions.priority3 = InferencePriority::AUTO;

// The last part requires a model
// GraphFloat32* graph;
// TfLiteContext* tfLiteContex;
//
// auto buildStatus = BuildModel(tfLiteContex, delegate_params, &graph);
// if (buildStatus.ok()) {}
You may look at the function BuildFromFlatBuffer (https://github.com/tensorflow/tensorflow/blob/6458d346470158605ecb5c5ba6ad390ae0dc6014/tensorflow/lite/delegates/gpu/common/testing/tflite_model_reader.cc). It creates an Interpreter and builds the graph from it.
MediaPipe also uses InferenceRunner; you may find these files useful:
https://github.com/google/mediapipe/blob/master/mediapipe/calculators/tflite/tflite_inference_calculator.cc
https://github.com/google/mediapipe/blob/ecb5b5f44ab23ea620ef97a479407c699e424aa7/mediapipe/util/tflite/tflite_gpu_runner.cc

ARCore 1.2 Unity Create AugmentedImageDatabase on the fly

I am trying to dynamically create an image database using ARCore's new image tracking feature.
Currently I have a server serving me image locations, which I download to the persistent data path of my device. I then use these images to create new database entries, like below:
Public Variables:
public AugmentedImageDatabase newBD;
public AugmentedImageDatabaseEntry newEntry;
Here I do regex matching to get the images from the data path and convert them to Texture2Ds in order to populate the AugmentedImageDatabaseEntry values.
Regex r1 = new Regex(@"https?://s3-([^.]+).amazonaws.com/([^/]+)/([^/]+)/(.*)");

// Match the input for file name
Match match = r1.Match(input);
if (match.Success)
{
    string v = match.Groups[4].Value;
    RegexMatch = v;
    Texture2D laodedTexture = LoadTextureToFile(v);
    laodedTexture.EncodeToPNG();

    AugmentedImageDatabaseEntry newEntry = new AugmentedImageDatabaseEntry(v, laodedTexture, Application.persistentDataPath + "/" + v);
    newEntry.Name = v;
    newEntry.Texture = laodedTexture;
    newEntry.TextureGUID = Application.persistentDataPath + "/" + v;

    Debug.Log(newEntry.Name);
    Debug.Log(newEntry.Texture);
    Debug.Log(newEntry.TextureGUID);

    newBD.Add(newEntry);
}
To get this to work on Android I had to modify the source of ARCore's Unity implementation a little so that the database.Add() function would work outside of the editor.
All of this seems to work seamlessly, as I don't get any errors yet.
Once I change scenes to the ARCore scene, I instantiate an ARCore camera and create a new session config which holds a reference to the database populated above.
Here is that code:
public class NewConfigSetup : MonoBehaviour {

    public GameObject downloadManager;
    public GameObject arcoreDevice;

    // Use this for initialization
    void Start () {
        downloadManager = GameObject.Find("DownlaodManager");
        TestModelGenerator generator = downloadManager.GetComponent<TestModelGenerator>();

        GoogleARCore.ARCoreSessionConfig newconfig = new GoogleARCore.ARCoreSessionConfig();
        GoogleARCore.ARCoreSessionConfig config = ScriptableObject.CreateInstance<GoogleARCore.ARCoreSessionConfig>();
        config.AugmentedImageDatabase = generator.newBD;
        Debug.Log("transfered db size --------------- " + config.AugmentedImageDatabase.Count);

        arcoreDevice.GetComponent<GoogleARCore.ARCoreSession>().SessionConfig = config;
        Instantiate(arcoreDevice, new Vector3(0, 0, 0), Quaternion.identity);
    }
}
When I run in the editor, I don't get errors until I view the database in the editor; that's when I get this error:
ERROR: flag '--input_image_path' is missing its argument; flag
description: Path of image to be evaluated. Currently only supports
*.png, *.jpg and *.jpeg.
When I debug and look in the memory of the AugmentedImageDatabase, everything seems to be there and working fine. Also, once I build for Android I get no errors whatsoever, and when I use 'adb logcat -s Unity' in the command line, no exceptions are thrown.
Could this be a limitation of the new ARCore feature? Do AugmentedImageDatabases not allow dynamic creation on Android? If so, then why are there built-in functions for creating them?
I understand the features are brand new and there is not much documentation anywhere so any help would be greatly appreciated.
I posted an issue on ARCore's GitHub page and got a response that the feature you're talking about isn't yet exposed in the Unity API:
https://github.com/google-ar/arcore-unity-sdk/issues/256

Xamarin.Forms (Read file from platform project in PCL code)

I am building a Xamarin.Forms project with a PCL, iOS and Android project. One of my requirements is that I have to read a JSON file stored in the platform project (iOS/Android) from the PCL.
How can I do this please? I can't find a solution for this problem. :(
Thank you very much,
Nikolai
If the file that you want to read is embedded in the platform project assembly you can use the following code:
var assembly = typeof(MyPage).GetTypeInfo().Assembly;
Stream stream = assembly.GetManifestResourceStream("WorkingWithFiles.PCLTextResource.txt");
string text = "";
using (var reader = new System.IO.StreamReader (stream)) {
    text = reader.ReadToEnd ();
}
Make sure that you replace WorkingWithFiles with the namespace of your project and PCLTextResource.txt with the name of the file.
Check the Xamarin documentation at Loading Files Embedded as Resources for more details.
If on the other hand you want to create and read/write files at runtime you can use PCLStorage library:
public async Task PCLStorageSample()
{
    IFolder rootFolder = FileSystem.Current.LocalStorage;
    IFolder folder = await rootFolder.CreateFolderAsync("MySubFolder",
        CreationCollisionOption.OpenIfExists);
    IFile file = await folder.CreateFileAsync("answer.txt",
        CreationCollisionOption.ReplaceExisting);
    await file.WriteAllTextAsync("42");
}
I got it working by using an IoC container.
In my iOS project I have created a ConfigAccess class:
public class ConfigAccess : IConfigAccess
{
    public string ReadConfigAsString()
    {
        return System.IO.File.ReadAllText("config.json");
    }
}
I also had to add the following line to my AppDelegate.cs:
SimpleIoc.Default.Register<IConfigAccess, ConfigAccess>();
And in my PCL I am simply asking for a ConfigAccess object at runtime:
var configAccess = SimpleIoc.Default.GetInstance<IConfigAccess>();
var test = configAccess.ReadConfigAsString();

Loading Tensorflow graph from binary file in Android application

I'm trying to integrate TensorFlow into an Android application. Since I'm new to TensorFlow, I'm starting with very simple operations.
As a first step I created just the following model:
import tensorflow as tf
with tf.Graph().as_default() as g:
    x = tf.placeholder("float", [1, 10], name="input")
    a = tf.zeros([10, 5], dtype=tf.float32, name="a")
    y = tf.matmul(x, a, name="output")

    init = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init)

    graph_def = g.as_graph_def()
    tf.train.write_graph(graph_def, 'models/', 'graph.pb', as_text=False)
This works fine. I'm able to properly get output from my C++ code (and then from Android as well).
I then tried to generate a by using tf.random_normal. It seems this is not feasible by just replacing tf.zeros with tf.random_normal; that's because tf.zeros returns a constant, while tf.random_normal does not. In particular, it seems I must handle it as a variable.
The idea I followed is the same as proposed in other examples I've found on GitHub: evaluating a before writing the graph, as reported in the code below:
import tensorflow as tf
a = tf.Variable(tf.random_normal([10, 5], dtype=tf.float32), name="a")
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
a_eval = a.eval(sess)
# print here properly produces matrix in output
print a_eval
sess.close()
with tf.Graph().as_default() as g:
    x = tf.placeholder(tf.float32, [1, 10], name="input")
    a_2 = tf.constant(a_eval, name="a_2")
    y = tf.matmul(x, a_2, name="output")

    init = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init)

    graph_def = g.as_graph_def()
    tf.train.write_graph(graph_def, 'models/', 'graph.pb', as_text=False)
Unfortunately, it seems it doesn't work, due to an error occurring when reading the binary file:
Out of range: Read less bytes than requested
This is the C++ code that I'm currently using for loading the graph from file:
tensorflow::GraphDef graph_def;
Status load_graph_status = ReadBinaryProto(Env::Default(), filepath, &graph_def);
if (!load_graph_status.ok()) {
    LOG(ERROR) << "could not create tensorflow graph: " << load_graph_status;
    return NULL;
}
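As an additional check, the written graph.pb can be parsed back in Python to verify that the file on disk itself is not truncated (a small sketch, assuming the same 'models/graph.pb' path as above):
import tensorflow as tf

# Read the serialized GraphDef back and re-import it; if this fails,
# the file on disk is the problem rather than the C++ loading code.
graph_def = tf.GraphDef()
with open('models/graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default():
    tf.import_graph_def(graph_def, name="")
print("parsed %d nodes" % len(graph_def.node))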
Hope someone could help me with this problem.
