Loading TensorFlow graph from binary file in Android application

I'm trying to integrate TensorFlow into an Android application. Since I'm new to TensorFlow, I'm starting with very simple operations.
As a first step I created just the following model:
import tensorflow as tf

with tf.Graph().as_default() as g:
    x = tf.placeholder("float", [1, 10], name="input")
    a = tf.zeros([10, 5], dtype=tf.float32, name="a")
    y = tf.matmul(x, a, name="output")

    init = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init)

    graph_def = g.as_graph_def()
    tf.train.write_graph(graph_def, 'models/', 'graph.pb', as_text=False)
This works fine. I'm able to properly get output from my C++ code (and then from Android as well).
I then tried to generate a using tf.random_normal. It seems this is not feasible by just replacing tf.zeros with tf.random_normal, because tf.zeros returns a constant while tf.random_normal does not. In particular, it seems I must handle it as a variable.
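For reference, another approach I've seen mentioned (just a sketch using the TF 1.x graph_util helpers, which I haven't verified for my case) is to freeze the variable into a constant before writing the graph:
import tensorflow as tf
from tensorflow.python.framework import graph_util  # TF 1.x location; may vary by version

with tf.Graph().as_default() as g:
    x = tf.placeholder(tf.float32, [1, 10], name="input")
    a = tf.Variable(tf.random_normal([10, 5], dtype=tf.float32), name="a")
    y = tf.matmul(x, a, name="output")

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        # Bake the current value of the variable into the graph as a constant,
        # so the exported GraphDef is self-contained.
        frozen_def = graph_util.convert_variables_to_constants(
            sess, g.as_graph_def(), ["output"])
        tf.train.write_graph(frozen_def, 'models/', 'graph_frozen.pb', as_text=False)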
The idea I followed is the same one proposed in other examples I've found on GitHub: evaluate a before writing the graph, as in the code below:
import tensorflow as tf

a = tf.Variable(tf.random_normal([10, 5], dtype=tf.float32), name="a")
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
a_eval = a.eval(sess)

# print here properly produces matrix in output
print a_eval

sess.close()

with tf.Graph().as_default() as g:
    x = tf.placeholder(tf.float32, [1, 10], name="input")
    a_2 = tf.constant(a_eval, name="a_2")
    y = tf.matmul(x, a_2, name="output")

    init = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init)

    graph_def = g.as_graph_def()
    tf.train.write_graph(graph_def, 'models/', 'graph.pb', as_text=False)
Unfortunately, it seems it doesn't work, due to an error occurring when reading the binary file:
Out of range: Read less bytes than requested
This is the C++ code that I'm currently using for loading the graph from a file:
tensorflow::GraphDef graph_def;
Status load_graph_status = ReadBinaryProto(Env::Default(), filepath, &graph_def);
if (!load_graph_status.ok()) {
    LOG(ERROR) << "could not create tensorflow graph: " << load_graph_status;
    return NULL;
}
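For what it's worth, a minimal sketch of how the written file can be re-read from Python to check that it really is a binary GraphDef (assuming the same TF 1.x setup as above):
import tensorflow as tf

graph_def = tf.GraphDef()
with open('models/graph.pb', 'rb') as f:
    # ParseFromString raises an error if the file is not a binary protobuf.
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as g:
    tf.import_graph_def(graph_def, name="")
    # List the node names to confirm "input" and "output" are present.
    print([n.name for n in g.as_graph_def().node])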
I hope someone can help me with this problem.

Related

Cannot run LSTM in tensorflow lite 1.15

TLDR: Can someone show how to create an LSTM, convert it to TFLite, and run it on Android with TensorFlow 1.15?
I am trying to create a simple LSTM model and run it in an Android application with TensorFlow v1.15.
(The same happens when using GRU and SimpleRNN layers.)
Creating a simple LSTM model
I am working in Python, trying two TensorFlow/Keras versions: the latest (2.4.1, with built-in Keras) and 1.15 (with Keras 2.2.4 installed).
I create this simple model:
# assuming TF 2.x built-in Keras; with 1.15 the standalone keras imports are used instead
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
model.add(layers.LSTM(128))
model.add(layers.Dense(10))
model.summary()
Saving it
I save it in both "SavedModel" and "h5" format:
model.save(f'output_models/simple_lstm_saved_model_format_{tf.__version__}', save_format='tf')
model.save(f'output_models/simple_lstm_{tf.__version__}.h5', save_format='h5')
Converting to TFLite
I create & save the model in both v1.15 and v2.
Then I try to convert it to TFLite in several ways.
In TF2:
I try to convert from keras model:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open(f"output_models/simple_lstm_tf_v{tf.__version__}.tflite", 'wb') as f:
f.write(tflite_model)
I try to convert from saved model:
converter_saved_model = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
tflite_model_from_saved_model = converter_saved_model.convert()
with open(f"{saved_model_path}_converted_tf_v{tf.__version__}.tflite", 'wb') as f:
f.write(tflite_model_from_saved_model)
I try to convert from the Keras saved model (h5) - I try both tf.compat.v1.lite.TFLiteConverter and tf.lite.TFLiteConverter.
converter_h5 = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(h5_model_path)
# converter_h5 = tf.lite.TFLiteConverter.from_keras_model_file(h5_model_path) # option 2
tflite_model_from_h5 = converter_h5.convert()
with open(f"{h5_model_path.replace('.h5','')}_converted_tf_v1_lite_from_keras_model_file_v{tf.__version__}.tflite", 'wb') as f:
    f.write(tflite_model_from_h5)
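For reference, here is a minimal sketch of how a converted .tflite file can be sanity-checked with the Python interpreter before it goes into the Android app (the file path is a placeholder for whichever converted file is being tested):
import numpy as np
import tensorflow as tf

# Placeholder path; substitute the converted file under test.
tflite_path = f"output_models/simple_lstm_tf_v{tf.__version__}.tflite"

interpreter = tf.lite.Interpreter(model_path=tflite_path)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy batch shaped like the input tensor expects.
# (If the input shape is dynamic, resize_tensor_input would be needed first.)
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)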
Android Application
build.gradle (Module: app)
When I want to use v2, I use:
implementation 'org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-task-text:0.0.0-nightly'
When I want to use v1.15, I use implementation 'org.tensorflow:tensorflow-lite:1.15.0'
in the build.gradle.
Then I follow the common TFLite loading code in Android:
private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(getModelPath());
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}

LoadLSTM(Activity activity) {
    try {
        tfliteModel = loadModelFile(activity);
    } catch (IOException e) {
        e.printStackTrace();
    }
    tflite = new Interpreter(tfliteModel, tfliteOptions);
    Log.d(TAG, "*** Loaded model *** " + getModelPath());
}
When I use v2, the model is loaded.
When I use v1.15, in ALL of the options I've tried, I receive errors like the following:
A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x70 in tid 17686 (CameraBackgroun), pid 17643 (flitecamerademo)
I need a simple outcome: create an LSTM and make it work on Android with v1.15.
What am I missing? Thanks.

Unable to load font for use with ImageSharp in Xamarin Android app

I have a Xamarin Forms app where I've included a font file called Roboto-Regular.ttf in the Assets folder of the Android project. Its Build Action is set to AndroidAsset.
Using the SixLabors.Fonts NuGet package, I'm trying to load this font to use it for watermarking.
However, when trying to install the font using the asset stream, an exception is thrown:
System.NotSupportedException: Specified method is not supported.
var fonts = new FontCollection();
FontFamily fontFamily;
using (var fontStream = Assets.Open("Roboto-Regular.ttf"))
{
    fontFamily = fonts.Install(fontStream); // Fails with "method not supported"
}
return fontFamily;
Any ideas what might be causing this, or if there is a better way to load fonts for use with the SixLabors.ImageSharp package?
Edit: I tried the suggestion below by SushiHangover, but it yields the same result:
There are two Assets.Open methods, and one provides an accessMode (the C# Access enum):
using (var fontStream = Assets.Open("Roboto-Regular.ttf", Android.Content.Res.Access.Random))
{
    fontFamily = fonts.Install(fontStream);
}
re: https://developer.android.com/reference/android/content/res/AssetManager.html#open(java.lang.String,%20int)
public enum Access
{
    [IntDefinition (null, JniField = "android/content/res/AssetManager.ACCESS_BUFFER")]
    Buffer = 3,
    [IntDefinition (null, JniField = "android/content/res/AssetManager.ACCESS_RANDOM")]
    Random = 1,
    [IntDefinition (null, JniField = "android/content/res/AssetManager.ACCESS_STREAMING")]
    Streaming,
    [IntDefinition (null, JniField = "android/content/res/AssetManager.ACCESS_UNKNOWN")]
    Unknown = 0
}
It seems the underlying Stream doesn't have Length or Position properties (which explains the exception), so for now I resorted to copying it into a seekable MemoryStream instead:
using (var assetStreamReader = new StreamReader(Assets.Open("Roboto-Regular.ttf")))
{
    using (var ms = new MemoryStream())
    {
        assetStreamReader.BaseStream.CopyTo(ms);
        ms.Position = 0;
        var fontFamily = new FontCollection().Install(ms);
    }
}
Looking at the FontReader implementation, the error now makes even more sense: https://github.com/SixLabors/Fonts/blob/master/src/SixLabors.Fonts/FontReader.cs
However, I'm not sure why Assets.Open doesn't return a seekable stream.

ARCore 1.2 Unity Create AugmentedImageDatabase on the fly

I am trying to dynamically create an image database using ARCore's new image tracking feature.
Currently I have a server serving me image locations, which I download to the persistent data path of my device. I then use these images to create new database entries, like below:
Public Variables:
public AugmentedImageDatabase newBD;
public AugmentedImageDatabaseEntry newEntry;
Here I do regex matching to get the images from the data path and convert them to Texture2Ds in order to populate the AugmentedImageDatabaseEntry values.
Regex r1 = new Regex(@"https?://s3-([^.]+).amazonaws.com/([^/]+)/([^/]+)/(.*)");

// Match the input for file name
Match match = r1.Match(input);
if (match.Success)
{
    string v = match.Groups[4].Value;
    RegexMatch = v;
    Texture2D loadedTexture = LoadTextureToFile(v);
    loadedTexture.EncodeToPNG();

    AugmentedImageDatabaseEntry newEntry = new AugmentedImageDatabaseEntry(v, loadedTexture, Application.persistentDataPath + "/" + v);
    newEntry.Name = v;
    newEntry.Texture = loadedTexture;
    newEntry.TextureGUID = Application.persistentDataPath + "/" + v;

    Debug.Log(newEntry.Name);
    Debug.Log(newEntry.Texture);
    Debug.Log(newEntry.TextureGUID);

    newBD.Add(newEntry);
}
To get this to work on Android I had to modify the source of ARCore's Unity implementation a little, so that the database.Add() function would work outside of the editor.
All of this seems to work seamlessly as I don't get any errors yet.
Once I change scenes to the ARCore scene, I instantiate an ARCore camera and create a new session config which holds a reference to the database populated above.
Here is that code:
public class NewConfigSetup : MonoBehaviour {

    public GameObject downloadManager;
    public GameObject arcoreDevice;

    // Use this for initialization
    void Start () {
        downloadManager = GameObject.Find("DownlaodManager");
        TestModelGenerator generator = downloadManager.GetComponent<TestModelGenerator>();

        GoogleARCore.ARCoreSessionConfig newconfig = new GoogleARCore.ARCoreSessionConfig();
        GoogleARCore.ARCoreSessionConfig config = ScriptableObject.CreateInstance<GoogleARCore.ARCoreSessionConfig>();
        config.AugmentedImageDatabase = generator.newBD;
        Debug.Log("transfered db size --------------- " + config.AugmentedImageDatabase.Count);

        arcoreDevice.GetComponent<GoogleARCore.ARCoreSession>().SessionConfig = config;
        Instantiate(arcoreDevice, new Vector3(0, 0, 0), Quaternion.identity);
    }
}
When I run in the editor, I don't get errors until I view the database in the editor; that's when I get this error:
ERROR: flag '--input_image_path' is missing its argument; flag
description: Path of image to be evaluated. Currently only supports
*.png, *.jpg and *.jpeg.
When I debug and look at the AugmentedImageDatabase in memory, everything seems to be there and working fine. Also, once I build for Android I get no errors whatsoever, and when I use 'adb logcat -s Unity' in the command line, no exceptions are thrown.
Could this be a limitation of the new ARCore feature? Are AugmentedImageDatabases not allowing dynamic creation on Android? If so, then why are there built-in functions for creating them?
I understand the features are brand new and there is not much documentation anywhere, so any help would be greatly appreciated.
I posted an issue on ARCore's GitHub page and got a response that this feature isn't yet exposed in the Unity API:
https://github.com/google-ar/arcore-unity-sdk/issues/256

Keep Tensorflow session open in a Kivy app

I am trying to run an app made in Kivy along with a TensorFlow session, and keep the session from being loaded every time I make a prediction. To be more precise, I want to know how I can call the function from inside the session.
Here is the code for the session:
def decode():
    # Only allocate part of the gpu memory when predicting.
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
    config = tf.ConfigProto(gpu_options=gpu_options)

    with tf.Session(config=config) as sess:
        # Create model and load parameters.
        model = create_model(sess, True)
        model.batch_size = 1

        enc_vocab_path = os.path.join(gConfig['working_directory'], "vocab%d.enc" % gConfig['enc_vocab_size'])
        dec_vocab_path = os.path.join(gConfig['working_directory'], "vocab%d.dec" % gConfig['dec_vocab_size'])

        enc_vocab, _ = data_utils.initialize_vocabulary(enc_vocab_path)
        _, rev_dec_vocab = data_utils.initialize_vocabulary(dec_vocab_path)

        # !!! This is the function that I'm trying to call. !!!
        def answersqs(sentence):
            token_ids = data_utils.sentence_to_token_ids(tf.compat.as_bytes(sentence), enc_vocab)
            bucket_id = min([b for b in xrange(len(_buckets))
                             if _buckets[b][0] > len(token_ids)])
            encoder_inputs, decoder_inputs, target_weights = model.get_batch(
                {bucket_id: [(token_ids, [])]}, bucket_id)
            _, _, output_logits = model.step(sess, encoder_inputs, decoder_inputs,
                                             target_weights, bucket_id, True)
            outputs = [int(np.argmax(logit, axis=1)) for logit in output_logits]
            if data_utils.EOS_ID in outputs:
                outputs = outputs[:outputs.index(data_utils.EOS_ID)]
            return " ".join([tf.compat.as_str(rev_dec_vocab[output]) for output in outputs])
Here is where I'm calling the function:
def resp(self, msg):
    def p():
        if len(msg) > 0:
            # If I try to do decode().answersqs(msg), it starts a new session.
            ansr = answersqs(msg)
            ansrbox = Message()
            ansrbox.ids.mlab.text = str(ansr)
            ansrbox.ids.mlab.color = (1, 1, 1)
            ansrbox.pos_hint = {'x': 0}
            ansrbox.source = './icons/ansr_box.png'
            self.root.ids.chatbox.add_widget(ansrbox)
            self.root.ids.scrlv.scroll_to(ansrbox)

    threading.Thread(target=p).start()
And here is the last part:
if __name__ == "__main__":
    if len(sys.argv) - 1:
        gConfig = brain.get_config(sys.argv[1])
    else:
        # get configuration from seq2seq.ini
        gConfig = brain.get_config()

    threading.Thread(target=decode()).start()
    KatApp().run()
Also, should I change the session from GPU to CPU before I port it to Android?
You should have two variables graph and session that you keep around.
When you load the model you do something like:
graph = tf.Graph()
session = tf.Session(config=config)
with graph.as_default(), session.as_default():
    # The rest of your model loading code.
When you need to make a prediction:
with graph.as_default(), session.as_default():
    return session.run([your_result_tensor])
What happens is that the session is loaded and kept in memory, and you just tell the system that's the context where you want to run.
In your code, move def answersqs outside of the with part, as in the sketch below. It should bind automatically to graph and session from the surrounding function (but you need to make them available outside the with).
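A minimal sketch of that restructuring, reusing the names from your code (module-level globals for brevity; create_model, data_utils, gConfig and _buckets are assumed to be imported as in your script):
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
config = tf.ConfigProto(gpu_options=gpu_options)

# Keep the graph and session alive for the lifetime of the app.
graph = tf.Graph()
session = tf.Session(graph=graph, config=config)

with graph.as_default(), session.as_default():
    # Create the model and load the vocabularies once, at startup.
    model = create_model(session, True)
    model.batch_size = 1
    enc_vocab_path = os.path.join(gConfig['working_directory'], "vocab%d.enc" % gConfig['enc_vocab_size'])
    dec_vocab_path = os.path.join(gConfig['working_directory'], "vocab%d.dec" % gConfig['dec_vocab_size'])
    enc_vocab, _ = data_utils.initialize_vocabulary(enc_vocab_path)
    _, rev_dec_vocab = data_utils.initialize_vocabulary(dec_vocab_path)

def answersqs(sentence):
    # Reuses the already-loaded graph and session instead of opening new ones.
    with graph.as_default(), session.as_default():
        token_ids = data_utils.sentence_to_token_ids(tf.compat.as_bytes(sentence), enc_vocab)
        bucket_id = min(b for b in xrange(len(_buckets))
                        if _buckets[b][0] > len(token_ids))
        encoder_inputs, decoder_inputs, target_weights = model.get_batch(
            {bucket_id: [(token_ids, [])]}, bucket_id)
        _, _, output_logits = model.step(session, encoder_inputs, decoder_inputs,
                                         target_weights, bucket_id, True)
        outputs = [int(np.argmax(logit, axis=1)) for logit in output_logits]
        if data_utils.EOS_ID in outputs:
            outputs = outputs[:outputs.index(data_utils.EOS_ID)]
        return " ".join(tf.compat.as_str(rev_dec_vocab[o]) for o in outputs)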
For the second part: normally, if you follow the guides, the exported model should be free of hardware-binding information, and when you load it TensorFlow will figure out a good placement (which might be the GPU, if one is available and sufficiently capable).

Unity WebPlayer to Android Build Issues

I am new to Unity, so a well-stepped-out answer would be nice. I am trying to make a dice roller on the Android platform. I was following this very well put together tutorial: http://games.ucla.edu/resource/unity-1-beginner-tutorial-dice-making-pt-1/ (there is a second part too).
The problem is that it was made for the Web Player. If I try to build it for Android, I get two particular errors.
I have two simple scripts with one error associated with each one.
SideTrigger.js - Error: BCE0019: 'currentValue' is not a member of 'UnityEngine.Component'.
public var faceValue = 0;

function OnTriggerEnter( other : Collider ) {
    var dieGameObject = GameObject.Find("SixSidedDie");
    var dieValueComponent = dieGameObject.GetComponent("DieValue");
    dieValueComponent.currentValue = faceValue; //ERROR HERE
    Debug.Log("Die1: " + faceValue);
}
DieValue.js - Error: BCE0019: 'text' is not a member of 'UnityEngine.Component'.
public var currentValue = 0;

function Update () {
    var dieTextGameObject = GameObject.Find("DieText");
    var textMeshComponent = dieTextGameObject.GetComponent("TextMesh");
    textMeshComponent.text = currentValue.ToString(); //ERROR HERE
}
I assume it's purely a syntactical issue, but I can't seem to find a solution.
GetComponent with a string argument is not recommended for performance reasons (it is documented here), and it returns a plain Component, which is why those members can't be resolved.
It is better to use this:
var dieValueComponent : DieValue = dieGameObject.GetComponent(DieValue)
Or this:
var dieValueComponent : DieValue = dieGameObject.GetComponent.<DieValue>()
See if that works.
