I am new to TensorFlow. I built the TensorFlow Lite libraries from source. I am trying to use TensorFlow for face recognition; this is one part of my project. I have to use GPU memory for input/output, e.g. input data: an OpenGL texture, output data: an OpenGL texture. Unfortunately, this information is outdated: https://www.tensorflow.org/lite/performance/gpu_advanced. I tried to use gpu::gl::InferenceBuilder to build a gpu::gl::InferenceRunner, and I have a problem: I don't understand how to get the model in GraphFloat32 (Model) format, or where to get the TfLiteContext.
Example of my experimental code:
using namespace tflite::gpu;
using namespace tflite::gpu::gl;

// Create the GPU delegate (allowing FP16 precision) and attach it to the interpreter.
const TfLiteGpuDelegateOptionsV2 options = {
    .is_precision_loss_allowed = 1, // FP16
    .inference_preference = TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED,
};
tfGPUDelegate = TfLiteGpuDelegateV2Create(&options);
if (interpreter->ModifyGraphWithDelegate(tfGPUDelegate) != kTfLiteOk) {
    __android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "GPU Delegate hasn't been created");
    return;
} else {
    __android_log_print(ANDROID_LOG_INFO, "Tensorflow", "GPU Delegate has been created");
}

// Create the OpenGL inference environment.
InferenceEnvironmentOptions envOption;
InferenceEnvironmentProperties properties;
std::unique_ptr<InferenceEnvironment> env;
auto envStatus = NewInferenceEnvironment(envOption, &env, &properties);
if (envStatus.ok()) {
    __android_log_print(ANDROID_LOG_INFO, "Tensorflow", "Inference environment has been created");
} else {
    __android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Inference environment hasn't been created");
    __android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Message: %s", envStatus.error_message().c_str());
}

// Options for the InferenceBuilder.
InferenceOptions builderOptions;
builderOptions.usage = InferenceUsage::SUSTAINED_SPEED;
builderOptions.priority1 = InferencePriority::MIN_LATENCY;
builderOptions.priority2 = InferencePriority::AUTO;
builderOptions.priority3 = InferencePriority::AUTO;

// The last part requires a model:
// GraphFloat32* graph;
// TfLiteContext* tfLiteContext;
//
// auto buildStatus = BuildModel(tfLiteContext, delegate_params, &graph);
// if (buildStatus.ok()) {}
You may look at the function BuildFromFlatBuffer (https://github.com/tensorflow/tensorflow/blob/6458d346470158605ecb5c5ba6ad390ae0dc6014/tensorflow/lite/delegates/gpu/common/testing/tflite_model_reader.cc). It creates an Interpreter and builds the graph from it.
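For illustration, here is a minimal sketch of using that helper to obtain the GraphFloat32 the InferenceBuilder needs; the exact signature and return type depend on your checkout, and the model path is a hypothetical placeholder, so verify everything against tflite_model_reader.h:

#include "tensorflow/lite/delegates/gpu/common/model.h"
#include "tensorflow/lite/delegates/gpu/common/testing/tflite_model_reader.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Load the .tflite flatbuffer and convert it into the GPU delegate's graph representation.
auto flatbuffer = tflite::FlatBufferModel::BuildFromFile("face_model.tflite"); // hypothetical path
tflite::ops::builtin::BuiltinOpResolver op_resolver;
tflite::gpu::GraphFloat32 graph;
auto status = tflite::gpu::BuildFromFlatBuffer(*flatbuffer, op_resolver, &graph);
if (!status.ok()) {
    __android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "BuildFromFlatBuffer failed");
}
// `graph` can now be handed to the InferenceBuilder, replacing the
// TfLiteContext / BuildModel route from the commented-out code above.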
Also, Mediapipe uses InferenceRunner; you may find it useful in these files (a rough usage sketch follows after the links):
https://github.com/google/mediapipe/blob/master/mediapipe/calculators/tflite/tflite_inference_calculator.cc
https://github.com/google/mediapipe/blob/ecb5b5f44ab23ea620ef97a479407c699e424aa7/mediapipe/util/tflite/tflite_gpu_runner.cc
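Here is that rough sketch of the flow in Mediapipe's TFLiteGPURunner; the class, namespace, and method names are taken from tflite_gpu_runner.h as I read it, so treat them as assumptions to verify against your checkout:

// Build a GPU runner straight from a FlatBuffer model; I/O stays in GPU memory.
tflite::gpu::InferenceOptions options;
options.usage = tflite::gpu::InferenceUsage::SUSTAINED_SPEED;
options.priority1 = tflite::gpu::InferencePriority::MIN_LATENCY;

tflite::gpu::TFLiteGPURunner runner(options);
runner.InitializeWithModel(*flatbuffer, op_resolver); // builds a GraphFloat32 internally

// Bind GPU buffers (SSBOs) for input/output instead of copying through CPU tensors.
// input_ssbo_id / output_ssbo_id are hypothetical GL buffer ids you created yourself.
runner.BindSSBOToInputTensor(input_ssbo_id, /*input_id=*/0);
runner.BindSSBOToOutputTensor(output_ssbo_id, /*output_id=*/0);

runner.Build();  // creates the InferenceRunner under the hood
runner.Invoke(); // runs inference on the GPU

Note that the runner exchanges data through SSBOs rather than OpenGL textures, so to feed a texture you would first copy it into an SSBO (for example with a small compute shader).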
TLDR: Can someone show how to create an LSTM, convert it to TFLite, and run it on Android with TensorFlow 1.15?
I am trying to create a simple LSTM model and run it in an Android application with TensorFlow v1.15.
**It is the same case when using GRU and SimpleRNN layers**
Creating a simple LSTM model
I am working in Python, trying two TensorFlow and Keras versions: the latest (2.4.1, with built-in Keras), and 1.15 (where I install Keras version 2.2.4).
I create this simple model:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
model.add(layers.LSTM(128))
model.add(layers.Dense(10))
model.summary()
Saving it
I save it in both "SavedModel" and "h5" formats:
model.save(f'output_models/simple_lstm_saved_model_format_{tf.__version__}', save_format='tf')
model.save(f'output_models/simple_lstm_{tf.__version__}.h5', save_format='h5')
Converting to TFLite
I try to create & save the model in both v1.15 and v2.
Then, I try to convert it to TFLite using several methods.
In TF2:
I try to convert from the Keras model:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open(f"output_models/simple_lstm_tf_v{tf.__version__}.tflite", 'wb') as f:
f.write(tflite_model)
I try to convert from the saved model:
converter_saved_model = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
tflite_model_from_saved_model = converter_saved_model.convert()
with open(f"{saved_model_path}_converted_tf_v{tf.__version__}.tflite", 'wb') as f:
f.write(tflite_model_from_saved_model)
I try to convert from the Keras saved model (h5) - I try to use both tf.compat.v1.lite.TFLiteConverter and tf.lite.TFLiteConverter.
converter_h5 = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(h5_model_path)
# converter_h5 = tf.lite.TFLiteConverter.from_keras_model_file(h5_model_path) # option 2
tflite_model_from_h5 = converter_h5.convert()
with open(f"{h5_model_path.replace('.h5','')}_converted_tf_v1_lite_from_keras_model_file_v{tf.__version__}.tflite", 'wb') as f:
    f.write(tflite_model_from_h5)
Android Application
build.gradle (Module: app)
When I want to use v2, I use:
implementation 'org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-task-text:0.0.0-nightly'
When I want to use v1.15, I use:
implementation 'org.tensorflow:tensorflow-lite:1.15.0'
in the build.gradle.
Then, I follow the common TFLite model-loading code in Android:
private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(getModelPath());
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}

LoadLSTM(Activity activity) {
    try {
        tfliteModel = loadModelFile(activity);
    } catch (IOException e) {
        e.printStackTrace();
    }
    tflite = new Interpreter(tfliteModel, tfliteOptions);
    Log.d(TAG, "*** Loaded model *** " + getModelPath());
}
When I use v2, the model is loaded.
When I use v1.15, with ALL of the options I've tried, I receive errors like the following:
A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x70 in tid 17686 (CameraBackgroun), pid 17643 (flitecamerademo)
I need a simple outcome - create an LSTM and make it work in Android with v1.15.
What am I missing? Thanks
I have been working on this problem for two weeks now. I have integrated C++ code into my VoIP call recording app; the code is supposed to take care of forcefully setting the input source of MediaRecorder to the same one used by the VoIP call (in my case input_source=7 / VOICE_COMMUNICATION).
In order to achieve my goal I load the shared library libaudioflinger.so and attempt to reach the setParameters function, as can be seen from the snippet below:
handleLibAudioFlinger = dlopen("libaudioflinger.so", RTLD_LAZY | RTLD_GLOBAL);
if (handleLibAudioFlinger != NULL) {
    // I do not know the mangled name of the setParameters function
    func = dlsym(handleLibAudioFlinger, "setParameters");
    if (func != NULL) {
        __android_log_print(ANDROID_LOG_ERROR, "TRACKERS", "%s", "Function is not null");
        result = 0;
    }
    audioSetParameters = (lasp) func;
} else {
    __android_log_print(ANDROID_LOG_ERROR, "TRACKERS", "%s", "Function is null");
    result = -1;
}
dlopen does not return null, but dlsym does; the reason is that I need the exact mangled name of the setParameters function from AudioFlinger.cpp as it appears in the Android source code.
I am new to handling Android C++ code and dealing with shared libraries, etc. Can someone tell me step by step how to get the correct mangled name for the function that I need?
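Not a full answer, but the usual approach is: pull the library off the device, list its dynamic symbol table with nm -D (or readelf --dyn-syms), grep for setParameters, and demangle the candidates with c++filt; then pass the exact mangled string to dlsym. A minimal sketch, where the mangled name is a truncated hypothetical placeholder, not the real one from your Android build:

#include <dlfcn.h>
#include <android/log.h>

// Find the real mangled name on your host machine first, e.g.:
//   adb pull /system/lib64/libaudioflinger.so
//   nm -D libaudioflinger.so | grep -i setparameters
//   c++filt <candidate>   # check it demangles to the overload you want
static const char *kMangledSetParameters =
        "_ZN7android12AudioFlinger13setParametersE..."; // hypothetical placeholder, truncated

void *handle = dlopen("libaudioflinger.so", RTLD_LAZY | RTLD_GLOBAL);
void *func = handle ? dlsym(handle, kMangledSetParameters) : NULL;
if (func == NULL) {
    // dlerror() reports why the lookup failed.
    __android_log_print(ANDROID_LOG_ERROR, "TRACKERS", "dlsym failed: %s", dlerror());
}

Be aware that on recent Android versions the linker's namespace restrictions can prevent an app from dlopen-ing private system libraries like libaudioflinger.so at all, regardless of the symbol name.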
I have been trying to implement the flashlight/torch feature of the camera using the Google Play Services Vision API (using NuGet from Visual Studio) for the past few days without success. I have noticed that there is a GitHub implementation of this API which has such functionality, but it is only available to Java users.
I was wondering if there is anything similar for C# Xamarin users.
The Camera object is not exposed by this API, therefore I am not able to alter the camera parameters needed to activate the flashlight.
I would like to be sure that this functionality is not available so I don't waste more time on it. It might just be the case that the Xamarin developers have not attended to this functionality yet and will in the near future.
UPDATE
https://github.com/googlesamples/android-vision/blob/master/visionSamples/barcode-reader/app/src/main/java/com/google/android/gms/samples/vision/barcodereader/BarcodeCaptureActivity.java
There you can see that on line 214 we have this method call:
mCameraSource = builder.setFlashMode(useFlash ? Camera.Parameters.FLASH_MODE_TORCH : null).build();
SetFlashMode is not a method of CameraSource in the NuGet package, but it is in the GitHub (open source) version.
The Xamarin Vision Library didn't expose the method to set the flash mode.
Workaround:
Using reflection, you can get the Camera object from the CameraSource, add the flash parameter, and then set the updated parameters on the camera.
This should be called after the SurfaceView has been created.
Code
public Camera getCameraObject (CameraSource _camSource)
{
    Field [] cFields = _camSource.Class.GetDeclaredFields ();
    Camera _cam = null;
    try {
        foreach (Field item in cFields) {
            if (item.Name.Equals ("zzbNN")) {
                Console.WriteLine ("Camera");
                item.Accessible = true;
                try {
                    _cam = (Camera)item.Get (_camSource);
                } catch (Exception e) {
                    Logger.LogException (this, e);
                }
            }
        }
    } catch (Exception e) {
        Logger.LogException (this, e);
    }
    return _cam;
}

public void setFlash (bool isEnable)
{
    try {
        isTorch = !isEnable;
        var _cam = getCameraObject (mCameraSource);
        if (_cam == null) return;
        var _pareMeters = _cam.GetParameters ();
        var _listOfSuppo = _cam.GetParameters ().SupportedFlashModes;
        _pareMeters.FlashMode = isTorch ? _listOfSuppo [0] : _listOfSuppo [3];
        _cam.SetParameters (_pareMeters);
    } catch (Exception e) {
        Logger.LogException (this, e);
    }
}
Basically, anything you can do with Android can be done with Xamarin.Android. All the underlying APIs are available.
Since you have existing Java code, you can create a binding project that enables you to call the code from your Xamarin.Android project. Here's a good article on how to get started: Binding a Java Library
On the other hand, I don't think you need a library to do what you want to. If you only want torch/flashlight functionality, you just need to adapt the Java code from this answer to work in Xamarin.Android with C#.
I am trying to use GStreamer on Android via Qt C++.
I have already used GStreamer on these platforms, but now I have an issue with the plugins:
G_BEGIN_DECLS
GST_PLUGIN_STATIC_DECLARE(coreelements);
GST_PLUGIN_STATIC_DECLARE(audioconvert);
GST_PLUGIN_STATIC_DECLARE(playback);
G_END_DECLS

void MainWindow::play(){
    GST_PLUGIN_STATIC_REGISTER(coreelements);
    GST_PLUGIN_STATIC_REGISTER(audioconvert);
    GST_PLUGIN_STATIC_REGISTER(playback);

    GstElement *pipeline;
    GError *error = NULL;
    pipeline = gst_parse_launch("playbin uri=http://docs.gstreamer.com/media/sintel_trailer-368p.ogv", &error);
    if (!pipeline) {
        ui->label->setText("error");
        return;
    }
    if (error != NULL) {
        qDebug("GST error: %s", error->message);
    } else {
        qDebug("GST without errors");
    }
    gst_element_set_state(pipeline, GST_STATE_READY);
    gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(pipeline), this->ui->playback_widget->winId());
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    ui->label->setText("Playing...");
}
After executing this code I get neither video in the playback_widget nor audio, but the error var is clear (equals NULL) and the label is set to "Playing...". So, maybe I missed something?
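One thing worth checking: gst_parse_launch only reports parse-time errors, while runtime failures (a missing decoder plugin, network problems) arrive asynchronously on the pipeline's bus, so a NULL error here does not prove that playback actually started. A minimal diagnostic sketch; it blocks for up to five seconds, so use it for testing only, not on the UI thread:

// Drain the bus once to surface any asynchronous error or EOS (testing only).
GstBus *bus = gst_element_get_bus(pipeline);
GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, 5 * GST_SECOND,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
if (msg != NULL && GST_MESSAGE_TYPE(msg) == GST_MESSAGE_ERROR) {
    GError *err = NULL;
    gchar *debug_info = NULL;
    gst_message_parse_error(msg, &err, &debug_info);
    qDebug("GST bus error: %s", err->message);
    g_clear_error(&err);
    g_free(debug_info);
}
if (msg != NULL)
    gst_message_unref(msg);
gst_object_unref(bus);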
How do I use HtmlAgilityPack with Android (Mono for Android - C#)? I've added the reference, but I keep getting this error:
Error CS0012: The type 'System.Xml.XPath.IXPathNavigable' is defined
in an assembly that is not referenced. You must add a reference to
assembly 'System.Xml, Version=2.0.0.0
I have incorporated HtmlAgilityPack into the base library assembly I use in all of my MonoDroid projects with great success. I have not even tried to use it in precompiled form, but simply added its source to my project.
I then shamelessly edited HtmlWeb.cs to throw out the Windows stuff:
lines 893 to 903 (may have somewhat changed, just look around near there):
if (!helper.GetIsDnsAvailable())
{
#if Android
    contentType = def;
#else
    //do something.... not at full trust
    try
    {
        RegistryKey reg = Registry.ClassesRoot;
        reg = reg.OpenSubKey(extension, false);
        if (reg != null) contentType = (string)reg.GetValue("", def);
    }
    catch (Exception)
    {
        contentType = def;
    }
#endif
}
lines 934 to 946 (may have somewhat changed, just look around near there):
if (helper.GetIsRegistryAvailable())
{
#if Android
    ext = def;
#else
    try
    {
        RegistryKey reg = Registry.ClassesRoot;
        reg = reg.OpenSubKey(@"MIME\Database\Content Type\" + contentType, false);
        if (reg != null) ext = (string)reg.GetValue("Extension", def);
    }
    catch (Exception)
    {
        ext = def;
    }
#endif
}
I then added Android to the conditional compilation symbols of my project.
My references are:
Microsoft.CSharp
Mono.Android
System
System.Core
System.Data
System.Xml
System.Xml.Linq
Please add a comment telling whether you can compile it now, or if you need more information.