Issue invoking a TFLite model on Android

I am trying to use an already trained model as a TFLite model on Android, but I get the error below when running the model to get its output:
**A/libc: Fatal signal 8 (SIGFPE), code 1 (FPE_INTDIV), fault addr 0xb7bd4543 in tid 12009 (ing.tensorflow3), pid 12009 (ing.tensorflow3)**
Below is the code:
// calling
bitmap = getBitmapFromAsset("aval1.png");
imageViewInput.setImageBitmap(bitmap);
testFunctionInference(bitmap);

// method body
public void testFunctionInference(Bitmap strName) {
    try {
        ImageProcessor imageProcessor =
                new ImageProcessor.Builder()
                        .add(new ResizeOp(1, 1, ResizeOp.ResizeMethod.BILINEAR))
                        .build();
        Log.w("testFunc:", "after image processor");

        // Create a TensorImage object. This creates the tensor of the corresponding
        // tensor type (uint8 in this case) that the TensorFlow Lite interpreter needs.
        TensorImage tensorImage = new TensorImage(DataType.FLOAT32);

        // Analysis code for every frame
        // Preprocess the image
        tensorImage.load(strName);
        Log.w("testFunc:", "265 L no.");
        tensorImage = imageProcessor.process(tensorImage);
        Log.w("testFunc:", "before inputBuffer0");

        // Creates inputs for reference.
        TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 640 * 480 * 3}, DataType.FLOAT32);
        MappedByteBuffer tfliteModel = FileUtil.loadMappedFile(this, "converted_model.tflite");
        Interpreter tflite = new Interpreter(tfliteModel);
        Object a = tensorImage.getBuffer();
        Log.w("testFunc:", "278");
        tflite.run(tensorImage.getBuffer(), inputFeature0.getBuffer());
    } catch (IOException e) {
        // TODO Handle the exception
    }
}
Can anyone please assist in resolving this issue?

To get a detailed log, you can use the debug build of the nightly SNAPSHOT:
https://www.tensorflow.org/lite/guide/android#use_the_tensorflow_lite_aar_from_mavencentral

dependencies {
    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly-debug-SNAPSHOT'
}

But it may be better to first check that you are providing the inputs correctly: since you used DataType.FLOAT32, your model must expect float32 inputs.
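For instance, here is a minimal sketch of my own (not from the question; it assumes the standard org.tensorflow.lite Interpreter/Tensor API and the same converted_model.tflite asset) showing how to read the expected input shape and type from the model itself and size the buffers accordingly, instead of hard-coding them:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;
import android.util.Log;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.Tensor;
import org.tensorflow.lite.support.common.FileUtil;
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer;

void inspectAndRun() throws IOException {
    Interpreter tflite = new Interpreter(FileUtil.loadMappedFile(this, "converted_model.tflite"));

    // Ask the interpreter what it expects instead of hard-coding 1 x 640*480*3.
    Tensor input = tflite.getInputTensor(0);
    Log.w("testFunc", "input shape = " + Arrays.toString(input.shape())); // e.g. [1, 480, 640, 3]
    Log.w("testFunc", "input type  = " + input.dataType());               // should match DataType.FLOAT32
    Log.w("testFunc", "input bytes = " + input.numBytes());

    // Size both buffers from the model itself; a size or type mismatch here can lead to native crashes.
    ByteBuffer inputBuffer = ByteBuffer.allocateDirect(input.numBytes()).order(ByteOrder.nativeOrder());
    Tensor output = tflite.getOutputTensor(0);
    TensorBuffer outputBuffer = TensorBuffer.createFixedSize(output.shape(), output.dataType());

    // ... fill inputBuffer with the preprocessed image pixels ...
    tflite.run(inputBuffer, outputBuffer.getBuffer());
}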

Related

Unable to load TensorFlow TFLite model in Android Studio

I have trained a TensorFlow model and converted it to TensorFlow Lite using the code below:
# Convert the model
import tensorflow as tf
import numpy as np

# TFLITE_PATH is the path to the SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model(TFLITE_PATH)
tflite_model = converter.convert()

# Save the model.
with open('model_1.tflite', 'wb') as f:
    f.write(tflite_model)
Attaching my model_1.tflite model in case you want to investigate.
I have tested it inside my Python environment, where it produces output using the script below:
import numpy as np
import tensorflow as tf

MODEL_PATH = "model_1.tflite"

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)

# Print the required input shape for the model
print(input_shape)
# [  1 320 320   3]

# Provide the input to the interpreter and run inference
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])

# Print the output that we get from the model
print(output_data)
[[[0.01350823 0.02189949 0.9918406 0.9821147 ]
[0.33122188 0.11993879 0.9528857 0.90357083]
[0.04370229 0.13977486 0.5076436 0.9069242 ]
[0.36508453 0.00325416 0.63923967 0.1383895 ]
[0.12694997 0.01493323 0.4414968 0.14510964]
[0.21113579 0.00826943 0.5027399 0.13861066]
[0.28166008 0.9081802 0.57174915 1.0400366 ]
[0.38398495 0.9090722 0.6709249 1.0427872 ]
[0.561202 0.32376498 0.8054305 0.6049366 ]
[0.3257156 0.65075576 0.43758994 0.80955625]]]
But when I load it inside Android Studio, it gives me an error.
Note: when I downloaded a pre-trained TensorFlow model from here (https://github.com/am15h/tflite_flutter_plugin) and called it, it worked fine, but I am unable to load my custom trained model, which gives me the error below:
[VERBOSE-2:dart_isolate.cc(1137)] Unhandled exception:
Bad state: failed precondition
#0 checkState (package:quiver/check.dart:73:5)
#1 Tensor.setTo (package:tflite_flutter/src/tensor.dart:150:5)
#2 Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:194:33)
#3 Classifier.predict (package:bewizor/tflite/classifier.dart:139:18)
#4 IsolateUtils.entryPoint (package:bewizor/tflite/tfutils/isolate_utils.dart:45:51)
<asynchronous suspension>
Below is a comparison of the two models as viewed in the Netron app.
On the left-hand side is the Netron view of the working pre-trained model; on the right-hand side is the Netron view of my failing custom trained model.
Can you please help me understand what I am missing here and what I can try in order to resolve this?
Why does the pre-trained TFLite model work, but not my current custom model?
Is the error related to my model, or should the way I am calling it inside Android Studio be changed?
Things that I have tried to resolve this:
Converting the model so that it takes uint8 as input. (The idea is to make it look like the model that works fine; I don't think it affects whether the model works, but it does help reduce the size of my model.) I used the code below for this.
# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_PATH)  # path to the SavedModel directory
converter.optimizations = [tf.lite.Optimize.DEFAULT]

num_calibration_steps = 100

def representative_dataset_gen():
    for _ in range(num_calibration_steps):
        input_shape = [1, 320, 320, 3]
        # Get sample input data as a numpy array in a method of your choosing.
        input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
        yield [input_data]

converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8  # or tf.int8
converter.inference_output_type = tf.uint8  # or tf.int8
converter.experimental_new_converter = False

quantized_tflite_model = converter.convert()

tflite_model_name = 'model_2_uint_type.tflite'
if tf.__version__.startswith('1.'):
    open(tflite_model_name, "wb").write(quantized_tflite_model)
if tf.__version__.startswith('2.'):
    with open(tflite_model_name, 'wb') as f:
        f.write(quantized_tflite_model)
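For reference, here is a minimal sketch of my own (in Java with the plain TFLite Interpreter API rather than the Flutter plugin; the asset name is assumed) showing how the converted model's tensors can be inspected at runtime and compared against the buffers the classifier code passes in. The "Bad state: failed precondition" thrown by Tensor.setTo in tflite_flutter typically means the byte size or type of a provided buffer does not match the corresponding tensor.

import java.util.Arrays;
import android.content.Context;
import android.util.Log;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.Tensor;
import org.tensorflow.lite.support.common.FileUtil;

void logModelSignature(Context context) throws java.io.IOException {
    // Asset name is assumed; use whichever .tflite file the app actually bundles.
    Interpreter tflite = new Interpreter(FileUtil.loadMappedFile(context, "model_2_uint_type.tflite"));

    Tensor in = tflite.getInputTensor(0);
    Log.d("ModelCheck", "input shape = " + Arrays.toString(in.shape())); // expected [1, 320, 320, 3]
    Log.d("ModelCheck", "input type  = " + in.dataType());               // UINT8 after this conversion, FLOAT32 before

    for (int i = 0; i < tflite.getOutputTensorCount(); i++) {
        Tensor out = tflite.getOutputTensor(i);
        Log.d("ModelCheck", "output " + i + ": " + Arrays.toString(out.shape()) + " " + out.dataType());
    }
    tflite.close();
}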
I am also sharing the .dart file code that we use to call the model inside Android Studio:
import 'dart:math';
import 'dart:ui';
import 'package:bewizor/tflite/recognition.dart';
import 'package:flutter/material.dart';
import 'package:image/image.dart' as imageLib;
import 'package:tflite_flutter/tflite_flutter.dart';
import 'package:tflite_flutter_helper/tflite_flutter_helper.dart';
import 'stats.dart';
/// Classifier
class Classifier {
/// Instance of Interpreter
Interpreter _interpreter;
/// Labels file loaded as list
List<String> _labels;
// static const String MODEL_FILE_NAME = "tfModels/detect.tflite";
static const String MODEL_FILE_NAME = "tfModels/detect_new.tflite";
static const String LABEL_FILE_NAME = "tfModels/label_map.pbtxt";
/// Input size of image (height = width = 300)
static const int INPUT_SIZE = 300;
/// Result score threshold
static const double THRESHOLD = 0.5;
/// [ImageProcessor] used to pre-process the image
ImageProcessor imageProcessor;
/// Padding the image to transform into square
int padSize;
/// Shapes of output tensors
List<List<int>> _outputShapes;
/// Types of output tensors
List<TfLiteType> _outputTypes;
/// Number of results to show
static const int NUM_RESULTS = 5;
Classifier({
Interpreter interpreter,
List<String> labels,
}) {
loadModel(interpreter: interpreter);
loadLabels(labels: labels);
}
/// Loads interpreter from asset
void loadModel({Interpreter interpreter}) async {
try {
_interpreter = interpreter ??
await Interpreter.fromAsset(
MODEL_FILE_NAME,
options: InterpreterOptions()..threads = 4,
);
var outputTensors = _interpreter.getOutputTensors();
_outputShapes = [];
_outputTypes = [];
outputTensors.forEach((tensor) {
_outputShapes.add(tensor.shape);
_outputTypes.add(tensor.type);
});
} catch (e) {
print("Error while creating interpreter: $e");
}
}
/// Loads labels from assets
void loadLabels({List<String> labels}) async {
try {
_labels =
labels ?? await FileUtil.loadLabels("assets/" + LABEL_FILE_NAME);
} catch (e) {
print("Error while loading labels: $e");
}
}
/// Pre-process the image
TensorImage getProcessedImage(TensorImage inputImage) {
padSize = max(inputImage.height, inputImage.width);
if (imageProcessor == null) {
imageProcessor = ImageProcessorBuilder()
.add(ResizeWithCropOrPadOp(padSize, padSize))
.add(ResizeOp(INPUT_SIZE, INPUT_SIZE, ResizeMethod.BILINEAR))
.build();
}
inputImage = imageProcessor.process(inputImage);
return inputImage;
}
/// Runs object detection on the input image
Map<String, dynamic> predict(imageLib.Image image) {
var predictStartTime = DateTime.now().millisecondsSinceEpoch;
if (_interpreter == null) {
print("Interpreter not initialized");
return null;
}
var preProcessStart = DateTime.now().millisecondsSinceEpoch;
// Create TensorImage from image
TensorImage inputImage = TensorImage.fromImage(image);
// Pre-process TensorImage
inputImage = getProcessedImage(inputImage);
var preProcessElapsedTime =
DateTime.now().millisecondsSinceEpoch - preProcessStart;
// // TensorBuffers for output tensors
TensorBuffer outputLocations = TensorBufferFloat(_outputShapes[0]);
TensorBuffer outputClasses = TensorBufferFloat(_outputShapes[1]);
TensorBuffer outputScores = TensorBufferFloat(_outputShapes[2]);
TensorBuffer numLocations = TensorBufferFloat(_outputShapes[3]);
// Inputs object for runForMultipleInputs
// Use [TensorImage.buffer] or [TensorBuffer.buffer] to pass by reference
List<Object> inputs = [inputImage.buffer];
// Outputs map
Map<int, Object> outputs = {
0: outputLocations.buffer,
1: outputClasses.buffer,
2: outputScores.buffer,
3: numLocations.buffer,
};
var inferenceTimeStart = DateTime.now().millisecondsSinceEpoch;
// run inference
_interpreter.runForMultipleInputs(inputs, outputs);
var inferenceTimeElapsed =
DateTime.now().millisecondsSinceEpoch - inferenceTimeStart;
// Maximum number of results to show
int resultsCount = min(NUM_RESULTS, numLocations.getIntValue(0));
// Using labelOffset = 1 as ??? at index 0
int labelOffset = 1;
// Using bounding box utils for easy conversion of tensorbuffer to List<Rect>
List<Rect> locations = BoundingBoxUtils.convert(
tensor: outputLocations,
valueIndex: [1, 0, 3, 2],
boundingBoxAxis: 2,
boundingBoxType: BoundingBoxType.BOUNDARIES,
coordinateType: CoordinateType.RATIO,
height: INPUT_SIZE,
width: INPUT_SIZE,
);
List<Recognition> recognitions = [];
for (int i = 0; i < resultsCount; i++) {
// Prediction score
var score = outputScores.getDoubleValue(i);
// Label string
var labelIndex = outputClasses.getIntValue(i) + labelOffset;
var label = _labels.elementAt(labelIndex);
if (score > THRESHOLD) {
// inverse of rect
// [locations] corresponds to the image size 300 X 300
// inverseTransformRect transforms it our [inputImage]
Rect transformedRect = imageProcessor.inverseTransformRect(
locations[i], image.height, image.width);
recognitions.add(
Recognition(i, label, score, transformedRect),
);
}
}
var predictElapsedTime =
DateTime.now().millisecondsSinceEpoch - predictStartTime;
return {
"recognitions": recognitions,
"stats": Stats(
totalPredictTime: predictElapsedTime,
inferenceTime: inferenceTimeElapsed,
preProcessingTime: preProcessElapsedTime)
};
}
/// Gets the interpreter instance
Interpreter get interpreter => _interpreter;
/// Gets the loaded labels
List<String> get labels => _labels;
}

Cannot run LSTM in TensorFlow Lite 1.15

TLDR: Can someone show how to create an LSTM model, convert it to TFLite, and run it on Android with TensorFlow Lite 1.15?
I am trying to create a simple LSTM model and run it in an Android application with TensorFlow v1.15.
(It is the same case when using GRU and SimpleRNN layers.)
Creating a simple LSTM model
I am working in Python, trying two TensorFlow and Keras versions: the latest (2.4.1 with built-in Keras) and 1.15 (where I install Keras 2.2.4).
I create this simple model:
model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
model.add(layers.LSTM(128))
model.add(layers.Dense(10))
model.summary()
Saving it
I save it in both "SavedModel" and "h5" format:
model.save(f'output_models/simple_lstm_saved_model_format_{tf.__version__}', save_format='tf')
model.save(f'output_models/simple_lstm_{tf.__version__}.h5', save_format='h5')
Converting to TFLite
I create and save the model in both v1.15 and v2. Then I try to convert it to TFLite using several methods.
In TF2:
I try to convert from keras model:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open(f"output_models/simple_lstm_tf_v{tf.__version__}.tflite", 'wb') as f:
f.write(tflite_model)
I try to convert from saved model:
converter_saved_model = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
tflite_model_from_saved_model = converter_saved_model.convert()
with open(f"{saved_model_path}_converted_tf_v{tf.__version__}.tflite", 'wb') as f:
f.write(tflite_model_from_saved_model)
I try to convert from the Keras saved model (h5). I try to use both tf.compat.v1.lite.TFLiteConverter and tf.lite.TFLiteConverter:
converter_h5 = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(h5_model_path)
# converter_h5 = tf.lite.TFLiteConverter.from_keras_model_file(h5_model_path)  # option 2
tflite_model_from_h5 = converter_h5.convert()
with open(f"{h5_model_path.replace('.h5','')}_converted_tf_v1_lite_from_keras_model_file_v{tf.__version__}.tflite", 'wb') as f:
    f.write(tflite_model_from_h5)
Android Application
build.gradle (Module: app)
When I want to use v2, I use:
implementation 'org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-task-text:0.0.0-nightly'
When I want to use v1.15, I use implementation 'org.tensorflow:tensorflow-lite:1.15.0'
in build.gradle.
Then, I follow the common TFLite loading code on Android:
private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(getModelPath());
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}

LoadLSTM(Activity activity) {
    try {
        tfliteModel = loadModelFile(activity);
    } catch (IOException e) {
        e.printStackTrace();
    }
    tflite = new Interpreter(tfliteModel, tfliteOptions);
    Log.d(TAG, "*** Loaded model *** " + getModelPath());
}
When I use v2, the model is loaded.
When I use v1.15, in ALL of the options I've tried, I receive errors like the following:
A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x70 in tid 17686 (CameraBackgroun), pid 17643 (flitecamerademo)
I need a simple outcome: create an LSTM model and make it work on Android with v1.15.
What am I missing? Thanks

Need to capture a still image during face detection with MLKit and Camera2

I'm developing a face detection feature with Camera2 and MLKit.
In the Developer Guide, in the Performance Tips section, they say to capture images in ImageFormat.YUV_420_888 format if using the Camera2 API, which is my case.
Then, in the Face Detector section, they recommend using an image with dimensions of at least 480x360 pixels for real-time face recognition, which again is my case.
OK, let's go! Here is my code, which works well:
private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {
// Open the selected camera
cameraDevice = openCamera(cameraManager, getCameraId(), cameraHandler)
val previewSize = if (isPortrait) {
Size(RECOMMANDED_CAPTURE_SIZE.width, RECOMMANDED_CAPTURE_SIZE.height)
} else {
Size(RECOMMANDED_CAPTURE_SIZE.height, RECOMMANDED_CAPTURE_SIZE.width)
}
// Initialize an image reader which will be used to display a preview
imageReader = ImageReader.newInstance(
previewSize.width, previewSize.height, ImageFormat.YUV_420_888, IMAGE_BUFFER_SIZE)
// Retrieve preview's frame and run detector
imageReader.setOnImageAvailableListener({ reader ->
lifecycleScope.launch(Dispatchers.Main) {
val image = reader.acquireNextImage()
logD { "Image available: ${image.timestamp}" }
faceDetector.runFaceDetection(image, getRotationCompensation())
image.close()
}
}, imageReaderHandler)
// Creates list of Surfaces where the camera will output frames
val targets = listOf(viewfinder.holder.surface, imageReader.surface)
// Start a capture session using our open camera and list of Surfaces where frames will go
session = createCaptureSession(cameraDevice, targets, cameraHandler)
val captureRequest = cameraDevice.createCaptureRequest(
CameraDevice.TEMPLATE_PREVIEW).apply {
addTarget(viewfinder.holder.surface)
addTarget(imageReader.surface)
}
// This will keep sending the capture request as frequently as possible until the
// session is torn down or session.stopRepeating() is called
session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)
}
Now, I want to capture a still image... and this is my problem, because, ideally, I want:
a full-resolution image or, at least, one bigger than 480x360
in JPEG format, to be able to save it
The Camera2Basic sample demonstrates how to capture an image (the Video and SlowMotion samples crash), and the MLKit sample uses the very old Camera API! Fortunately, I've succeeded in mixing these samples to develop my feature, but I failed to capture a still image at a different resolution.
I think I have to stop the preview session and recreate one for the image capture, but I'm not sure...
What I have done is the following, but it captures images at 480x360:
session.stopRepeating()
// Unset the image reader listener
imageReader.setOnImageAvailableListener(null, null)
// Initialize an new image reader which will be used to capture still photos
// imageReader = ImageReader.newInstance(768, 1024, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)
// Start a new image queue
val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
imageReader.setOnImageAvailableListener({ reader ->
val image = reader.acquireNextImage()
logD {"[Still] Image available in queue: ${image.timestamp}"}
if (imageQueue.size >= IMAGE_BUFFER_SIZE - 1) {
imageQueue.take().close()
}
imageQueue.add(image)
}, imageReaderHandler)
// Creates list of Surfaces where the camera will output frames
val targets = listOf(viewfinder.holder.surface, imageReader.surface)
val captureRequest = createStillCaptureRequest(cameraDevice, targets)
session.capture(captureRequest, object: CameraCaptureSession.CaptureCallback() {
override fun onCaptureCompleted(
session: CameraCaptureSession,
request: CaptureRequest,
result: TotalCaptureResult) {
super.onCaptureCompleted(session, request, result)
val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
logD {"Capture result received: $resultTimestamp"}
// Set a timeout in case image captured is dropped from the pipeline
val exc = TimeoutException("Image dequeuing took too long")
val timeoutRunnable = Runnable {
continuation.resumeWithException(exc)
}
imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)
// Loop in the coroutine's context until an image with matching timestamp comes
// We need to launch the coroutine context again because the callback is done in
// the handler provided to the `capture` method, not in our coroutine context
@Suppress("BlockingMethodInNonBlockingContext")
lifecycleScope.launch(continuation.context) {
while (true) {
// Dequeue images while timestamps don't match
val image = imageQueue.take()
if (image.timestamp != resultTimestamp)
continue
logD {"Matching image dequeued: ${image.timestamp}"}
// Unset the image reader listener
imageReaderHandler.removeCallbacks(timeoutRunnable)
imageReader.setOnImageAvailableListener(null, null)
// Clear the queue of images, if there are left
while (imageQueue.size > 0) {
imageQueue.take()
.close()
}
// Compute EXIF orientation metadata
val rotation = getRotationCompensation()
val mirrored = cameraFacing == CameraCharacteristics.LENS_FACING_FRONT
val exifOrientation = computeExifOrientation(rotation, mirrored)
logE {"captured image size (w/h): ${image.width} / ${image.height}"}
// Build the result and resume progress
continuation.resume(CombinedCaptureResult(
image, result, exifOrientation, imageReader.imageFormat))
// There is no need to break out of the loop, this coroutine will suspend
}
}
}
}, cameraHandler)
}
If I uncomment the new ImageReader instantiation, I get this exception:
java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
Can anyone help me?
This IllegalArgumentException:
java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
... obviously refers to imageReader.surface.
Meanwhile (with CameraX) this works differently; see CameraFragment.kt ...
Issue #197: Firebase Face Detection API issue while using the CameraX API;
there might soon be a sample application matching your use case.
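As a side note on the exception itself: in Camera2, a CaptureRequest may only target Surfaces that were passed to createCaptureSession() when the session was configured, so an ImageReader created after the session already exists cannot be used without building a new session. A minimal sketch of my own (Java, with hypothetical names such as previewSurface, previewReader, and cameraHandler) of registering both the preview and the still-capture surfaces up front:

import java.util.Arrays;
import java.util.List;
import android.graphics.ImageFormat;
import android.hardware.camera2.CameraCaptureSession;
import android.media.ImageReader;
import android.util.Log;
import android.view.Surface;

// All output Surfaces must be registered when the session is created.
ImageReader stillReader = ImageReader.newInstance(1024, 768, ImageFormat.YUV_420_888, 3);

List<Surface> outputs = Arrays.asList(
        previewSurface,              // viewfinder
        previewReader.getSurface(),  // low-resolution frames fed to MLKit
        stillReader.getSurface());   // higher-resolution still capture

cameraDevice.createCaptureSession(outputs, new CameraCaptureSession.StateCallback() {
    @Override public void onConfigured(CameraCaptureSession session) {
        // Run the repeating preview/detection request here. For the still shot, build a
        // request targeting stillReader.getSurface(); it is valid because that surface
        // was part of this session's configuration.
    }
    @Override public void onConfigureFailed(CameraCaptureSession session) {
        Log.e("Camera", "Capture session configuration failed");
    }
}, cameraHandler);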
ImageReader is sensitive to the choice of format and/or combination of usage flags. The documentation points out that certain combinations of format may be unsupported. On some Android devices (perhaps some older phone models) you might find that the IllegalArgumentException is not thrown when using the JPEG format, but that doesn't help much: you want something versatile.
What I have done in the past is to use the ImageFormat.YUV_420_888 format (this will be backed by the hardware and the ImageReader implementation). This format does not contain pre-optimizations that prevent the application from accessing the image via the internal array of planes. I notice you have already used it successfully in your initializeCamera() method.
You may then extract the image data from the frame you want:
Image.Plane[] planes = img.getPlanes();
// Plane buffers are direct ByteBuffers, so copy them out rather than calling array();
// note this grabs only the Y plane - see the fuller NV21 sketch below for the chroma planes.
ByteBuffer yBuffer = planes[0].getBuffer();
byte[] data = new byte[yBuffer.remaining()];
yBuffer.get(data);
and then via a Bitmap create the still image using JPEG compression, PNG, or whichever encoding you choose.
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] imageBytes = out.toByteArray();
Bitmap bitmap= BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
ByteArrayOutputStream out2 = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 75, out2);
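Note that the snippet above only copies the luma plane; for a correct color JPEG, all three YUV_420_888 planes have to be packed into an NV21 buffer (full-resolution Y followed by interleaved V/U), honoring each plane's row and pixel stride. A minimal sketch of that conversion (my own helper; yuv420888ToNv21 is not an Android API):

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.media.Image;

static byte[] yuv420888ToNv21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int ySize = width * height;
    byte[] nv21 = new byte[ySize + ySize / 2];

    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];

    // Copy the luma plane row by row (rowStride can be larger than width).
    ByteBuffer yBuf = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    for (int row = 0; row < height; row++) {
        yBuf.position(row * yRowStride);
        yBuf.get(nv21, row * width, width);
    }

    // Interleave chroma as V/U (NV21 order), honoring row and pixel strides.
    ByteBuffer uBuf = uPlane.getBuffer();
    ByteBuffer vBuf = vPlane.getBuffer();
    int rowStride = uPlane.getRowStride();
    int pixelStride = uPlane.getPixelStride();
    int pos = ySize;
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int index = row * rowStride + col * pixelStride;
            nv21[pos++] = vBuf.get(index);
            nv21[pos++] = uBuf.get(index);
        }
    }
    return nv21;
}

// Usage: convert the camera frame and compress it to JPEG.
static byte[] toJpeg(Image image) {
    byte[] nv21 = yuv420888ToNv21(image);
    YuvImage yuvImage = new YuvImage(nv21, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), 90, out);
    return out.toByteArray();
}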

TensorFlow Lite 2.0 advanced GPU usage on Android with C++

I am new to TensorFlow. I built the TensorFlow Lite libraries from source. I am trying to use TensorFlow for face recognition; this is one part of my project. I have to use GPU memory for input/output, e.g. input data: an OpenGL texture, output data: an OpenGL texture. Unfortunately, this information is outdated: https://www.tensorflow.org/lite/performance/gpu_advanced. I tried to use gpu::gl::InferenceBuilder to build a gpu::gl::InferenceRunner, and I have a problem: I don't understand how I can get the model in GraphFloat32 (Model) format, and the TfLiteContext.
An example of my experimental code:
using namespace tflite::gpu;
using namespace tflite::gpu::gl;
const TfLiteGpuDelegateOptionsV2 options = {
.inference_preference = TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED,
.is_precision_loss_allowed = 1 // FP16
};
tfGPUDelegate = TfLiteGpuDelegateV2Create(&options);
if (interpreter->ModifyGraphWithDelegate(tfGPUDelegate) != kTfLiteOk) {
__android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "GPU Delegate hasn't been created");
return ;
} else {
__android_log_print(ANDROID_LOG_INFO, "Tensorflow", "GPU Delegate has been created");
}
InferenceEnvironmentOptions envOption;
InferenceEnvironmentProperties properties;
auto envStatus = NewInferenceEnvironment(envOption, &env, &properties);
if (envStatus.ok()){
__android_log_print(ANDROID_LOG_INFO, "Tensorflow", "Inference environment has been created");
} else {
__android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Inference environment hasn't been created");
__android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Message: %s", envStatus.error_message().c_str());
}
InferenceOptions builderOptions;
builderOptions.usage = InferenceUsage::SUSTAINED_SPEED;
builderOptions.priority1 = InferencePriority::MIN_LATENCY;
builderOptions.priority2 = InferencePriority::AUTO;
builderOptions.priority3 = InferencePriority::AUTO;
//The last part requires a model
// GraphFloat32* graph;
// TfLiteContext* tfLiteContex;
//
// auto buildStatus = BuildModel(tfLiteContex, delegate_params, &graph);
// if (buildStatus.ok()){}
You may look at the function BuildFromFlatBuffer (https://github.com/tensorflow/tensorflow/blob/6458d346470158605ecb5c5ba6ad390ae0dc6014/tensorflow/lite/delegates/gpu/common/testing/tflite_model_reader.cc). It creates an Interpreter and builds the graph from it.
Mediapipe also uses InferenceRunner; you may find it useful in these files:
https://github.com/google/mediapipe/blob/master/mediapipe/calculators/tflite/tflite_inference_calculator.cc
https://github.com/google/mediapipe/blob/ecb5b5f44ab23ea620ef97a479407c699e424aa7/mediapipe/util/tflite/tflite_gpu_runner.cc

Google Cloud Storage JSON API (experimental) from an Android app

On Android I am able to upload an object OK. But when I try to download it, I get the following error on getObject.executeMediaAndDownloadTo(out):
java.lang.IllegalArgumentException: Type must be in the 'maintype/subtype; parameter=value' format
Code is from the Google example:
Storage.Objects.Get getObject = storage.objects().get("bucket", "myObject");
if (getMetadata == true) {
getObject.setAlt("json"); // Temporary workaround.
StorageObject object = getObject.execute();
} else {
// Downloading data.
out = new ByteArrayOutputStream();
// If you're not in AppEngine, download the whole thing in one request, if possible.
// NOTE: As of right now, this will not retry on retryable failure.
// http://code.google.com/p/google-api-java-client/issues/detail?id=579
getObject.getMediaHttpDownloader().setDirectDownloadEnabled(true);
getObject.executeMediaAndDownloadTo(out);
}
