MediaPipe: how to use the BoxTracking AAR? - android

Being new to MediaPipe, I am not familiar with the concepts of graphs, nodes, subgraphs, etc.
After building an AAR file of BoxTracking, I am unable to run it within an Android Studio Gradle-based project, because the model requires input and output parameters that are unknown to me.
I compared the HandTracking graph and the BoxTracking graph using the visualizer tool, and against a working HandTracking project with the AAR file added as a lib, I added the new required input streams and side packets as seen in the graph.
The result is always some error, mainly because something is wrong in the inputs, or because BoxTracking is a subgraph that is being used directly. How do I know which inputs are required, and the data type of each input, to run this?
2021-02-05 21:15:23.477 22514-22564/com.example.mediapipemultihandstrackingapp E/FrameProcessor: Mediapipe error:
com.google.mediapipe.framework.MediaPipeException: internal: Graph has errors:
Calculator::Open() for node "objectdetectionsubgraphgpu__TfLiteInferenceCalculator" failed: ; could not read asset: ssdlite_object_detection.tfliteer_util.cc:158)
at com.google.mediapipe.framework.Graph.nativeMovePacketToInputStream(Native Method)
at com.google.mediapipe.framework.Graph.addConsumablePacketToInputStream(Graph.java:360)
at com.google.mediapipe.components.FrameProcessor.onNewFrame(FrameProcessor.java:442)
at com.google.mediapipe.components.ExternalTextureConverter$RenderThread.renderNext(ExternalTextureConverter.java:364)
at com.google.mediapipe.components.ExternalTextureConverter$RenderThread.lambda$onFrameAvailable$0$ExternalTextureConverter$RenderThread(ExternalTextureConverter.java:309)
at com.google.mediapipe.components.-$$Lambda$ExternalTextureConverter$RenderThread$Y1vV_XyLsWZ0ebOvq-iwjQ0H3Sw.run(Unknown Source:4)
at android.os.Handler.handleCallback(Handler.java:883)
at android.os.Handler.dispatchMessage(Handler.java:100)
at android.os.Looper.loop(Looper.java:237)
at com.google.mediapipe.glutil.GlThread.run(GlThread.java:141)

The data types required as input and output were not included in the default build; the build configuration has to be modified to include box_tracker.proto and its dependencies.
https://github.com/google/mediapipe/issues/1624
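For reference, here is a minimal Java sketch of the setup this usually involves, assuming the usual EglManager setup from the MediaPipe examples. The graph asset, stream names, and side packet name below are hypothetical placeholders; the real names have to be read from your graph's .pbtxt (for example in the visualizer). Both the .binarypb graph and the .tflite model must be reachable as app assets, which is what the "could not read asset" error above is complaining about.

// Make the app's assets visible to MediaPipe's native asset loader, so the
// graph can find ssdlite_object_detection.tflite bundled under app assets.
AndroidAssetUtil.initializeNativeAssetManager(this);

// Graph and stream names are placeholders; take the real ones from the .pbtxt.
FrameProcessor processor = new FrameProcessor(
    this,
    eglManager.getNativeContext(),
    "box_tracking_mobile_gpu.binarypb", // hypothetical binary graph asset
    "input_video",                      // hypothetical input video stream
    "output_video");                    // hypothetical output video stream

// Supply every input side packet the graph declares before sending frames;
// the packet name and type here are illustrative only.
AndroidPacketCreator packetCreator = processor.getPacketCreator();
Map<String, Packet> inputSidePackets = new HashMap<>();
inputSidePackets.put("example_side_packet", packetCreator.createInt32(1));
processor.setInputSidePackets(inputSidePackets);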

Related

Issue in tflite based estimator model loading on Android device

I generated a Boosted Trees estimator on structured data and tried to use that estimator to predict on a mobile device. But I am facing the issue that the app generates the following error in Android Studio when I start it.
2020-09-30 12:26:34.892 8966-8966/com.example.tflitetest W/System.err: java.lang.IllegalStateException: Internal error: Unexpected failure when preparing tensor allocations: Encountered unresolved custom op: ParseExampleV2.
2020-09-30 12:26:34.892 8966-8966/com.example.tflitetest W/System.err: Node number 0 (ParseExampleV2) failed to prepare.
I followed this guide for estimator training; the code in question is below:
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    tf.feature_column.make_parse_example_spec([input_column]))
estimator_base_path = os.path.join(tmpdir, 'from_estimator')
estimator_path = estimator.export_saved_model(estimator_base_path, serving_input_fn)
(If I am doing something wrong, kindly let me know about it.)
All data files (training/testing data, the sample Android app, and .tflite models) are available at the following Google Drive link:
https://drive.google.com/drive/folders/1-qDhCJGNZyqu_-XoMyW0kvimyiDdMndO?usp=sharing
Edit 1: Usage of ParseExample is included in the question.

Failed to find code-generated model provider - AWS Amplify

I'm getting the below error:
Failed to find a code-generated model provider.
The AWS Amplify code which throws this error:
Amplify.addPlugin(new AWSApiPlugin());
Amplify.addPlugin(new AWSDataStorePlugin());
Amplify.configure(context);
I am following the below tutorials:
https://docs.amplify.aws/start/getting-started/generate-model/q/integration/android
https://docs.amplify.aws/cli/graphql-transformer/overview
I have tried generating models, and they are generated successfully, but I still get the above exception when running the app.
When you generate models, you should expect to find various code-generated files in your application project. One of them will be app/src/main/java/com/amplifyframework/datastore/generated/model/AmplifyModelProvider.java.
When you build your app, Android Studio will compile that Java file into a class file and include it in your binary.
At runtime, on the phone, the AWSDataStorePlugin() constructor will attempt to find that same AmplifyModelProvider by means of reflection.
I would verify that:
You do actually have the code-generated AmplifyModelProvider;
It is being built successfully;
It is not being stripped out by ProGuard/R8 minification.
If you're still not able to get it working, just use the one-argument version of the AWSDataStorePlugin(...) constructor instead. That version allows you to explicitly specify the model provider and does not use runtime reflection.
Amplify.addPlugin(AWSDataStorePlugin(AmplifyModelProvider.getInstance()))
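In Java, matching the question's snippet, the whole configuration would then look roughly like this (a sketch assuming Amplify Android 1.x, where AWSDataStorePlugin exposes a constructor that accepts a ModelProvider):

// Hand the code-generated provider to the DataStore plugin explicitly, so it
// does not have to locate AmplifyModelProvider via reflection at runtime.
Amplify.addPlugin(new AWSApiPlugin());
Amplify.addPlugin(new AWSDataStorePlugin(AmplifyModelProvider.getInstance()));
Amplify.configure(context);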
Your datasource needs to be updated. Try running the modelGen task, then the amplifyPush task (commands shown below the table):
Category | Resource name     | Operation | Provider plugin
Api      | amplifyDatasource | Update    | awscloudformation
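With the Amplify Gradle plugin applied, the two tasks can be run from the project root (assuming the standard Gradle wrapper):
./gradlew modelGen
./gradlew amplifyPush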

TensorFlow Lite GPU delegate failure

I am trying to use TensorFlow Lite with the GPU delegate on Android. I am using the lib version (.so files) built from the sources on the master branch of the repo. The problem is that the ModifyGraphWithDelegate function always returns an error, and there is the following error message in the logs:
2019-04-22 15:21:16.212 688-688/com.my.app E/tflite: TfLiteGpuDelegate Prepare: Shader compilation failed: ERROR: 0:6: 'unknown' : not a legal layout qualifier id
ERROR: 0:6: 'unknown' : Syntax error: syntax error
INTERNAL ERROR: no main() function!
ERROR: 2 compilation errors. No code generated.
2019-04-22 15:21:16.212 688-688/com.my.app E/tflite: Node number 54 (TfLiteGpuDelegate) failed to prepare.
If I use the prebuilt Java/JNI lib version ('org.tensorflow:tensorflow-lite:0.0.0-gpu-experimental'), as in the official example project, there are no such errors. But I really need to use the C++ interface for my cross-platform code.
Any thoughts / suggestions appreciated.
If you're building a native shared library, then you might need to manually load the .so library.
See https://groups.google.com/a/tensorflow.org/forum/#!topic/tflite/5YhFsCFtKi4
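For example, if your cross-platform C++ code (TFLite plus the GPU delegate) is compiled into its own shared library, the usual place to load it is a static initializer in the Java class that calls into it; the library name and native method below are hypothetical:

public class NativeBridge {
    static {
        // "myapp_native" is a placeholder: use the name of the .so you bundle
        // under app/src/main/jniLibs/<abi>, without the "lib" prefix or ".so".
        System.loadLibrary("myapp_native");
    }

    // Native entry point implemented in the C++ library; signature is illustrative.
    public static native boolean initTfLiteGpu();
}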
I finally made it work. The internal reason for the error is still completely unknown to me, but the point is:
The (master branch) version of the TFLite GPU delegate for Android that I used fails to properly prepare the standard output-node combination for a regression task, flatten + dense, for running on the GPU.
If you replace it with reshape + pointwise convolution + squeeze, it works fine.

Does logback-android support DBAppender?

The Android implementation of logback appears to be missing the DBAppender class.
Here's my relevant logback appender config, located in assets/logback.xml.
<appender name="DB" class="ch.qos.logback.classic.db.DBAppender">
<connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
<driverClass>com.mysql.jdbc.Driver</driverClass>
<url>jdbc:mysql://10.2.2.222:3306/logback</url>
<user>username</user>
<password>thepassword</password>
</connectionSource>
</appender>
And gradle:
implementation 'org.slf4j:slf4j-api:1.7.25'
implementation 'com.github.tony19:logback-android:1.1.1-12'
and the resulting error in my logcat:
20:40:50,225 |-ERROR in ch.qos.logback.core.joran.action.AppenderAction -
Could not create an Appender of type [ch.qos.logback.classic.db.DBAppender].
ch.qos.logback.core.util.DynamicClassLoadingException:
Failed to instantiate type ch.qos.logback.classic.db.DBAppender
at ch.qos.logback.core.util.DynamicClassLoadingException:
Failed to instantiate type ch.qos.logback.classic.db.DBAppender
and
Caused by: ch.qos.logback.core.util.DynamicClassLoadingException:
Failed to instantiate type ch.qos.logback.classic.db.DBAppender
and
Caused by: java.lang.ClassNotFoundException:
Didn't find class "ch.qos.logback.classic.db.DBAppender"
Logback is otherwise working properly: if I comment out the database stuff and just have it log to a file, it instantiates correctly and writes text to the log file.
I've found examples of people using the DBAppender, but haven't found any that appear to be Android-based.
PS: I've also tried the other option, DataSourceConnectionSource (as opposed to the DriverManagerConnectionSource shown), but it actually uses the same appender and so produces the same error. I also can't find any references to DBAppender in the GitHub files.
logback-android currently does not support DBAppender, and there are no firm plans to carry over that feature. The only database appender supported is SQLiteAppender.
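If logging to a local database is acceptable, a minimal assets/logback.xml sketch using that appender would look something like this (class name as given in the logback-android docs; verify the available parameters against your version):

<configuration>
  <appender name="sqlite" class="ch.qos.logback.classic.android.SQLiteAppender" />
  <root level="DEBUG">
    <appender-ref ref="sqlite" />
  </root>
</configuration>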
It should be relatively simple to pull DBAppender's relevant source from logback into its own library that could be used in logback-android. For a future major release, I plan on splitting out several built-in appenders in this way to minimize the library size.

No OpKernel was registered to support Op 'Cos' when running inference on Android

I've trained a TensorFlow model that, among other things, performs input preparation involving a tf.cos operation. I've now integrated this model into an Android application, but it cannot perform inference and produces the error "No OpKernel was registered to support Op 'Cos'" (full error below).
What I've tried:
I've built a selective registration header and made sure that ops_to_register.h contains the Cos operator.
I've rebuilt libtensorflow_inference.so as suggested in a related TensorFlow issue, making sure that ops_to_register.h is used while building the .so file.
I placed the new libtensorflow_inference.so file in my app's app/src/main/jniLibs/<architecture>, making sure that the new .so file is the one used by the app.
I still get the same error.
Also, not sure if this is related, but cwise_op_cos.cc is missing in tf_op_files.txt and BUILD.
Is there something I'm doing wrong? How do I get the tf.cos operation to work on Android?
Here's the relevant excerpt from the error:
java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'Cos' with these attrs. Registered devices: [CPU], Registered kernels:
<no registered kernels>
[[Node: stft/hann_window/Cos = Cos[T=DT_FLOAT](stft/hann_window/truediv)]]
at org.tensorflow.Session.run(Native Method)
at org.tensorflow.Session.access$100(Session.java:48)
at org.tensorflow.Session$Runner.runHelper(Session.java:298)
at org.tensorflow.Session$Runner.run(Session.java:248)
at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:228)
at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:197)
at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:187)
It turns out that it is indeed necessary to make these additions to the BUILD file (tensorflow/core/kernels/BUILD) manually.
So, for example, to include the tf.cos operation in your libtensorflow_inference.so, you need to do the following:
Make sure || isequal(op, "Cos") is in the ops_to_register.h file (see my explanation above)
Add cwise_op_cos.cc to the android_extended_ops_group1 filegroup in tensorflow/core/kernels/BUILD (see the excerpt below)
Run bazel build //tensorflow/contrib/android:libtensorflow_inference.so ... for the right architecture
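Concretely, the second step amounts to an edit along these lines (an excerpt of the tensorflow/core/kernels/BUILD filegroup; the surrounding entries are elided):

filegroup(
    name = "android_extended_ops_group1",
    srcs = [
        # ... existing kernel sources ...
        "cwise_op_cos.cc",
    ],
)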
Later I even found that one of the TensorFlow developers suggested that hacking the BUILD file is the recommended way in this case :/
Android only builds a subset of the ops, You need to add the ops that you need that are not in the "commonly used set" by hacking the build files for your needs.
Source: https://github.com/tensorflow/tensorflow/issues/11804#issuecomment-318415228
