I am trying to use the TensorFlow Lite GPU delegate on Android, with the library (.so files) built from source from the master branch of the repo. The problem: the ModifyGraphWithDelegate function always returns an error, and the following error message appears in the logs:
2019-04-22 15:21:16.212 688-688/com.my.app E/tflite: TfLiteGpuDelegate Prepare: Shader compilation failed: ERROR: 0:6: 'unknown' : not a legal layout qualifier id
ERROR: 0:6: 'unknown' : Syntax error: syntax error
INTERNAL ERROR: no main() function!
ERROR: 2 compilation errors. No code generated.
2019-04-22 15:21:16.212 688-688/com.my.app E/tflite: Node number 54 (TfLiteGpuDelegate) failed to prepare.
If I use the prebuilt Java/JNI library ('org.tensorflow:tensorflow-lite:0.0.0-gpu-experimental'), as in the official example project, there are no such errors. But I really need to use the C++ interface for my cross-platform code.
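In case it's relevant, my setup looks roughly like this (a trimmed sketch: BuildGpuInterpreter and the model path are placeholders, error handling is elided, and the delegate API is the one declared in tensorflow/lite/delegates/gpu/gl_delegate.h on master):

#include <memory>
#include "tensorflow/lite/delegates/gpu/gl_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

std::unique_ptr<tflite::Interpreter> BuildGpuInterpreter(const char* model_path) {
  auto model = tflite::FlatBufferModel::BuildFromFile(model_path);
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // nullptr selects the default delegate options.
  TfLiteDelegate* delegate = TfLiteGpuDelegateCreate(nullptr);

  // This is the call that fails with the shader error quoted above.
  // (In real code the delegate must outlive the interpreter and be
  // released with TfLiteGpuDelegateDelete afterwards.)
  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    TfLiteGpuDelegateDelete(delegate);
    return nullptr;
  }
  return interpreter;
}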
Any thoughts / suggestions appreciated.
If you're building a native shared library, you might need to load the .so library manually.
See https://groups.google.com/a/tensorflow.org/forum/#!topic/tflite/5YhFsCFtKi4
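For example, from native code (the library name is an assumption based on the TFLite GPU build targets; check it against your build):

#include <dlfcn.h>
#include <android/log.h>

bool LoadGpuDelegateLib() {
  // Explicitly load the GPU delegate library before creating the delegate.
  void* handle = dlopen("libtensorflowlite_gpu_gl.so", RTLD_NOW);
  if (handle == nullptr) {
    __android_log_print(ANDROID_LOG_ERROR, "tflite",
                        "dlopen failed: %s", dlerror());
    return false;
  }
  return true;
}

The Java-side equivalent is a System.loadLibrary call before any native method runs.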
I finally made it work. The internal cause of the error is still unknown to me, but the point is:
The master-branch version of the TFLite GPU delegate for Android fails to prepare the standard output-node combination for a regression task, flatten + dense, for running on the GPU.
Replacing it with reshape + pointwise (1x1) convolution + squeeze makes it work fine. The substitution is safe because a dense layer on a flattened vector of C elements and a 1x1 convolution on the same data reshaped to [1, 1, C] both compute y_j = sum_i(W_ji * x_i) + b_j.
Related
I'm trying to run a C++ project on Android. The following declaration appears in one of the source files (part of a big project):
std::unordered_map<int, std::shared_ptr<Frame>, std::hash<int>, std::equal_to<int>,
                   Eigen::aligned_allocator<std::pair<const int, std::shared_ptr<Frame>>>> idToKeyFrame;
The NDK build fails with the following:
sysroot/usr/include/c++/v1/unordered_map:1684:5: error: static_assert failed due to requirement 'is_same<value_type, typename allocator_type::value_type>::value' "Invalid allocator::value_type"
I cannot spot which part of the declaration is incompatible. Please help me fix this error.
cplusplus.com reference
Update
When I modify the allocator (as suggested in a comment by Marc) to Eigen::aligned_allocator<std::pair<int, std::shared_ptr<Frame>>>, the error message remains the same but the line number changes:
sysroot/usr/include/c++/v1/unordered_map:854:5: error: static_assert failed due to requirement 'is_same<value_type, typename allocator_type::value_type>::value' "Invalid allocator::value_type"
Update++
According to this answer:
libstdc++ does not static_assert, libstdc++ ignores the exact type of the allocator and rebinds it to the container's value type.
and
Clang/libc++ are not forgiving
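To see in isolation what libc++ is enforcing, here is a minimal sketch using std::allocator as a stand-in (so it is independent of Eigen): the allocator's value_type must be exactly the map's value_type, std::pair<const Key, T>, const included.

#include <memory>
#include <unordered_map>
#include <utility>

struct Frame {};
using Value = std::pair<const int, std::shared_ptr<Frame>>;  // note the const key

// Compiles under libc++: allocator::value_type matches the map's value_type exactly.
std::unordered_map<int, std::shared_ptr<Frame>, std::hash<int>, std::equal_to<int>,
                   std::allocator<Value>> good;

// Trips the static_assert under libc++: pair<int, ...> (non-const key) is not
// the container's value_type.
// std::unordered_map<int, std::shared_ptr<Frame>, std::hash<int>, std::equal_to<int>,
//                    std::allocator<std::pair<int, std::shared_ptr<Frame>>>> bad;

So dropping the const (as in the first update) moves the declaration further from what libc++ wants; if the const form still fails with Eigen::aligned_allocator, the value_type that allocator exposes in the Eigen version being used is the remaining suspect.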
Yet another Update
I have been able to get past the error by commenting out the static_assert line in the libc++ unordered_map header. This is a last resort for now (until I find a proper solution).
java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'Iterator' with these attrs.
I am trying to run a trained TensorFlow model on an Android device, in a newly created Android Studio project. The model uses an Iterator operation in the inference graph.
I am using
Android Studio 3.1.3
Android NDK r14b
Android SDK v28
Bazel 0.14.1
TensorFlow 1.8.0 (for training and for creating the .so and .jar files)
I generated a TensorFlow .jar file and .so file using the Bazel guide in the README at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/android The .so file size was 14.1 MB.
I added the .jar under the libs directory of the Android Studio project, and the .so file under the jniLibs/armeabi-v7a directory. I got the following error after attempting to make an inference in the Android Studio application.
I/TensorFlowInferenceInterface: Checking to see if TensorFlow native methods are already loaded
E/art: No implementation found for long org.tensorflow.contrib.android.RunStats.allocate() (tried Java_org_tensorflow_contrib_android_RunStats_allocate and Java_org_tensorflow_contrib_android_RunStats_allocate__)
I/TensorFlowInferenceInterface: TensorFlow native methods not found, attempting to load via tensorflow_inference
I/TensorFlowInferenceInterface: Successfully loaded TensorFlow native methods (RunStats error may be ignored)
I/TensorFlowInferenceInterface: Model load took 764ms, TensorFlow version: 1.8.0
I/TensorFlowInferenceInterface: Successfully loaded model from 'file:///android_asset/laptop_frozen_graph_init_tables.pb'
E/TensorFlowInferenceInterface: Failed to run TensorFlow inference with inputs:[batch_size, src_data], outputs:[index_to_string_Lookup]
E/AndroidRuntime: FATAL EXCEPTION: Thread-5769
Process: com.example.student.projecttest, PID: 6805
java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'Iterator' with these attrs. Registered devices: [CPU], Registered kernels:
<no registered kernels>
[[Node: Iterator = Iterator[container="infer", output_shapes=[[?,?], [?]], output_types=[DT_INT32, DT_INT32], shared_name=""]()]]
at org.tensorflow.Session.run(Native Method)
at org.tensorflow.Session.access$100(Session.java:48)
at org.tensorflow.Session$Runner.runHelper(Session.java:298)
at org.tensorflow.Session$Runner.run(Session.java:248)
at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:228)
at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:197)
at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:187)
at com.example.student.projecttest.MainActivity.translateToFrench(MainActivity.java:79)
at com.example.student.projecttest.MainActivity$1$1.run(MainActivity.java:42)
I also ran the tensorflow/python/tools/print_selective_registration_header tool to generate an ops_to_register.h file from the frozen .pb model. In ops_to_register.h you can see the Iterator ops:
...
|| isequal(op, "GreaterEqual")
|| isequal(op, "HashTableV2")
|| isequal(op, "Identity")
|| isequal(op, "InitializeTableFromTextFileV2")
|| isequal(op, "Iterator")
|| isequal(op, "IteratorGetNext")
|| isequal(op, "Less")
|| isequal(op, "LessEqual")
|| isequal(op, "LogicalAnd")
|| isequal(op, "LogicalNot")
...
and
...
"LookupTableOp<lookup::HashTable<int64, string>, int64, string>",
"LookupTableOp<lookup::HashTable<string, int64>, string, int64>",
"IdentityOp",
"InitializeTableFromTextFileOp",
"IteratorHandleOp",
"IteratorGetNextOp",
"BinaryOp<CPUDevice, functor::less<int32>>",
...
So the ops_to_register.h file does find the ops.
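For background, here is my understanding of how that header is consumed, as a simplified C++ paraphrase (the real macros live in tensorflow/core/framework/selective_registration.h; this is not the actual TensorFlow source):

#if defined(SELECTIVE_REGISTRATION)
#include "ops_to_register.h"  // generated: defines SHOULD_REGISTER_OP and friends
#else
#define SHOULD_REGISTER_OP(op) true  // without the define, every op registers
#endif

// Each kernel registration site is guarded by the predicate, conceptually:
//   if (SHOULD_REGISTER_OP("Iterator")) { /* register the kernel */ }
// So the header can only filter kernels that are compiled into the .so in the
// first place; it cannot pull in a kernel whose source file the build target
// never compiles.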
I tried other Bazel commands to see if they would include the correct operation:
bazel build -c opt --copt="-DSELECTIVE_REGISTRATION" --copt="-DSUPPORT_SELECTIVE_REGISTRATION" //tensorflow/contrib/android:libtensorflow_inference.so --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cpu=armeabi-v7a
I ran this after moving the ops_to_register.h file to the tensorflow/core/framework folder; it produced a 4.1 MB .so file.
bazel build -c opt --copt=-D__ANDROID_TYPES_FULL__ //tensorflow/contrib/android:libtensorflow_inference.so --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cpu=armeabi-v7a
I believe this Bazel command should include everything from the TensorFlow library; it produced a 17.2 MB file.
I also tried adding this Gradle dependency:
implementation 'org.tensorflow:tensorflow-android:+'
But it failed with the same error.
I can load TensorFlowInferenceInterface, get the graph from it, and iterate through all the operations and their names just fine, but it fails when making an inference.
To make sure I could run the model by fetching tensors and operations by name, I wrote a small TensorFlow Python script that makes predictions. The script runs, in order (a C++ sketch of the same sequence follows this list):
the "init_all_tables" operation, to initialize the tables.
the "MakeIterator" operation, which initializes the iterator (while feeding some text data and the batch size).
the "index_to_string_Lookup:0" tensor, to get the output.
On the Android device, using .run with any of these throws the error. For example:
inferenceInterface.run(new String[]{"init_all_tables"});
throws the same 'No OpKernel was registered' error mentioned previously.
So I assume the problem is in finding the Iterator op kernel.
I also tried changing the BUILD file as suggested here:
https://github.com/tensorflow/tensorflow/issues/11804#issuecomment-318415228
When I modified the BUILD file per that suggestion, the build threw an error at the very end, when it was close to finishing; I believe I might have done it wrong.
I'm also not sure how to change the BUILD file in tensorflow/core/kernels, if that's the way to go, because the iterator files are in the /tensorflow/core/lib/io/ directory and not under the kernels directory.
Any help would be greatly appreciated.
I built the iOS version of my app about a month ago. Then I added a few extra messages to the UI, tested with the Android version, and now when I rebuild the iOS version it fails. The error log is at: https://s3.amazonaws.com/codenameone-build-response/621a8710-2900-45a3-afdb-e3a30bdb1265-1504680431641-error.txt
At the bottom of this, the only actual failure I see is:
** ARCHIVE FAILED **
The following build commands failed:
CompileC build/Build/Intermediates/ArchiveIntermediates/MyApplication/IntermediateBuildFilesPath/MyApplication.build/Release-iphoneos/MyApplication.build/Objects-normal/arm64/com_codename1_io_websocket_WebSocketNativeImplImpl.o MyApplication-src/com_codename1_io_websocket_WebSocketNativeImplImpl.m normal arm64 objective-c com.apple.compilers.llvm.clang.1_0.compiler
(1 failure)
Failed xcodebuild step
I updated the CN1Libs a few days ago. I'm not really sure what about this is actually failing to build.
Notice that the file com_codename1_io_websocket_WebSocketNativeImplImpl.m is mentioned in the final lines, which means that's the file that failed to compile. If you search the log for mentions of com_codename1_io_websocket_WebSocketNativeImplImpl.m you will find:
src/com_codename1_io_websocket_WebSocketNativeImplImpl.m -o /var/folders/zh/kb_4hqhn4kg1h0r5dp_6htcm0000gn/T/build7085253492970683151xxx/dist/build/Build/Intermediates/ArchiveIntermediates/MyApplication/IntermediateBuildFilesPath/MyApplication.build/Release-iphoneos/MyApplication.build/Objects-normal/arm64/com_codename1_io_websocket_WebSocketNativeImplImpl.o
/var/folders/zh/kb_4hqhn4kg1h0r5dp_6htcm0000gn/T/build7085253492970683151xxx/dist/MyApplication-src/com_codename1_io_websocket_WebSocketNativeImplImpl.m:23:9: fatal error: 'com_codename1_io_websocket_WebSocket.h' file not found
#import "com_codename1_io_websocket_WebSocket.h"
^
1 error generated.
This might be a bit confusing, but it generally means you added the websockets cn1lib and didn't use it. That's a problem: our VM strips out unused code, and the websockets cn1lib needs to include the callback interface, which has now been stripped away.
I am writing a C++ module for the Nexus 7 Android kernel. I previously compiled this module successfully against the Goldfish kernel, but now, after porting the necessary changes to the Nexus 7 kernel, I am getting a compilation error. The problem seems to be with the headers: whenever I include linux/fs.h or linux/debugfs.h in the module, I get the following error:
/linux/radix-tree.h: In function 'void* radix_tree_deref_slot(void**)':
/android_kernel_grouper-android-tegra3-grouper-3.1-jb-fr2/include/linux/radix-tree.h:153:9: error: 'void*' is not a pointer-to-object type
The corresponding line in radix-tree.h has something to do with rcu_dereference().
Is the problem with the headers, with the makefile, or with faulty patching?
To find out the compilation parameters passed to gcc (or g++), run "make V=1" against the makefile. But the error:
error: 'void*' is not a pointer-to-object type
looks like a C++ error, which must come from your own code: the Android kernel does not use C++.
This seems to be solvable by explicit casting; see for example:
Error: ‘void*’ is not a pointer-to-object type
C++. Error: void is not a pointer-to-object type
etc.
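A minimal reproduction of the diagnostic itself, outside the kernel: g++ emits exactly this error when a void* is dereferenced, and the fix is to cast to a concrete pointer type first.

void demo(void* vp) {
  // int bad = *vp;                  // error: 'void*' is not a pointer-to-object type
  int* ip = static_cast<int*>(vp);   // cast to a concrete type first...
  int value = *ip;                   // ...then dereference
  (void)value;                       // silence unused-variable warnings
}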
I copied the Google Breakpad sample for Android and added it to my project. At first I had a problem getting the minidumps (I was triggering SIGSEGV errors but nothing was written to my SD card). I finally managed to get some minidumps (I don't really know how, but that's not my main problem).
My problem is that I can't dump the symbols of my native libraries; I get the following error message:
dump_syms.exe libcppinterface.so > libcppinterface.so.sym
loadDataForPdb and loadDataFromExe failed
Open failed
Thanks for your help
The Breakpad tools are not very cross-platform friendly. You need to build dump_syms on a Linux machine in order to get a dump_syms binary that can read ELF/DWARF and produce debug symbols from your Android binaries. The Windows dump_syms.exe is only used for dumping symbols from MSVC-produced PDB files.