For test purposes I want to create a HIDL interface + implementation and run the combination as a system service. For that I defined the IGuotie.hal interface:
package android.hardware.guotie@2.0;
interface IGuotie {
add(int32_t i, int32_t k) generates (int32_t result);
};
The following files are used to implement the interface:
Guotie.h
#pragma once
#include <android/hardware/guotie/2.0/IGuotie.h>
#include <hidl/MQDescriptor.h>
#include <hidl/Status.h>
namespace android {
namespace hardware {
namespace guotie {
namespace V2_0 {
namespace implementation {
using ::android::hardware::hidl_array;
using ::android::hardware::hidl_memory;
using ::android::hardware::hidl_string;
using ::android::hardware::hidl_vec;
using ::android::hardware::Return;
using ::android::hardware::Void;
using ::android::sp;
struct Guotie : public IGuotie {
// Methods from ::android::hardware::guotie::V2_0::IGuotie follow.
Return<int32_t> add(int32_t i, int32_t k) override;
// Methods from ::android::hidl::base::V1_0::IBase follow.
static IGuotie* getInstance(void);
};
} // namespace implementation
} // namespace V2_0
} // namespace guotie
} // namespace hardware
} // namespace android
Guotie.cpp
#include "Guotie.h"
namespace android {
namespace hardware {
namespace guotie {
namespace V2_0 {
namespace implementation {
// Methods from ::android::hardware::guotie::V2_0::IGuotie follow.
Return<int32_t> Guotie::add(int32_t i, int32_t k) {
return i + k;
}
IGuotie *Guotie::getInstance(void) {
return new Guotie();
}
} // namespace implementation
} // namespace V2_0
} // namespace guotie
} // namespace hardware
} // namespace android
service.cpp
#define LOG_TAG "android.hardware.guotie@2.0-service"
#include <android/hardware/guotie/2.0/IGuotie.h>
#include <hidl/LegacySupport.h>
#include "Guotie.h"
using android::hardware::guotie::V2_0::IGuotie;
using android::hardware::guotie::V2_0::implementation::Guotie;
using android::hardware::configureRpcThreadpool;
using android::hardware::joinRpcThreadpool;
using android::sp;
int main() {
int res;
android::sp<IGuotie> ser = Guotie::getInstance();
ALOGE("simp main");
configureRpcThreadpool(1, true /*callerWillJoin*/);
if (ser != nullptr) {
res = ser->registerAsService();
if (res != 0)
ALOGE("Can't register instance of GuotieHardware, nullptr");
} else {
ALOGE("Can't create instance of GuotieHardware, nullptr");
}
joinRpcThreadpool();
return 0; // should never get here
}
android.hardware.guotie@2.0-service.rc
service guotieserver /vendor/bin/hw/android.hardware.guotie@2.0-service
class hal
user root
group root
seclabel u:r:su:s0
Android.bp
hidl_interface {
name: "android.hardware.guotie#2.0",
root: "android.hardware",
vndk: {
enabled: true,
},
srcs: [
"IGuotie.hal",
],
interfaces: [
"android.hidl.base#1.0",
],
gen_java: true,
}
Building results in the following error message:
FAILED: out/target/product/generic/obj/PACKAGING/vndk_intermediates/check-list-timestamp
/bin/bash -c "(( diff --old-line-format=\"Removed %L\" --new-line-format=\"Added %L\" --unchanged-line-format=\"\" build/make/target/product/gsi/29.txt out/target/product/generic/obj/PACKAGING/vndk_intermediates/libs.txt || ( echo -e \" error: VNDK library list has been changed.\\n\" \" Changing the VNDK library list is not allowed in API locked branches.\"; exit 1 )) ) && (mkdir -p out/target/product/generic/obj/PACKAGING/vndk_intermediates/ ) && (touch out/target/product/generic/obj/PACKAGING/vndk_intermediates/check-list-timestamp )"
Removed VNDK-code: android.hardware.guotie@2.0.so
Added VNDK-core: android.hardware.guotie@2.0.so
error: VNDK library list has been changed.
Changing the VNDK library list is not allowed in API locked branches.
Some articles suggested adding (in my case) android.hardware.guotie@2.0.so to build/make/target/product/vndk/28.txt. However, the vndk folder does not exist. Instead, I added it to build/make/target/product/gsi/29.txt and current.txt (in alphabetical order), but the build still fails. Any suggestions?
Adding an interface to android.hardware is usually only done by Google itself. Vendor HIDL interfaces are not part of the VNDK.
You likely should consider yourself a vendor and just remove this part from your Android.bp:
vndk: {
enabled: true,
},
and change the namespace to vendor.<you>.guotie
For more information on what the VNDK is see the official documentation: https://source.android.com/devices/architecture/vndk.
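For example, a vendor-namespaced Android.bp could look like the sketch below, where vendor.example is a placeholder for your own name (a matching hidl_package_root module for vendor.example has to exist as well):
hidl_interface {
    name: "vendor.example.guotie@2.0",
    root: "vendor.example",
    srcs: [
        "IGuotie.hal",
    ],
    interfaces: [
        "android.hidl.base@1.0",
    ],
    gen_java: true,
}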
I solved this build error when I was trying to build Android P for the Khadas VIM3.
First, please check whether out/target/product/product_name/obj/PACKAGING/vndk_intermediates/libs.txt is the same as build/make/target/product/vndk/28.txt and current.txt.
Second, please run make update-api.
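For example (the paths are illustrative; adjust the product name and API level to your tree):
diff out/target/product/product_name/obj/PACKAGING/vndk_intermediates/libs.txt \
    build/make/target/product/vndk/28.txt
make update-api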
This worked for me, and I hope my sharing could help you.
You can see more detail and my console screenshot on my GitHub :)
Related
I am running TensorFlow Lite on Android using the C API. My model requires the operator RandomStandardNormal, which was recently implemented as a custom op prototype in TensorFlow v2.4.0-rc0 here.
The TfLiteInterpreterOptionsAddCustomOp() function is declared in tensorflow/lite/c/c_api_experimental.h:
TFL_CAPI_EXPORT void TfLiteInterpreterOptionsAddCustomOp(
TfLiteInterpreterOptions* options, const char* name,
const TfLiteRegistration* registration, int32_t min_version,
int32_t max_version);
Looking at this example & thread, I am trying to use TfLiteInterpreterOptionsAddCustomOp like this:
// create model and interpreter options
TfLiteModel *model = TfLiteModelCreateFromFile("path/to/model.tflite");
TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
// register custom ops
TfLiteInterpreterOptionsAddCustomOp(options, "RandomStandardNormal", Register_RANDOM_STANDARD_NORMAL(), 1, 1);
// create the interpreter
TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
TfLiteInterpreterAllocateTensors(interpreter);
I see that the Register_RANDOM_STANDARD_NORMAL() function is defined in the tflite::ops::custom C++ namespace in tensorflow/lite/kernels/custom_ops_register.h. But when I try to include this in my C file, the compiler complains because namespace is an unknown type in C.
How can I register a custom operator using the tensorflow lite C API? Do I need to use a C++ compiler in order to use the C API with this custom operator because it was defined in C++?
NOTE: I include //tensorflow/lite/kernels:custom_ops in the Bazel BUILD deps when compiling libtensorflowlite_c.so.
It looks like this was answered on GitHub via this workaround:
https://github.com/tensorflow/tensorflow/issues/44664#issuecomment-723310060
On the TensorFlow GitHub, @jdduke suggested a temporary workaround:
Add an extern "C" wrapper declaration to custom_ops_register.h:
extern "C" {
TFL_CAPI_EXPORT TfLiteRegistration* TfLiteRegisterRandomStandardNormal();
}
Add an extern "C" wrapper implementation to random_standard_normal.cc:
extern "C" {
TFL_CAPI_EXPORT TfLiteRegistration* TfLiteRegisterRandomStandardNormal() {
return tflite::ops::custom::Register_RANDOM_STANDARD_NORMAL();
}
}
Ensure //tensorflow/lite/kernels:custom_ops is included as a dependency in tensorflow/lite/c/BUILD:
tflite_cc_shared_object(
name = "tensorflowlite_c",
linkopts = select({
"//tensorflow:ios": [
"-Wl,-exported_symbols_list,$(location //tensorflow/lite/c:exported_symbols.lds)",
],
"//tensorflow:macos": [
"-Wl,-exported_symbols_list,$(location //tensorflow/lite/c:exported_symbols.lds)",
],
"//tensorflow:windows": [],
"//conditions:default": [
"-z defs",
"-Wl,--version-script,$(location //tensorflow/lite/c:version_script.lds)",
],
}),
per_os_targets = True,
deps = [
":c_api",
":c_api_experimental",
":exported_symbols.lds",
":version_script.lds",
"//tensorflow/lite/kernels:custom_ops", # here
],
)
Modify my C++ code to call this new wrapper function:
TfLiteInterpreterOptionsAddCustomOp(options, "RandomStandardNormal", TfLiteRegisterRandomStandardNormal(), 1, 1);
And it worked! My tensors finally allocated on Android :)
I am new to TensorFlow. I built the TensorFlow Lite libraries from source. I am trying to use TensorFlow for face recognition; this is one part of my project. I have to use GPU memory for input/output, e.g. input data: an OpenGL texture, output data: an OpenGL texture. Unfortunately, this information is outdated: https://www.tensorflow.org/lite/performance/gpu_advanced. I tried to use gpu::gl::InferenceBuilder for building a gpu::gl::InferenceRunner, and I have a problem: I don't understand how I can get the model in GraphFloat32 format or obtain a TfLiteContext.
An example of my experimental code:
using namespace tflite::gpu;
using namespace tflite::gpu::gl;
const TfLiteGpuDelegateOptionsV2 options = {
.inference_preference = TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED,
.is_precision_loss_allowed = 1 // FP16
};
tfGPUDelegate = TfLiteGpuDelegateV2Create(&options);
if (interpreter->ModifyGraphWithDelegate(tfGPUDelegate) != kTfLiteOk) {
__android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "GPU Delegate hasn't been created");
return ;
} else {
__android_log_print(ANDROID_LOG_INFO, "Tensorflow", "GPU Delegate has been created");
}
std::unique_ptr<InferenceEnvironment> env;
InferenceEnvironmentOptions envOption;
InferenceEnvironmentProperties properties;
auto envStatus = NewInferenceEnvironment(envOption, &env, &properties);
if (envStatus.ok()){
__android_log_print(ANDROID_LOG_INFO, "Tensorflow", "Inference environment has been created");
} else {
__android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Inference environment hasn't been created");
__android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Message: %s", envStatus.error_message().c_str());
}
InferenceOptions builderOptions;
builderOptions.usage = InferenceUsage::SUSTAINED_SPEED;
builderOptions.priority1 = InferencePriority::MIN_LATENCY;
builderOptions.priority2 = InferencePriority::AUTO;
builderOptions.priority3 = InferencePriority::AUTO;
//The last part requires a model
// GraphFloat32* graph;
// TfLiteContext* tfLiteContex;
//
// auto buildStatus = BuildModel(tfLiteContex, delegate_params, &graph);
// if (buildStatus.ok()){}
You may look at the function BuildFromFlatBuffer (https://github.com/tensorflow/tensorflow/blob/6458d346470158605ecb5c5ba6ad390ae0dc6014/tensorflow/lite/delegates/gpu/common/testing/tflite_model_reader.cc). It creates an Interpreter and builds a graph from it.
MediaPipe also uses InferenceRunner; you may find it used in these files:
https://github.com/google/mediapipe/blob/master/mediapipe/calculators/tflite/tflite_inference_calculator.cc
https://github.com/google/mediapipe/blob/ecb5b5f44ab23ea620ef97a479407c699e424aa7/mediapipe/util/tflite/tflite_gpu_runner.cc
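As a rough sketch (this assumes the BuildFromFlatBuffer signature from tflite_model_reader.h at the commit above; verify it against your TensorFlow version):
#include <android/log.h>
#include "tensorflow/lite/delegates/gpu/common/model.h"
#include "tensorflow/lite/delegates/gpu/common/testing/tflite_model_reader.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Translate a .tflite file into the GraphFloat32 that an
// InferenceBuilder consumes, without going through TfLiteContext.
bool LoadGraph(const char* path, tflite::gpu::GraphFloat32* graph) {
    auto model = tflite::FlatBufferModel::BuildFromFile(path);
    if (model == nullptr) return false;
    tflite::ops::builtin::BuiltinOpResolver op_resolver;
    auto status = tflite::gpu::BuildFromFlatBuffer(*model, op_resolver, graph);
    if (!status.ok()) {
        __android_log_print(ANDROID_LOG_ERROR, "Tensorflow",
                            "BuildFromFlatBuffer failed");
        return false;
    }
    return true;
}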
I can trace JNI APIs very easily using this code:
Interceptor.attach(Module.findExportByName("lib.so", "somefunction"), {
    onEnter: function (args) {
        args[1] = ptr(0);
        send("somefunction(" + Memory.readCString(args[0]) + "," + args[1] + ")");
    },
    onLeave: function (retval) {
    }
});
When I try to trace Java functions using the following code, nothing is returned:
Java.perform(function () {
    var c = Java.use("java.net.URI");
    c.parseURI.implementation = function () {
        console.log("String1:" + args[0]);
        send("String1:" + args[0]);
        this.parseURI(args[0]);
    };
});
From the Frida website:
Frida currently supports Dalvik, and while most of that code is just interacting with the JNI APIs implemented by the VM, there are some bits that are VM-specific.
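Note also that the Java hook above references args, which only exists in Interceptor.attach callbacks; a Java.use implementation receives the method's arguments as ordinary function parameters, so the hook throws before logging anything. A corrected sketch (assuming parseURI takes the URI string as its first argument; overloaded methods may additionally require .overload(...)):
Java.perform(function () {
    var URI = Java.use("java.net.URI");
    URI.parseURI.implementation = function () {
        // Java hooks receive the real arguments, not an `args` array.
        var str = arguments[0];
        console.log("String1: " + str);
        send("String1: " + str);
        return this.parseURI.apply(this, arguments);
    };
});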
boost::filesystem provides many interesting functions; for example, we can use boost::filesystem::exists to check whether a file exists or not. I am now using this functionality on Android. It succeeds in the following program:
#include <boost/filesystem.hpp>
#include <iostream>
#include <string>

int main()
{
    using namespace std;
    string file_name = "/data/local/tmp/abc/def.txt";
    boost::filesystem::path dddddd(file_name);
    if (boost::filesystem::exists(dddddd))
    {
        std::cout << "Succeed" << std::endl;
    }
    else
    {
        std::cout << "Failed" << std::endl;
    }
    return 0;
}
However, if I use wstring instead:
#include <boost/filesystem.hpp>
#include <iostream>
#include <string>

int main()
{
    using namespace std;
    wstring file_name = L"/data/local/tmp/abc/def.txt";
    boost::filesystem::wpath dddddd(file_name);
    if (boost::filesystem::exists(dddddd))
    {
        std::cout << "Succeed" << std::endl;
    }
    else
    {
        std::cout << "Failed" << std::endl;
    }
    return 0;
}
It fails. It seems to me that the reason is that Android does not handle wstring very well. Any idea what I can change to make boost::filesystem::wpath work on Android?
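One commonly suggested workaround (a sketch; I have not verified it on Android) is to imbue boost::filesystem with a UTF-8 codecvt facet, so that the wide-to-narrow path conversion no longer depends on the platform's locale support:
#include <boost/filesystem.hpp>
#include <boost/filesystem/detail/utf8_codecvt_facet.hpp>
#include <locale>

int main()
{
    // Convert wide-character paths via UTF-8 instead of the default
    // locale, which is limited on Android/Bionic.
    std::locale utf8_locale(std::locale(),
        new boost::filesystem::detail::utf8_codecvt_facet());
    boost::filesystem::path::imbue(utf8_locale);

    boost::filesystem::wpath p(L"/data/local/tmp/abc/def.txt");
    return boost::filesystem::exists(p) ? 0 : 1;
}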
How do I use HtmlAgilityPack with Android (Mono for Android - C#)? I've added the reference, but I keep getting this error:
Error CS0012: The type 'System.Xml.XPath.IXPathNavigable' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Xml, Version=2.0.0.0
I have incorporated HtmlAgilityPack into the base library assembly I use in all of my MonoDroid projects with great success. I have not even tried to use it in precompiled form, but simply added its source to my project.
I then shamelessly edited HtmlWeb.cs to throw out the Windows stuff:
lines 893 to 903 (may have somewhat changed, just look around near there):
if (!helper.GetIsDnsAvailable())
{
#if Android
contentType = def;
#else
//do something.... not at full trust
try
{
RegistryKey reg = Registry.ClassesRoot;
reg = reg.OpenSubKey(extension, false);
if (reg != null) contentType = (string)reg.GetValue("", def);
}
catch (Exception)
{
contentType = def;
}
#endif
}
lines 934 to 946 (may have somewhat changed, just look around near there):
if (helper.GetIsRegistryAvailable())
{
#if Android
ext = def;
#else
try
{
RegistryKey reg = Registry.ClassesRoot;
reg = reg.OpenSubKey(@"MIME\Database\Content Type\" + contentType, false);
if (reg != null) ext = (string)reg.GetValue("Extension", def);
}
catch (Exception)
{
ext = def;
}
#endif
}
I then added Android to the conditional compilation symbols of my project.
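In the .csproj this ends up as something like the following hypothetical fragment (in the IDE it lives under Project Properties > Build > Conditional compilation symbols):
<PropertyGroup>
  <!-- Define the Android symbol so the #if Android branches above compile -->
  <DefineConstants>$(DefineConstants);Android</DefineConstants>
</PropertyGroup>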
My references are:
Microsoft.CSharp
Mono.Android
System
System.Core
System.Data
System.Xml
System.Xml.Linq
Please add a comment telling me whether you can compile it now, or if you need more information.