The following video tutorial was a good starting point (it uses Vibrator as an example rather than DeviceId), but a few more details needed attention to transpose it to C++.
I'm just getting started with Stack Exchange; hopefully this question and answer are useful to others. The following code does work as expected.
#include <Androidapi.Helpers.hpp>
#include <Androidapi.JNI.Telephony.hpp>

// Get Device ID (IMEI) from device:
// Ask the shared activity context for the telephony system service...
_di_JObject TelephonyServiceObj = SharedActivityContext()->getSystemService(TJContext::JavaClass->TELEPHONY_SERVICE);
// ...wrap the raw JNI object as a JTelephonyManager interface...
_di_JTelephonyManager TelephonyManager = TJTelephonyManager::Wrap(((_di_ILocalObject)TelephonyServiceObj)->GetObjectID());
// ...and convert the returned Java string to a UnicodeString.
UnicodeString DeviceId = JStringToString(TelephonyManager->getDeviceId());
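One caveat worth adding (my own note, not from the tutorial): getDeviceId() requires the READ_PHONE_STATE permission to be enabled in the Android project options, and it can return null on some devices, so a guard before the conversion is a reasonable precaution. A minimal sketch, assuming the same variable names as above:

// Hedged sketch: guard against a null device ID before converting it.
// READ_PHONE_STATE must be granted for getDeviceId() to succeed.
_di_JString JId = TelephonyManager->getDeviceId();
UnicodeString DeviceId = (JId != NULL) ? JStringToString(JId) : UnicodeString();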
I'm trying to run inference with my tflite model from C++ code on an embedded device.
I wrote a simple tflite inference program that uses the GPU, cross-compiled it on my PC, and ran it on the embedded device, which runs Android.
However:
(1) If I use the GPU delegate option, the C++ code gives random results.
(2) Given the same input, the results change every time.
(3) When I turn off the GPU option, the results are correct.
When I test my tflite model in Python it gives the correct output, so I think the model file is fine.
Should I rebuild TensorFlow Lite because I am using a prebuilt .so file, or should I change the GPU options?
I don't know what else to check.
Please help!
Here is my C++ code
#include <cstdio>
#include <memory>

#include "tensorflow/lite/delegates/gpu/delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Load model
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(model_file.c_str());

// Build the interpreter
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);

// Set delegate option
bool use_gpu = true;
if (use_gpu)
{
    TfLiteDelegate* delegate;
    auto options = TfLiteGpuDelegateOptionsV2Default();
    options.inference_preference = TFLITE_GPU_INFERENCE_PREFERENCE_FAST_SINGLE_ANSWER;
    options.inference_priority1 = TFLITE_GPU_INFERENCE_PRIORITY_AUTO;
    delegate = TfLiteGpuDelegateV2Create(&options);
    interpreter->ModifyGraphWithDelegate(delegate);
}

interpreter->AllocateTensors();

// Set input: fill the first input tensor with ones
float* input = interpreter->typed_input_tensor<float>(0);
for (int i = 0; i < width * height * channel; i++)
    *(input + i) = 1;

TfLiteTensor* output_tensor = nullptr;

// Inference
interpreter->Invoke();

// Check output
output_tensor = interpreter->tensor(interpreter->outputs()[0]);
printf("Result : %f\n", output_tensor->data.f[0]);
//float* output = interpreter->typed_output_tensor<float>(0);
//printf("output : %f\n", *(output));
Luckily, I found the answer.
This was caused by a mismatch of NDK versions (the prebuilt .so file I used was built with a different version).
After unifying the NDK version to 21 and testing again, it worked normally.
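Regardless of the root cause, it may also be worth checking whether the delegate was actually applied, and releasing it when you are done. A small sketch under the same setup as the question's code (my addition, not part of the original post):

// Hedged sketch: verify that the GPU delegate was applied; otherwise fall back to CPU.
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    printf("GPU delegate could not be applied, running on CPU\n");
}
// ... and after the interpreter has been destroyed:
TfLiteGpuDelegateV2Delete(delegate);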
In my case the same application using the GPU did not work on an S10 Lite but did work on an S22 and an S21 Ultra 5G.
So consider testing the same application on devices with different chipset configurations.
I am interested in demoing printf vulnerabilities via an NDK app. To be clear, I am aware that to log in the console we can use __android_log_print(ANDROID_LOG_DEBUG, "LOG_TAG", "Print : %d %s",someVal, someStr);. I have tried it and I know it works. But I explicitly want to demo the vulnerabilities of printf(), specifically to use the %n specifier to write to a pointed location.
Is there a way to make printf() work to this effect or is it possible to achieve this via __android_log_print()? I attempted it with the android/log.h header but it didn't work.
I can get the app to crash by running something along the lines of printf("%s%s%s%s%s%s%s%s%s%s"). But again, I can't manipulate pointers.
For general knowledge purposes, why is it that printf() doesn't work in the first place and how does __android_log_print() prevent these exploits?
You do realize that Android is open source.
Start by looking for __android_log_print()
and finding it here: https://android.googlesource.com/platform/system/core/+/refs/heads/master/liblog/logger_write.cpp
int __android_log_print(int prio, const char* tag, const char* fmt, ...) {
  va_list ap;
  char buf[LOG_BUF_SIZE];
  va_start(ap, fmt);
  vsnprintf(buf, LOG_BUF_SIZE, fmt, ap);
  va_end(ap);
  return __android_log_write(prio, tag, buf);
}
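Note that buf is then handed to __android_log_write() as plain message data rather than as another format string, so any user-controlled specifiers are only expanded once, inside bionic's own vsnprintf. As far as I can tell, that is why __android_log_print() does not give you the same leverage as a raw printf().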
I eventually ended up looking at: https://android.googlesource.com/platform/bionic/+/refs/heads/master/libc/stdio/vfprintf.cpp
lines 453-454:
case 'n':
__fortify_fatal("%%n not allowed on Android");
Also referenced in the code is additional safety through FORTIFY which is described in the following blog post:
https://android-developers.googleblog.com/2017/04/fortify-in-android.html
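To demo the behaviour described above, here is a minimal standalone sketch (my own example, not from the AOSP sources) that trips that check:

#include <cstdio>

int main() {
    int written = 0;
    // On Android's bionic libc this call aborts with the "%n not allowed on
    // Android" fatal error; on glibc it would instead store the number of
    // bytes printed so far into 'written'.
    printf("hello%n\n", &written);
    return 0;
}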
Android specifically does not support %n format specifiers because they're vulnerable.
https://android.googlesource.com/platform/bionic/+/400b073ee38ecc2a38234261b221e3a7afc0498e/tests/stdio_test.cpp#328
I use a custom model for classification in the TensorFlow Camera Demo.
I generated a .pb file (a serialized protobuf file) and I could display the huge graph it contains.
To convert this graph to an optimized graph, as described in https://www.oreilly.com/learning/tensorflow-on-android, the following procedure can be used:
$ bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=tf_files/retrained_graph.pb \
--output=tensorflow/examples/android/assets/retrained_graph.pb \
--input_names=Mul \
--output_names=final_result
How do I find the input_names and output_names from the graph display?
When I don't use the proper names, the app crashes on the device:
E/TensorFlowInferenceInterface(16821): Failed to run TensorFlow inference with inputs:[AvgPool], outputs:[predictions]
E/AndroidRuntime(16821): FATAL EXCEPTION: inference
E/AndroidRuntime(16821): java.lang.IllegalArgumentException: Incompatible shapes: [1,224,224,3] vs. [32,1,1,2048]
E/AndroidRuntime(16821): [[Node: dropout/dropout/mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](dropout/dropout/div, dropout/dropout/Floor)]]
Try this:
run python
>>> import tensorflow as tf
>>> gf = tf.GraphDef()
>>> gf.ParseFromString(open('/your/path/to/graphname.pb','rb').read())
and then
>>> [n.name + '=>' + n.op for n in gf.node if n.op in ( 'Softmax','Placeholder')]
Then you will get a result similar to this:
['Mul=>Placeholder', 'final_result=>Softmax']
But judging from the error messages, I'm not sure the problem is really the node names.
I guess you either provided the wrong arguments when loading the graph file, or the generated graph file itself is wrong.
Check this part:
E/AndroidRuntime(16821): java.lang.IllegalArgumentException: Incompatible
shapes: [1,224,224,3] vs. [32,1,1,2048]
UPDATE:
Sorry,
if you're using a (re)trained graph, then try this:
[n.name + '=>' + n.op for n in gf.node if n.op in ( 'Softmax','Mul')]
It seems that a (re)trained graph saves the input/output op names as "Mul" and "Softmax", while an optimized and/or quantized graph saves them as "Placeholder" and "Softmax".
BTW, using a retrained graph in a mobile environment is not recommended, according to Pete Warden's post: https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/ . It's better to use a quantized or memmapped graph because of performance and file size issues; I couldn't find out how to load a memmapped graph in Android, though... :(
(There is no problem loading an optimized/quantized graph in Android.)
Recently I came across this option provided directly by TensorFlow:
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
--in_graph=custom_graph_name.pb
I wrote a simple script to analyze the dependency relations in a computational graph (usually a DAG, a directed acyclic graph). The inputs are obviously the nodes that lack an input. However, the outputs could be defined as any nodes in the graph, because in the weirdest but still valid case the outputs can also be the inputs while all the other nodes are dummies. In the code I still define the output operations as the nodes that have no output; you can ignore that part if you wish.
import tensorflow as tf

def load_graph(frozen_graph_filename):
    with tf.io.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
    return graph

def analyze_inputs_outputs(graph):
    ops = graph.get_operations()
    outputs_set = set(ops)
    inputs = []
    for op in ops:
        if len(op.inputs) == 0 and op.type != 'Const':
            inputs.append(op)
        else:
            for input_tensor in op.inputs:
                if input_tensor.op in outputs_set:
                    outputs_set.remove(input_tensor.op)
    outputs = list(outputs_set)
    return (inputs, outputs)
I'm trying to display video on Android using GStreamer, as I do on other platforms:
gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(video_sink), this->ui->playback_widget->winId());
//playback_widget - QOpenGLWidget
But I think that winId() returns something else instead of ANativeWindow, because I get:
F libc : Fatal signal 11 (SIGSEGV), code 1, fault addr 0x5e in tid
3154 (gstglcontext)
So, how can I get an instance of (or a pointer to) ANativeWindow from a Qt widget on Android?
Support for embedding native widgets is incomplete and doesn't always work. Bear in mind that Qt may create native handles, but they do not necessarily represent actual native windows. Additionally, QWidget::winId() provides no effective guarantee of portability, only that the identifier is unique:
Portable in principle, but if you use it you are probably about to do something non-portable. Be careful.
The reason for this is that WId is actually a typedef for quintptr.
Solution: You will, at the very least, need to cast the return from winId to ANativeWindow, assuming this is the underlying window handle type that Qt uses to identify native windows.
This solution seems directed at X Windows but may provide some guidance.
Also see the documentation for QWidget::effectiveWinId() and QWidget::nativeParentWidget() for more helpful background information.
Update: Per the platform notes, there are quite a few caveats in using OpenGL + Qt + Android, including:
The platform plugin only supports full screen top-level OpenGL windows.
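If you do want to try the cast this answer describes, a minimal sketch of it (my own example; it assumes that on Android the WId really wraps an ANativeWindow, which is exactly the part the documentation does not guarantee):

#include <android/native_window.h>

// Hedged sketch: reinterpret the opaque WId as an ANativeWindow* and hand it
// to the GStreamer overlay. This only helps if Qt's native handle actually is
// an ANativeWindow on this platform.
ANativeWindow *native = reinterpret_cast<ANativeWindow *>(ui->playback_widget->winId());
gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(video_sink), (guintptr) native);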
After @don-prog's answer and some reading time I was able to make it work. Here is my code. Just a note: I had to delay the execution of this code until Qt had finished loading its views and layouts.
The following code retrieves the QtActivity Java object, then dives into the view and layout objects down to the QtSurface, which extends android.view.SurfaceView. It then asks for the SurfaceHolder and finally gets the Surface object that we need for the ANativeWindow_fromSurface call.
#include <QApplication>
#include <QDebug>
#include <QAndroidJniObject>
#include <QAndroidJniEnvironment>
#include <qpa/qplatformnativeinterface.h>
#include <android/native_window_jni.h>   // ANativeWindow_fromSurface

// Get the QtActivity Java object from Qt's platform integration.
QPlatformNativeInterface *nativeInterface = QApplication::platformNativeInterface();
jobject activity = (jobject) nativeInterface->nativeResourceForIntegration("QtActivity");

// Obtain a JNIEnv for the current thread.
QAndroidJniEnvironment qjniEnv;
JNIEnv *jniEnv = nullptr;
JavaVM *jvm = qjniEnv.javaVM();
jvm->GetEnv(reinterpret_cast<void **>(&jniEnv), JNI_VERSION_1_6);
jvm->AttachCurrentThread(&jniEnv, NULL);

// Walk down from android.R.id.content to the QtSurface (a SurfaceView subclass).
jint r_id_content = QAndroidJniObject::getStaticField<jint>("android/R$id", "content");
QAndroidJniObject view = ((QAndroidJniObject) activity).callObjectMethod("findViewById", "(I)Landroid/view/View;", r_id_content);
if (view.isValid()) {
    QAndroidJniObject child1 = view.callObjectMethod("getChildAt", "(I)Landroid/view/View;", 0);
    QAndroidJniObject child2 = child1.callObjectMethod("getChildAt", "(I)Landroid/view/View;", 0);
    if (child2.isValid()) {
        QAndroidJniObject sHolder = child2.callObjectMethod("getHolder", "()Landroid/view/SurfaceHolder;");
        if (sHolder.isValid()) {
            QAndroidJniObject theSurface = sHolder.callObjectMethod("getSurface", "()Landroid/view/Surface;");
            if (theSurface.isValid()) {
                // Finally turn the Java Surface into an ANativeWindow.
                ANativeWindow *awindow = ANativeWindow_fromSurface(jniEnv, theSurface.object());
                qDebug() << "This is a ANativeWindow " << awindow;
            }
        }
    } else {
        qDebug() << "Views are not loaded yet or you are not in the Qt UI Thread";
    }
}
Hope it helps anybody else
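One follow-up note: ANativeWindow_fromSurface() acquires a reference on the window, so once the consumer (for example the GStreamer sink) no longer needs the handle, it should be released. A short sketch reusing awindow and the question's video_sink:

// Hedged sketch: pass the native window to the sink, then drop our reference
// once playback has stopped and the handle is no longer used.
gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(video_sink), (guintptr) awindow);
// ... later:
ANativeWindow_release(awindow);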
I'm having a confusing problem. I'm trying to make a web client that uses WSDL.
I'm using C++ RAD Studio 10 Seattle, but the same problem occurred in RAD Studio XE8 (an older version).
1. I create a Multi-Device Application and add one Edit component and one Button.
2. I create a WSDL Importer, changing the location of the WSDL file to "http://www.w3schools.com/webservices/tempconvert.asmx?WSDL", and leave all other settings at their defaults.
3. In the OnClick event of the button I write two lines of code:
_di_TempConvertSoap Converter = GetTempConvertSoap(true,
"http://www.w3schools.com/webservices/tempconvert.asmx?WSDL");
Edit1->Text = Converter->CelsiusToFahrenheit("32");
So after these three steps I have one unit, which is the main unit with the form and the button event, and one file, "tempconvert.cpp", that the WSDL Importer has generated. It essentially just translates the WSDL description into C++ and defines the methods used to communicate with the server. In my case there are two methods, FahrenheitToCelsius() and CelsiusToFahrenheit(); in the example I use CelsiusToFahrenheit().
I compile it for the 32-bit Windows platform, run it, and when I click the button the result "89.6" appears in the Edit component. So this works as expected.
But when I change the target platform to "Android", use my mobile phone "Samsung GT-I8262" with Android 4.1.2, and run the project, it just stops and exits. I debugged the problem and it stops at the first statement in the RegTypes() method in "tempconvert.cpp".
// ************************************************************************
//
// This routine registers the interfaces and types exposed by the WebService.
// ************************************************************************ //
static void RegTypes()
{
/* TempConvertSoap */
InvRegistry()->RegisterInterface(__delphirtti(TempConvertSoap), L"http://www.w3schools.com/webservices/", L"utf-8");
InvRegistry()->RegisterDefaultSOAPAction(__delphirtti(TempConvertSoap), L"http://www.w3schools.com/webservices/%operationName%");
InvRegistry()->RegisterInvokeOptions(__delphirtti(TempConvertSoap), ioDocument);
/* TempConvertSoap.FahrenheitToCelsius */
InvRegistry()->RegisterMethodInfo(__delphirtti(TempConvertSoap), "FahrenheitToCelsius", "",
"[ReturnName='FahrenheitToCelsiusResult']", IS_OPTN);
/* TempConvertSoap.CelsiusToFahrenheit */
InvRegistry()->RegisterMethodInfo(__delphirtti(TempConvertSoap), "CelsiusToFahrenheit", "",
"[ReturnName='CelsiusToFahrenheitResult']", IS_OPTN);
/* TempConvertHttpPost */
InvRegistry()->RegisterInterface(__delphirtti(TempConvertHttpPost), L"http://www.w3schools.com/webservices/", L"utf-8");
InvRegistry()->RegisterDefaultSOAPAction(__delphirtti(TempConvertHttpPost), L"");
}
#pragma startup RegTypes 32
Does someone have any idea why this might be happening? I tried two other Samsung phones and it didn't work on them either. The error that shuts the program down is "Segmentation fault (11)", and more precisely it stops at the following line of code in the "System.pas" file:
u_strFromUTF8(PUChar(Dest), MaxDestChars, DestLen, MarshaledAString(Source), SourceBytes, ErrorConv);
Here is some info that I've found about the function:
u_strFromUTF8 - a function that converts a UTF-8 string to UTF-16.
UCHAR is a Byte (in Delphi), so PUCHAR is a pointer to Byte.
I cannot see what could possibly go wrong with this function, which apparently only converts a string.
So my question is: why does the project work in the 32-bit Windows version, but throw Segmentation fault (11) on Android?
I hope I could find a solution for this problem. I will keep looking.
Thank you,
Zdravko Donev :)
UPDATE:
I disassembled the line:
InvRegistry()->RegisterInterface(__delphirtti(TempConvertSoap), L"http://www.w3schools.com/webservices/", L"utf-8");
to get:
TInvokableClassRegistry *Class = InvRegistry();
TTypeInfo *Info = __delphirtti(TempConvertSoap);
UnicodeString Namespace = "http://www.w3schools.com/webservices/";
UnicodeString WSDLEncoding = "utf-8";
Class->RegisterInterface(Info, Namespace, WSDLEncoding);
And I saw that the problem occurs when calling the InvRegistry() function, but I still haven't found the cause, as I cannot reach the source code of that function.
I found a solution.
I deleted the line
#pragma startup RegTypes 32
and called the RegTypes() method myself when the form is created, and it worked.
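For completeness, a rough sketch of what that can look like (assuming the default form class name TForm1, which is my assumption rather than something from the original post; RegTypes() also has to be made callable from the form unit, for example by removing its static linkage or adding a small wrapper in tempconvert.cpp):

// Hedged sketch: register the SOAP interfaces once at form construction
// instead of relying on the #pragma startup that crashed on Android.
__fastcall TForm1::TForm1(TComponent* Owner)
    : TForm(Owner)
{
    RegTypes();  // must run before the first call to GetTempConvertSoap()
}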