I found a link describing how to rebuild .proto files from a C++ executable: http://www.sysdream.com/reverse-engineering-protobuf-apps
Is there a similar method for APKs or decrypted Objective-C apps?
The article you linked describes extracting the FileDescriptorProtos embedded as string literals in a C++ application that uses Protobufs. The Java code generator embeds similar string literals into Java code. If you ran the Java classes through a decompiler, you should be able to recover the descriptor strings and decode them.
However, note that this only works if the application uses the standard, Google-authored Java protobuf implementation and does not use "lite mode". In lite mode, descriptors are not included in the generated code. Implementations other than the Google-authored ones may or may not include the descriptor. I would guess that most Android developers prefer lite mode or some alternative lightweight implementation that doesn't include descriptors, so you might have trouble extracting from APKs. (I don't know about Objective-C.)
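If you do manage to recover the descriptor bytes (the Java generator typically stores them as ISO-8859-1-encoded string literals, sometimes split into chunks), decoding them is straightforward with any full protobuf runtime. A minimal C++ sketch, assuming the recovered bytes are piped in on stdin:

#include <iostream>
#include <iterator>
#include <string>
#include <google/protobuf/descriptor.pb.h>

int main() {
    // read the recovered FileDescriptorProto bytes from stdin
    std::string data((std::istreambuf_iterator<char>(std::cin)),
                     std::istreambuf_iterator<char>());
    google::protobuf::FileDescriptorProto fdp;
    if (fdp.ParseFromString(data)) {
        // prints a readable, .proto-like dump of messages and fields
        std::cout << fdp.DebugString();
    }
    return 0;
}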
That said, note that you can actually decode a lot of the information in a protobuf message without having the schema at all. If you use protoc with the --decode_raw command-line option and feed it a protobuf message on stdin, it will decode it to tag/value pairs. You'll only get field numbers (not names) and some type information is lost, but you'll find it much easier to reverse-engineer the format from there than from the raw bytes alone.
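For example, piping a captured message through protoc --decode_raw (e.g. protoc --decode_raw < message.bin) prints those pairs directly. The same decoding can be done programmatically; here is a sketch against the C++ runtime's UnknownFieldSet (check the parsing API against your protobuf version):

#include <iostream>
#include <iterator>
#include <string>
#include <google/protobuf/text_format.h>
#include <google/protobuf/unknown_field_set.h>

int main() {
    // read the raw, schema-less message from stdin
    std::string raw((std::istreambuf_iterator<char>(std::cin)),
                    std::istreambuf_iterator<char>());
    google::protobuf::UnknownFieldSet fields;
    if (fields.ParseFromArray(raw.data(), static_cast<int>(raw.size()))) {
        std::string text;
        google::protobuf::TextFormat::PrintUnknownFieldsToString(fields, &text);
        std::cout << text;  // tag/value pairs, e.g. "1: 150" or "2: \"abc\""
    }
    return 0;
}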
I searched for a technique to keep a trained TensorFlow model (.pb file) safe in an Android app, but didn't find anything useful. I am releasing an app containing a TensorFlow model that I built on my own training set. When I release the app, anyone can extract the model and use it for their own app. I wonder if there is a way to protect a TensorFlow model that I put in the asset folder of my Android application?
This is how I load my model in Android:
TensorFlowInferenceInterface tf = new TensorFlowInferenceInterface();
tf.initializeTensorFlow(context.getAssets(), "file:///android_asset/model.pb");
I was thinking of embedding the model encrypted in the app and decrypting it at runtime, but if someone debugs the app, they can recover the password and decrypt it. Moreover, there is just one implementation of the initializeTensorFlow method in the TensorFlowInferenceInterface class, and it only accepts (AssetManager assetManager, String model). It would be possible to write one that accepts an encrypted model, but that requires modifying the TensorFlow C++ library. I wonder if there is a more reliable solution. Any suggestions, please?
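For reference, the kind of C++-side modification I mean would look roughly like this (a sketch only; decrypt() stands in for whatever cipher you choose and is hypothetical):

#include <memory>
#include <string>
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/public/session.h"

// hypothetical helper, implemented elsewhere (e.g. AES via a crypto library)
std::string decrypt(const std::string& encrypted_bytes);

std::unique_ptr<tensorflow::Session> loadEncryptedModel(const std::string& encrypted_bytes) {
    std::string plain = decrypt(encrypted_bytes);
    tensorflow::GraphDef graph_def;
    // GraphDef is a protobuf message, so it can parse from an in-memory buffer
    if (!graph_def.ParseFromArray(plain.data(), static_cast<int>(plain.size())))
        return nullptr;
    std::unique_ptr<tensorflow::Session> session(
        tensorflow::NewSession(tensorflow::SessionOptions()));
    if (!session || !session->Create(graph_def).ok())
        return nullptr;
    return session;
}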
As mentioned in the comments, there is no truly safe way to protect your model when you run it locally. That being said, you can hide your model and make things a tad more difficult than having a plain .pb around.
Apart from the name obfuscation provided by freeze_graph, a good solution is to compile the model to a binary using XLA AOT compilation with tfcompile. It generates a binary library containing your model as well as a header file to use it. Somebody who wants to peek at your network would then have to go through compiled code, which for most people is a higher bar to clear than reading a .pb file.
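For illustration, calling a tfcompile-built model looks roughly like this; the class name and argument/result accessors below are hypothetical, since tfcompile derives them from your graph and configuration:

#include "my_model.h"  // hypothetical header emitted by tfcompile

int main() {
    MyModel model;  // generated class wrapping the AOT-compiled graph
    float input[4] = {0.1f, 0.2f, 0.3f, 0.4f};
    model.set_arg0_data(input);   // feed the first graph input
    model.Run();                  // execute the compiled computation
    const float* output = model.result0_data();  // first graph output
    (void)output;
    return 0;
}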
I'm currently developing an algorithm for texture classification based on machine learning, primarily Support Vector Machines (SVM). I was able to get very good results on my test data and now want to use the SVM in a production environment.
Production, in my case, means it is going to run on multiple desktop and mobile platforms (i.e. Android, iOS), always somewhere deep down in native threads. For reasons of software structure and the platforms' access policies, I'm not able to access the file system from where I use the SVM. However, my framework supports reading files in an environment where file-system access is granted and channelling the file's content as a std::string to the SVM part of my application.
The standard way to configure an SVM is with a filename, from which OpenCV reads directly:
cv::SVM _svm;
_svm.load("/home/<usrname>/DEV/TrainSoftware/trained.cfg", "<trainSetName>");
I want this (basically reading the file somewhere else and passing its content as a string to the SVM):
cv::SVM _svm;
std::string trainedCfgContentStr="<get the content here>";
_svm.loadFromString(trainedCfgContentStr, "<trainSetName>"); // this method is desired
I couldn't find anything in OpenCV's docs or source saying this is possible, but it wouldn't be the first OpenCV feature that exists without being documented or widely known. Of course, I could hack the OpenCV source and cross-compile for each of my target platforms, but I'd like to avoid that since it is a hell of a lot of work; besides, I'm pretty convinced I'm not the first one with this problem.
All ideas (also unconventional) and/or hints are highly appreciated!
As long as you stick with the C++ API, it's quite easy: FileStorage can read from memory:
std::string data_string;  // containing the xml/yml data, read elsewhere
cv::FileStorage fs(data_string, cv::FileStorage::READ | cv::FileStorage::MEMORY);
svm.read(fs.getFirstTopLevelNode());  // or the node holding your trainset
(Unfortunately, this is not exposed to Java.)
Background:
I have working C++ code on Linux that uses Boost IPC to access shared memory, and I want to port it to Android. I downloaded and built the Boost-for-Android project found here: https://github.com/MysticTreeGames/Boost-for-Android.
Problem:
However, when I try to create a Boost named mutex like this:
boost::interprocess::named_mutex named_mtx(boost::interprocess::open_or_create, "my_mutex");
I get an exception saying "no such file or directory" with native error code 2 (ENOENT).
Additional information:
When I searched for how to use shared memory on Android, it looks like ashmem and Binder are the popular mechanisms, and I can't find any references to them in the ported Boost IPC code.
Questions:
What is the reason for the "no such file or directory" error?
Can someone confirm that the Boost-for-Android IPC part works?
By default, Boost does not know of a common place to put shared data on Android; it was not built with that in mind. To make it work, modify the file
/boost/interprocess/detail/os_file_functions.hpp
Find the following line and add "/sdcard":
const char *names[]={ "/sdcard", ......
After doing that, rebuild and use the library, and give your application the read/write external storage permission.
You should be good to go.
PS: Please be careful: I had problems using mutexes and condition variables on Android because the process was taking 100% of the CPU.
I followed the approach provided by @user3645767, but it didn't work for me. I solved it instead by revising the file 'interprocess/detail/shared_dir_helpers.hpp', around line 109, to change the dir_path in get_shared_dir_root():
#elif defined __ANDROID__
  dir_path = "/data";
#else
  dir_path = "/tmp";
I am developing an application for Android/iOS/Windows using C++ code for the core logic. The application uses the free fuzzy logic library, and it works perfectly on Windows Mobile, iOS, and my local Ubuntu machine, but it doesn't quite work under Android.
The application reads a .fcl file from the SD card and then parses it using the free fuzzy logic library's parser. The problem is that the parser gets stuck at random stages of parsing.
Some notes to my project settings:
I enabled the Android read/write permissions for the SD card in the AndroidManifest.xml.
The code I am trying to run is the basic example from the free fuzzy logic library website.
I am using the stlport_static library for stl support and the -frtti compiler flag.
My question is: Am I missing something android specific, like file encoding or some permissions I didn't set?
Some further notes:
File compression should not be an issue because, to my knowledge, files on the SD card are not compressed, and I can partially parse the file.
Using other fuzzy logic libraries is out of the question, because I can't use GPL-licensed libraries. The only other library I found had no manual or how-to and couldn't parse the FCL standard.
The free fuzzy logic library uses a lot of wchar_t's, which could be an issue.
Thank you for your time and hopefully for some help ;)
OK, after plowing through some Android manuals and some Google abuse, I found the problem. Android (more precisely its C library, bionic) currently has very limited support for the wchar_t type. You can use it, but the results will not be the same as on other operating systems.
By changing all the wchar_t and wstring types in the free fuzzy logic library to their corresponding char and string types, I was able to make the parser work. Well, sort of: there are still some slight inconsistencies, but nothing I can't handle ;).
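If you run into something similar, a quick sanity check of what the target toolchain actually provides can save a lot of digging; a minimal sketch:

#include <cstdio>
#include <cwchar>

int main() {
    // portable code should not assume a particular wchar_t width or
    // complete wcs* support; print what this platform really provides
    std::printf("sizeof(wchar_t) = %zu bytes\n", sizeof(wchar_t));
    const wchar_t* w = L"hello";
    std::printf("wcslen(L\"hello\") = %zu\n", std::wcslen(w));
    return 0;
}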
Conclusion: don't rely on wide characters in Android C++ programs.
Thank you for your time & help
I am writing an Android application that both stores data and communicates with a server using protocol buffers. However, the stock implementation of protocol buffers compiled with the LITE flag (in both the JAR library and the generated .java files) has an overhead of ~30 KB, while the program itself is only ~30 KB. In other words, protocol buffers doubled the program size.
Searching online, I found a reference to an Android-specific implementation. Unfortunately, there seems to be no documentation for it, and the code generated from the standard .proto file is incompatible with it. Has anyone used it? How do I generate code from a .proto file for this implementation? Are there any other lightweight alternatives?
I know it's not a direct answer to your question, but an extra 30 KB doesn't sound that bad to me. Even on EDGE, that will only take an extra one to two seconds to download. And memory is tight on Android, but not THAT tight -- 30 KB is only about a tenth of one percent of the available application memory space.
Are there any other lightweight alternatives?
I'm taking this to mean "to using protocol buffers", rather than "for using protocol buffers with an Android application". I apologise if you are already committed to protocol buffers.
This site is about "comparing serialization performance and other aspects of serialization libraries on the JVM". You'll find many alternatives listed there.
While there is no mention of the memory footprint of the different implementations at the moment, I am sure it is a metric that the people behind the wiki would be interested in.
Just to revive this archaic thread for anyone seeing it, the answer is to use Square's Wire library (https://github.com/square/wire).
As they mention themselves:
Wire messages declare public final fields instead of the usual getter methods. This cuts down on both code generated and code executed. Less code is particularly beneficial for Android programs.
I believe they also build on the Lite runtime internally.
And of course ProGuard, the new Android 2.0 minify tools, [other generic answers], etc.
Use ProGuard[1] on your project. It will reduce the size of the jars included in the APK file.
[1] http://developer.android.com/guide/developing/tools/proguard.html