I have found this code: https://github.com/mintuhouse/FinMan/blob/master/src/unix/imp.cpp, which is a class that uses OpenCV to preprocess an image of a receipt.
I wanted to ask: how can I use it in an Android application that takes a photo and saves it as a Bitmap?
I tried to understand what the class is doing and to rewrite the procedure for Android, but it's a little difficult because it uses some functions, such as cvZero(image), that I couldn't find in OpenCV for Android. Any ideas?
I have also tried the NDK, but I couldn't figure out how my bitmap and this class would communicate after running ndk-build. I'm confused! Any help?
In the old OpenCV API, all functions, structures, etc. started with the prefix "cv" to show that they are part of the OpenCV library. In version 2.0 the API changed, and all functions, structures, classes, etc. now live in the namespace "cv", so in C++ you access them through that prefix, for example "cv::Point" or "cv::imread(...)". The old cvZero function is now covered by the Mat class: use the static factory Mat::zeros(...) to create an all-zero matrix, or myMat.setTo(Scalar::all(0)) to zero an existing one. In Java it is similar: try Mat.zeros(...) or myMat.setTo(...).
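For example, a minimal C++ sketch of both new-style replacements for cvZero (the size and type here are arbitrary):

#include <opencv2/core/core.hpp>

int main() {
    // Static factory: create a new all-zero 640x480, 3-channel image.
    cv::Mat img = cv::Mat::zeros(480, 640, CV_8UC3);
    // In-place: zero an existing Mat, the closest match to the old cvZero(image).
    img.setTo(cv::Scalar::all(0));
    return 0;
}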
I am not sure if this is the right place to ask, but I am curious whether there are any Android NDK examples (APKs) on how to read a file with Java and pass it over to C/C++ using JNI.
Currently I am trying to read PDF or Office files (e.g. docx) with C/C++, and I am trying to understand the concept behind it.
Maybe there are some full APK examples, or just some snippets with which I could extend the hello-jni/hello-jnicallback examples.
I already found the Android NDK samples repository at https://github.com/android/ndk-samples, but there seems to be no example of how to simply read files.
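To make it concrete, here is roughly what I picture the native side looking like; the Java class and method names are placeholders I made up:

#include <jni.h>
#include <vector>

// Matches a hypothetical Java declaration in com.example.FileReader:
//   private static native int parseBuffer(byte[] data);
// The Java side would read the file into a byte[] and call this method.
extern "C" JNIEXPORT jint JNICALL
Java_com_example_FileReader_parseBuffer(JNIEnv* env, jclass, jbyteArray data) {
    jsize len = env->GetArrayLength(data);
    std::vector<char> buf(len);
    env->GetByteArrayRegion(data, 0, len, reinterpret_cast<jbyte*>(buf.data()));
    // ... hand buf to the C/C++ parsing code (e.g. a docx/PDF parser) ...
    return len; // e.g. report how many bytes arrived
}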
Thank you for your help.
I searched for a technique to keep a trained TensorFlow model (.pb file) safe in an Android app but didn't find anything useful. I am releasing an app containing a TensorFlow model that I built on my own training set. When I release the app, anyone can extract the model and use it for their own app. Is there a way to protect a TensorFlow model that I put in the assets folder of my Android application?
This is the way that I load my model in Android:
TensorFlowInferenceInterface tf = new TensorFlowInferenceInterface();
tf.initializeTensorFlow(context.getAssets(), "file:///android_asset/model.pb");
I was thinking of embedding the model encrypted in the app and decrypting it at runtime, but if someone debugs the app, they can get the password and decrypt it. Moreover, there is just one implementation of the initializeTensorFlow method in the TensorFlowInferenceInterface class, and it only accepts (AssetManager assetManager, String model). It would be possible to write one that accepts an encrypted model, but that needs some modification of the TensorFlow C++ library. I wonder if there is a more reliable solution. Any suggestions, please?
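For reference, on the C++ side I imagine the modified loader would, after decrypting the asset into memory, do something along these lines (a rough sketch of what I have in mind, not code from TensorFlow itself; the decryption step is left out):

#include <memory>
#include <string>
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/public/session.h"

// Build a Session from an in-memory (already decrypted) model buffer
// instead of a file path or an asset name.
tensorflow::Status LoadModelFromBuffer(const std::string& buf,
                                       std::unique_ptr<tensorflow::Session>* session) {
    tensorflow::GraphDef graph_def;
    if (!graph_def.ParseFromString(buf))  // protobuf parse from memory
        return tensorflow::errors::InvalidArgument("could not parse GraphDef");
    session->reset(tensorflow::NewSession(tensorflow::SessionOptions()));
    return (*session)->Create(graph_def);  // register the graph with the session
}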
As mentioned in the comments, there is no truly safe way to keep your model safe when you run it locally. That being said, you can hide your model and make things a tad more difficult than having a plain .pb around.
Apart from the name obfuscation provided by freeze_graph, a good option is to compile the model to a binary using XLA ahead-of-time (AOT) compilation with tfcompile. It generates a binary library containing your model as well as a header file to use it. Somebody who wants to peek at your network would then have to go through compiled code, which for most people is a higher bar to clear than reading a .pb file.
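To give an idea of what the result looks like: tfcompile generates a C++ class whose name and header path are set in its config file, so the names below are only illustrative (loosely following the tfcompile tutorial, for a graph with one float input and one result):

#define EIGEN_USE_THREADS
#define EIGEN_USE_CUSTOM_THREAD_POOL

#include <algorithm>
#include <iostream>
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "my/model/compiled_graph.h"  // generated by tfcompile; path comes from your config

int main() {
    // tfcompile-generated code runs on an Eigen thread pool.
    Eigen::ThreadPool tp(2);
    Eigen::ThreadPoolDevice device(&tp, tp.NumThreads());

    mymodel::CompiledGraph model;  // class name also comes from your config
    model.set_thread_pool(&device);

    // Fill the first input buffer, run the graph, read the first result.
    const float input[6] = {1, 2, 3, 4, 5, 6};
    std::copy(input, input + 6, model.arg0_data());
    model.Run();
    std::cout << model.result0(0, 0) << std::endl;
    return 0;
}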
I have recently begun using the Android NDK and have successfully implemented a few simple Android apps. Now I need to detect objects (squares and rectangles) in an image, and my research suggests OpenCV is the solution for this. This is the algorithm I use to detect a square in an image.
However, I am unclear as to how I should use the squares.cpp file in my code. The OpenCV samples show how to use .cpp files in JNI format. Do I need to convert the squares.cpp file to JNI, or is there another feasible solution?
Thanks. All suggestions and feedback are welcome.
You don't have to convert the squares.cpp file to JNI.
From your Java code, you will call a JNI function (as I suppose you did in the "few simple Android apps" you have implemented) that will then call the functions in squares.cpp.
In other words, you basically only need one call to a JNI function from Java, and once you are in the C++ code, you can code in C++ as usual.
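For illustration, such a wrapper could look like the sketch below. It assumes you pass the image as a native Mat address the way the OpenCV4Android samples do, and that findSquares is the function from squares.cpp (drop its static keyword so it is visible here); the Java class and method names are made up:

#include <jni.h>
#include <vector>
#include <opencv2/core/core.hpp>

// From squares.cpp (the OpenCV sample); remove 'static' from its definition.
void findSquares(const cv::Mat& image, std::vector<std::vector<cv::Point> >& squares);

// Matches a hypothetical Java declaration in com.example.SquareDetector:
//   private static native int nativeFindSquares(long matAddr);
extern "C" JNIEXPORT jint JNICALL
Java_com_example_SquareDetector_nativeFindSquares(JNIEnv*, jclass, jlong matAddr) {
    cv::Mat& image = *reinterpret_cast<cv::Mat*>(matAddr);  // Mat created on the Java side
    std::vector<std::vector<cv::Point> > squares;
    findSquares(image, squares);  // from here on it is plain C++
    return static_cast<jint>(squares.size());
}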
I'm working on integrating an image recognition app using the Moodstocks SDK.
To start the scanner in Moodstocks I must use a SurfaceView (camera). It all works fine when I do it in Eclipse, but I want to use Unity3D because I'm making it into a sort of game.
So I exported my Eclipse project as a JAR, imported it into Unity, and I'm trying to call the method in my Java class from the Unity script and pass Camera.main to it.
If you can give me any guidelines about that, I'd appreciate it.
Thanks.
There are several ways to create a Java plugin, but the result in each case is that you end up with a .jar file containing the .class files for your plugin. One approach is to download the JDK and compile your .java files from the command line with javac; this creates .class files which you can then package into a .jar with the jar command-line tool. Another option is to use the Eclipse IDE together with the ADT.
Once you have built your Java plugin (.jar) you should copy it to the Assets->Plugins->Android folder in the Unity project. Unity will package your .class files together with the rest of the Java code and then access the code using the Java Native Interface (JNI). JNI is used both when calling native code from Java and when interacting with Java (or the JavaVM) from native code.
To find your Java code from the native side you need access to the Java VM. Fortunately, that access can be obtained easily by adding a function like this to your C/C++ code:
jint JNI_OnLoad(JavaVM* vm, void* reserved) {
    JNIEnv* jni_env = 0;
    // Attach (or fetch) the JNI environment for the current thread.
    vm->AttachCurrentThread(&jni_env, 0);
    // JNI_OnLoad must return the JNI version the library requires.
    return JNI_VERSION_1_6;
}
This is all that is needed to start using Java from C/C++. It is beyond the scope of this document to explain JNI completely. However, using it usually involves finding the class definition, resolving the constructor ("<init>") method and creating a new object instance, as shown in this example:
jobject createJavaObject(JNIEnv* jni_env) {
    // find class definition
    jclass cls_JavaClass = jni_env->FindClass("com/your/java/Class");
    // find constructor method
    jmethodID mid_JavaClass = jni_env->GetMethodID(cls_JavaClass, "<init>", "()V");
    // create object instance
    jobject obj_JavaClass = jni_env->NewObject(cls_JavaClass, mid_JavaClass);
    // return object with a global reference
    return jni_env->NewGlobalRef(obj_JavaClass);
}
This explanation comes from this information page, where a few examples are given as well; you should take a look over there, and this may be worth a read too.
Disclaimer: I work for Moodstocks.
The ScannerSession object in the Moodstocks SDK for Android is designed to be a high-level, easy-to-use wrapper that takes care of many "technical" difficulties by itself in the context of a classic Java app. In particular, it initializes the camera for you, previews it on the provided SurfaceView, and dispatches camera frames to the Moodstocks SDK.
I've never used Unity myself so I can't dive into the details, but given that Unity has its own way of initializing and using the camera, I think that in your context you'll have to bypass this ScannerSession object and use the lower-level functions of the Moodstocks SDK: find out how to get the camera frames from Unity, and feed them manually to the Moodstocks SDK Scanner object. You can take inspiration from what is done in the ScannerSession to see how to do that!
Hope this helps! If you're looking for more advice, you can ask us questions on the Moodstocks Help Center.
I would like to use the face detection from the OpenCV C++ API in an Android app. I have compiled the JNI part successfully, but I wonder how I should use the Haar cascades. I can store them on the SD card and read them from there, but is there another way to use the XML files directly from the project?
There is a C++ example called facedetect that comes with the OpenCV superpack. I'm running OpenCV-2.3.1 myself and it's located in this folder: ../opencv-2.3.1/samples/c/
The sample uses Haar cascades, and this might be your best bet for face detection. If you can use the Android NDK with proper JNI calls from a .cpp file, then you shouldn't have any problems using this sample.
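As for avoiding the SD card: OpenCV's CascadeClassifier only loads from a file path, so a common pattern is to copy the XML out of assets/ into the app's internal storage on the Java side and pass that absolute path down through JNI. A rough sketch of the native side under that assumption (the function name is made up):

#include <string>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/objdetect/objdetect.hpp>

// cascadePath: absolute path to the XML the Java side copied out of assets/.
bool detectFaces(const std::string& cascadePath, const cv::Mat& gray,
                 std::vector<cv::Rect>& faces) {
    cv::CascadeClassifier cascade;
    if (!cascade.load(cascadePath))  // fails if the file was not copied correctly
        return false;
    cascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(30, 30));
    return true;
}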
I'm working on a similar thing but haven't tried it yet. I should be implementing it somewhere next week, but I can't guarantee it. Let me know if this works out for you.