I'd like to test the Object Detection example of TFLite.
https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android
The example works great with the default model, but I want to test a custom model generated with AutoML. When I replace "detect.tflite" and the labelmap file in the "\src\main\assets" directory and build, the app crashes after launch.
My model is very simple: it can detect only 2 objects (Tiger and Lion). My labelmap file contains:
???
Tiger
Lion
I also commented out the line "//apply from:'download_model.gradle'" in "build.gradle" to stop the download of the default model and use my custom model from assets.
I'm new to both Android and the ML space. I'd be glad if anyone can advise on the app crash after launch with a custom AutoML model.
Thanks in advance.
Regards.
Two probable errors in the log might be:
Cannot convert between a TensorFlowLite buffer with 1080000 bytes and a ByteBuffer with 270000 bytes. Modify TF_OD_API_INPUT_SIZE accordingly.
Cannot convert between a TensorFlowLite tensor with shape [1, 20, 4] and a Java object with shape [1, 10, 4]. Modify NUM_DETECTIONS according to custom model.
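For context, the byte counts in the first error already narrow things down. Assuming the sample's usual setup (a 300×300 RGB input whose size is set by TF_OD_API_INPUT_SIZE, quantized to 1 byte per channel), a rough sketch of the arithmetic, with both common explanations for the 4× gap:

```python
# Sketch of the buffer-size mismatch from the first log message.
# The sample app sizes its input ByteBuffer from TF_OD_API_INPUT_SIZE,
# assuming a quantized (1 byte per channel) RGB input.
def input_buffer_bytes(size, channels=3, bytes_per_channel=1):
    return size * size * channels * bytes_per_channel

app_buffer = input_buffer_bytes(300)  # what the app sends: 300x300x3x1
assert app_buffer == 270_000

# The model wants 1_080_000 bytes -- exactly 4x more. Two common causes:
assert input_buffer_bytes(300, bytes_per_channel=4) == 1_080_000  # float32 input, not uint8
assert input_buffer_bytes(600) == 1_080_000                       # 600x600 uint8 input
```

Which of the two applies depends on how the AutoML model was exported, so it's worth inspecting the model's input tensor (e.g. in a Python interpreter) before changing TF_OD_API_INPUT_SIZE or the quantized flag.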
Related
I've made an Image classification model and converted it to tflite format.
Then I verified the tflite model in Python using tf.lite.Interpreter — it produces the same results for my test image as the original model. Here's a colab link to verify.
Then I embedded it into a sample Android app, using Android Studio ML Model Binding and the exact example code from Android Studio.
Here's the main activity code; you can also use this link to navigate to the full Android project.
val assetManager = this.assets
val istr = assetManager.open("test_image.JPG") //The same image
val b = BitmapFactory.decodeStream(istr)
val model = Model2.newInstance(this) //Model definition generated by Android Studio
// Creates inputs for reference.
val image = TensorImage.fromBitmap(b)
// Runs model inference and gets result.
val outputs = model.process(image)
val probability = outputs.probabilityAsCategoryList
probability.sortByDescending { it.score }
val top9 = probability.take(9)
this.findViewById<TextView>(R.id.results_text).text = top9.toString()
And then I'm getting completely different results on Android for the same model and the same input image.
Here are results matching my initial model in Python:
Here are wrong results I'm getting in Android app:
Links to the model and the test image are in both examples, but I'll post them in the question once again:
tflite model
test image
I guess it has something to do with the input/output formats of the model. Or the image is interpreted differently in Python and in Android. Or the metadata I added to the model is somehow wrong. Anyway, I've tried everything to localize the issue and now I'm stuck.
How do I fix my model or Android code so it produces the same results as my python code?
I've managed to find and fix the issue:
My model from this tutorial included a built-in image normalization layer. Image normalization is when you transform standard 0-255 image color values to 0.0-1.0 float values, suitable for machine learning.
But the metadata I used for the tflite model included 2 parameters for external normalization: mean and std.
Formula for each value being: normalized_value = (value - mean) / std
Since my model handles its own normalization, I need to turn off external normalization by setting mean = 0 and std = 1.
This way I'll get normalized_value = value.
So, setting the tflite metadata parameters to these:
image_min=0,
image_max=255.0,
mean=[0.0],
std=[1.0]
fixed the double normalization issue, and my model now produces correct results in the Android app.
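A minimal sketch of why mean=0 and std=1 is the right setting here (plain Python, illustrative pixel values only):

```python
def normalize(value, mean, std):
    # External normalization applied before inference, driven by the
    # mean/std fields in the tflite model metadata.
    return (value - mean) / std

pixel = 200.0

# With mean=0, std=1 the external step is the identity, so the model's
# built-in normalization layer receives the raw 0-255 value it expects:
assert normalize(pixel, mean=0.0, std=1.0) == pixel

# Any other mean/std rescales the input before the model's own layer
# runs, i.e. the image gets normalized twice:
assert normalize(pixel, mean=127.5, std=127.5) != pixel
```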
I have a custom-built TensorFlow graph implementing MobileNetV2-SSDLite, which I wrote myself. It works fine on the PC.
However, when I convert the model to TFLite (all float, no quantization), the model weights are changed drastically.
To give an example, a filter which was initially:
0.13172674179077148,
2.3185202252437188e-32,
-0.003990101162344217
becomes:
4.165565013885498,
-2.3981268405914307,
-1.1919032335281372
The large weight values are completely throwing off my on-device inferences. Need help! :(
What command are you using to convert to tflite? For instance, are you using toco, and if so, what parameters are you using? While I haven't been looking at the filters, here are my default instructions for finetuning MobileNetV2-SSD and SSDLite graphs, and the model has been performing well.
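Independent of the conversion command, it helps to quantify the drift by dumping the same filter from the frozen graph and from the converted model (e.g. via the TFLite interpreter's get_tensor) and comparing. A minimal comparison helper, using the values quoted in the question (plain Python; the dump step itself is left out):

```python
# Hypothetical helper: largest element-wise difference between the same
# filter before and after conversion. Anything far beyond float32
# rounding error points at a pipeline problem (wrong checkpoint, wrong
# input arrays, unintended quantization) rather than expected precision loss.
def max_abs_diff(before, after):
    return max(abs(a - b) for a, b in zip(before, after))

before = [0.13172674179077148, 2.3185202252437188e-32, -0.003990101162344217]
after = [4.165565013885498, -2.3981268405914307, -1.1919032335281372]

drift = max_abs_diff(before, after)
assert drift > 1.0  # here ~4.03, far beyond any rounding effect
```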
I'm trying out the java_arcore_hello_ar sample app but replacing the andy.obj with my own model created in Blender. I have exported the blender object using settings from this tutorial
The .obj and .mtl files were placed in the assets folder, but when I tap on the screen I get nothing. It doesn't show an error, so I'm thinking it does place the object in the scene but isn't drawing it for whatever reason.
Google searching generally brings up tutorials where you have to write a parser to convert the object, but as far as I can see, the ObjectRenderer class in the arcore package does this heavy lifting for you.
Has anyone tried this with any success? Do I have to do further work with the .mtl file?
I did get this working by extending the code to read OBJ and MTL files.
You can take a look at my code at https://github.com/JohnLXiang/arcore-sandbox .
I'm also new to OpenGL; my code is not perfect, but it works at least.
If there is no error information, I think the likely reasons are:
1. The OBJ model is placed at another position, for example very far away. You should check that the model's position is at the origin in Blender during modeling.
2. The OBJ model is different from the one in the java_arcore_hello_ar sample, so the sample's OBJ parsing library fails to parse it.
In that case, you can parse the OBJ model yourself.
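If you do end up parsing it yourself, the core of an OBJ reader is small. A minimal sketch (Python for brevity; the ARCore sample does the equivalent in Java), handling only the `v` and `f` records needed to check that the exported model actually contains geometry near the origin:

```python
# Minimal OBJ parser: vertices and triangle faces only.
def parse_obj(text):
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # OBJ indices are 1-based and may carry /texture/normal suffixes
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

obj = """v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""
vertices, faces = parse_obj(obj)
assert len(vertices) == 3
assert faces == [(0, 1, 2)]
```

Printing min/max of the parsed vertex coordinates is a quick way to spot a model that was exported far from the origin.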
I've been given the task of making a complex 3D object in OpenGL on Android. Earlier I have worked only with primitive OpenGL objects, using the NeHe tutorials.
I have googled and found that Blender is used to make the 3D object, and then it is imported in an Android project. But I can't figure out HOW?
One easy way to do it is to export your objects in the OBJ format (it describes the vertices of an object). You can then easily write your own OBJ reader (or use an existing one) and pass the vertices to OpenGL.
Else, don't reinvent the wheel and use a library that already does this for you (libgdx for instance).
... then it
is imported in android project
Actually, it's usually not imported but simply loaded from a file by the target application. There are some export scripts for Blender that emit C code or even write out OpenGL calls; DON'T use them, they'll just mess up your program.
There are some good libraries for 3D object storage, like OpenCTM
What you need is an array of floats that represent your vertices/normals, like so:
float vertices[] = {
VertexX, VertexY, VertexZ, NormalX, NormalY, NormalZ,
VertexX, VertexY, VertexZ, NormalX, NormalY, NormalZ,
VertexX, VertexY, VertexZ, NormalX, NormalY, NormalZ,
VertexX, VertexY, VertexZ, NormalX, NormalY, NormalZ,
..., ..., ..., ..., ..., ...,
};
Where each face has three unique vertex lines associated with it. Once you have this array built from an OBJ file or whatever format you want, using code that you'll have to figure out, you can render it by doing the following:
glVertexPointer(3, GL_FLOAT, sizeof(vertices[0])*6, &vertices[0]);
glNormalPointer(GL_FLOAT, sizeof(vertices[0])*6, &vertices[3]);
glDrawArrays(GL_TRIANGLES, 0, numVertices);
See this Wikipedia page on the OBJ format for reference on how the obj file is laid out. Parsing the file is pretty straightforward once you understand the format.
Is it possible to create a simple 3D model (for example in 3DS MAX) and then import it to Android?
That's where I got to:
I've used Google's APIDemos as a starting point - there are rotating cubes in there, each specified by two arrays: vertices and indices.
I've built my model using Blender and exported it as an OFF file — a text file that lists all the vertices and then the faces in terms of those vertices (indexed geometry)
Then I created a simple C++ app that takes that OFF file and writes it as two XMLs containing arrays (one for vertices and one for indices)
These XML files are then copied to res/values and this way I can assign the data they contain to arrays like this:
int vertices[] = context.getResources().getIntArray(R.array.vertices);
I also need to manually change the number of faces to be drawn here: gl.glDrawElements(GL10.GL_TRIANGLES, 212*6, GL10.GL_UNSIGNED_SHORT, mIndexBuffer); — you can find that number (212 in this case) at the top of the OFF file
Here you can find my project page, which uses this solution: Github project > vsiogap3d
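For reference, the OFF-to-arrays step from the list above can be sketched like this (Python instead of my C++ app, same idea; assumes a plain ASCII OFF file with no comment lines):

```python
# Sketch of an OFF reader: an OFF file starts with "OFF", then a line
# "nVertices nFaces nEdges", then the vertex lines, then count-prefixed
# face lines suitable for building an index array.
def parse_off(text):
    lines = [l for l in text.splitlines() if l.strip()]
    assert lines[0].strip() == "OFF"
    n_verts, n_faces, _ = (int(x) for x in lines[1].split())
    vertices = [tuple(float(x) for x in lines[2 + i].split())
                for i in range(n_verts)]
    indices = []
    for i in range(n_faces):
        parts = [int(x) for x in lines[2 + n_verts + i].split()]
        indices.extend(parts[1:1 + parts[0]])  # drop the leading vertex count
    return vertices, indices

off = """OFF
3 1 0
0 0 0
1 0 0
0 1 0
3 0 1 2
"""
vertices, indices = parse_off(off)
assert len(vertices) == 3
assert indices == [0, 1, 2]
```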
You may export it to the ASE format.
From ASE, you can convert it to your code manually or programmatically.
You will need the vertices for the vertex array and the faces for the indices in Android.
don't forget you have to set
gl.glFrontFace(GL10.GL_CCW);
because 3ds Max's default winding is counter-clockwise.
It should be possible. You can have the file as a data file with your program (and as such it will be pushed onto the emulator and packaged for installation onto an actual device). Then you can write a model loader and viewer in java using the Android and GLES libraries to display the model.
Specific resources on this are probably limited, though. 3ds is a proprietary format, so third-party loaders are in short supply and mostly reverse-engineered. Other formats (such as Blender or Milkshape) are more open, and you should be able to find details on writing a loader for them in Java fairly easily.
Have you tried min3d for Android? It supports 3ds Max, OBJ and MD2 models.
Not sure about Android specifically, but generally speaking you need a script in 3DS Max that manually writes out the formatting you need from the model.
As to whether one exists for Android or not, I do not know.
You can also convert a 3DS MAX model with the 3D Object Converter
http://web.t-online.hu/karpo/
This tool can convert a 3ds object to text/XML format or C code.
Please note that the tool is not free; you can try it for a 30-day trial period. 'C' code and XML converters are available.
'c' OpenGL output example:
glDisable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
glEnable(GL_NORMALIZE);
GLfloat Material_1[] = { 0.498039f, 0.498039f, 0.498039f, 1.000000f };
glBegin(GL_TRIANGLES);
glMaterialfv(GL_FRONT, GL_DIFFUSE, Material_1);
glNormal3d(0.452267,0.000000,0.891883);
glVertex3d(5.108326,1.737655,2.650969);
glVertex3d(9.124107,-0.002484,0.614596);
glVertex3d(9.124107,4.039649,0.614596);
glEnd();
Or direct 'c' output:
Point3 Object1_vertex[] = {
{5.108326,1.737655,2.650969},
{9.124107,-0.002484,0.614596},
{9.124107,4.039649,0.614596}};
long Object1_face[] = {
3,0,1,2,
3,3,4,5,
3,6,3,5};
You can then migrate those collections of objects to your Java code.
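When migrating, note that the face array is count-prefixed: each face starts with its vertex count (3 for the triangles above), followed by the indices. A small sketch (Python, hypothetical helper name) of flattening it into a plain index list suitable for an element buffer:

```python
# Flatten a count-prefixed face array (as emitted by the converter)
# into a flat triangle index list for glDrawElements-style rendering.
def flatten_faces(face_data):
    indices = []
    i = 0
    while i < len(face_data):
        count = face_data[i]            # leading vertex count of this face
        indices.extend(face_data[i + 1:i + 1 + count])
        i += 1 + count
    return indices

object1_face = [3, 0, 1, 2,
                3, 3, 4, 5,
                3, 6, 3, 5]
assert flatten_faces(object1_face) == [0, 1, 2, 3, 4, 5, 6, 3, 5]
```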