I'm struggling to follow the examples for loading my own model with ARCore. I found the following code:
ModelRenderable.builder()
        // To load as an asset from the 'assets' folder ('src/main/assets/andy.sfb'):
        .setSource(this, Uri.parse("andy.sfb"))
        // Instead, load as a resource from the 'res/raw' folder ('src/main/res/raw/andy.sfb'):
        //.setSource(this, R.raw.andy)
        .build()
        .thenAccept(renderable -> andyRenderable = renderable)
        .exceptionally(
                throwable -> {
                    Log.e(TAG, "Unable to load Renderable.", throwable);
                    return null;
                });
However, I can't find the ModelRenderable class anywhere, or how to import it. Also, the example app I'm building from loads models like this:
virtualObject.createOnGlThread(/*context=*/ this, "models/andy.obj", "models/andy.png");
virtualObject.setMaterialProperties(0.0f, 2.0f, 0.5f, 6.0f);
But my model has no .png files, just .obj and .mtl. The automatic Sceneform import also created an .sfa and an .sfb file.
Which one is the right way to do it?
For reference here is the official documentation about initiating a model: https://developers.google.com/ar/develop/java/sceneform#renderables
ModelRenderable is part of the
com.google.ar.sceneform:core
library; you can add it with this dependency in your app-level build.gradle:
implementation 'com.google.ar.sceneform:core:1.13.0'
Make sure every other ARCore / Sceneform dependency is on the same version, in this case 1.13.0.
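For example, an aligned set of dependencies might look like the following (the exact artifacts depend on what your app actually uses; sceneform-ux is only needed if you use the ArFragment helpers):

```gradle
dependencies {
    // Keep all ARCore / Sceneform artifacts on the same version.
    implementation 'com.google.ar:core:1.13.0'
    implementation 'com.google.ar.sceneform:core:1.13.0'
    implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.13.0'
}
```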
The .sfa extension stands for Sceneform Asset Definition; it describes your model in a human-readable form and is not packaged into your application (it should live in the sampledata folder, which sits at the same hierarchy level as your src folder). The .sfb extension stands for Sceneform Binary; this binary is regenerated from the .sfa description every time you modify the .sfa and build the project, and it should end up in your project's assets folder. For model loading, you should use the .sfb file:
ModelRenderable.builder()
        .setSource(context, Uri.parse("house.sfb"))
About your sample code: if you aren't familiar with OpenGL, I don't recommend following that sample; it's better to start with Sceneform. Here is a sample app: https://github.com/google-ar/sceneform-android-sdk/tree/master/samples/solarsystem
I'm currently working on an Android project where I need to load a .obj file with ASSIMP on the Android platform. My way of implementing that is to use the AssetManager to load the .obj file into memory first, and then use the importer.ReadFileFromMemory() function to create the aiScene object. I've managed to import all vertex data, but I found that the texture is missing. I read on the ASSIMP GitHub page that ReadFileFromMemory() won't resolve references across files, so I think it is not reading the .mtl file where the texture is applied. I would like to use the importer.ReadFile() function, but I have no idea how to work with it on the Android platform. Does anyone have suggestions?
Attached is my implementation of loadModelFromMemory, similar to the loadModel function from LearnOpenGL.
void Model::loadModelFromMemory(const void* pbuffer, size_t pLength)
{
    Log::Message("Enter loadModelFromMemory", LOG_INIT);
    // read file via ASSIMP
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFileFromMemory(pbuffer, pLength,
        aiProcess_Triangulate | aiProcess_GenSmoothNormals | aiProcess_FlipUVs | aiProcess_CalcTangentSpace);
    // check for errors
    if (!scene || scene->mFlags & AI_SCENE_FLAGS_INCOMPLETE || !scene->mRootNode) // if is Not Zero
    {
        // note: the original used strcat on a string literal, which is undefined behavior
        Log::Message((std::string("ERROR::ASSIMP::") + importer.GetErrorString()).c_str(), LOG_ERROR);
        return;
    }
    // process ASSIMP's root node recursively
    processNode(scene->mRootNode, scene);
}
I have tried using ReadFile, but it does not work in the Android context. I also tried the MemoryIOWrapper provided by ASSIMP, but have no clue where to start.
There are different ways to implement this:
Load the .mtl and the .obj file via memory buffers.
Use the Android filesystem implementations to load assets from the asset folder on your mobile phone. You can find our headers for that in the Android filesystem implementations.
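For the memory-buffer route, the Java side can read both files out of the APK's assets into byte arrays and hand them to native code over JNI, so Assimp never needs a real file path. A minimal sketch of the stream-reading part (the class name and the commented Android call sites are illustrative, not from the question):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class AssetBytes {
    // Reads an entire InputStream into a byte array.
    public static byte[] readAllBytes(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
        return out.toByteArray();
    }

    // On Android you would call it with asset streams, e.g.:
    // byte[] obj = readAllBytes(context.getAssets().open("models/house.obj"));
    // byte[] mtl = readAllBytes(context.getAssets().open("models/house.mtl"));
    // and pass both arrays to a native method over JNI.
}
```

On the native side, the two buffers can then be exposed to Assimp through a custom IOSystem so that the importer can resolve the .mtl reference from the .obj.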
I have a single model and I want to change its image.
With the code below I can render my model (.sfb file):
ModelRenderable.builder()
        .setSource(this, sfb_source)
        .build()
        .thenAccept(renderable -> this.renderable = renderable)
        .exceptionally(
                throwable -> {
                    Toast toast =
                            Toast.makeText(this, "Unable to load andy renderable", Toast.LENGTH_LONG);
                    toast.setGravity(Gravity.CENTER, 0, 0);
                    toast.show();
                    return null;
                });
Now, my model is unique, but I have multiple images (all the same size).
Should I create multiple .sfb files, or is there a way to load the images and swap them at runtime?
Firstly it is worth mentioning that Sceneform is deprecated, or more accurately 'open sourced and archived', in case you are building on it, which I think you might be from your code - see the note here: https://developers.google.com/sceneform/develop.
You can still work with the older versions or open version of Sceneform - this may be ok for your use case.
Either way, for your requirements it sounds like you have two options:
use a model which supports animation - Sceneform-based instructions here: https://developers.google.com/sceneform/develop/animation
delete and replace the model when you want to change it - this is not as slow as it sounds and can, in my experience, look fairly seamless when changing model colours etc (at least in Sceneform based apps - not tried with OpenGL and ARCore).
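A third route worth knowing about in Sceneform, since your geometry stays the same and only the image changes, is swapping the texture on the already-loaded renderable's material instead of rebuilding the whole model. A rough sketch; the material parameter name "baseColor" and the drawable resource are assumptions, so check your .sfa for the actual parameter names your material exposes:

```java
// Build a Texture from one of your images and assign it to the
// already-loaded renderable's material at runtime.
Texture.builder()
        .setSource(this, R.drawable.variant_2)   // hypothetical drawable
        .build()
        .thenAccept(texture ->
                renderable.getMaterial().setTexture("baseColor", texture))
        .exceptionally(throwable -> {
            Log.e(TAG, "Unable to load texture.", throwable);
            return null;
        });
```

If the model ever needs to change shape as well, node.setRenderable(newRenderable) replaces it in place, which is the delete-and-replace approach described above.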
I'm new to Sceneform (1.15.0) and related 3D file formats like fbx and glTF. I saw the sample animation project (Andy Dance) on how to run animations and the sceneform documentation.
What am I trying?
Run animations that are present in the Sceneform .fbx assets. I have 2 assets - a ka27 helicopter and another 3D model.
Both these .fbx assets have some animation. When I try to import these assets into Android Studio, it throws an error, which I've overcome by adding the Sceneform asset into my sampledata directory and adding the information in the app's build.gradle file. The .sfa and .sfb files are generated correctly.
sceneform.asset('sampledata/models/ka27.FBX',
        'default',
        'sampledata/models/ka27.sfa',
        'src/main/res/raw/ka27')
But now if I try running the animation, I can see the helicopter in the scene, but without animation:
arFragment.getArSceneView().getScene().addChild(helicopterNode);
AnimationData animationData = helicopterRenderable.getAnimationData("ka27");
ModelAnimator helicopterAnimator = new ModelAnimator(animationData, helicopterRenderable);
helicopterAnimator.start();
My questions:
Are these assets correct and compatible with sceneform animations?
In getAnimationData, what is the parameter that needs to be passed? Can I find this information by opening the asset?
(I tried importing these assets, including sceneform's sample andy_dance into Blender and Unity and while I can see the animation playing, I really can't see animation data name property anywhere.)
Do .fbx to .glTF converted assets retain their animation?
Can sceneform run .glTF animations?
Do animations have to be exported separately for sceneform? If yes, then how?
Sample illustration of the app in which the .fbx animation does not work:
If you open your .sfa, you will find an animations key if your .fbx file contains any animation. It should look like this:
{
  animations: [
    {
      clips: [
        {
          name: 'Animation 001',
          runtime_name: 'animation_1',
        },
      ],
      path: 'sampledata/models/ka27.fbx',
    },
  ],
  ...
}
getAnimationData expects the value of runtime_name, so you need to modify the following line:
AnimationData animationData = helicopterRenderable.getAnimationData("ka27");
With my .sfa file this line becomes:
AnimationData animationData = helicopterRenderable.getAnimationData("animation_1");
Note that getAnimationData can also take as a parameter the index of the animation in the animations array of the .sfa file. So you can write:
AnimationData animationData = helicopterRenderable.getAnimationData(0);
The documentation of the ModelRenderable is available here.
Using TensorFlow 1.0.1, it's fine to read an optimized or quantized graph on Android using the TensorFlowImageClassifier.create method, such as:
classifier = TensorFlowImageClassifier.create(
        c.getAssets(),
        MODEL_FILE,
        LABEL_FILE,
        IMAGE_SIZE,
        IMAGE_MEAN,
        IMAGE_STD,
        INPUT_NAME,
        OUTPUT_NAME);
But according to Pete Warden's blog (https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/), it's recommended to use a memory-mapped graph on mobile to avoid memory-related crashes.
I built the memmapped graph using
bazel-bin/tensorflow/contrib/util/convert_graphdef_memmapped_format \
    --in_graph=/tf_files/rounded_graph.pb \
    --out_graph=/tf_files/mmapped_graph.pb
and it was created fine, but when I tried to load the file with TensorFlowImageClassifier.create(...), it says the file is not a valid graph file.
On iOS, it's possible to load the file with
LoadMemoryMappedModel(
    model_file_name, model_file_type, &tf_session, &tf_memmapped_env);
since it has a method for reading a memory-mapped graph.
So I guess there's a similar function on Android, but I couldn't find it.
Could someone guide me on how to load a memory-mapped graph on Android?
Since the file from the memmapped tool is no longer a standard GraphDef protobuf, you need to make some changes to the loading code. You can see an example of this in the iOS Camera demo app, the LoadMemoryMappedModel() function:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/ios_examples/camera/tensorflow_utils.mm#L159
The same code (with the Objective C calls for getting the filenames substituted) can be used on other platforms too. Because we’re using memory mapping, we need to start by creating a special TensorFlow environment object that’s set up with the file we’ll be using:
std::unique_ptr<tensorflow::MemmappedEnv> memmapped_env;
memmapped_env.reset(
    new tensorflow::MemmappedEnv(tensorflow::Env::Default()));
tensorflow::Status mmap_status =
    memmapped_env->InitializeFromFile(file_path);
You then need to pass in this environment to subsequent calls, like this one for loading the graph.
tensorflow::GraphDef tensorflow_graph;
tensorflow::Status load_graph_status = ReadBinaryProto(
    memmapped_env.get(),
    tensorflow::MemmappedFileSystem::kMemmappedPackageDefaultGraphDef,
    &tensorflow_graph);
You also need to create the session with a pointer to the environment you’ve created:
tensorflow::SessionOptions options;
options.config.mutable_graph_options()
    ->mutable_optimizer_options()
    ->set_opt_level(::tensorflow::OptimizerOptions::L0);
options.env = memmapped_env.get();
tensorflow::Session* session_pointer = nullptr;
tensorflow::Status session_status =
    tensorflow::NewSession(options, &session_pointer);
One thing to notice here is that we're also disabling automatic optimizations, since in some cases these will fold constant sub-trees, and so create copies of tensor values that we don't want and use up more RAM. This setup also means it's hard to use a model stored as an APK asset in Android, since those are compressed and don't have normal filenames. Instead you'll need to copy your file out of the APK onto a normal filesystem location.
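On Android, that copy step can be done once at startup with the standard asset APIs. A small sketch (the class name and file names are placeholders; the helper itself is plain java.io, and the commented lines show the hypothetical Android call site):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ModelFileCopier {
    // Writes the whole InputStream to dest and returns dest's absolute path,
    // which can then be handed to MemmappedEnv::InitializeFromFile via JNI.
    public static String copyToFile(InputStream in, File dest) throws IOException {
        try (OutputStream out = new FileOutputStream(dest)) {
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
        }
        return dest.getAbsolutePath();
    }

    // On Android (hypothetical call site):
    // String path = copyToFile(
    //         context.getAssets().open("mmapped_graph.pb"),
    //         new File(context.getFilesDir(), "mmapped_graph.pb"));
}
```

Using getFilesDir() keeps the copy in app-private storage, and the copy only needs to be redone when the bundled model changes.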
Once you’ve gone through these steps, you can use the session and graph as normal, and you should see a reduction in loading time and memory usage.
I use the random forest estimator, implemented in TensorFlow, to predict whether a text is English or not. I saved my model (a dataset with 2k samples and two class labels, 0/1 (Not English/English)) using the following code (the train_input_fn function returns features and class labels):
model_path = 'test/'
estimator = TensorForestEstimator(params, model_dir='model/')
estimator.fit(input_fn=train_input_fn, max_steps=1)
After running the above code, graph.pbtxt and the checkpoints are saved in the model folder. Now I want to use it on Android. I have 2 problems:
As the first step, I need to freeze the graph and checkpoints into a .pb file to use it on Android. I tried freeze_graph (I used the code here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py). When I call freeze_graph on my model, I get the following error and the code cannot create the final .pb graph:
File "/Users/XXXXXXX/freeze_graph.py", line 105, in freeze_graph
  _ = tf.import_graph_def(input_graph_def, name="")
File "/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/importer.py", line 258, in import_graph_def
  op_def = op_dict[node.op]
KeyError: u'CountExtremelyRandomStats'
this is how I call freeze_graph:
def save_model_android():
    checkpoint_state_name = "model.ckpt-1"
    input_graph_name = "graph.pbtxt"
    output_graph_name = "output_graph.pb"
    checkpoint_path = os.path.join(model_path, checkpoint_state_name)
    input_graph_path = os.path.join(model_path, input_graph_name)
    input_saver_def_path = None
    input_binary = False
    output_node_names = "output"
    restore_op_name = "save/restore_all"
    filename_tensor_name = "save/Const:0"
    output_graph_path = os.path.join(model_path, output_graph_name)
    clear_devices = True
    freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
                              input_binary, checkpoint_path,
                              output_node_names, restore_op_name,
                              filename_tensor_name, output_graph_path,
                              clear_devices, "")
I also tried freezing on the iris dataset in "tf.contrib.learn.datasets.load_iris". I get the same error, so I believe it is not related to the dataset.
As a second step, I need to use the .pb file on the phone to predict a text. I found the camera demo example by Google, and it contains a lot of code. I wonder if there is a step-by-step tutorial on how to use a TensorFlow model on Android by passing a feature vector and getting the class label.
Thanks, in advance!
UPDATE
By using a recent version of TensorFlow (0.12), the problem is solved. However, now the problem is what I should pass to output_node_names. How can I find out which nodes are the outputs of the graph?
Re (1) it looks like you are running freeze_graph on a build of tensorflow which does not have access to contrib ops. Maybe try explicitly importing tensorforest before calling freeze_graph?
Re (2) I don't know of a simpler example.
CountExtremelyRandomStats is one of TensorForest's custom ops, and exists in tensorflow/contrib. As was pointed out, TF switched to including contrib ops by default at some point. I don't think there's an easy way to include the contrib custom ops in the global registry in the previous releases, because TensorForest uses the method of building a .so file that is included as a data file which is loaded at runtime (a method that was the standard when TensorForest was created, but may not be any longer). So there are no easily-included python build rules that will properly link in the C++ custom ops. You can try including tensorflow/contrib/tensor_forest:ops_lib as a dep in your build rule, but I don't think it will work.
In any case, you can try installing the nightly build of tensorflow. The alternative includes modifying how tensorforest custom ops are built, which is pretty nasty.