I have already built and executed the TensorFlow Android demo, but now I would like to generate another graph. I need to train on another data set first, and I wanted to use ImageNet. I actually want to download all the images from ImageNet, which will need about 500GB. There is a script to do this here.
After I run this script and get a large number of training files, will they be JPEGs? What format will they be in? I ask because I then want to use the results (the training files) to create a graph I can build with TensorFlow.
How can I use the results from the Inception script to create a graph using the following training script:
cd /tensorflow
python tensorflow/examples/image_retraining/retrain.py \
--bottleneck_dir=/tf_files/bottlenecks \
--how_many_training_steps 500 \
--model_dir=/tf_files/inception \
--output_graph=/tf_files/retrained_graph.pb \
--output_labels=/tf_files/retrained_labels.txt \
--image_dir /tf_files/flower_photos
According to the page you provided:
Each tf.Example proto contains the ImageNet image (JPEG encoded) as
well as metadata such as label and bounding box information. See
parse_example_proto for details.
So it seems the ImageNet data you are downloading stores the images JPEG encoded, wrapped inside tf.Example protos.
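If you want to double-check that yourself, here is a minimal sketch that peeks at one record of the preprocessed output. The shard file name and the feature key follow the convention used by the Inception preprocessing scripts, so treat them as assumptions and adjust them to whatever your DATA_DIR actually contains:

import tensorflow as tf

# Sketch only: the shard name and feature key are assumptions based on the
# inception build_imagenet_data.py conventions; adjust to your own output.
dataset = tf.data.TFRecordDataset("train-00000-of-01024")
for raw_record in dataset.take(1):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    image_bytes = example.features.feature["image/encoded"].bytes_list.value[0]
    # JPEG files start with the magic bytes FF D8 FF.
    print("Looks like JPEG:", image_bytes[:3] == b"\xff\xd8\xff")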
Also, the tool you mention is for retraining an already trained model. I guess you want to train on all the images from scratch, right?
The page you provided (https://github.com/tensorflow/models/tree/master/inception) also explains very well how to train from scratch.
So, if you downloaded the ImageNet data using
bazel-bin/inception/download_and_preprocess_imagenet "${DATA_DIR}"
(Of course, you have to set DATA_DIR and build download_and_preprocess_imagenet before using it.)
Then you can start training with:
bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 --train_dir=${TRAIN_DIR} --data_dir=${DATA_DIR}
You can change the options above according to your needs and conditions, and you also have to specify TRAIN_DIR.
After that, you can retrain the model on the actual data you want using the retrain tool.
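One thing to note before running the retrain tool: retrain.py expects --image_dir to point at a folder with one sub-directory per class label, each holding that class's JPEGs. Here is a minimal sketch to sanity-check your own data layout (the path is just an example):

import os

# Expected layout for retrain.py:
#   image_dir/
#       class_a/  img001.jpg, img002.jpg, ...
#       class_b/  ...
image_dir = "/tf_files/my_images"  # example path; point this at your own data
for label in sorted(os.listdir(image_dir)):
    label_dir = os.path.join(image_dir, label)
    if os.path.isdir(label_dir):
        jpegs = [f for f in os.listdir(label_dir)
                 if f.lower().endswith((".jpg", ".jpeg"))]
        print("%s: %d images" % (label, len(jpegs)))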
Once you have finished training, convert the graph to an optimized and/or quantized version so that you can use it in the Android mobile demo (refer to this page for how to do this: https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/).
I have a .ckpt checkpoint file for image recognition from my data scientist, and I would like to convert it to a .pt file following the instructions on the PyTorch tutorial website: https://pytorch.org/tutorials/beginner/deeplabv3_on_android.html
This is what I did:
model = torch.load(os.path.join(model_path, 'Image_segmentation.ckpt'), map_location=device)
model.eval()
scriptedm = torch.jit.script(model)
torch.jit.save(scriptedm, "Image_segmentation_Android.pt")
However I got the following error while trying to do so:
NotSupportedError Traceback (most recent call last)
<ipython-input-31-a8138feb2578> in <module>
1 model = torch.load(os.path.join(model_path,'model_eyeglasses.ckpt'), map_location=device)
2 model.eval()
----> 3 scriptedm = torch.jit.script(model)
4 torch.jit.save(scriptedm, "model_eyeglasses_Android.pt")
5 model.to(device)
After some reading, it seems that both file types can be used in Android development. I usually script in Python and am very new to Android, so I cannot be sure.
I was wondering if someone could confirm this? Unfortunately, I won't be able to get in contact with our data scientist for quite some time to train another model in .pt format.
Many thanks for your help.
There isn't an established difference between the file suffixes, because you can save arbitrary Python objects using torch.save with any suffix you want. For example, you can directly save the model itself, or you can save a dictionary that includes multiple models. (Related answer: https://stackoverflow.com/a/70541507/13095028)
As for why JIT scripting failed, there can be a variety of reasons. It could be that some tensor operations involved in the model genuinely are not supported (ref: https://pytorch.org/docs/stable/jit_unsupported.html).
It could also be a loading error, depending on how the model was saved. You can either save the model object directly, or save just the state_dict; they need to be loaded differently, as per the PyTorch docs: https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-for-inference
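Here is a hedged sketch of the difference, using a tiny stand-in module since your real model class isn't shown. If the .ckpt holds a state_dict, you have to instantiate the model class first and load the weights into it; if it holds the whole pickled model, torch.load alone returns the module. And if torch.jit.script keeps failing on unsupported constructs, torch.jit.trace with an example input is a common fallback (with the caveat that it only records the traced execution path):

import torch
import torch.nn as nn

# Stand-in module; your real segmentation model class would go here.
class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.conv(x))

# Simulate the checkpoint handed over by the data scientist (state_dict case).
torch.save(TinySegNet().state_dict(), "example.ckpt")

# Case 1: the .ckpt contains a state_dict -> build the model, then load weights.
model = TinySegNet()
model.load_state_dict(torch.load("example.ckpt", map_location="cpu"))
model.eval()

# Case 2: the .ckpt contains the whole pickled model object.
# model = torch.load("example.ckpt", map_location="cpu")
# model.eval()

# If torch.jit.script fails on unsupported ops, tracing is a common fallback.
scripted = torch.jit.trace(model, torch.rand(1, 3, 224, 224))
torch.jit.save(scripted, "example_android.pt")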
I cloned TensorFlow's Android image classification example, and I now want to use my own .tflite model, which I downloaded from here, to classify food. How do I replace the default .tflite models with my own? I only have one .tflite file, while the Android example ships with a couple, so I'm not sure which ones to get rid of.
First, add the model to the assets folder. Alternatively, you can download the model directly from TF Hub, in a similar way to the models in download.gradle.
Then, if you use the Task library to run inference (which is simpler), you will need to add a class similar to ClassifierFloatEfficientNet.java and provide the model info. Register your new class in Classifier.java.
If you use the TFLite Support library to run inference, you'll add a new class like this one.
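Before wiring the model in, it can also help to confirm that your downloaded .tflite's input size and number of output classes match what your new classifier class declares. A small sketch using the Python TFLite interpreter (the file name is just a placeholder for your model):

import tensorflow as tf

# Sketch: inspect the downloaded model; replace the path with your own file.
interpreter = tf.lite.Interpreter(model_path="food_classifier.tflite")
interpreter.allocate_tensors()

print("Inputs:", interpreter.get_input_details())    # e.g. shape [1, 224, 224, 3]
print("Outputs:", interpreter.get_output_details())  # e.g. shape [1, num_classes]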
I'm able to run the TensorFlow Lite image classification example on my mobile device. However, I want to swap the image classification model for a pose recognition model. In my case, the output should consist of a list of (x, y) coordinates.
The respective line in the code looks like this:
@Override
protected void runInference() {
  tflite.run(imgData, labelProbArray);
}
However, the tflite.run function has no visible source code (it is only available as a binary), so I don't know how it works or how to manipulate its return values.
I have worked with TensorFlow before; however, I don't know how to create a TensorFlow model that is compatible with the input and output expected by TensorFlow Lite.
Can anyone help or point me to some more detailed tutorial than the official documentation?
Any changes to the model have to be made on the TF side before converting it to a .tflite file. A pre-existing .tflite model can be inspected using the tool Netron.
When using a self-trained model (.ckpt files), you have to go through the following procedure:
create a graph definition file for evaluation,
use freeze_graph to freeze the previously created graph definition file with the latest .ckpt file from your training, which bakes the trained weights into it,
use tflite_convert (e.g. from the command line) to convert the frozen graph to a .tflite file, which you can then push to your Android application (a hedged Python sketch of this last step follows below).
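For the last step, the Python converter API is an alternative to the tflite_convert command line. This is only a sketch; the frozen-graph path and the input/output tensor names are placeholders you would replace with the ones from your own pose model (Netron or TensorBoard can tell you what they are):

import tensorflow as tf

# Sketch of the frozen-graph -> tflite step using the TF 1.x-style converter.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="frozen_pose_graph.pb",   # placeholder path
    input_arrays=["image"],                  # assumed input tensor name
    output_arrays=["keypoints"],             # assumed output tensor with (x, y) coordinates
)
tflite_model = converter.convert()

with open("pose_model.tflite", "wb") as f:
    f.write(tflite_model)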
The TensorFlow Android Camera Demo uses the Inception5h model for live image recognition, which delivers exceptional performance. Since I haven't had success retraining Inception5h, I've gone with the InceptionV3 model, but it's not quite as snappy at image recognition. So I'm back at the beginning, trying to retrain (or transfer learn) the Inception5h model. I've tried modifying retrain.py, but it's clearly written just for the v3 model: the 5h model doesn't contain the "pool_3/_reshape:0", "DecodeJpeg/contents:0", or "ResizeBilinear:0" tensors to begin with, and there are other differences as well.
I'm a bit of a newbie at machine learning and TensorFlow so I'd greatly appreciate clear steps as to what I have to do.
Thank you!
It looks like the retrain.py script and tutorial were just updated to work with the MobileNet architecture.
So that solves the first part of your problem: it's not actually Inception5h, but it runs well on mobile with much better accuracy than Inception5h.
To actually get it to run in the Android example, you'll still need to update these settings.
I think you should be able to just copy the settings determined for the MobileNet you choose from the retrain script, and you should be okay.
If you wanted to use a different network that doesn't have its settings in retrain.py, then the easiest way I can think of to determine them would be to explore the graph with TensorBoard.
So if you really wanted to use Inception5h, you could download and unzip it:
curl -O https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
unzip -d inception5h inception5h.zip
Then grab this simple script from the TensorFlow for Poets 2 codelab repo to convert the graph .pb file into something TensorBoard can use:
curl -O https://raw.githubusercontent.com/googlecodelabs/tensorflow-for-poets-2/master/scripts/graph_pb2tb.py
And run it on your graph.pb:
mkdir tb_graph
python graph_pb2tb.py tb_graph/inception5h inception5h/tensorflow_inception_graph.pb
And open it in tensorboard:
tensorboard --logdir tb_graph
Then it might be relatively simple to poke around in the graph and find the names of the nodes you need to fill up your own model_info dict.
Poking around that graph should show you the node you'd want to set as your bottleneck_tensor.
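For reference, a model_info entry in that version of retrain.py is roughly a dict like the one below. The exact keys can differ between retrain.py versions, and the values shown are the Inception v3 ones already mentioned in the question; for inception5h you would substitute whatever you read off the TensorBoard graph yourself:

# Rough shape of a model_info entry in retrain.py (keys may vary by version);
# values below are the well-known Inception v3 settings, not inception5h ones.
model_info = {
    'bottleneck_tensor_name': 'pool_3/_reshape:0',
    'bottleneck_tensor_size': 2048,
    'input_width': 299,
    'input_height': 299,
    'input_depth': 3,
    'resized_input_tensor_name': 'ResizeBilinear:0',
    'input_mean': 128,
    'input_std': 128,
}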
At the end of the retrain.py script you will notice these lines:
output_graph_def = graph_util.convert_variables_to_constants(
    sess, graph.as_graph_def(), [FLAGS.final_tensor_name])
with gfile.FastGFile(FLAGS.output_graph, 'wb') as f:
  f.write(output_graph_def.SerializeToString())
Here all the variables are saved as constants in a protocol buffer (.pb) file, which is written in binary mode ('wb'). You should also save the names of the model's classes in a text file. Then, as the Android documentation mentions, you should put these two files in a folder named "assets" in the Android path of TensorFlow. Finally, there are some modifications that need to be made to load the Inception-v3 model, which you can see here: https://github.com/tensorflow/tensorflow/issues/1269
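As a tiny sketch of that labels text file mentioned above (the class names here are just the flower example's labels; write your own, one per line, in the same order as the model's output layer):

# Sketch: one class name per line, ordered to match the model's output layer.
labels = ["daisy", "dandelion", "roses", "sunflowers", "tulips"]  # example classes
with open("/tf_files/retrained_labels.txt", "w") as f:
    f.write("\n".join(labels) + "\n")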
I hope this will help!
I am looking into RenderScript capabilities and am stuck on the A3D (Android 3D) file format. I can't find an easy way to convert a Collada file into the A3D format to store my Blender model.
I was wondering if you have any ideas I could try?
Does anyone have a working code sample so that I can see what I'm doing wrong?
More info: http://developer.android.com/reference/android/renderscript/FileA3D.html
Edit: not to be mistaken for the Asci3d file extension (also *.a3d).
As of Ice Cream Sandwich (perhaps earlier) there is a tool in the Android source to convert between Collada and A3D.
The tool is called a3dconvert; you can browse the source online here (in the ICS branch): https://github.com/android/platform_development/tree/ics-mr1-release/tools/a3dconvert
Usage:
a3dconvert input_file a3d_output_file
Currently .obj and .dae (collada) input files are accepted.
This tool has been removed as of newer releases (Jelly Bean, it looks like), probably because the graphics portion of RenderScript has been deprecated.
I'm not sure A3D is a good format, but if you have to write a converter, here is a description of both formats:
http://scorpion.tordivel.no/help/UsersGuide/General/ImageOperations/ImageFormats/ImageFormats_a3d.htm
http://en.wikipedia.org/wiki/COLLADA
And here is some sample code to read Collada:
http://sourceforge.net/projects/colladaloader/
If you're going from Blender to A3D, I would consider writing a Python script that exports directly to the A3D format from Blender. The A3D format seems rather simple, and if you're only accessing the mesh data, the Blender API isn't too hard to follow. Of course, if you don't already know it, you'll have to pick up some Python syntax.
I knew nothing of Python when I first wanted to pull some information out of Blender myself; by looking at existing .py scripts (like the OBJ exporter), reading the Blender API, and learning some basic Python syntax, I was able to write my first (rather simple) script in just a few hours.
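As a rough starting point, here is a sketch of pulling mesh data out of Blender; it has to be run inside Blender (it uses the bundled bpy module), and the actual A3D serialization is left out since that depends on the A3D spec. mesh.polygons is the attribute in recent Blender versions (older versions used mesh.faces):

import bpy  # only available inside Blender's bundled Python

# Sketch: walk the active object's mesh and print vertices and faces,
# which is the kind of data an A3D exporter would need to serialize.
obj = bpy.context.active_object
mesh = obj.data

for v in mesh.vertices:
    print("vertex", v.index, tuple(v.co))

for poly in mesh.polygons:
    print("face", poly.index, tuple(poly.vertices))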
http://colladablender.illusoft.com/cms/ is a project making a plugin for Blender to read Collada directly.
Also, Carrara could be used to convert your files to something Blender supports.