Need Tensorflow Suggestions - android

I have a dataset of ASL (American Sign Language) with 3,000 images per letter, and I am going to train my model with the help of the TensorFlow codelabs using this script:
"python -m scripts.retrain \
--bottleneck_dir=tf_files/bottlenecks \
--how_many_training_steps=? \
--model_dir=tf_files/models/ \ --summaries_dir=tf_files/training_summaries/"mobilenet_1.0_224" \
--output_graph=tf_files/retrained_graph.pb \
--output_labels=tf_files/retrained_labels.txt \
--architecture="mobilenet_1.0_224" \ --image_dir=tf_files/dataset".
Can anyone tell me how many steps I have to choose for accurate predictions?
I am new to deep learning; suggestions would be helpful as I am in the learning phase.

If you have about 3,000 images per letter and 26 letters, that gives you about 78,000 images per epoch. If your batch size is b, that gives you 78,000/b training steps per epoch. I'd suggest training for 10 epochs first and seeing what happens.
This is experimental science. Print the accuracy after each epoch and see whether the network keeps improving. Stop training when it stops improving significantly.
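For example, with the retrain script's default --train_batch_size of 100 (worth double-checking in your version of retrain.py), the arithmetic above works out to roughly 780 steps per epoch, so about 7,800 steps for 10 epochs:
# 78,000 images / 100 images per batch ≈ 780 steps per epoch
# 10 epochs ≈ 7,800 steps
python -m scripts.retrain \
--bottleneck_dir=tf_files/bottlenecks \
--how_many_training_steps=7800 \
--train_batch_size=100 \
--model_dir=tf_files/models/ \
--summaries_dir=tf_files/training_summaries/"mobilenet_1.0_224" \
--output_graph=tf_files/retrained_graph.pb \
--output_labels=tf_files/retrained_labels.txt \
--architecture="mobilenet_1.0_224" \
--image_dir=tf_files/dataset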

Related

How to use only 'concat' feature of ffmpeg and disable other components in Android?

I want to use the ffmpeg library in my Android application to concatenate mp4 videos. After lots of research I chose ffmpeg-kit to work with ffmpeg. The problem is that the APK size with the library is large and I want to reduce it. As described here, I have to disable unused components of ffmpeg, but I don't know which components I need and which I don't. I started by adding these lines to the ffmpeg.sh file of ffmpeg-kit, but it didn't work:
--disable-everything \
--enable-avcodec \
--enable-avformat \
--enable-ffmpeg \
I got the error below when executing the ffmpeg -f concat -safe 0 -i mp4_parts.txt -c copy output.mp4 command:
Unrecognized option 'safe'
I added those lines for no reason other than to find out the right components that I need.
So my question is: what components do I need to enable to use the concat feature of ffmpeg? Thanks
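The "Unrecognized option 'safe'" error suggests the concat demuxer (which owns the -safe option) was not compiled in. As a rough, untested sketch, a stream-copy concat of mp4 files would presumably need at least the concat and mov/mp4 demuxers, the mp4 muxer, and the file protocol on top of the lines above; the exact component names should be confirmed against ./configure --list-demuxers, --list-muxers and --list-protocols:
--disable-everything \
--enable-ffmpeg \
--enable-avcodec \
--enable-avformat \
--enable-demuxer=concat \
--enable-demuxer=mov \
--enable-muxer=mp4 \
--enable-protocol=file \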

Tensorflow lite accuracy drop on mobile device

I followed both Tensorflow for Poets tutorials:
Tensorflow for Poets 1 and Tensorflow for Poets 2.
My retrained model gives accurate results when tested on my laptop, but after converting it to a .tflite file and trying to classify the same image on my Android device, the accuracy drops below 1%.
I used the following commands to retrain and convert:
python retrain.py \
--bottleneck_dir=tf_files/bottlenecks \
--how_many_training_steps=500 \
--model_dir=tf_files/models/ \
--summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
--output_graph=tf_files/retrained_graph.pb \
--output_labels=tf_files/retrained_labels.txt \
--architecture="${ARCHITECTURE}" \
--image_dir=tf_files/flower_photos
toco \
--input_file=tf_files/retrained_graph.pb \
--output_file=tf_files/optimized_graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,224,224,3 \
--input_array=Placeholder \
--output_array=final_result \
--inference_type=FLOAT \
--input_data_type=FLOAT
Strangely, the optimized file is almost as large as the original (both around 80 MB).
Using TensorFlow 1.9.0 and Python 3.6.6.
Any help or tip is appreciated!
Well, I figured it out. Apparently the ARCHITECTURE variable was not set to the right value. So if anyone encounters the same problem, check that first of all.
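For reference, a minimal sketch of what that check looks like before running the two commands above (mobilenet_1.0_224 is only an example value; use the architecture you actually retrained with, and note that its input size has to match the --input_shape passed to toco):
# Set and verify ARCHITECTURE before calling retrain.py and toco
ARCHITECTURE="mobilenet_1.0_224"
echo "${ARCHITECTURE}"   # should print the architecture name, not an empty line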

How to improve computational time in tensorflow

I am using TensorFlow on Android. I installed the available TFClassify APK. I ran the application and it runs swiftly, with an inference time of not more than 400 ms. However, when I replaced the provided trained model with my own model, it takes around 2000 ms of computation before displaying the result. Why is there such a difference, and how can I optimize my retrained_graph.pb?
Did you convert the retrained model to an optimized and quantized graph?
If not try:
tensorflow/bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=retrained_graph.pb \
--output=optimized_graph.pb \
--input_names=Mul \
--output_names=final_result
tensorflow/bazel-bin/tensorflow/tools/quantization/quantize_graph \
--input=optimized_graph.pb \
--output=rounded_graph.pb \
--output_node_names=final_result \
--mode=weights_rounded
FYI, you have to build these tools first.
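Building them presumably looks something like the following from a TensorFlow source checkout (the Bazel target paths can differ between TensorFlow versions, so treat this as a sketch):
bazel build tensorflow/python/tools:optimize_for_inference
bazel build tensorflow/tools/quantization:quantize_graph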

this memory alloc. code runs perfectly on x86, but periodically causes segfaults on Android

I am having a hard time tracking down a bug in native C code within my Android application. This code sits in Wireshark, which I've ported to Android. I have run this same code on x86 numerous times under Valgrind and GDB and can never find a problem with it. However, when it runs on Android, it seems to behave differently and cause segmentation faults every so often (e.g., after running ~100 times).
To be honest, I do not understand the syntax of the code that well, so I'm having a hard time understanding what assumptions might have been made about an x86 machine that do not hold about an ARM-based processor.
Essentially, what it tries to do is bypass constantly allocating new memory and freeing it by placing memory into a "bucket" and allowing it to be reused. Whether or not this is actually better in terms of performance is a separate question. I'm simply trying to adopt pre-existing code. But, to do so, it has a couple of main macros:
#define SLAB_ALLOC(item, type) \
    if(!type ## _free_list){ \
        int i; \
        union type ## slab_item *tmp; \
        tmp=g_malloc(NITEMS_PER_SLAB*sizeof(*tmp)); \
        for(i=0;i<NITEMS_PER_SLAB;i++){ \
            tmp[i].next_free = type ## _free_list; \
            type ## _free_list = &tmp[i]; \
        } \
    } \
    item = &(type ## _free_list->slab_item); \
    type ## _free_list = type ## _free_list->next_free;
#define SLAB_FREE(item, type) \
{ \
    ((union type ## slab_item *)item)->next_free = type ## _free_list; \
    type ## _free_list = (union type ## slab_item *)item; \
}
Then, a couple supporting macros for specific types:
#define SLAB_ITEM_TYPE_DEFINE(type) \
    union type ## slab_item { \
        type slab_item; \
        union type ## slab_item *next_free; \
    };
#define SLAB_FREE_LIST_DEFINE(type) \
    union type ## slab_item *type ## _free_list = NULL;
#define SLAB_FREE_LIST_DECLARE(type) \
    union type ## slab_item *type ## _free_list;
Does anyone recognize any assumptions about x86 that might not fly on an Android phone? Eventually what happens is that SLAB_ALLOC() is called and it returns something from the list. Then the following code attempts to use the memory, and the application segfaults. This leads me to believe it's accessing invalid memory. It happens unpredictably, but it always happens on the first attempt to use memory that SLAB_ALLOC() returns.
Is it possible you're simply running out of memory? The SLAB_ALLOC macro calls g_malloc, which aborts if the allocation fails.
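To make the macro plumbing concrete, here is a small, hypothetical usage sketch (the demo_t type and the NITEMS_PER_SLAB value are illustrative, not from Wireshark; it assumes the macros quoted above are in scope):
#include <glib.h>

#define NITEMS_PER_SLAB 100          /* illustrative slab size */

typedef struct {
    int  id;
    char name[32];
} demo_t;                            /* hypothetical payload type */

SLAB_ITEM_TYPE_DEFINE(demo_t)        /* defines the union wrapping demo_t with a next_free pointer */
SLAB_FREE_LIST_DEFINE(demo_t)        /* defines the per-type free list head, initially NULL */

static void demo(void)
{
    demo_t *item;

    SLAB_ALLOC(item, demo_t);        /* pop a slot; g_malloc a fresh slab of 100 if the list is empty */
    item->id = 1;                    /* first use of the returned memory -- where the crash reportedly occurs */

    SLAB_FREE(item, demo_t);         /* push the slot back onto the free list for reuse */
}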

compiling dynamic arm code

I'm building some common GNU/Linux console utilities for my Android phone, but so far I have only been able to build them statically, with quite a size penalty. Can someone walk me through the steps for dynamic compiles using shared libraries?
Here's the script(s) I'm using for configuration:
./configure --host=arm-none-linux-gnueabi \
CC="arm-none-linux-gnueabi-gcc" \
CROSS_COMPILE="arm-none-linux-gnueabi-" \
CFLAGS=" -static $_XXFLAGS" \
for shared:
./configure --host=arm-none-linux-gnueabi \
CC="arm-none-linux-gnueabi-gcc" \
CROSS_COMPILE="arm-none-linux-gnueabi-" \
--enable-shared=yes --enable-static=no
Do I need to make the libs on my Android phone available to my cross-compiler? Google isn't helping me here.
You would have to provide the location for the shared libraries that you want to link against. Please post the error that you're getting for a better answer, but take a look at my answer to
install 64-bit glib2 on 32-bit system for cross-compiling
You should just need to add the right -L and -Wl,-rpath-link to the CFLAGS variable when you're running configure.
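As a rough sketch of that suggestion (the sysroot path below is hypothetical; point it at wherever the target's shared libraries actually live):
./configure --host=arm-none-linux-gnueabi \
CC="arm-none-linux-gnueabi-gcc" \
CROSS_COMPILE="arm-none-linux-gnueabi-" \
--enable-shared=yes --enable-static=no \
CFLAGS="-L/path/to/target/sysroot/usr/lib -Wl,-rpath-link=/path/to/target/sysroot/usr/lib"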
