TensorFlow Lite accuracy drop on mobile device - Android

I followed both TensorFlow for Poets tutorials:
TensorFlow for Poets 1 and TensorFlow for Poets 2.
My retrained model gives accurate results when tested on my laptop, but after converting it to a .tflite file and classifying the same image on my Android device, the accuracy drops to under 1%.
I used the following commands to retrain and convert:
python retrain.py \
--bottleneck_dir=tf_files/bottlenecks \
--how_many_training_steps=500 \
--model_dir=tf_files/models/ \
--summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
--output_graph=tf_files/retrained_graph.pb \
--output_labels=tf_files/retrained_labels.txt \
--architecture="${ARCHITECTURE}" \
--image_dir=tf_files/flower_photos
toco \
--input_file=tf_files/retrained_graph.pb \
--output_file=tf_files/optimized_graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,224,224,3 \
--input_array=Placeholder \
--output_array=final_result \
--inference_type=FLOAT \
--input_data_type=FLOAT
Strangely, the optimized file is almost as large as the original (both around 80 MB).
I'm using TensorFlow 1.9.0 and Python 3.6.6.
Any help or tip is appreciated!

Well, I figured it out. Apparently the ARCHITECTURE variable was not set to the right value. So if anyone encounters the same problem, first of all check that the ARCHITECTURE variable is set to the value you actually intended.
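For what it's worth, the roughly 80 MB size of the converted file is a hint: that is about the size of a retrained Inception V3 graph rather than a MobileNet, so the 224x224 input shape and input array name passed to toco may not have matched the graph that was actually retrained. A minimal sketch of setting the variable consistently, with example values in the style of the TensorFlow for Poets codelab (the exact model string is an assumption, not taken from the question):
# Assumed example: a MobileNet variant whose 224x224 input matches the
# --input_shape=1,224,224,3 passed to toco above.
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
echo "ARCHITECTURE=${ARCHITECTURE}"   # verify it is set before running retrain.py
The same ARCHITECTURE then has to be used by retrain.py, and the Android app has to feed images of the matching input size.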

Related

Android NDK Build FFMPEG in 2021

I'm working on an Android app, and I have to convert webm files to mp3.
I really want to make a custom FFmpeg build, because it reduces the ffmpeg executable size to only 2 MB.
My library works absolutely fine when running on my PC, but I'm struggling to build it for Android. It seems the NDK layout has changed and tutorials are outdated, and I can't find a proper, recent guide for compiling for Android.
I would also like to target all architectures (aarch64, armv7, i686, and x86_64).
I've been on this for hours and fixed many errors, but still nothing has worked.
Please help me!
PS: I'm compiling on Linux. Here is my configuration script:
#!/bin/bash
API=31 # target android api
OUTPUT=/home/romain/dev/android/ffmpeg_build
NDK=/home/romain/android-sdk/ndk/23.0.7599858
TOOLCHAIN=$NDK/toolchains/llvm/prebuilt/linux-x86_64
SYSROOT=$TOOLCHAIN/sysroot
TOOL_PREFIX="$TOOLCHAIN/bin/aarch64-linux-android"
CC="$TOOL_PREFIX$API-clang"
CXX="$TOOL_PREFIX$API-clang++"
# Note: ARCH and CPU are referenced below but never set in this script
# (e.g. ARCH=aarch64, CPU=armv8-a for the toolchain above).
./configure \
--prefix=$OUTPUT \
--target-os=android \
--arch=$ARCH \
--cpu=$CPU \
--disable-everything \
--disable-network \
--disable-autodetect \
--enable-small \
--enable-decoder=opus,vorbis \
--enable-demuxer=matroska \
--enable-muxer=mp3 \
--enable-protocol=file \
--enable-filter=aresample \
--enable-libshine \
--enable-encoder=libshine \
--cc=$CC \
--cxx=$CXX \
--sysroot=$SYSROOT \
--extra-cflags="-Os -fpic"   # was "-0s", which is not a valid optimization flag
make
make install
The prefix should point to $SYSROOT/usr/; you misunderstood what --prefix means, it's not an output directory. Other than that, I don't see anything problematic (if it still happens, please provide ffbuild/config.log).
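For reference, a minimal sketch of that suggestion, reusing the variables from the question's script (not a verified build recipe): install ffmpeg into the toolchain sysroot instead of a separate output directory, so that builds compiling against this sysroot can find it.
# PREFIX points into the toolchain sysroot rather than a build-output directory.
PREFIX="$SYSROOT/usr"
./configure \
--prefix="$PREFIX" \
--target-os=android \
--sysroot="$SYSROOT" \
--cc="$CC" \
--cxx="$CXX"
# ...plus the same codec/muxer/filter flags as in the original script
make
make install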
The repository pointed to by the previous answer is no longer being maintained.
Here is an updated one.
This is the android branch: https://github.com/arthenica/ffmpeg-kit/tree/main/android

Custom implementation for Dequantize when converting .pb to .tflite

I'm trying to convert a quantized TensorFlow .pb file to .tflite using toco. The command for creating the .pb file is below (retrain.py is here and here):
python retrain.py \
--bottleneck_dir=/mobilenet_q/bottlenecks \
--how_many_training_steps=4000 \
--output_graph=/mobilenet_q/retrained_graph_mobilenet_q_1_224.pb \
--output_labels=/mobilenet_q/retrained_labels_mobilenet_q_1_224.txt \
--image_dir=/data \
--architecture=mobilenet_1.0_224_quantized
When I try to convert the .pb file to .tflite using the following toco command:
bazel run --config=opt //tensorflow/contrib/lite/toco:toco \
-- --input_file=retrained_graph_mobilenet_q_1_224.pb \
--output_file=retrained_graph_mobilenet_q_1_224.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,224,224,3 \
--input_array=input \
--output_array=final_result \
--inference_type=FLOAT \
--input_data_type=FLOAT
I'm getting the error:
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.contrib.lite.toco_convert(). Here is a list of operators for which you will need custom implementations: Dequantize.
I've searched on GitHub and Stack Overflow, but I haven't come across a satisfactory answer.
The discussion and the solution are here.
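For completeness, the workaround quoted in the error message itself is to let toco emit Dequantize as a custom op; you then have to register a custom implementation with the TFLite interpreter at runtime. A hedged sketch, simply adding --allow_custom_ops to the original command:
bazel run --config=opt //tensorflow/contrib/lite/toco:toco \
-- --input_file=retrained_graph_mobilenet_q_1_224.pb \
--output_file=retrained_graph_mobilenet_q_1_224.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,224,224,3 \
--input_array=input \
--output_array=final_result \
--inference_type=FLOAT \
--input_data_type=FLOAT \
--allow_custom_ops
Since the graph was retrained with the mobilenet_1.0_224_quantized architecture, converting with a quantized inference type instead of FLOAT is another option worth exploring, so that no float Dequantize op is needed at all.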

TFLite: Unable to obtain Inference using MobilenetV2 on custom Dataset

I followed this link and successfully created a frozen graph for MobilenetV2_1.4_224 by fine-tuning it on my custom dataset.
Then I followed tensorflow-for-poets:tflite to create the TFLite graph with toco, using the following command.
IMAGE_SIZE=224
toco \
--input_file=frozen_mobilenet_v2.pb \
--output_file=optimized_graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=10,${IMAGE_SIZE},${IMAGE_SIZE},3 \
--input_array=input \
--output_array=MobilenetV2/Predictions/Softmax \
--inference_type=FLOAT \
--input_data_type=FLOAT
The lite graph was created successfully, but during inference, while running the TFLite Interpreter, I get the following error and therefore get no inferences:
Input error: Failed to get input dimensions. 0-th input should have 6021120 bytes, but found 602112 bytes.
Have you tried the argument --input_array=Predictions?
You can try using a quantized model.
Input error: Failed to get input dimensions.
The command-line flag should be --input_arrays and not --input_array (plural instead of singular).
Ditto --output_arrays instead of --output_array. That should resolve your error, so the command should be:
IMAGE_SIZE=224
toco \
--input_file=frozen_mobilenet_v2.pb \
--output_file=optimized_graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=10,${IMAGE_SIZE},${IMAGE_SIZE},3 \
--input_arrays=input \
--output_arrays=MobilenetV2/Predictions/Softmax \
--inference_type=FLOAT \
--input_data_type=FLOAT
Additional debugging tip
For future reference, if your flags are all correct, the next step is to check that your input and output tensor names are correct. You can visualize the inputs and outputs of your graph in TensorBoard, or convert it to pbtxt as follows and read it out:
import tensorflow as tf

path_to_pb = '...'      # frozen GraphDef to inspect
output_file = '...'     # where to write the text (pbtxt) dump

# Parse the binary GraphDef and write it back out as readable text,
# so the input/output node names can be checked by eye.
g = tf.GraphDef()
with open(path_to_pb, 'rb') as f:
    g.ParseFromString(f.read())
with open(output_file, 'w') as f:
    f.write(str(g))
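One more thing worth checking, given the numbers in the error message: 602,112 bytes is exactly one 224x224x3 float image (224*224*3*4 bytes), and 6,021,120 bytes is ten of them. In other words, the converted model expects a batch of 10 images because of --input_shape=10,224,224,3, while the app appears to feed a single image. If you only classify one image at a time, converting with a batch size of 1 should make the sizes match (a sketch based on the command above):
IMAGE_SIZE=224
toco \
--input_file=frozen_mobilenet_v2.pb \
--output_file=optimized_graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,${IMAGE_SIZE},${IMAGE_SIZE},3 \
--input_arrays=input \
--output_arrays=MobilenetV2/Predictions/Softmax \
--inference_type=FLOAT \
--input_data_type=FLOAT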

android ndk: no include path in which to search for limits.h (#include_next <limits.h>)

When I build the x264 NDK library, I face a problem.
I've compiled in both Windows and Linux environments and got the same errors, like this:
In file included
from c:\users\xxx\appdata\local\android\sdk\ndk-bundle\toolchains\
aarch64-linux-android-4.9\prebuilt\windows-x86_64\lib\gcc\aarch64-linux-android\4.9.x\include-fixed\syslimits.h:7:0,
from c:\users\xxx\appdata\local\android\sdk\ndk-bundle\toolchains\
aarch64-linux-android-4.9\prebuilt\windows-x86_64\lib\gcc\aarch64-linux-android\4.9.x\include-fixed\limits.h:34,
from ./common/common.h:123,
from ./x264cli.h:30,
from ./input/input.h:31,
from ./filters/video/video.h:29,
from ./filters/video/depth.c:26:
c:\users\xxx\appdata\local\android\sdk\ndk-bundle\toolchains\aarch64-linux-android-4.9\
prebuilt\windows-x86_64\lib\gcc\aarch64-linux-android\4.9.x\include-fixed\limits.h:168:61:
error: no include path in which to search for limits.h
#include_next <limits.h> /* recurse down to the real one */
make: *** [.depend] Error 1
Here is my script:
SYSROOT=$NDK/platforms/android-21/arch-arm64
TOOLCHAIN=$NDK/toolchains/aarch64-linux-android-4.9/prebuilt/windows-x86_64
CC=$TOOLCHAIN/bin/aarch64-linux-android-gcc-4.9.x
#CXX=$TOOLCHAIN/bin/aarch64-linux-android-g++
CROSS_PREFIX=$TOOLCHAIN/bin/aarch64-linux-android-
EXTRA_CFLAGS="-march=armv8-a -D__ANDROID__"
EXTRA_LDFLAGS="-nostdlib"
./configure --prefix=$PREFIX \
--host=arm-linux \
--sysroot=$SYSROOT \
--cross-prefix=$CROSS_PREFIX \
--extra-cflags="$EXTRA_CFLAGS" \
--extra-ldflags="$EXTRA_LDFLAGS" \
--enable-pic \
--enable-static \
--enable-strip \
--disable-cli \
--disable-win32thread \
--disable-avs \
--disable-swscale \
--disable-lavf \
--disable-ffms \
--disable-gpac \
--disable-lsmash \
--disable-asm \
--disable-opencl
Does anyone know how to solve it? Thanks very much.
To build with the latest NDK you need to use --deprecated-headers when creating a standalone toolchain.
Some additional info: NDK unified headers
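A hedged sketch of that step; the NDK path and install directory are placeholders, and the flag names assume the make_standalone_toolchain.py script shipped with NDKs of that era:
# Create a standalone toolchain that still uses the old (non-unified) headers.
# $NDK points at the NDK installation; adjust --arch/--api to your target.
python "$NDK/build/tools/make_standalone_toolchain.py" \
--arch arm64 \
--api 21 \
--deprecated-headers \
--install-dir /tmp/android-arm64-toolchain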
This problem occurs when you use recent versions of the Android NDK. Please use an older version such as Android NDK r13b; I have successfully built it on my Mac using NDK r13b.
The link is given below (use the one specific to your platform):
https://dl.google.com/android/repository/android-ndk-r13b-darwin-x86_64.zip
You can also follow this link https://osburneblog.wordpress.com/2017/06/01/cross-compiling-ffmpeg-and-libx264-for-android/ to learn more about the build process.

Android caffe ForwardPrefilled() doesn't work in Multithread objects

I'm having problems with m_caffe_net->ForwardPrefilled(), but only in the Android threading case.
My algorithm is a basic Caffe pipeline:
load models -> process -> get the result (CPU mode).
If I run the code on the UI thread everything works fine, but Android discourages that because it freezes the GUI.
I tested threads, AsyncTask and Runnables, and always got the same SIGSEGV (signal 11) error.
So I inspected my call stack in Android Studio and noticed that the last call was omp_get_num_threads.
Is it necessary to call the OpenMP omp_set_num_threads function to execute ForwardPrefilled() in multicore mode?
My sample is similar to this:
https://github.com/sh1r0/caffe-android-demo
Caffe lib compilation is this:
https://github.com/sh1r0/caffe-android-lib
Thanks in advance.
Finally I found a solution:
I disabled the OpenMP option in Caffe's script/build_caffe.sh file.
cmake -DCMAKE_TOOLCHAIN_FILE="${WD}/android-cmake/android.toolchain.cmake" \
-DANDROID_NDK="${NDK_ROOT}" \
-DCMAKE_BUILD_TYPE=Release \
-DANDROID_ABI="${ANDROID_ABI}" \
-DANDROID_NATIVE_API_LEVEL=21 \
-DANDROID_USE_OPENMP=OFF \
-DADDITIONAL_FIND_PATH="${ANDROID_LIB_ROOT}" \
-DBUILD_python=OFF \
-DBUILD_docs=OFF \
-DCPU_ONLY=ON \
-DUSE_LMDB=ON \
-DUSE_LEVELDB=OFF \
-DUSE_HDF5=OFF \
-DBLAS=${BLAS} \
-DBOOST_ROOT="${BOOST_HOME}" \
-DGFLAGS_INCLUDE_DIR="${GFLAGS_HOME}/include" \
-DGFLAGS_LIBRARY="${GFLAGS_HOME}/lib/libgflags.a" \
-DGLOG_INCLUDE_DIR="${GLOG_ROOT}/include" \
-DGLOG_LIBRARY="${GLOG_ROOT}/lib/libglog.a" \
-DOpenCV_DIR="${OPENCV_ROOT}" \
-DPROTOBUF_PROTOC_EXECUTABLE="${ANDROID_LIB_ROOT}/protobuf_host/bin/protoc" \
-DPROTOBUF_INCLUDE_DIR="${PROTOBUF_ROOT}/include" \
-DPROTOBUF_LIBRARY="${PROTOBUF_ROOT}/lib/libprotobuf.a" \
-DCMAKE_INSTALL_PREFIX="${ANDROID_LIB_ROOT}/caffe" \
The result is a little bit slower but it works :).
