Parse a multi-page PDF into multiple bitmaps on Android - android

I want to pick a PDF file (which contains multiple pages) from my device using an Intent, then parse this PDF into multiple images and show them in my ViewPager. I successfully got the file from my device, but how do I parse the PDF into multiple bitmaps on Android?

Tool to parse: Android-ImageMagick
You can use ImageMagick to parse PDF files.
There is an Android port: paulasiimwe/Android-ImageMagick, an Android port of ImageMagick based on the techblue/jmagick Java library.
Command to parse:
Try something like this:
convert \
-verbose \
-density 150 \
-trim \
<your-PDF-file>.pdf \
-quality 100 \
-flatten \
-sharpen 0x1.0 \
<output-name>.jpg
P.S.: convert is part of the ImageMagick package; given a multi-page PDF it writes one JPEG per page (<output-name>-0.jpg, <output-name>-1.jpg, ...).
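Note: if you only need bitmaps and can require API 21+, the framework's built-in android.graphics.pdf.PdfRenderer can also do this without any native library. A rough sketch, assuming the picked PDF has already been copied to a local File (the helper name is made up; for a content:// Uri you would open the ParcelFileDescriptor via ContentResolver instead):

import android.graphics.Bitmap;
import android.graphics.pdf.PdfRenderer;
import android.os.ParcelFileDescriptor;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Renders every page of a local PDF file into its own Bitmap (API 21+).
static List<Bitmap> pdfToBitmaps(File pdfFile) throws IOException {
    List<Bitmap> pages = new ArrayList<>();
    ParcelFileDescriptor fd =
            ParcelFileDescriptor.open(pdfFile, ParcelFileDescriptor.MODE_READ_ONLY);
    PdfRenderer renderer = new PdfRenderer(fd);
    try {
        for (int i = 0; i < renderer.getPageCount(); i++) {
            PdfRenderer.Page page = renderer.openPage(i);
            // Page size is in points (1/72 inch); scale it up for a sharper bitmap.
            Bitmap bitmap = Bitmap.createBitmap(
                    page.getWidth() * 2, page.getHeight() * 2, Bitmap.Config.ARGB_8888);
            bitmap.eraseColor(0xFFFFFFFF); // white background, pages render onto transparency
            page.render(bitmap, null, null, PdfRenderer.Page.RENDER_MODE_FOR_DISPLAY);
            page.close();
            pages.add(bitmap);
        }
    } finally {
        renderer.close();
        fd.close();
    }
    return pages;
}

The resulting list can be fed straight into a ViewPager adapter; for large documents, render pages lazily instead of all at once to keep memory usage down.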

Related

How to use only 'concat' feature of ffmpeg and disable other components in Android?

I want to use the ffmpeg library in my Android application to concatenate mp4 videos. After lots of research I chose ffmpeg-kit to work with ffmpeg. The problem is that the APK size with the library is large and I want to reduce it. As described here, I have to disable unused components of ffmpeg, but I don't know which components I need and which I don't. I started by adding these lines to ffmpeg-kit's ffmpeg.sh file, but it didn't work:
--disable-everything \
--enable-avcodec \
--enable-avformat \
--enable-ffmpeg \
I got the error below when executing the ffmpeg -f concat -safe 0 -i mp4_parts.txt -c copy output.mp4 command:
Unrecognized option 'safe'
I added those lines only as a starting point, to figure out which components I actually need.
So my question is: which components do I need to enable to use the concat feature of ffmpeg? Thanks

How to apply metadata on tflite model?

I'm trying to launch TF Object detection Android app (https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android) with a custom model
I need to fix this issue
java.lang.AssertionError: Error occurred when initializing ObjectDetector: Input tensor has type kTfLiteFloat32: it requires specifying NormalizationOptions metadata to preprocess input images.
I found a suggestion that I need to apply metadata to my .tflite model, so I tried to run
python tflite_convert.py \
--input_shapes="1,300,300,3" \
--input_arrays=normalized_input_image_tensor \
--output_arrays="TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3" \
--allow_custom_ops \
--saved_model_dir=alexey/saved_model \
--inference_input_type=FLOAT \
--inference_type=FLOAT \
--output_file=detect.tflite
It finished without any errors, but when I launch the app with the generated .tflite I get the same error as without applying metadata.
So it seems to me that the metadata was not applied.
I had the same error today. I solved it by running this script. It generated a .tflite and a .json file, and I put both of them in the assets/models folder.
If you are on Windows 10, it's easier to modify the flags in the script's string rather than use command-line parameters (just replace where it says none with the correct path).

Tensorflow: Building graph and label files from checkpoint file

I want to build the graph and labels file from the inception-resnet-v2.ckpt file. I have already downloaded the checkpoint file with
wget http://download.tensorflow.org/models/inception_resnet_v2_2016_08_30.tar.gz
I want to replace the inception5h model in the TensorFlow Android camera demo app with inception-resnet-v2, which requires a MODEL_FILE and a LABEL_FILE.
Now I don't know how I can get a .pb file and a label file from a checkpoint file.
I am learning TensorFlow and am still at a beginner level.
Not sure what the label file is, but to convert a checkpoint into a .pb file (a binary protobuf), you have to freeze the graph. Here is a script I use for it:
#!/bin/bash -x
# The script combines a graph definition and trained weights into
# a single binary protobuf with constant holders for the weights.
# The resulting graph is suitable for processing with other tools.
TF_HOME=~/tensorflow/
if [ $# -lt 4 ]; then
  echo "Usage: $0 graph_def snapshot output_nodes output.pb"
  exit 1
fi
proto=$1
snapshot=$2
out_nodes=$3
out=$4
$TF_HOME/bazel-bin/tensorflow/python/tools/freeze_graph --input_graph=$proto \
  --input_checkpoint=$snapshot \
  --output_graph=$out \
  --output_node_names=$out_nodes
Here, proto is a Graph definition (text protobuf), and snapshot is a checkpoint.
You will need to optimize your model after you have frozen it.
Look at this great tutorial
As for the labels, you can get them here (credits to Hands-On Machine Learning with Scikit-Learn and TensorFlow).
bazel build tensorflow/python/tools:optimize_for_inference
bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=/tf_files/retrained_graph.pb \
--output=/tf_files/optimized_graph.pb \
--input_names=Mul \
--output_names=final_result
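Once you have the frozen (and optimized) .pb plus a plain-text label file (one label per line), drop both into the demo's assets folder and point the demo at them. In the camera demo these are just asset-path constants; the file names below are placeholders for whatever you name your own files:

// Placeholder file names -- use the names of your own frozen graph and label file.
private static final String MODEL_FILE = "file:///android_asset/inception_resnet_v2_frozen.pb";
private static final String LABEL_FILE = "file:///android_asset/labels.txt";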

TensorFlow: Could not load custom protobuf files in Android

I have just created a protobuf file (.pb file) for my own custom images using a TensorFlow tutorial.
But when I put that file into the assets folder (tensorflow/examples/android/assets) and build the APK, the APK gets generated, but when I run it on an Android device, it crashes.
If I run the classify_image from Python, it gives me proper results.
Appreciate any help.
Since DecodeJpeg isn't supported as part of the core, you'll need to strip it out of the graph first.
bazel build tensorflow/python/tools:strip_unused && \
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=your_retrained_graph.pb \
--output_graph=stripped_graph.pb \
--input_node_names=Mul \
--output_node_names=final_result \
--input_binary=true
Then change a few parameters in this file:
/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowImageListener.java
The input size needs to be 299, not 224. You'll also need to change the mean and std values both to 128, INPUT_NAME to "Mul:0", and OUTPUT_NAME to "final_result:0", after which you will be able to compile the APK.
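Put together, the constants in TensorFlowImageListener.java end up roughly like this (the exact field names can differ between versions of the demo, so treat this as an illustration):

// Illustrative values only; field names depend on the demo version.
private static final int INPUT_SIZE = 299;   // was 224
private static final int IMAGE_MEAN = 128;
private static final float IMAGE_STD = 128f;
private static final String INPUT_NAME = "Mul:0";
private static final String OUTPUT_NAME = "final_result:0";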
Good Luck

Android Facebook Video Upload with Graph API - How to use multipart/form-data

I am implementing video upload with the Graph API and I don't understand the chunks part. It says:
The request parameters are:
upload_phase (enum) - Set to transfer
upload_session_id (int32) - The session id returned in the start phase
start_offset (int32) - Start byte position of this chunk
video_file_chunk (multipart/form-data) - The video chunk, encoded as form data
And they provide the following example:
curl \
-X POST \
"https://graph-video.facebook.com/v2.3/1533641336884006/videos" \
-F "access_token=XXXXXXX" \
-F "upload_phase=transfer" \
-F “start_offset=0" \
-F "upload_session_id=1564747013773438" \
-F "video_file_chunk=#chunk1.mp4"
I don't understand the video_file_chunk part. How do I encode it as multipart/form-data? All I have is a file, and I can read bytes from it.
I found the solution using the Android Async library (Ion):
Ion.with(context)
.load(url)
.uploadProgress(progressCallback)
.setMultipartParameter("access_token", AccessToken.getCurrentAccessToken().getToken())
.setMultipartParameter("upload_phase", "transfer")
.setMultipartParameter("upload_session_id", Long.toString(uploadSessionId))
.setMultipartParameter("start_offset", Long.toString(startOffset))
.setMultipartFile("video_file_chunk", chunkFile)
.asByteArray()
.setCallback(completeCallback);
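For the chunking itself, the transfer-phase responses from the Graph API tell you the next start_offset and end_offset; one simple approach is to copy that byte range out of the source video into a temporary file and pass it to setMultipartFile. A rough sketch (the helper name and buffer size are arbitrary):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;

// Copies bytes [startOffset, endOffset) of the source video into a temp file
// that can be sent as the "video_file_chunk" multipart part.
static File writeChunk(File source, long startOffset, long endOffset) throws IOException {
    File chunkFile = File.createTempFile("upload_chunk", ".mp4");
    try (RandomAccessFile in = new RandomAccessFile(source, "r");
         FileOutputStream out = new FileOutputStream(chunkFile)) {
        in.seek(startOffset);
        byte[] buffer = new byte[64 * 1024];
        long remaining = endOffset - startOffset;
        int read;
        while (remaining > 0
                && (read = in.read(buffer, 0, (int) Math.min(buffer.length, remaining))) != -1) {
            out.write(buffer, 0, read);
            remaining -= read;
        }
    }
    return chunkFile;
}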
