Do Android PDF viewers convert the PDF into an image?

I am making an app, and part of it will view PDFs. The PDFs are on a web server and are downloaded to the app. I am trying to understand what happens when a viewer loads a PDF. If it is converted to an image anyway, then I would like to try converting the PDF to a PNG on the server and just viewing that copy in the app, as a PNG is a lot less hassle to deal with.
The only reason I would not convert to PNG is if an Android PDF viewer maintains the vector nature of the file, as zooming is critical and I want a nice crisp image.

Ultimately, someone is always going to convert the PDF to an image, or better said, to pixels, because that is what you need to display on the tablet screen (or any screen that I know of). The question is who does the conversion, when it is done, and how well and how quickly it is done.
For a tablet viewer, the challenge is to do the conversion quickly enough not to bother the user with load times, and that often comes at the price of quality. There are virtually no PDF viewing applications on either iOS or Android at this point that do a really good job of showing all the intricacies of the PDF file format.
That being said, the quality is usually good enough, and if the viewer is well implemented, zooming a PDF file should be a no-brainer. To the application, zooming simply means that the viewport (the part of the PDF page that is visible) changes; it doesn't really change the algorithm used to convert the PDF page elements into pixels.
That is also the difference between you converting the PDF to an image on the server and the app converting it to pixels on the device. Your server likely has more computing power (at least it might :-)), but the application knows at which resolution it needs to convert to pixels and what part of the page it has to convert. A good viewing application can use these details to adapt how it does the conversion; there are lots of optimisation algorithms that render only the visible elements and take shortcuts based on knowing exactly what resolution will be used.
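For the Android side, here is a minimal sketch (not from the original answer) of on-device rendering with the platform's PdfRenderer class (API 21+). The file name, viewport size, and zoom handling are assumptions, and panning/tiling is omitted; it only illustrates the point that the device can rasterize at exactly the resolution the viewport needs.

import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.graphics.pdf.PdfRenderer;
import android.os.ParcelFileDescriptor;

import java.io.File;
import java.io.IOException;

public class PdfPageRasterizer {

    // Rasterize one PDF page into a bitmap sized for the current viewport, so the
    // conversion to pixels happens at exactly the resolution the screen needs.
    // Re-running this on zoom keeps the output crisp, because the vector content
    // is redrawn at the new scale instead of a fixed image being stretched.
    public static Bitmap renderForViewport(File pdfFile, int pageIndex,
                                           int viewportWidthPx, int viewportHeightPx,
                                           float zoom) throws IOException {
        ParcelFileDescriptor fd = ParcelFileDescriptor.open(
                pdfFile, ParcelFileDescriptor.MODE_READ_ONLY);
        PdfRenderer renderer = new PdfRenderer(fd);
        PdfRenderer.Page page = renderer.openPage(pageIndex);
        try {
            Bitmap bitmap = Bitmap.createBitmap(
                    viewportWidthPx, viewportHeightPx, Bitmap.Config.ARGB_8888);

            // Page dimensions are in PDF points; scale them to the viewport and
            // apply the zoom factor. (Translation for panning is omitted here.)
            float scale = zoom * viewportWidthPx / (float) page.getWidth();
            Matrix transform = new Matrix();
            transform.setScale(scale, scale);

            // A null clip rectangle means "render into the whole bitmap".
            page.render(bitmap, null, transform, PdfRenderer.Page.RENDER_MODE_FOR_DISPLAY);
            return bitmap;
        } finally {
            page.close();
            renderer.close();
        }
    }
}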
In short, yes, you can do the rendering on the server and feed an image to your viewer. But keep in mind that, especially while allowing the user to zoom, you'll transfer lots of data and probably get poorer quality than when you let a good viewer handle the PDF itself...

Related

AlexNet works fine on web images but not on mobile images

I have trained a deep-learning model using AlexNet to classify car models. The data was collected from the web, including Google Images, Flickr, etc.
The model was tested on a separate test set collected from the web (many images come from Flickr), and it works fine. However, I have built a simple Android app that takes a photo (in landscape mode) and sends it to the server to recognize the car. The performance on the mobile images is very poor. I also tested these images with the AlexNet model trained on ImageNet, and that model didn't classify them correctly either.
I wonder if there is anything I missed when applying the model to mobile images.
Thanks.
You may want to add photos taken with a mobile device to your training data. If this is not possible, maybe you can preprocess the images in a way that makes them more like the images used for training (e.g. normalization).
Edit:
Mean-image subtraction is not normalization. Normalization is subtracting the mean image and then dividing by the standard deviation, both computed from the training set.
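For concreteness, a minimal Java sketch (an illustration, not code from the answer) of that normalization, assuming the per-pixel mean image and standard deviation have already been computed over the training set:

/**
 * Normalize one flattened image: subtract the per-pixel mean image and then
 * divide by the per-pixel standard deviation, both computed from the TRAINING set.
 */
public static float[] normalize(float[] pixels, float[] trainMean, float[] trainStd) {
    float[] out = new float[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        // A small epsilon guards against division by zero for constant pixels.
        out[i] = (pixels[i] - trainMean[i]) / (trainStd[i] + 1e-7f);
    }
    return out;
}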
You should analyze both kinds of images. Maybe there are resolution or lighting changes, etc. Without a sample of both it is hard to tell exactly what the difference is. However, deep learning is notorious for being bad with changes in data acquisition.
2nd Edit:
Another point that might fail on mobile images is the resolution. CNNs work on any resolution, and if you pass a 4000x2000 image to an ImageNet-style network it will probably find something different than on a 224x224 image. For this reason you might want to post a web image and a mobile image, and we on Stack Overflow might be able to tell you the difference.
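If the resolution is indeed the culprit, one hedged option (an assumption, not something from the answer) is to downscale the camera photo on the device to the network's expected input size before uploading:

import android.graphics.Bitmap;

public class InputResizer {

    // Scale the full-resolution camera photo down to the CNN's input size so the
    // server sees images shaped like the training data. 224x224 follows the answer;
    // many AlexNet implementations actually expect 227x227, so check your model.
    public static Bitmap toNetworkInput(Bitmap cameraPhoto) {
        final int inputSize = 224;
        return Bitmap.createScaledBitmap(cameraPhoto, inputSize, inputSize, true);
    }
}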

Images to Executables?

I have an app that uses a lot of images as drawables. However, because they are high-definition, they take up a lot of space. Is there any way in Android to convert these images into text, or, since I am using Maya, to save them in a different format that my app can draw from? In other words, is there a way to write code that draws the image from points given as text yet still has good quality?

Crop image without loading into memory

I want to crop a large image and tried using Bitmap.createBitmap, but it gives an OOM error. I also tried multiple techniques around createBitmap, but none of them were successful.
Now I am thinking of saving the image to the file system and cropping it without loading it into memory, which might solve the problem. But I don't know how to do it.
User flow: the user will take multiple pictures with the in-app camera; after each snap the user can crop it manually, or the app will silently crop it based on some predefined logic, and later it will send these images to the server.
Can anybody guide me on how I can achieve this?
There is a class called BitmapRegionDecoder which might help you, but it's only available from API 10 and above (a minimal sketch of how it could be used follows this answer).
If you can't use it :
Many image formats are compressed and therefore require some sort of loading into memory.
You will need to read about the best image format that fits your needs, and then read it by yourself, using only the memory that you need.
A slightly easier approach would be to do it all in JNI, so that even though you will use a lot of memory, at least your app won't run into an OOM error as quickly, since it won't be constrained by the max heap size imposed on normal apps.
Of course, since Android is open source, you can also try taking the BitmapRegionDecoder source and using it on any device.
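A minimal sketch of the BitmapRegionDecoder route mentioned above; the file path and crop rectangle are placeholders, and error handling is reduced to the essentials:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;

import java.io.IOException;

public class RegionCropper {

    // Decode only cropRect from the image file on disk; the full bitmap is
    // never loaded into memory, which is what avoids the OOM.
    public static Bitmap cropWithoutFullDecode(String imagePath, Rect cropRect)
            throws IOException {
        BitmapRegionDecoder decoder =
                BitmapRegionDecoder.newInstance(imagePath, /* isShareable= */ false);
        try {
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inPreferredConfig = Bitmap.Config.RGB_565; // halves memory vs. ARGB_8888
            return decoder.decodeRegion(cropRect, options);
        } finally {
            decoder.recycle();
        }
    }
}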
I very much doubt you can solve this problem with the existing Android API.
What you need to do is obtain one of the available image-access libraries (libpng is probably your best bet) and link it to your application via JNI (check whether a Java binding is already available).
Use the low-level I/O operations to read the image a single scanline at a time. Discard any scanlines before or after the vertical cropped region. For those scanlines inside the vertical cropped region, take only those pixels inside the horizontal cropped region and write them out to the cropped image.
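For illustration only, the scanline logic described above might look like the following sketch. ScanlineDecoder and ScanlineEncoder are hypothetical wrappers around whatever native binding (e.g. libpng via JNI) you end up using; they are not real Android classes.

import android.graphics.Rect;

public class ScanlineCrop {

    /** Hypothetical row-by-row decoder backed by a native library such as libpng. */
    interface ScanlineDecoder {
        int width();
        int height();
        int[] readNextScanline(); // one row of ARGB pixels, decoded on demand
    }

    /** Hypothetical row-by-row encoder for the cropped output file. */
    interface ScanlineEncoder {
        void writeScanline(int[] rowPixels);
    }

    // Stream the image one row at a time: discard rows above or below the crop,
    // and for rows inside it keep only the pixels inside the horizontal range.
    public static void crop(ScanlineDecoder in, ScanlineEncoder out, Rect crop) {
        for (int y = 0; y < in.height(); y++) {
            int[] row = in.readNextScanline();
            if (y < crop.top || y >= crop.bottom) {
                continue; // outside the vertical crop region
            }
            int[] cropped = new int[crop.width()];
            System.arraycopy(row, crop.left, cropped, 0, crop.width());
            out.writeScanline(cropped);
        }
    }
}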

Android: pixel quality reduction in Images loaded in WebView

I am building Javascript application for mobile browsers (not wrapped-as-native app).
I noticed that Android (tested on the 2.3 emulator and a Galaxy S device) reduces the quality of loaded images if the image dimensions exceed a certain threshold (width above 1400 px or so). This makes it impossible to load big bitmap images (2000 x 2000 px) without the quality becoming unusable.
I tested this by
Loading one big image and drawing it on a <canvas>: I got pixel garbage out. If I draw grid lines with lineTo() on the same canvas they have perfect quality, so the problem must be in the image pixel data
Slicing the big image into 100 x 100 slices and drawing them to a canvas: this is the only method I found that results in no quality reduction. However, slicing is cumbersome, adds an extra image preprocessing step, and page loading times suffer
Loading the image with a new Image() object, an <img> tag, and a CSS background: everything suffers from the reduced quality, so I suspect the problem is the image loader itself
I also tried everything with CSS image-rendering https://developer.mozilla.org/En/CSS/Image-rendering - no luck
The viewport tag seems to have no effect on the image loading - the data is already garbage when you try to touch the loaded pixel data. I tried all possible values suggested in Android's SDK documentation: http://developer.android.com/reference/android/webkit/WebView.html
I also tested Firefox Mobile, desktop browsers, and iOS: everything is fine there.
So, what is going on - Android WebView simply can't load big images?
Android unconditionally resamples images and reduces quality if a certain threshold of memory usage is exceeded.
https://android.googlesource.com/platform/external/webkit/+/android-3.2.4_r1/WebCore/platform/graphics/android/ImageSourceAndroid.cpp
There is no way to access the original image data intact.
I posted a question about this to the android-developers Google Group, kindly asking whether some kind of flag could be provided to opt out of this behavior.
Meanwhile, if you are developing HTML5 web apps and you might use big images, you simply need to preprocess them on the server side by slicing, send the smaller images to the device, and then reconstruct the original image by drawing the slices onto a <canvas> or by putting many <img> tags inside a container element.
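If you go the server-side slicing route, a minimal plain-Java sketch (tile size and file naming are assumptions) could look like this:

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class ImageSlicer {

    // Cut a large source image into tileSize x tileSize PNG tiles so the
    // WebView never has to decode one huge bitmap.
    public static void slice(File source, File outputDir, int tileSize) throws IOException {
        BufferedImage image = ImageIO.read(source);
        for (int y = 0; y < image.getHeight(); y += tileSize) {
            for (int x = 0; x < image.getWidth(); x += tileSize) {
                int w = Math.min(tileSize, image.getWidth() - x);
                int h = Math.min(tileSize, image.getHeight() - y);
                BufferedImage tile = image.getSubimage(x, y, w, h);
                File out = new File(outputDir, "tile_" + x + "_" + y + ".png");
                ImageIO.write(tile, "png", out);
            }
        }
    }
}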
Another option would be to load the image "manually" by writing a PNG decoder that draws the image directly to a <canvas>, bypassing the ImageSourceAndroid class.
The question is old, so a few things have probably changed, but if you are having image quality issues with a WebView then consider converting your image to PNG format.
Somehow, when I load the JPEG version of the image the quality is low, while loading the PNG version at the same resolution gives high quality.

Loading image from server

I have an activity that loads an image from a server into an ImageView. The image is displayed after it's fully loaded.
What I want to do is, while the image is being loaded, display it first in low, then medium, and finally high quality (some images are big). I have no idea what this technique is called or what to Google, so any help is appreciated.
Loading an image like this is called progressive loading; it is even part of the JPEG standard.
Another possible search term might be interlacing.
It is certainly possible to create the image in progressive qualities from the same stream, though some clever technical co-operation from both sides would be necessary.
Are you grabbing pictures from a server under your control or from other URLs?
If you control the server, you could query a PHP script with image-quality parameters and fetch the versions in succession, though personally I think that's a waste of bandwidth.
Why not just indicate that the image is loading, show a placeholder image, and replace it when the high-quality image is downloaded? (A rough sketch of this follows below.)
I'm sorry I couldn't tell you how to do it the first way, although it's something I may try to implement myself in some spare time.
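As a rough sketch of the low-quality-first idea from the answer above: fetch a small low-quality copy, show it, then replace it with the full-size image once it arrives. The "?quality=..." URL parameters are hypothetical and assume a server-side script like the one mentioned above.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Handler;
import android.os.Looper;
import android.widget.ImageView;

import java.io.InputStream;
import java.net.URL;

public class TwoStepImageLoader {

    // Fetch a small, low-quality copy first (hypothetical "?quality=low" endpoint
    // on your own server), show it, then replace it with the full-size image.
    public static void load(final ImageView target, final String baseUrl) {
        final Handler mainThread = new Handler(Looper.getMainLooper());
        new Thread(new Runnable() {
            @Override public void run() {
                try {
                    final Bitmap preview = fetch(baseUrl + "?quality=low");
                    mainThread.post(new Runnable() {
                        @Override public void run() { target.setImageBitmap(preview); }
                    });

                    final Bitmap full = fetch(baseUrl + "?quality=high");
                    mainThread.post(new Runnable() {
                        @Override public void run() { target.setImageBitmap(full); }
                    });
                } catch (Exception ignored) {
                    // Real code should surface the error to the caller.
                }
            }
        }).start();
    }

    private static Bitmap fetch(String url) throws Exception {
        InputStream in = new URL(url).openStream();
        try {
            return BitmapFactory.decodeStream(in);
        } finally {
            in.close();
        }
    }
}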
My finding is that iOS can progressively decode & render Progressive JPEG with NYXImagesKit (see How do I display a progressive JPEG in an UIImageView while it is being downloaded? ), but Android doesn't seem to have a library like that.
