I want to crop a large image and tried using Bitmap.createBitmap, but it gives an OOM error. I also tried multiple techniques around createBitmap, but none of them were successful.
Now I'm thinking of saving the image to the file system and cropping it there without loading it into memory, which might solve the problem, but I don't know how to do it.
User flow: the user takes multiple pictures with the in-app camera; after each snap the user can crop it manually, or the app silently crops it based on some predefined logic, and later these images are sent to the server.
Can anybody guide me on how to achieve this?
There is a class called BitmapRegionDecoder which might help you, but it's available from API 10 and above.
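For API 10+, a minimal sketch of that approach might look like this; the file path and crop rectangle are placeholders for your own values:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import java.io.IOException;

// Sketch: crop a region out of a large JPEG on disk without decoding the whole file.
public final class RegionCrop {

    public static Bitmap cropRegion(String path, Rect cropRect) throws IOException {
        // true = the decoder may share the input; fine for a plain file path
        BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(path, true);
        try {
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inPreferredConfig = Bitmap.Config.RGB_565; // halves memory vs ARGB_8888
            // Only the pixels inside cropRect are decoded into memory.
            return decoder.decodeRegion(cropRect, options);
        } finally {
            decoder.recycle();
        }
    }
}
```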
If you can't use it:
Many image formats are compressed and therefore require some sort of loading into memory.
You will need to read about the best image format that fits your needs, and then read it by yourself, using only the memory that you need.
A slightly easier approach would be to do it all in JNI, so that even though you will use a lot of memory, at least your app won't hit OOM as soon, since native allocations aren't constrained by the max heap size imposed on normal apps.
Of course, since Android is open source, you can try to pull the BitmapRegionDecoder source into your project and use it on any device.
I very much doubt you can solve this problem with the existing Android API.
What you need to do is obtain one of the available image access libraries (libpng is probably your best bet) and link it to your application via jni (see if there's a Java binding already available).
Use the low-level I/O operations to read the image a single scanline at a time. Discard any scanlines before or after the vertical cropped region. For those scanlines inside the vertical cropped region, take only those pixels inside the horizontal cropped region and write them out to the cropped image.
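As an illustration of that scanline loop (not libpng itself), here is a rough Java sketch that assumes a raw, uncompressed RGB888 file with no header whose dimensions you already know; with libpng over JNI the loop is the same idea:

```java
import java.io.*;

// Sketch: crop a raw RGB888 raster one scanline at a time, never holding the whole image.
public final class ScanlineCrop {

    public static void crop(File src, File dst, int srcWidth,
                            int cropX, int cropY, int cropW, int cropH) throws IOException {
        final int bpp = 3;                       // bytes per pixel (RGB888)
        byte[] row = new byte[srcWidth * bpp];   // one scanline at a time in memory

        DataInputStream in = new DataInputStream(new BufferedInputStream(new FileInputStream(src)));
        BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(dst));
        try {
            // Skip the scanlines above the crop region.
            for (int y = 0; y < cropY; y++) in.readFully(row);
            // Copy the horizontal slice of each scanline inside the crop region.
            for (int y = 0; y < cropH; y++) {
                in.readFully(row);
                out.write(row, cropX * bpp, cropW * bpp);
            }
            // Scanlines below the crop region are simply never read.
        } finally {
            in.close();
            out.close();
        }
    }
}
```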
Related
So, I've been researching bitmap scaling using BitmapFactory.
http://developer.android.com/training/displaying-bitmaps/load-bitmap.html
I'm doing so because the application I'm working on requires a gallery that allows users to submit their photos to be added to the gallery. These photos will then be read from a URL.
My theoretical problem is this: considering that Android devices can have as little as 16 MB of memory, even scaling the images down only delays the inevitable unless you're handling a single image. In my case, the number of images to be loaded could be in the hundreds, meaning that even if they're scaled down, eventually one will hit that limit.
My only idea so far is to load one image at a time, which is not ideal since users would have to wait between photo transitions.
That being said, is there anyone who has experience developing Android applications that handle hundreds of images? If so, is there any approach you could share for handling all these images fluidly? It can obviously be done, as there are gallery applications available; I am just unsure how they accomplish it given the constraints.
Please note this is not a request on how to use BitmapFactory to scale images, as that question has been answered many times. Rather, it's a request about handling amounts of data you know will exceed the limitations.
Gallery apps should not be storing thousands of images in memory. Use the ViewHolder pattern so that the displayed image views get recycled (this is forced upon you if you use RecyclerView). On the backend, use an image cache and keep a limit on its size, as sketched after the links below.
See e.g. What is the benefit of ViewHolder? and How to release memory of bitmap using imageloader in android?
The Android Gallery app source may be a good reference: https://android.googlesource.com/platform/packages/apps/Gallery/+/android-5.1.1_r18/src/com/android/camera
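For the cache side, a minimal sketch along the lines of the Android "Caching Bitmaps" training guide might look like this; the 1/8-of-heap budget is just a common rule of thumb, not a requirement:

```java
import android.graphics.Bitmap;
import android.util.LruCache;

// Sketch: a size-bounded, in-memory bitmap cache keyed by URL.
public final class BitmapCache {

    private final LruCache<String, Bitmap> cache;

    public BitmapCache() {
        int maxKb = (int) (Runtime.getRuntime().maxMemory() / 1024) / 8;
        cache = new LruCache<String, Bitmap>(maxKb) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                // Measure entries in kilobytes so the limit above is honoured.
                return value.getRowBytes() * value.getHeight() / 1024;
            }
        };
    }

    public Bitmap get(String url)              { return cache.get(url); }
    public void put(String url, Bitmap bitmap) { if (get(url) == null) cache.put(url, bitmap); }
}
```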
I've been struggling for a long time with large images that need to be zoomable. I am loading pictures from the network that can vary greatly in size: anywhere from 0.5 MP up to 10 MP. Simply loading one into a bitmap can crash the application with an OOM exception. But details are very important, so I want the user to be able to zoom in with full quality maintained (the picture should refine itself during zooming). I can't find a proper way to do this. I've used the TouchImageView library, but it doesn't handle large pictures at all. If I first downsample my picture with the inSampleSize parameter of BitmapFactory, I lose the quality for good. I don't want to write a whole new zooming tool, as one is already implemented on every Android phone in the default Gallery app. There has to be a way to use that kind of tool and simply display a large, zoomable image, right?
Have you tried PhotoView?
You could also do it with loadUrl(String) on a standard WebView, which should handle big images too; WebView has built-in zoom controls.
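A rough sketch of that, assuming the WebView is already in your layout and imageUrl is a placeholder for the picture's URL:

```java
import android.webkit.WebSettings;
import android.webkit.WebView;

// Sketch: let a stock WebView handle zooming a large remote image.
public final class ZoomableImage {
    public static void show(WebView webView, String imageUrl) {
        WebSettings settings = webView.getSettings();
        settings.setBuiltInZoomControls(true);   // pinch-zoom / zoom buttons
        settings.setUseWideViewPort(true);       // start zoomed out to fit the width
        settings.setLoadWithOverviewMode(true);
        webView.loadUrl(imageUrl);
    }
}
```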
BitmapRegionDecoder (added in 2.3.3) may work, but I haven't tried it.
It seems that the implementation in the Gallery app is OpenGL-based, so there is no simple way to reuse it.
From experiments and from reading other posts like this one, it seems that it's hard to process high-resolution images on Android because there is a limit on how much memory the VM will let you allocate.
Loading an 8 MP camera picture takes around 20 MB of memory.
I understand that the easy solution is to downsample the image when loading it (BitmapFactory offers such an option), but I would still like to process the image at full resolution: the camera shoots 8 MP, so why would I use only 4 MP and reduce the quality?
Does anyone know good workarounds for that?
In a resource-constrained environment, I think your only solution is to divide and conquer, e.g. caching and tiling.
Instead of loading and processing the image all at once, you load/save manageable chunks of the image from a raw data file to do your processing. This is not trivial and could get really complex depending on the type of processing you want to do, but it's the only way if you don't want to compromise on image quality.
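One way to realize that chunked approach, assuming you can target API 10+ and the image sits on disk as a JPEG or PNG, is to walk the picture tile by tile with BitmapRegionDecoder; the tile size and the processing callback below are placeholders for your own processing step:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import java.io.IOException;

// Sketch: only one full-resolution tile is decoded into memory at any moment.
public final class TiledProcessor {

    public interface TileCallback { void processTile(Bitmap tile, int x, int y); }

    public static void process(String path, int tileSize, TileCallback callback) throws IOException {
        BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(path, false);
        try {
            int width = decoder.getWidth();
            int height = decoder.getHeight();
            for (int y = 0; y < height; y += tileSize) {
                for (int x = 0; x < width; x += tileSize) {
                    Rect tileRect = new Rect(x, y,
                            Math.min(x + tileSize, width),
                            Math.min(y + tileSize, height));
                    Bitmap tile = decoder.decodeRegion(tileRect, null);
                    callback.processTile(tile, x, y);
                    tile.recycle(); // release this tile before decoding the next one
                }
            }
        } finally {
            decoder.recycle();
        }
    }
}
```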
Indeed, this is hard. But if the image is in some contiguous raster format, you can mmap it (see java.nio.MappedByteBuffer); this way you get a byte buffer without allocating it on the heap.
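A rough sketch of that, assuming an uncompressed raster file that you map with FileChannel.map(); the RGB565 pixel layout in the accessor is an assumption:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch: map a raw raster file into the address space instead of allocating a heap buffer.
public final class MappedRaster {

    public static MappedByteBuffer mapFile(String path) throws IOException {
        RandomAccessFile file = new RandomAccessFile(path, "r");
        try {
            FileChannel channel = file.getChannel();
            // The mapping stays valid after the channel is closed and does not
            // count against the Dalvik heap limit.
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        } finally {
            file.close();
        }
    }

    // Example accessor for a 2-bytes-per-pixel (RGB565) raster of known width.
    public static short pixelAt(MappedByteBuffer buffer, int width, int x, int y) {
        return buffer.getShort((y * width + x) * 2);
    }
}
```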
2 things:
Check out the gallery in Honeycomb. It does tile-based rendering: you can zoom in on an image and see that the current part is higher-res than the other parts, and if you pan around you can watch it rendering.
When using native code (NDK) there is no such resource limit, so you could try to load all the data natively and fetch parts of it through JNI, but I doubt it would be better than the Honeycomb gallery.
I'm building an OCR utility for Android and I'd like to crop an image on the fly; that is, take the picture and, in the JPEG callback, crop the image from the byte array Android hands you, before saving it or doing anything else.
The original issue is that I need to generate a bitmap from that image and, if it has a high resolution, I get a "Bitmap exceeds VM budget" error. I'd also like to crop the image automatically (not letting the user do it) to cut down the OCR processing time.
I saw the BitmapRegionDecoder class, available from Android 2.3.3 onward, which does everything I'd like to do, but I need to support earlier versions. Any suggestions?
Thank you guys!
Assuming it doesn't use any native code, just copy BitmapRegionDecoder to your project and use that instead of the system version.
Finally, I've realized the only two feasible options seem to be storing the photo on the SD card and working with it afterwards, or using a native library (whose memory allocation happens outside the Dalvik VM heap, so you're able to use up to 10 times more RAM than inside the VM). I think I'll choose to store it first; it seems simpler to do and maintain.
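For the "store it first" route, a minimal sketch might just write the JPEG bytes handed to the picture callback straight to disk, so the full-size image never has to be decoded into a Bitmap at capture time; the destination directory and file name are placeholders:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch: persist the camera's JPEG byte array as-is, with no Bitmap decode involved.
public final class JpegSaver {

    public static File save(byte[] jpegData, File directory, String fileName) throws IOException {
        File target = new File(directory, fileName);
        FileOutputStream out = new FileOutputStream(target);
        try {
            out.write(jpegData);   // raw JPEG bytes, exactly as delivered by the callback
        } finally {
            out.close();
        }
        return target;
    }
}
```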
I would like to capture an image with the Android camera, but because the image may contain sensitive data I don't want it saved to the phone or SD card. Instead I would like a Base64 string (compressed) which would be sent to the server immediately.
In PhoneGap it seems files are saved to various places automatically.
Natively I was never able to get the image stream - in onJpegPictureTaken() the byte[] parameter was always null.
Can anyone suggest a way?
See Camera.PreviewCallback.onPreviewFrame() and YuvImage.compressToJpeg() to get a byte array you can convert into a bitmap.
Note that YuvImage.compressToJpeg() is only available in SDK 8 or later, I think. For earlier versions you'll need to implement your own YUV decoder; there are several examples around, or I could provide you with one.
Those two methods will let you get a camera picture in memory without ever persisting it to SD. Beware that bitmaps at most camera preview sizes chew up memory quickly, so you'll need to be careful to recycle the bitmaps, and probably also scale them down a bit, to do much with them while staying inside the heap restrictions on most devices.
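Putting those two calls together, a rough sketch (assuming the default NV21 preview format and SDK 8+ for YuvImage and android.util.Base64) might look like this:

```java
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import android.util.Base64;
import java.io.ByteArrayOutputStream;

// Sketch: convert one preview frame to an in-memory JPEG and Base64-encode it,
// without ever writing a file to the phone or SD card.
public final class FrameEncoder implements Camera.PreviewCallback {

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);

        ByteArrayOutputStream jpegStream = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, jpegStream);

        // Base64 string ready to be posted to the server.
        String encoded = Base64.encodeToString(jpegStream.toByteArray(), Base64.NO_WRAP);
        // TODO: hand "encoded" to your upload code.
    }
}
```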
Good luck!