Use raw Bitmap via ImageView.setImageURI() - Android

I am trying to write a content provider in order to send an image to another process via an ImageView.setImageURI() call. The Bitmap I want to send is in memory and, for performance reasons, I would prefer to avoid saving it to disk and even encoding it, because encoding to PNG can take 2-3 seconds and it wouldn't make much sense anyway, since the other end then decodes it again.
So, I am able to send the image as PNG correctly using a PipeHelper, but I cannot send it "raw" (that is, an Android Bitmap as a byte array directly from memory). Is there any option to do that? Am I passing the wrong MIME type? Why does setImageBitmap() work with raw bitmaps while setImageURI() doesn't?
I know I could use a Parcelable Bitmap, but I want to get around the Parcelable size constraints. Finally, is there any encoding faster than PNG that supports alpha, if the resulting size is not an issue?
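For reference, a minimal sketch of the PNG-over-pipe path the question says already works, assuming a ContentProvider.openPipeHelper()-based provider; the BitmapProvider class name and its sharedBitmap field are hypothetical.

import android.content.ContentProvider;
import android.content.ContentValues;
import android.database.Cursor;
import android.graphics.Bitmap;
import android.net.Uri;
import android.os.ParcelFileDescriptor;

import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical provider that streams an in-memory Bitmap as PNG through a pipe,
// so the other process can load it with ImageView.setImageURI().
public class BitmapProvider extends ContentProvider {

    // Assumed to be set by the app before the other process requests the Uri.
    public static volatile Bitmap sharedBitmap;

    @Override
    public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
        return openPipeHelper(uri, "image/png", null, sharedBitmap,
                (output, u, mime, opts, bitmap) -> {
                    try (FileOutputStream out =
                                 new FileOutputStream(output.getFileDescriptor())) {
                        // This encode step is the 2-3 second cost described above.
                        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
                    } catch (IOException ignored) {
                        // The reading side closed its end of the pipe early.
                    }
                });
    }

    @Override public String getType(Uri uri) { return "image/png"; }

    // The remaining ContentProvider methods are not needed for this use case.
    @Override public boolean onCreate() { return true; }
    @Override public Cursor query(Uri uri, String[] p, String s, String[] a, String o) { return null; }
    @Override public Uri insert(Uri uri, ContentValues values) { return null; }
    @Override public int delete(Uri uri, String s, String[] a) { return 0; }
    @Override public int update(Uri uri, ContentValues v, String s, String[] a) { return 0; }
}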

Related

Detach Image produced from ImageReader

I am trying to create an Android app that uses the Camera2 API. As part of the functionality I want to develop a module that saves multiple images produced by an ImageReader, as follows:
Image image = reader.acquireLatestImage();
I'm getting the following exception:
IllegalStateException: too many images are currently acquired
As mentioned in the documentation:
https://developer.android.com/reference/android/media/ImageReader#acquireLatestImage()
this is because the image returned from acquireLatestImage() still belongs to the ImageReader queue.
Is there any way to detach images returned from the ImageReader?
Is there a way to copy an image, preferably without storing it on disk, which is a resource-consuming operation?
Thanks
If this is a YUV_420_888 image, you can copy each of the plane ByteBuffers to keep the data around indefinitely. That is somewhat expensive, of course.
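A minimal sketch of that copy, assuming a YUV_420_888 Image; the YuvData holder class is hypothetical:

import android.media.Image;
import java.nio.ByteBuffer;

// Copies the plane data of a YUV_420_888 Image so the Image itself can be
// closed and its slot returned to the ImageReader's queue.
final class YuvData {
    final byte[][] planes = new byte[3][];
    final int[] rowStrides = new int[3];
    final int[] pixelStrides = new int[3];
    final int width, height;

    YuvData(Image image) {
        width = image.getWidth();
        height = image.getHeight();
        Image.Plane[] src = image.getPlanes();
        for (int i = 0; i < 3; i++) {
            ByteBuffer buffer = src[i].getBuffer();
            planes[i] = new byte[buffer.remaining()];
            buffer.get(planes[i]);                 // copy the pixel data out
            rowStrides[i] = src[i].getRowStride();
            pixelStrides[i] = src[i].getPixelStride();
        }
    }
}

// Usage: copy, then release the slot in the reader's queue.
// YuvData data = new YuvData(image);
// image.close();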
Unfortunately, there's no way to easily detach the Image from the ImageReader. The Reader is a circular buffer queue internally, so removing an Image would require the Reader to allocate a new image to replace the one removed, which is somewhat expensive as well.
It can be done by using ImageWriter.queueInputImage(), connected to another ImageReader with the same configuration as the original ImageReader. When you receive image A from your first ImageReader and want to keep it, you can queue it into the ImageWriter and then get the Image handle again from the second ImageReader. You still need to set maxImages on the second reader high enough to account for all the Images you want to keep around at once, of course.
That's fairly cumbersome, of course, and if you're doing this continually you'll cause a lot of memory reallocation; plain copies of the plane data may be just as expensive, and are simpler in many ways.
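A rough sketch of that forwarding idea (it needs API 23+ for ImageWriter); the ImageDetacher class and its onImageDetached() hook are hypothetical:

import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.media.ImageWriter;
import android.os.Handler;

// Forwards selected Images into a second ImageReader so they survive after the
// original reader's buffer slot is reused.
final class ImageDetacher {
    private final ImageReader holdingReader;
    private final ImageWriter forwarder;

    ImageDetacher(int width, int height, int keepCount, Handler handler) {
        // The second reader's maxImages must cover everything you keep at once.
        holdingReader = ImageReader.newInstance(width, height,
                ImageFormat.YUV_420_888, keepCount);
        forwarder = ImageWriter.newInstance(holdingReader.getSurface(), 2);
        holdingReader.setOnImageAvailableListener(reader -> {
            // The "detached" copy now lives in this reader's queue; close it
            // whenever you are finished with it.
            Image kept = reader.acquireNextImage();
            onImageDetached(kept);
        }, handler);
    }

    // Queue an image from the original ImageReader; ownership transfers to the
    // writer, so do not close it yourself after this call.
    void detach(Image image) {
        forwarder.queueInputImage(image);
    }

    // Hypothetical hook: store or process the detached image here.
    void onImageDetached(Image image) { }
}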

Bitmap.compress PNG 8-bit palette

I'm playing with sending some images over a Bluetooth link to an embedded device. The device understands and decodes PNGs for the images, however it seems some of the messages are getting too large to transfer over the wire at a reasonable speed / data size.
I'm currently using Bitmap.compress(Bitmap.CompressFormat.PNG, 0, <ByteArrayOutputStream>) to get the byte array for the compressed data.
Is there any way I can make Bitmap.compress() be lossy with the colors? And for that matter, would Bitmap.compress() ever use a 256-colour palette mode on its own, given an image that has an appropriately small number of colors?
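For clarity, the call the question describes, with the enum name corrected to Bitmap.CompressFormat; the PngEncoder wrapper class is hypothetical. Note that the quality argument is documented as ignored for PNG, since PNG is lossless:

import android.graphics.Bitmap;
import java.io.ByteArrayOutputStream;

final class PngEncoder {
    // Encodes the bitmap to a PNG byte array entirely in memory.
    // For PNG the quality value is a no-op, which is why lowering it
    // doesn't shrink the output.
    static byte[] encode(Bitmap bitmap) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
        return out.toByteArray();
    }
}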

Crop image without loading into memory

I want to crop a large image and tried using Bitmap.createBitmap(), but it gives an OOM error. I also tried multiple techniques around createBitmap(), but none of them were successful.
Now I'm thinking of saving the image to the file system and cropping it without loading it into memory, which might solve the problem, but I don't know how to do that.
User flow: the user takes multiple pictures with the in-app camera; after each snap the user can crop the picture manually, or the app will silently crop it using some predefined logic, and later it will send these images to the server.
Can anybody guide me on how I can achieve this?
There is a class called BitmapRegionDecoder which might help you (see the sketch after this answer), but it's only available from API 10 and above.
If you can't use it:
Many image formats are compressed and therefore require some sort of loading into memory.
You will need to read about the best image format that fits your needs, and then read it by yourself, using only the memory that you need.
A somewhat easier approach would be to do it all in JNI, so that even though you will use a lot of memory, at least your app won't hit an OOM as soon, since native allocations aren't constrained by the max heap size that is imposed on normal apps.
Of course, since Android is open source, you can also try to take the BitmapRegionDecoder source and use it on any device.
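A minimal sketch of the BitmapRegionDecoder route, assuming the image has already been written to a file; the RegionCropper class name is hypothetical:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import java.io.IOException;

final class RegionCropper {
    // Only the cropped rectangle is ever decoded into memory, so the full-size
    // image never has to fit in the heap.
    static Bitmap crop(String filePath, Rect cropRect) throws IOException {
        BitmapRegionDecoder decoder =
                BitmapRegionDecoder.newInstance(filePath, /* isShareable = */ false);
        try {
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inPreferredConfig = Bitmap.Config.ARGB_8888;
            return decoder.decodeRegion(cropRect, options);
        } finally {
            decoder.recycle();
        }
    }
}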
I very much doubt you can solve this problem with the existing Android API.
What you need to do is obtain one of the available image-access libraries (libpng is probably your best bet) and link it to your application via JNI (check whether a Java binding is already available).
Use the low-level I/O operations to read the image a single scanline at a time. Discard any scanlines before or after the vertical cropped region. For those scanlines inside the vertical cropped region, take only those pixels inside the horizontal cropped region and write them out to the cropped image.

Android reuse stream in BitmapFactory.decodeStream()

We need to downsample an image received from an InputStream. It is an image received from some URL and it can be either pretty small or very large. To fit this image in memory we have to downsample it. First we retrieve the image size with the help of inJustDecodeBounds and calculate the necessary sample size. Then we create the downsampled bitmap by specifying this sample size in BitmapFactory.Options.inSampleSize. This two-step decoding needs two calls to decodeStream() and works just fine.
This works fine for files from the SD card, but in our case the input stream cannot be reset, so we can't call decodeStream() twice. Cloning the input stream is also not an option because of its huge size. Alternatively, we could make two HTTP requests to the same URL: first to get the image size and then to decode the actual image with downsampling, but that solution seems rather ugly.
Can we reuse a stream that cannot be reset? Or please propose some known workarounds for this problem.
If you want to reuse the stream, it obviously has to be saved to either RAM or the SD card, because a network InputStream (assuming it is not buffered) does not keep the data it has already delivered.
So the workaround, as said before, is to save the image directly to the SD card (maybe in some temp directory) if the image can really be that huge.
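A sketch of that workaround, assuming the stream is spooled into a temp file in a caller-provided directory and the usual two-pass bounds/inSampleSize decode is then run against the file; the DownsampledDecoder class is hypothetical:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

final class DownsampledDecoder {
    // Spools the non-resettable network stream into a temp file, then decodes
    // the file twice: once for bounds, once with a sample size applied.
    static Bitmap decode(InputStream network, File cacheDir,
                         int reqWidth, int reqHeight) throws IOException {
        File temp = File.createTempFile("download", null, cacheDir);
        try (OutputStream out = new FileOutputStream(temp)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = network.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        try {
            // Pass 1: bounds only.
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inJustDecodeBounds = true;
            BitmapFactory.decodeFile(temp.getPath(), options);

            // Pass 2: decode with a sample size (values <= 1 are treated as 1,
            // and the decoder may round to a power of two).
            options.inSampleSize = Math.max(1, Math.min(
                    options.outWidth / reqWidth, options.outHeight / reqHeight));
            options.inJustDecodeBounds = false;
            return BitmapFactory.decodeFile(temp.getPath(), options);
        } finally {
            temp.delete();
        }
    }
}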

How to capture an Android Camera image without saving a file to the phone/sdcard?

I would like to capture an image with the Android Camera, but because the image may contain sensitive data I don't want the image saved to the phone or SD card. Instead I would like a (compressed) Base64 string which would be sent to the server immediately.
In PhoneGap it seems files are saved to various places automatically.
Natively I was never able to get the image stream - in onJpegPictureTaken() the byte[] parameter was always null.
Can anyone suggest a way?
See Camera.onPreviewFrame() and YuvImage.compressToJpeg() to be able to get a byte array you can convert into a bitmap.
Note that YuvImage.compressToJpeg() is only available in SDK 8 or later, I think. For earlier versions you'll need to implement your own YUV decoder; there are several examples around, or I could provide you with one.
Those two methods will allow you to get a camera picture in memory and never persist it to SD. Beware that bitmaps of most camera preview sizes will chew up memory pretty quickly and you'll need to be very careful to recycle the bitmaps and probably also have to scale them down a bit to do much with them and still fit inside the native heap restrictions on most devices.
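A minimal sketch of that approach, assuming the preview format is NV21 (the default); the PreviewCapture class is hypothetical and the actual upload is left out. Register it with camera.setPreviewCallback(new PreviewCapture()):

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import android.util.Base64;
import java.io.ByteArrayOutputStream;

// Grabs NV21 preview frames, compresses them to JPEG in memory, and
// Base64-encodes the result so nothing ever touches disk.
final class PreviewCapture implements Camera.PreviewCallback {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);

        ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, jpeg);

        String base64 = Base64.encodeToString(jpeg.toByteArray(), Base64.NO_WRAP);
        // ... send `base64` to the server here.
    }
}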
Good luck!
