I'm downloading image data over the network and, naturally, it's compressed. Inside the app I need the data to be 100% accurate to the original: for example, for a 720x720 image I need an int array of length 518400 (or a byte array of length 2073600) whose values have not been modified from the original. Since PNG compression is lossless, this should be possible. I noticed that BitmapFactory applies some psychovisual adjustments when decoding that aren't noticeable when looking at the image, but they change the byte values ever so slightly (I'm assuming this is for speed and/or a better look on screen).
What I'm currently doing is calling BitmapFactory.decode*, iterating over every pixel to rebuild the array by approximation, and creating a new bitmap with Bitmap.createBitmap. This works because, for now, the images are binary: in the originals each pixel is either (r, g, b, 255) or (0, 0, 0, 0), so I can find the most prevalent colour and set every pixel whose alpha is above some threshold to that RGB value. The solution is quite slow though, and seems needlessly complicated.
So my question is: is there any combination of option flags for BitmapFactory.decode* that returns a Bitmap that is 100% true to the original, or do I have to write a PNG decoder myself to achieve this?
Have a try with inPremultiplied disabled:
val opts = BitmapFactory.Options()
// Keep the channel values as stored in the PNG: by default the decoder
// premultiplies RGB by alpha, which slightly alters the stored values (API 19+)
opts.inPremultiplied = false
val bitmapDecode = BitmapFactory.decodeByteArray(decode, 0, decode.size, opts)
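If you then need the raw bytes rather than a Bitmap object, one option (a sketch in Java, assuming an ARGB_8888 decode and a hypothetical pngBytes array holding the downloaded PNG) is to copy the pixels straight into a ByteBuffer:
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inPremultiplied = false;                      // keep channel values as stored in the PNG (API 19+)
opts.inPreferredConfig = Bitmap.Config.ARGB_8888;  // 4 bytes per pixel
Bitmap bmp = BitmapFactory.decodeByteArray(pngBytes, 0, pngBytes.length, opts);

// 720 x 720 x 4 = 2073600 bytes; ARGB_8888 is laid out as R,G,B,A per pixel in memory
ByteBuffer buffer = ByteBuffer.allocate(bmp.getByteCount());
bmp.copyPixelsToBuffer(buffer);
byte[] raw = buffer.array();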
I am using the com.otaliastudios.cameraview.CameraView component from the otaliastudios library. I need to convert some of the frames to Mat objects to process with OpenCV. How can I convert an otaliastudios Frame object into an OpenCV Mat object?
Edit: The frame class I am using is located here: github.com/natario1/CameraView/blob/master/cameraview/src/main/java/com/otaliastudios/cameraview/Frame.java
How can I know which Image format this is? Does it make a difference?
You need to know the source format of your phone camera frame.
The Frame object contains a byte[] data field. This field is probably in the same ImageFormat as your camera. The two most common formats are NV21 and YUV_420_888.
A YUV image is composed of a luminance component (Y) and a chrominance component (U-V).
Typically the relation between the two components (and consequently their real bit/byte sizes) is defined by schemes that reduce the chrominance component, because human eyes are more sensitive to luminance variation than to colour variation (see Chroma Subsampling). The reduction is expressed by a set of numbers such as 4:2:0.
In this case the chrominance part is half the size of the luminance part.
So the byte buffer of a Frame probably has a luminance part of width × height bytes and a chrominance part of width × (height/2) bytes. This means the byte size depends heavily on the image format you are acquiring, and you have to size the Mat and choose the CvType accordingly.
You have to allocate a Mat that has the same size as your frame and put the data into it (adapted from this answer):
// One channel (CV_8UC1): height + height/2 rows of width bytes (Y plane plus interleaved chroma)
mYuv = new Mat(getFrameHeight() + getFrameHeight() / 2,
        getFrameWidth(), CvType.CV_8UC1);
....
mYuv.put(0, 0, data);
And then you have your Mat. If you also need to convert it to an RGB format, check the bottom of this page.
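For example, assuming the frame data really is NV21 (the usual camera preview format; check the Frame source linked above) and frameWidth/frameHeight are the frame dimensions, the conversion could look roughly like this:
// Wrap the NV21 bytes in a single-channel Mat: (height + height/2) rows of width bytes
Mat yuv = new Mat(frameHeight + frameHeight / 2, frameWidth, CvType.CV_8UC1);
yuv.put(0, 0, data);

// Let OpenCV do the YUV -> RGB conversion
Mat rgb = new Mat();
Imgproc.cvtColor(yuv, rgb, Imgproc.COLOR_YUV2RGB_NV21);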
Hope this will help you.
I want to crop an image without getting an OutOfMemory exception.
That is, I have the x, y, width and height of the cropped region and want to crop the original image without loading the whole thing into memory.
Yes, I know that BitmapRegionDecoder is a good idea, but the cropped image itself might be too large to bring into memory.
In fact I don't want the cropped bitmap in memory at all; I just want to write the cropped region from the source file to a destination file.
EDIT: I want to save the cropped image, not just show it in an ImageView.
I want to save it in a new file without losing dimensions.
This is an example: here the cropped image resolution is 20000x20000, and the code below won't work because of OOM:
BitmapRegionDecoder bitmapRegionDecoder = BitmapRegionDecoder.newInstance(inputStream, false);
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap bitmap = bitmapRegionDecoder.decodeRegion(new Rect(width / 2 - 100, height / 2 - 100, width / 2 + 100, height / 2 + 100), options);
mImageView.setImageBitmap(bitmap);
Using inSampleSize to decrease the original picture size works, but then the result I save is no longer 20000x20000.
How can I crop the 25000x25000 image and save the 20000x20000 part of it to a file?
Simply put, it requires a lot of low-level programming and optimization.
As you can see, many answers in this area point to generic concepts such as bitmap compression, which are indeed applicable to most problems but not specifically to yours.
Also, BitmapRegionDecoder, as suggested in other answers, won't work well here. It does prevent loading the whole bitmap into RAM, but what about the cropped image? After cropping, it still hands you a giant bitmap which, no matter what, gives you an OOM.
Your problem, as you described it, needs bitmaps to be written to and read from disk the same way they are written to and read from memory; something like a BufferedBitmap class that keeps memory usage bounded by saving small chunks of a bitmap to disk and loading them back later, thus avoiding OOM.
Any solution that tackles the problem with scaling alone only does half the job. Why? Because the cropped image itself can be too big for memory (as you said).
However, solving the problem by scaling isn't that bad if you don't care that the saved crop has lower quality than what the user saw while cropping. That's what Google Photos does: it simply reduces the quality of the cropped image, very simple!
I haven't seen any BufferedBitmap classes around (if there are, that would be awesome); they would certainly come in handy for problems like this.
You can check the Telegram messaging app, which comes with an open-source implementation of image cropping; you guessed right, it handles all this nasty work in good old C. So a reasonable conclusion is that one of the several workable solutions is low-level code that manages disk and memory yourself.
I know my answer failed to give a copy-paste solution to your problem, but I hope it at least gives you some ideas, my friend.
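Still, as a rough illustration of the strip-by-strip idea only (not a complete solution: the framework has no streaming PNG/JPEG encoder, so this just writes raw pixel rows; cropLeft/cropTop/cropRight/cropBottom and outputStream are hypothetical):
// Decode the crop region in horizontal strips so only one strip is in memory at a time
BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(inputStream, false);
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.ARGB_8888;

int stripHeight = 256;                        // tune to your memory budget
for (int y = cropTop; y < cropBottom; y += stripHeight) {
    int h = Math.min(stripHeight, cropBottom - y);
    Bitmap strip = decoder.decodeRegion(new Rect(cropLeft, y, cropRight, y + h), options);

    ByteBuffer rows = ByteBuffer.allocate(strip.getByteCount());
    strip.copyPixelsToBuffer(rows);
    outputStream.write(rows.array());         // raw RGBA rows; re-encoding needs a streaming encoder
    strip.recycle();
}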
Did you check BitmapRegionDecoder? It will extract a rectangle out of the original image.
BitmapRegionDecoder bitmapRegionDecoder = BitmapRegionDecoder.newInstance(inputStream, false);
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap bitmap = bitmapRegionDecoder.decodeRegion(new Rect(width / 2 - 100, height / 2 - 100, width / 2 + 100, height / 2 + 100), options);
mImageView.setImageBitmap(bitmap);
http://developer.android.com/reference/android/graphics/BitmapRegionDecoder.html
You can solve this using BitmapFactory. To determine the original bitmap size without loading it into memory, do the following:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeResource(..., options);
int originalImageWith = options.outWidth;
int originalImageHeight = options.outHeight;
Now you can use options.inSampleSize
If set to a value > 1, requests the decoder to
subsample the original image, returning a smaller image to save
memory. The sample size is the number of pixels in either dimension
that correspond to a single pixel in the decoded bitmap. For example,
inSampleSize == 4 returns an image that is 1/4 the width/height of the
original, and 1/16 the number of pixels. Any value <= 1 is treated the
same as 1. Note: the decoder uses a final value based on powers of 2,
any other value will be rounded down to the nearest power of 2.
Now, it's not a perfect solution, but you can do the math to find the closest power of 2 you can use for options.inSampleSize to save memory.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = sampleSize;
Bitmap bitmap = BitmapFactory.decodeResource(..., options);
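For the math itself, a small helper along the lines of the Android "Loading Large Bitmaps Efficiently" guide could look like this (reqWidth/reqHeight being the largest size you can afford to decode):
static int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight) {
    // options must already hold outWidth/outHeight from an inJustDecodeBounds pass
    int inSampleSize = 1;
    while ((options.outHeight / (inSampleSize * 2)) >= reqHeight
            && (options.outWidth / (inSampleSize * 2)) >= reqWidth) {
        inSampleSize *= 2;   // the decoder rounds to powers of 2 anyway
    }
    return inSampleSize;
}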
BitmapRegionDecoder is the right way to crop big or large images, but it's only available from API 10 and above.
If you can't use it:
Many image formats are compressed and therefore require some sort of loading into memory.
You will need to read about the image format that best fits your needs, and then read it yourself, using only the memory that you need.
A slightly easier task would be to do it all in JNI, so that even though you will use a lot of memory, at least your app won't hit OOM as soon, since it won't be constrained by the max heap size imposed on normal apps.
Of course, since Android is open source, you can try to take the BitmapRegionDecoder code and use it on any device.
Reference:
Crop image without loading into memory
Or you may find some other approach below helpful:
Bitmap/Canvas use and the NDK
I was reading an article about how to load bitmaps efficiently here. It suggested some techniques to load a bitmap at the size that is needed, rather than at its real size. The only thing I didn't get is what the inSampleSize variable does (which must be a power of 2). If I choose 1 for it, does that mean it would be like loading the bitmap normally at its real size?
Rajesh has quoted the explanation from the documentation of what inSampleSize does; that explanation can be expanded on with diagrams.
The important part is:
The sample size is the number of pixels in either dimension that correspond to a single pixel in the decoded bitmap.
So, if we had this image (where each letter denotes a pixel):
AAAABBBB
AAAABBBB
AAAABBBB
AAAABBBB
CCCCDDDD
CCCCDDDD
CCCCDDDD
CCCCDDDD
and set inSampleSize = 2, we would get a decoded bitmap that looks like this:
AABB
AABB
CCDD
CCDD
That is, 2 pixels in the original image (AA) correspond to 1 pixel (A) in the decoded image.
If we set inSampleSize = 4, we would get a decoded bitmap that looks like this:
AB
CD
That is, 4 pixels in the original image correspond to 1 pixel in the decoded image.
Notice that an inSampleSize of 2 effectively halves the vertical and horizontal resolution, but uses 1/4 of the pixels, and therefore only 1/4 of the memory.
Please read the documentation for inSampleSize
If set to a value > 1, requests the decoder to subsample the original image, returning a smaller image to save memory. The sample size is the number of pixels in either dimension that correspond to a single pixel in the decoded bitmap. For example, inSampleSize == 4 returns an image that is 1/4 the width/height of the original, and 1/16 the number of pixels. Any value <= 1 is treated the same as 1. Note: the decoder uses a final value based on powers of 2, any other value will be rounded down to the nearest power of 2.
if I choose number 1 for that, does it mean that this would be like if i normally loaded a bitmap with its real size?
Yes, 1 denotes no subsampling.
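To see the difference in code (a minimal sketch; R.drawable.photo is just a placeholder resource):
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1;   // no subsampling: decoded bitmap keeps the original dimensions
Bitmap full = BitmapFactory.decodeResource(getResources(), R.drawable.photo, options);

options.inSampleSize = 2;   // half the width and height, a quarter of the pixels and memory
Bitmap half = BitmapFactory.decodeResource(getResources(), R.drawable.photo, options);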
Please, we need help urgently. We are using OpenCV in Android (Java).
We are facing a lot of problems:
convertTo() doesn't work, so we can't convert a 3-channel image to a 1-channel one without passing it through cvtColor():
grayImg.convertTo(grayImg, CvType.CV_8UC1);
cvtColor() gives a weird output:
Imgproc.cvtColor(src, grayImg, Imgproc.COLOR_RGB2GRAY);
The output of this line is the image repeated 4 times!
The only way to get rid of this repetition is to add the line below, but then the output is a black-and-white image that still has 3 channels, so it crashes any subsequent function that needs a 1-channel image.
Imgproc.cvtColor(grayImg, grayImg, Imgproc.COLOR_GRAY2RGB,3);
canny() for edge detection:
Imgproc.Canny(grayImg, grayImg, 10, 100,3,true);
findContours() finds a huge number of contours, while the number of objects in the image is only 2. The input image is a 3-channel BMP image and we convert it to a Mat.
output image:
https://dl.dropbox.com/u/36214963/canny.jpg
Thanks for your concern
Try BGR2GRAY rather than RGB2GRAY. I had the same problem and solved it this way. There is also a note about this in the documentation:
Converts an image from one color space to another.
The function cvtColor converts an input image from one color space to another. In case of a transformation to-from RGB color space, the order of the channels should be specified explicitly (RGB or BGR). Note that the default color format in OpenCV is often referred to as RGB but it is actually BGR (the bytes are reversed). So the first byte in a standard (24-bit) color image will be an 8-bit Blue component, the second byte will be Green, and the third byte will be Red. The fourth, fifth, and sixth bytes would then be the second pixel (Blue, then Green, then Red), and so on.
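In the Java bindings that conversion would be (src being the Mat you built from the BMP, assumed to be in OpenCV's default BGR channel order):
Mat gray = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);   // 3-channel BGR -> 1-channel 8-bit grayscale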
If I understand your first question correctly, you have two options to convert RGB images to grayscale ones.
Option 1: Convert the 3 channel image to 1 channel as you are trying to do.
IplImage *RGB_image = cvLoadImage("my_colored_image.jpg");
IplImage *GRAY_IMAGE = cvCreateImage(cvGetSize(RGB_image), IPL_DEPTH_8U, 1);
cvCvtColor(RGB_image, GRAY_IMAGE, CV_RGB2GRAY);
Option 2: Read the colored image as a grayscale image directly.
IplImage* GRAY_IMAGE = cvLoadImage("my_colored_image.jpg", CV_LOAD_IMAGE_GRAYSCALE);
I hope this suits you.
I haven't actually used OpenCV before, but I don't think convertTo is the answer you're looking for.
Looking at the OpenCV documentation I found this:
cvtColor - converts an image from one color space to another:
Mat color; // the input image
Mat gray(color.rows, color.cols, color.depth());
cvtColor(color, gray, CV_BGR2GRAY);
Or simply (and the function cvtColor will create the image internally):
Mat color;
Mat gray;
cvtColor(color, gray, CV_BGR2GRAY);
I am trying to run a denoising algorithm on a bitmap image that I have. The function returns a short[], so I tried simply casting it to int[] in order to generate a bitmap, and I get this:
I'd like it to be in grayscale, not .. well.. pink. Any ideas?
Instead of replicating the 8-bit intensity in each of the RGB channels, you can use the intensity as the alpha channel. In this scheme, 0 corresponds to transparent (background color) and 255 corresponds to fully opaque (black, or whatever color you want--even pink). The idea is similar to Jason LeBrun's proposal: take the high-order 8 bits of each value, shift 24 bits left, then bitwise-OR with the color you want to use for full intensity (or with nothing, if you want black to represent full intensity).
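A sketch of that alpha-channel idea (denoised being the hypothetical short[] result, width and height the image size):
int[] pixels = new int[width * height];
for (int i = 0; i < pixels.length; i++) {
    int intensity = (denoised[i] >> 8) & 0xFF;   // keep the high-order 8 bits of each value
    pixels[i] = (intensity << 24);               // intensity as alpha over black (OR in a colour if you prefer)
}
Bitmap bmp = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);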
The pixels of a bitmap are encoded using either ARGB_8888, RGB_565, ARGB_4444, or ALPHA_8. So, the short values that you're returning must happen to correspond to values that look slightly pink-ish in one of those formats.
If you want a grayscale bitmap, you can only have values in the range 0-255 (for an 8-bit color component, e.g. with ARGB_8888). So you'll need to map your shorts to values within that range, and then replicate that value in each of the RGB components.
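For example, a sketch of that mapping (again with a hypothetical denoised short[] and known width/height):
int[] pixels = new int[width * height];
for (int i = 0; i < pixels.length; i++) {
    int v = (denoised[i] >> 8) & 0xFF;                    // map the short down to 0-255
    pixels[i] = 0xFF000000 | (v << 16) | (v << 8) | v;    // fully opaque, R = G = B = v
}
Bitmap gray = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);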