Green tint when stacking RAW DNG images in Android

I am trying to process multiple RAW DNG images by stacking them to produce one stacked RAW DNG image. First, I read the DNG pixel data into a byte array. Since DNG is a digital negative, I flip the byte values with "~" and convert them to unsigned integers, then compute the average. I flip the averaged result back with "~" and store it in the "newData" byte array.
Below is the snippet that averages two DNG images. The images were shot on a OnePlus 3 in RAW (16 MP DNG).
byte[] previousData; // from the previous DNG image
ByteBuffer rawByteBuffer = mImage.getPlanes()[0].getBuffer();
byte[] data = new byte[rawByteBuffer.remaining()];
rawByteBuffer.get(data); // from the current DNG image
byte[] newData = new byte[data.length];
for (int i = 0; i < data.length; i++) {
    int currentInt = Byte.toUnsignedInt((byte) ~data[i]);
    int previousInt = Byte.toUnsignedInt((byte) ~previousData[i]);
    int sumInt = currentInt + previousInt;
    int averageInt = sumInt / 2;
    newData[i] = (byte) (~averageInt); // store the average into newData
}
// save newData into storage in DNG format
However, the resulting image (link below) always shows a green tint on the white portions. Any ideas what went wrong in the process?
Here is the preview of the stacked DNG image
Here is the preview of the original single DNG for reference
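For reference, Android's RAW_SENSOR buffers hold one 16-bit sample per pixel (typically little-endian on ARM devices), and a per-byte average loses the carry between the low and high byte of each sample. A sketch that averages whole samples instead, assuming the same data/previousData/newData buffers as above and omitting the "~" inversion, might look like:
int length = data.length & ~1; // whole 16-bit samples only
for (int i = 0; i < length; i += 2) {
    int current = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8);
    int previous = (previousData[i] & 0xff) | ((previousData[i + 1] & 0xff) << 8);
    int average = (current + previous) / 2;
    newData[i] = (byte) (average & 0xff);
    newData[i + 1] = (byte) ((average >> 8) & 0xff);
}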

Related

Camera 2, increase FPS

I'm using the Camera 2 API to save JPEG images to disk. I currently get 3-4 fps on my Nexus 5X and I'd like to improve that to 20-30. Is it possible?
Changing the image format to YUV, I manage to generate 30 fps. Is it possible to save them at this frame rate, or should I give up and live with my 3-4 fps?
Obviously I can share code if needed, but if everyone agrees that it's not possible, I'll just give up. Using the NDK (with libjpeg, for instance) is an option (but obviously I'd prefer to avoid it...).
Thanks
EDIT: here is how I convert the YUV android.media.Image to a single byte[]:
private byte[] toByteArray(Image image, File destination) {
    ByteBuffer buffer0 = image.getPlanes()[0].getBuffer();
    ByteBuffer buffer2 = image.getPlanes()[2].getBuffer();
    int buffer0_size = buffer0.remaining();
    int buffer2_size = buffer2.remaining();
    byte[] bytes = new byte[buffer0_size + buffer2_size];
    buffer0.get(bytes, 0, buffer0_size);
    buffer2.get(bytes, buffer0_size, buffer2_size);
    return bytes;
}
EDIT 2: another method I found to convert the YUV image into a byte[]:
private byte[] toByteArray(Image image, File destination) {
    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    int ySize = yPlane.getBuffer().remaining();
    // be aware that this size does not include the padding at the end, if there is any
    // (e.g. if pixel stride is 2 the size is ySize / 2 - 1)
    int uSize = uPlane.getBuffer().remaining();
    int vSize = vPlane.getBuffer().remaining();
    byte[] data = new byte[ySize + (ySize / 2)];
    yPlane.getBuffer().get(data, 0, ySize);
    ByteBuffer ub = uPlane.getBuffer();
    ByteBuffer vb = vPlane.getBuffer();
    int uvPixelStride = uPlane.getPixelStride(); // stride guaranteed to be the same for u and v planes
    if (uvPixelStride == 1) {
        uPlane.getBuffer().get(data, ySize, uSize);
        vPlane.getBuffer().get(data, ySize + uSize, vSize);
    } else {
        // if pixel stride is 2 there is padding between each pixel
        // converting it to NV21 by filling the gaps of the v plane with the u values
        vb.get(data, ySize, vSize);
        for (int i = 0; i < uSize; i += 2) {
            data[ySize + i + 1] = ub.get(i);
        }
    }
    return data;
}
The dedicated JPEG encoder units on mobile phones are efficient, but not generally optimized for throughput (historically, users took one photo every second or two). At full resolution, the 5X's camera pipeline will not generate JPEGs faster than a few fps.
If you need higher rates, you need to capture in uncompressed YUV. As mentioned by CommonsWare, there's not enough disk bandwidth to stream full-resolution uncompressed YUV to disk, so you can only hold on to some number of frames before you run out of memory.
You can use libjpeg-turbo or some other high-efficiency JPEG encoder and see how many frames per second you can compress yourself; this may be higher than the hardware JPEG unit. The simplest way to maximize the rate is to capture YUV at 30fps and run several JPEG encoding threads in parallel. For maximum speed, you'll want to hand-write the code that feeds the JPEG encoder, because your source data is YUV rather than the RGB that most JPEG encoding interfaces accept (even though the colorspace of an encoded JPEG is typically YUV as well).
Whenever an encoder thread finishes the previous frame, it can grab the next frame that comes from the camera (you can maintain a small circular buffer of the latest YUV Images to make this simpler).
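For illustration, here is a minimal sketch of that approach (not from the original answer), assuming NV21 input frames such as those produced by the EDIT 2 conversion above, and using android.graphics.YuvImage as the encoder; the pool size and queue depth are arbitrary:
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.util.Log;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class JpegEncoderPool {
    // Four encoder threads with a small backlog of pending frames; extra
    // frames are discarded rather than queued without bound.
    private final ThreadPoolExecutor encoders = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(8),
            new ThreadPoolExecutor.DiscardPolicy());

    void encodeAsync(final byte[] nv21, final int width, final int height,
                     final File outFile) {
        encoders.execute(() -> {
            // YuvImage accepts NV21 directly, so the byte[] produced by the
            // conversion above can be handed over as-is.
            YuvImage img = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
            try (FileOutputStream fos = new FileOutputStream(outFile)) {
                img.compressToJpeg(new Rect(0, 0, width, height), 80, fos);
            } catch (IOException e) {
                Log.e("JpegEncoderPool", "Failed to write " + outFile, e);
            }
        });
    }
}
The bounded queue plus DiscardPolicy drops frames when every encoder thread is busy, which plays the role of the small circular buffer mentioned above.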

How to plot the RGB intensity graph from an image in android?

I want to plot the RGB intensity graph of an image, like sinusoidal waves for the 3 colours. Can anyone please suggest an approach?
There are (in general) 256 levels (8 bits) for each color component. If the image has an alpha channel, the overall image is 32 bits per pixel; for an RGB-only image it is 24.
I'll lay this out algorithmically to get the image histogram; the drawing code is up to you.
// Arrays for the histogram data
int histoR[256]; // Array that will hold the counts of how many of each red VALUE the image had
int histoG[256]; // Array that will hold the counts of how many of each green VALUE the image had
int histoB[256]; // Array that will hold the counts of how many of each blue VALUE the image had
int histoA[256]; // Array that will hold the counts of how many of each alpha VALUE the image had

// Zeroize all histogram arrays
for (num = 0 through 255) {
    histoR[num] = 0;
    histoG[num] = 0;
    histoB[num] = 0;
    histoA[num] = 0;
}

// Move through all image pixels, counting up each time a pixel color value is used
for (x = 0 through image width) {
    for (y = 0 through image height) {
        histoR[image.pixel(x, y).red] += 1;
        histoG[image.pixel(x, y).green] += 1;
        histoB[image.pixel(x, y).blue] += 1;
        histoA[image.pixel(x, y).alpha] += 1;
    }
}
You now have the histogram data; plotting it is up to you. PLEASE REMEMBER THE ABOVE IS ONLY AN ALGORITHMIC DESCRIPTION, NOT ACTUAL CODE.
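For concreteness, here is a minimal Java version of the same algorithm, assuming the image is available as an android.graphics.Bitmap (the name bitmap is introduced here for illustration). getPixels() returns packed ARGB ints, and Java arrays are zero-initialized, so no explicit zeroing pass is needed:
int width = bitmap.getWidth();
int height = bitmap.getHeight();
int[] pixels = new int[width * height];
bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
int[] histoR = new int[256];
int[] histoG = new int[256];
int[] histoB = new int[256];
int[] histoA = new int[256];
for (int p : pixels) {
    histoA[(p >>> 24) & 0xff]++; // alpha
    histoR[(p >>> 16) & 0xff]++; // red
    histoG[(p >>> 8) & 0xff]++;  // green
    histoB[p & 0xff]++;          // blue
}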

Saving SurfaceTexture to JPEG

I am trying to save, through JNI, the output of the camera modified by OpenGL ES 2 on my tablet.
To achieve this, I use the libjpeg library compiled with NDK r8b.
I use the following code:
In the rendering function:
renderImage();
if (iIsPictureRequired)
{
    savePicture();
    iIsPictureRequired = false;
}
The saving procedure:
bool Image::savePicture()
{
    bool l_res = false;
    char p_filename[] = {"/sdcard/Pictures/testPic.jpg"};

    // Allocates the image buffer (RGBA)
    int l_size = iWidth*iHeight*4*sizeof(GLubyte);
    GLubyte *l_image = (GLubyte*)malloc(l_size);
    if (l_image==NULL)
    {
        LOGE("Image::savePicture:could not allocate %d bytes",l_size);
        return l_res;
    }

    // Reads pixels from the color buffer (byte-aligned)
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    checkGlError("glPixelStorei");

    // Saves the pixel buffer
    glReadPixels(0,0,iWidth,iHeight,GL_RGBA,GL_UNSIGNED_BYTE,l_image);
    checkGlError("glReadPixels");

    // Stores the file
    FILE* l_file = fopen(p_filename, "wb");
    if (l_file==NULL)
    {
        LOGE("Image::savePicture:could not create %s:errno=%d",p_filename,errno);
        free(l_image);
        return l_res;
    }

    // JPEG structures
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jerr.trace_level = 10;
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, l_file);
    cinfo.image_width = iWidth;
    cinfo.image_height = iHeight;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);

    // Image quality [0..100]
    jpeg_set_quality(&cinfo, 70, true);
    jpeg_start_compress(&cinfo, true);

    // Saves the buffer
    JSAMPROW row_pointer[1]; // pointer to a single row
    // JPEG stores the image from top to bottom (OpenGL does the opposite)
    while (cinfo.next_scanline < cinfo.image_height)
    {
        row_pointer[0] = (JSAMPROW)&l_image[(cinfo.image_height-1-cinfo.next_scanline)*(cinfo.input_components)*iWidth];
        jpeg_write_scanlines(&cinfo, row_pointer, 1);
    }

    // End of the process
    jpeg_finish_compress(&cinfo);
    fclose(l_file);
    free(l_image);
    l_res = true;
    return l_res;
}
The display is correct, but the generated JPEG appears tripled and overlapped from left to right.
What did I do wrong?
It appears that the internal format of the jpeg lib and the canvas do not match: one reads/encodes RGBRGBRGB, the other RGBARGBARGBA.
You might be able to rearrange the image data, if everything else fails...
char *dst_ptr = l_image;
char *src_ptr = l_image;
for (i = 0; i < width*height; i++)
{
    *dst_ptr++ = *src_ptr++; // R
    *dst_ptr++ = *src_ptr++; // G
    *dst_ptr++ = *src_ptr++; // B
    src_ptr++;               // skip A
}
EDIT: now that the cause is verified, there might be an even simpler modification. You might be able to get the data from the GL pixel buffer in the correct format:
int l_size = iWidth*iHeight*3*sizeof(GLubyte);
...
glReadPixels(0,0,iWidth,iHeight,GL_RGB,GL_UNSIGNED_BYTE,l_image);
And one more warning: if this compiles but the output is tilted, it means that your screen width is not a multiple of 4 and OpenGL wants to start each new row at a dword boundary. In that case there is also a good chance of a crash, because each row then needs 1, 2 or 3 extra padding bytes, making the real buffer larger than the computed l_size.

OpenCV for Android - Access elements of Mat

What is the standard way to access and modify individual elements of a Mat in OpenCV4Android? Also, what is the format of the data for BGR (which is the default, I think) and grayscale?
edit: Let's make this more specific. mat.get(row, col) returns a double array. What is in this array?
If you just want to access a few pixels, read them with double[] get(int row, int col) and write them with put(int row, int col, double... data). If you want to access the whole image or iterate over the image data in a loop, the best approach is to copy the Mat data into a Java primitive array; when you're done with the data operations, copy the data back into a Mat.
Images use CV_8U: a grayscale image uses CV_8UC1, while an RGB image uses a Mat of CV_8UC3 (3 channels of CV_8U). CV_8U is the equivalent of byte in Java. :)
I can give you an example of a method I use in Java (on the Android platform) to binarize a grayscale image:
private Mat featuresVectorBinarization(Mat fv) {
    int size = (int) fv.total() * fv.channels();
    double[] buff = new double[size];
    fv.get(0, 0, buff);
    for (int i = 0; i < size; i++) {
        buff[i] = (buff[i] >= 0) ? 1 : 0;
    }
    Mat bv = new Mat(fv.size(), CvType.CV_8U);
    bv.put(0, 0, buff);
    return bv;
}
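A hypothetical call of the method above (the names fv and binary are illustrative; assumes OpenCV4Android has been initialized, e.g. via OpenCVLoader):
Mat fv = new Mat(1, 16, CvType.CV_32F);      // example feature vector
Core.randn(fv, 0, 1);                        // fill with random values, some negative
Mat binary = featuresVectorBinarization(fv); // CV_8U Mat of 0s and 1s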
Hope that helps.
What is the standard way to access and modify individual elements of a Mat in OpenCV4Android?
A Mat is what we call a "matrix" in mathematics: a rectangular array of quantities set out in rows and columns. In the case of an image Mat, those "quantities" are pixels (e.g. every element of the matrix can be the color of one pixel of the image). From this tutorial:
in the above image you can see that the mirror of the car is nothing more than a matrix containing all the intensity values of the pixel points.
So how would you go about iterating through a matrix? How about this:
for (int row = 0; row < mat.rows(); row++) {
    for (int col = 0; col < mat.cols(); col++) {
        // ...do what you want...
        // e.g. read the element at row 2, column 3 with mat.get(2, 3)
    }
}
What is the standard way to access and modify individual elements of a Mat in OpenCV4Android?
You get the value of an element of a Mat by using its get(row, col) method, where row is the first coordinate (row index) and col is the second coordinate (column index) of the element. For example, to get the 4th element of the 7th row of a BGR image Mat named bgrImageMat, call get with the zero-based indices; it returns a double array of size 3, one element for each of the Blue, Green, and Red channels of the BGR format.
double[] bgrColor = bgrImageMat.get(6, 3);
Also, what is the format of the data for BGR (which is the default, I think) and grayscale? edit: Let's make this more specific. mat.get(row, col) returns a double array. What is in this array?
You can read about the BGR color format and grayscale on the web, e.g. BGR and Grayscale.
In short, the BGR color format has 3 channels: Blue, Green and Red. So the double array returned by mat.get(row, col), when mat is a BGR image, has a size of 3, and its elements contain the values of the Blue, Green and Red channels respectively.
Likewise, grayscale is a 1-channel format, so the double array returned will have a size of 1.
From what I understand of the OpenCV Mat object, a Mat can represent any image of W x H pixels.
Now let's say you want to access the center pixel of your image. Then:
col = W/2
row = H/2
Then you can access the pixel data as follows (note that get takes the row index first):
double[] data = matObject.get(row, col);
Now, what does data represent, and what is the size of the data array? That depends on the image type.
If the image is grayscale, data.length == 1, since there is only one channel, and data[0] represents the color value of that pixel, i.e. 0 (black) to 255 (white).
If the image is a color image, data.length equals the number of channels, e.g. 4 for an RGBA image, and data[0..n] represent the channel values of that pixel.
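Putting the above together, here is a minimal read/modify/write sketch using get and put (row index first, then column; bgrMat, row and col are illustrative names for a CV_8UC3 Mat):
double[] px = bgrMat.get(row, col); // {blue, green, red} for a CV_8UC3 Mat
px[2] = 255;                        // max out the red channel
bgrMat.put(row, col, px);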

Access to raw data in ARGB_8888 Android Bitmap

I am trying to access the raw data of a Bitmap in ARGB_8888 format on Android, using the copyPixelsToBuffer and copyPixelsFromBuffer methods. However, those calls always seem to apply the alpha channel to the RGB channels. I need the raw data in a byte[] or similar (to pass through JNI; yes, I know about bitmap.h in Android 2.2, but I cannot use that).
Here is a sample:
// Create 1x1 Bitmap with alpha channel, 8 bits per channel
Bitmap one = Bitmap.createBitmap(1,1,Bitmap.Config.ARGB_8888);
one.setPixel(0,0,0xef234567);
Log.v("?","hasAlpha() = "+Boolean.toString(one.hasAlpha()));
Log.v("?","pixel before = "+Integer.toHexString(one.getPixel(0,0)));
// Copy Bitmap to buffer
byte[] store = new byte[4];
ByteBuffer buffer = ByteBuffer.wrap(store);
one.copyPixelsToBuffer(buffer);
// Change value of the pixel
int value=buffer.getInt(0);
Log.v("?", "value before = "+Integer.toHexString(value));
value = (value >> 8) | 0xffffff00;
buffer.putInt(0, value);
value=buffer.getInt(0);
Log.v("?", "value after = "+Integer.toHexString(value));
// Copy buffer back to Bitmap
buffer.position(0);
one.copyPixelsFromBuffer(buffer);
Log.v("?","pixel after = "+Integer.toHexString(one.getPixel(0,0)));
The log then shows
hasAlpha() = true
pixel before = ef234567
value before = 214161ef
value after = ffffff61
pixel after = 619e9e9e
I understand that the order of the ARGB channels is different; that's fine. But I don't want the alpha channel to be applied upon every copy (which is what it seems to be doing).
Is this how copyPixelsToBuffer and copyPixelsFromBuffer are supposed to work? Is there any way to get the raw data in a byte[]?
Added in response to answer below:
Putting in buffer.order(ByteOrder.nativeOrder()); before the copyPixelsToBuffer does change the result, but still not in the way I want it:
pixel before = ef234567
value before = ef614121
value after = ffffff41
pixel after = ff41ffff
Seems to suffer from essentially the same problem (alpha being applied upon each copyPixelsFrom/ToBuffer).
One way to access the data in a Bitmap is to use the getPixels() method. Below is an example I used to get a grayscale image from ARGB data and then back from the byte array to a Bitmap (of course, if you need RGB you reserve 3x the bytes and save them all...):
/* Free-to-use licence by Sami Varjo (but nice if you retain this line) */
public final class BitmapConverter {

    private BitmapConverter(){};

    /**
     * Get grayscale data from an ARGB image into a byte array
     */
    public static byte[] ARGB2Gray(Bitmap img)
    {
        int width = img.getWidth();
        int height = img.getHeight();
        int[] pixels = new int[height*width];
        byte grayIm[] = new byte[height*width];
        img.getPixels(pixels, 0, width, 0, 0, width, height);
        int pixel = 0;
        int count = width*height;
        while (count-- > 0) {
            int inVal = pixels[pixel];
            // Get the pixel channel values from int
            double r = (double)((inVal & 0x00ff0000) >> 16);
            double g = (double)((inVal & 0x0000ff00) >> 8);
            double b = (double)( inVal & 0x000000ff);
            grayIm[pixel++] = (byte)(0.2989*r + 0.5870*g + 0.1140*b);
        }
        return grayIm;
    }

    /**
     * Create a grayscale bitmap from a byte array
     */
    public static Bitmap gray2ARGB(byte[] data, int width, int height)
    {
        int count = height*width;
        int[] outPix = new int[count];
        int pixel = 0;
        while (count-- > 0) {
            int val = data[pixel] & 0xff; // convert byte to unsigned
            outPix[pixel++] = 0xff000000 | val << 16 | val << 8 | val;
        }
        Bitmap out = Bitmap.createBitmap(outPix, 0, width, width, height, Bitmap.Config.ARGB_8888);
        return out;
    }
}
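A hypothetical round trip with the converter above, assuming src is an ARGB_8888 Bitmap:
byte[] gray = BitmapConverter.ARGB2Gray(src);
Bitmap result = BitmapConverter.gray2ARGB(gray, src.getWidth(), src.getHeight());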
My guess is that this might have to do with the byte order of the ByteBuffer you are using. ByteBuffer uses big endian by default.
Set the endianness on the buffer with
buffer.order(ByteOrder.nativeOrder());
See if it helps.
Moreover, copyPixelsFromBuffer/copyPixelsToBuffer do not change the pixel data in any way; the bytes are copied raw.
I realize this is very stale and probably won't help you now, but I came across this recently while trying to get copyPixelsFromBuffer to work in my app. (Thank you for asking this question, by the way! You saved me tons of time in debugging.) I'm adding this answer in the hope that it helps others like me going forward...
Although I haven't used it yet to verify that it works, it looks like, as of API Level 19, we finally have a way to tell a Bitmap not to "apply the alpha" (a.k.a. premultiply). The new setPremultiplied(boolean) method should help in situations like this by allowing us to pass false.
I hope this helps!
This is an old question, but I hit the same issue and just figured out that the bitmap bytes are premultiplied. As of API 19 you can set the bitmap not to premultiply the buffer, but the API makes no guarantee.
From the docs:
public final void setPremultiplied(boolean premultiplied)
Sets whether the bitmap should treat its data as pre-multiplied.
Bitmaps are always treated as pre-multiplied by the view system and Canvas for performance reasons. Storing un-pre-multiplied data in a Bitmap (through setPixel, setPixels, or BitmapFactory.Options.inPremultiplied) can lead to incorrect blending if drawn by the framework.
This method will not affect the behaviour of a bitmap without an alpha channel, or if hasAlpha() returns false.
Calling createBitmap or createScaledBitmap with a source Bitmap whose colors are not pre-multiplied may result in a RuntimeException, since those functions require drawing the source, which is not supported for un-pre-multiplied Bitmaps.
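For what it's worth, here is a minimal sketch of the non-premultiplied round trip on API 19+, based on the question's own snippet (untested against the original scenario, so treat it as an assumption):
Bitmap one = Bitmap.createBitmap(1, 1, Bitmap.Config.ARGB_8888);
one.setPremultiplied(false); // keep raw channel values, don't apply alpha
one.setPixel(0, 0, 0xef234567);
ByteBuffer buffer = ByteBuffer.allocate(one.getByteCount());
buffer.order(ByteOrder.nativeOrder());
one.copyPixelsToBuffer(buffer); // should now copy unmultiplied ARGB bytes
buffer.rewind();
one.copyPixelsFromBuffer(buffer);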
