I'm trying to use the NDK to do some image processing. I am NOT using opencv.
I am fairly new to Android so I was doing this in steps. I started by writing a simple app that would let me capture video from the camera and display it to the screen. I have this done.
Then I tried to manipulate the camera data in native code. However, onPreviewFrame delivers the frame information as a byte array. This is my code -
public void onPreviewFrame(byte[] arg0, Camera arg1)
{
    if (imageFormat == ImageFormat.NV21)
    {
        if (!bProcessing)
        {
            FrameData = arg0;
            mHandler.post(callnative);
        }
    }
}
And the callnative runnable is like so -
private Runnable callnative = new Runnable()
{
    public void run()
    {
        bProcessing = true;
        String returnNative = callTorch(MainActivity.assetManager, PreviewSizeWidth, PreviewSizeHeight, FrameData, pixels);
        bitmap.setPixels(pixels, 0, PreviewSizeWidth, 0, 0, PreviewSizeWidth, PreviewSizeHeight);
        MycameraClass.setImageBitmap(bitmap);
        bProcessing = false;
    }
};
The problem is, I need to use FrameData in native code as floats, but it arrives as a byte array. I wanted to know how the frame data is stored. Is it a two-dimensional array of bytes? Does the camera return an 8-bit image and store it as 640x480 bytes? If so, how does C interpret this byte data, and can I simply convert it to float? I have this in native code -
jbyte *nativeData;
nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
__android_log_print(ANDROID_LOG_INFO, "Nativeprint", "nativedata is: %d",(int)nativeData[0]);
However, this prints -22 which leads me to believe that I am trying to print out a pointer. I am not sure why that is the case though.
I would appreciate any help on this.
You will not be able to get any float data type from the pixel buffer. The data is in bytes, which in C is the char data type. (That is also why your log prints -22: jbyte is signed, so a luma value of 234 shows up as 234 - 256 = -22. Mask with 0xFF to get the unsigned value before converting it to float.)
So this:
jbyte *nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
is the same as this:
char *nativeData = (char *)((env)->GetByteArrayElements(NV21FrameData, NULL));
The data is stored as a one-dimensional array, so you retrieve each pixel by computing an index from x, y, and the width.
Also remember that the camera preview frames in your sample are in YUV420sp (NV21); this means you will need to convert the data from YUV to RGB before you can set it into a bitmap.
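For reference, here is a minimal sketch of that NV21 (YUV420sp) to ARGB conversion in Java; the same arithmetic applies if you do it in native code. The method name is illustrative and the integer coefficients are the commonly used fixed-point approximation of the YUV-to-RGB formula, not something taken from the code above.

// Minimal NV21 -> ARGB8888 sketch (integer approximation of the usual YUV -> RGB conversion).
public static void nv21ToArgb(byte[] nv21, int[] argb, int width, int height) {
    int frameSize = width * height;
    for (int row = 0, yIndex = 0; row < height; row++) {
        int uvRowStart = frameSize + (row >> 1) * width;  // interleaved V/U row shared by two image rows
        int u = 0, v = 0;
        for (int col = 0; col < width; col++, yIndex++) {
            int y = (nv21[yIndex] & 0xFF) - 16;            // mask: bytes are signed in Java
            if (y < 0) y = 0;
            if ((col & 1) == 0) {                          // one V/U pair covers two columns
                v = (nv21[uvRowStart + col] & 0xFF) - 128;
                u = (nv21[uvRowStart + col + 1] & 0xFF) - 128;
            }
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            r = Math.max(0, Math.min(262143, r));
            g = Math.max(0, Math.min(262143, g));
            b = Math.max(0, Math.min(262143, b));
            argb[yIndex] = 0xFF000000
                    | ((r << 6) & 0x00FF0000)
                    | ((g >> 2) & 0x0000FF00)
                    | ((b >> 10) & 0x000000FF);
        }
    }
}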
I am developing a custom Camera2 API app, and I have noticed that the capture format is different on some devices when I use the ImageReader callback.
For example, on the Nexus 4 it does not work correctly, while on the Nexus 5X it looks OK; here is the output.
I initialize the ImageReader like this:
mImageReader = ImageReader.newInstance(320, 240, ImageFormat.YUV_420_888,2);
And my callback is a simple ImageReader callback:
mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        try {
            mBackgroundHandler.post(
                new ImageController(reader.acquireNextImage())
            );
        } catch (Exception e) {
            //exception
        }
    }
};
In the case of the Nexus 4, I get this error:
D/qdgralloc: gralloc_lock_ycbcr: Invalid format passed: 0x32315659
When I write the raw frame to a file on both devices, I get different images. From this I understand that the Nexus 5X image uses NV21 encoding and the Nexus 4 uses YV12.
I found the image format specification and tried to query the format from the ImageReader.
There are YV12 and NV21 options, but obviously I just get YUV_420_888 when I ask for the format:
int test=mImageReader.getImageFormat();
So is there any way to get the camera's underlying input format (NV21 or YV12), so that I can distinguish between these encodings in my camera class? CameraCharacteristics maybe?
Thanks in advance.
Unai.
PS: I use OpenGL for displaying RGB images, and I use OpenCV to handle the YUV_420_888 conversions.
YUV_420_888 is a wrapper that can host (among others) both NV21 and YV12 images. You must use the planes and strides to access the individual color channels:
Image.Plane Y = image.getPlanes()[0];
Image.Plane U = image.getPlanes()[1];
Image.Plane V = image.getPlanes()[2];
If the underlying pixels are in NV21 format (as on Nexus 4), the pixelStride will be 2, and
int getU(Image image, int col, int row) {
    return getPixel(image.getPlanes()[1], col / 2, row / 2);
}

int getPixel(Image.Plane plane, int col, int row) {
    return plane.getBuffer().get(col * plane.getPixelStride() + row * plane.getRowStride()) & 0xFF; // mask to an unsigned value
}
We take half the column and half the row because that is how the U and V (chroma) planes are stored in a 420 image.
This code is for illustration only and is very inefficient; you probably want to access the pixels in bulk using get(byte[], int, int), via a fragment shader, or via the JNI function GetDirectBufferAddress in native code. What you cannot use is the buffer's array() method, because the planes are guaranteed to be direct byte buffers.
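As a rough illustration of the bulk access mentioned above (a sketch only; image is assumed to be a YUV_420_888 android.media.Image obtained from the reader):

// Sketch: copy the whole Y plane in one bulk call instead of per-pixel get()s.
Image.Plane yPlane = image.getPlanes()[0];
ByteBuffer yBuffer = yPlane.getBuffer();
byte[] yBytes = new byte[yBuffer.remaining()];
yBuffer.get(yBytes, 0, yBytes.length);  // bulk get(byte[], int, int)
// Rows may be padded: use yPlane.getRowStride() (not the image width) when indexing yBytes.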
Here is a useful method that converts from YV12 to NV21:
public static byte[] fromYV12toNV21(@NonNull final byte[] yv12,
                                    final int width,
                                    final int height) {
    byte[] nv21 = new byte[yv12.length];
    final int size = width * height;
    final int quarter = size / 4;
    final int vPosition = size; // This is where V starts
    final int uPosition = size + quarter; // This is where U starts

    System.arraycopy(yv12, 0, nv21, 0, size); // Y is same

    for (int i = 0; i < quarter; i++) {
        nv21[size + i * 2] = yv12[vPosition + i];     // For NV21, V first
        nv21[size + i * 2 + 1] = yv12[uPosition + i]; // For NV21, U second
    }
    return nv21;
}
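If the goal is simply to tell which layout a given device delivers, one practical heuristic (a sketch, not part of the answers above) is to inspect the chroma plane's pixel stride, since, as noted earlier, an interleaved NV21-style image reports a pixel stride of 2 while a planar YV12-style image reports 1:

// Sketch: distinguish interleaved (NV21/NV12-like) from planar (YV12/I420-like) chroma storage.
Image.Plane uPlane = image.getPlanes()[1];
if (uPlane.getPixelStride() == 2) {
    // Chroma is interleaved: U and V alternate within a single buffer (NV21/NV12 family).
} else {
    // Pixel stride 1: chroma is planar, with separate contiguous U and V planes (YV12/I420 family).
}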
I have an onPreviewFrame callback set up. It receives a byte[] with NV21 data in it. I have set the preview size to 176*144. When the device is held in landscape mode the 176*144 byte[] is perfect, but when the device is held in portrait mode I still get a byte[] with the same dimensions.
I want to rotate the byte[] by 90 degrees and obtain a byte[] with dimensions 144*176.
So the question is, how do I rotate the data, not just the preview image? Camera.Parameters.setRotation only affects taking pictures, not video. Camera.setDisplayOrientation specifically says it only affects the displayed preview, not the frame bytes:
This does not affect the order of byte array passed in
onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.
After checking out various posts I found this one, which suggests using ConvertToI420 from libyuv.
Now the deal is, I have compiled libyuv and I am able to call the libyuv::ConvertToI420 method, but the resulting I420 is all messed up in terms of color and shows lines everywhere; however, the dimensions I get are now 144*176 (you can check the image here).
The code snippet that I've used is as follows.
//sourceWidth = 176 and sourceHeight = 144
unsigned char *I420M = new unsigned char[(int)(sourceWidth*sourceHeight*1.5)];
unsigned int YSize = sourceWidth * sourceHeight;
// yuvPtr is the NV21 data passed from onPreviewCallback (from JAVA layer)
const uint8* src_frame = const_cast<const uint8*>(yuvPtr);
size_t src_size = YSize;
uint8* pDstY = I420M;
uint8* pDstU = I420M + YSize;
uint8* pDstV = I420M + (YSize/4);
libyuv::RotationMode mode;
if(landscapeLeft){
mode = libyuv::kRotate90;
}else{
mode = libyuv::kRotate270;
}
uint32 format = libyuv::FOURCC_NV21;
int retVal = libyuv::ConvertToI420(src_frame, src_size,
pDstY, sourceHeight,
pDstU, (sourceHeight/2),
pDstV, (sourceHeight/2),
0, 0,
sourceWidth, sourceHeight,
sourceWidth, sourceHeight,
mode,
format);
I don't wish to crop the image, just rotate it by 90 degrees (clockwise/anticlockwise); the attached image is for kRotate90.
Could anyone please point out where I am going wrong? I strongly suspect it has to do with the parameters I am passing to the ConvertToI420 method.
Any help appreciated.
Use sourceWidth, not sourceHeight, for the destination strides:
int retVal = libyuv::ConvertToI420(src_frame, src_size,
pDstY, sourceWidth,
pDstU, (sourceWidth/2),
pDstV, (sourceWidth/2),
0, 0,
sourceWidth, sourceHeight,
sourceWidth, sourceHeight,
mode,
format);
I have figured out what was going wrong. The above code snippet works perfectly well, and I420M contains the rotated YUV with 144*176 dimensions.
The problem was in the way I was converting I420M to a jbyte[] while passing it back to the Java layer.
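For completeness, the 90-degree rotation can also be done in plain Java without libyuv. This is only a sketch of the idea (clockwise rotation assumed, NV21 layout as described earlier), not part of the accepted solution above:

// Sketch: rotate an NV21 frame 90 degrees clockwise in plain Java (width x height -> height x width).
public static byte[] rotateNV21Cw90(byte[] src, int width, int height) {
    byte[] dst = new byte[src.length];
    int frameSize = width * height;
    // Rotate the Y plane.
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            dst[x * height + (height - 1 - y)] = src[y * width + x];
        }
    }
    // Rotate the interleaved V/U plane (half resolution; each V/U pair moves together).
    for (int y = 0; y < height / 2; y++) {
        for (int x = 0; x < width / 2; x++) {
            int srcIndex = frameSize + y * width + x * 2;
            int dstIndex = frameSize + x * height + (height / 2 - 1 - y) * 2;
            dst[dstIndex] = src[srcIndex];          // V
            dst[dstIndex + 1] = src[srcIndex + 1];  // U
        }
    }
    return dst;
}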
I am starting to develop an app which monitors the camera preview, does some image processing on it, and displays it on a canvas. Just as a diagnostic I have the following code:
camera = Camera.open();
ImageFormat imf = new ImageFormat();
Camera.Parameters param = camera.getParameters();
param.setPreviewSize(128, 128);
preview_format = param.getPreviewFormat();
Camera.Size sz = param.getPreviewSize();
myimage = new int[sz.width*sz.height];
At run time it reports that preview_format is 17 which I understand is "NV21".
Later I have:
camera.setPreviewCallback(new PreviewCallback()
{
    public void onPreviewFrame(byte[] _data, Camera _camera)
    {
        YUV_NV21_TO_RGB(myimage, _data, 128, 128);
    }
});
The function YUV_NV21_TO_RGB was taken from here.
Meanwhile in another thread I have:
canvas.drawBitmap(
myimage, // the int array
0, // where to start in the array
128, // the stride ???
200, // x coord of where to display
200, // y coord of where to display
128, // wid
128, // ht
false, // alpha used?
null); // the paint used
The resulting image can be seen amongst other diagnostics in the square below. The stripes change as I move the phone around and appear to correspond in some way to what the camera is pointing at, but clearly it has been mangled. I tried an alternative function found here, and another from Wikipedia, but with seemingly identical results. Any ideas?
EDIT: One thought I had was that perhaps NV21 may not completely specify the format - maybe it's a class of formats, where you need to go on and specify the bits per pixel or similar.
EDIT: An extra clue - if I cover the camera completely, the square goes entirely pure green.
Your preview size is not 128 by 128 because you fail to set it. You set it on the Camera.Parameters instance but you don't apply it to the camera.
You need to add the following line:
camera.setParameters(param);
And once the parameters have been applied, it's safest to read the effective values back from the camera rather than from your local param object:
preview_format = camera.getParameters().getPreviewFormat();
Camera.Size sz = camera.getParameters().getPreviewSize();
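Putting it together, the corrected setup might look roughly like this (a sketch only; the variable names follow the question):

camera = Camera.open();
Camera.Parameters param = camera.getParameters();
param.setPreviewSize(128, 128);
camera.setParameters(param);  // apply the modified parameters to the camera

// Read the effective values back from the camera.
preview_format = camera.getParameters().getPreviewFormat();
Camera.Size sz = camera.getParameters().getPreviewSize();
myimage = new int[sz.width * sz.height];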
I am trying to save, through JNI, the output of the camera as modified by OpenGL ES 2 on my tablet.
To achieve this, I use the libjpeg library compiled with NDK r8b.
I use the following code:
In the rendering function:
renderImage();
if (iIsPictureRequired)
{
    savePicture();
    iIsPictureRequired = false;
}
The saving procedure:
bool Image::savePicture()
{
    bool l_res = false;
    char p_filename[] = {"/sdcard/Pictures/testPic.jpg"};

    // Allocates the image buffer (RGBA)
    int l_size = iWidth*iHeight*4*sizeof(GLubyte);
    GLubyte *l_image = (GLubyte*)malloc(l_size);
    if (l_image==NULL)
    {
        LOGE("Image::savePicture:could not allocate %d bytes",l_size);
        return l_res;
    }

    // Reads pixels from the color buffer (byte-aligned)
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    checkGlError("glPixelStorei");

    // Saves the pixel buffer
    glReadPixels(0,0,iWidth,iHeight,GL_RGBA,GL_UNSIGNED_BYTE,l_image);
    checkGlError("glReadPixels");

    // Stores the file
    FILE* l_file = fopen(p_filename, "wb");
    if (l_file==NULL)
    {
        LOGE("Image::savePicture:could not create %s:errno=%d",p_filename,errno);
        free(l_image);
        return l_res;
    }

    // JPEG structures
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jerr.trace_level = 10;
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, l_file);
    cinfo.image_width = iWidth;
    cinfo.image_height = iHeight;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);

    // Image quality [0..100]
    jpeg_set_quality(&cinfo, 70, true);
    jpeg_start_compress(&cinfo, true);

    // Saves the buffer
    JSAMPROW row_pointer[1]; // pointer to a single row

    // JPEG stores the image from top to bottom (OpenGL does the opposite)
    while (cinfo.next_scanline < cinfo.image_height)
    {
        row_pointer[0] = (JSAMPROW)&l_image[(cinfo.image_height-1-cinfo.next_scanline)*(cinfo.input_components)*iWidth];
        jpeg_write_scanlines(&cinfo, row_pointer, 1);
    }

    // End of the process
    jpeg_finish_compress(&cinfo);
    fclose(l_file);
    free(l_image);
    l_res = true;
    return l_res;
}
The display is correct, but the generated JPEG looks tripled and overlapped from left to right.
What did I do wrong?
It appears that the internal format expected by the jpeg lib and the format of your pixel buffer do not match: one side reads/encodes RGBRGBRGB..., the other holds RGBARGBARGBA....
You might be able to rearrange the image data, if everything else fails:
char *dst_ptr = l_image;
char *src_ptr = l_image;
for (int i = 0; i < width * height; i++) {
    *dst_ptr++ = *src_ptr++;  // R
    *dst_ptr++ = *src_ptr++;  // G
    *dst_ptr++ = *src_ptr++;  // B
    src_ptr++;                // skip A
}
EDIT: now that the cause is verified, there might be an even simpler modification.
You might be able to get the data from the GL pixel buffer in the correct format directly:
int l_size = iWidth*iHeight*3*sizeof(GLubyte);
...
glReadPixels(0,0,iWidth,iHeight,GL_RGB,GL_UNSIGNED_BYTE,l_image);
And one more warning: if this compiles but the output is tilted, it means that your screen width is not a multiple of 4 and OpenGL wants to start each new row at a dword boundary. In that case there is also a good chance of a crash, because l_size would then need to be slightly larger than computed to account for the per-row padding.
I am trying to access the raw data of a Bitmap in ARGB_8888 format on Android, using the copyPixelsToBuffer and copyPixelsFromBuffer methods. However, invoking those calls seems to always apply the alpha channel to the RGB channels. I need the raw data in a byte[] or similar (to pass through JNI; yes, I know about bitmap.h in Android 2.2, cannot use that).
Here is a sample:
// Create 1x1 Bitmap with alpha channel, 8 bits per channel
Bitmap one = Bitmap.createBitmap(1,1,Bitmap.Config.ARGB_8888);
one.setPixel(0,0,0xef234567);
Log.v("?","hasAlpha() = "+Boolean.toString(one.hasAlpha()));
Log.v("?","pixel before = "+Integer.toHexString(one.getPixel(0,0)));
// Copy Bitmap to buffer
byte[] store = new byte[4];
ByteBuffer buffer = ByteBuffer.wrap(store);
one.copyPixelsToBuffer(buffer);
// Change value of the pixel
int value=buffer.getInt(0);
Log.v("?", "value before = "+Integer.toHexString(value));
value = (value >> 8) | 0xffffff00;
buffer.putInt(0, value);
value=buffer.getInt(0);
Log.v("?", "value after = "+Integer.toHexString(value));
// Copy buffer back to Bitmap
buffer.position(0);
one.copyPixelsFromBuffer(buffer);
Log.v("?","pixel after = "+Integer.toHexString(one.getPixel(0,0)));
The log then shows
hasAlpha() = true
pixel before = ef234567
value before = 214161ef
value after = ffffff61
pixel after = 619e9e9e
I understand that the order of the argb channels is different; that's fine. But I don't
want the alpha channel to be applied upon every copy (which is what it seems to be doing).
Is this how copyPixelsToBuffer and copyPixelsFromBuffer are supposed to work? Is there any way to get the raw data in a byte[]?
Added in response to answer below:
Putting in buffer.order(ByteOrder.nativeOrder()); before the copyPixelsToBuffer does change the result, but still not in the way I want it:
pixel before = ef234567
value before = ef614121
value after = ffffff41
pixel after = ff41ffff
Seems to suffer from essentially the same problem (alpha being applied upon each copyPixelsFrom/ToBuffer).
One way to access the data in a Bitmap is to use the getPixels() method. Below you can find an example I used to get a grayscale image from ARGB data and then convert the byte array back to a Bitmap (of course, if you need RGB you reserve 3x the bytes and save them all...):
/* Free to use licence by Sami Varjo (but nice if you retain this line) */
public final class BitmapConverter {

    private BitmapConverter(){};

    /**
     * Get grayscale data from argb image to byte array
     */
    public static byte[] ARGB2Gray(Bitmap img)
    {
        int width = img.getWidth();
        int height = img.getHeight();

        int[] pixels = new int[height*width];
        byte grayIm[] = new byte[height*width];

        img.getPixels(pixels,0,width,0,0,width,height);

        int pixel=0;
        int count=width*height;
        while(count-->0){
            int inVal = pixels[pixel];
            //Get the pixel channel values from int
            double r = (double)( (inVal & 0x00ff0000)>>16 );
            double g = (double)( (inVal & 0x0000ff00)>>8 );
            double b = (double)( inVal & 0x000000ff) ;
            grayIm[pixel++] = (byte)( 0.2989*r + 0.5870*g + 0.1140*b );
        }
        return grayIm;
    }

    /**
     * Create a gray scale bitmap from byte array
     */
    public static Bitmap gray2ARGB(byte[] data, int width, int height)
    {
        int count = height*width;
        int[] outPix = new int[count];

        int pixel=0;
        while(count-->0){
            int val = data[pixel] & 0xff; //convert byte to unsigned
            outPix[pixel++] = 0xff000000 | val << 16 | val << 8 | val ;
        }

        Bitmap out = Bitmap.createBitmap(outPix,0,width,width, height, Bitmap.Config.ARGB_8888);
        return out;
    }
}
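For example, a quick usage sketch (assuming a source Bitmap named srcBitmap, which is not part of the code above):

byte[] gray = BitmapConverter.ARGB2Gray(srcBitmap);
Bitmap result = BitmapConverter.gray2ARGB(gray, srcBitmap.getWidth(), srcBitmap.getHeight());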
My guess is that this might have to do with the byte order of the ByteBuffer you are using. ByteBuffer uses big endian by default.
Set the endianness on the buffer with
buffer.order(ByteOrder.nativeOrder());
See if it helps.
Moreover, copyPixelsFromBuffer and copyPixelsToBuffer do not change the pixel data in any way; the pixels are copied raw.
I realize this is very stale and probably won't help you now, but I came across this recently in trying to get copyPixelsFromBuffer to work in my app. (Thank you for asking this question, btw! You saved me tons of time in debugging.) I'm adding this answer in the hopes it helps others like me going forward...
Although I haven't used this yet to ensure that it works, it looks like, as of API Level 19, we'll finally have a way to specify not to "apply the alpha" (a.k.a. premultiply) within Bitmap. They're adding a setPremultiplied(boolean) method that should help in situations like this going forward by allowing us to specify false.
I hope this helps!
This is an old question, but I ran into the same issue and just figured out that the bitmap bytes are pre-multiplied. You can set the bitmap (as of API 19) not to pre-multiply the buffer, but the API makes no guarantees.
From the docs:
public final void setPremultiplied(boolean premultiplied)
Sets whether the bitmap should treat its data as pre-multiplied.
Bitmaps are always treated as pre-multiplied by the view system and Canvas for performance reasons. Storing un-pre-multiplied data in a Bitmap (through setPixel, setPixels, or BitmapFactory.Options.inPremultiplied) can lead to incorrect blending if drawn by the framework.
This method will not affect the behaviour of a bitmap without an alpha channel, or if hasAlpha() returns false.
Calling createBitmap or createScaledBitmap with a source Bitmap whose colors are not pre-multiplied may result in a RuntimeException, since those functions require drawing the source, which is not supported for un-pre-multiplied Bitmaps.
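Based on the documentation quoted above, a usage sketch might look like this (untested; API 19+ only, and only meaningful for bitmaps with an alpha channel):

// Sketch: ask the Bitmap to treat its data as NOT pre-multiplied before copying pixels around.
Bitmap one = Bitmap.createBitmap(1, 1, Bitmap.Config.ARGB_8888);
one.setPremultiplied(false);   // API 19+: treat stored data as un-premultiplied
one.setPixel(0, 0, 0xef234567);

ByteBuffer buffer = ByteBuffer.allocate(4).order(ByteOrder.nativeOrder());
one.copyPixelsToBuffer(buffer);   // buffer should now hold the un-premultiplied channel bytes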