I am trying to save, through JNI, the output of the camera as modified by OpenGL ES 2 on my tablet.
To achieve this, I use the libjpeg library compiled with NDK r8b.
I use the following code:
In the rendering function:
renderImage();
if (iIsPictureRequired)
{
    savePicture();
    iIsPictureRequired = false;
}
The saving procedure:
bool Image::savePicture()
{
    bool l_res = false;
    char p_filename[] = {"/sdcard/Pictures/testPic.jpg"};

    // Allocates the image buffer (RGBA)
    int l_size = iWidth*iHeight*4*sizeof(GLubyte);
    GLubyte *l_image = (GLubyte*)malloc(l_size);
    if (l_image == NULL)
    {
        LOGE("Image::savePicture:could not allocate %d bytes", l_size);
        return l_res;
    }

    // Reads pixels from the color buffer (byte-aligned)
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    checkGlError("glPixelStorei");

    // Saves the pixel buffer
    glReadPixels(0, 0, iWidth, iHeight, GL_RGBA, GL_UNSIGNED_BYTE, l_image);
    checkGlError("glReadPixels");

    // Stores the file
    FILE* l_file = fopen(p_filename, "wb");
    if (l_file == NULL)
    {
        LOGE("Image::savePicture:could not create %s:errno=%d", p_filename, errno);
        free(l_image);
        return l_res;
    }

    // JPEG structures
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jerr.trace_level = 10;
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, l_file);
    cinfo.image_width = iWidth;
    cinfo.image_height = iHeight;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    // Image quality [0..100]
    jpeg_set_quality(&cinfo, 70, true);
    jpeg_start_compress(&cinfo, true);

    // Saves the buffer
    JSAMPROW row_pointer[1]; // pointer to a single row
    // JPEG stores the image from top to bottom (OpenGL does the opposite)
    while (cinfo.next_scanline < cinfo.image_height)
    {
        row_pointer[0] = (JSAMPROW)&l_image[(cinfo.image_height-1-cinfo.next_scanline)*(cinfo.input_components)*iWidth];
        jpeg_write_scanlines(&cinfo, row_pointer, 1);
    }

    // End of the process
    jpeg_finish_compress(&cinfo);
    fclose(l_file);
    free(l_image);
    l_res = true;
    return l_res;
}
The display is correct, but the generated JPEG appears tripled and overlapped from left to right.
What did I do wrong?
It appears that the internal format expected by the JPEG library and the one in your pixel buffer do not match: one reads/encodes RGBRGBRGB..., while the other supplies RGBARGBARGBA....
If everything else fails, you might be able to rearrange the image data yourself:
GLubyte *dst_ptr = l_image;
GLubyte *src_ptr = l_image;
for (int i = 0; i < iWidth*iHeight; i++) {
    *dst_ptr++ = *src_ptr++; // R
    *dst_ptr++ = *src_ptr++; // G
    *dst_ptr++ = *src_ptr++; // B
    src_ptr++;               // skip A
}
EDIT: now that the cause is verified, there is an even simpler modification.
You might be able to get the data from the GL pixel buffer in the correct format directly:
int l_size = iWidth*iHeight*3*sizeof(GLubyte);
...
glReadPixels(0,0,iWidth,iHeight,GL_RGB,GL_UNSIGNED_BYTE,l_image);
And one more warning: if this compiles but the output is skewed, it means that your screen width is not a multiple of 4 and that OpenGL wants to start each new row on a dword (4-byte) boundary. In that case there is also a good chance of a crash, because l_size would then need to be 1, 2 or 3 bytes larger per row than expected.
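For reference, here is a quick sketch of the row-padding arithmetic (written in Java purely for illustration, with hypothetical names; the same calculation applies in the C++ code above):

// With GL_PACK_ALIGNMENT at its default of 4, glReadPixels pads each GL_RGB row
// up to the next 4-byte boundary, so the buffer must be sized from the padded stride.
static int packedBufferSize(int width, int height, int packAlignment) {
    int bytesPerPixel = 3;  // GL_RGB with GL_UNSIGNED_BYTE
    int rowStride = ((width * bytesPerPixel + packAlignment - 1) / packAlignment) * packAlignment;
    return rowStride * height;
}
// packedBufferSize(w, h, 4) can be 1-3 bytes per row larger than w*h*3;
// calling glPixelStorei(GL_PACK_ALIGNMENT, 1), as the question's code does, removes the padding.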
OpenGL ES 3.1, Android.
I have set up an SSBO with the intention of writing something in the fragment shader and reading it back in the application. Things almost work, i.e. I can read back the value I have written, with one issue: when I read an int, its bytes come back reversed (a '17' = 0x00000011 written in the shader comes back as '285212672' = 0x11000000).
Here's how I do it:
Shader
(...)
layout (std140,binding=0) buffer SSBO
{
int ssbocount[];
};
(...)
ssbocount[0] = 17;
(...)
Application code
int SIZE = 40;
int[] mSSBO = new int[1];
ByteBuffer buf = ByteBuffer.allocateDirect(SIZE).order(ByteOrder.nativeOrder());
(...)
glGenBuffers(1,mSSBO,0);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, mSSBO[0]);
glBufferData(GL_SHADER_STORAGE_BUFFER, SIZE, null, GL_DYNAMIC_READ);
buf = (ByteBuffer) glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, SIZE, GL_MAP_READ_BIT );
glBindBufferBase(GL_SHADER_STORAGE_BUFFER,0, mSSBO[0]);
(...)
int readValue = buf.getInt(0);
Now print out the value, and it comes back as '17' with its bytes reversed.
Notice I DO allocate the ByteBuffer with 'nativeOrder'. Of course, I could manually flip the bytes, but the concern is this would only sometimes work, depending on the endianness of the host machine...
The fix is to use native endianness and to create an integer view of the ByteBuffer using ByteBuffer.asIntBuffer(). For some reason the plain getInt() calls do not seem to respect the byte-order setting of the mapped ByteBuffer.
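In code, the suggested fix looks roughly like this (a sketch only, reusing the names from the question and assuming the usual android.opengl.GLES31 and java.nio imports; the key point is to set the byte order on the buffer actually returned by glMapBufferRange before taking the int view):

ByteBuffer mapped = (ByteBuffer) glMapBufferRange(
        GL_SHADER_STORAGE_BUFFER, 0, SIZE, GL_MAP_READ_BIT);
mapped.order(ByteOrder.nativeOrder());  // the mapped buffer does not inherit the order set earlier
IntBuffer ints = mapped.asIntBuffer();  // integer view in native byte order
int readValue = ints.get(0);            // now reads back as 17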
I came across a problem rendering the camera image after doing some processing on its YUV buffer.
I am using the video-overlay-jni-example and, in the OnFrameAvailable method, I am creating a new frame buffer using cv::Mat...
Here is how I create a new frame buffer:
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
After processing, I copy frame.data back to yuv_temp_buffer_ in order to render it on the texture: memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
And this works fine...
The problem starts when I try to execute the OpenCV method findChessboardCorners using the frame that I created before.
The method findChessboardCorners takes about 90 ms to execute (roughly 11 fps); however, the result seems to be rendered at a much lower rate (it appears on screen at about 0.5 fps).
Here is the code of the OnFrameAvailable method:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
    if (yuv_drawable_ == NULL) {
        return;
    }

    if (yuv_drawable_->GetTextureId() == 0) {
        LOGE("AugmentedRealityApp::yuv texture id not valid");
        return;
    }

    if (buffer->format != TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP) {
        LOGE("AugmentedRealityApp::yuv texture format is not supported by this app");
        return;
    }

    // The memory needs to be allocated after we get the first frame because we
    // need to know the size of the image.
    if (!is_yuv_texture_available_) {
        yuv_width_ = buffer->width;
        yuv_height_ = buffer->height;
        uv_buffer_offset_ = yuv_width_ * yuv_height_;
        yuv_size_ = yuv_width_ * yuv_height_ + yuv_width_ * yuv_height_ / 2;

        // Reserve and resize the buffer size for RGB and YUV data.
        yuv_buffer_.resize(yuv_size_);
        yuv_temp_buffer_.resize(yuv_size_);
        rgb_buffer_.resize(yuv_width_ * yuv_height_ * 3);

        AllocateTexture(yuv_drawable_->GetTextureId(), yuv_width_, yuv_height_);
        is_yuv_texture_available_ = true;
    }

    std::lock_guard<std::mutex> lock(yuv_buffer_mutex_);
    memcpy(&yuv_temp_buffer_[0], buffer->data, yuv_size_);

    ///
    cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());

    if (!stam.isCalibrated()) {
        Profiler profiler;
        profiler.startSampling();
        stam.initFromChessboard(frame, cv::Size(9, 6), 100);
        profiler.endSampling();
        profiler.print("initFromChessboard", -1);
    }
    ///

    memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
    swap_buffer_signal_ = true;
}
Here is the code of the method initFromChessBoard:
bool STAM::initFromChessboard(const cv::Mat& image, const cv::Size& chessBoardSize, int squareSize)
{
    cv::Mat rvec = cv::Mat(cv::Size(3, 1), CV_64F);
    cv::Mat tvec = cv::Mat(cv::Size(3, 1), CV_64F);

    std::vector<cv::Point2d> imagePoints, imageBoardPoints;
    std::vector<cv::Point3d> boardPoints;

    for (int i = 0; i < chessBoardSize.height; i++)
    {
        for (int j = 0; j < chessBoardSize.width; j++)
        {
            boardPoints.push_back(cv::Point3d(j*squareSize, i*squareSize, 0.0));
        }
    }

    // getting only the Y channel (many of the functions like face detect and align only need the grayscale image)
    cv::Mat gray(image.rows, image.cols, CV_8UC1);
    gray.data = image.data;

    bool found = findChessboardCorners(gray, chessBoardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);

#ifdef WINDOWS_VS
    printf("Number of chessboard points: %d\n", imagePoints.size());
#elif ANDROID
    LOGE("Number of chessboard points: %d", imagePoints.size());
#endif

    for (int i = 0; i < imagePoints.size(); i++) {
        cv::circle(image, imagePoints[i], 6, cv::Scalar(149, 43, 0), -1);
    }
}
Has anyone had the same problem after processing the YUV buffer and rendering it to the texture?
I ran a test on another device (not the Tango) using the camera2 API, and there the on-screen rendering rate matches the rate of the OpenCV processing itself.
I appreciate any help.
I had a similar problem. My app slowed down after using the copied YUV buffer and doing some image processing with OpenCV. I would recommend using the tango_support library to access the YUV image buffer, by doing the following:
In your config function:
int AugmentedRealityApp::TangoSetupConfig() {
    TangoSupport_createImageBufferManager(TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP, 1280, 720, &yuv_manager_);
}
In your callback function:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
    TangoSupport_updateImageBuffer(yuv_manager_, buffer);
}
In your render thread:
void AugmentedRealityApp::Render() {
    TangoImageBuffer* yuv = new TangoImageBuffer();
    TangoSupport_getLatestImageBuffer(yuv_manager_, &yuv);

    cv::Mat yuv_frame, rgb_img, gray_img;
    yuv_frame.create(720*3/2, 1280, CV_8UC1);
    memcpy(yuv_frame.data, yuv->data, 720*3/2*1280);   // yuv image
    cv::cvtColor(yuv_frame, rgb_img, CV_YUV2RGB_NV21); // rgb image
    cvtColor(rgb_img, gray_img, CV_RGB2GRAY);          // gray image
}
You can share the yuv_manager_ with other objects/threads, so you can access the YUV image buffer wherever you want.
On API Level 19+ devices, we have getByteCount() and getAllocationByteCount(), each of which returns a size of the Bitmap in bytes. The latter takes into account the fact that a Bitmap can actually represent a smaller image than its byte count (e.g., the Bitmap originally held a larger image, but then was used with BitmapFactory.Options and inBitmap to hold a smaller image).
In most Android IPC scenarios, particularly those involving Parcelable, we have a 1MB "binder transaction limit".
For the purposes of determining whether a given Bitmap is small enough for IPC, do we use getByteCount() or getAllocationByteCount()?
My gut instinct says that we use getByteCount(), as that should be the number of bytes the current image in the Bitmap is taking up, but I was hoping somebody had a more authoritative answer.
The size of the image data written to the parcel is getByteCount() plus the size of the Bitmap's color table, if it has one. There are also approximately 48 bytes of Bitmap attributes written to the parcel. The following code analysis and tests provide the basis for these statements.
The native function to write a Bitmap to a Parcel begins at line 620 of this file. The function is included here with explanation added:
static jboolean Bitmap_writeToParcel(JNIEnv* env, jobject,
                                     jlong bitmapHandle,
                                     jboolean isMutable, jint density,
                                     jobject parcel) {
    const SkBitmap* bitmap = reinterpret_cast<SkBitmap*>(bitmapHandle);

    if (parcel == NULL) {
        SkDebugf("------- writeToParcel null parcel\n");
        return JNI_FALSE;
    }

    android::Parcel* p = android::parcelForJavaObject(env, parcel);
The following seven ints are the first data written to the parcel. In Test #2, described below, these values are read from a sample parcel to confirm the size of data written for the Bitmap.
    p->writeInt32(isMutable);
    p->writeInt32(bitmap->colorType());
    p->writeInt32(bitmap->alphaType());
    p->writeInt32(bitmap->width());
    p->writeInt32(bitmap->height());
    p->writeInt32(bitmap->rowBytes());
    p->writeInt32(density);
If the bitmap has a color map, it is written to the parcel. A precise determination of the parcel size of a bitmap must account for this as well.
    if (bitmap->colorType() == kIndex_8_SkColorType) {
        SkColorTable* ctable = bitmap->getColorTable();
        if (ctable != NULL) {
            int count = ctable->count();
            p->writeInt32(count);
            memcpy(p->writeInplace(count * sizeof(SkPMColor)),
                   ctable->lockColors(), count * sizeof(SkPMColor));
            ctable->unlockColors();
        } else {
            p->writeInt32(0); // indicate no ctable
        }
    }
Now we get to the core of the question: How much data is written for the bitmap image? The amount is determined by this call to bitmap->getSize(). That function is analyzed below. Note here that the value is stored in size, which is used in the code following to write the blob for the image data, and copy the data into the memory pointed to by the blob.
    size_t size = bitmap->getSize();
A variable size block of data is written to a parcel using a blob. If the block is less than 40K, it is written "in place" in the parcel. Blocks larger than 40K are written to shared memory using ashmem, and the attributes of the ashmem region are written in the parcel. The blob itself is just a small descriptor containing a few members that include a pointer to the block, its length, and a flag indicating if the block is in-place or in shared memory. The class definition for WritableBlob is at line 262 of this file. The definition of writeBlob() is at line 747 of this file. writeBlob() determines if the data block is small enough to be written in-place. If so, it expands the parcel buffer to make room. If not, an ashmem region is created and configured. In both cases, the members of the blob (pointer, size, flag) are defined for later use when the block is copied. Note that for both cases, size defines the size of the data that will be copied, either in-place or to shared memory. When writeBlob() completes, the target data buffer is defined in blob, and values are written to the parcel describing how the image data block is stored (in-place or shared memory) and, for shared memory, the attributes of the ashmem region.
    android::Parcel::WritableBlob blob;
    android::status_t status = p->writeBlob(size, &blob);
    if (status) {
        doThrowRE(env, "Could not write bitmap to parcel blob.");
        return JNI_FALSE;
    }
With the target buffer for the block now setup, the data can be copied using the pointer in the blob. Note that size defines the amount of data copied. Also note that there is only one size. The same value is used for in-place and shared memory targets.
    bitmap->lockPixels();
    const void* pSrc = bitmap->getPixels();
    if (pSrc == NULL) {
        memset(blob.data(), 0, size);
    } else {
        memcpy(blob.data(), pSrc, size);
    }
    bitmap->unlockPixels();

    blob.release();
    return JNI_TRUE;
}
That completes the analysis of Bitmap_writeToParcel. It is now clear that while small (<40K) images are written in-place and larger images are written to shared memory, the size of the data written is the same for both cases. The easiest and most direct way to see what that size is, is to create a test case using an image <40K, so that it will be written in-place. The size of the resulting parcel then reveals the size of the image data.
A second method for determining what size is requires an understanding of SkBitmap::getSize(). This is the function used in the code analyzed above to get the size of the image block.
SkBitmap::getSize() is defined at line 130 of this file. It is:
size_t getSize() const { return fHeight * fRowBytes; }
Two other functions in the same file relevant to this explanation are height(), defined at line 98:
int height() const { return fHeight; }
and rowBytes(), defined at line 101:
int rowBytes() const { return fRowBytes; }
We saw these functions used in Bitmap_writeToParcel when the attributes of a bitmap are written to a parcel:
p->writeInt32(bitmap->height());
p->writeInt32(bitmap->rowBytes());
With this understanding of these functions, we now see that by dumping the first few ints in a parcel, we can see the values of fHeight and fRowBytes, from which we can infer the value returned by getSize().
The second code snippet below does this and provides further confirmation that the size of the data written to the Parcel corresponds to the value returned by getByteCount().
Test #1
This test creates a bitmap smaller than 40K to produce in-place storage of the image data. The parcel data size is then examined to show that getByteCount() determines the size of the image data stored in the parcel.
The first few statements in the code below are to produce a Bitmap smaller than 40K.
The logcat output confirms that the size of the data written to the Parcel corresponds to the value returned by getByteCount().
byteCount=38400 allocatedByteCount=38400 parcelDataSize=38428
byteCount=7680 allocatedByteCount=38400 parcelDataSize=7708
Code that produced the output shown:
// Setup to get a mutable bitmap less than 40 Kbytes
String path = "someSmallImage.jpg";
Bitmap bm0 = BitmapFactory.decodeFile(path);
// Need it mutable to change height
Bitmap bm1 = bm0.copy(bm0.getConfig(), true);
// Chop it to get a size less than 40K
bm1.setHeight(bm1.getHeight() / 32);
// Now we have a BitMap with size < 40K for the test
Bitmap bm2 = bm1.copy(bm0.getConfig(), true);
// What's the parcel size?
Parcel p1 = Parcel.obtain();
bm2.writeToParcel(p1, 0);
// Expect byteCount and allocatedByteCount to be the same
Log.i("Demo", String.format("byteCount=%d allocatedByteCount=%d parcelDataSize=%d",
bm2.getByteCount(), bm2.getAllocationByteCount(), p1.dataSize()));
// Resize to make byteCount and allocatedByteCount different
bm2.setHeight(bm2.getHeight() / 4);
// What's the parcel size?
Parcel p2 = Parcel.obtain();
bm2.writeToParcel(p2, 0);
// Show that byteCount determines size of data written to parcel
Log.i("Demo", String.format("byteCount=%d allocatedByteCount=%d parcelDataSize=%d",
bm2.getByteCount(), bm2.getAllocationByteCount(), p2.dataSize()));
p1.recycle();
p2.recycle();
Test #2
This test stores a bitmap to a parcel, then dumps the first few ints to get the values from which image data size can be inferred.
The logcat output with comments added:
// Bitmap attributes
bc=12000000 abc=12000000 hgt=1500 wid=2000 rbyt=8000 dens=213
// Attributes after height change. byteCount changed, allocatedByteCount not.
bc=744000 abc=12000000 hgt=93 wid=2000 rbyt=8000 dens=213
// Dump of parcel data. Parcel data size is 48. Image too large for in-place.
pds=48 mut=1 ctyp=4 atyp=1 hgt=93 wid=2000 rbyt=8000 dens=213
// Show value of getSize() derived from parcel data. It equals getByteCount().
bitmap->getSize()= 744000 getByteCount()=744000
Code that produced this output:
String path = "someImage.jpg";
Bitmap bm0 = BitmapFactory.decodeFile(path);
// Need it mutable to change height
Bitmap bm = bm0.copy(bm0.getConfig(), true);
// For reference, and to provide confidence that the parcel data dump is
// correct, log the bitmap attributes.
Log.i("Demo", String.format("bc=%d abc=%d hgt=%d wid=%d rbyt=%d dens=%d",
bm.getByteCount(), bm.getAllocationByteCount(),
bm.getHeight(), bm.getWidth(), bm.getRowBytes(), bm.getDensity()));
// Change size
bm.setHeight(bm.getHeight() / 16);
Log.i("Demo", String.format("bc=%d abc=%d hgt=%d wid=%d rbyt=%d dens=%d",
bm.getByteCount(), bm.getAllocationByteCount(),
bm.getHeight(), bm.getWidth(), bm.getRowBytes(), bm.getDensity()));
// Get a parcel and write the bitmap to it.
Parcel p = Parcel.obtain();
bm.writeToParcel(p, 0);
// When the image is too large to be written in-place,
// the parcel data size will be ~48 bytes (when there is no color map).
int parcelSize = p.dataSize();
// What are the first few ints in the parcel?
p.setDataPosition(0);
int mutable = p.readInt(); //1
int colorType = p.readInt(); //2
int alphaType = p.readInt(); //3
int width = p.readInt(); //4
int height = p.readInt(); //5 bitmap->height()
int rowBytes = p.readInt(); //6 bitmap->rowBytes()
int density = p.readInt(); //7
Log.i("Demo", String.format("pds=%d mut=%d ctyp=%d atyp=%d hgt=%d wid=%d rbyt=%d dens=%d",
parcelSize, mutable, colorType, alphaType, height, width, rowBytes, density));
// From code analysis, we know that the value returned
// by SkBitmap::getSize() is the size of the image data written.
// We also know that the value of getSize() is height()*rowBytes().
// These are the values in ints 5 and 6.
int imageSize = height * rowBytes;
// Show that the size of image data stored is Bitmap.getByteCount()
Log.i("Demo", String.format("bitmap->getSize()= %d getByteCount()=%d", imageSize, bm.getByteCount()));
p.recycle();
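As a practical footnote, here is a rough helper of my own (not from any official API, and assuming the android.graphics.Bitmap import) that applies the conclusion above, i.e. parcel payload is roughly getByteCount() plus about 48 bytes of attributes, to the 1MB binder limit mentioned in the question:

// Rough estimate only: ignores color tables (indexed bitmaps) and the fact that the
// binder transaction budget is shared with everything else in the same transaction.
static boolean likelyFitsInBinderTransaction(Bitmap bm) {
    final long ATTRIBUTE_OVERHEAD = 48;    // approximate attribute ints written before the pixels
    final long BINDER_LIMIT = 1024 * 1024; // the nominal 1 MB binder transaction limit
    return (long) bm.getByteCount() + ATTRIBUTE_OVERHEAD < BINDER_LIMIT;
}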
I'm trying to use the NDK to do some image processing. I am NOT using OpenCV.
I am fairly new to Android, so I am doing this in steps. I started by writing a simple app that lets me capture video from the camera and display it on the screen. I have this done.
Then I tried to manipulate the camera data in native code. However, onPreviewFrame uses a byte array to capture frame information. This is my code -
public void onPreviewFrame(byte[] arg0, Camera arg1)
{
    if (imageFormat == ImageFormat.NV21)
    {
        if (!bProcessing)
        {
            FrameData = arg0;
            mHandler.post(callnative);
        }
    }
}
And the callnative runnable is like so -
private Runnable callnative = new Runnable()
{
    public void run()
    {
        bProcessing = true;
        String returnNative = callTorch(MainActivity.assetManager, PreviewSizeWidth, PreviewSizeHeight, FrameData, pixels);
        bitmap.setPixels(pixels, 0, PreviewSizeWidth, 0, 0, PreviewSizeWidth, PreviewSizeHeight);
        MycameraClass.setImageBitmap(bitmap);
        bProcessing = false;
    }
};
The problem is, I need to use FrameData in native code as the float data type. However, it arrives as a byte array. I wanted to know how the frame data is stored. Is this a two-dimensional array of bytes? So the camera returns an 8-bit image and stores this as 640x480 bytes? If that is so, in what form does C interpret this byte data type? Can I simply convert it to float? I have this in native code -
jbyte *nativeData;
nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
__android_log_print(ANDROID_LOG_INFO, "Nativeprint", "nativedata is: %d",(int)nativeData[0]);
However, this prints -22, which leads me to believe that I am trying to print out a pointer. I am not sure why that is the case, though.
I would appreciate any help on this.
You will not be able to get a float data type directly from the pixel buffer. The data comes as bytes, which in C is the char data type.
So this:
jbyte *nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
is the same as this:
char *nativeData = (char *)((env)->GetByteArrayElements(NV21FrameData, NULL));
The data is stored as a one-dimensional array, so you retrieve each pixel by computing its index from the width, the height, and the x and y coordinates.
Also remember that the preview camera frames from your sample are in YUV420sp (NV21); this means you will need to convert the data from YUV to RGB before you can set it in a bitmap.
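To make the layout concrete, here is a small sketch of my own (in Java, since that is where onPreviewFrame hands you the array; the indexing is identical in native code) showing where a pixel's luminance lives in the NV21 buffer and how to turn it into a float:

// 'data' is the byte[] passed to onPreviewFrame; width/height are the preview size.
static float luminanceAt(byte[] data, int width, int height, int x, int y) {
    // Y plane: the first width*height bytes, one byte per pixel, row-major.
    int luma = data[y * width + x] & 0xFF; // mask away Java's sign extension (e.g. -22 becomes 234)
    return luma / 255.0f;                  // normalized luminance as a float
}

// The interleaved V/U (chroma) plane starts right after the Y plane,
// with one V,U pair shared by each 2x2 block of pixels:
//   int vuIndex = width * height + (y / 2) * width + (x & ~1);
//   int v = data[vuIndex] & 0xFF, u = data[vuIndex + 1] & 0xFF;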
I am trying to access the raw data of a Bitmap in ARGB_8888 format on Android, using the copyPixelsToBuffer and copyPixelsFromBuffer methods. However, invocation of those calls seems to always apply the alpha channel to the rgb channels. I need the raw data in a byte[] or similar (to pass through JNI; yes, I know about bitmap.h in Android 2.2, cannot use that).
Here is a sample:
// Create 1x1 Bitmap with alpha channel, 8 bits per channel
Bitmap one = Bitmap.createBitmap(1,1,Bitmap.Config.ARGB_8888);
one.setPixel(0,0,0xef234567);
Log.v("?","hasAlpha() = "+Boolean.toString(one.hasAlpha()));
Log.v("?","pixel before = "+Integer.toHexString(one.getPixel(0,0)));
// Copy Bitmap to buffer
byte[] store = new byte[4];
ByteBuffer buffer = ByteBuffer.wrap(store);
one.copyPixelsToBuffer(buffer);
// Change value of the pixel
int value=buffer.getInt(0);
Log.v("?", "value before = "+Integer.toHexString(value));
value = (value >> 8) | 0xffffff00;
buffer.putInt(0, value);
value=buffer.getInt(0);
Log.v("?", "value after = "+Integer.toHexString(value));
// Copy buffer back to Bitmap
buffer.position(0);
one.copyPixelsFromBuffer(buffer);
Log.v("?","pixel after = "+Integer.toHexString(one.getPixel(0,0)));
The log then shows
hasAlpha() = true
pixel before = ef234567
value before = 214161ef
value after = ffffff61
pixel after = 619e9e9e
I understand that the order of the ARGB channels is different; that's fine. But I don't want the alpha channel to be applied upon every copy (which is what it seems to be doing).
Is this how copyPixelsToBuffer and copyPixelsFromBuffer are supposed to work? Is there any way to get the raw data in a byte[]?
Added in response to answer below:
Putting in buffer.order(ByteOrder.nativeOrder()); before the copyPixelsToBuffer does change the result, but still not in the way I want it:
pixel before = ef234567
value before = ef614121
value after = ffffff41
pixel after = ff41ffff
Seems to suffer from essentially the same problem (alpha being applied upon each copyPixelsFrom/ToBuffer).
One way to access the data in a Bitmap is to use the getPixels() method. Below you can find an example I used to get a grayscale image from ARGB data and then convert it back from a byte array to a Bitmap (of course, if you need RGB you reserve 3x the bytes and save them all...):
/*Free to use licence by Sami Varjo (but nice if you retain this line)*/
public final class BitmapConverter {

    private BitmapConverter(){};

    /**
     * Get grayscale data from argb image to byte array
     */
    public static byte[] ARGB2Gray(Bitmap img)
    {
        int width = img.getWidth();
        int height = img.getHeight();

        int[] pixels = new int[height*width];
        byte grayIm[] = new byte[height*width];

        img.getPixels(pixels, 0, width, 0, 0, width, height);

        int pixel = 0;
        int count = width*height;
        while (count-- > 0) {
            int inVal = pixels[pixel];
            // Get the pixel channel values from int
            double r = (double)( (inVal & 0x00ff0000) >> 16 );
            double g = (double)( (inVal & 0x0000ff00) >> 8 );
            double b = (double)(  inVal & 0x000000ff );
            grayIm[pixel++] = (byte)( 0.2989*r + 0.5870*g + 0.1140*b );
        }

        return grayIm;
    }

    /**
     * Create a gray scale bitmap from byte array
     */
    public static Bitmap gray2ARGB(byte[] data, int width, int height)
    {
        int count = height*width;
        int[] outPix = new int[count];

        int pixel = 0;
        while (count-- > 0) {
            int val = data[pixel] & 0xff; // convert byte to unsigned
            outPix[pixel++] = 0xff000000 | val << 16 | val << 8 | val;
        }

        Bitmap out = Bitmap.createBitmap(outPix, 0, width, width, height, Bitmap.Config.ARGB_8888);
        return out;
    }
}
My guess is that this might have to do with the byte order of the ByteBuffer you are using. ByteBuffer uses big endian by default.
Set endianess on the buffer with
buffer.order(ByteOrder.nativeOrder());
See if it helps.
Moreover, copyPixelsFromBuffer/copyPixelsToBuffer do not change the pixel data in any way. They are copied raw.
I realize this is very stale and probably won't help you now, but I came across this recently in trying to get copyPixelsFromBuffer to work in my app. (Thank you for asking this question, btw! You saved me tons of time in debugging.) I'm adding this answer in the hopes it helps others like me going forward...
Although I haven't used this yet to ensure that it works, it looks like, as of API Level 19, we'll finally have a way to specify not to "apply the alpha" (a.k.a. premultiply) within Bitmap. They're adding a setPremultiplied(boolean) method that should help in situations like this going forward by allowing us to specify false.
I hope this helps!
This is an old question, but I ran into the same issue and just figured out that the bitmap bytes are pre-multiplied. You can set the bitmap (as of API 19) not to pre-multiply the buffer, but the API makes no guarantee.
From the docs:
public final void setPremultiplied(boolean premultiplied)
Sets whether the bitmap should treat its data as pre-multiplied.
Bitmaps are always treated as pre-multiplied by the view system and Canvas for performance reasons. Storing un-pre-multiplied data in a Bitmap (through setPixel, setPixels, or BitmapFactory.Options.inPremultiplied) can lead to incorrect blending if drawn by the framework.
This method will not affect the behaviour of a bitmap without an alpha channel, or if hasAlpha() returns false.
Calling createBitmap or createScaledBitmap with a source Bitmap whose colors are not pre-multiplied may result in a RuntimeException, since those functions require drawing the source, which is not supported for un-pre-multiplied Bitmaps.
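For completeness, here is a minimal sketch of what the last two answers suggest (untested, API 19+ only, subject to the caveats quoted from the docs above; the usual android.graphics and java.nio imports are assumed). It mirrors the question's 1x1 ARGB_8888 example:

Bitmap one = Bitmap.createBitmap(1, 1, Bitmap.Config.ARGB_8888);
one.setPremultiplied(false);      // treat the pixel data as raw, un-premultiplied values
one.setPixel(0, 0, 0xef234567);

ByteBuffer buffer = ByteBuffer.allocate(one.getByteCount()).order(ByteOrder.nativeOrder());
one.copyPixelsToBuffer(buffer);   // channel values should come out without alpha applied
buffer.rewind();
one.copyPixelsFromBuffer(buffer); // and go back in unchanged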