As the title says: does anyone know what the RGBX_8888 pixel format is, and what the difference from RGBA_8888 is? Does RGBA_8888 offer an alpha channel while RGBX_8888 does not?
The Android documentation unfortunately does not give much information on this.
Thanks.
RGBX means that the pixel format still has an alpha channel, but it is ignored and always set to 255.
Some reference:
BlackBerry PixelFormat
(It is not Android, but I guess the naming conventions stay the same across platforms.)
The RGBX 32 bit RGB format is stored in memory as 8 red bits, 8 green bits, 8 blue bits, and 8 ignored bits.
Android 4.1.2 source code (texture.cpp), line 80
There is a function called PointSample, which samples based on a template format and the passed parameters. You can see that for pixel format RGBX_8888 the alpha channel is ignored and set to 255, while for RGBA_8888 it is sampled normally.
if (GGL_PIXEL_FORMAT_RGBA_8888 == format)
    *sample = *(data + index);
else if (GGL_PIXEL_FORMAT_RGBX_8888 == format)
{
    *sample = *(data + index);
    *sample |= 0xff000000;
}
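In Java terms the difference can be sketched like this (a minimal illustration written for this answer, not code from the Android sources):
// RGBX_8888: the X byte is ignored on read and treated as fully opaque
static int sampleRgbx8888(int packedPixel) {
    return packedPixel | 0xFF000000; // force the alpha byte to 255
}
// RGBA_8888: the alpha byte is taken exactly as stored
static int sampleRgba8888(int packedPixel) {
    return packedPixel;
}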
I am trying to do hardware encoding (avc) of NV12 stream using Android MediaCodec API.
When using OMX.qcom.video.encoder.avc, the resolutions 1280x720 and 640x480 work fine, while the others (i.e. 640x360, 320x240, 800x480) produce output where the chroma component appears shifted (please see the snapshot).
I have double-checked that the input image is correct by saving it to a JPEG file.
This problem only occurs on Qualcomm devices (i.e. the Samsung Galaxy S4).
Does anyone have this working properly? Is any additional setup necessary, or are there quirks to be aware of?
The decoder (MediaCodec) has its own MediaFormat, which can be retrieved using getOutputFormat. The returned instance can be printed to the log, and there you can see some useful information. In your case, for example, a value like "slice-height" could be useful. I suspect that it is equal to the height for 1280x720 and 640x480 and differs for the other resolutions. You should probably use this value to compute the chroma offset.
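A minimal sketch of that idea (codec, width and height are placeholders; the string keys are the ones commonly reported on Qualcomm devices, so treat them as an assumption):
// Derive the chroma offset from the geometry the codec reports
MediaFormat format = codec.getOutputFormat();
int stride = format.containsKey("stride")
        ? format.getInteger("stride") : width;
int sliceHeight = format.containsKey("slice-height")
        ? format.getInteger("slice-height") : height;
// For semi-planar YUV the chroma plane starts after the padded luma plane
int chromaOffset = stride * sliceHeight;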
Yep, OMX.qcom.video.encoder.avc does that, but not on all devices/Android versions. On my Nexus 4 with Android 4.3 the encoder works fine, but not on my S3 (running 4.1).
The solution for an S3 running 4.1 with OMX.qcom.video.encoder.avc (it seems that some S3s have another encoder) is to add 1024 bytes just before the chroma plane.
// The encoder may need some padding before the chroma plane
// (buffer holds the YV12 input; tmp is a byte[] of the same size)
int padding = 1024;
if ((mWidth == 640 && mHeight == 480) || (mWidth == 1280 && mHeight == 720)) {
    padding = 0;
}
// Copy the luma plane, then interleave the U and V channels
System.arraycopy(buffer, 0, tmp, 0, mYSize); // Y
for (i = 0; i < mUVSize; i++) {
    tmp[mYSize + i * 2 + padding]     = buffer[mYSize + i + mUVSize]; // Cb (U)
    tmp[mYSize + i * 2 + 1 + padding] = buffer[mYSize + i];           // Cr (V)
}
return tmp;
The camera is using YV12 and the encoder COLOR_FormatYUV420SemiPlanar.
Your snapshot shows the same kind of artefacts I had; you may need a similar hack for some resolutions, maybe with another padding length.
You should also avoid resolutions that are not a multiple of 16, apparently even on 4.3 (http://code.google.com/p/android/issues/detail?id=37769)!
I am new to the Android NDK. I have started learning through the image-processing example by ruckus and the IBM blog. I am not getting a few of the lines below. Can anyone please help me understand this code snippet?
static void brightness(AndroidBitmapInfo* info, void* pixels, float brightnessValue) {
    int xx, yy, red, green, blue;
    uint32_t* line;
    for (yy = 0; yy < info->height; yy++) {
        line = (uint32_t*) pixels;
        for (xx = 0; xx < info->width; xx++) {
            // extract the RGB values from the pixel
            red   = (int) ((line[xx] & 0x00FF0000) >> 16);
            green = (int) ((line[xx] & 0x0000FF00) >> 8);
            blue  = (int)  (line[xx] & 0x000000FF);
            // manipulate each value
            red   = rgb_clamp((int) (red * brightnessValue));
            green = rgb_clamp((int) (green * brightnessValue));
            blue  = rgb_clamp((int) (blue * brightnessValue));
            // set the new pixel back in
            line[xx] =
                ((red << 16) & 0x00FF0000) |
                ((green << 8) & 0x0000FF00) |
                (blue & 0x000000FF);
        }
        pixels = (char*) pixels + info->stride;
    }
}
static int rgb_clamp(int value) {
    if (value > 255) {
        return 255;
    }
    if (value < 0) {
        return 0;
    }
    return value;
}
How are the RGB values extracted, and what does rgb_clamp do? Why are we setting the new pixel back, and how does pixels = (char*)pixels + info->stride; work?
I am not a C/C++ guy and do not have much knowledge of image processing.
Thanks
At first let's talk about one pixel. As far as I can see, it is a composition of at least 3 channels: r, g and b, which are all stored in one uint32_t value in the format 0x00RRGGBB (32 bits / 4 channels = 8 bits per channel, and thus a value range of 0..255).
So, to get the separate r, g and b values you need to mask them out, which is done in the three lines below //extract the RGB values. Take the red component as an example: with the mask 0x00FF0000 and the & operator, you set every bit to 0 except the bits of the red channel. But if you just mask them out, with 0x00RRGGBB & 0x00FF0000 = 0x00RR0000, you would get a very big number. To get a value between 0 and 255 you also have to shift the bits to the right, and that is what is done with the >> operator. So, continuing the example: after applying the mask you get 0x00RR0000, and shifting this 16 bits to the right (>> 16) gives you 0x000000RR, which is a number between 0 and 255. The same happens with the green channel, but with an 8-bit right shift; and since the blue value already sits at the "right" bit position, there is no need to shift it.
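A worked example with a made-up pixel value:
int pixel = 0x00123456;                 // red 0x12, green 0x34, blue 0x56
int red   = (pixel & 0x00FF0000) >> 16; // 0x00120000 >> 16 -> 0x12 (18)
int green = (pixel & 0x0000FF00) >> 8;  // 0x00003400 >> 8  -> 0x34 (52)
int blue  =  pixel & 0x000000FF;        //                     0x56 (86)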
The second question, what rgb_clamp does, is easy to answer: it ensures that your r, g or b value, multiplied by the brightness factor, never leaves the value range 0..255.
After the multiplication with the brightness factor, the new values are written back into memory. This happens in the reverse order of the extraction described above: this time the values are shifted leftwards, and the mask removes any bits we don't want.
After one line of your image has been processed, info->stride is added to the pointer: for optimization purposes the memory is probably aligned to 32-byte boundaries, so a single line can be longer than just the image width, and adding the stride (rather than the width) skips the "rest" of the bytes as well.
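To make the stride arithmetic concrete, the same walk can be written with plain indexing (a sketch assuming 4-byte pixels, an int[] buffer named data, and a stride given in bytes):
int pixelsPerRow = strideBytes / 4;    // the stride may exceed width * 4
int argb = data[y * pixelsPerRow + x]; // rows advance by stride, not by width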
First and foremost I suggest you read the C book here: http://publications.gbdirect.co.uk/c_book/
Next I'll go through your questions.
How are the RGB values extracted
line is set to point to the pixels parameter:
line = (uint32_t*)pixels;
That is, pixels is an array of 32-bit unsigned integers.
Then, for the height and width of the bitmap, the RGB values are extracted using a combination of bitwise AND (&) and right bit shifts (>>).
Let's see how you get red:
red = (int) ((line[xx] & 0x00FF0000) >> 16);
Here we take the current line, then AND it with the mask 0x00FF0000; this keeps bits 23-16 of the value. Using the RGB code #123456 as an example, we are left with 0x00120000 in the red variable. But the value is still at the bit 23-16 position, so we shift right by 16 bits to bring it down to 0x00000012.
We do the same for the green and blue values, adjusting the AND mask and the number of bits to shift right.
More information on binary arithmetic can be found here: http://publications.gbdirect.co.uk/c_book/chapter2/expressions_and_arithmetic.html
What does rgb_clamp do
This function simply ensures that the red, green and blue values stay within the range 0 to 255.
If the parameter to rgb_clamp is -20, it will return 0, which will then be used to set the RGB value. If the parameter to rgb_clamp is 270, it will return 255.
The RGB value of each colour must not exceed 255 or fall below 0; in this example 255 is the brightest and 0 the darkest value.
Why are we setting pixel back
It appears we are changing the brightness of the pixel and setting the value back, ready to be displayed.
How does pixels = (char*)pixels + info->stride; work?
Without knowing the structure of the info variable of type AndroidBitmapInfo, I would guess that info->stride refers to the width of one bitmap row in bytes, so that line becomes the next row of the image on the next loop iteration.
Hope that helps
In order to minimize the memory usage of bitmaps while still maximizing their quality, I would like to ask a simple question:
Is there a way for me to check whether a given image file (a .png file) has transparency using the API, without checking every pixel in it?
If the image doesn't have any transparency, it would be best to use a different bitmap format that uses only the RGB values.
The problem is that Android doesn't have a format for just the 3 colors, only RGB_565, which they say degrades the quality of the image and should have dithering enabled.
Is there also a way to read only the RGB values and be able to show them?
For me, bitmap.hasAlpha() works fine to check first whether the bitmap has alpha values. Afterwards, I would suggest running through the pixels and creating a second bitmap with no alpha.
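A minimal sketch of that approach (a hypothetical helper; note that RGB_565 drops the alpha channel entirely):
// Keep the 32-bit config only when the bitmap actually uses alpha
static Bitmap withoutUnusedAlpha(Bitmap src) {
    if (src.hasAlpha()) {
        return src; // alpha is in use, keep the original config
    }
    return src.copy(Bitmap.Config.RGB_565, /* isMutable = */ false);
}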
Let's start a bit off-topic
the problem is that Android also doesn't have a format for just the 3 colors, only RGB_565, which they say degrades the quality of the image and should have dithering enabled.
The reason for that problem is not really Android-specific. It's about performance while drawing images. You get the best performance if the pixel data fits exactly into one 32-bit memory cell.
So the most obvious good pixel format is ARGB_8888, which uses exactly 32 bits (24 for the color, 8 for alpha). While drawing, you don't need to do anything but loop over the image data, and each cell you read can be drawn directly. The only downside is the memory required to work with such images, both while they just sit in memory and while displaying them, since the graphics hardware has to transfer more data.
The second-best option is a format where several pixels fit into one cell. With 2 pixels in 32 bits you have 16 bits per pixel, and one of the formats using 16 bits is the 565 format: 5 bits red, 6 bits green, 5 bits blue. While drawing you can still work on the memory cells separately; all you have to do is split one cell into its parts. Due to the smaller memory footprint of the images, drawing can sometimes even be faster than with 32-bit colors. Since memory was a much bigger problem in the early days of Android, they chose this format as the default.
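For illustration, packing an 8-bit-per-channel colour into that 565 layout looks roughly like this (r8, g8 and b8 being values in 0..255):
int r5 = r8 >> 3;                         // keep the top 5 bits of red
int g6 = g8 >> 2;                         // keep the top 6 bits of green
int b5 = b8 >> 3;                         // keep the top 5 bits of blue
int rgb565 = (r5 << 11) | (g6 << 5) | b5; // two such pixels fit in 32 bits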
The worst category of formats is the one where pixels don't fit into those cells. Taking just the 3 colors gives 24 bits, and those need to be distributed over 2 cells in 3 out of 4 cases: for example, the second pixel would use the remaining 8 bits of the first cell and the first 16 bits of the next cell. The extra work required to handle 24-bit colors is so big that the format is simply not used. And when drawing images you usually have alpha at some point anyway; if not, you simply use 32 bits and ignore the alpha bits.
So the 16-bit approach looks ugly and the 24-bit approach does not make sense. And since the memory limitations on Android are not as tight as they used to be and the hardware has gotten faster, Android has switched its default to 32 bit (explained in even more detail at http://www.curious-creature.org/2010/12/08/bitmap-quality-banding-and-dithering/).
Back to your real question
is there a way for me to check if a given image file (png file) has transparency using the API, without checking every pixel in it?
I don't know. But JPEG images don't support alpha, and PNG images usually have it. You could simply abuse the file extension and get it right in most cases.
But I would suggest you don't bother with all that: simply use ARGB_8888 and apply the image-loading techniques detailed in the Android Training documentation on Displaying Bitmaps Efficiently.
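The core of those techniques is subsampling while decoding. Roughly, following the pattern from that guide (res, resId and targetWidth are placeholders):
// Decode the bounds first, then decode a subsampled bitmap
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inJustDecodeBounds = true;  // read dimensions only, no pixel allocation
BitmapFactory.decodeResource(res, resId, opts);
opts.inSampleSize = Math.max(1, opts.outWidth / targetWidth);
opts.inJustDecodeBounds = false;
Bitmap scaled = BitmapFactory.decodeResource(res, resId, opts);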
The reason people run into memory problems is usually either that they keep far more images in memory than they currently display, or that they use giant images that can't be displayed on the small screen of a phone anyway. And in my opinion it makes more sense to add good memory management than to complicate your code just to downgrade the image quality.
There is a way to check if a PNG file has transparency, or at least if it supports it:
public final static int COLOR_GREY = 0;
public final static int COLOR_TRUE = 2;
public final static int COLOR_INDEX = 3;
public final static int COLOR_GREY_ALPHA = 4;
public final static int COLOR_TRUE_ALPHA = 6;
private final static int DECODE_BUFFER_SIZE = 16 * 1024;
private final static int HEADER_DECODE_BUFFER_SIZE = 1024;
/** Given an InputStream of a PNG file, returns true iff its header says it has transparency. */
private static boolean isPngInputStreamContainTransparency(final InputStream pngInputStream) {
    try {
        // skip: png signature, header chunk declaration, width, height, bitDepth:
        pngInputStream.skip(12 + 4 + 4 + 4 + 1);
        final byte colorType = (byte) pngInputStream.read();
        switch (colorType) {
            case COLOR_GREY_ALPHA:
            case COLOR_TRUE_ALPHA:
                return true;
            case COLOR_INDEX:
            case COLOR_GREY:
            case COLOR_TRUE:
                return false;
        }
        return true;
    } catch (final Exception e) {
        // ignored: fall through and report no transparency
    }
    return false;
}
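Hypothetical usage, e.g. with an asset (the stream is consumed by the check, so reopen it before the actual decode; the path is a placeholder):
try (InputStream in = context.getAssets().open("images/example.png")) {
    boolean declaresAlpha = isPngInputStreamContainTransparency(in);
    Log.d("PngCheck", "PNG declares an alpha-capable color type: " + declaresAlpha);
} catch (IOException e) {
    Log.w("PngCheck", "could not read asset", e);
}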
Other than that, I don't know if such a thing is possible.
I've found the following links, which could be helpful for checking whether a PNG file has transparency. Sadly, it's a solution only for PNG files; the rest of the formats (like WebP, BMP, ...) need a different parser.
links:
http://www.java2s.com/Code/Java/2D-Graphics-GUI/PNGDecoder.htm
http://hg.l33tlabs.org/twl/file/tip/src/de/matthiasmann/twl/utils/PNGDecoder.java
http://www.java-gaming.org/index.php/topic,24202
I'm confused about PixelFormat on Android.
My device is Motorola Defy.
I have two questions:
On Android 2.3, getWindowManager().getDefaultDisplay().getPixelFormat() returns 4, which stands for RGB_565. As far as I know my device has 16M colors, which means 3 (or 4, with an alpha channel) bytes per pixel:
2^(8*3) = 2^24 = 16M
But the RGB_565 format has 2 bytes (16 bits) per pixel, which means 65K colors:
2^(8*2) = 2^16 = 65K
So why doesn't getPixelFormat() return a format with 3 (or 4, like RGBA) bytes per pixel? Is it a display driver problem or something? Can I set the PixelFormat to RGBA_8888 (or an analogue)?
On Android 4.1 (custom ROM), getPixelFormat() returns 5, but this value is undocumented. What does it stand for? Actually, in this situation the effect is the same as with constant 4. From this discussion I found that 5 stands for RGBA_8888 (but there is no proof for that statement). So how can I figure out the real format of the device's screen? I also found one Chinese device on Android 2.2 that also reports PixelFormat 5 but whose real format is 4 (as on my Motorola).
I have googled these questions and found nothing. The only thing I found is that the Nexus 7 also reports format 5.
Update:
I found the method getWindow().setFormat(), but it does not actually change the main pixel format.
I'll just add my two cents to this discussion, though I should admit in advance that I could not find conclusive answers to all your questions.
So why doesn't getPixelFormat() return a format with 3 (or 4, like RGBA) bytes per pixel? Is it a display driver problem or something? Can I set the PixelFormat to RGBA_8888 (or an analogue)?
I'm a little puzzled about what exactly you're asking here. The return value of getPixelFormat() is just an integer that provides a way of identifying the active pixel format; it is not meant to represent data packed into a number (as with MeasureSpec, for example). Unfortunately, I do not have an explanation for why a different value is returned than you expected. My best guess is that it's either an OS decision, as there does not seem to be a limitation from the hardware point of view, or alternatively that the constants defined in the native implementation do not match up with the ones in Java. The fact that you're getting back a 4 as pixel format would then not necessarily mean that it's really RGB_565, if Motorola messed up the definitions.
On a side note: I've actually come across misaligned constant definitions before in Android, although I can't currently recall where exactly...
Just to confirm, it may be worth printing out the pixel format details at runtime. If there is indeed a native constant that reuses a Java PixelFormat value without matching it, you could possibly reveal the 'real' format this way. Use the getPixelFormatInfo(int format, PixelFormat info) method, which simply delegates retrieval of the actual values to the native implementation.
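For example, from an Activity (a quick diagnostic sketch):
// Print what the platform reports for the active format
int fmt = getWindowManager().getDefaultDisplay().getPixelFormat();
PixelFormat info = new PixelFormat();
PixelFormat.getPixelFormatInfo(fmt, info);
Log.d("PixelFormat", "id=" + fmt
        + " bitsPerPixel=" + info.bitsPerPixel
        + " bytesPerPixel=" + info.bytesPerPixel);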
On Android 4.1 (custom ROM), getPixelFormat() returns 5, but this value is undocumented. What does it stand for?
As mentioned earlier, sometimes constants defined in native code do not match up with the ones in Java, or aren't defined there at all. This is probably such a case. You'll have to do some digging to find out what it represents, but it's fairly straightforward:
/**
 * pixel format definitions
 */
enum {
    HAL_PIXEL_FORMAT_RGBA_8888 = 1,
    HAL_PIXEL_FORMAT_RGBX_8888 = 2,
    HAL_PIXEL_FORMAT_RGB_888 = 3,
    HAL_PIXEL_FORMAT_RGB_565 = 4,
    HAL_PIXEL_FORMAT_BGRA_8888 = 5,
    HAL_PIXEL_FORMAT_RGBA_5551 = 6,
    HAL_PIXEL_FORMAT_RGBA_4444 = 7,
    /* 0x8 - 0xF range unavailable */
    HAL_PIXEL_FORMAT_YCbCr_422_SP = 0x10,       // NV16
    HAL_PIXEL_FORMAT_YCrCb_420_SP = 0x11,       // NV21 (_adreno)
    HAL_PIXEL_FORMAT_YCbCr_422_P = 0x12,        // IYUV
    HAL_PIXEL_FORMAT_YCbCr_420_P = 0x13,        // YUV9
    HAL_PIXEL_FORMAT_YCbCr_422_I = 0x14,        // YUY2 (_adreno)
    /* 0x15 reserved */
    HAL_PIXEL_FORMAT_CbYCrY_422_I = 0x16,       // UYVY (_adreno)
    /* 0x17 reserved */
    /* 0x18 - 0x1F range unavailable */
    HAL_PIXEL_FORMAT_YCbCr_420_SP_TILED = 0x20, // NV12_adreno_tiled
    HAL_PIXEL_FORMAT_YCbCr_420_SP = 0x21,       // NV12
    HAL_PIXEL_FORMAT_YCrCb_420_SP_TILED = 0x22, // NV21_adreno_tiled
    HAL_PIXEL_FORMAT_YCrCb_422_SP = 0x23,       // NV61
    HAL_PIXEL_FORMAT_YCrCb_422_P = 0x24,        // YV12 (_adreno)
};
Source: hardware.h (lines 121-148)
If you compare these values with the ones defined in PixelFormat.java, you'll find they match up quite nicely (as they should). It also reveals the meaning of the mysterious 5, which is BGRA_8888: a variant of RGBA_8888.
By the way, you may want to try determining the pixel format details for this integer using the aforementioned getPixelFormatInfo(...) method, passing in 5 as the identifier. It will be interesting to see what gets returned; I'd expect values matching the BGRA_8888 definition, and hence similar to those given in the linked discussion on the Motorola board.
According to this thread on the motodev forums, the return value 5 corresponds to RGBA_8888. The thread states that the documentation for PixelFormat is incomplete and outdated, and it links to a bug that was filed about this. However, the link to that bug now returns a 404.
Additionally, I could not find anything in the PixelFormat source code (4.1) that supports that claim, since there RGBA_8888 is assigned the value 1.
My guess is that this value is specific to Motorola and some other devices, as I am seeing the same output on my Nexus 7 and Galaxy Nexus.
EDIT: I emailed a Google employee about this, and he told me that 5 corresponded to BGRA_8888, as indicated in MH's answer and the Motorola forum thread I linked to earlier. He recommended that I file a bug for the documentation problem, which I have done. Please star the bug report so that action is taken sooner rather than later.
RGBA_8888 corresponds to 1, as can be seen in the annex below.
If you look at the code related to mPixelFormat, you find the following.
// Following fields are initialized from native code
private int mPixelFormat;
That means that, for some reason, your device is being treated as RGB_565 due to an OS decision rather than hardware capabilities.
Actually, that makes me feel curious.
Interestingly enough, the specifications of the Galaxy Nexus and the Nexus 7 don't seem to have much in common: GN, N7.
public static final int RGBA_8888 = 1;
public static final int RGBX_8888 = 2;
public static final int RGB_888 = 3;
public static final int RGB_565 = 4;
@Deprecated
public static final int RGBA_5551 = 6;
@Deprecated
public static final int RGBA_4444 = 7;
public static final int A_8 = 8;
public static final int L_8 = 9;
@Deprecated
public static final int LA_88 = 0xA;
@Deprecated
public static final int RGB_332 = 0xB;
I want to load a .png file via the AssetManager provided by the Android SDK:
AssetManager manager;
/........./
BitmapFactory.decodeStream(manager.open(path));
It returns data in BGR order, but OpenGL ES 2.0 uses RGB order, so blue appears red and red appears blue. How odd.
Is there any solution for this?
I am testing the application on an Nvidia Tegra 2 device (Android 2.2), together with C++ via JNI.
You must know the number of bits per color channel. Let's say n bits are used for one channel, so the first n bits represent BLUE, the second n bits GREEN, and the final n bits RED in the input. You need to swap the blue and red bit groups, masking each group out first so nothing spills over, like this:
mask = (1 << n) - 1;
output = ((input & mask) << (2 * n)) | (input & (mask << n)) | ((input >> (2 * n)) & mask);
To be able to use this solution you need to find out what n is.
Recent versions of OpenGL also provide BGR input formats; OpenGL ES unfortunately does not. Since you're on Android, you have to deal with OpenGL ES.
If you're using a fragment shader, it is also trivial to apply an rgb→bgr swizzle, which is probably the easiest way to overcome this problem.
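For example, the shader source (held as a Java string, as is usual on Android) only needs a .bgra swizzle on the sampled colour; this sketch assumes a textured quad with uTexture and vTexCoord already wired up:
// Fragment shader that swaps red and blue while sampling
private static final String BGR_FRAGMENT_SHADER =
        "precision mediump float;\n"
        + "uniform sampler2D uTexture;\n"
        + "varying vec2 vTexCoord;\n"
        + "void main() {\n"
        + "  gl_FragColor = texture2D(uTexture, vTexCoord).bgra;\n"
        + "}\n";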