I'm confused about PixelFormat on Android.
My device is Motorola Defy.
I have two questions:
On Android 2.3, getWindowManager().getDefaultDisplay().getPixelFormat() returns 4, which stands for RGB_565. As far as I know, my device supports 16M colors, which means 3 bytes per pixel (or 4 with an alpha channel):
2^(8*3) = 2^24 = 16M
But the RGB_565 format has 2 bytes (16 bits) per pixel, which corresponds to 65K colors:
2^(8*2) = 2^16 = 65K
So why doesn't getPixelFormat() return a format with 3 (or 4, like RGBA) bytes per pixel? Is it a display driver problem or something? Can I set the PixelFormat to RGBA_8888 (or an analogue)?
On Android 4.1 (custom ROM), getPixelFormat() returns 5, but this value is undocumented. What does it stand for? In practice, the effect is the same as with constant 4. From this discussion I found that 5 stands for RGBA_8888 (but there is no proof for that statement). So how can I figure out the real format of the device's screen? I also found one Chinese device on Android 2.2 that likewise reports PixelFormat 5, but whose real format is 4 (as on my Motorola).
I have googled these questions and found nothing. The only thing I found is that the Nexus 7 also reports format 5.
Update:
I found the method getWindow().setFormat(), but it does not actually change the main pixel format.
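Here is roughly what I tried; the value reported by getPixelFormat() is unchanged afterwards (a minimal sketch, assuming the call is made from an Activity):

// Request a 32-bit surface for the window; the display's reported format stays the same.
getWindow().setFormat(PixelFormat.RGBA_8888);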
I'll just add my two cents to this discussion, though I should admit in advance that I could not find conclusive answers to all your questions.
So, why getPixelFormat() doesn't return format with 3 (or 4 like RGBA)
bytes per pixel? Is it display driver problems or something? Can I set
PixelFormat to RGBA_8888 (or analogue)?
I'm a little puzzled about what exactly you're asking here. The return value of getPixelFormat() is just an integer that identifies the active pixel format; it is not meant to represent any data compacted into a number (e.g. as with MeasureSpec). Unfortunately, I do not have an explanation for why a different value is returned than you expected. My best guess would be that it's either an OS decision, as there does not seem to be a limitation from a hardware point of view, or alternatively that the constants defined in the native implementation do not match up with the ones in Java. The fact that you're getting back a 4 as pixel format would then not necessarily mean that it's really RGB_565, if Motorola messed up the definitions.
On a side note: I've actually come across misaligned constant definitions before in Android, although I can't currently recall where exactly...
Just to confirm, it may be worth printing out the pixel format details at runtime. If there is indeed a native constant that reuses a Java PixelFormat value but doesn't match up, you could possibly reveal the 'real' format this way. Use the getPixelFormatInfo(int format, PixelFormat info) method, which simply delegates retrieving the actual values to the native implementation.
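A minimal sketch of such a check, assuming it runs inside an Activity (the log tag is arbitrary):

int format = getWindowManager().getDefaultDisplay().getPixelFormat();
PixelFormat info = new PixelFormat();
PixelFormat.getPixelFormatInfo(format, info);
// If the constant is mislabeled, the native-reported bit depths below should expose it.
Log.d("PixelFormatCheck", "format=" + format
        + ", bitsPerPixel=" + info.bitsPerPixel
        + ", bytesPerPixel=" + info.bytesPerPixel);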
On Android 4.1 (custom rom), getPixelFormat() returns 5. But this
value is undocumented. What does it stand for?
As mentioned earlier, sometimes constants defined in native code do not match up with the ones in Java, or aren't defined there at all. This is probably such a case. You'll have to do some digging to find out what it represents, but it's fairly straightforward:
/**
 * pixel format definitions
 */

enum {
    HAL_PIXEL_FORMAT_RGBA_8888          = 1,
    HAL_PIXEL_FORMAT_RGBX_8888          = 2,
    HAL_PIXEL_FORMAT_RGB_888            = 3,
    HAL_PIXEL_FORMAT_RGB_565            = 4,
    HAL_PIXEL_FORMAT_BGRA_8888          = 5,
    HAL_PIXEL_FORMAT_RGBA_5551          = 6,
    HAL_PIXEL_FORMAT_RGBA_4444          = 7,
    /* 0x8 - 0xF range unavailable */
    HAL_PIXEL_FORMAT_YCbCr_422_SP       = 0x10,  // NV16
    HAL_PIXEL_FORMAT_YCrCb_420_SP       = 0x11,  // NV21 (_adreno)
    HAL_PIXEL_FORMAT_YCbCr_422_P        = 0x12,  // IYUV
    HAL_PIXEL_FORMAT_YCbCr_420_P        = 0x13,  // YUV9
    HAL_PIXEL_FORMAT_YCbCr_422_I        = 0x14,  // YUY2 (_adreno)
    /* 0x15 reserved */
    HAL_PIXEL_FORMAT_CbYCrY_422_I       = 0x16,  // UYVY (_adreno)
    /* 0x17 reserved */
    /* 0x18 - 0x1F range unavailable */
    HAL_PIXEL_FORMAT_YCbCr_420_SP_TILED = 0x20,  // NV12_adreno_tiled
    HAL_PIXEL_FORMAT_YCbCr_420_SP       = 0x21,  // NV12
    HAL_PIXEL_FORMAT_YCrCb_420_SP_TILED = 0x22,  // NV21_adreno_tiled
    HAL_PIXEL_FORMAT_YCrCb_422_SP       = 0x23,  // NV61
    HAL_PIXEL_FORMAT_YCrCb_422_P        = 0x24,  // YV12 (_adreno)
};
Source: hardware.h (lines 121-148)
If you compare these values with the ones defined in PixelFormat.java, you'll find they match up quite nicely (as they should). It also reveals the meaning of the mysterious 5: BGRA_8888, a variant of RGBA_8888.
By the way, you may want to try determining the pixel format details for this integer value using the aforementioned getPixelFormatInfo(...) method, passing in 5 as the identifier. It'll be interesting to see what gets returned. I'd expect it to show values matching the BGRA_8888 definition, and hence similar to those given in the linked discussion on the Motorola board.
According to this thread on the motodev forums, the return value 5 corresponds to RGBA_8888. The thread states that the documentation for PixelFormat is incomplete and outdated, and links to a bug that was filed for it. However, the link to that bug now returns a 404.
Additionally, I could not seem to find anything in the PixelFormat source code (4.1) that supports that claim, as over there RGBA_8888 is assigned the value 1.
My guess is that this value is specific to Motorola and some other devices, as I am seeing the same output on my Nexus 7 and Galaxy Nexus.
EDIT: I emailed a Google employee about this, and he told me that 5 corresponded to BGRA_8888, as indicated in MH's answer and the Motorola forum thread I linked to earlier. He recommended that I file a bug for the documentation problem, which I have done. Please star the bug report so that action is taken sooner rather than later.
RGBA_8888 corresponds to 1, as can be seen in the excerpt below.
If you go to the code related to mPixelFormat, you find the following.
// Following fields are initialized from native code
private int mPixelFormat;
That means that, for some reason, your device is being treated as RGB_565 due to an OS decision rather than hardware capabilities.
Actually, that makes me curious.
Interestingly enough, the specs of the Galaxy Nexus and the Nexus 7 don't seem to have much in common: GN N7
public static final int RGBA_8888 = 1;
public static final int RGBX_8888 = 2;
public static final int RGB_888   = 3;
public static final int RGB_565   = 4;

@Deprecated
public static final int RGBA_5551 = 6;
@Deprecated
public static final int RGBA_4444 = 7;

public static final int A_8 = 8;
public static final int L_8 = 9;

@Deprecated
public static final int LA_88 = 0xA;
@Deprecated
public static final int RGB_332 = 0xB;
Related
I have implemented a small CNN in RenderScript and want to profile the performance on different hardware. On my Nexus 7 the times make sense, but on the NVIDIA Shield they do not.
The CNN (LeNet) is implemented as 9 layers residing in a queue; computation is performed in sequence. Each layer is timed individually.
Here is an example:
        conv1   pool1  conv2   pool2  resh1  ip1     relu1   ip2     softmax
nexus7  11.177  7.813  13.357  8.367  8.097  2.1     0.326   1.557   2.667
shield  13.219  1.024  1.567   1.081  0.988  14.588  13.323  14.318  40.347
The distribution of the times is about right for the Nexus, with conv1 and conv2 (the convolution layers) taking most of the time. But on the Shield, the times drop far below what's reasonable for layers 2-4 and seem to pile up towards the end. The softmax layer is a relatively small job, so 40 ms is far too large. Either my timing method is faulty, or something else is going on.
The code running the layers looks something like this:
double[] times = new double[layers.size()];
int layerindex = 0;
for (Layer a : layers) {
    double t = SystemClock.elapsedRealtime();
    //long t = System.currentTimeMillis(); // makes no difference
    blob = a.forward(blob); // here we call renderscript forEach_(), invoke_() etc.
    //mRS.finish(); // makes no difference
    t = SystemClock.elapsedRealtime() - t;
    //t = System.currentTimeMillis() - t; // makes no difference
    times[layerindex] += t; // later we take the average etc.
    layerindex++;
}
It is my understanding that once forEach_() returns, the job is supposed to be finished. In any case, mRS.finish() should provide a final barrier. But looking at the times, the only reasonable explanation is that jobs are still being processed in the background.
The app is very simple: I just run the test from MainActivity and print to logcat. Android Studio builds the app as a release build and runs it on the device, which is connected by USB.
(1) What is the correct way to time RenderScript processes?
(2) Is it true that when forEach_() returns, the threads spawned by the script are guaranteed to be done?
(3) In my test app, I simply run directly from the MainActivity. Is this a problem (other than blocking the UI thread and making the app unresponsive)? If this influences the timing or causes the weirdness, what is a proper way to set up a test app like this?
I've implemented CNNs in RenderScript myself, and as you explain, it does require chaining multiple processes and calling forEach_*() several times, once for each layer, if you implement each layer as a different kernel. As such, I can assure you that the forEach call returning does not really guarantee that the process has completed. In theory, it only schedules the kernel, and all queued-up requests will actually run whenever the system determines it's best to, especially if they get processed on the tablet's GPU.
Usually, the only way to make absolutely sure you have some kind of control over a kernel truly running is to explicitly read the output of the RS kernel between layers, for example by calling .copyTo() on the kernel's output allocation object. This "forces" any queued-up RS jobs that have not yet run (on which that layer's output allocation depends) to execute at that time. Granted, this may introduce data-transfer overhead and your timing will not be fully accurate; in fact, the execution time of the full network will quite surely be lower than the sum of the individual layers timed in this manner. But as far as I know, it's the only reliable way to time individual kernels in a chain, and it will give you some feedback to find out where the bottlenecks are and to better guide your optimization, if that's what you're after.
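A minimal sketch of that pattern, applied to the loop from the question; it assumes blob is an Allocation holding byte-sized elements (use a float[] instead if it holds floats):

double t = SystemClock.elapsedRealtime();
blob = a.forward(blob);                     // schedules this layer's kernel(s)
byte[] out = new byte[blob.getBytesSize()]; // buffer sized to this layer's output
blob.copyTo(out);                           // blocks until the queued RS work has actually run
times[layerindex] += SystemClock.elapsedRealtime() - t;

Note that the buffer allocation itself is now inside the timed region; for more precise numbers you could cache one buffer per layer.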
Maybe a little off-topic, but for CNNs, if you can structure your algorithm around matrix-matrix multiplication as the basic computing block, you can actually use RenderScript IntrinsicBLAS, especially BNNM and SGEMM.
Pros:
High-performance implementation of 8-bit matrix multiplication (BNNM), available in the N Preview.
Backwards compatible down to Android 2.3 through the RenderScript support lib, when using Build-Tools 24.0.0 rc3 and above.
High-performance GPU acceleration of SGEMM on the Nexus 5X and 6P with N Preview build NPC91K.
If you only use RenderScript intrinsics, you can code everything in Java.
Cons:
Your algorithm may need to be refactored, since it needs to be based on 2D matrix multiplication.
Though available in Android 6.0, BNNM performance in 6.0 is not satisfactory, so it is better to use the support lib for BNNM and set targetSdkVersion to 24.
SGEMM GPU acceleration is currently only available on the Nexus 5X and Nexus 6P, and it currently requires the width and height of the matrices to be multiples of 8.
It's worth checking whether BLAS fits into your algorithm, and it is easy to use:
import android.support.v8.renderscript.*;
// if you are not using the support lib:
// import android.renderscript.*;

private void runBNNM(int m, int n, int k, byte[] a_byte, byte[] b_byte,
                     int c_offset, RenderScript mRS) {
    Allocation A, B, C;
    Type.Builder builder = new Type.Builder(mRS, Element.U8(mRS));
    Type a_type = builder.setX(k).setY(m).create();
    Type b_type = builder.setX(k).setY(n).create();
    Type c_type = builder.setX(n).setY(m).create();

    // If you are reusing the input Allocations, just create and cache them somewhere else.
    A = Allocation.createTyped(mRS, a_type);
    B = Allocation.createTyped(mRS, b_type);
    C = Allocation.createTyped(mRS, c_type);
    A.copyFrom(a_byte);
    B.copyFrom(b_byte);

    ScriptIntrinsicBLAS blas = ScriptIntrinsicBLAS.create(mRS);
    // Computes: C = A * B.Transpose
    int a_offset = 0;
    int b_offset = 0;
    int c_multiplier = 1;
    blas.BNNM(A, a_offset, B, b_offset, C, c_offset, c_multiplier);
}
SGEMM is similar:
ScriptIntrinsicBLAS blas = ScriptIntrinsicBLAS.create(mRS);
// Construct the Allocations: A, B, C somewhere and make sure the dimensions match.
// Computes: C = 1.0f * A * B + 0.0f * C
float alpha = 1.0f;
float beta = 0.0f;
blas.SGEMM(ScriptIntrinsicBLAS.NO_TRANSPOSE, ScriptIntrinsicBLAS.NO_TRANSPOSE,
alpha, A, B, beta, C);
As the title says: does anyone know what the RGBX_8888 pixel format is, and what the difference with RGBA_8888 is? Does RGBA_8888 offer an alpha channel while RGBX_8888 does not?
Unfortunately, the Android documentation does not give much information on this.
Thanks.
RGBX means that the pixel format still has an alpha channel, but it is ignored and always set to 255.
Some references:
BlackBerry PixelFormat
(It is not Android, but I would guess the naming conventions stay the same across platforms.)
The RGBX 32 bit RGB format is stored in memory as 8 red bits, 8 green bits, 8 blue bits, and 8 ignored bits.
Android 4.1.2 source code (texture.cpp), line 80:
There is a function called PointSample, which samples based on a template format and the passed parameters. You can see that for pixel format RGBX_8888 the alpha channel is ignored and set to 255, while for RGBA_8888 it is sampled normally.
if (GGL_PIXEL_FORMAT_RGBA_8888 == format)
    *sample = *(data + index);
else if (GGL_PIXEL_FORMAT_RGBX_8888 == format)
{
    *sample = *(data + index);
    *sample |= 0xff000000;
}
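If you want to confirm the layout from Java, a small sketch using the PixelFormat constants (both are defined in android.graphics.PixelFormat):

PixelFormat info = new PixelFormat();
PixelFormat.getPixelFormatInfo(PixelFormat.RGBX_8888, info);
// bytesPerPixel is 4 for RGBX_8888 just as for RGBA_8888; the two differ
// only in whether that fourth byte is interpreted as alpha.
Log.d("RGBX", "bytes=" + info.bytesPerPixel + ", bits=" + info.bitsPerPixel);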
I was just going through the PowerTutor source code and found many constants related to the WiFi, CPU and LCD components. I can understand the flow, but not these constants, and I am curious about them. Where are these constants derived from? What are the standard values? Please point me to a link where I can find this information.
Update:
The following methods are in the dreamConstants file. Where did they get these constants from? From which file are they read?
public double wifiHighPower() {
    return 720;
}

public double wifiLowHighTransition() {
    return 15;
}

public double wifiHighLowTransition() {
    return 8;
}

private static final double[] arrayWifiLinkRatios = {
    47.122645, 46.354821, 43.667437, 43.283525, 40.980053, 39.44422, 38.676581,
    34.069637, 29.462693, 20.248805, 11.034917, 6.427122
};

private static final double[] arrayWifiLinkSpeeds = {
    1, 2, 5.5, 6, 9, 11, 12, 18, 24, 36, 48, 54
};
If you are referring to the constants pointing at entries in the /proc file system, they are often device-specific. Thus, you have to do it the hard way: open a file browser, search the file system for the constants, and add them to the application.
As far as I know, that's exactly what the developers of PowerTutor did. Some of these constants are "common sense", but you cannot rely on that. There is always yet another device...
Those constants, for example the 720 (mW) in the wifiHighPower function, are the estimated power consumption as measured on the HTC G1. PowerTutor initially built a power model for the HTC G1 only, which causes high error rates on other phones. So if you want to apply PowerTutor to your phone, you need to build a new power model for it using the power-modeling technique described in the PowerTutor paper. That way you will get fitting constants for your phone instead.
In order to minimize the memory usage of bitmaps while still maximizing their quality, I would like to ask a simple question:
Is there a way for me to check if a given image file (.png file) has transparency using the API, without checking every pixel in it?
If the image doesn't have any transparency, it would be best to use a different bitmap format that uses only the RGB values.
The problem is that Android also doesn't have a format for just the 3 colors, only RGB_565, which they say degrades the quality of the image and should have dithering enabled.
Is there also a way to read only the RGB values and be able to show them?
For me, bitmap.hasAlpha() works fine for checking first whether the bitmap has alpha values. Afterwards, I would suggest running through the pixels and creating a second bitmap without alpha.
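A minimal sketch of that idea; "path" is a placeholder, and Bitmap.copy() does the per-pixel conversion for you:

Bitmap source = BitmapFactory.decodeFile(path);
if (!source.hasAlpha()) {
    // No alpha reported, so a 16-bit copy halves the memory footprint.
    Bitmap opaque = source.copy(Bitmap.Config.RGB_565, false);
    source.recycle();
    source = opaque;
}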
Let's start a bit off-topic
the problem is that android also doesn't have a format for just the 3 colors. only RGB_565, which they say degrades the quality of the image and should have dithering enabled.
The reason for that problem is not really Android-specific; it's about performance while drawing images. You get the best performance if the pixel data fits exactly into one 32-bit memory cell.
So the most obvious good pixel format is the ARGB_8888 format, which uses exactly 32 bits (24 for the color, 8 for alpha). While drawing, you don't need to do anything but loop over the image data, and each cell you read can be drawn directly. The only downside is the memory required to work with such images, both while they just sit in memory and while displaying them, since the graphics hardware has to transfer more data.
The second-best option is a format where several pixels fit into one cell. With 2 pixels per 32 bits you have 16 bits per pixel, and one of the formats using 16 bits is the 565 format: 5 bits red, 6 bits green, 5 bits blue. While drawing this, you can still work on memory cells separately, and all you have to do is split one cell into parts. Due to the smaller memory footprint, drawing can sometimes even be faster than with 32-bit colors. Since memory was a much bigger problem in the early days of Android, they chose this format as the default.
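As a quick illustration of that split, unpacking one RGB_565 pixel looks roughly like this (a sketch; cell stands for one 32-bit memory cell):

int pixel = cell & 0xFFFF;    // the low 16 bits of the cell hold one pixel
int r = (pixel >> 11) & 0x1F; // 5 bits of red
int g = (pixel >> 5) & 0x3F;  // 6 bits of green
int b = pixel & 0x1F;         // 5 bits of blue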
And the worst category of formats are those where pixels don't fit into those cells. If you take just the 3 colors, you get 24 bits, and those need to be distributed over 2 cells in 3 out of 4 cases. For example, the second pixel would use the remaining 8 bits of the first cell and the first 16 bits of the next cell. The extra work required to handle 24-bit colors is so big that the format is not used. And when drawing images you usually have alpha at some point anyway; if not, you simply use 32 bits but ignore the alpha bits.
So the 16-bit approach looks ugly and the 24-bit approach doesn't make sense. And since the memory limitations of Android are not as tight as they were and the hardware got faster, Android has switched its default to 32 bits (explained in even more detail at http://www.curious-creature.org/2010/12/08/bitmap-quality-banding-and-dithering/).
Back to your real question
is there a way for me to check if a given image file (.png file) has transparency using the API, without checking every pixel in it?
I don't know. But JPEG images don't support alpha, and PNG images usually have it. You could simply abuse the file extension to get it right in most cases.
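A trivial sketch of that heuristic ("path" is a placeholder):

// Crude: assume only PNGs may carry transparency, judged by extension alone.
boolean mayHaveAlpha = path.toLowerCase(Locale.ROOT).endsWith(".png");
Bitmap.Config config = mayHaveAlpha ? Bitmap.Config.ARGB_8888 : Bitmap.Config.RGB_565;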
But I would suggest you don't bother with all that and simply use ARGB_8888 and apply the nice image loading techniques detailed in the Android Training documentation about Displaying Bitmaps Efficiently.
The reason people run into memory problems is usually either that they have way more images loaded in memory than they currently display or they use giant images that can't be displayed on the small screen of a phone. And in my opinion it makes more sense to add good memory management than complicating your code to downgrade the image quality.
There is a way to check if a PNG file has transparency, or at least if it supports it:
import java.io.InputStream;

public final static int COLOR_GREY = 0;
public final static int COLOR_TRUE = 2;
public final static int COLOR_INDEX = 3;
public final static int COLOR_GREY_ALPHA = 4;
public final static int COLOR_TRUE_ALPHA = 6;

private final static int DECODE_BUFFER_SIZE = 16 * 1024;
private final static int HEADER_DECODE_BUFFER_SIZE = 1024;

/** given an InputStream of a PNG file, returns true iff it was found to have transparency (in its header) */
private static boolean isPngInputStreamContainTransparency(final InputStream pngInputStream) {
    try {
        // skip: png signature, header chunk declaration, width, height, bitDepth:
        pngInputStream.skip(12 + 4 + 4 + 4 + 1);
        final byte colorType = (byte) pngInputStream.read();
        switch (colorType) {
            case COLOR_GREY_ALPHA:
            case COLOR_TRUE_ALPHA:
                return true;
            case COLOR_INDEX:
            case COLOR_GREY:
            case COLOR_TRUE:
                return false;
        }
        return true;
    } catch (final Exception e) {
        // malformed or truncated stream: fall through and report no transparency
    }
    return false;
}
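A hypothetical usage example, reading a PNG from the app's assets ("image.png" and context are placeholders):

InputStream in = context.getAssets().open("image.png");
try {
    boolean supportsTransparency = isPngInputStreamContainTransparency(in);
} finally {
    in.close();
}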
Other than that, I don't know if such a thing is possible.
I've found the following links, which could be helpful for checking whether a PNG file has transparency. Sadly, it's a solution for PNG files only; the other formats (like WebP, BMP, ...) need a different parser.
links:
http://www.java2s.com/Code/Java/2D-Graphics-GUI/PNGDecoder.htm
http://hg.l33tlabs.org/twl/file/tip/src/de/matthiasmann/twl/utils/PNGDecoder.java
http://www.java-gaming.org/index.php/topic,24202
I want to load a .png file via the AssetManager provided by the Android SDK:

AssetManager manager;
/........./
BitmapFactory.decodeStream(manager.open(path));
It returns data in BGR format, but OpenGL ES 2.0 uses RGB format, so blue appears red and red appears blue. How odd.
Is there any solution for it?
I use an NVIDIA Tegra 2 device (Android 2.2) to test the application, along with C++ via JNI.
You must know the number of bits per color channel. Let's say each channel uses n bits; then the first n bits represent BLUE, the next n bits GREEN, and the final n bits RED in the input. You need to swap the blue and red bit groups into the correct order, masking each channel off before shifting it, like this:

unsigned mask = (1u << n) - 1u;       // n-bit channel mask
output = ((input & mask) << (2 * n))  // move the lowest channel to the top
       | (input & (mask << n))        // the middle channel stays in place
       | ((input >> (2 * n)) & mask); // move the top channel to the bottom

To be able to use this solution, you need to find out how large n is.
Recent versions of OpenGL also provide BGR input formats; OpenGL ES unfortunately does not. Since you're on Android, you have to deal with OpenGL ES.
If you're using a fragment shader, it is also trivial to apply an rgb→bgr swizzle, which is probably the easiest way to overcome this problem.
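A minimal sketch of that swizzle, written as a GLES2 fragment shader embedded in Java; the uniform and varying names are placeholders for whatever your own pipeline uses:

private static final String BGR_SWIZZLE_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D uTexture;\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        // .bgra reorders the sampled components, swapping blue and red
        "    gl_FragColor = texture2D(uTexture, vTexCoord).bgra;\n" +
        "}\n";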