I was just going through the PowerTutor source code and found many constants related to the WiFi, CPU, and LCD components. I can follow the flow, but I don't understand these constants and am curious about them. Where are these constants derived from? What are the standard values? Please point me to a link where I can find this information.
Update:
The following methods are in the dreamConstants file. Where did they get these constants from? From which file are they read?
public double wifiHighPower() {
    return 720;
}

public double wifiLowHighTransition() {
    return 15;
}

public double wifiHighLowTransition() {
    return 8;
}

private static final double[] arrayWifiLinkRatios = {
    47.122645, 46.354821, 43.667437, 43.283525, 40.980053, 39.44422, 38.676581,
    34.069637, 29.462693, 20.248805, 11.034917, 6.427122
};

private static final double[] arrayWifiLinkSpeeds = {
    1, 2, 5.5, 6, 9, 11, 12, 18, 24, 36, 48, 54
};
If you mean the constants that lead to entries in the /proc file system, they are often device-specific. Thus, you have to do it the hard way: open a file browser, search the file system for the constants, and add them to the application.
As far as I know, that is exactly what the developers of PowerTutor did. Some of these constants are "common sense", but you cannot rely on that. There is always yet another device...
Those constants, for example the 720 (mW) in the wifiHighPower() function, are estimated power consumption values that were measured on the HTC G1. PowerTutor initially built a power model for the HTC G1 only, and it will cause high error rates on other phones. So if you want to apply PowerTutor to your phone, you need to build a new power model for it with the power-modeling technique described in the PowerTutor paper. That way you will get constants fitted to your phone instead.
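To illustrate how paired tables like arrayWifiLinkSpeeds/arrayWifiLinkRatios are typically consumed, here is a minimal sketch. This is an assumed usage pattern for illustration only, not PowerTutor's actual WiFi power equation; see the paper for the real model.
// Hedged sketch: look up the ratio entry that matches the current link speed.
// Assumes the two arrays from dreamConstants above; not PowerTutor's real formula.
static double wifiLinkRatio(double linkSpeedMbps) {
    for (int i = 0; i < arrayWifiLinkSpeeds.length; i++) {
        if (arrayWifiLinkSpeeds[i] == linkSpeedMbps) {
            return arrayWifiLinkRatios[i];
        }
    }
    // Unknown link speed: fall back to the last (fastest) entry.
    return arrayWifiLinkRatios[arrayWifiLinkRatios.length - 1];
}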
In code, I can make a button that inserts these three emojis into the text: ⚽️😈🐺
On many phones, though, when the user clicks the button, ⚽️😈🐺 displays as [X][X][X], or even worse, as three empty spaces.
I would like to disable and hide my own built-in emoji keypad on Android devices that do not display emojis correctly. Does anyone know, or have a tip on, how to detect in code whether a device has emoji support?
I have read that emoji is supported from Android 4.1, but that is not my experience...
I just implemented a solution for this problem myself. The nice thing with Android is that it is open source so that when you come around problems like these, there's a good chance you can find an approach to help you.
In the Android Open Source Project, you can find a method where they use Paint.hasGlyph to detect whether a font exists for a given emoji. However, as this method is not available before API 23, they also do test renders and compare the result against the width of 'tofu' (the [X] character you mention in your post).
There are some other failings with this approach, but it should be enough to get you started.
Google source:
https://android.googlesource.com/platform/packages/inputmethods/LatinIME/+/master/java/src/com/android/inputmethod/keyboard/emoji/EmojiCategory.java#441
https://android.googlesource.com/platform/packages/inputmethods/LatinIME/+/master/java/src/com/android/inputmethod/keyboard/KeyboardLayoutSet.java
Based on Jason Gore's answer:
For example create boolean canShowFlagEmoji:
private static boolean canShowFlagEmoji() {
    Paint paint = new Paint();
    String switzerland = "\uD83C\uDDE8\uD83C\uDDED"; // surrogate pairs of the Swiss flag emoji
    try {
        return paint.hasGlyph(switzerland);
    } catch (NoSuchMethodError e) {
        // Paint.hasGlyph() is unavailable before API 23. Compare the display width of a
        // single-codepoint emoji to the width of the flag emoji to determine whether the
        // flag is rendered as a single glyph or as two adjacent regional indicator symbols.
        float flagWidth = paint.measureText(switzerland);
        float standardWidth = paint.measureText("\uD83D\uDC27"); // U+1F427 PENGUIN
        // This assumes that a valid glyph for the flag emoji must be less than
        // 1.25 times the width of the penguin.
        return flagWidth < standardWidth * 1.25;
    }
}
And then in code, whenever you need to check whether the emoji is available:
if (canShowFlagEmoji()) {
    // Code when FlagEmoji is available
} else {
    // And when not
}
You can get the surrogate pairs of an emoji here, by clicking on 'detail'.
An alternative option might be to include the Android "Emoji Compatibility" library, which would detect and add any required Emoji characters to apps running on Android 4.4 (API 19) and later: https://developer.android.com/topic/libraries/support-library/preview/emoji-compat.html
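For reference, a minimal initialization sketch for that library, assuming the downloadable-fonts configuration from its documentation (the provider strings, certificate resource, and context variable are the stock values from the docs, not anything specific to this question):
// Minimal sketch: initialize EmojiCompat with a downloadable font configuration.
// Requires the support-emoji dependency; 'context' is your application context.
FontRequest fontRequest = new FontRequest(
        "com.google.android.gms.fonts",              // font provider authority
        "com.google.android.gms",                    // font provider package
        "Noto Color Emoji Compat",                   // font query
        R.array.com_google_android_gms_fonts_certs); // provider certificates
EmojiCompat.Config config = new FontRequestEmojiCompatConfig(context, fontRequest)
        .setReplaceAll(true); // replace all emoji with EmojiCompat-backed glyphs
EmojiCompat.init(config);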
final Paint paint = new Paint();
final boolean isEmojiRendered;
if (VERSION.SDK_INT >= VERSION_CODES.M) {
    isEmojiRendered = paint.hasGlyph(emoji);
} else {
    isEmojiRendered = paint.measureText(emoji) > 7;
}
The width > 7 part is particularly hacky; I would expect the value to be 0.0 for non-renderable emoji, but across a few devices I found that the value actually ranged from about 3.0 to 6.0 for non-renderable, and 12.0 to 15.0 for renderable. Your results may vary, so you might want to test that. I believe the font size also affects the output of measureText(), so keep that in mind.
The second part was answered by RogueBaneling here: how can I check if my device is capable of rendering emoji images correctly?
I have implemented a small CNN in RenderScript and want to profile the performance on different hardware. On my Nexus 7 the times make sense, but on the NVIDIA Shield they do not.
The CNN (LeNet) is implemented as 9 layers held in a queue; computation is performed in sequence. Each layer is timed individually.
Here is an example (times in ms):
        conv1   pool1  conv2   pool2  resh1  ip1     relu1   ip2     softmax
nexus7  11.177  7.813  13.357  8.367  8.097  2.1     0.326   1.557   2.667
shield  13.219  1.024  1.567   1.081  0.988  14.588  13.323  14.318  40.347
The distribution of the times looks about right for the Nexus, with conv1 and conv2 (the convolution layers) taking most of the time. But on the Shield, the times for layers 2-4 drop far below what's reasonable, and the time seems to pile up towards the end. The softmax layer is a relatively small job, so 40 ms is far too large. My timing method must be faulty, or something else is going on.
The code running the layers looks something like this:
double[] times = new double[layers.size()];
int layerindex = 0;
for (Layer a : layers) {
    double t = SystemClock.elapsedRealtime();
    //long t = System.currentTimeMillis(); // makes no difference
    blob = a.forward(blob); // here we call renderscript forEach_(), invoke_() etc
    //mRS.finish(); // makes no difference
    t = SystemClock.elapsedRealtime() - t;
    //t = System.currentTimeMillis() - t; // makes no difference
    times[layerindex] += t; // later we take average etc
    layerindex++;
}
It is my understanding that once forEach_() returns, the job is supposed to be finished. In any case, mRS.finish() should provide a final barrier. But looking at the times, the only reasonable explanation is that jobs are still being processed in the background.
The app is very simple, I just run the test from MainActivity and print to logcat. Android Studio builds the app as a release and runs it on the device which is connected by USB.
(1) What is the correct way to time RenderScript processes?
(2) Is it true that when forEach_() returns, the threads spawned by the script are guaranteed to be done?
(3) In my test app, I simply run directly from the MainActivity. Is this a problem (other than blocking the UI thread and making the app unresponsive)? If this influences the timing or causes the weirdness, what is a proper way to set up a test app like this?
I've implemented CNNs in RenderScript myself, and as you explain, it does require chaining multiple processes and calling forEach_*() various times for each layer if you implement them each as a different kernel. As such, I can assure you that the forEach call returning does not really guarantee that the process has completed. In theory, this will only schedule the kernel and all queued up requests will actually run whenever the system determines it's best to, especially if they get processed in the tablet's GPU.
Usually, the only way to make absolutely sure you have some kind of control over a kernel truly running is by explicitly reading the output of the RS kernel in between layers, such as by using .copyTo() on the output allocation object of that kernel. This "forces" any queued up RS jobs that have not run yet (on which that layer's output allocation is dependent), to execute at that time. Granted, that may introduce data transfer overheads and your timing will not be fully accurate -- in fact, the execution time of the full network will quite surely be lower than the sum of the individual layers if timed in this manner. But as far as I know, it's the only reliable way to time individual kernels in a chain and it will give you some feedback to find out where bottlenecks are, and to better guide your optimization, if that's what you're after.
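As a sketch, the timing loop from the question could be adapted like this; the staging buffer size and the assumption that forward() returns the layer's output Allocation are mine, taken from the loop shown above:
// Force each layer's queued RS work to complete before stopping the clock,
// by reading the output allocation back with copyTo().
float[] staging = new float[outputElementCount]; // sized to the layer's output
long t0 = SystemClock.elapsedRealtime();
blob = a.forward(blob);   // schedules the kernel(s) for this layer
blob.copyTo(staging);     // blocks until all pending work on 'blob' has executed
times[layerindex] += SystemClock.elapsedRealtime() - t0;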
Maybe a little bit off topic, but for a CNN: if you can structure your algorithm using matrix-matrix multiplication as the basic computing block, you can actually use RenderScript IntrinsicBLAS, especially BNNM and SGEMM.
Pros:
High-performance implementation of 8-bit matrix multiplication (BNNM), available in the N Preview.
Support back to Android 2.3 through the RenderScript support lib, when using Build-Tools 24.0.0 rc3 and above.
High-performance GPU acceleration of SGEMM on the Nexus 5X and 6P with the N Preview build NPC91K.
If you only use RenderScript intrinsics, you can code everything in Java.
Cons:
Your algorithm may need to be refactored to be based on 2D matrix multiplication.
Though BNNM is available in Android 6.0, its performance there is not satisfactory, so it is better to use the support lib for BNNM and set targetSdkVersion to 24.
SGEMM GPU acceleration is currently only available on the Nexus 5X and Nexus 6P, and it currently requires the matrices' width and height to be multiples of 8.
It's worth trying if BLAS fits into your algorithm. And it is easy to use:
import android.support.v8.renderscript.*;
// if you are not using support lib:
// import android.renderscript.*;
private void runBNNM(int m, int n, int k, byte[] a_byte, byte[] b_byte, int c_offset, RenderScript mRS) {
    Allocation A, B, C;
    Type.Builder builder = new Type.Builder(mRS, Element.U8(mRS));
    Type a_type = builder.setX(k).setY(m).create();
    Type b_type = builder.setX(k).setY(n).create();
    Type c_type = builder.setX(n).setY(m).create();
    // If you are reusing the input Allocations, just create and cache them somewhere else.
    A = Allocation.createTyped(mRS, a_type);
    B = Allocation.createTyped(mRS, b_type);
    C = Allocation.createTyped(mRS, c_type);
    A.copyFrom(a_byte);
    B.copyFrom(b_byte);

    ScriptIntrinsicBLAS blas = ScriptIntrinsicBLAS.create(mRS);
    // Computes: C = A * B.Transpose
    // c_offset is taken from the method parameter; a duplicate local declaration
    // of it in the original snippet would not compile, so it has been removed.
    int a_offset = 0;
    int b_offset = 0;
    int c_multiplier = 1;
    blas.BNNM(A, a_offset, B, b_offset, C, c_offset, c_multiplier);
}
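To read the result back out of C afterwards (a small addition, assuming an m*n byte output), Allocation.copyTo() does a blocking copy:
byte[] c_byte = new byte[m * n];
C.copyTo(c_byte); // waits for the BNNM kernel to finish, then copies the result out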
SGEMM is similar:
ScriptIntrinsicBLAS blas = ScriptIntrinsicBLAS.create(mRS);
// Construct the Allocations: A, B, C somewhere and make sure the dimensions match.
// Computes: C = 1.0f * A * B + 0.0f * C
float alpha = 1.0f;
float beta = 0.0f;
blas.SGEMM(ScriptIntrinsicBLAS.NO_TRANSPOSE, ScriptIntrinsicBLAS.NO_TRANSPOSE,
alpha, A, B, beta, C);
We want to implement some kind of indoor position determination using iBeacons.
This article seems really interesting: in it, the author implemented non-linear least squares triangulation using the Eigen C++ library and the Levenberg-Marquardt algorithm. Since Eigen is written in C++, I tried to use JNI and the Android NDK in order to use it, but it throws a lot of errors that I have no idea how to solve, and I couldn't find anything online. I also tried to use Jeigen, but it does not have all the functions that we need.
So, my questions are:
Has someone ever implemented some kind of trilateration using beacons in Android?
Do you think that using Eigen + JNI + NDK is a good solution? If yes, have you ever implemented Levenberg-Marquardt using that combination?
Are there better options than the Levenberg-Marquardt algorithm for calculating the trilateration in an Android application?
Take a look at this library: https://github.com/lemmingapex/Trilateration
It uses the Levenberg-Marquardt algorithm from Apache Commons Math.
For example, in TrilaterationTest.java you can see:
double[][] positions = new double[][] { { 1.0, 1.0 }, { 2.0, 1.0 } };
double[] distances = new double[] { 0.0, 1.0 };
TrilaterationFunction trilaterationFunction = new TrilaterationFunction(positions, distances);
NonLinearLeastSquaresSolver solver = new NonLinearLeastSquaresSolver(trilaterationFunction, new LevenbergMarquardtOptimizer());
double[] expectedPosition = new double[] { 1.0, 1.0 };
Optimum optimum = solver.solve();
testResults(expectedPosition, 0.0001, optimum);
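To extract the estimated position from the optimum (based on the library's README; getPoint() and getSigma() are standard Apache Commons Math Optimum accessors):
// Read out the estimated coordinates and a per-coordinate error estimate.
double[] centroid = optimum.getPoint().toArray();   // estimated x, y
RealVector standardDeviation = optimum.getSigma(0); // org.apache.commons.math3.linear.RealVector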
But if you look at the Objective-C example https://github.com/RGADigital/indoor_navigation_iBeacons/blob/show-work/ios/Group5iBeacons/Debug/Managers/Location/NonLinear/NonLinear.mm you can note that the accuracy is used as the assessment parameter, not the distance.
I'm developing with Qt 5.3 on an Android device, and I can't get the screen resolution.
With the old Qt 5 version this code worked:
QScreen *screen = QApplication::screens().at(0);
largh = screen->availableGeometry().width();
alt = screen->availableGeometry().height();
However, now it doesn't work (it returns a screen size of 0x0). Is there another way to do it? Thanks.
size holds the pixel resolution:
screen->size().width();
screen->size().height();
Whereas availableSize holds the size excluding window-manager reserved areas:
screen->availableSize().width();
screen->availableSize().height();
More info on the QScreen class.
For more information: the screen's availableSize is not ready at the very beginning, so you have to wait for it. Here is the code:
Widget::Widget(QWidget *parent) {
    ...
    QScreen *screen = QApplication::screens().at(0);
    connect(screen, SIGNAL(virtualGeometryChanged(QRect)), this, SLOT(getScreen(QRect)));
}

void Widget::getScreen(QRect rect)
{
    // Re-fetch the screen here; the local pointer from the constructor is out of scope.
    QScreen *screen = QApplication::screens().at(0);
    int screenY = screen->availableSize().height();
    int screenX = screen->availableSize().width();
    this->setGeometry(0, 0, screenX, screenY);
}
I found that there are several ways to obtain the device resolution; each outputs the same results and thankfully works across all OSes supported by Qt.
1) My favorite is to write a static function using QDesktopWidget in a helper class and use it all across the code:
QRect const CGenericWidget::getScreenSize()
{
    // Note: one might cache the value to optimize processing speed. This, however,
    // will result in errors if the screen resolution changes during execution.
    QDesktopWidget scr;
    return scr.availableGeometry(scr.primaryScreen());
}
Then, anywhere in your code, you can call the function like this:
qDebug() << CGenericWidget::getScreenSize();
It will return a const QRect object that you can use to obtain the screen size without the top and bottom bars.
2) Another way to obtain the screen resolution that works just fine if your app is full screen is:
QWidget *activeWindow = QApplication::activeWindow();
m_sw = activeWindow->width();
m_sh = activeWindow->height();
3) And of course you have the option that Zeus recommended:
QScreen *screen = QApplication::screens().at(0);
largh = screen->availableSize().width();
alt = screen->availableSize().height();
I'm confused about PixelFormat on Android.
My device is a Motorola Defy.
I have two questions:
On Android 2.3, getWindowManager().getDefaultDisplay().getPixelFormat() returns 4, which stands for RGB_565. As far as I know, my device has 16M colors, which means 3 (or 4 with an alpha channel) bytes per pixel:
2^(8*3) = 2^24 = 16M
But the RGB_565 format has 2 bytes (16 bits) per pixel, which amounts to 65K colors:
2^(8*2) = 2^16 = 65K
So why doesn't getPixelFormat() return a format with 3 (or 4, like RGBA) bytes per pixel? Is it a display-driver problem or something? Can I set the PixelFormat to RGBA_8888 (or an analogue)?
On Android 4.1 (custom ROM), getPixelFormat() returns 5, but this value is undocumented. What does it stand for? In this situation the effect is actually the same as with constant 4, but from this discussion I found that 5 stands for RGBA_8888 (though there is no proof for that statement). So how can I figure out the real format of the device's screen? I also found one Chinese device on Android 2.2 that likewise reports PixelFormat 5, but whose real format is 4 (as on my Motorola).
I have googled these questions and found nothing. The only thing I found is that the Nexus 7 also reports format 5.
Update:
I found the method getWindow().setFormat(), but it does not actually change the main pixel format.
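For reference, this is the call in question (it had no effect on the value reported by getPixelFormat() on my device):
// Request an RGBA_8888 window surface; getPixelFormat() still reports the old value.
getWindow().setFormat(PixelFormat.RGBA_8888);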
I'll just add my two cents to this discussion, though I should admit in advance that I could not find conclusive answers to all your questions.
So why doesn't getPixelFormat() return a format with 3 (or 4, like RGBA) bytes per pixel? Is it a display-driver problem or something? Can I set the PixelFormat to RGBA_8888 (or an analogue)?
I'm a little puzzled about what exactly you're asking here. The return value of getPixelFormat() is just an integer that identifies the active pixel format; it is not meant to represent any data packed into a number (as with MeasureSpec, for example). Unfortunately, I do not have an explanation for why a different value is returned than you expected. My best guess is that it's either an OS decision, as there does not seem to be a limitation from a hardware point of view, or alternatively, that the constants defined in the native implementation do not match up with the ones in Java. The fact that you're getting back a 4 as pixel format would then not necessarily mean that it's really RGB_565, if Motorola messed up the definitions.
On a side note: I've actually come across misaligned constant definitions before in Android, although I can't currently recall where exactly...
Just to confirm, it may be worth printing out the pixel format details at runtime. If there's indeed a native constant defined that reuses a Java PixelFormat value but doesn't match up, you could possibly reveal the 'real' format this way. Use the getPixelFormatInfo(int format, PixelFormat info) method, which simply delegates retrieving the actual values to the native implementation.
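A minimal sketch of that runtime check, assuming it runs inside an Activity (the log tag is arbitrary):
// Query the active display format and dump the bit/byte depth reported natively.
int format = getWindowManager().getDefaultDisplay().getPixelFormat();
PixelFormat info = new PixelFormat();
PixelFormat.getPixelFormatInfo(format, info);
Log.d("PixelFormatCheck", "format=" + format
        + ", bitsPerPixel=" + info.bitsPerPixel
        + ", bytesPerPixel=" + info.bytesPerPixel);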
On Android 4.1 (custom ROM), getPixelFormat() returns 5. But this value is undocumented. What does it stand for?
As mentioned earlier, sometimes constants defined in native code do not match up the ones in Java, or aren't defined at all. This is probably such a case. You'll have to do some digging to find out what it represents, but it's fairly straightforward:
/**
 * pixel format definitions
 */
enum {
    HAL_PIXEL_FORMAT_RGBA_8888 = 1,
    HAL_PIXEL_FORMAT_RGBX_8888 = 2,
    HAL_PIXEL_FORMAT_RGB_888 = 3,
    HAL_PIXEL_FORMAT_RGB_565 = 4,
    HAL_PIXEL_FORMAT_BGRA_8888 = 5,
    HAL_PIXEL_FORMAT_RGBA_5551 = 6,
    HAL_PIXEL_FORMAT_RGBA_4444 = 7,
    /* 0x8 - 0xF range unavailable */
    HAL_PIXEL_FORMAT_YCbCr_422_SP = 0x10,       // NV16
    HAL_PIXEL_FORMAT_YCrCb_420_SP = 0x11,       // NV21 (_adreno)
    HAL_PIXEL_FORMAT_YCbCr_422_P = 0x12,        // IYUV
    HAL_PIXEL_FORMAT_YCbCr_420_P = 0x13,        // YUV9
    HAL_PIXEL_FORMAT_YCbCr_422_I = 0x14,        // YUY2 (_adreno)
    /* 0x15 reserved */
    HAL_PIXEL_FORMAT_CbYCrY_422_I = 0x16,       // UYVY (_adreno)
    /* 0x17 reserved */
    /* 0x18 - 0x1F range unavailable */
    HAL_PIXEL_FORMAT_YCbCr_420_SP_TILED = 0x20, // NV12_adreno_tiled
    HAL_PIXEL_FORMAT_YCbCr_420_SP = 0x21,       // NV12
    HAL_PIXEL_FORMAT_YCrCb_420_SP_TILED = 0x22, // NV21_adreno_tiled
    HAL_PIXEL_FORMAT_YCrCb_422_SP = 0x23,       // NV61
    HAL_PIXEL_FORMAT_YCrCb_422_P = 0x24,        // YV12 (_adreno)
};
Source: hardware.h (lines 121-148)
If you compare the values with the ones defined in PixelFormat.java, you'll find they match up quite nicely (as they should). It also reveals the meaning of the mysterious 5, which is BGRA_8888, a variant of RGBA_8888.
By the way, you may want to try determining the pixel format details for this integer value using the aforementioned getPixelFormatInfo(...) method by passing in 5 as identifier. It'll be interesting to see what gets returned. I'd expect it to show values matching the BGRA_8888 definition, and hence similar to those given in the linked discussion on the Motorola board.
According to this thread on the motodev forums, the return value 5 corresponds to RGBA_8888. The thread states that the documentation for PixelFormat is incomplete and outdated, and links to a bug that was filed for it. However, the link to that bug now returns a 404.
Additionally, I could not find anything in the PixelFormat source code (4.1) that supports that claim, as there RGBA_8888 is assigned the value 1.
My guess is that this value is specific to Motorola and some other devices, as I am seeing the same output on my Nexus 7 and Galaxy Nexus.
EDIT: I emailed a Google employee about this, and he told me that 5 corresponded to BGRA_8888, as indicated in MH's answer and the Motorola forum thread I linked to earlier. He recommended that I file a bug for the documentation problem, which I have done. Please star the bug report so that action is taken sooner rather than later.
RGBA_8888 corresponds to 1 as can be seen in the annex below.
If you go to the code related to mPixelFormat you find the following.
// Following fields are initialized from native code
private int mPixelFormat;
That means that for some reason your device is being treated as RGB_565 due to an OS decision rather than hardware capabilities.
Actually, that makes me curious.
Interestingly enough, the descriptions of the Galaxy Nexus and the Nexus 7 displays don't seem to have much in common: GN, N7
public static final int RGBA_8888 = 1;
public static final int RGBX_8888 = 2;
public static final int RGB_888 = 3;
public static final int RGB_565 = 4;
@Deprecated
public static final int RGBA_5551 = 6;
@Deprecated
public static final int RGBA_4444 = 7;
public static final int A_8 = 8;
public static final int L_8 = 9;
@Deprecated
public static final int LA_88 = 0xA;
@Deprecated
public static final int RGB_332 = 0xB;