I want to calculate and plot the power consumption of my app over time. The x axis is the time (hours) and the y axis is the power consumption in mW.
I have the discharge values for my application (100, 93, 82, 78, 71, 64, 59, 49, 41), which correspond to the initial charge, 1 h, 2 h, and so on. The battery of the smartphone is 3.7 V and 1850 mAh. I calculated the power consumption as follows:
cons(W) = voltage (V) * discharge amount (%) * capacity (mAh) / discharge time (h)
cons (W) = 3.7 V * 1.85 Ah * [100, 93, 82, 78, 71, 64, 59, 49, 41] / [0.1, 1, 2, 3, 4, 5, 6, 7, 8]
Is that correct? I know there is a way to obtain the values I need directly, but I want to compare several apps and I don't have time to compute the values again. So, based on the previous calculation, what am I doing wrong? The values I am obtaining are too large. Any suggestion?
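For illustration, here is a minimal Kotlin sketch of the calculation as written (the numbers come from the question; the function and variable names are mine). It also prints the same formula with the percentage converted to a fraction of the capacity, since using the raw percentage inflates the result by a factor of 100:
// Sketch of the calculation described above; not an official formula.
fun main() {
    val voltage = 3.7                 // V
    val capacityAh = 1.85             // Ah (1850 mAh)
    val remainingPct = doubleArrayOf(100.0, 93.0, 82.0, 78.0, 71.0, 64.0, 59.0, 49.0, 41.0)
    val hours = doubleArrayOf(0.1, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0)
    for (i in remainingPct.indices) {
        // As written in the question: the percentage is used as a raw number.
        val asWritten = voltage * capacityAh * remainingPct[i] / hours[i]
        // Same formula with the percentage converted to a fraction of the capacity.
        val asFraction = voltage * capacityAh * (remainingPct[i] / 100.0) / hours[i]
        println("t=%.1f h  as written: %.1f W   as fraction: %.3f W".format(hours[i], asWritten, asFraction))
    }
}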
Android and iOS can show power consumption on a per-app basis.
Android, at least, should support API calls to access these values.
(These calculations are more valid than just using battery drain, though still not perfect; they take into account processor time, readout of sensor values, and so on.)
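For example, a minimal Kotlin sketch of reading the battery current through the public BatteryManager API (note this is device-wide, not per-app, and the sign convention of the reading varies between devices; API level 21+):
import android.content.Context
import android.os.BatteryManager

// Sketch: instantaneous battery current, device-wide (not per-app), API 21+.
fun readBatteryCurrentMilliAmps(context: Context): Double {
    val bm = context.getSystemService(Context.BATTERY_SERVICE) as BatteryManager
    val microAmps = bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CURRENT_NOW)
    return microAmps / 1000.0
}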
possible duplicate: https://stackoverflow.com/questions/23428675/android-to-check-battery-stats-per-application
I have an Android app where I get heart rate measurements from a Polar H10 device.
I'm totally lost on how to interpret the heart rate. Various links to the bluetooth.com site unfortunately result in 404 errors.
The characteristic's value is, for example:
[16, 59, 83, 4]
From what I understood, the second byte (59) is the heart rate in BPM. But this does not seem to be decimal, as the value goes up to 127 and then continues with -127, -126, -125, ... It is not hex either.
I tried (in Kotlin):
characteristic.value[1].toUInt()
characteristic.value[1].toInt()
characteristic.value[1].toShort()
characteristic.value[1].toULong()
characteristic.value[1].toDouble()
All values freak out as soon as the -127 appears.
Do I have to convert the 59 to binary (59 = 111011) and look for it in there? Please give me some insight.
### Edit (12th April 2021) ###
What I do to get those values is a BluetoothDevice.connectGatt().
Then hold the GATT.
In order to get heart rate values I look for
Service 0x180d and its
characteristic 0x2a37 and its only
descriptor 0x2902.
Then I enable notifications by setting 0x01 on the descriptor. I then get ongoing events in the GattClientCallback.onCharacteristicChanged() callback. I will add a screenshot below with all data.
From what I understood the response should be 6 bytes long instead of 4, right? What am I doing wrong?
In the picture you can see the characteristic at the very top. It is linked to the service 180d, and at the bottom the characteristic holds the 4-byte value.
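For reference, a rough Kotlin sketch of the notification setup described above, using the standard Android BLE APIs (the UUIDs are the 16-bit values 0x180d / 0x2a37 / 0x2902 expanded onto the Bluetooth base UUID; error handling omitted):
import android.bluetooth.BluetoothGatt
import android.bluetooth.BluetoothGattDescriptor
import java.util.UUID

val HEART_RATE_SERVICE = UUID.fromString("0000180d-0000-1000-8000-00805f9b34fb")
val HEART_RATE_MEASUREMENT = UUID.fromString("00002a37-0000-1000-8000-00805f9b34fb")
val CLIENT_CHARACTERISTIC_CONFIG = UUID.fromString("00002902-0000-1000-8000-00805f9b34fb")

fun enableHeartRateNotifications(gatt: BluetoothGatt) {
    val characteristic = gatt.getService(HEART_RATE_SERVICE)
        ?.getCharacteristic(HEART_RATE_MEASUREMENT) ?: return
    gatt.setCharacteristicNotification(characteristic, true)
    val descriptor = characteristic.getDescriptor(CLIENT_CHARACTERISTIC_CONFIG) ?: return
    descriptor.value = BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE  // 0x01, 0x00
    gatt.writeDescriptor(descriptor)
}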
See Heart Rate Value in BLE for the links to the documents. As in that answer, here's the decode:
Byte 0 - Flags: 16 (0001 0000)
Bits are numbered from LSB (0) to MSB (7).
Bit 0 - Heart Rate Value Format: 0 => UINT8 beats per minute
Bits 1-2 - Sensor Contact Status: 00 => Not supported or detected
Bit 3 - Energy Expended Status: 0 => Not present
Bit 4 - RR-Interval: 1 => One or more values are present
So the byte after the flags is the heart rate in UINT8 format, and the next two bytes are an RR-interval.
To read this in Kotlin:
characteristic.getIntValue(FORMAT_UINT8, 1)
This returns a heart rate of 59 bpm.
And ignore the other two bytes unless you want the RR.
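As a sketch of how the whole packet can be decoded from those flags (Kotlin, using the Android BluetoothGattCharacteristic API; the function name is mine):
import android.bluetooth.BluetoothGattCharacteristic

// Sketch: decode a Heart Rate Measurement value according to its flags byte.
fun decodeHeartRate(characteristic: BluetoothGattCharacteristic): Pair<Int, List<Double>> {
    val flags = characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT8, 0)
    var offset = 1
    // Flags bit 0 selects the heart rate format: 0 = UINT8, 1 = UINT16.
    val heartRate: Int
    if ((flags and 0x01) == 0) {
        heartRate = characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT8, offset)
        offset += 1
    } else {
        heartRate = characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT16, offset)
        offset += 2
    }
    // Flags bit 3: Energy Expended (UINT16) present - skip it if set.
    if ((flags and 0x08) != 0) offset += 2
    // Flags bit 4: one or more RR-intervals (UINT16, units of 1/1024 s) follow.
    val rrSeconds = mutableListOf<Double>()
    if ((flags and 0x10) != 0) {
        while (offset + 1 < characteristic.value.size) {
            rrSeconds.add(characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT16, offset) / 1024.0)
            offset += 2
        }
    }
    return heartRate to rrSeconds
}
For the example value [16, 59, 83, 4], this gives a heart rate of 59 bpm and one RR-interval of 1107/1024 ≈ 1.08 s.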
It seems I found a way by retrieving the value as follows
val hearRateDecimal = characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT8, 1)
Two things are important:
first - the format UINT8 (although I don't know when to use UINT8 and when to use UINT16; I actually thought I needed UINT16, as the first byte is 16 - see the question above)
second - the offset parameter of 1
What I now get is an Integer even beyond 127 -> 127, 128, 129, 130, ...
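The reason the raw byte appeared to wrap to negative earlier is that bytes are signed in Kotlin/Java; getIntValue() with FORMAT_UINT8 reinterprets the same bits as an unsigned value. A manual equivalent, just for illustration:
val raw: Byte = characteristic.value[1]    // e.g. -127 for a rate of 129 bpm
val unsigned: Int = raw.toInt() and 0xFF   // 0..255, same as FORMAT_UINT8 at offset 1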
byte bytes[] = {0x04,0x08,0x0F,0x66,(byte)0x99,0x41,0x52,0x43,0x55,(byte)0xAA};
ch.setValue(bytes);
If I log the output of this array I get (note the negative values):
[4, 8, 15, 102, -103, 65, 82, 67, 85, -86]
In theory this should only be Java's representation of the values and shouldn't affect the actual values when they reach the Bluetooth device, but this doesn't seem to be the case.
These values are required by the manufacturer, so they cannot be changed. However, two of the values are outside the range of a signed byte, and it appears that this is the reason the device isn't recognizing the command.
When I write this command to the characteristic I get a success result, but the device doesn't act upon the command.
So, my question is: am I sending this in the correct way, or should I be formatting/processing the byte array in order to preserve the intended values contained within?
Any advice greatly appreciated!!!
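For what it's worth, a quick Kotlin check shows the negative numbers are only how signed bytes print; the underlying 8-bit patterns, which are what get written over the air, are unchanged (0x99 prints as -103 and 0xAA as -86):
// The same byte values, printed signed and as hex.
val bytes = byteArrayOf(0x04, 0x08, 0x0F, 0x66, 0x99.toByte(), 0x41, 0x52, 0x43, 0x55, 0xAA.toByte())
println(bytes.joinToString())                          // 4, 8, 15, 102, -103, 65, 82, 67, 85, -86
println(bytes.joinToString { "%02X".format(it) })      // 04, 08, 0F, 66, 99, 41, 52, 43, 55, AA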
I'm developing a VoIP application that runs at a sampling rate of 48 kHz. Since it uses Opus as its codec, which works at 48 kHz internally, and most current Android hardware natively runs at 48 kHz, AEC is the only piece of the puzzle I'm missing. I've already found the WebRTC implementation, but I can't seem to figure out how to make it work. It looks like it corrupts memory randomly and sooner or later crashes the whole thing. When it doesn't crash, the sound is kind of chunky, as if it's quieter for half of the frame. Here's my code that processes a 20 ms frame:
webrtc::SplittingFilter* splittingFilter;
webrtc::IFChannelBuffer* bufferIn;
webrtc::IFChannelBuffer* bufferOut;
webrtc::IFChannelBuffer* bufferOut2;
// ...
splittingFilter=new webrtc::SplittingFilter(1, 3, 960);
bufferIn=new webrtc::IFChannelBuffer(960, 1, 1);
bufferOut=new webrtc::IFChannelBuffer(960, 1, 3);
bufferOut2=new webrtc::IFChannelBuffer(960, 1, 3);
// ...
int16_t* samples=(int16_t*)data;
float* fsamples[3];
float* foutput[3];
int i;
float* fbuf=bufferIn->fbuf()->bands(0)[0];
// convert the data from 16-bit PCM into float
for(i=0;i<960;i++){
    fbuf[i]=samples[i]/(float)32767;
}
// split it into three "bands" that the AEC needs and for some reason can't do itself
splittingFilter->Analysis(bufferIn, bufferOut);
// split the frame into 6 consecutive 160-sample blocks and perform AEC on them
for(i=0;i<6;i++){
    fsamples[0]=&bufferOut->fbuf()->bands(0)[0][160*i];
    fsamples[1]=&bufferOut->fbuf()->bands(0)[1][160*i];
    fsamples[2]=&bufferOut->fbuf()->bands(0)[2][160*i];
    foutput[0]=&bufferOut2->fbuf()->bands(0)[0][160*i];
    foutput[1]=&bufferOut2->fbuf()->bands(0)[1][160*i];
    foutput[2]=&bufferOut2->fbuf()->bands(0)[2][160*i];
    int32_t res=WebRtcAec_Process(aecState, (const float* const*) fsamples, 3, foutput, 160, 20, 0);
}
// put the "bands" back together
splittingFilter->Synthesis(bufferOut2, bufferIn);
// convert the processed data back into 16-bit PCM
for(i=0;i<960;i++){
    samples[i]=(int16_t) (CLAMP(fbuf[i], -1, 1)*32767);
}
If I comment out the actual echo cancellation and just do the float conversion and band splitting back and forth, it doesn't corrupt memory, doesn't sound weird, and runs indefinitely. (I do pass the far-end/speaker signal into the AEC; I just didn't want to clutter the code by including it in the question.)
I've also tried Android's built-in AEC. While it does work, it upsamples the captured signal from 16 kHz.
Unfortunately, there is no free AEC package that supports 48 kHz. So either move to 32 kHz or use a commercial AEC package at 48 kHz.
I want to know how to recognize the flash code of a blinking LED.
If I set the correct code in the app: 0,5+1;0,5+3 (0.5 s LIGHT, 1 s DARK, 0.5 s LIGHT, 3 s DARK), and then detect the LED flashing with the light sensor,
how do I recognize the first flash (0.5 s) if the flashing is continuous? How do I compare the detected values with the specified ones?
Assuming you are getting the signal without noise, you will get a sequence: 0.5 LIGHT, 1 DARK, 0.5 LIGHT, 3 DARK, 0.5 LIGHT, 1 DARK, 0.5 LIGHT, 3 DARK, ...
So I think you are not matching a specific event, but matching with a time window (0.5 + 1 + 0.5 + 3 = 5 seconds). By sliding the time window along the detected signal, you will find your events, and then you can identify the specific ones.
It's important to check the sampling rate you can get out of the light sensor. Say you are sampling at 10 fps; then you will get an array of values like:
[0, 10, 200, 230, 209, 198, 201, 10, 7, 20, 17, 18, 10, 11, 10, 12, 13, ... ]
Then, by setting a threshold, you can see where the light and dark periods start and end.
With a time window of 5 seconds, the array you keep will have a length of 50. You might want to check the array by first connecting its head and tail in order to match the sequence you want.
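A rough Kotlin sketch of that idea, assuming 10 samples per second and a simple brightness threshold (all names, the threshold, and the mismatch tolerance are made up):
// Sketch: threshold the light-sensor samples and slide a 5-second window over them.
const val FPS = 10                  // assumed sensor rate: 10 samples per second
const val THRESHOLD = 100f          // brightness level separating LIGHT from DARK - tune for your sensor

// Expected pattern 0.5 s LIGHT, 1 s DARK, 0.5 s LIGHT, 3 s DARK -> 50 samples at 10 fps.
val expected: BooleanArray = listOf(0.5 to true, 1.0 to false, 0.5 to true, 3.0 to false)
    .flatMap { (seconds, light) -> List((seconds * FPS).toInt()) { light } }
    .toBooleanArray()

// Does the pattern match at a given start index, allowing a few noisy samples?
fun matchesAt(samples: FloatArray, start: Int): Boolean {
    if (start + expected.size > samples.size) return false
    val mismatches = expected.indices.count { i -> (samples[start + i] > THRESHOLD) != expected[i] }
    return mismatches <= 3
}

// Slide the window along the recorded samples; returns the first match index or -1.
fun findPattern(samples: FloatArray): Int =
    (0..samples.size - expected.size).firstOrNull { matchesAt(samples, it) } ?: -1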
Hope this helps!
I'm using a touchscreen (Atmel maXTouch - Atmel 1664S) with Android and finding that the further to the right I go (as X gets larger), the larger the distance between where my finger is and the touch spot on the screen. Would this be a problem with settings in the IDC file, the driver, or somewhere else? Using another OS, such as Ubuntu, on the same screen doesn't seem to have this problem.
I've used this IDC file to try and correct the position, but the last line just turns the touchscreen into a trackpad.
touch.deviceType = touchScreen
touch.orientationAware = 1
output.x = (raw.x - raw.x.min) * (output.width / raw.width)
The kernel driver isn't detecting and reporting the possible range of the input X reports correctly.
If you use adb shell and run getevent -il you should get something like
add device 6: /dev/input/event2
  bus:      0000
  vendor    0000
  product   0000
  version   0000
  name:     "touch_dev"
  location: ""
  id:       ""
  version:  1.0.1
  events:
    ABS (0003): ABS_MT_SLOT        : value 0, min 0, max 9, fuzz 0, flat 0, resolution 0
                ABS_MT_TOUCH_MAJOR : value 0, min 0, max 15, fuzz 0, flat 0, resolution 0
                ABS_MT_POSITION_X  : value 0, min 0, max 1535, fuzz 0, flat 0, resolution 0
                ABS_MT_POSITION_Y  : value 0, min 0, max 2559, fuzz 0, flat 0, resolution 0
                ABS_MT_TRACKING_ID : value 0, min 0, max 65535, fuzz 0, flat 0, resolution 0
                ABS_MT_PRESSURE    : value 0, min 0, max 255, fuzz 0, flat 0, resolution 0
  input props:
    INPUT_PROP_DIRECT
You can see on my device, the X value can range between 0 and 1535.
If you then run getevent -trl /dev/input/event2, move your finger around the screen, and look at the maximum possible X value, it should correspond:
[ 115960.226411] EV_ABS ABS_MT_POSITION_X 000005ee
0x5ee = 1518, so that's about right.
There are some parameters on the touch controller which adjust this scaling, and need to be in sync with what the kernel driver reports. The standard Linux mainline driver doesn't deal very well with those parameters being out of sync with the platform data. There are patches to address this which haven't gone upstream yet: https://github.com/atmel-maxtouch/linux/commit/002438d207
If, when you move your finger to the far right, the touch is still on the screen, you could probably correct it by doing
output.x = raw.x / scale
where scale is the ratio of the reported to the desired coordinates. You can't do it the other way round, because the lower input layers will throw away reports outside of the screen.
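For example (made-up numbers): if the driver declares max X = 1535 but touches at the far right edge only ever report around 1280, then scale ≈ 1280 / 1535 ≈ 0.83, and output.x = raw.x / 0.83 stretches the reports back out to the full width.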
A proper fix would be to fix the bug in the kernel driver, or adjust the range settings on the touch controller.
You don't say what particular device it is, so it's difficult to help further.