BLE Heart Rate Sensor Value Interpretation - Android

I have an Android App where I get Heart Rate Measurements from a Polar H10 Device.
I'm totally lost on how to interpret the heart rate. Various links to the bluetooth.com site unfortunately result in 404 errors.
The characteristic's value is, for example,
[16, 59, 83, 4]
From what I understood, the second byte (59) is the heart rate in BPM. But it does not seem to be a plain decimal value, as it goes up to 127 and then continues with -127, -126, -125, ... It is not hex either.
I tried (in kotlin)
characteristic.value[1].toUInt()
characteristic.value[1].toInt()
characteristic.value[1].toShort()
characteristic.value[1].toULong()
characteristic.value[1].toDouble()
All values freak out as soon as the -127 appears.
Do I have to convert the 59 to binary (59 = 111011) and read the value from there? Please give me some insight.
### Edit (12th April 2021) ###
What I do to get those values is a BluetoothDevice.connectGatt().
Then hold the GATT.
In order to get heart rate values I look for
Service 0x180d and its
characteristic 0x2a37 and its only
descriptor 0x2902.
Then I enable notifications by setting 0x01 on the descriptor. I then get ongoing events in the GattClientCallback.onCharacteristicChanged() callback. I will add a screenshot below with all data.
From what I understood, the response should be 6 bytes long instead of 4, right? What am I doing wrong?
In the picture you can see the characteristic at the very top. It is linked to the service 180d, and at the bottom the characteristic holds the 4-byte value.
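In code, that setup is roughly the following (a sketch of what I described above, with the 16-bit IDs expanded using the standard Bluetooth base UUID):
val service = gatt.getService(UUID.fromString("0000180d-0000-1000-8000-00805f9b34fb"))
val characteristic = service.getCharacteristic(UUID.fromString("00002a37-0000-1000-8000-00805f9b34fb"))
// deliver updates to GattClientCallback.onCharacteristicChanged()
gatt.setCharacteristicNotification(characteristic, true)
// write ENABLE_NOTIFICATION_VALUE (0x01, 0x00) to the 0x2902 descriptor
val cccd = characteristic.getDescriptor(UUID.fromString("00002902-0000-1000-8000-00805f9b34fb"))
cccd.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE)
gatt.writeDescriptor(cccd)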

See Heart Rate Value in BLE for the links to the documents. As in that answer, here's the decode:
Byte 0 - Flags: 16 (0001 0000)
Bits are numbered from LSB (0) to MSB (7).
Bit 0 - Heart Rate Value Format: 0 => UINT8 beats per minute
Bits 1-2 - Sensor Contact Status: 00 => Not supported or detected
Bit 3 - Energy Expended Status: 0 => Not present
Bit 4 - RR-Interval: 1 => One or more values are present
So the byte after the flags is the heart rate in UINT8 format, and the two bytes after that are an RR interval.
To read this in Kotlin:
characteristic.getIntValue(FORMAT_UINT8, 1)
This returns a heart rate of 59 bpm.
And ignore the other two bytes unless you want the RR.
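For a fuller decode in Kotlin, something like the sketch below should work (it assumes no Energy Expended field, since bit 3 of the flags is 0 here; the 1/1024 s unit for RR values comes from the Heart Rate Service specification):
val flags = characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT8, 0)
val hrIs16Bit = (flags and 0x01) != 0   // bit 0 selects UINT8 vs UINT16 for the heart rate

val heartRate = if (hrIs16Bit)
    characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT16, 1)
else
    characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT8, 1)   // 59 bpm for [16, 59, 83, 4]

// RR intervals follow when bit 4 is set; each one is a UINT16 in units of 1/1024 s
if ((flags and 0x10) != 0) {
    var offset = if (hrIs16Bit) 3 else 2
    while (offset + 1 < characteristic.value.size) {
        val rr = characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT16, offset)
        val rrSeconds = rr / 1024.0   // for [16, 59, 83, 4]: 1107 / 1024 ≈ 1.08 s
        offset += 2
    }
}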

It seems I found a way, by retrieving the value as follows:
val heartRateDecimal = characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT8, 1)
Two things are important:
first - the format UINT8. (I wasn't sure when to use UINT8 and when UINT16; I actually thought I needed UINT16 because the first byte is 16, see the question above. That first byte is the flags field, though, and its bit 0 tells you which format the heart rate uses: 0 means UINT8, 1 means UINT16.)
second - the offset parameter 1
What I now get is an Integer even beyond 127 -> 127, 128, 129, 130, ...

Related

IOException: write failed: EINVAL (Invalid argument) on UVC FileOutputStream in Kotlin

I'm trying to write Android Camera stream frames to the UVC Buffer using FileOutputStream. For context: the UVC driver is working on the device, and it has a custom-built kernel.
I get 24 frames per second using imageAnalyzer:
imageAnalyzer = ImageAnalysis.Builder()
.setTargetAspectRatio(screenAspectRatio)
.setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_YUV_420_888)
...
imageAnalysis.setAnalyzer(cameraExecutor) { image ->
    val buffer = image.planes[0].buffer
    val data = buffer.toByteArray()
    ...
}
Then based on UVC Specifications I build the header of the frame:
val header = ByteBuffer.allocate(26)
val frameSize = image.width * image.height * ImageFormat.getBitsPerPixel(image.format) / 8
val EOH = 0x01
val ERR = 0x00
val STI = 0x01
val REST = 0x00
val SRC = 0x00
val PTS = (System.currentTimeMillis() - referenceTime) * 10000
val endOfFrame = 0x01
val FID = (frameId).toByte()
Add all of the above to the header
header.putInt(frameSize)
header.putShort(image.width.toShort())
header.putShort(image.height.toShort())
header.put(image.format.toByte())
header.put(((EOH shl 7) or (ERR shl 6) or (STI shl 5) or (REST shl 4) or SRC).toByte())
header.putLong(PTS)
header.put(endOfFrame.toByte())
header.put(FID)
Open the FileOutputStream and try to write the header and the image:
val uvcFileOutputStream = FileOutputStream("/dev/video3", true)
uvcFileOutputStream.write(header.toByteArray() + data)
uvcFileOutputStream.close()
Tried to tweak the header/payload but I'm still getting the same error:
java.io.IOException: write failed: EINVAL (Invalid argument)
at libcore.io.IoBridge.write(IoBridge.java:654)
at java.io.FileOutputStream.write(FileOutputStream.java:401)
at java.io.FileOutputStream.write(FileOutputStream.java:379)
What could I be doing wrong? Is the header format wrong?
I don't know the answer directly, but I was curious to look and have some findings. I focused on the Kotlin part, as I don't know about UVC and because I suspect the problem to be there.
Huge assumption
Since there's no link to the specification I just found this source:
https://www.usb.org/document-library/video-class-v15-document-set
within the ZIP I looked at USB_Video_Payload_Frame_Based_1.5.pdf
Page 9, Section 2.1 Payload Header
I'm basing all my findings on this, so if I got this wrong, everything else is wrong too. It could still lead to a solution, though, if you validate the same things.
Finding 1: HLE is wrong
HLE is the length of the header, not of the image data. You're putting the whole image size there (all the RGB byte data). Table 2-1 describes how the PTS and SCR bits control whether the PTS and SCR fields are present. This means that if they're 0 in BFH, the header is shorter. This is why HLE is either 2, 6, or 12.
Confirming this: the field is 1 byte long (each row of Table 2-1 is 1 byte / 8 bits), which means the header can only be up to 255 bytes long.
Finding 2: the whole header is misaligned
Since you're putting HLE with putInt, you're writing 4 bytes; from that point on everything in the header is misaligned, the flags end up depending on the image size, and so on.
Finding 3: SCR and PTS flag inconsistencies
Assuming I was wrong about 1 and 2, you're still setting the SRC and PTS bits to 0, yet pushing a long (8 bytes) for the PTS.
Finding 4: wrong source
Actually, something is really off at this point, so I looked at your referenced GitHub ticket and found a better example of what your code represents.
Sadly, I was unable to match up your header structure exactly, so I'm going to assume that you are implementing something very similar to what I was looking at, because all the PDFs had pretty much the same header table.
Finding 5: HLE is wrong or missing
Even assuming you do need to start with the image size, the HLE is still wrong or missing: what you write there relates to the image format, not to the header length, which should follow from the SCR and PTS flags.
Finding 6: BFH is missing fields
If you're following one of these specs, the BFH is always a single byte of 8 flag bits. This is confirmed by how the shls put it together in your code and by the descriptions of each flag (flag = true/false, 1/0). Note that your code then writes endOfFrame and FID as separate whole bytes instead of packing them into that one BFH byte.
Finding 7: PTS is wrong
Multiplying something that is millisecond-precise by 10000 looks strange. The doc says "at most 450 microseconds"; if you're trying to convert between ms and µs, the multiplier would be just 1000. Either way, the PTS is only an int (4 bytes) large, definitely not a long.
Finding 8: coding assistant?
After reading all this, I have a feeling that Copilot, ChatGPT or another generator wrote your original code. Your search for a reputable source seems to confirm this.
Finding 9: reputable source example
If I were you, I would try to find a working example of this on GitHub, using a keyword search like this: https://github.com/search?q=hle+pts+sti+eoh+fid+scr+bfh&type=code The language doesn't really matter, since these are binary file/stream formats; regardless of language they should be produced and read the same way.
Finding 10: bit order
Have a look at big-endian / little-endian byte order. If you look at Table 2-1 in the PDF I linked, you can see which bits should map to which bytes. You can specify the order you need on the buffer BEFORE writing to it; by the looks of the PDF it is header.order(ByteOrder.LITTLE_ENDIAN). Conventionally, bit 0 is the lowest bit and bit 31 is the highest (I can't cite a source, I seem to remember this from uni): bit 0 is the 2^0 component (1) and bit 7 is the 2^7 component (128). Reversing that would make things much harder to compute and comprehend. So PTS [7:0] means that byte holds the lowest 8 bits of the 32-bit PTS number.
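Putting findings 1, 2, 3, 7 and 10 together, a header built along those lines might look roughly like this. It is only a sketch against the PDF I linked (Table 2-1); the function and parameter names are mine, and the bit positions (FID in bit 0 up to EOH in bit 7) should be double-checked against your spec:
import java.nio.ByteBuffer
import java.nio.ByteOrder

fun buildPayloadHeader(fid: Int, endOfFrame: Boolean, ptsTicks: Int, stc: Int, sofCounter: Short): ByteArray {
    // 12 bytes total: HLE (1) + BFH (1) + PTS (4) + SCR (6), multi-byte fields little-endian
    val header = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN)
    header.put(12.toByte())                   // HLE: length of this header, not of the image
    var bfh = fid and 0x01                    // bit 0: FID, toggles every frame
    if (endOfFrame) bfh = bfh or (1 shl 1)    // bit 1: EOF
    bfh = bfh or (1 shl 2)                    // bit 2: PTS field present
    bfh = bfh or (1 shl 3)                    // bit 3: SCR field present
    bfh = bfh or (1 shl 7)                    // bit 7: EOH (bit 5, STI, only for still images)
    header.put(bfh.toByte())
    header.putInt(ptsTicks)                   // PTS: 4 bytes, not a long
    header.putInt(stc)                        // SCR: 4-byte source time clock
    header.putShort(sofCounter)               //      plus a 2-byte SOF token counter
    return header.array()
}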
If you link to your specification source, I can revise what I wrote, but likely will find very similar guesses.

Any acoustic echo cancellation (AEC) library capable of 48 kHz?

I'm developing a VoIP application that runs at a sampling rate of 48 kHz. Since it uses Opus, which runs at 48 kHz internally, as its codec, and most current Android hardware natively runs at 48 kHz, AEC is the only piece of the puzzle I'm missing. I've already found the WebRTC implementation, but I can't seem to figure out how to make it work. It looks like it corrupts memory randomly and crashes the whole thing sooner or later. When it doesn't crash, the sound is kind of chunky, as if it's quieter for half of the frame. Here's my code that processes a 20 ms frame:
webrtc::SplittingFilter* splittingFilter;
webrtc::IFChannelBuffer* bufferIn;
webrtc::IFChannelBuffer* bufferOut;
webrtc::IFChannelBuffer* bufferOut2;
// ...
splittingFilter=new webrtc::SplittingFilter(1, 3, 960);
bufferIn=new webrtc::IFChannelBuffer(960, 1, 1);
bufferOut=new webrtc::IFChannelBuffer(960, 1, 3);
bufferOut2=new webrtc::IFChannelBuffer(960, 1, 3);
// ...
int16_t* samples=(int16_t*)data;
float* fsamples[3];
float* foutput[3];
int i;
float* fbuf=bufferIn->fbuf()->bands(0)[0];
// convert the data from 16-bit PCM into float
for(i=0;i<960;i++){
    fbuf[i]=samples[i]/(float)32767;
}
// split it into three "bands" that the AEC needs and for some reason can't do itself
splittingFilter->Analysis(bufferIn, bufferOut);
// split the frame into 6 consecutive 160-sample blocks and perform AEC on them
for(i=0;i<6;i++){
    fsamples[0]=&bufferOut->fbuf()->bands(0)[0][160*i];
    fsamples[1]=&bufferOut->fbuf()->bands(0)[1][160*i];
    fsamples[2]=&bufferOut->fbuf()->bands(0)[2][160*i];
    foutput[0]=&bufferOut2->fbuf()->bands(0)[0][160*i];
    foutput[1]=&bufferOut2->fbuf()->bands(0)[1][160*i];
    foutput[2]=&bufferOut2->fbuf()->bands(0)[2][160*i];
    int32_t res=WebRtcAec_Process(aecState, (const float* const*) fsamples, 3, foutput, 160, 20, 0);
}
// put the "bands" back together
splittingFilter->Synthesis(bufferOut2, bufferIn);
// convert the processed data back into 16-bit PCM
for(i=0;i<960;i++){
    samples[i]=(int16_t) (CLAMP(fbuf[i], -1, 1)*32767);
}
If I comment out the actual echo cancellation and just do the float conversion and band splitting back and forth, it doesn't corrupt the memory, doesn't sound weird, and runs indefinitely. (I do pass the far-end/speaker signal into the AEC; I just didn't want to clutter the question by including that code.)
I've also tried Android's built-in AEC. While it does work, it upsamples the captured signal from 16 kHz.
Unfortunately, there is no free AEC package that supports 48 kHz. So either move to 32 kHz or use a commercial AEC package at 48 kHz.

Explanation of how this MIDI lib for Android works

I'm using the library by @LeffelMania: https://github.com/LeffelMania/android-midi-lib
I'm a musician, but I've always done studio recordings, not MIDI, so I don't understand some things.
The thing I want to understand is this piece of code:
// 2. Add events to the tracks
// Track 0 is the tempo map
TimeSignature ts = new TimeSignature();
ts.setTimeSignature(4, 4, TimeSignature.DEFAULT_METER, TimeSignature.DEFAULT_DIVISION);
Tempo tempo = new Tempo();
tempo.setBpm(228);
tempoTrack.insertEvent(ts);
tempoTrack.insertEvent(tempo);
// Track 1 will have some notes in it
final int NOTE_COUNT = 80;
for(int i = 0; i < NOTE_COUNT; i++)
{
    int channel = 0;
    int pitch = 1 + i;
    int velocity = 100;
    long tick = i * 480;
    long duration = 120;
    noteTrack.insertNote(channel, pitch, velocity, tick, duration);
}
OK, I have 228 beats per minute, and I know that I have to insert each note after the previous one. What I don't understand is the duration... is it in milliseconds? It doesn't make sense if I keep the duration = 120 and set my BPM to 60, for example. I don't understand the velocity either.
MY SCOPE
I want to insert notes of X pitch with Y duration.
Could anyone give me some clue?
The way MIDI files are designed, notes are in terms of musical length, not time. So when you insert a note, its duration is a number of ticks, not a number of seconds. By default, there are 480 ticks per quarter note. So that code snippet is inserting 80 sixteenth notes since there are four sixteenths per quarter and 480 / 4 = 120. If you change the tempo, they will still be sixteenth notes, just played at a different speed.
If you think of playing a key on a piano, the velocity parameter is the speed at which the key is struck. The valid values are 1 to 127. A velocity of 0 means to stop playing the note. Typically a higher velocity means a louder note, but really it can control any parameter the MIDI instrument allows it to control.
A note in a MIDI file consists of two events: a Note On and a Note Off. If you look at the insertNote code you'll see that it is inserting two events into the track. The first is a Note On command at time tick with the specified velocity. The second is a Note On command at time tick + duration with a velocity of 0.
Pitch values also run from 0 to 127. If you do a Google search for "MIDI pitch numbers" you'll get dozens of hits showing you how pitch number relates to note and frequency.
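For example, the usual equal-temperament relation between MIDI note number and frequency, with A4 = note 69 = 440 Hz, looks like this in Kotlin:
import kotlin.math.pow

// Frequency in Hz of a MIDI note number (equal temperament, A4 = note 69 = 440 Hz)
fun noteToFrequency(note: Int): Double = 440.0 * 2.0.pow((note - 69) / 12.0)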
There is a nice description of timing in MIDI files here. Here's an excerpt in case the link dies:
In a standard MIDI file, there’s information in the file header about “ticks per quarter note”, a.k.a. “parts per quarter” (or “PPQ”). For the purpose of this discussion, we’ll consider “beat” and “quarter note” to be synonymous, so you can think of a “tick” as a fraction of a beat. The PPQ is stated in the last word of information (the last two bytes) of the header chunk that appears at the beginning of the file. The PPQ could be a low number such as 24 or 96, which is often sufficient resolution for simple music, or it could be a larger number such as 480 for higher resolution, or even something like 500 or 1000 if one prefers to refer to time in milliseconds.
What the PPQ means in terms of absolute time depends on the designated tempo. By default, the time signature is 4/4 and the tempo is 120 beats per minute. That can be changed, however, by a “meta event” that specifies a different tempo. (You can read about the Set Tempo meta event message in the file format description document.) The tempo is expressed as a 24-bit number that designates microseconds per quarter-note. That’s kind of upside-down from the way we normally express tempo, but it has some advantages. So, for example, a tempo of 100 bpm would be 600000 microseconds per quarter note, so the MIDI meta event for expressing that would be FF 51 03 09 27 C0 (the last three bytes are the Hex for 600000). The meta event would be preceded by a delta time, just like any other MIDI message in the file, so a change of tempo can occur anywhere in the music.
Delta times are always expressed as a variable-length quantity, the format of which is explained in the document. For example, if the PPQ is 480 (standard in most MIDI sequencing software), a delta time of a dotted quarter note (720 ticks) would be expressed by the two bytes 85 50 (hexadecimal).
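To make the duration question concrete, here is a small arithmetic sketch in Kotlin, assuming the default 480 ticks per quarter note described above:
// Ticks-to-time arithmetic from the explanation above
const val PPQ = 480.0   // ticks per quarter note, the default resolution mentioned above

fun tickMillis(bpm: Double): Double = 60_000.0 / bpm / PPQ   // length of one tick in ms

fun main() {
    val sixteenth = 120.0                      // 480 / 4 ticks, as in the question
    println(sixteenth * tickMillis(228.0))     // ≈ 65.8 ms per note at 228 bpm
    println(sixteenth * tickMillis(60.0))      // 250 ms per note at 60 bpm: same note value, slower tempo
}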

Meaning of values from Audiorecord.read()

I am trying to understand what the values obtained with AudioRecord.read() actually mean.
I am trying to create an app that will start recording sound when it detects an impulse (so I thought to set a threshold above which any sound is considered an impulse).
The problem is that I don't really know what the values stored in "data" represent when I call this method:
read = recorder.read(data, 0, bufferSize);
These are some of the values that I obtain:
[96, 2, 101, 3, 101, 2, 110, 1, -41, 2, -80, 2, -117, 2, 119, 2, -94, 0 .........]
The idea is to set the threshold from these values, but first I need to know what they represent.
Can you guys help me with this?
The data depends on the parameters you passed to the constructor: AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes)
sampleRateInHz is the number of samples per second. channelConfig is either MONO or STEREO, meaning 1 or 2 channels. audioFormat is PCM 8-bit or PCM 16-bit, meaning 8 or 16 bits per sample.
So the data is an array of samples. Each sample holds one value per channel, and each value is 8 or 16 bits, depending on what you asked for. No data will be skipped; it is always a fixed-size format.
So if you chose 1 channel and 8 bits, each byte is a single sample, and you should see sampleRateInHz samples per second. If you choose 16 bits, each sample is 2 bytes long. If you use 2 channels, the data goes in order channel 1 then channel 2 for each sample.
The individual values are the amplitude of the sound data when sampled at the requested frequency. See http://en.wikipedia.org/wiki/Pulse-code_modulation for more information on how it works.
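For example, assuming you record with ENCODING_PCM_16BIT and one channel, you can read into a ShortArray so each element is already one signed 16-bit amplitude and compare it against a threshold (a sketch; the helper name and threshold value are made up):
import android.media.AudioRecord
import kotlin.math.abs

// Returns true if any sample in the next buffer exceeds the threshold.
fun detectImpulse(recorder: AudioRecord, bufferSize: Int, threshold: Int = 10_000): Boolean {
    val data = ShortArray(bufferSize)
    val read = recorder.read(data, 0, bufferSize)   // number of shorts actually read
    for (i in 0 until read) {
        if (abs(data[i].toInt()) > threshold) return true
    }
    return false
}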

Polar Wearlink Bluetooth packet

I am looking at the code of a project called MyTracks:
http://code.google.com/r/jrgert-polar-bluetooth/source/browse/MyTracks/src/com/google/android/apps/mytracks/services/sensors/PolarMessageParser.java?r=ebc01faf49550bc9801633ff38bb3b8ddd6f5698
Now I am having problems with the method isValid(byte[] buffer). I don't understand what exactly it is checking here. We want to know if the first byte in the array is the header containing 0xFE. I don't quite understand the following lines:
boolean goodHdr = ((buffer[0] & 0xFF) == 0xFE);
boolean goodChk = ((buffer[2] & 0xFF) == (0xFF - (buffer[1] & 0xFF)));
return goodHdr && goodChk;
Any ideas?
Ewoks is correct, refer to this blog post:
http://ww.telent.net/2012/5/3/listening_to_a_polar_bluetooth_hrm_in_linux
"Digging into src/com/google/android/apps/mytracks/services/sensors/PolarMessageParser.java we find a helpful comment revealing that, notwithstanding Polar's ridiculous stance on giving out development info (they don't, is the summary) the Wearlink packet format is actually quite simple.
Polar Bluetooth Wearlink packet example
Hdr - Len - Chk - Seq - Status - HeartRate - RRInterval_16-bits
FE - 08 - F7 - 06 - F1 - 48 - 03 64
where
Hdr always = 254 (0xFE),
Chk = 255 - Len
Seq range 0 to 15
Status = Upper nibble may be battery voltage
bit 0 is Beat Detection flag."
& 0xFF simply converts the signed byte to an unsigned int for the comparison.
The first line checks whether the received buffer starts with 0xFE, as it should for this Polar Wearlink.
The second line checks that the checksum byte is correct: by the specification, its value is 255 minus the value written in the length byte.
Together these are a very simple verification that the messages are correct (a more complicated implementation would include a CRC or other verification methods). Cheers.
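Written out in Kotlin for clarity (a sketch; the heart rate index follows the packet layout quoted above, where byte 5 is the HeartRate field):
fun isValid(buffer: ByteArray): Boolean {
    if (buffer.size < 6) return false
    val goodHdr = (buffer[0].toInt() and 0xFF) == 0xFE                                  // Hdr is always 0xFE (254)
    val goodChk = (buffer[2].toInt() and 0xFF) == 0xFF - (buffer[1].toInt() and 0xFF)   // Chk = 255 - Len
    return goodHdr && goodChk
}

fun heartRate(buffer: ByteArray): Int = buffer[5].toInt() and 0xFF   // Hdr, Len, Chk, Seq, Status, then HeartRate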
