I have an XML document which I am parsing to display its values.
<question>
The frequency of tuning fork is 1000 Hz and velocity of sound in air 300 ms-1.The distance travelled by sound while the fork executes 500 oscillations is
</question>
In this, as you can see, 300 ms to the power -1 (ms⁻¹) is the exact value, but when parsed it is not displayed that way.
I also need to display 10^8 as 10 to the power 8 (with a superscript).
How can I display these values? I tried including the ASCII codes, but that does not work.
Please help, and thanks in advance.
I want to identify the wheel and crank sensor data from the 11-byte packet. I have tried to parse the 11-byte hex data received in our mobile application according to the field breakdown in the link below.
https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Characteristics/org.bluetooth.characteristic.csc_measurement.xml
For instance, I have tried the following:
Hex Data: 0x03 6D010000 FC7E 2C01 F87E
Flags - 03 -> 0000 0011 -> 8 bits; both flag bits are set, hence we can get both the wheel and crank values.
Cumulative Wheel Revolutions - 6D 01 00 00 -> 32 bits; converting to decimal we get -1828782080
Last Wheel Event Time - FC 7E -> 16 bits; converting to decimal we get 64638
Cumulative Crank Revolutions - 2C 01 -> 16 bits; converting to decimal we get 11265
Last Crank Event Time - F8 7E -> 16 bits; converting to decimal we get 63614
I am unable to get the actual wheel and crank measurement values from the BLE data. Is the procedure above, as I understood it from the reference link, correct, or am I going wrong somewhere? I have put in my best effort to dissect and parse the data, but unfortunately I cannot reach the solution. Kindly guide me through this process. What do I have to do to get the right values? Am I supposed to multiply by some number? I have tried different combinations but still cannot get it. The device I am using is the SunDing 515 cycling speed and cadence sensor with Bluetooth Low Energy.
From your data and from the data sheet, we see that the values are unsigned integers (uint32 or uint16). None of your values should be negative.
Bluetooth values are usually little-endian rather than big-endian.
Example:
6D010000 should be read 00 00 01 6D = 365
FC7E should be read 7E FC = 32508
2C01 should be read 01 2C = 300
F87E should be read 7E F8 = 32504
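Putting the answer above together, a minimal parsing sketch in Java (the field layout is taken from the CSC Measurement XML linked in the question; the class and method names are mine) could look like this:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class CscParser {
    // Returns { wheelRevs, wheelEventTime, crankRevs, crankEventTime },
    // with -1 for any field the flags mark as absent.
    public static long[] parse(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN);
        int flags = buf.get() & 0xFF;
        long wheelRevs = -1, wheelTime = -1, crankRevs = -1, crankTime = -1;
        if ((flags & 0x01) != 0) {                    // Wheel Revolution Data present
            wheelRevs = buf.getInt() & 0xFFFFFFFFL;   // uint32
            wheelTime = buf.getShort() & 0xFFFF;      // uint16, units of 1/1024 s
        }
        if ((flags & 0x02) != 0) {                    // Crank Revolution Data present
            crankRevs = buf.getShort() & 0xFFFF;      // uint16
            crankTime = buf.getShort() & 0xFFFF;      // uint16, units of 1/1024 s
        }
        return new long[] { wheelRevs, wheelTime, crankRevs, crankTime };
    }
}
```

Fed the example packet 0x03 6D010000 FC7E 2C01 F87E, this yields 365, 32508, 300, 32504, matching the little-endian readings above.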
I don't know if you ever figured out what you were looking for, but the endianness point really helped me out. The basic process is to read the data twice, then use the difference in event time and the difference in crank revolutions (I only did cranks), multiplying by 1024 since the event time is in units of 1/1024 s (as per the spec).
So, assuming from the little-endian example that you have:
300 for the Cumulative Crank Revolutions (CCR) and 23504 for the Last Crank Event Time (LCET; the unit has a resolution of 1/1024 s). Read the data again and you will get a slightly higher CCR (say 301 or 302) and a much higher LCET, say 24704. Subtract the older LCET from the newer one (24704 - 23504) to get 1200, and the lower CCR from the higher CCR (302 - 300) to get 2. Then multiply the CCR difference by 1024 and divide by the LCET difference, giving 1.7. That is your rotations per second. Multiply by 60 to get rotations per minute (102.4).
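That arithmetic condenses into a small helper (a sketch; the method name and signature are mine):

```java
public class Cadence {
    // revsDelta:  crank revolutions between two readings (CCR difference)
    // ticksDelta: event-time difference in 1/1024 s units (LCET difference)
    public static double rpm(long revsDelta, long ticksDelta) {
        double revsPerSecond = revsDelta * 1024.0 / ticksDelta;   // ticks -> seconds
        return revsPerSecond * 60.0;                              // per second -> per minute
    }
}
```

With the example numbers, rpm(302 - 300, 24704 - 23504) gives 102.4.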
Dennis Rutherford's answer points in the right direction, but in my opinion we should take into account the mechanics of cycling: if someone is climbing uphill at 20 rpm (I know this is extreme, but it can happen!), that's less than 1/3 of a crank rotation per second. Multiplying by 60 would yield too highly variable data in that case.
The Event times have high enough resolution (1/1024s), but the wheel and crank revolutions are just simple counts, because traditional magnetic sensors cannot provide fractional revolutions.
Therefore my solution is a sliding window to calculate the instantaneous cadence: if we wait enough time (20-30 seconds) then we can settle for a less instantaneous but more stable reading.
It's also important to deal properly with overflow of the Event Time values, because a uint16 counter at 1/1024 s resolution overflows every 64 seconds. So if the last event time is lower than the first event time, an overflow happened, and for our calculation we need to bump the last event time by 64 seconds (65536 ticks). Also note that our time window cannot be too large, otherwise multiple overflows can fuse together and the calculation will be off.
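The overflow handling described above can be sketched like this (assuming at most one wrap inside the window; the names are mine):

```java
public class EventTime {
    private static final int WRAP = 65536;   // uint16 range; 64 s at 1/1024 s per tick

    // Elapsed ticks between two uint16 event times, assuming at most one wrap.
    public static int elapsedTicks(int first, int last) {
        int diff = last - first;
        if (diff < 0) diff += WRAP;   // the counter overflowed once in between
        return diff;
    }
}
```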
Lastly: please check the 1826 GATT Service's 2AD2 "Indoor Bike Data" GATT Characteristic, because if you are lucky (I was not), you'll get an instantaneous cadence supplied there and save yourself all these shenanigans.
As part of a larger application I am currently working on a decibel meter that takes the average sound level over a 10-second timespan.
To achieve this I made a CountDownTimer of 10,000 milliseconds that ticks every 100 milliseconds.
In each onTick event I update the text field that shows the time left, and I also update the real-time decibel value.
My issue, however, is converting the maximum amplitude to decibels. I found the formula "power_db = 20 * log10(amp / amp_ref);" here on Stack Overflow and I understand how it works, but I always seem to end up with a negative decibel value.
I understand that this is because of a wrong amp_ref value, but I am absolutely stumped on which one to use. I found a lot of different values on the web and none seem to do the trick.
Does anyone have any idea which reference amplitude I should use to get a correct decibel reading on my meter? The phone I am testing on is a Google Nexus 5. For now it would be good enough if the value were really accurate only on this phone, if that helps.
The code I have in my onTick event is the following (I removed the formula for now since it seemed to be wrong anyway):
public void onTick(long ms) {
    meetBtn.setText(String.valueOf((ms / 1000) + 1));
    amplitude = mRecorder.getMaxAmplitude();
    decibelView.setText(String.valueOf(amplitude));
}
If anyone has any tips or needs more information, please let me know!
Thanks in advance! :)
Negative decibels are fine, because decibels are a relative measure anyway. In fact it's common practice to take the maximum possible amplitude as the reference point, with the result that decibel values go from 0 downward into negative territory. Pro systems usually indicate positive decibels as an overload, where clipping and distortion of the sound may occur.
So, for example, if your amplitude range is 0 to 1 (the accepted float-PCM standard), then your amp_ref would be 1 and your decibel values will go from some negative floor that depends on the bit depth of your source (e.g. about -186 dB for a 32-bit source, or about -90 dB for a 16-bit source) up to 0 dB. This is normal!
Edit: actually, the float-PCM amplitude range is -1 to 1, but the decibel calculation "drops" the minus sign (via the absolute value) and gives the same result for both negative and positive amplitudes.
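As a sketch of this convention for the Android case above (getMaxAmplitude() returns values up to 32767, so using amp_ref = 32768 is an assumption on my part; the class name is mine):

```java
public class Dbfs {
    // dB relative to full scale for a 16-bit amplitude (amp_ref = 32768).
    public static double fromAmplitude(int amplitude) {
        if (amplitude == 0) return Double.NEGATIVE_INFINITY;   // silence
        return 20.0 * Math.log10(Math.abs(amplitude) / 32768.0);
    }
}
```

A full-scale amplitude gives 0 dB, and e.g. an amplitude of 3277 (one tenth of full scale) gives about -20 dB, consistent with the negative values the questioner is seeing.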
I'm trying to develop an app that calculates the reverberation time of a room and displays it on the screen.
What I've done so far is:
record the audio into a WAV file
extract the bytes from the WAV file and convert them to double
plot the data obtained following this equation: SPL = 20 log(samples / 20 μPa)
then from the resulting plot I can obtain the RT60 easily
The point is that I'm not really sure whether what I'm doing makes any sense: wherever I search for information, the RT is obtained by octave (or third-octave) bands, while in my case I'm not doing anything with frequency; I'm just plotting the level against time, getting something like this:
So my question is, is there anything I'm missing?
Should the "samples" value in the SPL formula be something else? What I'm doing to obtain them is:
double audioSample = (double) (array[i+1] << 8 | array[i] & 0xff)/ 32767.0;
and then I place the [-1, +1] values that I obtain directly into the formula.
For what frequency am I theoretically plotting the RT?
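For reference, that byte-pair conversion can be wrapped in a small helper (a sketch assuming 16-bit little-endian PCM; the class name is mine, and the 32767.0 divisor is kept from the snippet above):

```java
public class WavSamples {
    // Combine little-endian 16-bit PCM byte pairs into normalized doubles.
    public static double[] toDoubles(byte[] array) {
        double[] out = new double[array.length / 2];
        for (int i = 0; i < out.length; i++) {
            // The high byte sign-extends; the low byte is masked to stay unsigned.
            int sample = (array[2 * i + 1] << 8) | (array[2 * i] & 0xff);
            out[i] = sample / 32767.0;
        }
        return out;
    }
}
```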
Thanks
You have to use a frequency band common for voice: you can use 500 Hz, 1000 Hz, or 2000 Hz, or an average of the three, as is customary for RT60 calculations. Oliver.
I am trying to find out how to get the altitude above mean sea level.
At this time, the API returns the altitude above the ellipsoid.
So, does anyone know the formula or calculation to convert the altitude value
above the ellipsoid to the altitude above mean sea level?
Thank you for any help.
As you mentioned, GPS returns the altitude as an offset from the WGS84 reference ellipsoid, but most people want to see mean sea level (MSL), and the two frequently don't agree. The most common approach is to look up the delta (the geoid height) in a table and use it, together with the height from GPS, to compute MSL.
There's some java code here: https://github.com/NASAWorldWind/WorldWindJava/blob/develop/src/gov/nasa/worldwind/util/EGM96.java. The other functions that it uses from Worldwind aren't that complicated, so you could probably use most of the code unmodified, and the rest you could adapt if you're working in Java and their license meets your needs.
It uses information from the EGM 96 data set (link here if you're interested -- not strictly necessary though), which you can download here: https://github.com/jleppert/egm96/blob/master/WW15MGH.DAC. You will want the WW15MGH.DAC file. It's a binary file full of 16-bit signed integers. You can use the Java example to show you how to access the data in the file. They also provide a Fortran example if that's your thing. :-)
Here's the information on the file from their readme.
Data Description for 15 minute worldwide binary geoid height file:
---- FILE: WW15MGH.DAC
The total size of the file is 2,076,480 bytes. This file was created
using an INTEGER2 data type format and is an unformatted direct access
file. The data on the file is arranged in records from north to south.
There are 721 records on the file starting with record 1 at 90 N. The
last record on the file is at latitude 90 S. For each record, there
are 1,440 15 arc-minute geoid heights arranged by longitude from west to
east starting at the Prime Meridian (0 E) and ending 15 arc-minutes west
of the Prime Meridian (359.75 E). On file, the geoid heights are in units
of centimeters. While retrieving the Integer2 values on file, divide by
100 and this will produce a geoid height in meters.
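Based only on that readme, the index arithmetic for a nearest-node lookup could be sketched like this (the names are mine, this does no interpolation between grid nodes, and the byte order is an assumption on my part -- verify against the Java example):

```java
public class Egm96Grid {
    // Layout from the readme: 721 rows from 90 N to 90 S in 0.25-degree steps,
    // 1440 columns from 0 E eastward, each value an int16 in centimeters.
    public static long byteOffset(double lat, double lon) {
        int row = (int) Math.round((90.0 - lat) / 0.25);                           // 0 .. 720
        int col = (int) Math.round((((lon % 360.0) + 360.0) % 360.0) / 0.25) % 1440; // 0 .. 1439
        return 2L * ((long) row * 1440 + col);   // two bytes per value
    }

    public static double toMeters(short rawCentimeters) {
        return rawCentimeters / 100.0;           // readme: divide by 100
    }
}
```

You would then seek to byteOffset(lat, lon) in WW15MGH.DAC, read a 16-bit integer there (e.g. with RandomAccessFile.readShort(), which is big-endian), and convert it with toMeters. The MSL height is then the GPS (ellipsoidal) height minus this geoid height.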
I am using the AudioRecord class to analyze raw PCM bytes as they come in from the mic.
That is working nicely. Now I need to convert the PCM bytes into decibels.
I have a formula that converts sound pressure in Pa into dB:
db = 20 * log10(Pa / ref_Pa)
So the question is: the bytes I am getting from AudioRecord's buffer -- what are they? Amplitude, pascals, sound pressure, or something else?
I tried putting the values into the formula, but it comes back with very high dB values, so I do not think that is right.
thanks
Disclaimer: I know little about Android.
Your device is probably recording in mono at 44,100 samples per second (maybe fewer), using two bytes per sample. So your first step is to combine each pair of bytes in your original data into a two-byte integer (I don't know how this is done in Android).
You can then compute the decibel value (relative to the peak) of each sample by first taking the normalized absolute value of the sample and passing it to your dB function:
double db = 20 * Math.log10(Math.abs(sampleVal) / 32768.0);
A value near the peak (e.g. +32767 or -32768) will have a dB value near 0. A value of 3277 (0.1 of full scale) will have a dB value of -20; a value of 327 (0.01) will have a dB value of -40, etc.
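Combining the byte-pairing step with the formula, a per-sample sketch (assuming mono 16-bit little-endian PCM; the names are mine):

```java
public class PcmDb {
    // One 16-bit little-endian PCM sample (two bytes) -> dB relative to the peak.
    public static double sampleDb(byte lo, byte hi) {
        int sample = (hi << 8) | (lo & 0xff);   // high byte sign-extends
        if (sample == 0) return Double.NEGATIVE_INFINITY;
        return 20.0 * Math.log10(Math.abs(sample) / 32768.0);
    }
}
```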
The problem is likely the definition of the "reference" sound pressure at the mic. I have no idea what it would be or if it's available.
The only audio application I've ever used defined 0 dB as "full volume", when the samples were at the positive or negative max value (in unsigned 16 bits, that'd be 0 and 65535). To get this into dB I'd probably do something like this:
// assume input_sample is in the range 0 to 65535
sample = (input_sample * 10.0) - 327675.0;        // center the range around zero
db = 20.0 * log10(abs(sample) / 327675.0);        // note the factor 20 and the absolute value
I don't know if that's right, but it feels right to mathematically challenged me. As input_sample approaches the "middle" (32767.5), the result will look more and more like negative infinity.
Now that I think about it, though, if you want a SPL or something that might require different trickery like doing RMS evaluation between the zero crossings, again something that I could only guess at because I have no idea how it really works.
The reference pressure in Leq (sound pressure level) calculations is 20 micro-Pascal (rms).
To measure absolute Leq levels, you need to calibrate your microphone using a calibrator. Most calibrators fit 1/2" or 1/4" microphone capsules, so I have my doubts about calibrating the microphone on an Android phone. Alternatively, you may be able to use the microphone sensitivity (mV/Pa) and then calibrate the voltage level going into the ADC. Even less reliable results could be had by comparing the Android values with the measured sound level of a diffuse stationary sound field using a sound level meter.
Note that in Leq calculations you normally use the RMS values. A single sample's value doesn't mean much.
I held my sound level meter right next to the mic on my Google Ion and went 'Woooooo!' and noted that clipping occurred at about 105 dB SPL. Hope this helps.
The units are whatever units are used for the reference reading. In the formula, the reading is divided by the reference reading, so the units cancel out and no longer matter.
In other words, decibels is a way of comparing two things, it is not an absolute measurement. When you see it used as if it is absolute, then the comparison is with the quietest sound the average human can hear.
In our case, it is a comparison to the highest reading the device handles (thus, every other reading is negative, or less than the maximum).