How to change the frequency of an equalizer? - android

Is it possible to change the frequency of the bands of an equalizer, or is it only possible to use 60 Hz, 230 Hz, 910 Hz, 3600 Hz, 14000 Hz?

I suppose you're talking about android.media.audiofx.Equalizer.
Different Android devices expose different numbers of frequency bands, and the bands' center frequencies are fixed by the device implementation (you can query them with getCenterFreq(band)). What you can set freely is each band's gain, as the docs say:
setBandLevel(short band, short level)
Sets the given equalizer band to
the given gain value.
Parameters
band - short: frequency band that will have the new gain. The numbering of the bands starts from 0 and ends at (number of bands - 1).
level - short: new gain in millibels that will be set to the given band. getBandLevelRange() will define the maximum and minimum values.
This answer from WoodyDev gives some example code to query the range supported by the device:
// New instance of equalizer (add it as a member of your class rather than a scoped instance like this)
Equalizer mEqualizer = new Equalizer(0, mMediaPlayer.getAudioSessionId());
// Get the number of bands available on your device
short bands = mEqualizer.getNumberOfBands();
// Get the gain level available for the bands
final short minEQLevel = mEqualizer.getBandLevelRange()[0];
final short maxEQLevel = mEqualizer.getBandLevelRange()[1];
You can find more opinions and help here: Number of bands in Android Equalizer
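To wire those range values to a UI, one simple approach (a hypothetical helper of mine, not from the answer above) is to map a slider position linearly onto the millibel range returned by getBandLevelRange():

```java
public class EqLevels {
    // Map a slider position (0..sliderMax) linearly onto the millibel range
    // [minEQLevel, maxEQLevel] returned by Equalizer.getBandLevelRange().
    static short progressToLevel(int progress, int sliderMax, short minEQLevel, short maxEQLevel) {
        return (short) (minEQLevel + (long) progress * (maxEQLevel - minEQLevel) / sliderMax);
    }

    public static void main(String[] args) {
        // A typical range is -1500..1500 mB; mid-slider maps to 0 mB (flat)
        System.out.println(progressToLevel(50, 100, (short) -1500, (short) 1500));
    }
}
```

The resulting value can then be passed to setBandLevel(band, level).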

Related

How to compare amplitude from Android with iOS

So I've read various posts and blogs on how to get the amplitude from the microphone on Android. Multiple posts suggested using this implementation:
minSize = AudioRecord.getMinBufferSize(
    44100,
    AudioFormat.CHANNEL_IN_MONO,
    AudioFormat.ENCODING_PCM_16BIT
)
audioRecord = AudioRecord(
    MediaRecorder.AudioSource.MIC,
    44100,
    AudioFormat.CHANNEL_IN_MONO,
    AudioFormat.ENCODING_PCM_16BIT,
    minSize
)
// Read one buffer and track the largest absolute sample value
val buffer = ShortArray(minSize)
audioRecord.read(buffer, 0, minSize)
var max = 0
for (s in buffer) {
    if (abs(s.toInt()) > max) {
        max = abs(s.toInt())
    }
}
return max.toDouble() // This being the amplitude
First question: What is the unit of the value I am getting returned? A typical value with "regular noise" is around 381. Is it, e.g., millivolts (mV)?
On iOS you are able to get averagePower and peakPower from the AudioRecorder which returns the average or max amplitude in dbFS.
Second question: Is it possible to do the same implementation that we have on Android on iOS?
Third question: Is it possible to do the same implementation that we have on iOS on Android?
To provide some context; As part of a research project we are looking for different sound patterns that might link a user to a specific context and in order to do so on a larger scale we need to be able to compare the amplitude from Android and iOS.
Fourth question: Regardless of the implementations mentioned above, is there a better way to compare soundwaves from microphones of both iOS and Android devices?
The easiest way to get the max amplitude on Android is a lot less code. Just call MediaRecorder.getMaxAmplitude(). It returns the max amplitude since the previous call. I'm not sure where you got the suggestion to use the call you did, that seems like the hard way.
The units aren't specified. It should correspond to the amount of pressure picked up by the mic, but since every model has different mics, amps, DACs, etc., it isn't going to be the same on all devices. All you are promised is that it will be between 0 and (2^x)-1, where x is the number of bits per sample you picked. I'm not sure why you'd expect millivolts; this isn't an electrical measurement. Sound is typically measured in dB or pascals.
For comparing iOS to Android and trying to do matching: what are you trying to do? The code you have just finds the max value. That's kind of uninteresting, unless all you're doing is an applause meter. If you're actually looking to compare soundwaves, wouldn't you be better off taking the Fourier transform and comparing in the frequency domain? The time domain is really messy for that.
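If you do want to compare Android's raw 16-bit peak with the dBFS values iOS reports, one option (my own assumption, not something either platform's docs prescribe) is to normalize the peak to full scale and convert to decibels:

```java
public class PeakToDbfs {
    // Convert the absolute peak of a 16-bit PCM buffer to dBFS,
    // where 0 dBFS corresponds to full scale (32768).
    static double peakToDbfs(int peakAbs) {
        if (peakAbs <= 0) return Double.NEGATIVE_INFINITY; // silence
        return 20.0 * Math.log10(peakAbs / 32768.0);
    }

    public static void main(String[] args) {
        // The question's "regular noise" example value of 381
        System.out.println(peakToDbfs(381)); // roughly -38.7 dBFS
    }
}
```

This at least puts both platforms on the same logarithmic scale, though the absolute readings will still differ because of different mic hardware and gain stages.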

Android - Get max safe stream volume

I have a use case to change the stream volume programmatically, but on newer Android versions, raising the volume above a certain limit (60% as per my observations, which corresponds to step 9 on most phones) results in a warning dialog:
Listening at high volume for a long time may damage your hearing. Tap OK to allow the volume
to be increased above safe levels
I couldn't find any documentation about this in the Android developer portal; all I could find are some random articles citing the European regulations, like this one:
According to regulations set by the European Committee for Electrotechnical Standarisation (CENELEC), all electronic devices capable of media playback sold after February 2013 must have a default output volume level of a maximum 85 dB. Users can choose to override the warning to increase the volume to a maximum of 100 dB, but in doing so the warning must re-appear after 20 hours of music playback.
So I need to figure out reliably what that number is, so I never make a volume change that would show this dialog. But I also don't want to just use step 9 as the max volume and then find out that it's not the right value for another phone. Does the Android API expose the max safe stream volume anywhere? If not, do the docs at least state the step number that corresponds to it for different phones?
Thanks!
There's a resource which holds the safe volume step: config_safe_media_volume_index
// .../overlay/frameworks/base/core/res/res/values/config.xml
<integer name="config_safe_media_volume_index">7</integer>
It is defined HERE
And it is used HERE
You can get it dynamically via:
int safeVolumeStep;
int safeVolumeStepResourceId =
        getResources().getIdentifier("config_safe_media_volume_index", "integer", "android");
if (safeVolumeStepResourceId != 0) {
    safeVolumeStep = getResources().getInteger(safeVolumeStepResourceId);
} else {
    Log.w("TESTS", "Resource config_safe_media_volume_index not found. Setting a hardcoded value");
    // We probably won't get here, because config_safe_media_volume_index is defined in AOSP;
    // it is not a vendor-specific resource.
    // Just in case, fall back to 60% of the max volume as the safe step.
    AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
    int maxVolume = audioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
    safeVolumeStep = (int) (maxVolume * 0.6f);
}
Log.d("TESTS", "Safe Volume Step: " + safeVolumeStep +
        " Safe volume step resourceID: " + Integer.toHexString(safeVolumeStepResourceId));
I tested this on a Galaxy S10 and I'm getting 9.

How to get smaller buffer size in multi-channel audio application with Oboe

I'm using Oboe 1.2 in an Android audio application. When I call getFramesPerBurst(), which gives the endpoint buffer size, I get the expected result (240 frames) if the number of output channels is set to 2. However, when I set 4 output channels, the value returned by getFramesPerBurst() is around 960 (!). Is that normal? Is it a hardware limitation (I tested on 4 different devices, with different OS versions)? A limitation of Oboe? I also notice that this value differs from the value given by the PROPERTY_OUTPUT_FRAMES_PER_BUFFER property of AudioManager from the AudioService.
oboe::AudioStreamBuilder builder;
if (!oboe::AudioStreamBuilder::isAAudioRecommended()) {
    builder.setAudioApi(oboe::AudioApi::OpenSLES);
}
builder.setSharingMode(oboe::SharingMode::Exclusive);
builder.setFormat(oboe::AudioFormat::Float);
builder.setChannelCount(4);
builder.setCallback(&_oboeCallback);
builder.setPerformanceMode(oboe::PerformanceMode::LowLatency);
oboe::Result result = builder.openStream(&_stream);
if (result == oboe::Result::OK) {
    int framePerBurst = _stream->getFramesPerBurst(); // around 960 for 4 channels, 240 for 2 channels
    _stream->setBufferSizeInFrames(2 * framePerBurst);
}
Unless you are connecting to an audio device which actually has 4 independent channels (e.g. a USB audio interface or DJ controller like this one) then your 4 channel stream will need to be mixed into an N channel stream where N is the number of channels in your audio device. This could be 2 (stereo) for headphones or 1 (mono) for a built-in speaker.
The mixer introduces latency and larger buffer sizes. This is the difference in buffer sizes you see when you request a channel count of 2 vs 4.
For the lowest latency always leave the channel count unspecified when creating the stream, then do any channel count conversion inside your own app. There's an example of this here.
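To illustrate that last suggestion, here is a minimal sketch of downmixing an interleaved 4-channel buffer to stereo inside the app, before handing data to a 2-channel stream. The channel pairing and the 0.5 mixing weights are my own assumptions for illustration, not Oboe's actual mixer behavior:

```java
public class Downmix {
    // Downmix an interleaved 4-channel float buffer to interleaved stereo.
    // Assumed layout: channels (0,2) feed the left output, (1,3) the right,
    // each pair averaged to avoid clipping.
    static float[] downmix4to2(float[] in, int frames) {
        float[] out = new float[frames * 2];
        for (int f = 0; f < frames; f++) {
            out[2 * f]     = 0.5f * (in[4 * f]     + in[4 * f + 2]); // left
            out[2 * f + 1] = 0.5f * (in[4 * f + 1] + in[4 * f + 3]); // right
        }
        return out;
    }
}
```

Doing this conversion yourself keeps the stream's channel count matched to the device, so the low-latency (FAST) path stays available.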

Explanation of how this MIDI lib for Android works

I'm using the library by @LeffelMania: https://github.com/LeffelMania/android-midi-lib
I'm a musician, but I've always made studio recordings, not MIDI, so I don't understand some things.
The thing I want to understand is this piece of code:
// 2. Add events to the tracks
// Track 0 is the tempo map
TimeSignature ts = new TimeSignature();
ts.setTimeSignature(4, 4, TimeSignature.DEFAULT_METER, TimeSignature.DEFAULT_DIVISION);
Tempo tempo = new Tempo();
tempo.setBpm(228);
tempoTrack.insertEvent(ts);
tempoTrack.insertEvent(tempo);
// Track 1 will have some notes in it
final int NOTE_COUNT = 80;
for (int i = 0; i < NOTE_COUNT; i++)
{
    int channel = 0;
    int pitch = 1 + i;
    int velocity = 100;
    long tick = i * 480;
    long duration = 120;
    noteTrack.insertNote(channel, pitch, velocity, tick, duration);
}
OK, I have 228 beats per minute, and I know that I have to insert each note after the previous one. What I don't understand is the duration. Is it in milliseconds? That doesn't make sense if I keep duration = 120 and set my BPM to 60, for example. Nor do I understand the velocity.
MY SCOPE
I want to insert notes of pitch X with duration Y.
Could anyone give me a clue?
The way MIDI files are designed, notes are in terms of musical length, not time. So when you insert a note, its duration is a number of ticks, not a number of seconds. By default, there are 480 ticks per quarter note. So that code snippet is inserting 80 sixteenth notes since there are four sixteenths per quarter and 480 / 4 = 120. If you change the tempo, they will still be sixteenth notes, just played at a different speed.
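As a sanity check on that explanation, tick durations convert to wall-clock time through the tempo and the PPQ; the 228 BPM, PPQ 480, and 120-tick values below come from the question's snippet:

```java
public class MidiTiming {
    // Milliseconds covered by a number of ticks at a given tempo (BPM)
    // and resolution (PPQ, ticks per quarter note).
    static double ticksToMs(long ticks, double bpm, int ppq) {
        double msPerQuarter = 60000.0 / bpm; // one quarter note in ms
        return ticks * msPerQuarter / ppq;
    }

    public static void main(String[] args) {
        // The question's notes: duration 120 ticks at 228 BPM, PPQ 480
        System.out.println(ticksToMs(120, 228, 480)); // ~65.8 ms
        // The same 120 ticks at 60 BPM last longer, but are still sixteenth notes
        System.out.println(ticksToMs(120, 60, 480));  // 250 ms
    }
}
```

This is exactly why changing the BPM stretches the notes without changing their musical length.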
If you think of playing a key on a piano, the velocity parameter is the speed at which the key is struck. The valid values are 1 to 127. A velocity of 0 means to stop playing the note. Typically a higher velocity means a louder note, but really it can control any parameter the MIDI instrument allows it to control.
A note in a MIDI file consists of two events: a Note On and a Note Off. If you look at the insertNote code you'll see that it is inserting two events into the track. The first is a Note On command at time tick with the specified velocity. The second is a Note On command at time tick + duration with a velocity of 0.
Pitch values also run from 0 to 127. If you do a Google search for "MIDI pitch numbers" you'll get dozens of hits showing you how pitch number relates to note and frequency.
There is a nice description of timing in MIDI files here. Here's an excerpt in case the link dies:
In a standard MIDI file, there’s information in the file header about “ticks per quarter note”, a.k.a. “parts per quarter” (or “PPQ”). For the purpose of this discussion, we’ll consider “beat” and “quarter note” to be synonymous, so you can think of a “tick” as a fraction of a beat. The PPQ is stated in the last word of information (the last two bytes) of the header chunk that appears at the beginning of the file. The PPQ could be a low number such as 24 or 96, which is often sufficient resolution for simple music, or it could be a larger number such as 480 for higher resolution, or even something like 500 or 1000 if one prefers to refer to time in milliseconds.
What the PPQ means in terms of absolute time depends on the designated tempo. By default, the time signature is 4/4 and the tempo is 120 beats per minute. That can be changed, however, by a “meta event” that specifies a different tempo. (You can read about the Set Tempo meta event message in the file format description document.) The tempo is expressed as a 24-bit number that designates microseconds per quarter-note. That’s kind of upside-down from the way we normally express tempo, but it has some advantages. So, for example, a tempo of 100 bpm would be 600000 microseconds per quarter note, so the MIDI meta event for expressing that would be FF 51 03 09 27 C0 (the last three bytes are the Hex for 600000). The meta event would be preceded by a delta time, just like any other MIDI message in the file, so a change of tempo can occur anywhere in the music.
Delta times are always expressed as a variable-length quantity, the format of which is explained in the document. For example, if the PPQ is 480 (standard in most MIDI sequencing software), a delta time of a dotted quarter note (720 ticks) would be expressed by the two bytes 85 50 (hexadecimal).
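The two encodings that excerpt describes are easy to reproduce. This is an illustrative sketch of mine, not part of the library: it computes the Set Tempo payload (microseconds per quarter note) and encodes a delta time as a variable-length quantity:

```java
public class MidiEncode {
    // Set Tempo payload: microseconds per quarter note for a given BPM.
    static int usPerQuarter(int bpm) {
        return 60_000_000 / bpm;
    }

    // Encode a delta time as a MIDI variable-length quantity:
    // 7 payload bits per byte, high bit set on all but the last byte.
    static byte[] toVlq(int value) {
        byte[] tmp = new byte[5];
        int n = 0;
        do {
            tmp[n++] = (byte) (value & 0x7F);
            value >>>= 7;
        } while (value > 0);
        byte[] out = new byte[n];
        for (int i = 0; i < n; i++) {
            out[i] = tmp[n - 1 - i];           // reverse: most significant group first
            if (i < n - 1) out[i] |= (byte) 0x80; // continuation bit
        }
        return out;
    }

    public static void main(String[] args) {
        // 100 BPM -> 600000 us per quarter, i.e. meta event FF 51 03 09 27 C0
        System.out.printf("0x%X%n", usPerQuarter(100));
        // 720 ticks as a variable-length quantity
        for (byte b : toVlq(720)) System.out.printf("%02X ", b);
        System.out.println();
    }
}
```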

Number of bands in Android Equalizer

My question is simple: what is the default number of the bands provided by the built-in android equalizer? Also, what is the guaranteed minimum number of bands?
As far as I have researched, the answer appears to be 5, but it is not very well documented. However, testing it on the devices currently available to me, I got the following results:
HTC Desire S running android 2.3.5: 5 bands
Sony Xperia Tipo running android 4.0.x: 5 bands
however, Nexus 4 running Android 4.3.1: 6 bands
The way I get these numbers is the following:
MediaPlayer mp = new MediaPlayer();
/* some initialization */
Equalizer eq=new Equalizer(0, mp.getAudioSessionId());
short bands=eq.getNumberOfBands();
So, on some devices, I may be able to get more bands, but the minimum number is 5?
Also, is it a good approach to render the UI part of the equalizer dynamically, depending on how many bands the current device has, and then let the user set his own preferences?
Thanks in advance!
You can't tell the number of bands in advance; it is hardware dependent. A Samsung Galaxy has 13 equalizer bands, and some devices have more.
You can simply create the band controls programmatically, whatever the number:
Equalizer eq = new Equalizer(0, mp.getAudioSessionId());
short bands = eq.getNumberOfBands();
final short minEQLevel = eq.getBandLevelRange()[0];
final short maxEQLevel = eq.getBandLevelRange()[1];
LinearLayout parentViewToAllEqualizerSeekBars = findViewById...
for (short i = 0; i < bands; i++)
{
    // Create a SeekBar programmatically, set its range, and add it to the view
    SeekBar seek = new SeekBar(this);
    seek.setMax(maxEQLevel - minEQLevel); // progress 0 corresponds to minEQLevel
    parentViewToAllEqualizerSeekBars.addView(seek);
}
Now it will work on all devices, whether a device has fewer than 5 bands or more than 13.
Note:
Also make sure to check that the Equalizer instance is not null.
I do not think there is a default number of bands, and you should not build your application assuming there is a default/fixed number of bands.
You will definitely have to render your equalizer UI dynamically, based on the device's number of bands.
Because of low reputation I have to tell you here:
The maximum number of bands I have seen is 8. I created 8 SeekBars and only show as many as getNumberOfBands() returns.
How to implement Equalizer in android
