I am using the new MIDI API to play some MIDI notes. However, I cannot hear any sound, and no exception is thrown. The code is as follows:
// initialising the MidiReceiver
private MidiReceiver midiReceiver;

midiReceiver = new MidiReceiver() {
    @Override
    public void onSend(byte[] msg, int offset,
                       int count, long timestamp) throws IOException {
    }
};
/*Then in my loop containing note_on or note_off events*/
byte[] buffer = new byte[32];
int numBytes = 0;
int channel = 2; // MIDI channels 1-16 are encoded as 0-15.
// NOTE_STATUS is either 0x90 or 0x80
buffer[numBytes++] = (byte)(NOTE_STATUS + (channel - 1));
buffer[numBytes++] = (byte)noteValue; // the required MIDI pitch
buffer[numBytes++] = (byte)127; // max velocity
int offset = 0;
midiReceiver.send(buffer, offset, numBytes);
What am I doing wrong here? I think it must be because the onSend method is empty. How do I implement it so that my app plays back the note(s) on the Android device?
I couldn't find any indication in the documentation that the new MIDI API actually lets you synthesize audio, so it seems you need to generate the sound yourself.
This library might be useful.
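For example, one way to generate the sound yourself is to parse note-on messages in onSend and synthesize a tone with AudioTrack. This is only a rough, untested sketch: the playTone helper is hypothetical, the one-second sine burst is purely illustrative, and it assumes onSend delivers one complete 3-byte message:

@Override
public void onSend(byte[] msg, int offset, int count, long timestamp) throws IOException {
    int status = msg[offset] & 0xF0;
    if (status == 0x90 && (msg[offset + 2] & 0x7F) > 0) {        // note-on with non-zero velocity
        int note = msg[offset + 1] & 0x7F;
        double freq = 440.0 * Math.pow(2.0, (note - 69) / 12.0); // MIDI note number to Hz
        playTone(freq);
    }
}

// Hypothetical helper: plays one second of a sine wave at the given frequency.
private void playTone(double freq) {
    int sampleRate = 44100;
    short[] samples = new short[sampleRate];
    for (int i = 0; i < samples.length; i++) {
        samples[i] = (short) (Math.sin(2 * Math.PI * freq * i / sampleRate) * Short.MAX_VALUE);
    }
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            samples.length * 2, AudioTrack.MODE_STATIC);
    track.write(samples, 0, samples.length);
    track.play();                                                // release the track when done
}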
Related
I'm writing an Android application. A MIDI piano keyboard is physically connected to an Android device by a cable. I have been following the official Android MIDI documentation here https://developer.android.com/reference/android/media/midi/package-summary, but I am stuck on decoding the raw MIDI data I am receiving.
@RequiresApi(api = Build.VERSION_CODES.M)
class MidiFramer extends MidiReceiver {
    public void onSend(byte[] data, int offset,
                       int count, long timestamp) throws IOException {
        // parse MIDI or whatever
        // How to convert data to something readable? Below doesn't make any sense.
        Log.v(LOG_TAG, "onSend strData:" + data + " length:" + data.length);
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < data.length; i++) {
            String hex = new String(data, StandardCharsets.UTF_8);
            sb.append(hex);
        }
        Log.v(LOG_TAG, "onSend sb:" + sb.toString());
    }
}
Essentially, from the raw MIDI data being received, I want to know which note is being played (e.g. D4 or C#5) on the physical piano keyboard. Any help would be appreciated.
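For reference, a note-on message is three bytes (status 0x9n, note number, velocity), and the note number maps to a note name with simple arithmetic (middle C, note 60, is C4). The sketch below is untested and assumes data holds one complete message starting at offset (raw streams may need framing first); the noteName helper is hypothetical:

@Override
public void onSend(byte[] data, int offset, int count, long timestamp) throws IOException {
    int status = data[offset] & 0xF0;                        // upper nibble = message type
    int channel = data[offset] & 0x0F;                       // lower nibble = channel 0-15
    if (status == 0x90 && (data[offset + 2] & 0x7F) > 0) {   // note-on with non-zero velocity
        int note = data[offset + 1] & 0x7F;                  // 0-127
        Log.v(LOG_TAG, "note on, channel " + channel + ": " + noteName(note));
    }
}

// Hypothetical helper: 60 -> "C4", 61 -> "C#4", 62 -> "D4", ...
private static String noteName(int note) {
    final String[] names = {"C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"};
    return names[note % 12] + (note / 12 - 1);
}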
I use oboe to play sounds in my NDK library, and I use OpenSL with the Android extensions to decode wav files into PCM. The decoded signed 16-bit PCM is stored in memory (std::forward_list<int16_t>) and then sent into the oboe stream via a callback. The sound I hear from my phone is similar to the original wav file in volume level, but its quality is not: it bursts and crackles.
I am guessing that I am sending the PCM to the audio stream in the wrong order or format (sampling rate?). How can I use OpenSL decoding with an oboe audio stream?
To decode files to PCM, I use AndroidSimpleBufferQueue as a sink, and AndroidFD with AAssetManager as a source:
// Loading asset
AAsset* asset = AAssetManager_open(manager, path, AASSET_MODE_UNKNOWN);
off_t start, length;
int fd = AAsset_openFileDescriptor(asset, &start, &length);
AAsset_close(asset);
// Creating audio source
SLDataLocator_AndroidFD loc_fd = { SL_DATALOCATOR_ANDROIDFD, fd, start, length };
SLDataFormat_MIME format_mime = { SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED };
SLDataSource audio_source = { &loc_fd, &format_mime };
// Creating audio sink
SLDataLocator_AndroidSimpleBufferQueue loc_bq = { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 1 };
SLDataFormat_PCM pcm = {
.formatType = SL_DATAFORMAT_PCM,
.numChannels = 2,
.samplesPerSec = SL_SAMPLINGRATE_44_1,
.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16,
.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16,
.channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,
.endianness = SL_BYTEORDER_LITTLEENDIAN
};
SLDataSink sink = { &loc_bq, &pcm };
Then I register a callback, enqueue buffers, and move the PCM from the buffer into storage until decoding is done.
NOTE: the wav audio file is also 2-channel signed 16-bit 44.1 kHz PCM.
My oboe stream configuration is the same:
AudioStreamBuilder builder;
builder.setChannelCount(2);
builder.setSampleRate(44100);
builder.setCallback(this);
builder.setFormat(AudioFormat::I16);
builder.setPerformanceMode(PerformanceMode::LowLatency);
builder.setSharingMode(SharingMode::Exclusive);
Audio rendering works like this:
// Oboe stream callback
audio_engine::onAudioReady(AudioStream* self, void* audio_data, int32_t num_frames) {
    auto stream = static_cast<int16_t*>(audio_data);
    sound->render(stream, num_frames);
}

// Sound::render method
sound::render(int16_t* audio_data, int32_t num_frames) {
    auto iter = pcm_data.begin();
    std::advance(iter, cur_frame);
    const int32_t rem_size = std::min(num_frames, size - cur_frame);
    for (int32_t i = 0; i < rem_size; ++i, std::next(iter), ++cur_frame) {
        audio_data[i] += *iter;
    }
}
It looks like your render() method is confusing samples and frames.
A frame is a set of simultaneous samples.
In a stereo stream, each frame has TWO samples.
I think your iterator works on a sample basis. In other words next(iter) will advance to the next sample, not the next frame. Try this (untested) code.
sound::render(int16_t* audio_data, int32_t num_frames) {
    auto iter = pcm_data.begin();
    const int samples_per_frame = 2; // stereo
    std::advance(iter, cur_sample);
    const int32_t num_samples = std::min(num_frames * samples_per_frame,
                                         total_samples - cur_sample);
    for (int32_t i = 0; i < num_samples; ++i, std::next(iter), ++cur_sample) {
        audio_data[i] += *iter;
    }
}
In short: I was experiencing an underrun because I used std::forward_list to store the PCM. In such a case (retrieving PCM through iterators), one has to use a container whose iterator implements LegacyRandomAccessIterator (e.g. std::vector).
I was sure that the linear complexity of std::advance and std::next made no difference in my sound::render method. However, when I tried raw pointers and pointer arithmetic (thus, constant complexity) together with the debugging methods suggested in the comments (extracting PCM from the WAV with Audacity, then loading that asset with AAssetManager directly into memory), I realized that the amount of "corruption" in the output sound was directly proportional to the position argument of std::advance(iter, position) in the render method.
So, since the amount of sound corruption was directly proportional to the complexity of std::advance (and std::next), I had to make that complexity constant by using std::vector as the container. Using the answer from @philburk, I got this as a working result:
class sound {
private:
    const int samples_per_frame = 2; // stereo
    std::vector<int16_t> pcm_data;
    ...
public:
    void render(int16_t* audio_data, int32_t num_frames) {
        auto iter = std::next(pcm_data.begin(), cur_sample);
        const int32_t s = std::min(num_frames * samples_per_frame,
                                   total_samples - cur_sample);
        for (int32_t i = 0; i < s; ++i, std::advance(iter, 1), ++cur_sample) {
            audio_data[i] += *iter;
        }
    }
};
I want to analyse an audio file (mp3 in particular) which the user can select, and determine which notes are played, when they are played, and at what frequency.
I already have some working code for my computer, but I want to be able to use this on my phone as well.
In order to do this, however, I need access to the bytes of the audio file. On my PC I could just open a stream, use AudioFormat to decode it, and then read() the bytes frame by frame.
Looking at the Android Developer Forums I can only find classes and examples for playing a file (without access to the bytes) or recording to a file (I want to read from a file).
I'm pretty confident that I can set up a file chooser, but once I have the Uri from that, I don't know how to get a stream or the bytes.
Any help would be much appreciated :)
Edit: Is a similar solution to this possible? Android - Read a File
I don't know if I could decode the audio file that way or if there would be any problems with the Android API...
So I solved it in the following way:
Get an InputStream with
final InputStream inputStream = getContentResolver().openInputStream(selectedUri);
Then pass it to this function and decode it using classes from JLayer:
private synchronized void decode(InputStream in)
        throws BitstreamException, DecoderException {
    ArrayList<Short> output = new ArrayList<>(1024);
    Bitstream bitstream = new Bitstream(in);
    Decoder decoder = new Decoder();
    float total_ms = 0f;
    float nextNotify = -1f;
    boolean done = false;
    while (!done) {
        Header frameHeader = bitstream.readFrame();
        if (total_ms > nextNotify) {
            mListener.OnDecodeUpdate((int) total_ms);
            nextNotify += 500f;
        }
        if (frameHeader == null) {
            done = true;
        } else {
            total_ms += frameHeader.ms_per_frame();
            SampleBuffer buffer = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream); // CPU intense
            if (buffer.getSampleFrequency() != 44100 || buffer.getChannelCount() != 2) {
                throw new DecoderException("mono or non-44100 MP3 not supported", null);
            }
            short[] pcm = buffer.getBuffer();
            for (int i = 0; i < pcm.length - 1; i += 2) {
                short l = pcm[i];
                short r = pcm[i + 1];
                short mono = (short) ((l + r) / 2f);
                output.add(mono); // RAM intense
            }
        }
        bitstream.closeFrame();
    }
    bitstream.close();
    mListener.OnDecodeComplete(output);
}
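For illustration, the function can then be run on a background thread with the InputStream obtained above; this wrapper is only a sketch, not code taken from the linked project:

new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            decode(inputStream); // inputStream from getContentResolver().openInputStream(selectedUri)
        } catch (BitstreamException | DecoderException e) {
            e.printStackTrace();
        }
    }
}).start();

The decoding is CPU intensive, so keeping it off the main thread avoids blocking the UI.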
The full project (in case you want to look up the particulars) can be found here:
https://github.com/S7uXN37/MusicInterpreterStudio/
I'm working on adding a live broadcasting feature to an Android app. I do so through RTMP and make use of the DailyMotion Android SDK, which in turn makes use of Kickflip.
Everything works perfectly, except for the playback of the audio on the website (which uses Flash). The audio does work in VLC, so it seems to be an issue with Flash being unable to decode the AAC audio.
For the audio I instantiate an encoder with the "audio/mp4a-latm" mime type. The Android developer docs state the following about this mime type: "audio/mp4a-latm" - AAC audio (note, this is raw AAC packets, not packaged in LATM!). I suspect that my problem lies here, but I have not yet been able to find a solution for it.
Pretty much all my research, including this SO question about the matter pointed me in the direction of adding an ADTS header to the audio byte array. That results in the following code in the writeSampleData method:
boolean isHeader = false;
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
    isHeader = true;
} else {
    pts = bufferInfo.presentationTimeUs - mFirstPts;
}
if (mFirstPts != -1 && pts >= 0) {
    pts /= 1000;
    byte data[] = new byte[bufferInfo.size + 7];
    addADTStoPacket(data, bufferInfo.size + 7);
    encodedData.position(bufferInfo.offset);
    encodedData.get(data, 7, bufferInfo.size);
    addDataPacket(new AudioPacket(data, isHeader, pts, mAudioFirstByte));
}
The addADTStoPacket method is identical to the one in the above mentioned SO post, but I will show it here regardless:
private void addADTStoPacket(byte[] packet, int packetLen) {
    int profile = 2; // AAC LC
    //39=MediaCodecInfo.CodecProfileLevel.AACObjectELD;
    int freqIdx = 4; // 44.1 kHz
    int chanCfg = 1; // CPE
    // fill in ADTS data
    packet[0] = (byte) 0xFF;
    packet[1] = (byte) 0xF9;
    packet[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
    packet[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));
    packet[4] = (byte) ((packetLen & 0x7FF) >> 3);
    packet[5] = (byte) (((packetLen & 7) << 5) + 0x1F);
    packet[6] = (byte) 0xFC;
}
The variables in the above method match the settings I have configured in the application, so I'm pretty sure that's fine.
The data is written to the output stream in the following method of the AudioPacket class:
@Override
public void writePayload(OutputStream outputStream) throws IOException {
    outputStream.write(mFirstByte);
    outputStream.write(mIsAudioSpecificConfic ? 0 : 1);
    outputStream.write(mData);
}
Am I missing something here? I can present more code if necessary, but I think this covers the most relevant parts. Thanks in advance and I really hope someone is able to help; I've been stuck for a couple of days now...
I am trying to make a call recording app in Android. I am using the loudspeaker to record both uplink and downlink audio. The only problem I am facing is that the volume is too low. I've increased the device volume to the maximum using AudioManager, and it can't go beyond that.
I first used MediaRecorder, but since it has limited functionality and produces compressed audio, I tried AudioRecord. Still, I haven't figured out how to increase the audio level. I've checked projects on GitHub too, but it was of no use. I've searched on Stack Overflow for the last two weeks, but couldn't find anything at all.
I am quite sure that it's possible, since many other apps are doing it. For instance, Automatic Call Recorder does that.
I understand that I have to do something with the audio buffer, but I am not quite sure what needs to be done. Can you guide me on that?
Update:
I am sorry, I forgot to mention that I am already applying gain. My code is very similar to RehearsalAssistant's (in fact I derived it from there). The gain doesn't work for more than about 10 dB, and that doesn't increase the audio volume very much. What I want is to be able to listen to the audio without putting my ear against the speaker, which is what my code currently lacks.
I've asked a similar question about how volume/loudness works on the Sound Design SE here. It mentions that gain and loudness are related, but that gain doesn't set the actual loudness level. I am not sure how it all works, but I am determined to get loud output.
You obviously have the AudioRecord part running, so I will skip the choice of sampleRate and inputSource. The main point is that you need to manipulate each sample of your recorded data appropriately in your recording loop to increase the volume. Like so:
int minRecBufBytes = AudioRecord.getMinBufferSize( sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT );
// ...
audioRecord = new AudioRecord( inputSource, sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minRecBufBytes );
// Setup the recording buffer, size, and pointer (in this case quadruple buffering)
int recBufferByteSize = minRecBufBytes*2;
byte[] recBuffer = new byte[recBufferByteSize];
int frameByteSize = minRecBufBytes/2;
int sampleBytes = frameByteSize;
int recBufferBytePtr = 0;
audioRecord.startRecording();
// Do the following in the loop you prefer, e.g.
while ( continueRecording ) {
int reallySampledBytes = audioRecord.read( recBuffer, recBufferBytePtr, sampleBytes );
int i = 0;
while ( i < reallySampledBytes ) {
float sample = (float)( recBuffer[recBufferBytePtr+i ] & 0xFF
| recBuffer[recBufferBytePtr+i+1] << 8 );
// THIS is the point where the work is done:
// Increase level by about 6dB:
sample *= 2;
// Or increase level by 20dB:
// sample *= 10;
// Or if you prefer any dB value, then calculate the gain factor outside the loop
// float gainFactor = (float)Math.pow( 10., dB / 20. ); // dB to gain factor
// sample *= gainFactor;
// Avoid 16-bit-integer overflow when writing back the manipulated data:
if ( sample >= 32767f ) {
recBuffer[recBufferBytePtr+i ] = (byte)0xFF;
recBuffer[recBufferBytePtr+i+1] = 0x7F;
} else if ( sample <= -32768f ) {
recBuffer[recBufferBytePtr+i ] = 0x00;
recBuffer[recBufferBytePtr+i+1] = (byte)0x80;
} else {
int s = (int)( 0.5f + sample ); // Here, dithering would be more appropriate
recBuffer[recBufferBytePtr+i ] = (byte)(s & 0xFF);
recBuffer[recBufferBytePtr+i+1] = (byte)(s >> 8 & 0xFF);
}
i += 2;
}
// Do other stuff like saving the part of buffer to a file
// if ( reallySampledBytes > 0 ) { ... save recBuffer+recBufferBytePtr, length: reallySampledBytes
// Then move the recording pointer to the next position in the recording buffer
recBufferBytePtr += reallySampledBytes;
// Wrap around at the end of the recording buffer, e.g. like so:
if ( recBufferBytePtr >= recBufferByteSize ) {
recBufferBytePtr = 0;
sampleBytes = frameByteSize;
} else {
sampleBytes = recBufferByteSize - recBufferBytePtr;
if ( sampleBytes > frameByteSize )
sampleBytes = frameByteSize;
}
}
Thanks to Hartmut and beworker for the solution. Hartmut's code worked at around 12-14 dB. I also merged in code from the sonic library to increase the volume, but that introduced too much noise and distortion, so I kept the volume at 1.5-2.0 and instead tried to increase the gain. I got a decent sound volume which doesn't sound too loud on the phone, but sounds loud enough when listened to on a PC. It looks like that's the farthest I could go.
I am posting my final code to increase the loudness. Be aware that increasing mVolume adds too much noise; try increasing the gain instead.
private AudioRecord.OnRecordPositionUpdateListener updateListener = new AudioRecord.OnRecordPositionUpdateListener() {
@Override
public void onPeriodicNotification(AudioRecord recorder) {
aRecorder.read(bBuffer, bBuffer.capacity()); // Fill buffer
if (getState() != State.RECORDING)
return;
try {
if (bSamples == 16) {
shBuffer.rewind();
int bLength = shBuffer.capacity(); // Faster than accessing buffer.capacity each time
for (int i = 0; i < bLength; i++) { // 16bit sample size
short curSample = (short) (shBuffer.get(i) * gain);
if (curSample > cAmplitude) { // Check amplitude
cAmplitude = curSample;
}
if(mVolume != 1.0f) {
// Adjust output volume.
int fixedPointVolume = (int)(mVolume*4096.0f);
int value = (curSample*fixedPointVolume) >> 12;
if(value > 32767) {
value = 32767;
} else if(value < -32767) {
value = -32767;
}
curSample = (short)value;
/*scaleSamples(outputBuffer, originalNumOutputSamples, numOutputSamples - originalNumOutputSamples,
mVolume, nChannels);*/
}
shBuffer.put(curSample);
}
} else { // 8bit sample size
int bLength = bBuffer.capacity(); // Faster than accessing buffer.capacity each time
bBuffer.rewind();
for (int i = 0; i < bLength; i++) {
byte curSample = (byte) (bBuffer.get(i) * gain);
if (curSample > cAmplitude) { // Check amplitude
cAmplitude = curSample;
}
bBuffer.put(curSample);
}
}
bBuffer.rewind();
fChannel.write(bBuffer); // Write buffer to file
payloadSize += bBuffer.capacity();
} catch (IOException e) {
e.printStackTrace();
Log.e(NoobAudioRecorder.class.getName(), "Error occured in updateListener, recording is aborted");
stop();
}
}
@Override
public void onMarkerReached(AudioRecord recorder) {
// NOT USED
}
};
Simply use the MPEG_4 format.
To increase the call recording volume, use AudioManager as follows:
int deviceCallVol;
AudioManager audioManager;
Start Recording:
audioManager = (AudioManager)context.getSystemService(Context.AUDIO_SERVICE);
//get the current volume set
deviceCallVol = audioManager.getStreamVolume(AudioManager.STREAM_VOICE_CALL);
//set volume to maximum
audioManager.setStreamVolume(AudioManager.STREAM_VOICE_CALL, audioManager.getStreamMaxVolume(AudioManager.STREAM_VOICE_CALL), 0);
recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_CALL);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setAudioEncodingBitRate(32);
recorder.setAudioSamplingRate(44100);
Stop Recording:
//revert volume to initial state
audioManager.setStreamVolume(AudioManager.STREAM_VOICE_CALL, deviceCallVol, 0);
In my app I use the open-source sonic library. Its main purpose is to speed up or slow down speech, but it also lets you increase loudness. I apply it to playback, but it should work similarly for recording: just pass your samples through it before compressing them. It has a Java interface too. Hope this helps.
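For what it's worth, a rough sketch of that idea with the Java Sonic class could look like the following. The constructor and the setVolume / writeShortToStream / samplesAvailable / readShortFromStream calls are assumptions based on the library's Java port, so verify them against the version you bundle:

// Sketch: amplify a block of recorded 16-bit mono samples with the sonic library.
// The Sonic class and the methods used here are assumed from the Java port;
// check the actual API before relying on this.
Sonic sonic = new Sonic(44100, 1);     // sample rate, channel count
sonic.setSpeed(1.0f);                  // leave speed unchanged
sonic.setVolume(2.0f);                 // roughly +6 dB

short[] recorded = new short[1024];    // filled from AudioRecord.read(...)
sonic.writeShortToStream(recorded, recorded.length);

short[] louder = new short[sonic.samplesAvailable()];
sonic.readShortFromStream(louder, louder.length);
// 'louder' now holds the volume-scaled samples; compress or write them out as before.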