I'm trying to generate a MIDI file and play it on Android. I found android-midi-lib, but there is almost no documentation for this library. I ran the example from the lib and it works, but there is a delay of about 6 seconds before the track built from my notes starts playing. I don't know anything about notes or the MIDI format; everything is new to me.
Here is my code:
public class MyActivity extends Activity {
    private MediaPlayer player = new MediaPlayer();

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // 1. Create the tracks
        MidiTrack tempoTrack = new MidiTrack();
        MidiTrack noteTrack = new MidiTrack();

        // 2. Add events to the tracks
        // 2a. Track 0 is typically the tempo map
        Tempo t = new Tempo();
        t.setBpm(228);
        tempoTrack.insertEvent(t);

        // 2b. Track 1 will have some notes in it
        for (int i = 0; i < 128; i++) {
            int channel = 0, pitch = i, velocity = 100;
            NoteOn on = new NoteOn(i * 480, channel, pitch, velocity);
            NoteOff off = new NoteOff(i * 480 + 120, channel, pitch, 0);
            noteTrack.insertEvent(on);
            noteTrack.insertEvent(off);
        }

        // It's best not to manually insert EndOfTrack events; MidiTrack will
        // call closeTrack() on itself before writing itself to a file

        // 3. Create a MidiFile with the tracks we created
        ArrayList<MidiTrack> tracks = new ArrayList<MidiTrack>();
        tracks.add(tempoTrack);
        tracks.add(noteTrack);
        MidiFile midi = new MidiFile(MidiFile.DEFAULT_RESOLUTION, tracks);

        // 4. Write the MIDI data to a file
        File output = new File("/sdcard/example.mid");
        try {
            midi.writeToFile(output);
        } catch (IOException e) {
            Log.e(getClass().toString(), e.getMessage(), e);
        }

        try {
            player.setDataSource(output.getAbsolutePath());
            player.prepare();
        } catch (Exception e) {
            Log.e(getClass().toString(), e.getMessage(), e);
        }
        player.start();
    }

    @Override
    protected void onDestroy() {
        player.stop();
        player.release();
        super.onDestroy();
    }
}
I figured out that this delay depends on the first parameter of the NoteOn constructor (and probably of NoteOff too). I don't understand what the number 480 is. I tried changing it: the smaller the number, the shorter the delay before the track, BUT the whole track gets shorter too.
The spacing between notes with the value 480 is fine for me, but I don't need the delay before them.
Help me please!
OK, I figured out what the problem is.
According to http://www.phys.unsw.edu.au/jw/notes.html, MIDI note values for a piano, for example, start at 21. So if I start the loop from 0, the first 20 values won't play anything.
Now, about the delay.
The loop should look like this:
long delay = 0;
long duration = 480; // ticks, not milliseconds (480 ticks = one quarter note at the default resolution)
int channel = 0, velocity = 100;
for (int i = 21; i < 108; ++i) {
    noteTrack.insertNote(channel, i, velocity, delay, duration);
    delay += duration;
}
The delay is the tick at which a note should start playing. So if we want to play all the notes one by one, the delay for each note has to be the sum of the durations of all previous notes.
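For reference, the number 480 is measured in MIDI ticks, not milliseconds. How long a tick lasts depends on the tempo and on the file's resolution in ticks per quarter note. A minimal sketch of the conversion, assuming the library's MidiFile.DEFAULT_RESOLUTION of 480:

// Convert MIDI ticks to seconds for a given tempo (BPM) and resolution
// (ticks per quarter note). MidiFile.DEFAULT_RESOLUTION is 480.
static double ticksToSeconds(long ticks, double bpm, int resolution) {
    double secondsPerQuarterNote = 60.0 / bpm;
    return ticks * secondsPerQuarterNote / resolution;
}

// At 228 BPM and 480 ticks per quarter note, a 480-tick note lasts
// 60.0 / 228 = ~0.26 s, which is why shrinking the tick values shortened
// both the initial delay and the whole track.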
My objective is to create a VideoView that can play videos from a pre-defined playlist.
I'm trying to use MediaPlayer.setNextMediaPlayer(...) to allow a seamless transition between two videos. However, when the first video finishes playing, the second video does not start automatically as it should according to the documentation.
Xamarin Android Code:
Queue<MediaPlayer> MediaPlayerList = null;

private void PlayVideo()
{
    MediaPlayerList = new Queue<MediaPlayer>();

    // Let's go ahead and create all media players
    VideoView_CurrentVideoView = new VideoView(this);
    VideoView_CurrentVideoView.Completion += mVideoView_Completion;
    VideoView_CurrentVideoView.Prepared += mVideoView_Prepared;

    // Let's prepare all MediaPlayers
    for (int i = 0; i < VideoView_CurrentVideoChannel.VideoAssetList.Count; i++)
    {
        string filePath = FilePath[i];
        if (i == 0)
        {
            VideoView_CurrentVideoView.SetVideoPath(filePath);
            VideoContainer.AddView(VideoView_CurrentVideoView);
        }
        else
        {
            MediaPlayer mpNew = new MediaPlayer();
            mpNew.SetDataSource(filePath);
            MediaPlayerList.Enqueue(mpNew);
        }
    }

    VideoView_CurrentVideoView.Start();
}

void mVideoView_Completion(object sender, EventArgs e)
{
    MediaPlayer mp = (MediaPlayer)sender;
    mp.Release();
}

void mVideoView_Prepared(object sender, EventArgs e)
{
    MediaPlayer mp = (MediaPlayer)sender;

    // Take the next available MediaPlayer from the queue
    MediaPlayer nextMediaPlayer = MediaPlayerList.Dequeue();
    // Put the current MediaPlayer at the end of the queue
    MediaPlayerList.Enqueue(mp);

    nextMediaPlayer.Prepare();
    mp.SetNextMediaPlayer(nextMediaPlayer);
}
Any help or suggestions will be greatly appreciated. This is coded in Xamarin Android.
Update #1: after moving .Prepare() out of the Prepared event:
Queue<string> VideoListQueue = null;
MediaPlayer NextMediaPlayer = null;

private void PlayVideo()
{
    string filePath = FilePath[0];

    // Create the video view
    if (VideoContainer.ChildCount == 0)
    {
        // Set up the VideoView container
        VideoView_CurrentVideoView = new VideoView(this);
        VideoView_CurrentVideoView.Info += VideoView_CurrentVideoView_Info;
        VideoView_CurrentVideoView.Error += VideoView_CurrentVideoView_Error;

        LinearLayout.LayoutParams param = new LinearLayout.LayoutParams(
            ViewGroup.LayoutParams.FillParent, ViewGroup.LayoutParams.FillParent);
        param.LeftMargin = 0;
        param.RightMargin = 0;
        param.BottomMargin = 0;
        param.TopMargin = 0;
        VideoView_CurrentVideoView.LayoutParameters = param;
        VideoView_CurrentVideoView.LayoutParameters.Width = ViewGroup.LayoutParams.FillParent;
        VideoView_CurrentVideoView.LayoutParameters.Height = ViewGroup.LayoutParams.FillParent;

        VideoView_CurrentVideoView.Completion += VideoView_CurrentVideoView_Completion;
        VideoView_CurrentVideoView.Prepared += VideoView_CurrentVideoView_Prepared;
        VideoContainer.AddView(VideoView_CurrentVideoView);
    }

    VideoView_CurrentVideoView.SetVideoPath(filePath);
    VideoView_CurrentVideoView.SeekTo(0);
    VideoView_CurrentVideoView.Start();
}

void VideoView_CurrentVideoView_Prepared(object sender, EventArgs e)
{
    // Do nothing at this moment
    MediaPlayer mp = (MediaPlayer)sender;
}

void VideoView_CurrentVideoView_Completion(object sender, EventArgs e)
{
    // Release the finished MediaPlayer
    MediaPlayer mp = (MediaPlayer)sender;
    mp.Reset();
    mp.Release();
    mp = null;

    // Prepare the next MediaPlayer
    MediaPlayer currentPlayer = NextMediaPlayer;
    NextMediaPlayer = SetupNextMediaPlayer();
    currentPlayer.SetNextMediaPlayer(NextMediaPlayer);
}

MediaPlayer SetupNextMediaPlayer()
{
    // When the video starts playing, get ready for the next one:
    // rotate the playlist queue and build a prepared player for the next file
    string sourceURL = VideoListQueue.Dequeue();
    VideoListQueue.Enqueue(sourceURL);
    string filePath = sourceURL;

    MediaPlayer mp = new MediaPlayer();
    mp.Info += VideoView_CurrentVideoView_Info;
    mp.Completion += VideoView_CurrentVideoView_Completion;
    mp.Prepared += VideoView_CurrentVideoView_Prepared;
    mp.SetDataSource(filePath);
    mp.Prepare();

    // Hand the created MediaPlayer back to the caller
    return mp;
}

void VideoView_CurrentVideoView_Info(object sender, MediaPlayer.InfoEventArgs e)
{
    Console.WriteLine("What = " + e.What);
    switch (e.What)
    {
        case MediaInfo.VideoRenderingStart:
        {
            // This only happens when the first video starts
            NextMediaPlayer = SetupNextMediaPlayer();
            e.Mp.SetNextMediaPlayer(NextMediaPlayer);
            break;
        }
    }
}

void VideoView_CurrentVideoView_Error(object sender, MediaPlayer.ErrorEventArgs e)
{
    e.Handled = true;
}
At this point, the media player begins playing the second video once the first one is done. However, the second video has only sound, with no video showing.
Does anyone know what I did wrong? I have a feeling it has something to do with the MediaPlayer not being attached to the SurfaceView. However, since I created the view using a VideoView, how can I get the Surface from the VideoView?
Regarding the second video playing with sound only: try implementing an OnCompletionListener for each MediaPlayer, like this:
mediaPlayer.setOnCompletionListener(new OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mp) {
        mp.setDisplay(null);                     // detach the surface from the current mediaPlayer
        nextMediaPlayer.setDisplay(getHolder()); // attach it to the next video's player
    }
});
I can't say that it is gapless, but it somehow works. To achieve this I don't use the standard VideoView but a custom View that extends SurfaceView.
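A minimal sketch of that idea, assuming two already-prepared MediaPlayer instances; the class and method names here are illustrative, not from the original answer:

import android.content.Context;
import android.media.MediaPlayer;
import android.view.SurfaceView;

// Custom SurfaceView that hands its surface from the finished player to the next one.
public class HandoffVideoView extends SurfaceView {

    public HandoffVideoView(Context context) {
        super(context);
    }

    // Call this from the finished player's OnCompletionListener.
    public void handOff(MediaPlayer finished, MediaPlayer next) {
        finished.setDisplay(null);    // detach the surface from the old player
        next.setDisplay(getHolder()); // attach it to the next player
        next.start();                 // start rendering the next video
    }
}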
What if you prepare and play the next MediaPlayer in the .Completion event? Have you tried that? It may introduce a small delay, though.
After many years of testing, I can say this problem does not happen on all hardware. For example, when I run the same APK on a Nexus 7 it appears to be seamless and everything works. In contrast, running it on an Amlogic media player board reproduces the problem described above.
I am closing this post with the conclusion that it is a hardware issue. I know some people have overcome this limitation by rendering everything in OpenGL, but that is a completely separate beast to deal with.
Conclusion
If you are having a similar problem to the one described above, there is not much you can do, as it is heavily dependent on the hardware.
I have a sound file (.3gp) about 1 minute long. I would like to get the frequency of this sound every 1/4 second. My idea is to take samples from the audio file every 1/4 second and, using an FFT, get the frequency values. Is there any way to do this?
My current plan would be to split the sound file into 1/4-second sample files (always overwriting the previous one), then run an FFT and detect the frequency where the magnitude is the biggest. There might be easier solutions, but I don't have a clue how to do those either.
UPDATE 2 - new code
This is the code I use so far:
public class RecordAudio extends AsyncTask<Void, double[], Void> {

    @Override
    protected Void doInBackground(Void... arg0) {
        try {
            int bufferSize = AudioRecord.getMinBufferSize(frequency,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            //int bufferSize = AudioRecord.getMinBufferSize(frequency,
            //        channelConfiguration, audioEncoding);
            AudioRecord audioRecord = new AudioRecord(
                    MediaRecorder.AudioSource.MIC, frequency,
                    channelConfiguration, audioEncoding, bufferSize);
            short[] buffer = new short[blockSize];
            //double[] toTransform = new double[blockSize];

            audioRecord.startRecording();
            // started = true; // this should be true before entering the loop below

            while (started) {
                sampling++;
                double[] re = new double[blockSize];
                double[] im = new double[blockSize];
                double[] newArray = new double[blockSize * 2];
                double[] magns = new double[blockSize];
                double maxMagn = 0;
                double pitch = 0;

                int bufferReadResult = audioRecord.read(buffer, 0, blockSize);
                for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
                    re[i] = (double) buffer[i] / 32768.0; // signed 16 bit
                    im[i] = 0;
                }

                // The FFT returns interleaved real/imaginary pairs
                newArray = FFTbase.fft(re, im, true);
                for (int i = 0; i < newArray.length; i += 2) {
                    re[i / 2] = newArray[i];
                    im[i / 2] = newArray[i + 1];
                    magns[i / 2] = Math.sqrt(re[i / 2] * re[i / 2] + im[i / 2] * im[i / 2]);
                }

                // I only need the first half of the spectrum
                for (int i = 0; i < magns.length / 2; i++) {
                    if (magns[i] > maxMagn) {
                        maxMagn = magns[i];
                        pitch = i;
                    }
                }

                if (sampling > 50) {
                    // 15.625 is the bin width in Hz (sample rate / block size)
                    Log.i("pitch and magnitude", "" + maxMagn + " " + pitch * 15.625f);
                    sampling = 0;
                    maxMagn = 0;
                    pitch = 0;
                }
            }
            audioRecord.stop();
        } catch (Throwable t) {
            t.printStackTrace();
            Log.e("AudioRecord", "Recording Failed");
        }
        return null;
    }
}
I use this: http://www.wikijava.org/wiki/The_Fast_Fourier_Transform_in_Java_%28part_1%29
Guitar strings seem correct, but my own voice is not detected well, because of this: the magnitudes of the two peaks change most of the time, and I always take the biggest one as the fundamental frequency.
Pitch tracking with the FFT is asked about so often on Stack Overflow that I wrote a blog entry with sample code. The code is in C, but with the explanation and links you should be able to do what you want.
As for dividing it up into 1/4-second increments, you could simply take FFTs of 1/4-second segments as you suggested, instead of the default (which I think is about 1 second). If this doesn't give you the frequency resolution you want, you may have to use a different pitch-recognition method. Another thing you could do is use overlapping segments that are longer than 1/4 second but start at intervals 1/4 second apart, as sketched below. This method is alluded to in the blog entry, but it may not meet your design spec.
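For illustration, a sketch of the overlapping-window idea in Java; the samples array (16-bit mono PCM already in memory) and the window bookkeeping are my assumptions, not part of the original answer:

// Estimate a pitch every quarter second using long windows that overlap.
static void analyzeOverlapping(short[] samples, int sampleRate) {
    // Nearest power of two below one second of samples, since the FFT used
    // above expects power-of-two input (e.g. 32768 samples at 44.1 kHz).
    int windowSize = Integer.highestOneBit(sampleRate);
    int hopSize = sampleRate / 4; // a new estimate every 1/4 second
    for (int start = 0; start + windowSize <= samples.length; start += hopSize) {
        double[] re = new double[windowSize];
        double[] im = new double[windowSize];
        for (int i = 0; i < windowSize; i++) {
            re[i] = samples[start + i] / 32768.0; // normalize signed 16-bit PCM
            im[i] = 0;
        }
        // feed re/im to FFTbase.fft(...) and pick the peak bin as in the code above
    }
}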
Try AsyncTask:
class GetFrequency extends AsyncTask<String, Void, Void> {

    @Override
    public Void doInBackground(String... params) {
        while (true) {
            // Apply logic here
            try {
                Thread.sleep(250); // wait 1/4 second between measurements
            } catch (Exception ie) {
                ie.printStackTrace();
            }
        }
    }
}
Call this in your MainActivity like this:
frequencyButtonListener.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View v) {
        new GetFrequency().execute(params);
    }
});
I would like to know if there is a way to fix the duration of a recording made with the phone's microphone. For example, when I click a button the recording should start, and it should stop on its own after 5 seconds. What method do you propose?
Edit:
Sorry for the confusion, but I am using the AudioRecord class to record data, and I don't think the MediaRecorder approach works properly (if at all) for that.
If you just use a timer, I do not think you can accurately control how much data is in the buffer when your app reads it.
I think the way to record 5 seconds of audio data is to use the technique from this class.
The code there carefully sets the size of the audio buffer so that it calls back after it has recorded data for a certain amount of time. Here is a snippet from that class.
public boolean startRecordingForTime(int millisecondsPerAudioClip,
                                     int sampleRate, int encoding)
{
    float percentOfASecond = (float) millisecondsPerAudioClip / 1000.0f;
    int numSamplesRequired = (int) ((float) sampleRate * percentOfASecond);
    int bufferSize = determineCalculatedBufferSize(sampleRate, encoding,
            numSamplesRequired);

    return doRecording(sampleRate, encoding, bufferSize, numSamplesRequired,
            DEFAULT_BUFFER_INCREASE_FACTOR);
}
Then later on your code just does this:
while (continueRecording)
{
    int bufferResult = recorder.read(readBuffer, 0, readBufferSize);
    // do stuff
}
Since readBufferSize is just right, you will get the amount of data you want (with some slight variation).
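For illustration only, the buffer-size helper presumably does something like the following; determineCalculatedBufferSize is referenced but not shown in that class, so this sketch is an assumption:

// Size the read buffer to hold exactly the samples for the requested duration,
// but never less than the hardware minimum that AudioRecord demands.
static int determineCalculatedBufferSize(int sampleRate, int encoding,
                                         int numSamplesRequired) {
    int bytesPerSample = (encoding == AudioFormat.ENCODING_PCM_16BIT) ? 2 : 1;
    int minSize = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, encoding);
    return Math.max(minSize, numSamplesRequired * bytesPerSample);
}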
This is all you need:
@Override
public void onClick(View view)
{
    if (view.getId() == R.id.Record)
    {
        // Schedule a one-shot timer that stops the recording after 5 seconds
        new Timer().schedule(new TimerTask()
        {
            @Override
            public void run()
            {
                runOnUiThread(new Runnable()
                {
                    @Override
                    public void run()
                    {
                        mediaRecorder.stop();
                        mediaRecorder.reset();
                        mediaRecorder.release();
                        files.setEnabled(true);
                        record.setEnabled(true);
                        stop.setEnabled(false);
                    }
                });
            }
        }, 5000);

        record.setEnabled(false);
        files.setEnabled(false);
        stop.setEnabled(true);

        try
        {
            File file = new File(Environment.getExternalStorageDirectory(),
                    "" + new Random().nextInt(50) + ".3gp");
            adapter.add(file.getAbsolutePath());
            adapter.notifyDataSetChanged();

            mediaRecorder = new MediaRecorder();
            mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            mediaRecorder.setOutputFile(file.getAbsolutePath());
            mediaRecorder.prepare();
            mediaRecorder.start();

            stop.setEnabled(true);
        } catch (IllegalStateException e)
        {
            e.printStackTrace();
        } catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
Use setMaxDuration from the MediaRecorder class, as sketched below.
Alternatively: when you start recording, start a new thread and put it to sleep for 5 seconds; when it wakes, stop the recording. Or use a TimerTask that calls stop on the recording after a 5-second delay.
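A minimal sketch of the setMaxDuration approach, assuming a MediaRecorder configured as in the answer above (setMaxDuration must be called after setOutputFormat and before prepare):

// Let MediaRecorder enforce the 5-second limit itself instead of using a timer.
mediaRecorder.setMaxDuration(5000);
mediaRecorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
            // Recording has already stopped automatically; just clean up.
            mr.reset();
            mr.release();
        }
    }
});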
I'm creating a TTS app using Google's unofficial TTS API, and it works fine, but the API can only process a maximum of 100 characters at a time, while my application may have to send strings containing as many as 300 characters.
Here is my code
try {
    String text = "bonjour comment allez-vous faire";
    text = text.replace(" ", "%20");
    String oLanguage = "fr";

    MediaPlayer player = new MediaPlayer();
    player.setAudioStreamType(AudioManager.STREAM_MUSIC);
    player.setDataSource("http://translate.google.com/translate_tts?tl="
            + oLanguage + "&q=" + text);
    player.prepare();
    player.start();
} catch (Exception e) {
    e.printStackTrace();
}
So my questions are:
1. How do I get it to check the number of characters in the string and send only complete words within the 100-character limit?
2. How do I detect when the first group of TTS audio is finished, so I can send the second and avoid the two speeches overlapping?
3. Is there any need for me to use AsyncTask for this process?
1. How do I get it to check the number of characters in the string and send only complete words within the 100-character limit?
ArrayList<String> arr = new ArrayList<String>();
String textToSpeach = "Your long text";
int counter = 0;
while (counter < textToSpeach.length()) {
    int end = Math.min(counter + 100, textToSpeach.length());
    String temp = textToSpeach.substring(counter, end);
    int lastSpace = temp.lastIndexOf(' ');
    if (end < textToSpeach.length() && lastSpace > 0) {
        // Cut back to the last complete word so no word is split across chunks
        temp = temp.substring(0, lastSpace);
        counter += lastSpace + 1; // resume just after the space we cut at
    } else {
        counter = end;
    }
    arr.add(temp);
}
2. How do I detect when the first group of TTS audio is finished, so I can send the second and avoid the two speeches overlapping?
player.setOnCompletionListener(new OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mp) {
        // pass the next block
    }
});
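Tying the two answers together, a sketch that plays the chunks from arr one after another; the playChunk helper and its recursion are illustrative, not part of the original answer:

// Play each chunk in order; the completion callback queues the next one.
void playChunk(final ArrayList<String> arr, final int index) {
    if (index >= arr.size()) return;
    try {
        MediaPlayer player = new MediaPlayer();
        player.setAudioStreamType(AudioManager.STREAM_MUSIC);
        player.setDataSource("http://translate.google.com/translate_tts?tl=fr&q="
                + arr.get(index).replace(" ", "%20"));
        player.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
            @Override
            public void onCompletion(MediaPlayer mp) {
                mp.release();
                playChunk(arr, index + 1); // send the next block only now
            }
        });
        player.prepare();
        player.start();
    } catch (Exception e) {
        e.printStackTrace();
    }
}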
3. Is there any need for me to use AsyncTask for this process?
Right now I don't see any need for that.
Simple: this is not a public API, so don't use it. Use Android's built-in TTS engine for speech synthesis. It does not have string-length limitations.
I would like to use an arbitrary InputStream as a data source for a MediaPlayer object.
The reason for this is that the InputStream I am using is in fact an authorized HTTPS connection to a media resource on a remote server. Passing the URL in that case will obviously not work, as authentication is required. I can, however, do the authentication separately and get an InputStream to the resource; the problem is what to do once I have it.
I thought about the option of using a named pipe and passing its FileDescriptor to the setDataSource method of MediaPlayer. Is there a way to create named pipes in Android (without using the NDK)?
Any other suggestion is most welcome.
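One candidate, for what it's worth: since API 9 a pipe pair can be created without the NDK via ParcelFileDescriptor.createPipe(). Whether MediaPlayer accepts the non-seekable read end varies by device and codec, so treat this as a hypothetical sketch rather than a known-good solution:

// Pump the authenticated stream into a pipe and hand the read end to the player.
void playFromPipe(MediaPlayer player, final InputStream authorizedStream)
        throws IOException {
    final ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
    new Thread(new Runnable() {
        @Override
        public void run() {
            OutputStream sink = new ParcelFileDescriptor.AutoCloseOutputStream(pipe[1]);
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = authorizedStream.read(buf)) != -1) {
                    sink.write(buf, 0, n); // relay the HTTPS stream into the pipe
                }
                sink.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }).start();
    player.setDataSource(pipe[0].getFileDescriptor()); // read end goes to the player
}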
I think I have found a solution. I would appreciate it if others who are interested would try this on their own and report the results with their device models and SDK versions.
I have seen similar posts that point to this, but I thought I would post it anyway since it is newer and seems to work on newer versions of the SDK - so far it works on my Nexus One running Android 2.3.6.
The solution relies on buffering the input stream to a local file (I have this file on external storage, but it will probably be possible to place it on internal storage as well) and providing that file's descriptor to the MediaPlayer instance.
The following runs in the doInBackground method of an AsyncTask that handles audio playback:
@Override
protected Void doInBackground(LibraryItem... params)
{
    ...
    MediaPlayer player = new MediaPlayer();
    setListeners(player);

    try {
        _remoteStream = getMyInputStreamSomehow();

        File tempFile = File.createTempFile(...);
        tempFile.deleteOnExit();
        _localInStream = new FileInputStream(tempFile);
        _localOutStream = new FileOutputStream(tempFile);

        // Buffer an initial chunk before starting playback
        int buffered = bufferMedia(
            _remoteStream, _localOutStream, BUFFER_TARGET_SIZE // = 128KB for instance
        );

        player.setAudioStreamType(AudioManager.STREAM_MUSIC);
        player.setDataSource(_localInStream.getFD());
        player.prepareAsync();

        // Keep buffering the rest of the stream while playback runs
        int streamed = 0;
        while (buffered >= 0) {
            buffered = bufferMedia(
                _remoteStream, _localOutStream, BUFFER_TARGET_SIZE
            );
        }
    }
    catch (Exception exception) {
        // Handle errors as you see fit
    }
    return null;
}
The bufferMedia method buffers nBytes bytes or until the end of input is reached:
private int bufferMedia(InputStream inStream, OutputStream outStream, int nBytes)
    throws IOException
{
    final int BUFFER_SIZE = 8 * (1 << 10);
    byte[] buffer = new byte[BUFFER_SIZE]; // TODO: Do static allocation instead
    int buffered = 0, read = -1;

    while (buffered < nBytes) {
        read = inStream.read(buffer);
        if (read == -1) {
            break;
        }
        outStream.write(buffer, 0, read);
        outStream.flush();
        buffered += read;
    }

    if (read == -1 && buffered == 0) {
        return -1;
    }
    return buffered;
}
The setListeners method sets handlers for various MediaPlayer events. The most important one is the OnCompletionListener, which is invoked when playback is complete. In case of a buffer underrun (due to, say, a temporarily slow network connection) the player will reach the end of the local file and transition to the PlaybackCompleted state. I identify those situations by comparing the position of _localInStream against the size of the input stream. If the position is smaller, then playback is not really complete, and I reset the MediaPlayer to resume from where it stopped:
private void setListeners(MediaPlayer player)
{
    // Set some other listeners as well

    player.setOnSeekCompleteListener(
        new MediaPlayer.OnSeekCompleteListener()
        {
            @Override
            public void onSeekComplete(MediaPlayer mp)
            {
                mp.start();
            }
        }
    );

    player.setOnCompletionListener(
        new MediaPlayer.OnCompletionListener()
        {
            @Override
            public void onCompletion(MediaPlayer mp)
            {
                try {
                    long bytePosition = _localInStream.getChannel().position();
                    int timePosition = mp.getCurrentPosition();
                    int duration = mp.getDuration();

                    if (bytePosition < _track.size) {
                        // Buffer underrun: more data exists, so resume playback
                        mp.reset();
                        mp.setDataSource(_localInStream.getFD());
                        mp.prepare();
                        mp.seekTo(timePosition);
                    } else {
                        // True end of the track
                        mp.release();
                    }
                } catch (IOException exception) {
                    // Handle errors as you see fit
                }
            }
        }
    );
}
Another solution would be to start a proxy HTTP server on localhost. The media player connects to this server with setDataSource(Context context, Uri uri). This solution works better than the previous one and does not cause playback glitches.
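A rough sketch of that proxy idea; the single-connection handling, the hard-coded Content-Type, and the method name are assumptions for illustration, not the actual implementation:

// Serve the authenticated remote stream to MediaPlayer over a local socket.
// A real proxy needs HTTP request parsing, Range support, and error handling.
int startProxy(final InputStream remote) throws IOException {
    final ServerSocket server = new ServerSocket(0); // pick any free port
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                Socket client = server.accept(); // MediaPlayer connects here
                OutputStream out = client.getOutputStream();
                out.write("HTTP/1.0 200 OK\r\nContent-Type: audio/mpeg\r\n\r\n".getBytes());
                byte[] buf = new byte[8192];
                int n;
                while ((n = remote.read(buf)) != -1) {
                    out.write(buf, 0, n); // relay the remote stream to the player
                }
                out.close();
                client.close();
                server.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }).start();
    // Point MediaPlayer at Uri.parse("http://127.0.0.1:" + server.getLocalPort())
    return server.getLocalPort();
}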