This question has a background for iOS but I am also asking in general.
I am developing an app that needs to start and stop playing an audio file at specific points on the timeline. Apart from working out the duration of play by subtracting the start time from the stop time and using that duration to set a separate timer to stop playback, is there a more effective (exact) way to control the stop?
Currently, depending on whether the stop point is near or far, my app occasionally stops too late, and part of the audio after the specified stop point is played.
Now, I have examined an audio editing program, namely Audacity: selecting a range on the timeline and hitting play, Audacity starts and stops (at least to the naked ear or the feeble mind) precisely at the specified points. What is the underlying control mechanism, and how does it differ from the iOS API?
Could iOS employ the same or similar mechanism? How about Android?
Much appreciated.
With Android you can start and stop the playback of an audio file
at an exact time in milliseconds, just like in Audacity.
In Audacity the time is formatted like this: 00h00m00.000s.
The digits after the "." are the milliseconds, so you have to convert both times to milliseconds.
Then you can seek to the start time and pause after the (stop - start) time difference with a Handler.
See here Android: How to stop media (mp3) in playing when specific milliseconds come?
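A minimal sketch of that approach, assuming an already prepared MediaPlayer and start/stop positions already converted to milliseconds (the method and variable names are just illustrative):

    import android.media.MediaPlayer;
    import android.os.Handler;
    import android.os.Looper;

    // Plays only the [startMs, stopMs] segment of an already prepared player.
    void playSegment(final MediaPlayer player, int startMs, int stopMs) {
        Handler handler = new Handler(Looper.getMainLooper());

        player.seekTo(startMs);   // jump to the start of the segment
        player.start();

        // Pause once (stop - start) milliseconds have elapsed.
        handler.postDelayed(() -> {
            if (player.isPlaying()) {
                player.pause();
            }
        }, stopMs - startMs);
    }

Note this is still timer-based, so it inherits some scheduling jitter: the Runnable fires on the main thread and can arrive a few milliseconds late.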
But why not just cut the specific segment out in Audacity?
My application needs to play a silent mp3 in the background.
I use TTS synthesizeToFile() to convert text to mp3 and TTS playSilence() for playing silence.
While I can easily convert normal text to mp3, I can't figure out an easy way to play silence.
For those who would suggest "just don't play anything for x duration and that will be silence": that will not solve my objective, because if nothing is being played, other sound applications will assume that nothing is playing, whereas here I need an actual silent pause.
Secondly, when nothing is being played, Android by default shuts down the application.
Also, the solution is not to create a silent mp3 file manually and bundle it with the application, because the pause keeps varying and also depends on values chosen by the user.
Also, TTS playSilence() will not do the job, because Android does not consider it a background music application and shuts it down in about 5 seconds.
Thanks!
I found a solution myself and am sharing it for those who come looking for it here...
Use the same TTS.synthesizeToFile() method, but instead of plain text use the markup below:
<speak version="1.1" ><break time="1500ms" /></speak>
Replace 1500ms here with any duration you want, in either seconds or milliseconds, like "3s" or "3000ms".
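A minimal sketch of that call, assuming an already initialized TextToSpeech instance named tts and the API 21+ synthesizeToFile() overload (the file name and utterance ID are just placeholders):

    import android.content.Context;
    import android.speech.tts.TextToSpeech;
    import java.io.File;

    void writeSilence(Context context, TextToSpeech tts) {
        String ssmlSilence =
                "<speak version=\"1.1\"><break time=\"1500ms\"/></speak>";
        File out = new File(context.getFilesDir(), "silence.wav");

        // Renders 1.5 s of silence into a playable audio file.
        tts.synthesizeToFile(ssmlSilence, null, out, "silence-utterance");
    }

The resulting file can then be looped with a MediaPlayer so the app keeps counting as actively playing audio.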
I'm working on an Android app that plays video (using a VideoView). The video is meant to have both music (left and right) and narration, but I want to be able to selectively turn off the narration track in the MediaPlayer.
Is the correct way to do this to encode my mp4 video file with 3 audio tracks (right, left, and narration) and then turn off the narration audio track with deselectTrack()?
It is not clear to me from the documentation whether MediaPlayer can handle more than 2 audio tracks.
If the audio tracks are limited to 2, would it make sense to run two MediaPlayers simultaneously (syncing them up with seekTo()) when I want the narration track to play?
Thanks.
Sorry to burst your bubble, but...
1) You have a misunderstanding about what a "track" denotes. A track can have multiple channels (e.g., a stereo track has left and right channels). As I understand it, stereo is the extent of the Android AudioTrack implementation at present. I haven't yet checked if the OpenSL implementation is more extensive than the Java API.
2) Only 1 audio track can be selected at a time, so you wouldn't be able to have background and narration simultaneously in the way you were thinking.
3) Audio tracks can only be selected in the prepared state (i.e., not after playback has started). The documentation mentions this limitation is not ideal, so it will probably change in the future. If not for this problem, your goal could be accomplished with two audio tracks encoded in the stream, one with both background & narration, the other just background.
You will probably find it difficult to synchronize two MediaPlayers, but I haven't tried. Maybe this approach would be acceptable for your situation, although be forewarned the seekTo method isn't accurate. It depends on the encoding of the files.
Something I would try if I were you is to have two complete encoded videos, one with narration, the other without. Use two MediaPlayers and keep them both prepared. When you want to switch use seekTo to put the correct one at (or near) the desired location. That way you don't have to worry about synchronization. If the video is large, this method could use significantly more resources, though.
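A rough sketch of that switch, assuming two MediaPlayers (one per encoding) that are both already prepared, and keeping in mind that seekTo() only lands on a nearby sync frame:

    import android.media.MediaPlayer;

    // `current` is playing; `other` holds the same video with/without narration.
    void switchVersions(MediaPlayer current, MediaPlayer other) {
        int position = current.getCurrentPosition();  // current position in ms
        current.pause();

        other.seekTo(position);   // lands on or near the closest sync frame
        other.start();
    }

In practice you would also have to hand the video surface over to whichever player is active (setDisplay() or setSurface()), which is omitted here.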
I understand that there are some issues why Android can't play low-latency audio and has a >100 ms delay on everything (well... vibration is actually faster than audio! Shame on you!)... but is there some way to figure out how much earlier I need to trigger the sound so that it actually plays on time?
E.g., how do I calculate the audio delay?
I'm creating a rhythm game and I need to play "ticks" in sync with the music.
I'm using libGDX Sound (i.e. a sound pool) and its play() method at the moment.
Any suggestions?
Your app could emit a sound with the speaker and then use the microphone to detect the sound it emitted itself (something similar to remote.js).
Even though there are many variables involved (the mic will also have a latency), you can make the device calibrate itself by continuously trying to guess how long the sound takes to be detected, emitting it again and again until your guess gets "fine tuned".
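A very rough sketch of a single round-trip probe along those lines, assuming a quiet room, the RECORD_AUDIO permission, and an already loaded libGDX Sound called tick (the threshold and timeout are guesses you would tune):

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;
    import com.badlogic.gdx.audio.Sound;

    long measureRoundTripMs(Sound tick) {
        int sampleRate = 44100;
        int bufSize = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufSize);
        short[] buf = new short[bufSize];

        recorder.startRecording();
        long playedAt = System.nanoTime();
        tick.play(1f);                                  // emit the probe sound

        long detectedAt = -1;
        // Listen for up to one second for the probe to come back in.
        while (detectedAt < 0 && System.nanoTime() - playedAt < 1_000_000_000L) {
            int n = recorder.read(buf, 0, buf.length);
            for (int i = 0; i < n; i++) {
                if (Math.abs(buf[i]) > 10000) {         // crude amplitude threshold
                    detectedAt = System.nanoTime();
                    break;
                }
            }
        }
        recorder.stop();
        recorder.release();

        // The round trip includes both output and input latency; average
        // several probes to get a usable "play this much earlier" offset.
        return detectedAt < 0 ? -1 : (detectedAt - playedAt) / 1_000_000L;
    }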
I'm writing my first Android app, trying to play back two 10-minute sound files synchronously (imagine an instrumental track and an a cappella track), so that I can change the volume of each track independently. I am using two MediaPlayers for this, since a SoundPool is targeted at shorter audio samples, as far as I have read.
Now my problem is that, when pausing and resuming playback, the players sometimes are no longer in sync, even though I set their positions to the same value before resuming playback.
I know that this is somewhat inevitable, because they cannot be started at exactly the same moment and they may require different amounts of time to start playback, but: is there perhaps another approach that meets my requirements?
You can take a look at JetPlayer; it may accomplish the synchronization you want.
To use it, you author your audio channels (your instrument channel and vocal channel) as MIDI tracks in a JET file, and the player can keep them synchronized while allowing you to mute or unmute the different channels as appropriate.
The user guide for creating JET resources can be found here.
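For what it's worth, a tiny sketch of the mute/unmute part, where the .jet path, the segment parameters, and the track index of the vocal part are all placeholders for your own JET content:

    import android.media.JetPlayer;

    void playWithMutableVocals() {
        JetPlayer jet = JetPlayer.getJetPlayer();
        jet.loadJetFile("/sdcard/mysong.jet");         // placeholder path

        // Queue the first segment with nothing muted, then start playback.
        jet.queueJetSegment(0, -1, 0, 0, 0, (byte) 0);
        jet.play();

        // Later: mute just the vocal track (track index is content-specific)
        // while the instrument track keeps playing in sync.
        jet.setMuteFlag(1, true, true);
    }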
I was trying to make a voice-capturing app that only records when there is noise.
So I used the getMaxAmplitude() method from the MediaRecorder class.
Here is my idea and what I have done:
I started a service that uses a MediaRecorder object to record sound from the emulator's mic, with a thread running to check the getMaxAmplitude() value from that object. If it goes above a particular level, I start another recording, using a new MediaRecorder object, for a period of time and then save it. If there are, for example, "3 noises" after starting the service, the app should then save 4 audio files, including the main one used to monitor the amplitude level.
BUT I noticed a problem: the Android microphone only allows one MediaRecorder at a time.
So is there any other way to do this?
You might note the timestamps for the start and end of the noises, and then pull out the needed sections from the audio file at a later time.
Of course, this may not fit exactly with what you're trying to do... tough to say at this point.
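If it helps, a minimal sketch of that single-recorder idea, with a placeholder output path and a threshold you would tune; it records continuously and just logs the noisy time ranges for cutting out later:

    import android.media.MediaRecorder;
    import android.util.Log;
    import java.util.ArrayList;
    import java.util.List;

    void recordAndLogNoise(String outputPath) throws Exception {
        // Pairs of [startMs, endMs] relative to the start of the recording.
        List<long[]> noisyRanges = new ArrayList<>();
        int threshold = 8000;                          // tune per device

        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(outputPath);            // placeholder path
        recorder.prepare();
        recorder.start();

        long startTime = System.currentTimeMillis();
        long noiseStart = -1;

        // Poll the amplitude a few times a second instead of opening a
        // second MediaRecorder, then extract the ranges from the file later.
        for (int i = 0; i < 600; i++) {                // ~60 s demo loop
            int amp = recorder.getMaxAmplitude();
            long now = System.currentTimeMillis() - startTime;
            if (amp > threshold && noiseStart < 0) {
                noiseStart = now;                      // noise began
            } else if (amp <= threshold && noiseStart >= 0) {
                noisyRanges.add(new long[] { noiseStart, now });
                noiseStart = -1;                       // noise ended
            }
            Thread.sleep(100);
        }
        recorder.stop();
        recorder.release();
        Log.d("NoiseLogger", "noisy ranges found: " + noisyRanges.size());
    }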