I'm looking for some way in Android to play in-memory audio in a manner analogous to the waveOutOpen family of methods in Windows programming.
The waveOut... methods essentially let an application create arrays of sample values (like in-memory WAV files without the headers) and dump them into a queue for sequential playback. Windows transitions seamlessly from one array to the next, so as long as the application keeps dumping arrays into the queue ahead of playback, the program can create and play continuous audio of any arbitrary length. The Windows API also incorporates a callback mechanism that the application can use to indicate progress and load additional buffers.
As far as I can tell, the Android audio API lets an application play a file from local storage or a URL, or from a memory stream. Is there any way to get Android to "queue up" MediaPlayer.start() calls so that one player transitions (without glitches) into the next upon play completion? It appears that Jet does something like this, but only with its own internal synthesis engine.
Is there any other way of accessing Android audio in a waveOutOpen way?
android.media.AudioTrack
... is the class you are probably looking for.
http://developer.android.com/reference/android/media/AudioTrack.html#AudioTrack%28int,%20int,%20int,%20int,%20int,%20int%29
After creating it, you simply feed it binary PCM data in the format you specified, using the following method:
AudioTrack.write(...)
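A minimal sketch of the streaming usage, assuming 16-bit mono PCM at 44.1 kHz generated on the fly (the class name and the sine generator are placeholders for whatever samples your app produces):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class GeneratedTonePlayer {

    // Generate and play `seconds` seconds of a sine tone, feeding the track
    // chunk by chunk much like waveOutWrite feeds buffers on Windows.
    public void play(double frequencyHz, int seconds) {
        final int sampleRate = 44100;
        int minBufBytes = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBufBytes, AudioTrack.MODE_STREAM);
        track.play();

        short[] chunk = new short[minBufBytes / 2];
        int written = 0;
        final int total = sampleRate * seconds;
        while (written < total) {
            for (int i = 0; i < chunk.length; i++) {
                chunk[i] = (short) (Short.MAX_VALUE
                        * Math.sin(2.0 * Math.PI * frequencyHz * (written + i) / sampleRate));
            }
            // write() blocks until the track has drained enough of the earlier
            // data, so this loop naturally stays ahead of playback and there is
            // no audible gap between consecutive chunks.
            track.write(chunk, 0, chunk.length);
            written += chunk.length;
        }

        track.stop();
        track.release();
    }
}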
I'd guess that playing small audio clips is something many applications need, so I would expect Qt to support playing MP3 data held in memory. Decoding the MP3 data to WAV data in memory might be one solution, but that requires decoding all of the data up front, which is not a good idea for a real-time application. It also doesn't make sense to store mp3_data in a file and ask QMediaPlayer to play that; the performance is unacceptable.
This is my code after many searches by google, including stackoverflow:
m_buffer.setBuffer(&mp3_data_in_memory);
m_player.setMedia(QMediaContent(), &m_buffer);
m_player.play();
where m_buffer is a QBuffer, mp3_data_in_memory is a QByteArray, and m_player is a QMediaPlayer.
I have seen reports that this code doesn't work on macOS and iOS, but I am running on Android.
Does anyone have a solution for Android system? Thanks a lot.
Your code won't work because the media property requires a valid QMediaContent instance:
Setting this property to a null QMediaContent will cause the player to
discard all information relating to the current media source and to
cease all I/O operations related to that media.
There's also no way of telling the QMediaPlayer what format the data is in; you're just dumping raw data on it. In principle QMediaResource can hold this information, but it requires a url and is regarded as null without one.
As you may have guessed, QMediaPlayer and the related classes are high-level constructs not designed for this sort of thing. You need to use a QAudioDecoder to actually decode the raw data, and pipe the output to a QAudioOutput to hear it.
Hello sages of the Overflowing Stack, Android noob here..
I'm using CSipSimple and want to stream the call audio to another app in chunks of one second of audio data, so that it can process the raw PCM data.
The code that handles the audio in CSipSimple is native, so I'd prefer native approaches rather than calling back into Java.
I thought of a few ways of doing so:
Use audio streaming and let the other app get it.
Writing the data to a file and let the other app read it.
Calling a service in the other application (AIDL)
Using intents.
These are the considerations behind my dilemma:
Streaming looks like the natural choice, but I couldn't find Android support for retrieving raw PCM data from an audio stream. The intent mechanism is flexible and convenient, but I don't think that's what intents are meant for. Using a file seems cumbersome, although it's well supported. Finally, using a service seems like a good option, but it appears less flexible and probably needs more error handling and thread management.
Can you guys point out the best alternative?
If you have another one you're welcome to share it..
I don't know about the streaming audio API support, so I won't touch on that case.
As for writing the data to a file and letting the other application read it: this is a possible way to solve your problem.
As for calling a service through AIDL and using intents, I don't think either is a good solution. The problem is that Binder limits the amount of data that can be passed in a single transaction (about 1 MB).
In my view the best solution, especially since you're working in native code, is to use ashmem. This is a shared memory driver developed specifically for Android. In your service you create a shared memory region and pass a reference to it to your client app, which then reads the information from that memory.
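The answer above is about the native ashmem driver. Purely as an illustration of the same idea from the Java side: newer Android releases (API 27+) expose ashmem as android.os.SharedMemory, which is Parcelable and can therefore be handed to the client over an AIDL call. A rough sketch, where the region size and the AIDL method are invented for illustration:

import android.os.SharedMemory;
import android.system.ErrnoException;
import java.nio.ByteBuffer;

public class AudioChunkRegion {
    private final SharedMemory region;
    private final ByteBuffer mapped;

    // Service side: create an ashmem-backed region big enough for one second
    // of 16-bit mono PCM at 8 kHz (hypothetical numbers).
    public AudioChunkRegion() throws ErrnoException {
        region = SharedMemory.create("call_audio_chunk", 8000 * 2);
        mapped = region.mapReadWrite();
    }

    // Copy the latest PCM chunk into the shared region.
    public void publish(byte[] pcmChunk) {
        mapped.clear();
        mapped.put(pcmChunk, 0, Math.min(pcmChunk.length, mapped.capacity()));
    }

    // Return the Parcelable handle. An AIDL method such as
    //     SharedMemory getCurrentChunk();   // hypothetical interface
    // can hand this to the client, which calls mapReadOnly() and reads the
    // samples without any bulk copy crossing the Binder transaction limit.
    public SharedMemory handle() {
        return region;
    }
}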
I need to implement playback of separate audio files in N channels; files may play sequentially or in parallel. I need to implement this on Android.
Timeline:
|file a...|file d....|file b...|......|file k|....
|.....|file g|file c..|file p.|....
I'm considering two options, one of them being FMOD to decompress the files and play them simultaneously. I have done some research and FMOD seems to fit well, and to be much easier than playing this manually with an AudioTrack. However, I can't work out whether FMOD would allow us to save the entire merged output without playing it through.
I know that using the solution here we can redirect output to a wav file, but is it possible to just create the final output directly and save it with FMOD? Or will I have to merge the PCM streams into one myself after all?
Thanks.
An important question here is why you need to save the files out; if it's possible to do this offline, it would be a lot simpler. If you must record the concatenation of several files (including others played in parallel), it is quite possible with FMOD.
One way would be to use wave-writer-nrt output mode, which allows you to output a wav file based on FMOD playsound calls in faster than realtime.
Another way is to use a custom DSP to access the data stream of any submix as it plays, useful if you want other sounds actually playing at the same time.
Another is to simply create the sound objects, then use Sound::lock to access the PCM data, which you could concatenate yourself to a destination. Keep in mind all the sounds would need to have the same sample rate and channel count, otherwise you would need to do processing. Also keep in mind you cannot do this for parallel sounds unless you want to mix the audio yourself.
I'm attempting to stream from a URL using Android's built-in MediaPlayer class. However, I also need to send a special header along with the URL. Is this possible without having to rewrite the whole streaming process?
If it's not possible to send a header, I would need to stream the file manually. However, it appears that the MediaPlayer class locks the file you are writing to when it begins reading it. This means you can't simply continue writing to the file while reading from it. I've seen the 'double buffer' method, but that results in choppy playback. Any suggestions?
I asked a question recently about alternatives to the double buffer method you mentioned:
is-there-a-better-way-to-save-streamed-files-with-mediaplayer
I guess you could act as a proxy in a thread, handle your header and forward the rest to the media player? Or if you control the server pass the extra data in a different request...
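A rough sketch of that proxy idea, assuming a single MediaPlayer client and one fixed remote URL; the header name and class name are invented, and a real implementation would also have to forward Range requests and handle errors:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.URL;

public class HeaderInjectingProxy extends Thread {
    private final String remoteUrl;
    private final ServerSocket serverSocket;

    public HeaderInjectingProxy(String remoteUrl) throws Exception {
        this.remoteUrl = remoteUrl;
        this.serverSocket = new ServerSocket(0);   // any free local port
    }

    // Point MediaPlayer.setDataSource() at this URL instead of the real one.
    public String localUrl() {
        return "http://127.0.0.1:" + serverSocket.getLocalPort() + "/";
    }

    @Override
    public void run() {
        try (Socket client = serverSocket.accept()) {
            // Fetch the real stream, attaching the special header that
            // MediaPlayer itself cannot send.
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(remoteUrl).openConnection();
            conn.setRequestProperty("X-Custom-Header", "secret");   // invented header

            OutputStream out = client.getOutputStream();
            out.write(("HTTP/1.0 200 OK\r\nContent-Type: "
                    + conn.getContentType() + "\r\n\r\n").getBytes("UTF-8"));

            // Relay the body straight through to MediaPlayer.
            InputStream in = conn.getInputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            out.flush();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Start the thread first, then call mediaPlayer.setDataSource(proxy.localUrl()) and prepare the player as usual.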
Short version: What is the best way to get data encoded in an MP3 (and ideally in AAC/Ogg/WMA) into a Java array or ByteBuffer that I can then manipulate?
I'm putting together a program that has slowing down and speeding up sound files as one of its features. This works fine for WAV files, which are a header plus the exact binary data that needs to be sent to the speaker, and now I need to implement it for MP3 (ideally, this would also support AAC, Ogg, and WMA, but since those are less popular formats this is not required). Android does not expose an interface to decode the MP3 without playing it, so I need to create that interface.
Three options present themselves, though I'm open to others:
1) Write my own decoder. I already have a functional frame detector that I was hoping to use for option (3), and now should only need to implement the Huffman decoding tables.
2) Use JLayer, or an equivalent Java library, to handle the decoding (a rough sketch of this follows after the list). I'm not entirely clear on what the license ramifications are here.
3) Connect to the libmedia library/MediaPlayerService. This is what SoundPool does, and the amount of use of that service makes me believe that while it's officially unstable, that implementation isn't going anywhere. This means writing JNI code to connect to the service, but I'm finding that that's a deep rabbit hole. At the surface, I'm having trouble with the sp<> template.
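For illustration only, the decode loop for option (2) with JLayer would look roughly like the sketch below (the consumer interface is invented, and real code would also need to accumulate or resample the decoded output):

import java.io.InputStream;
import javazoom.jl.decoder.Bitstream;
import javazoom.jl.decoder.Decoder;
import javazoom.jl.decoder.Header;
import javazoom.jl.decoder.SampleBuffer;

public final class Mp3ToPcm {

    // Decode every MP3 frame in the stream and hand the 16-bit PCM samples
    // of each frame to the caller through the (hypothetical) consumer below.
    public static void decode(InputStream mp3, SampleConsumer consumer) throws Exception {
        Bitstream bitstream = new Bitstream(mp3);
        Decoder decoder = new Decoder();
        Header frame;
        while ((frame = bitstream.readFrame()) != null) {
            // With the default configuration decodeFrame() returns a
            // SampleBuffer holding interleaved 16-bit samples for this frame.
            SampleBuffer pcm = (SampleBuffer) decoder.decodeFrame(frame, bitstream);
            consumer.accept(pcm.getBuffer(), pcm.getBufferLength(),
                    pcm.getSampleFrequency(), pcm.getChannelCount());
            bitstream.closeFrame();
        }
        bitstream.close();
    }

    public interface SampleConsumer {
        void accept(short[] samples, int length, int sampleRate, int channels);
    }
}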
I did that with libmad and the NDK. JLayer is way too slow, and the media framework is a moving target. You can find info and source code at http://apistudios.com/hosted/marzec/badlogic/wordpress/?p=231
I have not tried it, but mp3transform is LGPL.