Saving output using FMOD without playback - Android

I need to implement playback of separate audio files across N channels; files may play sequentially or in parallel. I need to implement this on Android.
Timeline:
|file a...|file d....|file b...|......|file k|....
|.....|file g|file c..|file p.|....
I'm considering two options, one of which is using FMOD to decompress the files and play them simultaneously. From my research FMOD seems to fit well, and it looks much easier than playing this manually with an AudioTrack. However, I can't work out whether FMOD would allow us to save the entire merged output without playing it through.
I know that using the solution here we can redirect output to a WAV file, but is it possible to generate the final output directly and save it with FMOD, rather than playing it back in real time? Or will I have to manually merge the PCM streams into one after all?
Thanks.

An important question here is why you need to save the files out; if it's possible to do this offline, it would be a lot simpler. If you must record the concatenation of several files (including others played in parallel), it is quite possible with FMOD.
One way would be to use the wave-writer-nrt output mode, which lets you render a WAV file from FMOD playSound calls faster than real time (see the sketch at the end of this answer).
Another way is to use a custom DSP to access the data stream of any submix as it plays, useful if you want other sounds actually playing at the same time.
Another is to simply create the sound objects and then use Sound::lock to access the PCM data, which you can concatenate yourself into a destination. Keep in mind all the sounds would need to have the same sample rate and channel count; otherwise you would need to do extra processing. Also keep in mind you cannot do this for parallel sounds unless you want to mix the audio yourself.
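For illustration, here is a minimal sketch of the wave-writer-nrt approach using the FMOD Core C++ API. The channel count, the input file name, and the "mix.wav" output path are placeholders, error checking is omitted, and the playSound signature shown is the newer Core API one (FMOD Ex takes a FMOD_CHANNELINDEX argument instead), so adjust for whichever version you are on:

    #include "fmod.hpp"

    int main()
    {
        FMOD::System *system = nullptr;
        FMOD::System_Create(&system);

        // Select the non-realtime WAV writer before init; the output file name
        // is passed as extradriverdata. Nothing is sent to the audio device.
        system->setOutput(FMOD_OUTPUTTYPE_WAVWRITER_NRT);
        system->init(32, FMOD_INIT_STREAM_FROM_UPDATE, (void *)"mix.wav");

        FMOD::Sound *sound = nullptr;
        FMOD::Channel *channel = nullptr;
        system->createSound("file_a.mp3", FMOD_DEFAULT, nullptr, &sound);
        system->playSound(sound, nullptr, false, &channel);  // schedule further sounds as needed

        // Each update renders one mixer block into the WAV file, so the whole
        // timeline is written out faster than real time.
        bool playing = true;
        while (playing)
        {
            system->update();
            if (channel->isPlaying(&playing) != FMOD_OK)
                playing = false;
        }

        sound->release();
        system->release();
        return 0;
    }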

Related

Playing two different streams (files) simultaneously with proper sync on Android

I'm using two MediaPlayer instances to send two different streams to each channel. Half the time it works fine, but sometimes there is a lag between the left and right channels which is clearly audible. Is there any alternative, apart from SoundPool, for playing multiple audio files simultaneously and in sync on Android? SoundPool is not suitable for my application since the audio files are large (approx. 20 MiB each). The audio file format in question is FLAC.
I found that there are no proper built-in mixing capabilities in the Android API. I ended up using WAV files instead of FLAC and mixing them on the fly as needed. Here is a high-level description of how I achieved it:
Read both WAV files and save the data part of each into a byte array (don't forget to strip out the header bytes)
Mix them byte by byte to generate a unified WAV file (the core arithmetic is sketched at the end of this answer)
In my use case, I just needed to mix left and right channels, but one can do all sorts of transformations as needed.
Create a temporary file to hold the mixed data
Play the temporary file with MediaPlayer
One can also use AudioTrack to play the resulting byte array without storing it in a temporary file, but I chose MediaPlayer for its built-in seekTo functionality.
Hope this approach is helpful.
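For reference, the mixing step above boils down to per-sample arithmetic on the 16-bit PCM payload. Below is a minimal sketch (in C++ rather than Java, with illustrative names) that sums two decoded buffers and clamps the result; the poster's own case routed one file to the left channel and the other to the right, but the loop structure is the same:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Sum two 16-bit PCM buffers (same sample rate and channel layout assumed)
    // and clamp to the 16-bit range so overflowing samples do not wrap around.
    std::vector<int16_t> mixPcm(const std::vector<int16_t> &a,
                                const std::vector<int16_t> &b)
    {
        const std::size_t n = std::max(a.size(), b.size());
        std::vector<int16_t> out(n);
        for (std::size_t i = 0; i < n; ++i)
        {
            int32_t s = (i < a.size() ? a[i] : 0) + (i < b.size() ? b[i] : 0);
            s = std::min<int32_t>(32767, std::max<int32_t>(-32768, s));
            out[i] = static_cast<int16_t>(s);
        }
        return out;
    }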

Need QMediaPlayer to play mp3 data in memory

I guess small audio clips are needed by many applications, so I would expect Qt to support playing MP3 data held in memory. Decoding the MP3 data to WAV data in memory might be one solution, but that requires decoding all of the data first, which is not a good idea for a real-time application. It also doesn't make sense to store mp3_data in a file and ask QMediaPlayer to play that; the performance is unacceptable.
This is my code after a lot of searching on Google, including Stack Overflow:
m_buffer.setBuffer(&mp3_data_in_memory);
m_player.setMedia(QMediaContent(), &m_buffer);
m_player.play();
where m_buffer is a QBuffer instance, mp3_data_in_memory is a QByteArray, and m_player is a QMediaPlayer instance.
I have read that this code doesn't work on macOS and iOS, but I am running on Android.
Does anyone have a solution for Android system? Thanks a lot.
Your code won't work because the media property requires a valid QMediaContent instance:
Setting this property to a null QMediaContent will cause the player to discard all information relating to the current media source and to cease all I/O operations related to that media.
There's also no way of telling QMediaPlayer what format the data is in; you're just dumping raw data on it. In principle QMediaResource can hold this information, but it requires a URL and is regarded as null without one.
As you may have guessed, QMediaPlayer and the related classes are high-level constructs not designed for this sort of thing. You need to use a QAudioDecoder to actually decode the raw data, and pipe the output to a QAudioOutput to hear it.
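As a rough sketch of what that pipeline can look like with Qt 5's QtMultimedia (whether QAudioDecoder is actually available depends on the platform backend, and it is not guaranteed on Android; the name mp3_data_in_memory mirrors the question and error handling is left out):

    #include <QAudioBuffer>
    #include <QAudioDecoder>
    #include <QAudioOutput>
    #include <QBuffer>
    #include <QByteArray>
    #include <QCoreApplication>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QByteArray mp3_data_in_memory;           // fill with your encoded bytes
        QBuffer buffer(&mp3_data_in_memory);
        buffer.open(QIODevice::ReadOnly);

        QAudioDecoder decoder;
        decoder.setSourceDevice(&buffer);        // decode straight from memory

        QAudioOutput *output = nullptr;          // created once the PCM format is known
        QIODevice *sink = nullptr;

        QObject::connect(&decoder, &QAudioDecoder::bufferReady, [&]() {
            QAudioBuffer frame = decoder.read();
            if (!output) {
                output = new QAudioOutput(frame.format());
                sink = output->start();          // push mode: we write the samples
            }
            sink->write(static_cast<const char *>(frame.constData()),
                        frame.byteCount());
        });
        QObject::connect(&decoder, &QAudioDecoder::finished,
                         &app, &QCoreApplication::quit);

        decoder.start();
        return app.exec();
    }

In Qt 6 the PCM output class is QAudioSink rather than QAudioOutput, but the decoder-to-sink structure is the same.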

How to merge two sound objects into one sound object? (AIR for Android) [duplicate]

How can I merge two sounds and save them as a new file? One sound is a loaded MP3 file and the other comes from the microphone. Then I need to upload this sound to a server. Is this possible?
All of this can be done, but if you're looking for a simple example with just a few methods to call, I'm afraid it's not that easy.
You can extract bytes from a sound with Sound.extract(). The data is sound amplitude as 32-bit floating-point samples, left and right channels interleaved; use ByteArray.readFloat() to get them.
Microphone data can be captured with SampleDataEvent.SAMPLE_DATA; see the example here. To mix it with the song, just add the sound amplitudes and write the result into a third array. The result will essentially be uncompressed, WAV-format (headerless) sound data. You can upload it raw, or search for "as3 mp3 encoder"; such encoders are rare and written by enthusiasts, so you may or may not get them to work. Also, to mix the sounds correctly, the sample rates of the microphone data and the sound file must be equal.
As for the upload part: if this were a file on disk, it would be easy with FileReference.upload(). But there is only data in memory, so you can look into the Socket class to send it.

Low-level audio API for Android

I'm looking for some way in Android to play in-memory audio in a manner analogous to the waveOutOpen family of methods in Windows programming.
The waveOut... methods essentially let an application create arrays of sample values (like in-memory WAV files without the headers) and dump them into a queue for sequential playback. Windows transitions seamlessly from one array to the next, so as long as the application keeps dumping arrays into the queue ahead of playback, the program can create and play continuous audio of any arbitrary length. The Windows API also incorporates a callback mechanism that the application can use to indicate progress and load additional buffers.
As far as I can tell, the Android audio API lets an application play a file from local storage or a URL, or from a memory stream. Is there any way to get Android to "queue up" MediaPlayer.start() calls so that one player transitions (without glitches) into the next upon play completion? It appears that Jet does something like this, but only with its own internal synthesis engine.
Is there any other way of accessing Android audio in a waveOutOpen way?
android.media.AudioTrack
... is the class you are probably looking for.
http://developer.android.com/reference/android/media/AudioTrack.html#AudioTrack%28int,%20int,%20int,%20int,%20int,%20int%29
After creating it you simply feed it binary data in the format you specified, using the following method:
AudioTrack.write(...)

Decoding Encoded Audio Data (MP3s, etc) on Android Without Playing It

Short version: What is the best way to get data encoded in an MP3 (and ideally in AAC/Ogg/WMA) into a Java array or ByteBuffer that I can then manipulate?
I'm putting together a program that has slowing down and speeding up sound files as one of its features. This works fine for WAV files, which are a header plus the exact binary data that needs to be sent to the speaker, and now I need to implement it for MP3 (ideally this would also support AAC, Ogg, and WMA, but since those are less popular formats it is not required). Android does not expose an interface to decode an MP3 without playing it, so I need to create that interface.
Three options present themselves, though I'm open to others:
1) Write my own decoder. I already have a functional frame detector that I was hoping to use for option (3), and now should only need to implement the Huffman decoding tables.
2) Use JLayer, or an equivalent Java library, to handle the decoding. I'm not entirely clear on what the license ramifications are here.
3) Connect to the libmedia library/MediaPlayerService. This is what SoundPool does, and the amount of use that service gets makes me believe that while it's officially unstable, that implementation isn't going anywhere. This means writing JNI code to connect to the service, but I'm finding that that's a deep rabbit hole. At the surface, I'm having trouble with the sp<> template.
I did that with libmad and the NDK. JLayer is way too slow and the media framework is a moving target. You can find info and source code at http://apistudios.com/hosted/marzec/badlogic/wordpress/?p=231
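For orientation, decoding with libmad via the NDK typically follows the callback pattern of its bundled minimad example. A rough sketch (the struct and handler names are illustrative, and storing or playing the decoded samples is left to you):

    #include <mad.h>

    #include <cstdint>
    #include <vector>

    struct DecodeState {
        const unsigned char *data;     // encoded MP3 bytes
        unsigned long length;
        std::vector<int16_t> pcm;      // decoded, interleaved 16-bit samples
    };

    // Hand the whole encoded buffer to the decoder once, then stop.
    static mad_flow onInput(void *user, mad_stream *stream) {
        DecodeState *st = static_cast<DecodeState *>(user);
        if (st->length == 0)
            return MAD_FLOW_STOP;
        mad_stream_buffer(stream, st->data, st->length);
        st->length = 0;
        return MAD_FLOW_CONTINUE;
    }

    // libmad outputs fixed-point samples; round, clip and shift down to 16 bits.
    static int16_t toPcm16(mad_fixed_t sample) {
        sample += (1L << (MAD_F_FRACBITS - 16));
        if (sample >= MAD_F_ONE)  sample = MAD_F_ONE - 1;
        if (sample < -MAD_F_ONE)  sample = -MAD_F_ONE;
        return static_cast<int16_t>(sample >> (MAD_F_FRACBITS + 1 - 16));
    }

    static mad_flow onOutput(void *user, const mad_header *, mad_pcm *pcm) {
        DecodeState *st = static_cast<DecodeState *>(user);
        for (unsigned int i = 0; i < pcm->length; ++i) {
            st->pcm.push_back(toPcm16(pcm->samples[0][i]));          // left
            if (pcm->channels == 2)
                st->pcm.push_back(toPcm16(pcm->samples[1][i]));      // right
        }
        return MAD_FLOW_CONTINUE;
    }

    static mad_flow onError(void *, mad_stream *, mad_frame *) {
        return MAD_FLOW_CONTINUE;                                    // skip bad frames
    }

    std::vector<int16_t> decodeMp3(const unsigned char *data, unsigned long length) {
        DecodeState st{data, length, {}};
        mad_decoder decoder;
        mad_decoder_init(&decoder, &st, onInput, nullptr, nullptr,
                         onOutput, onError, nullptr);
        mad_decoder_run(&decoder, MAD_DECODER_MODE_SYNC);
        mad_decoder_finish(&decoder);
        return st.pcm;
    }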
I have not tried it, but mp3transform is LGPL.
