Need QMediaPlayer to play MP3 data in memory - Android

Small audio clips are needed by many applications, so I would expect Qt to support playing MP3 data from memory. Decoding the MP3 data to WAV data in memory might be one solution, but that requires decoding all of the data first, which is not a good idea for a real-time application. It also doesn't make sense to write mp3_data to a file and ask QMediaPlayer to play that file; the performance is unacceptable.
This is my code, after much searching on Google and Stack Overflow:
m_buffer.setBuffer(&mp3_data_in_memory);
m_player.setMedia(QMediaContent(), &m_buffer);
m_player.play();
where m_buffer is a QBuffer instance, mp3_data_in_memory is a QByteArray, and m_player is a QMediaPlayer.
I have found reports that this code doesn't work on macOS and iOS either, but I am running on Android.
Does anyone have a solution for Android? Thanks a lot.

Your code won't work because the media property requires a valid QMediaContent instance:
Setting this property to a null QMediaContent will cause the player to discard all information relating to the current media source and to cease all I/O operations related to that media.
There's also no way of telling the QMediaPlayer what format the data is in; you're just dumping raw data on it. In principle QMediaResource can hold this information, but it requires a URL and is regarded as null without one.
As you may have guessed, QMediaPlayer and the related classes are high-level constructs not designed for this sort of thing. You need to use a QAudioDecoder to actually decode the raw data, and pipe the output to a QAudioOutput to hear it.
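A minimal sketch of that decoder-to-output pipeline is below. It assumes Qt 5 and, importantly, a multimedia backend that actually implements QAudioDecoder; that support varies by platform and has historically been missing on Android, so verify it on your target device before committing to this design. playMp3FromMemory is a hypothetical helper name, not part of Qt.
// Decode in-memory MP3 bytes with QAudioDecoder and push the resulting
// PCM into QAudioOutput. Error handling omitted for brevity.
#include <QAudioBuffer>
#include <QAudioDecoder>
#include <QAudioFormat>
#include <QAudioOutput>
#include <QBuffer>

void playMp3FromMemory(QByteArray *mp3Data, QObject *parent)
{
    auto *buffer = new QBuffer(mp3Data, parent);   // wrap the in-memory MP3 bytes
    buffer->open(QIODevice::ReadOnly);

    QAudioFormat format;                           // PCM format we ask the decoder for
    format.setSampleRate(44100);
    format.setChannelCount(2);
    format.setSampleSize(16);
    format.setCodec("audio/pcm");
    format.setByteOrder(QAudioFormat::LittleEndian);
    format.setSampleType(QAudioFormat::SignedInt);

    auto *decoder = new QAudioDecoder(parent);
    decoder->setAudioFormat(format);
    decoder->setSourceDevice(buffer);              // decode from memory, not a file

    auto *output = new QAudioOutput(format, parent);
    QIODevice *sink = output->start();             // push-mode device we write PCM into

    QObject::connect(decoder, &QAudioDecoder::bufferReady, [=]() {
        const QAudioBuffer pcm = decoder->read();
        // A production version should respect output->bytesFree(); blindly
        // writing everything can overrun small device buffers.
        sink->write(pcm.constData<char>(), pcm.byteCount());
    });
    decoder->start();
}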

Related

How to implement a seekable media input stream using Sodium for decryption

We have a product consisting of documents and media files that are encrypted for DRM protection. On the production side, we have a Python script that encrypts the files, and on the client side, an Android app that decrypts them. This means we need to have an encryption/decryption scheme that can work compatibly on both Python and Android platforms. I've settled on libsodium/NaCl because it is available on both platforms, is free and open source, and it's designed with high-level APIs that are supposed to provide "Expert selection of default primitives" (http://nacl.cr.yp.to/features.html), thus helping the developer get things configured right without having to be an expert in the details of cryptography parameters.
Based on that, I've been able to test successfully that data encrypted by Sodium on Python can be decrypted by Sodium on Android. There's a fair bit of learning time invested in Sodium, so I'd prefer not to have to change that, if at all possible.
However, when it comes to playing large DRM-protected videos on the Android side, I believe we need a solution that works for streaming, not just for decrypting a whole file into memory. Currently we are just reading the whole file into memory and decrypting it:
final byte[] plainBytes = secretBox.decrypt(nonce, cipherText);
Obviously that's not going to work well with large video files. If we were using javax.crypto.Cipher instead of Sodium, we could use it to implement a CipherInputStream (and use that to implement an exoplayer2.upstream.DataSource or something). But I'm having difficulty seeing how to use libsodium to implement a decryption stream.
The libsodium library I'm using does provide bindings to "stream" functions, but this meaning of "stream" seems to be somewhat different from "streaming" in the sense of a Java InputStream. Moreover, all those functions seem to be specific to the low-level details that libsodium has, until now, not required me to be aware of: chacha20, salsa20, xsalsa20, xchacha20poly1305, and so on. Up to this point, I have had no idea which of these algorithms is being used on either side; SecretBox just works.
So I guess the question I would like answered most is, how can libsodium be used in Android to provide seekable, streaming decryption? Do you know of any good example code?
Subquestions of that:
Admittedly, now that I look closer at the docs, I see that the PyNaCl SecretBox uses the XSalsa20 stream cipher. I wonder if I can count on that always being the case, since I'm supposed to be insulated from those details?
I think for media playing, you need more than just streaming, in the sense of being able to consume a small piece at a time, in sequence. For typical usage, you also need it to be seekable: the user wants to be able to skip back 5 seconds without having to wait for the player to reset to the beginning of the stream and process/decrypt the whole thing again up to 5 seconds ago.
Is it feasible that I could use javax.crypto.Cipher on the Android side, but configure it to be compatible with the encryption algorithm (XSalsa20) and its parameters from the PyNaCl SecretBox production process?
Update:
To clarify,
The issue of decryption key delivery is already solved to our satisfaction, so that is not what I'm asking help on here.
Our app is completely offline, so the streaming issues I mentioned have to do with loading and decrypting files from local storage, rather than waiting for downloads.
For video you might find it easier to use existing mechanisms, as they will have already solved most of your issues.
For most video applications you will want to stream the video and play/seek as you go, rather than having to download the entire video, as you point out.
At this time there are three major DRM systems commonly used to encrypt content and share keys between the server and the client: Widevine, PlayReady and FairPlay. All three will support the functionality you want for streamed videos. The disadvantage is that you will usually have to pay to use these DRM services.
You can also use HLS or DASH, the adaptive bit rate (ABR) streaming protocols, to stream the video (https://stackoverflow.com/a/42365034/334402).
These also let you use key-sharing mechanisms that are less secure, but possibly adequate for your needs, in which the key is essentially shared in the clear while the content itself is still encrypted. These are both free and well supported:
HLS AES Encryption
DASH ClearKey Encryption
Have a look at these answers for examples of generating both streams: https://stackoverflow.com/a/45103073/334402, https://stackoverflow.com/a/46897097/334402
You can play back the streams using open-source players such as dash.js in the browser and ExoPlayer for native Android.
If you want more security but still want to avoid a commercial DRM, you could also modify the above to configure the key on your player client directly, rather than transmitting it from server to client.
You then run the risk that someone could hack or reverse engineer your client app to extract the key, but I think you would have that with your original approach anyway. The real value of DRM systems is not the content encryption, which is essentially just AES, but the mechanisms they use to securely transport and store the keys. Ultimately it is a question of cost and benefit; it sounds like your solution may work quite adequately with a custom key-configuration implementation.
As an aside, on the seeking question: most video formats are broken into groups of pictures, i.e. runs of frames that can be decoded independently of the rest of the video with the help of some header info. So you can decode at, or at least near, any given point without having to decode the entire video up to that point.
The thumbnails you see when you scroll or hover along the timeline in a player are generally a separate stream of still-image snapshots taken at regular intervals in the video. This allows the player to show the appropriate thumbnail as if it were showing the frame at that point in the video. If the user clicks at that point, the player requests that section of the video (if it does not already have it), decodes the relevant chunk, and starts playing it.

Streaming audio between apps in Android

Hello sages of the Overflowing Stack, Android noob here...
I'm using CSipSimple and want to stream the call audio to another app in chunks of 1 second of audio data, so that the other app can process the raw PCM data.
The code that handles the audio in CSipSimple is native, so I prefer native approaches rather than calling back into Java.
I thought of a few ways of doing so:
Use audio streaming and let the other app get it.
Writing the data to a file and let the other app read it.
Calling a service in the other application (AIDL)
Using intents.
These are the considerations leading to my dilemma:
Streaming looks like the natural choice, but I couldn't find Android support for retrieving raw PCM data from an audio stream. The intent mechanism is flexible and convenient, but I don't think that's what intents are meant for. Using a file seems cumbersome, although it's well supported. Finally, using a service seems like a good option, but it seems less flexible and probably needs more error handling and thread management.
Can you guys point out the best alternative?
If you have another one, you're welcome to share it.
I do not know about the streaming audio API support, so I'll not touch that case.
As for writing the data to a file and letting the other application read it: this is one possible way to solve your problem.
As for calling a service through AIDL and using intents, I do not think these are good solutions. The problem is that Binder limits the size of the data that can be passed in one transaction (1 MB).
In my view, the best solution (especially since you're working in native code) is to use ashmem, a shared-memory driver developed specifically for Android. In your service, you create a shared-memory region and pass a reference to it to your client app, which then reads the audio data directly from that memory.
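A minimal sketch of the service side, assuming API 26+ so the NDK's ASharedMemory wrapper around ashmem is available (older releases need the raw /dev/ashmem ioctls). createAudioRegion is a hypothetical helper name: it publishes one chunk of PCM into a shared region and returns the fd, which you would then hand to the client process over Binder so it can mmap the same region.
// Create a named ashmem region, copy one 1-second PCM chunk into it,
// and return the file descriptor for delivery to the reader app.
#include <android/sharedmem.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

int createAudioRegion(const void *pcm, size_t size)
{
    int fd = ASharedMemory_create("csipsimple_pcm", size);  // named ashmem region
    if (fd < 0)
        return -1;

    void *addr = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) {
        close(fd);
        return -1;
    }

    memcpy(addr, pcm, size);   // copy the PCM chunk into shared memory
    munmap(addr, size);        // the data stays in the region after unmapping
    return fd;                 // send this fd to the client app via Binder
}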

Saving output using FMOD without playback

I need to implement playback of separate audio files in N channels, where files may play sequentially or in parallel. I need to implement it on Android.
Timeline:
|file a...|file d....|file b...|......|file k|....
|.....|file g|file c..|file p.|....
I'm considering two options, one of them being FMOD to decompress the files and play them simultaneously. From my research, FMOD seems to fit well and to be much easier than playing this manually using an AudioTrack. However, I can't tell whether FMOD would allow us to save the entire merged output without playing it through.
I know that with the solution here we can redirect output to a wav file, but is it possible to just produce the final output immediately and save it using FMOD? Or will I have to merge the PCM streams into one manually after all?
Thanks.
An important question here is why you need to save the files out; if it's possible to do this offline, it would be a lot simpler. If you must record the concatenation of several files (including others played in parallel), it is quite possible with FMOD.
One way would be to use the wave-writer-NRT output mode, which lets you render a wav file from FMOD playSound calls faster than realtime (see the sketch below).
Another way is to use a custom DSP to access the data stream of any submix as it plays, which is useful if you want other sounds actually playing at the same time.
Another is to simply create the sound objects, then use Sound::lock to access the PCM data, which you could concatenate yourself into a destination buffer. Keep in mind that all the sounds would need to have the same sample rate and channel count; otherwise you would need to do processing. Also keep in mind that you cannot do this for parallel sounds unless you want to mix the audio yourself.
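A rough sketch of the first approach, written against the FMOD Core (Studio low-level) API; in the older FMOD Ex API the playSound signature differs slightly, and error checking is omitted here for brevity.
// With FMOD_OUTPUTTYPE_WAVWRITER_NRT the mixer writes its output to a wav
// file instead of the sound card, and each System::update() call renders
// the next block, so the file is produced faster than realtime.
#include <fmod.hpp>

void renderToWav(const char *inputPath, const char *wavPath)
{
    FMOD::System *system = nullptr;
    FMOD::System_Create(&system);

    system->setOutput(FMOD_OUTPUTTYPE_WAVWRITER_NRT);
    // For the wav-writer output modes, extradriverdata is the output filename.
    system->init(32, FMOD_INIT_STREAM_FROM_UPDATE, (void *)wavPath);

    FMOD::Sound *sound = nullptr;
    FMOD::Channel *channel = nullptr;
    system->createSound(inputPath, FMOD_DEFAULT, nullptr, &sound);
    system->playSound(sound, nullptr, false, &channel);

    bool playing = true;
    while (playing) {
        system->update();                        // mixes the next block into the wav
        if (channel->isPlaying(&playing) != FMOD_OK)
            playing = false;                     // channel ended and was freed
    }

    sound->release();
    system->release();
}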

Tracking the number of times a video was played

Folks,
The project that I am working on requires that a certain video can be played on an Android device x number of times; after that, it must stop playing. When a client gets the video file, he or she also gets another file that contains the Android device ID and the number of times the video can be played. The original file and the metadata file are both encrypted.
My first thought is simply to write a video decoder for the file. Each time the file is played, the decoder first checks that the Android device ID and the count are valid, decrements the count, and then starts decrypting the data and streaming it to the MPEG-4 decoder shipped with the OS.
I would appreciate your feedback on this idea. Please share your thoughts if you feel there is a better way to do it.
One problem I see is where to store the actual count. Storing it in the file itself won't work, as the user can simply back up the original file and restore it after the count is exceeded. It has to be stored in some other part of the system that cannot be tampered with by the end user.
Thank you in advance for your help.
Regards,
Peter
It's useless to store it anywhere on the actual device, because anywhere an app can touch, a user can touch as well. Your best bet is to use a remote server for authorization, but then you get spoofing problems. Your real goal, though, is to make cracking a nuisance that isn't worth the effort, not to make it impossible, because you can't.
Okay, the simplest way would be similar to what you first suggested, and needs no further infrastructure: store the information in a file. This is defeated by reloading the file, as you noted, but even that is a high enough barrier for some.
Defeat reloading of the file by obfuscating where you store the information. Possibilities include text files (easy to spot), or perhaps image files (like images that are supposedly button images).
Remember, it only takes one person one time to point the playback at a recorder, and you have a perfect, DRM-free copy running around in the wild. You're simply trying to make it easy enough to view legitimately, and difficult enough to crack, that people won't bother cracking it.

Decoding Encoded Audio Data (MP3s, etc) on Android Without Playing It

Short version: What is the best way to get data encoded in an MP3 (and ideally in AAC/Ogg/WMA) into a Java array or ByteBuffer that I can then manipulate?
I'm putting together a program that has slowing down and speeding up sound files as one of its features. This works fine for WAV files, which are a header plus the exact binary data that needs to be sent to the speaker, and now I need to implement it for MP3 (ideally, this would also support AAC, Ogg, and WMA, but since those are less popular formats this is not required). Android does not expose an interface to decode the MP3 without playing it, so I need to create that interface.
Three options present themselves, though I'm open to others:
1) Write my own decoder. I already have a functional frame detector that I was hoping to use for option (3), and would now only need to implement the Huffman decoding tables.
2) Use JLayer, or an equivalent Java library, to handle the decoding. I'm not entirely clear on what the license ramifications are here.
3) Connect to the libmedia library/MediaPlayerService. This is what SoundPool does, and the amount of use that service gets makes me believe that while it's officially unstable, that implementation isn't going anywhere. This means writing JNI code to connect to the service, but I'm finding that that's a deep rabbit hole; at the surface, I'm having trouble with the sp<> template.
I did that with libmad and the NDK. JLayer is way too slow, and the media framework is a moving target. You can find info and source code at http://apistudios.com/hosted/marzec/badlogic/wordpress/?p=231
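For reference, the libmad callback pattern looks roughly like the following; this is a condensed sketch of libmad's own minimad.c example, not the code from that link. You feed the decoder the whole in-memory MP3 in the input callback and collect 16-bit PCM in the output callback. decodeMp3 is a hypothetical helper name, and the JNI plumbing is omitted.
// Decode an in-memory MP3 buffer to 16-bit PCM with libmad, no playback.
#include <mad.h>
#include <vector>
#include <cstring>

struct Ctx {
    const unsigned char *mp3; size_t size; bool fed = false;
    std::vector<short> pcm;                 // decoded samples end up here
};

static mad_flow input_cb(void *data, mad_stream *stream) {
    Ctx *ctx = static_cast<Ctx *>(data);
    if (ctx->fed) return MAD_FLOW_STOP;     // whole buffer already submitted
    mad_stream_buffer(stream, ctx->mp3, ctx->size);
    ctx->fed = true;
    return MAD_FLOW_CONTINUE;
}

static short to16bit(mad_fixed_t s) {      // clip and scale fixed-point sample
    if (s >= MAD_F_ONE)  s = MAD_F_ONE - 1;
    if (s < -MAD_F_ONE)  s = -MAD_F_ONE;
    return static_cast<short>(s >> (MAD_F_FRACBITS + 1 - 16));
}

static mad_flow output_cb(void *data, const mad_header *, mad_pcm *pcm) {
    Ctx *ctx = static_cast<Ctx *>(data);
    for (unsigned i = 0; i < pcm->length; ++i)          // interleave channels
        for (unsigned ch = 0; ch < pcm->channels; ++ch)
            ctx->pcm.push_back(to16bit(pcm->samples[ch][i]));
    return MAD_FLOW_CONTINUE;
}

std::vector<short> decodeMp3(const unsigned char *mp3, size_t size) {
    Ctx ctx{mp3, size};
    mad_decoder decoder;
    mad_decoder_init(&decoder, &ctx, input_cb, nullptr, nullptr,
                     output_cb, nullptr, nullptr);
    mad_decoder_run(&decoder, MAD_DECODER_MODE_SYNC);
    mad_decoder_finish(&decoder);
    return ctx.pcm;
}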
I have not tried it, but mp3transform is LGPL.
