How to play DigitalOcean hosted video on Android?

How to play video hosted on DigitalOcean Space on Android app?
What type of video does DigitalOcean offer - MPEG-DASH, HLS, RTMP, etc.?
Can I use ExoPlayer to play DigitalOcean-hosted videos?

DigitalOcean is a general-purpose cloud computing provider and Spaces is its AWS S3-API-compatible object storage solution - i.e. it is not a dedicated video hosting solution.
You can store and access your video there, but the format, codecs, containers, etc. are up to you.
It does have a built-in CDN, which you may find helps video response times, but as with all CDNs you probably want to test and compare to decide which is best for you.
For streaming video you can:
use simple HTTPS streaming - essentially just downloading the video in chunks and playing it as it downloads.
use a dedicated streaming protocol like HLS or DASH, which will allow you to offer multiple bit rates to improve the user experience. More on ABR streaming here: https://stackoverflow.com/a/42365034/334402
The usual way to support HLS and/or DASH is by using a streaming server - there are open source ones available, e.g. http://www.videolan.org/vlc/streaming.html
In either case, if your video is mp4 you also need to make sure the header information (the 'moov atom') is at the start of the file, as the player needs this to start playback. There are tools to do this, and more info here: https://multimedia.cx/eggs/improving-qt-faststart/

Related

Downloading part of an audio file and downloading the rest when playback reaches a certain point

I want to create an app for sharing audio files. I want to build native mobile apps and have made most of my progress on Android with a PHP/MySQL backend so far. But now I want to step up my game and build my backend with Node.js and MongoDB.
Since big audio files take a while to download and can worsen the user experience, I wondered if it is possible to download just the first 20% of an audio file. When the user reaches a certain point of the audio, it downloads the rest/the next section. That way we do not download an entire audio file that might never get fully played. I just wonder if it is difficult to append another section of the audio while playing, without any interruptions.
For some reason I think this is how the big social media apps work, but I cannot find any sources on this topic. I am not asking for code, just suggestions and references to help me. Am I on a good track, or are there other ways to solve this problem? Also, can you recommend DigitalOcean Spaces for this task?
Sources for Android and iOS would be very helpful!
You can do it by adding a "Range" header to the HTTP request you send to the DigitalOcean Space for the desired media file. The "Range" header specifies which bytes you want to receive.
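A minimal sketch of what that could look like in Java (the URL and helper names here are mine, for illustration only) - a server that honours the range replies 206 Partial Content and sends only the requested bytes:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PartialDownload {
    // Build a Range header value covering the first `fraction` of a file
    // of `totalBytes`. Byte ranges are inclusive, hence the -1.
    static String rangeHeaderForFraction(long totalBytes, double fraction) {
        long lastByte = (long) (totalBytes * fraction) - 1;
        return "bytes=0-" + lastByte;
    }

    // Sketch: open a stream over just the given byte range of the file.
    static InputStream openRange(String fileUrl, String range) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
        conn.setRequestProperty("Range", range);
        return conn.getInputStream();
    }

    public static void main(String[] args) {
        // First 20% of a 10 MB file
        System.out.println(rangeHeaderForFraction(10 * 1024 * 1024, 0.2));
        // -> bytes=0-2097151
    }
}
```

When the user approaches the end of the buffered section, you issue another request for the next range (e.g. "bytes=2097152-...") and append it to your local buffer.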

How to implement a seekable media input stream using Sodium for decryption

We have a product consisting of documents and media files that are encrypted for DRM protection. On the production side, we have a Python script that encrypts the files, and on the client side, an Android app that decrypts them. This means we need to have an encryption/decryption scheme that can work compatibly on both Python and Android platforms. I've settled on libsodium/NaCl because it is available on both platforms, is free and open source, and it's designed with high-level APIs that are supposed to provide "Expert selection of default primitives" (http://nacl.cr.yp.to/features.html), thus helping the developer get things configured right without having to be an expert in the details of cryptography parameters.
Based on that, I've been able to test successfully that data encrypted by Sodium on Python can be decrypted by Sodium on Android. There's a fair bit of learning time invested in Sodium, so I'd prefer not to have to change that, if at all possible.
However, when it comes to playing large DRM-protected videos on the Android side, I believe we need a solution that works for streaming, not just for decrypting a whole file into memory. Currently we are just reading the whole file into memory and decrypting it:
final byte[] plainBytes = secretBox.decrypt(nonce, cipherText);
Obviously that's not going to work well with large video files. If we were using javax.crypto.Cipher instead of Sodium, we could use it to implement a CipherInputStream (and use that to implement an exoplayer2.upstream.DataSource or something). But I'm having difficulty seeing how to use libsodium to implement a decryption stream.
The libsodium library I'm using does provide bindings to "stream" functions. But this meaning of "stream" seems to be a somewhat different thing from "streaming" in the sense of Java InputStream. Moreover, all those functions seem to be very specific to the low-level detailed parameters that up to this point, libsodium has not required me to be aware of. For example, chacha20, salsa20, xsalsa20, xchacha20poly1305, etc. Up to this point, I have no idea which of these algorithms is being used on either side; SecretBox just works.
So I guess the question I would like answered most is, how can libsodium be used in Android to provide seekable, streaming decryption? Do you know of any good example code?
Subquestions of that:
Admittedly, now that I look closer in the docs, I see that pynacl SecretBox uses XSalsa20 stream cipher. I wonder if I can count on that always being the case, since I'm supposed to be insulated from those details?
I think for media playing, you need more than just streaming, in the sense of being able to consume a small piece at a time, in sequence. For typical usage, you also need it to be seekable: the user wants to be able to skip back 5 seconds without having to wait for the player to reset to the beginning of the stream and process/decrypt the whole thing again up to 5 seconds ago.
Is it feasible that I could use javax.crypto.Cipher on the Android side, but configure it to be compatible with the encryption algorithm (XSalsa20) and its parameters from the PyNaCl SecretBox production process?
Update:
To clarify,
The issue of decryption key delivery is already solved to our satisfaction, so that is not what I'm asking help on here.
Our app is completely offline, so the streaming issues I mentioned have to do with loading and decrypting files from local storage, rather than waiting for downloads.
For video you might find it easier to use existing mechanisms, as they will already have solved most of your issues.
For most video applications you will want to stream the video and play/seek as you go, rather than having to download the entire video, as you point out.
At this time there are three major DRMs commonly used to encrypt content and share keys between the server and the client: Widevine, PlayReady and FairPlay. All three will support the functionality you want for streamed video. The disadvantage is that you will usually have to pay to use these DRM services.
You can also use HLS or DASH to stream the video - these are Adaptive Bit Rate (ABR) streaming protocols (https://stackoverflow.com/a/42365034/334402).
These also allow you to use key-sharing mechanisms that are less secure, but possibly adequate for your needs, essentially allowing the key to be shared in the clear while the content itself is still encrypted. These are both free and well supported:
HLS AES Encryption
DASH ClearKey Encryption
Have a look at these answers for examples of generating both streams: https://stackoverflow.com/a/45103073/334402, https://stackoverflow.com/a/46897097/334402
You can play back the streams using open source players like DASH.JS for browser and ExoPlayer for Android Native.
If you wanted more security but still wanted to avoid using a commercial DRM, you could also modify the above to configure the key on your player client directly rather than transmitting it from server to client.
You then do have the risk that someone could hack or reverse engineer your client app to extract the key, but I think you would have this with your original approach anyway. The real value of DRM systems is not the content encryption, which is essentially just AES, but the mechanisms they use to securely transport and store the keys. Ultimately, it is a question of cost and benefit - it sounds like your solution may work quite adequately with a custom key configuration.
As an aside, on the seeking question - most video formats are broken into groups of pictures, i.e. runs of frames which can be decoded independently of the video before and after them, with the help of some header info. So you can decode at, or at least near, any given point without having to decode the entire video up to that point.
The thumbnails you see when you scroll or hover along the timeline on a player are generally a separate stream of still image snapshots taken at regular intervals in the video. This allows the player to show the appropriate thumbnail as if it were showing the frame at that point in the video. If the user clicks at that point, the player requests that section of the video (if it does not already have it), decodes the relevant chunk and starts playing it.
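Your decryption layer can follow the same idea: encrypt the file in fixed-size chunks, each under a nonce derived from its chunk index, so any chunk can be decrypted on its own for seeking. A sketch of that pattern, using the JDK's AES-GCM purely as a stand-in for SecretBox (with libsodium you would do the same with per-chunk SecretBox calls or crypto_secretstream; all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class ChunkedCipher {
    static final int CHUNK = 4096;    // plaintext bytes per chunk
    static final int NONCE_LEN = 12;  // GCM nonce length

    // Derive a unique per-chunk nonce from a random file prefix plus the chunk index.
    static byte[] nonceFor(byte[] prefix, long chunkIndex) {
        byte[] n = new byte[NONCE_LEN];
        System.arraycopy(prefix, 0, n, 0, 4);
        for (int i = 0; i < 8; i++) n[4 + i] = (byte) (chunkIndex >>> (8 * (7 - i)));
        return n;
    }

    static byte[] crypt(int mode, byte[] key, byte[] nonce, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(mode, new SecretKeySpec(key, "AES"), new GCMParameterSpec(128, nonce));
        return c.doFinal(data);
    }

    // Encrypt plaintext into independently decryptable chunks.
    static List<byte[]> encryptChunks(byte[] key, byte[] prefix, byte[] plain) throws Exception {
        List<byte[]> out = new ArrayList<>();
        for (int off = 0, idx = 0; off < plain.length; off += CHUNK, idx++) {
            int len = Math.min(CHUNK, plain.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(plain, off, chunk, 0, len);
            out.add(crypt(Cipher.ENCRYPT_MODE, key, nonceFor(prefix, idx), chunk));
        }
        return out;
    }

    // Seek: decrypt only the chunk containing byte `pos`, not everything before it.
    static byte[] decryptChunkAt(byte[] key, byte[] prefix, List<byte[]> chunks, long pos)
            throws Exception {
        int idx = (int) (pos / CHUNK);
        return crypt(Cipher.DECRYPT_MODE, key, nonceFor(prefix, idx), chunks.get(idx));
    }
}
```

To seek to byte position p you decrypt only chunk p / CHUNK and start reading at offset p % CHUNK within it; a DataSource wrapping this gives the player random access without decrypting the whole file.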

How to setup Live Streaming on Android app using AWS/GCP?

I'm developing an Android app where a user can start live streaming using his/her Android camera. I have AWS & GCP resources at hand. What I understand after some reading is:
I have to stream/publish whatever the Android camera is capturing to some server over some protocol (RTMP or HLS, etc.)
I have to set up a server that will pull this input source, then package & store it in a form that can be streamed/consumed in a mobile/web browser (basically, a URL) - I believe AWS's MediaLive, MediaPackage, etc. should do that.
I could use this URL as the MediaSource for players on Android (like ExoPlayer)
My problem is I couldn't find good documentation on the 1st part. I found this, https://github.com/bytedeco/javacv, which doesn't appear to be production-level work. While trying out the 2nd part, when creating a MediaLive channel on AWS, I was asked to point the channel to 2 destinations (I don't know what that means), which made me doubt my understanding of the process. I'm looking for some skeleton procedure, with official documentation, on how to achieve this.
EDIT 1:
For the Input-Production part, I'm experimenting with this answer. https://stackoverflow.com/a/29061628/3881561
EDIT 2:
I've used https://github.com/ant-media/LiveVideoBroadcaster to send the video source to an RTMP server. I've created an RTMP push input source in MediaLive and a channel with output Archive (stores .ts files in S3). Now that the flow is working, how can I modify this architecture to allow multiple users to create live streams?

In android, how to implement 'Multi-track Audio Support' using Chromecast?

I need to implement the Multi-track Audio Support feature in a sender application. That is, when the user casts a video/movie to the TV using Chromecast, the sender application should show all available audio tracks and let the user choose the desired track to cast to the TV.
On the developers website I saw the Tracks API, and in it the MediaTrack object, which can be used to configure a track.
https://developers.google.com/cast/docs/android_sender#tracks
Here I am not getting how to fetch all available audio tracks from the selected mp4 file. Can anyone give me a direction to work with this?
What will be the role of the receiver application in this?
I looked at the given reference sample app. From that app, I am clear on how to set MediaTracks.
https://github.com/googlecast/CastVideos-android
I am not getting how to extract the audio tracks from the mp4 file, so that we can set them as MediaTracks.
Does this part need to be done in the receiver app?
Any help will be highly appreciated.
Thanks.
Currently, in the Cast SDK, there is no support for multiple embedded audio tracks in an mp4 file.

Android Online Radio Player

Is it possible to get track name while playing radio stream via MediaPlayer?
I would say pretty much with certainty - no, it isn't possible.
I can't see any MediaPlayer methods which suggest it's possible; also, the way metadata such as the track name is presented in streaming media depends on the source, e.g. Shoutcast or otherwise.
If it can be done I'd be interested to know, but I suspect you'd need to write something like a Shoutcast client (or another client, depending on the source). You'd still use MediaPlayer for streaming but would need extra code to access the metadata.
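For Shoutcast/Icecast sources, the track name is interleaved with the audio as ICY metadata: you send an "Icy-MetaData: 1" request header, read the "icy-metaint" response header, and then after every icy-metaint audio bytes a small metadata block follows. A sketch of parsing such a block (the class and method names are mine):

```java
import java.nio.charset.StandardCharsets;

public class IcyMetadata {
    // Parse one ICY metadata block: a length byte L, then L * 16 bytes of text
    // like "StreamTitle='Artist - Song';", NUL-padded to the 16-byte boundary.
    // Returns the track title, or null if the block is empty or malformed.
    static String parseStreamTitle(byte[] block) {
        int len = (block[0] & 0xFF) * 16;
        if (len == 0) return null; // no metadata change in this block
        String meta = new String(block, 1, len, StandardCharsets.UTF_8).trim();
        int start = meta.indexOf("StreamTitle='");
        if (start < 0) return null;
        start += "StreamTitle='".length();
        int end = meta.indexOf("';", start);
        return end < 0 ? null : meta.substring(start, end);
    }
}
```

The extra client code would count off icy-metaint audio bytes (feeding them to the player) and hand each interleaved metadata block to a parser like this.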
