Android Cast: playing a video with custom encryption

I have an Android app that plays some video content. The video is MP4 with some simple custom encryption.
On Android, the player (ExoPlayer) decrypts the video in real time while playing.
It uses code like this:
// overriding the function that reads the video file, to insert the decryption
@Override
public int read(byte[] buffer, int offset, int readLength) throws FileDataSourceException {
    // ...
    // buffer[] holds the encrypted video bytes; decrypt them here
    buffer[offset] = (byte) (buffer[offset] ^ 1234);
    // ...
}
I now want to add support for Chromecast, to be able to stream the video from a mobile phone to a TV.
However, looking at the API, I cannot see a way to implement my decryption algorithm.
From what I see, it supports either unencrypted videos or videos with some standard DRM.
Is it possible to implement a custom encryption, similar to the code above?

The new CAF framework provides three different options:
Styled Media Receiver
Custom Receiver
Default Media Receiver
The only one which supports DRM is the Custom Receiver, and as you say, it is designed for the standard DRMs.
However, it should support CENC clear key, which may be enough protection for your needs and will allow you to avoid using a DRM service.
CENC clear key has the key in the clear, as the name suggests. It is not very secure, but it may be enough of a 'hurdle' (which is essentially what most security systems are) for you anyway.
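For illustration, here is a minimal Android (CAF) sender-side sketch that loads a CENC clear-key protected DASH stream. The URL is hypothetical, castSession is assumed to be an already-connected CastSession, and it assumes the clear keys are declared in the DASH manifest (or resolved by your custom receiver); the Cast sender SDK itself only sees an ordinary DASH load:
// Minimal sender-side sketch (play-services-cast-framework). Assumes "castSession"
// is an established CastSession and the manifest carries CENC clear-key
// ContentProtection that the custom receiver can handle.
MediaMetadata metadata = new MediaMetadata(MediaMetadata.MEDIA_TYPE_MOVIE);
metadata.putString(MediaMetadata.KEY_TITLE, "Protected sample");

MediaInfo mediaInfo = new MediaInfo.Builder("https://example.com/video/manifest.mpd") // hypothetical URL
        .setStreamType(MediaInfo.STREAM_TYPE_BUFFERED)
        .setContentType("application/dash+xml")
        .setMetadata(metadata)
        .build();

RemoteMediaClient remoteMediaClient = castSession.getRemoteMediaClient();
remoteMediaClient.load(new MediaLoadRequestData.Builder()
        .setMediaInfo(mediaInfo)
        .build());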

Related

Playing sounds sequentially on Android's Google Chrome (given the new restrictions on playing sound)

I have a small app that plays sequential sounds (a teaching app playing the syllables of a word).
This could be accomplished by firing an event right after each sound stops playing. Something like:
var sounds = new Array(new Audio("1.mp3"), new Audio("2.mp3"));
var i = -1;
playSnd();
function playSnd() {
    i++;
    if (i == sounds.length) return;
    sounds[i].addEventListener('ended', playSnd);
    sounds[i].play();
}
(source)
However, Android Chrome has now implemented some new restrictions on how sound may be played: sound events must all be fired by a user action.
So, when I run code very similar to the above, the first sound plays, and then I get
Uncaught (in promise) DOMException: play() can only be initiated by a user gesture.
How can a sequence of sounds, determined at run time, be played on Android's Chrome?
To start with, Google Chrome on Android has a limitation that prevents applications from playing HTML audio without an explicit action by the user. This differs from how the stock browser, in most cases, handles it.
The reason, as the Chromium project puts it, is that autoplay is not honored on Android because it costs data.
You may find more details on the same here.
Apart from the fact that this wastes bandwidth, it also makes some sense, since mobile devices are used in public and at home, where unsolicited sound from random web sites could be a nuisance.
However, in later versions this idea was overruled, and Chrome on Android started allowing autoplay of HTML audio and video. Then, after another round of reviews and discussions, the feature was reverted, once again making a user action mandatory to start HTML audio and video in Chrome for Android.
Here is more that I found on the subject. The reason stated was that "We're going to gather some data about how users react to autoplaying videos in order to decide whether to keep this restriction", and hence playing without a user action was reverted.
You can also find more about the blocking of autoplay of audio and video here on Forbes and The Verge.
However, here is something you can try that may help you achieve what you intend. All you have to do is copy this URL and paste it into Chrome for Android; it lets you reset the flag which, by default, disallows playing HTML audio and video without user interaction:
chrome://flags/#disable-gesture-requirement-for-media-playback
OR
about:flags/#disable-gesture-requirement-for-media-playback
If the above procedure doesn't work for you, you can do this:
Go into chrome://flags OR about:flags (this will redirect you to chrome://flags) and enable the "Disable gesture requirement for media playback" option (which is the same flag as the URL above).

Casting video to Chromecast from the YouTube app

I tried using an Android phone to cast to a Chromecast device via the YouTube app. I added some videos to the queue, then I used another phone to cast to the same Chromecast device. The second phone automatically knows about the videos added to the queue on the first one.
How does the YouTube app do this?
EDIT: I guess the YouTube app uses a custom data channel besides the Media channel. When a video is added to the queue, the sender app sends something (e.g. the videoId) to the receiver. The receiver saves it in an array of video IDs. When another phone connects to the Chromecast device, it receives the array of video IDs from the receiver. Can anyone suggest other solutions? Thanks
I guess what you are asking is how you can create a playlist, potentially shared by multiple devices. If that is the case, you have a couple of choices:
Keep the playlist in the receiver: this is the simplest option. This will be a simple array on the receiver, kept in memory, which will go away when the application ends. A custom receiver is required, and it can implement methods such as 'append', 'insert', 'get', 'clear', ... to provide what the senders need. When each sender connects, it can ask (calling 'get', for example) for the current queue and can then modify the queue through the other methods such as 'clear', 'append', 'insert', .... Note that there is no long-term persistence on the receiver (local storage is available but will be cleared as soon as the app is gone).
Keep the playlist in the cloud: you need to do most of the things from the previous option, but you also persist the playlist to the cloud. The advantage is that the playlist lasts beyond the life of a session (which may or may not be desired). In addition, sender apps can potentially get the playlist from the cloud directly, if needed.
The important thing is that the main storage for your playlist is not your sender devices; they don't know (and shouldn't know) about the presence of other senders in the ecosystem.
On the receiver side, we recently published a simple sample that shows how the notion of a (local) playlist can be implemented; it is a simplified example, but it is enough to show that with minimal work you can take advantage of the Media channel. For more sophisticated handling of a shared queue, you definitely need an out-of-band channel/namespace to handle all the additional APIs mentioned above, as sketched below.
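To make the out-of-band channel idea more concrete, here is a rough Android sender-side sketch; the namespace, the JSON message format, and the 'append' command are names made up for illustration, and the custom receiver would have to register the same namespace and implement the matching logic:
// Hypothetical out-of-band queue channel (namespace and message format are my own).
private static final String QUEUE_NAMESPACE = "urn:x-cast:com.example.queue";

// Ask the receiver to append a video to the shared queue.
void appendToQueue(CastSession castSession, String videoId) throws JSONException {
    JSONObject message = new JSONObject();
    message.put("command", "append");
    message.put("videoId", videoId);
    castSession.sendMessage(QUEUE_NAMESPACE, message.toString());
}

// Listen for queue updates pushed back by the receiver (e.g. in response to a 'get').
void listenForQueueUpdates(CastSession castSession) throws IOException {
    castSession.setMessageReceivedCallbacks(QUEUE_NAMESPACE,
            (castDevice, namespace, payload) -> {
                // payload contains the shared queue as JSON; parse it and update the UI here
            });
}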

How to show media dynamically according to the content type (video/audio/picture) sent from the Android app to the receiver in Chromecast

In Chromecast,
I want to send different kinds of URLs (mp4/mp3/png, ...) to the receiver, but how does the receiver show them dynamically?
That is: how does the receiver recognize what kind of RemoteMedia it has received?
In the current version of the SDK, there is nothing on the framework side to help you with that directly. You can include the MIME type in the metadata, retrieve that on your receiver, and do as you see fit. That said, if your media is only audio or video, things are easier, since the video element can handle both and you can treat them the same; for images, you have to do some extra work. Another approach is to look at the file extension and try to guess the type, but that is not fully reliable. A sender-side sketch of the first approach follows.
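As a hedged example, on the Android sender you could put the MIME type into the load request's customData (the "mimeType" key is just a convention invented here); the receiver then reads the same key and picks an image, audio, or video element accordingly:
// Pass the MIME type along with the media; "mimeType" is a made-up customData key
// that the custom receiver must read back and act on.
JSONObject customData = new JSONObject();
customData.put("mimeType", "image/png");

MediaInfo mediaInfo = new MediaInfo.Builder("https://example.com/picture.png") // hypothetical URL
        .setStreamType(MediaInfo.STREAM_TYPE_NONE)
        .setContentType("image/png")
        .setCustomData(customData)
        .build();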

Embed flash objects using Android

Does anyone know a method to communicate commands and receive events from flash objects using Android?
Flash media objects are quite handy for handling online media streams and I am wondering if we can control those objects to play media from an Android app.
A typical case will be: a Flash object embedded in a WebView, and this WebView is loaded by the Android application. The application sends "play" commands to the object to start playing the stream and gets "finish" events when reaching the end of the streamed media.
Create a WebView in your main Activity class (or anywhere):
WebView mainW = new WebView(this);
Enable the needed features of your WebView:
mainW.getSettings().setPluginsEnabled(true);
mainW.getSettings().setAllowFileAccess(true);
mainW.getSettings().setPluginState(WebSettings.PluginState.ON);
Place your HTML file (which contains the SWF) somewhere and load its location in the WebView:
mainW.loadUrl(url);
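To actually push "play" commands into the SWF and get "finish" events back (the part the question asks about), one option is the standard WebView JavaScript bridge. This is only a sketch: it assumes the page exposes the Flash object via ExternalInterface, and the function and bridge names (play, onFinish, AndroidBridge) are hypothetical:
// Two-way bridge between the Activity and the embedded SWF; all names are hypothetical.
mainW.getSettings().setJavaScriptEnabled(true);
mainW.addJavascriptInterface(new Object() {
    @JavascriptInterface
    public void onFinish() {
        // the page calls AndroidBridge.onFinish() when the SWF reports end of stream
    }
}, "AndroidBridge");

// Later, to send the "play" command into the page / SWF:
mainW.loadUrl("javascript:document.getElementById('player').play();");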

Voice recognition on android with recorded sound clip?

I've used the voice recognition feature on Android and I love it. It's one of my customers' most praised features. However, the format is somewhat restrictive. You have to call the recognizer intent, have it send the recording to Google for transcription, and wait for the text back.
Some of my ideas would require recording the audio within my app and then sending the clip to google for transcription.
Is there any way I can send an audio clip to be processed with speech to text?
I got a solution that works well for doing speech recognition and audio recording at the same time. Here is the link to a simple Android project I created to show the solution working. I also put some screenshots inside the project to illustrate the app.
I'll try to explain the approach briefly. I combined two features in that project: the Google Speech API and FLAC recording.
Google Speech API is called through HTTP connections. Mike Pultz gives more details about the API:
"(...) the new [Google] API is a full-duplex streaming API. What this means, is that it actually uses two HTTP connections- one POST request to upload the content as a “live” chunked stream, and a second GET request to access the results, which makes much more sense for longer audio samples, or for streaming audio."
However, this API needs to receive a FLAC sound file to work properly. That brings us to the second part: FLAC recording.
I implemented FLAC recording in that project by extracting and adapting some pieces of code and libraries from an open source app called AudioBoo. AudioBoo uses native code to record and play the FLAC format.
Thus, it's possible to record a FLAC sound, send it to the Google Speech API, get the text back, and play the sound that was just recorded.
The project I created has the basic principles to make it work and can be improved for specific situations. To make it work in a different scenario, you need to get a Google Speech API key, which is obtained by being part of the Google chromium-dev group. I left one key in the project just to show it working, but I'll remove it eventually. If someone needs more information about it, let me know, because I'm not able to put more than two links in this post.
Unfortunately not at this time. The only interface currently supported by Android's voice recognition service is the RecognizerIntent, which doesn't allow you to provide your own sound data.
If this is something you'd like to see, file a feature request at http://b.android.com. This is also tangentially related to existing issue 4541.
As far as I know there is still no way to directly send an audio clip to Google for transcription. However, Froyo (API level 8) introduced the SpeechRecognizer class, which provides direct access to the speech recognition service. So, for example, you can start playback of an audio clip and have your Activity start the speech recognizer listening in the background, which will return results after completion to a user-defined listener callback method.
The following sample code should be defined within an Activity, since SpeechRecognizer's methods must be run in the main application thread. You will also need to add the RECORD_AUDIO permission to your AndroidManifest.xml.
boolean available = SpeechRecognizer.isRecognitionAvailable(this);
if (available) {
    SpeechRecognizer sr = SpeechRecognizer.createSpeechRecognizer(this);
    sr.setRecognitionListener(new RecognitionListener() {
        @Override
        public void onResults(Bundle results) {
            // process results here
        }
        // define the other RecognitionListener methods here (all of them must be implemented)
    });
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    // the following appears to be a requirement, but can be a "dummy" value
    intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, "com.dummy");
    // define any other intent extras you want

    // start playback of audio clip here

    // this will start the speech recognizer service in the background
    // without starting a separate activity
    sr.startListening(intent);
}
You can also define your own speech recognition service by extending RecognitionService, but that is beyond the scope of this answer :)
