Change cryptos used in WebRTC browser calls - android

I am developing a voice / video calling system in which there is browser to browser, Android to Android and Android to browser calling. Although I have managed to get that all working, I have run into a problem with the cryptos being used to encrypt the audio / video packets being sent between two clients. My system requires a certain set of cryptos, and I have managed to get that set working with Android to Android calling. However, the default cryptos being used in WebRTC enabled browsers are significantly weaker than the alternate crypto set being used for Android to Android calling. Thus, I have to "dumb down" the cryptos in the system so that I can have Android to browser calling.
Since I have no access to the code for WebRTC enabled browsers (and definitely cannot modify it) my only recourse is to somehow select or tell the peerconnection object which crypto level / set to use. I swear I have heard of this being done before, but I cannot find where I saw it nor anywhere that talks about doing it. So, I was wondering if anyone knew:
Is such a thing possible?
If possible, how does one set the cryptos for the call?
What cryptos are supported in Chrome and Firefox?
If I am remembering what I saw correctly, it was done by passing a JSON object looking something like { 'crypto' : 'AES....'} to the constraints parameter of webkitRTCPeerConnection. However, I could potentially be imagining all of this.

You can enable DTLS by passing the following to the PeerConnection constructor:
{ 'optional': [{'DtlsSrtpKeyAgreement': 'true'}]}
However, that doesn't let you pick the crypto algorithm. For that, you could try munging the SDP: replace the crypto line with one carrying the SRTP key-management parameters you want. However, I'm not sure offhand whether anything other than the default is supported in Chrome. That may be a good question for the discuss-webrtc list.
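If you go down the SDP-munging route from the Android side, the edit itself is just string surgery on the session description before you apply it. A minimal sketch, assuming SDES-style a=crypto lines are present (i.e. DTLS is not in use); the suite names in the example are placeholders, and whether a given browser actually accepts a substituted suite has to be verified against that browser:

```java
// SDP-munging sketch: rewrite the cipher suite on each a=crypto line of an
// offer/answer before handing it to setLocalDescription / setRemoteDescription.
// The suite names used with this helper are placeholders; browser support for
// any particular suite is not guaranteed.
public class SdpMunger {
    public static String replaceCryptoSuite(String sdp, String newSuite) {
        String[] lines = sdp.split("\r\n");
        for (int i = 0; i < lines.length; i++) {
            // Line format: a=crypto:<tag> <crypto-suite> <key-params> [<session-params>]
            if (lines[i].startsWith("a=crypto:")) {
                String[] parts = lines[i].split(" ", 3);
                lines[i] = parts[0] + " " + newSuite + " " + parts[2];
            }
        }
        return String.join("\r\n", lines);
    }
}
```

The tag and key parameters are kept intact so the answerer can still match the offer; only the suite name changes.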

Related

Is there any way to connect my phone camera feed to ROS?

Currently, I have made an Android application that allows me to access my video feed. However, I wish to know if there is a way to directly convert my phone camera feed into a ROS topic to which I could subscribe directly. Any advice would be greatly appreciated.
Which language is your Android application in? For mobile devices, the best ROS library is probably the roslibjs JavaScript library. There are demo examples that publish video and IMU messages and show basic usage, chiefly from a browser / webapp or a JS engine. To understand how to connect to it, read the core ROS object constructor / initializer: it shows which transportOptions it supports and the other initialization options (like options.url for your target URL). If you chiefly use another language, or you want to send data over the internet, you can point that URL at a remote rosbridge server and transmit your messages through it.
Otherwise, if you can use Python or (more strongly encouraged) C++, those client libraries are robust. If it's Java... there is some vague end-of-life support for Kinetic, but no guarantees. If you want to use (or are using) ROS2, the official ROS2-java org/repo is the only thing available for Java and/or Android. As it is still being developed, ecosystem Java/Android support goes only as deep as that project's implementation.
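One more option if your application is not in JavaScript: roslibjs is ultimately a convenience wrapper around rosbridge's JSON-over-WebSocket protocol, so any language that can open a WebSocket can publish the same way. A minimal sketch of building a rosbridge "publish" message by hand (the topic name and message fields are illustrative, and a real client would normally send an "advertise" operation for the topic first):

```java
// Builds a rosbridge-protocol "publish" message as a JSON string. Sent over a
// WebSocket to a running rosbridge server, this is what roslibjs does under
// the hood. Topic and payload below are illustrative only.
public class RosbridgePublish {
    public static String publishMessage(String topic, String msgJson) {
        return "{\"op\":\"publish\",\"topic\":\"" + topic + "\",\"msg\":" + msgJson + "}";
    }
}
```

From an Android app you would send the resulting string through any WebSocket client pointed at the rosbridge URL.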

Implementing a custom audio HAL in android

How do I implement a custom audio HAL in Android?
What are the different approaches to implementing the audio HAL in Android, e.g. TinyHAL, UCM, etc.? I am looking at the various approaches to find one that suits my requirements.
The first and most important question to ask yourself is whether you really need to implement your own HAL.
Secondly, there is a mandatory reading - Audio Architecture in Android.
Next, check the source code e.g. for one of the Nexus devices, this can be treated as a reference design.
After lots of code writing and lots of debugging, pass the CTS test suite to make sure you are compliant with the Android OS.

How to listen in Android when the browser or some app starts downloading a file?

I need an application that will listen for when some application starts downloading files (this will be treated as malicious behavior).
I was trying to do this with a BroadcastReceiver and Intents, filtering for "android.intent.action.DOWNLOAD_COMPLETE" (in XML) and showing a Toast when some item was downloaded to the device, but it did not work for me.
I have now found that there exists a DownloadListener, which contains an onDownloadStart(..) method.
Do you think that this can solve my problem?
Short of a custom version of the OS, you can't. At best, you could catch the subset of requests that go through DownloadManager. But most downloads don't use that; they just make direct HTTP requests, and there is no way to track those from another application. To catch everything, you would pretty much need to be built into the Linux networking subsystem of a custom OS build.
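For completeness, catching that DownloadManager subset looks like the following manifest fragment (the receiver class name is hypothetical); it fires only for downloads routed through DownloadManager, which is exactly the limitation described above:

```xml
<!-- AndroidManifest.xml fragment. DownloadWatchReceiver is a hypothetical
     BroadcastReceiver in your app; ACTION_DOWNLOAD_COMPLETE is broadcast only
     for downloads performed via DownloadManager. -->
<receiver android:name=".DownloadWatchReceiver">
    <intent-filter>
        <action android:name="android.intent.action.DOWNLOAD_COMPLETE" />
    </intent-filter>
</receiver>
```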

Custom Wrapper Codec Integration into Android

I need to develop a custom 'wrapper' video codec and integrate it into android (JB for now, ICS later). We want to use some custom decryption keys from the SIM (don't ask!). The best method (that would allow it to work alongside other non-encrypted media and to use the standard media player or other) seems to be to define our own mime-type, and link that to a custom wrapper codec that can do the custom decryption, and then pass the data on to a real codec. (Let's say the filetype is .mp4 for now.)
(An alternative might be to write our own media player, but we'd rather not go down that route because we really want the media to appear seamlessly alongside other media)
I've been trying to follow this guide:
how to integrate a decoder into multimedia framework
I'm having trouble with the OMX core registration. I can build libstagefright.so from the Android source by typing make stagefright, but the guide says to use libstagefrighthw.so, which seems appropriate for JB. I'm not sure how to build that; it doesn't seem to get built by make stagefright, unless I'm doing something wrong.
The other problem is that even if I do get the custom wrapper codec registered, I'm not sure how to go about passing the data off to a real codec.
If anyone has any suggestions (or can give some baby step by step instructions!), I'd really appreciate it - the deadline is quite tight for the proof of concept and I know very little about codecs or the media framework...
Many Thanks.
(p.s. I don't want to get into a mud fight about drm and analogue holes etc.., thanks)
In this post, I am using H.264 as an example, but the solution(s) can be extended to support other codecs like MPEG-4, VC-1, VP8 etc. There are two possible solutions to your problem, which I describe below, each with its own pros and cons to help you make an informed decision.
Solution 1: Extend the codec to support new mode
In JellyBean, one could register the same OMX component with the same MIME types under two different component names, viz. OMX.ABC.XYZ and OMX.ABC.XYZ.secure. The former is used for normal playback and is the more commonly used component. The latter is used when the parser, i.e. MediaExtractor, indicates the presence of secure content. In OMXCodec::Create, after findMatchingCodecs returns a list of codecs, you can observe where the .secure component is selected.
Steps to follow:
In your platform, register another component with some new extension like OMX.H264.DECODER.decrypt or something similar. The change is required only in media_codecs.xml. Whether to register a new factory method or reuse a common factory method is up to you.
From your parser, when you encounter the specific use-case, set a new flag like kKeyDecryptionRequired. For this you will have to define a new flag in Metadata.h and a corresponding quirk in OMXCodec.h.
Modify the OMXCodec::create method to append a .decrypt suffix similar to the .secure suffix as shown above.
With all changes in OMXCodec, Metadata, MediaExtractor modules, you will have to rebuild only libstagefright.so and replace the same on your platform.
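As a rough illustration of the registration step above, the media_codecs.xml change might look like this (component names are illustrative, and the exact schema varies by platform and Android version):

```xml
<!-- media_codecs.xml fragment. Registers a second, ".decrypt" flavour of the
     AVC decoder alongside the normal one, mirroring the ".secure" pattern.
     Names are illustrative, not taken from any real platform. -->
<MediaCodecs>
    <Decoders>
        <MediaCodec name="OMX.H264.DECODER" type="video/avc" />
        <MediaCodec name="OMX.H264.DECODER.decrypt" type="video/avc" />
    </Decoders>
</MediaCodecs>
```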
Voilà! Your integration should be complete. Now comes the main challenge, inside the component: as part of the component implementation, you should be able to differentiate between an ordinary component creation and a .decrypt component creation.
From a runtime perspective, assuming that your component is aware of the fact that it is a .decrypt component or not, you could handle the decryption as part of the OMX_EmptyThisBuffer call, where you could decrypt the data and then pass it to underlying codec.
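Conceptually, that buffer path looks like the following sketch. It is written in Java purely for illustration (the real component would be C++ against the OMX headers), and a toy XOR stands in for the real SIM-based decryption:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the .decrypt wrapper's buffer path: decrypt each incoming
// buffer, then forward the plaintext to the wrapped ("real") decoder.
// XOR with a fixed key byte stands in for the actual SIM-based decryption;
// the queue stands in for the underlying codec's input port.
public class DecryptWrapper {
    private final byte key;
    private final List<byte[]> decoderQueue = new ArrayList<>();

    public DecryptWrapper(byte key) { this.key = key; }

    // Analogous to handling OMX_EmptyThisBuffer on the wrapper component.
    public void emptyThisBuffer(byte[] encrypted) {
        byte[] plain = new byte[encrypted.length];
        for (int i = 0; i < encrypted.length; i++) {
            plain[i] = (byte) (encrypted[i] ^ key);
        }
        decoderQueue.add(plain); // forward to the underlying decoder
    }

    public byte[] nextDecodedInput() { return decoderQueue.remove(0); }
}
```

The ordinary (non-.decrypt) component would simply skip the decryption loop and forward the buffer untouched.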
Pros: easy to integrate; minimal changes in the Android framework; scalable to other codecs; no new MIME type registration required.
Cons: You need to track the future revisions of android, specifically on the new quirks, flags and choice of .decrypt extension. If Google decides to employ something similar, you will have to adapt / modify your solution accordingly.
Solution 2: Registration of new MIME Type
From your question, it is not clear if you were able to define the MIME type or not and hence, I am capturing the steps for clarity.
Steps to follow:
Register a new MIME type in MediaDefs. For example, you could employ a new MIME type such as const char *MEDIA_MIMETYPE_VIDEO_AVC_ENCRYPT = "video/avc-encrypt";
Register your new component with this updated MIME type in media_codecs.xml. Please note that you will have to ensure that the component quirks are also handled accordingly.
In the OMXCodec::setVideoOutputFormat method implementation, you will have to introduce support for handling your new MIME type, just as is done for H.264. Please note that you will have to make similar changes elsewhere in OMXCodec to support the new MIME type.
In MediaExtractor, you will have to signal the MIME type for the video track using the newly defined type. With these changes, your component will be selected and created.
However, the challenge still remains: Where to perform the decryption? For this, you could as well employ the same solution as described in the previous section i.e. handle the same as part of OMX_EmptyThisBuffer call.
Pros: None that I can think of.
Cons: First, the solution is not scalable. You will have to keep adding newer MIME types and keep modifying the Stagefright framework. Next, the changes in OMXCodec will require corresponding changes in MediaExtractor. Hence, even though your initial focus is on the MP4 extractor, if you wish to extend the solution to other container formats like AVI or MKV, you will have to add support for the new MIME types in those extractors too.
Lastly, some notes.
As a preferred solution, I would recommend Solution 1 as it is easy and simple.
I haven't touched upon the ACodec-based implementation of the codec. However, I do feel that Solution 1 would be a far easier solution to implement even if such support is required in the future.
If you aren't modifying the OMX core, you shouldn't need to modify libstagefrighthw.so. Just FYI, this is typically implemented by vendors as part of their vendor-specific modules, as in vendor/<xyz>/hardware/.... You need to check with your platform provider for the sources of libstagefrighthw.so.

Controlling network communication

I need to develop an Android application that sets up a connection with a computer via WiFi and then sends data packets. However, I need control over the sent packets: not only their data but also their headers; it should be possible to modify any field in the header as well. On Windows this is possible with WinPcap and Jpcap, and I wonder if I can find something similar on Android. Is there any ready API that will help with my problem?
There's no API available to a Java/Dalvik app on Android which would allow you to do that.
Android is a Linux system, though. So you could try to find/write one or two Linux applications to support your effort - or use JNI.
Bottomline: Native code will definitely be necessary to achieve what you want, no way to do this in Java alone.
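Whatever native raw-socket mechanism you end up with, the header bytes themselves can still be assembled in plain Java before being handed down. A sketch that builds a UDP header (RFC 768) and computes the RFC 1071 ones'-complement checksum; note that for brevity the checksum here covers only the 8 header bytes, whereas a real UDP checksum also covers the IPv4 pseudo-header and the payload:

```java
import java.nio.ByteBuffer;

// Hand-assembles a UDP header (RFC 768) and fills in the RFC 1071
// ones'-complement checksum. Simplified: the checksum is computed over the
// 8 header bytes only, not the pseudo-header and payload a real stack uses.
public class UdpHeader {
    public static byte[] build(int srcPort, int dstPort, int length) {
        ByteBuffer b = ByteBuffer.allocate(8);
        b.putShort((short) srcPort);
        b.putShort((short) dstPort);
        b.putShort((short) length);
        b.putShort((short) 0);              // checksum placeholder
        byte[] header = b.array();
        int sum = checksum(header);
        header[6] = (byte) (sum >>> 8);
        header[7] = (byte) sum;
        return header;
    }

    // RFC 1071: sum 16-bit words, fold carries, take ones' complement.
    static int checksum(byte[] data) {
        int sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int word = ((data[i] & 0xFF) << 8)
                     | (i + 1 < data.length ? data[i + 1] & 0xFF : 0);
            sum += word;
        }
        while ((sum >>> 16) != 0) sum = (sum & 0xFFFF) + (sum >>> 16);
        return ~sum & 0xFFFF;
    }
}
```

A useful property for sanity checks: recomputing the checksum over a header whose checksum field is already filled in yields 0.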
