I am using the FFmpeg build from AppUnite with the latest patch for Stagefright support in order to play back HTTP live streams: https://review.appunite.com/#/c/1779/
Since the stream does not start at 0, I added the following code to avoid a black screen:
struct Player {
+ int64_t video_start_time;
}
void player_get_video_duration(struct Player *player) {
+ player->video_start_time = 0;
+ for (i = 0; i < player->capture_streams_no; ++i) {
+ AVStream *stream = player->input_streams[i];
+ if (stream->start_time > 0) {
+ player->video_start_time = av_rescale_q(
+ stream->start_time, stream->time_base, AV_TIME_BASE_Q);
+
+ LOGI(3, "player_set_data_source stream[%d] start_time: %ld",
+ i, player->video_start_time);
+
+ break;
+ }
+ }
}
enum WaitFuncRet player_wait_for_frame(
struct Player *player, double time, int stream_no) {
- int64_t current_time = av_gettime();
+ int64_t current_time = av_gettime() + player->video_start_time;
}
However, as soon as the sleep_time in player_wait_for_frame drops below 0, playback freezes and then hangs waiting for a frame that never arrives. The queues allocated by the player_alloc_queues function do not seem big enough to hold the real-time stream pushed in between player_open_input and player_start_decoding_threads; however, increasing the number of nodes in the queue does not resolve the issue. The problem seems to lie in the player_wait_for_frame method, but I am unable to find a solution.
I have spent quite a lot of time trying to resolve this nasty issue, without success so far. Any help is really appreciated!
I got the same problem using that library. In my case I was trying to play a DVB stream forwarded over the network. The stream is live, and sometimes (because of network problems or other reasons) some packets get lost or corrupted, and the library stops playing.
To get it working I made two other fixes in addition to the "video start time" fix (the patch in your post).
The first one disables the "stop stream on decoding failure" behavior:
@@ -1112,7 +1127,7 @@
av_free_packet(packet_data->packet);
}
queue_pop_finish(queue, &player->mutex_queue, &player->cond_queue);
- if (err < 0) {
+ if (!player->is_live_stream && err < 0) {
pthread_mutex_lock(&player->mutex_queue);
goto stop;
}
The second one disables the PTS adjustment for frames that arrive too late (in a live stream we must drop them):
@@ -738,7 +741,19 @@
"player_wait_for_frame[%d] Waiting for frame: sleeping: %" SCNd64,
stream_no, sleep_time);
- if (sleep_time < -300000ll) {
+ if (player->is_live_stream && sleep_time < -1000000ll) {
+ // 1000 ms late
+ int64_t new_value = player->start_time - sleep_time;
+
+ LOGI(4,
+ "player_wait_for_frame[%d] skipping frame because too late",
+ stream_no);
+
+ ret = WAIT_FUNC_RET_SKIP;
+ break;
+ }
+
+ if (sleep_time < -300000ll) {
// 300 ms late
int64_t new_value = player->start_time - sleep_time;
The "player->is_live_stream" is an int 0/1, i set to 1 when i'm playing a live stream, so for other kinds of sources the library still work as before.
Hope this can help you :)
I'm working on adding a live broadcasting feature to an Android app. I do so through RTMP and make use of the DailyMotion Android SDK, which in turn makes use of Kickflip.
Everything works perfectly, except for the playback of the audio on the website (which makes use of Flash). The audio does work in VLC, so it seems to be an issue with Flash being unable to decode the AAC audio.
For the audio I instantiate an encoder with the "audio/mp4a-latm" mime type. The Android developer docs state the following about this mime type: "audio/mp4a-latm" - AAC audio (note, this is raw AAC packets, not packaged in LATM!). I suspect that my problem lies here, but so far I have not been able to find a solution for it.
Pretty much all my research, including this SO question about the matter, pointed me in the direction of adding an ADTS header to the audio byte array. That results in the following code in the writeSampleData method:
boolean isHeader = false;
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
isHeader = true;
} else {
pts = bufferInfo.presentationTimeUs - mFirstPts;
}
if (mFirstPts != -1 && pts >= 0) {
pts /= 1000;
byte data[] = new byte[bufferInfo.size + 7];
addADTStoPacket(data, bufferInfo.size + 7);
encodedData.position(bufferInfo.offset);
encodedData.get(data, 7, bufferInfo.size);
addDataPacket(new AudioPacket(data, isHeader, pts, mAudioFirstByte));
}
The addADTStoPacket method is identical to the one in the above mentioned SO post, but I will show it here regardless:
private void addADTStoPacket(byte[] packet, int packetLen) {
int profile = 2; //AAC LC
//39=MediaCodecInfo.CodecProfileLevel.AACObjectELD;
int freqIdx = 4; //44.1KHz
int chanCfg = 1; // 1 = mono (single channel element)
// fill in ADTS data
packet[0] = (byte)0xFF;
packet[1] = (byte)0xF9;
packet[2] = (byte)(((profile-1)<<6) + (freqIdx<<2) +(chanCfg>>2));
packet[3] = (byte)(((chanCfg&3)<<6) + (packetLen>>11));
packet[4] = (byte)((packetLen&0x7FF) >> 3);
packet[5] = (byte)(((packetLen&7)<<5) + 0x1F);
packet[6] = (byte)0xFC;
}
The variables in the above method match the settings I have configured in the application, so I'm pretty sure that's fine.
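As a cross-check on those hard-coded values, here is a small hypothetical helper (not part of the original code) that maps the encoder's sample rate to the ADTS sampling-frequency index according to the MPEG-4 frequency table; with it, freqIdx = 4 does correspond to 44.1 kHz, and chanCfg = 1 is the mono channel configuration:

// Hypothetical helper, not in the original code: maps a sample rate to the
// ADTS sampling-frequency index used by addADTStoPacket().
private static int adtsFreqIdx(int sampleRateHz) {
    final int[] rates = {
        96000, 88200, 64000, 48000, 44100, 32000,
        24000, 22050, 16000, 12000, 11025, 8000
    };
    for (int i = 0; i < rates.length; i++) {
        if (rates[i] == sampleRateHz) {
            return i; // e.g. 44100 Hz -> 4, 48000 Hz -> 3
        }
    }
    throw new IllegalArgumentException("Unsupported AAC sample rate: " + sampleRateHz);
}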
The data is written to the output stream in the following method of the AudioPacket class:
@Override
public void writePayload(OutputStream outputStream) throws IOException {
outputStream.write(mFirstByte);
outputStream.write(mIsAudioSpecificConfic ? 0 : 1);
outputStream.write(mData);
}
Am I missing something here? I can post more code if necessary, but I think this covers the most relevant parts. Thanks in advance; I really hope someone is able to help, as I've been stuck for a couple of days now...
I would like to produce an MP4 file by multiplexing audio from the mic (overriding didGetAudioData) and video from the camera (overriding onPreviewFrame). However, I have run into an audio/video synchronization problem: the video appears faster than the audio. I wonder whether the problem is related to incompatible configurations or to presentationTimeUs; could someone guide me on how to fix it? Below is my setup.
Video configuration
formatVideo = MediaFormat.createVideoFormat(MIME_TYPE_VIDEO, 640, 360);
formatVideo.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
formatVideo.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
formatVideo.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
formatVideo.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
I get the video presentation PTS as follows:
if(generateIndex == 0) {
videoAbsolutePtsUs = 132;
StartVideoAbsolutePtsUs = System.nanoTime() / 1000L;
}else {
CurrentVideoAbsolutePtsUs = System.nanoTime() / 1000L;
videoAbsolutePtsUs =132+ CurrentVideoAbsolutePtsUs-StartVideoAbsolutePtsUs;
}
generateIndex++;
Audio configuration
format = MediaFormat.createAudioFormat(MIME_TYPE, 48000/*sample rate*/, AudioFormat.CHANNEL_IN_MONO /*Channel config*/);
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_SAMPLE_RATE,48000);
format.setInteger(MediaFormat.KEY_CHANNEL_COUNT,1);
format.setInteger(MediaFormat.KEY_BIT_RATE,64000);
I get the audio presentation PTS as follows:
if(generateIndex == 0) {
audioAbsolutePtsUs = 132;
StartAudioAbsolutePtsUs = System.nanoTime() / 1000L;
}else {
CurrentAudioAbsolutePtsUs = System.nanoTime() / 1000L;
audioAbsolutePtsUs =CurrentAudioAbsolutePtsUs - StartAudioAbsolutePtsUs;
}
generateIndex++;
audioAbsolutePtsUs = getJitterFreePTS(audioAbsolutePtsUs, audioInputLength / 2);
long startPTS = 0;
long totalSamplesNum = 0;
private long getJitterFreePTS(long bufferPts, long bufferSamplesNum) {
long correctedPts = 0;
long bufferDuration = (1000000 * bufferSamplesNum) / 48000;
bufferPts -= bufferDuration; // accounts for the delay of acquiring the audio buffer
if (totalSamplesNum == 0) {
// reset
startPTS = bufferPts;
totalSamplesNum = 0;
}
correctedPts = startPTS + (1000000 * totalSamplesNum) / 48000;
if(bufferPts - correctedPts >= 2*bufferDuration) {
// reset
startPTS = bufferPts;
totalSamplesNum = 0;
correctedPts = startPTS;
}
totalSamplesNum += bufferSamplesNum;
return correctedPts;
}
Was my issue caused by applying the jitter function to audio only? If so, how could I apply the jitter function to video? I also tried to find the correct audio and video presentation PTS by following https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java, but EncodeDecodeTest only provides a video PTS. That is the reason my implementation uses the system nanotime for both audio and video. If I want to use the video presentation PTS from EncodeDecodeTest, how do I construct a compatible audio presentation PTS? Thanks for the help!
Below is how I queue a YUV frame to the video MediaCodec, for reference. The audio part is identical except for a different presentation PTS.
int videoInputBufferIndex;
int videoInputLength;
long videoAbsolutePtsUs;
long StartVideoAbsolutePtsUs, CurrentVideoAbsolutePtsUs;
int put_v =0;
int get_v =0;
int generateIndex = 0;
public void setByteBufferVideo(byte[] buffer, boolean isUsingFrontCamera, boolean Input_endOfStream){
if(Build.VERSION.SDK_INT >=18){
try{
endOfStream = Input_endOfStream;
if(!Input_endOfStream){
ByteBuffer[] inputBuffers = mVideoCodec.getInputBuffers();
videoInputBufferIndex = mVideoCodec.dequeueInputBuffer(-1);
if (VERBOSE) {
Log.w(TAG,"[put_v]:"+(put_v)+"; videoInputBufferIndex = "+videoInputBufferIndex+"; endOfStream = "+endOfStream);
}
if(videoInputBufferIndex>=0) {
ByteBuffer inputBuffer = inputBuffers[videoInputBufferIndex];
inputBuffer.clear();
inputBuffer.put(mNV21Convertor.convert(buffer));
videoInputLength = buffer.length;
if(generateIndex == 0) {
videoAbsolutePtsUs = 132;
StartVideoAbsolutePtsUs = System.nanoTime() / 1000L;
}else {
CurrentVideoAbsolutePtsUs = System.nanoTime() / 1000L;
videoAbsolutePtsUs =132+ CurrentVideoAbsolutePtsUs - StartVideoAbsolutePtsUs;
}
generateIndex++;
if (VERBOSE) {
Log.w(TAG, "[put_v]:"+(put_v)+"; videoAbsolutePtsUs = " + videoAbsolutePtsUs + "; CurrentVideoAbsolutePtsUs = "+CurrentVideoAbsolutePtsUs);
}
if (videoInputLength == AudioRecord.ERROR_INVALID_OPERATION) {
Log.w(TAG, "[put_v]ERROR_INVALID_OPERATION");
} else if (videoInputLength == AudioRecord.ERROR_BAD_VALUE) {
Log.w(TAG, "[put_v]ERROR_ERROR_BAD_VALUE");
}
if (endOfStream) {
Log.w(TAG, "[put_v]:"+(put_v++)+"; [get] receive endOfStream");
mVideoCodec.queueInputBuffer(videoInputBufferIndex, 0, videoInputLength, videoAbsolutePtsUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
} else {
Log.w(TAG, "[put_v]:"+(put_v++)+"; receive videoInputLength :" + videoInputLength);
mVideoCodec.queueInputBuffer(videoInputBufferIndex, 0, videoInputLength, videoAbsolutePtsUs, 0);
}
}
}
}catch (Exception x) {
x.printStackTrace();
}
}
}
How I solved this in my application was by setting the PTS of all video and audio frames against a shared "sync clock" (note: shared also means it must be thread-safe) that starts when the first video frame (which has a PTS of 0 on its own) is available. So if audio recording starts sooner than video, audio data is discarded (it doesn't go into the encoder) until video starts, and if it starts later, then the first audio PTS will be relative to the start of the entire video.
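A minimal sketch of that sync-clock idea (names are illustrative; this is not the actual implementation described above):

// Illustrative sketch of a shared sync clock: all frames get their PTS from it,
// and it only starts once the first video frame has been captured.
final class SyncClock {
    private long baseTimeUs = -1; // capture time of the first video frame

    // Call with the capture time (e.g. System.nanoTime() / 1000) of the first video frame.
    synchronized void startAt(long firstVideoCaptureTimeUs) {
        if (baseTimeUs < 0) {
            baseTimeUs = firstVideoCaptureTimeUs;
        }
    }

    synchronized boolean isStarted() {
        return baseTimeUs >= 0;
    }

    // Returns the PTS relative to the first video frame, or -1 if the clock has not
    // started yet or the sample predates it (such audio buffers should be dropped).
    synchronized long toPtsUs(long captureTimeUs) {
        if (baseTimeUs < 0 || captureTimeUs < baseTimeUs) {
            return -1;
        }
        return captureTimeUs - baseTimeUs;
    }
}

Video frames call startAt() with their own capture time before requesting a PTS, so the first video frame gets PTS 0; audio buffers for which toPtsUs() returns -1 simply never reach the encoder.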
Of course you are free to allow audio to start first, but players will usually skip or wait for the first video frame anyway. Also be careful: encoded audio frames will arrive "out of order", and MediaMuxer will fail with an error sooner or later. My solution was to queue them all like this: sort them by PTS as each new one comes in, then write everything that is older than 500 ms (relative to the newest one) to MediaMuxer, but only frames with a PTS higher than the latest written one. Ideally this means data is written smoothly to MediaMuxer, with a 500 ms delay. Worst case, you will lose a few audio frames.
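A rough sketch of that 500 ms reordering queue, using the standard MediaMuxer.writeSampleData() API (names are illustrative; this is not the actual implementation described above):

import java.nio.ByteBuffer;
import java.util.PriorityQueue;
import android.media.MediaCodec;
import android.media.MediaMuxer;

final class ReorderingMuxerQueue {
    private static final long DELAY_US = 500_000L; // 500 ms reordering window

    private static final class Sample {
        final int track;
        final ByteBuffer data;
        final MediaCodec.BufferInfo info;
        Sample(int track, ByteBuffer data, MediaCodec.BufferInfo info) {
            this.track = track;
            this.data = data;
            this.info = info;
        }
    }

    private final MediaMuxer muxer;
    private final PriorityQueue<Sample> queue = new PriorityQueue<>(16,
            (a, b) -> Long.compare(a.info.presentationTimeUs, b.info.presentationTimeUs));
    private long newestPtsUs = Long.MIN_VALUE;
    private long lastWrittenPtsUs = Long.MIN_VALUE;

    ReorderingMuxerQueue(MediaMuxer muxer) {
        this.muxer = muxer;
    }

    synchronized void enqueue(int track, ByteBuffer encoded, MediaCodec.BufferInfo info) {
        // Copy the codec output; the codec's buffer is recycled after releaseOutputBuffer().
        ByteBuffer copy = ByteBuffer.allocate(info.size);
        encoded.position(info.offset);
        encoded.limit(info.offset + info.size);
        copy.put(encoded);
        copy.flip();
        MediaCodec.BufferInfo infoCopy = new MediaCodec.BufferInfo();
        infoCopy.set(0, info.size, info.presentationTimeUs, info.flags);

        queue.add(new Sample(track, copy, infoCopy));
        newestPtsUs = Math.max(newestPtsUs, info.presentationTimeUs);

        // Write out everything that is more than 500 ms older than the newest sample,
        // dropping anything that is not strictly newer than the last written PTS.
        while (!queue.isEmpty()
                && queue.peek().info.presentationTimeUs <= newestPtsUs - DELAY_US) {
            Sample s = queue.poll();
            if (s.info.presentationTimeUs > lastWrittenPtsUs) {
                muxer.writeSampleData(s.track, s.data, s.info);
                lastWrittenPtsUs = s.info.presentationTimeUs;
            }
        }
    }
}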
I'm trying to develop an Android app that reacts when the currently playing music hits a drum beat, so that I can do something at that moment. That means I first need to analyze the current music and then decide whether a beat is happening right now.
I have tested the Android API Demos: using the MediaPlayer and Visualizer classes I can get the raw byte data, but how can I tell whether a beat is happening at that moment?
I'm new here... sorry if I haven't described this clearly; any answer is appreciated!
Here is the part of my code that refreshes the Android view:
public void updateVisualizer(byte[] fft)
{
if(mFirst )
{
mInfoView.setText(mInfoView.getText().toString() + "\nCaptureSize: " + fft.length);
mFirst = false;
}
byte[] model = new byte[fft.length / 2 + 1];
model[0] = (byte) Math.abs(fft[0]);
for (int i = 2, j = 1; j < mSpectrumNum;)
{
model[j] = (byte) Math.hypot(fft[i], fft[i + 1]);
i += 2;
j++;
}
mBytes = model;
// I want to decide whether it is beating now
/*if(beating){
doSomeThing();
}*/
invalidate();
}
I'm trying to decode a bitmap from an extended FilterInputStream. I have to perform on-the-fly byte manipulation on the image data to provide a decodable image to SKIA; however, it seems like SKIA ignores my custom InputStream and initializes one of its own...
When I run my test application, attempting to load a 2 MB JPEG results in ObfuscatedInputStream.read([]) being called only once from BitmapFactory.decodeStream().
It seems like once the file type has been determined from the first 16 KB of data retrieved from my ObfuscatedInputStream, it initializes its own native stream and reads from that, effectively rendering all the changes I make to how the input stream works useless...
Here is the buffered read function in my extended FilterInputStream class. The Log.d at the top of the function is only executed once.
@Override
public int read(byte b[], int off, int len) throws IOException
{
Log.d(TAG, "called read[] with aval + " + super.available() + " len " + len);
int numBytesRead = -1;
if (pos == 0)
{
numBytesRead = fill(b);
if (numBytesRead < len)
{
int j;
numBytesRead += ((j = super.read(b, numBytesRead, len - numBytesRead)) == -1) ? 0 : j ;
}
}
else
numBytesRead = super.read(b, 0, len);
if (numBytesRead > -1)
pos += numBytesRead;
Log.d(TAG, "actually read " + numBytesRead);
return numBytesRead;
}
Has anyone ever encountered this issue? It seems like the only way to get my desired behavior is to rewrite portions of the SKIA library... I would really like to know what the point of the InputStream parameter is if the native implementation initializes a stream of its own...
It turns out that it wasn't able to detect that the data was an actual image from the first 1024 bytes it takes in. If it doesn't detect that the file is an actual image, it will not bother decoding the rest, hence read([]) only being called once.
I've implemented RTSP on the Android MediaPlayer using VLC as an RTSP server with this command:
# vlc -vvv /home/marco/Videos/pippo.mp4 --sout #rtp{dst=192.168.100.246,port=6024-6025,sdp=rtsp://192.168.100.243:8080/test.sdp}
and on the Android project:
Uri videoUri = Uri.parse("rtsp://192.168.100.242:8080/test.sdp");
videoView.setVideoURI(videoUri);
videoView.start();
This works fine, but I would also like to play a live RTP stream, so I copied the SDP file onto the SD card (/mnt/sdcard/test.sdp) and set up VLC with:
# vlc -vvv /home/marco/Videos/pippo.mp4 --sout #rtp{dst=192.168.100.249,port=6024-6025}
I tried to play the RTP stream by setting the path of the SDP file locally:
Uri videoUri = Uri.parse("/mnt/sdcard/test.sdp");
videoView.setVideoURI(videoUri);
videoView.start();
But I got an error:
D/MediaPlayer( 9616): Couldn't open file on client side, trying server side
W/MediaPlayer( 9616): info/warning (1, 26)
I/MediaPlayer( 9616): Info (1,26)
E/PlayerDriver( 76): Command PLAYER_INIT completed with an error or info PVMFFailure
E/MediaPlayer( 9616): error (1, -1)
E/MediaPlayer( 9616): Error (1,-1)
D/VideoView( 9616): Error: 1,-1
Does anyone know where the problem is? Am I doing something wrong, or is it not possible to play RTP with MediaPlayer?
Cheers
Giorgio
I have a partial solution for you.
I'm currently working on an R&D project involving RTP streaming of media from a server to Android clients.
Through this work, I contribute to my own library called smpte2022lib, which you may find here:
http://sourceforge.net/projects/smpte-2022lib/.
With the help of this library (the Java implementation is currently the best one) you may be able to parse RTP multicast streams coming from professional streaming equipment, VLC RTP sessions, and so on.
I have already tested it successfully with captured professional RTP streams carrying SMPTE 2022 2D-FEC and with simple streams generated by VLC.
Unfortunately I cannot put a code snippet here, as the project using it is currently under copyright, but I assure you that you can use it simply by parsing UDP streams with the help of the RtpPacket constructor.
If the bytes are valid RTP packets, they will be decoded as such.
At the moment, I wrap the call to RtpPacket's constructor in a thread that stores the decoded payload as a media file. Then I call the VideoView with this file as a parameter.
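To illustrate that receive-and-store idea without the copyrighted project (and without assuming anything about the smpte2022lib API), here is a simplified sketch that reads UDP datagrams and appends the raw RTP payload to a file, skipping only the 12-byte fixed RTP header; it assumes packets with no CSRC entries or header extension, and the port and file path are made up for the example:

import java.io.FileOutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

final class RtpDumpThread extends Thread {
    private static final int RTP_FIXED_HEADER = 12; // fixed header size per RFC 3550

    @Override
    public void run() {
        DatagramSocket socket = null;
        FileOutputStream out = null;
        try {
            socket = new DatagramSocket(6024);                         // example port
            out = new FileOutputStream("/mnt/sdcard/rtp_payload.bin"); // example path
            byte[] buf = new byte[2048];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            while (!isInterrupted()) {
                socket.receive(packet);
                int len = packet.getLength();
                // Top two bits of the first byte must be 2 (RTP version 2).
                if (len > RTP_FIXED_HEADER && ((buf[0] & 0xFF) >>> 6) == 2) {
                    out.write(buf, RTP_FIXED_HEADER, len - RTP_FIXED_HEADER);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (socket != null) socket.close();
            try { if (out != null) out.close(); } catch (Exception ignored) { }
        }
    }
}

A real implementation would also have to handle the payload format (for example, reassembling H.264 NAL units) before the resulting file is something VideoView can actually play.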
Crossing fingers ;-)
Kind Regards,
David Fischer
It is possible on Android (not with MediaPlayer, but with other stuff further down the stack), but do you really want to pursue RTSP/RTP when the rest of the media ecosystem does not?
IMO there are far better media/streaming approaches under the umbrella of HTML5/WebRTC. For example, look at what 'Ondello' is doing with streams.
That said, here is some old project code for Android/RTSP/SDP/RTP using 'netty' and 'efflux'. It will negotiate some portions of 'Sessions' with SDP file providers. I can't remember whether it would actually play the audio portion of YouTube/RTSP streams, but that was my goal at the time. (I think it worked using the AMR-NB codec, but there were tons of issues and I dropped RTSP on Android like a bad habit!)
on Git....
@Override
public void mediaDescriptor(Client client, String descriptor)
{
// searches for control: session and media arguments.
final String target = "control:";
Log.d(TAG, "Session Descriptor\n" + descriptor);
int position = -1;
while((position = descriptor.indexOf(target)) > -1)
{
descriptor = descriptor.substring(position + target.length());
resourceList.add(descriptor.substring(0, descriptor.indexOf('\r')));
}
}
private int nextPort()
{
return (port += 2) - 2;
}
private void getRTPStream(TransportHeader transport){
String[] words;
// only want 2000 part of 'client_port=2000-2001' in the Transport header in the response
words = transport.getParameter("client_port").substring(transport.getParameter("client_port").indexOf("=") +1).split("-");
port_lc = Integer.parseInt(words[0]);
words = transport.getParameter("server_port").substring(transport.getParameter("server_port").indexOf("=") +1).split("-");
port_rm = Integer.parseInt(words[0]);
source = transport.getParameter("source").substring(transport.getParameter("source").indexOf("=") +1);
ssrc = transport.getParameter("ssrc").substring(transport.getParameter("ssrc").indexOf("=") +1);
// assume dynamic Packet type = RTP , 99
getRTPStream(session, source, port_lc, port_rm, 99);
//getRTPStream("sessiona", source, port_lc, port_rm, 99);
Log.d(TAG, "raw parms " +port_lc +" " +port_rm +" " +source );
// String[] words = session.split(";");
Log.d(TAG, "session: " +session);
Log.d(TAG, "transport: " +transport.getParameter("client_port")
+" " +transport.getParameter("server_port") +" " +transport.getParameter("source")
+" " +transport.getParameter("ssrc"));
}
private void getRTPStream(String session, String source, int portl, int portr, int payloadFormat ){
// what do u do with ssrc?
InetAddress addr;
try {
addr = InetAddress.getLocalHost();
// Get IP Address
// LAN_IP_ADDR = addr.getHostAddress();
LAN_IP_ADDR = "192.168.1.125";
Log.d(TAG, "using client IP addr " +LAN_IP_ADDR);
} catch (UnknownHostException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
final CountDownLatch latch = new CountDownLatch(2);
RtpParticipant local1 = RtpParticipant.createReceiver(new RtpParticipantInfo(1), LAN_IP_ADDR, portl, portl+=1);
// RtpParticipant local1 = RtpParticipant.createReceiver(new RtpParticipantInfo(1), "127.0.0.1", portl, portl+=1);
RtpParticipant remote1 = RtpParticipant.createReceiver(new RtpParticipantInfo(2), source, portr, portr+=1);
remote1.getInfo().setSsrc( Long.parseLong(ssrc, 16));
session1 = new SingleParticipantSession(session, payloadFormat, local1, remote1);
Log.d(TAG, "remote ssrc " +session1.getRemoteParticipant().getInfo().getSsrc());
session1.init();
session1.addDataListener(new RtpSessionDataListener() {
@Override
public void dataPacketReceived(RtpSession session, RtpParticipantInfo participant, DataPacket packet) {
// System.err.println("Session 1 received packet: " + packet + "(session: " + session.getId() + ")");
//TODO close the file, flush the buffer
// if (_sink != null) _sink.getPackByte(packet);
getPackByte(packet);
// System.err.println("Ssn 1 packet seqn: typ: datasz " +packet.getSequenceNumber() + " " +packet.getPayloadType() +" " +packet.getDataSize());
// System.err.println("Ssn 1 packet sessn: typ: datasz " + session.getId() + " " +packet.getPayloadType() +" " +packet.getDataSize());
// latch.countDown();
}
});
// DataPacket packet = new DataPacket();
// packet.setData(new byte[]{0x45, 0x45, 0x45, 0x45});
// packet.setSequenceNumber(1);
// session1.sendDataPacket(packet);
// try {
// latch.await(2000, TimeUnit.MILLISECONDS);
// } catch (Exception e) {
// fail("Exception caught: " + e.getClass().getSimpleName() + " - " + e.getMessage());
// }
}
//TODO below should collaborate with the audioTrack object and should write to the AT buffr
// audioTrack write was blocking forever
public void getPackByte(DataPacket packet) {
//TODO this is getting called but not sure why only one time
// or whether it is stalling in mid-exec??
//TODO on firstPacket write bytes and start audioTrack
// AMR-nb frames at 12.2 KB or format type 7 frames are handled .
// after the normal header, the getDataArray contains extra 10 bytes of dynamic header that are bypassed by 'limit'
// real value for the frame separator comes in the input stream at position 1 in the data array
// returned by
// int newFrameSep = 0x3c;
// bytes avail = packet.getDataSize() - limit;
// byte[] lbuf = new byte[packet.getDataSize()];
// if ( packet.getDataSize() > 0)
// lbuf = packet.getDataAsArray();
//first frame includes the 1 byte frame header whose value should be used
// to write subsequent frame separators
Log.d(TAG, "getPackByt start and play");
if(!started){
Log.d(TAG, " PLAY audioTrak");
track.play();
started = true;
}
// track.write(packet.getDataAsArray(), limit, (packet.getDataSize() - limit));
track.write(packet.getDataAsArray(), 0, packet.getDataSize() );
Log.d(TAG, "getPackByt aft write");
// if(!started && nBytesRead > minBufferSize){
// Log.d(TAG, " PLAY audioTrak");
// track.play();
// started = true;}
nBytesRead += packet.getDataSize();
if (nBytesRead % 500 < 375) Log.d(TAG, " getPackByte plus 5K received");
}
}
Actually, it's possible to play RTSP/RTP streams on Android by using a modified version of ExoPlayer, which officially doesn't support RTSP/RTP (issue 55); however, there's an active pull request #3854 to add this support.
In the meantime, you can clone the original author's ExoPlayer fork, which does support RTSP (branch dev-v2-rtsp):
git clone -b dev-v2-rtsp https://github.com/tresvecesseis/ExoPlayer.git.
I've tested it and it works perfectly. The authors are actively working to fix the issues reported by many users, and I hope that RTSP support at some point becomes part of the official ExoPlayer.
Unfortunately, it is not possible to play an RTP stream with the Android MediaPlayer.
Solutions to this problem include decoding the RTP stream with FFmpeg; tutorials on how to compile FFmpeg for Android can be found on the web.