I have a small project to stream video to an Android device. Streaming works, but I have a problem controlling the video: the MediaController has no effect when I push pause, and VideoView.pause() doesn't work either. The streaming server is based on GStreamer (it was written by a friend), and I'm using Android 2.2 with CyanogenMod.
This is the server code:
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int
main (int argc, char *argv[])
{
  GMainLoop *loop;
  GstRTSPServer *server;
  GstRTSPMediaMapping *mapping;
  GstRTSPMediaFactory *factory;
  gchar *str;

  gst_init (&argc, &argv);

  if (argc < 2) {
    g_message ("usage: %s <filename>", argv[0]);
    return -1;
  }

  loop = g_main_loop_new (NULL, FALSE);

  /* create a server instance */
  server = gst_rtsp_server_new ();

  /* get the mapping for this server, every server has a default mapper object
   * that can be used to map uri mount points to media factories */
  mapping = gst_rtsp_server_get_media_mapping (server);

  str = g_strdup_printf ("( "
      "filesrc location=\"%s\" ! decodebin2 name=d "
      "d. ! queue ! videoscale ! video/x-raw-yuv, width=500, height=300 "
      "! ffenc_mpeg4 ! rtpmp4vpay name=pay0 "
      "d. ! queue ! audioconvert ! faac ! rtpmp4apay name=pay1"
      " )", argv[1]);

  /* make a media factory for a test stream. The default media factory can use
   * gst-launch syntax to create pipelines.
   * any launch line works as long as it contains elements named pay%d. Each
   * element with pay%d names will be a stream */
  factory = gst_rtsp_media_factory_new ();
  gst_rtsp_media_factory_set_launch (factory, str);
  g_free (str);

  /* attach the test factory to the /test url */
  gst_rtsp_media_mapping_add_factory (mapping, "/test", factory);

  /* don't need the ref to the mapper anymore */
  g_object_unref (mapping);

  /* attach the server to the default maincontext */
  gst_rtsp_server_attach (server, NULL);

  /* start serving */
  g_main_loop_run (loop);

  return 0;
}
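For reference: built against the 0.10-series gst-rtsp-server used above, this serves the given file at the /test mount point, by default on port 8554, so the Android client would open rtsp://<server-ip>:8554/test.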
From what I have gathered, the VideoView on Android only accepts H.264 feeds, so you need to be encoding in H.264.
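If the server pipeline above is yours to change, a minimal, untested sketch of the launch string with H.264 video instead of MPEG-4 part 2 (assuming the x264enc and rtph264pay plugins are installed) could look like this:
/* Hypothetical H.264 variant of the launch string above (untested);
 * pay0 carries H.264 video, pay1 keeps the AAC audio payloader. */
str = g_strdup_printf ("( "
    "filesrc location=\"%s\" ! decodebin2 name=d "
    "d. ! queue ! videoscale ! video/x-raw-yuv, width=500, height=300 "
    "! x264enc ! rtph264pay name=pay0 pt=96 "
    "d. ! queue ! audioconvert ! faac ! rtpmp4apay name=pay1 pt=97"
    " )", argv[1]);
Android's stock player is generally much happier with H.264/AAC over RTSP, though whether pause works also depends on the server honouring RTSP PAUSE requests for that media.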
I have been searching for a GStreamer RTSP client solution for a long time, with no luck.
I can now display a live or recorded stream from the server on an Android device with GStreamer (gstreamer-1.0-android-armv7-1.6.0), and I want to send PLAY/PAUSE requests to change the server state while playing a recorded stream.
My question: is there a simple way to obtain and access the pipeline when working with gst-rtsp-stream? Could someone please provide an example?
Nov 10 Update:
GstBus *bus;
CustomData *data = (CustomData *)userdata;
GSource *timeout_source;
GSource *bus_source;
GError *error = NULL;
guint flags;

/* Create our own GLib Main Context and make it the default one */
data->context = g_main_context_new ();
g_main_context_push_thread_default (data->context);

/* Build pipeline */
data->pipeline = gst_parse_launch ("playbin", &error);
if (error) {
  gchar *message = g_strdup_printf ("Unable to build pipeline: %s", error->message);
  g_clear_error (&error);
  set_ui_message (message, data);
  g_free (message);
  return NULL;
}

/* Set latency to 0 ns (gst_pipeline_set_latency() takes a GstPipeline, so cast) */
gst_pipeline_set_latency (GST_PIPELINE (data->pipeline), 0);

/* Disable subtitles */
g_object_get (data->pipeline, "flags", &flags, NULL);
flags &= ~GST_PLAY_FLAG_TEXT;
g_object_set (data->pipeline, "flags", flags, NULL);

/* Set the pipeline to READY, so it can already accept a window handle, if we have one */
data->target_state = GST_STATE_READY;
gst_element_set_state (data->pipeline, GST_STATE_READY);

/* Instruct the bus to emit signals for each received message, and connect to the interesting signals */
bus = gst_element_get_bus (data->pipeline);
bus_source = gst_bus_create_watch (bus);
g_source_set_callback (bus_source, (GSourceFunc) gst_bus_async_signal_func, NULL, NULL);
g_source_attach (bus_source, data->context);
g_source_unref (bus_source);
g_signal_connect (G_OBJECT (bus), "message::error", (GCallback)error_cb, data);
g_signal_connect (G_OBJECT (bus), "message::state-changed", (GCallback)state_changed_cb, data);
gst_object_unref (bus);
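For completeness, here is a minimal, untested sketch of what I mean by changing the server state: with playbin handling an rtsp:// URI, rtspsrc lives inside the pipeline, and pausing or resuming the pipeline is translated into RTSP PAUSE/PLAY requests for a seekable (recorded) stream. If the rtspsrc element itself is needed, playbin's "source-setup" signal hands it to you.
/* Sketch (untested): toggle playback; for a recorded RTSP stream this makes
 * rtspsrc send PAUSE/PLAY to the server. Field names match the CustomData above. */
static void set_playing_state (CustomData *data, gboolean play)
{
  data->target_state = play ? GST_STATE_PLAYING : GST_STATE_PAUSED;
  gst_element_set_state (data->pipeline, data->target_state);
}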
I'm working on a native Android project and trying to use OpenSL to play some audio effects. Working from the native audio sample project VisualGDB provides, I've written the code posted below.
Near the end, you can see I have commented out a line that enqueues the contents of a variable called hello into the buffer queue. hello comes from the sample project and contains about 700 lines of character bytes like this:
"\x02\x00\x01\x00\xff\xff\x09\x00\x0c\x00\x10\x00\x07\x00\x07\x00"
which together make up an audio clip of someone saying "hello". When enqueueing that byte data, my code works fine and I hear "hello" when I run the application. When I read from a wav file to play the asset I want, however, I only hear static. The size of the data buffer is the same as the size of the file, so it appears to be read in properly. The static plays for the duration of the wav file (or very close to it).
I really know nothing about data formats or audio programming. I've tried tweaking the format_pcm variables with different enum values, but had no success. Using a tool called GSpot that I found online, I know the following about the audio file I'm trying to play:
File Size: 557 KB (570,503 bytes) (this is the same size as the data buffer AAsset_read returns)
Codec: PCM Audio
Sample rate: 48000Hz
Bit rate: 1152 kb/s
Channels: 1
Any help or direction would be greatly appreciated.
SLDataLocator_AndroidSimpleBufferQueue loc_bufq = { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 1 };
SLDataFormat_PCM format_pcm;
format_pcm.formatType = SL_DATAFORMAT_PCM;
format_pcm.numChannels = 1;
format_pcm.samplesPerSec = SL_SAMPLINGRATE_48;// SL_SAMPLINGRATE_8;
format_pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_8; // SL_PCMSAMPLEFORMAT_FIXED_16;
format_pcm.containerSize = 16;
format_pcm.channelMask = SL_SPEAKER_FRONT_CENTER;
format_pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;
SLDataSource audioSrc = { &loc_bufq, &format_pcm };
// configure audio sink
SLDataLocator_OutputMix loc_outmix = { SL_DATALOCATOR_OUTPUTMIX, manager->GetOutputMixObject() };
SLDataSink audioSnk = { &loc_outmix, NULL };
//create audio player
const SLInterfaceID ids[3] = { SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME };
const SLboolean req[3] = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };
SLEngineItf engineEngine = manager->GetEngine();
result = (*engineEngine)->CreateAudioPlayer(engineEngine, &bqPlayerObject, &audioSrc, &audioSnk,
3, ids, req);
// realize the player
result = (*bqPlayerObject)->Realize(bqPlayerObject, SL_BOOLEAN_FALSE);
// get the play interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_PLAY, &bqPlayerPlay);
// get the buffer queue interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_BUFFERQUEUE,
&bqPlayerBufferQueue);
// register callback on the buffer queue
result = (*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback, NULL);
// get the effect send interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_EFFECTSEND,
&bqPlayerEffectSend);
// get the volume interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_VOLUME, &bqPlayerVolume);
// set the player's state to playing
result = (*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING);
uint8* pOutBytes = nullptr;
uint32 outSize = 0;
result = MyFileManager::GetInstance()->OpenFile(m_strAbsolutePath, (void**)&pOutBytes, &outSize, true);
const char* filename = m_strAbsolutePath->GetUTF8String();
result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, pOutBytes, outSize);
// result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, hello, sizeof(hello));
if (SL_RESULT_SUCCESS != result) {
return JNI_FALSE;
}
Several things were to blame. The format of the WAV files I was testing with was not what the specification described; there seemed to be a lot of empty data after the first chunk of header data. Also, the buffer passed to the queue needs to be a char* of just the wav sample data, not the header. I'd wrongly assumed the queue parsed the header out.
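For anyone hitting the same problem, here is a minimal sketch (not my original code; it assumes a simple, well-formed, little-endian RIFF layout already loaded into memory) of walking the WAV header to find the "data" chunk so that only the sample data is enqueued:
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Returns a pointer to the raw PCM samples and their size, or NULL if no
 * "data" chunk is found. */
static const uint8_t *find_wav_data (const uint8_t *buf, size_t size, uint32_t *data_size)
{
    if (size < 12 || memcmp (buf, "RIFF", 4) || memcmp (buf + 8, "WAVE", 4))
        return NULL;
    size_t pos = 12;
    while (pos + 8 <= size) {
        uint32_t chunk_size;
        memcpy (&chunk_size, buf + pos + 4, 4);          /* chunk length */
        if (!memcmp (buf + pos, "data", 4)) {            /* found the sample data */
            *data_size = chunk_size;
            return buf + pos + 8;
        }
        pos += 8 + chunk_size + (chunk_size & 1);        /* chunks are word-aligned */
    }
    return NULL;
}
The pointer and size returned by a helper like this are what would go into Enqueue(), instead of the whole file buffer.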
I'm working on an audio streamer, and I want to be able to modify the file I'm streaming as well as the target I'm streaming to. To do this I would modify the location of my filesrc, or the host/port of my udpsink.
I am having trouble understanding everything I need to know to get this pipeline linked together and playing. Previously I hard-coded everything and used the GStreamer pipeline parser (gst_parse_launch) with this pipeline:
filesrc location=/storage/sdcard0/Music/RunToTheHills.ogg ! oggdemux ! vorbisdec ! audioresample ! audioconvert ! audio/x-raw-int,channels=2,depth=16,width=16,rate=44100 ! rtpL16pay ! udpsink host=192.168.100.126 port=9001
Now I want to change the filesrc location and the udp host/port, as mentioned above.
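For context, this is roughly how I intend to retarget the stream once the pipeline exists (an untested sketch; as far as I know filesrc only accepts a new location while it is stopped, so the pipeline is dropped to READY first):
/* Sketch (untested): point the existing pipeline at a new file and UDP target.
 * Element pointers match the CustomData structure shown below. */
static void retarget_stream (CustomData *data, const gchar *path, const gchar *host, gint port)
{
  gst_element_set_state (data->pipeline, GST_STATE_READY);
  g_object_set (data->filesrc, "location", path, NULL);
  g_object_set (data->udp, "host", host, "port", port, NULL);
  gst_element_set_state (data->pipeline, GST_STATE_PLAYING);
}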
My application is an Android app using the NDK; however, this should not affect the code needed to set up a pipeline.
Here's what I've got so far, which results in a segfault.
My data structure:
/**
* Structure to hold all the various variables we need.
* This is handed to callbacks
*/
typedef struct _CustomData {
jobject app; /* The Java app */
GstElement *pipeline; /* GStreamer pipeline */
GstElement *filesrc; /* Input file */
GstPad *fileblock; /* Used to block filesrc */
GstElement *ogg; /* Ogg demultiplexer */
GstElement *vorbis; /* Vorbis decoder */
GstElement *resample;
GstElement *convert;
GstCaps *caps;
GstElement *rtp; /* RTP packer */
GstElement *udp; /* UDP sender */
GMainContext *context; /* GLib Context */
GMainLoop *main_loop; /* GLib main loop */
gboolean initialised; /* True after initialisation */
GstState state; /* Pipeline state */
GstState target_state; /* What state we want to put the pipeline into */
gint64 duration; /* Clip length */
gint64 desired_position; /* Where we want to track to within the clip */
GstClockTime last_seek_time; /* Used to throttle seeking */
gboolean is_live; /* Live streams don't need buffering */
} CustomData;
And here's my creation of the pipeline:
data->pipeline = gst_pipeline_new("pipeline");
data->filesrc = gst_element_factory_make("filesrc", NULL);
if (!data->filesrc) {
GST_ERROR("Failed to create filesrc.");
return NULL;
}
g_object_set(G_OBJECT(data->filesrc), "location", "/storage/sdcard0/Music/RunToTheHills.ogg", NULL);
data->fileblock = gst_element_get_static_pad(data->filesrc, "src");
data->ogg = gst_element_factory_make("oggdemux", NULL);
if (!data->ogg) {
GST_ERROR("Failed to create oggdemux.");
return NULL;
}
data->vorbis = gst_element_factory_make("vorbisdec", NULL);
if (!data->vorbis) {
GST_ERROR("Failed to create vorbisdec.");
return NULL;
}
data->resample = gst_element_factory_make("audioresample", NULL);
if (!data->resample) {
GST_ERROR("Failed to create audioresample.");
return NULL;
}
data->convert = gst_element_factory_make("audioconvert", NULL);
if (!data->convert) {
GST_ERROR("Failed to create audioconvert.");
return NULL;
}
data->caps = gst_caps_new_simple("audio/x-raw-int",
"channels", G_TYPE_INT, 2,
"depth", G_TYPE_INT, 16,
"width", G_TYPE_INT, 16,
"rate", G_TYPE_INT, 44100);
if (!data->caps) {
GST_ERROR("Failed to create caps");
return NULL;
}
data->rtp = gst_element_factory_make("rtpL16pay", NULL);
if (!data->rtp) {
GST_ERROR("Failed to create rtpL16pay.");
return NULL;
}
data->udp = gst_element_factory_make("udpsink", NULL);
if (!data->udp) {
GST_ERROR("Failed to create udpsink.");
return NULL;
}
g_object_set(G_OBJECT(data->udp), "host", "192.168.100.126", NULL);
g_object_set(G_OBJECT(data->udp), "port", 9001, NULL);
if (!data->ogg || !data->vorbis || !data->resample || !data->convert || !data->caps || !data->rtp || !data->udp) {
GST_ERROR("Unable to create all elements!");
return NULL;
}
gst_bin_add_many(GST_BIN(data->pipeline), data->filesrc, data->ogg, data->vorbis,
data->resample, data->convert, data->caps, data->rtp, data->udp);
/* Link all the elements together */
gst_element_link(data->filesrc, data->ogg);
gst_element_link(data->ogg, data->vorbis);
gst_element_link(data->vorbis, data->resample);
gst_element_link(data->resample, data->convert);
gst_element_link_filtered(data->convert, data->rtp, data->caps);
gst_element_link(data->rtp, data->udp);
Can someone give me some hints as to where I went wrong?
For interest, here's my previously working pipeline:
data->pipeline = gst_parse_launch("filesrc location=/storage/sdcard0/Music/RunToTheHills.ogg ! oggdemux ! vorbisdec ! audioresample ! audioconvert ! audio/x-raw-int,channels=2,depth=16,width=16,rate=44100 ! rtpL16pay ! udpsink host=192.168.100.126 port=9001", &error);
if (error) {
gchar *message = g_strdup_printf("Unable to build pipeline: %s", error->message);
g_clear_error (&error);
set_ui_message(message, data);
g_free (message);
return NULL;
}
You cannot simply link the oggdemux to the vorbisdec, because the demuxer's source pads are "sometimes" pads: they only appear once the stream has been parsed.
You need to add a handler function for the demuxer's 'pad-added' signal and perform the link there.
/* Connect to the pad-added signal */
g_signal_connect (data->ogg, "pad-added", G_CALLBACK (pad_added_handler), data);
And the handler:
void pad_added_handler (GstElement *src, GstPad *new_pad, CustomData *data)
{
  GstPad *sink_pad = gst_element_get_static_pad (data->vorbis, "sink");
  GstPadLinkReturn ret;
  GstCaps *new_pad_caps = NULL;
  GstStructure *new_pad_struct = NULL;
  const gchar *new_pad_type = NULL;

  g_print ("Received new pad '%s' from '%s':\n", GST_PAD_NAME (new_pad), GST_ELEMENT_NAME (src));

  /* If our sink is already linked, we have nothing to do here */
  if (gst_pad_is_linked (sink_pad)) {
    g_print ("  We are already linked. Ignoring.\n");
    goto exit;
  }

  /* Check the new pad's type. oggdemux exposes compressed pads
   * (audio/x-vorbis), which is what vorbisdec expects on its sink. */
  new_pad_caps = gst_pad_get_caps (new_pad);
  new_pad_struct = gst_caps_get_structure (new_pad_caps, 0);
  new_pad_type = gst_structure_get_name (new_pad_struct);
  if (!g_str_has_prefix (new_pad_type, "audio/x-vorbis")) {
    g_print ("  It has type '%s' which is not Vorbis audio. Ignoring.\n", new_pad_type);
    goto exit;
  }

  /* Attempt the link */
  ret = gst_pad_link (new_pad, sink_pad);
  if (GST_PAD_LINK_FAILED (ret)) {
    g_print ("  Type is '%s' but link failed.\n", new_pad_type);
  } else {
    g_print ("  Link succeeded (type '%s').\n", new_pad_type);
  }

exit:
  /* Unreference the new pad's caps, if we got them */
  if (new_pad_caps != NULL)
    gst_caps_unref (new_pad_caps);

  /* Unreference the sink pad */
  gst_object_unref (sink_pad);
}
Also, since you're getting a segmentation fault, I believe there is a memory issue. Are you sure you're using the CustomData structure correctly? I notice you're using data->element rather than data.element, which is only right if data is a pointer to the structure.
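One more guess at the segfault itself (untested, but based on how the GStreamer API is defined): gst_bin_add_many() is a NULL-terminated varargs function, and a GstCaps is not a GstElement, so it must not be added to the bin; the caps belong only in gst_element_link_filtered(). The add call would then look like:
/* caps are not elements and the varargs list must end with NULL */
gst_bin_add_many (GST_BIN (data->pipeline), data->filesrc, data->ogg, data->vorbis,
    data->resample, data->convert, data->rtp, data->udp, NULL);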
I am using the IP Webcam app on Android and receiving the stream on my PC over WiFi. I want to use OpenCV in Visual Studio (C++) to get that video stream; there is an option to get an MJPG stream at the following URL: http://MyIP:port/videofeed
How do I get it using OpenCV?
Old question, but I hope this can help someone (same as my answer here).
OpenCV expects a filename extension for its VideoCapture argument, even though one isn't always necessary (like in your case). You can "trick" it by passing in a dummy parameter which ends in the mjpg extension.
So perhaps try:
VideoCapture vc;
vc.open("http://MyIP:port/videofeed/?dummy=param.mjpg");
Install IP Camera Adapter and configure it to capture the video stream. Then install ManyCam and you'll see "MPEG Camera" in the camera section (you'll see the same instructions if you follow the link on how to set up IP Webcam for Skype).
Now you can access your MJPG stream just like a webcam through OpenCV. I tried this with OpenCV 2.2 + Qt and it works well.
I think this helps.
I did a dirty patch to make OpenCV work with the Android IP Webcam app:
In the file OpenCV-2.3.1/modules/highgui/src/cap_ffmpeg_impl.hpp,
in the function bool CvCapture_FFMPEG::open( const char* _filename ),
replace:
int err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
by
AVInputFormat* iformat = av_find_input_format("mjpeg");
int err = av_open_input_file(&ic, _filename, iformat, 0, NULL);
ic->iformat = iformat;
and comment out:
err = av_seek_frame(ic, video_stream, 10, 0);
if (err < 0)
{
filename=(char*)malloc(strlen(_filename)+1);
strcpy(filename, _filename);
// reopen videofile to 'seek' back to first frame
reopen();
}
else
{
// seek seems to work, so we don't need the filename,
// but we still need to seek back to filestart
filename=NULL;
int64_t ts = video_st->first_dts;
int flags = AVSEEK_FLAG_FRAME | AVSEEK_FLAG_BACKWARD;
av_seek_frame(ic, video_stream, ts, flags);
}
That should work. Hope it helps.
This is the solution (I'm using IP Webcam on Android):
CvCapture* capture = 0;
capture = cvCaptureFromFile("http://IP:Port/videofeed?dummy=param.mjpg");
I am not able to comment, so I'm posting a new post. The original answer has an error: it uses a / before dummy. Thanks for the solution.
A working example for me:
// OpenCVTest.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "opencv2/highgui/highgui.hpp"

/**
 * @function main
 */
int main( int argc, const char** argv )
{
    // open the MJPEG stream once, outside the loop
    CvCapture* capture = cvCaptureFromFile("http://192.168.1.129:8080/webcam.mjpeg");
    IplImage* frame = 0;

    // create a window to display the stream
    cvNamedWindow("Sample Program", CV_WINDOW_AUTOSIZE);

    while (true)
    {
        // read the next frame from the video stream
        frame = cvQueryFrame( capture );

        // display it
        cvShowImage("Sample Program", frame);

        int c = cvWaitKey(10);
        if( (char)c == 27 ) { break; }
    }

    // clean up and release resources (frames returned by cvQueryFrame are owned by the capture)
    cvReleaseCapture(&capture);
    return 0;
}
Broadcast MJPEG from a webcam with VLC, as described at http://tumblr.martinml.com/post/2108887785/how-to-broadcast-a-mjpeg-stream-from-your-webcam-with
I've implemented RTSP on the Android MediaPlayer using VLC as an RTSP server with this command:
# vlc -vvv /home/marco/Videos/pippo.mp4 --sout
#rtp{dst=192.168.100.246,port=6024-6025,sdp=rtsp://192.168.100.243:8080/test.sdp}
and on the Android project:
Uri videoUri = Uri.parse("rtsp://192.168.100.242:8080/test.sdp");
videoView.setVideoURI(videoUri);
videoView.start();
This works fine, but I'd also like to play a live RTP stream, so I copied the SDP file onto the sdcard (/mnt/sdcard/test.sdp) and set up VLC with:
# vlc -vvv /home/marco/Videos/pippo.mp4 --sout
#rtp{dst=192.168.100.249,port=6024-6025}
I tried to play the RTP stream by setting the path of the SDP file locally:
Uri videoUri = Uri.parse("/mnt/sdcard/test.sdp");
videoView.setVideoURI(videoUri);
videoView.start();
But I got an error:
D/MediaPlayer( 9616): Couldn't open file on client side, trying server side
W/MediaPlayer( 9616): info/warning (1, 26)
I/MediaPlayer( 9616): Info (1,26)
E/PlayerDriver( 76): Command PLAYER_INIT completed with an error or info PVMFFailure
E/MediaPlayer( 9616): error (1, -1)
E/MediaPlayer( 9616): Error (1,-1)
D/VideoView( 9616): Error: 1,-1
Does anyone know where the problem is? Am I wrong, or is it simply not possible to play RTP on MediaPlayer?
Cheers,
Giorgio
I have a partial solution for you.
I'm currently working on an R&D project involving RTP streaming of media from a server to Android clients.
Through this work I contribute to my own library, called smpte2022lib, which you can find here:
http://sourceforge.net/projects/smpte-2022lib/.
With the help of this library (the Java implementation is currently the best one) you may be able to parse RTP multicast streams coming from professional streaming equipment, VLC RTP sessions, and so on.
I have already tested it successfully with captured professional RTP streams using SMPTE 2022 2D-FEC, and with simple streams generated by VLC.
Unfortunately I cannot put a code snippet here, as the project using it is currently under copyright, but I assure you that you can use it simply by parsing UDP streams with the RtpPacket constructor.
If the packets are valid RTP packets (the bytes), they will be decoded as such.
At the moment, I wrap the call to RtpPacket's constructor in a thread that stores the decoded payload as a media file. Then I call the VideoView with this file as a parameter.
Crossing fingers ;-)
Kind Regards,
David Fischer
It's possible on Android (not with MediaPlayer, but with other components further down the stack), but do you really want to pursue RTSP/RTP when the rest of the media ecosystem does not?
IMO, there are far better media/stream approaches under the umbrella of HTML5/WebRTC. For example, look at what 'Ondello' is doing with streams.
That said, here is some old project code for Android/RTSP/SDP/RTP using 'netty' and 'efflux'. It will negotiate some portions of 'Sessions' on SDP file providers. I can't remember whether it would actually play the audio portion of YouTube/RTSP streams, but that was my goal at the time. (I think it worked using the AMR-NB codec, but there were tons of issues and I dropped RTSP on Android like a bad habit!)
on Git....
@Override
public void mediaDescriptor(Client client, String descriptor)
{
// searches for control: session and media arguments.
final String target = "control:";
Log.d(TAG, "Session Descriptor\n" + descriptor);
int position = -1;
while((position = descriptor.indexOf(target)) > -1)
{
descriptor = descriptor.substring(position + target.length());
resourceList.add(descriptor.substring(0, descriptor.indexOf('\r')));
}
}
private int nextPort()
{
return (port += 2) - 2;
}
private void getRTPStream(TransportHeader transport){
String[] words;
// only want 2000 part of 'client_port=2000-2001' in the Transport header in the response
words = transport.getParameter("client_port").substring(transport.getParameter("client_port").indexOf("=") +1).split("-");
port_lc = Integer.parseInt(words[0]);
words = transport.getParameter("server_port").substring(transport.getParameter("server_port").indexOf("=") +1).split("-");
port_rm = Integer.parseInt(words[0]);
source = transport.getParameter("source").substring(transport.getParameter("source").indexOf("=") +1);
ssrc = transport.getParameter("ssrc").substring(transport.getParameter("ssrc").indexOf("=") +1);
// assume dynamic Packet type = RTP , 99
getRTPStream(session, source, port_lc, port_rm, 99);
//getRTPStream("sessiona", source, port_lc, port_rm, 99);
Log.d(TAG, "raw parms " +port_lc +" " +port_rm +" " +source );
// String[] words = session.split(";");
Log.d(TAG, "session: " +session);
Log.d(TAG, "transport: " +transport.getParameter("client_port")
+" " +transport.getParameter("server_port") +" " +transport.getParameter("source")
+" " +transport.getParameter("ssrc"));
}
private void getRTPStream(String session, String source, int portl, int portr, int payloadFormat ){
// what do u do with ssrc?
InetAddress addr;
try {
addr = InetAddress.getLocalHost();
// Get IP Address
// LAN_IP_ADDR = addr.getHostAddress();
LAN_IP_ADDR = "192.168.1.125";
Log.d(TAG, "using client IP addr " +LAN_IP_ADDR);
} catch (UnknownHostException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
final CountDownLatch latch = new CountDownLatch(2);
RtpParticipant local1 = RtpParticipant.createReceiver(new RtpParticipantInfo(1), LAN_IP_ADDR, portl, portl+=1);
// RtpParticipant local1 = RtpParticipant.createReceiver(new RtpParticipantInfo(1), "127.0.0.1", portl, portl+=1);
RtpParticipant remote1 = RtpParticipant.createReceiver(new RtpParticipantInfo(2), source, portr, portr+=1);
remote1.getInfo().setSsrc( Long.parseLong(ssrc, 16));
session1 = new SingleParticipantSession(session, payloadFormat, local1, remote1);
Log.d(TAG, "remote ssrc " +session1.getRemoteParticipant().getInfo().getSsrc());
session1.init();
session1.addDataListener(new RtpSessionDataListener() {
@Override
public void dataPacketReceived(RtpSession session, RtpParticipantInfo participant, DataPacket packet) {
// System.err.println("Session 1 received packet: " + packet + "(session: " + session.getId() + ")");
//TODO close the file, flush the buffer
// if (_sink != null) _sink.getPackByte(packet);
getPackByte(packet);
// System.err.println("Ssn 1 packet seqn: typ: datasz " +packet.getSequenceNumber() + " " +packet.getPayloadType() +" " +packet.getDataSize());
// System.err.println("Ssn 1 packet sessn: typ: datasz " + session.getId() + " " +packet.getPayloadType() +" " +packet.getDataSize());
// latch.countDown();
}
});
// DataPacket packet = new DataPacket();
// packet.setData(new byte[]{0x45, 0x45, 0x45, 0x45});
// packet.setSequenceNumber(1);
// session1.sendDataPacket(packet);
// try {
// latch.await(2000, TimeUnit.MILLISECONDS);
// } catch (Exception e) {
// fail("Exception caught: " + e.getClass().getSimpleName() + " - " + e.getMessage());
// }
}
//TODO below should collaborate with the audioTrack object and should write to the AT buffr
// audioTrack write was blocking forever
public void getPackByte(DataPacket packet) {
//TODO this is getting called but not sure why only one time
// or whether it is stalling in mid-exec??
//TODO on firstPacket write bytes and start audioTrack
// AMR-nb frames at 12.2 KB or format type 7 frames are handled .
// after the normal header, the getDataArray contains extra 10 bytes of dynamic header that are bypassed by 'limit'
// real value for the frame separator comes in the input stream at position 1 in the data array
// returned by
// int newFrameSep = 0x3c;
// bytes avail = packet.getDataSize() - limit;
// byte[] lbuf = new byte[packet.getDataSize()];
// if ( packet.getDataSize() > 0)
// lbuf = packet.getDataAsArray();
//first frame includes the 1 byte frame header whose value should be used
// to write subsequent frame separators
Log.d(TAG, "getPackByt start and play");
if(!started){
Log.d(TAG, " PLAY audioTrak");
track.play();
started = true;
}
// track.write(packet.getDataAsArray(), limit, (packet.getDataSize() - limit));
track.write(packet.getDataAsArray(), 0, packet.getDataSize() );
Log.d(TAG, "getPackByt aft write");
// if(!started && nBytesRead > minBufferSize){
// Log.d(TAG, " PLAY audioTrak");
// track.play();
// started = true;}
nBytesRead += packet.getDataSize();
if (nBytesRead % 500 < 375) Log.d(TAG, " getPackByte plus 5K received");
}
}
Actually it's possible to play RTSP/RTP streams on Android by using a modified version of ExoPlayer, which officially doesn't support RTSP/RTP (issue 55); however, there's an active pull request #3854 to add this support.
In the meantime, you can clone the original author's ExoPlayer fork, which does support RTSP (branch dev-v2-rtsp):
git clone -b dev-v2-rtsp https://github.com/tresvecesseis/ExoPlayer.git.
I've tested it and it works perfectly. The authors are actively working to fix the issues reported by many users, and I hope that RTSP support becomes part of the official ExoPlayer at some point.
Unfortunately it is not possible to play an RTP stream with the Android MediaPlayer.
Solutions to this problem include decoding the RTP stream with ffmpeg. Tutorials on how to compile ffmpeg for Android can be found on the web.
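As a rough illustration only (function names from a recent libavformat; older builds differ, so treat the details as an assumption), opening such a stream from its SDP description with ffmpeg's libraries looks something like this:
#include <libavformat/avformat.h>

/* Sketch: open an RTP session described by an SDP file and read packets.
 * The protocol whitelist is required by recent FFmpeg when the SDP is a local file. */
static int open_rtp_from_sdp (const char *sdp_path)
{
    AVFormatContext *ctx = NULL;
    AVDictionary *opts = NULL;
    int err;

    avformat_network_init ();
    av_dict_set (&opts, "protocol_whitelist", "file,udp,rtp", 0);
    err = avformat_open_input (&ctx, sdp_path, NULL, &opts);
    av_dict_free (&opts);
    if (err < 0)
        return err;

    avformat_find_stream_info (ctx, NULL);
    /* ...loop with av_read_frame(), decode, render the frames... */
    avformat_close_input (&ctx);
    return 0;
}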