How to send requests to the server from a GStreamer RTSP client? - android

I have been searching for an answer about the GStreamer RTSP client for a long time, but no luck.
I can now display a live or recorded stream from the server on an Android device with GStreamer (gstreamer-1.0-android-armv7-1.6.0); next I want to send PLAY/PAUSE requests to change the server state while playing a recorded stream.
My question: is there a simple way to obtain and access the pipeline when working with gst-rtsp-stream? Could someone please provide an example?
Nov 10 Update:
GstBus *bus;
CustomData *data = (CustomData *)userdata;
GSource *timeout_source;
GSource *bus_source;
GError *error = NULL;
guint flags;
/* Create our own GLib Main Context and make it the default one */
data->context = g_main_context_new ();
g_main_context_push_thread_default(data->context);
/* Build pipeline */
data->pipeline = gst_parse_launch("playbin", &error);
if (error) {
    gchar *message = g_strdup_printf("Unable to build pipeline: %s", error->message);
    g_clear_error (&error);
    set_ui_message(message, data);
    g_free (message);
    return NULL;
}
/* Set latency to 0 ns (gst_parse_launch returns a GstElement, so cast to GstPipeline) */
gst_pipeline_set_latency (GST_PIPELINE (data->pipeline), 0);
/* Disable subtitles (GST_PLAY_FLAG_TEXT is (1 << 2); it is defined app-side, as in the tutorials) */
g_object_get (data->pipeline, "flags", &flags, NULL);
flags &= ~GST_PLAY_FLAG_TEXT;
g_object_set (data->pipeline, "flags", flags, NULL);
/* Set the pipeline to READY, so it can already accept a window handle, if we have one */
data->target_state = GST_STATE_READY;
gst_element_set_state(data->pipeline, GST_STATE_READY);
/* Instruct the bus to emit signals for each received message, and connect to the interesting signals */
bus = gst_element_get_bus (data->pipeline);
bus_source = gst_bus_create_watch (bus);
g_source_set_callback (bus_source, (GSourceFunc) gst_bus_async_signal_func, NULL, NULL);
g_source_attach (bus_source, data->context);
g_source_unref (bus_source);
g_signal_connect (G_OBJECT (bus), "message::error", (GCallback)error_cb, data);
g_signal_connect (G_OBJECT (bus), "message::state-changed", (GCallback)state_changed_cb, data);
gst_object_unref (bus);
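Based on the docs, playbin creates an rtspsrc internally for rtsp:// URIs, and rtspsrc translates pipeline state changes into RTSP requests, so pausing the pipeline should make the client send PAUSE to the server. A rough sketch against the code above (the URL is a placeholder):
/* Point playbin at the recorded stream (placeholder URL) */
g_object_set (data->pipeline, "uri", "rtsp://192.168.0.10:8554/recorded", NULL);
/* rtspsrc inside playbin issues the RTSP PLAY request */
gst_element_set_state (data->pipeline, GST_STATE_PLAYING);
/* ...and this should make it issue RTSP PAUSE */
gst_element_set_state (data->pipeline, GST_STATE_PAUSED);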

Related

OpenSL: What's causing static when loading wav files through AAsset_read?

I'm working on a native Android project and trying to use OpenSL to play some audio effects. Working from the native audio sample project that VisualGDB provides, I've written the code posted below.
Near the end, you can see I have commented out a line that enqueues the contents of a variable called hello to the destination buffer. hello comes from the sample project, and contains about 700 lines of character bytes like this:
"\x02\x00\x01\x00\xff\xff\x09\x00\x0c\x00\x10\x00\x07\x00\x07\x00"
which together make up an audio file of someone saying "hello". When reading that byte data into the stream, my code works fine and I hear "hello" when I run the application. When I read from the wav file to play the asset I want, however, I only hear static. The size of the data buffer is the same as the size of the file, so it appears to be read in properly. The static plays for the duration of the wav file (or very close to it).
I really know nothing about data formats or audio programming. I've tried tweaking the format_pcm variables with different enum values, but had no success. Using a tool called GSpot that I found online, I know the following about the audio file I'm trying to play:
File Size: 557 KB (570,503 bytes) (this is the same size as the data buffer AAsset_read returns)
Codec: PCM Audio
Sample rate: 48000Hz
Bit rate: 1152 kb/s
Channels: 1
Any help or direction would be greatly appreciated.
SLDataLocator_AndroidSimpleBufferQueue loc_bufq = { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 1 };
SLDataFormat_PCM format_pcm;
format_pcm.formatType = SL_DATAFORMAT_PCM;
format_pcm.numChannels = 1;
format_pcm.samplesPerSec = SL_SAMPLINGRATE_48; // SL_SAMPLINGRATE_8;
format_pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_8; // SL_PCMSAMPLEFORMAT_FIXED_16;
format_pcm.containerSize = 16;
format_pcm.channelMask = SL_SPEAKER_FRONT_CENTER;
format_pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;
SLDataSource audioSrc = { &loc_bufq, &format_pcm };
// configure audio sink
SLDataLocator_OutputMix loc_outmix = { SL_DATALOCATOR_OUTPUTMIX, manager->GetOutputMixObject() };
SLDataSink audioSnk = { &loc_outmix, NULL };
//create audio player
const SLInterfaceID ids[3] = { SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME };
const SLboolean req[3] = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };
SLEngineItf engineEngine = manager->GetEngine();
result = (*engineEngine)->CreateAudioPlayer(engineEngine, &bqPlayerObject, &audioSrc, &audioSnk,
                                            3, ids, req);
// realize the player
result = (*bqPlayerObject)->Realize(bqPlayerObject, SL_BOOLEAN_FALSE);
// get the play interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_PLAY, &bqPlayerPlay);
// get the buffer queue interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_BUFFERQUEUE,
                                         &bqPlayerBufferQueue);
// register callback on the buffer queue
result = (*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback, NULL);
// get the effect send interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_EFFECTSEND,
                                         &bqPlayerEffectSend);
// get the volume interface
result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_VOLUME, &bqPlayerVolume);
// set the player's state to playing
result = (*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING);
uint8* pOutBytes = nullptr;
uint32 outSize = 0;
result = MyFileManager::GetInstance()->OpenFile(m_strAbsolutePath, (void**)&pOutBytes, &outSize, true);
const char* filename = m_strAbsolutePath->GetUTF8String();
result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, pOutBytes, outSize);
// result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, hello, sizeof(hello));
if (SL_RESULT_SUCCESS != result) {
    return JNI_FALSE;
}
Several things were to blame. The format of the wave files I was testing with was not what the specification describes: there seemed to be a lot of empty data after the first chunk of header data. Also, the buffer passed to the queue needs to point at just the WAV sample data, not the header; I'd wrongly assumed the queue parsed the header out.
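A sketch of the second fix, reusing the variables from the code above and assuming the canonical 44-byte WAV header (files with extra chunks such as LIST or fact need a real RIFF chunk parser instead):
// Skip the RIFF/WAVE header and enqueue only the raw PCM payload.
// 44 bytes is only the canonical header size; verify it for each file.
const uint32 kWavHeaderSize = 44;
if (outSize > kWavHeaderSize) {
    uint8* pcmData = pOutBytes + kWavHeaderSize;
    uint32 pcmSize = outSize - kWavHeaderSize;
    result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, pcmData, pcmSize);
}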

Requesting interface SL_IID_ANDROIDSIMPLEBUFFERQUEUE on OpenSL ES recorder object returns SL_RESULT_FEATURE_UNSUPPORTED

I have written a basic recorder app using the Android NDK and OpenSL ES. It compiles and links fine, but when I try to run it on a Galaxy Nexus device I get the following error:
W/libOpenSLES(10708): Leaving Object::GetInterface (SL_RESULT_FEATURE_UNSUPPORTED)
This happens on the line:
res = (*recorderObj)->GetInterface(recorderObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &recorderBufferQueueItf);
Does this mean that recording using OpenSL ES on a Galaxy Nexus device isn't supported, or did I merely make a mistake? Below is the relevant code:
static SLObjectItf recorderObj;
static SLEngineItf EngineItf;
static SLRecordItf recordItf;
static SLAndroidSimpleBufferQueueItf recorderBufferQueueItf;
static SLDataSink recDest;
static SLDataLocator_AndroidSimpleBufferQueue recBuffQueue;
static SLDataFormat_PCM pcm;
/* Setup the data source structure */
locator_mic.locatorType = SL_DATALOCATOR_IODEVICE;
locator_mic.deviceType = SL_IODEVICE_AUDIOINPUT;
locator_mic.deviceID = SL_DEFAULTDEVICEID_AUDIOINPUT;
locator_mic.device = NULL;
audioSource.pLocator = (void *) &locator_mic;
audioSource.pFormat = NULL;
/* Setup the data sink structure */
recBuffQueue.locatorType = SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE;
recBuffQueue.numBuffers = NB_BUFFERS_IN_QUEUE;
/* set up the format of the data in the buffer queue */
pcm.formatType = SL_DATAFORMAT_PCM;
pcm.numChannels = 1;
pcm.samplesPerSec = SL_SAMPLINGRATE_44_1;
pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
pcm.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
pcm.channelMask = SL_SPEAKER_FRONT_CENTER;
pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;
recDest.pLocator = (void *) &recBuffQueue;
recDest.pFormat = (void * ) &pcm;
/* Create audio recorder */
res = (*EngineItf)->CreateAudioRecorder(EngineItf, &recorderObj, &audioSource, &recDest, 0, iidArray, required);
CheckErr(res);
/* Realizing the recorder in synchronous mode. */
res = (*recorderObj)->Realize(recorderObj, SL_BOOLEAN_FALSE);
CheckErr(res);
/* Get the RECORD interface - it is an implicit interface */
LOGI("GetInterface: Recorder");
res = (*recorderObj)->GetInterface(recorderObj, SL_IID_RECORD, &recordItf);
CheckErr(res);
/* Get the buffer queue interface which was explicitly requested */
LOGI("GetInterface: Buffer Queue");
res = (*recorderObj)->GetInterface(recorderObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &recorderBufferQueueItf);
CheckErr(res);
Any help with this issue would be most welcome :)
When you create the audio recorder, you specify 0 as the third-to-last argument, which is the number of non-implicit interfaces to request. The buffer queue is not an implicit interface for a recorder, so it has to be requested explicitly.
Try changing
res = (*EngineItf)->CreateAudioRecorder(EngineItf, &recorderObj, &audioSource, &recDest, 0, iidArray, required);
to
res = (*EngineItf)->CreateAudioRecorder(EngineItf, &recorderObj, &audioSource, &recDest, 1, iidArray, required);
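For completeness, the matching interface arrays would then look something like this (a sketch; iidArray and required are the names used in the question):
/* Request the Android simple buffer queue as an explicit interface */
const SLInterfaceID iidArray[1] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
const SLboolean required[1] = { SL_BOOLEAN_TRUE };
res = (*EngineItf)->CreateAudioRecorder(EngineItf, &recorderObj, &audioSource, &recDest,
                                        1, iidArray, required);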

Creating a gstreamer pipeline with the intent of modifying parts

I'm working on an audio streamer, and I want to be able to modify both the file I'm streaming and the target I'm streaming to. To do this I would modify the location of my filesrc, or the host/port of my udpsink.
I am having trouble understanding everything I need to know to get this pipeline linked together and playing. Previously I hard-coded everything and used GStreamer's pipeline parser (gst_parse_launch) with this pipeline:
filesrc location=/storage/sdcard0/Music/RunToTheHills.ogg ! oggdemux ! vorbisdec ! audioresample ! audioconvert ! audio/x-raw-int,channels=2,depth=16,width=16,rate=44100 ! rtpL16pay ! udpsink host=192.168.100.126 port=9001
Now I want to change the filesrc location, and udp host/port as mentioned above.
My application is an Android app using the NDK; however, this should not affect the code needed to set up a pipeline.
Here's what I've got so far, which results in a segfault.
My data structure:
/**
 * Structure to hold all the various variables we need.
 * This is handed to callbacks.
 */
typedef struct _CustomData {
    jobject app;                  /* The Java app */
    GstElement *pipeline;         /* GStreamer pipeline */
    GstElement *filesrc;          /* Input file */
    GstPad *fileblock;            /* Used to block filesrc */
    GstElement *ogg;              /* Ogg demultiplexer */
    GstElement *vorbis;           /* Vorbis decoder */
    GstElement *resample;
    GstElement *convert;
    GstCaps *caps;
    GstElement *rtp;              /* RTP packer */
    GstElement *udp;              /* UDP sender */
    GMainContext *context;        /* GLib context */
    GMainLoop *main_loop;         /* GLib main loop */
    gboolean initialised;         /* True after initialisation */
    GstState state;               /* Pipeline state */
    GstState target_state;        /* What state we want to put the pipeline into */
    gint64 duration;              /* Clip length */
    gint64 desired_position;      /* Where we want to track to within the clip */
    GstClockTime last_seek_time;  /* Used to throttle seeking */
    gboolean is_live;             /* Live streams don't need buffering */
} CustomData;
And here's my creation of the pipeline:
data->pipeline = gst_pipeline_new("pipeline");
data->filesrc = gst_element_factory_make("filesrc", NULL);
if (!data->filesrc) {
    GST_ERROR("Failed to create filesrc.");
    return NULL;
}
g_object_set(G_OBJECT(data->filesrc), "location", "/storage/sdcard0/Music/RunToTheHills.ogg", NULL);
data->fileblock = gst_element_get_static_pad(data->filesrc, "src");
data->ogg = gst_element_factory_make("oggdemux", NULL);
if (!data->ogg) {
    GST_ERROR("Failed to create oggdemux.");
    return NULL;
}
data->vorbis = gst_element_factory_make("vorbisdec", NULL);
if (!data->vorbis) {
    GST_ERROR("Failed to create vorbisdec.");
    return NULL;
}
data->resample = gst_element_factory_make("audioresample", NULL);
if (!data->resample) {
    GST_ERROR("Failed to create audioresample.");
    return NULL;
}
data->convert = gst_element_factory_make("audioconvert", NULL);
if (!data->convert) {
    GST_ERROR("Failed to create audioconvert.");
    return NULL;
}
data->caps = gst_caps_new_simple("audio/x-raw-int",
                                 "channels", G_TYPE_INT, 2,
                                 "depth", G_TYPE_INT, 16,
                                 "width", G_TYPE_INT, 16,
                                 "rate", G_TYPE_INT, 44100);
if (!data->caps) {
    GST_ERROR("Failed to create caps");
    return NULL;
}
data->rtp = gst_element_factory_make("rtpL16pay", NULL);
if (!data->rtp) {
    GST_ERROR("Failed to create rtpL16pay.");
    return NULL;
}
data->udp = gst_element_factory_make("udpsink", NULL);
if (!data->udp) {
    GST_ERROR("Failed to create udpsink.");
    return NULL;
}
g_object_set(G_OBJECT(data->udp), "host", "192.168.100.126", NULL);
g_object_set(G_OBJECT(data->udp), "port", 9001, NULL);
if (!data->ogg || !data->vorbis || !data->resample || !data->convert || !data->caps || !data->rtp || !data->udp) {
    GST_ERROR("Unable to create all elements!");
    return NULL;
}
gst_bin_add_many(GST_BIN(data->pipeline), data->filesrc, data->ogg, data->vorbis,
                 data->resample, data->convert, data->caps, data->rtp, data->udp);
/* Link all the elements together */
gst_element_link(data->filesrc, data->ogg);
gst_element_link(data->ogg, data->vorbis);
gst_element_link(data->vorbis, data->resample);
gst_element_link(data->resample, data->convert);
gst_element_link_filtered(data->convert, data->rtp, data->caps);
gst_element_link(data->rtp, data->udp);
Can someone give me some hints as to where I went wrong?
For interest, here's my previously working pipeline:
data->pipeline = gst_parse_launch("filesrc location=/storage/sdcard0/Music/RunToTheHills.ogg ! oggdemux ! vorbisdec ! audioresample ! audioconvert ! audio/x-raw-int,channels=2,depth=16,width=16,rate=44100 ! rtpL16pay ! udpsink host=192.168.100.126 port=9001", &error);
if (error) {
    gchar *message = g_strdup_printf("Unable to build pipeline: %s", error->message);
    g_clear_error (&error);
    set_ui_message(message, data);
    g_free (message);
    return NULL;
}
You cannot simply link the oggdemux to the vorbisdec, because the demuxer has "sometimes" pads, which only appear once the stream has been analysed.
You need to add a handler function for the 'pad-added' signal of the demuxer and then perform the link there.
/* Connect to the pad-added signal */
g_signal_connect (data->ogg, "pad-added", G_CALLBACK (on_pad_added), data);
And the handler:
void on_pad_added (GstElement *src, GstPad *new_pad, CustomData *data)
{
    GstPad *sink_pad = gst_element_get_static_pad (data->vorbis, "sink");
    GstPadLinkReturn ret;
    GstCaps *new_pad_caps = NULL;
    GstStructure *new_pad_struct = NULL;
    const gchar *new_pad_type = NULL;

    g_print ("Received new pad '%s' from '%s':\n", GST_PAD_NAME (new_pad), GST_ELEMENT_NAME (src));

    /* If our decoder is already linked, we have nothing to do here */
    if (gst_pad_is_linked (sink_pad)) {
        g_print (" We are already linked. Ignoring.\n");
        goto exit;
    }

    /* Check the new pad's type */
    new_pad_caps = gst_pad_get_caps (new_pad);
    new_pad_struct = gst_caps_get_structure (new_pad_caps, 0);
    new_pad_type = gst_structure_get_name (new_pad_struct);
    if (!g_str_has_prefix (new_pad_type, "audio/x-raw")) {
        g_print (" It has type '%s' which is not raw audio. Ignoring.\n", new_pad_type);
        goto exit;
    }

    /* Attempt the link */
    ret = gst_pad_link (new_pad, sink_pad);
    if (GST_PAD_LINK_FAILED (ret)) {
        g_print (" Type is '%s' but link failed.\n", new_pad_type);
    } else {
        g_print (" Link succeeded (type '%s').\n", new_pad_type);
    }

exit:
    /* Unreference the new pad's caps, if we got them */
    if (new_pad_caps != NULL)
        gst_caps_unref (new_pad_caps);
    /* Unreference the sink pad */
    gst_object_unref (sink_pad);
}
Also, since you're getting a segmentation fault, I believe there is a memory issue. Are you sure you're using the CustomData structure correctly? I notice you're using data->element rather than data.element, so make sure data really is a pointer.
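One more likely cause of the crash: gst_bin_add_many() takes a NULL-terminated list of elements, but the call above passes data->caps (a GstCaps, not a GstElement) and has no terminating NULL, so the varargs walk runs off the end of the list. Something like this should be safer:
/* Add only elements, and terminate the varargs list with NULL;
 * the caps are applied by gst_element_link_filtered() instead */
gst_bin_add_many(GST_BIN(data->pipeline), data->filesrc, data->ogg, data->vorbis,
                 data->resample, data->convert, data->rtp, data->udp, NULL);
As for changing the file and target afterwards: filesrc only accepts a new location while it is in the NULL or READY state, so the usual pattern is something like this sketch (the new path and host are placeholders):
/* Drop to READY, retarget, then play again */
gst_element_set_state(data->pipeline, GST_STATE_READY);
g_object_set(G_OBJECT(data->filesrc), "location", "/storage/sdcard0/Music/AnotherTrack.ogg", NULL);
g_object_set(G_OBJECT(data->udp), "host", "192.168.100.200", NULL);
g_object_set(G_OBJECT(data->udp), "port", 9002, NULL);
gst_element_set_state(data->pipeline, GST_STATE_PLAYING);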

Android VideoView GStreamer Streaming (MediaController doesn't work)

I have a small project to stream video to an Android device. The streaming works, but I have a problem controlling the video: the MediaController doesn't work (when I push pause there is no effect), and VideoView.pause() doesn't work either. The streaming server is based on GStreamer (the server was written by my friend), and I'm using Android 2.2 CyanogenMod.
This is the server code:
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>
int
main (int argc, char *argv[])
{
    GMainLoop *loop;
    GstRTSPServer *server;
    GstRTSPMediaMapping *mapping;
    GstRTSPMediaFactory *factory;
    gchar *str;

    gst_init (&argc, &argv);

    if (argc < 2) {
        g_message ("usage: %s <filename>", argv[0]);
        return -1;
    }

    loop = g_main_loop_new (NULL, FALSE);

    /* create a server instance */
    server = gst_rtsp_server_new ();

    /* get the mapping for this server; every server has a default mapper object
     * that can be used to map uri mount points to media factories */
    mapping = gst_rtsp_server_get_media_mapping (server);

    str = g_strdup_printf ("( "
        "filesrc location=\"%s\" ! decodebin2 name=d "
        "d. ! queue ! videoscale ! video/x-raw-yuv, width=500, height=300 "
        "! ffenc_mpeg4 ! rtpmp4vpay name=pay0 "
        "d. ! queue ! audioconvert ! faac ! rtpmp4apay name=pay1"
        " )", argv[1]);

    /* make a media factory for a test stream. The default media factory can use
     * gst-launch syntax to create pipelines.
     * any launch line works as long as it contains elements named pay%d;
     * each element with a pay%d name becomes a stream */
    factory = gst_rtsp_media_factory_new ();
    gst_rtsp_media_factory_set_launch (factory, str);
    g_free (str);

    /* attach the test factory to the /test url */
    gst_rtsp_media_mapping_add_factory (mapping, "/test", factory);

    /* don't need the ref to the mapper anymore */
    g_object_unref (mapping);

    /* attach the server to the default maincontext */
    gst_rtsp_server_attach (server, NULL);

    /* start serving */
    g_main_loop_run (loop);

    return 0;
}
From what I have gathered, the VideoView in Android only accepts H.264 feeds, so you need to be encoding in H.264 rather than MPEG-4 part 2 (the ffenc_mpeg4 element used above).
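If that is the problem, a variant of the factory launch string that streams H.264 instead might look like this (x264enc and rtph264pay were the stock elements for this in the 0.10 era; an untested sketch):
str = g_strdup_printf ("( "
    "filesrc location=\"%s\" ! decodebin2 name=d "
    "d. ! queue ! videoscale ! video/x-raw-yuv, width=500, height=300 "
    "! x264enc ! rtph264pay name=pay0 "
    "d. ! queue ! audioconvert ! faac ! rtpmp4apay name=pay1"
    " )", argv[1]);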

How to get MJPG stream video from android IPWebcam using opencv

I am using the IP Webcam program on Android and receiving the stream on my PC over WiFi. What I want is to use OpenCV in Visual Studio, in C++, to get that video stream; there is an option to get an MJPG stream at the following URL: http://MyIP:port/videofeed
How do I get it using OpenCV?
Old question, but I hope this can help someone (same as my answer here).
OpenCV expects a filename extension for its VideoCapture argument, even though one isn't always necessary (as in your case). You can "trick" it by passing in a dummy parameter which ends in the mjpg extension.
So perhaps try:
VideoCapture vc;
vc.open("http://MyIP:port/videofeed/?dummy=param.mjpg");
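For completeness, a minimal read loop with the C++ API, assuming vc is the VideoCapture opened above (the window name and wait time are arbitrary):
#include "opencv2/highgui/highgui.hpp"
cv::Mat frame;
while (vc.read(frame)) {
    cv::imshow("IP Webcam", frame);
    if (cv::waitKey(10) == 27) break; // Esc quits
}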
Install IP Camera Adapter and configure it to capture the video stream. Then install ManyCam and you'll see "MPEG Camera" in the camera section. (You'll see the same instructions if you follow the link on how to set up IP Webcam for Skype.)
Now you can access your MJPG stream just like a webcam through OpenCV. I tried this with OpenCV 2.2 + Qt and it works well.
I think this helps.
I did a dirty patch to make OpenCV work with the Android IP Webcam:
In the file OpenCV-2.3.1/modules/highgui/src/cap_ffmpeg_impl.hpp, in the function bool CvCapture_FFMPEG::open( const char* _filename ), replace:
int err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
with:
AVInputFormat* iformat = av_find_input_format("mjpeg");
int err = av_open_input_file(&ic, _filename, iformat, 0, NULL);
ic->iformat = iformat;
and comment out:
err = av_seek_frame(ic, video_stream, 10, 0);
if (err < 0)
{
    filename=(char*)malloc(strlen(_filename)+1);
    strcpy(filename, _filename);
    // reopen videofile to 'seek' back to first frame
    reopen();
}
else
{
    // seek seems to work, so we don't need the filename,
    // but we still need to seek back to filestart
    filename=NULL;
    int64_t ts = video_st->first_dts;
    int flags = AVSEEK_FLAG_FRAME | AVSEEK_FLAG_BACKWARD;
    av_seek_frame(ic, video_stream, ts, flags);
}
That should work. Hope it helps.
This is the solution (I'm using IP Webcam on Android):
CvCapture* capture = 0;
capture = cvCaptureFromFile("http://IP:Port/videofeed?dummy=param.mjpg");
I am not able to comment, so I'm posting a new answer. In the original answer there is an error: it used a / before dummy. Thanks for the solution.
A working example for me:
// OpenCVTest.cpp : Defines the entry point for the console application.
#include "stdafx.h"
#include "opencv2/highgui/highgui.hpp"

/**
 * #function main
 */
int main( int argc, const char** argv )
{
    // Open the MJPEG network stream once, before the display loop
    CvCapture* capture = cvCaptureFromFile("http://192.168.1.129:8080/webcam.mjpeg");
    IplImage* frame = 0;

    // create a window to display the stream
    cvNamedWindow("Sample Program", CV_WINDOW_AUTOSIZE);
    while (true)
    {
        // Read the next frame of the video stream and show it
        frame = cvQueryFrame( capture );
        cvShowImage("Sample Program", frame);
        int c = cvWaitKey(10);
        if( (char)c == 27 ) { break; } // Esc quits
    }

    // clean up and release resources; frames returned by cvQueryFrame
    // are owned by the capture and must not be released manually
    cvReleaseCapture(&capture);
    return 0;
}
Broadcast MJPEG from a webcam with VLC, as described at http://tumblr.martinml.com/post/2108887785/how-to-broadcast-a-mjpeg-stream-from-your-webcam-with
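From memory, the command in that post looks something like the following; treat the exact --sout chain as an assumption and check the linked post:
vlc v4l2:///dev/video0 --sout '#transcode{vcodec=MJPG}:standard{access=http,mux=mpjpeg,dst=:8080/}'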
