Updating my previous question...
Currently I am working on a live media streaming application. I have been able to use the MediaRecorder class to record media and store it in a file. Now I am trying to stream this media from Android to Adobe Media Server using RTP. What I have done is, using a ParcelFileDescriptor, package the video content captured from the device camera. I would now like to use the RtpPacket class provided by Sipdroid to convert these packets into RTP packets.
I am having problems integrating this RtpPacket class into my application, as I see there are multiple steps involved:
1) Creating RTP packets. Even though I have the code for the RTP packetizer, I am not exactly sure how or where to use it, because I am not clear on what the default payload type, SSRC and CSRC values should be. Maybe the first time these will carry default values, but from the second packet onward what would set these parameters? Would it be the parcel file descriptor? (See the sketch after this list.)
2) Creating a simple server-like piece of code on the phone which keeps creating RTP packets and sending them to the Adobe Media Server.
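For point 1, here is a minimal sketch of how the fixed RTP header fields are usually filled in. This is generic RTP per RFC 3550, not the Sipdroid RtpPacket API; the dynamic payload type 96, the random SSRC and the 90 kHz H264 clock are assumptions for illustration:

import java.nio.ByteBuffer;
import java.util.Random;

// Minimal RTP packetizer sketch (RFC 3550 fixed 12-byte header, no CSRC list).
// Payload type 96 (dynamic) and the 90 kHz H264 clock are assumptions.
public class SimpleRtpPacketizer {
    private static final int PAYLOAD_TYPE = 96;            // must match what the server/SDP expects
    private final int ssrc = new Random().nextInt();       // chosen once per stream, then kept constant
    private int sequenceNumber = new Random().nextInt(0xFFFF); // random start, +1 per packet
    private long timestamp = 0;                             // 90 kHz units for video

    public byte[] packetize(byte[] payload, int len, boolean lastPacketOfFrame) {
        ByteBuffer buf = ByteBuffer.allocate(12 + len);
        buf.put((byte) 0x80);                                                   // V=2, P=0, X=0, CC=0
        buf.put((byte) ((lastPacketOfFrame ? 0x80 : 0x00) | PAYLOAD_TYPE));     // marker bit + payload type
        buf.putShort((short) sequenceNumber);
        buf.putInt((int) timestamp);
        buf.putInt(ssrc);
        buf.put(payload, 0, len);
        sequenceNumber = (sequenceNumber + 1) & 0xFFFF;
        return buf.array();
    }

    // Call once per captured frame, e.g. advanceTimestamp(90000 / fps).
    public void advanceTimestamp(long ticks) {
        timestamp += ticks;
    }
}

So the SSRC is picked once when the stream starts, the sequence number increments for every packet, and the timestamp advances per frame; none of them come from the parcel file descriptor.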
Any help would be much appreciated.
Thanks,
K.Saravanan.
I have a source that transmits video in H264 format (in this case it is Colasoft Packet Player, which transmits the video stream to an IP:PORT), and I want to be able to listen for and receive the video stream in my Android app.
I have read a lot of things across the internet about Socket, DatagramSocket and AudioManager, but I'm totally confused about what exactly I need and how to implement it.
What I need is to be able to capture the video frame by frame.
I would love to get some help.
You can use the VLC Android library
And here there's an explanation on how to embed it into your app.
You can let ffmpeg do this job.
Have a look at:
https://github.com/alphons/SegmentationStreaming
I have to get the live video stream from a DJI Phantom 3 camera into my C++ application, in order to do Computer Vision processing in OpenCV.
First I tried sending the raw H264 data through a UDP socket, inside this callback:
mReceivedVideoDataCallBack = new CameraReceivedVideoDataCallback() {
    @Override
    public void onResult(byte[] videoBuffer, int size) {
        // Here, I call a method from a class I created that sends the buffer through UDP
        if (gravar_trigger) controleVideo.enviarFrame(videoBuffer, size);
        // Keep feeding the on-screen decoder as well
        if (mCodecManager != null) mCodecManager.sendDataToDecoder(videoBuffer, size);
    }
};
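For context, the enviarFrame() method called above does roughly the following (a simplified sketch; the destination host and port are placeholders):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Simplified sketch of the UDP sender used inside the callback above.
// Host and port are placeholders for the desktop application's address.
public class VideoUdpSender {
    private final DatagramSocket socket;
    private final InetAddress destination;
    private final int port;

    public VideoUdpSender(String host, int port) throws Exception {
        this.socket = new DatagramSocket();
        this.destination = InetAddress.getByName(host);
        this.port = port;
    }

    public void enviarFrame(byte[] videoBuffer, int size) {
        try {
            // Note: raw H264 buffers can exceed the practical UDP datagram size;
            // see the fragmentation discussion in one of the answers below.
            socket.send(new DatagramPacket(videoBuffer, size, destination, port));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}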
That communication above works well. However, I haven't been able to decode that UDP H264 data in my C++ desktop application. I tested with the FFmpeg libraries, but couldn't manage to allocate an AVPacket with my UDP data in order to decode it using avcodec_send_packet and avcodec_receive_frame. I also had problems with the AVCodecContext, since my UDP communication wasn't a stream like RTSP, where the decoder can get information about its source. Therefore, I had to change how I was trying to solve the problem.
Then I found libstreaming, which can be used to stream the Android camera to a Wowza server, creating something like an RTSP stream connection from which the data could easily be obtained in my final C++ application using OpenCV's VideoCapture. However, libstreaming uses its own SurfaceView. In other words, I would have to link the libstreaming SurfaceView with the DJI drone's video surface. I'm really new to Android, so I don't have any clue how to do that.
To sum up, is that the correct approach? Does someone have a better idea? Thanks in advance.
I'm going to wager a couple of things. Well, mostly one thing: you typically need to create fragmented video packets before sending them. The IDR frames of H264 are too large for UDP streaming.
Once you have a solid communication link between the endpoints, you can add a method which converts a single, potentially large, input packet into one or more small output packets.
Packets larger than perhaps 1000 bytes need to be broken into several H264 fragmentation units (NALU type 28, FU-A). Packets that are small and share the same timestamp can be sent aggregated in a STAP-A (type 24). Typically you can find the in-band SPS/PPS in a STAP-A.
Once you have a packetizer for IDRs and large slices, write your depacketizer on the receiver, and then you should get cleanly decoded pictures.
Refer to the H264 RTP payload specification (RFC 6184) for how to make the type 28s.
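As an illustration, here is a minimal sketch of that FU-A fragmentation, assuming you already have a single NAL unit with the Annex B start code stripped and a target payload of about 1000 bytes:

import java.util.ArrayList;
import java.util.List;

// Sketch of RFC 6184 FU-A fragmentation for one NAL unit (start code already stripped).
public class FuAFragmenter {
    public static List<byte[]> fragment(byte[] nal, int maxPayload) {
        List<byte[]> out = new ArrayList<>();
        if (nal.length <= maxPayload) {
            out.add(nal);                                  // small NALUs go out as single NAL unit packets
            return out;
        }
        byte nalHeader = nal[0];
        byte fuIndicator = (byte) ((nalHeader & 0xE0) | 28); // keep F + NRI bits, set type = 28 (FU-A)
        byte nalType = (byte) (nalHeader & 0x1F);

        int offset = 1;                                    // skip the original NAL header byte
        boolean first = true;
        while (offset < nal.length) {
            int chunk = Math.min(maxPayload - 2, nal.length - offset);
            boolean last = (offset + chunk) == nal.length;

            byte fuHeader = nalType;
            if (first) fuHeader |= 0x80;                   // S bit on the first fragment
            if (last)  fuHeader |= 0x40;                   // E bit on the last fragment

            byte[] pkt = new byte[2 + chunk];
            pkt[0] = fuIndicator;
            pkt[1] = fuHeader;
            System.arraycopy(nal, offset, pkt, 2, chunk);
            out.add(pkt);

            offset += chunk;
            first = false;
        }
        return out;                                        // each element becomes one RTP payload
    }
}

The depacketizer on the receiver does the reverse: it collects fragments until it sees the E bit, re-prepends the reconstructed NAL header, and hands the whole NAL unit to the decoder.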
After a long time, I finally developed a system that can stream the DJI drone camera correctly
https://github.com/raullalves/DJI-Drone-Camera-Streaming
I'm trying to stream audio across several devices using the Nearby Connections API, but I'm not really sure whether this is possible or advisable.
What I want to do is broadcast audio files (both songs stored on the phone and from apps such as Google Music, Spotify, ...) to the other connected devices, so they can start playing the songs while they receive the data chunks.
I think with the Nearby Connections API we can only send 4 KB payload chunks when we call Nearby.Connections.sendReliableMessage(), so what I'm doing so far is calling that function as many times as required, sending a 4 KB chunk each time, until the entire file has been delivered. In the onMessageReceived() listener I store all the chunks I receive in a byte array, so once all the chunks have been transferred I can play the song back from that byte array.
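So far the chunking and reassembly look roughly like this (a simplified sketch; the actual Nearby send call and any end-of-transfer signalling are omitted, and the 4 KB chunk size just mirrors the payload limit mentioned above):

import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Sketch of the 4 KB chunking / reassembly approach described above.
// How each chunk is actually sent (e.g. via sendReliableMessage) is left to the caller.
public class AudioChunker {
    private static final int CHUNK_SIZE = 4 * 1024;

    // Sender side: split the song into chunks to be sent one by one.
    public static List<byte[]> split(byte[] song) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < song.length; offset += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, song.length - offset);
            byte[] chunk = new byte[len];
            System.arraycopy(song, offset, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    // Receiver side: append each chunk as it arrives in onMessageReceived().
    private final ByteArrayOutputStream received = new ByteArrayOutputStream();

    public void onChunkReceived(byte[] chunk) {
        received.write(chunk, 0, chunk.length);
    }

    public byte[] assembled() {
        return received.toByteArray();
    }
}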
With the approach I'm taking I guess I'd be able to play a song once I've transferred it in its entirety, but I'd like to play the songs while I'm actually receiving the data chunks, and in a synchronized manner across all the devices.
Does this make sense to you guys? Is it the right approach? Is there any more effective way of doing this? (I already know about the option of streaming audio using Wi-Fi Direct, but I'd like to use Nearby.)
The guy in this tutorial had a similar problem with chunks of audio.
He shows how to play the song while the bytes are still being downloaded and the audio file is being built.
Maybe you can utilize the "Incremental Media Download"-part of the tutorial.
Quote:
This is where the magic happens, as we download media content from the URL stream until we have enough content buffered to start the MediaPlayer. We then let the MediaPlayer play in the background while we download the remaining audio. If the MediaPlayer reaches the end of the buffered audio, we transfer any newly downloaded audio to the MediaPlayer and let it start playing again.
Things get a little tricky here because:
(a) The MediaPlayer seems to lock the file so we can’t simply append our content to the existing file.
...
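The gist of that incremental approach, as a very rough sketch (the 128 KB start threshold, the temp-file handling and the .mp3 extension are my assumptions; the tutorial's trick of moving newly downloaded audio into a fresh file and recreating the player is omitted):

import android.media.MediaPlayer;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Very rough sketch of "start playing once enough is buffered".
// Because MediaPlayer locks the file, the tutorial swaps newly downloaded
// audio into a fresh file and recreates the player; that part is omitted here.
public class IncrementalAudioPlayer {
    private static final int START_THRESHOLD_BYTES = 128 * 1024; // assumed buffer before starting

    private final File bufferFile;
    private final FileOutputStream out;
    private final MediaPlayer player = new MediaPlayer();
    private long bytesWritten = 0;
    private boolean started = false;

    public IncrementalAudioPlayer(File cacheDir) throws IOException {
        bufferFile = File.createTempFile("stream", ".mp3", cacheDir);
        out = new FileOutputStream(bufferFile);
    }

    // Call this for every chunk received over Nearby Connections.
    public void onChunk(byte[] chunk) throws IOException {
        out.write(chunk);
        bytesWritten += chunk.length;
        if (!started && bytesWritten >= START_THRESHOLD_BYTES) {
            player.setDataSource(bufferFile.getAbsolutePath());
            player.prepare();
            player.start();
            started = true;
        }
    }
}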
If this doesn't work, I would just use the Nearby connection to exchange IP addresses and go for a Wi-Fi Direct solution.
I hope this helps and I'd love to hear what your final solution looks like!
I implemented this several years ago for live audio/video packets sent in a serial stream to an Android 4.0 device. It would work the same for audio (or video) packets being streamed over the Nearby connections API.
The solution was to run an HTTP streaming server from within the Android app, then consume this using the Android MediaPlayer API with its HTTP streaming capabilities (or you can embed ExoPlayer in your app if you prefer, as it also supports HTTP streaming).
This was achieved by piping the data stream directly into an FFSERVER process running on the device. The Android NDK was used to create and manage the named pipe required as input to FFSERVER.
As this was done a few years ago, I have not tested this on Android 4.1+. Anyone who does this will need to adhere to the FFmpeg GPL/LGPL license when building and distributing FFSERVER.
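For the named-pipe part, a rough sketch of the feeding side (note: this uses android.system.Os.mkfifo(), available from API 21, instead of the NDK mkfifo() from my original setup; the fifo name and the way the ffserver/ffmpeg process is started to read from it are assumptions):

import android.system.Os;
import android.system.OsConstants;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;

// Sketch of feeding a byte stream into a named pipe that a separately started
// ffserver/ffmpeg process reads as its input.
public class FifoFeeder {
    private final OutputStream fifoOut;

    public FifoFeeder(File filesDir) throws Exception {
        File fifo = new File(filesDir, "media_pipe");
        if (!fifo.exists()) {
            Os.mkfifo(fifo.getAbsolutePath(), OsConstants.S_IRUSR | OsConstants.S_IWUSR);
        }
        // Opening the fifo for writing blocks until the reader opens its end.
        fifoOut = new FileOutputStream(fifo);
    }

    // Call for each incoming audio/video packet.
    public void write(byte[] packet, int len) throws Exception {
        fifoOut.write(packet, 0, len);
        fifoOut.flush();
    }
}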
I'm trying to build a system that live-streams video and audio captured by Android phones. I want to use MediaRecorder to encode the data and then send it over RTP, but the problem is how I can get the encoded data into a buffer.
You can't. At least not without some hacks. MediaRecorder does not support writing to buffers.
The trick is to create a pipe, extract the pipe's file descriptor and pass it to the setOutputFile(FileDescriptor fd) method. There are some issues with this approach, as MediaRecorder does not write media content in a stream-oriented way. In other words, it relies on the fact that it can seek back through the file and write some container headers later on.
More details on this can be found here: Broadcasting video with Android - without writing to local files
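A minimal sketch of the pipe trick (assuming the recorder's sources, encoders and output format have already been configured; as noted above, formats that need to seek back, such as MP4, will come out with broken headers this way):

import android.media.MediaRecorder;
import android.os.ParcelFileDescriptor;
import java.io.FileInputStream;

// Sketch of the pipe trick: MediaRecorder writes to the pipe's write end,
// a background thread reads the encoded bytes from the read end.
public class RecorderPipe {
    public void start(MediaRecorder recorder) throws Exception {
        ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
        final ParcelFileDescriptor readSide = pipe[0];
        ParcelFileDescriptor writeSide = pipe[1];

        recorder.setOutputFile(writeSide.getFileDescriptor());

        new Thread(new Runnable() {
            @Override
            public void run() {
                byte[] buffer = new byte[4096];
                try (FileInputStream in = new FileInputStream(readSide.getFileDescriptor())) {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        // Hand buffer[0..read) to your RTP packetizer / network sender here.
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();

        recorder.prepare();
        recorder.start();
    }
}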
I found two other options (I haven't tried either):
A FileDescriptor to a memory buffer: http://www.devdaily.com/java/jwarehouse/android/core/java/android/os/MemoryFile.java.shtml
Android 4.0 implements the OpenMAX multimedia API: http://developer.android.com/about/versions/android-4.0-highlights.html
The latter is probably your best bet.
I have some design questions that I want to discuss with people interested in helping me. I am planning to develop a simple VoIP program that allows two Android phones on the same network to use VoIP. My goal is simply to capture sound, send the data over UDP, receive UDP data and play the sound.
My current design is to have 2 threads: one captures the microphone and sends the data; the other one receives bytes and plays them.
I was starting to implement that using MediaPlayer and MediaRecorder. The issue that came up is how to record and play the sound. In other words, I would like to know whether I need to use a file, which seems slow, or whether there is any way to have the recording sent directly to my UDP socket.
Basically, I wonder if I have to record to a file in order to be able to play it, or if I could just pass a socket (for recording and playing).
Does anyone have any suggestions, please?
Thank you very much
MediaRecorder needs an FD, so you can use sockets as well. I don't see any issues with that. It all depends on how you design your system.
Don't use those classes for streaming audio - use AudioTrack and AudioRecord instead.
They provide the functionality you need for playing and recording raw audio data, without dealing with an FD.
When you record a frame (either byte[] or short[]), wrap it in a UDP packet.
When you receive a UDP packet, unpack the relevant byte[] or short[] and play it.
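A minimal sketch of both sides, assuming raw 16-bit mono PCM at 8 kHz, 1 KB datagrams and a hard-coded peer address (all of these are assumptions, and there's no encoding or jitter buffering here):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Sketch of raw PCM over UDP with AudioRecord/AudioTrack.
// Sample rate, packet size and the peer address are assumptions.
public class UdpVoice {
    private static final int SAMPLE_RATE = 8000;
    private static final int PACKET_BYTES = 1024;

    public static void sendLoop(String peerHost, int port) throws Exception {
        int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, Math.max(minBuf, PACKET_BYTES * 4));
        DatagramSocket socket = new DatagramSocket();
        InetAddress peer = InetAddress.getByName(peerHost);

        recorder.startRecording();
        byte[] buffer = new byte[PACKET_BYTES];
        while (!Thread.currentThread().isInterrupted()) {
            int read = recorder.read(buffer, 0, buffer.length);             // record a frame...
            if (read > 0) {
                socket.send(new DatagramPacket(buffer, read, peer, port));  // ...and wrap it in a UDP packet
            }
        }
        recorder.stop();
        recorder.release();
        socket.close();
    }

    public static void receiveLoop(int port) throws Exception {
        int minBuf = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_VOICE_CALL,
                SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT, Math.max(minBuf, PACKET_BYTES * 4),
                AudioTrack.MODE_STREAM);
        DatagramSocket socket = new DatagramSocket(port);

        track.play();
        byte[] buffer = new byte[PACKET_BYTES];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        while (!Thread.currentThread().isInterrupted()) {
            socket.receive(packet);                                         // unpack the received bytes...
            track.write(packet.getData(), 0, packet.getLength());           // ...and play them
        }
        track.stop();
        track.release();
        socket.close();
    }
}

Run sendLoop() and receiveLoop() on their own threads (the two threads from your design), one per direction.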