This is not so much a question as a presentation of all my attempts to solve one of the most challenging features I have faced.
I use the libstreaming library to stream real-time video to a Wowza server, and I need to record it to the SD card at the same time. I am presenting all my attempts below in order to collect new ideas from the community.
Copy bytes from the libstreaming stream to an MP4 file
Development
We created an interception in the libstreaming library to copy all the sent bytes to an MP4 file. Libstreaming sends the bytes to the Wowza server through a LocalSocket. It uses MediaRecorder to access the camera and mic of the device, sets the LocalSocket as MediaRecorder's output, and reads the data back from the socket's input stream. What we did was create a wrapper around this input stream, extending InputStream, with a file output stream inside it. So, every time libstreaming reads from the LocalSocket's input stream, we copy all the data to the output stream, trying to create a valid MP4 file.
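A minimal sketch of that wrapper, in case it helps others reproduce the attempt (class and field names here are ours and purely illustrative, not libstreaming's):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Tee wrapper around the LocalSocket's InputStream: every byte that
// libstreaming reads is also appended to a local file.
public class TeeInputStream extends InputStream {

    private final InputStream source;    // the LocalSocket's input stream
    private final FileOutputStream copy; // e.g. /sdcard/capture.mp4

    public TeeInputStream(InputStream source, FileOutputStream copy) {
        this.source = source;
        this.copy = copy;
    }

    @Override
    public int read() throws IOException {
        int b = source.read();
        if (b != -1) copy.write(b);
        return b;
    }

    @Override
    public int read(byte[] buffer, int offset, int length) throws IOException {
        int n = source.read(buffer, offset, length);
        if (n > 0) copy.write(buffer, offset, n);
        return n;
    }

    @Override
    public void close() throws IOException {
        try {
            copy.close();
        } finally {
            source.close();
        }
    }
}
```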
Impediment
When we tried to read the file, it was corrupted. We realized that meta information was missing from the MP4 file, specifically the moov atom. We tried delaying the closing of the stream to give it time to send this header (this was still a guess), but it didn't work. To test the coherence of this data, we used a paid piece of software to try to recover the video, including the header. It became playable, but it was mostly green screen, so this is not a trustworthy solution. We also tried "untrunc", a free open-source command-line program, and it couldn't even start the recovery, since there was no moov atom.
Use ffmpeg compiled for Android to access the camera
Development
FFmpeg has a Gradle plugin with a Java interface for use inside Android apps. We thought we could access the camera via the command line (it is probably at "/dev/video0") and send it to the media server.
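The attempt looked roughly like the sketch below. The ffmpeg binary location and the RTMP endpoint are placeholders; only the v4l2 input part is essential:

```java
// Sketch of the attempted capture. "/data/local/ffmpeg" stands in for
// wherever the bundled ffmpeg binary lives; the RTMP URL is illustrative.
// Without root this fails with "Permission denied" on /dev/video0.
public class FfmpegCameraCapture {

    public static Process start() throws java.io.IOException {
        return new ProcessBuilder(
                "/data/local/ffmpeg",
                "-f", "v4l2",          // video4linux2 capture
                "-i", "/dev/video0",   // the camera device node
                "-codec:v", "libx264", // encode to H.264
                "-f", "flv",           // RTMP uses the FLV container
                "rtmp://example.com/live/stream")
                .redirectErrorStream(true)
                .start();
    }
}
```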
Impediment
We got a "Permission denied" error when trying to access the camera. The workaround would be to root the device, but that voids the phones' warranty and could brick them.
Use ffmpeg compiled for Android combined with MediaRecorder
Development
We tried to make FFmpeg stream an MP4 file while it was still being recorded inside the phone via MediaRecorder.
Impediment
FFmpeg cannot stream an MP4 file whose recording has not yet finished (the moov atom is only written when recording stops).
Use ffmpeg compiled for Android with libstreaming
Development
Libstreaming uses a LocalServerSocket as the connection between the app and the server, so we thought we could point ffmpeg at the LocalServerSocket's local address to copy the stream directly to a local file on the SD card. Right after the streaming started, we also ran the ffmpeg command to start recording the data to a file. Using ffmpeg, we believed it would create the MP4 file properly, i.e., with the moov atom header included.
Impediment
The "address" created is not readable via command line, as a local address inside the phone. So the copy is not possible.
Use OpenCV
Development
OpenCV is an open-source, cross-platform library that provides building blocks for computer vision experiments and applications. It offers high-level interfaces for capturing, processing, and presenting image data. It has its own APIs to connect to the device camera, so we started studying it to see whether it had the necessary functionality to stream and record at the same time.
Impediment
We found out that the library is not really designed for this; it is aimed at mathematical image manipulation. We were even given the recommendation to use libstreaming (which we do already).
Use Kickflip SDK
Development
Kickflip is a media streaming service that provides its own SDK for Android and iOS development. It also uses HLS, a newer protocol, instead of RTMP.
Impediment
Their SDK requires us to create an Activity with a camera view that occupies the entire screen of the device, breaking the usability of our app.
Use Adobe Air
Development
We started consulting other developers of apps already available in the Play Store that stream to servers.
Impediment
Getting in touch with those developers, they confirmed that it would not be possible to record and stream at the same time using this technology. What's more, we would have to redo the entire app from scratch in Adobe Air.
UPDATE
WebRTC
Development
We started using WebRTC, following this great project. We included the signaling server in our Node.js server and started doing the standard handshake via sockets. We were still toggling between local recording and streaming via WebRTC.
Impediment
WebRTC does not work in every network configuration. Beyond that, the camera acquisition is all native code, which makes it much harder to copy or intercept the bytes.
If you are willing to part with libstreaming, there is a library that can easily stream and record to a local file at the same time.
https://github.com/pedroSG94/rtmp-rtsp-stream-client-java
Clone the project and run the sample app. For example, tap "Default RTSP", type in your endpoint, tap "Start stream", then tap "Start record"; then tap "Stop stream" and "Stop record". I've tested this with Wowza Server and it works well. The project can also be used as a library rather than as a standalone app.
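Used as a library, the parallel stream+record flow looks roughly like the sketch below. The class and method names are from my reading of the project's sample and may differ between versions; the endpoint and file path are illustrative:

```java
import android.view.SurfaceView;
import com.pedro.rtplibrary.rtsp.RtspCamera1;
import com.pedro.rtsp.utils.ConnectCheckerRtsp;
import java.io.IOException;

public class StreamAndRecord {

    // Streams to the RTSP endpoint and records to the SD card at once.
    public static void start(SurfaceView view, ConnectCheckerRtsp checker)
            throws IOException {
        RtspCamera1 camera = new RtspCamera1(view, checker);
        if (camera.prepareAudio() && camera.prepareVideo()) {
            camera.startStream("rtsp://my.wowza.host:1935/live/stream");
            camera.startRecord("/sdcard/capture.mp4"); // local copy in parallel
        }
    }

    public static void stop(RtspCamera1 camera) {
        camera.stopRecord();
        camera.stopStream();
    }
}
```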
As soon as OpenCV 3.0 is available (the RC1 can be downloaded here), we could add another option to this list:
Using OpenCV's built-in Motion-JPEG encoder
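Once 3.0 lands, its Java bindings should expose this encoder through VideoWriter; a minimal sketch, with the file path and frame geometry as illustrative values:

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.videoio.VideoWriter;

public class MjpegWriterExample {

    // Opens a Motion-JPEG writer targeting an AVI container.
    public static VideoWriter open() {
        int fourcc = VideoWriter.fourcc('M', 'J', 'P', 'G');
        return new VideoWriter("/sdcard/capture.avi", fourcc,
                30.0, new Size(640, 480), true);
    }

    // Each frame must match the Size passed when the writer was opened.
    public static void append(VideoWriter writer, Mat frame) {
        writer.write(frame);
    }
}
```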
Related
Whereas the camera2 API on newer devices allows me to grab the raw data stream from the camera, older devices only allow me to write the encoded stream to a file.
I ran Skype on some older devices (e.g. SDK 10) and was able to make a video call, which means that Skype must somehow be able to grab the unencoded stream before it is encoded and written to file.
I found some interesting articles on the web,
http://www.hnwatcher.com/r/1170899/Video-recording-and-processing-in-Android/
http://code.google.com/p/ipcamera-for-android/
https://github.com/NanoHttpd/nanohttpd/
but I don't see how this would work reliably and across all devices.
I was able to read the encoded file while Android was still writing the video, but the problem here is that Android writes the MOOV box at the end of the file, and only when recording has stopped. So the information in the MDAT box is worthless before the file is closed.
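One trick that avoids the file entirely is to hand MediaRecorder the write end of a pipe instead of a path; the ipcamera-for-android project linked above appears to work along these lines. A rough sketch, assuming sources, output format, and encoders are configured on the recorder beforehand:

```java
import android.media.MediaRecorder;
import android.os.ParcelFileDescriptor;
import java.io.FileInputStream;
import java.io.IOException;

public class PipedRecorder {

    // Returns the encoded output as a live stream. Note the MP4 muxer
    // still emits the MOOV box only at stop(), so the bytes must be
    // parsed as an elementary stream rather than as a playable MP4.
    public static FileInputStream start(MediaRecorder recorder)
            throws IOException {
        ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
        recorder.setOutputFile(pipe[1].getFileDescriptor()); // write end
        recorder.prepare();
        recorder.start();
        return new FileInputStream(pipe[0].getFileDescriptor()); // read end
    }
}
```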
Does anybody know of a library that I could use to grab the data stream from the camera and immediately use it as live stream? Has anybody tried to find out how Skype does this technically?
I am attempting to interface with a Motorola Blink1 camera (baby monitor), which hosts an unencrypted video+audio stream. The video is MJPEG, but of particular interest is the ADPCM-encoded audio stream. The video+audio feed is made available via a public URL on the local network.
Does anyone know how one might connect to and decode such a video stream with the audio (I know OpenCV etc. can do this without audio) within an Android app? Or, failing that, any open-source Java lib that can do this?
As an aside, the desktop/web interface on the device uses the Java-applet-based GNU GPL v2 Cambozola viewer here:
http://charliemouse.com/code/cambozola/index.html
which Motorola has modified to add ADPCM support but does not appear to have released the modified source anywhere :-/ However, it does indicate that this can be done...
Full credit to Motorola: after I wrote to their tech support requesting their modified GPL source for Cambozola, they did, after a couple of months, get their dev team to prepare the source code for their web viewer app and publish it here:
https://github.com/nikhilvs/cambozola-bms
And it includes the ADPCM decoder routines integrated into it. For future reference, they prepend the JPEG stream with a short binary stream containing the frame count, temperature info, and the audio packets.
Kudos to your dev team, Motorola, for releasing your code in response to a tech support query; I really wasn't expecting such a positive response.
I'm working on this right now, and I've identified most of the things needed to decode the ADPCM stream. Documenting my progress here: http://www.surfrock66.com/improving-the-motorola-blink-baby-monitorcamera/
It's using an open-source streamer (GPLv3) which they've modified; code here: https://code.google.com/p/mjpg-streamer/ I'm actually contacting Motorola to get the source now, as they are under an obligation to publish their modifications.
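The decoder routines appear to be standard IMA ADPCM, which is compact enough to sketch in Java. The table values below are from the published algorithm; Motorola's custom packet framing is device-specific and not handled here:

```java
// Minimal IMA ADPCM decoder: turns 4-bit codes into 16-bit PCM samples.
// In real streams, predictor and index are normally seeded from each
// block's header before decoding its nibbles.
public class ImaAdpcmDecoder {

    private static final int[] STEP_TABLE = {
        7, 8, 9, 10, 11, 12, 13, 14, 16, 17,
        19, 21, 23, 25, 28, 31, 34, 37, 41, 45,
        50, 55, 60, 66, 73, 80, 88, 97, 107, 118,
        130, 143, 157, 173, 190, 209, 230, 253, 279, 307,
        337, 371, 408, 449, 494, 544, 598, 658, 724, 796,
        876, 963, 1060, 1166, 1282, 1411, 1552, 1707, 1878, 2066,
        2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871, 5358,
        5894, 6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
        15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767
    };

    private static final int[] INDEX_ADJUST = {
        -1, -1, -1, -1, 2, 4, 6, 8,
        -1, -1, -1, -1, 2, 4, 6, 8
    };

    private int predictor = 0; // last decoded sample
    private int index = 0;     // position in STEP_TABLE

    // Decodes one 4-bit ADPCM code into one 16-bit PCM sample.
    public short decode(int nibble) {
        nibble &= 0x0F;
        int step = STEP_TABLE[index];
        int diff = step >> 3;
        if ((nibble & 1) != 0) diff += step >> 2;
        if ((nibble & 2) != 0) diff += step >> 1;
        if ((nibble & 4) != 0) diff += step;
        predictor += ((nibble & 8) != 0) ? -diff : diff;
        if (predictor > Short.MAX_VALUE) predictor = Short.MAX_VALUE;
        if (predictor < Short.MIN_VALUE) predictor = Short.MIN_VALUE;
        index += INDEX_ADJUST[nibble];
        if (index < 0) index = 0;
        if (index > 88) index = 88;
        return (short) predictor;
    }
}
```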
For a school project, my partner and I are trying to push video from an Android tablet (Nexus 7) to a server (as an IP webcam), pull it from the server into OpenCV 2.4.6, process it, send it back to a server, and have the tablet display the feed in real or near-real time.
We aren't using OpenCV for Android because the goal is for a remote user to decide how to process the video (i.e. selecting a template to match or something similar from the stream).
Our question is this: we have managed to get the Android webcam stream onto a server as an H.264 RTSP stream. All the documentation on how to pull an RTSP stream is either outdated, really confusing, or altogether non-existent. We tried using a VideoCapture object and then tried using cvCreateFileCapture, but neither seems to be working. How do we do this?
There is an open source project that does precisely this combination.
- Android generating the content from the front or rear camera
- RTSP protocol for streaming
- H.264 encoding
https://code.google.com/p/spydroid-ipcamera/
I want to stream a video recording from my Android phone to a network media server.
The first problem is that when setting the MediaRecorder output to a socket, the stream is missing some mdat size headers. This can be fixed by preprocessing the stream locally and adding the missing data to it in order to produce a valid output stream.
The question is how to proceed from there.
How can I go about output that stream as an RTMP stream?
First, let's unwind your question. As you've surmised, RTMP isn't currently supported by Android. You can use a few side libraries to add support, but these may not be full implementations or may have other undesirable side effects and bugs that cause them to fail to meet your needs.
The common alternative in this case is to use RTSP. It provides a comparable session format that has its own RFC, and its packet structure when combined with RTP is very similar (sans some details) to your desired protocol. You could perform the necessary fixups here to transmute RTP/RTSP into RTMP, but as mentioned, such effort is currently outside the development scope of your application.
So, let's assume you would like to use RTMP (invalidating this thread) and that the above-linked library does not meet your needs.
You could, for example, follow this tutorial for recording and playback using Livu, Wowza, and Adobe Flash Player, talking with the Livu developer(s) about licensing their client. Or, you could use this client library and its full Android recorder example to build your client.
To summarize:
RTSP
This thread, using Darwin Media Server, Windows Media Services, or VLC
RTMP
This library,
This thread and this tutorial, using Livu, Wowza, and Adobe Flash Player
This client library and this example recorder
Best of luck with your application. I admit that I have a less than comprehensive understanding of all of these libraries, but these appear to be the standard solutions in this space at the time of this writing.
Edit:
According to the OP, walking the RTMP library set:
This library: He couldn't make the library demos work. More importantly, RTMP functionality is incomplete.
This thread and this tutorial, using Livu, Wowza, and Adobe Flash Player: This has a long tutorial on how to consume video, but its tutorial on publication is potentially terse and insufficient.
This client library and this example recorder: The given example only covers audio publication. More work is needed to make this complete.
In short: more work is needed. Other answers, and improvements upon these examples, are what's needed here.
If you are using a web browser on the Android device, you can use WebRTC for video capturing and server-side recording, e.g. with Web Call Server 4.
Thus the full path would be:
Android Chrome [WebRTC] > WCS4 > recording
So you don't need the RTMP protocol here.
If you are using a standalone RTMP app, you can use any RTMP server for video recording. As far as I know, Wowza supports H.264+Speex recording.
I am working on an Android app, that needs to do the following:
- capture an (animated) view to video, including audio (from an MP3 file)
- encode the captured video (probably a bunch of raw image buffers) and audio to AVI.
After searching, FFmpeg seems the most suitable. Does anybody have sample code to accomplish what I need? I would really appreciate it.
It's not clear what you mean by 'a(n animated) view' to capture, but be aware that Android apps running with normal permissions cannot access the raw framebuffer. The computation part of ffmpeg builds under the NDK without undue work, and there's a lot you can read about it on the web, but the output (or in your case input) drivers are a bit of a permissions problem. Also, you should expect encoding to be much slower than real time unless you can somehow manage to leverage the hardware acceleration features of your particular device's SoC.
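That said, if the view in question is one your own app draws (rather than the whole screen), you don't need framebuffer access at all: you can rasterize it yourself and hand the pixels to the encoder. A sketch:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;
import java.nio.ByteBuffer;

public class ViewFrameGrabber {

    // Rasterizes a view the app owns into a raw ARGB buffer; no special
    // permissions are needed, unlike reading the raw framebuffer.
    public static ByteBuffer grab(View view) {
        Bitmap frame = Bitmap.createBitmap(view.getWidth(), view.getHeight(),
                Bitmap.Config.ARGB_8888);
        view.draw(new Canvas(frame));
        ByteBuffer pixels = ByteBuffer.allocate(frame.getByteCount());
        frame.copyPixelsToBuffer(pixels);
        return pixels; // hand this to the encoder (e.g. ffmpeg via JNI)
    }
}
```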
If you are building your app for Android, you can use .avi writer code. You can get this code from the Koders website; search for "Koders site" on Google and you will find the link. I have tested the .avi file writer code and it works fine.