Video recording to a circular buffer on Android

I'm looking for the best way (if any...) to capture continuous video to a circular buffer on the SD card, allowing the user to capture events after they have happened.
The standard video recording API lets you write directly to a file, and when you reach the limit (set by the user, or the capacity of the SD card) you have to stop and restart the recording. This creates a window of up to two seconds during which nothing is recorded. This is what existing apps like DailyRoads Voyager already do. To minimize the chance of missing something important you can set the splitting time to something long, like 10 minutes, but if an event occurs near the end of that span you waste space storing the nine minutes of nothing at the beginning.
So, my idea for now is as follows: I'll have a large file that will serve as the buffer. I'll use some code I've found to capture the frames and write them into the file myself, wrapping around at the end. When the user wants to keep some part, I'll mark it with pointers to its beginning and end in the buffer. Recording can then continue as before, skipping over the regions marked for retention.
After the recording is stopped, or perhaps while it is still running on a background thread (depending on phone/card speed), I'll copy the marked region out to another file and remove the overwrite protection.
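Roughly the bookkeeping I have in mind, as a sketch only (all names here are mine, and a real implementation would need to re-check the ranges after each skip):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical retention bookkeeping for the circular-buffer plan above.
class CircularBufferMarks {
    static class Range {
        final long start, end;
        Range(long start, long end) { this.start = start; this.end = end; }
    }

    final long capacity;                            // backing file size in bytes
    final List<Range> retained = new ArrayList<>(); // protected [start, end) ranges

    CircularBufferMarks(long capacity) { this.capacity = capacity; }

    // Called when the user wants to keep [start, end) until it is exported.
    void markForRetention(long start, long end) { retained.add(new Range(start, end)); }

    // Next position the writer may use: wrap at the end of the file and
    // hop over any protected range containing the position.
    long nextWritePosition(long pos) {
        if (pos >= capacity) pos = 0;
        for (Range r : retained) {
            if (pos >= r.start && pos < r.end) return r.end % capacity;
        }
        return pos;
    }
}
```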
Main question, if you don't care about the details above: I can't seem to find a way to convert the individual frames to a video file in the Android SDK. Is it possible? If not, are there any available libraries, maybe in native code, that can do this?
I don't really care about the big buffer of uncompressed frames, but the exported videos should be compressed in an Android-friendly format. But if there is a way to compress the buffer I would like to hear about it.
Thank you.

In Android's MediaRecorder there are two ways to specify the output: one is a file name, and the other is a FileDescriptor.
Using the static method fromSocket of ParcelFileDescriptor you can create a ParcelFileDescriptor pointing at a socket, then call getFileDescriptor on it to get the FileDescriptor to pass to the MediaRecorder.
Since you can then read the encoded video from the socket (as if you were running a local server), you will be able to access individual frames of the video, although not directly, because you will need to decode the stream first.
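A minimal sketch of that wiring, assuming a loopback ServerSocket (camera setup, permissions, and error handling omitted; note that the MPEG-4 muxer normally wants a seekable output, so a streamable container may be needed in practice):

```java
import android.media.MediaRecorder;
import android.os.ParcelFileDescriptor;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketRecorderSketch {
    public static void start() throws Exception {
        ServerSocket server = new ServerSocket(0); // any free loopback port

        // Reader thread: accepts the connection and consumes the encoded stream.
        Thread reader = new Thread(() -> {
            try (Socket in = server.accept()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.getInputStream().read(buf)) != -1) {
                    // Append the encoded bytes to your circular buffer here.
                }
            } catch (Exception ignored) { }
        });
        reader.start();

        Socket out = new Socket("localhost", server.getLocalPort());
        ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(out);

        MediaRecorder recorder = new MediaRecorder();
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setOutputFile(pfd.getFileDescriptor()); // socket-backed output
        recorder.prepare();
        recorder.start();
    }
}
```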

Related

How to download a part of mp3 from server?

Use Case
My use case is roughly this: adding a 15-second clip from an mp3 file to a ~1 min video. All the transcoding and merging will be done by FFmpeg-android, so that is not the concern right now.
The flow is as follows:
The user can select any 15 seconds (streamed via ExoPlayer) of an mp3 (at 192 Kbps / 44.1 KHz, 3 minutes is up to 7 MB).
Then download ONLY that 15-second part and add it to the video's audio stream (using FFmpeg).
Use the obtained output.
Tried solutions
Extracting fragment of audio from a url
RANGE_REQUEST - I have replicated the exact same algorithm/formula in Kotlin, using the exact sample file provided, but the output is not accurate (off by ±1.5 secs × c, where c is proportional to startTime).
How to crop a mp3 from x to x+n using ffmpeg?
FFMPEG_SS - This works flawlessly with remote URLs as input, but there are two downsides:
as startTime increases, the amount of data downloaded approaches the full size of the mp3;
ffmpeg-android does not support the network-request module (at least the way we compiled it).
So the above two solutions have not been fruitful, and currently I am downloading the whole file and trimming it locally, which is definitely a bad UX.
I wonder how Instagram's add-music-to-story feature works, because that's close to what I wanted to implement.
It is not possible the way you want to do it. mp3 files do not have timestamps. If you just jump to the middle of an mp3 (and look for the frame start marker) and start decoding, you have no idea what time that frame is for, because frames are variable size. The only way to know is to count the number of frames before the current position, which means you need the whole file.
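To illustrate why, here is a rough sketch of that frame counting, assuming MPEG-1 Layer III framing (the class and method names are mine). It has to walk every frame header before the target byte offset, which is exactly why a partial download is not enough:

```java
import java.io.RandomAccessFile;

public class Mp3FrameCounter {
    // Bitrates in kbps for MPEG-1 Layer III, indexed by the header's bitrate field.
    private static final int[] BITRATES = {
        0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 0 };
    // Sample rates for MPEG-1, indexed by the header's sample-rate field.
    private static final int[] RATES = { 44100, 48000, 32000, 0 };

    // Seconds of audio before targetOffset, found by counting whole frames.
    public static double secondsBefore(String path, long targetOffset) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            long pos = 0, frames = 0;
            int sampleRate = 44100;
            while (pos + 4 <= f.length() && pos < targetOffset) {
                f.seek(pos);
                int h = f.readInt();
                if ((h & 0xFFE00000) != 0xFFE00000) { pos++; continue; } // no sync word
                if (((h >> 19) & 0x3) != 0x3 || ((h >> 17) & 0x3) != 0x1) {
                    pos++; continue; // not MPEG-1 Layer III; tables above don't apply
                }
                int br = BITRATES[(h >> 12) & 0xF] * 1000;
                int sr = RATES[(h >> 10) & 0x3];
                if (br == 0 || sr == 0) { pos++; continue; } // invalid header
                sampleRate = sr;
                pos += 144 * br / sr + ((h >> 9) & 1);       // Layer III frame length
                frames++;
            }
            return frames * 1152.0 / sampleRate;             // 1152 samples per frame
        }
    }
}
```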

What would be the most efficient way to buffer the last X minutes of MediaProjection

I'm having a bit of trouble thinking of an efficient solution. There are a few problems I am foreseeing, the first being...
OOM Prevention
If I wanted the past 30 seconds or even 5 minutes, that's doable, but what if I wanted the past 30 minutes, a full hour, or maybe EVERYTHING? Keeping a byte buffer means storing it in RAM, and storing over a hundred megabytes there sounds like virtual-memory suicide.
Okay, so what if we store Y amount of time, say 30 seconds, of the previously recorded media to disk in some tmp file? That could potentially work, and I can use a library like mp4parser to concatenate them all when finished. However...
If we have 30 minutes' worth, that's about 60 30-second clips. This seems like a great way to burn through an SD card, and even if that's not a problem, I can't imagine the time needed to concatenate over a hundred files into one.
From what I've been researching, I was thinking of using local sockets to do something like...
MediaRecorder -> setOutputFile(LocalSocket.getFD())
Then in the local socket...
LocalSocket -> FileOutputStream -> write(data, position, bufsiz) -> flush()
Where a background thread handles the writing and keeps track of the position and the buffer.
This is purely pseudocode and I'm not far enough in yet to test it; am I going in the right direction? As I picture it, this keeps only one file, which gets overwritten. Since it only gets written to once every Y seconds, it minimizes I/O overhead and also the amount of RAM it eats up.
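Roughly what I mean, as a compilable sketch (the socket name, path, and sizes are made up, and the MediaRecorder configuration is omitted):

```java
import android.media.MediaRecorder;
import android.net.LocalServerSocket;
import android.net.LocalSocket;
import android.net.LocalSocketAddress;
import java.io.InputStream;
import java.io.RandomAccessFile;

public class SocketBufferSketch {
    public static void start(MediaRecorder recorder) throws Exception {
        LocalServerSocket server = new LocalServerSocket("rec-buffer");
        LocalSocket sender = new LocalSocket();
        sender.connect(new LocalSocketAddress("rec-buffer"));
        LocalSocket receiver = server.accept();

        // Background thread: drain the socket into one fixed-size file,
        // wrapping the write position instead of growing the file.
        new Thread(() -> {
            final long capacity = 100L * 1024 * 1024;
            try (InputStream in = receiver.getInputStream();
                 RandomAccessFile out = new RandomAccessFile("/sdcard/buffer.bin", "rw")) {
                out.setLength(capacity);
                byte[] buf = new byte[8192];
                long pos = 0;
                int n;
                while ((n = in.read(buf)) != -1) {
                    if (pos + n > capacity) pos = 0; // wrap around
                    out.seek(pos);
                    out.write(buf, 0, n);
                    pos += n;
                }
            } catch (Exception ignored) { }
        }).start();

        recorder.setOutputFile(sender.getFileDescriptor()); // before prepare()/start()
    }
}
```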
Video Length to Buffer Size
How would I work out the buffer size needed for a requested video length? It seems strange, since I see some long videos that are small and some short videos that are huge, so I don't know how to determine this accurately. Does anyone know how I can predict it if I know the video length, encoding, and so on, which get configured in MediaRecorder?
Examples
Does anyone know of any examples of this? I don't think the idea is entirely original, but I don't see many implementations out there, and those that exist are closed source. An example goes a long way.
Thanks in advance
The "continuous capture" Activity in Grafika does this, using MediaCodec rather than MediaRecorder. It keeps the last N seconds of video in a circular buffer in memory, and writes it to disk when requested by the user.
The CircularEncoder constructor estimates the memory requirements based on the target bit rate. At a reasonable rate (say 4Mbps) you'd need 1.8GB to store an hour's worth of video, so that's not going to fit in RAM on current devices. Five minutes is 150MB, which is pushing the bounds of good manners. Spooling out to a file on disk is probably necessary.
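The arithmetic, as a trivial helper of my own (this also covers the question's "video length to buffer size" point: size is just bit rate over eight, times duration):

```java
public class BufferSizing {
    // Buffer size estimate from target bit rate and duration: bytes = bps / 8 * s.
    static long estimateBufferBytes(long bitRateBps, long seconds) {
        return bitRateBps / 8 * seconds;
    }
    // estimateBufferBytes(4_000_000, 3600) -> 1.8 GB (an hour at 4 Mbps)
    // estimateBufferBytes(4_000_000, 300)  -> 150 MB  (five minutes)
}
```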
Passing data through a socket doesn't buy you anything that you don't get from an appropriate java.util.concurrent data structure. You're just involving the OS in a data copy.
One approach would be to create a memory-mapped file, and just treat it the same way CircularEncoder treats its circular buffer. In Grafika, the frame data goes into a single large byte buffer, and the meta-data (which tells you things like where each packet starts and ends) sits in a parallel array. You could store the frame data on disk, and keep the meta-data in memory. Memory mapping would work for the five-minute case, but generally not for the full hour case, as getting a contiguous virtual address range that large can be problematic.
Without memory-mapped I/O the approach is essentially the same, but you have to seek/read/write with file I/O calls. Again, keep the frame metadata in memory.
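A rough sketch of that layout, under my own assumptions (this is not Grafika's actual code): packets go into a fixed-size file with a wrap-around write position, while the per-packet metadata lives in an in-memory deque whose oldest entries are evicted as their bytes get overwritten.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayDeque;

public class CircularFileBuffer {
    static class Packet {
        final long offset; final int size; final long ptsUs; final boolean sync;
        Packet(long offset, int size, long ptsUs, boolean sync) {
            this.offset = offset; this.size = size; this.ptsUs = ptsUs; this.sync = sync;
        }
    }

    private final RandomAccessFile file;
    private final long capacity;
    private long writePos = 0;
    private final ArrayDeque<Packet> packets = new ArrayDeque<>(); // oldest first

    public CircularFileBuffer(String path, long capacityBytes) throws IOException {
        file = new RandomAccessFile(path, "rw");
        file.setLength(capacityBytes);
        capacity = capacityBytes;
    }

    public synchronized void append(byte[] data, long ptsUs, boolean sync) throws IOException {
        if (data.length > capacity) throw new IllegalArgumentException("packet too large");
        if (writePos + data.length > capacity) {
            // Wrap: abandon the tail. Packets still living there are the oldest.
            while (!packets.isEmpty() && packets.peekFirst().offset >= writePos) {
                packets.removeFirst();
            }
            writePos = 0;
        }
        // Evict old packets whose bytes the new write will overwrite.
        while (!packets.isEmpty()
                && packets.peekFirst().offset < writePos + data.length
                && packets.peekFirst().offset + packets.peekFirst().size > writePos) {
            packets.removeFirst();
        }
        file.seek(writePos);
        file.write(data);
        packets.addLast(new Packet(writePos, data.length, ptsUs, sync));
        writePos += data.length;
    }
}
```

Saving a clip is then a matter of walking the deque from the first sync frame at or after the requested start time and copying each packet's bytes out of the file.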
An additional buffer stage might be necessary if the disk I/O stalls. When writing video data through MediaMuxer I've seen periodic one-second stalls, which is more buffering than MediaCodec has, leading to dropped frames. You can defer solving that until you're sure you actually have a problem though.
There are some additional details you need to consider, like dropping frames at the start to ensure your video starts on a sync frame, but you can see how Grafika solved those.

Android Visualizer behavior

I'm trying to use a C library (Aubio) to perform beat detection on some music playing from a MediaPlayer in Android. To capture the raw audio data, I'm using a Visualizer, which sends a byte buffer at regular intervals to a callback function, which in turn sends it to the C library through JNI.
I'm getting inconsistent results (i.e. almost no beats are detected, and the few that are detected don't really line up with the audio). I've checked multiple times and, while I can't entirely rule out a mistake on my side, I'm wondering how exactly the Android Visualizer behaves, since the documentation is not explicit about it.
If I set the buffer size using setCaptureSize, does that mean that the captured buffer is averaged over the complete audio samples? For instance, if I halve the capture size, will it still represent the same captured sound, but with half the precision on the time axis?
Is it the same with the capture rate? For instance, does setting twice the capture size with half the rate yield the same data?
Are the captures consecutive? To put it another way, if I take too long to process a capture, are the sounds played during the processing ignored when I receive the next capture?
Thanks for your insight!
Make sure the callback function receives the entire audio signal, for instance by counting the frames that come out of the player and the ones that reach the callback.
It would also help to look at the Visualizer documentation.
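For instance, a minimal sketch of that check (assuming playback through a MediaPlayer and the RECORD_AUDIO permission; the class name is mine):

```java
import android.media.MediaPlayer;
import android.media.audiofx.Visualizer;

public class CaptureCoverageCheck {
    private long receivedBytes = 0;

    // Tally how many waveform bytes actually reach the callback, to compare
    // against what the player should have produced over the same period.
    public Visualizer attach(MediaPlayer player) {
        Visualizer vis = new Visualizer(player.getAudioSessionId());
        vis.setCaptureSize(Visualizer.getCaptureSizeRange()[1]); // largest capture size
        vis.setDataCaptureListener(new Visualizer.OnDataCaptureListener() {
            @Override
            public void onWaveFormDataCapture(Visualizer v, byte[] waveform, int samplingRate) {
                receivedBytes += waveform.length; // count what actually arrives
            }
            @Override
            public void onFftDataCapture(Visualizer v, byte[] fft, int samplingRate) { }
        }, Visualizer.getMaxCaptureRate(), true, false);     // waveform only, no FFT
        vis.setEnabled(true);
        return vis;
    }
}
```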

Read raw audio and extract SMPTE timecode in android

I am trying to extract the SMPTE timecode (wikipedia) from an audio input stream in android.
As mentioned here https://stackoverflow.com/a/2099226, the first step is to scan the input stream's byte sequence for 0011111111111101 to synchronize. But how do I do this with the AudioRecord class?
That answer isn't really correct. The audio signal you are getting is a modulated carrier wave, and extracting SMPTE bits from it is a multi-step process: the raw data you get through the mic or audio-in won't directly contain the SMPTE timecode bits. You need to decode the audio first, which is not at all simple.
The first step is to recover the bit stream from the biphase mark code. I haven't implemented a SMPTE reader myself, but you know the clock rate from the SMPTE standard, so the first thing I would do is filter carefully to get rid of background noise, since it sounds like you are taking the audio in from the mic. A gentle high-pass to remove any DC offset should do, and a gentle low-pass for HF noise should also help. (You could use a broad band-pass instead.)
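A minimal sketch of that pre-filtering, assuming float samples (the coefficients are illustrative guesses, not values from the SMPTE spec):

```java
public class PreFilter {
    // One-pole high-pass (removes DC) followed by a one-pole low-pass
    // (smooths HF noise); together they act as a broad band-pass.
    static float[] bandpass(float[] x) {
        float[] y = new float[x.length];
        final float hpAlpha = 0.995f; // roughly a 40 Hz high-pass at 48 kHz
        final float lpAlpha = 0.3f;   // low-pass smoothing factor
        float prevX = 0, prevHp = 0, lp = 0;
        for (int i = 0; i < x.length; i++) {
            float hp = hpAlpha * (prevHp + x[i] - prevX); // high-pass stage
            prevX = x[i];
            prevHp = hp;
            lp += lpAlpha * (hp - lp);                    // low-pass stage
            y[i] = lp;
        }
        return y;
    }
}
```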
Then you need to find the start of each clock cycle. You could do something fancy like an autocorrelation or PLL algorithm, but I suspect that knowing the approximate clock rate from the SMPTE standard and being able to adjust a few percent up and down is good enough -- maybe better. So just look for repeating transitions according to the spec. Doing something fancy will help if you suspect your timecode is highly warped (which might be the case if you have a really old tape deck, or if you want to sync at very high/low speeds, but LTC isn't really designed for that; it's more VITC's domain).
Once you've identified the clock, you need to determine, for each bit cell, whether a transition occurred in the middle of the cell. Every cell begins with a transition; an additional transition in the middle indicates a 1 bit, while no mid-cell transition indicates a 0. That's how BMC transmits both clock and data in a single stream. This lets you create a new stream of your actual SMPTE data.
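As a sketch (assuming you already have the sample positions of the signal transitions and an estimate of the bit-cell period; names are mine): a gap of roughly a full period means no mid-cell transition (a 0), while two half-period gaps mean a mid-cell transition (a 1).

```java
import java.util.ArrayList;
import java.util.List;

public class BmcDecoder {
    // transitions: sample indices of zero crossings, with the first one
    // assumed to fall on a bit-cell boundary; period: samples per bit cell.
    static List<Integer> decode(long[] transitions, double period) {
        List<Integer> bits = new ArrayList<>();
        int i = 1;
        while (i < transitions.length) {
            long gap = transitions[i] - transitions[i - 1];
            if (gap > 0.75 * period) {
                bits.add(0); // full-period gap: no mid-cell transition
                i += 1;
            } else {
                bits.add(1); // two half-period gaps: mid-cell transition
                i += 2;
            }
        }
        return bits;
    }
}
```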
Now you've decoded the BMC into a SMPTE bit stream. The next step is to look for the sync word. Looking at the spec on Wikipedia, and from what I remember of SMPTE, I would assert that it is not enough to find a single sync word, which may appear by coincidence elsewhere in the 80-bit frame. Instead, you must find several in a row at the right interval. Then you can read your data into 80-bit SMPTE frames, and, as you read, you must keep verifying the sync words. If you don't see one where you expected it, start the search from scratch.
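A sketch of that validation, using the 16-bit sync pattern quoted in the question (0011111111111101):

```java
import java.util.List;

public class LtcSync {
    static final int[] SYNC = { 0,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1 };

    static boolean syncAt(List<Integer> bits, int pos) {
        for (int i = 0; i < SYNC.length; i++) {
            if (pos + i >= bits.size() || bits.get(pos + i) != SYNC[i]) return false;
        }
        return true;
    }

    // Offset of a sync word that repeats at 80-bit intervals `repeats`
    // times in a row, or -1 if no trustworthy alignment is found.
    static int findAlignment(List<Integer> bits, int repeats) {
        for (int pos = 0; pos + 80 * repeats <= bits.size(); pos++) {
            boolean ok = true;
            for (int k = 0; k < repeats && ok; k++) {
                ok = syncAt(bits, pos + 80 * k);
            }
            if (ok) return pos;
        }
        return -1;
    }
}
```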
Finally, once you've decoded it, you'll have to come up with some way to "flywheel", because you will almost certainly not read all the data correctly all the time (there are no checksums!). That is the nature of the beast.

Taking multiple pictures fast and uploading them in android

I am developing a security application which records a series of images and then uploads them to a server. I have a few problems, though.
1. My picture-capture code is working, but it is very slow. I call takePicture() again from inside the picture callback to take the next picture, yet I only get a couple of pictures a minute, whereas in the system camera app you can shoot much faster by tapping the shutter button rapidly. I thought my way would be the fastest possible; do you know how I can increase the speed?
2. My upload code is also working, but I'm not sure how to create an upload queue from the pictures taken. I have tried using a database, but the file variable is static and I can't put its URI into the database, because the method won't accept a static variable. I can't use a standard array, as I would like to be able to resume uploading if the phone restarts.
3. Lastly, I'm only taking still pictures because there doesn't seem to be a way to access frames while recording video. Is there some way to record video at a low frame rate, pause it, grab a frame, put that in an upload queue, and then carry on recording? I'm just guessing that if you pause a video, it is saved somewhere temporarily and recording carries on afterwards.
I would be very grateful for help with any of the three issues.
For problem number 2, try using a scheme that can handle concurrent connections, like non-blocking sockets or something similar, so that multiple images can be uploaded at once. That would make the queueing scheme unnecessary.
If possible, I would recommend using a networking library like eventlet, since it handles all of that ugly concurrent networking code for you.
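As an illustration only (and swapping the non-blocking sockets for a plain thread pool, which gives the same several-uploads-in-flight effect on Android; uploadFile() is a placeholder for whatever HTTP call you use):

```java
import java.io.File;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentUploader {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Several images upload at once, so no explicit queue structure is needed.
    public void uploadAll(List<File> images) {
        for (File image : images) {
            pool.submit(() -> uploadFile(image)); // each upload runs independently
        }
    }

    private void uploadFile(File image) {
        // Placeholder: perform the HTTP upload of a single image here.
    }
}
```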
