Android Visualizer behavior

I'm trying to use a C library (Aubio) to perform beat detection on some music playing from a MediaPlayer in Android. To capture the raw audio data, I'm using a Visualizer, which sends a byte buffer at regular intervals to a callback function, which in turn sends it to the C library through JNI.
I'm getting inconsistent results (i.e. almost no beats are detected, and the few that are don't really line up with the audio). I've checked my code multiple times and, while I can't entirely rule out a mistake on my side, I'm wondering how exactly the Android Visualizer behaves, since the documentation isn't explicit about it.
If I set the buffer size using setCaptureSize, does that mean that the captured buffer is averaged over the complete audio samples? For instance, if I divide the capture size by 2, will it still represent the same captured sound, but with half the precision on the time axis?
Is it the same with the capture rate? For instance, does setting twice the capture size with half the rate yield the same data?
Are the captures consecutive? To put it another way, if I take too long to process a capture, are the sounds played during the processing ignored when I receive the next capture?
Thanks for your insight!

Make sure the callback function actually receives the entire audio signal, for instance by counting the frames the player outputs and the frames that reach the callback.
It would help to be pointed at Visualizer documentation.
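For reference, a minimal sketch of the capture setup being discussed, assuming mediaPlayer is an already-prepared and playing MediaPlayer; it counts the bytes delivered to the callback so they can be compared against the playback duration:

import android.media.audiofx.Visualizer;

// Minimal sketch; `mediaPlayer` is assumed to be prepared and playing.
Visualizer visualizer = new Visualizer(mediaPlayer.getAudioSessionId());
visualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]); // largest allowed capture size
visualizer.setDataCaptureListener(new Visualizer.OnDataCaptureListener() {
    long totalBytes = 0;

    @Override
    public void onWaveFormDataCapture(Visualizer v, byte[] waveform, int samplingRate) {
        totalBytes += waveform.length; // compare against sampling rate * playback time later
        // forward `waveform` to the JNI/Aubio side here
    }

    @Override
    public void onFftDataCapture(Visualizer v, byte[] fft, int samplingRate) {
        // unused; only the waveform was requested below
    }
}, Visualizer.getMaxCaptureRate(), true /* waveform */, false /* fft */);
visualizer.setEnabled(true);

If the total byte count ends up far below the sampling rate times the playback time, the captures are periodic snapshots rather than a gapless stream, which would explain missed beats.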

Related

How to use noise meter package in flutter to give only few decibel readings per second

I am using noise_meter to read noise in decibels. When I run the app it records almost 120 readings per second. I don't want that many readings; is there any way to specify that I want only one or two readings per second? Thanks in advance. (Package: noise_meter.)
I am using the example code from the noise_meter GitHub repo (noise_meter example).
I tried to calculate the number of samples from the sample rate, which is 44100 in the package, but I can't work it out.
As you can see in the source code, audio_streamer uses a fixed-size buffer and a sample rate of 44100, and includes this comment: "Uses a buffer array of size 512. Whenever buffer is full, the content is sent to Flutter." So small audio blocks will arrive at the consumer frequently (as you might expect from a streamer), and it doesn't seem possible to adjust this.
The noise_meter package simply takes each block of audio and calculates the noise level, so the rate of arrival of readings is exactly the same as the rate of arrival of audio blocks from the underlying package.
Given the simplicity of the noise meter calculation, you could replace it with your own code directly on top of audio streamer. You just need to collect multiple blocks of audio together before performing the simple decibel calculation.
Alternatively, you could simply discard N out of every N+1 readings.
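For illustration only, here is a rough sketch of the first approach (collecting several blocks before computing one reading). The real packages are Dart/Flutter; this sketch is in Java and just shows the arithmetic, assuming the stream delivers blocks of 16-bit PCM samples at a known sample rate:

import java.util.ArrayList;
import java.util.List;

// Collect ~1 second of audio, then emit a single (uncalibrated) decibel value.
class NoiseAccumulator {
    private static final int SAMPLE_RATE = 44100;             // assumed rate
    private static final int SAMPLES_PER_READING = SAMPLE_RATE; // ~1 reading per second
    private final List<short[]> blocks = new ArrayList<>();
    private int buffered = 0;

    // Call this for every audio block the streamer delivers.
    void onBlock(short[] block) {
        blocks.add(block);
        buffered += block.length;
        if (buffered >= SAMPLES_PER_READING) {
            System.out.println("dB (uncalibrated): " + decibels());
            blocks.clear();
            buffered = 0;
        }
    }

    // Root-mean-square of all buffered samples, converted to a dBFS-style value.
    private double decibels() {
        double sumSquares = 0;
        for (short[] b : blocks) {
            for (short s : b) {
                double x = s / 32768.0;
                sumSquares += x * x;
            }
        }
        double rms = Math.sqrt(sumSquares / buffered);
        return 20 * Math.log10(rms + 1e-12); // epsilon avoids log(0)
    }
}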

Android AudioRecord read(...) method - will it return same values if invoked twice in a short period of time?

I'm working with Android audio, using AudioRecord to capture audio in real time, and I'm using its read(...) method to obtain audio samples. The sample rate is 48000 Hz, and in each read(...) I want to get a quarter of that, i.e. 12000 samples. What will happen if I call read() say 10 times per second? Will it return overlapping values?
The answer is "No". You won't read the same data twice out of the AudioRecord. read() blocks until at least one "chunk" of new audio becomes available. read() calls are not timing-sensitive, except in the sense that if you wait too long, parts of the AudioRecord's internal buffer will be lost/overwritten (since it is of finite size).
Source: Personal experience and many hours of testing on multiple devices, both real and emulated.
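To make the scenario concrete, here is a minimal sketch of that pattern (48 kHz mono, 12000 samples per read()); the keepRecording flag and buffer sizing are placeholders, and the RECORD_AUDIO permission is assumed:

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Each read() returns fresh samples; nothing overlaps as long as the internal
// buffer is large enough that old data isn't overwritten before you read it.
int sampleRate = 48000;
int samplesPerRead = 12000; // a quarter second of audio
int minBuf = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
// Generously sized internal buffer so slow processing doesn't drop audio.
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
        Math.max(minBuf, samplesPerRead * 2 * 4));

short[] buffer = new short[samplesPerRead];
recorder.startRecording();
while (keepRecording) { // keepRecording: your own stop flag (not shown)
    int read = recorder.read(buffer, 0, buffer.length); // blocks until data is available
    // process `read` samples here; the next call continues where this one ended
}
recorder.stop();
recorder.release();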

Measuring Android MediaRecorder delay

Android's MediaRecorder class introduces a significant delay (on the order of a second) when recording video. See e.g. Delay in preparing media recorder on android.
After extensive searching, I can't find a way to get rid of this, only workarounds avoiding MediaRecorder and using onPreviewFrame instead.
In my particular application, MediaRecorder's delay wouldn't be so bad if I could measure it programmatically -- assuming its standard deviation isn't too high under everyday background load conditions. Does anyone know how to do that?
I thought of using a FileObserver on the recorded file to find out when frames start being written but, since I don't know the pipeline delays, I couldn't draw a firm conclusion from that.
On a related note, does anyone know if the 'recording alert' sound is played before the first frame is recorded? Is there a way of turning that sound off?
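Not an answer, but a rough sketch of the FileObserver idea mentioned in the question: time MediaRecorder.start() against the first write to the output file. As noted, encoder/muxer buffering means this only gives a lower bound on the real first-frame delay. outputPath and mediaRecorder are assumed to be set up already:

import android.os.FileObserver;
import android.os.SystemClock;
import android.util.Log;

// Sketch only: `outputPath` is the recording file path, `mediaRecorder` is
// already prepared. The first MODIFY event approximates when data starts
// being written; pipeline delays are not accounted for.
final long[] startTime = new long[1];
FileObserver observer = new FileObserver(outputPath, FileObserver.MODIFY) {
    private boolean firstWrite = true;

    @Override
    public void onEvent(int event, String path) {
        if (firstWrite) {
            firstWrite = false;
            long delayMs = SystemClock.elapsedRealtime() - startTime[0];
            Log.d("RecorderDelay", "first write " + delayMs + " ms after start()");
        }
    }
};
observer.startWatching();
startTime[0] = SystemClock.elapsedRealtime();
mediaRecorder.start();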

Smooth playback of consecutive mp4 video sequences on Android

I want to play (render to a surface) two or more consecutive mp4 video sequences (each stored in a separate file on my device, and possibly not present at startup) on my Android device in a smooth manner (no stalls, flicker, etc.), so that the viewer gets the impression of watching a single continuous video. As a first step it would be sufficient to achieve this on my Nexus 7 tablet.
For displaying a single video I have been using the MediaCodec API in a similar way to http://dpsm.wordpress.com/2012/07/28/android-mediacodec-decoded/ and it works fine. If I only create (and configure) the second decoder after the first sequence has finished (i.e. after decoder.stop() and decoder.release() have been called on the first one), the transition between the sequences is clearly visible. For a smooth transition between two video sequences I was thinking of an init step in which the second decoder is already initialized via decoder.configure(format, surface, null, 0) during playback of the first one, and its first frame is also queued via decoder.queueInputBuffer().
But doing so results in the following error:
01-13 16:20:37.182: E/BufferQueue(183): [SurfaceView] connect: already connected (cur=3, req=3)
01-13 16:20:37.182: E/MediaCodec(9148): native_window_api_connect returned an error: Invalid argument (-22)
01-13 16:20:37.182: E/Decoder Init(9148): Exception decoder.configure: java.lang.IllegalStateException
It seems to me that one Surface can only be used by one decoder at a time. So, is there any other way of doing this? Maybe using OpenGL?
Best,
Alex.
What you describe with using multiple instances of MediaCodec will work, but you can only have one "producer" connected to a Surface at a time. You'd need to tear down the first before you can proceed with the second, and I'm not sure how close you can get the timing.
What you can do instead is decode to a SurfaceTexture, and then draw that on the SurfaceView (using, as you thought, OpenGL).
You can see an example of rendering MP4 files to a SurfaceTexture in the ExtractMpegFramesTest example. From there you just need to render the texture to your surface (SurfaceView? TextureView?), using something like the STextureRender class in CameraToMpegTest.
There's some additional examples in Grafika, though the video player there is closer to what you already have (decoder outputs to TextureView).
Incidentally, you'll need to figure out how much of a delay to put between the last frame of movie N and the first frame of movie N+1. If the recordings were taken at fixed frame rates it's easy enough, but some sources (e.g. screenrecord) don't record that way.
Update: If you can guarantee that your movie clips have the same characteristics (size, encoding type -- essentially everything in MediaFormat), there's an easier way. You can flush() the decoder when you hit end-of-stream and just start feeding in the next file. I use this to loop video in the Grafika video player (see MoviePlayer#doExtract()).
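To make the update concrete, here is a hedged sketch of that approach, loosely modelled on the looping logic in MoviePlayer#doExtract(): feed input until the current clip runs dry, signal end-of-stream so pending frames drain, then flush() and continue feeding from the next extractor. It assumes decoder is already configured with the output Surface and started, all clips share the same MediaFormat, and error handling is omitted; the file paths and track index 0 are placeholders.

import android.media.MediaCodec;
import android.media.MediaExtractor;
import java.nio.ByteBuffer;

String[] clips = { "/sdcard/clip1.mp4", "/sdcard/clip2.mp4" }; // hypothetical paths
int clipIndex = 0;
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(clips[clipIndex]);
extractor.selectTrack(0); // assumes video is track 0

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
ByteBuffer[] inputBuffers = decoder.getInputBuffers();
boolean inputDone = false;
boolean allDone = false;

while (!allDone) {
    // Feed input from the current clip.
    if (!inputDone) {
        int inIndex = decoder.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            int size = extractor.readSampleData(inputBuffers[inIndex], 0);
            if (size < 0) {
                // Clip exhausted: signal EOS so the queued frames still get decoded.
                decoder.queueInputBuffer(inIndex, 0, 0, 0,
                        MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                inputDone = true;
            } else {
                decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                extractor.advance();
            }
        }
    }
    // Drain output to the Surface.
    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    if (outIndex >= 0) {
        boolean eos = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
        decoder.releaseOutputBuffer(outIndex, info.size != 0); // render if there is data
        if (eos) {
            if (++clipIndex < clips.length) {
                // Switch clips: reset the decoder but keep its configuration/Surface.
                extractor.release();
                extractor = new MediaExtractor();
                extractor.setDataSource(clips[clipIndex]);
                extractor.selectTrack(0);
                decoder.flush();
                inputDone = false;
            } else {
                allDone = true;
            }
        }
    }
}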
Crazy idea: try merging the two videos into one. They are both on your device, so it shouldn't take too long. And you can implement the fade effect yourself.

Video recording to a circular buffer on Android

I'm looking for the best way (if any...) to capture continuous video to a circular buffer on the SD card, allowing the user to capture events after they have happened.
The standard video recording API allows you to just write directly to a file, and when you reach the limit (set by the user, or the capacity of the SD card) you have to stop and restart the recording. This creates up to a 2 second long window where the recording is not running. This is what some existing apps like DailyRoads Voyager already do. To minimize the chance of missing something important you can set the splitting time to something long, like 10 minutes, but if an event occurs near the end of this timespan you are wasting space by storing the 9 minutes of nothing at the beginning.
So, my idea for now is as follows: I'll have a large file that will serve as the buffer. I'll use some code I've found to capture the frames and save them to the file myself, wrapping around at the end. When the user wants to keep some part, I'll mark it by pointers to the beginning and end in the buffer. The recording can continue as before, skipping over regions that are marked for retention.
After the recording is stopped, or perhaps while it is still running on a background thread (depending on phone/card speed), I'll copy the marked region out to another file and remove the overwrite protection.
Main question, if you don't care about the details above: I can't seem to find a way to convert the individual frames to a video file in the Android SDK. Is it possible? If not, are there any available libraries, maybe in native code, that can do this?
I don't really care about the big buffer of uncompressed frames, but the exported videos should be compressed in an Android-friendly format. But if there is a way to compress the buffer I would like to hear about it.
Thank you.
In Android's MediaRecorder there are two ways to specify the output: one is a filename and the other is a FileDescriptor.
Using the static method ParcelFileDescriptor.fromSocket() you can create a ParcelFileDescriptor pointing to a socket; then call getFileDescriptor() to get the FileDescriptor to pass to the MediaRecorder.
Since you can get the encoded video from the socket (as if you were creating a local web server), you will be able to access individual frames of the video, although not so directly, because you will need to decode it first.
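A rough sketch of that setup, assuming a loopback socket pair; the port, output format, encoder and camera/preview configuration are placeholders, and exception handling and threading are left out. Note that MP4 written to a non-seekable descriptor needs extra care, since the header is normally written at the end of the file, so a streaming-friendly container may be preferable.

import android.media.MediaRecorder;
import android.os.ParcelFileDescriptor;
import java.io.InputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Point MediaRecorder at a FileDescriptor backed by a local socket, then read
// the encoded stream from the other end.
ServerSocket server = new ServerSocket(8888, 1, InetAddress.getByName("127.0.0.1"));
Socket sender = new Socket("127.0.0.1", 8888); // the recorder writes into this end
Socket receiver = server.accept();             // your code reads from this end

MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA); // camera/preview setup omitted
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setOutputFile(ParcelFileDescriptor.fromSocket(sender).getFileDescriptor());
recorder.prepare();
recorder.start();

// On a background thread: consume the encoded bytes as they arrive.
InputStream encoded = receiver.getInputStream();
byte[] chunk = new byte[8192];
int n;
while ((n = encoded.read(chunk)) != -1) {
    // Parse/buffer `n` bytes of encoded video here (decode it before using frames).
}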
