I am implementing an application that saves recorded video into separate video files based on a certain amount of time.
To achieve that, I implemented a custom camera and call MediaRecorder.stop() and MediaRecorder.start() in a loop.
But this approach creates a noticeable lag while the MediaRecorder restarts (stop and start). Is it possible to seamlessly stop and start recording using MediaRecorder or any third-party library?
Any help is highly appreciated.
I believe the best solution for implementing chunked recording is to set a maximum duration on the MediaRecorder object:
mMediaRecorder.setMaxDuration(CHUNK_TIME);
Then attach an info listener; it will notify you when recording hits the maximum chunk time:
mMediaRecorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
            // restartVideo()
        }
    }
});
In restartVideo() you should first release the previous MediaRecorder state and then start recording again.
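For reference, a minimal sketch of what restartVideo() could look like; the nextChunkFile() helper and the camera/preview plumbing are assumptions, not from the original post:

private void restartVideo() {
    // The recorder has already stopped at max duration; reset it for reuse.
    mMediaRecorder.reset();
    mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
    mMediaRecorder.setOutputFile(nextChunkFile()); // hypothetical helper, e.g. chunk_0001.mp4
    mMediaRecorder.setMaxDuration(CHUNK_TIME);
    mMediaRecorder.setOnInfoListener(infoListener); // re-attach so the next chunk also rotates
    try {
        mMediaRecorder.prepare();
        mMediaRecorder.start();
    } catch (IOException e) {
        Log.e(TAG, "Failed to restart recording", e);
    }
}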
You can create two instances of MediaRecorder that overlap slightly (i.e., when the stream is close to the end of the first chunk, you prepare and start the second one). It is possible to record two video files with two MediaRecorders at the same time as long as they capture only video. Unfortunately, sharing the microphone between two MediaRecorder instances is not supported.
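A rough sketch of that overlap timing, assuming both recorders are video-only and already wired to the camera (for example via a camera2 capture session, which is omitted here); the names and the Handler-based scheduling are illustrative:

// Start the second recorder slightly before stopping the first, so the
// chunks overlap instead of leaving a gap.
private final Handler handler = new Handler(Looper.getMainLooper());

private void scheduleOverlap(final MediaRecorder current, final MediaRecorder next) {
    handler.postDelayed(new Runnable() {
        @Override
        public void run() {
            try {
                next.prepare();
                next.start();   // second chunk begins a moment early...
                current.stop(); // ...so stopping the first leaves no gap
                current.reset();
            } catch (IOException e) {
                Log.e(TAG, "Overlapped restart failed", e);
            }
        }
    }, CHUNK_TIME - OVERLAP_MS); // fire shortly before the current chunk ends
}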
I want to implement loop video recording (e.g., DVR devices for vehicles/cars work this way).
MediaRecorder has a setMaxDuration() method:
After recording reaches the specified duration, a notification will be
sent to the MediaRecorder.OnInfoListener with a "what" code of
MEDIA_RECORDER_INFO_MAX_DURATION_REACHED and recording will be
stopped. Stopping happens asynchronously, there is no guarantee that
the recorder will have stopped by the time the listener is notified.
So when it reaches the max duration, recording stops, but asynchronously. How can I start a new recording session if the previous one may still be in progress?
Should I create a new instance of MediaRecorder for the next recording session? Will it work fine?
private val infoListener: MediaRecorder.OnInfoListener =
    MediaRecorder.OnInfoListener { mr, what, extra ->
        when (what) {
            MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED -> {
                // I want to start a new recording session
            }
            ...
        }
    }
For a continuous recording application, setMaxFileSize() is more useful: the MediaRecorder will send a MEDIA_RECORDER_INFO_MAX_FILESIZE_APPROACHING code to the OnInfoListener, at which point your application can call setNextOutputFile() to set the next filename and let the MediaRecorder continue into the new file without stopping and restarting the recording. If you know the video and audio bitrates, you can estimate the file size corresponding to your desired duration. It will not be the exact duration, but it is still useful for basic chunked recording.
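On API 26 and later that flow looks roughly like this; the CHUNK_BYTES constant and createChunkFile() helper are assumptions for illustration:

// Requires API 26 (Android 8.0) for setNextOutputFile().
recorder.setMaxFileSize(CHUNK_BYTES); // roughly (videoBitrate + audioBitrate) / 8 * desiredSeconds
recorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_FILESIZE_APPROACHING) {
            try {
                // Hand the recorder its next file; recording continues without a restart.
                mr.setNextOutputFile(createChunkFile());
            } catch (IOException e) {
                Log.e(TAG, "Could not set next output file", e);
            }
        } else if (what == MediaRecorder.MEDIA_RECORDER_INFO_NEXT_OUTPUT_FILE_STARTED) {
            // The recorder has switched over; the previous chunk is now complete.
        }
    }
});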
Your application will need to keep track of the files you create and delete the old ones if you want to implement a circular recording scheme with a limited total storage size.
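A simple way to do that bookkeeping, with illustrative names and a made-up size budget:

// Drop the oldest chunks once the total size exceeds the storage budget.
private final ArrayDeque<File> chunks = new ArrayDeque<>();

private void trimStorage(long maxTotalBytes) {
    long total = 0;
    for (File f : chunks) {
        total += f.length();
    }
    while (total > maxTotalBytes && chunks.size() > 1) {
        File oldest = chunks.pollFirst(); // chunks are queued in recording order
        total -= oldest.length();
        oldest.delete();
    }
}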
When I play a sound in my app it comes out clicky and distorted. Here is a recording: recording.
Here is the sound file as it was uploaded to Android Studio: success sound cue
Here is the function that plays the sound from res/raw:
public void playCorrectAnswerSound() {
    final MediaPlayer mp = MediaPlayer.create(this, R.raw.correct);
    // Release the player once playback finishes so instances don't accumulate.
    mp.setOnCompletionListener(MediaPlayer::release);
    mp.start();
}
Here's how I call it:
Thread t = new Thread() {
    public void run() {
        playCorrectAnswerSound();
    }
};
t.start();
This is my first time debugging a sound-related issue. I don't know what else to include in this post, so if you need more info please say so.
EDIT: I was asked to record more of the distortion. Here it is. Also, after more testing: the distortion is absent on my physical device but present on three different emulators.
I'm going to say this is stuttering due to underrun (starvation) of the MediaPlayer's internal playback buffer. That is, the MediaPlayer can't supply data fast enough to keep up with the sound hardware (which suggests a severe lack of processing power). If the buffer starves, it'll start to play back old data (because it's a circular buffer). This causes a sharp phase transition, which sounds like a "click". Presumably the MediaPlayer recovers quickly enough that the "correct" sound resumes playing shortly thereafter.
Here is a picture of the spectrum from Audacity, 0-4 kHz. The first row is the clean .mp3; the next four rows are the distorted recordings (in no particular order). All rows have been aligned in time, and are roughly the same amplitude. The large vertical stripes in the last four rows represent the distortion/clicks that you hear.
My goal is to play local file while recording device's microphone input with low-latency.
I've chosen the Superpowered library because, according to its documentation, it provides low-latency audio.
I've created the player using SuperpoweredAdvancedAudioPlayer and SuperpoweredAndroidAudioIO and it plays fine.
SuperpoweredAndroidAudioIO has a constructor with the parameters boolean enableInput, boolean enableOutput. Currently I'm using enableInput == false and enableOutput == true. When I set both parameters to true, there is no effect.
I wonder if it is possible to record file and play other file simultaneously?
There is also a SuperpoweredRecorder class in the library, but it says it is not for writing directly to disk, and that you need to use the createWAV, fwrite, and closeWAV methods.
I've tried implementing the recorder separately, but the quality is not good (it is two to three times faster than the real recording, and the sound is distorted).
Here is the simplest piece of code for recording I used:
void SuperpoweredFileRecorder::start(const char *destinationPath) {
file = createWAV(destinationPath, sampleRate, 2);
audioIO = new SuperpoweredAndroidAudioIO(sampleRate, bufferSize, true, false, audioProcessing, NULL, bufferSize); // Start audio input/output.
}
void SuperpoweredFileRecorder::stop() {
closeWAV(file);
audioIO->stop();
}
static bool audioProcessing(void *clientdata, short int *audioInputOutput, int numberOfSamples, int samplerate) {
fwrite(audioInputOutput, sizeof(short int), numberOfSamples, file);
return false;
}
Probably I cannot use Superpowered for that purpose and need to just make recording with OpenSL ES directly.
Thanks in advance!
After experiments I found the solution.
SuperpoweredRecorder works fine for recording tracks;
I've created two separate SuperpoweredAndroidAudioIO instances: one for playback and one for the recorder. After some synchronization work it performs well (I minimized latency to a very low level, so it suits my needs).
I post some code snippet with the idea I implemented:
https://bitbucket.org/snippets/kasurd/Mynnp/nativesuperpoweredrecorder-with
Hope it helps somebody!
You can do this with one instance of the SuperpoweredAndroidAudioIO with enableInput and enableOutput set to true.
The audio processing callback (audioProcessing() in your case) receives audio (microphone) in the audioInputOutput parameter. Just pass that to your SuperpoweredRecorder, and it will write it onto disk.
After that, do your SuperpoweredAdvancedAudioPlayer processing, and convert the result into audioInputOutput. That will go to the audio output.
So it's like, in pseudo-code:
audioProcessing(audioInputOutput) {
    recorder->process(audioInputOutput)
    player->process(some_buffer)
    float_to_short_int(some_buffer, audioInputOutput)
}
Never do any fwrite in the audio processing callback, as it must complete within a very short time, and disk operations may be too slow.
For me this works when I double numberOfSamples:
fwrite(audioInputOutput, sizeof(short int), numberOfSamples * 2, file);
This gives clean stereo output: numberOfSamples counts frames, and each interleaved stereo frame contains two short samples, so the element count passed to fwrite must be doubled.
I am trying to play a progressive HTTP stream (e.g. http://server.com/video.mp4).
When I use the standard Google media player (VideoView from the android package) and register an OnBufferingUpdateListener, I get a buffer percentage that reflects the download state of the whole video. This player also has a loading view where I can see the buffer state.
This buffer percentage and loading view show me how much of the video has been downloaded.
Now, when I use the Vitamio player, the OnBufferingUpdateListener reports 99 percent after a few seconds, and there is no loading view either. And when I pause playback, it stops buffering immediately instead of continuing to buffer the way the Google VideoView does. Continued buffering is very useful on a slow HTTP stream.
Is there a way to make the Vitamio video player buffer video files the same way the Google video player does?
Thank you,
Daniel
Sorry, I posted that question as the wrong user. Here is what I have tried:
VideoView from android.widget (the Android default, which plays only a few video formats) and VideoView from the io.vov.vitamio.widget package (Vitamio, which plays most video formats) have the same structure. In both you can register an OnBufferingUpdateListener that reports the buffer state in percent:
videoview.setOnBufferingUpdateListener(new io.vov.vitamio.MediaPlayer.OnBufferingUpdateListener() {
    public void onBufferingUpdate(io.vov.vitamio.MediaPlayer mp, int i) {
        Log.v(TAG, "Buffer percentage done: " + i);
    }
});
or with the Android default VideoView:
videoview.setOnBufferingUpdateListener(new android.media.MediaPlayer.OnBufferingUpdateListener() {
    public void onBufferingUpdate(android.media.MediaPlayer mp, int i) {
        Log.v(TAG, "Buffer percentage done: " + i);
    }
});
If I use android.widget.VideoView, the buffer percentage increases slowly until it reaches 100% (the video file has been downloaded completely), and it keeps delivering onBufferingUpdate calls when I press the pause button.
When I use io.vov.vitamio.widget.VideoView, the percentage reaches 100% within seconds. Then the video starts and the OnBufferingUpdateListener never gets called again (getBufferPercentage always returns 99 percent, which seems to be the reason). And as I said, it seems to stop buffering when I press the pause button.
I think buffering simply works differently in Vitamio, which is a real problem. Especially when I stream videos from the web and the video's data rate is higher than the download speed, I need to prebuffer the video by pressing pause and waiting until enough data has been downloaded to watch it smoothly. I hope you see what I mean. Thank you.
I have made a library to concatenate two videos, using the mp4parser library.
With it I can pause and resume recording a video (after it records the second video, it appends it to the first one).
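For context, a typical mp4parser concatenation looks roughly like the sketch below; this is the commonly used pattern, not necessarily the poster's exact code (MovieCreator.build() takes a path in mp4parser 1.x and a channel in older versions):

// (plus the usual java.io / java.util / java.nio.channels.FileChannel imports)
import com.coremedia.iso.boxes.Container;
import com.googlecode.mp4parser.authoring.Movie;
import com.googlecode.mp4parser.authoring.Track;
import com.googlecode.mp4parser.authoring.builder.DefaultMp4Builder;
import com.googlecode.mp4parser.authoring.container.mp4.MovieCreator;
import com.googlecode.mp4parser.authoring.tracks.AppendTrack;

public static void concatenate(String firstPath, String secondPath, String outPath) throws IOException {
    Movie[] movies = { MovieCreator.build(firstPath), MovieCreator.build(secondPath) };
    // Collect the video and audio tracks of both movies, in order.
    List<Track> videoTracks = new ArrayList<>();
    List<Track> audioTracks = new ArrayList<>();
    for (Movie m : movies) {
        for (Track t : m.getTracks()) {
            if (t.getHandler().equals("vide")) videoTracks.add(t);
            if (t.getHandler().equals("soun")) audioTracks.add(t);
        }
    }
    // Append the tracks back to back and write the result out.
    Movie result = new Movie();
    if (!videoTracks.isEmpty()) result.addTrack(new AppendTrack(videoTracks.toArray(new Track[0])));
    if (!audioTracks.isEmpty()) result.addTrack(new AppendTrack(audioTracks.toArray(new Track[0])));
    Container out = new DefaultMp4Builder().build(result);
    FileChannel fc = new FileOutputStream(outPath).getChannel();
    out.writeContainer(fc);
    fc.close();
}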
Now my boss has told me to write a wrapper and use this approach only for phones that lack hardware support for pausing a video recording. For phones that have such support (the Samsung Galaxy S2 and Galaxy S1 can pause a video recording in their camera application), I need to do this natively, without libraries, so that it is fast.
How can I implement this natively if, as the MediaRecorder state diagram shows (http://developer.android.com/reference/android/media/MediaRecorder.html), there is no pause state?
I have decompiled the Camera.apk app from a Samsung Galaxy Ace, and in CamcorderEngine.class there is a method like this:
public void doPauseVideoRecordingSync()
{
    Log.v("CamcorderEngine", "doPauseVideoRecordingSync");
    if (this.mMediaRecorder == null)
    {
        Log.e("CamcorderEngine", "MediaRecorder is not initialized.");
        return;
    }
    if (!this.mMediaRecorderRecording)
    {
        Log.e("CamcorderEngine", "Recording is not started yet.");
        return;
    }
    try
    {
        this.mMediaRecorder.pause();
        enableAlertSound();
        return;
    }
    catch (RuntimeException localRuntimeException)
    {
        Log.e("CamcorderEngine", "Could not pause media recorder. ", localRuntimeException);
        enableAlertSound();
    }
}
If I try this.mMediaRecorder.pause(); in my own code, it does not work. How is this possible when they use the same import (android.media.MediaRecorder)? Have they rewritten the whole class at the system level?
Is it possible to take the input stream of the second video (while recording it) and directly append this data to my first video?
My concatenate method takes two parameters (the two videos, both as FileInputStreams); is it possible to take the InputStream from the recording function and pass it as the second parameter?
If I try this.mMediaRecorder.pause();
The MediaRecorder class does not have a pause() function, so there is obviously a custom MediaRecorder class on this specific device. This is not unusual: the only thing required of OEMs is that the device passes the Android compatibility tests, and there is no restriction on adding functionality.
Is it possible to take the input stream of the second video (while recording it), and directly append this data to my first video?
I am not sure you can do this, because the video stream is encoded data (codec headers, key frames, and so on), and simply concatenating two streams into one file will not, in my opinion, produce a valid video file.
Basically, what you can do is:
get raw image data from the camera preview surface (see Camera.setPreviewCallback())
use android.media.MediaCodec to encode the video
and then use a FileOutputStream to write the encoded data to a file.
This gives you the flexibility you want, since in this case your app decides which frames go into the encoder and which do not (see the sketch below).
However, it may be overkill for your specific project, and some performance issues may arise.
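A heavily condensed sketch of that pipeline; camera setup, NV21-to-NV12 color conversion, timestamps, and end-of-stream handling are all omitted, and the names (width, height, data, presentationTimeUs, fileOutputStream) are placeholders. Note that the result is a raw H.264 elementary stream, not a playable MP4:

// Configure and start an H.264 encoder (API 21+ buffer access shown).
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();

// In onPreviewFrame(byte[] data, Camera camera): submit a frame only if you want to keep it.
int inIndex = encoder.dequeueInputBuffer(10000);
if (inIndex >= 0) {
    ByteBuffer in = encoder.getInputBuffer(inIndex);
    in.clear();
    in.put(data); // assumes data was converted to the encoder's color format
    encoder.queueInputBuffer(inIndex, 0, data.length, presentationTimeUs, 0);
}

// Drain whatever the encoder has produced and append it to the file.
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outIndex = encoder.dequeueOutputBuffer(info, 10000);
while (outIndex >= 0) {
    ByteBuffer out = encoder.getOutputBuffer(outIndex);
    byte[] chunk = new byte[info.size];
    out.get(chunk);
    fileOutputStream.write(chunk); // raw H.264; wrap with MediaMuxer for a playable MP4
    encoder.releaseOutputBuffer(outIndex, false);
    outIndex = encoder.dequeueOutputBuffer(info, 10000);
}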
PS. Oh, and by the way, take a look at MediaMuxer; maybe it can help you too: http://developer.android.com/reference/android/media/MediaMuxer.html
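In case it helps, the MediaMuxer flow is roughly this (API 18+); the track format must come from the encoder after it reports INFO_OUTPUT_FORMAT_CHANGED, and outputPath, encodedBuffer and bufferInfo are placeholders:

// Write encoded samples into an MP4 container instead of a raw stream.
MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int track = muxer.addTrack(encoder.getOutputFormat()); // after INFO_OUTPUT_FORMAT_CHANGED
muxer.start();
// For each encoded buffer drained from the codec:
muxer.writeSampleData(track, encodedBuffer, bufferInfo);
// When recording ends:
muxer.stop();
muxer.release();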