I'm getting an error when I start a live stream with the MediaRecorder API via the libstreaming Android library, on a Sony Z device, streaming to a Wowza server.
07-20 10:49:37.832: E/MediaRecorder(6752): start failed: -19
ConfNotSupportedException
This error is thrown when I set the video frame rate to anything other than 15 or 30 fps (i.e. 15 < fps < 30, or fps > 30). With fps = 15 or fps = 30 the error does not occur. It only happens on the Sony Z; other devices I tried (Samsung, HTC, Nexus) are fine.
I downloaded the Wowza GoCoder app to check whether the problem is specific to the Sony Z, and there I can change the fps from 15 to 60 without any error. So I suspect the issue is in the libstreaming library.
My configuration code:
// Configures the SessionBuilder
mSession = SessionBuilder.getInstance()
        .setContext(getApplicationContext())
        .setAudioEncoder(SessionBuilder.AUDIO_AAC)
        .setAudioQuality(new AudioQuality(8000, 16000))
        .setVideoEncoder(SessionBuilder.VIDEO_H264)
        .setSurfaceView(mSurfaceView)
        .setPreviewOrientation(0)
        .setCallback(this)
        .build();

// Configures the RTSP client
mClient = new RtspClient();
mClient.setSession(mSession);
mClient.setCallback(this);

// Use this to force streaming with the MediaRecorder API
mSession.getVideoTrack().setStreamingMethod(MediaStream.MODE_MEDIARECORDER_API);
The code below starts the stream:
protected void encodeWithMediaRecorder() throws IOException, ConfNotSupportedException {
    Log.d(TAG, "Video encoded using the MediaRecorder API");

    // We need a local socket to forward data output by the camera to the packetizer
    createSockets();

    // Reopens the camera if needed
    destroyCamera();
    createCamera();

    // The camera must be unlocked before the MediaRecorder can use it
    unlockCamera();

    try {
        mMediaRecorder = new MediaRecorder();
        mMediaRecorder.setCamera(mCamera);
        mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        mMediaRecorder.setVideoEncoder(mVideoEncoder);
        mMediaRecorder.setPreviewDisplay(mSurfaceView.getHolder().getSurface());
        mMediaRecorder.setVideoSize(mRequestedQuality.resX, mRequestedQuality.resY);
        mMediaRecorder.setVideoFrameRate(mRequestedQuality.framerate);

        // The bandwidth actually consumed is often above what was requested
        mMediaRecorder.setVideoEncodingBitRate((int) (mRequestedQuality.bitrate * 0.8));

        // We write the output of the camera to a local socket instead of a file!
        // This one little trick makes streaming feasible quite simply: data from the camera
        // can then be manipulated at the other end of the socket
        FileDescriptor fd = null;
        if (sPipeApi == PIPE_API_PFD) {
            fd = mParcelWrite.getFileDescriptor();
        } else {
            fd = mSender.getFileDescriptor();
        }
        mMediaRecorder.setOutputFile(fd);

        mMediaRecorder.prepare();
        mMediaRecorder.start();
    } catch (Exception e) {
        throw new ConfNotSupportedException(e.getMessage());
    }
Does anyone have any ideas? Thanks!
Commenting out
mMediaRecorder.setVideoFrameRate(mRequestedQuality.framerate);
solved my problem.
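If you still need a frame rate other than the defaults, a safer approach is to ask the camera which preview fps ranges it actually supports before handing a value to setVideoFrameRate(). This is only a heuristic, since the capture rates MediaRecorder accepts are not exposed directly on older APIs, but it avoids obviously unsupported values. A minimal sketch using the legacy android.hardware.Camera API; the helper name pickSupportedFrameRate is mine, not part of libstreaming:
// Hypothetical helper (not from libstreaming): returns the requested fps if the camera
// reports a preview fps range covering it, otherwise falls back to the highest supported max.
static int pickSupportedFrameRate(android.hardware.Camera camera, int requestedFps) {
    java.util.List<int[]> ranges = camera.getParameters().getSupportedPreviewFpsRange();
    int fallback = 15; // conservative default
    for (int[] range : ranges) {
        // ranges are reported in units of fps * 1000
        int min = range[android.hardware.Camera.Parameters.PREVIEW_FPS_MIN_INDEX] / 1000;
        int max = range[android.hardware.Camera.Parameters.PREVIEW_FPS_MAX_INDEX] / 1000;
        if (requestedFps >= min && requestedFps <= max) {
            return requestedFps; // the device claims to support this rate
        }
        fallback = Math.max(fallback, max);
    }
    return Math.min(fallback, requestedFps);
}
You could then pass the result into the VideoQuality you give to libstreaming instead of a hard-coded value; alternatively, CamcorderProfile.get(...).videoFrameRate gives a rate the device is known to record at.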
Related
I'm developing an Android native app where I record the screen video stream (encoding it with the native AMediaCodec library to video/avc) and mux it with an AAC (audio/mp4a-latm) audio track. My code works just fine on several devices, but I have problems on some others (Huawei P8 and P10 Lite, running Android 6.0.0 and 7.0 respectively, and a Nexus 5 running Android 6.0.1). The issue is that whenever I try to add the second track to the muxer (no matter in which order I add them), it fails and returns a -10000 error code.
I've simplified the problem by trying to just mux an audio and a video file together; the results are the same. In this simplified version I use two AMediaExtractors to get the audio and video formats used to configure the AMediaMuxer, but when I add the second track I still get an error. Here's the code:
const auto videoSample = "videosample.mp4";
const auto audioSample = "audiosample.aac";
const auto filePath = "muxed_file.mp4";

auto* extractorV = AMediaExtractor_new();
AMediaExtractor_setDataSource(extractorV, videoSample);
AMediaExtractor_selectTrack(extractorV, 0U); // here I take care to select the right "video/avc" track
auto* videoFormat = AMediaExtractor_getTrackFormat(extractorV, 0U);

auto* extractorA = AMediaExtractor_new();
AMediaExtractor_setDataSource(extractorA, audioSample);
AMediaExtractor_selectTrack(extractorA, 0U); // here I take care to select the right "mp4a-latm" track
auto* audioFormat = AMediaExtractor_getTrackFormat(extractorA, 0U);

auto fd = open(filePath, O_WRONLY | O_CREAT | O_TRUNC, 0666);
auto* muxer = AMediaMuxer_new(fd, AMEDIAMUXER_OUTPUT_FORMAT_MPEG_4);

auto videoTrack = AMediaMuxer_addTrack(muxer, videoFormat); // the operation succeeds: videoTrack is 0
auto audioTrack = AMediaMuxer_addTrack(muxer, audioFormat); // error: audioTrack is -10000

AMediaExtractor_seekTo(extractorV, 0, AMEDIAEXTRACTOR_SEEK_CLOSEST_SYNC);
AMediaExtractor_seekTo(extractorA, 0, AMEDIAEXTRACTOR_SEEK_CLOSEST_SYNC);

AMediaMuxer_start(muxer);
Is there something wrong with my code? Is it something that is not supposed to work on Android prior to 8, or is it pure coincidence? I've read a lot of posts here on SO (especially by #fadden), but I'm not able to figure it out.
Let me give you some context:
the failure is independent of the order in which I add the two tracks: it is always the second AMediaMuxer_addTrack() call that fails
the audio and video tracks should be fine: when I mux only one of them, everything works well even on the Huaweis and the Nexus 5, and I obtain correct output files with either the audio or the video track alone
I tried to move the AMediaExtractor_seekTo() calls to other positions, without success
the same code works just fine on other devices (OnePlus 5 and Nokia 7 plus, both running Android >= 8.0)
Just for completeness, this is the code I later use to obtain the output mp4 file:
AMediaMuxer_start(muxer);

// mux the video track
std::array<uint8_t, 256U * 1024U> videoBuf;
AMediaCodecBufferInfo videoBufInfo{};
videoBufInfo.flags = AMediaExtractor_getSampleFlags(extractorV);

bool videoEos{};
while (!videoEos) {
    auto ts = AMediaExtractor_getSampleTime(extractorV);
    videoBufInfo.presentationTimeUs = std::max(videoBufInfo.presentationTimeUs, ts);
    videoBufInfo.size = AMediaExtractor_readSampleData(extractorV, videoBuf.data(), videoBuf.size());
    if (videoBufInfo.presentationTimeUs == -1 || videoBufInfo.size < 0) {
        videoEos = true;
    } else {
        AMediaMuxer_writeSampleData(muxer, videoTrack, videoBuf.data(), &videoBufInfo);
        AMediaExtractor_advance(extractorV);
    }
}

// mux the audio track
std::array<uint8_t, 256U * 1024U> audioBuf;
AMediaCodecBufferInfo audioBufInfo{};
audioBufInfo.flags = AMediaExtractor_getSampleFlags(extractorA);

bool audioEos{};
while (!audioEos) {
    audioBufInfo.size = AMediaExtractor_readSampleData(extractorA, audioBuf.data(), audioBuf.size());
    if (audioBufInfo.size < 0) {
        audioEos = true;
    } else {
        audioBufInfo.presentationTimeUs = AMediaExtractor_getSampleTime(extractorA);
        AMediaMuxer_writeSampleData(muxer, audioTrack, audioBuf.data(), &audioBufInfo);
        AMediaExtractor_advance(extractorA);
    }
}

AMediaMuxer_stop(muxer);
AMediaMuxer_delete(muxer);
close(fd);

AMediaFormat_delete(audioFormat);
AMediaExtractor_delete(extractorA);
AMediaFormat_delete(videoFormat);
AMediaExtractor_delete(extractorV);
I'm testing the libstreaming library. My app: one device streams video from its camera to another device via RTSP. Everything works perfectly on KitKat devices, but my Huawei P8 Lite (Lollipop) can't run the stream because of:
W/AudioSystem: AudioFlinger server died!
W/IMediaDeathNotifier: media server died
E/MediaPlayer: error (100, 0)
E/MediaPlayer: Error (100,0)
My streaming (server) side:
// Configures the SessionBuilder
SessionBuilder.getInstance()
        .setSurfaceView(surfaceView)
        .setPreviewOrientation(90)
        .setContext(getApplicationContext())
        .setAudioEncoder(SessionBuilder.AUDIO_NONE)
        .setAudioQuality(new AudioQuality(16000, 32000))
        .setVideoEncoder(SessionBuilder.VIDEO_H264)
        .setVideoQuality(new VideoQuality(320, 240, 20, 500000));

// Starts the RTSP server
this.startService(new Intent(this, RtspServer.class));
The playback (client) side:
private void play() {
    if (mediaPlayer == null) {
        mediaPlayer = new MediaPlayer();
    }
    setErrorListener();
    mediaPlayer.setDisplay(surfaceHolder);
    mediaPlayer.setOnPreparedListener(this);
    try {
        // RTSP SERVER URI
        String videoUri = "rtsp://192.168.1.1:8086?/";
        mediaPlayer.setDataSource(this, Uri.parse(videoUri));
        mediaPlayer.prepareAsync();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
When the Lollipop device is the one streaming, the KitKat device has no issues playing the stream. Why does playback not work on this particular device?
It's solved.
For some reason a URL ending with "?/" doesn't work on Lollipop:
String videoUri = "rtsp://192.168.1.1:8086?/";
Then I edited the Parse() method in libstreaming and removed the whole part that parameterizes the session using the URL.
And I changed videoUri to a version without the "?":
String videoUri ="rtsp://192.168.1.1:8086/";
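As a side note, on the playback device it can also help to recover explicitly when the media server dies ("media server died", error (100,0)) instead of leaving the player stuck. A minimal sketch of one possible setErrorListener() implementation, assuming the mediaPlayer field and play() method shown above:
// Sketch: recreate the player when the framework reports that the media server died
// (what == MediaPlayer.MEDIA_ERROR_SERVER_DIED, i.e. error code 100).
private void setErrorListener() {
    mediaPlayer.setOnErrorListener(new MediaPlayer.OnErrorListener() {
        @Override
        public boolean onError(MediaPlayer mp, int what, int extra) {
            if (what == MediaPlayer.MEDIA_ERROR_SERVER_DIED) {
                mp.release();
                mediaPlayer = null;
                play(); // recreate the player and reconnect to the RTSP server
            }
            return true; // error handled, don't fall through to OnCompletionListener
        }
    });
}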
I am working on a feature where, when a button is pressed, the app launches voice recognition and at the same time records what the user says. Code as follows:
button_start.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View arg0, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            if (pressed == false) {
                Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
                intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, "voice.recognition.test");
                intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "zh-HK");
                intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 1);
                sr.startListening(intent);
                Log.i("111111", "11111111");
                pressed = true;
            }
            recordAudio();
        }
        if (event.getAction() == MotionEvent.ACTION_UP || event.getAction() == MotionEvent.ACTION_CANCEL) {
            stopRecording();
        }
        return false;
    }
});
}
public void recordAudio() {
    isRecording = true;
    try {
        mediaRecorder = new MediaRecorder();
        mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        mediaRecorder.setOutputFile(audioFilePath);
        mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        mediaRecorder.prepare();
    } catch (Exception e) {
        e.printStackTrace();
    }
    mediaRecorder.start();
}
public void stopRecording() {
    if (isRecording) {
        mediaRecorder.stop();
        mediaRecorder.reset(); // set state to idle
        mediaRecorder.release();
        mediaRecorder = null;
        isRecording = false;
    } else {
        mediaPlayer.reset();
        mediaPlayer.release();
        mediaPlayer = null;
    }
}
class listener implements RecognitionListener {
    // standard code: onReadyForSpeech, onBeginningOfSpeech, etc.
}
Questions:
I built the app step by step. At first it did not have the recording function, and the voice recognition worked perfectly.
After testing many times and deciding the voice recognition was OK, I started to incorporate the recording function using MediaRecorder.
I then tested it: once button_start is pressed, the ERROR3 AUDIO message appears immediately, even before I try to speak.
When I play back the voice recording, the voice has been recorded and saved properly.
What is happening? Why can't I record at the same time as using voice recognition?
Thanks!
--EDIT-- module for Opus-Record WHILE Speech-Recognition also runs
--EDIT-- 'V1BETA1' streaming, continuous recognition, with a minor change to the sample project. Alter that readData() so the raw PCM in sData is shared by two threads (the fileSink thread and the recognizer API thread from the sample project). For the sink, just hook up an encoder using a PCM stream refreshed at each sData IO; remember to CLOSE the stream and it will work. Review writeAudioDataToFile() for more on the fileSink...
--EDIT-- see this thread
There is going to be a basic conflict over the HAL and the microphone buffer when you try to do:
speechRecognizer.startListening(recognizerIntent); // <-- needs mutex use of mic
and
mediaRecorder.start(); // <-- needs mutex use of mic
You can only choose one or the other of the above actions to own the audio APIs underlying the mic!
If you want to mimic the functionality of Google Keep, where you talk only once and, from that single input (your speech into the mic), you get two separate kinds of output (STT and a file sink of, say, an MP3), then you must split the stream as it exits the HAL layer from the mic.
For example:
Pick up the RAW audio as PCM 16 coming out of the mic's buffer
Split the above buffer's bytes (you can get a stream from the buffer and pipe the stream 2 places)
STRM 1 to the API for STT, either before or after you encode it (there are STT APIs accepting both raw PCM 16 and encoded audio)
STRM 2 to an encoder, then to the fileSink for your capture of the recording
The split can operate on either the actual buffer produced by the mic or on a derivative stream of those same bytes; a minimal sketch of this split follows below.
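Here is a rough sketch of that split, with AudioRecord owning the mic and the same PCM buffer fanned out to a file sink and to a queue for whatever feeds the STT API or an encoder. The class name, field names, and the 16 kHz format are assumptions for illustration, not code from the question (RECORD_AUDIO permission is required):
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import java.io.FileOutputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: one reader thread owns the mic; both consumers see the same PCM bytes.
class MicSplitter {
    private static final int SAMPLE_RATE = 16000; // assumption: 16 kHz mono PCM16
    private volatile boolean running;
    // queue drained by whatever feeds the STT API or an encoder
    final BlockingQueue<byte[]> sttQueue = new LinkedBlockingQueue<>();

    void start(final String pcmPath) {
        running = true;
        new Thread(new Runnable() {
            @Override public void run() {
                int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
                AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
                        SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);
                byte[] buf = new byte[minBuf];
                try (FileOutputStream sink = new FileOutputStream(pcmPath)) {
                    record.startRecording();
                    while (running) {
                        int n = record.read(buf, 0, buf.length);
                        if (n <= 0) continue;
                        sink.write(buf, 0, n);                           // STRM 2: file sink
                        sttQueue.offer(java.util.Arrays.copyOf(buf, n)); // STRM 1: recognizer/encoder
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    if (record.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
                        record.stop();
                    }
                    record.release();
                }
            }
        }).start();
    }

    void stop() { running = false; }
}
The recognizer side would drain sttQueue (or you replace the queue with a direct call into whatever streaming STT client you use), and the raw PCM file sink can be swapped for a MediaCodec/AAC or Opus encoder if you need a compressed recording.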
For what you are getting into, I recommend you look at getCurrentRecording() and consumeRecording() here.
STT API reference: Google "pultz speech-api". Note that there are use-cases on the API's mentioned there.
It seems that even though I set the video recorder profile to QUALITY_LOW, the video comes out at the highest quality.
Here is my code:
camera.setDisplayOrientation(90);
camera.unlock();

recorder.reset();
recorder.setCamera(camera);
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_LOW));

// When removing these comments I get an exception on my 4.2.2 device when calling start() on the recorder.
/*
recorder.setVideoFrameRate(24);
recorder.setVideoSize(480, 360);
*/

recorder.setOrientationHint(90);

file = FileUtils.getFileName(FileTypes.VIDEO);
if (!file.exists()) {
    try {
        file.createNewFile();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

recorder.setOutputFile(FileUtils.getFileName(FileTypes.VIDEO).toString());
recorder.setMaxDuration(45000);
Try this. Although it seems the same as your code, it works for me: create a separate instance of CamcorderProfile and set the recorder profile to that instance.
CamcorderProfile cprofileLow = CamcorderProfile.get(CamcorderProfile.QUALITY_LOW);
recorder.setProfile(cprofileLow);
recorder.setOutputFile("/sdcard/videocapture_example.mp4");
recorder.setMaxDuration(50000);   // 50 seconds
recorder.setMaxFileSize(3000000); // Approximately 3 megabytes
Please note that since you have a fairly high-end device (it runs 4.2.2), you may still get a comparatively good resolution; it will simply be the lowest resolution available on your device.
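If QUALITY_LOW still resolves to a large resolution on a particular device, another option is to check which profiles the device actually advertises and explicitly pick the smallest one. A rough sketch, assuming the back camera (id 0) and API 11+ for CamcorderProfile.hasProfile():
// Sketch: walk the low-resolution qualities and use the first one the device supports.
int[] preferred = {
        CamcorderProfile.QUALITY_QCIF,  // 176x144
        CamcorderProfile.QUALITY_QVGA,  // 320x240 (constant exists on API 15+)
        CamcorderProfile.QUALITY_CIF,   // 352x288
        CamcorderProfile.QUALITY_480P
};
CamcorderProfile chosen = CamcorderProfile.get(CamcorderProfile.QUALITY_LOW); // fallback
for (int quality : preferred) {
    if (CamcorderProfile.hasProfile(0 /* camera id */, quality)) {
        chosen = CamcorderProfile.get(0, quality);
        break;
    }
}
recorder.setProfile(chosen);
Log.d("Recorder", "Using " + chosen.videoFrameWidth + "x" + chosen.videoFrameHeight);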
Hello, I want to use MediaRecorder to record voice, and I want to save it in AMR format.
this.mediaRecorder = new MediaRecorder();
this.mediaRecorder.setAudioChannels(1);
this.mediaRecorder.setAudioSamplingRate(8000);
this.mediaRecorder.setAudioEncodingBitRate(16);
this.mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
this.mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.RAW_AMR);
this.mediaRecorder.setOutputFile(this.file.getAbsolutePath());
this.mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
When I used this.mediaRecorder.setAudioEncodingBitRate(16), some devices were OK.
With mediaRecorder.setAudioEncodingBitRate(12500), some devices were OK.
But when I deleted the setAudioEncodingBitRate call, some devices were OK.
So my question is how to get the default AudioEncodingBitRate.
Which parameter do I need to use?
You set the AudioEncodingBitRate too low. I made the same mistake :-)
This seems to work:
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
if (Build.VERSION.SDK_INT >= 10) {
    recorder.setAudioSamplingRate(44100);
    recorder.setAudioEncodingBitRate(96000);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
} else {
    // older version of Android, use crappy sounding voice codec
    recorder.setAudioSamplingRate(8000);
    recorder.setAudioEncodingBitRate(12200);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
}
recorder.setOutputFile(file.getAbsolutePath());
try {
    recorder.prepare();
} catch (IOException e) {
    throw new RuntimeException(e);
}
The idea comes from here
Plus: read the docs. The documentation of setAudioSamplingRate says the following:
The sampling rate really depends on the format for the audio recording, as well as the capabilities of the platform. For instance, the sampling rate supported by AAC audio coding standard ranges from 8 to 96 kHz, the sampling rate supported by AMRNB is 8kHz, and the sampling rate supported by AMRWB is 16kHz.
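For the original AMR use case, the same rule applies: keep the bit rate inside what AMR-NB actually supports (roughly 4.75 to 12.2 kbps at 8 kHz), or simply drop the setAudioEncodingBitRate() call and let the framework pick its default. A minimal sketch, assuming file points at a .amr target as in the question:
MediaRecorder amrRecorder = new MediaRecorder();
amrRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
amrRecorder.setOutputFormat(MediaRecorder.OutputFormat.AMR_NB); // raw .amr container (RAW_AMR is the deprecated alias)
amrRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
amrRecorder.setAudioChannels(1);
amrRecorder.setAudioSamplingRate(8000);     // AMR-NB only supports 8 kHz
amrRecorder.setAudioEncodingBitRate(12200); // highest AMR-NB mode; a value of 16 means 16 bps and is far too low
amrRecorder.setOutputFile(file.getAbsolutePath());
try {
    amrRecorder.prepare();
} catch (IOException e) {
    throw new RuntimeException(e);
}
amrRecorder.start();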
I am using the configuration below and it gives amazingly clear recording output.
localFileName = getFileName() + ".m4a"; // AAC in an MP4 container
localFile = new File(localdir, localFileName);

mRecorder = new MediaRecorder();
mRecorder.setAudioSource(MediaRecorder.AudioSource.DEFAULT);
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mRecorder.setAudioChannels(1);
mRecorder.setAudioEncodingBitRate(128000);
mRecorder.setAudioSamplingRate(44100);
mRecorder.setOutputFile(localFile.getPath());
However, if you are recording while simultaneously playing audio, this has some issues on Samsung devices (but again, only when you are playing and recording at the same time).
I find that the encoding bitrate should be calculated from the sample rate.
There is a good write-up of how these values relate on https://micropyramid.com/blog/understanding-audio-quality-bit-rate-sample-rate/
I use 8:1 compression for high-quality recordings. I prefer 48 kHz sampling, but the same logic works at the 8000 Hz sample rate requested in this post.
final int BITS_PER_SAMPLE = 16;   // 16-bit data
final int NUMBER_CHANNELS = 1;    // Mono
final int COMPRESSION_AMOUNT = 8; // Compress the audio at 8:1

public MediaRecorder setupRecorder(String filename, int selectedAudioSource, int sampleRate) {
    final int uncompressedBitRate = sampleRate * BITS_PER_SAMPLE * NUMBER_CHANNELS;
    final int encodedBitRate = uncompressedBitRate / COMPRESSION_AMOUNT;

    mediaRecorder = new MediaRecorder();
    try {
        mediaRecorder.setAudioSource(selectedAudioSource);
        mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        mediaRecorder.setAudioSamplingRate(sampleRate);
        mediaRecorder.setAudioEncodingBitRate(encodedBitRate);
        mediaRecorder.setOutputFile(filename);
    } catch (Exception e) {
        // TODO
    }
    return mediaRecorder;
}
MediaRecorder mediaRecorder = setupRecorder(this.file.getAbsolutePath(),
        MediaRecorder.AudioSource.MIC,
        8000);
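The returned recorder still has to go through the usual prepare/start/stop lifecycle, which the snippet above leaves out; a short sketch of that part (standard MediaRecorder calls, nothing specific to this answer):
try {
    mediaRecorder.prepare();
    mediaRecorder.start();
    // ... record for as long as needed ...
    mediaRecorder.stop();
} catch (IOException | IllegalStateException e) {
    e.printStackTrace();
} finally {
    mediaRecorder.release();
}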