I am developing an application in which I have to perform three functions on an audio file: play, record, and pause.
Has anyone implemented it before?
Take a look at MediaPlayer or AudioTrack for playback. The difference is that MediaPlayer can play several audio formats directly from a file (or in some cases even from a remote URL), while AudioTrack plays only from raw LPCM buffers.
For recording, take a look at MediaRecorder or AudioRecord. The difference is that MediaRecorder records audio and video to .3gp, while AudioRecord gives you only audio as raw LPCM buffers. The AudioRecord data can be used to create .wav files, though some extra code is required for this.
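To make that concrete, here is a minimal sketch of play/pause with MediaPlayer and recording with MediaRecorder (the method names, file paths, and lack of error handling are my own simplifications):

import android.media.MediaPlayer;
import android.media.MediaRecorder;

public class SimpleAudio {
    private MediaPlayer player;
    private MediaRecorder recorder;

    // Play (or resume) a local audio file with MediaPlayer.
    public void play(String path) throws java.io.IOException {
        if (player == null) {
            player = new MediaPlayer();
            player.setDataSource(path);   // a file path, or for some formats a remote URL
            player.prepare();             // blocking; use prepareAsync() for network streams
        }
        player.start();
    }

    // Pause playback; a later start() resumes from the same position.
    public void pause() {
        if (player != null && player.isPlaying()) {
            player.pause();
        }
    }

    // Record microphone audio into a .3gp file with MediaRecorder.
    public void startRecording(String path) throws java.io.IOException {
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(path);
        recorder.prepare();
        recorder.start();                 // call stop() and release() when finished
    }
}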
I need to record audio on Android that I later want to encrypt, so I'm working with the AudioRecord class, since it works with the audio at a low level, using the bytes directly.
I found a piece of code that works with shorts and then converts them into bytes, which is what I want. But once I have created the audio file, I cannot play it with any audio player on the phone.
What do I have to do for the phone to recognize it as a valid audio file?
Please forgive me, because I really don't remember all the details, but I had this issue before, and I do remember that the audio recorded by AudioRecord has no format: it is raw PCM. To make it playable you first need to give it a format, specifying all of the characteristics you set up when initializing your AudioRecord instance (such as sample rate and number of channels). I found an example of how to record audio using AudioRecord and then write it out in WAV format: https://selvaline.blogspot.com/2016/04/record-audio-wav-format-android-how-to.html I hope it helps.
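For reference, a minimal sketch of such a WAV header (my own illustration, not taken from the linked post; the parameters must match the values you passed to the AudioRecord constructor):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Build a standard 44-byte WAV header for raw LPCM data; write it to the
// file first, followed by the raw bytes captured from AudioRecord.
static byte[] wavHeader(int pcmLength, int sampleRate, int channels, int bitsPerSample) {
    int byteRate = sampleRate * channels * bitsPerSample / 8;
    ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
    b.put("RIFF".getBytes()).putInt(36 + pcmLength).put("WAVE".getBytes());
    b.put("fmt ".getBytes())
     .putInt(16)                                       // PCM subchunk size
     .putShort((short) 1)                              // format code 1 = LPCM
     .putShort((short) channels)
     .putInt(sampleRate)
     .putInt(byteRate)
     .putShort((short) (channels * bitsPerSample / 8)) // block align
     .putShort((short) bitsPerSample);
    b.put("data".getBytes()).putInt(pcmLength);
    return b.array();
}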
I'm trying to record (screen) video and (microphone) audio on my device. For the video screen recording I've used MediaProjection to get the raw video data, and MediaCodec and MediaMuxer to create the video file, which works fine.
Now I want to add audio recording, specifically the microphone audio. For this I used the AudioRecord class to get the raw audio data from my device's microphone. The problem I'm facing now is how to bring all the raw streams (video and audio) together with the MediaMuxer; for example, should it be synchronous or asynchronous?
Because the audio and video recording processes run in two different threads, I don't know when I have to call MediaMuxer's writeSampleData() method. Or should I first write all the video data with the MediaMuxer and then the audio data afterwards? Is it even possible to call the MediaMuxer from two different threads at "the same time"? And how can I guarantee that audio and video are synchronized during muxing?
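One common pattern for this (a sketch of my own, under the assumption that both encoders stamp presentationTimeUs from a shared clock) is to funnel both threads through one synchronized wrapper, since MediaMuxer is not thread-safe and must not be started before every track has been added:

import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

// Both the video thread and the audio thread go through this wrapper;
// start() is only called once both tracks exist.
class MuxerWrapper {
    private final MediaMuxer muxer;
    private int tracksAdded = 0;
    private boolean started = false;

    MuxerWrapper(MediaMuxer muxer) { this.muxer = muxer; }

    synchronized int addTrack(MediaFormat format) {
        int track = muxer.addTrack(format);
        if (++tracksAdded == 2) {   // one video track + one audio track
            muxer.start();
            started = true;
            notifyAll();
        }
        return track;
    }

    synchronized void writeSample(int track, ByteBuffer buffer,
                                  MediaCodec.BufferInfo info) throws InterruptedException {
        while (!started) wait();    // block a thread that is ready too early
        // A/V sync comes from info.presentationTimeUs: both encoders must
        // stamp their buffers from the same clock (e.g. System.nanoTime()).
        muxer.writeSampleData(track, buffer, info);
    }
}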
I have an app that plays video files with no audio, using the MediaPlayer class. I want the videos to play but output nothing to audio, as there are no audio tracks. However, Android ducks all other audio output (music, for example) when my app is running. Is there a way to use MediaPlayer and completely remove my app from the audio streams?
I've reviewed Google's Managing Audio Focus article, and I can't devise a way to do so. I just want to use MediaPlayer for the video and completely forget about audio.
Is there a different stream type I should pass to mediaPlayer.setAudioStreamType()?
You can mute all the audio streams using the code snippet below:
// Grab the system AudioManager from the activity context.
AudioManager am = (AudioManager) NextVideoPlayer.this.getSystemService(Context.AUDIO_SERVICE);
// Mute the music stream, which MediaPlayer uses by default.
am.setStreamMute(AudioManager.STREAM_MUSIC, true);
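Note that setStreamMute() was deprecated in API 23; on Android 6.0 and later the equivalent is am.adjustStreamVolume(AudioManager.STREAM_MUSIC, AudioManager.ADJUST_MUTE, 0). Also remember to unmute when you are done, because the mute affects every app using that stream.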
I have already developed a streaming audio application using the MediaPlayer API. All the features work fine, except that it takes a long time to start playback (the buffering time is long).
I want to add recording of the live audio stream (saving the live stream data to disk, not recording from the MIC). As MediaPlayer does not provide any API to access the raw data stream, I am planning to build a custom audio player.
I want to control the buffering time and have access to the raw audio stream, and it should be able to play all the audio formats that Android supports natively. Which API (libpd, OpenSL ES, or AudioTrack) is suitable for building a custom audio player on Android?
In my experience OpenSL ES would be the choice; here is a link that explains how to do audio streaming that you may find useful. bufferframes determines how many samples you collect before playing, so a smaller bufferframes means a faster response time, but you have to balance that against your device's processing capabilities.
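Since AudioTrack was also on the list, here is a minimal sketch of the streaming side (my own illustration; decodeNextChunk() is a hypothetical decoder hook, not a real API), where the buffer size you choose plays the same latency-versus-underrun role as bufferframes above:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

int sampleRate = 44100;
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);

// MODE_STREAM lets you feed decoded PCM incrementally; a larger buffer means
// more start-up delay but fewer underruns.
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf * 2, AudioTrack.MODE_STREAM);

track.play();
byte[] pcm = new byte[minBuf];
int n;
while ((n = decodeNextChunk(pcm)) > 0) { // hypothetical: fills pcm with decoded LPCM
    track.write(pcm, 0, n);              // blocks while the internal buffer is full
}
track.stop();
track.release();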
You can record with libpd (pd-for-android) too; the whole recording process is managed by libpd.
Check the ScenePlayer project, which uses libpd and lets you record audio into a folder on the SD card:
https://github.com/libpd/pd-for-android/tree/master/ScenePlayer
I want to record the human voice on my Android phone. I noticed that Android has two classes for this: AudioRecord and MediaRecorder. Can someone tell me the difference between the two and the appropriate use cases for each?
I want to be able to analyse human speech in real time to measure amplitude, etc. Am I correct in understanding that AudioRecord is better suited to this task?
I noticed on the official Android guide webpage for recording audio, they use MediaRecorder with no mention of AudioRecord.
If you want to do your analysis while recording is still in progress, you need to use AudioRecord, as MediaRecorder automatically records into a file. AudioRecord has the disadvantage that after calling startRecording() you need to poll the data yourself from the AudioRecord instance. Also, you must read and process the data fast enough that the internal buffer does not overrun (look at the logcat output; AudioRecord will tell you when that happens).
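As an illustration of that polling loop, a minimal sketch under assumed settings of 44.1 kHz mono 16-bit ('recording' is your own stop flag, not part of the API):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

int sampleRate = 44100;
int minBuf = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);

rec.startRecording();
short[] buf = new short[minBuf];
while (recording) {
    int n = rec.read(buf, 0, buf.length); // read fast enough or the buffer overruns
    int peak = 0;
    for (int i = 0; i < n; i++) {
        peak = Math.max(peak, Math.abs(buf[i]));
    }
    // 'peak' is this chunk's max amplitude; hand it to your real-time analysis
}
rec.stop();
rec.release();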
As I understand it, MediaRecorder is a black box that gives you a compressed audio file as output, while AudioRecord gives you just the raw sound stream, which you have to compress yourself.
MediaRecorder gives you the max amplitude since the last call of the getMaxAmplitude() method, so you can implement a sound visualizer, for example.
So in most cases MediaRecorder is the best choice, except those in which you need to do some complicated sound processing and need access to the raw audio stream.
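As a small illustration of the visualizer idea (the Handler-based polling and updateVisualizer() are my own hypothetical additions):

import android.os.Handler;
import android.os.Looper;

final Handler handler = new Handler(Looper.getMainLooper());
handler.post(new Runnable() {
    @Override public void run() {
        // Max absolute amplitude sampled since the previous call (0 the first time).
        int level = recorder.getMaxAmplitude(); // 'recorder' is a started MediaRecorder
        updateVisualizer(level);                // hypothetical UI hook
        handler.postDelayed(this, 100);         // poll roughly 10x per second
    }
});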
AudioRecord first saves data into its internal buffer; from there it is copied into your own temporary buffer, whereas MediaRecorder copies it straight into a file.
With AudioRecord you have to keep track of the position in the recorded data yourself, whereas with MediaRecorder the file pointer does that job of setting the marker position.
AudioRecord can be used in apps running on an emulator by choosing a low sample rate such as 8000 Hz, while with MediaRecorder audio cannot be recorded on an emulator.
With AudioRecord the screen can go to sleep after some time, while with MediaRecorder it does not.