I'm trying to develop an application that uses the MediaRecorder API and runs on the HMT-1.
Android Studio is used for development, and the target environment is Android 10 or higher.
While shooting a video with the MediaRecorder API, we are verifying whether the same microphone can be used by another process, such as the SpeechRecognizer API.
Recording with the MediaRecorder API alone and voice input with the SpeechRecognizer API alone both work without problems.
However, if we try to record and capture voice input at the same time, an error occurs.
If you know how to use the microphone input in multiple processes, please point me to any reference documents or samples.
MediaRecorder settings.
path = getExternalFilesDir(null)!!.path
mMediaRecorder = MediaRecorder()
mMediaRecorder!!.setAudioSource(MediaRecorder.AudioSource.MIC)
mMediaRecorder!!.setVideoSource(MediaRecorder.VideoSource.SURFACE)
mMediaRecorder!!.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
mMediaRecorder!!.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB)
mMediaRecorder!!.setAudioEncodingBitRate(16)
mMediaRecorder!!.setAudioSamplingRate(44100)
mMediaRecorder!!.setVideoSize(1024, 768)
mMediaRecorder!!.setVideoEncoder(MediaRecorder.VideoEncoder.H264)
mMediaRecorder!!.setVideoEncodingBitRate(10000000)
mMediaRecorder!!.setOutputFile(path + "/" + DateFormat.format("yyyyMMdd'-'kkmmss", Calendar.getInstance()) + ".mp4")
mMediaRecorder!!.setOnInfoListener(this)
mMediaRecorder!!.setMaxDuration(VIDEO_DURATION)
mMediaRecorder!!.setMaxFileSize(VIDEO_FILESIZE)
mMediaRecorder!!.setPreviewDisplay(mSurfaceHolder!!.surface)
val windowManager = getSystemService(WINDOW_SERVICE) as WindowManager
if (windowManager.defaultDisplay.rotation == 2) { // Surface.ROTATION_180
    mMediaRecorder!!.setOrientationHint(180)
}
try {
    // prepare() is what declares IOException; start() throws IllegalStateException on failure
    mMediaRecorder!!.prepare()
    mMediaRecorder!!.start()
} catch (ex: IOException) {
    ex.printStackTrace()
    mMediaRecorder!!.release()
}
SpeechRecognizer settings.
mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(applicationContext)
mSpeechRecognizer?.setRecognitionListener(createRecognitionListenerStringStream { recognize_text_view.text = it })
public fun onStart(view: View) {
    mSpeechRecognizer?.startListening(Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH))
}
public fun onStop(view: View) {
    mSpeechRecognizer?.stopListening()
}
Related
I am developing a voice call app for Android using PeerJS and WebView, and I want the audio to play through the earpiece. Here is my code:
private fun initAudio(){
am = getSystemService(AUDIO_SERVICE) as AudioManager
volumeControlStream = AudioManager.STREAM_VOICE_CALL
am.mode = AudioManager.MODE_IN_COMMUNICATION
am.isSpeakerphoneOn = false // <= not working in Android 12
}
private fun toggleSpeakerMode(){
am.isSpeakerphoneOn = !am.isSpeakerphoneOn // <= final value is always true in Android 12
}
The above code works fine on older versions of Android, but not on Android 12 (emulator).
am.isSpeakerphoneOn is always true on Android 12. Am I doing something wrong here? Or is it a bug in the emulator?
There is a new API call in Android 12/S/API 31, setCommunicationDevice(AudioDeviceInfo). For switching between the speaker and the built-in earpiece we can now use:
ArrayList<Integer> targetTypes = new ArrayList<>();
if (earpieceMode) {
targetTypes.add(AudioDeviceInfo.TYPE_BUILTIN_EARPIECE);
} else { // play out loud
targetTypes.add(AudioDeviceInfo.TYPE_BUILTIN_SPEAKER);
}
// more targetTypes may be added in some cases
// the setup below will pick the first available device, so order matters
List<AudioDeviceInfo> devices = audioManager.getAvailableCommunicationDevices();
outer:
for (Integer targetType : targetTypes) {
for (AudioDeviceInfo device : devices) {
if (device.getType() == targetType) {
boolean result = audioManager.setCommunicationDevice(device);
Log.i("AUDIO_MANAGER", "setCommunicationDevice type:" + targetType + " result:" + result);
if (result) break outer;
}
}
}
A mode change isn't needed (though for VoIP calls it is strongly suggested), and my streams are of the AudioManager.STREAM_VOICE_CALL type (where applicable).
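For completeness, a minimal Kotlin sketch of that suggested setup (the function name is illustrative; it only shows the mode change and the voice-call-style attributes, not a full VoIP pipeline):
import android.content.Context
import android.media.AudioAttributes
import android.media.AudioManager

// Sketch only: typical audio setup for a VoIP call, to pair with setCommunicationDevice().
fun configureVoipAudio(context: Context): AudioAttributes {
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    // Voice-call mode, as suggested above for VoIP calls.
    audioManager.mode = AudioManager.MODE_IN_COMMUNICATION
    // Tracks built with this usage are routed like STREAM_VOICE_CALL.
    return AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_VOICE_COMMUNICATION)
        .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
        .build()
}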
By default WebRTC uses the earpiece for voice playback.
Alternatively, you can call setSpeakerphoneOn(false), which is defined in the AudioManager class.
Just pass false to this function and it will disable the speakerphone during the call, so the earpiece will be used.
I have also tested it on Android 12 phones and it works fine.
If the issue still persists, then it may be a bug in your emulator.
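For reference, a minimal Kotlin sketch of that approach (assumes a Context from which the AudioManager can be obtained; note the setter is deprecated from API 31 in favour of setCommunicationDevice()):
import android.content.Context
import android.media.AudioManager

// Minimal sketch: route call audio to the earpiece using the pre-API-31 approach.
fun useEarpiece(context: Context) {
    val am = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    am.mode = AudioManager.MODE_IN_COMMUNICATION
    am.isSpeakerphoneOn = false // deprecated in API 31; use setCommunicationDevice() there
}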
I am developing a native Android WebRTC client that is supposed to stream audio from a custom device (I am receiving the audio stream via Bluetooth from that device). I am using the libjingle library to implement WebRTC, and I wonder if and how it is possible to hook up a custom audio stream to the audio track.
Currently I am adding default audio track like this:
localMS = factory.createLocalMediaStream("ARDAMS");
AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
localMS.addTrack(factory.createAudioTrack("ARDAMSa0", audioSource));
I saw that there is WebRtcAudioRecord (https://github.com/pristineio/webrtc-android/blob/master/libjingle_peerconnection/src/main/java/org/webrtc/voiceengine/WebRtcAudioRecord.java) - is it possible to override it?
Has anybody tried doing something like that?
Your post led me to the code below; I am going to try it and will let you know if I get it to work. I am trying to send one audio stream to the Watson API and one to WebRTC, but Android only lets one input stream read from the microphone. I will update you if I get it to work.
private org.webrtc.MediaStream createMediaStream() {
org.webrtc.MediaStream mediaStream = mFactory.createLocalMediaStream(ARDAMS);
if (mEnableVideo) {
mVideoCapturer = createVideoCapturer();
if (mVideoCapturer != null) {
mediaStream.addTrack(createVideoTrack(mVideoCapturer));
} else {
mEnableVideo = false;
}
}
if (mEnableAudio) {
createAudioCapturer();
mediaStream.addTrack(mFactory.createAudioTrack(
AUDIO_TRACK_ID,
mFactory.createAudioSource(mAudioConstraints)));
}
return mediaStream;
}
/**
 * Creates an instance of WebRtcAudioRecord.
 */
private void createAudioCapturer() {
if (mOption.getAudioType() == PeerOption.AudioType.EXTERNAL_RESOURCE) {
WebRtcAudioRecord.setAudioRecordModuleFactory(new WebRtcAudioRecordModuleFactory() {
@Override
public WebRtcAudioRecordModule create() {
AudioCapturerExternalResource module = new AudioCapturerExternalResource();
module.setUri(mOption.getAudioUri());
module.setSampleRate(mOption.getAudioSampleRate());
module.setBitDepth(mOption.getAudioBitDepth());
module.setChannel(mOption.getAudioChannel());
return module;
}
});
} else {
WebRtcAudioRecord.setAudioRecordModuleFactory(null);
}
}
Source:
https://www.programcreek.com/java-api-examples/?code=DeviceConnect/DeviceConnect-Android/DeviceConnect-Android-master/dConnectDevicePlugin/dConnectDeviceWebRTC/app/src/main/java/org/deviceconnect/android/deviceplugin/webrtc/core/MediaStream.java
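As background for the single-reader limitation mentioned above, one common workaround is to do the capture yourself with a single AudioRecord and fan the buffers out to every consumer. A rough Kotlin sketch (the consumer callbacks, e.g. one feeding Watson and one feeding WebRTC, are hypothetical):
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

// Rough sketch: one AudioRecord reads the mic, and every consumer gets a copy of each buffer.
// Requires the RECORD_AUDIO permission; the consumer callbacks are placeholders.
fun captureAndFanOut(consumers: List<(ShortArray) -> Unit>, isRunning: () -> Boolean) {
    val sampleRate = 16000
    val minBuf = AudioRecord.getMinBufferSize(
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT
    )
    val recorder = AudioRecord(
        MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf * 2
    )
    val buffer = ShortArray(minBuf)
    recorder.startRecording()
    try {
        while (isRunning()) {
            val read = recorder.read(buffer, 0, buffer.size)
            if (read > 0) {
                val chunk = buffer.copyOf(read)
                consumers.forEach { it(chunk) }
            }
        }
    } finally {
        recorder.stop()
        recorder.release()
    }
}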
I have been trying to implement the flashlight/torch feature of the camera using the Google Play Services Vision API (using NuGet from Visual Studio) for the past few days without success. I have noticed that there is a GitHub implementation of this API which has such functionality, but that is only available to Java users.
I was wondering if there is anything similar for C# Xamarin users.
The Camera object is not made available in this API, therefore I am not able to alter the camera parameters needed to activate the flashlight.
I would like to be sure that this functionality is not available so I don't waste more time on it. It may just be that the Xamarin developers have not gotten to this functionality yet and will in the near future.
UPDATE
https://github.com/googlesamples/android-vision/blob/master/visionSamples/barcode-reader/app/src/main/java/com/google/android/gms/samples/vision/barcodereader/BarcodeCaptureActivity.java
In there you can see that on line 214 there is this method call:
mCameraSource = builder.setFlashMode(useFlash ? Camera.Parameters.FLASH_MODE_TORCH : null).build();
SetFlashMode is not a method of CameraSource in the NuGet package, but it is in the GitHub (open-source) version.
The Xamarin Vision library doesn't expose the method to set the flash mode.
Workaround
Using reflection, you can get the Camera object from the CameraSource, add the flash parameter, and then set the updated parameters back on the camera.
This should be called after the SurfaceView has been created.
Code
public Camera getCameraObject (CameraSource _camSource)
{
Field [] cFields = _camSource.Class.GetDeclaredFields ();
Camera _cam = null;
try {
foreach (Field item in cFields) {
if (item.Name.Equals ("zzbNN")) {
Console.WriteLine ("Camera");
item.Accessible = true;
try {
_cam = (Camera)item.Get (_camSource);
} catch (Exception e) {
Logger.LogException (this, e);
}
}
}
} catch (Exception e) {
Logger.LogException (this, e);
}
return _cam;
}
public void setFlash (bool isEnable)
{
try {
isTorch = !isEnable;
var _cam = getCameraObject (mCameraSource);
if (_cam == null) return;
var _pareMeters = _cam.GetParameters ();
var _listOfSuppo = _cam.GetParameters ().SupportedFlashModes;
_pareMeters.FlashMode = isTorch ? _listOfSuppo [0] : _listOfSuppo [3];
_cam.SetParameters (_pareMeters);
} catch (Exception e) {
Logger.LogException (this, e);
}
}
Basically, anything you can do with Android can be done with Xamarin.Android. All the underlying APIs are available.
Since you have existing Java code, you can create a binding project that enables you to call the code from your Xamarin.Android project. Here's a good article on how to get started: Binding a Java Library
On the other hand, I don't think you need a library to do what you want. If you only want torch/flashlight functionality, you just need to adapt the Java code from this answer to work in Xamarin.Android with C#.
I am currently developing a VoIP Android application, and for VoIP support I am using the open-source library Linphone.
Currently voice calling works, but video calling does not. After analyzing for a while, I found that by default, when the app is loaded, the LinphoneCore library uses the H264 video codec.
But the VoIP Asterisk server is configured with the VP8 video codec, and I cannot change the video codec configured on the server. Hence, due to the codec mismatch, video data is not being sent.
So how can I manually set the video codec to VP8 in LinphoneCore from my app once the app is loaded?
To set the video codec on LinphoneCore, once your LinphoneCore is ready you can retrieve the video codec payload types it supports, then enable the particular payload you need and disable the others, as shown in the code below.
private void enableVp8Codec () {
LinphoneCore lc = LinphoneManager.getLcIfManagerNotDestroyedOrNull();
if (lc != null) {
PayloadType[] lPayLoadArr = lc.getVideoCodecs();
for (final PayloadType pt : lPayLoadArr) {
try {
if (pt.getMime().equals("VP8")) {
lc.enablePayloadType(pt, true);
} else {
lc.enablePayloadType(pt, false);
}
} catch (LinphoneCoreException e) {
Log.e("tag",e.getMessage());
}
}
}
}
You can call this method, for example, in onResume of your Activity.
I have been searching for a couple of days now and haven't been able to find a suitable solution.
I am trying to check if any app in the background is using the microphone so my app can use it; otherwise I just want to show the message "Microphone in use by another app".
I tried checking all the applications in the background and their permissions, but that doesn't solve my problem, since there is a package wearable.app which asks for the permission but doesn't affect, or isn't using, the audio.
I tried the other solutions that I was able to find here or on Google, but none of them seem to be the proper way.
All I want is to check that the microphone is not being used, so my app can use it.
I will appreciate any suggestions.
After searching some more I found the solution, and I am adding it here to make it easier to find for anyone who needs it.
private boolean validateMicAvailability(){
Boolean available = true;
AudioRecord recorder =
new AudioRecord(MediaRecorder.AudioSource.MIC, 44100,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_DEFAULT, 44100);
try{
if(recorder.getRecordingState() != AudioRecord.RECORDSTATE_STOPPED ){
available = false;
}
recorder.startRecording();
if(recorder.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING){
recorder.stop();
available = false;
}
recorder.stop();
} finally{
recorder.release();
recorder = null;
}
return available;
}
You can do it the other way around.
Get the microphone in your app.
Get a list of the installed apps that have the RECORD permission (a rough sketch of this step follows below).
Then check if one of these apps is in the foreground and, if there is one, release the microphone so that the other app can use it (for example when a phone call occurs).
A bit of a dirty practice, but I think it is what you are looking for.
Cheers!
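As referenced above, a rough Kotlin sketch of listing installed apps that hold RECORD_AUDIO (the helper name is illustrative; on Android 11+ package-visibility rules may also limit what is returned):
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager

// Rough sketch: collect packages that declare RECORD_AUDIO.
// Whether any of them is actually recording still has to be inferred separately.
fun appsWithRecordPermission(context: Context): List<String> {
    val pm = context.packageManager
    return pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
        .filter { it.requestedPermissions?.contains(Manifest.permission.RECORD_AUDIO) == true }
        .map { it.packageName }
}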
This is how it's done:
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
if (am.getMode() == AudioManager.MODE_IN_COMMUNICATION) {
    // Mic is in use
}
MODE_NORMAL -> You are good to go; the mic is not in use
MODE_RINGTONE -> Incoming call. The phone is ringing
MODE_IN_CALL -> A phone call is in progress
MODE_IN_COMMUNICATION -> The Mic is being used by another application
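A compact Kotlin sketch of the same check, mapping each mode to the situations listed above (the function name is illustrative):
import android.content.Context
import android.media.AudioManager

// Sketch: translate the AudioManager mode into the states described above.
fun describeMicState(context: Context): String {
    val am = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    return when (am.mode) {
        AudioManager.MODE_NORMAL -> "Mic not in use"
        AudioManager.MODE_RINGTONE -> "Incoming call, the phone is ringing"
        AudioManager.MODE_IN_CALL -> "A phone call is in progress"
        AudioManager.MODE_IN_COMMUNICATION -> "Mic in use by another application"
        else -> "Unknown"
    }
}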
AudioManager.AudioRecordingCallback()
am.registerAudioRecordingCallback(new AudioManager.AudioRecordingCallback() {
@Override
public void onRecordingConfigChanged(List<AudioRecordingConfiguration> configs) {
super.onRecordingConfigChanged(configs);
try {
isMicOn = configs.get(0) != null;
}catch (Exception e)
{
isMicOn = false;
}
if (isMicOn) {
//microphone is on
} else {
// microphone is off
}
Toast.makeText(context, isMicOn ? "Mic on" : "Mic off", Toast.LENGTH_SHORT).show();
}
}, null);
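As a side note, the same information can be queried on demand rather than via the callback; a small Kotlin sketch (API 24+, helper name illustrative):
import android.content.Context
import android.media.AudioManager

// Sketch: check whether any active recording configuration exists right now (API 24+).
fun isAnyAppRecording(context: Context): Boolean {
    val am = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    return am.activeRecordingConfigurations.isNotEmpty()
}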
I know this may sound a bit tedious or like the long way around... but have you considered recording a logcat? Record a log for both the kernel and the apps, recreate the issue, then compare both logs to see which program has the mic occupied when the kernel accesses it.
Since sharing audio input behaviour varies depending on the Android version, this answer aims to provide a complete solution based on the docs.
Pre-Android 10
Before Android 10 the input audio stream could only be captured by one app at a time. If some app was already recording or listening to audio, your app could create an AudioRecord object, but an error would be returned when you called AudioRecord.startRecording() and the recording would not start.
So, for pre-Android-10 versions, you can use this function to check whether the mic is being used by another app.
private fun isAnotherAppUsingMic(): Boolean {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) return false
createRecorder().apply {
try {
startRecording()
if (recordingState != AudioRecord.RECORDSTATE_RECORDING) {
return true
}
stop()
return false
} catch (e: IllegalStateException) {
return true
} finally {
release()
}
}
}
private fun createRecorder(): AudioRecord {
return AudioRecord(
MediaRecorder.AudioSource.MIC,
SAMPLE_RATE_HZ,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT,
2 * AudioRecord.getMinBufferSize(
SAMPLE_RATE_HZ,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT
)
)
}
const val SAMPLE_RATE_HZ = 44100
Android 10 and above
Android 10 imposes a priority scheme that can switch the input audio stream between apps while they are running. In most cases, if a new app acquires the audio input, the previously capturing app continues to run, but receives silence.
So, for Android 10 and higher, in most cases your app will take priority if another app, such as a voice or screen recorder, is already running when you start using the mic in your app. But you will need to check for a voice/video call, as it has higher priority and the mic won't be available to your app (it will receive silence). You can use the code below to check whether there is an active call:
private fun isVoiceCallActive(): Boolean {
val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
return audioManager.mode in listOf(
AudioManager.MODE_IN_CALL,
AudioManager.MODE_IN_COMMUNICATION
)
}
In summary, you can merge the above two functions to check whether the mic is available before you want to use it.
fun isMicAvailable() = !isAnotherAppUsingMic() && !isVoiceCallActive()
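A hypothetical call site using that helper before starting a capture:
// Hypothetical usage: gate the recognizer/recorder start on the availability check above.
fun onRecordButtonClicked() {
    if (isMicAvailable()) {
        // safe to create the AudioRecord or start the SpeechRecognizer here
    } else {
        // show "Microphone in use by another app"
    }
}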