I am using interactive video broadcasting in my app. I am attaching the class in which I use live streaming.
I am getting an audio issue when I go back from the live streaming screen to the previous screen: I can still hear the host's audio.
Previously I was calling the leave-channel method and destroying the RTC client object, but with that in place, going back from the streaming class closed the stream for all users of the app, so I removed it from my onDestroy method.
Now I am using the disableAudio() method, which disables the audio, but when I reopen the live streaming class, enableAudio() does not turn the audio back on. I have also tried the muteLocalAudioStream() method and the onUserMuteAudio handler in the RTC event handler.
I am getting this error:
"LiveStreamingActivity has leaked IntentReceiver io.agora.rtc.internal.AudioRoutingController$HeadsetBroadcastReceiver#101a7a7
that was originally registered here. Are you missing a call to
unregisterReceiver()? android.app.IntentReceiverLeaked: Activity
com.allin.activities.home.homeActivities.LiveStreamingActivity has
leaked IntentReceiver
io.agora.rtc.internal.AudioRoutingController$HeadsetBroadcastReceiver#101a7a7
that was originally registered here. Are you missing a call to
unregisterReceiver()?"
The receiver is registered inside the SDK, and the exception comes from inside the SDK, which is a jar file I can't edit.
Please help me resolve this issue, as I have to publish the app on the Play Store.
// Firstly I tried this, but it automatically stops the other devices' streams.
override fun onDestroy() {
    /* if (mRtcEngine != null) {
        leaveChannel()
        RtcEngine.destroy(mRtcEngine)
        mRtcEngine = null
    } */
    // Secondly I tried disabling the audio so that the user will not hear the host's voice.
    if (mRtcEngine != null) {
        mRtcEngine!!.disableAudio()
    }
    super.onDestroy()
}
// Then, when I come back to the live streaming activity, everything is initialized again, but the audio is not audible.
override fun onResume() {
super.onResume()
Log.e("resume", "resume")
if (mRtcEngine != null) {
mRtcEngine!!.enableAudio()
// mRtcEngine!!.resumeAudio()
}
}
Code I am using:
//agora rtc engine and handler initialization-----------------
private var mRtcEngine: RtcEngine? = null
private var mRtcEventHandler = object : IRtcEngineEventHandler() {
@SuppressLint("LongLogTag")
override fun onFirstRemoteVideoDecoded(uid: Int, width: Int,
height: Int, elapsed: Int) {
}
override fun onUserOffline(uid: Int, reason: Int) {
runOnUiThread {
val a = reason //if login =0 user is offline
try {
if (mUid == uid) {
if (surfaceView?.parent != null)
(surfaceView?.parent as ViewGroup).removeAllViews()
if (mRtcEngine != null) {
leaveChannel()
RtcEngine.destroy(mRtcEngine)
mRtcEngine = null
}
setResult(IntentConstants.REQUEST_CODE_LIVE_STREAMING)
finish()
}
} catch (e: Exception) {
e.printStackTrace()
}
}
}
override fun onUserMuteVideo(uid: Int, muted: Boolean) {
runOnUiThread {
// onRemoteUserVideoMuted(uid, muted);
Log.e("video","muted")
}
}
override fun onAudioQuality(uid: Int, quality: Int, delay:
Short, lost: Short) {
super.onAudioQuality(uid, quality, delay, lost)
Log.e("", "")
}
override fun onUserJoined(uid: Int, elapsed: Int) {
// super.onUserJoined(uid, elapsed)
mUid = uid
runOnUiThread {
try {
setupRemoteVideo(mUid!!)
} catch (e: Exception) {
e.printStackTrace()
}
}
Log.e("differnt_uid----", mUid.toString())
}
}
private fun initAgoraEngineAndJoinChannel() {
if(mRtcEngine==null)
{
initializeAgoraEngine()
setupVideoProfile()
}
}
//initializing rtc engine class
@Throws(Exception::class)
private fun initializeAgoraEngine() {
try {
var s = RtcEngine.getSdkVersion()
mRtcEngine = RtcEngine.create(baseContext, AgoraConstants.APPLICATION_ID, mRtcEventHandler)
} catch (e: Exception) {
// Log.e(LOG_TAG, Log.getStackTraceString(e));
throw RuntimeException("NEED TO check rtc sdk init fatal error\n" + Log.getStackTraceString(e))
}
}
@Throws(Exception::class)
private fun setupVideoProfile() {
//mRtcEngine?.muteAllRemoteAudioStreams(true)
// mLogger.log("channelName account = " + channelName + ",uid = " + 0);
mRtcEngine?.enableVideo()
//mRtcEngine.clearVideoCompositingLayout();
mRtcEngine?.enableLocalVideo(false)
mRtcEngine?.setEnableSpeakerphone(false)
mRtcEngine?.muteLocalAudioStream(true)
joinChannel()
mRtcEngine?.setVideoProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING, true)
mRtcEngine?.setChannelProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING)
mRtcEngine?.setClientRole(Constants.CLIENT_ROLE_AUDIENCE,"")
val speaker = mRtcEngine?.isSpeakerphoneEnabled
val camerafocus = mRtcEngine?.isCameraAutoFocusFaceModeSupported
Log.e("", "")
}
@Throws(Exception::class)
private fun setupRemoteVideo(uid: Int) {
val container = findViewById<FrameLayout>(R.id.fl_video_container)
if (container.childCount >= 1) {
return
}
surfaceView = RtcEngine.CreateRendererView(baseContext)
container.addView(surfaceView)
mRtcEngine?.setupRemoteVideo(VideoCanvas(surfaceView, VideoCanvas.RENDER_MODE_HIDDEN, uid))
mRtcEngine?.setRemoteVideoStreamType(uid, 1)
mRtcEngine?.setCameraAutoFocusFaceModeEnabled(false)
mRtcEngine?.muteRemoteAudioStream(uid, false)
mRtcEngine?.adjustPlaybackSignalVolume(0)
// mRtcEngine.setVideoProfile(Constants.VIDEO_PROFILE_180P, false); // Earlier than 2.3.0
surfaceView?.tag = uid // for mark purpose
val audioManager: AudioManager =
    this@LiveStreamingActivity.getSystemService(Context.AUDIO_SERVICE) as AudioManager
//audioManager.mode = AudioManager.MODE_IN_CALL
val isConnected: Boolean = audioManager.isWiredHeadsetOn
if (isConnected) {
/* audioManager.isSpeakerphoneOn = false
audioManager.isWiredHeadsetOn = true*/
mRtcEngine?.setEnableSpeakerphone(false)
mRtcEngine?.setDefaultAudioRoutetoSpeakerphone(false)
mRtcEngine?.setSpeakerphoneVolume(0)
mRtcEngine?.enableInEarMonitoring(true)
// Sets the in-ear monitoring volume to 50% of original volume.
mRtcEngine?.setInEarMonitoringVolume(200)
mRtcEngine?.adjustPlaybackSignalVolume(200)
} else {
/* audioManager.isSpeakerphoneOn = true
audioManager.isWiredHeadsetOn = false*/
mRtcEngine?.setEnableSpeakerphone(true)
mRtcEngine?.setDefaultAudioRoutetoSpeakerphone(true)
mRtcEngine?.setSpeakerphoneVolume(50)
mRtcEngine?.adjustPlaybackSignalVolume(50)
mRtcEngine?.enableInEarMonitoring(false)
// Sets the in-ear monitoring volume to 50% of original volume.
mRtcEngine?.setInEarMonitoringVolume(0)
}
Log.e("", "")
}
@Throws(Exception::class)
private fun joinChannel() {
mRtcEngine?.joinChannel(
null,
AgoraConstants.CHANNEL_NAME,
"Extra Optional Data",
0
) // if you do not specify the uid, we will generate the uid for you
}
@Throws(Exception::class)
private fun leaveChannel() {
mRtcEngine!!.leaveChannel()
}
I think you first want to put setupRemoteVideo in the onFirstRemoteVideoDecoded callback instead of the onUserJoined callback. Also, in the onDestroy callback, you should call the static RtcEngine.destroy() instead of RtcEngine.destroy(mRtcEngine).
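A minimal sketch of that cleanup, reusing mRtcEngine and leaveChannel() from the code above. leaveChannel() only removes the local user from the channel, so other participants keep streaming, and the static destroy() releases the SDK's internal resources (including the headset receiver that was leaking):
override fun onDestroy() {
    if (mRtcEngine != null) {
        leaveChannel()       // only the local user leaves the channel
        RtcEngine.destroy()  // static destroy; releases SDK resources
        mRtcEngine = null
    }
    super.onDestroy()
}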
I'm working on an Android NFC-based application with a requirement to continuously read/write data to a SLIX-2 (ICode) tag from any activity.
As of now, the application starts by initializing the NFC manager, which does most of the heavy lifting for tag detection, continuous polling for presence checks, and reading and writing data.
BaseActivity initializes ANfcManager along with other required work, such as the pending restart intent, checking the NFC adapter, enableForegroundDispatch, and so on.
private fun initField() {
mNfcManager = ANfcManager(this)
}
private fun createPendingRestartIntent() {
pendingIntent = PendingIntent.getActivity(this, 0, Intent(this, javaClass)
.addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0
)
}
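One note on createPendingRestartIntent(): on Android 12+ (targeting API 31 and above), PendingIntent creation must declare mutability, and for NFC foreground dispatch the intent has to stay mutable so the system can attach the tag extras. A sketch of the same call with that flag:
pendingIntent = PendingIntent.getActivity(
    this, 0,
    Intent(this, javaClass).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP),
    PendingIntent.FLAG_MUTABLE // API 31+: the NFC system fills in the tag extras
)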
override fun onResume() {
super.onResume()
try {
if(mNfcManager.checkNfcPowerStatus()) // NfcAdapter enabled or not
setReadyToHandleTag()
else Log.w(TAG, "Nfc is not supported or disabled.")
} catch (e: AcmNfcManager.NfcNotEnabledException) {
Log.e(TAG, "Nfc not enabled", e)
}
}
private fun setReadyToHandleTag() {
try {
TECHLISTS = arrayOf(arrayOf(IsoDep::class.java.name), arrayOf(NfcV::class.java.name),
arrayOf(NfcA::class.java.name), arrayOf(NfcB::class.java.name),
arrayOf(NfcF::class.java.name),arrayOf(Ndef::class.java.name),
arrayOf(NdefFormatable::class.java.name))
val tagDetected = IntentFilter(NfcAdapter.ACTION_TECH_DISCOVERED)
tagDetected.addCategory(Intent.CATEGORY_DEFAULT)
TAGFILTERS = arrayOf(tagDetected)
} catch (e: Exception) {
Log.e(TAG, "TECH or TAG filter no detected!!!" )
}
pendingIntent?.let { mNfcManager.enableForegroundDispatch(this, it, TAGFILTERS, TECHLISTS) }
}
override fun onNewIntent(intent: Intent) {
super.onNewIntent(intent)
nfcState = mNfcManager.filterIntent(intent)
dispatchActionOnTag(nfcState)
}
// this abstract function will provide the tag state in the corresponding class
abstract fun dispatchActionOnTag(tag: Boolean)
Each activity has an NfcListener for tag detection and does its reads/writes through the ANfcManager APIs. To continuously check for tag presence, a Handler with its own Looper inside the NFC manager runs a periodic presence check (a sketch of such a handler appears after the manager code below).
Here is the function inside ActivityA which is triggered after tag detection and which starts the presence-check thread:
override fun dispatchActionOnTag(tag: Boolean) {
mNfcStatus = tag
if (nfcStateListener() != null) {
nfcStateListener().updateNfcState(tag)
mNfcManager.startTagCheck() // presence check handler every x sec
}
}
This same function is repeated (not clean, but it still works) in each of the activities for tag detection and presence checks, and based on that, data is read from or written to the tag.
Here comes my problem.
Preconditions:
The tag in my application (product) is at a fixed location (stuck in the hardware) and is not usually taken out unless there is a tag change.
There are situations where the tag can be taken out while ActivityB or ActivityC is running, which requires repeating the same callback code in those activities.
Required:
- When switching from ActivityA to ActivityB, the tag detection flow (onNewIntent) does not run, because the tag is not taken out of proximity and tapped again. How will I write/read data to the tag?
ANFCManager,
class ANfcManager @Inject constructor(context: Context) {
private val mContext = context
private lateinit var nfcAdapter: NfcAdapter
private lateinit var mTag: Tag
private lateinit var iCodeTag: ICodeSlix2
private lateinit var icode: ICode
init {
val readPermission = ContextCompat.checkSelfPermission(
mContext,
Manifest.permission.WRITE_EXTERNAL_STORAGE
) == PackageManager.PERMISSION_GRANTED
if (!readPermission) {
ActivityCompat.requestPermissions(
mContext as Activity,
arrayOf(Manifest.permission.WRITE_EXTERNAL_STORAGE), 113
)
}
/**
* initialize background thread for presence check every x seconds.
*/
val thread = HandlerThread("PresenceCheckThread")
thread.start()
mHandler = PresenceHandler(thread.looper)
}
fun enableForegroundDispatch(
activity: FragmentActivity, intent: PendingIntent,
filters: Array<IntentFilter>?, techLists: Array<Array<String>>?
) {
nfcAdapter.enableForegroundDispatch(activity, intent, filters, techLists)
}
fun disableForegroundDispatch(activity: Activity) {
nfcAdapter.disableForegroundDispatch(activity)
}
fun filterIntent(intent: Intent): Boolean {
val action = intent.action
if (NfcAdapter.ACTION_TECH_DISCOVERED == action
|| NfcAdapter.ACTION_TAG_DISCOVERED == action
|| NfcAdapter.ACTION_NDEF_DISCOVERED == action
) {
if (intent.hasExtra(NfcAdapter.EXTRA_TAG)) {
mTag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG)!!
return if (discoverTag()) {
Toast.makeText(mContext, "Tag detected.", Toast.LENGTH_SHORT).show()
true
} else {
ignoreTag()
false
}
}
}
return false
}
/**
* discover the Tag family.
*/
fun discoverTag(): Boolean {
icode = getTag(mTag)
if (ICodeSlix2::class.java.isInstance(icode))
iCodeTag = icode as ICodeSlix2
return iCodeTag != null
}
fun checkNfcPowerStatus(): Boolean {
return checkNfcPowerStatus(mContext)
}
/**
* Check Nfc status
*/
private fun checkNfcPowerStatus(context: Context?): Boolean {
nfcAdapter = NfcAdapter.getDefaultAdapter(context)
var enabled = false
if (nfcAdapter != null) {
enabled = nfcAdapter.isEnabled
}
return enabled
}
fun writeUpdateBlocks() {
try {
iCodeTag.connect()
.
. // proprietary code
.
}catch (e: IOException) {
e.printStackTrace()
Log.e(TAG, "IOException: ", e)
} catch (e: SmartCardException) {
e.printStackTrace()
Log.e(TAG, "SmartCardException: ", e)
} catch (e: IllegalArgumentException) {
e.printStackTrace()
Log.e(TAG, "IllegalArgumentException: ", e)
} catch (e: IllegalStateException) {
e.printStackTrace()
Log.e(TAG, "IllegalStateException: ", e)
} catch (e: IndexOutOfBoundsException) {
e.printStackTrace()
Log.e(TAG, "IndexOutOfBoundsException: ", e)
} finally {
iCodeTag.close()
}
}
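The PresenceHandler (and the mHandler field) referenced in init is not shown in the post; a hypothetical sketch of such a presence-check handler, assuming a cheap connect/close ping against the current tag and a 3-second interval (both are assumptions, not the poster's actual code):
private inner class PresenceHandler(looper: Looper) : Handler(looper) {
    override fun handleMessage(msg: Message) {
        // ping the tag with a connect/close cycle; an exception means it left the field
        val present = try {
            NfcV.get(mTag)?.use { it.connect(); true } ?: false
        } catch (e: Exception) {
            false
        }
        if (present) sendEmptyMessageDelayed(0, 3_000) // re-check in 3 s (assumed)
        // else: notify listeners that the tag is gone (hypothetical callback)
    }
}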
Required: When switching from ActivityA to ActivityB, the tag detection flow (onNewIntent) does not run, because the tag is not taken out of proximity and tapped again. How will I write/read data to the tag?
So the Tag object is a Parcelable object; just pass it from ActivityA to ActivityB, you don't need to re-discover it.
For example, something like this (sorry, in Java not Kotlin):
ActivityA
Intent intent = new Intent(getActivity(), ActivityB.class);
intent.putExtra("TAG", mTag);
startActivity(intent);
Then in ActivityB's onCreate:
Intent intent = getIntent();
mTag = intent.getParcelableExtra("TAG");
// Start doing stuff with the Tag just like if you got it via discovery
// ANfcManager might need a `setTag` method to set it without discovery.
// or allow a Tag to be passed in the ANfcManager constructor
Note that I would not use enableForegroundDispatch for reading and especially writing to tags, as I found it too unreliable; I would recommend enableReaderMode instead, and you can still pass the Tag object between activities.
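A minimal reader-mode sketch of that suggestion for the activities above (the flag assumes NfcV-only tags, and setTag is the hypothetical ANfcManager setter mentioned earlier):
override fun onResume() {
    super.onResume()
    NfcAdapter.getDefaultAdapter(this)?.enableReaderMode(
        this,
        { tag -> mNfcManager.setTag(tag) }, // hand the Tag to the manager without discovery
        NfcAdapter.FLAG_READER_NFC_V or NfcAdapter.FLAG_READER_NO_PLATFORM_SOUNDS,
        null // no extra reader options
    )
}

override fun onPause() {
    super.onPause()
    NfcAdapter.getDefaultAdapter(this)?.disableReaderMode(this)
}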
It was quick to convert the manager class to a singleton, and everything else remains the same.
BaseActivity,
fun initField() {
mNfcManager = ANfcManager.getInstance(this)
}
class ANfcManager private constructor(context: Context){
companion object : SingletonHolder<ANfcManager, Context>(::ANfcManager) {
val TAG = ANfcManager::class.java.simpleName
}
init{
mContext = context
.
.
.
}
}
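The SingletonHolder companion used above is a common Kotlin pattern for a singleton that takes a constructor argument; it is not shown in the post, but a typical implementation (double-checked locking around a creator function) looks like this:
open class SingletonHolder<out T : Any, in A>(creator: (A) -> T) {
    private var creator: ((A) -> T)? = creator
    @Volatile private var instance: T? = null

    fun getInstance(arg: A): T {
        // fast path: instance already created
        instance?.let { return it }
        return synchronized(this) {
            // re-check inside the lock, then create and drop the creator reference
            instance ?: creator!!(arg).also {
                instance = it
                creator = null
            }
        }
    }
}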
I have a project using RecognitionListener, written in Kotlin. The speech-to-text function was always a success and never presented any problems.
Since last week, its onResults function has started to be called twice. No changes were made to the project. I tested old versions of the project (from months ago), and those had the same problem.
There are three different cases:
Small text (1 to 8 words) with the SpeechRecognizer stopped automatically -> onResults() called twice;
Big text (9 words or more) with the SpeechRecognizer stopped automatically -> normal behavior (onResults() called once);
Any text size with stopListening() called manually (from code) -> normal behavior.
Here is the VoiceRecognition speech-to-text class code:
class VoiceRecognition(private val activity: Activity, language: String = "pt_BR") : RecognitionListener {
private val AudioLogTag = "AudioInput"
var voiceRecognitionIntentHandler: VoiceRecognitionIntentHandler? = null
var voiceRecognitionOnResultListener: VoiceRecognitionOnResultListener? = null //Must have this
var voiceRecognitionLayoutChanger: VoiceRecognitionLayoutChanger? = null
var isListening = false
private val intent: Intent
private var speech: SpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(activity)
init {
speech.setRecognitionListener(this)
intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
intent.putExtra(
RecognizerIntent.EXTRA_LANGUAGE_MODEL,
RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
)
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, language)
}
//It is important to put this function inside a clickListener
fun listen(): Boolean {
if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(activity, arrayOf(Manifest.permission.RECORD_AUDIO), 1)
return false
}
speech.startListening(intent)
Log.i(AudioLogTag, "startListening")
return true
}
//Use this if you want to stop listening but still get recognition results
fun endListening(){
Log.i(AudioLogTag, "stopListening")
speech.stopListening()
isListening = false
}
fun cancelListening(){
Log.i(AudioLogTag, "cancelListening")
speech.cancel()
voiceRecognitionLayoutChanger?.endListeningChangeLayout()
isListening = false
}
override fun onReadyForSpeech(p0: Bundle?) {
Log.i(AudioLogTag, "onReadyForSpeech")
voiceRecognitionLayoutChanger?.startListeningChangeLayout()
isListening = true
}
override fun onRmsChanged(p0: Float) {
// Log.i(AudioLogTag, "onRmsChanged: $p0")
// progressBar.setProgress((Int) p0)
}
override fun onBufferReceived(p0: ByteArray?) {
Log.i(AudioLogTag, "onBufferReceived: $p0")
}
override fun onPartialResults(p0: Bundle?) {
Log.i(AudioLogTag, "onPartialResults")
}
override fun onEvent(p0: Int, p1: Bundle?) {
Log.i(AudioLogTag, "onEvent")
}
override fun onBeginningOfSpeech() {
Log.i(AudioLogTag, "onBeginningOfSpeech")
}
override fun onEndOfSpeech() {
Log.i(AudioLogTag, "onEndOfSpeech")
voiceRecognitionLayoutChanger?.endListeningChangeLayout()
isListening = false
}
override fun onError(p0: Int) {
speech.cancel()
val errorMessage = getErrorText(p0)
Log.d(AudioLogTag, "FAILED: $errorMessage")
voiceRecognitionLayoutChanger?.endListeningChangeLayout()
isListening = false
}
override fun onResults(p0: Bundle?) {
val results: ArrayList<String> = p0?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION) as ArrayList<String>
Log.i(AudioLogTag, "onResults -> ${results.size}")
val voiceIntent: Int? = voiceRecognitionIntentHandler?.getIntent(results[0])
if (voiceIntent != null && voiceIntent != 0) {
voiceRecognitionIntentHandler?.handle(voiceIntent)
return
}
voiceRecognitionOnResultListener!!.onResult(results[0])
}
private fun getErrorText(errorCode: Int): String {
val message: String
when (errorCode) {
SpeechRecognizer.ERROR_AUDIO -> message = "Audio recording error"
SpeechRecognizer.ERROR_CLIENT -> message = "Client side error"
SpeechRecognizer.ERROR_INSUFFICIENT_PERMISSIONS -> message = "Insufficient permissions"
SpeechRecognizer.ERROR_NETWORK -> message = "Network error"
SpeechRecognizer.ERROR_NETWORK_TIMEOUT -> message = "Network timeout"
SpeechRecognizer.ERROR_NO_MATCH -> message = "No match"
SpeechRecognizer.ERROR_RECOGNIZER_BUSY -> message = "RecognitionService busy"
SpeechRecognizer.ERROR_SERVER -> message = "Error from server"
SpeechRecognizer.ERROR_SPEECH_TIMEOUT -> message = "No speech input"
else -> message = "Didn't understand, please try again."
}
return message
}
//Use it in your overriden onPause function.
fun onPause() {
voiceRecognitionLayoutChanger?.endListeningChangeLayout()
isListening = false
speech.cancel()
Log.i(AudioLogTag, "pause")
}
//Use it in your overriden onDestroy function.
fun onDestroy() {
speech.destroy()
}
listen(), endListening() and cancelListening() are all called from a button.
I found this open issue: https://issuetracker.google.com/issues/152628934
As I commented there, I assume it is an issue with the speech recognition service and not with the Android RecognitionListener class.
This is my temporary workaround:
private boolean singleResult = true;

@Override
public void onResults(Bundle results) {
    Log.d(TAG, "onResults"); //$NON-NLS-1$
    if (singleResult) {
        ArrayList<String> matches = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (matches != null && matches.size() > 0) {
            Log.d("single Result", "" + matches.get(0));
        }
        singleResult = false;
    }
    // re-arm the flag after the duplicate-callback window has passed
    getHandler().postDelayed(new Runnable() {
        @Override
        public void run() {
            singleResult = true;
        }
    }, 100);
}
I had the same problem and I've just added a boolean flag in my code, but of course it's a temporary solution, and I don't know the source of the problem.
val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
recognizer.setRecognitionListener(
    object : RecognitionListener {
        var singleResult = true
        override fun onResults(results: Bundle?) {
            if (singleResult) {
                results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION).let {
                    // do something with the result
                }
                // the next (duplicate) result will be ignored
                singleResult = false
            }
        }
        // ... the remaining RecognitionListener overrides are omitted here
    }
)
This just started happening in one of my apps yesterday. I added a boolean to allow the code to execute only once, but I'd love an explanation as to why it suddenly started doing this. Any updates?
I use the following code, based on time differences, which should continue to work if Google ever gets around to fixing this bug.
long mStartTime = System.currentTimeMillis(); // Global Var

@Override
public void onResults(Bundle results)
{
    long difference = System.currentTimeMillis() - mStartTime;
    if (difference < 100)
    {
        // duplicate callback arriving within 100 ms; ignore it
        return;
    }
    mStartTime = System.currentTimeMillis();
    ArrayList<String> textMatchList =
        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    // process event
    Event_Handler(VOICE_DATA, textMatchList.get(0));
}
I faced the same issue in my app, but I fixed it with custom logic: a flag, i.e. a temp variable, that is false by default.
Set temp to true wherever you start listening for voice.
Then in your handler, add an if condition based on the temp variable, like:
if (temp) {
    // do something
    temp = false
}
The handler will still be called twice as usual, but this way you handle the data only once.
I am only able to broadcast audio using the mic and speaker, and if I use the setExternalAudioSource method, the broadcast comes with heavy unwanted noise. I just want to broadcast raw audio data without using the mic or speaker, and without the unwanted noise.
private val PERMISSION_REQ_ID_RECORD_AUDIO = 22
private var mRtcEngine: RtcEngine? = null// Tutorial Step 1
private val mRtcEventHandler = object : IRtcEngineEventHandler() { // Tutorial Step 1
override fun onUserOffline(uid: Int, reason: Int) { // Tutorial Step 4
//runOnUiThread { onRemoteUserLeft(uid, reason) }
}
override fun onUserMuteAudio(uid: Int, muted: Boolean) { // Tutorial Step 6
// runOnUiThread { onRemoteUserVoiceMuted(uid, muted) }
}
}
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
if (checkSelfPermission(Manifest.permission.RECORD_AUDIO, PERMISSION_REQ_ID_RECORD_AUDIO)) {
createRtcChannel()
}
}
fun checkSelfPermission(permission: String, requestCode: Int): Boolean {
if (ContextCompat.checkSelfPermission(this,
permission) != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this,
arrayOf(permission),
requestCode)
return false
}
return true
}
private fun createRtcChannel() {
initializeAgoraEngine() // Tutorial Step 1
joinChannel()
}
private fun initializeAgoraEngine() {
try {
mRtcEngine = RtcEngine.create(this, getString(R.string.agora_app_id), mRtcEventHandler)
//set the channel as live broadcast mode
mRtcEngine?.setChannelProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING)
mRtcEngine?.setClientRole(Constants.CLIENT_ROLE_BROADCASTER)
} catch (e: Exception) {
}
}
private fun joinChannel() {
mRtcEngine?.joinChannel(null, "voiceDemoChannel1", "Extra Optional Data", 0) // if you do not specify the uid, we will generate the uid for you
val payload = IOUtils.toByteArray(assets.openFd("ringtone.mp3").createInputStream())
mRtcEngine?.setExternalAudioSource(
    true, // enable the external audio source
    8000, // sample rate in Hz
    1     // number of channels
)
mRtcEngine?.pushExternalAudioFrame(
    payload,
    1000 // timestamp of the frame
)
}
Is this possible using agora or is there any alternative to it?
Reasons that can cause the noise:
Your source audio PCM samples are noisy by themselves
Engine bugs
The sample rate you set is wrong
For the first one, you can check your PCM samples directly. As for the second, since many people are already using the engine, it is unlikely to be the cause. So I would suggest you check the sample rate if you are sure your source PCM samples are good.
Also, you can set the APM option before you join the channel to enable audio enhancement for an external source:
setParameters("{\"che.audio.override_apm\":true}")
I have the following code for music recognition. I am using an IntentService to do all the music recognition in the service. I have done all the basic steps, like adding the required permissions and adding the ACRCloud Android SDK to the project.
class SongIdentifyService(discoverPresenter : DiscoverPresenter? = null) : IACRCloudListener , IntentService("SongIdentifyService") {
private val callback : SongIdentificationCallback? = discoverPresenter
private val mClient : ACRCloudClient by lazy { ACRCloudClient() }
private val mConfig : ACRCloudConfig by lazy { ACRCloudConfig() }
private var initState : Boolean = false
private var mProcessing : Boolean = false
override fun onHandleIntent(intent: Intent?) {
Log.d("SongIdentifyService", "onHandeIntent called" )
setUpConfig()
addConfigToClient()
if (callback != null) {
startIdentification(callback)
}
}
public fun setUpConfig(){
Log.d("SongIdentifyService", "setupConfig called")
this.mConfig.acrcloudListener = this#SongIdentifyService
this.mConfig.host = "some-host"
this.mConfig.accessKey = "some-accesskey"
this.mConfig.accessSecret = "some-secret"
this.mConfig.protocol = ACRCloudConfig.ACRCloudNetworkProtocol.PROTOCOL_HTTP // PROTOCOL_HTTPS
this.mConfig.reqMode = ACRCloudConfig.ACRCloudRecMode.REC_MODE_REMOTE
}
// Called to start identifying/discovering the song that is currently playing
fun startIdentification(callback: SongIdentificationCallback)
{
Log.d("SongIdentifyService", "startIdentification called")
if(!initState)
{
Log.d("AcrCloudImplementation", "init error")
}
if(!mProcessing) {
mProcessing = true
if (!mClient.startRecognize()) {
mProcessing = false
Log.d("AcrCloudImplementation" , "start error")
}
}
}
// Called to stop identifying/discovering song
fun stopIdentification()
{
Log.d("SongIdentifyService", "stopIdentification called")
if(mProcessing)
{
mClient.stopRecordToRecognize()
}
mProcessing = false
}
fun cancelListeningToIdentifySong()
{
if(mProcessing)
{
mProcessing = false
mClient.cancel()
}
}
fun addConfigToClient(){
Log.d("SongIdentifyService", "addConfigToClient called")
this.initState = this.mClient.initWithConfig(this.mConfig)
if(this.initState)
{
this.mClient.startPreRecord(3000)
}
}
override fun onResult(result: String?) {
Log.d("SongIdentifyService", "onResult called")
Log.d("SongIdentifyService",result)
mClient.cancel()
mProcessing = false
val result = Gson().fromJson(result, SongIdentificationResult :: class.java)
if(result.status.code == 3000)
{
callback!!.onOfflineError()
}
else if(result.status.code == 1001)
{
callback!!.onSongNotFound()
}
else if(result.status.code == 0 )
{
callback!!.onSongFound(MusicDataMapper().convertFromDataModel(result))
//callback!!.onSongFound(Song("", "", ""))
}
else
{
callback!!.onGenericError()
}
}
override fun onVolumeChanged(p0: Double) {
TODO("not implemented") //To change body of created functions use File | Settings | File Templates.
}
interface SongIdentificationCallback {
// Called when the user is offline and music identification failed
fun onOfflineError()
// Called when a generic error occurs and music identification failed
fun onGenericError()
// Called when music identification completed but couldn't identify the song
fun onSongNotFound()
// Called when identification completed and a matching song was found
fun onSongFound(song: Song)
}
}
Now when I am starting the service I am getting a NullPointerException.
I checked the implementation of ACRCloudClient, and it extends Android's Activity. ACRCloudClient also uses SharedPreferences (that's why I am getting a null pointer exception).
Since keeping a reference to an activity in a service is not a good idea, it's best to implement the above code in the activity. All the recognition work is done on a separate thread anyway inside the ACRCloudClient class, so there is no point in creating another service for it.
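A minimal sketch of that suggestion, moving the client into the activity and reusing the config fields from the question (host/key/secret elided); release() is assumed to free the client, as in the ACRCloud demo code:
class DiscoverActivity : AppCompatActivity(), IACRCloudListener {

    private val mClient = ACRCloudClient()
    private val mConfig = ACRCloudConfig()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        mConfig.acrcloudListener = this
        // host, accessKey, accessSecret, protocol and reqMode as in the question
        if (mClient.initWithConfig(mConfig)) {
            mClient.startRecognize() // recognition runs on the client's own worker thread
        }
    }

    override fun onResult(result: String?) {
        // parse with Gson and dispatch to the callback, as in the question's onResult
    }

    override fun onVolumeChanged(volume: Double) { /* no-op */ }

    override fun onDestroy() {
        mClient.cancel()
        mClient.release() // assumed cleanup call, as used in the ACRCloud demos
        super.onDestroy()
    }
}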
I have implemented ExoPlayer in my application using the example from the codelab https://codelabs.developers.google.com/codelabs/exoplayer-intro/#3, along with the example from https://medium.com/google-exoplayer/playing-ads-with-exoplayer-and-ima-868dfd767ea. The only difference is that I use AdsMediaSource instead of the deprecated ImaAdsMediaSource.
My Implementation is this:
class HostVideoFullFragment : Fragment(), AdsMediaSource.MediaSourceFactory {
override fun getSupportedTypes() = intArrayOf(C.TYPE_DASH, C.TYPE_HLS, C.TYPE_OTHER)
override fun createMediaSource(uri: Uri?, handler: Handler?, listener: MediaSourceEventListener?): MediaSource {
@C.ContentType val type = Util.inferContentType(uri)
return when (type) {
C.TYPE_DASH -> {
DashMediaSource.Factory(
DefaultDashChunkSource.Factory(mediaDataSourceFactory),
manifestDataSourceFactory)
.createMediaSource(uri, handler, listener)
}
C.TYPE_HLS -> {
HlsMediaSource.Factory(mediaDataSourceFactory)
.createMediaSource(uri, handler, listener)
}
C.TYPE_OTHER -> {
ExtractorMediaSource.Factory(mediaDataSourceFactory)
.createMediaSource(uri, handler, listener)
}
else -> throw IllegalStateException("Unsupported type for createMediaSource: $type")
}
}
private var player: SimpleExoPlayer? = null
private lateinit var playerView: SimpleExoPlayerView
private lateinit var binding: FragmentHostVideoFullBinding
private var playbackPosition: Long = 0
private var currentWindow: Int = 0
private var playWhenReady = true
private var inErrorState: Boolean = false
private lateinit var adsLoader: ImaAdsLoader
private lateinit var manifestDataSourceFactory: DataSource.Factory
private lateinit var mediaDataSourceFactory: DataSource.Factory
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
//Initialize the adsLoader
adsLoader = ImaAdsLoader(activity as Context, Uri.parse("https://pubads.g.doubleclick.net/gampad/ads?sz=640x480&iu=/124319096/external/ad_rule_samples&ciu_szs=300x250&ad_rule=1&impl=s&gdfp_req=1&env=vp&output=vmap&unviewed_position_start=1&cust_params=deployment%3Ddevsite%26sample_ar%3Dpremidpost&cmsid=496&vid=short_onecue&correlator="))
manifestDataSourceFactory = DefaultDataSourceFactory(
context, Util.getUserAgent(context, "BUO-APP"))//TODO change the applicationName with the right application name
//
mediaDataSourceFactory = DefaultDataSourceFactory(
context,
Util.getUserAgent(context, "BUO-APP"),//TODO change the applicationName with the right application name
DefaultBandwidthMeter())
}
private fun initializePlayer() {
/*
* Since the player can change from null (when we release resources) to not null we have to check if it's null.
* If it is then reset again
* */
if (player == null) {
//Initialize the Exo Player
player = ExoPlayerFactory.newSimpleInstance(DefaultRenderersFactory(activity as Context),
DefaultTrackSelector())
}
val uri = Uri.parse(videoURl)
val mediaSourceWithAds = buildMediaSourceWithAds(uri)
//Bind the view from the xml to the SimpleExoPlayer instance
playerView.player = player
//Add the listener that listens for errors
player?.addListener(PlayerEventListener())
player?.seekTo(currentWindow, playbackPosition)
player?.prepare(mediaSourceWithAds, true, false)
//In case we could not set the exo player
player?.playWhenReady = playWhenReady
//We got here without an error, therefore set the inErrorState as false
inErrorState = false
//Re update the retry button since, this method could have come from a retry click
updateRetryButton()
}
private inner class PlayerEventListener : Player.DefaultEventListener() {
fun updateResumePosition() {
player?.let {
currentWindow = player!!.currentWindowIndex
playbackPosition = Math.max(0, player!!.contentPosition)
}
}
override fun onPlayerStateChanged(playWhenReady: Boolean, playbackState: Int) {
//The player state has ended
//TODO check if there is going to be a UI change here
// if (playbackState == Player.STATE_ENDED) {
// showControls()
// }
// updateButtonVisibilities()
}
override fun onPositionDiscontinuity(@Player.DiscontinuityReason reason: Int) {
if (inErrorState) {
// This will only occur if the user has performed a seek whilst in the error state. Update
// the resume position so that if the user then retries, playback will resume from the
// position to which they seek.
updateResumePosition()
}
}
override fun onPlayerError(e: ExoPlaybackException?) {
var errorString: String? = null
//Check what was the error so that we can show the user what was the correspond problem
if (e?.type == ExoPlaybackException.TYPE_RENDERER) {
val cause = e.rendererException
if (cause is MediaCodecRenderer.DecoderInitializationException) {
// Special case for decoder initialization failures.
errorString = if (cause.decoderName == null) {
when {
cause.cause is MediaCodecUtil.DecoderQueryException -> getString(R.string.error_querying_decoders)
cause.secureDecoderRequired -> getString(R.string.error_no_secure_decoder,
cause.mimeType)
else -> getString(R.string.error_no_decoder,
cause.mimeType)
}
} else {
getString(R.string.error_instantiating_decoder,
cause.decoderName)
}
}
}
if (errorString != null) {
//Show the toast with the proper error
Toast.makeText(activity as Context, errorString, Toast.LENGTH_LONG).show()
}
inErrorState = true
if (isBehindLiveWindow(e)) {
clearResumePosition()
initializePlayer()
} else {
updateResumePosition()
updateRetryButton()
}
}
}
private fun clearResumePosition() {
//Clear the current resume position, since there was an error
currentWindow = C.INDEX_UNSET
playbackPosition = C.TIME_UNSET
}
/*
* This is for the multi window support
* */
private fun isBehindLiveWindow(e: ExoPlaybackException?): Boolean {
if (e?.type != ExoPlaybackException.TYPE_SOURCE) {
return false
}
var cause: Throwable? = e.sourceException
while (cause != null) {
if (cause is BehindLiveWindowException) {
return true
}
cause = cause.cause
}
return false
}
private fun buildMediaSourceWithAds(uri: Uri): MediaSource {
/*
* This content media source is the video itself without the ads
* */
val contentMediaSource = ExtractorMediaSource.Factory(
DefaultHttpDataSourceFactory("BUO-APP")).createMediaSource(uri) //TODO change the user agent
/*
* The method constructs and returns a ExtractorMediaSource for the given uri.
* We simply use a new DefaultHttpDataSourceFactory which only needs a user agent string.
* By default the factory will use a DefaultExtractorFactory for the media source.
* This supports almost all non-adaptive audio and video formats supported on Android. It will recognize our mp3 file and play it nicely.
* */
return AdsMediaSource(
contentMediaSource,
/* adMediaSourceFactory= */ this,
adsLoader,
playerView.overlayFrameLayout,
/* eventListener= */ null, null)
}
override fun onStart() {
super.onStart()
if (Util.SDK_INT > 23) {
initializePlayer()
}
}
override fun onResume() {
super.onResume()
hideSystemUi()
/*
* Starting with API level 24 Android supports multiple windows.
* As our app can be visible but not active in split window mode, we need to initialize the player in onStart.
* Before API level 24 we wait as long as possible until we grab resources, so we wait until onResume before initializing the player.
* */
if ((Util.SDK_INT <= 23 || player == null)) {
initializePlayer()
}
}
}
The ad never shows, and when it does, there is a rendering error E/ExoPlayerImplInternal: Renderer error, which never allows the video to play. I've also run the example code from the IMA ads documentation (https://developers.google.com/interactive-media-ads/docs/sdks/android/), and it doesn't work either. Has anyone implemented ExoPlayer successfully with the latest ExoPlayer library version?
Please help. Thanks!
When on an emulator, be sure to enable GPU rendering on the virtual device.
The problem is that the emulator cannot render videos, which is why it wasn't showing the ads or the video. Run the app on a phone and it will work.
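For reference, GPU rendering can be enabled by setting Graphics to "Hardware" in the AVD's settings, or, assuming a command-line launch (the AVD name below is hypothetical), by forcing the GPU mode:
emulator -avd Pixel_API_28 -gpu host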