I'm trying to create an Android app that connects to a DoorBird device. I know the company's official app, but I need more features tailored to my needs.
For anyone who doesn't know what a DoorBird device is: DoorBird is a smart intercom made by the company of the same name. The device can transmit audio and video from itself to any consumer, such as an Android device, over HTTP and RTSP, and it can also receive an audio stream and play it, so you can, for example, record audio on an Android device and transmit it to the DoorBird. The audio format is G.711 µ-law.
I was able to receive the video and audio streams from the DoorBird, and that works perfectly, but I have not succeeded in transmitting audio, in the µ-law format of course, to the DoorBird.
The error I get is:
HTTP FAILED: java.net.ProtocolException: Unexpected status line:
I tried to transmit back to the DoorBird the same bytes I receive from it, but I still get the same error.
Of course I am working according to the API they published, but there is not much information about the expected protocol for transmitting audio.
Official DoorBird API
Is there an example of an Android project that integrates with DoorBird?
Can anyone help with transmitting audio to the DoorBird?
Which protocol should it be?
Even if someone knows how to transmit audio to the DoorBird with other tools on another system, not just Android OS, I'd appreciate it.
This is what I tried: I received the data from the DoorBird (which, as I said, works), waited 3 seconds, and transmitted it back to the DoorBird with the Retrofit library.
const val AUDIO_PATH =
    "http://192.168.1.187/bha-api/audio-receive.cgi?http-user=XXXXXX0001&http-password=XXXXXXXXXX"

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    //InputStream inputStream = getResources().openRawResource(R.raw.piano12);
    val thread = Thread { this.playUrl() }
    thread.start()
    //val inStr = assets.open("doorbird_record")
}

private fun playUrl() {
    val inStr = URL(AUDIO_PATH).openStream()
    val buffer = ByteArray(1000)
    var i = 0
    //while (inStr.read(buffer).also { i = it } != -1) {
    Handler(Looper.getMainLooper()).postDelayed({
        //inStr.close()
        inStr.read(buffer)
        Log.d("DoorbirdLog", inStr.toString())
        val part = MultipartBody.Part.createFormData(
            "doorbirdStream", "doorbird", buffer.toRequestBody(
                ("audio/basic").toMediaType()
            )
        )
        //val rb = file.asRequestBody(("audio/*").toMediaType())
        val call = NetworkManager.instanceServiceApi.upload(part)
        call.enqueue(object : Callback<ResponseBody> {
            override fun onResponse(
                call: Call<ResponseBody>,
                response: Response<ResponseBody>
            ) {
                val i = response.body()
                Log.d("success", i.toString())
            }

            override fun onFailure(call: Call<ResponseBody>, t: Throwable) {
                Log.d("failed", t.message.toString())
            }
        })
    }, 3000)
}
And the Retrofit instance:
@Multipart
@Headers(
    "Content-Type: audio/basic",
    "Content-Length: 9999999",
    "Connection: Keep-Alive",
    "Cache-Control: no-cache"
)
@POST("audio-transmit.cgi?http-user=XXXXXX0001&http-password=XXXXXXXXXX")
fun upload(@Part part: MultipartBody.Part): Call<ResponseBody>
I'd appreciate your assistance
Eventually, I was able to find a solution. I'll briefly present it here for anyone who tries to integrate with DoorBird.
private const val FREQUENCY_SAMPLE_RATE_TRANSMIT = 8000
private const val RECORD_STATE_STOPPED = 0

override suspend fun recordAndTransmitAudio(audioTransmitUrl: String) =
    withContext(Dispatchers.IO) {
        val minBufferSize = AudioRecord.getMinBufferSize(
            FREQUENCY_SAMPLE_RATE_TRANSMIT, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT
        )
        mRecorder = AudioRecord(
            MediaRecorder.AudioSource.VOICE_COMMUNICATION,
            FREQUENCY_SAMPLE_RATE_TRANSMIT, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, minBufferSize
        )
        mRecorder?.let { enableAcousticEchoCanceler(it.audioSessionId) }
        mRecorder?.startRecording()

        val bufferShort = ShortArray(minBufferSize)
        val buffer = ByteArray(minBufferSize)

        val urlConnection = URL(audioTransmitUrl).openConnection() as HttpURLConnection
        urlConnection.apply {
            doOutput = true
            setChunkedStreamingMode(minBufferSize)
        }
        val output = DataOutputStream(urlConnection.outputStream)
        output.flush()

        try {
            mRecorder?.let { recorder ->
                while (recorder.read(bufferShort, 0, bufferShort.size) != RECORD_STATE_STOPPED) {
                    G711UCodecManager.encode(bufferShort, minBufferSize, buffer, 0)
                    output.write(buffer)
                }
            }
        } catch (e: Exception) {
            Log.d(TAG, e.message.toString())
        }
        output.close()
        urlConnection.disconnect()
    }
First, we prepare the necessary parameters for recording and transmission: we get the minimum buffer size for recording, create the AudioRecord object we will record with, enable acoustic echo cancellation, and start recording. We then open a connection to the transmit URL in chunked streaming mode. In a loop that runs as long as the recording has not stopped, we encode the data we recorded from 16-bit PCM to G.711 µ-law and write it to the output stream. And of course, when the recording is finished, we clean up the resources.
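The G711UCodecManager used above is not shown in the original answer. For completeness, here is a minimal sketch of what its encode step could look like, following the standard G.711 µ-law compression algorithm; the object name and method signature simply mirror the call site above, so treat this as an illustration rather than the author's exact code.
object G711UCodecManager {
    private const val BIAS = 0x84  // 132, the standard µ-law bias
    private const val CLIP = 32635

    // Compress one 16-bit PCM sample into one 8-bit µ-law byte
    private fun linearToUlaw(pcm: Short): Byte {
        var sample = pcm.toInt()
        val sign = if (sample < 0) 0x80 else 0x00
        if (sample < 0) sample = -sample
        if (sample > CLIP) sample = CLIP
        sample += BIAS
        // Find the segment (exponent): position of the highest set bit above bit 7
        var exponent = 7
        var mask = 0x4000
        while (exponent > 0 && sample and mask == 0) {
            exponent--
            mask = mask shr 1
        }
        val mantissa = (sample shr (exponent + 3)) and 0x0F
        // µ-law bytes are stored inverted
        return ((sign or (exponent shl 4) or mantissa).inv() and 0xFF).toByte()
    }

    // Matches the call site above: encode `count` samples from src into dst
    fun encode(src: ShortArray, count: Int, dst: ByteArray, offset: Int) {
        for (i in 0 until count) dst[offset + i] = linearToUlaw(src[i])
    }
}
Each 16-bit sample compresses to a single byte, which is why the ByteArray above can have the same length in bytes as the ShortArray has in samples.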
Related
I am decoding an mp3 file. First I convert the mp3 file into chunks of ByteArray of size 1000, put them in a CircularArray, and then pass them to the MediaCodec callback for decoding (one ByteArray at a time), following this link. It works fine on Samsung devices, but on devices other than Samsung (Vivo, Pixel 3a) it crashes at mediaCodec.getInputBuffer(index) in the onInputBufferAvailable callback with an IllegalStateException. My code is as follows:
var decoder: MediaCodec = MediaCodec.createDecoderByType("audio/mpeg")
decoder.configure(format, null, null, 0)
decoder.setCallback(object : MediaCodec.Callback() {
    override fun onInputBufferAvailable(mediaCodec: MediaCodec, i: Int) {
        while (true) {
            if (circularArray!!.size() > 0) {
                val data: ByteArray = circularArray.popFirst()
                val info = MediaCodec.BufferInfo()
                val buffer = mediaCodec.getInputBuffer(i)
                buffer!!.put(data, 0, data.size)
                mediaCodec.queueInputBuffer(i, 0, data.size, 0, 0)
                break
            }
        }
    }

    override fun onOutputBufferAvailable(mediaCodec: MediaCodec, i: Int, info: MediaCodec.BufferInfo) {
        //DECODING PACKET ENDED
        val outBuffer = mediaCodec.getOutputBuffer(i)
        val chunk = ByteArray(info.size)
        outBuffer!![chunk] // Read the buffer all at once
        outBuffer.clear()
        Log.d(TAG, "onOutputBufferAvailable: ${info.size}")
        audioTrack!!.write(chunk, info.offset, info.offset + info.size)
        mediaCodec.releaseOutputBuffer(i, false)
    }

    override fun onError(mediaCodec: MediaCodec, e: MediaCodec.CodecException) {}
    override fun onOutputFormatChanged(mediaCodec: MediaCodec, mediaFormat: MediaFormat) {}
})
decoder.start()
I converted my file like this:
val tempBuf = ByteArray(1000)
var byteRead: Int
try {
    val bufferedInputStream = BufferedInputStream(FileInputStream(mp3File))
    while (bufferedInputStream.read(tempBuf).also { byteRead = it } != -1) {
        circularArray.addLast(tempBuf.copyOf())
    }
    bufferedInputStream.close()
    Thread(aacDecoderAndPlayRunnable).start()
} catch (e: java.lang.Exception) {
    Log.d(TAG, "fileToInputStream: ${e.message}")
    e.printStackTrace()
}
The app crashes with the IllegalStateException mentioned above.
Even if I try to get the format from the MediaCodec in the callback, it throws an exception and crashes anyway. I also checked the codec's supported types; it does support audio/mpeg.
First of all, MediaCodec works with a queue of input buffers; you can read more about it in the docs.
The second parameter of the onInputBufferAvailable callback is the index of the buffer. When calling getInputBuffer() you must pass this index instead of 0:
val buffer = mediaCodec.getInputBuffer(i)
Second, consider using MediaExtractor instead of reading the file yourself. It supplies you with presentation timestamps and flags to pass into queueInputBuffer().
Third, you need to remove the while (true) loop: you can only queue one buffer per callback invocation.
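Putting those three fixes together, a minimal sketch might look like this (decoder, extractor, and audioTrack are assumed to be created and configured elsewhere, with the extractor's data source set and its audio track selected):
decoder.setCallback(object : MediaCodec.Callback() {
    override fun onInputBufferAvailable(codec: MediaCodec, index: Int) {
        // Queue exactly one buffer per callback, using the provided index
        val buffer = codec.getInputBuffer(index) ?: return
        val size = extractor.readSampleData(buffer, 0)
        if (size < 0) {
            // No more samples: signal end of stream
            codec.queueInputBuffer(index, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM)
        } else {
            codec.queueInputBuffer(index, 0, size, extractor.sampleTime, 0)
            extractor.advance()
        }
    }

    override fun onOutputBufferAvailable(codec: MediaCodec, index: Int, info: MediaCodec.BufferInfo) {
        val outBuffer = codec.getOutputBuffer(index) ?: return
        val chunk = ByteArray(info.size)
        outBuffer.get(chunk)
        audioTrack.write(chunk, 0, info.size)
        codec.releaseOutputBuffer(index, false)
    }

    override fun onError(codec: MediaCodec, e: MediaCodec.CodecException) {}
    override fun onOutputFormatChanged(codec: MediaCodec, format: MediaFormat) {}
})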
I am getting some audio streaming data as a base64 String. I convert it into a ByteArray and then write it to a local file as mp3, to play in MediaPlayer. But the problem is that MediaPlayer throws error (1, -2147483648). How do I solve this? I have tried many SO posts, but nothing works.
What I am trying to do is fetch the base64 string, save it locally, and play it.
val file = requireContext().getExternalFilesDir(null)?.absolutePath + "/audioRecording1.mp3"
val mediaPlayer = MediaPlayer()
try {
    val output = FileOutputStream(file)
    output.write(mp3SoundByteArray)
    output.close()
    val fis = FileInputStream(file)
    mediaPlayer.setDataSource(fis.fd)
    fis.close()
    mediaPlayer.setAudioAttributes(
        AudioAttributes.Builder()
            .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
            .setUsage(AudioAttributes.USAGE_MEDIA)
            .build()
    )
    mediaPlayer.prepareAsync()
    mediaPlayer.setOnPreparedListener {
        mediaPlayer.start()
    }
    mediaPlayer.setOnErrorListener { _, i, i2 ->
        Log.v("", "$i, $i2")
        true
    }
} catch (e: Exception) {
    toast(e.message!!)
}
Could you please tell me how to overcome this?
I am not sure, but it seems that you have trouble with file saving:
fun saveFile(responseBody: ResponseBody?, pathWhereYouWantToSaveFile: String) {
    val body = responseBody ?: return
    var input: InputStream? = null
    try {
        val uri = Uri.parse(pathWhereYouWantToSaveFile)
        input = body.byteStream()
        val parcelFileDescriptor =
            context.getContentResolver().openFileDescriptor(uri, FileConst.WRITE_MODE)
        val fileOutputStream = FileOutputStream(parcelFileDescriptor?.fileDescriptor)
        fileOutputStream.use { output ->
            val bufferSize = BUFFER_SIZE.toInt()
            val buffer = ByteArray(bufferSize)
            var read: Int
            while (input.read(buffer).also { read = it } != END_OF_FILE) {
                output.write(buffer, START_OFFSET, read)
            }
            output.flush()
        }
    } catch (exception: Exception) {
        logErrorIfDebug(exception)
    } finally {
        input?.close()
    }
}
const val READ_MODE = "r"
const val WRITE_MODE = "w"
const val START_OFFSET = 0
const val END_OF_FILE = -1
const val BYTES_IN_KILOBYTE = 1024 // not in the original snippet; assumed value
const val BUFFER_SIZE = 4 * BYTES_IN_KILOBYTE
Try this in your ViewModel or data-source layer, then send the result to the UI layer and use it there.
Have you checked that your file was saved correctly? You can go to the directory and try to open the file. If everything is okay, you can get it by URI in your media player.
Also check that you are not creating one path for saving and a different one for retrieving.
A better way to play media is ExoPlayer: https://exoplayer.dev/
But the native library can also work with an internal URI path.
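For example, a minimal usage sketch of saveFile (the api.downloadAudio() endpoint and the viewModelScope wiring are my own placeholders, not part of the snippet above; note that openFileDescriptor() needs a resolvable Uri string, not a bare path):
viewModelScope.launch(Dispatchers.IO) {
    // Assumed Retrofit endpoint: a @Streaming @GET returning the raw bytes
    val body: ResponseBody? = api.downloadAudio().body()
    val target = File(context.getExternalFilesDir(null), "audioRecording1.mp3")
    // Pass a real Uri string so contentResolver.openFileDescriptor() can resolve it
    saveFile(body, Uri.fromFile(target).toString())
}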
If you just take a random part of a base64-encoded audio stream, then after decoding, your ByteArray will contain a fragment of an audio file: some audio stream bytes, not a complete, valid mp3 file with headers and such.
If you had said "I am getting an mp3 file in one base64 String", then your approach would be OK.
I have solved the issue without writing any header, in the following way:
val clipData = android.util.Base64.decode(data, 0)
// Open in append mode so each decoded chunk is added to the end of the file
val output = FileOutputStream(file, true)
output.write(clipData)
output.close()
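Putting the two answers together, a sketch of the whole flow might look like this (the names are mine; the key points are the append-mode FileOutputStream and playing only after all chunks have arrived). This works without writing any header because MPEG audio frames are self-contained, so a player can sync to the concatenated frames:
// `chunks` is the sequence of base64 strings received from the stream
fun saveAndPlay(chunks: List<String>, file: File) {
    FileOutputStream(file, true).use { out ->
        for (chunk in chunks) {
            // Decode and append each chunk in arrival order
            out.write(android.util.Base64.decode(chunk, 0))
        }
    }
    val mediaPlayer = MediaPlayer()
    mediaPlayer.setDataSource(file.absolutePath)
    mediaPlayer.setOnPreparedListener { it.start() }
    mediaPlayer.prepareAsync()
}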
I've followed Google's instructions on how to cast media metadata to a Chromecast. The initial load works fine: it shows the title and image and plays the stream. My problem is that I am streaming live audio and need to update the metadata from time to time, without buffering the audio again.
This is a sample of my code:
override fun loadMediaLoadRequestData(request: PlatformBridgeApis.MediaLoadRequestData?) {
    if (request == null) return
    val remoteMediaClient: RemoteMediaClient = remoteMediaClient ?: return
    val mediaLoadRequest = getMediaLoadRequestData(request)
    remoteMediaClient.load(mediaLoadRequest)
}
fun getMediaLoadRequestData(request: PlatformBridgeApis.MediaLoadRequestData): MediaLoadRequestData {
    val mediaInfo = getMediaInfo(request.mediaInfo)
    return MediaLoadRequestData.Builder()
        .setMediaInfo(mediaInfo)
        .setAutoplay(request.shouldAutoplay)
        .setCurrentTime(request.currentTime)
        .build()
}

fun getMediaInfo(mediaInfo: PlatformBridgeApis.MediaInfo?): MediaInfo? {
    if (mediaInfo == null) return null
    val streamType = getStreamType(mediaInfo.streamType)
    val metadata = getMediaMetadata(mediaInfo.mediaMetadata)
    val mediaTracks = mediaInfo.mediaTracks.map { getMediaTrack(it) }
    val customData = JSONObject(mediaInfo.customDataAsJson ?: "{}")
    return MediaInfo.Builder(mediaInfo.contentId)
        .setStreamType(streamType)
        .setContentType(mediaInfo.contentType)
        .setMetadata(metadata)
        .setMediaTracks(mediaTracks)
        .setStreamDuration(mediaInfo.streamDuration)
        .setCustomData(customData)
        .build()
}
Does anyone have any suggestion on how to modify loadMediaLoadRequestData so that the Chromecast receiver updates only the MediaMetadata, without making the stream buffer again?
From the sample doc: the following code starts recording audio, streams it to Google Cloud, and receives responses. Everything works, but I want to close the streaming after a certain condition is met.
if (mPermissionToRecord) {
    val isFirstRequest = AtomicBoolean(true)
    mAudioEmitter = AudioEmitter()
    textView.setText("starting listener.")

    // start streaming the data to the server and collect responses
    val requestStream = mSpeechClient.streamingRecognizeCallable()
        .bidiStreamingCall(object : ApiStreamObserver<StreamingRecognizeResponse> {
            override fun onNext(value: StreamingRecognizeResponse) {
                runOnUiThread {
                    when {
                        value.resultsCount > 0 -> mTextView.setText(
                            value.getResults(0).getAlternatives(0).transcript
                        )
                        else -> mTextView.setText(getString(R.string.api_error))
                    }
                }
            }

            override fun onError(t: Throwable) {
                //Log.e(TAG, "an error occurred", t)
                textView.setText("an error occurred " + t.toString())
            }

            override fun onCompleted() {
                //Log.d(TAG, "stream closed")
                textView.setText("stream closed")
            }
        })

    // monitor the input stream and send requests as audio data becomes available
    mAudioEmitter!!.start { bytes ->
        val builder = StreamingRecognizeRequest.newBuilder()
            .setAudioContent(bytes)

        // if first time, include the config
        if (isFirstRequest.getAndSet(false)) {
            builder.streamingConfig = StreamingRecognitionConfig.newBuilder()
                .setConfig(
                    RecognitionConfig.newBuilder()
                        .setLanguageCode("en-US")
                        .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                        .setSampleRateHertz(16000)
                        .build()
                )
                .setInterimResults(true)
                .setSingleUtterance(false)
                .build()
        }

        // send the next request
        requestStream.onNext(builder.build())
    }
} else {
    Log.e(TAG, "No permission to record! Please allow.")
}
AudioEmitter() is an audio recorder class. I tried calling:
mAudioEmitter?.stop()
mAudioEmitter = null
but that only stops the audio recording. I want to stop the audio streaming as well.
Calling mSpeechClient.shutdown() crashes the app.
How to stop SpeechClient bidiStreamingCall?
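This one is left unanswered above; a hedged suggestion based on general gRPC bidirectional-streaming semantics, not verified against this exact sample: the requestStream returned by bidiStreamingCall() is itself a stream observer, and half-closing it should end the streaming call without shutting down the whole client (you would need to keep a reference to requestStream outside the block where it is created):
fun stopStreaming() {
    mAudioEmitter?.stop()  // stop producing audio
    mAudioEmitter = null
    // Half-close the sending side; the server should then deliver any final
    // responses and invoke onCompleted() on the response observer ("stream closed")
    requestStream.onCompleted()
}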
I am only able to broadcast audio using the mic and speaker, and if I use the setExternalAudioSource method, the broadcast comes through with heavy unwanted noise. I just want to broadcast raw audio data, without using the mic or speaker and without the unwanted noise.
private val PERMISSION_REQ_ID_RECORD_AUDIO = 22
private var mRtcEngine: RtcEngine? = null // Tutorial Step 1

private val mRtcEventHandler = object : IRtcEngineEventHandler() { // Tutorial Step 1
    override fun onUserOffline(uid: Int, reason: Int) { // Tutorial Step 4
        //runOnUiThread { onRemoteUserLeft(uid, reason) }
    }

    override fun onUserMuteAudio(uid: Int, muted: Boolean) { // Tutorial Step 6
        //runOnUiThread { onRemoteUserVoiceMuted(uid, muted) }
    }
}

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    if (checkSelfPermission(Manifest.permission.RECORD_AUDIO, PERMISSION_REQ_ID_RECORD_AUDIO)) {
        createRtcChannel()
    }
}

fun checkSelfPermission(permission: String, requestCode: Int): Boolean {
    if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this, arrayOf(permission), requestCode)
        return false
    }
    return true
}

private fun createRtcChannel() {
    initializeAgoraEngine() // Tutorial Step 1
    joinChannel()
}

private fun initializeAgoraEngine() {
    try {
        mRtcEngine = RtcEngine.create(this, getString(R.string.agora_app_id), mRtcEventHandler)
        // set the channel as live broadcast mode
        mRtcEngine?.setChannelProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING)
        mRtcEngine?.setClientRole(Constants.CLIENT_ROLE_BROADCASTER)
    } catch (e: Exception) {
    }
}

private fun joinChannel() {
    // if you do not specify the uid, the SDK will generate one for you
    mRtcEngine?.joinChannel(null, "voiceDemoChannel1", "Extra Optional Data", 0)
    val payload = IOUtils.toByteArray(assets.openFd("ringtone.mp3").createInputStream())
    mRtcEngine?.setExternalAudioSource(true, 8000, 1)
    mRtcEngine?.pushExternalAudioFrame(payload, 1000)
}
Is this possible using Agora, or is there an alternative to it?
Reasons that can cause the noise:
1. Your source audio PCM samples are noisy themselves
2. Engine bugs
3. The sample rate you set is wrong
For the first one, you can check your PCM samples directly. The second is rarely the case, since many people are already using the engine. So I would suggest checking the sample rate if you are sure your source PCM samples are good.
Also, you can set the APM option before you join the channel to enable audio enhancement for an external source, by
setParameters("{\"che.audio.override_apm\":true}")