How to set sound playback to the external speaker? - Android

There is a kind of weird issue. I am using the oboe library https://github.com/google/oboe for sound playback. Of course, you can choose the playback output according to the Android settings:
https://developer.android.com/reference/android/media/AudioDeviceInfo
So, if I need to set an exact output channel, I need to pass it to the oboe library.
By the way, the output channel that I need is TYPE_BUILTIN_SPEAKER, but on some devices (sometimes, not constantly) I hear the sound from TYPE_BUILTIN_EARPIECE.
This is how I do it. I have the following method to get the needed channel id:
fun findAudioDevice(app: Application,
                    deviceFlag: Int,
                    deviceType: Int): AudioDeviceInfo? {
    var result: AudioDeviceInfo? = null
    val manager = app.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    val adis = manager.getDevices(deviceFlag)
    for (adi in adis) {
        if (adi.type == deviceType) {
            result = adi
            break
        }
    }
    return result
}
This is how I use it:
val id = getAudioDeviceInfoId(getBuildInSpeakerInfo())

private fun getBuildInSpeakerInfo(): AudioDeviceInfo? {
    return com.tetavi.ar.basedomain.utils.Utils.findAudioDevice(
        getApplication<Application>(),
        AudioManager.GET_DEVICES_OUTPUTS,
        AudioDeviceInfo.TYPE_BUILTIN_SPEAKER
    )
}
private fun getAudioDeviceInfoId(info: AudioDeviceInfo?): Int {
    var result = -1
    if (info != null) {
        result = info.id
    }
    return result
}
Eventually I need to pass this id to the oboe library. Oboe is a native library, so I pass the id over JNI and set it:
oboe::Result oboe_engine::createPlaybackStream() {
    oboe::AudioStreamBuilder builder;
    const oboe::SharingMode sharingMode = oboe::SharingMode::Exclusive;
    const int32_t sampleRate = mBackingTrack->getSampleRate();
    const oboe::AudioFormat audioFormat = oboe::AudioFormat::Float;
    const oboe::PerformanceMode performanceMode = oboe::PerformanceMode::PowerSaving;

    builder.setSharingMode(sharingMode)
            ->setPerformanceMode(performanceMode)
            ->setFormat(audioFormat)
            ->setCallback(this)
            ->setSampleRate(sampleRate);

    if (m_output_playback_chanel_id != EMPTY_NUM) {
        // set the output playback channel (internal or external speaker)
        builder.setDeviceId(m_output_playback_chanel_id); // <------------- THIS LINE
    }

    return builder.openStream(&mAudioStream);
}
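The JNI hand-off itself is not shown above. As a minimal sketch of the Kotlin side, assuming a hypothetical bridge class, native method name and .so name (they are not part of oboe and only stand in for whatever the project actually uses):

class OboeEngineBridge {
    companion object {
        init {
            // "native-audio" is a made-up library name for this sketch.
            System.loadLibrary("native-audio")
        }
    }

    // Implemented in C++; expected to store the id into m_output_playback_chanel_id,
    // which createPlaybackStream() above later passes to builder.setDeviceId().
    external fun setPlaybackDeviceId(deviceId: Int)
}

// Usage: resolve the built-in speaker id and push it down before opening the stream.
// val speakerId = getAudioDeviceInfoId(getBuildInSpeakerInfo())
// if (speakerId != -1) bridge.setPlaybackDeviceId(speakerId)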
So, the issue is that on some devices (sometimes, not constantly) I still hear the sound playback from the internal speaker TYPE_BUILTIN_EARPIECE, even though I set directly that I need to use TYPE_BUILTIN_SPEAKER.
I checked the flow a few times, from the moment I get this id (it is actually 3) up to the moment I pass it as a parameter to the oboe library, but sometimes I still hear the sound from the internal speaker.
So, the question is - am I missing something here? Maybe some trick should be implemented, or something else?


Movie is deprecated now

I'm looking for an alternative to Movie to get the duration of a GIF. I tried ImageDecoder but I couldn't get the duration.
//Deprecated
val movie = Movie.decodeStream(`is`)
val duration = movie.duration()
Movie probably still works even though it's deprecated. But if you're not going to go on to use that Movie instance to play the GIF, that's a bad way of getting the duration, because it has to load the entire thing when all you really need to find the duration is in the metadata at the beginning of the file.
You could use the Metadata Extractor library to do this.
Since it's reading from a file, it is blocking and should be done in the background. Here's an example using a suspend function to accomplish that.
import android.util.Log
import com.drew.imaging.ImageMetadataReader
import com.drew.metadata.gif.GifControlDirectory
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import java.io.InputStream

/** Returns duration in ms of the GIF of the stream, 0 if it has no duration,
 * or null if it could not be read. */
suspend fun InputStream.readGifDurationOrNull(): Int? = withContext(Dispatchers.IO) {
    try {
        val metadata = ImageMetadataReader.readMetadata(this@readGifDurationOrNull)
        val gifControlDirectories = metadata.getDirectoriesOfType(GifControlDirectory::class.java)
        if (gifControlDirectories.size <= 1) {
            return@withContext 0
        }
        gifControlDirectories.sumOf {
            it.getInt(GifControlDirectory.TAG_DELAY) * 10 // GIF stores delays in 10 ms units
        }
    } catch (e: Exception) {
        Log.e("readGifDurationOrNull", "Could not read metadata from input", e)
        null
    }
}
Credit to this answer for how to get the appropriate duration info from the metadata.
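For reference, a call site for the function above might look like this (the asset name is made up for the example):

import android.content.Context
import android.util.Log

// Call from a coroutine, e.g. viewModelScope.launch { ... }
suspend fun logGifDuration(context: Context) {
    // "animation.gif" is a hypothetical asset used only for illustration.
    context.assets.open("animation.gif").use { stream ->
        val durationMs = stream.readGifDurationOrNull()
        Log.d("GifDuration", "GIF duration: ${durationMs ?: "unknown"} ms")
    }
}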

How can I catch a stream from WebView and change it to a VOICE_CALL stream with AEC?

I am using Agora, and it has some issues. One of them is that the speaker's voice comes out on the media stream.
In the browser it can't control the media volume, so I created an app to handle this. In the app, I dispatch the volume up/down buttons to control the media volume.
However, this method created a howling issue. So, I'd like to send the sound to STREAM_VOICE_CALL and use the AEC (Acoustic Echo Cancellation) API on Android, so that the sound comes out on the right stream and the echo problem can be handled.
This is what I wrote:
private fun enableVoiceCallMode() {
    with(audioManager) {
        volumeControlStream = AudioManager.STREAM_VOICE_CALL
        setStreamVolume(
            AudioManager.STREAM_VOICE_CALL,
            audioManager.getStreamVolume(AudioManager.STREAM_VOICE_CALL),
            0
        )
    }
}
But this didn't work.
And also, I tried to apply AEC like this:
private fun enableEchoCanceler() {
    if (AcousticEchoCanceler.isAvailable() && aec == null) {
        aec = AcousticEchoCanceler.create(audioManager.generateAudioSessionId())
        aec?.enabled = true
    } else {
        aec!!.enabled = false
        aec!!.release()
        aec = null
    }
}

private fun releaseEchoCanceler() {
    aec!!.enabled = false
    aec?.release()
    aec = null
}
However, I don't know whether AcousticEchoCanceler.create(audioManager.generateAudioSessionId()) is the correct way or not.
Please help me out.
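For context on the attachment point being asked about: AcousticEchoCanceler is an AudioEffect that attaches to the audio session of a capture stream, so the usual pattern ties it to an AudioRecord's session id rather than to a freshly generated session with no stream behind it. A minimal sketch of that pattern (sample rate, source and buffer size are assumptions; this illustrates the API only and is not a fix for the WebView/Agora routing):

import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder
import android.media.audiofx.AcousticEchoCanceler

// Requires the RECORD_AUDIO permission.
fun startRecordingWithAec(): AudioRecord {
    val sampleRate = 16000 // assumption for the sketch
    val minBuf = AudioRecord.getMinBufferSize(
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT
    )
    // VOICE_COMMUNICATION hints the platform to apply call-style processing.
    val record = AudioRecord(
        MediaRecorder.AudioSource.VOICE_COMMUNICATION,
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        minBuf
    )
    if (AcousticEchoCanceler.isAvailable()) {
        // Attach the canceler to the session of the actual capture stream.
        AcousticEchoCanceler.create(record.audioSessionId)?.setEnabled(true)
    }
    record.startRecording()
    return record
}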

Use AudioRecord to record but the sound has changed

This is an issue related to Android AudioRecord and ExoPlayer.
The required scenario is to record while playing the accompaniment (note: the volume of the accompaniment file fluctuates between loud and quiet).
The problem found so far: when an earphone is plugged in during recording and playback, the recorded sound is normal, but the sound played through the speaker gradually becomes flat (no longer loud and quiet).
I tried these parameters, but nothing works:
val canceler = AcousticEchoCanceler.create(record.audioSessionId)
val suppressor = NoiseSuppressor.create(record.audioSessionId)
if (suppressor != null) {
    suppressor.enabled = false
}
if (canceler != null) {
    canceler.enabled = false
}
if (AutomaticGainControl.isAvailable()) {
    val gainControl = AutomaticGainControl.create(record.audioSessionId)
    gainControl.enabled = false
}
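For reference, `record` in the snippet above is the AudioRecord whose session the effects attach to. A sketch of how it might be constructed (the source, sample rate and buffer size here are assumptions, not taken from the question):

import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

// Note: how much platform processing (AGC/AEC/NS) is applied in the first place
// also depends on the chosen audio source; MIC / VOICE_RECOGNITION are generally
// less processed than VOICE_COMMUNICATION on most devices.
val record: AudioRecord = AudioRecord.Builder()
    .setAudioSource(MediaRecorder.AudioSource.MIC)
    .setAudioFormat(
        AudioFormat.Builder()
            .setSampleRate(44100)
            .setChannelMask(AudioFormat.CHANNEL_IN_MONO)
            .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
            .build()
    )
    .setBufferSizeInBytes(
        2 * AudioRecord.getMinBufferSize(
            44100,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT
        )
    )
    .build()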

How to get buffer and sound info for an equalizer using MediaBrowser, MediaController, MediaSession and ExoPlayer?

I have an app which can play playlists, based on the Google docs on how to build an audio app: https://developer.android.com/guide/topics/media-apps/audio-app/building-an-audio-app
I would like to add an equalizer, like this one https://github.com/Yalantis/Horizon, but I cannot find how to get the needed information. I have never worked with sound before, so I am a bit lost.
According to the docs I should first "initialize the Horizon object with params referring to your sound":
mHorizon = Horizon(
    glSurfaceView, ResourcesCompat.getColor(resources, R.color.grey2),
    RECORDER_SAMPLE_RATE, RECORDER_CHANNELS, RECORDER_ENCODING_BIT // Where to get these 3 constants?
)
And then: "to update Horizon call updateView method with chunk of sound data to proceed:"
val buffer = ByteArray(/* Where to get the bytes? */)
mHorizon!!.updateView(buffer)
How could I get this data? I looked in the Android documentation but couldn't find anything.
You need to add a custom RendererFactory to your ExoPlayer instance to get the audio bytes. See the code below:
val rendererFactory = RendererFactory(this, object : TeeAudioProcessor.AudioBufferSink {
    override fun flush(sampleRateHz: Int, channelCount: Int, encoding: Int) {
        // flush() reports the sample rate, channel count and encoding of the audio
        // about to be delivered - a likely source for the three constants asked about above.
    }

    override fun handleBuffer(buffer: ByteBuffer) {
        // Pass the bytes to your function.
    }
})
exoPlayer = ExoPlayerFactory.newSimpleInstance(this, rendererFactory, DefaultTrackSelector())
You will get the bytes in a ByteBuffer. To convert it to a ByteArray, use the code below:
try {
    val arr = ByteArray(buffer.remaining())
    buffer[arr] // pass this array to the required function
} catch (exception: Exception) {
    // handle the exception here
}
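Putting the two pieces together, the sink callback can forward the converted array straight to Horizon. A rough sketch (mHorizon is the instance initialized in the question and is assumed to be in scope):

val horizonSink = object : TeeAudioProcessor.AudioBufferSink {
    override fun flush(sampleRateHz: Int, channelCount: Int, encoding: Int) {
        // Could be used to (re)configure Horizon when the audio format changes.
    }

    override fun handleBuffer(buffer: ByteBuffer) {
        val arr = ByteArray(buffer.remaining())
        buffer.get(arr)
        mHorizon?.updateView(arr) // mHorizon as initialized in the question
    }
}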

How can I trim a video from Uri, including files that `mp4parser` library can handle, but using Android's framework instead?

Background
Over the past few days, I've worked on making a customizable, more up-to-date version of a library for video trimming, here (based on this library).
The problem
While for the most part I've succeeded in making it customizable, and even converted all the files to Kotlin, it has a major issue with the trimming itself.
It assumes the input is always a File, so if the user chooses an item from the app chooser that returns a Uri, it crashes. The reason for this is not just the UI itself, but also that the library it uses for trimming (mp4parser) assumes an input of only a File (or file path) and not a Uri (wrote about it here). I tried multiple ways to let it accept a Uri instead, but failed. Also wrote about it here.
That's why I used a solution that I found on StackOverflow (here) for the trimming itself. The good thing about it is that it's quite short and uses just Android's framework itself. However, it seems that for some video files it always fails to trim them. As an example of such files, there is one in the original library's repository, here (issue reported here).
Looking at the exception, this is what I got:
E: Unsupported mime 'audio/ac3'
E: FATAL EXCEPTION: pool-1-thread-1
Process: life.knowledge4.videocroppersample, PID: 26274
java.lang.IllegalStateException: Failed to add the track to the muxer
at android.media.MediaMuxer.nativeAddTrack(Native Method)
at android.media.MediaMuxer.addTrack(MediaMuxer.java:626)
at life.knowledge4.videotrimmer.utils.TrimVideoUtils.genVideoUsingMuxer(TrimVideoUtils.kt:77)
at life.knowledge4.videotrimmer.utils.TrimVideoUtils.genVideoUsingMp4Parser(TrimVideoUtils.kt:144)
at life.knowledge4.videotrimmer.utils.TrimVideoUtils.startTrim(TrimVideoUtils.kt:47)
at life.knowledge4.videotrimmer.BaseVideoTrimmerView$initiateTrimming$1.execute(BaseVideoTrimmerView.kt:220)
at life.knowledge4.videotrimmer.utils.BackgroundExecutor$Task.run(BackgroundExecutor.java:210)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:458)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:301)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:764)
What I've found
I reported the issue here. I don't think it will get an answer, as the library hasn't been updated in years...
Looking at the exception, I also tried to trim without sound. This works, but it's not a good solution, because we want to trim normally.
Thinking that this code might be based on someone else's code, I tried to find the original. I found that it is based on some old Google code from its gallery app, here, in a class called "VideoUtils.java" in the "Gallery3d" package. Sadly, I don't see any newer version of it. The latest one that I can see is from Gingerbread, here.
The code that I've made out of it looks like this:
object TrimVideoUtils {
    private const val DEFAULT_BUFFER_SIZE = 1024 * 1024

    @JvmStatic
    @WorkerThread
    fun startTrim(context: Context, src: Uri, dst: File, startMs: Long, endMs: Long, callback: VideoTrimmingListener) {
        dst.parentFile.mkdirs()
        //Log.d(TAG, "Generated file path " + filePath);
        val succeeded = genVideoUsingMuxer(context, src, dst.absolutePath, startMs, endMs, true, true)
        Handler(Looper.getMainLooper()).post { callback.onFinishedTrimming(if (succeeded) Uri.parse(dst.toString()) else null) }
    }

    //https://stackoverflow.com/a/44653626/878126 https://android.googlesource.com/platform/packages/apps/Gallery2/+/634248d/src/com/android/gallery3d/app/VideoUtils.java
    @JvmStatic
    @WorkerThread
    private fun genVideoUsingMuxer(context: Context, uri: Uri, dstPath: String, startMs: Long, endMs: Long, useAudio: Boolean, useVideo: Boolean): Boolean {
        // Set up MediaExtractor to read from the source.
        val extractor = MediaExtractor()
        // val isRawResId = uri.scheme == "android.resource" && uri.host == context.packageName && !uri.pathSegments.isNullOrEmpty()
        val fileDescriptor = context.contentResolver.openFileDescriptor(uri, "r")!!.fileDescriptor
        extractor.setDataSource(fileDescriptor)
        val trackCount = extractor.trackCount
        // Set up MediaMuxer for the destination.
        val muxer = MediaMuxer(dstPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
        // Set up the tracks and retrieve the max buffer size for selected tracks.
        val indexMap = SparseIntArray(trackCount)
        var bufferSize = -1
        try {
            for (i in 0 until trackCount) {
                val format = extractor.getTrackFormat(i)
                val mime = format.getString(MediaFormat.KEY_MIME)
                var selectCurrentTrack = false
                if (mime.startsWith("audio/") && useAudio) {
                    selectCurrentTrack = true
                } else if (mime.startsWith("video/") && useVideo) {
                    selectCurrentTrack = true
                }
                if (selectCurrentTrack) {
                    extractor.selectTrack(i)
                    val dstIndex = muxer.addTrack(format)
                    indexMap.put(i, dstIndex)
                    if (format.containsKey(MediaFormat.KEY_MAX_INPUT_SIZE)) {
                        val newSize = format.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE)
                        bufferSize = if (newSize > bufferSize) newSize else bufferSize
                    }
                }
            }
            if (bufferSize < 0)
                bufferSize = DEFAULT_BUFFER_SIZE
            // Set up the orientation and starting time for extractor.
            val retrieverSrc = MediaMetadataRetriever()
            retrieverSrc.setDataSource(fileDescriptor)
            val degreesString = retrieverSrc.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_ROTATION)
            if (degreesString != null) {
                val degrees = Integer.parseInt(degreesString)
                if (degrees >= 0)
                    muxer.setOrientationHint(degrees)
            }
            if (startMs > 0)
                extractor.seekTo(startMs * 1000, MediaExtractor.SEEK_TO_CLOSEST_SYNC)
            // Copy the samples from MediaExtractor to MediaMuxer. We will loop
            // for copying each sample and stop when we get to the end of the source
            // file or exceed the end time of the trimming.
            val offset = 0
            var trackIndex: Int
            val dstBuf = ByteBuffer.allocate(bufferSize)
            val bufferInfo = MediaCodec.BufferInfo()
            // try {
            muxer.start()
            while (true) {
                bufferInfo.offset = offset
                bufferInfo.size = extractor.readSampleData(dstBuf, offset)
                if (bufferInfo.size < 0) {
                    //InstabugSDKLogger.d(TAG, "Saw input EOS.");
                    bufferInfo.size = 0
                    break
                } else {
                    bufferInfo.presentationTimeUs = extractor.sampleTime
                    if (endMs > 0 && bufferInfo.presentationTimeUs > endMs * 1000) {
                        //InstabugSDKLogger.d(TAG, "The current sample is over the trim end time.");
                        break
                    } else {
                        bufferInfo.flags = extractor.sampleFlags
                        trackIndex = extractor.sampleTrackIndex
                        muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo)
                        extractor.advance()
                    }
                }
            }
            muxer.stop()
            return true
            // } catch (e: IllegalStateException) {
            //     Swallow the exception due to malformed source.
            //     //InstabugSDKLogger.w(TAG, "The source video file is malformed");
        } catch (e: Exception) {
            e.printStackTrace()
        } finally {
            muxer.release()
        }
        return false
    }
}
The exception is thrown on val dstIndex = muxer.addTrack(format). For now, I've wrapped it in a try-catch to avoid a real crash.
I tried to search for newer versions of this code (assuming that it got fixed later), but failed.
Searching on the Internet and here, I've found only one similar question, here, but it's not the same at all.
The questions
Is it possible to use Android's framework to trim such problematic files? Maybe there is a newer version of the video-trimming code? I'm interested, of course, only in the pure implementation of video trimming, like the "genVideoUsingMuxer" function I wrote above.
As a temporary solution, is it possible to detect problematic input videos, so that I won't let the user start to trim them, since I know they will fail?
Is there maybe another alternative to both of those that has a permissive license and doesn't bloat the app? For mp4parser, I wrote a separate question, here.
Why does it occur?
audio/ac3 is an unsupported mime type.
MediaMuxer.addTrack() (native) calls MPEG4Writer.addSource(), which prints this log message before returning an error.
EDIT
My aim was not to provide an answer to each of your sub-questions, but to give you some insight into the fundamental problem. The library you have chosen relies on Android's MediaMuxer component. For whatever reason, the MediaMuxer developers did not add support for this particular audio format. We know this because the software prints an explicit message to that effect, then immediately throws the IllegalStateException mentioned in your question.
Because the issue only involves a particular audio format, when you provide a video-only input, everything works fine.
To fix the problem, you can either alter the library to provide for the missing functionality, or find a new library that better suits your needs. sannies/mp4parser may be one such alternative, although it has different limitations (if I recall correctly, it requires all media to be in RAM during the mastering process). I do not know if it supports ac3 explicitly, but it should provide a framework to which you can add support for arbitrary mime types.
I would encourage you to wait for a more complete answer. There may be far better ways to do what you are trying to do. But it is apparent that the library you are using simply does not support all possible mime types.
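Regarding the detection sub-question: one rough approach, not taken from the answer above, is to probe the container's track mime types with MediaExtractor before offering the trim and refuse inputs whose audio mime is not on a whitelist of formats MediaMuxer is known to handle. The whitelist below is an assumption and would need tuning per device and API level:

import android.content.Context
import android.media.MediaExtractor
import android.media.MediaFormat
import android.net.Uri

// Assumed whitelist of MP4-muxable audio mimes; adjust to what your targets support.
private val MUXABLE_AUDIO_MIMES = setOf(
    MediaFormat.MIMETYPE_AUDIO_AAC,
    MediaFormat.MIMETYPE_AUDIO_AMR_NB,
    MediaFormat.MIMETYPE_AUDIO_AMR_WB
)

/** Returns true if every audio track looks muxable to MP4, false if one (e.g. audio/ac3) does not. */
fun hasOnlyMuxableAudioTracks(context: Context, uri: Uri): Boolean {
    val extractor = MediaExtractor()
    return try {
        context.contentResolver.openFileDescriptor(uri, "r")!!.use { pfd ->
            extractor.setDataSource(pfd.fileDescriptor)
            (0 until extractor.trackCount).all { i ->
                val mime = extractor.getTrackFormat(i).getString(MediaFormat.KEY_MIME) ?: return@all false
                !mime.startsWith("audio/") || mime in MUXABLE_AUDIO_MIMES
            }
        }
    } catch (e: Exception) {
        false // unreadable input: treat it as problematic
    } finally {
        extractor.release()
    }
}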
