Getting Started with ExoPlayer - android

I am working on my first Android application using Kotlin. The activity is simple: connect to an audio stream from a given URL and allow the user to pause, resume, and/or stop the stream. I've been able to connect to and play the requested audio stream using ExoPlayer, but I have a problem and a question that I have not found addressed in the ExoPlayer documentation. I've followed that documentation as best I can to write the following:
class StreamAudio : AppCompatActivity() {
    private lateinit var playerView: StyledPlayerView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_stream_audio)

        // Bind PlayerView to appropriate element in layout
        playerView = findViewById(R.id.audio_player)

        // Initialize instance of ExoPlayer
        var player = ExoPlayer.Builder(applicationContext).build()

        // Set MediaItem to stream_url and play
        var stream: MediaItem = MediaItem.fromUri("http://stream_url")
        player.setMediaItem(stream)
        player.prepare()
        player.play()
    }
}
When running the application, the audio streams, but no StyledPlayerView UI ever appears, only a solid black view. Tapping the screen does not cause any UI to appear. When substituting a StyledPlayerControlView, the UI appears briefly before disappearing, leaving a blank white view.
It also seems like bad form to initialize the player entirely within onCreate(). How can I access the player from another function, for example to stop and release it upon returning to the parent activity?
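Something along these lines is what I'm aiming for (a rough, untested sketch of holding the player in a property and releasing it when the activity stops):

class StreamAudio : AppCompatActivity() {
    private lateinit var playerView: StyledPlayerView
    private var player: ExoPlayer? = null    // held as a property so other callbacks can reach it

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_stream_audio)
        playerView = findViewById(R.id.audio_player)
    }

    override fun onStart() {
        super.onStart()
        player = ExoPlayer.Builder(applicationContext).build().also { exo ->
            playerView.player = exo          // attach so the view can show its controller
            exo.setMediaItem(MediaItem.fromUri("http://stream_url"))
            exo.prepare()
            exo.play()
        }
    }

    override fun onStop() {
        super.onStop()
        playerView.player = null
        player?.release()                    // free codec/audio resources when leaving
        player = null
    }
}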
Thanks in advance for any help. The logs below only show an error described here that doesn't seem likely to cause the UI issues.
Log Output:
W/Codec2Client: query -- param skipped: index = 1342179345.
query -- param skipped: index = 2415921170.
E/FMQ: grantorIdx must be less than 3
grantorIdx must be less than 3
D/CCodecBufferChannel: [c2.android.mp3.decoder#722] Created input block pool with allocatorID 16 => poolID 17 - OK (0)
I/CCodecBufferChannel: [c2.android.mp3.decoder#722] Created output block pool with allocatorID 16 => poolID 34 - OK
D/CCodecBufferChannel: [c2.android.mp3.decoder#722] Configured output block pool ids 34 => OK
E/ion: ioctl c0044901 failed with code -1: Inappropriate ioctl for device
E/FMQ: grantorIdx must be less than 3
E/FMQ: grantorIdx must be less than 3
D/AudioTrack: getTimestamp_l(24): device stall time corrected using current time 4430536774140
D/BufferPoolAccessor2.0: bufferpool2 0xe830a4d8 : 5(40960 size) total buffers - 4(32768 size) used buffers - 0/5 (recycle/alloc) - 4/410 (fetch/transfer)
D/BufferPoolAccessor2.0: evictor expired: 1, evicted: 1

Related

Why does my Android WebView app with an Hls.js video player freeze after 3-4 hours?

I'm at the breaking point with this error. I developed a Svelte web app with an hls.js video player, which I packaged in an Android WebView for Firesticks. The app works great except for one odd issue: after about 3-4 hours of playback the video freezes. I was able to catch some logs using adb.
The error is not caught by the usual hls.js onError handlers, but is instead generated elsewhere. "Cannot read properties of null (reading 'byteLength')" is totally ambiguous, but it is the best I've been able to get. Unfortunately, the error only happens in the minified JS code and not in any browser debug builds.
I'm at a total loss as to what could be causing this or how to even debug it. Maybe someone has experienced something like this in the past? Below is the video element in my Svelte component and the hls.js initialization code.
<div class="playback-view" on:click|stopPropagation>
<video
bind:this={videoRef}
bind:currentTime={$ProgramTime}
on:ended={onPlaybackEnded}
bind:duration
bind:paused>
<track kind="captions">
</video>
const destroyHls = () => {
  if (hls !== null && hls !== undefined) {
    hls.stopLoad()
    hls.detachMedia()
    hls.destroy()
  }
}

const reloadSource = () => {
  destroyHls()
  if (videoRef !== null && videoRef !== undefined) {
    hls = new Hls({
      // Audio codec for bitrate above 22hz
      defaultAudioCodec: "mp4a.40.2",
      // (seconds) If buffer < this value fragment will be loaded
      // The "minimum" length of the buffer
      maxBufferLength: 15,
      backBufferLength: 1800,
      // (seconds) The maximum length of the buffer
      maxMaxBufferLength: 60,
      // (bytes) The amount hls will try to load
      // maxBufferSize: 120 * 1000 * 1000,
      // (seconds) The amount to offset the stream by when stalling
      // currentTime += (nb nudge retry - 1) * nudgeOffset
      nudgeOffset: 0.1,
      // Number of nudges before BUFFER_STALLED_ERROR
      nudgeMaxRetry: 3,
    })
    hls.attachMedia(videoRef)
    hls.loadSource($Playback.playbackUrl)
    hls.on(Hls.Events.ERROR, handleHLSError)
    videoRef.play()
    paused = false
  }
}
After a comment suggested checking the buffer size, I decreased the backBufferLength and the maxMaxBufferLength. Now, however, I am faced with a new error.
"hlsError" {"type":"mediaError","parent":"main","details":"bufferAppendError","err":{"stack":"Error: Failed to execute 'appendBuffer' on 'SourceBuffer': The HTMLMediaElement.error attribute is not null.
This is failing to append to the source buffer, but shouldn't hls.js be taking care of clearing the buffer?
Unfortunately I was unable to figure out a consistently working set of settings for this issue. Changing HLS options simply resulted in different errors. In the end I basically did something similar to what @VC.One suggested.
I used the onError handler of hls.js to detect fragLoadError, and when it shows up I reload the stream at that specific time. This means calling reloadSource as defined in the original question.
I don't feel great about this solution since it is only a band-aid, but it works rather reliably. At most you get a 5-second freeze in video playback every couple of hours.

Problem with method AudioRecord.read - Return value -1 (Android)

I have a problem with the Android AudioRecord library.
I need to record an audio stream from the device's microphone.
The initialization of the class is as follows:
recorder = new AudioRecord(
currentAudioSource,
SAMPLE_RATE_IN_8KHZ,
CHANNEL, // Mono
ENCODING, // ENCODING_PCM_16BIT
bufferSize // 2048
);
Then, I call recorder.startRecording() to make the audio stream active.
To acquire this stream, I call the following method in a loop:
recorder.read(samples, 0, currentBufferSize)
The variable samples is a short[] and currentBufferSize is the length of the buffer.
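For reference, the setup and loop described above look roughly like this (a simplified Kotlin sketch with illustrative constants, not the actual production code):

// Simplified sketch: mono 8 kHz PCM-16 capture with a blocking read loop.
val minSize = AudioRecord.getMinBufferSize(
        8000,                                // SAMPLE_RATE_IN_8KHZ
        AudioFormat.CHANNEL_IN_MONO,         // CHANNEL
        AudioFormat.ENCODING_PCM_16BIT)      // ENCODING
val recorder = AudioRecord(
        MediaRecorder.AudioSource.MIC,       // currentAudioSource
        8000,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        maxOf(minSize, 2048))                // bufferSize, never below the device minimum
val samples = ShortArray(2048)
var running = true                           // flipped elsewhere to stop the worker
recorder.startRecording()
while (running) {
    // Blocks until the requested number of shorts is read, or returns a negative error code.
    val read = recorder.read(samples, 0, samples.size)
    if (read < 0) break                      // e.g. ERROR (-1) as described below
    // ... hand off `read` shorts for processing
}
recorder.stop()
recorder.release()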
The "read" method works correctly for the first N loop.
At loop N + 1, the method is stuck waiting to give me back 2048 short, until I call the stopRecording which gives me the registered values (less than 2048 short).
On the next registration, the read method returns me an empty short [] and the error code "-1" (general error).
It's been a few days since I got it right, also because the error does not occur in a systematic way, but randomly on multiple devices.
Do you have any ideas to resolve this situation?
Thank you
I solved it with this solution:
while (runWorker) {
    if (recorder != null && recorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
        ...
        Thread.sleep(40);
        sampled = this.recorder.read(samples, 0, currentBufferSize, AudioRecord.READ_NON_BLOCKING);
        ...
I inserted a 40 ms pause and used the non-blocking read.
This works on all devices except some Samsung tablets. This is the related post: AudioRecord.read with read mode: READ_NON_BLOCKING not working on Tablet Samsung
On Samsung tablets, the non-blocking read always returns 0, as if it were not recording audio. If I use AudioRecord.READ_BLOCKING instead, it works correctly.
Do you have any ideas about it?
Thanks.
Michele
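For reference, the difference between the two read paths described above is only the read-mode flag. A rough Kotlin sketch of falling back to the blocking variant when the non-blocking read keeps returning 0 (the threshold is arbitrary and this is untested on the affected tablets; it reuses `recorder`, `samples`, `currentBufferSize` and `runWorker` from the snippet above):

var useBlocking = false                      // start on the non-blocking path
var emptyReads = 0
while (runWorker) {
    if (recorder.recordingState != AudioRecord.RECORDSTATE_RECORDING) continue
    Thread.sleep(40)                         // same 40 ms pause as above
    val mode = if (useBlocking) AudioRecord.READ_BLOCKING else AudioRecord.READ_NON_BLOCKING
    val sampled = recorder.read(samples, 0, currentBufferSize, mode)
    if (sampled > 0) {
        emptyReads = 0
        // ... process `sampled` shorts
    } else if (!useBlocking && ++emptyReads > 50) {
        useBlocking = true                   // non-blocking reads look broken on this device
    }
}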

HMS ASR cannot start recording

I'm trying to add HMS automatic speech recognition (ASR) to my app. I already have SpeechRecognizer implemented, but it requires GMS to work.
The current HMS implementation works on a non-Huawei device with HMS Core installed, but does not work on my Huawei MediaPad T5.
Things I've tried
The methods are called from different threads (the main thread and a graphics thread), so I've tried synchronizing the methods on a lock and posting a Runnable to the activity handler, without much of a difference; i.e., wrapping the calls in synchronized(lock) or activity.post.
Code:
fun init(activity: Activity) {
    speechRecognizer = MLAsrRecognizer.createAsrRecognizer(activity)
    speechIntent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
        .putExtra(
            MLAsrCaptureConstants.LANGUAGE,
            "en-US"
        )
        .putExtra(MLAsrConstants.FEATURE, MLAsrConstants.FEATURE_ALLINONE)
    speechRecognizer?.setAsrListener(listener)
}

fun startListening() {
    speechRecognizer?.startRecognizing(speechIntent)
}

fun destroy() {
    speechRecognizer?.destroy()
}
Logs
4945-4945 W/HmsSpeechRecognitionHolder#c1cafe: init on Thread[main,5,main]
4945-4945 W/InputMethodManager: startInputReason = 1
4945-4945 W/HmsSpeechRecognitionHolder#c1cafe: startListening on Thread[main,5,main]
4945-4945 E/HaLogProvider: forbiddenHiLog openHa = false
4945-4945 E/HaLogProvider: forbiddenHiLog.getVenderCountry=ca
4945-4945 E/HaLogProvider: forbiddenHiLog openHa = false
4945-4945 E/MLASR_HaAdapter_MLKitAsr: mEventsToBeReported: has no response event isInfoGatherStart:falsemsg: 0
4945-4945 E/HwCustAudioRecordImpl: isOpenEC : false
634-985 E/HuaweiProcessing: ProcessingLib_Create: the algo have already been created.
634-985 W/EffectsFactory: EffectCreate() library huawei_processing: could not create fx Huawei Audio Preprocessing Effect, error -22
634-985 E/EffectFactoryHAL: Error creating effect e707d040-6b79-11e2-b16a-0002a5d5c51b: Invalid argument
721-8060 E/AudioEffect: set(): AudioFlinger could not create effect, status: -19
721-8060 W/AudioPolicyEffects: addInputEffects(): failed to create Fx huawei_pre_processing on source 1
634-8049 E/baidu_asr_interface: asr_baidu_set_parameters_data-not baidu asr
634-7993 W/DeviceHAL: Device 0x78c2d00000 get_mic_mute: Function not implemented
634-987 W/DeviceHAL: Device 0x78c2dc4680 get_mic_mute: Function not implemented
721-8009 W/HuaweiAudioFlinger: soundtrigger is now disable or not support, pls enable it first from setting
721-8009 W/APM_AudioPolicyManager: startInput(78) failed: other input 70 already started
721-8009 E/AudioFlinger: RecordThread::start error,setCallingAppName -1
4945-4945 E/AudioRecord: start() status -38
4945-4945 E/MLASR_A: getVendorCountry=ca
500-8480 W/libc: Set property "hw.wifi.dns_stat" to "99,14,14044,1,34759"
1472-1472 W/HwKeyguardDragHelper: AnimationBlocked
4945-5079 W/libEGL: EGLNativeWindowType 0x79e0317010 disconnect failed
1140-2290 E/WindowManager: win=Window{d80c651 u0 ProjectActivity} destroySurfaces: appStopped=true win.mWindowRemovalAllowed=false win.mRemoveOnExit=false
767-767 E/wificond: Failed to get NL80211_RATE_INFO_NOISE
767-767 E/wificond: Failed to get NL80211_RATE_INFO_SNR
767-767 E/wificond: Failed to get NL80211_STA_INFO_CNAHLOAD
1140-1316 E/WificondControl: Noise: 0, Snr: -1, Chload: -1
767-767 E/wificond: Failed to get NL80211_RATE_INFO_SNR
767-767 E/wificond: Failed to get NL80211_STA_INFO_CNAHLOAD
767-767 E/wificond: Failed to get NL80211_RATE_INFO_NOISE
767-767 E/wificond: Failed to get NL80211_RATE_INFO_SNR
767-767 E/wificond: Failed to get NL80211_STA_INFO_CNAHLOAD
1140-1316 E/WificondControl: Noise: 0, Snr: -1, Chload: -1
761-8466 W/ACodec: forcing OMX state to Idle when received shutdown in ExecutingState
769-8467 W/SimpleSoftOMXComponent: onChangeState mState= 3, mTargetState = 3, state = 2
769-8467 W/SimpleSoftOMXComponent: checkTransitions port buf count is not zero
769-1826 W/SimpleSoftOMXComponent: checkTransitions port buf count is not zero
769-1826 W/SimpleSoftOMXComponent: checkTransitions port buf count is not zero
769-1826 W/SimpleSoftOMXComponent: checkTransitions port buf count is not zero
769-1826 W/SimpleSoftOMXComponent: checkTransitions port buf count is not zero
769-1826 W/SimpleSoftOMXComponent: checkTransitions mState = 2, mTargetState = 1
721-8060 W/HuaweiAudioFlinger: soundtrigger is now disable or not support, pls enable it first from setting
1900-3437 E/HSM: BMNCaller:is not PermissionEnabled.
721-6695 W/AudioFlinger::EffectModule: EffectModule 0x7ba4f22a00 destructor called with unreleased interface
634-941 E/audio_hw_primary: in_remove_audio_effect error effect is null
634-941 W/StreamHAL: Error from HAL stream in function remove_audio_effect: Function not implemented
721-6695 E/AudioFlinger::EffectModule: Error when removing effect: -38
721-6695 W/AudioFlinger::EffectHandle: disconnect Effect handle 0x7ba4e45800 disconnected after thread destruction
1640-1796 W/AudioState: session release and not found sessionId: 81
4945-4945 W/HmsSpeechRecognitionHolder#c1cafe: destroy on Thread[main,5,main]
4945-8481 E/HwCustAudioRecordImpl: isOpenEC : false
4945-8481 E/HwCustAudioRecordImpl: isOpenEC : false
Things I found suspicious in the logs
634-985 E/HuaweiProcessing: ProcessingLib_Create: the algo have already been created.
634-985 W/EffectsFactory: EffectCreate() library huawei_processing: could not create fx Huawei Audio Preprocessing Effect, error -22
634-985 E/EffectFactoryHAL: Error creating effect e707d040-6b79-11e2-b16a-0002a5d5c51b: Invalid argument
721-8060 E/AudioEffect: set(): AudioFlinger could not create effect, status: -19
721-8060 W/AudioPolicyEffects: addInputEffects(): failed to create Fx huawei_pre_processing on source 1
721-8009 W/APM_AudioPolicyManager: startInput(78) failed: other input 70 already started
721-8009 E/AudioFlinger: RecordThread::start error,setCallingAppName -1
4945-4945 E/AudioRecord: start() status -38
Note: the HMS demo apps I've tried work correctly on my MediaPad T5.
Update: After some fixes pointed out by @shirley, ASR seems to be working reliably on a P30 Lite, but I'm still facing the same issue on the older MediaPad T5.
According to the logs you provided, the voice of the user is not detected.
Solution:
It is recommended that you add logs to the callback methods of the MLAsrListener to view the speech recognition process.
You are advised to check mSpeechRecognizer.destroy(): check whether this method is invoked prematurely, ending recognition before it has started.
Check whether the device is faulty or its microphone is broken. Try another device and repeat the test.
After reviewing your logs, the following errors were found:
The first error is caused by the language code passed for speech recognition exceeding 10 characters. Ensure that the speech recognition language code does not exceed 10 characters.
The second error is 11203, subError code: 3002, errorMessage: Service unavailable.
The cause of this error is that the app_id information is not found in the project.
You are advised to check whether the agconnect-services.json file exists in the project.
If the file does not exist, you need to add it to the project. If the file exists, ensure that the app_id in it is correct.
For details, see the HMS ML Kit documentation.
1. Check whether automatic speech recognition fails to be enabled.
If automatic speech recognition fails to be enabled, you can obtain the cause through the onError(int error, String errorMessage) method of the MLAsrListener class. You can add this method to your listener class (see the sketch below).
2. If speech recognition is enabled successfully but the speech recognition result is not obtained:
The MLAsrConstants.FEATURE parameter is set to FEATURE_ALLINONE, so you need to obtain the speech recognition result in the onResults(Bundle results) method.
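For illustration, wiring logging into the listener might look roughly like this (a sketch only; the callback names follow the MLAsrListener interface as documented, so double-check them against the SDK version in use, and only onError and onResults do anything here):

val listener = object : MLAsrListener {
    override fun onStartListening() {}
    override fun onStartingOfSpeech() {}
    override fun onVoiceDataReceived(data: ByteArray?, energy: Float, bundle: Bundle?) {}
    override fun onRecognizingResults(partialResults: Bundle?) {}
    override fun onState(state: Int, params: Bundle?) {}
    override fun onError(error: Int, errorMessage: String?) {
        // Shows why recognition failed to start (e.g. the 11203 / app_id case above)
        Log.e("ASR", "onError: $error $errorMessage")
    }
    override fun onResults(results: Bundle?) {
        // Final result when FEATURE_ALLINONE is used
        Log.d("ASR", "onResults: $results")
    }
}
speechRecognizer?.setAsrListener(listener)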
Some models of phones and tablets can run into resource limitation issues when using ML Kit ASR. The symptom is that the phone or tablet does not respond after the microphone button is clicked, or an error message is displayed saying that the speech recognition service is not installed on the device. This is not specific to HMS ASR: I also tried implementing speech recognition with the native Android SpeechRecognizer, and the sample app hangs after clicking the button on an emulator with limited hardware resources.
To fix your issue, I would suggest switching from HMS ML Kit ASR to HMS ML Kit Real-Time Transcription (RTT). RTT provides similar features to ASR for speech recognition and converts speech to text. For more details, see the HMS ML Kit RTT documentation at ML Kit - Real-Time Transcription (huawei.com).
The code for RTT is similar to ASR: you need to provide a SpeechRecognitionListener class or anonymous class that implements MLSpeechRealTimeTranscriptionListener. There is sample code at the documentation link as well. I tested the sample code on my Huawei Mate 30 Pro and it works just fine.

Flutter SoundPool play sound issue

I am struggling to play sounds with SoundPool in Flutter.
I have read/Googled about the dreaded message "AUDIO_OUTPUT_FLAG_FAST denied by server; frameCount 0 -> 48000"
So far, here are the steps I have taken:
The image running in the emulator is a Pixel 3a.
The files are WAV, about 90 KB each, lasting 1 second, sampled at 48 kHz. ffprobe says: Duration: 00:00:01.00, bitrate: 768 kb/s;
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, 1 channels, s16, 768 kb/s.
I have pre-loaded them during app startup from the asset bundle; the following code runs inside a forEach:
var content = await _bundle.load(notePath);
sounds[notePath] = content;
newNote.soundId = await _pool.load(content);
new Timer(const Duration(milliseconds: (2000)), () => print("loaded $notePath"));
I added the delay because otherwise some sounds were (randomly) not loaded (I have 72 sounds to load).
When I need to play the sound (upon tap on the screen), I simply do a
_pool.stop(1);
_pool.play(noteToPlay.soundId);
This creates some crackle (someone proposed instead always playing a sound at volume 0 to avoid it, but that would drain the battery).
Now the problem :
The message appears in the console, but I could live with it if it weren't for the following issue:
Randomly, some sounds are not played; the taps are simply ignored. The app never crashes. If I remove the soundPool.play call, everything is fine (the corresponding file is found, the tap is handled, and so on), so I gather that SoundPool gets lost during one of the Future playbacks. The player can tap very quickly, but even if I go slowly, it still fails.
SoundPool is instantiated like this:
class NoteService {
final Soundpool _pool = Soundpool(streamType: StreamType.notification);
[snip class definition]
Thank you for your insight.
I solved it by changing the streamType in SoundPool's constructor to StreamType.alarm:
pool = Soundpool(streamType: StreamType.alarm);

SurfaceTexture's onFrameAvailable() method always called too late

I'm trying to get the following MediaExtractor example to work:
http://bigflake.com/mediacodec/ - ExtractMpegFramesTest.java (requires 4.1, API 16)
The problem I have is that outputSurface.awaitNewImage(); seems to always throw RuntimeException("frame wait timed out"), which is thrown whenever the mFrameSyncObject.wait(TIMEOUT_MS) call times out. No matter what I set TIMEOUT_MS to be, onFrameAvailable() always gets called right after the timeout occurs. I tried with 50ms and with 30000ms and it's the same.
It seems like the onFrameAvailable() call can't be done while the thread is busy, and once the timeout happens which ends the thread code execution, it can parse the onFrameAvailable() call.
Has anyone managed to get this example to work, or knows how MediaExtractor is supposed to work with GL textures?
Edit: I tried this on devices running Android 4.4 and 4.1.1 and the same happens on both.
Edit 2:
Got it working on 4.4 thanks to fadden. The issue was that the ExtractMpegFramesWrapper.runTest() method called th.join(); which blocked the main thread and prevented the onFrameAvailable() call from being processed. Once I commented th.join(); it works on 4.4. I guess maybe the ExtractMpegFramesWrapper.runTest() itself was supposed to run on yet another thread so the main thread didn't get blocked.
There was also a small issue on 4.1.2 when calling codec.configure(), it gave the error:
A/ACodec(2566): frameworks/av/media/libstagefright/ACodec.cpp:1041 CHECK(def.nBufferSize >= size) failed.
A/libc(2566): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 2625 (CodecLooper)
Which I solved by adding the following before the call:
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
However the problem I have now on both 4.1.1 (Galaxy S2 GT-I9100) and 4.1.2 (Samsung Galaxy Tab GT-P3110) is that they both always set info.size to 0 for all frames. Here is the log output:
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
submitted frame 0 to dec, size=20562
no output from decoder available
loop
submitted frame 1 to dec, size=7193
no output from decoder available
loop
[... skipped 18 lines ...]
submitted frame 8 to dec, size=6531
no output from decoder available
loop
submitted frame 9 to dec, size=5639
decoder output format changed: {height=240, what=1869968451, color-format=19, slice-height=240, crop-left=0, width=320, crop-bottom=239, crop-top=0, mime=video/raw, stride=320, crop-right=319}
loop
submitted frame 10 to dec, size=6272
surface decoder given buffer 0 (size=0)
loop
[... skipped 1211 lines ...]
submitted frame 409 to dec, size=456
surface decoder given buffer 1 (size=0)
loop
sent input EOS
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
[... skipped 27 lines all with size=0 ...]
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
output EOS
Saving 0 frames took ? us per frame // edited to avoid division-by-zero error
So no images get saved. However, the same code and video work on 4.3. The video I am using is an .mp4 file with the "H264 - MPEG-4 AVC (avc1)" video codec and "MPEG AAC Audio (mp4a)" audio codec.
I also tried other video formats, but they seem to die even sooner on 4.1.x, while both work on 4.3.
Edit 3:
I did as you suggested, and it seems to save the frame images correctly. Thank you.
Regarding KEY_MAX_INPUT_SIZE, I tried not setting it, or setting it to 0, 20, 200, ..., 200000000, all with the same result of info.size=0.
I am now unable to render to a SurfaceView or TextureView in my layout. I tried replacing this line:
mSurfaceTexture = new SurfaceTexture(mTextureRender.getTextureId());
with this, where textureView is a TextureView defined in my XML layout:
mSurfaceTexture = textureView.getSurfaceTexture();
mSurfaceTexture.attachToGLContext(mTextureRender.getTextureId());
but it throws a weird error with getMessage()==null on the second line. I couldn't find any other way to get it to draw on a View of some kind. How can I change the decoder to display the frames on a Surface/SurfaceView/TextureView instead of saving them?
The way SurfaceTexture works makes this a bit tricky to get right.
The docs say the frame-available callback "is called on an arbitrary thread". The SurfaceTexture class has a bit of code that does the following when initializing (line 318):
if (this thread has a looper) {
handle events on this thread
} else if (there's a "main" looper) {
handle events on the main UI thread
} else {
no events for you
}
The frame-available events are delivered to your app through the usual Looper / Handler mechanism. That mechanism is just a message queue, which means the thread needs to be sitting in the Looper event loop waiting for them to arrive. The trouble is, if you're sleeping in awaitNewImage(), you're not watching the Looper queue. So the event arrives, but nobody sees it. Eventually awaitNewImage() times out, and the thread returns to watching the event queue, where it immediately discovers the pending "new frame" message.
So the trick is to make sure that frame-available events arrive on a different thread from the one sitting in awaitNewImage(). In the ExtractMpegFramesTest example, this is done by running the test in a newly-created thread (see the ExtractMpegFramesWrapper class), which does not have a Looper. (For some reason the thread that executes CTS tests has a looper.) The frame-available events arrive on the main UI thread.
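Schematically, that handshake looks something like this (a simplified Kotlin sketch, not the actual ExtractMpegFramesTest code):

class FrameWaiter : SurfaceTexture.OnFrameAvailableListener {
    private val frameSyncObject = Object()   // guards frameAvailable
    private var frameAvailable = false

    // Delivered via the Looper of whichever thread SurfaceTexture picked at creation time
    // (the main UI thread here); it must NOT be the thread that sits in awaitNewImage().
    override fun onFrameAvailable(st: SurfaceTexture) {
        synchronized(frameSyncObject) {
            frameAvailable = true
            frameSyncObject.notifyAll()      // wake up awaitNewImage()
        }
    }

    // Called on the decoding thread after releaseOutputBuffer(..., true).
    fun awaitNewImage(timeoutMs: Long = 2500) {
        synchronized(frameSyncObject) {
            while (!frameAvailable) {
                frameSyncObject.wait(timeoutMs)
                if (!frameAvailable) throw RuntimeException("frame wait timed out")
            }
            frameAvailable = false
        }
        // updateTexImage() is then called on the thread that owns the EGL context.
    }
}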
Update (for "edit 3"): I'm a bit sad that ignoring the "size" field helped, but pre-4.3 it's hard to predict how devices will behave.
If you just want to display the frame, pass the Surface you get from the SurfaceView or TextureView into the MediaCodec decoder configure() call. Then you don't have to mess with SurfaceTexture at all -- the frames will be displayed as you decode them. See the two "Play video" activities in Grafika for examples.
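For example (a minimal sketch; R.id.video_view is a hypothetical layout id, `format` is the MediaFormat obtained from MediaExtractor as in the test, and the holder's Surface is only valid once surfaceCreated() has fired):

// Hand the view's Surface straight to the decoder; frames are rendered on release.
val surfaceView = findViewById<SurfaceView>(R.id.video_view)   // hypothetical id
val decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME)!!)
decoder.configure(format, surfaceView.holder.surface, null, 0)
decoder.start()
// ...feed input buffers as before; releaseOutputBuffer(index, true) displays the frame.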
If you really want to go through a SurfaceTexture, you need to change CodecOutputSurface to render to a window surface rather than a pbuffer. (The off-screen rendering is done so we can use glReadPixels() in a headless test.)
