Is there any way to automatically determine the audio delay and set it in the video player? It is really annoying to fix it manually every time.
ExoPlayer (Google's player, used in YouTube) uses the AudioTrack.getLatency method, which is not part of the public SDK
(https://github.com/google/ExoPlayer/blob/b5beb32618ac99adc58b537031a6f7c3dd761b9a/library/core/src/main/java/com/google/android/exoplayer2/audio/AudioTrackPositionTracker.java#L172)
so I can't replicate this approach, because Xamarin does not include this method in the C# wrapper:
var method = typeof(AudioTrack).GetMethod("getLatency"); // => null
(I tried the suggestions from https://developer.amazon.com/docs/fire-tv/audio-video-synchronization.html#section1-2 as well.)
I also tried to find the native bindings for the Android AudioTrack inside VLC, to get at getTimestamp or getPlaybackHeadPosition, but I was unsuccessful.
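For reference, if I could somehow get at the AudioTrack instance used by the player from native code, the JNI lookup I have in mind would be roughly the following (untested sketch; getLatency is a hidden method, so it could break on newer Android versions just like the reflection ExoPlayer uses):

    // Untested sketch: calling the hidden AudioTrack.getLatency() through JNI.
    // "audioTrack" is assumed to be a jobject referencing an android.media.AudioTrack
    // instance (which is exactly the part I cannot get from libvlc).
    #include <jni.h>

    static jint GetAudioTrackLatencyMs(JNIEnv* env, jobject audioTrack) {
        jclass cls = env->GetObjectClass(audioTrack);
        jmethodID getLatency = env->GetMethodID(cls, "getLatency", "()I");
        if (getLatency == nullptr) {
            env->ExceptionClear();  // method not available on this device/OS version
            return -1;
        }
        return env->CallIntMethod(audioTrack, getLatency);  // latency in milliseconds
    }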
Is there any way to get the audio delay caused by Bluetooth headphones in Xamarin.Forms on Android?
Is there any way to get the Android AudioTrack from libvlc (if that is even what it uses)?
You can't. LibVLC does not offer a way to detect the latency introduced by an external speaker.
Your best bet is to sync it manually with SetAudioDelay (or to write a libvlc plugin with that feature).
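For what it's worth, that manual sync boils down to a single libvlc call; here is a sketch in terms of the C API (which is what SetAudioDelay wraps). The delay value is just a placeholder that you would have to measure or expose as a user setting:

    // Minimal sketch of the manual sync: shift the audio by a fixed, user-provided
    // amount. "mp" is an already-created libvlc_media_player_t*; the 150 ms value
    // is an arbitrary placeholder, not something libvlc can figure out for you.
    #include <vlc/vlc.h>

    static void ApplyHeadphoneDelay(libvlc_media_player_t* mp) {
        const int64_t kBluetoothDelayUs = 150 * 1000;  // delay in microseconds, guessed/measured by hand
        libvlc_audio_set_delay(mp, kBluetoothDelayUs);
    }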
EDIT: I have been told by a core dev that latency is handled on all platforms except Android. You might want to test that. In any case, it may land in a future libvlc Android version.
I have spent three weeks searching for how to create a live video call feature for my Android app (using Android Studio), but I can't find exactly what I'm looking for. I don't want to use something like QuickBlox or Sinch, because this is my final-year project and I have to do it programmatically. I found that WebRTC for Android can be used, but unfortunately I didn't understand how to use it.
So please, can anyone help me with this?
This is unfortunately something that's still hard. The team is working on it though: see https://bugs.chromium.org/p/webrtc/issues/detail?id=6328 for progress.
There's also https://bugs.chromium.org/p/webrtc/issues/detail?id=6804, which has resulted in this bot archiving .aar builds: https://build.chromium.org/p/client.webrtc.fyi/builders/Android%20Archive. That should make it easier to consume the library.
I know that QtMultimediaWidgets is not supported for C++ on Android. I am developing a native C++ application for Android, and since I don't use QML I need another way of playing my videos in the application. I want to use QMediaPlayer because I rely on its signals and slots. Is there any manually developed backend that works on Android, or a way I can render the video myself while still using QMediaPlayer?
Is there a way I can develop such a backend myself using FFmpeg or anything else available on Android? Will there be an update for this in Qt soon?
QtMultimediaWidgets is not supported on Android, so you need to use the QML elements. What you can theoretically try is to embed a QML scene using the MediaPlayer and VideoOutput elements into your QWidget-based app via QWidget::createWindowContainer. Once you see that this can be done, you can get your QMediaPlayer object from QML through the mediaObject property of the MediaPlayer QML element. I have never actually tried to do something like this.
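Something along these lines is what I mean (untested sketch; it assumes a QML file, here called player.qml, containing a MediaPlayer with objectName "player" and a VideoOutput bound to it):

    // Untested sketch: embedding a QML MediaPlayer/VideoOutput scene in a
    // QWidget-based app and pulling the QMediaPlayer out of it.
    // Assumes a "player.qml" resource with MediaPlayer { objectName: "player" }.
    #include <QApplication>
    #include <QWidget>
    #include <QVBoxLayout>
    #include <QQuickView>
    #include <QQuickItem>
    #include <QMediaPlayer>

    int main(int argc, char* argv[]) {
        QApplication app(argc, argv);

        QWidget window;
        auto* layout = new QVBoxLayout(&window);

        // Load the QML scene and wrap its window into a QWidget.
        auto* view = new QQuickView;
        view->setResizeMode(QQuickView::SizeRootObjectToView);
        view->setSource(QUrl(QStringLiteral("qrc:/player.qml")));
        layout->addWidget(QWidget::createWindowContainer(view, &window));

        // Reach the QMediaPlayer behind the QML MediaPlayer via its mediaObject property.
        QMediaPlayer* mediaPlayer = nullptr;
        if (QQuickItem* root = view->rootObject()) {
            if (QObject* qmlPlayer = root->findChild<QObject*>(QStringLiteral("player"))) {
                mediaPlayer = qobject_cast<QMediaPlayer*>(
                    qmlPlayer->property("mediaObject").value<QObject*>());
            }
        }
        // If mediaPlayer is non-null you can connect to its signals and slots as usual.
        Q_UNUSED(mediaPlayer);

        window.resize(640, 360);
        window.show();
        return app.exec();
    }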
You may also try another plugin like QtAV, but you may lose hardware acceleration.
I am trying to write a native Android application in which I want to play a couple of audio streams. In order to synchronize properly with the other audio streams in the rest of the Android system, I need to manage the playback of my streams correctly. When programming in Java, the Android framework provides APIs like AudioManager.requestAudioFocus and AudioManager.abandonAudioFocus, and it also delivers the appropriate callbacks based on the behaviour of other audio streams.
So, is there any way I can call these methods from native code?
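The only route I can think of is calling the Java APIs through JNI myself, roughly like this untested sketch (it needs a JNIEnv* and an application Context passed down from Java, and a real implementation would also need a Java-side OnAudioFocusChangeListener to receive the callbacks):

    // Untested sketch: requesting audio focus from native code via JNI.
    // "env" and "context" (an android.content.Context) must be provided by the caller;
    // error/exception checks are omitted for brevity.
    #include <jni.h>

    static jint RequestAudioFocus(JNIEnv* env, jobject context) {
        // AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        jclass contextCls = env->GetObjectClass(context);
        jmethodID getSystemService = env->GetMethodID(
            contextCls, "getSystemService", "(Ljava/lang/String;)Ljava/lang/Object;");
        jstring audioService = env->NewStringUTF("audio");  // Context.AUDIO_SERVICE
        jobject audioManager = env->CallObjectMethod(context, getSystemService, audioService);

        // am.requestAudioFocus(listener, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN)
        jclass amCls = env->GetObjectClass(audioManager);
        jmethodID requestAudioFocus = env->GetMethodID(
            amCls, "requestAudioFocus",
            "(Landroid/media/AudioManager$OnAudioFocusChangeListener;II)I");
        const jint STREAM_MUSIC = 3;     // AudioManager.STREAM_MUSIC
        const jint AUDIOFOCUS_GAIN = 1;  // AudioManager.AUDIOFOCUS_GAIN
        // Passing a null listener means no focus-change callbacks; a real app would
        // pass a Java-side OnAudioFocusChangeListener object here instead.
        jint result = env->CallIntMethod(audioManager, requestAudioFocus,
                                         (jobject) nullptr, STREAM_MUSIC, AUDIOFOCUS_GAIN);

        env->DeleteLocalRef(audioService);
        return result;  // 1 == AUDIOFOCUS_REQUEST_GRANTED
    }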
It seems there is one more option: the OpenSL ES APIs. Does OpenSL ES provide methods similar to requestAudioFocus / abandonAudioFocus?
Thanks in advance
I'm writing a small call recording library for my rooted phone.
I saw in some applications that recording is done through ALSA or CAF on rooted phones.
I couldn't find any example or tutorial on how to use ALSA or CAF for call recording (or even for plain audio recording, for that matter).
I looked at the tinyalsa lib project, but I couldn't figure out how to use it in an Android app.
Can someone please point me to a tutorial or code example showing how to integrate ALSA or CAF into an Android application?
Update
I managed to wrap tinyalsa with JNI calls. However, a call like mixer_open(0) returns a null pointer, and pcm_open(...) returns a pointer, but the subsequent call to pcm_is_ready(pcm) always returns false.
Am I doing something wrong? Am I missing something?
Here's how to build the ALSA lib using Android's toolchain.
And here you can find another repo mentioning ALSA for Android.
I suggest you read this post in order to understand what your choices are and what the current platform situation is.
EDIT after comments:
I think you need to implement your solution with tinyalsa, assuming you are using the base ALSA implementation. If the tiny version is missing something, then you may need to ask the author (though that would sound strange to me, because you are only doing basic operations).
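For reference, a basic tinyalsa capture open looks roughly like the sketch below. The card/device numbers and the pcm_config values are guesses that have to match your hardware; when they don't, pcm_is_ready() fails and pcm_get_error() should tell you why:

    /* Rough tinyalsa capture sketch. Card 0 / device 0 and the config values are
     * placeholders; they must match what the hardware actually supports, otherwise
     * pcm_open hands back an unusable handle and pcm_is_ready() returns false.
     * Also note that /dev/snd/* is usually inaccessible without root. */
    #include <tinyalsa/asoundlib.h>
    #include <stdio.h>
    #include <string.h>

    static struct pcm* OpenCapture(void) {
        struct pcm_config config;
        memset(&config, 0, sizeof(config));
        config.channels = 2;
        config.rate = 44100;
        config.period_size = 1024;
        config.period_count = 4;
        config.format = PCM_FORMAT_S16_LE;

        struct pcm* pcm = pcm_open(0 /*card*/, 0 /*device*/, PCM_IN, &config);
        if (pcm == NULL || !pcm_is_ready(pcm)) {
            // pcm_open does not necessarily return NULL on failure, so always check
            // pcm_is_ready() and read the reason from pcm_get_error().
            fprintf(stderr, "pcm_open failed: %s\n", pcm ? pcm_get_error(pcm) : "null handle");
            return NULL;
        }
        return pcm;
    }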
After reading this post, we can get some clues about why root is needed (accessing protected mount points).
Keep us updated with your progress, it's an interesting topic!
I am using FFmpeg to decode a file and play it back on an Android device. I have this working and would now like to decode two streams at the same time. I have read some comments about needing to use the av_lockmgr_register() call with FFmpeg; unfortunately, I am not sure how to use it or how the flow should work when using these locks.
Currently I have separate threads on the Java side making requests through JNI to native code that communicates with FFmpeg.
Do the threads need to be on the native (NDK) side, or can I manage them from the Java side? And do I need to do any locking, and if so, how does that work with FFmpeg?
***UPDATE
I have this working now. It appears that setting up the threads at the Java SDK level carries over into separate threads at the native level. With that, I was able to create a struct holding my per-video variables and pass an identifier down to the native layer to specify which struct to use for each video (roughly as sketched below). So far I have not needed any mutexes or locks at the native level, and I haven't had any issues.
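Roughly, this is the shape of what I ended up with (the package, class, and function names are placeholders and the details are trimmed):

    // Sketch of my per-video state: each Java thread passes an index that selects
    // which decoder context its native calls operate on, so the threads never
    // share FFmpeg state and (so far) no native locking has been needed.
    #include <jni.h>
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    struct StreamContext {
        AVFormatContext* formatCtx = nullptr;
        AVCodecContext*  codecCtx  = nullptr;
        int              videoStreamIndex = -1;
    };

    static StreamContext g_streams[2];  // one per video, indexed by the id from Java

    // Called through JNI; "streamId" comes from the Java side and picks the struct.
    extern "C" JNIEXPORT jint JNICALL
    Java_com_example_player_NativeDecoder_decodeNextFrame(JNIEnv*, jobject, jint streamId) {
        StreamContext& ctx = g_streams[streamId];
        // ... av_read_frame(ctx.formatCtx, ...) and avcodec_* calls using only ctx ...
        (void)ctx;
        return 0;
    }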
Does anyone know of potential gotchas I may run into by not using any locking with FFmpeg?
I'll answer this myself: the approach from my latest update appears to be working. By controlling the threads from the Java layer and making my native calls on separate threads, everything works and I have not encountered any issues.
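For anyone who does still want the lock manager mentioned in the question: in FFmpeg versions that still ship av_lockmgr_register (it has since been deprecated, and newer releases handle locking internally), the usual pattern is a small pthread-based callback, roughly like this sketch:

    // Classic lock-manager callback for older FFmpeg versions that still provide
    // av_lockmgr_register; FFmpeg calls it to create/obtain/release/destroy the
    // mutexes it needs when several threads use avcodec at the same time.
    extern "C" {
    #include <libavcodec/avcodec.h>
    }
    #include <pthread.h>
    #include <cstdlib>

    static int LockManager(void** mutex, enum AVLockOp op) {
        pthread_mutex_t** m = reinterpret_cast<pthread_mutex_t**>(mutex);
        switch (op) {
        case AV_LOCK_CREATE:
            *m = static_cast<pthread_mutex_t*>(std::malloc(sizeof(**m)));
            return *m == nullptr || pthread_mutex_init(*m, nullptr) != 0;
        case AV_LOCK_OBTAIN:
            return pthread_mutex_lock(*m) != 0;
        case AV_LOCK_RELEASE:
            return pthread_mutex_unlock(*m) != 0;
        case AV_LOCK_DESTROY:
            pthread_mutex_destroy(*m);
            std::free(*m);
            return 0;
        }
        return 1;
    }

    // Register it once, before any avcodec_open2() calls:
    //   av_lockmgr_register(LockManager);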