I know that QtMultimediaWidgets is not supported for C++ on Android, and I am developing a native application for Android as well. Since I don't use QML, I need another way of playing my videos in the application. I want to use QMediaPlayer, since I rely on its signals and slots. Is there any manually developed backend that works on Android, or a way to render the video myself while still using QMediaPlayer?
Is there a way I can develop such a backend myself using ffmpeg or any program available on Android? Will there be an update for this in Qt soon?
QtMultimediaWidgets is not supported on Android, so you need to use the QML elements. What you can theoretically try is to embed a QML scene using the MediaPlayer and VideoOutput elements in your QWidget-based app via QWidget::createWindowContainer. If that works, you can get your QMediaPlayer object from QML through the mediaObject property of the MediaPlayer QML element. I have never actually tried to do something like this.
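Roughly, and completely untested, I imagine it would look something like this (the QML file name and the "mediaPlayer" object name are just placeholders):

// Completely untested sketch: embed a QML scene containing MediaPlayer and
// VideoOutput in a QWidget UI, then pull the QMediaPlayer out of it.
// "video.qml" and the objectName "mediaPlayer" are placeholders.
#include <QApplication>
#include <QQuickView>
#include <QQuickItem>
#include <QWidget>
#include <QVBoxLayout>
#include <QVariant>
#include <QUrl>
#include <QMediaPlayer>
#include <QDebug>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // video.qml would contain something like:
    //   MediaPlayer { id: player; objectName: "mediaPlayer"; source: "..." }
    //   VideoOutput { source: player; anchors.fill: parent }
    QQuickView *view = new QQuickView;
    view->setSource(QUrl("qrc:/video.qml"));
    view->setResizeMode(QQuickView::SizeRootObjectToView);

    QWidget mainWindow;
    QVBoxLayout *layout = new QVBoxLayout(&mainWindow);
    layout->addWidget(QWidget::createWindowContainer(view, &mainWindow));

    // The MediaPlayer QML element exposes its backend via the mediaObject
    // property; qvariant_cast should give you the QMediaPlayer (Qt >= 5.5).
    QObject *qmlPlayer = view->rootObject()->findChild<QObject *>("mediaPlayer");
    QMediaPlayer *player = qmlPlayer
        ? qvariant_cast<QMediaPlayer *>(qmlPlayer->property("mediaObject"))
        : nullptr;
    if (player) {
        // From here on the usual signals and slots are available.
        QObject::connect(player, &QMediaPlayer::stateChanged,
                         [](QMediaPlayer::State s) { qDebug() << s; });
        player->play();
    }

    mainWindow.show();
    return app.exec();
}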
You may also try another plugin like QtAV, but you may lose hardware acceleration.
Is there any way to automatically get the audio delay and set it in the video player? It is really annoying to fix it manually every time.
ExoPlayer (Google's player used in YouTube) uses AudioTrack's getLatency method, which is not part of the public SDK
(https://github.com/google/ExoPlayer/blob/b5beb32618ac99adc58b537031a6f7c3dd761b9a/library/core/src/main/java/com/google/android/exoplayer2/audio/AudioTrackPositionTracker.java#L172),
so I can't replicate this, because Xamarin does not include that method in the C# wrapper:
var method = typeof(AudioTrack).GetMethod("getLatency"); // => null
(I tried the approaches from https://developer.amazon.com/docs/fire-tv/audio-video-synchronization.html#section1-2)
I also tried to find the native bindings for the Android AudioTrack in VLC to get at getTimestamp or getPlaybackHeadPosition, but I was unsuccessful.
Is there any way to get the audio delay caused by Bluetooth headphones in Xamarin.Forms on Android?
Is there any way to get the Android AudioTrack from LibVLC (if that is even used)?
You can't. LibVLC does not offer a way to detect the latency caused by an external speaker.
Your best bet is to sync it manually with SetAudioDelay (or to write a LibVLC plugin with that feature).
EDIT: I have been told by a core dev that latency is handled on all platforms except Android, so you might want to test it. In any case, it may be addressed in a future LibVLC Android version.
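For what it's worth, SetAudioDelay maps onto the libvlc C function shown below; this is only a rough, untested sketch of applying a measured delay, and the 500 ms figure is made up:

// Minimal sketch of manual syncing through the libvlc C API, which is what
// LibVLCSharp's MediaPlayer.SetAudioDelay() wraps. The 500 ms value is a
// made-up example of a measured headphone latency; whether you need a
// positive or a negative value depends on which stream lags, so adjust
// empirically.
#include <vlc/vlc.h>
#include <cstdint>

void compensate_headphone_latency(libvlc_media_player_t *mp)
{
    const int64_t delay_us = 500 * 1000;   // the delay is given in microseconds

    // Note: libvlc resets the audio delay every time the media changes,
    // so this has to be re-applied for each new playback.
    libvlc_audio_set_delay(mp, delay_us);
}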
Is it possible to edit a video using Qt/QML for Android and iOS?
I would like to be able to trim a video from timeX to timeY and, if possible, add a watermark.
I did some research but didn't find anything suitable.
All Qt has to offer regarding video is contained in the Qt Multimedia library.
This library is not designed to do video editing, so you will have nothing out of the box.
However, it might be possible to combine QMediaRecorder and QMediaPlayer to trim a video. And you also have access to video frames: https://doc.qt.io/qt-5/videooverview.html#working-with-low-level-video-frames
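To give an idea of what working with low-level frames looks like, here is a rough, untested Qt 5 sketch of a video surface that receives every decoded frame (the class name and file path are placeholders):

// Untested Qt 5 sketch: receive every decoded frame from QMediaPlayer through
// a custom QAbstractVideoSurface. Trimming or watermarking logic would go
// into present(); FrameGrabber is just a made-up name.
#include <QAbstractVideoSurface>
#include <QVideoFrame>
#include <QMediaPlayer>
#include <QUrl>

class FrameGrabber : public QAbstractVideoSurface
{
public:
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
        QAbstractVideoBuffer::HandleType type = QAbstractVideoBuffer::NoHandle) const override
    {
        Q_UNUSED(type);
        return { QVideoFrame::Format_ARGB32, QVideoFrame::Format_RGB32 };
    }

    bool present(const QVideoFrame &frame) override
    {
        QVideoFrame copy(frame);
        if (copy.map(QAbstractVideoBuffer::ReadOnly)) {
            // copy.bits() / copy.bytesPerLine() give access to the pixel data;
            // this is where a watermark could be drawn before re-encoding.
            copy.unmap();
        }
        return true;
    }
};

// Usage:
//   QMediaPlayer player;
//   FrameGrabber grabber;
//   player.setVideoOutput(&grabber);                      // frames arrive in present()
//   player.setMedia(QUrl::fromLocalFile("input.mp4"));    // placeholder path
//   player.play();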
I am not sure you will be able to do what you want using only Qt Multimedia; you might be better off with a dedicated video editing library. You could take a look at OpenShot, an open-source video editor: its user interface is built with Qt, and the video editing functions live in a separate library, libopenshot.
I am building a custom audio player with MediaCodec/MediaExtractor/AudioTrack etc. that mixes and plays multiple audio files.
Therefore I need a resampling algorithm in case one of the files has a different sample rate.
I can see that there is a native AudioResampler class available:
https://android.googlesource.com/platform/frameworks/av/+/jb-mr1.1-release/services/audioflinger/AudioResampler.h
But so far I have not found any examples of how it can be used.
My question:
Is it possible to use the native resampler on Android (from Java or via JNI)?
If yes, does anyone know of an example out there, or any docs on how to use this AudioResampler class?
Thanks for any hints!
This is not a public API, so you can't officially rely on using it (and even unofficially, using it would be very hard). You need to find a library (ideally in C, for the NDK) to bundle with your app.
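To make that concrete, here is a rough sketch of what feeding audio through libsamplerate (just one possible choice of library) from NDK code could look like; the function and its parameters are made up and the buffer handling is simplified:

// Rough illustration of the "bundle a C library" route, here with
// libsamplerate (one possible choice among several). Streaming conversion
// via src_process() and real error handling are omitted; resample() and its
// parameters are made-up names.
#include <samplerate.h>
#include <vector>

std::vector<float> resample(const std::vector<float> &input,
                            int channels, int srcRate, int dstRate)
{
    const double ratio = static_cast<double>(dstRate) / srcRate;
    std::vector<float> output(static_cast<size_t>(input.size() * ratio) + channels);

    SRC_DATA data = {};
    data.data_in       = const_cast<float *>(input.data());  // older headers want float*
    data.input_frames  = static_cast<long>(input.size() / channels);
    data.data_out      = output.data();
    data.output_frames = static_cast<long>(output.size() / channels);
    data.src_ratio     = ratio;

    // SRC_SINC_FASTEST trades a little quality for speed, which is usually
    // acceptable when mixing audio on a phone.
    if (src_simple(&data, SRC_SINC_FASTEST, channels) != 0)
        return {};

    output.resize(static_cast<size_t>(data.output_frames_gen) * channels);
    return output;
}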
I have an Android application.
Part of the app's features is playing ogg files.
I tried using MediaPlayer for this task, but it caused a huge slowdown of the app on the first play.
I'm guessing this is because the music file is loaded into memory in its entirety, which leaves less memory for the app.
The question is: is it possible to use only the Music API of libGDX for this task?
Can I use it without the rest of the libGDX application framework, since I have my own activities written in plain Android code?
Well, I would recommend taking a look at the Music backend of libGDX. There you can see the code behind the regular Music API; everything in it is plain Android, and I am sure you can take snippets of it to create your own music player.
I would recommend putting that into its own thread so it does not affect your other code, and using some kind of event system to start and stop songs, for example.
I don't think you can simply integrate just a part of libGDX into your project. Take a look at the wiki page about interfacing with platform code: create an interface in your Android project and a player in the libGDX core project, then call that interface from your Android project and the call gets forwarded to the libGDX core "music player".
I am using ffmpeg to decode a file and play it back on an Android device. I have this working and would now like to decode two streams at the same time. I have read some comments about needing to use the av_lockmgr_register() call with ffmpeg; unfortunately, I am not sure how to use it or how the flow would work when using these locks.
Currently I have separate threads on the Java side making requests through JNI to native code that communicates with ffmpeg.
Do the threads need to be on the native (NDK) side, or can I manage them on the Java side? And do I need to do any locking, and if so, how does that work with ffmpeg?
***UPDATE
I have this working now. It appears that setting up the threads at the Java SDK level carries over to separate threads at the native level. With that, I was able to create a struct with my variables and then pass a variable to the native layer to specify which struct to use for each video. So far I have not needed any mutexes or locks at the native level, and I haven't had any issues.
Does anyone know of potential gotchas I may encounter by not using them with ffmpeg?
I'll answer this myself: the approach from my latest update appears to be working. By controlling the threads from the Java layer and making my native calls on separate threads, everything works and I have not encountered any issues.
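For anyone finding this later, the per-stream context idea from the update looks roughly like this on the native side (the Java class name, the two-slot array, and the function names are made up, and decoder setup is omitted):

// Rough sketch of the "one context struct per video" approach described in
// the update. Names such as com.example.player.NativeDecoder are invented;
// one-time ffmpeg initialisation (av_register_all() on older builds) and the
// actual decoder setup are omitted.
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}
#include <jni.h>

struct StreamContext {
    AVFormatContext *fmt = nullptr;
    AVCodecContext *codec = nullptr;
    int videoStreamIndex = -1;
};

static StreamContext g_streams[2];   // one per simultaneously decoded file

extern "C" JNIEXPORT jint JNICALL
Java_com_example_player_NativeDecoder_open(JNIEnv *env, jclass, jint slot, jstring path)
{
    StreamContext &ctx = g_streams[slot];
    const char *file = env->GetStringUTFChars(path, nullptr);

    int err = avformat_open_input(&ctx.fmt, file, nullptr, nullptr);
    if (err == 0)
        err = avformat_find_stream_info(ctx.fmt, nullptr);
    // ... find the video stream and open its decoder into ctx.codec ...

    env->ReleaseStringUTFChars(path, file);
    return err;
}

extern "C" JNIEXPORT jint JNICALL
Java_com_example_player_NativeDecoder_decodeNextFrame(JNIEnv *, jclass, jint slot)
{
    // Each Java thread only ever passes its own slot, so the two decoders
    // never touch the same AVFormatContext and no extra locking is needed
    // on this path.
    StreamContext &ctx = g_streams[slot];
    AVPacket pkt;
    int err = av_read_frame(ctx.fmt, &pkt);
    if (err == 0) {
        // ... send pkt to ctx.codec, receive and render the frame ...
        av_packet_unref(&pkt);
    }
    return err;
}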