I am writing a video player for Android. So far I have been able to capture frames with the help of av_read_frame and avcodec_decode_video2, and render them with SDL2.0. I have followed dranger's tutorial02.c: http://dranger.com/ffmpeg/
Pseudocode:
while (1)
{
    1. Read a packet
    2. Check whether it is a video frame; if not, go to step 3
       2.1 If it is a video frame, update the texture with SDL_UpdateYUVTexture
    3. Handle SDL events
    4. Clear the renderer
    5. Present the renderer
}
I wonder: do I need to take care of video synchronization (DTS/PTS calculation) when I only need to display video?
This scenario works well on Samsung devices, but not on other phones.
What would be your advice?
It depends. If you're OK with the fact that your video will (a) play as fast as the device can decode it and (b) play at a different speed on different devices, and even on the same device depending on other processes, then you don't need to synchronize, and can simply display the frames as soon as they're decoded.
Otherwise you still need to synchronize the video output to the PTS. Since you don't have audio, and therefore no audio clock, your only option is to synchronize the video to the system clock, which actually makes it simpler.
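As a rough illustration, here is a minimal sketch of that system-clock approach, grafted onto the dranger-style loop above. It assumes frame->pts holds a valid timestamp (with the old avcodec_decode_video2 API you may need av_frame_get_best_effort_timestamp() instead); video_stream, start_ticks, and frame are your own variables, not library names:

    /* Convert the frame's PTS to milliseconds using the stream's time base. */
    double pts_ms = av_q2d(video_stream->time_base) * frame->pts * 1000.0;

    if (start_ticks == 0)
        start_ticks = SDL_GetTicks();          /* wall-clock origin at first frame */

    double elapsed = (double)(SDL_GetTicks() - start_ticks);
    if (pts_ms > elapsed)
        SDL_Delay((Uint32)(pts_ms - elapsed)); /* wait until the frame is due */

    /* then SDL_UpdateYUVTexture / SDL_RenderClear / SDL_RenderCopy /
       SDL_RenderPresent exactly as before */

Frames that arrive late (pts_ms < elapsed) are simply shown immediately; a real player would also consider dropping them.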
My Android app plays videos with ExoPlayer 2, and now I'd like to play a video backwards.
I searched around a lot and found only the idea of converting it to a GIF, and this suggestion from WeiChungChang.
Is there a more straightforward solution? Another player or a library that implements this for me is probably too much to ask, but converting to a reversed GIF gave me a lot of memory problems, and I don't know what to do with the WeiChungChang idea. Playing only MP4 in reverse would be enough, though.
Videos are frequently encoded such that the encoding for a given frame depends on one or more frames before it, and sometimes on one or more frames after it as well.
In other words, to reconstruct a frame correctly you may need to refer to one or more previous and one or more subsequent frames.
This allows a video encoder to reduce file or transmission size by fully encoding the information only for reference frames, sometimes called I-frames, and storing just the delta to those reference frames for the frames before and/or after them.
Playing a video backwards is not a common player function, and a player would typically have to decode the video as usual (i.e. forwards) to obtain the frames and then play them in reverse order.
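In outline, that decode-forwards-then-reverse approach looks something like the sketch below. decode_next_frame() and render() are hypothetical helpers standing in for your own decode loop and display code, not real library calls, and a real implementation would work GOP by GOP rather than buffering everything:

    AVFrame *frames[MAX_FRAMES];
    int n = 0;

    /* Decode forwards, buffering every frame. */
    while (n < MAX_FRAMES && (frames[n] = decode_next_frame(ctx)) != NULL)
        n++;

    /* Display in reverse order, freeing each frame once shown. */
    for (int i = n - 1; i >= 0; i--) {
        render(frames[i]);
        av_frame_free(&frames[i]);
    }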
You could extend ExoPlayer to do this yourself, but it may be easier to manipulate the video on the server side first, if possible - there are tools that will reverse a video so that your players can then play it as normal, for example https://www.videoreverser.com, https://www.kapwing.com/tools/reverse-video etc.
If you need to reverse it on the device for your use case, then you could use ffmpeg to achieve this - see an example ffmpeg command here:
https://video.stackexchange.com/a/17739
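For reference, the linked answer boils down to a command along the lines of ffmpeg -i input.mp4 -vf reverse -af areverse reversed.mp4 (the file names are placeholders); note that the reverse filter buffers the whole clip in memory, so long videos should be split into segments first.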
If you are using ffmpeg, it is generally easiest to use it via a wrapper on Android such as this one, which will also let you test the command before you add it to your app:
https://github.com/WritingMinds/ffmpeg-android-java
Note that video manipulation is time- and processor-hungry, so this may be slow and consume more battery than you want on a mobile device if the video is long.
I want to create an Android app that plays multiple MP3s simultaneously, with precise sync (less than 1/10 of a second off) and independent volume control. Each MP3 could be over 1 MB, with a run time of up to several minutes. My understanding is that MediaPlayer will not do the precise sync, and SoundPool can't handle files over 1 MB or 5 seconds of run time. I am experimenting with Superpowered and may end up using it, but I'm wondering if there's anything simpler, given that I don't need any processing (reverb, flange, etc.), which is Superpowered's focus.
I also ran across the YouTube video on Android high-performance audio from Google I/O 2016 and am wondering if anyone has experience with it:
https://www.youtube.com/watch?v=F2ZDp-eNrh4
Superpowered was originally made for my DJ app (DJ Player in the App Store), where precisely syncing multiple tracks is a requirement.
Syncing multiple MP3s with independent volume control is therefore definitely possible and core to Superpowered. All you need is the SuperpoweredAdvancedAudioPlayer class.
The CrossExample project in the SDK has two players playing in sync.
The built-in audio features of Android are highly device- and/or build-dependent; you can't get a consistent feature set with them. In general, Android's audio features are not stable. That's why you need a specialized audio library that does everything "inside" your application (i.e. one that is not a "wrapper" around Android's audio features).
When you play compressed files (AAC, MP3, etc.) on Android, in most situations they are decoded in hardware to save power, except when the output goes to a USB audio interface. The hardware codec accepts data in big chunks (again, to save power). Since it's not possible to issue a command to start playing multiple streams at once, what often happens is that one stream has already sent a chunk of compressed audio to the hardware codec and started playing while the others haven't yet sent their data.
You really need to decode these files in your app and mix the output to produce a single audio stream; then you can guarantee the desired synchronization. The built-in mixing facilities are mostly intended to let multiple apps share the same sound output; they are not designed for multitrack mixing.
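The mixing step itself is simple once everything is decoded to PCM. A minimal sketch, assuming both tracks are already decoded to interleaved 16-bit PCM at the same sample rate (mix_two_tracks and the gain parameters are illustrative names, not part of any library):

    #include <stdint.h>
    #include <stddef.h>

    /* Sum two PCM streams sample by sample, applying a per-track gain
       (the independent volume control), and clip to the 16-bit range. */
    void mix_two_tracks(const int16_t *a, float gain_a,
                        const int16_t *b, float gain_b,
                        int16_t *out, size_t samples)
    {
        for (size_t i = 0; i < samples; i++) {
            float s = a[i] * gain_a + b[i] * gain_b;
            if (s >  32767.0f) s =  32767.0f;
            if (s < -32768.0f) s = -32768.0f;
            out[i] = (int16_t)s;
        }
    }

Because both tracks advance through the same loop, sample-accurate sync falls out for free; the hard part on Android is the decoding and the low-latency output, which is what libraries like Superpowered handle.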
I want to make an app that has a loop-recording feature. That means the app will continuously record video, and when the user hits the "end of recording" button, the video will contain only the last minute recorded. What is the best way to achieve this?
As far as I know, there is no simple way to achieve this. Some rough ideas, though, in order of increasing difficulty:
If you can safely assume that the total recording time will be fairly short (i.e., you won't run out of storage space on the device), you could record the entire video and then perform a post-processing step that trims the video to size.
Record the video in one-minute chunks. When the user stops recording, compute how much of the previous chunk you need to prepend to the current chunk (see the sketch after this list), and stitch the chunks together.
Register a PreviewCallback and store the video frames in your own file format, periodically removing frames that are too old to matter. You would need to store the audio separately, and then transcode the custom format into a standard one.
Each of these would probably require some NDK code to do the work efficiently.
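For the chunked approach, the bookkeeping is just arithmetic. A small sketch (tail_needed_from_previous and WINDOW_SEC are made-up names for illustration): keep only the current and previous chunk on disk, and on stop take the tail of the previous chunk plus all of the current one.

    #define WINDOW_SEC 60.0

    /* current_sec: seconds recorded into the chunk that was active when
       the user pressed stop. Returns how many seconds to keep from the
       END of the previous one-minute chunk. */
    double tail_needed_from_previous(double current_sec)
    {
        double need = WINDOW_SEC - current_sec;
        return need > 0 ? need : 0;  /* a full current chunk suffices alone */
    }

For example, stopping 25 seconds into the current chunk means trimming the last 35 seconds off the previous chunk and concatenating the two.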
I'm making an app that takes a video and does some computation on it. I need to carry out this computation on individual frames of the video. So I have two questions:
Are videos on Android stored as a sequence of pictures? (I've seen a lot of Android devices that advertise 25-30 fps cameras.) If yes, can I, as a developer, get access to the frames that make up a video, and how?
If not, is there any way for me to extract at least 15-20 distinct frames per second from a video taken on an Android device (and, of course, run the computation on those frames)?
Videos are stored as videos, not as sequences of pictures. To get at individual frames you can use the FFMPEG library; there are FFMPEG ports to Android, such as the one in the Dolphin open-source player. This requires C/C++ programming with the NDK, though.
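To make the FFMPEG route concrete, here is a hedged sketch of a per-frame walk over a file using the current FFmpeg C API (error handling omitted for brevity; process_frame is your own callback, not a library function):

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    void for_each_frame(const char *path, void (*process_frame)(AVFrame *))
    {
        AVFormatContext *fmt = NULL;
        avformat_open_input(&fmt, path, NULL, NULL);
        avformat_find_stream_info(fmt, NULL);

        int vs = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        const AVCodec *dec =
            avcodec_find_decoder(fmt->streams[vs]->codecpar->codec_id);
        AVCodecContext *ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[vs]->codecpar);
        avcodec_open2(ctx, dec, NULL);

        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == vs) {
                avcodec_send_packet(ctx, pkt);
                while (avcodec_receive_frame(ctx, frame) == 0)
                    process_frame(frame);   /* your per-frame computation */
            }
            av_packet_unref(pkt);
        }
        /* (a full version would also flush the decoder with a NULL packet) */

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
    }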
I'm trying to create an app to stream live TV. Currently the problem I'm facing is that after, say, 10 minutes of playing, the video freezes but the audio carries on. This happens on a 1.3 Mbps stream. I also have lower-bitrate streams, such as a 384 kbps stream, that might last an hour or so but will still do the same. I've tested this with a local high-quality video (the file size is 2.3 GB) that has no lag and doesn't freeze at all, so it must be something to do with the way HLS is streamed to Android.
Does anyone have any idea on how to solve this problem?
Thanks