Is there a way to step through a video file frame by frame? I've tried using a VideoView and have had minor success: I can get the video to step through key frames, but not individual frames. I figured this would be the default behavior, especially given the way video compression works. Is there a way to override this default behavior, or a configuration I can change?
The default behaviour in the stagefright media framework is always to seek to a key frame (as opposed to the earlier framework, opencore, whose default behaviour was to seek to the exact time).
You cannot do frame-by-frame seeking using the MediaPlayer APIs provided by Android.
If you really want to implement frame-by-frame seeking, you will have to use a third-party multimedia framework like FFmpeg, or implement your own.
Have you looked at MediaMetadataRetriever? There you can use getFrameAtTime(long timeUs, int option), which returns a Bitmap.
Depending on your use case, it may be what you need.
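For example, a minimal sketch of that approach (the file path and timestamp are placeholders):

```java
// Minimal sketch: grab a single frame near a timestamp given in microseconds.
// OPTION_CLOSEST decodes from the nearest sync frame up to the requested frame,
// which is more accurate (but slower) than OPTION_CLOSEST_SYNC.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource("/sdcard/Movies/sample.mp4");

long timeUs = 5_000_000L; // 5 seconds into the video, in microseconds
Bitmap frame = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);

retriever.release();
```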
Related
I have an encoded video stream that I'm playing through ExoPlayer. What I want to do is get each frame of the video and edit it before it is displayed (e.g. changing some pixels).
Is it possible to do this with ExoPlayer? I've been looking at the implementation of MediaCodecVideoRenderer.java in the ExoPlayer source, but it seems that each MediaCodec releases its output buffer to a surface itself, without the possibility of editing the frame before rendering.
It will depend on exactly what you want to modify, but it is possible to use a GLSurfaceView, listen for each frame, and then transform the frame, assuming it is not encrypted (with encrypted content you can usually still apply transformations, but you definitely should not be able to read the frame itself).
There is a good example project that does something similar, applying filters to videos by extending ExoPlayer - take a look at the EPlayerRenderer class in particular.
https://github.com/MasayukiSuda/ExoPlayerFilter
You can also do a similar thing with OpenCV - read in a frame, modify it, and then display it. This may be easier if you are doing complicated image manipulations.
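If you would rather wire this up yourself than pull in the library, the rough idea (and it is only a sketch) is to point the player at a SurfaceTexture backed by an OpenGL external texture, so every frame passes through your own shader before it reaches the screen. Here `glSurfaceView`, `player` (a SimpleExoPlayer) and `drawWithShader` are assumed to exist in your code:

```java
// Rough sketch, not a drop-in implementation: route the decoder output into a
// SurfaceTexture bound to an OpenGL ES external texture instead of the screen.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
surfaceTexture.setOnFrameAvailableListener(st -> glSurfaceView.requestRender());

// Hand ExoPlayer a Surface that wraps the SurfaceTexture.
player.setVideoSurface(new Surface(surfaceTexture));

// Then, inside GLSurfaceView.Renderer#onDrawFrame(GL10 gl):
//   surfaceTexture.updateTexImage();              // latch the newest frame
//   float[] stMatrix = new float[16];
//   surfaceTexture.getTransformMatrix(stMatrix);
//   drawWithShader(tex[0], stMatrix);             // your per-frame edit happens here
```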
The MediaPlayer's seekTo() and getCurrentPosition() work inaccurately and only approximately, and this issue has gone unsolved by Google for a long time.
I need a good library that can return the precise position of playback in milliseconds and seek to where it is needed. I've tried a few, like presto, vitamio, and ExoPlayer (for the latter I can't find any documentation on how to play from the SD card), but haven't found a good one yet.
Using ffmpeg is complex for me, and the only Java wrapper I've found handles only decoding, not playback.
Please give me advice on how to play back audio and get exact values from getCurrentPosition() and seekTo().
Check out FFmpegMediaPlayer; it's an FFmpeg-based library for playback on Android:
https://github.com/wseemann/FFmpegMediaPlayer
I feel your pain; the built-in MediaPlayer seems to be a steaming pile of garbage when it comes to seeking (though I am seeking videos, not audio).
Also be aware that your encoding could be wrong -- double check that the length reported in the codec tags matches the length reported by MediaPlayer.
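One quick way to run that check (a sketch; `context` and `uri` are assumed to point at the file in question):

```java
// Sketch: compare the duration stored in the container metadata with what
// MediaPlayer reports for the same file. A large mismatch points at a bad encode.
MediaMetadataRetriever mmr = new MediaMetadataRetriever();
mmr.setDataSource(context, uri);
long metadataMs = Long.parseLong(
        mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION));
mmr.release();

MediaPlayer mp = MediaPlayer.create(context, uri);
long playerMs = mp.getDuration();
mp.release();

Log.d("DurationCheck", "metadata=" + metadataMs + "ms, player=" + playerMs + "ms");
```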
I work on a project that requires exact seeking of a video because the system needs to be synchronized with other devices. The OS used for video playback is Android. So far I have used the MediaPlayer class, but depending on the number of key frames, seeking is highly inaccurate.
So my next idea is to cache decoded images and wrap my own playback class around them. So far I understand how to use the MediaExtractor and MediaCodec classes to decode videos manually. The android.media.ImageReader class seems to be exactly what I want.
But what I do not understand is how to render such an android.media.Image manually once I've got it. I'd like to avoid doing the YUV to RGB conversion manually; a preferred method would be to put such an Image into a Surface or copy it to a SurfaceTexture somehow.
Please take a look here.
In order to support use cases where videos played on several devices need to be synchronized, this player performs exact seeking.
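The core idea behind that kind of exact seek, sketched below, is to use MediaExtractor to jump to the key frame before the target time and then decode forward, only rendering the frame that matches the target. In this sketch `extractor` and `decoder` are assumed to be set up already, with the decoder configured to render to a Surface, and TIMEOUT_US is a placeholder constant:

```java
// Sketch of frame-accurate seeking with MediaExtractor + MediaCodec.
extractor.seekTo(targetUs, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean reachedTarget = false;
while (!reachedTarget) {
    int inIndex = decoder.dequeueInputBuffer(TIMEOUT_US);
    if (inIndex >= 0) {
        ByteBuffer inBuf = decoder.getInputBuffer(inIndex);
        int size = extractor.readSampleData(inBuf, 0);
        if (size >= 0) {
            decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
            extractor.advance();
        }
    }
    int outIndex = decoder.dequeueOutputBuffer(info, TIMEOUT_US);
    if (outIndex >= 0) {
        boolean render = info.presentationTimeUs >= targetUs;
        // Frames before the target are decoded but dropped; only the target renders.
        decoder.releaseOutputBuffer(outIndex, render);
        reachedTarget = render;
    }
}
```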
I am trying to create an app with the following features:
normal video playback
slower video playback
frame by frame
reverse video playback (normal, slower, frame by frame)
seekable to specific times
video scrubbing
no video sound needed
video is recorded via the device's camera
The closest comparisons to an existing app would be the Ubersense Coach app for iOS and Coach's Eye on Android, though there are a few others; they all have these features.
I have looked into several choices so far:
First, the built-in Android MediaPlayer, which can't really do anything I need.
Then the MediaExtractor/decoder, looking through the code for Grafika (https://github.com/google/grafika), which can't play backwards.
I tried to pull out each frame as needed with the MediaMetadataRetriever, which is too slow (100ms per frame), and the GC gets in the way.
Looked for a library that could potentially solve the issue, without luck so far.
With MediaExtractor I already have the ability to play back video, forward, frame by frame or at full speed. But I do not have that luxury in reverse, and seeking does take some time since I need it to be free of artifacts.
When trying to go in reverse, even seeking to a previous sync frame and advancing to the frame before the one I currently had is not doable without huge lag (as expected).
I am wondering if there is a better codec I could use, or a library I have yet to stumble upon. I would rather avoid having to create something custom in native code if possible.
Thanks in advance
Michael
This is not a question about playing two separate videos in two separate VideoViews on the one activity.
I have been asked to see if it is possible to create an activity with a single VideoView. When the user opens the Activity, they are asked to select a base video and then select a second video. Both videos will be playing in the one VideoView at the same time, but the base video will have an alpha of 255 and the second video will have an alpha of 150.
For testing though, video files located on the phone will do.
At this time, I have only been able to create an activity that plays a single video in a VideoView.
I thought that if I created a custom VideoView class, I could override the onDraw function, somehow grab the video frame from the second video, apply the alpha, and then redraw it over the first VideoView's canvas, but I do not know where to start.
My other concern with this process is the amount of memory used to play two videos at once in the one VideoView as well as the processing required to apply the alpha and then redraw it seamlessly without affecting the performance or playback of the video.
I'm not sure where to start or how best to approach this and if possible, was hoping for some guidance as to either methods or objects to use.
I'm developing a demo application to show the client on an Android 2.2 system using Eclipse. I'm not looking to target any higher systems at this time as the demo phone runs Android 2.2.
I'm not entirely sure why you would want to use a VideoView like that. VideoViews use only one MediaPlayer and using it to sync one video on top of another would probably require a very kludgey implementation of two MediaPlayers through the same VideoView subclass, rendering on the same surface.
Take a look at the source code to see how a MediaPlayer renders a video inside of a VideoView and how MediaController controls playback. You can probably hack around in there to have two MediaPlayers point to the same VideoView/SurfaceView at once. Alternatively, you could probably subclass MediaPlayer to handle multiple data sources.
Doing either of these things is counter to what VideoView and MediaPlayer are built for, and performance is going to take a huge hit.
If using a VideoView is not a hard requirement, then my suggestion would be to use an existing video library like ffmpeg, which would be easier and more performant than rewriting base media classes (caveat: using ffmpeg will require the NDK, I suggest using an existing ffmpeg wrapper to save time).
Once ffmpeg is added to your project, applying the secondary video as an OverlayVideoFilter would be fairly easy, and should allow you to layer one video on top of the other (though controlling playback simultaneously might be a challenge left for you).
The correct path to take probably depends on what you want to do with the compound video once you get it (export the video as a single video, control playback, etc.).
Playing two videos in a single VideoView is not possible. This is because the VideoView is in reality an extended SurfaceView, which is both outdated and never worked super well to begin with (more on this at the bottom).
I don't know why you have a hard requirement on using a VideoView, as it is very simplistic, and will not give you what you need.
If your requirement for VideoView is because you want to keep the media controls and playback functionality, you're better off making a custom View. Extend LinearLayout and add two SurfaceViews to it with weights of 1. Copy the content of VideoView.java and place it in your new View, and make the modifications to handle two SurfaceViews playing two videos synchronously.
You're actually better off using TextureViews instead of SurfaceViews; TextureView was added in API 14, rectifies many of SurfaceView's shortcomings, and will handle things like animations better than the VideoView will.
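A rough sketch of that setup, with placeholder names, might look like this. Each TextureView is fed by its own MediaPlayer; setting the data sources, starting playback in sync, applying the alpha, and releasing the players are left out:

```java
import android.content.Context;
import android.graphics.SurfaceTexture;
import android.media.MediaPlayer;
import android.view.Surface;
import android.view.TextureView;
import android.widget.LinearLayout;

// Rough sketch of the custom View described above: a LinearLayout holding two
// TextureViews with equal weights, each rendered to by its own MediaPlayer.
public class DualVideoView extends LinearLayout {
    public final MediaPlayer basePlayer = new MediaPlayer();
    public final MediaPlayer overlayPlayer = new MediaPlayer();

    public DualVideoView(Context context) {
        super(context);
        setOrientation(HORIZONTAL);

        TextureView baseView = new TextureView(context);
        TextureView overlayView = new TextureView(context);

        LayoutParams lp = new LayoutParams(0, LayoutParams.MATCH_PARENT, 1f);
        addView(baseView, lp);
        addView(overlayView, lp);

        baseView.setSurfaceTextureListener(attachTo(basePlayer));
        overlayView.setSurfaceTextureListener(attachTo(overlayPlayer));
    }

    // Connect a MediaPlayer to a TextureView's SurfaceTexture once it is ready.
    private TextureView.SurfaceTextureListener attachTo(final MediaPlayer player) {
        return new TextureView.SurfaceTextureListener() {
            @Override public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
                player.setSurface(new Surface(st));
            }
            @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
            @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
            @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {}
        };
    }
}
```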