I want to make an app which takes a video from the camera, adds additional visual info (overlays) and creates a video file from it which can later be uploaded to a server.
How can I do that?
Without prior experience with such tasks, I assume there are two options:
1. Screen-capture and encode to a video file. However, the resulting framerate may not be sufficient.
2. Record the video to the SD card and re-encode it later with the added overlays. Live encoding is not needed, so it's okay for the encoding process to be slower than realtime.
You will have to resort to using, for instance, ffmpeg and the NDK to encode your own video. There are plenty of examples out there, but it's still somewhat cumbersome.
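To give an idea of the shape this takes, here is a hypothetical sketch of the Java/NDK boundary; the library name and native method are assumptions for illustration, not an existing API:

```java
// Hypothetical JNI wrapper around an NDK-built ffmpeg encoder.
public class NativeEncoder {
    static {
        // Your own .so, built with the NDK against ffmpeg's libraries.
        System.loadLibrary("myffmpegjni");
    }

    // Implemented in C on top of libavcodec/libavformat: reads raw frames
    // (with overlays already drawn) and muxes them into a video file.
    public static native boolean encode(String framesDir, String outputPath, int fps);
}
```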
Hope this helps:

Use RelativeLayout. Put the camera preview as the first child of the RelativeLayout and the VideoView as the second child. The VideoView will appear to be "on top of" the SurfaceView for the camera preview. BTW, VideoView really is a SurfaceView. Note that you may decide someday to use a SurfaceView and MediaPlayer, rather than a VideoView, so you can get more control over video playback.
Source: http://osdir.com/ml/Android-Developers/2010-03/msg00077.html
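A minimal sketch of that layout built in code (Java), assuming it runs inside an Activity's onCreate(); a plain XML layout with the same two children works just as well:

```java
RelativeLayout root = new RelativeLayout(this);

// First child: the SurfaceView carrying the camera preview (drawn underneath).
SurfaceView preview = new SurfaceView(this);
root.addView(preview, new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.MATCH_PARENT,
        RelativeLayout.LayoutParams.MATCH_PARENT));

// Second child: the VideoView, which appears "on top of" the preview.
VideoView video = new VideoView(this);
root.addView(video, new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.WRAP_CONTENT,
        RelativeLayout.LayoutParams.WRAP_CONTENT));

setContentView(root);
```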
I am working on a project that reads a video file from the SD card, then processes the frames and redisplays them as a video in real time. So far I haven't managed to find a way to directly extract frames from the MediaPlayer, like a MediaPlayer.getCurrentFrame(). MediaMetadataRetriever.getFrameAtTime() is super slow; it's difficult to get a decent frame rate with it.
The only thing I have right now is using a TextureView surface with a MediaPlayer. I start the MediaPlayer and in real time read the bitmap from the TextureView with TextureView.getBitmap(), then process the Bitmap and display it on another ImageView. This gives me a decent frame rate.
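For illustration, that workaround boils down to something like this sketch (Java), assuming player, textureView and imageView already exist, and processFrame() stands in for the actual processing:

```java
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
        player.setSurface(new Surface(st)); // route decoded frames into the TextureView
        player.start();
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture st) {
        // Invoked once per new frame: grab it, process it, show the result.
        Bitmap frame = textureView.getBitmap();
        imageView.setImageBitmap(processFrame(frame)); // processFrame() is hypothetical
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture st, int width, int height) {}

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
});
```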
The problem here is that the TextureView has to be in the XML layout and visible, which I do not want.
Can someone please shed some light here? Is it possible to somehow hide the TextureView that is attached to the MediaPlayer, without fake-hiding it using RelativeLayouts :)? iOS has a solution for this, AVPlayerItemVideoOutput; I need something like that on Android.
Or is there any other workaround to extract frames from a video file?
Thank you
For video processing you can use the FFmpeg library to get frames out of videos, but for that you need knowledge of Android native integration (the NDK).
I hope this will help you.
I work on a project that requires exact seeking in a video because the system needs to be synchronized with other devices. The OS used for video playback is Android. So far I have used the MediaPlayer class, but depending on the number of key frames, seeking is highly inaccurate.
So my next idea is to cache decoded images and wrap my own playback class around them. So far I understand how to use the MediaExtractor and MediaCodec classes to decode videos manually. The android.media.ImageReader class seems to be exactly what I want.
But what I do not understand is how to render such an android.media.Image manually once I've got it. I'd like to avoid doing the YUV to RGB conversion manually; instead, a preferred method would be to put such an Image into a Surface or copy it to a SurfaceTexture somehow.
Please take a look here.
To support use cases where videos playing on several devices need to stay in sync, this player performs exact seeks.
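For reference, an exact seek with the MediaExtractor/MediaCodec pair mentioned in the question can be sketched roughly like this (Java, API 21+), assuming extractor is set up on the video track and decoder is configured with an output Surface, which sidesteps the manual YUV-to-RGB conversion; error and end-of-stream handling are mostly omitted:

```java
void seekExact(MediaExtractor extractor, MediaCodec decoder, long targetUs) {
    // Decoding must start at a key frame, so jump to the closest preceding one.
    extractor.seekTo(targetUs, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
    decoder.flush();

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean done = false;
    while (!done) {
        int in = decoder.dequeueInputBuffer(10000);
        if (in >= 0) {
            int size = extractor.readSampleData(decoder.getInputBuffer(in), 0);
            if (size >= 0) {
                decoder.queueInputBuffer(in, 0, size, extractor.getSampleTime(), 0);
                extractor.advance();
            } else {
                break; // end of stream before reaching the target
            }
        }
        int out = decoder.dequeueOutputBuffer(info, 10000);
        if (out >= 0) {
            // Render only the frame at (or just past) the target; drop the rest.
            boolean render = info.presentationTimeUs >= targetUs;
            decoder.releaseOutputBuffer(out, render);
            done = render;
        }
    }
}
```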
I am trying to create an app with the following features:
normal video playback
slower video playback
frame by frame
reverse video playback (normal, slower, frame by frame)
seekable to specific times
video scrubbing
no video sound needed
video is recorded via the device's camera
The closest comparable apps would be Ubersense Coach for iOS and Coach's Eye on Android, though there are a few others, and they all have these features.
I have looked into several choices so far:
First the built in Android Media Player, which can't really do anything I need.
Then the MediaExtractor/decoder, looking through the code for Grafika (https://github.com/google/grafika), which can't play backwards.
Tried to pull out each frame as needed with MediaMetadataRetriever, which is too slow (100 ms per frame), and the GC gets in the way.
Looked for a library that could potentially solve the issue, without luck so far.
With MediaExtractor I already have the ability to play video back, forward frame by frame or at full speed. But I do not have that luxury in reverse, and seeking takes some time since I need it to be free of artifacts.
When trying to go in reverse, even seeking to a previous sync frame and then advancing to the frame just before the current one is not doable without huge lag (as expected).
I am wondering if there is a better codec I could use, or a library I have yet to stumble upon. And would rather avoid having to create something custom in native code if possible.
Thanks in advance
Michael
This is not a question about playing two separate videos in two separate VideoViews on the one activity.
I have been asked to see if it is possible to create an activity with a single VideoView. When the user opens the Activity, they are asked to select a base video and then select a second video. Both videos will be playing in the one VideoView at the same time, but the base video will have an alpha of 255 and the second video will have an alpha of 150.
For testing though, video files located on the phone will do.
At this time, I have only been able to create an activity that plays a single video in a VideoView.
I thought that if I created a custom VideoView class I could override the onDraw function, somehow grab the video frame from the second video, apply the alpha, and then redraw it over the first VideoView's canvas, but I do not know where to start.
My other concern with this approach is the amount of memory required to play two videos at once in the one VideoView, as well as the processing required to apply the alpha and redraw each frame seamlessly without affecting the performance or playback of the video.
I'm not sure where to start or how best to approach this and if possible, was hoping for some guidance as to either methods or objects to use.
I'm developing a demo application to show the client on an Android 2.2 system using Eclipse. I'm not looking to target any higher systems at this time as the demo phone runs Android 2.2.
I'm not entirely sure why you would want to use a VideoView like that. A VideoView uses only one MediaPlayer, and using it to sync one video on top of another would probably require a very kludgey implementation of two MediaPlayers through the same VideoView subclass, rendering on the same surface.
Take a look at the source code to see how a MediaPlayer renders a video inside of a VideoView and how MediaController controls playback. You can probably hack around in there to have two MediaPlayers point at once to the same VideoView/SurfaceView. Alternatively you could probably subclass MediaPlayer to handle multiple data sources.
Doing either of these things is counter to what VideoView and MediaPlayer are built for, and performance is going to take a huge hit.
If using a VideoView is not a hard requirement, then my suggestion would be to use an existing video library like ffmpeg, which would be easier and more performant than rewriting the base media classes (caveat: using ffmpeg will require the NDK; I suggest using an existing ffmpeg wrapper to save time).
Once ffmpeg is added to your project, applying the secondary video as an OverlayVideoFilter would be fairly easy, and should allow you to layer one video on top of the other (though controlling playback simultaneously might be a challenge left for you).
The correct path to take probably depends on what you want to do with the compound video once you get it (export the video as a single video, control playback, etc.).
Playing two videos in a single VideoView is not possible, because VideoView is in reality an extended SurfaceView, which is both outdated and never worked super well to begin with (more on this at the bottom).
I don't know why you have a hard requirement on using a VideoView, as it is very simplistic, and will not give you what you need.
If your requirement for VideoView is because you want to keep the media controls and playback functionality, you're better off making a custom View. Extend LinearLayout and add two SurfaceViews to it, each with a weight of 1. Copy the contents of VideoView.java into your new View, and modify it to handle two SurfaceViews playing two videos synchronously.
You're actually better off using TextureViews instead of SurfaceViews, which were added in API 14. TextureView rectifies many of SurfaceView's shortcomings and will handle things like animations better than a VideoView will.
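A minimal sketch of that TextureView approach (Java, API 14+): two TextureViews stacked in a FrameLayout inside an Activity, the top one at alpha 150/255, each fed by its own MediaPlayer. All names are illustrative and the synchronization logic is left out:

```java
private void playLayered(Uri baseUri, Uri overlayUri) {
    FrameLayout root = new FrameLayout(this);
    TextureView base = new TextureView(this);
    TextureView overlay = new TextureView(this);
    overlay.setAlpha(150f / 255f); // second video at alpha 150 of 255

    root.addView(base, new FrameLayout.LayoutParams(
            FrameLayout.LayoutParams.MATCH_PARENT, FrameLayout.LayoutParams.MATCH_PARENT));
    root.addView(overlay, new FrameLayout.LayoutParams(
            FrameLayout.LayoutParams.MATCH_PARENT, FrameLayout.LayoutParams.MATCH_PARENT));
    setContentView(root);

    attach(base, baseUri);
    attach(overlay, overlayUri);
}

// Start a MediaPlayer on a TextureView once its surface is ready.
private void attach(TextureView view, final Uri uri) {
    view.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
            MediaPlayer mp = new MediaPlayer();
            try {
                mp.setDataSource(getApplicationContext(), uri);
                mp.setSurface(new Surface(st));
                mp.prepare();
                mp.start();
            } catch (IOException e) {
                mp.release();
            }
        }
        @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
        @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
        @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {}
    });
}
```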
Is it possible to record video with an overlay view? While recording the video, I display a small image on an overlay view. What I want is for that overlay image to be recorded along with the video, so that when I open the recorded video, I can see the overlaid image in it.
Friends, I need this solution ASAP. Please suggest a proper solution :)
Unfortunately, there is no way in the current Android API to get between the camera input and the encoder. Any solution would involve capturing frames from the video source, overlaying the additional image, and then encoding the captured frames yourself. Even in native code with NEON optimizations on a fast system, this is going to be a slow process. Alternatively, the whole stream could be post-processed in a similar fashion, but this would also require a decoder.
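As an illustration of the overlay step, compositing the image onto a captured frame before handing it to the encoder can be as simple as this sketch (Java); the (10, 10) position is arbitrary:

```java
static Bitmap composite(Bitmap frame, Bitmap overlay) {
    // Work on a mutable copy so the decoded frame itself is untouched.
    Bitmap out = frame.copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(out);
    canvas.drawBitmap(overlay, 10f, 10f, null); // draw the overlay image on top
    return out;
}
```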
For future reference: This is possible using the CameraView library, at least in "snapshot video" mode.