I want to play (render to surface) two or more consecutive mp4 video sequences (each stored in a separate file on my device and maybe not present at startup) on my Android device in a smooth (no stalls, flicker, etc.) manner. So, the viewer might get the impression of watching only one continuous video. In a first step it would be sufficient to achieve this only for my Nexus 7 tablet.
For displaying only one video I have been using the MediaCodec API in a similar way to http://dpsm.wordpress.com/2012/07/28/android-mediacodec-decoded/ and it works fine. If the second decoder is only created (and configured) after the first sequence has finished (i.e. after decoder.stop and decoder.release have been called on the first one), the transition between the two sequences is clearly visible. For a smooth fade between two different video sequences I was thinking about an init step in which the second video is already initialized via decoder.configure(format, surface, null, 0) during playback of the first one, and its first frame is already queued via decoder.queueInputBuffer.
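Simplified, the initialization of the second decoder looks roughly like this (secondFilePath and surface stand in for my actual variables; error handling and the feed/drain loop of the first decoder are omitted):

MediaExtractor extractor2 = new MediaExtractor();
extractor2.setDataSource(secondFilePath);            // second clip, already on the device
extractor2.selectTrack(0);                           // assuming track 0 is the video track
MediaFormat format = extractor2.getTrackFormat(0);

MediaCodec decoder2 = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
decoder2.configure(format, surface, null, 0);        // same Surface the first decoder renders to
decoder2.start();

// Pre-queue the first encoded frame so it is ready to be rendered immediately.
int inIndex = decoder2.dequeueInputBuffer(10000);
if (inIndex >= 0) {
    ByteBuffer inputBuffer = decoder2.getInputBuffers()[inIndex];
    int sampleSize = extractor2.readSampleData(inputBuffer, 0);
    decoder2.queueInputBuffer(inIndex, 0, sampleSize, extractor2.getSampleTime(), 0);
}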
But doing so results in the following errors:
01-13 16:20:37.182: E/BufferQueue(183): [SurfaceView] connect: already connected (cur=3, req=3)
01-13 16:20:37.182: E/MediaCodec(9148): native_window_api_connect returned an error: Invalid argument (-22)
01-13 16:20:37.182: E/Decoder Init(9148): Exception decoder.configure: java.lang.IllegalStateException
It seems that one surface can only be used by one decoder at a time. So, is there another way to achieve this? Maybe using OpenGL?
Best,
Alex.
What you describe with using multiple instances of MediaCodec will work, but you can only have one "producer" connected to a Surface at a time. You'd need to tear down the first before you can proceed with the second, and I'm not sure how close you can get the timing.
What you can do instead is decode to a SurfaceTexture, and then draw that on the SurfaceView (using, as you thought, OpenGL).
You can see an example of rendering MP4 files to a SurfaceTexture in the ExtractMpegFramesTest example. From there you just need to render the texture to your surface (SurfaceView? TextureView?), using something like the STextureRender class in CameraToMpegTest.
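Condensed, the plumbing could look something like this (the EGL setup and the quad-drawing code, e.g. a helper along the lines of STextureRender, are assumed to exist already; decoder/format are the MediaCodec and MediaFormat you already have, and mFrameAvailable/mSTMatrix are illustrative names):

// Create a GL texture of the "external" type and wrap it in a SurfaceTexture.
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(textures[0]);
surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        mFrameAvailable = true;    // wake up the render loop
    }
});

// The decoders render into this Surface instead of the SurfaceView's Surface,
// so tearing one down and starting the next never touches the display directly.
Surface decoderSurface = new Surface(surfaceTexture);
decoder.configure(format, decoderSurface, null, 0);

// In the render loop, after releaseOutputBuffer(index, true) for a frame:
surfaceTexture.updateTexImage();
surfaceTexture.getTransformMatrix(mSTMatrix);
// ...draw a quad textured with the external texture onto the EGL surface that
// wraps the SurfaceView's Surface, then swap buffers.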
There are some additional examples in Grafika, though the video player there is closer to what you already have (the decoder outputs to a TextureView).
Incidentally, you'll need to figure out how much of a delay to put between the last frame of movie N and the first frame of movie N+1. If the recordings were taken at fixed frame rates it's easy enough, but some sources (e.g. screenrecord) don't record that way.
Update: If you can guarantee that your movie clips have the same characteristics (size, encoding type -- essentially everything in MediaFormat), there's an easier way. You can flush() the decoder when you hit end-of-stream and just start feeding in the next file. I use this to loop video in the Grafika video player (see MoviePlayer#doExtract()).
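A rough sketch of that, assuming decoder/extractor/bufferInfo are the usual objects from the feed/drain loop and nextClipPath/videoTrackIndex are placeholders for your own file handling:

if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
    decoder.flush();                        // decoder keeps its configuration and Surface
    extractor.release();
    extractor = new MediaExtractor();
    extractor.setDataSource(nextClipPath);  // next clip with identical MediaFormat
    extractor.selectTrack(videoTrackIndex);
    // ...then continue the normal feed/drain loop with the new extractor.
}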
Crazy idea: try merging the two videos into one. They're both on your device, so it shouldn't take too long. And you can implement the fade effect yourself.
Related
I have a use case where video from a MediaPlayer has to be delivered to two Surfaces. Unfortunately, the whole Android Surface API seems to lack that functionality (or at least, after studying the developer site, I'm unable to find it).
I've had a similar use case where the video was produced by a custom camera module, but after a slight modification I was able to retrieve a Bitmap from the camera, so I just used lockCanvas, drawBitmap and unlockCanvasAndPost on two Surfaces. With MediaPlayer, I don't know how to retrieve a Bitmap and keep playback running with proper timing.
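For the camera case, the drawing part boiled down to roughly this (surfaceOne/surfaceTwo are the two target Surfaces and frameBitmap is the Bitmap retrieved from the camera):

for (Surface surface : new Surface[] { surfaceOne, surfaceTwo }) {
    Canvas canvas = surface.lockCanvas(null);      // null = lock the whole surface
    canvas.drawBitmap(frameBitmap, 0f, 0f, null);
    surface.unlockCanvasAndPost(canvas);
}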
Also, I've tried to use Allocation for that purpose, with one Allocation serving as USAGE_IO_INPUT, two as USAGE_IO_OUTPUT, and the ioReceive, copyFrom and ioSend methods. But it was also a dead end. For some unknown reason, the RenderScript engine is very unstable on my platform; I've had numerous errors like:
android.renderscript.RSInvalidStateException: Calling RS with no Context active.
when context passed to RenderScript.create was this from Application class, or
Failed loading RS driver: dlopen failed: could not locate symbol .... falling back to default
(I've lost the full log somewhere...). In the end, I was not able to create a proper input Allocation type compatible with MediaPlayer. Due to the mentioned flaws of RenderScript on my platform, I would consider it a last resort for solving this issue.
So, in conclusion: how can I play video (from an mp4 file) to two Surfaces, keeping them in sync? And, as a more generic question, how can I play video to an arbitrary number of Surfaces that can be dynamically added and removed during playback?
I've resolved my issue by having multiple instances of MediaPlayer with the same video file as the source. For basic player operations like pause/play/seek, I simply apply them to every player.
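Roughly, it comes down to something like this (videoPath, targetSurfaces and positionMs are placeholders for my actual source, target Surfaces and seek position; exception handling omitted):

List<MediaPlayer> players = new ArrayList<MediaPlayer>();

for (Surface surface : targetSurfaces) {
    MediaPlayer player = new MediaPlayer();
    player.setDataSource(videoPath);       // same mp4 file for every instance
    player.setSurface(surface);
    player.prepare();
    players.add(player);
}

// Basic operations are simply fanned out to every instance.
for (MediaPlayer p : players) p.start();
// ...
for (MediaPlayer p : players) p.seekTo(positionMs);
for (MediaPlayer p : players) p.pause();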
I'd like to display decoded video frames from MediaCodec out of order, or omit frames, or show frames multiple times.
I considered configuring MediaCodec to use a Surface, calling MediaCodec.dequeueOutputBuffer() repeatedly, saving the resulting buffer indices, and then later calling MediaCodec.releaseOutputBuffer(desired_index, true), but there doesn't seem to be a way to increase the number of output buffers, so I might run out of output buffers if I'm dealing with a lot of frames to be rearranged.
One idea I'm considering is to use glReadPixels() to read the pixel data into a frame buffer, convert the color format appropriately, then copy it to a SurfaceView when I need the frame displayed. But this seems like a lot of copying (and color format conversion) overhead, especially when I don't inherently need to modify the pixel data.
So I'm wondering if there is a better, more performant way. Perhaps there is a way to configure a different Surface/Texture/Buffer for each decoded frame, and then a way to tell the SurfaceView to display a specific Surface/Texture/Buffer (without having to do a memory copy). It seems like there must be a way to accomplish this with OpenGL, but I'm very new to OpenGL and could use recommendations on areas to investigate. I'll even go NDK if I have to.
So far I've been reviewing the Android docs, and fadden's bigflake and Grafika. Thanks.
Saving copies of lots of frames could pose a problem when working with higher-resolution videos and higher frame counts. A 1280x720 frame, saved in RGBA, will be 1280x720x4 = 3.5MB. If you're trying to save 100 frames, that's 1/3rd of the memory on a 1GB device.
If you do want to go this approach, I think what you want to do is attach a series of textures to an FBO and render to them to store the pixels. Then you can just render from the texture when it's time to draw. Sample code for FBO rendering exists in Grafika (it's one of the approaches used in the screen recording activity).
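A bare-bones GLES 2.0 version of that setup could look like this (one texture plus framebuffer per stored frame; width/height are the video dimensions, and error checking is omitted):

int[] tex = new int[1];
int[] fbo = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
        width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);

// Draw the decoder's frame into the FBO here, then rebind the default
// framebuffer (0) and later draw tex[0] whenever that frame should be shown.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);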
Another approach is to seek around in the decoded stream. You need to seek to the nearest sync frame before the frame of interest (either by asking MediaExtractor to do it, or by saving off encoded data with the BufferInfo flags) and decode until you reach the target frame. How fast this is depends on how many frames you need to traverse, the resolution of the frames, and the speed of the decoder on your device. (As you might expect, stepping forward is easier than stepping backward. You may have noticed a similar phenomenon in other video players you've used.)
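Sketched out, the seek approach looks roughly like this (the input-feeding half of the loop is elided; targetUs is the presentation time of the frame you want, and bufferInfo is a MediaCodec.BufferInfo):

extractor.seekTo(targetUs, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
decoder.flush();                                     // drop any frames still in flight

boolean reachedTarget = false;
while (!reachedTarget) {
    // ...feed input buffers from the extractor as usual...
    int outIndex = decoder.dequeueOutputBuffer(bufferInfo, 10000);
    if (outIndex >= 0) {
        reachedTarget = bufferInfo.presentationTimeUs >= targetUs;
        // Render only the frame we are after; silently discard the ones before it.
        decoder.releaseOutputBuffer(outIndex, reachedTarget);
    }
}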
Don't bother with glReadPixels(). Generally speaking, if decoded data is directly accessible from your app, you're going to take a speed hit (more so on some devices than others). Also, the number of buffers used by the MediaCodec decoder is somewhat device-dependent, so I wouldn't count on having more than 4 or 5.
I'm trying to use a C library (Aubio) to perform beat detection on some music playing from a MediaPlayer in Android. To capture the raw audio data, I'm using a Visualizer, which sends a byte buffer at regular intervals to a callback function, which in turn sends it to the C library through JNI.
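The capture is wired up more or less like this (processWaveform() stands in for my JNI call into Aubio and is not a real method name):

Visualizer visualizer = new Visualizer(mediaPlayer.getAudioSessionId());
visualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]);      // maximum capture size
visualizer.setDataCaptureListener(new Visualizer.OnDataCaptureListener() {
    @Override
    public void onWaveFormDataCapture(Visualizer v, byte[] waveform, int samplingRate) {
        processWaveform(waveform, samplingRate);     // hands the buffer to the C library
    }

    @Override
    public void onFftDataCapture(Visualizer v, byte[] fft, int samplingRate) {
        // not used
    }
}, Visualizer.getMaxCaptureRate(), true, false);     // waveform only, at the maximum rate
visualizer.setEnabled(true);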
I'm getting inconsistent results (i.e. almost no beats are detected, and the few that are detected don't really line up with the audio). I've checked multiple times, and while I can't entirely rule out a mistake on my side, I'm wondering how exactly the Android Visualizer behaves, since the documentation is not explicit about it.
If I set the buffer size using setCaptureSize, does that mean that the captured buffer is averaged over the complete audio samples? For instance, if I divide the capture size by 2, will it still represent the same captured sound, but with 2 times less precision on the time axis?
Is it the same with the capture rate? For instance, does setting twice the capture size with half the rate yield the same data?
Are the captures consecutive? To put it another way, if I take too long to process a capture, are the sounds played during the processing ignored when I receive the next capture?
Thanks for your insight!
Make sure the callback function receives the entire audio signal, for instance by counting the frames that come out of the player and comparing that with the number that reach the callback.
It would help to be pointed at the Visualizer documentation.
I want to try to do motion detection by comparing consecutive camera preview frames, and I'm wondering if I'm interpreting the android docs correctly. Tell me if this is right:
If I want the camera preview to use buffers I allocate myself, I have to call addCallbackBuffer() at least twice to get two separate buffers to compare.
Then I have to use the setPreviewCallbackWithBuffer() form of the callback so the preview will be filled in to the buffers I allocated.
Once I get to at least the 2nd callback, I can do whatever lengthy processing I like to compare the buffers, and the camera will leave me alone, not doing any more callbacks or overwriting my buffers, until I return the oldest buffer back to the camera by calling addCallbackBuffer() once again (and the newest buffer will sit around unchanged for me to use in the next callback for comparison).
That last one is the one I'm least clear on. I won't get errors or anything because it ran out of buffers, will I? It really will just silently drop preview frames and not do the callback?
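Put into code, my interpretation looks roughly like this (using the android.hardware.Camera API; mCamera is the opened camera and bufferSize is computed from the preview size and format, e.g. width * height * 3 / 2 for the default NV21 format):

mCamera.addCallbackBuffer(new byte[bufferSize]);
mCamera.addCallbackBuffer(new byte[bufferSize]);     // two buffers so there is something to compare

mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    private byte[] previousFrame;

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (previousFrame != null) {
            // ...lengthy comparison of previousFrame and data goes here; the camera
            // simply drops frames while it has no free buffer...
            camera.addCallbackBuffer(previousFrame); // hand the oldest buffer back
        }
        previousFrame = data;
    }
});
mCamera.startPreview();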
Well, I went and implemented the above algorithms and they actually worked, so I guess I was interpreting the docs correctly :-).
If anyone wants to see my heavily modified CameraPreview code that does this, it is on my web page at:
http://home.comcast.net/~tomhorsley/hardware/scanner/android-scanner.html
I'm looking for the best way (if any...) to capture continuous video to a circular buffer on the SD card, allowing the user to capture events after they have happened.
The standard video recording API allows you to just write directly to a file, and when you reach the limit (set by the user, or the capacity of the SD card) you have to stop and restart the recording. This creates up to a 2 second long window where the recording is not running. This is what some existing apps like DailyRoads Voyager already do. To minimize the chance of missing something important you can set the splitting time to something long, like 10 minutes, but if an event occurs near the end of this timespan you are wasting space by storing the 9 minutes of nothing at the beginning.
So, my idea for now is as follows: I'll have a large file that will serve as the buffer. I'll use some code I've found to capture the frames and save them to the file myself, wrapping around at the end. When the user wants to keep some part, I'll mark it by pointers to the beginning and end in the buffer. The recording can continue as before, skipping over regions that are marked for retention.
After the recording is stopped, or maybe while it is still running, on a background thread (depending on phone/card speed), I'll copy the marked region out to another file and remove the overwrite protection.
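To make the bookkeeping concrete, here is a very rough sketch of the buffer-with-protected-region idea (fixed capacity, a single protected region, and no handling of a region that wraps around the end):

class CircularFrameBuffer {
    private final RandomAccessFile file;
    private final long capacity;
    private long writePos = 0;
    private long markStart = -1, markEnd = -1;       // protected region, -1 = none

    CircularFrameBuffer(File f, long capacityBytes) throws IOException {
        file = new RandomAccessFile(f, "rw");
        file.setLength(capacityBytes);
        capacity = capacityBytes;
    }

    void writeFrame(byte[] frame) throws IOException {
        if (writePos + frame.length > capacity) {
            writePos = 0;                            // wrap around at the end of the file
        }
        if (markStart >= 0 && writePos < markEnd && writePos + frame.length > markStart) {
            writePos = markEnd;                      // skip over the region marked for keeping
        }
        file.seek(writePos);
        file.write(frame);
        writePos += frame.length;
    }

    void protect(long start, long end) { markStart = start; markEnd = end; }
    void clearProtection()             { markStart = -1; markEnd = -1; }
}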
Main question, if you don't care about the details above: I can't seem to find a way to convert the individual frames to a video file in the Android SDK. Is it possible? If not, are there any available libraries, maybe in native code, that can do this?
I don't really care about the big buffer of uncompressed frames, but the exported videos should be compressed in an Android-friendly format. But if there is a way to compress the buffer I would like to hear about it.
Thank you.
In Android's MediaRecorder there are two ways to specify the output: one is a filename and the other is a FileDescriptor.
Using the static method fromSocket of ParcelFileDescriptor, you can create an instance of ParcelFileDescriptor pointing to a socket. Then call getFileDescriptor to get the FileDescriptor to pass to the MediaRecorder.
Since you can read the encoded video from the socket (as if you were running a local web server), you will be able to access the individual frames of the video, although not directly, because you will need to decode the stream first.
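For instance, the wiring could look like this (the loopback port and the reader thread that accepts and consumes the stream are illustrative; the usual MediaRecorder setup such as setPreviewDisplay is omitted):

ServerSocket serverSocket = new ServerSocket(8888);  // a reader thread should accept() and consume the stream

Socket socket = new Socket("127.0.0.1", 8888);
ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(socket);

MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263);
recorder.setOutputFile(pfd.getFileDescriptor());     // the recorder writes into the socket
recorder.prepare();
recorder.start();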