Multiple Android Surface consumers - video playback to two SurfaceViews

I have this use case where video from a MediaPlayer has to be delivered to two Surfaces. Unfortunately, the whole Android Surface API lacks that functionality (or at least, after studying the developer site, I'm unable to find it).
I've had a similar use case where the video was produced by a custom camera module, but after a slight modification I was able to retrieve a Bitmap from the camera, so I just used lockCanvas, drawBitmap and unlockCanvasAndPost on two Surfaces. With MediaPlayer, I don't know how to retrieve a Bitmap while keeping playback at the proper timing.
Also, I've tried to use Allocation for that purpose, with one Allocation serving as USAGE_IO_INPUT, two as USAGE_IO_OUTPUT, and with the ioReceive, copyFrom and ioSend methods. But that was also a dead end. For some unknown reason, the RenderScript engine is very unstable on my platform; I've had numerous errors like:
android.renderscript.RSInvalidStateException: Calling RS with no Context active.
when the context passed to RenderScript.create was the one from the Application class, or
Failed loading RS driver: dlopen failed: could not locate symbol .... falling back to default
(I've lost the full log somewhere...). And in the end, I was not able to create a proper input Allocation Type compatible with MediaPlayer. Due to the mentioned flaws of RenderScript on my platform, I would consider it only a last resort for solving this issue.
So, in conclusion: how do I play video (from an mp4 file) to two Surfaces? The video has to stay in sync. And, as a more generic question, how do I play video to N Surfaces which can be dynamically added and removed during playback?

I've resolved my issue by having multiple instances of MediaPlayer with the same video file source. When doing basic player operations like pause/play/seek, I just apply them to every player.
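A minimal sketch of that approach, assuming two SurfaceViews whose SurfaceHolders are already valid (class and member names here are just illustrative, not from my actual code):

import java.io.IOException;

import android.media.MediaPlayer;
import android.view.SurfaceHolder;

// Illustrative helper: plays the same file on two Surfaces via two MediaPlayer instances.
public final class DualPlayback {
    private final MediaPlayer playerA = new MediaPlayer();
    private final MediaPlayer playerB = new MediaPlayer();

    public void start(String videoPath, SurfaceHolder holder1, SurfaceHolder holder2)
            throws IOException {
        playerA.setDataSource(videoPath);
        playerB.setDataSource(videoPath);

        playerA.setDisplay(holder1);   // first Surface consumer
        playerB.setDisplay(holder2);   // second Surface consumer

        playerA.prepare();             // synchronous prepare for brevity
        playerB.prepare();

        playerA.start();
        playerB.start();
    }

    // Every transport operation is applied to both players to keep them in sync.
    public void pause()        { playerA.pause();    playerB.pause(); }
    public void seekTo(int ms) { playerA.seekTo(ms); playerB.seekTo(ms); }
    public void release()      { playerA.release();  playerB.release(); }
}

Note that the sync is only approximate; the two players decode independently, so they can drift by a frame or two.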

Related

AVC HW encoder with MediaCodec Surface reliability?

I'm working on an Android app that uses MediaCodec to encode H.264 video using the Surface method. I am targeting Android 5.0, and I've followed all the examples and samples from bigflake.com (I started working on this project two years ago, so I kind of went through all the gotchas and other issues).
All is working nicely on a Nexus 6 (which uses the Qualcomm hardware encoder for this), and I'm able to record 1440p video with AAC audio flawlessly in real time, to a multitude of outputs (from local MP4 files up to HTTP streaming).
But when I try to use the app on a Sony Android TV (running Android 5.1) which uses a Mediatek chipset, all hell breaks loose, even at the encoding level. To be more specific:
It's basically impossible to make the hardware encoder ("OMX.MTK.VIDEO.ENCODER.AVC") work properly. With the most basic setup (which succeeds at the MediaCodec level), I will almost never get output buffers out of it, only weird, spammy logcat error messages stating that the driver has encountered errors each time a frame should be encoded, like this:
01-20 05:04:30.575 1096-10598/? E/venc_omx_lib: VENC_DrvInit failed(-1)!
01-20 05:04:30.575 1096-10598/? E/MtkOmxVenc: [ERROR] cannot set param
01-20 05:04:30.575 1096-10598/? E/MtkOmxVenc: [ERROR] EncSettingH264Enc fail
Sometimes, configuring it to encode at a 360 by 640 pixel resolution will succeed in making the encoder actually encode something, but the first problem I'll notice is that it only ever creates one keyframe: the first video frame. After that, no keyframes are ever created again, only P-frames. Of course, the i-frame interval was set to a decent value and works with no issues on other devices. Needless to say, this makes it impossible to create seekable MP4 files, or any kind of streamable solution on top.
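For reference, the Surface-input setup being described is roughly the following sketch (the bitrate/frame-rate values here are illustrative, not the exact ones from the app):

import java.io.IOException;

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

// Illustrative sketch of a basic Surface-input AVC encoder setup.
public final class SurfaceEncoder {
    public final MediaCodec codec;
    public final Surface inputSurface;

    public SurfaceEncoder(int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);   // request a keyframe every second

        codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        inputSurface = codec.createInputSurface();  // frames are fed by rendering into this Surface
        codec.start();
        // The caller renders into inputSurface and drains the encoder's output buffers as usual.
    }
}

Something along these lines is what works on the Nexus 6 but triggers the behaviour above on the Mediatek device.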
Most of the time, after releasing the encoder, logcat will start spamming endlessly with "Waiting for input frame to be released...", which basically requires a reboot of the device, since nothing will work from that point on anyway.
In the case where it doesn't go haywire after a simple release(), no problem: the hardware encoder makes sure it cannot be created a second time, and it falls back to the generic software Google AVC encoder, which of course is basically a mock encoder that does little more than spit out an error when trying to make it encode anything larger than 160p video...
So, my question is: is there any hope of making this MediaCodec API actually work on such a device? My understanding was that there are CTS tests performed by Google/manufacturers (in this case, Sony) that would allow a developer to assume an API is supported on a device that prides itself on running Android 5.1. Am I missing something obvious here? Has anyone actually tried doing this (a simple MediaCodec video encoding test) and succeeded? It's really frustrating!
PS: it's worth mentioning that not even Sony provides a recording capability for this TV set yet, which many people are complaining about anyway. So my guess is that this is more of a Mediatek problem, but still, what exactly is the Android CTS for in this case anyway?

Smooth playback of consecutive mp4 video sequences on Android

I want to play (render to surface) two or more consecutive mp4 video sequences (each stored in a separate file on my device and maybe not present at startup) on my Android device in a smooth (no stalls, flicker, etc.) manner. So, the viewer might get the impression of watching only one continuous video. In a first step it would be sufficient to achieve this only for my Nexus 7 tablet.
For displaying only one video I have been using the MediaCodec API in a similar way to http://dpsm.wordpress.com/2012/07/28/android-mediacodec-decoded/ and it works fine. By only creating (and configuring) the second decoder after the first sequence has finished (decoder.stop() and decoder.release() of the first one have been called), the transition between the sequences is visible. For a smooth fade between two different video sequences I was thinking of an init step where the second video is already initialized via decoder.configure(format, surface, null, 0) during playback of the first one, and its first frame is already queued via decoder.queueInputBuffer.
But doing so results in the following error:
01-13 16:20:37.182: E/BufferQueue(183): [SurfaceView] connect: already connected (cur=3, req=3)
01-13 16:20:37.182: E/MediaCodec(9148): native_window_api_connect returned an error: Invalid argument (-22)
01-13 16:20:37.182: E/Decoder Init(9148): Exception decoder.configure: java.lang.IllegalStateException
It seems to me that a Surface can only be used by one decoder at a time. So, is there any other way of doing this? Maybe using OpenGL?
Best,
Alex.
What you describe with using multiple instances of MediaCodec will work, but you can only have one "producer" connected to a Surface at a time. You'd need to tear down the first before you can proceed with the second, and I'm not sure how close you can get the timing.
What you can do instead is decode to a SurfaceTexture, and then draw that on the SurfaceView (using, as you thought, OpenGL).
You can see an example of rendering MP4 files to a SurfaceTexture in the ExtractMpegFramesTest example. From there you just need to render the texture to your surface (SurfaceView? TextureView?), using something like the STextureRender class in CameraToMpegTest.
There are some additional examples in Grafika, though the video player there is closer to what you already have (the decoder outputs to a TextureView).
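A rough outline of that plumbing, with the EGL context setup and the textured-quad drawing omitted (the render() call stands in for something like the STextureRender class mentioned above):

import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.os.Handler;
import android.view.Surface;

// Sketch: route decoder output through a SurfaceTexture so frames can be drawn with GL
// onto the SurfaceView. EGL setup and the actual textured-quad draw are omitted.
public final class DecoderToTexture {
    private SurfaceTexture surfaceTexture;
    private int textureId;

    // Must be called on a thread that has a current EGL context.
    public Surface createDecoderSurface(final Handler glHandler) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        textureId = tex[0];

        surfaceTexture = new SurfaceTexture(textureId);
        surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
            @Override
            public void onFrameAvailable(SurfaceTexture st) {
                glHandler.post(new Runnable() {
                    @Override public void run() { drawFrame(); }
                });
            }
        });
        // Hand this Surface to MediaCodec#configure(...) instead of the SurfaceView's own Surface.
        return new Surface(surfaceTexture);
    }

    private void drawFrame() {
        surfaceTexture.updateTexImage();       // latch the newest decoded frame into the OES texture
        // render(textureId, surfaceTexture);  // hypothetical: draw a textured quad to the
        //                                     // SurfaceView's EGL window surface
        // ...then eglSwapBuffers(...) to present it.
    }
}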
Incidentally, you'll need to figure out how much of a delay to put between the last frame of movie N and the first frame of movie N+1. If the recordings were taken at fixed frame rates it's easy enough, but some sources (e.g. screenrecord) don't record that way.
Update: If you can guarantee that your movie clips have the same characteristics (size, encoding type -- essentially everything in MediaFormat), there's an easier way. You can flush() the decoder when you hit end-of-stream and just start feeding in the next file. I use this to loop video in the Grafika video player (see MoviePlayer#doExtract()).
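A rough sketch of that flush-and-continue idea (getNextExtractor() is a hypothetical helper that opens the next clip and selects its video track; error handling is omitted):

import java.io.IOException;

import android.media.MediaCodec;
import android.media.MediaExtractor;

// Sketch: when the current clip runs out, flush the decoder and keep feeding it from the
// next file. This only works if the clips share the same MediaFormat.
public final class ClipChainer {
    private static final long TIMEOUT_US = 10_000;

    // Feeds one sample; returns the extractor to keep using (a new one after a clip switch).
    MediaExtractor feedOneSample(MediaCodec decoder, MediaExtractor extractor) throws IOException {
        int inputIndex = decoder.dequeueInputBuffer(TIMEOUT_US);
        if (inputIndex < 0) {
            return extractor;                     // no input buffer free right now
        }
        int size = extractor.readSampleData(decoder.getInputBuffer(inputIndex), 0);
        if (size < 0) {
            // In a real player you would first drain any pending output frames before flushing.
            decoder.flush();                      // invalidates previously dequeued buffer indices
            extractor.release();
            extractor = getNextExtractor();       // hypothetical helper
            inputIndex = decoder.dequeueInputBuffer(TIMEOUT_US);
            if (inputIndex < 0) {
                return extractor;
            }
            size = extractor.readSampleData(decoder.getInputBuffer(inputIndex), 0);
            if (size < 0) {
                return extractor;                 // next clip is empty; nothing to queue
            }
        }
        decoder.queueInputBuffer(inputIndex, 0, size, extractor.getSampleTime(), 0);
        extractor.advance();
        return extractor;
    }

    private MediaExtractor getNextExtractor() throws IOException {
        throw new UnsupportedOperationException("placeholder: open the next clip here");
    }
}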
Crazy idea: try merging the two videos into one. They're both on your device, so it shouldn't take too long. And you can implement the fade effect yourself.

Android 4.2 with 4 MediaPlayers = "Can't play this video"

Whenever I try to load at least 4 MediaPlayers, one of them will corrupt the video it's trying to load and trigger the Android OS message "Can't play this video".
Other information:
For 3 MediaPlayers everything works fine.
On other Android versions, different from 4.2, the same code with the same 4 videos works.
The 4 videos can be played independently on the device. There is no format problem.
After starting the program and getting the "Can't play this video" message, the video can no longer be played in any other application unless I reset the device.
I tried this both with VideoViews and with independent MediaPlayers displayed on SurfaceViews.
I replicated the error on more devices running Android 4.2.
On Android 4.1.2 and other Android 4 versions (I don't recall exactly which), the code worked fine.
On Android, the idea is that everything related to media codecs is hidden from the developer, who has to use a single, consistent API: MediaPlayer.
When you play media, whether it's a stream or something located on external storage, the low-level codecs/parsers are instantiated every time an application needs them.
However, it turns out that, for particular reasons related to hardware decoding, some codecs cannot be instantiated more than a certain number of times. As a matter of fact, every application must release resources (codec instances, for instance) when it no longer needs them, by calling MediaPlayer.release() in a valid state.
In fact, what I'm saying is illustrated in the documentation of release() on the Android Developers website:
Releases resources associated with this MediaPlayer object. It is considered good practice to call this method when you're done using the MediaPlayer. In particular, whenever an Activity of an application is paused (its onPause() method is called), or stopped (its onStop() method is called), this method should be invoked to release the MediaPlayer object, unless the application has a special need to keep the object around. In addition to unnecessary resources (such as memory and instances of codecs) being held, failure to call this method immediately if a MediaPlayer object is no longer needed may also lead to continuous battery consumption for mobile devices, and playback failure for other applications if no multiple instances of the same codec are supported on a device. Even if multiple instances of the same codec are supported, some performance degradation may be expected when unnecessary multiple instances are used at the same time.
So, either you are not calling release() when you are done playing back, or another app is holding a reference to this kind of resource.
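In practice that means something like this in every Activity that owns a player (mediaPlayer is just an illustrative field name):

import android.app.Activity;
import android.media.MediaPlayer;

// Sketch: release the codec-backed resources as soon as the Activity leaves the foreground.
public class PlayerActivity extends Activity {
    private MediaPlayer mediaPlayer;   // created and prepared elsewhere (e.g. in onResume)

    @Override
    protected void onPause() {
        super.onPause();
        if (mediaPlayer != null) {
            mediaPlayer.release();     // hand the codec instance back to the system
            mediaPlayer = null;        // must be re-created and re-prepared on resume
        }
    }
}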
EDIT:
If you need to render several videos in the same Activity, you have a few options. As I said in my answer, what you originally wanted is not possible because of low-level issues; nor is it possible on iOS, by the way.
What you can try instead is:
If the media you are playing is not real-time streamed content, you could combine the 4 videos into a single one using one of the widely available free video editors. Then render that video full screen in your Activity; it will look like you have 4 Views.
If they are real-time/non-recorded content, keep the first video as is. I assume every video is encoded with the same codec/container. What you might try is transcoding the 3 other videos so they use a different codec and a different container. Make sure you transcode to a codec/container that is supported by Android. This could force Android to use different decoders at the same time. I think this is overkill compared to the result you're expecting.
Lastly, you could use a different backend for decoding, such as MediaPlayer + FFmpeg or just FFmpeg. But again, even if it works, this will be, I think, huge overkill.
To sum this up, you have to make compromises in order for this to work.

How can I retrieve the timestamp of a video frame as it's being recorded?

So I've been trying to figure out a way to get the timestamp of a video frame as it's being recorded.
All of the samples online and in the API documentation tell you to use MediaRecorder to record video from the camera. The problem is that no timestamp data is returned, nor is there a callback called when it records a frame.
I started investigating the Camera app in the Android 4.2 source code, and was able to make significant progress on this. I successfully recorded video from the camera, saving frame timestamps to a file when the SurfaceTexture's onFrameAvailable listener was called, since SurfaceTexture has a timestamp property.
Upon further investigation, though, I figured out that I was receiving these callbacks when the frame was being displayed, not when it was recorded. In hindsight, this makes a lot of sense and I should have spotted it earlier.
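For reference, the listener-based approach described above looks roughly like this (and, as noted, the timestamp it yields tracks display, not capture):

import android.graphics.SurfaceTexture;
import android.util.Log;

// Sketch of the approach described above: log the SurfaceTexture timestamp whenever a
// frame arrives. As noted, these callbacks fire as frames are displayed, not as they are recorded.
public final class FrameTimestampLogger implements SurfaceTexture.OnFrameAvailableListener {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        // getTimestamp() reflects the frame latched by the most recent updateTexImage(),
        // which must be called on the thread that owns the GL texture.
        long timestampNs = st.getTimestamp();
        Log.d("FrameTimestampLogger", "frame timestamp (ns): " + timestampNs);
        // ...or append timestampNs to a file alongside the recording.
    }
}
// Usage (illustrative): surfaceTexture.setOnFrameAvailableListener(new FrameTimestampLogger());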
In any case, I continued further into the Camera app and started looking at the EffectsRecorder class it contains. I found that it uses some undocumented APIs via the filter framework to allow the use of OpenGL shaders. For my use case this is helpful, so I will likely continue down this path. I'm still trying to work my way through it to get it recording video correctly, but I suspect I will still only get timestamps at display time rather than at recording time.
Looking further, I see that it uses a filter that has a MediaRecorder class underneath it, using an undocumented second video source other than the camera, called GRALLOC_BUFFER = 2. This is even more intriguing and useful, but ultimately I still need the timestamp of the frame being recorded.
I'm appealing to Stack Overflow in the hope that someone in the community has encountered this. I simply need a pointer to where to look. I can provide source code, since most of it is just lifted from AOSP, but I haven't yet, simply because I'm not sure what would be relevant.
Help is greatly appreciated.

Video recording to a circular buffer on Android

I'm looking for the best way (if any...) to capture continuous video to a circular buffer on the SD card, allowing the user to capture events after they have happened.
The standard video recording API only lets you write directly to a file; when you reach the limit (set by the user, or the capacity of the SD card) you have to stop and restart the recording. This creates a window of up to 2 seconds during which nothing is recorded. This is what some existing apps, like DailyRoads Voyager, already do. To minimize the chance of missing something important you can set the splitting time to something long, like 10 minutes, but if an event occurs near the end of that timespan you waste space by storing the 9 minutes of nothing at the beginning.
So, my idea for now is as follows: I'll have a large file that will serve as the buffer. I'll use some code I've found to capture the frames and save them to the file myself, wrapping around at the end. When the user wants to keep some part, I'll mark it with pointers to its beginning and end in the buffer. The recording can then continue as before, skipping over regions that are marked for retention.
After the recording is stopped, or perhaps while it is still running on a background thread (depending on phone/card speed), I'll copy the marked region out to another file and remove the overwrite protection.
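A minimal sketch of the wrap-around writing described above, assuming fixed-size frames and ignoring the retained-region bookkeeping:

import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of the circular buffer file: a preallocated file that wraps to the start when full.
// Skipping regions marked for retention is left out for brevity.
public final class CircularFrameBuffer {
    private final RandomAccessFile file;
    private final long capacityBytes;
    private long writePos = 0;

    public CircularFrameBuffer(String path, long capacityBytes) throws IOException {
        this.file = new RandomAccessFile(path, "rw");
        this.file.setLength(capacityBytes);       // preallocate the buffer on the SD card
        this.capacityBytes = capacityBytes;
    }

    // Writes one frame, wrapping to the start of the file when the end is reached.
    public synchronized void writeFrame(byte[] frameData) throws IOException {
        if (writePos + frameData.length > capacityBytes) {
            writePos = 0;                         // wrap around and overwrite the oldest data
        }
        file.seek(writePos);
        file.write(frameData);
        writePos += frameData.length;
    }
}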
Main question, if you don't care about the details above: I can't seem to find a way to convert the individual frames to a video file in the Android SDK. Is it possible? If not, are there any available libraries, maybe in native code, that can do this?
I don't really care about the big buffer of uncompressed frames, but the exported videos should be compressed in an Android-friendly format. If there is a way to compress the buffer itself, though, I would like to hear about it.
Thank you.
In Android's MediaRecorder there are two ways to specify the output: one is a filename, the other a FileDescriptor.
Using the static method fromSocket of ParcelFileDescriptor, you can create a ParcelFileDescriptor instance pointing to a socket. Then call getFileDescriptor on it to get the FileDescriptor to pass to the MediaRecorder.
Since you can then read the encoded video from the socket (as if you were creating a local web server), you will be able to access individual frames of the video, although not directly, because you will need to decode it first.
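Roughly, the plumbing described here looks like the sketch below (the receiving end of the socket, where you would parse and decode the stream, is omitted; camera/preview setup is also skipped):

import java.io.IOException;
import java.net.Socket;

import android.media.MediaRecorder;
import android.os.ParcelFileDescriptor;

// Sketch: point MediaRecorder's output at a socket via a ParcelFileDescriptor.
public final class SocketRecorder {
    public static MediaRecorder startToSocket(Socket socket) throws IOException {
        ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(socket);

        MediaRecorder recorder = new MediaRecorder();
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);   // camera setup omitted
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setOutputFile(pfd.getFileDescriptor());             // stream to the socket, not a file

        recorder.prepare();
        recorder.start();
        return recorder;
    }
}

Bear in mind that MP4 assumes a seekable output, so in practice a more stream-friendly container or extra handling may be needed when writing to a socket.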
