I'm developing an app for applying effects to the camera image in real time. Currently I'm using the MediaMuxer class in combination with MediaCodec. Those classes were introduced with Android 4.3.
Now I want to redesign my app and make it compatible with more devices. The only thing I found on the internet was a combination of FFmpeg and OpenCV, but I read that the frame rate is not very good if I want to use a high resolution. Is there any possibility to encode video in real time while capturing the camera image, without using MediaMuxer and MediaCodec?
PS: I'm using GLSurfaceView for OpenGL fragment shader effects. So this is a must-have.
Real-time encoding of large frames at a moderate frame rate is not going to happen with software codecs.
MediaCodec was introduced in 4.1, so you can still take advantage of hardware-accelerated compression so long as you can deal with the various problems. You'd still need an alternative to MediaMuxer if you want a .mp4 file at the end.
Some commercial game recorders, such as Kamcord and Everyplay, claim to work on Android 4.1+. So it's technically possible, though I don't know if they used non-public APIs to feed surfaces directly into the video encoder.
In pre-Jellybean Android it only gets harder.
(For anyone interested in recording GL in >= 4.3, see EncodeAndMuxTest or Grafika's "Record GL app".)
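On 4.1/4.2 there is no Surface input and no MediaMuxer, so the encoder has to be fed raw YUV frames through ByteBuffers. A minimal sketch of that setup follows; the resolution, bitrate, and timestamps are placeholder values, and the actual supported color format and stride are device-specific, which is one of the "various problems" mentioned above:

```java
// Hypothetical API-16 encoder setup; error handling and the output drain
// loop are omitted. The raw H.264 stream still needs a container of your
// own choosing, since MediaMuxer does not exist before 4.3.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();

// Per frame: copy YUV data into an input buffer and queue it.
ByteBuffer[] inputs = encoder.getInputBuffers();
int index = encoder.dequeueInputBuffer(10000);
if (index >= 0) {
    inputs[index].clear();
    inputs[index].put(yuvFrame);  // raw frame bytes from the camera path
    encoder.queueInputBuffer(index, 0, yuvFrame.length, presentationUs, 0);
}
```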
I am using the OpenTok SDK for video calling on iOS and Android devices, with a Node.js server.
It is a group call scenario with a maximum of 4 people; when we stream for more than 10 minutes, both devices get too hot.
Does anyone have a solution for this?
We can't degrade the video quality.
This is likely because you are using the default video codec, VP8, which is not hardware accelerated. You can change the codec per publisher to either H.264 or VP8, but there are some trade-offs to this approach.
Their lack of H.264 SVC support is disappointing, but might be okay depending on your use case. If you read this whole post and still want more guidance, I'd recommend reaching out to their developer support team, and/or post more about your use case here.
Here's some more context from the OpenTok Documentation, but I recommend you read the whole page to understand where you need to make compromises:
The VP8 real-time video codec is a software codec. It can work well at lower bitrates and is a mature video codec in the context of WebRTC. As a software codec it can be instantiated as many times as is needed by the application within the limits of memory and CPU. The VP8 codec supports the OpenTok Scalable Video feature, which means it works well in large sessions with supported browsers and devices.
The H.264 real-time video codec is available in both hardware and software forms depending on the device. It is a relatively new codec in the context of WebRTC although it has a long history for streaming movies and video clips over the internet. Hardware codec support means that the core CPU of the device doesn’t have to work as hard to process the video, resulting in reduced CPU load. The number of hardware instances is device-dependent with iOS having the best support. Given that H.264 is a new codec for WebRTC and each device may have a different implementation, the quality can vary. As such, H.264 may not perform as well at lower bit-rates when compared to VP8. H.264 is not well suited to large sessions since it does not support the OpenTok Scalable Video feature.
Assuming we have a Surface in Android that displays a video (e.g. h264) with a MediaPlayer:
1) Is it possible to change the saturation, contrast & brightness of the video displayed on the surface? In real time? E.g. images can use setColorFilter; is there anything similar in Android to process the video frames?
Alternative question (if no. 1 is too difficult):
2) If we would like to export this video with e.g. an increased saturation, we should use a Codec, e.g. MediaCodec. What technology (method, class, library, etc...) should we use before the codec/save action to apply the saturation change?
For display only, one easy approach is to use a GLSurfaceView, a SurfaceTexture to render the video frames, and a MediaPlayer. Prokash's answer links to an open source library that shows how to accomplish that. There are a number of other examples around if you search those terms together. Taking that route, you draw video frames to an OpenGL texture and create OpenGL shaders to manipulate how the texture is rendered. (I would suggest asking Prokash for further details and accepting his answer if this is enough to fill your requirements.)
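For illustration, a fragment shader of the sort you would apply to the SurfaceTexture frames might look like the following (held as a Java string; `uSaturation` and the varying/uniform names are made up for this example, and a value of 1.0 leaves the video unchanged):

```java
// Hypothetical saturation shader for an external (SurfaceTexture) texture.
// Mixes each pixel toward its Rec. 601 luma: 0.0 = grayscale, 1.0 = original.
private static final String SATURATION_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES uTexture;\n" +
        "uniform float uSaturation;\n" +
        "void main() {\n" +
        "    vec4 c = texture2D(uTexture, vTexCoord);\n" +
        "    float luma = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" +
        "    gl_FragColor = vec4(mix(vec3(luma), c.rgb, uSaturation), c.a);\n" +
        "}\n";
```

You would set `uSaturation` via glUniform1f each frame, which is what makes the adjustment real-time.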
Similarly, you could use the OpenGL tools with MediaCodec and MediaExtractor to decode video frames. The MediaCodec would be configured to output to a SurfaceTexture, so you would not need to do much more than code some boilerplate to get the output buffers rendered. The filtering process would be the same as with a MediaPlayer. There are a number of examples using MediaCodec as a decoder available, e.g. here and here. It should be fairly straightforward to substitute the TextureView or SurfaceView used in those examples with the GLSurfaceView of Prokash's example.
The advantage of this approach is that you have access to all the separate tracks in the media file. Because of that, you should be able to filter the video track with OpenGL and straight copy other tracks for export. You would use a MediaCodec in encode mode with the Surface from the GLSurfaceView as input and a MediaMuxer to put it all back together. You can see several relevant examples at BigFlake.
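A rough sketch of the export side (API 18+) is below: an H.264 encoder whose input Surface receives the filtered GL frames, with MediaMuxer writing the result to .mp4. The width, height, bitrate, and output path are placeholders, and the EGL setup and drain loop are omitted (the BigFlake examples cover those):

```java
// Hypothetical encode/mux wiring; error handling omitted.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface();  // render GL frames here
encoder.start();

MediaMuxer muxer = new MediaMuxer("/sdcard/out.mp4",
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// After the encoder reports MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
//   int track = muxer.addTrack(encoder.getOutputFormat());
//   muxer.start();
// then write each drained buffer with muxer.writeSampleData(track, buf, info).
```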
You can use a MediaCodec without a Surface to access decoded byte data directly and manipulate it that way. This example illustrates that approach. You can manipulate the data and send it to an encoder for export or render it as you see fit. There is some extra complexity in dealing with the raw byte data. Note that I like this example because it illustrates dealing with the audio and video tracks separately.
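As a plain-Java illustration (not Android-specific) of the kind of per-pixel work this approach involves, here is a saturation adjustment on interleaved RGB bytes. Real decoder output is normally a YUV format with device-dependent layout, so treat this as the concept only:

```java
// Adjusts saturation in place by mixing each channel toward the pixel's
// Rec. 601 luma. factor = 1.0 leaves pixels unchanged; 0.0 gives grayscale.
public class SaturationFilter {

    public static void adjustSaturation(byte[] rgb, float factor) {
        for (int i = 0; i + 2 < rgb.length; i += 3) {
            int r = rgb[i] & 0xFF, g = rgb[i + 1] & 0xFF, b = rgb[i + 2] & 0xFF;
            float y = 0.299f * r + 0.587f * g + 0.114f * b;  // Rec. 601 luma
            rgb[i]     = clamp(y + factor * (r - y));
            rgb[i + 1] = clamp(y + factor * (g - y));
            rgb[i + 2] = clamp(y + factor * (b - y));
        }
    }

    private static byte clamp(float v) {
        return (byte) Math.max(0, Math.min(255, Math.round(v)));
    }
}
```

This is the same math as the GL-shader route, just run on the CPU, which is why the byte-buffer approach tends to be slower for high resolutions.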
You can also use FFMpeg, either in native code or via one of the Java wrappers out there. This option is more geared towards export than immediate playback. See here or here for some libraries that attempt to make FFMpeg available to Java. They are basically wrappers around the command line interface. You would need to do some extra work to manage playback via FFMpeg, but it is definitely doable.
If you have questions, feel free to ask, and I will try to expound upon whatever option makes the most sense for your use case.
If you are using a player that supports video filters, then you can do that.
An example of such a player is VLC, which is built around FFmpeg.
VLC is pretty easy to compile for Android. Then all you need is the libvlc (aar file) and you can build your own app. See compile instructions here.
You will also need to write your own module. Just duplicate an existing one and modify it. Needless to say that VLC offers strong transcoding and streaming capabilities.
As powerful as VLC for Android is, it has one huge drawback: video filters cannot work with hardware decoding (Android only). This means that the entire video processing runs on the CPU.
Your other options are to use GLSL / OpenGL over surfaces like GLSurfaceView and TextureView. This guarantees GPU power.
I am building an app where I need to compress video before uploading it to a server. I have tried ffmpeg4android (https://github.com/chloette/ffmpeg4android), which is very heavy in size; it increases the size of my app by 20 MB.
I tried the MediaCodec Android API, which is not working as expected.
Does anyone have a working code example for compressing a video with the MediaCodec Android API?
Update:
Yes looking for MP4 containers.
This is a vague question since you do not specify a codec or file container, but assuming you are interested in H.264 codecs and MP4 containers there are a lot of examples here. Specifically, you will probably be interested in CameraToMpegTest.java.
Note that even though the requirements are stated as Android 4.3, MediaCodec support is poor until Android 5.1.
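For file-to-file compression the input side is a MediaExtractor rather than a camera, and the piece that actually shrinks the output is a lower KEY_BIT_RATE on the re-encode. A rough sketch, with placeholder path and numbers and error handling omitted:

```java
// Select the video track of an existing file for decode/re-encode.
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource("/sdcard/input.mp4");  // placeholder path

for (int i = 0; i < extractor.getTrackCount(); i++) {
    MediaFormat trackFormat = extractor.getTrackFormat(i);
    String mime = trackFormat.getString(MediaFormat.KEY_MIME);
    if (mime != null && mime.startsWith("video/")) {
        extractor.selectTrack(i);
        // Feed this track to a decoder, render to the encoder's input
        // Surface, and configure the encoder with a reduced bitrate, e.g.:
        // encoderFormat.setInteger(MediaFormat.KEY_BIT_RATE, 1000000);
        break;
    }
}
```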
I am working on a video conferencing project. We were using a software codec for encoding and decoding video frames, which does fine for lower resolutions (up to 320p). We have planned to support our application at higher resolutions as well, up to 720p. I came to know that hardware acceleration will do this job fairly well.
As the hardware codec API, MediaCodec, is available from Jelly Bean onward, I have used it for encoding and decoding, and it is working fine. But my application needs to support Android 2.3 as well. So I need hardware-accelerated video decoding of H.264 frames at 720p and 30fps.
While researching, I came across the idea of using OMXCodec by modifying the Stagefright framework. I had read that the hardware decoder for H.264 is available from 2.1 and the encoder from 3.0. I have gone through many articles and questions on this site and confirmed that I can go ahead.
I had read about the Stagefright architecture here: architecture, and here: stagefright how it works.
And I read about OMXCodec here: use-android-hardware-decoder-with-omxcodec-in-ndk.
I am having trouble getting started and some confusion about its implementation. I would like to have some info about it.
To use OMXCodec in my code, should I build my project against the whole Android source tree, or can I do it by adding some files from the AOSP source (if yes, which ones)?
What steps should I follow from scratch to achieve this?
Can someone give me a guideline on this?
Thanks...
The best example of integrating OMXCodec in the native layer is the command-line utility stagefright, as can be observed here in Gingerbread itself. This example shows how an OMXCodec is created.
Some points to note:
The input to OMXCodec should be modeled as a MediaSource, and hence you should ensure that your application handles this requirement. An example of creating a MediaSource-based source can be found in the record utility file as DummySource.
The input to the decoder, i.e. the MediaSource, should provide the data through the read method, and hence your application should provide individual frames for every read call.
The decoder could be created with NativeWindow for output buffer allocation. In this case, if you wish to access the buffer from the CPU, you should probably refer to this query for more details.
I'm looking at the class MediaRecorder of the Android SDK, and I was wondering if it can be used to record a video made from a Surface.
Example: I want to record what I display on my surface (a video game?) into a file.
As I said in the title: I'm not looking to record anything from the camera.
I think it is possible by overriding most of the class, but I'd very much like some ideas...
Besides, I'm not sure how the Camera class is used in MediaRecorder, and what I should get from my Surface to replace it.
Thank you for your interest!
PS: I'm looking at the native code used by MediaRecorder to get some clues; maybe it will inspire someone else:
http://www.netmite.com/android/mydroid/frameworks/base/media/jni/
The ability to record from a Surface was added in Android Lollipop. Here is the documentation:
http://developer.android.com/about/versions/android-5.0.html#ScreenCapture
Android 4.3 (API 18) adds some new features that do exactly what you want. Specifically, the ability to provide data to MediaCodec from a Surface, and the ability to store the encoded video as a .mp4 file (through MediaMuxer).
Some sample code is available here, including a patch for a Breakout game that records the game as you play it.
This is unfortunately not possible at the Java layer. All the communication between the Camera and Media Recorder happens in the native code layers, and there's no way to inject non-Camera data into that pipeline.
While Android 4.1 added the MediaCodec APIs, which allow access to the device's video encoders, there's no easy-to-use way to take the resulting encoded streams and save them as a standard video file. You'd have to find a library to do that or write one yourself.
You MAY wish to trace the rabbit hole from a different folder in AOSP
frameworks/av/media
as long as you are comfortable with the NDK (C/C++, JNI, ...) and Android (permissions, ...)
It goes quite deep, and I am not sure how far you can go on a non-rooted device.
Here's an article on how to draw on an EGLSurface and generate a video using MediaCodec:
https://www.sisik.eu/blog/android/media/images-to-video
This uses OpenGL ES, but you can have MediaCodec provide a surface from which you can obtain a Canvas to draw on. No need for OpenGL ES.
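A minimal sketch of the Canvas route (names like `encoder` and `frameIndex` are assumed from surrounding setup; Surface.lockHardwareCanvas() requires API 23+, and plain lockCanvas() is not supported by every encoder input surface):

```java
// Draw one frame onto the encoder's input surface with a Canvas.
Surface inputSurface = encoder.createInputSurface();
encoder.start();

Paint paint = new Paint();
paint.setColor(Color.WHITE);
paint.setTextSize(48f);

Canvas canvas = inputSurface.lockHardwareCanvas();
canvas.drawColor(Color.BLACK);
canvas.drawText("frame " + frameIndex, 40, 80, paint);
inputSurface.unlockCanvasAndPost(canvas);  // submits the frame for encoding
```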