Currently I am doing research for a native video player project. Initially I tried to use ffmpeg as the decoder and return the bytes to Java, then use View::onDraw with a Canvas to display the frames. Unfortunately, the performance of this approach is not good, so I am wondering whether there is anything else I could use to display frames other than passing them to Java?
Also, besides displaying the frames, how can I play sound using C/C++ with the NDK?
Thanks.
You can use ffmpeg http://ffmpeg.org and/or libtheora http://www.theora.org to decode video frames. Then just display the result via OpenGL ES 2 using render-to-texture. Refer to http://www.gamedev.net/topic/570295-opengl-and-xvidtheoraanything for details.
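For illustration, here is a minimal sketch of the render-to-texture idea using the Java GLES20 bindings; the same calls exist in the NDK's GLES2 headers, so the pattern carries over to native code. frameWidth, frameHeight and rgbFrame (a java.nio.ByteBuffer of converted RGB pixels from the decoder) are assumed to exist:
// One-time setup: create a texture that will receive decoded frames.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB,
        frameWidth, frameHeight, 0, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, null);

// Per frame: upload the decoded RGB bytes, then draw a textured quad with your shader.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0,
        frameWidth, frameHeight, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, rgbFrame);
// ... issue your quad draw call here ...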
For audio you can use OpenAL. Here is the Android port: http://pielot.org/2010/12/14/openal-on-android
I'm using mp4parser, and the videos need to be of the same kind.
I was thinking of using Android's MediaCodec to decode & re-encode the preroll video so it matches the encoding output of the cameras (front & back).
Any suggestions on how this can be done (how to get the device-specific encoding params)?
If you want to find out what encoding your Android camera is using, try using this: https://developer.android.com/reference/android/media/CamcorderProfile
This should suffice to answer your questions about detecting the video encoding, including: the file output format, video codec format, video bit rate in bits per second, video frame rate in frames per second, video frame width and height, audio codec format, audio bit rate in bits per second, audio sample rate, and the number of audio channels for recording.
Pulled a lot of the above information from here as well: https://developer.android.com/guide/topics/media/camera#capture-video
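For illustration, the relevant parameters can be read straight off the profile; a minimal sketch, assuming camera id 0 (typically the back camera) and the QUALITY_HIGH profile:
// Read the device's recording parameters for camera 0.
CamcorderProfile profile = CamcorderProfile.get(0, CamcorderProfile.QUALITY_HIGH);
int fileFormat   = profile.fileFormat;        // e.g. MediaRecorder.OutputFormat.MPEG_4
int videoCodec   = profile.videoCodec;        // e.g. MediaRecorder.VideoEncoder.H264
int videoBitRate = profile.videoBitRate;      // bits per second
int frameRate    = profile.videoFrameRate;    // frames per second
int width        = profile.videoFrameWidth;
int height       = profile.videoFrameHeight;
int audioCodec   = profile.audioCodec;
int audioBitRate = profile.audioBitRate;      // bits per second
int sampleRate   = profile.audioSampleRate;
int channels     = profile.audioChannels;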
As for transcoding videos that are already in the user's camera roll, I found this useful transcoder that was written in pure Java using the Android MediaCodec API; it can be found here: https://github.com/ypresto/android-transcoder
Also, as rupps mentioned below, you can use FFmpeg, which has been proven to work countless times on Android. However, the reason I linked the other transcoder first is that, as the author states, "FFmpeg is the most famous solution for transcoding. But using FFmpeg binary on Android can cause GPL and/or patent issues. Also using native code for Android development can be troublesome because of cross-compiling, architecture compatibility, build time and binary size." So use whichever one you believe better suits you. Here is the link to FFmpeg for Android:
https://github.com/WritingMinds/ffmpeg-android
If you don't want to use the transcoder that someone else made, then I recommend making your own transcoder using the MediaCodec API, which can be found here: https://developer.android.com/reference/android/media/MediaCodec
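If you go the do-it-yourself route, the core of it is configuring an encoder whose output matches the camera profile above. A minimal sketch, assuming width, height, videoBitRate and frameRate come from the CamcorderProfile read earlier (error handling omitted):
// Build an output format that mirrors the camera's encoding parameters.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_BIT_RATE, videoBitRate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, frameRate);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
// Drawing frames into an input Surface; use a YUV color format instead if you feed raw buffers.
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
// Feed it the decoded preroll frames and mux the encoded output into the target file.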
If you want magic, try this library.
https://github.com/INDExOS/media-for-mobile/
Take a look at the MediaComposer class.
Here's also a code snippet on how it's done.
org.m4m.MediaComposer mediaComposer = new org.m4m.MediaComposer(factory, progressListener);
mediaComposer.addSourceFile(mediaUri1);
// mediaFileInfo is assumed to describe the source clip and to have been created beforehand
int orientation = mediaFileInfo.getRotation();
mediaComposer.setTargetFile(dstMediaPath, orientation);
// set video encoder
VideoFormatAndroid videoFormat = new VideoFormatAndroid(videoMimeType, width, height);
videoFormat.setVideoBitRateInKBytes(videoBitRateInKBytes);
videoFormat.setVideoFrameRate(videoFrameRate);
videoFormat.setVideoIFrameInterval(videoIFrameInterval);
mediaComposer.setTargetVideoFormat(videoFormat);
// set audio encoder
// audioFormat is assumed to hold the source file's audio parameters, obtained beforehand
AudioFormatAndroid aFormat = new AudioFormatAndroid(audioMimeType, audioFormat.getAudioSampleRateInHz(), audioFormat.getAudioChannelCount());
aFormat.setAudioBitrateInBytes(audioBitRate);
aFormat.setAudioProfile(MediaCodecInfo.CodecProfileLevel.AACObjectLC);
mediaComposer.setTargetAudioFormat(aFormat);
mediaComposer.start();
Assuming we have a Surface in Android that displays a video (e.g. h264) with a MediaPlayer:
1) Is it possible to change the saturation, contrast & brightness of the video displayed on the Surface, in real time? E.g. images can use setColorFilter; is there anything similar in Android to process the video frames?
Alternative question (if no. 1 is too difficult):
2) If we would like to export this video with e.g. an increased saturation, we should use a Codec, e.g. MediaCodec. What technology (method, class, library, etc...) should we use before the codec/save action to apply the saturation change?
For display only, one easy approach is to use a GLSurfaceView, a SurfaceTexture to render the video frames, and a MediaPlayer. Prokash's answer links to an open source library that shows how to accomplish that. There are a number of other examples around if you search those terms together. Taking that route, you draw video frames to an OpenGL texture and create OpenGL shaders to manipulate how the texture is rendered. (I would suggest asking Prokash for further details and accepting his answer if this is enough to fill your requirements.)
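To make the shader part concrete, here is a sketch of a fragment shader that rescales the saturation of a SurfaceTexture-backed video texture; the uniform and varying names are made up for this example:
// GLSL fragment shader for a video texture bound as GL_TEXTURE_EXTERNAL_OES.
private static final String SATURATION_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES uVideoTex;\n" +
        "uniform float uSaturation;\n" +   // 1.0 = unchanged, <1 desaturates, >1 boosts
        "void main() {\n" +
        "    vec3 color = texture2D(uVideoTex, vTexCoord).rgb;\n" +
        "    float luma = dot(color, vec3(0.299, 0.587, 0.114));\n" +
        "    gl_FragColor = vec4(mix(vec3(luma), color, uSaturation), 1.0);\n" +
        "}\n";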
Similarly, you could use the OpenGL tools with MediaCodec and MediaExtractor to decode video frames. The MediaCodec would be configured to output to a SurfaceTexture, so you would not need to do much more than code some boilerplate to get the output buffers rendered. The filtering process would be the same as with a MediaPlayer. There are a number of examples using MediaCodec as a decoder available, e.g. here and here. It should be fairly straightforward to substitute the TextureView or SurfaceView used in those examples with the GLSurfaceView of Prokash's example.
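A rough sketch of that boilerplate, assuming surface wraps the SurfaceTexture you render in the GLSurfaceView and ignoring error handling:
// Select the video track and configure a decoder that renders straight into the Surface.
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(videoPath);
MediaFormat format = null;
int trackIndex = -1;
for (int i = 0; i < extractor.getTrackCount(); i++) {
    MediaFormat f = extractor.getTrackFormat(i);
    if (f.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
        trackIndex = i;
        format = f;
        break;
    }
}
extractor.selectTrack(trackIndex);
MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
decoder.configure(format, surface, null, 0);  // output goes to the Surface, not to byte buffers
decoder.start();
// Then loop: feed input buffers from the extractor, release output buffers with render = true.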
The advantage of this approach is that you have access to all the separate tracks in the media file. Because of that, you should be able to filter the video track with OpenGL and straight copy other tracks for export. You would use a MediaCodec in encode mode with the Surface from the GLSurfaceView as input and a MediaMuxer to put it all back together. You can see several relevant examples at BigFlake.
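For the export side, an outline of that encode path; format is assumed to be a MediaFormat describing the target video, and the filtered frames are assumed to be drawn into inputSurface with OpenGL:
// The encoder takes frames from an input Surface; the muxer writes the encoded track to a file.
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface();  // render the filtered frames here with GL
encoder.start();

MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// Once the encoder reports INFO_OUTPUT_FORMAT_CHANGED:
//   int track = muxer.addTrack(encoder.getOutputFormat());
//   muxer.start();
// Then for each encoded output buffer:
//   muxer.writeSampleData(track, encodedBuffer, bufferInfo);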
You can use a MediaCodec without a Surface to access decoded byte data directly and manipulate it that way. This example illustrates that approach. You can manipulate the data and send it to an encoder for export or render it as you see fit. There is some extra complexity in dealing with the raw byte data. Note that I like this example because it illustrates dealing with the audio and video tracks separately.
You can also use FFMpeg, either in native code or via one of the Java wrappers out there. This option is more geared towards export than immediate playback. See here or here for some libraries that attempt to make FFMpeg available to Java. They are basically wrappers around the command line interface. You would need to do some extra work to manage playback via FFMpeg, but it is definitely doable.
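As a rough illustration of that route (assuming an ffmpeg executable is available to your app through one of those wrappers), the saturation change itself maps to ffmpeg's eq filter; the paths below are placeholders:
// Build an ffmpeg command that re-encodes the clip with boosted saturation.
// How you actually invoke it depends on the wrapper library you pick.
String[] ffmpegArgs = {
        "-i", "/sdcard/input.mp4",      // placeholder input path
        "-vf", "eq=saturation=1.5",     // 1.0 = unchanged; >1 increases saturation
        "-c:a", "copy",                 // leave the audio track untouched
        "/sdcard/output.mp4"            // placeholder output path
};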
If you have questions, feel free to ask, and I will try to expound upon whatever option makes the most sense for your use case.
If you are using a player that supports video filters, then you can do that.
An example of such a player is VLC, which is built around FFmpeg [1].
VLC is pretty easy to compile for Android. Then all you need is libvlc (an .aar file) and you can build your own app. See the compile instructions here.
You will also need to write your own module. Just duplicate an existing one and modify it. Needless to say, VLC offers strong transcoding and streaming capabilities.
As powerful as VLC for Android is, it has one huge drawback: video filters do not work with hardware decoding (on Android). This means that all of the video processing runs on the CPU.
Your other option is to use GLSL / OpenGL over surfaces like GLSurfaceView and TextureView. This guarantees GPU processing.
I am searching for a library which offers the ability to stream video from an Android device (5.1+) and record it at the same time.
I tried MediaRecorder - the usual way to record videos on Android - but with it I am not able to stream over WebRTC or RTSP because the camera is busy.
Currently I am using libstreaming. With a little modification, the app can record and stream over RTSP concurrently. But this lib lacks support for the hardware codecs in MTK and SPRG chipsets.
I wonder if you can recommend a solution or another lib that does.
At the moment the lib works only on the Nexus 4 with a Qualcomm chipset.
After several days of research, I came to the decision to use a combination of FFmpeg and MediaCodec.
It seems that the only way to get frames from the camera at a high rate is to use the Android MediaCodec API. But the framework's muxer (MediaMuxer) supports only the MP4 container, which is not an option for me (I need TS), while FFmpeg can process/create virtually any known video format.
Currently I am trying to make them work together (read a ByteBuffer from MediaCodec and feed the FFmpeg recorder with it).
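The MediaCodec half of that is a standard drain loop; a sketch assuming codec is the running encoder and writeToFfmpegRecorder is a hypothetical method that hands the encoded buffer to the FFmpeg side:
// Drain encoded output from MediaCodec and hand each buffer to the FFmpeg-based recorder.
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean done = false;
while (!done) {
    int index = codec.dequeueOutputBuffer(info, 10_000 /* microseconds */);
    if (index >= 0) {
        ByteBuffer encoded = codec.getOutputBuffer(index);  // API 21+
        encoded.position(info.offset);
        encoded.limit(info.offset + info.size);
        writeToFfmpegRecorder(encoded, info);                // hypothetical handoff
        codec.releaseOutputBuffer(index, false);
        done = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
    }
}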
Useful links:
Grafika project: https://github.com/google/grafika
ContinuousCapture and Show + record are the most interesting parts to check
javacpp (specifically FFMpeg wrapper): https://github.com/bytedeco/javacpp
It has an example of recording and streaming.
kickflip sdk: https://github.com/Kickflip/kickflip-android-sdk
This library makes the two tools mentioned above work together, and it is also open source. Sadly, it doesn't solve my problem fully. The feature I need has been requested but not yet implemented: https://github.com/bytedeco/javacv/issues/95
I am developing a media player application on Android which uses FFmpeg for decoding, which I think is software decoding. It doesn't play high-resolution videos smoothly, so I would like to switch to hardware decoding. I came to know that libstagefright will do the job. But how do I implement it using libstagefright? Are there any samples or documentation? Please help with using libstagefright.
If you are using ICS (Android 4.0), you can use MediaCodec to encode or decode using hardware.
see http://developer.android.com/reference/android/media/MediaCodec.html for more details and examples.
Thanks,
NinjAndroid,
MoMinis R&D team
What I need to do is decode video frames and render the frames on a trapezoidal surface. I'm using Android 2.2 as my development platform.
I'm not using the mediaplayer service since I need access to the decoded frames.
Here's what I have so far:
I am using stagefright framework to extract decoded video frames.
each frame is then converted from YUV420 to RGB format
the converted frames are then copied to a texture and rendered to an OpenGL surface
Note that I am using Processing and not using OpenGL calls directly.
So now my problems are
i can only decode mp4 files with stagefright
the rendering is too slow, around 100ms for a 320x420 frame
there is no audio yet; I can only render video, and I still don't know how to synchronize the playback of the audio frames.
So for my questions...
how can I support other video formats? Should I use stagefright or should I switch to ffmpeg?
how can I improve the performance? I should be able to support at least 720p.
Should I use OpenGL calls directly instead of Processing? Will this improve the performance?
How can I sync the audio frames during playback?
Adding other video formats and codecs to stagefright
If you have parsers for "other" video formats, then you need to implement a Stagefright media extractor plug-in and integrate it into AwesomePlayer. Similarly, if you have OMX components for the required video codecs, you need to integrate them into the OMXCodec class.
Using FFmpeg components in Stagefright, or using an FFmpeg player instead of Stagefright, does not seem trivial.
However, if the required formats are already available in OpenCORE, then you can modify the Android stack so that OpenCORE gets chosen for those formats. You would need to port the logic for getting YUV data to OpenCORE (i.e., get dirty with MIOs).
Playback performance
SurfaceFlinger, used for normal playback, uses an Overlay for rendering. It usually provides around 4 - 8 video buffers (from what I have seen so far). So you can check how many different buffers you are getting in your OpenGL rendering; increasing the number of buffers will definitely improve performance.
Also, check the time taken for the YUV to RGB conversion. You can optimize it or use an open-source library to improve performance.
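For reference, the per-pixel math being optimized is small; a sketch of a common BT.601-style conversion (optimized libraries do the same thing with integer math, NEON, or the GPU):
// Convert one YUV pixel to ARGB using a common BT.601 full-range approximation.
static int yuvToRgb(int y, int u, int v) {
    float yf = y, uf = u - 128, vf = v - 128;
    int r = clamp((int) (yf + 1.402f * vf));
    int g = clamp((int) (yf - 0.344f * uf - 0.714f * vf));
    int b = clamp((int) (yf + 1.772f * uf));
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

static int clamp(int c) {
    return c < 0 ? 0 : (c > 255 ? 255 : c);
}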
Usually OpenGL is not used for video rendering (it is known for graphics), so I am not sure about the performance.
Audio Video Sync
Audio time is used as the reference. In Stagefright, AwesomePlayer uses AudioPlayer for playing out audio. This player implements an interface for providing time data, which AwesomePlayer uses for rendering video. Basically, video frames are rendered when their presentation time matches that of the audio sample being played.
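The same principle can be applied in your own player loop; a sketch where audioClockUs() is a hypothetical method returning the current audio position in microseconds (e.g. derived from AudioTrack.getPlaybackHeadPosition() and the sample rate):
// Render a decoded video frame only when the audio clock has caught up to its timestamp.
long videoPtsUs = bufferInfo.presentationTimeUs;       // timestamp of the pending video frame
long earlyByUs = videoPtsUs - audioClockUs();          // hypothetical audio clock, microseconds
if (earlyByUs > 10_000) {
    try { Thread.sleep(earlyByUs / 1000); } catch (InterruptedException ignored) { }
}
decoder.releaseOutputBuffer(outputIndex, true);        // render = true displays the frame
// Frames that are far behind the audio clock can simply be dropped instead of rendered.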
Shash