I'm a bit confused about how to play and record video/audio in Android. I don't really understand in what situations one should use these classes:
- To play: MediaPlayer vs. MediaExtractor + MediaCodec
- To record: MediaRecorder vs. MediaCodec + MediaMuxer
When should I use one or the other?
Sorry if this is a duplicate question; I'd think it's a common one, but I haven't found any.
If the high level interfaces (MediaPlayer, MediaRecorder) can do what you want (play back video in a format that the system supports to the display, or record video from the camera into a file), you should probably just use them; it will be much, much simpler.
If you want to do something more custom, and you notice that the part of the chain you want to modify is hidden inside the high level classes, you'll want to move on to the lower level ones. For example, if you only want to extract packets of data from a file but not decode and display/play them back immediately, you'll want to use MediaExtractor. If you want to receive packets from some other source that the system itself doesn't support, you'll want to use MediaCodec without MediaExtractor. Likewise, if you want to record something other than the camera, or write the output somewhere other than a file format that MediaRecorder supports, you'll want to use MediaCodec directly instead of MediaRecorder.
Also note that the high level classes improve and get more flexible with newer API versions, allowing you to do things that previously required you to manually use the lower level classes. E.g. in Android 5.0, MediaRecorder got the ability to record from a custom Surface, allowing you to record a video of something you render yourself, not just the camera. Since 4.3, this had only been possible by using the lower level classes.
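To give a sense of how much the high level path hides, here is a minimal MediaPlayer playback sketch (the file path is a placeholder and surface is assumed to come from your view; error handling omitted):

MediaPlayer player = new MediaPlayer();
player.setDataSource("/sdcard/video.mp4"); // placeholder path
player.setSurface(surface);                // e.g. from SurfaceHolder.getSurface()
player.prepare();                          // blocking; prefer prepareAsync() in production
player.start();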
My Android app plays videos in ExoPlayer 2, and now I'd like to play a video backwards.
I searched around a lot and found only the idea of converting it to a GIF, and this from WeiChungChang.
Is there any more straightforward solution? Another player or a library that implements this for me is probably too much to ask, but converting it to a reversed GIF gave me a lot of memory problems, and I don't know what to do with the WeiChungChang idea. Playing only mp4 in reverse would be enough, though.
Videos are frequently encoded such that the encoding for a given frame depends on one or more frames before it, and sometimes also on one or more frames after it.
In other words, to reconstruct a frame correctly, the decoder may need to refer to one or more previous frames and one or more subsequent frames.
This allows a video encoder to reduce file or transmission size by fully encoding only the reference frames, sometimes called I-frames, and storing just the delta to the reference frames for the frames before and/or after them.
Playing a video backwards is not a common player function and the player would typically have to decode the video as usual (i.e. forwards) to get the frames and then play them in the reverse order.
You could extend ExoPlayer to do this yourself, but it may be easier to manipulate the video on the server side first, if possible. There are tools that will reverse a video so that your players can then play it as normal, for example https://www.videoreverser.com or https://www.kapwing.com/tools/reverse-video
If you need to reverse it on the device for your use case, then you could use ffmpeg on the device to achieve this - see an example ffmpeg command to do this here:
https://video.stackexchange.com/a/17739
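For reference, a typical reversal command looks like the following (note that the reverse/areverse filters buffer the whole clip in memory, so this only suits short videos):

ffmpeg -i input.mp4 -vf reverse -af areverse reversed.mp4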
If you are using ffmpeg, it is generally easiest to use it via a wrapper on Android such as this one, which will also allow you to test the command before you add it to your app:
https://github.com/WritingMinds/ffmpeg-android-java
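As a rough sketch of how that wrapper is used (this assumes the library's FFmpeg.getInstance()/execute() API; check the project README for the exact signatures, and inputPath/outputPath are placeholders):

FFmpeg ffmpeg = FFmpeg.getInstance(context);
try {
    String[] cmd = {"-i", inputPath, "-vf", "reverse", "-af", "areverse", outputPath};
    ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
        @Override
        public void onSuccess(String message) { /* play the reversed file */ }
        @Override
        public void onFailure(String message) { /* log and report the error */ }
    });
} catch (FFmpegCommandAlreadyRunningException e) {
    // this wrapper runs only one ffmpeg command at a time
}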
Note that video manipulation is time and processor hungry so this may be slow and consume more battery than you want on your mobile device if the video is long.
Assuming we have a Surface in Android that displays a video (e.g. h264) with a MediaPlayer:
1) Is it possible to change the saturation, contrast & brightness of the video displayed on the surface? In real time? E.g. images can use setColorFilter; is there anything similar in Android for processing the video frames?
Alternative question (if no. 1 is too difficult):
2) If we would like to export this video with e.g. an increased saturation, we should use a Codec, e.g. MediaCodec. What technology (method, class, library, etc...) should we use before the codec/save action to apply the saturation change?
For display only, one easy approach is to use a GLSurfaceView, a SurfaceTexture to render the video frames, and a MediaPlayer. Prokash's answer links to an open source library that shows how to accomplish that. There are a number of other examples around if you search those terms together. Taking that route, you draw video frames to an OpenGL texture and create OpenGL shaders to manipulate how the texture is rendered. (I would suggest asking Prokash for further details and accepting his answer if this is enough to fill your requirements.)
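To make that concrete, a saturation filter in this setup is just a fragment shader applied to the external video texture. A minimal sketch follows (the uniform/varying names are my own; your render code has to compile the shader and set the uniforms):

// Fragment shader for a SurfaceTexture (GL_TEXTURE_EXTERNAL_OES) video frame.
// uSaturation: 0.0 = grayscale, 1.0 = unchanged, >1.0 = oversaturated.
private static final String SATURATION_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "uniform float uSaturation;\n" +
        "void main() {\n" +
        "    vec4 c = texture2D(sTexture, vTexCoord);\n" +
        "    float gray = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" +
        "    gl_FragColor = vec4(mix(vec3(gray), c.rgb, uSaturation), c.a);\n" +
        "}\n";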
Similarly, you could use the OpenGL tools with MediaCodec and MediaExtractor to decode video frames. The MediaCodec would be configured to output to a SurfaceTexture, so you would not need to do much more than code some boilerplate to get the output buffers rendered. The filtering process would be the same as with a MediaPlayer. There are a number of examples using MediaCodec as a decoder available, e.g. here and here. It should be fairly straightforward to substitute the TextureView or SurfaceView used in those examples with the GLSurfaceView of Prokash's example.
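The decode-to-Surface boilerplate is roughly the following sketch (video track selection simplified, imports and end-of-stream draining trimmed; mSurface is assumed to be the Surface backed by your SurfaceTexture):

MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(pathToVideo);
extractor.selectTrack(0); // assume track 0 is video for brevity; pick it properly in real code
MediaFormat format = extractor.getTrackFormat(0);
MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
decoder.configure(format, mSurface, null, 0); // decoded frames go straight to the Surface
decoder.start();

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean inputDone = false;
while (!inputDone) {
    int inIndex = decoder.dequeueInputBuffer(10000);
    if (inIndex >= 0) {
        ByteBuffer buffer = decoder.getInputBuffer(inIndex);
        int size = extractor.readSampleData(buffer, 0);
        if (size < 0) {
            decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            inputDone = true;
        } else {
            decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
            extractor.advance();
        }
    }
    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    if (outIndex >= 0) {
        decoder.releaseOutputBuffer(outIndex, true); // 'true' renders the frame to the Surface
    }
}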
The advantage of this approach is that you have access to all the separate tracks in the media file. Because of that, you should be able to filter the video track with OpenGL and straight copy other tracks for export. You would use a MediaCodec in encode mode with the Surface from the GLSurfaceView as input and a MediaMuxer to put it all back together. You can see several relevant examples at BigFlake.
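The export side looks roughly like this sketch (the EGL plumbing that renders the filtered frames into the input Surface is omitted, and the format values are placeholder assumptions):

MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);   // placeholder bit rate
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // render filtered frames here via EGL
encoder.start();

MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// Drain the encoder: add the track on INFO_OUTPUT_FORMAT_CHANGED, then feed each
// encoded buffer to muxer.writeSampleData(trackIndex, buffer, bufferInfo).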
You can use a MediaCodec without a Surface to access decoded byte data directly and manipulate it that way. This example illustrates that approach. You can manipulate the data and send it to an encoder for export or render it as you see fit. There is some extra complexity in dealing with the raw byte data. Note that I like this example because it illustrates dealing with the audio and video tracks separately.
You can also use FFMpeg, either in native code or via one of the Java wrappers out there. This option is more geared towards export than immediate playback. See here or here for some libraries that attempt to make FFMpeg available to Java. They are basically wrappers around the command line interface. You would need to do some extra work to manage playback via FFMpeg, but it is definitely doable.
If you have questions, feel free to ask, and I will try to expound upon whatever option makes the most sense for your use case.
If you are using a player that supports video filters, then you can do that.
An example of such a player is VLC, which is built around FFMPEG [1].
VLC is pretty easy to compile for Android. Then all you need is libvlc (an AAR file) and you can build your own app. See the compile instructions here.
You will also need to write your own filter module. Just duplicate an existing one and modify it. Needless to say, VLC offers strong transcoding and streaming capabilities.
As powerful as VLC for Android is, it has one huge drawback - video filters cannot work with hardware decoding (Android only). This means that all of the video processing runs on the CPU.
Your other option is to use GLSL / OpenGL over surfaces like GLSurfaceView and TextureView, which guarantees GPU processing.
I'm trying to implement an Android app that records a video, while also writing a file containing the times at which each frame was taken. I tried using MediaRecorder, got to a point where I can get the video saved, but I couldn't find a way to get the timestamps. I tried doing something like:
while (previous video file length != current video file length)
write current time to text file;
but this did not seem to work, as file length doesn't seem to be updated frequently enough (or am I wrong?).
I then tried using OpenCV and managed to capture the images frame by frame (and therefore getting timestamps was easy), but I couldn't find a way to join all the frames to one video. I saw answers referring to using NDK and FFmpeg, but I feel like there should be an easier solution (perhaps similar to the one I tried at the top?).
You could use MediaCodec to capture the video, but that gets complicated quickly, especially if you want audio as well. You can recover timestamps from a video file by walking through it with MediaExtractor. Just call getSampleTime() on each frame to get the presentation time in microseconds.
This question is old, but I just needed the same thing. With the camera2 API you can stream to multiple targets. So you can stream from one and the same physical camera to the MediaRecorder and at the same time to an ImageReader. The ImageReader has an ImageReader.OnImageAvailableListener; in its onImageAvailable callback you can query the frames for their timestamps:
Image image = reader.acquireLatestImage();
long cameraTime = image.getTimestamp(); // nanoseconds, same timebase as the capture result
image.close(); // release the Image so the reader can reuse the buffer
I'm not sure whether streaming to multiple targets requires certain hardware capabilities, though (such as multiple image processors). I guess modern devices have around 2-3 of them.
EDIT: It looks like every camera2-capable device can stream to up to 3 targets.
EDIT: Apparently, you can use an ImageReader for capturing the individual frames and their timestamps, and then use an ImageWriter to provide those images to downstream consumers such as a MediaCodec, which records the video for you.
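A sketch of wiring both targets into one camera2 session follows (imports, callbacks and error handling omitted; the size, format and the mediaRecorder/backgroundHandler/cameraDevice variables are assumptions from the setup described above):

ImageReader reader = ImageReader.newInstance(1920, 1080, ImageFormat.YUV_420_888, 3);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image != null) {
        long timestampNs = image.getTimestamp(); // per-frame timestamp in nanoseconds
        image.close();
    }
}, backgroundHandler);

List<Surface> targets = Arrays.asList(mediaRecorder.getSurface(), reader.getSurface());
cameraDevice.createCaptureSession(targets, sessionStateCallback, backgroundHandler);
// In the session callback, build a TEMPLATE_RECORD request and add both surfaces:
// builder.addTarget(mediaRecorder.getSurface()); builder.addTarget(reader.getSurface());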
Expanding on fadden's answer, the easiest way to go is to record using MediaRecorder (the Camera2 example helped me a lot) and then extract the timestamps via MediaExtractor. I want to add a working MediaExtractor code sample:
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(pathToVideo);
// A track must be selected before getSampleTime() returns valid values.
for (int i = 0; i < extractor.getTrackCount(); i++) {
    if (extractor.getTrackFormat(i).getString(MediaFormat.KEY_MIME).startsWith("video/")) {
        extractor.selectTrack(i);
        break;
    }
}
do {
    long sampleMillis = extractor.getSampleTime() / 1000;
    // do something with sampleMillis
} while (extractor.advance() && extractor.getSampleTime() != -1);
extractor.release();
I'm looking at the class MediaRecorder of the Android SDK, and I was wondering if it can be used to record a video made from a Surface.
Example: I want to record what I display on my surface (a video game?) into a file.
As I said in the title: I'm not looking to record anything from the camera.
I think it is possible by overriding most of the class, but I'd very much like some ideas...
Beside, I'm not sure how the Camera class is used in MediaRecorder, and what I should get from my Surface to replace it.
Thank you for your interest!
PS: I'm looking at the native code used by MediaRecorder to get some clues; maybe it will inspire someone else:
http://www.netmite.com/android/mydroid/frameworks/base/media/jni/
The ability to record from a Surface was added in Android Lollipop. Here is the documentation:
http://developer.android.com/about/versions/android-5.0.html#ScreenCapture
Android 4.3 (API 18) adds some new features that do exactly what you want. Specifically, the ability to provide data to MediaCodec from a Surface, and the ability to store the encoded video as a .mp4 file (through MediaMuxer).
Some sample code is available here, including a patch for a Breakout game that records the game as you play it.
This is unfortunately not possible at the Java layer. All the communication between the Camera and Media Recorder happens in the native code layers, and there's no way to inject non-Camera data into that pipeline.
While Android 4.1 added the MediaCodec APIs, which allow access to the device's video encoders, there's no easy-to-use way to take the resulting encoded streams and save them as a standard video file. You'd have to find a library to do that or write one yourself.
You MAY wish to trace the rabbit hole from a different folder in AOSP:
frameworks/av/media
as long as you are comfortable with the NDK (C/C++, JNI, ...) and Android internals (permissions, ...).
It goes quite deep, and I am not sure how far you can go on a non-rooted device.
Here's an article on how to draw on an EGLSurface and generate a video using MediaCodec:
https://www.sisik.eu/blog/android/media/images-to-video
This uses OpenGL ES but you can have MediaCodec provide a surface that you can then obtain a canvas from to draw on. No need for OpenGL ES.
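A rough sketch of that Canvas route (the encoder, paint and frameIndex variables are assumed to exist; note that plain lockCanvas() tends to fail on an encoder input surface, so this uses the hardware-accelerated variant added in API 23):

Surface inputSurface = encoder.createInputSurface(); // call after configure(), before start()
encoder.start();

Canvas canvas = inputSurface.lockHardwareCanvas();      // API 23+
canvas.drawColor(Color.BLACK);                          // clear the frame
canvas.drawText("Frame " + frameIndex, 50, 50, paint);  // draw whatever you need
inputSurface.unlockCanvasAndPost(canvas);               // submits the frame to the encoder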
Is there any way to record audio in high quality?
And how can I detect that the user is saying something? In the Audio Recording application you can see such an indicator (I don't know the right name for it).
At the moment, a big reason for poor quality audio recording on Android is the codec used by the MediaRecorder class (the AMR-NB codec). However, you can get access to uncompressed audio via the AudioRecord class, and record that into a file directly.
The Rehearsal Assistant app does this to save uncompressed audio into a WAV file - take a look at the RehearsalAudioRecord class source code.
The RehearsalAudioRecord class also provides a getMaxAmplitude method, which you can use to detect the maximum audio level since the last time you called the method (MediaRecorder also provides this method).
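A minimal sketch of the AudioRecord approach (imports omitted; isRecording and outputStream are assumed to be your own flag and FileOutputStream, and you still have to write the WAV header yourself, as the RehearsalAudioRecord source shows):

int sampleRate = 44100;
int minBuf = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf * 4);

byte[] buffer = new byte[minBuf];
recorder.startRecording();
while (isRecording) {
    int read = recorder.read(buffer, 0, buffer.length);
    outputStream.write(buffer, 0, read);    // raw PCM; prepend a WAV header when done
}
recorder.stop();
recorder.release();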
For recording and monitoring, you can use the sound recorder activity.
Here's a snippet of code:
Intent recordIntent = new Intent(
        MediaStore.Audio.Media.RECORD_SOUND_ACTION);
startActivityForResult(recordIntent, REQUEST_CODE_RECORD);
For a perfect working example of how to record audio which includes an input monitor, download the open source Ringdroid project: https://github.com/google/ringdroid
Look at the screenshots and you'll see the monitor.
For making the audio higher quality, you'd need a better mic. The built in mic can only capture so much (which is not that good). Again, look at the ringdroid project, glean some info from there. At that point you could implement some normalization and amplification routines to improve the sound.
Here is a simple answer.
For sample rate, in terms of quality, 48000 Hz is almost the same as 16000 Hz.
For bit rate, in terms of quality, 96 kbps is much better than 16 kbps.
You can try stereo (channelCount = 2), but it makes little difference.
So, for Android phones, just set the audio bit rate higher and you will get better quality.
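In MediaRecorder terms, that advice translates to something like this sketch (outputPath is a placeholder; prepare() can throw, error handling omitted):

MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setAudioSamplingRate(48000);     // Hz; 16000 sounds nearly as good per the above
recorder.setAudioEncodingBitRate(96000);  // bps; this is the setting that matters most
recorder.setAudioChannels(2);             // stereo; makes little difference
recorder.setOutputFile(outputPath);
recorder.prepare();
recorder.start();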