I'm considering using libVLC for Android, built the officially recommended way. I went through the compilation process without problems (though it took some time).
I'd like to have the snapshot functionality, but I've found some very old (2-3 year old) threads stating that this feature is still not available (2016), at least "not out of the box" according to this thread (2014).
Snapshot functionality is available on other platforms.
There are also some solutions that switch from SurfaceView to TextureView. However, I prefer sticking with SurfaceView, as TextureView brings some performance drawbacks (according to this topic).
Also, an official Android page states:
In API 24 and higher, it's recommended to implement SurfaceView instead of TextureView.
Based on the thread I mentioned earlier, in 2014 the snapshot function had only two dependencies:
enabling sout module
enabling png as encoder
When looking at the "VLC-Android" repository of VideoLAN, there is a file responsible for building libVLC. On line 396, the sout module seems to be enabled by default.
Before compilation, I enabled png as an encoder in vlc/contrib/src/ffmpeg/rules.mak, as pointed out in the forum.
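For reference, the change was along these lines (quoted from memory, so treat the exact variable and placement as an assumption rather than a verbatim patch):
# in vlc/contrib/src/ffmpeg/rules.mak, alongside the other FFMPEGCONF switches
FFMPEGCONF += --enable-encoder=png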
However, there is still no snapshot-related function in either org.videolan.libvlc.MediaPlayer or org.videolan.libvlc.VLCVideoLayout.
The question is: how can I create a snapshot (either into a file or into a buffer) on Android with libVLC, without using TextureView?
Update1:
Fact1:
I found the reason why it's unavailable on Android. In VLC's core source tree, in file lib/video.c on line 145, there is the snapshot function with a massive FIXME warning:
/* FIXME: This is not atomic. All parameters should be passed at once
* (obviously _not_ with var_*()). Also, the libvlc object should not be
* used for the callbacks: that breaks badly if there are concurrent
* media players in the instance. */
var_Create( p_vout, "snapshot-width", VLC_VAR_INTEGER );
var_SetInteger( p_vout, "snapshot-width", i_width);
var_Create( p_vout, "snapshot-height", VLC_VAR_INTEGER );
var_SetInteger( p_vout, "snapshot-height", i_height );
var_Create( p_vout, "snapshot-path", VLC_VAR_STRING );
var_SetString( p_vout, "snapshot-path", psz_filepath );
var_Create( p_vout, "snapshot-format", VLC_VAR_STRING );
var_SetString( p_vout, "snapshot-format", "png" );
var_TriggerCallback( p_vout, "video-snapshot" );
vlc_object_release( p_vout );
Fact2:
I wanted to go in another direction with this. If the snapshot function is not usable (and also not wise to use), I thought of some emergency solutions:
There is a video filter in VLC named scene. It produces still images of the video at a specific path. I tried using it, but video filters cannot be changed at runtime, so this attempt failed.
I also tried to do it from the MediaPlayer (via Media.addOption), but video filters cannot be changed at the MediaPlayer level on Android either.
I then tried passing the filter configuration as an argument at libVLC initialization (see the sketch below), and it finally succeeded. However, creating a new libVLC instance every time I need a screenshot won't be efficient.
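For illustration, this is roughly how I passed the scene filter at initialization (a minimal sketch; the option names come from VLC's scene filter, while the context variable and the output path are placeholders of mine):
import java.util.ArrayList;
import org.videolan.libvlc.LibVLC;

// Create a LibVLC instance with the scene filter enabled from the start;
// with scene-ratio=1, every decoded frame is written as a PNG to scene-path.
ArrayList<String> options = new ArrayList<>();
options.add("--video-filter=scene");
options.add("--scene-format=png");
options.add("--scene-ratio=1");
options.add("--scene-path=/storage/emulated/0/Pictures/"); // placeholder path
LibVLC libVLC = new LibVLC(context, options);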
A few ways to go about this...
Here's a cross-platform thumbnailer example using libVLC: https://code.videolan.org/mfkl/libvlcsharp-samples/blob/master/PreviewThumbnailExtractor.Skia/Program.cs. It should work on Android without much editing, since it doesn't use any OS-specific features. You should be able to translate it to Java/Kotlin as well, I guess.
There is a libVLC function that allows you to take a snapshot. Just seek to the time you want and call it: https://www.videolan.org/developers/vlc/doc/doxygen/html/group__libvlc__video.html#ga9b0a3870ce962aa0358050b2d5a59143
In VLC Android, the medialibrary now manages thumbnails.
LibVLC 4 now bundles a thumbnailer https://github.com/videolan/vlc/blob/d40eb012b10cc355ea9ad7a13eaf494b8e826d78/include/vlc/libvlc_media.h#L845
Good luck.
The app I am working on gets video from the camera through a Surface and encodes it to video/avc (H264). This works great on phones like the Galaxy Note 10+, but on phones like the Xiaomi Note 10S, which is a new phone, I am having this issue. Here is what I am doing:
1. Create the format:
format = MediaFormat.createVideoFormat(
    H264, videoWidth, videoHeight
).apply {
    setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0)
    setInteger(MediaFormat.KEY_BIT_RATE, bitrate)
    setInteger(MediaFormat.KEY_FRAME_RATE, videoFrameRate)
    setInteger(
        MediaFormat.KEY_COLOR_FORMAT,
        CodecCapabilities.COLOR_FormatSurface
    )
    setFloat(MediaFormat.KEY_I_FRAME_INTERVAL, 1f)
}
2. Then find the encoder name:
val encoderName = MediaCodecList(
    MediaCodecList.ALL_CODECS
).findEncoderForFormat(format) // using the format I shared in the first step
3. Then create the codec:
codec = MediaCodec.createByCodecName(encoderName)
Then call .setCallback(callback) // not important, since we won't make it to this point; it crashes before that
4. And this is the line where it crashes:
codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE) // CRASH => MediaCodec$CodecException: Error 0x80001001
The rest:
codec.setInputSurface(surface)
codec.start()
I suspect this part:
setInteger(
    MediaFormat.KEY_COLOR_FORMAT,
    CodecCapabilities.COLOR_FormatSurface
) // I tried changing the value and completely removing this setInteger, no luck :/
Error 0x80001001, also known as OMX_ErrorUndefined, says: "There was an error, but the cause of the error could not be determined".
The most likely cause of this error is insufficient resources. This can happen, for example, if you try to configure a hardware codec but there is not enough graphics memory available at the moment.
Suggestion 1: Make sure you release the codecs when you are done using them. You need to check all code paths.
Suggestion 2: Knowing that this can happen, you can filter the MediaCodecList keeping all the encoders that support the given format. Then wrap the configure() call in a try/catch block. And, if the call fails, try the next option from the list of codecs.
Note that on most devices there are at least two codecs for H264: a hardware codec and a software codec. The former one having better performance, the latter one being more resilient.
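Here is a rough sketch of Suggestion 2 in Java (untested; it assumes API 21+ for MediaCodecList.getCodecInfos() and isFormatSupported(), and the helper name is my own):
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.media.MediaFormat;
import java.io.IOException;

// Try every encoder that claims to support the format; if configure() fails,
// release the codec and fall back to the next candidate.
MediaCodec createConfiguredEncoder(MediaFormat format) {
    MediaCodecList list = new MediaCodecList(MediaCodecList.ALL_CODECS);
    for (MediaCodecInfo info : list.getCodecInfos()) {
        if (!info.isEncoder()) continue;
        try {
            if (!info.getCapabilitiesForType("video/avc").isFormatSupported(format)) continue;
        } catch (IllegalArgumentException e) {
            continue; // this codec does not handle video/avc at all
        }
        MediaCodec codec = null;
        try {
            codec = MediaCodec.createByCodecName(info.getName());
            codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            return codec; // configured successfully
        } catch (IOException | MediaCodec.CodecException e) {
            if (codec != null) codec.release(); // free resources before the next attempt
        }
    }
    return null; // no encoder could be configured for this format
}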
I started learning Android 3 months ago, and I have some problems with MediaPlayer.setDataSource.
I want to get the path of an mp3 file in my raw directory to use with the MediaPlayer.
I have tried many ways, but the app is still not working, even though the program doesn't crash or report problems. I have tried many solutions from other posts, but it's still not working.
Here is my code:
String path = "android.resource://com.example.acer.appdemo/raw/emer2";
bleeding1.setDataSource(path);
bleeding1.prepareAsync();
bleeding1.start();
textView.setText(getString(R.string.Firstaid2));
count = 2;
The reason I chose this approach is that I want to make a program that changes the audio every time I swipe left or right. So I want the program to call setDataSource again each time I swipe, and the code above is one of my cases (the audio doesn't start every time I set a new path).
You have to reset the MediaPlayer (call bleeding1.reset()) before you can set a new data source.
See https://developer.android.com/reference/android/media/MediaPlayer.html for a helpful lifecycle diagram.
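As an illustration, a minimal sketch of such a swipe handler (the helper name is mine; it also uses the setDataSource(Context, Uri) overload for a raw resource, and starts playback from onPrepared(), since prepareAsync() returns before the player is actually ready):
import java.io.IOException;
import android.content.Context;
import android.media.MediaPlayer;
import android.net.Uri;

// Call this on every swipe with the next raw resource id, e.g. R.raw.emer2.
void playRaw(Context context, MediaPlayer player, int resId) {
    player.reset(); // back to the idle state, so setDataSource() is legal again
    Uri uri = Uri.parse("android.resource://" + context.getPackageName() + "/" + resId);
    try {
        player.setDataSource(context, uri);
    } catch (IOException e) {
        e.printStackTrace();
        return;
    }
    player.setOnPreparedListener(MediaPlayer::start); // start only once prepared
    player.prepareAsync();
}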
I am developing an application using OpenCV on Android. The app offers two operations:
1. The frame read from the camera is passed to JNI using the native function Mat.getNativeObjAddr(), and the new image is returned through JavaCameraView's onCameraFrame() function.
2. It reads a video clip from storage, processes each frame the same as #1, and returns the resulting image via the onCameraFrame() function.
First, the function is implemented as simply as the following and works normally:
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame)
{
    if (inputFrame != null) {
        Detect(inputFrame.rgba().getNativeObjAddr(), boardImage.getNativeObjAddr());
    }
    return boardImage;
}
However, the problem occurred in the second operation.
As far as I know, files in device storage are not directly readable from JNI.
I have already tried FFmpegMediaPlayer and MediaMetadataRetriever from a Google search. However, the getFrameAtTime() function provided by MediaMetadataRetriever took an average of 170 ms to grab a bitmap of a specific frame of a 1920x1080 video. What I have to develop must show the video results in real time at 30 fps. In #1, the native function Detect() takes about 2 ms to process one frame.
For these reasons, I want to do this: Java sends a video path (e.g. /storage/emulated/0/download/video.mp4) to JNI, native functions process the video one frame at a time, and the resulting image is displayed via onCameraFrame().
Is there a proper way? I look forward to your reply. Thank you!
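To make it concrete, here is a rough Java-side sketch of the loop I have in mind (untested; it assumes an OpenCV Android build whose videoio module can actually decode mp4, which depends on the backend the build ships with):
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

// Read the clip frame by frame and feed each frame to the same native Detect().
VideoCapture cap = new VideoCapture("/storage/emulated/0/download/video.mp4");
if (!cap.isOpened()) {
    throw new IllegalStateException("Could not open video file");
}
Mat frame = new Mat();
Mat result = new Mat();
while (cap.read(frame)) { // read() returns false at the end of the stream
    Detect(frame.getNativeObjAddr(), result.getNativeObjAddr());
    // hand `result` to the UI here, e.g. cache it for onCameraFrame()
}
cap.release();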
I want to capture the audio waveform from the audio buffer. I found that android.media.audiofx.Visualizer can do such a thing, but it only returns partial, low-quality audio content.
I found that android.media.audiofx.Visualizer calls the function Visualizer_command(VISUALIZER_CMD_CAPTURE) in android4.0/frameworks/base/media/libeffects/visualizer.
I found that the function Visualizer_process reduces the audio content to low quality. I want to rewrite Visualizer_process, and I want to find who calls Visualizer_process, but I cannot find the caller in the Android source code. Can anyone help me?
Thanks very much!
The AudioFlinger::PlaybackThread::threadLoop calls AudioFlinger::EffectChain::process_l, which calls AudioFlinger::EffectModule::process, which finally calls the actual effect's process function.
As you can see in AudioFlinger::EffectModule::process, there's the call
int ret = (*mEffectInterface)->process(mEffectInterface,
                                       &mConfig.inputCfg.buffer,
                                       &mConfig.outputCfg.buffer);
mEffectInterface is an effect_handle_t, which is an effect_interface_s**. The effect_interface_s struct (defined here) contains a number of function pointers (process, command, ...). These are filled out with pointers to the actual effect's functions when the effect is loaded. The effects provide these pointers through a struct (in EffectVisualizer it's gVisualizerInterface).
Note that the exact location of these functions may differ between different Android releases. So if you're looking at Android 4.0 you might find some of them in AudioFlinger.cpp (or somewhere else).
In my Android application, I am encoding some media in webm (vp8) format using MediaCodec. The encoding is working as expected. However, I need to ensure that I create a sync frame once in a while. Here is what I do:
encoder.queueInputBuffer(..., MediaCodec.BUFFER_FLAG_SYNC_FRAME);
Later in the code, I check for a sync frame:
encoder.dequeueOutputBuffer(bufferInfo, 0);
boolean isSyncFrame = (bufferInfo.flags & MediaCodec.BUFFER_FLAG_SYNC_FRAME) != 0;
The problem is that isSyncFrame never gets a true value.
I am wondering if I am making a mistake in my encoding configuration. Maybe there is a better way to tell the encoder to create a sync frame once in a while.
I hope it is not a bug in MediaCodec. Thank you in advance for your help.
There is no way (current as of Android 4.3) to request an on-demand sync frame using MediaCodec encoders. This is partly due to OMX, the underlying codec implementation in Android, which does not provide a way to specify which input frame should be encoded as a sync frame, although it does have a way to trigger a sync frame "in the near future".
feisal's answer is the only currently supported way to control sync frames, but you have to do it at configuration time.
Edit (re: jesup):
You can trigger a sync frame in the near future using MediaCodec.setParameter:
Bundle params = new Bundle();
params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
mCodec.setParameters(params);
Unfortunately, there is no (reliable) way to tell in MediaCodec whether an encoded buffer is a sync frame, other than doing it on your own by inspecting the output bytes.
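For VP8 specifically, a rough sketch of such a check (my own example, based on the frame-tag layout in RFC 6386, where the lowest bit of the first byte is 0 for a key frame):
import java.nio.ByteBuffer;

// After dequeueOutputBuffer() has filled bufferInfo and returned outputBufferIndex:
ByteBuffer buf = encoder.getOutputBuffers()[outputBufferIndex];
// VP8 frame tag (RFC 6386): bit 0 of the first byte is 0 for a key frame.
boolean isKeyFrame = (buf.get(bufferInfo.offset) & 0x01) == 0;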
You can set the rate of I-frames in the MediaFormat object of your encoder with setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, secsBetweenIframes);
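For example (a minimal sketch; width, height, bitRate, and the color format are placeholders that depend on how you feed input):
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

// Ask the VP8 encoder at configure time for one sync frame every 2 seconds.
MediaFormat format = MediaFormat.createVideoFormat("video/x-vnd.on2.vp8", width, height);
format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2); // one I-frame every 2 seconds
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);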