I am developing an application using Android OpenCV. The app offers two operations:
1. The frame read from the camera is passed to JNI via the native address from Mat.getNativeObjAddr(), and the processed image is returned through JavaCameraView's onCameraFrame() callback.
2. It reads a video clip from storage, processes each frame the same way as in #1, and returns the resulting image via the onCameraFrame() callback.
The first operation is implemented as simply as the following and works normally:
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    if (inputFrame != null) {
        Detect(inputFrame.rgba().getNativeObjAddr(), boardImage.getNativeObjAddr());
    }
    return boardImage;
}
However, a problem occurred with the second operation.
As far as I know, the video files in device storage cannot be read from JNI in the same way.
I have already tried FFmpegMediaPlayer and MediaMetadataRetriever, which I found through a Google search. However, MediaMetadataRetriever's getFrameAtTime() takes about 170 ms on average to grab a bitmap of a specific frame from a 1920x1080 video. What I need is to show the processed video in real time at 30 fps. In #1, the native Detect() function takes about 2 ms to process one frame.
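Roughly what I tried with MediaMetadataRetriever looked like this (timestampUs is the target position in microseconds; this is just an illustration of the slow path, not a recommendation):

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource("/storage/emulated/0/download/video.mp4");
// Decoding a single 1920x1080 frame this way took about 170 ms on average.
Bitmap frame = retriever.getFrameAtTime(timestampUs, MediaMetadataRetriever.OPTION_CLOSEST);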
For these reasons, I want to do the following: Java sends the video's path (e.g. /storage/emulated/0/download/video.mp4) to JNI, the native code processes the video one frame at a time, and the resulting image is displayed via onCameraFrame().
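On the Java side I imagine something like the sketch below. OpenVideo() and GetNextFrame() are hypothetical native functions I would implement myself (they are not existing APIs), and the native side would need its own decoder, e.g. cv::VideoCapture, which only works if the OpenCV build for Android includes a video backend:

// Hypothetical native functions: open the video once, then pull one processed frame per call.
public native boolean OpenVideo(String path);      // called once with the video path
public native boolean GetNextFrame(long resultMatAddr);

private Mat boardImage = new Mat();

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // Ignore the camera frame and return the next processed video frame instead.
    if (GetNextFrame(boardImage.getNativeObjAddr())) {
        return boardImage;
    }
    return inputFrame.rgba();   // fall back to the camera image when the video ends
}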
Is there a proper way? I look forward to your reply. Thank you!
I have a Flutter app that uses Dart FFI to connect to my custom C++ audio backend. There I allocate around 10 MB of memory in total for my audio buffers; each buffer gets 10 MB / 84 of that memory, since I use 84 audio players. Here is the FFI flow:
C++ bridge:
extern "C" __attribute__((visibility("default"))) __attribute__((used))
void *
loadMedia(char *filePath, int8_t *mediaLoadPointer, int64_t *currentPositionPtr, int8_t *mediaID) {
LOGD("loadMedia %s", filePath);
if (soundEngine == nullptr) {
soundEngine = new SoundEngine();
}
return soundEngine->loadMedia(filePath, mediaLoadPointer, currentPositionPtr, mediaID);
}
In my sound engine I launch a C++ thread:
void loadMedia() {
    std::thread{startDecoderWorker, buffer}.detach();
}

void startDecoderWorker(float *buffer) {
    // 30000 might be wrong here; I entered a huge value just to showcase the problem.
    // The real 10MB / 84 calculation is omitted as it is redundant to the issue.
    buffer = new float[30000];
}
So here is the problem: I don't know why, but when I allocate memory with the new keyword, even inside a C++ thread, Flutter's raster thread janks and my Flutter UI drops a lot of frames. This also shows in the performance overlay, which goes all red for 3 to 5 frames, each taking around 30-40 ms. Tested in profile mode.
Here is how I came to this conclusion:
If I return immediately from startDecoderWorker without running the memory allocation code, there is zero jank. Everything is smooth at 60 fps, and the performance overlay doesn't show any red bars.
Here are some screenshots from Profile mode:
The actual cause, after discussion in the comments of the question, is not that the memory allocation is too slow; it lies elsewhere: the calculations that follow, which become heavy when the allocation is big.
For details, please refer to the comments and discussions of the question ;)
Consider using libVLC for Android, based on the official recommended way.
I went through the compilation process without problems (but it took some time).
I'd like to have snapshot functionality, but I've found some very old threads (2-3 years old) stating that this feature is still not available (2016), or at least "not out of the box" according to this thread (2014).
Snapshot functionality is available on other platforms.
There are also some solutions where people switch from SurfaceView to TextureView.
However, I prefer sticking to SurfaceView, as TextureView brings some performance drawbacks (according to this topic).
Also, an official Android page states:
In API 24 and higher, it's recommended to implement SurfaceView instead of TextureView.
In 2014, the snapshot function had only two dependencies, according to the thread I mentioned earlier:
enabling sout module
enabling png as encoder
Looking at VideoLAN's "VLC-Android" repository, there is a file responsible for building libVLC.
In line 396, the sout module seems to be enabled by default.
Before compilation I've enabled png as encoder in vlc/contrib/src/ffmpeg/rules.mak as pointed out in the forum.
However, there is still no snapshot-related function in either org.videolan.libvlc.MediaPlayer or org.videolan.libvlc.VLCVideoLayout.
The question is: how can I create a snapshot (either into a file or into a buffer) on Android with libVLC, without using TextureView?
Update1:
Fact1:
I found the reason why it's unavailable on Android. In VLC's core source tree, in lib/video.c on line 145, there is the snapshot function with a massive FIXME warning:
/* FIXME: This is not atomic. All parameters should be passed at once
* (obviously _not_ with var_*()). Also, the libvlc object should not be
* used for the callbacks: that breaks badly if there are concurrent
* media players in the instance. */
var_Create( p_vout, "snapshot-width", VLC_VAR_INTEGER );
var_SetInteger( p_vout, "snapshot-width", i_width);
var_Create( p_vout, "snapshot-height", VLC_VAR_INTEGER );
var_SetInteger( p_vout, "snapshot-height", i_height );
var_Create( p_vout, "snapshot-path", VLC_VAR_STRING );
var_SetString( p_vout, "snapshot-path", psz_filepath );
var_Create( p_vout, "snapshot-format", VLC_VAR_STRING );
var_SetString( p_vout, "snapshot-format", "png" );
var_TriggerCallback( p_vout, "video-snapshot" );
vlc_object_release( p_vout );
Fact2:
I wanted to go in another direction with this. If the snapshot function is not usable (and not wise to use), I thought of some emergency solutions:
There is a video filter in VLC named scene, which produces still images of the video at a specific path. I tried using this, but video filters cannot be changed at runtime, so this attempt failed.
I also tried to do it from the MediaPlayer (via Media.addOption), but video filters also cannot be changed at the MediaPlayer level on Android.
I then tried passing the filter config as an argument to libVLC initialization, and it finally succeeded; however, creating a new libVLC instance every time I need a screenshot is not efficient. A rough sketch of that initialization follows below.
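The option names below come from the desktop VLC scene filter, and the path/ratio values are placeholders; I am not certain every scene-* option behaves identically on Android, so treat this as a sketch rather than a verified recipe:

ArrayList<String> options = new ArrayList<>();
options.add("--video-filter=scene");
options.add("--scene-format=png");
options.add("--scene-path=/storage/emulated/0/Pictures/");   // placeholder output directory
options.add("--scene-ratio=30");   // write roughly one image out of every 30 frames
LibVLC libVLC = new LibVLC(context, options);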
A few ways to go about this...
Here's a cross-platform thumbnailer example using libvlc: https://code.videolan.org/mfkl/libvlcsharp-samples/blob/master/PreviewThumbnailExtractor.Skia/Program.cs. It should work on Android without much editing, since it doesn't use any OS-specific feature in the script. You should be able to translate it to Java/Kotlin as well, I guess.
There is a libvlc function that allows you to take a snapshot. Just seek to the time you want and call it: https://www.videolan.org/developers/vlc/doc/doxygen/html/group__libvlc__video.html#ga9b0a3870ce962aa0358050b2d5a59143
In VLC Android, the medialibrary now manages thumbnails.
LibVLC 4 now bundles a thumbnailer https://github.com/videolan/vlc/blob/d40eb012b10cc355ea9ad7a13eaf494b8e826d78/include/vlc/libvlc_media.h#L845
Good luck.
I am trying to save image sequences with a fixed frame rate (preferably up to 30 fps) on an Android device with FULL camera2 capability (Galaxy S7), but I am unable to (a) get a steady frame rate or (b) reach even 20 fps (with JPEG encoding). I have already included the suggestions from Android camera2 capture burst is too slow.
The minimum frame duration for JPEG is 33.33 milliseconds (for resolutions below 1920x1080) according to
characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputMinFrameDuration(ImageFormat.JPEG, size);
and the stall duration is 0 ms for every size (similar for YUV_420_888).
My capture builder looks as follows:
captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CONTROL_AE_MODE_OFF);
captureBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, _exp_time);
captureBuilder.set(CaptureRequest.CONTROL_AE_LOCK, true);
captureBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, _iso_value);
captureBuilder.set(CaptureRequest.LENS_FOCUS_DISTANCE, _foc_dist);
captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CONTROL_AF_MODE_OFF);
captureBuilder.set(CaptureRequest.CONTROL_AWB_MODE, _wb_value);
// https://stackoverflow.com/questions/29265126/android-camera2-capture-burst-is-too-slow
captureBuilder.set(CaptureRequest.EDGE_MODE,CaptureRequest.EDGE_MODE_OFF);
captureBuilder.set(CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE, CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE_OFF);
captureBuilder.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_OFF);
captureBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_CANCEL);
// Orientation
int rotation = getWindowManager().getDefaultDisplay().getRotation();
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION,ORIENTATIONS.get(rotation));
The focus distance is set to 0.0 (infinity), ISO is set to 100, and the exposure time is 5 ms. White balance can be set to OFF/AUTO/any value; it does not impact the times below.
I start the capture session with the following command:
session.setRepeatingRequest(_capReq.build(), captureListener, mBackgroundHandler);
Note: it makes no difference whether I use a repeating request or a repeating burst.
In the preview (only texture surface attached), everything is at 30fps.
However, the frame rate drops as soon as I attach an ImageReader (with its listener running on a HandlerThread), which I instantiate as follows (without saving, only measuring the time between frames):
reader = ImageReader.newInstance(_img_width, _img_height, ImageFormat.JPEG, 2);
reader.setOnImageAvailableListener(readerListener, mBackgroundHandler);
With time-measuring code:
ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader myreader) {
        Image image = myreader.acquireNextImage();
        if (image == null) {
            return;
        }
        long curr = image.getTimestamp();
        Log.d("curr - last_ts", "" + ((curr - last_ts) / 1000000) + " ms");
        last_ts = curr;
        image.close();
    }
};
I get periodically repeating time differences like this:
99 ms - 66 ms - 66 ms - 99 ms - 66 ms - 66 ms ...
I do not understand why these frames take double or triple the time that the stream configuration map advertises for JPEG. The exposure time is well below the frame duration of 33 ms. Is there some other internal processing happening that I am not aware of?
I tried the same with the YUV_420_888 format, which resulted in constant time differences of 33 ms. The problem here is that the phone lacks the bandwidth to store the images fast enough (I tried the method described in How to save a YUV_420_888 image?). If you know of any method to compress or encode these images quickly, please let me know.
Edit: From the documentation of getOutputStallDuration: "In other words, using a repeating YUV request would result in a steady frame rate (let's say it's 30 FPS). If a single JPEG request is submitted periodically, the frame rate will stay at 30 FPS (as long as we wait for the previous JPEG to return each time). If we try to submit a repeating YUV + JPEG request, then the frame rate will drop from 30 FPS." Does this imply that I need to periodically request a single capture()?
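If that reading is right, the structure I have in mind is roughly the following sketch (previewBuilder and jpegBuilder are illustrative names for builders targeting the preview surface and the JPEG ImageReader surface, respectively):

// Repeating request drives the preview / YUV stream at a steady rate.
session.setRepeatingRequest(previewBuilder.build(), captureListener, mBackgroundHandler);

// Whenever a still is needed, submit a single JPEG capture; the repeating
// stream then only stalls for that one frame instead of permanently.
session.capture(jpegBuilder.build(), captureListener, mBackgroundHandler);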
Edit2: From https://developer.android.com/reference/android/hardware/camera2/CaptureRequest.html: "The necessary information for the application, given the model above, is provided via the android.scaler.streamConfigurationMap field using getOutputMinFrameDuration(int, Size). These are used to determine the maximum frame rate / minimum frame duration that is possible for a given stream configuration.
Specifically, the application can use the following rules to determine the minimum frame duration it can request from the camera device:
Let the set of currently configured input/output streams be called S.
Find the minimum frame durations for each stream in S, by looking it up in android.scaler.streamConfigurationMap using getOutputMinFrameDuration(int, Size) (with its respective size/format). Let this set of frame durations be called F.
For any given request R, the minimum frame duration allowed for R is the maximum out of all values in F. Let the streams used in R be called S_r.
If none of the streams in S_r have a stall time (listed in getOutputStallDuration(int, Size) using its respective size/format), then the frame duration in F determines the steady state frame rate that the application will get if it uses R as a repeating request."
JPEG output is by far not the fastest way to fetch frames. You can accomplish this a lot faster by drawing the frames directly onto a quad using OpenGL.
For burst capture, a faster solution would be capturing the images to RAM without encoding them, then encoding and saving them asynchronously.
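A rough sketch of that idea (illustrative names, untested): the ImageReader callback only copies raw bytes into a pre-allocated queue, and a worker thread does the slow encoding and writing.

BlockingQueue<byte[]> frameQueue = new LinkedBlockingQueue<>(120);

ImageReader.OnImageAvailableListener fastListener = reader -> {
    Image image = reader.acquireNextImage();
    if (image == null) return;
    // Copy the Y plane only as an example; the U and V planes would be copied the same way.
    ByteBuffer y = image.getPlanes()[0].getBuffer();
    byte[] copy = new byte[y.remaining()];
    y.get(copy);
    frameQueue.offer(copy);   // drop the frame if the queue is already full
    image.close();
};

Thread encoder = new Thread(() -> {
    try {
        while (!Thread.currentThread().isInterrupted()) {
            byte[] frame = frameQueue.take();
            // ...encode 'frame' (e.g. to JPEG) and write it to storage here...
        }
    } catch (InterruptedException ignored) {
        // stop encoding when interrupted
    }
});
encoder.start();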
On this website you can find a lot of excellent code related to android multimedia in general.
This specific program uses OpenGL to fetch the pixel data from an MPEG video. It's not difficult to use the camera as input instead of a video. You can basically use the texture used in the CodecOutputSurface class from the mentioned program as output texture for your capture request.
A possible solution I found consists of dumping the YUV data without encoding it as JPEG, combined with a micro SD card that can write up to 95 MB per second. (I had the misconception that YUV images would be larger; with a phone that fully supports the camera2 pipeline, the write speed should be the limiting factor.)
With this setup, I was able to achieve the following stable rates:
1920x1080 at 15 fps (approx. 4 MB * 15 == 60 MB/s)
960x720 at 30 fps (approx. 1.5 MB * 30 == 45 MB/s)
I then encode the images offline from YUV to PNG using a Python script.
I want to capture the audio waveform from the audio buffer. I found that android.media.audiofx.Visualizer can do such a thing, but it only returns partial, low-quality audio content.
I found that android.media.audiofx.Visualizer calls the function Visualizer_command(VISUALIZER_CMD_CAPTURE) in android4.0\frameworks\base\media\libeffects\visualizer.
I found that the function Visualizer_process reduces the audio content to low quality. I want to rewrite Visualizer_process and find out who calls it, but I cannot find the caller in the Android source code. Can anyone help me?
Thanks very much!
The AudioFlinger::PlaybackThread::threadLoop calls AudioFlinger::EffectChain::process_l, which calls AudioFlinger::EffectModule::process, which finally calls the actual effect's process function.
As you can see in AudioFlinger::EffectModule::process, there's the call
int ret = (*mEffectInterface)->process(mEffectInterface,
                                       &mConfig.inputCfg.buffer,
                                       &mConfig.outputCfg.buffer);
mEffectInterface is an effect_handle_t, which is an effect_interface_s**. The effect_interface_s struct (defined here) contains a number of function pointers (process, command, ...). These are filled in with pointers to the actual effect's functions when the effect is loaded. The effects provide these pointers through a struct (in EffectVisualizer it's gVisualizerInterface).
Note that the exact location of these functions may differ between different Android releases. So if you're looking at Android 4.0 you might find some of them in AudioFlinger.cpp (or somewhere else).
I'm working on an Android camera application, and I want to run my filter on sampled frames (not every frame; 8-10 fps is enough) from the camera in my Android OpenCV application. My frames are in Mat format, and I am using a "SampleJavaCameraView" that extends CameraCameraView. Briefly, I want to skip some frames without processing them.
Use a counter, and do your processing only when counter % 10 == 0 (in order to process every tenth frame).
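A minimal sketch of that (processFrame() stands in for whatever filter you apply; the name is illustrative, not from the question):

private int frameCounter = 0;
private Mat lastResult = new Mat();

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    if (frameCounter++ % 10 == 0) {          // run the filter on every tenth frame only
        lastResult = processFrame(rgba);
    }
    return lastResult.empty() ? rgba : lastResult;   // keep showing the last processed result
}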