Android RenderScript to blur video in VideoView

I know that I can use RenderScript on Android to blur images, but does anybody know whether I can apply the same to a VideoView, so that my complete video is Gaussian-blurred?

VideoView extends SurfaceView, whose content is rendered on a separate hardware-accelerated surface and therefore never appears in the drawing cache. This means you won't be able to grab stills from it. I was forced to scrap a design of mine that relied on a still of the paused video.
Check out: VideoView getDrawingCache is returning black
Edit: As I look into this more, there might be a way through https://github.com/google/grafika, but I haven't seen anyone verify it as a performant workaround.

You could show a thumbnail of the video in front of it instead. You can then blur that image using this library: https://github.com/jrvansuita/GaussianBlur
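If you would rather stay with the platform APIs the question mentions, here is a minimal sketch that grabs a frame with MediaMetadataRetriever and blurs it with RenderScript's ScriptIntrinsicBlur (deprecated on recent Android versions, but it is what the question asks about). The function name, the default radius, and the ARGB_8888 copy are my own choices, not part of either API:

import android.content.Context
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever
import android.renderscript.Allocation
import android.renderscript.Element
import android.renderscript.RenderScript
import android.renderscript.ScriptIntrinsicBlur

// Grab a frame of the video and blur it, so the result can be shown in an
// ImageView stacked on top of the VideoView.
fun blurredThumbnail(context: Context, videoPath: String, radius: Float = 20f): Bitmap? {
    val retriever = MediaMetadataRetriever()
    val frame: Bitmap? = try {
        retriever.setDataSource(videoPath)
        retriever.getFrameAtTime(0L) // first available frame
    } finally {
        retriever.release()
    }
    // ScriptIntrinsicBlur expects ARGB_8888, so copy the frame into that config.
    val bitmap = frame?.copy(Bitmap.Config.ARGB_8888, true) ?: return null

    val rs = RenderScript.create(context)
    val input = Allocation.createFromBitmap(rs, bitmap)
    val output = Allocation.createTyped(rs, input.type)
    val blur = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs))
    blur.setRadius(radius) // valid range is (0, 25]
    blur.setInput(input)
    blur.forEach(output)
    output.copyTo(bitmap)
    rs.destroy()
    return bitmap
}

Show the returned bitmap in an ImageView layered over the VideoView, and hide it when playback resumes.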

Related

Android: Animated gif image resize in kmagick

I am using kmagick to resize GIF images on Android. It works fine for static images but gives incorrect output for GIF or animated WebP files.
Magick.initialize().use {
    val wand = MagickWand()
    // read the image
    wand.readImage(src)
    // resize the image
    wand.resizeImage(512, 512, FilterType.UndefinedFilter)
    // save the image
    wand.writeImage(dst)
    promise.resolve("Done at ${dst}")
}
Input: an animated GIF with multiple frames.
Output: what appears to be a single, garbled frame, although it is resized to the requested width and height.
What's happening here, and how do I fix it?
For future reference, this was already answered in
https://github.com/cherryleafroad/kmagick/issues/3#issuecomment-1068044815
To sum up what's going on:
GIFs can store only the difference between consecutive frames to save space, and that is what breaks a naive per-image resize. The ImageMagick CLI uses the C API in a particular way, and the kmagick bindings are only direct bindings to that C API, so if you want the same results the CLI produces, you have to use the API in the same way. kmagick isn't doing anything to the image; ImageMagick is, so the ImageMagick API docs (or their support channels) are the right place to find the matching sequence of calls.
I'm not sure off the top of my head which ImageMagick API calls achieve those results, but I think extracting each frame individually and resizing the frames one by one would solve it, as in the sketch below.
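Along those lines, here is a hedged sketch of the frame-by-frame approach. The coalesceImages(), resetIterator(), nextImage(), and writeImages() calls (and the import path) are assumptions based on the underlying MagickWand C API (MagickCoalesceImages, MagickResetIterator, MagickNextImage, MagickWriteImages); check the kmagick bindings for the exact names:

import com.cherryleafroad.kmagick.* // package name assumed; adjust to your kmagick version

fun resizeAnimated(src: String, dst: String) {
    Magick.initialize().use {
        val wand = MagickWand()
        wand.readImage(src)

        // Rebuild every frame as a full image, since GIF frames may store
        // only the delta against the previous frame.
        val coalesced = wand.coalesceImages()

        // Walk the frame list and resize each frame individually.
        coalesced.resetIterator()
        while (coalesced.nextImage()) {
            coalesced.resizeImage(512, 512, FilterType.UndefinedFilter)
        }

        // Write all frames back out as a single animated file.
        coalesced.writeImages(dst, true)
    }
}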

Unity VideoPlayer not rendering Video correctly to Texture

I'm trying to use a VideoPlayer component, with a URL source and a RenderTexture as the target, to show a video in my Unity mobile game. The video loads and starts playing, but the resulting texture is only one color. The color does change every frame to something matching what the video would look like that frame, but it's still just the one color. Audio works fine.
On the VideoPlayer component, the Aspect Ratio is set to "Fit Inside", but I have tried all the options here with the same result. As for the RenderTexture, it's set to the same resolution as the input video, and the Color Format is set to RGB565 (which both Android and iOS should support according to SystemInfo.SupportsRenderTextureFormat()). I'm all out of ideas; any help would be appreciated.
EDIT: A workaround is to use "Material Override" instead of rendering to a texture. This doesn't help if you need the texture itself rather than just showing the video on a material, and Material Override doesn't support objects with multiple renderers/materials. Not really a fix, but a workaround for those who find this question before a solution has been posted.
I just had this problem myself and fixed it.
On the Raw Image component, find the UV Rect and set its W and H back to 1. I had changed those values, which made the image sample only a single pixel of the texture.

Vuforia: Manipulate Current Image Before Showing it in GLSurfaceView

I would like to build a Vuforia Android application that includes some image-processing operations.
I can get the result I want by showing the processed current frame in an ImageView, but it is too slow. I have tried resizing the current image, but the result doesn't look as good as rendering in the GLSurfaceView.
So I thought that if I manipulate the current frame and then show it in the GLSurfaceView, it would be faster.
I wonder whether that is possible. If so, how can I do it? Could someone help me with this problem? Thanks in advance.

Access Android Camera Frames with no Preview

I want to do real-time image processing with OpenCV on frames from the Android camera, but all the OpenCV Android examples show a preview of the image being captured. I really don't need previews of the frames; is there any way to get the frames without actually showing the preview?
A quick/naive way would be to make your onCameraFrame method return null and set your CameraBridgeViewBase's visibility to SurfaceView.INVISIBLE or SurfaceView.GONE.
I'd still like to know whether that worked for you, but in the meantime I found the solution that worked for me. As I'm grateful to the person who wrote it, I'm sharing this (scroll down to the answers section) for anyone who has the same problem in the future.
In case the link vanished, here's his/her pro-tip:
You can set your preview (in this case a CameraBridgeViewBase) transparent by setting the alpha value of the view, with 0 being completely invisible.
mOpenCvCameraView.setAlpha(0);
This should make your preview "disappear".
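Putting the pieces together, here is a minimal sketch of a hidden-preview setup using the standard OpenCV4Android classes. The activity wiring is my own (and the CAMERA permission, plus the permission-granted handling newer OpenCV versions require, still has to be dealt with); only the alpha trick itself comes from the quoted answer:

import android.app.Activity
import android.os.Bundle
import org.opencv.android.CameraBridgeViewBase
import org.opencv.android.JavaCameraView
import org.opencv.android.OpenCVLoader
import org.opencv.core.Mat

class HiddenPreviewActivity : Activity(), CameraBridgeViewBase.CvCameraViewListener2 {

    private lateinit var cameraView: JavaCameraView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        OpenCVLoader.initDebug() // load OpenCV's native libraries first

        cameraView = JavaCameraView(this, CameraBridgeViewBase.CAMERA_ID_ANY)
        cameraView.setCvCameraViewListener(this)
        cameraView.alpha = 0f // the quoted trick: the view is laid out but fully transparent
        setContentView(cameraView)
    }

    override fun onResume() {
        super.onResume()
        cameraView.enableView()
    }

    override fun onPause() {
        super.onPause()
        cameraView.disableView()
    }

    override fun onCameraViewStarted(width: Int, height: Int) {}
    override fun onCameraViewStopped() {}

    override fun onCameraFrame(inputFrame: CameraBridgeViewBase.CvCameraViewFrame): Mat {
        val rgba = inputFrame.rgba()
        // ... run your OpenCV processing on rgba here ...
        return rgba // still returned, but never visible because alpha is 0
    }
}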
I wonder how your solution works, because as you can read in the docs:
This method is invoked when delivery of the frame needs to be done. The returned values - is a modified frame which needs to be displayed on the screen.
Indeed, in my case it doesn't work, but I desperately need such functionality.

How to capture bitmap image from video

I am using a SurfaceView to play video and I want to grab an image from the running video. I have tried getDrawingCache(), but I get a blank image or a black screen. When I capture any other layout that doesn't contain video, it works fine.
So can anybody explain how to get bitmap images from a video or SurfaceView? Thanks in advance, and please answer if you know.
I have used this code, but it gives no result:
mVideo.setDrawingCacheEnabled(true);
mVideo.buildDrawingCache();
Bitmap b = mVideo.getDrawingCache();
This won't work because the video content is not "exposed" to the normal view layers.
If you are using API level >= 10, you can use MediaMetadataRetriever's getFrameAtTime(). It's described here.
Even that may not return the exact frame you ask for, as the behaviour is fairly hardware dependent.
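For reference, a minimal sketch of that approach, assuming a local file path (the function name is mine; note that getFrameAtTime() takes microseconds, not milliseconds):

import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

// Grab the frame nearest to positionMs (milliseconds) from a video file.
// OPTION_CLOSEST asks for the nearest frame rather than the nearest sync
// (key) frame, though the result is still decoder dependent, as noted above.
fun frameAt(videoPath: String, positionMs: Long): Bitmap? {
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(videoPath)
        retriever.getFrameAtTime(positionMs * 1000, MediaMetadataRetriever.OPTION_CLOSEST)
    } finally {
        retriever.release()
    }
}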
The surest way to do this is to use something like FFmpeg to access the stream yourself.
