I am using a SurfaceView to play video, and I want to capture an image from the running video. I have tried getDrawingCache(), but I get a blank image or a black screen; when I capture any other layout that doesn't contain video, it works fine.
So can anybody explain how to get bitmap images from a video or SurfaceView? Thanks in advance.
I have used this code, but it gives no result:
mVideo.setDrawingCacheEnabled(true);
mVideo.buildDrawingCache();
Bitmap b = mVideo.getDrawingCache();
This won't work because the video content is not "exposed" to the normal view layers.
If you are using API level 10 or higher, you can use MediaMetadataRetriever's getFrameAtTime().
Even this may not return the exact frame, as it is fairly hardware-dependent.
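A minimal sketch of that call, assuming a local file path (the path and timestamp are placeholders):

import android.media.MediaMetadataRetriever

// Pull a single frame from a video by timestamp (API 10+).
fun frameAt(path: String, timeUs: Long): android.graphics.Bitmap? {
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(path)
        // timeUs is in MICROseconds; OPTION_CLOSEST asks for the nearest
        // frame, but as noted above the result is decoder-dependent.
        retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST)
    } finally {
        retriever.release()
    }
}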
The most reliable way to do this is to use something like FFmpeg to access the stream yourself.
Related
I am using kmagick for resizing GIF images on Android. It works fine for static images but gives incorrect output for GIF or animated WebP files.
Magick.initialize().use {
    val wand = MagickWand()
    // getting the image
    wand.readImage(src)
    // resizing the image
    wand.resizeImage(512, 512, FilterType.UndefinedFilter)
    // saving the image
    wand.writeImage(dst)
    promise.resolve("Done at ${dst}")
}
Input: an animated GIF with multiple frames.
Output: I think it's a single frame, and a garbled one at that, though it is resized to the requested width and height.
What's happening here, and how do I fix it?
Just to leave a reference for the future, this was already answered in
https://github.com/cherryleafroad/kmagick/issues/3#issuecomment-1068044815
Anyway, to sum up what's going on:
GIFs can store only the differences between frames to save space.
It's due to the way GIF images work. The ImageMagick CLI uses the API in a particular way, and the kmagick bindings are only direct bindings to the C API, so if you want the same results the CLI produces, you have to use the API the same way. (kmagick isn't doing anything to the image; ImageMagick is, so the ImageMagick API docs are the more relevant place to look if you want matching results.)
I'm not sure off the top of my head which ImageMagick API the CLI uses to achieve those results, but I think extracting each frame individually and resizing the frames one by one would solve it.
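A sketch of that idea follows. Note that the method names for coalescing, iterating, and multi-frame writing are assumptions mirroring the MagickWand C functions MagickCoalesceImages, MagickResetIterator, MagickNextImage, and MagickWriteImages; check the kmagick bindings for the actual names.

Magick.initialize().use {
    val wand = MagickWand()
    wand.readImage(src) // reads every frame of the animation

    // ASSUMED binding for MagickCoalesceImages(): rebuilds each frame as a
    // full image instead of a diff against the previous frame.
    val frames = wand.coalesceImages()

    // ASSUMED bindings for MagickResetIterator()/MagickNextImage():
    // resize every frame, not just the current one.
    frames.resetIterator()
    while (frames.nextImage()) {
        frames.resizeImage(512, 512, FilterType.UndefinedFilter)
    }

    // ASSUMED binding for MagickWriteImages(): write all frames back out.
    frames.writeImages(dst, true)
}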
I am trying to get the last frame from a video as a Bitmap in good quality. I am currently using MediaMetadataRetriever with getFrameAtTime, but the picture quality is very poor (see a similar post here).
I tried the approach from https://bigflake.com/mediacodec/#ExtractMpegFramesTest, but I cannot get it to work; the extracted frames are all just horizontal lines.
Another very interesting library is https://github.com/wseemann/FFmpegMediaMetadataRetriever.
It can extract very good quality images from frames, but it only has one-second precision; for example, if I try to get the last frame, or a frame at 2300 ms, it only shows a corrupted image.
Are there any other options?
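One option worth sketching, if you can require API 28+: MediaMetadataRetriever.getFrameAtIndex() decodes an exact frame at full quality instead of seeking by timestamp, so the last frame can be addressed precisely. A minimal sketch:

import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

// API 28+: fetch the exact last frame by index instead of by time.
fun lastFrame(path: String): Bitmap? {
    val mmr = MediaMetadataRetriever()
    return try {
        mmr.setDataSource(path)
        val count = mmr.extractMetadata(
            MediaMetadataRetriever.METADATA_KEY_VIDEO_FRAME_COUNT
        )?.toIntOrNull() ?: return null
        mmr.getFrameAtIndex(count - 1) // decodes that exact frame
    } finally {
        mmr.release()
    }
}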
In Android, I have an ImageReader that emits images via onImageAvailable. I'm trying to forward those images to an ImageWriter to preview them on a SurfaceView. When I attempt to do so, I receive the error below.
java.lang.IllegalStateException: Trying to attach an opaque image into a non-opaque ImageWriter, or vice versa
I looked around and haven't found anyone else mentioning this issue. Does anyone know what it's referring to? The error seems to come from native code.
I just ran into this issue when trying to pass an Image from the camera into a Surface using ImageWriter. In my case, I fixed the problem by calling SurfaceHolder.setFormat() and passing in the same format the camera Image uses.
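A minimal sketch of that fix, assuming the camera frames arrive as YUV_420_888 (substitute whatever format your ImageReader actually reports):

import android.graphics.ImageFormat
import android.media.ImageReader
import android.media.ImageWriter
import android.view.SurfaceView

// The SurfaceHolder's format must match the Image format, otherwise
// ImageWriter.queueInputImage() throws the opaque/non-opaque error.
fun wire(reader: ImageReader, surfaceView: SurfaceView) {
    surfaceView.holder.setFormat(ImageFormat.YUV_420_888) // match the reader

    val writer = ImageWriter.newInstance(surfaceView.holder.surface, 2)
    reader.setOnImageAvailableListener({ r ->
        r.acquireLatestImage()?.let { image ->
            writer.queueInputImage(image) // ownership transfers to the writer
        }
    }, null)
}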
I would like to build a Vuforia Android application that includes some image processing operations.
I can do what I want by showing the processed current frame in an ImageView, but it is too slow. I have tried resizing the current image, but the result is not as good as showing the image in a GLSurfaceView.
So I thought that if I manipulate the current frame and then show it in a GLSurfaceView, it will be faster.
Is this possible, and if so, how can I do it? Could someone help me with this problem? Thanks in advance.
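For reference, the usual mechanism for showing a CPU-processed Bitmap in a GLSurfaceView is to upload it as a texture from the renderer's GL thread and draw a textured quad with it. This sketch shows only the texture upload; the quad drawing is standard GLES20 boilerplate, and none of this is Vuforia-specific:

import android.graphics.Bitmap
import android.opengl.GLES20
import android.opengl.GLUtils

// Upload a Bitmap as a GL texture; call from the GLSurfaceView.Renderer's
// GL thread, e.g. inside onDrawFrame().
fun uploadBitmap(bitmap: Bitmap): Int {
    val ids = IntArray(1)
    GLES20.glGenTextures(1, ids, 0)
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0])
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0)
    return ids[0]
}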
I know that I can use RenderScript on Android to blur images, but does anybody know whether I can apply the same to a VideoView so that my complete video is Gaussian-blurred?
VideoView, which extends SurfaceView, does not use the drawing cache because it is hardware accelerated, which means you won't be able to get stills. I was forced to scrap the design I had that used the paused video still.
Check out: VideoView getDrawingCache is returning black
Edit: As I look into this more, there might be a way through https://github.com/google/grafika, but I haven't seen anyone verify it as a performant workaround.
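One more avenue, not from the original answer: on API 24+, android.view.PixelCopy can copy a SurfaceView's current contents into a Bitmap, sidestepping the drawing cache entirely. A minimal sketch:

import android.graphics.Bitmap
import android.os.Handler
import android.os.Looper
import android.view.PixelCopy
import android.view.SurfaceView

// Copy the SurfaceView's current frame into a Bitmap (API 24+).
fun grabFrame(surfaceView: SurfaceView, onReady: (Bitmap?) -> Unit) {
    val bitmap = Bitmap.createBitmap(
        surfaceView.width, surfaceView.height, Bitmap.Config.ARGB_8888
    )
    PixelCopy.request(surfaceView, bitmap, { result ->
        onReady(if (result == PixelCopy.SUCCESS) bitmap else null)
    }, Handler(Looper.getMainLooper()))
}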
You could show a thumbnail of the video in front of it and blur that image, for example with this library: https://github.com/jrvansuita/GaussianBlur
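A sketch of that idea using the platform's own ThumbnailUtils plus the RenderScript blur the question mentions (the GaussianBlur library above is an alternative; its exact API isn't shown here):

import android.content.Context
import android.graphics.Bitmap
import android.media.ThumbnailUtils
import android.provider.MediaStore
import android.renderscript.Allocation
import android.renderscript.Element
import android.renderscript.RenderScript
import android.renderscript.ScriptIntrinsicBlur

// Grab a frame via ThumbnailUtils, blur it, and show the result in an
// ImageView placed in front of the VideoView.
fun blurredVideoThumb(context: Context, videoPath: String): Bitmap? {
    val thumb = ThumbnailUtils.createVideoThumbnail(
        videoPath, MediaStore.Video.Thumbnails.MINI_KIND
    ) ?: return null

    val rs = RenderScript.create(context)
    val input = Allocation.createFromBitmap(rs, thumb)
    val output = Allocation.createTyped(rs, input.type)
    ScriptIntrinsicBlur.create(rs, Element.U8_4(rs)).apply {
        setRadius(20f) // blur radius must be in (0, 25]
        setInput(input)
        forEach(output)
    }
    output.copyTo(thumb)
    rs.destroy()
    return thumb
}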