Got wrong data from ImageReader when reading from a video? - android

I am writing an app to grab every frame from a video so that I can do some CV processing.
According to the Android API docs, I should set the MediaPlayer's surface to ImageReader.getSurface() so that I can get every video frame in the OnImageAvailableListener callback. This does work on some devices with some videos.
However, on my Nexus 5 (API 24-25), I get almost entirely green pixels when onImageAvailable fires.
I have checked the byte[] in the image's YUV planes, and I discovered that the bytes I read from the video must be wrong: most of the bytes are Y = 0, UV = 0, which leads to a strange image full of green pixels.
I have made sure the video is YUV420sp. Could anyone help me, or recommend another way to grab frames? (I have tried JavaCV, but its grabber is too slow.)

I fixed my problem!
When using an Image, we should use getCropRect() to get the valid area of the Image.
For example, I get a padded dimension (1088) when I decode a 1920x1080 frame; I should use image.getCropRect() to get the valid size of the image, which will be 1920x1080.
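To make the fix concrete, here is a minimal sketch of copying only the valid region out of a padded Y plane. On Android the crop geometry would come from image.getCropRect() and the stride from the plane, as shown in the comments; the copy itself is plain array arithmetic.

```java
// On Android you would obtain these values from an android.media.Image:
//   Rect crop = image.getCropRect();
//   int rowStride = image.getPlanes()[0].getRowStride();
//   (and read the plane's ByteBuffer into a byte[])
public class CropPlane {
    /** Copies a cropWidth x cropHeight window out of a row-padded plane. */
    static byte[] cropPlane(byte[] plane, int rowStride,
                            int cropLeft, int cropTop,
                            int cropWidth, int cropHeight) {
        byte[] out = new byte[cropWidth * cropHeight];
        for (int row = 0; row < cropHeight; row++) {
            int src = (cropTop + row) * rowStride + cropLeft;
            System.arraycopy(plane, src, out, row * cropWidth, cropWidth);
        }
        return out;
    }
}
```

The same loop applies to the U and V planes, with the crop coordinates and dimensions halved for YUV420.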

Related

Conversion of RAW10 (images / videos) to RGB in Android Studio

There are a few problems, which I am listing here.
I am using an Omnivision image sensor to get raw video and images. I have to convert the raw images to bitmaps, and the video to MJPEG format.
I tried getting the data via a Uri, then an InputStream, then a byte[] of shape N x 1, where I got about a million values. I am not sure whether this is the right way to get the image. Then I tried to decode it using imgcodecs. I bit-shifted and added the values, but it took a lot of time and the app crashed. Instead of that, I reshaped the data into M x N and tried to display it as a bitmap, which came out null. I tried demosaicing, but could not get it to work. I tried decoding it as a bitmap too, and the app crashed again.
Is there any way I could stream it directly in Android Studio? I need to convert this raw video format into MJPEG. I tried to stream it in Python just like a webcam, which gave a "can't grab frame" error and something to do with MSMF.
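For the bit-shifting step, Android's ImageFormat.RAW10 documentation specifies the packing: every 4 pixels occupy 5 bytes, with the first 4 bytes holding the high 8 bits of each pixel and the 5th byte holding the 2 low bits of all four. A sketch of unpacking one packed row, assuming RAW10 is indeed the sensor's output format:

```java
public class Raw10 {
    /** Unpacks one RAW10 row: every 5 bytes hold 4 ten-bit pixels. */
    static int[] unpackRow(byte[] packed) {
        int groups = packed.length / 5;
        int[] out = new int[groups * 4];
        for (int g = 0; g < groups; g++) {
            int base = g * 5;
            int lsbs = packed[base + 4] & 0xFF;  // 2 low bits per pixel
            for (int p = 0; p < 4; p++) {
                int msb = packed[base + p] & 0xFF;
                out[g * 4 + p] = (msb << 2) | ((lsbs >> (2 * p)) & 0x3);
            }
        }
        return out;
    }
}
```

The unpacked 10-bit values are still a Bayer mosaic, so they would need demosaicing before they can be displayed as RGB.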

How to avoid cropping of the depth frame in ARCore when acquiring the depth image?

I want to get the depth image during playback of an mp4 file in ARCore.
First, I recorded live camera frames (depth, color) to an mp4 file.
This mp4 file has 3 tracks: 1280x720 (color), 640x480 (color), and 640x480 (depth).
Next, I start playback of this mp4 file using the session.setPlaybackDataset() function in ARCore,
and I tried to get the color image and depth image using the functions below:
textureImage = frame.acquireCameraImage();
depthImage = frame.acquireDepthImage();
In this case, the textureImage's size is 640x480, but the depthImage's size is 640x360 (cropped).
I want to get a 640x480 (non-cropped) depthImage.
I tried to find functions for changing the frame's size before starting playback, but I cannot find any solution.
How can I get a non-cropped depth image? My test device is a Samsung Galaxy S20+.
Please help me.
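No answer is recorded here, but since 640x360 is exactly the 16:9 band of a 640x480 (4:3) frame, one workaround is to map coordinates between the two images instead of trying to un-crop the depth image. The sketch below rests on an assumption that is not from the ARCore docs: that the depth image covers a vertically centered band of the color frame at the same horizontal resolution, so columns map 1:1 and rows shift by half the height difference.

```java
public class DepthMapping {
    // Assumption (not documented by ARCore): the depth image is a vertically
    // centered crop of the color frame with identical horizontal resolution.
    // Returns -1 when the color row falls outside the depth image's band.
    static int colorRowToDepthRow(int colorRow, int colorHeight, int depthHeight) {
        int depthRow = colorRow - (colorHeight - depthHeight) / 2;
        return (depthRow >= 0 && depthRow < depthHeight) ? depthRow : -1;
    }
}
```

Under that assumption, color rows 60..419 of a 640x480 frame map onto depth rows 0..359, and the top and bottom 60 rows simply have no depth data.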

camera matrix original image

As far as I know, when you use the camera it crops part of the image; the application cuts out the part of the photo that goes beyond the rectangle.
Is there any way to get the original, full-sized image received directly from the camera's sensor?
Root access on my device is available.
I did a small demo years ago:
https://sourceforge.net/p/javaocr/code/HEAD/tree/trunk/demos/camera-utils/src/main/java/net/sf/javaocr/demos/android/utils/camera/CameraManager.java#l8
The basic idea is to set up a callback; your raw image data is then delivered via a byte array (setPreviewCallback() / onPreviewFrame()). No root access is necessary: the data arrives as a memory-mapped buffer coming directly from the address space of the camera process.
As this byte array does not provide any metadata, you have to get all the parameters from the Camera object yourself.
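A sketch of the callback flow described above, using the legacy android.hardware.Camera API. The framework wiring is shown as comments (it only runs on a device); the buffer-size arithmetic for NV21, the default preview format, is the runnable part.

```java
// Legacy android.hardware.Camera wiring, roughly:
//
//   camera.setPreviewCallback(new Camera.PreviewCallback() {
//       @Override public void onPreviewFrame(byte[] data, Camera cam) {
//           // 'data' is one raw NV21 preview frame from the camera HAL.
//       }
//   });
//   camera.startPreview();
//
// The byte[] carries no metadata, so width, height, and format must be read
// from camera.getParameters(). For NV21 (12 bits per pixel):
public class PreviewBuffer {
    static int nv21BufferSize(int width, int height) {
        return width * height * 3 / 2;  // full-res Y plane + half-res interleaved VU
    }
}
```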

Is video as a source for the canvas drawImage() method supported on Android?

Do you know if video as a source for the canvas drawImage() method is supported on Android?
The goal is to display the video and, at a chosen moment, take a picture of that frame (drawImage(video, 0, 0), then read back the canvas). Do you think it is doable?
Thanks!
There is an approach in Android that will return a bitmap for a given point in a video, which may give you what you need (or needed, as this is an old question!):
MediaMetadataRetriever (https://developer.android.com/reference/android/media/MediaMetadataRetriever.html#getFrameAtTime%28long,%20int%29)
getFrameAtTime
Added in API level 10
Bitmap getFrameAtTime (long timeUs,
int option)
Call this method after setDataSource(). This method finds a representative frame close to the given time position by considering the given option if possible, and returns it as a bitmap. This is useful for generating a thumbnail for an input data source or just to obtain and display a frame at the given time position.
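A minimal usage sketch of the API quoted above. The retriever calls are shown as comments (they need an Android runtime); note that getFrameAtTime() expects microseconds, not milliseconds.

```java
// Typical MediaMetadataRetriever usage, roughly:
//
//   MediaMetadataRetriever retriever = new MediaMetadataRetriever();
//   retriever.setDataSource(videoPath);  // videoPath: your video file
//   Bitmap frame = retriever.getFrameAtTime(
//           FrameGrab.msToUs(5_000),     // frame near the 5-second mark
//           MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
//   retriever.release();
public class FrameGrab {
    static long msToUs(long millis) {
        return millis * 1_000L;  // getFrameAtTime takes microseconds
    }
}
```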

Are preview buffers used for taking a picture too?

I'm working on a custom camera app, and when I try to take higher-resolution pictures, my JPEG callback is never called. I ran logcat, and I saw this message:
E/Camera-JNI(14689): Manually set buffer was too small! Expected 1473253 bytes, but got 768000!
As far as I know, I don't manually set the buffer size for taking a picture, but I do call addCallbackBuffer() for capturing preview images.
Are those same buffers used for taking a picture as well as for the preview? The description in the Android developer docs says "Adds a pre-allocated buffer to the preview callback buffer queue.", which uses the word "preview", so I wouldn't think it has anything to do with takePicture().
So where is this manually allocated buffer it speaks of coming from?
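No answer is recorded here, but for reference, the Camera docs size preview callback buffers from the preview dimensions and the bits-per-pixel of the preview format. A sketch, assuming the legacy Camera API (framework calls in comments, the size formula as runnable code):

```java
// With setPreviewCallbackWithBuffer(), each buffer handed to
// addCallbackBuffer() must be at least this large:
//
//   Camera.Parameters p = camera.getParameters();
//   Camera.Size s = p.getPreviewSize();
//   int bpp = ImageFormat.getBitsPerPixel(p.getPreviewFormat());
//   camera.addCallbackBuffer(new byte[CallbackBuffers.size(s.width, s.height, bpp)]);
public class CallbackBuffers {
    static int size(int width, int height, int bitsPerPixel) {
        return width * height * bitsPerPixel / 8;
    }
}
```

For a 640x480 NV21 preview (12 bits per pixel) this gives 460800 bytes; a buffer sized that way would indeed be far smaller than the 1473253 bytes the log above says the (higher-resolution) capture expected.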

Categories

Resources