get data byte[] in real time using camera2 api - android

I am working with the Camera2 API and real-time image processing. I found the method
onCaptureProgressed(CameraCaptureSession, CaptureRequest, CaptureResult)
which is called for every captured frame, but I have no idea how to get a byte[] of image data from the CaptureResult.

You can't get image data from CaptureResult; it only provides image metadata.
Take a look at the Camera2Basic sample app, which captures JPEG images with an ImageReader. If you change the ImageReader's format from JPEG to YUV, set its resolution to the preview size, and add its Surface as a target of the repeating preview request, you'll get an Image for every frame captured.
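A rough sketch of that per-frame flow, assuming a Camera2Basic-style setup where previewSize, backgroundHandler and previewRequestBuilder already exist (those names are placeholders, not taken from the sample verbatim):

// Sketch only: a YUV ImageReader delivering a byte[] for every preview frame.
ImageReader reader = ImageReader.newInstance(
        previewSize.getWidth(), previewSize.getHeight(),
        ImageFormat.YUV_420_888, /*maxImages*/ 2);

reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;
    // The Y plane is a full-resolution luminance buffer; copy it out
    // before closing the Image.
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    byte[] yBytes = new byte[yBuffer.remaining()];
    yBuffer.get(yBytes);
    // ... process yBytes (and the U/V planes if chroma is needed) ...
    image.close(); // always close, or the reader runs out of buffers
}, backgroundHandler);

// Add the reader's Surface as an extra target of the repeating preview request:
previewRequestBuilder.addTarget(reader.getSurface());

The reader's Surface also has to be included in the list of output surfaces passed to createCaptureSession.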

Related

Got wrong data by ImageReader when reading from a video?

I am writing an app to grab every frame from a video so that I can do some CV processing.
According to the description in Android's API docs, I should set the MediaPlayer's surface to ImageReader.getSurface(), so that I get every video frame in the OnImageAvailableListener callback. This really does work on some devices and some videos.
However, on my Nexus 5 (API 24-25), I get almost entirely green pixels when onImageAvailable fires.
I have checked the byte[] in the Image's YUV planes, and the bytes I read from the video must be wrong: most of them are Y = 0, UV = 0, which leads to a strange image full of green pixels.
I have made sure the video is YUV420sp. Could anyone help me, or recommend another way to grab frames? (I have tried JavaCV, but the grabber is too slow.)
I fixed my own question!
When using Image, we should use getCropRect() to get the valid area of the Image.
For example, the decoded Image reports a padded dimension of 1088 when I decode a 1920*1080 frame; image.getCropRect() returns the correct size of the image, which is 1920x1080.
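A hedged sketch of what that fix looks like when copying the Y plane out of an Image obtained from the OnImageAvailableListener (variable names are mine):

// Sketch: read only the valid region reported by the decoder.
Image image = reader.acquireLatestImage();
Rect crop = image.getCropRect();           // e.g. 1920x1080 inside a padded buffer
int width = crop.width();
int height = crop.height();

Image.Plane yPlane = image.getPlanes()[0];
ByteBuffer yBuffer = yPlane.getBuffer();
int rowStride = yPlane.getRowStride();     // may be wider than the crop width

byte[] yBytes = new byte[width * height];
for (int row = 0; row < height; row++) {
    yBuffer.position((crop.top + row) * rowStride + crop.left);
    yBuffer.get(yBytes, row * width, width);
}
image.close();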

Taking grayscale picture with Android's camera2

I am creating an app for taking pictures and sending them via HTTP POST to my server. Since I only need grayscale data on the server side, it would be much better to just take a grayscale picture and not have to convert it.
I am using Camera2 API and I have an issue with setting properties for CaptureRequest.Builder instance. With this:
final CaptureRequest.Builder captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureBuilder.set(CaptureRequest.CONTROL_EFFECT_MODE, CaptureRequest.CONTROL_EFFECT_MODE_NEGATIVE);
It takes a negative photo.
But this:
final CaptureRequest.Builder captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureBuilder.set(CaptureRequest.CONTROL_EFFECT_MODE, CaptureRequest.CONTROL_EFFECT_MODE_MONO);
Does absolutely nothing. No grayscale, just a normal picture.
You need to look at the list of effects supported by your device (CameraCharacteristics.CONTROL_AVAILABLE_EFFECTS) to see whether MONO is actually supported.
If you only care about luminance, you could just capture YUV_420_888 buffers instead of JPEG, and only send the Y buffer to the server. That won't get you automatic JPEG encoding, though.
Also note that generally under the hood, JPEG images are encoded in YUV; so if you dig into your JPEG decoder library, you may be able to get the image data before conversion to RGB, and simply ignore the chroma channels.
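For reference, a minimal check along those lines might look like this (characteristics here is assumed to come from CameraManager.getCameraCharacteristics(cameraId)):

// Sketch: only request MONO if the device advertises it.
int[] effects = characteristics.get(CameraCharacteristics.CONTROL_AVAILABLE_EFFECTS);
boolean monoSupported = false;
if (effects != null) {
    for (int effect : effects) {
        if (effect == CameraMetadata.CONTROL_EFFECT_MODE_MONO) {
            monoSupported = true;
            break;
        }
    }
}
if (monoSupported) {
    captureBuilder.set(CaptureRequest.CONTROL_EFFECT_MODE,
            CameraMetadata.CONTROL_EFFECT_MODE_MONO);
}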
You can use this:
captureRequestBuilder.set(CaptureRequest.CONTROL_EFFECT_MODE, CameraMetadata.CONTROL_EFFECT_MODE_MONO);

How to get pixel values from Android Image

I'm using the Camera2 API to capture a burst of images. To ensure the fastest capture speed, I am currently using YUV_420_888.
(JPEG results in approximately 3 fps capture, while YUV results in approximately 30 fps.)
So what I'm asking is: how can I access the YUV values for each pixel in the image?
i.e.
Image image = reader.acquireNextImage();
Pixel pixel = image.getPixel(x,y);
pixel.y = ...
pixel.u = ...
pixel.v = ...
Also if another format would be faster please let me know.
If you look at the Image class, you will see the immediate answer is simply the getPlanes() method.
Of course, for YUV_420_888 this will yield three planes of YUV data, which you will have to do a bit of work with in order to get the pixel value at any given location, because the U and V channels have been downsampled and may be interleaved in how they are stored in the Image.Plane objects. But that is beyond the scope of this question.
Also, you are correct that YUV will be the fastest available output for your camera. JPEG requires extra encoding time, which slows down the pipeline output, and RAW buffers are very large and take a long time to read out. YUV (of whatever type) is the data format most camera pipelines work in, so it is the 'native' output and thus the fastest.
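That said, a minimal sketch of reading the Y, U and V values at one (x, y) coordinate of a YUV_420_888 Image, taking the row and pixel strides into account (and ignoring the crop rect), might look like this:

// Sketch: look up Y, U, V at pixel (x, y) in a YUV_420_888 Image.
// Assumes 0 <= x < image.getWidth() and 0 <= y < image.getHeight().
Image.Plane[] planes = image.getPlanes();
ByteBuffer yBuf = planes[0].getBuffer();
ByteBuffer uBuf = planes[1].getBuffer();
ByteBuffer vBuf = planes[2].getBuffer();

int yIndex = y * planes[0].getRowStride() + x * planes[0].getPixelStride();
// Chroma is subsampled 2x2, so halve the coordinates for U and V.
int uIndex = (y / 2) * planes[1].getRowStride() + (x / 2) * planes[1].getPixelStride();
int vIndex = (y / 2) * planes[2].getRowStride() + (x / 2) * planes[2].getPixelStride();

int yValue = yBuf.get(yIndex) & 0xFF;
int uValue = uBuf.get(uIndex) & 0xFF;
int vValue = vBuf.get(vIndex) & 0xFF;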

Very difficult to grab YUV buffer after picture taken

I am trying to implement a camera app, but it looks like only the JPEG PictureCallback receives a buffer, while the raw and postview callbacks get null. And getSupportedPictureFormats() always returns 256 (ImageFormat.JPEG), so there is no hope of getting YUV directly. If that is the case, I guess that if I want to process the full-size image after capture, I can only run it through a JPEG decoder.
Update: it seems NV16 is available in the takePicture callback, and even if a YUV buffer is not available, libjpeg is still there for the codec work.
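For anyone checking the same thing with the legacy android.hardware.Camera API this question refers to, the advertised picture formats can be listed like this (a sketch; 256 is ImageFormat.JPEG):

// Sketch: list what the legacy Camera API claims to support for takePicture().
Camera camera = Camera.open();
for (int format : camera.getParameters().getSupportedPictureFormats()) {
    // ImageFormat.JPEG == 256, ImageFormat.NV16 == 16, ImageFormat.NV21 == 17, ...
    Log.d("PictureFormats", "supported format: " + format);
}
camera.release();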

preview buffers used for taking picture too?

I'm working on a custom camera app, and when I try to take higher-resolution pictures, my JPEG callback is never called. I run logcat and see this message:
E/Camera-JNI(14689): Manually set buffer was too small! Expected 1473253 bytes, but got 768000!
As far as I know, I don't manually set the buffer size for taking a picture, but I do call addCallbackBuffer for capturing preview images.
Are those same buffers used for taking a picture as well as for the preview? The description in the Android developer docs says "Adds a pre-allocated buffer to the preview callback buffer queue.", which uses the word "preview", so I wouldn't think it has anything to do with takePicture().
So where is this manually allocated buffer it speaks of coming from?
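For context, the preview callback buffers mentioned in the question are normally sized from the preview format like this (a sketch against the legacy android.hardware.Camera API; whether the framework also pulls from this queue for takePicture() is exactly what the question is asking):

// Sketch: the usual sizing and queuing of preview callback buffers.
Camera.Parameters params = camera.getParameters();
Camera.Size previewSize = params.getPreviewSize();
int bitsPerPixel = ImageFormat.getBitsPerPixel(params.getPreviewFormat());
int bufferSize = previewSize.width * previewSize.height * bitsPerPixel / 8;

camera.addCallbackBuffer(new byte[bufferSize]);
camera.setPreviewCallbackWithBuffer((data, cam) -> {
    // ... process the preview frame ...
    cam.addCallbackBuffer(data); // hand the buffer back to the queue
});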
