Very difficult to grab YUV buffer after picture taken - android

I am trying to implement a camera app, but it looks like only the JPEG PictureCallback passed to takePicture receives a buffer, while the raw and postview callbacks receive null. And getSupportedPictureFormats always returns only 256 (JPEG), so there is no hope of getting YUV directly. If this is the case, I guess that to process the full-size image after capture, I can only run it through a JPEG decoder.
Update: it seems NV16 is available in the takePicture callback, and even if a YUV buffer is not available, libjpeg is still there for the codec work.
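A minimal sketch of probing for NV16 with the old android.hardware.Camera API (this assumes an already-opened Camera instance, and whether the driver actually honors the format is device-dependent):

import android.graphics.ImageFormat;
import android.hardware.Camera;
import java.util.List;

// Sketch: ask the driver for a YUV picture format before falling back to JPEG.
static void takeYuvPictureIfPossible(Camera camera) {
    Camera.Parameters params = camera.getParameters();
    List<Integer> formats = params.getSupportedPictureFormats();
    if (formats != null && formats.contains(ImageFormat.NV16)) {
        params.setPictureFormat(ImageFormat.NV16);  // YUV straight from the driver
    } else {
        params.setPictureFormat(ImageFormat.JPEG);  // 256, often the only option
    }
    camera.setParameters(params);
    camera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera cam) {
            // data is NV16 or JPEG, depending on which format was accepted above
        }
    });
}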

Related

get data byte[] in real time using camera2 api

I am working on the Camera2 API with real-time image processing. I found that the method
onCaptureProgressed(CameraCaptureSession, CaptureRequest, CaptureResult)
is called for every captured frame, but I have no idea how to get a byte[] of image data from the CaptureResult.
You can't get image data from CaptureResult; it only provides image metadata.
Take a look at the Camera2Basic sample app, which captures JPEG images with an ImageReader. If you change the format from JPEG to YUV_420_888, set the resolution to the preview size, and set the ImageReader's Surface as a target of the repeating preview request, you'll get an Image for every frame captured.
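A rough sketch of that change (previewSize and backgroundHandler stand in for values from your own session setup):

import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import android.util.Size;

// Sketch: a YUV ImageReader whose Surface is added to the repeating preview
// request, so every captured frame arrives here as an Image.
static ImageReader createYuvReader(Size previewSize, Handler backgroundHandler) {
    ImageReader reader = ImageReader.newInstance(
            previewSize.getWidth(), previewSize.getHeight(),
            ImageFormat.YUV_420_888, /* maxImages */ 2);
    reader.setOnImageAvailableListener(r -> {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        // image.getPlanes() holds the Y, U, and V buffers for this frame
        image.close(); // always close, or the reader stalls after maxImages
    }, backgroundHandler);
    return reader;
}

Then add the reader's Surface to the preview request: previewRequestBuilder.addTarget(reader.getSurface());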

MediaCodec: encoding video with an input Surface but dropping frames

I'm looking for the fastest way to decode, edit, and encode video on Android devices, so I chose MediaCodec with a Surface for input and output.
This is my idea:
1. decode the mp4 file with MediaCodec; the output is a SurfaceTexture;
2. use OpenGL ES to edit the frame; the output is a texture;
3. use MediaCodec to encode; the input is a Surface.
The problem is:
Decoding and editing are much faster than encoding, so by the time I have decoded and edited 50 frames, the encoder may have consumed only 10 of them. Because I feed the encoder through its input Surface, I can't tell whether it has consumed all the previous frames, so the other 40 frames are lost.
Is there any way to know the Surface's consumption state so I can throttle the decode speed, or is there another approach?
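One common pattern is to track frames in flight yourself: count up each time you render into the encoder's input Surface and count down each time the encoder emits output, stalling the decoder when too many frames are pending. A sketch, with an illustrative class and threshold rather than a definitive implementation:

import java.util.concurrent.atomic.AtomicInteger;

// Sketch: simple backpressure between the GL render step and the encoder.
class EncoderBackpressure {
    static final int MAX_IN_FLIGHT = 3;
    final AtomicInteger pending = new AtomicInteger(0);

    // Call in the decode/edit loop before rendering the next frame to the
    // encoder's input Surface (i.e., before eglSwapBuffers).
    void beforeRender() throws InterruptedException {
        while (pending.get() >= MAX_IN_FLIGHT) {
            Thread.sleep(5); // the encoder is behind; stall the decoder
        }
        pending.incrementAndGet();
    }

    // Call in the encoder drain loop each time dequeueOutputBuffer()
    // returns a real output buffer.
    void onEncodedFrame() {
        pending.decrementAndGet();
    }
}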

How to get pixel values from Android Image

I'm using the camera2 API to capture a burst of images. To ensure the fastest capture speed, I am currently using YUV_420_888.
(JPEG results in approximately 3 fps capture, while YUV results in approximately 30 fps.)
So what I'm asking is: how can I access the YUV values for each pixel in the image? I.e. something like:
Image image = reader.acquireNextImage();
Pixel pixel = image.getPixel(x,y);
pixel.y = ...
pixel.u = ...
pixel.v = ...
Also, if another format would be faster, please let me know.
If you look at the Image class, you will see that the immediate answer is simply the getPlanes() method.
Of course, for YUV_420_888 this yields three planes of YUV data, and you will have to do a bit of work with them to get the pixel value at any given location, because the U and V channels have been downsampled and may be interleaved in how they are stored in the Image.Plane objects. But that is beyond the scope of this question.
Also, you are correct that YUV will be the fastest available output from your camera. JPEG requires extra encoding time, which slows down the pipeline, and RAW images take a long time to read out because they are so large. YUV (of whatever subtype) is the format most camera pipelines work in internally, so it is the 'native' output, and thus the fastest.
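To make the stride handling concrete, here is a sketch of reading one pixel's YUV values from a YUV_420_888 Image; it assumes the standard 2x2 chroma subsampling:

import android.media.Image;
import java.nio.ByteBuffer;

// Sketch: read the Y, U, and V values at pixel (x, y) of a YUV_420_888 Image.
// The chroma planes are subsampled 2x2, hence the x/2 and y/2 indexing, and
// each plane has its own rowStride/pixelStride that must be honored.
static int[] getYuvPixel(Image image, int x, int y) {
    Image.Plane[] planes = image.getPlanes();
    int yVal = planes[0].getBuffer().get(
            y * planes[0].getRowStride() + x * planes[0].getPixelStride()) & 0xFF;
    int uVal = planes[1].getBuffer().get(
            (y / 2) * planes[1].getRowStride() + (x / 2) * planes[1].getPixelStride()) & 0xFF;
    int vVal = planes[2].getBuffer().get(
            (y / 2) * planes[2].getRowStride() + (x / 2) * planes[2].getPixelStride()) & 0xFF;
    return new int[] { yVal, uVal, vVal };
}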

preview buffers used for taking picture too?

I'm working on a custom camera app, and when I try to take higher resolution pictures, my jpeg callback is never called. I run logcat, and I see this message:
E/Camera-JNI(14689): Manually set buffer was too small! Expected 1473253 bytes, but got 768000!
As far as I know, I don't manually set the buffer size for taking a picture, but I do call addCallbackBuffer for capturing preview images.
Are those same buffers used for taking a picture as well as for the preview? The description in the Android developer docs says "Adds a pre-allocated buffer to the preview callback buffer queue.", which uses the word "preview", so I wouldn't think it has anything to do with takePicture().
So where is this manually allocated buffer it speaks of coming from?
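For reference, the documented sizing rule for preview callback buffers is width * height * bitsPerPixel / 8 for the current preview format; a sketch:

import android.graphics.ImageFormat;
import android.hardware.Camera;

// Sketch: allocate preview callback buffers sized the way the docs describe.
static void addPreviewBuffers(Camera camera, int count) {
    Camera.Parameters params = camera.getParameters();
    Camera.Size size = params.getPreviewSize();
    int bitsPerPixel = ImageFormat.getBitsPerPixel(params.getPreviewFormat());
    int bufferSize = size.width * size.height * bitsPerPixel / 8;
    for (int i = 0; i < count; i++) {
        camera.addCallbackBuffer(new byte[bufferSize]);
    }
}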

Camera JNI. Manually set buffer was too small

From an Android camera, I take a YUV array and decode it to RGB (via the JNI/NDK). Then I apply a black-and-white filter to the RGB matrix and show it on the camera preview in YCbCr_420_SP format:
lParameters.setPreviewFormat(PixelFormat.YCbCr_420_SP);
Now I need to take a photo, but when I call takePicture, I get this error:
CAMERA-JNI Manually set buffer was too small! Expected 1138126 bytes, but got 165888!
Because you cannot get the image from the Surface. You must get the bitmap from the layout and then save it to the SD card in some folder as a compressed JPG. Thanks, everyone. This question is closed.
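For completeness, the "save a bitmap as a compressed JPG" step would look roughly like this (a sketch; the quality value and destination file are placeholders):

import android.graphics.Bitmap;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch: compress a Bitmap to a JPEG file.
static void saveAsJpeg(Bitmap bitmap, File outFile) throws IOException {
    try (FileOutputStream out = new FileOutputStream(outFile)) {
        bitmap.compress(Bitmap.CompressFormat.JPEG, /* quality */ 90, out);
    }
}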
