Camera JNI. Manually set buffer was too small - android

From an Android camera I take the YUV array and decode it to RGB (via JNI/NDK). Then I apply a black-and-white filter to the RGB matrix and show it on the camera preview, which is in YCbCr_420_SP format:
lParameters.setPreviewFormat(PixelFormat.YCbCr_420_SP);
Now I need to take a photo, but when I take the picture I get this error:
CAMERA-JNI Manually set buffer was too small! Expected 1138126 bytes, but got 165888!

Because you can't get the image from the Surface. You have to grab a bitmap from the layout and then save it to the SD card in some folder as a compressed JPG. Thanks, everyone. This question is closed.
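A minimal sketch of that workaround (inside an Activity, with a hypothetical view id preview_layout): draw the preview layout into a Bitmap, then compress it to a JPEG file.
View previewLayout = findViewById(R.id.preview_layout); // hypothetical id
Bitmap bitmap = Bitmap.createBitmap(previewLayout.getWidth(),
        previewLayout.getHeight(), Bitmap.Config.ARGB_8888);
previewLayout.draw(new Canvas(bitmap)); // render the layout into the bitmap
File out = new File(getExternalFilesDir(null), "photo.jpg");
try (FileOutputStream fos = new FileOutputStream(out)) {
    bitmap.compress(Bitmap.CompressFormat.JPEG, 90, fos); // quality 90
} catch (IOException e) {
    e.printStackTrace();
}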

Related

How to get depth map in Android Q

In Android Q there is an option to get a depth map from an image.
Starting in Android Q, cameras can store the depth data for an image in a separate file, using a new schema called Dynamic Depth Format (DDF). Apps can request both the JPG image and its depth metadata, using that information to apply any blur they want in post-processing without modifying the original image data.
To read the specification for the new format, see Dynamic Depth Format.
I have read the Dynamic Depth Format document, and it looks like the depth data is stored in the JPG file as XMP metadata. My question is: how do I get this depth map from the file, or directly from the Android API?
I am using a Galaxy S10 with Android Q.
If you retrieve a DYNAMIC_DEPTH jpeg, the depth image is stored in the bytes immediately after the main jpeg image. The documentation leaves a lot to be desired in explaining this; I finally figured it out by searching the whole byte buffer for JPEG start and end markers.
I wrote a parser that you can look at or use here: https://github.com/kmewhort/funar/blob/master/app/src/main/java/com/kmewhort/funar/preprocessors/JpegParser.java. It does the following:
Uses com.drew.imaging.ImageMetadataReader to read the EXIF.
Uses com.adobe.internal.xmp to read info about the depth image sizes (and some other interesting depth attributes that you don't necessarily need, depending on your use case)
Calculates the byte locations of the depth image by subtracting each trailer size from the final byte of the whole byte buffer (there can be other trailer images, such as thumbnails). It returns a wrapped byte buffer of the actual depth JPEG.
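For illustration only, a naive sketch of the marker-scanning idea (not the linked parser itself): collect the offsets of every JPEG SOI marker (0xFF 0xD8) in the file, then slice the depth JPEG out from an offset after the primary image. Note that 0xFF 0xD8 can also occur inside compressed data, which is why the real parser validates the locations against the trailer sizes read from the XMP metadata.
// Naive helper: returns the offsets of all JPEG start-of-image markers.
static List<Integer> findJpegStartOffsets(byte[] fileBytes) {
    List<Integer> offsets = new ArrayList<>();
    for (int i = 0; i + 1 < fileBytes.length; i++) {
        if ((fileBytes[i] & 0xFF) == 0xFF && (fileBytes[i + 1] & 0xFF) == 0xD8) {
            offsets.add(i);
        }
    }
    return offsets;
}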

Android camera2 API - Display processed frame in real time

I'm trying to create an app that processes camera images in real time and displays them on screen. I'm using the camera2 API. I have created a native library to process the images using OpenCV.
So far I have managed to set up an ImageReader that receives images in YUV_420_888 format like this.
mImageReader = ImageReader.newInstance(
        mPreviewSize.getWidth(),
        mPreviewSize.getHeight(),
        ImageFormat.YUV_420_888,
        4);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mImageReaderHandler);
From there I'm able to get the image planes (Y, U and V), get their ByteBuffer objects and pass them to my native function. This happens in the mOnImageAvailableListener:
Image image = reader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes();
Image.Plane YPlane = planes[0];
Image.Plane UPlane = planes[1];
Image.Plane VPlane = planes[2];
ByteBuffer YPlaneBuffer = YPlane.getBuffer();
ByteBuffer UPlaneBuffer = UPlane.getBuffer();
ByteBuffer VPlaneBuffer = VPlane.getBuffer();
int w = image.getWidth();
int h = image.getHeight();
myNativeMethod(YPlaneBuffer, UPlaneBuffer, VPlaneBuffer, w, h);
image.close();
On the native side I'm able to get the data pointers from the buffers, create a cv::Mat from the data and perform the image processing.
Now the next step would be to show my processed output, but I'm unsure how to display the processed image. Any help would be greatly appreciated.
Generally speaking, you need to send the processed image data to an Android view.
The most performant option is to get an android.view.Surface object to draw into - you can get one from a SurfaceView (via SurfaceHolder) or a TextureView (via SurfaceTexture). Then you can pass that Surface through JNI to your native code, and there use the NDK methods:
ANativeWindow_fromSurface to get an ANativeWindow
The various ANativeWindow methods to set the output buffer size and format, and then draw your processed data into it.
Use setBuffersGeometry() to configure the output size, then lock() to get an ANativeWindow_Buffer. Write your image data to ANativeWindow_Buffer.bits, and then send the buffer off with unlockAndPost().
Generally, you should probably stick to RGBA_8888 as the most compatible format; technically only it and two other RGB variants are officially supported. So if your processed image is in YUV, you'd need to convert it to RGBA first.
You'll also need to ensure that the aspect ratio of your output view matches that of the dimensions you set; by default, Android's Views will just scale those internal buffers to the size of the output View, possibly stretching it in the process.
You can also set the format to one of Android's internal YUV formats, but this is not guaranteed to work!
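As a rough sketch of the Java side of that setup (the class, library, and method names below are placeholders), grab the Surface from a SurfaceView's SurfaceHolder and hand it, together with the processed RGBA pixels, to a native method that does the ANativeWindow work described above.
public class NativePreviewRenderer implements SurfaceHolder.Callback {
    static { System.loadLibrary("native-lib"); } // hypothetical .so name

    private Surface mOutputSurface;

    public NativePreviewRenderer(SurfaceView surfaceView) {
        surfaceView.getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        mOutputSurface = holder.getSurface();
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        mOutputSurface = null;
    }

    // Call this with the processed RGBA_8888 pixels from the OpenCV step.
    public void render(byte[] rgbaPixels, int width, int height) {
        if (mOutputSurface != null) {
            nativeDrawFrame(mOutputSurface, rgbaPixels, width, height);
        }
    }

    // Implemented in C/C++ using ANativeWindow_fromSurface, setBuffersGeometry,
    // lock(), a copy into ANativeWindow_Buffer.bits, then unlockAndPost().
    private native void nativeDrawFrame(Surface surface, byte[] rgbaPixels,
                                        int width, int height);
}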
I've tried the ANativeWindow approach, but it's a pain to set up and I haven't managed to do it correctly. In the end I just gave up and imported the OpenCV4Android library, which simplifies things by converting camera data to an RGBA Mat behind the scenes.

Zxing camera and preview frame format (YUV, RGB...)

I'm using the Zxing API to decode some QR Code images.
I need to convert the YUV format to RGB format to be used in another application.
I know the camera buffer returns a byte[] in NV21 format (YUV), but which one is it exactly? Do I get a YUV420 format or a YUV422 format?
If so, how do I convert this format to an RGB888 format? Do I need to convert the YUV to YUV888 before that?
Thanks for your time,
EDIT:
One thing I do not understand is the length of the byte[] from the YUV420 preview frame. For a 1280*720 resolution I get 1,382,400 bytes. How is that calculated?
NV21 is basically YUV420. You can convert directly:
http://en.wikipedia.org/wiki/YUV#Y.27UV420p_.28NV21.29_to_ARGB8888_conversion
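For reference, a plain-Java sketch of that conversion, using the common full-range BT.601 coefficients. It also explains the EDIT above: an NV21 frame holds width*height luma bytes plus width*height/2 interleaved chroma bytes, so a 1280*720 frame is 1280*720*1.5 = 1,382,400 bytes.
// Illustrative NV21 -> ARGB_8888 conversion.
public static int[] nv21ToArgb(byte[] nv21, int width, int height) {
    int frameSize = width * height;          // Y plane: width*height bytes
    int[] argb = new int[frameSize];         // VU plane adds frameSize/2 bytes
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int y = nv21[row * width + col] & 0xFF;
            int vuIndex = frameSize + (row / 2) * width + (col & ~1); // V first, then U
            int v = (nv21[vuIndex] & 0xFF) - 128;
            int u = (nv21[vuIndex + 1] & 0xFF) - 128;

            int r = clamp((int) (y + 1.402f * v));
            int g = clamp((int) (y - 0.344f * u - 0.714f * v));
            int b = clamp((int) (y + 1.772f * u));

            argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}

private static int clamp(int value) {
    return Math.max(0, Math.min(255, value));
}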
While the accepted answer is correct, it is worth pointing out that the ZXing library includes PlanarYUVLuminanceSource, which encapsulates this transform and can limit peak memory usage if the decoder accesses the data row by row.
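If the goal is only to decode the QR code rather than to produce RGB, a typical usage looks roughly like this (nv21, width and height are the preview buffer and its dimensions):
PlanarYUVLuminanceSource source = new PlanarYUVLuminanceSource(
        nv21, width, height,   // the raw preview buffer and its dimensions
        0, 0, width, height,   // crop rectangle (here: the whole frame)
        false);                // reverseHorizontal
try {
    Result result = new MultiFormatReader()
            .decode(new BinaryBitmap(new HybridBinarizer(source)));
    // result.getText() holds the decoded QR content
} catch (NotFoundException e) {
    // no barcode found in this frame
}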

Very difficult to grab YUV buffer after picture taken

I am trying to implement a camera app, but it looks like only onJpegTaken in the PictureCallback receives a buffer, while onRaw and onPostView receive null. And getSupportedPictureFormats always returns 256 (JPEG), so there is no hope of getting YUV directly. If that is the case, I guess that to process the large image after it is taken, I can only run it through a JPEG decoder first.
Updated: it seems NV16 is available in the takePicture callback, and even if a YUV buffer is not available, libjpeg is still there for the codec work.
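A minimal sketch of that JPEG-only fallback (mCamera is assumed to be an open android.hardware.Camera): take the picture, then hand the JPEG bytes to a decoder before processing.
mCamera.takePicture(null, null, new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // data is the compressed JPEG; decode it to get pixels for processing
        Bitmap photo = BitmapFactory.decodeByteArray(data, 0, data.length);
        // ... process the full-size Bitmap here ...
        camera.startPreview(); // the preview stops after takePicture
    }
});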

Get single buffer from AVFrame data and display it on Android Bitmap/Surface/SurfaceView

I have a decoded AVFrame from the avcodec_decode_video2 function (FFmpeg), which is then passed to the SWS library and converted from YUV420P to RGB565. How do I combine all the color and linesize information, i.e. frame->data[0..3] and frame->linesize[0..3], into one buffer, and how do I then display it on an Android device, say by using an Android Bitmap or a SurfaceView/View? I don't want to use SurfaceFlinger because it is not an official part of the NDK and is subject to change with every minor release.
For packed RGB output you only have data[0], and linesize[0] is the row stride in bytes (width * 2 for RGB565), assuming your frame has no extra padding.
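As a rough sketch, assuming the native side has copied frame->data[0] into a direct ByteBuffer with no row padding (i.e. linesize[0] == width * 2 for RGB565), the Java side can display it with a Bitmap (rgb565Buffer, width, height and imageView are placeholders):
// rgb565Buffer: width * height * 2 bytes of packed RGB565 pixels from the native side
Bitmap frameBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
rgb565Buffer.rewind();
frameBitmap.copyPixelsFromBuffer(rgb565Buffer);
imageView.setImageBitmap(frameBitmap); // or draw it onto a SurfaceView's Canvas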
