ZXing camera and preview frame format (YUV, RGB...) - Android

I'm using the ZXing API to decode some QR code images.
I need to convert the YUV format to RGB to be used in another application.
I know the camera buffer returns a byte[] in the NV21 format (YUV), but which variant is it?
Do I get a YUV420 format, or a YUV422 format?
If so, how do I convert it to RGB888? Do I need to convert the YUV to YUV888 first?
Thanks for your time,
EDIT:
One thing I do not understand is the length of the byte[] from the YUV420 preview frame. For a 1280*720 resolution, I get 1,382,400 bytes. How is that calculated?

NV21 is basically YUV420 (a full-resolution Y plane followed by interleaved V/U samples subsampled 2x2). You can convert directly:
http://en.wikipedia.org/wiki/YUV#Y.27UV420p_.28NV21.29_to_ARGB8888_conversion
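In Java, the conversion linked above can be sketched roughly like this (a minimal, unoptimized version; the class and method names are mine, not from any library). It also answers the EDIT: NV21 stores 12 bits per pixel (width*height luma bytes plus width*height/2 interleaved chroma bytes), so 1280 * 720 * 3 / 2 = 1,382,400 bytes.

```java
public class Nv21ToArgb {
    // NV21 layout: width*height Y bytes, then width*height/2 interleaved
    // V,U bytes (chroma subsampled 2x2).
    public static int[] convert(byte[] nv21, int width, int height) {
        int frameSize = width * height;
        int[] argb = new int[frameSize];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int yVal = nv21[y * width + x] & 0xFF;
                // Each V/U pair covers a 2x2 block of luma samples.
                int uvIndex = frameSize + (y / 2) * width + (x / 2) * 2;
                int v = (nv21[uvIndex] & 0xFF) - 128;
                int u = (nv21[uvIndex + 1] & 0xFF) - 128;
                // Integer-math YUV -> RGB, as in the Wikipedia snippet.
                int y1192 = 1192 * Math.max(yVal - 16, 0);
                int r = clamp((y1192 + 1634 * v) >> 10);
                int g = clamp((y1192 - 833 * v - 400 * u) >> 10);
                int b = clamp((y1192 + 2066 * u) >> 10);
                argb[y * width + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }

    private static int clamp(int c) {
        return c < 0 ? 0 : (c > 255 ? 255 : c);
    }
}
```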

While the accepted answer is correct, it is worth pointing out that the ZXing library includes PlanarYUVLuminanceSource, which encapsulates this transform and can limit peak memory usage if the decoder accesses the data row by row.

Related

How to get depth map in Android Q

In Android Q there is an option to get a depth map from an image.
Starting in Android Q, cameras can store the depth data for an image in a separate file, using a new schema called Dynamic Depth Format (DDF). Apps can request both the JPG image and its depth metadata, using that information to apply any blur they want in post-processing without modifying the original image data.
To read the specification for the new format, see Dynamic Depth Format.
I have read the Dynamic Depth Format document, and it looks like the depth data is stored in the JPG file as XMP metadata. My question is: how do I get this depth map from the file, or directly from the Android API?
I am using a Galaxy S10 with Android Q.
If you retrieve a DYNAMIC_DEPTH JPEG, the depth image is stored in the bytes immediately after the main JPEG image. The documentation leaves a lot to be desired in explaining this; I finally figured it out by searching the whole byte buffer for JPEG start and end markers.
I wrote a parser that you can look at or use here: https://github.com/kmewhort/funar/blob/master/app/src/main/java/com/kmewhort/funar/preprocessors/JpegParser.java. It does the following:
Uses com.drew.imaging.ImageMetadataReader to read the EXIF.
Uses com.adobe.internal.xmp to read info about the depth image sizes (and some other interesting depth attributes that you don't necessarily need, depending on your use case).
Calculates the byte locations of the depth image by subtracting each trailer size from the final byte of the whole byte buffer (there can be other trailer images, such as thumbnails). It returns a wrapped byte buffer of the actual depth JPEG.
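The marker search described above can be sketched as follows (a hypothetical helper of my own, not the actual parser from the linked repo): scan the buffer for the JPEG SOI marker bytes 0xFF 0xD8, treating matches after the first as embedded images, and find the EOI marker 0xFF 0xD9 that closes each one.

```java
import java.util.ArrayList;
import java.util.List;

public class JpegMarkerScanner {
    // Returns the offsets of every JPEG SOI marker (0xFF 0xD8) in the buffer.
    public static List<Integer> findSoiOffsets(byte[] buf) {
        List<Integer> offsets = new ArrayList<>();
        for (int i = 0; i + 1 < buf.length; i++) {
            if ((buf[i] & 0xFF) == 0xFF && (buf[i + 1] & 0xFF) == 0xD8) {
                offsets.add(i);
            }
        }
        return offsets;
    }

    // Finds the EOI marker (0xFF 0xD9) at or after 'from'; returns the offset
    // just past it, or -1 if not found.
    public static int findEoiEnd(byte[] buf, int from) {
        for (int i = from; i + 1 < buf.length; i++) {
            if ((buf[i] & 0xFF) == 0xFF && (buf[i + 1] & 0xFF) == 0xD9) {
                return i + 2;
            }
        }
        return -1;
    }
}
```

Note that a naive scan can produce false positives on marker-like byte pairs inside entropy-coded data or thumbnails, which is presumably why the linked parser instead subtracts each trailer size (read from the XMP metadata) from the end of the buffer.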

Convert NV21 to RGB and RGB back to NV21 with OpenCV

How can I convert NV21 (the camera format on Android phones) to RGB (3 channels, 24 bits) and convert the RGB back to NV21? I need to convert it back to a cv::Mat, not an Android Bitmap.
My solution
// All of them are 24 bits per pixel
cv::cvtColor(nv_21_mat, rgb_mat, CV_YUV2RGB_NV21);
cv::cvtColor(rgb_mat, nv_21_mat, CV_RGB2YUV);
Is this correct? By the way, the OpenCV color conversion codes look confusing, like
CV_RGB2YUV and CV_YUV2RGB: I do not know whether they convert to YUV444, YUV420 or YUV422, or how to specify which YUV format I want them to convert to. Does anyone know the details? Thanks
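One quick way to tell the subsampling schemes apart is by the buffer size each one implies, which a small sketch can illustrate (this is my own helper, not an OpenCV API):

```java
public class YuvBufferSizes {
    // Bytes needed for a width x height frame at 8 bits per sample.
    public static int bytesFor(String scheme, int width, int height) {
        int luma = width * height;
        switch (scheme) {
            case "YUV444": return luma * 3;      // full-resolution U and V
            case "YUV422": return luma * 2;      // U,V halved horizontally
            case "YUV420": return luma * 3 / 2;  // U,V halved both ways (NV21, I420, ...)
            default: throw new IllegalArgumentException(scheme);
        }
    }
}
```

The plain CV_RGB2YUV code produces a full three-channel Mat of the same size as the input, i.e. a 4:4:4-style layout rather than NV21; newer OpenCV releases also provide 4:2:0-targeting codes such as COLOR_RGB2YUV_I420 (planar), though to my knowledge there is no direct RGB-to-NV21 code.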

How to get pixel values from Android Image

I'm using the camera2 API to capture a burst of images. To ensure the fastest capture speed, I am currently using YUV_420_888.
(JPEG results in approximately 3 fps capture, while YUV results in approximately 30 fps.)
So what I'm asking is: how can I access the YUV values for each pixel in the image?
i.e.
Image image = reader.acquireNextImage();
Pixel pixel = image.getPixel(x,y);
pixel.y = ...
pixel.u = ...
pixel.v = ...
Also if another format would be faster please let me know.
If you look at the Image class you will see the immediate answer is simply the .getPlanes() method.
Of course, for YUV_420_888 this will yield three planes of YUV data which you will have to do a bit of work with in order to get the pixel value at any given location, because the U and V channels have been downsampled and may be interleaved in how they are stored in the Image's planes. But that is beyond the scope of this question.
Also, you are correct that YUV will be the fastest available output for your camera. JPEG requires extra time for encoding, which slows down the pipeline output, and RAW frames are very large and take a long time to read out. YUV (of whatever type) is the data format most camera pipelines work in, so it is the 'native' output, and thus the fastest.
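To make the indexing concrete, here is a sketch of how to locate a pixel's samples once you have each plane's bytes (the stride values here are placeholders; on a real device you would read them from Plane.getRowStride() and Plane.getPixelStride()):

```java
public class Yuv420PixelIndex {
    // Index of the luma sample (x, y) within the Y plane's byte array.
    public static int lumaIndex(int x, int y, int rowStride) {
        return y * rowStride + x;
    }

    // Index of the chroma sample covering pixel (x, y) within a U or V
    // plane's byte array. Chroma is subsampled 2x2 in YUV_420_888, and
    // pixelStride may be 1 (planar) or 2 (semi-planar, interleaved with
    // the other chroma channel).
    public static int chromaIndex(int x, int y, int rowStride, int pixelStride) {
        return (y / 2) * rowStride + (x / 2) * pixelStride;
    }
}
```

So for pixel (x, y) you would read Y from plane 0 at lumaIndex, and U/V from planes 1 and 2 at chromaIndex, each with that plane's own strides.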

Camera JNI. Manually set buffer was too small

From an Android camera, I take a YUV array and decode it to RGB (via JNI/NDK). Then I apply a black-and-white filter to the RGB matrix and show it on the camera preview in the YCbCr_420_SP format:
lParameters.setPreviewFormat(PixelFormat.YCbCr_420_SP);
Now I need to take a photo, but when I take the picture, I get this error:
CAMERA-JNI Manually set buffer was too small! Expected 1138126 bytes, but got 165888!
You cannot grab the image from the Surface. You must get a bitmap from the layout and then save it to the SD card in some folder as a compressed JPG. Thanks all. This question is closed.

Get single buffer from AVFrame data and display it on Android Bitmap/Surface/SurfaceView

I have a decoded AVFrame from the avcodec_decode_video2 function (FFmpeg), which is then passed to the SWS library and converted from the YUV420P format to RGB565. How do I combine all the color and linesize information, i.e. frame->data[0..3] and frame->linesize[0..3], into one buffer, and how do I then display it on an Android device, say using an Android Bitmap or SurfaceView/View? I don't want to use SurfaceFlinger because it is not an official part of the NDK and is subject to change with every minor release.
For packed RGB output you only have data[0], and linesize[0] is the row stride in bytes (width * 2 for RGB565), which may be larger than the row itself if SWS pads rows for alignment.
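Combining the frame into one buffer is then just a row-by-row copy that drops the per-row padding (in JNI you would do the same with memcpy). A sketch with hypothetical names; the result is a tightly packed RGB565 buffer that, on the Android side, could be loaded into a Bitmap with Bitmap.Config.RGB_565 via copyPixelsFromBuffer:

```java
public class StrideCopy {
    // Copies the width*2 meaningful bytes of each RGB565 row out of a
    // linesize0-byte source row, producing one tightly packed buffer.
    public static byte[] pack(byte[] data0, int linesize0, int width, int height) {
        int rowBytes = width * 2; // RGB565: 2 bytes per pixel
        byte[] packed = new byte[rowBytes * height];
        for (int row = 0; row < height; row++) {
            System.arraycopy(data0, row * linesize0, packed, row * rowBytes, rowBytes);
        }
        return packed;
    }
}
```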
