ScriptIntrinsicYuvToRGB Image Formats - android

What YUV ImageFormats (e.g. ImageFormat.NV21, ImageFormat.YV12, and ImageFormat.YUV_420_888) are supported by ScriptIntrinsicYuvToRGB (SIYTB)?
What Element should be used for SIYTB.create()?
The SIYTB overview says "The output is RGBA; the alpha channel will be set to 255", but the documentation for create() says "Supported elements types are U8_4". Should the Element type for create() actually be Element.RGBA_8888?
What Element type should be used to construct the Type.Builder used to build the Type for the input Allocation?
The SIYTB documentation says "The input allocation should be supplied in a supported YUV format as a YUV element Allocation." Does this imply to use Element.YUV to construct the Type.Builder?
Should Type.Builder.setY() be used in conjunction with Type.Builder.setX() when creating an input Allocation for SIYTB?
If Type.Builder.setY() should be used...
* Should it be used for NV21, YV12, and YUV_420_888?
* Should Type.Builder.setX() typically be set to the image width and Type.Builder.setY() typically be set to the image height?
Should Type.Builder.setYuvFormat() be used when creating an input Allocation for SIYTB?
If Type.Builder.setYuvFormat() should be used...
* Should it be used for NV21, YV12, and YUV_420_888?
* Does it help SIYTB know how to parse a YUV Image byte array?
What YUV image byte array layout is required for SIYTB input via Allocation.copyFrom()?
I am guessing the byte array must be packed with no pixel or row strides. But the byte array packing of NV21, YV12, and YUV_420_888 differs, e.g. in the order and size of the UV planes. So how is SIYTB to know the layout of the input byte array? I am guessing it is via Type.Builder.setYuvFormat().
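For reference, a packed (stride-free) layout simply puts the planes back to back. Here is a minimal sketch of the resulting sizes and offsets, assuming the row stride equals the image width (note that Android's YV12 definition actually mandates 16-byte-aligned row strides, so this is a simplification):

```java
public class YuvLayout {
    // Packed NV21: full-size Y plane, then interleaved V/U samples (2x2 subsampled).
    static int nv21Size(int width, int height) {
        return width * height + 2 * (width / 2) * (height / 2);
    }

    // Offset of the interleaved VU data within a packed NV21 buffer.
    static int nv21VuOffset(int width, int height) {
        return width * height;
    }

    // Packed YV12 (stride == width assumed): Y plane, then V plane, then U plane.
    static int yv12VOffset(int width, int height) {
        return width * height;
    }

    static int yv12UOffset(int width, int height) {
        return width * height + (width / 2) * (height / 2);
    }
}
```

Both formats end up at width * height * 3 / 2 bytes total; only the chroma ordering differs, which is why the format hint matters.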
Is there an easy way to convert an Android Image to a byte array? For example something similar to Bitmap.copyPixelsToBuffer()? (wink, nudge, Android Team)
Thanks for reading this...

Related

Correct way for working with raw data when encoding using FFmpeg

I am working on an Android library for encoding/decoding raw data through FFmpeg. Every example I found uses files; it either reads from or writes to a file. However, I am using a raw byte array representing an RGBA image for the encoder input and a byte array for the encoder output. Let's focus on the encoding part for this question.
My function looks like this: 
int encodeRGBA(uint8_t *image, int imageSize, int presentationTimestamp, uint8_t *result, int resultSize)
Where image is a byte array containing raw RGBA image data, imageSize is the length of that array, presentationTimestamp is just a counter used by AVFrame for setting pts, result is a preallocated byte array with some defined length (currently width x height), and resultSize is the length of that array. The returned int value represents the actually used length of the preallocated array. I am aware that this is not the best approach for sending data back to Java, and this is also part of the question. Is there a better way to return the result?
The encoding example found here writes byte data directly to frame->data[0] (with a different approach for different formats, RGBA or YUV). But a Google search for "ffmpeg read from memory" turns up examples like this, this, or this, all of them suggesting the use of AVIOContext.
I am confused about how AVFormatContext is supposed to be used with AVCodecContext for encoding.
Currently I have encoder working using first approach and I am successfully returning results as described (with preallocated byte array). I would like to know if that is wrong approach? Should I be using AVIOContext for handling byte arrays?
Should I be using AVIOContext for handling byte arrays?
No. AVIOContext is for working with files and/or containers in memory. In that case avformat is required to read encoded frames out of a byte array. You are working with raw frames directly and don't need avformat.

Convert YUV image to single byte array

I am using CameraX to obtain frames from the camera for further analysis. The callback returns an ImageProxy object that contains the 3 planes of a YUV_420_888 image. Now I want to pass this image to a 3rd-party library for further processing. The function there accepts a one-dimensional byte array with "raw image data", as the documentation says. I don't understand how I can convert those 3 arrays (Y, U, V) into one byte array.
I have tried converting it into a Bitmap and then to a byte array, but the library reports that the input is invalid. I have also tried taking only one channel from the YUV image and passing it to the library; it worked, but the results were poor because (as I am guessing) one channel didn't carry enough information for the algorithms.
How can I merge Y, U and V channels into one byte array?
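One approach is to copy the three planes yourself into a single packed NV21 buffer, honoring the row and pixel strides that each plane reports (YUV_420_888 planes are often padded). Below is a sketch in plain Java; the plane byte arrays and stride values are assumed to have been pulled out of the ImageProxy's PlaneProxy objects by the caller:

```java
public class YuvPacker {
    /**
     * Packs separate Y, U and V planes into a single packed NV21 byte array.
     * Strides are passed in explicitly because YUV_420_888 planes may be padded.
     */
    public static byte[] toNv21(byte[] y, int yRowStride,
                                byte[] u, byte[] v,
                                int uvRowStride, int uvPixelStride,
                                int width, int height) {
        byte[] out = new byte[width * height * 3 / 2];
        int pos = 0;
        // Copy the Y plane row by row, dropping any padding at the end of each row.
        for (int row = 0; row < height; row++) {
            System.arraycopy(y, row * yRowStride, out, pos, width);
            pos += width;
        }
        // NV21 stores chroma as interleaved V then U, 2x2 subsampled.
        for (int row = 0; row < height / 2; row++) {
            for (int col = 0; col < width / 2; col++) {
                int idx = row * uvRowStride + col * uvPixelStride;
                out[pos++] = v[idx];
                out[pos++] = u[idx];
            }
        }
        return out;
    }
}
```

Whether the 3rd-party library actually expects NV21 ordering (V before U) or I420 (separate U then V planes) is something its documentation would have to confirm; swapping the two chroma writes converts between them.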

Get raw RGB or YUV data buffer in Tango

I've just started with Tango and Unity, and need to find a way to get raw RGB or YUV data from the device camera in order to pass it on for further processing. This processing requires me to provide either an ARGB array or three Y, U, V buffers. How should I approach this?
I need something similar to the OnTangoImageAvailableEventHandler callback; however, it gives me an array of bytes which is a YUV image, and I have no way of getting the individual Y, U, or V planes, nor the strides for each plane.
A YUV byte array can be transformed to RGB using the function mentioned here:
Confusion on YUV NV21 conversion to RGB.
Using that function you can get the R, G and B values.
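The linked function boils down to applying fixed conversion coefficients to each YUV sample. Here is a minimal per-sample sketch; the exact coefficients vary between answers, and these are the floating-point ones commonly quoted for NV21 on Android, so treat them as one plausible choice rather than the canonical formula:

```java
public class YuvToRgb {
    // Converts one YUV (NV21-style) sample to a packed ARGB int.
    public static int yuvToArgb(int y, int u, int v) {
        int d = u - 128;
        int e = v - 128;
        int r = clamp((int) (y + 1.370705f * e));
        int g = clamp((int) (y - 0.698001f * e - 0.337633f * d));
        int b = clamp((int) (y + 1.732446f * d));
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    // Keeps a channel value inside the valid 0..255 range.
    private static int clamp(int x) {
        return x < 0 ? 0 : (x > 255 ? 255 : x);
    }
}
```

A neutral chroma pair (u = v = 128) yields a gray pixel whose brightness equals y, which is a quick sanity check for any variant of these coefficients.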

Convert RGB565 to YUV420 / NV21

I have a bitmap in RGB565 format that I want to convert to YUV420/NV21.
I found this here
How to convert RGB565 to YUV420SP faster on android?
but it does not include the corresponding JNI call to use it from Android code. Does anyone know of clean Java code for the same conversion?
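In plain Java the conversion is straightforward, if not the fastest: unpack each RGB565 pixel into 8-bit channels, apply the BT.601 integer formulas, and emit one interleaved V/U pair per 2x2 block. A sketch (untuned, and assuming even width and height):

```java
public class Rgb565ToNv21 {
    public static byte[] convert(short[] rgb565, int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        int uvPos = width * height; // chroma starts right after the Y plane
        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                int p = rgb565[j * width + i] & 0xFFFF;
                // Expand RGB565 to 8-bit channels.
                int r = ((p >> 11) & 0x1F) << 3;
                int g = ((p >> 5) & 0x3F) << 2;
                int b = (p & 0x1F) << 3;
                // BT.601 integer conversion (video range).
                int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
                nv21[j * width + i] = (byte) clamp(y);
                // One V/U pair per 2x2 block, sampled at even positions.
                if ((j & 1) == 0 && (i & 1) == 0) {
                    int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                    int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                    nv21[uvPos++] = (byte) clamp(v);
                    nv21[uvPos++] = (byte) clamp(u);
                }
            }
        }
        return nv21;
    }

    private static int clamp(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }
}
```

For a faster version, the per-pixel unpack and multiply can be table-driven, which is essentially what the linked JNI answer does in C.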

Android Bitmap Pixels - write directly to file?

Background:
The goal is to write a rather large (at least 2048 x 2048 pixels) image file with OpenGL rendered data.
Today I first use glReadPixels in order to get the 32-bit (argb8888) pixel data into an int array.
Then I copy the data into a new short array, converting the 32-bit argb values into 16-bit (rgb565) values. At this point I also turn the image upside down and change the color order to make the OpenGL image data compatible with Android bitmap data (different row order and color channel order).
Finally I create a Bitmap() instance and use .copyPixelsFromBuffer(Buffer b) in order to be able to save it to disk as a PNG file.
However, I want to use memory more efficiently in order to avoid out-of-memory crashes on some phones.
Question:
Can I skip the first transformation from int[] -> short[] in some way (and avoid the allocation of a new array for pixel data)? Maybe just use byte arrays / buffers and write the converted pixels to the same array I read from...
More importantly: Can I skip the bitmap creation (this is where the program crashes) and somehow write the data directly to disk as a working image file (and avoid allocating the pixel data again in the bitmap object)?
EDIT: If I could write the data directly to file, maybe I don't need to convert to 16-bit pixel data, depending on the file size and how fast the file can be read into memory at a later point.
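One way to avoid the full-size second array is to convert and write a single row at a time: turn a row of ARGB ints into little-endian RGB565 bytes and stream it straight out, so only one row ever needs an extra buffer. A sketch of that idea, assuming you handle the upside-down flip by choosing which row to write, and that the raw (headerless) output format is acceptable to whatever reads it later:

```java
import java.io.IOException;
import java.io.OutputStream;

public class RawRgb565Writer {
    // Converts one row of ARGB_8888 pixels to little-endian RGB565 bytes
    // and streams it out, keeping the extra allocation to a single row.
    public static void writeRow(int[] argbRow, OutputStream out) throws IOException {
        byte[] row = new byte[argbRow.length * 2];
        for (int i = 0; i < argbRow.length; i++) {
            int p = argbRow[i];
            // Keep the top 5/6/5 bits of R/G/B and pack them into 16 bits.
            int rgb565 = ((p >> 8) & 0xF800) | ((p >> 5) & 0x07E0) | ((p >> 3) & 0x001F);
            row[i * 2] = (byte) (rgb565 & 0xFF);
            row[i * 2 + 1] = (byte) (rgb565 >> 8);
        }
        out.write(row);
    }
}
```

Wrapping the FileOutputStream in a BufferedOutputStream keeps the per-row writes cheap; reading the file back later is just the reverse unpack at a known width and height.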
I'm not sure whether this helps, but the PNGJ library allows you to write a PNG sequentially, line by line. If memory usage is your primary concern (and if you can access the pixel values in the order of the final PNG file from the rendered data), it could be useful.
