I have a bitmap in RGB565 format that I want to convert to YUV420/NV21.
I found this here:
How to convert RGB565 to YUV420SP faster on android?
but it does not include the corresponding JNI call to use from Android code. Does anyone know of clean Java code for the same conversion?
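Since the linked answer relies on NEON/JNI, a plain Java sketch of the conversion might look like this. It assumes the RGB565 data is a little-endian packed byte array (as produced by Bitmap.copyPixelsToBuffer on an RGB_565 bitmap), even width and height, and the usual BT.601 fixed-point coefficients; treat it as a starting point rather than a drop-in answer:

    // Converts little-endian packed RGB565 bytes to NV21 (Y plane followed
    // by interleaved V/U at quarter resolution). Width and height must be even.
    public static byte[] rgb565ToNv21(byte[] rgb565, int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        int yIndex = 0;
        int uvIndex = width * height;
        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                int p = (j * width + i) * 2;
                int pixel = (rgb565[p] & 0xFF) | ((rgb565[p + 1] & 0xFF) << 8);
                // Expand 5/6/5-bit channels to 8 bits.
                int r = (pixel >> 11) & 0x1F; r = (r << 3) | (r >> 2);
                int g = (pixel >> 5) & 0x3F;  g = (g << 2) | (g >> 4);
                int b = pixel & 0x1F;         b = (b << 3) | (b >> 2);
                // BT.601 fixed-point approximation.
                int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
                nv21[yIndex++] = (byte) Math.max(0, Math.min(255, y));
                // One V,U pair per 2x2 block; NV21 stores V before U.
                if ((j & 1) == 0 && (i & 1) == 0) {
                    int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                    int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                    nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                    nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
                }
            }
        }
        return nv21;
    }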
I am working on an Android library for encoding/decoding raw data through FFmpeg. Every example I found uses files: it either reads from or writes to a file. However, I am using a raw byte array representing an RGBA image as encoder input and a byte array for encoder output. Let's focus on the encoding part for this question.
My function looks like this:
int encodeRGBA(uint8_t *image, int imageSize, int presentationTimestamp,
uint8_t *result, int resultSize)
where image is a byte array containing the raw RGBA image data, imageSize is the length of that array, presentationTimestamp is just a counter used by AVFrame for setting pts, result is a preallocated byte array of some defined length (currently width x height), and resultSize is that array's length (width x height). The returned int value is the number of bytes of the preallocated array actually used. I am aware that this is not the best approach for sending data back to Java, and that is also part of the question. Is there a better way to return the result?
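For context, the Java-side binding presumably looks something like this (the class and library names are placeholders, not from the actual project):

    public class FfmpegEncoder {
        static {
            System.loadLibrary("ffmpegencoder"); // hypothetical library name
        }

        // Mirrors the native signature above; the return value is the number
        // of bytes the native side wrote into the preallocated result array.
        public native int encodeRGBA(byte[] image, int imageSize,
                                     int presentationTimestamp,
                                     byte[] result, int resultSize);
    }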
The encoding example found here writes byte data directly to frame->data[0] (with a different approach for different formats, RGBA vs. YUV). But a Google search for "ffmpeg read from memory" turns up examples like this, this, or this, all of which suggest using AVIOContext.
I am confused about how AVFormatContext is supposed to be used together with AVCodecContext for encoding.
Currently I have the encoder working using the first approach, and I am successfully returning results as described (with a preallocated byte array). I would like to know whether that is the wrong approach. Should I be using AVIOContext for handling byte arrays?
Should I be using AVIOContext for handling byte arrays?
No. AVIOContext is for working with files and/or containers in memory; in that case avformat is required to read encoded frames out of a byte array. You are working with raw frames directly and don't need avformat.
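As for the result-passing part of the question, one common alternative (a sketch, not the only option) is to let Java allocate direct ByteBuffers, which the native side can access via GetDirectBufferAddress and write into without the extra copy that Get/ReleaseByteArrayElements implies:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class DirectBufferEncoder {
        static {
            System.loadLibrary("ffmpegencoder"); // hypothetical library name
        }

        // The native side reads from 'image' and writes the encoded packet
        // into 'result', returning the number of bytes written (or a
        // negative error code). The buffer capacities bound the sizes, so
        // explicit length arguments become unnecessary.
        public static native int encodeRGBA(ByteBuffer image,
                                            int presentationTimestamp,
                                            ByteBuffer result);

        public static ByteBuffer newDirectBuffer(int capacity) {
            return ByteBuffer.allocateDirect(capacity).order(ByteOrder.nativeOrder());
        }
    }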
I am using CameraX to obtain frames from the camera for further analysis. The callback returns an ImageProxy object that contains the 3 planes of a YUV_420_888 image. Now I want to pass this image to a 3rd-party library for further processing. The function there accepts a one-dimensional byte array with "raw image data", as the documentation says. I don't understand how to convert those 3 arrays (Y, U, V) into one byte array.
I have tried converting it into a Bitmap and then to a byte array, but the library reports that the input is invalid. I have also tried taking only one channel from the YUV image and passing it to the library; that worked, but the results were poor because (I am guessing) one channel didn't carry enough information for the algorithms.
How can I merge the Y, U and V channels into one byte array?
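If the library expects NV21 (worth confirming against its documentation), a straightforward sketch is to copy the Y plane row by row and then interleave V and U, honoring each plane's row and pixel strides:

    import java.nio.ByteBuffer;
    import androidx.camera.core.ImageProxy;

    // Packs an ImageProxy's YUV_420_888 planes into a single NV21 byte array.
    // Assumes the U and V planes share the same strides, which holds for
    // YUV_420_888 in practice.
    public static byte[] yuv420888ToNv21(ImageProxy image) {
        int width = image.getWidth();
        int height = image.getHeight();
        byte[] nv21 = new byte[width * height * 3 / 2];
        ImageProxy.PlaneProxy[] planes = image.getPlanes();

        // Y plane: pixel stride is 1; copy row by row to drop any row padding.
        ByteBuffer yBuf = planes[0].getBuffer();
        int yRowStride = planes[0].getRowStride();
        int pos = 0;
        for (int row = 0; row < height; row++) {
            yBuf.position(row * yRowStride);
            yBuf.get(nv21, pos, width);
            pos += width;
        }

        // Chroma planes: NV21 wants V first, then U, interleaved.
        ByteBuffer uBuf = planes[1].getBuffer();
        ByteBuffer vBuf = planes[2].getBuffer();
        int uvRowStride = planes[2].getRowStride();
        int uvPixelStride = planes[2].getPixelStride();
        for (int row = 0; row < height / 2; row++) {
            for (int col = 0; col < width / 2; col++) {
                int i = row * uvRowStride + col * uvPixelStride;
                nv21[pos++] = vBuf.get(i);
                nv21[pos++] = uBuf.get(i);
            }
        }
        return nv21;
    }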
What YUV ImageFormats (e.g. ImageFormat.NV21, ImageFormat.YV12, and ImageFormat.YUV_420_888) are supported by ScriptIntrinsicYuvToRGB (SIYTB)?
What Element should be used for SIYTB.create()?
The SIYTB overview says "The output is RGBA; the alpha channel will be set to 255", but the documentation for create() says "Supported elements types are U8_4". Should the Element type for create() actually be Element.RGBA_8888?
What Element type should be used to construct the Type.Builder used to build the Type for the input Allocation?
The SIYTB documentation says "The input allocation should be supplied in a supported YUV format as a YUV element Allocation." Does this imply to use Element.YUV to construct the Type.Builder?
Should Type.Builder.setY() be used in conjunction with Type.Builder.setX() when creating an input Allocation for SIYTB?
If Type.Builder.setY() should be used...
* Should it be used for NV21, YV12, and YUV_420_888?
* Should Type.Builder.setX() typically be set to the image width and Type.Builder.setY() typically be set to the image height?
Should Type.Builder.setYuvFormat() be used when creating an input Allocation for SIYTB?
If Type.Builder.setYuvFormat() should be used...
* Should it be used for NV21, YV12, and YUV_420_888?
* Does it help SIYTB know how to parse a YUV Image byte array?
What YUV image byte array layout is required for SIYTB input via Allocator.copyFrom()?
I am guessing the byte array must be packed with no pixel strides or row strides. But the byte-array packing of NV21, YV12, and YUV_420_888 differs, e.g. in the order and size of the UV planes. So how is SIYTB to know the layout of the input byte array? I am guessing it is via Type.Builder.setYuvFormat().
Is there an easy way to convert an Android Image to a byte array? For example something similar to Bitmap.copyPixelsToBuffer()? (wink, nudge, Android Team)
Thanks for reading this...
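Not an authoritative answer to every point above, but for reference, the pattern widely seen in practice for an NV21 byte array sidesteps Element.YUV and setYuvFormat() entirely: it hands the intrinsic a flat U8 allocation sized to the raw bytes (which the intrinsic is commonly reported to interpret as NV21) and uses Element.U8_4, i.e. RGBA_8888, for the output:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.renderscript.Allocation;
    import android.renderscript.Element;
    import android.renderscript.RenderScript;
    import android.renderscript.ScriptIntrinsicYuvToRGB;
    import android.renderscript.Type;

    public static Bitmap nv21ToBitmap(Context context, byte[] nv21, int width, int height) {
        RenderScript rs = RenderScript.create(context);
        // U8_4 is the generic 4-byte element; RGBA_8888 is the same layout
        // with named channels, so either works for create().
        ScriptIntrinsicYuvToRGB script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

        // Input: a flat U8 allocation sized to the raw NV21 byte array.
        Type yuvType = new Type.Builder(rs, Element.U8(rs)).setX(nv21.length).create();
        Allocation in = Allocation.createTyped(rs, yuvType, Allocation.USAGE_SCRIPT);

        // Output: one RGBA element per pixel, so setX(width) and setY(height).
        Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height).create();
        Allocation out = Allocation.createTyped(rs, rgbaType, Allocation.USAGE_SCRIPT);

        in.copyFrom(nv21);
        script.setInput(in);
        script.forEach(out);

        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        out.copyTo(bitmap);
        return bitmap;
    }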
I've just started with Tango and Unity, and I need to find a way to get raw RGB or YUV data from the device camera in order to pass it on for processing. This processing requires me to provide either an ARGB array or three Y, U, V buffers. How should I approach this?
I need something similar to the OnTangoImageAvailableEventHandler callback; however, it gives me an array of bytes which is a YUV image, and I have no way of getting the individual Y, U, or V planes, nor the strides for each plane.
The YUV byte array can be transformed to RGB using the function mentioned here:
Confusion on YUV NV21 conversion to RGB.
Using that function you can get the R, G, and B values.
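For convenience, here is a routine along the lines of the one linked above (the usual fixed-point BT.601 approximation), which decodes NV21/YUV420SP into packed ARGB ints:

    public static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
        int frameSize = width * height;
        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width;
            int u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = (0xFF & yuv420sp[yp]) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) {
                    // NV21 interleaves V before U, one pair per 2x2 block.
                    v = (0xFF & yuv420sp[uvp++]) - 128;
                    u = (0xFF & yuv420sp[uvp++]) - 128;
                }
                int y1192 = 1192 * y;
                int r = y1192 + 1634 * v;
                int g = y1192 - 833 * v - 400 * u;
                int b = y1192 + 2066 * u;
                // Clamp to the 18-bit fixed-point range before packing.
                if (r < 0) r = 0; else if (r > 262143) r = 262143;
                if (g < 0) g = 0; else if (g > 262143) g = 262143;
                if (b < 0) b = 0; else if (b > 262143) b = 262143;
                rgb[yp] = 0xFF000000 | ((r << 6) & 0xFF0000)
                        | ((g >> 2) & 0xFF00) | ((b >> 10) & 0xFF);
            }
        }
    }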
The preview data from the camera is yuv420sp data, but the data delivered in the data callback is JPEG data. How can I convert yuv420sp data to JPEG before the data callback? It is very difficult to do the conversion in the Java layer. What other method is there?
No need to convert. The Camera.PictureCallback returns a JPEG.
See "http://developer.android.com/reference/android/hardware/Camera.html#takePicture(android.hardware.Camera.ShutterCallback, android.hardware.Camera.PictureCallback, android.hardware.Camera.PictureCallback, android.hardware.Camera.PictureCallback)"