Android: turn 3-byte sequences in a byte[] into an int[]

I am looking to transfer pixel data from a server to an Android program. On the server, the pixel data is in RGBA format, with one byte per color/transparency channel. Unfortunately, the corresponding pixel format on Android is ARGB, meaning the alpha channel comes before the color data instead of after it, as it does on the server. I am worried that shuffling the RGBA data into ARGB format on the server will be too slow, and so I was hoping to find another way around that. The server is written in Python, by the way. I am capturing the screen data using the function presented here: Image.frombuffer with 16-bit image data. If there is a way to grab the screen capture using this method (or some other) in ARGB format, or even RGB_565, I would love to hear about that as well.
One trick I thought of to solve this problem was to use the isPremultiplied flag on Canvas.drawBitmap(int[], ...) and send only the RGB bytes from the server. Then I could recompose the RGB bytes into ints on the Android device and pass those to drawBitmap, ignoring the alpha channel entirely.
However, this leaves me with another problem. An int is made up of 4 bytes, but I have sequences of 3 bytes (the RGB values) in my byte[] array. I was using some of the solutions proposed here: byte array to Int Array to convert my byte[] to an int[] when I was transferring RGBA data. But now that it is just 3-byte sequences, I'm not sure how to quickly convert them to ints. I am hoping for close to real-time image updating, so I need a way to do this quickly. Any ideas?

int rgbInt = ((byteArray[0] & 0xFF) << 16)
           | ((byteArray[1] & 0xFF) << 8)
           |  (byteArray[2] & 0xFF);
// not sure these are in the correct order, you may have to swap the indexes around.
// The & 0xFF masks are needed because Java bytes are signed, and | is used
// instead of + because + binds tighter than << and would change the result.
You might also need to OR in
0xFF << 24
to set the alpha value to opaque.
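
Putting the answer together for a whole frame, a minimal Java sketch of the full conversion loop (method and variable names here are illustrative, not from the question):

// Convert a packed RGB byte[] (3 bytes per pixel) into an int[] of
// opaque ARGB pixels suitable for Canvas.drawBitmap(int[], ...).
static int[] rgbBytesToArgb(byte[] rgb) {
    int pixelCount = rgb.length / 3;
    int[] pixels = new int[pixelCount];
    for (int i = 0, p = 0; p < pixelCount; i += 3, p++) {
        pixels[p] = (0xFF << 24)                 // opaque alpha
                  | ((rgb[i]     & 0xFF) << 16)  // R
                  | ((rgb[i + 1] & 0xFF) << 8)   // G
                  |  (rgb[i + 2] & 0xFF);        // B
    }
    return pixels;
}

A single flat loop like this avoids per-pixel allocation, which matters for the near-real-time updates the question asks about.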

Related

Getting pixel color values from an 8-bit greyscale image in Android, currently getting values like -16777216 while I want values 0-255

I receive a link to an image that's 8-bit greyscale. I download it and decode it like this:
urlLabels = //someurl
val btLabel = BitmapFactory.decodeStream(urlLabels.openStream())
// And I use it like this
labelBitmap = btLabel.copy(Bitmap.Config.ARGB_8888, false) // idk what to use as Bitmap.Config
labelPixels = IntArray(labelBitmap.width * labelBitmap.height)
labelBitmap.getPixels(labelPixels, 0, labelBitmap.width, 0, 0, labelBitmap.width, labelBitmap.height)
The values I get from this Bitmap are -16777216 and similar, while I need them to be between 0 and 255 to correspond to some values I receive from my API. I am only starting to work with colors and images, so I've been super confused. I tried searching online, but an 8-bit greyscale config doesn't seem to be a thing? That's probably me misunderstanding everything. Basically, I just want to download the image and be able to read the values of its pixels in correct 8-bit form (values between 0 and 255).
Edit: Since I am really unfamiliar with this, the question is probably missing some key information, so please comment and request more information than I've provided if necessary. And please stay polite and dumb down your answers so I can understand and implement them; explanations are welcome, as I'd love to understand what is happening.
An Android color int packs ARGB into 32 bits:
-16777216 is 0xFF000000, i.e. argb(1, 0, 0, 0) => opaque #000000 (black).
16777215 is 0x00FFFFFF, i.e. argb(0, 1, 1, 1) => #FFFFFF with alpha 0, which is the highest value the 24 RGB bits can hold.
Executing "Color.valueOf(-16777216)" in Android returns "Color(r:0.0, g:0.0, b:0.0, a:1.0, color_space: sRGB IEC61966-2.1)", which is black without transparency.
To get values between 0 and 255 you have to split the Color this way:
red: Color.valueOf(-16777216).red()*255
green: Color.valueOf(-16777216).green()*255
blue: Color.valueOf(-16777216).blue()*255
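
Alternatively, since getPixels() already hands you packed ARGB ints, you can skip Color.valueOf entirely; a minimal Java sketch (assuming pixel is one entry of the array filled by getPixels() above):

import android.graphics.Color;

// For a greyscale bitmap decoded as ARGB_8888, R == G == B, so any
// single channel is the 0-255 grey level.
static int greyValue(int pixel) {
    return Color.red(pixel); // same as (pixel >> 16) & 0xFF
}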

Correct way of working with raw data when encoding using FFmpeg

I am working on an Android library for encoding/decoding raw data through FFmpeg. Every example I found uses files: it either reads from or writes to a file. However, I am using a raw byte array representing an RGBA image as encoder input, and a byte array as encoder output. Let's focus on the encoding part for this question.
My function looks like this: 
int encodeRGBA(uint8_t *image, int imageSize, int presentationTimestamp,
uint8_t *result, int resultSize)
Where image is a byte array containing raw RGBA image data, imageSize is the length of that array, presentationTimestamp is just a counter used by AVFrame for setting pts, result is a preallocated byte array with some defined length (currently with size matching width x height), and resultSize is that array's length (width x height). The returned int value represents the actually used length of the preallocated array. I am aware that this is not the best approach for sending data back to Java, and that is also part of the question. Is there a better way of returning the result?
The encoding example found here writes byte data directly to frame->data[0] (with a different approach for different formats, RGBA or YUV). But a Google search for "ffmpeg read from memory" turns up examples like this, this or this, all of them suggesting the use of AVIOContext.
I am confused about how AVFormatContext is supposed to be used with AVCodecContext for encoding.
Currently I have the encoder working using the first approach, and I am successfully returning results as described (with a preallocated byte array). I would like to know if that is the wrong approach. Should I be using AVIOContext for handling byte arrays?
Should I be using AVIOContext for handling byte arrays?
No. AVIOContext is for working with files and/or containers in memory. In that case avformat is required to read encoded frames out of a byte array. You are working with raw frames directly and don't require avformat.
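
As for the side question of returning the result to Java: one common pattern is to pass direct ByteBuffers across JNI, so the native side writes straight into Java-visible memory. A hedged sketch (the class name, library name and buffer-based signature are assumptions for illustration, not the question's actual code):

import java.nio.ByteBuffer;

public final class RgbaEncoder {
    static { System.loadLibrary("rgbaencoder"); } // hypothetical .so name

    // The native side would obtain the addresses via GetDirectBufferAddress,
    // write the encoded bytes into 'result', and return the used length.
    public static native int encodeRGBA(ByteBuffer image, int presentationTimestamp,
                                        ByteBuffer result);

    public static byte[] encode(byte[] rgba, int pts, int maxResultSize) {
        ByteBuffer in = ByteBuffer.allocateDirect(rgba.length);
        in.put(rgba).flip();
        ByteBuffer out = ByteBuffer.allocateDirect(maxResultSize);
        int used = encodeRGBA(in, pts, out);
        byte[] encoded = new byte[used];
        out.get(encoded);
        return encoded;
    }
}

This avoids the extra copies that GetByteArrayElements/SetByteArrayRegion can introduce when preallocated byte[] arrays cross the JNI boundary.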

Convert YUV image to single byte array

I am using CameraX to obtain frames from the camera for further analysis. The callback returns an ImageProxy object that contains the 3 planes of a YUV_420_888 image. Now I want to pass this image to a 3rd-party library for further processing. The function there accepts a one-dimensional byte array with "raw image data", as the documentation says. I don't understand how I can convert those 3 arrays (Y, U, V) into one byte array.
I have tried converting it into a Bitmap and then to a byte array, but the library reports that the input is invalid. I have also tried taking only one channel from the YUV image and passing it to the library; that worked, but the results were poor because (as I am guessing) one channel didn't carry enough information for the algorithms.
How can I merge Y, U and V channels into one byte array?
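
No answer was posted for this one, but a common approach when the library expects NV21 is to copy the Y plane and then the interleaved V/U data. A minimal Java sketch, assuming pixel stride 2 and no row padding on the chroma planes (true on many devices, but not guaranteed by the YUV_420_888 contract, so check getRowStride()/getPixelStride() in production):

import java.nio.ByteBuffer;
import androidx.camera.core.ImageProxy;

// Collapse the three YUV_420_888 planes into a single NV21 byte array.
static byte[] yuv420ToNv21(ImageProxy image) {
    ByteBuffer yBuf = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuf = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuf = image.getPlanes()[2].getBuffer();
    int ySize = yBuf.remaining();
    int uSize = uBuf.remaining();
    int vSize = vBuf.remaining();
    byte[] nv21 = new byte[ySize + uSize + vSize];
    yBuf.get(nv21, 0, ySize);             // Y plane first
    vBuf.get(nv21, ySize, vSize);         // NV21 interleaves V before U
    uBuf.get(nv21, ySize + vSize, uSize);
    return nv21;
}

If the strides do not match this layout, the planes have to be walked row by row instead of bulk-copied.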

Get raw RGB or YUV data buffer in Tango

I've just started with Tango and Unity, and I need to find a way to get raw RGB or YUV data from the device camera in order to pass it on for processing. This processing requires me to provide either an ARGB array or three Y, U, V buffers. How shall I approach this?
I need something similar to the OnTangoImageAvailableEventHandler callback; however, it gives me an array of bytes which is a YUV image, and I have no way of getting the individual Y, U or V planes, nor the strides for each plane.
A YUV byte array can be transformed to RGB using the function mentioned here:
Confusion on YUV NV21 conversion to RGB.
Using that function you can get the R, G and B values.
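
For reference, the core of that kind of NV21-to-RGB conversion looks roughly like this (standard BT.601 integer math; a Java sketch with illustrative names, not the linked answer verbatim):

// Convert one NV21 frame into opaque ARGB_8888 ints.
static int[] nv21ToArgb(byte[] nv21, int width, int height) {
    int[] argb = new int[width * height];
    int frameSize = width * height;
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int y = nv21[row * width + col] & 0xFF;
            // Chroma is subsampled 2x2; V comes before U in NV21.
            int uvIndex = frameSize + (row / 2) * width + (col & ~1);
            int v = (nv21[uvIndex]     & 0xFF) - 128;
            int u = (nv21[uvIndex + 1] & 0xFF) - 128;
            int r = clamp(y + (int) (1.402f * v));
            int g = clamp(y - (int) (0.344f * u + 0.714f * v));
            int b = clamp(y + (int) (1.772f * u));
            argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}

static int clamp(int c) { return c < 0 ? 0 : (c > 255 ? 255 : c); }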

Android Bitmap Pixels - write directly to file?

Background:
The goal is to write a rather large (at least 2048 x 2048 pixels) image file with OpenGL rendered data.
Today I first use glReadPixels in order to get the 32-bit (ARGB_8888) pixel data into an int array.
Then I copy the data into a new short array, converting the 32-bit ARGB values into 16-bit (RGB_565) values. At this point I also turn the image upside down and change the color order to make the OpenGL image data compatible with Android bitmap data (different row order and color channel order).
Finally I create a Bitmap() instance and call .copyPixelsFromBuffer(Buffer b) in order to be able to save it to disk as a PNG file.
However, I want to use memory more efficiently in order to avoid out-of-memory crashes on some phones.
Question:
Can I skip the first transformation from int[] -> short[] in some way (and avoid the allocation of a new array for pixel data)? Maybe just use byte arrays / buffers and write the converted pixels to the same array I read from...
More importantly: can I skip the Bitmap creation (this is where the program crashes) and somehow write the data directly to disk as a working image file (avoiding allocating the pixel data again in the Bitmap object)?
EDIT: If I could write the data directly to a file, maybe I wouldn't need to convert to 16-bit pixel data, depending on the file size and how fast the file can be read into memory at a later point.
I'm not sure whether this helps, but the PNGJ library allows writing a PNG sequentially, line by line. If memory usage is your primary concern (and if you can access the pixel values from the rendered data in the order of the final PNG file), it could be useful.
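
A minimal sketch of that line-by-line approach, assuming PNGJ's ImageInfo/PngWriter/ImageLineInt API (getPixelRgb is a hypothetical stand-in for reading one rendered pixel at a time):

import java.io.FileOutputStream;
import ar.com.hjg.pngj.ImageInfo;
import ar.com.hjg.pngj.ImageLineInt;
import ar.com.hjg.pngj.PngWriter;

// Write an RGB PNG one row at a time, holding only a single
// scanline in memory instead of a full Bitmap.
static void writePng(String path, int width, int height) throws Exception {
    ImageInfo info = new ImageInfo(width, height, 8, false); // 8-bit RGB, no alpha
    PngWriter png = new PngWriter(new FileOutputStream(path), info);
    ImageLineInt line = new ImageLineInt(info);
    int[] scanline = line.getScanline(); // width * 3 samples (R, G, B)
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int argb = getPixelRgb(col, row);
            scanline[col * 3]     = (argb >> 16) & 0xFF; // R
            scanline[col * 3 + 1] = (argb >> 8)  & 0xFF; // G
            scanline[col * 3 + 2] = argb & 0xFF;         // B
        }
        png.writeRow(line, row);
    }
    png.end();
}

// Stub: replace with whatever yields one pixel of the rendered data.
static int getPixelRgb(int x, int y) { return 0; }

Note that glReadPixels returns rows bottom-up while a PNG is written top-down, so getPixelRgb would need to flip the y coordinate, matching the upside-down fix described in the question.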
