Following the discussion at Camera2 api Imageformat.yuv_420_888 results on rotated image, I want to know how to adjust the lookups done via the rsGetElementAt_uchar methods so that the YUV data is rotated by 90 degrees.
I also have a project like the HdrViewfinder provided by Google. The problem is that the output is in landscape, because the output surface used as the target surface is connected to the YUV allocation, which does not care whether the device is in landscape or portrait mode. I want to adjust the code so that the output is in portrait mode.
Therefore, I took a custom YUV-to-RGBA RenderScript, but I do not know what to change to rotate the output.
Can somebody help me adjust the following custom YUV-to-RGBA script by 90 degrees, since I want to use the output in portrait mode:
// Needed directive for RS to work
#pragma version(1)
// The java_package_name directive needs to use your Activity's package path
#pragma rs java_package_name(net.hydex11.cameracaptureexample)
rs_allocation inputAllocation;
int wIn, hIn;
int numTotalPixels;
// Function to invoke before applying conversion
void setInputImageSize(int _w, int _h)
{
wIn = _w;
hIn = _h;
numTotalPixels = wIn * hIn;
}
// Kernel that converts a YUV element to a RGBA one
uchar4 __attribute__((kernel)) convert(uint32_t x, uint32_t y)
{
// YUV 4:2:0 planar image, with 8 bit Y samples, followed by
// interleaved V/U plane with 8bit 2x2 subsampled chroma samples
int baseIdx = x + y * wIn;
int baseUYIndex = numTotalPixels + (y >> 1) * wIn + (x & 0xfffffe);
uchar _y = rsGetElementAt_uchar(inputAllocation, baseIdx);
uchar _u = rsGetElementAt_uchar(inputAllocation, baseUYIndex);
uchar _v = rsGetElementAt_uchar(inputAllocation, baseUYIndex + 1);
_y = _y < 16 ? 16 : _y;
short Y = ((short)_y) - 16;
short U = ((short)_u) - 128;
short V = ((short)_v) - 128;
uchar4 out;
out.r = (uchar) clamp((float)(
(Y * 298 + V * 409 + 128) >> 8), 0.f, 255.f);
out.g = (uchar) clamp((float)(
(Y * 298 - U * 100 - V * 208 + 128) >> 8), 0.f, 255.f);
out.b = (uchar) clamp((float)(
(Y * 298 + U * 516 + 128) >> 8), 0.f, 255.f);
out.a = 255;
return out;
}
I found this custom script at https://bitbucket.org/cmaster11/rsbookexamples/src/tip/CameraCaptureExample/app/src/main/rs/customYUVToRGBAConverter.fs .
Here someone has posted Java code to rotate YUV data, but I want to do it in RenderScript since that is faster.
Any help would be great.
Best regards,
I'm assuming you want the output to be in RGBA, as in your conversion script. You should be able to use an approach like that used in this answer; that is, simply modify the x and y coordinates as the first step in the convert kernel:
//Rotate 90 deg clockwise during the conversion
uchar4 __attribute__((kernel)) convert(uint32_t inX, uint32_t inY)
{
uint32_t x = wIn - 1 - inY;
uint32_t y = inX;
//...rest of the function
Note the changes to the parameter names.
This presumes you have set up the output dimensions correctly (see linked answer). A 270 degree rotation can be accomplished in a similar way.
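For reference, here is a minimal host-side sketch of how the swapped output allocation could be created and the kernel launched. The class and variable names (ScriptC_customYUVToRGBAConverter, yuvAlloc, context, wIn, hIn) are assumptions based on the script above, not code from the question:
// Sketch only: the rotated RGBA output swaps width and height.
RenderScript rs = RenderScript.create(context);
Type outType = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(hIn)   // rotated output width  = input height
        .setY(wIn)   // rotated output height = input width
        .create();
Allocation outAlloc = Allocation.createTyped(rs, outType);
ScriptC_customYUVToRGBAConverter script = new ScriptC_customYUVToRGBAConverter(rs);
script.set_inputAllocation(yuvAlloc);      // the YUV allocation fed by the camera
script.invoke_setInputImageSize(wIn, hIn); // dimensions of the *input* image
script.forEach_convert(outAlloc);          // kernel iterates over the rotated output
The kernel is then launched once per output pixel, and the coordinate swap above maps each output pixel back to its source YUV sample.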
I am using the FaceNet model converted with TensorFlow Lite into a quantized model. This was done following the instructions at https://medium.com/analytics-vidhya/facenet-on-mobile-cb6aebe38505.
This is the info on the input and the output buffer of the quantized model.
INPUTS:
[{'index': 451, 'shape': array([ 1, 160, 160, 3], dtype=int32), 'quantization': (0.0078125, 128L), 'name': 'input', 'dtype': <type 'numpy.uint8'>}]
OUTPUTS:
[{'index': 450, 'shape': array([ 1, 512], dtype=int32), 'quantization': (0.0235294122248888, 0L), 'name': 'embeddings', 'dtype': <type 'numpy.uint8'>}]
I cannot manage to fill the input buffer properly.
I have already used the full FaceNet model, which takes float values, and it worked as expected. So I know what the input float values should look like for the full model, and I guess there is only one more step: converting each float value into a corresponding byte value and feeding those byte values into the TensorFlow Lite model.
This is what I did with the full FaceNet model:
//extract all the pixels of the image (of the face area of 160 x 160)
bitmap.getPixels(intValues, 0, inputWidth, 0, 0, inputWidth, inputHeight);
//copy the value of each channel of each pixel into an array
for (int i = 0; i < intValues.length; ++i) {
int p = intValues[i];
shortValues[i * 3 + 2] = (short) (p & 0xFF);
shortValues[i * 3 + 1] = (short) ((p >> 8) & 0xFF);
shortValues[i * 3 + 0] = (short) ((p >> 16) & 0xFF);
}
//calculate the mean value of all the pixels of the image
double sum = 0f;
for (short shortValue : shortValues) {
sum += shortValue;
}
double mean = sum / shortValues.length;
sum = 0f;
for (short shortValue : shortValues) {
sum += Math.pow(shortValue - mean, 2);
}
//calculate the standard deviation of all the pixels of the image
double std = Math.sqrt(sum / shortValues.length);
double std_adj = Math.max(std, 1.0/ Math.sqrt(shortValues.length));
//FINALLY fill the input buffer for the tensorflow
//calculate a float value for each pixel
for (short shortValue : shortValues) {
inputFloatBuffer.put((float) ((shortValue - mean) * (1 / std_adj)));
}
Now that I have float values, how do I convert them into byte values for TensorFlow Lite?
I tried every possible combination with the values “0.0078125” (1/128) and “128” (mentioned at the top of the post), but nothing gave meaningful results.
For example:
int int_value = ((short)(float_value * 128)) + 128;
I also tried scaling the float values into the range [-1, 1] first, but that did not help either.
Does somebody have an idea?
I don't know how the model was quantized, but it is really worth trying to put the integer pixel values you got from the bitmap (in the range [0, 255]) directly into the buffer. Also make sure to use a ByteBuffer rather than a FloatBuffer.
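As a rough sketch (not tested against your model; the bitmap and tflite names are assumptions), filling the input could look like this:
// Feed raw 8-bit RGB pixel values (0-255) straight into the quantized model.
int inputWidth = 160, inputHeight = 160;
int[] intValues = new int[inputWidth * inputHeight];
bitmap.getPixels(intValues, 0, inputWidth, 0, 0, inputWidth, inputHeight);
ByteBuffer inputBuffer = ByteBuffer.allocateDirect(inputWidth * inputHeight * 3);
inputBuffer.order(ByteOrder.nativeOrder());
for (int p : intValues) {
    inputBuffer.put((byte) ((p >> 16) & 0xFF)); // R
    inputBuffer.put((byte) ((p >> 8) & 0xFF));  // G
    inputBuffer.put((byte) (p & 0xFF));         // B
}
byte[][] embeddings = new byte[1][512];
tflite.run(inputBuffer, embeddings); // org.tensorflow.lite.Interpreter
With scale 0.0078125 and zero point 128, a raw pixel value q is interpreted as (q - 128) * 0.0078125, which is roughly the [-1, 1] normalization the full-precision model expects.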
I'm trying to use Android's RenderScript to render a semi-transparent circle behind an image, but things go very wrong when returning a value from the RenderScript kernel.
This is my kernel:
#pragma version(1)
#pragma rs java_package_name(be.abyx.aurora)
// We don't need very high precision floating points
#pragma rs_fp_relaxed
// Center position of the circle
int centerX = 0;
int centerY = 0;
// Radius of the circle
int radius = 0;
// Destination colour of the background can be set here.
float destinationR;
float destinationG;
float destinationB;
float destinationA;
static int square(int input) {
return input * input;
}
uchar4 RS_KERNEL circleRender(uchar4 in, uint32_t x, uint32_t y) {
//Convert input uchar4 to float4
float4 f4 = rsUnpackColor8888(in);
// Check if the current coordinates fall inside the circle
if (square(x - centerX) + square(y - centerY) < square(radius)) {
// Check if the current position is transparent; we then need to add the background!
if (f4.a == 0) {
uchar4 temp = rsPackColorTo8888(0.686f, 0.686f, 0.686f, 0.561f);
return temp;
}
}
return rsPackColorTo8888(f4);
}
Now, the rsPackColorTo8888() function takes 4 floats with values between 0.0 and 1.0. The resulting ARGB color is then found by multiplying each float value by 255. So the given floats correspond to the color R = 0.686 * 255 = 175, G = 0.686 * 255 = 175, B = 0.686 * 255 = 175 and A = 0.561 * 255 = 143.
The rsPackColorTo8888() function itself works correctly, but when the resulting uchar4 value is returned from the kernel, something really weird happens. The R, G and B values change to Red * Alpha = 56, Green * Alpha = 56 and Blue * Alpha = 56 respectively, where Alpha is 0.561. This means that no value of R, G and B can ever be larger than A = 0.561 * 255.
Setting the output manually, instead of using rsPackColorTo8888(), yields exactly the same behavior. The following code produces the exact same result, which in turn proves that rsPackColorTo8888() is not the problem:
if (square(x - centerX) + square(y - centerY) < square(radius)) {
// Check if the current position is transparent; we then need to add the background!
if (f4.a == 0) {
uchar4 temp;
temp[0] = 175;
temp[1] = 175;
temp[2] = 175;
temp[3] = 143;
return temp;
}
}
This is the Java-code from which the script is called:
@Override
public Bitmap renderParallel(Bitmap input, int backgroundColour, int padding) {
ResizeUtility resizeUtility = new ResizeUtility();
// We want to end up with a square Bitmap with some padding applied to it, so we use
// the length of the largest dimension (width or height) as the width of our square.
int dimension = resizeUtility.getLargestDimension(input.getWidth(), input.getHeight()) + 2 * padding;
Bitmap output = resizeUtility.createSquareBitmapWithPadding(input, padding);
output.setHasAlpha(true);
RenderScript rs = RenderScript.create(this.context);
Allocation inputAlloc = Allocation.createFromBitmap(rs, output);
Type t = inputAlloc.getType();
Allocation outputAlloc = Allocation.createTyped(rs, t);
ScriptC_circle_render circleRenderer = new ScriptC_circle_render(rs);
circleRenderer.set_centerX(dimension / 2);
circleRenderer.set_centerY(dimension / 2);
circleRenderer.set_radius(dimension / 2);
circleRenderer.set_destinationA(((float) Color.alpha(backgroundColour)) / 255.0f);
circleRenderer.set_destinationR(((float) Color.red(backgroundColour)) / 255.0f);
circleRenderer.set_destinationG(((float) Color.green(backgroundColour)) / 255.0f);
circleRenderer.set_destinationB(((float) Color.blue(backgroundColour)) / 255.0f);
circleRenderer.forEach_circleRender(inputAlloc, outputAlloc);
outputAlloc.copyTo(output);
inputAlloc.destroy();
outputAlloc.destroy();
circleRenderer.destroy();
rs.destroy();
return output;
}
When alpha is set to 255 (or 1.0 as a float), the returned color values (inside my application's Java code) are correct.
Am I doing something wrong, or is this really a bug somewhere in the RenderScript-implementation?
Note: I've checked and verified this behavior on a OnePlus 3T (Android 7.1.1), a Nexus 5 (Android 7.1.2), and Android emulators running 7.1.2 and 6.0.
Instead of passing the individual values to rsPackColorTo8888() like this:
uchar4 temp = rsPackColorTo8888(0.686f, 0.686f, 0.686f, 0.561f);
try creating a float4 and passing that:
float4 newFloat4 = { 0.686, 0.686, 0.686, 0.561 };
uchar4 temp = rsPackColorTo8888(newFloat4);
I use RenderScript to do a Gaussian blur on an image, but no matter what I do, ScriptIntrinsicBlur is much, much faster.
Why does this happen? Is ScriptIntrinsicBlur using another method?
This is my RS code:
#pragma version(1)
#pragma rs java_package_name(top.deepcolor.rsimage.utils)
//Gaussian blur algorithm
//the max radius of gaussian blur
static const int MAX_BLUR_RADIUS = 1024;
//the weights applied to the pixels during the blur
float blurRatio[(MAX_BLUR_RADIUS << 2) + 1];
//the default blur radius
int blurRadius = 0;
//the width and height of bitmap
uint32_t width;
uint32_t height;
//bind to the input bitmap
rs_allocation input;
//the temp alloction
rs_allocation temp;
//set the radius
void setBlurRadius(int radius)
{
if(1 > radius)
radius = 1;
else if(MAX_BLUR_RADIUS < radius)
radius = MAX_BLUR_RADIUS;
blurRadius = radius;
/**
calculate the blur weights with the Gaussian function;
when a pixel is far away from the center it contributes little to the center,
so take sigma = blurRadius / 2.57
*/
float sigma = 1.0f * blurRadius / 2.57f;
float deno = 1.0f / (sigma * sqrt(2.0f * M_PI));
float nume = -1.0 / (2.0f * sigma * sigma);
//calculate the gaussian function
float sum = 0.0f;
for(int i = 0, r = -blurRadius; r <= blurRadius; ++i, ++r)
{
blurRatio[i] = deno * exp(nume * r * r);
sum += blurRatio[i];
}
//normalization to 1
int len = radius + radius + 1;
for(int i = 0; i < len; ++i)
{
blurRatio[i] /= sum;
}
}
/**
the Gaussian blur is decomposed into two passes:
1. blur in the horizontal direction
2. blur in the vertical direction
*/
uchar4 RS_KERNEL horizontal(uint32_t x, uint32_t y)
{
float a = 0.0f, r = 0.0f, g = 0.0f, b = 0.0f;
for(int k = -blurRadius; k <= blurRadius; ++k)
{
int horizontalIndex = x + k;
if(0 > horizontalIndex) horizontalIndex = 0;
if(width <= horizontalIndex) horizontalIndex = width - 1;
uchar4 inputPixel = rsGetElementAt_uchar4(input, horizontalIndex, y);
int blurRatioIndex = k + blurRadius;
a += inputPixel.a * blurRatio[blurRatioIndex];
r += inputPixel.r * blurRatio[blurRatioIndex];
g += inputPixel.g * blurRatio[blurRatioIndex];
b += inputPixel.b * blurRatio[blurRatioIndex];
}
uchar4 out;
out.a = (uchar) a;
out.r = (uchar) r;
out.g = (uchar) g;
out.b = (uchar) b;
return out;
}
uchar4 RS_KERNEL vertical(uint32_t x, uint32_t y)
{
float a = 0.0f, r = 0.0f, g = 0.0f, b = 0.0f;
for(int k = -blurRadius; k <= blurRadius; ++k)
{
int verticalIndex = y + k;
if(0 > verticalIndex) verticalIndex = 0;
if(height <= verticalIndex) verticalIndex = height - 1;
uchar4 inputPixel = rsGetElementAt_uchar4(temp, x, verticalIndex);
int blurRatioIndex = k + blurRadius;
a += inputPixel.a * blurRatio[blurRatioIndex];
r += inputPixel.r * blurRatio[blurRatioIndex];
g += inputPixel.g * blurRatio[blurRatioIndex];
b += inputPixel.b * blurRatio[blurRatioIndex];
}
uchar4 out;
out.a = (uchar) a;
out.r = (uchar) r;
out.g = (uchar) g;
out.b = (uchar) b;
return out;
}
Renderscript intrinsics are implemented very differently from what you can achieve with a script of your own. This is for several reasons, but mainly because they are built by the RS driver developer of individual devices in a way that makes the best possible use of that particular hardware/SoC configuration, and most likely makes low level calls to the hardware that is simply not available at the RS programming layer.
Android does provide a generic implementation of these intrinsics though, to sort of "fall back" in case no lower hardware implementation is available. Seeing how these generic ones are done will give you some better idea of how these intrinsics work. For example, you can see the source code of the generic implementation of the 3x3 convolution intrinsic here rsCpuIntrinsicConvolve3x3.cpp.
Take a very close look at the code starting from line 98 of that source file, and notice how they use no for loops whatsoever to do the convolution. This is known as loop unrolling, where you explicitly add and multiply the 9 corresponding memory locations in the code, thereby avoiding the need for a for loop structure. This is the first rule you must take into account when optimizing parallel code: you need to get rid of all branching in your kernel. Looking at your code, you have a lot of ifs and fors that cause branching -- this means the control flow of the program is not straight through from beginning to end.
If you unroll your for loops, you will immediately see a boost in performance. Note that by removing your for structures you will no longer be able to generalize your kernel for all possible radius amounts. In that case, you would have to create fixed kernels for different radii, and this is exactly why you see separate 3x3 and 5x5 convolution intrinsics, because this is just what they do. (See line 99 of the 5x5 intrinsic at rsCpuIntrinsicConvolve5x5.cpp).
Furthermore, the fact that you have two separate kernels doesn't help. If you're doing a gaussian blur, the convolutional kernel is indeed separable and you can do 1xN + Nx1 convolutions as you've done there, but I would recommend putting both passes together in the same kernel.
Keep in mind though, that even doing these tricks will probably still not give you as fast results as the actual intrinsics, because those have probably been highly optimized for your specific device(s).
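For completeness, calling the intrinsic from Java is only a few lines; here is a minimal sketch (srcBitmap, dstBitmap and context are assumed to exist, and dstBitmap must have the same dimensions as srcBitmap):
RenderScript rs = RenderScript.create(context);
Allocation in = Allocation.createFromBitmap(rs, srcBitmap);
Allocation out = Allocation.createTyped(rs, in.getType());
ScriptIntrinsicBlur blur = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
blur.setRadius(25f);   // the intrinsic accepts radii in (0, 25]
blur.setInput(in);
blur.forEach(out);
out.copyTo(dstBitmap);
rs.destroy();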
I am trying to capture the image data in the onFrameAvailable method from a Google Tango, using the Leibniz release. The header file says the buffer contains HAL_PIXEL_FORMAT_YV12 pixel data, the release notes say the buffer contains YUV420SP, but the documentation says the pixels are in RGBA8888 format. I am a little confused. Additionally, I don't really get image data, but a lot of magenta and green. Right now I am trying to convert from YUV to RGB similar to this one. I guess there is something wrong with the stride, too. Here is the code of the onFrameAvailable method:
int size = (int)(buffer->width * buffer->height);
for (int i = 0; i < buffer->height; ++i)
{
for (int j = 0; j < buffer->width; ++j)
{
float y = buffer->data[i * buffer->stride + j];
float v = buffer->data[(i / 2) * (buffer->stride / 2) + (j / 2) + size];
float u = buffer->data[(i / 2) * (buffer->stride / 2) + (j / 2) + size + (size / 4)];
const float Umax = 0.436f;
const float Vmax = 0.615f;
y = y / 255.0f;
u = (u / 255.0f - 0.5f) ;
v = (v / 255.0f - 0.5f) ;
TangoData::GetInstance().color_buffer[3*(i*width+j)]=y;
TangoData::GetInstance().color_buffer[3*(i*width+j)+1]=u;
TangoData::GetInstance().color_buffer[3*(i*width+j)+2]=v;
}
}
I am doing the yuv to rgb conversion in the fragment shader.
Has anyone ever obtained an RGB image with the Google Tango Leibniz release? Or has anyone had similar problems when converting from YUV to RGB?
YUV420SP (aka NV21) is correct for the time being. An explanation is here. In this format you have a width x height array where each element is a Y byte, followed by a width/2 x height/2 array where each element is a V byte followed by a U byte. Your code is implementing YV12, which has separate planes for V and U instead of interleaving them in one array.
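To make the layout concrete, here is a small indexing sketch (illustrative Java, assuming a tightly packed NV21 buffer where stride == width):
int size = width * height;                 // size of the Y plane
int yIndex = y * width + x;
int vuRow  = (y / 2) * width;              // each V/U pair covers a 2x2 block of pixels
int vIndex = size + vuRow + (x & ~1);      // V comes first in NV21
int uIndex = vIndex + 1;
int Y = data[yIndex] & 0xFF;
int V = data[vIndex] & 0xFF;
int U = data[uIndex] & 0xFF;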
You mention that you are doing YUV to RGB conversion in a fragment shader. If all you want to do with the camera images is draw then you can use TangoService_connectTextureId() and TangoService_updateTexture() instead of TangoService_connectOnFrameAvailable(). This approach delivers the camera image to you already in an OpenGL texture that gives your fragment shader RGB values without bothering with the pixel format details. You will need to bind to GL_TEXTURE_EXTERNAL_OES (instead of GL_TEXTURE_2D), and your fragment shader would look something like this:
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec4 v_t;
uniform samplerExternalOES colorTexture;
void main() {
gl_FragColor = texture2D(colorTexture, v_t.xy);
}
If you really do want to pass YUV data to a fragment shader for some reason, you can do so without preprocessing it into floats. In fact, you don't need to unpack it at all - for NV21 just define a 1-byte texture for Y and a 2-byte texture for VU, and load the data as-is. Your fragment shader will use the same texture coordinates for both.
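As a rough sketch (assuming a tightly packed NV21 frame in a direct ByteBuffer named yuv, and texture ids yTexId and vuTexId generated elsewhere), the two textures could be uploaded with GLES20 like this:
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1); // rows are byte-aligned
// Y plane: one byte per texel at full resolution.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTexId);
yuv.position(0);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
        width, height, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yuv);
// Interleaved V/U plane: two bytes per texel at half resolution.
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vuTexId);
yuv.position(width * height);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE_ALPHA,
        width / 2, height / 2, 0, GLES20.GL_LUMINANCE_ALPHA, GLES20.GL_UNSIGNED_BYTE, yuv);
In the fragment shader, sampling the first texture then gives Y in .r, and sampling the second gives V in .r and U in .a.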
By the way, in case someone else has experienced problems with capturing image data on the Leibniz release, too: one of the developers told me that there is a bug concerning the camera and that it should be fixed with the Nash release.
The bug caused my buffer to be null, but when I used the Nash update I got data again. However, right now the problem is that the data I am getting doesn't make sense. I guess/hope the cause is that the tablet didn't get the OTA update yet (there can be a gap between the actual release date and the OTA software update).
Just try the following code:
//C#
public bool YV12ToPhoto(byte[] data, int width, int height, out Texture2D photo)
{
photo = new Texture2D(width, height);
int uv_buffer_offset = width * height;
for (int i = 0; i < height; i++)
{
for (int j = 0; j < width; j++)
{
int x_index = j;
if (j % 2 != 0)
{
x_index = j - 1;
}
// Get the YUV color for this pixel.
int yValue = data[(i * width) + j];
int uValue = data[uv_buffer_offset + ((i / 2) * width) + x_index + 1];
int vValue = data[uv_buffer_offset + ((i / 2) * width) + x_index];
// Convert the YUV value to RGB.
float r = yValue + (1.370705f * (vValue - 128));
float g = yValue - (0.689001f * (vValue - 128)) - (0.337633f * (uValue - 128));
float b = yValue + (1.732446f * (uValue - 128));
Color co = new Color();
co.b = b < 0 ? 0 : (b > 255 ? 1 : b / 255.0f);
co.g = g < 0 ? 0 : (g > 255 ? 1 : g / 255.0f);
co.r = r < 0 ? 0 : (r > 255 ? 1 : r / 255.0f);
co.a = 1.0f;
photo.SetPixel(width - j - 1, height - i - 1, co);
}
}
return true;
}
I have succeeded.
I've been scratching my head over this for a bit and the only thing I can conclude is that rsRand() is not implemented on the processor that is usually meant to run the script (e.g. GPU or CPU) or that it cannot be run in parallel.
Can anyone confirm this? If that is the case, is there a reference somewhere listing what functions are safe to use in relation to performance?
Is there any other way to get a random number without using rsRand()?
Here is my renderscript file:
#pragma version(1)
#pragma rs java_package_name(com.example.app)
#pragma rs_fp_relaxed
float width;
float height;
float3 p0, p1, p2, p3;
uchar4 __attribute__((kernel)) gradGen(uint32_t x, uint32_t y)
{
float3 result;
float hd = x / width;
float vd = y / height;
float noise = rsRand((float) 1 / 256) - ((float) 1 / 512); // CULPRIT
hd = 3 * hd * hd - 2 * hd * hd * hd;
vd = 3 * vd * vd - 2 * vd * vd * vd;
result.r = (1 - vd) * ((1 - hd) * p0.r + hd * p1.r) + vd * ((1 - hd) * p3.r + hd * p2.r) + noise;
result.g = (1 - vd) * ((1 - hd) * p0.g + hd * p1.g) + vd * ((1 - hd) * p3.g + hd * p2.g) + noise;
result.b = (1 - vd) * ((1 - hd) * p0.b + hd * p1.b) + vd * ((1 - hd) * p3.b + hd * p2.b) + noise;
return rsPackColorTo8888(result);
}
rsRand() calls the platform rand() on most implementations (that's how it's implemented in the CPU backend, I don't know that any RS GPU drivers actually implement RNGs in their drivers), so it's going to be significantly more heavyweight and slower than something like simple shifts and XORs.
And yeah, looking at the bionic implementation of rand(), you're right that it's serialized. Maybe I'll get someone to port a Mersenne Twister sometime.
Instead of wondering, I decided to do the dumb thing and write my own rsRand(). Xorshift was simple enough, and here is the extra code for implementing the PRNG:
uint32_t r0 = 0x6635e5ce, r1 = 0x13bf026f, r2 = 0x43225b59, r3 = 0x3b0314d0;
uchar4 __attribute__((kernel)) gradGen(uint32_t x, uint32_t y)
{
...
// Generate a random number between 0-1
uint32_t t = r0 ^ (r0 << 11);
r0 = r1; r1 = r2; r2 = r3;
r3 = r3 ^ (r3 >> 19) ^ t ^ (t >> 8);
float rnd = (float) r3 / 0xffffffff;
...
}
The above is fast, and the quality of the random numbers is good enough for my application. I'd still be interested to know the details behind the rsRand() slowdown.