RenderScript ColorMatrix intrinsic - Help me understand - Android

I think I must be misunderstanding how the RenderScript color matrix intrinsic works, because my results don't turn out as I expect.
I have a RenderScript Allocation that "overlays" an OpenCV Mat; basically, imagine it as a three-dimensional array full of pixels, where every pixel has RGBA (red, green, blue, alpha) values.
I want to apply a color matrix to every pixel, like this:
| R |     |  0.152286f   1.052583f  -0.204868f   0f |
| G |  ×  |  0.114503f   0.786281f   0.099216f   0f |
| B |     | -0.003882f  -0.048116f   1.051998f   0f |
| A |     |  0.000000f   0.000000f   0.000000f   1f |
So what I expect to happen is that the new Vector R'G'B'A' will be like this:
R' = R * 0.152286f + G * 1.052583f + B * -0.204868f + A * 0f
G' = as above, using the second row
B' = as above, using the third row
A' = R * 0 + G * 0 + B * 0 + A * 1
So the new R' value will be a combination of the old RGB values, and A does not affect RGB. The same holds for G' and B'. A will always stay the same.
In Code it looks like this:
Matrix4f mProtan = new Matrix4f(new float[]{
         0.152286f,  1.052583f, -0.204868f, 0f,
         0.114503f,  0.786281f,  0.099216f, 0f,
        -0.003882f, -0.048116f,  1.051998f, 0f,
         0.000f,     0.000f,     0.000f,    1f
});
scriptIntrinsicColorMatrix.setColorMatrix(mProtan);
scriptIntrinsicColorMatrix.forEach(inputAllocation, outputAllocation);
I am already doing this with OpenCV and it works as expected, but it's kind of slow, hence I'd like to use RenderScript. There, however, my outcome is usually weird. For example, this matrix shouldn't really affect anything other than red, green and combinations of them, yet red becomes a dark shade of grey/brown, green becomes a muddy yellow, and purple (red + blue) loses its red and becomes pure blue. Even white paper gets a greenish tint.
I also tried direct streaming from the camera feed via Renderscript only and storing information in Bitmaps, but the results end up the same.
Any help would be greatly appreciated! :-)

RS uses column-major format for matrices, so you need to transpose your matrix to get your expected R'G'B'A' values.
https://developer.android.com/guide/topics/renderscript/reference/rs_matrix.html
https://android.googlesource.com/platform/frameworks/rs/+/android-7.0.0_r6/cpu_ref/rsCpuIntrinsicColorMatrix.cpp#820
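To make that concrete, here is a minimal sketch of the fix, keeping the values from the question but laying the array out so that, read column-major, it becomes the matrix you intended (or, equivalently, keep your original array and call mProtan.transpose() before setColorMatrix):

Matrix4f mProtan = new Matrix4f(new float[]{
        // column 0: contribution of the input R to R', G', B', A'
         0.152286f,  0.114503f, -0.003882f, 0f,
        // column 1: contribution of the input G
         1.052583f,  0.786281f, -0.048116f, 0f,
        // column 2: contribution of the input B
        -0.204868f,  0.099216f,  1.051998f, 0f,
        // column 3: contribution of the input A
         0f,         0f,         0f,        1f
});
scriptIntrinsicColorMatrix.setColorMatrix(mProtan);
scriptIntrinsicColorMatrix.forEach(inputAllocation, outputAllocation);

With this layout, R' = 0.152286*R + 1.052583*G - 0.204868*B, exactly as expected above.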

Related

Issue with ShapeRenderer behavior on Android

I'm using ShapeRenderer to draw a kind of circular indicator under a mask texture. Everything works perfectly and as expected on desktop, but when running the same code on Android the shape rendering is always on top. Another strange difference is that all shape rendering calls seem inverted, such that the first shape is drawn on top.
I've reproduced the problem on a June 2020 libGDX nightly build and with the 20/09/2020 nightly build.
Here is the code:
myConstructor() {
    sr = new ShapeRenderer();
    sr.scale(1 + 1.1f * GUI.ZOOM_RATIO, 1 + 1.1f * GUI.ZOOM_RATIO, 0f);
}

draw(Batch g, float arg) {
    g.end(); // I have some batch rendering before
    sr.begin(ShapeType.Filled);
    sr.setColor(Color.CYAN);
    sr.circle(indX, indY, indicatorW + GUI.fit2Density(2));
    sr.setColor(Color.DARK_GRAY);
    sr.arc(indX, indY, indicatorW, 0f, degrees);
    sr.end();
    g.begin();
    g.setColor(1f, 1f, 1f, 1f);
    g.draw(coolDownIndic, indX - coolDownIndic.getWidth() / 2 - GUI.fit2Density(2),
            indY - coolDownIndic.getHeight() / 2);
}
On desktop I see the texture rendered over the arc and the arc over the circle. This order is exactly inverted on Android. You can see the expected behavior here: Desktop rendering, and the incorrect behavior here: Android rendering (the purple cloud on top is a particle emitter effect, not linked to the issue).
I guess this is a bug. Note that I'm using the scale method on the ShapeRenderer; I'm not sure if that could be related.
Any help would be appreciated.

How to check ray intersection with an object in ARCore

Is there a way to check if I touched the object on the screen? As I understand it, the HitResult class allows me to check if I touched the recognized and mapped surface. But I want to check whether I touched the object that is set on that surface.
ARCore doesn't really have a concept of an object, so we can't directly provide that. I suggest looking at ray-sphere tests for a starting point.
However, I can help with getting the ray itself (to be added to HelloArActivity):
/**
 * Returns a world coordinate frame ray for a screen point. The ray is
 * defined using a 6-element float array containing the head location
 * followed by a normalized direction vector.
 */
float[] screenPointToWorldRay(float xPx, float yPx, Frame frame) {
    float[] points = new float[12]; // {clip query, camera query, camera origin}
    // Set up the clip-space coordinates of our query point
    // +x is right:
    points[0] = 2.0f * xPx / mSurfaceView.getMeasuredWidth() - 1.0f;
    // +y is up (android UI Y is down):
    points[1] = 1.0f - 2.0f * yPx / mSurfaceView.getMeasuredHeight();
    points[2] = 1.0f; // +z is forwards (remember clip, not camera)
    points[3] = 1.0f; // w (homogenous coordinates)

    float[] matrices = new float[32]; // {proj, inverse proj}
    // If you'll be calling this several times per frame, factor out
    // the next two lines to run when Frame.isDisplayRotationChanged().
    mSession.getProjectionMatrix(matrices, 0, 1.0f, 100.0f);
    Matrix.invertM(matrices, 16, matrices, 0);

    // Transform clip-space point to camera-space.
    Matrix.multiplyMV(points, 4, matrices, 16, points, 0);

    // points[4,5,6] is now a camera-space vector. Transform to world space to get
    // a point along the ray.
    float[] out = new float[6];
    frame.getPose().transformPoint(points, 4, out, 3);
    // Use points[8,9,10] as a zero vector to get the ray head position in world space.
    frame.getPose().transformPoint(points, 8, out, 0);

    // Normalize the direction vector:
    float dx = out[3] - out[0];
    float dy = out[4] - out[1];
    float dz = out[5] - out[2];
    float scale = 1.0f / (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    out[3] = dx * scale;
    out[4] = dy * scale;
    out[5] = dz * scale;
    return out;
}
If you're calling this several times per frame, see the comment about the getProjectionMatrix and invertM calls.
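If you then want the suggested ray-sphere test itself, here is a minimal sketch (the helper and the sphere centre/radius you pass in are assumptions; e.g. the centre could come from the Pose of the Anchor you attached the model to, with a radius that roughly bounds the model):

boolean rayIntersectsSphere(float[] ray, float[] center, float radius) {
    // Vector from the ray origin (ray[0..2]) to the sphere centre.
    float ox = center[0] - ray[0];
    float oy = center[1] - ray[1];
    float oz = center[2] - ray[2];
    // Project it onto the normalized ray direction (ray[3..5]).
    float t = ox * ray[3] + oy * ray[4] + oz * ray[5];
    if (t < 0) {
        return false; // the sphere is behind the ray origin
    }
    // Squared distance from the sphere centre to the closest point on the ray.
    float cx = ray[0] + t * ray[3] - center[0];
    float cy = ray[1] + t * ray[4] - center[1];
    float cz = ray[2] + t * ray[5] - center[2];
    return cx * cx + cy * cy + cz * cz <= radius * radius;
}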
Apart from mouse picking with ray casting (cf. Ian's answer), the other commonly used technique is a picking buffer, explained in detail (with C++ code) here
The trick behind 3D picking is very simple. We will attach a running
index to each triangle and have the FS output the index of the
triangle that the pixel belongs to. The end result is that we get a
"color" buffer that doesn't really contain colors. Instead, for each
pixel which is covered by some primitive we get the index of this
primitive. When the mouse is clicked on the window we will read back
that index (according to the location of the mouse) and render the
selected triangle red. By combining a depth buffer in the process we
guarantee that when several primitives are overlapping the same pixel
we get the index of the top-most primitive (closest to the camera).
So in a nutshell:
Every object's draw method needs an ongoing index and a boolean for whether this draw renders the pixel buffer or not.
The render method converts the index into a grayscale color and the scene is rendered
After the whole rendering is done, retrieve the pixel color at the touch position GL11.glReadPixels(x, y, /*the x and y of the pixel you want the colour of*/). Then translate the color back to an index and the index back to an object. Voilà, you have your clicked object.
To be fair, for a mobile use case you should probably read a 10x10 rectangle, iterate through it and pick the first non-background color found, because touches are never that precise.
This approach works independently of the complexity of your objects.
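As a rough sketch of the readback half of that idea on OpenGL ES 2.0 (the helper below is hypothetical; it assumes the picking pass has already rendered each object in a solid colour that packs index + 1 into the RGB channels, with 0 reserved for the background, into the currently bound framebuffer):

int pickObjectAt(int x, int y, int viewportHeight) {
    ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
    // GL's origin is bottom-left, Android touch coordinates start top-left.
    GLES20.glReadPixels(x, viewportHeight - y, 1, 1,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
    int r = pixel.get(0) & 0xFF;
    int g = pixel.get(1) & 0xFF;
    int b = pixel.get(2) & 0xFF;
    int index = (r << 16) | (g << 8) | b; // undo the index-to-colour packing
    return index - 1;                     // -1 means background / nothing hit
}

Packing into all three channels instead of a single grayscale value only raises the limit from 255 to about 16 million distinguishable objects; the grayscale variant from the list above works the same way.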

OpenGL ES write depth data to color

I'm trying to implement DepthBuffer-like functionality using OpenGL ES on Android.
In other words, I'm trying to get the 3D point on the surface that is rendered at point [x, y] on the user's device. In order to do that, I need to be able to read the distance of the fragment at that given point.
Answer in different circumstances:
When using desktop OpenGL, you could achieve this by creating a FrameBuffer and then attaching either a RenderBuffer or a Texture with a depth component to it.
Both of those approaches use glReadPixels with a format of GL_DEPTH_COMPONENT to retrieve the data from the buffer/texture. Unfortunately OpenGL ES only supports GL_ALPHA, GL_RGB, and GL_RGBA as readback formats, so there's really no way to reach the framebuffer's depth data directly.
The only viable approach I can think of (and the one I have found suggested on the internet) is to create a separate shader just for depth rendering. That shader should write the gl_FragCoord.z value (i.e. the distance value we want to read) into gl_FragColor. However:
The actual Question:
When I write the gl_FragCoord.z value with gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0); and later use glReadPixels to read back the RGB values, those values don't match up with the input.
What I have tried:
I realize that there are only 24 bits (r, g, b, 8 bits each) representing the depth data, so I tried shifting the returned value by 8 to get 32 bits, but it didn't seem to work. I also tried to shift the distance when applying it to red, green and blue, but that didn't work as expected either. I have been trying to figure out what's wrong by observing the bits; results are at the bottom.
fragmentShader.glsl(candidate #3):
void main() {
    highp float distance = 1.0; // currently just 1.0 to test the results with different values.
    lowp float red = distance / exp2(16.0);
    lowp float green = distance / exp2(8.0);
    lowp float blue = distance / exp2(0.0);
    gl_FragColor = vec4(red, green, blue, 1.0);
}
Method to read the values (=glReadPixels)
private float getDepth(int x, int y) {
    FloatBuffer buffer = GeneralSettings.getFloatBuffer(1); // just creates a FloatBuffer with capacity of 1 float value.
    terrainDepthBuffer.bindFrameBuffer(); // bind the framebuffer before reading back.
    GLES20.glReadPixels(x, y, 1, 1, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, buffer); // read the values from the previously bound framebuffer.
    GeneralSettings.checkGlError("glReadPixels"); // make sure there are no GL-related errors.
    terrainDepthBuffer.unbindCurrentFrameBuffer(); // remember to unbind the buffer after reading/writing.
    System.out.println(buffer.get(0)); // print the value.
}
Observations in bits using the shader & method above:
Value | Shader input | ReadPixels output
1.0f | 111111100000000000000000000000 | 111111110000000100000000
0.0f | 0 | 0
0.5f | 111111000000000000000000000000 | 100000000000000100000000
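For reference, a minimal sketch of the readback side (the helper name and the packing convention are assumptions, not the code above): it assumes the depth pass packs a 24-bit fixed-point depth into the R, G and B channels with the most significant byte in R, and it reads GL_RGBA/GL_UNSIGNED_BYTE into a ByteBuffer, which OpenGL ES 2.0 is guaranteed to support, instead of GL_RGB into a FloatBuffer:

private float readPackedDepth(int x, int y) {
    ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(x, y, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
    int r = pixel.get(0) & 0xFF;
    int g = pixel.get(1) & 0xFF;
    int b = pixel.get(2) & 0xFF;
    // Reassemble the 24-bit value and map it back to [0, 1].
    return ((r << 16) | (g << 8) | b) / 16777215.0f;
}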

OpenCV: increasing the accuracy of threshold

I am working on an app that is expected to remove image backgrounds using OpenCV. At first I tried using GrabCut, but it was too slow and the results were not always accurate. Then I tried using threshold; although the results are not yet close to GrabCut's, it's very fast and looks like a better option. My code first looks at the image hue and analyzes which portion of it appears most; that portion is taken as the background. The issue is that at times it takes the foreground as background. Below is my code:
private Bitmap backGrndErase() {
    Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.skirt);
    Log.d(TAG, "bitmap: " + bitmap.getWidth() + "x" + bitmap.getHeight());
    bitmap = ResizeImage.getResizedBitmap(bitmap, calculatePercentage(40, bitmap.getWidth()), calculatePercentage(40, bitmap.getHeight()));
    Mat frame = new Mat();
    Utils.bitmapToMat(bitmap, frame);

    Mat hsvImg = new Mat();
    List<Mat> hsvPlanes = new ArrayList<>();
    Mat thresholdImg = new Mat();

    // int thresh_type = Imgproc.THRESH_BINARY_INV;
    // if (this.inverse.isSelected())
    int thresh_type = Imgproc.THRESH_BINARY;

    // threshold the image with the average hue value
    hsvImg.create(frame.size(), CvType.CV_8U);
    Imgproc.cvtColor(frame, hsvImg, Imgproc.COLOR_BGR2HSV);
    Core.split(hsvImg, hsvPlanes);

    // get the average hue value of the image
    double threshValue = this.getHistAverage(hsvImg, hsvPlanes.get(0));
    Imgproc.threshold(hsvPlanes.get(0), thresholdImg, threshValue, mThresholdValue, thresh_type);
    // Imgproc.adaptiveThreshold(hsvPlanes.get(0), thresholdImg, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 11, 2);

    Imgproc.blur(thresholdImg, thresholdImg, new Size(5, 5));

    // dilate to fill gaps, erode to smooth edges
    Imgproc.dilate(thresholdImg, thresholdImg, new Mat(), new Point(-1, -1), 1);
    Imgproc.erode(thresholdImg, thresholdImg, new Mat(), new Point(-1, -1), 3);
    Imgproc.threshold(thresholdImg, thresholdImg, threshValue, mThresholdValue, Imgproc.THRESH_BINARY);
    // Imgproc.adaptiveThreshold(thresholdImg, thresholdImg, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 11, 2);

    // create the new image
    Mat foreground = new Mat(frame.size(), CvType.CV_8UC3, new Scalar(255, 255, 255));
    frame.copyTo(foreground, thresholdImg);
    Utils.matToBitmap(foreground, bitmap);
    // return foreground;
    alreadyRun = true;
    return bitmap;
}
the method responsible for Hue:
private double getHistAverage(Mat hsvImg, Mat hueValues) {
    // init
    double average = 0.0;
    Mat hist_hue = new Mat();
    // 0-180: range of Hue values
    MatOfInt histSize = new MatOfInt(180);
    List<Mat> hue = new ArrayList<>();
    hue.add(hueValues);

    // compute the histogram
    Imgproc.calcHist(hue, new MatOfInt(0), new Mat(), hist_hue, histSize, new MatOfFloat(0, 179));

    // get the average Hue value of the image
    // (sum(bin(h) * h)) / (image-height * image-width)
    // equivalent to getting the hue of each pixel in the image, adding them up,
    // and dividing by the image size (height * width)
    for (int h = 0; h < 180; h++) {
        // for each bin, get its value and multiply it by the corresponding hue
        average += (hist_hue.get(h, 0)[0] * h);
    }

    // return the average hue of the image
    average = average / hsvImg.size().height / hsvImg.size().width;
    return average;
}
A sample of the input and output:
Input Image 2 and Output:
Input Image 3 and Output:
Indeed, as others have said you are unlikely to get good results just with a threshold on hue. You can use something similar to GrabCut, but faster.
Under the hood, GrabCut calculates foreground and background histograms, then calculates the probability of each pixel being FG/BG based on these histograms, and then optimizes the resulting probability map using graph cut to obtain a segmentation.
The last step is the most expensive, and it may be skipped depending on the application. Instead, you may apply a threshold to the probability map to obtain a segmentation. It may (and will) be worse than GrabCut, but it will be better than your current approach.
There are some points to consider for this approach. The choice of histogram model is very important here. You can either consider 2 channels in some space like YUV or HSV, consider 3 channels of RGB, or consider 2 channels of normalized RGB. You also have to select an appropriate bin size for those histograms. Too small bins would lead to 'overtraining', while too large ones will reduce the precision. The tradeoffs between those are a topic for a separate discussion; in brief, I would advise using RGB with 64 bins per channel to start and then seeing what changes work better for your data.
Also, you can get better results with coarse binning if you use interpolation to get values between bins. In the past I have used trilinear interpolation and it was reasonably good, compared to no interpolation at all.
But remember that there are no guarantees that your segmentation will be correct without prior knowledge on object shape, either with GrabCut, thresholding or this approach.
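As a rough sketch of this histogram approach with the OpenCV Java bindings (the method and the fgMask/bgMask inputs are hypothetical, e.g. rough masks derived from a user rectangle, and for brevity it uses two HSV channels with coarse bins rather than the 3-channel RGB, 64-bin setup suggested above):

private Mat histogramSegment(Mat frame, Mat fgMask, Mat bgMask) {
    Mat hsv = new Mat();
    Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);

    MatOfInt channels = new MatOfInt(0, 1);            // hue + saturation
    MatOfInt histSize = new MatOfInt(32, 32);          // coarse bins
    MatOfFloat ranges = new MatOfFloat(0, 180, 0, 256);

    // Build one histogram from the rough foreground region and one from the background.
    Mat fgHist = new Mat(), bgHist = new Mat();
    Imgproc.calcHist(Arrays.asList(hsv), channels, fgMask, fgHist, histSize, ranges);
    Imgproc.calcHist(Arrays.asList(hsv), channels, bgMask, bgHist, histSize, ranges);
    Core.normalize(fgHist, fgHist, 0, 255, Core.NORM_MINMAX);
    Core.normalize(bgHist, bgHist, 0, 255, Core.NORM_MINMAX);

    // Back-projection gives every pixel the histogram value of its own colour,
    // i.e. an (unnormalised) likelihood of belonging to the FG or BG model.
    Mat fgProb = new Mat(), bgProb = new Mat();
    Imgproc.calcBackProject(Arrays.asList(hsv), channels, fgHist, fgProb, ranges, 1);
    Imgproc.calcBackProject(Arrays.asList(hsv), channels, bgHist, bgProb, ranges, 1);

    // Threshold the probability map: foreground where the FG likelihood wins.
    Mat mask = new Mat();
    Core.compare(fgProb, bgProb, mask, Core.CMP_GT);
    return mask; // CV_8U mask, 255 = foreground
}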
I would try GrabCut again; it is one of the best segmentation methods available. This is the result I get
cv::Mat bgModel, fgModel; // the models (internally used)
cv::grabCut(image,        // input image
            object_mask,  // segmentation result
            rectang,      // rectangle containing foreground
            bgModel, fgModel, // models
            5,            // number of iterations
            cv::GC_INIT_WITH_RECT); // use rectangle
// Get the pixels marked as likely foreground
cv::compare(object_mask, cv::GC_PR_FGD, object_mask, cv::CMP_EQ);
cv::threshold(object_mask, object_mask, 0, 255, CV_THRESH_BINARY); // ensure the mask is binary
The only problem with GrabCut is that you have to give it, as input, a rectangle containing the object you want to extract. Apart from that it works pretty well.
Your method of finding the average hue is WRONG! As you most probably know, hue is expressed as an angle and takes values in the [0, 360] range. Therefore, a pixel with hue 360 essentially has the same colour as a pixel with hue 0 (both are pure red). In the same way, a pixel with hue 350 is actually closer to a pixel with hue 10 than to a pixel with hue, say, 300.
As for OpenCV, the cvtColor function actually divides the calculated hue value by 2 to fit it into an 8-bit integer. Thus, in OpenCV, hue values wrap after 180. Now, consider we have two red(ish) pixels with hues 10 and 170. If we take their average, we get 90, the hue of pure cyan and the exact opposite of red, which is not our desired value.
Therefore, to correctly find the average hue, you need to first find the average pixel value in RGB colour space and then calculate the hue from that RGB value. You can create a 1x1 matrix with the average RGB pixel and convert it to HSV/HSL.
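A small sketch of that suggestion with the OpenCV Java bindings, using the frame Mat from the question (the result variable names are just for illustration):

Scalar meanBgr = Core.mean(frame);                  // average B, G, R over the whole image
Mat avg = new Mat(1, 1, CvType.CV_8UC3, meanBgr);   // 1x1 matrix holding that average colour
Mat avgHsv = new Mat();
Imgproc.cvtColor(avg, avgHsv, Imgproc.COLOR_BGR2HSV);
double averageHue = avgHsv.get(0, 0)[0];            // in OpenCV's 0-179 hue range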
Following the same reasoning, applying a threshold to the hue image doesn't work flawlessly either: it does not account for the wrapping of hue values.
If I understand correctly, you want to find pixels with a hue similar to the background's. Assuming we know the colour of the background, I would do this segmentation in RGB space. I would introduce a tolerance variable, use the background pixel value as the centre and this tolerance as the radius, and thus define a sphere in RGB colour space. The rest is inspecting each pixel value: if it falls inside this sphere, classify it as background; otherwise, regard it as a foreground pixel.
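A minimal sketch of that sphere test with the OpenCV Java bindings (the method, bgColor and tolerance are hypothetical; bgColor could, for example, be sampled from a corner of the image that is known to be background):

private Mat backgroundMask(Mat frame, Scalar bgColor, double tolerance) {
    // Per-channel absolute difference from the background colour.
    Mat diff = new Mat();
    Core.absdiff(frame, bgColor, diff);
    diff.convertTo(diff, CvType.CV_32FC3);

    // Squared Euclidean distance to the background colour, per pixel.
    List<Mat> ch = new ArrayList<>();
    Core.split(diff, ch);
    Mat dist2 = new Mat();
    Core.multiply(ch.get(0), ch.get(0), dist2);
    Mat tmp = new Mat();
    Core.multiply(ch.get(1), ch.get(1), tmp);
    Core.add(dist2, tmp, dist2);
    Core.multiply(ch.get(2), ch.get(2), tmp);
    Core.add(dist2, tmp, dist2);

    // Inside the sphere (distance <= tolerance): background (255); outside: foreground (0).
    Mat mask = new Mat();
    Imgproc.threshold(dist2, mask, tolerance * tolerance, 255, Imgproc.THRESH_BINARY_INV);
    mask.convertTo(mask, CvType.CV_8U);
    return mask;
}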

PointCloud from two undistorted images

I want to do some Structure from Motion using OpenCV. This should happen on Android.
Currently I am having the cameraMatrix (intrinsic parameters) and the distortion coefficients from the camera calibration.
The user should now take 2 images of a building and the app should generate a point cloud.
Note: the user may also rotate the smartphone's camera a little bit as he moves along one side of the building...
At the current point, I have the following information:
the undistorted left image
the undistorted right image
a list of good matches using SIFT
the homography matrix
the fundamental matrix
I've searched the internet and now I am very confused how I should proceed...
Some say I need to use stereoRectify for getting Q and use Q with reprojectImageTo3D() for getting the pointCloud.
Others say that I need to use stereoRectifyUncalibrated and use H1 and H2 from this method to fill all the parameters of triangulatePoints.
In triangulatePoints I need the projection matrix of each camera/image, but from my understanding this seems definitely wrong.
So for me there are some problems:
How do I get R and T (rotation and translation) from all the information I already have?
If I use stereoRectify, the first 4 parameters are cameraMatrix1, distortionCoeff1, cameraMatrix2 and distortionCoeff2. If I do not have a stereo camera like the Kinect, are cameraMatrix1 and cameraMatrix2 equal for my setup (a single camera on a smartphone)?
How can I obtain Q? (I guess if I have R and T I can get it from stereoRectify.)
Is there another way of getting the projection matrices for each camera, so I can use the triangulation method provided by OpenCV?
I know these are a lot of questions, but googling only confused me, so I need to get this straight. I hope someone can help me with my problems.
Thanks
PS: As these are more theoretical questions, I did not post any code. If you want/need to see code or the values of my camera calibration, just ask and I will add them to my post.
I wrote something about using Farneback's optical flow for Structure from Motion before. You can read the details here.
But here's the code snippet; it's a somewhat working, but not great, implementation. I hope you can use it as a reference.
/* Try to find essential matrix from the points */
Mat fundamental = findFundamentalMat( left_points, right_points, FM_RANSAC, 0.2, 0.99 );
Mat essential = cam_matrix.t() * fundamental * cam_matrix;

/* Find the projection matrix between those two images */
SVD svd( essential );
static const Mat W = (Mat_<double>(3, 3) <<
    0, -1, 0,
    1,  0, 0,
    0,  0, 1);
static const Mat W_inv = W.inv();

Mat_<double> R1 = svd.u * W * svd.vt;
Mat_<double> T1 = svd.u.col( 2 );
Mat_<double> R2 = svd.u * W_inv * svd.vt;
Mat_<double> T2 = -svd.u.col( 2 );

static const Mat P1 = Mat::eye(3, 4, CV_64FC1 );
Mat P2 = (Mat_<double>(3, 4) <<
    R1(0, 0), R1(0, 1), R1(0, 2), T1(0),
    R1(1, 0), R1(1, 1), R1(1, 2), T1(1),
    R1(2, 0), R1(2, 1), R1(2, 2), T1(2));

/* Triangulate the points to find the 3D homogenous points in the world space.
   Note that each column of the 'out' matrix corresponds to a 3D homogenous point. */
Mat out;
triangulatePoints( P1, P2, left_points, right_points, out );

/* Since it's a homogenous (x, y, z, w) coord, divide by w to get (x, y, z, 1) */
vector<Mat> splitted = {
    out.row(0) / out.row(3),
    out.row(1) / out.row(3),
    out.row(2) / out.row(3)
};
merge( splitted, out );
return out;
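For the asker's Android/Java setup, roughly the same pipeline can be sketched with the OpenCV Java bindings, assuming OpenCV 3.x or newer is available (leftPts/rightPts are MatOfPoint2f built from the SIFT matches, cameraMatrix is the calibration from the question, and recoverPose takes care of choosing the correct one of the four R/t candidates):

Mat E = Calib3d.findEssentialMat(leftPts, rightPts, cameraMatrix,
        Calib3d.RANSAC, 0.999, 1.0);

// recoverPose resolves the four-fold ambiguity of the essential-matrix
// decomposition via a cheirality check and returns R and t directly.
Mat R = new Mat(), t = new Mat();
Calib3d.recoverPose(E, leftPts, rightPts, cameraMatrix, R, t);

// Projection matrices: P1 = K[I|0], P2 = K[R|t]. The same intrinsics are used
// for both views, since both images come from the one phone camera.
Mat Rt = Mat.zeros(3, 4, CvType.CV_64F);
R.copyTo(Rt.colRange(0, 3));
t.copyTo(Rt.col(3));
Mat P1 = new Mat(), P2 = new Mat();
Core.gemm(cameraMatrix, Mat.eye(3, 4, CvType.CV_64F), 1.0, Mat.zeros(3, 4, CvType.CV_64F), 0.0, P1);
Core.gemm(cameraMatrix, Rt, 1.0, Mat.zeros(3, 4, CvType.CV_64F), 0.0, P2);

// Triangulate; the result is 4xN homogeneous, so divide by the last row to get 3D points.
Mat points4D = new Mat();
Calib3d.triangulatePoints(P1, P2, leftPts, rightPts, points4D);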
This isn't OpenCV, but here is an example of exactly what you are asking for:
http://boofcv.org/index.php?title=Example_Stereo_Single_Camera
There is an Android demonstration application which includes that code here:
https://play.google.com/store/apps/details?id=org.boofcv.android
