OpenCV Android wrong coordinates

I'm facing a very strange issue with OpenCV for Android: when I access a pixel with Mat.at, it gives me the wrong pixel on the screen.
A simple example:
for (double y = (mat.rows - h) / 2; y < (mat.rows + h) / 2; y++) {
    for (double x = (mat.cols - w) / 2; x < (mat.cols + w) / 2; x++) {
        for (int c = 0; c < 3; c++) {
            mat.at<Vec3b>(y, x)[c] = saturate_cast<uchar>(255);
        }
    }
}
circle(mat, Point((mat.cols - w) / 2, (mat.rows - h) / 2), 10, Scalar(255,0,0,255));
circle(mat, Point((mat.cols + w) / 2, (mat.rows - h) / 2), 10, Scalar(255,0,0,255));
circle(mat, Point((mat.cols - w) / 2, (mat.rows + h) / 2), 10, Scalar(255,0,0,255));
circle(mat, Point((mat.cols + w) / 2, (mat.rows + h) / 2), 10, Scalar(255,0,0,255));
The circles should be aligned with the corners of the box, but they are not.
Is there some conversion I need to apply in order to access the true coordinates?

You don't post the initialization of mat, but it appears to be initialized as type CV_8UC4. This means that accessing the image using mat.at<cv::Vec3b> will give you incorrect pixel locations. Four-channel images must be accessed using at<cv::Vec4b> to give correct pixel locations, even if you are only modifying three of the channels, as in your example.
Unrelated: it's not advisable to use double as a counter variable type.

This is a long shot, but it might help you. Is this running on an SDK emulator? I remember having issues with those. When I tried my code on a real device it worked like a charm.


I want to display numbers from 0 to 100 clockwise in canvas

I want to print numbers from 0 to 100 clockwise on a canvas. It works when the values run from 0 to 12, but once I change the values or increase the range it stops working in my case. Can anyone help me here?
private int[] mClockHours = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};

private void drawNumeral(Canvas canvas) {
    mPaint.setTextSize(fontSize);
    for (int number : mClockHours) {
        String tmp = String.valueOf(number);
        mPaint.getTextBounds(tmp, 0, tmp.length(), mRect);
        double angle = Math.PI / 6 * (number - 2);
        Log.d("drawNumeral", "number: " + number);
        Log.d("drawNumeral", "Math.PI: " + Math.PI);
        Log.d("drawNumeral", "angle: " + angle);
        Log.d("drawNumeral", "temp: " + tmp);
        int x = (int) (width / 2 + Math.cos(angle) * mRadius - mRect.width() / 2);
        int y = (int) (height / 2 + Math.sin(angle) * mRadius - mRect.height() / 2);
        canvas.drawText(tmp, x, y, mPaint);
    }
}
You need to invalidate() the view to see changes you made. Here is a good tutorial to understand how to draw on canvas.
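As a minimal sketch (the setter name here is assumed), whenever the numeral data changes you would do something like:
public void setClockHours(int[] hours) {
    mClockHours = hours;
    invalidate(); // schedules another onDraw() pass so drawNumeral() runs with the new values
}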

Multiple colors for single path Android

Good day. I am creating a Siri-like wave for Android and I have encountered a big issue. I need the wave to be in 4 colors. Let's assume I only have one single line which is drawn on the screen according to the voice decibels. I am able to do that, but there is no way I can give 4 different colors to that same path. Assume it is one single path which moves from the start of the screen to the end; I need that line to have 4 different colors. Essentially I have to divide the path into 4 parts and draw a color for each part, but neither Google nor any other source gives me anything (I have not even found anything similar to what I want).
Meanwhile I am posting the code where I actually draw the lines.
for (int l = 0; l < mWaveCount; ++l) {
    float midH = height / 2.0f;
    float midW = width / 2.0f;
    float maxAmplitude = midH / 2f - 4.0f;
    float progress = 1.0f - l * 1.0f / mWaveCount;
    float normalAmplitude = (1.5f * progress - 0.5f) * mAmplitude;
    float multiplier = (float) Math.min(1.0, (progress / 3.0f * 2.0f) + (1.0f / 3.0f));
    if (l != 0) {
        mSecondaryPaint.setAlpha((int) (multiplier * 255));
    }
    mPath.reset();
    for (int x = 0; x < width + mDensity; x += mDensity) {
        float scaling = 1f - (float) Math.pow(1 / midW * (x - midW), 2);
        float y = scaling * maxAmplitude * normalAmplitude * (float) Math.sin(
                180 * x * mFrequency / (width * Math.PI) + mPhase) + midH;
        // canvas.drawPoint(x, y, l == 0 ? mPrimaryPaint : mSecondaryPaint);
        //
        // canvas.drawLine(x, y, x, 2*midH - y, mSecondaryPaint);
        if (x == 0) {
            mPath.moveTo(x, y);
        } else {
            mPath.lineTo(x, y);
            // final float x2 = (x + mLastX) / 2;
            // final float y2 = (y + mLastY) / 2;
            // mPath.quadTo(x2, y2, x, y);
        }
        mLastX = x;
        mLastY = y;
    }
    if (l == 0) {
        canvas.drawPath(mPath, mPrimaryPaint);
    } else {
        canvas.drawPath(mPath, mSecondaryPaint);
    }
}
I tried to change the color in the if (l == 0) { canvas.drawPath(mPath, mPrimaryPaint); } branch, but if I change it there, there is no result at all: either the line is separate and does not move at all (though it should), or the color is not applied, probably because I am doing it in a loop and every time the last color is picked for drawing. Can you help me out? Even a small reference is gold for me, because there is really nothing at all on the internet.
Even though Matt Horst's answer is fully correct, I found the simplest and easiest solution... I never thought it would be so easy. If anyone in the world needs to make a path divided into multiple colors, here is what you can do:
int[] rainbow = getRainbowColors();
Shader shader = new LinearGradient(0, 0, 0, width, rainbow,
        null, Shader.TileMode.REPEAT);
Matrix matrix = new Matrix();
matrix.setRotate(90);
shader.setLocalMatrix(matrix);
mPrimaryPaint.setShader(shader);
Here getRainbowColors() returns the array of colors you want your line to have, and width is the length of the path, so the shader knows how to lay out the colors to fit the path's length. Easy, isn't it? And it vexed me a lot to arrive at this simple point; you will find it nowhere on the internet unless you come across it while looking for something completely different.
It seems to me like you could set up one paint for each section, each with a different color. Then set up one path for each section too. Then as you draw across the screen, wherever the changeover point is between sections, start drawing with the new path. And make sure first to use moveTo() on the new path so it starts off where the old one left off.
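A minimal sketch of that approach, with assumed color values and a hypothetical computeWaveY() standing in for the sine computation from the question's loop:
int[] colors = {Color.RED, Color.YELLOW, Color.GREEN, Color.BLUE};
Paint[] paints = new Paint[colors.length];
Path[] paths = new Path[colors.length];
for (int i = 0; i < colors.length; i++) {
    paints[i] = new Paint(mPrimaryPaint); // copy stroke width, style, etc.
    paints[i].setColor(colors[i]);
    paths[i] = new Path();
}
float sectionWidth = width / (float) colors.length;
float lastX = 0, lastY = 0;
boolean first = true;
for (float x = 0; x < width + mDensity; x += mDensity) {
    float y = computeWaveY(x); // hypothetical: the sine computation from the question
    int section = Math.min((int) (x / sectionWidth), colors.length - 1);
    if (paths[section].isEmpty()) {
        // Start the new section at the previous sample so the segments join seamlessly.
        paths[section].moveTo(first ? x : lastX, first ? y : lastY);
    }
    paths[section].lineTo(x, y);
    lastX = x;
    lastY = y;
    first = false;
}
for (int i = 0; i < colors.length; i++) {
    canvas.drawPath(paths[i], paints[i]);
}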
For my solution, I tried changing the color of the linePaint in the onDraw call, but it kept drawing a single color.
So I used two different paints for the two colors and drew the paths on the canvas.
That worked. Hope it helps someone out there.

How to disable a button when switching Screens? (Android LibGDX)

My render method:
public void render(float delta) {
    Gdx.gl.glClearColor(1, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    int w = GateRunner.WIDTH;
    int h = GateRunner.HEIGHT;
    cam.update();
    cam.setToOrtho(false, w, h);
    Gdx.input.setInputProcessor(this);
    game.batch.begin();
    game.batch.draw(title_background, 0, 0, GateRunner.WIDTH, GateRunner.HEIGHT);
    game.batch.draw(title, (w / 2) - (w / 2), h / 2 + h * 12 / 90, w, (int) (w * 0.5));
    playButtonSprite.setPosition((w / 2) - (w / 2), h / 2 + h / 20);
    playButtonSprite.setSize(w, (int) (w * (144.0 / 1080)));
    playButtonSprite.draw(game.batch);
    game.batch.draw(instructions_button, (w / 2) - (w / 2), h / 2 + h / 20 - h * 3 / 40, w, (int) (w * (144.0 / 1080)));
    game.batch.draw(about_button, (w / 2) - (w / 2), h / 2 + h / 20 - 2 * h * 3 / 40, w, (int) (w * (144.0 / 1080)));
    game.batch.end();
}
My touchDown method:
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    float pointerX = InputTransform.getCursorToModelX(windowWidth, screenX);
    float pointerY = InputTransform.getCursorToModelY(windowHeight, screenY);
    if (playButtonSprite.getBoundingRectangle().contains(pointerX, pointerY)) { // Play button
        game.setScreen(new PlayScreen(game));
        dispose();
    }
    return true;
}
I wanted to make it so that when I click the Play "button", which is just a Sprite, it moves to PlayScreen. I did this by checking whether the rectangle where the Play sprite sits was clicked. However, even though this part works, whenever I click in that rectangle area while on PlayScreen, it runs the code again and starts PlayScreen over. How can I fix this?
Edit: there might be a better name for this question, so feel free to suggest one.
This is happening because when you change screens, you still have the same input processor in place, and it keeps checking whether that rectangle of the screen is tapped. Whether the button is visible or not is irrelevant.
You can do one of these to fix it...
Make each screen have its own input processor specific to its needs - this is the preferred option (see the sketch after this list)
Have a single input processor that checks what the current screen is before handling actions
Alternatively, look into frameworks like scene2d.ui to handle this sort of stuff for you.
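A minimal sketch of the first option, with assumed class names: each screen installs its own processor in show() and removes it in hide(), so taps can never leak into the next screen.
public class PlayScreen extends ScreenAdapter {
    private final InputAdapter processor = new InputAdapter() {
        @Override
        public boolean touchDown(int screenX, int screenY, int pointer, int button) {
            // Handle PlayScreen-specific touches only.
            return true;
        }
    };

    @Override
    public void show() {
        Gdx.input.setInputProcessor(processor);
    }

    @Override
    public void hide() {
        Gdx.input.setInputProcessor(null); // stop receiving events once this screen goes away
    }
}
Note that the question's render() method calls Gdx.input.setInputProcessor(this) on every frame; with the per-screen approach that call moves into show() and runs once per screen change.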

Color Image in the Google Tango Leibniz API

I am trying to capture the image data in the onFrameAvailable method on a Google Tango, using the Leibniz release. The header file says the buffer contains HAL_PIXEL_FORMAT_YV12 pixel data, the release notes say the buffer contains YUV420SP, but the documentation says the pixels are in RGBA8888 format, so I am a little confused. Additionally, I don't really get image data, just a lot of magenta and green. Right now I am trying to convert from YUV to RGB similar to this one. I guess there is something wrong with the stride, too. Here is the code of the onFrameAvailable method:
int size = (int) (buffer->width * buffer->height);
for (int i = 0; i < buffer->height; ++i)
{
    for (int j = 0; j < buffer->width; ++j)
    {
        float y = buffer->data[i * buffer->stride + j];
        float v = buffer->data[(i / 2) * (buffer->stride / 2) + (j / 2) + size];
        float u = buffer->data[(i / 2) * (buffer->stride / 2) + (j / 2) + size + (size / 4)];
        const float Umax = 0.436f;
        const float Vmax = 0.615f;
        y = y / 255.0f;
        u = (u / 255.0f - 0.5f);
        v = (v / 255.0f - 0.5f);
        TangoData::GetInstance().color_buffer[3 * (i * width + j)] = y;
        TangoData::GetInstance().color_buffer[3 * (i * width + j) + 1] = u;
        TangoData::GetInstance().color_buffer[3 * (i * width + j) + 2] = v;
    }
}
I am doing the yuv to rgb conversion in the fragment shader.
Has anyone ever obtained an RGB image on the Google Tango Leibniz release? Or has anyone had similar problems when converting from YUV to RGB?
YUV420SP (aka NV21) is correct for the time being. An explanation is here. In this format you have a width x height array where each element is a Y byte, followed by a width/2 x height/2 array where each element is a V byte and a U byte. Your code is implementing YV12, which has separate planes for V and U instead of interleaving them in one array.
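To make the layout concrete, here is a Java-style sketch of per-pixel NV21 indexing (variable names assumed; width stands in for buffer->stride, which is only valid when the two are equal):
// Y plane: width*height bytes; then one interleaved V,U row per two image rows.
int yValue = data[row * width + col] & 0xFF;
int vuBase = width * height + (row / 2) * width;
int vValue = data[vuBase + (col & ~1)] & 0xFF;     // V comes first in NV21
int uValue = data[vuBase + (col & ~1) + 1] & 0xFF;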
You mention that you are doing YUV to RGB conversion in a fragment shader. If all you want to do with the camera images is draw then you can use TangoService_connectTextureId() and TangoService_updateTexture() instead of TangoService_connectOnFrameAvailable(). This approach delivers the camera image to you already in an OpenGL texture that gives your fragment shader RGB values without bothering with the pixel format details. You will need to bind to GL_TEXTURE_EXTERNAL_OES (instead of GL_TEXTURE_2D), and your fragment shader would look something like this:
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec4 v_t;
uniform samplerExternalOES colorTexture;
void main() {
    gl_FragColor = texture2D(colorTexture, v_t.xy);
}
If you really do want to pass YUV data to a fragment shader for some reason, you can do so without preprocessing it into floats. In fact, you don't need to unpack it at all - for NV21 just define a 1-byte texture for Y and a 2-byte texture for VU, and load the data as-is. Your fragment shader will use the same texture coordinates for both.
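For instance, a sketch of that two-texture upload in Android GLES20 terms (texture ids and ByteBuffers assumed to be created elsewhere):
// Y plane: one byte per texel. NEAREST filtering keeps the texture complete without mipmaps.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTexId);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
        width, height, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yBuffer);
// Interleaved V,U pairs: two bytes per texel at half resolution.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vuTexId);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE_ALPHA,
        width / 2, height / 2, 0, GLES20.GL_LUMINANCE_ALPHA, GLES20.GL_UNSIGNED_BYTE, vuBuffer);
Sampling the first texture then yields Y in the RGB channels, and sampling the second at the same coordinates yields V in the RGB channels and U in alpha (NV21 order).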
By the way, if someone experienced problems with capturing the image data on the Leibniz release, too: One of the developers told me that there is a bug concerning the camera and that it should be fixed with the Nash release.
The bug caused my buffer to be null but when I used the Nash update I got data again. However, right now the problem is that the data I am using doesn't make sense. I guess/hope the cause is that the Tablet didn't get the OTA update yet (there can be a gap between the actual release date and the OTA software update).
Just try the following code:
//C#
public bool YV12ToPhoto(byte[] data, int width, int height, out Texture2D photo)
{
    photo = new Texture2D(width, height);
    int uv_buffer_offset = width * height;
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            int x_index = j;
            if (j % 2 != 0)
            {
                x_index = j - 1;
            }
            // Get the YUV color for this pixel.
            int yValue = data[(i * width) + j];
            int uValue = data[uv_buffer_offset + ((i / 2) * width) + x_index + 1];
            int vValue = data[uv_buffer_offset + ((i / 2) * width) + x_index];
            // Convert the YUV value to RGB.
            float r = yValue + (1.370705f * (vValue - 128));
            float g = yValue - (0.689001f * (vValue - 128)) - (0.337633f * (uValue - 128));
            float b = yValue + (1.732446f * (uValue - 128));
            Color co = new Color();
            co.b = b < 0 ? 0 : (b > 255 ? 1 : b / 255.0f);
            co.g = g < 0 ? 0 : (g > 255 ? 1 : g / 255.0f);
            co.r = r < 0 ? 0 : (r > 255 ? 1 : r / 255.0f);
            co.a = 1.0f;
            photo.SetPixel(width - j - 1, height - i - 1, co);
        }
    }
    return true;
}
I have succeeded.

Detect road lanes with android and opencv

I have a problem with detecting road lanes with my phone.
I wrote some code for road lane detection, but it does not work for me.
From the camera I convert the normal view to BGR colors and try to use GaussianBlur and Canny, but I think I am not drawing the lanes well for detection.
Maybe someone has another idea for how to detect road lanes with OpenCV?
Mat mYuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
Mat mRgba = new Mat(height + height / 2, width, CvType.CV_8UC1);
Mat thresholdImage = new Mat(height + height / 2, width, CvType.CV_8UC1);
mYuv.put(0, 0, data);
Imgproc.cvtColor(mYuv, mRgba, Imgproc.COLOR_YUV420p2BGR, 4);
// convert to grayscale
Imgproc.cvtColor(mRgba, thresholdImage, Imgproc.COLOR_mRGBA2RGBA, 4);
// Perform a Gaussian blur (convolving with a 5x5 Gaussian) & detect edges
Imgproc.GaussianBlur(mRgba, mRgba, new Size(5, 5), 2.2, 2);
Imgproc.Canny(mRgba, thresholdImage, VActivity.CANNY_MIN_TRESHOLD, VActivity.CANNY_MAX_THRESHOLD);
Mat lines = new Mat();
double rho = 1;
double theta = Math.PI / 180;
int threshold = 50;
// do Hough transform to find lanes
Imgproc.HoughLinesP(thresholdImage, lines, rho, theta, threshold, VActivity.HOUGH_MIN_LINE_LENGTH, VActivity.HOUGH_MAX_LINE_GAP);
for (int x = 0; x < lines.cols() && x < 1; x++) {
    double[] vec = lines.get(0, x);
    double x1 = vec[0],
           y1 = vec[1],
           x2 = vec[2],
           y2 = vec[3];
    Point start = new Point(x1, y1);
    Point end = new Point(x2, y2);
    Core.line(mRgba, start, end, new Scalar(255, 0, 0), 3);
}
This approach is fine and I've done something similar, not for road line detection but I did notice that it could be used for that purpose. Some comments:
Not sure why you do:
Imgproc.cvtColor(mRgba, thresholdImage, Imgproc.COLOR_mRGBA2RGBA, 4);
as 1. the comment says convert to grayscale, which is a single channel, and 2. thresholdImage will get overwritten by the call to Canny later. You just need to dimension thresholdImage with:
thresholdImage = new Mat(mRgba.size(), CvType.CV_8UC1);
What are your parameter values to the call to Canny? I played about with mine considerably and ended up with values like: threshold1 = 441, threshold2 = 160, aperture = 3.
Likewise Imgproc.HoughLinesP: I use Imgproc.HoughLines rather than Imgproc.HoughLinesP with parameters: threshold = 80, minLen = 30, maxLen = 10.
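For illustration, a sketch of that standard-Hough variant against the OpenCV 2.4 Java bindings used in the question (the threshold value and the 2000-pixel extension are assumptions); HoughLines returns (rho, theta) pairs, so drawable endpoints have to be derived from the line's normal form:
Mat lines = new Mat();
Imgproc.HoughLines(thresholdImage, lines, 1, Math.PI / 180, 80);
for (int i = 0; i < lines.cols(); i++) {
    double[] vec = lines.get(0, i);
    double rho = vec[0], theta = vec[1];
    double a = Math.cos(theta), b = Math.sin(theta);
    double x0 = a * rho, y0 = b * rho; // closest point on the line to the origin
    // Extend far in both directions along the line's direction vector (-sin, cos).
    Point p1 = new Point(x0 - 2000 * b, y0 + 2000 * a);
    Point p2 = new Point(x0 + 2000 * b, y0 - 2000 * a);
    Core.line(mRgba, p1, p2, new Scalar(255, 0, 0), 3);
}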
Also have a look at:
for (int x = 0; x < lines.cols() && x < 1; x++){
&& x < 1 means you will only take the first line that the call to HoughLinesP returns. I'd suggest you remove this and use some other criteria to reduce the number of lines; for example, I was interested in only horizontal and vertical lines, so I used atan2 to calculate line angles and exclude those that deviate too much.
UPDATE
Here is how I get the angle of a line. Assuming coordinates of one point is (x1,y1) and the other (x2, y2) then to get the angle:
double lineAngle = Math.atan2(y2 - y1, x2 - x1);
This returns an angle in radians between -PI and PI.
With regard to the Canny parameters, I would experiment - I set up onTouch so that I could adjust the threshold values by touching certain parts of the screen and see the effects in realtime. Note that aperture is a rather disappointing parameter: it seems to only accept the odd values 3, 5 and 7, and 3 is the best that I've found.
Something like this in the onTouch method:
int w = mRgba.width();
int h = mRgba.height();
float x = event.getX();
float y = event.getY();
if ((x < w / 3) && (y < h / 2)) t1 += 20;
if ((x < w / 3) && (y >= h / 2)) t1 -= 20;
if ((x > 2 * w / 3) && (y < h / 2)) t2 += 20;
if ((x > 2 * w / 3) && (y >= h / 2)) t2 -= 20;
t1 and t2 being the threshold values passed to the Canny call.
