I am looking for tips and hints on the best way to approach something. I want to either import or create geometry (initially a cylinder), isolate half of it, move the vertices around, and then export it again as an .obj or .stl. I realise there are libraries that will do this, but I need this to work on Android and, as far as I know, the libraries don't. I made these images in 3ds Max to explain what I mean. I can handle much of the coding, BUT the geometry mathematics I just cannot get my head around.
I have adapted this method for creating a cylinder from an example in the book: Processing 2: Creative Coding Hotshot...
float[][] vertx;
float[][] verty;

void setup() {
  size(800, 600, P3D);
  vertx = new float[36][36]; // 36 rings (triangle strips) x 36 vertices per ring
  verty = new float[36][36];
}
void draw() {
  hint(ENABLE_DEPTH_TEST);
  pushMatrix();
  background(125);
  fill(255);
  strokeWeight(0.5);
  translate(width/2, height/2, 200);
  rotateX(radians(-45));
  scale(1);
  translate(0, -50, 0);

  initPoints();

  // side wall: one triangle strip between each pair of adjacent rings
  beginShape(TRIANGLE_STRIP);
  for (int h = 1; h < 36; h++) {
    for (int a = 0; a < 37; a++) {
      int aa = a % 36; // wrap around so the strip closes on itself
      // normal( vertx[h][aa], 0, verty[h][aa]);
      vertex(vertx[h][aa], h * 5.0, verty[h][aa]);
      // normal( vertx[h-1][aa], 0, verty[h-1][aa]);
      vertex(vertx[h-1][aa], (h-1) * 5.0, verty[h-1][aa]);
    }
  }
  endShape();

  beginShape(TRIANGLE_FAN); // bottom cap
  int h = 35;
  vertex(0, h * 5, 0); // centre of the fan
  for (int a = 0; a < 37; a++) {
    int aa = a % 36;
    vertex(vertx[h][aa], h * 5, verty[h][aa]);
  }
  endShape();

  popMatrix();
  hint(DISABLE_DEPTH_TEST);
}
// radius as a function of angle and height - constant for a plain cylinder,
// but this is the natural place to reshape the surface
float getR(float a, float h) {
  float r = 50;
  return r;
}

void initPoints() {
  for (int h = 0; h < 36; h++) {
    for (int a = 0; a < 36; a++) {
      float r = getR(a * 10.0, h * 5.0); // 10 degrees per step (360/36)
      vertx[h][a] = cos(radians(a * 10.0)) * r;
      verty[h][a] = sin(radians(a * 10.0)) * r;
    }
  }
}
...and I am assuming it is possible to isolate/grab certain vertices from the array?
Any other approaches, or any advice on how to develop this? Is the import > transform method even possible? @Spektre - might there be a better approach than this?
This is a very general question, so I'll give you a general answer.
Your first step is to really digest the code you posted: what does each of the calls to vertex() do? Where is each vertex being placed? You should be able to write your own code that draws a cylinder without copy-pasting anything from this code, and you should be able to draw other shapes. Start by drawing only a few vertexes to see where they show up, then add a few more, and a few more, until you understand exactly what this code is doing.
There are a ton of tutorials that can help you understand the code. Here is a 2D tutorial to get you started (read and understand that one first), and here is a 3D version.
Once you have that working, it should be pretty easy to manipulate the vertexes however you want. If you want the user to be able to select a vertex, Google something like "3d point picking" for a ton of resources.
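For example, here is a rough, untested sketch of "grabbing" one half of the cylinder from the code you posted: since initPoints() fills vertx and verty, you can move any subset of those entries before drawing. The function name, the x > 0 test, and the 1.3 factor are all arbitrary choices of mine, just to show the idea:
// Sketch: displace every vertex on the +x half of the cylinder.
// Call this after initPoints() in draw().
void deformHalf() {
  for (int h = 0; h < 36; h++) {
    for (int a = 0; a < 36; a++) {
      if (vertx[h][a] > 0) {  // vertex lies on the chosen half
        vertx[h][a] *= 1.3;   // push it away from the axis
      }
    }
  }
}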
Finally, I'm not sure Processing provides an easy way to export a .obj file. But again, Google is your friend: try googling something like "Processing export obj file" for a handful of libraries that seem to help with your goal.
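If you'd rather not pull in a library, the .obj format is plain text, so you can also write it by hand. Here is a minimal, untested sketch for the 36x36 grid above, using Processing's createWriter(); the function name and the quad indexing are my own reconstruction (note that .obj face indices are 1-based):
// Write the cylinder wall as "v" vertex lines plus quad "f" faces.
void exportObj(String filename) {
  PrintWriter out = createWriter(filename);
  for (int h = 0; h < 36; h++) {
    for (int a = 0; a < 36; a++) {
      out.println("v " + vertx[h][a] + " " + (h * 5.0) + " " + verty[h][a]);
    }
  }
  for (int h = 0; h < 35; h++) {
    for (int a = 0; a < 36; a++) {
      int a2 = (a + 1) % 36;      // wrap around the ring
      int i0 = h * 36 + a + 1;    // .obj indices start at 1
      int i1 = h * 36 + a2 + 1;
      int i2 = (h + 1) * 36 + a2 + 1;
      int i3 = (h + 1) * 36 + a + 1;
      out.println("f " + i0 + " " + i1 + " " + i2 + " " + i3);
    }
  }
  out.flush();
  out.close();
}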
If you get stuck on a specific step, post an MCVE and ask a specific question. Good luck!
Related
I am quite new to OpenGL ES 2.0 on Android. I am working on a project which draws a few plane indicators on screen (like an altimeter, a compass, etc.). After doing the tutorial from the official Google dev site (http://developer.android.com/training/graphics/opengl/index.html) I just continued along this path, drawing circles, triangles, squares, etc. (only 2D stuff). I can make the drawn objects move using rotation and translation matrices, but the only way I know how to do this (apart from how they did it in the tutorial) is like this, in the onDrawFrame() method of my renderer class:
// set values for all indicators
try {
    Thread.sleep(1);
    // for roll + pitch:
    if (roll < 90) {
        roll += 1.5f;
    } else roll = 0;
    if (pitch < 90) {
        pitch += 0.5f;
    } else pitch = 0;
    // for compass:
    if (compassDeg > 360) compassDeg = 0;
    else compassDeg += 1;
    // for altimeter:
    if (realAltitude >= 20000) realAltitude = 0;
    else realAltitude += 12;
    // for speedometer:
    if (realSpeed >= 161) realSpeed = 0;
    else realSpeed += 3;
} catch (InterruptedException e) {
    e.printStackTrace();
}
roll, pitch, compassDeg, speed, etc. are the parameters the indicators receive, and I designed them to move accordingly (if compassDeg = 0, for example, the compass points north, and so on). These parameters will eventually be received via Bluetooth, but for now I'm modifying them directly in the code because I don't have a Bluetooth implementation yet.
I am pretty sure this is not the best way to do it: sometimes the drawn objects stutter and seem to go back a few frames, then forward again, and I don't think pausing the drawing method is a good idea in general.
I've seen that in the tutorial I mentioned in the beginning they use something like this:
// Use the following code to generate constant rotation.
// Leave this code out when using TouchEvents.
long time = SystemClock.uptimeMillis() % 4000L;
float contAngle = -0.090f * ((int) time);
Matrix.setRotateM(contRotationMatrix, 0, contAngle, 0, 0, -1.0f);
Matrix.multiplyMM(contMVPMatrix, 0, mMVPMatrix4, 0, contRotationMatrix, 0);
which I still find kind of weird; there has to be a more straightforward way to specify how each frame is drawn, and to rotate and translate objects frame by frame.
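What I am imagining (my own guess, not from the tutorial, so I may be off) is some kind of delta-time step, where each frame advances the state by the real time elapsed since the last frame instead of sleeping:
// Sketch: advance indicator state by elapsed wall-clock time per frame.
private long lastFrameTime = SystemClock.uptimeMillis();

@Override
public void onDrawFrame(GL10 unused) {
    long now = SystemClock.uptimeMillis();
    float dt = (now - lastFrameTime) / 1000f;  // seconds since last frame
    lastFrameTime = now;
    roll = (roll + 90f * dt) % 90f;            // 90 degrees per second, wraps at 90
    compassDeg = (compassDeg + 60f * dt) % 360f;
    // ... draw the indicators with the updated values ...
}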
So my question is: how do I make everything move frame by frame (or something like that), or at least how do I find out when one frame has been drawn?
I want to ask for some ideas / study materials related to binarization. I am trying to create a system that detects human emotions. I am able to get areas such as the brows, eyes, nose, mouth, etc., but then comes another stage -> processing...
My images are taken in various places, at various times of day, and in various weather conditions. This is problematic during binarization: with the same threshold value, some images come out fully black while others look fine and give me the information I want.
What I want to ask you about is:
1) Is there a known way to bring all images to the same level of brightness?
2) How can I make the threshold value depend on the brightness of the image?
What I have tried so far is normalizing the image... but it has no effect; maybe I'm doing something wrong. I'm using OpenCV (for Android):
Core.normalize(cleanFaceMatGRAY, cleanFaceMatGRAY,0, 255, Core.NORM_MINMAX, CvType.CV_8U);
EDIT:
I tried adaptive thresholding and OTSU - they didn't work for me. I had problems using CLAHE on Android, but I managed to implement the Niblack algorithm.
Core.normalize(cleanFaceMatGRAY, cleanFaceMatGRAY, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
niblackThresholding(cleanFaceMatGRAY, -0.2);

private void niblackThresholding(Mat image, double parameter) {
    Mat meanPowered = image.clone();
    Core.multiply(image, image, meanPowered);
    Scalar mean = Core.mean(image);
    Scalar stdmean = Core.mean(meanPowered);
    double thresholdValue = mean.val[0] + parameter * stdmean.val[0];
    int totalRows = image.rows();
    int totalCols = image.cols();
    for (int cols = 0; cols < totalCols; cols++) {
        for (int rows = 0; rows < totalRows; rows++) {
            if (image.get(rows, cols)[0] > thresholdValue) {
                image.put(rows, cols, 255);
            } else {
                image.put(rows, cols, 0);
            }
        }
    }
}
The results are really good, but still not good enough for some images. I'm pasting links because the images are big and I don't want to take up too much screen space.
For example, this one is thresholded really well:
https://dl.dropboxusercontent.com/u/108321090/a1.png
https://dl.dropboxusercontent.com/u/108321090/a.png
But bad light sometimes produces shadows, which gives this effect:
https://dl.dropboxusercontent.com/u/108321090/b1.png
https://dl.dropboxusercontent.com/u/108321090/b.png
Do you have any ideas that could help me improve the thresholding of images with large lighting differences (shadows)?
EDIT2:
I found that my previous algorithm was implemented incorrectly: the standard deviation was calculated the wrong way, and in Niblack thresholding the mean is a local value, not a global one. I fixed it according to this reference: http://arxiv.org/ftp/arxiv/papers/1201/1201.5227.pdf
private void niblackThresholding2(Mat image, double parameter, int window) {
    int totalRows = image.rows();
    int totalCols = image.cols();
    int offset = (window - 1) / 2;
    double thresholdValue = 0;
    double localMean = 0;
    double meanDeviation = 0;
    // y indexes rows and x indexes columns, matching image.get(row, col)
    for (int y = offset + 1; y < totalRows - offset; y++) {
        for (int x = offset + 1; x < totalCols - offset; x++) {
            localMean = calculateLocalMean(x, y, image, window);
            meanDeviation = image.get(y, x)[0] - localMean;
            thresholdValue = localMean * (1 + parameter * ((meanDeviation / (1 - meanDeviation)) - 1));
            Log.d("QWERTY", "THRESHOLD " + thresholdValue);
            if (image.get(y, x)[0] > thresholdValue) {
                image.put(y, x, 255);
            } else {
                image.put(y, x, 0);
            }
        }
    }
}
private double calculateLocalMean(int x, int y, Mat image, int window) {
    int offset = (window - 1) / 2;
    // window x window neighbourhood around (x, y); Point takes (col, row)
    Point leftTop = new Point(x - (offset + 1), y - (offset + 1));
    Point bottomRight = new Point(x + offset, y + offset);
    Rect tempRect = new Rect(leftTop, bottomRight);
    Mat tempMat = new Mat(image, tempRect);
    return Core.mean(tempMat).val[0];
}
Results for a 7x7 window and the k parameter proposed in the reference (k = 0.34). I still can't get rid of the shadow on faces:
https://dl.dropboxusercontent.com/u/108321090/b2.png
https://dl.dropboxusercontent.com/u/108321090/b1.png
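As an aside, the per-pixel Rect plus Core.mean in calculateLocalMean is slow; the same local statistics can be computed in one pass with box filters. A sketch of my own (the method name is mine, using the classic Niblack form T = m + k*s and a 7x7 window):
// Per-pixel local mean and std-dev via box filters, then T = mean + k * std.
void niblackWithBoxFilters(Mat image, double k) {
    Mat f = new Mat(), mean = new Mat(), sqMean = new Mat();
    image.convertTo(f, CvType.CV_32F);
    Imgproc.boxFilter(f, mean, CvType.CV_32F, new Size(7, 7));
    Core.multiply(f, f, sqMean);
    Imgproc.boxFilter(sqMean, sqMean, CvType.CV_32F, new Size(7, 7));
    Mat variance = new Mat(), std = new Mat();
    Core.multiply(mean, mean, variance);        // (E[x])^2
    Core.subtract(sqMean, variance, variance);  // E[x^2] - (E[x])^2
    Core.sqrt(variance, std);
    Mat thresh = new Mat();
    Core.multiply(std, new Scalar(k), std);     // k * std (k is negative here)
    Core.add(mean, std, thresh);                // T = mean + k * std
    Mat binary = new Mat();
    Core.compare(f, thresh, binary, Core.CMP_GT); // 255 where pixel > T, else 0
}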
Things to look at:
http://docs.opencv.org/java/org/opencv/imgproc/CLAHE.html
http://docs.opencv.org/java/org/opencv/imgproc/Imgproc.html#adaptiveThreshold(org.opencv.core.Mat,%20org.opencv.core.Mat,%20double,%20int,%20int,%20int,%20double)
http://docs.opencv.org/java/org/opencv/imgproc/Imgproc.html#threshold(org.opencv.core.Mat,%20org.opencv.core.Mat,%20double,%20double,%20int) (THRESH_OTSU)
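For reference, a minimal sketch of wiring the first two up through the Java API (the method name, clip limit, tile size, block size, and C constant below are my own illustrative choices, not tuned for your images):
// Equalize local contrast with CLAHE, then threshold adaptively.
Mat preprocess(Mat cleanFaceMatGRAY) {
    CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8, 8));
    clahe.apply(cleanFaceMatGRAY, cleanFaceMatGRAY);
    Mat binary = new Mat();
    Imgproc.adaptiveThreshold(cleanFaceMatGRAY, binary, 255,
            Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,
            Imgproc.THRESH_BINARY, 15, 5);
    return binary;
}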
I'm writing an image-processing app on Android, and I'm trying to speed it up using the NDK. I have the following for-loop:
int x, y, c, idx;
const int pitch3 = pitch * 3;
float adj, result;
...
// px, py, u, u_bar are all float arrays of size nx*ny*3
// theta, tau, denom are float constants
// idx >= pitch3
for (y = 1; y < ny; ++y)
{
    for (x = 1; x < nx; ++x)
    {
        for (c = 0; c < 3; ++c)
        {
            adj = -px[idx] - py[idx] + px[idx - 3] + py[idx - pitch3];
            result = ((u[idx] - tau * adj) + tau * f[idx]) * denom;
            u_bar[idx] = result + theta * (result - u[idx]);
            u[idx] = result;
            ++idx;
        }
    }
}
I'm wondering: is it possible to speed up this loop?
I'm thinking that fixed-point arithmetic wouldn't help much, except on really old Android phones (which I'm not going to target). Would writing it in assembly give a big improvement?
EDIT: I know I could use SIMD/NEON instructions, but I think they are not that common yet...
Since you're accessing the arrays as flat structures and idx is simply incremented, the three levels of looping only serve to advance idx. Assuming the loop is meant to visit every element from pitch3 onward, you can collapse it to for (idx = pitch3; idx < nx*ny*3; idx++).
Another option is to move to fixed-point math. Do you really need the full dynamic range of floating point?
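To illustrate the fixed-point idea (a sketch in Java for brevity, though the same applies in native code): store values as Q16.16 integers, where multiplication needs a 64-bit intermediate. The helper names are mine:
// Q16.16 fixed point: 16 integer bits, 16 fractional bits.
static int toFixed(float f)       { return (int) (f * 65536.0f); }
static float toFloat(int q)       { return q / 65536.0f; }
static int mulFixed(int a, int b) { return (int) (((long) a * b) >> 16); }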
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 3 years ago.
I want to make an application like CamScanner for cropping a document, and I need the same functionality as in my two images: the first image shows the picture captured by the camera, and the second shows the recognized document region within it.
I have researched more and more but am not getting any output, so I'm asking here: if anyone has done this, please tell me.
Thanks
I assume your problem is detecting the object to scan.
Object detection mechanisms like pattern matching or feature detection won't bring you the results you are looking for, as you don't know exactly what object you are scanning.
Basically, you are searching for a rectangular object in the picture.
A basic approach could be the following:
1) Run a Canny edge detector on the image. It can help to blur the image a bit before doing this. The edges of the object should be clearly visible.
2) Do a Hough transform to find lines in the picture.
3) Search for lines at an angle of around 90° to each other. The problem is finding the right ones; maybe it is enough to use the lines closest to the frame of the picture that are reasonably parallel to it.
4) Find the intersection points to define the corners of your object.
At least this should give you a hint of where to research further.
As a further step in such an app, you will have to compute the projection of those corner points and apply a perspective transform to the object, as sketched below.
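To make steps 1 and 2 concrete, here is a rough OpenCV (Java) sketch of the edge and line detection; the method name is mine, and all the thresholds are illustrative and will need tuning:
// Blur + Canny, then probabilistic Hough transform for line segments.
void findDocumentLines(Mat src) {
    Mat gray = new Mat(), edges = new Mat(), lines = new Mat();
    Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
    Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
    Imgproc.Canny(gray, edges, 50, 150);
    Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 80, 50, 10);
    for (int i = 0; i < lines.rows(); i++) {
        double[] l = lines.get(i, 0);  // x1, y1, x2, y2 of one segment
        // ... group segments by angle, keep near-perpendicular pairs,
        // and intersect them to get the document corners ...
    }
}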
I hope this helps.
After writing all this I found this post; it should help you a lot.
As my answer targets OpenCV, you have to use the OpenCV library.
In order to do this, you need to install the Android Native Development Kit (NDK).
There are some good tutorials on how to use OpenCV on Android on the OpenCV for Android page.
One thing to keep in mind is that almost every function of the Java wrapper calls a native method, and that costs a lot of time. So you want to do as much as possible in your native code before returning the results to the Java part.
I know I am too late to answer but it might be helpful to someone.
Try the following code.
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    path = new Path();
    path.moveTo(x1, y1);   // start at the first corner
    path.lineTo(x2, y2);
    path.lineTo(x3, y3);
    path.lineTo(x4, y4);
    path.lineTo(x1, y1);   // close the quad by returning to the start
    canvas.drawPath(path, currentPaint);
}
Pass your image Mat to this method:
void findSquares(Mat image, List<MatOfPoint> squares) {
    int N = 10;
    squares.clear();
    Mat smallerImg = new Mat(new Size(image.width() / 2, image.height() / 2), image.type());
    Mat gray = new Mat(image.size(), image.type());
    Mat gray0 = new Mat(image.size(), CvType.CV_8U);
    // down-scale and upscale the image to filter out noise
    Imgproc.pyrDown(image, smallerImg, smallerImg.size());
    Imgproc.pyrUp(smallerImg, image, image.size());
    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++) {
        Core.extractChannel(image, gray, c);
        // try several threshold levels
        for (int l = 1; l < N; l++) {
            Imgproc.threshold(gray, gray0, (l + 1) * 255 / N, 255, Imgproc.THRESH_BINARY);
            List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
            // find contours and store them all as a list
            Imgproc.findContours(gray0, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
            MatOfPoint approx = new MatOfPoint();
            // test each contour
            for (int i = 0; i < contours.size(); i++) {
                // approxPolyDP here is a small helper wrapping Imgproc.approxPolyDP (see below)
                approx = approxPolyDP(contours.get(i),
                        Imgproc.arcLength(new MatOfPoint2f(contours.get(i).toArray()), true) * 0.02, true);
                // square contours should have 4 vertices after approximation,
                // a relatively large area (to filter out noisy contours),
                // and be convex.
                // Note: the absolute value of the area is used because the
                // area may be positive or negative, in accordance with the
                // contour orientation
                double area = Imgproc.contourArea(approx);
                if (area > 5000) {
                    if (approx.toArray().length == 4 &&
                            Math.abs(Imgproc.contourArea(approx)) > 1000 &&
                            Imgproc.isContourConvex(approx)) {
                        double maxCosine = 0;
                        for (int j = 2; j < 5; j++) {
                            // find the maximum cosine of the angle between joint edges
                            double cosine = Math.abs(angle(approx.toArray()[j % 4],
                                    approx.toArray()[j - 2], approx.toArray()[j - 1]));
                            maxCosine = Math.max(maxCosine, cosine);
                        }
                        // if the cosines of all angles are small
                        // (all angles are ~90 degrees) then write the quadrangle
                        // vertices to the resulting sequence
                        if (maxCosine < 0.3)
                            squares.add(approx);
                    }
                }
            }
        }
    }
}
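The snippet above calls two helpers that are not shown. These are my reconstructions: a MatOfPoint wrapper around Imgproc.approxPolyDP, and the angle() cosine helper from OpenCV's squares.cpp sample ported to Java:
// Approximate a contour with a polygon (wrapper converting to/from MatOfPoint2f).
static MatOfPoint approxPolyDP(MatOfPoint curve, double epsilon, boolean closed) {
    MatOfPoint2f in = new MatOfPoint2f(curve.toArray());
    MatOfPoint2f out = new MatOfPoint2f();
    Imgproc.approxPolyDP(in, out, epsilon, closed);
    return new MatOfPoint(out.toArray());
}

// Cosine of the angle at pt0 between the segments pt0->pt1 and pt0->pt2.
static double angle(Point pt1, Point pt2, Point pt0) {
    double dx1 = pt1.x - pt0.x, dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x, dy2 = pt2.y - pt0.y;
    return (dx1 * dx2 + dy1 * dy2)
            / Math.sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
}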
This method gives you the four corner points of the document; you can then crop the image with the method below:
public Bitmap warpDisplayImage(Mat inputMat) {
    int resultWidth = inputMat.width();
    int resultHeight = inputMat.height();
    // "points" is the List<Point> of four document corners produced by the
    // previous method, ordered clockwise by the orderRectCorners() helper
    Mat startM = Converters.vector_Point2f_to_Mat(orderRectCorners(points));
    // destination corners: top-left, top-right, bottom-right, bottom-left
    Point ocvPOut3 = new Point(0, 0);
    Point ocvPOut2 = new Point(resultWidth, 0);
    Point ocvPOut1 = new Point(resultWidth, resultHeight);
    Point ocvPOut4 = new Point(0, resultHeight);
    Mat outputMat = new Mat(resultHeight, resultWidth, CvType.CV_8UC4);
    List<Point> dest = new ArrayList<Point>();
    dest.add(ocvPOut3);
    dest.add(ocvPOut2);
    dest.add(ocvPOut1);
    dest.add(ocvPOut4);
    Mat endM = Converters.vector_Point2f_to_Mat(dest);
    Mat perspectiveTransform = Imgproc.getPerspectiveTransform(startM, endM);
    Imgproc.warpPerspective(inputMat, outputMat, perspectiveTransform,
            new Size(resultWidth, resultHeight), Imgproc.INTER_CUBIC);
    Bitmap descBitmap = Bitmap.createBitmap(outputMat.cols(), outputMat.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(outputMat, descBitmap);
    return descBitmap;
}
I have just followed an example from OpenCV regarding circle detection: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
vector<Vec3f> circles;

/// Apply the Hough Transform to find the circles
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 1, src_gray.rows/8, 200, 100, 0, 0 );

/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    ...
However, I'm having a problem with Eclipse not accepting the function call
cvRound(circles[i][0])
It reports: Invalid arguments 'Candidates are: int cvRound(double)'
I have tried adding a number of include directories for GNU C and C++ under Properties -> C/C++ General -> Paths and Symbols, for example
ndkroot/sources/cxx-stl.../include
and the native/jni/include directory of OpenCV, etc.
But it still won't accept the cvRound function. Is there something I'm missing?
Thanks in advance
cvRound is just a rounding function that converts a double value to an integer. Two ways around this:
1- You can write your own rounding function and use it instead (this simple version is only correct for non-negative x, which is fine for pixel coordinates):
int Round(double x){
    int y;
    if(x >= (int)x + 0.5)  // fractional part is at least one half
        y = (int)x + 1;    // round up
    else
        y = (int)x;        // round down
    return y;
}
2- Include not only the C++ headers but also the C API of OpenCV (include/opencv/).