I am trying to implement a 3D app for Android that should also support Cardboard-like viewers. I have seen some of those images, and they seem to have some kind of barrel distortion applied so that they appear undistorted through the Cardboard lenses.
So I was looking for algorithms or libraries, specifically for Java/Android, that would help me achieve this.
I have found this implementation: http://www.helviojunior.com.br/fotografia/barrel-and-pincushion-distortion/
It would be great to have something like this because it has everything I'd need. Unfortunately it's written in C#, and it contains some platform-specific code that I couldn't easily translate into more generic code.
Then there is a simpler Java implementation here: http://popscan.blogspot.de/2012/04/fisheye-lens-equation-simple-fisheye.html
I have changed it to:
public static Bitmap fisheye(Bitmap srcimage) {
    /*
     * Fish eye effect
     * tejopa, 2012-04-29
     * http://popscan.blogspot.com
     * http://www.eemeli.de
     */
    // get image pixels
    double w = srcimage.getWidth();
    double h = srcimage.getHeight();
    int[] srcpixels = new int[(int)(w*h)];
    srcimage.getPixels(srcpixels, 0, (int)w, 0, 0, (int)w, (int)h);
    Bitmap resultimage = srcimage.copy(srcimage.getConfig(), true);
    // create the result data
    int[] dstpixels = new int[(int)(w*h)];
    // for each row
    for (int y = 0; y < h; y++) {
        // normalize y coordinate to -1 ... 1
        double ny = ((2*y)/h) - 1;
        // pre-calculate ny*ny
        double ny2 = ny*ny;
        // for each column
        for (int x = 0; x < w; x++) {
            // preset to black
            dstpixels[(int)(y*w+x)] = 0;
            // normalize x coordinate to -1 ... 1
            double nx = ((2*x)/w) - 1;
            // pre-calculate nx*nx
            double nx2 = nx*nx;
            // calculate distance from center (0,0);
            // this covers a circular or elliptical portion of the image,
            // depending on the image dimensions -- you can experiment
            // with images of different dimensions
            double r = Math.sqrt(nx2 + ny2);
            // discard pixels outside the circle!
            if (0.0 <= r && r <= 1.0) {
                double nr = Math.sqrt(1.0 - r*r);
                // new distance is between 0 ... 1
                nr = (r + (1.0 - nr)) / 2.0;
                // discard radius greater than 1.0
                if (nr <= 1.0) {
                    // calculate the angle for polar coordinates
                    double theta = Math.atan2(ny, nx);
                    // calculate new x position with the new distance at the same angle
                    double nxn = nr * Math.cos(theta);
                    // calculate new y position with the new distance at the same angle
                    double nyn = nr * Math.sin(theta);
                    // map from -1 ... 1 back to image coordinates
                    int x2 = (int)(((nxn + 1) * w) / 2.0);
                    int y2 = (int)(((nyn + 1) * h) / 2.0);
                    // find the (x2,y2) position in the source pixels
                    int srcpos = (int)(y2*w + x2);
                    // make sure the position stays within the arrays
                    if (srcpos >= 0 && srcpos < w*h) {
                        // get the pixel at (x2,y2) and put it into the target array at (x,y)
                        dstpixels[(int)(y*w+x)] = srcpixels[srcpos];
                    }
                }
            }
        }
    }
    resultimage.setPixels(dstpixels, 0, (int)w, 0, 0, (int)w, (int)h);
    // return the result image
    return resultimage;
}
But it doesn't have a lens factor, so the resulting image is always a full circle/ellipse.
Any chance you could point me to some working Java code or a library, or (maybe even better) help me amend this code so that a lens factor (0.0 <= factor <= 1.0) is taken into account?
I managed to get it to work.
Bottom line: I created a Bitmap bigger than the original Bitmap, then drew the original Bitmap onto the new Bitmap (centered there) using
Canvas canvas = new Canvas(newBitmap);
canvas.drawBitmap(originalBitmap, null, new Rect(x, y, r, b), null);
I used the Java algorithm posted in my question to create the effect on the new Bitmap. That worked great.
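For completeness, here is a minimal sketch of how the lens factor I asked about could be added: blend the original radius with the fisheye radius, so that factor = 0.0 leaves the image untouched and factor = 1.0 gives the full fisheye. The blend itself is my own assumption, not part of the original popscan algorithm.
// Hypothetical helper: blend the identity mapping (r) with the fisheye
// mapping by a strength factor in [0, 1]. Not part of the original code.
public static double distortRadius(double r, double factor) {
    // full fisheye radius, exactly as computed in the algorithm above
    double fisheye = (r + (1.0 - Math.sqrt(1.0 - r * r))) / 2.0;
    // factor = 0 -> no distortion, factor = 1 -> full fisheye
    return r + factor * (fisheye - r);
}
In the fisheye() method above, the two lines that compute nr would then become double nr = distortRadius(r, factor);, with factor passed in as an extra parameter.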
I want to detect yellow-colored objects and plot a centroid position on the largest detected yellow object.
I perform my steps in this order:
convert the input RGBA frame to HSV using the cvtColor() method
perform color segmentation in HSV using the inRange() method, bounded to the yellow color range only, which returns a binary threshold mask
perform a morphology operation (specifically MORPH_CLOSE, i.e. dilation followed by erosion) on the mask to remove any noise
perform a Gaussian blur to smooth the mask
apply the Canny algorithm to perform edge detection, making edges more obvious in preparation for contour detection in the next step (I'm starting to wonder whether this step is of any benefit at all)
apply the findContours() algorithm to find the contours in the image, along with the hierarchy
HERE I intended to use feature2d.FeatureDetection(SIMPLEBLOB) and pass in the blob area as params for detection, but there seems to be no supporting implementation for Android, so I had to work around the limitation and find the largest blob using Imgproc.contourArea(). Is there a way to do so?
I pass the contours obtained previously from the findContours() method as a parameter to Imgproc.moments to compute the centroid position of the detected objects.
HOWEVER, I would like to bring to everyone's attention that this current implementation computes a centroid for EACH contour (yellow object) detected. Please refer to pictures 1 and 2 to see what is output onto the frame for the user.
What I would like to achieve is to take the contour of the largest blob (via the largest contour area) and pass only that contour to Imgproc.moments(), so that I ONLY compute the centroid of the largest detected contour (object), and only one centroid position is plotted on the screen at any particular point in time.
I have tried a few approaches, such as passing the contour of the largest object as a param into Imgproc.moments(), but they didn't work, either because of a difference in data type or, when they did work, because the output was not as desired, with multiple centroid points being plotted within or along the perimeter of the object rather than one single point at the center of the largest contour object.
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    InputFrame = inputFrame.rgba();
    Core.transpose(InputFrame, mat1); // transpose InputFrame (src) into mat1 (dst), sort of a rotated clone
    Imgproc.resize(mat1, mat2, InputFrame.size(), 0, 0, 0); // params: (Mat src, Mat dst, Size dsize, fx, fy, interpolation). Resize to fit the new screen orientation's surface width & height.
    Core.flip(mat2, InputFrame, -1); // InputFrame is now no longer the original inputFrame.rgba(), but the transposed, resized, flipped version of it

    int rowWidth = InputFrame.rows();
    int colWidth = InputFrame.cols();

    Imgproc.cvtColor(InputFrame, InputFrame, Imgproc.COLOR_RGBA2RGB);
    Imgproc.cvtColor(InputFrame, InputFrame, Imgproc.COLOR_RGB2HSV);

    // HSV color scale: H selects the color, S controls color variation,
    // V indicates the amount of light that must shine on the object for it to be seen
    Lower_Yellow = new Scalar(21, 150, 150);
    Upper_Yellow = new Scalar(31, 255, 255); // V capped at 255 for 8-bit images
    Core.inRange(InputFrame, Lower_Yellow, Upper_Yellow, maskForYellow);

    final Size kernelSize = new Size(5, 5); // must be odd and greater than 1
    final Point anchor = new Point(-1, -1); // default (-1,-1) puts the anchor at the center of the structuring element
    final int iterations = 1; // number of times the operation is applied, see https://docs.opencv.org/3.4/d4/d76/tutorial_js_morphological_ops.html
    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, kernelSize);
    Imgproc.morphologyEx(maskForYellow, yellowMaskMorphed, Imgproc.MORPH_CLOSE, kernel, anchor, iterations); // dilate first, then erode: white regions become more pronounced, black noise is eroded away

    Mat mIntermediateMat = new Mat();
    Imgproc.GaussianBlur(yellowMaskMorphed, mIntermediateMat, new Size(9, 9), 0, 0); // better result than kernel size (3,3), maybe because the wider reference area decides in-range/out-of-range better
    Imgproc.Canny(mIntermediateMat, mIntermediateMat, 5, 120); // try adjusting the thresholds: https://stackoverflow.com/questions/25125670/best-value-for-threshold-in-canny

    List<MatOfPoint> contours = new ArrayList<>();
    Mat hierarchy = new Mat();
    Imgproc.findContours(mIntermediateMat, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));

    // debug: log the hierarchy (hierarchy.get(i,j) is a double[], so format it for readability)
    int cols = hierarchy.cols();
    int rows = hierarchy.rows();
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            Log.d("hierarchy", " " + Arrays.toString(hierarchy.get(i, j)));
        }
    }

    // find the contour with the largest area
    double maxArea1 = 0;
    int maxAreaIndex1 = 0;
    for (int i = 0; i < contours.size(); i++) {
        double contourArea1 = Imgproc.contourArea(contours.get(i)); // size of the contour at that particular index in the list
        if (maxArea1 < contourArea1) {
            maxArea1 = contourArea1;
            maxAreaIndex1 = i;
        }
    }
    // bounding rect of the largest contour only, so rect_array holds a single Rect
    ArrayList<Rect> rect_array = new ArrayList<Rect>();
    if (!contours.isEmpty()) {
        rect_array.add(Imgproc.boundingRect(contours.get(maxAreaIndex1)));
    }

    Imgproc.cvtColor(InputFrame, InputFrame, Imgproc.COLOR_HSV2RGB);

    // draw the bounding rect(s) -- this is much faster than drawContours and won't lag
    for (Rect obj : rect_array) {
        Imgproc.rectangle(InputFrame, obj.br(), obj.tl(), new Scalar(0, 255, 0), 1);
    }

    //========= Compute CENTROID POS! WHAT WE WANT TO SHOW ON SCREEN EVENTUALLY! ======================
    List<Moments> mu = new ArrayList<>(contours.size());
    for (int i = 0; i < contours.size(); i++) {
        mu.add(Imgproc.moments(contours.get(i)));
    }
    List<Point> mc = new ArrayList<>(contours.size()); // the circle center points
    for (int i = 0; i < contours.size(); i++) {
        // add 1e-5 to avoid division by zero
        mc.add(new Point(mu.get(i).m10 / (mu.get(i).m00 + 1e-5), mu.get(i).m01 / (mu.get(i).m00 + 1e-5)));
    }
    for (int i = 0; i < contours.size(); i++) {
        Scalar color = new Scalar(150, 150, 150);
        Imgproc.circle(InputFrame, mc.get(i), 20, color, -1); // plot a small dot at the center of each detected object
    }
    return InputFrame;
}
(Pictures 1 and 2, showing the output on the camera frame with multiple centroids plotted, are omitted here.)
Managed to resolve the issue: instead of looping through the entire array of contours for the detected yellow objects and passing each contour to Imgproc.moments, I now pass only the contour at the index where the largest contour (largest contour area) was found, so only one single contour is processed by Imgproc.moments to compute the centroid. The corrected code is shown below.
//========= Compute CENTROID POS! WHAT WE WANT TO SHOW ON SCREEN EVENTUALLY! ======================
List<Moments> mu = new ArrayList<>(contours.size());
mu.add(Imgproc.moments(contours.get(maxAreaIndex1))); // add only the single largest contour (largest contour area), so only its moments are computed for the centroid
List<Point> mc = new ArrayList<>(contours.size()); // the circle center point
// add 1e-5 to avoid division by zero
mc.add(new Point(mu.get(0).m10 / (mu.get(0).m00 + 1e-5), mu.get(0).m01 / (mu.get(0).m00 + 1e-5))); // index 0 because there is only one contour now, the largest one
// note that only one point, the centroid, is added; it is referenced later with index 0
Scalar color = new Scalar(150, 150, 150);
Imgproc.circle(InputFrame, mc.get(0), 15, color, -1); // plot the centroid as a dot on the detected object
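For reference, a slightly more compact alternative (just a sketch, assuming Java 8 language features are available; not the code I actually used) is to let Collections.max pick the largest contour directly:
// Sketch: pick the largest contour with a comparator, then compute its centroid.
if (!contours.isEmpty()) {
    MatOfPoint largest = Collections.max(contours,
            Comparator.comparingDouble(Imgproc::contourArea));
    Moments m = Imgproc.moments(largest);
    // add 1e-5 to avoid division by zero
    Point centroid = new Point(m.m10 / (m.m00 + 1e-5), m.m01 / (m.m00 + 1e-5));
    Imgproc.circle(InputFrame, centroid, 15, new Scalar(150, 150, 150), -1);
}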
I'm adding a PNG file as my own floor plan on top of Google Maps with the Google Maps Android API, using the following code:
GroundOverlayOptions groundOverlayOptions = new GroundOverlayOptions();
BitmapDescriptor bitmapDescriptor = BitmapDescriptorFactory.fromAsset("building-d.png");
groundOverlayOptions.image(bitmapDescriptor);
groundOverlayOptions.anchor(0, 1);
LatLng buildingSW = new LatLng(47.014815, 8.305098);
LatLng buildingNE = new LatLng(47.015148, 8.305440);
LatLng buildingNW = new LatLng(47.015168, 8.305144);
LatLng buildingSE = new LatLng(47.014792, 8.305385);
Location swLoc = locationFromLatLng(buildingSW);
Location seLoc = locationFromLatLng(buildingSE);
Location nwLoc = locationFromLatLng(buildingNW);
Location neLoc = locationFromLatLng(buildingNE);
float angle = swLoc.bearingTo(nwLoc);
groundOverlayOptions.bearing(angle);
float width = swLoc.distanceTo(seLoc);
groundOverlayOptions.position(buildingSW, width);
mMap.addGroundOverlay(groundOverlayOptions);
Now I know that in the PNG there is a room at pixel coordinates 422/301, 708/301, 422/10 and 708/10 (those are the corners). I'd like to draw a polygon over the GroundOverlay covering that room. How should I do that? Do I need to convert my pixel-coordinates to LatLng and if so, how?
And by the way: do I really have to use PNGs for GroundOverlays, or is there another supported vector format like EPS, PDF, ...?
Having seen your comment to the other answer, let me complete with some code:
Having set the "origin" at lat/lng 47.014816, 8.305098, you have to convert those coordinates to Mercator, and then you can do something similar to the below:
public boolean initializeByTwoCouplesOfCooordsAndScale(double[] coordAreal, double[] coordBreal,
        double[] coordAvirtual, double[] coordBvirtual, double scalingFactor) {
    if (coordAreal[0] == coordBreal[0] && coordAvirtual[1] == coordBvirtual[1]
            && coordAreal[1] == coordBreal[1] && coordAvirtual[0] == coordBvirtual[0]) {
        System.err.println("Coordinates must not be the same!");
        return false;
    }
    // aPoint is considered the "origin" point (0,0)
    aPoint = coordAreal;
    bPoint = coordAvirtual;
    // calculate the angle of the real-world coordinates for the points
    double deltaRy = coordBreal[1] - coordAreal[1];
    double deltaRx = coordBreal[0] - coordAreal[0];
    double aR = Math.atan2(deltaRy, deltaRx);
    // calculate the angle of the virtual-world coordinates
    double deltaVy = coordBvirtual[1] - coordAvirtual[1];
    double deltaVx = coordBvirtual[0] - coordAvirtual[0];
    double aV = Math.atan2(deltaVy, deltaVx);
    // set the transformation angle as the difference between the real and the virtual angles
    mPhi = (aR - aV);
    // set the scaling factor as the provided one (see getScalingFactor below)
    mScale = scalingFactor;
    return true;
}
public static double getScalingFactor(double latitude) {
    return 1 / (Math.cos(Math.toRadians(latitude)));
}
So you can call the method:
initializeByTwoCouplesOfCooordsAndScale(
        new double[]{MERCATOR_LNG, MERCATOR_LAT}, // real coordinates for point A; REMEMBER: LNG,LAT = x,y!
        new double[]{0d, 0d},                     // virtual coordinates for point A
        new double[]{MERCATOR_POINT_B_LNG, MERCATOR_POINT_B_LAT}, // real point B
        new double[]{X_METERS, Y_METERS},         // coordinates in meters of point B on the virtual map
        getScalingFactor(47.014816));
then you can transform with this function:
public double[] transform(double[] coord) {
    double[] transCoord = new double[2];
    double xscaled = (coord[0] - bPoint[0]) * mScale; // bPoint is the origin point in the "virtual" world; [0] is the x coordinate
    double yscaled = (coord[1] - bPoint[1]) * mScale;
    transCoord[0] = (xscaled * Math.cos(mPhi)) - (yscaled * Math.sin(mPhi)) + aPoint[0]; // aPoint is the origin point in real coordinates
    transCoord[1] = (xscaled * Math.sin(mPhi)) + (yscaled * Math.cos(mPhi)) + aPoint[1];
    return transCoord;
}
You can find a way to convert lat/lng to Mercator online; it's just a bunch of math ;)
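In case it saves someone the search, a minimal sketch of that conversion using the standard spherical (Web) Mercator formulas looks like this (the constant is the WGS84 equatorial radius):
// Convert latitude/longitude in degrees to spherical Mercator meters.
// Standard formulas: x = R * lon, y = R * ln(tan(pi/4 + lat/2)).
public static double[] latLngToMercator(double latitude, double longitude) {
    final double R = 6378137.0; // WGS84 equatorial radius in meters
    double x = R * Math.toRadians(longitude);
    double y = R * Math.log(Math.tan(Math.PI / 4 + Math.toRadians(latitude) / 2));
    return new double[]{x, y}; // {x, y}, i.e. {lng, lat} order as used above
}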
You should work in this way:
Your indoor map positions should be relative to a specific point (say BOTTOM-LEFT is 0,0); all other positions are then relative to that point in meters, so you will usually end up with values under 100 meters.
Having this you have to "move, rotate and scale" the indoor map with respect to the world.
Just take a map on a desktop which is not lat/lng and find the coordinates for the same indoor points you have (usually we take the real and indoor positions of the bottom-left and top-right corners), so you can find where the map should sit in the world.
Also take a look at the scaling factor (the map must be scaled depending on the latitude):
https://en.wikipedia.org/wiki/Mercator_projection#Scale_factor
We calculate that value with something like 1/cos(latitudeInRadians):
public static double getScalingFactor(double latitude) {
    return 1 / (Math.cos(Math.toRadians(latitude)));
}
Let me know if you find a way; otherwise I will search and try to strip down our code.
I want to detect eye irises and their centers using the Hough circle algorithm.
I'm using this code:
private void houghCircle() {
    Bitmap obtainedBitmap = imagesList.getFirst();

    /* convert bitmap to mat */
    Mat mat = new Mat(obtainedBitmap.getWidth(), obtainedBitmap.getHeight(), CvType.CV_8UC1);
    Mat grayMat = new Mat(obtainedBitmap.getWidth(), obtainedBitmap.getHeight(), CvType.CV_8UC1);
    Utils.bitmapToMat(obtainedBitmap, mat);

    /* convert to grayscale */
    int colorChannels = (mat.channels() == 3) ? Imgproc.COLOR_BGR2GRAY
            : ((mat.channels() == 4) ? Imgproc.COLOR_BGRA2GRAY : 1);
    Imgproc.cvtColor(mat, grayMat, colorChannels);

    /* reduce the noise so we avoid false circle detection */
    Imgproc.GaussianBlur(grayMat, grayMat, new Size(9, 9), 2, 2);

    // accumulator value
    double dp = 1.2d;
    // minimum distance between the center coordinates of detected circles, in pixels
    double minDist = 100;
    // min and max radii (set these values as you desire)
    int minRadius = 0, maxRadius = 1000;
    // param1 = gradient value used to handle edge detection
    // param2 = accumulator threshold value for the CV_HOUGH_GRADIENT method.
    // The smaller the threshold, the more circles will be detected
    // (including false circles); the larger it is, the fewer circles
    // will potentially be returned.
    double param1 = 70, param2 = 72;

    /* create a Mat object to store the circles detected */
    Mat circles = new Mat(obtainedBitmap.getWidth(), obtainedBitmap.getHeight(), CvType.CV_8UC1);

    /* find the circles in the image */
    Imgproc.HoughCircles(grayMat, circles, Imgproc.CV_HOUGH_GRADIENT, dp, minDist, param1, param2, minRadius, maxRadius);

    /* get the number of circles detected */
    int numberOfCircles = (circles.rows() == 0) ? 0 : circles.cols();

    /* draw the circles found on the image */
    for (int i = 0; i < numberOfCircles; i++) {
        /* get the circle details: circleCoordinates[0, 1, 2] = (x, y, r),
         * where (x, y) are the coordinates of the circle's center */
        double[] circleCoordinates = circles.get(0, i);
        int x = (int) circleCoordinates[0], y = (int) circleCoordinates[1];
        Point center = new Point(x, y);
        int radius = (int) circleCoordinates[2];
        /* circle's outline */
        Core.circle(mat, center, radius, new Scalar(0, 255, 0), 4);
        /* circle's center outline */
        Core.rectangle(mat, new Point(x - 5, y - 5), new Point(x + 5, y + 5),
                new Scalar(0, 128, 255), -1);
    }

    /* convert back to bitmap */
    Utils.matToBitmap(mat, obtainedBitmap);
    MediaStore.Images.Media.insertImage(getContentResolver(), obtainedBitmap, "testgray", "gray");
}
But it doesn't detect the irises correctly in all images, especially if the iris has a dark color like brown. How can I fix this code to detect the irises and their centers correctly?
EDIT: Here are some sample images (which I got from the web) that show the performance of the algorithm (please ignore the landmarks, which are represented by the red squares):
In these images the algorithm doesn't detect all irises:
This image shows how the algorithm couldn't detect any irises at all:
EDIT 2: Here is a version that uses Canny edge detection, but it causes the app to crash:
private void houghCircle() {
    Mat grayMat = new Mat();
    Mat cannyEdges = new Mat();
    Mat circles = new Mat();
    Bitmap obtainedBitmap = imagesList.getFirst();

    /* convert bitmap to mat */
    Mat originalBitmap = new Mat(obtainedBitmap.getWidth(), obtainedBitmap.getHeight(), CvType.CV_8UC1);

    // converting the image to grayscale
    Imgproc.cvtColor(originalBitmap, grayMat, Imgproc.COLOR_BGR2GRAY);
    Imgproc.Canny(grayMat, cannyEdges, 10, 100);
    Imgproc.HoughCircles(cannyEdges, circles, Imgproc.CV_HOUGH_GRADIENT, 1,
            cannyEdges.rows() / 15); // circles is now filled with the detected circles
    //, grayMat.rows() / 8);

    Mat houghCircles = new Mat();
    houghCircles.create(cannyEdges.rows(), cannyEdges.cols(), CvType.CV_8UC1);

    // drawing the circles on the image
    for (int i = 0; i < circles.cols(); i++) {
        double[] parameters = circles.get(0, i);
        double x = parameters[0];
        double y = parameters[1];
        int r = (int) parameters[2];
        Point center = new Point(x, y);
        Core.circle(houghCircles, center, r, new Scalar(255, 0, 0), 1);
    }

    // converting Mat back to Bitmap
    Utils.matToBitmap(houghCircles, obtainedBitmap);
    MediaStore.Images.Media.insertImage(getContentResolver(), obtainedBitmap, "testgray", "gray");
}
This is the error I get in the log
FATAL EXCEPTION: Thread-28685
CvException [org.opencv.core.CvException: cv::Exception: /hdd2/buildbot/slaves/slave_ardbeg1/50-SDK/opencv/modules/imgproc/src/color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function void cv::cvtColor(cv::InputArray, cv::OutputArray, int, int)
]
at org.opencv.imgproc.Imgproc.cvtColor_1(Native Method)
at org.opencv.imgproc.Imgproc.cvtColor(Imgproc.java:4598)
Which is caused by this line: Imgproc.cvtColor(originalBitmap,grayMat,Imgproc.COLOR_BGR2GRAY);
Can anyone please tell me how this error can be solved? Perhaps adding Canny edge detection will improve the results.
Hough circles work better on well-defined circles; they are not good with things like irises.
After some thresholding, morphological operations, or Canny edge detection, feature-detection methods like MSER work much better for iris detection.
Here is a similar question with a solution if you are looking for some code.
Since you want to detect the iris using a Hough transform (there are others), you had better study the Canny edge detector and its parameters.
cv::HoughCircles takes the Canny hysteresis threshold in param1; investigating Canny on its own gives you a feel for a good threshold range.
Maybe instead of a Gaussian blur you could apply a better denoising (non-local means with, say, h=32 and window sizes 5 and 15), and also try to harmonize the image contrast, e.g. using contrast-limited adaptive histogram equalization (cv::CLAHE).
Harmonization is to make sure all (highlit and shadowed) eyes map to a similar intensity range.
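A minimal sketch of that preprocessing (h=32 and the 5/15 window sizes are the values suggested above; the CLAHE clip limit and tile grid are assumptions to tune):
// Sketch: non-local means denoising followed by CLAHE, before HoughCircles.
static Mat preprocessEye(Mat gray) {
    Mat denoised = new Mat();
    // h = 32, template window = 5, search window = 15 (values suggested above)
    org.opencv.photo.Photo.fastNlMeansDenoising(gray, denoised, 32f, 5, 15);
    // clip limit 2.0 and 8x8 tiles are assumptions, not tuned values
    org.opencv.imgproc.CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8, 8));
    Mat equalized = new Mat();
    clahe.apply(denoised, equalized);
    return equalized; // feed this Mat to Imgproc.HoughCircles(...)
}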
I wanted to know whether those images are the images you actually processed, or whether you took a cell-phone snapshot of your screen to upload them here, because the irises are bigger than the maximum radius you set in your code. Therefore I don't understand how you could find any iris at all. The irises in the first image have a radius of over 20, so you shouldn't be able to detect them.
You should set the radii to the radius range you expect your irises to be in.
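For example (the fractions below are rough guesses relative to the frame width and would need tuning for your images):
// Hypothetical bounds: an iris typically spans a small fraction of the frame.
int minRadius = grayMat.cols() / 30;
int maxRadius = grayMat.cols() / 10;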
I am working on image editing using OpenGL on Android. I have applied a filter to an image using a Photoshop curve, and now I want to reproduce the same effect in Android using GLSL. Is there a formula to calculate a single fragment's color from the Photoshop curve's output values?
EDIT
The math behind the Photoshop curve has already been answered in this question:
How to recreate the math behind photoshop curves. But I am not very clear on how to reproduce the same in a GLSL fragment shader.
Screenshot of my Photoshop curve:
You're after a function fragColour = curves(inColour, constants...). If you have just the one curve for red, green and blue, you apply the same curve to each channel individually. This answer has a link (below) to code which plots points along the function. The key line is:
double y = ...
which you'd return from curves. The variable x in the loop is your inColour. All you need now are the constants, which come from the points and the second-derivative sd arrays; these you'll have to pass in as uniforms. The function first has to figure out which points each colour x lies between (finding cur, next, sd[i] and sd[i+1]), then evaluate and return y.
EDIT:
If you just want to apply some curve you've created in Photoshop, then the problem is much simpler. The easiest way is to create a simple function that gives a similar shape; I use these as a starting point. A gamma-correction curve is also quite common.
This is overkill, but if you do need a more exact result, you could create an image with a linear ramp (e.g. 256 pixels from black to white), apply your filter to it in Photoshop, and the result becomes a lookup table. Passing all 256 values to a shader is expensive, so if it's a smooth curve you could try some curve-fitting tools (for example).
Once you have a function, simply apply it to your colour in GLSL. Applying a gamma curve for example is done like this:
fragColour = vec4(pow(inColour.rgb, 1.0 / gamma.rgb), inColour.a);
EDIT2:
The curve you have looks very similar to this:
fragColour = vec4(pow(inColour.rgb, 1.0 / vec3(0.6)), inColour.a);
Or even simpler:
fragColour = vec4(inColour.rgb * inColour.rgb, inColour.a);
Just in case the link dies, I'll copy the code here (note that I haven't tested it):
Point[] points = /* list of points, sorted by increasing x */
double[] sd = secondDerivative(points);
for (int i = 0; i < points.length - 1; i++) {
    Point cur = points[i];
    Point next = points[i + 1];
    for (int x = cur.x; x < next.x; x++) {
        double t = (double) (x - cur.x) / (next.x - cur.x);
        double a = 1 - t;
        double b = t;
        double h = next.x - cur.x;
        double y = a * cur.y + b * next.y
                + (h * h / 6) * ((a * a * a - a) * sd[i] + (b * b * b - b) * sd[i + 1]);
        draw(x, y); /* or any other use */
    }
}
And the second derivative:
public static double[] secondDerivative(Point... P) {
    int n = P.length;
    double yp1 = 0.0;
    double ypn = 0.0;

    // build the tridiagonal system
    // (assume 0 boundary conditions: y2[0] = y2[-1] = 0)
    double[][] matrix = new double[n][3];
    double[] result = new double[n];
    matrix[0][1] = 1;
    for (int i = 1; i < n - 1; i++) {
        matrix[i][0] = (double) (P[i].x - P[i - 1].x) / 6;
        matrix[i][1] = (double) (P[i + 1].x - P[i - 1].x) / 3;
        matrix[i][2] = (double) (P[i + 1].x - P[i].x) / 6;
        result[i] = (double) (P[i + 1].y - P[i].y) / (P[i + 1].x - P[i].x)
                - (double) (P[i].y - P[i - 1].y) / (P[i].x - P[i - 1].x);
    }
    matrix[n - 1][1] = 1;

    // solving pass 1 (up -> down)
    for (int i = 1; i < n; i++) {
        double k = matrix[i][0] / matrix[i - 1][1];
        matrix[i][1] -= k * matrix[i - 1][2];
        matrix[i][0] = 0;
        result[i] -= k * result[i - 1];
    }
    // solving pass 2 (down -> up)
    for (int i = n - 2; i >= 0; i--) {
        double k = matrix[i][2] / matrix[i + 1][1];
        matrix[i][1] -= k * matrix[i + 1][0];
        matrix[i][2] = 0;
        result[i] -= k * result[i + 1];
    }

    // return the second-derivative value for each point P
    double[] y2 = new double[n];
    for (int i = 0; i < n; i++) y2[i] = result[i] / matrix[i][1];
    return y2;
}
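Putting the two routines together, here is a sketch of how the curve could be baked into a 256-entry lookup table on the Java side (assuming the Point x/y values span 0..255, as in a Photoshop curve). The table could then be uploaded as a 256x1 texture and sampled in the fragment shader instead of passing the spline constants as uniforms:
// Sketch: evaluate the spline at every x in 0..255 to build a lookup table.
public static int[] bakeCurveLUT(Point[] points) {
    double[] sd = secondDerivative(points);
    int[] lut = new int[256];
    for (int i = 0; i < points.length - 1; i++) {
        Point cur = points[i];
        Point next = points[i + 1];
        for (int x = cur.x; x < next.x; x++) {
            double t = (double) (x - cur.x) / (next.x - cur.x);
            double a = 1 - t, b = t, h = next.x - cur.x;
            double y = a * cur.y + b * next.y
                    + (h * h / 6) * ((a * a * a - a) * sd[i] + (b * b * b - b) * sd[i + 1]);
            lut[x] = Math.min(255, Math.max(0, (int) Math.round(y))); // clamp to 0..255
        }
    }
    // last sample, assuming the final point sits at x = 255
    lut[255] = Math.min(255, Math.max(0, points[points.length - 1].y));
    return lut;
}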
I want to make an application like CamScanner for cropping a document.
I need the same functionality as in my two images:
The first image shows the picture captured by the camera.
The second image shows the document region recognized within the captured picture.
I have researched more and more but haven't gotten any output, so I'm asking here: if anyone has done this, please tell me.
Thanks
I assume your problem is to detect the object to scan.
Object-detection mechanisms like pattern matching or feature detection won't bring you the results you are looking for, as you don't know exactly what the object you are scanning is.
Basically, you search for a rectangular object in the picture.
A basic approach to this could be as following:
Run a canny edge detector on the image. It could help to blur the image a bit before doing this. The edges of the object should be clearly visible.
Now you want to do a Hough transform to find lines in the picture.
Search for lines at an angle of around 90° to each other. The problem will be finding the right ones: maybe it is enough to use the lines closest to the frame of the picture that are reasonably parallel to it.
Find the intersecting points to define the edges of your object.
At least this should give you a hint where to research further; a sketch of the first two steps follows below.
As further steps in such an app you will have to calculate the projection of the points and do an affine transform of the object.
I hope this helps.
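A minimal OpenCV-for-Android sketch of those first two steps (assuming image is the RGBA camera frame; all thresholds are placeholder values you would have to tune):
// Sketch: blur -> Canny -> probabilistic Hough transform for line segments.
Mat gray = new Mat();
Imgproc.cvtColor(image, gray, Imgproc.COLOR_RGBA2GRAY);
Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
Mat edges = new Mat();
Imgproc.Canny(gray, edges, 50, 150);
Mat lines = new Mat(); // each row holds {x1, y1, x2, y2}
Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180,
        80 /* votes */, 100 /* min line length */, 10 /* max gap */);
// next: look for pairs of segments at ~90 degrees and intersect them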
After writing all this I found this post. It should help you a lot.
As my answer targets OpenCV, you have to use the OpenCV library.
In order to do this, you need to install the Android Native Development Kit (NDK).
There are some good tutorials on how to use OpenCV on Android on the OpenCV for Android page.
One thing to keep in mind is that almost every function of the Java wrapper calls a native method, and that costs a lot of time. So you want to do as much as possible in your native code before returning your results to the Java part.
I know I am too late to answer but it might be helpful to someone.
Try the following code.
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    path = new Path();
    path.moveTo(x1, y1); // set the start point
    path.lineTo(x2, y2);
    path.lineTo(x3, y3);
    path.lineTo(x4, y4);
    path.lineTo(x1, y1); // close the quad back at the start point
    canvas.drawPath(path, currentPaint);
}
Pass your image mat in this method:
void findSquares(Mat image, List<MatOfPoint> squares) {
    int N = 10;
    squares.clear();

    Mat smallerImg = new Mat(new Size(image.width() / 2, image.height() / 2), image.type());
    Mat gray = new Mat(image.size(), image.type());
    Mat gray0 = new Mat(image.size(), CvType.CV_8U);

    // down-scale and upscale the image to filter out noise
    Imgproc.pyrDown(image, smallerImg, smallerImg.size());
    Imgproc.pyrUp(smallerImg, image, image.size());

    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++) {
        Core.extractChannel(image, gray, c);

        // try several threshold levels
        for (int l = 1; l < N; l++) {
            Imgproc.threshold(gray, gray0, (l + 1) * 255 / N, 255, Imgproc.THRESH_BINARY);

            // find contours and store them all as a list
            List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
            Imgproc.findContours(gray0, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

            // test each contour
            for (int i = 0; i < contours.size(); i++) {
                MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
                MatOfPoint2f approx2f = new MatOfPoint2f();
                Imgproc.approxPolyDP(contour2f, approx2f, Imgproc.arcLength(contour2f, true) * 0.02, true);
                MatOfPoint approx = new MatOfPoint(approx2f.toArray());

                // square contours should have 4 vertices after approximation,
                // a relatively large area (to filter out noisy contours),
                // and be convex.
                // Note: the absolute value of the area is used because the
                // area may be positive or negative, in accordance with the
                // contour orientation.
                double area = Imgproc.contourArea(approx);
                if (area > 5000) {
                    if (approx.toArray().length == 4 &&
                            Math.abs(Imgproc.contourArea(approx)) > 1000 &&
                            Imgproc.isContourConvex(approx)) {
                        double maxCosine = 0;
                        for (int j = 2; j < 5; j++) {
                            // find the maximum cosine of the angle between joint edges
                            // (angle() is the usual cosine helper from OpenCV's squares.cpp sample)
                            double cosine = Math.abs(angle(approx.toArray()[j % 4], approx.toArray()[j - 2], approx.toArray()[j - 1]));
                            maxCosine = Math.max(maxCosine, cosine);
                        }

                        // if the cosines of all angles are small (all angles are ~90 degrees)
                        // then write the quadrangle's vertices to the result list
                        if (maxCosine < 0.3)
                            squares.add(approx);
                    }
                }
            }
        }
    }
}
This method gives you the four points of the document; you can then crop the image using the method below:
public Bitmap warpDisplayImage(Mat inputMat, List<Point> corners) {
    // corners: the four document points found by the previous method,
    // ordered clockwise by the orderRectCorners() helper
    int resultWidth = inputMat.width();
    int resultHeight = inputMat.height();

    Mat startM = Converters.vector_Point2f_to_Mat(orderRectCorners(corners));

    // destination corners: the full output image, in the matching order
    Point ocvPOut3 = new Point(0, 0);
    Point ocvPOut4 = new Point(0, resultHeight);
    Point ocvPOut1 = new Point(resultWidth, resultHeight);
    Point ocvPOut2 = new Point(resultWidth, 0);

    Mat outputMat = new Mat(resultWidth, resultHeight, CvType.CV_8UC4);
    List<Point> dest = new ArrayList<Point>();
    dest.add(ocvPOut3);
    dest.add(ocvPOut2);
    dest.add(ocvPOut1);
    dest.add(ocvPOut4);
    Mat endM = Converters.vector_Point2f_to_Mat(dest);

    Mat perspectiveTransform = Imgproc.getPerspectiveTransform(startM, endM);
    Imgproc.warpPerspective(inputMat, outputMat, perspectiveTransform, new Size(resultWidth, resultHeight), Imgproc.INTER_CUBIC);

    Bitmap descBitmap = Bitmap.createBitmap(outputMat.cols(), outputMat.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(outputMat, descBitmap);
    return descBitmap;
}
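For completeness, a sketch of how the two methods might be wired together (picking the largest candidate square as the document is my assumption, and it requires Java 8 language features):
// Sketch: find candidate squares, take the largest one as the document, warp it flat.
List<MatOfPoint> squares = new ArrayList<>();
findSquares(inputMat, squares);
if (!squares.isEmpty()) {
    MatOfPoint biggest = Collections.max(squares,
            Comparator.comparingDouble(Imgproc::contourArea));
    List<Point> corners = biggest.toList(); // the four corner points of the document
    Bitmap cropped = warpDisplayImage(inputMat, corners);
}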