So basically, I'm creating an Android app (using Tesseract and OpenCV) which, given a word and after the pre-processing and scan steps, draws a rectangle around that word — it basically "finds" the word and marks it. However, I'm wondering how to get the coordinates of a character, or at least of a word. I have the coordinates of each line, but those coordinates are not relative to the main picture, only to the text blocks I have. Maybe someone has or knows of an explanation, tutorial, or some other info on how to go about finding the coordinates of a word or character. I would highly appreciate it.
This sample code, taken from Tesseract's API Examples wiki page (APIExamples), should help.
Focus on these two lines:
int x1, y1, x2, y2;
ri->BoundingBox(level, &x1, &y1, &x2, &y2);
The full example:
Pix *image = pixRead("/usr/src/tesseract/testing/phototest.tif");
tesseract::TessBaseAPI *api = new tesseract::TessBaseAPI();
api->Init(NULL, "eng");
api->SetImage(image);
api->SetVariable("save_blob_choices", "T");
api->SetRectangle(37, 228, 548, 31);
api->Recognize(NULL);

tesseract::ResultIterator* ri = api->GetIterator();
// RIL_SYMBOL iterates per character; use RIL_WORD for whole words,
// RIL_TEXTLINE for lines, and so on.
tesseract::PageIteratorLevel level = tesseract::RIL_SYMBOL;
if (ri != 0) {
    do {
        const char* symbol = ri->GetUTF8Text(level);
        float conf = ri->Confidence(level);
        int x1, y1, x2, y2;
        // Bounding box of the current symbol, relative to the full input image.
        ri->BoundingBox(level, &x1, &y1, &x2, &y2);
        if (symbol != 0) {
            printf("symbol %s, conf: %f", symbol, conf);
            bool indent = false;
            tesseract::ChoiceIterator ci(*ri);
            do {
                if (indent) printf("\t\t ");
                printf("\t- ");
                const char* choice = ci.GetUTF8Text();
                printf("%s conf: %f\n", choice, ci.Confidence());
                indent = true;
            } while (ci.Next());
        }
        printf("---------------------------------------------\n");
        delete[] symbol;
    } while (ri->Next(level));
}
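On Android itself the same iterator pattern is available through a Java wrapper. Here is a minimal sketch assuming the tess-two wrapper (com.googlecode.tesseract.android); the data path is a placeholder, and the exact method names may differ between tess-two versions:
import android.graphics.Bitmap;
import android.graphics.Rect;
import com.googlecode.tesseract.android.ResultIterator;
import com.googlecode.tesseract.android.TessBaseAPI;

// Hypothetical helper: walks the recognized words and gives each one's
// bounding box in the coordinate space of the bitmap passed to setImage().
void printWordBoxes(Bitmap bitmap) {
    TessBaseAPI api = new TessBaseAPI();
    api.init("/sdcard/tesseract/", "eng"); // placeholder; must contain tessdata/eng.traineddata
    api.setImage(bitmap);
    api.getUTF8Text(); // forces recognition to run

    ResultIterator it = api.getResultIterator();
    int level = TessBaseAPI.PageIteratorLevel.RIL_WORD; // RIL_SYMBOL for single characters
    it.begin();
    do {
        String word = it.getUTF8Text(level);
        Rect box = it.getBoundingRect(level); // relative to the whole input image
        // compare `word` against the search term and draw `box` on your overlay here
    } while (it.next(level));
    api.end();
}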
I want to ask about some ideas / study materials connected to binarization. I am trying to create a system that detects human emotions. I am able to extract areas such as the brows, eyes, nose, mouth, etc., but then comes the next stage: processing...
My images are taken in various places, times of day, and weather conditions. That is problematic during binarization: with the same threshold value, some images come out fully black, while others look fine and give me the information I want.
What I want to ask you about is:
1) Is there a known way to bring all images to the same brightness level?
2) How can I create a dependency between the threshold value and the brightness of an image?
What I have tried so far is normalizing the image, but it has no effect; maybe I'm doing something wrong. I'm using OpenCV (for Android):
Core.normalize(cleanFaceMatGRAY, cleanFaceMatGRAY, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
EDIT:
I tried adaptive thresholding and Otsu; they didn't work for me. I have problems using CLAHE on Android, but I managed to implement the Niblack algorithm.
Core.normalize(cleanFaceMatGRAY, cleanFaceMatGRAY, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
nibelBlackTresholding(cleanFaceMatGRAY, -0.2);

private void nibelBlackTresholding(Mat image, double parameter) {
    // Note: this uses a single global mean, and stdmean is really the mean of
    // the squared image, not the standard deviation (see EDIT2 below).
    Mat meanPowered = image.clone();
    Core.multiply(image, image, meanPowered);
    Scalar mean = Core.mean(image);
    Scalar stdmean = Core.mean(meanPowered);
    double tresholdValue = mean.val[0] + parameter * stdmean.val[0];
    int totalRows = image.rows();
    int totalCols = image.cols();
    // Binarize pixel by pixel against the single global threshold.
    for (int cols = 0; cols < totalCols; cols++) {
        for (int rows = 0; rows < totalRows; rows++) {
            if (image.get(rows, cols)[0] > tresholdValue) {
                image.put(rows, cols, 255);
            } else {
                image.put(rows, cols, 0);
            }
        }
    }
}
The results are really good, but still not good enough for some images. I'm pasting links because the images are big and I don't want to take up too much screen space.
For example, this one is thresholded really well:
https://dl.dropboxusercontent.com/u/108321090/a1.png
https://dl.dropboxusercontent.com/u/108321090/a.png
But bad light sometimes produces shadows, which gives this effect:
https://dl.dropboxusercontent.com/u/108321090/b1.png
https://dl.dropboxusercontent.com/u/108321090/b.png
Do you have any ideas that could help me improve the thresholding of these images with large lighting differences (shadows)?
EDIT2:
I found that my previous algorithm was implemented the wrong way: the standard deviation was computed incorrectly, and in Niblack thresholding the mean is a local value, not a global one. I fixed it according to this reference: http://arxiv.org/ftp/arxiv/papers/1201/1201.5227.pdf
private void niblackThresholding2(Mat image, double parameter, int window) {
    int totalRows = image.rows();
    int totalCols = image.cols();
    int offset = (window - 1) / 2;
    double tresholdValue = 0;
    double localMean = 0;
    double meanDeviation = 0;
    // y is used as the row index and x as the column index below, so the loop
    // bounds must come from totalRows and totalCols respectively (they were
    // swapped originally, which breaks on non-square images).
    for (int y = offset + 1; y < totalRows - offset; y++) {
        for (int x = offset + 1; x < totalCols - offset; x++) {
            localMean = calculateLocalMean(x, y, image, window);
            meanDeviation = image.get(y, x)[0] - localMean;
            tresholdValue = localMean * (1 + parameter * ((meanDeviation / (1 - meanDeviation)) - 1));
            // Log.d("QWERTY", "TRESHOLD " + tresholdValue); // per-pixel logging is very slow
            if (image.get(y, x)[0] > tresholdValue) {
                image.put(y, x, 255);
            } else {
                image.put(y, x, 0);
            }
        }
    }
}

private double calculateLocalMean(int x, int y, Mat image, int window) {
    int offset = (window - 1) / 2;
    // Window of size `window` around (x, y); OpenCV's Point takes (column, row).
    Point leftTop = new Point(x - (offset + 1), y - (offset + 1));
    Point bottomRight = new Point(x + offset, y + offset);
    Rect tempRect = new Rect(leftTop, bottomRight);
    Mat tempMat = new Mat(image, tempRect);
    return Core.mean(tempMat).val[0];
}
Results for a 7x7 window and the k parameter of 0.34 proposed in the reference: I still can't get rid of the shadow on the faces.
https://dl.dropboxusercontent.com/u/108321090/b2.png
https://dl.dropboxusercontent.com/u/108321090/b1.png
Things to look at:
http://docs.opencv.org/java/org/opencv/imgproc/CLAHE.html
http://docs.opencv.org/java/org/opencv/imgproc/Imgproc.html#adaptiveThreshold(org.opencv.core.Mat,%20org.opencv.core.Mat,%20double,%20int,%20int,%20int,%20double)
http://docs.opencv.org/java/org/opencv/imgproc/Imgproc.html#threshold(org.opencv.core.Mat,%20org.opencv.core.Mat,%20double,%20double,%20int) (THRESH_OTSU)
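For illustration, a minimal sketch combining the first two suggestions; the parameter values are starting guesses to tune, not recommendations:
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.CLAHE;
import org.opencv.imgproc.Imgproc;

// Equalize local contrast first, then threshold per neighbourhood,
// so no single global brightness level is ever needed.
Mat binarize(Mat gray) {
    Mat equalized = new Mat();
    CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8, 8)); // clip limit and tile grid: tune
    clahe.apply(gray, equalized);

    Mat binary = new Mat();
    Imgproc.adaptiveThreshold(equalized, binary, 255,
            Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY,
            15, 2); // block size (must be odd) and offset C: tune for your images
    return binary;
}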
I'm trying to drag a path around my canvas by re-plotting the coordinates stored in an array of points and then re-creating the path. The path drags, but it flips horizontally and vertically, like a mirror image, around where the user clicks. I have no idea why.
private void drag(MotionEvent e) {
    // TODO correct weird flip
    if (clicked(e)) {
        for (Point p : points) {
            int modX = (int) (e.getX() + (e.getX() - p.x));
            int modY = (int) (e.getY() + (e.getY() - p.y));
            p.set(modX, modY);
        }
        updateOutline();
    }
}

private void updateOutline() {
    // update the outline
    outline = new Path();
    outline.moveTo(points.get(0).x, points.get(0).y);
    for (Point coor : points)
        outline.lineTo(coor.x, coor.y);
}
Any help will be appreciated, thanks
In my opinion the problem is in these lines:
int modX = (int) (e.getX() + (e.getX() - p.x));
int modY = (int) (e.getY() + (e.getY() - p.y));
Consider two points A(1,5) and B(4,5). If the user clicks at C(3,6), then point A is translated to A'(5,7) and point B to B'(2,7). Each point gets reflected about the click position, so as you can see, A and B change places.
You might want to store the position where the drag started and use the distance moved since then to translate the path, as in the sketch below.
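A minimal sketch of that approach; `points`, `clicked()` and `updateOutline()` are from your code, the rest is hypothetical:
private float lastX, lastY;

// call this on ACTION_DOWN to remember where the drag started
private void startDrag(MotionEvent e) {
    lastX = e.getX();
    lastY = e.getY();
}

// call this on ACTION_MOVE
private void drag(MotionEvent e) {
    if (clicked(e)) {
        // translate every point by the distance moved since the last event,
        // instead of reflecting it about the touch position
        float dx = e.getX() - lastX;
        float dy = e.getY() - lastY;
        for (Point p : points) {
            p.set((int) (p.x + dx), (int) (p.y + dy));
        }
        lastX = e.getX();
        lastY = e.getY();
        updateOutline();
    }
}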
I've been trying to include the ability to show routes in an Android app, and was working this solution into my app:
J2ME/Android/BlackBerry - driving directions, route between two locations
I've got basically all of the code in place, but in the drawPath method I get the error "The method toPixels(GeoPoint, Point) in the type Projection is not applicable for the arguments (GeoPoint, Point)" on the starred code below. Here's the code:
public void drawPath(MapView mMapView, Canvas canvas)
{
int x1 = -1, y1 = -1, x2 = -1, y2 = -1;
Paint paint = new Paint();
paint.setColor(Color.GREEN);
paint.setStyle(Paint.Style.STROKE);
paint.setStrokeWidth(3);
for (int i = 0; i < mPoints.size(); i++)
{
Point point = new Point();
mMapView.getProjection().*****toPixels*****(mPoints.get(i), point);
x2 = point.*****x*****;
y2 = point.*****y*****;
if (i > 0)
{
canvas.drawLine(x1, y1, x2, y2, paint);
}
x1 = x2;
y1 = y2;
}
}
I've not been able to test it at all yet because I've been unable to sort out this error, so I don't know if there are other problems elsewhere. In the meantime, if anybody knows why this error pops up, it would be really appreciated. Thanks in advance! Oh, and if anybody needs to see any of my other code or classes, please let me know.
Here is a good example I have posted; you can try it out.
I have tried this source code. Copy the files from the source code and don't change anything at first.
DON'T press Ctrl + Shift + O to organize the imports automatically. Eclipse sometimes imports the wrong library (why is another topic about Eclipse). A wrongly auto-imported Point class is most likely why toPixels complains even though the argument types look identical.
Edit all of these lines manually:
import org.ci.geo.route.Road;
import org.ci.geo.route.RoadProvider;
Change the imports to match your own package name.
Then edit these lines so they suit your layout:
setContentView(R.layout.main);
mapView = (MapView) findViewById(R.id.mapview);
and this for sweet flavour:
TextView textView = (TextView) findViewById(R.id.description);
Hope this will help you.
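For reference, these are the imports drawPath needs with the old Google Maps external library; if a different Point class slipped in, toPixels fails exactly as described in the question:
// Projection.toPixels expects android.graphics.Point, so an auto-imported
// java.awt.Point (or similar) produces exactly this "not applicable" error.
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Point;
import com.google.android.maps.GeoPoint;
import com.google.android.maps.MapView;
import com.google.android.maps.Projection;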
I'm trying to convert this C++ code to Android Java. However, I'm using GL10, and glGetDoublev apparently isn't supported on OpenGL ES. How else can I achieve the same thing?
// Get point by reading z buffer and unprojecting to object coords
void Pick(int x, int y)
{
GLint viewport[4];
GLdouble mvmatrix[16], projmatrix[16];
glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, mvmatrix);
glGetDoublev(GL_PROJECTION_MATRIX, projmatrix);
int winx = x;
int winy = winHeight - y;
GLfloat winz = 0.0;
GLdouble objx = 0.0;
GLdouble objy = 0.0;
GLdouble objz = 0.0;
// Get winz for given winx and winy
glReadPixels(winx, winy, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winz);
// Make sure there was something at the point
if (winz >= 1.0)
{
qDebug("Nothing picked");
}
else
{
// Get object coords from win coords
gluUnProject((GLdouble)winx, (GLdouble)winy, (GLdouble)winz,
mvmatrix, projmatrix, viewport,
&objx, &objy, &objz);
qDebug("Pick: win=%d,%d,%.3f obj=%.3f,%.3f,%.3f",
winx, winy, winz, objx, objy, objz);
// Place a marker at that position
Marker marker;
marker.point.x = objx;
marker.point.y = objy;
marker.point.z = objz;
markerList << marker;
// limit to two markers
if (markerList.count() > 2)
markerList.pop_front();
Rebuild();
}
}
I ran into this problem in my own Android OpenGL ES 1.0 forays. Since OpenGL ES 1.0 does not let you read the matrices back directly the way you wanted to (as far as I know, glGetFloatv was not implemented in 1.0), you need to make a wrapper that tracks the matrices.
If you use OpenGL ES 1.1, you can use glGetFloatv since it has been implemented.
Here is the website where I originally found the solution:
http://www.41post.com/1540/programming/android-opengl-get-the-modelview-matrix-on-15-cupcake
All implementation details are there.
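To sketch the idea (this illustrates the technique, not the article's exact code): keep float[] copies of the modelview and projection matrices, mirror every GL matrix call on them with android.opengl.Matrix, and hand them to GLU.gluUnProject:
import android.opengl.GLU;
import android.opengl.Matrix;

// Shadow copies of the GL state; update them whenever you change the real
// modelview/projection matrices.
float[] modelView = new float[16];
float[] projection = new float[16];
int[] viewport = new int[4]; // {0, 0, width, height}

void onSurfaceChanged(int width, int height) {
    viewport = new int[] { 0, 0, width, height };
    Matrix.frustumM(projection, 0, -1, 1, -1, 1, 1, 100); // mirror your glFrustumf call
}

void onDrawFrame() {
    Matrix.setIdentityM(modelView, 0);
    Matrix.translateM(modelView, 0, 0, 0, -5); // mirror your glTranslatef call
    // ... issue the identical glLoadIdentity()/glTranslatef() calls on GL10 ...
}

// Equivalent of the gluUnProject call in the C++ code. winz has to come from
// somewhere else (e.g. intersecting a known plane), because ES 1.x has no
// portable way to read the depth buffer back with glReadPixels.
float[] unproject(float winx, float winy, float winz, int winHeight) {
    float[] obj = new float[4];
    GLU.gluUnProject(winx, winHeight - winy, winz,
            modelView, 0, projection, 0, viewport, 0, obj, 0);
    // older Android versions don't divide by w here, so do it explicitly
    return new float[] { obj[0] / obj[3], obj[1] / obj[3], obj[2] / obj[3] };
}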
I'm working on a painting application for Android and I'd like to use raw data from the device's touch screen to adjust the user's paint brush as they draw. I've seen other apps for Android (iSteam, for example) where the size of the brush is based on the size of your fingerprint on the screen. As far as painting apps go, that would be a huge feature.
Is there a way to get this data? I've googled for quite a while, but I haven't found any source demonstrating it. I know it's possible, because Dolphin Browser adds multi-touch support to the Hero without any changes beneath the application level. You must be able to get a 2D matrix of raw data or something...
I'd really appreciate any help I can get!
There are some properties in the MotionEvent class. You can use the getSize() method to find the size of the contact area. The MotionEvent class also gives access to pressure, coordinates, etc.
If you check the API Demos in the SDK, there's a simple painting app called TouchPaint:
package com.example.android.apis.graphics;
It uses the following to draw on the canvas:
@Override
public boolean onTouchEvent(MotionEvent event) {
    int action = event.getAction();
    mCurDown = action == MotionEvent.ACTION_DOWN
            || action == MotionEvent.ACTION_MOVE;
    int N = event.getHistorySize();
    for (int i = 0; i < N; i++) {
        //Log.i("TouchPaint", "Intermediate pointer #" + i);
        drawPoint(event.getHistoricalX(i), event.getHistoricalY(i),
                event.getHistoricalPressure(i),
                event.getHistoricalSize(i));
    }
    drawPoint(event.getX(), event.getY(), event.getPressure(),
            event.getSize());
    return true;
}
private void drawPoint(float x, float y, float pressure, float size) {
//Log.i("TouchPaint", "Drawing: " + x + "x" + y + " p="
// + pressure + " s=" + size);
mCurX = (int)x;
mCurY = (int)y;
mCurPressure = pressure;
mCurSize = size;
mCurWidth = (int)(mCurSize*(getWidth()/3));
if (mCurWidth < 1) mCurWidth = 1;
if (mCurDown && mBitmap != null) {
int pressureLevel = (int)(mCurPressure*255);
mPaint.setARGB(pressureLevel, 255, 255, 255);
mCanvas.drawCircle(mCurX, mCurY, mCurWidth, mPaint);
mRect.set(mCurX-mCurWidth-2, mCurY-mCurWidth-2,
mCurX+mCurWidth+2, mCurY+mCurWidth+2);
invalidate(mRect);
}
mFadeSteps = 0;
}
Hope that helps :)
I'm working on something similar, and I'd suggest looking at the Canvas and Paint classes as well. Looking at getHistorySize() in MotionEvent might also be helpful for figuring out how long a particular stroke has been in play.
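For the timing part, a small illustrative sketch using the standard MotionEvent getters (the speed calculation is just one idea for brush dynamics):
@Override
public boolean onTouchEvent(MotionEvent event) {
    // getDownTime() is when the stroke started and getEventTime() is "now",
    // so the difference is how long this stroke has been in play (ms).
    long strokeMillis = event.getEventTime() - event.getDownTime();

    // The batched history also gives a rough speed for brush dynamics.
    int n = event.getHistorySize();
    if (n > 0) {
        float dx = event.getX() - event.getHistoricalX(0);
        float dy = event.getY() - event.getHistoricalY(0);
        long dt = event.getEventTime() - event.getHistoricalEventTime(0);
        float speed = dt > 0 ? (float) (Math.hypot(dx, dy) / dt) : 0; // px per ms
    }
    return true;
}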