How to add a cartoon face on the Camera Preview using Android

I have a module to put a cartoon face on the eyes, or anywhere else, on a live camera preview. I am using the MoodMe SDK. I have implemented the camera preview and I am getting the landmark x and y values, but I don't know where to apply those landmarks or how to put the image on the eyes using them. This is the code that runs while a face is being tracked on the live camera:
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = imageReader.acquireLatestImage();
    if (image == null) {
        return;
    }
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    byte[] yBytes = new byte[yBuffer.remaining()];
    yBuffer.get(yBytes);
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    byte[] uBytes = new byte[uBuffer.remaining()];
    uBuffer.get(uBytes);
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();
    byte[] vBytes = new byte[vBuffer.remaining()];
    vBuffer.get(vBytes);
    tracker.processImageBuffer(yBytes, WIDTH, HEIGHT, WIDTH, MDMTrackerManager.FrameFormat.GRAY);
    //renderer.updateTextureImage(yBytes, uBytes, vBytes, image.getPlanes()[1].getPixelStride());
    image.close();
    if (tracker.isFaceTracked()) {
        // renderer.updateVertices();
    }
    if (tracker.isFaceTracked()) {
        // translate to OpenGL coordinates
        float[] landmarks = new float[66 * 2];
        for (int i = 0; i < 66; ++i) {
            if (i >= 17 && i < 27 || i >= 36 && i < 48) {
                landmarks[2 * i] = 1.0f - tracker.getLandmarks()[2 * i] / (HEIGHT / 2);
                landmarks[2 * i + 1] = 1.0f - tracker.getLandmarks()[2 * i + 1] / (WIDTH / 2);
            }
        }
        // renderer.updateLandmarks(landmarks);
    } else {
        // renderer.updateLandmarks(null);
    }
    long currentTime = System.currentTimeMillis();
    double fps = 1000.0 / (currentTime - lastFrameTime);
    updater.update(fps);
    lastFrameTime = currentTime;
}
I have also used the Face Detection library, but it is not giving me accurate results. Is there a good library for face detection that can put an image or mask on the camera preview? Any help will be appreciated.

There are many libraries available that add a face mask to the camera preview. Almost all of them use OpenCV. Check out these libraries:
FaceFilter
Face Replace
FaceTracker
Android GPUimage
Android GPUImage seems to add an image on the camera preview. A similar question used this library to add a face mask to the camera preview; you can take a look at the answer posted there.
The FaceFilter library does the same job, but on a captured image. You can follow the author's tutorial for the library and integrate it with face detection. There are several tutorials for face detection; this one explains how to implement face detection while also overlaying graphics on it. The tutorial does not go deep into the overlaid graphics, but it may still solve your problem.
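As for actually placing the image once you have the landmark coordinates: a common approach is to draw the sticker on a transparent overlay view stacked above the preview. Below is a minimal sketch, not MoodMe's API. It assumes the common 66-point layout in which indices 36-41 are the left eye (verify against the MoodMe documentation), that the landmarks are already in view coordinates, and that overlayHolder belongs to a transparent SurfaceView on top of the preview:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.PorterDuff;
import android.view.SurfaceHolder;

// Hedged sketch: centre a sticker bitmap on the left eye.
void drawEyeSticker(SurfaceHolder overlayHolder, float[] landmarks, Bitmap sticker) {
    // Average the six left-eye landmark points (indices 36-41, assumed layout).
    float cx = 0f, cy = 0f;
    for (int i = 36; i < 42; i++) {
        cx += landmarks[2 * i];
        cy += landmarks[2 * i + 1];
    }
    cx /= 6f;
    cy /= 6f;
    Canvas canvas = overlayHolder.lockCanvas();
    if (canvas == null) return;
    canvas.drawColor(0, PorterDuff.Mode.CLEAR);  // wipe the previous frame
    canvas.drawBitmap(sticker,
            cx - sticker.getWidth() / 2f,
            cy - sticker.getHeight() / 2f, null);
    overlayHolder.unlockCanvasAndPost(canvas);
}

For the CLEAR transfer mode to work, the overlay surface needs a transparent pixel format (e.g. holder.setFormat(PixelFormat.TRANSPARENT)).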

Related

Overlay an image over a live frame - OpenCV4Android

I'm trying to overlay a little image, previously selected by the user, on the live frame. I already have the image's path from a previous activity. My problem is that I am not able to show the image on the frame.
I'm trying to detect a rectangle in the frame and display the selected image over it. I can detect the rectangle, but now I can't display the image on any part of the frame (I don't care about the rectangle's position right now).
I've been trying to do it following the explanations from Adding Image Overlay OpenCV for Android and add watermark small image to large image opencv4android, but it didn't work for me.
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat gray = inputFrame.gray();
    Mat dst = inputFrame.rgba();
    Imgproc.pyrDown(gray, dsIMG, new Size(gray.cols() / 2, gray.rows() / 2));
    Imgproc.pyrUp(dsIMG, usIMG, gray.size());
    Imgproc.Canny(usIMG, bwIMG, 0, threshold);
    Imgproc.dilate(bwIMG, bwIMG, new Mat(), new Point(-1, 1), 1);
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    cIMG = bwIMG.clone();
    Imgproc.findContours(cIMG, contours, hovIMG, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    for (MatOfPoint cnt : contours) {
        MatOfPoint2f curve = new MatOfPoint2f(cnt.toArray());
        Imgproc.approxPolyDP(curve, approxCurve, 0.02 * Imgproc.arcLength(curve, true), true);
        int numberVertices = (int) approxCurve.total();
        double contourArea = Imgproc.contourArea(cnt);
        if (Math.abs(contourArea) < 100) {
            continue;
        }
        // Rectangle detected
        if (numberVertices >= 4 && numberVertices <= 6) {
            List<Double> cos = new ArrayList<>();
            for (int j = 2; j < numberVertices + 1; j++) {
                cos.add(angle(approxCurve.toArray()[j % numberVertices], approxCurve.toArray()[j - 2], approxCurve.toArray()[j - 1]));
            }
            Collections.sort(cos);
            double mincos = cos.get(0);
            double maxcos = cos.get(cos.size() - 1);
            if (numberVertices == 4 && mincos >= -0.3 && maxcos <= 0.5) {
                // Small watermark image
                Mat a = imread(img_path);
                Mat bSubmat = dst.submat(0, dst.rows() - 1, 0, dst.cols() - 1);
                a.copyTo(bSubmat);
            }
        }
    }
    return dst;
}
NOTE: img_path is the path of the image I selected to display over the frame; I got it from the previous activity.
For now, I just want to display the image over the frame. Later, I will try to display it in the same position where the rectangle was found.
Any suggestion or recommendation is welcome, as I am new to OpenCV. I'm sorry for my English; feel free to ask about anything I didn't explain correctly, and I'll do my best to explain it better.
Thanks a lot!
If you just want to display the image as an overlay, and not save it as part of the video, you may find it easier to simply display it in a separate view above the video view. This will also likely use less processing and battery.
If you want to draw onto the camera image bitmap, the following will let you do that:
// `bytes` is the camera's image byte array and `opt` a BitmapFactory.Options instance.
Bitmap cameraBitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, opt);
Canvas camImgCanvas = new Canvas(cameraBitmap);
Drawable d = ContextCompat.getDrawable(getActivity(), R.drawable.myDrawable);
// Centre the drawing
int bitMapWidthCenter = cameraBitmap.getWidth() / 2;
int bitMapHeightCenter = cameraBitmap.getHeight() / 2;
d.setBounds(bitMapWidthCenter, bitMapHeightCenter,
        bitMapWidthCenter + d.getIntrinsicWidth(),
        bitMapHeightCenter + d.getIntrinsicHeight());
// And draw it...
d.draw(camImgCanvas);
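If you would rather stay inside OpenCV's onCameraFrame, note that copyTo only succeeds when the source and the destination ROI have exactly the same size and type, which is why the submat attempt in the question fails. A minimal sketch, assuming a boundingRect obtained with Imgproc.boundingRect(cnt) for the detected contour:

// Resize the watermark to the target region before copying into the frame.
Mat overlay = Imgcodecs.imread(img_path);
Imgproc.cvtColor(overlay, overlay, Imgproc.COLOR_BGR2RGBA);  // match the RGBA frame
Imgproc.resize(overlay, overlay, new Size(boundingRect.width, boundingRect.height));
Mat roi = dst.submat(boundingRect);  // a view into the frame, not a copy
overlay.copyTo(roi);                 // sizes and types now match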

Rendering issue on Project Tango using OpenCV image processing

I came across a problem rendering the camera image after some processing on its YUV buffer.
I am using the video-overlay-jni-example, and in the OnFrameAvailable method I create a new frame buffer using cv::Mat...
Here is how I create the new frame buffer:
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
After processing, I copy frame.data back to yuv_temp_buffer_ in order to render it on the texture: memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
And this works fine...
The problem starts when I try to execute the OpenCV method findChessboardCorners... using the frame that I created before.
findChessboardCorners takes about 90 ms to execute (11 fps); however, the result seems to be rendered at a much slower rate (it appears to render at ~0.5 fps on the screen).
Here is the code of the OnFrameAvailable method:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
  if (yuv_drawable_ == NULL) {
    return;
  }
  if (yuv_drawable_->GetTextureId() == 0) {
    LOGE("AugmentedRealityApp::yuv texture id not valid");
    return;
  }
  if (buffer->format != TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP) {
    LOGE("AugmentedRealityApp::yuv texture format is not supported by this app");
    return;
  }
  // The memory needs to be allocated after we get the first frame because we
  // need to know the size of the image.
  if (!is_yuv_texture_available_) {
    yuv_width_ = buffer->width;
    yuv_height_ = buffer->height;
    uv_buffer_offset_ = yuv_width_ * yuv_height_;
    yuv_size_ = yuv_width_ * yuv_height_ + yuv_width_ * yuv_height_ / 2;
    // Reserve and resize the buffer size for RGB and YUV data.
    yuv_buffer_.resize(yuv_size_);
    yuv_temp_buffer_.resize(yuv_size_);
    rgb_buffer_.resize(yuv_width_ * yuv_height_ * 3);
    AllocateTexture(yuv_drawable_->GetTextureId(), yuv_width_, yuv_height_);
    is_yuv_texture_available_ = true;
  }
  std::lock_guard<std::mutex> lock(yuv_buffer_mutex_);
  memcpy(&yuv_temp_buffer_[0], buffer->data, yuv_size_);
  ///
  cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar*) yuv_temp_buffer_.data());
  if (!stam.isCalibrated()) {
    Profiler profiler;
    profiler.startSampling();
    stam.initFromChessboard(frame, cv::Size(9, 6), 100);
    profiler.endSampling();
    profiler.print("initFromChessboard", -1);
  }
  ///
  memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
  swap_buffer_signal_ = true;
}
Here is the code of the initFromChessboard method:
bool STAM::initFromChessboard(const cv::Mat& image, const cv::Size& chessBoardSize, int squareSize) {
  cv::Mat rvec = cv::Mat(cv::Size(3, 1), CV_64F);
  cv::Mat tvec = cv::Mat(cv::Size(3, 1), CV_64F);
  std::vector<cv::Point2d> imagePoints, imageBoardPoints;
  std::vector<cv::Point3d> boardPoints;
  for (int i = 0; i < chessBoardSize.height; i++) {
    for (int j = 0; j < chessBoardSize.width; j++) {
      boardPoints.push_back(cv::Point3d(j * squareSize, i * squareSize, 0.0));
    }
  }
  // Getting only the Y channel (many functions, like face detection and
  // alignment, only need the grayscale image).
  cv::Mat gray(image.rows, image.cols, CV_8UC1);
  gray.data = image.data;
  bool found = findChessboardCorners(gray, chessBoardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);
#ifdef WINDOWS_VS
  printf("Number of chessboard points: %d\n", imagePoints.size());
#elif ANDROID
  LOGE("Number of chessboard points: %d", imagePoints.size());
#endif
  for (int i = 0; i < imagePoints.size(); i++) {
    cv::circle(image, imagePoints[i], 6, cv::Scalar(149, 43, 0), -1);
  }
}
Has anyone had the same problem after processing something in the YUV buffer and rendering it to the texture?
I did a test on another device (not the Project Tango) using the camera2 API, and the rendering rate on the screen matched the rate of the OpenCV processing itself.
I appreciate any help.
I had a similar problem. My app slowed down after using the copied YUV buffer and doing some image processing with OpenCV. I would recommend using the tango_support library to access the YUV image buffer, as follows:
In your config function:
int AugmentedRealityApp::TangoSetupConfig() {
  TangoSupport_createImageBufferManager(TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP, 1280, 720, &yuv_manager_);
}
In your callback function:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
  TangoSupport_updateImageBuffer(yuv_manager_, buffer);
}
In your render thread:
void AugmentedRealityApp::Render() {
  TangoImageBuffer* yuv = new TangoImageBuffer();
  TangoSupport_getLatestImageBuffer(yuv_manager_, &yuv);
  cv::Mat yuv_frame, rgb_img, gray_img;
  yuv_frame.create(720 * 3 / 2, 1280, CV_8UC1);
  memcpy(yuv_frame.data, yuv->data, 720 * 3 / 2 * 1280);  // YUV image
  cv::cvtColor(yuv_frame, rgb_img, CV_YUV2RGB_NV21);      // RGB image
  cv::cvtColor(rgb_img, gray_img, CV_RGB2GRAY);           // gray image
}
You can share the yuv_manager_ with other objects/threads so you can access the YUV image buffer wherever you need it.

How to improve OCR accuracy using Tesseract? [duplicate]

I've been using Tesseract to convert documents into text. The quality of the documents varies wildly, and I'm looking for tips on what sort of image processing might improve the results. I've noticed that highly pixellated text, for example that generated by fax machines, is especially difficult for Tesseract to process; presumably all those jagged character edges confound the shape-recognition algorithms.
What sort of image processing techniques would improve the accuracy? I've been using a Gaussian blur to smooth out the pixellated images and have seen some small improvement, but I'm hoping there is a more specific technique that would yield better results: say, a filter tuned to black-and-white images that would smooth out irregular edges, followed by a filter that would increase the contrast to make the characters more distinct.
Any general tips for someone who is a novice at image processing?
Fix the DPI (if needed); 300 DPI is the minimum.
Fix the text size (e.g. 12 pt should be OK).
Try to fix the text lines (deskew and dewarp the text).
Try to fix the illumination of the image (e.g. no dark parts).
Binarize and de-noise the image.
There is no universal command line that fits all cases (sometimes you need to blur and sharpen the image), but you can give TEXTCLEANER from Fred's ImageMagick Scripts a try.
If you are not a fan of the command line, you can try the open-source scantailor.sourceforge.net or the commercial bookrestorer.
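If you prefer to do the binarize/de-noise step in code rather than with TEXTCLEANER, here is a minimal sketch using OpenCV's Java bindings; the file names and parameter values are illustrative only:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class PreprocessForOcr {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);  // load the OpenCV native library
        Mat src = Imgcodecs.imread("scan.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat denoised = new Mat();
        Imgproc.medianBlur(src, denoised, 3);  // remove salt-and-pepper noise
        Mat binary = new Mat();
        Imgproc.threshold(denoised, binary, 0, 255,
                Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);  // global Otsu binarisation
        Imgcodecs.imwrite("scan_clean.png", binary);
    }
}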
I am by no means an OCR expert, but this week I had the need to convert text out of a JPG.
I started with a colorized, RGB, 445x747 pixel JPG.
I immediately tried Tesseract on it, and the program converted almost nothing.
I then went into GIMP and did the following:
image > mode > grayscale
image > scale image > 1191x2000 pixels
filters > enhance > unsharp mask with values of radius = 6.8, amount = 2.69, threshold = 0
I then saved it as a new JPG at 100% quality.
Tesseract was then able to extract all the text into a .txt file.
GIMP is your friend.
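The same GIMP recipe can also be reproduced in code. Below is a minimal sketch with OpenCV's Java bindings, using an unsharp mask built from a Gaussian blur (sharpened = original * (1 + amount) - blurred * amount); the radius/amount values simply mirror the GIMP settings above and are not tuned:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class GimpStyleCleanup {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat src = Imgcodecs.imread("input.jpg", Imgcodecs.IMREAD_GRAYSCALE);  // mode > grayscale
        Mat scaled = new Mat();
        Imgproc.resize(src, scaled, new Size(1191, 2000), 0, 0, Imgproc.INTER_CUBIC);  // scale image
        Mat blurred = new Mat();
        Imgproc.GaussianBlur(scaled, blurred, new Size(0, 0), 6.8);  // radius roughly maps to sigma
        Mat sharpened = new Mat();
        Core.addWeighted(scaled, 1 + 2.69, blurred, -2.69, 0, sharpened);  // unsharp mask, amount 2.69
        Imgcodecs.imwrite("output.png", sharpened);
    }
}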
As a rule of thumb, I usually apply the following image pre-processing techniques using the OpenCV library:
Rescaling the image (recommended if you're working with images that have a DPI of less than 300):
import cv2
import numpy as np

img = cv2.resize(img, None, fx=1.2, fy=1.2, interpolation=cv2.INTER_CUBIC)
Converting image to grayscale:
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Applying dilation and erosion to remove noise (you may play with the kernel size depending on your data set):
kernel = np.ones((1, 1), np.uint8)
img = cv2.dilate(img, kernel, iterations=1)
img = cv2.erode(img, kernel, iterations=1)
Applying blur, which can be done using one of the following lines (each has its pros and cons; however, median blur and bilateral filtering usually perform better than Gaussian blur):
cv2.threshold(cv2.GaussianBlur(img, (5, 5), 0), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cv2.threshold(cv2.bilateralFilter(img, 5, 75, 75), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cv2.threshold(cv2.medianBlur(img, 3), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cv2.adaptiveThreshold(cv2.GaussianBlur(img, (5, 5), 0), 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
cv2.adaptiveThreshold(cv2.bilateralFilter(img, 9, 75, 75), 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
cv2.adaptiveThreshold(cv2.medianBlur(img, 3), 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
I've recently written a pretty simple guide to Tesseract. It should enable you to write your first OCR script and clear up some hurdles I experienced when things were less clear than I would have liked in the documentation.
In case you'd like to check them out, here are the links:
Getting started with Tesseract - Part I: Introduction
Getting started with Tesseract - Part II: Image Pre-processing
Three points to improve the readability of the image:
Resize the image with variable height and width (multiply the image height and width by 0.5, 1, and 2).
Convert the image to grayscale (black and white).
Remove the noise pixels to make the image clearer (filter the image).
Refer to the code below:
Resize
public Bitmap Resize(Bitmap bmp, int newWidth, int newHeight)
{
    Bitmap temp = (Bitmap)bmp;
    Bitmap bmap = new Bitmap(newWidth, newHeight, temp.PixelFormat);
    double nWidthFactor = (double)temp.Width / (double)newWidth;
    double nHeightFactor = (double)temp.Height / (double)newHeight;
    double fx, fy, nx, ny;
    int cx, cy, fr_x, fr_y;
    Color color1 = new Color();
    Color color2 = new Color();
    Color color3 = new Color();
    Color color4 = new Color();
    byte nRed, nGreen, nBlue;
    byte bp1, bp2;
    for (int x = 0; x < bmap.Width; ++x)
    {
        for (int y = 0; y < bmap.Height; ++y)
        {
            fr_x = (int)Math.Floor(x * nWidthFactor);
            fr_y = (int)Math.Floor(y * nHeightFactor);
            cx = fr_x + 1;
            if (cx >= temp.Width) cx = fr_x;
            cy = fr_y + 1;
            if (cy >= temp.Height) cy = fr_y;
            fx = x * nWidthFactor - fr_x;
            fy = y * nHeightFactor - fr_y;
            nx = 1.0 - fx;
            ny = 1.0 - fy;
            color1 = temp.GetPixel(fr_x, fr_y);
            color2 = temp.GetPixel(cx, fr_y);
            color3 = temp.GetPixel(fr_x, cy);
            color4 = temp.GetPixel(cx, cy);
            // Blue
            bp1 = (byte)(nx * color1.B + fx * color2.B);
            bp2 = (byte)(nx * color3.B + fx * color4.B);
            nBlue = (byte)(ny * (double)bp1 + fy * (double)bp2);
            // Green
            bp1 = (byte)(nx * color1.G + fx * color2.G);
            bp2 = (byte)(nx * color3.G + fx * color4.G);
            nGreen = (byte)(ny * (double)bp1 + fy * (double)bp2);
            // Red
            bp1 = (byte)(nx * color1.R + fx * color2.R);
            bp2 = (byte)(nx * color3.R + fx * color4.R);
            nRed = (byte)(ny * (double)bp1 + fy * (double)bp2);
            bmap.SetPixel(x, y, System.Drawing.Color.FromArgb(255, nRed, nGreen, nBlue));
        }
    }
    bmap = SetGrayscale(bmap);
    bmap = RemoveNoise(bmap);
    return bmap;
}
SetGrayscale
public Bitmap SetGrayscale(Bitmap img)
{
    Bitmap temp = (Bitmap)img;
    Bitmap bmap = (Bitmap)temp.Clone();
    Color c;
    for (int i = 0; i < bmap.Width; i++)
    {
        for (int j = 0; j < bmap.Height; j++)
        {
            c = bmap.GetPixel(i, j);
            byte gray = (byte)(.299 * c.R + .587 * c.G + .114 * c.B);
            bmap.SetPixel(i, j, Color.FromArgb(gray, gray, gray));
        }
    }
    return (Bitmap)bmap.Clone();
}
RemoveNoise
public Bitmap RemoveNoise(Bitmap bmap)
{
    for (var x = 0; x < bmap.Width; x++)
    {
        for (var y = 0; y < bmap.Height; y++)
        {
            var pixel = bmap.GetPixel(x, y);
            if (pixel.R < 162 && pixel.G < 162 && pixel.B < 162)
                bmap.SetPixel(x, y, Color.Black);
            else if (pixel.R > 162 && pixel.G > 162 && pixel.B > 162)
                bmap.SetPixel(x, y, Color.White);
        }
    }
    return bmap;
}
(Input and output example images omitted.)
This was a while ago, but it still might be useful.
My experience shows that resizing the image in memory before passing it to Tesseract sometimes helps.
Try different modes of interpolation. The post https://stackoverflow.com/a/4756906/146003 helped me a lot.
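For instance, with OpenCV's Java bindings you could compare two interpolation modes like this (a sketch; `src` is assumed to be a loaded grayscale Mat, and the 2x factor is arbitrary):

// Upscale in memory with two interpolation modes and feed both results
// to Tesseract to see which one reads better.
Mat nearest = new Mat();
Mat cubic = new Mat();
Imgproc.resize(src, nearest, new Size(), 2, 2, Imgproc.INTER_NEAREST);
Imgproc.resize(src, cubic, new Size(), 2, 2, Imgproc.INTER_CUBIC);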
What was EXTREMELY helpful to me along the way were the source codes of the Capture2Text project:
http://sourceforge.net/projects/capture2text/files/Capture2Text/
BTW: kudos to its author for sharing such a painstaking algorithm.
Pay special attention to the file Capture2Text\SourceCode\leptonica_util\leptonica_util.c: that's the essence of image preprocessing for this utility.
If you run the binaries, you can check the image transformation before/after the process in the Capture2Text\Output\ folder.
P.S. The mentioned solution uses Tesseract for OCR and Leptonica for preprocessing.
Java version of Sathyaraj's code above:
// Resize
public Bitmap resize(Bitmap img, int newWidth, int newHeight) {
    // Create the target bitmap at the new size (copying the source would keep the old size).
    Bitmap bmap = Bitmap.createBitmap(newWidth, newHeight, img.getConfig());
    double nWidthFactor = (double) img.getWidth() / (double) newWidth;
    double nHeightFactor = (double) img.getHeight() / (double) newHeight;
    double fx, fy, nx, ny;
    int cx, cy, fr_x, fr_y;
    int color1;
    int color2;
    int color3;
    int color4;
    // Use int, not byte: Java bytes are signed, so channel values above 127 would wrap.
    int nRed, nGreen, nBlue;
    int bp1, bp2;
    for (int x = 0; x < bmap.getWidth(); ++x) {
        for (int y = 0; y < bmap.getHeight(); ++y) {
            fr_x = (int) Math.floor(x * nWidthFactor);
            fr_y = (int) Math.floor(y * nHeightFactor);
            cx = fr_x + 1;
            if (cx >= img.getWidth())
                cx = fr_x;
            cy = fr_y + 1;
            if (cy >= img.getHeight())
                cy = fr_y;
            fx = x * nWidthFactor - fr_x;
            fy = y * nHeightFactor - fr_y;
            nx = 1.0 - fx;
            ny = 1.0 - fy;
            color1 = img.getPixel(fr_x, fr_y);
            color2 = img.getPixel(cx, fr_y);
            color3 = img.getPixel(fr_x, cy);
            color4 = img.getPixel(cx, cy);
            // Blue
            bp1 = (int) (nx * Color.blue(color1) + fx * Color.blue(color2));
            bp2 = (int) (nx * Color.blue(color3) + fx * Color.blue(color4));
            nBlue = (int) (ny * bp1 + fy * bp2);
            // Green
            bp1 = (int) (nx * Color.green(color1) + fx * Color.green(color2));
            bp2 = (int) (nx * Color.green(color3) + fx * Color.green(color4));
            nGreen = (int) (ny * bp1 + fy * bp2);
            // Red
            bp1 = (int) (nx * Color.red(color1) + fx * Color.red(color2));
            bp2 = (int) (nx * Color.red(color3) + fx * Color.red(color4));
            nRed = (int) (ny * bp1 + fy * bp2);
            bmap.setPixel(x, y, Color.argb(255, nRed, nGreen, nBlue));
        }
    }
    bmap = setGrayscale(bmap);
    bmap = removeNoise(bmap);
    return bmap;
}
// SetGrayscale
private Bitmap setGrayscale(Bitmap img) {
    Bitmap bmap = img.copy(img.getConfig(), true);
    int c;
    for (int i = 0; i < bmap.getWidth(); i++) {
        for (int j = 0; j < bmap.getHeight(); j++) {
            c = bmap.getPixel(i, j);
            // Use int, not byte, to avoid sign wrap-around for values above 127.
            int gray = (int) (.299 * Color.red(c) + .587 * Color.green(c)
                    + .114 * Color.blue(c));
            bmap.setPixel(i, j, Color.argb(255, gray, gray, gray));
        }
    }
    return bmap;
}
// RemoveNoise
private Bitmap removeNoise(Bitmap bmap) {
    for (int x = 0; x < bmap.getWidth(); x++) {
        for (int y = 0; y < bmap.getHeight(); y++) {
            int pixel = bmap.getPixel(x, y);
            if (Color.red(pixel) < 162 && Color.green(pixel) < 162 && Color.blue(pixel) < 162) {
                bmap.setPixel(x, y, Color.BLACK);
            }
        }
    }
    for (int x = 0; x < bmap.getWidth(); x++) {
        for (int y = 0; y < bmap.getHeight(); y++) {
            int pixel = bmap.getPixel(x, y);
            if (Color.red(pixel) > 162 && Color.green(pixel) > 162 && Color.blue(pixel) > 162) {
                bmap.setPixel(x, y, Color.WHITE);
            }
        }
    }
    return bmap;
}
The Tesseract documentation contains some good details on how to improve OCR quality via image-processing steps.
To some degree, Tesseract applies them automatically. It is also possible to tell Tesseract to write intermediate images for inspection, i.e. to check how well the internal image processing works (search for tessedit_write_images in the above reference).
More importantly, the new neural-network system in Tesseract 4 yields much better OCR results, in general and especially for images with some noise. It is enabled with --oem 1, e.g. as in:
$ tesseract --oem 1 -l deu page.png result pdf
(this example selects the German language)
Thus, it makes sense to first test how far you get with the new Tesseract LSTM mode before applying custom image pre-processing steps.
Adaptive thresholding is important if the lighting is uneven across the image.
My preprocessing using GraphicsMagick is mentioned in this post:
https://groups.google.com/forum/#!topic/tesseract-ocr/jONGSChLRv4
GraphicsMagick also has the -lat feature for a linear-time adaptive threshold, which I will try soon.
Another method of thresholding, using OpenCV, is described here:
https://docs.opencv.org/4.x/d7/d4d/tutorial_py_thresholding.html
I did the following to get good results from images whose text is not very small (a sketch of all four steps follows below):
Apply blur to the original image.
Apply an adaptive threshold.
Apply a sharpening effect.
If you are still not getting good results, scale the image to 150% or 200%.
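A minimal sketch of those steps with OpenCV's Java bindings, assuming the usual org.opencv imports (Core, Mat, Size, Imgcodecs, Imgproc) and a grayscale Mat `src`; kernel sizes, the adaptive block size, and the 2x upscale are illustrative, not tuned values:

Mat blurred = new Mat();
Imgproc.GaussianBlur(src, blurred, new Size(3, 3), 0);  // 1) blur
Mat binary = new Mat();
Imgproc.adaptiveThreshold(blurred, binary, 255,
        Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 31, 2);  // 2) adaptive threshold
Mat soft = new Mat();
Imgproc.GaussianBlur(binary, soft, new Size(0, 0), 3);
Mat sharpened = new Mat();
Core.addWeighted(binary, 1.5, soft, -0.5, 0, sharpened);  // 3) unsharp-mask sharpening
Mat scaled = new Mat();
Imgproc.resize(sharpened, scaled, new Size(), 2, 2, Imgproc.INTER_CUBIC);  // 4) scale to 200%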
Reading text from image documents with any OCR engine has many issues that affect accuracy. There is no fixed solution for all cases, but here are a few things to consider to improve OCR results.
1) Presence of noise due to poor image quality or unwanted elements/blobs in the background region. This requires some pre-processing, such as noise removal, which can easily be done using a Gaussian filter or a standard median filter. These are also available in OpenCV.
2) Wrong orientation of the image: because of wrong orientation, the OCR engine fails to segment the lines and words in the image correctly, which gives the worst accuracy (see the deskew sketch below).
3) Presence of lines: while doing word or line segmentation, the OCR engine sometimes tries to merge words and lines together, thus processing the wrong content and giving wrong results. There are other issues too, but these are the basic ones.
This post on an OCR application is an example case where some image pre-processing and post-processing of the OCR result can be applied to get better accuracy.
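For point 2, a simple deskew can be sketched with OpenCV's Java bindings: estimate the dominant text angle from the minimum-area rectangle around the dark pixels, then rotate the page back. This is a rough approach for small skews only, not a general dewarp, and the file name is illustrative:

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

Mat gray = Imgcodecs.imread("page.png", Imgcodecs.IMREAD_GRAYSCALE);
Mat ink = new Mat();
Imgproc.threshold(gray, ink, 0, 255,
        Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU);  // text pixels become white
Mat points = new Mat();
Core.findNonZero(ink, points);  // coordinates of all text pixels
MatOfPoint2f pts = new MatOfPoint2f();
points.convertTo(pts, CvType.CV_32F);
RotatedRect box = Imgproc.minAreaRect(pts);
double angle = (box.angle < -45) ? box.angle + 90 : box.angle;  // normalise the reported angle
Mat rotation = Imgproc.getRotationMatrix2D(
        new Point(gray.cols() / 2.0, gray.rows() / 2.0), angle, 1.0);
Mat deskewed = new Mat();
Imgproc.warpAffine(gray, deskewed, rotation, gray.size(),
        Imgproc.INTER_CUBIC, Core.BORDER_REPLICATE, new Scalar(255));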
Text recognition depends on a variety of factors to produce good-quality output. OCR output highly depends on the quality of the input image. This is why every OCR engine provides guidelines regarding the quality and size of the input image; these guidelines help the engine produce accurate results.
I have written a detailed article on image processing in Python. Follow the link below for more explanation; it also includes the Python source code to implement those processes.
Please write a comment if you have a suggestion or a better idea on this topic.
https://medium.com/cashify-engineering/improve-accuracy-of-ocr-using-image-preprocessing-8df29ec3a033
You can do noise reduction and then apply thresholding, and you can also play around with the OCR configuration by changing the --psm and --oem values. Try, for example:
tesseract input.png output --psm 5 --oem 2
You can also look at the following link for further details:
here
So far, I've played a lot with Tesseract 3.x, 4.x, and 5.0.0.
Tesseract 4.x and 5.x seem to yield exactly the same accuracy.
Sometimes I get better results with the legacy engine (using --oem 0), and sometimes better results with the LSTM engine (--oem 1).
Generally speaking, I get the best results on upscaled images with the LSTM engine. The latter is on par with my earlier engine (ABBYY CLI OCR 11 for Linux).
Of course, the traineddata needs to be downloaded from GitHub, since most Linux distros only provide the fast versions.
The trained data that works for both the legacy and LSTM engines can be downloaded from https://github.com/tesseract-ocr/tessdata with commands like the following. Don't forget to download the OSD trained data too.
curl -L https://github.com/tesseract-ocr/tessdata/blob/main/eng.traineddata?raw=true -o /usr/share/tesseract/tessdata/eng.traineddata
curl -L https://github.com/tesseract-ocr/tessdata/blob/main/osd.traineddata?raw=true -o /usr/share/tesseract/tessdata/osd.traineddata
I've ended up using ImageMagick as my image preprocessor, since it's convenient and easily scripted. You can install it with yum install ImageMagick or apt install imagemagick, depending on your distro flavor.
So here's my one-liner preprocessor that fits most of what I feed to my OCR:
convert my_document.jpg -units PixelsPerInch -respect-parenthesis \( -compress LZW -resample 300 -bordercolor black -border 1 -trim +repage -fill white -draw "color 0,0 floodfill" -alpha off -shave 1x1 \) \( -bordercolor black -border 2 -fill white -draw "color 0,0 floodfill" -alpha off -shave 0x1 -deskew 40 +repage \) -antialias -sharpen 0x3 preprocessed_my_document.tiff
Basically we:
use the TIFF format, since Tesseract likes it more than JPG (decompressor related, who knows)
use lossless LZW TIFF compression
resample the image to 300 DPI
use some black magic to remove unwanted colors
try to rotate the page if rotation can be detected
antialias the image
sharpen the text
The resulting image can then be fed to Tesseract with:
tesseract -l eng preprocessed_my_document.tiff - --oem 1 --psm 1
Btw, some years ago I wrote a 'poor man's OCR server', which checks for changed files in a given directory and launches OCR operations on all files not already OCRed. pmocr is compatible with Tesseract 3.x-5.x and abbyyocr11.
See the pmocr project on GitHub.

The right method to obtain the R-G-B intensities of an image?

I am using a simple camera app and show the live camera preview in a SurfaceView. I turn the camera on/off using a button. In the button's onClick, after I call camera.startPreview(), I run:
camera.setPreviewCallback(new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        sumRED = 0;
        sumGREEN = 0;
        sumBLUE = 0;
        int frameHeight = camera.getParameters().getPictureSize().height;
        int frameWidth = camera.getParameters().getPictureSize().width;
        int[] myPixels = convertYUV420_NV21toRGB8888(data, 200, 200);
        Bitmap bm = Bitmap.createBitmap(myPixels, 200, 200, Bitmap.Config.ARGB_8888);
        imageView.setImageBitmap(bm);
        for (int i = 0; i < myPixels.length; i++) {
            sumRED = sumRED + Color.red(myPixels[i]);
            sumGREEN = sumGREEN + Color.green(myPixels[i]);
            sumBLUE = sumBLUE + Color.blue(myPixels[i]);
        }
        sumRED = sumRED / myPixels.length;
        sumGREEN = sumGREEN / myPixels.length;
        sumBLUE = sumBLUE / myPixels.length;
        String sRed = Float.toString(sumRED);
        String sGreen = Float.toString(sumGREEN);
        String sBlue = Float.toString(sumBLUE);
        rTextView.setText("RED: " + sRed);
        gTextView.setText("Green: " + sGreen);
        bTextView.setText("Blue: " + sBlue);
    }
});
I am using the NV21-to-RGB conversion code offered here.
I thought that by calling Color.blue(myPixels[i]) (and the red and green equivalents) I would get the color intensity of each pixel, and could then obtain the average color intensity of the picture.
But I have noticed that the TextView shows exactly the same values for red and green.
Is the code I am reusing not right for this purpose? Or is there a better, preferred, more efficient way to obtain the intensity of a specific color of a pixel (or of a live video preview)?
Thank you
Update:
At first I was not touching the output format of the preview, and the preview of the rebuilt bitmap was really bad. When I added cameraParam.setPreviewFormat(ImageFormat.YV12);, the rebuilt bitmap was divided into four quadrants. When I covered the camera with my finger (shown mostly orange/red due to the flash being on), the rebuilt image was purple in the top two quadrants and greenish in the bottom two. When I changed the format to NV21, the rebuilt image was a single image and showed pretty much the same as the preview, and there was a significant difference between all the R-G-B values. So my question is: why am I getting results in the range of 150-160 when most of the screen is a single color, rather than values in the range of 255 or so? I thought the conversion algorithm I was using converted to a scale of 255. What am I missing here?

Camera preview processing on Android

I'm making a line follower for my robot on Android (to learn Java/Android programming). Currently I'm facing the image-processing problem: the camera preview returns an image in a format called YUV, which I want to threshold in order to know where the line is. How would one do that?
As of now I've succeeded in getting something: I can definitely read data from the camera preview and, by some miracle, even know whether the light intensity is over or under a certain value at a certain area of the screen. My goal is to draw the robot's path on an overlay over the camera preview; that too works to some extent, but the problem is the YUV management.
As you can see in the screenshot (not shown here), not only is the dark area drawn sideways, it also repeats itself four times, and the preview image is stretched. I cannot figure out how to fix these problems.
Here's the relevant part of the code:
public void surfaceCreated(SurfaceHolder arg0) {
    // camera setup
    mCamera = Camera.open();
    Camera.Parameters parameters = mCamera.getParameters();
    List<Camera.Size> sizes = parameters.getSupportedPreviewSizes();
    for (int i = 0; i < sizes.size(); i++) {
        Log.i("CS", i + " - width: " + sizes.get(i).width + " height: " + sizes.get(i).height + " size: " + (sizes.get(i).width * sizes.get(i).height));
    }
    // change preview size
    final Camera.Size cs = sizes.get(8);
    parameters.setPreviewSize(cs.width, cs.height);
    // initialize image data array
    imgData = new int[cs.width * cs.height];
    // make picture gray scale
    parameters.setColorEffect(Camera.Parameters.EFFECT_MONO);
    parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
    mCamera.setParameters(parameters);
    // change display size
    LayoutParams params = (LayoutParams) mSurfaceView.getLayoutParams();
    params.height = (int) (mSurfaceView.getWidth() * cs.height / cs.width);
    mSurfaceView.setLayoutParams(params);
    LayoutParams overlayParams = (LayoutParams) swOverlay.getLayoutParams();
    overlayParams.width = mSurfaceView.getWidth();
    overlayParams.height = mSurfaceView.getHeight();
    swOverlay.setLayoutParams(overlayParams);
    try {
        mCamera.setPreviewDisplay(mSurfaceHolder);
        mCamera.setDisplayOrientation(90);
        mCamera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
        mCamera.stopPreview();
        mCamera.release();
    }
    // callback every time a new frame is available
    mCamera.setPreviewCallback(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
            // create bitmap from camera preview
            int pixel, pixVal, frameSize = cs.width * cs.height;
            for (int i = 0; i < frameSize; i++) {
                pixel = (0xff & ((int) data[i])) - 16;
                if (pixel < threshold) {
                    pixVal = 0;
                } else {
                    pixVal = 1;
                }
                imgData[i] = pixVal;
            }
            int cp = imgData[(int) (cs.width * (0.5 + (cs.height / 2)))];
            //Log.i("CAMERA", "Center pixel RGB: " + cp);
            debug.setText("Center pixel: " + cp);
            // process preview image data
            Paint paint = new Paint();
            paint.setColor(Color.YELLOW);
            int start, finish, last;
            start = finish = last = -1;
            float x_ratio = mSurfaceView.getWidth() / cs.width;
            float y_ratio = mSurfaceView.getHeight() / cs.height;
            // display calculated path on overlay using canvas
            Canvas overlayCanvas = overlayHolder.lockCanvas();
            overlayCanvas.drawColor(0, Mode.CLEAR);
            // start by finding the tape from bottom of the screen
            for (int y = cs.height; y > 0; y--) {
                for (int x = 0; x < cs.width; x++) {
                    pixel = imgData[y * cs.height + x];
                    if (pixel == 1 && last == 0 && start == -1) {
                        start = x;
                    } else if (pixel == 0 && last == 1 && finish == -1) {
                        finish = x;
                        break;
                    }
                    last = pixel;
                }
                //overlayCanvas.drawLine(start * x_ratio, y * y_ratio, finish * x_ratio, y * y_ratio, paint);
                //start = finish = last = -1;
            }
            overlayHolder.unlockCanvasAndPost(overlayCanvas);
        }
    });
}
This code sometimes generates an error when quitting the application, due to some method being called after release, which is the least of my problems.
UPDATE:
Now that the orientation problem is fixed (CCD sensor orientation), I'm still facing the repetition problem. This is probably related to my YUV data management...
Your surface and camera management looks correct, but I would double-check that the camera actually accepted the preview size settings (some camera implementations silently reject certain settings).
As you are working in portrait mode, keep in mind that the camera does not care about phone orientation: its coordinate origin is determined by the CCD chip, it is always at the right corner, and the scan direction is top to bottom and right to left, quite different from your overlay canvas. (If you were in landscape mode, everything would be correct.) This is certainly the source of the odd drawing result.
Your thresholding is a bit naive and not very useful in real life; I would suggest adaptive thresholding. In our javaocr project (pure Java, also has Android demos) we implemented an efficient Sauvola binarisation (see the demos):
http://sourceforge.net/projects/javaocr/
Binarisation performance can be improved by working only on single image rows (patches welcome).
The issue with the UV part of the image is easy: the default format is NV21, the luminance plane comes first, and the data is just a byte stream, so you do not need the UV part of the image at all (look into the demos above).
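To make the adaptive-thresholding point concrete, here is a minimal pure-Java sketch of a mean-based local threshold over the NV21 luminance plane. It is a much cruder cousin of the Sauvola binarisation implemented in javaocr, and the brute-force window loop is O(window^2) per pixel, so treat it as an illustration rather than production code; the window and bias parameters are illustrative:

// Binarize the Y (luminance) plane of an NV21 preview frame with a local
// mean threshold. In NV21 the Y plane is simply the first width*height bytes.
int[] binarize(byte[] nv21, int width, int height, int window, int bias) {
    int[] out = new int[width * height];
    int r = window / 2;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int sum = 0, count = 0;
            for (int dy = -r; dy <= r; dy++) {
                for (int dx = -r; dx <= r; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= height || xx < 0 || xx >= width) continue;
                    sum += nv21[yy * width + xx] & 0xff;  // mask: Java bytes are signed
                    count++;
                }
            }
            int lum = nv21[y * width + x] & 0xff;
            // Pixel counts as "line" (dark) when clearly below the local mean.
            out[y * width + x] = (lum < sum / count - bias) ? 0 : 1;
        }
    }
    return out;
}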
