@Override
public boolean onOptionsItemSelected(MenuItem item) {
    int id = item.getItemId();
    if (id == R.id.Canny_Edge) { // Unfortunately stops the app when we use this option
        ImageView i = (ImageView) findViewById(R.id.image_view);
        Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.smiley);
        Mat srcMat = new Mat(bmp.getHeight(), bmp.getWidth(), CvType.CV_8UC3);
        Bitmap myBitmap32 = bmp.copy(Bitmap.Config.ARGB_8888, true);
        Utils.bitmapToMat(myBitmap32, srcMat);
        Mat gray = new Mat(srcMat.size(), CvType.CV_8UC1);
        Imgproc.cvtColor(srcMat, gray, Imgproc.COLOR_RGB2GRAY, 4);
        Mat edge = new Mat();
        Mat dst = new Mat();
        Imgproc.Canny(gray, edge, 80, 90);
        Imgproc.cvtColor(edge, dst, Imgproc.COLOR_GRAY2RGBA, 4);
        Bitmap resultBitmap = Bitmap.createBitmap(dst.cols(), dst.rows(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(dst, resultBitmap);
        i.setImageBitmap(resultBitmap);
    } else if (id == R.id.Sobel) {
        ImageView i = (ImageView) findViewById(R.id.image_view);
        i.setImageResource(R.drawable.apj);
        // some code
    }
    return super.onOptionsItemSelected(item);
}
In the code above, Android Studio doesn't show any errors, but the app stops ("unfortunately stopped") as soon as the Canny_Edge option is selected from the menu.
Why? Can anyone solve this problem?
I answered a similar question at https://stackoverflow.com/a/50637228/1693327, but I am unable to comment on this post due to lack of reputation points, so I will copy the answer below.
The device is most likely crashing when it executes the Imgproc.Canny call.
I believe the app crashes when it hits the Canny detector because the wrong OpenCV Manager is installed on your device, either by version number or by CPU instruction set. Checking the version is straightforward: go to the OpenCV-android-sdk\apk directory and check the three (x.y.z) numbers after OpenCV_.
Checking the instruction set of an Android device (on Windows)
To check the instruction set of your device, navigate to the adb (Android Debug Bridge) directory, commonly located at:
C:\Users\<your username>\AppData\Local\Android\Sdk\platform-tools
Run the command:
./adb.exe shell cat /proc/cpuinfo
After getting the correct instruction set, navigate back to OpenCV-android-sdk\apk and locate the APK with the matching version and instruction set to install on your Android device.
You can then transfer the APK to your device and install it. Another way I find useful is to navigate to the adb.exe directory and run the command:
./adb.exe install <path to OpenCV-android-sdk>/apk/OpenCV_x.y.z_Manager_x.yz_<platform instruction set>.apk
Apart from the steps above, make sure you do not have any other configuration that points to a different OpenCV Manager, such as one stated in the Application.mk or build.gradle files.
After the steps above, your Canny detector should be able to run on your device without crashing.
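Independent of which Manager APK is installed, it also helps to gate the OpenCV calls behind the loader callback, so a missing or mismatched Manager fails gracefully instead of crashing inside Imgproc.Canny. A minimal sketch using the standard OpenCV4Android loader (assuming an OpenCV 3.x SDK; adjust the version constant to yours):

// imports: org.opencv.android.{OpenCVLoader, BaseLoaderCallback, LoaderCallbackInterface}
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        if (status == LoaderCallbackInterface.SUCCESS) {
            // Native libs are loaded; Imgproc.Canny is now safe to call
        } else {
            super.onManagerConnected(status); // prompts the user to install OpenCV Manager
        }
    }
};

@Override
protected void onResume() {
    super.onResume();
    if (!OpenCVLoader.initDebug()) {
        // Fall back to the OpenCV Manager service installed on the device
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_0_0, this, mLoaderCallback);
    }
}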
Happy Developing :).
I'm trying to implement some computer vision in an Android app.
I have OpenCV integrated, and I'm writing native C++ code for it that's called via JNI. That all seems to be working. My problem is that when executing the computer vision code, the following line crashes the app without any error:
detector->detectAndCompute(usr_img, usr_mask, usr_keypoints, usr_descriptors);
If I use the ORB detector instead of SIFT, it does work, but on my physical device it then crashes on knnMatch, whereas on an emulated Pixel 5 it completes correctly. Maybe it has something to do with my OpenCV and Android versions?
Here's the full computer vision code:
#include <opencv2/opencv.hpp>
using namespace cv;

void process_image(char* in_filepath, char* out_filepath) {
    Mat usr_img = imread(in_filepath); // read images from the disk
    Mat ref_img = imread("redacted");
    Mat overlay_img = imread("redacted");
    Mat out_img; // make a copy for output
    usr_img.copyTo(out_img);
    // Set up feature detector
    Ptr<SIFT> detector = SIFT::create();
    //Ptr<ORB> detector = ORB::create(); // detectAndCompute works if I use this instead
    // Set up feature matcher
    // Note: NORM_HAMMING suits binary descriptors (ORB); SIFT's float descriptors
    // should be matched with NORM_L2. crossCheck=true also conflicts with knnMatch(k=2) below.
    Ptr<BFMatcher> matcher = BFMatcher::create(NORM_HAMMING, true);
    // Generate mask for ref image (so features are not created from the background)
    Mat ref_mask; // defines parts of the ref image that will be searched for features
    inRange(ref_img, Scalar(0.0, 0.0, 252.0), Scalar(2.0, 2.0, 255.0), ref_mask);
    bitwise_not(ref_mask, ref_mask); // invert the mask
    // ... and an all-white mask for the usr image (Mat takes rows first, then cols)
    Mat usr_mask = Mat(usr_img.rows, usr_img.cols, CV_8UC1, Scalar(255.0));
    // Detect keypoints
    std::vector<KeyPoint> ref_keypoints, usr_keypoints;
    Mat ref_descriptors, usr_descriptors;
    detector->detectAndCompute(ref_img, ref_mask, ref_keypoints, ref_descriptors);
    detector->detectAndCompute(usr_img, usr_mask, usr_keypoints, usr_descriptors);
    // Match descriptors between images; each match is a vector of matches by increasing "distance"
    std::vector<std::vector<DMatch>> matches;
    matcher->knnMatch(usr_descriptors, ref_descriptors, matches, 2);
    // Throw out bad matches
    std::vector<DMatch> good_matches;
    for (uint32_t i = 0; i < matches.size(); i++) {
        // consider it a good match if the next best match is 33% worse
        if (matches[i][0].distance * 1.33 < matches[i][1].distance) {
            good_matches.push_back(matches[i][0]);
        }
    }
    // Visualize the matches for debugging purposes
    Mat draw_match_img;
    drawMatches(usr_img, usr_keypoints, ref_img, ref_keypoints, good_matches, draw_match_img);
    imwrite("redacted", draw_match_img);
}
My OpenCV version is 4.5.4.
My Android version is 9 on the physical phone, and 11 (API 30) on the emulated Pixel 5.
I found the problem.
My images were 4000x3000 px and approximately 3000x1600 px. Scaling both images down by a factor of 2 makes everything work properly.
I added a resize after each imread like this:
resize(x_img,x_img,Size(),0.5,0.5,INTER_CUBIC);
What this tells me is that SIFT in OpenCV 4.5.4 has an image size limit above which execution crashes without an error message... annoying.
It also explains why some of the detectors worked and some did not, and why the behaviour even seemed to vary between a real device and an emulated one.
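If you'd rather not hard-code the factor of 2, the same workaround can be expressed as a small guard before detection. A minimal sketch on the Java side of the app (MAX_SIDE is an assumed threshold, not a documented OpenCV limit; tune it for your device):

static Mat capSize(Mat img) {
    final int MAX_SIDE = 2000; // assumed safe value, not a documented limit
    int side = Math.max(img.cols(), img.rows());
    if (side <= MAX_SIDE) return img;
    double scale = (double) MAX_SIDE / side;
    Mat out = new Mat();
    // INTER_AREA is the usual choice when shrinking
    Imgproc.resize(img, out, new Size(), scale, scale, Imgproc.INTER_AREA);
    return out;
}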
I am working on an app where I need to recognize text in an image, and what better way than using Tesseract, which is open source and widely accepted? So I have used Tesseract in my app. I get an image from the user and then apply two or three operations to improve the chances of getting a result, but I am not getting the expected output.
Java code:
// Draw the input onto an ARGB_8888 bitmap
final Bitmap tessBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(tessBitmap);
Paint paint = new Paint();
paint.setColor(Color.BLACK);
canvas.drawBitmap(image, 0, 0, paint);
// Grayscale + Otsu binarization in OpenCV
Mat tessMat = new Mat();
Utils.bitmapToMat(tessBitmap, tessMat);
Imgproc.cvtColor(tessMat, tessMat, Imgproc.COLOR_RGB2GRAY);
Imgproc.threshold(tessMat, tessMat, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);
// Back to a bitmap for Tesseract
final Bitmap newTessBitmap = Bitmap.createBitmap(tessMat.width(), tessMat.height(), Bitmap.Config.RGB_565);
Utils.matToBitmap(tessMat, newTessBitmap);
final Bitmap finalTessBitmap = Bitmap.createBitmap(newTessBitmap.getWidth(), newTessBitmap.getHeight(), Bitmap.Config.ARGB_8888);
Canvas tessCanvas = new Canvas(finalTessBitmap);
Paint tessPaint = new Paint();
tessPaint.setColor(Color.BLACK);
tessCanvas.drawBitmap(newTessBitmap, 0, 0, tessPaint);
and then passing this bitmap to Tesseract, but the output is not accurate and sometimes I don't get anything at all. I have compared my results with the online website https://www.newocr.com/, which claims to also use Tesseract in the back end. I have tried to contact them via email but couldn't get anything from them.
mTess = new TessBaseAPI();
tessModelPath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS).getAbsolutePath() + "/tesseract/";
mTess.init(tessModelPath, "eng", TessBaseAPI.OEM_TESSERACT_ONLY);
mTess.setPageSegMode(TessBaseAPI.PageSegMode.PSM_AUTO);
mTess.setImage(finalTessBitmap);
This is the base Tesseract code. Please help me solve my issue. Thanks...
Given below is the image I get after applying the operations above. When I pass it to Tesseract I get nothing, but the newocr.com website produces the exact text from the same image.
Result from newOCR.
This image is for results.
Please suggest what to do if you have any idea.
After digging more and running the same image through Python, I found that pytesseract works like a charm and produces the exact same output as newOCR. But when I run it on Android it doesn't work that well, so maybe the issue is with the Tesseract Android API. If you know anything else I can still do to improve accuracy, please help. Thanks in advance.
$ tesseract 8UIBw.jpg -
Warning: Invalid resolution 0 dpi. Using 70 instead.
Estimating resolution as 613
Tillamook
This was without any preprocessing...
$ tesseract -v
tesseract 4.0.0-253-g3948
leptonica-1.76.0 (Dec 14 2018, 15:34:47) [MSC v.1916 LIB Release x64]
libgif 5.1.4 : libjpeg 9b : libpng 1.6.35 : libtiff 4.0.9 : zlib 1.2.11 : libwebp 0.6.1 : libopenjp2 2.3.0
Found AVX
Found SSE
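The "Invalid resolution 0 dpi" warning above is also worth addressing on the Android side. A hedged sketch, assuming the tess-two TessBaseAPI from the question and that 300 dpi roughly matches the source images:

mTess.setVariable("user_defined_dpi", "300"); // assumed value; use your image's real DPI
mTess.setImage(finalTessBitmap);
String text = mTess.getUTF8Text();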
Not sure if this is the right way to ask, but please help. I have an image of a dented car. I have to process it, highlight the dents, and return the number of dents. I was able to do it reasonably well in MATLAB, with the following result:
The MATLAB code is:
img2 = rgb2gray(i1);
imshow(img2);
img3 = imtophat(img2, strel('disk', 15)); % top-hat filtering with a disk element
img4 = imadjust(img3);                    % contrast stretch
layer = img4(:,:,1);
img5 = layer > 100 & layer < 250;         % band threshold
img6 = imfill(img5, 'holes');
img7 = bwareaopen(img6, 5);               % remove blobs smaller than 5 px
[L, num] = bwlabeln(img7);                % label connected components; num = count
imshow(img7);
I = imread(i1);
Ians = CarDentIdentification(I);
However, when I try to do this using OpenCV, I get this:
With the following code:
Imgproc.cvtColor(source, middle, Imgproc.COLOR_RGB2GRAY);
Imgproc.equalizeHist(middle, middle);
Imgproc.threshold(middle, middle, 150, 255, Imgproc.THRESH_OTSU);
Please tell me how I can obtain better results in OpenCV, and also how to count the dents. I tried findContours(), but it gives a very large number. I have tried other images as well, but I'm not getting proper results.
Please help.
From the MATLAB documentation: imtophat (top-hat filtering) computes the morphological opening of the image (using imopen) and then subtracts the result from the original image.
You could do this in OpenCV with the following steps:
Step 1: Get the disk structuring element
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
Step 2: Compute the opening of the image and then subtract the result from the original image
tophat = cv2.morphologyEx(v, cv2.MORPH_TOPHAT, kernel)  # v is the grayscale input image
This gives the following result:
Step 3: Now you could just threshold it manually, or use Otsu:
ret, thresh = cv2.threshold(tophat, 17, 255, 0)
which gives you the following image:
Since the OP wants the code in Java, here is the probable code in Java:
private Mat topHat(Mat image)
{
    Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(15, 15), new Point(0, 0));
    Mat dst = new Mat();
    Imgproc.morphologyEx(image, dst, Imgproc.MORPH_TOPHAT, element, new Point(0, 0), 1);
    return dst;
}
Make sure you do this on a grayscale image (CvType.CV_8UC1); you can then threshold it suitably.
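To mirror the MATLAB bwlabeln counting step, here is a sketch of how the dents could be counted in Java (variable names are illustrative; connectedComponents expects a binary 8-bit single-channel image):

Mat gray = new Mat();
Imgproc.cvtColor(source, gray, Imgproc.COLOR_RGB2GRAY);
Mat tophat = topHat(gray);            // method from above
Mat binary = new Mat();
Imgproc.threshold(tophat, binary, 17, 255, Imgproc.THRESH_BINARY);
Mat labels = new Mat();
int nLabels = Imgproc.connectedComponents(binary, labels);
int dentCount = nLabels - 1;          // label 0 is the background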
I have an app that displays an image in an ImageView, and I am experiencing a problem specifically on Android 4.1.2. It is confirmed not to work on three separate 4.1.2 devices, while working on 2.3.7, 4.2.1, 4.3 and 4.4.2. The error occurs for several different images, but not all; there seems to be something about some specific JPEG files that doesn't work as intended.
How it actually looks, and how it shows on Android 4.1.2:
The above image (left) is one such problematic image file.
A summary of the code behind setting the displayed image is:
Bitmap bitmap, background;
ImageView imageView = (ImageView)findViewById(R.id.imageView);
BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;
options.inPurgeable = true;
options.inInputShareable = true;
bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.dog, options);
background = bitmap.copy(Bitmap.Config.RGB_565, true);
Canvas canvas = new Canvas(background);
canvas.drawBitmap(bitmap, 0, 0, null);
// Some calls to canvas.drawText(....) here, but doesn't have to happen for the error to occur
imageView.setImageBitmap(background);
I've found that I can resize and re-save the above photo in Photoshop to make it work, without knowing why. Since I have several such images, I'd prefer not having to do that.
I am wondering what the source of this error on Android 4.1.2 is, and whether there might be some programmatic way of fixing it.
I've tried my luck on Google with "skewed", "tilted", "distorted" and similar, but there are very few mentions of it and no fixes. This is the one mention with a screenshot I've found:
Anyone else getting distorted album art in Play Music? (Screenshot)
Based on the comment of rupps I changed from:
bitmap.copy(Bitmap.Config.RGB_565, true)
To:
bitmap.copy(Bitmap.Config.ARGB_8888, true)
This did in fact solve the problem for 4.1.2 devices, while remaining functionally similar on all other tested devices, so it does programmatically solve my issue. Note however that it requires double the memory, as each pixel is stored in 4 bytes instead of 2.
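As a side note, the intermediate copy can probably be avoided altogether by decoding straight to a mutable ARGB_8888 bitmap; a hedged sketch using standard BitmapFactory options (inMutable requires API 11+):

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inPreferredConfig = Bitmap.Config.ARGB_8888;
opts.inMutable = true; // so it can back a Canvas directly
Bitmap background = BitmapFactory.decodeResource(getResources(), R.drawable.dog, opts);
Canvas canvas = new Canvas(background);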
As for the source of the problem, I read from the documentation of RGB_565 that:
This configuration can produce slight visual artifacts depending on the configuration of the source.
I think this mostly relates to banding/color/dithering issues, and it does not explain the version-specific problem, but it may hint at why this setting is troublesome.
Possible Duplicate:
How to programatically take a screenshot on Android?
How can I capture the Android device's screen content and make an image file from the snapshot data? Which API should I use, or where could I find related resources?
BTW:
not a camera snapshot, but the device screen
Use the following code:
Bitmap bitmap;
View v1 = MyView.getRootView();
v1.setDrawingCacheEnabled(true);
bitmap = Bitmap.createBitmap(v1.getDrawingCache());
v1.setDrawingCacheEnabled(false);
Here MyView is the View we want to include in the screenshot. You can also get the drawing cache from any View this way (without getRootView()).
There is also another way. If we have a ScrollView as the root view, then it's better to use the following code:
LayoutInflater inflater = (LayoutInflater) this.getSystemService(LAYOUT_INFLATER_SERVICE);
FrameLayout root = (FrameLayout) inflater.inflate(R.layout.activity_main, null); // activity_main is UI(xml) file we used in our Activity class. FrameLayout is root view of my UI(xml) file.
root.setDrawingCacheEnabled(true);
Bitmap bitmap = getBitmapFromView(this.getWindow().findViewById(R.id.frameLayout)); // here give id of our root layout (here its my FrameLayout's id)
root.setDrawingCacheEnabled(false);
Here is the getBitmapFromView() method
public static Bitmap getBitmapFromView(View view) {
//Define a bitmap with the same size as the view
Bitmap returnedBitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(),Bitmap.Config.ARGB_8888);
//Bind a canvas to it
Canvas canvas = new Canvas(returnedBitmap);
//Get the view's background
Drawable bgDrawable =view.getBackground();
if (bgDrawable!=null)
//has background drawable, then draw it on the canvas
bgDrawable.draw(canvas);
else
//does not have background drawable, then draw white background on the canvas
canvas.drawColor(Color.WHITE);
// draw the view on the canvas
view.draw(canvas);
//return the bitmap
return returnedBitmap;
}
It will display the entire screen, including content hidden in your ScrollView.
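A hedged usage sketch (names are illustrative): capturing the ScrollView's full content via its only child, which spans the full scroll height, then saving it as a PNG:

Bitmap shot = getBitmapFromView(scrollView.getChildAt(0)); // child holds the full content height
try (FileOutputStream out = new FileOutputStream(new File(getFilesDir(), "scrollview.png"))) {
    shot.compress(Bitmap.CompressFormat.PNG, 100, out);
} catch (IOException e) {
    e.printStackTrace();
}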
UPDATED AS ON 20-04-2016
There is another, better way to take a screenshot. Here I have taken a screenshot of a WebView.
WebView w = new WebView(this);
w.setWebViewClient(new WebViewClient() {
    @Override
    public void onPageFinished(final WebView webView, String url) {
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                webView.measure(View.MeasureSpec.makeMeasureSpec(
                        View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED),
                        View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
                webView.layout(0, 0, webView.getMeasuredWidth(), webView.getMeasuredHeight());
                webView.setDrawingCacheEnabled(true);
                webView.buildDrawingCache();
                Bitmap bitmap = Bitmap.createBitmap(webView.getMeasuredWidth(),
                        webView.getMeasuredHeight(), Bitmap.Config.ARGB_8888);
                Canvas canvas = new Canvas(bitmap);
                Paint paint = new Paint();
                int height = bitmap.getHeight();
                canvas.drawBitmap(bitmap, 0, height, paint);
                webView.draw(canvas);
                if (bitmap != null) {
                    try {
                        String filePath = Environment.getExternalStorageDirectory().toString();
                        OutputStream out = null;
                        File file = new File(filePath, "/webviewScreenShot.png");
                        out = new FileOutputStream(file);
                        bitmap.compress(Bitmap.CompressFormat.PNG, 50, out);
                        out.flush();
                        out.close();
                        bitmap.recycle();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }, 1000);
    }
});
Hope this helps..!
AFAIK, all of the current methods to capture a screenshot on Android use the /dev/graphics/fb0 framebuffer. This includes ddms. Reading from this stream does require root. ddms uses adbd to request the information, so root is not required there, as adbd has the permissions needed to request the data from /dev/graphics/fb0.
The framebuffer contains 2+ "frames" of RGB565 image data. If you are able to read the data, you have to know the screen resolution to know how many bytes are needed for one image: each pixel is 2 bytes, so if the screen resolution is 480x800, you have to read 768,000 bytes per image, since a 480x800 RGB565 image has 384,000 pixels.
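That arithmetic as a quick sketch in Java (values from the example above):

int width = 480, height = 800;                   // example screen resolution
int bytesPerPixel = 2;                           // RGB565 = 16 bits per pixel
int pixels = width * height;                     // 384,000 pixels
int frameBytes = pixels * bytesPerPixel;         // 768,000 bytes per frame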
For newer Android platforms, one can execute the system utility screencap in /system/bin to get the screenshot without root permission.
You can try /system/bin/screencap -h to see how to use it under adb or any shell.
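For example, over adb from a desktop shell (the output path here is just an illustration):
./adb.exe shell /system/bin/screencap -p /sdcard/screenshot.png
./adb.exe pull /sdcard/screenshot.png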
By the way, I think this method is only good for a single snapshot. If we want to capture multiple frames for screen playback, it will be too slow.
I don't know whether there exists any other approach for faster screen capture.
[Based on Android source code:]
At the C++ side, the SurfaceFlinger implements the captureScreen API. This is exposed over the binder IPC interface, returning each time a new ashmem area that contains the raw pixels from the screen. The actual screenshot is taken through OpenGL.
For the system C++ clients, the interface is exposed through the ScreenshotClient class, defined in <surfaceflinger_client/SurfaceComposerClient.h> for Android < 4.1; for Android > 4.1 use <gui/SurfaceComposerClient.h>
Before JB, to take a screenshot in a C++ program, this was enough:
ScreenshotClient ssc;
ssc.update();
With JB and multiple displays, it becomes slightly more complicated:
ssc.update(
android::SurfaceComposerClient::getBuiltInDisplay(
android::ISurfaceComposer::eDisplayIdMain));
Then you can access it:
do_something_with_raw_bits(ssc.getPixels(), ssc.getSize(), ...);
Using the Android source code, you can compile your own shared library to access that API, and then expose it through JNI to Java. To create a screenshot from your app, the app has to have the READ_FRAME_BUFFER permission.
But even then, apparently you can create screenshots only from system applications, i.e. ones signed with the same key as the system. (This part I still don't quite understand, since I'm not familiar enough with the Android permissions system.)
Here is a piece of code, for JB 4.1 / 4.2:
#include <utils/RefBase.h>
#include <binder/IBinder.h>
#include <binder/MemoryHeapBase.h>
#include <gui/ISurfaceComposer.h>
#include <gui/SurfaceComposerClient.h>
static void do_save(const char *filename, const void *buf, size_t size) {
int out = open(filename, O_RDWR|O_CREAT, 0666);
int len = write(out, buf, size);
printf("Wrote %d bytes to out.\n", len);
close(out);
}
int main(int ac, char **av) {
android::ScreenshotClient ssc;
const void *pixels;
size_t size;
int buffer_index;
if(ssc.update(
android::SurfaceComposerClient::getBuiltInDisplay(
android::ISurfaceComposer::eDisplayIdMain)) != NO_ERROR ){
printf("Captured: w=%d, h=%d, format=%d\n");
ssc.getWidth(), ssc.getHeight(), ssc.getFormat());
size = ssc.getSize();
do_save(av[1], pixels, size);
}
else
printf(" screen shot client Captured Failed");
return 0;
}
You can try the following library: Android Screenshot Library (ASL) enables you to programmatically capture screenshots from Android devices without root access privileges. Instead, ASL utilizes a native service running in the background, started via the Android Debug Bridge (ADB) once per device boot.
According to this link, it is possible to use ddms, in the tools directory of the Android SDK, to take screen captures.
To do this within an application (and not during development), there are also applications that do so. But as @zed_0xff points out, it certainly requires root.
The framebuffer seems the way to go, but it will not always contain 2+ frames as mentioned by Ryan Conrad; in my case it contained only one. I guess it depends on the frame/display size.
I tried to read the framebuffer continuously, but it seems to return a fixed amount of bytes read. In my case that is 3,410,432 bytes, which is enough to store a display frame of 854x480 RGBA (854 x 480 x 4 = 1,639,680 bytes). Yes, the frame output in binary from fb0 is RGBA on my device. This will most likely vary from device to device, which will be important for you when decoding it =)
On my device, the /dev/graphics/fb0 permissions are such that only root and users from the group graphics can read fb0. graphics is a restricted group, so you will probably only be able to access fb0 on a rooted phone using the su command.
Android apps have the user id (uid) app_## and group id (gid) app_##.
adb shell has uid shell and gid shell, which has many more permissions than an app.
You can check those permissions at /system/etc/permissions/platform.xml.
This means you will be able to read fb0 in the adb shell without root, but you will not be able to read it from within an app without root.
Also, declaring the READ_FRAME_BUFFER and/or ACCESS_SURFACE_FLINGER permissions in AndroidManifest.xml will do nothing for a regular app, because these only work for 'signature' apps.
If you want to capture the screen from Java code in an Android app, AFAIK you must have root privileges.