How to scan a DPM datamatrix from a mobile app - android

I'm trying to leverage ZXing in an Android app to scan data matrices. So far I've been successful with standard printed data matrices.
But other data matrices, printed by laser or punched into the part, have circle-looking marks instead of square-looking ones.
These present a problem. The only app I've found capable of scanning them is QRDroid. This article says that QRDroid uses ZXing, so I'm thinking that if they can do it, there must be a way. Unfortunately, QRDroid is not an open source project, so I don't know how.
There's the possibility, of course, that QRDroid is using an algorithm to somehow transform the circle marks into square ones before it attempts to read the data matrix. I don't know anything about image manipulation in Java, so I can't imagine how this is done.
My question is whether there's a way to tweak ZXing to read this type of data matrix, or if there's any library I can use to manipulate the image to make it readable by ZXing.
Edit:
If I use an image editor (e.g. https://www.befunky.com) and apply a blur of 10, the code looks like a normal printed data matrix and my scan works. How should I go about doing this in my Android app?

After some research I found out that this type of marking is not really considered a standard data matrix, but is referred to by the manufacturing industry as a DPM, which stands for "Direct Part Marking", although I've read other sources call it "Dot Peen Marking" or "Dot Peen Matrix".
I posted this same question on an already existing issue in the ZXing repository, and this was the reply I got:
The problem is the WhiteRectDetector. It finds a white rectangle inside the code, similar to this issue. If you rotate the image slightly (say 10°) or you blur it as you did or you did a suitably sized pixel dilation followed by an erosion, you'll get something that should (mostly) be detectable.
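To make the dilation-followed-by-erosion suggestion concrete, here is a minimal sketch assuming OpenCV for Android is available in the project (it is not part of ZXing); the class name is mine, and the kernel size is a guess that would need tuning to the dot pitch:

import org.opencv.android.Utils;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import android.graphics.Bitmap;

public final class DpmPreprocessor {
    // Merge the separate dots into solid modules: threshold so the dark dots
    // become foreground, close the gaps between them (dilate then erode),
    // then invert back and return a bitmap for ZXing to try.
    public static Bitmap closeDots(Bitmap input, int kernelSize) {
        Mat img = new Mat();
        Utils.bitmapToMat(input, img);
        Imgproc.cvtColor(img, img, Imgproc.COLOR_RGBA2GRAY);
        Imgproc.threshold(img, img, 0, 255,
                Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU);
        Mat kernel = Imgproc.getStructuringElement(
                Imgproc.MORPH_ELLIPSE, new Size(kernelSize, kernelSize));
        Imgproc.morphologyEx(img, img, Imgproc.MORPH_CLOSE, kernel);
        Core.bitwise_not(img, img);
        Bitmap out = Bitmap.createBitmap(
                input.getWidth(), input.getHeight(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(img, out);
        return out;
    }
}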
Modifying the WhiteRectDetector to allow for dots rather than squares was not really an option for me due to deadlines, so I ended up switching from ZXing to Scandit, which proved able to scan these codes.
Scandit is a proprietary library, but I haven't really found any other alternatives (you can get a trial license, though). For those wanting to try it out to scan DPMs, the documentation is not very clear on how to enable scanning for this symbology, so here's the trick.
In Android:
settings.getSymbologySettings(Barcode.SYMBOLOGY_DATA_MATRIX)
.setExtensionEnabled("direct_part_marking_mode", true);
In Objective-C:
[[settings settingsForSymbology:SBSSymbologyDatamatrix]
    setExtension:@"direct_part_marking_mode" enabled:YES];

Related

Xamarin Android - Trouble with adding EXIF metadata for accented characters

I'd like to ask for a hand, because I've hit a technical impasse on a mobile project in Xamarin Android (not Forms, nor iOS).
In my application, I take a picture and, just before sending it to my Blob Storage (on Azure), I modify the file by adding EXIF metadata (more precisely, using the ExifInterface object).
The problem I'm having is that the special characters in my properties (even in the COMMENT tag) are replaced by question marks :(
Working without the special characters is not an option on my side, because my application is used all over the world (Vietnam, Morocco, Egypt, Canada).
The ExifInterface we use is the one from android.media; we also tried the androidx version (androidx.exifinterface.media), but the result is the same.
We also considered using XMP to add our metadata, but a limitation of the Xamarin framework prevents us from doing this (the absence of the System.Windows.Media.Imaging library blocks that bitmap manipulation).
We're looking for any feedback from experience, or a lead :)
Thanks for your time.
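For reference, here is a minimal Java sketch of the workflow described (the Xamarin bindings mirror this Android API); the path and comment values are placeholders. Note that several EXIF string tags are defined as ASCII by the EXIF specification, which is one plausible reason non-ASCII characters come back as question marks after a round trip:

import android.media.ExifInterface;
import java.io.IOException;

// Write a user comment into a captured JPEG before uploading it.
public static void tagPhoto(String jpegPath, String comment) throws IOException {
    ExifInterface exif = new ExifInterface(jpegPath);
    exif.setAttribute(ExifInterface.TAG_USER_COMMENT, comment); // non-ASCII may be lost here
    exif.saveAttributes();
}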

Put image over video and save video to sd card Android

I'm working on a feature in which I want to add a picture over a video and save the result to the SD card.
In general, the user selects an image with a semi-transparent background and places that image over the video; after pressing the save button, they get a new video with the image overlaid on it.
I have heard about ffmpeg and have seen some of the commands it provides, but I don't know where I should start. Can anyone provide me an example of this?
Thank you.
One common approach is to use an ffmpeg wrapper to access ffmpeg functionality from your Android app.
There are several fairly well-used wrappers available on GitHub; the ones below are particularly well featured and documented. (Note: I have not used these, as they were not so mature when I was looking at this previously, but if I were doing something like this again now, I would definitely build on one of these.)
http://writingminds.github.io/ffmpeg-android-java/
https://github.com/guardianproject/android-ffmpeg
Using one of the well-supported and widely used libraries will take care of some common issues you might otherwise encounter: having to load different binaries for different processor types, and some tricky issues with native library reloading to avoid crashes on subsequent invocations of the wrapper.
Because this approach uses the standard ffmpeg command-line syntax, it also means you should be able to search for and find help easily on many different operations (anyone using ffmpeg in the 'normal' way will use the same syntax for the ffmpeg command itself).
For example, for your image-overlay case, here are some results from a quick search (ffmpeg syntax can change over time, so it is worth doing a current check):
https://stackoverflow.com/a/32250369/334402
https://superuser.com/a/678171
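To make this concrete, here is a rough sketch using the first wrapper linked above; the paths and the context variable are placeholders, and the class and method names should be checked against the wrapper's current documentation:

import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;

// Overlay watermark.png at position (10,10) on input.mp4 and write output.mp4,
// copying the audio stream unchanged.
String[] cmd = {
    "-i", "/sdcard/input.mp4",
    "-i", "/sdcard/watermark.png",
    "-filter_complex", "overlay=10:10",
    "-codec:a", "copy",
    "/sdcard/output.mp4"
};
try {
    FFmpeg.getInstance(context).execute(cmd, new ExecuteBinaryResponseHandler() {
        @Override public void onSuccess(String message) { /* the new video is saved */ }
        @Override public void onFailure(String message) { /* inspect the ffmpeg log */ }
    });
} catch (Exception e) {
    // e.g. a previous ffmpeg command is still running
}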

Best Tess-two configuration to get optimal recognition results?

I'm currently working on an Android app utilizing the open source OCR library Tesseract to make an app for receipt recognition. I've gotten the library working with the "Tess-two" fork of Tesseract. The problem I'm having is that the recognition is very inconsistent. Even when provided with a good image that is cropped properly, the recognition isn't great. I'd say that under what I would consider ideal conditions, the recognition is about 90% accurate. Under any number of sub-optimal conditions (dim lighting, blurry image, uncropped, etc.) I find that I'll often get virtually 0% accuracy.
For the purposes of my app, even 90% accuracy is pretty much unacceptable, as I need to be able to get the exact information and numbers from the receipt "perfectly", without needing to worry about improperly read information.
So my question: what is the best way to configure Tess-two to get the highest accuracy possible?
In a nutshell, this is what I have done to set up the library:
//Prior to running this code, I create the directory for /tessdata and copy my eng.traineddata file into it from the app's assets folder.
baseApi = new TessBaseAPI();
baseApi.init(DATA_PATH, "eng");
//setVariable only takes effect after init() has been called.
baseApi.setVariable("save_best_choices", "T");
baseApi.setVariable("tessedit_char_whitelist", "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ$.!?/,+=-*\"'<:&"); //I was experimenting with this to try and improve accuracy; it didn't seem to help tremendously.
baseApi.setImage(photo); //photo is a bitmap selected from the phone's gallery.
String tmp = baseApi.getUTF8Text();
Is there something here that I'm doing wrong, or that I could be doing better?
Are there any files other than eng.traineddata that I should be including? I know there are multiple files for each language, but honestly I couldn't figure out what was what and what actually needed to be included. From what I could gather, I got the only file that was needed.
Are there any other settings that I could/should be modifying with the "setVariable" function?
Additionally, does Tess-two have any built-in support for "deskewing" images, or for adjusting the contrast of provided images? I haven't experimented much with either of these techniques yet, but they would probably help, right?
Any help is appreciated!
If your Android app expects dictionary words, have a look at the minimum edit distance algorithm and apply it to the results returned by Tesseract.
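For illustration, here is a minimal sketch of the standard dynamic-programming edit distance; a receipt app could use it to snap each noisy OCR token to the nearest entry in a known vocabulary:

// Classic dynamic-programming edit distance (insertions, deletions,
// substitutions each cost 1).
static int editDistance(String a, String b) {
    int[][] d = new int[a.length() + 1][b.length() + 1];
    for (int i = 0; i <= a.length(); i++) d[i][0] = i;
    for (int j = 0; j <= b.length(); j++) d[0][j] = j;
    for (int i = 1; i <= a.length(); i++) {
        for (int j = 1; j <= b.length(); j++) {
            int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
            d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                               d[i - 1][j - 1] + cost);
        }
    }
    return d[a.length()][b.length()];
}

Picking the dictionary word with the smallest distance to each token (perhaps with a maximum-distance cutoff) is a cheap way to clean up systematic OCR confusions such as 0/O or 1/l.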

image decoding and manipulation using JNI on android

background
In some apps, it is important to handle large images without OOM, and to do so quickly.
For this, JNI (or RenderScript, which sadly lacks documentation) can be a nice solution.
In the past, I succeeded in using JNI to rotate huge bitmaps while avoiding OOM (links here, here and here). It was a nice (yet annoyingly hard) experience, but in the end it worked.
the problem
The Android framework has plenty of functions to handle bitmaps, but I have no idea what the situation is on the JNI side.
I already know how to pass a bitmap from Android's "Java world" to the "JNI world" and back.
What I don't know is which functions I can use on the JNI side to help me with bitmaps.
I wish to be able to do all image operations (including decoding) in JNI, so that I won't need to worry about OOM when presented with large images, and at the end of the process I can convert the data to a Java bitmap (to show the user) and/or write it to a file.
Again, I don't want to convert the data on the JNI side to a Java bitmap just to be able to run those operations.
As it turns out, there are some libraries that offer many functions (like JavaCV), but they are quite large and I'm not quite sure about their features or whether they really do the decoding on the JNI side, so I would prefer to know what is possible via the built-in JNI functions of Android instead.
the question
Which functions are available for image manipulation on the JNI side on Android?
For example, how could I run face detection on bitmaps, apply matrices, downsample bitmaps, scale bitmaps, and so on?
For some of the operations, I can already think of a way to implement them (scaling images is quite easy, and Wikipedia can help a lot), but some are very complex.
Even if I do implement the operations myself, maybe others have done it much more efficiently, given the many optimizations that C/C++ allows.
Am I really on my own when going to the JNI side of Android, where I need to implement everything from scratch?
Just to make it clear, what I'm interested in is:
input bitmap in Java -> image manipulation purely in JNI and C/C++ (no conversion to Java objects whatsoever) -> output bitmap in Java.
"built-in JNI function of Android" is kind of oxymoron. It's technically correct that many Android Framework Java classes use JNI somewhere down the chain to invoke native libraries.
But there are three reservations regarding this statement.
These are "implementation details", and are subject to change without notice in any next release of Android, or any fork (e.g. Kindle), or even OEM version which is not regarded a "fork" (e.g. built by Samsung, or for Quallcom SOC).
The way native methods are implemented in core Java classes is different from "classical" JNI. These methods are preloaded and cached by the JVM and therefore do not suffer from most of the overhead typical of JNI calls.
There is nothing your Java or native code can do to interact directly with the JNI methods of other classes, especially classes that constitute the system framework.
All this said, you are free to study the source code of Android, to find the native libraries that back specific classes and methods (e.g. face detection), and use these libraries in your native code, or build a JNI layer of your own to use these libraries from your Java code.
To give a specific example, face detection in Android is implemented through the android.media.FaceDetector class, which loads libFFTEm.so. You can look at the native code, and use it as you wish. You should not assume that libFFTEm.so will be present on the device, or that the library on device will have same API.
But in this specific case it's not a problem, because all of Neven's work is entirely software-based. Therefore you can copy this code in its entirety, or only the relevant parts of it, and make it part of your native library. Note that on many devices you can simply load and use /system/lib/libFFTEm.so and never feel discomfort, until you encounter a system that misbehaves.
One noteworthy conclusion you can draw from reading the native code is that the underlying algorithms ignore the color information. Therefore, if the image for which you want to find face coordinates comes from a YUV source, you can avoid a lot of overhead if you call
// run detection
btk_DCR_assignGrayByteImage(hdcr, bwbuffer, width, height);

int numberOfFaces = 0;
if (btk_FaceFinder_putDCR(hfd, hdcr) == btk_STATUS_OK) {
    numberOfFaces = btk_FaceFinder_faces(hfd);
} else {
    ALOGE("ERROR: Return 0 faces because error exists in btk_FaceFinder_putDCR.\n");
}
directly with your YUV (or Y) byte array, instead of converting it to RGB and back to YUV in android.media.FaceDetector.findFaces(). If your YUV buffer comes from Java, you can build your own class YuvFaceDetector, which would be a copy of android.media.FaceDetector with the only difference that YuvFaceDetector.findFaces() would take only Y (luminance) values instead of a Bitmap, avoiding the RGB-to-Y conversion.
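The Java-side shape of such a class might look like the sketch below; the class, method, and library names are hypothetical, and the native side would be your own build of the Neven code described above:

public class YuvFaceDetector {
    static {
        // Hypothetical name for your own build of the Neven face-detection code.
        System.loadLibrary("yuvfacedetector");
    }

    // Takes the luminance (Y) plane directly, so no RGB round-trip is needed.
    // Returns the number of faces found and writes their coordinates to outCoords.
    public native int findFaces(byte[] luminance, int width, int height,
                                float[] outCoords);
}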
Some other situations are not as easy as this one. For example, video codecs are tightly coupled to the hardware platform, and you cannot simply copy the code from libstagefright.so to your project. The JPEG codec is a special beast. In modern systems (IIRC, since 2.2), you can expect /system/lib/libjpeg.so to be present. But many platforms also have much more efficient HW implementations of JPEG codecs through libstagefright.so or OpenMAX, and often these are used in the android.graphics.Bitmap.compress() and android.graphics.BitmapFactory.decode***() methods.
And there is also the optimized libjpeg-turbo, which has its own advantages over /system/lib/libjpeg.so.
It seems that your question is more about C/C++ image processing libraries than it is about Android per se. To that end, here are some other StackOverflow questions that might have information you'd find useful:
Fast Cross-Platform C/C++ Image Processing Libraries
C++ Image Processing Libraries

Android game Image format

I have a problem with an image for an Android game. The problem is not with the code, because I took the code I use from a book (Beginning Android 4 Games Development).
The problem is this: I know the format I have to use on Android is PNG, but I don't know which settings I have to use for this format (like RGB565...), because if I simply use PNG, the images don't look good when I run the game. So I need someone to explain which settings I should use for images in Android games.
P.S. The software I used is Photoshop. If there is better software for this purpose, please tell me.
I think there is a strong misconception in your understanding of Android and how it implements graphics. You are not constrained to .png for nearly any of your development; the .png and .9.png formats are only strictly enforced for managing drawable constants.
Android uses Java and has the capability to utilize nearly any graphical format. In particular, native support for .bmp, .png, and .jpg is present on every device and Android OS version. You may even create your graphics in real time, byte by byte.
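For instance, if the RGB565 setting you mention is about memory use or color banding, the decode configuration can be chosen at load time. A minimal sketch (R.drawable.sprite is a placeholder resource name):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Decode a PNG resource into a 16-bit RGB565 bitmap, which halves memory
// use versus the default ARGB_8888 at the cost of color depth (and alpha).
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap sprite = BitmapFactory.decodeResource(getResources(), R.drawable.sprite, opts);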
As for a good image editor, there are a number out there. I often use a combination of GIMP and Photoshop, myself.
Hope this helps,
FuzzicalLogic
