I have implemented the ML Kit Vision Barcode Scanning API in a sample app and it works correctly. I would like to put it into my production app, but for that I need a way to add supported symbologies. Specifically, the GS1 DataBar types are not supported by ML Kit but are really common in production.
Q: How can I retrain the existing model for barcode scanning or build on top of it?
ML Kit Barcode scanner appears to be a proprietary model. The feature is listed as BETA so subject to the whims of Google.
This is a beta release of ML Kit for Firebase. This API might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy.
If you are looking at ZXing, see: GS1 structure data parsing using the ZXing barcode library
After reaching out to Google/Firebase support, here is what I found out:
Unfortunately, it wouldn't be possible for you to re-train that model
to work with GS1 barcodes, since the general specifications differ
from one to another.
They promised to pass it as a feature request to the development team though.
So I guess the only solution is to create a custom model for barcode scanning and retrain it whenever needed after that.
@Morrison Chang provided some useful links for a pure-ML solution to this problem in the comments on his answer.
ZXing is an option, but you cannot retrain the model, and it is only in maintenance mode now with no official support for iOS (I believe there is a third-party Objective-C port).
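If you do fall back to ZXing (or any decoder that returns the raw GS1 element string), you still have to parse the GS1 Application Identifiers yourself. Here is a minimal, illustrative sketch covering just three common AIs (01 GTIN, 17 expiry, 10 batch); it assumes FNC1 separators arrive as the ASCII GS character (0x1D), and the AI table is deliberately incomplete:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal, illustrative parser for a GS1 element string (as returned by a
 * decoder such as ZXing). Only a few common Application Identifiers are
 * handled; a real parser would need the full GS1 AI table.
 */
public class Gs1Parser {
    private static final char GS = '\u001D'; // FNC1 separator as decoded

    public static Map<String, String> parse(String data) {
        Map<String, String> result = new LinkedHashMap<>();
        int i = 0;
        while (i < data.length()) {
            String ai = data.substring(i, i + 2);
            i += 2;
            switch (ai) {
                case "01": // GTIN: fixed length 14
                    result.put(ai, data.substring(i, i + 14));
                    i += 14;
                    break;
                case "17": // expiry date YYMMDD: fixed length 6
                    result.put(ai, data.substring(i, i + 6));
                    i += 6;
                    break;
                case "10": { // batch/lot: variable length, ends at GS or end of data
                    int end = data.indexOf(GS, i);
                    if (end < 0) end = data.length();
                    result.put(ai, data.substring(i, end));
                    i = (end < data.length()) ? end + 1 : end;
                    break;
                }
                default:
                    throw new IllegalArgumentException("Unsupported AI: " + ai);
            }
        }
        return result;
    }
}
```

For example, a (made-up) element string like `"01095060001343521725123110ABC123"` would split into GTIN `09506000134352`, expiry `251231`, and batch `ABC123`.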
How can I add an AI recommendation feature to my Android Java app? My objective is to let the app recommend other related products based on the user's preferences.
For Android app ML, I would recommend checking out Google ML Kit.
I don't know much about Android development, but their ML Kit seems like a good start.
EDIT: I found this as well; hopefully it will help, and you can toggle between Kotlin and Java: Generate smart replies with ML Kit on Android
This task is usually performed on the backend, where all the data and the user's search history are available.
Later this is exposed through some sort of HTTP / GraphQL / gRPC endpoint, which the Android app queries over the internet.
Finally, the Android app loops through the data received from the backend and displays a list of recommended items.
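To make the backend step concrete, here is a sketch of one of the simplest recommendation techniques: item-to-item co-occurrence ("users who viewed this also viewed..."). The class name, method, and the data shape (a list of per-user product histories) are all hypothetical illustrations, not any particular library's API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Illustrative item-to-item recommender: suggests the products that most
 * often co-occur with a given product across user histories.
 */
public class CoViewRecommender {
    public static List<String> recommend(String product,
                                         List<List<String>> histories, int k) {
        // Count how often each other product appears alongside `product`.
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> history : histories) {
            if (!history.contains(product)) continue;
            for (String other : history) {
                if (!other.equals(product)) counts.merge(other, 1, Integer::sum);
            }
        }
        // Return the top-k co-occurring products, most frequent first.
        List<Map.Entry<String, Integer>> ranked = new ArrayList<>(counts.entrySet());
        ranked.sort((a, b) -> b.getValue() - a.getValue());
        List<String> top = new ArrayList<>();
        for (int i = 0; i < Math.min(k, ranked.size()); i++) {
            top.add(ranked.get(i).getKey());
        }
        return top;
    }
}
```

A real backend would add weighting, recency, and scale concerns, but the app-side contract stays the same: the endpoint returns an ordered list of product IDs, and the app just renders them.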
I have just started exploring Firebase ML Kit by Google to test out its face recognition capabilities. I tried the official samples and they work well.
Though, according to its official documentation, we can schedule an install-time download of the required ML models, I need a way to pre-install the models onto the Android device itself, so my app can use them in offline scenarios (without internet).
If there is one, it would be of great help for my use case.
Thanks.
As of now, with ML Kit you cannot pre-install the face detection models onto the device in that manner. As you mentioned, the models can be downloaded at install time, but the question indicates that you want more than that, i.e. there is no internet even during install time. With no internet, the app itself cannot be downloaded and installed, which will limit your distribution.
UPDATE
[Confirmed from the comments that the user wants the models to be available offline even without downloading once during install time.]
As of now, that is not supported for built-in models like face detection.
However, if you use custom TFLite models (i.e. bring your own model as opposed to using the built-in ones) with ML Kit, then you can bundle the model within your app when you build it on your desktop and distribute it manually as you suggested. Here is the documentation for the custom model API, which also contains links to quickstart apps for Android / iOS.
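As a sketch of the bundling step: the custom model documentation suggests shipping the `.tflite` file in your app's assets and telling Gradle not to compress it, so it can be memory-mapped at runtime. In the module-level build.gradle this looks roughly like:

```groovy
android {
    // Keep the bundled model uncompressed so it can be memory-mapped.
    aaptOptions {
        noCompress "tflite"
    }
}
```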
I am building an Android app in which seven-segment digits need to be recognized from a picture and populated on screen after processing the data.
This needs to happen offline, so it needs to run on the mobile device.
I have looked at Tesseract, but it makes the app size considerably larger, hence I would like to stick to ML Kit on Firebase.
Is there a way to add seven segment digit recognition in an existing ML Kit text vision API?
Is there a way to add seven segment digit recognition in an existing ML Kit text vision API?
You cannot add it directly; we would have to update the text recognition model on our side. That said, things like driver's licenses do work with ML Kit text recognition. Have you tried running the quickstart sample app or the codelab on your use case? If your use case does not work out, please feel free to reach out to Firebase Support and we will be happy to understand your use case and update the model.
Another option to consider is training and using your own custom model with ML Kit. You could look at TF Hub and do transfer learning rather than training from scratch.
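If you do go custom, it may help that seven-segment digits are not free-form glyphs: each digit is a fixed on/off pattern of seven segments, so once your image pipeline decides which segments are lit (e.g. by thresholding and sampling seven regions — that upstream step is assumed here, not shown), classification reduces to a table lookup. A small illustrative sketch:

```java
/**
 * Maps seven-segment on/off states to a digit. Segment order follows the
 * conventional labeling a..g: a=top, b=top-right, c=bottom-right, d=bottom,
 * e=bottom-left, f=top-left, g=middle. Detecting which segments are lit in
 * the image is assumed to happen upstream.
 */
public class SevenSegment {
    // Bit order: a b c d e f g (a is the most significant of 7 bits).
    private static final int[] DIGITS = {
        0b1111110, // 0
        0b0110000, // 1
        0b1101101, // 2
        0b1111001, // 3
        0b0110011, // 4
        0b1011011, // 5
        0b1011111, // 6
        0b1110000, // 7
        0b1111111, // 8
        0b1111011  // 9
    };

    /** segments[0]=a .. segments[6]=g; returns -1 for an invalid pattern. */
    public static int decode(boolean[] segments) {
        int pattern = 0;
        for (boolean s : segments) pattern = (pattern << 1) | (s ? 1 : 0);
        for (int d = 0; d < DIGITS.length; d++) {
            if (DIGITS[d] == pattern) return d;
        }
        return -1;
    }
}
```

For example, all seven segments lit decodes to 8, and only the two right-hand segments (b, c) lit decodes to 1.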
One of my Android projects requires an OCR feature to read the MICR code on a bank cheque leaf. We have tried a sample native Android application which scans the page and reads most font types, but when scanning the MICR code, the app cannot read the number and returns an entirely different one. Please suggest whether any OCR scanning feature is available in the MobileFirst platform, and if possible share sample code. Is it possible to read a MICR code through an OCR scanner?
This is not really related to MobileFirst Platform. MobileFirst provides you with an SDK to connect to backend systems through a security layer via Adapters, and an application structure if you've created a Hybrid app.
However the client-side of the application is Cordova based. As such you can use Cordova plug-ins to add missing functionality. For example, this OCR scanner plug-in: https://github.com/rmtheis/android-ocr
If you created a Native app, then this is completely decoupled from MobileFirst and you need to find native libraries that do what you want, or write one on your own.
Is there a good QR decoding script or plugin for Unity3D (Android and iOS)?
Or has someone already successfully integrated ZXing into Unity3D for Android and iOS? Here is a good solution for the webcam, but WebCamTexture does not always work on Android :(
I am grateful for any help.
There is a non-Free ($50) plugin available: Antares QR Code
If you're not interested in paying for a plugin then you'll have to create your own. Since ZXing is available for both iOS and Android you can create C# wrappers for it and then use a native plugin on iOS and the C#-to-Java extensions on Android to get what you need.
There is also another plugin available for barcodes and QR codes on both Android and iOS: Easy Code Scanner.
You just have to call a single method (a common C# API for Android and iPhone) and it automatically launches a camera view/preview that decodes the barcode/QR code and gives you back its literal string in a callback. It is based on ZBar, and you have nothing to integrate; everything is already self-packaged.
The plugin can give you back the picture taken during the preview (as a Texture2D/Image) and also decode directly in the scripts a Texture2D/image without camera preview/shot.
The blog that the OP linked to now has a free option for Android devices here. You can also check out this related video:
ARCamera prefab in Unity Tutorial
You may need to fix some minor compile errors to get everything working due to newer versions of Vuforia having different implementations.
You can also use free metaio SDK which has built in support for QR codes reading.