com.google.android.gms:play-services-vision not functioning in Sudan - android

We have a few Android apps that use the com.google.android.gms:play-services-vision OCR library. It seems to work fine in many countries, but for some odd reason it does not activate in Sudan. The app directs the video stream from the camera to Vision, receives the recognized strings, and displays them in an overlay on top of the video stream. Out of the many countries where the app works just fine, Sudan is the only one where Vision OCR fails.
Any idea what could be the reason?
Example app: https://play.google.com/store/apps/details?id=com.arl.shipping.gateexecutor. Install it, fill in a form to get a pairing code, and tap the icon with the shipping container. This should invoke the OCR, which looks for a valid shipping container number in the camera view.
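For context, here is a minimal sketch of the pipeline described above, using the play-services-vision TextRecognizer (the class name is illustrative). isOperational() is the documented way to detect that the recognizer's native OCR module has not yet been downloaded by Google Play services; if that download is blocked or unavailable in a given region, the detector never becomes operational, which would match the symptom described:

```java
import android.content.Context;
import android.util.Log;
import android.util.SparseArray;

import com.google.android.gms.vision.CameraSource;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.text.TextBlock;
import com.google.android.gms.vision.text.TextRecognizer;

public class OcrSetup {
    public static CameraSource start(Context context) {
        TextRecognizer recognizer = new TextRecognizer.Builder(context).build();
        if (!recognizer.isOperational()) {
            // The native OCR library has not been downloaded (yet); the
            // recognizer will silently return no results until it is.
            Log.w("OcrSetup", "Text recognizer not operational; "
                    + "OCR module may not have been downloaded.");
        }
        recognizer.setProcessor(new Detector.Processor<TextBlock>() {
            @Override public void release() {}
            @Override public void receiveDetections(Detector.Detections<TextBlock> detections) {
                SparseArray<TextBlock> blocks = detections.getDetectedItems();
                for (int i = 0; i < blocks.size(); i++) {
                    // Feed recognized strings to the overlay here.
                    Log.d("OcrSetup", blocks.valueAt(i).getValue());
                }
            }
        });
        // Route the camera preview into the recognizer.
        return new CameraSource.Builder(context, recognizer)
                .setAutoFocusEnabled(true)
                .build();
    }
}
```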

Related

Can two apps use the camera at the same time on an Android phone?

Or is it impossible no matter how the apps are built or coded?
I am asking about Android 5.0, which was released in 2014.
I mean: if one app is using the camera in the "background", can another camera app use it at the same time?
Some people say it depends on how the applications are coded.
Also, the app "Sound Assistant" lets two music apps use the speaker at the same time. Is the same possible for the camera?
And I saw a comment saying:
“Our current framework does provide limited support for multi-app access to the camera.
We allow one (and only one) "controlling" app to the camera, but an arbitrary number of "shared" apps to access the same camera.
There are some limitations for "shared" apps:
No camera controls (Exposure/White Balance/Focus, etc.).
No media type selection (can't choose VGA vs. 720p vs. 1080p, etc.).
Only access to a video stream by default; photo pins are blocked (any photo operation will use a video frame instead).
The "controlling" app decides what media type to use and can set any camera control. Any of the sharing apps can register to be notified if a controlling app releases control of the camera, at which point the sharing app can re-open the camera in controlling mode.
The mechanism described above does not require any copying of the captured frame, so the overhead is minimal.”
What does this comment mean? Does it say that two apps can access the camera at the same time in Android after all?
I also want to know whether this becomes possible if the phone is rooted.
Thanks for reading!
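For what it's worth, the comment quoted above appears to describe a different platform's camera framework, not Android's. On Android, the camera is normally exclusive to one client at a time, and the closest analogue to "being notified when the controlling app releases the camera" is camera2's availability callback. A minimal sketch, assuming a camera2 device (API 21+, i.e. Android 5.0); the class name is illustrative:

```java
import android.content.Context;
import android.hardware.camera2.CameraManager;
import android.os.Handler;
import android.os.Looper;
import android.util.Log;

public class CameraWatcher {
    // Logs when another client grabs or releases a camera. Open attempts
    // made while another app holds the camera fail, so waiting for
    // onCameraAvailable is the usual pattern.
    public static void watch(Context context) {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        manager.registerAvailabilityCallback(new CameraManager.AvailabilityCallback() {
            @Override public void onCameraAvailable(String cameraId) {
                Log.d("CameraWatcher", "Camera " + cameraId + " released; safe to open.");
            }
            @Override public void onCameraUnavailable(String cameraId) {
                Log.d("CameraWatcher", "Camera " + cameraId + " in use by another client.");
            }
        }, new Handler(Looper.getMainLooper()));
    }
}
```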

Activating Users' mobile phone camera Documents mode in App

Background
I am building an Optical Character Recognition (OCR) tool that makes sense of photographed forms.
Arguably the most complicated part of the pipeline is getting the target document into perspective; basically what is attempted in this tutorial.
The data is often acquired in very poor conditions, e.g.:
Uncontrolled brightness
Covered or missing corners
Background containing more texture than the target document
Shadows
Overlapping documents
I have "solved" the problem using instance + semantic segmentation.
Situation
The images are uploaded by users via an app that captures them as-is. The app has versions for both Android and iOS.
Question
Would it be possible to force the app to use the phone's camera "documents mode" (if present) before acquiring the photo?
The objective is to simplify the pipeline.
In effect, at a descriptive level, the app would have to do three things:
1 - Activate the documents mode.
2 - Outline the target document, if possible even showing the yellow frame.
3 - Upload the processed file to the server. Orientation and file extension are not important.
iOS
This isn't a "mode" for the native camera app.
Android
There isn't a way to have the "documents mode" automatically selected. It isn't available on all Android devices, either, so even if you could, it wouldn't be reliable.
Your best bet is to follow the documentation for building a camera app, rather than using the native camera, if special document scanning is essential. This won't come out of the box on either platform.
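As a sketch of that "build your own camera" route, here is roughly what the capture side could look like with CameraX (assuming the androidx.camera dependencies; the class and method names of the wrapper are illustrative). Detecting and outlining the document, the "yellow frame", would still be your own processing drawn over the preview:

```java
import android.content.Context;

import androidx.camera.core.CameraSelector;
import androidx.camera.core.ImageCapture;
import androidx.camera.core.Preview;
import androidx.camera.lifecycle.ProcessCameraProvider;
import androidx.camera.view.PreviewView;
import androidx.core.content.ContextCompat;
import androidx.lifecycle.LifecycleOwner;

import com.google.common.util.concurrent.ListenableFuture;

public class DocumentCamera {
    // Binds a live preview plus still capture; document outlining would be
    // rendered on top of the PreviewView by your own detector.
    public static void bind(Context context, LifecycleOwner owner, PreviewView view) {
        ListenableFuture<ProcessCameraProvider> future =
                ProcessCameraProvider.getInstance(context);
        future.addListener(() -> {
            try {
                ProcessCameraProvider provider = future.get();
                Preview preview = new Preview.Builder().build();
                preview.setSurfaceProvider(view.getSurfaceProvider());
                ImageCapture capture = new ImageCapture.Builder().build();
                provider.unbindAll();
                provider.bindToLifecycle(owner,
                        CameraSelector.DEFAULT_BACK_CAMERA, preview, capture);
            } catch (Exception e) {
                // The provider future should already be complete inside the listener.
                throw new RuntimeException(e);
            }
        }, ContextCompat.getMainExecutor(context));
    }
}
```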

How to configure front and back both cameras simultaneously in Android camera2 API?

I want to configure both the front and back cameras with the Android camera2 API to take pictures and videos from both cameras simultaneously. I have created two TextureViews. Whenever I open one camera (front or back), my code works fine, but whenever I try to open both cameras simultaneously, the code breaks on creating the session: I get a CameraAccessException: "configure stream: method not implemented".
I want to save both front and back camera captured images as one image and both video as one video.
It would be very helpful if you could post some sample code or a link to a sample.
I am using a OnePlus 6. I recently downloaded an app, "Dual camera fron back Camera", and with it I am able to capture images from the front and back cameras at the same time. So if somebody wants to suggest there is no hardware support, that may be valid for other phones, but in my case I think I am missing something in the code. So far, from searching Google, it looks like there is some problem with session creation for the second camera. I debugged my code, and it fails during creation of the second camera's session, so if you have any idea about that, please share.
Thanks
Rakesh
The camera API is fine with it, but most Android devices do not have sufficient hardware resources to run both cameras at once, so you will generally get an error trying to open the second camera.
Both image sensors are typically connected to the same image signal processor (ISP), and that ISP can only operate one camera at a time. Some high-end devices have ISPs with multiple processing pipelines which can in theory run more than one camera at a time, but they often require using multiple pipelines to handle advanced functionality or very high resolutions for the main (back) camera.
So on those devices, multiple cameras may be usable at once, but not at maximum resolution, or with other similar restrictions.
Some manufacturers include multi-camera features in their own camera app, since they know exactly what the limitations are and can write application code to work within them. They may not make multi-camera available to normal apps, due to concerns about performance, thermal limits, or just lack of time to verify more than the exact use case they implement in their own app.
The Android camera API does not currently have a way to query if multiple cameras can be used at once, or if they can be, what the restrictions are. So the only thing you can do is try, and handle the errors in case that isn't feasible.
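A minimal sketch of that try-and-handle-errors approach with camera2 (the class name is illustrative; the CAMERA runtime permission is assumed to be granted). Note that failure can also surface later, as in the question, when creating the second capture session rather than when opening the device:

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.os.Handler;
import android.os.Looper;
import android.util.Log;

public class DualCameraOpener {
    // Tries to open a second camera while the first is already open, and
    // treats failure as "not supported on this device" rather than a bug.
    public static void openSecond(Context context, String secondCameraId)
            throws CameraAccessException {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        manager.openCamera(secondCameraId, new CameraDevice.StateCallback() {
            @Override public void onOpened(CameraDevice camera) {
                Log.d("DualCamera", "Second camera opened; create its session now.");
            }
            @Override public void onDisconnected(CameraDevice camera) {
                camera.close();
            }
            @Override public void onError(CameraDevice camera, int error) {
                camera.close();
                if (error == ERROR_MAX_CAMERAS_IN_USE) {
                    Log.w("DualCamera", "Device cannot run both cameras at once; "
                            + "fall back to a single camera.");
                } else {
                    Log.w("DualCamera", "Second camera failed with error " + error);
                }
            }
        }, new Handler(Looper.getMainLooper()));
    }
}
```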

Using Android's MediaRouter to cast the device screen to a Fire TV Stick or a client app?

I am trying to mirror-cast from my own app to a Fire TV Stick that is connected to the television. It has an option to mirror the display. My phone can connect to the Fire TV Stick this way, but I would like to mirror something with a smaller resolution, and even if I change my phone's resolution using adb, I think it sends the native resolution anyway.
I looked into MediaRouter and MediaRouteProvider. I also downloaded the MediaRouter sample whose snippets are used in the documentation. The sample ran but didn't work. This API is super complex and has so many things in it. I am not sure how to build a simple app that casts video (and later the phone's screen) to another device, either to the Amazon Fire TV Stick mirror display or at least to a client app I will also write.
I couldn't find compact enough samples to do what I want. Do you have any idea where there is a sample that works and is not a massive amount of code?
I couldn't make it work following the documentation.
Instead of finding something in the API to do the Miracast for me, I was able to just read pixel data from the MediaProjection and VirtualDisplay and send it over sockets.
It wasn't easy: I had to use a GLES11Ext.GL_TEXTURE_EXTERNAL_OES texture from the SurfaceTexture, render that into my own offscreen GL_TEXTURE_2D, and then read that using glReadPixels and the attached framebuffer.
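For reference, a rough sketch of the MediaProjection/VirtualDisplay capture half of that approach. For brevity it uses an ImageReader instead of the GL external-texture path described above (simpler, but involves a pixel copy); the socket-sending side is omitted, and the MediaProjection is assumed to have been obtained via MediaProjectionManager's screen-capture consent intent:

```java
import android.graphics.PixelFormat;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.Image;
import android.media.ImageReader;
import android.media.projection.MediaProjection;
import android.os.Handler;
import android.os.Looper;

public class ScreenCapture {
    // mediaProjection comes from MediaProjectionManager.getMediaProjection()
    // after the user approves the createScreenCaptureIntent() dialog.
    public static VirtualDisplay start(MediaProjection mediaProjection,
                                       int width, int height, int dpi) {
        ImageReader reader = ImageReader.newInstance(width, height,
                PixelFormat.RGBA_8888, 2);
        reader.setOnImageAvailableListener(r -> {
            Image image = r.acquireLatestImage();
            if (image != null) {
                // image.getPlanes()[0].getBuffer() holds the RGBA pixels;
                // this is the point where frames would be sent over a socket.
                image.close();
            }
        }, new Handler(Looper.getMainLooper()));
        // Route the mirrored screen into the ImageReader's surface.
        return mediaProjection.createVirtualDisplay("mirror", width, height, dpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                reader.getSurface(), null, null);
    }
}
```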

Android Photo Scanning

I am looking to develop an application that will allow the user to launch the camera and use it to scan what the camera sees for a certain logo or word.
I have had some experience using the Zebra barcode scanning library so I am able to launch the camera and scan for a barcode fine.
The issue comes when I try to scan for a logo. Is it at all possible to scan for the logo in real time with the camera, or would I have to have the user take a picture, then scan the newly taken picture and compare it with the logo being searched for?
Scanning for a logo is an entirely different problem. The Zebra library has been designed specifically to recognize barcodes and QR codes; it won't be able to interpret anything outside that scope. You can scan for logos in real time or in the cloud. Amazon does this with their shopping application so that you can research what you find at the supermarket or at a friend's house.
There are companies that can offer you Image Recognition as a Service:
https://www.clarifai.com
or you can look into your own open source implementation
opencv.org/
There are examples of OpenCV for Android, but you're going to need a bit of background in machine learning and computer vision before you get started.
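As one illustration of the open-source route, here is a rough sketch of logo matching with OpenCV's Java bindings using ORB features. The class name and the distance cutoff are illustrative, and a real detector would add homography verification and tuning:

```java
import org.opencv.core.DMatch;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;

public class LogoMatcher {
    private final ORB orb = ORB.create();
    private final DescriptorMatcher matcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    private final MatOfKeyPoint logoKeypoints = new MatOfKeyPoint();
    private final Mat logoDescriptors = new Mat();

    public LogoMatcher(Mat logoImage) {
        // Precompute features for the reference logo once.
        orb.detectAndCompute(logoImage, new Mat(), logoKeypoints, logoDescriptors);
    }

    // Counts close descriptor matches against one (grayscale) camera frame;
    // a threshold on this count is a crude "logo present" test.
    public int matchCount(Mat grayFrame) {
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        Mat descriptors = new Mat();
        orb.detectAndCompute(grayFrame, new Mat(), keypoints, descriptors);
        if (descriptors.empty()) return 0;
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(logoDescriptors, descriptors, matches);
        int good = 0;
        for (DMatch m : matches.toArray()) {
            if (m.distance < 40) good++;  // Hamming cutoff; tune empirically.
        }
        return good;
    }
}
```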
