2014 FCC Closed Captioning Requirements - android

New FCC requirements that become effective January 1, 2014 require certain video content to have customizable closed captioning (font, font size, text color, opacity, etc.). See this article:
http://www.insidetechmedia.com/2012/08/22/fcc-extends-online-closed-captioning-user-control-mandate-until-january-2014/
Does anyone have any idea whether such support is in the works for the built-in Android video player in the coming Android releases? And are there third-party libraries with such support? Thanks!

We've encountered the same requirement at Flixster, as our app provides a full-length streaming service. KitKat (4.4) added some support for closed captioning, but it's limited to the WebVTT caption format and to KitKat devices.
We had to develop a solution tailored to our needs, which included the ability to parse the SMPTE-TT/TTML caption format, customize caption preferences on pre-KitKat devices, and render advanced closed captions on top of a VideoView.
This has been integrated into our app, and we've made the captioning library open source, so check it out and see if it can help your app in any way!
https://github.com/flixster/flixster-android-closedcaptions
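For a sense of what the parsing half of that involves: TTML/SMPTE-TT is namespace-aware XML, so pulling out caption cues is mostly a tree walk. Here is a minimal, language-agnostic sketch in Python (the function name is mine; the Flixster library does the equivalent in Java, and real SMPTE-TT adds styling, regions, and timing profiles this sketch ignores):

```python
# Minimal sketch: extract caption cues from a TTML document using only the
# standard library. Real-world TTML is considerably richer than this.
import xml.etree.ElementTree as ET

TTML_NS = "{http://www.w3.org/ns/ttml}"

def parse_ttml_cues(ttml_text):
    """Return a list of (begin, end, text) tuples from a TTML document."""
    root = ET.fromstring(ttml_text)
    cues = []
    for p in root.iter(TTML_NS + "p"):
        text = "".join(p.itertext()).strip()
        cues.append((p.get("begin"), p.get("end"), text))
    return cues

sample = """<tt xmlns="http://www.w3.org/ns/ttml">
  <body><div>
    <p begin="00:00:01.000" end="00:00:03.000">Hello, world.</p>
    <p begin="00:00:04.000" end="00:00:06.500">Second caption.</p>
  </div></body>
</tt>"""
```

The rendering side (measuring text, drawing the styled cues over the video surface) is where most of the per-device work lives; the parse itself is the easy part.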

First, note that if you read the article, "Only online-delivered full-length programming that previously appeared on television with captions in the United States is covered by the new rules".
Second, text tracks have been supported in Android since Android 4.1, though the feature suffers from limited documentation and, apparently, some bugs.
Third, as with pretty much everything in Android, you will find out what "is in the works" when we do, which will be when updates are released. Google does not usually say much in advance about a release, except maybe what "tasty treat" is the code name.
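For reference on that second point: the 4.1 timed-text support (MediaPlayer.addTimedTextSource()) consumes SubRip (.srt) tracks, as far as the public API goes. The format itself is trivial; here is a language-agnostic Python sketch of its structure (the helper name is mine):

```python
# Hedged sketch: parse SubRip (.srt) cues into (start_ms, end_ms, text)
# tuples, i.e. the kind of track Android 4.1's timed-text API deals in.
import re

def parse_srt(srt_text):
    """Return a list of (start_ms, end_ms, text) tuples."""
    cues = []
    pattern = re.compile(
        r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})")
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        m = pattern.match(lines[1])  # line 0 is the numeric cue index
        if not m:
            continue
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        start = ((h1 * 60 + m1) * 60 + s1) * 1000 + ms1
        end = ((h2 * 60 + m2) * 60 + s2) * 1000 + ms2
        cues.append((start, end, "\n".join(lines[2:])))
    return cues

sample = """1
00:00:01,000 --> 00:00:02,500
Hello

2
00:00:03,000 --> 00:00:04,000
World"""
```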


Activating the user's mobile phone camera's document mode in an app

Background
I am building an Optical Character Recognition (OCR) tool that makes sense of photographed forms.
Arguably the most complicated part of the pipeline is getting the target document into perspective; basically what is attempted in this tutorial.
The data is often acquired in very poor conditions, e.g.:
Uncontrolled brightness
Covered or missing corners
Backgrounds containing more texture than the target document
Shadows
Overlapping documents
I have "solved" the problem using instance + semantic segmentation.
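The "into perspective" step above can be sketched concretely: given the four detected corners of a document, estimate the homography that maps them onto an axis-aligned page. This is a minimal pure-NumPy sketch (the corner coordinates are hypothetical); in practice OpenCV's cv2.getPerspectiveTransform / cv2.warpPerspective do this against real images:

```python
# Estimate the 3x3 homography H mapping four source corners to four
# destination corners, by solving the standard 8x8 DLT linear system.
import numpy as np

def homography_from_corners(src, dst):
    """Solve for H such that H @ [x, y, 1] ~ [u, v, 1] for each pair."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical corners of a skewed document photo, mapped to a 200x300 page.
src = [(10, 20), (190, 30), (200, 310), (5, 290)]
dst = [(0, 0), (200, 0), (200, 300), (0, 300)]
H = homography_from_corners(src, dst)
```

With H in hand, warping every pixel through it yields the rectified document, which is what the segmentation approach is ultimately feeding.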
Situation
The images are uploaded by users via an app that captures them as-is. There are versions of the app for both Android and iOS.
Question
Would it be possible to force the app to use the phone's document mode (if present) before acquiring the photo?
The objective is to simplify the pipeline.
In effect, at a descriptive level, the app would have to do three things:
1 - Activate document mode
2 - Outline the target document; if possible, even showing the yellow frame.
3 - Upload the processed file to the server. Orientation and file extension are not important.
iOS
This isn't a "mode" for the native camera app.
Android
There isn't a way to have "document mode" selected automatically. It isn't available on all Android devices either, so even if you could, it wouldn't be reliable.
Your best bet is to follow the documentation for building a camera app, rather than using the native camera, if special document scanning is essential. This won't come out of the box for you on either platform.

Image Processing: API to classify text based on font-type and size

I am looking for an API which can take images as input and classify/identify the text in the images based on font type and font size. The images are screenshots of screens in a mobile app, and hence contain clean, pixel-perfect fonts; they are not distorted like handwritten text or photographs of printed documents.
I went through a few of the available APIs, like the Google Vision API, but could not find a solution.
You are talking about Optical character recognition (OCR).
I used the IBM Visual Recognition API a couple of years ago, but I've heard that the text recognition feature is not available any more.
Check out this list of OCR APIs:
https://www.programmableweb.com/category/ocr/api
I'm sure that you will find it useful.
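One caveat on those APIs: generic OCR returns per-word bounding boxes rather than font names. For clean screenshots, a workable proxy for "classify by font size" is to bucket words by box height, whichever OCR service produced the boxes. A sketch under that assumption (the word/height input format below is hypothetical, not any particular API's schema):

```python
# Group OCR'd words into font-size classes by their bounding-box height.
# Words within `tolerance` pixels of an existing class join that class.
def bucket_by_font_size(words, tolerance=2):
    """words: list of (text, height_px). Returns size classes, largest first."""
    classes = []  # each class: {"height": representative px, "words": [...]}
    for text, height in sorted(words, key=lambda w: -w[1]):
        for cls in classes:
            if abs(cls["height"] - height) <= tolerance:
                cls["words"].append(text)
                break
        else:
            classes.append({"height": height, "words": [text]})
    return classes

# Hypothetical OCR output from an app screenshot: (word, box height in px).
sample = [("Title", 32), ("Subtitle", 20), ("body", 13),
          ("text", 12), ("here", 13)]
```

Font *type* is harder; that generally needs a classifier trained on glyph crops, which none of the general-purpose OCR APIs expose directly.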

How to develop a Smart Magnifier using the mobile camera?

I would like to use the mobile camera to develop a smart magnifier that can zoom and freeze-frame what we are viewing, so that we don't have to keep holding the device steady while we read. It should also be able to change colors, as shown in the image at the link below.
https://lh3.ggpht.com/XhSCrMXS7RCJH7AYlpn3xL5Z-6R7bqFL4hG5R3Q5xCLNAO0flY3Fka_xRKb68a2etmhL=h900-rw
Since I'm new to Android, I have no idea how to start. Do you have any ideas?
Thanks in advance for your help :)
I've done something similar and published it here. I have to warn you, though: this is not a task to start Android development with. Not because of the development skills required; the showstopper here is the need for a massive number of devices to test on.
Basically, two reasons:
The Camera API is quite complicated, and different hardware devices behave differently. Forget about using the emulator; you would need a bunch of real hardware devices.
There is a new API, Camera2, for platform 21 and higher, and the old Camera API is deprecated (in a kind of "limbo" state).
I have posted some custom Camera code on GitHub here, to show some of the hurdles involved.
So the easiest way out in your situation would be to use the camera-intent approach, and when you get your picture back (it is a JPEG file), just decompress it and zoom in to the center of the resulting bitmap.
Good Luck
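The "decompress and zoom in to the center" step amounts to a center crop plus a nearest-neighbour upscale. A language-agnostic sketch on a plain 2D list standing in for the decoded bitmap (on Android you would decode with BitmapFactory and scale via Bitmap.createBitmap or a Canvas):

```python
# Zoom into the central 1/factor portion of an image, scaled back up to
# the original dimensions with nearest-neighbour sampling.
def center_zoom(pixels, factor):
    """pixels: 2D list of pixel values; factor: integer zoom level."""
    h, w = len(pixels), len(pixels[0])
    ch, cw = h // factor, w // factor          # size of the central window
    top, left = (h - ch) // 2, (w - cw) // 2   # where the window starts
    return [[pixels[top + (y * ch) // h][left + (x * cw) // w]
             for x in range(w)]
            for y in range(h)]

# A tiny 4x4 "bitmap" of pixel values, zoomed 2x into its center.
tiny = [[0, 1, 2, 3],
        [4, 5, 6, 7],
        [8, 9, 10, 11],
        [12, 13, 14, 15]]
zoomed = center_zoom(tiny, 2)
```

The color-change feature from the question (high-contrast/inverted modes) is then a per-pixel mapping over the same bitmap before display.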

Fingerprint Scanner using Camera [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I am working on a fingerprint scanner using the camera (or without one). How feasible is it, and what would its success rate be? I came across an open-source SDK named FingerJetFX, which claims feasibility with Android too.
The FingerJetFX OSE fingerprint feature extractor is platform-independent and can be built
for, with appropriate changes to the make files, and run in environments with or without
operating systems, including
Linux
Android
Windows
Windows CE
various RTOSs
But I'm not sure whether a fingerprint scanner is possible or not. I downloaded the SDK and have been digging through it, but no luck; I didn't even find any steps for integrating the SDK. So I have a few questions, listed below.
I'm looking for suggestions and guidance:
Can a fingerprint scanner be implemented in Android using the camera, or without one?
Can I achieve my goal with the help of FingerJetFX?
If the answer to the second question is yes, can someone give me some sort of steps to integrate the SDK in Android?
Your suggestions are appreciated.
Android Camera-Based Solutions:
As someone who's done significant research on this exact problem, I can tell you it's difficult to get a suitable image for templating (feature extraction) using a stock camera found on any current Android device. The main debilitating issue is achieving significant contrast between the finger's ridges and valleys. Commercial optical fingerprint scanners (which you are attempting to mimic) typically achieve the necessary contrast through frustrated total internal reflection in a prism.
In this case, light from the ridges contacting the prism is transmitted to the CMOS sensor, while light from the valleys is not. You're simply not going to reliably get the same kind of results from an Android camera, but that doesn't mean you can't get something usable under ideal conditions.
I took the image on the left with a commercial optical fingerprint scanner (a Futronics FS80) and the one on the right with a normal camera (a 15 MP Canon DSLR). After cropping, inverting (to match the other scanner's convention), contrast-stretching, etc., the camera image gave the following results.
The low contrast of the camera image is apparent.
But the software is able to accurately determine the ridge flow.
And we end up finding a decent number of matching minutiae (marked with red circles).
Here's the bad news. Taking these types of up close shots of the tip of a finger is difficult. I used a DSLR with a flash to achieve these results. Additionally most fingerprint matching algorithms are not scale invariant. So if the finger is farther away from the camera on a subsequent "scan", it may not match the original.
The software package I used for the visualizations is the excellent and BSD licensed SourceAFIS. No corporate "open source version"/ "paid version" shenanigans either although it's currently only ported to C# and Java (limited).
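The "contrast-stretching" step mentioned above is worth sketching, since contrast is the debilitating issue with camera captures. A common approach is a percentile-based linear stretch applied to the greyscale image before feature extraction; here it is in pure Python on a flat pixel list (the percentile choices are illustrative, not from any particular pipeline):

```python
# Percentile-based contrast stretch: linearly rescale greyscale values so
# the lo/hi percentiles map to 0 and 255, clipping anything outside.
def contrast_stretch(pixels, lo_pct=5, hi_pct=95):
    """pixels: flat list of 0-255 greyscale values."""
    ranked = sorted(pixels)
    lo = ranked[len(ranked) * lo_pct // 100]
    hi = ranked[min(len(ranked) * hi_pct // 100, len(ranked) - 1)]
    if hi == lo:
        return list(pixels)  # flat image; nothing to stretch
    return [max(0, min(255, (p - lo) * 255 // (hi - lo))) for p in pixels]

# A low-contrast strip of greyscale values, all bunched around mid-grey.
flat = [118, 120, 122, 124, 126, 128, 130, 132, 134, 136]
stretched = contrast_stretch(flat)
```

In a real pipeline you would do this per-block (locally adaptive) rather than globally, since lighting across a fingertip photo is rarely uniform.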
Non-Camera-Based Solutions:
For the frighteningly small number of devices whose hardware supports "USB host mode", you can write a custom driver to integrate a fingerprint scanner with Android. I'll be honest: for the two models I've done this for, it was a huge pain. I accomplished it by using Wireshark to sniff USB packets between the scanner and a Linux box that had a working driver, and then writing an Android driver based on the sniffed commands.
Cross-Compiling FingerJetFX
Once you have worked out a solution for image acquisition (both potential solutions have their drawbacks), you can start to worry about getting FingerJetFX running on Android. First you'll use their SDK to write a self-contained C++ program that takes an image and turns it into a template. After that, you really have two options.
Compile it to a library and use JNI to interface with it.
Compile it to an executable and let your Android program call it as a subprocess.
For either, you'll need the NDK. I've never used JNI, so I'll defer to the wisdom of others on how best to use it. I always tend to choose route #2. For this application I think it's appropriate, since you're only really calling the native code to do one thing: template your image. Once you've got your native program running and cross-compiled, you can use the answer to this question to package it with your Android app and call it from your Android code.
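Route #2 reduces to "run the native binary and capture its output". A language-agnostic sketch of that call pattern in Python (the binary name fjfx_extract and its arguments are hypothetical stand-ins; on Android the equivalent is Runtime.getRuntime().exec() on a binary shipped inside your APK):

```python
# Treat the cross-compiled extractor as a black-box subprocess: run it,
# check the exit status, and hand back whatever it wrote to stdout.
import subprocess

def run_extractor(cmd):
    """Run the native templating step; return its stdout or raise on failure."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Stand-in for: run_extractor(["fjfx_extract", "finger.pgm", "out.tmpl"])
# (echo substitutes for the real binary so the sketch is runnable anywhere)
output = run_extractor(["echo", "template-written"])
```

The JNI route avoids the process-spawn overhead, but for a once-per-enrollment templating call the subprocess route is much simpler to debug.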
There are a couple of immediate hurdles:
Obtaining a good image of the fingerprint will be critical. According to their site, FingerJet expects standard fingerprint images, i.e. 8-bit greyscale (high-contrast), flattened fingerprint images. If you took fingerprint pictures with the camera, the user would need a flat transparent surface (glass) to flatten the fingerprints onto in order to take the picture. Your app would then locate the fingerprint in the image and transform it into a format acceptable to FingerJet. A library like OpenCV would help with this.
FingerJetFX OSE does not appear to offer canned Android support: you will have to compile the library for Android and use it via JNI/the NDK.
From there, FingerJet should provide you with a compact representation of the print that you can use for matching.
It would be feasible, but the usage requirement (the need for the user to have a flat transparent surface available) might be a deal-breaker...

Flex 4.5/AIR Android app

I am thinking of using Flex 4/AIR to develop an Android app. I want the app to have a hover-to-play-like ability: if a video thumbnail is selected from a list of videos but not clicked, it should play a 5-second clip, just like on bing.com/videos. I am assuming this is the closest we come to "hover" on Android devices; please correct me if this is not the case.
Does Flex 4/AIR have this capability? Otherwise, can we implement hover-to-play on Android devices?
Appreciate any help/pointers.
Does Flash Builder 4.5 have this capability? No! I think you meant to ask whether Flex 4.5 has this capability, but the question would be best stated: "Does AIR for Android have this capability? If so, how can I access it in Flex?"
When developing code for mobile devices, I would take great care when implementing functionality based on a "hover" approach. However, you can take a look at these touch events:
TOUCH_OVER
TOUCH_ROLL_OVER
I thought a long-press / long-touch event might be what you need, but I couldn't find a documented one supported by AIR's touch API.
