On iOS, the Vision framework lets you run a saliency analysis on an image and then crop the image based on the foreground subject.
See this article: https://developer.apple.com/documentation/vision/cropping_images_using_saliency
Is there a comparable solution or third-party library for Android?
I want to create an application for Android and iOS (phones and tablets). I would like to use a cross-platform framework to speed up the development process, so I am asking for recommendations.
It will be a small or medium-sized application. It will use standard widgets like text inputs, check boxes, radio buttons, and list boxes. It would be great to have an input with autofocus or contextual searching.
My requirements for the framework are:
works on phones and tablets
easy to start with / install (very important)
easy to deploy (very important)
includes a library for working with images
includes a library for working with fonts (TrueType)
lets me learn something with good prospects for the future
can pack some images with my application
I am willing to pay some money for comfort
I searched Google before writing this question. I found a few frameworks, but I have doubts about all of them :).
The cross-platform framework candidates are:
Flutter
It is young. Do Flutter and Dart have good prospects?
Sencha
Is it easy to start with and deploy?
PhoneGap
Qt
It is C++. How fast is development with it?
Kivy
I have read their descriptions and checked their documentation, but I would like to hear the opinions of people who have worked with them.
Thank you.
Man, you have two major options.
Xamarin.Native
If your application needs rich (pixel-perfect) graphical content, then go for Xamarin.Native.
All your requirements will be covered by it.
You can share about 80% of your code across all the platforms.
Only UI-related code is written differently for each platform.
Everything else is shared; you can go for a PCL solution:
https://developer.xamarin.com/guides/cross-platform/application_fundamentals/pcl/
https://developer.xamarin.com/guides/android/getting_started/
https://developer.xamarin.com/guides/ios/getting_started/
Xamarin.Forms
If your application is not very rich in UI and is more form-based, go for Xamarin.Forms:
https://developer.xamarin.com/guides/xamarin-forms/getting-started/
I'm new to developing Android apps in general.
I'm trying to create an application that, given a certain image, detects faces and gives me the eye locations and other info.
I've done some research and found a few options, such as the Android FaceDetector API and OpenCV.
Could anyone give me some advice on how to make an app like this, or send me a link with any related info? All help would be great!
Thanks, Daniel.
I have worked with face recognition for a while.
If you want to use OpenCV, a bit more searching on SO will turn up things like this one.
The best one for me is the SDK provided by Lockheed Martin... but it's too expensive :S for a single person.
Edited
"Face detection and face recognition are different things ;) Face detection tells you where is the face and face recognition tells you who's the owner of the face"
If you choose OpenCV, you can find full doc in official page.
I'm going to give you a overview :
You can use OpenCV in your app using "OpenCV Manager" or with "Static Initialization on OpenCV Android".
About the first one:
OpenCV Manager is an Android service targeted to manage OpenCV library binaries on end users devices. It allows sharing the OpenCV dynamic libraries between applications on the same device. The Manager provides the following benefits:
Less memory usage. All apps use the same binaries from service and do not keep native libs inside themselves;
Hardware specific optimizations for all supported platforms;
Trusted OpenCV library source. All packages with OpenCV are published on Google Play market;
Regular updates and bug fixes;
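For reference, initialization through the Manager is asynchronous. Below is a minimal sketch using the loader classes from the OpenCV Android SDK; the version constant is just an example, so pick the one matching your SDK.

```java
import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;

import android.app.Activity;

public class MainActivity extends Activity {

    // Called once the OpenCV Manager has loaded (or failed to load) the native libs.
    private final BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            if (status == LoaderCallbackInterface.SUCCESS) {
                // Safe to use OpenCV classes (Mat, Imgproc, ...) from here on.
            } else {
                super.onManagerConnected(status);
            }
        }
    };

    @Override
    protected void onResume() {
        super.onResume();
        // Ask the OpenCV Manager service for the library; it downloads it if needed.
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_2, this, mLoaderCallback);
    }
}
```

Once the callback reports success, the native library has been provided by the Manager service and you can start using the rest of the API.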
About the second one:
A complete tutorial using Eclipse.
You might try the new Android face API. See the tutorial here about how to detect faces and facial landmarks:
https://developers.google.com/vision/detect-faces-tutorial
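To give a rough idea of what that looks like, here is a minimal sketch with the Mobile Vision FaceDetector; the bitmap is assumed to come from wherever your app loads images:

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.util.Log;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

public class FaceInfo {

    // Detects faces in a bitmap and logs the eye positions.
    public static void detectEyes(Context context, Bitmap bitmap) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(false)
                .setLandmarkType(FaceDetector.ALL_LANDMARKS) // needed for eyes, nose, mouth
                .build();

        if (!detector.isOperational()) {
            return; // detector dependencies not downloaded yet
        }

        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Face> faces = detector.detect(frame);

        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.valueAt(i);
            for (Landmark landmark : face.getLandmarks()) {
                if (landmark.getType() == Landmark.LEFT_EYE
                        || landmark.getType() == Landmark.RIGHT_EYE) {
                    PointF p = landmark.getPosition();
                    Log.d("FaceInfo", "eye at " + p.x + ", " + p.y);
                }
            }
        }

        detector.release();
    }
}
```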
I explain how to do it in this article. I used TensorFlow Lite with a MobileFaceNet implementation, achieving very accurate results with surprisingly high speed.
You'll find the source code and an APK in this repo.
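If you go the TensorFlow Lite route, the core of it is just feeding a cropped face into an Interpreter and reading back an embedding. The sketch below is only illustrative: the input size, embedding length, and preprocessing are placeholder assumptions, so check the article and repo for the values the actual MobileFaceNet model expects.

```java
import android.graphics.Bitmap;

import org.tensorflow.lite.Interpreter;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class FaceEmbedder {

    // Placeholder values; the real input size and embedding length depend on the model.
    private static final int INPUT_SIZE = 112;
    private static final int EMBEDDING_SIZE = 192;

    private final Interpreter interpreter;

    public FaceEmbedder(ByteBuffer modelBuffer) {
        // modelBuffer holds the .tflite model, e.g. memory-mapped from the assets folder.
        interpreter = new Interpreter(modelBuffer);
    }

    // Converts an already-cropped face bitmap to a float buffer and runs the model.
    public float[] embed(Bitmap face) {
        Bitmap scaled = Bitmap.createScaledBitmap(face, INPUT_SIZE, INPUT_SIZE, true);

        ByteBuffer input = ByteBuffer.allocateDirect(INPUT_SIZE * INPUT_SIZE * 3 * 4)
                .order(ByteOrder.nativeOrder());
        for (int y = 0; y < INPUT_SIZE; y++) {
            for (int x = 0; x < INPUT_SIZE; x++) {
                int px = scaled.getPixel(x, y);
                // Normalize RGB to [-1, 1]; the exact preprocessing depends on the model.
                input.putFloat((((px >> 16) & 0xFF) - 127.5f) / 127.5f);
                input.putFloat((((px >> 8) & 0xFF) - 127.5f) / 127.5f);
                input.putFloat(((px & 0xFF) - 127.5f) / 127.5f);
            }
        }

        float[][] output = new float[1][EMBEDDING_SIZE];
        interpreter.run(input, output);
        return output[0]; // compare embeddings with cosine or Euclidean distance
    }
}
```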
I'm trying to decide which library to choose in order to create an app that can
filter a video, for example beautify or clarity filters.
During my search I came across two candidates: OpenCV and FFmpeg, and I found a complete framework only for FFmpeg (so +1 for this).
I couldn't find a complete comparison between the two, so if someone has tried them and can post an answer it would be really helpful.
Edit:
Another candidate is the Marvin framework (a Java project) for Android: https://code.google.com/p/android-image-filtering/
OpenCV is a framework for Computer Vision, and it's very limited for what you need because it requires you to write most of the cool filters yourself. Nevertheless, it provides a few techniques to blur images, change contrast, convert to grayscale, flip, crop, threshold, erode, dilate, resize, rotate, isolate colors, composite, and a few other things. Just so you have an idea of how to implement filters, I recently implemented a Displacement Map Filter using OpenCV.
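To give a concrete idea of the kind of building blocks OpenCV gives you on Android, here is a minimal sketch using the Java bindings, assuming OpenCV has already been initialized:

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class SimpleFilters {

    // Converts a frame to grayscale and blurs it: the kind of basic operation
    // you combine yourself when building filters on top of OpenCV.
    public static Mat grayBlur(Mat frame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_RGBA2GRAY);

        Mat blurred = new Mat();
        Imgproc.GaussianBlur(gray, blurred, new Size(9, 9), 0);
        return blurred;
    }
}
```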
FFmpeg has a few filters as well, but it's meant to be a cross-platform solution to record, convert and stream audio and video, which means it doesn't really offer many filter effects.
Nevertheless, both APIs can read video (files and stream from camera) on Android and provide access to the video frames so you can execute your custom filters.
I believe the technology that can really help you bring a large collection of filters to your application is ImageMagick. Note that ImageMagick doesn't handle videos, so you can use Android's native API, OpenCV or FFmpeg for that part. ImageMagick can be driven from the command line, from a program interface, or from a script.
There is a very useful OpenGL ES 2.0 library for video processing with many filters for iOS:
GPUImage for iOS
The Android wrapper is here:
GPUImage wrapper for Android
The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.
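For a feel of the Android wrapper's API, here is a minimal sketch; the exact package and filter class names vary a bit between versions of the library, so treat these as an example rather than the definitive API:

```java
import android.content.Context;
import android.graphics.Bitmap;

import jp.co.cyberagent.android.gpuimage.GPUImage;
import jp.co.cyberagent.android.gpuimage.GPUImageGrayscaleFilter;

public class GpuImageExample {

    // Applies a GPU-accelerated filter to a bitmap and returns the filtered result.
    public static Bitmap applyGrayscale(Context context, Bitmap source) {
        GPUImage gpuImage = new GPUImage(context);
        gpuImage.setImage(source);
        gpuImage.setFilter(new GPUImageGrayscaleFilter());
        return gpuImage.getBitmapWithFilterApplied();
    }
}
```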
I need to detect edges in an image, and I'm using the Canny algorithm for that.
OpenCV for Android is available as version 2.4.2, but when I try to run the examples it says
"OpenCV Manager is not installed, please try to install it." After installing it from the market, everything works fine.
But I want users to only have to install my application, so that they don't need to install another .apk to use it.
-> How can I use OpenCV without asking for another application, i.e. without requiring the Manager to be pre-installed?
-> Is there any way I can use the Canny algorithm for edge detection without OpenCV? Any good algorithm tutorials for implementing it in Android?
You might find information about this on the OpenCV webpage. That said, this approach is deprecated and OpenCV advises not to use it in production. The Manager actually allows the user to download the OpenCV library once and for all, and your application will then be much smaller!
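For completeness, the static-initialization route bundles the OpenCV native libraries inside your APK and loads them with a single call, so no separate Manager app is needed. A minimal sketch (the Canny thresholds are just example values):

```java
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class EdgeDetector {

    static {
        // Loads the OpenCV native libs bundled inside the APK,
        // so the user never needs the separate OpenCV Manager app.
        if (!OpenCVLoader.initDebug()) {
            throw new RuntimeException("OpenCV native libraries failed to load");
        }
    }

    // Runs the Canny edge detector on a grayscale Mat.
    public static Mat cannyEdges(Mat gray) {
        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50, 150); // low/high thresholds are example values
        return edges;
    }
}
```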
About not using OpenCV, you can try FastCV (as Aaron suggested), but it seems overkill for your application (and it requires you to be familiar with NDK development). With OpenCV, on the other hand, you can code either in Java (by the way, have a look at JavaCV) or using the NDK.
Finally, if you only need a Canny edge detector and don't want to use a library, you can try to write it yourself. The related page on Wikipedia should be enough for this (I managed to do it a few years ago as an exercise).
Have you looked into Qualcomm's FastCV? It offers some of the more common image processing algorithms offered in libraries like OpenCV. They also have a pretty cool augmented reality API called Vuforia.
Fair warning, the support documentation isn't that great and it requires that you are familiar with NDK development.
https://developer.qualcomm.com/mobile-development/mobile-technologies/computer-vision-fastcv
We need to express and render all kinds of mathematical equations (e.g. fractions, algebraic equations, matrices, calculus, trigonometry) in our native Android application [not a web/browser application].
(a) Is there a way to do this using Android libraries? For example, for desktop/server Java, JLaTeXMath can be used. Is there a similar option available for Android? How can something similar be done on the Android platform?
Some similar questions have been asked, but the answers to them referred to libraries that cannot be used on Android.
(b) If native libraries are not available, how easy or difficult would it be to use an Android WebView to support expressing and rendering equations? Would it be possible to use JavaScript libraries like jsMath?
In this case, I plan to generate HTML code on the fly and reference the jsMath library. Will this cause any issues?
(c) In our application, some actions need to be performed based on user events for an equation. If a WebView-based approach is used, how can it trigger processing in the native part of our application? Are there any special considerations when calling the application's native code from JavaScript?
I used MathJax in my Android app. It renders mathematical expressions offline, which means you don't need a network connection.
You might find this link helpful.
http://www.mathjax.org/
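To give an idea of how the WebView approach (and the native callback asked about in point (c)) fits together, here is a minimal sketch. The HTML string, the asset path to MathJax, and the EquationBridge class are illustrative assumptions, not part of MathJax itself; the assumption is that a copy of MathJax lives in the app's assets folder so rendering works offline:

```java
import android.app.Activity;
import android.os.Bundle;
import android.webkit.JavascriptInterface;
import android.webkit.WebView;
import android.widget.Toast;

public class EquationActivity extends Activity {

    // Exposed to JavaScript so a tap on an equation can call back into native code.
    public class EquationBridge {
        @JavascriptInterface
        public void onEquationTapped(String name) {
            Toast.makeText(EquationActivity.this, "Tapped: " + name, Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        WebView webView = new WebView(this);
        webView.getSettings().setJavaScriptEnabled(true);
        webView.addJavascriptInterface(new EquationBridge(), "Native");
        setContentView(webView);

        // HTML generated on the fly; the MathJax script path assumes a copy in assets/.
        String html = "<html><head>"
                + "<script src='file:///android_asset/MathJax/MathJax.js?config=TeX-AMS_HTML'></script>"
                + "</head><body>"
                + "<div onclick=\"Native.onEquationTapped('quadratic formula')\">"
                + "$$x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$$"
                + "</div></body></html>";

        webView.loadDataWithBaseURL("file:///android_asset/", html, "text/html", "utf-8", null);
    }
}
```

The @JavascriptInterface method is what the page's JavaScript calls through the "Native" object registered with addJavascriptInterface, which is the usual way to trigger native processing from a WebView.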