I'm working on an Android app (though eventually I'll want to do the same thing on iOS) and I'm looking to build an image recognition feature into it. The user would snap a picture, then this component of the app would need to figure out what that image is, whether it's a bowling ball, a salad, a book, you name it. It would also be helpful if it could figure out roughly how big the object in question is, though I imagine the camera focus values could help with that. The objects in question would not be moving.
I've heard of neural networks being used, but I'm not sure how this could be implemented, especially since I want to be able to recognize a very wide range of objects. I highly doubt this sort of processing could happen natively on a phone either. What are some solutions to this problem?
I would suggest you look at OpenCV. It is an excellent open-source library for image processing and object detection, and it ships with Android sample apps ready for testing some of its APIs.
http://opencv.org/platforms/android.html
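To make that concrete, here is a minimal sketch (not a complete recognizer) of what using the OpenCV Android SDK might look like: it loads the native libraries, converts the captured photo to an OpenCV Mat, and runs a cascade classifier. The class name and cascade file path are placeholders, and a single cascade only detects the one object class it was trained on, so this is far from the "recognize anything" goal in the question.

```java
import org.opencv.android.OpenCVLoader;
import org.opencv.android.Utils;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

import android.graphics.Bitmap;

public class SimpleDetector {

    private final CascadeClassifier classifier;

    // cascadePath is a placeholder: a cascade XML file shipped with the app,
    // trained for whatever single object class you care about.
    public SimpleDetector(String cascadePath) {
        if (!OpenCVLoader.initDebug()) {
            throw new IllegalStateException("OpenCV native libraries failed to load");
        }
        classifier = new CascadeClassifier(cascadePath);
    }

    // Returns bounding boxes of detected objects in the photo the user just took.
    public Rect[] detect(Bitmap photo) {
        Mat rgba = new Mat();
        Utils.bitmapToMat(photo, rgba);                    // camera Bitmap -> OpenCV Mat (RGBA)
        Mat gray = new Mat();
        Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);  // cascades work on grayscale
        MatOfRect hits = new MatOfRect();
        classifier.detectMultiScale(gray, hits);
        return hits.toArray();
    }
}
```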
I know how to design and develop an Android application that works offline. Now I want to move towards a dynamic Android app that works entirely online, like Facebook, Quora, or NewsHunt. Where should I start? Please point me in the right direction.
Thanks
I personally am completely self-taught when it comes to Android, and, well, it's been a journey.
For me, a big turning point was this tutorial:
http://www.androidhive.info/2012/05/how-to-connect-android-with-php-mysql/
It really gives you a good idea of what it takes to create an app that is connected to the internet, and it isn't hard to implement yourself. Mind you, before, you were working with just Android; once you integrate network connectivity and a server backend, the levels of complexity multiply.
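To give a feel for the Android half of that tutorial, here is a hedged sketch of fetching JSON from a backend endpoint with HttpURLConnection on a background thread. The URL and class names are placeholders (not taken from the tutorial), and a real app would add error handling and JSON parsing.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class BackendClient {

    // Placeholder endpoint: in the tutorial this would be your PHP script returning JSON.
    private static final String ENDPOINT = "http://example.com/api/get_items.php";

    // Must be called off the UI thread (e.g. from an AsyncTask or a background Thread).
    public static String fetchItems() throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(ENDPOINT).openConnection();
        connection.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            return body.toString();   // raw JSON; parse with JSONObject as needed
        } finally {
            connection.disconnect();
        }
    }
}
```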
If you're a solo developer just getting started, Google App Engine does a pretty good job of making everything easy to use, so I might recommend that. It offers a free trial of all its cloud services: $300 of credit for two months.
https://cloud.google.com/appengine
Amazon AWS offers a similar system, but it seems geared towards more enterprise-level operations.
I am sorry to post this question but I need some guidance for a project in MATLAB.
Is there any way, via an Android app, to take an image and computationally characterize the chemical components of the water in it?
I have to detect the presence of arsenic in water remotely. The lab attendant would just take an image of the water; that is all he can do, and send it to me remotely.
Using image processing, or whatever technology is appropriate, how can I detect the chemical composition of the water from that image?
First, you have to remember that StackOverflow is all about software issues (not about any particular application).
Regarding your question: I am afraid you cannot do it straightforwardly. Given an RGB image of discolored water, could you tell me whether the water contains arsenic or whether the color is due to mud? You cannot, so you cannot develop an application to perform this task from the image alone.
You have two possibilities:
Use a spectrometer, but it is complicated to use one remotely.
Use a chemical reagent (a tint) that changes the color of the water if it contains arsenic, so the change shows up in the image; a sketch of that color check is shown after this list.
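If you go the reagent route, the image-processing side can be as simple as comparing the average color of the photographed sample against a calibrated reference. The sketch below (Android Java) uses a made-up red-ratio threshold purely for illustration; any real threshold would have to be calibrated against samples with known arsenic concentrations.

```java
import android.graphics.Bitmap;
import android.graphics.Color;

public class ReagentCheck {

    // Hypothetical threshold: the color shift the reagent produces must be
    // calibrated in the lab against samples with known arsenic concentrations.
    private static final double RED_RATIO_THRESHOLD = 0.45;

    // Averages the color over the whole image and reports whether the
    // red share of the signal exceeds the calibrated threshold.
    public static boolean looksPositive(Bitmap sample) {
        long red = 0, green = 0, blue = 0;
        for (int y = 0; y < sample.getHeight(); y++) {
            for (int x = 0; x < sample.getWidth(); x++) {
                int pixel = sample.getPixel(x, y);
                red += Color.red(pixel);
                green += Color.green(pixel);
                blue += Color.blue(pixel);
            }
        }
        double total = red + green + blue;
        return total > 0 && (red / total) > RED_RATIO_THRESHOLD;
    }
}
```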
Now, the software aspect:
In the question title you talked about Matlab, but then you ask about Android... I am confused. Could you clarify that particular point?
Is there any API in the Android SDK that can recognize an object and return the name of that object?
For general objects in 3D, that is an unsolved problem in computer vision right now. A lot of researchers are working on it, but right now computers cannot reliably identify that an arbitrary object they haven't seen before is a "chair", for example. (If you think about it, such labeling actually requires a lot of judgment and world knowledge to know what kinds of things humans can sit on, and that's beyond the current state of AI for objects in general.)
There are algorithms that basically do a Google Image Search: they take a given picture and use some fairly advanced computer vision to find similar-looking pictures on the web (e.g. Google Goggles). There are APIs for those; check out the Google Goggles API.
Those work well for 2D pictures, like posters and product logos, that always look exactly the same, but not for things like plants and animals.
So, I have to make an app for my company (it's my first job), and I have to use the accelerometer, in this case to calculate the movement the player is making.
I have to implement it for both iOS and Android, but I have little time to do it.
I found Unity3d, but I would like to know:
Is it going to work for iOS and Android if I export?
How can I update the app using Unity3d?
Will the app be the same after exporting?
Can I implement the accelerometer code once, or do I have to do it twice, once for iOS and once for Android?
Another question: will the database implementation be the same?
Thank you so much. I hope that someone can help me.
If you're making an app just for the accelerometer, it'd be a lot faster to do it in Java and Objective-C respectively than to figure out Unity3D, which is actually a game engine.
Of course all of the things you've mentioned are possible in Unity, but it's not made for apps, at least not a simple 2D or UI-based app.
1) Yes
2) Open Unity, update your app, build for both platforms and then push to their respective stores.
3) They will almost be identical.
4) You "should" only have to write it once, but you might need to do some conditionals depending on what you are doing, or what bugs come up on the different systems.
I'm making an Android app in Eclipse and I want to record my voice, which will be the password for logging into the application. When I try to log in, it should recognize my password and let me use the application. How can I do that comparison in order to get a match? Do I need something like Shazam? Thanks for any tips!
Audio comparison is a very complex topic. Generally, if you don't know anything about it, I'd discourage you from heading into such a project.
The problem is that while you could of course just compare the two audio files byte by byte, that is certainly not what you want. Even though two recordings sound the same, i.e. they contain the same spoken words, the actual data will differ quite a bit.
You'd have the following possibilities:
Try to recognize what the user said (speech recognition), and check whether the same phrase is recognized later. This solution, while being the simplest, cannot distinguish between different users; a sketch of it follows this list.
Dive into the mysterious world of audio processing; more specifically, you'll end up dealing with a technique called the Fast Fourier Transform.
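For the first option, Android already ships a speech recognition intent, so no signal processing is needed on your side. The sketch below only checks whether the recognized phrase matches a stored passphrase (the passphrase, class name, and request code are placeholders), and, as noted above, it cannot tell one speaker from another.

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;

import java.util.ArrayList;

public class VoiceLoginActivity extends Activity {

    private static final int REQUEST_SPEECH = 1;

    // Placeholder passphrase; a real app would at least store a hash, not plain text.
    private static final String PASSPHRASE = "open sesame";

    private void promptForPassphrase() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK && data != null) {
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            boolean granted = matches != null
                    && !matches.isEmpty()
                    && matches.get(0).equalsIgnoreCase(PASSPHRASE);
            // granted == true: let the user into the app; otherwise show an error.
        }
    }
}
```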