Water Image: Chemical Analysis [closed] - Android

I am sorry to post this question, but I need some guidance for a project in MATLAB.
Is there any way, via an Android app, to take an image that can be used to characterize the chemical components of water computationally?
I have to detect the presence of arsenic in water remotely. The lab attendant would just take an image of the water - all he can do is send it to me remotely.
Using image processing, or whatever other technology is appropriate, how can I detect the chemical composition of the water from such an image?

First, you have to remember that Stack Overflow is all about software issues, not about any particular application domain.
Regarding your question: I am afraid you cannot do it in a straightforward way. First, given an RGB image like this one:
[image: a water sample with a brownish tint]
could you tell me whether the water contains arsenic or whether the color is just due to mud? From the color alone you cannot say that the water contains arsenic, so you cannot develop an application that performs this task from a plain photograph.
You have two possibilities:
Use a spectrometer. But that is complicated to operate remotely.
Use a chemical reagent (a colorimetric test) that changes the color of the water when arsenic is present, so the change shows up in the image; a rough color-comparison sketch follows this list.
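A minimal, hypothetical sketch of that second option in Android Java, assuming a calibrated colorimetric reagent and controlled lighting/white balance. The class, method names, and reference-color data are placeholders for illustration, not a validated chemical method: it samples the average color of a region of the photo and picks the nearest calibrated reference.

```java
import android.graphics.Bitmap;
import android.graphics.Color;

/**
 * Sketch only: assumes a colorimetric arsenic reagent is used, so that
 * concentration can be estimated by comparing the photographed color
 * against reference colors calibrated beforehand in the lab.
 */
public class ColorimetricEstimator {

    /** Average RGB of a square region centered in the bitmap. */
    static float[] averageColor(Bitmap bmp, int regionSize) {
        int cx = bmp.getWidth() / 2, cy = bmp.getHeight() / 2;
        int half = regionSize / 2, count = 0;
        long r = 0, g = 0, b = 0;
        for (int y = cy - half; y < cy + half; y++) {
            for (int x = cx - half; x < cx + half; x++) {
                int p = bmp.getPixel(x, y);
                r += Color.red(p);
                g += Color.green(p);
                b += Color.blue(p);
                count++;
            }
        }
        return new float[] { (float) r / count, (float) g / count, (float) b / count };
    }

    /** Index of the closest calibrated reference color (hypothetical calibration data). */
    static int closestReference(float[] sample, float[][] references) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < references.length; i++) {
            double d = 0;
            for (int c = 0; c < 3; c++) {
                double diff = sample[c] - references[i][c];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }
}
```

Note that without the reagent (and a fixed camera setup) the comparison is meaningless: mud and arsenic can produce indistinguishable colors, which is the core problem described above.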
Now, the software aspect:
In the question body you talked about MATLAB, but the title says Android... I am confused. Could you clarify that particular point?

Related

Recognize a figure - Similar to the Magic Touch game [closed]

In the game I need the user to draw a figure, for example a rectangle, and the game has to recognize the figure. How can I do this?
Thanks!
This sounds like you would want to use a neural network. If you don't know what that is: it is basically a trainable model that can do simple tasks like classifying (recognizing) shapes. It's easy to do with TensorFlow, since it integrates well with Android: https://www.tensorflow.org/mobile/android_build
Then you would train your classifier to differentiate between circle, rectangle, triangle, and more. For this, you should draw some rectangles and triangles in your app, save them as images, and label them yourself with what is in them. Then you can use those images with TensorFlow to train your model to recognize the figures the user draws.
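For the on-device side, here is a rough Android Java sketch of inference with TensorFlow Lite, assuming you have already trained and exported a small model; the file name "shapes.tflite", the input size, and the label list are placeholders.

```java
import android.content.res.AssetFileDescriptor;
import android.graphics.Bitmap;

import org.tensorflow.lite.Interpreter;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

/** Sketch: classify a user-drawn shape with a pre-trained TensorFlow Lite model. */
public class ShapeClassifier {
    private static final int INPUT_SIZE = 28;  // assumed model input resolution
    private static final String[] LABELS = { "circle", "rectangle", "triangle" };

    private final Interpreter interpreter;

    public ShapeClassifier(android.content.Context context) throws IOException {
        interpreter = new Interpreter(loadModel(context, "shapes.tflite"));
    }

    /** Memory-map the .tflite model stored under assets/. */
    private static MappedByteBuffer loadModel(android.content.Context ctx, String name) throws IOException {
        AssetFileDescriptor fd = ctx.getAssets().openFd(name);
        try (FileInputStream in = new FileInputStream(fd.getFileDescriptor())) {
            return in.getChannel().map(FileChannel.MapMode.READ_ONLY,
                    fd.getStartOffset(), fd.getDeclaredLength());
        }
    }

    /** Classify a bitmap of the user's drawing (scaled down and converted to grayscale). */
    public String classify(Bitmap drawing) {
        Bitmap scaled = Bitmap.createScaledBitmap(drawing, INPUT_SIZE, INPUT_SIZE, true);
        float[][][][] input = new float[1][INPUT_SIZE][INPUT_SIZE][1];
        for (int y = 0; y < INPUT_SIZE; y++) {
            for (int x = 0; x < INPUT_SIZE; x++) {
                int p = scaled.getPixel(x, y);
                // simple grayscale conversion, normalized to [0, 1]
                input[0][y][x][0] = (android.graphics.Color.red(p)
                        + android.graphics.Color.green(p)
                        + android.graphics.Color.blue(p)) / (3f * 255f);
            }
        }
        float[][] output = new float[1][LABELS.length];
        interpreter.run(input, output);

        int best = 0;
        for (int i = 1; i < LABELS.length; i++) {
            if (output[0][i] > output[0][best]) best = i;
        }
        return LABELS[best];
    }
}
```

The training itself (drawing and labeling the sample images, fitting the model, exporting it to .tflite) happens offline; only the exported model ships with the app.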

How can I add broad image recognition to a mobile app? [closed]

I'm working on an Android app (though eventually I'll want to do the same thing on iOS) and I'm looking to build an image recognition feature into it. The user would snap a picture, then this component of the app would need to figure out what that image is, whether it's a bowling ball, a salad, a book, you name it. It would also be helpful if it could figure out roughly how big the object in question is, though I imagine the camera focus values could help with that. The objects in question would not be moving.
I've heard of neural networks being used, but I'm not sure how this could be implemented, especially since I want to be able to recognize a very wide range of objects. I highly doubt this sort of processing could happen natively on a phone either. What are some solutions to this problem?
I would suggest you look at OpenCV. It is an excellent open-source library for image processing and object detection, and it ships with great Android sample apps ready for testing some of its APIs.
http://opencv.org/platforms/android.html
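As a minimal starting point, here is a sketch using the OpenCV Java bindings on Android. It only loads an image and detects ORB keypoints, which could feed a matcher against reference images; broad "what is this object?" recognition would still need a trained classifier on top, as discussed in the question.

```java
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

/** Sketch: load an image and extract ORB keypoints with OpenCV for Android. */
public class OpenCvStarter {

    public static void analyze(String imagePath) {
        // Load the native OpenCV library (the official sample apps use OpenCVLoader).
        if (!OpenCVLoader.initDebug()) {
            throw new IllegalStateException("OpenCV native libraries not found");
        }

        Mat image = Imgcodecs.imread(imagePath);
        Mat gray = new Mat();
        Imgproc.cvtColor(image, gray, Imgproc.COLOR_BGR2GRAY);

        // ORB keypoints: a cheap, on-device feature representation that can be
        // matched against reference images of known objects.
        ORB orb = ORB.create();
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        orb.detect(gray, keypoints);

        android.util.Log.d("OpenCvStarter",
                "Detected " + keypoints.toArray().length + " ORB keypoints");
    }
}
```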

Library for a simple graphics application - Android [closed]

I am creating a simple graphics application where the user can draw several shapes, change a shape's color and size, zoom, rotate the device, etc. After researching, I found out that there are many options for achieving this, and I find it hard to choose which approach would benefit me more. One approach I have started testing is to create a custom view and draw on a Canvas.
Is this option solid enough to proceed with, without the fear that poor performance will eventually force me to switch to something else (e.g. OpenGL)?
Or better, given the brief description above, what would you recommend as the best option?
As this is an opinion-based type of question, I think it might get closed, so I'll type fast: to get your app up and running quickly, Canvas is your best approach (there are lots of accessible docs and examples out there), unless you're already an expert in OpenGL, which has quite a steep learning curve. From your app requirements, it doesn't sound like you would hit any serious performance issues, and, as I say, you can get things running quickly. If performance turns out to be inadequate, you can later switch to pure OpenGL or use a framework for it such as libGDX, Rajawali, AndEngine, etc.
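To illustrate the Canvas approach, here is a bare-bones custom View (a sketch, not a full editor) that lets the user draw freehand strokes; shape tools, color/size pickers, and zoom would be layered on top of this.

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.MotionEvent;
import android.view.View;

/** Minimal custom View that records touch input into a Path and draws it. */
public class DrawingView extends View {
    private final Path path = new Path();
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public DrawingView(Context context) {
        super(context);
        paint.setColor(Color.BLACK);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(8f);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                path.moveTo(event.getX(), event.getY());
                return true;
            case MotionEvent.ACTION_MOVE:
                path.lineTo(event.getX(), event.getY());
                invalidate();   // request a redraw with the updated path
                return true;
        }
        return super.onTouchEvent(event);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        canvas.drawPath(path, paint);
    }
}
```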

Object Recognition using Android API [closed]

Is there any API in the Android SDK that can recognize an object and return the name of the object?
For general objects in 3D, that is an unsolved problem in computer vision right now. A lot of researchers are working on it, but right now computers cannot reliably identify that an arbitrary object they haven't seen before is a "chair", for example. (If you think about it, such labeling actually requires a lot of judgment and world knowledge to know what kinds of things humans can sit on, and that's beyond the current state of AI for objects in general.)
There are algorithms that basically do a Google Image Search: they take a given picture and use some fairly advanced computer vision to find similar-looking pictures on the web (e.g. Google Goggles). There are APIs for those; check out
the Google Goggles API.
Those work well for 2D pictures, like posters and product logos, that always look exactly the same, but not for things like plants and animals.

Segment control in android [closed]

Can anyone please tell me how to make a segmented control in Android, exactly like the one shown below?
I am confused about what images I should use, how to cut the images and use them as resources to get exactly the same segmented control as this, and which library would be best to achieve it.
There are dozens of good articles on segmented control implementations, but the following are the ones that helped me. The Intermediate Example is closest to your requirement.
Basic Example
Intermediate Example
Github Segment Control Example One
Github Segment Control Example Two
Github Ceryle SegmentedButton (the richest)
For creating the menu graphics, you need some Photoshop skills, although this website can also create navigation images for you online: Create Navigation Online
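If you would rather avoid sliced images entirely, a common alternative (a sketch under my own assumptions, not taken from the linked examples) is a horizontal RadioGroup whose RadioButtons are styled as buttons; the selector drawable mentioned in the comment below is hypothetical and you would supply your own.

```java
import android.content.Context;
import android.view.ViewGroup;
import android.widget.RadioButton;
import android.widget.RadioGroup;

/** Sketch: build an iOS-style segmented control from a horizontal RadioGroup. */
public class SegmentedControlFactory {

    public static RadioGroup create(Context context, String[] labels,
                                    RadioGroup.OnCheckedChangeListener listener) {
        RadioGroup group = new RadioGroup(context);
        group.setOrientation(RadioGroup.HORIZONTAL);

        for (int i = 0; i < labels.length; i++) {
            RadioButton segment = new RadioButton(context);
            segment.setId(i + 1);
            segment.setText(labels[i]);
            segment.setButtonDrawable(android.R.color.transparent); // hide the radio circle
            segment.setGravity(android.view.Gravity.CENTER);
            // In a real app, set a selector background here so the checked segment
            // is highlighted, e.g. segment.setBackgroundResource(R.drawable.segment_bg);
            segment.setLayoutParams(new RadioGroup.LayoutParams(
                    0, ViewGroup.LayoutParams.WRAP_CONTENT, 1f)); // equal-width segments
            group.addView(segment);
        }

        group.check(1); // select the first segment by default
        group.setOnCheckedChangeListener(listener);
        return group;
    }
}
```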
