Android Photography App Double Exposure

I would like to create an Android app that combines two photographs to create something similar to what you would see in a double exposure photograph. Can you give me any ideas on how to do this?

To get a true double exposure, all you should need to do is add together the R/G/B values for each pixel with straight addition, capping each component at 255 (for 24bpp at least). If the result is too bright, you can always scale it down afterward.
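A minimal sketch of that per-pixel addition using Android's Bitmap API, assuming both images are already decoded and scaled to the same dimensions (the class and method names are mine):

import android.graphics.Bitmap;
import android.graphics.Color;

public class DoubleExposure {
    // Adds the R/G/B channels of two same-sized bitmaps, clamping each channel at 255.
    public static Bitmap blend(Bitmap a, Bitmap b) {
        int w = a.getWidth(), h = a.getHeight();
        int[] pa = new int[w * h];
        int[] pb = new int[w * h];
        a.getPixels(pa, 0, w, 0, 0, w, h);
        b.getPixels(pb, 0, w, 0, 0, w, h);
        for (int i = 0; i < pa.length; i++) {
            int r = Math.min(255, Color.red(pa[i]) + Color.red(pb[i]));
            int g = Math.min(255, Color.green(pa[i]) + Color.green(pb[i]));
            int bl = Math.min(255, Color.blue(pa[i]) + Color.blue(pb[i]));
            pa[i] = Color.rgb(r, g, bl);
        }
        Bitmap out = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        out.setPixels(pa, 0, w, 0, 0, w, h);
        return out;
    }
}

To dim an overly bright result, you can scale each summed channel by a factor below 1 before clamping, or average the two images instead of adding them.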

Related

How to create a magic wand in Android?

I want to create a magic wand tool in Android like the one implemented in Photoshop. Is there an open-source library for this? And if not, can anyone point me in the right direction?
OpenCV has floodFill, which with a little work can give you the magic wand functionality.
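A minimal sketch of that call with OpenCV's Java bindings (the per-channel tolerance of 20 is a placeholder; FLOODFILL_FIXED_RANGE compares each pixel to the seed color, which is the magic-wand behavior):

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class Wand {
    // Flood-fills from a seed point, accepting pixels within +/-20 per channel
    // of the seed color, and returns the filled region's bounding rect.
    public static Rect wandSelect(Mat image, Point seed) {
        // floodFill requires a mask 2 pixels larger than the image in each dimension.
        Mat mask = Mat.zeros(image.rows() + 2, image.cols() + 2, CvType.CV_8UC1);
        Rect region = new Rect();
        Imgproc.floodFill(image, mask, seed, new Scalar(255, 255, 255), region,
                new Scalar(20, 20, 20), new Scalar(20, 20, 20),
                Imgproc.FLOODFILL_FIXED_RANGE);
        return region;
    }
}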
Basically, you need access to the pixels of the image, which you can get in numerous ways, e.g. via a Canvas. Then your algorithm is a bit like the A* pathfinding algorithm (but not really):
1. Set a color-difference threshold.
2. Define a starting point.
3. Check every pixel around the starting point against the threshold; if it passes, save its coordinates.
4. For every pixel that passed the threshold, go back to step 3, using that pixel as the new starting point.
The pixel-color difference tested against the threshold is in essence the Pythagorean theorem applied to the colors of the original starting point and the pixel you are comparing: d = sqrt((R2-R1)^2 + (G2-G1)^2 + (B2-B1)^2).
Of course Photoshop has a number of extremely efficient algorithms, but it essentially boils down to the above.
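A minimal sketch of that loop in Java, implemented as a plain BFS flood fill over a packed-ARGB pixel array (all names and the 4-neighborhood choice are mine):

import java.util.ArrayDeque;

public class MagicWand {
    // Returns a mask of pixels reachable from the starting point whose color
    // is within 'threshold' (Euclidean RGB distance) of the seed color.
    public static boolean[] select(int[] pixels, int w, int h,
                                   int startX, int startY, double threshold) {
        boolean[] mask = new boolean[w * h];
        boolean[] seen = new boolean[w * h];
        int seedColor = pixels[startY * w + startX];
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        queue.add(startY * w + startX);
        seen[startY * w + startX] = true;
        int[] dx = {1, -1, 0, 0}, dy = {0, 0, 1, -1};
        while (!queue.isEmpty()) {
            int idx = queue.poll();
            if (distance(seedColor, pixels[idx]) > threshold) continue;
            mask[idx] = true;
            int x = idx % w, y = idx / w;
            for (int d = 0; d < 4; d++) {
                int nx = x + dx[d], ny = y + dy[d];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h && !seen[ny * w + nx]) {
                    seen[ny * w + nx] = true;
                    queue.add(ny * w + nx);
                }
            }
        }
        return mask;
    }

    // d = sqrt((R2-R1)^2 + (G2-G1)^2 + (B2-B1)^2), ignoring alpha.
    private static double distance(int c1, int c2) {
        int dr = ((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF);
        int dg = ((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF);
        int db = (c1 & 0xFF) - (c2 & 0xFF);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }
}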

Android: How to detect these objects in images? (Image included). Tried OpenCV and metaioSDK, but neither works well enough

I have recently been working on object detection/recognition in images captured from an Android device camera.
The objects I am trying to detect are all kinds of buttons that look like this:
Picture of buttons
So far I have been trying with OpenCV and also with the metaio SDK. Results:
OpenCV was always detecting something, but gave lots of false hits, and it is too much work to collect all the pictures for what I have in mind. I have tried three ways with OpenCV:
Feature detection (SURF, ORB and so on) -> was way too slow, and there are not enough features on my objects.
Template matching -> seems to only work when the template is exactly a part of the scene image.
Training classifiers -> this worked the best so far, but is too much work for my goal, and still gives too many false detections.
metaioSDK was working OK when I took my reference images (the icon part of each button) out of a picture like the one shown above, then printed the full image and pointed my Android device camera at the printed picture. But when I tried with the real buttons (not a picture of them), almost nothing got detected anymore. The metaio documentation says that the reference images need to have lots of features and color differences, and also should not consist only of white text. Well, as you can see, my reference images are exactly the opposite of what they should be. But that's just how the buttons look ;)
So my question would be: does anyone have a suggestion about what else I could try to detect and recognize each of those buttons when I point my Android camera at them?
As a suggestion, you could try the following approach:
Class-Specific Hough Forests for Object Detection
They provide a C code implementation. Compile it, run it and look at the results, then replace the positive and negative training images with your own according to the following rules:
In a car, you will need to define the following 3 areas:
target region (the image you provided is a good representation of a target region)
nearby working area (this area carries information about your target's relative location); I would recommend an area 3-5 times the size of the target region, around the target, as a good working area
everything outside the above can be used as negative images
then,
Use "many" positive images (100-1000) at different viewing angles (-30 to +30 degrees) and at various distances.
You will have to make assumptions about the viewing angles and distances at which your users will use the application. The stricter these are, the better performance you will get. A simple "hint" camera overlay can give people a good idea of what you expect the working area to be.
Use a few times (3-5x) more negative images, including pictures of things that might be in the camera view but should not contribute any target position information.
Do not use big images; somewhere around 100-300px in width should be enough.
Assemble the database and modify the configuration file that the code comes with. Run the program and see if the performance is OK for your needs.
The program will return a voting map (a cloud of votes) for the object you are looking for. Add Gaussian blur to it, and apply some threshold to it (you will have to make another assumption for this threshold value).
The extracted mask will define the area you are looking for. The size of the masked region can give you a good estimate of the object scale. Given this information, it will be much easier to select a proper template and perform template matching.
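As a sketch of that blur-and-threshold step with OpenCV's Java bindings (the kernel size and the 0.5-of-peak threshold are assumptions to be tuned, and the voting map is assumed to be a single-channel float Mat):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class VoteMask {
    // Smooths a single-channel voting map and thresholds it into a binary mask.
    public static Mat extractMask(Mat votes) {
        Mat smoothed = new Mat();
        Imgproc.GaussianBlur(votes, smoothed, new Size(15, 15), 0);
        // Threshold relative to the strongest vote; 0.5 is an arbitrary starting point.
        Core.MinMaxLocResult mm = Core.minMaxLoc(smoothed);
        Mat mask = new Mat();
        Imgproc.threshold(smoothed, mask, 0.5 * mm.maxVal, 255, Imgproc.THRESH_BINARY);
        return mask;
    }
}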
(Also some thoughts) You can also try a small trick: use the goodFeaturesToTrack function with the mask you got to obtain a set of locations, and compare them with the corresponding locations on a template. Construct an SSD and solve it for the rotation, scale and translation parameters by minimizing the alignment error (but I am not sure if this approach will work).
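For the feature-collection part, a sketch with OpenCV's Java bindings (the parameter values are guesses; the mask is the one extracted above):

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.imgproc.Imgproc;

public class MaskedFeatures {
    // Finds up to 50 strong corners inside the masked region of a grayscale image.
    public static MatOfPoint cornersInMask(Mat gray, Mat mask) {
        MatOfPoint corners = new MatOfPoint();
        Imgproc.goodFeaturesToTrack(gray, corners, 50, 0.01, 5, mask, 3, false, 0.04);
        return corners;
    }
}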

How to Calculate the Height of an Android Phone from the Ground

I am working on a project in which I have to calculate my device's height from the ground. I have searched all over the internet but could not find any solution.
Can anyone please tell me what to do?
Take it with a grain of salt, a bit of humor and a sense of philosophy: replace the barometer with your smartphone.
http://naturelovesmath-en.blogspot.ca/2011/06/niels-bohr-barometer-question-myth.html
First it has to be clarified whether "height from ground" means altitude in the sense of "height above sea level", or how far the phone is from the floor when you are holding it in your hands.
For the second case:
Like SonicWind states, you could do the trick using the camera.
It would require calibrating the camera and having a standard object.
Take a picture of the standard object, which has to be positioned on the ground, at a standard zoom.
Recognize the object's size - or select it in the picture - and calculate the distance to the object.
-> you have the distance to the ground.
The object might also be your shoes, etc., so if the application is meant for multiple users, you might allow them to enter their shoe sizes ;)
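For reference, the relation behind this trick is the pinhole-camera model: distance = focal length x real object size / object size in the image, with the focal length and the measured image size both in pixels. A minimal sketch (all names are mine; the focal length in pixels has to come from the camera calibration):

public class GroundDistance {
    // Pinhole model: distance = f * realSize / imageSize.
    // focalLengthPx: calibrated focal length, in pixels.
    // realObjectHeightM: known height of the standard object, in meters.
    // objectHeightPx: height of the object as measured in the picture, in pixels.
    public static double distanceMeters(double focalLengthPx,
                                        double realObjectHeightM,
                                        double objectHeightPx) {
        return focalLengthPx * realObjectHeightM / objectHeightPx;
    }
}

For example, a 0.30 m shoe spanning 600 px under a 1500 px focal length comes out at 1500 * 0.30 / 600 = 0.75 m.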
This is an odd one, but OK, I like a challenge. The only way to realistically do this is to hook a sonar sensor up to the phone (easily done with an Arduino). Other than that, all you can do is read the accelerometers to guesstimate the distance (put the phone on the ground, then pick it up to the height you want). It appears to be impossible to do otherwise (maybe some conceptual use of the camera...).

Motion detection using OpenCV

I have seen queries related to OpenCV motion detection, but my requirement is much simpler, so I am asking the question again.
I would like to analyse video frames and see if something has changed in a frame. Any kind of motion occurring in the frame has to be recognized. I just want to get notified if something happens. I don't need to track or draw contours.
Attempts made:
1) Template matching using OpenCV (TM_CCORR_NORMED).
I get the similarity index using cvMinMaxLoc, and then:
if (sim_index > threshold)
    "Nothing changed"
else
    "Changed"
Problem faced:
I couldn't find a way to decide how to set the threshold. The similarity values for a false match and for a perfect match were very close.
2) Running average method
a) Maintain a running average of the frames.
b) Take the absolute difference between the current frame and the running average.
c) Threshold it and make it binary.
d) Count the number of non-zero values.
Again I am stuck on how to threshold it, because I get a large number of non-zero values even for very similar frames.
Please advise me on what approach I should take. Am I going in the right direction with the above two methods, or is there a simple method that works in most generic scenarios?
Method 2 is generally regarded as the simplest method for motion detection, and it is very effective if you have no water, swaying trees or highly variable lighting conditions in your video.
Normally you implement it like this:
motion_frame = abs(new_frame - running_avg);
running_avg = (1 - alpha) * running_avg + alpha * new_frame;
You can threshold the motion_frame if you want, then count the non-zeroes. But you could also just sum the elements of motion_frame and threshold that instead (be sure to work with floating-point numbers). Optimizing the parameters for this is pretty easy: just make two trackbars and play around with them. Typically alpha is around [0.1, 0.3].
Lastly, it is probably overkill to do this on entire frames; you could just use subsampled versions, and the result will be very similar.
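A minimal sketch of this method with OpenCV's Java bindings (alpha and the two thresholds are starting points to tune with trackbars, not final values):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class MotionDetector {
    private Mat runningAvg;                    // float accumulator
    private final double alpha = 0.2;          // typical range: 0.1 - 0.3
    private final double pixelThreshold = 30;  // per-pixel difference threshold
    private final double areaFraction = 0.01;  // fraction of pixels that must change

    // Returns true if the grayscale frame differs enough from the running average.
    public boolean motionDetected(Mat grayFrame) {
        Mat frame32f = new Mat();
        grayFrame.convertTo(frame32f, CvType.CV_32F);
        if (runningAvg == null) {
            runningAvg = frame32f.clone();
            return false;
        }
        Mat diff = new Mat();
        Core.absdiff(frame32f, runningAvg, diff);
        // running_avg = (1 - alpha) * running_avg + alpha * new_frame
        Imgproc.accumulateWeighted(frame32f, runningAvg, alpha);
        Mat binary = new Mat();
        Imgproc.threshold(diff, binary, pixelThreshold, 1, Imgproc.THRESH_BINARY);
        return Core.countNonZero(binary) > areaFraction * binary.total();
    }
}

Feeding it a subsampled frame (e.g. Imgproc.resize to a quarter of the size) makes this cheap enough to run on every frame.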

Using OpenCV to analyze data from protein gels

What I want to do is write an application that, at least in the future, could be ported to mobile platforms (such as Android) and that can scan an image of a protein gel and return data such as the number of bands (i.e. weights) in a column, the relative concentration (thickness of the band), and the weights of the proteins in each column.
For those who aren't familiar: mixtures of denatured proteins (basically, molecules made completely straight) are loaded into each column, and with the use of electricity the proteins are pulled through a gel (because the proteins are polar molecules). The end columns on each side of this image http://i52.tinypic.com/205cyrl.gif are where you place a mixture of proteins of known weights (so if you have 4 different weights, the band on top is the largest weight, and the weight/size of the protein decreases the further it travels down). Is something like this possible to analyze using OpenCV? The given image is a really clean-looking gel; they can often get really messy (see Google Images). I figured that if I allowed a user to enter the number of columns, specify which columns contain known weight markers and their actual weights, and provide an adjustable rectangle to size around the edges of the gel, then maybe it would be possible to scan and extract data from images of these gels? I skimmed through a textbook on OpenCV but I didn't see any obvious and reliable way to approach this. Any ideas? Maybe a different library would be better suited?
I believe you can do this using OpenCV.
My approach would be a color-based separation, and then counting the separate components.
In broad steps, your app would do the following:
Load the image, and rotate and scale it manually through the GUI of your app to match your needs.
Create a second, grayscale image in which each pixel contains a value in [0,255] that represents how well the color of the original pixel matches the target color (in the case of this image, the shade of blue).
In one of my experiments I used the concept of fuzzy sets and alpha cuts to extract objects of a certain color. A triangular membership function gave me pretty good results. This simply means that I defined triangular functions for all three color channels (RGB) and summed their results for each input color. If the channel values were close to the centers of the triangles, I had a strong match, and by controlling the width of the triangles you can define the tolerance of the matches. (Another option would be to use trapezoidal membership functions.) See the sketch after these steps.
At this point you have a grayscale image where the background (gel) is black and the proteins are gray/white. If you wish to clear up some noise, use the morphological operators (page 127) erode and dilate (cvErode and cvDilate in OpenCV).
After that, you can use a good OpenCV-based blob extraction library to extract the bounding boxes of the remaining gray areas, representing the proteins.
Having all the coordinates of the bounding boxes, you can apply your own algorithms to extract whatever data you wish.
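A minimal sketch of the triangular membership match described in the steps above (names are mine; the three channel memberships are averaged here rather than summed, so the output lands in [0,255]):

public class ColorMatch {
    // Triangular membership: 1.0 when the channel equals the target value,
    // falling linearly to 0.0 at +/- width. Width controls the match tolerance.
    private static double triangle(int value, int center, int width) {
        int d = Math.abs(value - center);
        return d >= width ? 0.0 : 1.0 - d / (double) width;
    }

    // Similarity in [0,255] between a packed RGB pixel and a target color.
    public static int similarity(int pixel, int targetR, int targetG, int targetB, int width) {
        double r = triangle((pixel >> 16) & 0xFF, targetR, width);
        double g = triangle((pixel >> 8) & 0xFF, targetG, width);
        double b = triangle(pixel & 0xFF, targetB, width);
        return (int) Math.round(255 * (r + g + b) / 3.0);
    }
}

Running this over every pixel produces the grayscale match image from step 2; an alpha cut then corresponds to thresholding that image.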
In my opinion OpenCV gives you all the necessary tools. However, a fully automated solution might be hard to obtain. But I'm sure you can easily build a GUI where you can set the parameters of the operators you apply during the steps described above.
As for Android: I haven't developed for mobile platforms, but I know that you can create C++ apps for these devices (I have read several questions regarding iPhone & OpenCV), so I think your app would be portable, or at least the image processing part of it (the GUI might be too platform-specific).
