I'm working on an Android program for my research.
It needs to determine the angle at which the user is holding the phone.
By angle I mean the rotation of the entire device, not of the widgets in the layout.
I've searched for the keyword "angle" on http://developer.android.com/reference/packages.html
and found a method "onSensorChanged" under the public interface SensorListener,
but the description there is too hard for me to understand.
Is this the method I want?
Essentially you would do some inverse trig on the readings of the three accelerometer axes to figure out the angle of "gravity" from its vector components in the three measured axes.
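A minimal sketch of that idea, using the newer SensorEventListener API (SensorListener, named in the question, is deprecated); the exact sign conventions depend on how you define the angles, so treat the formulas as a starting point:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Untested sketch: derive pitch and roll from the gravity components
// reported by the accelerometer using inverse trig.
public class TiltListener implements SensorEventListener {

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;
        float ax = event.values[0];
        float ay = event.values[1];
        float az = event.values[2];

        // Angle of the gravity vector from its components.
        double rollDeg  = Math.toDegrees(Math.atan2(ax, az)); // tilt about the Y axis
        double pitchDeg = Math.toDegrees(Math.atan2(ay, az)); // tilt about the X axis
        // ...use rollDeg / pitchDeg here...
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}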
See ApiDemos in the sample applications package of the SDK, particularly Sensors.java
http://developer.android.com/resources/samples/get.html
You may just want to build that demo application, install it, and experiment with the screen showing the accelerometer data, then extract that part into a separate project and modify it towards your needs.
I would like to create Android app for viewing images.
The idea is that users keep their tablet flat on the table (for the sake of simplicity, only X and Y for now) and scroll a picture that is too big to fit the screen by moving the tablet (yes, this app has to use tablet movement; sorry, no fingers allowed :) ).
I managed to get some basic framework implemented (implementing sensor listeners is easy), but I'm not sure how to translate "LINEAR_ACCELERATION" to pixels. I'm pretty sure it can be done (for example, check "photo sphere" or "panorama" apps that move content exactly as you move your phone) but I can't find any working prototype online.
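For reference, the naive version I can come up with just double-integrates the acceleration and scales it to pixels, which drifts badly almost immediately; that drift is exactly the part I don't know how to handle. The scale factor and listener wiring below are placeholders of mine, not from any working app:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Naive sketch: integrate LINEAR_ACCELERATION twice to get a displacement,
// then scale it to pixels. In practice this accumulates drift very quickly.
public class MotionScroller implements SensorEventListener {

    private static final float PIXELS_PER_METRE = 5000f; // arbitrary scale factor
    private long lastTimestampNs = 0;
    private final float[] velocity = new float[2];   // m/s, X and Y only
    private final float[] offsetPx = new float[2];   // accumulated scroll in pixels

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION) return;
        if (lastTimestampNs != 0) {
            float dt = (event.timestamp - lastTimestampNs) * 1e-9f; // ns -> s
            for (int axis = 0; axis < 2; axis++) {
                velocity[axis] += event.values[axis] * dt;                 // a -> v
                offsetPx[axis] += velocity[axis] * dt * PIXELS_PER_METRE;  // v -> px
            }
            // scrollTo((int) offsetPx[0], (int) offsetPx[1]); // apply to the image view
        }
        lastTimestampNs = event.timestamp;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}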
Where can I see how that kind of "magic" is done in the real world?
I'm using Project Tango to get depth data, and then I send the 3D point cloud to a native class that processes the data using the PCL library. I have a couple of questions which I think will help me understand more about rotation, translation, and coordinate systems in both PCL and Tango, and will also help me solve the problem in the project I'm working on.
I'm working on an obstacle detection system that should run in real time, and I'm using the default coordinate system in Tango. Is this right, or should I use the area description coordinate frame?
In my current scenario, I'm taking the 3D cloud data, which is in the camera frame of reference, and processing it without changing the frame of reference. I have three related questions here:
Is it alright to work directly on the data while it is in the camera frame?
Since I use PCL for processing, what is the coordinate system in PCL? Does x point to the right, y to the bottom, and z forward from the camera, as in Tango's camera coordinate system?
Is transforming the cloud to the start-of-service frame the same as transforming it to the origin (world frame)?
In my project's scenario, users hold the device rotated by some angle around the x axis. I have read in other questions that pitch/roll/yaw is not a reliable way to get the rotation of the device and that I should use the pose data provided by Tango instead; is that correct? How can I determine the right rotation angle of the device so that I can rotate the cloud and make the surface normal of the floor parallel to the Y axis? (Please have a look at the pictures to get an idea of what I mean.)
How can I use the pose data to translate and rotate the cloud data in PCL?
Note:
I have a related question to this one, which shows the results of my 3D point cloud processing code and its output:
Related question: how to detect the floor plane?
Thank you
I will post some answers to my questions.
1 and 2.1 - To get better results regarding the floor plane normal angle, the coordinate system that worked best for me is the OpenGL world coordinate system.
2.2 - Please refer to the answer at this link: answer link (PCL forum).
For the rotation and translation, I used the pose data from the Tango device; I have read that this is more reliable than deriving the rotation of the device yourself.
Thus, I get the pose data at the same time as the point cloud, send both to the native code, and perform the translation using the PCL library.
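For what it's worth, the transform itself is simple enough to sketch in plain Java, assuming the pose gives you a translation vector and a unit quaternion (as TangoPoseData's translation and rotation arrays do); on the native side, PCL's transformPointCloud can apply the equivalent 4x4 matrix for you.

// Minimal sketch: apply a pose (translation + unit quaternion) to each point
// of a point cloud. Parameter names are placeholders for whatever your pose
// callback provides; this is plain Java math, not a PCL call.
public final class PoseTransform {

    // Rotate vector (x, y, z) by unit quaternion (qx, qy, qz, qw),
    // then translate by (tx, ty, tz). Returns a new double[3].
    public static double[] transformPoint(double x, double y, double z,
                                          double qx, double qy, double qz, double qw,
                                          double tx, double ty, double tz) {
        // t = 2 * (q_vec x v)
        double tX = 2.0 * (qy * z - qz * y);
        double tY = 2.0 * (qz * x - qx * z);
        double tZ = 2.0 * (qx * y - qy * x);
        // v' = v + qw * t + (q_vec x t)
        double rx = x + qw * tX + (qy * tZ - qz * tY);
        double ry = y + qw * tY + (qz * tX - qx * tZ);
        double rz = z + qw * tZ + (qx * tY - qy * tX);
        return new double[] { rx + tx, ry + ty, rz + tz };
    }

    // Apply the same pose in place to a flat [x0, y0, z0, x1, y1, z1, ...]
    // buffer, which is how Tango delivers its depth points.
    public static void transformCloud(float[] xyz,
                                      double qx, double qy, double qz, double qw,
                                      double tx, double ty, double tz) {
        for (int i = 0; i + 2 < xyz.length; i += 3) {
            double[] p = transformPoint(xyz[i], xyz[i + 1], xyz[i + 2],
                                        qx, qy, qz, qw, tx, ty, tz);
            xyz[i]     = (float) p[0];
            xyz[i + 1] = (float) p[1];
            xyz[i + 2] = (float) p[2];
        }
    }
}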
Hope this will be helpful for someone.
The requirement is to create an Android application running on one specific mobile device that records video of a human eye pupil dilating in response to a bright light (which is physically attached to the mobile device). The video is then post-processed frame by frame on the device to detect & measure the diameter of the pupil AND the iris in each frame. Note the image processing does NOT need doing in real-time. The end result will be a dataset describing the changes in pupil (& iris) size over time. It's expected that the iris size can be used to enhance confidence in the pupil diameter data (eg removing pupil size data that's wildly wrong), but also as a relative measure for how dilated the eye is at any point.
I am familiar with developing Android mobile apps, but my experience with image processing is very limited. I've researched solutions and it seems that the answer may lie with the OpenCV/JavaCV libraries, which should provide shape detection (e.g. http://opencvlover.blogspot.co.uk/2012/07/hough-circle-in-javacv.html), but can anyone provide guidance on these specific questions:
Am I right to think it can detect the two circle shapes within a bitmap, one inside the other? i.e. shapes inside each other are not a problem.
Is it true that JavaCV can detect a circle and return a position and radius/diameter? i.e. it doesn't return a set of vertices that then require further processing to compare with a circle? It seems to have a HoughCircles method, so I think yes.
What processing of each frame is typically used before doing shape detection? For example, an algorithm to enhance edges, smooth, or remove colour?
Can I use it not just to detect the presence of, but to measure the diameter of, the detected circles? (In pixels, which can then easily be converted to real-world measurements because known hardware is being used.) I think yes, but it would be great to hear confirmation from those more familiar.
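For reference, this is the kind of call I'm looking at (an untested sketch using the plain OpenCV Java bindings, which JavaCV wraps; all thresholds are placeholder values that would need tuning on real eye images):

import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

// Each detected circle comes back as (centerX, centerY, radius):
// circles.cols() is the number of detections, circles.get(0, i) the i-th triple.
public class PupilCircles {

    public static Mat findCircles(Mat grayFrame) {
        Mat blurred = new Mat();
        // Smooth first so the gradient-based circle detector sees less noise.
        Imgproc.GaussianBlur(grayFrame, blurred, new Size(9, 9), 2, 2);

        Mat circles = new Mat();
        Imgproc.HoughCircles(
                blurred, circles,
                Imgproc.CV_HOUGH_GRADIENT,  // HOUGH_GRADIENT in OpenCV 3+
                1,                   // dp: accumulator resolution (same as input)
                blurred.rows() / 4,  // minimum distance between circle centres
                100,                 // param1: upper Canny threshold
                30,                  // param2: accumulator threshold (lower = more circles)
                10,                  // minRadius in pixels
                200);                // maxRadius in pixels
        return circles;
    }
}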
This project is a non-commercial charitable project, so any help especially appreciated.
I would really suggest using the NDK, as the native API is a bit richer in features. It also allows you to run and test your algorithms on a laptop with still images before pushing them to a device, speeding up development.
Pre-processing steps:
Typically one would use thresholding or Canny edge detection, plus morphological operations like erode and dilate.
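A rough sketch of that chain using the OpenCV Java bindings (the calls have the same names in C++ via the NDK); all thresholds and kernel sizes are placeholder values:

import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

// Pre-processing chain: grayscale -> blur -> Canny edges -> dilate/erode
// to close small gaps in the edge map before shape detection.
public class PreProcess {
    public static Mat prepare(Mat bgrFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(bgrFrame, gray, Imgproc.COLOR_BGR2GRAY);

        Mat blurred = new Mat();
        Imgproc.GaussianBlur(gray, blurred, new Size(5, 5), 0);

        Mat edges = new Mat();
        Imgproc.Canny(blurred, edges, 50, 150);

        // Morphological close: dilate then erode to join broken edge fragments.
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(3, 3));
        Imgproc.dilate(edges, edges, kernel);
        Imgproc.erode(edges, edges, kernel);
        return edges;
    }
}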
For detection of the iris/pupil, HoughCircles is not a very good method; feature-detection methods like MSER work better for not-so-well-defined circles. Here is another answer I wrote on the same topic which has code that could help.
If you are looking to measure the regions, I would suggest going through this blog. It has a clear explanation of the steps involved for a reasonably accurate measurement.
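A very rough sketch of the MSER route in the Java bindings, just to show the shape of the API; KeyPoint.size is only an approximate diameter, and via the NDK the C++ MSER class gives you the full region contours, which measure better:

import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.FeatureDetector;
import org.opencv.features2d.KeyPoint; // org.opencv.core.KeyPoint in OpenCV 3+

// Detect stable regions in a grayscale eye image and print their rough size.
public class PupilRegions {

    public static void detect(Mat grayEye) {
        FeatureDetector mser = FeatureDetector.create(FeatureDetector.MSER);
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        mser.detect(grayEye, keypoints);

        for (KeyPoint kp : keypoints.toArray()) {
            // kp.pt is the region centre, kp.size an approximate diameter in pixels;
            // with known optics those convert to real-world millimetres.
            System.out.println("centre=" + kp.pt + " approx diameter=" + kp.size);
        }
    }
}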
I am trying to track the locations of the corners of a sheet of paper as I move it relative to an Android camera (you can assume the sheet of paper will be a completely different color than the background). I want to find the x, y coordinates of each corner on the Android screen. I also want to be able to change the angle of the paper, so it won't necessarily appear perfectly rectangular all the time.
I am using OpenCV 2.4.1 for Android, but I could not find cvGoodFeaturesToTrack or cvFindCornerSubPix in the packages. Right now I am thinking of using the Canny algorithm to find the edges, then using the edges with findContours to find the main intersections of the lines, which should be the corners.
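Here is roughly what I have in mind so far (an untested sketch; the thresholds and the 0.02 approximation factor are placeholder values):

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

// Canny edges -> contours -> take the largest contour and approximate it with
// a coarse polygon, whose vertices should be the corners of the sheet.
public class PaperCorners {

    public static Point[] findCorners(Mat gray) {
        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50, 150);

        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        // Pick the contour with the largest area (assumed to be the paper).
        MatOfPoint biggest = null;
        double maxArea = 0;
        for (MatOfPoint c : contours) {
            double area = Imgproc.contourArea(c);
            if (area > maxArea) { maxArea = area; biggest = c; }
        }
        if (biggest == null) return null;

        // For a sheet of paper seen at an angle the approximation should
        // come out as a quadrilateral.
        MatOfPoint2f curve = new MatOfPoint2f(biggest.toArray());
        MatOfPoint2f approx = new MatOfPoint2f();
        double epsilon = 0.02 * Imgproc.arcLength(curve, true);
        Imgproc.approxPolyDP(curve, approx, epsilon, true);

        return approx.toArray(); // ideally 4 corner points in screen coordinates
    }
}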
Any suggestions or source code would be greatly appreciated.
I suggest two options:
1 - Use another OpenCV version that has those functions (you can check the online documentation).
2 - Use the FAST detector and SIFT descriptors. It's a widely used, up-to-date method for this kind of task. It finds good features across scales and is robust to lighting conditions, etc. You have to train on the marker (the sheet of paper) by extracting its features with SIFT, then use the FAST detector on the camera scene to detect and track those features.
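A rough sketch of that pipeline in the 2.4-era Java API. Note that SIFT lives in the nonfree module, so it may not be available in the stock OpenCV4Android package; ORB is a patent-free stand-in if that's the case.

import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;

// Train once on the marker (the sheet of paper), then match each camera frame
// against it; the resulting matches can be fed into findHomography to locate
// the corners of the sheet in the frame.
public class MarkerTracker {

    private final FeatureDetector detector = FeatureDetector.create(FeatureDetector.FAST);
    private final DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.SIFT);
    private final DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);

    private final MatOfKeyPoint markerKeypoints = new MatOfKeyPoint();
    private final Mat markerDescriptors = new Mat();

    // Extract and store the marker's features from a reference image.
    public void train(Mat markerGray) {
        detector.detect(markerGray, markerKeypoints);
        extractor.compute(markerGray, markerKeypoints, markerDescriptors);
    }

    // Match features in the current camera frame against the trained marker.
    public MatOfDMatch matchFrame(Mat frameGray) {
        MatOfKeyPoint frameKeypoints = new MatOfKeyPoint();
        Mat frameDescriptors = new Mat();
        detector.detect(frameGray, frameKeypoints);
        extractor.compute(frameGray, frameKeypoints, frameDescriptors);

        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(frameDescriptors, markerDescriptors, matches);
        return matches;
    }
}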
Can anyone help me with sample code for a compass needle that points in the direction where the phone is tilted?
I am trying to develop an application to rotate a bitmap in OpenGL from accelerometer values.
OK, so thanks to the comments above we do not have to use OpenGL, and while you could, I personally believe you can make life simpler by using a custom View.
Now, in traditional StackOverflow fashion, I am not going to give you the code for this, but an extremely large leg up. There is a thermometer example available here: http://mindtherobot.com/blog/272/android-custom-ui-making-a-vintage-thermometer/
Why have I sent you here?
It contains an example that renders a dial exceedingly close to a compass; with a few minor tweaks it could easily become a compass in terms of design. You will just need to remove the temperature-related code and use the accelerometers instead.
It is a good intro to custom views and will show you how to get started.
I made a clock after following the tutorial just as another possibility to inspire you.
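To give a feel for the custom-View approach, here is a bare-bones sketch (the names are mine, not from the tutorial): your SensorEventListener computes a tilt angle from the accelerometer values and calls setAngle(); onDraw just rotates the canvas and draws a needle.

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

// Minimal needle view: rotate the canvas by the supplied angle and draw a line.
public class NeedleView extends View {

    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private float angleDegrees = 0f;

    public NeedleView(Context context) {
        super(context);
        paint.setColor(Color.RED);
        paint.setStrokeWidth(8f);
    }

    // Call this from onSensorChanged with the tilt direction you computed.
    public void setAngle(float degrees) {
        angleDegrees = degrees;
        invalidate(); // schedule a redraw
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        float cx = getWidth() / 2f;
        float cy = getHeight() / 2f;
        canvas.save();
        canvas.rotate(angleDegrees, cx, cy);                        // spin the whole canvas
        canvas.drawLine(cx, cy, cx, cy - getHeight() / 3f, paint);  // the needle
        canvas.restore();
    }
}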