I don't have much experience in Android development.
I'm considering a big project, and before getting deep into it, I want to check whether my requirements are even feasible:
My goal is to manipulate the system by changing the coordinates of a user's touch on the touchscreen. For example: If a user is touching the screen on point (X,Y), I want any opened application to act like the user touched (X+5,Y-3).
I have thought of a few levels at which this might be implemented:
the touchscreen driver level, the OS level, or the application level (i.e. a background application).
A big advantage would be to build it in a way that allows as much compatibility as possible.
What is the best/right way to do it?
I'm not looking for a full solution, only a hint regarding the best direction to start digging...
Thanks in advance.
I've been exploring 3D scanning and reconstruction using Google's project Tango.
So far, some apps I've tried like Project Tango Constructor and Voxxlr do a nice job over short time-spans (I would be happy to get recommendations for other potential scanning apps). The problem is, regardless of the app, if I run it long enough the scans accumulate so much drift that eventually everything is misaligned and ruined.
Drift is also very likely whenever I point the device at a featureless space like a blank wall, or when I point the cameras upward to scan ceilings. The device gets disoriented temporarily, destroying the alignment of future scans. Whatever the case, getting the device to know where it is and what it is pointing at is a problem for me.
I know that some of the 3D scanning apps use Area Learning to some extent, since these apps ask me for permission to allow area learning upon startup of the app. I presume that this is to help localize the device and stabilize its pose (please correct me if this is inaccurate).
From the apps I've tried, I have never been given an option to load my own ADF. My understanding is that loading a carefully learned, feature-rich ADF helps to better anchor the device pose. Is there a reason for this dearth of apps that allow users to load their homemade ADFs? Is it hard/impossible to do? Are current apps already leveraging area learning optimally to localize, such that no self-recorded ADF I provide could ever do better?
I would appreciate any pointers/instruction on this topic - the method and efficacy of using ADFs in 3D scanning and reconstruction is not clearly documented. Ultimately, I'm looking for a way to use the Tango to make high-quality 3D scans. If ADFs are not needed in the picture, that's fine. If the answer is that I'm attempting an impossible task, I'd like to know that as well.
If off-the-shelf solutions are not yet available, I am also willing to try to process the point cloud myself, though I have a feeling it's probably much easier said than done.
Unfortunately, no existing Tango application can do this at the moment; you will need to develop your own application for it. In case you are wondering how to do this in code, here are the steps:
First, the application's learning mode should be on. When we turn learning mode on, the system starts to record an ADF, which allows the application to recognize an area it has been to before. For each point cloud we save, we should also save the timestamp associated with its points.
After walking around and collecting points, we need to call the TangoService_saveAreaDescription function from the API. This step runs an optimization over each key pose saved in the system. After saving is done, we use the timestamp saved with each point cloud to query the optimized pose again; to do that, we use the TangoService_getPoseAtTime function. After this step, you will see each point cloud set to the right transformation, and the points will overlap correctly.
Just as a recap of the steps (a rough code sketch follows this list):
Turn on learning mode in Tango config.
Walk around, save point cloud along with the timestamp associated with the point cloud.
Call the TangoService_saveAreaDescription function.
After saving is done, call TangoService_getPoseAtTime to query the optimized pose based on the timestamp saved with the point cloud.
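A rough sketch of those four steps using the Tango Java API (the answer above names the C API; these Java calls are my assumed equivalents, and savedClouds/TimestampedCloud are hypothetical helpers, not part of the SDK):

    // Step 1: turn on learning mode before connecting.
    TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
    config.putBoolean(TangoConfig.KEY_BOOLEAN_LEARNINGMODE, true);
    tango.connect(config);

    // Step 2: in the point-cloud callback, keep each cloud with its timestamp.
    @Override
    public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
        savedClouds.add(new TimestampedCloud(xyzIj.timestamp, xyzIj.xyz));
    }

    // Step 3: optimize the recorded key poses and persist the ADF.
    String adfUuid = tango.saveAreaDescription();

    // Step 4: re-query the optimized pose for each saved timestamp.
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE);
    for (TimestampedCloud cloud : savedClouds) {
        TangoPoseData pose = tango.getPoseAtTime(cloud.timestamp, framePair);
        // Apply pose to the cloud's points so all clouds share one frame.
    }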
Is it possible to determine whether a (non-rooted) device is in use at the moment, even if my app is not in the foreground? To be precise, "in use" means the user made touch events in the last 5 seconds or the display is on.
If so, what specific permissions are required?
Thanks
AFAIK, the Android security model will not allow you to record touches if your app is not in the foreground.
There are some crude workarounds, like overlaying a transparent window to record touches (a sketch follows below). I'm not sure whether these still work, though.
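For illustration only, here is a rough sketch of that overlay trick, assuming the SYSTEM_ALERT_WINDOW permission is granted and lastTouchTime is a field you maintain; note that since Android 4.2, ACTION_OUTSIDE events carry no real coordinates, so at best this tells you that a touch happened, not where:

    // A 1x1 transparent window that watches touches landing outside itself.
    WindowManager wm = (WindowManager) getSystemService(Context.WINDOW_SERVICE);
    View overlay = new View(this);
    overlay.setOnTouchListener((v, event) -> {
        if (event.getAction() == MotionEvent.ACTION_OUTSIDE) {
            lastTouchTime = SystemClock.elapsedRealtime(); // "in use" marker
        }
        return false;
    });
    WindowManager.LayoutParams lp = new WindowManager.LayoutParams(
            1, 1,
            WindowManager.LayoutParams.TYPE_SYSTEM_OVERLAY, // TYPE_APPLICATION_OVERLAY on API 26+
            WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE
                    | WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL
                    | WindowManager.LayoutParams.FLAG_WATCH_OUTSIDE_TOUCH,
            PixelFormat.TRANSLUCENT);
    wm.addView(overlay, lp);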
"in use" means the user made touch events in the last 5 seconds
In Android, that's not practical, short of writing your own custom ROM.
or display is on
In Android, you can find out if the device is in an "interactive" mode or not. This does not strictly align with screen-on/screen-off, as the whole notion of screen-on/screen-off has pretty much fallen by the wayside.
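As a sketch, the check looks roughly like this; isInteractive() exists from API 20, and isScreenOn() is the deprecated equivalent on older releases:

    // Ask PowerManager whether the device is currently "interactive".
    PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
    boolean inUse = Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT_WATCH
            ? pm.isInteractive()  // API 20+
            : pm.isScreenOn();    // deprecated on newer releases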
I want to implement 3D Touch in Android, just like the 3D Touch on the iPhone 6S and 6S Plus.
I looked around on Google and couldn't find any consistent material.
I could only find an example in the Lua language, and I am not sure yet whether it's exactly what I am looking for.
So I thought that if there are no libraries out there, maybe I should implement the algorithm from scratch, or perhaps create a library for it.
But I don't know where to start. Do you have any clue?
I believe you could implement something similar using MotionEvent: it has a getPressure() method that is supposed to return a value between 0 and 1 representing the amount of pressure on the screen. You could then do something different depending on the amount of pressure detected.
Note that some devices do not support this feature, and some (notably the Samsung Galaxy S3) will return inconsistent values.
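A minimal sketch of that idea inside a custom View; the 0.8 threshold is an arbitrary illustrative value, and onDeepPress()/onNormalPress() are hypothetical helpers:

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        int action = event.getActionMasked();
        if (action == MotionEvent.ACTION_DOWN || action == MotionEvent.ACTION_MOVE) {
            // getPressure() is nominally 0..1, but calibration varies by device.
            if (event.getPressure() > 0.8f) {
                onDeepPress(event.getX(), event.getY());   // hypothetical "hard press" path
            } else {
                onNormalPress(event.getX(), event.getY()); // hypothetical normal tap path
            }
            return true;
        }
        return super.onTouchEvent(event);
    }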
I don't think it is possible on currently available Android devices. 3D Touch is hardware technology embedded in the displays of iPhones; I don't think you can implement it just by writing some code in your Android application.
Short answer - no.
You would need to wait for Google to copy the technology, if it proves to be useful. But I doubt that will happen in the near future, because Android is all about accessibility and these screens would be quite expensive.
Long answer - Android is open source. If you are making something internal, then go ahead; it will allow you to do that with some modifications. Build a device, put in your modified code, create your own application that takes advantage of the feature, and be happy to announce it to the world.
I am looking into developing a new input method on the Android platform that would emulate touch input on the screen. Is it possible to create a service that can interact directly or indirectly with the touch API to achieve this?
Specifics:
The interactions will come in the form of colour tracking from the camera which is processed into x/y coordinates and touch:0/1 events. Is it possible to have these interact with the touchscreen just as if it were a touch on the screen itself?
I understand that there may be permission problems with this 'injection' or piggybacking approach.
Also this is a technical exercise for an experimental report rather than a distributable app/piece of software so root/modifications are not a problem.
I have searched the subject to no avail (at least for the Android platform), and I would like to find out the feasibility/difficulty of the project before undertaking it, so any input would be much appreciated!
I'm sort of guessing here, but MotionEvent has some set...-functions, like setLocation(float x, float y). There is also MotionEvent.PointerCoords to play with.
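To make that concrete, here is a hedged sketch of synthesizing and injecting a tap. Instrumentation#sendPointerSync is a real API, but it only reaches other apps with the INJECT_EVENTS permission (system/root), which the question says is acceptable; x and y would come from your colour tracker:

    // Build a synthetic down/up pair at the tracked (x, y) and inject it.
    long downTime = SystemClock.uptimeMillis();
    MotionEvent down = MotionEvent.obtain(
            downTime, downTime, MotionEvent.ACTION_DOWN, x, y, 0);
    MotionEvent up = MotionEvent.obtain(
            downTime, downTime + 50, MotionEvent.ACTION_UP, x, y, 0);
    Instrumentation inst = new Instrumentation();
    inst.sendPointerSync(down); // "finger down"
    inst.sendPointerSync(up);   // "finger up" 50 ms later
    down.recycle();
    up.recycle();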
I'm interested in using Android for an E-Ink based platform. I know it has been demonstrated once by MOTO, but I'm interested in using it for a commercial-grade product and not 'just' a technology demo. I have a question about the ability to change the platform to cope with the specific display effects caused by E-Ink. I'm asking this question in the role of system architect, and I have no prior experience with Android.
E-ink has several characteristics which are very different from common LCD displays:
time to update display (50-700ms)
it costs power to change the display (none to maintain)
display lifetime is determined by the number of display updates!
tradeoffs can be made between quality, performance and display lifetime
grayscale versions available
The great thing: it costs no power to retain display information and they can be read in bright sunlight with no backlight. Also the display can be literally as thin as paper...
This means that the platform software needs a degree of control over the number and type of display updates to get the best performance. Otherwise, an application which is unaware of the display characteristics could quickly drain the battery or, worse, shorten the display lifetime to months instead of years. Conceptually, I'd be interested in replacing the display driver, but I'm not sure if this part is open. I know it is hard to get info on the Qualcomm chipsets....
My question: can this be done? Can the Android platform be modified to support a drastically different display effect? Any pointers to an android roadmap?
The reason I find Android interesting for this application is because there is a significant overlap in functionality (from cell phone to browser).
Thanks!
I could not agree more, and I have started to lobby app and OS developers on improving readability on e-ink:
Make scrolling and page turns e-ink friendly http://github.com/aarddict/android/issues/28#issuecomment-3512595
Looking around on the web I find a recurring theme "we had to rebuild WebView from scratch to adapt it to the e-ink display"
There are already coding solutions which reduce flicker and page refreshes. Most of them are kept by the companies that market the e-ink readers, who prefer to keep the devices as frontends to their shops.
I contacted the author(s) of Cool Reader about their implementation of smooth scrolling on e-ink devices and got the following reply:
"Hello. Look at N2EpdController.java. The author is DairyKnight from xda-developers. At least you can use it under the GPL. For use in a closed project, I would recommend contacting him."
Ideally, display components for e-ink devices should be part of WebKit's WebView framework. I've submitted a feature request via
http://bugs.webkit.org/show_bug.cgi?id=76429
FYI, E-Ink has an Android-on-E-Ink development kit, the AM350, that's being sold now. http://www.eink.com/sell_sheets/AM350_Kit_Sell_Sheet.pdf
http://www.linuxworld.com/news/2007/112707-kernel.html
In this case the application domain is e-reading, where the advantages of E-ink are more important than the disadvantages (slow display updates).
I've done some further study of Android. I believe the trick is to perform display updates asynchronously: provide applications with an environment that mimics immediate display updates, while detecting the relevant updates (e.g. by using the graphics processor and/or MMU) to perform an intelligent display update. Not all types of applications would be suitable; games and video playback, for example, require immediate display updates. A conceptual sketch of coalesced updates follows below.
Making such a platform will be less than trivial; however, with the growing number of different hardware platforms, abstractions are becoming better all the time.
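Purely as a conceptual sketch of that idea - EInkController.update() is an invented name standing in for whatever the panel driver exposes, not a real Android API - coalescing could look like this:

    import android.graphics.Rect;
    import android.os.Handler;
    import android.os.Looper;

    // Conceptual only: merge dirty regions and flush them at a bounded rate,
    // so update-unaware apps cannot trigger an update storm that drains the
    // battery or wears out the panel.
    class CoalescingUpdater {
        private static final long MIN_UPDATE_INTERVAL_MS = 250; // tune per panel
        private final Rect pending = new Rect();
        private final Handler handler = new Handler(Looper.getMainLooper());
        private boolean scheduled = false;

        synchronized void invalidate(Rect dirty) {
            pending.union(dirty); // merge with damage already waiting
            if (!scheduled) {
                scheduled = true;
                handler.postDelayed(this::flush, MIN_UPDATE_INTERVAL_MS);
            }
        }

        private synchronized void flush() {
            EInkController.update(pending); // hypothetical driver entry point
            pending.setEmpty();
            scheduled = false;
        }
    }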
I know this is an old question, but I have found it through Google - others might want to know this too.
The PocketBook Pro 902/903 are based on Android and feature an e-ink screen. You might want to check them out. There might be other models too - I am interested in these because of their 10" screen. YMMV.