-- Background:
We are working on a device called Run-n-Read which tracks a user's head movements and translates them into matching text movement on the screen. Its purpose is to help a person read while running on a treadmill or riding in a moving vehicle. You can check a small video at http://weartrons.com.
We have created a small device containing an accelerometer, a microcontroller, and a Bluetooth radio that sends the head location in real time to the tablet every ~17 ms, to match the display's 60 fps. We used the Processing IDE to create a basic app with downloaded book pages to test the prototype.
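To make the idea concrete, here is a minimal sketch (in Python, not the actual Run-n-Read firmware or algorithm) of the kind of per-frame processing involved: smoothing the stream of head-displacement samples that arrive every ~17 ms and turning them into whole-pixel text offsets. The function name and smoothing factor are illustrative assumptions.

```python
# Hypothetical sketch: exponential moving average over (dx, dy) head
# displacements, producing integer pixel offsets for each display frame.
def smooth_offsets(samples, alpha=0.3):
    """samples: iterable of (dx, dy) head displacements in pixels.
    Returns one smoothed integer (dx, dy) offset per input sample."""
    sx = sy = 0.0
    out = []
    for dx, dy in samples:
        # Blend the new reading with the running average to suppress jitter.
        sx = alpha * dx + (1 - alpha) * sx
        sy = alpha * dy + (1 - alpha) * sy
        out.append((round(sx), round(sy)))
    return out
```

A real implementation would run on the microcontroller and tune `alpha` against the feel of the text motion, but the structure of the loop is the same.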
-- PROBLEM:
We would like to run our app in the background and dynamically change the display coordinates of any other app's contents on the screen, whether it's an eBook, Twitter, etc. Basically, our algorithms run on our external device and send the display coordinates (in pixels to move up/down, left/right) about 60 times per second. We would like the Android display origin to move by that many pixels during every frame render.
I am an electronics engineer and it's my first stab at writing any piece of software, so please let me know if I was not clear or the answer is too obvious.
Android as an OS makes sure applications are encapsulated and oblivious to each other. All inter-app communication is done through what are called intents, which are in the end messages. You have to know exactly which intents the other app declares, and on top of that you have no assurance that other apps implement the kind of feature you are requesting.
Therefore I don't think what you want to do (the coordinate change) is possible at all without tinkering with the OS source code and compiling your own version of Android.
Related
Skip the first two paragraphs if you're not interested in why I'm asking this question.
Here is the situation: I'm using a Moto Z Play with the projector mod. The mod is really cool and lets me literally project my phone screen onto the wall. I've been writing a personal assistant program that helps me with my daily life, e.g. sorting Gmail, reminding me of calendar events, keeping track of anything I want it to remember and reminding me of those things when I've asked it to, and much more. It's basically a personal secretary.
One new feature I just added was a habit tracker. I created a small graphical interface on my phone using Tasker that emails my "assistant", which then records the habit and creates a really cool graph showing my past habit record, as well as using a neural network to predict the next day's habit. The only problem is, the graph got really intricate really fast. I want to show a month's worth of habits (16 total habits), creating what can be up to a 16 x 31 floating-point graph with labels. My laptop screen is just not big enough to display all of that without it being a mess! I really want to display the graph from my projector mod; the entire wall will definitely be big enough to show all that data.
Ok, now my question (thanks for hanging in there I know that was a lot):
Is there any way that I can display an image on my phone from a Python program without creating a standalone app? Even if my phone needs to be plugged into my computer to stream the data through a cable.
I would use a service like Kivy to create a standalone app, but then it wouldn't be hooked up to my assistant, completely defeating the purpose.
I'm not looking for anything similar to a notification, I really want to draw over the entire screen of my phone. This is something I did with Processing (Java library) a while back, but now I'm using Python because it's more machine learning friendly.
I've looked into a lot of services but nothing seems to be able to do this. Remember that I don't need to send anything back from my phone; I simply need to display an image on the screen until the desktop-side program tells it to stop.
Not my expertise, but if I needed to do something like that I would make a web service out of the Python app using Django and open the URL on my phone. Don't know if it helps...
Regardless of "how" or "what", the answer is: you will always need some software running on the Android device to capture the stream of data (images) and display it on the screen.
The point is, you don't have to write this software yourself. The obvious example that comes to mind is to use any DLNA-compatible software, VLC for example, have your Python program generate an H.264 stream, and point VLC at it. Another way would be to expose an HTTP service from your Python program and simply load it in the phone's browser.
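The HTTP route can be sketched with nothing but the standard library. The port, the `graph.png` filename, and the `serve_graph` helper below are my own choices, not anything from the question; the idea is just that the desktop serves the latest graph image and the phone's browser fetches it.

```python
# Minimal sketch: serve the current directory (which contains graph.png)
# over HTTP so a phone browser can load http://<laptop-ip>:8000/graph.png.
import http.server
import socketserver
import threading

class GraphHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # Disable caching so the phone always fetches the newest graph.
        self.send_header("Cache-Control", "no-store")
        super().end_headers()

def serve_graph(port=8000):
    """Start serving in a background thread; returns the server object."""
    httpd = socketserver.TCPServer(("", port), GraphHandler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd
```

The assistant would simply overwrite `graph.png` whenever it redraws, and the phone (or the projector mod) reloads the page to show the update.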
hope it helps.
Hey all, I'm a recently graduated BS in Mechanical Engineering working on a project that is getting into the field of CS: I am looking to remake a treadmill after its 1990s motherboard finally quit.
I have the following assets:
Treadmill with broken motherboard (all other components tested and functional)
Touch screen monitor similar to this
A Polar heart rate monitor.
Multiple hard drives, joysticks and other USB accessories.
NI LabVIEW full subscription suite
2 functioning (2000's era) laptops with no OS.
Solidworks
Local maker's space
I have a few main goals and stretch goals, and I'd like some advice as to which should be easy enough to implement and which will take me a research team and 5 years.
This should be easy... right?
Get a PID controller set up with a microcontroller to spin the treadmill belt at [n] mph and adjust the incline to [n2] degrees based on a hardware dial, knob, or push-button physical input
* get microcontroller to read motor encoders for speed/incline
* get microcontroller to recognize input from a physical button
* get microcontroller to compare current speed/incline values with target values and increase/decrease current to motors appropriately
* have microcontroller display info on LCD screen
Change from physical input to touchscreen input.
* Figure out what they're doing [in link 1 in comments below] and adjust for what I currently have (or buy fresh if absolutely necessary)
* change input from hardware buttons to software <up> <down> arrows
* Add hardware E-stop
It looks like there are plenty of libraries and devices online doing elements of these two steps; combining them may be difficult due to my inexperience, but shouldn't be hard for the hardware or software.
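The compare-and-adjust step in the list above is a textbook PID loop. Here is a sketch of it in Python rather than microcontroller C (the gains and the toy motor model in the usage below are made-up numbers; real values have to be tuned on the actual treadmill motor):

```python
# Illustrative PID controller: compare measured speed/incline against the
# target and produce a drive correction each control tick.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt                  # accumulate steady-state error
        derivative = (error - self.prev_error) / self.dt  # react to rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

On the microcontroller, `measured` would come from the motor encoder and the return value would set the motor current (PWM duty cycle); the same class, with different gains, handles incline.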
Medium Difficulty (I saw a guy do this once)
Upload some kind of Linux distribution or other OS onto my microcontroller and turn my program into an application.
* Learn how to install Linux/other OS
* Compile program as application
* Section off the bottom of the LCD screen as a treadmill-specific taskbar
* (bonus round) Make treadmill-specific taskbar able to be moved and snapped (similar to the Windows taskbar)
Add feedback from a heart rate monitor to the treadmill for heart rate PID control
* SparkFun has a Single Lead Heart Rate Monitor - AD8232 [Link 2]; write an application to read the monitor and control the treadmill program accordingly.
I feel like this is theoretically possible but I don't really know how I would go about it. I also see how either of these tasks could be infinitely more complex than I'm thinking it will be.
Hard mode (Is this even possible?)
Add smartphone-style functionality.
* Install Android OS onto microcontroller
* Install Google Play store
* dedicate a set of pixels to the "treadmill OS" and the rest to the "smartphone"
* Add some sort of hook for the "treadmill OS" into the Android OS, and maybe write a few apps to control the treadmill based on [arbitrary value in app]
If I can do this, why are all the super expensive and advanced treadmills on the market so crappy in terms of their software?
For my skill set, I'm pretty good on how to physically put everything together (but I will need to make a few posts on the Electronics Stack Exchange about how to get something the size of a smartphone to regulate 120 V 60 Hz power correctly).
My main question is how much of this is actually conceivable to do, and if I am to do it in a way that satisfies all my desires, should I:
A) Look to buy a particular type of microcontroller to do all of this (recommendations would be appreciated)
B) Start with one of my two laptops and write an interface for a microcontroller that just does the easy stuff
C) Install the Android OS on one of my laptops and begin writing a [treadmill app]
D) Do something I haven't thought of, because this is not my field.
PS: Although this is a DIY project, when it comes to the coding I really don't want to be reinventing the wheel, so please let me know about any libraries or resources that may exist which could be helpful.
Wow, what a project!
Getting the treadmill working
If your goal is to "get the treadmill working," then don't bother with any of this; instead focus on debugging the motherboard. There's probably just 1 component that went bad, and it will be easier and faster to fix that than to build everything you mentioned up through easy/medium/hard modes. But I know your goal is learning and fun, not simply to get it working :)
Control loops and data collection
As you've already identified, you need something for low-level access to the hardware (controlling the treadmill and reading heart rate back). This type of work is perfect for a micro, so you're on the right path there. Android or Linux are needlessly complex for these tasks, and implementing them will be a lot more work for you, with not much advantage.
User interaction
At a bare minimum, the existing physical buttons and knobs will directly control the micro. Once you hit that checkpoint, congratulations, your treadmill works again.
But you don't want "working", you want "cool". You mentioned a few different ways for users to interact with your system: displays, touch screens, phones, etc. Already this is going to be a huge project, so don't waste time reinventing the wheel by trying to manually implement those things. Find a working system (your laptop, daily cellphone, or even a cheap tablet online), and use that to talk with your low-level micro over something like Bluetooth or WiFi.
Choosing the right tools
If you pick something obscure, expect to spend tons of time simply trying to get basic functionality out of it. So in general, you want to pick hardware & software that:
is robust (many people use it with minimal issue)
has a large community (for support from other experts/hobbyists)
has a large ecosystem (with lots of libraries that you can leverage)
The Arduino might be a good micro for you. Look into that.
For the "cool" display, your personal phone is probably the best option. The app development for your phone is robust and will have tons of support when you need it.
Other thoughts
You mentioned LabVIEW: stop doing that. It's the wrong tool for almost every goal you have.
You asked how to regulate mains power down to a small board: buy/find any old wall-wart adapter from old electronics around your home, cut off the tip, and connect the wires to your board. Done (all the magic is inside the brick).
You asked which approach is best: B. Get the treadmill working with a basic micro. Then add wireless to the micro. Then write an app to give you a sweet display and control of the treadmill (via the micro).
E-stop. Smart.
I am developing an Android application that requires devices to be laid side by side and/or above and below each other.
I know I can use the Nearby API to detect devices "Nearby" however I need something a little more "Finer Grained".
My app needs to be able to identify a device lying either to the left, above, to the right, or below, while all devices are lying flat on a table (for instance).
I can find nothing on the web that describes this use case.
Is it possible?
UPDATE
My use case is that I want Android devices to be able to detect any number of "other devices" lying either to their left or right. The devices will be laid out horizontally with a "small" gap between each one.
In the same way that you might layout children's lettered blocks to spell out a word or phrase, or numbered blocks to make a sum.
Not only should the devices in the line be able to detect their immediate neighbours to the left and right; the two devices at either end should also be able to detect that they are the start and end (reading left to right) of the line.
Using proximity sensors is a likely way to approach this. TYPE_PROXIMITY gives the distance from a nearby object. TYPE_MAGNETIC_FIELD gives the geomagnetic field strength on the x/y/z axes.
For more read Position Sensors.
Make your own mock GPS (local positioning system, to be exact). I don't have a link for this, but it's definitely possible; check out how GPS works to get an idea. Wi-Fi and Bluetooth are signals, but you know what else is a signal?
A: SOUND
Make each phone emit a loud beep in turn and measure the audio strength on the receivers. This might work better than Wi-Fi/Bluetooth. Once you have measured relative distances between every pair of phones, it only takes a good algorithm to find the relative positions.
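The "good algorithm" step is not hard for a single line of phones. Here is a sketch (my own construction, under the assumption that the beep measurements yield noisy but monotonic pairwise distance estimates): the farthest-apart pair must be the two ends of the line, and everyone else sorts by distance from one end.

```python
# Recover a 1-D left-to-right ordering of phones from estimated pairwise
# distances (e.g. derived from beep loudness or arrival times).
def order_in_line(dist):
    """dist[i][j] = estimated distance between phone i and phone j.
    Returns phone indices in line order (direction is ambiguous)."""
    n = len(dist)
    # The pair with the largest mutual distance are the two endpoints.
    end = max(((i, j) for i in range(n) for j in range(i + 1, n)),
              key=lambda p: dist[p[0]][p[1]])[0]
    # Every phone's position along the line is its distance from one end.
    return sorted(range(n), key=lambda k: dist[end][k])
```

Note that distances alone cannot tell left from right; the result could equally be the reversed order, which is exactly the anchor problem discussed further down.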
A possible alternative solution: use image processing. Get something like OpenCV for Android and set up one phone as a master. This will only work for a 2D layout.
Another idea: use the cameras. Stick a board on top of your surface with 4 QR codes, one in each corner (this will help identify the edges and orientation of your phone). If you're looking for a 3D layout and the phones have sufficient in-between space, you could stick a QR code behind every phone and show a QR code on the screen of every phone.
All of these are possible solutions. Maybe you can use them individually, maybe in combination. Who knows.
An idea, in case it's relevant for your use case:
Setup phase
start your app on each device in "pairing mode".
Each device will show a QR code containing the key required for communicating with the device (for example via Firebase), and screen details: size in pixels. It will also draw a rectangle at the screen boundaries.
A different phone, external to this layout, will run your app as a "master", taking a picture of the phones from above.
Now you need to write an algorithm to identify the screens and their locations, orientation and extract the QR codes for analysis. Not easy, but doable.
Interaction phase
Now all the phones (this should work with more than two phones) can collaborate, using their screens to show parts of the same movie, for example.
Seemingly not with only 2 devices. But if you have external sources (with known positions) of any signal (audio, vibration, BT or Wi-Fi radio, etc.) which the devices can detect with adequate accuracy, and the devices' clocks are synchronized, you can do this by comparing the signal's start time (or signal strength) on both devices, as in this picture:
Or, if you can add some sensors to one of the devices, you can create an "other device locator", for example like this sound locator.
UPDATE
In the updated formulation, the issue is still not solvable as stated: it's possible to determine which two devices are at the edges, but you cannot determine which one is on the left and which is on the right. At least one device needs to know that it is, for example, the leftmost. Then one device can generate a sound, the others receive it and determine their order according to the differences in arrival time. But an anchor point and time synchronization are necessary.
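With those two prerequisites granted, the ordering step itself is simple. Here is a sketch (my own illustration, assuming synchronized clocks and that each device reports when it heard the anchor's beep): sorting by arrival time gives the left-to-right order, and the arrival-time differences times the speed of sound even give rough relative spacing.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def order_from_anchor(arrivals):
    """arrivals: {device_id: arrival time (s) of the leftmost anchor's beep,
    on a shared clock}. Returns (devices in left-to-right order,
    approximate distance of each device past the nearest one, in metres)."""
    ordered = sorted(arrivals, key=arrivals.get)
    t0 = arrivals[ordered[0]]
    spacing = {d: (arrivals[d] - t0) * SPEED_OF_SOUND for d in ordered}
    return ordered, spacing
```

The spacing values are relative to the first (nearest) device, since the anchor's emission time is not needed for ordering.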
From my understanding of your use case, it is possible to find the number of devices surrounding the host device using the Nearby API and other techniques. But finding how many devices are on each side? I don't think that is possible with current mobile hardware and technology, because, considering all factors, magnetic sensors are the only faintly plausible solution, and current phones have no such capability.
The following points are based on the answers above.
TYPE_ACCELEROMETER, TYPE_LINEAR_ACCELERATION, TYPE_MAGNETIC_FIELD, and TYPE_ORIENTATION are the sensors said to react to the magnetic field around the device (a compass reacts to a magnet). You can try an app using TYPE_MAGNETIC_FIELD and test how it reacts when another device is close to it (I think it will react).
But the point I am trying to make here is: if you put three devices on one side and four devices on the other, the MAGNETIC_FIELD sensor only reads the aggregate magnetic field. So you can't identify how many devices are on each side unless you do some serious calculations.
The second point is that someone suggested the TYPE_PROXIMITY sensor, but it is not meant to serve this purpose. Current phones measure the proximity of an object in centimetres relative to the device's screen; this sensor is typically used to determine whether a handset is being held up to a person's ear.
Another remote possibility is using the location sensor: it can identify coordinates relative to your device's coordinates, and you could communicate each device's coordinates to the host using NFC. But the problem is that in your use case the devices are very close to each other, so the gaps are not measurable using location services.
To conclude: it is not possible to identify the number of devices on each side of a host device with current mobile hardware. It could be achieved with an external sensor that extends the phone's capability; for example, a phone case equipped with such a sensor would open a window to other use cases and applications as well.
I can think of a way, but it may require a bit of work. First, check whether the two devices are lying flat by getting the device orientation and using the accelerometer or rotation vector to check pitch, roll, etc.
When you are sure they are lying flat, send data from one device to the other using BT or Wi-Fi. The data should include the send time. Check the receive time on the other device, and also account for the latency of sending and receiving. If you can observe noticeable time differences in milliseconds for small distance differences between devices, it would be easy to check approximately how close they are. You could also ask users to hold their devices one metre (or some fixed distance) apart to calibrate the travel time of the BT or Wi-Fi signal.
I am developing Glassware which shows the user cards (images + text) from a list. To see all the cards one by one, you just need to move your head around and the information changes. The application is somewhat similar to the What's Around glassware.
I have implemented SensorManager to get the user's current orientation, and based on the current direction the application decides which card to show.
All the images come from a web server, so each time a card is selected there is a network call for the images (bitmap caching is implemented).
Now, my problem is that when I start and use the application, the Glass heats up very quickly, in less than a minute.
Could anyone please suggest anything to fix this issue?
Note: Google Glass OS version XE22
I want to implement a feature like copying an image file from one device to another. During the image transfer, I need to update the UI simultaneously on both sides. For example, the image flies out of device A and then flies into device B. On the user's side, he/she just sees the image move from one screen to the other, and then the transfer is complete.
One possible way I'm thinking of so far is to display an animation during the image transfer. But I don't know how to display an image partially on screen A and partially on screen B. I hope someone could give me some hints. Thanks a lot.
The trick is to find the time difference between the two devices.
I wrote an app that performed synchronized playback of an audio file on multiple devices. To synchronize the devices, I had them ping a time server and note how much each device's clock differed from the server's clock. With this offset value, I was able to do a reasonably good job of synchronizing the playback. I'm glossing over a lot of the details (latency, variability, leap seconds, etc.), but this was the basic idea.
To synchronize the UI on both devices, the two devices need to know the difference between each other's clock. Once you have this value, you simply time the animation appropriately. I've only ever done it with a server, but if the two devices are talking to each other for the file transfer, perhaps you could have one device ask the other for its time and compute the offset.
Tip: compute the difference several times, then use standard deviation to select a good value. If you want to really study how this is done, check out how NTP does it: http://en.wikipedia.org/wiki/Network_Time_Protocol
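The tip above can be sketched concretely. This is my own simplified version of the NTP idea, not code from the app described: each sample records the local send time, the server's timestamp, and the local receive time; the offset estimate assumes symmetric network delay, and samples more than one standard deviation from the mean are discarded.

```python
# Estimate the offset between a local clock and a server clock from
# repeated ping samples, filtering outliers by standard deviation.
import statistics

def clock_offset(samples):
    """samples: list of (t_send, t_server, t_recv), where t_send/t_recv are
    local-clock times and t_server is the server's timestamp.
    Returns the estimated offset (server clock minus local clock), seconds."""
    # Assume the server timestamp corresponds to the midpoint of the round trip.
    offsets = [t_server - (t_send + t_recv) / 2
               for t_send, t_server, t_recv in samples]
    mean = statistics.mean(offsets)
    if len(offsets) > 1:
        sd = statistics.stdev(offsets)
        kept = [o for o in offsets if abs(o - mean) <= sd] or offsets
    else:
        kept = offsets
    return statistics.mean(kept)
```

With the offset in hand, device A schedules the "fly out" animation at local time `t` and tells device B to start the "fly in" at `t + transfer_duration - offset`, so both animations line up on a common timeline.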