Python: Displaying Graphics on Android Screen from Desktop Script

Skip the first two paragraphs if you're not interested in why I'm asking this question.
Here is the situation: I'm using a Moto Z Play with the Projector Mod; the mod is really cool and allows me to literally project my phone screen onto the wall. I've been writing a personal assistant program that helps me with my daily life, e.g. sorting Gmail messages, reminding me of calendar events, keeping track of anything I want it to remember and reminding me of those things when I've asked it to, and much more. It's basically a personal secretary.
One new feature I just added was a habit tracker. I created a small graphical interface on my phone using Tasker that emails my "assistant", which then records the habit and creates a really cool graph that shows my past habit record, as well as using a neural network to predict the next day's habit. The only problem is, the graph got really intricate really fast. I want to show a month's worth of habits (16 total habits), creating what can be up to a 16 x 31 floating-point graph with labels. My laptop screen is just not big enough to display all of that without it being a mess! I really want to display the graph from my projector mod; the entire wall will definitely be big enough to show all that data.
OK, now my question (thanks for hanging in there, I know that was a lot):
Is there any way that I can display an image on my phone from a Python program without creating a standalone app? Even if my phone needs to be plugged into my computer to stream the data through a cable.
I would use a framework like Kivy to create a standalone app, but then it wouldn't be hooked up to my assistant, completely defeating the purpose.
I'm not looking for anything similar to a notification; I really want to draw over the entire screen of my phone. This is something I did with Processing (a Java library) a while back, but now I'm using Python because it's more machine-learning friendly.
I've looked into a lot of services, but nothing seems to be able to do this. Remember that I don't need to send anything back from my phone, simply display an image on the screen until the desktop-side program tells it to stop.

This isn't my area of expertise, but if I needed to do something like that, I would make a web service out of the Python app using Django and open the URL on my phone. Don't know if it helps....
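For what it's worth, a minimal single-file sketch of that Django idea might look like the following; the habit_graph.png path, the port, and serving over the shared Wi-Fi network are all assumptions, not anything from the question:

    import sys
    from django.conf import settings
    from django.core.management import execute_from_command_line
    from django.http import FileResponse
    from django.urls import path

    # Configure Django inline; SECRET_KEY is just a dev placeholder.
    settings.configure(DEBUG=True, SECRET_KEY="dev-only", ALLOWED_HOSTS=["*"],
                       ROOT_URLCONF=__name__)

    def graph(request):
        # Serve whatever graph image the assistant last rendered
        # (habit_graph.png is a hypothetical path).
        return FileResponse(open("habit_graph.png", "rb"))

    urlpatterns = [path("graph/", graph)]

    if __name__ == "__main__":
        # Bind to all interfaces so the phone on the same Wi-Fi can reach it
        # at http://<desktop-ip>:8000/graph/
        execute_from_command_line([sys.argv[0], "runserver", "0.0.0.0:8000"])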

Regardless of "how" or "what", the answer is: you will always need some software running on the Android device to capture the stream of data (images) and display it on the screen.
The point is, you don't have to write this software yourself. The obvious example that comes to mind is to use any DLNA-compatible software, VLC for example, and have your Python program generate an H.264 stream and point VLC at it. Another way would be to serve the image over HTTP from your Python program and simply load it in the phone's browser.
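As a rough sketch of the stream route, assuming ffmpeg is installed on the desktop (the image name and the UDP target, which should be the phone's LAN address, are placeholders):

    import subprocess

    # Loop the rendered graph image as an endless H.264 MPEG-TS stream.
    subprocess.run([
        "ffmpeg", "-re", "-loop", "1", "-i", "habit_graph.png",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-f", "mpegts", "udp://192.168.1.50:1234",   # placeholder phone IP
    ])

On the phone, VLC would then open udp://@:1234 as a network stream.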
Hope it helps.

Related

Can I use open source code to run and monitor a Nest security camera?

I have a MacBook and I would like to use it to monitor a Nest wireless security camera, including an approximately 1 TB archive of continuously updated video history (perhaps of motion-detected clips only). This can be done by subscribing to a Nest cloud account, but that can get expensive, especially for several cameras, so I'd rather do it myself.
Can anyone point me to open-source code that will handle this? If not, is there another type of camera that will allow me to do this over wifi?
As promised above, I will update the status of this issue.
After a significant amount of work and also significant progress, I was able to connect to the live Nest camera feed programmatically, but was never able to actually record the live stream into short videos, although this was easy for my MacBook webcam. My belief is that Nest has engineered this feed such that camera owners cannot directly access it, leaving no option but to use their "Nest Aware" monthly service. I do not want to do this because I do not want to pay for it and because I want to create options that Nest Aware does not offer.
Searching the web, it appears that this kind of thing might be done by using another software package, "Blue Iris". I did not want to get this either, as I am sure that flexibility would be sacrificed and also the camera would need to be made publicly shared(!)
So I am giving up on Nest, although I like the hardware.
I did find an alternative. I also had an Arlo Q camera and I tried that, using an open source API on GitHub:
https://github.com/jeffreydwalter/arlo
I was able to access the camera and save motion detected videos to my disk within an hour of finding the above link. So, if you want to do this type of thing, I recommend Arlo over Nest.
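For anyone following the same route, the core of such a script looks roughly like this (reconstructed from memory of the jeffreydwalter/arlo README; check the repository for the current method names before relying on it):

    from datetime import date, timedelta
    from arlo import Arlo   # pip install arlo

    arlo = Arlo("you@example.com", "password")   # Arlo account credentials

    today = date.today().strftime("%Y%m%d")
    week_ago = (date.today() - timedelta(days=7)).strftime("%Y%m%d")

    # Fetch the motion-triggered recordings from the last week and save each
    # one to disk; 'name' is the recording's timestamp identifier.
    for recording in arlo.GetLibrary(week_ago, today):
        stream = arlo.StreamRecording(recording["presignedContentUrl"])
        with open(recording["name"] + ".mp4", "wb") as f:
            for chunk in stream:
                f.write(chunk)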

New to microcontrollers and lower-level programming: is this possible to do?

Hey all, I'm a recent BS graduate in Mechanical Engineering working on a project that is getting into the field of CS: I am looking to remake a treadmill after its 1990s motherboard finally quit.
I have the following assets:
Treadmill with broken motherboard (all other components tested and functional)
Touch screen monitor similar to this
A Polar heart rate monitor.
Multiple hard drives, joysticks and other USB accessories.
NI LabVIEW full subscription suite
2 functioning (2000s-era) laptops with no OS.
SolidWorks
A local makerspace
I have a few main goals and stretch goals, and I'd like some advice as to which should be easy enough to implement and which will take me a research team and five years.
This should be easy... right?
Get a PID controller set up with a microcontroller to spin the treadmill belt at [n] mph and adjust the incline to [n2] degrees based on a hardware dial, knob, or push-button physical input:
* get the microcontroller to read motor encoders for speed/incline
* get the microcontroller to recognize input from a physical button
* get the microcontroller to compare current speed/incline values with target values and increase/decrease current to the motors appropriately
* have the microcontroller display info on the LCD screen
Change from physical input to touchscreen input:
* figure out what they're doing [in link 1 in comments below] and adjust for what I currently have (or buy fresh if absolutely necessary)
* change input from hardware buttons to software <up> <down> arrows
* add a hardware E-stop
It looks like there are plenty of libraries and devices online that handle elements of these two steps; combining them may be difficult due to my inexperience, but it doesn't seem hard for the hardware and software themselves.
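To make the easy-mode loop concrete, here is a minimal, hypothetical sketch of the speed side in Python (e.g. for a MicroPython-capable board); read_encoder_mph, set_motor_pwm, and the gains are all placeholders for the real hardware interface:

    import time

    KP, KI, KD = 0.8, 0.1, 0.05   # gains to be tuned on the real treadmill

    def pid_loop(target_mph, read_encoder_mph, set_motor_pwm, dt=0.02):
        integral = 0.0
        prev_error = 0.0
        while True:
            error = target_mph - read_encoder_mph()
            integral += error * dt
            derivative = (error - prev_error) / dt
            # Clamp to the motor driver's valid duty-cycle range (assumed 0..1).
            output = KP * error + KI * integral + KD * derivative
            set_motor_pwm(max(0.0, min(1.0, output)))
            prev_error = error
            time.sleep(dt)

The incline loop would look the same with its own encoder, actuator, and gains.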
Medium Difficulty (I saw a guy do this once)
Upload some kind of Linux distribution or other OS onto my microcontroller and turn my program into an application:
* learn how to install Linux or another OS
* compile the program as an application
* section off the bottom of the LCD screen as a treadmill-specific taskbar
* (bonus round) make the treadmill-specific taskbar able to be moved and snapped (similar to the Windows taskbar)
Add feedback from a heart rate monitor to the treadmill for heart-rate PID control:
* SparkFun has a Single Lead Heart Rate Monitor - AD8232 [Link 2]; write an application to read the monitor and control the treadmill program accordingly
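One common shape for that heart-rate goal is a slow outer loop that nudges the speed set point while the inner PID loop (as sketched above) keeps controlling the belt; a hypothetical example, where read_bpm and the gain are made up:

    def update_target_speed(target_bpm, current_bpm, target_mph,
                            gain=0.02, min_mph=1.0, max_mph=8.0):
        # Proportional-only outer loop: raise the belt speed when the heart
        # rate is below the set point, lower it when above, within safe limits.
        target_mph += gain * (target_bpm - current_bpm)
        return max(min_mph, min(max_mph, target_mph))

    # Called perhaps once a second with the latest reading from the heart-rate
    # monitor application, e.g.:
    #   target_mph = update_target_speed(140, read_bpm(), target_mph)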
I feel like this is theoretically possible, but I don't really know how I would go about it. I also see how either of these tasks could be infinitely more complex than I'm thinking they will be.
Hard mode (Is this even possible?)
Put on smartphone-style functionality:
* install the Android OS onto the microcontroller
* install the Google Play store
* dedicate a set of pixels to the "treadmill OS" and the rest to the "smartphone"
* add some sort of hook for the "treadmill OS" into the Android OS, and maybe write a few apps to control the treadmill based on [arbitrary value in app]
If I can do this, why are all the super expensive and advanced treadmills on the market so crappy in terms of their software?
For my skill set, I'm pretty good on how to physically put everything together (but I will need to make a few posts to the Electronics Stack Exchange about how to get something the size of a smartphone to regulate 120 V 60 Hz power correctly).
My main question is how much of this is actually conceivable to do, and, if I am to do it in a way that satisfies all my desires, should I:
A) Look to buy a particular type of microcontroller to do all of this (recommendations would be appreciated)
B) Start with one of my two laptops and write an interface for a microcontroller that just does the easy stuff
C) Install the Android OS on one of my laptops and begin writing a [treadmill app]
D) Do something I haven't thought of because this is not my field.
PS: Although this is a DIY project, when it comes to the coding I really don't want to be reinventing the wheel, so please let me know about any libraries or resources that may exist which could be helpful.
Wow, what a project!
Getting the treadmill working
If your goal is to "get the treadmill working," then don't bother with any of this; instead, focus on debugging the motherboard. There's probably just one component that went bad, and it will be easier and faster to fix that than to build everything you mentioned up through easy/medium/hard modes. But I know your goal is learning and fun, not simply to get it working :)
Control loops and data collection
As you've already identified, you need something for low-level access to the hardware (controlling the treadmill and reading heart rate back). This type of work is perfect for a micro, so you're on the right path there. Android or Linux are needlessly complex for these tasks, and implementing them will be a lot more work for you, with not much advantage.
User interaction
At a bare minimum, the existing physical buttons and knobs will directly control the micro. Once you hit that checkpoint, congratulations, your treadmill works again.
But you don't want "working", you want "cool". You mentioned a few different ways for users to interact with your system: displays, touch screens, phones, etc. Already this is going to be a huge project, so don't waste time reinventing the wheel by trying to manually implement those things. Find a working system (your laptop, daily cellphone, or even a cheap tablet online), and use that to talk with your low-level micro over something like Bluetooth or WiFi.
Choosing the right tools
If you pick something obscure, expect to spend tons of time simply trying to get basic functionality out of it. So in general, you want to pick hardware & software that:
is robust (many people use it with minimal issue)
has a large community (for support from other experts/hobbyists)
has a large ecosystem (with lots of libraries that you can leverage)
The Arduino might be a good micro for you. Look into that.
For the "cool" display, your personal phone is probably the best option. The app development for your phone is robust and will have tons of support when you need it.
Other thoughts
You mentioned LabVIEW: stop doing that. It's the wrong tool for almost every goal you have.
You asked how to regulate mains power down to a small board: buy or find any old wall-wart adapter from old electronics around your home. Cut off the tip. Connect the wires to your board. Done. (All the magic is inside the brick.)
You asked which approach is best: B. Get the treadmill working with a basic micro. Then add wireless to the micro. Then write an app to give you a sweet display and control of the treadmill (via the micro).
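As an illustration of that option-B flow, the laptop side of the laptop-to-micro link could be as simple as this pyserial sketch (pip install pyserial; the port name and the one-line text protocol are invented for the example):

    import serial

    # "/dev/rfcomm0" is a typical Linux Bluetooth serial port; substitute the
    # COM port or USB serial device that matches your setup.
    with serial.Serial("/dev/rfcomm0", 9600, timeout=1) as ser:
        ser.write(b"SPEED 3.5\n")     # ask the micro for 3.5 mph
        ser.write(b"INCLINE 2.0\n")   # and a 2-degree incline
        reply = ser.readline()        # let the micro acknowledge with state
        print(reply.decode(errors="replace"))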
E-stop. Smart.

Streaming Android Screen

I'm currently working on an app with the end goal of being roughly analogous to an Android version of AirPlay for iDevices.
Streaming media and all that is easy enough, but I'd like to be able to include games as well. The problem with that is that to do so I'd have to stream the screen.
I've looked around at various things about taking screenshots (this question and the derivatives from it in particular), but I'm concerned about the frequency/latency. When gaming, anything less than 15-20 fps simply isn't going to cut it, and I'm not certain such is possible with the methods I've seen so far.
Does anyone know if such a thing is plausible, and if so what it would take?
Edit: To make it more clear, I'm basically trying to create a more limited form of "remote desktop" for Android. Essentially, capture what the device is currently doing (movie, game, whatever) and replicate it on another device.
My initial thoughts are to simply grab the audio buffer and the frame buffer and pass them through a socket to the other device, but I'm concerned that the methods I've seen for capturing the frame buffer are too slow for the intended use. I've seen people throwing around comments of 3 FPS limits and whatnot on some of the more common ways of accessing the frame buffer.
What I'm looking for is a way to get at the buffer without those limitations.
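For illustration only, the "buffers through a socket" idea might be framed as a simple length-prefixed protocol like this (generic Python, not Android code; capture_frame stands in for whatever capture mechanism turns out to be fast enough):

    import socket
    import struct

    def stream_frames(host, port, capture_frame):
        # Push frames to the receiving device as fast as capture allows.
        with socket.create_connection((host, port)) as sock:
            while True:
                frame = capture_frame()   # raw bytes, e.g. one JPEG image
                # Length-prefix each frame so the receiver knows where it ends.
                sock.sendall(struct.pack("!I", len(frame)) + frame)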
I am not sure what you are trying to accomplish when you refer to "streaming" a video game.
But if you are trying to mimic AirPlay, all you need to do is connect to a device via a Bluetooth/internet connection and allow sound, then save the results or handle them accordingly.
But video games do not "stream" a screen, because the mobile device will not handle much of a workload. There are other problems, like: how will you handle the game if the person loses their internet connection while playing? On top of that, this would require a lot of servers and bandwidth to support the game workload on the backend.
But if you are trying to create an online game, essentially all you need to do is send and receive messages from a server. That is simple. If you want to "stream" to another device, simply connect the mobile device to speakers or a TV. Just about all mobile video games and applications send simple messages via JSON or something similar. This reduces overhead, has simple syntax, and may be used across multiple platforms.
It sounds like you should take a look at this (repost):
https://stackoverflow.com/questions/2885533/where-to-start-game-programming-for-android
If not, this is more of an open question about how to implement a video game.

Embedded System: which OS should I use?

I am planning to build my own embedded system for processing the sound of my guitar, like a POD, with input and output and so on, and a system running a program with presets, options, etc. on a small LCD screen, which should be multitouch for navigation.
Now I am at the very beginning and don't know where to start or what system I should use.
It should support the features I wrote above (like multitouch) and should be free.
Embedded Linux,
or
Android
or what?
Are you using off-the-shelf effects modules with some sort of interface to an embedded system, or are you planning on doing the effects in your program as well? I assume the latter in this response; please clarify if I have misunderstood the nature of the project.
Do your system engineering...
You are going to need to deal with the analog side of the inputs and outputs. Even digital inputs and outputs are analog in some respects, to keep the signals clean. Even optical is going to be analog between the optical interface and the processor's interface.
(I know this is long, keep reading it will converge on the answer to your question)
You will have some sort of hardware-to-software data input interface. If you choose to support different interfaces, you will ideally want to normalize the data into a common form and data rate so that the effects processing only has to deal with it one way (avoiding a bunch of if-then-elses in the code: if bitrate is this then..., else if bitrate is this and data is unipolar then..., and so on).
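As a toy illustration of that normalization advice (the formats and names are invented, and real code would also resample, which is omitted here):

    def normalize(samples, bits, unipolar):
        # Convert any input format to one common form (signed 16-bit) so the
        # effects code only ever sees a single representation.
        offset = 1 << (bits - 1) if unipolar else 0
        shift = bits - 16
        out = []
        for s in samples:
            s -= offset                      # unipolar -> bipolar
            out.append(s >> shift if shift >= 0 else s << -shift)
        return out

    # e.g. 24-bit unipolar and 8-bit unipolar inputs both land in one format:
    assert normalize([0xFFFFFF], 24, True) == [0x7FFF]
    assert normalize([255], 8, True) == [32512]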
The guts of the effects processing is as complicated as you want to make it: one effect at a time, or multiple? For each effect, define the parameters you are going to allow to be adjusted (I would start with the minimum number, which might be none, then add parameters later once it is all working). These parameters are going to need to be global in some form or fashion so that the user interface can get at them and modify them for the effects processing.
The output: same as the input, a lot of analog work; convert from the normalized data stream into whatever the interface wants or needs, or whatever you defined it to be.
Then there is the user interface... the easy part.
...
The guts of the software for the effects processing can be system-independent code, and it is probably more comfortable being developed and tested on a desktop/laptop than on the target system, bearing in mind that the code should be written to be system- and operating-system-independent as well as embeddable (avoid floating point, divides, lots of local variables, etc.).
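As a toy example of that advice, here is an effect core written as plain, integer-only code you can test on a desktop before porting (everything here, from the Q8 feedback gain to the buffer size, is invented for illustration):

    DELAY_SAMPLES = 4800    # 100 ms at 48 kHz
    FEEDBACK_Q8 = 128       # 0.5 in Q8 fixed point, avoiding floating point

    def make_delay():
        buf = [0] * DELAY_SAMPLES
        idx = 0
        def process(sample):
            nonlocal idx
            delayed = buf[idx]
            out = sample + ((delayed * FEEDBACK_Q8) >> 8)  # integer-only math
            buf[idx] = out
            idx = (idx + 1) % DELAY_SAMPLES
            return out
        return process

    # Desktop test: feed an impulse and check the echo 100 ms later.
    effect = make_delay()
    out = [effect(s) for s in [10000] + [0] * 10000]
    assert out[DELAY_SAMPLES] == 5000   # half-amplitude echo, as expected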
Sometimes, if not often, an enclosed system has some sort of user interface on the same black box: knobs or buttons, a screen of some sort, touch screens, etc. One subsystem may manage the user interface while the other performs the task, with a connection between them. Not always, but it is a nice, clean design, and it allows, for example, a product designed yesterday with buttons and knobs and, say, a two-line LCD panel to be modernized to a touch screen at a fraction of the effort; and tomorrow sometime there may be some fiber that plugs directly into a socket in the back of your head, who knows.
Another reason to separate the processing tasks is to make it easier to ensure that the effects processor will never get bogged down by user interface stuff. You don't want to be turning a virtual knob on your touchscreen and have the graphics load from drawing the picture cause your audio to get garbled or turn into a nasty whine. Basically, the effects processor is real-time critical. You don't want to pick a string on the guitar and have the sound come out of the amp three seconds later because the processor is also drawing an animated background on your touch screen panel. That processing needs to be tight and fast and deterministic; every if-then-else in the code has to be accounted for and balanced. If you allow for multiple effects in parallel, your processor needs to have the bandwidth to process all of the effects without a noticeable delay; otherwise, if only one effect runs at a time, then the processor needs to be chosen to handle the one effect with the worst computational effort. The worst that could happen is that the input-to-output latency varies because of something the GUI processing is doing, causing the music to sound horrible.
So you can work the effects processor with its user interface being, for example, a serial interface and a protocol across that interface (which you define) for selecting effects and changing parameters. You can get the effects processor up and working and tested using your desktop and/or laptop connected through the serial interface, with some ad hoc code being used to change parameters, perhaps a command-line program.
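A sketch of what the effects-processor end of such a protocol could look like (the command names and the parameter table are made up; the point is only how small the contract can be):

    # Global parameter table that the effects code reads from.
    params = {"effect": "none", "depth": 0.0, "rate": 0.0}

    def handle_command(line):
        # Parse one line of the (invented) protocol: "SELECT chorus",
        # "SET depth 0.3", etc., and reply with OK or an error.
        parts = line.strip().split()
        if not parts:
            return "ERR empty"
        if parts[0] == "SELECT" and len(parts) == 2:
            params["effect"] = parts[1]
            return "OK"
        if parts[0] == "SET" and len(parts) == 3 and parts[1] in params:
            params[parts[1]] = float(parts[2])
            return "OK"
        return "ERR unknown command"

    # Ad hoc desktop testing, exactly as described above:
    assert handle_command("SELECT chorus") == "OK"
    assert handle_command("SET depth 0.3") == "OK"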
Now is where it gets interesting. You can get an off-the-shelf embedded Linux system, for example, or embedded Android or whatever, write your app that uses the serial protocol, and if need be glue, bolt, tape, or mold this user interface system on top of, around, or next to the effects processor module. Note that you could have all of the platforms suggested: an Android version, a Linux (without Android) version, a Mac version, a Windows version, a DOS version, a QNX version, an Amiga version, you name it. You can try 100 different user interface variations on the same OS: maybe I want the knobs to be sliders, or up/down push buttons, or a dial-looking thing that I rotate with a two-finger touch, or some other multi-touch gesture.
And it gets better: instead of, or in addition to, serial you could use a Bluetooth module. Your user interface could be an iPhone app, an Android phone app, a Linux or Windows laptop app, or your desktop computer, etc. All of these are (relatively) easy platforms for writing graphical user interfaces for selecting things.
Another approach, of course, could be Ethernet, in particular wireless Ethernet; then your user interface could be a web page, and the bulk of your user interface work has already been done by the Firefox or Chrome or other browser team. (Wireless Ethernet or Bluetooth or ZigBee or whatever also allows the effects processor to be somewhere convenient; it doesn't have to be within arm's/foot's reach of you.)
...
Do your system engineering. Break the problem into a few big modules, define the interfaces between the modules, and then worry about the system engineering inside those modules if necessary, until you get to easily digestible bites. The better the system engineering and the better defined the interfaces between modules, the easier the project will be to implement.
...
I would also investigate the xCORE processors from XMOS; they have a very nice simulator with VCD waveform output that you can also use to accurately profile your effects processing. Personally, I would have a very tough time not choosing this platform for this project.
You should also investigate the OMAP from TI; this is what is on a BeagleBoard. You get a nice ARM that already has Linux and other things ported and running on it, but you also get a DSP block; that DSP block could do your effects processing, likely in a way that the two don't interfere. You lose the ability to physically separate your user interface processor and effects processor, but you gain elsewhere, and can probably use a BeagleBoard off the shelf to develop a prototype (using analog audio in and out). I actually liked the Hawkboard better (with the Hawkboard you get a usable system out of the box; with the BeagleBoard you spend another BeagleBoard's worth of money for stuff that should have been on the board), but last I saw they had an instability flaw in the PCB design.
I am not up on the specs, but the Tegra (a number of upcoming phones are or will be Tegra-based), like the OMAP, should give some parallel processing with a lean toward audio/video as well as GUI. You only need the audio and GUI (the easier two of the three). I think there is a development platform for sale that has a touchscreen on it and popular embedded OSes.
If you are trying to save money by making one of these things yourself: stop now, and go to the store and buy one. The homebrew one will cost a lot more, even if all the design stuff is free; the hardware and melted-down guitars and guitar amps are not. I speak from experience: many times I have spent many thousands of dollars on a homebrew project to avoid buying some off-the-shelf $300 item. I learned an awful lot, and personally the building of the thing is more fun than using it; I normally shelve it once it is finally working. YMMV.
If I have misunderstood your question, please let me know and I will edit/remove/replace all of it with a different (short) answer.
In fact, it depends on what kind of hardware you want to run and interface with (and, as a consequence, how much you will work at the driver level... or not).
The problem with Android remains the same as with bare Linux. It could even be worse if there is no framework-level library (Java), since you will have to manage both the C part (with JNI) and the Java part.
Work the specs... then you will choose wisely...
Reminder: Android is Linux-based.
Go for Android:
With any other embedded OS you will have too much integration work to deal with.
You can start by buying off-the-shelf hardware (a Galaxy Tab, an HTC phone, etc.) to start your development and reach a prototype fast.

Off the shelf or roll my own?

I want to take an Android-based tablet (not a phone; I need a large screen and I don't need 3G).
The guy with the tablet will attach a webcam to it, and a software application on the Android tablet will stream the camera's feed to a web page (there may later be a need to stream video back to the Android tablet; TBD).
Additionally, I need two-way Voice over IP.
I may (TBD) need to use a TCP interface to a device, which might or might not be achieved through the Android.
With so much open: is there any open source that can handle that, either as a group or individually, or should I code my own? Since I don't normally do this kind of stuff, what's the best approach, in terms of protocols, etc.?
I'd like to demo something in a month or so. Sorry that this is vague, but so is the person asking for it (which might make me lean towards rolling my own simply because of shifting requirements; but I might roll my own around off-the-shelf building blocks, for instance if I can find off-the-shelf open source VoIP, etc.).
is there any open source that can handle that, either as a group or individually, or should I code my own?
AFAIK, there is virtually no "open source that can handle that" for Android. In fact, you will need hardware modifications and drivers to support webcams, let alone anything else on your to-do list.
There are a lot of mobile streaming services. Maybe they can help you with one half of your problem:
http://www.ustream.tv/
http://www.qik.com/
http://bambuser.com/
Instead of the webcam, you can use the integrated camera on the phone itself to capture and stream. And, yes, you'll have to develop something of your own, especially with changing requirements.
