There are very few examples on the web showing how to work with Notification Hubs on Android, and the samples that do exist seem to use an old version of the Android SDK. I'm trying to use Notification Hubs with tags (starting with Android), but I could not find a good resource explaining how to do that. I'm wondering if anyone has already done this and could show me some code.
I'm following this article: http://www.windowsazure.com/en-us/manage/services/notification-hubs/get-started-notification-hubs-android/
but as I said, I would like to notify users according to tags.
I'd appreciate any help.
PS: I'm not an Android/Java programmer, so a working project would be awesome.
Are you able to run the get started sample that you linked?
Even if they refer to API version 17, it should work the same (you can use Google API 18 without problems).
From the sample code, in order to use tags, simply add your tags in the call:
hub.register(regid, tags); // where tags is a list of strings, e.g. ["Yankees", "RedSox"]
Then when you send a notification you have to specify the tag you want to target.
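Putting those two sentences together, a minimal sketch of a tagged registration might look like the following. This assumes the Notification Hubs Android SDK class used in the get-started tutorial (com.microsoft.windowsazure.messaging.NotificationHub); the hub name, connection string, and tag values are placeholders, and regid is the GCM registration id from that tutorial:

```java
import android.content.Context;
import com.microsoft.windowsazure.messaging.NotificationHub;

// Sketch only: run this off the UI thread (register() does network I/O).
// "myhub" and the connection string are placeholders for your own hub.
void registerWithTags(Context context, String regid) throws Exception {
    NotificationHub hub = new NotificationHub(
            "myhub",                          // your notification hub name
            "<listen connection string>",     // DefaultListenSharedAccessSignature
            context);
    hub.register(regid, "Yankees", "RedSox"); // tags are just extra string arguments
}
```

On the backend (the tutorials use .NET), the notification is then sent to a tag expression, e.g. `hub.SendGcmNativeNotificationAsync(payload, "Yankees")`, so only devices registered with that tag receive it.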
Unfortunately, we do not have the full Breaking news tutorial for Android yet (only Windows and iOS): http://www.windowsazure.com/en-us/manage/services/notification-hubs/breaking-news-dotnet/
The Breaking News tutorial also provides a lot of insight into what has to be done for most tag-based scenarios.
I tried these links:
https://github.com/lorensiuswlt/P25Demo
https://github.com/AlexanderKaraberov/Android-BluetoothPrinter-Demo
But they don't print barcodes. Thanks in advance.
Follow these steps for Zebra printers.
Start to finish, these steps will help you go from label novice to a printing pro:
If you do not have previous experience with Zebra printers, I recommend the Getting Started with Printers and Apps document. It walks you through setting up a Zebra printer out of the box, configuring its basic settings, and designing labels.
If Zebra printers are something you're comfortable with, but writing Android applications that print to them is a whole other story, read through the Getting Started with Android Development document. There are also sample code and demo apps available on GitHub, and more information and discussions on the Printers page.
When creating printing applications, there are a few key points developers must keep in mind. These are outlined in Zebra's Best Practices for Printing Applications. There is a recorded webinar that goes through these best practices and shows sample code for how they can be implemented. If you are in a hurry, the included slide deck covers the main points.
Download the Link-OS SDK for implementation, demo sample code, and consultation. There are separate APIs in the SDK for Android and Android BTLE, and each API has at least one demo project.
Finally, review our API documentation to familiarize yourself with the full functionality of the Link-OS SDK. There’s a lot that it can do for you, so take advantage of everything it has to offer.
https://developer.zebra.com/blog/become-zebra-printing-android-developer-five-easy-steps
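To answer the barcode part of the question: with a Zebra printer you typically do not draw the barcode yourself; you send ZPL, and the printer renders it. A minimal sketch of building a Code 128 label follows (the ZPL commands are standard; the class name, MAC address, and barcode data are placeholders, and the Link-OS SDK calls in the comment are from its `com.zebra.sdk.comm` package):

```java
// Sketch: build a minimal ZPL label containing a Code 128 barcode.
// ZPL: ^XA starts a label, ^FO sets the field origin, ^BY sets the
// narrow-bar width, ^BC selects Code 128, ^FD...^FS is the field data,
// and ^XZ ends the label.
public class ZplBarcode {

    // Returns a ZPL label that prints `data` as a Code 128 barcode,
    // `height` dots tall, with a human-readable interpretation line.
    public static String code128Label(String data, int height) {
        return "^XA"
             + "^FO50,50"                  // field origin: x=50, y=50 dots
             + "^BY2"                      // narrow bar width: 2 dots
             + "^BCN," + height + ",Y,N,N" // Code 128, no rotation
             + "^FD" + data + "^FS"        // the barcode contents
             + "^XZ";
    }

    public static void main(String[] args) {
        System.out.println(code128Label("12345678", 100));
        // With the Link-OS SDK you would then send it over Bluetooth, roughly:
        //   Connection c = new BluetoothConnection("AC:3F:A4:xx:xx:xx");
        //   c.open();
        //   c.write(code128Label("12345678", 100).getBytes());
        //   c.close();
    }
}
```

The same ZPL string can also be sent over TCP or USB; only the `Connection` implementation changes.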
We have a setup where we want to use https://ship.io/ as our cloud-based continuous-integration server.
However, we also want some kind of static code analysis (preferably SonarQube, but that is debatable), which isn't officially supported by ship.io.
The projects are classic mobile projects (Android and iOS).
I have seen some posts from people mentioning that they managed to set up this kind of configuration. SonarQube has just released a Gradle plugin (http://www.sonarsource.com/2015/06/15/sonarqube-gradle-1-0-released/), so the Android part should be doable.
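For reference, wiring that plugin into the Android project's build is roughly the following config fragment (a sketch against version 1.0 of the plugin; the server URL and project key are placeholders):

```groovy
// build.gradle – sketch, assuming the SonarQube Gradle plugin 1.0
plugins {
    id "org.sonarqube" version "1.0"
}

sonarqube {
    properties {
        property "sonar.host.url", "http://localhost:9000" // your SonarQube server
        property "sonar.projectKey", "com.example:myapp"   // hypothetical key
    }
}
```

Running `./gradlew sonarqube` would then push the analysis results to the server.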
However, at the moment I have no idea what the best way to do this would be for the iOS part of the project.
We already contacted the ship.io team about this issue but have not received a response yet.
Any suggestions/insights on this?
My name is Tim Rosenblatt and I'm one of the senior engineers here at Ship.io. I'm not sure why you didn't get a reply from our support email, and I'm glad you posted about this here.
As Viktor mentioned, we definitely support custom scripts. You absolutely can run whatever you like during your build process with this type of step.
I've got a few links that should be helpful for getting SonarQube added to your Ship job, but you can definitely get in touch with us if anything isn't clear enough. You can use the in-app support icon at the bottom right of your dashboard, or just email me personally -- tim at ship dot io
http://support.ship.io/environment/install-software
http://support.ship.io/environment/custom-shell-scripts
Thanks!
You should be able to write a script (bash, ruby, ...) which runs your static code analysis, and then call that script on your own Mac or on any CI that supports running custom scripts. AFAIK ship.io does support this; our service (https://bitrise.io/ - CTO here) certainly does.
Background
I wish to get my app's statistics, which are available on the Developer Console website, but via Android itself, as an app.
What I've found
Google has a tool called "gsutil" to get your app's reports (statistics, reviews, etc.) (more info here). I think it can do more, but that's what I wish to try out.
The problem
This tool is written in Python, and therefore it cannot be launched easily on Android (and it is somewhat awkward on other OSes too, as you have to install a Python runtime for it).
The question
Is there any way to use it on Android? Or maybe an alternative?
How about a library that does the same?
Check out the Andlytics app. They were able to accomplish this.
https://play.google.com/store/apps/details?id=com.github.andlyticsproject
I have a set of messages which should be spoken by an Android app. I could use something like SVOX, but I don't need to read user input.
I was thinking about using prerecorded words and putting them together myself - could you please show me a proper way to do that?
Android has had a built-in text-to-speech feature since API level 4.
Go through this tutorial for a step by step guide.
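A minimal sketch of that built-in API follows (the Activity name and spoken string are placeholders; note that TextToSpeech initializes asynchronously, so speak() must wait for onInit):

```java
import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

// Sketch: speak a fixed message with Android's built-in TTS engine.
// SpeakActivity is a placeholder name for your own Activity.
public class SpeakActivity extends Activity implements TextToSpeech.OnInitListener {
    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        tts = new TextToSpeech(this, this); // initialization is asynchronous
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US);
            // QUEUE_FLUSH drops anything queued earlier and speaks immediately
            tts.speak("Your order is ready", TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    @Override
    protected void onDestroy() {
        if (tts != null) tts.shutdown(); // release the engine when done
        super.onDestroy();
    }
}
```

Since your messages are a fixed set, this avoids recording and stitching audio files entirely.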
Clone this Git project.
It's a well-architected example of recognizing spoken commands (in English/Estonian).
If you like the approach, there is a library project you can use as a service.
If you can take the time to follow open source, this approach to implementing recognized commands is quite good.
I need to customize the Android source code to add a secondary display to the device (this is the requirement). Hence I need to integrate the secondary display's drivers into the Android stack, and also add some libraries through which the secondary display can be controlled. The driver code is readily available, so I only need to integrate it with the Android stack. As I have never worked with the Android source code, I hardly have any idea how to proceed, and there is no tutorial available for guidance on this.
So far with the help of this site I'm able to setup the environment using the instructions given here: http://source.android.com/source/downloading.html
I should perform the integration on Jelly Bean, so I have downloaded the JB source code.
Proceeding towards integration of the drivers, I have no idea what to do next. Please provide some tutorials or useful links.
Thanks in advance.
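This does not replace the driver and HAL work, but once those pieces are in place, one quick way to check that the framework actually sees the new display is the DisplayManager API that Jelly Bean 4.2 (API 17) added (the Activity name below is a placeholder):

```java
import android.app.Activity;
import android.hardware.display.DisplayManager;
import android.os.Bundle;
import android.util.Log;
import android.view.Display;

// Sketch: list every display the framework knows about (API 17+).
// A correctly integrated secondary display should appear here.
public class DisplayCheckActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        DisplayManager dm = (DisplayManager) getSystemService(DISPLAY_SERVICE);
        for (Display d : dm.getDisplays()) {
            Log.i("DisplayCheck", "Display " + d.getDisplayId() + ": " + d.getName());
        }
        // App content is rendered to a secondary display via android.app.Presentation.
    }
}
```

If the display shows up in this list, the integration below the framework is working and the remaining control logic can live at the app or library level.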