I am messing around with developing apps for the Band 2 using the Microsoft SDK and Android Studio. I have successfully tested the application on my device, but the problem I am having is understanding how the application gets linked to the tile and how that tile gets added to the Health app.
Where does the presentation XML reside? I read section 8.8 SIMPLE CUSTOM TILE EXAMPLE of the Microsoft Band SDK.pdf, but the example does not specify where the code needs to reside. Do I need to add it to the class file for the app or to a different file? And where does the tile icon get created? In Android Studio, and if so, where?
An example of how the class, tile XML, and icon get installed to the Band would be nice.
Thanks!
The SDK includes some samples; have a look at the one entitled BandTileEvent to see the full implementation. The quick version is that your tile-creation code should define a series of page layouts (containing elements with IDs) and icons when the tile is made, and then to update the tile you choose a layout and assign values to the elements by their IDs. The key parts from the samples look like this (modified for readability):
private PageLayout createButtonLayout() {
    return new PageLayout(
        new FlowPanel(15, 0, 260, 105, FlowPanelOrientation.VERTICAL)
            .addElements(new FilledButton(0, 5, 210, 45).setMargins(0, 5, 0, 0).setId(12).setBackgroundColor(Color.RED))
            .addElements(new TextButton(0, 0, 210, 45).setMargins(0, 5, 0, 0).setId(21).setPressedColor(Color.BLUE))
    );
}
This will create a PageLayout object that is used in the tile creation process. This method should be used like this:
BandTile tile = new BandTile.Builder(YOUR_TILE_UUID, "Tile Title", tileIconBitmap)
        .setPageLayouts(createButtonLayout())
        .setPageIcons(getIconsToUse())
        .build();
client.getTileManager().addTile(context, tile);
Once the tile is on the Band, you'll need to send an update; it should look something like this:
private void updatePages() throws BandIOException {
    client.getTileManager().setPages(tileId,
            new PageData(pageId1, 0)
                    .update(new FilledButtonData(12, Color.YELLOW))
                    .update(new TextButtonData(21, "Text Button")));
}
Once the tile is on your Band, you can register an intent filter to receive its events. Check the SDK samples for the exact intents used; you'll get notified when the tile is opened or closed and when buttons on it are pressed.
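For illustration only, here is a rough sketch of registering such a receiver in code. The action strings are placeholders written from memory, so verify them (and the extras carried by the intents) against the BandTileEvent sample for your SDK version:

// The action strings below are assumptions; check the BandTileEvent sample for the real constants.
IntentFilter filter = new IntentFilter();
filter.addAction("com.microsoft.band.action.ACTION_TILE_OPENED");
filter.addAction("com.microsoft.band.action.ACTION_TILE_BUTTON_PRESSED");
filter.addAction("com.microsoft.band.action.ACTION_TILE_CLOSED");

context.registerReceiver(new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // The intent carries the event details (tile id, page id, pressed element id) as extras;
        // see the sample for the exact extra keys and parcelable types.
        Log.d("BandTile", "Tile event received: " + intent.getAction());
    }
}, filter);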
Sorry, I am new to Android development.
Is there any way to load OpenCycleMap tiles using osmdroid?
From the website, it seems there is no easy way to do so:
https://github.com/osmdroid/osmdroid/wiki/Map-Sources
Could anyone give me some tips on how to do it?
The only way I can think of is to define the tile source manually, as below.
Is there an easier way to do this?
final String[] tileUrls = {"http://a.tile.thunderforest.com/cycle/",
        "http://b.tile.thunderforest.com/cycle/",
        "http://c.tile.thunderforest.com/cycle/"};
final ITileSource OCM =
        new XYTileSource("Open Cycle Map",
                0,
                19,
                512,
                ".png",
                tileUrls,
                "from open cycle map");
Thanks a lot
Defining a tile source is the correct way to do it, and it's a perfectly fine one: many built-in tile sources are defined in the same way.
However, according to the documentation at http://thunderforest.com/maps/opencyclemap/, you should obtain and use an API key:
Want to use these tiles? The generic tile format string for the
OpenCycleMap layer is:
https://tile.thunderforest.com/cycle/{z}/{x}/{y}.png?apikey=<insert-your-apikey-here>
Therefore you should include your API key:
final ITileSource OCM =
        new XYTileSource("Open Cycle Map",
                0,
                19,
                512,
                ".png?apikey=<insert-your-apikey-here>",
                tileUrls,
                "from open cycle map");
(This is just modified code from the question. I didn't test it, so some parameters may not be correct.)
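Once the tile source is defined, you just assign it to the map view. A minimal sketch, assuming your layout contains an osmdroid MapView with the id map:

MapView mapView = (MapView) findViewById(R.id.map); // R.id.map is whatever id your layout uses
mapView.setTileSource(OCM);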
After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports it to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's GitHub.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values, i.e. the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3s from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine, no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud everything is messed up; the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what PoseData actually does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly, but the data you get this way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved this:
I started from java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then express those coordinates with respect to the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example I linked above):
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // IMU to color camera transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU to depth camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then, when you get the point cloud, you have to "normalize" it. Using your extrinsics this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough: now you have a point cloud with respect to your base frame of reference.
If you superimpose two or more of these "normalized" clouds you can get a 3D representation of your room.
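For example, a hypothetical accumulator (names are illustrative only) that appends every normalized cloud to one big list, which you can then export with the .xyz-writing loop from the question:

// Collects every normalized frame of the scan.
private final ArrayList<Vector3> fullRoomCloud = new ArrayList<>();

private void accumulate(ArrayList<Vector3> normalizedCloud) {
    fullRoomCloud.addAll(normalizedCloud);
}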
There is another way to do this with rotation matrices, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for real-time 3D reconstruction.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C via the NDK and JNI. The library is well documented, but it is very painful to set up your environment and start using JNI (in fact, I'm stuck at the moment).
Drifting
There is still a problem when I turn around with the device: the point cloud seems to spread out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of many very small errors in estimating your pose that together cause a big error in your pose relative to the world. For instance, if you take your Tango device, walk in a circle while tracking your TangoPoseData, and then draw the trajectory in a spreadsheet or whatever you want, you'll notice that the tablet never returns to its starting point, because it is drifting away.
The solution to that is using Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers lots of points and gives you a nice introduction.
Using area learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. In this way you tell your Tango to estimate its pose not with respect to where it was when you launched the app, but with respect to some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use this FRAME_PAIRS as usual.
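For example, when a new depth frame arrives you can fetch the pose that matches its timestamp in the area-description frame and pass it to the normalize() method above. A sketch, reusing mTango and mExtrinsics from the earlier snippets:

@Override
public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
    // Pose of the device at the time the depth frame was captured, expressed
    // relative to the (drift-corrected) area description frame.
    TangoPoseData pose = mTango.getPoseAtTime(xyzIj.timestamp, FRAME_PAIRS.get(0));
    if (pose.statusCode == TangoPoseData.POSE_VALID) {
        ArrayList<Vector3> cloudInWorld = normalize(xyzIj, pose, mExtrinsics);
        // ...accumulate cloudInWorld for export
    }
}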
Then you have to modify your TangoConfig to tell Tango to use Area Learning, using the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
Using this technique you'll get rid of that spreading.
PS
In the talk I linked above, at around 22:35, they show you how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key does not exist anymore (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.
I want to create a map from some OpenGL code that I wrote.
In order to do that I thought about taking a screenshot of the top view of the GL scene.
Yet I can't seem to find out how to do that...
Any suggestions?
A similar problem has been solved in the OSG example code here.
First you need to set up your view so that you are looking at the center from the top view.
// "lookDir" and "up" are the viewing direction and up vector you choose for the top view
// (e.g. lookDir = osg::Vec3(0, 0, -1), up = osg::Vec3(0, 1, 0)).
osg::Vec3 center = scene->getBound().center();
double radius = scene->getBound().radius();
view->getCamera()->setViewMatrixAsLookAt( center - lookDir*(radius*3.0), center, up );
view->getCamera()->setProjectionMatrixAsPerspective(
    30.0f, static_cast<double>(width)/static_cast<double>(height), 1.0f, 10000.0f );
Then you need to use OSG's screen-capture handler, with logic similar to the code below:
osgViewer::ScreenCaptureHandler* scrn = new osgViewer::ScreenCaptureHandler();
osgViewer::ScreenCaptureHandler::WriteToFile* captureOper = new osgViewer::ScreenCaptureHandler::WriteToFile(tmpStr.m_szBuffer, "png");
scrn->setCaptureOperation(captureOper);
scrn->captureNextFrame(*_viewer);
_viewer->frame();
Of course, if you are not using OSG then you need to find the equivalent APIs (in whatever library you are using) to achieve the same task.
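If you are rendering with plain OpenGL ES on Android rather than OSG, the usual equivalent is to read back the framebuffer with glReadPixels after drawing your top-down view. A rough, untested sketch (the method name and the width/height parameters are just placeholders):

// Call from the GL thread, e.g. at the end of onDrawFrame(), after rendering the top view.
private Bitmap captureGlFrame(int width, int height) {
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    buf.order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();

    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buf);

    // glReadPixels returns rows bottom-up, so flip the image vertically before saving it.
    android.graphics.Matrix flip = new android.graphics.Matrix();
    flip.postScale(1f, -1f);
    return Bitmap.createBitmap(bitmap, 0, 0, width, height, flip, true);
}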
I am building a game in which I need to add a common topLayer as a shared menu on top of other layers. I am using the AndEngineCocos2dExtension.
Current code:
public class LobbyLayer extends CCLayer {

    CPButton low, medium, high, friends, vip;
    CCSprite low_selected, medium_selected, high_selected, friends_selected,
            vip_selected;
    CCNode tables[];

    public LobbyLayer() throws IOException {
        CCSprite background = new CCSprite("gfx/bg.jpg");
        background.setPosition(400, 240);
        attachChild(background);

        CPTopLayer topLayer = new CPTopLayer();
        topLayer.setPosition(0, 240);
        attachChild(topLayer);
This is my second layer. I have a welcomeLayer, which has a button that opens this LobbyLayer; topLayer is the layer I want on top of the LobbyLayer.
But instead I get a black screen on the emulator; it works fine without the topLayer. Please help.
I'm not sure what branch you're on, but GLES2 doesn't use layers anymore. When I searched andengine.org/forums for the Cocos2dExtension, this is what I found:
http://www.andengine.org/forums/tools/porting-to-ios-t8450.html
I believe the Cocos2d extension exists so we can use CocosBuilder to build menus and such, giving us a graphical interface.
Does this help you?
You can specify the z value for the layers. I have used it when adding a child layer to the parent layer, like this:
addChild(background, 1); // z value 1
addChild(topLayer, 5);   // z value 5, so it appears above the background layer
I am designing an Android application where I want to plot real-time data that I receive over Bluetooth.
I receive a signal, process it to get some result, and want to display that result in real time. I saw there are various Android libraries for drawing charts. I am a bit confused about whether to go with one of these libraries or to use JavaScript. Can anyone suggest which is the better option, and also which Android library to use?
There are many charting libraries for Android, but every time I had such a requirement I used Android's native 2D graphics framework with Canvas. I never looked for other alternatives. It is simple and you have a lot of control. Just to let you know.
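To illustrate the idea, here is a minimal sketch of a custom View that plots the latest samples with Canvas. It assumes your values are normalized to [0, 1] and that your Bluetooth callback pushes them through the (hypothetical) addSample() method:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.View;
import java.util.ArrayDeque;

public class RealtimePlotView extends View {

    private static final int MAX_SAMPLES = 500;
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    // Most recent samples, assumed to be normalized to [0, 1].
    private final ArrayDeque<Float> samples = new ArrayDeque<>();

    public RealtimePlotView(Context context, AttributeSet attrs) {
        super(context, attrs);
        paint.setColor(Color.GREEN);
        paint.setStrokeWidth(2f);
    }

    // Call this from the Bluetooth callback whenever a new processed value arrives.
    public void addSample(float value) {
        synchronized (samples) {
            if (samples.size() >= MAX_SAMPLES) {
                samples.pollFirst();
            }
            samples.addLast(value);
        }
        postInvalidate(); // safe to call from a background thread
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        float dx = getWidth() / (float) MAX_SAMPLES;
        synchronized (samples) {
            float prevX = 0f;
            float prevY = getHeight();
            int i = 0;
            for (float v : samples) {
                float x = i * dx;
                float y = getHeight() * (1f - v); // higher values drawn nearer the top
                if (i > 0) {
                    canvas.drawLine(prevX, prevY, x, y, paint);
                }
                prevX = x;
                prevY = y;
                i++;
            }
        }
    }
}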
It's better to use flot-android-chart for creating different types of charts.
Or you can simply use AChartEngine.
If you want to try creating charts without any built-in JARs, just look at this: Bar Chart in Android Without any Built-in Jars (but it is only a bar chart).
If you want a real-time chart for Android then the fastest Android Chart library is currently SciChart.
There is a performance comparison article which puts 5 open source and commercial charts head to head under real-time conditions and in all tests, SciChart comes out on top, sometimes by a considerable margin!
This makes SciChart suitable for real-time trading apps, medical apps, scientific apps and even embedded systems which run Android as an operating system.
Disclosure: I am the tech lead on the SciChart project, just so you know!
This repo looks promising: Androidplot
It appears to support importing from Gradle rather than keeping a JAR file locally. I also considered AnyChart, but it is a pay-to-use library, whereas Androidplot is offered under the Apache 2.0 license.
Just copying and pasting from their quickstart guide in case the link breaks:
Add the Dependency
To use the library in your gradle project add the following to your build.gradle:
dependencies {
    compile "com.androidplot:androidplot-core:1.5.7"
}
If you're using ProGuard obfuscation (projects created by Android Studio do by default), you'll also want to add this to your proguard-rules.pro file:
-keep class com.androidplot.** { *; }
Create your Activity XML Layout
Once you’ve got an Android project skeleton created, create res/layout/simple_xy_plot_example.xml
and add an XYPlot view:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:ap="http://schemas.android.com/apk/res-auto"
    android:layout_height="match_parent"
    android:layout_width="match_parent">

    <com.androidplot.xy.XYPlot
        style="@style/APDefacto.Dark"
        android:id="@+id/plot"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        ap:title="A Simple XY Plot"
        ap:rangeTitle="range"
        ap:domainTitle="domain"
        ap:lineLabels="left|bottom"
        ap:lineLabelRotationBottom="-45"/>
</LinearLayout>
This example uses a default style to decorate the plot. The full list of XML styleable attributes is
available here. While new attributes are added regularly,
not all configurable properties are yet available.
If something you need is missing, use Fig Syntax
directly within your Plot's XML, prefixing each property with "androidPlot". Example:
androidPlot.title="My Plot"
Create an Activity
Now let's create an Activity to display the XYPlot we just defined in simple_xy_plot_example.xml.
The basic steps are:
Create an instance of Series and populate it with data to be displayed.
Register one or more series with the plot instance, along with a Formatter describing how the series should look when drawn.
Draw the Plot
Since we're working with XY data, we’ll use XYPlot, SimpleXYSeries (which is an
implementation of the XYSeries interface) and LineAndPointFormatter:
import android.app.Activity;
import android.graphics.*;
import android.os.Bundle;
import com.androidplot.util.PixelUtils;
import com.androidplot.xy.SimpleXYSeries;
import com.androidplot.xy.XYSeries;
import com.androidplot.xy.*;
import java.text.FieldPosition;
import java.text.Format;
import java.text.ParsePosition;
import java.util.*;

/**
 * A simple XYPlot
 */
public class SimpleXYPlotActivity extends Activity {

    private XYPlot plot;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.simple_xy_plot_example);

        // initialize our XYPlot reference:
        plot = (XYPlot) findViewById(R.id.plot);

        // create a couple arrays of y-values to plot:
        final Number[] domainLabels = {1, 2, 3, 6, 7, 8, 9, 10, 13, 14};
        Number[] series1Numbers = {1, 4, 2, 8, 4, 16, 8, 32, 16, 64};
        Number[] series2Numbers = {5, 2, 10, 5, 20, 10, 40, 20, 80, 40};

        // turn the above arrays into XYSeries':
        // (Y_VALS_ONLY means use the element index as the x value)
        XYSeries series1 = new SimpleXYSeries(
                Arrays.asList(series1Numbers), SimpleXYSeries.ArrayFormat.Y_VALS_ONLY, "Series1");
        XYSeries series2 = new SimpleXYSeries(
                Arrays.asList(series2Numbers), SimpleXYSeries.ArrayFormat.Y_VALS_ONLY, "Series2");

        // create formatters to use for drawing a series using LineAndPointRenderer
        // and configure them from xml:
        LineAndPointFormatter series1Format =
                new LineAndPointFormatter(this, R.xml.line_point_formatter_with_labels);
        LineAndPointFormatter series2Format =
                new LineAndPointFormatter(this, R.xml.line_point_formatter_with_labels_2);

        // add a "dash" effect to the series2 line:
        series2Format.getLinePaint().setPathEffect(new DashPathEffect(new float[] {
                // always use DP when specifying pixel sizes, to keep things consistent across devices:
                PixelUtils.dpToPix(20),
                PixelUtils.dpToPix(15)}, 0));

        // just for fun, add some smoothing to the lines:
        // see: http://androidplot.com/smooth-curves-and-androidplot/
        series1Format.setInterpolationParams(
                new CatmullRomInterpolator.Params(10, CatmullRomInterpolator.Type.Centripetal));
        series2Format.setInterpolationParams(
                new CatmullRomInterpolator.Params(10, CatmullRomInterpolator.Type.Centripetal));

        // add a new series' to the xyplot:
        plot.addSeries(series1, series1Format);
        plot.addSeries(series2, series2Format);

        plot.getGraph().getLineLabelStyle(XYGraphWidget.Edge.BOTTOM).setFormat(new Format() {
            @Override
            public StringBuffer format(Object obj, StringBuffer toAppendTo, FieldPosition pos) {
                int i = Math.round(((Number) obj).floatValue());
                return toAppendTo.append(domainLabels[i]);
            }

            @Override
            public Object parseObject(String source, ParsePosition pos) {
                return null;
            }
        });
    }
}
One potentially confusing section of the code above is the initialization of the LineAndPointFormatter instances. You probably noticed that they take a mysterious reference to an XML resource file. This is actually using Fig to configure the instance properties from XML.
If you'd prefer to avoid the XML and keep everything in Java, just replace the code:
LineAndPointFormatter series1Format =
new LineAndPointFormatter(this, R.xml.line_point_formatter_with_labels);
with:
LineAndPointFormatter series1Format = new LineAndPointFormatter(Color.RED, Color.GREEN, Color.BLUE, null);
In general, XML configuration should be used over programmatic configuration when possible, as it provides more flexibility in terms of defining properties by screen density etc. For more details on how to programmatically configure Formatters etc., consult the latest Javadocs.
Continuing with the original example above, add these files to your /res/xml directory:
/res/xml/line_point_formatter_with_labels.xml
<?xml version="1.0" encoding="utf-8"?>
<config
linePaint.strokeWidth="5dp"
linePaint.color="#00AA00"
vertexPaint.color="#007700"
vertexPaint.strokeWidth="20dp"
fillPaint.color="#00000000"
pointLabelFormatter.textPaint.color="#CCCCCC"/>
/res/xml/line_point_formatter_with_labels_2.xml
<?xml version="1.0" encoding="utf-8"?>
<config
linePaint.strokeWidth="5dp"
linePaint.color="#0000AA"
vertexPaint.strokeWidth="20dp"
vertexPaint.color="#000099"
fillPaint.color="#00000000"
pointLabelFormatter.textPaint.color="#CCCCCC"/>