Flutter SpriteWidget: how to use a sprite sheet - Android

I have been trying to make an animation work with SpriteWidget, but I can't see how to make it work.
I have the following:
await _fireworksImageMap.load(<String>[
  'assets/sprites/fireworks/fireworks_sprite_sheet_0.png',
  'assets/sprites/fireworks/fireworks_sprite_sheet_1.png',
  'assets/sprites/fireworks/fireworks_sprite_sheet_2.png',
  'assets/sprites/fireworks/fireworks_sprite_sheet_3.png',
]);

// Load the sprite sheet
String json = await _bundle.loadString('assets/sprites/fireworks/fireworks_sprite_sheet_0.json');
fireworksSpriteSheet = new SpriteSheet(_fireworksImageMap['assets/sprites/fireworks/fireworks_sprite_sheet_0.png'], json);
This is basically what I have. I do, however, have three more JSON sheets for that animation: PNGs 0 to 3 are all part of a single animation. I know I have to fit this into a SpriteWidget(), but I have no idea how to get this to return an animation, and certainly not from several sprite sheets.
I hope someone can help me with some directions or a sample. There is hardly any information about SpriteWidget, apart from the samples on GitHub, which give very little detail.

Related

Mapbox Navigation via Customised Routes

So Mapbox provides an awesome Navigation SDK for Android. What I have been trying to do is create my own routes, representing each point as a Feature in a GeoJSON file, and then pass them to the MapMatching module to get directions that I can then pass to the Navigation Engine.
My solution consists of two main parts. The first involves iterating through the points I want navigation to go through, adding them as input to the .coordinates element of MapboxMapMatching.builder(), and subsequently converting the result with .toDirectionRoute(), per the Mapbox instructions and example here: https://www.mapbox.com/android-docs/java/examples/use-map-matching/
private void getWaypointRoute(List<Point> features) {
    originPosition = features.get(0);
    destinationPosition = features.get(features.size() - 1);
    MapboxMapMatching.builder()
            .accessToken(Mapbox.getAccessToken())
            .coordinates(features)
            .steps(true) // Determines whether to return steps and turn-by-turn instructions.
            .voiceInstructions(true)
            .bannerInstructions(true)
            .profile(DirectionsCriteria.PROFILE_DRIVING)
            .build().enqueueCall(new Callback<MapMatchingResponse>() {
                @Override
                public void onResponse(Call<MapMatchingResponse> call, Response<MapMatchingResponse> response) {
                    if (response.body() == null) {
                        Log.e(TAG, "Map matching has failed.");
                        return;
                    }
                    if (response.isSuccessful()) {
                        currentRoute = response.body().matchings().get(0).toDirectionRoute();
The second bit involves just passing 'currentRoute' to the NavigationLauncher as shown below:
NavigationLauncherOptions options = NavigationLauncherOptions.builder()
        .origin(origin)
        .destination(destination)
        .directionsRoute(currentRoute)
        .shouldSimulateRoute(simulateRoute)
        .enableOffRouteDetection(false)
        .build();

// Call this method with Context from within an Activity
NavigationLauncher.startNavigation(MainActivity.this, options);
An example of the route can be seen here: Android Simulator Snapshot with Route. Each point along the route is an intersection and corresponds to a feature in my GeoJSON file.
The problem arises when I launch the navigation. Every time, either in the simulator or on a real device, each point is interpreted as a destination, so the voice command says 'You have reached your first (second, third, etc.) destination'. I find this annoying, as I would like a single route with one destination and that's it. I just want these points so I have my own custom path, instead of the shortest path typically returned by routing applications. I tried to avoid the problem by turning voiceInstructions off, but then the system goes bananas and the navigation screen moves to lat, lng (0, 0), which is pretty much somewhere west of Africa.
Any help on how I could resolve this would be greatly appreciated, and I would be happy to buy a beer or two for the person who provides the right answer. I have reached out to Mapbox Support as well, but we have not found an answer to the problem, so I asked them to escalate it internally to their engineering team: although the problem I am solving is not uncommon, I believe it has not been tested much by developers. Cheers!
So here I am, after the kind support of Mapbox Support and Rafa Gutierrez, able to answer this post myself.
The problem arises because MapboxMapMatching automatically sets every .coordinates entry as a waypoint. If, instead, one explicitly edits the waypoints variable to contain only two waypoints, origin and destination, then the system is able to process the input customised route without treating each input coordinate as a waypoint. The code example below hopefully clarifies this:
MapboxMapMatching.builder()
        .accessToken(Mapbox.getAccessToken())
        .coordinates(lineStringRep.coordinates())
        .waypoints(OD)
        .steps(true)
        .voiceInstructions(true)
        .bannerInstructions(true)
        .profile(DirectionsCriteria.PROFILE_DRIVING)
        .build().enqueueCall(new Callback<MapMatchingResponse>()
where OD is an array of integers holding the indices of the first (origin) and last (destination) of your coordinates:

OD[0] = 0;
OD[1] = features.size() - 1;
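Putting the two pieces together, the corrected helper might look like the sketch below. This is only a sketch: currentRoute, features and TAG are the names from the snippets above, and the onFailure stub is an assumption added for completeness.

private void getWaypointRoute(List<Point> features) {
    // Only the first and last coordinates should act as waypoints.
    Integer[] OD = new Integer[] { 0, features.size() - 1 };
    MapboxMapMatching.builder()
            .accessToken(Mapbox.getAccessToken())
            .coordinates(features)
            .waypoints(OD)
            .steps(true)
            .voiceInstructions(true)
            .bannerInstructions(true)
            .profile(DirectionsCriteria.PROFILE_DRIVING)
            .build().enqueueCall(new Callback<MapMatchingResponse>() {
                @Override
                public void onResponse(Call<MapMatchingResponse> call, Response<MapMatchingResponse> response) {
                    if (response.isSuccessful() && response.body() != null) {
                        // One matched route from origin to destination,
                        // with no intermediate "destinations" announced.
                        currentRoute = response.body().matchings().get(0).toDirectionRoute();
                    } else {
                        Log.e(TAG, "Map matching has failed.");
                    }
                }

                @Override
                public void onFailure(Call<MapMatchingResponse> call, Throwable throwable) {
                    Log.e(TAG, "Map matching request error", throwable);
                }
            });
}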

Generate and export point cloud from Project Tango

After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports it to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example from Google's GitHub.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values; the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3 from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine; no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud, everything is messed up: the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what PoseData exactly does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly, but the data you get this way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved this:
I started from java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then write those coordinates with respect to the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example I linked above):
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // IMU-to-color-camera transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU-to-device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU-to-depth-camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then, when you get the point cloud, you have to "normalize" it. Using your extrinsics this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        // Read the next x, y, z triple and transform it into the base frame.
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough: now you have a point cloud with respect to your base frame of reference.
If you superimpose two or more of these "normalized" clouds, you can get a 3D representation of your room.
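For reference, here is a minimal sketch of how normalize() might be wired into the depth callback. It assumes an mExtrinsics field filled by setupExtrinsics() and the pointCloud list from the question; the frame pair and the getPoseAtTime(timestamp, framePair) lookup are standard Tango Java API usage.

@Override
public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
            TangoPoseData.COORDINATE_FRAME_DEVICE);
    // Pose of the device at the moment this depth frame was captured.
    TangoPoseData devicePose = mTango.getPoseAtTime(xyzIj.timestamp, framePair);
    // Transform the raw depth points into the base frame and accumulate them.
    pointCloud.addAll(normalize(xyzIj, devicePose, mExtrinsics));
}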
There is another way to do this with a rotation matrix, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for a real-time 3D reconstruction application.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up your environment and start using JNI (in fact, I'm stuck at the moment).
Drifting
There is still a problem when I turn around with the device: the point cloud seems to spread out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of many very small errors in estimating your pose which, together, cause a large error in your pose relative to the world. For instance, if you take your Tango device and walk in a circle while tracking your TangoPoseData, and then draw your trajectory in a spreadsheet or whatever you want, you'll notice that the tablet never returns to its starting point, because it is drifting away.
The solution to that is using Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers a lot of points and gives you a nice introduction.
Using Area Learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. In this way you tell your Tango to estimate its pose not with respect to where it was when you launched the app, but with respect to some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use this FRAME_PAIRS list as usual.
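For instance, the list is handed to Tango when registering the update listener. A sketch, assuming the usual OnTangoUpdateListener callbacks of the Java API (the exact callback set varies slightly by SDK version):

mTango.connectListener(FRAME_PAIRS, new Tango.OnTangoUpdateListener() {
    @Override
    public void onPoseAvailable(TangoPoseData pose) {
        // pose is now expressed wrt the area description frame
    }

    @Override
    public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
        // depth frames, as before
    }

    @Override
    public void onFrameAvailable(int cameraId) { }

    @Override
    public void onTangoEvent(TangoEvent event) { }
});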
Then you have to modify your TangoConfig to tell Tango to use Area Learning, via the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
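Once the config is built, it is passed to Tango when connecting (standard API usage; tango is the same instance used to obtain the config above):

// Connect with drift correction enabled; pose data is corrected from here on.
tango.connect(config);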
Using this technique you'll get rid of that spreading.
PS
In the talk I linked above, at around 22:35, they show how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key does not exist anymore (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.

Create a Manga App

I've asked this before, but apparently I was too broad in my description, so I'll give it another try. I'm using the Flandmark library for facial recognition of a person: figuring out where their eyes, nose and mouth are. After that, what I want to do is generate a manga image of the person. I'm not sure how to do this. The first way I thought of was using a large database of manga images of specific areas, such as the eyes, and mapping them onto the original image. The question is: is there a way I can make the image look like a manga image in terms of background, colours, etc.?
The first thing I thought would be useful is to get the size of the eyes and the width of the mouth. This is done using this part of the Flandmark code:
flandmark_detect(input, bbox, model, landmarks);

// Display landmarks
cvRectangle(orig, cvPoint(bbox[0], bbox[1]), cvPoint(bbox[2], bbox[3]), CV_RGB(255, 0, 0));
cvRectangle(orig, cvPoint(model->bb[0], model->bb[1]), cvPoint(model->bb[2], model->bb[3]), CV_RGB(0, 0, 255));
cvCircle(orig, cvPoint((int)landmarks[0], (int)landmarks[1]), 3, CV_RGB(0, 0, 255), CV_FILLED);
for (int i = 2; i < 2 * model->data.options.M; i += 2)
{
    cvCircle(orig, cvPoint(int(landmarks[i]), int(landmarks[i + 1])), 3, CV_RGB(255, 0, 0), CV_FILLED);
}
Any help would be appreciated, as I don't know the best way to do this and I'm really stuck. Thanks.

Projecting unprojected radar images into osmdroid

I have written a radar weather app using osmdroid for map tiles, manually overlaying NOAA RIDGE radar data. Everything is working great, except that the radar images are unprojected while the OpenStreetMap tiles are in Web Mercator projection. The weather lies within the bounds it should, but the data is distorted.
I see three ways to fix this (in order of preference), but am having trouble with all three:
1) Find a source of radar data already projected in Mercator. Hours of Googling later, I've found nothing.
2) Programmatically reproject the images right after I download them. Does anyone know a good API for this?
3) Project them on the fly, perhaps with OpenLayers. I'm reading that OpenLayers can reproject, but can it be used on top of an osmdroid MapView?
Any ideas? Thanks for any help
Mike
GDAL is the way to go. There is no official Android build that I know of; however, some people have been successful in getting it running on Android. For example, Nutiteq has a build in the libs folder of their AdvancedMap3D sample project. Put the contents of both armeabi folders in your project's libs folder and you should be able to access the GDAL packages.
Then take a look at the GDAL in Java page. Look at the gdalinfo.java sample to get a feel for how to load and examine the parts of a GDAL dataset. To reproject your dataset, you will do something along the lines of:
// Target projection: Mercator on WGS84, expressed as WKT.
SpatialReference sr = new SpatialReference();
sr.ImportFromProj4("+proj=merc +datum=WGS84");
String result[] = new String[1];
sr.ExportToPrettyWkt(result, 1);
String oldProjection = mDataset.GetProjection();
String newProjection = result[0];
// Warp the source dataset into the new projection (as an in-memory VRT)...
Dataset newDataset = gdal.AutoCreateWarpedVRT(mDataset, oldProjection, newProjection, gdalconst.GRA_NearestNeighbour, 0.0);
// ...then save a compressed copy to disk.
Dataset savedDataset = mDriver.CreateCopy(outpath, newDataset, 0, new String[] { "COMPRESS=LZW", "PREDICTOR=2" }, null, null);
newDataset.delete();
savedDataset.delete();
You may need to make a few adjustments, but that should get you most of the way there.
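The snippet above assumes mDataset and mDriver already exist. A minimal sketch of that setup, with a hypothetical input path (gdal.AllRegister(), gdal.Open() and gdal.GetDriverByName() are the standard GDAL Java bindings):

import org.gdal.gdal.Dataset;
import org.gdal.gdal.Driver;
import org.gdal.gdal.gdal;
import org.gdal.gdalconst.gdalconst;

// Register all format drivers once, before any other GDAL call.
gdal.AllRegister();
// Open the downloaded radar image read-only.
Dataset mDataset = gdal.Open("/sdcard/radar/latest.tif", gdalconst.GA_ReadOnly);
// Driver used by CreateCopy() to write the warped result, e.g. GeoTIFF.
Driver mDriver = gdal.GetDriverByName("GTiff");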

AS3 AIR (iOS and Android) CameraRoll issue

I've been trying to sort out an issue for a week or so now, and have Googled to no avail. I'm currently working on an iOS/Android app that has a feature to take a screenshot in-game and have it show up in the mobile device's gallery.
I'm using the CameraRoll object, and the issue is that some objects on screen have smoothing applied, but the CameraRoll screenshot ignores this, which leaves some objects in the resulting screenshot with jaggies.
I've found a number of cries for help on the same issue while googling, but no answers.
Any help is much appreciated.
Jaggies in Flash are common, since smoothing on bitmaps is disabled by default (it is more CPU intensive). I'd recommend creating a new bitmap from the CameraRoll MediaEvent.SELECT event. Inside, it returns event.data, which is a MediaPromise object. Inside that, you should find a read-only file property where you should be able to find the image.
Then it's just a matter of creating your new image with smoothing.
var img:Bitmap = new Bitmap();
img.bitmapData = file.bitmapData;
img.smoothing = true;
addChild(img);
I've never tried this on mobile before, but it's a common issue which I believe you're encountering.
Addendum:
If you're having an issue with the system-based screenshot services, you could create your own using pure AS3. The logic is that AS3 does a pixel-by-pixel block copy of the stage (thereby respecting the smoothing values of your images).
Try this:
var myBitmapData:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight);
myBitmapData.draw(stage); // block-copies the stage, respecting smoothing
if (CameraRoll.supportsAddBitmapData)
    new CameraRoll().addBitmapData(myBitmapData); // save it to the device's gallery
