Sync dragging and zooming of two MPAndroidChart graphs - Android

I am using MPAndroidChart and I have a requirement to sync the dragging and zooming of two graphs: if I zoom in, zoom out, or drag any one of the graphs, the other graph should be zoomed or dragged to the same extent on the X-axis.
Example: if I drag the upper graph to the 12th point on the X-axis, then the lower graph should also be dragged to the 12th point on the X-axis automatically.
I need some ideas on how to do this; I am reasonably familiar with the MPAndroidChart library.

I wrote a function to do this and call it from the drag and scale listeners. It works perfectly.
private void syncCharts(Chart mainChart, LineChart[] otherCharts) {
    Matrix mainMatrix;
    float[] mainVals = new float[9];
    Matrix otherMatrix;
    float[] otherVals = new float[9];
    mainMatrix = mainChart.getViewPortHandler().getMatrixTouch();
    mainMatrix.getValues(mainVals);

    for (LineChart tempChart : otherCharts) {
        otherMatrix = tempChart.getViewPortHandler().getMatrixTouch();
        otherMatrix.getValues(otherVals);
        // Copy only the X components so the charts stay in sync horizontally.
        otherVals[Matrix.MSCALE_X] = mainVals[Matrix.MSCALE_X];
        otherVals[Matrix.MTRANS_X] = mainVals[Matrix.MTRANS_X];
        otherVals[Matrix.MSKEW_X] = mainVals[Matrix.MSKEW_X];
        otherMatrix.setValues(otherVals);
        tempChart.getViewPortHandler().refresh(otherMatrix, tempChart, true);
    }
}

Use the OnChartGestureListener.
https://github.com/PhilJay/MPAndroidChart/wiki/Interaction-with-the-Chart
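For instance, a minimal sketch of the wiring (assuming the syncCharts method and the mainChart/otherCharts fields from the answer above): call syncCharts from the translate and scale callbacks of OnChartGestureListener.
mainChart.setOnChartGestureListener(new OnChartGestureListener() {
    @Override
    public void onChartTranslate(MotionEvent me, float dX, float dY) {
        syncCharts(mainChart, otherCharts); // mirror dragging
    }
    @Override
    public void onChartScale(MotionEvent me, float scaleX, float scaleY) {
        syncCharts(mainChart, otherCharts); // mirror zooming
    }
    // The remaining callbacks are required by the interface but unused here.
    @Override public void onChartGestureStart(MotionEvent me, ChartTouchListener.ChartGesture gesture) {}
    @Override public void onChartGestureEnd(MotionEvent me, ChartTouchListener.ChartGesture gesture) {}
    @Override public void onChartLongPressed(MotionEvent me) {}
    @Override public void onChartDoubleTapped(MotionEvent me) {}
    @Override public void onChartSingleTapped(MotionEvent me) {}
    @Override public void onChartFling(MotionEvent me1, MotionEvent me2, float velocityX, float velocityY) {}
});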

Related

Combine image with video stream on Android

I am investigating Augmented Reality on Android.
I am using ARCore and Sceneform within an Android application.
I have tried out the sample projects and now would like to develop my own application.
One effect I would like to achieve is to combine/overlay an image (say .jpeg or .png) with a live feed from the device's onboard camera.
The image will have a transparent background that allows the user to see the live feed and the image simultaneously.
However, I do not want the overlaid image to be a fixed/static watermark: when the user zooms in, zooms out, or pans, the overlaid image must also zoom and pan accordingly.
I do not wish the overlaid image to become 3D or anything of that nature.
Is this effect possible with Sceneform? Or will I need to use other third-party libraries and/or tools to achieve the desired results?
UPDATE
The user is drawing on a blank sheet of white paper. The sheet of paper is orientated so that the user is comfortably drawing (either left or right handed). The user is free to move the sheet of paper while they complete their image.
An Android device is held above the sheet of paper filming the user drawing their selected image.
The live camera feed is being cast to a large TV or monitor screen.
To aid the user they have selected a static image to "trace" or "Copy".
This image is chosen on the Android device and is being combined with the live camera stream within the Android application.
The user can zoom in and out on their drawing, and the combined live stream and selected static image will also zoom in and out; this enables the user to make an accurate copy of the selected static image by drawing "free hand".
When the user looks directly at the sheet of paper, they only see their drawing.
When the user views the cast live stream of them drawing on the TV or monitor they see their drawing and the chosen static image superimposed. The user can control the transparency of the static image to assist them in making an accurate copy of it.
I think what you are looking for is to use AR to display an image so that the image stays in place, for example over a sheet of paper in order to act as a guide for drawing a copy of the image on the paper.
There are two parts to this: the first is to locate the sheet of paper, and the second is to place the image over the paper and keep it there as the phone moves around.
Locating the sheet of paper can be done just by detecting the plane with the paper (having some contrast, pattern, or something vs. a plain white sheet of paper will help), then tapping where the center of the page should be. This is done in the HelloSceneform sample.
If you want a more accurate bounding of the paper, you could tap the 4 corners of the paper and then create anchors there. To do this, register a plane-tapped listener in onCreate():
arFragment.setOnTapArPlaneListener(this::onPlaneTapped);
Then in onPlaneTapped, create the 4 anchorNodes. Once you have 4, initialize the drawing to be displayed.
private void onPlaneTapped(HitResult hitResult, Plane plane, MotionEvent event) {
    if (cornerAnchors.size() != 4) {
        AnchorNode corner = createCornerNode(hitResult.createAnchor());
        arFragment.getArSceneView().getScene().addChild(corner);
        cornerAnchors.add(corner);
    }
    if (cornerAnchors.size() == 4 && drawingNode == null) {
        initializeDrawing();
    }
}
To initialize the drawing, create a Sceneform Texture from the bitmap or drawable. This can be from a resource or a file URL. You want the texture to show the whole image, and scale as the model holding it is resized.
private void initializeDrawing() {
    Texture.Sampler sampler = Texture.Sampler.builder()
            .setWrapMode(Texture.Sampler.WrapMode.CLAMP_TO_EDGE)
            .setMagFilter(Texture.Sampler.MagFilter.NEAREST)
            .setMinFilter(Texture.Sampler.MinFilter.LINEAR_MIPMAP_LINEAR)
            .build();
    Texture.builder()
            .setSource(this, R.drawable.logo_google_developers)
            .setSampler(sampler)
            .build()
            .thenAccept(texture -> {
                MaterialFactory.makeTransparentWithTexture(this, texture)
                        .thenAccept(this::buildDrawingRenderable);
            });
}
The model to hold the texture is just a flat quad sized to the smallest dimension between the corners. This is the same logic as laying out a quad using OpenGL.
private void buildDrawingRenderable(Material material) {
    Integer[] indices = {
            0, 1, 3, 3, 1, 2
    };

    // Compute the bounding box of the corner anchors to size the quad.
    float min_x = Float.MAX_VALUE;
    float max_x = Float.MIN_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = Float.MIN_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }
    float width = Math.abs(max_x - min_x);
    float height = Math.abs(max_z - min_z);
    float extent = Math.min(width / 2, height / 2);

    Vertex[] vertices = {
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 1)) // top left
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 1)) // top right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 0)) // bottom right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 0)) // bottom left
                    .build()
    };

    RenderableDefinition.Submesh[] submeshes = {
            RenderableDefinition.Submesh.builder()
                    .setMaterial(material)
                    .setTriangleIndices(Arrays.asList(indices))
                    .build()
    };
    RenderableDefinition def = RenderableDefinition.builder()
            .setSubmeshes(Arrays.asList(submeshes))
            .setVertices(Arrays.asList(vertices))
            .build();
    ModelRenderable.builder().setSource(def)
            .setRegistryId("drawing").build()
            .thenAccept(this::positionDrawing);
}
The last part is to position the quad in the center of the corners, and create a Transformable node so the image can be nudged into position, rotated, or scaled to be the perfect size.
private void positionDrawing(ModelRenderable drawingRenderable) {
    // Calculate the center of the corners.
    float min_x = Float.MAX_VALUE;
    float max_x = Float.MIN_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = Float.MIN_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }
    Vector3 center = new Vector3((min_x + max_x) / 2f,
            cornerAnchors.get(0).getWorldPosition().y, (min_z + max_z) / 2f);

    Anchor centerAnchor = null;
    Vector3 screenPt = arFragment.getArSceneView().getScene().getCamera().worldToScreenPoint(center);
    List<HitResult> hits = arFragment.getArSceneView().getArFrame().hitTest(screenPt.x, screenPt.y);
    for (HitResult hit : hits) {
        if (hit.getTrackable() instanceof Plane) {
            centerAnchor = hit.createAnchor();
            break;
        }
    }

    AnchorNode centerNode = new AnchorNode(centerAnchor);
    centerNode.setParent(arFragment.getArSceneView().getScene());
    drawingNode = new TransformableNode(arFragment.getTransformationSystem());
    drawingNode.setParent(centerNode);
    drawingNode.setRenderable(drawingRenderable);
}
The intended AR reference image can be scaled using AR objects as points for sizing the template for the user.
More complex AR images will not work easily, since the AR image is overlaid on top of the user's tracing, and this will obstruct the tip of their pen/pencil.
My solution is to chroma key the white paper. This will replace the white paper with the chosen image or live feed. Moving the paper around as you specified would be an issue, unless you have a means of tracking the paper's position.
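To illustrate the keying idea only, here is a hypothetical per-pixel test (this is not a Sceneform API; the function name and threshold are mine) that replaces near-white camera pixels with the overlay image's pixels:
static int chromaKeyWhite(int cameraPixel, int overlayPixel, int threshold) {
    int r = (cameraPixel >> 16) & 0xFF;
    int g = (cameraPixel >> 8) & 0xFF;
    int b = cameraPixel & 0xFF;
    // Treat sufficiently bright pixels as "paper" and key them out.
    boolean nearWhite = r > threshold && g > threshold && b > threshold;
    return nearWhite ? overlayPixel : cameraPixel;
}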
As you can see in this example, AR objects are in front, while the chroma key is the background. The tracing surface (paper) would be in the center.
A reference to this example is in the link below.
RJ
YouTube - AR tracked environment

Spine 2D coordinate system in libGDX

I don't know what I'm doing wrong... I think I'm having a brain freeze. I am really struggling with converting my Spine object's pixel coordinates to world coordinates. I have recently converted all my code to work with the Ashley ECS, and I can't seem to get my Spine object to display in the correct position.
I have a system which handles the rendering and positioning of my Spine object, but I can't seem to get it displaying in the correct position.
I'm hoping someone can point me in the correct direction!
I have included my code for the Spine rendering system... hope you can help!
I want to place the Spine object at the same position as my Box2D object, which uses world coordinates, but Spine uses pixel coordinates. I have also included an image to show you what is happening. (The grey square near the middle right of the screen is where I want my Spine object to be!)
(in-game image)
public class SpineRenderSystem extends IteratingSystem {
    private static final String TAG = com.chaingang.freshstart.systems.SpineRenderSystem.class.getName();
    private PolygonSpriteBatch pBatch;
    SkeletonMeshRenderer skeletonMeshRenderer;
    private boolean process = true;
    BodyComponent bodyComp;
    Spine2DComponent spineComp;

    public SpineRenderSystem(PolygonSpriteBatch pBatch) {
        super(Family.all(RenderableComponent.class, Spine2DComponent.class, PositionComponent.class).get());
        this.pBatch = pBatch;
        skeletonMeshRenderer = new SkeletonMeshRenderer();
        skeletonMeshRenderer.setPremultipliedAlpha(true);
    }

    @Override
    protected void processEntity(Entity entity, float deltaTime) {
        bodyComp = Mappers.body.get(entity);
        spineComp = Mappers.spine2D.get(entity);
        float offsetX = 100.00f / Gdx.graphics.getWidth();  // 100 equals world width
        float offsetY = 50.00f / Gdx.graphics.getHeight();  // 50 equals world height
        pBatch.begin();
        spineComp.skeleton.setX(bodyComp.body.getPosition().x / offsetX);
        spineComp.skeleton.setY(bodyComp.body.getPosition().y / offsetY);
        skeletonMeshRenderer.draw(pBatch, spineComp.skeleton);
        //spineComp.get(entity).skeleton.setFlipX(player.dir == -1);
        spineComp.animationState.apply(spineComp.skeleton);
        spineComp.skeleton.updateWorldTransform();
        pBatch.end();
    }
}
What I do for my Spine renders is look at the bounding box size in pixels in Spine. This is usually on the order of hundreds of pixels, but if you are working with Box2D scales, it is recommended that you think of 1 as 1 meter.
With this in mind, I will scale a human Spine animation with a hip y coordinate of 200 pixels by dividing by 200, or thereabouts.
Once you have this ratio, then when you build your Spine Skeleton you can do this (sorry, I do all my libGDX stuff in Kotlin now):
val atlasLoader = AtlasAttachmentLoader(atlas)
val skeletonJson = SkeletonJson(atlasLoader)
skeletonJson.scale = 1/200f
Then you might also want to handle an offset for rendering your Spine object, as I see you are trying to do, because your root bone is possibly in the center of your Spine object (a hip, for example). However, you are doing a division operation, which I guess is exploratory, as offsets should be an addition or subtraction. Here is how I do it using the Spine pixel coordinates (again, sorry for the Kotlin, but I like it):
//In some object or global state we have this stuff
var skeleton: Skeleton
var skeletonRenderer = SkeletonRenderer<PolygonSpriteBatch>()
//Then in the rendering code
val offset = Vector2(0f,-200f)
val position = physicsRoot.position().add(offset)
skeleton.setPosition(position.x, position.y)
skeleton.updateWorldTransform()
skeletonRenderer.draw(batch, skeleton)
That should get your spine stuff working as you expect.
Have you heard of the method camera.project(worldCoordinates)? It might do what you are looking for: it takes world coordinates and turns them into screen coordinates. For the opposite you can use camera.unproject(screenCoordinates).
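A minimal sketch of that in libGDX (Java; camera, worldX, and worldY are placeholders for your own camera and Box2D position):
Vector3 tmp = new Vector3(worldX, worldY, 0f);
camera.project(tmp);   // tmp now holds screen coordinates (origin bottom-left)
camera.unproject(tmp); // converts screen coordinates back into world coordinates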

How to display X and Y axis for XYPlot in AndroidPlot

Background
I'm developing an app for Android that plots data as a line graph using AndroidPlot. Because of the nature of the data, it's important that it be pannable and zoomable. I'm using AndroidPlot's sample code on bitbucket for panning and zooming, modified to allow panning and zooming in both X and Y directions.
Everything works as desired except that there are no X and Y axis lines. It is very disorienting to look at the data without them. The grid helps, but there's no guarantee that grid lines will actually fall on the axes.
To remedy this I have tried adding two series, one that falls on just the X axis and the other on the Y. The problem with this is that if one zooms out too far the axes simply end, and it becomes apparent that I have applied a 'hack'.
Question
Is it possible to add X and Y axis lines to AndroidPlot? Or will my sad hack have to do?
I figured it out. It wasn't trivial; it took a joint effort with a collaborator and sucked up many hours of our time.
Starting with the sample mentioned in my question, I had to extend XYPlot (which I called GraphView) and override the onPreInit method. Note that I have two PointFs, minXY and maxXY, that are defined in my extended XYPlot and manipulated when I zoom or scroll.
@Override
protected void onPreInit() {
    super.onPreInit();
    final Paint axisPaint = new Paint();
    axisPaint.setColor(getResources().getColor(R.color.MY_AXIS_COLOR));
    axisPaint.setStrokeWidth(3); // or whatever stroke width you want

    XYGraphWidget oldWidget = getGraphWidget();
    XYGraphWidget widget = new XYGraphWidget(getLayoutManager(),
            this,
            new SizeMetrics(
                    oldWidget.getHeightMetric(),
                    oldWidget.getWidthMetric())) {
        // We now override XYGraphWidget methods
        RectF mGridRect;

        @Override
        protected void doOnDraw(Canvas canvas, RectF widgetRect)
                throws PlotRenderException {
            // In order to draw the x axis, we must obtain gridRect. I believe this is the only
            // way to do so, as the more convenient routes have private rather than protected access.
            mGridRect = new RectF(widgetRect.left + ((isRangeAxisLeft()) ? getRangeLabelWidth() : 1),
                    widgetRect.top + ((isDomainAxisBottom()) ? 1 : getDomainLabelWidth()),
                    widgetRect.right - ((isRangeAxisLeft()) ? 1 : getRangeLabelWidth()),
                    widgetRect.bottom - ((isDomainAxisBottom()) ? getDomainLabelWidth() : 1));
            super.doOnDraw(canvas, widgetRect);
        }

        @Override
        protected void drawGrid(Canvas canvas) {
            super.drawGrid(canvas);
            if (mGridRect == null) return;
            // minXY and maxXY are PointFs defined elsewhere. See my comment in the answer.
            if (minXY.y <= 0 && maxXY.y >= 0) { // Draw the x axis
                RectF paddedGridRect = getGridRect();
                // Note: GraphView.this is the extended XYPlot instance.
                XYStep rangeStep = XYStepCalculator.getStep(GraphView.this, XYAxisType.RANGE,
                        paddedGridRect, getCalculatedMinY().doubleValue(),
                        getCalculatedMaxY().doubleValue());
                double rangeOriginF = paddedGridRect.bottom;
                float yPix = (float) (rangeOriginF + getRangeOrigin().doubleValue() * rangeStep.getStepPix() /
                        rangeStep.getStepVal());
                // Keep things consistent with drawing the y axis even though drawRangeTick is public
                //drawRangeTick(canvas, yPix, 0, getRangeLabelPaint(), axisPaint, true);
                canvas.drawLine(mGridRect.left, yPix, mGridRect.right, yPix, axisPaint);
            }
            if (minXY.x <= 0 && maxXY.x >= 0) { // Draw the y axis
                RectF paddedGridRect = getGridRect();
                XYStep domainStep = XYStepCalculator.getStep(GraphView.this, XYAxisType.DOMAIN,
                        paddedGridRect, getCalculatedMinX().doubleValue(),
                        getCalculatedMaxX().doubleValue());
                double domainOriginF = paddedGridRect.left;
                float xPix = (float) (domainOriginF - getDomainOrigin().doubleValue() * domainStep.getStepPix() /
                        domainStep.getStepVal());
                // Unfortunately, drawDomainTick has private access in XYGraphWidget
                canvas.drawLine(xPix, mGridRect.top, xPix, mGridRect.bottom, axisPaint);
            }
        }
    };

    widget.setBackgroundPaint(oldWidget.getBackgroundPaint());
    widget.setMarginTop(oldWidget.getMarginTop());
    widget.setMarginRight(oldWidget.getMarginRight());
    widget.setPositionMetrics(oldWidget.getPositionMetrics());
    getLayoutManager().remove(oldWidget);
    getLayoutManager().addToTop(widget);
    setGraphWidget(widget);
    // More customizations can go here
}
And that was that. I sure wish this was built into AndroidPlot; it'll be nasty trying to fix this when it breaks in an AndroidPlot update...

Animate a specific sprite among multiple sprites on Android using AndEngine

I have multiple objects on my canvas, and after some condition I want some of my sprites to animate. Here is my code:
private AnimatedSprite[] sign;
sign = new AnimatedSprite[9];
// some loop code to create 9 signs
..
sign[index] = new AnimatedSprite(x, y, myregion);
..
Up to this point everything is OK; all the signs are in position. But when I want to animate some of the sprites, all of the sprites animate too. Here is the code:
while (signIndex < 9) {
    if (signIndex == winSlot[0] || signIndex == winSlot[1] || signIndex == winSlot[2]) {
        grupSign = null;
        grupSign = sign[signIndex];
        grupSign.animate(200, true);
    }
    signIndex++;
}
Does anyone know how to make only specific sprites animate?
My suggestion is to use the deepCopy() method when you create your animated sprite objects, as follows:
sign[index] = new AnimatedSprite(x, y, myregion.deepCopy());
The advantage of using deepCopy() is that a new texture region is created for each sprite, so the sprites no longer share a single region.
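As a minimal sketch, the creation loop from the question would then look something like this (x, y, and myregion as in the question's code):
AnimatedSprite[] sign = new AnimatedSprite[9];
for (int index = 0; index < sign.length; index++) {
    // Each sprite gets its own copy of the texture region, so calling
    // animate() on one sprite no longer animates the others.
    sign[index] = new AnimatedSprite(x, y, myregion.deepCopy());
}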

Drawing (filtering) 100k+ points to MapView in Android

I am trying to solve a problem with drawing a path from a huge (100k+) set of GeoPoints to a MapView on Android.
Firstly, I would like to say that I searched through StackOverflow a lot and haven't found an answer. The bottleneck of my code is not actually the drawing into the canvas, but the Projection.toPixels(GeoPoint, Point) and Rect.contains(point.x, point.y) methods. I am skipping points not visible on screen and displaying only every nth point according to the current zoom level. When the map is zoomed in, I want to display as accurate a path as possible, so I skip zero (or nearly zero) points; as a result, when finding visible points I need to call the projection method for every single point in the collection. And that is what really takes a lot of time (not seconds, but map panning is not fluid, and I am not testing it on an HTC Wildfire:)). I tried caching the calculated points, but since the points must be recalculated after every map pan/zoom, it hasn't helped at all.
I thought about using some kind of prune-and-search algorithm instead of iterating over the array, but I figured out that the input data is not sorted (I can't throw away any branch stacked between two invisible points). I could possibly solve that with a simple sort at the beginning, but I am still not sure whether even a logarithmic number of getProjection() and Rect.contains(point.x, point.y) calls, instead of a linear one, would solve the performance problem.
Below is my current code. Please help me if you know how to make this better. Thanks a lot!
public void drawPath(MapView mv, Canvas canvas) {
    displayed = false;
    tmpPath.reset();
    int zoomLevel = mapView.getZoomLevel();
    int skippedPoints = (int) Math.pow(2, (Math.max((19 - zoomLevel), 0)));
    int mPointsSize = mPoints.size();
    int mPointsLastIndex = mPointsSize - 1;
    int stop = mPointsLastIndex - skippedPoints;
    mapView.getDrawingRect(currentMapBoundsRect);
    Projection projection = mv.getProjection();
    for (int i = 0; i < mPointsSize; i += skippedPoints) {
        if (i > stop) {
            break;
        }
        // HERE IS THE PROBLEM I THINK - THIS METHOD AND THE IF CONDITION BELOW
        projection.toPixels(mPoints.get(i), point);
        if (currentMapBoundsRect.contains(point.x, point.y)) {
            if (!displayed) {
                Point tmpPoint = new Point();
                projection.toPixels(mPoints.get(Math.max(i - 1, 0)),
                        tmpPoint);
                tmpPath.moveTo(tmpPoint.x, tmpPoint.y);
                tmpPath.lineTo(point.x, point.y);
                displayed = true;
            } else {
                tmpPath.lineTo(point.x, point.y);
            }
        } else if (displayed) {
            tmpPath.lineTo(point.x, point.y);
            displayed = false;
        }
    }
    canvas.drawPath(tmpPath, this.pathPaint);
}
So I figured out how to make it all much faster!
I will post it here; somebody could possibly find it useful in the future.
It emerged that the use of projection.toPixels() can really harm application performance. So I figured out that, rather than taking every single GeoPoint, converting it to a Point, and then checking whether it is contained in the map viewport, it is much better to compute the actual viewport radius of the map as follows:
mapView.getGlobalVisibleRect(currentMapBoundsRect);
GeoPoint point1 = projection.fromPixels(currentMapBoundsRect.centerX(), currentMapBoundsRect.centerY());
GeoPoint point2 = projection.fromPixels(currentMapBoundsRect.left, currentMapBoundsRect.top);
float[] results2 = new float[3];
Location.distanceBetween(point1.getLatitudeE6()/1E6, point1.getLongitudeE6()/1E6, point2.getLatitudeE6()/1E6, point2.getLongitudeE6()/1E6, results2);
The radius is in results2[0].
Then I can take every single GeoPoint and compute the distance between it and the center of the map, mapView.getMapCenter(). Then I can compare the radius with the computed distance and decide whether or not to display the point.
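A minimal sketch of that filter, reusing mPoints and skippedPoints from the drawPath code above and the radius computed into results2[0]:
GeoPoint center = mapView.getMapCenter();
float[] dist = new float[1];
for (int i = 0; i < mPoints.size(); i += skippedPoints) {
    GeoPoint p = mPoints.get(i);
    Location.distanceBetween(
            center.getLatitudeE6() / 1E6, center.getLongitudeE6() / 1E6,
            p.getLatitudeE6() / 1E6, p.getLongitudeE6() / 1E6, dist);
    if (dist[0] <= results2[0]) {
        // The point lies within the viewport radius: add it to the path.
    }
}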
So that's it; I hope it will be helpful.
