How to rotate an ImageVector's pathData in Compose - Android

I am trying to implement an animation similar to this one with Jetpack Compose:
figure: https://dribbble.com/shots/4762799-Microinteraction-Exploration-002
For the play icon's transformation (from a triangle to a rectangle), it is not difficult to get it done with a path-morphing animation. After reading some gists, I found that I can build an ImageVector by supplying the pathData, which was new to me.
The tricky part is the one on the right: the rotation of the cube object. Maybe I should call it a roll-over instead, because the cube changes its pivot after the first -90 degrees.
I'm wondering whether I can create a VectorGroup for the cube's pathData and animate it by updating its group parameters.
See androidx.compose.ui.graphics.vector.ImageVector.Builder#addGroup:
ImageVector.Builder(defaultWidth = 100.dp, defaultHeight = 100.dp, viewportWidth = 100f, viewportHeight = 100f)
    .addGroup(name = "CubeRotation", rotate = params.rotate, pivotX = params.pivotX, pivotY = params.pivotY)
    .addPath(pathData = pathData, fill = SolidColor(Color.LightGray))
    .clearGroup()
    .build()
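For context, this is roughly how I imagine driving those group parameters from an animated fraction (untested sketch; computeGroupParams and cubeImageVector are placeholder helpers of mine that wrap the code in this question):
@Composable
fun CubeIndicator(playing: Boolean) {
    // Animate a 0..1 fraction and rebuild the ImageVector with the current group parameters.
    val fraction = animateFloatAsState(targetValue = if (playing) 1f else 0f).value
    val params = computeGroupParams(fraction) // placeholder: maps the fraction to a GroupParams (the lerp code below)
    val vector = cubeImageVector(params)      // placeholder: the ImageVector.Builder/addGroup call shown above
    Image(imageVector = vector, contentDescription = null)
}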
Here I'm trying to update the pivotX halfway through the animation, so I divide it into two sections:
// rotate -90 degrees twice with different pivots
val rotate = if (fraction <= 0.5f) {
    lerp(0f, -180f, fraction)
} else {
    lerp(0f, -180f, fraction - 0.5f)
}
// change the pivotX halfway through
val pivotX = if (fraction <= 0.5f) 52f else 30f
GroupParams(
    pivotX = pivotX,
    pivotY = 100f,
    rotate = rotate,
)
Frustratingly, after the first -90 degree rotation, the transformation does NOT persist; in other words, it is no longer applied during the second half of the transition.
(Screen captures: fraction 0~0.5 and fraction 0.5~1.)
The code sample I've uploaded: IndicatorDrawable.kt
Can I implement the rotation just by updating the pathData?
To me, the VectorGroup in an ImageVector looks a lot like building an AnimatedVectorDrawable in XML. Or should I try a more traditional route, like an ObjectAnimator set in the View system?

Related

Reduce tracking window using google mlkit vision samples

I would like to reduce the barcode tracking window when using the Google ML Kit vision API. There are some answers here, but they feel a bit outdated.
I'm using google's sample: https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart
Currently, I try to figure out whether a barcode is inside my overlay box in the BarcodeScannerProcessor's onSuccess callback:
override fun onSuccess(barcodes: List<Barcode>, graphicOverlay: GraphicOverlay) {
    if (barcodes.isEmpty()) return
    for (barcode in barcodes) {
        val center = Point(graphicOverlay.imageWidth / 2, graphicOverlay.imageHeight / 2)
        val rectWidth = graphicOverlay.imageWidth * Settings.OverlayWidthFactor
        val rectHeight = graphicOverlay.imageHeight * Settings.OverlayHeightFactor
        val left = center.x - rectWidth / 2
        val top = center.y - rectHeight / 2
        val right = center.x + rectWidth / 2
        val bottom = center.y + rectHeight / 2
        val rect = Rect(left.toInt(), top.toInt(), right.toInt(), bottom.toInt())
        val contains = rect.contains(barcode.boundingBox!!)
        val color = if (contains) Color.GREEN else Color.RED
        graphicOverlay.add(BarcodeGraphic(graphicOverlay, barcode, "left: ${barcode.boundingBox!!.left}", color))
    }
}
Y-wise it works perfectly, but the X values from barcode.boundingBox, e.g. barcode.boundingBox.left, seem to have an offset. Is it based on what is calculated in GraphicOverlay?
I'm expecting the value below to be close to 0, but the offset is about 90 here:
Or perhaps it's more efficient to crop the image according to the box?
Actually, the bounding box is correct. The trick is that the image aspect ratio doesn't match the viewport aspect ratio, so the image is cropped horizontally. Try opening the settings (the gear in the top right corner) and choosing an appropriate resolution.
For example, take a look at these two screenshots. In the first one, the selected resolution (1080x1920) matches my phone's resolution, so the padding looks good (17px). In the second screenshot the aspect ratio is different (1.0 for the 720x720 resolution), therefore the image is cropped and the padding looks incorrect.
So the offset has to be transformed from image coordinates to screen coordinates. Under the hood, GraphicOverlay uses a matrix for this transformation. You can use the same matrix:
for (barcode in barcodes) {
    barcode.boundingBox?.let { bbox ->
        val offset = floatArrayOf(bbox.left.toFloat(), bbox.top.toFloat())
        graphicOverlay.transformationMatrix.mapPoints(offset)
        val leftOffset = offset[0]
        val topOffset = offset[1]
        ...
    }
}
The only thing is that the transformationMatrix is private, so you should add a getter to access it.
As you know, the camera preview size is configurable in the settings menu. This configured size determines the graphicOverlay dimensions.
On the other hand, the aspect ratio of the CameraSourcePreview (i.e. preview_view in activity_vision_live_preview.xml) that is shown on the screen does not necessarily equal the ratio of the graphicOverlay, because it depends on the size of the phone's screen and the height the parent ConstraintLayout allows it to occupy.
So, depending on the difference between the aspect ratios of graphicOverlay and preview_view, some part of the graphicOverlay might not be shown in the preview, horizontally or vertically.
There are some parameters inside GraphicOverlay that can help us adjust the left and top of the barcode's boundingBox so that they start from 0 in the visible area.
First of all, they need to be accessible from outside the GraphicOverlay class, so it is enough to write getter methods for them:
GraphicOverlay.java
public class GraphicOverlay extends View {
    ...
    /**
     * The factor of overlay View size to image size. Anything in the image coordinates need to be
     * scaled by this amount to fit with the area of overlay View.
     */
    public float getScaleFactor() {
        return scaleFactor;
    }

    /**
     * The number of vertical pixels needed to be cropped on each side to fit the image with the
     * area of overlay View after scaling.
     */
    public float getPostScaleHeightOffset() {
        return postScaleHeightOffset;
    }

    /**
     * The number of horizontal pixels needed to be cropped on each side to fit the image with the
     * area of overlay View after scaling.
     */
    public float getPostScaleWidthOffset() {
        return postScaleWidthOffset;
    }
}
Now it is possible to calculate the left and top gaps using these parameters, like the following:
BarcodeScannerProcessor.kt
class BarcodeScannerProcessor(
    context: Context
) : VisionProcessorBase<List<Barcode>>(context) {
    ...
    override fun onSuccess(barcodes: List<Barcode>, graphicOverlay: GraphicOverlay) {
        if (barcodes.isEmpty()) {
            Log.v(MANUAL_TESTING_LOG, "No barcode has been detected")
        }
        val leftDiff = graphicOverlay.run { postScaleWidthOffset / scaleFactor }.toInt()
        val topDiff = graphicOverlay.run { postScaleHeightOffset / scaleFactor }.toInt()
        for (i in barcodes.indices) {
            val barcode = barcodes[i]
            val color = Color.RED
            val text = "left: ${barcode.boundingBox!!.left - leftDiff} top: ${barcode.boundingBox!!.top - topDiff}"
            graphicOverlay.add(MyBarcodeGraphic(graphicOverlay, barcode, text, color))
            logExtrasForTesting(barcode)
        }
    }
    ...
}
Visual Result:
Here is the visual result of the output. As the pictures show, the gaps between the barcode's left/top and the left/top of the visible area now start from 0. In the left picture, the graphicOverlay is set to 480x640 (aspect ratio ≈ 1.3334), and in the right one to 360x640 (aspect ratio ≈ 1.7778). In both cases, on my phone, the CameraSourcePreview has a fixed size of 1440x2056 pixels (aspect ratio ≈ 1.4278), so the calculation truly reflects the position of the barcode in the visible area.
(Note that in one experiment the aspect ratio of the visible area is lower than that of the graphicOverlay, and in the other it is greater: 1.3334 < 1.4278 < 1.7778. The left and top values are adjusted accordingly in both cases.)

localRotation is not rotating around the cube center

Sceneform 1.15 on Android
I want the cube renderable to rotate around its own center.
val anchorNode = AnchorNode().apply {
    setParent(scene)
    worldPosition = Vector3(2f, 3f, 0f)
}
scene.addChild(anchorNode)
dieNode = Node().apply {
    setParent(anchorNode)
    localRotation = Quaternion.eulerAngles(Vector3(10f, 20f, 60f))
    name = "die"
    renderable = it // set inside the ModelRenderable loading callback
}
I made an animator that is supposed to only rotate the cube around its own center.
private fun roll() {
    val anim = createAnimator()
    val node = scene.findByName("die")!!
    anim.target = node
    anim.setDuration(9000)
    anim.start()
}

private fun createAnimator(): ObjectAnimator {
    val o1 = Quaternion.eulerAngles(Vector3(-90f, 180f, 90f))
    val animator = ObjectAnimator()
    animator.setObjectValues(o1)
    animator.setPropertyName("localRotation")
    animator.setEvaluator(QuaternionEvaluator())
    animator.setInterpolator(LinearInterpolator())
    animator.setAutoCancel(true)
    return animator
}
But while the cube is rotating, it also moves, which is not the desired behavior.
Screenshots where one of the symmetrically placed dice in the world should only be rotated:
The position of the cube's geometry is not centered on the node. The node rotates around its own local origin (0,0,0); if the cube's geometry lies off that point, the object will travel in a circle instead of rotating in place.
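A minimal sketch of what re-centering can look like when the die is built with ShapeFactory (the context reference, size and colour here are assumptions, not taken from the question):
MaterialFactory.makeOpaqueWithColor(context, Color(android.graphics.Color.WHITE))
    .thenAccept { material ->
        // Centering the cube's geometry on Vector3.zero() makes localRotation spin it in place.
        val cube = ShapeFactory.makeCube(Vector3(0.2f, 0.2f, 0.2f), Vector3.zero(), material)
        dieNode = Node().apply {
            setParent(anchorNode)
            name = "die"
            renderable = cube
        }
    }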

Android Matrix postScale with animation

I think I do not understand enough matrix theory to build an animation for the matrix inside an ImageView.
For example, I have some value that I want to apply to my matrix via the postScale method.
I use this code:
fun animateScale(scale_factor: Float, matrix: Matrix) {
    matrix.postScale(scale_factor, scale_factor)
    image_view.imageMatrix = matrix
}
This code scales the ImageView as expected, but I can't calculate the postScale with an animation. I tried different approaches using ValueAnimator.ofFloat(0f, 1f) or ValueAnimator.ofInt(1, 100), but the result is weird.
Update:
I've added my animation attempts, as asked for in the comments.
val animator = ValueAnimator.ofFloat(0f, 1f)
animator.addUpdateListener { animation ->
    val current_anim_value = animation.animatedValue as Float
    val factor_to_anim = scale_factor - (scale_factor * current_anim_value)
    matrix.postScale(factor_to_anim, factor_to_anim)
    image_view.imageMatrix = matrix
}
animator.duration = 3000
animator.start()
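One approach I'm considering (untested sketch, assuming the ImageView's scaleType is matrix): keep a snapshot of the start matrix and rebuild the working matrix from the interpolated absolute factor on every frame, instead of calling postScale cumulatively:
val startMatrix = Matrix(image_view.imageMatrix)        // snapshot of the matrix before the animation
val workMatrix = Matrix()
val animator = ValueAnimator.ofFloat(1f, scale_factor)  // animate the absolute scale factor, not a delta
animator.addUpdateListener { animation ->
    val scale = animation.animatedValue as Float
    workMatrix.set(startMatrix)                         // start over from the snapshot each frame
    workMatrix.postScale(scale, scale, image_view.width / 2f, image_view.height / 2f)
    image_view.imageMatrix = workMatrix
}
animator.duration = 3000
animator.start()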

Android alpha animation: override initial alpha value

Basically I want something to fade in and then fade out. Initially it should be invisible, and when the animation ends it should also be invisible.
targetView.alpha = 0f
val aa = AlphaAnimation(0f, 1.0f)
aa.duration = 2000
aa.repeatMode = Animation.REVERSE
aa.repeatCount = 1
I have tried the code above, but it did not work. It seems that the alpha animation multiplies the view's initial alpha by the animated value, so 0f * 1.0f = 0f = invisible.
After searching for an answer, I tried this, but it did not work either:
aa.fillBefore = true
aa.fillAfter = true
How can I make the animation ignore the initial alpha value? Or is that impossible, and should I change the visibility manually at the start/end of the animation?
Yes, you are right that the alpha animation multiplies the view's initial alpha by the animated value, so 0f * 1.0f = 0f = invisible.
Just don't use targetView.alpha = 0f; use targetView.visibility = View.INVISIBLE instead.
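A minimal sketch of how that could look with your animation (the listener toggling visibility is my addition, not something the framework requires):
targetView.visibility = View.INVISIBLE         // start hidden without touching alpha
val aa = AlphaAnimation(0f, 1f)
aa.duration = 2000
aa.repeatMode = Animation.REVERSE
aa.repeatCount = 1
aa.setAnimationListener(object : Animation.AnimationListener {
    override fun onAnimationStart(animation: Animation?) {
        targetView.visibility = View.VISIBLE   // visible while the fade runs
    }
    override fun onAnimationRepeat(animation: Animation?) {}
    override fun onAnimationEnd(animation: Animation?) {
        targetView.visibility = View.INVISIBLE // hidden again after the fade-out
    }
})
targetView.startAnimation(aa)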

Draw a segmented circle in Android: OpenGL vs Canvas?

I need to draw something like this:
I was hoping that this guy posted some code of how he drew his segmented circle to begin with, but alas he didn't.
I also need to know which segment is where after interaction with the wheel - for instance if the wheel is rotated, I need to know where the original segments are after the rotation action.
Two questions:
Do I draw this segmented circle (with varying colours and content placed on the segment) with OpenGL or using Android Canvas?
Using either of the options, how do I register which segment is where?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EDIT:
Ok, so I've figured out how to draw the segmented circle using Canvas (I'll post the code as an answer). And I'm sure I'll figure out how to rotate the circle soon. But I'm still unsure how I'll recognize a separate segment of the drawn wheel after the rotation action.
Because what I'm thinking of doing is drawing the segmented circle with these wedges, and then sort of handling the entire Canvas as an ImageView when I want to rotate it as if it's spinning. But when the spinning stops, how do I differentiate between the original segments drawn on the Canvas?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I've read about how to draw a segment on its own (here also), OpenGL, Canvas and even drawing shapes and layering them, but I've yet to see someone explaining how to recognize the separate segments.
Can drawBitmap() or createBitmap() perhaps be used?
If I go with OpenGL, I'll probably be able to rotate the segmented wheel using OpenGL's rotation, right?
I've also read that OpenGL might be too powerful for what I'd like to do, so should I rather consider "the graphic components of a game library built on top of OpenGL"?
This kind of answers my first question above - how to draw the segmented circle using Android Canvas:
Using the code found here, I do this in the onDraw function:
// Starting values
private int startAngle = 0;
private int numberOfSegments = 11;
private int sweepAngle = 360 / numberOfSegments;

@Override
protected void onDraw(Canvas canvas) {
    setUpPaint();
    setUpDrawingArea();
    colours = getColours();
    Log.d(TAG, "Draw the segmented circle");
    for (int i = 0; i < numberOfSegments; i++) {
        // pick a colour that is not the previous colour
        paint.setColor(colours.get(pickRandomColour()));
        // Draw arc
        canvas.drawArc(rectF, startAngle, sweepAngle, true, paint);
        // Set variable values
        startAngle -= sweepAngle;
    }
}
This is how I set up the drawing area based on the device's screen size:
private void setUpDrawingArea() {
    Log.d(TAG, "Set up drawing area.");

    // First get the screen dimensions
    Point size = new Point();
    Display display = DrawArcActivity.this.getWindowManager().getDefaultDisplay();
    display.getSize(size);
    int width = size.x;
    int height = size.y;
    Log.d(TAG, "Screen size = " + width + " x " + height);

    // Set up the padding
    int paddingLeft = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);
    int paddingTop = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);
    int paddingRight = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);
    int paddingBottom = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);

    // Then get the left, top, right and bottom Xs and Ys for the rectangle we're going to draw in
    int left = 0 + paddingLeft;
    int top = 0 + paddingTop;
    int right = width - paddingRight;
    int bottom = width - paddingBottom;
    Log.d(TAG, "Rectangle placement -> left = " + left + ", top = " + top + ", right = " + right + ", bottom = " + bottom);
    rectF = new RectF(left, top, right, bottom);
}
That (and the other functions, which are pretty straightforward, so I'm not pasting their code here) draws this:
The segments are different colours with every run.
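As for recognising which segment is where after spinning (my second question above), my current thinking is to keep track of the wheel's accumulated rotation angle myself and map an angle back to a segment index with simple arithmetic. A rough sketch in Kotlin; the names are mine, and the angle convention would have to match how the arcs are actually drawn above:
// Given the wheel's accumulated rotation and a pointer angle (both in degrees,
// measured with the same convention as the drawing code), find the original segment index.
fun segmentIndexAt(pointerAngleDegrees: Float, currentRotationDegrees: Float, numberOfSegments: Int): Int {
    val sweep = 360f / numberOfSegments
    // Undo the wheel rotation so the angle refers to the original (unrotated) segment layout.
    val angleInWheel = ((pointerAngleDegrees - currentRotationDegrees) % 360f + 360f) % 360f
    return (angleInWheel / sweep).toInt()
}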
