LibGDX - Box2D offsets the position of collision objects half the time - android

I'm working on a mobile game with a staggered isometric map, and I'm finding that Box2D seems to be drawing my collision objects with unpredictable results. My tileset has diamond-shaped polygons embedded in the tiles themselves that I'm trying to import and draw as Shape2D bodies. They seem to be drawn in the right spot half the time; the other half, they are offset negatively on the X axis by half the shape's width. I know about the discrepancy between Box2D and LibGDX center positions, but that wouldn't explain the inconsistencies here. I never had this issue with an orthogonal TiledMap, so I'm not sure if this is a quirk of working with isometric maps.
The expected placement of the shapes in Tiled:
Tiled screenshot
And here is the result I'm getting:
Mobile screenshot
The code causing this behavior:
class MyGameScreen : KtxScreen {
    private val batch = SpriteBatch()
    private val stage = Stage(ExtendViewport(9f, 16f), batch)
    private val map = TmxMapLoader().load("map/test_map.tmx")
    private val debugRenderer = Box2DDebugRenderer()
    private val renderer = IsometricStaggeredTiledMapRenderer(map, UNIT_SCALE, stage.batch)
    private val camera = stage.camera as OrthographicCamera
    private val physicsWorld = createWorld(gravity = vec2()).apply {
        autoClearForces = false
    }

    override fun render(delta: Float) {
        val size = max(map.width, map.height)
        stage.viewport.apply()
        renderer.setView(camera)
        map.forEachLayer<TiledMapTileLayer> { layer ->
            stage.batch.use(camera.combined) {
                renderer.renderTileLayer(layer)
            }
            // This is the code that loops through the map's cells and
            // replaces collision objects from Tiled with Shape2D bodies
            for (x in -size..size) {
                for (y in -size..size) {
                    layer.getCell(x, y)?.let { cell ->
                        if (cell.tile.objects.isEmpty()) {
                            // cell is not linked to a collision object -> do nothing
                            return@let
                        }
                        cell.tile.objects.forEach {
                            physicsWorld.body { shape2D(x, y, it.shape) }
                        }
                    }
                }
            }
        }
        debugRenderer.render(physicsWorld, stage.camera.combined)
    }

    override fun dispose() {
        super.dispose()
        batch.disposeSafely()
        stage.disposeSafely()
        map.disposeSafely()
        renderer.disposeSafely()
        physicsWorld.disposeSafely()
        debugRenderer.disposeSafely()
    }

    companion object {
        // This is the method that implements the drawing of the collision objects in Box2D
        private fun BodyDefinition.shape2D(
            x: Int,
            y: Int,
            shape: Shape2D
        ) {
            when (shape) {
                is Polygon -> {
                    val bodyX = x + shape.x * UNIT_SCALE
                    val bodyY = (y + shape.y * UNIT_SCALE) / 4
                    position.set(bodyX, bodyY)
                    shape.setPosition(0f, 0f)
                    shape.setScale(UNIT_SCALE, UNIT_SCALE)
                    fixedRotation = true
                    allowSleep = false
                    loop(shape.transformedVertices)
                }
                else -> gdxError("Unknown shape.")
            }
        }
    }
}
I'm new to LibGDX, so any help would be greatly appreciated!
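For what it's worth, in a staggered isometric map every other row is shifted by half a tile's width, which matches the "offset half the time" symptom. A sketch of a cell-origin calculation that accounts for the stagger (assuming the Y axis is staggered and odd rows are the indented ones; all names here are illustrative, not LibGDX API):

```kotlin
// Sketch: world-space origin of a cell in a Y-staggered isometric map.
// Odd rows are assumed to be the indented ones (Tiled's "staggerIndex").
// tileWidth/tileHeight are in world units (pixel size * UNIT_SCALE).
fun staggeredCellOrigin(cellX: Int, cellY: Int, tileWidth: Float, tileHeight: Float): Pair<Float, Float> {
    // Every other row is shifted right by half a tile's width...
    val offsetX = if (cellY % 2 == 1) tileWidth / 2f else 0f
    // ...and rows overlap vertically, so each row only advances by half a tile's height.
    val x = cellX * tileWidth + offsetX
    val y = cellY * tileHeight / 2f
    return x to y
}
```

If the body positions in the question were computed this way (instead of from the raw cell index alone), the half-width offset on alternating rows would be accounted for.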

Related

How to repeat Android Animation

I'm trying to get two views to move to the middle of the screen and bounce back again x number of times.
This code does that but it runs only once.
val view = findViewById<View>(R.id.imageView2)
val animation = SpringAnimation(view, DynamicAnimation.TRANSLATION_Y, 0f)
val view2 = findViewById<View>(R.id.imageView3)
val animation2 = SpringAnimation(view2, DynamicAnimation.TRANSLATION_Y, 0f)

findViewById<View>(R.id.imageView2).also { img ->
    SpringAnimation(img, DynamicAnimation.TRANSLATION_Y).apply {
        animation.spring.dampingRatio = SpringForce.DAMPING_RATIO_HIGH_BOUNCY
        animation.spring.stiffness = SpringForce.STIFFNESS_VERY_LOW
        animation.animateToFinalPosition(50f)
    }
}
findViewById<View>(R.id.imageView3).also { img ->
    SpringAnimation(img, DynamicAnimation.TRANSLATION_Y).apply {
        animation2.spring.dampingRatio = SpringForce.DAMPING_RATIO_HIGH_BOUNCY
        animation2.spring.stiffness = SpringForce.STIFFNESS_VERY_LOW
        animation2.animateToFinalPosition(-100f)
    }
}
So how do I get it to run x number of times?
This is obviously Spring Animation, but I'm not married to it. If there is another animation that would accomplish this I'd be totally open to changing.
You can run multiple SpringAnimations on the same View by repeatedly calling animateToFinalPosition(translation) with a sequence of translation values.
For example:
startSpringAnimations(findViewById<View>(R.id.imageView1), 300f, 6)
startSpringAnimations(findViewById<View>(R.id.imageView2), -600f, 6)
with a function
/**
 * [view] will be moved using [times] SpringAnimations over a distance of abs([totalTranslation]).
 * If [totalTranslation] is negative, the direction will be up, else down.
 */
private fun startSpringAnimations(view: View, totalTranslation: Float, times: Int) {
    if (times <= 0) {
        return
    }
    val translation = totalTranslation / times.toFloat()
    SpringAnimation(view, DynamicAnimation.TRANSLATION_Y, 0f).apply {
        spring.dampingRatio = SpringForce.DAMPING_RATIO_HIGH_BOUNCY
        spring.stiffness = SpringForce.STIFFNESS_VERY_LOW
        addEndListener(object : DynamicAnimation.OnAnimationEndListener {
            private var count = 1
            override fun onAnimationEnd(animation1: DynamicAnimation<*>?, canceled: Boolean, value: Float, velocity: Float) {
                Log.d("SpringAnimation", "onAnimationEnd: animation $animation1 canceled $canceled value $value velocity $velocity count $count")
                if (canceled) return
                count++
                if (count <= times) {
                    animateToFinalPosition(translation * count)
                }
            }
        })
        animateToFinalPosition(translation)
    }
}
Alternatively, set android:repeatCount="infinite" on the animation in your anim resource folder.
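For context, that attribute applies to view (tween) animations defined as XML resources, not to SpringAnimation. A minimal example of such a resource (file name and values are illustrative), e.g. res/anim/bounce.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<translate xmlns:android="http://schemas.android.com/apk/res/android"
    android:fromYDelta="0"
    android:toYDelta="50%p"
    android:duration="500"
    android:repeatCount="infinite"
    android:repeatMode="reverse" />
```

Load it with AnimationUtils.loadAnimation() and call View.startAnimation() to run it.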

ValueAnimator changing other non-required values

This is something weird and unexpected. While creating a custom view, I am animating a padding value which is later drawn on a Canvas, but other objects, like a RectF, are changing their values too. Using Kotlin, I have this helper function:
operator fun Pair<Float, Float>.invoke(callback: (Float) -> Unit) {
    with(ValueAnimator.ofFloat(first, second)) {
        duration = 500
        addUpdateListener {
            callback(it.animatedValue as Float)
        }
        start()
    }
}
In my Custom View, I have this code:
private val bgCardBounds = RectF(0f, 0f, 0f, 0f)
private var expandedCard = 0
    set(value) {
        field = value
        when (value) {
            1 -> {
                (0f to 32f) {
                    bitmapPadding = it
                    invalidate()
                }
            }
        }
    }
On a touch event, when the value of expandedCard changes from 0 to 1, the left, top, right, and bottom values of bgCardBounds change as well. Can someone help me point out why this is happening and how to stop it?

Unable to get bar-code bounding box in right position on overlay Surfaceview

I'm using CameraX with the Firebase ML Kit bar-code reader to detect barcodes. The application identifies the bar-code without a problem, but I'm trying to add a bounding box which shows the area of the barcode in the CameraX preview in real time. The bounding box information is retrieved from the bar-code detector function, but it has neither the right position nor the right size, as you can see below.
This is my layout of the activity.
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/camera_capture_button"
        android:layout_width="100dp"
        android:layout_height="100dp"
        android:layout_marginBottom="50dp"
        android:scaleType="fitCenter"
        android:text="Take Photo"
        android:elevation="2dp"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintBottom_toBottomOf="parent" />

    <SurfaceView
        android:id="@+id/overlayView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <androidx.camera.view.PreviewView
        android:id="@+id/previewView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
SurfaceView is used to draw this rectangle shape.
Barcode detection happens in the BarcodeAnalyzer class, which implements ImageAnalysis.Analyzer. Inside the overridden analyze function I retrieve the barcode data like below.
@SuppressLint("UnsafeExperimentalUsageError")
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    val rotationDegrees = degreesToFirebaseRotation(imageProxy.imageInfo.rotationDegrees)
    if (mediaImage != null) {
        val analyzedImageHeight = mediaImage.height
        val analyzedImageWidth = mediaImage.width
        val image = FirebaseVisionImage
            .fromMediaImage(mediaImage, rotationDegrees)
        detector.detectInImage(image)
            .addOnSuccessListener { barcodes ->
                for (barcode in barcodes) {
                    val bounds = barcode.boundingBox
                    val corners = barcode.cornerPoints
                    val rawValue = barcode.rawValue
                    if (::barcodeDetectListener.isInitialized && rawValue != null && bounds != null) {
                        barcodeDetectListener.onBarcodeDetect(
                            rawValue,
                            bounds,
                            analyzedImageWidth,
                            analyzedImageHeight
                        )
                    }
                }
                imageProxy.close()
            }
            .addOnFailureListener {
                Log.e(tag, "Barcode Reading Exception: ${it.localizedMessage}")
                imageProxy.close()
            }
            .addOnCanceledListener {
                Log.e(tag, "Barcode Reading Canceled")
                imageProxy.close()
            }
    }
}
barcodeDetectListener is a reference to an interface I created to communicate this data back to my activity.
interface BarcodeDetectListener {
    fun onBarcodeDetect(code: String, codeBound: Rect, imageWidth: Int, imageHeight: Int)
}
In my main activity, I send these data to OverlaySurfaceHolder which implements the SurfaceHolder.Callback. This class is responsible for drawing a bounding box on overlayed SurfaceView.
override fun onBarcodeDetect(code: String, codeBound: Rect, analyzedImageWidth: Int,
                             analyzedImageHeight: Int) {
    Log.i(TAG, "barcode : $code")
    overlaySurfaceHolder.repositionBound(codeBound, previewView.width, previewView.height,
        analyzedImageWidth, analyzedImageHeight)
    overlayView.invalidate()
}
As you can see here I'm sending overlayed SurfaceView width and height for the calculation in OverlaySurfaceHolder class.
OverlaySurfaceHolder.kt
class OverlaySurfaceHolder : SurfaceHolder.Callback {
    var previewViewWidth: Int = 0
    var previewViewHeight: Int = 0
    var analyzedImageWidth: Int = 0
    var analyzedImageHeight: Int = 0
    private lateinit var drawingThread: DrawingThread
    private lateinit var barcodeBound: Rect
    private val tag = OverlaySurfaceHolder::class.java.simpleName

    override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
    }

    override fun surfaceDestroyed(holder: SurfaceHolder?) {
        var retry = true
        drawingThread.running = false
        while (retry) {
            try {
                drawingThread.join()
                retry = false
            } catch (e: InterruptedException) {
            }
        }
    }

    override fun surfaceCreated(holder: SurfaceHolder?) {
        drawingThread = DrawingThread(holder)
        drawingThread.running = true
        drawingThread.start()
    }

    fun repositionBound(codeBound: Rect, previewViewWidth: Int, previewViewHeight: Int,
                        analyzedImageWidth: Int, analyzedImageHeight: Int) {
        this.barcodeBound = codeBound
        this.previewViewWidth = previewViewWidth
        this.previewViewHeight = previewViewHeight
        this.analyzedImageWidth = analyzedImageWidth
        this.analyzedImageHeight = analyzedImageHeight
    }

    inner class DrawingThread(private val holder: SurfaceHolder?) : Thread() {
        var running = false

        private fun adjustXCoordinates(valueX: Int): Float {
            return if (previewViewWidth != 0) {
                (valueX / analyzedImageWidth.toFloat()) * previewViewWidth.toFloat()
            } else {
                valueX.toFloat()
            }
        }

        private fun adjustYCoordinates(valueY: Int): Float {
            return if (previewViewHeight != 0) {
                (valueY / analyzedImageHeight.toFloat()) * previewViewHeight.toFloat()
            } else {
                valueY.toFloat()
            }
        }

        override fun run() {
            while (running) {
                if (::barcodeBound.isInitialized) {
                    val canvas = holder!!.lockCanvas()
                    if (canvas != null) {
                        synchronized(holder) {
                            canvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR)
                            val myPaint = Paint()
                            myPaint.color = Color.rgb(20, 100, 50)
                            myPaint.strokeWidth = 6f
                            myPaint.style = Paint.Style.STROKE
                            val refinedRect = RectF()
                            refinedRect.left = adjustXCoordinates(barcodeBound.left)
                            refinedRect.right = adjustXCoordinates(barcodeBound.right)
                            refinedRect.top = adjustYCoordinates(barcodeBound.top)
                            refinedRect.bottom = adjustYCoordinates(barcodeBound.bottom)
                            canvas.drawRect(refinedRect, myPaint)
                        }
                        holder.unlockCanvasAndPost(canvas)
                    } else {
                        Log.e(tag, "Cannot draw onto the canvas as it's null")
                    }
                    try {
                        sleep(30)
                    } catch (e: InterruptedException) {
                        e.printStackTrace()
                    }
                }
            }
        }
    }
}
Can anyone please point out what I am doing wrong?
I don't have a very clear clue, but here are some things you could try:
When you adjustXCoordinates, if previewWidth is 0, you return valueX.toFloat() directly. Could you add some logging to see if it actually falls into this case? Adding some logs to print the analysis and preview dimensions could be helpful as well.
Another thing worth noting is that the image you send to the detector could have a different aspect ratio from the preview View area. For example, if your camera takes a 4:3 photo, it will send that to the detector; however, if your View area is 1:1, some part of the photo will be cropped to display it there. In that case, you need to take this into consideration as well when adjusting coordinates. Based on my testing, the image is fit into the View area with CENTER_CROP. If you want to be really careful, it's probably worth checking whether this is documented on the camera dev site.
Hope it helps, more or less.
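The CENTER_CROP mapping described above can be sketched with plain numbers (no Android classes; all names here are illustrative, and the formula assumes the preview fills the view via a center crop):

```kotlin
// Sketch: map a point from analyzed-image coordinates to view coordinates
// when the image is displayed with CENTER_CROP (scale to fill, center, crop overflow).
fun mapCenterCrop(
    x: Float, y: Float,
    imageWidth: Float, imageHeight: Float,
    viewWidth: Float, viewHeight: Float
): Pair<Float, Float> {
    // CENTER_CROP scales by the larger of the two ratios so the view is fully covered.
    val scale = maxOf(viewWidth / imageWidth, viewHeight / imageHeight)
    // The scaled image is centered, so any overflow is split evenly on both sides.
    val dx = (viewWidth - imageWidth * scale) / 2f
    val dy = (viewHeight - imageHeight * scale) / 2f
    return Pair(x * scale + dx, y * scale + dy)
}
```

When the image and view share the same aspect ratio, dx and dy are zero and this reduces to the simple proportional scaling used in adjustXCoordinates/adjustYCoordinates above; the offsets only matter when the ratios differ.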
I am no longer working on this project. However, I recently worked on a camera application that uses the Camera2 API. In that application, there was a requirement to detect objects using the ML Kit object detection library and show a bounding box like this one on top of the camera preview. I faced the same issue at first and finally managed to get it to work. I'll leave my approach here; it might help someone.
Any detection library will do its detection on a smaller-resolution image compared to the camera preview image. When the detection library returns the coordinates of the detected object, we need to scale them up to show them in the right position; this multiplier is called the scale factor. To make the calculation easy, it's better to select the analysis image size and preview image size with the same aspect ratio.
You can use the below function to get the aspect ratio of any size.
fun gcd(a: Long, b: Long): Long {
    return if (b == 0L) a else gcd(b, a % b)
}

fun asFraction(a: Long, b: Long): Pair<Long, Long> {
    val gcd = gcd(a, b)
    return Pair(a / gcd, b / gcd)
}
After getting the camera preview image's aspect ratio, select the analysis image size like below.
val previewFraction = DisplayUtils
    .asFraction(previewSize!!.width.toLong(), previewSize!!.height.toLong())
val analyzeImageSize = characteristics
    .get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
    .getOutputSizes(ImageFormat.YUV_420_888)
    .filter { DisplayUtils.asFraction(it.width.toLong(), it.height.toLong()) == previewFraction }
    .sortedBy { it.height * it.width }
    .first()
Finally, when you have these two values, you can calculate the scale factor like below.
val scaleFactor = previewSize.width / analyzedSize.width.toFloat()
Finally, before the bounding box is drawn, multiply each point by the scale factor to get the correct screen coordinates.
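To illustrate that last step, here is a sketch of applying such a scale factor to a bounding box, using a plain data class in place of android.graphics.Rect (the names are illustrative):

```kotlin
// Plain stand-in for android.graphics.Rect, for illustration only.
data class Box(val left: Float, val top: Float, val right: Float, val bottom: Float)

// Multiply each point of the detector's bounding box by the scale factor to get
// preview-space coordinates (valid when both sizes share the same aspect ratio).
fun scaleBox(box: Box, scaleFactor: Float) = Box(
    box.left * scaleFactor,
    box.top * scaleFactor,
    box.right * scaleFactor,
    box.bottom * scaleFactor
)
```

For example, with a 1280-wide preview and a 640-wide analysis image the scale factor is 2.0, so a detected box at (10, 20, 30, 40) is drawn at (20, 40, 60, 80).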
If you detect from a bitmap, your reposition method will be right, as I found when trying it.

How to place a 3D Model in AR at center of screen and move it near or far with respect to camera angle on axis?

Below is the code which I have tried. It gives me the desired result, but it is not as optimized as the CamToPlan or MagicPlan apps. In the CamToPlan app, the center node moves very efficiently as the camera moves. If the camera is tilted, the anchor node distance changes. How do I achieve the same in the code below?
Camera camera = arSceneView.getScene().getCamera();
Vector3 distance = Vector3.subtract(camera.getWorldPosition(), vector3CirclePosition);
float abs = Math.abs(distance.y);
float newAngleInRadian = (float) (Math.toRadians(90f) - (float) camera.getLocalRotation().x);
float zCoordinate = (float) (abs / Math.cos(newAngleInRadian));
Log.i("1", "zCoordinate::" + zCoordinate + "::" + abs);
Vector3 cameraPos = arFragment.getArSceneView().getScene().getCamera().getWorldPosition();
Vector3 cameraForward = arFragment.getArSceneView().getScene().getCamera().getForward();
Vector3 position = Vector3.add(cameraPos, cameraForward.scaled(zCoordinate));
redNodeCenter.setWorldPosition(position);
Step 1: Create an addWaterMark() method in your class
private var oldWaterMark: Node? = null

private fun addWaterMark() {
    ModelRenderable.builder()
        .setSource(context, R.raw.step1)
        .build()
        .thenAccept {
            addNode(it)
        }
        .exceptionally {
            Toast.makeText(context, "Error", Toast.LENGTH_SHORT).show()
            return@exceptionally null
        }
}

private fun addNode(model: ModelRenderable?) {
    if (oldWaterMark != null) {
        arSceneView().scene.removeChild(oldWaterMark)
    }
    model?.let {
        val node = Node().apply {
            setParent(arSceneView().scene)
            val camera = arSceneView().scene.camera
            val ray = camera.screenPointToRay(200f, 500f)
            localPosition = ray.getPoint(1f)
            localRotation = arSceneView().scene.camera.localRotation
            localScale = Vector3(0.3f, 0.3f, 0.3f)
            renderable = it
        }
        arSceneView().scene.addChild(node)
        oldWaterMark = node
    }
}
Step 2: Call the addWaterMark() inside addOnUpdateListener
arSceneView.scene.addOnUpdateListener { addWaterMark() }
Note: I created the oldWaterMark object to remove the old watermark.
If you want to change the position, change this line:
camera.screenPointToRay(200f, 500f) // 200f -> X position, 500f -> Y position

Android canvas , how to get a sub path from existing path?

I have a Bézier curved path on a canvas starting from (0,0) and ending at (canvasWidth,0), with a control point at (canvasWidth,canvasHeight).
It's drawing properly and I'm getting a curved line. I'm drawing it using the path.quadTo method as shown below.
path.moveTo(mPointStart.x, mPointStart.y);
path.quadTo(mControlPoint.x, mControlPoint.y, mPointEnd.x, mPointEnd.y);
canvas.drawPath(path, paint);
Now, I want to draw a sub-path over this existing path. Say I want to draw half of the path:
I want to overdraw the same path up to the halfway point using a different paint, so that half of the path will be in one color and the other half in the old color. How can I find the points up to half of the existing path?
OK, so the way I did this (in Kotlin) was to use the PathMeasure class. It doesn't look the best, but it works! So please post any tidier answers.
private fun getSubPath(path: Path, start: Float, end: Float): Path {
    val pathMeasure = PathMeasure(path, false)
    val point = FloatArray(2)
    val subPath = Path()
    var startFound = false
    var startDistance = start
    var endDistance = end
    while (pathMeasure.nextContour()) {
        val length = pathMeasure.length
        startDistance -= length
        endDistance -= length
        if (!startFound) {
            if (startDistance <= 0) {
                startFound = true
                val startPoint = length + startDistance
                pathMeasure.getPosTan(startPoint, point, null)
                subPath.moveTo(point[0], point[1])
                if (startDistance < 0) {
                    val endPoint = length + endDistance
                    pathMeasure.getPosTan(endPoint, point, null)
                    subPath.lineTo(point[0], point[1])
                    if (endDistance <= 0) {
                        break
                    }
                }
            }
        } else {
            val endPoint = length + endDistance
            pathMeasure.getPosTan(endPoint, point, null)
            subPath.lineTo(point[0], point[1])
            if (endDistance <= 0) {
                break
            }
        }
    }
    return subPath
}
The parameters start and end can't be smaller than 0 or larger than the path's total length.
The way to use this is as follows:
private fun usePathFunction() {
    val start = pathsTotalLength * 0.25f
    val end = pathsTotalLength * 0.75f
    val subPath = getSubPath(path, start, end)
}
I think using getSegment is the proper answer to this question:
private fun getSubPath(path: Path, start: Float, end: Float): Path {
    val subPath = Path()
    val pathMeasure = PathMeasure(path, false)
    pathMeasure.getSegment(start * pathMeasure.length, end * pathMeasure.length, subPath, true)
    return subPath
}
usage:
val subPath = getSubPath(path = originalPath, start = 0.2f, end = 0.8f)
