I'm trying to set a region in this way:
val region = Region()
region.set(Rect(0, 459, 1080, 0))
I also tried:
val region = Rect(5, 459, 1080, 5).toRegion()
Unfortunately, neither of them works. The region is always empty: region.bounds should return the Rect I set, but instead it returns Rect(0, 0, 0, 0).
I still don't understand why Region's set() method doesn't work here, but I found a way to solve the problem. I created an extension function:
fun Rect.toRightRegion() : Region {
val region = Region()
val path = Path()
path.addRect(RectF(this.left.toFloat(), this.top.toFloat(), this.right.toFloat(), this.bottom.toFloat()), Path.Direction.CW)
val rectF = RectF()
path.computeBounds(rectF, true)
region.setPath(path, Region(rectF.left.toInt(), rectF.top.toInt(), rectF.right.toInt(), rectF.bottom.toInt()))
return region
}
You can call this function directly on your Rect:
val region: Region = Rect(0, 459, 1080, 0).toRightRegion()
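For what it's worth, the empty result from set() appears to come from the Rect itself: Rect(0, 459, 1080, 0) has top greater than bottom, so Rect.isEmpty() is true and Region.set() produces an empty region. A minimal sketch with the coordinates ordered (top before bottom), not from the original answer:
import android.graphics.Rect
import android.graphics.Region

// Rect(left, top, right, bottom): top must be smaller than bottom,
// otherwise the rect counts as empty and Region.set() yields an empty region.
val orderedRect = Rect(0, 0, 1080, 459)
val plainRegion = Region()
val ok = plainRegion.set(orderedRect) // expected: true, bounds == Rect(0, 0, 1080, 459)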
Good day!
I have run into a problem: I need to draw a circle under my finger, at the point where it touches the screen.
I tried to use the following code:
private fun pressFinger() {
mFirst.setOnTouchListener(View.OnTouchListener { v, event ->
val x = event.x
val y = event.y
when (event.action) {
MotionEvent.ACTION_DOWN -> {
drawCircle(x, y)
}
}
return@OnTouchListener true
})
}
fun drawCircle(x: Float, y: Float) {
val width: Int = x.toInt()
val height: Int = y.toInt()
val bitmap = (mFirst.getDrawable() as BitmapDrawable).bitmap
val workingBitmap: Bitmap = Bitmap.createBitmap(bitmap)
val mutableBitmap = workingBitmap.copy(Bitmap.Config.ARGB_8888, true)
val canvasBitmap = Canvas(mutableBitmap)
val paint = Paint(Paint.ANTI_ALIAS_FLAG)
paint.style = Paint.Style.FILL
paint.xfermode = PorterDuffXfermode(PorterDuff.Mode.CLEAR)
canvasBitmap.drawCircle(width.toFloat(), height.toFloat(), 180F, paint)
mFirst.setImageBitmap(mutableBitmap)
drawUtils.draw()
}
The problem is that the circle is not drawn under the finger, but on the side.
I also want to note that the image placed inside the ImageView is larger than the view, and I use scaleType="centerCrop".
It turns out that I need the position of the finger on the image inside the ImageView, not just its position on the screen.
If someone knows the correct answer to the question, I would be grateful.
This is really bad code. Why are you editing the background image in the view? The easier thing would be to save the coordinates in the onTouch function, then invalidate the view. Then in the view's onDraw, you draw a circle at that location. This way you avoid all the scaling issues, and you don't have to make 2 COPIES!!! of the bitmap which is horribly inefficient.
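A minimal sketch of that overlay idea, with illustrative names that are not from the original post: the view remembers the last touch point, invalidates itself, and draws the circle in onDraw.
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.util.AttributeSet
import android.view.MotionEvent
import android.view.View

class CircleOverlayView(context: Context, attrs: AttributeSet? = null) : View(context, attrs) {

    private var touchX = -1f
    private var touchY = -1f
    private val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        style = Paint.Style.FILL
        color = Color.RED
    }

    override fun onTouchEvent(event: MotionEvent): Boolean {
        if (event.action == MotionEvent.ACTION_DOWN || event.action == MotionEvent.ACTION_MOVE) {
            touchX = event.x
            touchY = event.y
            invalidate() // schedule a redraw instead of editing the bitmap
        }
        return true
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        if (touchX >= 0) {
            canvas.drawCircle(touchX, touchY, 180f, paint) // drawCircle takes the center point
        }
    }
}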
canvasBitmap.drawCircle(width.toFloat() - 180F / 2, height.toFloat() - 180F / 2, 180F, paint)
Have a try.
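If the touch point really does need to be mapped onto the bitmap (the image is larger than the view and centerCrop is used), one common option is to invert the ImageView's image matrix. A minimal sketch with illustrative names, not from the original answers:
import android.graphics.Matrix
import android.graphics.PointF
import android.widget.ImageView

// Maps a touch position in the ImageView to a pixel position in the displayed bitmap.
fun viewToBitmapCoords(imageView: ImageView, viewX: Float, viewY: Float): PointF? {
    val inverse = Matrix()
    if (!imageView.imageMatrix.invert(inverse)) return null
    // Remove the view's padding before applying the inverse draw matrix.
    val pts = floatArrayOf(viewX - imageView.paddingLeft, viewY - imageView.paddingTop)
    inverse.mapPoints(pts) // now in the drawable's (bitmap) coordinate space
    return PointF(pts[0], pts[1])
}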
We implemented Android ML Kit for face detection in Android. It works like a charm and detects faces.
The problem: we want to draw rectangles around the detected faces when multiple faces are detected.
What we have done:
Added the dependency:
implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.1.5'
Created a custom View :
class FaceView(val theContext : Context, val bounds : Rect) : View(theContext) {
override fun onDraw(canvas: Canvas?) {
super.onDraw(canvas)
val myPaint = Paint()
myPaint.color = Color.BLACK
myPaint.style = Paint.Style.STROKE
myPaint.strokeWidth = 10f
canvas?.drawRect(bounds, myPaint)
}
}
Tried to draw a rectangle at the bounds we got from the face object ML Kit created:
val result = detector.process(image).addOnSuccessListener { faces ->
for (face in faces) {
val bounds = face.boundingBox
val view = FaceView(requireContext(), bounds)
binding.actionRoot.addView(view)
val lp : ConstraintLayout.LayoutParams =
ConstraintLayout.LayoutParams(bounds.width(),bounds.height())
lp.startToStart = binding.actionPhoto.id
lp.topToTop = binding.actionPhoto.id
lp.marginStart = bounds.right
lp.topMargin = bounds.bottom
view.layoutParams = lp
}}
Result :
How can we draw a rectangle for each face detected from a URI image (not from CameraX) and make them clickable?
You can reference the project here, but it is Java code:
https://github.com/kkdroidgit/FaceDetect
While ML Kit gives the Rect of each detected face, I think Google's developers could make it easier for us by drawing the rectangles automatically and providing each face as a standalone bitmap object.
My solution:
I used the example project that #anonymous suggested to draw lines around the face.
First I got the start, end, top, and bottom points from the provided face Rect.
(This Rect is created from the original bitmap, not from the ImageView, so its points refer to the original bitmap I was using.)
Since the Rect points belong to the bitmap rather than the ImageView, we need to find the corresponding points on the ImageView (or create a scaled bitmap and detect faces on it). We calculated these points as percentages of the original bitmap.
We used the original points, as in the example project, to draw the lines, and we added a view with a transparent background at the calculated points to make each face rectangle clickable.
Finally, we created a bitmap for each face.
detector.process(theImage)
.addOnSuccessListener { faces ->
for (face in faces) {
val bounds = face.boundingBox
val screenWidth = GetScreenWidth().execute()
// val theImage = InputImage.fromBitmap(tempBitmap,0)
val paint = Paint()
paint.strokeWidth = 1f
paint.color = Color.RED
paint.style = Paint.Style.STROKE
val theStartPoint = if(bounds.left < 0) 0 else{ bounds.left}
val theEndPoint = if(bounds.right > tempBitmap.width) tempBitmap.width else { bounds.right}
val theTopPoint = if(bounds.top < 0) 0 else { bounds.top }
val theBottomPoint = if(bounds.bottom > tempBitmap.height) tempBitmap.height else { bounds.bottom }
val faceWidth = theEndPoint - theStartPoint
val faceHeight = theBottomPoint - theTopPoint
Log.d(Statics.LOG_TAG, "Face width : ${faceWidth} Face Height $faceHeight")
val startPointPercent = theStartPoint.toFloat() / tempBitmap.width.toFloat()
val topPointPercent = theTopPoint.toFloat() / tempBitmap.height.toFloat()
Log.d(Statics.LOG_TAG, "Face start point percent : ${startPointPercent} Face top percent $topPointPercent")
val faceWidthPercent = faceWidth / tempBitmap.width.toFloat()
val faceHeightPercent = faceHeight / tempBitmap.height.toFloat()
Log.d(Statics.LOG_TAG, "Face width percent: ${faceWidthPercent} Face Height Percent $faceHeightPercent")
val faceImage = ConstraintLayout(requireContext())
faceImage.setBackgroundColor(Color.TRANSPARENT)
binding.actionRoot.addView(faceImage)
val boxWidth = screenWidth.toFloat()*faceWidthPercent
val boxHeight = screenWidth.toFloat()*faceHeightPercent
Log.d(Statics.LOG_TAG, "Box width : ${boxWidth} Box Height $boxHeight")
val lp : ConstraintLayout.LayoutParams =
ConstraintLayout.LayoutParams(
boxWidth.toInt(),
boxHeight.toInt()
)
lp.startToStart = binding.actionPhoto.id
lp.topToTop = binding.actionPhoto.id
lp.marginStart = (screenWidth * startPointPercent).toInt()
lp.topMargin = (screenWidth * topPointPercent).toInt()
faceImage.layoutParams = lp
val theFaceBitmap = Bitmap.createBitmap(
tempBitmap,
theStartPoint,
theTopPoint,
faceWidth,
faceHeight)
if (face.trackingId != null) {
val id = face.trackingId
faceImage.setOnClickListener {
binding.actionPhoto.setImageBitmap(theFaceBitmap)
}
}
}
}
Result :
I need to render a bitmap without displaying it on the screen. For that I create an OpenGL context using EGL14 as described in this answer. Then I save the OpenGL surface to a bitmap using GLES20.glReadPixels. But for some reason it is not rendered as expected and is just transparent.
import android.graphics.Bitmap
import android.opengl.*
import android.opengl.EGL14.EGL_CONTEXT_CLIENT_VERSION
import java.nio.ByteBuffer
class Renderer {
private lateinit var display: EGLDisplay
private lateinit var surface: EGLSurface
private lateinit var eglContext: EGLContext
fun draw() {
// Just a stub that fills the bitmap with red color
GLES20.glClearColor(1f, 0f, 0f, 1f)
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
}
fun saveBitmap(): Bitmap {
val width = 320
val height = 240
val mPixelBuf = ByteBuffer.allocate(width * height * 4)
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, mPixelBuf)
return Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
}
fun initializeEglContext() {
display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
if (display == EGL14.EGL_NO_DISPLAY) {
throw RuntimeException("eglGetDisplay failed ${EGL14.eglGetError()}")
}
val versions = IntArray(2)
if (!EGL14.eglInitialize(display, versions, 0, versions, 1)) {
throw RuntimeException("eglInitialize failed ${EGL14.eglGetError()}")
}
val configAttr = intArrayOf(
EGL14.EGL_COLOR_BUFFER_TYPE, EGL14.EGL_RGB_BUFFER,
EGL14.EGL_LEVEL, 0,
EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
EGL14.EGL_NONE
)
val configs: Array<EGLConfig?> = arrayOfNulls(1)
val numConfig = IntArray(1)
EGL14.eglChooseConfig(
display, configAttr, 0,
configs, 0, 1, numConfig, 0
)
if (numConfig[0] == 0) {
throw RuntimeException("No configs found")
}
val config: EGLConfig? = configs[0]
val surfAttr = intArrayOf(
EGL14.EGL_WIDTH, 320,
EGL14.EGL_HEIGHT, 240,
EGL14.EGL_NONE
)
surface = EGL14.eglCreatePbufferSurface(display, config, surfAttr, 0)
val contextAttrib = intArrayOf(
EGL_CONTEXT_CLIENT_VERSION, 2,
EGL14.EGL_NONE
)
eglContext = EGL14.eglCreateContext(display, config, EGL14.EGL_NO_CONTEXT, contextAttrib, 0)
EGL14.eglMakeCurrent(display, surface, surface, eglContext)
}
fun destroy() {
EGL14.eglMakeCurrent(display, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE,
EGL14.EGL_NO_CONTEXT)
EGL14.eglDestroySurface(display, surface)
EGL14.eglDestroyContext(display, eglContext)
EGL14.eglTerminate(display)
}
}
This is how I use it:
val renderer = Renderer()
renderer.initializeEglContext()
renderer.draw()
val bitmap = renderer.saveBitmap()
renderer.destroy()
The code runs without any errors. I checked that the context is created successfully; for example, GLES20.glCreateProgram works as expected and returns a valid id. The only warning I get is
W/OpenGLRenderer: Failed to choose config with EGL_SWAP_BEHAVIOR_PRESERVED, retrying without...
But I'm not sure if it affects the result in any way.
However, the bitmap is not filled with color and is transparent:
val color = bitmap[0, 0]
Log.d("Main", "onCreate: ${Color.valueOf(color)}")
Color(0.0, 0.0, 0.0, 0.0, sRGB IEC61966-2.1)
I guess that I'm missing something, but I can't figure out what. How do I make it actually render?
The pixel buffer must be copied into the bitmap:
val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
bitmap.copyPixelsFromBuffer(mPixelBuf)
return bitmap
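Folding that fix back into the original saveBitmap(), roughly (a sketch; allocateDirect is used here simply as the conventional choice for buffers passed to GL):
fun saveBitmap(): Bitmap {
    val width = 320
    val height = 240
    val pixelBuf = ByteBuffer.allocateDirect(width * height * 4)
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf)
    pixelBuf.rewind()
    val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    bitmap.copyPixelsFromBuffer(pixelBuf) // copy the GL pixels into the bitmap
    // Note: glReadPixels returns rows bottom-up, so the result may appear vertically flipped.
    return bitmap
}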
I have a mask bitmap with an alpha channel. I want to use it to cover the original bitmap and feather the edge. I tried the code below, but it failed:
In AndroidManifest.xml: android:hardwareAccelerated="false"
fun featherEdge(maskBm: Bitmap): Bitmap {
if (maskBm == null) return maskBm
val canvasBmp = Bitmap.createBitmap(maskBm.width, maskBm.height, ARGB_8888)
val canvas = Canvas(canvasBmp)
// canvas.isHardwareAccelerated = false
val paint = Paint()
paint.isAntiAlias = true
paint.isDither = true
paint.maskFilter = BlurMaskFilter(20f, BlurMaskFilter.Blur.NORMAL);
canvas.drawBitmap(maskBm, 0f, 0f, paint)
return canvasBmp
}
The resulting bitmap's edge is rough:
What I want:
Question 2:
Because of android:hardwareAccelerated="false", many functions don't work. How can I apply android:hardwareAccelerated="false" only to the off-screen canvas?
Thank you.
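Not an answer from the thread, but two things that may help with Question 2: a Canvas that wraps a Bitmap (as in featherEdge above) is always software-rendered regardless of the manifest flag, and hardware acceleration can also be disabled for a single on-screen view instead of the whole app. A rough sketch, where someView is just a placeholder:
import android.graphics.Bitmap
import android.graphics.Canvas
import android.view.View

fun softwareDrawingSketch(someView: View) {
    // 1) An off-screen Canvas backed by a Bitmap is software-rendered, so drawing
    //    operations that hardware canvases may not support can be used there even
    //    while the app itself stays hardware accelerated.
    val offscreen = Bitmap.createBitmap(512, 512, Bitmap.Config.ARGB_8888)
    val softwareCanvas = Canvas(offscreen)
    check(!softwareCanvas.isHardwareAccelerated)

    // 2) If a specific on-screen view needs software drawing, disable hardware
    //    acceleration for that view only.
    someView.setLayerType(View.LAYER_TYPE_SOFTWARE, null)
}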
I came across this and this question, both about detecting intersections in Android. I couldn't make them work in my final code, so I made an example where two lines definitely intersect. No luck even in that case. I wrote example code with two straight paths, regions that fit them, and a point that definitely lies on both. Completely unlucky.
var theyCross = false
val intersectionPath = Path()
val clipArea = Region(0, 0, 100, 100)
val path1 = Path()
path1.moveTo(50f, 0f)
path1.lineTo(50f, 100f)
val path2 = Path()
path2.moveTo(0f, 50f)
path2.lineTo(100f, 50f)
val newRegion1 = Region()
newRegion1.setPath(path1, clipArea)
val newRegion2 = Region()
newRegion2.setPath(path2, clipArea)
if(
!newRegion1.quickReject(newRegion2) &&
newRegion1.op(newRegion2, Region.Op.INTERSECT)
) {
// lines should cross!
theyCross = true
}
if (intersectionPath.op(path1, path2, Path.Op.INTERSECT)) {
if (!intersectionPath.isEmpty) {
// lines should cross!
theyCross = true
}
}
if (newRegion1.contains(50, 50)) {
// lines should cross!
theyCross = true
}
if (newRegion1.quickContains(49, 49, 51, 51)) {
// lines should cross!
theyCross = true
}
In this example I'm not using a Canvas, but in my original code I am, and each path is drawn with a Paint that has a strokeWidth. No luck. Has anyone faced this before?
It only works if the paths describe surfaces (closed areas), not bare lines, e.g.:
val clipArea = Region(0, 0, 100, 100)
val path1 = Path()
path1.moveTo(50f, 0f)
path1.lineTo(50f, 100f)
path1.lineTo(51f, 100f)
path1.lineTo(51f, 0f)
path1.close()
val path2 = Path()
path2.moveTo(0f, 50f)
path2.lineTo(100f, 50f)
path2.lineTo(100f, 51f)
path2.lineTo(0f, 51f)
path2.close()
By the way, the (ignored) return value of newRegion1.setPath(path1, clipArea) is now true (non-empty) instead of false.
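A quick way to confirm, reusing clipArea and the closed path1/path2 from above (a sketch):
val region1 = Region()
val region2 = Region()
region1.setPath(path1, clipArea) // returns true now (non-empty)
region2.setPath(path2, clipArea)

val intersection = Path()
val pathsCross = intersection.op(path1, path2, Path.Op.INTERSECT) && !intersection.isEmpty
val regionsCross = region1.op(region2, Region.Op.INTERSECT)
// Both checks are expected to report the overlap: the 1x1 square at (50, 50).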