Heyho,
I'm quite new to Kotlin, but I came across this problem:
I have a library function that generates an image (a QR code). Now I'd like to display that image... but I have no idea how. The documentation only explains how to save the image locally, and I'm not really interested in saving it. I can get the image either as a FileStream or as a ByteArray. Is there any possibility to display either of these as an Image in the UI?
An Example:
@Composable
fun QrCode(stand: String) {
    Text(text = "QR-Code:", fontSize = 16.sp)
    //? Image(QRCode(stand).render().getBytes()) // this obviously won't work
}
Any ideas?
In order to display an Image, you can refer to this:
@Composable
fun BitmapImage(bitmap: Bitmap) {
    Image(
        bitmap = bitmap.asImageBitmap(),
        contentDescription = "some useful description",
    )
}
So what remains is to find a way to convert your target input into a Bitmap.
If you have the ByteArray of the image file, you can refer to this:
fun convertImageByteArrayToBitmap(imageData: ByteArray): Bitmap {
    return BitmapFactory.decodeByteArray(imageData, 0, imageData.size)
}
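For your QR case specifically, here is a minimal sketch tying the two together, assuming the QRCode(stand).render().getBytes() call from your example returns encoded image bytes (e.g. PNG):
@Composable
fun QrCode(stand: String) {
    Text(text = "QR-Code:", fontSize = 16.sp)
    // remember(stand) keeps the decode from re-running on every recomposition
    val bitmap = remember(stand) {
        convertImageByteArrayToBitmap(QRCode(stand).render().getBytes())
    }
    BitmapImage(bitmap)
}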
If you just possess the QR string, you can refer to this to convert it to a Bitmap:
fun encodeAsBitmap(source: String, width: Int, height: Int): Bitmap? {
    val result: BitMatrix = try {
        MultiFormatWriter().encode(source, BarcodeFormat.QR_CODE, width, height, null)
    } catch (e: Exception) {
        return null
    }
    val w = result.width
    val h = result.height
    val pixels = IntArray(w * h)
    for (y in 0 until h) {
        val offset = y * w
        for (x in 0 until w) {
            pixels[offset + x] = if (result[x, y]) Color.BLACK else Color.WHITE
        }
    }
    val bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888)
    // use the matrix width w as the stride; the encoder may not return exactly `width`
    bitmap.setPixels(pixels, 0, w, 0, 0, w, h)
    return bitmap
}
And you need to add this dependency when you use the above code: implementation 'com.google.zxing:core:3.3.1'
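Wired into the BitmapImage composable above, usage is then just this sketch (512 is an arbitrary pixel size):
// inside a @Composable; remember avoids re-encoding on every recomposition
val qrBitmap = remember(stand) { encodeAsBitmap(stand, 512, 512) }
qrBitmap?.let { BitmapImage(it) }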
I was building a simple feature: I take a photo with CameraX and upload it to Imagga to do face detection using this API. Here is the API result: [Face(confidence=100.0, coordinates=Coordinates(height=1243, width=1243, xmax=2660, xmin=1417, ymax=3359, ymin=2116), face_id=)]
Based on the API documentation, the Coordinates are the parameters I need to pass to drawRect(xmin, ymin, xmax, ymax, paint), but the rectangle is always in the wrong position on the screen.
I've tried all 64 combinations of those four points, but none is correct.
What have I done wrong? Here's my code.
Take and upload the photo to Imagga:
override fun onImageSaved(output: ImageCapture.OutputFileResults) {
    Log.e(TAG, "onImageSaved: " + output.savedUri)
    savedUri = output.savedUri
    val file = File(getPath(savedUri, applicationContext) ?: "")
    val mFile = RequestBody.create(MediaType.parse("image/*"), file)
    val fileToUpload = MultipartBody.Part.createFormData("image", file.name, mFile)
    photoJob?.cancel()
    photoJob = lifecycleScope.launch {
        binding.isLoading = true
        mainViewMode.startFaceDetection(fileToUpload)
    }
}
fun getPath(contentUri: Uri?, context: Context): String? {
    var res: String? = null
    val proj = arrayOf(MediaStore.Images.Media.DATA)
    val cursor: Cursor = context.contentResolver.query(contentUri!!, proj, null, null, null)!!
    if (cursor.moveToFirst()) {
        val index = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.DATA)
        res = cursor.getString(index)
    }
    cursor.close()
    return res
}
Receive the face detection results from Imagga:
mainViewMode.apply {
    faceResult.observe(this@MainActivity) {
        showDetectionResult(it, savedUri)
    }
}
private fun showDetectionResult(it: FaceDetectionResp, savedUri: Uri?) {
    Log.e(TAG, "startFaceDetection: ${it.result.faces}")
    val originBitmap = MediaStore.Images.Media.getBitmap(contentResolver, savedUri)
    val bmp = originBitmap!!.copy(originBitmap.config, true)
    val canvas = Canvas(bmp)
    val paint = Paint().apply {
        color = Color.GREEN
        style = Paint.Style.STROKE
        strokeWidth = 5f
    }
    val left = it.result.faces[0].coordinates.xmin.toFloat()
    val top = it.result.faces[0].coordinates.ymin.toFloat()
    val right = it.result.faces[0].coordinates.xmax.toFloat()
    val bottom = it.result.faces[0].coordinates.ymax.toFloat()
    // it did draw the rectangle, but never in the position that I expected
    canvas.drawRect(left, top, right, bottom, paint)
    // because I got a landscape photo, I rotate the bitmap to portrait
    binding.ivFace.setImageBitmap(rotateImage(bmp, -90f))
}
fun rotateImage(source: Bitmap, angle: Float): Bitmap? {
    val matrix = Matrix()
    matrix.postRotate(angle)
    return Bitmap.createBitmap(
        source, 0, 0, source.width, source.height,
        matrix, true
    )
}
I've been stuck on this for a week; can anyone help me?
I am working on an Android application where I create a QR code using a QR generation library. I have achieved the QR code generation by sending data from the previous Activity to the main Activity, where I generate and show the QR code.
The issue occurs when saving the QR code to the gallery and when sharing it.
I have implemented a sharing intent, but it says the intent cannot share an empty file.
The same issue appears when I try to save the file.
Basically, the file with the QR code is always empty.
Here is my code for sharing and QR generation:
class QRGenerationAll : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.qr_generation_all)
        val byteArray = intent.getByteArrayExtra("logoimage")
        val bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.size)
        img_logoimage.setImageBitmap(bmp)
        val valuecatchedvalues: String = intent.getStringExtra("sendedvalues")
        tv_textforqr.setText(valuecatchedvalues)
        val tv_textforqr: String = intent.getStringExtra("logotext")
        tv_textforlogoname.setText(tv_textforqr)
        try {
            // setting size of qr code
            val manager = getSystemService(WINDOW_SERVICE) as WindowManager
            val display = manager.defaultDisplay
            val point = Point()
            display.getSize(point)
            val width = point.x
            val height = point.y
            val smallestDimension = if (width < height) width else height
            // val qrInput = findViewById(R.id.qrInput) as EditText
            // setting parameters for qr code
            val charset = "UTF-8" // or "ISO-8859-1"
            val hintMap: MutableMap<EncodeHintType, ErrorCorrectionLevel> = HashMap()
            hintMap[EncodeHintType.ERROR_CORRECTION] = ErrorCorrectionLevel.L
            if (valuecatchedvalues != null) {
                createQRCodeText(
                    valuecatchedvalues,
                    charset,
                    hintMap,
                    smallestDimension,
                    smallestDimension
                ) // MAIN METHOD FOR QR GENERATE
            }
        } catch (ex: Exception) {
            Log.e("QrGenerate", ex.message)
        }
        buttonshare.setOnClickListener(View.OnClickListener {
            val sharingIntent = Intent(Intent.ACTION_SEND)
            sharingIntent.type = "text/plain"
            sharingIntent.putExtra(Intent.EXTRA_SUBJECT, "Subject Here")
            sharingIntent.putExtra(Intent.EXTRA_TEXT, bmp)
            startActivity(
                Intent.createChooser(
                    sharingIntent,
                    resources.getString(R.string.app_name)
                )
            )
        })
    }

    private fun createQRCodeText(
        valueqrTextData: String,
        charset: String,
        hintMap: MutableMap<EncodeHintType, ErrorCorrectionLevel>,
        smallestDimension: Int,
        smallestDimension1: Int
    ) {
        try {
            // generating qr code in bitmatrix type
            val matrix = MultiFormatWriter().encode(
                String(
                    valueqrTextData.toByteArray(charset(charset)), kotlin.text.charset(charset)
                ), BarcodeFormat.QR_CODE, smallestDimension, smallestDimension1, hintMap
            )
            // converting bitmatrix to bitmap
            val width = matrix.width
            val height = matrix.height
            val pixels = IntArray(width * height)
            // All are 0, or black, by default
            for (y in 0 until height) {
                val offset = y * width
                for (x in 0 until width) {
                    pixels[offset + x] = if (matrix[x, y]) Color.BLACK else Color.WHITE
                }
            }
            val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
            bitmap.setPixels(pixels, 0, width, 0, 0, width, height)
            // setting bitmap to image view
            val myImage = findViewById(R.id.imageView1) as ImageView
            myImage.setImageBitmap(bitmap)
        } catch (er: java.lang.Exception) {
            Log.e("QrGenerate", er.message)
        }
    }
}
The following snippet is related to extracting text from the previous Activity and generating the QR code in the receiving Activity.
val byteArray = intent.getByteArrayExtra("logoimage")
val bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.size)
img_logoimage.setImageBitmap(bmp)
val valuecatchedvalues: String = intent.getStringExtra("sendedvalues")
tv_textforqr.setText(valuecatchedvalues)
val tv_textforqr: String = intent.getStringExtra("logotext")
tv_textforlogoname.setText(tv_textforqr)
This is the only issue I am facing here: the QR image is generated, but I cannot share or save it.
Here is the message after the share intent.
I have searched this link for guidance: Link 1
You've passed the wrong argument value for the EXTRA_TEXT key. Instead of bmp, you should use valuecatchedvalues:
sharingIntent.putExtra(Intent.EXTRA_TEXT, valuecatchedvalues)
Why?
Because this is the value you create your QR code from.
bmp is of type Bitmap, and a Bitmap cannot be processed when the value for the EXTRA_TEXT key is extracted, because a String is expected.
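If your actual goal is to share the QR image itself rather than the text it encodes, EXTRA_TEXT is the wrong channel altogether; you would write the bitmap to a file and share a content URI via EXTRA_STREAM instead. A minimal sketch, assuming a FileProvider is declared in your manifest with the authority used below (the authority string and file name are placeholders) and that you keep the generated bitmap in a field (here qrBitmap) instead of a local variable:
buttonshare.setOnClickListener {
    // write the generated QR bitmap to a cache file
    val file = File(cacheDir, "qr_code.png")
    file.outputStream().use { qrBitmap.compress(Bitmap.CompressFormat.PNG, 100, it) }
    // wrap it in a content URI that other apps are allowed to read
    val uri = FileProvider.getUriForFile(this, "$packageName.provider", file)
    val sharingIntent = Intent(Intent.ACTION_SEND).apply {
        type = "image/png"
        putExtra(Intent.EXTRA_STREAM, uri)
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
    }
    startActivity(Intent.createChooser(sharingIntent, resources.getString(R.string.app_name)))
}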
I'm using CameraX's Analyzer use case with ML Kit's BarcodeScanner. I would like to crop a portion of the image received from the camera before passing it to the scanner.
What I'm doing right now is converting the ImageProxy (that I receive in the Analyzer) to a Bitmap, cropping it, and then passing it to the BarcodeScanner. The downside is that it's not a very fast or efficient process.
I've also noticed the warning I get in Logcat when running this code:
ML Kit has detected that you seem to pass camera frames to the detector as a Bitmap object. This is inefficient. Please use YUV_420_888 format for camera2 API or NV21 format for (legacy) camera API and directly pass down the byte array to ML Kit.
It would be nice not to do the ImageProxy conversion at all, but how do I crop the rectangle I want to analyze?
What I've already tried is setting the cropRect field of the Image (imageProxy.image.cropRect), but it doesn't seem to affect the end result.
Yes, it's true: if you use a ViewPort and set it on your use cases (imageCapture or imageAnalysis, as in https://developer.android.com/training/camerax/configuration), you only get information about the crop rectangle. The image is actually cropped only when imageCapture saves to disk; it is not cropped for ImageAnalysis, nor for imageCapture without saving to disk. Here is how I solved this problem:
First of all, set the view port for your use cases as described here: https://developer.android.com/training/camerax/configuration
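For reference, a minimal sketch of that configuration (preview, imageAnalysis, cameraSelector and cameraProvider are assumed to be set up as in the linked guide):
// group both use cases under one ViewPort so that imageProxy.cropRect
// reports the region matching what the preview shows
val viewPort = ViewPort.Builder(Rational(1, 1), preview.targetRotation).build()
val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .addUseCase(imageAnalysis)
    .setViewPort(viewPort)
    .build()
cameraProvider.bindToLifecycle(this, cameraSelector, useCaseGroup)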
Then get a cropped bitmap to analyze:
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    if (mediaImage != null && mediaImage.format == ImageFormat.YUV_420_888) {
        croppedBitmap(mediaImage, imageProxy.cropRect).let { bitmap ->
            requestDetectInImage(InputImage.fromBitmap(bitmap, rotation))
                .addOnCompleteListener { imageProxy.close() }
        }
    } else {
        imageProxy.close()
    }
}
private fun croppedBitmap(mediaImage: Image, cropRect: Rect): Bitmap {
    val yBuffer = mediaImage.planes[0].buffer // Y
    val vuBuffer = mediaImage.planes[2].buffer // VU
    val ySize = yBuffer.remaining()
    val vuSize = vuBuffer.remaining()
    val nv21 = ByteArray(ySize + vuSize)
    yBuffer.get(nv21, 0, ySize)
    vuBuffer.get(nv21, ySize, vuSize)
    val yuvImage = YuvImage(nv21, ImageFormat.NV21, mediaImage.width, mediaImage.height, null)
    val outputStream = ByteArrayOutputStream()
    yuvImage.compressToJpeg(cropRect, 100, outputStream)
    val imageBytes = outputStream.toByteArray()
    return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
}
Possibly there is some loss in conversion speed, but on my devices I did not notice a difference. I set quality to 100 in compressToJpeg; maybe a lower quality would improve speed, but that needs testing.
Update (May 02 '21):
I found another way that avoids converting to JPEG and then to a bitmap. This should be faster.
Set the viewport as before.
Convert YUV_420_888 to NV21, then crop and analyze:
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    if (mediaImage != null && mediaImage.format == ImageFormat.YUV_420_888) {
        croppedNV21(mediaImage, imageProxy.cropRect).let { byteArray ->
            requestDetectInImage(
                InputImage.fromByteArray(
                    byteArray,
                    imageProxy.cropRect.width(),
                    imageProxy.cropRect.height(),
                    rotation,
                    IMAGE_FORMAT_NV21,
                )
            )
                .addOnCompleteListener { imageProxy.close() }
        }
    } else {
        imageProxy.close()
    }
}
private fun croppedNV21(mediaImage: Image, cropRect: Rect): ByteArray {
    val yBuffer = mediaImage.planes[0].buffer // Y
    val vuBuffer = mediaImage.planes[2].buffer // VU
    val ySize = yBuffer.remaining()
    val vuSize = vuBuffer.remaining()
    val nv21 = ByteArray(ySize + vuSize)
    yBuffer.get(nv21, 0, ySize)
    vuBuffer.get(nv21, ySize, vuSize)
    return cropByteArray(nv21, mediaImage.width, cropRect)
}
private fun cropByteArray(array: ByteArray, imageWidth: Int, cropRect: Rect): ByteArray {
    val croppedArray = ByteArray(cropRect.width() * cropRect.height())
    var i = 0
    array.forEachIndexed { index, byte ->
        val x = index % imageWidth
        val y = index / imageWidth
        if (cropRect.left <= x && x < cropRect.right && cropRect.top <= y && y < cropRect.bottom) {
            croppedArray[i] = byte
            i++
        }
    }
    return croppedArray
}
The first crop fun I took from here: Android: How to crop images using CameraX?
I also found another crop fun; it seems more complicated:
private fun cropByteArray(src: ByteArray, width: Int, height: Int, cropRect: Rect): ByteArray {
    val x = cropRect.left * 2 / 2
    val y = cropRect.top * 2 / 2
    val w = cropRect.width() * 2 / 2
    val h = cropRect.height() * 2 / 2
    val yUnit = w * h
    val uv = yUnit / 2
    val nData = ByteArray(yUnit + uv)
    val uvIndexDst = w * h - y / 2 * w
    val uvIndexSrc = width * height + x
    var srcPos0 = y * width
    var destPos0 = 0
    var uvSrcPos0 = uvIndexSrc
    var uvDestPos0 = uvIndexDst
    for (i in y until y + h) {
        System.arraycopy(src, srcPos0 + x, nData, destPos0, w) // y memory block copy
        srcPos0 += width
        destPos0 += w
        if (i and 1 == 0) {
            System.arraycopy(src, uvSrcPos0, nData, uvDestPos0, w) // uv memory block copy
            uvSrcPos0 += width
            uvDestPos0 += w
        }
    }
    return nData
}
The second crop fun I took from here: https://www.programmersought.com/article/75461140907/
I would be glad if someone can help improve the code.
I'm still improving the way to do it, but this works for me for now.
CameraX: crop the image before sending it to be analyzed.
<androidx.constraintlayout.widget.ConstraintLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:paddingBottom="@dimen/_40sdp">

    <androidx.camera.view.PreviewView
        android:id="@+id/previewView"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintDimensionRatio="1:1"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
Cropping the image to 1:1 before passing it to analysis:
override fun onCaptureSuccess(image: ImageProxy) {
    super.onCaptureSuccess(image)
    var bitmap: Bitmap = imageProxyToBitmap(image)
    val dimension: Int = min(bitmap.width, bitmap.height)
    bitmap = ThumbnailUtils.extractThumbnail(bitmap, dimension, dimension)
    imageView.setImageBitmap(bitmap) // here you can pass the cropped (from the center) image to analyze
    image.close()
}
Function for converting an ImageProxy into a Bitmap:
private fun imageProxyToBitmap(image: ImageProxy): Bitmap {
    val buffer: ByteBuffer = image.planes[0].buffer
    val bytes = ByteArray(buffer.remaining())
    buffer.get(bytes)
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
You would use ImageProxy.setCropRect(Rect) to set the crop rectangle, and ImageProxy.cropRect to read it back.
For example, if you had an imageProxy, you would call imageProxy.setCropRect(rect) and then read imageProxy.cropRect.
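A short sketch of that inside an analyzer (note that cropRect is metadata only; the buffer itself stays uncropped, which is why the answers above still crop manually):
override fun analyze(imageProxy: ImageProxy) {
    // set the region of interest on this frame...
    val size = minOf(imageProxy.width, imageProxy.height)
    imageProxy.setCropRect(Rect(0, 0, size, size))
    // ...and read it back wherever the frame is consumed
    val roi: Rect = imageProxy.cropRect
    // pass the frame plus roi to your detector here, then close the proxy
    imageProxy.close()
}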
I have a set of two functions that I use to bind images to a RecyclerView: one converts a string (Base64) to a bitmap, and the other rounds the corners of said image.
// convert string to bitmap
fun stringToBitMap(encodedString: String): Bitmap? {
    println("string to bitmap is being called")
    return try {
        val encodeByte: ByteArray = Base64.decode(encodedString, Base64.DEFAULT)
        BitmapFactory.decodeByteArray(encodeByte, 0, encodeByte.size)
    } catch (e: Exception) {
        println("Failed to convert string to bitmap")
        e.message
        null
    }
}
// round corners
fun getRoundedCornerBitmap(bitmap: Bitmap, pixels: Int): Bitmap {
    println("get rounded corners is being called")
    val output = Bitmap.createBitmap(bitmap.width, bitmap.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(output)
    val color = -0xbdbdbe
    val paint = Paint()
    val rect = Rect(0, 0, bitmap.width, bitmap.height)
    val rectF = RectF(rect)
    val roundPx = pixels.toFloat()
    paint.isAntiAlias = true
    canvas.drawARGB(0, 0, 0, 0)
    paint.color = color
    canvas.drawRoundRect(rectF, roundPx, roundPx, paint)
    paint.xfermode = PorterDuffXfermode(PorterDuff.Mode.SRC_IN)
    canvas.drawBitmap(bitmap, rect, rect, paint)
    return output
}
And I annotate my final function with BindingAdapter, then I call the function from the XML file:
@BindingAdapter("poster")
fun image(view: ImageView, image: String) {
    view.setImageBitmap(stringToBitMap(image)?.let { getRoundedCornerBitmap(it, 10) })
}
It works, but the performance is poor on some devices. I'm debugging my app on a low-resource phone (Samsung SM-J106B) and CPU usage spikes to 35% when scrolling fast (my images are not high-res, only 400x400). The RecyclerView also keeps calling these functions, which makes the scrolling kind of sluggish. So the question is: how can I improve my functions?
PS: I'm a complete newbie :(
I ended up using Glide like this:
fun poster(view: ImageView, image: String) {
    val imageByteArray: ByteArray = Base64.decode(image, Base64.DEFAULT)
    val round = RequestOptions.bitmapTransform(RoundedCorners(14))
    Glide.with(view)
        .load(imageByteArray)
        .apply(round)
        .into(view)
}
performance is better now :D
I found this solution (https://itnext.io/converting-pytorch-float-tensor-to-android-rgba-bitmap-with-kotlin-ffd4602a16b6), but when I tried to convert that way, I found that the size of inputTensor.dataAsFloatArray is larger than bitmap.width * bitmap.height. How does converting a tensor to a float array work, and is there any other possible method to convert a PyTorch tensor to a bitmap?
val inputTensor = TensorImageUtils.bitmapToFloat32Tensor(
    bitmap,
    TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB
)
// Float array size is 196608 when width and height are 256x256 = 65536
val res = floatArrayToGrayscaleBitmap(inputTensor.dataAsFloatArray, bitmap.width, bitmap.height)
fun floatArrayToGrayscaleBitmap(
    floatArray: FloatArray,
    width: Int,
    height: Int,
    alpha: Byte = (255).toByte(),
    reverseScale: Boolean = false
): Bitmap {
    // Create empty bitmap in RGBA format (even though it says ARGB, the channels are RGBA)
    val bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val byteBuffer = ByteBuffer.allocate(width * height * 4)
    Log.d("App", floatArray.size.toString() + " " + (width * height * 4).toString())
    // mapping smallest value to 0 and largest value to 255
    val maxValue = floatArray.max() ?: 1.0f
    val minValue = floatArray.min() ?: 0.0f
    val delta = maxValue - minValue
    var tempValue: Byte
    // Define if float min..max will be mapped to 0..255 or 255..0
    val conversion = when (reverseScale) {
        false -> { v: Float -> ((v - minValue) / delta * 255).toByte() }
        true -> { v: Float -> (255 - (v - minValue) / delta * 255).toByte() }
    }
    // copy each value from float array to RGB channels and set alpha channel
    floatArray.forEachIndexed { i, value ->
        tempValue = conversion(value)
        byteBuffer.put(4 * i, tempValue)
        byteBuffer.put(4 * i + 1, tempValue)
        byteBuffer.put(4 * i + 2, tempValue)
        byteBuffer.put(4 * i + 3, alpha)
    }
    bmp.copyPixelsFromBuffer(byteBuffer)
    return bmp
}
None of the answers were able to produce the output I wanted, so this is what I came up with - it is basically just a reverse-engineered version of what happens in TensorImageUtils.bitmapToFloat32Tensor().
Please note that this function only works if you are using MemoryFormat.CONTIGUOUS (which is the default) in TensorImageUtils.bitmapToFloat32Tensor().
fun tensor2Bitmap(input: FloatArray, width: Int, height: Int, normMeanRGB: FloatArray, normStdRGB: FloatArray): Bitmap? {
    val pixelsCount = height * width
    val pixels = IntArray(pixelsCount)
    val output = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val conversion = { v: Float -> ((v.coerceIn(0.0f, 1.0f)) * 255.0f).roundToInt() }
    val offset_g = pixelsCount
    val offset_b = 2 * pixelsCount
    for (i in 0 until pixelsCount) {
        val r = conversion(input[i] * normStdRGB[0] + normMeanRGB[0])
        val g = conversion(input[i + offset_g] * normStdRGB[1] + normMeanRGB[1])
        val b = conversion(input[i + offset_b] * normStdRGB[2] + normMeanRGB[2])
        pixels[i] = 255 shl 24 or (r and 0xff shl 16) or (g and 0xff shl 8) or (b and 0xff)
    }
    output.setPixels(pixels, 0, width, 0, 0, width, height)
    return output
}
Example usage could then be as follows:
tensor2Bitmap(outputTensor.dataAsFloatArray, bitmap.width, bitmap.height, TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB)
I faced the same problem, and I found that the function TensorImageUtils.bitmapToFloat32Tensor() itself tortures the RGB colorspace. For now, you should convert the YUV frame to a bitmap and pass that to TensorImageUtils.bitmapToFloat32Tensor instead.
I modified the code from phillies (above) to get a colorful bitmap. Note that the format of an output tensor is typically NCHW.
Here's my function in Kotlin; hopefully it works in your case:
private fun floatArrayToBitmap(floatArray: FloatArray, width: Int, height: Int): Bitmap {
    // Create empty bitmap in ARGB format
    val bmp: Bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    // one packed ARGB int per pixel is enough for setPixels
    val pixels = IntArray(width * height)
    // mapping smallest value to 0 and largest value to 255
    val maxValue = floatArray.max() ?: 1.0f
    val minValue = floatArray.min() ?: -1.0f
    val delta = maxValue - minValue
    // Define if float min..max will be mapped to 0..255 or 255..0
    val conversion = { v: Float -> ((v - minValue) / delta * 255.0f).roundToInt() }
    // copy each value from the float array (CHW layout) to the RGB channels
    for (i in 0 until width * height) {
        val r = conversion(floatArray[i])
        val g = conversion(floatArray[i + width * height])
        val b = conversion(floatArray[i + 2 * width * height])
        pixels[i] = rgb(r, g, b) // android.graphics.Color.rgb()
    }
    bmp.setPixels(pixels, 0, width, 0, 0, width, height)
    return bmp
}
Hopefully future releases of PyTorch Mobile will fix this bug.