The documentation is minimal or non-existent; I've read OpenCV's official docs and the Stack Overflow material on declaring a Mat in OpenCV Java.
I've done some research on an image processing algorithm using a Python Jupyter notebook, and now I want to verify it on Android.
For the Python part:
import cv2
from numpy import asarray
dimg = cv2.imread('story/IMG_0371.PNG')
dimg_small = dimg[571:572, 401:402]
dimg_small_gamma = adjust_gamma(dimg_small)
data = asarray(dimg_small)
data_gamma = asarray(dimg_small_gamma)
# print data
array([[[52, 45, 44]]], dtype=uint8)
# print data_gamma
array([[[115, 107, 105]]], dtype=uint8)
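(The adjust_gamma helper isn't shown above; a typical lookup-table implementation, offered here as a sketch rather than the question's exact function, looks like this. With gamma = 2.0 it reproduces the printed values exactly.)

import cv2
import numpy as np

def adjust_gamma(image, gamma=2.0):
    # Map each intensity in [0, 255] through the inverse-gamma curve
    inv_gamma = 1.0 / gamma
    table = np.array(
        [((i / 255.0) ** inv_gamma) * 255 for i in range(256)]
    ).astype("uint8")
    return cv2.LUT(image, table)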
The image of dimg_small looks like this:
Meanwhile, the image of dimg_small_gamma looks like this:
So far, I want to create an image with the same data, [52, 45, 44] and [115, 107, 105], in Android to verify my algorithm; either Java or Kotlin would be fine.
private fun genTestMat(): Mat { // upper one
    val img = Mat(3, 1, CvType.CV_8UC3)
    img.put(3, 1, 52.0, 45.0, 44.0)
    return img
}

private fun genTestMat2(): Mat { // under one
    val img = Mat(3, 1, CvType.CV_8UC3)
    img.put(3, 1, 115.0, 107.0, 105.0)
    return img
}
private fun opencvMatToBitmap(mat: Mat): Bitmap {
    val bitmap: Bitmap = Bitmap.createBitmap(mat.width(), mat.height(), Bitmap.Config.ARGB_8888)
    Utils.matToBitmap(mat, bitmap)
    return bitmap
}
...
val testImg = opencvMatToBitmap(genTestMat())
val testImg2 = opencvMatToBitmap(genTestMat2())
image_under.setImageBitmap(testImg2)
image_upper.setImageBitmap(testImg)
This is the image shown on my Android device:
How can I make Android show the same image as in the Python Jupyter notebook?
Updated:
I tried Mat.put with an array as follows:
private fun genTestMat(): Mat { // upper one
    val img = Mat(3, 1, CvType.CV_8UC3)
    val data = floatArrayOf(52f, 45f, 44f)
    img.put(1, 1, data)
    return img
}
Unfortunately it crashed:
Caused by: java.lang.UnsupportedOperationException: Mat data type is not compatible: 16
at org.opencv.core.Mat.put(Mat.java:801)
at com.example.cap2hist.MainActivity.genTestMat(MainActivity.kt:131)
After chatting with folks on the OpenCV forum, I got the answer:
private fun genTestMat(): Mat {
    val inputMat = Mat(1, 1, CvType.CV_8UC3)
    val rgbMat = Mat()
    // Convert the container to RGBA channel order before filling it
    Imgproc.cvtColor(inputMat, rgbMat, Imgproc.COLOR_BGR2RGBA, 3)
    // Fill the whole 1x1 Mat with the target pixel value
    rgbMat.setTo(Scalar(52.0, 45.0, 44.0))
    return rgbMat
}
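For what it's worth, the earlier failures have a simple explanation: Mat.put takes zero-based (row, col) indices, so put(3, 1, ...) on a 3x1 Mat addresses a position outside the matrix and writes nothing, and the FloatArray overload is incompatible with CV_8UC3 (type 16), which stores bytes. A direct put should therefore also work; a minimal sketch (my own variant, not from the forum answer):

private fun genTestPixelMat(): Mat { // hypothetical helper name
    val img = Mat(1, 1, CvType.CV_8UC3)
    // CV_8UC3 expects a ByteArray (or the vararg Double overload) at a valid index
    img.put(0, 0, byteArrayOf(52.toByte(), 45.toByte(), 44.toByte()))
    return img
}

Note the channel order: OpenCV loads images as BGR, so whether (52, 45, 44) needs reordering depends on how the Mat is later converted to a Bitmap, which is what the cvtColor call above takes care of.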
Related
Heyho,
I'm quite new to Kotlin, but I came across this problem:
I have a library function that generates an image (a QR code). Now I'd like to display that image, but I have no idea how. The documentation only explains how to save the image locally, and I'm not really interested in saving it. I can get the image either as a FileStream or as a ByteArray. Is there any way to display either of these as an Image in the UI?
An Example:
@Composable
fun QrCode(stand: String) {
    Text(text = "QR-Code:", fontSize = 16.sp)
    //? Image(QRCode(stand).render().getBytes()) // this obviously won't work
}
Any ideas?
In order to display an Image, you can refer to this:
@Composable
fun BitmapImage(bitmap: Bitmap) {
    Image(
        bitmap = bitmap.asImageBitmap(),
        contentDescription = "some useful description",
    )
}
So what remains is to find a way to convert your target input into a Bitmap.
If you have a ByteArray of the image file, you can refer to this:
fun convertImageByteArrayToBitmap(imageData: ByteArray): Bitmap {
    return BitmapFactory.decodeByteArray(imageData, 0, imageData.size)
}
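Putting it together with the question's own snippet (a sketch; it assumes the library's render().getBytes() returns encoded image bytes such as PNG, which BitmapFactory can decode):

@Composable
fun QrCode(stand: String) {
    Text(text = "QR-Code:", fontSize = 16.sp)
    // getBytes() is the question's library call; assumed to yield encoded image data
    val bitmap = remember(stand) {
        convertImageByteArrayToBitmap(QRCode(stand).render().getBytes())
    }
    BitmapImage(bitmap)
}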
If you just have the QR string, you can refer to this to convert the QR string to a Bitmap:
fun encodeAsBitmap(source: String, width: Int, height: Int): Bitmap? {
    val result: BitMatrix = try {
        MultiFormatWriter().encode(source, BarcodeFormat.QR_CODE, width, height, null)
    } catch (e: Exception) {
        return null
    }
    val w = result.width
    val h = result.height
    val pixels = IntArray(w * h)
    for (y in 0 until h) {
        val offset = y * w
        for (x in 0 until w) {
            pixels[offset + x] = if (result[x, y]) Color.BLACK else Color.WHITE
        }
    }
    val bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888)
    // Use w (the matrix width) as the stride; the encoder may not return exactly `width`
    bitmap.setPixels(pixels, 0, w, 0, 0, w, h)
    return bitmap
}
And you need to add the dependency implementation 'com.google.zxing:core:3.3.1' to your build.gradle when you use the above code.
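With that in place, the question's composable could look like this (a sketch; the 512x512 size is an arbitrary choice):

@Composable
fun QrCode(stand: String) {
    Text(text = "QR-Code:", fontSize = 16.sp)
    // encodeAsBitmap returns null if encoding fails, so display only on success
    remember(stand) { encodeAsBitmap(stand, 512, 512) }?.let { BitmapImage(it) }
}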
I'm trying to build an image classifier Android app. I've built my model using Keras.
The model is as follows:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.applications import MobileNetV2

model = Sequential()
model.add(MobileNetV2(include_top=False, weights='imagenet', input_shape=(224, 224, 3)))
model.add(GlobalAveragePooling2D())
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
model.layers[0].trainable = False
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
Output:
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
mobilenetv2_1.00_224 (Functi (None, 7, 7, 1280) 2257984
_________________________________________________________________
global_average_pooling2d_2 ( (None, 1280) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 1280) 0
_________________________________________________________________
dense_1 (Dense) (None, 3) 3843
=================================================================
Total params: 2,261,827
Trainable params: 3,843
Non-trainable params: 2,257,984
After training, I'm converting the model using:
model = tf.keras.models.load_model('model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("myModel.tflite", "wb").write(tflite_model)
For Android, the code is:
make_prediction.setOnClickListener(View.OnClickListener {
    var resized = Bitmap.createScaledBitmap(bitmap, 224, 224, true)
    val model = MyModel.newInstance(this)
    var tbuffer = TensorImage.fromBitmap(resized)
    var byteBuffer = tbuffer.buffer
    // Creates inputs for reference.
    val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
    inputFeature0.loadBuffer(byteBuffer)
    // Runs model inference and gets result.
    val outputs = model.process(inputFeature0)
    val outputFeature0 = outputs.outputFeature0AsTensorBuffer
    var max = getMax(outputFeature0.floatArray)
    text_view.setText(labels[max])
    // Releases model resources if no longer used.
    model.close()
})
But whenever I try to run my app, it closes and I get this error in Logcat:
java.lang.IllegalArgumentException: The size of byte buffer and the shape do not match.
If I change the input shape of my image from 224 to 300, train my model with the 300 input shape, and plug it into Android, I get another error:
java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 1080000 bytes and a Java Buffer with 150528 bytes
Any kind of help will be really appreciated.
Use it like:
make_prediction.setOnClickListener(View.OnClickListener {
    var resized = Bitmap.createScaledBitmap(bitmap, 224, 224, true)
    val model = MyModel.newInstance(this)
    // Create a FLOAT32 TensorImage so the buffer is 224 * 224 * 3 * 4 bytes,
    // matching the model's float input (TensorImage.fromBitmap defaults to UINT8,
    // which is why the buffer sizes did not match)
    val tImage = TensorImage(DataType.FLOAT32)
    tImage.load(resized)
    val byteBuffer = tImage.buffer
    // Creates inputs for reference.
    val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
    inputFeature0.loadBuffer(byteBuffer)
    // Runs model inference and gets result.
    val outputs = model.process(inputFeature0)
    val outputFeature0 = outputs.outputFeature0AsTensorBuffer
    var max = getMax(outputFeature0.floatArray)
    text_view.setText(labels[max])
    // Releases model resources if no longer used.
    model.close()
})
Then check with the debugger whether the problem persists, or whether
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
causes another one.
Ping me if you need more help
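One more thing worth checking (my addition, not something the answer above covers): TensorImage(DataType.FLOAT32) only casts pixel values to float, leaving them in [0, 255]. If your Keras pipeline normalized inputs (MobileNetV2's preprocess_input maps them to [-1, 1]), you must apply the same scaling on Android, for example with the support library's ImageProcessor. A sketch assuming [-1, 1] scaling:

val imageProcessor = ImageProcessor.Builder()
    .add(ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
    .add(NormalizeOp(127.5f, 127.5f)) // (pixel - 127.5) / 127.5 maps [0, 255] to [-1, 1]
    .build()
var tImage = TensorImage(DataType.FLOAT32)
tImage.load(bitmap)
tImage = imageProcessor.process(tImage)
inputFeature0.loadBuffer(tImage.buffer)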
I'm trying to convert my tflite output locations to positions the user can see on screen.
I don't know what I'm doing wrong, but the tflite model is supposed to give me the locations (left, right, top, bottom) between 0 and 1, yet it gives output between 0 and 320 (320 being my model's input size).
This is my image analyzer class (it's live detection and I use CameraX):
// Initializing the jaw detection model
private val jawDetectionModel = JawDetection.newInstance(context)

val imageProcessor = ImageProcessor.Builder()
    .add(ResizeWithCropOrPadOp(320, 320))
    .add(ResizeOp(320, 320, ResizeOp.ResizeMethod.NEAREST_NEIGHBOR))
    .add(NormalizeOp(0f, 1f))
    .build()

// Convert Image to Bitmap then to TensorImage
val tfImage = TensorImage.fromBitmap(imageProxy.convertImageProxyToBitmap(context))

// Process the image using the trained model, sort and pick out the top results
val outputs = jawDetectionModel.process(imageProcessor.process(tfImage))
    .detectionResultList.take(MaxResultDisplay)

val left = outputs.first().locationAsRectF.left
val top = outputs.first().locationAsRectF.top
val right = outputs.first().locationAsRectF.right
val bottom = outputs.first().locationAsRectF.bottom
And this converts the ImageProxy to a Bitmap:
@SuppressLint("UnsafeExperimentalUsageError")
fun ImageProxy.convertImageProxyToBitmap(context: Context): Bitmap? {
    val yuvToRgbConverter = YuvToRgbConverter(context)
    val image = this.image ?: return null
    // Initialise buffer (bitmapBuffer and rotationMatrix are lateinit
    // properties of the enclosing class)
    if (!::bitmapBuffer.isInitialized) {
        rotationMatrix = Matrix()
        rotationMatrix.postRotate(this.imageInfo.rotationDegrees.toFloat())
        bitmapBuffer = Bitmap.createBitmap(
            this.width, this.height, Bitmap.Config.ARGB_8888
        )
    }
    // Pass image to an image analyser
    yuvToRgbConverter.yuvToRgb(image, bitmapBuffer)
    // Create the Bitmap in the correct orientation
    return Bitmap.createBitmap(
        bitmapBuffer,
        0,
        0,
        bitmapBuffer.width,
        bitmapBuffer.height,
        rotationMatrix,
        false
    )
}
I am working on an Android application where I create a QR code using a QR generation library. I have achieved the QR code generation by sending data from the previous Activity to the main Activity, where I generate and show the QR code.
The issue occurs when saving the QR code to the gallery and sharing it.
I have implemented a sharing intent, but it says the intent cannot share an empty file.
The same issue appears when I try to save the file.
Basically, the file with the QR code is always empty.
Here is my code for sharing and QR generation:
class QRGenerationAll : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.qr_generation_all)
        val byteArray = intent.getByteArrayExtra("logoimage")
        val bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.size)
        img_logoimage.setImageBitmap(bmp)
        val valuecatchedvalues: String = intent.getStringExtra("sendedvalues")
        tv_textforqr.setText(valuecatchedvalues)
        val tv_textforqr: String = intent.getStringExtra("logotext")
        tv_textforlogoname.setText(tv_textforqr)
        try {
            // setting size of qr code
            val manager = getSystemService(WINDOW_SERVICE) as WindowManager
            val display = manager.defaultDisplay
            val point = Point()
            display.getSize(point)
            val width = point.x
            val height = point.y
            val smallestDimension = if (width < height) width else height
            // val qrInput = findViewById(R.id.qrInput) as EditText
            // setting parameters for qr code
            val charset = "UTF-8" // or "ISO-8859-1"
            val hintMap: MutableMap<EncodeHintType, ErrorCorrectionLevel> = HashMap()
            hintMap[EncodeHintType.ERROR_CORRECTION] = ErrorCorrectionLevel.L
            if (valuecatchedvalues != null) {
                createQRCodeText(
                    valuecatchedvalues,
                    charset,
                    hintMap,
                    smallestDimension,
                    smallestDimension
                ) // MAIN METHOD FOR QR GENERATE
            }
        } catch (ex: Exception) {
            Log.e("QrGenerate", ex.message)
        }
        buttonshare.setOnClickListener(View.OnClickListener {
            val sharingIntent = Intent(Intent.ACTION_SEND)
            sharingIntent.type = "text/plain"
            sharingIntent.putExtra(Intent.EXTRA_SUBJECT, "Subject Here")
            sharingIntent.putExtra(Intent.EXTRA_TEXT, bmp)
            startActivity(
                Intent.createChooser(
                    sharingIntent,
                    resources.getString(R.string.app_name)
                )
            )
        })
    }

    private fun createQRCodeText(
        valueqrTextData: String,
        charset: String,
        hintMap: MutableMap<EncodeHintType, ErrorCorrectionLevel>,
        smallestDimension: Int,
        smallestDimension1: Int
    ) {
        try {
            // generating qr code in bitmatrix type
            val matrix = MultiFormatWriter().encode(
                String(
                    valueqrTextData.toByteArray(charset(charset)), kotlin.text.charset(charset)
                ), BarcodeFormat.QR_CODE, smallestDimension, smallestDimension1, hintMap
            )
            // converting bitmatrix to bitmap
            val width = matrix.width
            val height = matrix.height
            val pixels = IntArray(width * height)
            // All are 0, or black, by default
            for (y in 0 until height) {
                val offset = y * width
                for (x in 0 until width) {
                    pixels[offset + x] = if (matrix[x, y]) Color.BLACK else Color.WHITE
                }
            }
            val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
            bitmap.setPixels(pixels, 0, width, 0, 0, width, height)
            // setting bitmap to image view
            val myImage = findViewById(R.id.imageView1) as ImageView
            myImage.setImageBitmap(bitmap)
        } catch (er: java.lang.Exception) {
            Log.e("QrGenerate", er.message)
        }
    }
}
The following snippet relates to extracting text from the previous Activity and generating the QR in the receiving Activity.
val byteArray = intent.getByteArrayExtra("logoimage")
val bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.size)
img_logoimage.setImageBitmap(bmp)
val valuecatchedvalues: String = intent.getStringExtra("sendedvalues")
tv_textforqr.setText(valuecatchedvalues)
val tv_textforqr: String = intent.getStringExtra("logotext")
tv_textforlogoname.setText(tv_textforqr)
This is the only issue I am facing here: the QR image is generated, but I cannot share or save it.
Here is the message after the share intent:
I have searched this link for guidance: Link 1
You've passed the wrong argument value for the EXTRA_TEXT key. Instead of bmp you should use valuecatchedvalues:
sharingIntent.putExtra(Intent.EXTRA_TEXT, valuecatchedvalues)
Why?
Because this is the value you create your QR code from;
bmp is of type Bitmap, and a Bitmap cannot be processed when the value for the EXTRA_TEXT key is extracted, because a String is expected.
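If what you actually want is to share the generated QR image itself rather than the text, one common approach is to write the bitmap to a file and share its Uri. A sketch (it assumes a FileProvider with authority "${packageName}.provider" is declared in your AndroidManifest.xml with the cache directory on its paths):

private fun shareQrBitmap(context: Context, bitmap: Bitmap) {
    // Write the bitmap to the app's cache directory
    val file = File(context.cacheDir, "qr_code.png")
    FileOutputStream(file).use { out ->
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out)
    }
    // Expose the file through the (assumed) FileProvider and share it as an image
    val uri = FileProvider.getUriForFile(context, "${context.packageName}.provider", file)
    val intent = Intent(Intent.ACTION_SEND).apply {
        type = "image/png"
        putExtra(Intent.EXTRA_STREAM, uri)
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
    }
    context.startActivity(Intent.createChooser(intent, "Share QR code"))
}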
I'm using CameraX's Analyzer use case with MLKit's BarcodeScanner. I would like to crop a portion of the image received from the camera before passing it to the scanner.
What I'm doing right now is converting the ImageProxy (that I receive in the Analyzer) to a Bitmap, cropping it, and then passing it to the BarcodeScanner. The downside is that it's not a very fast or efficient process.
I've also noticed the warning I get in the Logcat when running this code:
ML Kit has detected that you seem to pass camera frames to the
detector as a Bitmap object. This is inefficient. Please use
YUV_420_888 format for camera2 API or NV21 format for (legacy) camera
API and directly pass down the byte array to ML Kit.
It would be nice not to do the ImageProxy conversion, but how do I crop the rectangle I want to analyze?
I've already tried setting the cropRect field of the Image (imageProxy.image.cropRect), but it doesn't seem to affect the end result.
Yes, it's true: if you use a ViewPort and set the viewport on your UseCases (imageCapture or imageAnalysis, as described at https://developer.android.com/training/camerax/configuration), you only get information about the crop rectangle. This matters especially for ImageAnalysis: with imageCapture the on-disk image is cropped before saving, but that doesn't happen for ImageAnalysis (or for imageCapture without saving to disk). Here is how I solved this problem:
First of all, set the view port for the use cases as described here: https://developer.android.com/training/camerax/configuration
Then get a cropped bitmap to analyze:
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    if (mediaImage != null && mediaImage.format == ImageFormat.YUV_420_888) {
        croppedBitmap(mediaImage, imageProxy.cropRect).let { bitmap ->
            requestDetectInImage(InputImage.fromBitmap(bitmap, rotation))
                .addOnCompleteListener { imageProxy.close() }
        }
    } else {
        imageProxy.close()
    }
}
private fun croppedBitmap(mediaImage: Image, cropRect: Rect): Bitmap {
    val yBuffer = mediaImage.planes[0].buffer // Y
    val vuBuffer = mediaImage.planes[2].buffer // VU
    val ySize = yBuffer.remaining()
    val vuSize = vuBuffer.remaining()
    val nv21 = ByteArray(ySize + vuSize)
    yBuffer.get(nv21, 0, ySize)
    vuBuffer.get(nv21, ySize, vuSize)
    val yuvImage = YuvImage(nv21, ImageFormat.NV21, mediaImage.width, mediaImage.height, null)
    val outputStream = ByteArrayOutputStream()
    // compressToJpeg applies cropRect while encoding, so the decoded bitmap is already cropped
    yuvImage.compressToJpeg(cropRect, 100, outputStream)
    val imageBytes = outputStream.toByteArray()
    return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
}
Possibly there is some loss in conversion speed, but on my devices I did not notice a difference. I set quality to 100 in compressToJpeg, but maybe a lower quality would improve speed; that needs testing.
Update (May 02 '21):
I found another way that avoids converting to JPEG and then to Bitmap. This should be faster.
Set the viewport as before.
Convert YUV_420_888 to NV21, then crop and analyze:
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    if (mediaImage != null && mediaImage.format == ImageFormat.YUV_420_888) {
        croppedNV21(mediaImage, imageProxy.cropRect).let { byteArray ->
            requestDetectInImage(
                InputImage.fromByteArray(
                    byteArray,
                    imageProxy.cropRect.width(),
                    imageProxy.cropRect.height(),
                    rotation,
                    IMAGE_FORMAT_NV21,
                )
            ).addOnCompleteListener { imageProxy.close() }
        }
    } else {
        imageProxy.close()
    }
}
private fun croppedNV21(mediaImage: Image, cropRect: Rect): ByteArray {
    val yBuffer = mediaImage.planes[0].buffer // Y
    val vuBuffer = mediaImage.planes[2].buffer // VU
    val ySize = yBuffer.remaining()
    val vuSize = vuBuffer.remaining()
    val nv21 = ByteArray(ySize + vuSize)
    yBuffer.get(nv21, 0, ySize)
    vuBuffer.get(nv21, ySize, vuSize)
    return cropByteArray(nv21, mediaImage.width, cropRect)
}
private fun cropByteArray(array: ByteArray, imageWidth: Int, cropRect: Rect): ByteArray {
    // Copies only bytes whose (x, y) position falls inside cropRect,
    // treating the buffer as a single width-strided plane
    val croppedArray = ByteArray(cropRect.width() * cropRect.height())
    var i = 0
    array.forEachIndexed { index, byte ->
        val x = index % imageWidth
        val y = index / imageWidth
        if (cropRect.left <= x && x < cropRect.right && cropRect.top <= y && y < cropRect.bottom) {
            croppedArray[i] = byte
            i++
        }
    }
    return croppedArray
}
First crop fun I took from here: Android: How to crop images using CameraX?
I also found another crop function; it seems more complicated:
private fun cropByteArray(src: ByteArray, width: Int, height: Int, cropRect: Rect): ByteArray {
    // Round the crop coordinates down to even values, as 4:2:0 chroma subsampling requires
    val x = cropRect.left / 2 * 2
    val y = cropRect.top / 2 * 2
    val w = cropRect.width() / 2 * 2
    val h = cropRect.height() / 2 * 2
    val yUnit = w * h
    val uv = yUnit / 2
    val nData = ByteArray(yUnit + uv)
    val uvIndexDst = w * h - y / 2 * w
    val uvIndexSrc = width * height + x
    var srcPos0 = y * width
    var destPos0 = 0
    var uvSrcPos0 = uvIndexSrc
    var uvDestPos0 = uvIndexDst
    for (i in y until y + h) {
        System.arraycopy(src, srcPos0 + x, nData, destPos0, w) // y memory block copy
        srcPos0 += width
        destPos0 += w
        if (i and 1 == 0) {
            System.arraycopy(src, uvSrcPos0, nData, uvDestPos0, w) // uv memory block copy
            uvSrcPos0 += width
            uvDestPos0 += w
        }
    }
    return nData
}
Second crop fun I took from here:
https://www.programmersought.com/article/75461140907/
I would be glad if someone can help improve the code.
I'm still improving the way to do it, but this works for me for now:
CameraX crop image before sending to analyze
<androidx.constraintlayout.widget.ConstraintLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:paddingBottom="@dimen/_40sdp">

    <androidx.camera.view.PreviewView
        android:id="@+id/previewView"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintDimensionRatio="1:1"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
Cropping the image to 1:1 before passing it to analysis:
override fun onCaptureSuccess(image: ImageProxy) {
    super.onCaptureSuccess(image)
    var bitmap: Bitmap = imageProxyToBitmap(image)
    val dimension: Int = min(bitmap.width, bitmap.height)
    bitmap = ThumbnailUtils.extractThumbnail(bitmap, dimension, dimension)
    imageView.setImageBitmap(bitmap) // Here you can pass the cropped (center) image to analysis
    image.close()
}
Function for converting into a Bitmap:
private fun imageProxyToBitmap(image: ImageProxy): Bitmap {
    // Works for ImageCapture's JPEG output, where plane 0 holds the encoded bytes
    val buffer: ByteBuffer = image.planes[0].buffer
    val bytes = ByteArray(buffer.remaining())
    buffer.get(bytes)
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
You would use ImageProxy.setCropRect(Rect) to set the crop rectangle, and ImageProxy.cropRect to read it back.
For example, if you had an imageProxy, you would call imageProxy.setCropRect(rect) and then read imageProxy.cropRect.
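A minimal sketch of that idea (my own illustration; as the question above found, many consumers ignore cropRect, so you may still need to crop the pixel data yourself):

override fun analyze(imageProxy: ImageProxy) {
    // Mark a centered square as the region of interest
    val size = minOf(imageProxy.width, imageProxy.height)
    val left = (imageProxy.width - size) / 2
    val top = (imageProxy.height - size) / 2
    imageProxy.setCropRect(Rect(left, top, left + size, top + size))
    val roi = imageProxy.cropRect // downstream code reads the rect back here
    // ... crop the buffer to roi before analysis, as in the answers above
    imageProxy.close()
}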