How to run a channel-first TFLite model in Android

I am able to run my custom TFLite model in Android, but the output is completely wrong. I suspect this is because my model needs the input shape [1, 3, 640, 640] (channel-first), while the code fills the ByteBuffer in channel-last order. I created the tensor buffer with TensorBuffer.createFixedSize(intArrayOf(1, 3, 640, 640), DataType.FLOAT32), but I still suspect that inside the for loop the channels are not laid out correctly in the flat input ByteBuffer.
I copied this code from an example where the required model shape was [1, 32, 32, 3] (channel-last), which is the reason for my doubt.
Below is my code:
val model = YoloxPlate.newInstance(applicationContext)
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 3, 640, 640), DataType.FLOAT32)
val input = ByteBuffer.allocateDirect(640 * 640 * 3 * 4).order(ByteOrder.nativeOrder())
for (y in 0 until 640) {
    for (x in 0 until 640) {
        val px = bitmap.getPixel(x, y)
        // Get channel values from the pixel value.
        val r = Color.red(px)
        val g = Color.green(px)
        val b = Color.blue(px)
        // Normalize channel values to [-1.0, 1.0]. This requirement depends on the model.
        // For example, some models might require values to be normalized to the range
        // [0.0, 1.0] instead.
        val rf = r / 1f
        val gf = g / 1f
        val bf = b / 1f
        input.putFloat(bf)
        input.putFloat(gf)
        input.putFloat(rf)
    }
}
inputFeature0.loadBuffer(input)
val outputs = model.process(inputFeature0)
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
val flvals = outputFeature0.floatArray

After working it out on a whiteboard and setting the dimensions of the matrix manually, I figured it out. The model also expects the channels in planar BGR order (the full blue plane, then green, then red) instead of interleaved RGB.
It is working perfectly now. Here is the code (the multiple loops still need optimizing):
val model = YoloxPlate.newInstance(applicationContext)
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 3, 640, 640), DataType.FLOAT32)
val input = ByteBuffer.allocateDirect(640 * 640 * 3 * 4).order(ByteOrder.nativeOrder())
// Channel-first (planar) layout: write the full blue plane, then green, then red.
for (y in 0 until 640) {
    for (x in 0 until 640) {
        val px = bitmap.getPixel(x, y)
        val b = Color.blue(px)
        val bf = b / 1f
        input.putFloat(bf)
    }
}
for (y in 0 until 640) {
    for (x in 0 until 640) {
        val px = bitmap.getPixel(x, y)
        val g = Color.green(px)
        val gf = g / 1f
        input.putFloat(gf)
    }
}
for (y in 0 until 640) {
    for (x in 0 until 640) {
        val px = bitmap.getPixel(x, y)
        val r = Color.red(px)
        val rf = r / 1f
        input.putFloat(rf)
    }
}
inputFeature0.loadBuffer(input)
val outputs = model.process(inputFeature0)
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
val flvals = outputFeature0.floatArray
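Since the working version above still needs its loops optimized, here is a hedged sketch of a single-pass variant under the same assumptions (640×640 input, planar BGR order, raw 0–255 values, the same bitmap and inputFeature0); the names pixels, planeSize and floats are purely illustrative:
// Sketch only: fill the NCHW (planar BGR) buffer in one pass over the pixels.
val pixels = IntArray(640 * 640)
bitmap.getPixels(pixels, 0, 640, 0, 0, 640, 640)
val planeSize = 640 * 640 // floats per channel plane
val input = ByteBuffer.allocateDirect(planeSize * 3 * 4).order(ByteOrder.nativeOrder())
val floats = input.asFloatBuffer()
for (i in pixels.indices) {
    val px = pixels[i]
    floats.put(i, Color.blue(px).toFloat())                // B plane
    floats.put(planeSize + i, Color.green(px).toFloat())   // G plane
    floats.put(2 * planeSize + i, Color.red(px).toFloat()) // R plane
}
inputFeature0.loadBuffer(input)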

Related

MPAndroidChart - Piechart - custom label lines

I'm trying to draw the label lines as in the picture using MPAndroidChart with a pie chart. I can't figure out how to:
1. decouple the lines from the chart
2. draw that little circle at the beginning of the line.
Thank you.
This is by no means easy to achieve. To decouple the lines from the chart, you can use valueLinePart1OffsetPercentage and play with line part lengths. But to get the chart to draw dots at the end of lines, you need a custom renderer. Here's one:
class CustomPieChartRenderer(pieChart: PieChart, val circleRadius: Float)
    : PieChartRenderer(pieChart, pieChart.animator, pieChart.viewPortHandler) {

    override fun drawValues(c: Canvas) {
        super.drawValues(c)
        val center = mChart.centerCircleBox
        val radius = mChart.radius
        var rotationAngle = mChart.rotationAngle
        val drawAngles = mChart.drawAngles
        val absoluteAngles = mChart.absoluteAngles
        val phaseX = mAnimator.phaseX
        val phaseY = mAnimator.phaseY
        val roundedRadius = (radius - radius * mChart.holeRadius / 100f) / 2f
        val holeRadiusPercent = mChart.holeRadius / 100f
        var labelRadiusOffset = radius / 10f * 3.6f
        if (mChart.isDrawHoleEnabled) {
            labelRadiusOffset = (radius - radius * holeRadiusPercent) / 2f
            if (!mChart.isDrawSlicesUnderHoleEnabled && mChart.isDrawRoundedSlicesEnabled) {
                rotationAngle += roundedRadius * 360 / (Math.PI * 2 * radius).toFloat()
            }
        }
        val labelRadius = radius - labelRadiusOffset
        val dataSets = mChart.data.dataSets
        var angle: Float
        var xIndex = 0
        c.save()
        for (i in dataSets.indices) {
            val dataSet = dataSets[i]
            val sliceSpace = getSliceSpace(dataSet)
            for (j in 0 until dataSet.entryCount) {
                angle = if (xIndex == 0) 0f else absoluteAngles[xIndex - 1] * phaseX
                val sliceAngle = drawAngles[xIndex]
                val sliceSpaceMiddleAngle = sliceSpace / (Utils.FDEG2RAD * labelRadius)
                angle += (sliceAngle - sliceSpaceMiddleAngle / 2f) / 2f
                if (dataSet.valueLineColor != ColorTemplate.COLOR_NONE) {
                    val transformedAngle = rotationAngle + angle * phaseY
                    val sliceXBase = cos(transformedAngle * Utils.FDEG2RAD.toDouble()).toFloat()
                    val sliceYBase = sin(transformedAngle * Utils.FDEG2RAD.toDouble()).toFloat()
                    val valueLinePart1OffsetPercentage = dataSet.valueLinePart1OffsetPercentage / 100f
                    val line1Radius = if (mChart.isDrawHoleEnabled) {
                        (radius - radius * holeRadiusPercent) * valueLinePart1OffsetPercentage + radius * holeRadiusPercent
                    } else {
                        radius * valueLinePart1OffsetPercentage
                    }
                    val px = line1Radius * sliceXBase + center.x
                    val py = line1Radius * sliceYBase + center.y
                    if (dataSet.isUsingSliceColorAsValueLineColor) {
                        mRenderPaint.color = dataSet.getColor(j)
                    }
                    c.drawCircle(px, py, circleRadius, mRenderPaint)
                }
                xIndex++
            }
        }
        MPPointF.recycleInstance(center)
        c.restore()
    }
}
This custom renderer extends the default pie chart renderer. I basically just copied the code from PieChartRenderer.drawValues method, converted it to Kotlin, and removed everything that wasn't needed. I only kept the logic needed to determine the position of the points at the end of lines.
I tried to reproduce the image you showed:
val chart: PieChart = view.findViewById(R.id.pie_chart)
chart.setExtraOffsets(40f, 0f, 40f, 0f)

// Custom renderer used to add dots at the end of value lines.
chart.renderer = CustomPieChartRenderer(chart, 10f)

val dataSet = PieDataSet(listOf(
    PieEntry(40f),
    PieEntry(10f),
    PieEntry(10f),
    PieEntry(15f),
    PieEntry(10f),
    PieEntry(5f),
    PieEntry(5f),
    PieEntry(5f)
), "Pie chart")

// Chart colors
val colors = listOf(
    Color.parseColor("#4777c0"),
    Color.parseColor("#a374c6"),
    Color.parseColor("#4fb3e8"),
    Color.parseColor("#99cf43"),
    Color.parseColor("#fdc135"),
    Color.parseColor("#fd9a47"),
    Color.parseColor("#eb6e7a"),
    Color.parseColor("#6785c2"))
dataSet.colors = colors
dataSet.setValueTextColors(colors)

// Value lines
dataSet.valueLinePart1Length = 0.6f
dataSet.valueLinePart2Length = 0.3f
dataSet.valueLineWidth = 2f
dataSet.valueLinePart1OffsetPercentage = 115f // Line starts outside of chart
dataSet.isUsingSliceColorAsValueLineColor = true

// Value text appearance
dataSet.yValuePosition = PieDataSet.ValuePosition.OUTSIDE_SLICE
dataSet.valueTextSize = 16f
dataSet.valueTypeface = Typeface.DEFAULT_BOLD

// Value formatting
dataSet.valueFormatter = object : ValueFormatter() {
    private val formatter = NumberFormat.getPercentInstance()
    override fun getFormattedValue(value: Float) =
        formatter.format(value / 100f)
}
chart.setUsePercentValues(true)
dataSet.selectionShift = 3f

// Hole
chart.isDrawHoleEnabled = true
chart.holeRadius = 50f

// Center text
chart.setDrawCenterText(true)
chart.setCenterTextSize(20f)
chart.setCenterTextTypeface(Typeface.DEFAULT_BOLD)
chart.setCenterTextColor(Color.parseColor("#222222"))
chart.centerText = "Center\ntext"

// Disable legend & description
chart.legend.isEnabled = false
chart.description = null

chart.data = PieData(dataSet)
Again, not very straightforward. I hope you like Kotlin! You can move most of that configuration code to a subclass if you need it often. Here's the result:
I'm not an MPAndroidChart expert. In fact, I've used it only once, and that was two years ago. But if you do your research, you can usually find a solution. Luckily, MPAndroidChart is very customizable.

How can I convert Tensor into Bitmap on PyTorch Mobile?

I found this solution (https://itnext.io/converting-pytorch-float-tensor-to-android-rgba-bitmap-with-kotlin-ffd4602a16b6), but when I tried converting that way I found that the size of inputTensor.dataAsFloatArray is larger than bitmap.width * bitmap.height. How does converting a tensor to a float array work, and is there any other possible way to convert a PyTorch tensor to a Bitmap?
val inputTensor = TensorImageUtils.bitmapToFloat32Tensor(
    bitmap,
    TensorImageUtils.TORCHVISION_NORM_MEAN_RGB,
    TensorImageUtils.TORCHVISION_NORM_STD_RGB
)
// Float array size is 196608, while width * height is 256 * 256 = 65536
val res = floatArrayToGrayscaleBitmap(inputTensor.dataAsFloatArray, bitmap.width, bitmap.height)

fun floatArrayToGrayscaleBitmap(
    floatArray: FloatArray,
    width: Int,
    height: Int,
    alpha: Byte = (255).toByte(),
    reverseScale: Boolean = false
): Bitmap {
    // Create empty bitmap in RGBA format (even though it says ARGB, the channel order is RGBA)
    val bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val byteBuffer = ByteBuffer.allocate(width * height * 4)
    Log.d("App", floatArray.size.toString() + " " + (width * height * 4).toString())
    // Map the smallest value to 0 and the largest value to 255
    val maxValue = floatArray.max() ?: 1.0f
    val minValue = floatArray.min() ?: 0.0f
    val delta = maxValue - minValue
    var tempValue: Byte
    // Define whether float min..max will be mapped to 0..255 or 255..0
    val conversion = when (reverseScale) {
        false -> { v: Float -> ((v - minValue) / delta * 255).toByte() }
        true -> { v: Float -> (255 - (v - minValue) / delta * 255).toByte() }
    }
    // Copy each value from the float array to the RGB channels and set the alpha channel
    floatArray.forEachIndexed { i, value ->
        tempValue = conversion(value)
        byteBuffer.put(4 * i, tempValue)
        byteBuffer.put(4 * i + 1, tempValue)
        byteBuffer.put(4 * i + 2, tempValue)
        byteBuffer.put(4 * i + 3, alpha)
    }
    bmp.copyPixelsFromBuffer(byteBuffer)
    return bmp
}
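To clarify the size mismatch: bitmapToFloat32Tensor returns a channel-first (CHW) tensor, so dataAsFloatArray holds three planes of width * height values each (the full R plane, then G, then B), assuming the default MemoryFormat.CONTIGUOUS. A minimal sketch of indexing one channel value; the helper name chwValue is purely illustrative:
// Illustrative helper: read channel `channel` (0 = R, 1 = G, 2 = B) of pixel (x, y)
// from a CHW float array of a width x height image.
fun chwValue(data: FloatArray, x: Int, y: Int, channel: Int, width: Int, height: Int): Float =
    data[channel * width * height + y * width + x]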
None of the answers were able to produce the output I wanted, so this is what I came up with. It is basically just a reverse-engineered version of what happens in TensorImageUtils.bitmapToFloat32Tensor().
Please note that this function only works if you are using MemoryFormat.CONTIGUOUS (the default) in TensorImageUtils.bitmapToFloat32Tensor().
fun tensor2Bitmap(input: FloatArray, width: Int, height: Int, normMeanRGB: FloatArray, normStdRGB: FloatArray): Bitmap? {
    val pixelsCount = height * width
    val pixels = IntArray(pixelsCount)
    val output = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val conversion = { v: Float -> (v.coerceIn(0.0f, 1.0f) * 255.0f).roundToInt() }
    val offset_g = pixelsCount
    val offset_b = 2 * pixelsCount
    for (i in 0 until pixelsCount) {
        val r = conversion(input[i] * normStdRGB[0] + normMeanRGB[0])
        val g = conversion(input[i + offset_g] * normStdRGB[1] + normMeanRGB[1])
        val b = conversion(input[i + offset_b] * normStdRGB[2] + normMeanRGB[2])
        pixels[i] = 255 shl 24 or (r and 0xff shl 16) or (g and 0xff shl 8) or (b and 0xff)
    }
    output.setPixels(pixels, 0, width, 0, 0, width, height)
    return output
}
Example usage then could be as follows:
tensor2Bitmap(outputTensor.dataAsFloatArray, bitmap.width, bitmap.height, TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB)
I faced the same problem, and I found that the function TensorImageUtils.bitmapToFloat32Tensor() itself mangles the RGB colorspace. For now, you should convert the YUV frame to a Bitmap yourself and pass that Bitmap to TensorImageUtils.bitmapToFloat32Tensor() instead.
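In case it helps, a minimal sketch of that workaround, assuming the frame is available as an NV21 byte array (for example from the camera preview); nv21ToBitmap is an illustrative helper, not part of the PyTorch Android API:
// Illustrative helper: convert an NV21 frame to a Bitmap via YuvImage/JPEG,
// then pass the Bitmap to TensorImageUtils.bitmapToFloat32Tensor.
// Requires android.graphics.{Bitmap, BitmapFactory, ImageFormat, Rect, YuvImage}
// and java.io.ByteArrayOutputStream.
fun nv21ToBitmap(nv21: ByteArray, width: Int, height: Int): Bitmap {
    val yuvImage = YuvImage(nv21, ImageFormat.NV21, width, height, null)
    val out = ByteArrayOutputStream()
    yuvImage.compressToJpeg(Rect(0, 0, width, height), 100, out)
    val jpegBytes = out.toByteArray()
    return BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.size)
}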
I modified the code from phillies (above) to get a colorful bitmap. Note that the format of an output tensor is typically NCHW.
Here is my function in Kotlin; hopefully it works in your case:
private fun floatArrayToBitmap(floatArray: FloatArray, width: Int, height: Int): Bitmap {
    // Create empty bitmap in ARGB format
    val bmp: Bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val pixels = IntArray(width * height)
    // Map the smallest value to 0 and the largest value to 255
    val maxValue = floatArray.max() ?: 1.0f
    val minValue = floatArray.min() ?: -1.0f
    val delta = maxValue - minValue
    val conversion = { v: Float -> ((v - minValue) / delta * 255.0f).roundToInt() }
    // Copy each value from the float array (CHW planes) to the RGB channels
    for (i in 0 until width * height) {
        val r = conversion(floatArray[i])
        val g = conversion(floatArray[i + width * height])
        val b = conversion(floatArray[i + 2 * width * height])
        pixels[i] = rgb(r, g, b) // you might need an import for rgb()
    }
    bmp.setPixels(pixels, 0, width, 0, 0, width, height)
    return bmp
}
Hopefully future releases of PyTorch Mobile will fix this bug.

Android Path Not Closing, I can't fill with color

I have a custom squircle Android view. I am drawing a Path, but I cannot fill the path with color.
Stack Overflow is asking me to provide more detail, but I don't think I can explain this better. All I need is to fill the path in code. If you need to see the whole class, let me know.
init {
    val typedValue = TypedValue()
    val theme = context!!.theme
    theme.resolveAttribute(R.attr.colorAccentTheme, typedValue, true)
    paint.color = typedValue.data
    paint.strokeWidth = 6f
    paint.style = Paint.Style.FILL_AND_STROKE
    paint.isAntiAlias = true
    shapePadding = 10f
}

open fun onLayoutInit() {
    val hW = (this.measuredW / 2) - shapePadding
    val hH = (this.measuredH / 2) - shapePadding
    /*
    Returns a series of vectors along the path
    of the squircle
    */
    points = Array(360) { i ->
        val angle = toRadians(i.toDouble())
        val x = pow(abs(cos(angle)), corners) * hW * sgn(cos(angle))
        val y = pow(abs(sin(angle)), corners) * hH * sgn(sin(angle))
        Pair(x.toFloat(), y.toFloat())
    }
    /*
    Match the path to the points
    */
    for (i in 0..points.size - 2) {
        val p1 = points[i]
        val p2 = points[i + 1]
        path.moveTo(p1.first, p1.second)
        path.lineTo(p2.first, p2.second)
    }
    /*
    Finish closing the path's points
    */
    val fst = points[0]
    val lst = points[points.size - 1]
    path.moveTo(lst.first, lst.second)
    path.lineTo(fst.first, fst.second)
    path.fillType = Path.FillType.EVEN_ODD
    path.close()
    postInvalidate()
}

override fun onDraw(canvas: Canvas?) {
    canvas?.save()
    canvas?.translate(measuredW / 2, measuredH / 2)
    canvas?.drawPath(path, paint)
    canvas?.restore()
    super.onDraw(canvas)
}
}
The path stroke works fine, but I can't fill it with color.
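A hedged guess at the cause: each loop iteration calls moveTo before lineTo, so the Path ends up as hundreds of disconnected single-segment contours, and a lone line segment encloses no area to fill. A sketch of building one connected, closed contour from the same points array instead:
// Sketch: open the contour once, connect all points with lineTo, then close it.
path.reset()
path.moveTo(points[0].first, points[0].second)
for (i in 1 until points.size) {
    path.lineTo(points[i].first, points[i].second)
}
path.close() // joins the last point back to the first, giving a fillable outline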

Inaccurate prediction in MLKit custom model

I'm trying to use a retrained MobileNet model to predict dog breeds, but when the model is used through Firebase ML Kit it fails to predict the breed correctly. The desktop model and the TFLite model (run on desktop) both predict the breed correctly: with the same image of a pug they are 87.8% confident it is a pug, whereas through ML Kit the confidence is only 1.47×10⁻²%.
I suspect the issue is in my preprocessing of the image in the app code. The docs show how to scale the pixels to the range [-1.0, 1.0], which according to the code of the Keras image preprocessing function is what is required.
Here is my infer(iStream) function, where I think the error may lie. Any help is greatly appreciated; this is driving me crazy.
private fun infer(iStream: InputStream?) {
    Log.d("ML_TAG", "infer")
    val bmp = Bitmap.createScaledBitmap(BitmapFactory.decodeStream(iStream), 224, 224, true)
    i.setImageBitmap(bmp)
    val bNum = 0
    val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }
    for (x in 0..223) {
        for (y in 0..223) {
            val px = bmp.getPixel(x, y)
            input[bNum][x][y][0] = (Color.red(px) - 127) / 255.0f
            input[bNum][x][y][1] = (Color.green(px) - 127) / 255.0f
            input[bNum][x][y][2] = (Color.blue(px) - 127) / 255.0f
        }
    }
    val inputs = FirebaseModelInputs.Builder()
        .add(input)
        .build()
    interpreter.run(inputs, ioOpts)
        .addOnSuccessListener { res ->
            val o = res.getOutput<kotlin.Array<FloatArray>>(0)
            val prob = o[0]
            val r = BufferedReader(InputStreamReader(assets.open("retrained_labels.txt")))
            val arrToSort = arrayListOf<Pair<String, Float>>()
            val rArr = r.readLines()
            for (i in prob.indices) {
                val p = Pair(rArr[i], prob[i])
                arrToSort.add(p)
            }
            val sortedList = arrToSort.sortedWith(compareByDescending { it.second })
            val topFive = sortedList.slice(0..4)
            arrToSort.forEach {
                if (it.first == "pug") {
                    Log.i("ML_TAG", "Pug: ${it.second}")
                }
            }
            sortedList.forEach {
                if (it.first == "pug") {
                    Log.i("ML_TAG", "Pug: ${it.second}")
                }
            }
            topFive.forEach {
                Log.i("ML_TAG", "${it.first}: ${it.second}")
            }
        }
        .addOnFailureListener { res ->
            Log.e("ML_TAG", res.message)
        }
}
I think (Color.red(px) - 127) / 255.0f scales the values to roughly [-0.5, 0.5] rather than [-1.0, 1.0]. Does (Color.red(px) - 127) / 128.0f produce better results?
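For concreteness, a sketch of that suggested change inside the preprocessing loop, assuming the retrained MobileNet really does expect inputs in [-1.0, 1.0]:
// Scale each channel to roughly [-1.0, 1.0] instead of roughly [-0.5, 0.5].
input[bNum][x][y][0] = (Color.red(px) - 127) / 128.0f
input[bNum][x][y][1] = (Color.green(px) - 127) / 128.0f
input[bNum][x][y][2] = (Color.blue(px) - 127) / 128.0f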

Android monospaced font size

I am building a graphical code editor where I can modify constants by dragging them.
I want to highlight the commands in the code with blue rectangles whose left and right borders lie in the middle of characters, but the blue rectangles are still misaligned in some cases:
My idea is to first compute the character width and the inter-character space, and then multiply them by the position of my command in the text.
val mCodePaint = new TextPaint()
mCodePaint.setTypeface(Typeface.MONOSPACE)
mCodePaint.setAntiAlias(true)
mCodePaint.setSubpixelText(true)
mCodePaint.setColor(0xFF000000)

val dimText = new Rect()
val dimText1 = new Rect()
val dimText2 = new Rect()
final val s1 = "WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW"
final val s2 = "WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW"

// dimText1.width() = char_length * s1.length + space_between_chars * (s1.length - 1)
// dimText2.width() = char_length * s2.length + space_between_chars * (s2.length - 1)
def getCharWidth(): Float = {
  mCodePaint.getTextBounds(s1, 0, s1.length, dimText1)
  mCodePaint.getTextBounds(s2, 0, s2.length, dimText2)
  (dimText2.width() * (s1.length - 1) - dimText1.width() * (s2.length - 1)) / (s1.length - s2.length)
}

def getIntercharWidth(): Float = {
  mCodePaint.getTextBounds(s1, 0, s1.length, dimText1)
  mCodePaint.getTextBounds(s2, 0, s2.length, dimText2)
  (dimText1.width * s2.length - dimText2.width * s1.length) / (s1.length - s2.length)
}
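// Derivation of the two helpers above, assuming every character has the same width c
// and every gap the same width s, with w1/w2 the measured widths of s1/s2 and n1/n2 their lengths:
//   w1 = c*n1 + s*(n1 - 1)
//   w2 = c*n2 + s*(n2 - 1)
// Solving this 2x2 system gives
//   c = (w2*(n1 - 1) - w1*(n2 - 1)) / (n1 - n2)   // getCharWidth
//   s = (w1*n2 - w2*n1) / (n1 - n2)               // getIntercharWidth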
// The main function that draws the text
def drawRuleCode(canvas: Canvas, ...): Unit = {
  val char_width = getCharWidth()       // At run time, equals 29
  val space_width = getIntercharWidth() // At run time, equals -10
  for (action <- ...) {
    ...
    val column = action.column
    val length = action.length
    val x1 = left_x + 8 + column * char_width + (column - 1) * space_width - 0.5f * space_width
    val x2 = x1 + length * char_width + (length - 1) * space_width + 1 * space_width
    rectFData.set(x1, y1, x2, y2)
    canvas.drawRoundRect(rectFData, 5, 5, selectPaint)
  }
  for (line <- ...) {
    ...
    canvas.drawText(s, left_x + 8, ..., mCodePaint)
  }
}
Do you have any idea how to overcome this small alignment problem? Sometimes it makes a huge difference, especially when the expression is long.
EDIT: I drew the computed text bounds, and they are actually wrong: the rendered text is slightly wider than the rectangle returned by getTextBounds (violet line).
Instead of using getTextBounds, I now measure with measureText at a text size adjusted to the current canvas scale, because the rendered width does not scale exactly linearly with the canvas:
Explanation here
val c = new Matrix()
val c_array = new Array[Float](9)

// The main function that draws the text
def drawRuleCode(canvas: Canvas, ...): Unit = {
  canvas.getMatrix(c)
  c.getValues(c_array)
  val scale = c_array(Matrix.MSCALE_X) // Current canvas scale
  val box_width = getBoxWidth(scale)
  for (action <- ...) {
    ...
    val column = action.column
    val length = action.length
    val x1 = left_x + 8 + column * box_width
    val x2 = x1 + length * box_width
    rectFData.set(x1, y1, x2, y2)
    canvas.drawRoundRect(rectFData, 5, 5, selectPaint)
  }
}

def getBoxWidth(scale: Float): Float = {
  mCodePaint.setTextSize(fontSize * scale)
  val result = mCodePaint.measureText(s1).toFloat / s1.length / scale
  mCodePaint.setTextSize(fontSize)
  result
}
