How to render a Bitmap off-screen in Android using OpenGL?

I need to render a bitmap without displaying it on the screen. For that I create an OpenGL context using EGL14 as described in this answer. Then I save the OpenGL surface to a bitmap using GLES20.glReadPixels. But for some reason it is not rendered as expected and is just transparent.
import android.graphics.Bitmap
import android.opengl.*
import android.opengl.EGL14.EGL_CONTEXT_CLIENT_VERSION
import java.nio.ByteBuffer
class Renderer {
private lateinit var display: EGLDisplay
private lateinit var surface: EGLSurface
private lateinit var eglContext: EGLContext
fun draw() {
// Just a stub that fills the bitmap with red color
GLES20.glClearColor(1f, 0f, 0f, 1f)
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
}
fun saveBitmap(): Bitmap {
val width = 320
val height = 240
val mPixelBuf = ByteBuffer.allocate(width * height * 4)
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, mPixelBuf)
return Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
}
private fun initializeEglContext() {
display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
if (display == EGL14.EGL_NO_DISPLAY) {
throw RuntimeException("eglGetDisplay failed ${EGL14.eglGetError()}")
}
val versions = IntArray(2)
if (!EGL14.eglInitialize(display, versions, 0, versions, 1)) {
throw RuntimeException("eglInitialize failed ${EGL14.eglGetError()}")
}
val configAttr = intArrayOf(
EGL14.EGL_COLOR_BUFFER_TYPE, EGL14.EGL_RGB_BUFFER,
EGL14.EGL_LEVEL, 0,
EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
EGL14.EGL_NONE
)
val configs: Array<EGLConfig?> = arrayOfNulls(1)
val numConfig = IntArray(1)
EGL14.eglChooseConfig(
display, configAttr, 0,
configs, 0, 1, numConfig, 0
)
if (numConfig[0] == 0) {
throw RuntimeException("No configs found")
}
val config: EGLConfig? = configs[0]
val surfAttr = intArrayOf(
EGL14.EGL_WIDTH, 320,
EGL14.EGL_HEIGHT, 240,
EGL14.EGL_NONE
)
surface = EGL14.eglCreatePbufferSurface(display, config, surfAttr, 0)
val contextAttrib = intArrayOf(
EGL_CONTEXT_CLIENT_VERSION, 2,
EGL14.EGL_NONE
)
eglContext = EGL14.eglCreateContext(display, config, EGL14.EGL_NO_CONTEXT, contextAttrib, 0)
EGL14.eglMakeCurrent(display, surface, surface, eglContext)
}
fun destroy() {
EGL14.eglMakeCurrent(display, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE,
EGL14.EGL_NO_CONTEXT)
EGL14.eglDestroySurface(display, surface)
EGL14.eglDestroyContext(display, eglContext)
EGL14.eglTerminate(display)
}
}
This is how I use it:
val renderer = Renderer()
renderer.initializeEglContext()
renderer.draw()
val bitmap = renderer.saveBitmap()
renderer.destroy()
The code runs without any errors. I checked that the context is created successfully. For example, GLES20.glCreateProgram works as expected and returns a valid id. The only warning I get is
W/OpenGLRenderer: Failed to choose config with EGL_SWAP_BEHAVIOR_PRESERVED, retrying without...
But I'm not sure if it affects the result in any way.
However, the bitmap is not filled with color and is transparent:
val color = bitmap[0, 0]
Log.d("Main", "onCreate: ${Color.valueOf(color)}")
Color(0.0, 0.0, 0.0, 0.0, sRGB IEC61966-2.1)
I guess that I'm missing something, but I can't figure out what. How do I make it actually render?

The pixel buffer must be copied into the bitmap:
val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
bitmap.copyPixelsFromBuffer(mPixelBuf)
return bitmap
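Putting it together, a corrected saveBitmap() could look like the sketch below. Switching to a direct buffer and rewinding it before the copy are assumptions on my part; they are common precautions when reading pixels with glReadPixels rather than something the answer above strictly requires:
fun saveBitmap(): Bitmap {
    val width = 320
    val height = 240
    // glReadPixels is usually given a direct buffer
    val mPixelBuf = ByteBuffer.allocateDirect(width * height * 4)
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, mPixelBuf)
    // Make sure the buffer is read from the beginning
    mPixelBuf.rewind()
    val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    bitmap.copyPixelsFromBuffer(mPixelBuf)
    return bitmap
}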

Related

Kotlin Jetpack Compose: Display ByteArray or FileStream as Image in Android

Heyho,
I'm quite new to Kotlin, but I came across this problem:
I have a library function that generates an image (a QR code). Now I'd like to display that image, but I have no idea how. The documentation only explains how to save the image locally, and I'm not really interested in saving it. I can get the image either as a FileStream or as a ByteArray. Is there any possibility to display either of these as an Image in the UI?
An Example:
@Composable
fun QrCode(stand: String) {
Text(text = "QR-Code:", fontSize = 16.sp)
//? Image(QRCode(stand).render().getBytes()) // this obviously won't work
}
Any ideas?
In order to display an Image, you can refer to this:
@Composable
fun BitmapImage(bitmap: Bitmap) {
Image(
bitmap = bitmap.asImageBitmap(),
contentDescription = "some useful description",
)
}
So what remains is to find a way to convert your target input into a Bitmap.
If you have the ByteArray of the image file, you can refer to this:
fun convertImageByteArrayToBitmap(imageData: ByteArray): Bitmap {
return BitmapFactory.decodeByteArray(imageData, 0, imageData.size)
}
If you just have the QR string, you can refer to this to convert it to a Bitmap:
fun encodeAsBitmap(source: String, width: Int, height: Int): Bitmap? {
val result: BitMatrix = try {
MultiFormatWriter().encode(source, BarcodeFormat.QR_CODE, width, height, null)
} catch (e: Exception) {
return null
}
val w = result.width
val h = result.height
val pixels = IntArray(w * h)
for (y in 0 until h) {
val offset = y * w
for (x in 0 until w) {
pixels[offset + x] = if (result[x, y]) Color.BLACK else Color.WHITE
}
}
val bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888)
bitmap.setPixels(pixels, 0, w, 0, 0, w, h)
return bitmap
}
And you need to add the dependency implementation 'com.google.zxing:core:3.3.1' if you decide to use the above code.
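Tying this back to the QrCode composable from the question, a minimal sketch could look like the following (the 512x512 size is arbitrary, and since encodeAsBitmap can return null the result is handled with a safe call):
@Composable
fun QrCode(stand: String) {
    Text(text = "QR-Code:", fontSize = 16.sp)
    // remember the encoding result so it is not recomputed on every recomposition
    val qrBitmap = remember(stand) { encodeAsBitmap(stand, 512, 512) }
    qrBitmap?.let { BitmapImage(bitmap = it) }
}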

Android: improving conversion speed of grayscale bitmap to monochromatic

How can I improve the speed of the following conversion?
For a grayscale bitmap of 384x524 pixels, this takes around 2.1 seconds on the target device.
fun convertToMonochromatic(bitmap: Bitmap): Bitmap {
val result = Bitmap.createBitmap(bitmap.width, bitmap.height, bitmap.config)
for (row in 0 until bitmap.height) {
for (col in 0 until bitmap.width) {
val hsv = FloatArray(3)
Color.colorToHSV(bitmap.getPixel(col, row), hsv)
if (hsv[2] > 0.7f) {
result.setPixel(col, row, Color.WHITE)
} else {
result.setPixel(col, row, Color.BLACK)
}
}
}
return result
}
Is there some "mass operation" that transforms all the pixels at once, based directly on the HSV Value in particular?
When iterating the pixels one by one like this, it seems the main time-consuming part is the colorToHSV function, which computes more information than just the needed Value.
Description of the transform: https://www.rapidtables.com/convert/color/rgb-to-hsv.html
After the following adjustment, the time needed for the given size is reduced from 2.1 seconds to 0.1 seconds. Since V = max(R, G, B) / 255, the original hsv[2] > 0.7f check roughly corresponds to max(R, G, B) > 178; the code below uses 175 as the threshold.
fun convertToMonochromatic(bitmap: Bitmap): Bitmap {
val result = Bitmap.createBitmap(bitmap.width, bitmap.height, bitmap.config)
val size = bitmap.width * bitmap.height
val input = IntArray(size)
val output = IntArray(size)
bitmap.getPixels(input, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
var current = 0
input.forEach {
val red = it shr 16 and 0xFF
val green = it shr 8 and 0xFF
val blue = it and 0xFF
val max = maxOf(red, green, blue)
if (max > 175) {
output[current++] = Color.WHITE
} else {
output[current++] = Color.BLACK
}
}
result.setPixels(output, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
return result
}

Cannot get OpenGL ES Android plugin to show drawn vertex array

I am attempting to write a Flutter plugin for Android to allow me to directly write pixels using a Texture, so I need to make a SurfaceTexture available, and I want to be able to draw arbitrary pixel data to it using a single textured quad. For now, for debugging, I am simply trying to draw a single cyan triangle over a magenta background to verify my vertices are being drawn correctly, but it appears they are not. The glClear call is doing what I expect, as the magenta background is being shown instead of the black color that would otherwise be behind it, and I can change that color by changing what I pass to glClearColor, so in some way, the texture is being rendered, but I see no evidence that calling glDrawArrays is accomplishing anything. The code containing all of my interfacing with OpenGL ES is in the file below, and the drawTextureToCurrentSurface method is where both glClear and glDrawArrays are being called:
class EglContext {
companion object {
// Pass through position and UV values
val vertexSource = """
#version 300 es
precision mediump float;
/*layout(location = 0)*/ in vec2 position;
/*layout(location = 1)*/ in vec2 uv;
out vec2 uvOut;
void main() {
gl_Position = vec4(position, -0.5, 1.0);
uvOut = uv;
}
""".trimIndent()
// Eventually get the texture value, for now, just make it cyan so I can see it
val fragmentSource = """
#version 300 es
precision mediump float;
in vec2 uvOut;
out vec4 fragColor;
uniform sampler2D tex;
void main() {
vec4 texel = texture(tex, uvOut);
// Effectively ignore the texel without optimizing it out
fragColor = texel * 0.0001 + vec4(0.0, 1.0, 1.0, 1.0);
}
""".trimIndent()
var glThread: HandlerThread? = null
var glHandler: Handler? = null
}
private var display = EGL14.EGL_NO_DISPLAY
private var context = EGL14.EGL_NO_CONTEXT
private var config: EGLConfig? = null
private var vertexBuffer: FloatBuffer
private var uvBuffer: FloatBuffer
//private var indexBuffer: IntBuffer
private var defaultProgram: Int = -1
private var uniformTextureLocation: Int = -1
private var vertexLocation: Int = -1
private var uvLocation: Int = -1
var initialized = false
private fun checkGlError(msg: String) {
val errCodeEgl = EGL14.eglGetError()
val errCodeGl = GLES30.glGetError()
if (errCodeEgl != EGL14.EGL_SUCCESS || errCodeGl != GLES30.GL_NO_ERROR) {
throw RuntimeException(
"$msg - $errCodeEgl(${GLU.gluErrorString(errCodeEgl)}) : $errCodeGl(${
GLU.gluErrorString(
errCodeGl
)
})"
)
}
}
init {
// Flat square
// Am I allocating and writing to these correctly?
val vertices = floatArrayOf(-1f, -1f, 1f, -1f, -1f, 1f, 1f, 1f)
vertexBuffer = ByteBuffer.allocateDirect(vertices.size * 4).asFloatBuffer().also {
it.put(vertices)
it.position(0)
}
val uv = floatArrayOf(0f, 0f, 1f, 0f, 0f, 1f, 1f, 1f)
uvBuffer = ByteBuffer.allocateDirect(uv.size * 4).asFloatBuffer().also {
it.put(uv)
it.position(0)
}
// Not being used until I can figure out what's currently not working
/*val indices = intArrayOf(0, 1, 2, 2, 1, 3)
indexBuffer = ByteBuffer.allocateDirect(indices.size * 4).asIntBuffer().also {
it.position(0)
it.put(indices)
it.position(0)
}*/
if (glThread == null) {
glThread = HandlerThread("flutterSoftwareRendererPlugin")
glThread!!.start()
glHandler = Handler(glThread!!.looper)
}
}
// Run OpenGL code on a separate thread to keep the context available
private fun doOnGlThread(blocking: Boolean = true, task: () -> Unit) {
val semaphore: Semaphore? = if (blocking) Semaphore(0) else null
glHandler!!.post {
task.invoke()
semaphore?.release()
}
semaphore?.acquire()
}
fun setup() {
doOnGlThread {
Log.d("Native", "Setting up EglContext")
display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
if (display == EGL14.EGL_NO_DISPLAY) {
Log.e("Native", "No display")
checkGlError("Failed to get display")
}
val versionBuffer = IntArray(2)
if (!EGL14.eglInitialize(display, versionBuffer, 0, versionBuffer, 1)) {
Log.e("Native", "Did not init")
checkGlError("Failed to initialize")
}
val configs = arrayOfNulls<EGLConfig>(1)
val configNumBuffer = IntArray(1)
var attrBuffer = intArrayOf(
EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_ALPHA_SIZE, 8,
EGL14.EGL_DEPTH_SIZE, 16,
//EGL14.EGL_STENCIL_SIZE, 8,
//EGL14.EGL_SAMPLE_BUFFERS, 1,
//EGL14.EGL_SAMPLES, 4,
EGL14.EGL_NONE
)
if (!EGL14.eglChooseConfig(
display,
attrBuffer,
0,
configs,
0,
configs.size,
configNumBuffer,
0
)
) {
Log.e("Native", "No config")
checkGlError("Failed to choose a config")
}
if (configNumBuffer[0] == 0) {
Log.e("Native", "No config")
checkGlError("Got zero configs")
}
Log.d("Native", "Got Config x${configNumBuffer[0]}: ${configs[0]}")
config = configs[0]
attrBuffer = intArrayOf(
EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE
)
context = EGL14.eglCreateContext(display, config, EGL14.EGL_NO_CONTEXT, attrBuffer, 0)
if (context == EGL14.EGL_NO_CONTEXT) {
Log.e("Native", "Failed to get any context")
checkGlError("Failed to get context")
}
Log.d("Native", "Context = $context\n 'Current' = ${EGL14.eglGetCurrentContext()}")
initialized = true
}
}
// Called by my plugin to get a surface to register for Texture widget
fun buildSurfaceTextureWindow(surfaceTexture: SurfaceTexture): EGLSurface {
var _surface: EGLSurface? = null
doOnGlThread {
val attribBuffer = intArrayOf(EGL14.EGL_NONE)
val surface =
EGL14.eglCreateWindowSurface(display, config, surfaceTexture, attribBuffer, 0)
if (surface == EGL14.EGL_NO_SURFACE) {
checkGlError("Obtained no surface")
}
EGL14.eglMakeCurrent(display, surface, surface, context)
Log.d("Native", "New current context = ${EGL14.eglGetCurrentContext()}")
if (defaultProgram == -1) {
defaultProgram = makeProgram(
mapOf(
GLES30.GL_VERTEX_SHADER to vertexSource,
GLES30.GL_FRAGMENT_SHADER to fragmentSource
)
)
uniformTextureLocation = GLES30.glGetUniformLocation(defaultProgram, "tex")
vertexLocation = GLES30.glGetAttribLocation(defaultProgram, "position")
uvLocation = GLES30.glGetAttribLocation(defaultProgram, "uv")
Log.d("Native", "Attrib locations $vertexLocation, $uvLocation")
checkGlError("Getting uniform")
}
_surface = surface
}
return _surface!!
}
fun makeCurrent(eglSurface: EGLSurface, width: Int, height: Int) {
doOnGlThread {
GLES30.glViewport(0, 0, width, height)
if (!EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context)) {
checkGlError("Failed to make surface current")
}
}
}
fun makeTexture(width: Int, height: Int): Int {
var _texture: Int? = null
doOnGlThread {
val intArr = IntArray(1)
GLES30.glGenTextures(1, intArr, 0)
checkGlError("Generate texture")
Log.d("Native", "${EGL14.eglGetCurrentContext()} ?= ${EGL14.EGL_NO_CONTEXT}")
val texture = intArr[0]
Log.d("Native", "Texture = $texture")
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, texture)
checkGlError("Bind texture")
val buffer = ByteBuffer.allocateDirect(width * height * 4)
GLES30.glTexImage2D(
GLES30.GL_TEXTURE_2D,
0,
GLES30.GL_RGBA,
width,
height,
0,
GLES30.GL_RGBA,
GLES30.GL_UNSIGNED_BYTE,
buffer
)
checkGlError("Create texture buffer")
_texture = texture
}
return _texture!!
}
private fun compileShader(source: String, shaderType: Int): Int {
val currentContext = EGL14.eglGetCurrentContext()
val noContext = EGL14.EGL_NO_CONTEXT
val shaderId = GLES30.glCreateShader(shaderType)
Log.d("Native", "Created $shaderId\nContext $currentContext vs $noContext")
checkGlError("Create shader")
if (shaderId == 0) {
Log.e("Native", "Could not create shader for some reason")
checkGlError("Could not create shader")
}
GLES30.glShaderSource(shaderId, source)
checkGlError("Setting shader source")
GLES30.glCompileShader(shaderId)
val statusBuffer = IntArray(1)
GLES30.glGetShaderiv(shaderId, GLES30.GL_COMPILE_STATUS, statusBuffer, 0)
val shaderLog = GLES30.glGetShaderInfoLog(shaderId)
Log.d("Native", "Compiling shader #$shaderId : $shaderLog")
if (statusBuffer[0] == 0) {
GLES30.glDeleteShader(shaderId)
checkGlError("Failed to compile shader $shaderId")
}
return shaderId
}
private fun makeProgram(sources: Map<Int, String>): Int {
val currentContext = EGL14.eglGetCurrentContext()
val noContext = EGL14.EGL_NO_CONTEXT
val program = GLES30.glCreateProgram()
Log.d("Native", "Created $program\nContext $currentContext vs $noContext")
checkGlError("Create program")
sources.forEach {
val shader = compileShader(it.value, it.key)
GLES30.glAttachShader(program, shader)
}
val linkBuffer = IntArray(1)
GLES30.glLinkProgram(program)
GLES30.glGetProgramiv(program, GLES30.GL_LINK_STATUS, linkBuffer, 0)
if (linkBuffer[0] == 0) {
GLES30.glDeleteProgram(program)
checkGlError("Failed to link program $program")
}
return program
}
// Called to actually draw to the surface. When fully implemented it should draw whatever is
// on the associated texture, but for now, to debug, I just want to verify I can draw vertices,
// but it seems I cannot?
fun drawTextureToCurrentSurface(texture: Int, surface: EGLSurface) {
doOnGlThread {
// Verify I have a context
val currentContext = EGL14.eglGetCurrentContext()
val noContext = EGL14.EGL_NO_CONTEXT
Log.d("Native", "Drawing, Context = $currentContext vs $noContext")
checkGlError("Just checking first")
GLES30.glClearColor(1f, 0f, 1f, 1f)
GLES30.glClearDepthf(1f)
GLES30.glDisable(GLES30.GL_DEPTH_TEST)
GLES30.glClear(GLES30.GL_COLOR_BUFFER_BIT or GLES30.GL_DEPTH_BUFFER_BIT)
checkGlError("Clearing")
GLES30.glUseProgram(defaultProgram)
checkGlError("Use program")
GLES30.glActiveTexture(GLES30.GL_TEXTURE0)
checkGlError("Activate texture 0")
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, texture)
checkGlError("Bind texture $texture")
GLES30.glUniform1i(uniformTextureLocation, 0)
checkGlError("Set uniform")
GLES30.glEnableVertexAttribArray(vertexLocation)
vertexBuffer.position(0)
GLES30.glVertexAttribPointer(vertexLocation, 2, GLES30.GL_FLOAT, false, 0, vertexBuffer)
Log.d("Native", "Bound vertices (shader=$defaultProgram)")
checkGlError("Attribute 0")
GLES30.glEnableVertexAttribArray(uvLocation)
uvBuffer.position(0)
GLES30.glVertexAttribPointer(uvLocation, 2, GLES30.GL_FLOAT, false, 0, uvBuffer)
checkGlError("Attribute 1")
//indexBuffer.position(0)
//GLES30.glDrawElements(GLES30.GL_TRIANGLES, 4, GLES30.GL_UNSIGNED_INT, indexBuffer)
// I would expect to get a triangle of different color than the background
GLES30.glDrawArrays(GLES30.GL_TRIANGLE_STRIP, 0, 3)
GLES30.glFinish()
checkGlError("Finished GL")
EGL14.eglSwapBuffers(display, surface)
checkGlError("Swapped buffers")
}
}
...currently unused other methods
}
The general flow of the above code is that the init block executes when initializing the context, of which there is only one. setup is called when the plugin is registered, and buildSurfaceTextureWindow is called when initializing a SurfaceTexture for a Flutter Texture. The first time this is called, it compiles the shaders. When the plugin wants to render the texture, it calls makeCurrent then drawTextureToCurrentSurface, which is where the magenta background becomes visible but without any cyan triangle. Calls to GL functions are done in a separate thread using doOnGlThread.
If you need to see all of the code including the full plugin implementation and example app using it, I have it on Github, but as far as I can tell the above code should be the only relevant region to not seeing any geometry rendered in the effectively hardcoded color from my fragment shader.
tl;dr My background color from glClear shows up on screen, but my expected result of calling glDrawArrays, a cyan triangle, does not, and I am trying to understand why.
Apparently I needed to call .order(ByteOrder.nativeOrder()) on my buffers. Without this, the vertex array data is not set up properly. Also, I needed to set glTexParameteri(GL_TEXTURE_2D, ...) for GL_TEXTURE_MIN/MAG_FILTER and GL_TEXTURE_WRAP_S/T. Without that, all textures render as black.
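Applied to the code above, the two fixes could look roughly like this (the GL_LINEAR and GL_CLAMP_TO_EDGE values are my assumptions; any valid filter and wrap parameters make the texture complete):
// In init: allocate the vertex/UV buffers in native byte order (import java.nio.ByteOrder)
vertexBuffer = ByteBuffer.allocateDirect(vertices.size * 4)
    .order(ByteOrder.nativeOrder())
    .asFloatBuffer()
    .also {
        it.put(vertices)
        it.position(0)
    }

// In makeTexture, after glBindTexture: set filtering and wrapping so the texture is complete
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_LINEAR)
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR)
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_CLAMP_TO_EDGE)
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_CLAMP_TO_EDGE)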

How can I convert Tensor into Bitmap on PyTorch Mobile?

I found this solution (https://itnext.io/converting-pytorch-float-tensor-to-android-rgba-bitmap-with-kotlin-ffd4602a16b6), but when I tried to convert that way I found that the size of inputTensor.dataAsFloatArray is larger than bitmap.width * bitmap.height. How does converting a tensor to a float array work, or is there any other possible way to convert a PyTorch tensor to a Bitmap?
val inputTensor = TensorImageUtils.bitmapToFloat32Tensor(
bitmap,
TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB
)
// Float array size is 196608 when width and height are 256x256 = 65536
val res = floatArrayToGrayscaleBitmap(inputTensor.dataAsFloatArray, bitmap.width, bitmap.height)
fun floatArrayToGrayscaleBitmap (
floatArray: FloatArray,
width: Int,
height: Int,
alpha :Byte = (255).toByte(),
reverseScale :Boolean = false
) : Bitmap {
// Create empty bitmap in RGBA format (the config says ARGB, but the in-memory channel order is RGBA)
val bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
val byteBuffer = ByteBuffer.allocate(width*height*4)
Log.d("App", floatArray.size.toString() + " " + (width * height * 4).toString())
// mapping smallest value to 0 and largest value to 255
val maxValue = floatArray.max() ?: 1.0f
val minValue = floatArray.min() ?: 0.0f
val delta = maxValue-minValue
var tempValue :Byte
// Define if float min..max will be mapped to 0..255 or 255..0
val conversion = when(reverseScale) {
false -> { v: Float -> ((v-minValue)/delta*255).toByte() }
true -> { v: Float -> (255-(v-minValue)/delta*255).toByte() }
}
// copy each value from float array to RGB channels and set alpha channel
floatArray.forEachIndexed { i, value ->
tempValue = conversion(value)
byteBuffer.put(4*i, tempValue)
byteBuffer.put(4*i+1, tempValue)
byteBuffer.put(4*i+2, tempValue)
byteBuffer.put(4*i+3, alpha)
}
bmp.copyPixelsFromBuffer(byteBuffer)
return bmp
}
None of the answers were able to produce the output I wanted, so this is what I came up with: it is basically just a reverse-engineered version of what happens in TensorImageUtils.bitmapToFloat32Tensor().
Please note that this function only works if you are using MemoryFormat.CONTIGUOUS (which is the default) in TensorImageUtils.bitmapToFloat32Tensor().
fun tensor2Bitmap(input: FloatArray, width: Int, height: Int, normMeanRGB: FloatArray, normStdRGB: FloatArray): Bitmap? {
val pixelsCount = height * width
val pixels = IntArray(pixelsCount)
val output = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
val conversion = { v: Float -> ((v.coerceIn(0.0f, 1.0f))*255.0f).roundToInt()}
val offset_g = pixelsCount
val offset_b = 2 * pixelsCount
for (i in 0 until pixelsCount) {
val r = conversion(input[i] * normStdRGB[0] + normMeanRGB[0])
val g = conversion(input[i + offset_g] * normStdRGB[1] + normMeanRGB[1])
val b = conversion(input[i + offset_b] * normStdRGB[2] + normMeanRGB[2])
pixels[i] = 255 shl 24 or (r.toInt() and 0xff shl 16) or (g.toInt() and 0xff shl 8) or (b.toInt() and 0xff)
}
output.setPixels(pixels, 0, width, 0, 0, width, height)
return output
}
Example usage then could be as follows:
tensor2Bitmap(outputTensor.dataAsFloatArray, bitmap.width, bitmap.height, TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB)
I faced the same problem, and I found that the function TensorImageUtils.bitmapToFloat32Tensor() itself distorts the RGB colorspace. You should try to convert YUV to a bitmap and use TensorImageUtils.bitmapToFloat32Tensor on that instead for now.
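As a rough illustration of that workaround, here is a sketch of converting an NV21 YUV frame into a Bitmap first (the nv21 array, frame dimensions, and JPEG quality of 90 are placeholders):
fun nv21ToBitmap(nv21: ByteArray, width: Int, height: Int): Bitmap {
    // Compress the raw YUV frame to JPEG, then decode it back into a Bitmap
    val yuvImage = YuvImage(nv21, ImageFormat.NV21, width, height, null)
    val out = ByteArrayOutputStream()
    yuvImage.compressToJpeg(Rect(0, 0, width, height), 90, out)
    val jpegBytes = out.toByteArray()
    return BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.size)
}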
I modified the code from phillies (above) to get a colorful bitmap. Note that the format of an output tensor is typically NCHW.
Here's my function in Kotlin. Hopefully it works in your case:
private fun floatArrayToBitmap(floatArray: FloatArray, width: Int, height: Int) : Bitmap {
// Create empty bitmap in ARGB format
val bmp: Bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
val pixels = IntArray(width * height)
// mapping smallest value to 0 and largest value to 255
val maxValue = floatArray.max() ?: 1.0f
val minValue = floatArray.min() ?: -1.0f
val delta = maxValue-minValue
// Define if float min..max will be mapped to 0..255 or 255..0
val conversion = { v: Float -> ((v-minValue)/delta*255.0f).roundToInt()}
// copy each value from float array to RGB channels
for (i in 0 until width * height) {
val r = conversion(floatArray[i])
val g = conversion(floatArray[i+width*height])
val b = conversion(floatArray[i+2*width*height])
pixels[i] = rgb(r, g, b) // you might need to import rgb(), e.g. android.graphics.Color.rgb
}
bmp.setPixels(pixels, 0, width, 0, 0, width, height)
return bmp
}
Hopefully future releases of PyTorch Mobile will fix this bug.

Converting a Bitmap to a WebRTC VideoFrame

I'm working on a WebRTC-based app for Android using the native implementation (org.webrtc:google-webrtc:1.0.24064), and I need to send a series of bitmaps along with the camera stream.
From what I understood, I can derive from org.webrtc.VideoCapturer and do my rendering in a separate thread, and send video frames to the observer; however it expects them to be YUV420 and I'm not sure I'm doing the correct conversion.
This is what I currently have: CustomCapturer.java
Are there any examples I can look at for doing this kind of thing? Thanks.
YuvConverter yuvConverter = new YuvConverter();
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,GLES20.GL_TEXTURE_MIN_FILTER,GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
TextureBufferImpl buffer = new TextureBufferImpl(bitmap.getWidth(), bitmap.getHeight(), VideoFrame.TextureBuffer.Type.RGB, textures[0], new Matrix(), textureHelper.getHandler(), yuvConverter, null);
VideoFrame.I420Buffer i420Buf = yuvConverter.convert(buffer);
VideoFrame CONVERTED_FRAME = new VideoFrame(i420Buf, 180, videoFrame.getTimestampNs());
I've tried rendering it manually with GL as in Yang's answer, but that ended up with some tearing and framerate issues when dealing with a stream of images.
Instead, I've found that the SurfaceTextureHelper class helps simplify things quite a bit, as you can also use regular canvas drawing to render the bitmap into a VideoFrame. I'm guessing it still uses GL under the hood, as the performance was otherwise comparable. Here's an example VideoCapturer that takes in arbitrary bitmaps and outputs the captured frames to its observer:
import android.content.Context
import android.graphics.Bitmap
import android.graphics.Matrix
import android.graphics.Paint
import android.os.Build
import android.view.Surface
import org.webrtc.CapturerObserver
import org.webrtc.SurfaceTextureHelper
import org.webrtc.VideoCapturer
/**
* A [VideoCapturer] that can be manually driven by passing in [Bitmap].
*
* Once [startCapture] is called, call [pushBitmap] to render images as video frames.
*/
open class BitmapFrameCapturer : VideoCapturer {
private var surfaceTextureHelper: SurfaceTextureHelper? = null
private var capturerObserver: CapturerObserver? = null
private var disposed = false
private var rotation = 0
private var width = 0
private var height = 0
private val stateLock = Any()
private var surface: Surface? = null
override fun initialize(
surfaceTextureHelper: SurfaceTextureHelper,
context: Context,
observer: CapturerObserver,
) {
synchronized(stateLock) {
this.surfaceTextureHelper = surfaceTextureHelper
this.capturerObserver = observer
surface = Surface(surfaceTextureHelper.surfaceTexture)
}
}
private fun checkNotDisposed() {
check(!disposed) { "Capturer is disposed." }
}
override fun startCapture(width: Int, height: Int, framerate: Int) {
synchronized(stateLock) {
checkNotDisposed()
checkNotNull(surfaceTextureHelper) { "BitmapFrameCapturer must be initialized before calling startCapture." }
capturerObserver?.onCapturerStarted(true)
surfaceTextureHelper?.startListening { frame -> capturerObserver?.onFrameCaptured(frame) }
}
}
override fun stopCapture() {
synchronized(stateLock) {
surfaceTextureHelper?.stopListening()
capturerObserver?.onCapturerStopped()
}
}
override fun changeCaptureFormat(width: Int, height: Int, framerate: Int) {
// Do nothing.
// These attributes are driven by the bitmaps fed in.
}
override fun dispose() {
synchronized(stateLock) {
if (disposed) {
return
}
stopCapture()
surface?.release()
disposed = true
}
}
override fun isScreencast(): Boolean = false
fun pushBitmap(bitmap: Bitmap, rotationDegrees: Int) {
synchronized(stateLock) {
if (disposed) {
return
}
checkNotNull(surfaceTextureHelper)
checkNotNull(surface)
if (this.rotation != rotationDegrees) {
surfaceTextureHelper?.setFrameRotation(rotationDegrees)
this.rotation = rotationDegrees
}
if (this.width != bitmap.width || this.height != bitmap.height) {
surfaceTextureHelper?.setTextureSize(bitmap.width, bitmap.height)
this.width = bitmap.width
this.height = bitmap.height
}
surfaceTextureHelper?.handler?.post {
val canvas = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
surface?.lockHardwareCanvas()
} else {
surface?.lockCanvas(null)
}
if (canvas != null) {
canvas.drawBitmap(bitmap, Matrix(), Paint())
surface?.unlockCanvasAndPost(canvas)
}
}
}
}
}
https://github.com/livekit/client-sdk-android/blob/c1e207c30fce9499a534e13c63a59f26215f0af4/livekit-android-sdk/src/main/java/io/livekit/android/room/track/video/BitmapFrameCapturer.kt
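For completeness, a rough sketch of wiring such a capturer into a WebRTC video track (factory, eglBase, and appContext are placeholder names for an existing PeerConnectionFactory, EglBase, and Context; the capture size and framerate are arbitrary):
val capturer = BitmapFrameCapturer()
val videoSource = factory.createVideoSource(false)
val surfaceTextureHelper = SurfaceTextureHelper.create("BitmapCapturerThread", eglBase.eglBaseContext)
capturer.initialize(surfaceTextureHelper, appContext, videoSource.capturerObserver)
capturer.startCapture(1280, 720, 30)
val videoTrack = factory.createVideoTrack("bitmap_track", videoSource)

// Later, feed bitmaps in as frames:
capturer.pushBitmap(someBitmap, 0)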
