For context, I am getting raw video data from the Zoom SDK as separate Y, U, and V ByteBuffers and trying to convert them to bitmaps.
However, this conversion produces grayscale bitmap images with green and pink spots.
The method I'm using for the conversion is the following:
fun ZoomVideoSDKVideoRawData.toBitmap(): Bitmap? {
    val width = streamWidth
    val height = streamHeight
    val yuvBytes = ByteArray(width * (height + height / 2))
    val yPlane = getyBuffer()
    val uPlane = getuBuffer()
    val vPlane = getvBuffer()
    // copy Y
    yPlane.get(yuvBytes, 0, width * height)
    // copy U
    var offset = width * height
    uPlane.get(yuvBytes, offset, width * height / 4)
    // copy V
    offset += width * height / 4
    vPlane.get(yuvBytes, offset, width * height / 4)
    // make YUV image
    val yuvImage = YuvImage(yuvBytes, ImageFormat.NV21, width, height, null)
    // convert image to bitmap
    val out = ByteArrayOutputStream()
    yuvImage.compressToJpeg(Rect(0, 0, width, height), 50, out)
    val imageBytes = out.toByteArray()
    val image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
    // return result
    return image
}
Any idea why the colors of the resulting bitmap are corrupted?
Thanks in advance!
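A likely culprit to check: the code above copies the U and V planes back-to-back (planar I420 order) but tags the buffer as ImageFormat.NV21, which expects the Y plane followed by a single interleaved VUVU chroma plane. A minimal sketch of NV21-style packing, assuming tightly packed planes with no row-stride padding (real SDK buffers may require per-row copies that honor the stride):

// Sketch: pack planar Y/U/V buffers into NV21 (Y plane, then interleaved V/U).
// Assumes tightly packed planes with no row-stride padding.
fun packNv21(y: ByteBuffer, u: ByteBuffer, v: ByteBuffer, width: Int, height: Int): ByteArray {
    val nv21 = ByteArray(width * height * 3 / 2)
    y.get(nv21, 0, width * height)
    var offset = width * height
    val chromaSize = width * height / 4
    for (i in 0 until chromaSize) {
        nv21[offset++] = v.get(i) // V comes first in NV21
        nv21[offset++] = u.get(i) // then U
    }
    return nv21
}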
I'm trying to get the most out of compression and I need to make my image grayscale in order to make the file lighter.
I found a function that makes an image grayscale:
fun toGrayscale(bmpOriginal: Bitmap): Bitmap {
    val width = bmpOriginal.width
    val height = bmpOriginal.height
    val bmpGrayscale = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val c = Canvas(bmpGrayscale)
    val paint = Paint()
    val cm = ColorMatrix()
    cm.setSaturation(0f)
    val f = ColorMatrixColorFilter(cm)
    paint.colorFilter = f
    c.drawBitmap(bmpOriginal, 0f, 0f, paint)
    return bmpGrayscale
}
Then I save it with
try {
    FileOutputStream(filePathGrayScale).use { out ->
        bmp.compress(Bitmap.CompressFormat.JPEG, 100, out)
    }
} catch (e: IOException) {
    e.printStackTrace()
}
But the image size increases!
What exactly am I doing wrong, and how do I fix it?
bmp.compress(Bitmap.CompressFormat.JPEG, 100, out)
You are saving the image with a compression quality of 100. This maximizes file size (and minimizes quality loss).
If you want to make the most of compression, pass a lower value here. There is no ideal value, because it depends on the image being compressed and the amount of artefacts you can tolerate. It's a trade-off between file size and image quality.
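For instance, a quick sketch that writes the same bitmap at a few quality levels so you can compare the resulting sizes (the values are illustrative, not recommendations):

// Compress the same bitmap at several JPEG quality levels and log the sizes.
fun logJpegSizes(bmp: Bitmap) {
    for (quality in intArrayOf(100, 80, 60, 40)) {
        val out = ByteArrayOutputStream()
        bmp.compress(Bitmap.CompressFormat.JPEG, quality, out)
        println("quality=$quality -> ${out.size()} bytes")
    }
}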
For some more reading about the quality level see:
What quality to choose when converting to JPG?
And some examples 1, 2
I have an imageView and several textViews.
My app allows the user to drag the textViews to any coordinates of the imageView (the imageView is not full screen).
In other words, the app lets the user add several captions to their image,
then converts the image and captions into a single image and stores it on the user's device.
According to one of the Stack Overflow responses, I can only convert a single textView's text to a bitmap.
Is there any way to take a screenshot of the final image the user has created, with its captions, in Kotlin?
This is my code:
@Throws(IOException::class)
fun foo(text: String) {
    val textPaint = Paint().apply {
        color = Color.WHITE
        textAlign = Paint.Align.CENTER
        textSize = 20f
        isAntiAlias = true
    }
    val bounds = Rect()
    textPaint.getTextBounds(text, 0, text.length, bounds)
    val bmp = Bitmap.createBitmap(mImgBanner.width, mImgBanner.height, Bitmap.Config.RGB_565) // use ARGB_8888 for better quality
    val canvas = Canvas(bmp)
    canvas.drawText(text, 0f, 20f, textPaint)
    val path = Environment.getExternalStorageDirectory().absolutePath + "/image.png"
    val stream = FileOutputStream(path)
    bmp.compress(Bitmap.CompressFormat.PNG, 100, stream)
    bmp.recycle()
    stream.close()
}
Add the desired views to an XML layout, inflate it, and take a screenshot of the parent layout containing your views.
Code for taking the screenshot:
fun takeScreenshotOfView(view: View, height: Int, width: Int): Bitmap {
    val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(bitmap)
    val bgDrawable = view.background
    if (bgDrawable != null) {
        bgDrawable.draw(canvas)
    } else {
        canvas.drawColor(Color.WHITE)
    }
    view.draw(canvas)
    return bitmap
}
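Usage might look like this (rootLayout is a hypothetical name for the container holding your imageView and textViews):

// Capture the container with the image and its caption textViews, then save it.
val screenshot = takeScreenshotOfView(rootLayout, rootLayout.height, rootLayout.width)
FileOutputStream(File(context.filesDir, "captioned.png")).use { out ->
    screenshot.compress(Bitmap.CompressFormat.PNG, 100, out)
}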
You can also use the extension View.drawToBitmap(). It will return a Bitmap:
/**
 * Return a [Bitmap] representation of this [View].
 *
 * The resulting bitmap will be the same width and height as this view's current layout
 * dimensions. This does not take into account any transformations such as scale or translation.
 *
 * Note, this will use the software rendering pipeline to draw the view to the bitmap. This may
 * result with different drawing to what is rendered on a hardware accelerated canvas (such as
 * the device screen).
 *
 * If this view has not been laid out this method will throw a [IllegalStateException].
 *
 * @param config Bitmap config of the desired bitmap. Defaults to [Bitmap.Config.ARGB_8888].
 */
fun View.drawToBitmap(config: Bitmap.Config = Bitmap.Config.ARGB_8888): Bitmap {
    if (!ViewCompat.isLaidOut(this)) {
        throw IllegalStateException("View needs to be laid out before calling drawToBitmap()")
    }
    return Bitmap.createBitmap(width, height, config).applyCanvas {
        translate(-scrollX.toFloat(), -scrollY.toFloat())
        draw(this)
    }
}
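With that extension in place, capturing the container becomes a one-liner (again assuming a hypothetical rootLayout):

// drawToBitmap() throws if the view has not been laid out yet,
// so call it only after layout, e.g. in a click handler or doOnLayout { }.
val screenshot: Bitmap = rootLayout.drawToBitmap()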
In Android, I get an Image object from this camera tutorial: https://inducesmile.com/android/android-camera2-api-example-tutorial/. But I now want to loop through the pixel values. Does anyone know how I can do that? Do I need to convert it to something else first, and how can I do that?
Thanks
If you want to loop through all the pixels, then you need to convert it first to a Bitmap object. Since the source code in that tutorial returns a compressed (JPEG) Image, you can decode the bytes directly into a bitmap.
Image image = reader.acquireLatestImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.capacity()];
buffer.get(bytes);
Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
Then once you get the bitmap object, you can now iterate through all of the pixels.
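As a sketch of the iteration itself (Kotlin), pulling all pixels into an array with getPixels is much faster than calling getPixel once per coordinate:

// Read all ARGB pixels into one array, then visit each coordinate.
fun forEachPixel(bitmap: Bitmap, action: (x: Int, y: Int, color: Int) -> Unit) {
    val width = bitmap.width
    val height = bitmap.height
    val pixels = IntArray(width * height)
    bitmap.getPixels(pixels, 0, width, 0, 0, width, height)
    for (y in 0 until height) {
        for (x in 0 until width) {
            action(x, y, pixels[y * width + x])
        }
    }
}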
YuvToRgbConverter is useful for conversion from Image to Bitmap.
https://github.com/android/camera-samples/blob/master/Camera2Basic/utils/src/main/java/com/example/android/camera/utils/YuvToRgbConverter.kt
Usage sample:
val bmp = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)
yuvToRgbConverter.yuvToRgb(image, bmp)
Actually, you have two questions in one:
1) How do you loop through android.media.Image pixels?
2) How do you convert android.media.Image to a Bitmap?
The first is easy. Note that the Image object you get from the camera is just a YUV frame, where the Y and U+V components are in different planes. In many image-processing cases you need only the Y plane, that is, the gray part of the image. To get it I suggest code like this:
Image.Plane[] planes = image.getPlanes();
int yRowStride = planes[0].getRowStride(); // bytes per row, may exceed width due to padding
ByteBuffer yBuffer = planes[0].getBuffer();
byte[] yImage = new byte[yBuffer.remaining()]; // the whole Y plane, not just one row
yBuffer.get(yImage);
The yImage byte array now contains the gray pixels of the frame (row by row, yRowStride bytes per row).
In the same manner you can get the U and V parts too. Note that they can be U first and V after, or V first and U after, and they may be interleaved (which is the common case with the Camera2 API), so you get UVUV....
For debug purposes, I often write the frame to a file and try to open it with the Vooya app (Linux) to check the format.
The second question is a little bit more complex.
To get a Bitmap object I found some example code in the TensorFlow project here. The most interesting function for you is "convertImageToBitmap", which returns RGB values.
To convert them to a real Bitmap, do the following:
Bitmap rgbFrameBitmap;
int[] cachedRgbBytes;
cachedRgbBytes = ImageUtils.convertImageToBitmap(image, cachedRgbBytes, cachedYuvBytes);
rgbFrameBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
rgbFrameBitmap.setPixels(cachedRgbBytes, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
Note: there are more options for converting YUV to RGB frames, so if you only need the pixel values, a Bitmap may not be the best choice, as it can consume more memory than you need just to get the RGB values.
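If you only need a handful of RGB values, a rough per-pixel conversion (Kotlin, BT.601 approximation; the exact coefficients depend on the color space your camera uses) avoids the Bitmap entirely:

// Rough per-pixel YUV -> RGB (BT.601 approximation). Inputs are expected in
// 0..255, e.g. obtained from a plane byte via (b.toInt() and 0xFF).
fun yuvToRgb(y: Int, u: Int, v: Int): Triple<Int, Int, Int> {
    val uf = u - 128
    val vf = v - 128
    val r = (y + 1.402f * vf).toInt().coerceIn(0, 255)
    val g = (y - 0.344f * uf - 0.714f * vf).toInt().coerceIn(0, 255)
    val b = (y + 1.772f * uf).toInt().coerceIn(0, 255)
    return Triple(r, g, b)
}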
Java Conversion Method
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
        .build();

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this), new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        // call toBitmap function
        Bitmap bitmap = toBitmap(image);
        image.close();
    }
});

private Bitmap bitmapBuffer;

private Bitmap toBitmap(@NonNull ImageProxy image) {
    if (bitmapBuffer == null) {
        bitmapBuffer = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    }
    bitmapBuffer.copyPixelsFromBuffer(image.getPlanes()[0].getBuffer());
    return bitmapBuffer;
}
https://docs.oracle.com/javase/1.5.0/docs/api/java/nio/ByteBuffer.html#get%28byte[]%29
According to the Java docs, the buffer.get method transfers bytes from this buffer into the given destination array. An invocation of this method of the form src.get(a) behaves in exactly the same way as the invocation
src.get(a, 0, a.length)
I assume you have a YUV (YUV_420_888) Image provided by the camera. Using the interesting "How to use YUV (YUV_420_888) Image in Android" tutorial, I can propose the following solution to convert an Image to a Bitmap.
Use this to convert a YUV Image to a Bitmap:
private Bitmap yuv420ToBitmap(Image image, Context context) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

    // Refer to the logic in the section below on how to convert a YUV_420_888 image
    // to a single-channel flat 1D array. For the sake of this example I'll abstract it
    // as a method.
    byte[] yuvByteArray = image2byteArray(image);

    Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(yuvByteArray.length);
    Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);
    Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(image.getWidth())
            .setY(image.getHeight());
    Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

    // The allocations above "should" be cached if you are going to perform
    // repeated conversions of YUV_420_888 to Bitmap.
    in.copyFrom(yuvByteArray);
    script.setInput(in);
    script.forEach(out);

    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    out.copyTo(bitmap);
    return bitmap;
}
and a supporting function to convert the three-plane YUV image into a one-dimensional byte array:
private byte[] image2byteArray(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }
    int width = image.getWidth();
    int height = image.getHeight();
    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    ByteBuffer yBuffer = yPlane.getBuffer();
    ByteBuffer uBuffer = uPlane.getBuffer();
    ByteBuffer vBuffer = vPlane.getBuffer();
    // Full size Y channel and quarter size U+V channels.
    int numPixels = (int) (width * height * 1.5f);
    byte[] nv21 = new byte[numPixels];
    int index = 0;
    // Copy Y channel.
    int yRowStride = yPlane.getRowStride();
    int yPixelStride = yPlane.getPixelStride();
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            nv21[index++] = yBuffer.get(y * yRowStride + x * yPixelStride);
        }
    }
    // Copy VU data; NV21 format is expected to have YYYYVU packaging.
    // The U/V planes are guaranteed to have the same row stride and pixel stride.
    int uvRowStride = uPlane.getRowStride();
    int uvPixelStride = uPlane.getPixelStride();
    int uvWidth = width / 2;
    int uvHeight = height / 2;
    for (int y = 0; y < uvHeight; ++y) {
        for (int x = 0; x < uvWidth; ++x) {
            int bufferIndex = (y * uvRowStride) + (x * uvPixelStride);
            // V channel.
            nv21[index++] = vBuffer.get(bufferIndex);
            // U channel.
            nv21[index++] = uBuffer.get(bufferIndex);
        }
    }
    return nv21;
}
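A hypothetical call site for the two methods above (Kotlin for brevity; image and context are assumed to come from your camera callback):

// Convert the current frame, then release it back to the camera pipeline.
val bitmap = yuv420ToBitmap(image, context)
image.close()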
Start with the ImageProxy from the analyzer:
@Override
public void analyze(@NonNull ImageProxy imageProxy)
{
    Image mediaImage = imageProxy.getImage();
    if (mediaImage != null)
    {
        toBitmap(mediaImage);
    }
    imageProxy.close();
}
Then convert it to a bitmap:
private Bitmap toBitmap(Image image)
{
    if (image.getFormat() != ImageFormat.YUV_420_888)
    {
        throw new IllegalArgumentException("Invalid image format");
    }
    byte[] nv21b = yuv420ThreePlanesToNV21BA(image.getPlanes(), image.getWidth(), image.getHeight());
    YuvImage yuvImage = new YuvImage(nv21b, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    // mQuality is a member field holding the JPEG quality (0-100).
    yuvImage.compressToJpeg(new Rect(0, 0,
                            yuvImage.getWidth(),
                            yuvImage.getHeight()),
                            mQuality, baos);
    mFrameBuffer = baos; // member field keeping the JPEG bytes for later use
    byte[] imageBytes = baos.toByteArray();
    return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
}
Here's the static function that worked for me:
public static byte[] yuv420ThreePlanesToNV21BA(Plane[] yuv420888planes, int width, int height)
{
    int imageSize = width * height;
    byte[] out = new byte[imageSize + 2 * (imageSize / 4)];
    if (areUVPlanesNV21(yuv420888planes, width, height)) {
        // Copy the Y values.
        yuv420888planes[0].getBuffer().get(out, 0, imageSize);
        ByteBuffer uBuffer = yuv420888planes[1].getBuffer();
        ByteBuffer vBuffer = yuv420888planes[2].getBuffer();
        // Get the first V value from the V buffer, since the U buffer does not contain it.
        vBuffer.get(out, imageSize, 1);
        // Copy the first U value and the remaining VU values from the U buffer.
        uBuffer.get(out, imageSize + 1, 2 * imageSize / 4 - 1);
    }
    else
    {
        // Fallback to copying the UV values one by one, which is slower but also works.
        // Unpack Y.
        unpackPlane(yuv420888planes[0], width, height, out, 0, 1);
        // Unpack U.
        unpackPlane(yuv420888planes[1], width, height, out, imageSize + 1, 2);
        // Unpack V.
        unpackPlane(yuv420888planes[2], width, height, out, imageSize, 2);
    }
    return out;
}
// Note: areUVPlanesNV21(...) and unpackPlane(...) are helper methods that
// accompany this function in its original source and are not shown here.
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.image);
1. Store the path to the image file as a string variable. To decode the content of an image file, you need the file path stored within your code as a string. Use the following syntax as a guide:
String picPath = "/mnt/sdcard/Pictures/mypic.jpg";
2. Create a Bitmap object using BitmapFactory:
Bitmap picBitmap = BitmapFactory.decodeFile(picPath);
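Note that decodeFile returns null if the file is missing or not a valid image, and large files can exhaust memory. A defensive sketch (the path and sample size are illustrative):

// Downsample by 2 in each dimension to save memory; decodeFile
// returns null when the path cannot be decoded.
val options = BitmapFactory.Options().apply { inSampleSize = 2 }
val picBitmap: Bitmap? = BitmapFactory.decodeFile("/mnt/sdcard/Pictures/mypic.jpg", options)
if (picBitmap == null) {
    Log.w("Decode", "Could not decode image file")
}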
I need to load a very long (approx. 15K px) black-and-white bitmap that should span the screen width.
I tried several approaches, and the best seems to be using Glide:
private fun loadGlideScreenWideCompress(context: Context, imageView: AppCompatImageView) {
    imageView.adjustViewBounds = true
    val params = LayoutParams(MATCH_PARENT, WRAP_CONTENT)
    imageView.layoutParams = params
    GlideApp.with(context)
        .load(imageRes)
        .encodeFormat(Bitmap.CompressFormat.WEBP)
        .diskCacheStrategy(DiskCacheStrategy.ALL)
        .into(imageView)
}
The problem is that image quality is lost: the image is blurred and text is not readable.
I also tried BitmapRegionDecoder and saw no quality loss or memory issues. However, it decodes just a portion of the image, and I do not really understand how to use it: decode the next part on a scroll event? That seems hard to implement. Measuring the drawable height and passing the full height to the BitmapRegionDecoder intuitively feels wrong, because the decoder is meant for regions only.
Another issue is that the image is large and I need it to work for all screen sizes. If I take the largest possible size and then scale it down, I would need to perform expensive bitmap-creation operations, potentially blocking the main thread.
The conventional approach does not work and gives OOM exceptions:
val bitmap = BitmapFactory.Options().run {
    inJustDecodeBounds = true
    inPreferredConfig = Bitmap.Config.ALPHA_8
    inDensity = displayMetrics.densityDpi
    BitmapFactory.decodeResource(context.resources, imageRes, this)
}
imageView.scaleType = ImageView.ScaleType.FIT_CENTER
imageView.setImageBitmap(bitmap)
Code with scaling down:
val res = context.resources
val display = res.displayMetrics
val dr = res.getDrawable(imageRes!!)
val original = (dr as BitmapDrawable).bitmap
val scale = original.width / display.widthPixels
val scaledBitmap = BitmapDrawable(res, Bitmap.createScaledBitmap(
    original,
    display.widthPixels,
    original.height / scale,
    true
))
imageView.adjustViewBounds = true

val bos = ByteArrayOutputStream()
scaledBitmap.bitmap.compress(CompressFormat.WEBP, 100, bos)
val decoder = BitmapRegionDecoder.newInstance(
    ByteArrayInputStream(bos.toByteArray()),
    false
)
val rect = Rect(
    0,
    0,
    scaledBitmap.intrinsicWidth,
    scaledBitmap.intrinsicHeight
)
val bitmapFactoryOptions = BitmapFactory.Options()
bitmapFactoryOptions.inPreferredConfig = Bitmap.Config.ALPHA_8
bitmapFactoryOptions.inDensity = display.densityDpi
val bmp = decoder.decodeRegion(rect, bitmapFactoryOptions)
imageView.setImageBitmap(bmp)
So the question is: what would be the best approach in this situation, and how do I use BitmapRegionDecoder correctly?
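For reference, the "decode the next part on scroll" idea mentioned above might be sketched like this, decoding only a viewport-sized strip at the current scroll offset (a sketch of the idea, not a complete solution; names are illustrative):

// Decode only the visible strip of a very tall image inside a scrolling container.
// decoder is a BitmapRegionDecoder created once from the image stream.
fun decodeVisibleRegion(decoder: BitmapRegionDecoder, scrollY: Int, viewportHeight: Int): Bitmap {
    val maxTop = (decoder.height - viewportHeight).coerceAtLeast(0)
    val top = scrollY.coerceIn(0, maxTop)
    val bottom = (top + viewportHeight).coerceAtMost(decoder.height)
    val region = Rect(0, top, decoder.width, bottom)
    val options = BitmapFactory.Options().apply {
        inPreferredConfig = Bitmap.Config.ALPHA_8 // black-and-white source
    }
    return decoder.decodeRegion(region, options)
}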