In Android, I get an Image object from this camera tutorial: https://inducesmile.com/android/android-camera2-api-example-tutorial/. I now want to loop through its pixel values. Does anyone know how I can do that? Do I need to convert it to something else first, and if so, how?
Thanks
If you want to loop over all of the pixels, you first need to convert the Image to a Bitmap object. Since the source code in that tutorial returns an Image, you can decode its bytes into a Bitmap directly:
Image image = reader.acquireLatestImage();
// This only decodes if the ImageReader was configured with ImageFormat.JPEG
// (as in the linked tutorial); raw YUV frames need the conversions below.
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()]; // remaining(), not capacity()
buffer.get(bytes);
Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
// remember to image.close() when done
Then once you get the bitmap object, you can now iterate through all of the pixels.
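For example, a minimal sketch of that loop (assuming the decode above succeeded; a bulk getPixels() call is much faster than calling getPixel() per pixel):
int w = bitmapImage.getWidth(), h = bitmapImage.getHeight();
int[] pixels = new int[w * h];
bitmapImage.getPixels(pixels, 0, w, 0, 0, w, h);
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        int argb = pixels[y * w + x]; // packed ARGB int
        int r = (argb >> 16) & 0xff;
        int g = (argb >> 8) & 0xff;
        int b = argb & 0xff;
        // ... process r, g, b here ...
    }
}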
YuvToRgbConverter is useful for converting an Image to a Bitmap:
https://github.com/android/camera-samples/blob/master/Camera2Basic/utils/src/main/java/com/example/android/camera/utils/YuvToRgbConverter.kt
Usage sample:
val bmp = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)
yuvToRgbConverter.yuvToRgb(image, bmp)
Actually, you have two questions in one:
1) How do you loop through android.media.Image pixels?
2) How do you convert an android.media.Image to a Bitmap?
The first is easy. Note that the Image object you get from the camera is just a YUV frame, where the Y and the U+V components are in different planes. In many image-processing cases you need only the Y plane, that is, the gray part of the image. To get it, I suggest code like this:
Image.Plane[] planes = image.getPlanes();
int yRowStride = planes[0].getRowStride();
ByteBuffer yBuffer = planes[0].getBuffer();
// Size the array from the buffer itself: a single rowStride is only one row's worth.
byte[] yImage = new byte[yBuffer.remaining()];
yBuffer.get(yImage);
The yImage byte array now holds the gray (luma) pixels of the frame, one row every yRowStride bytes.
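For example, a short sketch of reading the luma value at (x, y); note the index uses yRowStride rather than the width, since rows may be padded:
// Luma (0..255) of the pixel at (x, y); & 0xff undoes Java's signed bytes.
int gray = yImage[y * yRowStride + x] & 0xff;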
In the same manner you can get the U+V parts too. Note that they can be stored U first and V after, or V first and U after, and they may be interleaved (which is the common case with the Camera2 API), so you get UVUV....
For debugging purposes, I often write the frame to a file and try to open it with the Vooya app (Linux) to check the format.
The second question is a little more complex.
To get a Bitmap object, I found some example code in the TensorFlow project here. The most interesting function for you is convertImageToBitmap, which returns RGB values.
To convert them to a real Bitmap, do the following:
// Fields, so the buffers can be reused across frames.
Bitmap rgbFrameBitmap;
int[] cachedRgbBytes;
byte[][] cachedYuvBytes; // one byte[] per YUV plane

cachedRgbBytes = ImageUtils.convertImageToBitmap(image, cachedRgbBytes, cachedYuvBytes);
rgbFrameBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
rgbFrameBitmap.setPixels(cachedRgbBytes, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
Note: there are more options for converting YUV to RGB frames, so if you just need the pixel values, a Bitmap may not be the best choice, as it can consume more memory than you need only to get the RGB values.
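For instance, here is a minimal sketch (assuming BT.601 video-range coefficients, which is what Android cameras commonly produce) that converts one YUV triple straight to a packed ARGB int, with no Bitmap involved:
static int yuvToArgb(int y, int u, int v) {
    // Standard integer BT.601 conversion: coefficients are the usual
    // 1.164/1.596/0.391/0.813/2.018 factors scaled by 256.
    int c = y - 16, d = u - 128, e = v - 128;
    int r = clamp((298 * c + 409 * e + 128) >> 8);
    int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
    int b = clamp((298 * c + 516 * d + 128) >> 8);
    return 0xff000000 | (r << 16) | (g << 8) | b;
}

static int clamp(int x) { return x < 0 ? 0 : Math.min(x, 255); }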
Java Conversion Method
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
        .build();

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this), new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        // call the toBitmap function
        Bitmap bitmap = toBitmap(image);
        image.close();
    }
});
private Bitmap bitmapBuffer;

private Bitmap toBitmap(@NonNull ImageProxy image) {
    if (bitmapBuffer == null) {
        bitmapBuffer = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    }
    // With OUTPUT_IMAGE_FORMAT_RGBA_8888, plane 0 is already packed RGBA pixels.
    bitmapBuffer.copyPixelsFromBuffer(image.getPlanes()[0].getBuffer());
    return bitmapBuffer;
}
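If you only need the raw values, a minimal sketch (valid only with the RGBA_8888 output format configured above) that reads a pixel without creating a Bitmap at all:
private int redAt(ImageProxy image, int x, int y) {
    ImageProxy.PlaneProxy plane = image.getPlanes()[0]; // packed RGBA
    ByteBuffer buf = plane.getBuffer();
    int rowStride = plane.getRowStride();     // >= width * 4, rows may be padded
    int pixelStride = plane.getPixelStride(); // 4 for RGBA_8888
    return buf.get(y * rowStride + x * pixelStride) & 0xff; // R; +1 G, +2 B, +3 A
}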
https://docs.oracle.com/javase/1.5.0/docs/api/java/nio/ByteBuffer.html#get%28byte[]%29
According to the Java docs, the buffer.get method transfers bytes from this buffer into the given destination array. An invocation of this method of the form src.get(a) behaves in exactly the same way as the invocation
src.get(a, 0, a.length)
I assume you have a YUV (YUV_420_888) Image provided by the camera. Using this interesting tutorial, How to use YUV (YUV_420_888) Image in Android, I can propose the following solution.
Use this to convert the YUV Image to a Bitmap:
private Bitmap yuv420ToBitmap(Image image, Context context) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

    // Refer to the logic in the section below on how to convert a YUV_420_888
    // image to a single-channel flat 1D array. For the sake of this example
    // I'll abstract it as a method.
    byte[] yuvByteArray = image2byteArray(image);

    Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(yuvByteArray.length);
    Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

    Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(image.getWidth())
            .setY(image.getHeight());
    Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

    // The allocations above "should" be cached if you are going to perform
    // repeated conversions of YUV_420_888 to Bitmap.
    in.copyFrom(yuvByteArray);
    script.setInput(in);
    script.forEach(out);

    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    out.copyTo(bitmap);
    return bitmap;
}
and a supporting function to convert the three-plane YUV image to a one-dimensional byte array:
private byte[] image2byteArray(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }

    int width = image.getWidth();
    int height = image.getHeight();

    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];

    ByteBuffer yBuffer = yPlane.getBuffer();
    ByteBuffer uBuffer = uPlane.getBuffer();
    ByteBuffer vBuffer = vPlane.getBuffer();

    // Full size Y channel and quarter size U+V channels.
    int numPixels = (int) (width * height * 1.5f);
    byte[] nv21 = new byte[numPixels];
    int index = 0;

    // Copy the Y channel.
    int yRowStride = yPlane.getRowStride();
    int yPixelStride = yPlane.getPixelStride();
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            nv21[index++] = yBuffer.get(y * yRowStride + x * yPixelStride);
        }
    }

    // Copy VU data; the NV21 format is expected to have YYYYVU packaging.
    // The U/V planes are guaranteed to have the same row stride and pixel stride.
    int uvRowStride = uPlane.getRowStride();
    int uvPixelStride = uPlane.getPixelStride();
    int uvWidth = width / 2;
    int uvHeight = height / 2;
    for (int y = 0; y < uvHeight; ++y) {
        for (int x = 0; x < uvWidth; ++x) {
            int bufferIndex = (y * uvRowStride) + (x * uvPixelStride);
            // V channel.
            nv21[index++] = vBuffer.get(bufferIndex);
            // U channel.
            nv21[index++] = uBuffer.get(bufferIndex);
        }
    }
    return nv21;
}
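A usage sketch, for instance from an ImageReader callback (the reader is assumed to be configured for YUV_420_888):
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    if (image == null) return;
    try {
        Bitmap bitmap = yuv420ToBitmap(image, getApplicationContext());
        // ... use the bitmap ...
    } finally {
        image.close(); // always release the Image back to the reader
    }
}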
Start with the ImageProxy from the analyzer:
@Override
public void analyze(@NonNull ImageProxy imageProxy) {
    Image mediaImage = imageProxy.getImage();
    if (mediaImage != null) {
        toBitmap(mediaImage);
    }
    imageProxy.close();
}
Then convert it to a bitmap:
private Bitmap toBitmap(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }

    byte[] nv21b = yuv420ThreePlanesToNV21BA(image.getPlanes(), image.getWidth(), image.getHeight());
    YuvImage yuvImage = new YuvImage(nv21b, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);

    // mQuality (JPEG quality, 0-100) and mFrameBuffer are fields of the enclosing class.
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()),
            mQuality, baos);
    mFrameBuffer = baos;

    byte[] imageBytes = baos.toByteArray();
    Bitmap bm = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    return bm;
}
Here's the static function that worked for me:
public static byte[] yuv420ThreePlanesToNV21BA(Plane[] yuv420888planes, int width, int height) {
    int imageSize = width * height;
    byte[] out = new byte[imageSize + 2 * (imageSize / 4)];

    if (areUVPlanesNV21(yuv420888planes, width, height)) {
        // Copy the Y values.
        yuv420888planes[0].getBuffer().get(out, 0, imageSize);

        ByteBuffer uBuffer = yuv420888planes[1].getBuffer();
        ByteBuffer vBuffer = yuv420888planes[2].getBuffer();
        // Get the first V value from the V buffer, since the U buffer does not contain it.
        vBuffer.get(out, imageSize, 1);
        // Copy the first U value and the remaining VU values from the U buffer.
        uBuffer.get(out, imageSize + 1, 2 * imageSize / 4 - 1);
    } else {
        // Fall back to copying the UV values one by one, which is slower but also works.
        // Unpack Y.
        unpackPlane(yuv420888planes[0], width, height, out, 0, 1);
        // Unpack U.
        unpackPlane(yuv420888planes[1], width, height, out, imageSize + 1, 2);
        // Unpack V.
        unpackPlane(yuv420888planes[2], width, height, out, imageSize, 2);
    }
    return out;
}
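This answer leans on two helpers that are not shown. The sketches below follow the Google ML Kit sample this code appears to come from, so treat them as an approximation rather than the author's exact code:
// Checks whether the U and V buffers already overlap in NV21 (VUVU...) layout.
private static boolean areUVPlanesNV21(Plane[] planes, int width, int height) {
    int imageSize = width * height;
    ByteBuffer uBuffer = planes[1].getBuffer();
    ByteBuffer vBuffer = planes[2].getBuffer();
    int vBufferPosition = vBuffer.position();
    int uBufferLimit = uBuffer.limit();
    // Advance V by one byte: the U buffer will not contain the first V value.
    vBuffer.position(vBufferPosition + 1);
    // Chop the last byte off U: the V buffer will not contain the last U value.
    uBuffer.limit(uBufferLimit - 1);
    // If both views now alias the same interleaved memory, the frame is NV21.
    boolean areNV21 = (vBuffer.remaining() == (2 * imageSize / 4 - 2))
            && (vBuffer.compareTo(uBuffer) == 0);
    // Restore the buffers to their initial state.
    vBuffer.position(vBufferPosition);
    uBuffer.limit(uBufferLimit);
    return areNV21;
}

// Copies one plane into out, writing samples pixelStride apart starting at offset.
private static void unpackPlane(Plane plane, int width, int height,
                                byte[] out, int offset, int pixelStride) {
    ByteBuffer buffer = plane.getBuffer();
    buffer.rewind();
    // Derive this plane's resolution; chroma planes are subsampled 2x2.
    int numRow = (buffer.limit() + plane.getRowStride() - 1) / plane.getRowStride();
    if (numRow == 0) return;
    int scaleFactor = height / numRow;
    int numCol = width / scaleFactor;
    int outputPos = offset;
    int rowStart = 0;
    for (int row = 0; row < numRow; row++) {
        int inputPos = rowStart;
        for (int col = 0; col < numCol; col++) {
            out[outputPos] = buffer.get(inputPos);
            outputPos += pixelStride;
            inputPos += plane.getPixelStride();
        }
        rowStart += plane.getRowStride();
    }
}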
bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.image);
1 - Store the path to the image file as a string variable. To decode the content of an image file, you need the file path stored within your code as a string. Use the following syntax as a guide:
String picPath = "/mnt/sdcard/Pictures/mypic.jpg";
2 - Create a Bitmap object using BitmapFactory:
Bitmap picBitmap = BitmapFactory.decodeFile(picPath);
In my Android code, I keep an array of colours that I would like to use in my RenderScript code as follows:
for (int col = 0; col < imgWidth; col++) {
    const uchar4 colour = *(const uchar4*)rsGetElementAt(colours, col, 0);
    if (in.r == colour.r && in.g == colour.g && in.b == colour.b) {
        in.r = 255;
        in.g = 0;
        in.b = 0;
        break;
    } else {
        in.r = 0;
        in.g = 255;
        in.b = 0;
    }
}
return in;
So basically, if the pixel is one of the colours in the array, turn it red. The problem is that I'm struggling to allocate the pixel array.
In RS I have:
rs_allocation colours;
int imgWidth;
and in Java I have:
Bitmap.Config conf = Bitmap.Config.ARGB_8888; // see other conf types
Bitmap myBitmap = Bitmap.createBitmap(pickedPanels.size(), 1, conf);
// The bitmap is N x 1, so the copied region must be 1 row tall.
myBitmap.setPixels(myInts, 0, myBitmap.getWidth(), 0, 0, myBitmap.getWidth(), 1);

final RenderScript rs = RenderScript.create(this);

// The input image.
final Allocation input = Allocation.createFromBitmap(rs, bitmap,
        Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);
// The output image.
final Allocation output = Allocation.createTyped(rs, input.getType());

final ScriptC_singlesource script = new ScriptC_singlesource(rs);

// The array of colours.
final Allocation pixels = Allocation.createFromBitmap(rs, myBitmap,
        Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);
script.set_colours(pixels); // must match the rs_allocation name in the script
script.set_imgWidth(pickedPanels.size());

script.forEach_root(input, output);

// retrieve output
output.copyTo(bitmap);
ImageView destim = (ImageView) findViewById(dest);
destim.setDrawingCacheEnabled(true);
destim.setImageBitmap(bitmap);
I've read that you cannot have more than two allocations, so I'm guessing that's my problem. I want to try reading the colours in as a uchar4*, but I cannot find an example of how to do this.
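A minimal sketch of what that could look like, assuming the script global is the rs_allocation colours shown above and myInts holds packed ARGB ints (the byte order below is an assumption; verify that .r/.g/.b line up on your device):
// Repack the ARGB ints as RGBA bytes so each element maps to a uchar4.
byte[] rgba = new byte[myInts.length * 4];
for (int i = 0; i < myInts.length; i++) {
    int c = myInts[i];
    rgba[i * 4]     = (byte) Color.red(c);
    rgba[i * 4 + 1] = (byte) Color.green(c);
    rgba[i * 4 + 2] = (byte) Color.blue(c);
    rgba[i * 4 + 3] = (byte) Color.alpha(c);
}
Allocation coloursAlloc = Allocation.createSized(rs, Element.U8_4(rs), myInts.length);
coloursAlloc.copyFromUnchecked(rgba);
script.set_colours(coloursAlloc);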
I am having issues using ScriptIntrinsicYuvToRGB from the support library to convert images from the NV21 format to Bitmaps (ARGB_8888). The code below illustrates the problem.
Suppose I have the following 50x50 image (the one below is a screenshot from a device, not actually 50x50):
Then if I convert said image to a Bitmap through YuvImage#compressToJpeg + BitmapFactory.decodeByteArray:
YuvImage yuvImage = new YuvImage(example, android.graphics.ImageFormat.NV21, width, height, null);
ByteArrayOutputStream os = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, os);
byte[] jpegByteArray = os.toByteArray();
return BitmapFactory.decodeByteArray(jpegByteArray, 0, jpegByteArray.length);
I get the expected image. But if I convert it through ScriptIntrinsicYuvToRGB as follows:
RenderScript rs = RenderScript.create(context);
Type.Builder tb = new Type.Builder(rs, Element.createPixel(rs,
Element.DataType.UNSIGNED_8, Element.DataKind.PIXEL_YUV));
tb.setX(width);
tb.setY(height);
tb.setYuvFormat(android.graphics.ImageFormat.NV21);
Allocation yuvAllocation = Allocation.createTyped(rs, tb.create(), Allocation.USAGE_SCRIPT);
yuvAllocation.copyFrom(example);
Type rgbType = Type.createXY(rs, Element.RGBA_8888(rs), width, height);
Allocation rgbAllocation = Allocation.createTyped(rs, rgbType);
ScriptIntrinsicYuvToRGB yuvToRgbScript = ScriptIntrinsicYuvToRGB.create(rs, Element.RGBA_8888(rs));
yuvToRgbScript.setInput(yuvAllocation);
yuvToRgbScript.forEach(rgbAllocation);
Bitmap convertedBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
rgbAllocation.copyTo(convertedBitmap);
I get the following corrupted image:
I have noticed that this happens with square image sizes, but never with powers of 2 (e.g. 64x64, 128x128, etc.). Had I not tried square sizes I would not have noticed the problem, as some picture sizes, such as 2592x1728, work fine. What am I missing?
Update: here is the code that generated the original image, as requested:
int width = 50;
int height = 50;
int size = width * height;
byte[] example = new byte[size + size / 2];
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        example[y * width + x] = (byte) ((x * y / (float) size) * 255);
    }
}
for (int i = 0; i < size / 2; i++) {
    example[size + i] = (byte) 127;
}
The following code behaves in the wrong way:
Type.Builder tb = new Type.Builder(rs, Element.createPixel(rs,
Element.DataType.UNSIGNED_8, Element.DataKind.PIXEL_YUV));
tb.setX(width);
tb.setY(height);
tb.setYuvFormat(android.graphics.ImageFormat.NV21);
yuvAllocation = Allocation.createTyped(rs, tb.create(), Allocation.USAGE_SCRIPT);
If you replace it using a "raw" way of creating an allocation, the conversion will work:
int expectedBytes = width * height *
ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
Type.Builder yuvTypeBuilder = new Type.Builder(rs, Element.U8(rs))
.setX(expectedBytes);
Type yuvType = yuvTypeBuilder.create();
yuvAllocation = Allocation.createTyped(rs, yuvType, Allocation.USAGE_SCRIPT);
It seems that, if you use the PIXEL_YUV definition, there is a size problem with non-multiple-of-16 dimensions. I am still investigating it.
IMHO the best way to convert NV21 (or NV12) to ARGB on Android is to use the native (C++) libyuv library:
https://chromium.googlesource.com/libyuv/libyuv/
The main advantage on ARM v7 (and newer v8) based Android devices is NEON optimization, which makes the conversion extremely fast.
You can make your own build of libyuv (see its build instructions) or use any prebuilt one from GitHub: https://github.com/search?q=libyuv+Android
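The Java-side binding might look like the sketch below; the library name, class, and method here are hypothetical, and the native side would forward to libyuv::NV21ToARGB:
public final class YuvJni {
    static {
        System.loadLibrary("yuvjni"); // hypothetical .so built against libyuv
    }

    // Fills argbOut (width * height ints) from an NV21 byte array.
    public static native void nv21ToArgb(byte[] nv21, int[] argbOut,
                                         int width, int height);
}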
I am writing an app that takes the camera feed, converts it to rgb, in order to do some processing.
It works fine with the old camera implementation, which uses the NV21 YUV format.
The issue I am having is with the new YUV format, YUV_420_888. The image is no longer converted correctly to RGB with the new Camera2 API, which sends YUV_420_888 instead of NV21 (YUV_420_SP).
Can someone please tell me how should I convert YUV_420_888 to RGB?
Thanks
Camera2 YUV_420_888 to RGB Mat(opencv) in Java
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = null;
    try {
        image = reader.acquireLatestImage();
        if (image != null) {
            byte[] nv21;
            ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
            ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
            ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

            int ySize = yBuffer.remaining();
            int uSize = uBuffer.remaining();
            int vSize = vBuffer.remaining();

            nv21 = new byte[ySize + uSize + vSize];

            // U and V are swapped
            yBuffer.get(nv21, 0, ySize);
            vBuffer.get(nv21, ySize, vSize);
            uBuffer.get(nv21, ySize + vSize, uSize);

            Mat mRGB = getYUV2Mat(image, nv21);
        }
    } catch (Exception e) {
        Log.w(TAG, e.getMessage());
    } finally {
        if (image != null) {
            image.close(); // don't forget to close
        }
    }
}
public Mat getYUV2Mat(Image image, byte[] data) {
    Mat mYuv = new Mat(image.getHeight() + image.getHeight() / 2, image.getWidth(), CvType.CV_8UC1);
    mYuv.put(0, 0, data);
    Mat mRGB = new Mat();
    Imgproc.cvtColor(mYuv, mRGB, Imgproc.COLOR_YUV2RGB_NV21, 3);
    return mRGB;
}
In my approach I use an OpenCV Mat and the script from
https://gist.github.com/camdenfullmer/dfd83dfb0973663a7974
First of all, convert your YUV_420_888 Image to a Mat with the code in the link above. (mImage is the Image object I get in ImageReader.OnImageAvailableListener.)
Mat mYuvMat = imageToMat(mImage);
public static Mat imageToMat(Image image) {
    ByteBuffer buffer;
    int rowStride;
    int pixelStride;
    int width = image.getWidth();
    int height = image.getHeight();
    int offset = 0;

    Image.Plane[] planes = image.getPlanes();
    byte[] data = new byte[image.getWidth() * image.getHeight() * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8];
    byte[] rowData = new byte[planes[0].getRowStride()];

    for (int i = 0; i < planes.length; i++) {
        buffer = planes[i].getBuffer();
        rowStride = planes[i].getRowStride();
        pixelStride = planes[i].getPixelStride();
        int w = (i == 0) ? width : width / 2;
        int h = (i == 0) ? height : height / 2;
        for (int row = 0; row < h; row++) {
            int bytesPerPixel = ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
            if (pixelStride == bytesPerPixel) {
                int length = w * bytesPerPixel;
                buffer.get(data, offset, length);
                // Advance past the row padding, unless this is the plane's last row.
                if (h - row != 1) {
                    buffer.position(buffer.position() + rowStride - length);
                }
                offset += length;
            } else {
                // The last row may not be padded to a full rowStride, so only
                // read up to the last actual sample.
                if (h - row == 1) {
                    buffer.get(rowData, 0, width - pixelStride + 1);
                } else {
                    buffer.get(rowData, 0, rowStride);
                }
                for (int col = 0; col < w; col++) {
                    data[offset++] = rowData[col * pixelStride];
                }
            }
        }
    }

    Mat mat = new Mat(height + height / 2, width, CvType.CV_8UC1);
    mat.put(0, 0, data);
    return mat;
}
We now have a one-channel YUV Mat. Define a new Mat for the BGR (not RGB yet) image:
Mat bgrMat = new Mat(mImage.getHeight(), mImage.getWidth(),CvType.CV_8UC4);
I just started learning OpenCV, so this probably doesn't have to be a 4-channel Mat and a 3-channel one might work instead, but it works for me.
Now I use the convert-color method to change my YUV Mat into a BGR Mat.
Imgproc.cvtColor(mYuvMat, bgrMat, Imgproc.COLOR_YUV2BGR_I420);
Now we can do all the image processing: finding contours, colors, circles, etc. To draw the image back on the screen we need to convert it to a bitmap:
Mat rgbaMatOut = new Mat();
Imgproc.cvtColor(bgrMat, rgbaMatOut, Imgproc.COLOR_BGR2RGBA, 0);
final Bitmap bitmap = Bitmap.createBitmap(bgrMat.cols(), bgrMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(rgbaMatOut, bitmap);
I have all my image processing in a separate thread, so to set my ImageView I need to do this on the UI thread:
runOnUiThread(new Runnable() {
    @Override
    public void run() {
        if (bitmap != null) {
            mImageView.setImageBitmap(bitmap);
        }
    }
});
Have you tried using this script? It's an answer posted by yydcdut on this question
https://github.com/pinguo-yuyidong/Camera2/blob/master/camera2/src/main/rs/yuv2rgb.rs
Shyam Kumar's answer is not right for my phone, but Daniel Więcek's is. I debugged it and found that planes[i].getRowStride() is 1216 and planes[i].getPixelStride() is 2, while the image width and height are both 1200.
Because my reputation is 3, I can't comment, so I am posting an answer.
Approximately 10 times faster than the imageToMat function mentioned above is this code:
Image image = reader.acquireLatestImage();
...
Mat yuv = new Mat(image.getHeight() + image.getHeight() / 2, image.getWidth(), CvType.CV_8UC1);
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
// Note: as written this copies only plane 0 (the Y data) into the top of the
// Mat and leaves the chroma rows unset, so verify the result on your device.
final byte[] data = new byte[buffer.limit()];
buffer.get(data);
yuv.put(0, 0, data);
...
image.close();
So I ran into the exact same problem, where I had code that took the old YUV_420_SP format byte[] data from onPreviewFrame() and converted it to RGB.
The key here is that the 'old' data in the byte[] is laid out as YYYYYY...CrCbCrCbCrCb, while the 'new' data from the Camera2 API is divided into 3 planes (0=Y, 1=Cb, 2=Cr), from which you can obtain each plane's byte[]. So all you need to do is reorder the new data into a single array that matches the 'old' format, which you can then pass to your existing toRGB() functions:
Image.Plane[] planes = image.getPlanes(); // in YUV_420_888 format

int acc = 0, i;
ByteBuffer[] buff = new ByteBuffer[planes.length];
for (i = 0; i < planes.length; i++) {
    buff[i] = planes[i].getBuffer();
    acc += buff[i].capacity();
}

byte[] data = new byte[acc],
       tmpCb = new byte[buff[1].capacity()], tmpCr = new byte[buff[2].capacity()];
buff[0].get(data, 0, buff[0].capacity()); // Y
acc = buff[0].capacity();
buff[2].get(tmpCr, 0, buff[2].capacity()); // Cr
buff[1].get(tmpCb, 0, buff[1].capacity()); // Cb

// Interleave as CrCbCrCb... Note that when the chroma pixel stride is 2 (the
// common case), tmpCr is already VU-interleaved, and the overlapping writes
// below (acc advances by 1, not 2) rely on exactly that.
for (i = 0; i < tmpCb.length; i++) {
    data[acc] = tmpCr[i];
    data[acc + 1] = tmpCb[i];
    acc++;
}
...and now data[] is formatted just like the old YUV_420_SP.
(I hope this helps someone, despite the years that have passed.)
I'm trying to write a cellular automata program that manipulates integer values in an array and then displays the array as an image.
array of integers --> image --> display on screen
I've tried BitmapFactory.decodeByteArray() and Bitmap.createBitmap(), but all the examples require reading an existing .jpg or .png into a byte[], which is then converted back into a bitmap.
Does anyone have a clear example of building an array from scratch, converting to an image and then displaying? Even the simplest example of an entirely blue square 50x50 pixels would be helpful.
If BitmapFactory.decodeByteArray() is not the best option, I'm open to any alternatives.
Thanks!
My code so far, from an onClick() method:
display = (ImageView) findViewById(R.id.imageView1);
Bitmap bMap = null;
int w = 50, h = 50;             // set width = height = 50
byte[] input = new byte[w * h]; // initialize input array
for (int y = 0; y < h; y++) {   // fill input with blue
    for (int x = 0; x < w; x++) {
        input[y * w + x] = (byte) Color.BLUE; // note: truncates the int color to one byte
    }
}
// This is the part that fails: decodeByteArray() expects *encoded* (JPEG/PNG)
// bytes, not raw pixel data, and takes BitmapFactory.Options, not a Bitmap.Config.
bMap = BitmapFactory.decodeByteArray(input, 0, input.length, null);
display.setImageBitmap(bMap); // post bitmap to imageview
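For the blue-square case specifically, a minimal sketch that skips BitmapFactory entirely; Bitmap.createBitmap can take an int[] of packed ARGB colors directly:
int w = 50, h = 50;
int[] colors = new int[w * h];
java.util.Arrays.fill(colors, Color.BLUE); // packed ARGB, one int per pixel
Bitmap bMap = Bitmap.createBitmap(colors, w, h, Bitmap.Config.ARGB_8888);
display.setImageBitmap(bMap);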
// You are using ARGB values, which is why the Config is ARGB_8888.
bitmap = Bitmap.createBitmap(100, 100, Bitmap.Config.ARGB_8888);
// vector is your int[] of ARGB values (one 100-pixel row, repeated below).
bitmap.copyPixelsFromBuffer(makeBuffer(vector, vector.length));

private IntBuffer makeBuffer(int[] src, int n) {
    IntBuffer dst = IntBuffer.allocate(n * n);
    // Write the source row n times, producing an n x n pixel buffer.
    for (int i = 0; i < n; i++) {
        dst.put(src);
    }
    dst.rewind();
    return dst;
}
Creating an empty bitmap and drawing through a canvas in Android