How can I convert a byte array received over a socket?
The C++ client sends image data of type uchar.
On the Android side I receive this uchar array as a byte[], with values ranging from -128 to +127.
What I want to do is receive this data and display it. For that I was trying to convert it to a Bitmap using BitmapFactory.decodeByteArray(), but no luck, I get a null Bitmap. Am I doing this right, or is there another method available?
Thanks in advance.
From the comments to the answers above, it seems like you want to create a Bitmap object from a stream of RGB values, not from any image format like PNG or JPEG.
This probably means that you know the image size already. In this case, you could do something like this:
byte[] rgbData = ... // From your server
int nrOfPixels = rgbData.length / 3; // Three bytes per pixel.
int pixels[] = new int[nrOfPixels];
for(int i = 0; i < nrOfPixels; i++) {
int r = rgbData[3*i] & 0xFF;     // mask off Java's sign extension of the signed byte
int g = rgbData[3*i + 1] & 0xFF;
int b = rgbData[3*i + 2] & 0xFF;
pixels[i] = Color.rgb(r,g,b);
}
Bitmap bitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
I've been using the code below in one of my projects and so far it's been pretty solid. I'm not sure how picky decodeByteArray is when the data isn't PNG-compressed, though.
byte[] bytesImage;
Bitmap bmpOld; // Contains original Bitmap
Bitmap bmpNew;
ByteArrayOutputStream baoStream = new ByteArrayOutputStream();
bmpOld.compress(Bitmap.CompressFormat.PNG, 100, baoStream);
bytesImage = baoStream.toByteArray();
bmpNew = BitmapFactory.decodeByteArray(bytesImage, 0, bytesImage.length);
edit: I've adapted the code from this post to use RGB, so the code below should work for you. I haven't had a chance to test it yet so it may need some adjusting.
byte[] bytesImage = {0,1,2, 0,1,2, 0,1,2, 0,1,2};
int intByteCount = bytesImage.length;
int[] intColors = new int[intByteCount / 3];
int intWidth = 2;
int intHeight = 2;
final int intAlpha = 255;
if ((intByteCount / 3) != (intWidth * intHeight)) {
throw new ArrayStoreException();
}
for (int intIndex = 0; intIndex < intByteCount - 2; intIndex = intIndex + 3) {
intColors[intIndex / 3] = (intAlpha << 24) | ((bytesImage[intIndex] & 0xFF) << 16) | ((bytesImage[intIndex + 1] & 0xFF) << 8) | (bytesImage[intIndex + 2] & 0xFF);
}
Bitmap bmpImage = Bitmap.createBitmap(intColors, intWidth, intHeight, Bitmap.Config.ARGB_8888);
InputStream is = new java.net.URL(urldisplay).openStream();
byte[] colors = IOUtils.toByteArray(is);
int nrOfPixels = colors.length / 3; // Three bytes per pixel.
int pixels[] = new int[nrOfPixels];
for(int i = 0; i < nrOfPixels; i++) {
int r = (int)(0xFF & colors[3*i]);
int g = (int)(0xFF & colors[3*i+1]);
int b = (int)(0xFF & colors[3*i+2]);
pixels[i] = Color.rgb(r,g,b);
}
imageBitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_4444);
bmImage.setImageBitmap(imageBitmap);
Related
In Android, I want to load a PNG, get the RGB values in a byte array to do some computation, then I want to recreate a Bitmap with the new values.
To do that, I wrote 2 functions to convert a Bitmap into an RGB byte array and another one to convert an RGB byte array back to a Bitmap, the alpha channel can be ignored.
These are the conversion functions:
public static byte[] ARGB2byte(Bitmap img)
{
int width = img.getWidth();
int height = img.getHeight();
int[] pixels = new int[height*width];
byte rgbIm[] = new byte[height*width*3];
img.getPixels(pixels,0,width,0,0,width,height);
int pixel_count = 0;
int count=width*height;
while(pixel_count < count){
int inVal = pixels[pixel_count];
//Get the pixel channel values from int
int a = (inVal >> 24) & 0xff;
int r = (inVal >> 16) & 0xff;
int g = (inVal >> 8) & 0xff;
int b = inVal & 0xff;
rgbIm[pixel_count*3] = (byte)(r);
rgbIm[pixel_count*3 + 1] = (byte)(g);
rgbIm[pixel_count*3 + 2] = (byte)(b);
pixel_count++;
}
return rgbIm;
}
public static Bitmap byte2ARGB(byte[] data, int width, int height)
{
int pixelsCount = data.length / 3;
int[] pixels = new int[pixelsCount];
for (int i = 0; i < pixelsCount; i++)
{
int offset = 3 * i;
int r = data[offset];
int g = data[offset + 1];
int b = data[offset + 2];
pixels[i] = Color.rgb(r, g, b);
}
return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}
So I tried to test these functions by loading an image from the assets folder, converting it to a byte array, converting it back to a Bitmap immediately, and saving it to internal storage to check whether the final result matches the original image.
Unfortunately it doesn't; the color space seems wrong.
For example if I load this png:
and run the following code:
// Loads the png from assets folder
AssetManager am = getInstrumentation().getContext().getAssets();
InputStream is = am.open(filename);
Bitmap bitmap = BitmapFactory.decodeStream(is);
// Conversion to byte array
byte[] barray = ARGB2byte(bitmap);
Bitmap reconverted = Utils.byte2ARGB(barray, bitmap.getWidth(), bitmap.getHeight());
// Saving the reconverted Bitmap
try {
String folder_path = context.getFilesDir().getAbsolutePath() + "/";
File file = new File(folder_path + "test_conversion.png");
FileOutputStream fos = new FileOutputStream(file);
reconverted.compress(Bitmap.CompressFormat.PNG, 100, fos);
fos.close();
} catch (FileNotFoundException e) {
Log.d("saving bitmap", "File not found: " + e.getMessage());
} catch (IOException e) {
Log.d("saving bitmap", "Error accessing file: " + e.getMessage());
}
}
I get this as result:
What am I doing wrong?
If I use the same code to save the original Bitmap right after loading it, the image I get is correct, so I'm probably making some mistake during the conversion.
I also inspected the R, G, B values of the byte array from the original image, comparing them with the values from the byte array obtained from the reconverted image, and they are the same!
Is there something that the Android Bitmap library does under the hood, maybe with the alpha channel? I can't figure it out.
Thank you
First of all, sorry that I can't speak English well.
It's because casting a byte to an int in Java gives an unexpected (negative) result for values above 127, due to sign extension.
Use this instead:
int r = data[offset] & 0xFF;
int g = data[offset + 1] & 0xFF;
int b = data[offset + 2] & 0xFF;
See this post too.
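A quick illustration of the issue with a made-up value (hypothetical snippet, not from the original post):
byte pixel = (byte) 200;  // stored as -56 in Java's signed byte
int wrong = pixel;        // -56: the sign bit is extended into the int
int right = pixel & 0xFF; // 200: masking keeps only the low 8 bits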
I'm porting an iOS app to Android and have an issue with bitmap colors when creating the bitmap from a byte array. This code works perfectly on iOS (C# - Xamarin):
const int bitsPerComponent = 8;
const int bytePerPixel = 4;
var bytesPerRow = bytePerPixel * BarcodeImageWidth;
var colorSpace = CGColorSpace.CreateDeviceRGB();
var context = new CGBitmapContext(byteArray,
BarcodeImageWidth, BarcodeImageHeight,
bitsPerComponent, bytesPerRow,
colorSpace, CGImageAlphaInfo.NoneSkipFirst);
BarcodeImageView.Image = new UIImage(context.ToImage());
And this code on android makes bitmap with wrong colors:
var bitmap = Bitmap.CreateBitmap(barcode.Width, barcode.Height, Bitmap.Config.Argb8888);
bitmap.CopyPixelsFromBuffer(ByteBuffer.Wrap(imageBytes));
barcode.SetImageBitmap(bitmap);
I fixed that by manually creating the pixel array, skipping the first (alpha) byte of each pixel.
int pixelsCount = imageBytes.Length / 4;
int[] pixels = new int[pixelsCount];
for (int i = 0; i < pixelsCount; i++)
{
var offset = 4 * i;
int r = imageBytes[offset + 1];
int g = imageBytes[offset + 2];
int b = imageBytes[offset + 3];
pixels[i] = Color.Rgb(r, g, b);
}
var bitmap = Bitmap.CreateBitmap(pixels, barcode.Width, barcode.Height, Bitmap.Config.Argb8888);
barcode.SetImageBitmap(bitmap);
I'm trying to implement camera preview image data processing using camera2 api as proposed here: Camera preview image data processing with Android L and Camera2 API.
I successfully receive callbacks using onImageAvailableListener, but for future processing I need to obtain bitmap from YUV_420_888 android.media.Image. I searched for similar questions, but none of them helped.
Could you please suggest how to convert android.media.Image (YUV_420_888) to a Bitmap, or maybe there's a better way of listening for preview frames?
You can do this using the built-in Renderscript intrinsic, ScriptIntrinsicYuvToRGB. Code taken from Camera2 api Imageformat.yuv_420_888 results on rotated image:
#Override
public void onImageAvailable(ImageReader reader)
{
// Get the YUV data
final Image image = reader.acquireLatestImage();
final ByteBuffer yuvBytes = this.imageToByteBuffer(image);
// Convert YUV to RGB
final RenderScript rs = RenderScript.create(this.mContext);
final Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);
final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
allocationYuv.copyFrom(yuvBytes.array());
ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
scriptYuvToRgb.setInput(allocationYuv);
scriptYuvToRgb.forEach(allocationRgb);
allocationRgb.copyTo(bitmap);
// Release (in a real app, use or return the bitmap before recycling it)
bitmap.recycle();
allocationYuv.destroy();
allocationRgb.destroy();
rs.destroy();
image.close();
}
private ByteBuffer imageToByteBuffer(final Image image)
{
final Rect crop = image.getCropRect();
final int width = crop.width();
final int height = crop.height();
final Image.Plane[] planes = image.getPlanes();
final byte[] rowData = new byte[planes[0].getRowStride()];
final int bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
final ByteBuffer output = ByteBuffer.allocateDirect(bufferSize);
int channelOffset = 0;
int outputStride = 0;
for (int planeIndex = 0; planeIndex < 3; planeIndex++)
{
if (planeIndex == 0)
{
channelOffset = 0;
outputStride = 1;
}
else if (planeIndex == 1)
{
channelOffset = width * height + 1;
outputStride = 2;
}
else if (planeIndex == 2)
{
channelOffset = width * height;
outputStride = 2;
}
final ByteBuffer buffer = planes[planeIndex].getBuffer();
final int rowStride = planes[planeIndex].getRowStride();
final int pixelStride = planes[planeIndex].getPixelStride();
final int shift = (planeIndex == 0) ? 0 : 1;
final int widthShifted = width >> shift;
final int heightShifted = height >> shift;
buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));
for (int row = 0; row < heightShifted; row++)
{
final int length;
if (pixelStride == 1 && outputStride == 1)
{
length = widthShifted;
buffer.get(output.array(), channelOffset, length);
channelOffset += length;
}
else
{
length = (widthShifted - 1) * pixelStride + 1;
buffer.get(rowData, 0, length);
for (int col = 0; col < widthShifted; col++)
{
output.array()[channelOffset] = rowData[col * pixelStride];
channelOffset += outputStride;
}
}
if (row < heightShifted - 1)
{
buffer.position(buffer.position() + rowStride - length);
}
}
}
return output;
}
For a simpler solution see my implementation here:
Conversion YUV 420_888 to Bitmap (full code)
The function takes the media.image as input, and creates three RenderScript allocations based on the y-, u- and v-planes. It follows the YUV_420_888 logic as shown in this Wikipedia illustration.
However, here we have three separate image planes for the Y, U and V channels, so I take these as three byte[], i.e. U8 allocations. The y-allocation has size width * height bytes, while the u- and v-allocations have size width * height / 4 bytes each, reflecting the fact that each u byte covers 4 pixels (ditto each v byte).
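A rough sketch of that plane-splitting step (assuming a RenderScript instance rs and a media.Image image in YUV_420_888; this simplified version ignores row/pixel stride padding, which the full code in the linked answer handles):
ByteBuffer yBuf = image.getPlanes()[0].getBuffer();
ByteBuffer uBuf = image.getPlanes()[1].getBuffer();
ByteBuffer vBuf = image.getPlanes()[2].getBuffer();
byte[] y = new byte[yBuf.remaining()];
byte[] u = new byte[uBuf.remaining()];
byte[] v = new byte[vBuf.remaining()];
yBuf.get(y);
uBuf.get(u);
vBuf.get(v);
Allocation yAlloc = Allocation.createSized(rs, Element.U8(rs), y.length);
Allocation uAlloc = Allocation.createSized(rs, Element.U8(rs), u.length);
Allocation vAlloc = Allocation.createSized(rs, Element.U8(rs), v.length);
yAlloc.copyFrom(y);
uAlloc.copyFrom(u);
vAlloc.copyFrom(v);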
I wrote some code for this: it previews the YUV data and converts it to JPEG data, which I can then save as a Bitmap, a byte[], or anything else (have a look at the Allocation class).
And SDK document says: "For efficient YUV processing with android.renderscript: Create a RenderScript Allocation with a supported YUV type, the IO_INPUT flag, and one of the sizes returned by getOutputSizes(Allocation.class), Then obtain the Surface with getSurface()."
here is the code, hope it will help you:https://github.com/pinguo-yuyidong/Camera2/blob/master/camera2/src/main/rs/yuv2rgb.rs
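The SDK recommendation quoted above boils down to something like this sketch (assuming rs, width and height are in scope; the Surface obtained from the allocation is then added as an output target of the capture session):
Type.Builder yuvTypeBuilder = new Type.Builder(rs, Element.YUV(rs))
        .setX(width)
        .setY(height)
        .setYuvFormat(ImageFormat.YUV_420_888);
Allocation yuvAllocation = Allocation.createTyped(rs, yuvTypeBuilder.create(),
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
Surface previewSurface = yuvAllocation.getSurface();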
I have a C++ websocket server, and I want to send an OpenCV image (cv::Mat) to my Android client.
I understood that I should use a base64 string, but I can't find out how to produce it from my OpenCV frames.
I don't know how to convert a cv::Mat to a byte array.
Thank you
Hi, you can use the code below, which works for me.
C++ Client
Here we send the raw BGR bytes over the socket by accessing the Mat data pointer.
Before sending, make sure the Mat is continuous; otherwise make it continuous.
// Requires <opencv2/opencv.hpp>, <sys/socket.h>, <netinet/in.h>, <arpa/inet.h>,
// <unistd.h>, <cstdio> and <iostream>; ANDROID_IP is assumed to be defined elsewhere.
int sendImage(Mat frame){
int imgSize = frame.total()*frame.elemSize();
int bytes=0;
int clientSock;
const char* server_ip=ANDROID_IP;
int server_port=2000;
struct sockaddr_in serverAddr;
socklen_t serverAddrLen = sizeof(serverAddr);
if ((clientSock = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
printf("\n--> socket() failed.");
return -1;
}
serverAddr.sin_family = PF_INET;
serverAddr.sin_addr.s_addr = inet_addr(server_ip);
serverAddr.sin_port = htons(server_port);
if (connect(clientSock, (sockaddr*)&serverAddr, serverAddrLen) < 0) {
printf("\n--> connect() failed.");
return -1;
}
frame = (frame.reshape(0,1)); // to make it continuous
/* start sending images */
if ((bytes = send(clientSock, frame.data, imgSize, 0)) < 0){
printf("\n--> send() failed");
return -1;
}
/* if something went wrong, restart the connection */
if (bytes != imgSize) {
cout << "\n--> Connection closed " << endl;
close(clientSock);
return -1;
}
return 0;
}
Java Server
You should know the size of the image you are going to receive.
Receive the stream from the socket and convert it to a byte array.
Convert the BGR byte array to pixels and create a Bitmap.
Code for Receiving byte array from C++ Server
public static byte imageByte[];
int imageSize=921600;//expected image size 640X480X3
InputStream in = server.getInputStream();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte buffer[] = new byte[1024];
int remainingBytes = imageSize;
while (remainingBytes > 0) {
int bytesRead = in.read(buffer);
if (bytesRead < 0) {
throw new IOException("Unexpected end of data");
}
baos.write(buffer, 0, bytesRead);
remainingBytes -= bytesRead;
}
in.close();
imageByte = baos.toByteArray();
baos.close();
Code to Convert byte array to RGB bitmap image
int nrOfPixels = imageByte.length / 3; // Three bytes per pixel.
int pixels[] = new int[nrOfPixels];
for(int i = 0; i < nrOfPixels; i++) {
int b = imageByte[3*i];     // OpenCV Mat data arrives in BGR order
int g = imageByte[3*i + 1];
int r = imageByte[3*i + 2];
if (r < 0)
r = r + 256; //Convert to positive
if (g < 0)
g = g + 256; //Convert to positive
if (b < 0)
b = b + 256; //Convert to positive
pixels[i] = Color.rgb(r, g, b);
}
Bitmap bitmap = Bitmap.createBitmap(pixels, 640, 480, Bitmap.Config.ARGB_8888);
Check this answer to the question Serializing OpenCV Mat_ . If using Boost is not a problem for you, it can solve your problem. You will probably need some additional JNI magic on the client side.
You can take into account which of the data are important for you: cols (the number of columns), rows (the number of rows), data (which contains the pixel information), and type (the data type and number of channels).
You have to vectorize your matrix, because it is not necessarily continuous, and take into account the variation in the size of a pixel in memory.
Suppose:
cv::Mat m;
Then to allocate:
int depth; // size of one channel element, measured in bytes
switch (m.depth())
{
// ... you should check for all of the possibilities
case CV_16U:
depth = 2;
break;
default:
depth = 1; // e.g. CV_8U
}
int rowBytes = m.cols * m.channels() * depth; // bytes of pixel data per row
char *array = new char[4 + 4 + 4 + m.rows * rowBytes]; // rows + cols + type + data
And than write the header information:
int *rows = (int *)&array[0];
int *cols = (int *)&array[4];
int *type = (int *)&array[8];
*rows = m.rows;
*cols = m.cols;
*type = m.type();
And finally the data:
char *mPtr;
for (int i = 0; i < m.rows; i++)
{
mPtr = m.ptr<char>(i); // raw bytes of row i; the element type doesn't matter here
for (int j = 0; j < rowBytes; j++)
{
array[3 * 4 + i * rowBytes + j] = mPtr[j];
}
}
Hopefully no bugs in the code.
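For completeness, a minimal sketch of the matching deserialization on the Android side (this assumes the sender writes the header in little-endian byte order and the payload is a CV_8UC3 image; socket is a connected java.net.Socket):
DataInputStream in = new DataInputStream(socket.getInputStream());
byte[] header = new byte[12];
in.readFully(header);
ByteBuffer hdr = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
int rows = hdr.getInt();
int cols = hdr.getInt();
int type = hdr.getInt(); // for CV_8UC3 the payload is rows * cols * 3 bytes of BGR data
byte[] data = new byte[rows * cols * 3];
in.readFully(data);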
I want to make a video of my Android screen (what I am doing on the screen) programmatically.
Is there any good tutorial or help regarding this?
I have searched a lot, but all I found was how to capture the Android screen as a picture programmatically.
OK, fine: if I capture a lot of images, one every few milliseconds, how do I then make a video out of all those captured images programmatically on Android?
You can use the following code for screen capturing in Android.
Please view this URL:
http://android-coding.blogspot.in/2011/05/create-custom-dialog-with-dynamic.html
As long as you have bitmaps, you can turn them into video using JCodec ( http://jcodec.org ).
Here's a sample image sequence encoder: https://github.com/jcodec/jcodec/blob/master/src/main/java/org/jcodec/api/SequenceEncoder.java . You can modify it for your purposes by replacing BufferedImage with Bitmap.
Use these helper methods:
public static Picture fromBitmap(Bitmap src) {
Picture dst = Picture.create((int)src.getWidth(), (int)src.getHeight(), RGB);
fromBitmap(src, dst);
return dst;
}
public static void fromBitmap(Bitmap src, Picture dst) {
int[] dstData = dst.getPlaneData(0);
int[] packed = new int[src.getWidth() * src.getHeight()];
src.getPixels(packed, 0, src.getWidth(), 0, 0, src.getWidth(), src.getHeight());
for (int i = 0, srcOff = 0, dstOff = 0; i < src.getHeight(); i++) {
for (int j = 0; j < src.getWidth(); j++, srcOff++, dstOff += 3) {
int rgb = packed[srcOff];
dstData[dstOff] = (rgb >> 16) & 0xff;
dstData[dstOff + 1] = (rgb >> 8) & 0xff;
dstData[dstOff + 2] = rgb & 0xff;
}
}
}
public static Bitmap toBitmap(Picture src) {
Bitmap dst = Bitmap.createBitmap(src.getWidth(), src.getHeight(), Bitmap.Config.ARGB_8888);
toBitmap(src, dst);
return dst;
}
public static void toBitmap(Picture src, Bitmap dst) {
int[] srcData = src.getPlaneData(0);
int[] packed = new int[src.getWidth() * src.getHeight()];
for (int i = 0, dstOff = 0, srcOff = 0; i < src.getHeight(); i++) {
for (int j = 0; j < src.getWidth(); j++, dstOff++, srcOff += 3) {
packed[dstOff] = 0xff000000 | (srcData[srcOff] << 16) | (srcData[srcOff + 1] << 8) | srcData[srcOff + 2]; // force opaque alpha
}
}
dst.setPixels(packed, 0, src.getWidth(), 0, 0, src.getWidth(), src.getHeight());
}
You can also wait for the JCodec team to implement full Android support; they are working on it according to this: http://jcodec.org/news/no_deps.html
You can use the following code for screen capturing in Android.
ImageView v1 = (ImageView)findViewById(R.id.mImage);
v1.setDrawingCacheEnabled(true);
Bitmap bm = v1.getDrawingCache();
For creating a video from images, visit this link.
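If you go the image-sequence route, a rough sketch of grabbing frames periodically could look like the following (the 100 ms interval and the capturedFrames list are just placeholders; the frames could later be fed to an encoder such as the JCodec approach above):
final View root = findViewById(android.R.id.content);
final List<Bitmap> capturedFrames = new ArrayList<Bitmap>();
final Handler handler = new Handler();
handler.post(new Runnable() {
    @Override
    public void run() {
        root.setDrawingCacheEnabled(true);
        Bitmap cache = root.getDrawingCache();
        if (cache != null) {
            capturedFrames.add(Bitmap.createBitmap(cache)); // copy, since the cache is reused
        }
        root.setDrawingCacheEnabled(false);
        handler.postDelayed(this, 100); // roughly 10 frames per second
    }
});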