Byte Array to 2D array - android

What my code does is take a snapshot of the map, convert it to grayscale (OpenCV) and then turn it into a byte array.
What I don't know how to do is turn this byte array into a 2D array;
here is a block of the code.
Date now = new Date();
android.text.format.DateFormat.format("yyyy-MM-dd_hh:mm:ss", now);
try {
    StrictMode.ThreadPolicy old = StrictMode.getThreadPolicy();
    StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder(old)
            .permitDiskWrites()
            .build());
    String mPath = Environment.getExternalStorageDirectory().toString() + "/" + now + ".jpg";
    File imageFile = new File(mPath);
    FileOutputStream out = new FileOutputStream(imageFile);
    bitmap.compress(Bitmap.CompressFormat.PNG, 90, out);
    Log.d("Image:", "Saved snapshot. Starting conversion");
    // show snapshot in imageview
    ImageView imgview = (ImageView) findViewById(R.id.imageView);
    Bitmap smyBitmap = BitmapFactory.decodeFile(mPath);
    Bitmap myBitmap = BitmapFactory.decodeFile(mPath);
    imgview.setImageBitmap(smyBitmap);
    Mat mat = new Mat(myBitmap.getHeight(), myBitmap.getWidth(), CvType.CV_8UC3);
    Mat mat1 = new Mat(myBitmap.getHeight(), myBitmap.getWidth(), CvType.CV_8UC1);
    Imgproc.cvtColor(mat, mat1, Imgproc.COLOR_BGR2GRAY);
    ImageView imgview2 = (ImageView) findViewById(R.id.imageView2);
    Mat tmp = new Mat(myBitmap.getWidth(), myBitmap.getHeight(), CvType.CV_8UC1);
    Utils.bitmapToMat(myBitmap, tmp);
    Imgproc.cvtColor(tmp, tmp, Imgproc.COLOR_RGB2GRAY);
    Utils.matToBitmap(tmp, myBitmap);
    // Utils.matToBitmap(mat1, img);
    String mPathgray = Environment.getExternalStorageDirectory().toString() + "/" + now + "gray.jpg";
    File imageFilegray = new File(mPathgray);
    FileOutputStream gout = new FileOutputStream(imageFilegray);
    bitmap.compress(Bitmap.CompressFormat.PNG, 90, gout);
    byte[] byteArray = bttobyte(myBitmap);
    Log.d("location", " " + mPathgray);
    imgview2.setImageBitmap(myBitmap);
    Log.d("Activity", "Byte array: " + Arrays.toString(byteArray));
} catch (Exception e) {
    e.printStackTrace();
}
}
};
mMap.snapshot(callback);
Log.d("test", "test2");
}
});
}
public byte[] bttobyte(Bitmap bitmap) {
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.JPEG, 70, stream);
    return stream.toByteArray();
}

The problem you are actually trying to solve is extracting image data from a JPEG encoding in the byte array. The data is not stored pixel by pixel in a grid of width and height equal to your image size, as you seem to be assuming. That is a consequence of this line:
bitmap.compress(Bitmap.CompressFormat.JPEG, 70, stream);
For example, that byte data might actually encode the JFIF format:
0xffd8 - SOI - exactly 1 - start of image
e.g. 0xffe0 - zero or more - e.g. APP0 for JFIF - [some sort of header, either JFIF or EXIF].
0xffdb - DQT - one or more - define quantisation tables. These are the main source of quality in the image.
0xffcn - SOFn - exactly one - start of frame. SOF0 indicates a baseline DCT JPEG
0xffc4 - DHT - one or more - define Huffman tables. These are used in the final lossless encoding steps.
0xffda - SOS - exactly one (for baseline, more than one intermixed with more DHTs if progressive) - start of stream. This is where the actual data starts.
0xffd9 - EOI - exactly one - end of image.
Essentially, once you have compressed your bitmap this way, you need a JPEG decoder to parse the data.
I think what you actually want is to work with the original bitmap. There are APIs to extract pixel data directly, which is what you need if I understand your goal correctly:
Bitmap.getPixels(int[] pixels, int offset, int stride,
                 int x, int y, int width, int height)
The array pixels[] is filled with the pixel data. If you then want to store it in a w-by-h 2D array, you can copy the data row by row. Something like:
int offset = 0;
int[][] pixels2d = new int[h][w]; // one row per scanline
for (int[] row : pixels2d) {
    System.arraycopy(pixels, offset, row, 0, row.length);
    offset += row.length;
}

You just create a new array with two dimensions
byte[][] picture = new byte[myBitmap.getWidth()][myBitmap.getHeight()];
which you can access via
byte myByte = picture[x][y];
After this, you iterate your original byte array row by row, column by column.

Related

Imebra - Change color contrasts

I am able to load a DICOM image using Imebra and want to change the colors of the image, but can't figure out a way. I want to achieve functionality like in the Dicomite app.
Following is my code:
public void loadDCM() {
    com.imebra.DataSet loadedDataSet = com.imebra.CodecFactory.load(dicomPath.getPath());
    com.imebra.VOIs voi = loadedDataSet.getVOIs();
    com.imebra.Image image = loadedDataSet.getImageApplyModalityTransform(0);
    // com.imebra.Image image = loadedDataSet.getImage(0);
    String colorSpace = image.getColorSpace();
    long width = image.getWidth();
    long height = image.getHeight();
    TransformsChain transformsChain = new TransformsChain();
    com.imebra.DrawBitmap drawBitmap = new com.imebra.DrawBitmap(transformsChain);
    com.imebra.TransformsChain chain = new com.imebra.TransformsChain();
    if (com.imebra.ColorTransformsFactory.isMonochrome(image.getColorSpace())) {
        // Allocate a VOILUT transform. If the DataSet does not contain any pre-defined
        // settings then we will find the optimal ones.
        VOILUT voilutTransform = new VOILUT();
        // Retrieve the VOIs (center/width pairs)
        com.imebra.VOIs vois = loadedDataSet.getVOIs();
        // Retrieve the LUTs
        List<LUT> luts = new ArrayList<LUT>();
        for (long scanLUTs = 0;; scanLUTs++) {
            try {
                luts.add(loadedDataSet.getLUT(new com.imebra.TagId(0x0028, 0x3010), scanLUTs));
            } catch (Exception e) {
                break;
            }
        }
        if (!vois.isEmpty()) {
            voilutTransform.setCenterWidth(vois.get(0).getCenter(), vois.get(0).getWidth());
        } else if (!luts.isEmpty()) {
            voilutTransform.setLUT(luts.get(0));
        } else {
            voilutTransform.applyOptimalVOI(image, 0, 0, width, height);
        }
        chain.addTransform(voilutTransform);
        com.imebra.DrawBitmap draw = new com.imebra.DrawBitmap(chain);
        // Ask for the size of the buffer (in bytes)
        long requestedBufferSize = draw.getBitmap(image, drawBitmapType_t.drawBitmapRGBA, 4, new byte[0]);
        byte buffer[] = new byte[(int) requestedBufferSize]; // Ideally you want to reuse this in subsequent calls to getBitmap()
        ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
        // Now fill the buffer with the image data and create a bitmap from it
        drawBitmap.getBitmap(image, drawBitmapType_t.drawBitmapRGBA, 4, buffer);
        Bitmap renderBitmap = Bitmap.createBitmap((int) image.getWidth(), (int) image.getHeight(), Bitmap.Config.ARGB_8888);
        renderBitmap.copyPixelsFromBuffer(byteBuffer);
        image_view.setImageBitmap(renderBitmap);
    }
}
If you are dealing with a monochrome image and you want to modify the presentation luminosity/contrast, then you have to modify the parameters of the VOILUT transform (voilutTransform variable in your code).
You can get the center and width that the transform is applying to the image before calculating the bitmap to be displayed, then modify them before calling drawBitmap.getBitmap again.
E.g., to double the contrast:
voilutTransform.setCenterWidth(voilutTransform.getCenter(), voilutTransform.getWidth() / 2);
// Now fill the buffer with the image data and create a bitmap from it
drawBitmap.getBitmap(image, drawBitmapType_t.drawBitmapRGBA, 4, buffer);
See this answer for more details about the center/width.
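For intuition: center/width define a linear window that maps stored pixel values to display values, and halving the width steepens the ramp, i.e. doubles the contrast. A plain-Java sketch of the idea (a simplified linear window for an 8-bit output, not Imebra's exact VOILUT math):

```java
public class VoiWindow {
    // Map a stored pixel value to 0..255 through a simplified linear VOI window.
    // Values below the window clip to 0, above it to 255.
    static int applyWindow(double value, double center, double width) {
        double lo = center - width / 2;
        double hi = center + width / 2;
        if (value <= lo) return 0;
        if (value >= hi) return 255;
        return (int) Math.round((value - lo) / width * 255);
    }

    public static void main(String[] args) {
        // Same stored value, window width halved: output moves away from mid-grey.
        System.out.println(applyWindow(1100, 1000, 400)); // prints 191
        System.out.println(applyWindow(1100, 1000, 200)); // prints 255 (clipped: stronger contrast)
    }
}
```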

Android Nexus 10 blank png from JNI ByteArray

I am developing an application using JNI and a third party engine (Unreal Engine 4) in charge of managing the graphics pipeline/rendering.
The third party engine is written in C++, thus the need of using JNI to bridge it with Android.
The app requires to save a screenshot on the device of what is being displayed on the screen (in other words a dump of the framebuffer).
The third party engine exposes an API that calls a custom handler, passing in the width, height and color data of the screen.
colorData is a custom container of uint8 values representing RGBA components.
I successfully managed to convert the colorData to a jbyteArray and pass it as an argument to a function on the Java side.
On the Java side things are simpler: I create a bitmap from the byteArray, flip it and save it as a jpg/png via a custom AsyncTask.
The problem:
The code works marvellously on a Samsung Galaxy S4/Note 3 (both Android 5.0), whereas on a Nexus 10 running Android 5.1.1 the PNG that gets saved is blank.
I am afraid the problem lies at a deeper level than the ones I have access to, i.e. graphics card/drivers/OS version, but I am not an expert in that field, so I would like to know if someone has already experienced a similar issue or could shed some light on what is causing it.
This is the code used to bridge the engine with Java (I started C++ with this project, so maybe there are ownership/memory issues in this snippet; you are more than welcome to correct me in that case):
void AndroidInterface::SaveBitmap(const TArray<FColor>& colorData, int32 width, int32 height) {
    JNIEnv* env = FAndroidApplication::GetJavaEnv(true);
    TArray<FColor> bitmap = colorData;
    TArray<uint8> compressedBitmap;
    FImageUtils::CompressImageArray(width, height, bitmap, compressedBitmap);
    size_t len = width * height * compressedBitmap.GetTypeSize();
    LOGD("===========Width: %i, height: %i - Len of bitmap element: %i==========", width, height, len);
    jbyteArray bitmapData = env->NewByteArray(len);
    LOGD("===========Called new byte array==========");
    env->SetByteArrayRegion(bitmapData, 0, len, (const jbyte*)compressedBitmap.GetData());
    LOGD("===========Populated byte array==========");
    check(bitmapData != NULL && "Couldn't create byte array");
    jclass gameActivityClass = FAndroidApplication::FindJavaClass("com/epicgames/ue4/GameActivity");
    check(gameActivityClass != nullptr && "GameActivityClassNotFound");
    // get the method signature to take a game screenshot
    jmethodID saveScreenshot = env->GetMethodID(gameActivityClass, "saveScreenshot", "([BII)V");
    env->CallVoidMethod(AndroidInterface::sGameActivity, saveScreenshot, bitmapData, width, height);
    env->DeleteLocalRef(bitmapData);
}
This is the java code in charge of converting from byte[] to Bitmap:
public void saveScreenshot(final byte[] colors, int width, int height) {
    android.util.Log.d("GameActivity", "======saveScreenshot called. Width: " + width + " height: " + height + "=======");
    android.util.Log.d("GameActivity", "Color content---->\n " + Arrays.toString(colors));
    final BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inPreferredConfig = Bitmap.Config.ARGB_8888;
    final Bitmap bitmap = BitmapFactory.decodeByteArray(colors, 0, colors.length, opts);
    final FlipBitmap flipBitmapTask = new FlipBitmap();
    flipBitmapTask.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, bitmap);
}
FlipBitmap is the AsyncTask in charge of saving the bitmap to a file:
private class FlipBitmap extends AsyncTask<Bitmap, Void, File> {
    @Override
    protected File doInBackground(Bitmap... params) {
        final Bitmap src = params[0];
        final File file = new File(MainActivity.SCREENSHOT_FOLDER + "screenshot" + System.currentTimeMillis() + ".png");
        final Matrix matrix = new Matrix();
        matrix.setScale(1, -1);
        final Bitmap dst = Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), matrix, false);
        try {
            final FileOutputStream out = new FileOutputStream(file);
            dst.compress(Bitmap.CompressFormat.PNG, 90, out);
            out.flush();
            out.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return file;
    }

    @Override
    protected void onPostExecute(File file) {
        android.util.Log.d("GameActivity", "FlipBitmap onPostExecute");
        if (file.exists()) {
            final Intent i = new Intent(Intent.ACTION_SENDTO);
            i.setData(Uri.parse("mailto:" + Globals.Network.MAIL_TO));
            i.putExtra(Intent.EXTRA_SUBJECT, Globals.Network.MAIL_SUBJECT);
            i.putExtra(Intent.EXTRA_TEXT, mBodyEmail);
            i.putExtra(Intent.EXTRA_STREAM, Uri.parse("file://" + file.getAbsolutePath()));
            startActivity(Intent.createChooser(i, "Invia via email"));
        }
    }
}
Thanks in advance!

onPreviewFrame YUV grayscale skewed

I'm trying to get the picture from a SurfaceView where I have the camera preview running.
I've already implemented onPreviewFrame, and it's called correctly, as the debugger shows me.
The problem I'm facing now is that the byte[] data I receive in the method is in the YUV color space (NV21), and I'm trying to convert it to grayscale to generate a Bitmap and then store it in a file.
The conversion process that I'm following is:
public Bitmap convertYuvGrayScaleRGB(byte[] yuv, int width, int height) {
    int[] pixels = new int[width * height];
    for (int i = 0; i < height * width; i++) {
        int grey = yuv[i] & 0xff;
        pixels[i] = 0xFF000000 | (grey * 0x00010101);
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}
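In isolation, the packing in that loop relies on multiplying by 0x00010101 to replicate the grey byte into the R, G and B channels, with 0xFF000000 setting full alpha:

```java
public class GreyPack {
    // Expand one NV21 luma byte into an opaque ARGB_8888 pixel.
    static int packGrey(byte y) {
        int grey = y & 0xff;                      // unsigned luma value 0..255
        return 0xFF000000 | (grey * 0x00010101);  // A=255, R=G=B=grey
    }

    public static void main(String[] args) {
        System.out.printf("%08X%n", packGrey((byte) 0x80)); // prints FF808080
    }
}
```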
The procedure for storing it to a file is:
Bitmap bitmap = convertYuvGrayScaleRGB(data, width, height);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 50, bytes);
File f = new File(Environment.getExternalStorageDirectory()
        + File.separator + "test.jpg");
Log.d("Camera", "File: " + f.getAbsolutePath());
try {
    f.createNewFile();
    FileOutputStream fo = new FileOutputStream(f);
    fo.write(bytes.toByteArray());
    fo.close();
    bitmap.recycle();
    bitmap = null;
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
Although, the result I've got is the following:
I can't find any obvious mistake in your code, but I've already met this kind of skewed image before. When this happened to me, it was due to one of the following:
At some point in the code, the image width and height are swapped,
Or the original image you're trying to convert has padding, in which case you will need a stride in addition to the width and height.
Hope this helps!
Probably the width of the image you are converting is not even; in that case it is padded in memory.
Let me have a look at the docs...
It seems more complicated than this. If you want your code to work as it is now, the width will have to be a multiple of 16.
From the docs:
public static final int YV12
Added in API level 9. Android YUV format.
This format is exposed to software decoders and applications.
YV12 is a 4:2:0 YCrCb planar format comprised of a WxH Y plane followed by (W/2) x (H/2) Cr and Cb planes.
This format assumes:
- an even width
- an even height
- a horizontal stride multiple of 16 pixels
- a vertical stride equal to the height

y_size = stride * height
c_stride = ALIGN(stride/2, 16)
c_size = c_stride * height/2
size = y_size + c_size * 2
cr_offset = y_size
cb_offset = y_size + c_size
I just had this problem with the S3. My problem was that I used the wrong dimensions for the preview. I assumed the camera was 16:9 when it was actually 4:3.
Use Camera.getParameters().getPreviewSize() to see what the output is in.
I made this:
int frameSize = width * height;
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        ret[frameSize + (i >> 1) * width + (j & ~1) + 1] = 127; // U
        ret[frameSize + (i >> 1) * width + (j & ~1) + 0] = 127; // V
    }
}
So simple, but it works really well and fast ;)

Buffer not large enough for pixels

I get the error that my buffer is not large enough for pixels. Any recommendations? The Bitmap b should be the same size as the gSaveBitmap that I'm trying to place its pixels into.
if (gBuffer == null) {
    Bitmap b = Bitmap.createScaledBitmap(gBitmap, mWidth, mHeight, false);
    //gBuffer = ByteBuffer.allocateDirect(b.getRowBytes() * b.getHeight() * 4);
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    b.compress(Bitmap.CompressFormat.PNG, 100, stream);
    gBuffer = ByteBuffer.wrap(stream.toByteArray());
    b.recycle();
}
gSaveBitmap.copyPixelsFromBuffer(gBuffer);
Update: The below code gives the exact same error without any compression involved.
if (gBuffer == null) {
    Bitmap b = Bitmap.createScaledBitmap(gBitmap, mWidth, mHeight, false);
    int bytes = b.getWidth() * b.getHeight() * 4;
    gBuffer = ByteBuffer.allocate(bytes);
    b.copyPixelsToBuffer(gBuffer);
    b.recycle();
}
gSaveBitmap.copyPixelsFromBuffer(gBuffer);
Update: Solved the issue by doubling the size of gBuffer. Perhaps someone can tell me why this is the correct size. Also ... the picture is in the wrong rotation and needs to be rotated 90 degrees. Any ideas how the data would need to be rearranged in gBuffer?
gBuffer = ByteBuffer.allocate(b.getRowBytes()*b.getHeight()*2);
I think I might have solved this one-- take a look at the source (version 2.3.4_r1, last time Bitmap was updated on Grepcode prior to 4.4) for Bitmap::copyPixelsFromBuffer()
public void copyPixelsFromBuffer(Buffer src) {
    checkRecycled("copyPixelsFromBuffer called on recycled bitmap");
    int elements = src.remaining();
    int shift;
    if (src instanceof ByteBuffer) {
        shift = 0;
    } else if (src instanceof ShortBuffer) {
        shift = 1;
    } else if (src instanceof IntBuffer) {
        shift = 2;
    } else {
        throw new RuntimeException("unsupported Buffer subclass");
    }
    long bufferBytes = (long) elements << shift;
    long bitmapBytes = (long) getRowBytes() * getHeight();
    if (bufferBytes < bitmapBytes) {
        throw new RuntimeException("Buffer not large enough for pixels");
    }
    nativeCopyPixelsFromBuffer(mNativeBitmap, src);
}
The wording of the error is a bit unclear, but the code clarifies-- it means that your buffer is calculated as not having enough data to fill the pixels of your bitmap.
This is because they use the buffer's remaining() method to figure the capacity of the buffer, which takes into account the current value of its position attribute. If you call rewind() on your buffer before you invoke copyPixelsFromBuffer(), you should see the runtime exception disappear.
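The remaining()/rewind() interaction can be demonstrated with plain java.nio, outside Android: after filling a buffer, its position sits at the end, so remaining() reports zero even though the data is all there.

```java
import java.nio.ByteBuffer;

public class RewindDemo {
    // Fill a buffer completely and report remaining() before and after rewind().
    static int[] remainingBeforeAfterRewind(int capacity) {
        ByteBuffer buf = ByteBuffer.allocate(capacity);
        buf.put(new byte[capacity]);  // position advances to capacity
        int before = buf.remaining(); // 0 -- copyPixelsFromBuffer would see "no data"
        buf.rewind();                 // position back to 0, limit untouched
        int after = buf.remaining();  // full capacity visible again
        return new int[]{before, after};
    }

    public static void main(String[] args) {
        int[] r = remainingBeforeAfterRewind(16);
        System.out.println(r[0] + " -> " + r[1]); // prints 0 -> 16
    }
}
```

This is why calling rewind() on gBuffer after copyPixelsToBuffer() and before copyPixelsFromBuffer() makes the exception go away.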
I found the answer for this issue:
You should always set the buffer size larger than the bitmap size, since the Bitmap layout has changed across Android versions.
You can log the code below to see the buffer size and bitmap size (the Android API level should be >= 12 to use the following log):
Log.i("", "Bitmap size = " + mBitmap.getByteCount());
Log.i("", "Buffer size = " + mBuffer.capacity());
It should work.
Thanks,

Convert Image byte[] from CMYK to RGB?

I need to convert the bytes of a CMYK image to bytes for an RGB image.
I think it's possible to skip the header bytes, convert the other bytes to RGB and then change the header bytes to the RGB format.
Which are the header bytes to change for RGB?
Which is the formula for the color conversion without an ICC profile?
Can anybody help me complete this code?
//Decode with inSampleSize
Bitmap Resultbitmap;
String path = "imageFileCmyk.jpg";
int scale = 4;
BitmapFactory.Options o2 = new BitmapFactory.Options();
o2.inPurgeable = true;
o2.inSampleSize = scale;
o2.inDither = false;
Resultbitmap = BitmapFactory.decodeStream(new FileInputStream(path), null, o2);
if (Resultbitmap == null) // Warning!! unsupported color conversion request
{
    File tmpfile = new File(path);
    FileInputStream is = new FileInputStream(tmpfile.getPath());
    byte[] cmykBytes = new byte[(int) tmpfile.length()];
    byte[] rgbBytes = new byte[(int) tmpfile.length()];
    is.read(cmykBytes);
    for (int i = 0; cmykBytes.length > i; ++i) {
        if (i > 11) // skip header's bytes, is it correct ??
        {
            rgbBytes[i] = cmykBytes[i]?? // How ??
        }
    }
    // new header bytes for RGB format
    rgbBytes[??] = ?? // How ??
    Resultbitmap = BitmapFactory.decodeByteArray(rgbBytes, 0, rgbBytes.length, o2);
}
return Resultbitmap;
Thanks,
Alberto
I made it! I found a nice tool for correctly handling *.jpg files on the Android platform with uncommon colorspaces like CMYK, YCCK and so on.
Use https://github.com/puelocesar/android-lib-magick, it's a free and easy-to-configure Android library.
Here is a snippet for converting CMYK images to RGB colorspace:
ImageInfo info = new ImageInfo(Environment.getExternalStorageDirectory().getAbsolutePath() + "/cmyk.jpg");
MagickImage imageCMYK = new MagickImage(info);
Log.d(TAG, "ColorSpace BEFORE => " + imageCMYK.getColorspace());
boolean status = imageCMYK.transformRgbImage(ColorspaceType.CMYKColorspace);
Log.d(TAG, "ColorSpace AFTER => " + imageCMYK.getColorspace() + ", success = " + status);
imageCMYK.setFileName(Environment.getExternalStorageDirectory().getAbsolutePath() + "/cmyk_new.jpg");
imageCMYK.writeImage(info);
Bitmap bitmap = BitmapFactory.decodeFile(Environment.getExternalStorageDirectory().getAbsolutePath() + "/Docs/cmyk_new.jpg");
if (bitmap == null) {
    // if decoding fails, create empty image
    bitmap = Bitmap.createBitmap(imageCMYK.getWidth(), imageCMYK.getHeight(), Config.ARGB_8888);
}
ImageView imageView1 = (ImageView) findViewById(R.id.imageView1);
imageView1.setImageBitmap(bitmap);
Just importing android-lib-magick, the code to transform its colorspace is quite simple:
ImageInfo info = new ImageInfo(path); // path where the CMYK image is on your device
MagickImage imageCMYK = new MagickImage(info);
imageCMYK.transformRgbImage(ColorspaceType.CMYKColorspace);
Bitmap bitmap = MagickBitmap.ToBitmap(imageCMYK);
Kord, it is not even needed to save the transformed image again; you can just create a bitmap from it. If the image is not on your device or SD card, you will need to download it first.
I have implemented two simple methods using android-lib-magick called "getCMYKImageFromPath" and "getCMYKImageFromURL". You can see the code here:
https://github.com/Mariovc/GetCMYKImage
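As for the formula part of the question: a naive device-CMYK to RGB conversion without an ICC profile can be sketched as below. Note this is only a rough approximation; real CMYK JPEGs (and especially Adobe YCCK ones) need proper color management like the ImageMagick route above.

```java
public class CmykToRgb {
    // Naive device CMYK -> RGB, all components in 0..255, no ICC profile:
    // R = 255 * (1 - C/255) * (1 - K/255), and likewise for G (from M) and B (from Y).
    static int[] toRgb(int c, int m, int y, int k) {
        int r = (255 - c) * (255 - k) / 255;
        int g = (255 - m) * (255 - k) / 255;
        int b = (255 - y) * (255 - k) / 255;
        return new int[]{r, g, b};
    }

    public static void main(String[] args) {
        int[] white = toRgb(0, 0, 0, 0);    // no ink -> white
        int[] red = toRgb(0, 255, 255, 0);  // magenta + yellow -> red
        System.out.println(white[0] + "," + red[0] + "," + red[1]); // prints 255,255,0
    }
}
```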

Categories

Resources