I wanted to write a simple image-processing function.
It should run like this:
Load a JPEG
Convert it to a Bitmap
Save the bitmap as a byte array
Process the data
Convert back to a Bitmap and show the image.
public class MainActivity extends Activity {
ImageView imgView;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
imgView = (ImageView) findViewById(R.id.imageView1);
String filename = Environment.getExternalStorageDirectory() + "/Test/"
+ "DSC00751.JPG";
Bitmap map = BitmapFactory.decodeFile(filename);
ByteArrayOutputStream bout = new ByteArrayOutputStream();
// Convert image so it can be stored in byteArray
map.compress(CompressFormat.JPEG, 100, bout);
byte[] array = bout.toByteArray();
// Process image.
for (int i = 0; i < array.length; i++) {
if (array[i] < 0) {
array[i] = (byte) 200;
}
}
// Convert result and display
Bitmap bmp = BitmapFactory.decodeByteArray(array, 0, array.length);
imgView.setImageBitmap(bmp);
Toast.makeText(getApplicationContext(), "done", Toast.LENGTH_SHORT).show();
}
}
I get a white screen in return, no matter what my processing code looks like.
I tried using for (byte b : array) before, but that always returned the original image.
What am I doing wrong?
// Process image.
for (int i = 0; i < array.length; i++) {
if (array[i] < 0) {
array[i] = (byte) 200;
}
}
In this code you are changing the compressed JPEG bytes directly, which corrupts the encoded data; that's why the image appears white.
If you need to process an image, you need to work on the decoded pixels instead:
Bitmap bitmap =...;
int[] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
Now you have the image's pixel array (an int[] of packed ARGB values), which you can safely modify and write back with bitmap.setPixels().
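Each int that getPixels() returns packs the alpha, red, green, and blue channels into one value. A plain-Java sketch of reading and rewriting those channels (no Android classes needed; the brighten operation and the +50 offset are only illustrative):

```java
public class PixelOps {
    // Brighten one packed ARGB pixel, clamping each channel at 255
    // so the value can't wrap around into a different color.
    static int brighten(int pixel) {
        int a = (pixel >>> 24) & 0xFF;
        int r = (pixel >>> 16) & 0xFF;
        int g = (pixel >>> 8) & 0xFF;
        int b = pixel & 0xFF;
        r = Math.min(r + 50, 255);
        g = Math.min(g + 50, 255);
        b = Math.min(b + 50, 255);
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int dark = 0xFF101010; // opaque, almost black
        System.out.println(Integer.toHexString(PixelOps.brighten(dark))); // ff424242
    }
}
```

Running a loop like this over the int[] from getPixels() and then calling setPixels() with the same array updates the image without ever touching compressed JPEG bytes.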
Related
I'm trying to make an application that compares pictures taken with the camera against others stored on the SD card.
If I compare two images stored on the SD card it works fine, but when I try to use the camera it freezes.
That's a part of my code:
private boolean isSame = false;
final int c = 2;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
if (!OpenCVLoader.initDebug()) {
System.out.println("Error");
}
final Mat[] vector = new Mat[2];
findViewById(R.id.button).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
try {
File fotofatte = new File(Environment.getExternalStorageDirectory().getAbsolutePath()+"/images"+"/taken");
if(!fotofatte.exists()) {
fotofatte.mkdirs();
}
Intent imageIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
File image = new File(fotofatte, "image_001.jpeg");
Uri uriSavedImage = Uri.fromFile(image);
imageIntent.putExtra(MediaStore.EXTRA_OUTPUT, uriSavedImage);
startActivityForResult(imageIntent, 1);
for (int i = 0; i < c; i++) {
final String baseDir = Environment.getExternalStorageDirectory().getAbsolutePath();
if (baseDir == null) {
throw new IOException();
}
Bitmap bm = BitmapFactory.decodeResource( getResources(), R.drawable.tmp);
Mat img = new Mat(bm.getHeight(), bm.getWidth(), CvType.CV_8UC1);
Utils.bitmapToMat(bm, img);
Bitmap bm2 = BitmapFactory.decodeFile(fotofatte+ "/image_001.jpeg");
Mat templ = new Mat(bm2.getHeight(), bm2.getWidth(), CvType.CV_8UC1);
Utils.bitmapToMat(bm2, templ);
Bitmap bm3 = BitmapFactory.decodeResource( getResources(), R.drawable.dd);
Mat img2 = new Mat(bm3.getHeight(), bm3.getWidth(), CvType.CV_8UC1);
Utils.bitmapToMat(bm3, img2);
vector[0] = img;
vector[1] = img2;
int result_cols = templ.cols() - vector[i].cols() + 1;
int result_rows = templ.rows() - vector[i].rows() + 1;
Mat result = new Mat(result_cols, result_rows, CvType.CV_32FC1);
// / Do the Matching and Normalize
Imgproc.matchTemplate(templ, vector[i], result, Imgproc.TM_CCOEFF_NORMED);
/* Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1,
new Mat());*/
// / Localizing the best match with minMaxLoc
Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
// Point matchLoc;
/* if (Imgproc.TM_CCOEFF == Imgproc.TM_SQDIFF
|| Imgproc.TM_CCOEFF == Imgproc.TM_SQDIFF_NORMED) {
matchLoc = mmr.minLoc;
System.out.println(mmr.maxVal);
System.out.println(mmr.minVal);
} else {
matchLoc = mmr.maxLoc;
System.out.println(mmr.maxVal);
System.out.println(mmr.minVal);
}*/
System.out.println(mmr.maxVal);
}
} catch (IOException e) {
System.out.println(e.toString());
}
}
});
}
}
I don't receive any error in my log.
Thank you for the help.
UPDATE
Hi guys, I've resized my bitmaps and now I'm able to compare pictures taken with the camera against templates stored on the SD card.
But now I'm facing a new problem.
If I take a photo of a glass to use as a template and then use another photo of the same glass for the comparison, I only get 0.4175666272640228 as the result (mmr.maxVal).
How can I fix this?
I'm working on image steganography in Android right now. For that I need to convert the image into a bit array and decode it back. But when I try to convert the image back into its original shape, my ImageView shows only black. Here is my code:
btnEncode = (Button) findViewById(R.id.encode);
btnEncode.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
// TODO Auto-generated method stub
//imgPath.setText(imageToBase64(selectedImagePath));
ImageView imageView=(ImageView)findViewById(R.id.imageView1);
BitmapDrawable drawable = (BitmapDrawable) imageView.getDrawable();
Bitmap bitmap = drawable.getBitmap();
bytes = getBytesFromBitmap(bitmap);
StringBuilder binary = new StringBuilder();
for (byte b : bytes)
{
int val = b;
for (int i = 0; i < 8; i++)
{
binary.append((val & 128) == 0 ? 0 : 1);
val <<= 1;
}
binary.append(' ');
}
//To save the binary in newString
String ImageEncoded=new String(binary.toString());
TextView imgData=(TextView)findViewById(R.id.txtResult);
imgData.setText(ImageEncoded);
}
});
btnDecode = (Button) findViewById(R.id.decode);
btnDecode.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v)
{
// TODO Auto-generated method stub
ImageView imageView=(ImageView)findViewById(R.id.imageView1);
BitmapDrawable drawable = (BitmapDrawable) imageView.getDrawable();
Bitmap bitmap = drawable.getBitmap();
bytes = getBytesFromBitmap(bitmap);
Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
ImageView image = (ImageView) findViewById(R.id.imageView2);
image.setImageBitmap(bmp);
}
});
public static byte[] getBytesFromBitmap(Bitmap bitmap)
{
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(CompressFormat.JPEG, 70, stream);
return stream.toByteArray();
}
It's about your compression format. Use CompressFormat.PNG instead of CompressFormat.JPEG. This is caused by the fact that JPEG doesn't handle transparency the way PNG does.
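As for the bit string itself: the encode loop from the question is reversible, so the original bytes can be recovered before decoding the bitmap. A plain-Java sketch (the helper names are mine; toBinary mirrors the question's loop exactly):

```java
public class BitString {
    // Mirror of the question's encode loop: MSB-first bits per byte,
    // each group separated by a space.
    static String toBinary(byte[] bytes) {
        StringBuilder binary = new StringBuilder();
        for (byte b : bytes) {
            int val = b;
            for (int i = 0; i < 8; i++) {
                binary.append((val & 128) == 0 ? 0 : 1);
                val <<= 1;
            }
            binary.append(' ');
        }
        return binary.toString();
    }

    // Inverse: parse each 8-bit group back into a byte.
    static byte[] fromBinary(String binary) {
        String[] groups = binary.trim().split(" ");
        byte[] bytes = new byte[groups.length];
        for (int i = 0; i < groups.length; i++) {
            bytes[i] = (byte) Integer.parseInt(groups[i], 2);
        }
        return bytes;
    }

    public static void main(String[] args) {
        String bits = BitString.toBinary(new byte[]{0, -1, 42});
        System.out.println(bits); // "00000000 11111111 00101010 "
    }
}
```

The round trip toBinary -> fromBinary restores the exact PNG bytes, which BitmapFactory.decodeByteArray can then decode.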
I'm downloading a bitmap from a URL with the following code. If I do this cyclically (like streaming images from a camera), the bitmap will be reallocated again and again. So I wonder if there is a way to write the newly downloaded byte array into the existing bitmap that is already allocated in memory.
public static Bitmap downloadBitmap(String url) {
try {
URL newUrl = new URL(url);
return BitmapFactory.decodeStream(newUrl.openConnection()
.getInputStream());
} catch (MalformedURLException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
In the bitmap memory-management documentation, the section entitled 'Manage Memory on Android 3.0 and Higher' explains how to reuse bitmap space so that the Bitmap itself does not need to be reallocated. If you are indeed streaming frames from a camera, this works back to Honeycomb, since the frames are all the same size. Otherwise it may only help on 4.4 KitKat and later.
Alternatively, you could store a local WeakReference (if you want it to be collected in case of memory pressure) within the downloadBitmap class, reuse that space, and return it instead of creating a new bitmap on each call.
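The WeakReference idea can be sketched in plain Java like this (class, field, and method names are illustrative; the same pattern applies to a cached Bitmap):

```java
import java.lang.ref.WeakReference;

public class BitmapCache {
    // Holds the last buffer weakly, so the GC may reclaim it under
    // memory pressure; callers must be prepared for a fresh allocation.
    private static WeakReference<byte[]> cached;

    static byte[] getOrCreate(int size) {
        byte[] buf = (cached != null) ? cached.get() : null;
        if (buf == null || buf.length < size) {
            buf = new byte[size]; // allocate only when needed
            cached = new WeakReference<>(buf);
        }
        return buf;
    }

    public static void main(String[] args) {
        byte[] first = BitmapCache.getOrCreate(1024);
        byte[] second = BitmapCache.getOrCreate(512); // reuses the same buffer
        System.out.println(first == second); // true
    }
}
```

Because the reference is weak, the buffer survives between download cycles only as long as memory allows, which is exactly the trade-off described above.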
The app is slowed down because it allocates and deallocates memory in each cycle. There are three ways to avoid that.
The first version works without OpenCV but still allocates some memory in each cycle. The amount is much smaller, though, so it is at least two times faster. How? By reusing an existing, already-allocated buffer (byte[]). I'm using it with a pre-allocated StreamInfo buffer of 1,000,000 bytes (about double the size I'm expecting).
By the way, reading the input stream in chunks and using BitmapFactory.decodeByteArray is much faster than feeding the URL's input stream directly into BitmapFactory.decodeStream.
public static class StreamInfo {
public byte[] buffer;
public int length;
public StreamInfo(int length) {
buffer = new byte[length];
}
}
public static StreamInfo imageByte(StreamInfo buffer, String url) {
try {
URL newUrl = new URL(url);
InputStream is = (InputStream) newUrl.getContent();
byte[] tempBuffer = new byte[8192];
int bytesRead;
int position = 0;
if (buffer != null) {
// re-using existing buffer
while ((bytesRead = is.read(tempBuffer)) != -1) {
System.arraycopy(tempBuffer, 0, buffer.buffer, position,
bytesRead);
position += bytesRead;
}
buffer.length = position;
return buffer;
} else {
// allocating new buffer
ByteArrayOutputStream output = new ByteArrayOutputStream();
while ((bytesRead = is.read(tempBuffer)) != -1) {
output.write(tempBuffer, 0, bytesRead);
position += bytesRead;
}
byte[] result = output.toByteArray();
buffer = new StreamInfo(result.length * 2);
buffer.length = position;
System.arraycopy(result, 0, buffer.buffer, 0, result.length);
return buffer;
}
} catch (MalformedURLException e) {
e.printStackTrace();
return null;
} catch (IOException e) {
e.printStackTrace();
return null;
}
}
The second version uses an OpenCV Mat and a pre-allocated Bitmap. Receiving the stream is done as in version one, so it no longer needs any further memory allocation (for details check out this link). This version works fine, but it is a bit slower because it converts between OpenCV Mat and Bitmap.
private NetworkCameraFrame frame;
private HttpUtils.StreamInfo buffer = new HttpUtils.StreamInfo(1000000);
private MatOfByte matForConversion;
private NetworkCameraFrame receive() {
buffer = HttpUtils.imageByte(buffer, uri);
if (buffer == null || buffer.length == 0)
return null;
Log.d(TAG, "Received image with byte-array of length: "
+ buffer.length / 1024 + "kb");
if (frame == null) {
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
Bitmap bmp = BitmapFactory.decodeByteArray(buffer.buffer, 0,
buffer.length);
frame = new NetworkCameraFrame(bmp.getWidth(), bmp.getHeight());
Log.d(TAG, "NetworkCameraFrame created");
bmp.recycle();
}
if (matForConversion == null)
matForConversion = new MatOfByte(buffer.buffer);
else
matForConversion.fromArray(buffer.buffer);
Mat newImage = Highgui.imdecode(matForConversion,
Highgui.IMREAD_UNCHANGED);
frame.put(newImage);
return frame;
}
private class NetworkCameraFrame implements CameraFrame {
Mat mat;
private int mWidth;
private int mHeight;
private Bitmap mCachedBitmap;
private boolean mBitmapConverted;
public NetworkCameraFrame(int width, int height) {
this.mWidth = width;
this.mHeight = height;
this.mat = new Mat(new Size(width, height), CvType.CV_8U);
this.mCachedBitmap = Bitmap.createBitmap(width, height,
Bitmap.Config.ARGB_8888);
}
@Override
public Mat gray() {
return mat.submat(0, mHeight, 0, mWidth);
}
@Override
public Mat rgba() {
return mat;
}
// @Override
// public Mat yuv() {
// return mYuvFrameData;
// }
@Override
public synchronized Bitmap toBitmap() {
if (mBitmapConverted)
return mCachedBitmap;
Mat rgba = this.rgba();
Utils.matToBitmap(rgba, mCachedBitmap);
mBitmapConverted = true;
return mCachedBitmap;
}
public synchronized void put(Mat frame) {
mat = frame;
invalidate();
}
public void release() {
mat.release();
mCachedBitmap.recycle();
}
public void invalidate() {
mBitmapConverted = false;
}
};
The third version follows the instructions in "Usage of BitmapFactory" on BitmapFactory.Options and uses a mutable Bitmap that is then reused while decoding. It even worked for me on Android Jelly Bean. Make sure you're using the correct BitmapFactory.Options when creating the very first Bitmap.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true;
options.inSampleSize = 1;
options.inBitmap = oldBitmap; // the previously decoded Bitmap that should be reused
Bitmap bmp = BitmapFactory.decodeByteArray(buffer, 0, buffer.length, options);
options.inBitmap = bmp; // reuse the freshly decoded Bitmap in the next cycle
This was actually the fastest streaming variant.
UPDATE: Even when I don't retrieve the images from the cache, and instead retrieve them via Drawable with all 18 images stored in the "drawable-mdpi" folder, a blank screen is still displayed.
I was able to retrieve images from the server and save them (.GIF) into the cache. However, when I need to load an image from the cache, it doesn't show up on screen. Here is the code that does the work:
File cacheDir = context.getCacheDir();
File cacheMap = new File(cacheDir, smallMapImageNames.get(i).toString());
if(cacheMap.exists()){
FileInputStream fis = null;
try {
fis = new FileInputStream(cacheMap);
Bitmap local = BitmapFactory.decodeStream(fis);
puzzle.add(local);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
}else{
Drawable smallMap = LoadImageFromWebOperations(mapPiecesURL.get(i).toString());
if(i==0){
height1 = smallMap.getIntrinsicHeight();
width1 = smallMap.getIntrinsicWidth();
}
if (smallMap instanceof BitmapDrawable) {
Bitmap bitmap = ((BitmapDrawable)smallMap).getBitmap();
FileOutputStream fos = null;
try {
cacheMap.createNewFile();
fos = new FileOutputStream(cacheMap);
bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
fos.flush();
fos.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
puzzle.add(bitmap);
}
}
ArrayList that stores the image names: smallMapImageNames (the image names can also be found in the URLs)
ArrayList that stores the URLs of the images: mapPiecesURL
To sum up, I have two questions:
1) How do I load images from the cache?
2) Regarding bitmap.compress(): the images from the server are in .GIF format but I apply Bitmap.CompressFormat.PNG. Is there going to be any problem with this?
Can anyone please help me with this?
The two functions
private Bitmap getBitMap(Context context) {
// TODO Auto-generated method stub
WifiPositioningServices wifiPositioningServices = new WifiPositioningServices();
String[] mapURLandCalibratedPoint1 = wifiPositioningServices.GetMapURLandCalibratedPoint("ERLab-1_1.GIF","ERLab"); //list of map pieces url in the first 9 pieces
String[] mapURLandCalibratedPoint2 = wifiPositioningServices.GetMapURLandCalibratedPoint("ERLab-4_1.GIF","ERLab"); //list of map pieces url in the last 9 pieces
ArrayList<String> smallMapImageNames = new ArrayList<String>();
ArrayList<String> mapPiecesURL = new ArrayList<String>();
for(int i=0; i<mapURLandCalibratedPoint1.length; i++){
if(mapURLandCalibratedPoint1[i].length()>40){ //image url
int len = mapURLandCalibratedPoint1[i].length();
int subStrLen = len-13;
smallMapImageNames.add(mapURLandCalibratedPoint1[i].substring(subStrLen, len-3)+"JPEG");
mapPiecesURL.add(mapURLandCalibratedPoint1[i]);
}
else{
//perform other task
}
}
for(int i=0; i<mapURLandCalibratedPoint2.length; i++){
if(mapURLandCalibratedPoint2[i].length()>40){ //image url
int len = mapURLandCalibratedPoint2[i].length();
int subStrLen = len-13;
smallMapImageNames.add(mapURLandCalibratedPoint2[i].substring(subStrLen, len-3)+"JPEG");
mapPiecesURL.add(mapURLandCalibratedPoint2[i]);
}
else{
//perform other task
}
}
Bitmap result = Bitmap.createBitmap(1029, 617, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(result);
ArrayList<Bitmap> puzzle = new ArrayList<Bitmap>();
int height1 = 0 ;
int width1 = 0;
File cacheDir = context.getCacheDir();
for(int i=0; i<18; i++){
File cacheMap = new File(cacheDir, smallMapImageNames.get(i).toString());
if(cacheMap.exists()){
//retrieved from cached
try {
FileInputStream fis = new FileInputStream(cacheMap);
Bitmap bitmap = BitmapFactory.decodeStream(fis);
puzzle.add(bitmap);
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}else{
//retrieve from server and cached it
Drawable smallMap = LoadImageFromWebOperations(mapPiecesURL.get(i).toString());
if(i==0){
height1 = smallMap.getIntrinsicHeight();
width1 = smallMap.getIntrinsicWidth();
}
if (smallMap instanceof BitmapDrawable) {
Bitmap bitmap = ((BitmapDrawable)smallMap).getBitmap();
FileOutputStream fos = null;
try {
cacheMap.createNewFile();
fos = new FileOutputStream(cacheMap);
bitmap.compress(CompressFormat.JPEG, 100, fos);
fos.flush();
fos.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
puzzle.add(bitmap);
}
}
}
Rect srcRect;
Rect dstRect;
int cnt =0;
for (int j = 0; j < 3; j++) {
int newHeight = height1 * (j % 3);
for (int k = 0; k < 3; k++) {
if (j == 0 && k == 0) {
srcRect = new Rect(0, 0, width1, height1);
dstRect = new Rect(srcRect);
} else {
int newWidth = width1 * k;
srcRect = new Rect(0, 0, width1, height1);
dstRect = new Rect(srcRect);
dstRect.offset(newWidth, newHeight);
}
canvas.drawBitmap(puzzle.get(cnt), srcRect, dstRect,null);
cnt++;
}
}
for(int a=0; a<3; a++){
int newHeight = height1 * (a % 3);
for (int k = 3; k < 6; k++) {
if (a == 0 && k == 0) {
srcRect = new Rect(0, 0, width1*3, height1);
dstRect = new Rect(srcRect);
} else {
int newWidth = width1 * k;
srcRect = new Rect(0, 0, width1, height1);
dstRect = new Rect(srcRect);
dstRect.offset(newWidth, newHeight);
}
canvas.drawBitmap(puzzle.get(cnt), srcRect, dstRect,
null);
cnt++;
}
}
return result;
}
private Drawable LoadImageFromWebOperations(String url) {
// TODO Auto-generated method stub
try
{
InputStream is = (InputStream) new URL(url).getContent();
Drawable d = Drawable.createFromStream(is, "src name");
return d;
}catch (Exception e) {
System.out.println("Exc="+e);
return null;
}
}
I am actually trying to display 18 pieces (3x6) of images to form a floorplan. To display the images I use two for-loops. The two .GIF images, ERLab-1_1.GIF and ERLab-4_1.GIF, are the center pieces of their respective groups. For example, the first row would be ERLab-0_0.GIF, ERLab-1_0.GIF, ERLab-2_0.GIF, ERLab-3_0.GIF, ERLab-4_0.GIF, ERLab-5_0.GIF; the second row would be XXX-X_1.GIF, and the third row XXX-X_2.GIF.
Lastly,
Bitmap resultMap = getBitMap(this.getContext());
bmLargeImage = Bitmap.createBitmap(1029 , 617, Bitmap.Config.ARGB_8888);
bmLargeImage = resultMap;
Then the onDraw function draws the image onto the canvas.
I just solved my own question.
In the line canvas.drawBitmap(puzzle.get(cnt), srcRect, dstRect, null); within each for-loop that draws the bitmap onto the canvas, I needed to cast each item in the ArrayList (puzzle) to Bitmap. Only then does the image get displayed.
I thought that if the ArrayList is defined as ArrayList<Bitmap> puzzle = new ArrayList<Bitmap>();, each item in the ArrayList would already be of Bitmap type. But isn't that always true?
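For what it's worth, a parameterized ArrayList does normally return its element type without an explicit cast; it is raw (unparameterized) lists that force casting. A plain-Java sketch of the difference (method names are illustrative):

```java
import java.util.ArrayList;

public class GenericsCheck {
    // With a parameterized list, get(0) is already typed as String;
    // the compiler inserts any needed cast for you.
    static String firstTyped(ArrayList<String> list) {
        return list.get(0); // no explicit cast needed
    }

    // With a raw list the element type is lost at compile time,
    // so an explicit cast is required.
    @SuppressWarnings({"rawtypes", "unchecked"})
    static String firstRaw(ArrayList list) {
        return (String) list.get(0); // cast required here
    }

    public static void main(String[] args) {
        ArrayList<String> typed = new ArrayList<>();
        typed.add("bitmap");
        System.out.println(GenericsCheck.firstTyped(typed)); // bitmap
    }
}
```

So if a cast genuinely fixed the drawing, it suggests the list was declared or populated as a raw type somewhere, not that parameterized lists require casting in general.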
I am using code that combines two images into one using a Canvas. When I show that image in an ImageView it looks fine, but when I show it in a WebView there is a black background to the right of the image. I tried to change the background color in the HTML, but the color doesn't change and I can't make it transparent. Can anyone help? The result is here: the upper image is in the ImageView and the lower one is in the WebView.
public class MyBimapTest extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
ImageView img1 = (ImageView) findViewById(R.id.ImageView01);
img1.setVisibility(View.INVISIBLE);
Drawable dra1 = img1.getDrawable();
Bitmap map1 = ((BitmapDrawable) dra1).getBitmap();
ImageView img2 = (ImageView) findViewById(R.id.ImageView02);
img2.setVisibility(View.INVISIBLE);
Drawable dra2 = img2.getDrawable();
Bitmap map2 = ((BitmapDrawable) dra2).getBitmap();
// ***
ByteArrayOutputStream baos = new ByteArrayOutputStream();
map1.compress(Bitmap.CompressFormat.JPEG, 100, baos);
byte[] b = baos.toByteArray();
String abc = Base64.encodeBytes(b);
byte[] byt = null;
try {
byt = Base64.decode(abc);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
map1 = BitmapFactory.decodeByteArray(byt, 0, byt.length);
// ***
Bitmap map = combineImages(map1, map2);
ByteArrayOutputStream bbb = new ByteArrayOutputStream();
map.compress(Bitmap.CompressFormat.JPEG, 100, bbb);
byte[] bit = bbb.toByteArray();
String imgToString = Base64.encodeBytes(bit);
String imgTag = "<img src='data:image/jpg;base64," + imgToString
+ "' align='left' bgcolor='ff0000'/>";
WebView webView = (WebView) findViewById(R.id.storyView);
webView.loadData(imgTag, "text/html", "utf-8");
Drawable end = new BitmapDrawable(map);
ImageView img3 = (ImageView) findViewById(R.id.ImageView03);
img3.setImageBitmap(map);
}
public Bitmap combineImages(Bitmap c, Bitmap s) {
Bitmap cs = null;
int width, height = 0;
width = c.getWidth() + (s.getWidth() / 2);
height = c.getHeight() + (s.getHeight() / 2);
cs = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas comboImage = new Canvas(cs);
comboImage.drawBitmap(c, 0f, 0f, null);
comboImage.drawBitmap(s, c.getWidth() - (s.getWidth() / 2), c
.getHeight()
- (s.getHeight() / 2), null);
return cs;
}
}
The JPEG format does not support alpha transparency, which is why the transparent background becomes black when you convert your original image to JPEG.
Use the PNG format instead:
map1.compress(Bitmap.CompressFormat.PNG, 100, baos);
and
String imgTag = "<img src='data:image/png;base64," + imgToString
+ "' align='left' bgcolor='ff0000'/>";
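Assuming a standard Base64 encoder (java.util.Base64 here, rather than the custom Base64 class used in the question), building the inline-image tag can be sketched like this:

```java
import java.util.Base64;

public class DataUri {
    // Build an HTML <img> tag with an inline base64-encoded PNG payload.
    static String imgTag(byte[] pngBytes) {
        String b64 = Base64.getEncoder().encodeToString(pngBytes);
        return "<img src='data:image/png;base64," + b64 + "' align='left'/>";
    }

    public static void main(String[] args) {
        byte[] fakePng = {(byte) 0x89, 'P', 'N', 'G'}; // stand-in for real PNG bytes
        System.out.println(DataUri.imgTag(fakePng));
    }
}
```

With PNG data in the payload, the alpha channel survives all the way into the WebView, so no black background appears.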