I was struggling to convert a YuvImage to PNG format on an Android 5.0.1 device: the resulting PNGs showed up as green images. On an Android 5.1.1 device this did not happen, and the images came out fine.
After some time I found out that there is a bug in Android 5.0.1 which makes the converted images appear green. This was fixed in Android 5.1.1.
However, does anyone know of a workaround to make this work on devices that have not received this fix?
I don't think there is a way to work around the bug, because in my experience the images are already green when generated by the system; it is not a problem with the conversion to PNG.
From your comment I see that you are using the Camera2 API, and since you are working with the YUV format I assume you are trying to save frames from the camera's continuous feed (as opposed to full-resolution still captures). If that is the case, I suggest using the older Camera API if at all possible: I haven't yet seen a device that fails when capturing preview frames in YUV (NV21) format, and those can easily be converted to a PNG, albeit via an intermediate JPEG step:
// Wrap the raw NV21 preview bytes and compress them to JPEG in memory.
YuvImage yuvImage = new YuvImage(nv21bytearray, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream os = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, os);
byte[] jpegByteArray = os.toByteArray();

// Decode the JPEG into a Bitmap, then re-encode it as PNG on external storage.
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegByteArray, 0, jpegByteArray.length);
FileOutputStream fos = new FileOutputStream(Environment.getExternalStorageDirectory() + "/imagename.png");
bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
fos.close();
Here nv21bytearray is the NV21 byte array delivered to the old Camera API's onPreviewFrame(...) callback.
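For reference, a minimal sketch of how that callback might be wired up with the old (deprecated) Camera API; a SurfaceHolder or SurfaceTexture still has to be attached for preview frames to start flowing, and error handling is omitted:

Camera camera = Camera.open();
Camera.Size previewSize = camera.getParameters().getPreviewSize();
final int width = previewSize.width;    // use these as 'width'/'height' in the snippet above
final int height = previewSize.height;
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        // 'data' is the NV21 byte array (nv21bytearray above);
        // NV21 is the default preview format on virtually all devices.
    }
});
// camera.setPreviewDisplay(surfaceHolder);  // required before startPreview()
camera.startPreview();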
I am trying to use the Azure Face API on Android. I am capturing an image from the device camera and then converting it to an InputStream to send to the detect method. I keep getting the error "com.microsoft.projectoxford.face.rest.ClientException: Image size is too small".
I checked the documentation: the image size is 1.4 MB, which is within the 1 KB-4 MB range, so I don't understand why it isn't working.
Bitmap bitmap = cameraKitImage.getBitmap();
ByteArrayOutputStream bos = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, bos);
byte[] bitmapdata = bos.toByteArray();
new FaceTask().execute(new ByteArrayInputStream(bitmapdata));
// Inside FaceTask's doInBackground(InputStream... inputStreams):
Face[] faces = faceServiceClient.detect(inputStreams[0], true, false, null);
Somehow your file is ending up smaller than 1 KB after compression, or it is already that small to begin with. Try saving a test image in the drawable or assets folder and opening it from there as an InputStream.
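A quick way to rule out the upload itself is to log the size of the byte array you actually send, and to try a known-good image loaded from assets. A minimal sketch from an Activity (the asset file name is hypothetical):

// Check how big the payload really is after PNG compression.
byte[] bitmapdata = bos.toByteArray();
android.util.Log.d("FaceUpload", "payload bytes: " + bitmapdata.length);

// Sanity check with a known-good image bundled in assets
// ("test_face.jpg" is a hypothetical file placed in src/main/assets).
InputStream testStream = getAssets().open("test_face.jpg");
Face[] testFaces = faceServiceClient.detect(testStream, true, false, null);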
I am developing an app which generates .png images from a 1080p video. But the PNGs are several MB each, and my app even crashes because of their size. I want to compress them somehow to reduce the size of each PNG. I have tried a number of approaches, such as createScaledBitmap, compress(CompressFormat.PNG, 20, stream), and Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), m, true), and searched for other methods.
But none of them reduce the size as much as I want; each PNG still ends up at 2.2+ MB.
Any ideas other than these? Thanks.
I am also building an app which deals with high-resolution images, but I do the work on the server side: I compress and resize the image there using thumbnailator-0.4.8.jar. Example:
// Read the uploaded bytes into a BufferedImage.
InputStream in = new ByteArrayInputStream(bytes);
BufferedImage bImageFromConvert = ImageIO.read(in);

// Resize with Thumbnailator, then re-encode as JPEG.
BufferedImage newImage = Thumbnails.of(bImageFromConvert).size(213, 316).asBufferedImage();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(newImage, "jpg", baos);
baos.flush();
retVal = baos.toByteArray();
baos.close();
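If you also want to trade quality for size, Thumbnailator can write straight to a stream with an explicit JPEG quality. A minimal sketch based on the 0.4.x builder API (treat it as an illustration, not a drop-in for the code above):

ByteArrayOutputStream out = new ByteArrayOutputStream();
Thumbnails.of(bImageFromConvert)
        .size(213, 316)
        .outputFormat("jpg")
        .outputQuality(0.8)   // 0.0 - 1.0; lower means smaller files
        .toOutputStream(out);
byte[] smaller = out.toByteArray();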
I am trying to load images into a Mat in OpenCV for Android for face recognition.
The images are in JPEG format, sized 640 x 480.
I am using Eclipse, and this code is in a .cpp file.
This is my code:
// Each CSV line is "path/to/image,label".
while (getline(file, line)) {
    stringstream liness(line);
    getline(liness, path, ',');
    getline(liness, classlabel);
    if (!path.empty() && !classlabel.empty()) {
        images.push_back(imread(path, 0));   // 0 = load as grayscale
        labels.push_back(atoi(classlabel.c_str()));
    }
}
However, I am getting an error saying "The matrix is not continuous, thus its number of rows cannot be changed in function cv::Mat cv::Mat::reshape(int, int) const".
I tried the solution in OpenCV 2.0 C++ API using imshow: returns unhandled exception and "bad-flag", but that is for Visual Studio.
Any help would be greatly appreciated.
Conversion of the image from the camera preview (the image is converted to grayscale from the camera preview data):
// Convert the NV21 preview data (matYuv) to RGB, then to grayscale.
Mat matRgb = new Mat();
Imgproc.cvtColor(matYuv, matRgb, Imgproc.COLOR_YUV420sp2RGB, 4);
try {
    Mat matGray = new Mat();
    Imgproc.cvtColor(matRgb, matGray, Imgproc.COLOR_RGB2GRAY, 0);
    resultBitmap = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(matGray, resultBitmap);
} catch (Exception e) {
    e.printStackTrace();
}
Saving the image:
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmFace[0].compress(Bitmap.CompressFormat.JPEG, 100, stream);
byte[] flippedImageByteArray = stream.toByteArray();
The 'Mat not continuous' error is not at all related to the link you have there.
If you're trying Fisherfaces or Eigenfaces, the images have to get 'flattened' to a single row for the PCA.
This is not possible if the data has 'gaps' or was padded to make the row size a multiple of 4; some image editors do that to your data.
Also, imho your images are by far too large (PCA works best when the data matrix is almost square, i.e. the row size (num_pixels) is similar to the column size (num_images)).
So my proposal would be to resize the train images (and also the test images later) to something like 100x100 when loading them; this will also give you a continuous data block.
(And again, avoid JPEGs for anything image-processing related; too many compression artefacts!)
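If you do the loading on the Java side of your Android project instead of in the .cpp file, the resize-on-load step with OpenCV's Java bindings would look roughly like this (a sketch; in OpenCV 3.x/4.x imread lives in Imgcodecs):

// Load in grayscale and resize to a fixed, small training size (100x100).
Mat img = Imgcodecs.imread(path, Imgcodecs.IMREAD_GRAYSCALE);
Mat small = new Mat();
Imgproc.resize(img, small, new Size(100, 100));
// 'small' is a freshly allocated, continuous Mat and can be flattened to a single row for PCA.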
Does anybody know what pixel format BitmapFactory.decodeByteArray() returns?
Basically I'm using this snippet in the camera preview callback:
YuvImage img = new YuvImage(mLastFrame, ImageFormat.NV21, mPreviewSize.width, mPreviewSize.height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
img.compressToJpeg(new Rect(0,0,mPreviewSize.width, mPreviewSize.height), 30, out);
return BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());
I get a good Bitmap which I can use with Android, but when I try to use it with OpenCV it doesn't work. Since OpenCV requires an ARGB_8888 bitmap, I'm guessing decodeByteArray() gives me a different type.
Thanks,
Vlad
You can call Bitmap.getConfig() on the resulting Bitmap and check whether it is ALPHA_8, RGB_565, ARGB_8888, etc.
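If the config turns out to be RGB_565, one possible fix (a sketch, reusing the 'out' stream from your snippet) is to make an ARGB_8888 copy before handing the bitmap to OpenCV's Utils.bitmapToMat; alternatively, BitmapFactory.Options.inPreferredConfig can be set to ARGB_8888 when decoding:

Bitmap decoded = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());
// Make an ARGB_8888 copy if the decoder chose a different config (e.g. RGB_565).
Bitmap argb = decoded.copy(Bitmap.Config.ARGB_8888, false);
Mat mat = new Mat();
Utils.bitmapToMat(argb, mat);   // produces an RGBA Mat (CV_8UC4)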
I am trying to compress the photo taken by the camera in Android, but the color of the image changes when it is compressed with Bitmap.CompressFormat.JPEG. How can I solve this problem? Thanks.
I have included some sample images generated by my code. You can see that the color of the paper at the top of the images is different.
Here is the code snippet:
Bitmap bitmap = BitmapFactory.decodeFile(Common.FOLDER_PATH + "pic.jpg");
// Re-encode as JPEG (lossy) at quality 100.
FileOutputStream stream2 = new FileOutputStream(Common.FOLDER_PATH + "pic100.jpg");
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream2);
stream2.close();
// Re-encode as PNG (lossless; the quality argument is ignored for PNG).
FileOutputStream stream3 = new FileOutputStream(Common.FOLDER_PATH + "pic100.png");
bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream3);
stream3.close();
This is the original image:
This is JPEG:
This is PNG:
JPEG is a lossy compression format, so some image information may be lost during compression. The original image information is sacrificed for a better compression ratio (a smaller file).
If that is not acceptable to you, you should use a lossless compression format such as PNG.
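For example, a minimal sketch of saving the bitmap losslessly as PNG (the quality parameter is ignored for PNG, so the pixels are preserved exactly; the output file name here is just an example):

try (FileOutputStream out = new FileOutputStream(Common.FOLDER_PATH + "pic_lossless.png")) {
    // For PNG the second argument is ignored; the output is always lossless.
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
}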