SkiaSharp load image from ImageView - Android

I am trying to load a bitmap from an ImageView into an SKCanvasView.
The problem is that the ImageView has CenterCrop set, and the image drawn on the canvas is way too big. How can I make it look exactly like it does in the ImageView?
Code samples:
_skBitmap = ((BitmapDrawable)_frontView.ImageView.Drawable).Bitmap.ToSKBitmap();
And drawing:
private void _canvas_PaintSurface(object sender, SKPaintSurfaceEventArgs e)
{
    var canvas = e.Surface.Canvas;
    var scale = Resources.DisplayMetrics.Density;
    canvas.Scale(scale);
    if (_skBitmap != null)
    {
        canvas.DrawBitmap(_skBitmap, e.Info.Rect);
    }
}

In case someone is interested, here is what I ended up with (in my case the image is always wider than the screen):
private void _canvas_PaintSurface(object sender, SKPaintSurfaceEventArgs e)
{
    var canvas = e.Surface.Canvas;
    var scale = Resources.DisplayMetrics.Density;
    canvas.Scale(scale);
    if (_skBitmap != null)
    {
        // scaledSize (not shown here) is the canvas size in device-independent units,
        // i.e. the pixel size from e.Info divided by the display density applied above
        var factor = scaledSize.Height / _skBitmap.Height;
        var nw = _skBitmap.Width * factor;
        var offset = (scaledSize.Width - nw) / 2;
        var sr = new SKRect(offset, 0, nw + offset, scaledSize.Height);
        canvas.DrawBitmap(_skBitmap, sr);
    }
}

Related

How to convert SkiaSharp.SKBitmap to Android.Graphics.Bitmap?

I want to convert PinBitmap (a SkiaSharp.SKBitmap) to an Android.Graphics.Bitmap. I couldn't find any references online; I only tried this in the Android project:
Android.Graphics.Bitmap bitmap = BitmapFactory.DecodeByteArray(myView.PinBitmap.Bytes, 0, myView.PinBitmap.Bytes.Length);
but the bitmap is null.
I'm creating the PinBitmap from a SKCanvasView:
private void SKCanvasView_PaintSurface(object sender, SkiaSharp.Views.Forms.SKPaintSurfaceEventArgs e)
{
    var surface = e.Surface;
    var canvas = surface.Canvas;
    SKImageInfo info = e.Info;
    canvas.DrawLine(10, 10, 10, 200, new SKPaint() { IsStroke = true, Color = SKColors.Green, StrokeWidth = 10 });
    SKBitmap saveBitmap = new SKBitmap();
    // Create bitmap the size of the display surface
    if (saveBitmap == null)
    {
        saveBitmap = new SKBitmap(info.Width, info.Height);
    }
    // Or create new bitmap for a new size of display surface
    else if (saveBitmap.Width < info.Width || saveBitmap.Height < info.Height)
    {
        SKBitmap newBitmap = new SKBitmap(Math.Max(saveBitmap.Width, info.Width),
                                          Math.Max(saveBitmap.Height, info.Height));
        using (SKCanvas newCanvas = new SKCanvas(newBitmap))
        {
            newCanvas.Clear();
            newCanvas.DrawBitmap(saveBitmap, 0, 0);
        }
        saveBitmap = newBitmap;
    }
    // Render the bitmap
    canvas.Clear();
    canvas.DrawBitmap(saveBitmap, 0, 0);
    var customPin = new CustomPin { PinBitmap = saveBitmap };
    Content = customPin;
}
This is easy to do; you just need to have the SkiaSharp.Views NuGet package installed. Then there are extension methods:
skiaBitmap = androidBitmap.ToSKBitmap();
androidBitmap = skiaBitmap.ToBitmap();
There are also a few others, like ToSKImage and ToSKPixmap.
NOTE: these all make copies of the pixel data. To avoid memory issues, you can dispose of the original as soon as the method returns.
Source: https://forums.xamarin.com/discussion/comment/294868/#Comment_294868
I want to convert PinBitmap (SkiaSharp.SkBitmap) to Android.Graphics.Bitmap
In the Android MainActivity, you can use the AndroidExtensions.ToBitmap method to convert the PinBitmap to a Bitmap.
AndroidExtensions.ToBitmap method: https://learn.microsoft.com/en-us/dotnet/api/skiasharp.views.android.androidextensions.tobitmap?view=skiasharp-views-1.68.1
Install SkiaSharp.Views.Forms from NuGet: https://www.nuget.org/packages/SkiaSharp.Views.Forms/
Add the using directive:
using SkiaSharp.Views.Android;
Then use the code below:
var bitmap = AndroidExtensions.ToBitmap(PinBitmap);

ML Kit Firebase Android - How to convert a FirebaseVisionFace to an image object (like a Bitmap)?

I have integrated ML Kit face detection into my Android application. I followed the guide at the URL below:
https://firebase.google.com/docs/ml-kit/android/detect-faces
The code for my face detection processor class is:
import java.io.IOException;
import java.util.List;

/** Face Detector Demo. */
public class FaceDetectionProcessor extends VisionProcessorBase<List<FirebaseVisionFace>> {

    private static final String TAG = "FaceDetectionProcessor";

    private final FirebaseVisionFaceDetector detector;

    public FaceDetectionProcessor() {
        FirebaseVisionFaceDetectorOptions options =
                new FirebaseVisionFaceDetectorOptions.Builder()
                        .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                        .setLandmarkType(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                        .setTrackingEnabled(true)
                        .build();
        detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
    }

    @Override
    public void stop() {
        try {
            detector.close();
        } catch (IOException e) {
            Log.e(TAG, "Exception thrown while trying to close Face Detector: " + e);
        }
    }

    @Override
    protected Task<List<FirebaseVisionFace>> detectInImage(FirebaseVisionImage image) {
        return detector.detectInImage(image);
    }

    @Override
    protected void onSuccess(
            @NonNull List<FirebaseVisionFace> faces,
            @NonNull FrameMetadata frameMetadata,
            @NonNull GraphicOverlay graphicOverlay) {
        graphicOverlay.clear();
        for (int i = 0; i < faces.size(); ++i) {
            FirebaseVisionFace face = faces.get(i);
            FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
            graphicOverlay.add(faceGraphic);
            faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
        }
    }

    @Override
    protected void onFailure(@NonNull Exception e) {
        Log.e(TAG, "Face detection failed " + e);
    }
}
Here in "onSuccess" listener , we will get array of "FirebaseVisionFace" class objects which will have "Bounding Box" of face.
@Override
protected void onSuccess(
        @NonNull List<FirebaseVisionFace> faces,
        @NonNull FrameMetadata frameMetadata,
        @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);
        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
        graphicOverlay.add(faceGraphic);
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
    }
}
I want to know how to convert these FirebaseVisionFace objects into a Bitmap. I want to extract the face image and show it in an ImageView. Can anyone please help me? Thanks in advance.
Note: I downloaded the ML Kit Android sample source code from the URL below:
https://github.com/firebase/quickstart-android/tree/master/mlkit
You created the FirebaseVisionImage from a bitmap. After detection returns, each FirebaseVisionFace describes a bounding box as a Rect that you can use to extract the detected face from the original bitmap, e.g. using Bitmap.createBitmap().
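For example, inside the onSuccess loop, a minimal sketch of that approach could look like the following (originalBitmap stands for the Bitmap the FirebaseVisionImage was created from, and imageView for the target view; both names are just illustrative):
Rect box = new Rect(face.getBoundingBox());
// Clamp the box to the bitmap bounds; detection boxes can extend past the edges
if (box.intersect(0, 0, originalBitmap.getWidth(), originalBitmap.getHeight())) {
    Bitmap faceBitmap = Bitmap.createBitmap(
            originalBitmap, box.left, box.top, box.width(), box.height());
    imageView.setImageBitmap(faceBitmap);
}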
Since the accepted answer was not specific enough, I will try to explain what I did.
1.- Create an ImageView in LivePreviewActivity like this:
private ImageView imageViewTest;
2.- Declare it in the activity XML and link it to the Java file. I placed it on top of the camera preview from the sample code, so it is visible over the camera feed.
3.- Where the sample creates a FaceDetectionProcessor, pass in the ImageView instance so the processor can later set the cropped image on it:
FaceDetectionProcessor processor = new FaceDetectionProcessor(imageViewTest);
4.- Change the FaceDetectionProcessor constructor so it accepts an ImageView as a parameter, and store that instance in a field:
public FaceDetectionProcessor(ImageView imageView) {
    FirebaseVisionFaceDetectorOptions options =
            new FirebaseVisionFaceDetectorOptions.Builder()
                    .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                    .setTrackingEnabled(true)
                    .build();
    detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
    this.imageView = imageView;
}
5.- I created a crop method that takes a Bitmap and a Rect to focus only on the face. Go ahead and do the same:
public static Bitmap cropBitmap(Bitmap bitmap, Rect rect) {
    int w = rect.right - rect.left;
    int h = rect.bottom - rect.top;
    Bitmap ret = Bitmap.createBitmap(w, h, bitmap.getConfig());
    Canvas canvas = new Canvas(ret);
    canvas.drawBitmap(bitmap, -rect.left, -rect.top, null);
    return ret;
}
6.- Modify the detectInImage method to keep a reference to the bitmap being detected in a field:
@Override
protected Task<List<FirebaseVisionFace>> detectInImage(FirebaseVisionImage image) {
    imageBitmap = image.getBitmapForDebugging();
    return detector.detectInImage(image);
}
7.- Finally, modify the onSuccess method to call the crop method and assign the result to the ImageView:
@Override
protected void onSuccess(
        @NonNull List<FirebaseVisionFace> faces,
        @NonNull FrameMetadata frameMetadata,
        @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);
        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
        graphicOverlay.add(faceGraphic);
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
        croppedImage = cropBitmap(imageBitmap, face.getBoundingBox());
    }
    imageView.setImageBitmap(croppedImage);
}
This may help you if you're trying to use ML Kit to detect faces and OpenCV to perform image processing on the detected face. Note that in this particular example you need the original camera bitmap inside onSuccess.
I haven't found a way to do this without a bitmap and, truthfully, I'm still searching.
@Override
protected void onSuccess(@NonNull List<FirebaseVisionFace> faces, @NonNull FrameMetadata frameMetadata, @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);

        /* The original implementation has the original image. The original image represents the camera preview from the live camera */
        // Create Mat representing the live camera itself
        Mat rgba = new Mat(originalCameraImage.getHeight(), originalCameraImage.getWidth(), CvType.CV_8UC4);

        // The box with an Imgproc effect made by OpenCV
        Mat rgbaInnerWindow;
        Mat mIntermediateMat = new Mat();

        // Make the box for Imgproc the size of the detected face
        int rows = (int) face.getBoundingBox().height();
        int cols = (int) face.getBoundingBox().width();
        int left = cols / 8;
        int top = rows / 8;
        int width = cols * 3 / 4;
        int height = rows * 3 / 4;

        // Create a new bitmap based on the live preview
        // which will show the actual image processing
        Bitmap newBitmap = Bitmap.createBitmap(originalCameraImage);

        // Bitmap to Mat
        Utils.bitmapToMat(newBitmap, rgba);

        // Imgproc stuff. In this example I'm doing edge detection.
        rgbaInnerWindow = rgba.submat(top, top + height, left, left + width);
        Imgproc.Canny(rgbaInnerWindow, mIntermediateMat, 80, 90);
        Imgproc.cvtColor(mIntermediateMat, rgbaInnerWindow, Imgproc.COLOR_GRAY2BGRA, 4);
        rgbaInnerWindow.release();

        // After processing the image, back to a bitmap
        Utils.matToBitmap(rgba, newBitmap);

        // Load the bitmap
        CameraImageGraphic imageGraphic = new CameraImageGraphic(graphicOverlay, newBitmap);
        graphicOverlay.add(imageGraphic);

        FaceGraphic faceGraphic;
        faceGraphic = new FaceGraphic(graphicOverlay, face, null);
        graphicOverlay.add(faceGraphic);
        // Depending on the sample version, FaceGraphic takes the face via updateFace() instead:
        // faceGraphic = new FaceGraphic(graphicOverlay);
        // graphicOverlay.add(faceGraphic);
        // faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
    }
}
Actually, you can just read the ByteBuffer, get the array from it, and write it to whatever file you want with an OutputStream. Of course, you can also crop to the face region via getBoundingBox() first.
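A rough sketch of that idea, assuming the face has already been cropped into a Bitmap (faceBitmap and outputFile are illustrative names, not part of the sample):
// Read the pixels into a ByteBuffer and write the backing array with an OutputStream
ByteBuffer buffer = ByteBuffer.allocate(faceBitmap.getByteCount());
faceBitmap.copyPixelsToBuffer(buffer);
byte[] pixelBytes = buffer.array();
try (OutputStream out = new FileOutputStream(outputFile)) {
    out.write(pixelBytes); // raw pixel data
    // or, for a regular image file instead of raw pixels:
    // faceBitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
} catch (IOException e) {
    Log.e(TAG, "Failed to write face image", e);
}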

Is there a method to resize a bitmap image?

I'm having a problem with my file browser.
When it enters a directory, it loads the folder contents into a list map and uses onBindCustomView to set an icon for each list item.
If a file is an image, it shows an image preview instead of the generic image icon.
The problem is that with up to about 10 image files it is OK, but when the image file count goes over 50, it lags very hard.
I think this lag is caused by the image previews: they were never resized, so I added code to resize each preview from its original size down to 100x100.
But there is another problem.
Bitmap.createBitmap(bitmap, 0, 0, 100, 100)
This cuts the image rather than scaling it down, so the result is just a small piece of the image.
Is there a method to resize the image while keeping it looking like the original?
You can use a canvas (this example uses the browser canvas API):
In your view:
<script>
$('#files')
    .bind('change', function(ev) {
        message_loading("none");
        ev.preventDefault();
        ev.stopPropagation();
        uploadFiles(this);
        message_loading("display");
    })
</script>
In your script:
function uploadFiles(target) {
    var content = target.files;
    // Note: reduceImgSize works asynchronously (FileReader + Image onload),
    // so the resized file is only available inside the onload callback below,
    // not as a synchronous return value.
    var img = reduceImgSize(content[0], content[0]["name"]);
}
function reduceImgSize(f, name) {
    var reader = new FileReader();
    reader.onload = function (readerEvent) {
        var image = new Image();
        image.onload = function (imageEvent) {
            var canvas = document.createElement('canvas'),
                max_size = image.width / 1.1,
                width = image.width,
                height = image.height;
            if (width > height) {
                if (width > max_size) {
                    height *= max_size / width;
                    width = max_size;
                }
            } else {
                if (height > max_size) {
                    width *= max_size / height;
                    height = max_size;
                }
            }
            canvas.width = width;   // write your width
            canvas.height = height; // write your height
            canvas.getContext('2d').drawImage(image, 0, 0, width, height);
            var dataUrl = canvas.toDataURL('image/jpeg');
            // dataURLToBlob is a small helper (not shown) that converts a data URL to a Blob
            var resizedImage = dataURLToBlob(dataUrl);
            var imageFile = new File([resizedImage], name, { type: "image/jpeg" });
            return imageFile;
        }
        image.src = readerEvent.target.result;
    }
    reader.readAsDataURL(f);
}
I hope it helps
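Coming back to the Android side of the question: a minimal sketch that scales a Bitmap down so its longer side is 100 px while preserving the aspect ratio (instead of cropping with Bitmap.createBitmap); the helper name is just illustrative:
// Scale 'bitmap' so its longer side becomes maxSize, keeping the aspect ratio
public static Bitmap scaleToFit(Bitmap bitmap, int maxSize) {
    float scale = Math.min(
            (float) maxSize / bitmap.getWidth(),
            (float) maxSize / bitmap.getHeight());
    int newWidth = Math.round(bitmap.getWidth() * scale);
    int newHeight = Math.round(bitmap.getHeight() * scale);
    // filter = true gives smoother downscaling for previews
    return Bitmap.createScaledBitmap(bitmap, newWidth, newHeight, true);
}

// e.g. for the file browser preview icons:
Bitmap preview = scaleToFit(originalBitmap, 100);
If the previews are decoded from large files on disk, also consider setting BitmapFactory.Options.inSampleSize when decoding, so the full-resolution bitmaps never have to be loaded into memory at all.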

Composing a canvas with images works on Android but fails silently on iOS

I have the following code, which fails on iOS but works on Android (using Ionic/Cordova). What am I missing (maybe I am just getting lucky on Android)? On Android I get a seamless vertical stacking of the image(s), whereas on iOS I get a blank canvas. It fails on iOS whether I am stacking one SVG or many.
The SVG elements by themselves show up fine on the page, and in the debugger I can see their complete markup.
function _appendImgToCanvas(canvas, image, first_image) {
    // image contents are -- {id, h, w}
    var ctx = canvas.getContext("2d");
    var img_type = "data:image/svg+xml;charset=utf-8";
    try {
        var img_el = document.getElementById(image.id).getElementsByTagName("svg")[0];
        var img_src = new XMLSerializer().serializeToString(img_el); // thx Kaiido
        var img = new Image();
        img.src = img_type + "," + img_src;
        var h = image.h, w = image.w; // is in CSS pixels
        var old_h = canvas.height, old_w = canvas.width;
        var old_image;
        if (first_image) {
            ctx.clearRect(0, 0, old_w, old_h);
            old_h = old_w = 0; // it's a new beginning
        } else {
            old_image = canvas.toDataURL("image/png"); // Android appears to wipe out image on resizing
        }
        // update canvas dims, and update its CSS style too
        canvas.setAttribute("height", old_h + h);
        canvas.setAttribute("width", Math.max(w, old_w));
        canvas.style.height = canvas.height + "px";
        canvas.style.width = canvas.width + "px";
        // retrieve any old image into the resized canvas
        if (old_image) {
            var x = new Image();
            x.src = old_image;
            ctx.drawImage(x, 0, 0);
        }
        // add the given image
        ctx.drawImage(img, 0, old_h);
        $log.debug("IMG#" + image.id, w, "x", h, "appended at h=", old_h, "new dim=", canvas.width, "x", canvas.height);
    } catch (e) {
        $log.error("IMG#" + image.id, "ERROR in appending", e);
    }
}
You have to wait for your images to load: add the drawing operations to their onload handlers.
Also, for converting your SVG element to a data URL, use encodeURIComponent(new XMLSerializer().serializeToString(yourSVGElement)) instead of outerHTML/innerHTML; you will avoid encoding issues and the need for that "if/else // probably ios" workaround.

How do I edit and save a large bitmap in Android? I'm currently writing it out as tiles, but it is slow to join them back together

I'm using RenderScript on Android to edit photos. Currently, due to the texture size limit and memory limits on Android, the app will crash if I try anything too large, e.g. photos taken with the device's camera.
My first thought to get around this was to use BitmapRegionDecoder to tile the large photo into manageable pieces, edit them through RenderScript and save them one at a time, then stitch it all together using PNGJ, a PNG decoding and encoding library that allows writing PNG images to disk in parts so I don't have the full image in memory.
This works fine but stitching it together takes a rather long time - around 1 minute at a guess.
Are there any other solutions I should consider? I can change to JPEG if there is a solution there, but I haven't found it yet. Basically I'm looking for the other side of a BitmapRegionDecoder, a BitmapRegionEncoder.
Just to be clear, I do not want to resize the image.
1. Load the image in horizontal stripes using BitmapRegionDecoder. The code below assumes the source is a PNG and uses PNGJ to copy the metadata to the new image, but adding support for JPEG should not be too difficult.
2. Process each stripe with RenderScript.
3. Save it using PNGJ. Do not use high compression or it will slow down to a crawl.
A PNG version of a 4850x3635 px image takes about 12 seconds on a Nexus 5 with a trivial RS filter (desaturation).
void processPng(String forig, String fdest) {
    try {
        Allocation inAllocation = null;
        Allocation outAllocation = null;
        final int block_height = 64;

        FileInputStream orig = new FileInputStream(forig);
        FileInputStream orig2 = new FileInputStream(forig);
        FileOutputStream dest = new FileOutputStream(fdest);

        BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(orig, false);
        Rect blockRect = new Rect();

        PngReader pngr = new PngReader(orig2);
        PngWriter pngw = new PngWriter(dest, pngr.imgInfo);
        pngw.copyChunksFrom(pngr.getChunksList());
        // keep compression quick
        pngw.getPixelsWriter().setDeflaterCompLevel(1);

        int channels = 3; // needless to say, this should not be hardcoded
        int width = pngr.imgInfo.samplesPerRow / channels;
        int height = pngr.imgInfo.rows;
        pngr.close(); // don't need it anymore

        blockRect.left = 0;
        blockRect.right = width;

        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inPreferredConfig = Bitmap.Config.ARGB_8888;
        Bitmap blockBitmap;

        byte[] bytes = new byte[width * block_height * 4];
        byte[] byteline = new byte[width * channels];

        for (int row = 0; row <= height / block_height; row++) {
            int h;
            // are we nearing the end?
            if ((row + 1) * block_height <= height)
                h = block_height;
            else {
                h = height - row * block_height;
                // so that new, smaller Allocations are created
                inAllocation = outAllocation = null;
            }

            blockRect.top = row * block_height;
            blockRect.bottom = row * block_height + h;
            blockBitmap = decoder.decodeRegion(blockRect, options);

            if (inAllocation == null)
                inAllocation = Allocation.createFromBitmap(mRS, blockBitmap);
            if (outAllocation == null) {
                Type.Builder TypeDir = new Type.Builder(mRS, Element.U8_4(mRS));
                TypeDir.setX(width).setY(h);
                outAllocation = Allocation.createTyped(mRS, TypeDir.create());
            }

            inAllocation.copyFrom(blockBitmap);
            mScript.forEach_saturation(inAllocation, outAllocation);
            outAllocation.copyTo(bytes);

            int idx = 0;
            for (int raster = 0; raster < h; raster++) {
                for (int m = 0; m < width; m++) {
                    byteline[m * channels] = bytes[idx++];
                    byteline[m * channels + 1] = bytes[idx++];
                    byteline[m * channels + 2] = bytes[idx++];
                    idx++;
                }
                ImageLineByte line = new ImageLineByte(pngr.imgInfo, byteline);
                pngw.writeRow(line);
            }
        }
        pngw.end();
    } catch (IOException e) {
        Log.d("BIG", "File io problem");
    }
}
Based on @MiloslawSmyk's answer, this is a version that loads a large JPEG and saves it with PNGJ:
fun processPng(forig: String, fdest: String) {
    try {
        val blockHeight = 64

        val orig = FileInputStream(forig)
        val dest = FileOutputStream(fdest)

        val decoder = BitmapRegionDecoder.newInstance(orig, false)
        val blockRect = Rect()
        val channels = 3 // needless to say, this should not be hardcoded

        val sizeOptions = BitmapFactory.Options().apply {
            inJustDecodeBounds = true
        }
        BitmapFactory.decodeFile(forig, sizeOptions)
        val height: Int = sizeOptions.outHeight
        val width: Int = sizeOptions.outWidth

        val pngw = PngWriter(dest, ImageInfo(width, height, 8, false))
        // keep compression quick
        pngw.pixelsWriter.deflaterCompLevel = 1

        blockRect.left = 0
        blockRect.right = width

        val options = BitmapFactory.Options().apply {
            inPreferredConfig = Bitmap.Config.ARGB_8888
        }
        var blockBitmap: Bitmap
        val byteLine = ByteArray(width * channels)

        for (row in 0..height / blockHeight) {
            // are we nearing the end?
            val h: Int = if ((row + 1) * blockHeight <= height)
                blockHeight
            else {
                height - row * blockHeight
            }

            blockRect.top = row * blockHeight
            blockRect.bottom = row * blockHeight + h
            blockBitmap = decoder.decodeRegion(blockRect, options)

            // convert bitmap into byte array
            val size = blockBitmap.rowBytes * blockBitmap.height
            val byteBuffer = ByteBuffer.allocate(size)
            blockBitmap.copyPixelsToBuffer(byteBuffer)
            val bytes = byteBuffer.array()

            var idx = 0
            for (raster in 0 until h) {
                for (m in 0 until width) {
                    byteLine[m * channels] = bytes[idx++]
                    byteLine[m * channels + 1] = bytes[idx++]
                    byteLine[m * channels + 2] = bytes[idx++]
                    idx++
                }
                val line = ImageLineByte(pngw.imgInfo, byteLine)
                pngw.writeRow(line)
            }
        }
        pngw.end()
    } catch (e: IOException) {
        Log.d("BIG", "File io problem")
    }
}
