I am trying to blur bitmaps in my app using RenderScript framework. I am using the following code:
public static Bitmap apply(Context context, Bitmap sentBitmap, int radius) {
    Bitmap bitmap = sentBitmap.copy(sentBitmap.getConfig(), true);

    final RenderScript rs = RenderScript.create(context);
    final Allocation input = Allocation.createFromBitmap(rs, sentBitmap,
            Allocation.MipmapControl.MIPMAP_NONE,
            Allocation.USAGE_SCRIPT);
    final Allocation output = Allocation.createTyped(rs, input.getType());

    final ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
    script.setRadius(radius);
    script.setInput(input);
    script.forEach(output);
    output.copyTo(bitmap);

    return bitmap;
}
Unfortunately all I get with the code is black bitmaps. How can I fix the issue?
Bitmaps passed to the apply method are created in the following way:
Bitmap b = Bitmap.createBitmap(thisView.getWidth(),
        thisView.getHeight(),
        Bitmap.Config.ARGB_8888);
The width and height of these bitmaps are multiples of 4.
RenderScript also reports some errors, but I don't know what they mean or how to fix them (the documentation for ScriptIntrinsicBlur is rather thin). Here are the errors:
20305-20391/com.xxx E/RenderScript﹕ rsi_ScriptIntrinsicCreate 5
20305-20391/com.xxx E/RenderScript﹕ rsAssert failed: mUserRefCount > 0, in
frameworks/rs/rsObjectBase.cpp at 112
EDIT:
The radius is 5 and I am running the app on Samsung Galaxy Nexus with Android 4.2.1.
Use this function to blur your input bitmap image:
Bitmap BlurImage(Bitmap input) {
    RenderScript rsScript = RenderScript.create(this);
    Allocation alloc = Allocation.createFromBitmap(rsScript, input);

    ScriptIntrinsicBlur blur = ScriptIntrinsicBlur.create(rsScript, alloc.getElement());
    blur.setRadius(12);
    blur.setInput(alloc);

    Bitmap result = Bitmap.createBitmap(input.getWidth(), input.getHeight(), input.getConfig());
    Allocation outAlloc = Allocation.createFromBitmap(rsScript, result);
    blur.forEach(outAlloc);
    outAlloc.copyTo(result);

    rsScript.destroy();
    return result;
}
Thanks to @Tim Murray, I fixed the issue (there were two, actually).
I switched to using the RenderScript support library, and I hope Android Studio with Gradle-based projects will eventually learn to resolve the support library's symbols.
The other major source of problems was that I was feeding completely transparent bitmaps into ScriptIntrinsicBlur; blurring a fully transparent image just produces another transparent image, which renders as black. My bad.
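For reference, a minimal sketch of what fixed the transparent-input problem for me: render the view into the bitmap before handing it to the blur (thisView and the ARGB_8888 bitmap are the ones from the question, and apply() is the method shown above):

// Sketch: fill the bitmap with the view's pixels first, so ScriptIntrinsicBlur has
// actual content to blur instead of a fully transparent image.
Bitmap b = Bitmap.createBitmap(thisView.getWidth(),
        thisView.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(b);
thisView.draw(canvas);
Bitmap blurred = apply(context, b, 5);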
EDIT from March-07-2014:
Android Studio 0.5 fixes the issues with the support-library RenderScript in Gradle-powered projects.
I want to blur a Bitmap object in Android. Currently I have the following code, which blurs an ImageView:
private fun blurImageView(radius: Float) {
    if (Build.VERSION.SDK_INT >= 31) {
        binding.activityMainImageView.setRenderEffect(
            RenderEffect.createBlurEffect(radius, radius, Shader.TileMode.CLAMP)
        )
    }
}
I want to get the underlying Bitmap object, so I tried to achieve that by doing the following:
binding.activityMainImageView.drawToBitmap()
But it doesn't seem to be working.
So how would I go about simply blurring a Bitmap object? Is this possible with RenderScript at all? If not, what are my options for creating a blur effect on a Bitmap and getting the underlying Bitmap object back?
The developer documentation gives no information on how to do this.
You can create a bitmap blur effect in Android using the RenderScript API!
The code may look like this:
public Bitmap blur(Bitmap image) {
    if (null == image) {
        return null;
    }

    Bitmap outputBitmap = Bitmap.createBitmap(image);
    final RenderScript renderScript = RenderScript.create(this);
    Allocation tmpIn = Allocation.createFromBitmap(renderScript, image);
    Allocation tmpOut = Allocation.createFromBitmap(renderScript, outputBitmap);

    // Intrinsic Gaussian blur filter
    ScriptIntrinsicBlur theIntrinsic = ScriptIntrinsicBlur.create(renderScript, Element.U8_4(renderScript));
    theIntrinsic.setRadius(25); // the radius must be in (0, 25]
    theIntrinsic.setInput(tmpIn);
    theIntrinsic.forEach(tmpOut);
    tmpOut.copyTo(outputBitmap);
    return outputBitmap;
}
Then use:
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.mipmap.test_pic);
Bitmap blurredBitmap = blur(bitmap);
Hope this can help you!
Lots of questions have been asked about the camera2 API and the RAW image format, but searching online I still have not found the answer (that's why I am here, btw).
I am trying to do some real-time image processing on camera-captured frames using ImageReader and setRepeatingRequest with the front-facing camera. As suggested in some previous posts, I am acquiring the image in an uncompressed YUV format (specifically ImageFormat.YUV_420_888) in order to get a frame rate around 30 fps:
imageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
My image-processing algorithm requires an RGB image as input, so I need to convert from YUV to RGB. To do that, I use ScriptIntrinsicYuvToRGB:
private static Bitmap YUV_420_888_toRGBIntrinsics(Image image) {
    if (image == null) return null;

    int W = image.getWidth();
    int H = image.getHeight();

    Image.Plane Y = image.getPlanes()[0];
    Image.Plane U = image.getPlanes()[1];
    Image.Plane V = image.getPlanes()[2];

    int Yb = Y.getBuffer().remaining();
    int Ub = U.getBuffer().remaining();
    int Vb = V.getBuffer().remaining();

    byte[] data = new byte[Yb + Ub + Vb];
    Y.getBuffer().get(data, 0, Yb);
    V.getBuffer().get(data, Yb, Vb);
    U.getBuffer().get(data, Yb + Vb, Ub);

    rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

    Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(data.length);
    Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

    Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(W).setY(H);
    Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

    final Bitmap bmpout = Bitmap.createBitmap(W, H, Bitmap.Config.ARGB_8888);

    in.copyFromUnchecked(data);
    yuvToRgbIntrinsic.setInput(in);
    yuvToRgbIntrinsic.forEach(out);
    out.copyTo(bmpout);

    image.close();
    return bmpout;
}
This method is quite fast: it converts a 1080p image in less than 20 ms. The only issue is that the resulting image is rotated by 270 degrees (i.e. the picture comes out as if taken in landscape mode). Even if I set JPEG_ORIENTATION in the capture request builder (captureRequestBuilder.set(CaptureRequest.JPEG_ORIENTATION, characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION));), the result is still the same.
Here are my questions:
Is there a way to get back a rotated image through renderscript intrinsics?
Is there a "rotate" function that does not allocate memory?
Are there settings for image rotation of a YUV type?
The other solutions I have tried (Matrix rotation, YUV array rotation) are quite slow. Moreover, I think rotating an image by 90/180/270 degrees should be an easy task if done after capture: you just need to write rows out as columns (somehow).
No, there's no built-in rotation for YUV output. To minimize overhead, it's always produced as-is from the image sensor. You can read the SENSOR_ORIENTATION field to determine how the image sensor is placed on the device; typically the long edge of the image sensor lines up with the long edge of the Android device, but that still leaves two rotations that are valid.
Also, if your goal is to have the image 'upright', then you also need to read what the device's orientation is from the accelerometer, and add that in to the rotation.
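For illustration, here is a small sketch of that computation, along the lines of the formula in the SENSOR_ORIENTATION documentation (the helper name and parameters are mine, not part of the camera2 API):

// Sketch: how many degrees the captured frame must be rotated to appear upright,
// given the sensor orientation and the device's current orientation in degrees.
static int getRotationCompensation(CameraCharacteristics c, int deviceOrientationDegrees) {
    int sensorOrientation = c.get(CameraCharacteristics.SENSOR_ORIENTATION);
    // Round the device orientation to a multiple of 90 degrees.
    int deviceOrientation = (deviceOrientationDegrees + 45) / 90 * 90;
    // Front-facing sensors are mirrored, so the device rotation is applied in reverse.
    if (c.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_FRONT) {
        deviceOrientation = -deviceOrientation;
    }
    return (sensorOrientation + deviceOrientation + 360) % 360;
}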
You're doing a copy already getting the frame from the Image into the Allocation, so doing a 90/180/270 degree rotation then is relatively straightforward, though memory-bandwidth-intensive.
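If you do want to rotate during that copy ("save rows instead of columns", as you put it), a minimal sketch for one tightly packed 8-bit plane might look like the hypothetical helper below; the chroma planes would need the same treatment at their own resolution, and row strides/padding are ignored here:

// Sketch: rotate a single tightly packed 8-bit plane (e.g. the Y plane) by 90 degrees
// clockwise. The source pixel at (x, y) ends up at column (height - 1 - y) of row x
// in the rotated plane, whose row length is the old height.
static byte[] rotatePlane90cw(byte[] src, int width, int height) {
    byte[] dst = new byte[src.length];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            dst[x * height + (height - 1 - y)] = src[y * width + x];
        }
    }
    return dst;
}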
You can also take a look at one of Google's sample apps, HdrViewfinderDemo, which pipes camera data into RenderScript without the intermediate copy you're doing, and then converts to RGB to draw to a SurfaceView. It doesn't have a rotation in it now, but you could adjust the lookup done via rsGetElementAtYuv_uchar_* to do 90-increments.
I'm trying to load a photo from the web and perform a blur on it, outputting the blurred image as a separate bitmap. My code looks like this:
URL url = new URL(myUrl);
mNormalImage = BitmapFactory.decodeStream(url.openStream());

final RenderScript rs = RenderScript.create(mContext);
final Allocation input = Allocation.createFromBitmap(rs, mNormalImage,
        Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);
final Allocation output = Allocation.createTyped(rs, input.getType());

final ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
script.setRadius(3.f);
script.setInput(input);
script.forEach(output);
output.copyTo(mBlurredImage);
and I'm getting the error:
android.renderscript.RSIllegalArgumentException:
Cannot update allocation from bitmap, sizes mismatch
Why is this happening?
Where is mBlurredImage created? This is happening because the size of that bitmap doesn't match the input. You should create it using something like:
Bitmap mBlurredImage = Bitmap.createBitmap(
        mNormalImage.getWidth(),
        mNormalImage.getHeight(),
        mNormalImage.getConfig());
Using the Glide Android library, I get the image as a bitmap (see the Glide documentation) and then try to blur it using RenderScript and ScriptIntrinsicBlur, which is a Gaussian blur (taken from this Stack Overflow post).
Glide.with(getApplicationContext())
        .load(ImageUrl)
        .asBitmap()
        .into(new SimpleTarget<Bitmap>(300, 200) {
            @Override
            public void onResourceReady(Bitmap resource, GlideAnimation glideAnimation) {
                RenderScript rs = RenderScript.create(mContext); // mContext = this, referring to the activity
                final Allocation input = Allocation.createFromBitmap(rs, resource,
                        Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);
                final Allocation output = Allocation.createTyped(rs, input.getType());

                final ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
                script.setRadius(8f);
                script.setInput(input);
                script.forEach(output);
                output.copyTo(resource);

                mImageView.setImageBitmap(resource);
            }
        });
The problem is that the output I get (see the attached screenshot) is not a blurred image.
Any help would be much appreciated, thanks. :)
ScriptIntrinsicBlur supports only the U8_4 and U8 formats, so you'll have to convert your bitmap to ARGB_8888 before you send it to RenderScript, as in this example:
Bitmap U8_4Bitmap;
if (sentBitmap.getConfig() == Bitmap.Config.ARGB_8888) {
    U8_4Bitmap = sentBitmap;
} else {
    U8_4Bitmap = sentBitmap.copy(Bitmap.Config.ARGB_8888, true);
}

//==============================

Bitmap bitmap = Bitmap.createBitmap(U8_4Bitmap.getWidth(), U8_4Bitmap.getHeight(), U8_4Bitmap.getConfig());

final RenderScript rs = RenderScript.create(context);
final Allocation input = Allocation.createFromBitmap(rs,
        U8_4Bitmap,
        Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_SCRIPT);
final Allocation output = Allocation.createTyped(rs, input.getType());

final ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create(rs, output.getElement());
script.setRadius(radius);
script.setInput(input);
script.forEach(output);
output.copyTo(bitmap);

rs.destroy();
return bitmap;
Is it possible that the input image is not a U8_4 (i.e. RGBA8888)? Can you switch from using "Element.U8_4(rs)" to instead use "output.getElement()"? That would probably do the right thing. If it turns out that the image is not RGBA8888, you might at least get a Java exception describing what the underlying format is (if it isn't something supported with our Blur).
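Applied to the snippet in the question, the suggested change is just this one line (a sketch; rs and output are the variables already defined there):

// Derive the Element from the output Allocation instead of hard-coding U8_4, so the
// intrinsic is created for the bitmap's actual pixel format (or fails with a clearer error).
final ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create(rs, output.getElement());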
I need to blur an image with a SeekBar that lets the user control the radius of the blur. I use the method below; however, it seems to waste memory and time because it creates new bitmaps on every call whenever the SeekBar value changes. What is the best approach for a live blur implementation with RenderScript?
public static Bitmap blur(Context ctx, Bitmap image, float blurRadius) {
    int width = Math.round(image.getWidth() * BITMAP_SCALE);
    int height = Math.round(image.getHeight() * BITMAP_SCALE);

    Bitmap inputBitmap = Bitmap.createScaledBitmap(image, width, height, false);
    Bitmap outputBitmap = Bitmap.createBitmap(width, height, Config.ARGB_8888);

    RenderScript rs = RenderScript.create(ctx);
    ScriptIntrinsicBlur theIntrinsic = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
    Allocation tmpIn = Allocation.createFromBitmap(rs, inputBitmap);
    Allocation tmpOut = Allocation.createFromBitmap(rs, outputBitmap);

    theIntrinsic.setRadius(blurRadius);
    theIntrinsic.setInput(tmpIn);
    theIntrinsic.forEach(tmpOut);
    tmpOut.copyTo(outputBitmap);

    rs.destroy();
    if (inputBitmap != outputBitmap) {
        inputBitmap.recycle();
    }
    return outputBitmap;
}
These calls (the ones shown in the fragment below) can be quite expensive and really should be made in an outer part of your application. You can then reuse the RenderScript context and the ScriptIntrinsicBlur whenever you need them, and you should not destroy them when the function finishes (since you will be reusing them). For greater savings, you can pass the actual input/output bitmaps (or their Allocations) into your routine and keep those steady too. There really is a lot of dynamic creation/destruction in this snippet, and I imagine some of these things don't change frequently (and thus don't need to be recreated from scratch every time).
...
RenderScript rs = RenderScript.create(ctx);
ScriptIntrinsicBlur theIntrinsic = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
...
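To make that concrete, here is a rough sketch (class and method names are mine, not from the original code) that creates the RenderScript context, the intrinsic, and the allocations once and only updates the radius on each SeekBar change:

// Sketch: keep the expensive RenderScript objects alive for the lifetime of the
// screen and reuse them; only setRadius/forEach/copyTo run per SeekBar change.
public class LiveBlur {
    private final RenderScript rs;
    private final ScriptIntrinsicBlur intrinsic;
    private final Allocation in;
    private final Allocation out;
    private final Bitmap outputBitmap;

    // scaledInput is the already scaled ARGB_8888 bitmap to blur; it is uploaded
    // to the input Allocation once, here in the constructor.
    public LiveBlur(Context ctx, Bitmap scaledInput) {
        rs = RenderScript.create(ctx);
        intrinsic = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
        in = Allocation.createFromBitmap(rs, scaledInput);
        out = Allocation.createTyped(rs, in.getType());
        outputBitmap = Bitmap.createBitmap(scaledInput.getWidth(),
                scaledInput.getHeight(), Bitmap.Config.ARGB_8888);
        intrinsic.setInput(in);
    }

    // Called on every SeekBar change; the radius must be in (0, 25].
    public Bitmap blur(float radius) {
        intrinsic.setRadius(radius);
        intrinsic.forEach(out);
        out.copyTo(outputBitmap);
        return outputBitmap;
    }

    // Call when the Activity/Fragment goes away.
    public void release() {
        rs.destroy();
    }
}

In the SeekBar listener you then just call blur(newRadius) and set the returned bitmap on the ImageView; nothing new is allocated per change.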