I am processing a camera frame using OpenCV. The functionality works fine, but it takes between 40 and 60 ms to process each frame, and I am wondering whether the execution time can be improved. These are the steps I take before passing the Mat to the OpenCV cascade classifier:
mPixelBuf.rewind();
GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, mPixelBuf);
Bitmap bmp = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
mPixelBuf.rewind();
bmp.copyPixelsFromBuffer(mPixelBuf);
// the bitmap arrives upside-down; flip it
android.graphics.Matrix m = new android.graphics.Matrix();
m.preScale(1, -1);
Bitmap flipped = Bitmap.createBitmap(bmp, 0, 0, mWidth, mHeight, m, false);
Mat mat = new Mat();
// convert bitmap to mat
Utils.bitmapToMat(flipped, mat);
// convert to grayscale (bitmapToMat produces an RGBA Mat, so use the RGBA variant)
Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGBA2GRAY);
// pass the mat to cascade classifier...
I suspect there are smarter ways to avoid this extra processing, or perhaps the image resolution could be reduced for faster processing.
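One way to cut this down (a sketch, not tested against your pipeline, reusing the mWidth/mHeight and mPixelBuf from the code above; the 0.5 scale factor is an arbitrary example) is to skip the Bitmap round trip entirely: copy the glReadPixels output straight into a Mat and do the flip, grayscale conversion, and an optional downscale in OpenCV. This also removes the per-frame Bitmap allocations.

// Reuse these across frames instead of reallocating each time.
byte[] pixelBytes = new byte[mWidth * mHeight * 4];
Mat rgba = new Mat(mHeight, mWidth, CvType.CV_8UC4);
Mat gray = new Mat();
Mat small = new Mat();

mPixelBuf.rewind();
GLES20.glReadPixels(0, 0, mWidth, mHeight,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, mPixelBuf);
mPixelBuf.rewind();
mPixelBuf.get(pixelBytes);
rgba.put(0, 0, pixelBytes);

// glReadPixels output is upside-down; flipping around the x-axis fixes it.
Core.flip(rgba, rgba, 0);
Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);

// Detection on a half-resolution image is usually much faster; remember
// to scale the detected rectangles back up afterwards.
Imgproc.resize(gray, small, new Size(), 0.5, 0.5, Imgproc.INTER_AREA);
// pass `small` to the cascade classifier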
I have a Unity scene where I want to pass a Texture2D into Android, do some processing, then save it as a bitmap. My code isn't working. For simplicity's sake I removed the processing part and am trying to just save the image as a bitmap.
On the Unity side there's some initialization for the Android plugin, plus this line:
_pluginInterface.CallStatic("ProcessImage", testTexture.GetNativeTexturePtr().ToInt32(), testTexture.width, testTexture.height);
On the Java side:
public static void ProcessImage(int ptr, int width, int height) {
    Bitmap b = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888); // this is just so I get the right length
    int byteCount = b.getByteCount();
    ByteBuffer inputBuffer = ByteBuffer.allocate(byteCount);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ptr);
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, inputBuffer);
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    inputBuffer.rewind();
    bmp.copyPixelsFromBuffer(inputBuffer);
    SaveToFileDebug(bmp); // basic function that saves the bitmap
}
As you can probably tell, I'm all over the place with the encodings, and I think that might be the problem; they don't seem to match between Unity and Android.
Thanks in advance.
Instead of such coding acrobatics, just use GetRawTextureData and create your bitmap from the returned byte array.
Example
byte[] raw = testTexture.GetRawTextureData();
_pluginInterface.CallStatic("YourMethod", raw, testTexture.width, testTexture.height);
Java side:
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(ByteBuffer.wrap(raw));
Note that GetRawTextureData returns raw pixels rather than an encoded image, so BitmapFactory.decodeByteArray will not work here; copyPixelsFromBuffer does, assuming the texture format is RGBA32. For other texture formats, convert first, or call EncodeToPNG on the Unity side and decode that instead.
I'd like to load a huge bitmap as a texture into the graphics card's memory on Android, in OpenGL ES 2.0, to be used as a texture atlas, in the biggest size possible. My device has a maximum texture size of 8192x8192.
I know that I can load a bitmap as a texture the following way:
// create bitmap
Bitmap bitmap = Bitmap.createBitmap(8192, 8192, Bitmap.Config.ARGB_8888);
{ // draw something
    Canvas c = new Canvas(bitmap);
    Paint p = new Paint(Paint.ANTI_ALIAS_FLAG);
    p.setColor(0xFFFF0000);
    c.drawCircle(4096, 4096, 4096, p);
}
// load as a texture
GLUtils.texImage2D(GL_TEXTURE_2D, 0, bitmap, 0);
However (not surprisingly), I get a java.lang.OutOfMemoryError when trying to create a bitmap of this size.
Is it possible to load it in parts? As it's a texture atlas, it could be assembled from smaller bitmaps. I looked at the texSubImage2D function, but I don't understand where you would initialize the full-sized texture, or provide the size of the full texture beforehand.
On the GL side you need to allocate the full storage, and then patch it.
Allocate storage using glTexImage2D() with a null value for the data parameter. Upload patches using glTexSubImage2D().
Note that an 8192x8192 RGBA texture still requires 256 MB of memory (8192 × 8192 × 4 bytes), so on many budget devices you'll still get an OOM ...
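In GLES20 Java terms, that allocate-then-patch pattern looks roughly like this (a sketch; the tile size and offsets are arbitrary examples, and the GLUtils-based variant in the next answer works just as well):

// Allocate the full 8192x8192 RGBA storage with no initial data.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 8192, 8192, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);

// Later, upload a 1024x1024 tile at offset (1024, 2048) from a pixel buffer.
ByteBuffer tile = ByteBuffer.allocateDirect(1024 * 1024 * 4)
        .order(ByteOrder.nativeOrder());
// ... fill `tile` with RGBA pixel data ...
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 1024, 2048, 1024, 1024,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, tile);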
Based on solidpixel's answer, here is code that does the job:
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 8192, 8192, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);

Bitmap bitmap = Bitmap.createBitmap(1024, 1024, Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(bitmap);
Paint p = new Paint(Paint.ANTI_ALIAS_FLAG);
for (int i = 0; i < 8; ++i) {
    for (int j = 0; j < 8; ++j) {
        // clear the bitmap
        c.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);
        // draw something
        p.setARGB(255, 32 * i, 32 * j, 255 - 32 * j);
        c.drawCircle(512, 512, 512, p);
        // upload as one tile of the texture
        GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, i * 1024, j * 1024, bitmap);
    }
}
Here the texture is assembled from 64 tiles of 1024x1024 pixels each, all drawn into a single reused bitmap.
I have an Android Bitmap in my code and I would like to run the cvCanny method on it. However, it needs to be in a Mat first. How do I convert the data to Mat, and how do I convert it back to Bitmap when I'm done?
First, import org.opencv.android.Utils.
Then use:
Mat src = new Mat();
Utils.bitmapToMat(bitmap, src);
To perform edge detection:
Mat dest = new Mat();
Imgproc.Canny(src, dest, min, max); // min and max are the hysteresis thresholds
Bitmap edgeBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(dest, edgeBitmap);
//edgeBitmap is ready
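One caveat, as an assumption about the input rather than something stated in the question: Utils.bitmapToMat produces a 4-channel RGBA Mat, and Canny is normally run on a single-channel image, so if the edges look off, convert to grayscale first:

// Convert the RGBA Mat to single-channel grayscale before edge detection.
Mat gray = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_RGBA2GRAY);
Imgproc.Canny(gray, dest, min, max);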
I have a little problem with rendering an object offscreen to create a bitmap of it and displaying that in an ImageView: it does not show the alpha channel correctly.
It works fine when I save the bitmap as a PNG and then load that. But when I load it directly into an ImageView, I see a white background, which is the actual background color without the alpha channel.
Here is the code for exporting the bitmap from my EGL surface:
public Bitmap exportBitmap() {
    ByteBuffer buffer = ByteBuffer.allocateDirect(w * h * 4);
    GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);
    Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buffer);
    Log.i("Render", "Terminating");
    EGL14.eglMakeCurrent(oldDisplay, oldDrawSurface, oldReadSurface, oldCtx);
    EGL14.eglDestroySurface(eglDisplay, eglSurface);
    EGL14.eglDestroyContext(eglDisplay, eglCtx);
    EGL14.eglTerminate(eglDisplay);
    return bitmap;
}
And here is the code that sets the ImageView (r is an instance of the class containing the previous function):
Bitmap bm;
bm = r.exportBitmap();
ImageView infoView = (ImageView)findViewById(R.id.part_icon);
infoView.setImageBitmap(bm);
Do I have to set some flags on the ImageView or set something in the config of the bitmap?
I'll add some code examples to clarify the problem.
First, the way I want it to work:
bm = renderer.exportBitmap();
Second, the way it works, with the save-to-PNG workaround:
bm = renderer.exportBitmap();
//PNG To Bitmap
String path = Environment.getExternalStorageDirectory()+"/"+getName()+".png";
bm.compress(CompressFormat.PNG, 100, new FileOutputStream(new File(path)));
bm = BitmapFactory.decodeFile(path);
Third, to clarify my premultiplied-alpha suspicion, alpha is taken into account the wrong way:
bm = renderer.exportBitmap();
for (int x = bm.getWidth() - 50; x < bm.getWidth(); x++) {
    for (int y = bm.getHeight() - 50; y < bm.getHeight(); y++) {
        int px = bm.getPixel(x, y);
        bm.setPixel(x, y,
                Color.argb(255,
                        Color.red(px),
                        Color.green(px),
                        Color.blue(px)));
    }
}
Sorry for the long post.
I have changed the exportBitmap() function: basically, I read the pixels and write them again. This solves the problem, though I really don't know why, and it would be nice if somebody could explain it to me. It is not that clean to create a new integer array for the bitmap data.
public Bitmap exportBitmap() {
    ByteBuffer buffer = ByteBuffer.allocateDirect(w * h * 4);
    GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);
    Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buffer);
    // This is the modification: round-trip the pixels through getPixels/setPixels.
    // Likely explanation: glReadPixels returns straight (non-premultiplied) alpha,
    // but ARGB_8888 bitmaps are assumed to hold premultiplied pixels, so
    // copyPixelsFromBuffer leaves the bitmap with invalid data. The round trip
    // re-premultiplies (and clamps) every pixel, making the data valid again.
    int[] pixels = new int[w * h];
    bitmap.getPixels(pixels, 0, w, 0, 0, w, h);
    bitmap.setPixels(pixels, 0, w, 0, 0, w, h);
    EGL14.eglMakeCurrent(oldDisplay, oldDrawSurface, oldReadSurface, oldCtx);
    EGL14.eglDestroySurface(eglDisplay, eglSurface);
    EGL14.eglDestroyContext(eglDisplay, eglCtx);
    EGL14.eglTerminate(eglDisplay);
    return bitmap;
}
I'm trying to convert a Mat rectangle from onCameraFrame to a bitmap and then send the bitmap to an OCR function, because I need the OCR to work in real time on an ROI defined by the rectangle. I tried these lines:
Mat mgray= inputFrame.gray();
Mat grayInnerWindow = mgray.submat(100, 100 +500, 150, 200 + 50);
Imgproc.cvtColor(mIntermediateMat, rgbaInnerWindow, Imgproc.COLOR_GRAY2BGRA, 4);
rgbaInnerWindow.release();
Size sizeGray = grayInnerWindow.size();
int rowss = (int) sizeGray.height;
int colss = (int) sizeGray.width;
Mat tmp = new Mat(colss, rowss, CvType.CV_8UC4);
Imgproc.cvtColor(rgbaInnerWindow, tmp, Imgproc.COLOR_RGBA2BGRA, 4);
Bitmap bmp = Bitmap.createBitmap(tmp.cols(), tmp.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(tmp, bmp);
ImageView Img = (ImageView)findViewById(R.id.image_manipulations_activity_surface_view);
Img.setImageBitmap(bmp);
but when I run it, the app terminates.
You are releasing rgbaInnerWindow, then re-using it as input to cvtColor.
Yep, that will burn: rgbaInnerWindow is invalid/empty now.
Also, why all those cvtColor calls? Can't you just pass an 8-bit grayscale image to your OCR?
(Converting a 4-channel, all-grayscale image from RGBA to BGRA is utterly useless.)
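A minimal sketch of that advice (assuming your OCR step can take a Bitmap, and keeping the submat coordinates from the question): crop the gray frame, convert it to a Bitmap once, and skip the extra color conversions and the premature release entirely.

Mat mgray = inputFrame.gray();

// Crop the ROI: submat(rowStart, rowEnd, colStart, colEnd).
Mat grayInnerWindow = mgray.submat(100, 600, 150, 250);

// matToBitmap accepts a single-channel Mat directly; no cvtColor needed.
Bitmap bmp = Bitmap.createBitmap(grayInnerWindow.cols(), grayInnerWindow.rows(),
        Bitmap.Config.ARGB_8888);
Utils.matToBitmap(grayInnerWindow, bmp);

// pass bmp (or grayInnerWindow itself) to the OCR function;
// release intermediate Mats only after the last call that reads them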