JavaCV - Rendering to a GL Surface - android

Using JavaCV to consume a multicast stream, I want to render the video frames in a GLSurfaceView. The frames are grabbed using the FFmpegFrameGrabber class; I have successfully output the captured frames to the sdcard and to a non-GL Surface for visual debugging. I have looked all over for a solution or a clue, to no avail; here is the section of code where help is needed:
// get the frame
opencv_core.IplImage img = capture.grab();
if (img != null) {
    opencv_core.CvMat rgbaImg = opencv_core.CvMat.create(height, width, CV_8U, 4);
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    // convert colorspace
    cvCvtColor(img, rgbaImg, CV_BGR2RGBA);
    bitmap.copyPixelsFromBuffer(rgbaImg.getByteBuffer());
    Rect rect = new Rect(x, y, width, height);
    Canvas c = surface.lockCanvas(rect);
    c.drawBitmap(bitmap, 0, 0, null);
    surface.unlockCanvasAndPost(c);
    if (bitmap != null) {
        bitmap.recycle();
    }
    if (rgbaImg != null) {
        rgbaImg.release();
    }
}
Also, if there is a more optimal way to do any of the above, let me know.
Edit: Since there's not much action on the first part of this question, would a "workaround" of rendering on the SurfaceTexture that is used to create the Surface be a possibility instead?
SurfaceTexture surfaceTexture = new SurfaceTexture(textureId);
surfaceTexture.setOnFrameAvailableListener(this);
surface = new Surface(surfaceTexture);
Note: I am forced to stick with Android 4.2.2 for now.

ffmpeg with wild video formats:
For your first method, you can speed it up by sharing (reusing) the bitmap and the other resources across frames; eliminating the per-frame allocations will speed it up a lot.
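As a minimal sketch of that reuse, based on the question's own code (the field names and the constant frame size are assumptions), the conversion buffers can be allocated once and kept for the whole capture session:

// Sketch: allocate the conversion buffers once and reuse them for every grabbed frame.
// Assumes width/height stay constant and "surface" is the non-GL Surface from the question.
opencv_core.CvMat rgbaImg = opencv_core.CvMat.create(height, width, CV_8U, 4);
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

void drawFrame(opencv_core.IplImage img) {
    cvCvtColor(img, rgbaImg, CV_BGR2RGBA);
    bitmap.copyPixelsFromBuffer(rgbaImg.getByteBuffer());
    Canvas c = surface.lockCanvas(null);      // null dirty rect: redraw the whole surface
    c.drawBitmap(bitmap, 0, 0, null);
    surface.unlockCanvasAndPost(c);
    // no recycle()/release() here -- the buffers live for the whole capture session
}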
As for rendering FFmpeg results to a GLSurfaceView, you should look here:
(I have used both JJmpeg and JavaCV)
https://code.google.com/p/jjmpeg/source/browse/#svn%2Fbranches%2Fffmpeg-0.10-android%2Fjjmpeg%2Fsrc%2Fau%2Fnotzed%2Fjjmpeg%2Fmediaplayer
Most of the gems are here: (GLESVideoRenderer.onDrawFrame method)
https://code.google.com/p/jjmpeg/source/browse/branches/ffmpeg-0.10-android/jjmpeg/src/au/notzed/jjmpeg/mediaplayer/GLESVideoRenderer.java
The basic idea is to load each frame into a 2D texture and then draw it.
You can adapt the FFmpegFrameGrabber output into a renderer for the GLSurfaceView; frame rates will vary between devices.
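For reference, here is a minimal sketch of that idea (this is not the jjmpeg code; pendingFrame, textureId and drawTexturedQuad are assumed names, and the texture is assumed to have been allocated with glTexImage2D in onSurfaceChanged):

@Override
public void onDrawFrame(GL10 unused) {
    Bitmap frame = pendingFrame;                 // assumed: filled by the grabber thread
    if (frame != null) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        // texSubImage2D reuses the texture storage allocated once in onSurfaceChanged
        GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, frame);
    }
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    drawTexturedQuad(textureId);                 // assumed helper: draws a full-screen textured quad
}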
If you know the video format:
What you really should do, since you are already on Android 4.2.2, is to use MediaCodec from the SDK and push the frames directly onto a Surface.
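A rough sketch of that approach, assuming the stream is something MediaExtractor can demux (for a raw multicast stream you would feed the decoder's input buffers yourself); the decoder is configured with the target Surface so decoded frames never pass through Java code:

// "extractor" is a MediaExtractor already positioned on the video track,
// "surface" is the Surface you want the frames rendered onto (API 16+ buffer-array style).
MediaFormat format = extractor.getTrackFormat(videoTrackIndex);
MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
decoder.configure(format, surface, null, 0);   // render straight to the Surface
decoder.start();

ByteBuffer[] inputBuffers = decoder.getInputBuffers();
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean inputDone = false, outputDone = false;
while (!outputDone) {
    if (!inputDone) {
        int inIndex = decoder.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            int size = extractor.readSampleData(inputBuffers[inIndex], 0);
            if (size < 0) {
                decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                inputDone = true;
            } else {
                decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                extractor.advance();
            }
        }
    }
    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    if (outIndex >= 0) {
        decoder.releaseOutputBuffer(outIndex, true);   // true = render this frame to the Surface
        outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
    }
}
decoder.stop();
decoder.release();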

Related

OpenGL byte-buffer with OpenCV face detection

I am trying to overlay stickers on faces using OpenCV and OpenGL.
I am getting the ByteBuffer inside onDrawFrame:
@Override
public void onDrawFrame(GL10 unused) {
    if (VERBOSE) {
        Log.d(TAG, "onDrawFrame tex=" + mTextureId);
    }
    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);

    byteBuffer.rewind();
    GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, byteBuffer);
    mat.put(0, 0, byteBuffer.array());

    if (mCascadeClassifier != null) {
        mFaces.empty();
        mCascadeClassifier.detectMultiScale(mat, mFaces);
        Log.d(TAG, "No. of faces detected : " + mFaces.toArray().length);
    }

    drawFrame(mTextureId, mSTMatrix);
}
My mat object is initialized with the camera preview width and height:
mat = new Mat(height, width, CvType.CV_8UC3);
The log reports 0 face detections. I have two questions:
What am I missing here for face detection using OpenCV?
Also, how can I improve the performance/efficiency of the video frame rendering and still do real-time face detection? glReadPixels takes time to execute and slows down the rendering.
You are calling glReadPixels() on the GLES frame buffer before you've rendered anything. You'd need to do it after drawFrame() if you were hoping to read back the SurfaceTexture rendering. You may want to consider rendering the texture offscreen to a pbuffer EGLSurface instead, and reading back from that.
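A minimal sketch of that reordering, reusing the fields from the question (note also that GL_RGBA gives 4 bytes per pixel, so a CV_8UC4 mat is the matching type):

@Override
public void onDrawFrame(GL10 unused) {
    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);
    drawFrame(mTextureId, mSTMatrix);                 // render the camera frame first
    byteBuffer.rewind();
    GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA,
            GLES20.GL_UNSIGNED_BYTE, byteBuffer);     // now reads back what was just drawn
    mat.put(0, 0, byteBuffer.array());                // mat should be CV_8UC4 to match GL_RGBA
    // ...run detectMultiScale on mat as before...
}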
There are a few different ways to get the pixel data from the Camera:
Use the Camera byte[] APIs. Generally involves a software copy, so it tends to be slow.
Send the output to an ImageReader. This gives you immediate access to the raw YUV data.
Send the output to a SurfaceTexture, render the texture, read RGB data out with glReadPixels() (which is what I believe you are trying to do). This is generally very fast, but on some devices and versions of Android it can be slow.
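For the offscreen pbuffer route suggested earlier, here is a hedged setup sketch with EGL14 (GLES 2.0 assumed; to sample the camera's external texture from this context you would need to create it as a shared context with the preview's context):

EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] major = new int[1];
int[] minor = new int[1];
EGL14.eglInitialize(display, major, 0, minor, 0);

int[] configAttribs = {
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
        EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8,
        EGL14.EGL_BLUE_SIZE, 8, EGL14.EGL_ALPHA_SIZE, 8,
        EGL14.EGL_NONE };
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext context = EGL14.eglCreateContext(display, configs[0], EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

int[] surfaceAttribs = { EGL14.EGL_WIDTH, width, EGL14.EGL_HEIGHT, height, EGL14.EGL_NONE };
EGLSurface pbuffer = EGL14.eglCreatePbufferSurface(display, configs[0], surfaceAttribs, 0);
EGL14.eglMakeCurrent(display, pbuffer, pbuffer, context);
// ...render the camera texture here, then glReadPixels() into a direct ByteBuffer...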

Has anyone managed to obtain a YUV_420_888 frame using RenderScript and the new Camera API?

I'm using RenderScript and an Allocation to obtain YUV_420_888 frames from the Android camera2 API, but once I copy the byte[] from the Allocation I receive only the Y plane of the three planes which compose the frame, while the U and V plane values are set to 0 in the byte[]. I'm trying to mimic onPreviewFrame from the previous camera API in order to perform in-app processing of the camera frames. My Allocation is created like this:
Type.Builder yuvTypeBuilderIn = new Type.Builder(rs, Element.YUV(rs));
yuvTypeBuilderIn.setX(dimensions.getWidth());
yuvTypeBuilderIn.setY(dimensions.getHeight());
yuvTypeBuilderIn.setYuvFormat(ImageFormat.YUV_420_888);
allocation = Allocation.createTyped(rs, yuvTypeBuilderIn.create(),
Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
while my script looks like:
#pragma version(1)
#pragma rs java_package_name(my_package)
#pragma rs_fp_relaxed
rs_allocation my_frame;
The Android sample app HdrViewfinderDemo uses RenderScript to process YUV data from camera2.
https://github.com/googlesamples/android-HdrViewfinder
Specifically, the ViewfinderProcessor sets up the Allocations, and hdr_merge.rs reads from them.
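In outline, the Java side of that pipeline looks roughly like the sketch below (the script method names are hypothetical, derived from the my_frame declaration above): the YUV Allocation becomes a camera output target, and each buffer is latched with ioReceive() before the script runs on it.

Surface rsSurface = allocation.getSurface();       // valid because of USAGE_IO_INPUT
// ...add rsSurface as an output target of the camera2 capture session/request...

allocation.setOnBufferAvailableListener(new Allocation.OnBufferAvailableListener() {
    @Override
    public void onBufferAvailable(Allocation a) {
        a.ioReceive();                             // pull in the newest camera buffer
        // hypothetical generated calls for the script above:
        // script.set_my_frame(a);
        // script.forEach_process(outputAllocation);
    }
});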
Yes, I did it, since I couldn't find anything useful elsewhere. But I didn't go the proposed way of binding an Allocation to the surface. Instead I just converted the output of the three image planes to RGB. The reason for this approach is that I use the YUV_420_888 data in two ways: first, on a high-frequency basis, just the intensity values (Y); second, I need to make some colour Bitmaps too. Hence the following solution. The script takes about 80 ms for a 1280x720 YUV_420_888 image; maybe not ultra fast, but OK for my purpose.
UPDATE: I deleted the code here, since I wrote a more general solution here: YUV_420_888 -> Bitmap conversion, which takes pixelStride and rowStride into account too.
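(This is not the code that was removed above; it is just a minimal illustration, with assumed width/height matching the Image, of why pixelStride and rowStride have to be honoured when flattening a YUV_420_888 plane.)

Image.Plane plane = image.getPlanes()[0];          // Y plane; the same pattern applies to U and V
ByteBuffer buf = plane.getBuffer();
int rowStride = plane.getRowStride();              // may be larger than width (row padding)
int pixelStride = plane.getPixelStride();          // 1 for Y, often 2 for the chroma planes
byte[] y = new byte[width * height];
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        y[row * width + col] = buf.get(row * rowStride + col * pixelStride);
    }
}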
I think that you can use an ImageReader to get the frames of your camera in YUV_420_888:
reader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(), ImageFormat.YUV_420_888, 2);
Then you set an OnImageAvailableListener on the reader:
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        int jump = 4; // number of frames to skip before processing one, to free up memory
        Image readImage = reader.acquireNextImage();
        Image.Plane yPlane = readImage.getPlanes()[0]; // the Y plane
        Image.Plane uPlane = readImage.getPlanes()[1]; // the U plane
        Image.Plane vPlane = readImage.getPlanes()[2]; // the V plane
        readImage.close();
    }
}, null);
Hope that will help you
I'm using almost the same method as widea in their answer.
The exception you keep getting after ~50 frames might be due to the fact that you're processing all the frames by using acquireNextImage. The documentation suggests:
Warning: Consider using acquireLatestImage() instead, as it will automatically release older images, and allow slower-running processing routines to catch up to the newest frame. [..]
So, in case your exception is an IllegalStateException, switching to acquireLatestImage might help.
And make sure you call close() on all images retrieved from ImageReader.
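A minimal sketch combining both points (reader and backgroundHandler are assumed to exist):

reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();   // older pending images are released for you
        if (image == null) {
            return;                                  // nothing new to process
        }
        try {
            // ...process image.getPlanes() here...
        } finally {
            image.close();                           // always hand the buffer back to the ImageReader
        }
    }
}, backgroundHandler);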

Image data from Android camera2 API flipped & squished on Galaxy S5

I am implementing an app that uses real-time image processing on live images from the camera. It was working, with limitations, using the now deprecated android.hardware.Camera; for improved flexibility & performance I'd like to use the new android.hardware.camera2 API. I'm having trouble getting the raw image data for processing however. This is on a Samsung Galaxy S5. (Unfortunately, I don't have another Lollipop device handy to test on other hardware).
I got the overall framework (with inspiration from the 'HdrViewFinder' and 'Camera2Basic' samples) working, and the live image is drawn on the screen via a SurfaceTexture and a GLSurfaceView. However, I also need to access the image data (grayscale only is fine, at least for now) for custom image processing. According to the documentation for StreamConfigurationMap.isOutputSupportedFor(Class), the recommended surface to obtain image data directly would be ImageReader (correct?).
So I've set up my capture requests as:
mSurfaceTexture.setDefaultBufferSize(640, 480);
mSurface = new Surface(surfaceTexture);
...
mImageReader = ImageReader.newInstance(640, 480, format, 2);
...
List<Surface> surfaces = new ArrayList<Surface>();
surfaces.add(mSurface);
surfaces.add(mImageReader.getSurface());
...
mCameraDevice.createCaptureSession(surfaces, mCameraSessionListener, mCameraHandler);
and in the onImageAvailable callback for the ImageReader, I'm accessing the data as follows:
Image img = reader.acquireLatestImage();
ByteBuffer grayscalePixelsDirectByteBuffer = img.getPlanes()[0].getBuffer();
...but while (as said) the live image preview is working, there's something wrong with the data I get here (or with the way I get it). According to
mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputFormats();
...the following ImageFormats should be supported: NV21, JPEG, YV12, YUV_420_888. I've tried them all (plugged in for 'format' above); all support the set resolution according to getOutputSizes(format), but none of them gives the desired result:
NV21: ImageReader.newInstance throws java.lang.IllegalArgumentException: NV21 format is not supported
JPEG: This does work, but it doesn't seem to make sense for a real-time application to go through JPEG encode and decode for each frame...
YV12 and YUV_420_888: this is the weirdest result -- I can see the grayscale image, but it is flipped vertically (yes, flipped, not rotated!) and significantly squished (scaled significantly horizontally, but not vertically).
What am I missing here? What causes the image to be flipped and squished? How can I get a geometrically correct grayscale buffer? Should I be using a different type of surface (instead of ImageReader)?
Any hints appreciated.
I found an explanation (though not necessarily a satisfactory solution): it turns out that the sensor array's aspect ratio is 16:9 (found via mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);).
At least when requesting YV12/YUV_420_888, the streamer appears to not crop the image in any way, but instead scale it non-uniformly, to reach the requested frame size. The images have the correct proportions when requesting a 16:9 format (of which there are only two higher-res ones, unfortunately). Seems a bit odd to me -- it doesn't appear to happen when requesting JPEG, or with the equivalent old camera API functions, or for stills; and I'm not sure what the non-uniformly scaled frames would be good for.
I feel that it's not a really satisfactory solution, because it means that you can't rely on the list of output formats, but instead have to find the sensor size first, find formats with the same aspect ratio, then downsample the image yourself (as needed)...
I don't know if this is the expected outcome here or a 'feature' of the S5. Comments or suggestions still welcome.
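A hedged sketch of that workaround (picking a YUV output size whose aspect ratio matches the active sensor array, using the mCameraInfo characteristics from the question):

StreamConfigurationMap map = mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Rect active = mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
double sensorAspect = (double) active.width() / active.height();

Size chosen = null;
for (Size s : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    double aspect = (double) s.getWidth() / s.getHeight();
    if (Math.abs(aspect - sensorAspect) < 0.01
            && (chosen == null || s.getWidth() < chosen.getWidth())) {
        chosen = s;    // smallest size that still matches the sensor's aspect ratio
    }
}
// chosen (if non-null) can be used for the ImageReader; downsample or crop afterwards as needed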
I had the same problem and found a solution.
The first part of the problem is setting the size of the surface buffer:
// We configure the size of default buffer to be the size of camera preview we want.
//texture.setDefaultBufferSize(width, height);
This is where the image gets skewed, not in the camera. You should comment it out, and then apply up-scaling to the image when displaying it.
int[] rgba = new int[width * height];
//getImage(rgba);
nativeLoader.convertImage(width, height, data, rgba);

Bitmap bmp = mBitmap;
bmp.setPixels(rgba, 0, width, 0, 0, width, height);

Canvas canvas = mTextureView.lockCanvas();
if (canvas != null) {
    //canvas.drawBitmap(bmp, 0, 0, null);//configureTransform(width, height), null);
    //canvas.drawBitmap(bmp, configureTransform(width, height), null);
    canvas.drawBitmap(bmp, new Rect(0, 0, 320, 240), new Rect(0, 0, 640 * 2, 480 * 2), null);
    //canvas.drawBitmap(bmp, (canvas.getWidth() - 320) / 2, (canvas.getHeight() - 240) / 2, null);
    mTextureView.unlockCanvasAndPost(canvas);
}
image.close();
You can play around with the values to fine tune the solution for your problem.

Screenshot on android OpenGL ES application

I have a basic OpenGL ES 2.0 application running on a GLSurfaceView that has been added as follows:
GLSurfaceView view = new GLSurfaceView(this);
view.setRenderer(new OpenGLRenderer());
setContentView(view);
Basically I am trying to get a screenshot with the following method:
private static Bitmap getScreenshot(View v) {
    Bitmap b = Bitmap.createBitmap(v.getWidth(), v.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(b);
    v.draw(c);
    return b;
}
But it seems the bitmap is transparent. The view I am passing in is:
View content = m_rootActivity.getWindow().getDecorView().getRootView();
Does anyone have a solution for getting a screenshot of an OpenGL ES view without resorting to the onDrawFrame method, which I have seen in other solutions?
Maybe pass in a reference to the renderer? Any help would be appreciated.
Update:
I was exploring rendering the bitmap from onDrawFrame (Display black screen while capture screenshot of GLSurfaceView).
However, I was wondering if there is a better solution, since I won't have access to the renderer or the SurfaceView. I can pass in their references, but I would like a solution where we can just capture the entire view, as mentioned earlier.
See this question.
You can get a screenshot with:
@Override
public void onDrawFrame(GL10 gl) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    // draw ...
    if (takeScreenshot) {
        int screenshotSize = width * height;
        ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
        bb.order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        int[] pixelsBuffer = new int[screenshotSize];
        bb.asIntBuffer().get(pixelsBuffer);
        bb = null;
        for (int i = 0; i < screenshotSize; ++i) {
            // The alpha and green channels' positions are preserved while the red and blue are swapped
            pixelsBuffer[i] = ((pixelsBuffer[i] & 0xff00ff00))
                    | ((pixelsBuffer[i] & 0x000000ff) << 16)
                    | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
        }
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0, width, height);
        // save bitmap...
    }
}
You cannot get the buffer data from the GPU to the CPU without reading the pixels back. You should understand that this is not the same pipeline as with Views: the data in the buffer is filled on the GPU and then sent directly to the display, or nowhere.
So the answer is no: you cannot simply "get a screenshot", because the concept of a screenshot does not really exist here. There is only raw (usually RGBA) data in the GPU buffer, and that data only contains everything you have drawn once the frame is complete. If you were to read the data at an arbitrary time, the buffer might just have been cleared, it might be half drawn, or, if you are lucky, fully drawn.
That is why you take those screenshots inside the drawing pipeline, where you can be sure the buffer has been filled with the data.
There are generally two smart ways of intercepting the drawing pipeline, best done just before presenting the buffer. One is to pass a flag indicating that a screenshot should be taken; the engine then creates the screenshot itself, which is nice since it has all the buffer data on the fly. The second is to create a callback handle where the engine notifies the owner of every frame being fully drawn; the owner can then do some additional drawing, create a screenshot, or count frames per second. This again has many bonuses, but you need at least the buffer dimensions to do anything with the buffer.
Also note that reading the pixels is extremely slow, and in some cases the image you receive will be upside-down.
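A minimal sketch of the flag-based variant described above (ScreenshotCallback and readPixelsIntoBitmap are assumed names; the read-back itself would be the glReadPixels code shown earlier):

private volatile boolean takeScreenshot;
private ScreenshotCallback screenshotCallback;     // assumed: interface with onScreenshot(Bitmap)

public void requestScreenshot(ScreenshotCallback cb) {
    screenshotCallback = cb;
    takeScreenshot = true;                          // picked up by the GL thread on the next frame
}

@Override
public void onDrawFrame(GL10 gl) {
    drawScene();                                    // assumed: renders the whole frame
    if (takeScreenshot) {
        takeScreenshot = false;
        Bitmap shot = readPixelsIntoBitmap(width, height);  // assumed: wraps the glReadPixels code above
        screenshotCallback.onScreenshot(shot);
        screenshotCallback = null;
    }
}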

android fast pixel access and manipulation

I'm trying to port an emulator that I have written in Java to Android. Things have been going nicely; I was able to port most of my code with minor changes. However, due to how emulation works, I need to render the image at the pixel level.
For desktop Java I use
int[] pixelsA = ((DataBufferInt) src.getRaster().getDataBuffer()).getData();
which allows me to get a reference to the pixel buffer and update it on the fly (minimizing object creation).
Currently, this is what my Android emulator does for every frame:
@Override
public void onDraw(Canvas canvas) {
    buffer = Bitmap.createBitmap(pixelsA, 256, 192, Bitmap.Config.RGB_565);
    canvas.drawBitmap(buffer, 0, 0, null);
}
pixelsA is an int[] array containing all the colour information, so every frame it has to create a Bitmap object by doing
buffer = Bitmap.createBitmap(pixelsA, 256, 192, Bitmap.Config.RGB_565);
which I believe is quite expensive and slow.
Is there any way to draw pixels efficiently with canvas?
One quite low-level method, but working fine for me (with native code):
Create a Bitmap object as big as your visible screen.
Also create a View object and implement its onDraw method.
Then, in native code, you'd load the libjnigraphics.so library and look up the functions AndroidBitmap_lockPixels and AndroidBitmap_unlockPixels.
These functions are defined in the Android source in bitmap.h.
Then you'd call lock/unlock on the bitmap, receiving the address of the raw pixels. You must interpret the pixel format according to what it really is (16-bit 565 or 32-bit 8888).
After changing the content of the bitmap, you want to present it on screen.
Call View.invalidate() on your View. In its onDraw, blit your bitmap into the given Canvas.
This method is very low-level and dependent on the actual implementation of Android; however, it's very fast, and you may get 60 fps with no problem.
bitmap.h has been part of the Android NDK since platform version 8, so this IS an official way to do this from Android 2.2 on.
You can use the drawBitmap overload that avoids creating a Bitmap each time or, as a last resort, draw the pixels one by one with drawPoint.
Don't recreate the bitmap every single time. Try something like this:
Bitmap buffer = null;

@Override
public void onDraw(Canvas canvas) {
    if (buffer == null) buffer = Bitmap.createBitmap(256, 192, Bitmap.Config.RGB_565);
    // wrap the int[] in a java.nio.IntBuffer -- copyPixelsFromBuffer() expects a Buffer,
    // and pixelsA must already be laid out in the bitmap's pixel format
    buffer.copyPixelsFromBuffer(IntBuffer.wrap(pixelsA));
    canvas.drawBitmap(buffer, 0, 0, null);
}
EDIT: as pointed out, you need to update the pixel buffer. And the bitmap must be mutable for that to happen.
If pixelsA is already an array of pixels (which is what I would infer from your statement about it containing colours), then you can just render them directly, without converting, with:
canvas.drawBitmap(pixelsA, 0, 256, 0, 0, 256, 192, false, null);
