setPreviewDisplay and setDisplayOrientation - android

I'm puzzled by OpenCV's Android camera sample code. They make a custom class which implements SurfaceHolder.Callback and put the following line inside the method surfaceChanged:
mCamera.setPreviewDisplay(null);
The Android documentation for setPreviewDisplay explains:
This method must be called before startPreview(). The one exception is
that if the preview surface is not set (or set to null) before
startPreview() is called, then this method may be called once with a
non-null parameter to set the preview surface. (This allows camera
setup and surface creation to happen in parallel, saving time.) The
preview surface may not otherwise change while preview is running.
Unusually, OpenCV's code never calls setPreviewDisplay with a non-null SurfaceHolder. It works fine, but changing the rotation of the image using setDisplayOrientation doesn't work. This line also doesn't appear to do anything, since I get the same results without it.
If I call setPreviewDisplay with the SurfaceHolder supplied to surfaceChanged instead of null, the image rotates but does not include the results of the image processing. I also get an IllegalArgumentException when calling lockCanvas later on.
What's going on?
Here are the (possibly) most relevant parts of their code, slightly simplified and with methods inlined. Here is the full version.
Class definition
public abstract class SampleViewBase extends SurfaceView
implements SurfaceHolder.Callback, Runnable {
When the camera is opened
mCamera.setPreviewCallbackWithBuffer(new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        synchronized (SampleViewBase.this) {
            System.arraycopy(data, 0, mFrame, 0, data.length);
            SampleViewBase.this.notify();
        }
        camera.addCallbackBuffer(mBuffer);
    }
});
When the surface changes
/* Now allocate the buffer */
mBuffer = new byte[size];
/* The buffer where the current frame will be copied */
mFrame = new byte[size];
mCamera.addCallbackBuffer(mBuffer);
try {
    mCamera.setPreviewDisplay(null);
} catch (IOException e) {
    Log.e(TAG, "mCamera.setPreviewDisplay/setPreviewTexture fails: " + e);
}
[...]
/* Now we can start a preview */
mCamera.startPreview();
The run method
public void run() {
    mThreadRun = true;
    Log.i(TAG, "Starting processing thread");
    while (mThreadRun) {
        Bitmap bmp = null;
        synchronized (this) {
            try {
                this.wait();
                bmp = processFrame(mFrame);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        if (bmp != null) {
            Canvas canvas = mHolder.lockCanvas();
            if (canvas != null) {
                canvas.drawBitmap(bmp, (canvas.getWidth() - getFrameWidth()) / 2,
                        (canvas.getHeight() - getFrameHeight()) / 2, null);
                mHolder.unlockCanvasAndPost(canvas);
            }
        }
    }
    Log.i(TAG, "Finishing processing thread");
}

I ran into this same problem. Instead of using a SurfaceHolder.Callback, I subclassed their class JavaCameraView. See my live face detection and drawing sample here. It was then trivial to rotate the matrix coming out of the camera according to the device's orientation, prior to processing. Relevant excerpt of the linked code:
@Override
public Mat onCameraFrame(Mat inputFrame) {
    int flipFlags = 1;
    if (display.getRotation() == Surface.ROTATION_270) {
        flipFlags = -1;
        Log.i(VIEW_LOG_TAG, "Orientation is " + getRotation());
    }
    Core.flip(inputFrame, mRgba, flipFlags);
    inputFrame.release();
    Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_RGBA2GRAY);
    if (mAbsoluteFaceSize == 0) {
        int height = mGray.rows();
        if (Math.round(height * mRelativeFaceSize) > 0) {
            mAbsoluteFaceSize = Math.round(height * mRelativeFaceSize);
        }
    }
    // ... face detection and return of the processed Mat omitted in this excerpt ...
}

I solved the rotation issue using OpenCV itself: after finding out how much the screen rotation needs to be corrected using this code, I apply a rotation matrix to the raw camera image (after converting from YUV to RGB):
Point center = new Point(mFrameWidth/2, mFrameHeight/2);
Mat rotationMatrix = Imgproc.getRotationMatrix2D(center, totalRotation, 1);
[...]
Imgproc.cvtColor(mYuv, mIntermediate, Imgproc.COLOR_YUV420sp2RGBA, 4);
Imgproc.warpAffine(mIntermediate, mRgba, rotationMatrix,
new Size(mFrameHeight, mFrameWidth));
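For completeness, here is a rough sketch of how a value like totalRotation can be computed. It follows the algorithm documented for Camera.setDisplayOrientation(), and it assumes it runs in an Activity (for getWindowManager()); totalRotation and mCameraId are names introduced here, not part of the original code:

// Sketch only: how many degrees the camera frame must be rotated to appear upright.
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(mCameraId, info); // mCameraId: id of the opened camera (assumed)
int rotation = getWindowManager().getDefaultDisplay().getRotation();
int degrees = 0;
switch (rotation) {
    case Surface.ROTATION_0:   degrees = 0;   break;
    case Surface.ROTATION_90:  degrees = 90;  break;
    case Surface.ROTATION_180: degrees = 180; break;
    case Surface.ROTATION_270: degrees = 270; break;
}
int totalRotation;
if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
    totalRotation = (info.orientation + degrees) % 360;
    totalRotation = (360 - totalRotation) % 360; // compensate for the front camera mirror
} else {
    totalRotation = (info.orientation - degrees + 360) % 360;
}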
A separate issue is that setPreviewDisplay(null) gives a blank screen on some phones. The solution, which I got from here and which draws on this bug report and this SO question, passes a hidden, "fake" SurfaceView to the preview display to get it to start, but actually displays the output on an overlaid custom view, which I call CameraView. So, after calling setContentView() in the activity's onCreate(), stick in this code:
if (VERSION.SDK_INT < VERSION_CODES.HONEYCOMB) {
final SurfaceView fakeView = new SurfaceView(this);
fakeView.setLayoutParams(new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT));
fakeView.setZOrderMediaOverlay(false);
final CameraView cameraView = (CameraView) this.findViewById(R.id.cameraview);
cameraView.setZOrderMediaOverlay(true);
cameraView.fakeView = fakeView;
}
Then, when setting the preview display, use this code:
try {
if (VERSION.SDK_INT >= VERSION_CODES.HONEYCOMB)
mCamera.setPreviewTexture(new SurfaceTexture(10));
else
mCamera.setPreviewDisplay(fakeView.getHolder());
} catch (IOException e) {
Log.e(TAG, "mCamera.setPreviewDisplay fails: "+ e);
}
If you are only developing for Honeycomb and above, just replace setPreviewDisplay(null) with mCamera.setPreviewTexture(new SurfaceTexture(10)); and be done with it. setDisplayOrientation() still doesn't work if you do this, though, so you'll still have to use the rotation matrix solution.

Related

Surface Texture object is not getting the frames from a Surface Class

On the one hand, I have a Surface class which, when instantiated, automatically initializes a new thread and starts grabbing frames from a streaming source via native code based on FFmpeg. Here are the main parts of the code for that Surface class:
public class StreamingSurface extends Surface implements Runnable {
...
public StreamingSurface(SurfaceTexture surfaceTexture, int width, int height) {
super(surfaceTexture);
screenWidth = width;
screenHeight = height;
init();
}
public void init() {
mDrawTop = 0;
mDrawLeft = 0;
mVideoCurrentFrame = 0;
this.setVideoFile();
this.startPlay();
}
public void setVideoFile() {
// Initialise FFMPEG
naInit("");
// Get stream video res
int[] res = naGetVideoRes();
mDisplayWidth = (int)(res[0]);
mDisplayHeight = (int)(res[1]);
// Prepare Display
mBitmap = Bitmap.createBitmap(mDisplayWidth, mDisplayHeight, Bitmap.Config.ARGB_8888);
naPrepareDisplay(mBitmap, mDisplayWidth, mDisplayHeight);
}
public void startPlay() {
thread = new Thread(this);
thread.start();
}
@Override
public void run() {
while (true) {
while (2 == mStatus) {
//pause
SystemClock.sleep(100);
}
mVideoCurrentFrame = naGetVideoFrame();
if (0 < mVideoCurrentFrame) {
//success, redraw
if(isValid()){
Canvas canvas = lockCanvas(null);
if (null != mBitmap) {
canvas.drawBitmap(mBitmap, mDrawLeft, mDrawTop, prFramePaint);
}
unlockCanvasAndPost(canvas);
}
} else {
//failure, probably end of video, break
naFinish(mBitmap);
mStatus = 0;
break;
}
}
}
}
In my MainActivity class, I instantiated this class in the following way:
public void startCamera(int texture)
{
mSurface = new SurfaceTexture(texture);
mSurface.setOnFrameAvailableListener(this);
Surface surface = new StreamingSurface(mSurface, 640, 360);
surface.release();
}
I read the following on the Android developer page regarding the Surface class constructor:
"Images drawn to the Surface will be made available to the SurfaceTexture, which can attach them to an OpenGL ES texture via updateTexImage()."
That is exactly what I want to do, and I have everything ready for the further rendering. But with the above code, I never see the frames drawn in the Surface class show up in the corresponding SurfaceTexture. I know this because the debugger, for instance, never hits the OnFrameAvailableListener callback associated with that SurfaceTexture.
Any ideas? Maybe the fact that I am using a thread to call the drawing functions is messing everything up? In that case, what alternatives do I have to grab the frames?
Thanks in advance

Rendering camera into multiple surfaces - on and off screen

I want to render the camera output into a view and once in a while save a camera output frame to a file, with the constraint that the saved frame should be at the same resolution the camera is configured for, while the view is smaller than the camera output (maintaining the aspect ratio).
Based on the ContinuousCaptureActivity example in grafika, I thought the best approach would be to send the camera output to a SurfaceTexture, generally render it downscaled into a SurfaceView, and, when needed, render the full frame into a different Surface that has no view, in order to retrieve a byte buffer from it in parallel to the regular SurfaceView rendering.
The example is very similar to my situation - the preview is rendered to a view of smaller size and can be recorded and saved at the full resolution via a VideoEncoder.
I replaced the VideoEncoder logic with my own and got stuck trying to provide a Surface, like the encoder does, for the full resolution rendering. How do I create such a Surface? Am I approaching this correctly?
Some code ideas based on the example:
Inside the surfaceCreated(SurfaceHolder holder) method (line 350):
@Override // SurfaceHolder.Callback
public void surfaceCreated(SurfaceHolder holder) {
Log.d(TAG, "surfaceCreated holder=" + holder);
mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
mDisplaySurface = new WindowSurface(mEglCore, holder.getSurface(), false);
mDisplaySurface.makeCurrent();
mFullFrameBlit = new FullFrameRect(
new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
mTextureId = mFullFrameBlit.createTextureObject();
mCameraTexture = new SurfaceTexture(mTextureId);
mCameraTexture.setOnFrameAvailableListener(this);
Log.d(TAG, "starting camera preview");
try {
mCamera.setPreviewTexture(mCameraTexture);
} catch (IOException ioe) {
throw new RuntimeException(ioe);
}
mCamera.startPreview();
// *** MY EDIT START ***
// Encoder creation no longer needed
// try {
// mCircEncoder = new CircularEncoder(VIDEO_WIDTH, VIDEO_HEIGHT, 6000000,
// mCameraPreviewThousandFps / 1000, 7, mHandler);
// } catch (IOException ioe) {
// throw new RuntimeException(ioe);
// }
mEncoderSurface = new WindowSurface(mEglCore, mCameraTexture); // <-- Crashes with EGL error 0x3003
// *** MY EDIT END ***
updateControls();
}
The drawFrame() method (line 420):
private void drawFrame() {
//Log.d(TAG, "drawFrame");
if (mEglCore == null) {
Log.d(TAG, "Skipping drawFrame after shutdown");
return;
}
// Latch the next frame from the camera.
mDisplaySurface.makeCurrent();
mCameraTexture.updateTexImage();
mCameraTexture.getTransformMatrix(mTmpMatrix);
// Fill the SurfaceView with it.
SurfaceView sv = (SurfaceView) findViewById(R.id.continuousCapture_surfaceView);
int viewWidth = sv.getWidth();
int viewHeight = sv.getHeight();
GLES20.glViewport(0, 0, viewWidth, viewHeight);
mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
mDisplaySurface.swapBuffers();
// *** MY EDIT START ***
// Send it to the video encoder.
if (someCondition) {
mEncoderSurface.makeCurrent();
GLES20.glViewport(0, 0, VIDEO_WIDTH, VIDEO_HEIGHT);
mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
mEncoderSurface.swapBuffers();
try {
mEncoderSurface.saveFrame(new File(getExternalFilesDir(null), String.valueOf(System.currentTimeMillis()) + ".png"));
} catch (IOException e) {
e.printStackTrace();
}
}
// *** MY EDIT END ***
}
You're on the right track. The SurfaceTexture just does a quick bit of wrapping around the original YUV frame from the camera, so the "external" texture is the original image, with no changes. You can't read the pixels straight out of an external texture, so you have to render it somewhere first.
The easiest way to do this is to create an off-screen pbuffer surface. Grafika's gles/OffscreenSurface class does exactly this (with a call to eglCreatePbufferSurface()). Make that EGLSurface current, render the texture onto a FullFrameRect, then read the framebuffer with glReadPixels() (see EglSurfaceBase#saveFrame() for code). Don't call eglSwapBuffers().
Note that you're not creating an Android Surface for the output, just an EGLSurface. (They're different.)
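To make that concrete, here is a hedged sketch under the assumption that Grafika's EglCore, OffscreenSurface, and FullFrameRect classes and the fields from the example above (mEglCore, mDisplaySurface, mFullFrameBlit, mTextureId, mTmpMatrix) are available; mOffscreenSurface is a name introduced here, not part of the example:

// Sketch only: create the off-screen pbuffer surface once, e.g. in surfaceCreated().
mOffscreenSurface = new OffscreenSurface(mEglCore, VIDEO_WIDTH, VIDEO_HEIGHT);

// Later, inside drawFrame(), after mCameraTexture.updateTexImage():
if (someCondition) {
    mOffscreenSurface.makeCurrent();
    GLES20.glViewport(0, 0, VIDEO_WIDTH, VIDEO_HEIGHT);
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    try {
        // saveFrame() reads the framebuffer with glReadPixels(); no eglSwapBuffers() call.
        mOffscreenSurface.saveFrame(new File(getExternalFilesDir(null),
                System.currentTimeMillis() + ".png"));
    } catch (IOException e) {
        e.printStackTrace();
    }
    mDisplaySurface.makeCurrent(); // restore the on-screen surface as the current one
}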

Rotating IplImage creating distorted image using OpenCV

I am having difficulty when trying to live stream from an Android device to RTMP. At first it streamed fine; however, the image was tilted, so I found some code to rotate it.
public IplImage rotateImage(IplImage img) {
IplImage img_rotate = IplImage.create(img.height(), img.width(),
img.depth(), img.nChannels());
// actually, I don't know how to use these two methods properly
cvTranspose(img, img_rotate);
cvFlip(img_rotate, img_rotate, 1); // transpose + flip should give a 90-degree rotation
return img_rotate;
}
My code where I call the rotate from onPreviewFrame():
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
if (yuvIplimage != null && recording) {
videoTimestamp = 1000 * (System.currentTimeMillis() - startTime);
// Put the camera preview frame right into the yuvIplimage object
yuvIplimage.getByteBuffer().put(data);
try {
// Get the correct time
recorder.setTimestamp(videoTimestamp);
// Record the image into FFmpegFrameRecorder
recorder.record(rotateImage(yuvIplimage));
} catch (FFmpegFrameRecorder.Exception e) {
Log.v(LOG_TAG,e.getMessage());
e.printStackTrace();
}
}
}
As a result of this, I am getting the below image
In some other post, I read that we have to use
rgbimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 3);
opencv_imgproc.cvCvtColor(yuvimage, rgbimage, opencv_imgproc.CV_YUV2BGR_NV21);
Then we can rotate. However, whenever I use this, I sometimes get a BufferOverflowException and sometimes a RuntimeException.
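As a side note (not part of the original question), a BufferOverflowException in this pattern usually means the YUV image was allocated at width × height instead of the width × (height * 3 / 2) that NV21 data needs. A rough sketch of how the buffers are commonly sized with JavaCV, where imageWidth, imageHeight, and data are assumed to come from the preview setup:

// Sketch only: NV21 stores 1.5 bytes per pixel, so the single-channel YUV image
// must have height * 3 / 2 rows before converting to a 3-channel BGR image.
IplImage yuvimage = IplImage.create(imageWidth, imageHeight * 3 / 2, IPL_DEPTH_8U, 1);
IplImage rgbimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 3);
yuvimage.getByteBuffer().put(data); // data: the NV21 byte[] from onPreviewFrame
opencv_imgproc.cvCvtColor(yuvimage, rgbimage, opencv_imgproc.CV_YUV2BGR_NV21);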

Can I use lockCanvas() in onPreviewFrame callback?

I'm fighting with an IllegalArgumentException when using lockCanvas() in onPreviewFrame.
Basically, what I want to achieve is to grab the frame, process it, and draw it directly to the surface of the SurfaceView preview.
Here is my onPreviewFrame code:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
Canvas canvas = null;
if (mHolder == null) {
return;
}
int mImgFormat = mCamera.getParameters().getPreviewFormat();
try {
canvas = mHolder.lockCanvas();
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
mHolder.unlockCanvasAndPost(canvas);
} catch (Exception e) {
e.printStackTrace();
} finally {
if (canvas != null)
{
mHolder.unlockCanvasAndPost(canvas);
}
}
}
I have read a lot of documentation and topics about the camera in Android, and I suppose that locking the surface that the frames are drawn onto isn't possible, because it's beyond the application's control. Is that true?
One possible solution is to draw on top of the surface view with another view, but I want to keep my processed frames (there will be some Canny edge detection and color corrections, which may be time consuming) drawn in sync with the preview. When the camera keeps spitting out frames at a good fps, my calculations will have a hard time catching up, and in the worst case the overlay will be drawn dozens of frames behind.
I studied the OpenCV source, and it looks to me like they've managed to do this in CameraBridgeViewBase.java:
protected void deliverAndDrawFrame(CvCameraViewFrame frame) {
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch(Exception e) {
Log.e(TAG, "Mat type: " + modified);
Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
bmpValid = false;
}
}
if (bmpValid && mCacheBitmap != null) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
canvas.drawBitmap(mCacheBitmap, (canvas.getWidth() - mCacheBitmap.getWidth()) / 2, (canvas.getHeight() - mCacheBitmap.getHeight()) / 2, null);
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
}
}
I'm either missing something or I'm just too old for this :)
Also, it's my first post here so hello everyone :)
Actually, OpenCV's camera plays a little trick. In the CameraBridgeViewBase.java and JavaCameraView.java files, I found that a SurfaceTexture (available since API 11) is used to receive frames from the camera. The advantage of the SurfaceTexture is that it does not always have to be drawn on the screen (it can live purely in memory). A SurfaceView is then responsible for drawing the processed frame from the onPreviewFrame function. Note that this SurfaceView is not bound to the camera. Hope this helps.
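A hedged sketch of that trick using the plain Camera API; the field names (mDummyTexture, mHolder) and the processFrame() helper are placeholders introduced here, not OpenCV's actual code:

// Sketch only: give the camera a dummy SurfaceTexture so the preview can start,
// then lock and draw on an unrelated SurfaceView yourself from onPreviewFrame().
void startPreview(Camera camera, SurfaceHolder previewHolder) throws IOException {
    mDummyTexture = new SurfaceTexture(42);  // the texture id is arbitrary here
    camera.setPreviewTexture(mDummyTexture); // camera renders into the texture, not a view
    camera.setPreviewCallbackWithBuffer(this);
    camera.startPreview();
    mHolder = previewHolder;                 // the SurfaceView surface we draw on ourselves
}

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Bitmap processed = processFrame(data);   // placeholder for your own NV21 -> Bitmap processing
    Canvas canvas = mHolder.lockCanvas();    // legal here: this surface is not owned by the camera
    if (canvas != null) {
        canvas.drawBitmap(processed, 0, 0, null);
        mHolder.unlockCanvasAndPost(canvas);
    }
    camera.addCallbackBuffer(data);
}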

PreviewCallback onPreviewFrame does not change data

I want to do some image processing on images from the camera and display them on a SurfaceView, but I don't know how to modify the camera frame. I tried to use setPreviewCallbackWithBuffer and onPreviewFrame, but they do not work as expected; the displayed frame is not modified.
/** A basic Camera preview class */
public class CameraPreview extends SurfaceView implements
SurfaceHolder.Callback, Camera.PreviewCallback {
private SurfaceHolder mHolder;
private Camera mCamera;
private byte[] mData;
private long prevFrameTick = System.currentTimeMillis();
Canvas mCanvas;
public CameraPreview(Context context, Camera camera) {
super(context);
mCamera = camera;
// Install a SurfaceHolder.Callback so we get notified when the
// underlying surface is created and destroyed.
mHolder = getHolder();
mHolder.addCallback(this);
Size previewSize = mCamera.getParameters().getPreviewSize();
mData = new byte[(int) (previewSize.height * previewSize.width * 1.5)];
initBuffer();
// deprecated setting, but required on Android versions prior to 3.0
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
}
private void initBuffer() {
mCamera.addCallbackBuffer(mData);
mCamera.addCallbackBuffer(mData);
mCamera.addCallbackBuffer(mData);
mCamera.setPreviewCallbackWithBuffer(this);
}
public void setCamera(Camera cam) {
mCamera = cam;
initBuffer();
}
public void surfaceCreated(SurfaceHolder holder) {
// The Surface has been created, now tell the camera where to draw the
// preview.
try {
mCamera.setPreviewDisplay(holder);
initBuffer();
mCamera.startPreview();
} catch (IOException e) {
Log.d("APP",
"Error setting camera preview: " + e.getMessage());
}
}
public void surfaceDestroyed(SurfaceHolder holder) {
// empty. Take care of releasing the Camera preview in your activity.
}
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
// If your preview can change or rotate, take care of those events here.
// Make sure to stop the preview before resizing or reformatting it.
if (mHolder.getSurface() == null) {
// preview surface does not exist
return;
}
// stop preview before making changes
try {
mCamera.stopPreview();
} catch (Exception e) {
// ignore: tried to stop a non-existent preview
}
// set preview size and make any resize, rotate or
// reformatting changes here
// start preview with new settings
try {
mCamera.setPreviewDisplay(mHolder);
initBuffer();
mCamera.startPreview();
} catch (Exception e) {
Log.d("APP",
"Error starting camera preview: " + e.getMessage());
}
}
public void onPreviewFrame(byte[] data, Camera camera) {
// System.arraycopy(data, 0, mData, 0, data.length);
Log.e("onPreviewFrame", data.length + " "
+ (System.currentTimeMillis() - prevFrameTick));
prevFrameTick = System.currentTimeMillis();
mData = new byte[data.length];
mCamera.addCallbackBuffer(mData);
}
}
You cannot modify the preview data sent to a SurfaceView, if you're using the setPreviewDisplay() call. The preview video stream is managed entirely outside of your application and isn't accessible to it.
There are a few options you can take:
You can place a second view on top of the SurfaceView, such as an ImageView or another SurfaceView, and draw the data received by the onPreviewFrame callback into this view. You'll have to do some color/pixel format conversion from the preview callback format (usually NV21) for display, and obviously you have to run your image processing on that data first as well. This isn't very efficient, unless you're willing to write some JNI code.
On Android 3.0 or newer, you can use the Camera.setPreviewTexture() method, and pipe the camera preview stream into an OpenGL texture by using a SurfaceTexture object, which you can then manipulate in OpenGL before displaying. Then you don't need the preview callbacks at all. This is more efficient if GPU processing is sufficient. You can also use the OpenGL readPixels call to get the processed preview data back to your application, if you want to display it/process it some other way.
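For the first option, a rough sketch of the NV21-to-Bitmap conversion step using the framework's YuvImage class; width, height, and the overlay view are assumed to come from your own code, and the JPEG round-trip is simple but too slow for heavy per-frame work:

// Sketch only: convert an NV21 preview frame to a Bitmap via a JPEG round-trip.
YuvImage yuv = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
byte[] jpegBytes = out.toByteArray();
Bitmap frame = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
// Run your image processing on 'frame', then draw it on the overlay view.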
Maybe it will be helpful to someone.
I have solved this problem by using the OpenCV library for retrieving frames from the camera.
In OpenCV 3 there is a method onCameraFrame(CvCameraViewFrame inputFrame):
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
// here you can do something with inputFrame before it appears on the preview
return inputFrame.rgba();
}
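As an illustration (not part of the original answer), a hedged sketch of doing some processing inside that callback, here a simple Canny edge overlay with standard OpenCV Java calls:

// Sketch only: convert to gray, run Canny, and return the edges as the frame to display.
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat edges = new Mat();
    Imgproc.Canny(inputFrame.gray(), edges, 80, 100); // thresholds chosen arbitrarily
    Imgproc.cvtColor(edges, edges, Imgproc.COLOR_GRAY2RGBA, 4);
    return edges; // whatever you return is what gets drawn on the preview
}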
You can just try the Camera Preview project from the samples folder.
Or you can do this in the NDK: https://vec.io/posts/how-to-render-image-buffer-in-android-ndk-native-code
But I haven't tried this yet.
Or here you can find decoding from YUV to RGB in C/C++ with the NDK: https://github.com/youten/YUV420SP
