I am having difficulty live streaming from an Android device to RTMP. At first the stream worked, but the image was tilted, so I found some code to rotate it:
public IplImage rotateImage(IplImage img) {
    // Destination image has width/height swapped to hold the rotated frame
    IplImage img_rotate = IplImage.create(img.height(), img.width(),
            img.depth(), img.nChannels());
    // Actually, I don't know how to use these two methods properly
    cvTranspose(img, img_rotate);
    cvFlip(img_rotate, img_rotate, 1); // rotate 90 degrees?
    return img_rotate;
}
The code where I call rotateImage() from onPreviewFrame():
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if (yuvIplimage != null && recording) {
        videoTimestamp = 1000 * (System.currentTimeMillis() - startTime);
        // Put the camera preview frame right into the yuvIplimage object
        yuvIplimage.getByteBuffer().put(data);
        try {
            // Get the correct time
            recorder.setTimestamp(videoTimestamp);
            // Record the image into FFmpegFrameRecorder
            recorder.record(rotateImage(yuvIplimage));
        } catch (FFmpegFrameRecorder.Exception e) {
            Log.v(LOG_TAG, e.getMessage());
            e.printStackTrace();
        }
    }
}
As a result of this, I am getting the image below.
In another post, I read that we first have to convert the frame:
rgbimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 3);
opencv_imgproc.cvCvtColor(yuvimage, rgbimage, opencv_imgproc.CV_YUV2BGR_NV21);
and then rotate. However, whenever I use this I sometimes get a BufferOverflowException and sometimes a RuntimeException.
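A common cause of the BufferOverflowException here is the size of the YUV source image: NV21 preview data occupies width * height * 3/2 bytes, so the IplImage the preview bytes are copied into must be allocated as a single-channel image 1.5x the preview height. A minimal sketch, assuming JavaCV's OpenCV bindings as used above (imageWidth/imageHeight are the preview dimensions):

// Sketch: an NV21 frame is width x (height * 3/2) single-channel bytes
IplImage yuvimage = IplImage.create(imageWidth, imageHeight * 3 / 2, IPL_DEPTH_8U, 1);
yuvimage.getByteBuffer().put(data); // 'data' from onPreviewFrame
IplImage rgbimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 3);
opencv_imgproc.cvCvtColor(yuvimage, rgbimage, opencv_imgproc.CV_YUV2BGR_NV21);
// rgbimage can now be passed to rotateImage() and recorded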
I want to capture the RGB image and depth map of the same frame and store them as bitmaps.
As I'm new to ARCore, I tried modifying Google's Hello AR sample, linked below.
https://developers.google.com/ar/develop/unity/tutorials/hello-ar-sample
In the main layout I created a button and assigned its onClick() handler to the captureFrame() method defined below.
I was planning to create the bitmaps from the plane data of the RGBImage and DepthImage objects.
public void captureFrame(View view) {
    Toast.makeText(getApplicationContext(), "Capturing depth and rgb photo", Toast.LENGTH_SHORT).show();
    if (session == null) {
        return;
    }
    // Notify ARCore session that the view size changed so that the perspective matrix and
    // the video background can be properly adjusted.
    displayRotationHelper.updateSessionIfNeeded(session);
    try {
        session.setCameraTextureName(backgroundRenderer.getTextureId());
        Frame frame = session.update();
        Image RGBImage = frame.acquireCameraImage();
        Image DepthImage = frame.acquireDepthImage();
        Log.d("Amey", "Format of the RGB Image: " + RGBImage.getFormat());
        Log.d("Amey", "Format of the Depth Image: " + DepthImage.getFormat());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
However, I'm getting the following error from session.update():
W/System.err: com.google.ar.core.exceptions.MissingGlContextException
Can anyone suggest a workaround for this problem, or a better approach for getting bitmaps of the RGB and depth images through ARCore?
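One likely explanation: MissingGlContextException indicates session.update() was called on a thread with no current GL context (here, the UI thread handling the button click). A common workaround is to only set a flag in the click handler and do the capture in onDrawFrame(), which runs on the GL thread. A minimal sketch, assuming the HelloAR Java sample structure and that the session's depth mode is enabled (captureNextFrame is a hypothetical field):

// Hypothetical flag, set from the button's onClick handler
private volatile boolean captureNextFrame = false;

public void captureFrame(View view) {
    // Only request the capture here; the GL thread does the actual work
    captureNextFrame = true;
}

@Override
public void onDrawFrame(GL10 gl) {
    // ... the sample's existing per-frame setup runs here, on the GL thread ...
    Frame frame = session.update();
    if (captureNextFrame) {
        captureNextFrame = false;
        try {
            Image RGBImage = frame.acquireCameraImage();
            Image DepthImage = frame.acquireDepthImage();
            // convert the images to bitmaps here, then release them
            RGBImage.close();
            DepthImage.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    // ... the sample's rendering code continues ...
}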
I am writing some facial landmarking code with the Affdex SDK, and I am trying to use the frame I received from the image listener to read certain pixels from its bitmap, but I get null back when I request the bitmap. Any help figuring out why would be appreciated. I am using the CameraDetector.
@Override
public void onImageResults(List<Face> faces, Frame frame, float v) {
    if (faces == null || frame == null)
        return; // frame was not processed
    if (faces.size() == 0)
        overlayView.adjustFaces(null, null);
    //final Bitmap b = Bitmap.createBitmap(cameraView.getMeasuredWidth(), cameraView.getMeasuredHeight(), Bitmap.Config.ARGB_8888);
    overlayView.adjustFaces(faces, frame);
    final Bitmap frameF = frame.getOriginalBitmapFrame();
    final List<Face> facesF = faces;
    extractorThread.addToRunnableQueue(new Runnable() {
        @Override
        public void run() {
            float data = regionVal(facesF, frameF);
            System.out.println(data);
            extractorThread.updateBuffer(data);
            extractorThread.computeHR();
        }
    });
}
The frameF bitmap I get is always null, and I don't know why.
getOriginalBitmapFrame() only returns a Bitmap if the Frame is a BitmapFrame. If the Frame is a ByteArrayFrame, it returns null.
CameraDetector works with ByteArrayFrames, since the camera's onPreviewFrame callback provides a byte array. So, you can get the image data via getByteArray().
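If you need a Bitmap anyway, one way is to route the byte array through YuvImage. A sketch, assuming the data is NV21 (the format the camera preview callback delivers) and that the Frame exposes its dimensions via getWidth()/getHeight():

// Sketch: convert an Affdex ByteArrayFrame's data to a Bitmap, assuming NV21
Frame.ByteArrayFrame byteFrame = (Frame.ByteArrayFrame) frame;
YuvImage yuv = new YuvImage(byteFrame.getByteArray(), ImageFormat.NV21,
        frame.getWidth(), frame.getHeight(), null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
// Compress to JPEG first, then decode the JPEG back into a Bitmap
yuv.compressToJpeg(new Rect(0, 0, frame.getWidth(), frame.getHeight()), 100, out);
byte[] jpeg = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);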
I am developing an Android app in which I use an ImageReader to get images from a Surface. The Surface's data comes from a VirtualDisplay used for screen recording on Lollipop. The problem is that images arrive at a very low rate, about 1 fps (measured by how often OnImageAvailableListener.onImageAvailable() is invoked). When I instead used a MediaCodec encoder with this Surface as its input surface, the output video looked smooth at 30 fps.
Is there any way to read the Surface's image data at a higher frame rate?
mImageReader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
mImageReader.setOnImageAvailableListener(onImageListener, null);
mVirtualDisplay = mMediaProjection.createVirtualDisplay("VideoCap",
        mDisplayWidth, mDisplayHeight, mScreenDensity,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        mImageReader.getSurface(), null /*Callbacks*/, null /*Handler*/);
//
//
OnImageAvailableListener onImageListener = new OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        if (reader != mImageReader)
            return;
        Image image = reader.acquireLatestImage();
        if (image == null)
            return;
        // do some stuff
        image.close();
    }
};
FPS increased dramatically when I switched to another format:
mImageReader = ImageReader.newInstance(mVideoSize.getWidth(), mVideoSize.getHeight(), ImageFormat.YUV_420_888, 2);
Hope this will help you.
I found that in addition to selecting the YUV format, I also had to select the smallest image size available for the device in order to get a significant speed increase.
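For screen capture the size is not constrained by a camera's supported-size list, so "smallest available" here comes down to creating both the ImageReader and the VirtualDisplay with reduced dimensions. A rough sketch of what that might look like (the scale factor is an arbitrary choice):

// Sketch: capture at half resolution; the ImageReader and VirtualDisplay
// must be created with the same (smaller) dimensions
int captureWidth = mDisplayWidth / 2;
int captureHeight = mDisplayHeight / 2;
mImageReader = ImageReader.newInstance(captureWidth, captureHeight,
        ImageFormat.YUV_420_888, 2);
mVirtualDisplay = mMediaProjection.createVirtualDisplay("VideoCap",
        captureWidth, captureHeight, mScreenDensity,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        mImageReader.getSurface(), null, null);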
I'm puzzled by OpenCV's Android camera sample code. They make a custom class which implements SurfaceHolder.Callback and put the following line inside the method surfaceChanged:
mCamera.setPreviewDisplay(null);
The Android documentation for setPreviewDisplay explains:
This method must be called before startPreview(). The one exception is
that if the preview surface is not set (or set to null) before
startPreview() is called, then this method may be called once with a
non-null parameter to set the preview surface. (This allows camera
setup and surface creation to happen in parallel, saving time.) The
preview surface may not otherwise change while preview is running.
Unusually, OpenCV's code never calls setPreviewDisplay with a non-null SurfaceHolder. It works fine, but changing the rotation of the image using setDisplayOrientation doesn't work. This line also doesn't appear to do anything, since I get the same results without it.
If I call setPreviewDisplay with the SurfaceHolder supplied to surfaceChanged instead of null, the image rotates but does not include the results of the image processing. I also get an IllegalArgumentException when calling lockCanvas later on.
What's going on?
Here are the (possibly) most relevant parts of their code, slightly simplified and with methods inlined. Here is the full version.
Class definition
public abstract class SampleViewBase extends SurfaceView
        implements SurfaceHolder.Callback, Runnable {
When the camera is opened
mCamera.setPreviewCallbackWithBuffer(new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        synchronized (SampleViewBase.this) {
            System.arraycopy(data, 0, mFrame, 0, data.length);
            SampleViewBase.this.notify();
        }
        camera.addCallbackBuffer(mBuffer);
    }
});
When the surface changes
/* Now allocate the buffer */
mBuffer = new byte[size];
/* The buffer where the current frame will be copied */
mFrame = new byte[size];
mCamera.addCallbackBuffer(mBuffer);
try {
    mCamera.setPreviewDisplay(null);
} catch (IOException e) {
    Log.e(TAG, "mCamera.setPreviewDisplay/setPreviewTexture fails: " + e);
}
[...]
/* Now we can start a preview */
mCamera.startPreview();
The run method
public void run() {
    mThreadRun = true;
    Log.i(TAG, "Starting processing thread");
    while (mThreadRun) {
        Bitmap bmp = null;
        synchronized (this) {
            try {
                this.wait();
                bmp = processFrame(mFrame);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        if (bmp != null) {
            Canvas canvas = mHolder.lockCanvas();
            if (canvas != null) {
                canvas.drawBitmap(bmp, (canvas.getWidth() - getFrameWidth()) / 2,
                        (canvas.getHeight() - getFrameHeight()) / 2, null);
                mHolder.unlockCanvasAndPost(canvas);
            }
        }
    }
    Log.i(TAG, "Finishing processing thread");
}
I ran into this same problem. Instead of implementing SurfaceHolder.Callback myself, I subclassed OpenCV's JavaCameraView class. See my live face detection and drawing sample here. It was then trivial to rotate the matrix coming out of the camera according to the device's orientation, prior to processing. Relevant excerpt of the linked code:
@Override
public Mat onCameraFrame(Mat inputFrame) {
    int flipFlags = 1;
    if (display.getRotation() == Surface.ROTATION_270) {
        flipFlags = -1;
        Log.i(VIEW_LOG_TAG, "Orientation is " + getRotation());
    }
    Core.flip(inputFrame, mRgba, flipFlags);
    inputFrame.release();
    Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_RGBA2GRAY);
    if (mAbsoluteFaceSize == 0) {
        int height = mGray.rows();
        if (Math.round(height * mRelativeFaceSize) > 0) {
            mAbsoluteFaceSize = Math.round(height * mRelativeFaceSize);
        }
    }
    // [...] detection code omitted; the method ultimately returns the frame
    return mRgba;
}
I solved the rotation issue using OpenCV itself: after finding out how much the screen rotation needs to be corrected using this code, I apply a rotation matrix to the raw camera image (after converting from YUV to RGB):
Point center = new Point(mFrameWidth/2, mFrameHeight/2);
Mat rotationMatrix = Imgproc.getRotationMatrix2D(center, totalRotation, 1);
[...]
Imgproc.cvtColor(mYuv, mIntermediate, Imgproc.COLOR_YUV420sp2RGBA, 4);
Imgproc.warpAffine(mIntermediate, mRgba, rotationMatrix,
new Size(mFrameHeight, mFrameWidth));
A separate issue is that setPreviewDisplay(null) gives a blank screen on some phones. The solution, which I got from here and which draws on this bug report and this SO question, is to pass a hidden "fake" SurfaceView to the preview display to get it started, while actually displaying the output on an overlaid custom view, which I call CameraView. So, after calling setContentView() in the activity's onCreate(), add this code:
if (VERSION.SDK_INT < VERSION_CODES.HONEYCOMB) {
    final SurfaceView fakeView = new SurfaceView(this);
    fakeView.setLayoutParams(new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT));
    fakeView.setZOrderMediaOverlay(false);
    final CameraView cameraView = (CameraView) this.findViewById(R.id.cameraview);
    cameraView.setZOrderMediaOverlay(true);
    cameraView.fakeView = fakeView;
}
Then, when setting the preview display, use this code:
try {
    if (VERSION.SDK_INT >= VERSION_CODES.HONEYCOMB)
        mCamera.setPreviewTexture(new SurfaceTexture(10));
    else
        mCamera.setPreviewDisplay(fakeView.getHolder());
} catch (IOException e) {
    Log.e(TAG, "mCamera.setPreviewDisplay fails: " + e);
}
If you are only developing for Honeycomb and above, just replace setPreviewDisplay(null) with mCamera.setPreviewTexture(new SurfaceTexture(10)); and be done with it. setDisplayOrientation() still doesn't work if you do this, though, so you'll still have to use the rotation matrix solution.
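For reference, the totalRotation used in the rotation-matrix snippet above can be computed the way the Camera.setDisplayOrientation() documentation suggests, combining the camera's sensor orientation with the current display rotation. A sketch (cameraId is whichever camera you opened):

// Sketch: compute the rotation correction, following the algorithm from
// the Camera.setDisplayOrientation() documentation
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info);
int rotation = getWindowManager().getDefaultDisplay().getRotation();
int degrees = 0;
switch (rotation) {
    case Surface.ROTATION_0:   degrees = 0;   break;
    case Surface.ROTATION_90:  degrees = 90;  break;
    case Surface.ROTATION_180: degrees = 180; break;
    case Surface.ROTATION_270: degrees = 270; break;
}
int totalRotation;
if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
    totalRotation = (info.orientation + degrees) % 360;
    totalRotation = (360 - totalRotation) % 360; // compensate for front-camera mirroring
} else {
    totalRotation = (info.orientation - degrees + 360) % 360;
}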
Hi, I am developing a real-time image processing application on Android. I am using PreviewCallback to get an image for every frame. On tablet devices, the data returned is very large, which makes it hard to process in real time.
My question is: is there any way to get lower-resolution data from the camera preview?
CAMERA PREVIEW CODE:
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Parameters params = camera.getParameters();
    Log.v("image format", Integer.toString(params.getPreviewFormat()));
    // Frame capturing via frameManager
    frameManager.initCamFrame(params.getPreviewSize().width, params.getPreviewSize().height,
            data);
}
You can call parameters.setPreviewSize(width, height), but you need to do it before the camera preview starts, and you need to use a supported value; see the previous answer.
You also should not call camera.getParameters() every frame; do it once and save the values to a variable. You have limited time in onPreviewFrame(), because byte[] data is overwritten on each frame, so try to do only the essential work there.
You should also use setPreviewCallbackWithBuffer(); it considerably improves performance - check this post.
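A minimal sketch of that buffered setup, run before startPreview() (frameManager is the helper from the question):

// Sketch: allocate one buffer matching the preview size/format,
// hand it to the camera, and re-queue it after each frame
Camera.Parameters params = camera.getParameters();
final Camera.Size size = params.getPreviewSize();
int bufferSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
camera.addCallbackBuffer(new byte[bufferSize]);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        frameManager.initCamFrame(size.width, size.height, data);
        camera.addCallbackBuffer(data); // return the buffer for reuse
    }
});
camera.startPreview();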
Are you aware you can get a list of supported preview sizes from the camera parameters by calling getSupportedPreviewSizes()? The devices I've tried this on all returned a sorted list, although sometimes in ascending and sometimes in descending order. You'll probably want to manually iterate the list to find the 'smallest' preview size, or sort it first and grab the first item.
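In code, finding the smallest size by pixel area might look like this (a sketch; apply it before startPreview()):

// Sketch: don't rely on the list's sort order; compare by pixel area
Camera.Parameters params = camera.getParameters();
Camera.Size smallest = null;
for (Camera.Size s : params.getSupportedPreviewSizes()) {
    if (smallest == null || s.width * s.height < smallest.width * smallest.height) {
        smallest = s;
    }
}
params.setPreviewSize(smallest.width, smallest.height);
camera.setParameters(params);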
you can try this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        byte[] baos = convertYuvToJpeg(data, camera);
        StringBuilder dataBuilder = new StringBuilder();
        dataBuilder.append("data:image/jpeg;base64,").append(Base64.encodeToString(baos, Base64.DEFAULT));
        mSocket.emit("newFrame", dataBuilder.toString());
    } catch (Exception e) {
        Log.d("########", "ERROR");
    }
}
public byte[] convertYuvToJpeg(byte[] data, Camera camera) {
    YuvImage image = new YuvImage(data, ImageFormat.NV21,
            camera.getParameters().getPreviewSize().width,
            camera.getParameters().getPreviewSize().height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int quality = 20; // set quality
    // this line decreases the image quality
    image.compressToJpeg(new Rect(0, 0, camera.getParameters().getPreviewSize().width,
            camera.getParameters().getPreviewSize().height), quality, baos);
    return baos.toByteArray();
}