I was following a tutorial on the camera2 API for Android, and one of the steps was to resize the TextureView's surface to an acceptable preview size by doing the following:
SurfaceTexture surfaceTexture = mTextureView.getSurfaceTexture();
surfaceTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
Surface previewSurface = new Surface(surfaceTexture);
previewBuilder = CD.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW); // CD is the CameraDevice
previewBuilder.addTarget(previewSurface);
The mPreviewSize variable is of type Size; it was determined beforehand by cycling through the acceptable sizes and selecting the optimal one for your screen size. The problem is that I'm using a SurfaceView, and I'm trying to resize the Surface object inside the SurfaceView. I tried this, but it didn't work:
SurfaceHolder SH = gameSurface.getHolder();
SH.setFixedSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
Surface Sur = SH.getSurface();
previewBuilder = CD.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewBuilder.addTarget(Sur);
In debug mode I can see that mPreviewSize is correct (as in, it is set to an acceptable size), but I get an error saying that I'm trying to use an unacceptable size. The size shown in the error is not the same as mPreviewSize, which means the resizing isn't working. Any ideas?
You probably need to wait to receive the surfaceChanged callback from the SurfaceView before trying to use the Surface to create a camera capture session.
setFixedSize doesn't necessarily take effect immediately.
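A minimal sketch of that ordering (assuming your gameSurface SurfaceView, plus a hypothetical startPreviewSession() helper that builds the capture request and session):

SurfaceHolder holder = gameSurface.getHolder();
holder.setFixedSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
holder.addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) { }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // Only build the session once the buffer size actually matches the request.
        if (width == mPreviewSize.getWidth() && height == mPreviewSize.getHeight()) {
            startPreviewSession(holder.getSurface()); // hypothetical helper
        }
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) { }
});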
I am creating an application which takes video from both front and rear cameras simultaneously. Both cameras send images to their respective ImageReaders for some processing. I also have a TextureView to show a preview from whichever camera the user selects.
So the capture session of the camera showing the preview has two surfaces (ImageReader and TextureView), while the other camera's session has only the ImageReader.
Now, when the user switches cameras, I want to remove the TextureView's Surface from one CameraCaptureSession and add it to the other session.
Is there any way I can remove a Surface from a CameraCaptureSession without closing the session?
My code as of now (the rear camera's code is similar):
SurfaceTexture surfaceTexture = mTextureView.getSurfaceTexture();
surfaceTexture.setDefaultBufferSize(mTextureView.getWidth(), mTextureView.getHeight());
mCaptureRequestBuilderFront = mCameraDeviceFront.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
List<Surface> surfaces = new ArrayList<Surface>();
/* Make Surface out of texture as preview is shown on a Surface */
Surface surface = new Surface(surfaceTexture);
surfaces.add(surface);
mCaptureRequestBuilderFront.addTarget(surface);
/* Make Surface out of ImageReader to get images for processing */
Surface readerSurface = mImageReaderFront.getSurface();
surfaces.add(readerSurface);
mCaptureRequestBuilderFront.addTarget(readerSurface);
/* Create the Capture Session to start getting images from the camera */
mCameraDeviceFront.createCaptureSession(
        surfaces, mSessionCallbackFront, mBackgroundHandler);
No, this isn't possible. You can certainly stop targeting the TextureView in your requests, but another session can't include the TextureView in its set of outputs unless the first session is recreated without it.
If you want to make this smoother, you'd basically need to implement your own buffer routing - for example, have a GL stage that has two input SurfaceTextures and renders into the TextureView's SurfaceTexture, and then connect each camera to one of the input SurfaceTextures. Then you write a pixel shader that simply copies either SurfaceTexture A or B into the output, depending on which camera is active.
That's a lot of boilerplate, but is pretty efficient.
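For illustration, the switching shader itself can be tiny. A sketch of the fragment shader as a Java string constant (uTextureA, uTextureB, and uUseCameraA are hypothetical names; the EGL/GLES plumbing around it is omitted):

// Samples one of two external (SurfaceTexture-backed) textures,
// selected by a uniform that is flipped when the active camera changes.
static final String SWITCHING_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n"
        + "precision mediump float;\n"
        + "uniform samplerExternalOES uTextureA;\n" // front camera stream
        + "uniform samplerExternalOES uTextureB;\n" // rear camera stream
        + "uniform float uUseCameraA;\n"            // 1.0 = show A, 0.0 = show B
        + "varying vec2 vTexCoord;\n"
        + "void main() {\n"
        + "    vec4 a = texture2D(uTextureA, vTexCoord);\n"
        + "    vec4 b = texture2D(uTextureB, vTexCoord);\n"
        + "    gl_FragColor = mix(b, a, uUseCameraA);\n"
        + "}\n";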
On recent Android releases, you could try using a pair of ImageReaders for the cameras and an ImageWriter into the TextureView, creating the readers with the ImageReader.newInstance() overload that accepts a usage flag, passing HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE. Then queue an Image from whichever ImageReader is currently active into the ImageWriter feeding the TextureView.
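A sketch of that routing, assuming API 26+ (mActiveReader and the sizes are placeholder names, and the buffer counts are arbitrary):

// One GPU-sampleable ImageReader per camera.
ImageReader mFrontReader = ImageReader.newInstance(width, height,
        ImageFormat.PRIVATE, 3, HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE);
ImageReader mRearReader = ImageReader.newInstance(width, height,
        ImageFormat.PRIVATE, 3, HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE);

// One ImageWriter feeding the TextureView's SurfaceTexture.
Surface previewSurface = new Surface(mTextureView.getSurfaceTexture());
ImageWriter mPreviewWriter = ImageWriter.newInstance(previewSurface, 3);

// In each reader's listener, forward frames only from the active camera;
// queueInputImage() takes ownership of the Image and closes it when done.
ImageReader.OnImageAvailableListener forwarder = reader -> {
    Image image = reader.acquireLatestImage();
    if (image == null) return;
    if (reader == mActiveReader) {
        mPreviewWriter.queueInputImage(image);
    } else {
        image.close(); // drop frames from the inactive camera
    }
};
mFrontReader.setOnImageAvailableListener(forwarder, mBackgroundHandler);
mRearReader.setOnImageAvailableListener(forwarder, mBackgroundHandler);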
I have an app that saves camera images continuously using an ImageReader.
Now I have a requirement to add multiple SurfaceViews dynamically, to show previews of different sizes after the camera session has been created.
The problem is that the ImageReader's surface was added before the session was created, like this:
mBuilder = mCameraDevice!!.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
mBuilder!!.addTarget(mImageReader!!.surface)
val surfaces = ArrayList<Surface>()
surfaces.add(mImageReader!!.surface)
mCameraDevice!!.createCaptureSession(surfaces, mSessionCallback, mBackgroundHandler)
And my new SurfaceView will be created after createCaptureSession.
So how should I add another preview surface to the device to receive data from camera2?
This is not possible with camera2 directly for different output resolutions. If you need to change the resolution of an output, you have to create a new capture session with the new outputs you want.
If you want multiple SurfaceViews of the same size, you can use the surface-sharing APIs added to OutputConfiguration in API level 26 and later (https://developer.android.com/reference/android/hardware/camera2/params/OutputConfiguration).
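A sketch of the shared-output setup, assuming API 26+ and two same-sized SurfaceViews (surfaceViewA and surfaceViewB are hypothetical names):

// One OutputConfiguration shared by both SurfaceViews, plus the ImageReader.
OutputConfiguration sharedConfig =
        new OutputConfiguration(surfaceViewA.getHolder().getSurface());
sharedConfig.enableSurfaceSharing(); // must be called before session creation
sharedConfig.addSurface(surfaceViewB.getHolder().getSurface());

OutputConfiguration readerConfig = new OutputConfiguration(mImageReader.getSurface());

mCameraDevice.createCaptureSessionByOutputConfigurations(
        Arrays.asList(sharedConfig, readerConfig), mSessionCallback, mBackgroundHandler);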
If that's not sufficient, the other option is to connect the camera to a SurfaceTexture with the maximum SurfaceView resolution you might want, and then render lower resolution outputs from that via OpenGL, creating EGL windows for each new SurfaceView you want to draw to. That's a lot of code needed to set up the EGL context and rendering, but should be fairly efficient.
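As a fragment of that idea, here is just the part that attaches an EGL window surface to each newly added SurfaceView (it assumes an already-initialized mEglDisplay and mEglConfig; the context and shader setup are omitted):

// Each new SurfaceView becomes another EGL render target for the GL stage.
EGLSurface createWindowSurfaceFor(SurfaceView view) {
    int[] attribs = { EGL14.EGL_NONE };
    EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
            mEglDisplay, mEglConfig, view.getHolder().getSurface(), attribs, 0);
    if (eglSurface == EGL14.EGL_NO_SURFACE) {
        throw new RuntimeException("eglCreateWindowSurface failed: 0x"
                + Integer.toHexString(EGL14.eglGetError()));
    }
    return eglSurface;
}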
This is coded in NativeScript, so I'll try my best to adapt the scenario to Java. I have created an in-app video view with support for recording the video.
This is done as follows:
First I create a SurfaceView that will hold the preview of the camera:
this.mSurfaceView = new android.view.SurfaceView(this._context);
this.mHolder = this.mSurfaceView.getHolder();
this.mHolder.setType(android.view.SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
Then I create an instance of the Camera and set the video surface:
var mCamera = android.hardware.Camera;
var camera = mCamera.open(1);
this.camera = camera;
this.camera.setDisplayOrientation(90);
var parameters = camera.getParameters();
parameters.setRecordingHint(true);
if( parameters.isVideoStabilizationSupported() ){
parameters.setVideoStabilization(true);
}
camera.setParameters(parameters);
this.camera.setPreviewDisplay(this.mHolder);
this.camera.startPreview();
this.camera.startFaceDetection();
Now, all is good. I have the camera preview in the view that I want it to be. The color is good and I think the image aspect ratio is good too.
However, when I initiate the recording, as I do with the following code:
this.mediarecorder = new android.media.MediaRecorder();
// Step 1: Unlock and set camera to MediaRecorder
this.camera.unlock();
this.mediarecorder.setCamera(this.camera);
// Step 2: Set sources
this.mediarecorder.setAudioSource(android.media.MediaRecorder.AudioSource.CAMCORDER);
this.mediarecorder.setVideoSource(android.media.MediaRecorder.VideoSource.CAMERA);
//this.mediarecorder.setOutputFormat(android.media.MediaRecorder.OutputFormat.MPEG_4);
// Step 3: Set a CamcorderProfile (requires API Level 8 or higher)
this.mediarecorder.setProfile(android.media.CamcorderProfile.get(android.media.CamcorderProfile.QUALITY_HIGH));
// platform.screen.mainScreen.widthDIPs
// platform.screen.mainScreen.heightDIPs
// Step 4: Set output file
var fileName = "videoCapture_" + new Date() + ".mp4";
var path = android.os.Environment.getExternalStoragePublicDirectory(android.os.Environment.DIRECTORY_DCIM).getAbsolutePath() + "/Camera/" + fileName;
this.file = new java.io.File(path);
this.mediarecorder.setOutputFile(this.file.toString());
this.mediarecorder.setOrientationHint(270);
try {
this.mediarecorder.prepare();
this.mediarecorder.start();
} catch( ex ) {
console.log(ex);
}
Then the image suddenly becomes darker, and my face (it's what's in focus when I'm trying this out) gets wider. So the aspect ratio changes, and somehow so does the lighting.
I have tried setting setPictureSize on the camera parameters and setVideoSize on the MediaRecorder, with no luck. As for the lighting change, I simply have no clue what's going on. I've been googling myself halfway to heaven and still found nothing, so I hope someone here has a tip on what to pursue next.
Video recording generally tries to run at a steady frame rate, such as 30fps. Camera preview often slows itself down to 10-15fps in low light to maintain brightness, so if you're in a darker location, video recording will look darker: it can't expose for longer than 1/30s, while the preview can expose for up to 1/10s.
Did you call setVideoSize before or after calling setProfile? The setProfile call changes many parameters, including the preview size; most video recording sizes are 16:9, while the default camera preview resolution is likely a 4:3 size. So when you start recording, the aspect ratio switches.
Most video recording apps use 16:9 preview sizes even before starting recording so that they're consistent. You can also record 4:3 video, but that's generally not what people want to see.
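A minimal sketch of that fix against the old android.hardware.Camera API (assuming the camera is already open and preview hasn't started yet): read the video size out of the CamcorderProfile and use it as the preview size, so the aspect ratio never changes when recording starts.

CamcorderProfile profile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
Camera.Parameters params = camera.getParameters();

// Match the preview to the recording resolution (16:9 on most devices).
// In production you'd first check params.getSupportedPreviewSizes()
// for an exact or closest match.
params.setPreviewSize(profile.videoFrameWidth, profile.videoFrameHeight);
camera.setParameters(params);
camera.startPreview();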
I've bumped into an issue with slow focusing on the Nexus 6.
I'm developing a camera application, and I'm now using the camera2 API.
For the application's needs, we create a preview request with two surfaces:
- a SurfaceView (the viewfinder)
- a YUV ImageReader surface (to use the data in a histogram calculation)
And here is the critical point: if I add only the viewfinder surface, focusing behaves normally. But with both surfaces, focusing is very slow, with visible steps of the lens movement!
The code is quite standard, written according to the Google documentation:
mImageReaderPreviewYUV = ImageReader.newInstance(previewWidth, previewHeight, ImageFormat.YUV_420_888, 2);
previewRequestBuilder = camDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewRequestBuilder.addTarget(getCameraSurface()); //Add surface of SurfaceView
previewRequestBuilder.addTarget(mImageReaderPreviewYUV.getSurface()); //Add ImageReader's surface
mCaptureSession.setRepeatingRequest(previewRequestBuilder.build(), captureCallback, null);
Does the system logcat show any warnings about buffers not being available?
Is the preview frame rate slow, or is it smooth (~30fps) with only the focusing behaving oddly?
If the former, you may not be returning Image objects to the ImageReader (by closing them once done with them) at 30 fps, so the camera device is starved for buffers to fill, and cannot maintain 30fps preview.
To test this, implement a minimal ImageReader.OnImageAvailableListener whose onImageAvailable(ImageReader reader) method just returns the image immediately:
public class TestImageListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // Acquire the buffer and return it immediately so the camera never stalls.
        Image img = reader.acquireNextImage();
        img.close();
    }
}
...
mImageReaderPreviewYUV.setOnImageAvailableListener(new TestImageListener(), null);
If this lets you get fluid preview, then your image processing is too slow.
As a solution, you should increase the number of buffers in your ImageReader and then use reader.acquireLatestImage() to drop older buffers and process only the newest Image each time you calculate your histogram.
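A sketch of that pattern (the buffer count of 4 and the computeHistogram() call are illustrative):

// More buffers give the camera headroom while one frame is being processed.
mImageReaderPreviewYUV = ImageReader.newInstance(
        previewWidth, previewHeight, ImageFormat.YUV_420_888, 4);

mImageReaderPreviewYUV.setOnImageAvailableListener(reader -> {
    // acquireLatestImage() silently discards any older queued frames.
    Image img = reader.acquireLatestImage();
    if (img == null) return;
    try {
        computeHistogram(img); // hypothetical processing routine
    } finally {
        img.close(); // always return the buffer promptly
    }
}, mBackgroundHandler);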
I had the same issue on the N6, and I think it works more smoothly now: add the ImageReader surface before the camera surface.
previewRequestBuilder = camDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewRequestBuilder.addTarget(mImageReaderPreviewYUV.getSurface()); //Add ImageReader's surface
previewRequestBuilder.addTarget(getCameraSurface()); //Add surface of SurfaceView
I also tested my camera app on an N4 running 5.0.1, and both orderings work perfectly there.
I am trying to build a camera app that takes in the camera preview, manipulates the pixels, and then displays the new image. I need the manipulation to happen in real time.
From what I have read online, and from questions here, you need to make a custom surface view and manipulate the pixel array from the onPreviewFrame method. I have built a custom surface view and have this method running. I have converted the YUV to RGB.
Now, my question is, how do I display this new pixel array on the screen in real time? Do I somehow return it in the onPreviewFrame method? Do I have to change the byte[] array? Do I take my new pixel array and display it using a Bitmap? Is there a way to get the byte[] array from the camera preview without even displaying the preview?
If someone could answer these questions, with code examples that would be great! I am kind of new to Android, so I need the answers explained well enough for me to understand. Here is part of the code I have that runs the camera preview:
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    // If your preview can change or rotate, take care of those events here.
    // Make sure to stop the preview before resizing or reformatting it.
    if (mHolder.getSurface() == null) {
        // preview surface does not exist
        return;
    }

    // stop preview before making changes
    try {
        mCamera.stopPreview();
    } catch (Exception e) {
        // ignore: tried to stop a non-existent preview
    }

    // start preview with new settings
    try {
        //parameters.setPreviewSize(w, h);
        mCamera.setParameters(parameters);
        mCamera.setPreviewDisplay(mHolder);
        mCamera.setPreviewCallback(new PreviewCallback() {
            public void onPreviewFrame(byte[] data, Camera camera) {
                System.out.println("onPreviewFrame");
                // transforms NV21 pixel data into RGB pixels
                decodeYUV420SP(pixels, data, previewSize.width, previewSize.height);
                // Output the value of the top-left pixel in the preview to LogCat
                Log.i("Pixels", "The top-left pixel has the following RGB (hexadecimal) value:"
                        + Integer.toHexString(pixels[0]));
            }
        });
        mCamera.startPreview();
    } catch (Exception e) {
        Log.d(null, "Error starting camera preview: " + e.getMessage());
    }
}
This gives me the rgb pixel array I want to display instead of the preview. How do I do this?
You cannot manipulate the surface that you connected to the camera preview. The byte array you receive in onPreviewFrame() is just a copy of what the framework displays on the screen. Moreover, you will find that the two streams are asynchronous: you can slow down the callbacks (e.g. by adding some sleep() to your callback), but the preview surface will be updated regardless.
You can hide the preview SurfaceView by placing other views on top of it, or you can get rid of the view altogether by using setPreviewTexture() instead of setPreviewDisplay() (note: added in API level 11). Hiding the surface is not as easy as it may seem: the framework may pop it up to the top, and it requires careful synchronization of camera start or restart with layout.
Anyway, once you have the surface hidden, you can use the byte array received in onPreviewFrame() to generate an image and display it. You are free to manipulate the pixels to your liking. I believe the optimal technique is to send the pixel data to OpenGL: you can use a shader to offload the YCrCb (NV21) to RGB conversion to the GPU.
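A minimal sketch of the setPreviewTexture() trick (API 11+): the camera still needs a preview target, but a detached SurfaceTexture satisfies it without anything appearing on screen. The texture name 10 is arbitrary here, since nothing ever renders it.

SurfaceTexture dummyTexture = new SurfaceTexture(10); // arbitrary GL texture name
try {
    mCamera.setPreviewTexture(dummyTexture);
} catch (IOException e) {
    Log.e("Preview", "setPreviewTexture failed", e);
}
mCamera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // data arrives as NV21; convert it and draw it wherever you like,
        // e.g. upload to a GL texture and do the YUV-to-RGB step in a shader.
    }
});
mCamera.startPreview();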