I am creating an application that captures video from the front and rear cameras simultaneously. Both cameras send images to their respective ImageReaders for processing. I also have a TextureView to show a preview from whichever camera the user has selected.
So the capture session of the camera showing the preview has two surfaces (the ImageReader's and the TextureView's), while the other camera's session has only the ImageReader's.
Now, when the user switches cameras, I want to remove the TextureView's Surface from one CameraCaptureSession and add it to the other session.
Is there any way I can remove a Surface from a CameraCaptureSession without closing the session?
My code as of now (the rear camera's code is similar):
SurfaceTexture surfaceTexture = mTextureView.getSurfaceTexture();
surfaceTexture.setDefaultBufferSize(mTextureView.getWidth(), mTextureView.getHeight());
mCaptureRequestBuilderFront = mCameraDeviceFront.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
List<Surface> surfaces = new ArrayList<>();
/* Make a Surface out of the texture, since the preview is shown on a Surface */
Surface surface = new Surface(surfaceTexture);
surfaces.add(surface);
mCaptureRequestBuilderFront.addTarget(surface);
/* Use the ImageReader's Surface to get images for processing */
Surface readerSurface = mImageReaderFront.getSurface();
surfaces.add(readerSurface);
mCaptureRequestBuilderFront.addTarget(readerSurface);
/* Create the capture session to start getting images from the camera */
mCameraDeviceFront.createCaptureSession(surfaces, mSessionCallbackFront, mBackgroundHandler);
No, this isn't possible. You can certainly stop targeting the TextureView in your requests, but another session can't include the TextureView in its set of outputs unless the first session is recreated without it.
If you want to make this smoother, you'd basically need to implement your own buffer routing - for example, a GL stage with two input SurfaceTextures that renders into the TextureView's SurfaceTexture, with each camera connected to one of the input SurfaceTextures. Then you write a fragment shader that simply copies either SurfaceTexture A or B into the output, depending on which camera is active.
That's a lot of boilerplate, but it's pretty efficient.
On recent Android releases, you could instead try a pair of ImageReaders (one per camera) and an ImageWriter feeding the TextureView, using the ImageReader.newInstance overload that accepts a usage flag, with HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE. Then queue each Image from whichever ImageReader is currently active into the ImageWriter that targets the TextureView.
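The routing logic itself is independent of the Android classes: frames from whichever source is active get forwarded to the single sink, and frames from the hidden camera are discarded. A minimal off-device sketch, with plain queues standing in for the two ImageReaders and the ImageWriter (all class and method names here are hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Sketch: forwards frames from the active source to a single sink.
 *  The queues stand in for two ImageReaders and one ImageWriter. */
class FrameRouter {
    final Queue<String> front = new ArrayDeque<>();
    final Queue<String> rear = new ArrayDeque<>();
    final Queue<String> sink = new ArrayDeque<>();
    private boolean frontActive = true;

    void switchCamera() { frontActive = !frontActive; }

    /** Drain pending frames: active source goes to the sink, the other is dropped. */
    void pump() {
        Queue<String> active = frontActive ? front : rear;
        Queue<String> inactive = frontActive ? rear : front;
        while (!active.isEmpty()) sink.add(active.poll());
        inactive.clear(); // frames from the hidden camera are never shown
    }
}
```

In the real app, pump() would correspond to the onImageAvailable callback of the active ImageReader queueing its Image into the ImageWriter, while the inactive reader's images are acquired and closed immediately.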
From the app side, ImageReader and SurfaceTexture are asynchronous: SurfaceTexture.OnFrameAvailableListener and ImageReader.OnImageAvailableListener fire at different times.
I am building an AR app. I calculate object motion from the ImageReader images and output the motion information; separately, I call updateTexImage to render the background. The problem is that the object motion visibly lags behind the background rendering.
The workflow is:
Camera2 -> ImageReader -> calculate object motion -> render a virtual object using the motion information
Camera2 -> SurfaceTexture -> render background with updateTexImage
updateTexImage and the virtual-object rendering are both called in Renderer.onDrawFrame.
So the question is: how do I sync the ImageReader and SurfaceTexture outputs of camera2?
The easiest option is not to use two data paths, and instead either do image analysis on the SurfaceTexture buffer (either in EGL or read back from GPU to CPU for analysis), or use the ImageReader buffer to draw everything with.
If that's not feasible, you need to look at the timestamps (https://developer.android.com/reference/android/graphics/SurfaceTexture.html#getTimestamp() and https://developer.android.com/reference/android/media/Image.html#getTimestamp()). For the same capture, the two paths will have the same timestamp, so you can queue up and synchronize your final drawing by matching them up.
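The matching step can be sketched without any Android classes: hold per-frame results from each path in a map keyed by timestamp, and emit a pair as soon as both sides have reported the same capture. A minimal sketch (the class and the String payloads are hypothetical stand-ins for your motion result and texture frame):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch: pairs up per-frame results from two async paths by capture timestamp. */
class FrameSynchronizer {
    private final Map<Long, String> pendingMotion = new HashMap<>();
    private final Map<Long, String> pendingTexture = new HashMap<>();
    final List<String> matched = new ArrayList<>();

    /** Would be called from ImageReader.OnImageAvailableListener with Image.getTimestamp(). */
    void onMotionResult(long timestampNs, String motion) {
        String tex = pendingTexture.remove(timestampNs);
        if (tex != null) matched.add(tex + "+" + motion);
        else pendingMotion.put(timestampNs, motion);
    }

    /** Would be called from SurfaceTexture.OnFrameAvailableListener with getTimestamp(). */
    void onTextureFrame(long timestampNs, String frame) {
        String motion = pendingMotion.remove(timestampNs);
        if (motion != null) matched.add(frame + "+" + motion);
        else pendingTexture.put(timestampNs, frame);
    }
}
```

A real implementation would also evict stale entries (e.g. timestamps older than a few frames) so that a dropped frame on one path doesn't leak memory on the other.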
I have an app that continuously saves camera images using an ImageReader.
Now I need to add multiple SurfaceViews dynamically, to show previews of different sizes, after the camera session has been created.
The ImageReader's surface was added before the session was created, like this:
mBuilder = mCameraDevice!!.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
mBuilder!!.addTarget(mImageReader!!.surface)
val surfaces = ArrayList<Surface>()
surfaces.add(mImageReader!!.surface)
mCameraDevice!!.createCaptureSession(surfaces, mSessionCallback, mBackgroundHandler)
And my new SurfaceView is created after createCaptureSession.
So how can I add another preview surface to the device to receive data from camera2?
This is not possible with camera2 directly when the outputs have different resolutions. If you need to change the resolution of an output, you have to create a new capture session with the new outputs you want.
If you want multiple SurfaceViews of the same size, you can use the surface sharing APIs added in API level 26 and later in OutputConfiguration (https://developer.android.com/reference/android/hardware/camera2/params/OutputConfiguration).
If that's not sufficient, the other option is to connect the camera to a SurfaceTexture with the maximum SurfaceView resolution you might want, and then render lower resolution outputs from that via OpenGL, creating EGL windows for each new SurfaceView you want to draw to. That's a lot of code needed to set up the EGL context and rendering, but should be fairly efficient.
I was following a camera2 API tutorial for Android, and one of the steps was to resize the TextureView's surface to an acceptable size by doing the following:
SurfaceTexture surfaceTexture = mTextureView.getSurfaceTexture();
surfaceTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
Surface previewSurface = new Surface(surfaceTexture);
previewBuilder = CD.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewBuilder.addTarget(previewSurface);
The mPreviewSize variable is of type Size, and it was determined beforehand by cycling through the acceptable sizes and selecting the one that best matches your screen size. The problem is that I'm using a SurfaceView, and I'm trying to resize the Surface object inside the SurfaceView. I tried this, but it didn't work:
SurfaceHolder SH= gameSurface.getHolder();
SH.setFixedSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
Surface Sur = SH.getSurface();
previewBuilder = CD.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewBuilder.addTarget(Sur);
In debug mode I can see that mPreviewSize is correct (it is set to an acceptable size), but I get an error saying that I'm trying to use an unacceptable size; the size it reports is not the same as mPreviewSize, which means the resizing isn't taking effect. Any ideas?
You probably need to wait to receive the surfaceChanged callback from the SurfaceView, before trying to use the Surface to create a camera capture session.
setFixedSize doesn't necessarily take effect immediately.
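The "optimal size" selection the tutorial performs is plain comparison logic and can be sketched off-device. This is a minimal sketch, assuming the common strategy of picking the smallest supported size that still covers the view, with a local stand-in class for android.util.Size:

```java
import java.util.List;

/** Minimal stand-in for android.util.Size so the logic runs off-device. */
class Size {
    final int width, height;
    Size(int w, int h) { width = w; height = h; }
    long area() { return (long) width * height; }
}

class PreviewSizeChooser {
    /**
     * Picks the smallest supported size that is at least as large as the
     * view in both dimensions; falls back to the largest supported size
     * if nothing covers the view.
     */
    static Size choose(List<Size> supported, int viewWidth, int viewHeight) {
        Size best = null;
        Size largest = null;
        for (Size s : supported) {
            if (largest == null || s.area() > largest.area()) largest = s;
            if (s.width >= viewWidth && s.height >= viewHeight
                    && (best == null || s.area() < best.area())) best = s;
        }
        return best != null ? best : largest;
    }
}
```

Whatever strategy you use, the chosen size only takes effect on the SurfaceView once setFixedSize has propagated and surfaceChanged has fired, which is the point of the answer above.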
I would like to take "instant" screenshots of a Presentation object in Android. My Presentation normally renders to a virtual display (PRIVATE) that is backed by the Surface of a MediaRecorder set up to record video. Recording the presentation as a video works great.
mMediaRecorder = new MediaRecorder();
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mMediaRecorder.setOutputFile(getScratchFile().getAbsolutePath());
mMediaRecorder.setVideoEncodingBitRate(bitrateInBitsPerSecond);
mMediaRecorder.setVideoFrameRate(30);
mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mVirtualDisplay = mDisplayManager.createVirtualDisplay(
        DISPLAY_NAME, // string name, required
        mVideoSize.getWidth(),
        mVideoSize.getHeight(),
        160, // screen densityDpi, not sure what it means in this context
        mMediaRecorder.getSurface(), // the media recorder must already be prepare()'d
        DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY); // only we can use it
mPresentation = new MyPresentation(getActivity(), mVirtualDisplay.getDisplay());
How can I get a screenshot of mMediaRecorder.getSurface() at any time, including when the mMediaRecorder is set up but not recording?
I've tried a number of methods based on view.getDrawingCache() on the Presentation's root view, and I only get clear/black output. The presentation itself contains TextureView objects, which I'm guessing break this strategy.
I also tried using an ImageReader with the DisplayPreview of the mMediaRecorder, but it never receives any images in its callback:
mMediaRecorder.setDisplayPreview(mImageReader.getSurface());
I'd really like some way to mirror the Surface that backs the presentation into an ImageReader and use that as the consumer; I just can't see how to "mirror" one surface as a producer into another "consumer" class. It seems there should be an easy way with SurfaceFlinger.
Surfaces are the producer end of a producer-consumer data structure. A producer can't pull data back out of the pipe, so attempting to read frames back from a Surface isn't possible.
When feeding MediaCodec or MediaRecorder, the consumer end is in a different process (mediaserver) that manages the media hardware. For a SurfaceView, the consumer is in SurfaceFlinger. For a TextureView, both ends are in your app, which is why you can easily get a frame from TextureView (call getBitmap()).
To intercept the incoming data you'd need to have both producer and consumer in the same process. The SurfaceTexture class (also known as "GLConsumer") provides this -- it's a consumer that converts the frames it receives into GLES textures.
So the idea would be to create a SurfaceTexture, create a new Surface from that (you'll note that Surface's only public constructor takes a SurfaceTexture), and pass that Surface as the virtual display output Surface. Then as frames come in you "forward" them to the MediaRecorder by rendering the textures with OpenGL ES.
This is not entirely straightforward, especially if you haven't worked with OpenGL ES before. Various examples can be found in Grafika (e.g. "texture from camera" and "record GL app").
I don't know if setDisplayPreview() is expected to work for anything other than Camera.
I've run into an issue with slow focusing on the Nexus 6.
I'm developing a camera application using the camera2 API.
For the application's needs I create a preview request with two surfaces:
- a SurfaceView (viewfinder)
- a YUV ImageReader surface (to use the data in histogram calculation)
And here is the critical point: if I add only the viewfinder surface, focusing works normally. But with both surfaces, focusing is very slow, with visible steps of lens movement!
The code is quite standard, written according to Google's documentation:
mImageReaderPreviewYUV = ImageReader.newInstance(previewWidth, previewHeight, ImageFormat.YUV_420_888, 2);
previewRequestBuilder = camDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewRequestBuilder.addTarget(getCameraSurface()); // add the SurfaceView's surface
previewRequestBuilder.addTarget(mImageReaderPreviewYUV.getSurface()); // add the ImageReader's surface
mCaptureSession.setRepeatingRequest(previewRequestBuilder.build(), captureCallback, null);
Does the system logcat show any warnings about buffers not being available?
Is the preview frame rate slow, or is it smooth (~30 fps) with only the focusing behaving oddly?
If the former, you may not be returning Image objects to the ImageReader (by closing them once you're done with them) at 30 fps, so the camera device is starved of buffers to fill and cannot maintain 30 fps preview.
To test this, implement a minimal ImageReader.OnImageAvailableListener that just returns the image immediately:
public class TestImageListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image img = reader.acquireNextImage();
        img.close();
    }
}
...
mImageReaderPreviewYUV.setOnImageAvailableListener(new TestImageListener(), null);
If this gives you fluid preview, then your image processing is too slow.
As a solution, increase the number of buffers in your ImageReader, and then use reader.acquireLatestImage() to drop older buffers and process only the newest Image each time you calculate your histogram.
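The acquireLatestImage() pattern (discard everything but the newest frame so the producer never stalls) can be simulated off-device with a plain deque. This is a sketch of the semantics, not the Android implementation; FrameQueue and the integer frame IDs are stand-ins:

```java
import java.util.ArrayDeque;

/** Sketch: simulates an ImageReader's bounded buffer queue and
 *  the drop-oldest behavior of acquireLatestImage(). */
class FrameQueue {
    private final ArrayDeque<Integer> frames = new ArrayDeque<>();
    private final int maxImages;

    FrameQueue(int maxImages) { this.maxImages = maxImages; }

    /** Producer side: the camera enqueues a frame; when the queue is
     *  full (no buffers returned), the camera is starved and stalls. */
    boolean offer(int frameId) {
        if (frames.size() == maxImages) return false; // starvation: drop/stall
        frames.addLast(frameId);
        return true;
    }

    /** Consumer side: like acquireLatestImage(), releases (here: discards)
     *  all older frames and hands back only the newest one. */
    Integer acquireLatest() {
        Integer newest = null;
        while (!frames.isEmpty()) newest = frames.pollFirst();
        return newest;
    }
}
```

The key point mirrored here: a slow consumer that calls acquireLatest() still returns all buffers promptly, so offer() keeps succeeding and the camera can hold 30 fps even when processing lags.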
I had the same issue on the N6, and I think it works more smoothly now: add the ImageReader surface before the camera surface:
previewRequestBuilder = camDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewRequestBuilder.addTarget(mImageReaderPreviewYUV.getSurface()); // add the ImageReader's surface first
previewRequestBuilder.addTarget(getCameraSurface()); // then the SurfaceView's surface
I also tested my camera app on an N4 running 5.0.1, and both orderings work perfectly there.