ImageReader and SurfaceTexture are asynchronous from the app's point of view: SurfaceTexture.OnFrameAvailableListener and ImageReader.OnImageAvailableListener fire at different times.
I am building an AR app. I calculate the object motion from the image delivered by the ImageReader and output that motion information; on the other side I call updateTexImage to render the camera background. The problem is that the object motion lags noticeably behind the background rendering.
The workflow is below:
Camera2 -> ImageReader -> calculate object motion -> render a virtual object with the motion information
Camera2 -> SurfaceTexture -> render the background with updateTexImage
updateTexImage and the virtual-object rendering are both called in Renderer.onDrawFrame.
So the question is: how do I synchronize the ImageReader and SurfaceTexture outputs of Camera2?
The easiest option is not to use two data paths at all: either do the image analysis on the SurfaceTexture buffer (in EGL, or read it back from GPU to CPU for analysis), or draw everything from the ImageReader buffer.
If that's not feasible, you need to look at the timestamps (https://developer.android.com/reference/android/graphics/SurfaceTexture.html#getTimestamp() and https://developer.android.com/reference/android/media/Image.html#getTimestamp()). For the same capture, the two paths will have the same timestamp, so you can queue up and synchronize your final drawing by matching them up.
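A rough sketch of that matching, assuming a single GL render thread; MotionResult and computeMotion() are placeholders for your analysis step:

// ImageReader path (background thread): compute motion, keyed by the image timestamp.
private final Map<Long, MotionResult> mPendingMotion = new ConcurrentHashMap<>();

private final ImageReader.OnImageAvailableListener mImageListener = reader -> {
    try (Image image = reader.acquireLatestImage()) {
        if (image == null) return;
        mPendingMotion.put(image.getTimestamp(), computeMotion(image));
    }
};

// GL path (render thread): after updateTexImage(), the SurfaceTexture timestamp
// identifies the same capture, so draw the virtual object only when its motion is ready.
public void onDrawFrame(GL10 unused) {
    mSurfaceTexture.updateTexImage();
    long ts = mSurfaceTexture.getTimestamp();
    drawBackground();
    MotionResult motion = mPendingMotion.remove(ts);
    if (motion != null) {
        drawVirtualObject(motion);
    }
    // If the motion result isn't ready yet, either skip the object, reuse the previous
    // result, or hold back the background frame until both sides have caught up.
}

Holding back frames means not calling updateTexImage() until the matching analysis result exists, which trades a little background latency for alignment.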
Related
I have an app saving camera images continuously by using ImageReader.
Now I need to add multiple SurfaceViews dynamically, to show previews at different sizes, after the camera session has been created.
The ImageReader's surface was added before the session was created, like this:
mBuilder = mCameraDevice!!.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
mBuilder!!.addTarget(mImageReader!!.surface)
val surfaces = ArrayList<Surface>()
surfaces.add(mImageReader!!.surface)
mCameraDevice!!.createCaptureSession(surfaces, mSessionCallback, mBackgroundHandler)
And my new SurfaceView is created after createCaptureSession.
So how can I add another preview surface to the device so it receives data from camera2?
This is not possible with camera2 directly for different output resolutions. If you need to change the resolution of an output, you have to create a new capture session with the new outputs you want.
If you want multiple SurfaceViews of the same size, you can use the surface sharing APIs added in API level 26 and later in OutputConfiguration (https://developer.android.com/reference/android/hardware/camera2/params/OutputConfiguration).
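A sketch of that path (API 26+, Java; previewSurface1/previewSurface2 are placeholders, the other fields are modeled on your snippet):

// Mark the first preview output as shareable and register it together with the ImageReader.
OutputConfiguration sharedConfig = new OutputConfiguration(previewSurface1);
sharedConfig.enableSurfaceSharing();
List<OutputConfiguration> outputs = Arrays.asList(
        sharedConfig,
        new OutputConfiguration(mImageReader.getSurface()));
mCameraDevice.createCaptureSessionByOutputConfigurations(
        outputs, mSessionCallback, mBackgroundHandler);

// Later, once the new SurfaceView (same size) exists:
sharedConfig.addSurface(previewSurface2);
mCaptureSession.updateOutputConfiguration(sharedConfig);
// previewSurface2 can now be added as a target to new CaptureRequests.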
If that's not sufficient, the other option is to connect the camera to a SurfaceTexture with the maximum SurfaceView resolution you might want, and then render lower resolution outputs from that via OpenGL, creating EGL windows for each new SurfaceView you want to draw to. That's a lot of code needed to set up the EGL context and rendering, but should be fairly efficient.
I would like to access the YUV images of a video, and passing an ImageReader surface to a MediaCodec (as referred to in the documentation) looked like a really smart way to do it. However, I can't make sense of the data inside the Image instance supplied by the onImageAvailable callback. Just looking at its Y plane, it seems to be mostly 0 values, no matter what video I provide.
I read some #fadden comments, which look a bit old by now, saying that the ImageReader surface was not yet usable with MediaCodec; is this still the case? Has anyone succeeded in implementing MediaCodec decoding to an ImageReader surface?
To illustrate, I was hoping for the Y plane to look like this:
but instead it comes out like this:
Thanks for any pointers.
I ran into the same problem. In my case it was because I did not configure the MediaCodec to output the same image format as the ImageReader.
int imageFormat = ImageFormat.YUV_420_888;
mImageReader = ImageReader.newInstance(
        videoFormat.getInteger(MediaFormat.KEY_WIDTH),
        videoFormat.getInteger(MediaFormat.KEY_HEIGHT),
        imageFormat,
        3);
// I added the following line, which asks the decoder for flexible YUV420 output:
videoFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
// Then initialize the MediaCodec with videoFormat.
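For the last step, a sketch of wiring the decoder to the ImageReader (variable names follow the snippet above; error handling omitted):

MediaCodec decoder = MediaCodec.createDecoderByType(videoFormat.getString(MediaFormat.KEY_MIME));
decoder.configure(videoFormat, mImageReader.getSurface(), null, 0);
decoder.start();
// Output buffers released with releaseOutputBuffer(index, true) are rendered to the
// ImageReader's surface, which then delivers them through onImageAvailable() as
// YUV_420_888 Images.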
I am trying to apply face detection on camera preview frames. I am using OpenGL and OpenCV to process these camera frames at run-time.
@Override
public void onDrawFrame(GL10 unused) {
    if (VERBOSE) {
        Log.d(TAG, "onDrawFrame tex=" + mTextureId);
    }
    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);
    // TODO: need to implement
    //JniCppManager.processFrame();
    drawFrame(mTextureId, mSTMatrix);
}
I am trying to implement processFrame() in C++. How can I get a Mat object in C++ from the transformation matrix? Could anyone give me some pointers toward a solution?
Your pipeline is currently:
Camera (produces frame)
SurfaceTexture (receives frame, converts to GLES "external" texture)
[missing stuff]
Array of RGB bytes passed to C++
What you need to do for [missing stuff] is render the pixels to an off-screen pbuffer and read them back with glReadPixels(). You can do this from code written in Java or native; for the former you'd want to read them into a "direct" ByteBuffer so you can easily access the pixels from native code. The EGL context used by GLES is held in thread-local storage, so the native code running on the GLSurfaceView render thread will be able to access it.
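A sketch of that readback step, assuming the external texture has already been drawn into the current off-screen EGL surface of size mWidth x mHeight (the processFrame() signature here is made up; it's just the question's JNI hook):

ByteBuffer pixelBuf = ByteBuffer.allocateDirect(mWidth * mHeight * 4);
pixelBuf.order(ByteOrder.LITTLE_ENDIAN);
GLES20.glReadPixels(0, 0, mWidth, mHeight,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);
pixelBuf.rewind();
// Hand the direct buffer to native code; the C++ side can wrap the same memory without
// a copy, e.g. cv::Mat rgba(height, width, CV_8UC4, env->GetDirectBufferAddress(buf));
JniCppManager.processFrame(pixelBuf, mWidth, mHeight);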
An example of this can be found in the bigflake ExtractMpegFramesTest, which differs primarily in that it's grabbing frames from a video rather than a Camera.
For API 19+, if you can process frames in YV12 or NV21 rather than RGB, you can feed the Camera to an ImageReader and get access to the data without having to copy/convert it.
I am reading the Android Camera2 API sample code from here:
https://github.com/googlesamples/android-Camera2Basic
And these lines are confusing to me:
https://github.com/googlesamples/android-Camera2Basic/blob/master/Application/src/main/java/com/example/android/camera2basic/Camera2BasicFragment.java#L570-L574
The previewRequest builder adds only the surface of the TextureView used for display as a target, but the line above registers both surfaces with the session. As I understand it, this should not fire the OnImageAvailableListener during preview, no? So why is the ImageReader's surface added here?
I tried removing the ImageReader's surface here, but then I got an error when I actually wanted to capture an image.
SOOO CONFUSING!!!
You need to declare all output Surfaces that image data might be sent to at the time you create a CameraCaptureSession. This is just the way the framework is designed.
Whenever you create a CaptureRequest, you add a (list of) target output Surface(s). This is where the image data from the captured frame will go- it may be a Surface associated with a TextureView for displaying, or with an ImageReader for saving, or with an Allocation for processing, etc. (A Surface is really just a buffer which can take the data output by the camera. The type of object that buffer is associated with determines how you can access/work with the data.)
You don't have to send the data from each frame to all registered Surfaces, but it has to go to a subset of them. You can't add a Surface as a target to a CaptureRequest if it wasn't registered with the CameraCaptureSession when the session was created. Well, you can, but submitting that request to the session will cause a crash, so don't.
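In Camera2Basic terms the pattern looks roughly like this (a sketch; previewSurface and the callback/handler names are modeled on the sample):

// Register every Surface the session might ever write to.
mCameraDevice.createCaptureSession(
        Arrays.asList(previewSurface, mImageReader.getSurface()),
        mSessionCallback, mBackgroundHandler);

// The repeating preview request targets only the preview Surface...
CaptureRequest.Builder preview =
        mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
preview.addTarget(previewSurface);
mCaptureSession.setRepeatingRequest(preview.build(), null, mBackgroundHandler);

// ...so the ImageReader's OnImageAvailableListener stays quiet during preview.
// It only fires when a still-capture request actually targets the ImageReader.
CaptureRequest.Builder still =
        mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
still.addTarget(mImageReader.getSurface());
mCaptureSession.capture(still.build(), mCaptureCallback, mBackgroundHandler);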
I would like to take "instant" screenshots of a Presentation object in Android. My Presentation normally renders to a virtual display (PRIVATE) backed by the surface from a MediaRecorder that is set up to record video. Recording the presentation as a video works great.
mMediaRecorder = new MediaRecorder();
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mMediaRecorder.setOutputFile(getScratchFile().getAbsolutePath());
mMediaRecorder.setVideoEncodingBitRate(bitrateInBitsPerSecond);
mMediaRecorder.setVideoFrameRate(30);
mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mVirtualDisplay = mDisplayManager.createVirtualDisplay(
        DISPLAY_NAME,                 // string name, required
        mVideoSize.getWidth(),
        mVideoSize.getHeight(),
        160,                          // screen densityDpi, not sure what it means in this context
        mMediaRecorder.getSurface(),  // the media recorder must already be prepare()'d
        DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY); // only we can use it
mPresentation = new MyPresentation(getActivity(), mVirtualDisplay.getDisplay());
How can I get a screenshot of the mMediaRecorder.getSurface() at any time, including when the mMediaRecorder is setup but not recording?
I've tried a number of methods based on view.getDrawingCache() on the Presentation's root view object, and I only get clear/black output. The presentation itself contains TextureView objects, which I'm guessing are messing up this strategy.
I also tried using an ImageReader with the DisplayPreview of the mMediaRecorder, but it never receives any images in the callback:
mMediaRecorder.setDisplayPreview(mImageReader.getSurface());
I'd really like some way to mirror the Surface that backs the presentation into an ImageReader and use that as a consumer; I just can't see how to "mirror" one surface as a producer into another "consumer" class. It seems like there should be an easy way to do this with SurfaceFlinger.
Surfaces are the producer end of a producer-consumer data structure. A producer can't pull data back out of the pipe, so attempting to read frames back from a Surface isn't possible.
When feeding MediaCodec or MediaRecorder, the consumer end is in a different process (mediaserver) that manages the media hardware. For a SurfaceView, the consumer is in SurfaceFlinger. For a TextureView, both ends are in your app, which is why you can easily get a frame from TextureView (call getBitmap()).
To intercept the incoming data you'd need to have both producer and consumer in the same process. The SurfaceTexture class (also known as "GLConsumer") provides this -- it's a consumer that converts the frames it receives into GLES textures.
So the idea would be to create a SurfaceTexture, create a new Surface from that (you'll note that Surface's only public constructor takes a SurfaceTexture), and pass that Surface as the virtual display output Surface. Then as frames come in you "forward" them to the MediaRecorder by rendering the textures with OpenGL ES.
This is not entirely straightforward, especially if you haven't worked with OpenGL ES before. Various examples can be found in Grafika (e.g. "texture from camera" and "record GL app").
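As a very rough sketch of the interception (all EGL/GLES setup omitted; mTextureId, mRenderHandler and MSG_FRAME_AVAILABLE are placeholders, the rest reuses the names from the question):

// A GLES "external" texture created on the render thread backs the intercepting SurfaceTexture.
mSurfaceTexture = new SurfaceTexture(mTextureId);
mSurfaceTexture.setDefaultBufferSize(mVideoSize.getWidth(), mVideoSize.getHeight());
Surface interceptSurface = new Surface(mSurfaceTexture);

// The virtual display now feeds the intercepting Surface instead of the recorder's Surface.
mVirtualDisplay = mDisplayManager.createVirtualDisplay(
        DISPLAY_NAME, mVideoSize.getWidth(), mVideoSize.getHeight(), 160,
        interceptSurface, DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY);

mSurfaceTexture.setOnFrameAvailableListener(st -> {
    // Wake the render thread. It calls updateTexImage(), draws the texture into the EGL
    // window surface wrapping mMediaRecorder.getSurface() so recording keeps working, and,
    // when a screenshot is requested, also draws it off-screen and reads it back with
    // glReadPixels().
    mRenderHandler.sendEmptyMessage(MSG_FRAME_AVAILABLE);
});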
I don't know if setDisplayPreview() is expected to work for anything other than Camera.