Hello, I have a problem with the Camera2 API. I am using the front camera and a SurfaceView to display the preview.
When I get the list of sizes from StreamConfigurationMap, I get this list:
0 = {Size@5139} "1600x1200"
1 = {Size@5152} "1280x720"
2 = {Size@5153} "960x720"
3 = {Size@5154} "720x480"
4 = {Size@5155} "640x480"
5 = {Size@5156} "480x320"
6 = {Size@5157} "320x240"
7 = {Size@5158} "176x144"
Then I choose a preferred size to use as the preview size, like this:
mTexturePreviewSize = getPreferredSize(sizeList, width, height);
The preferred size chosen was 1600x1200.
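For context, getPreferredSize is just a small helper of mine, roughly this shape (a sketch; the exact tie-breaking logic isn't important here):
private Size getPreferredSize(List<Size> sizes, int viewWidth, int viewHeight) {
    // prefer a size whose aspect ratio matches the view; the list is
    // sorted largest-first, so the first match is the biggest one
    for (Size size : sizes) {
        if (size.getWidth() * viewHeight == size.getHeight() * viewWidth) {
            return size;
        }
    }
    // otherwise fall back to the largest size offered
    return sizes.get(0);
}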
When calling createCaptureSession:
mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mCaptureRequestBuilder.addTarget(surface);
mCameraDevice.createCaptureSession(Arrays.asList(surface, mReader.getSurface()), new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(CameraCaptureSession session) {
if (mCameraDevice == null) return;
try {
mCaptureRequest = mCaptureRequestBuilder.build();
mCameraSession = session;
mCameraSession.setRepeatingRequest(mCaptureRequest, mCameraSessionCallback, mBackgroundHandler);
} catch (CameraAccessException ex) {
ex.printStackTrace();
}
}

@Override
public void onConfigureFailed(CameraCaptureSession session) {
    // this is the callback that fires
}
}, mBackgroundHandler);
I get onConfigureFailed. I thought any of the sizes in the list would allow me to configure the camera correctly.
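One thing worth checking here (a guess, not a confirmed fix): with a SurfaceView, the session is configured against the holder's current buffer size, so that buffer size must itself be one of the supported output sizes. It can be pinned explicitly before creating the session; mSurfaceView below stands for the preview SurfaceView:
// pin the SurfaceView's buffer to the size chosen above
mSurfaceView.getHolder().setFixedSize(
        mTexturePreviewSize.getWidth(), mTexturePreviewSize.getHeight());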
I am trying to access the frames of the preview on Android using this Library, and then pass the FastJavaByteArray to ZXing's Decode() method to scan for a barcode in a specific area of the preview. The preview works fine, and when I use the normal Preview.SetPreviewCallback(this) it works: OnPreviewFrame(byte[], Camera) gets called. It's only when I use Preview.SetNonMarshalingPreviewCallback(this) that the method isn't called. I'm not too sure why this is happening, and I wanted to use this so I could scan for a barcode frame by frame with ZXing. I've attached my code below.
public void Open() {
if (!closed) return;
try {
Preview = Camera.Open();
}
catch (Exception e) {
Console.WriteLine(e);
}
var parameters = Preview.GetParameters();
int numBytes = (parameters.PreviewSize.Width * parameters.PreviewSize.Height * Android.Graphics.ImageFormat.GetBitsPerPixel(parameters.PreviewFormat)) / 8;
using (FastJavaByteArray buffer = new FastJavaByteArray(numBytes))
Preview.AddCallbackBuffer(new FastJavaByteArray(numBytes));
var options = new ZXing.Mobile.MobileBarcodeScanningOptions();
options.PossibleFormats.Add(BarcodeFormat.QR_CODE);
barcodeReader = options.BuildBarcodeReader();
Preview.SetNonMarshalingPreviewCallback(this);
//Preview.SetPreviewCallback(this);
Handler handler = new Handler();
Action loop = null;
loop = () =>
{
if (!closed)
{
AutoFocusLoop();
handler.PostDelayed(loop, (long)(1000 * AF_DELAY));
}
};
handler.Post(loop);
closed = false;
}
public void OnPreviewFrame(IntPtr data, Camera camera)
{
throw new NotImplementedException();
}
I'm trying to send Android Camera2 output to both a preview Surface and a surface obtained from MediaCodec.createInputSurface(). However, when I pass those surfaces to a call to CameraDevice.createCaptureSession and then try to build a CaptureRequest, I get:
android.hardware.camera2.CameraAccessException: CAMERA_ERROR (3): submitRequestList - cannot use a surface that wasn't configured.
The CaptureRequest building logic (see below) is from an official Flutter camera plugin and works fine when you use MediaRecorder.getSurface() instead of MediaCodec.createInputSurface(), which suggests that the MediaCodec surface hasn't been configured. I'm using a VideoEncoder class from well-tried open-source RTMP code (https://github.com/pedroSG94/rtmp-rtsp-stream-client-java) that works with the old Camera API (i.e. not Camera2). That class initialises the codec as follows:
String type = CodecUtil.H264_MIME;
MediaCodecInfo encoder = chooseEncoder(type);
try {
if (encoder != null) {
codec = MediaCodec.createByCodecName(encoder.getName());
} else {
Log.e(TAG, "Valid encoder not found");
return false;
}
MediaFormat videoFormat;
// if you don't use MediaCodec rotation, you need to swap width and height
// for rotation 90 or 270 to get the correct encoding resolution
String resolution;
if (!hardwareRotation && (rotation == 90 || rotation == 270)) {
resolution = height + "x" + width;
videoFormat = MediaFormat.createVideoFormat(type, height, width);
} else {
resolution = width + "x" + height;
videoFormat = MediaFormat.createVideoFormat(type, width, height);
}
Log.i(TAG, "Prepare video info: " + this.formatVideoEncoder.name() + ", " + resolution);
videoFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
this.formatVideoEncoder.getFormatCodec());
videoFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
videoFormat.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
videoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, fps);
videoFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, iFrameInterval);
if (hardwareRotation) {
videoFormat.setInteger("rotation-degrees", rotation);
}
if (this.avcProfile > 0 && this.avcProfileLevel > 0) {
// MediaFormat.KEY_PROFILE, API > 21
videoFormat.setInteger("profile", this.avcProfile);
// MediaFormat.KEY_LEVEL, API > 23
videoFormat.setInteger("level", this.avcProfileLevel);
}
codec.configure(videoFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
inputSurface = codec.createInputSurface();
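One ordering detail in that initialisation matters: createInputSurface() is only valid after configure() and before start(). So the continuation is, roughly (assuming the codec field above):
// the codec is configured and its input surface created; start it before
// handing inputSurface to the capture session as an output
codec.start();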
The code that fails when I try to build a capture request is in CameraCaptureSession.StateCallback.onConfigured, where the call to build() raises the exception:
createCaptureSession(CameraDevice.TEMPLATE_RECORD, successCallback, surfaceFromMediaCodec);
private void createCaptureSession(
int templateType, Runnable onSuccessCallback, Surface... surfaces)
throws CameraAccessException {
// Close any existing capture session.
closeCaptureSession();
// Create a new capture builder.
captureRequestBuilder = cameraDevice.createCaptureRequest(templateType);
// Build Flutter surface to render to
SurfaceTexture surfaceTexture = flutterTexture.surfaceTexture();
surfaceTexture.setDefaultBufferSize(previewSize.getWidth(), previewSize.getHeight());
Surface flutterSurface = new Surface(surfaceTexture);
captureRequestBuilder.addTarget(flutterSurface);
List<Surface> remainingSurfaces = Arrays.asList(surfaces);
if (templateType != CameraDevice.TEMPLATE_PREVIEW) {
// If it is not preview mode, add all surfaces as targets.
for (Surface surface : remainingSurfaces) {
captureRequestBuilder.addTarget(surface);
}
}
// Prepare the callback
CameraCaptureSession.StateCallback callback =
new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(@NonNull CameraCaptureSession session) {
try {
if (cameraDevice == null) {
dartMessenger.send(
DartMessenger.EventType.ERROR, "The camera was closed during configuration.");
return;
}
cameraCaptureSession = session;
captureRequestBuilder.set(
CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO);
cameraCaptureSession.setRepeatingRequest(captureRequestBuilder.build(), null, null);
if (onSuccessCallback != null) {
onSuccessCallback.run();
}
} catch (CameraAccessException | IllegalStateException | IllegalArgumentException e) {
Log.i( TAG, "exception building capture session " + e );
dartMessenger.send(DartMessenger.EventType.ERROR, e.getMessage());
}
}
@Override
public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
dartMessenger.send(
DartMessenger.EventType.ERROR, "Failed to configure camera session.");
}
};
// Collect all surfaces we want to render to.
List<Surface> surfaceList = new ArrayList<>();
surfaceList.add(flutterSurface);
surfaceList.addAll(remainingSurfaces);
// Start the session
cameraDevice.createCaptureSession(surfaceList, callback, null);
}
If I remove the MediaCodec inputSurface as a build target, it works (but I don't capture anything into the MediaCodec). What am I missing? BTW, there are bits of Flutter code in the second code extract, but there's no evidence that the Flutter embedding is relevant.
Answering my own question: I was thrown off the scent by the misleading exception message "cannot use a surface that wasn't configured". The surface was configured. And I thought I'd checked the sizes, but one surface was 720x480 and the other was 480x720. It worked after I swapped the encoder's width and height.
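In code terms the fix amounts to creating the encoder format with the same (landscape-oriented) dimensions as the camera stream, rather than the swapped ones. A sketch, where previewSize stands for the camera output size:
// camera2 output sizes are landscape-oriented (e.g. 720x480), so create
// the encoder format with the same orientation instead of the swapped one
MediaFormat videoFormat = MediaFormat.createVideoFormat(
        CodecUtil.H264_MIME, previewSize.getWidth(), previewSize.getHeight());
codec.configure(videoFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
inputSurface = codec.createInputSurface();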
I have a camera app developed against Android SDK 26. I have been using it happily on a Motorola G5 and G6, but when I move to a Motorola G7 the app crashes when I press the button to take a picture.
The G7 is running Android 9. I have another Android 9 phone, a Samsung S10 Plus, and the S10 Plus does not crash when I press the button to take a picture.
While debugging I noticed that the G7 doesn't call ImageReader.OnImageAvailableListener, while the S10 does. Looking at the code, this is where the image is saved for use later on in CameraCaptureSession.CaptureCallback. The callback expects bytes to be populated and crashes when it isn't (I haven't included the stack trace because it's not very helpful, but I can if you'd like to see it).
On some occasions I can get the G7 to save the image if I step through slowly in the debugger.
So I have a button that calls the function onImageCaptureClick(). Inside, it does a bunch of stuff, but one of the things it does is create an ImageReader.OnImageAvailableListener. The OnImageAvailableListener saves the image and populates a class variable bytes from the image buffer. The listener is attached to my reader using reader.setOnImageAvailableListener(readerListener, null), but it is never invoked. When I get into the CaptureCallback, the class variable bytes is not populated and the app crashes.
Do you have any idea where I would look to solve this?
protected void onImageCaptureClick() {
if (null == mCameraDevice) {
logger.debug("null == mCameraDevice");
Log.e(TAG, "cameraDevice is null");
return;
}
CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
try {
CameraCharacteristics characteristics = manager.getCameraCharacteristics(mCameraDevice.getId());
Size[] jpegSizes = null;
if (characteristics != null) {
jpegSizes = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputSizes(ImageFormat.JPEG);
}
int width = 640;
int height = 480;
if (jpegSizes != null && 0 < jpegSizes.length) {
width = jpegSizes[0].getWidth();
height = jpegSizes[0].getHeight();
}
ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 1);
List<Surface> outputSurfaces = new ArrayList<>(2);
outputSurfaces.add(reader.getSurface());
outputSurfaces.add(new Surface(mTextureView.getSurfaceTexture()));
final CaptureRequest.Builder captureBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureBuilder.addTarget(reader.getSurface());
if (mFlashMode == FLASH_MODE_OFF) {
captureBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF);
logger.debug("FLASH OFF");
}
if (mFlashMode == CONTROL_AE_MODE_ON) {
captureBuilder.set(CaptureRequest.CONTROL_AE_MODE,
CaptureRequest.CONTROL_AE_MODE_ON);
captureBuilder.set(CaptureRequest.FLASH_MODE,
CaptureRequest.FLASH_MODE_TORCH);
logger.debug("FLASH ON");
}
if (mFlashMode == CONTROL_AE_MODE_ON_AUTO_FLASH) {
captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
captureBuilder.set(CaptureRequest.FLASH_MODE,
CaptureRequest.FLASH_MODE_OFF);
logger.debug("FLASH AUTO");
}
captureBuilder.set(CaptureRequest.SCALER_CROP_REGION, zoom);
int rotation = getWindowManager().getDefaultDisplay().getRotation();
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));
final File file = new File(_pictureUri.getPath());
logger.debug("OnImageCaptureClick: _pictureUri is: " + _pictureUri.getPath());
// ************************************
// this listener is not used on the G7,
// and so the image isn't saved.
// ************************************
ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
Image image = null;
try {
image = reader.acquireLatestImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
bytes = new byte[buffer.capacity()];
buffer.get(bytes);
logger.debug("onImageCaptureClick, the filesize to save is: " + bytes.toString());
save();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} finally {
if (image != null) {
image.close();
}
}
}
private void save() throws IOException {
OutputStream output = null;
try {
output = new FileOutputStream(file);
output.write(bytes);
} finally {
if (null != output) {
output.close();
}
}
}
};
// ********************************************************
// the reader sets the listener here but it is never called
// and when I get in to the CaptureCallback the BitmapUtils
// expects bytes to be populated and crashes the app
// ********************************************************
reader.setOnImageAvailableListener(readerListener, null);
final CameraCaptureSession.CaptureCallback captureListener = new CameraCaptureSession.CaptureCallback() {
@Override
public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
super.onCaptureCompleted(session, request, result);
try {
BitmapUtils.addTimeStampAndRotate(_pictureUri, bytes);
Intent intent = new Intent(CameraActivity.this, CameraReviewPhotoActivity.class);
intent.putExtra(MediaStore.EXTRA_OUTPUT, _pictureUri);
startActivityForResult(intent, CameraActivity.kRequest_Code_Approve_Image);
} catch (IOException e) {
e.printStackTrace();
} catch (ImageReadException e) {
e.printStackTrace();
} catch (ImageWriteException e) {
e.printStackTrace();
}
}
};
mCameraDevice.createCaptureSession(outputSurfaces, new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(CameraCaptureSession session) {
try {
session.capture(captureBuilder.build(), captureListener, null);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
@Override
public void onConfigureFailed(CameraCaptureSession session) {
Log.w(TAG, "Failed to configure camera");
}
}, null);
} catch (CameraAccessException e) {
e.printStackTrace();
} finally {
takePictureButton.setEnabled(false);
mTextureView.setEnabled(false);
}
}
The API makes no guarantee about the order of onCaptureCompleted and OnImageAvailableListener. They may arrive in arbitrary order, depending on the device, capture settings, the load on the device, or even the particular OS build you have.
Please don't make any assumptions about it.
Instead, if you need both callbacks to fire before you process something, then wait for both to happen before you move forward. For example, check if the other callback has fired in each callback, and if so, call the method to do the processing.
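A minimal sketch of that pattern (the field and method names here are mine, not from the question's code):
// import java.util.concurrent.atomic.AtomicBoolean;
private final AtomicBoolean mImageSaved = new AtomicBoolean(false);
private final AtomicBoolean mCaptureCompleted = new AtomicBoolean(false);

// call at the end of onImageAvailable(), once `bytes` is populated
private void onImageSideDone() {
    mImageSaved.set(true);
    maybeFinish();
}

// call at the end of onCaptureCompleted()
private void onCaptureSideDone() {
    mCaptureCompleted.set(true);
    maybeFinish();
}

private synchronized void maybeFinish() {
    if (mImageSaved.get() && mCaptureCompleted.get()) {
        mImageSaved.set(false);
        mCaptureCompleted.set(false);
        // both callbacks have fired, so it is now safe to use `bytes`,
        // e.g. BitmapUtils.addTimeStampAndRotate(_pictureUri, bytes)
    }
}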
I think I have found the solution to this.
I have the following phones:
Samsung S10 Plus
Motorola G7
Motorola G6
My app works on the S10 and the G6.
The S10 and G6 both call the OnImageAvailableListener function before the onCaptureCompleted callback. The G7, however, calls them the other way around: onCaptureCompleted, then OnImageAvailableListener.
According to https://proandroiddev.com/understanding-camera2-api-from-callbacks-part-1-5d348de65950 the correct way is onCaptureCompleted then OnImageAvailableListener.
In my code I am assuming that OnImageAvailableListener has already saved the image by the time onCaptureCompleted tries to manipulate it; on the G7 that assumption fails, which causes the crash.
Looking at the INFO_SUPPORTED_HARDWARE_LEVEL of each device, I get the following levels of support (the constants run from LIMITED = 0 up to LEVEL_3 = 3):
Samsung S10 Plus reports level 1 (FULL)
Motorola G7 reports level 3 (LEVEL_3)
Motorola G6 reports level 2 (LEGACY)
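For reference, the level can be read like this (assuming a CameraManager instance and a valid camera id):
CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);
Integer level = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
// constant values: 0 = LIMITED, 1 = FULL, 2 = LEGACY, 3 = LEVEL_3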
My assumption at this point is that the events fire in a different order when a device supports the camera2 API at level 3 compared to the other levels.
Hope this helps
I'm trying to connect my Android app to the OpenCV library, and I need to use the native camera to have more control over the camera options. To do this I found http://nezarobot.blogspot.it/2016/03/android-surfacetexture-camera2-opencv.html, which is what I need.
My problem is that when I use this code, with some small changes, my app crashes on launch with three errors reported:
E/BufferQueueProducer: [SurfaceTexture-0-31525-0] connect(P): already connected (cur=4 req=2)
D/PlateNumberDetection/DetectionBasedTracker: ANativeWindow_lock failed with error code -22
A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x315e9858 in tid 31735 (CameraBackgroun)
I have tried closing the camera before the JNI call, and then I can capture and show only the first frame; but after that I need to restart the camera, and the thread can't recreate itself.
Here is where I take the frame and send it to the NDK:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
= new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
Image image;
try {
image = reader.acquireLatestImage();
if( image == null) {
return;
}
if (image.getFormat() != ImageFormat.YUV_420_888) {
throw new IllegalArgumentException("image must have format YUV_420_888.");
}
Image.Plane[] planes = image.getPlanes();
if (planes[1].getPixelStride() != 1 && planes[1].getPixelStride() != 2) {
throw new IllegalArgumentException(
"src chroma plane must have a pixel stride of 1 or 2: got "
+ planes[1].getPixelStride());
}
mNativeDetector.detect(image.getWidth(), image.getHeight(), planes[0].getBuffer(), surface);
} catch (IllegalStateException e) {
Log.e(TAG, "Too many images queued for saving, dropping image for request: ", e);
return;
}
image.close();
}
};
And here is where I manage the camera preview:
protected void createCameraPreview() {
try {
SurfaceTexture texture = textureView.getSurfaceTexture();
assert texture != null;
texture.setDefaultBufferSize(imageDimension.getWidth(), imageDimension.getHeight());
surface = new Surface(texture);
captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
captureRequestBuilder.addTarget(mImageReader.get().getSurface());
BlockingSessionCallback sessionCallback = new BlockingSessionCallback();
List<Surface> outputSurfaces = new ArrayList<>();
outputSurfaces.add(mImageReader.get().getSurface());
outputSurfaces.add(new Surface(textureView.getSurfaceTexture()));
cameraDevice.createCaptureSession(outputSurfaces, sessionCallback, mBackgroundHandler);
try {
Log.d(TAG, "waiting on session.");
cameraCaptureSessions = sessionCallback.waitAndGetSession(SESSION_WAIT_TIMEOUT_MS);
try {
captureRequestBuilder.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_AUTO);
Log.d(TAG, "setting repeating request");
cameraCaptureSessions.setRepeatingRequest(captureRequestBuilder.build(),
mCaptureCallback, mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
} catch (TimeoutRuntimeException e) {
Toast.makeText(AydaMainActivity.this, "Failed to configure capture session.", Toast.LENGTH_SHORT).show();
}
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
Have you tried that code without your "some small changes" first? I have tried that project and it worked well on multiple devices, so it would be useful to first establish whether it doesn't work on your phone at all, or whether there is a problem in your modifications.
I am working with camera2Basic now, trying to get each frame's data to do some image processing. I am using the camera2 API on Android 5.0, and everything is fine when only doing the camera preview; it is fluid. But the preview stutters when I use the ImageReader.OnImageAvailableListener callback to get each frame's data, which causes a bad user experience.
The following is my related codes:
This is the setup for the camera and the ImageReader; I set the image format to YUV_420_888:
public <T> Size setUpCameraOutputs(CameraManager cameraManager, Class<T> kClass, int width, int height) {
boolean flagSuccess = true;
try {
for (String cameraId : cameraManager.getCameraIdList()) {
CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
// choose the front or back camera
if (FLAG_CAMERA.BACK_CAMERA == mChosenCamera &&
CameraCharacteristics.LENS_FACING_BACK != characteristics.get(CameraCharacteristics.LENS_FACING)) {
continue;
}
if (FLAG_CAMERA.FRONT_CAMERA == mChosenCamera &&
CameraCharacteristics.LENS_FACING_FRONT != characteristics.get(CameraCharacteristics.LENS_FACING)) {
continue;
}
StreamConfigurationMap map = characteristics.get(
CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size largestSize = Collections.max(
Arrays.asList(map.getOutputSizes(ImageFormat.YUV_420_888)),
new CompareSizesByArea());
mImageReader = ImageReader.newInstance(largestSize.getWidth(), largestSize.getHeight(),
ImageFormat.YUV_420_888, 3);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
...
mCameraId = cameraId;
}
} catch (CameraAccessException e) {
e.printStackTrace();
} catch (NullPointerException e) {
}
...
}
When the camera has opened successfully, I create a CameraCaptureSession for the camera preview:
private void createCameraPreviewSession() {
if (null == mTexture) {
return;
}
// We configure the size of default buffer to be the size of camera preview we want.
mTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
// This is the output Surface we need to start preview
Surface surface = new Surface(mTexture);
// We set up a CaptureRequest.Builder with the output Surface.
try {
mPreviewRequestBuilder =
mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
mPreviewRequestBuilder.addTarget(surface);
// We create a CameraCaptureSession for camera preview
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(CameraCaptureSession session) {
if (null == mCameraDevice) {
return;
}
// when the session is ready, we start displaying the preview
mCaptureSession = session;
// Finally, we start displaying the camera preview
mPreviewRequest = mPreviewRequestBuilder.build();
try {
mCaptureSession.setRepeatingRequest(mPreviewRequest,
mCaptureCallback, mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
@Override
public void onConfigureFailed(CameraCaptureSession session) {
}
}, null);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
Last is the ImageReader.OnImageAvailableListener callback:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
Log.d(TAG, "The onImageAvailable thread id: " + Thread.currentThread().getId());
Image readImage = reader.acquireLatestImage();
// acquireLatestImage() may return null if no new frame is ready
if (readImage != null) {
    readImage.close();
}
}
};
Maybe I've done the setup wrong, but I have tried several times and it doesn't work. Maybe there is another way to get frame data rather than ImageReader, but I don't know it.
Does anybody know how to get each frame's data in real time?
I do not believe that Chen is correct. The image format has almost no effect on the speed on the devices I have tested. Instead, the problem seems to be the image size. On an Xperia Z3 Compact with the image format YUV_420_888, I am offered a bunch of different options in the StreamConfigurationMap's getOutputSizes method:
[1600x1200, 1280x720, 960x720, 720x480, 640x480, 480x320, 320x240, 176x144]
For these respective sizes, the maximum fps I get when setting mImageReader.getSurface() as a target for mPreviewRequestBuilder is:
[13, 18, 25, 28, 30, 30, 30, 30 ]
So one solution is to use a lower resolution to achieve the rate you want. For the curious... note that these timings do not seem to be affected by the lines
mPreviewRequestBuilder.addTarget(surface);
...
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
I was worried that adding the surface on the screen might be adding overhead, but if I remove that first line and change the second to
mCameraDevice.createCaptureSession(Arrays.asList(mImageReader.getSurface()),
then I see the timings change by less than 1 fps. So it doesn't seem to matter whether you are also displaying the image on the screen.
I think there is simply some overhead in the camera2 API or ImageReader's framework that makes it impossible to get the full rate that the TextureView is clearly getting.
One of the most disappointing things of all is that, if you switch back to the deprecated Camera API, you can easily get 30 fps by setting up a PreviewCallback via the Camera.setPreviewCallbackWithBuffer method. With that method, I am able to get 30 fps regardless of the resolution. Specifically, although it does not offer me 1600x1200 directly, it does offer 1920x1080, and even that runs at 30 fps.
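For reference, a minimal sketch of that deprecated-API path (assuming an open Camera instance whose preview size and format are already set):
Camera.Parameters params = camera.getParameters();
Camera.Size size = params.getPreviewSize();
int bufSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
// hand the camera one reusable buffer, then recycle it in the callback
camera.addCallbackBuffer(new byte[bufSize]);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        // process `data` here, then return the buffer to the queue
        cam.addCallbackBuffer(data);
    }
});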
I'm trying the same thing; I think you could change the format, like this:
mImageReader = ImageReader.newInstance(largestSize.getWidth(),
largestSize.getHeight(),
ImageFormat.FLEX_RGB_888, 3);
Using YUV may cause the CPU to compress the data, which can cost some time, whereas RGB can be displayed directly on the device. Also, as you probably know, detecting a face in the image should be done on another thread.