I need to take pictures continuously with Camera2 API. It works fine on high end devices (for instance a Nexus 5X), but on slower ones (for instance a Samsung Galaxy A3), the preview freezes.
The code is a bit long, so I post only the most relevant parts:
Method called to start my preview:
private void startPreview() {
    SurfaceTexture texture = mTextureView.getSurfaceTexture();
    if (texture != null) {
        try {
            // We configure the size of default buffer to be the size of camera preview we want.
            texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
            // This is the output Surface we need to start preview.
            Surface surface = new Surface(texture);
            // We set up a CaptureRequest.Builder with the output Surface.
            mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            mPreviewRequestBuilder.addTarget(surface);
            // Here, we create a CameraCaptureSession for camera preview.
            mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {
                        @Override
                        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                            // If the camera is already closed, return:
                            if (mCameraDevice == null) { return; }
                            // When the session is ready, we start displaying the preview.
                            mCaptureSession = cameraCaptureSession;
                            // Auto focus should be continuous for camera preview.
                            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                            mPreviewRequest = mPreviewRequestBuilder.build();
                            // Start the preview
                            try {
                                mCaptureSession.setRepeatingRequest(mPreviewRequest, null, mPreviewBackgroundHandler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }

                        @Override
                        public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
                            Log.e(TAG, "Configure failed");
                        }
                    }, null
            );
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
}
Method called to take a picture:
private void takePicture() {
    try {
        CaptureRequest.Builder captureBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        captureBuilder.addTarget(mImageReader.getSurface());
        mCaptureSession.capture(captureBuilder.build(), null, mCaptureBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
And here is my ImageReader's OnImageAvailableListener:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(final ImageReader reader) {
        mSaveBackgroundHandler.post(new Runnable() {
            @Override
            public void run() {
                // Set the destination file:
                File destination = new File(getExternalFilesDir(null), "image_" + mNumberOfImages + ".jpg");
                mNumberOfImages++;
                // Acquire the latest image (may be null if it was already acquired):
                Image image = reader.acquireLatestImage();
                if (image == null) { return; }
                // Save the image:
                ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                byte[] bytes = new byte[buffer.remaining()];
                buffer.get(bytes);
                FileOutputStream output = null;
                try {
                    output = new FileOutputStream(destination);
                    output.write(bytes);
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    image.close();
                    if (null != output) {
                        try {
                            output.close();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                }
                // Take a new picture if needed:
                if (mIsTakingPictures) {
                    takePicture();
                }
            }
        });
    }
};
I have a button that toggles the mIsTakingPictures boolean and makes the first takePicture call.
To recap, I'm using 3 threads:
one for the preview
one for the capture
one for the image saving
What could be causing this freeze?
It's impossible to avoid frame loss in your preview when you are taking pictures all the time on weak devices. The only way to avoid it is on devices which support TEMPLATE_ZERO_SHUTTER_LAG, using a ReprocessableCaptureSession. The documentation about this is pretty poor and finding a way to implement it can be an odyssey. I had this problem a few months ago and finally found a way to implement it:
How to use a reprocessCaptureRequest with camera2 API
In that answer you can also find some Google CTS tests which also implement a ReprocessableCaptureSession and shoot some burst captures with the ZSL template.
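Before going down that road, it's worth checking whether the device advertises a reprocessing capability at all. A minimal sketch (manager and cameraId are assumed to be your CameraManager and camera id):
CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
int[] caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
boolean canReprocess = false;
if (caps != null) {
    for (int cap : caps) {
        if (cap == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_PRIVATE_REPROCESSING
                || cap == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_YUV_REPROCESSING) {
            canReprocess = true;
        }
    }
}
Log.d(TAG, "Device supports reprocessing: " + canReprocess);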
Finally, you can also use a CaptureRequest.Builder with your preview surface and the image reader surface attached; in that case your preview will keep working the whole time and you will also save each frame as a new picture. But you will still have the freeze problem.
I also tried implementing a burst capture using a handler which dispatches a new capture call every 100 milliseconds. This second option was pretty good in performance and avoided frame rate loss, but you will not get as many captures per second as with the two-ImageReader option.
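For reference, a minimal sketch of that handler-driven approach, reusing the field names from the question above (illustrative, not drop-in):
private final Handler mBurstHandler = new Handler(Looper.getMainLooper());
private final Runnable mBurstRunnable = new Runnable() {
    @Override
    public void run() {
        if (!mIsTakingPictures) return;
        try {
            CaptureRequest.Builder builder =
                    mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            builder.addTarget(mImageReader.getSurface());
            mCaptureSession.capture(builder.build(), null, mCaptureBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
        // Dispatch the next capture in 100 ms:
        mBurstHandler.postDelayed(this, 100);
    }
};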
Hope my answer helps you a bit; the camera2 API is still a bit complex and there are not many examples or much information about it.
One thing I noticed on low-end devices: the preview stops after a capture, even when using the camera1 API, so it has to be restarted manually, which produces a small preview freeze when capturing a high resolution picture.
But the camera2 API provides the possibility to get an uncompressed (YUV) image when taking a still capture, which wasn't possible with camera1 on the devices I have (Huawei P7, Sony Xperia E5, Wiko U Feel). Using this format is much faster than capturing a JPEG (most likely because it skips JPEG compression), so the preview can be restarted earlier and the preview freeze is shorter. Of course, with this solution you'll have to convert the picture from YUV to JPEG in a background task.
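For illustration, a hedged sketch of that background conversion (it assumes the Y plane's rowStride equals the image width and that the chroma rows carry no padding, which holds on many but not all devices; uses android.graphics.YuvImage and Rect):
private byte[] yuv420ToJpeg(Image image, int quality) {
    int w = image.getWidth(), h = image.getHeight();
    ByteBuffer yBuf = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuf = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuf = image.getPlanes()[2].getBuffer();
    // NV21 layout expected by YuvImage: all of Y, then interleaved V/U pairs.
    byte[] nv21 = new byte[w * h * 3 / 2];
    yBuf.get(nv21, 0, w * h);
    int chromaStride = image.getPlanes()[2].getPixelStride(); // 1 or 2 in practice
    int pos = w * h;
    for (int i = 0; i < w * h / 4; i++) {
        nv21[pos++] = vBuf.get(i * chromaStride);
        nv21[pos++] = uBuf.get(i * chromaStride);
    }
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, w, h, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, w, h), quality, out);
    return out.toByteArray();
}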
Related
I am writing an Android app that supports saving RAW/JPEG and records videos at the same time. I tried passing 4 surfaces when creating the CameraCaptureSession: preview, two ImageReader surfaces (JPEG and RAW) and one PersistentInputSurface created by MediaCodec#createPersistentInputSurface. By using a persistent input surface, I intend to avoid a stoppage between 2 captures.
When creating the session it fails with:
W/CameraDevice-JV-0: Stream configuration failed due to: endConfigure:380: Camera 0: Unsupported set of inputs/outputs provided
Session 0: Failed to create capture session; configuration failed
I have tried taking out all other surfaces, leaving only the PersistentInputSurface, and it still fails.
@Override
public void onResume() {
    super.onResume();
    // Some other setups...
    if (persistentRecorderSurface == null) {
        persistentRecorderSurface = MediaCodec.createPersistentInputSurface();
    }
    startBackgroundThread();
    startCamera();
    if (mPreviewView.isAvailable()) {
        configureTransform(mPreviewView.getWidth(), mPreviewView.getHeight());
    } else {
        mPreviewView.setSurfaceTextureListener(mSurfaceTextureListener);
    }
    if (mOrientationListener != null && mOrientationListener.canDetectOrientation()) {
        mOrientationListener.enable();
    }
}
private void createCameraPreviewSessionLocked() {
    try {
        SurfaceTexture texture = mPreviewView.getSurfaceTexture();
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
        Surface surface = new Surface(texture);
        mPreviewRequestBuilder = mBackCameraDevice.createCaptureRequest(
                CameraDevice.TEMPLATE_PREVIEW);
        mPreviewRequestBuilder.addTarget(surface);
        mBackCameraDevice.createCaptureSession(Arrays.asList(
                surface,
                mJpegImageReader.get().getSurface(),
                mRAWImageReader.get().getSurface(),
                persistentRecorderSurface
        ), new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                synchronized (mCameraStateLock) {
                    if (mBackCameraDevice == null) {
                        return;
                    }
                    try {
                        setup3AControlsLocked(mPreviewRequestBuilder);
                        session.setRepeatingRequest(mPreviewRequestBuilder.build(),
                                mPreCaptureCallback, mBackgroundHandler);
                        mState = CameraStates.PREVIEW;
                    } catch (CameraAccessException | IllegalStateException e) {
                        e.printStackTrace();
                        return;
                    }
                    mSession = session;
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) {
                showToast("Failed to configure camera.");
            }
        }, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
It'd be helpful to see the system log lines right before that error line to confirm, but most likely:
You need to actually tie the persistentRecorderSurface to a MediaRecorder or MediaCodec, and call prepare() on those, before you create the camera capture session.
Otherwise, there's nothing actually at the other end of the persistent surface, and the camera can't tell what resolution or other settings are required.
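A hedged sketch of that wiring for the MediaRecorder case (settings and the outputPath variable are illustrative, not the asker's exact configuration):
MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoFrameRate(30);
recorder.setVideoSize(1920, 1080);
recorder.setVideoEncodingBitRate(10000000);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setOutputFile(outputPath);
// Tie the persistent surface to the encoder; must happen before prepare():
recorder.setInputSurface(persistentRecorderSurface);
recorder.prepare(); // only now is the surface fully configured
// ...then pass persistentRecorderSurface into createCaptureSession(...)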
Also keep in mind that there are limits on how many concurrent outputs you can have from the camera, depending on its supported hardware level and capabilities. There is currently no requirement that a device must support your combination of outputs (preview, record, JPEG, RAW), unfortunately, so it's very likely many or all devices will still give you an error.
I am trying to record video using a Vivo X20 (Android 7.1.1) and the camera2 API, without using a preview and without recording sound (strictly recording HD video only).
I'm currently stuck because I cannot figure out how to successfully call MediaRecorder.setVideoSize() and record a video in HD. Currently, when I run the app, the log shows the error:
Surface with size (w=1920, h=1080) and format 0x1 is not valid, size not in valid set: [1440x1080, 1280x960, 1280x720, 960x540, 800x480, 720x480, 768x432, 640x480, 384x288, 352x288, 320x240, 176x144]
The phone's stock camera app can record video up to 4K, so I'm definitely missing something here. There are a total of two camera devices identified by CameraManager. When I use getOutputFormats() from CameraCharacteristics it shows the same valid set of resolutions for both cameras, and it is the same range as in the above error message.
Below is the code I am using to initialize MediaRecorder and initiate a capture session:
public void StartRecordingVideo() {
    Initialize();
    recordingVideo = true;
    cameraManager = (CameraManager) this.getSystemService(Context.CAMERA_SERVICE);
    try {
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
            String[] cameraIDs = cameraManager.getCameraIdList();
            //LogAllCameraInfo();
            if (cameraIDs != null) {
                for (int x = 0; x < cameraIDs.length; x++) {
                    Log.d(LOG_ID, "ID: " + cameraIDs[x]);
                }
            }
            cameraManager.openCamera(deviceCameraID, cameraStateCallback, handler);
            Log.d(LOG_ID, "Successfully opened camera");
        } else {
            throw new IllegalAccessException();
        }
    } catch (Exception e) {
        recordingVideo = false;
        Log.e(LOG_ID, "Error during record video start: " + e.getMessage());
    }
}
private void Initialize() {
    videoRecordThread = new HandlerThread("video_capture");
    videoRecordThread.start();
    handler = new Handler(videoRecordThread.getLooper());
    try {
        vidRecorder = new MediaRecorder();
        vidRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        vidRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        vidRecorder.setVideoFrameRate(30);
        vidRecorder.setCaptureRate(30);
        vidRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT);
        vidRecorder.setVideoEncodingBitRate(10000000);
        vidRecorder.setVideoSize(1920, 1080);
        String videoFilename = Environment.getExternalStoragePublicDirectory(
                Environment.DIRECTORY_DOWNLOADS) + File.separator + System.currentTimeMillis() + ".mp4";
        vidRecorder.setOutputFile(videoFilename);
        Log.d(LOG_ID, "Starting video: " + videoFilename);
        vidRecorder.prepare();
    } catch (Exception e) {
        Log.e(LOG_ID, "Error during Initialize: " + e.getMessage());
    }
}
And the onReady/onSurfacePrepared/Camera onOpened callbacks:
@Override
public void onReady(CameraCaptureSession session) {
    Log.d(LOG_ID, "onReady: ");
    super.onReady(session);
    try {
        CaptureRequest.Builder builder = deviceCamera.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
        builder.addTarget(vidRecorder.getSurface());
        CaptureRequest request = builder.build();
        session.setRepeatingRequest(request, null, handler);
        vidRecorder.start();
    } catch (CameraAccessException e) {
        Log.d(LOG_ID, "Error on Ready: " + e.getMessage());
    }
}

@Override
public void onSurfacePrepared(CameraCaptureSession session, Surface surface) {
    Log.d(LOG_ID, "onSurfacePrepared: ");
    super.onSurfacePrepared(session, surface);
}

@Override
public void onOpened(CameraDevice camera) {
    Log.d(LOG_ID, "onOpened: ");
    deviceCamera = camera;
    try {
        camera.createCaptureSession(Arrays.asList(vidRecorder.getSurface()), recordSessionStateCallback, handler);
    } catch (CameraAccessException e) {
        Log.d(LOG_ID, "onOpened: " + e.getMessage());
    }
}
I've tried messing with the order of calls and the output format/encoder with no luck. I am sure that I have all the required permissions. Thanks in advance for your time!
This device most likely supports camera2 at the LEGACY level; check what the output of INFO_SUPPORTED_HARDWARE_LEVEL is.
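A quick way to check, as a sketch (manager and cameraId are assumed to be your CameraManager and camera id):
CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
Integer level = chars.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
if (level != null && level == CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
    Log.d(TAG, "camera2 runs at LEGACY level on this device");
}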
LEGACY devices are effectively running camera2 on top of the legacy android.hardware.Camera API (more complex than that, but roughly true); as a result, their capabilities via camera2 are restricted.
The maximum recording resolution is one significant problem; android.hardware.Camera records videos via a magic path that the LEGACY mapping layer cannot directly use (there's no Surface involved). As a result, camera2 LEGACY can only record at the maximum preview resolution supported by android.hardware.Camera, not at the maximum recording resolution.
It sounds like this device has no support for 1080p preview, which is pretty unusual for a device launched so recently.
You can verify whether the set of supported preview sizes in the deprecated Camera API matches the list you get in your error; if it doesn't, then there may be an OS bug in generating the list, so it'd be good to know.
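A sketch of that cross-check using the deprecated API:
android.hardware.Camera oldCamera = android.hardware.Camera.open(0); // back camera
for (android.hardware.Camera.Size s :
        oldCamera.getParameters().getSupportedPreviewSizes()) {
    Log.d(TAG, "old-API preview size: " + s.width + "x" + s.height);
}
oldCamera.release();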
But in general, you can't request sizes that aren't enumerated in the camera's StreamConfigurationMap (from CameraCharacteristics), no matter what the feature list on the side of the box says. Sometimes the OEM camera app has magic hooks to enable features that normal apps can't get to, often because the feature only works in some very specific set of circumstances which normal apps wouldn't know how to replicate.
I've worked on this for days now. I have an OpenCV/JavaCameraView-based project that I am trying to integrate with an android.hardware.camera2.CaptureRequest object, which I use to control the camera's sensitivity to light.
The CameraBridgeViewBase.CvCameraViewListener2/JavaCameraView project works (I can see an image on the screen), and the CaptureRequest project works (I can see this working, too), but combining the two technologies in a single project has proved a big problem.
Has anyone gotten these two technologies to work together? My goal is to have the camera under the control of the CaptureRequest (limited SENSOR_SENSITIVITY + SENSOR_EXPOSURE_TIME), but for the screen to show a streaming image filtered through an OpenCV shared library I've written.
To control the camera, I can use something like this:
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                // The camera is already closed
                if (null == mCameraDevice) {
                    return;
                }
                long exposureTime = 66259688, frameDuration = 1000;
                int sensitivity = 1512;
                mCaptureSession = cameraCaptureSession;
                try {
                    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                            CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                    setAutoFlash(mPreviewRequestBuilder);
                    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
                    mPreviewRequestBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, Long.valueOf(exposureTime));
                    mPreviewRequestBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, Integer.valueOf(sensitivity));
                    mPreviewRequestBuilder.set(CaptureRequest.SENSOR_FRAME_DURATION, Long.valueOf(frameDuration));
                    mPreviewRequest = mPreviewRequestBuilder.build();
                    mCaptureSession.setRepeatingRequest(mPreviewRequest,
                            mCaptureCallback, mBackgroundHandler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            // ... (rest of the StateCallback omitted)
To filter the image stream, I can use something like this:
public class MyCoolScanner extends AppCompatActivity implements CameraBridgeViewBase.CvCameraViewListener2 {
    ...

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        mRgba = inputFrame.rgba();
        OpenCVNative.myCoolLibrary(mRgba.getNativeObjAddr(), mGray.getNativeObjAddr());
        return mGray;
    }
Thanks in advance.
Please take a look at these implementations of OpenCV with Android; they may help you:
Camera calibration With OpenCV
Camera calibration - opencv 2.3.1 android
https://groups.google.com/forum/#!topic/android-opencv/xXtUvdA1E4M
https://fossies.org/dox/opencv-3.2.0/CameraCalibrationActivity_8java_source.html
I believe OpenCV does not natively support use with android.camera2
The answer is no: OpenCV will not work with android.camera2, not without some kind of convoluted workaround.
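If you do attempt the workaround, one route is to drop JavaCameraView, run your own camera2 pipeline with a YUV_420_888 ImageReader, and hand each frame to OpenCV yourself. A hedged sketch of the grayscale case (mCvListener is an illustrative name; it assumes the Y plane's rowStride equals the width, which is not guaranteed on every device):
private final ImageReader.OnImageAvailableListener mCvListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image == null) return;
        // Copy the luminance plane into a single-channel Mat:
        ByteBuffer yPlane = image.getPlanes()[0].getBuffer();
        byte[] yData = new byte[yPlane.remaining()];
        yPlane.get(yData);
        Mat gray = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC1);
        gray.put(0, 0, yData);
        image.close();
        // 'gray' can now go through the shared library, e.g.
        // OpenCVNative.myCoolLibrary(gray.getNativeObjAddr(), out.getNativeObjAddr());
    }
};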
I am working with camera2Basic now, trying to get each frame's data to do some image processing. I am using the camera2 API on Android 5.0. Everything is fine when only doing the camera preview, and it is fluid. But the preview stutters when I use the ImageReader.OnImageAvailableListener callback to get each frame's data, which causes a bad user experience.
The following is my related code:
This is the setup for the camera and ImageReader. I set the image format to YUV_420_888:
public <T> Size setUpCameraOutputs(CameraManager cameraManager, Class<T> kClass, int width, int height) {
    boolean flagSuccess = true;
    try {
        for (String cameraId : cameraManager.getCameraIdList()) {
            CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
            // choose the front or back camera
            if (FLAG_CAMERA.BACK_CAMERA == mChosenCamera &&
                    CameraCharacteristics.LENS_FACING_BACK != characteristics.get(CameraCharacteristics.LENS_FACING)) {
                continue;
            }
            if (FLAG_CAMERA.FRONT_CAMERA == mChosenCamera &&
                    CameraCharacteristics.LENS_FACING_FRONT != characteristics.get(CameraCharacteristics.LENS_FACING)) {
                continue;
            }
            StreamConfigurationMap map = characteristics.get(
                    CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            Size largestSize = Collections.max(
                    Arrays.asList(map.getOutputSizes(ImageFormat.YUV_420_888)),
                    new CompareSizesByArea());
            mImageReader = ImageReader.newInstance(largestSize.getWidth(), largestSize.getHeight(),
                    ImageFormat.YUV_420_888, 3);
            mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
            ...
            mCameraId = cameraId;
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    } catch (NullPointerException e) {
    }
    ...
}
When the camera has opened successfully, I create a CameraCaptureSession for the camera preview:
private void createCameraPreviewSession() {
    if (null == mTexture) {
        return;
    }
    // We configure the size of default buffer to be the size of camera preview we want.
    mTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
    // This is the output Surface we need to start preview
    Surface surface = new Surface(mTexture);
    // We set up a CaptureRequest.Builder with the output Surface.
    try {
        mPreviewRequestBuilder =
                mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
        mPreviewRequestBuilder.addTarget(surface);
        // We create a CameraCaptureSession for camera preview
        mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(CameraCaptureSession session) {
                        if (null == mCameraDevice) {
                            return;
                        }
                        // when the session is ready, we start displaying the preview
                        mCaptureSession = session;
                        // Finally, we start displaying the camera preview
                        mPreviewRequest = mPreviewRequestBuilder.build();
                        try {
                            mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                    mCaptureCallback, mBackgroundHandler);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(CameraCaptureSession session) {
                    }
                }, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
And last, the ImageReader.OnImageAvailableListener callback:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Log.d(TAG, "The onImageAvailable thread id: " + Thread.currentThread().getId());
        Image readImage = reader.acquireLatestImage();
        if (readImage != null) { // acquireLatestImage can return null
            readImage.close();
        }
    }
};
Maybe my setup is wrong, but I have tried several times and it doesn't work. Maybe there is another way to get frame data rather than ImageReader, but I don't know it.
Does anybody know how to get each frame's data in real time?
I do not believe that Chen is correct. The image format has almost no effect on speed on the devices I have tested. Instead, the problem seems to be the image size. On an Xperia Z3 Compact with the image format YUV_420_888, I am offered a bunch of different options in the StreamConfigurationMap's getOutputSizes method:
[1600x1200, 1280x720, 960x720, 720x480, 640x480, 480x320, 320x240, 176x144]
For these respective sizes, the maximum fps I get when setting mImageReader.getSurface() as a target for the mPreviewRequestBuilder are:
[13, 18, 25, 28, 30, 30, 30, 30 ]
So one solution is to use a lower resolution to achieve the rate you want. For the curious... note that these timings do not seem to be affected by the line
mPreviewRequestBuilder.addTarget(surface);
...
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
I was worried that adding the surface on the screen might be adding overhead, but if I remove that first line and change the second to
mCameraDevice.createCaptureSession(Arrays.asList(mImageReader.getSurface()),
then I see the timings change by less than 1 fps. So it doesn't seem to matter whether you are also displaying the image on the screen.
I think there is simply some overhead in the camera2 API or ImageReader's framework that makes it impossible to get the full rate that the TextureView is clearly getting.
One of the most disappointing things of all is that, if you switch back to the deprecated Camera API, you can easily get 30 fps by setting up a PreviewCallback via the Camera.setPreviewCallbackWithBuffer method. With that method, I am able to get 30 fps regardless of the resolution. Specifically, although it does not offer me 1600x1200 directly, it does offer 1920x1080, and even that runs at 30 fps.
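For comparison, a hedged sketch of that deprecated-API path (a preview display or dummy SurfaceTexture must also be set before startPreview(), omitted here; the 3/2 buffer size assumes NV21, the default preview format):
final android.hardware.Camera camera = android.hardware.Camera.open();
android.hardware.Camera.Size size = camera.getParameters().getPreviewSize();
camera.addCallbackBuffer(new byte[size.width * size.height * 3 / 2]);
camera.setPreviewCallbackWithBuffer(new android.hardware.Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, android.hardware.Camera cam) {
        // process 'data' (NV21) here, then hand the buffer back for reuse
        cam.addCallbackBuffer(data);
    }
});
camera.startPreview();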
I'm trying the same thing. I think you could change the format, like this:
mImageReader = ImageReader.newInstance(largestSize.getWidth(),
        largestSize.getHeight(),
        ImageFormat.FLEX_RGB_888, 3);
Using YUV may cause the CPU to compress the data, which can cost some time; RGB can be shown directly on the device. Also, detecting faces in the image should be done in another thread, as you probably know.
I'm trying to implement an ImageReader in my application, but I don't know why it doesn't read anything.
try {
    // ... (camera and MediaRecorder setup omitted)
    List<Surface> surfaces = new ArrayList<Surface>();

    Surface previewSurface = new Surface(texture);
    previewRequestBuilder.addTarget(previewSurface);
    recordRequestBuilder.addTarget(previewSurface);
    surfaces.add(previewSurface);

    Surface recorderSurface = mediaRecorder.getSurface();
    surfaces.add(recorderSurface);

    ImageReader mImageReader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(), ImageFormat.JPEG, 5);
    Surface processSurface = mImageReader.getSurface();
    surfaces.add(processSurface);

    mImageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            Log.v("ImageReader ", "An Image");
        }
    }, null);

    cameraDevice.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(CameraCaptureSession cameraCaptureSession) {
            captureSession = cameraCaptureSession;
            updateRequest(PREVIEW_REQUEST);
        }

        @Override
        public void onConfigureFailed(CameraCaptureSession cameraCaptureSession) {
            Activity activity = getActivity();
            if (null != activity) {
                Toast.makeText(activity, "Failed", Toast.LENGTH_SHORT).show();
            }
        }
    }, null);
} catch (CameraAccessException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
So I have 3 Surfaces: previewSurface for the display, recorderSurface for recording the video, and processSurface to get the images (with the ImageReader) and process them.
But I don't even see my Log.v once!
Thanks in advance for your answers.
There are at least 2 reasons why your code might not work:
In your implementation of the OnImageAvailableListener, in the method onImageAvailable(ImageReader reader), you do not read and close the image. In my experience, if you don't read/close the image from the reader, the camera freezes. If this is the case then you should see the log message at least once (or even more times). I would suggest adding reading and closing the image to the method:
@Override
public void onImageAvailable(ImageReader reader) {
    Log.v("ImageReader ", "An Image");
    Image img = reader.acquireNextImage();
    img.close();
}
Using 3 streams (for 3 surfaces) of a certain size and type might not be supported on your device. You should verify what support your device has (LEGACY/LIMITED/FULL). For instance, your device may not support 3 simultaneous streams at maximum size. Check the documentation: there are nice tables showing which combinations are possible, and carefully check whether your sizes/types are OK.
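A sketch of how to verify the second point before building the session (characteristics comes from CameraManager.getCameraCharacteristics(cameraId)):
StreamConfigurationMap map = characteristics.get(
        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
// Is the JPEG size you ask for actually offered?
for (Size s : map.getOutputSizes(ImageFormat.JPEG)) {
    Log.v("Sizes", "JPEG output supported: " + s);
}
// Sizes the camera can feed to a MediaRecorder surface:
for (Size s : map.getOutputSizes(MediaRecorder.class)) {
    Log.v("Sizes", "MediaRecorder output supported: " + s);
}
Integer level = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
Log.v("Sizes", "Hardware level: " + level);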