I'm trying to connect my Android app to the OpenCV library, and I need to use a native camera to get more control over the camera options. To do that I found http://nezarobot.blogspot.it/2016/03/android-surfacetexture-camera2-opencv.html, which is exactly what I need.
My problem is that when I use this code, with some small changes, the app crashes at launch with 3 errors reported:
E/BufferQueueProducer: [SurfaceTexture-0-31525-0] connect(P): already connected (cur=4 req=2)
D/PlateNumberDetection/DetectionBasedTracker: ANativeWindow_lock failed with error code -22
A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x315e9858 in tid 31735 (CameraBackgroun)
I have tried closing the camera before the JNI call; that way I can capture and show the first frame, but then I need to restart the camera, and the camera thread can't recreate itself.
Here is where I take the frame and send it to the NDK:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = null;
        try {
            image = reader.acquireLatestImage();
            if (image == null) {
                return;
            }
            if (image.getFormat() != ImageFormat.YUV_420_888) {
                throw new IllegalArgumentException("image must have format YUV_420_888.");
            }
            Image.Plane[] planes = image.getPlanes();
            if (planes[1].getPixelStride() != 1 && planes[1].getPixelStride() != 2) {
                throw new IllegalArgumentException(
                        "src chroma plane must have a pixel stride of 1 or 2: got "
                                + planes[1].getPixelStride());
            }
            mNativeDetector.detect(image.getWidth(), image.getHeight(), planes[0].getBuffer(), surface);
        } catch (IllegalStateException e) {
            Log.e(TAG, "Too many images queued for saving, dropping image for request: ", e);
        } finally {
            // Close the image on every path, or the reader's buffer queue fills up.
            if (image != null) {
                image.close();
            }
        }
    }
};
and here is where I manage the camera preview:
protected void createCameraPreview() {
    try {
        SurfaceTexture texture = textureView.getSurfaceTexture();
        assert texture != null;
        texture.setDefaultBufferSize(imageDimension.getWidth(), imageDimension.getHeight());
        surface = new Surface(texture);
        captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        captureRequestBuilder.addTarget(mImageReader.get().getSurface());
        BlockingSessionCallback sessionCallback = new BlockingSessionCallback();
        List<Surface> outputSurfaces = new ArrayList<>();
        outputSurfaces.add(mImageReader.get().getSurface());
        outputSurfaces.add(new Surface(textureView.getSurfaceTexture()));
        cameraDevice.createCaptureSession(outputSurfaces, sessionCallback, mBackgroundHandler);
        try {
            Log.d(TAG, "waiting on session.");
            cameraCaptureSessions = sessionCallback.waitAndGetSession(SESSION_WAIT_TIMEOUT_MS);
            try {
                captureRequestBuilder.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_AUTO);
                Log.d(TAG, "setting repeating request");
                cameraCaptureSessions.setRepeatingRequest(captureRequestBuilder.build(),
                        mCaptureCallback, mBackgroundHandler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        } catch (TimeoutRuntimeException e) {
            Toast.makeText(AydaMainActivity.this, "Failed to configure capture session.",
                    Toast.LENGTH_SHORT).show();
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
Have you tried that code without your "some small change" first? I have tried that project and it worked well on multiple devices, so it would be useful to first establish whether it does not work on your phone at all, or whether there is a problem in your modifications.
Related
I have a Camera App developed against SDK 26. I have been using it happily on a Motorola G5 and G6, but when I move to a Motorola G7 the app crashes when I press the button to take a picture.
The G7 is running Android 9. I have another Android 9 phone, a Samsung S10 Plus, and the S10 Plus does not crash when I press the button to take a picture.
While debugging I noticed that the G7 doesn't call ImageReader.OnImageAvailableListener while the S10 does. Looking at the code, this is where the image is saved for use later on in CameraCaptureSession.CaptureCallback. The callback expects bytes to be populated and crashes when it isn't (I haven't included the stack trace because it's a little unhelpful, but I can if you would like to see it).
I can get the G7 to save the image if I run it slowly through debug on 'some' occasions.
So I have a button that calls the function onImageCaptureClick(). Inside, it does a bunch of things, one of which is creating an ImageReader.OnImageAvailableListener. The OnImageAvailableListener saves the image and populates a class variable bytes from the image buffer. The listener is attached to my reader using reader.setOnImageAvailableListener(readerListener, null), but it is never invoked. When I get into the CaptureCallback the class variable bytes is not populated and the app crashes.
Do you have any idea where I would look to solve this?
protected void onImageCaptureClick() {
    if (null == mCameraDevice) {
        logger.debug("null == mCameraDevice");
        Log.e(TAG, "cameraDevice is null");
        return;
    }
    CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    try {
        CameraCharacteristics characteristics = manager.getCameraCharacteristics(mCameraDevice.getId());
        Size[] jpegSizes = null;
        if (characteristics != null) {
            jpegSizes = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
                    .getOutputSizes(ImageFormat.JPEG);
        }
        int width = 640;
        int height = 480;
        if (jpegSizes != null && 0 < jpegSizes.length) {
            width = jpegSizes[0].getWidth();
            height = jpegSizes[0].getHeight();
        }
        ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 1);
        List<Surface> outputSurfaces = new ArrayList<>(2);
        outputSurfaces.add(reader.getSurface());
        outputSurfaces.add(new Surface(mTextureView.getSurfaceTexture()));
        final CaptureRequest.Builder captureBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        captureBuilder.addTarget(reader.getSurface());
        if (mFlashMode == FLASH_MODE_OFF) {
            captureBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF);
            logger.debug("FLASH OFF");
        }
        if (mFlashMode == CONTROL_AE_MODE_ON) {
            captureBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                    CaptureRequest.CONTROL_AE_MODE_ON);
            captureBuilder.set(CaptureRequest.FLASH_MODE,
                    CaptureRequest.FLASH_MODE_TORCH);
            logger.debug("FLASH ON");
        }
        if (mFlashMode == CONTROL_AE_MODE_ON_AUTO_FLASH) {
            captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
            captureBuilder.set(CaptureRequest.FLASH_MODE,
                    CaptureRequest.FLASH_MODE_OFF);
            logger.debug("FLASH AUTO");
        }
        captureBuilder.set(CaptureRequest.SCALER_CROP_REGION, zoom);
        int rotation = getWindowManager().getDefaultDisplay().getRotation();
        captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));
        final File file = new File(_pictureUri.getPath());
        logger.debug("OnImageCaptureClick: _pictureUri is: " + _pictureUri.getPath());
        // ************************************
        // this listener is not used on the G7,
        // and so the image isn't saved.
        // ************************************
        ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
            @Override
            public void onImageAvailable(ImageReader reader) {
                Image image = null;
                try {
                    image = reader.acquireLatestImage();
                    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                    bytes = new byte[buffer.capacity()];
                    buffer.get(bytes);
                    logger.debug("onImageCaptureClick, the filesize to save is: " + bytes.length);
                    save();
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    if (image != null) {
                        image.close();
                    }
                }
            }

            private void save() throws IOException {
                OutputStream output = null;
                try {
                    output = new FileOutputStream(file);
                    output.write(bytes);
                } finally {
                    if (null != output) {
                        output.close();
                    }
                }
            }
        };
        // ********************************************************
        // the reader sets the listener here but it is never called
        // and when I get in to the CaptureCallback the BitmapUtils
        // expects bytes to be populated and crashes the app
        // ********************************************************
        reader.setOnImageAvailableListener(readerListener, null);
        final CameraCaptureSession.CaptureCallback captureListener = new CameraCaptureSession.CaptureCallback() {
            @Override
            public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
                super.onCaptureCompleted(session, request, result);
                try {
                    BitmapUtils.addTimeStampAndRotate(_pictureUri, bytes);
                    Intent intent = new Intent(CameraActivity.this, CameraReviewPhotoActivity.class);
                    intent.putExtra(MediaStore.EXTRA_OUTPUT, _pictureUri);
                    startActivityForResult(intent, CameraActivity.kRequest_Code_Approve_Image);
                } catch (IOException e) {
                    e.printStackTrace();
                } catch (ImageReadException e) {
                    e.printStackTrace();
                } catch (ImageWriteException e) {
                    e.printStackTrace();
                }
            }
        };
        mCameraDevice.createCaptureSession(outputSurfaces, new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                try {
                    session.capture(captureBuilder.build(), captureListener, null);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) {
                Log.w(TAG, "Failed to configure camera");
            }
        }, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    } finally {
        takePictureButton.setEnabled(false);
        mTextureView.setEnabled(false);
    }
}
The API makes no guarantee about the order of onCaptureCompleted and OnImageAvailableListener. They may arrive in arbitrary order, depending on the device, capture settings, the load on the device, or even the particular OS build you have.
Please don't make any assumptions about it.
Instead, if you need both callbacks to fire before you process something, wait for both to happen before moving forward. For example, in each callback check whether the other one has already fired, and if so, call the method that does the processing.
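As a rough sketch of that idea (the flag names and processCapturedImage() are placeholders, not part of the question's code):
// Hypothetical guard: the two callbacks can arrive in either order and on
// different threads, so track both under a lock and only process once both
// have fired.
private final Object mLock = new Object();
private boolean mCaptureCompleted = false;
private boolean mImageSaved = false;

// Call this from CameraCaptureSession.CaptureCallback.onCaptureCompleted().
private void onCaptureCompletedFired() {
    synchronized (mLock) {
        mCaptureCompleted = true;
        maybeProcessLocked();
    }
}

// Call this from ImageReader.OnImageAvailableListener, after saving `bytes`.
private void onImageSavedFired() {
    synchronized (mLock) {
        mImageSaved = true;
        maybeProcessLocked();
    }
}

private void maybeProcessLocked() {
    if (mCaptureCompleted && mImageSaved) {
        mCaptureCompleted = false; // reset for the next shot
        mImageSaved = false;
        processCapturedImage();    // placeholder: only here is `bytes` safe to use
    }
}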
I think I have found the solution to this.
I have the following phones:
Samsung S10 Plus
Motorola G7
Motorola G6
My app works on the S10 and the G6.
The S10 and G6 both call the OnImageAvailableListener function before the onCaptureCompleted callback. The G7, however, calls them the other way around: onCaptureCompleted, then OnImageAvailableListener.
According to https://proandroiddev.com/understanding-camera2-api-from-callbacks-part-1-5d348de65950 the correct way is onCaptureCompleted then OnImageAvailableListener.
In my code I am assuming that OnImageAvailableListener has saved the image before onCaptureCompleted tries to manipulate it, which causes the crash.
Looking at the INFO_SUPPORTED_HARDWARE_LEVEL of each device, I have the following levels of support (note the numeric constants are not ordered by capability: LIMITED = 0, FULL = 1, LEGACY = 2, LEVEL_3 = 3).
Samsung S10 Plus reports hardware level 1 (FULL)
Motorola G7 reports hardware level 3 (LEVEL_3)
Motorola G6 reports hardware level 2 (LEGACY)
My assumption at this point is that the events fire in a different order when you support the android-camera2 API at Level 3 compared to other levels.
Hope this helps
I am writing an Android app that supports saving RAW/JPEG and recording video at the same time. I tried passing 4 surfaces when creating the CameraCaptureSession: preview, 2x ImageSaver, and 1x PersistentInputSurface created by MediaCodec#createPersistentInputSurface. By using a persistent input surface, I intend to avoid a stoppage between 2 captures.
When creating the session it fails with:
W/CameraDevice-JV-0: Stream configuration failed due to: endConfigure:380: Camera 0: Unsupported set of inputs/outputs provided
Session 0: Failed to create capture session; configuration failed
I have tried taking out all other surfaces, leaving only the PersistentInputSurface; it still fails.
@Override
public void onResume() {
    super.onResume();
    // Some other setups...
    if (persistentRecorderSurface == null) {
        persistentRecorderSurface = MediaCodec.createPersistentInputSurface();
    }
    startBackgroundThread();
    startCamera();
    if (mPreviewView.isAvailable()) {
        configureTransform(mPreviewView.getWidth(), mPreviewView.getHeight());
    } else {
        mPreviewView.setSurfaceTextureListener(mSurfaceTextureListener);
    }
    if (mOrientationListener != null && mOrientationListener.canDetectOrientation()) {
        mOrientationListener.enable();
    }
}
private void createCameraPreviewSessionLocked() {
    try {
        SurfaceTexture texture = mPreviewView.getSurfaceTexture();
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
        Surface surface = new Surface(texture);
        mPreviewRequestBuilder = mBackCameraDevice.createCaptureRequest(
                CameraDevice.TEMPLATE_PREVIEW);
        mPreviewRequestBuilder.addTarget(surface);
        mBackCameraDevice.createCaptureSession(Arrays.asList(
                surface,
                mJpegImageReader.get().getSurface(),
                mRAWImageReader.get().getSurface(),
                persistentRecorderSurface
        ), new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                synchronized (mCameraStateLock) {
                    if (mBackCameraDevice == null) {
                        return;
                    }
                    try {
                        setup3AControlsLocked(mPreviewRequestBuilder);
                        session.setRepeatingRequest(mPreviewRequestBuilder.build(),
                                mPreCaptureCallback, mBackgroundHandler);
                        mState = CameraStates.PREVIEW;
                    } catch (CameraAccessException | IllegalStateException e) {
                        e.printStackTrace();
                        return;
                    }
                    mSession = session;
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) {
                showToast("Failed to configure camera.");
            }
        }, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
It'd be helpful to see the system log lines right before that error line to confirm, but most likely:
You need to actually tie the persistentRecorderSurface to a MediaRecorder or MediaCodec, and call prepare() on those, before you create the camera capture session.
Otherwise, there's nothing actually at the other end of the persistent surface, and the camera can't tell what resolution or other settings are required.
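A minimal sketch of that order of operations, assuming a MediaRecorder (the encoder settings and outputFile below are illustrative placeholders, not values from the question):
// Sketch: configure and prepare the recorder against the persistent surface
// *before* passing that surface to createCaptureSession().
mMediaRecorder = new MediaRecorder();
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE); // required for setInputSurface()
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mMediaRecorder.setOutputFile(outputFile.getPath());               // placeholder File
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setVideoSize(1920, 1080);                          // illustrative size
mMediaRecorder.setVideoEncodingBitRate(10000000);                 // illustrative bitrate
mMediaRecorder.setVideoFrameRate(30);
mMediaRecorder.setInputSurface(persistentRecorderSurface);        // API 23+, must precede prepare()
mMediaRecorder.prepare(); // only now does the surface have a defined resolution
// ...and only after prepare() succeeds, create the capture session with that surface.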
Also keep in mind that there are limits on how many concurrent outputs you can have from the camera, depending on its supported hardware level and capabilities. There is currently no requirement that a device must support your combination of outputs (preview, record, JPEG, RAW), unfortunately, so it's very likely many or all devices will still give you an error.
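If it still fails, it may be worth logging what the device actually reports before configuring the session. A sketch (manager and cameraId are assumed to be in scope):
// Sketch: check the hardware level and whether RAW is even available,
// before attempting a preview + JPEG + RAW + recorder combination.
CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
Integer level = chars.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
int[] caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
boolean hasRaw = false;
for (int c : caps) {
    if (c == CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_RAW) {
        hasRaw = true;
    }
}
Log.d(TAG, "hardware level=" + level + ", RAW capability=" + hasRaw);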
I need to take pictures continuously with Camera2 API. It works fine on high end devices (for instance a Nexus 5X), but on slower ones (for instance a Samsung Galaxy A3), the preview freezes.
The code is a bit long, so I post only the most relevant parts:
Method called to start my preview:
private void startPreview() {
    SurfaceTexture texture = mTextureView.getSurfaceTexture();
    if (texture != null) {
        try {
            // We configure the size of default buffer to be the size of camera preview we want.
            texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
            // This is the output Surface we need to start preview.
            Surface surface = new Surface(texture);
            // We set up a CaptureRequest.Builder with the output Surface.
            mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            mPreviewRequestBuilder.addTarget(surface);
            // Here, we create a CameraCaptureSession for camera preview.
            mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()), new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                    // If the camera is already closed, return:
                    if (mCameraDevice == null) { return; }
                    // When the session is ready, we start displaying the preview.
                    mCaptureSession = cameraCaptureSession;
                    // Auto focus should be continuous for camera preview.
                    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                    mPreviewRequest = mPreviewRequestBuilder.build();
                    // Start the preview
                    try { mCaptureSession.setRepeatingRequest(mPreviewRequest, null, mPreviewBackgroundHandler); }
                    catch (CameraAccessException e) { e.printStackTrace(); }
                }

                @Override
                public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
                    Log.e(TAG, "Configure failed");
                }
            }, null);
        }
        catch (CameraAccessException e) { e.printStackTrace(); }
    }
}
Method called to take a picture:
private void takePicture() {
    try {
        CaptureRequest.Builder captureBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        captureBuilder.addTarget(mImageReader.getSurface());
        mCaptureSession.capture(captureBuilder.build(), null, mCaptureBackgroundHandler);
    }
    catch (CameraAccessException e) { e.printStackTrace(); }
}
And here is my ImageReader:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(final ImageReader reader) {
        mSaveBackgroundHandler.post(new Runnable() {
            @Override
            public void run() {
                // Set the destination file:
                File destination = new File(getExternalFilesDir(null), "image_" + mNumberOfImages + ".jpg");
                mNumberOfImages++;
                // Acquire the latest image (may be null if the queue is empty):
                Image image = reader.acquireLatestImage();
                if (image == null) {
                    return;
                }
                // Save the image:
                ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                byte[] bytes = new byte[buffer.remaining()];
                buffer.get(bytes);
                FileOutputStream output = null;
                try {
                    output = new FileOutputStream(destination);
                    output.write(bytes);
                }
                catch (IOException e) { e.printStackTrace(); }
                finally {
                    image.close();
                    if (null != output) {
                        try { output.close(); }
                        catch (IOException e) { e.printStackTrace(); }
                    }
                }
                // Take a new picture if needed:
                if (mIsTakingPictures) {
                    takePicture();
                }
            }
        });
    }
};
I have a button that toggles the mIsTakingPictures boolean and makes the first takePicture call.
To recap, I'm using 3 threads:
one for the preview
one for the capture
one for the image saving
What can be the cause of this freeze?
It's impossible to avoid dropping preview frames when you are taking pictures all the time on weak devices. The only way around it is on devices which support TEMPLATE_ZERO_SHUTTER_LAG, using a ReprocessableCaptureSession. The documentation about this is pretty sparse, and finding a way to implement it can be an odyssey. I had this problem a few months ago and finally found a way to implement it:
How to use a reprocessCaptureRequest with camera2 API
In that answer you can also find some Google CTS tests which implement ReprocessableCaptureSession and shoot burst captures with the ZSL template.
Finally, you can also use a CaptureRequest.Builder with both your preview surface and the ImageReader surface attached as targets; in that case your preview keeps running the whole time and you also save each frame as a new picture. But you will still have the freeze problem.
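As a rough sketch of that last option, using the question's field names (and assuming a YUV_420_888 ImageReader; a JPEG reader is generally too slow to attach to a repeating request):
// Sketch: one repeating request with two targets, so every preview frame is
// also delivered to the ImageReader, with no separate capture() calls.
mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.addTarget(surface);                   // the preview Surface
mPreviewRequestBuilder.addTarget(mImageReader.getSurface()); // every frame also lands here
mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(),
        null, mPreviewBackgroundHandler);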
I also tried implementing a burst capture using a handler which dispatches a new capture call every 100 milliseconds, as sketched below. This second option performed well and avoided frame rate loss, but you will not get as many captures per second as with the ImageReader options above.
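Something along these lines (a sketch; mBurstHandler and mBurstRunnable are made-up names, while mIsTakingPictures and takePicture() come from the question):
// Sketch of the timer-based burst: re-post a Runnable every 100 ms while
// capturing is enabled.
private final Handler mBurstHandler = new Handler(Looper.getMainLooper());
private final Runnable mBurstRunnable = new Runnable() {
    @Override
    public void run() {
        if (mIsTakingPictures) {
            takePicture();                        // queue one still capture
            mBurstHandler.postDelayed(this, 100); // schedule the next one
        }
    }
};
// start the burst with: mBurstHandler.post(mBurstRunnable);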
Hope my answer helps you a bit; the Camera2 API is still a bit complex, and there are not many examples or much information about it.
One thing I noticed on low-end devices: the preview stops after a capture, even when using the camera1 API, so it has to be restarted manually, producing a small preview freeze when capturing a high-resolution picture.
But the camera2 API provides the possibility to get the raw image when taking a still capture (that wasn't possible with camera1 on the devices I have: Huawei P7, Sony Xperia E5, Wiko U Feel). Using this feature is much faster than capturing a JPEG (maybe due to JPEG compression), so the preview can be restarted earlier and the preview freeze is shorter. Of course, with this solution you'll have to convert the picture from YUV to JPEG in a background task.
This question was asked but never answered here -- and it is somewhat different from my need, anyway.
I want to record video, while running the Google Vision library in the background, so whenever my user holds up a barcode (or approaches one closely enough) the camera will automatically detect and scan the barcode -- and all the while it is recording the video. I know the Google Vision demo is pretty CPU intensive, but when I try a simpler version of it (i.e. without grabbing every frame all the time and handing it to the detector) I'm not getting reliable barcode reads.
(I am running a Samsung Galaxy S4 Mini on KitKat 4.4.3. Unfortunately, for reasons known only to Samsung, they no longer report the OnCameraFocused event, so it is impossible to know when the camera has grabbed focus and trigger the barcode read then. That makes grabbing and checking every frame seem like the only viable solution.)
So to at least prove the concept, I wanted to simply modify the Google Vision Demo. (Found Here)
It seems the easiest thing to do is simply jump into the code and add a media recorder. I did this in the CameraSourcePreview class, during surface creation.
Like this:
private class SurfaceCallback implements SurfaceHolder.Callback
{
    @Override
    public void surfaceCreated(SurfaceHolder surface)
    {
        mSurfaceAvailable = true;
        try
        {
            startIfReady();
            if (mSurfaceAvailable)
            {
                Camera camera = mCameraSource.getCameraSourceCamera();
                /** ADD MediaRecorder to Google Example **/
                if (camera != null && recordThis)
                {
                    if (mMediaRecorder == null)
                    {
                        mMediaRecorder = new MediaRecorder();
                        camera.unlock();
                        SurfaceHolder sh = mSurfaceView.getHolder();
                        mMediaRecorder.setPreviewDisplay(sh.getSurface());
                        mMediaRecorder.setCamera(camera);
                        mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
                        mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
                        mMediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
                        String OutputFile = Environment.getExternalStorageDirectory() + "/" +
                                DateFormat.format("yyyy-MM-dd_kk-mm-ss", new Date().getTime()) + ".mp4";
                        File newOutPut = getVideoFile();
                        String newOutPutFileName = newOutPut.getPath();
                        mMediaRecorder.setOutputFile(newOutPutFileName);
                        Log.d("START MR", OutputFile);
                        try { mMediaRecorder.prepare(); } catch (Exception e) {}
                        mCameraSource.mediaRecorder = mMediaRecorder;
                        mMediaRecorder.start();
                    }
                }
            }
        }
        catch (SecurityException se)
        {
            Log.e(TAG, "Do not have permission to start the camera", se);
        }
        catch (IOException e)
        {
            Log.e(TAG, "Could not start camera source.", e);
        }
    }
}
That DOES record things, while still handing each frame off to the Vision code. But, strangely, when I do that, the camera does not seem to call autofocus correctly, and the barcodes are not scanned -- since they are never really in focus, and therefore not recognized.
My next thought was to simply capture the frames as the barcode detector is handling the frames, and save them to the disk one by one (I can mux them together later.)
I did this in CameraSource.java.
This does not seem to be capturing all of the frames, even though I am writing them out in a separate AsyncTask running in the background, which I thought would get them all eventually -- even if it took a while to catch up. The saving is not optimized, but it looks as though frames are being dropped throughout, not just at the end.
To add this code, I tried putting it in the
private class FrameProcessingRunnable
in the run() method.
Right after the FrameBuilder Code, I added this:
if (saveImagesIsEnabled)
{
    if (data == null)
    {
        Log.d(TAG, "data == NULL");
    }
    else
    {
        SaveImageAsync saveImage = new SaveImageAsync(mCamera.getParameters().getPreviewSize());
        saveImage.execute(data.array());
    }
}
Which calls this class:
Camera.Size lastKnownPreviewSize = null;

public class SaveImageAsync extends AsyncTask<byte[], Void, Void>
{
    Camera.Size previewSize;

    public SaveImageAsync(Camera.Size _previewSize)
    {
        previewSize = _previewSize;
        lastKnownPreviewSize = _previewSize;
    }

    @Override
    protected Void doInBackground(byte[]... dataArray)
    {
        try
        {
            if (previewSize == null)
            {
                if (lastKnownPreviewSize != null)
                    previewSize = lastKnownPreviewSize;
                else
                    return null;
            }
            byte[] bitmapData = dataArray[0];
            if (bitmapData == null)
            {
                Log.d("doInBackground", "NULL: ");
                return null;
            }
            // where to put the output file (note: /sdcard requires WRITE_EXTERNAL_STORAGE permission)
            File storageDir = Environment.getExternalStorageDirectory();
            String imageFileName = baseFileName + "_" + Long.toString(sequentialCount++) + ".jpg";
            String filePath = storageDir + "/" + "tmp" + "/" + imageFileName;
            FileOutputStream out = null;
            YuvImage yuvimage = new YuvImage(bitmapData, ImageFormat.NV21, previewSize.width,
                    previewSize.height, null);
            try
            {
                out = new FileOutputStream(filePath);
                yuvimage.compressToJpeg(new Rect(0, 0, previewSize.width,
                        previewSize.height), 100, out);
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
            finally
            {
                try
                {
                    if (out != null)
                    {
                        out.close();
                    }
                }
                catch (IOException e)
                {
                    e.printStackTrace();
                }
            }
        }
        catch (Exception ex)
        {
            ex.printStackTrace();
            Log.d("doInBackground", ex.getMessage());
        }
        return null;
    }
}
I'm OK with the mediarecorder idea, or the brute-force frame capture idea, but neither seems to be working correctly.
I'm trying to implement an ImageReader in my application, but I don't know why it doesn't read anything.
List<Surface> surfaces = new ArrayList<Surface>();
Surface previewSurface = new Surface(texture);
previewRequestBuilder.addTarget(previewSurface);
recordRequestBuilder.addTarget(previewSurface);
surfaces.add(previewSurface);

Surface recorderSurface = mediaRecorder.getSurface();
surfaces.add(recorderSurface);

ImageReader mImageReader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(), ImageFormat.JPEG, 5);
Surface processSurface = mImageReader.getSurface();
surfaces.add(processSurface);
mImageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Log.v("ImageReader ", "An Image");
    }
}, null);

cameraDevice.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(CameraCaptureSession cameraCaptureSession) {
        captureSession = cameraCaptureSession;
        updateRequest(PREVIEW_REQUEST);
    }

    @Override
    public void onConfigureFailed(CameraCaptureSession cameraCaptureSession) {
        Activity activity = getActivity();
        if (null != activity) {
            Toast.makeText(activity, "Failed", Toast.LENGTH_SHORT).show();
        }
    }
}, null);
} catch (CameraAccessException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
So I've got 3 Surfaces: previewSurface for the display, recorderSurface for recording the video, and processSurface to get the images (with the ImageReader) and process them.
But I don't even see my Log.v once!
Thanks in advance for your answers.
There are at least 2 reasons why your code might not work:
In your implementation of the OnImageAvailableListener, in onImageAvailable(ImageReader reader), you do not read and close the image. In my experience, if you don't read/close the image from the reader, the camera freezes. If that were the only problem, you should still see the log message at least once (or even a few times). I would suggest adding reading and closing of the image to the method:
@Override
public void onImageAvailable(ImageReader reader) {
    Log.v("ImageReader ", "An Image");
    Image img = reader.acquireNextImage();
    img.close();
}
Taking 3 streams (for 3 surfaces) of a certain size and type might not be supported on your device. You should verify what level of support your device reports (LEGACY/LIMITED/FULL). For instance, your device may not support 3 simultaneous streams at maximum size. Check the documentation; there are tables showing the guaranteed stream combinations, so carefully check that your sizes/types are covered.
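For a quick check, something like this (a sketch; cameraManager and cameraId are assumed to be available) prints the reported level and the output sizes the device supports, which you can then compare against the guaranteed stream combination tables:
// Sketch: query the hardware level and the supported output sizes for the
// formats you want to stream simultaneously.
CameraCharacteristics cc = cameraManager.getCameraCharacteristics(cameraId);
Integer level = cc.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
// constant values: LIMITED = 0, FULL = 1, LEGACY = 2, LEVEL_3 = 3
StreamConfigurationMap map = cc.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size[] jpegSizes = map.getOutputSizes(ImageFormat.JPEG);
Size[] previewSizes = map.getOutputSizes(SurfaceTexture.class);
Log.d("StreamCheck", "level=" + level
        + " jpeg=" + Arrays.toString(jpegSizes)
        + " preview=" + Arrays.toString(previewSizes));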