I am working on a modification of Google's Camera2 API example for Android, found here: https://github.com/googlesamples/android-Camera2Basic
I am uploading captured images to Cloudinary, and obviously need to do so in a background thread so the UI isn't blocked.
The problem I'm running into, however, is that the UI actually is blocked while the image is uploaded, even though, as far as I understand, it shouldn't be, because the Handler is created with the Looper from a background thread, like so:
private void startBackgroundThread() {
    mBackgroundThread = new HandlerThread("CameraBackground");
    mBackgroundThread.start();
    mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
}
The ImageSaver class, which is responsible for writing the captured image to disk, is as follows:
private static class ImageSaver implements Runnable {

    /**
     * The JPEG image
     */
    private final Image mImage;

    /**
     * The file we save the image into.
     */
    private final File mFile;

    public ImageSaver(Image image, File file) {
        mImage = image;
        mFile = file;
    }

    @Override
    public void run() {
        ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        FileOutputStream output = null;
        try {
            output = new FileOutputStream(mFile);
            output.write(bytes);
            InputStream is = new ByteArrayInputStream(bytes);
            Map uploadResult = CloudinaryManager.getInstance().uploader().upload(is, ObjectUtils.asMap(
                    "api_key", CloudinaryManager.CLOUDINARY_API_KEY,
                    "api_secret", CloudinaryManager.CLOUDINARY_SECRET_KEY
            ));
            System.out.println("result");
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            mImage.close();
            if (null != output) {
                try {
                    output.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
The ImageSaver is added to the Handler here:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {

    @Override
    public void onImageAvailable(ImageReader reader) {
        mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
    }
};
I would appreciate any help or advice to point me in the right direction.
I believe it's because there is a lock in the background being used by the camera. Since you are acquiring an Image from the ImageReader, I suspect it holds a lock until you are done with the resource. So, as a suggestion, I would fill the byte array inside onImageAvailable, close the image you acquired, and send the byte array to the background worker to perform the save.
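A minimal sketch of that suggestion, reusing the names from the question (ByteArraySaver is a hypothetical Runnable that would do the same file write and Cloudinary upload as ImageSaver, but from a plain byte[]):
Image image;
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {

    @Override
    public void onImageAvailable(ImageReader reader) {
        // Copy the JPEG bytes out and close the Image right away, so the
        // ImageReader buffer (and any lock tied to it) is released before
        // the slow save/upload work starts.
        Image image = reader.acquireNextImage();
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        image.close();
        // Hand only the plain byte array to the background worker.
        mBackgroundHandler.post(new ByteArraySaver(bytes, mFile)); // hypothetical Runnable
    }
};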
I had the same issue, and after some investigation and testing discovered it is not actually freezing the UI; it is freezing the camera preview, which in many camera apps gives the same impression.
If you take a look at the method unlockFocus(), you can see that it sets the camera back to the normal preview state.
Looking at the point at which this is called, you can see it is not until the image has been saved:
...
CameraCaptureSession.CaptureCallback CaptureCallback
        = new CameraCaptureSession.CaptureCallback() {

    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        showToast("Saved: " + mFile);
        Log.d(TAG, mFile.toString());
        unlockFocus();
    }
};
By calling this at an earlier point in the camera save sequence, the preview is re-enabled and the UI appears unlocked again much earlier.
I have experimented, and it appears to work if it is called just after acquiring the image and before saving it. I also removed the original call to unlockFocus() in the capture callback. Note that I have not done any proper testing of race conditions etc., so I would strongly advise experimenting yourself to make sure your case works (I will update this if I verify it more):
/**
 * This is a callback object for the {@link ImageReader}. "onImageAvailable" will be called when a
 * still image is ready to be saved.
 */
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {

    @Override
    public void onImageAvailable(ImageReader reader) {
        Log.d(TAG, "onImageAvailable");
        // Get the image
        Image cameraImage = reader.acquireNextImage();
        // Now unlock the focus so the UI does not look locked - note that this is a much earlier
        // point than in the original Camera2Basic example from Google, as the original place was
        // causing the preview to lock during any image manipulation and saving.
        unlockFocus();
        // Save the image file in the background - note: check that the user has granted
        // permissions, or this will cause an exception.
        mBackgroundHandler.post(new ImageSaver(getActivity().getApplicationContext(), cameraImage, outputPicFile));
    }
};
I am trying to access the frames of the preview on Android using this library, and then pass the FastJavaByteArray to ZXing's Decode() method to scan for a barcode in a specific area of the preview. The preview is working fine, and when I use the normal Preview.SetPreviewCallback(this) it works fine and OnPreviewFrame(byte[], Camera) gets called. It's just when I use Preview.SetNonMarshalingPreviewCallback(this) that the method isn't being called. I'm not too sure why this is happening; I wanted to use this so I could scan for a barcode frame by frame using ZXing. I've attached my code below.
public void Open()
{
    if (!closed) return;
    try
    {
        Preview = Camera.Open();
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
    }
    var parameters = Preview.GetParameters();
    int numBytes = (parameters.PreviewSize.Width * parameters.PreviewSize.Height * Android.Graphics.ImageFormat.GetBitsPerPixel(parameters.PreviewFormat)) / 8;
    using (FastJavaByteArray buffer = new FastJavaByteArray(numBytes))
        Preview.AddCallbackBuffer(new FastJavaByteArray(numBytes));
    var options = new ZXing.Mobile.MobileBarcodeScanningOptions();
    options.PossibleFormats.Add(BarcodeFormat.QR_CODE);
    barcodeReader = options.BuildBarcodeReader();
    Preview.SetNonMarshalingPreviewCallback(this);
    //Preview.SetPreviewCallback(this);
    Handler handler = new Handler();
    Action loop = null;
    loop = () =>
    {
        if (!closed)
        {
            AutoFocusLoop();
            handler.PostDelayed(loop, (long)(1000 * AF_DELAY));
        }
    };
    handler.Post(loop);
    closed = false;
}
public void OnPreviewFrame(IntPtr data, Camera camera)
{
    throw new NotImplementedException();
}
I have these next two methods, takeScreenshot and saveScreenshot:
public static Bitmap takeScreenshot(View oView)
{
    Bitmap oBitmap = null;
    if (oView != null) {
        if (oView.getWidth() > 0 && oView.getHeight() > 0) {
            oBitmap = Bitmap.createBitmap(oView.getWidth(),
                    oView.getHeight(), Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(oBitmap);
            try {
                oView.draw(canvas);
            } catch (Exception ex) {
                //log or whatever
            }
        }
    }
    return oBitmap;
}
public static File saveScreenshot(Bitmap oBitmap)
{
    String mPath = ApplicationContext.getActivity().getCacheDir().getAbsolutePath() + "/artandwordsshared_img.jpg";
    File imageFile = new File(mPath);
    try {
        boolean operationOK = true;
        if (imageFile.exists()) {
            operationOK = imageFile.delete();
        }
        if (operationOK) {
            operationOK = imageFile.createNewFile();
        }
        if (operationOK) {
            FileOutputStream outputStream = new FileOutputStream(imageFile);
            int quality = 75;
            oBitmap.compress(Bitmap.CompressFormat.JPEG, quality, outputStream);
            outputStream.flush();
            outputStream.close();
        }
    } catch (java.io.IOException ex) {
        ex.printStackTrace();
    }
    return imageFile;
}
I use this "takeScreenshot" method to capture a view (I use it in some parts of my app) and the other one to save screenshot on device, and I'd like to know if I must/should (regarding good practices) call them through an AsyncTask so they run on background, or if it is not necessary at all. I'm currently using AsyncTask for both and all it's working just fine, but not sure if it is really a must, like it is, for example, for network operations.
Thank you in advance.
Edit 1: Adding the AsyncTask "TakeScreenshot" class.
class TakeScreenshot extends AsyncTask<String, Void, ImageProcessingResult>
{
    private WeakReference<View> view2Capture;
    private ImageListeners listener;
    private String asyncTaskCaller;

    TakeScreenshot(View view2Capture, ImageListeners listener, String asyncTaskCaller)
    {
        this.listener = listener;
        this.view2Capture = new WeakReference<>(view2Capture);
        this.asyncTaskCaller = asyncTaskCaller;
    }

    protected ImageProcessingResult doInBackground(String... urls)
    {
        Bitmap bitmap = null;
        String result = Enum.Result.KO;
        if (view2Capture.get() != null)
        {
            bitmap = ImageHelper.takeScreenshot(view2Capture.get());
            result = ImageHelper.saveBitmap(bitmap);
        }
        return new ImageProcessingResult(bitmap, result);
    }

    protected void onPostExecute(ImageProcessingResult ipr)
    {
        listener.onScreenCaptureFinished(ipr, asyncTaskCaller);
    }
}
BTW, as of now the takeScreenshot method called from the AsyncTask is working just fine. I'm just trying to follow good practices, hence my post.
I'm not sure you will be able to call the first function, takeScreenshot, on a background thread, because it performs UI drawing. In any case, it makes no sense to move this small operation to the background.
The next function, saveScreenshot, must definitely run on a background thread. You need to eliminate the jank on the UI that you would get from running it in the foreground. You may not notice the difference on new devices, but under some conditions/platforms you will.
UPDATE
It seems like you're new to Android. Of course you can use AsyncTask, but people prefer other tools; AsyncTask is very old, and nowadays there are a bunch of alternatives. Just search for them.
On the other hand, AsyncTask is based on Java Executors (which involve a thread pool, a Handler, a MessageQueue, etc.). For simple actions like yours, you can use a plain Thread. Clean and simple.
//Just make it once
private final Handler ui = new Handler(Looper.getMainLooper());

//Whenever you need it, just make the next call
new Thread(() -> {
    //Background work
    //Whenever you need to submit something to the UI
    ui.post(() -> {
        //Foreground work
    });
}, "Tag").start();
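Applied to the question's helpers, the pattern might look like the sketch below. Here viewToCapture and onScreenshotSaved are hypothetical stand-ins, and takeScreenshot stays on the UI thread since, as noted above, it performs UI drawing:
// Capture on the UI thread, then push only the disk I/O to a worker thread.
final Bitmap bitmap = ImageHelper.takeScreenshot(viewToCapture);
new Thread(() -> {
    File file = ImageHelper.saveScreenshot(bitmap); // slow disk write, off the UI thread
    ui.post(() -> onScreenshotSaved(file));         // hypothetical callback back on the UI thread
}, "screenshot-save").start();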
I am using the Camera2 API to capture images in a loop.
When I capture an image, I get a callback in the onCaptureCompleted method, and there I use the TotalCaptureResult to get information about the image, like ISO, exposure, and timestamp. Then I store this information in a map.
After that, I get the image in the OnImageAvailableListener of the ImageReader, and I use the image's getTimestamp method and ExifInterface to get EXIF data like ISO and exposure.
Surprisingly, the values of ISO and exposure are different for the image and the capture result at the same timestamp.
Is this normal?
Reference code:
mSession.capture(captureRequest.build(), new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
        int capturedISO = result.get(CaptureResult.SENSOR_SENSITIVITY);
        long timeStamp = result.get(CaptureResult.SENSOR_TIMESTAMP);
        /// Save somewhere to be used later
        super.onCaptureCompleted(session, request, result);
    }
}, backgroundHandler);
And in the OnImageAvailableListener:
public void onImageAvailable(ImageReader imageReader) {
    if (!isRecording) {
        return;
    }
    Image image = imageReader.acquireLatestImage();
    Long timestamp = image.getTimestamp();
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.capacity()];
    buffer.get(bytes);
    OutputStream outputStream = null;
    try {
        outputStream = new FileOutputStream(file);
        outputStream.write(bytes);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            outputStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    image.close();
    try {
        ExifInterface exifInterface = new ExifInterface(file.getAbsolutePath());
        double value = exifInterface.getAttributeDouble(ExifInterface.TAG_ISO_SPEED_RATINGS, 0);
        /// Compare the ISO with the CaptureCallback's saved one for this timestamp. I got different values.
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Assuming the device supports the READ_SENSOR_SETTINGS capability, this would be a device-specific bug and not correct behavior. If the device does not support that capability, then the TotalCaptureResult values are likely not correct at all, if they're even present.
Unfortunately, there currently isn't a compliance test verifying that this particular combination of metadata values matches for the capture.
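A quick way to check which case you are in is to query the camera's advertised capabilities. A minimal sketch, assuming a CameraManager and camera ID are already in scope:
CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
int[] capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
boolean canReadSensorSettings = false;
for (int capability : capabilities) {
    // Only with this capability are SENSOR_SENSITIVITY / SENSOR_EXPOSURE_TIME in the
    // TotalCaptureResult guaranteed to reflect the real sensor settings.
    if (capability == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_READ_SENSOR_SETTINGS) {
        canReadSensorSettings = true;
    }
}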
You need to change the value of the BLACK_LEVEL_LOCK field, which controls whether black-level compensation is locked to its current values or is free to vary.
When set to true (ON), the values used for black-level compensation will not change until the lock is set to false (OFF).
Since changes to certain capture parameters (such as exposure time) may require resetting black-level compensation, the camera device must report in the output result metadata whether setting the black level lock was successful.
The camera device will maintain the lock to the extent possible, only overriding the lock to OFF when changes to other request parameters require a black-level recalculation or reset.
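In terms of the question's code, that would be a single extra key on the still-capture request, plus a check in the callback (a sketch; captureRequest and result are the builder and TotalCaptureResult from the snippet above):
// Lock black-level compensation for the still capture...
captureRequest.set(CaptureRequest.BLACK_LEVEL_LOCK, true);
// ...and, in onCaptureCompleted, verify that the lock actually held:
Boolean blackLevelLocked = result.get(CaptureResult.BLACK_LEVEL_LOCK);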
I need to take pictures continuously with the Camera2 API. It works fine on high-end devices (for instance a Nexus 5X), but on slower ones (for instance a Samsung Galaxy A3), the preview freezes.
The code is a bit long, so I post only the most relevant parts:
Method called to start my preview:
private void startPreview() {
    SurfaceTexture texture = mTextureView.getSurfaceTexture();
    if (texture != null) {
        try {
            // We configure the size of default buffer to be the size of camera preview we want.
            texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
            // This is the output Surface we need to start preview.
            Surface surface = new Surface(texture);
            // We set up a CaptureRequest.Builder with the output Surface.
            mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            mPreviewRequestBuilder.addTarget(surface);
            // Here, we create a CameraCaptureSession for camera preview.
            mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()), new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                    // If the camera is already closed, return:
                    if (mCameraDevice == null) { return; }
                    // When the session is ready, we start displaying the preview.
                    mCaptureSession = cameraCaptureSession;
                    // Auto focus should be continuous for camera preview.
                    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                    mPreviewRequest = mPreviewRequestBuilder.build();
                    // Start the preview
                    try { mCaptureSession.setRepeatingRequest(mPreviewRequest, null, mPreviewBackgroundHandler); }
                    catch (CameraAccessException e) { e.printStackTrace(); }
                }

                @Override
                public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
                    Log.e(TAG, "Configure failed");
                }
            }, null);
        }
        catch (CameraAccessException e) { e.printStackTrace(); }
    }
}
Method called to take a picture:
private void takePicture() {
    try {
        CaptureRequest.Builder captureBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        captureBuilder.addTarget(mImageReader.getSurface());
        mCaptureSession.capture(captureBuilder.build(), null, mCaptureBackgroundHandler);
    }
    catch (CameraAccessException e) { e.printStackTrace(); }
}
And here is my ImageReader:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(final ImageReader reader) {
        mSaveBackgroundHandler.post(new Runnable() {
            @Override
            public void run() {
                // Set the destination file:
                File destination = new File(getExternalFilesDir(null), "image_" + mNumberOfImages + ".jpg");
                mNumberOfImages++;
                // Acquire the latest image:
                Image image = reader.acquireLatestImage();
                // Save the image:
                ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                byte[] bytes = new byte[buffer.remaining()];
                buffer.get(bytes);
                FileOutputStream output = null;
                try {
                    output = new FileOutputStream(destination);
                    output.write(bytes);
                }
                catch (IOException e) { e.printStackTrace(); }
                finally {
                    image.close();
                    if (null != output) {
                        try { output.close(); }
                        catch (IOException e) { e.printStackTrace(); }
                    }
                }
                // Take a new picture if needed:
                if (mIsTakingPictures) {
                    takePicture();
                }
            }
        });
    }
};
I have a button that toggles the mIsTakingPictures boolean and makes the first takePicture call.
To recap, I'm using 3 threads:
one for the preview
one for the capture
one for the image saving
What can be the cause of this freeze?
It's impossible to avoid frame loss in your preview when you are taking images all the time on weak devices. The only way to avoid this is on devices which support TEMPLATE_ZERO_SHUTTER_LAG, using a ReprocessableCaptureSession. The documentation about this is pretty horrible, and finding a way to implement it can be an odyssey. I had this problem a few months ago, and finally I found the way to implement it:
How to use a reprocessCaptureRequest with camera2 API
In that answer you can also find some Google CTS tests which also implement ReprocessableCaptureSession and shoot some burst captures with the ZSL template.
Finally, you can also use a CaptureBuilder with your preview surface and the image reader surface attached; in that case your preview will continue working the whole time, and you will also save each frame as a new picture. But you will still have the freeze problem.
I also tried implementing a burst capture using a handler which dispatches a new capture call every 100 milliseconds. This second option was pretty good in performance, avoiding frame rate loss, but you will not get as many captures per second as with the two-ImageReader option.
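A minimal sketch of that handler-driven loop, reusing takePicture() and mIsTakingPictures from the question (the 100 ms period is the one mentioned above; mBurstHandler is an illustrative name):
private final Handler mBurstHandler = new Handler(Looper.getMainLooper());
private final Runnable mBurstLoop = new Runnable() {
    @Override
    public void run() {
        takePicture();  // issue one still capture
        if (mIsTakingPictures) {
            mBurstHandler.postDelayed(this, 100);  // schedule the next shot in 100 ms
        }
    }
};

// Started from the button handler instead of calling takePicture() directly:
// mBurstHandler.post(mBurstLoop);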
Hope my answer helps you a bit; the Camera2 API is still a bit complex, and there are not many examples or much information about it.
One thing I noticed on low-end devices: the preview stops after a capture, even when using the Camera1 API, so it has to be restarted manually, producing a small preview freeze when capturing a high-resolution picture.
But the Camera2 API provides the possibility to get a raw (uncompressed) image when taking a still capture (that wasn't possible with Camera1 on the devices I have: Huawei P7, Sony Xperia E5, Wiko U Feel). Capturing this way is much faster than capturing a JPEG (probably due to JPEG compression), so the preview can be restarted earlier, and the preview freeze is shorter. Of course, using this solution you'll have to convert the picture from YUV to JPEG in a background task.
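A rough sketch of that background conversion (yuvToJpeg is a hypothetical helper). It assumes the common device layout where the U/V planes of a YUV_420_888 Image can be copied back-to-back to form NV21; a robust converter must also honor each plane's rowStride and pixelStride:
// Convert a YUV_420_888 Image to JPEG bytes via YuvImage; call this off the UI thread.
private static byte[] yuvToJpeg(Image image, int quality) {
    ByteBuffer yBuf = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuf = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuf = image.getPlanes()[2].getBuffer();
    int ySize = yBuf.remaining(), uSize = uBuf.remaining(), vSize = vBuf.remaining();
    byte[] nv21 = new byte[ySize + uSize + vSize];
    yBuf.get(nv21, 0, ySize);
    vBuf.get(nv21, ySize, vSize);          // NV21 expects V first...
    uBuf.get(nv21, ySize + vSize, uSize);  // ...then U
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), quality, out);
    return out.toByteArray();
}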
I'm trying to implement an ImageReader in my application, but I don't know why it doesn't read anything.
List<Surface> surfaces = new ArrayList<Surface>();

Surface previewSurface = new Surface(texture);
previewRequestBuilder.addTarget(previewSurface);
recordRequestBuilder.addTarget(previewSurface);
surfaces.add(previewSurface);

Surface recorderSurface = mediaRecorder.getSurface();
surfaces.add(recorderSurface);

ImageReader mImageReader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(), ImageFormat.JPEG, 5);
Surface processSurface = mImageReader.getSurface();
surfaces.add(processSurface);

mImageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Log.v("ImageReader ", "An Image");
    }
}, null);

cameraDevice.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(CameraCaptureSession cameraCaptureSession) {
        captureSession = cameraCaptureSession;
        updateRequest(PREVIEW_REQUEST);
    }

    @Override
    public void onConfigureFailed(CameraCaptureSession cameraCaptureSession) {
        Activity activity = getActivity();
        if (null != activity) {
            Toast.makeText(activity, "Failed", Toast.LENGTH_SHORT).show();
        }
    }
}, null);
} catch (CameraAccessException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
So I have 3 Surfaces: previewSurface for the display, recorderSurface for recording the video, and processSurface to get the images (with the ImageReader) and process them.
But I don't even see my Log.v once!
Thanks in advance for your answers.
There are at least 2 reasons why your code might not work:
In your implementation of the OnImageAvailableListener, in the method onImageAvailable(ImageReader reader), you do not read and close the image. In my experience, if you don't read/close the image from the reader, the camera freezes. If this is the case, then you should see the log message at least once (or even more times). I would suggest adding reading and closing the image to the method:
@Override
public void onImageAvailable(ImageReader reader) {
    Log.v("ImageReader ", "An Image");
    Image img = reader.acquireNextImage();
    img.close();
}
Requesting 3 streams (for 3 surfaces) of a certain size and type might not be supported on your device. You should verify what level of support your device has (LEGACY/LIMITED/FULL). For instance, your device may not support 3 simultaneous streams at maximum size. Check the documentation; there are nice tables showing what is possible, and carefully check whether your sizes/types are OK.
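A minimal sketch of that check, assuming a CameraManager and camera ID are in scope; the reported hardware level tells you which of the documented stream-combination tables apply:
CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
Integer hardwareLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
// Compare against INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY / _LIMITED / _FULL:
// LEGACY devices in particular guarantee only a very small set of simultaneous streams.
if (hardwareLevel != null && hardwareLevel == CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
    Log.w(TAG, "LEGACY device: a preview + recorder + JPEG combination may not be supported");
}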