How to pass capture information to the ImageReader listener in camera2 - Android

I am using the camera2 API to capture images with manual exposure and ISO, but sometimes the captured image has different ISO and exposure values than the ones I specified.
Is there any way to pass the values I set in the capture request through to the ImageReader listener (whose callback fires when an image is captured), so I can check whether the image actually has the values I specified?
I am capturing a lot of images (say, in a loop) with different ISO and exposure values for every image.
This is my code to capture images:
imageReader = ImageReader.newInstance(imageWidth, imageHeight, ImageFormat.JPEG, 1);
imageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader imageReader) {
        // How to check that the image was taken with the correct values?
    }
}, backgroundHandler);
captureRequest = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureRequest.addTarget(preview);
captureRequest.addTarget(imageReader.getSurface());
captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
captureRequest.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_OFF);
captureRequest.set(CaptureRequest.SENSOR_SENSITIVITY, <MANUAL_ISO>);
captureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, <MANUAL_EXPOSURE>);
mSession.capture(captureRequest.build(), null, backgroundHandler);
This works most of the time: if I take 100 photos, around 70 are taken with the values I specify and the remaining 30 have different values.
What I tried:
I have tried the approach below: when I capture an image, I check the values in onCaptureCompleted and push a flag onto a queue indicating whether the image was taken with the correct values. But when I get an image in the ImageReader, I have no idea whether the value in the queue belongs to the current image or some other image. This happens because I don't know when the ImageReader listener will be invoked for an image: it can be invoked just after onCaptureCompleted finishes, or before it, or in the worst case after onCaptureCompleted has been invoked 2-3 times for 2-3 images, since I am capturing images in a loop.
Basically I need a tag to identify images in this approach, but I don't know how to do that.
Here is the code for the same:
class CapturedPicture {
    static Queue<Boolean> iso = new LinkedList<>();
}
mSession.capture(captureRequest.build(), new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
        super.onCaptureCompleted(session, request, result);
        int capturedISO = result.get(CaptureResult.SENSOR_SENSITIVITY);
        CapturedPicture.iso.add(<MANUAL_ISO> == capturedISO);
    }
}, backgroundHandler);
So I need a way to pass information to the ImageReader listener indicating whether the current image conforms to the settings I specified.
Any help is appreciated.
PS: I also tried saving the TotalCaptureResult's SENSOR_TIMESTAMP and comparing it with Image.getTimestamp(), but I can confirm that sometimes the image with a given timestamp does not have the same parameters as the TotalCaptureResult reports.

Ideally the capture result data should match the corresponding image based on timestamp.
If it does not, check whether your device supports postRawSensitivityBoost; you may need to set it explicitly as well.
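To make that concrete, here is a minimal sketch of the timestamp-matching approach, assuming the same imageReader, captureRequest and backgroundHandler as above (the map name is a placeholder): store each TotalCaptureResult keyed by its SENSOR_TIMESTAMP, then look it up when the matching Image arrives.
// Capture results keyed by sensor timestamp; a concurrent map keeps this safe
// even if the two callbacks ever run on different threads.
final Map<Long, TotalCaptureResult> pendingResults = new ConcurrentHashMap<>();

imageReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireNextImage();
    TotalCaptureResult result = pendingResults.remove(image.getTimestamp());
    if (result != null) {
        int capturedIso = result.get(CaptureResult.SENSOR_SENSITIVITY);
        long capturedExposure = result.get(CaptureResult.SENSOR_EXPOSURE_TIME);
        // Compare these against the values set in the request for this frame.
    } else {
        // The result has not arrived yet (onImageAvailable can fire first):
        // park the image's timestamp and do the comparison in onCaptureCompleted.
    }
    image.close();
}, backgroundHandler);

mSession.capture(captureRequest.build(), new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        pendingResults.put(result.get(CaptureResult.SENSOR_TIMESTAMP), result);
    }
}, backgroundHandler);
Separately, to rule out the boost effect mentioned above: on devices that report CameraCharacteristics.CONTROL_POST_RAW_SENSITIVITY_BOOST_RANGE, you can pin CaptureRequest.CONTROL_POST_RAW_SENSITIVITY_BOOST to 100 (that is, no boost) in the request.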

Related

Need to capture a still image during face detection with MLKit and Camera2

I'm developing a face detection feature with Camera2 and ML Kit.
In the Developer Guide, in the Performance Tips part, they say to capture images in ImageFormat.YUV_420_888 format if using the Camera2 API, which is my case.
Then, in the Face Detector part, they recommend using an image with dimensions of at least 480x360 pixels for face recognition in real time, which is again my case.
OK, let's go! Here is my code, which works well:
private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {
    // Open the selected camera
    cameraDevice = openCamera(cameraManager, getCameraId(), cameraHandler)
    val previewSize = if (isPortrait) {
        Size(RECOMMANDED_CAPTURE_SIZE.width, RECOMMANDED_CAPTURE_SIZE.height)
    } else {
        Size(RECOMMANDED_CAPTURE_SIZE.height, RECOMMANDED_CAPTURE_SIZE.width)
    }
    // Initialize an image reader which will be used to display a preview
    imageReader = ImageReader.newInstance(
        previewSize.width, previewSize.height, ImageFormat.YUV_420_888, IMAGE_BUFFER_SIZE)
    // Retrieve preview's frame and run detector
    imageReader.setOnImageAvailableListener({ reader ->
        lifecycleScope.launch(Dispatchers.Main) {
            val image = reader.acquireNextImage()
            logD { "Image available: ${image.timestamp}" }
            faceDetector.runFaceDetection(image, getRotationCompensation())
            image.close()
        }
    }, imageReaderHandler)
    // Creates list of Surfaces where the camera will output frames
    val targets = listOf(viewfinder.holder.surface, imageReader.surface)
    // Start a capture session using our open camera and list of Surfaces where frames will go
    session = createCaptureSession(cameraDevice, targets, cameraHandler)
    val captureRequest = cameraDevice.createCaptureRequest(
        CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(viewfinder.holder.surface)
        addTarget(imageReader.surface)
    }
    // This will keep sending the capture request as frequently as possible until the
    // session is torn down or session.stopRepeating() is called
    session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)
}
Now, I want to capture a still image... and this is my problem because, ideally, I want:
a full-resolution image or, at least, one bigger than 480x360
in JPEG format, to be able to save it
The Camera2Basic sample demonstrates how to capture an image (the Video and SlowMotion samples crash), and the ML Kit sample uses the old Camera API! Fortunately, I've succeeded in mixing these samples to develop my feature, but I've failed to capture a still image with a different resolution.
I think I have to stop the preview session and recreate one for image capture, but I'm not sure...
What I have done is the following, but it captures images in 480x360:
session.stopRepeating()
// Unset the image reader listener
imageReader.setOnImageAvailableListener(null, null)
// Initialize a new image reader which will be used to capture still photos
// imageReader = ImageReader.newInstance(768, 1024, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)
// Start a new image queue
val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireNextImage()
    logD { "[Still] Image available in queue: ${image.timestamp}" }
    if (imageQueue.size >= IMAGE_BUFFER_SIZE - 1) {
        imageQueue.take().close()
    }
    imageQueue.add(image)
}, imageReaderHandler)
// Creates list of Surfaces where the camera will output frames
val targets = listOf(viewfinder.holder.surface, imageReader.surface)
val captureRequest = createStillCaptureRequest(cameraDevice, targets)
session.capture(captureRequest, object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult) {
        super.onCaptureCompleted(session, request, result)
        val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
        logD { "Capture result received: $resultTimestamp" }
        // Set a timeout in case the captured image is dropped from the pipeline
        val exc = TimeoutException("Image dequeuing took too long")
        val timeoutRunnable = Runnable {
            continuation.resumeWithException(exc)
        }
        imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)
        // Loop in the coroutine's context until an image with a matching timestamp comes.
        // We need to launch the coroutine context again because the callback is done in
        // the handler provided to the `capture` method, not in our coroutine context
        @Suppress("BlockingMethodInNonBlockingContext")
        lifecycleScope.launch(continuation.context) {
            while (true) {
                // Dequeue images while timestamps don't match
                val image = imageQueue.take()
                if (image.timestamp != resultTimestamp)
                    continue
                logD { "Matching image dequeued: ${image.timestamp}" }
                // Unset the image reader listener
                imageReaderHandler.removeCallbacks(timeoutRunnable)
                imageReader.setOnImageAvailableListener(null, null)
                // Clear the queue of images, if there are any left
                while (imageQueue.size > 0) {
                    imageQueue.take().close()
                }
                // Compute EXIF orientation metadata
                val rotation = getRotationCompensation()
                val mirrored = cameraFacing == CameraCharacteristics.LENS_FACING_FRONT
                val exifOrientation = computeExifOrientation(rotation, mirrored)
                logE { "captured image size (w/h): ${image.width} / ${image.height}" }
                // Build the result and resume progress
                continuation.resume(CombinedCaptureResult(
                    image, result, exifOrientation, imageReader.imageFormat))
                // There is no need to break out of the loop, this coroutine will suspend
            }
        }
    }
}, cameraHandler)
}
If I uncomment the new ImageReader instantiation, I get this exception:
java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
Can anyone help me?
This IllegalArgumentException:
java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
... obviously refers to imageReader.surface. A session only accepts requests that target Surfaces it was configured with, so the Surface of a newly created ImageReader is "unconfigured" until you create a new capture session that includes it.
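As a rule, every output Surface must be declared when the session is created; a Surface that was not configured there cannot be targeted by any CaptureRequest. A minimal sketch in Java (the Kotlin equivalent is analogous; previewReader and stillReader are hypothetical names for your two readers):
// Configure the session with both readers up front, so that either can be
// targeted later without recreating the session.
void createSession(CameraDevice cameraDevice, Surface viewfinderSurface,
                   ImageReader previewReader, ImageReader stillReader,
                   CameraCaptureSession.StateCallback callback, Handler handler)
        throws CameraAccessException {
    List<Surface> outputs = Arrays.asList(
            viewfinderSurface,            // preview
            previewReader.getSurface(),   // YUV_420_888 frames for ML Kit
            stillReader.getSurface());    // full-resolution JPEG stills
    cameraDevice.createCaptureSession(outputs, callback, handler);
}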
Meanwhile (with CameraX) this works differently; see CameraFragment.kt.
See also Issue #197: Firebase Face Detection API issue while using the CameraX API;
there might soon be a sample application matching your use case.
ImageReader is sensitive to the choice of format and/or combination of usage flags. The documentation points out that certain combinations of format may be unsupported. On some Android devices (perhaps some older phone models) you might find the IllegalArgumentException is not thrown when using the JPEG format. But that doesn't help much, because you want something versatile.
What I have done in the past is use the ImageFormat.YUV_420_888 format (this will be backed by the hardware and the ImageReader implementation). This format has no pre-optimizations that prevent the application from accessing the image via its internal array of planes. I notice you have already used it successfully in your initializeCamera() method.
You may then extract the image data from the frame you want:
// ByteBuffer.array() throws on the direct buffers returned by Image planes,
// so copy the bytes out with get() instead.
Image.Plane[] planes = img.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
byte[] data = new byte[buffer.remaining()];
buffer.get(data);
and then, via a Bitmap, create the still image using JPEG compression, PNG, or whichever encoding you choose. Note that YuvImage expects NV21 data (the full Y plane followed by interleaved V/U samples), so plane 0 alone is not enough; see the conversion sketch after this snippet:
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] imageBytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
ByteArrayOutputStream out2 = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 75, out2);
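For reference, here is a sketch of such a conversion, assuming a three-plane YUV_420_888 Image; it honours each plane's rowStride and pixelStride, which vary by device:
// Convert a YUV_420_888 Image into an NV21 byte array (Y plane, then
// interleaved V/U), suitable for the YuvImage call above.
static byte[] yuv420ToNv21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];
    Image.Plane[] planes = image.getPlanes();

    // Y plane: copy row by row in case rowStride > width.
    ByteBuffer yBuf = planes[0].getBuffer();
    int yRowStride = planes[0].getRowStride();
    int pos = 0;
    for (int row = 0; row < height; row++) {
        yBuf.position(row * yRowStride);
        yBuf.get(nv21, pos, width);
        pos += width;
    }

    // Chroma planes: NV21 wants V first, then U, interleaved.
    ByteBuffer uBuf = planes[1].getBuffer();
    ByteBuffer vBuf = planes[2].getBuffer();
    int uRowStride = planes[1].getRowStride();
    int uPixStride = planes[1].getPixelStride();
    int vRowStride = planes[2].getRowStride();
    int vPixStride = planes[2].getPixelStride();
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            nv21[pos++] = vBuf.get(row * vRowStride + col * vPixStride);
            nv21[pos++] = uBuf.get(row * uRowStride + col * uPixStride);
        }
    }
    return nv21;
}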

Is there a way to delay a video feed from a cell phone camera in Unity?

I am making a project where you are supposed to be able to change the delay with which the feed from the cell phone camera is shown, so that people can see how their brains handle the delay/latency. I have managed to show the camera feed on a canvas that follows the camera around and fills the whole view of the Google Cardboard, but I am wondering how I could delay this video feed. Perhaps by using an image array of some sort?
I have tried searching for solutions online, but I have come up short of an answer. I tried a Texture2D array, but the performance was really bad (I tried a modified version of this).
private bool camAvailable;
private WebCamTexture backCam;
private Texture defaultBackground;

public RawImage background;
public AspectRatioFitter fit;

// Start is called before the first frame update
void Start()
{
    defaultBackground = background.texture;
    WebCamDevice[] devices = WebCamTexture.devices;
    if (devices.Length == 0)
    {
        Debug.Log("No camera detected");
        camAvailable = false;
        return;
    }

    for (int i = 0; i < devices.Length; i++)
    {
        if (!devices[i].isFrontFacing)
        {
            backCam = new WebCamTexture(devices[i].name, Screen.width, Screen.height); // Used to find the correct camera
        }
    }

    if (backCam == null)
    {
        Debug.Log("Unable to find back camera");
        return;
    }

    backCam.Play();
    background.texture = backCam;
    camAvailable = true;
} // Tell me if this is not enough code; I don't have much experience in Unity, so I'm unsure how much is required for a minimal reproducible example
Should I use some sort of frame buffer or image/texture array for delaying the video? (Start "recording", wait a specified amount of time, start playing the "video" on the screen)
Thanks in advance!

How to pass an input configuration to camera2 - Android

I am developing an Android camera application, and I want to pass in the capture size to configure the camera before taking a picture.
This is my code:
try {
    mCaptureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    mCaptureRequestBuilder.addTarget(previewSurface);
    InputConfiguration inputConfiguration = new InputConfiguration(1920, 1080, ImageFormat.JPEG); // error here
    cameraDevice.createReprocessableCaptureSession(inputConfiguration, Arrays.asList(previewSurface), new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
            try {
                cameraCaptureSession.setRepeatingRequest(mCaptureRequestBuilder.build(), null, handler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
            Toast.makeText(getApplicationContext(), "Camera Preview Failed!!", Toast.LENGTH_SHORT).show();
        }
    }, null);
}
So, I am trying to pass an input configuration to the camera here.
My problem is I'm getting an error on the InputConfiguration line.
This is my error:
java.lang.IllegalArgumentException: input format 256 is not valid
I tried this with a lot of different ImageFormats, like JPEG, UNKNOWN, NV21 and others. It's not working.
Help me resolve this error, and also tell me if my approach to interacting with the camera is wrong.
You are passing ImageFormat.JPEG, which is not a valid input format for reprocessing; camera2 expects an input format such as YUV_420_888 (or PRIVATE) here, like this:
InputConfiguration inputConfiguration = new InputConfiguration(1920, 1080, ImageFormat.YUV_420_888);
Input configurations are only used in reprocessing use cases, where you keep an application-level circular buffer of captured, partially processed frames.
When the user hits the shutter button, you send one of the captured frames back to the camera device for final processing. The input configuration selects the size and format of this path back into the camera.
For simple capture applications, you only care about the output configurations.
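To make the flow concrete, here is a rough sketch of the reprocessing path (names such as jpegReader, imageWriter, yuvFrame and totalResult are placeholders, and the session must have been created with createReprocessableCaptureSession):
// The ImageWriter feeds frames back into the session's input surface.
ImageWriter imageWriter = ImageWriter.newInstance(session.getInputSurface(), 2);

// Build a reprocess request from the TotalCaptureResult of the chosen frame,
// queue the frame itself as input, and capture the final (e.g. JPEG) output.
CaptureRequest.Builder reprocessBuilder =
        cameraDevice.createReprocessCaptureRequest(totalResult);
reprocessBuilder.addTarget(jpegReader.getSurface());
imageWriter.queueInputImage(yuvFrame);
session.capture(reprocessBuilder.build(), captureCallback, handler);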
A related caveat is described here: https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics.html#REPROCESS_MAX_CAPTURE_STALL
Check that your camera supports reprocessing at all; if it does not, no input format will pass validation, so you will always hit "input format is not valid".
The absence of this capability value also signals that YUV reprocessing is not available: https://developer.android.com/reference/android/hardware/camera2/CameraMetadata#REQUEST_AVAILABLE_CAPABILITIES_YUV_REPROCESSING
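A minimal sketch of that check, assuming you already have a CameraManager and the camera's id:
// YUV reprocessing is optional hardware support; without this capability,
// createReprocessableCaptureSession will reject any input configuration.
CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
int[] capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
boolean yuvReprocessingSupported = false;
if (capabilities != null) {
    for (int capability : capabilities) {
        if (capability == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_YUV_REPROCESSING) {
            yuvReprocessingSupported = true;
            break;
        }
    }
}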

camera2 API auto-capture current preview frame

This uses the Camera2 API on Android.
For real-time image processing, I have a listener set up that does some image processing and gives a boolean output on whether to capture an image or not. Currently I am using the camera2Raw example, which has a takePicture() triggered when a button is clicked. How can I ensure that the same frame I processed is captured, and that no additional frames are captured? Please do help me out. Thanks.
Link to camera2Raw
When you make a capture in your captureSession, the current frame is captured and passed to the onCaptureCompleted() method of the callback associated with your capture:
mCaptureSession.capture(mPhotoRequestBuilder.build(), YourCallback, null);

private CameraCaptureSession.CaptureCallback YourCallback = new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        // Get the ISO and exposure time from the picture
        Integer iso = result.get(CaptureResult.SENSOR_SENSITIVITY);
        long timeExposure = result.get(CaptureResult.SENSOR_EXPOSURE_TIME);
        Log.i(TAG, "[mHdrCaptureCallback][HDR] Photo: " + mHdrIndex + " Exposure: " + timeExposure);
        Log.i(TAG, "[mHdrCaptureCallback][HDR] Photo: " + mHdrIndex + " ISO " + iso);
    }
};
In the example above, I make a capture and, when it completes, the capture callback is called; there I just print the exposure and ISO values of the resulting image. But when you take a picture, the onImageAvailable listener of your ImageReader will be called too, and that is where you receive the current frame and can save the image.
Look at the Google sample you referenced:
/**
 * This is a callback object for the {@link ImageReader}. "onImageAvailable" will be called when a
 * JPEG image is ready to be saved.
 */
private final ImageReader.OnImageAvailableListener mOnJpegImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        dequeueAndSaveImage(mJpegResultQueue, mJpegImageReader);
    }
};

/**
 * This is a callback object for the {@link ImageReader}. "onImageAvailable" will be called when a
 * RAW image is ready to be saved.
 */
private final ImageReader.OnImageAvailableListener mOnRawImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        dequeueAndSaveImage(mRawResultQueue, mRawImageReader);
    }
};
Hope this helps, and that you now know a bit better how the image-saving process works with camera2. Let me know if I can help you with something else!

Android Twitter getting profile image size issue

I'm trying to get the profile images of my followers for use as thumbnails within a ListView.
The thumbnails are around 125x125, but the standard twitter4j call User.getProfileImageURL() returns a much smaller 48x48 image, and it is also recommended not to use it as the image source.
I've tried creating a ProfileImage object and supplying it as a parameter, User.getProfileImageURL(profileImageObject.Original), but this code takes some time simply to retrieve the URL, which is inefficient when loading a list of thumbnails.
Any suggestions on how to go about this?
Edit
Twitter API v1 has been disabled, so my old answer is no longer valid. Refer to API v1.1, which I believe requires authentication.
If you know the screen name, the Twitter API allows you to fetch the profile image at 4 different resolutions:
https://api.twitter.com/1/users/profile_image?screen_name=Krylez&size=mini
https://api.twitter.com/1/users/profile_image?screen_name=Krylez&size=normal
https://api.twitter.com/1/users/profile_image?screen_name=Krylez&size=bigger
https://api.twitter.com/1/users/profile_image?screen_name=Krylez&size=original
The "bigger" image is 73x73, which is going to interpolate in your 125x125 container. If you're not okay with this, you can try to fetch the "original" photo, but this photo could be very large (slow) and it's not necessarily a square.
Whatever method you choose, make sure you're not fetching and/or decoding Bitmaps on the UI thread. The Android API documentation has excellent guidelines for the correct way to do this.
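For example, a minimal sketch (imageUrl and imageView are placeholders) that fetches and decodes on a background thread and posts the bitmap back to the main thread:
ExecutorService executor = Executors.newSingleThreadExecutor();
Handler mainHandler = new Handler(Looper.getMainLooper());

executor.execute(() -> {
    try (InputStream in = new java.net.URL(imageUrl).openStream()) {
        // Network fetch and decode happen off the UI thread...
        final Bitmap bitmap = BitmapFactory.decodeStream(in);
        // ...and only the cheap view update is posted back to it.
        mainHandler.post(() -> imageView.setImageBitmap(bitmap));
    } catch (IOException e) {
        Log.e("Thumbnails", "Failed to load profile image", e);
    }
});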
We can also make use of Twitter4j, using:
mTwitter.getUserProfileImage();
From the official doc:
You can obtain a user's most recent profile image from GET users/show. Within the user object, you'll find the profile_image_url and profile_image_url_https fields. These fields will contain the resized "normal" variant of the user's uploaded image. This "normal" variant is typically 48x48px.
By modifying the URL, you can retrieve other variant sizings such as "bigger", "mini", and "original".
Here is the code:
TwitterApiClient twitterApiClient = TwitterCore.getInstance().getApiClient();
twitterApiClient.getAccountService().verifyCredentials(false, false, new Callback<User>() {
    @Override
    public void success(Result<User> userResult) {
        String name = userResult.data.name;
        String email = userResult.data.email;
        // _normal (48x48px) | _bigger (73x73px) | _mini (24x24px)
        String photoUrlNormalSize   = userResult.data.profileImageUrl;
        String photoUrlBiggerSize   = userResult.data.profileImageUrl.replace("_normal", "_bigger");
        String photoUrlMiniSize     = userResult.data.profileImageUrl.replace("_normal", "_mini");
        String photoUrlOriginalSize = userResult.data.profileImageUrl.replace("_normal", "");
    }

    @Override
    public void failure(TwitterException exc) {
        Log.d("TwitterKit", "Verify Credentials Failure", exc);
    }
});
For further information refer to Twitter API Documentation | Profile Images and Banners
To create a custom-size picture, for example 90x90, you can use the createScaledBitmap() method.
private final int PROFILE_PIC_SIZE = 90;

Bitmap originalPic = null;
Bitmap resizedPic = null;
try {
    InputStream in = new java.net.URL(photoUrlOriginalSize).openStream();
    originalPic = BitmapFactory.decodeStream(in);
    resizedPic = Bitmap.createScaledBitmap(originalPic, PROFILE_PIC_SIZE, PROFILE_PIC_SIZE, false);
} catch (Exception exc) {
    Log.e("Error", exc.getMessage());
    exc.printStackTrace();
}
You can use getOriginalProfileImageURL() for example. This is as large as it gets.
Smaller ones are getBiggerProfileImageURL() and getProfileImageURL().
These are the URLs you retrieve:
http://pbs.twimg.com/profile_images/NUMBER/c62p-cAD_normal.jpeg
http://pbs.twimg.com/profile_images/NUMBER/c62p-cAD_bigger.jpeg
http://pbs.twimg.com/profile_images/NUMBER/c62p-cAD.jpeg
