I am developing an android camera application, and I wanted to pass in the capture size to configure the camera before taking a picture.
This is my code:
try {
    mCaptureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    mCaptureRequestBuilder.addTarget(previewSurface);
    InputConfiguration inputConfiguration = new InputConfiguration(1920, 1080, ImageFormat.JPEG); // error here.
    cameraDevice.createReprocessableCaptureSession(inputConfiguration, Arrays.asList(previewSurface), new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
            try {
                cameraCaptureSession.setRepeatingRequest(mCaptureRequestBuilder.build(), null, handler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
            Toast.makeText(getApplicationContext(), "Camera Preview Failed!!", Toast.LENGTH_SHORT).show();
        }
    }, null);
} catch (CameraAccessException e) {
    e.printStackTrace();
}
So, I am trying to pass an input configuration to the camera here.
My problem is that I'm getting an error on the InputConfiguration line.
This is the error:
java.lang.IllegalArgumentException: input format 256 is not valid
I tried this with a lot of different ImageFormats, like JPEG, UNKNOWN, NV21 and others, but none of them work.
Please help me resolve this error; also, if my approach to interacting with the camera is wrong, do tell me.
You are working with TEMPLATE_PREVIEW, and ImageFormat.JPEG (format 256 in the error message) is not a valid input format here.
Camera2 mandates YUV 420 support for preview streams, so use that for the input configuration:
InputConfiguration inputConfiguration = new InputConfiguration(1920, 1080, ImageFormat.YUV_420_888);
Input configurations are only used in reprocessing use cases, where your application keeps a circular buffer of captured, partially processed frames.
When the user hits the shutter button, you send one of those captured frames back to the camera device for final processing. The input configuration selects the size and format of this path back into the camera.
For simple capture applications, you only care about the output configurations, as sketched below.
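For instance, a minimal sketch of configuring a 1920x1080 JPEG still-capture path without any reprocessing, reusing the previewSurface and handler from your snippet (the jpegReader name and stateCallback are illustrative):

// The capture size and format are configured on the output ImageReader,
// not via an InputConfiguration.
ImageReader jpegReader = ImageReader.newInstance(1920, 1080, ImageFormat.JPEG, 2);
cameraDevice.createCaptureSession(
        Arrays.asList(previewSurface, jpegReader.getSurface()),
        stateCallback, handler);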
Another unfortunate caveat is described here: https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics.html#REPROCESS_MAX_CAPTURE_STALL
Check that your camera supports reprocessing at all; if it doesn't, you will never get past "input format is not valid", because no input format is allowed.
The absence of this capability key also signals that YUV reprocessing is not available: https://developer.android.com/reference/android/hardware/camera2/CameraMetadata#REQUEST_AVAILABLE_CAPABILITIES_YUV_REPROCESSING
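A minimal sketch of that capability check, assuming a cameraManager and cameraId are in scope:

// getCameraCharacteristics may throw CameraAccessException.
CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
int[] capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
boolean supportsYuvReprocessing = false;
if (capabilities != null) {
    for (int capability : capabilities) {
        if (capability == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_YUV_REPROCESSING) {
            supportsYuvReprocessing = true;
            break;
        }
    }
}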
Related
I'm developing a Face Detection feature with Camera2 and MLKit.
In the Developer Guide, in the Performance Tips part, they say to capture images in ImageFormat.YUV_420_888 format if using the Camera2 API, which is my case.
Then, in the Face Detector part, they recommend using an image with dimensions of at least 480x360 pixels for face recognition in real time, which is again my case.
OK, let's go! Here is my code, which works well:
private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {
    // Open the selected camera
    cameraDevice = openCamera(cameraManager, getCameraId(), cameraHandler)

    val previewSize = if (isPortrait) {
        Size(RECOMMANDED_CAPTURE_SIZE.width, RECOMMANDED_CAPTURE_SIZE.height)
    } else {
        Size(RECOMMANDED_CAPTURE_SIZE.height, RECOMMANDED_CAPTURE_SIZE.width)
    }

    // Initialize an image reader which will be used to display a preview
    imageReader = ImageReader.newInstance(
        previewSize.width, previewSize.height, ImageFormat.YUV_420_888, IMAGE_BUFFER_SIZE)

    // Retrieve preview's frame and run detector
    imageReader.setOnImageAvailableListener({ reader ->
        lifecycleScope.launch(Dispatchers.Main) {
            val image = reader.acquireNextImage()
            logD { "Image available: ${image.timestamp}" }
            faceDetector.runFaceDetection(image, getRotationCompensation())
            image.close()
        }
    }, imageReaderHandler)

    // Creates list of Surfaces where the camera will output frames
    val targets = listOf(viewfinder.holder.surface, imageReader.surface)

    // Start a capture session using our open camera and list of Surfaces where frames will go
    session = createCaptureSession(cameraDevice, targets, cameraHandler)

    val captureRequest = cameraDevice.createCaptureRequest(
        CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(viewfinder.holder.surface)
        addTarget(imageReader.surface)
    }

    // This will keep sending the capture request as frequently as possible until the
    // session is torn down or session.stopRepeating() is called
    session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)
}
Now, I want to capture a still image... and this is my problem because, ideally, I want:
a full-resolution image or, at least, one bigger than 480x360;
in JPEG format, to be able to save it.
The Camera2Basic sample demonstrates how to capture an image (the Video and SlowMotion samples crash), and the MLKit sample uses the old Camera API! Fortunately, I've succeeded in mixing these samples to develop my feature, but I've failed to capture a still image with a different resolution.
I think I have to stop the preview session and recreate one for image capture, but I'm not sure...
What I have done is the following, but it captures images at 480x360:
session.stopRepeating()

// Unset the image reader listener
imageReader.setOnImageAvailableListener(null, null)

// Initialize a new image reader which will be used to capture still photos
// imageReader = ImageReader.newInstance(768, 1024, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)

// Start a new image queue
val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireNextImage()
    logD { "[Still] Image available in queue: ${image.timestamp}" }
    if (imageQueue.size >= IMAGE_BUFFER_SIZE - 1) {
        imageQueue.take().close()
    }
    imageQueue.add(image)
}, imageReaderHandler)

// Creates list of Surfaces where the camera will output frames
val targets = listOf(viewfinder.holder.surface, imageReader.surface)
val captureRequest = createStillCaptureRequest(cameraDevice, targets)

session.capture(captureRequest, object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult) {
        super.onCaptureCompleted(session, request, result)
        val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
        logD { "Capture result received: $resultTimestamp" }

        // Set a timeout in case image captured is dropped from the pipeline
        val exc = TimeoutException("Image dequeuing took too long")
        val timeoutRunnable = Runnable {
            continuation.resumeWithException(exc)
        }
        imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)

        // Loop in the coroutine's context until an image with matching timestamp comes
        // We need to launch the coroutine context again because the callback is done in
        // the handler provided to the `capture` method, not in our coroutine context
        @Suppress("BlockingMethodInNonBlockingContext")
        lifecycleScope.launch(continuation.context) {
            while (true) {
                // Dequeue images while timestamps don't match
                val image = imageQueue.take()
                if (image.timestamp != resultTimestamp)
                    continue
                logD { "Matching image dequeued: ${image.timestamp}" }

                // Unset the image reader listener
                imageReaderHandler.removeCallbacks(timeoutRunnable)
                imageReader.setOnImageAvailableListener(null, null)

                // Clear the queue of images, if there are any left
                while (imageQueue.size > 0) {
                    imageQueue.take().close()
                }

                // Compute EXIF orientation metadata
                val rotation = getRotationCompensation()
                val mirrored = cameraFacing == CameraCharacteristics.LENS_FACING_FRONT
                val exifOrientation = computeExifOrientation(rotation, mirrored)
                logE { "captured image size (w/h): ${image.width} / ${image.height}" }

                // Build the result and resume progress
                continuation.resume(CombinedCaptureResult(
                    image, result, exifOrientation, imageReader.imageFormat))

                // There is no need to break out of the loop, this coroutine will suspend
            }
        }
    }
}, cameraHandler)
}
If I uncomment the new ImageReader instantiation, I get this exception:
java.lang.IllegalArgumentException: CaptureRequest contains
unconfigured Input/Output Surface!
Can anyone help me?
This IllegalArgumentException:
java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
... obviously refers to imageReader.surface.
Meanwhile (with CameraX) this works differently; see CameraFragment.kt ...
Issue #197: Firebase Face Detection API issue while using CameraX API;
there might soon be a sample application matching your use case.
ImageReader is sensitive to the choice of format and/or combination of usage flags. The documentation points out that certain combinations of format may be unsupported. On some Android devices (perhaps some older phone models), you might find that the IllegalArgumentException is not thrown when using the JPEG format. But that doesn't help much - you want something versatile.
What I have done in the past is to use the ImageFormat.YUV_420_888 format (this will be backed by the hardware and the ImageReader implementation). This format contains no pre-optimizations that would prevent the application from accessing the image via its internal array of planes. I notice you have already used it successfully in your initializeCamera() method.
You may then extract the image data from the frame you want:
Image.Plane[] planes = img.getPlanes();
ByteBuffer yBuffer = planes[0].getBuffer(); // a direct buffer: array() is not supported here
byte[] data = new byte[yBuffer.remaining()];
yBuffer.get(data);
and then, via a Bitmap, create the still image using JPEG compression, PNG, or whichever encoding you choose:
ByteArrayOutputStream out = new ByteArrayOutputStream();
// Note: YuvImage expects NV21-ordered bytes (see the conversion sketch below).
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] imageBytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
ByteArrayOutputStream out2 = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 75, out2);
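One caveat: plane 0 of a YUV_420_888 Image holds only the luminance samples, while YuvImage expects a full NV21 buffer (the Y plane followed by interleaved V/U samples). A hedged sketch of that conversion; the method name is illustrative, and it relies on the plane ordering (Y, U, V) that YUV_420_888 guarantees:

private static byte[] yuv420888ToNv21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];
    Image.Plane[] planes = image.getPlanes();
    // Copy the Y plane row by row, honoring any row-stride padding.
    ByteBuffer yBuffer = planes[0].getBuffer();
    int yRowStride = planes[0].getRowStride();
    int pos = 0;
    for (int row = 0; row < height; row++) {
        yBuffer.position(row * yRowStride);
        yBuffer.get(nv21, pos, width);
        pos += width;
    }
    // Interleave the chroma samples in V-then-U order, as NV21 requires.
    ByteBuffer uBuffer = planes[1].getBuffer();
    ByteBuffer vBuffer = planes[2].getBuffer();
    int uvRowStride = planes[1].getRowStride();
    int uvPixelStride = planes[1].getPixelStride();
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int uvIndex = row * uvRowStride + col * uvPixelStride;
            nv21[pos++] = vBuffer.get(uvIndex);
            nv21[pos++] = uBuffer.get(uvIndex);
        }
    }
    return nv21;
}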
I am using the camera2 API to capture images with manual exposure and ISO, but sometimes the captured image has different values for ISO and exposure than the ones I specified.
Is there any way to pass the information about the values I set in the capture request to the image reader listener (whose callback fires when an image is captured), so I can check whether the image actually has the values I specified?
I am capturing a lot of images (say, in a loop) with different values of ISO and exposure for every image.
This is my code to capture images:
imageReader = ImageReader.newInstance(imageWidth, imageHeight, ImageFormat.JPEG, 1);
imageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader imageReader) {
        // How to check the image is taken with correct values?
    }
}, backgroundHandler);

captureRequest = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureRequest.addTarget(preview);
captureRequest.addTarget(imageReader.getSurface());
captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
captureRequest.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_OFF);
captureRequest.set(CaptureRequest.SENSOR_SENSITIVITY, <MANUAL_ISO>);
captureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, <MANUAL_EXPOSURE>);
mSession.capture(captureRequest.build(), null, backgroundHandler);
This works most of the time: if I am taking 100 photos, around 70 are taken with the values I specify, and the remaining 30 have different values.
What I tried:
I have tried the approach below: when I capture an image, I check the values in onCaptureCompleted and push onto a queue a flag indicating whether the image was taken with the correct values. But when I get the image in the imageReader listener, I have no idea whether the value at the head of the queue is for the current image or some other image. This happens because I don't know when the imageReader listener will be invoked for an image: it can be invoked just after onCaptureCompleted finishes, or before it, or in the worst case after onCaptureCompleted has been invoked 2-3 times for 2-3 images, since I am capturing images in a loop.
Basically, I need a tag to identify images in this approach, but I don't know how to do that.
Here is the code for the same:
class CapturedPicture {
    static Queue<Boolean> iso = new LinkedList<>();
}

mSession.capture(captureRequest.build(), new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
        super.onCaptureCompleted(session, request, result);
        int capturedISO = result.get(CaptureResult.SENSOR_SENSITIVITY);
        CapturedPicture.iso.add(<MANUAL_ISO> == capturedISO);
    }
}, backgroundHandler);
So I need a way to pass information to the imageReader listener indicating whether the current image conforms to the settings I specified.
Any help is appreciated.
PS: I also tried saving the TotalCaptureResult's SENSOR_TIMESTAMP and comparing it with image.getTimestamp(), but I can confirm that sometimes the image with a given timestamp doesn't have the same parameters as those reported by the TotalCaptureResult.
Ideally, the capture result data should match the corresponding image based on timestamp.
If it does not, check whether your device supports postRawSensitivityBoost; you may need to set that as well.
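For the tagging problem itself, the sensor timestamp is the intended key: CaptureResult.SENSOR_TIMESTAMP and Image.getTimestamp() come from the same clock for a given capture. A minimal sketch, assuming both callbacks are delivered on the same backgroundHandler so no extra synchronization is needed (names are illustrative):

// Completed results keyed by sensor timestamp.
Map<Long, TotalCaptureResult> pendingResults = new HashMap<>();

// In onCaptureCompleted(...):
long resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP);
pendingResults.put(resultTimestamp, result);

// In onImageAvailable(...):
Image image = imageReader.acquireNextImage();
TotalCaptureResult matching = pendingResults.remove(image.getTimestamp());
if (matching != null) {
    int capturedIso = matching.get(CaptureResult.SENSOR_SENSITIVITY);
    // Compare capturedIso (and SENSOR_EXPOSURE_TIME) against the requested values here.
}
image.close();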
I have a camera2 implementation. The current setup uses a TextureView surface to display the actual camera view and an ImageReader surface for capturing images.
Now I want to capture preview frames as well, so I tried adding a new ImageReader surface for capturing frames. But when I add that surface to the createCaptureSession request, the screen goes blank. What could possibly be wrong? Below is the code I use to add surfaces to createCaptureSession:
val surface = preview.surface
    ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)
val previewIRSurface = previewImageReader?.surface
    ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)
val captureSurface = captureImageReader?.surface
    ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)

try {
    val template = if (zsl) CameraDevice.TEMPLATE_ZERO_SHUTTER_LAG else CameraDevice.TEMPLATE_PREVIEW
    previewRequestBuilder = camera?.createCaptureRequest(template)
        ?.apply { addTarget(surface) }
        ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)
    val surfaces: ArrayList<Surface> = arrayListOf(surface, previewIRSurface, captureSurface)
    camera?.createCaptureSession(surfaces, sessionCallback, backgroundHandler)
} catch (e: CameraAccessException) {
    throw RuntimeException("Failed to start camera session")
}
The initialization of ImageReaders is like this.
private fun prepareImageReaders() {
    val largestPreview = previewSizes.sizes(aspectRatio).last()
    previewImageReader?.close()
    previewImageReader = ImageReader.newInstance(
        largestPreview.width,
        largestPreview.height,
        internalOutputFormat,
        4 // maxImages
    ).apply { setOnImageAvailableListener(onPreviewImageAvailableListener, backgroundHandler) }

    val largestPicture = pictureSizes.sizes(aspectRatio).last()
    captureImageReader?.close()
    captureImageReader = ImageReader.newInstance(
        largestPicture.width,
        largestPicture.height,
        internalOutputFormat,
        2 // maxImages
    ).apply { setOnImageAvailableListener(onCaptureImageAvailableListener, backgroundHandler) }
}
More clarification about the parameters used above:
internalOutputFormat is either ImageFormat.JPEG or ImageFormat.YUV_420_888.
Image sizes are based on the best possible sizes.
It works fine with either of the image readers individually, but as soon as I add both together: blank screen!
Testing on Samsung Galaxy S8 with Android Oreo (8.0)
The original code is here https://github.com/pvasa/cameraview-ex/blob/development/cameraViewEx/src/main/api21/com/priyankvasa/android/cameraviewex/Camera2.kt
maxImages == 4 may be too much and could exhaust your RAM. Also, it's not clear what internalOutputFormat you use, and whether it is compatible with the largestPreview size.
The bottom line is: study the long list of tables of supported stream combinations in the documentation for createCaptureSession(). Depending on your camera's capabilities, the three surfaces you use could be too much.
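A minimal sketch of checking what the device actually advertises before wiring up three surfaces, assuming a cameraManager and cameraId are in scope (variable names are illustrative):

// getCameraCharacteristics may throw CameraAccessException.
CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
// Output sizes this device supports for a YUV_420_888 stream; pick session sizes from these.
Size[] yuvSizes = map.getOutputSizes(ImageFormat.YUV_420_888);
// The hardware level (LEGACY/LIMITED/FULL/3) constrains which stream combinations are guaranteed.
Integer hardwareLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);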
From the comments below, a working solution: "The error itself doesn't say much [...] but upon searching, it is found that multiple surfaces are not supported for JPEG format. Upon changing it to YUV_420_888 it works flawlessly."
How do I run the camera live view, instead of playing videos or showing an image, in GearVR?
I tried with the GearVR Framework (gearvrf) SDK, but I am getting a null camera view. Please guide me in solving this.
Thanks.
GVRCameraSceneObject cameraObject = null;
try {
    cameraObject = new GVRCameraSceneObject(gvrContext, 3.6f, 2.0f);
    cameraObject.setUpCameraForVrMode(1); // set up 60 fps camera preview.
} catch (GVRCameraSceneObject.GVRCameraAccessException e) {
    // Cannot open camera
    Log.e(TAG, "Cannot open the camera", e);
}
if (cameraObject != null) {
    cameraObject.getTransform().setPosition(0.0f, 0.0f, -4.0f);
    scene.getMainCameraRig().addChildObject(cameraObject);
}
Put this code into your GVRMain's onInit() method. Obviously, you should have initialized your scene object first.
Background
Android gained a new API on KitKat and Lollipop for capturing video of the screen. You can do it either via the ADB tool or via code (starting from Lollipop).
Ever since the new API came out, many apps have appeared that use this feature to record the screen, and Microsoft even made its own Google-Now-On-Tap competitor app.
Using ADB, you can use:
adb shell screenrecord /sdcard/video.mp4
You can even do it from within Android Studio itself.
The problem
I can't find any tutorial or explanation about how to do it using the API, meaning in code.
What I've found
The only place I've found is the documentation (here, under "Screen capturing and sharing"), which tells me this:
Android 5.0 lets you add screen capturing and screen sharing
capabilities to your app with the new android.media.projection APIs.
This functionality is useful, for example, if you want to enable
screen sharing in a video conferencing app.
The new createVirtualDisplay() method allows your app to capture the
contents of the main screen (the default display) into a Surface
object, which your app can then send across the network. The API only
allows capturing non-secure screen content, and not system audio. To
begin screen capturing, your app must first request the user’s
permission by launching a screen capture dialog using an Intent
obtained through the createScreenCaptureIntent() method.
For an example of how to use the new APIs, see the MediaProjectionDemo
class in the sample project.
Thing is, I can't find any "MediaProjectionDemo" sample. Instead, I've found a "Screen Capture" sample, but I don't understand how it works: when I ran it, all I saw was a blinking screen, and I don't think it saves the video to a file. The sample seems very buggy.
The questions
How do I perform those actions using the new API:
start recording, optionally including audio (mic/speaker/both).
stop recording
take a screenshot instead of video.
Also, how do I customize it (resolution, requested fps, colors, time...)?
The first step, the one which Ken White rightly suggested and which you may have already covered, is the officially provided example code.
I have used their API before. I agree that taking a screenshot is pretty straightforward, but screen recording works along similar lines.
I will answer your questions in three sections and wrap up with a link. :)
1. Start Video Recording
private void startScreenRecord(final Intent intent) {
    if (DEBUG) Log.v(TAG, "startScreenRecord:sMuxer=" + sMuxer);
    synchronized (sSync) {
        if (sMuxer == null) {
            final int resultCode = intent.getIntExtra(EXTRA_RESULT_CODE, 0);
            // get MediaProjection
            final MediaProjection projection = mMediaProjectionManager.getMediaProjection(resultCode, intent);
            if (projection != null) {
                final DisplayMetrics metrics = getResources().getDisplayMetrics();
                final int density = metrics.densityDpi;
                if (DEBUG) Log.v(TAG, "startRecording:");
                try {
                    sMuxer = new MediaMuxerWrapper(".mp4"); // if you record audio only, ".m4a" is also OK.
                    if (true) {
                        // for screen capturing
                        new MediaScreenEncoder(sMuxer, mMediaEncoderListener,
                            projection, metrics.widthPixels, metrics.heightPixels, density);
                    }
                    if (true) {
                        // for audio capturing
                        new MediaAudioEncoder(sMuxer, mMediaEncoderListener);
                    }
                    sMuxer.prepare();
                    sMuxer.startRecording();
                } catch (final IOException e) {
                    Log.e(TAG, "startScreenRecord:", e);
                }
            }
        }
    }
}
2. Stop Video Recording
private void stopScreenRecord() {
    if (DEBUG) Log.v(TAG, "stopScreenRecord:sMuxer=" + sMuxer);
    synchronized (sSync) {
        if (sMuxer != null) {
            sMuxer.stopRecording();
            sMuxer = null;
            // you should not wait here
        }
    }
}
2.5. Pause and Resume Video Recording
private void pauseScreenRecord() {
    synchronized (sSync) {
        if (sMuxer != null) {
            sMuxer.pauseRecording();
        }
    }
}

private void resumeScreenRecord() {
    synchronized (sSync) {
        if (sMuxer != null) {
            sMuxer.resumeRecording();
        }
    }
}
Hope the code helps. Here is the original link to the code I referred to, from which this implementation (video recording) is derived.
3. Take a Screenshot Instead of Video
I think that, by default, it's easy to capture the image in bitmap format. You can still go ahead with the MediaProjectionDemo example to capture a screenshot.
[EDIT]: Code excerpt for screenshots
a. To create a virtual display depending on the device's width / height:
mImageReader = ImageReader.newInstance(mWidth, mHeight, PixelFormat.RGBA_8888, 2);
mVirtualDisplay = sMediaProjection.createVirtualDisplay(SCREENCAP_NAME, mWidth, mHeight, mDensity, VIRTUAL_DISPLAY_FLAGS, mImageReader.getSurface(), null, mHandler);
mImageReader.setOnImageAvailableListener(new ImageAvailableListener(), mHandler);
b. Then start the screen capture based on an intent or action:
startActivityForResult(mProjectionManager.createScreenCaptureIntent(), REQUEST_CODE);
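The MediaProjection itself arrives in onActivityResult; a minimal sketch, assuming the sMediaProjection and mProjectionManager fields used elsewhere in this answer:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_CODE && resultCode == RESULT_OK) {
        // Turn the user's consent into a usable MediaProjection handle.
        sMediaProjection = mProjectionManager.getMediaProjection(resultCode, data);
        // createVirtualDisplay() from step (a) can now be called.
    }
}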
Stop the media projection when you are done:
sMediaProjection.stop();
c. Then convert the captured frame to an image:
// Process the media capture
image = mImageReader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
int pixelStride = planes[0].getPixelStride();
int rowStride = planes[0].getRowStride();
int rowPadding = rowStride - pixelStride * mWidth;

// Create bitmap (the extra columns account for row padding)
bitmap = Bitmap.createBitmap(mWidth + rowPadding / pixelStride, mHeight, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(buffer);
image.close(); // release the Image back to the ImageReader once copied

// Write the bitmap to a file in some path on the phone
fos = new FileOutputStream(STORE_DIRECTORY + "/myscreen_" + IMAGES_PRODUCED + ".png");
bitmap.compress(CompressFormat.PNG, 100, fos);
fos.close();
There are several full-code implementations of the Media Projection API available.
Some other links that can help you in your development:
Video Recording with MediaProjectionManager - website
android-ScreenCapture - github as per android developer's observations :)
screenrecorder - github
Capture and Record Android Screen using MediaProjection APIs - website
Hope it helps :) Happy coding and screen recording!
PS: Can you please tell me the Microsoft app you are talking about? I have not used it. Would like to try it :)