I have a Camera2 implementation. The current setup uses a TextureView surface to display the live camera preview and an ImageReader surface for capturing images.
Now I want to capture preview frames as well, so I tried adding a second ImageReader surface for grabbing frames. But as soon as I add that surface to the createCaptureSession call, the screen goes blank. What could be wrong? Below is the code I use to add surfaces to createCaptureSession:
val surface = preview.surface
    ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)
val previewIRSurface = previewImageReader?.surface
    ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)
val captureSurface = captureImageReader?.surface
    ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)

try {
    val template =
        if (zsl) CameraDevice.TEMPLATE_ZERO_SHUTTER_LAG else CameraDevice.TEMPLATE_PREVIEW

    previewRequestBuilder = camera?.createCaptureRequest(template)
        ?.apply { addTarget(surface) }
        ?: throw CameraAccessException(CameraAccessException.CAMERA_ERROR)

    val surfaces: ArrayList<Surface> = arrayListOf(surface, previewIRSurface, captureSurface)

    camera?.createCaptureSession(surfaces, sessionCallback, backgroundHandler)
} catch (e: CameraAccessException) {
    throw RuntimeException("Failed to start camera session", e) // keep the original cause
}
The ImageReaders are initialized like this:
private fun prepareImageReaders() {
    val largestPreview = previewSizes.sizes(aspectRatio).last()
    previewImageReader?.close()
    previewImageReader = ImageReader.newInstance(
        largestPreview.width,
        largestPreview.height,
        internalOutputFormat,
        4 // maxImages
    ).apply { setOnImageAvailableListener(onPreviewImageAvailableListener, backgroundHandler) }

    val largestPicture = pictureSizes.sizes(aspectRatio).last()
    captureImageReader?.close()
    captureImageReader = ImageReader.newInstance(
        largestPicture.width,
        largestPicture.height,
        internalOutputFormat,
        2 // maxImages
    ).apply { setOnImageAvailableListener(onCaptureImageAvailableListener, backgroundHandler) }
}
Some clarifications about the parameters used above:
internalOutputFormat is either ImageFormat.JPEG or ImageFormat.YUV_420_888.
Image sizes are the largest available sizes for the chosen aspect ratio.
It works fine with either image reader individually, but as soon as I add both together: blank screen!
Testing on Samsung Galaxy S8 with Android Oreo (8.0)
The original code is here https://github.com/pvasa/cameraview-ex/blob/development/cameraViewEx/src/main/api21/com/priyankvasa/android/cameraviewex/Camera2.kt
maxImages == 4 may be too much and could exhaust your RAM. It is also not clear which internalOutputFormat you use, and whether it is compatible with the largestPreview size.
The bottom line: study the long tables of guaranteed stream configurations in the createCaptureSession() documentation. Depending on your camera's capabilities, the three surfaces you use may be one too many.
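For example, you can check the hardware level and the output sizes each format actually supports before building the session. A minimal Kotlin sketch, assuming a cameraManager and cameraId are available:

import android.graphics.ImageFormat
import android.hardware.camera2.CameraCharacteristics

// Sketch: inspect the device's guarantees before adding a third surface.
val characteristics = cameraManager.getCameraCharacteristics(cameraId)

// LEGACY devices guarantee far fewer concurrent streams than FULL or LEVEL_3.
val hardwareLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)

val configMap = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)

// Output sizes actually supported for each format you plan to use.
val yuvSizes = configMap?.getOutputSizes(ImageFormat.YUV_420_888)
val jpegSizes = configMap?.getOutputSizes(ImageFormat.JPEG)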
From the comments below, a working solution: "The error itself doesn't say much [...] but upon searching, it is found that multiple surfaces are not supported for JPEG format. Upon changing it to YUV_420_888 it works flawlessly."
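In code, that fix amounts to pinning the formats instead of passing the same internalOutputFormat to both readers; a sketch reusing the question's variables:

import android.graphics.ImageFormat
import android.media.ImageReader

// Sketch of the fix: the frame-grabbing reader stays YUV_420_888 (JPEG here
// is what caused the blank screen with three surfaces); stills remain JPEG.
previewImageReader = ImageReader.newInstance(
    largestPreview.width,
    largestPreview.height,
    ImageFormat.YUV_420_888,
    4 // maxImages
)
captureImageReader = ImageReader.newInstance(
    largestPicture.width,
    largestPicture.height,
    ImageFormat.JPEG,
    2 // maxImages
)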
I use the CameraX API on Android to analyze multiple frames over a period of 5 to 60 seconds. There are several conditional tasks I want to run on the images, depending on what the user selected. These include:
scan for barcodes/QR codes (using Google ML Kit)
scan for text (using Google ML Kit)
custom edge detection using OpenCV in C++ with JNI
save image as PNG file (lossless)
show frames in app (PreviewView or ImageView)
These tasks vary heavily in workload and time to finish, so instead of waiting for each task to finish before getting a new frame, I want to receive frames continuously and have each task pick up only the newest frame once it is done with its previous workload.
While ML Kit takes YUV images as input, OpenCV uses RGBA (or BGRA), so whichever output format I choose, I will need to convert somewhere. My choice was RGBA_8888 as the output format, converted to a bitmap, since bitmaps are supported by both ML Kit and OpenCV, and the RGBA-to-bitmap conversion is much quicker than YUV-to-bitmap. But using bitmaps I run into huge memory problems, to the extent that the app simply gets killed by Android. Using the Android Studio Profiler, I noticed the native part of RAM usage climbing constantly and staying that high even after the workload is done and the camera is unbound.
I read online that it is strongly suggested to recycle bitmaps after use to free their memory. The problem is that all these tasks run and finish independently, and I couldn't come up with a good way to recycle each bitmap as soon as possible without keeping bitmaps in memory for some fixed time (like 10 seconds), which heavily increases memory usage.
I thought about using jobs for each task and recycling when all jobs are done, but this doesn't work for the ML Kit analyses because they return their results via a listener, so the jobs end before the task is actually done.
I appreciate any input on how to efficiently recycle the bitmaps, use something other than bitmaps, reduce memory consumption, or improve the code in general!
Here are code samples for the image analysis and for the barcode scanner. They should suffice to give a general idea of the running code.
val imageAnalysisBuilder =
    ImageAnalysis
        .Builder()
        .setTargetResolution(android.util.Size(720, 1280))
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setOutputImageFormat(OUTPUT_IMAGE_FORMAT_RGBA_8888)
val imageAnalysis = imageAnalysisBuilder.build()

imageAnalysis.setAnalyzer(Executors.newSingleThreadExecutor()) { imageProxy ->
    // bitmap conversion from https://github.com/android/camera-samples
    val bitmap = Bitmap.createBitmap(imageProxy.width, imageProxy.height, Bitmap.Config.ARGB_8888)
    val rotationDegrees = imageProxy.imageInfo.rotationDegrees
    // use {} closes the ImageProxy when the block exits, so no extra close() is needed
    imageProxy.use { bitmap.copyPixelsFromBuffer(it.planes[0].buffer) }

    if (!barcodeScannerBusy) {
        analysisScope.launch { startMlKitBarcodeScanner(bitmap, rotationDegrees) } // analysisScope: some CoroutineScope
    }
    if (!textRecognitionBusy) {
        analysisScope.launch { startMlKitTextRecognition(bitmap, rotationDegrees) }
    }
    // more tasks with same pattern
    // when to recycle bitmap???
}
private fun startMlKitBarcodeScanner(bitmap: Bitmap, rotationDegrees: Int) {
    barcodeScannerBusy = true
    val inputImage = InputImage.fromBitmap(bitmap, rotationDegrees)
    barcodeScanner?.process(inputImage)
        ?.addOnSuccessListener { barcodes ->
            // do stuff with barcodes
        }
        ?.addOnFailureListener {
            // failure handling
        }
        ?.addOnCompleteListener {
            barcodeScannerBusy = false
            // can't recycle bitmap here since other tasks might still use it
        }
}
I have solved the issue by now, mainly by using a separate bitmap buffer variable for each task that works with the image. The downside is that, in the worst case, I create the same bitmap multiple times in a row; the upside is that each task can use its own bitmap independently of every other task.
Also, since the device I use is not the most powerful (quite the contrary, in fact), I decided to split some of the tasks across multiple analyzers and assign a new analyzer to the camera only when needed.
Note that when copying the planes of the imageProxy multiple times the way I do here, you need to call rewind() on the plane's buffer after each copy, so the next copy starts from position zero again.
lateinit var barcodeScannerBitmapBuffer: Bitmap
lateinit var textRecognitionBitmapBuffer: Bitmap

val imageAnalysisBuilder =
    ImageAnalysis
        .Builder()
        .setTargetResolution(android.util.Size(720, 1280))
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setOutputImageFormat(OUTPUT_IMAGE_FORMAT_RGBA_8888)
val imageAnalysis = imageAnalysisBuilder.build()

imageAnalysis.setAnalyzer(Executors.newSingleThreadExecutor()) { imageProxy ->
    if (barcodeScannerBusy && textRecognitionBusy) {
        imageProxy.close()
        return@setAnalyzer
    }
    if (!::barcodeScannerBitmapBuffer.isInitialized) {
        barcodeScannerBitmapBuffer = Bitmap.createBitmap(
            imageProxy.width,
            imageProxy.height,
            Bitmap.Config.ARGB_8888
        )
    }
    if (!::textRecognitionBitmapBuffer.isInitialized) {
        textRecognitionBitmapBuffer = Bitmap.createBitmap(
            imageProxy.width,
            imageProxy.height,
            Bitmap.Config.ARGB_8888
        )
    }

    val rotationDegrees = imageProxy.imageInfo.rotationDegrees

    // Access the planes directly and close once at the end; wrapping each copy
    // in use {} would close the proxy after the first copy.
    if (!barcodeScannerBusy) {
        // bitmap conversion from https://github.com/android/camera-samples
        barcodeScannerBitmapBuffer.copyPixelsFromBuffer(imageProxy.planes[0].buffer)
        imageProxy.planes[0].buffer.rewind() // rewind so the next copy starts at position zero
    }
    if (!textRecognitionBusy) {
        textRecognitionBitmapBuffer.copyPixelsFromBuffer(imageProxy.planes[0].buffer)
    }
    imageProxy.close()

    if (::barcodeScannerBitmapBuffer.isInitialized && !barcodeScannerBusy) {
        startMlKitBarcodeScanner(barcodeScannerBitmapBuffer, rotationDegrees)
    }
    if (::textRecognitionBitmapBuffer.isInitialized && !textRecognitionBusy) {
        startMlKitTextRecognition(textRecognitionBitmapBuffer, rotationDegrees)
    }
}
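As an aside, a hypothetical alternative to per-task buffers would be reference counting: share one bitmap, have every task release it when done (for the ML Kit tasks, inside addOnCompleteListener), and recycle once the count reaches zero. A minimal sketch, not the solution I actually used:

import android.graphics.Bitmap
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical wrapper: recycles the bitmap as soon as the last task releases it.
class SharedBitmap(val bitmap: Bitmap, taskCount: Int) {
    private val remaining = AtomicInteger(taskCount)

    // Each task must call release() exactly once, including on failure paths.
    fun release() {
        if (remaining.decrementAndGet() == 0) bitmap.recycle()
    }
}

This sidesteps the listener problem described above, because release() is called from addOnCompleteListener rather than when the coroutine returns.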
I'm developing a Face Detection feature with Camera2 and MLKit.
In the Developer Guide, in the Performance Tips part, they say to capture images in ImageFormat.YUV_420_888 format if using the Camera2 API, which is my case.
Then, in the Face Detector part, they recommend using an image with dimensions of at least 480x360 pixels for face recognition in real time, which is again my case.
OK, let's go! Here is my code, which works well:
private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {
    // Open the selected camera
    cameraDevice = openCamera(cameraManager, getCameraId(), cameraHandler)

    val previewSize = if (isPortrait) {
        Size(RECOMMANDED_CAPTURE_SIZE.width, RECOMMANDED_CAPTURE_SIZE.height)
    } else {
        Size(RECOMMANDED_CAPTURE_SIZE.height, RECOMMANDED_CAPTURE_SIZE.width)
    }

    // Initialize an image reader which will be used to display a preview
    imageReader = ImageReader.newInstance(
        previewSize.width, previewSize.height, ImageFormat.YUV_420_888, IMAGE_BUFFER_SIZE)

    // Retrieve preview's frame and run detector
    imageReader.setOnImageAvailableListener({ reader ->
        lifecycleScope.launch(Dispatchers.Main) {
            val image = reader.acquireNextImage()
            logD { "Image available: ${image.timestamp}" }
            faceDetector.runFaceDetection(image, getRotationCompensation())
            image.close()
        }
    }, imageReaderHandler)

    // Create the list of Surfaces where the camera will output frames
    val targets = listOf(viewfinder.holder.surface, imageReader.surface)

    // Start a capture session using our open camera and list of Surfaces where frames will go
    session = createCaptureSession(cameraDevice, targets, cameraHandler)

    val captureRequest = cameraDevice.createCaptureRequest(
        CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(viewfinder.holder.surface)
        addTarget(imageReader.surface)
    }

    // This will keep sending the capture request as frequently as possible until the
    // session is torn down or session.stopRepeating() is called
    session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)
}
Now, I want to capture a still image... and this is my problem because, ideally, I want:
a full-resolution image or, at least, one bigger than 480x360
in JPEG format to be able to save it
The Camera2Basic sample demonstrates how to capture an image (the Video and SlowMotion samples crash), and the ML Kit sample uses the old Camera API! Fortunately, I succeeded in mixing these samples to develop my feature, but I fail to capture a still image at a different resolution.
I think I have to stop the preview session and recreate one for the image capture, but I'm not sure...
What I have done is the following, but it captures images at 480x360:
session.stopRepeating()

// Unset the image reader listener
imageReader.setOnImageAvailableListener(null, null)

// Initialize a new image reader which will be used to capture still photos
// imageReader = ImageReader.newInstance(768, 1024, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)

// Start a new image queue
val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireNextImage()
    logD { "[Still] Image available in queue: ${image.timestamp}" }
    if (imageQueue.size >= IMAGE_BUFFER_SIZE - 1) {
        imageQueue.take().close()
    }
    imageQueue.add(image)
}, imageReaderHandler)

// Create the list of Surfaces where the camera will output frames
val targets = listOf(viewfinder.holder.surface, imageReader.surface)
val captureRequest = createStillCaptureRequest(cameraDevice, targets)

session.capture(captureRequest, object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        super.onCaptureCompleted(session, request, result)
        val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
        logD { "Capture result received: $resultTimestamp" }

        // Set a timeout in case the image captured is dropped from the pipeline
        val exc = TimeoutException("Image dequeuing took too long")
        val timeoutRunnable = Runnable { continuation.resumeWithException(exc) }
        imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)

        // Loop in the coroutine's context until an image with a matching timestamp comes.
        // We need to launch the coroutine context again because the callback is done in
        // the handler provided to the `capture` method, not in our coroutine context
        @Suppress("BlockingMethodInNonBlockingContext")
        lifecycleScope.launch(continuation.context) {
            while (true) {
                // Dequeue images while timestamps don't match
                val image = imageQueue.take()
                if (image.timestamp != resultTimestamp) continue
                logD { "Matching image dequeued: ${image.timestamp}" }

                // Unset the image reader listener
                imageReaderHandler.removeCallbacks(timeoutRunnable)
                imageReader.setOnImageAvailableListener(null, null)

                // Clear the queue of images, if there are any left
                while (imageQueue.size > 0) {
                    imageQueue.take().close()
                }

                // Compute EXIF orientation metadata
                val rotation = getRotationCompensation()
                val mirrored = cameraFacing == CameraCharacteristics.LENS_FACING_FRONT
                val exifOrientation = computeExifOrientation(rotation, mirrored)
                logE { "captured image size (w/h): ${image.width} / ${image.height}" }

                // Build the result and resume progress
                continuation.resume(CombinedCaptureResult(
                    image, result, exifOrientation, imageReader.imageFormat))

                // There is no need to break out of the loop, this coroutine will suspend
            }
        }
    }
}, cameraHandler)
If I uncomment the new ImageReader instantiation, I get this exception:
java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
Can anyone help me?
This IllegalArgumentException:
java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
... obviously refers to imageReader.surface.
Meanwhile (with CameraX) this works differently; see CameraFragment.kt and Issue #197: Firebase Face Detection API issue while using the CameraX API. There might soon be a sample application matching your use case.
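The underlying rule in Camera2 is that every Surface a CaptureRequest targets must have been included in the createCaptureSession() call that configured the session. A hedged sketch of doing that up front, reusing the question's names (stillImageReader and its size are hypothetical):

import android.graphics.ImageFormat
import android.media.ImageReader

// Hypothetical full-resolution JPEG reader, created *before* the session so
// its surface can be included in the session's output list.
val stillImageReader = ImageReader.newInstance(
    4032, 3024, ImageFormat.JPEG, IMAGE_BUFFER_SIZE // example sensor-size values
)

// Configure the session with every surface any later request will target.
val targets = listOf(
    viewfinder.holder.surface, // preview
    imageReader.surface,       // 480x360 YUV frames for the face detector
    stillImageReader.surface   // full-resolution JPEG stills
)
session = createCaptureSession(cameraDevice, targets, cameraHandler)

// A still-capture request can now target stillImageReader.surface without
// raising "CaptureRequest contains unconfigured Input/Output Surface!".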
ImageReader is sensitive to the choice of format and/or combination of usage flags. The documentation points out that certain combinations of format may be unsupported. On some Android devices (perhaps some older phone models) you might find that the IllegalArgumentException is not thrown when using the JPEG format. But that doesn't help much; you want something versatile.
What I have done in the past is use the ImageFormat.YUV_420_888 format (this will be backed by the hardware and the ImageReader implementation). This format contains no pre-optimizations that would prevent the application from accessing the image via its internal array of planes. I notice you have already used it successfully in your initializeCamera() method.
You can then extract the image data from the frame you want. Note that Image planes are backed by direct ByteBuffers, so calling getBuffer().array() would throw UnsupportedOperationException; copy the bytes out explicitly instead:
Image.Plane[] planes = img.getPlanes();
ByteBuffer yBuffer = planes[0].getBuffer();
byte[] data = new byte[yBuffer.remaining()];
yBuffer.get(data); // Y plane only; see the NV21 sketch below for the chroma planes
and then, via a Bitmap, create the still image using JPEG compression, PNG, or whichever encoding you choose. Note that YuvImage expects full NV21 data, i.e. the luma plane followed by interleaved chroma, not just the Y plane:
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] imageBytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);

ByteArrayOutputStream out2 = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 75, out2);
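For reference, a fuller YUV_420_888 to NV21 conversion has to interleave the chroma planes. A hedged Kotlin sketch for the common interleaved layout (it assumes pixelStride == 2 on the U/V planes and no row padding; production code should verify the strides and fall back to a per-pixel copy otherwise):

import android.media.Image

// Build an NV21 byte array from a YUV_420_888 Image (interleaved-chroma case).
fun yuv420888ToNv21(image: Image): ByteArray {
    val yBuffer = image.planes[0].buffer
    val uBuffer = image.planes[1].buffer
    val vBuffer = image.planes[2].buffer

    val ySize = yBuffer.remaining()
    val uSize = uBuffer.remaining()
    val vSize = vBuffer.remaining()

    val nv21 = ByteArray(ySize + uSize + vSize)
    yBuffer.get(nv21, 0, ySize)
    // With interleaved planes the V buffer already reads VUVU..., which is
    // exactly NV21's chroma order (short of the final U byte).
    vBuffer.get(nv21, ySize, vSize)
    uBuffer.get(nv21, ySize + vSize, uSize)
    return nv21
}

The resulting array can be passed to the YuvImage constructor above in place of data.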
I am using the PJSIP library for SIP video calls. I am facing an issue displaying my own preview in a SurfaceView.
(Screenshots of the actual view and the expected view omitted.)
I fetch the preview ID in onCallMediaState:
mVideoPreview = VideoPreview(mediaInfo.videoCapDev)
mVideoWindow = VideoWindow(mediaInfo.videoIncomingWindowId)
The code I use to display this preview in my SurfaceView:
fun updateVideoPreview(holder: SurfaceHolder) {
    if (SipManager.currentCall != null && SipManager.currentCall?.mVideoPreview != null) {
        if (videoPreviewActive) {
            val vidWH = VideoWindowHandle()
            vidWH.handle?.setWindow(holder.surface)
            val vidPrevParam = VideoPreviewOpParam()
            vidPrevParam.window = vidWH
            try {
                SipManager.currentCall?.mVideoPreview?.start(vidPrevParam)
            } catch (e: Exception) {
                println(e)
            }
        } else {
            try {
                SipManager.currentCall?.mVideoPreview?.stop()
            } catch (e: Exception) {
                println(e)
            }
        }
    }
}
I know that the person on the other side will always receive a mirrored view of my video, but my own view should not be mirrored.
My feeling is that I am displaying the same preview that is sent to the other person. I can't find a single hint about how to display my own view (without the mirror effect) using the PJSIP library.
Can anyone please help me with this?
What I did was replace the SurfaceView with a TextureView and then check:
if (isFrontCamera) {
    val matrix = Matrix()
    matrix.setScale(-1.0f, 1.0f)
    matrix.postTranslate(width.toFloat(), 0.0f)
    surfacePreviewCapture.setTransform(matrix)
}
And it worked.
Hope it helps others. :)
====== UPDATE ======
When I checked the back camera, its view was also flipped, so I needed to reset the transform to make it correct:
surfacePreviewCapture.setTransform(null)
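Putting both cases together, a small hypothetical helper (surfacePreviewCapture from the snippets above would be passed in as the TextureView):

import android.graphics.Matrix
import android.view.TextureView

// Mirror only the front camera's preview; reset to identity for the back camera.
fun applyPreviewMirror(view: TextureView, isFrontCamera: Boolean) {
    if (isFrontCamera) {
        val matrix = Matrix().apply {
            setScale(-1.0f, 1.0f)                     // flip horizontally
            postTranslate(view.width.toFloat(), 0.0f) // shift back into view bounds
        }
        view.setTransform(matrix)
    } else {
        view.setTransform(null)
    }
}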
Instead of using SurfaceView, you can use TextureView for your preview, which you can flip afterwards. Have a look at How to keep android from inverting the image from the front facing camera? as a reference.
I am making a project where you are supposed to be able to change the delay with which the feed from the phone camera is shown, so that people can see how their brains handle the delay/latency. I have managed to show the camera feed on a canvas that follows the camera around and fills the whole view of the Google Cardboard, but I am wondering how I could delay this video feed. Perhaps by using an image array of some sort?
I have searched for solutions online, but have come up short of an answer. I tried a Texture2D array, but the performance was really bad (I tried a modified version of this).
private bool camAvailable;
private WebCamTexture backCam;
private Texture defaultBackground;

public RawImage background;
public AspectRatioFitter fit;

// Start is called before the first frame update
void Start()
{
    defaultBackground = background.texture;
    WebCamDevice[] devices = WebCamTexture.devices;
    if (devices.Length == 0)
    {
        Debug.Log("No camera detected");
        camAvailable = false;
        return;
    }

    for (int i = 0; i < devices.Length; i++)
    {
        if (!devices[i].isFrontFacing)
        {
            // Used to find the correct (back-facing) camera
            backCam = new WebCamTexture(devices[i].name, Screen.width, Screen.height);
        }
    }

    if (backCam == null)
    {
        Debug.Log("Unable to find back camera");
        return;
    }

    backCam.Play();
    background.texture = backCam;
    camAvailable = true;
}
// Tell me if this is not enough code; I don't have a lot of experience in Unity,
// so I am unsure of how much is required for a minimal reproducible example.
Should I use some sort of frame buffer or image/texture array for delaying the video? (Start "recording", wait a specified amount of time, start playing the "video" on the screen)
Thanks in advance!
I am developing an android camera application, and I wanted to pass in the capture size to configure the camera before taking a picture.
This is my code:
try {
    mCaptureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    mCaptureRequestBuilder.addTarget(previewSurface);
    InputConfiguration inputConfiguration = new InputConfiguration(1920, 1080, ImageFormat.JPEG); // error here
    cameraDevice.createReprocessableCaptureSession(inputConfiguration, Arrays.asList(previewSurface),
            new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
            try {
                cameraCaptureSession.setRepeatingRequest(mCaptureRequestBuilder.build(), null, handler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
            Toast.makeText(getApplicationContext(), "Camera Preview Failed!!", Toast.LENGTH_SHORT).show();
        }
    }, null);
}
So, I am trying to pass an input configuration to the camera here.
My problem is I'm getting an error on the InputConfiguration line.
This is my error:
java.lang.IllegalArgumentException: input format 256 is not valid
I tried this with a lot of different ImageFormats like JPEG, UNKNOWN, NV21 and others. It's not working.
Please help me resolve this error, and if my approach to interacting with the camera is wrong, do tell me.
You are working with TEMPLATE_PREVIEW, which does not support ImageFormat.JPEG.
Camera2 mandates that preview supports YUV 420, like this:
InputConfiguration inputConfiguration = new InputConfiguration(1920, 1080, ImageFormat.YUV_420_888);
Input configurations are only used in reprocessing use cases, where you keep an application-level circular buffer of captured, partially processed frames.
When the user hits the shutter button, you send one of the captured frames back to the camera device for final processing. The input configuration selects the size and format of this path back into the camera.
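For completeness, a hedged sketch of that round trip, assuming a reprocessable session already exists and that yuvImage plus its totalCaptureResult came from an earlier capture into a YUV ImageReader (jpegReader is a hypothetical JPEG output reader):

import android.media.ImageWriter

// Feed a previously captured YUV frame back into the camera for final processing.
val inputWriter = ImageWriter.newInstance(session.inputSurface!!, 2)
inputWriter.queueInputImage(yuvImage)

// Build a reprocess request from the original frame's capture result and
// direct the final output to the JPEG reader.
val reprocessRequest = cameraDevice
    .createReprocessCaptureRequest(totalCaptureResult)
    .apply { addTarget(jpegReader.surface) }

session.capture(reprocessRequest.build(), null, handler)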
For simple capture applications, you only care about the output configurations.
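A minimal sketch of that simple path, with no InputConfiguration at all (stateCallback and handler are assumed to exist; the JPEG size is just an example):

import android.graphics.ImageFormat
import android.media.ImageReader

// Output-only session: preview plus a JPEG reader for stills, no input stream.
val jpegReader = ImageReader.newInstance(1920, 1080, ImageFormat.JPEG, 2)
cameraDevice.createCaptureSession(
    listOf(previewSurface, jpegReader.surface), // output surfaces only
    stateCallback,
    handler
)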
Another unfortunate case is described here: https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics.html#REPROCESS_MAX_CAPTURE_STALL
Check that your camera supports reprocessing; otherwise you would not get past "input format is not valid" at all, as no input would be allowed for reprocessing.
The absence of this capability value can also signal that YUV reprocessing is not available: https://developer.android.com/reference/android/hardware/camera2/CameraMetadata#REQUEST_AVAILABLE_CAPABILITIES_YUV_REPROCESSING
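A sketch of that check (characteristics obtained from CameraManager.getCameraCharacteristics):

import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraMetadata

// Verify YUV reprocessing support before attempting a reprocessable session.
val capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
val supportsYuvReprocessing =
    capabilities?.contains(CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_YUV_REPROCESSING) == true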