I'm trying to capture a picture with the overlay included in the image capture. I was able to add the overlay to the PreviewView using cameraView.overlay.add(binding.textView). However, the overlay was not included when saving an image with imageCapture; only the picture was saved. How do I save an image with the overlay included when using PreviewView from CameraX?
Please don't mark this as a duplicate. I researched a lot, and most of the examples online use the old Camera API, which does not apply to the CameraX library. Any help is appreciated. Thanks in advance.
Here is my code:
<FrameLayout
    android:id="@+id/camera_wrapper"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    app:layout_constraintTop_toTopOf="@id/space1"
    app:layout_constraintBottom_toBottomOf="@id/space">

    <androidx.camera.view.PreviewView
        android:id="@+id/camera_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <TextView
        android:id="@+id/text_view"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center"
        android:text="Hello world"
        android:textSize="42sp"
        android:textColor="@android:color/holo_green_dark" />
</FrameLayout>
private lateinit var outputDirectory: File
private lateinit var cameraExecutor: ExecutorService

private var preview: Preview? = null
private var lensFacing: Int = CameraSelector.LENS_FACING_FRONT
private var imageCapture: ImageCapture? = null
private var camera: Camera? = null
private var cameraProvider: ProcessCameraProvider? = null

override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    outputDirectory = getOutputDirectory()
    cameraExecutor = Executors.newSingleThreadExecutor()
}

private fun setupCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(requireContext())
    cameraProviderFuture.addListener(
        Runnable {
            // Used to bind the lifecycle of cameras to the lifecycle owner
            cameraProvider = cameraProviderFuture.get()

            // Get screen metrics used to setup camera for full screen resolution
            val metrics = DisplayMetrics().also { binding.cameraView.display.getRealMetrics(it) }
            Timber.d("Screen metrics: ${metrics.widthPixels} x ${metrics.heightPixels}")

            val screenAspectRatio = aspectRatio(metrics.widthPixels, metrics.heightPixels)
            Timber.d("Preview aspect ratio: $screenAspectRatio")

            val rotation = binding.cameraView.display.rotation

            // CameraProvider
            val cameraProvider = cameraProvider
                ?: throw IllegalStateException("Camera initialization failed.")

            // CameraSelector
            val cameraSelector = CameraSelector.Builder().requireLensFacing(lensFacing).build()

            // Add text overlay
            binding.cameraView.overlay.add(binding.textView)

            // Preview
            preview = Preview.Builder()
                // We request aspect ratio but no resolution
                .setTargetAspectRatio(screenAspectRatio)
                // Set initial target rotation
                .setTargetRotation(rotation)
                .build()

            // ImageCapture
            imageCapture = ImageCapture.Builder()
                .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
                // We request aspect ratio but no resolution to match preview config, but letting
                // CameraX optimize for whatever specific resolution best fits our use cases
                .setTargetAspectRatio(screenAspectRatio)
                // Set initial target rotation; we will have to call this again if rotation changes
                // during the lifecycle of this use case
                .setTargetRotation(rotation)
                .build()

            // Must unbind the use-cases before rebinding them
            cameraProvider.unbindAll()

            try {
                // A variable number of use-cases can be passed here -
                // camera provides access to CameraControl & CameraInfo
                camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageCapture)
                // Attach the viewfinder's surface provider to preview use case
                preview?.setSurfaceProvider(binding.cameraView.surfaceProvider)
            } catch (exc: Exception) {
                Toast.makeText(requireContext(), "Something went wrong. Please try again.", Toast.LENGTH_SHORT).show()
                findNavController().navigateUp()
            }
        },
        ContextCompat.getMainExecutor(requireContext())
    )
}

private fun takePhoto() {
    imageCapture?.let { imageCapture ->
        // Create output file to hold the image
        val photoFile = createFile(outputDirectory, FILENAME, PHOTO_EXTENSION)

        // Setup image capture metadata
        val metadata = ImageCapture.Metadata().apply {
            // Mirror image when using the front camera
            isReversedHorizontal = lensFacing == CameraSelector.LENS_FACING_FRONT
        }

        // Create output options object which contains file + metadata
        val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile)
            .setMetadata(metadata)
            .build()

        // Setup image capture listener which is triggered after photo has been taken
        imageCapture.takePicture(outputOptions, cameraExecutor, object : ImageCapture.OnImageSavedCallback {
            override fun onError(exc: ImageCaptureException) {
                Timber.e(exc, "Photo capture failed: ${exc.message}")
            }

            override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                val savedUri = output.savedUri ?: Uri.fromFile(photoFile)
                Timber.d("Photo capture succeeded: $savedUri")

                // Implicit broadcasts will be ignored for devices running API level >= 24,
                // so if you only target API level 24+ you can remove this statement
                if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N) {
                    requireActivity()
                        .sendBroadcast(Intent(android.hardware.Camera.ACTION_NEW_PICTURE, savedUri))
                }

                // If the folder selected is an external media directory, this is
                // unnecessary, but otherwise other apps will not be able to access our
                // images unless we scan them using [MediaScannerConnection]
                val mimeType = MimeTypeMap.getSingleton()
                    .getMimeTypeFromExtension(savedUri.toFile().extension)
                MediaScannerConnection.scanFile(
                    context,
                    arrayOf(savedUri.toFile().absolutePath),
                    arrayOf(mimeType)
                ) { _, uri ->
                    Timber.d("Image capture scanned into media store: $uri")
                }
            }
        })
    }
}
You must overlay the text on the image yourself. I would suggest using takePicture(Executor, …), which delivers the JPEG in memory; then overlay your text using one of the third-party libraries (this is not part of the Android framework, nor of Jetpack), and save the result to a file.
If you can compromise on image quality, you can draw the JPEG on a Bitmap canvas and draw your text on top.
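To illustrate that decode-draw-encode idea, here is a plain-JVM sketch using java.awt (on Android you would use BitmapFactory, Canvas, Paint, and Bitmap.compress instead; the function name and font choice here are my own, for illustration only):

```kotlin
import java.awt.Color
import java.awt.Font
import java.awt.image.BufferedImage
import java.io.ByteArrayOutputStream
import javax.imageio.ImageIO

// Draw a text overlay onto a decoded image, then re-encode it as JPEG.
// On Android, the analogous calls are BitmapFactory.decodeByteArray(...),
// Canvas(bitmap).drawText(...), and Bitmap.compress(Bitmap.CompressFormat.JPEG, ...).
fun overlayTextOnImage(src: BufferedImage, text: String): ByteArray {
    val g = src.createGraphics()
    g.color = Color.GREEN
    g.font = Font(Font.SANS_SERIF, Font.BOLD, 42)
    val fm = g.fontMetrics
    // Center the text, like layout_gravity="center" in the layout above
    g.drawString(text, (src.width - fm.stringWidth(text)) / 2, src.height / 2)
    g.dispose()
    val out = ByteArrayOutputStream()
    ImageIO.write(src, "jpg", out) // re-encoding is where the quality loss happens
    return out.toByteArray()
}
```

Re-encoding the JPEG is the lossy step mentioned above; if you composite only once on the captured bytes, the loss is usually acceptable.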
You can go with the AndroidWM library: https://github.com/huangyz0918/AndroidWM. Also, if you want to build your own solution, its source is a useful reference.
Simple usage looks like this:
WatermarkText watermarkText = new WatermarkText(inputText)
        .setPositionX(0.5)
        .setPositionY(0.5)
        .setTextColor(Color.WHITE)
        .setTextFont(R.font.champagne)
        .setTextShadow(0.1f, 5, 5, Color.BLUE)
        .setTextAlpha(150)
        .setRotation(30)
        .setTextSize(20);

val bmFinal: Bitmap = WatermarkBuilder
    .create(applicationContext, capturedImageBitmap)
    .loadWatermarkText(watermarkText)
    .watermark
    .outputImage

// Then save it:
fun saveImage(bitmap: Bitmap, photoFile: File) {
    FileOutputStream(photoFile).use { output ->
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, output)
    }
    Toast.makeText(
        this@MainActivity,
        "Image saved at ${photoFile.absolutePath}",
        Toast.LENGTH_LONG
    ).show()
}
val photoFile = File(
    outputDirectory,
    SimpleDateFormat(FILENAME_FORMAT, Locale.US).format(System.currentTimeMillis()) + ".jpg"
)
saveImage(bmFinal, photoFile)
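For reference, the timestamped file name above can be generated with plain-JVM code; FILENAME_FORMAT is not shown in the answer, so the pattern below is an assumption taken from the official CameraX sample app:

```kotlin
import java.text.SimpleDateFormat
import java.util.Locale

// Assumed pattern (matches the CameraX sample); adjust to your own FILENAME_FORMAT.
val FILENAME_FORMAT = "yyyy-MM-dd-HH-mm-ss-SSS"

// Builds a unique, sortable file name such as "2024-01-31-12-00-00-000.jpg"
fun timestampedJpegName(timeMillis: Long): String =
    SimpleDateFormat(FILENAME_FORMAT, Locale.US).format(timeMillis) + ".jpg"
```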
Maybe you can use the CameraSource class and put your preview/overlay inside it:
val cameraSource = CameraSource.Builder(requireContext(), FakeDetector()).build()
cameraSource.start(your_preview_overlay)
After that, you have this API available:
takePicture(CameraSource.ShutterCallback shutter, CameraSource.PictureCallback jpeg)
CameraSource (https://developers.google.com/android/reference/com/google/android/gms/vision/CameraSource) is intended for detection, but you can create a fake Detector (with nothing to detect).
@alexcohn's answer is the preferred one if you cannot afford to lose quality. However, if quality is not a big deal, you can do this.
<FrameLayout
    android:id="@+id/camera_wrapper"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    app:layout_constraintTop_toTopOf="@id/space1"
    app:layout_constraintBottom_toBottomOf="@id/space">

    <androidx.camera.view.PreviewView
        android:id="@+id/camera_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <ImageView
        android:id="@+id/selfie"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:scaleType="centerCrop"
        android:visibility="gone"
        tools:visibility="visible"
        tools:background="@color/gray" />

    <ImageView
        android:id="@+id/overlay"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:scaleType="centerCrop"
        tools:src="@drawable/full_frame_gd" />
</FrameLayout>
PreviewView has a built-in property that gives you a bitmap of the preview:
val bitmap = binding.cameraView.bitmap
binding.selfie.setImageBitmap(bitmap)
binding.selfie.visibility = View.VISIBLE
cameraExecutor.shutdown()
binding.cameraView.visibility = View.GONE
Now you have two image views: one for the selfie and one for the overlay. You can't take a screenshot of the PreviewView itself; it has some limitations I'm not entirely sure of, but there may be a way around them.
From here you can just take a screen capture of the two combined image views like this:
private fun captureView(view: View, window: Window, bitmapCallback: (Bitmap) -> Unit) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        // On Android O and above, use PixelCopy
        val bitmap = Bitmap.createBitmap(view.width, view.height, Bitmap.Config.ARGB_8888)
        val location = IntArray(2)
        view.getLocationInWindow(location)
        PixelCopy.request(
            window,
            Rect(
                location[0],
                location[1],
                location[0] + view.width,
                location[1] + view.height
            ),
            bitmap,
            {
                if (it == PixelCopy.SUCCESS) {
                    bitmapCallback.invoke(bitmap)
                }
            },
            Handler(Looper.getMainLooper())
        )
    } else {
        val tBitmap = Bitmap.createBitmap(
            view.width, view.height, Bitmap.Config.RGB_565
        )
        val canvas = Canvas(tBitmap)
        view.draw(canvas)
        canvas.setBitmap(null)
        bitmapCallback.invoke(tBitmap)
    }
}
In the takePhoto() function, you can remove the imageCapture.takePicture logic and replace it with this:
Handler(Looper.getMainLooper()).postDelayed({
    captureView(binding.cameraWrapper, requireActivity().window) {
        // Your new bitmap with the overlay is here; save it to a file like any other bitmap.
    }
}, 500)
Related
I am creating an application which must implement its own camera. I use the CameraX library provided by Google.
I noticed that there is a difference between the quality of the image captured by my own application and the image captured by the camera app installed on my phone, although the two photos are captured under the same conditions (light, position...). Especially when I zoom in, the details are blurrier in the image captured by my application. (In my case, my phone is a Google Pixel 5.)
Please see these two photos to see the difference:
Image by phone camera
Image by my app
And this is my code:
/**
 * Initialize CameraX, and prepare to bind the camera use cases
 */
private fun setupCamera() {
    val cameraProviderFuture: ListenableFuture<ProcessCameraProvider> = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        cameraProvider = cameraProviderFuture.get()
        lensFacing = when {
            hasBackCamera() -> CameraSelector.LENS_FACING_BACK
            hasFrontCamera() -> CameraSelector.LENS_FACING_FRONT
            else -> throw IllegalStateException("Back and front camera are unavailable")
        }
        bindCameraUseCases()
        setupCameraGestures()
    }, ContextCompat.getMainExecutor(this))
}

/**
 * Declare and bind preview, capture and analysis use cases.
 */
private fun bindCameraUseCases() {
    lifecycleScope.launch {
        val cameraProvider: ProcessCameraProvider = cameraProvider
            ?: throw IllegalStateException("Camera initialization failed.")

        // Try to apply extensions like HDR, NIGHT
        val extensionsManager: ExtensionsManager = ExtensionsManager.getInstanceAsync(this@ImageCaptureActivity, cameraProvider).await()
        val defaultCameraSelector: CameraSelector = CameraSelector.Builder()
            .requireLensFacing(lensFacing)
            .build()
        val finalCameraSelector: CameraSelector =
            if (extensionsManager.isExtensionAvailable(defaultCameraSelector, ExtensionMode.AUTO)) {
                extensionsManager.getExtensionEnabledCameraSelector(defaultCameraSelector, ExtensionMode.AUTO)
            } else {
                defaultCameraSelector
            }

        // Get screen metrics used to setup camera for full screen resolution
        val metrics: DisplayMetrics = resources.displayMetrics
        val screenAspectRatio: Int = aspectRatio(metrics.widthPixels, metrics.heightPixels)
        val rotation: Int = binding.cameraPreview.display.rotation

        preview = Preview.Builder()
            // We request aspect ratio but no resolution
            .setTargetAspectRatio(screenAspectRatio)
            // Set initial target rotation
            .setTargetRotation(rotation)
            .build()

        imageCapture = ImageCapture.Builder()
            // We request aspect ratio but no resolution to match preview config, but letting
            // CameraX optimize for whatever specific resolution best fits our use cases
            .setTargetAspectRatio(screenAspectRatio)
            // Set initial target rotation; we will have to call this again if rotation changes
            // during the lifecycle of this use case
            .setTargetRotation(rotation)
            .setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
            .setJpegQuality(100)
            .build()

        imageAnalyzer = ImageAnalysis.Builder()
            // We request aspect ratio but no resolution
            .setTargetAspectRatio(screenAspectRatio)
            .build()
        imageAnalyzer?.setAnalyzer(cameraExecutor, LuminosityAnalyzer {})

        // Must unbind the use-cases before rebinding them
        cameraProvider.unbindAll()

        try {
            // A variable number of use-cases can be passed here -
            // camera provides access to CameraControl & CameraInfo
            camera = cameraProvider.bindToLifecycle(this@ImageCaptureActivity, finalCameraSelector, preview, imageCapture, imageAnalyzer)
            // Attach the viewfinder's surface provider to preview use case
            preview?.setSurfaceProvider(binding.cameraPreview.surfaceProvider)
        } catch (exception: Exception) {
            exception.printStackTrace()
        }
    }
}
/**
 * [androidx.camera.core.ImageAnalysisConfig] requires enum value of [androidx.camera.core.AspectRatio].
 * Currently it has values of 4:3 & 16:9.
 *
 * Detecting the most suitable ratio for dimensions provided in params by comparing the absolute
 * difference of the preview ratio to each of the provided values.
 *
 * @param width - preview width
 * @param height - preview height
 * @return suitable aspect ratio
 */
private fun aspectRatio(width: Int, height: Int): Int {
    val previewRatio: Double = max(width, height).toDouble() / min(width, height)
    return if (abs(previewRatio - RATIO_4_3_VALUE) <= abs(previewRatio - RATIO_16_9_VALUE)) {
        AspectRatio.RATIO_4_3
    } else {
        AspectRatio.RATIO_16_9
    }
}
fun captureImage() {
    if (!permissionsOk()) return

    // Get a stable reference of the modifiable image capture use case
    imageCapture?.let { imageCapture ->
        // Create output file to hold the image
        val photoFile: File = storageUtils.createFile(
            baseFolder = getOutputPath(),
            fileName = System.currentTimeMillis().toString(),
            fileExtension = StorageUtils.PHOTO_EXTENSION
        )

        // Setup image capture metadata
        val metadata: Metadata = Metadata().also {
            // Mirror image when using the front camera
            it.isReversedHorizontal = lensFacing == CameraSelector.LENS_FACING_FRONT
            it.location = locationManager.lastKnownLocation
        }

        // Create output options object which contains file + metadata
        val outputOptions: ImageCapture.OutputFileOptions = ImageCapture.OutputFileOptions.Builder(photoFile)
            .setMetadata(metadata)
            .build()

        imagesAdapter.addImage(photoFile)

        // Setup image capture listener which is triggered after photo has been taken
        imageCapture.takePicture(outputOptions, cameraExecutor, object : ImageCapture.OnImageSavedCallback {
            override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                val savedUri: Uri = output.savedUri ?: return
                StorageUtils.showInGallery(savedUri.path)
                binding.list.post {
                    imagesAdapter.addImage(savedUri.toFile())
                    binding.list.smoothScrollToPosition(imagesAdapter.itemCount)
                }
            }

            override fun onError(exception: ImageCaptureException) {
                exception.printStackTrace()
            }
        })

        binding.cameraPreview.postDelayed({
            binding.backgroundEffect.isVisible = true
            binding.cameraPreview.postDelayed({
                binding.backgroundEffect.isVisible = false
            }, AppUtils.VERY_FAST_ANIMATION_MILLIS)
        }, AppUtils.FAST_ANIMATION_MILLIS)
    }
}
How can I improve the quality of my images? Is there anything I should do, such as a special filter or algorithm? I need your help, please.
If you took the photo on the Pixel with the default camera app (GCam), note that this app is packed with quality improvements backed by AI. It's a tough task to compete with the biggest players in quality. Try taking a photo with a third-party app like OpenCamera and compare that picture with the one taken by your app.
You can use the CameraX Extensions feature to enable HDR and low-light modes; this improves the image quality significantly.
I'm using CameraX with the Firebase ML Kit barcode reader to detect barcodes. The application identifies the barcode without a problem, but I'm trying to add a bounding box that shows the area of the barcode in the CameraX preview in real time. The bounding box information is retrieved from the barcode detector function, but it has neither the right position nor the right size, as you can see below.
This is the layout of my activity:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/camera_capture_button"
        android:layout_width="100dp"
        android:layout_height="100dp"
        android:layout_marginBottom="50dp"
        android:scaleType="fitCenter"
        android:text="Take Photo"
        android:elevation="2dp"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintBottom_toBottomOf="parent" />

    <SurfaceView
        android:id="@+id/overlayView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <androidx.camera.view.PreviewView
        android:id="@+id/previewView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
The SurfaceView is used to draw the rectangle shape.
Barcode detection happens in the BarcodeAnalyzer class, which implements ImageAnalysis.Analyzer. Inside the overridden analyze function I retrieve the barcode data like below:
@SuppressLint("UnsafeExperimentalUsageError")
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    val rotationDegrees = degreesToFirebaseRotation(imageProxy.imageInfo.rotationDegrees)
    if (mediaImage != null) {
        val analyzedImageHeight = mediaImage.height
        val analyzedImageWidth = mediaImage.width
        val image = FirebaseVisionImage
            .fromMediaImage(mediaImage, rotationDegrees)
        detector.detectInImage(image)
            .addOnSuccessListener { barcodes ->
                for (barcode in barcodes) {
                    val bounds = barcode.boundingBox
                    val corners = barcode.cornerPoints
                    val rawValue = barcode.rawValue
                    if (::barcodeDetectListener.isInitialized && rawValue != null && bounds != null) {
                        barcodeDetectListener.onBarcodeDetect(
                            rawValue,
                            bounds,
                            analyzedImageWidth,
                            analyzedImageHeight
                        )
                    }
                }
                imageProxy.close()
            }
            .addOnFailureListener {
                Log.e(tag, "Barcode Reading Exception: ${it.localizedMessage}")
                imageProxy.close()
            }
            .addOnCanceledListener {
                Log.e(tag, "Barcode Reading Canceled")
                imageProxy.close()
            }
    }
}
barcodeDetectListener is a reference to an interface I created to communicate this data back to my activity:
interface BarcodeDetectListener {
    fun onBarcodeDetect(code: String, codeBound: Rect, imageWidth: Int, imageHeight: Int)
}
In my main activity, I send this data to OverlaySurfaceHolder, which implements SurfaceHolder.Callback. This class is responsible for drawing the bounding box on the overlaid SurfaceView.
override fun onBarcodeDetect(code: String, codeBound: Rect, analyzedImageWidth: Int,
                             analyzedImageHeight: Int) {
    Log.i(TAG, "barcode : $code")
    overlaySurfaceHolder.repositionBound(codeBound, previewView.width, previewView.height,
        analyzedImageWidth, analyzedImageHeight)
    overlayView.invalidate()
}
As you can see, I'm sending the overlaid SurfaceView's width and height for the calculation in the OverlaySurfaceHolder class.
OverlaySurfaceHolder.kt
class OverlaySurfaceHolder : SurfaceHolder.Callback {
    var previewViewWidth: Int = 0
    var previewViewHeight: Int = 0
    var analyzedImageWidth: Int = 0
    var analyzedImageHeight: Int = 0

    private lateinit var drawingThread: DrawingThread
    private lateinit var barcodeBound: Rect
    private val tag = OverlaySurfaceHolder::class.java.simpleName

    override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
    }

    override fun surfaceDestroyed(holder: SurfaceHolder?) {
        var retry = true
        drawingThread.running = false
        while (retry) {
            try {
                drawingThread.join()
                retry = false
            } catch (e: InterruptedException) {
            }
        }
    }

    override fun surfaceCreated(holder: SurfaceHolder?) {
        drawingThread = DrawingThread(holder)
        drawingThread.running = true
        drawingThread.start()
    }

    fun repositionBound(codeBound: Rect, previewViewWidth: Int, previewViewHeight: Int,
                        analyzedImageWidth: Int, analyzedImageHeight: Int) {
        this.barcodeBound = codeBound
        this.previewViewWidth = previewViewWidth
        this.previewViewHeight = previewViewHeight
        this.analyzedImageWidth = analyzedImageWidth
        this.analyzedImageHeight = analyzedImageHeight
    }

    inner class DrawingThread(private val holder: SurfaceHolder?) : Thread() {
        var running = false

        private fun adjustXCoordinates(valueX: Int): Float {
            return if (previewViewWidth != 0) {
                (valueX / analyzedImageWidth.toFloat()) * previewViewWidth.toFloat()
            } else {
                valueX.toFloat()
            }
        }

        private fun adjustYCoordinates(valueY: Int): Float {
            return if (previewViewHeight != 0) {
                (valueY / analyzedImageHeight.toFloat()) * previewViewHeight.toFloat()
            } else {
                valueY.toFloat()
            }
        }

        override fun run() {
            while (running) {
                if (::barcodeBound.isInitialized) {
                    val canvas = holder!!.lockCanvas()
                    if (canvas != null) {
                        synchronized(holder) {
                            canvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR)
                            val myPaint = Paint()
                            myPaint.color = Color.rgb(20, 100, 50)
                            myPaint.strokeWidth = 6f
                            myPaint.style = Paint.Style.STROKE
                            val refinedRect = RectF()
                            refinedRect.left = adjustXCoordinates(barcodeBound.left)
                            refinedRect.right = adjustXCoordinates(barcodeBound.right)
                            refinedRect.top = adjustYCoordinates(barcodeBound.top)
                            refinedRect.bottom = adjustYCoordinates(barcodeBound.bottom)
                            canvas.drawRect(refinedRect, myPaint)
                        }
                        holder.unlockCanvasAndPost(canvas)
                    } else {
                        Log.e(tag, "Cannot draw onto the canvas as it's null")
                    }
                    try {
                        sleep(30)
                    } catch (e: InterruptedException) {
                        e.printStackTrace()
                    }
                }
            }
        }
    }
}
Can anyone point out what I am doing wrong?
I don't have a very clear clue, but here are some things you could try:
When you call adjustXCoordinates, if previewViewWidth is 0, you return valueX.toFloat() directly. Could you add some logging to see if it actually falls into this case? Logging the analysis and preview dimensions could be helpful as well.
Another thing worth noting is that the image sent to the detector could have a different aspect ratio from the preview View area. For example, if your camera takes a 4:3 photo, that is what it sends to the detector; but if your View area is 1:1, part of the photo is cropped for display. In that case you need to take the crop into account as well when adjusting coordinates. Based on my testing, the image is fitted into the View area CENTER_CROP style. If you want to be really careful, it is probably worth checking whether this behavior is documented on the camera dev site.
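To make that concrete, here is a minimal plain-Kotlin sketch (the Size class and function names are illustrative, not CameraX API) of mapping a detector box from analysis-image coordinates into view coordinates under a CENTER_CROP assumption:

```kotlin
data class Size(val width: Int, val height: Int)

// Map a box (left, top, right, bottom) from analysis-image pixels to
// view pixels, assuming the image fills the view CENTER_CROP style.
fun mapRect(box: IntArray, image: Size, view: Size): FloatArray {
    // CENTER_CROP: scale by the larger factor, then center the overflow.
    val scale = maxOf(
        view.width / image.width.toFloat(),
        view.height / image.height.toFloat()
    )
    val dx = (view.width - image.width * scale) / 2f   // negative when cropped horizontally
    val dy = (view.height - image.height * scale) / 2f // negative when cropped vertically
    return floatArrayOf(
        box[0] * scale + dx, box[1] * scale + dy,
        box[2] * scale + dx, box[3] * scale + dy
    )
}
```

When both sizes share the same aspect ratio, dx and dy are 0 and this reduces to the simple width/height ratios used in adjustXCoordinates and adjustYCoordinates.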
Hope it helps, more or less.
I am no longer working on this project. However, I recently worked on a camera application that uses the Camera2 API. In that application there was a requirement to detect objects with the ML Kit object detection library and show a bounding box like this on top of the camera preview. I faced the same issue at first and finally managed to get it working. I'll leave my approach here; it might help someone.
Any detection library runs its detection on a smaller-resolution image than the camera preview image. When the detection library returns the coordinates of the detected object, we need to scale them up to show the box in the right position; the multiplier is called the scale factor. To make the calculation easy, it is better to select the analysis image size and the preview image size with the same aspect ratio.
You can use the below function to get the aspect ratio of any size.
fun gcd(a: Long, b: Long): Long {
    return if (b == 0L) a else gcd(b, a % b)
}

fun asFraction(a: Long, b: Long): Pair<Long, Long> {
    val gcd = gcd(a, b)
    return Pair(a / gcd, b / gcd)
}
After getting the camera preview image's aspect ratio, select the analysis image size as below.
val previewFraction = DisplayUtils
.asFraction(previewSize!!.width.toLong(),previewSize!!.height.toLong())
val analyzeImageSize = characteristics
.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
.getOutputSizes(ImageFormat.YUV_420_888)
.filter { DisplayUtils.asFraction(it.width.toLong(), it.height.toLong()) == previewFraction }
.sortedBy { it.height * it.width}
.first()
Finally, when you have these two values, you can calculate the scale factor as below.
val scaleFactor = previewSize.width / analyzedSize.width.toFloat()
Finally, before the bounding box is drawn, multiply each point by the scale factor to get the correct screen coordinates.
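As a sketch of that final step (illustrative names; this assumes, per the answer above, that the preview and analysis sizes share an aspect ratio, so one factor serves both axes):

```kotlin
// Scale a detector box (left, top, right, bottom) up to preview pixels.
fun scaleBox(box: IntArray, previewWidth: Int, analyzedWidth: Int): FloatArray {
    val scaleFactor = previewWidth / analyzedWidth.toFloat()
    return FloatArray(box.size) { i -> box[i] * scaleFactor }
}
```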
If you detect from a bitmap, your reposition method will be right, as I found when I tried it.
My use case is to take two images at once: the first at 2x zoom and the second at 1x zoom. I also want to save the images to files.
My idea was to take the first image at 2x zoom and, once it is saved, set the zoom level to 1x and take the second image when the lens has finished zooming.
However, when I take the first image, the preview freezes on it and the callback from setting 1x zoom never fires.
This is how I create the capture use cases.
private void createImageCaptureUseCases() {
ImageCapture imageCapture1 = new ImageCapture.Builder()
.setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
.build();
ImageCapture imageCapture2 = new ImageCapture.Builder()
.setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
.build();
imageCaptureUseCases.clear();
imageCaptureUseCases.add(imageCapture1);
imageCaptureUseCases.add(imageCapture2);
}
This is how I first start the camera session.
ListenableFuture<ProcessCameraProvider> cameraProviderFuture = ProcessCameraProvider.getInstance(getContext());
cameraProviderFuture.addListener(() -> {
try {
cameraProvider = cameraProviderFuture.get();
preview = new Preview.Builder().build();
cameraSelector = new CameraSelector.Builder()
.requireLensFacing(CameraSelector.LENS_FACING_BACK)
.build();
Camera camera = cameraProvider.bindToLifecycle(
((LifecycleOwner) this),
cameraSelector,
preview,
imageCapture);
camera.getCameraControl().setZoomRatio(2f);
preview.setSurfaceProvider(previewView.createSurfaceProvider(camera.getCameraInfo()));
} catch (InterruptedException | ExecutionException e) {}
}, ContextCompat.getMainExecutor(getContext()));
This is how image capture is called.
private void captureImage(ImageCapture imageCapture) {
File pictureFile = ImageUtils.createImageFile(getActivity());
ImageCapture.OutputFileOptions options = new
ImageCapture.OutputFileOptions.Builder(pictureFile).build();
final Activity activity = getActivity();
imageCapture.takePicture(options, ContextCompat.getMainExecutor(activity),
new ImageCapture.OnImageSavedCallback() {
@Override
public void onImageSaved(@NonNull ImageCapture.OutputFileResults outputFileResults){
Log.i("my tag", "image Saved: " + pictureFile.getAbsolutePath());
int index = imageCaptureUseCases.indexOf(imageCapture);
cameraProvider.unbind(imageCapture);
if (index < imageCaptureUseCases.size() - 1) {
Camera camera = cameraProvider.bindToLifecycle(
(LifecycleOwner) activity,
cameraSelector,
imageCaptureUseCases.get(index + 1));
ListenableFuture future = camera.getCameraControl().setZoomRatio(1f);
future.addListener(() -> captureImage(imageCaptureUseCases.get(index + 1)),
ContextCompat.getMainExecutor(activity));
} else {
createImageCaptureUseCases();
cameraProvider.unbindAll();
Camera camera = cameraProvider.bindToLifecycle(
(LifecycleOwner) activity,
cameraSelector,
preview,
imageCaptureUseCases.get(0));
camera.getCameraControl().setZoomRatio(2f);
}
}
@Override
public void onError(@NonNull ImageCaptureException exception) {
Log.i("my tag", "image save error: " + pictureFile.getAbsolutePath());
}
});
}
You don't need multiple ImageCapture instances to capture images at two zoom ratios; you can use the same instance. ImageCapture handles taking a picture and saving/providing it regardless of parameters such as the zoom ratio.
Looking at your code sample, it seems you might not be binding a Preview use case the second time you try to capture an image (with a different zoom ratio). This would explain why your preview gets stuck after the first image capture. Keep in mind that an ImageCapture use case cannot be bound on its own; it must be bound with at least one Preview or ImageAnalysis use case.
Below is a sample that captures two images, each with a different zoom ratio. The code contains some repetition and is all in one block, so it can definitely be improved.
private fun setUpCamera() {
val mainExecutor = ContextCompat.getMainExecutor(this)
val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
cameraProviderFuture.addListener(Runnable {
// Wait for the camera provider to be retrieved
val cameraProvider = cameraProviderFuture.get()
// Build your use cases
val preview = Preview.Builder().build()
val imageCapture = ImageCapture.Builder().build()
// Get a camera selector to use
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
// Bind the use cases to a lifecycle
val camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageCapture)
// Set the preview surface provider
preview.setSurfaceProvider(previewView.createSurfaceProvider(camera.cameraInfo))
// Set the zoom ratio for the first photo
val cameraControl = camera.cameraControl
cameraControl.setZoomRatio(1F)
// When the previewView is clicked, take the photos
previewView.setOnClickListener {
imageCapture.takePicture(createOutputFilesOptions(), mainExecutor, object : ImageCapture.OnImageSavedCallback {
override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
// First image captured and saved successfully
Log.d(TAG, "OnImageSavedCallback.onImageSaved: Image saved with zoom ratio 1F")
// Set a new zoom ratio for the second image capture
cameraControl.setZoomRatio(2F)
// Capture the second picture with a different zoom ratio
imageCapture.takePicture(createOutputFilesOptions(), mainExecutor, object : ImageCapture.OnImageSavedCallback {
override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
// Second image captured and saved successfully
Log.d(TAG, "OnImageSavedCallback.onImageSaved: Image saved with zoom ratio 2F")
}
override fun onError(exception: ImageCaptureException) {
Log.e(TAG, "OnImageSavedCallback.onError", exception)
}
})
}
override fun onError(exception: ImageCaptureException) {
Log.e(TAG, "OnImageSavedCallback.onError", exception)
}
})
}
}, mainExecutor)
}
I want to implement a zoom feature in my app with the CameraX API. I followed this Medium post to implement pinch-to-zoom and it works.
The problem is that when I retrieve the captured image in the onCaptureSuccess callback, the image is not zoomed.
Here is the code I use to implement zoom on the camera, in onCreate():
//ZOOM
val listener = object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
override fun onScale(detector: ScaleGestureDetector): Boolean {
val zoomRatio = camera?.cameraInfo?.zoomState?.value?.zoomRatio ?: 0f
val scale = zoomRatio * detector.scaleFactor
camera?.cameraControl?.setZoomRatio(scale)
return true
}
}
scaleDetector = ScaleGestureDetector(context, listener)
And in method "bindCameraUseCases()" :
previewCamera.setOnTouchListener { _, event ->
scaleDetector.onTouchEvent(event)
}
The full method if needed :
/** Declare and bind preview, capture and analysis use cases */
fun bindCameraUseCases() {
// Get screen metrics used to setup camera for full screen resolution
val metrics = DisplayMetrics().also { previewCamera.display.getRealMetrics(it) }
Log.d(TAG, "Screen metrics: ${metrics.widthPixels} x ${metrics.heightPixels}")
val rotation = previewCamera.display.rotation
// Bind the CameraProvider to the LifeCycleOwner
val cameraSelector = CameraSelector.Builder().requireLensFacing(lensFacing).build()
val cameraProviderFuture = ProcessCameraProvider.getInstance(requireContext())
cameraProviderFuture.addListener(Runnable {
// CameraProvider
val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
// Preview
preview = Preview.Builder()
.setTargetRotation(rotation)
.build()
previewCamera.preferredImplementationMode =
PreviewView.ImplementationMode.TEXTURE_VIEW // when set to TEXTURE_VIEW, the preview doesn't take full screen on back pressed
previewCamera.setOnTouchListener { _, event ->
scaleDetector.onTouchEvent(event)
}
// Default PreviewSurfaceProvider
preview?.setSurfaceProvider(previewCamera.createSurfaceProvider(camera?.cameraInfo))
val screenAspectRatio = aspectRatio(metrics.widthPixels, metrics.heightPixels)
// ImageCapture
imageCapture = ImageCapture.Builder()
.setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
.setTargetAspectRatio(screenAspectRatio)
.setTargetRotation(rotation)
.build()
// ImageAnalysis
imageAnalyzer = ImageAnalysis.Builder()
.setTargetAspectRatio(screenAspectRatio)
.setTargetRotation(rotation)
.build()
cameraProvider.unbindAll()
try {
camera = cameraProvider.bindToLifecycle(
this as LifecycleOwner, cameraSelector, preview, imageCapture, imageAnalyzer
)
} catch (exc: Exception) {
Log.e(TAG, "Use case binding failed", exc)
}
}, mainExecutor)
}
As I mentioned, zoom works in the preview, but then in onCaptureSuccess the ImageProxy is not zoomed.
override fun onCaptureSuccess(image: ImageProxy) {
image.use { image ->
savedBitmap = image.imageProxyToBitmap()
///...
}
}
Here is the extension function to retrieve a bitmap from the ImageProxy:
fun ImageProxy.imageProxyToBitmap(): Bitmap {
val buffer = this.planes[0].buffer
buffer.rewind()
val bytes = ByteArray(buffer.capacity())
buffer.get(bytes)
val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
val matrix = Matrix()
matrix.postRotate(90f)
return Bitmap.createBitmap(bitmap, 0, 0,bitmap.width,bitmap.height, matrix, true)
}
Here are my dependencies :
// CameraX core library
def camerax_version = "1.0.0-beta02"
implementation "androidx.camera:camera-core:$camerax_version"
// CameraX Camera2 extensions
implementation "androidx.camera:camera-camera2:$camerax_version"
// CameraX Lifecycle library
implementation "androidx.camera:camera-lifecycle:$camerax_version"
// CameraX View class
implementation "androidx.camera:camera-view:1.0.0-alpha09"
Thank you for your help 🙏
Not sure why this happens, but we are unable to reproduce the issue.
In my test, ImageCapture always captures the image with the zoom applied.
Currently I suspect this could be a device issue. It would be helpful if you could provide the device name, and also verify the behavior on other devices.
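If it does turn out to be a device issue, one possible workaround (a sketch, not a confirmed fix) is to emulate the zoom by center-cropping the captured image yourself. The helper below only computes the crop window; you would pass its values to the standard Bitmap.createBitmap(bitmap, left, top, width, height):

```kotlin
// Compute a center-crop window (left, top, width, height) that emulates
// a zoom the device ignored at capture time. zoomRatio would come from
// camera.cameraInfo.zoomState.
fun cropWindowForZoom(width: Int, height: Int, zoomRatio: Float): IntArray {
    if (zoomRatio <= 1f) return intArrayOf(0, 0, width, height)
    val cropW = (width / zoomRatio).toInt()
    val cropH = (height / zoomRatio).toInt()
    return intArrayOf((width - cropW) / 2, (height - cropH) / 2, cropW, cropH)
}
```

Note the crop is not upscaled; scale it back up afterwards if the original resolution is required.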