I'm creating an app that takes pictures in .dng format in order to process them, using the camera2 API. I was able to take pictures and save them to my phone, but only in .jpg format. When I change my code to save them with the .dng extension, it compiles and shows the preview on my phone, but when the picture is taken I get an error. The part of my code that takes and saves the picture is as follows.
val reader = ImageReader.newInstance(1280, 720, ImageFormat.RAW_SENSOR, 1)
val outputSurfaces = ArrayList<Surface>(2)
outputSurfaces.add(reader.surface)
outputSurfaces.add(Surface(previewTextureView.surfaceTexture))
val captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)
captureBuilder.addTarget(reader.surface)
captureBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO)
val file = File("myPath/myImageName.dng")
var captureResult: CaptureResult? = null
And my listeners:
val readerListener = object : ImageReader.OnImageAvailableListener {
override fun onImageAvailable(reader: ImageReader) {
var image: Image? = null
var output: OutputStream? = null
val dngCreator = DngCreator(cameraManager.getCameraCharacteristics("0"), captureResult)
try {
image = reader.acquireLatestImage()
output = FileOutputStream(file)
dngCreator.writeImage(output, image)
} catch (e: FileNotFoundException) {
e.printStackTrace()
} catch (e: IOException) {
e.printStackTrace()
} finally {
output?.close()
image?.close()
}
}
}
reader.setOnImageAvailableListener(readerListener, backgroundHandler)
val captureListener = object : CameraCaptureSession.CaptureCallback() {
override fun onCaptureCompleted(session: CameraCaptureSession, request: CaptureRequest, result: TotalCaptureResult) {
captureResult = result
super.onCaptureCompleted(session, request, result)
}
}
And finally I create the capture session with:
cameraDevice.createCaptureSession(outputSurfaces, object : CameraCaptureSession.StateCallback() {
override fun onConfigured(session: CameraCaptureSession) {
try {
session.capture(captureBuilder.build(), captureListener, backgroundHandler)
} catch (e: CameraAccessException) {
e.printStackTrace()
}
}
override fun onConfigureFailed(session: CameraCaptureSession) {}
}, backgroundHandler)
I'm getting one warning and one error that I didn't have before, when I was saving the image as JPEG:
W/CameraDevice-JV-0: Stream configuration failed due to: createSurfaceFromGbp:1106: Camera 0: No supported stream configurations with format 0x20 defined, failed to create output stream
E/CameraCaptureSession: Session 1: Failed to create capture session; configuration failed
The things I changed in order to save a DNG file are:
I replaced ImageFormat.JPEG with ImageFormat.RAW_SENSOR
I changed the file extension from .jpg to .dng
Instead of using dngCreator.writeImage(output, image), I used:
val buffer = image!!.planes[0].buffer
val bytes = ByteArray(buffer.capacity())
buffer.get(bytes)
output.write(bytes)
Since there is not a lot of information about this subject, I'm not sure if my implementation is correct.
This is a bit of an old post, but for raw images you cannot set the resolution to an arbitrary value. Assuming your device supports RAW_SENSOR reads, it has to be set to the sensor size. You need to do something like this.
val largestRaw = Collections.max(Arrays.asList(*map.getOutputSizes(ImageFormat.RAW_SENSOR)), CompareSizesByArea())
rawImageReader = ImageReader.newInstance(largestRaw.width, largestRaw.height, ImageFormat.RAW_SENSOR, /*maxImages*/ 5).apply { setOnImageAvailableListener(onRawImageAvailableListener, backgroundHandler) }
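The CompareSizesByArea comparator used above is not defined in this post; it comes from the official Camera2 samples, and a minimal Kotlin version (assuming android.util.Size) might look like this:

```kotlin
import android.util.Size

// Comparator from the official Camera2 samples: orders sizes by total pixel area.
// Long arithmetic avoids overflow for large sensor resolutions.
internal class CompareSizesByArea : Comparator<Size> {
    override fun compare(lhs: Size, rhs: Size): Int =
        java.lang.Long.signum(
            lhs.width.toLong() * lhs.height - rhs.width.toLong() * rhs.height
        )
}
```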
Unfortunately, in Kotlin I am now encountering:
java.lang.IllegalArgumentException: Missing metadata fields for tag AsShotNeutral (c628)
The outdated Java Camera2Raw sample listed above does work, though.
After some research, I found an implementation for saving an image taken with the Camera2 API to a .dng file:
if (mImage.format == ImageFormat.RAW_SENSOR) {
val dngCreator = DngCreator(mCharacteristics, mCaptureResult)
var output: FileOutputStream? = null
try {
output = FileOutputStream(mFile)
dngCreator.writeImage(output, mImage)
} catch (e: IOException) {
e.printStackTrace()
} finally {
mImage.close()
closeOutput(output)
}
}
Where :
mCharacteristics are the CameraCharacteristics, i.e. the properties describing the CameraDevice
mCaptureResult is the CaptureResult produced by the CameraDevice after processing the CaptureRequest
mImage is the image retrieved in the function dequeueAndSaveImage: image = reader.get()!!.acquireNextImage()
mFile is the File where the image will be saved, for example:
mFile = File(
    Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES),
    "RAW_" + generateTimestamp() + ".dng"
)
Maybe it will help somebody, but as @Alex Cohn said, it's recommended to begin with the official sample github.com/googlesamples/android-Camera2Raw. It's written in Java rather than Kotlin, but it's not that hard to convert, if needed.
I have tried many times to solve an issue I am having with ExifInterface. I am trying to set the GPS timestamp tag (ExifInterface.TAG_GPS_TIMESTAMP) to the current time, but the result displays as null in Logcat (ignore the rest of the information displayed there). Here is the code I am using; any ideas on how to do this properly? Thank you in advance:
Logcat:
D/CameraOrientationUtil: getRelativeImageRotation: destRotationDegrees=90, sourceRotationDegrees=90, isOppositeFacing=true, result=0
D/ImageCapture: Send image capture request [current, pending] = [0, 1]
D/ImageCapture: issueTakePicture
D/Camera2CameraImpl: {Camera#bf66d7f[id=0]} Issue capture request
D/CaptureSession: Issuing capture request.
D/InputTransport: Input channel constructed: fd=114
D/ViewRootImpl#c84535c[Toast]: setView = android.widget.LinearLayout{1b6a7eb V.E...... ......I. 0,0-0,0} TM=true MM=false
V/Toast: Text: Пбра in android.widget.Toast$TN#cb14948
D/ViewRootImpl#c84535c[Toast]: Relayout returned: old=[0,32][1280,800] new=[538,656][742,715] result=0x7 surface={true 3728250880} changed=true
D/mali_winsys: EGLint new_window_surface(egl_winsys_display *, void *, EGLSurface, EGLConfig, egl_winsys_surface **, EGLBoolean) returns 0x3000, [204x59]-format:1
D/OpenGLRenderer: eglCreateWindowSurface = 0xca143520, 0xde389808
D/ViewRootImpl#c84535c[Toast]: MSG_RESIZED: frame=[538,656][742,715] ci=[0,0][0,0] vi=[0,0][0,0] or=2
I/Model: SM-T395
I/Manufacturer: samsung
I/Artist: CadIS
I/Date Stamp: 2021:10:06
I/Time Stamp: null
I/Latitude: 42/1,43/1,87135600/10000000
I/Latitude Reference: N
I/Longitude: 23/1,15/1,49287600/10000000
I/Longitude Reference: E
I/Orientation: 0
I/Image Direction: null
private lateinit var outputDirectory: File
@RequiresApi(Build.VERSION_CODES.O)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
/**Checking for all permissions to be granted through method,
*if so it will start the camera, if not checks for a request with permissions and request code**/
if (allPermissionsGranted()) {
startCamera()
} else {
ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS, REQUEST_CODE_PERMISSIONS)
}
// Making a variable, connecting to the button and setting up a listener to initiate taking a photo through a method
val imageCaptureBtn = findViewById<FloatingActionButton>(R.id.camera_capture_button)
imageCaptureBtn.setOnClickListener {
takePhoto()
}
// Connecting the late init variable with a new method to get output directory
outputDirectory = outputDirectoryFolder()
cameraExecutor = Executors.newSingleThreadExecutor()
}
/**This method provides when clicked to take a photo and save it to the specified directory**/
@SuppressLint("RestrictedApi")
private fun takePhoto() {
Toast.makeText(this, "Обработва се ..", Toast.LENGTH_SHORT).show()
// Get a stable reference of the modifiable image capture use case
val imageCapture = imageCapture ?: return
//val fileName = SimpleDateFormat(FILENAME_FORMAT, Locale.US).format(System.currentTimeMillis()).substring(1)
// Creating a file that combined the directory and the name of that image file as JPEG format
val photoFile = File(outputDirectory, "IMG-${SimpleDateFormat(FILENAME_FORMAT, Locale.ENGLISH).format(System.currentTimeMillis())}.jpg")
// Create output options object which contains file + metadata
val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()
// Set up image capture listener, which is triggered after photo has
// been taken
imageCapture.takePicture(
outputOptions, ContextCompat.getMainExecutor(this), object : ImageCapture.OnImageSavedCallback {
override fun onError(exc: ImageCaptureException) {
Toast.makeText(this@CameraActivity, "Обработването се провали!, $exc", Toast.LENGTH_SHORT).show()
Log.e(TAG, "Photo capture failed: ${exc.message}", exc)
}
@RequiresApi(Build.VERSION_CODES.N)
override fun onImageSaved(output: ImageCapture.OutputFileResults) {
val savedUri = Uri.fromFile(photoFile)
val msg = "Photo capture succeeded: $savedUri"
Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
Log.d(TAG, msg)
val exif = ExifInterface(photoFile)
checkExifData(exif)
}
private fun checkExifData(exifInterface: ExifInterface) {
saveExifData(exifInterface) // This method will save any custom/changed data in it and will re-use the tag as an identifier.
val exifTimeStamp = exifInterface.getAttribute(ExifInterface.TAG_GPS_TIMESTAMP)
Log.i(TAG, "$exifTimeStamp")
}
@SuppressLint("SimpleDateFormat")
private fun saveExifData(exifInterface: ExifInterface) {
try {
val timeStamp = SimpleDateFormat("HH:mm:ss").format(Date(System.currentTimeMillis()))
exifInterface.setAttribute(ExifInterface.TAG_GPS_TIMESTAMP, timeStamp)
exifInterface.saveAttributes()//This will save any written data that the tags are included.
} catch (e: Exception) {
e.printStackTrace()
Log.e("Error", "$e")
Toast.makeText(this, "$e", Toast.LENGTH_SHORT).show()
}
}
})
}
photoFile - This is the File variable whose EXIF metadata will be changed on the JPEG image.
outputDirectory - A variable assigned in a different method. That method creates a "Pictures" folder in internal storage (Environment.getExternalStorageDirectory()), then another File() creating a "CadIS for Android" folder within "Pictures".
FILENAME_FORMAT - This is declared in a companion object in order to name the file with a date-and-time format:
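The post elides the companion object itself. Based on the CameraX codelab this code follows, it presumably looks something like the sketch below; the exact pattern string and constant values are assumptions, not taken from the post:

```kotlin
import java.text.SimpleDateFormat
import java.util.Locale

// Assumed values, following the CameraX codelab; not from the original post.
const val FILENAME_FORMAT = "yyyy-MM-dd-HH-mm-ss-SSS"
const val REQUEST_CODE_PERMISSIONS = 10

// Mirrors the photoFile naming in takePhoto(): "IMG-<timestamp>.jpg".
fun timestampedName(millis: Long): String =
    "IMG-" + SimpleDateFormat(FILENAME_FORMAT, Locale.ENGLISH).format(millis) + ".jpg"
```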
I am developing a camera application that uses the Camera2 API. I have managed to get image capturing and video recording to work, but whenever I perform either of those, my camera preview freezes. I am trying to understand what changes I need to make for it to work without any issues.
After handling the camera permission and selecting the camera id, I invoke the following function to get a camera preview. This works fine without any issues.
private fun startCameraPreview(){
if (ContextCompat.checkSelfPermission(requireContext(), Manifest.permission.CAMERA)
== PackageManager.PERMISSION_GRANTED &&
ContextCompat.checkSelfPermission(requireContext(), Manifest.permission.RECORD_AUDIO)
== PackageManager.PERMISSION_GRANTED){
lifecycleScope.launch(Dispatchers.Main) {
camera = openCamera(cameraManager, cameraId!!, cameraHandler)
val cameraOutputTargets = listOf(viewBinding.cameraSurfaceView.holder.surface)
session = createCaptureSession(camera, cameraOutputTargets, cameraHandler)
val captureBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
captureBuilder.set(
CaptureRequest.CONTROL_AF_MODE,
CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
captureBuilder.addTarget(viewBinding.cameraSurfaceView.holder.surface)
session.setRepeatingRequest(captureBuilder.build(), null, cameraHandler)
selectMostMatchingImageCaptureSize()
selectMostMatchingVideoRecordSize()
}
}else{
requestCameraPermission()
}
}
According to my understanding, when you create a CaptureSession with the relevant targets and call setRepeatingRequest on it, we get the camera preview. Correct me if I'm wrong.
Here is the function I use to capture an image.
private fun captureImage(){
if (captureSize != null) {
captureImageReader = ImageReader.newInstance(
captureSize!!.width, captureSize!!.height, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)
viewModel.documentPreviewSizeToCaptureSizeScaleFactor =
captureSize!!.width / previewSize!!.width.toFloat()
lifecycleScope.launch(Dispatchers.IO) {
val cameraOutputTargets = listOf(
viewBinding.cameraSurfaceView.holder.surface,
captureImageReader.surface
)
session = createCaptureSession(camera, cameraOutputTargets, cameraHandler)
takePhoto().use { result ->
Log.d(TAG, "Result received: $result")
// Save the result to disk
val output = saveResult(result)
Log.d(TAG, "Image saved: ${output.absolutePath}")
// If the result is a JPEG file, update EXIF metadata with orientation info
if (output.extension == "jpg") {
decodedExifOrientationOfTheImage =
decodeExifOrientation(result.orientation)
val exif = ExifInterface(output.absolutePath)
exif.setAttribute(
ExifInterface.TAG_ORIENTATION, result.orientation.toString()
)
exif.saveAttributes()
Log.d(TAG, "EXIF metadata saved: ${output.absolutePath}")
}
}
}
}
}
The function takePhoto() is a function I have placed in the inherited base fragment class which is responsible for setting up capture requests and saving the image.
protected suspend fun takePhoto(): CombinedCaptureResult = suspendCoroutine { cont ->
// Flush any images left in the image reader
@Suppress("ControlFlowWithEmptyBody")
while (captureImageReader.acquireNextImage() != null) {
}
// Start a new image queue
val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
captureImageReader.setOnImageAvailableListener({ reader ->
val image = reader.acquireNextImage()
Log.d(TAG, "Image available in queue: ${image.timestamp}")
imageQueue.add(image)
}, imageReaderHandler)
val captureRequest = session.device.createCaptureRequest(
CameraDevice.TEMPLATE_STILL_CAPTURE
).apply {
addTarget(captureImageReader.surface)
}
session.capture(captureRequest.build(), object : CameraCaptureSession.CaptureCallback() {
override fun onCaptureCompleted(
session: CameraCaptureSession,
request: CaptureRequest,
result: TotalCaptureResult
) {
super.onCaptureCompleted(session, request, result)
val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
Log.d(TAG, "Capture result received: $resultTimestamp")
// Set a timeout in case image captured is dropped from the pipeline
val exc = TimeoutException("Image dequeuing took too long")
val timeoutRunnable = Runnable { cont.resumeWithException(exc) }
imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)
// Loop in the coroutine's context until an image with matching timestamp comes
// We need to launch the coroutine context again because the callback is done in
// the handler provided to the `capture` method, not in our coroutine context
@Suppress("BlockingMethodInNonBlockingContext")
lifecycleScope.launch(cont.context) {
while (true) {
val image = imageQueue.take()
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q &&
image.format != ImageFormat.DEPTH_JPEG &&
image.timestamp != resultTimestamp
) continue
Log.d(TAG, "Matching image dequeued: ${image.timestamp}")
// Unset the image reader listener
imageReaderHandler.removeCallbacks(timeoutRunnable)
captureImageReader.setOnImageAvailableListener(null, null)
// Clear the queue of images, if there are left
while (imageQueue.size > 0) {
imageQueue.take().close()
}
// Compute EXIF orientation metadata
val rotation = relativeOrientation.value ?: defaultOrientation()
Log.d(TAG,"EXIF rotation value $rotation")
val mirrored = characteristics.get(CameraCharacteristics.LENS_FACING) ==
CameraCharacteristics.LENS_FACING_FRONT
val exifOrientation = computeExifOrientation(rotation, mirrored)
// Build the result and resume progress
cont.resume(
CombinedCaptureResult(
image, result, exifOrientation, captureImageReader.imageFormat
)
)
}
}
}
}, cameraHandler)
}
Invoking the functions above does capture an image, but it freezes the preview. To get the preview back I need to reset it using the function below, which I have to call at the end of the captureImage() function.
private fun resetCameraPreview(){
lifecycleScope.launch(Dispatchers.Main) {
val cameraOutputTargets = listOf(viewBinding.cameraSurfaceView.holder.surface)
session = createCaptureSession(camera, cameraOutputTargets, cameraHandler)
val captureBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
captureBuilder.set(
CaptureRequest.CONTROL_AF_MODE,
CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
captureBuilder.addTarget(viewBinding.cameraSurfaceView.holder.surface)
session.setRepeatingRequest(captureBuilder.build(), null, cameraHandler)
}
}
Even doing this does not provide a good user experience, as it freezes the preview for a few seconds. And when it comes to video recording, the issue becomes unsolvable even with that not-so-good fix.
This is the function I use for video recording.
private fun startRecoding(){
if (videoRecordeSize != null) {
lifecycleScope.launch(Dispatchers.IO) {
configureMediaRecorder(videoRecodeFPS,videoRecordeSize!!)
val cameraOutputTargets = listOf(
viewBinding.cameraSurfaceView.holder.surface,
mediaRecorder.surface
)
session = createCaptureSession(camera, cameraOutputTargets, cameraHandler)
recordVideo()
}
}
}
The function recordVideo() is a function in the inherited base fragment class which is responsible for setting up capture requests and starting the MediaRecorder to record video to a file.
protected fun recordVideo() {
lifecycleScope.launch(Dispatchers.IO) {
val recordRequest = session.device
.createCaptureRequest(CameraDevice.TEMPLATE_RECORD).apply {
addTarget(mediaRecorder.surface)
}
session.setRepeatingRequest(recordRequest.build(), null, cameraHandler)
mediaRecorder.apply {
start()
}
}
}
It does record the video correctly and saves the file when mediaRecorder.stop() is invoked, but the whole time the camera preview is frozen, even after calling mediaRecorder.stop().
What am I missing here? Both times when I create capture sessions I have included the preview surface as a target. Isn't that enough for the Camera2 API to know that it should keep pushing frames to that surface while capturing images or recording videos? You can find the repo for this codebase here. I hope someone can help me, because I'm stuck with the Camera2 API. I wish I could use CameraX, but some parts are still in beta so I can't use it in production.
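Not part of the original post, but a common way to avoid this freeze is to create one session up front containing every surface it will ever need (preview, still capture, recorder), and to restart the repeating preview request right after each one-shot capture instead of tearing the session down. A rough sketch under those assumptions, reusing names from the question's code:

```kotlin
// Sketch only: create the session once with all surfaces, then keep the
// preview's repeating request alive across captures and recordings.
private suspend fun startSession() {
    val targets = listOf(
        viewBinding.cameraSurfaceView.holder.surface, // preview
        captureImageReader.surface,                   // still capture
        mediaRecorder.surface                         // video recording
    )
    session = createCaptureSession(camera, targets, cameraHandler)
    startRepeatingPreview()
}

private fun startRepeatingPreview() {
    val preview = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW).apply {
        set(CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
        addTarget(viewBinding.cameraSurfaceView.holder.surface)
    }
    session.setRepeatingRequest(preview.build(), null, cameraHandler)
}

// After session.capture(...) completes, call startRepeatingPreview() again:
// a one-shot capture interleaves with the repeating request, and the
// session itself survives, so the preview never has to be reconfigured.
```

Note the MediaRecorder surface must be prepared before session creation for this to configure successfully; this is a sketch of the approach, not a drop-in fix.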
I'm making a card holder app: the user is supposed to take a picture of his ID, contacts, notes, etc., so he can later use them digitally. The problem is: how do I take camera input and save it as an image inside the application so that it stays there?
You can simply use your device's native camera application to get the image and then save it to the device. The Android team has made it much easier for developers to perform such a task.
You need to use an ActivityResultContract and MediaStore to take the image and store it on your device, respectively.
Step 1 :
First Generate a Uri for your Image , in the following manner
@RequiresApi(Build.VERSION_CODES.Q)
suspend fun createPhotoUri(source: Source): Uri? {
val imageCollection = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
MediaStore.Images.Media.getContentUri(MediaStore.VOLUME_EXTERNAL_PRIMARY)
} else {
MediaStore.Images.Media.EXTERNAL_CONTENT_URI
}
val dirDest = File(
Environment.DIRECTORY_PICTURES,
context.getString(R.string.app_name) + File.separator + "CAMERA"
)
val date = System.currentTimeMillis()
val fileName = "$date.jpg"
return withContext(Dispatchers.IO) {
val newImage = ContentValues().apply {
put(MediaStore.Images.Media.DISPLAY_NAME, fileName)
put(MediaStore.MediaColumns.RELATIVE_PATH, "$dirDest${File.separator}")
}
return@withContext context.contentResolver.insert(imageCollection, newImage)
}
}
Step 2:
Then, when you want to capture the image, perform the following in the OnClickListener:
binding.takePictureButton.setOnClickListener {
viewLifecycleOwner.lifecycleScope.launch {
viewModel.createPhotoUri(Source.CAMERA)?.let { uri ->
actionTakePicture.launch(uri)
}
}
}
Step 3 :
The actionTakePicture ActivityResultContract is as follows:
private val actionTakePicture = registerForActivityResult(TakePicture()) { success ->
if (!success) {
Log.d(tag, "Image taken FAIL")
return@registerForActivityResult
}
Log.d(tag, "Image taken SUCCESS")
}
And you are done with capturing your image and storing it.
Make sure you declare the permissions before using the above code, else it won't work.
The answer mentioned by @StefanoSansone can also be used, but the issue with that is you need to set up the CameraX library correctly, which might be tedious for your use case. One should use a library like CameraX when they want more control over the camera and its capabilities, i.e. when the application is primarily a camera application. Otherwise, the above method is perfectly fine and saves one from tedious work.
If you are using Kotlin and Jetpack libraries, I suggest you take a look at the CameraX library.
You can use the takePicture method to take a photo with the camera and save it to storage.
A complete example can be found in the CameraX codelab:
private fun takePhoto() {
// Get a stable reference of the modifiable image capture use case
val imageCapture = imageCapture ?: return
// Create time-stamped output file to hold the image
val photoFile = File(
outputDirectory,
SimpleDateFormat(FILENAME_FORMAT, Locale.US
).format(System.currentTimeMillis()) + ".jpg")
// Create output options object which contains file + metadata
val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()
// Set up image capture listener, which is triggered after photo has
// been taken
imageCapture.takePicture(
outputOptions, ContextCompat.getMainExecutor(this), object : ImageCapture.OnImageSavedCallback {
override fun onError(exc: ImageCaptureException) {
Log.e(TAG, "Photo capture failed: ${exc.message}", exc)
}
override fun onImageSaved(output: ImageCapture.OutputFileResults) {
val savedUri = Uri.fromFile(photoFile)
val msg = "Photo capture succeeded: $savedUri"
Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
Log.d(TAG, msg)
}
})
}
I had some difficulties last week with saving images.
I finally used SharedPreferences and saved the bitmap as text.
From what I've heard it's not good practice; it's better to save to files and store the path.
However, the code is very compact and works really well (in my case I never have to load more than 4 pictures).
val baos = ByteArrayOutputStream()
val bm = MediaStore.Images.Media.getBitmap(this.contentResolver, selectedImageUri)
bm.compress(Bitmap.CompressFormat.PNG, 100, baos) // bm is the bitmap object
val b = baos.toByteArray()
val encoded: String = Base64.encodeToString(b, Base64.DEFAULT)
editor.putString("backgroundBitmap", encoded) // store the bitmap as text in SharedPreferences; read it back in mainActivity
editor.commit()
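Reading the bitmap back out of SharedPreferences is the reverse of the snippet above. A sketch, assuming the same "backgroundBitmap" key and that `prefs` is the same SharedPreferences instance the editor wrote to:

```kotlin
import android.graphics.BitmapFactory
import android.util.Base64

// Decode the Base64 string stored under "backgroundBitmap" back into a Bitmap.
val encodedBitmap = prefs.getString("backgroundBitmap", null)
if (encodedBitmap != null) {
    val bytes = Base64.decode(encodedBitmap, Base64.DEFAULT)
    val restored = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
    // e.g. someImageView.setImageBitmap(restored)
}
```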
I am developing an Android app that saves bitmaps as JPEG images to external storage. Occasionally the JPEGs get corrupted. I have realized that the corruption (eventually) occurs only when saveExif() is called; if I comment out saveExif(), the corruption never happens. This means it is caused by something related to EXIF and not the compression process.
I have analyzed the jpeg with software (Bad Peggy) that detected the image as corrupt due to a premature end of data segment.
Any idea how to fix it?
This is how I save the image initially:
lateinit var uri: Uri
val imageOutStream: OutputStream
val contentResolver = context.contentResolver
val mimeType = "image/jpeg"
val mediaContentUri = MediaStore.Images.Media.getContentUri(MediaStore.VOLUME_EXTERNAL_PRIMARY)
val values = ContentValues().apply {
put(MediaStore.Images.Media.DISPLAY_NAME, fileName)
put(MediaStore.Images.Media.MIME_TYPE, mimeType)
put(MediaStore.Images.Media.RELATIVE_PATH, directory)
}
contentResolver.run {
uri = context.contentResolver.insert(mediaContentUri, values)
?: return
imageOutStream = openOutputStream(uri) ?: return
}
try {
imageOutStream.use { bitmap.compress(Bitmap.CompressFormat.JPEG, photoCompression, it) }
} catch (e: java.lang.Exception) {
e.printStackTrace()
} finally {
imageOutStream.close()
}
try {
context.contentResolver.openInputStream(uri).use {
val exif = ExifInterface(context.contentResolver.openFileDescriptor(uri, "rw")!!.fileDescriptor)
saveExif(exif, context) //method ads exif metadata to image
}
}catch (e: java.lang.Exception){
}
This is how I add Exif metadata after the JPEG has been stored:
private fun saveExif(exif: ExifInterface, context: Context){
if (referenceWithCaptionExif != "" && notesExif != "") {
exif.setAttribute(ExifInterface.TAG_USER_COMMENT, "$referenceWithCaptionExif | $notesExif")
} else {
exif.setAttribute(ExifInterface.TAG_USER_COMMENT, "$referenceWithCaptionExif$notesExif")
}
if (companyExif != "") {
exif.setAttribute(ExifInterface.TAG_CAMERA_OWNER_NAME, companyExif)
val yearForExif = SimpleDateFormat("yyyy",
Locale.getDefault()).format(Date())
exif.setAttribute(ExifInterface.TAG_COPYRIGHT, "Copyright (c) $companyExif $yearForExif")
}
if (projectExif != "") {
exif.setAttribute(ExifInterface.TAG_IMAGE_DESCRIPTION, projectExif)
}
exif.setAttribute(ExifInterface.TAG_MAKER_NOTE, "Project[$projectExif] Company[$companyExif] " +
"Notes[$notesExif] Reference[$referenceExif] ReferenceType[$referenceTypeExif] Coordinates[$coordinatesExif] " +
"CoordinateSystem[$coordinateSystemExif] Accuracy[$accuracyExif] Altitude[$altitudeExif] " +
"Date[$dateTimeExif] Address[$addressExif]")
exif.setAttribute(ExifInterface.TAG_ARTIST, "${android.os.Build.MANUFACTURER} ${android.os.Build.MODEL}")
exif.setAttribute(ExifInterface.TAG_SOFTWARE, context.resources.getString(R.string.app_name))
exif.setAttribute(ExifInterface.TAG_MAKE, (android.os.Build.MANUFACTURER).toString())
exif.setAttribute(ExifInterface.TAG_MODEL, (android.os.Build.MODEL).toString())
exif.setAttribute(ExifInterface.TAG_COMPRESSION, 7.toString())
exif.setAttribute(ExifInterface.TAG_IMAGE_WIDTH, "${bitmapToProcess.width} px")
exif.setAttribute(ExifInterface.TAG_IMAGE_LENGTH, "${bitmapToProcess.height} px")
exif.setAttribute(ExifInterface.TAG_PIXEL_X_DIMENSION, "${bitmapToProcess.width} px")
exif.setAttribute(ExifInterface.TAG_PIXEL_Y_DIMENSION, "${bitmapToProcess.height} px")
exif.setAttribute(ExifInterface.TAG_GPS_ALTITUDE, altitudeExif)
exif.setAttribute(ExifInterface.TAG_GPS_ALTITUDE_REF, 0.toString())
exif.setAltitude(altitudeMetricExif)
exif.setLatLong(latitudeWGS84Exif, longitudeWGS84Exif)
exif.setAttribute(ExifInterface.TAG_GPS_TIMESTAMP, timeGPSExif)
exif.setAttribute(ExifInterface.TAG_GPS_DATESTAMP, dateGPSExif)
exif.setAttribute(ExifInterface.TAG_GPS_PROCESSING_METHOD, "GPS")
exif.setAttribute(ExifInterface.TAG_DATETIME, dateTimeOriginalExif)
exif.setAttribute(ExifInterface.TAG_DATETIME_ORIGINAL, dateTimeOriginalExif)
exif.setAttribute(ExifInterface.TAG_DATETIME_DIGITIZED, dateTimeOriginalExif)
if(Build.VERSION.SDK_INT >= Build.VERSION_CODES.N){
exif.setAttribute(ExifInterface.TAG_OFFSET_TIME_DIGITIZED, SimpleDateFormat("XXX", Locale.getDefault()).format(Date()))
exif.setAttribute(ExifInterface.TAG_OFFSET_TIME_ORIGINAL, SimpleDateFormat("XXX", Locale.getDefault()).format(Date()))
exif.setAttribute(ExifInterface.TAG_OFFSET_TIME, SimpleDateFormat("XXX", Locale.getDefault()).format(Date()))
}
exif.saveAttributes()
}
You could comment out each piece of metadata, one at a time, and see if there's a specific one that is causing the corruption. There's a lot happening for some of them, so I wonder if it has to do with an incorrect string type or something.
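That advice can be followed systematically. A sketch (not from the original code): apply the tags one at a time and save after each, so Logcat shows exactly which tag, if any, triggers the corruption:

```kotlin
import android.util.Log
import androidx.exifinterface.media.ExifInterface

// Sketch: set and save one EXIF attribute at a time so the log identifies
// which tag (if any) corrupts the file. 'tags' maps tag name to value.
fun saveExifOneByOne(exif: ExifInterface, tags: Map<String, String>) {
    for ((tag, value) in tags) {
        try {
            exif.setAttribute(tag, value)
            exif.saveAttributes()
            Log.d("ExifDebug", "saved $tag = $value")
        } catch (e: Exception) {
            Log.e("ExifDebug", "failed on $tag", e)
        }
    }
}
```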
I use the new CameraX API for taking a picture like this :
imageButton.setOnClickListener{
val file = File(externalMediaDirs.first(),
"${System.currentTimeMillis()}.jpg")
imageCapture.takePicture(executor, object :
ImageCapture.OnImageCapturedListener() {
override fun onCaptureSuccess(
image: ImageProxy,
rotationDegrees: Int)
{
// DO I NEED TO USE 'image' TO ACCESS THE IMAGE DATA FOR MANIPULATION (e.g. image.planes)
}
override fun onError(
imageCaptureError: ImageCapture.ImageCaptureError,
message: String,
cause: Throwable?
) {
val msg = "Photo capture failed: $message"
Log.e("CameraXApp", msg, cause)
}
})
imageCapture.takePicture(file, executor,
object : ImageCapture.OnImageSavedListener {
override fun onError(
imageCaptureError: ImageCapture.ImageCaptureError,
message: String,
exc: Throwable?
) {
val msg = "Photo capture failed: $message"
Log.e("CameraXApp", msg, exc)
}
override fun onImageSaved(file: File) {
val msg = "Photo capture succeeded: ${file.absolutePath}"
Log.d("CameraXApp", msg)
}
})
}
I want to apply some image processing with RenderScript when the image is captured, but I don't know how to access the pixels of the captured image.
Can someone provide a solution?
I tried it with image.planes within the onCaptureSuccess() callback (see comment).
I must admit I am new to this and don't really know what planes are. Before this, I only worked with Bitmaps (doing some image processing on Bitmaps with RenderScript). Is there a way to turn a frame/image into a Bitmap and apply some image processing on it "on the fly" before saving it as a file? If yes, how? The official CameraX guidelines are not very helpful when it comes to this.
This is probably a very late answer, but the answer is in the ImageCapture callback.
You get an Image with
val image = imageProxy.image
Notice this will only work when you have images in YUV format; to configure your camera for that, you can do:
imageCapture = ImageCapture.Builder()
.setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
.setBufferFormat(ImageFormat.YUV_420_888)
...
.build()
Now you can get image.planes.
The default image format is JPEG, and you can access its buffer with
val buffer: ByteBuffer = imageProxy.image!!.planes[0].buffer
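In the default JPEG case that buffer already holds a complete encoded JPEG stream, so (a sketch, assuming imageProxy.image is non-null) it can be decoded straight into a Bitmap:

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory

// The single JPEG plane contains an encoded JPEG; decode it directly.
val jpegBuffer = imageProxy.image!!.planes[0].buffer
val jpegBytes = ByteArray(jpegBuffer.remaining()).also { jpegBuffer.get(it) }
val bitmap: Bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.size)
```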
and if you plan on setting it to YUV:
val image = imageProxy.image
val yBuffer = image.planes[0].buffer // Y
val uBuffer = image.planes[1].buffer // U
val vBuffer = image.planes[2].buffer // V
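To get from those three planes to a Bitmap, one common (if lossy) route is to repack them as NV21 and go through YuvImage. A sketch that assumes the U/V planes are already interleaved with no row padding (pixelStride == 2), which holds on many but not all devices; production code must honor rowStride/pixelStride:

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.ImageFormat
import android.graphics.Rect
import android.graphics.YuvImage
import android.media.Image
import java.io.ByteArrayOutputStream

// Sketch: repack YUV_420_888 planes as NV21, compress via YuvImage, decode.
fun yuvImageToBitmap(image: Image): Bitmap {
    val y = image.planes[0].buffer
    val u = image.planes[1].buffer
    val v = image.planes[2].buffer
    val ySize = y.remaining()
    val uSize = u.remaining()
    val vSize = v.remaining()

    val nv21 = ByteArray(ySize + uSize + vSize)
    y.get(nv21, 0, ySize)
    v.get(nv21, ySize, vSize)          // NV21 expects V before U
    u.get(nv21, ySize + vSize, uSize)

    val yuv = YuvImage(nv21, ImageFormat.NV21, image.width, image.height, null)
    val jpegStream = ByteArrayOutputStream()
    yuv.compressToJpeg(Rect(0, 0, image.width, image.height), 90, jpegStream)
    val jpeg = jpegStream.toByteArray()
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.size)
}
```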