Huawei ML Kit text-to-speech: duration of synthesized audio - Android

I have implemented on-device TTS from the Huawei ML Kit in my app and it works well.
Now I would like to find out the duration of the synthesized audio, so that I can, for example, display the remaining time while the audio is playing.
I tried writing the generated audio fragments from the callback to a .pcm file:
override fun onAudioAvailable(
    taskId: String?,
    audioFragment: MLTtsAudioFragment?,
    offset: Int,
    range: android.util.Pair<Int, Int>?,
    bundle: Bundle?
) {
    if (taskId != null) {
        if (audioFragment != null) {
            val fileName = "audio.pcm"
            writeToFile(audioFragment.audioData, fileName, true)
        }
    }
}
fun writeToFile(buffer: ByteArray?, strFileName: String?, append: Boolean) {
    if (speechFile == null) {
        val pcmFileDir: File = view.getExternalFilesDir("/PCM")!!
        speechFile = File(pcmFileDir, strFileName)
    }
    var raf: RandomAccessFile? = null
    var out: FileOutputStream? = null
    try {
        if (append) {
            raf = RandomAccessFile(speechFile, "rw")
            speechFile?.length()?.let { raf.seek(it) }
            raf.write(buffer)
        } else {
            out = FileOutputStream(speechFile)
            out.write(buffer)
            out.flush()
        }
    } catch (e: IOException) {
        e.printStackTrace()
    } finally {
        try {
            raf?.close()
            out?.close()
        } catch (e: IOException) {
            e.printStackTrace()
        }
    }
}
and getting the duration that way, but Android's MediaPlayer.getDuration() doesn't seem to work with a .pcm file.
Is there a better way to get the duration of the audio? If not then is it possible to calculate the duration of the .pcm file somehow?
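Raw PCM has no header, so players cannot report a duration, but it can be computed from the byte count. A minimal sketch, assuming the on-device engine delivers 16-bit mono PCM at 16 kHz - the sample rate, channel count, and bit depth are assumptions here, so verify them against what your TTS configuration actually reports:
fun pcmDurationMillis(
    totalBytes: Long,
    sampleRate: Int = 16_000,   // assumed sample rate; adjust to the engine's real output
    channels: Int = 1,          // assumed mono
    bytesPerSample: Int = 2     // 16-bit PCM
): Long {
    val bytesPerSecond = sampleRate.toLong() * channels * bytesPerSample
    return totalBytes * 1000 / bytesPerSecond
}
You can either pass speechFile!!.length() once synthesis has finished, or accumulate audioFragment.audioData.size inside onAudioAvailable and update the estimate as fragments arrive; the remaining time is then this total minus the elapsed playback time.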

Related

Getting very small PCM file from AudioRecord

I am trying to write the audio signal recorded from the phone microphone to a PCM file, using the Android AudioRecord class. The PCM files I am getting are too small, and when I convert them to WAV there is only a click sound (10-20 ms). The file size is roughly a couple of KB per minute. Please help! Changing the buffer size is not helping.
val SAMPLE_RATE = 44100
val AUDIO_SOURCE = MediaRecorder.AudioSource.MIC
val CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_STEREO
val AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT
val BUFFER_SIZE_RECORDING =
    AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_CONFIG, AUDIO_FORMAT)
fun startRecording(dir: String) {
    isRecordingAudio = true
    if (ActivityCompat.checkSelfPermission(
            fragmentActivitySender,
            Manifest.permission.RECORD_AUDIO
        ) != PackageManager.PERMISSION_GRANTED
    ) {
        return
    }
    recorder = AudioRecord(
        AUDIO_SOURCE, SAMPLE_RATE, CHANNEL_CONFIG, AUDIO_FORMAT,
        BUFFER_SIZE_RECORDING
    )
    if (this::recorder.isInitialized) {
        recorder.startRecording()
        recordingThread = thread(true) {
            writeAudioDataToFile(dir)
        }
    }
}
private fun writeAudioDataToFile(dir: String) {
    val audioBuffer = ByteArray(BUFFER_SIZE_RECORDING)
    val outputStream: FileOutputStream?
    try {
        outputStream = FileOutputStream(dir)
    } catch (e: FileNotFoundException) {
        return
    }
    while (isRecordingAudio) {
        val read = recorder.read(audioBuffer, 0, BUFFER_SIZE_RECORDING)
        try {
            outputStream.write(read)
            // clean up file writing operations
        } catch (e: IOException) {
            e.printStackTrace()
        }
    }
    try {
        outputStream.flush()
        outputStream.close()
    } catch (e: IOException) {
        Log.e(ContentValues.TAG, "exception while closing output stream $e")
        e.printStackTrace()
    }
}
First thing: you are flushing the file in the loop, and that is wrong. Move outputStream.flush() outside of the loop, to the line just above outputStream.close() (read more about it here).
Second issue: you need to write audioBuffer, not the read variable. The read variable only holds the result of the read() call (how many bytes were read, or an error code); the real data is in the buffer you passed in.
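A minimal sketch of the write loop with both fixes applied, reusing the names from the question's code:
while (isRecordingAudio) {
    val read = recorder.read(audioBuffer, 0, BUFFER_SIZE_RECORDING)
    if (read > 0) {
        try {
            // write the bytes that were just read, not the return code
            outputStream.write(audioBuffer, 0, read)
        } catch (e: IOException) {
            e.printStackTrace()
        }
    }
}
// flush and close only once, after recording has stopped
outputStream.flush()
outputStream.close()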

Download images from a URL, save them to App Internal Storage without blocking calls (multiple files in parallel). Using Kotlin Coroutines on Android

Basically, I am trying to download three different images (bitmaps) from URLs and save them to the app's internal storage, and then use the URIs of the saved files to save a new entity to my database. I am having a lot of issues with running this in parallel and getting it to work properly; ideally all three images would be downloaded, saved, and their URIs returned simultaneously. Most of my issues come from blocking calls that I cannot seem to avoid.
Here is all of the relevant code:
private val okHttpClient: OkHttpClient = OkHttpClient()

suspend fun saveImageToDB(networkImageModel: CBImageNetworkModel): Result<Long> {
    return withContext(Dispatchers.IO) {
        try {
            // Upload all three images to local storage
            val edgesUri = this.async {
                val req = Request.Builder().url(networkImageModel.edgesImageUrl).build()
                val response = okHttpClient.newCall(req).execute() // BLOCKING
                val btEdges = BitmapFactory.decodeStream(response.body?.byteStream())
                return@async saveBitmapToAppStorage(btEdges, ImageType.EDGES)
            }
            val finalUri = this.async {
                val urlFinal = URL(networkImageModel.finalImageUrl) // BLOCKING
                val btFinal = BitmapFactory.decodeStream(urlFinal.openStream())
                return@async saveBitmapToAppStorage(btFinal, ImageType.FINAL)
            }
            val labelUri = this.async {
                val urlLabels = URL(networkImageModel.labelsImageUrl)
                val btLabel = BitmapFactory.decodeStream(urlLabels.openStream())
                return@async saveBitmapToAppStorage(btLabel, ImageType.LABELS)
            }
            awaitAll(edgesUri, finalUri, labelUri)
            if (edgesUri.getCompleted() == null || finalUri.getCompleted() == null || labelUri.getCompleted() == null) {
                return@withContext Result.failure(Exception("An image couldn't be saved"))
            }
        } catch (e: Exception) {
            Result.failure<Long>(e)
        }
        try {
            // Result.success(db.imageDao().insertImage(image))
            Result.success(123) // A placeholder until I actually get the URIs to create my Db Entity
        } catch (e: Exception) {
            Timber.e(e)
            Result.failure(e)
        }
    }
}
// Save the bitmap and return a Uri, or null if it failed
private fun saveBitmapToAppStorage(bitmap: Bitmap, imageType: ImageType): Uri? {
    val type = when (imageType) {
        ImageType.EDGES -> "edges"
        ImageType.LABELS -> "labels"
        ImageType.FINAL -> "final"
    }
    val filename = "img_" + System.currentTimeMillis().toString() + "_" + type
    val file = File(context.filesDir, filename)
    try {
        val fos = file.outputStream()
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos)
        fos.close()
    } catch (e: Exception) {
        Timber.e(e)
        return null
    }
    return file.toUri()
}
Here is how I am calling this function:
viewModelScope.launch {
    val imageID = appRepository.saveImageToDB(imageNetworkModel)
    withContext(Dispatchers.Main) {
        val uri = Uri.parse("$PAINT_DEEPLINK/$imageID")
        navManager.navigate(uri)
    }
}
Another issue I am facing is returning the URIs in the first place and handling errors. If one of these parts fails, I'd like to cancel the whole thing and return Result.failure(), but I am unsure how to achieve that. Returning null just seems weak; I'd much prefer to have an error message or something along those lines.
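One way to get that cancellation behaviour is to let each step throw instead of returning null, and to run the three downloads inside a coroutineScope: when one async child fails, structured concurrency cancels its siblings and the exception propagates out, where runCatching can turn it into Result.failure. A minimal sketch under that assumption (downloadAndSave is a hypothetical helper combining the download with the existing saveBitmapToAppStorage; kotlinx.coroutines, java.net.URL, and java.io.IOException imports are assumed):
suspend fun saveImagesOrFail(model: CBImageNetworkModel): Result<List<Uri>> =
    withContext(Dispatchers.IO) {
        runCatching {
            coroutineScope {
                // If any child throws, the others are cancelled and the exception
                // is rethrown here, ending up in Result.failure via runCatching.
                val edges = async { downloadAndSave(model.edgesImageUrl, ImageType.EDGES) }
                val final = async { downloadAndSave(model.finalImageUrl, ImageType.FINAL) }
                val labels = async { downloadAndSave(model.labelsImageUrl, ImageType.LABELS) }
                awaitAll(edges, final, labels)
            }
        }
    }

// Hypothetical helper: throws on failure instead of returning null.
private fun downloadAndSave(url: String, type: ImageType): Uri {
    val bitmap = BitmapFactory.decodeStream(URL(url).openStream())
        ?: throw IOException("Could not decode image from $url")
    return saveBitmapToAppStorage(bitmap, type)
        ?: throw IOException("Could not save image of type $type")
}
The caller can then check result.isSuccess and, on failure, read result.exceptionOrNull()?.message for an error message.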

ImageReader's onImageAvailable method is not called and the preview shows only 8 frames in slow motion and freezes (Camera2)

I noticed strange behavior on the Xiaomi Redmi Note 9 Pro. I have tested the application on hundreds of phones, but this problem appears only on this device, and only when using ImageReader with the YUV_420_888 format and a 176x144 preview resolution (with 320x240, with JPEG, or without ImageReader as a capture surface everything works well). The onImageAvailable method is never called, the preview shows only 8 frames in slow motion and then freezes, and the app slows down. onCaptureCompleted() in CameraCurrentParamsReceiver is also called only 8 times.
I get the smallest resolution by using getMinPreviewSize (176x144 for this Xiaomi phone).
const val PREVIEW_IMAGE_FORMAT = ImageFormat.YUV_420_888
const val IMAGE_READER_MAX_SIMULTANEOUS_IMAGES = 4
val previewCaptureCallback = CameraCurrentParamsReceiver(this)
private fun startPreview(cameraDevice: CameraDevice, cameraProperties: CameraProperties)
{
val imageReader = ImageReader.newInstance(cameraProperties.previewSize.width,
cameraProperties.previewSize.height,
PREVIEW_IMAGE_FORMAT,
IMAGE_READER_MAX_SIMULTANEOUS_IMAGES)
this.imageReader = imageReader
bufferedImageConverter = BufferedImageConverter(cameraProperties.previewSize.width, cameraProperties.previewSize.height)
val previewSurface = previewSurface
val previewSurfaceForCamera =
if (previewSurface != null)
{
if (previewSurface.isValid)
{
previewSurface
}
else
{
Log.w(TAG, "Invalid preview surface - camera preview display is not available")
null
}
}
else
{
null
}
val captureSurfaces = listOfNotNull(imageReader.surface, previewSurfaceForCamera)
cameraDevice.createCaptureSession(
captureSurfaces,
object : CameraCaptureSession.StateCallback()
{
override fun onConfigureFailed(cameraCaptureSession: CameraCaptureSession)
{
Log.e(TAG, "onConfigureFailed() cannot configure camera")
if (isCameraOpened(cameraDevice))
{
shutDown("onConfigureFailed")
}
}
override fun onConfigured(cameraCaptureSession: CameraCaptureSession)
{
Log.d(TAG, "onConfigured()")
if (!isCameraOpened(cameraDevice))
{
cameraCaptureSession.close()
shutDown("onConfigured.isCameraOpened")
return
}
captureSession = cameraCaptureSession
try
{
val request = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
captureSurfaces.forEach { request.addTarget(it) }
CameraPreviewRequestInitializer.initializePreviewRequest(request, cameraProperties, controlParams, isControlParamsStrict)
captureRequestBuilder = request
val previewCallback = PreviewFrameHandler(this@Camera2)
this@Camera2.previewFrameHandler = previewCallback
imageReader.setOnImageAvailableListener(previewCallback, previewCallback.backgroundHandler)
cameraCaptureSession.setRepeatingRequest(request.build(), previewCaptureCallback, null)
}
catch (ex: CameraAccessException)
{
Log.e(TAG, "onConfigured() failed with exception", ex)
shutDown("onConfigured.CameraAccessException")
}
}
},
null)
}
private fun chooseCamera(manager: CameraManager): CameraProperties?
{
val cameraIdList = manager.cameraIdList
if (cameraIdList.isEmpty())
{
return null
}
for (cameraId in cameraIdList)
{
val characteristics = manager.getCameraCharacteristics(cameraId)
val facing = characteristics.get(CameraCharacteristics.LENS_FACING)
if (facing != null && facing == CameraCharacteristics.LENS_FACING_BACK)
{
val minPreviewSize = getMinPreviewSize(characteristics)
if (minPreviewSize == null)
{
Log.e(TAG, "chooseCamera() Cannot determine the preview size")
return null
}
Log.d(TAG, "chooseCamera() chosen camera id: $cameraId, preview size: $minPreviewSize")
return CameraProperties(cameraId,
minPreviewSize,
characteristics)
}
}
return null
}
private fun getMinPreviewSize(characteristics: CameraCharacteristics): Size?
{
val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
if (map == null)
{
Log.e(TAG, "getMinPreviewSize() Map is empty")
return null
}
return map.getOutputSizes(Constants.Camera.PREVIEW_IMAGE_FORMAT)?.minBy { it.width * it.height }
}
PreviewFrameHandler and CameraCurrentParamsReceiver (previewCaptureCallback variable)
private class PreviewFrameHandler(private val parent: Camera2) : ImageReader.OnImageAvailableListener, Handler.Callback
{
val backgroundHandler: Handler
private val backgroundHandlerThread: HandlerThread = HandlerThread("Camera2.PreviewFrame.HandlerThread")
private val mainHandler: Handler = Handler(Looper.getMainLooper(), this)
/**
* Main thread.
*/
init
{
backgroundHandlerThread.start()
backgroundHandler = Handler(backgroundHandlerThread.looper)
}
fun shutDown()
{
backgroundHandlerThread.quit()
mainHandler.removeMessages(0)
}
override fun handleMessage(msg: Message?): Boolean
{
msg ?: return false
parent.cameraFrameListener.onFrame(msg.obj as RGBImage)
return true
}
/**
* Background thread.
*/
private val relativeTimestamp = RelativeTimestamp()
override fun onImageAvailable(reader: ImageReader)
{
var image: Image? = null
try
{
image = reader.acquireNextImage()
image ?: return
val rgbImage = parent.bufferedImageConverter?.convertYUV420spToRGB(image, relativeTimestamp.updateAndGetSeconds(image.timestamp))
rgbImage ?: return
mainHandler.sendMessage(mainHandler.obtainMessage(0, rgbImage))
}
catch (ex: Exception)
{
Log.e(TAG, "onImageAvailable()", ex)
}
finally
{
image?.close()
}
}
private class RelativeTimestamp
{
private var initialNanos = 0L
fun updateAndGetSeconds(currentNanos: Long): Double
{
if (initialNanos == 0L)
{
initialNanos = currentNanos
}
return nanosToSeconds(currentNanos - initialNanos)
}
}
}
/**
* Class used to read current camera params.
*/
private class CameraCurrentParamsReceiver(private val parent: Camera2) : CameraCaptureSession.CaptureCallback()
{
private var isExposureTimeExceptionLogged = false
private var isIsoExceptionLogged = false
override fun onCaptureSequenceAborted(session: CameraCaptureSession, sequenceId: Int)
{
}
override fun onCaptureCompleted(session: CameraCaptureSession, request: CaptureRequest, result: TotalCaptureResult)
{
try
{
val exposureTimeNanos = result.get(CaptureResult.SENSOR_EXPOSURE_TIME)
if (exposureTimeNanos != null)
{
parent.currentExposureTimeNanos = exposureTimeNanos
}
}
catch (ex: IllegalArgumentException)
{
if (!isExposureTimeExceptionLogged)
{
isExposureTimeExceptionLogged = true
}
}
try
{
val iso = result.get(CaptureResult.SENSOR_SENSITIVITY)
if (iso != null)
{
parent.currentIso = iso
}
}
catch (ex: IllegalArgumentException)
{
if (!isIsoExceptionLogged)
{
Log.i(TAG, "Cannot get current SENSOR_SENSITIVITY, exception: " + ex.message)
isIsoExceptionLogged = true
}
}
}
override fun onCaptureFailed(session: CameraCaptureSession, request: CaptureRequest, failure: CaptureFailure)
{
}
override fun onCaptureSequenceCompleted(session: CameraCaptureSession, sequenceId: Int, frameNumber: Long)
{
}
override fun onCaptureStarted(session: CameraCaptureSession, request: CaptureRequest, timestamp: Long, frameNumber: Long)
{
}
override fun onCaptureProgressed(session: CameraCaptureSession, request: CaptureRequest, partialResult: CaptureResult)
{
}
override fun onCaptureBufferLost(session: CameraCaptureSession, request: CaptureRequest, target: Surface, frameNumber: Long)
{
}
}
As I understand it, something is wrong with the preview size, but I cannot find the correct way to get this value, and the strangest thing is that this problem appears only on this Xiaomi device. Any thoughts?
176x144 is sometimes a problematic resolution for devices. It's really only listed by camera devices because it's sometimes required for recording videos for MMS (multimedia text message) messages. These videos, frankly, look awful, but it's still frequently a requirement by cellular carriers that they work.
But on modern devices with 12 - 50 MP cameras, the camera hardware actually struggles to scale images down to 176x144 from the sensor full resolution (> 20x downscale!), so sometimes certain combinations of sizes can cause problems.
I'd generally recommend not using preview resolutions below 320x240, to minimize issues, and definitely not mixing a 176x144 preview with a high-resolution still capture.
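Following that recommendation, the question's getMinPreviewSize could pick the smallest output size that is still at least 320x240 instead of the absolute minimum. A minimal sketch (the 320x240 floor is just the threshold suggested above, not a documented limit):
private fun getSmallestUsablePreviewSize(characteristics: CameraCharacteristics): Size? {
    val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
        ?: return null
    // Smallest YUV_420_888 output size that is not below 320x240.
    return map.getOutputSizes(ImageFormat.YUV_420_888)
        ?.filter { it.width >= 320 && it.height >= 240 }
        ?.minByOrNull { it.width * it.height }
}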

enableAudio method issue in the Android Agora RTC SDK

I am using interactive video broadcasting in my app.
I am attaching the class in which I am using live streaming.
I am getting an audio issue when I go back from the live streaming screen to the previous screen: I can still hear the audio of the host.
Previously I was using the leaveChannel method and destroying the RTC client object, but with that in place, when I go back from the streaming class it closes the screen for all users who are using this app, because of the leaveChannel method. After that, I removed this from my onDestroy method.
Now I am using the disableAudio method, which disables the audio, but when I open the live streaming class again it doesn't enable the audio. The enableAudio method is not working; I have also tried the muteLocalAudioStream method and the onUserMuteAudio RTC handler callback.
I am getting this error:
"LiveStreamingActivity has leaked IntentReceiver io.agora.rtc.internal.AudioRoutingController$HeadsetBroadcastReceiver#101a7a7
that was originally registered here. Are you missing a call to
unregisterReceiver()? android.app.IntentReceiverLeaked: Activity
com.allin.activities.home.homeActivities.LiveStreamingActivity has
leaked IntentReceiver
io.agora.rtc.internal.AudioRoutingController$HeadsetBroadcastReceiver#101a7a7
that was originally registered here. Are you missing a call to
unregisterReceiver()?"
The receiver is registered inside the SDK, and the exception comes from inside the SDK, which is a jar file I can't edit.
Please help me resolve this issue, as I have to publish the app on the Play Store.
// Firstly, I tried this, but it automatically stops the other devices' streaming.
override fun onDestroy() {
    /* if (mRtcEngine != null) {
        leaveChannel()
        RtcEngine.destroy(mRtcEngine)
        mRtcEngine = null
    }*/
    // Second, I tried disabling the audio so that the user will not hear the host's voice.
    if (mRtcEngine != null) {
        mRtcEngine!!.disableAudio()
    }
    super.onDestroy()
}
// Then, when I come back from the previous screen to the live streaming activity,
// everything is initialized again but the audio is not audible.
override fun onResume() {
    super.onResume()
    Log.e("resume", "resume")
    if (mRtcEngine != null) {
        mRtcEngine!!.enableAudio()
        // mRtcEngine!!.resumeAudio()
    }
}
The code I am using:
//agora rtc engine and handler initialization-----------------
private var mRtcEngine: RtcEngine? = null
private var mRtcEventHandler = object : IRtcEngineEventHandler() {
@SuppressLint("LongLogTag")
override fun onFirstRemoteVideoDecoded(uid: Int, width: Int, height: Int, elapsed: Int) {
}
override fun onUserOffline(uid: Int, reason: Int) {
runOnUiThread {
val a = reason //if login =0 user is offline
try {
if (mUid == uid) {
if (surfaceView?.parent != null)
(surfaceView?.parent as ViewGroup).removeAllViews()
if (mRtcEngine != null) {
leaveChannel()
RtcEngine.destroy(mRtcEngine)
mRtcEngine = null
}
setResult(IntentConstants.REQUEST_CODE_LIVE_STREAMING)
finish()
}
} catch (e: Exception) {
e.printStackTrace()
}
}
}
override fun onUserMuteVideo(uid: Int, muted: Boolean) {
runOnUiThread {
// onRemoteUserVideoMuted(uid, muted);
Log.e("video","muted")
}
}
override fun onAudioQuality(uid: Int, quality: Int, delay: Short, lost: Short) {
super.onAudioQuality(uid, quality, delay, lost)
Log.e("", "")
}
override fun onUserJoined(uid: Int, elapsed: Int) {
// super.onUserJoined(uid, elapsed)
mUid = uid
runOnUiThread {
try {
setupRemoteVideo(mUid!!)
} catch (e: Exception) {
e.printStackTrace()
}
}
Log.e("differnt_uid----", mUid.toString())
}
}
private fun initAgoraEngineAndJoinChannel() {
if(mRtcEngine==null)
{
initializeAgoraEngine()
setupVideoProfile()
}
}
//initializing rtc engine class
@Throws(Exception::class)
private fun initializeAgoraEngine() {
try {
var s = RtcEngine.getSdkVersion()
mRtcEngine = RtcEngine.create(baseContext, AgoraConstants.APPLICATION_ID, mRtcEventHandler)
} catch (e: Exception) {
// Log.e(LOG_TAG, Log.getStackTraceString(e));
throw RuntimeException("NEED TO check rtc sdk init fatal error\n" + Log.getStackTraceString(e))
}
}
@Throws(Exception::class)
private fun setupVideoProfile() {
//mRtcEngine?.muteAllRemoteAudioStreams(true)
// mLogger.log("channelName account = " + channelName + ",uid = " + 0);
mRtcEngine?.enableVideo()
//mRtcEngine.clearVideoCompositingLayout();
mRtcEngine?.enableLocalVideo(false)
mRtcEngine?.setEnableSpeakerphone(false)
mRtcEngine?.muteLocalAudioStream(true)
joinChannel()
mRtcEngine?.setVideoProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING, true)
mRtcEngine?.setChannelProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING)
mRtcEngine?.setClientRole(Constants.CLIENT_ROLE_AUDIENCE,"")
val speaker = mRtcEngine?.isSpeakerphoneEnabled
val camerafocus = mRtcEngine?.isCameraAutoFocusFaceModeSupported
Log.e("", "")
}
@Throws(Exception::class)
private fun setupRemoteVideo(uid: Int) {
val container = findViewById<FrameLayout>(R.id.fl_video_container)
if (container.childCount >= 1) {
return
}
surfaceView = RtcEngine.CreateRendererView(baseContext)
container.addView(surfaceView)
mRtcEngine?.setupRemoteVideo(VideoCanvas(surfaceView, VideoCanvas.RENDER_MODE_HIDDEN, uid))
mRtcEngine?.setRemoteVideoStreamType(uid, 1)
mRtcEngine?.setCameraAutoFocusFaceModeEnabled(false)
mRtcEngine?.muteRemoteAudioStream(uid, false)
mRtcEngine?.adjustPlaybackSignalVolume(0)
// mRtcEngine.setVideoProfile(Constants.VIDEO_PROFILE_180P, false); // Earlier than 2.3.0
surfaceView?.tag = uid // for mark purpose
val audioManager: AudioManager =
this@LiveStreamingActivity.getSystemService(Context.AUDIO_SERVICE) as AudioManager
//audioManager.mode = AudioManager.MODE_IN_CALL
val isConnected: Boolean = audioManager.isWiredHeadsetOn
if (isConnected) {
/* audioManager.isSpeakerphoneOn = false
audioManager.isWiredHeadsetOn = true*/
mRtcEngine?.setEnableSpeakerphone(false)
mRtcEngine?.setDefaultAudioRoutetoSpeakerphone(false)
mRtcEngine?.setSpeakerphoneVolume(0)
mRtcEngine?.enableInEarMonitoring(true)
// Sets the in-ear monitoring volume to 50% of original volume.
mRtcEngine?.setInEarMonitoringVolume(200)
mRtcEngine?.adjustPlaybackSignalVolume(200)
} else {
/* audioManager.isSpeakerphoneOn = true
audioManager.isWiredHeadsetOn = false*/
mRtcEngine?.setEnableSpeakerphone(true)
mRtcEngine?.setDefaultAudioRoutetoSpeakerphone(true)
mRtcEngine?.setSpeakerphoneVolume(50)
mRtcEngine?.adjustPlaybackSignalVolume(50)
mRtcEngine?.enableInEarMonitoring(false)
// Sets the in-ear monitoring volume to 50% of original volume.
mRtcEngine?.setInEarMonitoringVolume(0)
}
Log.e("", "")
}
@Throws(Exception::class)
private fun joinChannel() {
mRtcEngine?.joinChannel(
null,
AgoraConstants.CHANNEL_NAME,
"Extra Optional Data",
0
) // if you do not specify the uid, we will generate the uid for you
}
@Throws(Exception::class)
private fun leaveChannel() {
mRtcEngine!!.leaveChannel()
}
I think you first want to put setupRemoteVideo in the onFirstRemoteVideoDecoded callback instead of the onUserJoined callback. Also, in the onDestroy callback, you should call RtcEngine.destroy() instead of RtcEngine.destroy(mRtcEngine). A sketch of both changes follows.
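A minimal sketch of those two changes, reusing the question's own setupRemoteVideo and mRtcEngine (exact callback signatures may differ between Agora SDK versions):
// Set up the remote view once the first remote frame has actually been decoded.
override fun onFirstRemoteVideoDecoded(uid: Int, width: Int, height: Int, elapsed: Int) {
    runOnUiThread {
        try {
            setupRemoteVideo(uid)
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }
}

// Tear down with the parameterless, static destroy() when leaving the screen.
override fun onDestroy() {
    RtcEngine.destroy()
    mRtcEngine = null
    super.onDestroy()
}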

How to fix 'pink' image capture in the Android Camera2 API?

I am trying to capture an image using two different lenses (wide and normal). The preview works fine for both cameras simultaneously using the new multi-camera support in the Camera2 API. I am using a Huawei Mate 20 Pro.
However, when I capture the picture, it only saves pink-colored JPEG images. But when the object is close enough, the picture is captured perfectly. This is what a pink JPEG looks like:
However, when the object is close enough, the capture is fine. Here is how it looks:
Here is the main activity code:
button.setOnClickListener {
if (isRunning) {
handler.removeCallbacksAndMessages(null)
restartActivity()
} else {
button.text = "Stop"
handler.postDelayed(object : Runnable {
override fun run() {
twoLens.reset()
twoLens.isTwoLensShot = true
MainActivity.cameraParams.get(dualCamLogicalId).let {
if (it?.isOpen == true) {
Logd("In onClick. Taking Dual Cam Photo on logical camera: $dualCamLogicalId")
takePicture(this@MainActivity, it)
Toast.makeText(applicationContext, "Captured!", Toast.LENGTH_SHORT).show()
}
}
handler.postDelayed(this, 1000)
}
}, 2000)
}
isRunning = !isRunning
}
}
Here is the picture capture code.
fun captureStillPicture(activity: MainActivity, params: CameraParams) {
if (!params.isOpen) {
return
}
try {
Logd("In captureStillPicture.")
val camera = params.captureSession?.getDevice()
if (null != camera) {
params.captureBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)
params.captureBuilder?.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO)
if (params.id.equals(dualCamLogicalId) && twoLens.isTwoLensShot) {
val normalParams: CameraParams? = MainActivity.cameraParams.get(normalLensId)
val wideParams: CameraParams? = MainActivity.cameraParams.get(wideAngleId)
if (null == normalParams || null == wideParams)
return
Logd("In captureStillPicture. This is a Dual Cam shot.")
params.captureBuilder?.addTarget(normalParams.imageReader?.surface!!)
params.captureBuilder?.addTarget(wideParams.imageReader?.surface!!)
params.captureBuilder?.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, 4)
params.captureBuilder?.set(CaptureRequest.JPEG_QUALITY, 100)
if (Build.VERSION.SDK_INT >= 28) { params.captureBuilder?.set(CaptureRequest.DISTORTION_CORRECTION_MODE, CameraMetadata.DISTORTION_CORRECTION_MODE_OFF)
//This is REQUIRED to disable HDR+ on Pixel 3 - even though Pixel 3 doesn't have sepia
params.captureBuilder?.set(CaptureRequest.CONTROL_EFFECT_MODE, CameraMetadata.CONTROL_EFFECT_MODE_SEPIA)
} else {
//This is REQUIRED to disable HDR+ on Pixel 3 - even though Pixel 3 doesn't have sepia
params.captureBuilder?.set(CaptureRequest.CONTROL_EFFECT_MODE, CameraMetadata.CONTROL_EFFECT_MODE_SEPIA)
Logd("DUAL CAM DEBUG: I am setting sepia mode.")
// Logd("DUAL CAM DEBUG: I am NOT setting sepia mode.")
}
val rotation = activity.getWindowManager().getDefaultDisplay().getRotation()
var capturedImageRotation = getOrientation(params, rotation)
params.captureBuilder?.set(CaptureRequest.JPEG_ORIENTATION, capturedImageRotation)
try {
params.captureSession?.stopRepeating()
// params.captureSession?.abortCaptures()
} catch (e: CameraAccessException) {
e.printStackTrace()
}
//Do the capture
// TODO: Capture BURST HERE
if (28 <= Build.VERSION.SDK_INT)
params.captureSession?.captureSingleRequest(params.captureBuilder?.build(), params.backgroundExecutor, StillCaptureSessionCallback(activity, params))
else
params.captureSession?.capture(params.captureBuilder?.build(), StillCaptureSessionCallback(activity, params),
params.backgroundHandler)
}
} catch (e: CameraAccessException) {
e.printStackTrace()
} catch (e: IllegalStateException) {
Logd("captureStillPicture IllegalStateException, aborting: " + e)
}
}
This is how I am grabbing the captured pictures.
fun getImagesCaptured(activity: MainActivity, twoLens: TwoLensCoordinator){
Logd("Normal image timestamp: " + twoLens.normalImage?.timestamp)
Logd("Wide image timestamp: " + twoLens.wideImage?.timestamp)
val wideBuffer: ByteBuffer? = twoLens.wideImage!!.planes[0].buffer
val wideBytes = ByteArray(wideBuffer!!.remaining())
wideBuffer.get(wideBytes)
val normalBuffer: ByteBuffer? = twoLens.normalImage!!.planes[0].buffer
val normalBytes = ByteArray(normalBuffer!!.remaining())
normalBuffer.get(normalBytes)
val options = BitmapFactory.Options()
val wideMat: Mat = Mat(twoLens.wideImage!!.height, twoLens.wideImage!!.width, CvType.CV_8UC1)
val tempWideBitmap = BitmapFactory.decodeByteArray(wideBytes, 0, wideBytes.size, options)
val normalMat: Mat = Mat(twoLens.normalImage!!.height, twoLens.normalImage!!.width, CvType.CV_8UC1)
val tempNormalBitmap = BitmapFactory.decodeByteArray(normalBytes, 0, normalBytes.size, options)
save(normalBytes, "NormalShot")
save(wideBytes, "WideShot")
}
The save function is here.
fun save(bytes: Bitmap, tempName: String) {
val timeStamp = SimpleDateFormat("yyyyMMdd_HHmmss").format(Date())
val dataDir = File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS), "TwoCameraImages")
if (!dataDir.exists()) {
dataDir.mkdir()
}
val fileName = tempName + "_IMG_$timeStamp.jpg"
val fileDir = File(dataDir.path + File.separator + fileName)
try {
val fileOutputStream = FileOutputStream(fileDir)
bytes.compress(Bitmap.CompressFormat.JPEG, 100, fileOutputStream)
//fileOutputStream.write(bytes)
fileOutputStream.close()
} catch (e: FileNotFoundException) {
e.printStackTrace()
} catch (e: IOException) {
e.printStackTrace()
}
}
I built on top of the code given here: https://github.com/google/basicbokeh, switched to the rear cameras, and removed the face calculations. But this pink bitmap is not going away. Any help?
