Is there an efficient way to do app-to-app communication using IPC? I followed the guide on bound services that use AIDL in the docs, but what I really want is to transfer an image between the two apps (the images are high quality). The only issue is that Android only lets you pass roughly 1024 kB, or even less, through a Bundle; anything bigger ends up in a TransactionTooLargeException. Right now I'm compressing the image and passing a Base64 string between the two apps, and it works fine, but some images can't be compressed enough. How can I make something like that work? I'm compressing the image roughly like this:
bitmap.compress(Bitmap.CompressFormat.WEBP, /* quality = */ 90, outputStream) // then Base64-encoded with Base64.NO_WRAP
and I reduce the quality whenever I get a TransactionTooLargeException. Any ideas on how to get something like this working on Android?
The thing is, I don't want to open the other app. The receiving app processes the image and sends back a status string about the image that was sent, like 'very cool image' or 'very bad image'.
Thanks.
var quality = 100

private fun sendImageToDevice(quality: Int, icon: Bitmap) {
    Log.e(TAG, "quality of image is $quality")
    runOnUiThread {
        Toast.makeText(this@MainActivity, "Image Quality $quality", Toast.LENGTH_SHORT).show()
    }
    if (quality > 0) {
        try {
            mProcessImageService?.sendImageFormat(icon.toBase64(quality))
        } catch (e: TransactionTooLargeException) {
            Log.e(TAG, "transaction too large: ${e.message}")
            this.quality = quality - 20
            sendImageToDevice(this.quality, icon)
        }
    } else {
        // give up: the image could not be shrunk below the Binder transaction limit
    }
}
fun Bitmap.toBase64(quality: Int): String {
    val outputStream = ByteArrayOutputStream()
    this.compress(Bitmap.CompressFormat.JPEG, quality, outputStream)
    val base64String: String = Base64.encodeToString(outputStream.toByteArray(), Base64.NO_WRAP)
    Log.d("MainActivity", "outputstream size is ${outputStream.size()}")
    return base64String
}
Related
I use the CameraX API on Android to analyze multiple frames over a period of 5 to 60 seconds. There are multiple conditional tasks I want to do with the images, depending on which tasks the user selected. These include:
scan for barcodes/QR codes (using Google MLKit)
scan for text (using Google MLKit)
custom edge detection using OpenCV in C++ with JNI
save the image as a PNG file (lossless)
show frames in app (PreviewView or ImageView)
These tasks vary heavily in workload and time to finish, so instead of waiting for each task to finish before getting a new frame, I want to receive frames constantly and let each task start on the newest frame only once it has finished its previous workload.
While MLKit takes YUV images as input, OpenCV uses RGBA (or BGRA), so no matter which output format I choose I will need to convert it somehow. My choice was RGBA_8888 as the output format, converted into a bitmap, since bitmaps are supported by both MLKit and OpenCV and the conversion from RGBA to bitmap is much quicker than from YUV. But with bitmaps I get huge memory problems, to the extent that the app simply gets killed by Android. Using the Android Studio Profiler, I noticed the native part of RAM usage going up constantly and staying that high even after the workload is done and the camera is unbound.
I read online that it is heavily suggested to recycle bitmaps after use to free up their memory. The problem is that all these tasks run and finish independently, and I couldn't come up with a good way to recycle each bitmap as soon as possible; keeping them in memory for a fixed time (like 10 seconds) heavily increases memory usage.
I thought about using jobs for each task and recycling when all jobs are done, but this doesn't work for the MLKit analyses because they return their results through a listener, so the job ends before the task is actually done.
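(For illustration only: a job can be made to span the listener-based MLKit call by awaiting the returned Task, assuming the kotlinx-coroutines-play-services artifact and its kotlinx.coroutines.tasks.await() extension are available. A minimal sketch under that assumption:)

private suspend fun runBarcodeScanner(bitmap: Bitmap, rotationDegrees: Int) {
    barcodeScannerBusy = true
    try {
        val inputImage = InputImage.fromBitmap(bitmap, rotationDegrees)
        // await() suspends until MLKit delivers its result, so the surrounding Job
        // only completes once the scan has actually finished
        val barcodes = barcodeScanner?.process(inputImage)?.await()
        // do stuff with barcodes
    } finally {
        barcodeScannerBusy = false
    }
}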
I appreciate any input on how to efficiently recycle the bitmaps, use something other than bitmaps, reduce memory consumption, or any code improvements in general!
Here are code samples for the image analysis and for the barcode scanner. They should suffice for giving a general idea of the running code.
val imageAnalysisBuilder =
    ImageAnalysis
        .Builder()
        .setTargetResolution(android.util.Size(720, 1280))
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setOutputImageFormat(OUTPUT_IMAGE_FORMAT_RGBA_8888)

val imageAnalysis = imageAnalysisBuilder.build()

imageAnalysis.setAnalyzer(Executors.newSingleThreadExecutor()) { imageProxy ->
    // bitmap conversion from https://github.com/android/camera-samples
    val bitmap = Bitmap.createBitmap(imageProxy.width, imageProxy.height, Bitmap.Config.ARGB_8888)
    imageProxy.use { bitmap.copyPixelsFromBuffer(it.planes[0].buffer) }
    val rotationDegrees = imageProxy.imageInfo.rotationDegrees
    imageProxy.close()

    if (!barcodeScannerBusy) {
        // launch the task without blocking the analyzer
        CoroutineScope(Dispatchers.Default).launch { startMlKitBarcodeScanner(bitmap, rotationDegrees) }
    }
    if (!textRecognitionBusy) {
        CoroutineScope(Dispatchers.Default).launch { startMlKitTextRecognition(bitmap, rotationDegrees) }
    }
    // more tasks with same pattern
    // when to recycle bitmap???
}
private fun startMlKitBarcodeScanner(bitmap: Bitmap, rotationDegrees: Int) {
    barcodeScannerBusy = true
    val inputImage = InputImage.fromBitmap(bitmap, rotationDegrees)
    barcodeScanner?.process(inputImage)
        ?.addOnSuccessListener { barcodes ->
            // do stuff with barcodes
        }
        ?.addOnFailureListener {
            // failure handling
        }
        ?.addOnCompleteListener {
            barcodeScannerBusy = false
            // can't recycle bitmap here since other tasks might still use it
        }
}
I have solved the issue by now, mainly by using a bitmap buffer variable for each task that works with the image. The downside is that in the worst case I create the same bitmap multiple times in a row; the upside is that each task can use its own bitmap independently of any other task.
Also, since the device I use is not the most powerful (quite the contrary, in fact), I decided to split some of the tasks up into multiple analyzers and assign a new analyzer to the camera when I need it.
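(Roughly, swapping an analyzer at runtime looks like the sketch below; clearAnalyzer()/setAnalyzer() are standard ImageAnalysis methods, while analysisExecutor and the lambda body are just placeholders.)

// detach the current analyzer and attach the one for the next task
imageAnalysis.clearAnalyzer()
imageAnalysis.setAnalyzer(analysisExecutor) { imageProxy ->
    // analysis for the next selected task
    imageProxy.close()
}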
Also, if you copy the planes of the imageProxy multiple times the way I do here, you need to call rewind() on the buffer before filling the next bitmap from it.
lateinit var barcodeScannerBitmapBuffer: Bitmap
lateinit var textRecognitionBitmapBuffer: Bitmap

val imageAnalysisBuilder =
    ImageAnalysis
        .Builder()
        .setTargetResolution(android.util.Size(720, 1280))
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setOutputImageFormat(OUTPUT_IMAGE_FORMAT_RGBA_8888)

val imageAnalysis = imageAnalysisBuilder.build()

imageAnalysis.setAnalyzer(Executors.newSingleThreadExecutor()) { imageProxy ->
    if (barcodeScannerBusy && textRecognitionBusy) {
        imageProxy.close()
        return@setAnalyzer
    }
    if (!::barcodeScannerBitmapBuffer.isInitialized) {
        barcodeScannerBitmapBuffer = Bitmap.createBitmap(
            imageProxy.width,
            imageProxy.height,
            Bitmap.Config.ARGB_8888
        )
    }
    if (!::textRecognitionBitmapBuffer.isInitialized) {
        textRecognitionBitmapBuffer = Bitmap.createBitmap(
            imageProxy.width,
            imageProxy.height,
            Bitmap.Config.ARGB_8888
        )
    }
    if (!barcodeScannerBusy) {
        imageProxy.use {
            // bitmap conversion from https://github.com/android/camera-samples
            barcodeScannerBitmapBuffer.copyPixelsFromBuffer(it.planes[0].buffer)
            // rewind so the same buffer can be copied again below
            it.planes[0].buffer.rewind()
        }
    }
    if (!textRecognitionBusy) {
        imageProxy.use { textRecognitionBitmapBuffer.copyPixelsFromBuffer(it.planes[0].buffer) }
    }
    val rotationDegrees = imageProxy.imageInfo.rotationDegrees
    imageProxy.close()

    if (::barcodeScannerBitmapBuffer.isInitialized && !barcodeScannerBusy) {
        startMlKitBarcodeScanner(barcodeScannerBitmapBuffer, rotationDegrees)
    }
    if (::textRecognitionBitmapBuffer.isInitialized && !textRecognitionBusy) {
        startMlKitTextRecognition(textRecognitionBitmapBuffer, rotationDegrees)
    }
}
I'm just trying to resize an image after the user launches the Image Picker from my app and chooses an image file on the local device (handling a remote image from Dropbox or something will be another battle). While this has worked for me previously, now I'm getting this exception:
java.lang.RuntimeException: Failure delivering result ResultInfo{who=null, request=1105296364, result=-1, data=Intent { dat=content://com.android.externalstorage.documents/document/primary:Download/20170307_223207_cropped.jpg flg=0x1 }} to activity {my.app/MainActivity}: java.io.FileNotFoundException: No content provider: /document/primary:Download/20170307_223207_cropped.jpg
This occurs after the image is chosen in the Picker, because I'm running my "processing" code to locate the image, resize it, and copy it to a subfolder in the app's folder.
Like I said, this worked, but I'm not sure what's wrong now. I've tried this on the emulator as well as on my Galaxy S10 via USB debugging and it's the same result. The image is in the local storage "Download" folder on the emulator as well as my own device.
The URI looks weird (I mean the picture is just in the local storage "Download" folder) but I'm no URI expert so I assume it's fine, because that's what the Image Picker returns.
Here's the immediate code that's throwing the exception (specifically, the ImageDecoder.decodeBitmap call):
private fun copyFileToAppDataFolder(
    context: Context,
    imageTempPath: String
): String {
    // ensure we are sent at least a non-empty path
    if (imageTempPath.isEmpty()) {
        return ""
    }

    val appDataFolder = "${context.dataDir.absolutePath}/images/firearms"

    var filename = imageTempPath.substringAfterLast("/", "")
    if (filename.isBlank()) {
        filename = imageTempPath.substringAfterLast("%2F", "")
    }
    // couldn't parse filename from Uri; exit
    if (filename.isBlank()) {
        return ""
    }

    // get a bitmap of the selected image so it can be saved in an outputstream
    val selectedImage: Bitmap? = if (Build.VERSION.SDK_INT <= 28) {
        MediaStore.Images.Media.getBitmap(context.contentResolver, Uri.parse(imageTempPath))
    } else {
        ImageDecoder.decodeBitmap(ImageDecoder.createSource(context.contentResolver, Uri.parse(imageTempPath)))
    }
    if (selectedImage == null) {
        return ""
    }

    val destinationImagePath = "$appDataFolder/$filename"
    val destinationStream = FileOutputStream(destinationImagePath)
    selectedImage.compress(Bitmap.CompressFormat.JPEG, 100, destinationStream)
    destinationStream.close()

    return destinationImagePath
}
The function above is called from my ViewModel (the processFirearmImage function there just calls the one above), where I pass in the result URI from the Image Picker as well as the application Context:
// this event is fired when the Image Picker returns
is AddEditFirearmEvent.AssignedPicture -> {
    val resizedImagePath = ShotTrackerUtility.processFirearmImage(
        event.applicationContext, // this is from LocalContext.current in the Composable
        event.value               // result uri from the image picker
    )
    _firearmImageUrl.value = resizedImagePath
}
I don't know, lol. I can't believe this is such a difficult thing, but information on it sure seems sparse (for Compose especially), and I don't really consider launching an Image Picker and resizing the resulting image to be that unusual. Any help from you smart people would be great.
Taking a step away from programming problems and coming back seems about the best bet sometimes, lol.
I came back tonight and within a couple of minutes noticed that I was passing an improper Uri to the ImageDecoder.createSource method, which was causing the exception. Basically this was happening:
val imageTempPath = theUriReturnedFromImagePicker.path ?: ""
ImageDecoder.decodeBitmap(ImageDecoder.createSource(context.contentResolver, Uri.parse(imageTempPath)))
And it should've been:
val imageUri = theUriReturnedFromImagePicker
ImageDecoder.decodeBitmap(ImageDecoder.createSource(context.contentResolver, imageUri))
As I mentioned in the OP, this originally worked, but I must have moved some code around (mostly the arguments I'm sending to various methods/classes). I'm also using that Uri.path part to get the filename of the chosen image, so I overlooked and/or got confused about what I was passing to ImageDecoder.createSource.
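(If it helps anyone, a sturdier way to get a filename than parsing Uri.path is to ask the content resolver for OpenableColumns.DISPLAY_NAME; a minimal sketch, with a helper name I made up:)

fun displayNameFromUri(context: Context, uri: Uri): String? =
    context.contentResolver.query(uri, arrayOf(OpenableColumns.DISPLAY_NAME), null, null, null)
        ?.use { cursor ->
            // moveToFirst() is false when the provider returns no row for this Uri
            if (cursor.moveToFirst()) {
                cursor.getString(cursor.getColumnIndexOrThrow(OpenableColumns.DISPLAY_NAME))
            } else {
                null
            }
        }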
Doh. Maybe someone else will do something dumb like me and this can help.
I'm looking for an alternative to Movie for getting the duration of a GIF. I tried ImageDecoder, but I wasn't able to get the duration.
//Deprecated
val movie = Movie.decodeStream(`is`)
val duration = movie.duration()
Movie probably still works even though it's deprecated. But if you're not going to go on and use that Movie instance to play the GIF, it's a wasteful way of getting the duration, because it has to load the entire thing when all you really need to find the duration is in the metadata at the beginning of the file.
You could use the Metadata Extractor library to do this.
Since it's reading from a file, it is blocking and should be done in the background. Here's an example using a suspend function to accomplish that.
/** Returns duration in ms of the GIF of the stream, 0 if it has no duration,
 * or null if it could not be read. */
suspend fun InputStream.readGifDurationOrNull(): Int? = withContext(Dispatchers.IO) {
    try {
        val metadata = ImageMetadataReader.readMetadata(this@readGifDurationOrNull)
        val gifControlDirectories = metadata.getDirectoriesOfType(GifControlDirectory::class.java)
        if (gifControlDirectories.size <= 1) {
            return@withContext 0
        }
        gifControlDirectories.sumOf {
            it.getInt(GifControlDirectory.TAG_DELAY) * 10 // GIF uses 10 ms units
        }
    } catch (e: Exception) {
        Log.e("readGifDurationOrNull", "Could not read metadata from input", e)
        null
    }
}
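A call site might look something like this (the file path and coroutine scope are just placeholders):

lifecycleScope.launch {
    val durationMs = File("/path/to/animation.gif").inputStream().use { stream ->
        stream.readGifDurationOrNull()
    }
    Log.d("GifDuration", "duration = $durationMs ms")
}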
Credit to this answer for how to get the appropriate duration info from the metadata.
I want to upload some files, 30 MB max, to my server with an OkHttp WebSocket.
The WebSocket transfer only allows a String or a ByteString.
So I want to convert my file to a ByteString and then upload it to my server (Node.js) via the WebSocket.
I use ByteString.of() to convert the ByteArray like this:
val file = "../tmp/file.jpg"
try {
    val encoded: ByteArray = Files.readAllBytes(Paths.get(file))
    val byteString = ByteString.of(encoded, 0, 1024)
    // ..send data
    Log.d("log1", "DATA DONE")
} catch (e: IOException) {
    Log.d("log1", "ERROR: $e")
}
But what confuses me is that the ByteString.of() function takes 3 parameters:
First: ByteArray
Second: Offset
Third: Bytecount
My question is: what do the last 2 parameters do, and what is the reason behind them? I can't find any clear documentation about this, just the roadmap note that it was added.
If you have any links or suggestions please let me know.
Offset is where you want to start reading your bytes from.
Assume a text file with the following data:
Computer-science World
Quantum Computing
The offset of the first line is 0, i.e. <0, Computer-science World>; for the second line the offset is 23, i.e. <23, Quantum Computing>.
ByteCount is the number of bytes you want to include.
Here is a simple piece of code to illustrate:
byte[] bytes1 = "Hello, World!".getBytes(StandardCharsets.UTF_8);
ByteString byteString = ByteString.of(bytes1, 2, 9);
// Verify that the bytes were copied out.
System.out.print(byteString.utf8());
The output is: llo, Worl
So basically, the method can be used like a substring. But since you want to send all of the bytes, you can simply use:
fun of(vararg data: Byte): ByteString
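For example, sending the whole file over the WebSocket could look roughly like this (a sketch only; the webSocket instance and the file path are assumed to exist in your code):

val encoded: ByteArray = Files.readAllBytes(Paths.get("../tmp/file.jpg"))
// wrap the whole array; the spread operator passes it to of(vararg data: Byte)
val byteString = ByteString.of(*encoded)
// OkHttp's WebSocket.send(ByteString) sends it as a binary frame
webSocket.send(byteString)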
I'm trying to convert an image taken from resources to a ByteArray, which will later be sent through a socket, and I've been measuring the time of this conversion.
I've done it on both Flutter and native Android (Kotlin). All of the tests were done on the same image, which was about 1-2 MB.
Flutter code :
sendMessage() async {
  if (socket != null) {
    Stopwatch start = Stopwatch()..start();
    final imageBytes = await rootBundle.load('assets/images/stars.jpg');
    final image = base64Encode(imageBytes.buffer.asUint8List(imageBytes.offsetInBytes, imageBytes.lengthInBytes));
    print('Converting took ${start.elapsedMilliseconds}');
    socket.emit("message", [image]);
  }
}
Kotlin code:
private fun sendMessage() {
    var message = ""
    val thread = Thread(Runnable {
        val start = SystemClock.elapsedRealtime()
        val bitmap = BitmapFactory.decodeResource(resources, R.drawable.stars)
        message = Base64.encodeToString(getBytesFromBitmap(bitmap), Base64.DEFAULT)
        Log.d("Tag", "Converting time was : ${SystemClock.elapsedRealtime() - start}")
    })
    thread.start()
    thread.join()
    socket.emit("message", message)
}

private fun getBytesFromBitmap(bitmap: Bitmap): ByteArray? {
    val stream = ByteArrayOutputStream()
    bitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream)
    return stream.toByteArray()
}
I was actually expecting the native code to be much, much faster than Flutter's, but that's not the case: the conversion takes about 50 ms in Flutter and around 2000-3000 ms natively.
I thought threading might be the cause, so I tried running the conversion on a background thread in the native code, but it didn't help.
Can you please tell me why there is such a difference in time, and how I can implement this better in native code? Is there a way to skip decoding to a Bitmap, etc.? Maybe that is what makes it so slow.
EDIT: Added the getBytesFromBitmap function.
The difference you see is that in the Flutter code you just read your data without any image decoding, while in Kotlin you first decode the resource into a Bitmap and then compress() it back into JPEG. If you want to speed it up, simply get an InputStream by calling Resources#openRawResource and read your image resource without any decoding.
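A rough sketch of that, assuming the image is stored as a plain JPEG/PNG resource (the resource id and tag are reused from the question):

val start = SystemClock.elapsedRealtime()
// read the resource bytes as-is: no Bitmap decode, no JPEG re-encode
val bytes = resources.openRawResource(R.drawable.stars).use { it.readBytes() }
val message = Base64.encodeToString(bytes, Base64.DEFAULT)
Log.d("Tag", "Converting time was : ${SystemClock.elapsedRealtime() - start}")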
It has something to do with the way you convert it to bytes... Can you please post your getBytesFromBitmap function? Also, the conversion in native code really should be done on a background thread; please post your results for that case.