How to get a list of all bitmaps from webp animation using android fresco?
The code below always returns null instead of a bitmap:
val imageRequest = ImageRequestBuilder
    .newBuilderWithSource(Uri.parse("https://upload.wikimedia.org/wikipedia/commons/4/41/Sunflower_from_Silesia2.jpg"))
    .build()
val imagePipeline = Fresco.getImagePipeline()
val dataSource = imagePipeline.fetchDecodedImage(imageRequest, this)
dataSource.subscribe(object : BaseBitmapDataSubscriber() {
    override fun onFailureImpl(dataSource: DataSource<CloseableReference<CloseableImage>>?) {
        val m = dataSource
    }

    override fun onNewResultImpl(bitmap: Bitmap?) {
        val m = bitmap
    }
}, CallerThreadExecutor.getInstance())
For anyone interested, the Fresco docs have this line: "This subscriber doesn't work for animated images as those can not be represented as a single bitmap."
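Given that limitation, animated images have to be handled through the generic BaseDataSubscriber instead of BaseBitmapDataSubscriber. The sketch below is an assumption-laden outline based on Fresco's animated-image classes (CloseableAnimatedImage, AnimatedImageResult, getDecodedFrame); it assumes setDecodeAllFrames(true) causes every frame to be decoded up front, and it is not a verified implementation. The URL is a placeholder, and in real code each frame's CloseableReference must be closed when you are done with it.

```kotlin
val decodeOptions = ImageDecodeOptions.newBuilder()
    .setDecodeAllFrames(true) // decode every frame, not just the preview frame
    .build()
val imageRequest = ImageRequestBuilder
    .newBuilderWithSource(Uri.parse("https://example.com/animated.webp")) // placeholder URL
    .setImageDecodeOptions(decodeOptions)
    .build()
val dataSource = Fresco.getImagePipeline().fetchDecodedImage(imageRequest, this)
dataSource.subscribe(object : BaseDataSubscriber<CloseableReference<CloseableImage>>() {
    override fun onNewResultImpl(dataSource: DataSource<CloseableReference<CloseableImage>>) {
        val ref = dataSource.result ?: return
        try {
            val image = ref.get()
            if (image is CloseableAnimatedImage) {
                val result = image.imageResult
                // Collect one Bitmap per frame; only valid while ref is open
                val bitmaps = (0 until result.image.frameCount)
                    .mapNotNull { result.getDecodedFrame(it)?.get() }
            }
        } finally {
            CloseableReference.closeSafely(ref)
        }
    }

    override fun onFailureImpl(dataSource: DataSource<CloseableReference<CloseableImage>>) {}
}, CallerThreadExecutor.getInstance())
```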
I have a question about loading images from a URL. How do I asynchronously load an image in Jetpack Compose? I know the Coil library; using it is pretty simple and it works great. But libraries like Coil, Glide, and Picasso are prohibited; only the Android SDK and Google-supported libraries are allowed.
Thank you for your advice.
Something like this? But how do I display it in the Jetpack Compose UI?
@Composable
fun LoadImage(url: String, @DrawableRes placeHolderImage: Int): MutableState<Bitmap?> {
    // Show the placeholder image until the download finishes;
    // remember is needed so the state survives recomposition
    val defaultBitmap =
        BitmapFactory.decodeResource(LocalContext.current.resources, placeHolderImage)
    val bitmapState: MutableState<Bitmap?> = remember { mutableStateOf(defaultBitmap) }
    GlobalScope.launch(Dispatchers.IO) {
        val urlConnection = URL(url).openConnection() as HttpURLConnection
        try {
            urlConnection.doInput = true
            urlConnection.connect()
            val input = BufferedInputStream(urlConnection.inputStream)
            val bitmap = BitmapFactory.decodeStream(input)
            withContext(Dispatchers.Main) {
                bitmapState.value = bitmap
            }
        } catch (e: Exception) {
            // ignore and keep the placeholder
        } finally {
            urlConnection.disconnect()
        }
    }
    return bitmapState
}
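To actually display it in Compose, one option is to wrap the download in produceState and render the result with Image. This is a sketch, assuming androidx.compose.foundation.Image and core-ktx are available; loadUrlBitmap is a hypothetical helper standing in for any blocking download such as the HttpURLConnection code above.

```kotlin
@Composable
fun UrlImage(url: String, @DrawableRes placeHolderImage: Int) {
    val context = LocalContext.current
    // produceState survives recomposition and re-runs the loader when url changes
    val bitmap by produceState<Bitmap?>(initialValue = null, url) {
        value = withContext(Dispatchers.IO) {
            // loadUrlBitmap is hypothetical: any blocking URL-to-Bitmap download
            runCatching { loadUrlBitmap(url) }.getOrNull()
        }
    }
    Image(
        bitmap = (bitmap ?: BitmapFactory.decodeResource(context.resources, placeHolderImage))
            .asImageBitmap(),
        contentDescription = null
    )
}
```

produceState avoids the two problems in the snippet above: the coroutine is scoped to the composition instead of GlobalScope, and it is not relaunched on every recomposition.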
I'm trying to return the bitmap value from a lambda function, but I get the error: lateinit property bitmap has not been initialized ... Is there a way to check whether the ImageRequest is complete before returning the bitmap?
fun getBitmap(context: Context, imageUrl: String): Bitmap {
    lateinit var bitmap: Bitmap
    val imageRequest = ImageRequest.Builder(context)
        .data(imageUrl)
        .target { drawable ->
            bitmap = drawable.toBitmap() // This is the bitmap 🚨
        }
        .build()
    ImageLoader(context).enqueue(imageRequest)
    return bitmap
}
The problem is that enqueue() is asynchronous: the function returns before the target lambda has run, so bitmap is never initialized. You have to use the suspending execute() instead of enqueue(). See:
private suspend fun getBitmap(context: Context, url: String): Bitmap? {
    var bitmap: Bitmap? = null
    val request = ImageRequest.Builder(context)
        .data(url)
        .target(
            onStart = {
                Log.d(TAG, "Coil loader started.")
            },
            onSuccess = { result ->
                Log.d(TAG, "Coil loader success.")
                bitmap = result.toBitmapOrNull() // Or (result as BitmapDrawable).bitmap
            },
            onError = {
                Log.e(TAG, "Coil loading error")
            }
        )
        .build()
    // execute() suspends until the request completes, so bitmap is set on return
    context.imageLoader.execute(request)
    return bitmap
}
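Since execute() already returns the final result, the target callbacks can be skipped entirely. This is a sketch assuming Coil's SuccessResult API; allowHardware(false) keeps the decoded bitmap readable on the CPU.

```kotlin
private suspend fun getBitmap(context: Context, url: String): Bitmap? {
    val request = ImageRequest.Builder(context)
        .data(url)
        .allowHardware(false) // hardware bitmaps can't have their pixels read back
        .build()
    // execute() suspends and returns either SuccessResult or ErrorResult
    val result = context.imageLoader.execute(request)
    return (result as? SuccessResult)?.drawable?.toBitmap()
}
```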
My application uses the camera to take pictures and then saves them in the MediaStore. I would like to show these pictures in my RecyclerView using Glide, but I don't know how to do it.
A function that saves the image:
private fun imageCapture() {
    // Set desired name and type of captured image
    val contentValues = ContentValues().apply {
        put(MediaStore.MediaColumns.DISPLAY_NAME, "${what_is_that_insect_tv.text}")
        put(MediaStore.MediaColumns.DATE_MODIFIED, Calendar.getInstance().timeInMillis / 1000L)
        put(MediaStore.MediaColumns.MIME_TYPE, "image/jpeg")
    }
    // Create the output file option to store the captured image in MediaStore
    val outputFileOptions = ImageCapture.OutputFileOptions
        .Builder(resolver, MediaStore.Images.Media.EXTERNAL_CONTENT_URI, contentValues)
        .build()
    // Initiate image capture
    imageCapture?.takePicture(
        outputFileOptions,
        cameraExecutor,
        object : ImageCapture.OnImageSavedCallback {
            override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
                // Image was successfully saved to `outputFileResults.savedUri`
            }

            override fun onError(exception: ImageCaptureException) {
                val errorType = exception.imageCaptureError
                Toast.makeText(requireContext(), "$errorType", Toast.LENGTH_SHORT).show()
            }
        })
}
Function in Adapter
fun bind(insect: Insect) {
    with(itemView) {
        name_insect_item.text = insect.name
        Glide.with(this)
            .load() // what do I pass here?
            .into(this.image_insect_item)
    }
}
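Besides keeping the Uri from onImageSaved, the saved pictures can also be queried back from the MediaStore at startup to feed the adapter. A minimal sketch using only standard ContentResolver/MediaStore APIs:

```kotlin
fun loadSavedImageUris(resolver: ContentResolver): List<Uri> {
    val uris = mutableListOf<Uri>()
    resolver.query(
        MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        arrayOf(MediaStore.Images.Media._ID), // only the row id is needed
        null, null,
        "${MediaStore.Images.Media.DATE_MODIFIED} DESC" // newest first
    )?.use { cursor ->
        val idCol = cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID)
        while (cursor.moveToNext()) {
            // Build a content:// Uri for each row; Glide can load these directly
            uris += ContentUris.withAppendedId(
                MediaStore.Images.Media.EXTERNAL_CONTENT_URI, cursor.getLong(idCol))
        }
    }
    return uris
}
```

Each Uri can then be stored on the item model and passed to Glide's load() in bind().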
In order to use Glide you must obtain the URI of the saved image; in onImageSaved you can read it from the result like so:
override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
    val uri = outputFileResults.savedUri
}
You should then be able to put the Uri in an ArrayList you feed to the adapter. Got it from here:
https://developer.android.com/reference/androidx/camera/core/ImageCapture.OutputFileResults#getSavedUri()
I do this in my code like this:
Glide.with(holder.imageView.context)
    .load(File(hotel.imageId2))
    .into(holder.imageView)
also this might help:
RecyclerView- Glide
I use the new CameraX API for taking a picture like this :
imageButton.setOnClickListener {
    val file = File(externalMediaDirs.first(),
        "${System.currentTimeMillis()}.jpg")

    imageCapture.takePicture(executor, object :
        ImageCapture.OnImageCapturedListener() {
        override fun onCaptureSuccess(
            image: ImageProxy,
            rotationDegrees: Int
        ) {
            // DO I NEED TO USE 'image' TO ACCESS THE IMAGE DATA FOR MANIPULATION (e.g. image.planes)?
        }

        override fun onError(
            imageCaptureError: ImageCapture.ImageCaptureError,
            message: String,
            cause: Throwable?
        ) {
            val msg = "Photo capture failed: $message"
            Log.e("CameraXApp", msg, cause)
        }
    })

    imageCapture.takePicture(file, executor,
        object : ImageCapture.OnImageSavedListener {
            override fun onError(
                imageCaptureError: ImageCapture.ImageCaptureError,
                message: String,
                exc: Throwable?
            ) {
                val msg = "Photo capture failed: $message"
                Log.e("CameraXApp", msg, exc)
            }

            override fun onImageSaved(file: File) {
                val msg = "Photo capture succeeded: ${file.absolutePath}"
                Log.d("CameraXApp", msg)
            }
        })
}
I want to apply some image processing with RenderScript when the image is captured, but I don't know how to access the pixels of the captured image.
Can someone provide a solution?
I tried image.planes within the onCaptureSuccess() callback (see comment).
I must admit that I am new to this and don't really know what planes are. Before this, I only worked with Bitmaps (doing some image processing on Bitmaps with RenderScript). Is there a way to turn a frame/image into a Bitmap and apply some sort of image processing on it "on the fly" before saving it as a file? If yes, how? The official CameraX guidelines are not very helpful when it comes to this.
This is probably a very late answer, but the answer is in the ImageCapture callback.
You get an Image with
val image = imageProxy.image
Note this will only work when you have images in YUV format; to configure your camera for that you can do
imageCapture = ImageCapture.Builder()
    .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
    .setBufferFormat(ImageFormat.YUV_420_888)
    ...
    .build()
Now you can get image.planes.
The default image format is JPEG, and you can access its single plane's buffer with
val buffer: ByteBuffer = imageProxy.image!!.planes[0].buffer
and if you plan on using YUV:
val image = imageProxy.image!!
val yBuffer = image.planes[0].buffer // Y
val uBuffer = image.planes[1].buffer // U
val vBuffer = image.planes[2].buffer // V
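For RenderScript-style processing, the three planes are usually packed back into one NV21 byte array first. The helper below is the widely circulated simplified version: it assumes the chroma planes are laid out so that copying V then U yields the VU order NV21 expects (common when pixelStride == 2 and the planes share a buffer) and that there is no row padding; real camera buffers may be padded, in which case you must copy row by row using rowStride. The plane packing itself is plain ByteBuffer work:

```kotlin
import java.nio.ByteBuffer

// Packs YUV_420_888 planes into NV21 byte order: all of Y, then V, then U.
// Simplified sketch: assumes no row padding (rowStride == width).
fun yuv420ToNv21(yBuffer: ByteBuffer, uBuffer: ByteBuffer, vBuffer: ByteBuffer): ByteArray {
    val ySize = yBuffer.remaining()
    val uSize = uBuffer.remaining()
    val vSize = vBuffer.remaining()
    val nv21 = ByteArray(ySize + vSize + uSize)
    yBuffer.get(nv21, 0, ySize)             // luma plane first
    vBuffer.get(nv21, ySize, vSize)         // NV21 stores V before U
    uBuffer.get(nv21, ySize + vSize, uSize)
    return nv21
}
```

On Android the resulting array can then be wrapped in YuvImage(nv21, ImageFormat.NV21, width, height, null), compressed to JPEG, and decoded with BitmapFactory into a Bitmap you can process.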
I was working with CameraX and had a hard time converting the captured ImageProxy to a Bitmap. After searching and experimenting, I formulated a solution. Later I found that it was not optimal, so I changed the design, and that forced me to drop hours of work.
Since I (or someone else) might need it in the future, I decided to post it here as a question with a self-answer, for reference and scrutiny. Feel free to add a better answer if you have one.
The relevant code is:
class ImagePickerActivity : AppCompatActivity() {
    private var width = 325
    private var height = 205

    @RequiresApi(Build.VERSION_CODES.LOLLIPOP)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_image_picker)
        view_finder.post { startCamera() }
    }

    @RequiresApi(Build.VERSION_CODES.LOLLIPOP)
    private fun startCamera() {
        // Create configuration object for the viewfinder use case
        val previewConfig = PreviewConfig.Builder().apply {
            setTargetAspectRatio(Rational(1, 1))
            //setTargetResolution(Size(width, height))
            setLensFacing(CameraX.LensFacing.BACK)
            setTargetAspectRatio(Rational(width, height))
        }.build()
        val preview = Preview(previewConfig)

        // Create configuration object for the image capture use case
        val imageCaptureConfig = ImageCaptureConfig.Builder()
            .apply {
                setTargetAspectRatio(Rational(1, 1))
                // We don't set a resolution for image capture; instead, we
                // select a capture mode which will infer the appropriate
                // resolution based on aspect ratio and requested mode
                setCaptureMode(ImageCapture.CaptureMode.MIN_LATENCY)
            }.build()

        // Build the image capture use case and attach button click listener
        val imageCapture = ImageCapture(imageCaptureConfig)
        capture_button.setOnClickListener {
            imageCapture.takePicture(object : ImageCapture.OnImageCapturedListener() {
                override fun onCaptureSuccess(image: ImageProxy?, rotationDegrees: Int) {
                    // How do I get the bitmap here?
                    //imageView.setImageBitmap(someBitmap)
                }

                override fun onError(useCaseError: ImageCapture.UseCaseError?, message: String?, cause: Throwable?) {
                    val msg = "Photo capture failed: $message"
                    Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
                    Log.e(localClassName, msg)
                    cause?.printStackTrace()
                }
            })
        }
        CameraX.bindToLifecycle(this, preview, imageCapture)
    }
}
So the solution was to add an extension method to Image; here is the code:
class ImagePickerActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_image_picker)
    }

    private fun startCamera() {
        val imageCapture = ImageCapture(imageCaptureConfig)
        capture_button.setOnClickListener {
            imageCapture.takePicture(object : ImageCapture.OnImageCapturedListener() {
                override fun onCaptureSuccess(image: ImageProxy?, rotationDegrees: Int) {
                    imageView.setImageBitmap(image?.image?.toBitmap())
                }
                //.....
            })
        }
    }
}

// Decodes the single JPEG plane of a captured frame into a Bitmap
fun Image.toBitmap(): Bitmap {
    val buffer = planes[0].buffer
    buffer.rewind()
    val bytes = ByteArray(buffer.capacity())
    buffer.get(bytes)
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
Slightly modified version, using the inline function use on the Closeable ImageProxy:
imageCapture.takePicture(
    object : ImageCapture.OnImageCapturedListener() {
        override fun onCaptureSuccess(image: ImageProxy?, rotationDegrees: Int) {
            // use closes the ImageProxy when the block completes
            val bitmap: Bitmap = image?.use { proxy ->
                imageProxyToBitmap(proxy)
            } ?: return
        }
    })
private fun imageProxyToBitmap(image: ImageProxy): Bitmap {
    val buffer: ByteBuffer = image.planes[0].buffer
    val bytes = ByteArray(buffer.remaining())
    buffer.get(bytes)
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
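Note that decoding plane 0 ignores the rotationDegrees value delivered alongside the frame, so portrait shots may come out sideways. A small sketch that applies the rotation with the standard android.graphics Matrix API:

```kotlin
// Returns this bitmap rotated by the given degrees (no-op for 0)
fun Bitmap.rotate(degrees: Int): Bitmap =
    if (degrees == 0) this
    else Bitmap.createBitmap(
        this, 0, 0, width, height,
        Matrix().apply { postRotate(degrees.toFloat()) },
        true // filter for smoother resampling
    )
```

Usage from onCaptureSuccess would then be imageProxyToBitmap(proxy).rotate(rotationDegrees).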
Here is an approach using ML Kit's own implementation (note that ImageConvertUtils lives in an internal package, so it may change between releases).
Tested and working on ML Kit version 1.0.1:
import com.google.mlkit.vision.common.internal.ImageConvertUtils;

Image mediaImage = imageProxy.getImage();
InputImage image = InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
Bitmap bitmap = ImageConvertUtils.getInstance().getUpRightBitmap(image);
Java implementation of Backbelt's answer:
private Bitmap imageProxyToBitmap(ImageProxy image) {
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
}
There is a second version of the takePicture method at the moment (CameraX version 1.0.0-beta03). It provides several ways to persist the image (OutputStream, or maybe File can be useful in your case).
If you still want to convert the ImageProxy to a Bitmap, here is my answer to a similar question, which gives the correct implementation of this conversion.
Please kindly take a look at that answer. All you need to apply it to your question is to get the Image out of your ImageProxy:
Image img = imageProxy.getImage();