Using Android native image from a NativeScript plugin - android

I'm writing a NativeScript plugin for the Stripe library. On Android, I can't figure out how to turn a resource ID from the native library into a drawable image.
One of the calls I make returns an ID obtained from R.drawable.xxx. The image is in the native library's resources. I want to turn that into an image I can draw using the <Image> tag.
I have tried this code:
import { android as androidApp } from "application";
let res = ...; // obtained from native library
let image = android.graphics.BitmapFactory.decodeResource(
    androidApp.foregroundActivity.getResources(),
    res);
but decodeResource() returns null. The native library is able to draw the same image, so somehow the resource is getting into the app. I just don't know how to access it from my code.
Note: On iOS this same call returns a UIImage, which is working correctly. I wish Stripe had been more consistent between their iOS and Android APIs!

I figured out the problem. Turns out my diagnosis was incorrect. The resource was being found, but it was a VectorDrawable and decodeResource() is unable to properly decode it.
The technique for converting a VectorDrawable to a Bitmap posted in this question works.
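For anyone hitting the same thing, this is roughly what the conversion looks like from NativeScript TypeScript. It is only a sketch of the usual VectorDrawable-to-Bitmap technique, assuming res is the resource ID returned by the Stripe library; depending on API level, or if the vector comes through AppCompat, you may need androidx.core.content.ContextCompat.getDrawable() instead of Resources.getDrawable().

import { android as androidApp } from "application";

function drawableResourceToBitmap(res: number): android.graphics.Bitmap {
    const activity = androidApp.foregroundActivity;
    // getDrawable() understands VectorDrawables, unlike BitmapFactory.decodeResource().
    const drawable = activity.getResources().getDrawable(res, activity.getTheme());
    const bitmap = android.graphics.Bitmap.createBitmap(
        drawable.getIntrinsicWidth(),
        drawable.getIntrinsicHeight(),
        android.graphics.Bitmap.Config.ARGB_8888);
    // Render the drawable onto a canvas backed by the bitmap.
    const canvas = new android.graphics.Canvas(bitmap);
    drawable.setBounds(0, 0, canvas.getWidth(), canvas.getHeight());
    drawable.draw(canvas);
    return bitmap;
}

The resulting android.graphics.Bitmap can then be wrapped in a NativeScript ImageSource (e.g. with fromNativeSource from the image-source module) and bound to the <Image> tag.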

Related

Pick multiple images in react native Expo with Imagepicker

It seems like Expo's ImagePicker only supports selecting one image rather than multiple images. Is there any way to pick multiple images without ejecting Expo or starting a new react-native-init project?
Did you get the link from Oleg working?
I found that library as well and updated the Expo/SDK version because it was too old, but now I get the following message when trying to open the gallery:
Invalid filter option: '(null)'. Expected one of photos, video's or all. Unfortunately it is unclear to me where exactly this filter should be placed.

How to display a TIFF image in a Flutter app on an Android device?

Our application allows users to set up a library of various types of resources such as PDFs, spreadsheets, etc. I.e. just about any MIME type of document, which we store on S3.
When a user clicks to view any of these resources, we determine whether we are on an iOS device or Android. On iOS we use url_launcher, which can show just about anything.
Android is a bit more complex: if the MIME type is an image we also just use url_launcher; if not, we download the file locally and invoke the OpenFile package to show the result (if we can).
This generally works pretty well, except for TIFF image types, which don't display natively in the browser...
Is there an easy way to show TIFF images in a full Flutter screen (similar to showing them in a browser) on the Android platform? And for image/* MIME types in general, is there an easier way to show them than url_launcher?
The other Stack Overflow question about TIFF on Android does not address this issue: that one is for a native Android app, and its solution doesn't lend itself (easily, that I can see) to a Flutter application.
The question is quite old but I still want to answer it :)
What you can do with the image library is decode the image (whatever format it is), re-encode it as a PNG, and then pass it to the Image widget.
Why PNG? Because in widgets/image.dart it says:
/// This only accepts compressed image formats (e.g. PNG). Uncompressed
/// formats like rawRgba (the default format of [dart:ui.Image.toByteData])
/// will lead to exceptions.
To do this, I had to import the image library as imgLib, otherwise it collides with Flutter's Image widget:
import 'dart:typed_data';
import 'package:image/image.dart' as imgLib;
// ... fetch the image bytes into response.data ...
imgLib.Decoder dec = imgLib.findDecoderForData(response.data);
imgLib.Image decoded = dec.decodeImage(response.data);
// Re-encode as PNG so Flutter's Image widget can display it.
Image.memory(Uint8List.fromList(imgLib.encodePng(decoded)))
Then you have an Image widget you can display in the UI.

Tencent NCNN detect function on android modification

The application I use (https://github.com/dangbo/ncnn-mobile.git) uses a native library that gives out the inference result as a tag. I need it to give me the float array from which it determines the tag. The array is already implemented in the C++ files, but changing them does not affect the application itself. I would not mind if the array were written into a string; I just need the numbers in a readable format. However, the method is native, so I do not know how to modify this behavior.
I use the newest versions of Android Studio and NCNN. Please advise.
Simply run ndk-build on the jni folder to rebuild the native library; your C++ changes won't take effect until you rebuild.

Is there any method equivalent to getServingUrl() in swift/objective c?

Currently I'm using Google App Engine images for image transformations for my Android app. The images are transformed on the fly and we can access each image with a unique URL generated by the getServingUrl() method, documented at https://cloud.google.com/appengine/docs/java/images/. But there is no documentation on how this can be done in an iOS application. Is there any equivalent method to getServingUrl() in Swift code? Or does Google App Engine not support iOS?
You can always use:
https://storage.googleapis.com/{bucket name}/{image path}
where the default bucket name is typically your_app_name.appspot.com, and the image path is the name you gave the image when you stored it, like photos/avatars/user12345
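In other words, the answer above amounts to building the URL by hand; there is no client-side getServingUrl() to call. As a rough sketch (the bucket and path below are placeholder values, not anything from the original setup):

// Placeholder values; substitute your own bucket and object path.
const bucket = "your_app_name.appspot.com";
const imagePath = "photos/avatars/user12345";
const servingUrl = `https://storage.googleapis.com/${bucket}/${imagePath}`;
// The Swift equivalent is plain string interpolation:
//   let servingUrl = "https://storage.googleapis.com/\(bucket)/\(imagePath)"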

Is it possible to pixelate an image in appcelerator Titanium?

I'm writing an Android and iOS app using Appcelerator Titanium, and I can't find a way to pixelate an image. The app needs to pixelate a given image with a parameter supplied by the user (the greater the number, the bigger the pixels). I have found ways to do it with Xcode for iOS and with the Android SDK for Android, but if possible I would like to do it in Titanium to avoid writing the whole app twice, once for Android and once for iOS.
Is there a way to do it?
Thank you.
If you have a native way to do it on both iOS and Android, you can wrap these as native modules and then include them in the project.
Follow this guide here on the Community wiki ->
https://wiki.appcelerator.org/display/guides2/Creating+a+New+Titanium+Module
Then you can write a function that wraps the modules and returns the processed object, e.g.:
var processImage = function() {
    if (Titanium.Platform.name === 'android') {
        // Android stuff
        var imageProcess = require('ti.imageProcess');
        return imageProcess.doImage('/voo/bar', /* more options */);
    } else {
        // etc
    }
};
Instead of writing and maintaining two modules, you could use a webView with a JS library or the canvas object to pixelate the image.
A JS canvas solution to this can be found here:
https://stackoverflow.com/a/19129822/2132015
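The gist of that canvas approach, as a rough TypeScript sketch to run inside the webView's page (img is an already-loaded image element and blockSize is the user-supplied pixelation factor; both names are just for illustration):

function pixelate(img: HTMLImageElement, blockSize: number): HTMLCanvasElement {
    const canvas = document.createElement("canvas");
    canvas.width = img.width;
    canvas.height = img.height;
    const ctx = canvas.getContext("2d")!;

    // Shrink the image, then blow it back up with smoothing disabled;
    // the lost detail shows up as visible square "pixels".
    const w = Math.max(1, Math.floor(img.width / blockSize));
    const h = Math.max(1, Math.floor(img.height / blockSize));
    ctx.drawImage(img, 0, 0, w, h);
    ctx.imageSmoothingEnabled = false;
    ctx.drawImage(canvas, 0, 0, w, h, 0, 0, canvas.width, canvas.height);
    return canvas;
}

Drawing the image scaled down and then drawing the canvas back over itself at full size with smoothing turned off is what produces the blocky effect, and the larger blockSize is, the bigger the blocks.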
