How to use image edits in the OpenAI Kotlin client - Android

I am using the OpenAI client with Android Kotlin (implementation com.aallam.openai:openai-client:2.1.3).
Is the path wrong, or is the library missing something?
val imgURL = Uri.parse("android.resource://" + packageName + "/" + R.drawable.face3)
try {
    val images = openAI.image(
        edit = ImageEditURL( // or 'ImageEditJSON'
            image = FilePath(imgURL.toString()), // <-
            mask = FilePath(imgURL.toString()), // <-
            prompt = "a sunlit indoor lounge area with a pool containing a flamingo",
            n = 1,
            size = ImageSize.is1024x1024
        )
    )
} catch (e: Exception) {
    println("error is here:" + e)
}
As you can see, it expects a path from me, but the call does not succeed even though I give it the path.

I would suggest updating to version 3 of openai-kotlin and using Okio's Source to provide the files.
Assuming the images are in the res/raw folder, your example would look something like this:
val request = ImageEdit(
    image = FileSource(
        name = "image.png",
        source = resources.openRawResource(R.raw.image).source() // Okio's source() extension (import okio.source)
    ),
    mask = FileSource(
        name = "mask.png",
        source = resources.openRawResource(R.raw.mask).source()
    ),
    prompt = "a sunlit indoor lounge area with a pool containing a flamingo",
    n = 1,
    size = ImageSize.is1024x1024,
)
val response = client.imageURL(request)
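For completeness, a minimal Gradle setup for this approach might look like the sketch below. The version numbers are illustrative, so check for the latest 3.x release of openai-kotlin and a recent Okio; depending on your setup you may also need a Ktor client engine for Android.

// build.gradle.kts (module) - versions shown are illustrative, not pinned by this answer
dependencies {
    implementation("com.aallam.openai:openai-client:3.0.0") // 3.x API used above
    implementation("com.squareup.okio:okio:3.2.0")          // provides InputStream.source()
}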

Thumbnail not showing in Ionic/TypeScript app

I am having real trouble getting the thumbnail to show in an Ionic app. I've tried to emulate it on an Android device, no go; I've tried to launch the app on my phone, also no go. I tried varying the src parameter in the [ion-img], and that didn't work. So I thought I would ask Stack Overflow.
The Ionic app is a simple one. I basically have a homepage.html and ts file. The app runs fine in every way except that it is not displaying the thumbnail. The relevant ts code looks like the following:
async startRecord(){
  console.log("Entering record x");
  let base64 = null;
  const option: CaptureVideoOptions = {
    limit: 1,
    duration: 5,
    quality: 100
  };
  var dateObj = new Date();
  var minutes = dateObj.getMinutes().toString();
  let minStr = "";
  if(minutes.length<=1){
    minStr = "0" + minutes;
  }
  else{
    minStr = minutes;
  }
  var date = "Date: " + dateObj.getMonth() + "-" + dateObj.getDate() + "-" + dateObj.getFullYear() + " Time: " + dateObj.getHours() + ":" + minStr;
  console.log("date", date);
  const id: string = uuid();
  this.mediaCapture.captureVideo(option).then(async (mediaFile: MediaFile[]) => {
    this.video = mediaFile[0];
    this.keys = Object.keys(this.video);
    await this.createThumbnail(this.video?.fullPath);
    const video = {
      id: id,
      fullPath: this.video?.fullPath,
      name: this.video?.name,
      userLabel: 'undecided',
      thumbnail: this.thumbnail,
      serverLabel: "none",
      size: this.video?.size,
      dateVideoTaken: date,
    };
    console.log(video);
    await this.util.storeVide(video);
    await this.getAllVideos();
  }, (error: CaptureError) => {
    console.log(error.code);
    alert(error.code);
  });
}
I call createThumbnail from video-capture, and that also seems to run fine. I'm using console.log to verify that the thumbnail has successfully been created on my emulator or phone, and it has.
async createThumbnail(path){
  console.clear();
  console.log('Full Video Path ', path);
  var numstr = new Date().getTime();
  var option: CreateThumbnailOptions = { fileUri: path, width: 160, height: 206, atTime: 1, outputFileName: 'thumbnail' + numstr, quality: 100 };
  const thumbnail = await this.VideoEditor.createThumbnail(option);
  console.log("Thumbnail desc.", thumbnail);
  this.thumbnail = thumbnail;
}
I check the thumbnail by physically going to the location on my phone, and the image is indeed there. I then try to represent it as a thumbnail for this particular video, using this text in the HTML page:
<ion-thumbnail slot="start" class="customized-image" id="thumbnail" (click)="openVideo(video.fullPath)">
  <!-- <img src="https://gravatar.com/avatar/dba6bae8c566f9d4041fb9cd9ada7741?d=identicon&f=y"> -->
  <img src="{{video?.thumbnail}}">
</ion-thumbnail>
This works fine when I use a generic thumbnail from Gravatar but somehow fails when using the generated thumbnail.
I tried fixing this problem all day yesterday and thought I would ask the Stack Overflow community.
Please feel free to ask for more code if this doesn't describe my problem well enough.
Also, the f key no longer works on my computer, hence the misspellings in my text. I have to copy-paste the key when I want to use it.
Here is a console log sample:

How to update the default song metadata?

I want to update the song metadata fields of track, album, genre, artist, and the song cover image, like Musicmatch does.
I tried to look for code to update the metadata but couldn't find any solutions.
Your question isn't about a specific problem and is not detailed, but I can give you a great media player from Google's samples named UAMP (Universal Android Media Player), which handles everything about the Android media player. Link
UAMP uses MediaMetadataCompat to update the song metadata, as in the code segment below.
fun MediaMetadataCompat.Builder.from(jsonMusic: JsonMusic): MediaMetadataCompat.Builder {
    // The duration from the JSON is given in seconds, but the rest of the code works in
    // milliseconds. Here's where we convert to the proper units.
    val durationMs = TimeUnit.SECONDS.toMillis(jsonMusic.duration)

    id = jsonMusic.id
    title = jsonMusic.title
    artist = jsonMusic.artist
    album = jsonMusic.album
    duration = durationMs
    genre = jsonMusic.genre
    mediaUri = jsonMusic.source
    albumArtUri = jsonMusic.image
    trackNumber = jsonMusic.trackNumber
    trackCount = jsonMusic.totalTrackCount
    flag = MediaItem.FLAG_PLAYABLE

    // To make things easier for *displaying* these, set the display properties as well.
    displayTitle = jsonMusic.title
    displaySubtitle = jsonMusic.artist
    displayDescription = jsonMusic.album
    displayIconUri = jsonMusic.image

    // Add downloadStatus to force the creation of an "extras" bundle in the resulting
    // MediaMetadataCompat object. This is needed to send accurate metadata to the
    // media session during updates.
    downloadStatus = STATUS_NOT_DOWNLOADED

    // Allow it to be used in the typical builder style.
    return this
}
With this component, you can update the song data shown in the notification, on the lock screen, and on the home screen.
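As a rough usage sketch (assuming a parsed JsonMusic entry and a MediaSessionCompat, as in UAMP; the names here are illustrative), the extension above is applied like this:

// Sketch: build display metadata from a catalog entry and hand it to the session
val metadata = MediaMetadataCompat.Builder()
    .from(jsonMusic) // the extension shown above
    .build()
mediaSession.setMetadata(metadata) // notification, lock screen and home screen pick this up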
To update the metadata of a song, we can use ID3 tags. We can update these using the Mp3Tag editor - https://github.com/aminb/id3r
, the MyID3() editor - https://github.com/ericfarng/jid3lib
, and Jaudiotagger - https://github.com/Adonai/jaudiotagger.
Mp3Tag editor - only the MP3 song type is supported.
MyID3 editor - can edit songs easily, but not all of the fields provided get updated.
Jaudiotagger - supports the Mp3, Flac, OggVorbis, Mp4, Aiff, Wav, Wma and Dsf audio formats.
It updated the data without any issue:
try {
    val audioFile = AudioFileIO.read(file)
    val tag = audioFile?.tagOrCreateAndSetDefault
    tag?.setField(FieldKey.ARTIST, binding?.tiArtist?.text?.toString())
    tag?.setField(FieldKey.ALBUM, binding?.tiAlbum?.text?.toString())
    tag?.setField(FieldKey.GENRE, binding?.tiGenre?.text?.toString())
    tag?.setField(FieldKey.TITLE, binding?.tiTrack?.text?.toString())
    // Handle the image setting
    try {
        val pfd = contentResolver.openFileDescriptor(imageUri, "r") ?: return
        val fis = FileInputStream(pfd.fileDescriptor)
        val imgBytes = JavaUtils.readFully(fis)
        val cover = AndroidArtwork()
        cover.binaryData = imgBytes
        cover.mimeType = ImageFormats.getMimeTypeForBinarySignature(imgBytes)
        cover.description = ""
        cover.pictureType = PictureTypes.DEFAULT_ID
        tag?.deleteArtworkField()
        tag?.setField(cover)
        fis.close()
        // TODO: check the file write option for both internal and external storage
        // Handle the Storage Access Framework API if the song is on the SD card
        if (audioFile?.file?.let { SafUtils.isSafNeeded(it, this) } == true) {
            // Handle writing onto the SD card.
            // Check that SAF permission has been granted; only then can we update the metadata.
            // If SAF permission is not granted, an EACCES: Permission denied error is shown.
            writeIntoSDCard()
        } else {
            // Handle writing into internal storage
            writeInInternalStorage()
        }
    } catch (e: Exception) { }
} catch (e: Exception) {
    // Show error on failure while writing
} catch (e: Error) {
    // Show error on failure while writing
}
Writing the metadata:
// After the update, rescan the file, otherwise the changes will not be reflected
AudioFileIO.write(audioFile)
MediaScannerConnection.scanFile(context, arrayOf(file?.absolutePath ?: ""), null, null)
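The SAF branch above (writeIntoSDCard) assumes the user has already granted access to the SD card tree. A minimal sketch of how that grant is typically requested and persisted, with illustrative names and request code (not part of the answer's own code):

// Sketch: ask the user to pick the SD card root so later SAF writes are allowed
private val REQUEST_SD_ROOT = 42 // illustrative request code

fun requestSdCardAccess(activity: Activity) {
    activity.startActivityForResult(Intent(Intent.ACTION_OPEN_DOCUMENT_TREE), REQUEST_SD_ROOT)
}

// Call from onActivityResult: persist the permission so later metadata writes succeed
fun persistTreePermission(context: Context, data: Intent) {
    val treeUri = data.data ?: return
    context.contentResolver.takePersistableUriPermission(
        treeUri,
        Intent.FLAG_GRANT_READ_URI_PERMISSION or Intent.FLAG_GRANT_WRITE_URI_PERMISSION
    )
}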

Faces indexed by iOS/Android app are not detected by Android/iOS App - AWS Rekognition

So I have been working for a long time on a product (Android first and then iOS) that indexes faces of people using AWS Rekognition, and when they are scanned again later, it identifies them.
It works great when I index a face from an Android device and then try to search for it with an Android device. But if I later try to search for it with the iOS app, it doesn't find it. Same result the other way round: index with iOS, search with Android, not found.
The collection ID is the same while indexing and searching on both devices. I can't figure out how it is possible that a face indexed by one OS type, in the same region and the same collection, can't be found on the other device.
If anyone here could try and help me with the issue, please do. I'll be really thankful.
Update 1: I have called the listCollections function in both the iOS and Android apps, and they show different lists of collections. This is the issue, but I can't figure out why it is happening. The identity pool and region are the same in both.
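For reference, the Android side of that check looks roughly like the sketch below (the client and log tag names are illustrative, and the Rekognition call has to run off the main thread):

// Sketch: list the collections this Cognito identity can actually see
val listResult = rekognitionClient.listCollections(ListCollectionsRequest())
Log.i(TAG, "Collections visible to this identity: " + listResult.collectionIds)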
Here is my Android Code to access Rekognition:
mCredentialsProvider = new CognitoCachingCredentialsProvider(
        mContext,
        "us-east-2:xbxfxexf-x5x5-xax7-x9xf-x5x0xexfx1xb", // Identity pool ID
        Regions.US_EAST_2 // Region
);
mUUID = UUID.randomUUID().toString().replace("-", "");
mAmazonS3Client = new AmazonS3Client(mCredentialsProvider);
mAmazonS3Client.setRegion(Region.getRegion(Regions.US_EAST_2));
mAmazonRekognitionClient = new AmazonRekognitionClient(mCredentialsProvider);
if (!mAmazonS3Client.doesBucketExist(mFacesBucket)) {
    mAmazonS3Client.createBucket(mFacesBucket);
}
Log.i(TAG, "Uploading image to S3 Bucket");
mAmazonS3Client.putObject(mFacesBucket, getS3ObjectName(), new File(data[0].toString()));
Log.i(TAG, "Image Uploaded");
Image image = new Image();
try {
    image.setBytes(ByteBuffer.wrap(Files.toByteArray(new File(data[0].toString()))));
} catch (IOException e) {
    e.printStackTrace();
}
Log.i(TAG, "Indexing image");
IndexFacesRequest indexFacesRequest = new IndexFacesRequest()
        .withCollectionId(mFacesCollection)
        .withImage(image)
        .withExternalImageId(mUUID)
        .withDetectionAttributes("ALL");
mAmazonRekognitionClient.indexFaces(indexFacesRequest);
Here is my iOS code to access Rekognition:
func uploadToCollection(img: UIImage)
{
    let myIdentityPoolId = "us-east-2:xbxfxexf-x5x5-xax7-x9xf-x5x0xexfx1xb"
    let credentialsProvider = AWSCognitoCredentialsProvider(regionType: .USEast2, identityPoolId: myIdentityPoolId)
    //store photo in s3()
    let configuration = AWSServiceConfiguration(region: .USEast2, credentialsProvider: credentialsProvider)
    AWSServiceManager.default().defaultServiceConfiguration = configuration
    rekognitionClient = AWSRekognition.default()
    guard let request = AWSRekognitionIndexFacesRequest() else
    {
        puts("Unable to initialize AWSRekognitionIndexFacesRequest.")
        return
    }
    var go = false
    request.collectionId = "i_faces" + self.firebaseID.lowercased() // here iosCollection will be replaced by the Firebase current user ID
    request.detectionAttributes = ["ALL", "DEFAULT"]
    request.externalImageId = self.UUID // this should be mUUID, passed as a parameter to this function
    let sourceImage = img
    let image = AWSRekognitionImage()
    image!.bytes = sourceImage.jpegData(compressionQuality: 0.7)
    request.image = image
    self.rekognitionClient.indexFaces(request) { (response: AWSRekognitionIndexFacesResponse?, error: Error?) in
        if error == nil
        {
            print("Upload to Collection Complete")
        }
        go = true
        return
    }
    while (go == false) {}
}
Create a collection, add images to the collection, and create an index. I suspect a few things in your setup and code:
1) The identity pool ID and AWS region used across iOS and Android
2) The name of the collection used (pay attention to the delimiters used in the collection name)
Android:
CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(appContext, "MyPoolID", Regions.US_EAST_1);

public void searchFacesByImage() {
    Image source = new Image().withS3Object(new S3Object().withBucket("us-east-1-bucket").withName("ms.jpg"));
    Image ms2 = new Image().withS3Object(new S3Object().withBucket("us-east-1-bucket").withName("ms-2.jpg"));
    Image ms3 = new Image().withS3Object(new S3Object().withBucket("us-east-1-bucket").withName("ms-3.jpg"));
    Image ms4 = new Image().withS3Object(new S3Object().withBucket("us-east-1-bucket").withName("ms-4.jpg"));
    String collectionId = "MyCollectionID";
    AmazonRekognitionClient client = new AmazonRekognitionClient(credentialsProvider);
    try {
        System.out.println("Creating collection: " + collectionId);
        CreateCollectionRequest request = new CreateCollectionRequest().withCollectionId(collectionId);
        CreateCollectionResult createCollectionResult = client.createCollection(request);
        System.out.println("CollectionArn : " + createCollectionResult.getCollectionArn());
        System.out.println("Status code : " + createCollectionResult.getStatusCode().toString());
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    IndexFacesRequest indexFacesRequest = new IndexFacesRequest();
    indexFacesRequest.setImage(source);
    indexFacesRequest.setCollectionId(collectionId);
    client.indexFaces(indexFacesRequest);

    indexFacesRequest = new IndexFacesRequest();
    indexFacesRequest.setImage(ms2);
    indexFacesRequest.setCollectionId(collectionId);
    client.indexFaces(indexFacesRequest);

    indexFacesRequest = new IndexFacesRequest();
    indexFacesRequest.setImage(ms4);
    indexFacesRequest.setCollectionId(collectionId);
    client.indexFaces(indexFacesRequest);

    SearchFacesByImageRequest searchFacesByImageRequest = new SearchFacesByImageRequest();
    searchFacesByImageRequest
            .withCollectionId(collectionId)
            .withImage(ms3)
            .withFaceMatchThreshold(80F);
    SearchFacesByImageResult searchFacesByImageResult =
            client.searchFacesByImage(searchFacesByImageRequest);
    List<FaceMatch> faceImageMatches = searchFacesByImageResult.getFaceMatches();
    for (FaceMatch face : faceImageMatches) {
        Log.d(TAG, face.toString());
    }
}
iOS:
Create the Cognito credentials provider:
AWSCognitoCredentialsProvider *credentialsProvider = [[AWSCognitoCredentialsProvider alloc] initWithRegionType:AWSRegionUSEast1 identityPoolId:@"MyPoolID"];
AWSServiceConfiguration *configuration = [[AWSServiceConfiguration alloc] initWithRegion:AWSRegionUSEast1 credentialsProvider:credentialsProvider];
[AWSServiceManager defaultServiceManager].defaultServiceConfiguration = configuration;
Use the same Identity Pool Id and Region (us-east-1).
func faceIndexNoFacesSearch() {
    let rekognition = AWSRekognition.default()
    let faceRequest = AWSRekognitionSearchFacesByImageRequest()
    do {
        let image = AWSRekognitionImage()
        image?.s3Object = AWSRekognitionS3Object()
        image?.s3Object?.bucket = "us-east-1-bucket"
        image?.s3Object?.name = "ms-2.jpg"
        faceRequest!.image = image
        faceRequest!.collectionId = "MyCollectionID"
        rekognition.searchFaces(byImage: faceRequest!).continueWith { (response) -> Any? in
            XCTAssertNil(response.error)
            XCTAssertNotNil(response.result)
            if let result = response.result {
                XCTAssertNotNil(result.faceMatches)
            }
            return nil
        }.waitUntilFinished()
    } catch {
        print("exception")
    }
}
Please post questions in the comment and we can discuss there.
OK, so the problem turned out to be quite different, and the solution was rather simple. I posted another question about the same problem once I found it was a bit different, and I have posted an answer there as well.
Here it is:
https://stackoverflow.com/a/53128777/4395264

Output file using FFmpeg in Xamarin Android

I'm building an Android app using Xamarin. The requirement of the app is to capture video from the camera and encode it to send across to a server.
Initially, I was using an encoder library on the server side to encode the recorded video, but it was proving to be extremely unreliable and inefficient, especially for large video files. I have posted my issues on another thread here.
I then decided to encode the video on the client side and then send it to the server. I've found encoding to be a bit complicated, and there isn't much information available on how it can be done, so I searched for the only way I knew to encode a video, which is by using the FFmpeg codec. I've found some solutions. There's a project on GitHub that demonstrates how FFmpeg is used inside a Xamarin Android project. However, running the solution doesn't give any output. The project has an FFmpeg binary which is installed to the phone directory using the code below:
_ffmpegBin = InstallBinary(XamarinAndroidFFmpeg.Resource.Raw.ffmpeg, "ffmpeg", false);
Below is the example code for encoding video into a different set of outputs:
_workingDirectory = Android.OS.Environment.ExternalStorageDirectory.AbsolutePath;
var sourceMp4 = "cat1.mp4";
var destinationPathAndFilename = System.IO.Path.Combine(_workingDirectory, "cat1_out.mp4");
var destinationPathAndFilename2 = System.IO.Path.Combine(_workingDirectory, "cat1_out2.mp4");
var destinationPathAndFilename4 = System.IO.Path.Combine(_workingDirectory, "cat1_out4.wav");
if (File.Exists(destinationPathAndFilename))
    File.Delete(destinationPathAndFilename);
CreateSampleFile(Resource.Raw.cat1, _workingDirectory, sourceMp4);

var ffmpeg = new FFMpeg(this, _workingDirectory);
var sourceClip = new Clip(System.IO.Path.Combine(_workingDirectory, sourceMp4));
var result = ffmpeg.GetInfo(sourceClip);
var br = System.Environment.NewLine;

// There are callbacks based on standard output and standard error when the ffmpeg binary is running as a process:
var onComplete = new MyCommand((_) => {
    RunOnUiThread(() => _logView.Append("DONE!" + br + br));
});
var onMessage = new MyCommand((message) => {
    RunOnUiThread(() => _logView.Append(message + br + br));
});
var callbacks = new FFMpegCallbacks(onComplete, onMessage);

// 1. The idea of this first test is to show that video editing is possible via FFmpeg:
// It results in a 150x150 movie that eventually zooms on a cat ear. This is desaturated, and there's a fade-in.
var filters = new List<VideoFilter>();
filters.Add(new FadeVideoFilter("in", 0, 100));
filters.Add(new CropVideoFilter("150", "150", "0", "0"));
filters.Add(new ColorVideoFilter(1.0m, 1.0m, 0.0m, 0.5m, 1.0m, 1.0m, 1.0m, 1.0m));
var outputClip = new Clip(destinationPathAndFilename) { videoFilter = VideoFilter.Build(filters) };
outputClip.H264_CRF = "18"; // It's the quality coefficient for H264 - default is 28. I think 18 is pretty good.
ffmpeg.ProcessVideo(sourceClip, outputClip, true, new FFMpegCallbacks(onComplete, onMessage));

// 2. This is a similar version in command line only:
string[] cmds = new string[] {
    "-y",
    "-i",
    sourceClip.path,
    "-strict",
    "-2",
    "-vf",
    "mp=eq2=1:1.68:0.3:1.25:1:0.96:1",
    destinationPathAndFilename2,
    "-acodec",
    "copy",
};
ffmpeg.Execute(cmds, callbacks);

// 3. This lists codecs:
string[] cmds3 = new string[] {
    "-codecs",
};
ffmpeg.Execute(cmds3, callbacks);

// 4. This converts to WAV.
// Note that the cat movie just has some silent house noise.
ffmpeg.ConvertToWaveAudio(sourceClip, destinationPathAndFilename4, 44100, 2, callbacks, true);
I have tried different commands, but no output file is generated. I have tried using another project found here, but it has the same issue: I don't get any errors, but no output file is generated. I'm really hoping someone can help me find a way to use FFmpeg in my project, or some other way to compress video so I can transport it to the server.
I would really appreciate it if someone could point me in the right direction.
I just figured out how to get the output: by adding the permission in the AndroidManifest file.
android.permission.WRITE_EXTERNAL_STORAGE
Please read the update on the repository; it says that there is a second package, Xamarin.Android.MP4Transcoder, for Android 6.0 onwards.
Install the NuGet package: https://www.nuget.org/packages/Xamarin.Android.MP4Transcoder/
await Xamarin.MP4Transcoder.Transcoder
    .For720pFormat()
    .ConvertAsync(inputFile, ouputFile, f => {
        onProgress?.Invoke((int)(f * (double)100), 100);
    });
return ouputFile;
For previous Android versions:
Source code: https://github.com/neurospeech/xamarin-android-ffmpeg
Install-Package Xamarin.Android.FFmpeg
Use this as a template; it lets you log the output as well as calculate progress.
You can take a look at the source; it downloads ffmpeg and verifies the SHA1 hash on first use.
public class VideoConverter
{
    public VideoConverter()
    {
    }

    // Made async (System.Threading.Tasks) because FFMpegLibrary.Run is awaited below
    public async Task<File> ConvertFile(Context context,
        File inputFile,
        Action<string> logger = null,
        Action<int, int> onProgress = null)
    {
        File ouputFile = new File(inputFile.CanonicalPath + ".mpg");
        ouputFile.DeleteOnExit();

        List<string> cmd = new List<string>();
        cmd.Add("-y");
        cmd.Add("-i");
        cmd.Add(inputFile.CanonicalPath);

        MediaMetadataRetriever m = new MediaMetadataRetriever();
        m.SetDataSource(inputFile.CanonicalPath);
        string rotate = m.ExtractMetadata(Android.Media.MetadataKey.VideoRotation);
        int r = 0;
        if (!string.IsNullOrWhiteSpace(rotate))
        {
            r = int.Parse(rotate);
        }

        cmd.Add("-b:v");
        cmd.Add("1M");
        cmd.Add("-b:a");
        cmd.Add("128k");

        switch (r)
        {
            case 270:
                cmd.Add("-vf scale=-1:480,transpose=cclock");
                break;
            case 180:
                cmd.Add("-vf scale=-1:480,transpose=cclock,transpose=cclock");
                break;
            case 90:
                cmd.Add("-vf scale=480:-1,transpose=clock");
                break;
            case 0:
                cmd.Add("-vf scale=-1:480");
                break;
            default:
                break;
        }

        cmd.Add("-f");
        cmd.Add("mpeg");
        cmd.Add(ouputFile.CanonicalPath);

        string cmdParams = string.Join(" ", cmd);

        int total = 0;
        int current = 0;

        await FFMpeg.Xamarin.FFMpegLibrary.Run(
            context,
            cmdParams,
            (s) => {
                logger?.Invoke(s);

                int n = Extract(s, "Duration:", ",");
                if (n != -1)
                {
                    total = n;
                }

                n = Extract(s, "time=", " bitrate=");
                if (n != -1)
                {
                    current = n;
                    onProgress?.Invoke(current, total);
                }
            });

        return ouputFile;
    }

    int Extract(String text, String start, String end)
    {
        int i = text.IndexOf(start);
        if (i != -1)
        {
            text = text.Substring(i + start.Length);
            i = text.IndexOf(end);
            if (i != -1)
            {
                text = text.Substring(0, i);
                return parseTime(text);
            }
        }
        return -1;
    }

    public static int parseTime(String time)
    {
        time = time.Trim();
        String[] tokens = time.Split(':');
        int hours = int.Parse(tokens[0]);
        int minutes = int.Parse(tokens[1]);
        float seconds = float.Parse(tokens[2]);
        int s = (int)seconds * 100;
        return hours * 360000 + minutes * 60100 + s;
    }
}

ListView reusing old images

I created a plugin using Picasso, and it uses android.widget.ImageView to load the cached image into.
The plugin works fine when using a Repeater, but whenever I try using it with a ListView, after scrolling past about the 7th item the ListView begins to reuse old images, even if the image source is different.
The reason is that list views reuse the entire fragment; so what happens is that your reused img shows the old image unless you clear it.
I actually use Picasso myself, and this is my current Picasso library.
So if you look at my code below, when I set the new .url, I clear the existing image (I made a comment on the specific line). This way the image now shows blank, and then Picasso loads it from memory, disk, or a remote URL (in my case a remote URL) and assigns the proper image.
"use strict";
var Img = require('ui/image').Image;
var application = require("application");
var PT = com.squareup.picasso.Target.extend("Target",{
_owner: null,
_url: null,
onBitmapLoaded: function(bitmap, from) {
// Since the actual image / target is cached; it is possible that the
// target will not match so we don't replace the image already seen
if (this._url !== this._owner._url) {
return;
}
this._owner.src = bitmap;
},
onBitmapFailed: function(ed) {
console.log("Failed File", this._url);
},
onPrepareLoad: function(ed) {
}
});
Object.defineProperty(Img.prototype, "url", {
get: function () {
return this._url;
},
set: function(src) {
if (src == null || src === "") {
this._url = "";
this.src = null;
return;
}
var dest = src;
this._url = dest;
this.src = null; // -- THIS IS THE LINE TO CLEAR THE IMAGE
try {
var target = new PT();
target._owner = this;
target._url = dest;
var x = com.squareup.picasso.Picasso.with(application.android.context).load(dest).into(target);
} catch (e) {
console.log("Exception",e);
}
},
enumerable: true,
configurable: true
});
Please note you only need to require this class once; it then attaches itself to the <Image> component and adds the new .url property. This allows me to use it in the declarative XML in all the rest of the screens, and when I need Picasso, I just use the .url property to have Picasso take over loading that image.
