I have a working implementation of Google's Panorama API in my application. Unfortunately, the implementation stopped working on Android 11.
The implementation looks like the following:
Uri uri = Uri.fromFile(file);
Panorama.PanoramaApi.loadPanoramaInfoAndGrantAccess(mClient, uri).setResultCallback(
        new ResultCallback<PanoramaResult>() {
            @Override
            public void onResult(PanoramaResult result) {
                if (result.getStatus().isSuccess()) {
                    Intent viewerIntent = result.getViewerIntent();
                    Log.i(TAG, "found viewerIntent: " + viewerIntent);
                    if (viewerIntent != null) {
                        startActivity(viewerIntent);
                        finish();
                    }
                } else {
                    Log.e(TAG, "error: " + result);
                }
            }
        });
The situation is as follows:
When I take a panorama image on my smartphone and load it into the implementation above, it works. The image gets loaded from /storage/emulated/0/DCIM/Camera/*.jpg and the panorama view is shown without a problem.
When I upload the same image to a server and download it through the app, the image gets stored at /storage/emulated/0/Android/data/<applicationId>/files/*.jpg. Unfortunately, the panorama view is not able to load the image and the viewerIntent is always null.
To me, it looks like a permission problem on Android 11, but I don't know how to fix it. I don't want to download the image into a more public area of the phone. Does anyone have an idea how to fix it?
I suspect it is related to the scoped storage enforcement introduced in Android 11, but I'm not sure.
I've managed to put together an alternative to the Panorama API using the Google VR SDK's VrPanoramaView, which I've documented in my blog post with some code snippets.
The TLDR is this:
Replace your original ImageView with a VrPanoramaView
Use the VrPanoramaView's API to hide some options (optional)
Overlay an image button over the VrPanoramaView and use it to toggle between gyroscopic and touch modes
Use touch interception to support pinch zoom (this will be in a separate blog post)
Load your image into the VrPanoramaView (a minimal loading sketch follows below this list)
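As a rough illustration of the last step, here is a minimal sketch of loading a locally stored panorama into a VrPanoramaView. It assumes the Google VR SDK dependency is in place and that the code runs inside an Activity; showPanorama, panoramaFile and R.id.panorama_view are placeholder names, not part of the original post.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.util.Log;
import com.google.vr.sdk.widgets.pano.VrPanoramaView;
import java.io.File;

// Call this once the panorama file has been downloaded into app storage.
private void showPanorama(File panoramaFile) {
    VrPanoramaView panoramaView = (VrPanoramaView) findViewById(R.id.panorama_view);

    // Optionally hide some of the built-in UI controls.
    panoramaView.setInfoButtonEnabled(false);
    panoramaView.setStereoModeButtonEnabled(false);
    panoramaView.setFullscreenButtonEnabled(false);

    // Decode the image from app-specific storage and hand it to the view.
    Bitmap panorama = BitmapFactory.decodeFile(panoramaFile.getAbsolutePath());
    if (panorama == null) {
        Log.e("Panorama", "Could not decode " + panoramaFile);
        return;
    }
    VrPanoramaView.Options options = new VrPanoramaView.Options();
    options.inputType = VrPanoramaView.Options.TYPE_MONO; // single, non-stereo panorama
    panoramaView.loadImageFromBitmap(panorama, options);
}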
Related
I'm using the Fresco library to display images in my Android app. I'd like to display some images (JPG or PNG) that I have shared with public access.
When I was doing quick tests, I just took a random image from the internet and used its URL, but the real images I need to use have URLs of the form https://drive.google.com/uc?export=view&id=<>. Since that URL is a redirect and the URL it redirects to is not the image itself, Fresco is unable to display it.
I have tried Picasso as an alternative library, but without any success.
I have also tried the download URL for both libraries (https://drive.google.com/uc?export=download&id=<>), but with no result.
Does anybody know how I could get these images? Or is the only solution to download the image (using the second URL), process the response, store a bitmap of it and display that?
For downloading it, what should I use and how? Retrofit?
Thanks in advance.
Fresco supports different network stacks. For example, you can use OkHttp with Fresco, which should follow redirects, or you can modify the default network fetcher to allow redirects - or write your own based on them.
Guide for OkHttp: http://frescolib.org/docs/using-other-network-layers.html
Related GitHub issue: https://github.com/facebook/fresco/issues/61
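For reference, a minimal sketch of wiring Fresco to OkHttp, assuming the Fresco OkHttp3 backend artifact is on the classpath; MyApplication is a placeholder class name.

import android.app.Application;
import com.facebook.drawee.backends.pipeline.Fresco;
import com.facebook.imagepipeline.backends.okhttp3.OkHttpImagePipelineConfigFactory;
import com.facebook.imagepipeline.core.ImagePipelineConfig;
import okhttp3.OkHttpClient;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();

        // OkHttp follows redirects by default, which is what the Drive URL needs.
        OkHttpClient okHttpClient = new OkHttpClient.Builder()
                .followRedirects(true)
                .followSslRedirects(true)
                .build();

        // Route all of Fresco's image requests through OkHttp.
        ImagePipelineConfig config =
                OkHttpImagePipelineConfigFactory.newBuilder(this, okHttpClient).build();
        Fresco.initialize(this, config);
    }
}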
I found a solution for this problem (but it may only be applicable if you use Google Cloud or Google Apps Script).
It consists of creating a doGet() service with the following code inside:
var file = DriveApp.getFileById(fileId);
return Utilities.base64Encode(file.getBlob().getBytes());
and use that base64 value in your app. With this format, Fresco can do the magic.
It is not an immediate solution, and it requires some work on another platform outside your Android app, but it works perfectly.
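If it helps, Fresco can consume that value as a base64 data URI. A minimal sketch, where fetchBase64FromScript is a hypothetical placeholder for however you retrieve the doGet() response:

import android.net.Uri;
import com.facebook.drawee.view.SimpleDraweeView;

// fetchBase64FromScript(...) is a placeholder for your own network call
// that returns the base64 string produced by the Apps Script endpoint.
String base64 = fetchBase64FromScript(fileId);
Uri dataUri = Uri.parse("data:image/jpeg;base64," + base64);

SimpleDraweeView draweeView = (SimpleDraweeView) findViewById(R.id.drawee_view);
draweeView.setImageURI(dataUri);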
Are you sure there are no problems with your URLs?
Picasso works with direct URLs like: https://kudago.com/media/images/place/06/66/06662fda6309ce1ee9116d13bd1c66d5.jpg
Then you can download your image like:
Picasso.with(this)
        .load(url)
        .noFade()
        .placeholder(R.drawable.placeholder_grey) // if you want to use a stub
        .into(imageView, new com.squareup.picasso.Callback() {
            @Override
            public void onSuccess() {
                // here you can operate with the image after it is downloaded
            }

            @Override
            public void onError() {
            }
        });
Hope it will help you.
I'm new to Picasso. I'm trying to load an image stored on AWS (specifically S3) into my Android app using Picasso, but I keep getting a blank image with no errors in my logcat and nothing obvious to me from general debugging around the relevant lines of code.
The image is stored on AWS, which is in development mode and currently public, so it shouldn't be an issue of logins etc. I also have internet permissions enabled in my manifest.
The code does seem to work when I use a random image link from the internet, but I noticed that when I open those links in my browser, a page opens displaying nothing but that image. The bucket on S3, however, is set up to auto-download the file instead of displaying such a page. Perhaps that's the cause of my problem?
Here are 2 versions of my code, neither of which has worked for my image on AWS (note that I'm substituting my real AWS link with AWSLink, but my actual code uses the real link):
Version 1
mApartmentImageView = (ImageView) v.findViewById(R.id.details_page_apartment_picture);
Picasso.with(getActivity()).load("//AWSLink.jpg").into(mApartmentImageView);
Version 2 (tries to account for auto-download of file)
mApartmentImageView = (ImageView) v.findViewById(R.id.details_page_apartment_picture);
String path = "//AWSLink.jpg";
Picasso.with(getActivity()).load(new File(path)).into(mApartmentImageView);
String path = "//AWSLink.jpg";
path is not a valid one; use an appropriate path and check.
Picasso has callbacks; use those and check:
Picasso.with(getActivity()).load(new File(path)).into(mApartmentImageView, new Callback() {
    @Override
    public void onSuccess() {
    }

    @Override
    public void onError() {
    }
});
Before that, pass a proper file path. Try to hit the file path in a browser; if the path gives you an image, then use it as the parameter in the load method:
Picasso picasso = new Picasso.Builder(getContext())
        .listener(new Picasso.Listener() {
            @Override
            public void onImageLoadFailed(Picasso picasso, Uri uri, Exception exception) {
                // Here your log
            }
        })
        .build();
picasso.load(new File(path)).into(mApartmentImageView);
I'm doing a bit of feasibility R&D on a project I've been thinking about; the problem is that I'm unfamiliar with the limitations of the Camera API or the OS itself.
I want to write a Cordova app which, when opened (and authorized), will take a picture every nth second and upload it to an FTP site. Consider it something like a CCTV function, where my site will continuously render the latest image to the user.
Some pseudo code:
while (true) {
    var img = api.takePicture(args);
    uploadImage(img);
    thread.sleep(1000);
}
So my question is, can I access the camera and instruct it to take a picture without user intervention (again, after camera access is authorized)?
Examples or direction for any API call to accomplish this would be really appreciated. I saw this article, and the answer looks promising, but the OP is on Android and I'd like to know if it behaves the same on iOS.
On a side note, how do I test my Cordova application on my iPhone without buying an App Store license? I only want to run it on my own device.
Solved using CameraPictureBackground plugin:
function success(imgurl) {
    console.log("Imgurl = " + imgurl);
    // here I added my function to upload the saved pictures
    // to my internet server using the file-transfer plugin
}

function onFail(message) {
    alert('Failed because: ' + message);
}

function CaptureBCK() {
    var options = {
        name: "Image",                      // image suffix
        dirName: "CameraPictureBackground", // folder name
        orientation: "portrait",            // or landscape
        type: "back"                        // or front
    };
    window.plugins.CameraPictureBackground.takePicture(success, onFail, options);
}
<button onclick="CaptureBCK();">Capture Photo</button> <br>
You will find your pictures in the CameraPictureBackground directory in your device storage. I also used the file-transfer plugin in order to upload the pictures directly to my server over the internet.
I am trying to develop a virtual tour Android application. I have a photo sphere image on my local device. I need to open this image and allow the user to explore it, but the image is not loading in the ImageView. Is there any other way to load this image and support actions on it like scrolling or swiping?
You can use the Photo Sphere APIs from Google Play services. Below is an example of how to use them.
Follow the link below for more info:
http://android-developers.blogspot.in/2012/12/new-google-maps-android-api-now-part-of.html
// This listener will be called with information about the given panorama.
OnPanoramaInfoLoadedListener infoLoadedListener =
        new OnPanoramaInfoLoadedListener() {
            @Override
            public void onPanoramaInfoLoaded(ConnectionResult result,
                    Intent viewerIntent) {
                if (result.isSuccess()) {
                    // If the intent is not null, the image can be shown as a
                    // panorama.
                    if (viewerIntent != null) {
                        // Use the given intent to start the panorama viewer.
                        startActivity(viewerIntent);
                    }
                    // If viewerIntent is null, the image is not a viewable panorama.
                }
            }
        };
Create a client instance and connect to it. Once connected to the client, initiate the asynchronous check on whether the image is a viewable panorama:
client.loadPanoramaInfo(infoLoadedListener, panoramaUri);
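The OnPanoramaInfoLoadedListener above comes from the older PanoramaClient interface. On more recent Google Play services versions, the same check is usually done through a GoogleApiClient with Panorama.API, as in the question at the top of this page. Here is a minimal, hedged sketch under that assumption; it is meant to live inside an Activity, and checkPanorama is just a placeholder method name.

import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.panorama.Panorama;
import com.google.android.gms.panorama.PanoramaApi.PanoramaResult;

private GoogleApiClient mClient;

// Build the client, connect, and check the image once connected.
private void checkPanorama(final Uri panoramaUri) {
    mClient = new GoogleApiClient.Builder(this)
            .addApi(Panorama.API)
            .addConnectionCallbacks(new GoogleApiClient.ConnectionCallbacks() {
                @Override
                public void onConnected(Bundle connectionHint) {
                    // Once connected, ask whether the image is a viewable panorama.
                    Panorama.PanoramaApi.loadPanoramaInfo(mClient, panoramaUri)
                            .setResultCallback(new ResultCallback<PanoramaResult>() {
                                @Override
                                public void onResult(PanoramaResult result) {
                                    Intent viewerIntent = result.getViewerIntent();
                                    if (result.getStatus().isSuccess() && viewerIntent != null) {
                                        // Use the given intent to start the panorama viewer.
                                        startActivity(viewerIntent);
                                    }
                                }
                            });
                }

                @Override
                public void onConnectionSuspended(int cause) {
                }
            })
            .build();
    mClient.connect();
}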
It seems to be the simplest thing in the world: taking a picture within your Android app using the default camera activity. However, there are many pitfalls, which are covered in several posts across StackOverflow and the web, such as null Intents being passed back, the orientation of the picture not being correct, or OutOfMemoryErrors.
I'm looking for a solution that allows me to
start the camera activity via the camera intent,
retrieve the Uri of the photo, and
retrieve the correct orientation of the photo.
Moreover, I would like to avoid a device configuration (manufacturer, model, os version) specific implementation as far as possible. So I'm wondering: what is the best way to achieve this?
UPDATE: January 2nd, 2014:
I tried really hard to avoid implementing different strategies based on the device manufacturer. Unfortunately, I did not get around it. Going through hundreds of posts and talking to several developers, nobody found a solution that works on all devices without implementing device manufacturer specific code.
After I posted my solution here on StackOverflow, some developers asked me to publish my code on github. So here it is now: AndroidCameraUtil on github
The code was successfully tested on a wide variety of devices with Android API-Level >= 8. For a complete list, please see the Readme file on github.
The CameraIntentHelperActivity provides the main functionality, which is also described in more detail in the following.
Calling the default camera activity:
for Samsung and Sony devices: I call the camera activity with the method call to startActivityForResult. I only set the constant CAPTURE_IMAGE_ACTIVITY_REQUEST_CODE. I do NOT set any other intent extras.
for all other devices: I call the camera activity with the method call to startActivityForResult as previously. This time, however, I additionally set the intent extra MediaStore.EXTRA_OUTPUT and provide a URI where I want the image to be stored.
In both cases I remember the time the camera activity was started (a minimal sketch of the launch follows below).
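For illustration only, a hedged sketch of what that launch might look like; it is not taken verbatim from the library, and mCameraStartTime and cameraPicUri are placeholder names.

import android.content.Intent;
import android.net.Uri;
import android.os.Build;
import android.provider.MediaStore;
import java.util.Locale;

private static final int CAPTURE_IMAGE_ACTIVITY_REQUEST_CODE = 1;
private long mCameraStartTime;

// Launch the default camera app; cameraPicUri is the pre-generated target Uri.
private void startCameraIntent(Uri cameraPicUri) {
    Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);

    String manufacturer = Build.MANUFACTURER.toLowerCase(Locale.ENGLISH);
    // Samsung and Sony devices get no extras; all other devices are told
    // where to store the image via EXTRA_OUTPUT.
    if (!manufacturer.contains("samsung") && !manufacturer.contains("sony")) {
        intent.putExtra(MediaStore.EXTRA_OUTPUT, cameraPicUri);
    }

    // Remember when the camera was started, so images taken before this
    // point can be discarded later.
    mCameraStartTime = System.currentTimeMillis();
    startActivityForResult(intent, CAPTURE_IMAGE_ACTIVITY_REQUEST_CODE);
}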
On camera activity result:
MediaStore: First, I try to read the photo being captured from the MediaStore. Using a managedQuery on the MediaStore content, I retrieve the latest image taken, as well as its orientation property and its timestamp. If I find an image and it was not taken before the camera intent was called, it is the image I was looking for. Otherwise, I dismiss the result and try one of the following approaches (a sketch of this query follows after this list).
Intent extra: Second, I try to get an image Uri from intent.getData() of the returning intent. If this is not successful either, I continue with step 3.
Default photo Uri: If all of the above mentioned steps did not work, I use the image Uri I passed to the camera activity.
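A rough sketch of the MediaStore lookup described in step 1, assuming the camera start time was recorded when the intent was fired; this is not the library's exact code, just the general idea, and queryLatestCameraImage is a placeholder name. The orientation column can be read from the same cursor.

import android.database.Cursor;
import android.net.Uri;
import android.provider.MediaStore;

// Returns the Uri of the newest camera image, or null if nothing was taken
// after the camera intent was started.
private Uri queryLatestCameraImage(long cameraStartTime) {
    String[] projection = {
            MediaStore.Images.ImageColumns._ID,
            MediaStore.Images.ImageColumns.ORIENTATION,
            MediaStore.Images.ImageColumns.DATE_TAKEN
    };
    Cursor cursor = getContentResolver().query(
            MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
            projection,
            null, null,
            MediaStore.Images.ImageColumns.DATE_TAKEN + " DESC");
    if (cursor == null) {
        return null;
    }
    try {
        if (cursor.moveToFirst()) {
            long id = cursor.getLong(
                    cursor.getColumnIndexOrThrow(MediaStore.Images.ImageColumns._ID));
            long dateTaken = cursor.getLong(
                    cursor.getColumnIndexOrThrow(MediaStore.Images.ImageColumns.DATE_TAKEN));
            // Only accept the image if it was taken after the camera was launched.
            if (dateTaken >= cameraStartTime) {
                return Uri.withAppendedPath(
                        MediaStore.Images.Media.EXTERNAL_CONTENT_URI, String.valueOf(id));
            }
        }
        return null;
    } finally {
        cursor.close();
    }
}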
At this point, I have retrieved the photo Uri and its orientation, which I pass to my UploadPhotoActivity.
Image processing
Please take a close look at my BitmapHelper class. It is based on the code described in detail in that tutorial.
Moreover, the shrinkBitmap method also rotates the image if required, based on the orientation information extracted earlier (a minimal rotation sketch follows below).
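The rotation itself can be done with a Matrix. A minimal sketch of that part; this is not the author's BitmapHelper, just the general idea under the assumption that the orientation is given in degrees.

import android.graphics.Bitmap;
import android.graphics.Matrix;

// Rotate a bitmap by the orientation (in degrees) read from the MediaStore.
private static Bitmap rotateIfRequired(Bitmap source, int orientationDegrees) {
    if (orientationDegrees == 0) {
        return source; // nothing to do
    }
    Matrix matrix = new Matrix();
    matrix.postRotate(orientationDegrees);
    return Bitmap.createBitmap(
            source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
}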
I hope this is helpful to some of you.
I have tested this code with a Sony Xperia Go, Samsung Galaxy SII, Samsung Galaxy SIII mini and a Samsung Galaxy Y; it worked on all devices!
But on the LG E400 (2.3.6) it didn't work and you get double pictures in the gallery. So I added the manufacturer.contains("lge") check in startCameraIntent() and it fixed the problem:
if (!(manufacturer.contains("samsung")) && !(manufacturer.contains("sony")) && !(manufacturer.contains("lge"))) {
    String filename = System.currentTimeMillis() + ".jpg";
    ContentValues values = new ContentValues();
    values.put(MediaStore.Images.Media.TITLE, filename);
    cameraPicUri = getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
    intent.putExtra(MediaStore.EXTRA_OUTPUT, cameraPicUri);
}
On a Galaxy S3 with CM 10.1 I get a NullPointerException in BitmapHelper:
bm = BitmapFactory.decodeFileDescriptor(fileDescriptor.getFileDescriptor(), null, options);
subsequently my UploadPhotoActivity fails at:
try {
    photo = BitmapHelper.readBitmap(this, cameraPicUri);
    if (photo != null) {
        photo = BitmapHelper.shrinkBitmap(photo, 600, rotateXDegrees);
        thumbnail = BitmapHelper.shrinkBitmap(photo, 100);
        ImageView imageView = (ImageView) findViewById(R.id.sustainable_action_photo);
        imageView.setImageBitmap(photo);
    } else {
        Log.e(TAG, "IMAGE ERROR 1");
    }
} catch (Exception e) {
    Log.e(TAG, "IMAGE ERROR 2");
    e.printStackTrace();
}
at the second log (IMAGE ERROR 2).
After a couple of tries my camera broke and I got a "Could not connect to camera" error.
Tested it on a Nexus 7 and it works perfectly.
Edit: Narrowed it down to this:
fileDescriptor = context.getContentResolver().openAssetFileDescriptor(selectedImage, "r");
Although selectedImage contains this:
file:///storage/emulated/0/DCIM/Camera/IMG_20131023_183343.jpg
The fileDescriptor call throws a FileNotFoundException. I checked the file system and the image is not saved at this location. The cameraPicUri in TakePhotoActivity points to a non-existent image. I am currently checking where it all goes wrong.
Edit2: I figured out the error: since the device is a Samsung, and tells the app that it is a Samsung device, your Samsung-specific fixes are applied. CyanogenMod does not need those fixes, though, and in the end the code breaks. Once you remove
(manufacturer.contains("samsung")) &&
it works. Since this is a custom ROM you could not plan for that, of course. I am trying to figure out a way to detect whether the device is running CyanogenMod and then include that in your code.
Thanks for a nice camera fix!
Edit3: I fixed it to run on CyanogenMod on the Galaxy S3 by changing your code to this:
if (getPackageManager().hasSystemFeature("com.cyanogenmod.android") || (!(manufacturer.contains("samsung")) && !(manufacturer.contains("sony")) && !(manufacturer.contains("lge"))))
Well, now it sometimes works, sometimes it does not. Strange.
I experienced some problems when using this with a Sony Xperia Z5.
I added this and it got a lot better:
if (buildType.contains("sony") && buildDevice.contains("e5823")) {
    setPreDefinedCameraUri = true;
}
But 4 times out of 22 it restarted the camera, and once it restarted two times. I restarted the app for every test.
Is there some way to get around this, or do I accept this result?
The thing is that if the camera restarts, I can press the back button twice and, boom, the image is there in my ImageView and saved.