I am writing tests using Espresso. My app launches an intent to the phone's camera, where I press the shutter button manually, and then it moves to the next screen. I am not able to automate that shutter press in my test code. How can I drive the camera from code so the test can do the same thing?
Thanks.
You should not actually open the camera intent in your test, or you'll have no way of getting a resulting image back from it without pressing the camera button manually.
Have a look at the "Stubbing out the Camera" section of this guide:
https://guides.codepath.com/android/UI-Testing-with-Espresso#stubbing-out-the-camera
This way you test your activity by simulating an image being "returned" to your app from the camera.
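For reference, here is a minimal sketch of that stubbing approach with Espresso-Intents; the button id (R.id.btn_take_photo) and the testBitmap variable are placeholders for whatever your app and test actually use, not code from the guide:
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.intent.Intents.intending;
import static androidx.test.espresso.intent.matcher.IntentMatchers.hasAction;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

import android.app.Activity;
import android.app.Instrumentation;
import android.content.Intent;
import android.provider.MediaStore;

import org.junit.Test;

// Inside a test that uses IntentsTestRule (or Intents.init()/Intents.release()):
@Test
public void takingPhoto_isStubbed() {
    // Build the result the fake "camera" will return (a small bitmap in the "data" extra).
    Intent resultData = new Intent();
    resultData.putExtra("data", testBitmap); // placeholder bitmap, e.g. from getTestBitmap() below
    Instrumentation.ActivityResult result =
            new Instrumentation.ActivityResult(Activity.RESULT_OK, resultData);

    // Intercept any ACTION_IMAGE_CAPTURE intent and answer it with the stub,
    // so the real camera app never opens.
    intending(hasAction(MediaStore.ACTION_IMAGE_CAPTURE)).respondWith(result);

    // Click the button in the app under test that launches the camera;
    // its onActivityResult receives the stubbed result immediately.
    onView(withId(R.id.btn_take_photo)).perform(click());
}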
Update
And this is the method I use to get a bitmap to test:
public static Bitmap getTestBitmap(Context context, String resourceName) {
    Resources resources = context.getResources();
    Bitmap ret = null;
    // Resolve the drawable by name and build an android.resource:// Uri for it.
    int imageResource = resources.getIdentifier(
            resourceName, "drawable", context.getPackageName());
    Uri pictureUri = Uri.parse(ContentResolver.SCHEME_ANDROID_RESOURCE + "://"
            + resources.getResourcePackageName(imageResource) + '/'
            + resources.getResourceTypeName(imageResource) + '/'
            + resources.getResourceEntryName(imageResource));
    try {
        ret = MediaStore.Images.Media.getBitmap(context.getContentResolver(), pictureUri);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return ret;
}
And then I save the bitmap in internal storage and get the uri:
public static Uri saveToInternalStorage(Context context, Bitmap bitmapImage, String fileName) {
    ContextWrapper cw = new ContextWrapper(context);
    // Path to /data/data/yourapp/app_data/pictures
    File directory = cw.getDir("pictures", Context.MODE_PRIVATE);
    File mypath = new File(directory, fileName);
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(mypath);
        // Use compress() on the Bitmap object to write the image to the OutputStream.
        bitmapImage.compress(Bitmap.CompressFormat.PNG, 100, fos);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (fos != null) {
            try {
                fos.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return Uri.fromFile(mypath);
}
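Putting the two helpers together in a test might then look like this; the drawable name ("test_photo") is a placeholder, and it assumes your app reads the captured image from the result Intent's data Uri:
Context context = InstrumentationRegistry.getInstrumentation().getTargetContext();

// Load a known drawable from the app under test and write it to internal storage.
Bitmap stubBitmap = getTestBitmap(context, "test_photo");
Uri stubUri = saveToInternalStorage(context, stubBitmap, "test_photo.png");

// Return that Uri as the "captured" image from the stubbed camera intent.
Intent resultData = new Intent().setData(stubUri);
intending(hasAction(MediaStore.ACTION_IMAGE_CAPTURE))
        .respondWith(new Instrumentation.ActivityResult(Activity.RESULT_OK, resultData));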
I know this is late, but it's something I struggled with myself and I would like to post an answer to help someone else. Here is how you click the camera button from a chooser (after you have set the chooser up) using UIAutomator, as suggested by PunitD in the comments of the original post. This picks up from the point where the test is showing the chooser on screen.
public static final int waitTimeNativeApi = 6000;

public static void await(int time) {
    try {
        Thread.sleep(time);
    } catch (InterruptedException e) {
        Log.e(TAG, "Interrupted while sleeping");
    }
}
private void takePhoto() {
    boolean usePixels = true;
    UiDevice device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());
    UiObject titleTextUI = device.findObject(new UiSelector()
            .className("android.widget.TextView")
            .text("Camera")
    );
    try {
        titleTextUI.clickTopLeft();
        if (usePixels) {
            takePhotoForPixels(device);
        } else {
            takePhotoForSamsung(device);
        }
    } catch (UiObjectNotFoundException unofe) {
        unofe.printStackTrace();
    }
}
private void takePhotoForPixels(UiDevice device) {
    // Wait for the camera app to open after the chooser entry was clicked.
    await(waitTimeNativeApi);
    // Tap the shutter button (near the bottom centre of the screen).
    device.click(device.getDisplayWidth() / 2, device.getDisplayHeight() - 100);
    await(waitTimeNativeApi);
    // Tap the confirmation control, which sits in the same area.
    device.click(device.getDisplayWidth() / 2, device.getDisplayHeight() - 100);
    await(waitTimeNativeApi);
}
private void takePhotoForSamsung(UiDevice device) throws UiObjectNotFoundException {
    // Select the "Camera" entry from the chooser.
    UiObject titleTextUI = device.findObject(new UiSelector()
            .className("android.widget.TextView")
            .text("Camera")
    );
    titleTextUI.clickTopLeft();
    await(waitTimeNativeApi);
    // Tap the shutter button.
    device.click(device.getDisplayWidth() / 2, device.getDisplayHeight() - 50);
    // Tap the "OK" button to confirm the photo.
    UiObject cameraOkUi = device.findObject(new UiSelector()
            .className("android.widget.TextView")
            .text("OK")
    );
    cameraOkUi.click();
    await(waitTimeNativeApi);
}
In this way, you will take an actual photo and get the results back in onActivityResult.
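On the app side, the real photo then arrives through the usual onActivityResult path; roughly like this (the request code constant and the extras handling are assumptions about your app, not part of the test above):
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        // If EXTRA_OUTPUT was supplied when launching the camera, read the full image
        // from that Uri; otherwise a small thumbnail arrives in the "data" extra.
        if (data != null && data.getExtras() != null) {
            Bitmap thumbnail = (Bitmap) data.getExtras().get("data");
            // ... hand the image to the next screen.
        }
    }
}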
Related
I want to implement functionality for saving an image in the Downloads directory and then offering the user to open it from there (that is, open the directory in which the user can find and open this image). But I've got one issue. Saving finishes successfully, but when the user clicks "OPEN" in the snackbar and chooses an app to perform the action, a different directory appears. It also contains a "Downloads" directory, but that Downloads directory does not contain the saved images! It seems like in Android we have two different "Downloads" directories.
Below is how I get the path for saving the image:
private File getFileForImageSaving() {
    String filename = getImageNameFromUrl(mImageUrl) + ".png";
    File dest = new File(
            Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS),
            filename);
    int index = 1;
    while (dest.exists()) {
        filename = getImageNameFromUrl(mImageUrl) + "_" + index + ".png";
        dest = new File(
                Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS),
                filename);
        index++;
    }
    return dest;
}
This is how I start an activity to view the "Downloads" directory and open files:
Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
Uri uri = Uri.parse(
        Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS).getPath());
intent.setDataAndType(uri, "image/png");
startActivity(Intent.createChooser(intent, "Open folder"));
This is how I save the image. It really works, I've checked.
private void saveImageToFile() {
    final File dest = getFileForImageSaving();
    new AsyncTask<Void, Void, Void>() {
        @Override
        protected Void doInBackground(Void... params) {
            FileOutputStream out = null;
            try {
                dest.createNewFile();
                out = new FileOutputStream(dest);
                // Download the full-size bitmap synchronously on this background thread.
                Bitmap bitmap = Glide.with(ArticleImageViewActivity.this)
                        .load(mImageUrl)
                        .asBitmap()
                        .into(Target.SIZE_ORIGINAL, Target.SIZE_ORIGINAL)
                        .get();
                bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
                out.flush();
                Utils.showInSnackBar(
                        ArticleImageViewActivity.this, getString(R.string.image_has_been_successfully_saved),
                        Snackbar.LENGTH_LONG,
                        onOpenImageInDirectoryListener,
                        getString(R.string.open_image_in_directory));
            } catch (Exception e) {
                Utils.showInSnackBar(ArticleImageViewActivity.this,
                        getString(R.string.error_occurred_during_saving_image),
                        Snackbar.LENGTH_SHORT, null, null);
            } finally {
                if (out != null) {
                    try {
                        out.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
            return null;
        }
    }.execute();
}
I partially resolved my problem by using Intent.ACTION_VIEW instead of Intent.ACTION_GET_CONTENT and the "*/*" MIME type instead of "image/png". But only partially, because in this case the user is offered a wide range of applications, not only file-manager-like applications.
Use MediaScannerConnection.scanFile to scan the file after saving. If you don't, many/most gallery apps won't show your file.
https://developer.android.com/reference/android/media/MediaScannerConnection.html
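A minimal sketch of that call, placed right after the bitmap has been written (the MIME type and log tag are assumptions):
MediaScannerConnection.scanFile(
        getApplicationContext(),
        new String[]{dest.getAbsolutePath()},   // the file that was just written
        new String[]{"image/png"},              // its MIME type
        new MediaScannerConnection.OnScanCompletedListener() {
            @Override
            public void onScanCompleted(String path, Uri uri) {
                // The MediaStore now knows about the file; `uri` is a content:// Uri
                // that can be handed to an ACTION_VIEW intent.
                Log.d("MediaScanner", "Scanned " + path + " -> " + uri);
            }
        });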
I have an activity where I open an image picker intent and then, when I get the selected Uri, I create a copy of the file in the cache directory and pass the location of the image to Picasso to load it. I am doing this because some apps like Google Photos do not allow their actual Uris to be passed to different activities, for security reasons.
Here is my code for the same:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_external_photo_share);
    ButterKnife.bind(this);
    LogUtil.i(TAG, "onCreate called");
    tinyDB = new TinyDB(this);
    if (tinyDB.getBoolean(AppConstants.LOGIN_STATE, false)) {
        imageUri = (Uri) getIntent().getExtras().get(Intent.EXTRA_STREAM);
        if (imageUri == null || imageUri.toString().isEmpty()) {
            ExternalPhotoShareActivity.this.finish();
            return;
        }
        String newActualPath = getActualPathFromUri(imageUri);
        Uri newUri = null;
        if (newActualPath != null) {
            newUri = Uri.fromFile(new File(newActualPath));
        }
        Intent intent = new Intent(ExternalPhotoShareActivity.this, AddCaptionActivity.class);
        intent.putExtra("externalImageUri", newUri);
        intent.putExtra("externalImagePath", newActualPath);
        startActivity(intent);
        ExternalPhotoShareActivity.this.finish();
    } else {
        final Dialog dialog = new Dialog(ExternalPhotoShareActivity.this);
        dialog.requestWindowFeature(Window.FEATURE_NO_TITLE);
        dialog.setContentView(R.layout.dialog_external_share_error);
        dialog.setCanceledOnTouchOutside(false);
        dialog.show();
        TextView goBack = (TextView) dialog.findViewById(R.id.textView86);
        goBack.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                try {
                    if (dialog.isShowing()) {
                        dialog.dismiss();
                    }
                } catch (final IllegalArgumentException e) {
                    // Handle or log or ignore
                } catch (final Exception e) {
                    // Handle or log or ignore
                }
                ExternalPhotoShareActivity.this.finish();
            }
        });
    }
}
private String getActualPathFromUri(Uri selectedImageUri) {
    Bitmap bitmap = null;
    try {
        bitmap = getBitmapFromUri(selectedImageUri);
    } catch (IOException e) {
        e.printStackTrace();
    }
    if (bitmap == null) {
        return null;
    }
    // Write a private copy of the picked image into the app's cache directory.
    File imageFileFolder = new File(getCacheDir(), "galleri5");
    if (!imageFileFolder.exists()) {
        imageFileFolder.mkdir();
    }
    FileOutputStream out = null;
    File imageFileName = new File(imageFileFolder, "galleri5-" + System.currentTimeMillis() + ".jpg");
    try {
        out = new FileOutputStream(imageFileName);
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
        out.flush();
        out.close();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (out != null) {
            try {
                out.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return imageFileName.getAbsolutePath();
}
private Bitmap getBitmapFromUri(Uri uri) throws IOException {
    ParcelFileDescriptor parcelFileDescriptor =
            getContentResolver().openFileDescriptor(uri, "r");
    assert parcelFileDescriptor != null;
    FileDescriptor fileDescriptor = parcelFileDescriptor.getFileDescriptor();
    Bitmap image = BitmapFactory.decodeFileDescriptor(fileDescriptor);
    parcelFileDescriptor.close();
    return image;
}
I call getActualPathFromUri() in my activity's onCreate() method. Sometimes, when the images from the gallery are large, it takes a few seconds to load the image onto the screen. So I thought about executing these two methods on a background thread so that I could show the UI while the background work is done.
I recently started using Android Support Annotations and tried to annotate getActualPathFromUri() with @WorkerThread. But in my onCreate() method the call is marked red, with the message that this method should be called from a worker thread while the currently inferred thread is main.
What is the proper way of doing this? Should I even do it in a background thread or not? Thanks.
The annotations @WorkerThread and @UiThread are only used for flagging a method. Android Studio will raise an error when your method is called from a thread that does not match your annotated constraint. See this documentation.
For advice on threading with AsyncTask, see this Android Developers blog post.
In this case, with the @WorkerThread annotation you're saying that getActualPathFromUri() should only be called from a worker thread. As you're calling it from onCreate(), which runs on the UI thread, lint detects the issue and flags it.
The fact that you annotate the method like that does not mean you're running the method on a worker thread. The annotation is just a way to flag for the developer (who might be someone other than yourself when working in a team) that a particular method is meant to be called on a particular thread.
If you want to actually run that method on a worker thread, call it inside an AsyncTask#doInBackground() method, as sketched below.
EDIT: if you don't want to use AsyncTask you can use any other Android threading mechanism, like an IntentService, GcmTaskService and so on, but you should not run it on the UI thread because it may cause jank.
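For example, a minimal sketch of the AsyncTask route for this case; the wrapper method name and the way the result is consumed afterwards are assumptions based on the code in the question:
// Runs getActualPathFromUri() off the UI thread and delivers the result back on it.
private void resolvePathInBackground(final Uri selectedImageUri) {
    new AsyncTask<Void, Void, String>() {
        @Override
        protected String doInBackground(Void... params) {
            // Worker thread: safe to call the @WorkerThread-annotated method here.
            return getActualPathFromUri(selectedImageUri);
        }

        @Override
        protected void onPostExecute(String actualPath) {
            // Back on the UI thread: continue with the result, e.g. start the next activity.
            if (actualPath != null) {
                Intent intent = new Intent(ExternalPhotoShareActivity.this, AddCaptionActivity.class);
                intent.putExtra("externalImagePath", actualPath);
                intent.putExtra("externalImageUri", Uri.fromFile(new File(actualPath)));
                startActivity(intent);
            }
            finish();
        }
    }.execute();
}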
I want to start the camera intent within my app to take a picture and save it to internal storage. I'm using the code from the Google developers page Capturing images or video.
In the processPictureWhenReady method I've implemented the following code to save the picture:
private void processPictureWhenReady(final String picturePath) {
    Log.v("path processPictureWhenReady", " " + picturePath);
    final File pictureFile = new File(picturePath);
    if (pictureFile.exists()) {
        // The picture is ready; process it.
        try {
            Bitmap imageBitmap = BitmapFactory.decodeFile(picturePath);
            int w = imageBitmap.getWidth();
            int h = imageBitmap.getHeight();
            // Scale the image down to half size before inserting it into the MediaStore.
            Bitmap bm2 = Bitmap.createScaledBitmap(imageBitmap, w / 2,
                    h / 2, true);
            imageBitmap = bm2.copy(bm2.getConfig(), true);
            MediaStore.Images.Media.insertImage(getContentResolver(),
                    imageBitmap, "test", "Test");
        } catch (Exception e) {
            Log.e("Exc", e.getMessage());
        }
    }
}
The camera intent starts and then I have to "tap to accept" to take the picture. But then nothing happens. I have a log message in my onActivityResult method and noticed that the method is not being called.
This is a known issue. I have the same problem. I'm following the case here in the meantime.
I've seen people try implementing the preview mode with SurfaceView (I haven't personally gotten it to work but it's worth a shot). Also check here for a similar problem.
I used this method and it worked very well for me.
private void processPictureWhenReady(final String picturePath) {
    final File pictureFile = new File(picturePath);
    if (pictureFile.exists()) {
        // The picture is ready; process it here.
    } else {
        // The file does not exist yet: watch its parent directory until it is fully written.
        final File parentDirectory = pictureFile.getParentFile();
        FileObserver observer = new FileObserver(parentDirectory.getPath()) {
            private boolean isFileWritten;

            @Override
            public void onEvent(int event, String path) {
                if (!isFileWritten) {
                    // For safety, make sure that the file that was created in
                    // the directory is actually the one that we're expecting.
                    File affectedFile = new File(parentDirectory, path);
                    isFileWritten = (event == FileObserver.CLOSE_WRITE && affectedFile.equals(pictureFile));
                    if (isFileWritten) {
                        stopWatching();
                        // Now that the file is ready, recursively call
                        // processPictureWhenReady again (on the UI thread).
                        runOnUiThread(new Runnable() {
                            @Override
                            public void run() {
                                processPictureWhenReady(picturePath);
                            }
                        });
                    }
                }
            }
        };
        observer.startWatching();
    }
}
I am developing an Android app that captures images of a Google Map (v2). I need the user to name those pictures; the user should be able to name the picture after pressing the snapshot button. Should I put an EditText into an AlertDialog and then use the string from the EditText as the image name? Here is the code I use for capturing those images:
public void CaptureMapScreen() {
    SnapshotReadyCallback callback = new SnapshotReadyCallback() {
        Bitmap bitmap;

        @Override
        public void onSnapshotReady(Bitmap snapshot) {
            bitmap = snapshot;
            try {
                // Storage path (where the image will be stored) + image name; customize as required.
                FileOutputStream out = new FileOutputStream(
                        Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES)
                                + File.separator + "Rute" + File.separator
                                + "MyMapScreen" + System.currentTimeMillis() + ".png");
                bitmap.compress(Bitmap.CompressFormat.PNG, 90, out);
                out.flush();
                out.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    map.snapshot(callback);
}
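As for the naming part: one way is indeed to show an AlertDialog with an EditText before taking the snapshot and pass the entered name into the file path. A rough sketch of that idea (the dialog wiring and a modified CaptureMapScreen(String) overload are assumptions, not the existing code):
private void promptForNameAndCapture() {
    final EditText input = new EditText(this);
    new AlertDialog.Builder(this)
            .setTitle("Image name")
            .setView(input)
            .setPositiveButton("Save", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    String name = input.getText().toString().trim();
                    if (name.isEmpty()) {
                        name = "MyMapScreen" + System.currentTimeMillis();
                    }
                    // Hypothetical overload that builds the output path from the chosen name.
                    CaptureMapScreen(name);
                }
            })
            .setNegativeButton("Cancel", null)
            .show();
}
Inside such a CaptureMapScreen(String name) you would then build the output path with name + ".png" instead of the hard-coded "MyMapScreen" prefix.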
I want my app to be able to capture photos without using another application. The code I used:
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
File photo = null;
try {
    photo = this.createTemporaryFile("picture", ".jpg");
    photo.delete();
} catch (Exception e) {
    Toast.makeText(getApplicationContext(), "Error", Toast.LENGTH_LONG).show();
}
mImageUri = Uri.fromFile(photo);
intent.putExtra(MediaStore.EXTRA_OUTPUT, mImageUri);
startActivityForResult(intent, CAPTURE_IMAGE_ACTIVITY_REQUEST_CODE);
But this code uses the phone's main camera app. Can anyone give me some code?
Taking a picture directly using the Camera class is insanely complicated to get right.
I am working on a library to simplify this, where you just add a CameraFragment to your app for the basic preview UI, and call takePicture() on it to take a picture, with various ways to configure the behavior (e.g., where the pictures get saved). However, this library is still a work in progress.
Can anyone give me some code?
"Some code" is going to be thousands of lines long (for a complete implementation, including dealing with various device-specific oddities).
You are welcome to read the Android developer documentation on the subject.
Once you have the camera preview set up, you need to do the following:
protected static final int MEDIA_TYPE_IMAGE = 0;

public void capture(View v) {
    PictureCallback pictureCB = new PictureCallback() {
        public void onPictureTaken(byte[] data, Camera cam) {
            File picFile = getOutputMediaFile(MEDIA_TYPE_IMAGE);
            if (picFile == null) {
                Log.e(TAG, "Couldn't create media file; check storage permissions?");
                return;
            }
            try {
                // Write the raw JPEG bytes delivered by the camera to the output file.
                FileOutputStream fos = new FileOutputStream(picFile);
                fos.write(data);
                fos.close();
            } catch (FileNotFoundException e) {
                Log.e(TAG, "File not found: " + e.getMessage());
                e.printStackTrace();
            } catch (IOException e) {
                Log.e(TAG, "I/O error writing file: " + e.getMessage());
                e.printStackTrace();
            }
        }
    };
    camera.takePicture(null, null, pictureCB);
}
And the getOutputMediaFile function:
private File getOutputMediaFile(int type) {
    File dir = new File(Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES), getPackageName());
    if (!dir.exists()) {
        if (!dir.mkdirs()) {
            Log.e(TAG, "Failed to create storage directory.");
            return null;
        }
    }
    String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss", Locale.UK).format(new Date());
    if (type == MEDIA_TYPE_IMAGE) {
        return new File(dir.getPath() + File.separator + "IMG_" + timeStamp + ".jpg");
    } else {
        return null;
    }
}
And you are done!
I found it here.
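For completeness, the camera field used above comes from the usual (pre-API 21) preview setup; a minimal sketch, assuming a SurfaceView-based preview whose SurfaceHolder is already available:
private Camera camera;

private void openCameraAndStartPreview(SurfaceHolder holder) {
    try {
        // Opens the first back-facing camera (android.hardware.Camera, deprecated in API 21).
        camera = Camera.open();
        camera.setPreviewDisplay(holder);
        camera.startPreview();
    } catch (IOException e) {
        Log.e(TAG, "Could not start camera preview", e);
    }
}

private void releaseCamera() {
    if (camera != null) {
        camera.stopPreview();
        camera.release();
        camera = null;
    }
}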
Camera was deprecated in API 21; the new way is to use android.hardware.camera2.
To enumerate, query, and open available camera devices, obtain a CameraManager instance.
To quickly summarize:
Obtain a CameraManager instance by calling Context.getSystemService(String).
Get a String[] of device camera IDs by calling CameraManager.getCameraIdList().
Call CameraManager.openCamera(...) with the desired camera ID from the previous step.
Once the camera is opened, the callback provided to openCamera(...) will be called.
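A minimal sketch of those steps (runtime permission checks and the subsequent capture-session setup are omitted; the context variable and TAG constant are assumed):
CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try {
    // Steps 1-2: list the available camera IDs and pick one (here simply the first).
    String[] cameraIds = manager.getCameraIdList();
    String cameraId = cameraIds[0];

    // Steps 3-4: open the camera; the StateCallback fires once it is ready.
    manager.openCamera(cameraId, new CameraDevice.StateCallback() {
        @Override
        public void onOpened(CameraDevice camera) {
            // The camera is now open; create a CaptureSession here to preview or take pictures.
            Log.d(TAG, "Camera opened: " + camera.getId());
        }

        @Override
        public void onDisconnected(CameraDevice camera) {
            camera.close();
        }

        @Override
        public void onError(CameraDevice camera, int error) {
            camera.close();
            Log.e(TAG, "Camera error: " + error);
        }
    }, null); // null handler = deliver callbacks on the current thread's looper
} catch (CameraAccessException e) {
    e.printStackTrace();
}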