Google Glass - Take a picture and save it programmatically - android

I want to start the camera intent within my app to take a picture and save it to internal storage. I'm using the code from the Google developers page, Capturing images or video.
In the processPictureWhenReady method I've implemented the following code to save the picture:
private void processPictureWhenReady(final String picturePath) {
    Log.v("processPictureWhenReady", "path: " + picturePath);
    final File pictureFile = new File(picturePath);
    if (pictureFile.exists()) {
        // The picture is ready; process it.
        try {
            Bitmap imageBitmap = BitmapFactory.decodeFile(picturePath);
            int w = imageBitmap.getWidth();
            int h = imageBitmap.getHeight();
            Bitmap bm2 = Bitmap.createScaledBitmap(imageBitmap, w / 2, h / 2, true);
            imageBitmap = bm2.copy(bm2.getConfig(), true);
            MediaStore.Images.Media.insertImage(getContentResolver(),
                    imageBitmap, "test", "Test");
        } catch (Exception e) {
            Log.e("Exc", e.getMessage());
        }
    }
}
The camera intent starts and I get "tap to accept" to take a picture, but then nothing happens. I have a log message in my onActivityResult method and noticed that the method is not being called.

This is a known issue; I have the same problem and I'm following the case here in the meantime.
I've seen people try implementing the preview mode with a SurfaceView (I haven't personally gotten it to work, but it's worth a shot). Also check here for a similar problem.

I used this method and it worked very well for me.
private void processPictureWhenReady(final String picturePath) {
    final File pictureFile = new File(picturePath);
    if (pictureFile.exists()) {
        // The picture is ready; process it.
    } else {
        final File parentDirectory = pictureFile.getParentFile();
        FileObserver observer = new FileObserver(parentDirectory.getPath()) {
            private boolean isFileWritten;

            @Override
            public void onEvent(int event, String path) {
                if (!isFileWritten) {
                    // For safety, make sure that the file that was created in
                    // the directory is actually the one that we're expecting.
                    File affectedFile = new File(parentDirectory, path);
                    isFileWritten = (event == FileObserver.CLOSE_WRITE
                            && affectedFile.equals(pictureFile));
                    if (isFileWritten) {
                        stopWatching();
                        // Now that the file is ready, recursively call
                        // processPictureWhenReady again (on the UI thread).
                        runOnUiThread(new Runnable() {
                            @Override
                            public void run() {
                                processPictureWhenReady(picturePath);
                            }
                        });
                    }
                }
            }
        };
        observer.startWatching();
    }
}

Related

Espresso: how to click the image-capture button of the phone camera

I am writing tests using Espresso. My app launches the phone camera via an intent, where I press the capture button manually, and then it moves to the next screen. I am not able to automate pressing the capture button in my test code. How can I access the camera from code so that I can do the same?
Thanks.
You should not open the camera intent or you'll have no way of getting any resulting image back from it (without pressing the camera button manually).
Have a look at the Stubbing out the Camera section of this website:
https://guides.codepath.com/android/UI-Testing-with-Espresso#stubbing-out-the-camera
This way you test your activity by simulating an actual image "returned" to your app from the Camera.
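For reference, the stubbing itself boils down to a few lines with Espresso-Intents. This is a minimal sketch, assuming the `espresso-intents` dependency is on the androidTest classpath and `Intents.init()` has been called in the test; the `stubCamera` helper name is my own:

```java
import static androidx.test.espresso.intent.Intents.intending;
import static androidx.test.espresso.intent.matcher.IntentMatchers.hasAction;

import android.app.Activity;
import android.app.Instrumentation;
import android.content.Intent;
import android.graphics.Bitmap;
import android.provider.MediaStore;

public class CameraStubHelper {

    // Stub ACTION_IMAGE_CAPTURE so the real camera never opens and
    // onActivityResult receives a canned bitmap instead.
    public static void stubCamera(Bitmap testBitmap) {
        Intent resultData = new Intent();
        resultData.putExtra("data", testBitmap);
        Instrumentation.ActivityResult result =
                new Instrumentation.ActivityResult(Activity.RESULT_OK, resultData);
        // Any intent with ACTION_IMAGE_CAPTURE gets this result
        // instead of actually launching the camera app.
        intending(hasAction(MediaStore.ACTION_IMAGE_CAPTURE)).respondWith(result);
    }
}
```

Call `stubCamera(...)` before the action in your test that fires the camera intent; it requires a device or emulator, so no plain-JVM test is shown here.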
Update
And this is the method I use to get a bitmap to test:
public static Bitmap getTestBitmap(Context context, String resourceName) {
    Resources resources = context.getResources();
    Bitmap ret = null;
    int imageResource = resources.getIdentifier(
            resourceName, "drawable", context.getPackageName());
    Uri pictureUri = Uri.parse(ContentResolver.SCHEME_ANDROID_RESOURCE + "://"
            + resources.getResourcePackageName(imageResource) + '/'
            + resources.getResourceTypeName(imageResource) + '/'
            + resources.getResourceEntryName(imageResource));
    try {
        ret = MediaStore.Images.Media.getBitmap(context.getContentResolver(), pictureUri);
    } catch (Exception e) {
        // Returns null if the resource could not be decoded.
    }
    return ret;
}
And then I save the bitmap in internal storage and get the uri:
public static Uri saveToInternalStorage(Context context, Bitmap bitmapImage, String fileName) {
    ContextWrapper cw = new ContextWrapper(context);
    // path to /data/data/yourapp/app_data/pictures
    File directory = cw.getDir("pictures", Context.MODE_PRIVATE);
    File mypath = new File(directory, fileName);
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(mypath);
        // Use the compress method on the Bitmap object to write the image to the OutputStream.
        bitmapImage.compress(Bitmap.CompressFormat.PNG, 100, fos);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (fos != null) {
            try {
                fos.close();
            } catch (Exception e) {
                // Ignore close failures.
            }
        }
    }
    return Uri.fromFile(mypath);
}
I know this is late, but it's something I struggled with myself, so I would like to post an answer to help someone else. Here is how you click the camera button from a chooser (after you set it up): you use UiAutomator, as suggested by PunitD in the comments of the original post. This picks up from where the test is showing a chooser on the screen.
public static final int waitTimeNativeApi = 6000;

public static void await(int time) {
    try {
        Thread.sleep(time);
    } catch (InterruptedException e) {
        Log.e(TAG, "Interrupted while sleeping");
    }
}

private void takePhoto() {
    boolean usePixels = true;
    UiDevice device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());
    UiObject titleTextUI = device.findObject(new UiSelector()
            .className("android.widget.TextView")
            .text("Camera")
    );
    try {
        titleTextUI.clickTopLeft();
        if (usePixels) {
            takePhotoForPixels(device);
        } else {
            takePhotoForSamsung(device);
        }
    } catch (UiObjectNotFoundException unofe) {
        unofe.printStackTrace();
    }
}

private void takePhotoForPixels(UiDevice device) {
    // close the app selector to go back to our app so we can carry on with Espresso
    await(waitTimeNativeApi);
    // TAP on the camera icon
    device.click(device.getDisplayWidth() / 2, device.getDisplayHeight() - 100);
    await(waitTimeNativeApi);
    // Get the OK button
    device.click(device.getDisplayWidth() / 2, device.getDisplayHeight() - 100);
    await(waitTimeNativeApi);
}

private void takePhotoForSamsung(UiDevice device) throws UiObjectNotFoundException {
    // close the app selector to go back to our app so we can carry on with Espresso
    UiObject titleTextUI = device.findObject(new UiSelector()
            .className("android.widget.TextView")
            .text("Camera")
    );
    titleTextUI.clickTopLeft();
    await(waitTimeNativeApi);
    // TAP on the camera icon
    device.click(device.getDisplayWidth() / 2, device.getDisplayHeight() - 50);
    // Get the OK button
    UiObject cameraOkUi = device.findObject(new UiSelector()
            .className("android.widget.TextView")
            .text("OK")
    );
    cameraOkUi.click();
    await(waitTimeNativeApi);
}
In this way, you will take an actual photo and get the results back in onActivityResult.

Using Android Support Annotations

I have an activity where I open an image picker intent, and when I get the selected Uri, I create a copy of the file in the cache directory and then pass the location of the image to Picasso to load it. I am doing this because some apps, like Google Photos, do not allow their actual Uris to be passed to other activities, for security reasons.
Here is my code for the same :
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_external_photo_share);
    ButterKnife.bind(this);
    LogUtil.i(TAG, "onCreate called");
    tinyDB = new TinyDB(this);
    if (tinyDB.getBoolean(AppConstants.LOGIN_STATE, false)) {
        imageUri = (Uri) getIntent().getExtras().get(Intent.EXTRA_STREAM);
        if (imageUri == null || imageUri.toString().isEmpty()) {
            ExternalPhotoShareActivity.this.finish();
        }
        String newActualPath = getActualPathFromUri(imageUri);
        Uri newUri = null;
        if (newActualPath != null) {
            newUri = Uri.fromFile(new File(newActualPath));
        }
        Intent intent = new Intent(ExternalPhotoShareActivity.this, AddCaptionActivity.class);
        intent.putExtra("externalImageUri", newUri);
        intent.putExtra("externalImagePath", newActualPath);
        startActivity(intent);
        ExternalPhotoShareActivity.this.finish();
    } else {
        final Dialog dialog = new Dialog(ExternalPhotoShareActivity.this);
        dialog.requestWindowFeature(Window.FEATURE_NO_TITLE);
        dialog.setContentView(R.layout.dialog_external_share_error);
        dialog.setCanceledOnTouchOutside(false);
        dialog.show();
        TextView goBack = (TextView) dialog.findViewById(R.id.textView86);
        goBack.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                try {
                    if (dialog.isShowing()) {
                        dialog.dismiss();
                    }
                } catch (final Exception e) {
                    // Handle or log or ignore
                }
                ExternalPhotoShareActivity.this.finish();
            }
        });
    }
}
private String getActualPathFromUri(Uri selectedImageUri) {
    Bitmap bitmap = null;
    try {
        bitmap = getBitmapFromUri(selectedImageUri);
    } catch (IOException e) {
        e.printStackTrace();
    }
    if (bitmap == null) {
        return null;
    }
    File imageFileFolder = new File(getCacheDir(), "galleri5");
    if (!imageFileFolder.exists()) {
        imageFileFolder.mkdir();
    }
    FileOutputStream out = null;
    File imageFileName = new File(imageFileFolder, "galleri5-" + System.currentTimeMillis() + ".jpg");
    try {
        out = new FileOutputStream(imageFileName);
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
        out.flush();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (out != null) {
            try {
                out.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return imageFileName.getAbsolutePath();
}

private Bitmap getBitmapFromUri(Uri uri) throws IOException {
    ParcelFileDescriptor parcelFileDescriptor =
            getContentResolver().openFileDescriptor(uri, "r");
    assert parcelFileDescriptor != null;
    FileDescriptor fileDescriptor = parcelFileDescriptor.getFileDescriptor();
    Bitmap image = BitmapFactory.decodeFileDescriptor(fileDescriptor);
    parcelFileDescriptor.close();
    return image;
}
I call getActualPathFromUri() in my activity's onCreate() method. Sometimes, when the images from the gallery are large, it takes a few seconds to load the image on the screen. So I thought about executing these two methods on a background thread so that I could show the UI while the background work is done.
I recently started using Android Support Annotations and tried to annotate getActualPathFromUri() with @WorkerThread. But in my onCreate() method, it marks the call red and says that this method should be called from a worker thread; the currently inferred thread is main.
What is the proper way of doing this? Should I even do it in a background thread or not? Thanks.
The annotations @WorkerThread and @UiThread are only used for flagging a method. Android Studio will raise an error when your method is called from a thread that does not match your annotated constraint. See this documentation.
For advice on threading with AsyncTask, see this Android developers blog post.
In this case, with the annotation @WorkerThread you're saying that getActualPathFromUri() should only be called from a worker thread. As you're calling it from onCreate(), which runs on the UI thread, lint detects the issue and flags it.
The fact that you annotate the method like that does not mean you're running the method on a worker thread. The annotation in this case is just a way to flag to the developer (who might be different from yourself when working in a team) that a particular method is meant to be called on a particular thread.
If you want to actually run that method on a worker thread, just call it inside an AsyncTask#doInBackground method.
EDIT: if you don't want to use AsyncTask you can use any other Android threading mechanism, like IntentService, GcmTaskService and so on, but you should not run it on the UI thread because it may cause jank.
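As a sketch of that advice, here is how the call could be moved into AsyncTask#doInBackground. The field and method names are taken from the question; the task class itself is hypothetical:

```java
// Inner class of ExternalPhotoShareActivity (hypothetical name).
// Runs getActualPathFromUri() off the main thread, then navigates on the UI thread.
private class ResolvePathTask extends AsyncTask<Uri, Void, String> {

    @Override
    protected String doInBackground(Uri... uris) {
        // Worker thread: the @WorkerThread lint constraint is now satisfied.
        return getActualPathFromUri(uris[0]);
    }

    @Override
    protected void onPostExecute(String newActualPath) {
        // Back on the UI thread: safe to start activities and touch views.
        if (newActualPath == null) {
            finish();
            return;
        }
        Uri newUri = Uri.fromFile(new File(newActualPath));
        Intent intent = new Intent(ExternalPhotoShareActivity.this, AddCaptionActivity.class);
        intent.putExtra("externalImageUri", newUri);
        intent.putExtra("externalImagePath", newActualPath);
        startActivity(intent);
        finish();
    }
}
```

It would be kicked off from onCreate() with `new ResolvePathTask().execute(imageUri);` instead of the direct call.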

Android image captured stored into server is rotated when retrieved

In my code I allow the user to upload photos from the gallery or from the camera. The image is then stored on the server through Retrofit. When retrieved, the photo is always rotated 90 degrees if it was taken using the phone's camera (regardless of whether the camera was invoked through the app or directly). If the image was not taken with the camera, the orientation is correct. How do I resolve this?
I know that if I display the image directly without storing it on the server, it is possible to get the right orientation, because I can rotate it before displaying. I am doing something similar to the code here: https://gist.github.com/Mariovc/f06e70ebe8ca52fbbbe2.
But I need to upload to the server and then retrieve from the server again. How do I rotate it before storing it on the server, so that when I retrieve it, it is already in the right orientation?
Below are some parts of my code (just some usual handling of images; displayAvatarInProfile() is called after the service finishes downloading the image):
public void displayAvatarInProfile(String filePath) {
    if (filePath != null && !filePath.isEmpty()) {
        int targetW = mProfilePicImageView.getWidth();
        int targetH = mProfilePicImageView.getHeight();
        Bitmap bm = ImageStorageUtils.resizePic(filePath, targetW, targetH);
        mProfilePicImageView.setImageBitmap(bm);
    } else {
        mProfilePicImageView.setImageBitmap(null);
    }
}

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (resultCode == Activity.RESULT_OK) {
        Uri fileUri = null;
        String filePath = null;
        switch (requestCode) {
            case REQUEST_PICK_IMAGE:
                fileUri = data.getData();
                break;
            case REQUEST_CAPTURE_IMAGE:
                File imageFile = ImageStorageUtils.getFile(
                        getActivity().getResources().getString(R.string.ApptTitle),
                        "avatar_" + mUser.getLogin());
                filePath = imageFile.getAbsolutePath();
                fileUri = ImageStorageUtils.getUriFromFilePath(mContext.get(), filePath);
                break;
            default:
                break;
        }
        if (fileUri != null) {
            getActivity().startService(ProfileService.makeIntentUploadAvatar(
                    mContext.get(), mUser.getId(), fileUri));
        }
    }
}
public void pickImage() {
    final Intent imageGalleryIntent =
            new Intent(Intent.ACTION_PICK, Media.EXTERNAL_CONTENT_URI)
                    .setType("image/*")
                    .putExtra(Intent.EXTRA_LOCAL_ONLY, true);
    // Verify the intent will resolve to an Activity.
    if (imageGalleryIntent.resolveActivity(getActivity().getPackageManager()) != null) {
        // Start an Activity to get the image from the image gallery.
        startActivityForResult(imageGalleryIntent,
                DisplayProfileFragment.REQUEST_PICK_IMAGE);
    }
}

public void captureImage() {
    Uri captureImageUri = null;
    try {
        captureImageUri = Uri.fromFile(ImageStorageUtils.createImageFile(
                mContext.get(),
                getActivity().getResources().getString(R.string.ApptTitle),
                "avatar_" + mUser.getLogin()));
    } catch (IOException e) {
        e.printStackTrace();
    }
    // Create an intent that will start an Activity to capture an image.
    final Intent captureImageIntent =
            new Intent(MediaStore.ACTION_IMAGE_CAPTURE)
                    .putExtra(MediaStore.EXTRA_OUTPUT, captureImageUri);
    // Verify the intent will resolve to an Activity.
    if (captureImageIntent.resolveActivity(getActivity().getPackageManager()) != null) {
        // Start an Activity to capture an image.
        startActivityForResult(captureImageIntent,
                DisplayProfileFragment.REQUEST_CAPTURE_IMAGE);
    }
}
Update 1
Was commented that I needed more description:
When I am uploading a file, I do this:
boolean succeeded = mImpatientServiceProxy.uploadAvatar(userId, new TypedFile("image/jpg", imageFile));
When I download, I get a retrofit.client.Response object, which I then get the InputStream from the Response and write the data to a file using an Output Stream.
final InputStream inputStream = response.getBody().in();
final OutputStream outputStream = new FileOutputStream(file);
IOUtils.copy(inputStream, outputStream);
Update 2
This is not a duplicate of the existing solution, because the user can also upload from the gallery. You do not know whether a gallery image originally came from an online source or from the camera, so you cannot simply rotate every image when you display it.
Use the Camera API instead of an intent. When you capture an image using intents, it can come back rotated. With the Camera API there are methods to change the rotation of the camera preview as well as of the captured photo.
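If you stay with the intent approach, another common fix is to read the JPEG's EXIF orientation before uploading and rotate the bitmap accordingly, so the server always stores an upright image. A sketch: the mapping helper below is pure Java (the constants match the numeric values of `ExifInterface.ORIENTATION_ROTATE_90/_180/_270`), and the commented usage with `ExifInterface` and `Matrix` is what would run on the device:

```java
public class ExifRotation {

    // EXIF orientation tag values -> clockwise rotation in degrees.
    // 6, 3 and 8 are the numeric values of ExifInterface.ORIENTATION_ROTATE_90,
    // ORIENTATION_ROTATE_180 and ORIENTATION_ROTATE_270 respectively.
    public static int exifToDegrees(int exifOrientation) {
        switch (exifOrientation) {
            case 6:  return 90;
            case 3:  return 180;
            case 8:  return 270;
            default: return 0; // normal / undefined / mirrored cases ignored here
        }
    }

    // On-device usage (sketch), before handing the file to Retrofit:
    //
    // ExifInterface exif = new ExifInterface(filePath);
    // int degrees = exifToDegrees(exif.getAttributeInt(
    //         ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL));
    // Matrix m = new Matrix();
    // m.postRotate(degrees);
    // Bitmap upright = Bitmap.createBitmap(
    //         bm, 0, 0, bm.getWidth(), bm.getHeight(), m, true);
}
```

The rotated bitmap is then compressed and uploaded as before, and no rotation is needed on retrieval.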

Take Screenshot of SurfaceView

I am developing a simple camera app. I have code that takes a screenshot of the whole activity and writes it to the SD card. The problem is that the SurfaceView returns a black screen.
I would like to know how to take a screenshot of the SurfaceView only. Here is the code that takes the screenshot of the whole activity:
findViewById(R.id.screen).setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View v) {
        final RelativeLayout layout = (RelativeLayout) findViewById(R.id.RelativeLayout1);
        layout.setVisibility(RelativeLayout.GONE);
        Bitmap bitmap = takeScreenshot();
        Toast.makeText(getApplicationContext(), "Please Wait", Toast.LENGTH_LONG).show();
        saveBitmap(bitmap);
    }
});

public Bitmap takeScreenshot() {
    View rootView = findViewById(android.R.id.content).getRootView();
    rootView.setDrawingCacheEnabled(true);
    return rootView.getDrawingCache();
}
public void saveBitmap(Bitmap bitmap) {
    final MediaPlayer cheer = MediaPlayer.create(PicShot.this, R.raw.shutter);
    cheer.start();
    Random generator = new Random();
    int n = 10000;
    n = generator.nextInt(n);
    String fname = "Image-" + n + ".png";
    final RelativeLayout layout = (RelativeLayout) findViewById(R.id.RelativeLayout1);
    File imagePath = new File(Environment.getExternalStorageDirectory() + "/" + fname);
    FileOutputStream fos;
    try {
        fos = new FileOutputStream(imagePath);
        bitmap.compress(CompressFormat.PNG, 100, fos);
        fos.flush();
        fos.close();
        layout.setVisibility(RelativeLayout.VISIBLE);
        Intent share = new Intent(Intent.ACTION_SEND);
        share.setType("image/*");
        Uri uri = Uri.fromFile(imagePath);
        share.putExtra(Intent.EXTRA_STREAM, uri);
        startActivity(Intent.createChooser(share, "Share Image"));
    } catch (FileNotFoundException e) {
        Log.e("GREC", e.getMessage(), e);
    } catch (IOException e) {
        Log.e("GREC", e.getMessage(), e);
    }
}
The SurfaceView's surface is independent of the surface on which View elements are drawn. So capturing the View contents won't include the SurfaceView.
You need to capture the SurfaceView contents separately and perform your own composition step. The easiest way to do the capture is probably to just re-render the contents, but use an off-screen bitmap as the target rather than the surface. If you're rendering with GLES to an off-screen pbuffer, you can use glReadPixels() before you swap buffers.
Update: Grafika's "texture from camera" activity demonstrates handling live video from the camera with OpenGL ES. EglSurfaceBase#saveFrame() shows how to capture GLES rendering to a Bitmap.
Update: See also this answer, which provides a bit more background.
Just copy and paste this code.
Note: only for API level >= 24.
private void takePhoto() {
    // Create a bitmap the size of the scene view.
    final Bitmap bitmap = Bitmap.createBitmap(
            surfaceView.getWidth(),
            surfaceView.getHeight(),
            Bitmap.Config.ARGB_8888
    );
    // Create a handler thread to offload the processing of the image.
    final HandlerThread handlerThread = new HandlerThread("PixelCopier");
    handlerThread.start();
    // Make the request to copy.
    PixelCopy.request(holder.videoView, bitmap, (copyResult) -> {
        if (copyResult == PixelCopy.SUCCESS) {
            Log.e(TAG, bitmap.toString());
            String name = String.valueOf(System.currentTimeMillis() + ".jpg");
            imageFile = ScreenshotUtils.store(bitmap, name);
        } else {
            Toast toast = Toast.makeText(
                    getViewActivity(),
                    "Failed to copyPixels: " + copyResult,
                    Toast.LENGTH_LONG
            );
            toast.show();
        }
        handlerThread.quitSafely();
    }, new Handler(handlerThread.getLooper()));
}
My situation was related to ExoPlayer: I needed to get a bitmap of the current frame.
This works on API >= 24:
private val copyFrameHandler = Handler()

fun getFrameBitmap(callback: FrameBitmapCallback) {
    when (val view = videoSurfaceView) {
        is TextureView -> callback.onResult(view.bitmap)
        is SurfaceView -> {
            val bitmap = Bitmap.createBitmap(
                videoSurfaceView.getWidth(),
                videoSurfaceView.getHeight(),
                Bitmap.Config.ARGB_8888
            )
            copyFrameHandler.removeCallbacksAndMessages(null)
            PixelCopy.request(view, bitmap, { copyResult: Int ->
                if (copyResult == PixelCopy.SUCCESS) {
                    callback.onResult(bitmap)
                } else {
                    callback.onResult(null)
                }
            }, copyFrameHandler)
        }
        else -> callback.onResult(null)
    }
}

fun onDestroy() {
    copyFrameHandler.removeCallbacksAndMessages(null)
}

interface FrameBitmapCallback {
    fun onResult(bitmap: Bitmap?)
}
If your situation allows it, using a TextureView instead of a SurfaceView will make this problem a lot easier to solve. It has a method getBitmap() that returns a Bitmap of the current frame on the TextureView.
This is how I do it. Put this method in some Util class:
/**
 * Pixel copy to copy a SurfaceView/VideoView into a Bitmap
 */
fun usePixelCopy(videoView: SurfaceView, callback: (Bitmap?) -> Unit) {
    val bitmap: Bitmap = Bitmap.createBitmap(
        videoView.width,
        videoView.height,
        Bitmap.Config.ARGB_8888
    )
    try {
        // Create a handler thread to offload the processing of the image.
        val handlerThread = HandlerThread("PixelCopier")
        handlerThread.start()
        PixelCopy.request(
            videoView, bitmap,
            PixelCopy.OnPixelCopyFinishedListener { copyResult ->
                if (copyResult == PixelCopy.SUCCESS) {
                    callback(bitmap)
                }
                handlerThread.quitSafely()
            },
            Handler(handlerThread.looper)
        )
    } catch (e: IllegalArgumentException) {
        callback(null)
        // PixelCopy may throw IllegalArgumentException, make sure to handle it
        e.printStackTrace()
    }
}
Usage:
usePixelCopy(videoView) { bitmap: Bitmap? ->
processBitMp(bitmap)
}
Note: VideoView is a subclass of SurfaceView, so this method can take screenshot of video View as well.
For those of us looking for a solution for API < 24, this is what I did as a workaround. When the user clicks the capture button, close the preview capture session and create a new capture session for still image capture.
mCaptureSession.stopRepeating();
mCaptureSession.abortCaptures();
mCaptureSession.close();
mCaptureSession = null;
mPreviewReader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(), ImageFormat.JPEG, MAX_IMAGES);
mPreviewReader.setOnImageAvailableListener(mImageAvailableListener, mBackgroundHandler);
mCameraDevice.createCaptureSession(
        Arrays.asList(mPreviewReader.getSurface()),
        //Arrays.asList(mSurface),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                mCaptureSession = cameraCaptureSession;
                try {
                    final CaptureRequest.Builder captureBuilder =
                            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
                    captureBuilder.addTarget(mPreviewReader.getSurface());
                    captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
                    captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, getOrientation(mOrientation));
                    Log.d(TAG, "Capture request created.");
                    mCaptureSession.capture(captureBuilder.build(), mCaptureCallback, mBackgroundHandler);
                } catch (CameraAccessException cae) {
                    Log.d(TAG, cae.toString());
                }
            }

            @Override
            public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {}
        },
        mBackgroundHandler
);
Then, in the OnImageAvailableListener, you can call imageReader.acquireNextImage() to get the captured still image.
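A sketch of what such a listener could look like, pulling the JPEG bytes out of the ImageReader and writing them to a file. The field names (`mOutputFile`, `TAG`) are assumptions, not part of the answer above:

```java
import android.media.Image;
import android.media.ImageReader;
import android.util.Log;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class StillCaptureListener {

    private static final String TAG = "StillCapture";
    private final File mOutputFile; // hypothetical destination file

    public StillCaptureListener(File outputFile) {
        mOutputFile = outputFile;
    }

    // Runs on the handler passed to setOnImageAvailableListener (mBackgroundHandler).
    public final ImageReader.OnImageAvailableListener mImageAvailableListener =
            reader -> {
                // try-with-resources closes the Image, freeing the reader slot.
                try (Image image = reader.acquireNextImage()) {
                    // For JPEG format there is a single plane holding the encoded bytes.
                    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                    byte[] jpegBytes = new byte[buffer.remaining()];
                    buffer.get(jpegBytes);
                    try (FileOutputStream fos = new FileOutputStream(mOutputFile)) {
                        fos.write(jpegBytes);
                    } catch (IOException e) {
                        Log.e(TAG, "Failed to save still capture", e);
                    }
                }
            };
}
```

Closing the Image promptly matters: the reader was created with a bounded MAX_IMAGES, and an unclosed Image blocks further captures.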

How to capture a photo from the camera without intent

I want my app to be able to capture photos without using another application. The code I used:
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
File photo = null;
try {
    photo = this.createTemporaryFile("picture", ".jpg");
    photo.delete();
} catch (Exception e) {
    Toast.makeText(getApplicationContext(), "Error", Toast.LENGTH_LONG).show();
}
mImageUri = Uri.fromFile(photo);
intent.putExtra(MediaStore.EXTRA_OUTPUT, mImageUri);
startActivityForResult(intent, CAPTURE_IMAGE_ACTIVITY_REQUEST_CODE);
But this code uses the phone's main camera app. Can anyone give me some code?
Taking a picture directly using the Camera class is insanely complicated to get right.
I am working on a library to simplify this, where you just add a CameraFragment to your app for the basic preview UI, and call takePicture() on it to take a picture, with various ways to configure the behavior (e.g., where the pictures get saved). However, this library is still a work in progress.
Can anyone give me some code?
"Some code" is going to be thousands of lines long (for a complete implementation, including dealing with various device-specific oddities).
You are welcome to read the Android developer documentation on the subject.
Once you have the camera preview set up, you need to do the following:
protected static final int MEDIA_TYPE_IMAGE = 0;

public void capture(View v) {
    PictureCallback pictureCB = new PictureCallback() {
        public void onPictureTaken(byte[] data, Camera cam) {
            File picFile = getOutputMediaFile(MEDIA_TYPE_IMAGE);
            if (picFile == null) {
                Log.e(TAG, "Couldn't create media file; check storage permissions?");
                return;
            }
            try {
                FileOutputStream fos = new FileOutputStream(picFile);
                fos.write(data);
                fos.close();
            } catch (FileNotFoundException e) {
                Log.e(TAG, "File not found: " + e.getMessage());
                e.printStackTrace();
            } catch (IOException e) {
                Log.e(TAG, "I/O error writing file: " + e.getMessage());
                e.printStackTrace();
            }
        }
    };
    camera.takePicture(null, null, pictureCB);
}
And the getOutputMediaFile function:
private File getOutputMediaFile(int type) {
    File dir = new File(Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES), getPackageName());
    if (!dir.exists()) {
        if (!dir.mkdirs()) {
            Log.e(TAG, "Failed to create storage directory.");
            return null;
        }
    }
    String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss", Locale.UK).format(new Date());
    if (type == MEDIA_TYPE_IMAGE) {
        return new File(dir.getPath() + File.separator + "IMG_" + timeStamp + ".jpg");
    } else {
        return null;
    }
}
And you are done!
I found it here.
Camera was deprecated in API 21; the new way is to use android.hardware.camera2.
To enumerate, query, and open available camera devices, obtain a CameraManager instance.
To quickly summarize:
Obtain a CameraManager instance by calling Context.getSystemService(String).
Get a String[] of device camera IDs by calling CameraManager.getCameraIdList().
Call CameraManager.openCamera(...) with the desired camera ID from the previous step.
Once the camera is opened, the callback provided to openCamera(...) will be called.
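Those steps can be sketched as follows (a minimal sketch; error handling is trimmed, and the CAMERA runtime permission must already be granted on API 23+):

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import androidx.annotation.NonNull;

public class Camera2Opener {

    public static void openFirstCamera(Context context) {
        // Step 1: obtain the CameraManager system service.
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        try {
            // Step 2: list the available camera IDs.
            String[] ids = manager.getCameraIdList();
            // Step 3: open the first camera; step 4 happens in the callback.
            manager.openCamera(ids[0], new CameraDevice.StateCallback() {
                @Override
                public void onOpened(@NonNull CameraDevice camera) {
                    // Camera is ready: create a capture session here.
                }

                @Override
                public void onDisconnected(@NonNull CameraDevice camera) {
                    camera.close();
                }

                @Override
                public void onError(@NonNull CameraDevice camera, int error) {
                    camera.close();
                }
            }, null); // null handler -> callbacks on the calling thread's looper
        } catch (CameraAccessException | SecurityException e) {
            e.printStackTrace();
        }
    }
}
```

From onOpened you would typically call createCaptureSession with the preview and ImageReader surfaces, as shown in the capture-session answer above.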
