I'm trying to implement a camera app with certain features. I need to take a picture using the phone's camera and then manipulate that image. However, I don't need to save the picture to a file; I only need to get some "data" from the picture. Is there a way to take a picture and then immediately load it as a bitmap, or do I need to at least save it, read it, and then delete it?
I read this tutorial: https://developer.android.com/training/camera/photobasics.html, but saving and deleting images seems heavy on the processor, so I'd like to avoid it if I can. Thanks!
As Saiteja Prasadam notes, if you skip EXTRA_OUTPUT on ACTION_IMAGE_CAPTURE, you will get a thumbnail Bitmap back via the "data" extra on the result Intent.
Beyond that, you can work with the camera APIs directly (e.g., android.hardware.Camera).
You could try creating a ContentProvider that supports writing to a memory buffer instead of a file, then use a content: Uri pointing to that provider in EXTRA_OUTPUT. Getting this to work correctly, and without fragmenting your heap space, may be difficult or impossible.
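A rough, untested sketch of that idea, using a pipe to hand the camera app a writable file descriptor whose contents end up in memory. InMemoryProvider and its details are hypothetical, and all of the caveats above apply:

public class InMemoryProvider extends ContentProvider {
    // The most recent bytes the camera app wrote, if the capture completed.
    public static volatile byte[] lastCapture;

    @Override
    public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
        try {
            final ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
            // Drain the read end into memory on a background thread while the
            // camera app writes the JPEG into the write end.
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try (InputStream in = new ParcelFileDescriptor.AutoCloseInputStream(pipe[0]);
                         ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                        byte[] buf = new byte[8192];
                        for (int n; (n = in.read(buf)) != -1; ) {
                            out.write(buf, 0, n);
                        }
                        lastCapture = out.toByteArray();
                    } catch (IOException ignored) {
                    }
                }
            }).start();
            return pipe[1];
        } catch (IOException e) {
            throw new FileNotFoundException("could not create pipe: " + e.getMessage());
        }
    }

    // The remaining required overrides can be stubs for this experiment.
    @Override public boolean onCreate() { return true; }
    @Override public Cursor query(Uri uri, String[] p, String sel, String[] args, String sort) { return null; }
    @Override public String getType(Uri uri) { return "image/jpeg"; }
    @Override public Uri insert(Uri uri, ContentValues values) { return null; }
    @Override public int delete(Uri uri, String sel, String[] args) { return 0; }
    @Override public int update(Uri uri, ContentValues values, String sel, String[] args) { return 0; }
}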
Overall, bear in mind that you might not have heap space for a full-resolution image anyway. I do not know what "some "data" from the picture" means, exactly, but if it depends upon loading the whole image into RAM, you are going to have challenges implementing that.
Here is the relevant API documentation:
https://developer.android.com/reference/android/hardware/Camera.html#takePicture(android.hardware.Camera.ShutterCallback,%20android.hardware.Camera.PictureCallback,%20android.hardware.Camera.PictureCallback,%20android.hardware.Camera.PictureCallback)
void takePicture(Camera.ShutterCallback shutter,
                 Camera.PictureCallback raw,
                 Camera.PictureCallback postview,
                 Camera.PictureCallback jpeg)
shutter: the callback for image capture moment, or null
raw: the callback for raw (uncompressed) image data, or null
postview: callback with postview image data, may be null
jpeg: the callback for JPEG image data, or null
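For example, a rough sketch that decodes the JPEG callback's bytes straight into a Bitmap, with no file involved. It assumes you already have an open Camera with a running preview:

Camera.PictureCallback jpegCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // Downsample while decoding to keep the heap cost of a full-size image down.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 4; // decode at 1/4 of the width and height
        Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length, opts);
        // ...pull whatever "data" you need out of the bitmap here...
    }
};

camera.takePicture(null, null, jpegCallback);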
Start Intent
Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
startActivityForResult(cameraIntent, CAMERA_REQUEST);
Activity Result
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) {
        Bitmap photo = (Bitmap) data.getExtras().get("data");
    }
}
Note: This thumbnail image from "data" might be good for an icon, but not a lot more. Dealing with a full-sized image takes a bit more work.
Related
Starting with this sample project [ https://github.com/googlesamples/android-vision/tree/master/visionSamples/ocr-reader ], I have been able to implement filtering in the OcrDetectorProcessor.receiveDetections() method.
This works, but com.google.android.gms.vision.text.TextRecognizer appears to search the entire screen for characters.
I presume that the receiveDetections() method could be called more frequently if a smaller portion of the screen were being scanned for characters instead of the entire screen.
Is it possible to specify a smaller portion of the screen to be scanned? It should be straightforward to direct the user, through a change to the graphics overlay, to position their camera so that this smaller portion of the screen contains the target text, but I'm unsure how to tell the processor to use just a small portion of the frame when doing its OCR processing.
What would need to be altered to specify that the OCR should operate on a subset of the frame?
ADDITIONAL INFORMATION:
I tried to subclass TextRecognizer, but it's marked final, and the source appears to be closed.
So I'm expanding the question to how the functionality of the ocr-reader sample could be replicated using Tesseract.
I found this link, but haven't explored converting the concepts there into camera frames as opposed to a single image file.
I had a similar issue and resolved it by using Tesseract and a simple cropping library called "Android Image Cropper".
Basically I just crop the image before passing it for processing. Here is a small sample of my code:
This line will start a new activity for a result:
CropImage.activity().setGuidelines(CropImageView.Guidelines.ON).start((Activity) view.getContext());
After that you just need to override onActivityResult. My solution looks like this:
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (resultCode == RESULT_OK) {
        if (requestCode == CropImage.CROP_IMAGE_ACTIVITY_REQUEST_CODE) {
            CropImage.ActivityResult result = CropImage.getActivityResult(data);
            Bitmap bmp = null;
            try (InputStream is = context.getContentResolver().openInputStream(result.getUri())) {
                bmp = BitmapFactory.decodeStream(is);
            } catch (Exception ex) {
                Log.i(getClass().getSimpleName(), ex.getMessage());
                Toast.makeText(context, errorConvert, Toast.LENGTH_SHORT).show();
            }
            ivImage.setImageBitmap(bmp);
            doOCR(bmp);
        }
    }
}
As you can see, at the end I am passing the already cropped image for OCR in the doOCR() method. You can just pass it to your OCR function and it should work like a charm.
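For reference, a doOCR() along these lines might look roughly like this with the tess-two wrapper. DATA_PATH is an assumption of this sketch: a directory containing tessdata/eng.traineddata.

private void doOCR(Bitmap bmp) {
    TessBaseAPI tess = new TessBaseAPI();
    tess.init(DATA_PATH, "eng");   // expects DATA_PATH/tessdata/eng.traineddata
    tess.setImage(bmp);
    String recognizedText = tess.getUTF8Text();
    tess.end();
    // ...do something with recognizedText...
}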
If you plan to do something similar, don't forget to add the dependency:
//Crop library dependency
api 'com.theartofdev.edmodo:android-image-cropper:2.8.+'
And also add the following to your manifest, inside the <application> element:
<activity
    android:name="com.theartofdev.edmodo.cropper.CropImageActivity"
    android:theme="@style/Base.Theme.AppCompat" />
Hope this helped and good luck :)
I'm developing an application that takes a photo and saves it in Android/data/package/files. I would like to reduce the storage used, so I would like to resize the photo before saving it. For the moment I'm starting a new intent, passing the output path as an extra. Is it possible to also pass the desired size, or is it possible to get the bitmap before saving it?
public void takePicture(View view) {
    Intent pictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    File f = new File(getPath(nameEdit.getText().toString()));
    path = f.getAbsolutePath();
    // Note: Uri.fromFile() triggers FileUriExposedException on Android 7.0+;
    // use FileProvider.getUriForFile() there instead.
    pictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(f));
    if (pictureIntent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(pictureIntent, REQUEST_IMAGE_CAPTURE);
    }
}
Is it possible to also pass the desired size?
No.
Is it possible to get the bitmap before saving it?
Not really. You could write your own ContentProvider and use a Uri for that, rather than a file. When the camera app tries saving the content, you would get that information in memory first. However, you will then crash with an OutOfMemoryError, as you will not have enough heap space to hold a full-resolution photo, in all likelihood.
You can use BitmapFactory with inSampleSize set in the BitmapFactory.Options to read in the file once it has been written by the camera, save the resized image as you see fit, then delete the full-resolution image. Or, skip EXTRA_OUTPUT, and you will get a thumbnail image returned to you, obtained by calling getParcelableExtra("data") on the Intent passed into onActivityResult().
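A sketch of the inSampleSize route, reusing the path field from the question. The factor of 4 is arbitrary (compute it from your target size in real code), and error handling is omitted:

// Read only the dimensions first, without allocating pixel memory.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inJustDecodeBounds = true;
BitmapFactory.decodeFile(path, opts);

// Decode at a fraction of the size, e.g., 1/4 of the width and height.
opts.inJustDecodeBounds = false;
opts.inSampleSize = 4;
Bitmap resized = BitmapFactory.decodeFile(path, opts);

// Save the smaller copy, then delete the full-resolution original.
FileOutputStream out = new FileOutputStream(path + ".small.jpg");
resized.compress(Bitmap.CompressFormat.JPEG, 90, out);
out.close();
new File(path).delete();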
In my application I would like to open an image and "load" it into the app using the File Manager. I've already done it using Intent.ACTION_GET_CONTENT and the onActivityResult() method. Everything works fine, except for the path I get. It is in a strange format, and I can't show the image in an ImageView.
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == SELECT_PICTURE) {
        Uri selectedImageUri = data.getData();
        mFileManagerString = selectedImageUri.getPath();
        showPreviewDialog(mFileManagerString);
    }
}
Here mFileManagerString = /document/image:19552
When I call a method to show the image in the ImageView:
public void setPreviewIV(String pPath) {
    Bitmap bmImg = BitmapFactory.decodeFile(pPath);
    receiptPreviewIV.setImageBitmap(bmImg);
}
I get the following error:
E/BitmapFactory: Unable to decode stream: java.io.FileNotFoundException: /document/image:19552: open failed: ENOENT (No such file or directory)
How to get the proper path to the image, so that I can show it in the ImageView?
How to get the proper path to the image
You already have it. It is the Uri that is passed into onActivityResult() via the Intent parameter. Pulling just getPath() out of that Uri is pointless. That's akin to thinking that /questions/33827687/wrong-image-file-path-obtained-from-android-file-manager is a path on your hard drive, just because your Web browser shows a URL that contains that path.
In addition, your current code is loading and decoding the Bitmap on the main application thread. Do not do this, because it freezes the UI while that work is being done.
I recommend that you use any one of the many image-loading libraries available for Android. All the decent ones (e.g., Picasso, Universal Image Loader) not only take a Uri as input but also will load the image on a background thread for you, applying it to your ImageView on the main application thread only when the bitmap is ready.
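With Picasso 2.x, for instance, the whole thing is roughly one line, using the question's Uri and ImageView:

Picasso.with(this).load(selectedImageUri).into(receiptPreviewIV);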
If, for whatever reason, you feel that you have to do this yourself, use ContentResolver and its openInputStream() to get an InputStream for the data represented by the Uri. Then, pass that to decodeStream() on BitmapFactory. Do all of that in a background thread (e.g., doInBackground() of an AsyncTask), and then apply the Bitmap to the ImageView on the main application thread (e.g., onPostExecute() of an AsyncTask).
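A minimal sketch of that manual route, assuming it runs inside the Activity and keeping error handling to a bare minimum:

new AsyncTask<Uri, Void, Bitmap>() {
    @Override
    protected Bitmap doInBackground(Uri... uris) {
        try {
            // Resolve the content: Uri to a stream and decode it off the main thread.
            InputStream in = getContentResolver().openInputStream(uris[0]);
            return BitmapFactory.decodeStream(in);
        } catch (FileNotFoundException e) {
            return null;
        }
    }

    @Override
    protected void onPostExecute(Bitmap bitmap) {
        // Back on the main thread: safe to touch the ImageView.
        receiptPreviewIV.setImageBitmap(bitmap);
    }
}.execute(selectedImageUri);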
It looks like, with camera image capture, one can only capture either the thumbnail or the full image in one pass, but not both, because
public void startCamera() {
    ...
    camera.putExtra("output", imageUri);    // (step 1)
    ...

needs to be declared before

    ...
    startActivityForResult(camera, IMAGE_CAPTURE);    // (step 2)
    ...
    Bundle extras = camera.getExtras();
    mImageBitmap = (Bitmap) extras.get("data");
    imageView.setImageBitmap(mImageBitmap);
    ...
But once "onActivityResult" returns, the full image is already saved into imageUri and the buffer cleared. But to capture the thumbnail of an image taken, the code needs to be executed after "startActivityForResult". The problem is the image buffer is cleared once the image is saved in step 2. To capture the image thumbnail, one will need to skip saving the full image in step 1 in order to capture the thumbnail image in step 2.
As an alternative, I can save the full image, reload it into a bitmap, scale it down to thumbnail size, and re-save it, but that seems redundant. Any idea if I can do both in one pass?
Check out MediaStore.Images.Thumbnails, and specifically getThumbnail (near the bottom): http://developer.android.com/reference/android/provider/MediaStore.Images.Thumbnails.html .
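A sketch, assuming the image went through MediaStore so that its row ID can be parsed from the Uri, and reusing imageUri and imageView from the question:

// ContentUris.parseId() works for content: Uris that end in a numeric ID.
long imageId = ContentUris.parseId(imageUri);
Bitmap thumb = MediaStore.Images.Thumbnails.getThumbnail(
        getContentResolver(), imageId, MediaStore.Images.Thumbnails.MINI_KIND, null);
imageView.setImageBitmap(thumb);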
If that doesn't work, yes, you will have to manually re-scale and save the thumbnail yourself.
I'm making a camera app and trying to capture a picture. Since the original data is YUV, I convert it to RGB using this function:
public static void decodeYUV420SP(byte[] rgbBuf, byte[] yuv420sp, int width, int height)
However, the saved photo is completely black; there is no content in it.
I also found the following way:
mBitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
but the app crashed.
Are there any other effective ways to save a photo? Thank you!
An old post, but it speaks of a similar problem that I have so I might as well answer the part I know :)
You're probably doing it wrong. I suggest you use the JPEG callback to store the image:
mCamera.takePicture(null, null, callbackJPEG);
This way you will get the JPEG data into the callback, where you can store it in a file unmodified:
final Camera.PictureCallback callbackJPEG = new Camera.PictureCallback()
{
    @Override
    public void onPictureTaken(byte[] data, Camera camera)
    {
        // Needs <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
        File sdCard = Environment.getExternalStorageDirectory();
        File file = new File(sdCard, "pic.jpg");
        try {
            FileOutputStream fil = new FileOutputStream(file);
            fil.write(data);
            fil.close();
        } catch (IOException ex) {
            Log.e("Camera", "Could not write JPEG", ex);
        }
    }
};
As far as the black picture goes, I have found that placing a simple Thread.sleep(250) between camera.startPreview() and camera.takePicture() takes care of that particular problem on my Galaxy Nexus.
I have no idea why this delay is necessary. Even if I add camera.setOneShotPreviewCallback() and call camera.takePicture() from the callback, the image comes out black if I don't first delay...
Oh, and the delay is not just "some" delay. It has to be some pretty long value. For example, 250ms sometimes works, sometimes not on my phone.
The completely black photo is the result of calling mCamera.takePicture() immediately after mCamera.startPreview(). Android should be given appropriate time to finish its autofocus activity before taking the actual picture. The blackness is the result of erratic exposure caused by interrupting the autofocus while it is still running.
I recommend calling mCamera.autoFocus() right after mCamera.startPreview().
The mCamera.takePicture() should be called in the callback function of the autofocus function call.
This flow ensures that the picture is taken after the autofocus is complete and removes blackness or exposure issues from the image taken.
The delay mentioned in Velis' answer works for some devices because those devices complete autofocus activity. Ensuring proper callback flow removes this arbitrary delay and would work on every device.
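In code, that flow looks roughly like this with the old android.hardware.Camera API, reusing the callbackJPEG from the answer above:

mCamera.startPreview();
mCamera.autoFocus(new Camera.AutoFocusCallback() {
    @Override
    public void onAutoFocus(boolean success, Camera camera) {
        // Take the picture only after focusing has finished.
        camera.takePicture(null, null, callbackJPEG);
    }
});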
I solved this issue by using the following argument:
final CaptureRequest.Builder captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
When I was using TEMPLATE_STILL_CAPTURE instead of TEMPLATE_PREVIEW, the image was captured as a completely black image. Switching to TEMPLATE_PREVIEW worked in my case.