I created a layout where my SurfaceView is resized from its "original size". I did the following:
ViewGroup.LayoutParams params = mSurfaceView.getLayoutParams();
int mPixels=getResources().getDimensionPixelSize(R.dimen.profile_pic_dimension); //200
params.width=mPixels;
params.height=mPixels;
mSurfaceView.setLayoutParams(params);
this.setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
and I obtain a view like this:
http://img412.imageshack.us/img412/9379/layoutp.png
But whenever I take the picture, it doesn't save the way it appears in the view; instead it saves at the normal size, as if the SurfaceView were taking up the whole screen (as if it had never been resized).
I tried using this:
Camera.ShutterCallback mShutterCallback = new Camera.ShutterCallback() {
    @Override
    public void onShutter() {
        Camera.Parameters p = mCamera.getParameters();
        p.setPictureSize(mSurfaceView.getWidth(), mSurfaceView.getHeight());
        mCamera.setParameters(p);
    }
};
but I still didn't get the image size I wanted.
You cannot just supply any width and height when setting the picture (or preview) size. Instead, you have to use one of the entries from the list of supported sizes, which you can retrieve by calling getSupportedPictureSizes() (there's an equivalent for the preview).
So far I haven't come across any devices that support taking square pictures, so after a 4:3, 16:9, 16:10 (or whatever ratio) picture has been taken, you will have to 'cut out' the relevant part yourself and optionally scale it down to a specific dimension. Probably the most straightforward way to do this is to create a new Bitmap as a subset of a source Bitmap (and then optionally scale it down). In order to get the source bitmap, you'll likely need to hook up a PictureCallback to the camera with takePicture(...).
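For example, here is a minimal sketch of that approach (mCamera and desiredSide are placeholders, not code from the question): it picks the supported picture size whose area is closest to the target, then centre-crops and scales the JPEG delivered to onPictureTaken().

private void takeSquarePicture(final int desiredSide) {
    Camera.Parameters params = mCamera.getParameters();
    Camera.Size best = null;
    for (Camera.Size s : params.getSupportedPictureSizes()) {
        // supported size whose area is closest to desiredSide x desiredSide
        if (best == null || Math.abs(s.width * s.height - desiredSide * desiredSide)
                          < Math.abs(best.width * best.height - desiredSide * desiredSide)) {
            best = s;
        }
    }
    params.setPictureSize(best.width, best.height);
    mCamera.setParameters(params);

    mCamera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera camera) {
            Bitmap full = BitmapFactory.decodeByteArray(data, 0, data.length);
            int side = Math.min(full.getWidth(), full.getHeight());
            // cut the centre square out of the full picture...
            Bitmap square = Bitmap.createBitmap(full,
                    (full.getWidth() - side) / 2,
                    (full.getHeight() - side) / 2,
                    side, side);
            // ...and scale it down to the requested edge length
            Bitmap result = Bitmap.createScaledBitmap(square, desiredSide, desiredSide, true);
            // save or display 'result' here
        }
    });
}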
Related
My application needs to capture pictures of a given size (let's say WxH) in portrait orientation mode.
In the general case the WxH size I want is not supported by the camera, so I will need to crop the captured picture to match my specifications.
This apparently simple program is driving me crazy because of the problem of finding a "good" correspondence between preview and picture sizes and formats.
Let me explain:
I need a format (let's give some numbers: 800x600) for the output image, and I have to take pictures in portrait screen orientation. My camera is in landscape mode by default, so it takes pictures with a width much larger than the height. But since I want a portrait preview, I need to rotate the image, and as a consequence I get images with a height much larger than the width (the transpose of the original image, I guess).
In this scenario I need to cut a horizontally extended rectangle out of a bigger vertically extended rectangle, and I would like to do that while keeping an acceptably large preview.
The problem of cropping the output image from the picture does not scare me (for the moment); the main problem is the matching between what the user sees in the preview and what the camera actually captures.
For each possible phone I need to:
- choose a suitable camera picture size with respect to the desired image format
- choose a suitable camera preview size with respect to the picture size and format
- hide the preview parts that will be cropped.
And all of this under the constraints of no distortion and a large preview.
How to do it in general?
What I thought and tried:
The main algorithm steps are:
- get the optimal picture size, given the desired format
- get the optimal preview size, given the picture size
- hide the parts of the preview that will not be captured
- crop the image
Trial 1)
A) I get the optimal picture size by minimizing the area difference (I could also check the aspect-ratio affinity, but it is not very important). (Size is a custom type, different from Camera.Size.)
public Size getOptimalPictureSize(List<Camera.Size> sizes) {
    Size opt = new Size();
    float objf = Float.MAX_VALUE;
    float v;
    for (Camera.Size s : sizes) {
        if (s.height < target_size.width || s.width < target_size.height)
            continue;
        v = (s.height - target_size.width) * s.width + (s.width - target_size.height) * target_size.width;
        if (v < objf) {
            opt.width = s.width;
            opt.height = s.height;
            objf = v;
        }
    }
    return opt;
}
B) I get the optimal preview size by finding the best compromise among the different aspect ratios (with respect to the picture size):
@Override
public Size getOptimalPreviewSize(Size picSize, List<android.hardware.Camera.Size> sizes) {
    Size opt = new Size();
    double objf = Double.MAX_VALUE;
    double aspratio = picSize.getAspectRatio();
    double v;
    for (Camera.Size s : sizes) {
        v = Math.abs(((double) s.width) / ((double) s.height) - aspratio)
                / Math.max(((double) s.width) / ((double) s.height), aspratio);
        if (v < objf) {
            objf = v;
            opt.width = s.width;
            opt.height = s.height;
        }
    }
    return opt;
}
C) hiding methods for displaying only the capturable parts... (discussed later)
Trial 2)
A) I get the picture and preview sizes by minimizing an optimality function that weights, at the same time, the misfit between the camera image aspect ratio and the desired one, and the misfit between the preview and picture aspect ratios.
public void setOptimalCameraSizes(List<Camera.Size> preview_sizes, List<Camera.Size> picture_sizes,
                                  Size preview_out, Size picture_out) {
    double objf = Double.MAX_VALUE;
    double tmp;
    for (Camera.Size pts : picture_sizes) {
        for (Camera.Size pws : preview_sizes) {
            tmp = percv(((double) pws.height) / ((double) pws.width), target_size.getAspectRatio())
                    + percv(((double) pws.width) / ((double) pws.height), ((double) pts.width) / ((double) pts.height));
            if (tmp < objf) {
                preview_out.set(pws.width, pws.height);
                picture_out.set(pts.width, pts.height);
                objf = tmp;
            }
        }
    }
}
where
percv(a,b) = |a-b|/max(|a|,|b|) measures the relative deviation (and thus is dimensionless).
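In code, percv is simply (a small sketch matching the formula above):

public static double percv(double a, double b) {
    // relative deviation |a - b| / max(|a|, |b|), dimensionless
    double denom = Math.max(Math.abs(a), Math.abs(b));
    return denom == 0 ? 0 : Math.abs(a - b) / denom;
}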
C) some hiding methods...
OK, these two size-selection methods are the best I found and they choose good sizes, but they have an inherent problem that comes from the camera's landscape orientation: they can only produce vertical rectangular images, and this implies that when I draw the preview I can get two cases:
1. I set the surface dimensions so as not to distort the preview image -> due to the huge height this results in a very small valid area in which the image is visible (so the user experience suffers)
2. I set the maximum possible width -> I can obtain (depending on the preview aspect ratio) distorted previews, but much bigger than in case 1.
How to avoid these problems??
What I thought was to work on phase C) of the algorithm (the hiding phase), and I tried:
Trial 1: make the camera preview go beyond the screen size. This would allow me to apply an arbitrary zoom on the area of interest and let the screen crop the preview. I tried using a ScrollView but it didn't work, and I don't know why. The layout was simply a root ScrollView containing a FrameLayout with the attached SurfaceView, but the surface always filled the screen, leading to horrible distortions.
Trial 2: capture the camera frames and manipulate them directly by overriding the onPreviewFrame(...) method: I got a mysterious error when locking the canvas (IllegalArgumentException).
How can I solve this?
I've created a custom camera in my app. When I press the capture button it saves the photo to the location I set earlier.
Everything is fine so far. But the problem is that the phone saves the photo at the phone's own resolution: I want 600x600 pixels, but every phone saves the picture at its camera's default size.
How can I solve this?
I use this example for my custom camera: link
You will have to resize the bitmap of your taken picture, like this:
Bitmap b = BitmapFactory.decodeByteArray(imageAsBytes, 0, imageAsBytes.length);
profileImage.setImageBitmap(Bitmap.createScaledBitmap(b, 120, 120, false));
and then delete the previous image that was taken by the camera.
What happens in your case:
1.) A custom camera always returns its default picture size. You can get the supported preview and picture sizes using the code below and choose one depending on your requirements.
Camera.Parameters params = mCamera.getParameters();
List<Camera.Size> previewSizes = params.getSupportedPreviewSizes();
List<Camera.Size> pictureSizes = params.getSupportedPictureSizes();
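For example, a rough sketch that ties these together (mCamera is assumed to be your open Camera instance; 600x600 is the target from your question): pick the smallest supported picture size that still covers 600x600, then scale the decoded result down to exactly 600x600. (Scaling a non-square picture straight to a square will distort it, so you may want to centre-crop first.)

Camera.Parameters params = mCamera.getParameters();
Camera.Size chosen = null;
for (Camera.Size s : params.getSupportedPictureSizes()) {
    // smallest supported size that is at least 600x600 in both dimensions
    if (s.width >= 600 && s.height >= 600
            && (chosen == null || s.width * s.height < chosen.width * chosen.height)) {
        chosen = s;
    }
}
if (chosen != null) {
    params.setPictureSize(chosen.width, chosen.height);
    mCamera.setParameters(params);
}

// later, inside onPictureTaken(byte[] data, Camera camera):
Bitmap b = BitmapFactory.decodeByteArray(data, 0, data.length);
Bitmap result = Bitmap.createScaledBitmap(b, 600, 600, true);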
I have a 1024x1024 pixel image in a jpg file. I am trying to render it onscreen with libgdx such that it fills the whole screen. At this stage I am not concerned with the image preserving its aspect ratio.
In my show() method I parse the jpg and initialize the sprite thus:
mWidth = Gdx.graphics.getWidth();
mHeight = Gdx.graphics.getHeight();
mCamera = new OrthographicCamera(1, mHeight/mWidth);
mBatch = new SpriteBatch();
mTexture = new Texture(Gdx.files.internal("my jpg file"));
mTexture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
TextureRegion region = new TextureRegion(mTexture);
mSprite = new Sprite(region);
mSprite.setSize(0.99f, 0.99f);
mSprite.setOrigin(mSprite.getWidth()/2, mSprite.getHeight()/2);
mSprite.setPosition(-mSprite.getWidth()/2, -mSprite.getHeight()/2);
and in the render() method, I draw the sprite thus
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
mBatch.setProjectionMatrix(mCamera.combined);
mBatch.begin();
mSprite.draw(mBatch);
mBatch.end();
but all that is actually rendered is a blank white screen. What am I doing wrong?
You should use
mCamera = new OrthographicCamera(mWidth, mHeight);
instead of
mCamera = new OrthographicCamera(1, mHeight/mWidth);
in most cases, unless you want to scale things in a different way.
Check if your code has actually found and read the file successfully. If not, check things like full path, file extension, intermediate spaces etc.
In the resize() method, try adding the following:
mBatch.getProjectionMatrix().setToOrtho2D(0, 0, mWidth, mHeight);
If it still isn't working, I'd recommend falling back to the working libgdx logo sprite that is drawn when you create a new project with the setup UI. Change things slowly from there.
For reference, use https://code.google.com/p/libgdx-users/wiki/Sprites
Good luck.
To the best of my knowledge, mSprite.setSize(0.99f, 0.99f); sets the sprite's width and height in world units (which correspond to pixels if your camera uses pixel units), whereas setScale scales the sprite's existing dimensions. Setting the size to something larger, such as 256 x 256, should make your sprite visible, and setting it to the resolution of the screen should make it fill the screen if all goes well. The examples linked by Tanmay Patil are great, so look into them if you're having trouble.
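For example, one way to make the sprite fill the screen is to use a camera in pixel units and size the sprite to the screen dimensions (just a sketch under those assumptions, not your exact setup):

// camera in pixel units, sprite sized to fill the whole screen
float w = Gdx.graphics.getWidth();
float h = Gdx.graphics.getHeight();
OrthographicCamera camera = new OrthographicCamera(w, h);
camera.position.set(w / 2f, h / 2f, 0);
camera.update();

Sprite sprite = new Sprite(new Texture(Gdx.files.internal("my jpg file")));
sprite.setSize(w, h);       // as big as the screen
sprite.setPosition(0, 0);   // bottom-left corner at the origin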
I am currently using an intent to take a picture, like this:
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(file));
intent.putExtra("return-data", true);
startActivityForResult(intent, CAMERA_REQUEST);
But I really need the image size to be as close to a square as possible. After researching, it seems you need to do something like this:
Camera camera = Camera.open();
Parameters params = camera.getParameters();
List<Camera.Size> sizes = params.getSupportedPictureSizes();
// Once you can see the supported sizes, you can set one with the method:
setPictureSize(int width, int height);
My questions are, do these work together or is it an either/or? Which method works best for my needs?
Again, I have a 96px by 96px box for user profile pics. After the user takes the picture, I want the image to fill the entire box without stretching. Is it best to set the size at the point the picture is taken, or to alter the ImageView or bitmap (or ...?) after the fact? OR, should I just let users crop the picture, with cropping-area dimensions that I define?
Edit: See bounty for updated question.
This code allows me to pick an image from the gallery, crop it, and use my aspect ratio. I wrote similar code for using the camera: after the user takes a picture, it immediately launches an activity to crop it.
Intent photoPickerIntent = new Intent(
        Intent.ACTION_PICK,
        android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
photoPickerIntent.setType("image/*");
photoPickerIntent.putExtra("crop", "true");
photoPickerIntent.putExtra("outputX", 150);
photoPickerIntent.putExtra("outputY", 150);
photoPickerIntent.putExtra("aspectX", 1);
photoPickerIntent.putExtra("aspectY", 1);
photoPickerIntent.putExtra("scale", true);
photoPickerIntent.putExtra(MediaStore.EXTRA_OUTPUT, getTempUri());
photoPickerIntent.putExtra("outputFormat", Bitmap.CompressFormat.JPEG.toString());
startActivityForResult(photoPickerIntent, RESULT_LOAD_IMAGE);
You can use Bitmap.createScaledBitmap(Bitmap src, int dstWidth, int dstHeight, boolean filter) to resize a bitmap.
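For example, to fill a square box like your 96x96 ImageView without stretching, you could centre-crop to a square first and then scale (a sketch, assuming you already have the captured Bitmap):

// centre-crop the source to a square, then scale it to the target box size
Bitmap centerCropTo(Bitmap src, int boxSizePx) {
    int side = Math.min(src.getWidth(), src.getHeight());
    Bitmap square = Bitmap.createBitmap(src,
            (src.getWidth() - side) / 2,
            (src.getHeight() - side) / 2,
            side, side);
    return Bitmap.createScaledBitmap(square, boxSizePx, boxSizePx, true);
}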
Do these work together or is it an either/or? Which method works best for my needs?
No, they do not work together. When you use the camera with an Intent, you are asking the system to take a picture for you (i.e. the user gets prompted with the default camera app and takes a picture or chooses one from gallery).
When you use the second approach, you create your own Camera object, which is much more customizable than the first one.
With the first method you will have to scale the image afterwards, while with the second you can take the picture directly at the correct size. The main difference is that the first is controlled by Android and the second by your app directly, so the second method works best for your needs if you don't want to scale afterwards.
If you're going to scale anyway (read below), then it doesn't matter which approach you use to take the picture; in that case, I'd say use the default app.
Is the best way to set size at the point of picture being taken, or alter the imageView or bitmap (or ...?) after the fact? OR, should I just let the users crop the picture and I can define the cropping area dimensions?
That depends on your needs. In the first case your app alters the picture taken, so you choose which portion of the original picture becomes the final 96x96 picture. The problem with this method is that you can accidentally crop/stretch/manipulate the image in a wrong -or at least unexpected- manner for the user (i.e. removing part of their face).
When you use the second method, you are providing the user with the freedom to choose which part of the picture they want.
If you are able to automatically detect the best portion of the original image, you should go with the first method, but as you are actually asking which is the best method, you should really go with the second one, because it provides more flexibility to the end user.
(1) how can it be auto cropped to fit my ImageView (96dp x 96dp),
It seems like you already have a solution for getting the crop correct. If you want to convert density-independent pixels (dp) to the actual pixel unit of the phone, you can use the following helper:
public int dpToPixel(int dp) {
    Resources r = getResources();
    float px = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, dp, r.getDisplayMetrics());
    return Math.round(px);
}
So you can use the following as extras:
photoPickerIntent.putExtra("outputX", dpToPixel(96));
photoPickerIntent.putExtra("outputY", dpToPixel(96));
(2) How can I shrink the picture in size (make a new bitmap?). These need to be loaded quickly in ListViews. So instead of a 1MB pic being uploaded, I need one more like 2,000k.
If you use images that are around 96dp in size, you don't really need to worry about image sizes. Creating and destroying many bitmaps in an activity is usually risky; you can easily run into "Out of Memory" errors. Therefore, it is advisable to use URIs instead. So, save the URI that you are putting in the intent, and build a list of URIs to display in the ListView however you want.
On an ImageView, that is setImageURI(Uri uri).
Hope that answers all your questions.
I'm trying to 'take a photo' of both the camera preview, and an overlayed GLSurfaceView.
I have the camera preview element working, via camera.takePicture() and PictureCallback(), but now need to either include the GLSurfaceView elements, or capture the current screen separately and merge the two bitmaps into one file.
I have tried to grab an image from the surfaceView using the code below, but this just results in a null bitmap.
public Bitmap grabImage() {
    this.setDrawingCacheEnabled(true);
    Bitmap b = null;
    try {
        b = this.getDrawingCache(true);
        if (b == null) {
            b = this.getDrawingCache(true);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    this.setDrawingCacheEnabled(false);
    return b;
}
I would appreciate any thoughts/ snippets on this. Many thanks in advance.
I've done something similar and it was a bit convoluted but not too terrible.
In my case I'm using the camera preview frame, which I decode into a bitmap. Then I get a canvas from that bitmap and pass it to draw() on the views (SurfaceView or otherwise) that I want drawn on top of the picture.
Bitmap bm;              // mutable bitmap decoded from the preview frame
MySurfaceViewImpl sv;   // the view you want drawn on top of it
Canvas c = new Canvas(bm);  // the bitmap must be mutable for this to work
sv.draw(c);
You will need to use your own View implementation to handle the fact that the canvas size is going to change: you'll need to rescale things between the draw() calls that happen in the normal running of your app and the ones you invoke manually, because the canvas derived from the picture is almost certainly going to be a different size from what's being drawn to the screen.
Also, the primary reason I'm using preview frames rather than captured pictures is memory limits. Very few phones support smallish picture sizes, but all support reasonable sizes for preview frames. Getting a full-size camera picture into a bitmap is probably too much memory. On devices with less than a 24MB heap, I'm OK with about a 600 x 480 image and about 4 views drawn on top of that, but it gets tight.
In your case, you'll probably need to scale the bitmap down to be able to pass a canvas from it to a view.
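For example, a rough sketch of that rescaling (previewBitmap and sv are placeholders for your decoded preview frame and your overlay view):

// draw the view onto the frame bitmap, scaling the canvas so the view's
// own coordinate space maps onto the (differently sized) frame
Bitmap frame = previewBitmap.copy(Bitmap.Config.ARGB_8888, true); // must be mutable
Canvas canvas = new Canvas(frame);
canvas.scale((float) frame.getWidth() / sv.getWidth(),
             (float) frame.getHeight() / sv.getHeight());
sv.draw(canvas);
// 'frame' now contains the preview with the view drawn on top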
Good luck!