I intend to create an application that can take photos in the following way:
When the user touches the screen, it starts to take photos
It takes several photos within a few microseconds, each with a different focus
In pseudocode:
Camera camera = getAndroidCamera();
for (int i = 0; i < 10; i++) {
    camera.setFocus(i * 0.1);
    camera.takePhoto(path, pictureName + i);
}
So basically I intend to take photos of the same object with different values of focus.
According to this, it is not possible; only assisted autofocus is viable.
Can you confirm that?
If it is possible, how should I do it? Should I set autofocus to different areas?
Answer -- Android setFocusArea and Auto Focus
All I had to do was cancel the previously requested autofocus. Basically, the correct order of actions is this:
protected void focusOnTouch(MotionEvent event) {
if (camera != null) {
camera.cancelAutoFocus();
Rect focusRect = calculateTapArea(event.getX(), event.getY(), 1f);
Rect meteringRect = calculateTapArea(event.getX(), event.getY(), 1.5f);
Parameters parameters = camera.getParameters();
parameters.setFocusMode(Parameters.FOCUS_MODE_AUTO);
parameters.setFocusAreas(Lists.newArrayList(new Camera.Area(focusRect, 1000)));
if (meteringAreaSupported) {
parameters.setMeteringAreas(Lists.newArrayList(new Camera.Area(meteringRect, 1000)));
}
camera.setParameters(parameters);
camera.autoFocus(this);
}}
..... update
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
...
Parameters p = camera.getParameters();
if (p.getMaxNumMeteringAreas() > 0) {
this.meteringAreaSupported = true;
}
...
}
/**
* Convert touch position x:y to {@link Camera.Area} position -1000:-1000 to 1000:1000.
*/
private Rect calculateTapArea(float x, float y, float coefficient) {
int areaSize = Float.valueOf(focusAreaSize * coefficient).intValue();
int left = clamp((int) x - areaSize / 2, 0, getSurfaceView().getWidth() - areaSize);
int top = clamp((int) y - areaSize / 2, 0, getSurfaceView().getHeight() - areaSize);
RectF rectF = new RectF(left, top, left + areaSize, top + areaSize);
matrix.mapRect(rectF);
return new Rect(Math.round(rectF.left), Math.round(rectF.top), Math.round(rectF.right), Math.round(rectF.bottom));
}
private int clamp(int x, int min, int max) {
if (x > max) {
return max;
}
if (x < min) {
return min;
}
return x;
}
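Since camera.autoFocus(this) is passed this, the enclosing class presumably implements Camera.AutoFocusCallback. For completeness, a minimal sketch of that callback, assuming you want to capture a picture once focus settles; jpegCallback here is a hypothetical Camera.PictureCallback that saves the JPEG bytes, not something from the code above:
@Override
public void onAutoFocus(boolean success, Camera camera) {
    // Focus has settled (successfully or not) for the area set in focusOnTouch().
    if (success) {
        // jpegCallback is a placeholder Camera.PictureCallback that writes the picture to disk.
        camera.takePicture(null, null, jpegCallback);
    }
}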
Related
I have a sprite sheet that has 7 columns. I want to animate (change the column) smoothly. However, if I don't control the sprite speed using update % 10, it animates too fast and the animation is not recognizable. At the moment I switch between sprite columns every 10 frames, which will not be consistent across Android devices with different FPS. Is there a standard way to control sprite animation speed?
private final int BMP_ROWS = 1;
private final int BMP_COLUMNS = 7;
// sprite update logic
public boolean update(float deltaTime) {
delayCounterMs += deltaTime;
animate = true;
if (update % 10 == 0) {
currentFrame = ++currentFrame % BMP_COLUMNS;
int srcX = currentFrame * width;
src = new Rect(srcX, 0, srcX + width, height);
dst = new Rect(x, y, x + width, y + height);
}
update++;
return false;
}
// sprite draw logic
public void draw(Canvas canvas) {
if (animate) {
canvas.drawBitmap(bmp, src, dst, null);
}
}
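For reference, a common way to decouple the animation from the frame rate is to accumulate the elapsed time and advance the column only when a fixed frame duration has passed. A minimal sketch along the lines of the update() above, assuming deltaTime is in milliseconds and FRAME_DURATION_MS is a value you choose (both are assumptions, not from the original code):
private static final float FRAME_DURATION_MS = 100f; // how long each column stays visible
public boolean update(float deltaTime) {
    delayCounterMs += deltaTime;
    // Advance as many frames as the elapsed time covers, independent of the device's FPS.
    while (delayCounterMs >= FRAME_DURATION_MS) {
        delayCounterMs -= FRAME_DURATION_MS;
        currentFrame = (currentFrame + 1) % BMP_COLUMNS;
    }
    int srcX = currentFrame * width;
    src = new Rect(srcX, 0, srcX + width, height);
    dst = new Rect(x, y, x + width, y + height);
    animate = true;
    return false;
}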
I'm writing an app which displays a map. The user can zoom and pan. The map is rotated according to the magnetometer's value (the map is rotated in the opposite direction of the device's rotation).
For scaling I'm using ScaleGestureDetector and passing the scale factor to Matrix.scaleM.
For panning I'm using this code:
GlSurfaceView side:
private void handlePanAndZoom(MotionEvent event) {
int action = MotionEventCompat.getActionMasked(event);
// Get the index of the pointer associated with the action.
int index = MotionEventCompat.getActionIndex(event);
int xPos = (int) MotionEventCompat.getX(event, index);
int yPos = (int) MotionEventCompat.getY(event, index);
mScaleDetector.onTouchEvent(event);
switch (action) {
case MotionEvent.ACTION_DOWN:
mRenderer.handleStartPan(xPos, yPos);
break;
case MotionEvent.ACTION_MOVE:
if (!mScaleDetector.isInProgress()) {
mRenderer.handlePan(xPos, yPos);
}
break;
}
}
Renderer side:
private static final PointF mPanStart = new PointF();
public void handleStartPan(final int x, final int y) {
runOnGlThread(new Runnable() {
@Override
public void run() {
windowToWorld(x, y, mPanStart);
}
});
}
private static final PointF mCurrentPan = new PointF();
public void handlePan(final int x, final int y) {
runOnGlThread(new Runnable() {
@Override
public void run() {
windowToWorld(x, y, mCurrentPan);
float dx = mCurrentPan.x - mPanStart.x;
float dy = mCurrentPan.y - mPanStart.y;
mOffsetX += dx;
mOffsetY += dy;
updateModelMatrix();
mPanStart.set(mCurrentPan);
}
});
}
The windowToWorld function uses gluUnProject and is known to work, since I use it for many other tasks. UpdateModelMatrix:
private void updateModelMatrix() {
Matrix.setIdentityM(mScaleMatrix,0);
Matrix.scaleM(mScaleMatrix, 0, mScale, mScale, mScale);
Matrix.setRotateM(mRotationMatrix, 0, mAngle, 0, 0, 1.0f);
Matrix.setIdentityM(mTranslationMatrix,0);
Matrix.translateM(mTranslationMatrix, 0, mOffsetX, mOffsetY, 0);
// Model = Scale * Rotate * Translate
Matrix.multiplyMM(mIntermediateMatrix, 0, mScaleMatrix, 0, mRotationMatrix, 0);
Matrix.multiplyMM(mModelMatrix, 0, mIntermediateMatrix, 0, mTranslationMatrix, 0);
}
Same mModelMatrix is used in gluUnproject of windowToWorld function for point translation.
So my problem is two-fold:
The panning occurs at half the speed of the finger's movement on the device's screen
At some point, when panning continuously for a few seconds (making circles on the screen, for example), the map starts to 'shake'. The amplitude of this shaking gets bigger and bigger. It looks like some value adds up across handlePan iterations and causes this effect.
Any idea why these happen?
Thank you in advance, Greg.
Well, the problem with my code is this line:
mPanStart.set(mCurrentPan);
Simply because I drag in world coordinates and apply the drag to the offset, the grabbed point stays the same in world space, so mPanStart should not be reset to the current point. This was my bug.
Removing this line fixes everything.
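In other words, the corrected handlePan keeps mPanStart fixed at the point grabbed in handleStartPan and only accumulates the offset. A sketch of the method with the offending line removed:
public void handlePan(final int x, final int y) {
    runOnGlThread(new Runnable() {
        @Override
        public void run() {
            windowToWorld(x, y, mCurrentPan);
            // Offset by how far the current world point is from the original grab point.
            mOffsetX += mCurrentPan.x - mPanStart.x;
            mOffsetY += mCurrentPan.y - mPanStart.y;
            updateModelMatrix();
            // mPanStart is intentionally not updated here.
        }
    });
}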
I have a Canvas that is scaled so everything fits better:
@Override
public void draw(Canvas c){
super.draw(c);
final float scaleFactorX = getWidth()/(WIDTH*1.f);
final float scaleFactorY = getHeight()/(HEIGHT*1.f);
if(c!=null) {
final int savedState = c.save();
c.scale(scaleFactorX, scaleFactorY);
(rendering)
c.restoreToCount(savedState);
}
}
It scales based on these two:
public static final int WIDTH = 856;
public static final int HEIGHT = 1050;
This causes the problem that the coordinates of the MotionEvent that handles touch events are not equal to the coordinates created with the Canvas. That causes problems when I try to check collision between the MotionEvent Rect and the Rect of a class that is based on the rendering scale: the SuperCoin class's X coordinate ends up not equal to the MotionEvent X coordinate.
Usually the MotionEvent's coordinates, both X and Y, are way bigger than the screen's max size (defined by WIDTH and HEIGHT).
@Override
public boolean onTouchEvent(MotionEvent e) {
super.onTouchEvent(e);
switch (MotionEventCompat.getActionMasked(e)) {
case MotionEvent.ACTION_DOWN:
case MotionEvent.ACTION_POINTER_DOWN:
(...)
Rect r = new Rect((int)e.getX(), (int)e.getY(), (int)e.getX() + 3, (int)e.getY() + 3);
if (superCoins.size() != 0) {
    // Use an iterator so coins can be removed while looping without a ConcurrentModificationException.
    Iterator<SuperCoin> it = superCoins.iterator();
    while (it.hasNext()) {
        if (it.next().checkCollision(r)) {
            progress++;
            it.remove();
        }
    }
}
break;
}
return true;
}
And the SuperCoin:
public class SuperCoin {
private Bitmap bm;
public int x, y, orgY;
Clicker cl;
private Long startTime;
Random r = new Random();
public SuperCoin(Bitmap bm, int x, int y, Clicker c){
this.x = x;
this.y = y;
this.orgY = y;
this.bm = bm;
this.cl = c;
startTime = System.nanoTime();
bounds = new Rect(x, y, x + bm.getWidth(), y + bm.getHeight()); // computed once here, so it does not follow the coin as x/y change in render()
}
private Rect bounds;
public boolean checkCollision(Rect second) {
    // Rect.intersects() checks overlap without modifying either rectangle (Rect.intersect() would mutate 'second').
    return Rect.intersects(bounds, second);
}
private int velX = 0, velY = 0;
public void render(Canvas c){
long elapsed = (System.nanoTime()-startTime)/1000000;
if(elapsed>50) {
int cx;
cx = r.nextInt(2);
if(cx == 0){
velX = r.nextInt(4);
}else if(cx == 1){
velX = -r.nextInt(4);
}
velY = r.nextInt(10) + 1;
startTime = System.nanoTime();
}
if(x < 0) velX = +2;
if(x > Clicker.WIDTH) velX = -2;
x += velX;
y -= velY;
c.drawBitmap(bm, x, y, null);
}
}
How can I check collision between the two different when the MotionEvent X coordinate is bigger than the screen's scaled max coordinates?
Honestly, I am not completely sure why the Rect defined in the SuperCoin class is different from the one defined in the onTouchEvent method. I'm guessing it's because the X and Y from the MotionEvent are permanently different from the ones defined by the scaled canvas. The Rect in the SuperCoin class goes by the width and height of the Bitmap it has been passed.
After looking through StackOverflow and Google for the past 2 days for something close to a solution, I came across this: Get Canvas coordinates after scaling up/down or dragging in android, which solved the problem. It was really hard to find because the title of the other question was slightly misleading.
float px = e.getX() / mScaleFactorX;
float py = e.getY() / mScaleFactorY;
int ipy = (int) py;
int ipx = (int) px;
Rect r = new Rect(ipx, ipy, ipx+2, ipy+2);
I added this as an answer and am accepting it so it will no longer be an unanswered question, as it is solved. The code above converts the touch coordinates into the scaled canvas space (and to integers) so they can be used for checking collision between the finger and the object I'm checking against.
Don't scale the canvas directly. Make a Matrix object, scale that once. Then concat that to the canvas. Then you can make an inverted matrix for your touch events.
And just make the invert matrix whenever you change the view matrix:
viewMatrix = new Matrix();
viewMatrix.setScale(scalefactor, scalefactor);
invertMatrix = new Matrix(viewMatrix);
invertMatrix.invert(invertMatrix);
Then apply these two matrices to the relevant events.
@Override
public boolean onTouchEvent(MotionEvent event) {
event.transform(invertMatrix);
And then on the draw events, concat the matrix.
@Override
protected void onDraw(Canvas canvas) {
canvas.concat(viewMatrix);
And you're done. Everything is taken care of for you. Whatever modifications you make to the view matrix will change your viewbox, and your touch events will be translated into that same scene too.
If you want to add panning or rotation, or even skew the view, it's all taken care of. Just apply that to the matrix, get the inverted matrix, and the view will look that way and the touch events will respond as you expect.
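Put together, a minimal sketch of that pattern inside a custom View, assuming a single uniform scale factor (panning or rotation would be applied to viewMatrix the same way, followed by rebuilding invertMatrix):
private final Matrix viewMatrix = new Matrix();
private final Matrix invertMatrix = new Matrix();

private void setViewScale(float scaleFactor) {
    viewMatrix.setScale(scaleFactor, scaleFactor);
    // Rebuild the inverse whenever the view matrix changes.
    viewMatrix.invert(invertMatrix);
}

@Override
public boolean onTouchEvent(MotionEvent event) {
    event.transform(invertMatrix); // map screen coordinates into scene coordinates
    // ...hit-testing against scene-space Rects goes here...
    return true;
}

@Override
protected void onDraw(Canvas canvas) {
    canvas.concat(viewMatrix); // draw the scene through the same transform
    // ...rendering goes here...
}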
I'm using the following code in my resize method to maintain aspect ratio across multiple screen sizes:
@Override
public void resize(int width, int height)
{
float aspectRatio = (float)width / (float)height;
float scale = 1f;
Vector2 crop = new Vector2(0, 0);
if (aspectRatio > globals.ASPECT_RATIO)
{
scale = (float)height / (float)globals.VIRTUAL_HEIGHT;
crop.x = (width - globals.VIRTUAL_WIDTH * scale) / 2f;
}
else if (aspectRatio < globals.ASPECT_RATIO)
{
scale = (float)width / (float)globals.VIRTUAL_WIDTH;
crop.y = (height - globals.VIRTUAL_HEIGHT * scale) / 2f;
}
else
{
scale = (float)width / (float)globals.VIRTUAL_WIDTH;
}
float w = (float)globals.VIRTUAL_WIDTH * scale;
float h = (float)globals.VIRTUAL_HEIGHT * scale;
viewport = new Rectangle(crop.x, crop.y, w, h);
}
VIRTUAL_WIDTH, VIRTUAL_HEIGHT and ASPECT_RATIO are set as follows:
public final int VIRTUAL_WIDTH = 800;
public final int VIRTUAL_HEIGHT = 480;
public final float ASPECT_RATIO = (float)VIRTUAL_WIDTH / (float)VIRTUAL_HEIGHT;
This works perfectly with regards to maintaining the correct ratio when the screen size changes. However, camera.unproject (which I call before all touch events) doesn't work properly: touch positions are not correct when the resize code changes the screen size to anything other than 800x480.
Here's how I setup my camera in my create() method:
camera = new OrthographicCamera(globals.VIRTUAL_WIDTH, globals.VIRTUAL_HEIGHT);
camera.setToOrtho(true, globals.VIRTUAL_WIDTH, globals.VIRTUAL_HEIGHT);
And this is the start of my render() method:
camera.update();
Gdx.graphics.getGL20().glViewport((int)viewport.x, (int)viewport.y,
(int)viewport.width, (int)viewport.height);
Gdx.graphics.getGL20().glClearColor(0, 0, 0, 1);
Gdx.graphics.getGL20().glClear(GL20.GL_COLOR_BUFFER_BIT);
If I ignore the resize code and set the glViewport method call to Gdx.graphics.getWidth() and Gdx.graphics.getHeight(), camera.unproject works, but I obviously lose the maintaining of the aspect ratio. Anyone have any ideas?
Here's how I perform the unproject on my touch events:
private Vector3 touchPos = new Vector3(0, 0, 0);
public boolean touchDown (int x, int y, int pointer, int newParam)
{
touchPos.x = x;
touchPos.y = y;
touchPos.z = 0;
camera.unproject(touchPos);
//touch event handling goes here...
}
UPDATE
I've made a little more progress with this by implementing a Stage object, but it's still not working perfectly.
Here's how I now setup the stage and camera in my create method:
stage = new Stage();
camera = new OrthographicCamera(globals.VIRTUAL_WIDTH, globals.VIRTUAL_HEIGHT);
camera.setToOrtho(true, globals.VIRTUAL_WIDTH, globals.VIRTUAL_HEIGHT);
stage.setCamera(camera);
Here's my resize code:
public void resize(int width, int height)
{
Vector2 size = Scaling.fit.apply(800, 480, width, height);
int viewportX = (int)(width - size.x) / 2;
int viewportY = (int)(height - size.y) / 2;
int viewportWidth = (int)size.x;
int viewportHeight = (int)size.y;
Gdx.gl.glViewport(viewportX, viewportY, viewportWidth, viewportHeight);
stage.setViewport(800, 480, true, viewportX, viewportY, viewportWidth, viewportHeight);
}
Here's the start of my render code:
stage.getCamera().update();
Gdx.graphics.getGL20().glClearColor(0, 0, 0, 1);
Gdx.graphics.getGL20().glClear(GL20.GL_COLOR_BUFFER_BIT);
spriteBatch.setProjectionMatrix(stage.getCamera().combined);
And here's how I handle touch events:
private Vector3 touchPos = new Vector3(0, 0, 0);
public boolean touchDown (int x, int y, int pointer, int newParam)
{
touchPos.x = x;
touchPos.y = y;
touchPos.z = 0;
stage.getCamera().unproject(touchPos);
//touch event handling goes here...
}
This now works when the screen is at default size and when the screen is enlarged. However, when the screen size is reduced the touch points become more and more inaccurate the smaller the screen gets.
Managed to find the answer to this. I simply needed to use the camera.unproject method call which takes Vector3, viewportX, viewportY, viewportWidth and viewportHeight parameters. The required viewport params are those calculated by the resize code shown above.
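For reference, a sketch of that call, assuming the viewportX/viewportY/viewportWidth/viewportHeight values computed in resize() are kept in fields (in the resize code above they are locals):
touchPos.set(x, y, 0);
stage.getCamera().unproject(touchPos, viewportX, viewportY, viewportWidth, viewportHeight);
// touchPos is now in the 800x480 world space, with the letterboxing taken into account.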
To solve your issue there is already an implemented solution inside the new Stage system. Please take a look at the wiki: scene2d #viewport. To have a fixed aspect ratio you do not need to resize and fit things in manually. Here is the example with black bars from the wiki:
public void resize (int width, int height) {
Vector2 size = Scaling.fit.apply(800, 480, width, height);
int viewportX = (int)(width - size.x) / 2;
int viewportY = (int)(height - size.y) / 2;
int viewportWidth = (int)size.x;
int viewportHeight = (int)size.y;
Gdx.gl.glViewport(viewportX, viewportY, viewportWidth, viewportHeight);
stage.setViewport(800, 480, true, viewportX, viewportY, viewportWidth, viewportHeight);
}
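With the Stage viewport set up like this, the Stage can also do the coordinate conversion for touch input itself; a small sketch, assuming you are handling raw screen coordinates rather than letting Stage actors receive the events:
Vector2 stagePos = stage.screenToStageCoordinates(new Vector2(screenX, screenY));
// stagePos is now in the 800x480 stage space, letterboxing included.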
I am developing a game for Android using LibGDX. I have added pinch zoom and pan. My issue is how to keep from going outside of the play area. As it is, you can pan outside of the play area into blackness. When zoomed out fully I know how to deal with it, I just said:
if(camera.zoom == 1.0f) ;
else {
}
But if zoomed in, how do I accomplish this? I know this is not that complicated, I just can't seem to figure it out. Upon creation I set the camera to the middle of the screen. I know how to pan; I am using camera.translate(-input.deltaX, -input.deltaY, 0). I just need to test, before this call, whether the position would end up outside of the play area. When I am zoomed in, how do I test if I am at the edge of the screen?
You can use one of
camera.frustum.boundsInFrustum(BoundingBox box)
camera.frustum.pointInFrustum(Vector3 point)
camera.frustum.sphereInFrustum(Vector3 point, float radius)
to check if a point/box/sphere is within your camera's view.
What I normally do is define 4 boxes around my world where the player should not be allowed to see. If the camera is moved and one of the boxes is in the frustum, I move the camera back to the previous position.
Edit: AAverin has implemented this in code below.
Credit goes to Matsemann for the idea; here is the implementation I used.
Make a custom MyCamera class extending OrthographicCamera and add the following code:
BoundingBox left, right, top, bottom = null;
public void setWorldBounds(int left, int bottom, int width, int height) {
int top = bottom + height;
int right = left + width;
this.left = new BoundingBox(new Vector3(left - 2, 0, 0), new Vector3(left -1, top, 0));
this.right = new BoundingBox(new Vector3(right + 1, 0, 0), new Vector3(right + 2, top, 0));
this.top = new BoundingBox(new Vector3(0, top + 1, 0), new Vector3(right, top + 2, 0));
this.bottom = new BoundingBox(new Vector3(0, bottom - 1, 0), new Vector3(right, bottom - 2, 0));
}
Vector3 lastPosition = new Vector3();
@Override
public void translate(float x, float y) {
lastPosition.set(position.x, position.y, 0);
super.translate(x, y);
}
public void translateSafe(float x, float y) {
translate(x, y);
update();
ensureBounds();
update();
}
public void ensureBounds() {
if (frustum.boundsInFrustum(left) || frustum.boundsInFrustum(right) || frustum.boundsInFrustum(top) || frustum.boundsInFrustum(bottom)) {
position.set(lastPosition);
}
}
Now, in your custom scene or whatever you use (in my case it was a custom Board class) call:
camera.setWorldBounds()
and in your GestureListener.pan method you can call
camera.translateSafe(x, y);
It should keep your camera in bounds.
Here's the code I call after the position of the camera is updated due to panning or zooming in my 2D game using an orthographic camera. It corrects the camera position so that it doesn't show anything outside the borders of the play area.
float camX = camera.position.x;
float camY = camera.position.y;
Vector2 camMin = new Vector2(camera.viewportWidth, camera.viewportHeight);
camMin.scl(camera.zoom/2); //bring to center and scale by the zoom level
Vector2 camMax = new Vector2(borderWidth, borderHeight);
camMax.sub(camMin); //bring to center
//keep camera within borders
camX = Math.min(camMax.x, Math.max(camX, camMin.x));
camY = Math.min(camMax.y, Math.max(camY, camMin.y));
camera.position.set(camX, camY, camera.position.z);
camMin is the lower-left-most position the camera can have without showing anything outside of the play area; it is also the offset from a corner of the camera to its center.
camMax is the opposite: the upper-right-most location the camera can be in.
The key part I'm guessing you're missing is scaling the camera size by the zoom level.
Here's my solution:
float minCameraX = camera.zoom * (camera.viewportWidth / 2);
float maxCameraX = worldSize.x - minCameraX;
float minCameraY = camera.zoom * (camera.viewportHeight / 2);
float maxCameraY = worldSize.y - minCameraY;
camera.position.set(Math.min(maxCameraX, Math.max(targetX, minCameraX)),
Math.min(maxCameraY, Math.max(targetY, minCameraY)),
0);
Where:
targetX and targetY are world coordinates of where your target is.
worldSize is a Vector2 of the size of the world.
I don't have enough reputation to write comments, so I'll point to some previous answers.
AAverin's solution with bounding boxes built from Matsemann's idea isn't good, because it annoyingly slows down when you are near one edge (boundary) and trying to translate diagonally; in that case you are panning out of bounds on one axis but in a proper direction on the other.
I strongly suggest that you try the solution at the bottom of the handleInput method presented at
https://github.com/libgdx/libgdx/wiki/Orthographic-camera
That one works smoothly. Some of the previous answers look like it, but this one uses MathUtils.clamp, which is straightforward and much cleaner.
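For completeness, the clamping at the bottom of that handleInput method boils down to something like the following, assuming a world of worldWidth x worldHeight units (these names are placeholders, not taken from the wiki code):
float effectiveViewportWidth = camera.viewportWidth * camera.zoom;
float effectiveViewportHeight = camera.viewportHeight * camera.zoom;
// Keep the camera center far enough from each edge that nothing outside the world is visible.
camera.position.x = MathUtils.clamp(camera.position.x,
        effectiveViewportWidth / 2f, worldWidth - effectiveViewportWidth / 2f);
camera.position.y = MathUtils.clamp(camera.position.y,
        effectiveViewportHeight / 2f, worldHeight - effectiveViewportHeight / 2f);
camera.update();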
Perfect class for this (partly thanks to AAverin).
This class not only keeps the camera inside the bounds, it also snaps back into the bounds when you zoom.
Call these for setting the bounds and moving the camera:
camera.setWorldBounds()
camera.translateSafe(x, y);
When zooming, call
camera.attemptZoom(newZoom);
And here's the class:
public class CustomCamera extends OrthographicCamera
{
public CustomCamera() {}
public CustomCamera(float viewportWidth, float viewportHeight)
{
super(viewportWidth, viewportHeight);
}
BoundingBox left, right, top, bottom = null;
public void setWorldBounds(int left, int bottom, int width, int height) {
int top = bottom + height;
int right = left + width;
this.left = new BoundingBox(new Vector3(left - 2, 0, 0), new Vector3(left -1, top, 0));
this.right = new BoundingBox(new Vector3(right + 1, 0, 0), new Vector3(right + 2, top, 0));
this.top = new BoundingBox(new Vector3(0, top + 1, 0), new Vector3(right, top + 2, 0));
this.bottom = new BoundingBox(new Vector3(0, bottom - 1, 0), new Vector3(right, bottom - 2, 0));
}
Vector3 lastPosition;
@Override
public void translate(float x, float y) {
lastPosition = new Vector3(position);
super.translate(x, y);
}
public void translateSafe(float x, float y) {
translate(x, y);
update();
ensureBounds();
update();
}
public void ensureBounds()
{
if(isInsideBounds())
{
position.set(lastPosition);
}
}
private boolean isInsideBounds()
{
if(frustum.boundsInFrustum(left) || frustum.boundsInFrustum(right) || frustum.boundsInFrustum(top) || frustum.boundsInFrustum(bottom))
{
return true;
}
return false;
}
public void attemptZoom(float newZoom)
{
this.zoom = newZoom;
this.snapCameraInView();
}
private void snapCameraInView()
{
float halfOfCurrentViewportWidth = ((viewportWidth * zoom) / 2f);
float halfOfCurrentViewportHeight = ((viewportHeight * zoom) / 2f);
// Check the horizontal bounds (x axis).
if(position.x - halfOfCurrentViewportWidth < 0f) //Is going off the left side.
{
//Snap back.
float amountGoneOver = position.x - halfOfCurrentViewportWidth;
position.x += Math.abs(amountGoneOver);
}
else if(position.x + halfOfCurrentViewportWidth > viewportWidth)
{
//Snap back.
float amountGoneOver = (viewportWidth - (position.x + halfOfCurrentViewportWidth));
position.x -= Math.abs(amountGoneOver);
}
// Check the vertical bounds (y axis).
if(position.y + halfOfCurrentViewportHeight > viewportHeight)
{
float amountGoneOver = (position.y + halfOfCurrentViewportHeight) - viewportHeight;
position.y -= Math.abs(amountGoneOver);
}
else if(position.y - halfOfCurrentViewportHeight < 0f)
{
float amountGoneOver = (position.y - halfOfCurrentViewportHeight);
position.y += Math.abs(amountGoneOver);
}
}
}
The CustomCamera class given doesn't work very well. I used it to map a pinch gesture to zoomSafe and the camera would bounce/flash from left to right constantly when on the edge of the bounds. The camera also doesn't work properly with panning: if you try to pan along the edge of the bounds it doesn't pan anywhere, as if the edges are "sticky". This is because it just translates back to the last position instead of adjusting only the coordinate that is outside the bounds.
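One way to address that last point is to clamp each axis independently instead of reverting to lastPosition, so a diagonal pan along an edge keeps sliding along it. A rough sketch of an alternative ensureBounds(), assuming the world bounds are stored as worldLeft/worldBottom/worldWidth/worldHeight fields (names not taken from the class above):
public void ensureBounds() {
    float halfW = (viewportWidth * zoom) / 2f;
    float halfH = (viewportHeight * zoom) / 2f;
    // Clamp each coordinate on its own, so being blocked on X doesn't also block Y.
    position.x = MathUtils.clamp(position.x, worldLeft + halfW, worldLeft + worldWidth - halfW);
    position.y = MathUtils.clamp(position.y, worldBottom + halfH, worldBottom + worldHeight - halfH);
}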