In my app I have a scanner view that must scan two barcodes with the zxing library. The top barcode is in PDF417 format, the bottom one in DATAMATRIX. I used https://github.com/dm77/barcodescanner as a base, but with one main difference: I have to use image coordinates as the scan area. The main algorithm:
1. Depending on the current step, the scan activity passes the current scan area to the scanner view in screen coordinates. These coordinates are calculated as follows:
public static Rect getScanRectangle(View view) {
    int[] location = new int[2];
    view.measure(0, 0);
    view.getLocationOnScreen(location);
    return new Rect(location[0], location[1],
            location[0] + view.getMeasuredWidth(),
            location[1] + view.getMeasuredHeight());
}
2. In the scanner view, in the onPreviewFrame method, the camera preview size is read from the camera parameters. When I converted the byte data from the camera into an in-memory bitmap, I saw that it was rotated 90 degrees clockwise and that the camera resolution did not match the screen resolution. So I have to map screen coordinates into camera (or surface view) coordinates:
private Rect normalizeScreenCoordinates(Rect input, int cameraWidth, int cameraHeight) {
    if (screenSize == null) {
        screenSize = new Point();
        Display display = activity.getWindowManager().getDefaultDisplay();
        display.getSize(screenSize);
    }

    int height = screenSize.x;
    int width = screenSize.y;
    float widthCoef = (float) cameraWidth / width;
    float heightCoef = (float) cameraHeight / height;

    return new Rect((int) (input.top * widthCoef), (int) (input.left * heightCoef),
            (int) (input.bottom * widthCoef), (int) (input.right * heightCoef));
}
After that, the translated coordinates are passed into zxing, and on most test devices everything works fine. But not on the Nexus 5X. First, there is a serious gap between the display size and the activity.getWindow().getDecorView() size. Maybe this is related to the status bar, which is translucent, and for some reason its height may not be taken into account. But even after I added a vertical offset, something is still wrong with the scan area. What may be the reason for that error?
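For reference, this is roughly how I measure the gap mentioned above (a sketch of my own debugging code, not the accepted fix):

// Compare the physical display height with the decor view height; on the
// Nexus 5X with a translucent status bar these differ noticeably.
Point screenSize = new Point();
activity.getWindowManager().getDefaultDisplay().getSize(screenSize);
int decorHeight = activity.getWindow().getDecorView().getHeight();
int verticalGap = screenSize.y - decorHeight; // candidate vertical offset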
I am investigating Augmented Reality on Android.
I am using ARCore and Sceneform within an Android application.
I have tried out the sample projects and now would like to develop my own application.
One effect I would like to achieve is to combine/overlay an image (say .jpeg or .png) with a live feed from the device's onboard camera.
The image will have a transparent background that allows the user to see the live feed and the image simultaneously.
However, I do not want the overlaid image to be a fixed/static watermark. When the user zooms in, zooms out or pans, the overlaid image must also zoom in, zoom out and pan, etc.
I do not wish the overlaid image to become 3D or anything of that nature.
Is this effect possible with Sceneform, or will I need to use other third-party libraries and/or tools to achieve the desired result?
UPDATE
The user is drawing on a blank sheet of white paper. The sheet of paper is orientated so that the user is comfortably drawing (either left or right handed). The user is free to move the sheet of paper while they complete their image.
An Android device is held above the sheet of paper filming the user drawing their selected image.
The live camera feed is being cast to a large TV or monitor screen.
To aid the user they have selected a static image to "trace" or "Copy".
This image is chosen on the Android device and is being combined with the live camera stream within the Android application.
The user can zoom in and out on their drawing, and the combined live stream and selected static image will also zoom in and out. This enables the user to make an accurate copy of the selected static image by drawing it freehand.
When the user looks directly at the sheet of paper, they only see their drawing.
When the user views the cast live stream of them drawing on the TV or monitor they see their drawing and the chosen static image superimposed. The user can control the transparency of the static image to assist them in making an accurate copy of it.
I think what you are looking for is to use AR to display an image so that the image stays in place, for example over a sheet of paper in order to act as a guide for drawing a copy of the image on the paper.
There are 2 parts to this. First is to locate the sheet of paper, the second is to place the image over the paper and keep it there as the phone moves around.
Locating the sheet of paper can be done just by detecting the plane with the paper (having some contrast, or pattern or something vs. a plain white sheet of paper will help), then tap on where the center of the page should be. This is done in the HelloSceneform sample.
If you want a more accurate bounding of the paper, you could tap the 4 corners of the paper and then create anchors there. To do this, register a plane-tapped listener in onCreate():
arFragment.setOnTapArPlaneListener(this::onPlaneTapped);
Then in onPlaneTapped, create the 4 anchorNodes. Once you have 4, initialize the drawing to be displayed.
private void onPlaneTapped(HitResult hitResult, Plane plane, MotionEvent event) {
    if (cornerAnchors.size() != 4) {
        AnchorNode corner = createCornerNode(hitResult.createAnchor());
        arFragment.getArSceneView().getScene().addChild(corner);
        cornerAnchors.add(corner);
    }

    if (cornerAnchors.size() == 4 && drawingNode == null) {
        initializeDrawing();
    }
}
To initialize the drawing, create a Sceneform Texture from the bitmap or drawable. This can be from a resource or a file URL. You want the texture to show the whole image, and scale as the model holding it is resized.
private void initializeDrawing() {
    Texture.Sampler sampler = Texture.Sampler.builder()
            .setWrapMode(Texture.Sampler.WrapMode.CLAMP_TO_EDGE)
            .setMagFilter(Texture.Sampler.MagFilter.NEAREST)
            .setMinFilter(Texture.Sampler.MinFilter.LINEAR_MIPMAP_LINEAR)
            .build();

    Texture.builder()
            .setSource(this, R.drawable.logo_google_developers)
            .setSampler(sampler)
            .build()
            .thenAccept(texture -> {
                MaterialFactory.makeTransparentWithTexture(this, texture)
                        .thenAccept(this::buildDrawingRenderable);
            });
}
The model to hold the texture is just a flat quad sized to the smallest dimension between the corners. This is the same logic as laying out a quad using OpenGL.
private void buildDrawingRenderable(Material material) {
    Integer[] indices = {
            0, 1, 3, 3, 1, 2
    };

    // Find the extents of the tapped corners.
    float min_x = Float.MAX_VALUE;
    float max_x = Float.MIN_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = Float.MIN_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }

    float width = Math.abs(max_x - min_x);
    float height = Math.abs(max_z - min_z);
    float extent = Math.min(width / 2, height / 2);

    Vertex[] vertices = {
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 1)) // top left
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 1)) // top right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 0)) // bottom right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 0)) // bottom left
                    .build()
    };

    RenderableDefinition.Submesh[] submeshes = {
            RenderableDefinition.Submesh.builder()
                    .setMaterial(material)
                    .setTriangleIndices(Arrays.asList(indices))
                    .build()
    };

    RenderableDefinition def = RenderableDefinition.builder()
            .setSubmeshes(Arrays.asList(submeshes))
            .setVertices(Arrays.asList(vertices))
            .build();

    ModelRenderable.builder()
            .setSource(def)
            .setRegistryId("drawing")
            .build()
            .thenAccept(this::positionDrawing);
}
The last part is to position the quad in the center of the corners, and create a Transformable node so the image can be nudged into position, rotated, or scaled to be the perfect size.
private void positionDrawing(ModelRenderable drawingRenderable) {
    // Calculate the center of the corners.
    float min_x = Float.MAX_VALUE;
    float max_x = Float.MIN_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = Float.MIN_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }

    Vector3 center = new Vector3((min_x + max_x) / 2f,
            cornerAnchors.get(0).getWorldPosition().y, (min_z + max_z) / 2f);

    Anchor centerAnchor = null;
    Vector3 screenPt = arFragment.getArSceneView().getScene().getCamera().worldToScreenPoint(center);
    List<HitResult> hits = arFragment.getArSceneView().getArFrame().hitTest(screenPt.x, screenPt.y);
    for (HitResult hit : hits) {
        if (hit.getTrackable() instanceof Plane) {
            centerAnchor = hit.createAnchor();
            break;
        }
    }

    AnchorNode centerNode = new AnchorNode(centerAnchor);
    centerNode.setParent(arFragment.getArSceneView().getScene());

    drawingNode = new TransformableNode(arFragment.getTransformationSystem());
    drawingNode.setParent(centerNode);
    drawingNode.setRenderable(drawingRenderable);
}
The intended AR reference image can be scaled using AR objects as anchor points to size the template for the user.
More complex AR images will not work easily, since the AR image is overlaid on top of the user's tracing and this will obstruct their view of the pen/pencil tip.
My solution is to chroma-key the white paper. This replaces the white paper with the chosen image or live feed. Moving the paper around as you specified would be an issue, unless you have a means of tracking the paper's position.
As you can see in the example linked below, AR objects are in front, while the chroma key is the background. The tracing surface (the paper) would be in the center.
YouTube - AR tracked environment
RJ
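A minimal sketch of the chroma-key idea, assuming the OpenCV Android SDK and that the camera frame and the reference image are RGBA Mats of the same size (the thresholds and the method name are illustrative):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

// Replace near-white pixels of the camera frame with the chosen reference image.
public static void keyOutWhitePaper(Mat cameraFrame, Mat referenceImage) {
    Mat mask = new Mat();
    // Pixels close to white are treated as the blank paper to be replaced.
    Core.inRange(cameraFrame,
            new Scalar(200, 200, 200, 0),
            new Scalar(255, 255, 255, 255),
            mask);
    // Copy the reference image into the frame only where the mask is set.
    referenceImage.copyTo(cameraFrame, mask);
    mask.release();
}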
I have two bitmaps that I draw onto the center of a canvas:
One is only a background: a spirit level seen from above, which doesn't move. The second one is a bitmap that looks like an air bubble. When the user tilts the phone, the sensors read the tilt and the air bubble moves along the x-axis according to the sensor values. However, I need to make sure that the air bubble doesn't move too far, e.g. out of the background bitmap.
So I tried to find, by trial and error, which x coordinate the bubble can travel to before I have to stop it (set xPos = xPos - 1).
This works fine on my device.
To clarify: On my phone, the air bubble could move to the coordinate x = 50 from the middle of the screen. This is the point where the bitmap is at the very left of the background spirit level.
On a larger phone, the position x = 50 is too far to the left, so it looks like the air bubble has travelled out of the spirit level.
Now I've tried following:
I calculated the area, as a percentage, in which the air bubble can move. Let's say that is 70% of the entire width of the bitmap. So I tried to calculate the two x boundary values:
leftBoundary = XmiddlePoint - (backgroundBitmap.getWidth() * 0.35);
rightBoundary = XmiddlePoint + (backgroundBitmap.getWidth() * 0.35);
...which doesn't work when testing with different screen sizes :(
Is it possible to compensate for different screen sizes and densities using absolute coordinates or do I have to rethink my idea?
If you need any information that I forgot about, please let me know. If this question has already been answered, I would appreciate a link :) Thanks in advance!
Edit:
I load my bitmaps like this:
public class SimulationView extends View implements SensorEventListener {

    private static final int BITMAP_WIDTH = 1898;
    private static final int BITMAP_HEIGHT = 438;

    private Bitmap backgroundBitmap;

    public SimulationView(Context context) {
        super(context);
        Bitmap map = BitmapFactory.decodeResource(getResources(), R.mipmap.backgroundImage);
        backgroundBitmap = Bitmap.createScaledBitmap(map, BITMAP_WIDTH, BITMAP_HEIGHT, true);
    }
and draw it like this:
@Override
protected void onDraw(Canvas canvas) {
    canvas.drawBitmap(backgroundBitmap, XmiddlePoint - BITMAP_WIDTH / 2,
            YmiddlePoint - BITMAP_HEIGHT / 2, null);
}
backgroundBitmap.getWidth() and getHeight() print out the correct sizes.
Calculating as described above returns the following boundaries:
DisplayMetrics displayMetrics = new DisplayMetrics();
((Activity) getContext()).getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
int width = displayMetrics.widthPixels;
//which prints out width = 2392
xMiddlePoint = width / 2;
// = 1196
leftBoundary = xMiddlePoint - (backgroundBitmap.getWidth() * 0.35);
// = 531.7
However, by trial and error, the correct x coordinate seems to be around 700.
I've come across a great explanation on how to fix my issue here.
As user AgentKnopf explained, you have to scale coordinates or bitmaps like this:
X = (targetScreenWidth / defaultScreenWidth) * defaultXCoordinate
Y = (targetScreenHeight / defaultScreenHeight) * defaultYCoordinate
which, in my case, translates to:
int defaultScreenWidth = 1920;
int defaultXCoordinate = 333;

DisplayMetrics displayMetrics = new DisplayMetrics();
((Activity) getContext()).getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
displayWidth = displayMetrics.widthPixels;

// Cast to float, otherwise displayWidth / defaultScreenWidth is integer division.
leftBoundary = ((float) displayWidth / defaultScreenWidth) * defaultXCoordinate;
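Building on that, a small helper of my own (the 1920 px reference width and the method name are just examples) that keeps the float division explicit so the ratio is never truncated:

// Map an x coordinate defined on a 1920-px-wide reference screen to the current display.
private float scaleX(float referenceX) {
    int displayWidth = getResources().getDisplayMetrics().widthPixels;
    return referenceX * displayWidth / 1920f; // float division keeps the ratio exact
}

// Usage:
// leftBoundary = scaleX(defaultXCoordinate);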
I am creating a document-scanning application in Android, using OpenCV and the Scan library for cropping. I have drawn a rectangle using drawRect in the camera view; now I need to capture only the image inside that rectangular portion and display it in another activity.
The image in question:
For me, I would take the whole image, then crop it.
Your question: "How do I know which part of the image is inside the rectangular portion, so that I can pass only that part?" My answer: you can use the relative scaling between the whole image dimensions and the camera preview dimensions on screen. Then you will know which rectangular part to crop.
This is a code example.
Note that you still need to fill in some code to save the captured data as a JPG and to save the result after cropping.
// 1. Save your bitmap to file
public class MyPictureCallback implements Camera.PictureCallback {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        try {
            // mPictureFile is a file to save the captured image
            FileOutputStream fos = new FileOutputStream(mPictureFile);
            fos.write(data);
            fos.close();
        } catch (FileNotFoundException e) {
            Log.d(TAG, "File not found: " + e.getMessage());
        } catch (IOException e) {
            Log.d(TAG, "Error writing file: " + e.getMessage());
        }
    }
}
// Somewhere in your code
// 2.1 Load bitmap from your .jpg file
Bitmap bitmap = BitmapFactory.decodeFile(path+"/mPictureFile_name.jpg");
// 2.2 Rotate the bitmap to match the display orientation, if needed.
... Add some bitmap rotate code (a sketch is given after this listing)
// 2.3 Size of rotated bitmap
int bitWidth = bitmap.getWidth();
int bitHeight = bitmap.getHeight();
// 3. Size of camera preview on screen
int preWidth = preview.getWidth();
int preHeight = preview.getHeight();
// 4. Scale it.
// Assume you drew the Rect with: canvas.drawRect(60, 50, 210, 297, paint);
int startx = 60 * bitWidth / preWidth;
int starty = 50 * bitHeight / preHeight;
int endx = 210 * bitWidth / preWidth;
int endy = 297 * bitHeight / preHeight;
// 5. Crop image (createBitmap takes x, y, width and height, not end coordinates)
Bitmap blueArea = Bitmap.createBitmap(bitmap, startx, starty, endx - startx, endy - starty);
// 6. Save Crop bitmap to file
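For completeness, rough sketches of the two steps left open above (2.2 and 6); the 90-degree rotation and the file name are assumptions you should adapt to your device orientation and storage path:

// 2.2 (sketch) Rotate the decoded bitmap so it matches the display orientation.
Matrix rotationMatrix = new Matrix();
rotationMatrix.postRotate(90); // assumed: portrait display, landscape sensor
Bitmap rotated = Bitmap.createBitmap(bitmap, 0, 0,
        bitmap.getWidth(), bitmap.getHeight(), rotationMatrix, true);

// 6. (sketch) Save the cropped bitmap as a JPEG.
try {
    FileOutputStream out = new FileOutputStream(path + "/cropped.jpg");
    blueArea.compress(Bitmap.CompressFormat.JPEG, 90, out);
    out.close();
} catch (IOException e) {
    Log.d(TAG, "Could not save cropped bitmap: " + e.getMessage());
}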
This will work for you: How to programmatically take a screenshot in Android?
Make sure that the view (v1 in the code sample's case) passed to Bitmap.createBitmap(v1.getDrawingCache()) is a ViewGroup that contains the image you want to send to the second activity.
Edit:
I don't think your intended flow is feasible. As far as I know, camera intents don't take arguments that allow drawing such a rectangle (I could be wrong, though).
Instead, I suggest you take a picture and then edit it with a library such as Android-Image-Cropper (https://github.com/ArthurHub/Android-Image-Cropper), or programmatically as suggested above.
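If I remember that library's API correctly, the flow is roughly the following (treat it as a sketch and check the library's README for the exact calls):

// Launch the crop UI for a captured image URI.
CropImage.activity(imageUri)
        .setGuidelines(CropImageView.Guidelines.ON)
        .start(this);

// Receive the cropped result and forward it to the second activity.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == CropImage.CROP_IMAGE_ACTIVITY_REQUEST_CODE && resultCode == RESULT_OK) {
        CropImage.ActivityResult result = CropImage.getActivityResult(data);
        Uri croppedUri = result.getUri();
        // pass croppedUri to the next activity via an Intent extra
    }
}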
I am trying to make a simple face detection app consisting of a SurfaceView (essentially a camera preview) and a custom View (for drawing purposes) stacked on top. The two views are essentially the same size, stacked on one another in a RelativeLayout. When a person's face is detected, I want to draw a white rectangle on the custom View around their face.
The Camera.Face.rect object returns the face bound coordinates using the coordinate system explained here and the custom View uses the coordinate system described in the answer to this question. Some sort of conversion is needed before I can use it to draw on the canvas.
Therefore, I wrote an additional method, ScaleFacetoView(), in my custom view class (below), and I redraw the custom view every time a face is detected by overriding the onFaceDetection() method. The result is that the white box appears correctly when a face is in the center. The problem I noticed is that it does not correctly track my face when it moves to other parts of the screen.
Namely, if I move my face:
Up - the box goes left
Down - the box goes right
Right - the box goes upwards
Left - the box goes down
I seem to have mapped the values incorrectly when scaling the coordinates. The Android docs provide a method of converting using a matrix, but it is rather confusing and I have no idea what it is doing. Can anyone provide some code showing the correct way to convert Camera.Face coordinates to View coordinates?
Here's the code for my ScaleFacetoView() method.
public void ScaleFacetoView(Face[] data, int width, int height) {
    // Extract data from the face object and account for the 1000-value offset.
    mLeft = data[0].rect.left + 1000;
    mRight = data[0].rect.right + 1000;
    mTop = data[0].rect.top + 1000;
    mBottom = data[0].rect.bottom + 1000;

    // Compute the scale factors.
    float xScaleFactor = 1;
    float yScaleFactor = 1;
    if (height > width) {
        xScaleFactor = (float) width / 2000.0f;
        yScaleFactor = (float) height / 2000.0f;
    } else if (height < width) {
        xScaleFactor = (float) height / 2000.0f;
        yScaleFactor = (float) width / 2000.0f;
    }

    // Scale the face parameters.
    mLeft = mLeft * xScaleFactor;     // X-coordinate
    mRight = mRight * xScaleFactor;   // X-coordinate
    mTop = mTop * yScaleFactor;       // Y-coordinate
    mBottom = mBottom * yScaleFactor; // Y-coordinate
}
As mentioned above, I call the custom view like so:
@Override
public void onFaceDetection(Face[] arg0, Camera arg1) {
    if (arg0.length == 1) {
        // Get the size of the parent view holding the preview
        View parent = (View) mRectangleView.getParent();
        int width = parent.getWidth();
        int height = parent.getHeight();

        // Modify the xy values in the view object and redraw it
        mRectangleView.ScaleFacetoView(arg0, width, height);
        mRectangleView.setInvalidate();
        //Toast.makeText( cc ,"Redrew the face.", Toast.LENGTH_SHORT).show();
        mRectangleView.setVisibility(View.VISIBLE);
        //rest of code
Using the explanation Kenny gave, I managed to do the following.
This example works with the front-facing camera.
RectF rectF = new RectF(face.rect);
Matrix matrix = new Matrix();
matrix.setScale(1, 1);
matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);
matrix.mapRect(rectF);
The rectangle returned by the matrix has all the right coordinates to draw onto the canvas.
If you are using the back camera, I think it is just a matter of changing the scale to:
matrix.setScale(-1, 1);
But I haven't tried that.
The Camera.Face class returns the face bound coordinates using the image frame the phone would save to its internal storage, rather than the image displayed in the camera preview. In my case, the images were saved in a different orientation from the preview, resulting in an incorrect mapping. I had to manually account for the discrepancy by taking the coordinates, rotating them 90 degrees counter-clockwise, and flipping them on the y-axis before scaling them to the canvas used for the custom view.
EDIT:
It would also appear that you can't change the way the face bound coordinates are returned by modifying the camera capture orientation using the Camera.Parameters.setRotation(int) method either.
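For reference, the transform described above can be written with the Matrix approach from the Android Camera.Face documentation, which folds the mirroring, the display rotation and the scaling into a single transform (here displayOrientation is the value passed to Camera.setDisplayOrientation, and mirror is true for the front-facing camera):

RectF faceRect = new RectF(face.rect);
Matrix matrix = new Matrix();
// Mirror horizontally for the front-facing camera.
matrix.setScale(mirror ? -1 : 1, 1);
// Compensate for the preview's display rotation.
matrix.postRotate(displayOrientation);
// Driver coordinates range from (-1000, -1000) to (1000, 1000);
// view coordinates range from (0, 0) to (width, height).
matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);
matrix.mapRect(faceRect);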
I'm trying to build something like a compass: I pass it longitude/latitude values so it points to a specific location. My code can draw an arrow while moving the phone (using GPS) to determine the location.
I want to use an image instead of drawing the arrow myself:
public void draw(Canvas canvas) {
    double angle = calculateAngle(currentLongitude, currentLatitude, targetLongitude, targetLatitude);
    // Correction
    angle -= 90;
    // Correction for azimuth
    angle -= azimuth;
    if ((getContext() instanceof Activity)
            && ((Activity) getContext()).getWindowManager().getDefaultDisplay().getOrientation()
                    == Configuration.ORIENTATION_PORTRAIT) {
        angle -= 90;
    }
    while (angle < 0) {
        angle += 360;
    }

    Rect rect = canvas.getClipBounds();
    int height = rect.bottom - rect.top;
    int width = rect.right - rect.left;
    int left = rect.left;
    int top = rect.top;
}
You need to do two things:
Rotate the pointer image
Draw the resulting bitmap to screen.
A few tips: it might be faster to pre-rotate the image when your program starts up rather than rotating it every time you draw; however, that would also take more memory. A sketch of both steps follows below.
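A minimal sketch of both steps, assuming a pointerBitmap field loaded elsewhere (for example with BitmapFactory.decodeResource) and the angle, left, top, width and height values computed in the question's draw() method:

// Rotate the pointer around its own centre...
Matrix m = new Matrix();
m.postRotate((float) angle, pointerBitmap.getWidth() / 2f, pointerBitmap.getHeight() / 2f);
// ...then move it to the middle of the view's clip bounds.
m.postTranslate(left + (width - pointerBitmap.getWidth()) / 2f,
        top + (height - pointerBitmap.getHeight()) / 2f);
canvas.drawBitmap(pointerBitmap, m, null);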