Line width limitation in OpenGL - Android

I am trying to build an Android application that displays geographic data (like Google Maps) with OpenGL, and it seems that OpenGL has a limitation on line width when drawing lines:
gl.glLineWidth(10); //or gl.glLineWidth(20);
......
gl.glDrawArrays(GL10.GL_LINES, 0, 2);
And this is what I got:
It seems that this is the maximum width of a line rendered by OpenGL.
However, when I look at Google Maps, I can see that it renders much wider lines for roads, like this:
What is the problem?
BTW, I wonder whether it would be advisable to use a game engine in my application?

Yes, OpenGL implementations do have a limit on line width, and it varies by implementation. There are a few options, but the easiest is to just draw a rectangle that is as tall as you need and extends from point A to point B. If you need a 50-point line width, for example, you can get the 4 corners of the rectangle by finding the vector from A to B and projecting + and - 25 points along the normal to that vector at the end points.
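The corner math described above can be sketched in plain Java (class and method names here are illustrative, not from any particular library):

```java
public class ThickLine {
    /** Returns the 4 corners (x0,y0, x1,y1, x2,y2, x3,y3) of a quad
     *  running from A to B with the given total width. */
    static float[] quadCorners(float ax, float ay, float bx, float by, float width) {
        float dx = bx - ax, dy = by - ay;
        float len = (float) Math.sqrt(dx * dx + dy * dy);
        // Unit normal, perpendicular to the A->B direction.
        float nx = -dy / len, ny = dx / len;
        float h = width / 2f; // offset each endpoint by half the width
        return new float[] {
            ax + nx * h, ay + ny * h,
            ax - nx * h, ay - ny * h,
            bx - nx * h, by - ny * h,
            bx + nx * h, by + ny * h,
        };
    }

    public static void main(String[] args) {
        // Horizontal segment with width 50: corners offset +/-25 in y.
        float[] q = quadCorners(0, 0, 100, 0, 50);
        System.out.println(java.util.Arrays.toString(q));
        // prints [0.0, 25.0, 0.0, -25.0, 100.0, -25.0, 100.0, 25.0]
    }
}
```

The 8 floats can then go straight into a vertex buffer and be drawn with GL_TRIANGLE_FAN.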

Related

OpenCV: Finding the pixel width from squares of known size

In OpenCV I use the camera to capture a scene containing two squares a and b, both at the same distance from the camera, whose known real sizes are, say, 10cm and 30cm respectively. I find the pixel widths of each square, which, let's say, are 25 and 40 pixels (to get the pixel width, OpenCV detects the squares as cv::Rect objects and I read their width field).
Now I remove square a from the scene and change the distance from the camera to square b. The program gets the width of square b now, which let's say is 80. Is there an equation, using the configuration of the camera (resolution, dpi?) which I can use to work out what the corresponding pixel width of square a would be if it were placed back in the scene at the same distance as square b?
The math you need for your problem can be found in chapter 9 of "Multiple View Geometry in Computer Vision", which happens to be freely available online: https://www.robots.ox.ac.uk/~vgg/hzbook/hzbook2/HZepipolar.pdf.
The short answer to your problem is:
No, not in this exact form. Given that you are working in a 3D world, you have one degree of freedom left. As a result you need more information in order to eliminate this degree of freedom (e.g. by knowing the depth and/or the relation of the two squares with respect to each other, the movement of the camera, ...). This mainly depends on your specific situation. Anyhow, reading and understanding chapter 9 of the book should help you out here.
PS: to me it seems like your problem fits into the broader category of "baseline matching" problems. Reading around about this, in addition to epipolar geometry and the fundamental matrix, might help you out.
Since you write of "squares" with just a "width" in the image (as opposed to "trapezoids" with some wonky vertex coordinates) I assume that you are considering an ideal pinhole camera and ignoring any perspective distortion/foreshortening - i.e. there is no lens distortion and your planar objects are exactly parallel to the image/sensor plane.
Then it is a very simple 2D projective geometry problem, and no separate knowledge of the camera geometry is needed. Just write down the projection equations in the first situation: you have 4 unknowns (the camera focal length, the common depth of the squares, and the horizontal positions of, say, their left sides) and 4 equations (the projections of the left and right sides of each square). Solve the system and keep the focal length and the relative distance between the squares. Do the same in the second image, but now with known focal length, and compute the new depth and horizontal location of square b. Then add the previously computed relative distance to find where square a would be.
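Under that pinhole assumption the arithmetic reduces to proportionality: pixel width = focal length × real width / depth, so when square b's pixel width doubles from 40 to 80, its depth has halved, and any width measured at that depth doubles too. A minimal sketch with the question's numbers (illustrative only, and it assumes square a would sit at the same new depth as square b):

```java
public class PinholeWidths {
    /** Predicts square a's new pixel width after the depth change,
     *  assuming an ideal pinhole camera: width_px = f * width_real / depth. */
    static double predictedWidthA(double widthA1, double widthB1, double widthB2) {
        // Depth scale factor from square b: depth2 = depth1 * widthB1 / widthB2.
        // Pixel widths scale inversely with depth, so a scales by widthB2 / widthB1.
        return widthA1 * (widthB2 / widthB1);
    }

    public static void main(String[] args) {
        // Question's numbers: a was 25 px; b went from 40 px to 80 px.
        System.out.println(predictedWidthA(25, 40, 80)); // prints 50.0
    }
}
```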
In order to understand the transformations performed by the camera to project the 3D world in the 2D image you need to know its calibration parameters. These are basically divided into two sets:
Intrinsic parameters: these are fixed parameters that are specific to each camera. They are normally represented by a matrix called K.
Extrinsic parameters: these depend on the camera's position in the 3D world. They are normally represented by two matrices, R and T, where the first represents the rotation and the second the translation.
In order to calibrate a camera you need some pattern (basically a set of 3D points whose coordinates are known). There are several examples of this in the OpenCV library, which provides support for performing camera calibration:
http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
Once you have your camera calibrated you can transform from 3D to 2D easily by the following equation:
P_image = K · [R|T] · P_3D
So it will not only depend on the position of the camera; it depends on all the calibration parameters. The following presentation goes through the camera calibration details and the different steps and equations used during the 3D <-> image transformations.
https://www.cs.umd.edu/class/fall2013/cmsc426/lectures/camera-calibration.pdf
With this in mind you can project any 3D point into the image and get its coordinates there. The reverse transformation is not unique: going back from 2D to 3D gives you a ray (a line of possible points) instead of a unique point.
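That forward projection can be sketched in plain Java with row-major arrays (the toy K, R, T values below are made up for illustration):

```java
public class PinholeProjection {
    /** Projects a 3D point with x = K [R|T] X in homogeneous coordinates.
     *  K and R are 3x3 row-major, t has length 3. Returns pixel (u, v). */
    static double[] project(double[] K, double[] R, double[] t, double[] X) {
        // Camera-frame point: R * X + t.
        double[] cam = new double[3];
        for (int i = 0; i < 3; i++) {
            cam[i] = R[3 * i] * X[0] + R[3 * i + 1] * X[1] + R[3 * i + 2] * X[2] + t[i];
        }
        // Image-plane homogeneous point: K * cam.
        double[] img = new double[3];
        for (int i = 0; i < 3; i++) {
            img[i] = K[3 * i] * cam[0] + K[3 * i + 1] * cam[1] + K[3 * i + 2] * cam[2];
        }
        // Homogeneous divide by the third coordinate gives pixels.
        return new double[] { img[0] / img[2], img[1] / img[2] };
    }

    public static void main(String[] args) {
        // Toy intrinsics: focal length 500, principal point (320, 240).
        double[] K = {500, 0, 320,  0, 500, 240,  0, 0, 1};
        double[] R = {1, 0, 0,  0, 1, 0,  0, 0, 1}; // identity rotation
        double[] t = {0, 0, 0};
        // A point on the optical axis, 2 units ahead, lands on the principal point.
        double[] uv = project(K, R, t, new double[] {0, 0, 2});
        System.out.println(uv[0] + ", " + uv[1]); // prints 320.0, 240.0
    }
}
```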

Scalability of a 2D game with the help of OpenGL, and some general questions

I am planning to re-engineer my prototype of a 2D role-playing game into a "real product". For that I am thinking about using OpenGL instead of the Android Canvas.
The reason is that I want the game to work on devices with different screen resolutions. To do so I thought about using an OpenGL camera view facing a "wall" on which my 2D game textures move. If the resolution of the current device is too small for the whole game content, I want to move the camera view so that the character stays in the middle until the camera frame reaches the edges of the "wall".
Is this a feasible solution, or would you rather choose a different way?
Is it even possible to draw sprites in OpenGL as I can with Canvas? Simply as several layers above each other: first the tiles, then the figures, with, for example, simple squares as life bars (first the background, then the red life above it) and so on.
positionRect = new Rect(
        this.getPositionX(),
        this.getPositionY() - (this.spriteHeight - Config.BLOCKSIZE),
        this.getPositionX() + Config.BLOCKSIZE,
        this.getPositionY() + Config.BLOCKSIZE);
spriteRect = new Rect(0, 0, Config.BLOCKSIZE, spriteHeight);
canvas.drawBitmap(this.picture, spriteRect, positionRect, null);
If so, how do I start with drawing the first background and maybe a first dot (a .png picture)? I didn't find any tutorial that gives me the right kick-off. I already know how to set up the project for a GLSurfaceView and so on.
You will need a few adjustments, but it is possible and quite easy. Since you can find a tutorial and start with some simple programs, I will just give you some pointers:
First of all you should look into projections. You can use glFrustumf or glOrthof on the projection matrix. The first one is mainly used for 3D, so use ortho. The parameters of this method represent the coordinate-system borders of your screen. If you want them to match most "view" systems, use: left=0, top=0, right=view.width, bottom=view.height.
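To see what those parameters do, here is the x/y mapping that glOrthof sets up, written out as plain Java so you can check where a world point lands in normalized device coordinates (the 480x800 screen size is just an example):

```java
public class OrthoMapping {
    /** Maps a point into normalized device coordinates ([-1, 1] on each axis)
     *  the way glOrthof(left, right, bottom, top, ...) does for x and y. */
    static float[] toNdc(float x, float y,
                         float left, float right, float bottom, float top) {
        float ndcX = 2f * (x - left) / (right - left) - 1f;
        float ndcY = 2f * (y - bottom) / (top - bottom) - 1f;
        return new float[] { ndcX, ndcY };
    }

    public static void main(String[] args) {
        // View-style setup: left=0, top=0, right=480, bottom=800.
        float[] topLeft = toNdc(0, 0, 0, 480, 800, 0);
        float[] bottomRight = toNdc(480, 800, 0, 480, 800, 0);
        System.out.println(topLeft[0] + "," + topLeft[1]);         // -1.0,1.0
        System.out.println(bottomRight[0] + "," + bottomRight[1]); // 1.0,-1.0
    }
}
```

With top=0 and bottom=view.height, y grows downward, exactly like Canvas coordinates.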
Now you can create a square buffer instead of a Rect, as in:
float[] buffer =
{origin.x, origin.y,
origin.x, origin.y+size.height,
origin.x+size.width, origin.y+size.height,
origin.x+size.width, origin.y,
};
And texture coordinates as
float[] textureCoordinates =
{0f, 0f,
 0f, 1f,
 1f, 1f,
 1f, 0f,
};
You will also need to load the texture(s) (in some initialization step, and only once if possible), for which use Google or Stack Overflow, since it depends on the platform...
And this is pretty much all you need in your draw method (GL ES 1.x calls via GL10; wrap the float arrays above into direct FloatBuffers first):
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glEnable(GL10.GL_TEXTURE_2D);
// for each object:
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoordBuffer); // unless all are the same
gl.glBindTexture(GL10.GL_TEXTURE_2D, theTextureRepresentingTheSpriteYouWant);
gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, 4); // fan, since the vertex order above would leave a gap as a strip
As for moving, use translate on the model-view matrix:
gl.glPushMatrix();
gl.glTranslatef(x, y, 0f);
drawScene();
gl.glPopMatrix();
When working with OpenGL, you aren't constrained to the "window" dimensions. You define a projection matrix and viewport for your world, and then to change the "camera" you just adjust the projection matrix and viewport. If I were you, I'd pick up a book on OpenGL before starting this project, so you are aware of how OpenGL works.
Also, if you are working in Java, you will want to use the GLSurfaceView class. It handles all of the threading for you, so you don't need to worry about it.

OSMDroid PathOverlay drawing is corrupted at high zoom levels

I'm using OSMdroid to implement a mapping application.
I have implemented a custom MapTileProvider that uses a tile source that allows zoom levels up to 22.
The default MAPNIK provider only allows zooms to level 18.
The problem is that any PathOverlay instances draw perfectly up to zoom level 19, but
are not drawn properly at zoom levels 20-22. It looks like someone has rubbed out the path with an eraser over 90% of its length (see screenshots below).
I've stepped through the draw() method of PathOverlay and everything seems to be calculating correctly (the intermediate points appear correct for zoom level 22, and the XY projections are then divided by 22 - zoomLevel to get the current screen coordinates).
Can anyone provide some insight as to what the problem is, and how to resolve it?
The same thing happens if I invoke the MapView using CloudMade small tiles, which allows zooms up to level 20 and is a 'built-in' osmdroid tile provider class.
//mMapTileProvider = new HighResMapTileProvider(this);
mMapTileProvider = new MapTileProviderBasic(this,TileSourceFactory.CLOUDMADESMALLTILES);
mMapView = new MapView(this, 256, mResourceProxy,mMapTileProvider);
So the problem does not appear to be with the tile source or provider but with the canvas drawing method. Any ideas on how to resolve this?
At zoomLevel 19 I can see my paths nicely:
But here is that same path at the next zoom level:
I've found a workaround. It's not ideal, but it does work.
Firstly, if I add a
canvas.drawCircle(screenPoint1.x, screenPoint1.y, 5, cpaint);
to PathOverlay's draw() method I see this, so I know the co-ordinates are at least being calculated correctly.
So the problem seems to be related to the underlying line draw method in Android.
After some trial and error, I found that setting the stroke width to 0.0 on the PathOverlay's Paint object fixes the problem, but the line is then obviously only 1 pixel wide (a hairline).
Adding a check on the current zoom level in PathOverlay.draw() to set the stroke width keeps the current behaviour for levels < 20 and draws a hairline path at higher zoom levels.
Some other things I noticed:
The circles become squares at zoom levels 21 & 22. This strongly suggests that there are floating-point precision issues when passing very large (x, y) coordinates to Path.lineTo / canvas.drawCircle etc., e.g. mPath.lineTo(131000001, 38000001)
Setting the stroke width to, say, 5 sort of works up to zoom level 21, but the same problem crops up again at level 22
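The precision suspicion is easy to confirm: those Android drawing coordinates end up as 32-bit floats, whose 24-bit mantissa spaces representable values 8 apart near 131 million, so adjacent integer coordinates collapse to the same value. A quick check in plain Java (no Android needed):

```java
public class FloatPrecision {
    public static void main(String[] args) {
        // Near 131e6 a float cannot distinguish values closer than 8 apart,
        // so the "+1" in a coordinate like 131000001 is simply lost.
        float a = 131000001f;
        float b = 131000000f;
        System.out.println(a == b);      // true
        System.out.println(Math.ulp(b)); // 8.0 - the gap between adjacent floats here
    }
}
```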
Update: This has been fixed in osmdroid 3.0.9.
Original Answer:
This issue appears to be rooted in Android. I believe it is due to a rounding error that happens when you scroll a view to a large offset (possibly due to the use of SkScalar). The bug can be isolated by creating a new Android project with an activity and a view:
Begin with the view at the origin with no scroll offset yet.
Draw a circle on the canvas: canvas.drawCircle(screenCenterX, screenCenterY, 100, mPaint)
Scroll the view to a large number: mainView.scrollTo(536870912, 536870912)
Draw a circle on the canvas: canvas.drawCircle(newScreenCenterX, newScreenCenterY, 100, mPaint)
The first circle draws ok, the second one draws distorted. For further evidence, try to draw your path near 0 lat/0 long and zoom in - notice the distortion no longer appears.
I will update the osmdroid ticket with some possible solutions and workarounds.
I am fairly sure this is because the default PathOverlay only draws the line between points inside the view. If a point is outside of the current view, that line segment is not drawn. At lower zoom levels you just don't see that the bit of the line going off the view is missing, because all the sections are small.
I think you have two options.
The easy, but maybe not the best, option is to put more points into the path; then at least the problem will be less noticeable. Whether this is a good idea will depend on how many points you already have.
The correct solution is to extend the class, write your own draw method, and clip the line segments that go off the view to the view edge. Ideally you would contribute your code back to the source.
I have the same problem and I posted a bug on OSMDroid here:
http://code.google.com/p/osmdroid/issues/detail?can=2&start=0&num=100&q=221&colspec=ID%20Type%20Status%20Priority%20Milestone%20Owner%20Summary&groupby=&sort=&id=221
I'm not sure if it's a problem with OSMDroid or just a problem of a too-big canvas.
Another workaround would be drawing on a new, smaller canvas (the size of the currently visible MapView) and using drawBitmap in the top-left corner of the big canvas. Just remember not to create a new bitmap on each draw - that's expensive.
All shapes are drawn perfectly fine, but I'm struggling with another problem: panning is not smooth at levels > 18. When you pan the map it does not move pixel by pixel as it does at levels < 18; it jumps over a few pixels.
I agree that this is an OSMDroid-related bug, because the same DrawPath function works (at zoom levels 21 & 22) using the Google MapView. I hope they will be able to address this issue.
To solve this problem, we need to implement a line-clipping algorithm.
By clipping against the map viewport, the path gains extra vertices at the view edges,
so it will be drawn correctly at high zoom levels.
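A standard choice for that clipping step is the Liang-Barsky segment clip. A minimal sketch in plain Java (class and method names are illustrative; in practice you would run this over each consecutive pair of projected path points before calling the Canvas line draw):

```java
public class SegmentClipper {
    /** Clips segment (x0,y0)-(x1,y1) against the rectangle [xmin,xmax]x[ymin,ymax]
     *  using Liang-Barsky. Returns the clipped endpoints {x0,y0,x1,y1},
     *  or null if the segment lies entirely outside. */
    static double[] clip(double x0, double y0, double x1, double y1,
                         double xmin, double ymin, double xmax, double ymax) {
        double dx = x1 - x0, dy = y1 - y0;
        double t0 = 0, t1 = 1;
        // One (p, q) pair per rectangle edge: left, right, bottom, top.
        double[] p = { -dx, dx, -dy, dy };
        double[] q = { x0 - xmin, xmax - x0, y0 - ymin, ymax - y0 };
        for (int i = 0; i < 4; i++) {
            if (p[i] == 0) {
                if (q[i] < 0) return null; // parallel to this edge and outside it
            } else {
                double t = q[i] / p[i];
                if (p[i] < 0) { if (t > t1) return null; if (t > t0) t0 = t; }
                else          { if (t < t0) return null; if (t < t1) t1 = t; }
            }
        }
        return new double[] { x0 + t0 * dx, y0 + t0 * dy,
                              x0 + t1 * dx, y0 + t1 * dy };
    }

    public static void main(String[] args) {
        // Horizontal segment crossing a 100x100 viewport.
        double[] c = clip(-50, 50, 150, 50, 0, 0, 100, 100);
        System.out.println(java.util.Arrays.toString(c)); // [0.0, 50.0, 100.0, 50.0]
    }
}
```

The clipped endpoints stay small regardless of zoom level, which sidesteps the float-precision problem as well.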

Scaleable to camera 3d line on an android device

I need to draw a growing 3d line using open gl on an Android device.
The problem is I need to draw lines that scale with a "laser" type effect on them.
Originally I just thought of drawing simple GL lines or line loops, but they won't scale if the camera is moved closer to them - like in a fly-through.
My next thought was to generate a cylinder mesh and extrude it in real time as I would a line, accounting for 90-degree turns by adding a 45-degree rotation after extruding a new cylinder from the end point, turning the end 45 degrees again and extruding another cylinder to create the new line extension, and so on and so forth...
Problem with cylinders is the near clipping plane will clip through them.
Does anyone have a better thought or idea they can throw at me for this?
Problem with cylinders is the near clipping plane will clip through them.
This will be the case with any kind of geometry. You can, however, use depth clamping to avoid some of the effects of clipping. See here for details: http://arcsynthesis.org/gltut/Positioning/Tut05%20Depth%20Clamping.html

Android 2D opengl line circle

I am new to OpenGL and have been trying to do some basic 2D OpenGL on Android. I am able to set up my 2D view and draw squares and triangles. I am trying to draw a circle and am not exactly sure how to do it. I have found several techniques while searching: one uses triangles rotated around the center with the given radius, which will not work, as I do not want a filled circle. I also found suggestions to do it with lines moving around the outer edge of the circle.
I have chosen to implement the latter. The issue I am having is with the index buffer passed into glDrawElements: if my circle (lines) buffer has too many points, I am unable to use a byte array for the index buffer, as the maximum value of a (signed) byte is 127. Any help or direction on how to do this would be appreciated.
OpenGL ES 1.x only accepts GL_UNSIGNED_BYTE and GL_UNSIGNED_SHORT index types, so switch from a byte array to a ShortBuffer with GL_UNSIGNED_SHORT; that allows up to 65536 indices, which is more than you'll ever need for a circle outline. (An IntBuffer with GL_UNSIGNED_INT only works where the OES_element_index_uint extension is available.)
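A sketch of that setup in plain Java with java.nio (names are illustrative; the buffers would then be drawn with glDrawElements(GL_LINE_LOOP, segments, GL_UNSIGNED_SHORT, indices)):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

public class CircleOutline {
    /** Builds the (x, y) outline vertices of a circle with the given
     *  number of segments, for drawing as a GL_LINE_LOOP. */
    static float[] circleVertices(float cx, float cy, float r, int segments) {
        float[] v = new float[segments * 2];
        for (int i = 0; i < segments; i++) {
            double angle = 2.0 * Math.PI * i / segments;
            v[2 * i]     = cx + r * (float) Math.cos(angle);
            v[2 * i + 1] = cy + r * (float) Math.sin(angle);
        }
        return v;
    }

    /** Sequential short indices 0..segments-1; shorts allow far more
     *  than the 127 a signed byte array could express. */
    static ShortBuffer circleIndices(int segments) {
        ShortBuffer ib = ByteBuffer.allocateDirect(segments * 2)
                .order(ByteOrder.nativeOrder()).asShortBuffer();
        for (short i = 0; i < segments; i++) ib.put(i);
        ib.position(0);
        return ib;
    }

    public static void main(String[] args) {
        int segments = 360; // too many for byte indices, fine for shorts
        float[] v = circleVertices(0f, 0f, 1f, segments);
        ShortBuffer ib = circleIndices(segments);
        System.out.println(v.length + " floats, " + ib.capacity() + " indices");
        // prints 720 floats, 360 indices
    }
}
```

For a plain outline you could also skip indices entirely and call glDrawArrays(GL_LINE_LOOP, 0, segments) on the vertex buffer alone.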
