WebGL position different on desktop and Android phone

I'm currently trying to create a website that contains a WebGL canvas. Everything worked fine and I got my plane rendered, but when I opened the website on my Samsung Galaxy S III mini, the plane's origin point seems to be different.
You can check the website at http://portfolio.kamidesigns.be
The canvas is located under thesis -> Occluder simplification using planar sections
Here are some images to show what's wrong:
[Desktop screenshot]
[Cellphone screenshot]
The plane on my cellphone is located in the top-right corner, although the vertex positions of the plane are:
var positions = new Float32Array([-0.5,  0.5, 0,
                                  -0.5, -0.5, 0,
                                   0.5,  0.5, 0,
                                   0.5, -0.5, 0]);
If someone can help me, it would be very much appreciated.

You have a couple of typos in your code which prevent the vertex array object from being created properly. This leads to default values being fed through the vertex pipeline, resulting in different behavior on different browsers.
Firstly, change your VAO initialization to this (you missed the var vao declaration):
function createVAO()
{
    var vao = gl.extOESVertexArrayObject.createVertexArrayOES();
    gl.extOESVertexArrayObject.bindVertexArrayOES(vao);
    return vao;
}
Secondly, in your storeDataInAttributeList calls you need to supply the result of gl.getAttribLocation for attributeNumber. This also keeps your code correct if you later modify the shader:
storeDataInAttributeList(program.aPosition, 3, positions);
storeDataInAttributeList(program.aTextureCoords, 2, textureCoords);
storeDataInAttributeList(program.aNormals, 3, normals);
And lastly, the inNormal attribute is not used in your shader, which results in gl.getAttribLocation(program, "inNormal") returning -1.
You may want to safeguard storeDataInAttributeList against such cases, like this:
function storeDataInAttributeList(attributeNumber, coordinateSize, data)
{
    if (attributeNumber >= 0)
    {
        var vbo = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
        gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
        gl.enableVertexAttribArray(attributeNumber);
        gl.vertexAttribPointer(attributeNumber, coordinateSize, gl.FLOAT, false, 0, 0);
        gl.bindBuffer(gl.ARRAY_BUFFER, null);
    }
}
After making these three changes, your page displays the same results in Android and desktop browsers.
My final advice would be to avoid relying on the OES_vertex_array_object extension, as e.g. the stock browser on my Samsung Galaxy Tab 2 doesn't support it (and that's not a particularly old device).

Related

Why detected object jumps using OpenCV

OK, I have a strange problem; I'll try to describe it as best I can.
I've trained my app to detect a car when looking at it from the side:
Imgproc.cvtColor(aInputFrame, grayscaleImage, Imgproc.COLOR_RGBA2RGB);
MatOfRect objects = new MatOfRect();
// Use the classifier to detect cars
if (cascadeClassifier != null) {
    cascadeClassifier.detectMultiScale(grayscaleImage, objects, 1.1, 1,
            2, new Size(absoluteObjectSize, absoluteObjectSize),
            new Size());
}
for (int i = 0; i < dataArray.length; i++) {
    Core.rectangle(aInputFrame, dataArray[i].tl(), dataArray[i].br(),
            new Scalar(0, 255, 0, 255), 3);
    mRenderer.setCameraPosition(-5, 5, 60f);
}
Now, this code works nicely: it detects cars and marks them with a green rectangle. The problem is that the marked rectangle jumps like hell. Even when the phone is held still, the rectangle jumps from left to right to the middle; there is never one still rectangle. I hope I've described the problem properly. I would like to stabilize the marking because I want to draw an overlay based on it, and I can't have it jumping around like this.
The documentation for detectMultiScale says it expects an image of type CV_8U. You will need to convert to a grayscale image with COLOR_RGBA2GRAY instead of COLOR_RGBA2RGB.
In detectMultiScale, increase the number-of-neighbours parameter to avoid false positives.
Suggestion: if the input is a video stream, don't run detectMultiScale on every frame. It is slow even if you use LBP cascades. Try detection in one frame, followed by tracking techniques.
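A minimal sketch of the first two fixes, reusing the variable names from the question (aInputFrame, cascadeClassifier and absoluteObjectSize are assumed to exist as posted):
// Convert to a real 8-bit grayscale image (CV_8U), which detectMultiScale expects
Mat gray = new Mat();
Imgproc.cvtColor(aInputFrame, gray, Imgproc.COLOR_RGBA2GRAY);

MatOfRect objects = new MatOfRect();
if (cascadeClassifier != null) {
    // minNeighbors raised from 1 to 5 here to suppress jittery false positives
    cascadeClassifier.detectMultiScale(gray, objects, 1.1, 5,
            2, new Size(absoluteObjectSize, absoluteObjectSize),
            new Size());
}

// Draw the detections that survived the stricter neighbour threshold
for (Rect rect : objects.toArray()) {
    Core.rectangle(aInputFrame, rect.tl(), rect.br(),
            new Scalar(0, 255, 0, 255), 3);
}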

glUniformMatrix2fv on Samsung Galaxy Tab Pro with Android GLES20 -- works on Droid Bionic

I'm working on shaders for an Android OpenGL ES 2.0 program. This is the error message; I googled it and found nothing:
java.lang.IllegalArgumentException: length - offset < count*4 < needed
at android.opengl.GLES20.glUniformMatrix2fv(Native Method)
This works on my Droid Bionic, but not my Samsung Galaxy Tab Pro. The actual line in question reads as follows:
GLES20.glUniformMatrix2fv(m_u_texture_position, 1, false, m_u_texture_position_floats, 0);
m_u_texture_position_floats is a 2-element array of floats. Does anyone know why this is?
glUniformMatrix2fv() sets the value for a uniform of type mat2. mat2 is a 2 by 2 matrix, so it requires 4 floats.
For a uniform variable with 2 values, the type in the shader code should be vec2, and you will use glUniform2fv() to set the value:
GLES20.glUniform2fv(m_u_texture_position, 1, m_u_texture_position_floats, 0);
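For illustration, a minimal side-by-side sketch of the two cases (m_u_some_matrix is a hypothetical mat2 uniform location, not from the question):
// Shader: uniform vec2 u_texture_position;  -> needs 1 * 2 floats
float[] texPos = { 0.5f, 0.25f };
GLES20.glUniform2fv(m_u_texture_position, 1, texPos, 0);

// Shader: uniform mat2 u_some_matrix;       -> needs 1 * 4 floats (2x2, column-major)
float[] mat2x2 = { 1f, 0f,
                   0f, 1f };
GLES20.glUniformMatrix2fv(m_u_some_matrix, 1, false, mat2x2, 0);
The IllegalArgumentException is presumably Android's bounds check noticing that a 2-element array is shorter than the 4 floats a mat2 requires; a driver stack that skips the check can appear to "work" anyway.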

PointCloud from two undistorted images

I want to do some Structure from Motion using OpenCV. This should happen on Android.
Currently I have the cameraMatrix (intrinsic parameters) and the distortion coefficients from the camera calibration.
The user should now take 2 images of a building, and the app should generate a point cloud.
Note: the user may also rotate the smartphone's camera a little as he moves along one side of the building...
At the current point, I have the following information:
the undistorted left image
the undistorted right image
a list of good matches using SIFT
the homography matrix
the fundamental matrix
I've searched the internet and now I am very confused about how I should proceed...
Some say I need to use stereoRectify to get Q, and use Q with reprojectImageTo3D() to get the point cloud.
Others say that I need to use stereoRectifyUncalibrated and use H1 and H2 from this method to fill in all the parameters of triangulatePoints.
In triangulatePoints I need the projection matrix of each camera/image, but from my understanding this seems definitely wrong.
So for me there are some problems:
How do I get R and T (rotation and translation) from all the information I already have?
If I use stereoRectify, the first 4 parameters are cameraMatrix1, distortionCoeff1, cameraMatrix2, distortionCoeff2 -- if I do not have a stereo camera like the Kinect, are cameraMatrix1 and cameraMatrix2 equal for my setup (a mono camera on a smartphone)?
How can I obtain Q? (I guess if I have R and T I can get it from stereoRectify.)
Is there another way of getting the projection matrices for each camera so I can use the triangulation method provided by OpenCV?
I know these are a lot of questions, but googling confused me, so I need to get this straight. I hope someone can help me with my problems.
Thanks
PS: as these are more theoretical questions, I did not post any code. If you want/need to see code or the values from my camera calibration, just ask and I will add them to my post.
I wrote something about using Farneback's optical flow for Structure from Motion before. You can read the details here.
But here's the code snippet; it's a somewhat working, but not great, implementation. I hope you can use it as a reference.
/* Try to find the essential matrix from the points */
Mat fundamental = findFundamentalMat(left_points, right_points, FM_RANSAC, 0.2, 0.99);
Mat essential = cam_matrix.t() * fundamental * cam_matrix;

/* Find the projection matrix between those two images */
SVD svd(essential);
static const Mat W = (Mat_<double>(3, 3) <<
    0, -1, 0,
    1,  0, 0,
    0,  0, 1);
static const Mat W_inv = W.inv();

/* The decomposition yields two possible rotations and two possible translations.
   Strictly, all four (R, T) combinations should be tested and the one that places
   the triangulated points in front of both cameras kept; this snippet just uses
   (R1, T1). */
Mat_<double> R1 = svd.u * W * svd.vt;
Mat_<double> T1 = svd.u.col(2);
Mat_<double> R2 = svd.u * W_inv * svd.vt;
Mat_<double> T2 = -svd.u.col(2);

static const Mat P1 = Mat::eye(3, 4, CV_64FC1);
Mat P2 = (Mat_<double>(3, 4) <<
    R1(0, 0), R1(0, 1), R1(0, 2), T1(0),
    R1(1, 0), R1(1, 1), R1(1, 2), T1(1),
    R1(2, 0), R1(2, 1), R1(2, 2), T1(2));

/* Triangulate the points to find the 3D homogeneous points in world space.
   Note that each column of the 'out' matrix corresponds to one 3D homogeneous point. */
Mat out;
triangulatePoints(P1, P2, left_points, right_points, out);

/* Since it's a homogeneous (x, y, z, w) coord, divide by w to get (x, y, z, 1) */
vector<Mat> splitted = {
    out.row(0) / out.row(3),
    out.row(1) / out.row(3),
    out.row(2) / out.row(3)
};
merge(splitted, out);

return out;
This isn't OpenCV, but here is an example of exactly what you are asking for:
http://boofcv.org/index.php?title=Example_Stereo_Single_Camera
There is an Android demonstration application which includes that code here:
https://play.google.com/store/apps/details?id=org.boofcv.android

Slow glTexSubImage2D performance on Nexus 10/Android 4.2.2 (Samsung Exynos 5 w/ Mali-T604)

I have an Android app that decodes video into yuv420p format and then renders the video frames using OpenGL ES.
I use glTexSubImage2D() to upload the y/u/v buffers to the GPU, then do a YUV2RGB conversion in a shader. All EGL/OpenGL setup/rendering code is native code.
Now, I am not saying there is no problem with my code, but considering that the same code runs perfectly fine on iOS (iPad/iPhone), Nexus 7, Kindle HD 8.9, Samsung Note 1 and a few other cheap Chinese tablets (A31/RockChip 3188) running Android 4.0/4.1/4.2, I would say it's unlikely my code is wrong. On those devices, glTexSubImage2D() takes less than 16 ms to upload an SD or 720p HD texture.
However, on the Nexus 10, glTexSubImage2D() takes about 50~90 ms for an SD or 720p HD texture, which is way too slow for 30fps or 60fps video.
I would like to know:
1) if I should pick a different texture format (RGBA or BGRA). Is there a way to detect which texture format a GPU handles best?
2) if there is a feature that is 'OFF' on all other SoCs but set to 'ON' on the Exynos 5, for example the automatic mipmap generation option (I have it off, btw).
3) if this is a known issue of the Samsung Exynos SoC -- I can't find a support forum for the Exynos.
4) if there is any option I need to set when configuring the EGL surface, like transparency, surface format, etc. (I have no idea what I am talking about.)
5) whether it could mean the GPU is doing an implicit format conversion -- but I checked, and GL_LUMINANCE is always used. Again, it works on all other platforms.
6) anything else?
My EGL config:
const EGLint attribs[] = {
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_NONE
};
Initial setup:
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, ctx->frameW, ctx->frameH, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL); /* also for U/V */
Subsequent partial replacement:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, ctx->frameW, ctx->frameH, GL_LUMINANCE, GL_UNSIGNED_BYTE, yBuffer); /*also for U/V */
I am trying to render video at ~30FPS or ~60FPS at SD or 720P HD resolution.
This is a known driver issue that we have reported to ARM. A future update should fix it.
EDIT: Status update
We've now managed to reproduce slow upload conditions for one path on the public firmware, which you are possibly hitting, and this will be fixed in the next driver release.
If you double-buffer texture IDs (e.g. frame N = ID X, N+1 = ID Y, N+2 = ID X, N+3 = ID Y, etc) for the textures you are uploading to it should help avoid this on the current firmware.
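A minimal sketch of that double-buffering idea, using the Java GLES20 bindings for brevity (the question's code is native, but the pattern is the same; frameW, frameH and yBuffer stand in for the ctx-> fields from the question):
// Two texture names for the same plane; alternate uploads between them so the
// driver is never asked to overwrite a texture the GPU may still be reading.
int[] texIds = new int[2];
GLES20.glGenTextures(2, texIds, 0);
for (int id : texIds) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, id);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
            frameW, frameH, 0, GLES20.GL_LUMINANCE,
            GLES20.GL_UNSIGNED_BYTE, null);
}

// Per frame: N -> X, N+1 -> Y, N+2 -> X, ...
int tex = texIds[frameIndex & 1];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, frameW, frameH,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yBuffer);
frameIndex++;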
Thanks,
Iso
I can confirm this has been fixed in Android 4.3 -- I'm seeing a performance increase of a factor of 2-3 with the RGBA format and of 10-50 with other texture formats over Android 4.2.2. These results apply to both glTexImage2D and glTexSubImage2D. (I can't add comments yet, so I had to put this here.)
EDIT: If you're stuck with 4.2.2, you could try using an RGBA texture instead; it should perform better (3-10x or so with larger power-of-two texture sizes).

glDrawTexiOES on Nvidia Tegra 3

I can't get the ((GL11Ext) gl).glDrawTexfOES / glDrawTexiOES extension to work on my Tegra 3 device (HTC One X). With GL errors enabled, I get the following GLException: "out of memory".
The same code works on every other Android device/emulator that I try, but on my One X I always get this error. I've tried reducing the texture size right down, but it makes no difference. (Yes, the texture dimensions are always powers of 2.)
Has anybody else experienced this problem? Any ideas? Thanks.
It looks like the Tegra 3 just doesn't support this extension. So in the end, I changed TexFont to render textured quads instead and it seems to work OK (a rough sketch of the quad idea follows the snippet below).
for (int lines = fntTexHeight - 1; lines > 0; --lines) {
    pix.put(bits, lines * lineLen, lineLen);
}
pix.position(0); // need this

// Place bitmap in texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, texID);
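For reference, a minimal sketch of drawing a textured quad with the GL10 fixed-function pipeline as a stand-in for glDrawTexiOES (the position/size values and buffer setup are illustrative, not taken from TexFont):
// Quad at (x, y) with size w x h, drawn as a triangle strip
float x = 0f, y = 0f, w = 64f, h = 64f;
float[] verts = { x,     y,     0f,
                  x + w, y,     0f,
                  x,     y + h, 0f,
                  x + w, y + h, 0f };
// Texture v flipped so the bitmap appears upright
float[] tex   = { 0f, 1f,   1f, 1f,   0f, 0f,   1f, 0f };

FloatBuffer vb = ByteBuffer.allocateDirect(verts.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer().put(verts);
vb.position(0);
FloatBuffer tb = ByteBuffer.allocateDirect(tex.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer().put(tex);
tb.position(0);

gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBindTexture(GL10.GL_TEXTURE_2D, texID);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vb);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, tb);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);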
