glDrawTexiOES on Nvidia Tegra 3 - android

I can't get the ((GL11Ext) gl).glDrawTexfOES / glDrawTexiOES extension to work on my Tegra 3 device (HTC One X). With GL Errors enabled, I get the following GLException: "out of memory".
The same code works on every other Android device / emulator that I try, but with my One X I always get this error. I've tried reducing the texture size right down but it makes no difference. (Yes, the texture dimensions are always powers of 2).
Has anybody else experienced this problem? Any ideas? Thanks.

It looks like Tegra 3 just doesn't support this extension. So in the end, I changed TexFont to render textured "quads" instead, and it seems to work OK (a sketch of the quad approach follows the upload snippet below).

// Copy the font bitmap into the pixel buffer one row at a time,
// in reverse order (flipping the image vertically).
for (int lines = fntTexHeight - 1; lines > 0; --lines) {
    pix.put(bits, lines * lineLen, lineLen);
}
pix.position(0); // need this: rewind the buffer before uploading
// Place bitmap in texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, texID);
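For reference, here is a minimal sketch of drawing a textured quad with GL10 vertex arrays in place of glDrawTexiOES; the quad coordinates are assumptions, not the original TexFont code:

// Screen-aligned quad as a 4-vertex triangle strip, with matching UVs.
float[] verts = { -1f, -1f,   1f, -1f,   -1f, 1f,   1f, 1f };
float[] tcs   = {  0f,  1f,   1f,  1f,    0f, 0f,   1f, 0f };
FloatBuffer vertexBuffer = ByteBuffer.allocateDirect(verts.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuffer.put(verts).position(0);
FloatBuffer texCoordBuffer = ByteBuffer.allocateDirect(tcs.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
texCoordBuffer.put(tcs).position(0);

gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBindTexture(GL10.GL_TEXTURE_2D, texID);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoordBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);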

Related

GL_INVALID_FRAMEBUFFER_OPERATION: Framebuffer is not complete or incompatible with command

I have made a 360° image viewer in Unity and I am changing the image texture dynamically using a C# script. It works fine in the Unity editor on PC, but when I run it on an Android device I get the error below:
OPENGL NATIVE PLUG-IN ERROR: GL_INVALID_FRAMEBUFFER_OPERATION: Framebuffer is not complete or incompatible with command
My code is below:
IEnumerator registerFunc(WWW www)
{
    yield return www;
    if (www.error == null)
    {
        Debug.Log("OK - CountTime");
        texturas = www.texture;
        www.Dispose();
        www = null;
        sphereMS.material.mainTexture = texturas; // this is the line that throws the error
    }
    else
    {
        Debug.Log("ERROR");
    }
}
Can anyone help me work out how to solve this? Thanks!
The image resolution is 9999x2429, and sphereMS is a public variable to which I have assigned the sphere using drag and drop.
That's the problem: 9999 is really big. The texture size limit on most Android devices is about 2048; anything above that is a problem unless you are running on a very high-end (and expensive) Android device.
One thing to try is replacing
texturas = www.texture;
with
texturas = new Texture2D(4, 4);
texturas.LoadImage(www.bytes);
The LoadImage function loads a JPG/PNG image into a Texture2D. It might also fail because of the size of the image.
If LoadImage fails too, then you should host a different texture resolution for each platform on your server, and the Android build should request a lower-resolution version of your textures. Keep reducing the texture resolution until it stops crashing; 2048 should be fine.
I suggest you read this post about the texture size limit in Unity.
EDIT:
If you have changed the resolution to below 2048 and the problem is still there, then this is a bug.
Install Unity 2017.2.0p1 or 2017.3.0b3 to get the fix. A few people experience this with the google-cardboard plugin.
This mostly happens on Android devices using Mali GPUs; even Asphalt 8 couldn't run on some of these devices (at least the older ones).
I tried on a Huawei Mate 10 and a Samsung Galaxy A50s, which use a Mali-G72 MP12 and MP3 respectively, and got the same error: the image comes out black/blank (it couldn't render).
The same code works fine on other Android and iOS devices.
Use the parameter
-disable-gpu
or
app.disableHardwareAcceleration()

How to improve OpenCV face detection performance in android?

I am working on an Android project in which I am using OpenCV to detect faces in all the images in the gallery. The face extraction runs in a service, which keeps working until all the images are processed. It stores the detected faces in internal storage and also shows them in a grid view if the activity is open.
My code is:
CascadeClassifier mJavaDetector = null;

public void getFaces()
{
    for (int i = 0; i < size; i++)
    {
        File file = new File(urls.get(i));
        imagepath = urls.get(i);
        // decodeFile() takes a String path, not a File
        defaultBitmap = BitmapFactory.decodeFile(file.getAbsolutePath(), bitmapFatoryOptions);
        mJavaDetector = new CascadeClassifier(FaceDetector.class.getResource("lbpcascade_frontalface").getPath());
        // Mat takes (rows, cols), i.e. (height, width)
        Mat image = new Mat(defaultBitmap.getHeight(), defaultBitmap.getWidth(), CvType.CV_8UC1);
        Utils.bitmapToMat(defaultBitmap, image);
        MatOfRect faceDetections = new MatOfRect();
        try
        {
            mJavaDetector.detectMultiScale(image, faceDetections, 1.1, 10, 0,
                    new Size(20, 20), new Size(image.width(), image.height()));
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        if (faceDetections.toArray().length > 0)
        {
            // store / display the detected faces
        }
    }
}
Everything works, but it is detecting faces very slowly. When I debugged the code, I found that the line taking all the time is:
mJavaDetector.detectMultiScale(image, faceDetections, 1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
I have checked multiple posts about this problem but didn't find a solution. Please tell me what I should do to solve it.
Any help would be greatly appreciated. Thank you.
You should pay attention to the parameters of detectMultiScale():
scaleFactor – Parameter specifying how much the image size is reduced at each image scale. This parameter is used to create the scale pyramid. It is necessary because the model has a fixed size during training; without the pyramid, the only detectable size would be that fixed one (which can also be read from the XML). Face detection becomes scale-invariant by using a multi-scale representation, i.e. detecting large and small faces with the same detection window.
scaleFactor depends on the size of your trained detector; in practice you want to set it as high as possible while still getting "good" results, so it should be determined empirically.
Your value of 1.1 can be good for this purpose: it means a relatively small resize step is used (size reduced by 10% per level), which increases the chance that a size matching the model is found. If your trained detector has size 10x10, you can then detect faces of size 11x11, 12x12 and so on. However, a factor of 1.1 requires roughly twice as many pyramid layers (and about 2x the computation time) as 1.2 does.
minNeighbors – Parameter specifying how many neighbours each candidate rectangle should have for it to be retained.
The cascade classifier works with a sliding-window approach: you slide a window over the image, then resize the image and search again, until the image cannot be resized further. In every iteration the positive outputs of the cascade are stored, but unfortunately it also detects many false positives. To eliminate false positives and get the proper face rectangle out of the detections, a neighbourhood criterion is applied; 3-6 is a good value. If the value is too high, you can lose true positives as well.
minSize – In the sliding-window approach described under minNeighbors, this is the smallest window the cascade will detect; objects smaller than this are ignored. Usually cv::Size(20, 20) is enough for face detection.
maxSize – Maximum possible object size. Objects bigger than that are ignored.
Finally, you can try classifiers based on different features (such as Haar, LBP, HoG). LBP classifiers are usually a few times faster than Haar ones, but also somewhat less accurate.
And it is also strongly recommended to look over these questions:
Recommended values for OpenCV detectMultiScale() parameters
OpenCV detectMultiScale() minNeighbors parameter
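Putting these parameter notes together, here is a hedged sketch of a tuned call; cascadePath and grayImage are assumed names, not the asker's code:

// Load the classifier once, outside any per-image loop, and reuse it.
CascadeClassifier detector = new CascadeClassifier(cascadePath);
MatOfRect faces = new MatOfRect();
detector.detectMultiScale(
        grayImage,   // 8-bit, single-channel input image
        faces,
        1.2,         // scaleFactor: coarser pyramid, roughly half the layers of 1.1
        4,           // minNeighbors: within the 3-6 range suggested above
        0,
        new Size(40, 40),   // a larger minSize skips implausibly small windows
        new Size(grayImage.width(), grayImage.height()));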
Instead of reading images as a Bitmap and then converting them to Mat with Utils.bitmapToMat(defaultBitmap, image), you can directly use Mat image = Highgui.imread(imagepath); You can check here for the imread() function.
Also, the line below takes a lot of time because the detector is looking for faces of at least Size(20, 20), which is quite small. Check this video for a visualization of face detection using OpenCV.
mJavaDetector.detectMultiScale(image, faceDetections, 1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
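For example, a minimal sketch using the OpenCV 2.x Java bindings, loading directly as grayscale so no Bitmap conversion is needed:

// Read the file straight into a single-channel Mat, skipping Bitmap entirely.
Mat image = Highgui.imread(imagepath, Highgui.CV_LOAD_IMAGE_GRAYSCALE);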

Get OpenGL max texture size

I'm developing an Android app that's going to work with bitmaps extensively and I'm looking for a reliable way to get the maximum texture size for OpenGL on different devices.
I know the guaranteed minimum size is 2048x2048, but that's not good enough, since there are already tablets out there with much higher resolutions (2560x1600, for example).
So is there a reliable way to get this information?
So far I've tried:
Canvas.getMaximumBitmapWidth() (Returns 32766, instead of 2048)
GLES10.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE ...) (Returns 0)
I'm working with minimum-sdk = 15 (ICS) and I'm testing on an Asus Transformer TF700T Infinity.
Does anyone know another way to get it?
Or will I have to compile a list of known GPUs with their max canvas size?
Try using this code:
int[] maxTextureSize = new int[1];
GLES10.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
maxTextureSize then holds the size limit for decoded images, such as 4096x4096 or 8192x8192. Remember to run this piece of code on a thread that has a current OpenGL context (for example, inside your GLSurfaceView.Renderer callbacks); otherwise you will get zero.
This will give you the maximum height allowed.
Canvas canvas = new Canvas();
canvas.getMaximumBitmapHeight() / 8
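If you need the value before any GL surface of your own exists, one workaround (a sketch, not from the answers above; error handling omitted) is to create a throwaway EGL pbuffer context purely to run the query:

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;
import javax.microedition.khronos.egl.EGLSurface;
import javax.microedition.khronos.opengles.GL10;

public static int queryMaxTextureSize() {
    EGL10 egl = (EGL10) EGLContext.getEGL();
    EGLDisplay display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
    egl.eglInitialize(display, new int[2]);

    // Any pbuffer-capable config will do for a glGetIntegerv query.
    EGLConfig[] configs = new EGLConfig[1];
    int[] numConfigs = new int[1];
    int[] configAttribs = { EGL10.EGL_SURFACE_TYPE, EGL10.EGL_PBUFFER_BIT, EGL10.EGL_NONE };
    egl.eglChooseConfig(display, configAttribs, configs, 1, numConfigs);

    // A 1x1 pbuffer is enough to make the context current.
    int[] surfaceAttribs = { EGL10.EGL_WIDTH, 1, EGL10.EGL_HEIGHT, 1, EGL10.EGL_NONE };
    EGLSurface surface = egl.eglCreatePbufferSurface(display, configs[0], surfaceAttribs);
    EGLContext context = egl.eglCreateContext(display, configs[0], EGL10.EGL_NO_CONTEXT, null);
    egl.eglMakeCurrent(display, surface, surface, context);

    int[] maxSize = new int[1];
    ((GL10) context.getGL()).glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);

    // Tear everything down again; we only needed the context for the query.
    egl.eglMakeCurrent(display, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT);
    egl.eglDestroySurface(display, surface);
    egl.eglDestroyContext(display, context);
    egl.eglTerminate(display);
    return maxSize[0];
}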

Slow glTexSubImage2D performance on Nexus 10/Android 4.2.2 (Samsung Exynos 5 w/ Mali-T604)

I have an Android app that decodes video into yuv420p format then renders video frames using OpenGLES.
I use glTexSubImage2D() to upload y/u/v buffer to GPU then do a YUV2RGB conversion using shader. All EGL/OpenGL setup/rendering code is native code.
Now I am not saying there is no problem with my code, but considering the same code runs perfectly fine on iOS (iPad/iPhone), Nexus 7, Kindle HD 8.9, Samsung Note 1 and a few other cheap Chinese tablets (A31/RockChip 3188) running Android 4.0/4.1/4.2, I would say it's unlikely my code is wrong. On those devices, glTexSubImage2D() takes less than 16 ms to upload an SD or 720P HD texture.
However, on the Nexus 10, glTexSubImage2D() takes about 50-90 ms for an SD or 720P HD texture, which is far too slow for 30fps or 60fps video.
I would like to know:
1) Should I pick a different texture format (RGBA or BGRA)? Is there a way to detect the best texture format for a given GPU?
2) Is there a feature that is 'off' on all other SoCs but set to 'on' on the Exynos 5? For example, automatic mipmap generation (I have it off, by the way).
3) Is this a known issue of the Samsung Exynos SoC? I can't find a support forum for the Exynos.
4) Is there any option I need to set when configuring the EGL surface, like transparency or surface format? (I have no idea what I am talking about.)
5) It could mean the GPU is doing an implicit format conversion, but I checked that GL_LUMINANCE is always used. Again, it works on all other platforms.
6) Anything else?
My EGL config:
const EGLint attribs[] = {
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_NONE
};
Initial setup:
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, ctx->frameW, ctx->frameH, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL); /* also for U/V */
subsequent partial replacement:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, ctx->frameW, ctx->frameH, GL_LUMINANCE, GL_UNSIGNED_BYTE, yBuffer); /*also for U/V */
I am trying to render video at ~30FPS or ~60FPS at SD or 720P HD resolution.
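For reference, here is a sketch of the YUV-to-RGB fragment shader such a setup typically uses (BT.601 full-range coefficients; the uniform and varying names are assumptions, not the asker's code), written as a Java shader-source string:

// Sketch only: samples three GL_LUMINANCE planes and converts BT.601
// full-range YUV to RGB. Names (yTex/uTex/vTex/vTexCoord) are assumed.
private static final String YUV2RGB_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform sampler2D yTex;\n" +
        "uniform sampler2D uTex;\n" +
        "uniform sampler2D vTex;\n" +
        "void main() {\n" +
        "    float y = texture2D(yTex, vTexCoord).r;\n" +
        "    float u = texture2D(uTex, vTexCoord).r - 0.5;\n" +
        "    float v = texture2D(vTex, vTexCoord).r - 0.5;\n" +
        "    gl_FragColor = vec4(y + 1.402 * v,\n" +
        "                        y - 0.344 * u - 0.714 * v,\n" +
        "                        y + 1.772 * u,\n" +
        "                        1.0);\n" +
        "}\n";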
This is a known driver issue that we have reported to ARM. A future update should fix it.
EDIT: Status update
We've now managed to reproduce slow upload conditions for one path on the public firmware, which you are possibly hitting, and this will be fixed in the next driver release.
If you double-buffer texture IDs (e.g. frame N = ID X, N+1 = ID Y, N+2 = ID X, N+3 = ID Y, etc) for the textures you are uploading to it should help avoid this on the current firmware.
Thanks,
Iso
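To illustrate the suggested double-buffering workaround, here is a minimal sketch using the GLES20 Java bindings; frameW, frameH and yBuffer are assumed to come from the decoder, and both textures are assumed to be pre-allocated with glTexImage2D as in the question:

int[] texIds = new int[2];
GLES20.glGenTextures(2, texIds, 0);
// ... allocate both textures with glTexImage2D, as in the question ...

int frame = 0;
// Per decoded frame: alternate between the two texture names so the
// driver never stalls on a texture the GPU is still reading from.
int tex = texIds[frame & 1];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, frameW, frameH,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yBuffer);
// ... draw using tex, then frame++ before the next upload ...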
I can confirm this has been fixed in Android 4.3 - I'm seeing a performance increase by a factor of 2-3 with RGBA format and by a factor of 10-50 with other texture formats over Android 4.2.2. These results apply for both glTexImage2D and glTexSubImage2D. (Can't add comments yet so I had to put this here)
EDIT: If you're stuck with 4.2.2, you could try using RGBA texture instead, it should have better performance (3-10x or so with larger power-of-two texture sizes).

Chart drawing fine on emulator but not on phone

I drew a basic Smith chart on the canvas using circles, arcs and lines. The app runs perfectly on emulator screens of numerous sizes, but once I tried it on an actual device (Android 2.3.5) the chart does not line up as it should, i.e. some objects are out of place.
While writing the code I was careful to use getWidth() and getHeight() for the parameters instead of hard-coded pixels, so that the app would work correctly on all devices. Below is an example of the code I used:
canvas.drawCircle(canvas.getWidth()*1/2, canvas.getHeight()*3/8, canvas.getWidth()*475/1000, black);
canvas.drawCircle(canvas.getWidth()*5/8, canvas.getWidth()*5/8, canvas.getWidth()*349/1000, black);
canvas.drawCircle(canvas.getWidth()*6/8, canvas.getWidth()*5/8, canvas.getWidth()*228/1000, black);
canvas.drawCircle(canvas.getWidth()*7/8, canvas.getWidth()*5/8, canvas.getWidth()*103/1000, black);
arc0.set(canvas.getWidth()/2, canvas.getHeight()*-139/700, canvas.getWidth()*100/69, canvas.getHeight()*3/8);
arc1.set(canvas.getWidth()*-6/112, canvas.getHeight()*-80/100, canvas.getWidth()*195/100, canvas.getHeight()*72/192);
arc2.set(canvas.getWidth()*7/10, canvas.getHeight()*70/700, canvas.getWidth()*125/100, canvas.getHeight()*3/8);
arc3.set(canvas.getWidth()/2, canvas.getHeight()*3/8, canvas.getWidth()*100/69, canvas.getHeight()*91/96);
arc4.set(canvas.getWidth()*-8/112, canvas.getHeight()*3/8, canvas.getWidth()*195/100, canvas.getHeight()*150/100);
arc5.set(canvas.getWidth()*7/10, canvas.getHeight()*3/8, canvas.getWidth()*125/100, canvas.getHeight()*65/100);
The chart lines up fine on all the different emulator screen sizes I have tried, so could anybody tell me why it doesn't line up on an actual device? Thanks.
You have to use fractional (floating-point) values, not integer ratios. Moreover, take a look at how the values you get from getWidth() are converted; integer division may be truncating some of them.
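For illustration, here is the first drawCircle call rewritten with float arithmetic; the black Paint is taken from the question:

// Compute proportions as floats so integer division cannot truncate them.
float w = canvas.getWidth();
float h = canvas.getHeight();
canvas.drawCircle(w * 0.5f, h * 0.375f, w * 0.475f, black);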
