I can successfully create and load ETC textures in Android,
using the calls:
ETC1Texture etc1tex = new ETC1Texture(...);
gl11.glCompressedTexImage2D(GL10.GL_TEXTURE_2D, 0/*level*/,
ETC1.ETC1_RGB8_OES/*internal format*/,
etc1tex.getWidth(), etc1tex.getHeight(),
0/*border*/,
etc1tex.getData().capacity()/*imagesize*/,
etc1tex.getData());
But now I need to update this texture with new image data.
I am using the SubImage call:
gl11.glCompressedTexSubImage2D(GL10.GL_TEXTURE_2D, 0/*level*/,
0, 0, etc1tex.getWidth(), etc1tex.getHeight(),
ETC1.ETC1_RGB8_OES,
etc1tex.getData().capacity(),
etc1tex.getData());
which takes more or less the same parameters as the previous call.
But it's not working; my texture doesn't change at all.
However, if I simply replace the SubImage call with the first one, I can see some distortion in the texture when it updates...
Does anyone know how I can use this CompressedTexSubImage call?
Yeah, I ran into the same problem.
When I use
glCompressedTexImage2D(texinfo.glTarget + face, level, glInternalFormat, pixelWidth, pixelHeight, 0, faceLodSize, data);
it works.
But when I use
glCompressedTexImage2D(texinfo.glTarget + face, level, glInternalFormat, pixelWidth, pixelHeight, 0, faceLodSize, NULL);
and then
glCompressedTexSubImage2D(texinfo.glTarget + face, level, 0, 0, pixelWidth, pixelHeight, glInternalFormat, faceLodSize, data);
it does not work.
The GL error is GL_INVALID_OPERATION.
I need to use glCompressedTexSubImage2D because I don't load a texture into a single buffer; it may be loaded into more than one tile buffer.
Once a tile has finished loading, I call glCompressedTexSubImage2D to upload it.
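For reference, the tile-upload pattern being described looks roughly like this sketch (illustrative names such as gl11, texWidth, xOffset and tileBuffer; as the answer below notes, ETC1 implementations are not required to accept the sub-image call, so it may fail with GL_INVALID_OPERATION):
// Allocate the full compressed level once (ETC1 packs 4x4 pixel blocks into 8 bytes).
int fullSize = ETC1.getEncodedDataSize(texWidth, texHeight);
gl11.glCompressedTexImage2D(GL10.GL_TEXTURE_2D, 0, ETC1.ETC1_RGB8_OES,
        texWidth, texHeight, 0, fullSize, fullBuffer);

// Later, when one tile has finished loading, upload only that region.
// The offsets and tile dimensions must be multiples of the 4-pixel block size.
int tileSize = ETC1.getEncodedDataSize(tileWidth, tileHeight);
gl11.glCompressedTexSubImage2D(GL10.GL_TEXTURE_2D, 0,
        xOffset, yOffset, tileWidth, tileHeight,
        ETC1.ETC1_RGB8_OES, tileSize, tileBuffer);

int err = gl11.glGetError(); // GL_INVALID_OPERATION here means sub-updates are unsupported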
According to the API (https://www.khronos.org/opengles/sdk/1.1/docs/man/glCompressedTexSubImage2D.xml)
"The required paletted formats do not allow subimage updates, but
other formats defined by extensions may."
I assume this means that for ETC1 compression, subimage just isn't allowed.
I'm currently trying to display a video frame using OpenGL.
So far it works, but I have some color problems.
I'm using this as the reference for my logic.
I have this code:
//YUV420SP data
uint8_t *decodedBuff = AMediaCodec_getOutputBuffer(d->codec, status, &bufSize);
buildTexture(decodedBuff, decodedBuff+w*h, decodedBuff+w*h, w, h);
renderFrame();
but it displays with the wrong colors.
decodedBuff = Y
decodedBuff+w*h = U
decodedBuff+w*h*5/4 = V
but this separation formula is for YUV420P.
Do you guys happen to know what it is for YUV420SP?
Your help is very much appreciated.
If you are doing it this way you are doing it wrong. You should never manually read raw data from video surfaces in fragment shaders.
Generate a SurfaceTexture, bind it to an OpenGL ES texture, and use EGL_image_external to access the texture via an external image sampler.
This will give you direct access to the video data in your shader, including automatic handling of the memory format and color conversion, in many cases for "free" because it's backed by GPU hardware acceleration.
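A minimal Java-side sketch of that setup (the codec and format variables are placeholders; with the NDK decoder from the question you would instead get an ANativeWindow from the Surface and pass it to AMediaCodec_configure):
// Create a GL texture and back it with a SurfaceTexture.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
Surface decoderSurface = new Surface(surfaceTexture);

// Let the decoder render straight into the surface instead of ByteBuffers.
codec.configure(format, decoderSurface, null, 0);

// Per frame: release the decoded output buffer with render=true, then latch it.
surfaceTexture.updateTexImage();

// The fragment shader samples through the external-image extension;
// the driver performs the YUV -> RGB conversion.
String fragmentShader =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "uniform samplerExternalOES uTexture;\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() { gl_FragColor = texture2D(uTexture, vTexCoord); }";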
I want to take screenshot of current frame in OpenGL for further processing and I'm trying to improve the performance of glReadPixels by using PBO to asynchronously read framebuffers.
I'm under the impression that glReadPixels after GL_PIXEL_PACK_BUFFER is bound to buffer should return immediately, but it actually takes similar or even more time than not using PBO.
Here are samples of my code:
// Setup PBO
GLES30.glGenBuffers(nPbo, pboIndex, 0);
for (int i = 0; i < nPbo; i++) {
    GLES30.glBindBuffer(GL_PIXEL_PACK_BUFFER, pboIndex[i]);
    GLES30.glBufferData(GL_PIXEL_PACK_BUFFER, size, null, GL_STREAM_READ);
}
GLES30.glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
......
// For each frame, trigger async transfer of framebuffer to PBO.
// Note that I don't even map the PBO to memory yet
GLES30.glBindBuffer (GL_PIXEL_PACK_BUFFER, pboIndex[index]);
// The following is a JNI method to overload glReadPixels in GLES20.glReadPixels,
// to allow passing int offset to the last param in order to use PBO,
// and slowdown (around 500ms on my device) happens here
GLES3PBOReadPixelsFix.glReadPixelsPBO(0, 0, mWidth, mHeight, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
GLES30.glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
Based on this article, the cause of the slowdown could be the conversion between the internal format, which may be GL_BGRA, and the pixel transfer format, which is GL_RGBA in my code. Changing the transfer format to GL_RGB reduces the latency of glReadPixels to around 100 ms, but when I map the buffer with GLES30.glMapBufferRange the output frame doesn't look rendered correctly. I also tried the GL_BGRA format from GLES11Ext, but it causes GL_INVALID_OPERATION in glReadPixels.
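A quick way to check whether such a format conversion is the culprit is to query the implementation's preferred readback format (a small illustrative check):
int[] fmt = new int[1], type = new int[1];
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_FORMAT, fmt, 0);
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_TYPE, type, 0);
// If this reports something other than GL_RGBA / GL_UNSIGNED_BYTE, reading back
// as GL_RGBA forces a per-pixel conversion, which can explain the stall.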
Is there any other way to make glReadPixels on Android return immediately so that PBO can improve performance?
As Reto suggested, it turned out to be an implementation-specific issue. The GPU I was originally testing on is an Adreno 306. When I run the same code on a Samsung Note 4 (Adreno 420), it works as expected. So it is always worthwhile to test on different devices and GPUs for this type of issue.
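For completeness, the usual way to actually benefit from the PBO is to read into one buffer while mapping a different, older one, so the map never waits on an in-flight transfer. A minimal sketch (assuming nPbo >= 2, a simple frameCount counter, and the int-offset glReadPixels overload available since API 24, or the JNI wrapper from the question):
int writeIndex = frameCount % nPbo;
int readIndex  = (frameCount + 1) % nPbo;   // in steady state, the oldest PBO

// Kick off the asynchronous transfer into the current PBO.
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pboIndex[writeIndex]);
GLES30.glReadPixels(0, 0, mWidth, mHeight, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);

// Map the oldest PBO; its transfer was issued several frames ago,
// so mapping it should not stall the pipeline.
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pboIndex[readIndex]);
ByteBuffer pixels = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, size, GLES30.GL_MAP_READ_BIT);
// ... process pixels ...
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
frameCount++;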
I am working on an Android project in which I am using OpenCV to detect faces in all the images in the gallery. The face detection runs in a service, which keeps working until all the images have been processed. It stores the detected faces in internal storage and also shows them in a grid view if the activity is open.
My code is:
CascadeClassifier mJavaDetector = null;

public void getFaces()
{
    for (int i = 0; i < size; i++)
    {
        File file = new File(urls.get(i));
        imagepath = urls.get(i);
        // decodeFile() expects a path String, not a File.
        defaultBitmap = BitmapFactory.decodeFile(file.getPath(), bitmapFactoryOptions);

        mJavaDetector = new CascadeClassifier(FaceDetector.class.getResource("lbpcascade_frontalface").getPath());

        Mat image = new Mat(defaultBitmap.getWidth(), defaultBitmap.getHeight(), CvType.CV_8UC1);
        Utils.bitmapToMat(defaultBitmap, image);

        MatOfRect faceDetections = new MatOfRect();
        try
        {
            mJavaDetector.detectMultiScale(image, faceDetections, 1.1, 10, 0, new Size(20, 20), new Size(image.width(), image.height()));
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }

        if (faceDetections.toArray().length > 0)
        {
            // store the detected faces / update the grid view
        }
    }
}
Everything works, but it detects faces very slowly; the performance is poor. When I debugged the code, I found that the line taking the most time is:
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
I have checked multiple posts about this problem but didn't find any solution.
Please tell me what I should do to solve this problem.
Any help would be greatly appreciated. Thank you.
You should pay attention to the parameters of detectMultiScale():
scaleFactor – Parameter specifying how much the image size is reduced at each image scale. This parameter is used to create a scale pyramid. It is necessary because the model has a fixed size during training; without the pyramid, the only detectable size would be that fixed one (which can also be read from the XML). However, face detection can be made scale-invariant by using a multi-scale representation, i.e., detecting large and small faces with the same detection window.
scaleFactor depends on the size of your trained detector, but in fact, you need to set it as high as possible while still getting "good" results, so this should be determined empirically.
Your value of 1.1 can be a good choice for this purpose. It means a relatively small resizing step is used (the size is reduced by 10%), which increases the chance of hitting a size that matches the model. If your trained detector has the size 10x10, then you can detect faces of size 11x11, 12x12 and so on. However, a factor of 1.1 requires roughly twice as many pyramid layers (and about 2x the computation time) as a factor of 1.2.
minNeighbors – Parameter specifying how many neighbours each candidate rectangle should have to retain it.
The cascade classifier works with a sliding-window approach: you slide a window over the image, then resize the image and search again until it cannot be resized any further. In every iteration the positive outputs (of the cascade classifier) are stored, but unfortunately it also produces many false positives. To eliminate the false positives and get the proper face rectangle out of the detections, a neighbourhood approach is applied. 3-6 is a good value for it. If the value is too high, you can lose true positives too.
minSize – In terms of the sliding-window approach described under minNeighbors, this is the smallest window the cascade can detect. Objects smaller than this are ignored. Usually cv::Size(20, 20) is enough for face detection.
maxSize – Maximum possible object size. Objects bigger than that are ignored.
Finally you can try different classifiers based on different features (such as Haar, LBP, HoG). Usually, LBP classifiers are a few times faster than Haar's, but also less accurate.
And it is also strongly recommended to look over these questions:
Recommended values for OpenCV detectMultiScale() parameters
OpenCV detectMultiScale() minNeighbors parameter
Instead of reading images as a Bitmap and then converting them to Mat with Utils.bitmapToMat(defaultBitmap, image), you can directly use Mat image = Highgui.imread(imagepath); see here for the imread() function.
Also, the line below takes too much time because the detector is looking for faces of at least Size(20, 20), which is pretty small. Check this video for a visualization of face detection using OpenCV.
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
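Putting these suggestions together, a rough sketch could look like this (cascadePath is a placeholder; OpenCV 2.4.x names as in the question, on 3.x use Imgcodecs.imread):
// Create the classifier once, outside the per-image loop.
CascadeClassifier detector = new CascadeClassifier(cascadePath);

for (String path : urls) {
    // Read directly as a single-channel grayscale Mat, skipping Bitmap entirely.
    Mat gray = Highgui.imread(path, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
    if (gray.empty()) continue;

    MatOfRect faces = new MatOfRect();
    // A larger minSize and a slightly bigger scaleFactor reduce the number of
    // pyramid levels and window positions, so detection runs noticeably faster.
    detector.detectMultiScale(gray, faces, 1.2, 4, 0,
            new Size(gray.width() / 10, gray.height() / 10),
            new Size(gray.width(), gray.height()));
}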
On Android, using OpenGL ES 2.0, I am trying to run some performance tests with different internal texture formats.
Initially I have a lot of RGBA textures (png) which I want to load and store internally in a different format with OpenGL (for example RGB and LUMINANCE). I load my textures using glTexImage2D like this:
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),resourceId);
...
int size = bitmap.getRowBytes() * bitmap.getHeight();
ByteBuffer b = ByteBuffer.allocate(size);
bitmap.copyPixelsToBuffer(b);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(),
bitmap.getHeight(), 0, GLES20.GL_RGBA,
GLES20.GL_UNSIGNED_BYTE, b);
This works fine. However, if I change the first GLES20.GL_RGBA (the internalFormat parameter) to anything else (GLES20.GL_RGB or GLES20.GL_LUMINANCE), my texture appears all black. Changing the second GLES20.GL_RGBA to the same value will display something, but obviously not correctly, since the original data is RGBA.
I thought maybe it has something to do with the shader code, i.e. that texture2D(..) returns a different value because the internal format of the texture is different. My shader code is simply:
gl_FragColor = texture2D(texture, fragment_texture_coordinate);
I tried changing this around too, but no luck yet. So I thought maybe glTexImage2D is not working the way I think it does (I am not an expert in this area whatsoever).
What am I doing wrong?
Edit:
I overlooked this little detail on texImage2D. It appears that:
internalformat must match format. No conversion between formats is supported during texture image processing. type may be used as a hint to specify how much precision is desired, but a GL implementation may choose to store the texture array at any internal resolution it chooses.
What I gather from this is that if you want to store your textures differently from their original format, you'll have to convert them yourself.
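A minimal sketch of such a conversion, assuming ARGB_8888 bitmap data laid out as RGBA bytes (variable names mirror the snippet above):
int w = bitmap.getWidth(), h = bitmap.getHeight(), stride = bitmap.getRowBytes();
ByteBuffer rgba = ByteBuffer.allocate(stride * h);
bitmap.copyPixelsToBuffer(rgba);

// Collapse each RGBA pixel to a single luma byte (integer BT.601 approximation).
ByteBuffer lum = ByteBuffer.allocateDirect(w * h);
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        int i = y * stride + x * 4;
        int r = rgba.get(i) & 0xFF;
        int g = rgba.get(i + 1) & 0xFF;
        int b = rgba.get(i + 2) & 0xFF;
        lum.put((byte) ((r * 77 + g * 150 + b * 29) >> 8));
    }
}
lum.position(0);

// Luminance rows are one byte per pixel, so relax the default 4-byte row alignment.
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, w, h, 0,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, lum);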
Your fragment shader must be written to agree with the format you are giving to glTexImage2D(). For GL_RGB, it should force the alpha to 1.0, like this:
vec3 Color_RGB = texture2D(sampler2d, texCoordinate).rgb;
gl_FragColor = vec4(Color_RGB, 1.0);
But, for GL_RGBA, it should look like this:
vec4 Color_RGBA = texture2D(sampler2d, texCoordinate);
gl_FragColor = Color_RGBA;
And, as has been discussed, you can only use the Android Bitmap class for textures if your PNG files have no transparency. This article explains that:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1
I have an Android application that displays VGA (640x480) frames using OpenGL ES. The application reads each frame from a movie file and updates the texture accordingly.
My problem is that it takes almost 30 ms to draw each frame using OpenGL. A similar test using Canvas/drawBitmap took around 6 ms on the same device.
I'm following the same OpenGL calls that VLC Media Player is using, so I'm assuming that those are optimized for this purpose.
I just wanted to hear your thoughts and ideas about it.
Are you sure that the bitmaps are loaded as RGB_565? Try this:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inPreferredConfig = Bitmap.Config.RGB_565;
bm = BitmapFactory.decodeByteArray(temp, 0, temp.length,opt);
Let me know!
Which calls are you using?
Make sure that you create the texture only once (glTexImage2D) and afterwards just update it with the new buffer. You can also disable other GL features such as the depth buffer, stencil, accumulation, lighting, etc.
If none of these helps, check your OpenGL implementation and make sure it uses hardware (GPU) acceleration.
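A small sketch of that "create once, update per frame" pattern (GLES20 names; the 640x480 size and the frameBuffer variable are illustrative):
// One-time setup: create the texture and allocate its storage.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, 640, 480, 0,
        GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, null);

// Every frame: only replace the pixel data, never re-create the texture.
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, 640, 480,
        GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, frameBuffer);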