glReadPixels failed with format GL_ALPHA - android

I want to draw fonts in my Android game using the FreeType library. I get the glyph texture from the library and upload it to the FBO, which I use to render the string label.
When I run this code it works fine; I get the expected data and the font shows up correctly:
for (int j = 0; j < height; j++) {
    glReadPixels(0, j, width, 1,
        GL_RGBA, GL_UNSIGNED_BYTE, data + j*bytesPerRow);
}
But after I change the format to GL_ALPHA, it always returns 0 on the Android device, and the GL error log says: got error: 0x500. So does that mean I can't read the pixels with GL_ALPHA?
The failing code is:
for (int j = 0; j < height; j++) {
    glReadPixels(0, j, width, 1,
        GL_ALPHA, GL_UNSIGNED_BYTE, data + j*bytesPerRow);
}
I don't know why. Any help?

OpenGL ES is only required to support 2 format / data type pairs in a call to glReadPixels (...):
GL_RGBA, GL_UNSIGNED_BYTE (you already know this one)
An implementation-defined pair, which you can query with GL_IMPLEMENTATION_COLOR_READ_FORMAT and GL_IMPLEMENTATION_COLOR_READ_TYPE
You have unfortunately discovered that GL_ALPHA, GL_UNSIGNED_BYTE is NOT the second supported format / data type pair.
To figure out what the second supported pair is, consider the following code:
GLint imp_fmt, imp_type;
glGetIntegerv (GL_IMPLEMENTATION_COLOR_READ_FORMAT, &imp_fmt);
glGetIntegerv (GL_IMPLEMENTATION_COLOR_READ_TYPE, &imp_type);
printf ("Supported Color Format/Type: %x/%x\n", imp_fmt, imp_type);
You will have to adjust the code accordingly, since this is C and you are using Java... but you get the idea.
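For reference, roughly the same query from Java through the android.opengl.GLES20 bindings might look like the sketch below (untested; it assumes the calls run on the thread that owns the current GL context, e.g. inside onDrawFrame, and the log tag is arbitrary):
int[] impFmt = new int[1];
int[] impType = new int[1];
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_FORMAT, impFmt, 0);
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_TYPE, impType, 0);
// Prints something like "Supported Color Format/Type: 0x1908/0x1401" (GL_RGBA / GL_UNSIGNED_BYTE).
Log.d("ReadPixels", String.format("Supported Color Format/Type: 0x%x/0x%x", impFmt[0], impType[0]));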
Chances are very good that your implementation does not have a single-channel format for use with glReadPixels (...) considering there is no single-channel color-renderable format without the extension: GL_EXT_texture_rg.

Related

Run inference with an OpenCV image

I have an Android project with OpenCV 4.0.1 and TFLite installed.
I want to run inference with a pretrained MobileNetV2 on a cv::Mat which I extracted and cropped from a CameraBridgeViewBase (Android style).
But it's proving difficult.
I followed this example.
That example runs inference on a ByteBuffer variable called "imgData" (line 71, class: org.tensorflow.lite.examples.classification.tflite.Classifier).
That imgData seems to be filled in the method called "convertBitmapToByteBuffer" from the same class (line 185), adding pixel by pixel from a bitmap that appears to be cropped a little earlier.
private int[] intValues = new int[224 * 224];
Mat _croppedFace = new Mat(); // Cropped image from the CvCameraViewFrame.rgba() method.
float[][] outputVal = new float[1][1]; // Output value from my MobileNetV2 trained model (I changed the output during training; tested in Python)
// Following: https://stackoverflow.com/questions/13134682/convert-mat-to-bitmap-opencv-for-android
Bitmap bitmap = Bitmap.createBitmap(_croppedFace.cols(), _croppedFace.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(_croppedFace, bitmap);
convertBitmapToByteBuffer(bitmap); // This call should be used the same way as in the example.
// runInference();
_tflite.run(imgData, outputVal);
But it looks like the input_shape of my NN is not correct, even though I'm following the MobileNet example, because my NN is a MobileNetV2.
I've solved the error, but I'm sure that it isn't the best way to do it.
Keras MobilenetV2 input_shape is: (nBatches, 224, 224, nChannels).
I just want to predict a single image, so nBatches == 1, and I'm working in RGB mode, so nChannels == 3.
// Nasty nasty, but works. nBatches == 2? -- _croppedFace.shape() == (244, 244), 3 channels.
float[][][][] _inputValue = new float[2][_croppedFace.cols()][_croppedFace.rows()][3];
// Fill the _inputValue
for (int i = 0; i < _croppedFace.cols(); ++i)
    for (int j = 0; j < _croppedFace.rows(); ++j)
        for (int z = 0; z < 3; ++z)
            _inputValue[0][i][j][z] = (float) _croppedFace.get(i, j)[z] / 255; // DL works better with 0:1 values.
/*
The output value has this shape, but I don't really know why.
I'm sure that one of those 2's is for nClasses (I'm working with 2 classes),
but I don't really know why it's using the other one.
*/
float[][] outputVal = new float[2][2];
// Tensorflow lite interpreter
_tflite.run(_inputValue , outputVal);
In Python it has the same shape:
Python prediction:
[[XXXXXX, YYYYY]] <- As expected for the last layer that I made; this is just a prototype NN.
Hope this helps someone, and also that someone can improve the answer, because this is not very optimized.
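If anyone wants a starting point for improving it, here is a hedged sketch (untested): it assumes the crop is resized to the 224x224 input the Keras model expects, that the model really takes a batch of 1, and it uses OpenCV's Imgproc.resize purely as an example.
// Resize the crop to the network input size (assumption: the model expects 1 x 224 x 224 x 3).
Mat resized = new Mat();
Imgproc.resize(_croppedFace, resized, new Size(224, 224));
float[][][][] inputValue = new float[1][224][224][3];
for (int row = 0; row < 224; ++row) {
    for (int col = 0; col < 224; ++col) {
        double[] px = resized.get(row, col); // OpenCV indexes as (row, col)
        for (int z = 0; z < 3; ++z) {
            inputValue[0][row][col][z] = (float) (px[z] / 255.0);
        }
    }
}
// With a batch of 1 and 2 classes, the output should come back as [1][2].
float[][] outputVal = new float[1][2];
_tflite.run(inputValue, outputVal);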

Unity native OpenGL texture displayed four times

I'm currently facing a problem I simply don't understand.
I employ ARCore for an inside-out tracking task. Since I need to do some additional image processing, I use Unity's capability to load a native C++ plugin. At the very end of each frame I pass the image in YUV_420_888 format as a raw byte array to my native plugin.
A texture handle is created right at the beginning of the component's initialization:
private void CreateTextureAndPassToPlugin()
{
    Texture2D tex = new Texture2D(640, 480, TextureFormat.RGBA32, false);
    tex.filterMode = FilterMode.Point;
    tex.Apply();
    debug_screen_.GetComponent<Renderer>().material.mainTexture = tex;
    // Pass texture pointer to the plugin
    SetTextureFromUnity(tex.GetNativeTexturePtr(), tex.width, tex.height);
}
Since I only need the grayscale image, I basically ignore the UV part of the image and only use the Y plane, as shown in the following:
uchar *p_out;
int channels = 4;
for (int r = 0; r < image_matrix->rows; r++) {
    p_out = image_matrix->ptr<uchar>(r);
    for (int c = 0; c < image_matrix->cols * channels; c++) {
        unsigned int idx = r * y_row_stride + c;
        p_out[c] = static_cast<uchar>(image_data[idx]);
        p_out[c + 1] = static_cast<uchar>(image_data[idx]);
        p_out[c + 2] = static_cast<uchar>(image_data[idx]);
        p_out[c + 3] = static_cast<uchar>(255);
    }
}
Then, each frame, the image data is put into a GL texture:
GLuint gltex = (GLuint)(size_t)(g_TextureHandle);
glBindTexture(GL_TEXTURE_2D, gltex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, current_image.data);
I know that I use way too much memory by creating and passing the texture as RGBA, but since GL_R8 is not supported by OpenGL ES 3 and GL_ALPHA always led to internal OpenGL errors, I just pass the greyscale value to each color component.
However in the end the texture is rendered as can be seen in the following image:
At first I thought that the reason for this might lie in the other channels having the same values; however, setting all channels other than the first one to any value does not have any impact.
Am I missing something with respect to OpenGL texture creation?
YUV_420_888 is a multi-plane format, where the luminance plane contains only a single channel per pixel.
for (int c = 0; c < image_matrix->cols * channels; c++) {
unsigned int idx = r * y_row_stride + c;
Your loop bounds assume c counts in multiples of 4 channels, which is right for the output surface, but you then use it also when computing the input surface index. The input surface plane you are using contains only one channel, so idx is wrong.
In general you are also overwriting the same memory multiple times - the loop increments c by one each iteration, but you then write to c, c+1, c+2, and c+3, so you overwrite three of the values you wrote last time.
Shorter answer - your OpenGL ES code is fine, but I think you're filling the texture with bad data.
Untested, but I think you need:
for (int c = 0; c < image_matrix->cols * channels; c += channels) {
unsigned int idx = (r * y_row_stride) + (c / channels);

Crash Android Program On OpenCV Core.DCT() Method

I want to write an Android program to compute the DCT of an input image, and I am using the OpenCV Android framework. First of all I convert the input image to grayscale, and then I want to compute the DCT using Core.dct() on 16*16 blocks. This is the DCT computation part:
int rownum = 1;
for (int i = 1; i <= M - B + 1; i++) {
    for (int j = 1; j <= N - B + 1; j++) {
        Mat SubImg = GrayImage.submat(new Range(i, i+B-1), new Range(j, j+B-1));
        Mat Block_DCT = Mat.zeros(16, 16, SubImg.type());
        Mat Block_DCT_Quantized = Mat.zeros(16, 16, Block_DCT.type());
        Core.dct(SubImg, Block_DCT);
        Core.divide(Block_DCT, SQ16, Block_DCT_Quantized);
        Mat row = Mat.zeros(1, B*B, Block_DCT_Quantized.type());
        Block_DCT_Quantized.reshape(1, B*B);
        rownum = rownum + 1;
    }
}
I debugged the code and figured out that the application crashes on the Core.dct() line!
I don't know what the problem is, but I suppose it's because of the input type...
Can anybody help me? What should I do?
Update:
I found the solution. The problem was the size and type of the Mat objects; the Core.dct() method only works with a Mat of a type like CV_64FC1. I used the SubImg.convertTo() function to change the type and corrected the size of the block, so it worked.
Thanks to Rui Marques for answering.
dct needs float input.
You have to convert your image to CvType.CV_32F or CvType.CV_64F before applying dct / dft.
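A minimal sketch of that conversion, reusing the names from the question (untested; note also that Range(start, end) is end-exclusive, so Range(i, i + B) gives a full 16-sample block while Range(i, i + B - 1) gives only 15, and Core.dct() expects an even-sized floating-point input):
Mat SubImg = GrayImage.submat(new Range(i, i + B), new Range(j, j + B)); // full 16x16 block
Mat SubImg32f = new Mat();
SubImg.convertTo(SubImg32f, CvType.CV_32FC1); // or CvType.CV_64FC1
Mat Block_DCT = new Mat();
Core.dct(SubImg32f, Block_DCT);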

Using GL_UNSIGNED_SHORT_4_4_4_4 for textures

I have a method for loading texture into OpenGL:
bool AndroidGraphicsManager::setTexture(
    UINT& id, UCHAR* image, UINT width, UINT height,
    bool wrapURepeat, bool wrapVRepeat, bool useMipmaps)
{
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
        width, height, 0, GL_RGBA,
        GL_UNSIGNED_BYTE, (GLvoid*) image);
    int minFilter = GL_LINEAR;
    if (useMipmaps) {
        glGenerateMipmap(GL_TEXTURE_2D);
        minFilter = GL_NEAREST_MIPMAP_NEAREST;
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    int wrap_u = wrapURepeat ? GL_REPEAT : GL_CLAMP_TO_EDGE;
    int wrap_v = wrapVRepeat ? GL_REPEAT : GL_CLAMP_TO_EDGE;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrap_u);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrap_v);
    glBindTexture(GL_TEXTURE_2D, 0);
    return !checkGLError("Loading texture.");
}
This works fine. I load the texture using libpng, which gives me an array of unsigned chars. I then pass this array to the method above. The pictures are 32-bit, so I assume that in the UCHAR array each UCHAR holds a single color component and four UCHARs make up one pixel. I wanted to try using 16-bit textures, so I changed GL_UNSIGNED_BYTE to GL_UNSIGNED_SHORT_4_4_4_4. But apparently this isn't enough, because it gives me this result:
What else do I need to change to be able to properly display 16-bit textures?
[EDIT] As #DatenWolf suggested I tried using this code:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA4,
width, height, 0, GL_RGBA,
GL_UNSIGNED_SHORT_4_4_4_4, (GLvoid*) image);
I used it on the Linux version of my engine, but the result was exactly the same as in the previous Android screenshot. So is it safe to assume that the problem is with libpng and the way it produces my UCHAR array?
[EDIT2] So I finally managed to compress a 32-bit texture to 16 bits.
All I needed was a simple iteration through the 32-bit texture data and a few bit-shifting operations to compress 32 bits into 16.
void* rgba8888ToRgba4444(void* in, int size) {
    int pixelCount = size / 4;
    ULONG* input = static_cast<ULONG*>(in);
    USHORT* output = new USHORT[pixelCount];
    for (int i = 0; i < pixelCount; i++) {
        ULONG pixel = input[i];
        // Unpack the source data as 8 bit values
        UINT r = pixel & 0xff;
        UINT g = (pixel >> 8) & 0xff;
        UINT b = (pixel >> 16) & 0xff;
        UINT a = (pixel >> 24) & 0xff;
        // Convert to 4 bit values
        r >>= 4; g >>= 4; b >>= 4; a >>= 4;
        output[i] = r << 12 | g << 8 | b << 4 | a;
    }
    return static_cast<void*>(output);
}
I thought that for a mobile device this would yield a performance increase, but unfortunately I saw no gain in performance on a Galaxy S III.
The best thing would probably be to use an image library like DevIL for this kind of thing.
But if you want to convert the image data you get from libpng, you can do something similar to the code below.
Keep in mind that you trade size for speed when doing this.
struct Color {
    unsigned char r:4;
    unsigned char g:4;
    unsigned char b:4;
    unsigned char a:4;
};
// Note: a plain "16 / 255" is integer division and evaluates to 0; use a float factor that maps 0..255 to 0..15.
float factor = 15.0f / 255.0f;
Color colors[imgWidth*imgHeight];
for (int i = 0; i < imgWidth*imgHeight; ++i)
{
    colors[i].r = image[i*4]*factor;
    colors[i].g = image[i*4+1]*factor;
    colors[i].b = image[i*4+2]*factor;
    colors[i].a = image[i*4+3]*factor;
}
But apparently this isn't enough
The token you've changed tells OpenGL the format of the data in the array passed to it, so you have to adjust that as well. However, OpenGL may convert it into any internal format it desires, since you didn't force it into a particular format. That's what the internal format parameter is for (and it works independently of the external data type). So if you want 16 bits of resolution internally, you must change the internal format parameter to GL_RGBA4. The data may remain in the 8-bits-per-channel format. So in your case:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA4,
width, height, 0, GL_RGBA,
…, (GLvoid*) image);
The type parameter must match the layout of your data. If you've originally got 8 bits per channel, then use GL_UNSIGNED_BYTE. But if it's prepackaged RGBA in ushort nibbles, then use GL_UNSIGNED_SHORT_4_4_4_4.

OpenGL (ES) -- glBindBuffer throws IllegalArgumentException: remaining < size()

I've made a buffer of vertices that draws correctly when using glDrawArrays; however, it fails to load into a VBO. Here's the code:
FloatBuffer circleBuffer = ByteBuffer.allocateDirect(numVertices * 3 * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
for (int j = 0; j < numVertices; j++) {
    circleBuffer.put((float) (Math.cos(theta)));
    circleBuffer.put((float) (Math.sin(theta)));
    circleBuffer.put(1);
    theta += 2 * Math.PI / (numVertices);
}
int[] buffer = new int[1];
int circleIndex=0;
gl11.glGenBuffers(1, buffer,0);
circleIndex = buffer[0];
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, circleIndex);
gl11.glBufferData(GL11.GL_ARRAY_BUFFER, circleBuffer.capacity() * 4,
circleBuffer, GL11.GL_STATIC_DRAW);
I printed the capacity of the buffer and it is 105, and the remaining is 0. I also tried reassigning the FloatBuffer as a Buffer. What's wrong here? Thanks!
ERROR/AndroidRuntime(7127): java.lang.IllegalArgumentException: remaining() < size
ERROR/AndroidRuntime(7127): at com.google.android.gles_jni.GLImpl.glBufferData(Native Method)
EDIT -- Solution:
circleBuffer.flip();
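For anyone wondering why this works: the Android GL binding checks the buffer's remaining() against the requested size, and after all the put() calls the position sits at the end of the buffer, so remaining() is 0. Flipping resets the position before the upload, roughly:
circleBuffer.flip(); // position back to 0, limit at the end of the written data
gl11.glBufferData(GL11.GL_ARRAY_BUFFER, circleBuffer.capacity() * 4,
        circleBuffer, GL11.GL_STATIC_DRAW);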
A Java exception which should be deliberately thrown by methods that don't like their parameters. It extends RuntimeException, which means it does not need to be caught.
The singular name notwithstanding, it can represent an unsatisfied constraint between several parameters. The more you use and check the parameters, the closer the exception moves to the method invocation itself.
In many cases, code that is throwing NullPointerException should instead be argument-checking and throwing this, with a decent explanatory message.
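A tiny illustration of that argument-checking pattern (the method and message here are made up):
static void setTimeout(int millis) {
    if (millis < 0) {
        // Fail fast at the point of the bad call, naming the offending value.
        throw new IllegalArgumentException("millis must be >= 0, got: " + millis);
    }
    // ...
}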
