I'm currently facing a problem I simply don't understand.
I'm using ARCore for an inside-out tracking task. Since I need to do some additional image processing, I use Unity's capability to load a native C++ plugin. At the very end of each frame I pass the image in YUV_420_888 format as a raw byte array to my native plugin.
A texture handle is created right at the beginning of the component's initialization:
private void CreateTextureAndPassToPlugin()
{
    Texture2D tex = new Texture2D(640, 480, TextureFormat.RGBA32, false);
    tex.filterMode = FilterMode.Point;
    tex.Apply();
    debug_screen_.GetComponent<Renderer>().material.mainTexture = tex;
    // Pass texture pointer to the plugin
    SetTextureFromUnity(tex.GetNativeTexturePtr(), tex.width, tex.height);
}
Since I only need the grayscale image, I basically ignore the UV part of the image and use only the Y values, as shown in the following:
uchar *p_out;
int channels = 4;
for (int r = 0; r < image_matrix->rows; r++) {
    p_out = image_matrix->ptr<uchar>(r);
    for (int c = 0; c < image_matrix->cols * channels; c++) {
        unsigned int idx = r * y_row_stride + c;
        p_out[c] = static_cast<uchar>(image_data[idx]);
        p_out[c + 1] = static_cast<uchar>(image_data[idx]);
        p_out[c + 2] = static_cast<uchar>(image_data[idx]);
        p_out[c + 3] = static_cast<uchar>(255);
    }
}
Then, each frame, the image data is uploaded into a GL texture:
GLuint gltex = (GLuint)(size_t)(g_TextureHandle);
glBindTexture(GL_TEXTURE_2D, gltex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, current_image.data);
I know that I use far too much memory by creating and passing the texture as RGBA, but since GL_R8 is not supported by OpenGL ES3 and GL_ALPHA always led to internal OpenGL errors, I just pass the greyscale value to each color component.
However, in the end the texture is rendered as can be seen in the following image:
At first I thought the reason for this might lie in the other channels having the same values; however, setting all channels other than the first one to any value does not have any impact.
Am I missing something regarding OpenGL texture creation?
YUV_420_888 is a multi-plane image format, where the luminance plane contains only a single channel per pixel.
for (int c = 0; c < image_matrix->cols * channels; c++) {
unsigned int idx = r * y_row_stride + c;
Your loop bound assumes c counts in multiples of 4 channels, which is right for the output surface, but you then also use it when computing the input surface index. The input plane you are reading contains only one channel, so idx is wrong.
You are also overwriting the same memory multiple times: the loop increments c by one each iteration, but you then write to c, c+1, c+2, and c+3, so you overwrite three of the values you wrote on the previous iteration.
Shorter answer: your OpenGL ES code is fine, but I think you're filling the texture with bad data.
Untested, but I think you need:
for (int c = 0; c < image_matrix->cols * channels; c += channels) {
unsigned int idx = (r * y_row_stride) + (c / channels);
I have an Android project with OpenCV 4.0.1 and TFLite installed.
I want to run inference with a pretrained MobileNetV2 on a cv::Mat that I extracted and cropped from a CameraBridgeViewBase (Android style).
But it's proving difficult.
I followed this example.
That example runs inference on a ByteBuffer variable called "imgData" (line 71, class: org.tensorflow.lite.examples.classification.tflite.Classifier).
That imgData appears to be filled in the method "convertBitmapToByteBuffer" of the same class (line 185), pixel by pixel, from a bitmap that seems to have been cropped shortly before.
private int[] intValues = new int[224 * 224];
Mat _croppedFace = new Mat(); // Cropped image from the CvCameraViewFrame.rgba() method.
float[][] outputVal = new float[1][1]; // Output value from my MobileNetV2 trained model (I changed the output during training; tested in Python).
// Following: https://stackoverflow.com/questions/13134682/convert-mat-to-bitmap-opencv-for-android
Bitmap bitmap = Bitmap.createBitmap(_croppedFace.cols(), _croppedFace.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(_croppedFace, bitmap);
convertBitmapToByteBuffer(bitmap); // Same call as in the example.
// runInference();
_tflite.run(imgData, outputVal);
But it seems the input_shape of my NN is not correct, even though I'm following the MobileNet example, because my NN is a MobileNetV2.
I've solved the error, but I'm sure that it isn't the best way to do it.
Keras MobilenetV2 input_shape is: (nBatches, 224, 224, nChannels).
I just want to predict a single image, so nBatches == 1, and I'm working in RGB mode, so nChannels == 3.
// Nasty nasty, but works. nBatches == 2? -- _croppedFace.shape() == (244, 244), 3 channels.
float[][][][] _inputValue = new float[2][_croppedFace.cols()][_croppedFace.rows()][3];
// Fill the _inputValue
for (int i = 0; i < _croppedFace.cols(); ++i)
    for (int j = 0; j < _croppedFace.rows(); ++j)
        for (int z = 0; z < 3; ++z)
            _inputValue[0][i][j][z] = (float) _croppedFace.get(i, j)[z] / 255; // DL works better with values in [0, 1].
/*
Output val has this shape, but I don't really know why.
I'm sure that one of those 2s is for nClasses (I'm working with 2 classes),
but I don't really know why it uses the other one.
*/
float[][] outputVal = new float[2][2];
// TensorFlow Lite interpreter
_tflite.run(_inputValue, outputVal);
In Python it has the same shape:
Python prediction:
[[XXXXXX, YYYYY]] <- As expected for the last layer that I made; this is just a prototype NN.
Hope someone finds this helpful, and also that someone can improve this answer, because it is not very optimized.
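One possible improvement (an untested sketch: it assumes a standard float32 MobileNetV2 with a 224x224x3 input, the _croppedFace and _tflite variables from above, and a hypothetical 2-class output): fill a direct ByteBuffer instead of the nested float arrays, the way the referenced Classifier example does, which also avoids the batch-dimension hack.
// Untested sketch; needs: import java.nio.ByteBuffer; import java.nio.ByteOrder;
// Assumes _croppedFace has already been resized to 224x224 and has at least 3 channels.
ByteBuffer imgData = ByteBuffer.allocateDirect(1 * 224 * 224 * 3 * 4); // 4 bytes per float
imgData.order(ByteOrder.nativeOrder());
imgData.rewind();
for (int row = 0; row < 224; ++row) {
    for (int col = 0; col < 224; ++col) {
        double[] pixel = _croppedFace.get(row, col); // OpenCV indexes (row, col)
        for (int z = 0; z < 3; ++z) {
            imgData.putFloat((float) (pixel[z] / 255.0)); // normalize to [0, 1]
        }
    }
}
float[][] outputVal = new float[1][2]; // [nBatches][nClasses]; adjust to your model's last layer
_tflite.run(imgData, outputVal);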
I'm trying to convert a YUV image to grayscale, so basically I just need the Y values.
To do so I wrote this little piece of code (with frame being the YUV image):
imageConversionTime = System.currentTimeMillis();
size = frame.getSize();
byte nv21ByteArray[] = frame.getImage();
int lol;
for (int i = 0; i < size.width; i++) {
    for (int j = 0; j < size.height; j++) {
        lol = size.width*j + i;
        yMatrix.put(j, i, nv21ByteArray[lol]);
    }
}
bitmap = Bitmap.createBitmap(size.width, size.height, Bitmap.Config.ARGB_8888);
Utils.matToBitmap(yMatrix, bitmap);
imageConversionTime = System.currentTimeMillis() - imageConversionTime;
However, this takes about 13500 ms. I need it to be a LOT faster (on my computer it takes 8.5 ms in Python). I'm working on a Motorola Moto E 4G 2nd generation; not super powerful, but it should be enough for converting images, right?
Any suggestions?
Thanks in advance.
First of all, I would assign size.width and size.height to local variables. I don't think the compiler will optimize this by default, but I am not sure about that.
Furthermore, create a plain pixel array (an int[]) representing the result instead of using a Mat.
Then you could do something like this:
int[] grayScalePixels = new int[size.width * size.height];
int cntPixels = 0;
In your inner loop set
grayScalePixels[cntPixels] = nv21ByteArray[lol];
cntPixels++;
To get your final image do the following:
Bitmap grayScaleBitmap = Bitmap.createBitmap(grayScalePixels, size.width, size.height, Bitmap.Config.ARGB_8888);
Hope it works properly (I have not tested it, but at least the shown principle should be applicable: relying on a plain pixel array instead of a Mat).
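Putting the fragments together, here is a minimal, equally untested sketch of the idea above (assuming frame.getImage() returns the NV21 buffer and only the leading width*height luminance bytes are needed). Note that Bitmap.createBitmap(int[], ...) expects packed ARGB values, so the gray byte has to be replicated into the R, G and B channels:
int width = size.width;   // cache dimensions in locals
int height = size.height;
byte[] nv21 = frame.getImage();
int[] grayScalePixels = new int[width * height];
for (int p = 0; p < width * height; p++) {
    int y = nv21[p] & 0xFF; // luminance byte as unsigned 0..255
    grayScalePixels[p] = 0xFF000000 | (y << 16) | (y << 8) | y; // opaque gray ARGB pixel
}
Bitmap grayScaleBitmap = Bitmap.createBitmap(grayScalePixels, width, height, Bitmap.Config.ARGB_8888);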
Probably 2 years too late but anyways ;)
To convert to gray scale, all you need to do is set the u/v values to 128 and leave the y values as is. Note that this code is for YUY2 format. You can refer to this document for other formats.
private void convertToBW(byte[] ptrIn, String filePath) {
    // change all u and v values to 127 (cause 128 will cause byte overflow)
    byte[] ptrOut = Arrays.copyOf(ptrIn, ptrIn.length);
    for (int i = 0, ptrInLength = ptrOut.length; i < ptrInLength; i++) {
        if (i % 2 != 0) {
            ptrOut[i] = (byte) 127;
        }
    }
    convertToJpeg(ptrOut, filePath);
}
For NV21/NV12, I think the loop would change to:
for (int i = ptrOut.length * 2 / 3, ptrInLength = ptrOut.length; i < ptrInLength; i++) { ptrOut[i] = (byte) 127; } // the interleaved chroma plane starts after the Y plane, i.e. at 2/3 of the buffer
Note: (didn't try this myself)
Also, I would suggest profiling your Utils.matToBitmap and createBitmap calls separately.
I came across a problem rendering the camera image after doing some processing on its YUV buffer.
I am using the video-overlay-jni-example and, in the OnFrameAvailable method, I am creating a new frame buffer using cv::Mat...
Here is how I create a new frame buffer:
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
After processing, I copy frame.data back to yuv_temp_buffer_ in order to render it on the texture: memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
And this works fine...
The problem starts when I try to execute the OpenCV method findChessboardCorners using the frame that I created before.
findChessboardCorners takes about 90 ms to execute (11 fps); however, the image seems to be rendered at a much slower rate (it appears to render at roughly 0.5 fps on the screen).
Here is the code of the OnFrameAvailable method:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
if (yuv_drawable_ == NULL){
return;
}
if (yuv_drawable_->GetTextureId() == 0) {
LOGE("AugmentedRealityApp::yuv texture id not valid");
return;
}
if (buffer->format != TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP) {
LOGE("AugmentedRealityApp::yuv texture format is not supported by this app");
return;
}
// The memory needs to be allocated after we get the first frame because we
// need to know the size of the image.
if (!is_yuv_texture_available_) {
yuv_width_ = buffer->width;
yuv_height_ = buffer->height;
uv_buffer_offset_ = yuv_width_ * yuv_height_;
yuv_size_ = yuv_width_ * yuv_height_ + yuv_width_ * yuv_height_ / 2;
// Reserve and resize the buffer size for RGB and YUV data.
yuv_buffer_.resize(yuv_size_);
yuv_temp_buffer_.resize(yuv_size_);
rgb_buffer_.resize(yuv_width_ * yuv_height_ * 3);
AllocateTexture(yuv_drawable_->GetTextureId(), yuv_width_, yuv_height_);
is_yuv_texture_available_ = true;
}
std::lock_guard<std::mutex> lock(yuv_buffer_mutex_);
memcpy(&yuv_temp_buffer_[0], buffer->data, yuv_size_);
///
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
if (!stam.isCalibrated()) {
Profiler profiler;
profiler.startSampling();
stam.initFromChessboard(frame, cv::Size(9, 6), 100);
profiler.endSampling();
profiler.print("initFromChessboard", -1);
}
///
memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
swap_buffer_signal_ = true;
}
Here is the code of the initFromChessboard method:
bool STAM::initFromChessboard(const cv::Mat& image, const cv::Size& chessBoardSize, int squareSize)
{
cv::Mat rvec = cv::Mat(cv::Size(3, 1), CV_64F);
cv::Mat tvec = cv::Mat(cv::Size(3, 1), CV_64F);
std::vector<cv::Point2d> imagePoints, imageBoardPoints;
std::vector<cv::Point3d> boardPoints;
for (int i = 0; i < chessBoardSize.height; i++)
{
    for (int j = 0; j < chessBoardSize.width; j++)
    {
        boardPoints.push_back(cv::Point3d(j*squareSize, i*squareSize, 0.0));
    }
}
//getting only the Y channel (many of the functions like face detect and align only needs the grayscale image)
cv::Mat gray(image.rows, image.cols, CV_8UC1);
gray.data = image.data;
bool found = findChessboardCorners(gray, chessBoardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);
#ifdef WINDOWS_VS
printf("Number of chessboard points: %d\n", imagePoints.size());
#elif ANDROID
LOGE("Number of chessboard points: %d", imagePoints.size());
#endif
for (int i = 0; i < imagePoints.size(); i++) {
cv::circle(image, imagePoints[i], 6, cv::Scalar(149, 43, 0), -1);
}
return found;
}
Has anyone had the same problem after processing something in the YUV buffer and then rendering it to the texture?
I did a test on another device (not the Project Tango) using the camera2 API, and there the on-screen rendering rate matches the rate of the OpenCV processing itself.
I appreciate any help.
I had a similar problem. My app slowed down after using the copied YUV buffer and doing some image processing with OpenCV. I would recommend you use the tango_support library to access the YUV image buffer, by doing the following:
In your config function:
int AugmentedRealityApp::TangoSetupConfig() {
TangoSupport_createImageBufferManager(TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP, 1280, 720, &yuv_manager_);
}
In your callback function:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
TangoSupport_updateImageBuffer(yuv_manager_, buffer);
}
In your render thread:
void AugmentedRealityApp::Render() {
TangoImageBuffer* yuv = new TangoImageBuffer();
TangoSupport_getLatestImageBuffer(yuv_manager_, &yuv);
cv::Mat yuv_frame, rgb_img, gray_img;
yuv_frame.create(720*3/2, 1280, CV_8UC1);
memcpy(yuv_frame.data, yuv->data, 720*3/2*1280); // yuv image
cv::cvtColor(yuv_frame, rgb_img, CV_YUV2RGB_NV21); // rgb image
cvtColor(rgb_img, gray_img, CV_RGB2GRAY); // gray image
}
You can share the yuv_manager_ with other objects/threads, so you can access the YUV image buffer wherever you want.
I want to draw fonts in my Android game using the FreeType library. I get the glyph texture from the library and upload it to the FBO, which I use for rendering the string label.
When I run this code it works fine; I get the expected data and the font shows correctly:
for (int j = 0; j < height; j ++) {
glReadPixels ( 0, j, width, 1,
GL_RGBA, GL_UNSIGNED_BYTE, data + j*bytesPerRow);
}
But after I change the format to GL_ALPHA, it always returns 0 on the Android device,
and the GL error log shows: got error: 0x500 (GL_INVALID_ENUM). Does that mean I can't read the pixels with GL_ALPHA?
The failing code is:
for (int j = 0; j < height; j ++) {
glReadPixels ( 0, j, width, 1,
GL_ALPHA, GL_UNSIGNED_BYTE, data + j*bytesPerRow);
}
I don't know why. Any help?
OpenGL ES is only required to support 2 format / data type pairs in a call to glReadPixels (...).
1. GL_RGBA, GL_UNSIGNED_BYTE (you already know this one)
2. An implementation-defined pair, which you query via GL_IMPLEMENTATION_COLOR_READ_FORMAT and GL_IMPLEMENTATION_COLOR_READ_TYPE
You have discovered unfortunately that GL_ALPHA, GL_UNSIGNED_BYTE is NOT the second supported format / data type pair.
To figure out what the second supported pair is, consider the following code:
GLint imp_fmt, imp_type;
glGetIntegerv (GL_IMPLEMENTATION_COLOR_READ_FORMAT, &imp_fmt);
glGetIntegerv (GL_IMPLEMENTATION_COLOR_READ_TYPE, &imp_type);
printf ("Supported Color Format/Type: %x/%x\n", imp_fmt, imp_type);
You will have to adjust the code accordingly, since this is C and you are using Java... but you get the idea.
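For reference, a rough Java equivalent of the query (an untested sketch using android.opengl.GLES20, assuming it runs on the GL thread with a current context):
// Untested sketch; needs: import android.opengl.GLES20; import android.util.Log;
int[] impFmt = new int[1];
int[] impType = new int[1];
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_FORMAT, impFmt, 0);
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_TYPE, impType, 0);
Log.d("ReadPixels", String.format("Supported Color Format/Type: 0x%x/0x%x", impFmt[0], impType[0]));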
Chances are very good that your implementation does not have a single-channel format for use with glReadPixels (...) considering there is no single-channel color-renderable format without the extension: GL_EXT_texture_rg.
I want to display, for example, an *.obj file.
Normally, in OpenGL I would use instructions like:
glBegin(GL_TRIANGLES);
glVertex3f(Face[i].VertexIndex);
glTexcoords2f(Face[i].TexcoordIndex);
glNormal(Face[i].NormalIndex);
glEnd();
But in Android OpenGL I don't have these functions...
I have glDrawElements(...);
but when I want to draw face 34/54/3 (vertex/texcoord/normal array indices),
it draws linearly, i.e. 34/34/34...
So how can I draw an *.obj file?
I searched the web and found this topic:
http://www.anddev.org/android-2d-3d-graphics-opengl-problems-f55/obj-import-to-opengl-trouble-t48883.html So, I'm writing a model editor in C# for my game and I wrote something like this as a test:
public void display2()
{
GL.EnableClientState(ArrayCap.VertexArray);
GL.EnableClientState(ArrayCap.TextureCoordArray);
GL.EnableClientState(ArrayCap.NormalArray);
double[] vertexBuff = new double[faces.Count * 3 * 3];
double[] normalBuff = new double[faces.Count * 3 * 3];
double[] texcorBuff = new double[faces.Count * 3 * 2];
int i_3 = 0, i_2 = 0; // running write offsets into the buffers
foreach (face f in faces)
{
    for (int i = 0; i < f.vector.Length; i++)
    {
        vertexBuff[i_3] = mesh[f.vector[i]].X;
        vertexBuff[i_3 + 1] = mesh[f.vector[i]].Y;
        vertexBuff[i_3 + 2] = mesh[f.vector[i]].Z;
        normalBuff[i_3] = normal[f.normal[i]].X;
        normalBuff[i_3 + 1] = normal[f.normal[i]].Y;
        normalBuff[i_3 + 2] = normal[f.normal[i]].Z;
        texcorBuff[i_2] = texture[f.texCord[i]].X;
        texcorBuff[i_2 + 1] = texture[f.texCord[i]].Y;
        i_3 += 3;
        i_2 += 2;
    }
}
GL.VertexPointer<double>(3, VertexPointerType.Double, 0, vertexBuff);
GL.TexCoordPointer<double>(2, TexCoordPointerType.Double, 0, texcorBuff);
GL.NormalPointer<double>(NormalPointerType.Double, 0, normalBuff);
GL.DrawArrays(BeginMode.Triangles, 0, faces.Count * 3);
GL.DisableClientState(ArrayCap.VertexArray);
GL.DisableClientState(ArrayCap.TextureCoordArray);
GL.DisableClientState(ArrayCap.NormalArray);
}
And it's working... but I think this could be more optimized?
I don't want to convert my model data to array buffers,
because that takes too much space in memory... any suggestions?
I'm not an Android programmer, but I assume it uses OpenGL ES, in which these functions are deprecated (and, by the way, missing).
Tutorials explaining the proper solution are drowned amongst a bunch of others that show how to draw triangles with the glVertex3f functions (because that gives easy and fast results, but it is ultimately pointless). I find that unfortunate, since nobody should use those things.
glBegin/glEnd, glVertex3f, glTexCoord2f, and similar functions are now deprecated for performance's sake (they are "slow" because we have to limit the number of calls to the graphics library). I won't expand much on that, since you can search for it if interested.
Instead, make use of vertex and index buffers. I'm sorry I have no "perfect" link to recommend, but you should easily find what you need on Google :)
However, I dug up some code from an ancient C# project:
Note: the OpenTK binding changes function names, but they remain very close to the OpenGL ones; for example, glVertex3f becomes GL.Vertex3.
The Vertex definition
A simple struct to store your custom vertex information (position, normal (if needed), UV, color...).
[System.Runtime.InteropServices.StructLayout(System.Runtime.InteropServices.LayoutKind.Sequential, Pack = 1)]
public struct Vertex
{
public Core.Math.Vector3 Position;
public Core.Math.Vector3 Normal;
public Core.Math.Vector2 UV;
public uint Coloring;
public Vertex(float x, float y, float z)
{
this.Position = new Core.Math.Vector3(x, y, z);
this.Normal = new Core.Math.Vector3(0, 0, 0);
this.UV = new Core.Math.Vector2(0, 0);
System.Drawing.Color color = System.Drawing.Color.Gray;
this.Coloring = (uint)color.A << 24 | (uint)color.B << 16 | (uint)color.G << 8 | (uint)color.R;
}
}
The Vertex Buffer class
It's a wrapper class around an OpenGL buffer object to handle our vertex format.
public class VertexBuffer
{
public uint Id;
public int Stride;
public int Count;
public VertexBuffer(Graphics.Objects.Vertex[] vertices)
{
int size;
// We create an OpenGL buffer object
GL.GenBuffers(1, out this.Id); //note: out is like passing an object by reference in C#
this.Stride = OpenTK.BlittableValueType.StrideOf(vertices); //size in bytes of the VertexType (Vector3 size*2 + Vector2 size + uint size)
this.Count = vertices.Length;
// Fill the buffer with our vertices data
GL.BindBuffer(BufferTarget.ArrayBuffer, this.Id);
GL.BufferData(BufferTarget.ArrayBuffer, (System.IntPtr)(vertices.Length * this.Stride), vertices, BufferUsageHint.StaticDraw);
GL.GetBufferParameter(BufferTarget.ArrayBuffer, BufferParameterName.BufferSize, out size);
if (vertices.Length * this.Stride != size)
throw new System.ApplicationException("Vertex data not uploaded correctly");
}
}
The Indices Buffer class
Very similar to the vertex buffer, it stores vertex indices of each face of your model.
public class IndexBuffer
{
public uint Id;
public int Count;
public IndexBuffer(uint[] indices)
{
int size;
this.Count = indices.Length;
GL.GenBuffers(1, out this.Id);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, this.Id);
GL.BufferData(BufferTarget.ElementArrayBuffer, (System.IntPtr)(indices.Length * sizeof(uint)), indices,
BufferUsageHint.StaticDraw);
GL.GetBufferParameter(BufferTarget.ElementArrayBuffer, BufferParameterName.BufferSize, out size);
if (indices.Length * sizeof(uint) != size)
throw new System.ApplicationException("Indices data not uploaded correctly");
}
}
Drawing buffers
Then, to render a triangle, you have to create one vertex buffer to store the vertices' positions, and one index buffer containing the indices of the vertices [0, 1, 2] (pay attention to the counter-clockwise winding rule, but it's the same as with the glVertex3f method).
When done, just call this function with the specified buffers. Note that you can use multiple sets of indices with only one vertex buffer, to render only some faces each time.
void DrawBuffer(VertexBuffer vBuffer, IndexBuffer iBuffer)
{
// 1) Ensure that the VertexArray client state is enabled.
GL.EnableClientState(ArrayCap.VertexArray);
GL.EnableClientState(ArrayCap.NormalArray);
GL.EnableClientState(ArrayCap.TextureCoordArray);
// 2) Bind the vertex and element (=indices) buffer handles.
GL.BindBuffer(BufferTarget.ArrayBuffer, vBuffer.Id);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, iBuffer.Id);
// 3) Set up the data pointers (vertex, normal, color) according to your vertex format.
GL.VertexPointer(3, VertexPointerType.Float, vBuffer.Stride, new System.IntPtr(0));
GL.NormalPointer(NormalPointerType.Float, vBuffer.Stride, new System.IntPtr(Vector3.SizeInBytes));
GL.TexCoordPointer(2, TexCoordPointerType.Float, vBuffer.Stride, new System.IntPtr(Vector3.SizeInBytes * 2));
GL.ColorPointer(4, ColorPointerType.UnsignedByte, vBuffer.Stride, new System.IntPtr(Vector3.SizeInBytes * 3 + Vector2.SizeInBytes));
// 4) Call DrawElements. (Note: the last parameter is an offset into the element buffer and will usually be IntPtr.Zero).
GL.DrawElements(BeginMode.Triangles, iBuffer.Count, DrawElementsType.UnsignedInt, System.IntPtr.Zero);
//Disable client state
GL.DisableClientState(ArrayCap.VertexArray);
GL.DisableClientState(ArrayCap.NormalArray);
GL.DisableClientState(ArrayCap.TextureCoordArray);
}
I hope this can help ;)
See this tutorial on glVertex arrays