JavaCV Convert Int Array to Gray Mat - android

I'm trying to do the most basic of things using JavaCV on Android but still failing.
I want to convert the YUV byte[] array received in Camera.PreviewCallback.onPreviewFrame into a grayscale Mat for further processing.
public void onPreviewFrame(byte[] data, Camera camera) {
    /* Extract grayscale from yuv */
    ByteBuffer gray = ByteBuffer.allocate(data.length * 4);
    IntBuffer intBuffer = gray.asIntBuffer();
    int p;
    int size = width * height;
    for (int i = 0; i < size; i++) {
        p = data[i] & 0xFF;
        intBuffer.put(0xff000000 | p << 16 | p << 8 | p);
    }
    Mat g = new Mat(height, width, CV_8UC3);
    byte test[] = gray.array();
    g.data().put(test); /* crashes here */
From my debug inspection it seems that it crashes because of a memory issue. It doesn't give any error trace, just a message like this:
A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0xa0c6a000 in
tid 5475 (com.aztech.jcv)
I have tried CV_8UC1 but it still fails. The issue is probably a memory-size mismatch. Can someone please suggest a working alternative? Using the built-in cvCvtColor doesn't decode properly; it gives an all-black Mat with strange lines.

Related

Android NDK converting YUV420 AIMAGE to RGB

I'm working on streaming camera frames using the NDK Camera2 API. The format of my AImageReader is YUV420; however, I would like to convert it to RGB. I looked at some Java examples that do the same, but for some reason it doesn't work well and the image is distorted.
The frame resolution is 640x480.
Can someone tell me what I am doing wrong here?
void NativeCamera::previewImageCallback(void *context, AImageReader *reader)
{
    Log::Debug(TAG) << "previewImageCallback" << Log::Endl;
    AImage *previewImage = nullptr;
    auto status = AImageReader_acquireLatestImage(reader, &previewImage);
    if (status != AMEDIA_OK)
    {
        return;
    }
    std::thread processor([=]()
    {
        uint8_t *dataY = nullptr;
        uint8_t *dataU = nullptr;
        uint8_t *dataV = nullptr;
        int lenY = 0;
        int lenU = 0;
        int lenV = 0;
        AImage_getPlaneData(previewImage, 0, (uint8_t**)&dataY, &lenY);
        AImage_getPlaneData(previewImage, 1, (uint8_t**)&dataU, &lenU);
        AImage_getPlaneData(previewImage, 2, (uint8_t**)&dataV, &lenV);
        uchar buff[lenY + lenU + lenV];
        memcpy(buff + 0, dataY, lenY);
        memcpy(buff + lenY, dataV, lenV);
        memcpy(buff + lenY + lenV, dataU, lenU);
        cv::Mat yuvMat(480 + 240, 640, CV_8UC1, &buff);
        cv::Mat rgbMat;
        cv::cvtColor(yuvMat, rgbMat, cv::COLOR_YUV2RGB_NV21, 3);
        //colorBuffer defined elsewhere
        memcpy((char*)colorBuffer, rgbMat.data, 640 * 480 * 3);
That happens because the YUV420p format has 6 bytes per 4 pixels, but RGB has 12 bytes per 4 pixels. So you have to repack your array into the layout cv::cvtColor expects first; cv::cvtColor does not do it for you. Check here about YUV420p packing.
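An untested sketch of that repacking, assuming the reader is configured for AIMAGE_FORMAT_YUV_420_888 at 640x480 and that the U and V planes share the same row and pixel stride; it builds a contiguous NV21 buffer (Y followed by interleaved VU) and only then calls cv::cvtColor:
#include <cstring>
#include <vector>
#include <media/NdkImage.h>
#include <opencv2/imgproc.hpp>

// Repack the three YUV_420_888 planes into a contiguous NV21 buffer, honouring
// row and pixel strides, then let OpenCV do the colour conversion.
static cv::Mat previewToRgb(AImage *image, int width, int height)
{
    uint8_t *dataY = nullptr, *dataU = nullptr, *dataV = nullptr;
    int lenY = 0, lenU = 0, lenV = 0;
    int32_t rowStrideY = 0, rowStrideUV = 0, pixelStrideUV = 0;

    AImage_getPlaneData(image, 0, &dataY, &lenY);
    AImage_getPlaneData(image, 1, &dataU, &lenU);
    AImage_getPlaneData(image, 2, &dataV, &lenV);
    AImage_getPlaneRowStride(image, 0, &rowStrideY);
    AImage_getPlaneRowStride(image, 1, &rowStrideUV);     // assumed equal for U and V
    AImage_getPlanePixelStride(image, 1, &pixelStrideUV); // 1 = planar, 2 = interleaved

    std::vector<uint8_t> nv21(width * height * 3 / 2);

    // Y plane: copy row by row because rows may be padded to rowStrideY bytes.
    for (int r = 0; r < height; ++r)
        memcpy(nv21.data() + r * width, dataY + r * rowStrideY, width);

    // Chroma: NV21 wants V and U interleaved as VUVU..., one pair per 2x2 block.
    uint8_t *uv = nv21.data() + width * height;
    for (int r = 0; r < height / 2; ++r) {
        for (int c = 0; c < width / 2; ++c) {
            uv[r * width + 2 * c]     = dataV[r * rowStrideUV + c * pixelStrideUV];
            uv[r * width + 2 * c + 1] = dataU[r * rowStrideUV + c * pixelStrideUV];
        }
    }

    cv::Mat yuvMat(height * 3 / 2, width, CV_8UC1, nv21.data());
    cv::Mat rgbMat;
    cv::cvtColor(yuvMat, rgbMat, cv::COLOR_YUV2RGB_NV21); // rgbMat owns its own data
    return rgbMat;
}
If the pixel stride turns out to be 2 on a given device, the chroma loop simply reads every other byte of an already interleaved plane, so it still produces a valid NV21 buffer.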

Unity native OpenGL texture displayed four times

I'm currently facing a problem I simply don't understand.
I employ ARCore for an inside-out tracking task. Since I need to do some additional image processing, I use Unity's capability to load a native C++ plugin. At the very end of each frame I pass the image in YUV_420_888 format as a raw byte array to my native plugin.
A texture handle is created right at the beginning of the components initialization:
private void CreateTextureAndPassToPlugin()
{
    Texture2D tex = new Texture2D(640, 480, TextureFormat.RGBA32, false);
    tex.filterMode = FilterMode.Point;
    tex.Apply();
    debug_screen_.GetComponent<Renderer>().material.mainTexture = tex;

    // Pass texture pointer to the plugin
    SetTextureFromUnity(tex.GetNativeTexturePtr(), tex.width, tex.height);
}
Since I only need the grayscale image, I basically ignore the UV part of the image and only use the Y plane, as shown in the following:
uchar *p_out;
int channels = 4;
for (int r = 0; r < image_matrix->rows; r++) {
    p_out = image_matrix->ptr<uchar>(r);
    for (int c = 0; c < image_matrix->cols * channels; c++) {
        unsigned int idx = r * y_row_stride + c;
        p_out[c] = static_cast<uchar>(image_data[idx]);
        p_out[c + 1] = static_cast<uchar>(image_data[idx]);
        p_out[c + 2] = static_cast<uchar>(image_data[idx]);
        p_out[c + 3] = static_cast<uchar>(255);
    }
}
then each frame the image data is put into a GL texture:
GLuint gltex = (GLuint)(size_t)(g_TextureHandle);
glBindTexture(GL_TEXTURE_2D, gltex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, current_image.data);
I know that I use way too much memory by creating and passing the texture as RGBA, but since GL_R8 is not supported by OpenGL ES3 and GL_ALPHA always leads to internal OpenGL errors, I just pass the greyscale value to each color component.
However, in the end the texture is rendered as can be seen in the following image:
At first I thought the reason might lie in the other channels having the same values; however, setting all channels other than the first one to arbitrary values does not have any impact.
Am I missing something with regard to OpenGL texture creation?
YUV_420_888 is a multiplane texture, where the luminance plane only contains a single channel per pixel.
for (int c = 0; c < image_matrix->cols * channels; c++) {
unsigned int idx = r * y_row_stride + c;
Your loop bound assumes c counts in multiples of 4 channels, which is right for the output surface, but you then also use it when computing the input surface index. The input surface plane you are using only contains one channel, so idx is wrong.
You are also overwriting the same memory multiple times: the loop increments c by one each iteration, but you then write to c, c+1, c+2, and c+3, so you overwrite three of the values you wrote in the previous iteration.
Shorter answer - your OpenGL ES code is fine, but I think you're filling the texture with bad data.
Untested, but I think you need:
for (int c = 0; c < image_matrix->cols * channels; c += channels) {
unsigned int idx = (r * y_row_stride) + (c / channels);
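For context, a sketch of the full corrected loop (still untested; image_matrix, image_data, y_row_stride, and channels are the variables from the question):
for (int r = 0; r < image_matrix->rows; r++) {
    uchar *p_out = image_matrix->ptr<uchar>(r);
    for (int c = 0; c < image_matrix->cols * channels; c += channels) {
        // One luminance byte per output pixel, so divide c by the channel count.
        unsigned int idx = (r * y_row_stride) + (c / channels);
        uchar y = static_cast<uchar>(image_data[idx]);
        p_out[c]     = y;   // R
        p_out[c + 1] = y;   // G
        p_out[c + 2] = y;   // B
        p_out[c + 3] = 255; // A
    }
}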

Rendering issue on Project Tango using OpenCV image processing

I ran into a problem rendering the camera image after doing some processing on its YUV buffer.
I am using the video-overlay-jni-example and, in the OnFrameAvailable method, I am creating a new frame buffer using cv::Mat...
Here is how I create a new frame buffer:
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
After processing, I copy frame.data back to yuv_temp_buffer_ in order to render it on the texture: memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
And this works fine...
The problem starts when I try to execute the OpenCV method findChessboardCorners... using the frame that I created before.
The findChessboardCorners call takes about 90 ms to execute (11 fps); however, the rendering seems to run at a much lower rate (it appears to be around 0.5 fps on screen).
Here is the code of the OnFrameAvailable method:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
    if (yuv_drawable_ == NULL) {
        return;
    }

    if (yuv_drawable_->GetTextureId() == 0) {
        LOGE("AugmentedRealityApp::yuv texture id not valid");
        return;
    }

    if (buffer->format != TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP) {
        LOGE("AugmentedRealityApp::yuv texture format is not supported by this app");
        return;
    }

    // The memory needs to be allocated after we get the first frame because we
    // need to know the size of the image.
    if (!is_yuv_texture_available_) {
        yuv_width_ = buffer->width;
        yuv_height_ = buffer->height;
        uv_buffer_offset_ = yuv_width_ * yuv_height_;
        yuv_size_ = yuv_width_ * yuv_height_ + yuv_width_ * yuv_height_ / 2;

        // Reserve and resize the buffer size for RGB and YUV data.
        yuv_buffer_.resize(yuv_size_);
        yuv_temp_buffer_.resize(yuv_size_);
        rgb_buffer_.resize(yuv_width_ * yuv_height_ * 3);

        AllocateTexture(yuv_drawable_->GetTextureId(), yuv_width_, yuv_height_);
        is_yuv_texture_available_ = true;
    }

    std::lock_guard<std::mutex> lock(yuv_buffer_mutex_);
    memcpy(&yuv_temp_buffer_[0], buffer->data, yuv_size_);

    ///
    cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());

    if (!stam.isCalibrated()) {
        Profiler profiler;
        profiler.startSampling();
        stam.initFromChessboard(frame, cv::Size(9, 6), 100);
        profiler.endSampling();
        profiler.print("initFromChessboard", -1);
    }
    ///

    memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
    swap_buffer_signal_ = true;
}
Here is the code of the method initFromChessBoard:
bool STAM::initFromChessboard(const cv::Mat& image, const cv::Size& chessBoardSize, int squareSize)
{
    cv::Mat rvec = cv::Mat(cv::Size(3, 1), CV_64F);
    cv::Mat tvec = cv::Mat(cv::Size(3, 1), CV_64F);

    std::vector<cv::Point2d> imagePoints, imageBoardPoints;
    std::vector<cv::Point3d> boardPoints;

    for (int i = 0; i < chessBoardSize.height; i++)
    {
        for (int j = 0; j < chessBoardSize.width; j++)
        {
            boardPoints.push_back(cv::Point3d(j*squareSize, i*squareSize, 0.0));
        }
    }

    // getting only the Y channel (many of the functions like face detect and align only needs the grayscale image)
    cv::Mat gray(image.rows, image.cols, CV_8UC1);
    gray.data = image.data;

    bool found = findChessboardCorners(gray, chessBoardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);

#ifdef WINDOWS_VS
    printf("Number of chessboard points: %d\n", imagePoints.size());
#elif ANDROID
    LOGE("Number of chessboard points: %d", imagePoints.size());
#endif

    for (int i = 0; i < imagePoints.size(); i++) {
        cv::circle(image, imagePoints[i], 6, cv::Scalar(149, 43, 0), -1);
    }
}
Has anyone had the same problem after processing the YUV buffer and rendering it to the texture?
I did a test on another device (using the Camera2 API instead of the Project Tango), and there the on-screen rendering runs at the same rate as the OpenCV processing itself.
I appreciate any help.
I had a similar problem. My app slowed down after using the copied YUV buffer and doing some image processing with OpenCV. I would recommend using the tango_support library to access the YUV image buffer, by doing the following:
In your config function:
int AugmentedRealityApp::TangoSetupConfig() {
    TangoSupport_createImageBufferManager(TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP, 1280, 720, &yuv_manager_);
}
In your callback function:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
    TangoSupport_updateImageBuffer(yuv_manager_, buffer);
}
In your render thread:
void AugmentedRealityApp::Render() {
    TangoImageBuffer* yuv = new TangoImageBuffer();
    TangoSupport_getLatestImageBuffer(yuv_manager_, &yuv);

    cv::Mat yuv_frame, rgb_img, gray_img;
    yuv_frame.create(720 * 3 / 2, 1280, CV_8UC1);
    memcpy(yuv_frame.data, yuv->data, 720 * 3 / 2 * 1280); // yuv image
    cv::cvtColor(yuv_frame, rgb_img, CV_YUV2RGB_NV21);     // rgb image
    cvtColor(rgb_img, gray_img, CV_RGB2GRAY);              // gray image
}
You can share the yuv_manager_ with other objects/threads so you can access the YUV image buffer wherever you want.
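If only the grayscale image is needed (as in initFromChessboard above), an untested shortcut is to wrap the Y plane of the buffer directly instead of converting to RGB and back to gray; the Y plane is simply the first width*height bytes:
// Untested: gray_img shares memory with yuv->data, so use it only while the
// buffer from TangoSupport_getLatestImageBuffer is still valid (or clone it).
cv::Mat gray_img(720, 1280, CV_8UC1, yuv->data);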

Manipulating android camera frame data in JNI

I'm trying to use the NDK to do some image processing. I am NOT using opencv.
I am fairly new to Android so I was doing this in steps. I started by writing a simple app that would let me capture video from the camera and display it to the screen. I have this done.
Then I tried to manipulate the camera data in native code. However, onPreviewFrame delivers the frame information as a byte array. This is my code:
public void onPreviewFrame(byte[] arg0, Camera arg1)
{
    if (imageFormat == ImageFormat.NV21)
    {
        if (!bProcessing)
        {
            FrameData = arg0;
            mHandler.post(callnative);
        }
    }
}
And the callnative runnable is like so -
private Runnable callnative = new Runnable()
{
    public void run()
    {
        bProcessing = true;
        String returnNative = callTorch(MainActivity.assetManager, PreviewSizeWidth, PreviewSizeHeight, FrameData, pixels);
        bitmap.setPixels(pixels, 0, PreviewSizeWidth, 0, 0, PreviewSizeWidth, PreviewSizeHeight);
        MycameraClass.setImageBitmap(bitmap);
        bProcessing = false;
    }
};
The problem is, I need to use FrameData in native code as the float datatype. However, it arrives as a byte array. I want to know how the frame data is stored. Is it a two-dimensional array of bytes? Does the camera return an 8-bit image stored as 640x480 bytes? If so, how does C interpret this byte data type? Can I simply convert it to float? I have this in native code:
jbyte *nativeData;
nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
__android_log_print(ANDROID_LOG_INFO, "Nativeprint", "nativedata is: %d",(int)nativeData[0]);
However, this prints -22, which leads me to believe that I am printing out a pointer. I am not sure why that is the case, though.
I would appreciate any help on this.
You will not be able to get any float data type from the pixel buffer. The data comes in bytes, which in C is the char datatype.
So this:
jbyte *nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
is the same as this:
char *nativeData = (char *)((env)->GetByteArrayElements(NV21FrameData, NULL));
The data is stored as a one-dimensional array, so you retrieve each pixel by computing its index from the width, height, and the x and y coordinates.
Also remember that the preview camera frames from your sample are in YUV420sp; this means you will need to convert the data from YUV to RGB before you can set it in a bitmap.
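As an untested illustration (the helper name and parameters are invented, not from the question): the Y plane of an NV21 frame is just width*height bytes, one per pixel in row-major order, so a single luminance value can be read and widened to float like this. Note that jbyte is signed, which is also why the log above printed -22 rather than the unsigned value 234.
#include <jni.h>

// Hypothetical helper: read the luminance at (x, y) from the NV21 preview frame
// and widen it to float. Mask with 0xFF because jbyte is signed.
static float lumaAsFloat(const jbyte *frame, int width, int x, int y)
{
    int luma = frame[y * width + x] & 0xFF;  // 0..255
    return (float)luma;
}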

Convert RGB565 to Greyscale (8 bits)

I am trying to convert an RGB565 image (a video stream from the Android phone camera) into a greyscale (8-bit) image.
So far I have the following code (the conversion is computed in native code using the Android NDK). Note that my input image is 640x480 and I want to crop it to make it fit in a 128x128 buffer.
#define RED(a) ((((a) & 0xf800) >> 11) << 3)
#define GREEN(a) ((((a) & 0x07e0) >> 5) << 2)
#define BLUE(a) (((a) & 0x001f) << 3)
typedef unsigned char byte;
void toGreyscale(byte *rgbs, int widthIn, int heightIn, byte *greyscales)
{
    const int textSize = 128;
    int x, y;
    short *rgbPtr = (short *)rgbs;
    byte *greyPtr = greyscales;

    // rgbs arrives in RGB565 (16 bits) format
    for (y = 0; y < textSize; y++)
    {
        for (x = 0; x < textSize; x++)
        {
            short pixel = *(rgbPtr++);
            int r = RED(pixel);
            int g = GREEN(pixel);
            int b = BLUE(pixel);
            *(greyPtr++) = (byte)((r + g + b) / 3);
        }
        rgbPtr += widthIn - textSize;
    }
}
The image is sent to the function like this
jbyte* cImageIn = env->GetByteArrayElements(imageIn, &b);
jbyte* cImageOut = (jbyte*)env->GetDirectBufferAddress(imageOut);
toGreyscale((byte*)cImageIn, widthIn, heightIn, (byte*)cImageOut);
The result I get is a horizontally reversed image (no idea why... the UVs used to display the result are correct), but the biggest problem is that only the red channel is actually correct when I display the channels separately. The green and blue channels are all messed up and I have no idea why. I checked on the Internet, and all the resources I found show that the masks I am using are correct. Any idea where the mistake could be?
Thanks!
Maybe an endianness issue?
You could check quickly by reversing the two bytes of your 16-bit word before shifting out the RGB components.
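An untested sketch of that quick check, dropped into the inner loop of toGreyscale() in place of the original pixel read:
// Read the pixel, swap its two bytes, then extract the channels as before.
unsigned short pixel = (unsigned short)*(rgbPtr++);
pixel = (unsigned short)((pixel >> 8) | (pixel << 8)); // byte swap
int r = RED(pixel);
int g = GREEN(pixel);
int b = BLUE(pixel);
*(greyPtr++) = (byte)((r + g + b) / 3);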
