I am trying to scan a local image with ZBar, but since ZBar provides detailed documentation only for the iPhone and none for Android, I have been heavily customizing the CameraTest activity, without any success.
In the ZBar CameraTest activity:
PreviewCallback previewCb = new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Parameters parameters = camera.getParameters();
        Size size = parameters.getPreviewSize();

        Image barcode = new Image(size.width, size.height, "Y800");
        barcode.setData(data);

        int result = scanner.scanImage(barcode);
        if (result != 0) {
            previewing = false;
            mCamera.setPreviewCallback(null);
            mCamera.stopPreview();

            SymbolSet syms = scanner.getResults();
            for (Symbol sym : syms) {
                scanText.setText("barcode result " + sym.getData());
                barcodeScanned = true;
            }
        }
    }
};
I want to customize this code so that it takes a local image from the gallery instead of a camera frame and scans that image. How can I do that?
Try this out:
Bitmap barcodeBmp = BitmapFactory.decodeResource(getResources(),
        R.drawable.barcode);
int width = barcodeBmp.getWidth();
int height = barcodeBmp.getHeight();
int[] pixels = new int[width * height];
barcodeBmp.getPixels(pixels, 0, width, 0, 0, width, height);

Image barcode = new Image(width, height, "RGB4");
barcode.setData(pixels);

int result = scanner.scanImage(barcode.convert("Y800"));
Or using the API, refer to HOWTO: Scan images using the API.
The Java port of ZBar's scanner accepts only the Y800 and GRAY pixel formats (https://github.com/ZBar/ZBar/blob/master/java/net/sourceforge/zbar/ImageScanner.java), which is fine for raw bytes captured from the camera preview. But images from Android's gallery are usually JPEG-compressed, and their pixels are not in Y800, so you can make the scanner work by converting the image's pixels to the Y800 format. See this official support forum thread for sample code. To calculate the pixel array length, just use the imageWidth * imageHeight formula.
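A minimal sketch of that conversion in Java, assuming the same net.sourceforge.zbar classes used in the snippets above; the helper name scanGalleryImage and the file-path handling are illustrative, not part of ZBar:
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

import net.sourceforge.zbar.Image;
import net.sourceforge.zbar.ImageScanner;
import net.sourceforge.zbar.Symbol;
import net.sourceforge.zbar.SymbolSet;

public class GalleryBarcodeScanner {

    // Decodes an image file from the gallery and scans it with ZBar.
    public static String scanGalleryImage(String filePath) {
        Bitmap bmp = BitmapFactory.decodeFile(filePath);
        if (bmp == null) return null;

        int width = bmp.getWidth();
        int height = bmp.getHeight();
        int[] argb = new int[width * height];   // pixel array length = imageWidth * imageHeight
        bmp.getPixels(argb, 0, width, 0, 0, width, height);

        // Convert ARGB pixels to 8-bit luminance (Y800), the format the Java ImageScanner accepts.
        byte[] y800 = new byte[width * height];
        for (int i = 0; i < argb.length; i++) {
            int r = (argb[i] >> 16) & 0xFF;
            int g = (argb[i] >> 8) & 0xFF;
            int b = argb[i] & 0xFF;
            y800[i] = (byte) ((r * 77 + g * 151 + b * 28) >> 8); // integer approximation of BT.601 luma
        }

        Image barcode = new Image(width, height, "Y800");
        barcode.setData(y800);

        ImageScanner scanner = new ImageScanner();
        if (scanner.scanImage(barcode) != 0) {
            SymbolSet syms = scanner.getResults();
            for (Symbol sym : syms) {
                return sym.getData();   // return the first decoded symbol
            }
        }
        return null;                    // nothing decoded
    }
}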
@shujatAli your example image's palette format is grayscale; converting it to RGB made your code snippet work for me. You can change the palette format with an image manipulation program. I used GIMP.
I am not sure about Android, but on iOS you do it like this:
// Action when the user taps the button to present ZBarReaderController
- (IBAction)brownQRImageFromAlbum:(id)sender {
    ZBarReaderController *reader = [ZBarReaderController new];
    reader.readerDelegate = self;
    // Point ZBarReaderController at the local photo library
    reader.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;

    ZBarImageScanner *scanner = reader.scanner;
    [scanner setSymbology: ZBAR_QRCODE
                   config: ZBAR_CFG_ENABLE
                       to: 1];

    [self presentModalViewController: reader animated: YES];
}

- (void) imagePickerController: (UIImagePickerController *) picker
 didFinishPickingMediaWithInfo: (NSDictionary *) info {
    UIImage *imageCurrent = (UIImage *)[info objectForKey:UIImagePickerControllerOriginalImage];
    self.imageViewQR.image = imageCurrent;
    imageCurrent = nil;

    // ADD: get the decode results
    id<NSFastEnumeration> results = [info objectForKey: ZBarReaderControllerResults];
    ZBarSymbol *symbol = nil;
    for (symbol in results)
        break;

    NSLog(@"Content: %@", symbol.data);
    [picker dismissModalViewControllerAnimated: NO];
}
Reference for more details: http://zbar.sourceforge.net/iphone/sdkdoc/optimizing.html
Idea from HOWTO: Scan images using the API:
#include <iostream>
#include <Magick++.h>
#include <zbar.h>

using namespace std;
using namespace zbar;

int main (int argc, char **argv)
{
    if(argc < 2)
        return(1);

    // Create a reader
    ImageScanner scanner;

    // Configure the reader
    scanner.set_config(ZBAR_NONE, ZBAR_CFG_ENABLE, 1);

    // Obtain image data
    Magick::Image magick(argv[1]);  // Read an image file
    int width = magick.columns();   // Extract dimensions
    int height = magick.rows();
    Magick::Blob blob;              // Extract the raw data
    magick.modifyImage();
    magick.write(&blob, "GRAY", 8);
    const void *raw = blob.data();

    // Wrap image data
    Image image(width, height, "Y800", raw, width * height);

    // Scan the image for barcodes
    int n = scanner.scan(image);

    // Extract results
    for (Image::SymbolIterator symbol = image.symbol_begin();
         symbol != image.symbol_end();
         ++symbol) {
        // Do something useful with results
        cout << "decoded " << symbol->get_type_name()
             << " symbol \"" << symbol->get_data() << '"' << endl;
    }

    // Clean up
    image.set_data(NULL, 0);

    return(0);
}
Follow the above code and adapt it to your programming language.
Related
Good morning.
I am making a camera video player using ffmpeg.
During development, I ran into one problem.
If I grab a frame through ffmpeg, decode it, and sws_scale it to fit the screen size, it takes too long and the camera stream is held up.
For example, when the incoming input resolution is 1920 * 1080 and the resolution of my phone is 2550 * 1440, sws_scale is about 6 times slower.
[Compared with converting to the same size]
Currently, in the NDK I sws_scale to the same resolution that comes in from the camera, so the speed improves and the video is not interrupted.
However, the SurfaceView is full screen, while the input resolution is smaller than full screen.
Scale AVFrame
ctx->m_SwsCtx = sws_getContext(
        ctx->m_CodecCtx->width,
        ctx->m_CodecCtx->height,
        ctx->m_CodecCtx->pix_fmt,
        //width,                  // 2550 (SurfaceView)
        //height,                 // 1440
        ctx->m_CodecCtx->width,   // 1920 (Camera)
        ctx->m_CodecCtx->height,  // 1080
        AV_PIX_FMT_RGBA,
        SWS_FAST_BILINEAR,
        NULL, NULL, NULL);

if (ctx->m_SwsCtx == NULL)
{
    __android_log_print(
            ANDROID_LOG_DEBUG,
            "[ VideoStream::SetResolution Fail ] ",
            "[ Error Message : %s ]",
            "SwsContext Alloc fail");
    SET_FIELD_TO_INT(pEnv, ob, err, 0x40);
    return ob;
}
sws_scale(
        ctx->m_SwsCtx,
        (const uint8_t * const *)ctx->m_SrcFrame->data,
        ctx->m_SrcFrame->linesize,
        0,
        ctx->m_CodecCtx->height,
        ctx->m_DstFrame->data,
        ctx->m_DstFrame->linesize);

PDRAWOBJECT drawObj = (PDRAWOBJECT)malloc(sizeof(DRAWOBJECT));
if (drawObj != NULL)
{
    drawObj->m_Width = ctx->m_Width;
    drawObj->m_Height = ctx->m_Height;
    drawObj->m_Format = WINDOW_FORMAT_RGBA_8888;
    drawObj->m_Frame = ctx->m_DstFrame;

    SET_FIELD_TO_INT(pEnv, ob, err, -1);
    SET_FIELD_TO_LONG(pEnv, ob, addr, (jlong)drawObj);
}
Draw SurfaceView
PDRAWOBJECT d = (PDRAWOBJECT)drawObj;
long long curr1 = CurrentTimeInMilli();

ANativeWindow *window = ANativeWindow_fromSurface(pEnv, surface);
ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);
ANativeWindow_setBuffersGeometry(
        window,
        d->m_Width,
        d->m_Height,
        WINDOW_FORMAT_RGBA_8888);

ANativeWindow_Buffer windowBuffer;
ANativeWindow_lock(window, &windowBuffer, 0);

uint8_t *dst = (uint8_t *)windowBuffer.bits;
int dstStride = windowBuffer.stride * 4;
uint8_t *src = (uint8_t *)(d->m_Frame->data[0]);
int srcStride = d->m_Frame->linesize[0];

for (int h = 0; h < d->m_Height; ++h)
{
    // Copy one row of RGBA pixels into the window buffer
    memcpy(dst + h * dstStride, src + h * srcStride, srcStride);
}

ANativeWindow_unlockAndPost(window);
ANativeWindow_release(window);
Result: (screenshot omitted)
I would like the video to fill the whole screen. Is there a way to resize what is drawn to the SurfaceView in the NDK or in Android, other than using sws_scale?
Thank you.
You don't need to scale your video. Actually, you don't even need to convert it to RGB (this is also a significant burden for the CPU).
The trick is to use an OpenGL renderer with a shader that takes YUV input and displays the texture scaled to your screen.
Start with this solution (reusing code from Android system): https://stackoverflow.com/a/14999912/192373
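For reference, a minimal sketch of the shader pair such a renderer could use, written here as Java string constants; the full GLSurfaceView and texture-upload plumbing is omitted, and the class and field names are illustrative. It assumes the Y, U, and V planes are uploaded as three single-channel textures:
public final class YuvShaders {

    // Pass-through vertex shader: position plus texture coordinate.
    public static final String VERTEX =
            "attribute vec4 aPosition;\n" +
            "attribute vec2 aTexCoord;\n" +
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "  gl_Position = aPosition;\n" +
            "  vTexCoord = aTexCoord;\n" +
            "}\n";

    // Fragment shader: sample the Y, U, V textures and convert to RGB (BT.601).
    // The GPU samples and scales the texture to whatever size the view has,
    // so no sws_scale and no CPU-side RGB conversion is needed.
    public static final String FRAGMENT =
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform sampler2D yTex;\n" +
            "uniform sampler2D uTex;\n" +
            "uniform sampler2D vTex;\n" +
            "void main() {\n" +
            "  float y = texture2D(yTex, vTexCoord).r;\n" +
            "  float u = texture2D(uTex, vTexCoord).r - 0.5;\n" +
            "  float v = texture2D(vTex, vTexCoord).r - 0.5;\n" +
            "  float r = y + 1.402 * v;\n" +
            "  float g = y - 0.344 * u - 0.714 * v;\n" +
            "  float b = y + 1.772 * u;\n" +
            "  gl_FragColor = vec4(r, g, b, 1.0);\n" +
            "}\n";

    private YuvShaders() {}
}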
I need to do some real-time image processing on the camera preview data, such as face detection (a C++ library), and then display the processed preview with faces labeled on screen.
I have read http://nezarobot.blogspot.com/2016/03/android-surfacetexture-camera2-opencv.html and Eddy Talvala's answer from Android camera2 API - Display processed frame in real time. Following the two pages, I managed to build the app (not calling the face detection lib, only trying to display the preview using ANativeWindow), but every time I run this app on Google Pixel - 7.1.0 - API 25 running on Genymotion, the app crashes with the following log:
08-28 14:23:09.598 2099-2127/tau.camera2demo A/libc: Fatal signal 11 (SIGSEGV), code 2, fault addr 0xd3a96000 in tid 2127 (CAMERA2)
[ 08-28 14:23:09.599 117: 117 W/ ]
debuggerd: handling request: pid=2099 uid=10067 gid=10067 tid=2127
I googled this but found no answer.
The whole project is on GitHub: https://github.com/Fung-yuantao/android-camera2demo
Here is the key code (I think).
Code in Camera2Demo.java:
private void startPreview(CameraDevice camera) throws CameraAccessException {
    SurfaceTexture texture = mPreviewView.getSurfaceTexture();

    // to set PREVIEW size
    texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
    surface = new Surface(texture);

    try {
        // to set request for PREVIEW
        mPreviewBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }

    mImageReader = ImageReader.newInstance(mImageWidth, mImageHeight,
            ImageFormat.YUV_420_888, 2);
    mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mHandler);
    mPreviewBuilder.addTarget(mImageReader.getSurface());

    // output Surface
    List<Surface> outputSurfaces = new ArrayList<>();
    outputSurfaces.add(mImageReader.getSurface());
    /*
    camera.createCaptureSession(
            Arrays.asList(surface, mImageReader.getSurface()),
            mSessionStateCallback, mHandler);
    */
    camera.createCaptureSession(outputSurfaces, mSessionStateCallback, mHandler);
}
private CameraCaptureSession.StateCallback mSessionStateCallback =
        new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(CameraCaptureSession session) {
        try {
            updatePreview(session);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onConfigureFailed(CameraCaptureSession session) {
    }
};
private void updatePreview(CameraCaptureSession session)
        throws CameraAccessException {
    mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO);
    session.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler);
}
private ImageReader.OnImageAvailableListener mOnImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // get the newest frame
        Image image = reader.acquireNextImage();
        if (image == null) {
            return;
        }

        // print image format
        int format = reader.getImageFormat();
        Log.d(TAG, "the format of captured frame: " + format);

        // HERE to call jni methods
        JNIUtils.display(image.getWidth(), image.getHeight(),
                image.getPlanes()[0].getBuffer(), surface);

        //ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        //byte[] bytes = new byte[buffer.remaining()];

        image.close();
    }
};
Code in JNIUtils.java:
import android.media.Image;
import android.view.Surface;

import java.nio.ByteBuffer;

public class JNIUtils {
    // TAG for JNIUtils class
    private static final String TAG = "JNIUtils";

    // Load native library.
    static {
        System.loadLibrary("native-lib");
    }

    public static native void display(int srcWidth, int srcHeight,
                                      ByteBuffer srcBuffer, Surface surface);
}
Code in native-lib.cpp:
#include <jni.h>
#include <string>
#include <android/log.h>
//#include <android/bitmap.h>
#include <android/native_window_jni.h>

#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, "Camera2Demo", __VA_ARGS__)

extern "C" {

JNIEXPORT jstring JNICALL Java_tau_camera2demo_JNIUtils_display(
        JNIEnv *env,
        jobject obj,
        jint srcWidth,
        jint srcHeight,
        jobject srcBuffer,
        jobject surface) {
    /*
    uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));
    if (srcLumaPtr == nullptr) {
        LOGE("srcLumaPtr null ERROR!");
        return NULL;
    }
    */

    ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
    ANativeWindow_acquire(window);

    ANativeWindow_Buffer buffer;
    ANativeWindow_setBuffersGeometry(window, srcWidth, srcHeight, 0 /* format unchanged */);

    if (int32_t err = ANativeWindow_lock(window, &buffer, NULL)) {
        LOGE("ANativeWindow_lock failed with error code: %d\n", err);
        ANativeWindow_release(window);
        return NULL;
    }

    memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);

    ANativeWindow_unlockAndPost(window);
    ANativeWindow_release(window);

    return NULL;
}

}
After I comment out the memcpy, the app no longer crashes but displays nothing. So I guess the problem now comes down to how to correctly use memcpy to copy the captured/processed buffer into buffer.bits.
Update:
I changed
memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);
to
memcpy(buffer.bits, srcLumaPtr, srcWidth * srcHeight * 4);
and the app no longer crashes and starts to display, but it shows something strange.
As mentioned by yakobom, you're trying to copy a YUV_420_888 image directly into a RGBA_8888 destination (that's the default, if you haven't changed it). That won't work with just a memcpy.
You need to actually convert the data, and you need to ensure you don't copy too much - the sample code you have copies width*height*4 bytes, while a YUV_420_888 image takes up only stride*height*1.5 bytes (roughly). So when you copied, you were running way off the end of the buffer.
You also have to account for the stride provided at the Java level to correctly index into the buffer. This link from Microsoft has a useful diagram.
If you just care about the luminance (so grayscale output is enough), just duplicate the luminance channel into the R, G, and B channels. The pseudocode would be roughly:
uint8_t *outPtr = buffer.bits;
for (size_t y = 0; y < height; y++) {
    uint8_t *rowPtr = srcLumaPtr + y * srcLumaStride;
    for (size_t x = 0; x < width; x++) {
        *(outPtr++) = *rowPtr;
        *(outPtr++) = *rowPtr;
        *(outPtr++) = *rowPtr;
        *(outPtr++) = 255; // alpha for RGBA_8888
        ++rowPtr;
    }
}
You'll need to read the srcLumaStride from the Image object (row stride of the first Plane) and pass it down via JNI as well.
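For illustration, the Java side could look roughly like this; displayGray and the show helper are hypothetical extensions of the posted JNIUtils, not part of the project as written:
import android.media.Image;
import android.view.Surface;

import java.nio.ByteBuffer;

public class JNIUtils {
    static {
        System.loadLibrary("native-lib");
    }

    // Hypothetical native method that also carries the luma row stride.
    public static native void displayGray(int width, int height, int rowStride,
                                          ByteBuffer luma, Surface surface);

    public static void show(Image image, Surface surface) {
        Image.Plane yPlane = image.getPlanes()[0];   // plane 0 is always the Y (luma) plane
        displayGray(image.getWidth(), image.getHeight(),
                    yPlane.getRowStride(),           // bytes per row, may be larger than the width
                    yPlane.getBuffer(), surface);
    }
}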
Just to put it as an answer, to avoid a long chain of comments: such a crash may be caused by an improper number of bytes being copied by the memcpy (UPDATE following other comments: in this case it was due to a forbidden direct copy).
If you are now getting a weird image, it is probably another issue; I would suspect the image format, so try modifying that.
I am developing a custom Camera2 API app, and I notice that the capture format conversion differs between devices when I use the ImageReader callback.
For example, on the Nexus 4 it doesn't work properly, while on the Nexus 5X it looks OK; here is the output.
I initialize the ImageReader in this form:
mImageReader = ImageReader.newInstance(320, 240, ImageFormat.YUV_420_888,2);
And my callback is a simple ImageReader callback:
mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        try {
            mBackgroundHandler.post(
                    new ImageController(reader.acquireNextImage())
            );
        } catch (Exception e) {
            //exception
        }
    }
};
And in the case of the Nexus 4, I get this error:
D/qdgralloc: gralloc_lock_ycbcr: Invalid format passed: 0x32315659
When I write the raw file on both devices, I get different images. So I understand that the Nexus 5X image uses NV21 encoding and the Nexus 4 uses YV12.
I found the image format specification and tried to get the format from the ImageReader.
There are YV12 and NV21 options, but, obviously, I get the YUV_420_888 format when I query it:
int test=mImageReader.getImageFormat();
So is there any way to get the camera input format (NV21 or YV12) in order to distinguish these encodings in the camera class? CameraCharacteristics maybe?
Thanks in advance.
Unai.
PS: I use OpenGL for displaying RGB images, and I use OpenCV for the YUV_420_888 conversions.
YUV_420_888 is a wrapper that can host (among others) both NV21 and YV12 images. You must use the planes and strides to access individual colors:
ByteBuffer Y = image.getPlanes()[0].getBuffer();
ByteBuffer U = image.getPlanes()[1].getBuffer();
ByteBuffer V = image.getPlanes()[2].getBuffer();
If the underlying pixels are in NV21 format (as on your Nexus 5X), the pixelStride of the chroma planes will be 2, and
int getU(Image image, int col, int row) {
    return getPixel(image.getPlanes()[1], col / 2, row / 2);
}

int getPixel(Image.Plane plane, int col, int row) {
    // mask to get an unsigned 0..255 value out of the signed byte
    return plane.getBuffer().get(col * plane.getPixelStride() + row * plane.getRowStride()) & 0xFF;
}
We take half the column and half the row because this is how the U and V (chroma) planes are stored in a 420 image.
This code is for illustration only; it is very inefficient. You probably want to access pixels in bulk using get(byte[], int, int), via a fragment shader, or via the JNI function GetDirectBufferAddress in native code. What you cannot use is the method plane.array(), because the planes are guaranteed to be direct byte buffers.
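As a rough illustration (this is a heuristic, not an official way to query the native format), you can inspect the chroma planes' pixelStride at runtime to see which packing the device uses under the YUV_420_888 wrapper:
import android.media.Image;

public class YuvLayout {
    // pixelStride 2 on the chroma planes means interleaved chroma (NV21/NV12-style);
    // pixelStride 1 means fully planar chroma (YV12/I420-style).
    public static String describe(Image image) {
        Image.Plane u = image.getPlanes()[1];
        Image.Plane v = image.getPlanes()[2];
        if (u.getPixelStride() == 2 && v.getPixelStride() == 2) {
            return "semi-planar chroma (NV21/NV12-like)";
        } else if (u.getPixelStride() == 1 && v.getPixelStride() == 1) {
            return "planar chroma (YV12/I420-like)";
        }
        return "unknown layout";
    }
}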
Here is a useful method which converts from YV12 to NV21.
public static byte[] fromYV12toNV21(@NonNull final byte[] yv12,
                                    final int width,
                                    final int height) {
    byte[] nv21 = new byte[yv12.length];
    final int size = width * height;
    final int quarter = size / 4;
    final int vPosition = size;            // This is where V starts
    final int uPosition = size + quarter;  // This is where U starts

    System.arraycopy(yv12, 0, nv21, 0, size); // Y is the same

    for (int i = 0; i < quarter; i++) {
        nv21[size + i * 2] = yv12[vPosition + i];     // For NV21, V first
        nv21[size + i * 2 + 1] = yv12[uPosition + i]; // For NV21, U second
    }
    return nv21;
}
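For example, with the old Camera API you could request planar YV12 preview frames and convert them for a consumer that expects NV21. This is only a sketch: it assumes fromYV12toNV21 above is in scope (e.g. in the same class), that preview display setup is done elsewhere, and that the preview width is a multiple of 16 so the YV12 rows carry no extra alignment padding:
import android.graphics.ImageFormat;
import android.hardware.Camera;

public class Yv12PreviewExample {

    public static void requestYv12FramesAsNv21(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        params.setPreviewFormat(ImageFormat.YV12);   // ask the camera for planar YV12 frames
        camera.setParameters(params);

        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                Camera.Size size = cam.getParameters().getPreviewSize();
                byte[] nv21 = fromYV12toNV21(data, size.width, size.height); // method above
                // ...hand nv21 to whatever expects NV21 (e.g. a YuvImage or an encoder)...
            }
        });
    }
}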
I came across a problem rendering the camera image after doing some processing on its YUV buffer.
I am using the example video-overlay-jni-example, and in the OnFrameAvailable method I am creating a new frame buffer using cv::Mat.
Here is how I create a new frame buffer:
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
After processing, I copy frame.data back into yuv_temp_buffer_ in order to render it on the texture: memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
And this works fine...
The problem starts when I try to execute an OpenCV method findChessboardCorners... using the frame that I've created before.
The findChessboardCorners method takes about 90 ms to execute (11 fps); however, the rendering seems to run at a slower rate (it appears to render at ~0.5 fps on the screen).
Here is the code of the OnFrameAvailable method:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
    if (yuv_drawable_ == NULL) {
        return;
    }

    if (yuv_drawable_->GetTextureId() == 0) {
        LOGE("AugmentedRealityApp::yuv texture id not valid");
        return;
    }

    if (buffer->format != TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP) {
        LOGE("AugmentedRealityApp::yuv texture format is not supported by this app");
        return;
    }

    // The memory needs to be allocated after we get the first frame because we
    // need to know the size of the image.
    if (!is_yuv_texture_available_) {
        yuv_width_ = buffer->width;
        yuv_height_ = buffer->height;
        uv_buffer_offset_ = yuv_width_ * yuv_height_;
        yuv_size_ = yuv_width_ * yuv_height_ + yuv_width_ * yuv_height_ / 2;

        // Reserve and resize the buffer size for RGB and YUV data.
        yuv_buffer_.resize(yuv_size_);
        yuv_temp_buffer_.resize(yuv_size_);
        rgb_buffer_.resize(yuv_width_ * yuv_height_ * 3);

        AllocateTexture(yuv_drawable_->GetTextureId(), yuv_width_, yuv_height_);
        is_yuv_texture_available_ = true;
    }

    std::lock_guard<std::mutex> lock(yuv_buffer_mutex_);
    memcpy(&yuv_temp_buffer_[0], buffer->data, yuv_size_);

    ///
    cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_,
                  CV_8UC1, (uchar *) yuv_temp_buffer_.data());

    if (!stam.isCalibrated()) {
        Profiler profiler;
        profiler.startSampling();
        stam.initFromChessboard(frame, cv::Size(9, 6), 100);
        profiler.endSampling();
        profiler.print("initFromChessboard", -1);
    }
    ///

    memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
    swap_buffer_signal_ = true;
}
Here is the code of the initFromChessboard method:
bool STAM::initFromChessboard(const cv::Mat& image, const cv::Size& chessBoardSize, int squareSize)
{
    cv::Mat rvec = cv::Mat(cv::Size(3, 1), CV_64F);
    cv::Mat tvec = cv::Mat(cv::Size(3, 1), CV_64F);

    std::vector<cv::Point2d> imagePoints, imageBoardPoints;
    std::vector<cv::Point3d> boardPoints;

    for (int i = 0; i < chessBoardSize.height; i++)
    {
        for (int j = 0; j < chessBoardSize.width; j++)
        {
            boardPoints.push_back(cv::Point3d(j * squareSize, i * squareSize, 0.0));
        }
    }

    // getting only the Y channel (many of the functions like face detect and align only need the grayscale image)
    cv::Mat gray(image.rows, image.cols, CV_8UC1);
    gray.data = image.data;

    bool found = findChessboardCorners(gray, chessBoardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);

#ifdef WINDOWS_VS
    printf("Number of chessboard points: %d\n", imagePoints.size());
#elif ANDROID
    LOGE("Number of chessboard points: %d", imagePoints.size());
#endif

    for (int i = 0; i < imagePoints.size(); i++) {
        cv::circle(image, imagePoints[i], 6, cv::Scalar(149, 43, 0), -1);
    }
}
Is anyone having the same problem after processing the YUV buffer and rendering it to the texture?
I did a test on another device (not the Project Tango) using the camera2 API, and there the on-screen rendering runs at the same rate as the OpenCV processing itself.
I appreciate any help.
I had a similar problem. My app slowed down after using the copied YUV buffer and doing some image processing with OpenCV. I would recommend using the tango_support library to access the YUV image buffer by doing the following:
In your config function:
int AugmentedRealityApp::TangoSetupConfig() {
    TangoSupport_createImageBufferManager(TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP,
                                          1280, 720, &yuv_manager_);
}
In your callback function:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
    TangoSupport_updateImageBuffer(yuv_manager_, buffer);
}
In your render thread:
void AugmentedRealityApp::Render() {
    TangoImageBuffer* yuv = new TangoImageBuffer();
    TangoSupport_getLatestImageBuffer(yuv_manager_, &yuv);

    cv::Mat yuv_frame, rgb_img, gray_img;
    yuv_frame.create(720 * 3 / 2, 1280, CV_8UC1);
    memcpy(yuv_frame.data, yuv->data, 720 * 3 / 2 * 1280); // yuv image
    cv::cvtColor(yuv_frame, rgb_img, CV_YUV2RGB_NV21);     // rgb image
    cv::cvtColor(rgb_img, gray_img, CV_RGB2GRAY);          // gray image
}
You can share the yuv_manager_ with other objects/threads so you can access the YUV image buffer wherever you want.
I am trying to show a JPEG on an ANativeWindow with the Android NDK.
I am getting the ANativeWindow* by doing:
_window = ANativeWindow_fromSurface(env, surface)
I am loading the jpeg, with libjpeg-turbo, by doing:
if (tjDecompressHeader2(tj, jpeg, jpegSize, &width, &height, &subsamp) == 0) {
    int format = TJPF_ARGB;
    int pitch = _windowWidth * tjPixelSize[format];

    _bufferSize = pitch * _windowHeight;
    _buffer = realloc(_buffer, _bufferSize);

    tjDecompress2(tj, jpeg, jpegSize, _buffer, _windowWidth, pitch, _windowHeight, format, 0);
}
My question is: how do I show the decoded JPEG on the surface? I am currently doing this:
ANativeWindow_Buffer surface_buffer;
if (ANativeWindow_lock(_window, &surface_buffer, NULL) == 0) {
    memcpy(surface_buffer.bits, _buffer, _bufferSize);
    ANativeWindow_unlockAndPost(_window);
}
But the result (see below) is not what I was expecting. What should I do before sending the buffer to the surface?
Thanks
You just need to set the ANativeWindow's format with ANativeWindow_setBuffersGeometry(_window, 0, 0, WINDOW_FORMAT_RGBA_8888), and then use the TJPF_RGBA format instead of TJPF_ARGB.