I need to do some real-time image processing on the camera preview data, such as face detection using a C++ library, and then display the processed preview with the detected faces labeled on screen.
I have read http://nezarobot.blogspot.com/2016/03/android-surfacetexture-camera2-opencv.html and Eddy Talvala's answer to Android camera2 API - Display processed frame in real time. Following the two webpages, I managed to build the app (without calling the face detection lib, only trying to display the preview using ANativeWindow), but every time I run this app on a Google Pixel - 7.1.0 - API 25 image running on Genymotion, the app crashes with the following log:
08-28 14:23:09.598 2099-2127/tau.camera2demo A/libc: Fatal signal 11 (SIGSEGV), code 2, fault addr 0xd3a96000 in tid 2127 (CAMERA2)
[ 08-28 14:23:09.599 117: 117 W/ ]
debuggerd: handling request: pid=2099 uid=10067 gid=10067 tid=2127
I googled this but found no answer.
The whole project is on GitHub: https://github.com/Fung-yuantao/android-camera2demo
Here is the key code (I think).
Code in Camera2Demo.java:
private void startPreview(CameraDevice camera) throws CameraAccessException {
SurfaceTexture texture = mPreviewView.getSurfaceTexture();
// to set PREVIEW size
texture.setDefaultBufferSize(mPreviewSize.getWidth(),mPreviewSize.getHeight());
surface = new Surface(texture);
try {
// to set request for PREVIEW
mPreviewBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
} catch (CameraAccessException e) {
e.printStackTrace();
}
mImageReader = ImageReader.newInstance(mImageWidth, mImageHeight, ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener,mHandler);
mPreviewBuilder.addTarget(mImageReader.getSurface());
//output Surface
List<Surface> outputSurfaces = new ArrayList<>();
outputSurfaces.add(mImageReader.getSurface());
/*camera.createCaptureSession(
Arrays.asList(surface, mImageReader.getSurface()),
mSessionStateCallback, mHandler);
*/
camera.createCaptureSession(outputSurfaces, mSessionStateCallback, mHandler);
}
private CameraCaptureSession.StateCallback mSessionStateCallback = new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(CameraCaptureSession session) {
try {
updatePreview(session);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
@Override
public void onConfigureFailed(CameraCaptureSession session) {
}
};
private void updatePreview(CameraCaptureSession session)
throws CameraAccessException {
mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO);
session.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler);
}
private ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
// get the newest frame
Image image = reader.acquireNextImage();
if (image == null) {
return;
}
// print image format
int format = reader.getImageFormat();
Log.d(TAG, "the format of captured frame: " + format);
// HERE to call jni methods
JNIUtils.display(image.getWidth(), image.getHeight(), image.getPlanes()[0].getBuffer(), surface);
//ByteBuffer buffer = image.getPlanes()[0].getBuffer();
//byte[] bytes = new byte[buffer.remaining()];
image.close();
}
};
Code in JNIUtils.java:
import android.media.Image;
import android.view.Surface;
import java.nio.ByteBuffer;
public class JNIUtils {
// TAG for JNIUtils class
private static final String TAG = "JNIUtils";
// Load native library.
static {
System.loadLibrary("native-lib");
}
public static native void display(int srcWidth, int srcHeight, ByteBuffer srcBuffer, Surface surface);
}
Code in native-lib.cpp:
#include <jni.h>
#include <string>
#include <android/log.h>
//#include <android/bitmap.h>
#include <android/native_window_jni.h>
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, "Camera2Demo", __VA_ARGS__)
extern "C" {
JNIEXPORT jstring JNICALL Java_tau_camera2demo_JNIUtils_display(
JNIEnv *env,
jobject obj,
jint srcWidth,
jint srcHeight,
jobject srcBuffer,
jobject surface) {
/*
uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));
if (srcLumaPtr == nullptr) {
LOGE("srcLumaPtr null ERROR!");
return NULL;
}
*/
ANativeWindow * window = ANativeWindow_fromSurface(env, surface);
ANativeWindow_acquire(window);
ANativeWindow_Buffer buffer;
ANativeWindow_setBuffersGeometry(window, srcWidth, srcHeight, 0/* format unchanged */);
if (int32_t err = ANativeWindow_lock(window, &buffer, NULL)) {
LOGE("ANativeWindow_lock failed with error code: %d\n", err);
ANativeWindow_release(window);
return NULL;
}
memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);
ANativeWindow_unlockAndPost(window);
ANativeWindow_release(window);
return NULL;
}
}
After I commented the memcpy out, the app no longer crashes but displays nothing. So I guess the problem now turns into how to correctly use memcpy to copy the captured/processed buffer into buffer.bits.
Update:
I changed
memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);
to
memcpy(buffer.bits, srcLumaPtr, srcWidth * srcHeight * 4);
The app no longer crashes and starts to display, but it's displaying something strange.
As mentioned by yakobom, you're trying to copy a YUV_420_888 image directly into an RGBA_8888 destination (that's the default, if you haven't changed it). That won't work with just a memcpy.
You need to actually convert the data, and you need to ensure you don't copy too much - the sample code you have copies width*height*4 bytes, while a YUV_420_888 image takes up only stride*height*1.5 bytes (roughly). So when you copied, you were running way off the end of the buffer.
You also have to account for the stride provided at the Java level to correctly index into the buffer. This link from Microsoft has a useful diagram.
If you just care about the luminance (so grayscale output is enough), just duplicate the luminance channel into the R, G, and B channels. The pseudocode would be roughly:
uint8_t *outPtr = buffer.bits;
for (size_t y = 0; y < height; y++) {
uint8_t *rowPtr = srcLumaPtr + y * srcLumaStride;
for (size_t x = 0; x < width; x++) {
*(outPtr++) = *rowPtr;
*(outPtr++) = *rowPtr;
*(outPtr++) = *rowPtr;
*(outPtr++) = 255; // alpha for RGBA_8888
++rowPtr;
}
}
You'll need to read the srcLumaStride from the Image object (row stride of the first Plane) and pass it down via JNI as well.
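On the Java side, a minimal sketch of what reading the stride might look like (note: the extra stride parameter to JNIUtils.display is my assumption, not part of the original code - you would also have to add it to the native method's signature):
// Hypothetical variant of the onImageAvailable() call that also passes the Y-plane row stride
Image.Plane yPlane = image.getPlanes()[0];
int srcLumaStride = yPlane.getRowStride();   // may be larger than image.getWidth()
JNIUtils.display(image.getWidth(), image.getHeight(), srcLumaStride,
        yPlane.getBuffer(), surface);
The native loop above would then use srcLumaStride to step from one row of luminance data to the next.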
Just to put it as an answer, to avoid a long chain of comments: such a crash issue may be due to an improper number of bytes being copied by the memcpy (UPDATE following other comments: in this case it was due to the forbidden direct copy).
If you are now getting a weird image, it is probably another issue - I would suspect the image format, try to modify that.
I came across a problem rendering the camera image after doing some processing on its YUV buffer.
I am using the video-overlay-jni-example, and in the OnFrameAvailable method I am creating a new frame buffer using cv::Mat...
Here is how I create a new frame buffer:
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
After processing, I copy frame.data back into yuv_temp_buffer_ in order to render it on the texture: memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
And this works fine...
The problem starts when I try to execute an OpenCV method findChessboardCorners... using the frame that I've created before.
The findChessboardCorners method takes about 90 ms to execute (11 fps); however, it seems to be rendering at a slower rate (it appears to render at ~0.5 fps on the screen).
Here is the code of the OnFrameAvailable method:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
if (yuv_drawable_ == NULL){
return;
}
if (yuv_drawable_->GetTextureId() == 0) {
LOGE("AugmentedRealityApp::yuv texture id not valid");
return;
}
if (buffer->format != TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP) {
LOGE("AugmentedRealityApp::yuv texture format is not supported by this app");
return;
}
// The memory needs to be allocated after we get the first frame because we
// need to know the size of the image.
if (!is_yuv_texture_available_) {
yuv_width_ = buffer->width;
yuv_height_ = buffer->height;
uv_buffer_offset_ = yuv_width_ * yuv_height_;
yuv_size_ = yuv_width_ * yuv_height_ + yuv_width_ * yuv_height_ / 2;
// Reserve and resize the buffer size for RGB and YUV data.
yuv_buffer_.resize(yuv_size_);
yuv_temp_buffer_.resize(yuv_size_);
rgb_buffer_.resize(yuv_width_ * yuv_height_ * 3);
AllocateTexture(yuv_drawable_->GetTextureId(), yuv_width_, yuv_height_);
is_yuv_texture_available_ = true;
}
std::lock_guard<std::mutex> lock(yuv_buffer_mutex_);
memcpy(&yuv_temp_buffer_[0], buffer->data, yuv_size_);
///
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
if (!stam.isCalibrated()) {
Profiler profiler;
profiler.startSampling();
stam.initFromChessboard(frame, cv::Size(9, 6), 100);
profiler.endSampling();
profiler.print("initFromChessboard", -1);
}
///
memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
swap_buffer_signal_ = true;
}
Here is the code of the method initFromChessBoard:
bool STAM::initFromChessboard(const cv::Mat& image, const cv::Size& chessBoardSize, int squareSize)
{
cv::Mat rvec = cv::Mat(cv::Size(3, 1), CV_64F);
cv::Mat tvec = cv::Mat(cv::Size(3, 1), CV_64F);
std::vector<cv::Point2d> imagePoints, imageBoardPoints;
std::vector<cv::Point3d> boardPoints;
for (int i = 0; i < chessBoardSize.height; i++)
{
for (int j = 0; j < chessBoardSize.width; j++)
{
boardPoints.push_back(cv::Point3d(j*squareSize, i*squareSize, 0.0));
}
}
//getting only the Y channel (many of the functions like face detect and align only needs the grayscale image)
cv::Mat gray(image.rows, image.cols, CV_8UC1);
gray.data = image.data;
bool found = findChessboardCorners(gray, chessBoardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);
#ifdef WINDOWS_VS
printf("Number of chessboard points: %d\n", imagePoints.size());
#elif ANDROID
LOGE("Number of chessboard points: %d", imagePoints.size());
#endif
for (int i = 0; i < imagePoints.size(); i++) {
cv::circle(image, imagePoints[i], 6, cv::Scalar(149, 43, 0), -1);
}
}
Is anyone having the same problem after processing something in the YUV buffer and rendering it on the texture?
I did a test using another device (not the Project Tango) with the camera2 API, and the rendering on the screen appears to run at the same rate as the OpenCV function itself.
I appreciate any help.
I had a similar problem. My app slowed down after using the copied yuv buffer and doing some image processing with OpenCV. I would recommend you use the tango_support library to access the yuv image buffer, by doing the following:
In your config function:
int AugmentedRealityApp::TangoSetupConfig() {
TangoSupport_createImageBufferManager(TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP, 1280, 720, &yuv_manager_);
}
In your callback function:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
TangoSupport_updateImageBuffer(yuv_manager_, buffer);
}
In your render thread:
void AugmentedRealityApp::Render() {
TangoImageBuffer* yuv = new TangoImageBuffer();
TangoSupport_getLatestImageBuffer(yuv_manager_, &yuv);
cv::Mat yuv_frame, rgb_img, gray_img;
yuv_frame.create(720*3/2, 1280, CV_8UC1);
memcpy(yuv_frame.data, yuv->data, 720*3/2*1280); // yuv image
cv::cvtColor(yuv_frame, rgb_img, CV_YUV2RGB_NV21); // rgb image
cvtColor(rgb_img, gray_img, CV_RGB2GRAY); // gray image
}
You can share the yuv_manager_ with other objects/threads so you can access the yuv image buffer wherever you want.
I am trying to scan a local image with ZBar, but since ZBar doesn't provide any documentation for Android (only detailed documentation for the iPhone), I have had to customize the CameraTest activity quite a lot, without any success.
In the ZBar CameraTest activity:
PreviewCallback previewCb = new PreviewCallback() {
public void onPreviewFrame(byte[] data, Camera camera) {
Camera.Parameters parameters = camera.getParameters();
Size size = parameters.getPreviewSize();
Image barcode = new Image(size.width, size.height, "Y800");
barcode.setData(data);
int result = scanner.scanImage(barcode);
if (result != 0) {
previewing = false;
mCamera.setPreviewCallback(null);
mCamera.stopPreview();
SymbolSet syms = scanner.getResults();
for (Symbol sym : syms) {
scanText.setText("barcode result " + sym.getData());
barcodeScanned = true;
}
}
}
};
I want to customize this code so that it takes a local image from the gallery and gives me the result. How do I customize this code to supply a local image from the gallery and scan that image?
Try this out:
Bitmap barcodeBmp = BitmapFactory.decodeResource(getResources(),
R.drawable.barcode);
int width = barcodeBmp.getWidth();
int height = barcodeBmp.getHeight();
int[] pixels = new int[width * height];
barcodeBmp.getPixels(pixels, 0, width, 0, 0, width, height);
Image barcode = new Image(width, height, "RGB4");
barcode.setData(pixels);
int result = scanner.scanImage(barcode.convert("Y800"));
Or using the API, refer to HOWTO: Scan images using the API.
The Java port of ZBar's scanner accepts only the Y800 and GRAY pixel formats (https://github.com/ZBar/ZBar/blob/master/java/net/sourceforge/zbar/ImageScanner.java), which is fine for raw bytes captured from the camera preview. But images from Android's Gallery are usually JPEG compressed and their pixels are not in Y800, so you can make the scanner work by converting the image's pixels to the Y800 format. See this official support forum thread for sample code. To calculate the pixel array length, just use the imageWidth*imageHeight formula.
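As an illustration only (not the code from that thread), a rough sketch of such a conversion might look like this; the decodeFile path and the integer luma coefficients are my assumptions:
// Hypothetical sketch: decode a gallery JPEG and convert it to Y800 (grayscale) for ZBar
Bitmap bmp = BitmapFactory.decodeFile(pathToImage);    // pathToImage: wherever the gallery image lives
int width = bmp.getWidth();
int height = bmp.getHeight();
int[] argb = new int[width * height];
bmp.getPixels(argb, 0, width, 0, 0, width, height);
byte[] y800 = new byte[width * height];
for (int i = 0; i < argb.length; i++) {
    int r = (argb[i] >> 16) & 0xFF;
    int g = (argb[i] >> 8) & 0xFF;
    int b = argb[i] & 0xFF;
    y800[i] = (byte) ((r * 77 + g * 151 + b * 28) >> 8);   // approximate BT.601 luma
}
Image barcode = new Image(width, height, "Y800");
barcode.setData(y800);
int result = scanner.scanImage(barcode);
The conversion happens in memory before handing the buffer to the scanner, so there is no need to re-encode the file on disk.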
@shujatAli your example image's palette format is grayscale; converting it to RGB made your code snippet work for me. You can change the palette format using an image manipulation program. I used GIMP.
I don't know exactly how to do it on Android, but on iOS do it like this:
//Action when user tap on button to call ZBarReaderController
- (IBAction)brownQRImageFromAlbum:(id)sender {
ZBarReaderController *reader = [ZBarReaderController new];
reader.readerDelegate = self;
reader.sourceType = UIImagePickerControllerSourceTypePhotoLibrary; // Set ZbarReaderController point to the local album
ZBarImageScanner *scanner = reader.scanner;
[scanner setSymbology: ZBAR_QRCODE
config: ZBAR_CFG_ENABLE
to: 1];
[self presentModalViewController: reader animated: YES];
}
- (void) imagePickerController: (UIImagePickerController *) picker
didFinishPickingMediaWithInfo: (NSDictionary *) info {
UIImage *imageCurrent = (UIImage*)[info objectForKey:UIImagePickerControllerOriginalImage];
self.imageViewQR.image = imageCurrent;
imageCurrent = nil;
// ADD: get the decode results
id<NSFastEnumeration> results =
[info objectForKey: ZBarReaderControllerResults];
ZBarSymbol *symbol = nil;
for (symbol in results)
break;
NSLog(@"Content: %@", symbol.data);
[picker dismissModalViewControllerAnimated: NO];
}
Reference for more details: http://zbar.sourceforge.net/iphone/sdkdoc/optimizing.html
Idea from HOWTO: Scan images using the API:
#include <iostream>
#include <Magick++.h>
#include <zbar.h>
using namespace std;
using namespace zbar;
int main (int argc, char **argv)
{
if(argc < 2)
return(1);
// Create a reader
ImageScanner scanner;
// Configure the reader
scanner.set_config(ZBAR_NONE, ZBAR_CFG_ENABLE, 1);
// Obtain image data
Magick::Image magick(argv[1]); // Read an image file
int width = magick.columns(); // Extract dimensions
int height = magick.rows();
Magick::Blob blob; // Extract the raw data
magick.modifyImage();
magick.write(&blob, "GRAY", 8);
const void *raw = blob.data();
// Wrap image data
Image image(width, height, "Y800", raw, width * height);
// Scan the image for barcodes
int n = scanner.scan(image);
// Extract results
for (Image::SymbolIterator symbol = image.symbol_begin();
symbol != image.symbol_end();
++symbol) {
// Do something useful with results
cout << "decoded " << symbol->get_type_name()
<< " symbol \"" << symbol->get_data() << '"' << endl;
}
// Clean up
image.set_data(NULL, 0);
return(0);
}
Follow the above code and change it so that it is relevant to your programming language.
I'm trying to encode an image to video using the ffmpeg library.
I have these global params:
//Global params
AVCodec *codec;
AVCodecContext *codecCtx;
uint8_t *output_buffer;
int output_buffer_size;
I divided the encoding into 3 methods:
Initialize the encoder:
jint Java_com_camera_simpledoublewebcams2_CameraPreview_initencoder(JNIEnv* env,jobject thiz){
avcodec_register_all();
avcodec_init();
av_register_all();
int fps = 30;
/* find the H263 video encoder */
codec = avcodec_find_encoder(CODEC_ID_H263);
if (!codec) {
LOGI("avcodec_find_encoder() run fail.");
return -5;
}
//allocate context
codecCtx = avcodec_alloc_context();
/* put sample parameters */
codecCtx->bit_rate = 400000;
/* resolution must be a multiple of two */
codecCtx->width = 176;
codecCtx->height = 144;
/* frames per second */
codecCtx->time_base = (AVRational){1,fps};
codecCtx->pix_fmt = PIX_FMT_YUV420P;
codecCtx->codec_id = CODEC_ID_H263;
codecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
/* open it */
if (avcodec_open(codecCtx, codec) < 0) {
LOGI("avcodec_open() run fail.");
return -10;
}
//init buffer
output_buffer_size = 500000;
output_buffer = malloc(output_buffer_size);
return 0;
}
Encoding the image:
jint Java_com_camera_simpledoublewebcams2_CameraPreview_encodejpeg(JNIEnv* env,jobject thiz,jchar* cImage, jint imageSize){
int out_size;
AVFrame *picture;
AVFrame *outpic;
uint8_t *outbuffer;
//allocate frame
picture = avcodec_alloc_frame();
outpic = avcodec_alloc_frame();
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, codecCtx->width, codecCtx->height);
outbuffer = (uint8_t*)av_malloc(nbytes);
outpic->pts = 0;
//fill picture with image
avpicture_fill((AVPicture*)picture, (uint8_t*)cImage, PIX_FMT_RGBA, codecCtx->width, codecCtx->height);
//fill outpic with empty image
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, codecCtx->width, codecCtx->height);
//rescale the image
struct SwsContext* fooContext = sws_getContext(codecCtx->width, codecCtx->height,
PIX_FMT_RGBA,
codecCtx->width, codecCtx->height,
PIX_FMT_YUV420P,
SWS_FAST_BILINEAR, NULL, NULL, NULL);
sws_scale(fooContext, picture->data, picture->linesize, 0, codecCtx->height, outpic->data, outpic->linesize);
//encode the image
out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, outpic);
out_size += avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, outpic);
//release pictures
av_free(outbuffer);
av_free(picture);
av_free(outpic);
return out_size;
}
And closing the encoder:
void Java_com_camera_simpledoublewebcams2_CameraPreview_closeencoder(JNIEnv* env,jobject thiz){
free(output_buffer);
avcodec_close(codecCtx);
av_free(codecCtx);
}
When I send the first image, I get a result from the encoder. When I try to send another image the program crashes.
I tried calling init once and then the images, then the close - didn't work.
I tried calling the init and the close for every image - didn't work.
Any suggestions?
Thanks!
EDIT: After further research, I found that the problem is in the sws_scale method.
Still don't know what is causing this issue...
out_size = avcodec_encode_video(codecCtx, output_buffer,output_buffer_size, outpic);
out_size += avcodec_encode_video(codecCtx, output_buffer, output_buffer_size,outpic);
Why are you encoding twice?
Perhaps the error is due to that double encoding. Try removing the second encoding.
In the process of tracking down severe memory issues in my app, I looked at several heap dumps, and most of the time I have a HUGE bitmap that I don't recognize.
It takes 9.4MB, or 9,830,400 bytes, i.e. a 1280x1920 image at 4 bytes per pixel.
I checked in Eclipse MAT: it is indeed a byte[9830400] that has one incoming reference, which is an android.graphics.Bitmap.
I'd like to dump this to a file and try to see it. I can't understand where it is coming from. My biggest image in all my drawables is a 640x960 png, which takes less than 3MB.
I tried to use Eclipse to "copy value to file", but I think it simply prints the buffer to the file, and I don't know any image software that can read a stream of bytes and display it as a 4-bytes-per-pixel image.
Any idea?
Here's what I tried: dump the byte array to a file, push it to /sdcard/img, and load an activity like this:
@Override
public void onCreate(final Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
final File inputFile = new File("/sdcard/img");
final FileInputStream isr = new FileInputStream(inputFile);
final Bitmap bmp = BitmapFactory.decodeStream(isr);
ImageView iv = new ImageView(this);
iv.setImageBitmap(bmp);
setContentView(iv);
Log.d("ImageTest", "Image was inflated");
} catch (final FileNotFoundException e) {
Log.d("ImageTest", "Image was not inflated");
}
}
I didn't see anything.
Do you know how the image is encoded? Say it is stored in a byte[] buffer: is buffer[0] red, buffer[1] green, etc.?
See here for an easier answer: MAT (Eclipse Memory Analyzer) - how to view bitmaps from memory dump
TL;DR - Install GIMP and load the image as raw RGB Alpha
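If you would rather check it on the device itself, here is a rough sketch (my own, not from that answer), assuming the dump really is raw RGBA_8888 pixel data and that you read the width and height from the Bitmap object in MAT. Note that BitmapFactory.decodeStream cannot work here, because it expects an encoded image (PNG/JPEG), not raw pixels:
// Hypothetical: load a raw RGBA_8888 pixel dump (e.g. /sdcard/img) into a Bitmap
// (exception handling omitted for brevity)
int width = 1280, height = 1920;                 // assumed; read them from the Bitmap object in MAT
byte[] raw = new byte[width * height * 4];
FileInputStream in = new FileInputStream(new File("/sdcard/img"));
in.read(raw);                                    // a read loop would be safer
in.close();
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(ByteBuffer.wrap(raw));  // bytes are interpreted as R, G, B, A
ImageView iv = new ImageView(this);
iv.setImageBitmap(bmp);
setContentView(iv);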
OK -- After quite some unsuccessful tries, I finally got something out of this byte array. I wrote this simple C program to convert the byte array to a Windows Bitmap file. I'm dropping the code in case somebody is interested.
I compiled this with Visual C++ 6.0 and gcc 3.4.4; it should work on any OS (tested on Windows, Linux and MacOS X).
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <stdlib.h>
/* Types */
typedef unsigned char byte;
typedef unsigned short uint16_t;
typedef unsigned int uint32_t;
typedef int int32_t;
/* Constants */
#define RMASK 0x00ff0000
#define GMASK 0x0000ff00
#define BMASK 0x000000ff
#define AMASK 0xff000000
/* Structures */
struct bmpfile_magic {
unsigned char magic[2];
};
struct bmpfile_header {
uint32_t filesz;
uint16_t creator1;
uint16_t creator2;
uint32_t bmp_offset;
};
struct bmpfile_dibheader {
uint32_t header_sz;
uint32_t width;
uint32_t height;
uint16_t nplanes;
uint16_t bitspp;
uint32_t compress_type;
uint32_t bmp_bytesz;
int32_t hres;
int32_t vres;
uint32_t ncolors;
uint32_t nimpcolors;
uint32_t rmask, gmask, bmask, amask;
uint32_t colorspace_type;
byte colorspace[0x24];
uint32_t rgamma, ggamma, bgamma;
};
/* Displays usage info and exits */
void usage(char *cmd) {
printf("Usage:\t%s <img_src> <img_dest.bmp> <width> <height>\n"
"\timg_src:\timage byte buffer obtained from Eclipse MAT, using 'copy > save value to file' while selecting the byte[] buffer corresponding to an android.graphics.Bitmap\n"
"\timg_dest:\tpath to target *.bmp file\n"
"\twidth:\t\tpicture width, obtained in Eclipse MAT, selecting the android.graphics.Bitmap object and seeing the object member values\n"
"\theight:\t\tpicture height\n\n", cmd);
exit(1);
}
/* C entry point */
int main(int argc, char **argv) {
FILE *in, *out;
char *file_in, *file_out;
int w, h, W, H;
byte r, g, b, a, *image;
struct bmpfile_magic magic;
struct bmpfile_header header;
struct bmpfile_dibheader dibheader;
/* Parse command line */
if (argc < 5) {
usage(argv[0]);
}
file_in = argv[1];
file_out = argv[2];
W = atoi(argv[3]);
H = atoi(argv[4]);
in = fopen(file_in, "rb");
out = fopen(file_out, "wb");
/* Check parameters */
if (in == NULL || out == NULL || W == 0 || H == 0) {
usage(argv[0]);
}
/* Init BMP headers */
magic.magic[0] = 'B';
magic.magic[1] = 'M';
header.filesz = W * H * 4 + sizeof(magic) + sizeof(header) + sizeof(dibheader);
header.creator1 = 0;
header.creator2 = 0;
header.bmp_offset = sizeof(magic) + sizeof(header) + sizeof(dibheader);
dibheader.header_sz = sizeof(dibheader);
dibheader.width = W;
dibheader.height = H;
dibheader.nplanes = 1;
dibheader.bitspp = 32;
dibheader.compress_type = 3;
dibheader.bmp_bytesz = W * H * 4;
dibheader.hres = 2835;
dibheader.vres = 2835;
dibheader.ncolors = 0;
dibheader.nimpcolors = 0;
dibheader.rmask = RMASK;
dibheader.gmask = BMASK;
dibheader.bmask = GMASK;
dibheader.amask = AMASK;
dibheader.colorspace_type = 0x57696e20;
memset(&dibheader.colorspace, 0, sizeof(dibheader.colorspace));
dibheader.rgamma = dibheader.bgamma = dibheader.ggamma = 0;
/* Read picture data */
image = (byte*) malloc(4*W*H);
if (image == NULL) {
printf("Could not allocate a %d-byte buffer.\n", 4*W*H);
exit(1);
}
fread(image, 4*W*H, sizeof(byte), in);
fclose(in);
/* Write header */
fwrite(&magic, sizeof(magic), 1, out);
fwrite(&header, sizeof(header), 1, out);
fwrite(&dibheader, sizeof(dibheader), 1, out);
/* Convert the byte array to BMP format */
for (h = H-1; h >= 0; h--) {
for (w = 0; w < W; w++) {
r = *(image + w*4 + 4 * W * h);
b = *(image + w*4 + 4 * W * h + 1);
g = *(image + w*4 + 4 * W * h + 2);
a = *(image + w*4 + 4 * W * h + 3);
fwrite(&b, 1, 1, out);
fwrite(&g, 1, 1, out);
fwrite(&r, 1, 1, out);
fwrite(&a, 1, 1, out);
}
}
free(image);
fclose(out);
}
So using this tool I was able to recognise the picture used to generate this 1280x1920 bitmap.
I found that starting from the latest version of Android Studio (2.2.2 as of writing), you can view the bitmap directly:
1. Open the 'Android Monitor' tab (at the bottom left) and then the Memory tab.
2. Press the 'Dump Java Heap' button.
3. Choose the 'Bitmap' class name in the current snapshot.
4. Select each instance of Bitmap, right-click on it, and select View Bitmap to see which image exactly consumes more memory than expected.
Just take the input to the image and convert it into a Bitmap object using a FileInputStream/data stream. Also add logs to see the data for each image that gets used.
You could enable a USB connection and copy the file to another computer with more tools to investigate.
Some devices can be configured to dump the current screen to the file system when the start button is pressed. Maybe this is what is happening to you.
I am trying to read an image from the sdcard (in the emulator) and then create a Bitmap with the
BitmapFactory.decodeByteArray
method. I set the options:
options.inPreferredConfig = Bitmap.Config.ARGB_8888
options.inDither = false
Then I extract the pixels into a ByteBuffer.
ByteBuffer buffer = ByteBuffer.allocateDirect(width*height*4)
bitmap.copyPixelsToBuffer(buffer)
I then use this ByteBuffer in JNI to convert it into RGB format and do calculations on it.
But I always get wrong data - I tested without modifying the ByteBuffer. The only thing I do is pass it into the native JNI method, cast it to an unsigned char*, and convert it back into a ByteBuffer before returning it to Java.
unsigned char* buffer = (unsigned char*)(env->GetDirectBufferAddress(byteBuffer))
jobject returnByteBuffer = env->NewDirectByteBuffer(buffer, length)
Before displaying the image I get data back with
bitmap.copyPixelsFromBuffer( buffer )
But then it has wrong data in it.
My question is whether this is because the image is internally converted into RGB_565, or whether something else is wrong here.
Update: I have an answer for it: yes, it is converted internally to RGB_565.
Does anybody know how to create such a bitmap image from a PNG with the ARGB_8888 pixel format?
If anybody has an idea, it would be great!
An ARGB_8888 Bitmap (on pre-Honeycomb versions) is natively stored in the RGBA format.
So the alpha channel is moved to the end. You should take this into account when accessing a Bitmap's pixels natively.
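To make that concrete, here is a small sketch (my own illustration, not part of the original answer) showing the byte order you get when pulling the pixels out of an ARGB_8888 Bitmap; the variable bitmap is assumed to be an ARGB_8888 Bitmap you already have:
// Despite the name ARGB_8888, the in-memory byte order is R, G, B, A
ByteBuffer buf = ByteBuffer.allocateDirect(bitmap.getWidth() * bitmap.getHeight() * 4);
bitmap.copyPixelsToBuffer(buf);
buf.rewind();
int r = buf.get() & 0xFF;   // first byte of the first pixel: red
int g = buf.get() & 0xFF;   // green
int b = buf.get() & 0xFF;   // blue
int a = buf.get() & 0xFF;   // alpha comes last, not first
The same ordering applies when you look at the buffer from native code via GetDirectBufferAddress or AndroidBitmap_lockPixels.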
I assume you are writing code for a version of Android lower than 3.2 (API level < 12), because since then the behavior of the methods
BitmapFactory.decodeFile(pathToImage);
BitmapFactory.decodeFile(pathToImage, opt);
bitmapObject.createScaledBitmap(bitmap, desiredWidth, desiredHeight, false /*filter?*/);
has changed.
On older platforms (API level < 12) the BitmapFactory.decodeFile(..) methods try to return a Bitmap with RGB_565 config by default, if they can't find any alpha, which lowers the quality of an image. This is still ok, because you can enforce an ARGB_8888 bitmap using
options.inPreferredConfig = Bitmap.Config.ARGB_8888
options.inDither = false
The real problem comes when each pixel of your image has an alpha value of 255 (i.e. completely opaque). In that case the Bitmap's flag 'hasAlpha' is set to false, even though your Bitmap has ARGB_8888 config. If your *.png-file had at least one real transparent pixel, this flag would have been set to true and you wouldn't have to worry about anything.
So when you want to create a scaled Bitmap using
bitmapObject.createScaledBitmap(bitmap, desiredWidth, desiredHeight, false /*filter?*/);
the method checks whether the 'hasAlpha' flag is set to true or false, and in your case it is set to false, which results in obtaining a scaled Bitmap, which was automatically converted to the RGB_565 format.
Therefore on API level >= 12 there is a public method called
public void setHasAlpha (boolean hasAlpha);
which would have solved this issue. So far this was just an explanation of the problem.
I did some research and noticed that the setHasAlpha method has existed for a long time and it's public, but has been hidden (@hide annotation). Here is how it is defined on Android 2.3:
/**
* Tell the bitmap if all of the pixels are known to be opaque (false)
* or if some of the pixels may contain non-opaque alpha values (true).
* Note, for some configs (e.g. RGB_565) this call is ignored, since it does
* not support per-pixel alpha values.
*
* This is meant as a drawing hint, as in some cases a bitmap that is known
* to be opaque can take a faster drawing case than one that may have
* non-opaque per-pixel alpha values.
*
* @hide
*/
public void setHasAlpha(boolean hasAlpha) {
nativeSetHasAlpha(mNativeBitmap, hasAlpha);
}
Now here is my solution proposal. It does not involve any copying of bitmap data:
1. Check at runtime, using java.lang.reflect, whether the current Bitmap implementation has a public 'setHasAlpha' method.
(According to my tests it works perfectly since API level 3; I haven't tested lower versions, because JNI wouldn't work.) You may have problems if a manufacturer has explicitly made it private, protected, or deleted it.
2. Call the 'setHasAlpha' method for a given Bitmap object using JNI.
This works perfectly, even for private methods or fields. It is official that JNI does not check whether you are violating the access control rules or not.
Source: http://java.sun.com/docs/books/jni/html/pitfalls.html (10.9)
This gives us great power, which should be used wisely. I wouldn't try modifying a final field, even if it would work (just to give an example). And please note this is just a workaround...
Here is my implementation of all necessary methods:
JAVA PART:
// NOTE: this cannot be used in switch statements
private static final boolean SETHASALPHA_EXISTS = setHasAlphaExists();
private static boolean setHasAlphaExists() {
// get all public Methods of the class Bitmap
java.lang.reflect.Method[] methods = Bitmap.class.getMethods();
// search for a method called 'setHasAlpha'
for(int i=0; i<methods.length; i++) {
if(methods[i].getName().contains("setHasAlpha")) {
Log.i(TAG, "method setHasAlpha was found");
return true;
}
}
Log.i(TAG, "couldn't find method setHasAlpha");
return false;
}
private static void setHasAlpha(Bitmap bitmap, boolean value) {
if(bitmap.hasAlpha() == value) {
Log.i(TAG, "bitmap.hasAlpha() == value -> do nothing");
return;
}
if(!SETHASALPHA_EXISTS) { // if we can't find it then API level MUST be lower than 12
// couldn't find the setHasAlpha-method
// <-- provide alternative here...
return;
}
// using android.os.Build.VERSION.SDK to support API level 3 and above
// use android.os.Build.VERSION.SDK_INT to support API level 4 and above
if(Integer.valueOf(android.os.Build.VERSION.SDK) <= 11) {
Log.i(TAG, "BEFORE: bitmap.hasAlpha() == " + bitmap.hasAlpha());
Log.i(TAG, "trying to set hasAplha to true");
int result = setHasAlphaNative(bitmap, value);
Log.i(TAG, "AFTER: bitmap.hasAlpha() == " + bitmap.hasAlpha());
if(result == -1) {
Log.e(TAG, "Unable to access bitmap."); // usually due to a bug in the own code
return;
}
} else { //API level >= 12
bitmap.setHasAlpha(true);
}
}
/**
* Decodes a Bitmap from the SD card
* and scales it if necessary
*/
public Bitmap decodeBitmapFromFile(String pathToImage, int pixels_limit) {
Bitmap bitmap;
Options opt = new Options();
opt.inDither = false; //important
opt.inPreferredConfig = Bitmap.Config.ARGB_8888;
bitmap = BitmapFactory.decodeFile(pathToImage, opt);
if(bitmap == null) {
Log.e(TAG, "unable to decode bitmap");
return null;
}
setHasAlpha(bitmap, true); // if necessary
int numOfPixels = bitmap.getWidth() * bitmap.getHeight();
if(numOfPixels > pixels_limit) { //image needs to be scaled down
// ensures that the scaled image uses the maximum of the pixel_limit while keeping the original aspect ratio
// i use: private static final int pixels_limit = 1280*960; //1,3 Megapixel
imageScaleFactor = Math.sqrt((double) pixels_limit / (double) numOfPixels);
Bitmap scaledBitmap = Bitmap.createScaledBitmap(bitmap,
(int) (imageScaleFactor * bitmap.getWidth()), (int) (imageScaleFactor * bitmap.getHeight()), false);
bitmap.recycle();
bitmap = scaledBitmap;
Log.i(TAG, "scaled bitmap config: " + bitmap.getConfig().toString());
Log.i(TAG, "pixels_limit = " + pixels_limit);
Log.i(TAG, "scaled_numOfpixels = " + scaledBitmap.getWidth()*scaledBitmap.getHeight());
setHasAlpha(bitmap, true); // if necessary
}
return bitmap;
}
Load your lib and declare the native method:
static {
System.loadLibrary("bitmaputils");
}
private static native int setHasAlphaNative(Bitmap bitmap, boolean value);
Native section ('jni' folder)
Android.mk:
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := bitmaputils
LOCAL_SRC_FILES := bitmap_utils.c
LOCAL_LDLIBS := -llog -ljnigraphics -lz -ldl -lgcc
include $(BUILD_SHARED_LIBRARY)
bitmap_utils.c:
#include <jni.h>
#include <android/bitmap.h>
#include <android/log.h>
#define LOG_TAG "BitmapTest"
#define Log_i(...) __android_log_print(ANDROID_LOG_INFO,LOG_TAG,__VA_ARGS__)
#define Log_e(...) __android_log_print(ANDROID_LOG_ERROR,LOG_TAG,__VA_ARGS__)
// caching class and method IDs for a faster subsequent access
static jclass bitmap_class = 0;
static jmethodID setHasAlphaMethodID = 0;
jint Java_com_example_bitmaptest_MainActivity_setHasAlphaNative(JNIEnv * env, jclass clazz, jobject bitmap, jboolean value) {
AndroidBitmapInfo info;
void* pixels;
if (AndroidBitmap_getInfo(env, bitmap, &info) < 0) {
Log_e("Failed to get Bitmap info");
return -1;
}
if (info.format != ANDROID_BITMAP_FORMAT_RGBA_8888) {
Log_e("Incompatible Bitmap format");
return -1;
}
if (AndroidBitmap_lockPixels(env, bitmap, &pixels) < 0) {
Log_e("Failed to lock the pixels of the Bitmap");
return -1;
}
// get class
if(bitmap_class == NULL) { //initializing jclass
// NOTE: The class Bitmap exists since API level 1, so it just must be found.
bitmap_class = (*env)->GetObjectClass(env, bitmap);
if(bitmap_class == NULL) {
Log_e("bitmap_class == NULL");
return -2;
}
}
// get methodID
if(setHasAlphaMethodID == NULL) { //initializing jmethodID
// NOTE: If this fails, because the method could not be found the App will crash.
// But we only call this part of the code if the method was found using java.lang.Reflect
setHasAlphaMethodID = (*env)->GetMethodID(env, bitmap_class, "setHasAlpha", "(Z)V");
if(setHasAlphaMethodID == NULL) {
Log_e("methodID == NULL");
return -2;
}
}
// call java instance method
(*env)->CallVoidMethod(env, bitmap, setHasAlphaMethodID, value);
// if an exception was thrown we could handle it here
if ((*env)->ExceptionOccurred(env)) {
(*env)->ExceptionDescribe(env);
(*env)->ExceptionClear(env);
Log_e("calling setHasAlpha threw an exception");
return -2;
}
if(AndroidBitmap_unlockPixels(env, bitmap) < 0) {
Log_e("Failed to unlock the pixels of the Bitmap");
return -1;
}
return 0; // success
}
That's it. We are done. I've posted the whole code for copy-and-paste purposes.
The actual code isn't that big, but making all these paranoid error checks makes it a lot bigger. I hope this could be helpful to anyone.