Grayscaled image losing its brightness in NDK (C/C++) Android

I am passing a Bitmap with the ARGB_8888 config.
I am able to apply the grayscale effect to the image, but after applying it the image loses its brightness.
I have googled a lot, but found only the same implementation I already have.
Here is my native implementation:
JNIEXPORT void JNICALL Java_com_example_ndksampleproject_MainActivity_jniConvertToGray(JNIEnv * env, jobject obj, jobject bitmapcolor, jobject bitmapgray)
{
    AndroidBitmapInfo infocolor;
    void* pixelscolor;
    AndroidBitmapInfo infogray;
    void* pixelsgray;
    int ret;
    int y;
    int x;

    LOGI("convertToGray");
    if ((ret = AndroidBitmap_getInfo(env, bitmapcolor, &infocolor)) < 0) {
        LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret);
        return;
    }
    if ((ret = AndroidBitmap_getInfo(env, bitmapgray, &infogray)) < 0) {
        LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret);
        return;
    }

    LOGI("color image :: width is %d; height is %d; stride is %d; format is %d; flags is %d", infocolor.width, infocolor.height, infocolor.stride, infocolor.format, infocolor.flags);
    if (infocolor.format != ANDROID_BITMAP_FORMAT_RGBA_8888) {
        LOGE("Bitmap format is not RGBA_8888 !");
        return;
    }

    LOGI("gray image :: width is %d; height is %d; stride is %d; format is %d; flags is %d", infogray.width, infogray.height, infogray.stride, infogray.format, infogray.flags);
    if (infogray.format != ANDROID_BITMAP_FORMAT_A_8) {
        LOGE("Bitmap format is not A_8 !");
        return;
    }

    if ((ret = AndroidBitmap_lockPixels(env, bitmapcolor, &pixelscolor)) < 0) {
        LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret);
    }
    if ((ret = AndroidBitmap_lockPixels(env, bitmapgray, &pixelsgray)) < 0) {
        LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret);
    }

    LOGI("unlocking pixels height = %d", infocolor.height);

    // modify pixels with image processing algorithm
    for (y = 0; y < infocolor.height; y++) {
        argb * line = (argb *) pixelscolor;
        uint8_t * grayline = (uint8_t *) pixelsgray;
        for (x = 0; x < infocolor.width; x++) {
            grayline[x] = ((255 - 0.3 * line[x].red) + (255 - 0.59 * line[x].green) + (255 - 0.11 * line[x].blue)) / 3;
        }
        pixelscolor = (char *) pixelscolor + infocolor.stride;
        pixelsgray = (char *) pixelsgray + infogray.stride;
    }

    LOGI("unlocking pixels");
    AndroidBitmap_unlockPixels(env, bitmapcolor);
    AndroidBitmap_unlockPixels(env, bitmapgray);
}
Result:

Please let me know if you need anything from my side.
Please help me get rid of this issue, as I have been stuck on it for many hours.
Many thanks in advance!
EDIT:
After applying Mark Setchell's suggestion:

If you invert the image above, you get this, which looks correct to me:

Don't divide by 3 on the line where you calculate grayline[x]. Your answer is already correctly weighted, because 0.3 + 0.59 + 0.11 = 1:

    grayline[x] = (255-0.3 * line[x].red) + (255-0.59 * line[x].green) + (255-0.11*line[x].blue);

There are two problems with your current code.
1) As mentioned by others, do not divide the final result by three. If you were calculating grayscale using the average method (e.g. gray = (R + G + B) / 3), the division would be necessary. For the ITU conversion formula you are using, there is no need for this extra division, because the fractional amounts already sum to 1.
2) The inversion occurs because you are subtracting each color value from 255. There is no need to do this.
The correct grayscale conversion code for your current formula would be:

    grayline[x] = (0.3 * line[x].red) + (0.59 * line[x].green) + (0.11 * line[x].blue);
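For completeness, the whole corrected loop would then look like the sketch below - the same loop structure as the question, with the inversion and the division removed (untested here, but since the weights sum to 1 the result stays within 0..255):

    for (y = 0; y < infocolor.height; y++) {
        argb * line = (argb *) pixelscolor;
        uint8_t * grayline = (uint8_t *) pixelsgray;
        for (x = 0; x < infocolor.width; x++) {
            // weighted ITU luminance; no inversion, no division by 3
            grayline[x] = (uint8_t) (0.3 * line[x].red + 0.59 * line[x].green + 0.11 * line[x].blue);
        }
        pixelscolor = (char *) pixelscolor + infocolor.stride;
        pixelsgray = (char *) pixelsgray + infogray.stride;
    }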

Related

Apply grayscale effect to Image using NDK(C/C++) in Android

I want to apply a grayscale effect to an image using the NDK.
I have googled a lot for this, but found the same result, which returns the image as something like a negative (or so I believe).
What I want:
For example:
I have this original image
After applying the grayscale effect, it should look like this:
What I have tried:
I want to achieve this functionality using the NDK, so I have created one function in a .cpp file:
JNIEXPORT void JNICALL Java_com_example_ndksampleproject_MainActivity_jniConvertToGray(JNIEnv * env, jobject obj, jobject bitmapcolor, jobject bitmapgray)
{
    AndroidBitmapInfo infocolor;
    void* pixelscolor;
    AndroidBitmapInfo infogray;
    void* pixelsgray;
    int ret;
    int y;
    int x;

    LOGI("convertToGray");
    if ((ret = AndroidBitmap_getInfo(env, bitmapcolor, &infocolor)) < 0) {
        LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret);
        return;
    }
    if ((ret = AndroidBitmap_getInfo(env, bitmapgray, &infogray)) < 0) {
        LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret);
        return;
    }

    LOGI("color image :: width is %d; height is %d; stride is %d; format is %d; flags is %d", infocolor.width, infocolor.height, infocolor.stride, infocolor.format, infocolor.flags);
    if (infocolor.format != ANDROID_BITMAP_FORMAT_RGBA_8888) {
        LOGE("Bitmap format is not RGBA_8888 !");
        return;
    }

    LOGI("gray image :: width is %d; height is %d; stride is %d; format is %d; flags is %d", infogray.width, infogray.height, infogray.stride, infogray.format, infogray.flags);
    if (infogray.format != ANDROID_BITMAP_FORMAT_A_8) {
        LOGE("Bitmap format is not A_8 !");
        return;
    }

    if ((ret = AndroidBitmap_lockPixels(env, bitmapcolor, &pixelscolor)) < 0) {
        LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret);
    }
    if ((ret = AndroidBitmap_lockPixels(env, bitmapgray, &pixelsgray)) < 0) {
        LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret);
    }

    LOGI("unlocking pixels height = %d", infocolor.height);

    // modify pixels with image processing algorithm
    for (y = 0; y < infocolor.height; y++) {
        argb * line = (argb *) pixelscolor;
        uint8_t * grayline = (uint8_t *) pixelsgray;
        for (x = 0; x < infocolor.width; x++) {
            grayline[x] = 0.3 * line[x].red + 0.59 * line[x].green + 0.11 * line[x].blue;
        }
        pixelscolor = (char *) pixelscolor + infocolor.stride;
        pixelsgray = (char *) pixelsgray + infogray.stride;
    }

    LOGI("unlocking pixels");
    AndroidBitmap_unlockPixels(env, bitmapcolor);
    AndroidBitmap_unlockPixels(env, bitmapgray);
}
The above function returns a result like this:
This effect looks something like a negative of the image.
Let me know if you need anything from my side.
Please help me solve this issue, as I have been stuck on it for many hours.
Many thanks in advance...
EDIT:
floppy12's suggestion:
    for (y = 0; y < infocolor.height; y++) {
        argb * line = (argb *) pixelscolor;
        uint8_t * grayline = (uint8_t *) pixelsgray;
        for (x = 0; x < infocolor.width; x++) {
            grayline[x] = (255 - 0.3 * line[x].red) + (255 - 0.59 * line[x].green) + (255 - 0.11 * line[x].blue) / 3;
        }
        pixelscolor = (char *) pixelscolor + infocolor.stride;
        pixelsgray = (char *) pixelsgray + infogray.stride;
    }
Output:
EDIT 2:
I have made some simple modifications and it returns the image I wanted, but the image has lost its brightness.
These are the changes I made in the native function:
    for (y = 0; y < infocolor.height; y++) {
        argb * line = (argb *) pixelscolor;
        uint8_t * grayline = (uint8_t *) pixelsgray;
        for (x = 0; x < infocolor.width; x++) {
            grayline[x] = ((255 - 0.3 * line[x].red) + (255 - 0.59 * line[x].green) + (255 - 0.11 * line[x].blue)) / 3;
        }
        pixelscolor = (char *) pixelscolor + infocolor.stride;
        pixelsgray = (char *) pixelsgray + infogray.stride;
    }
Result (the image is grayscaled, but loses its brightness):
To obtain an image in grayscale, each pixel should have the same amount of red, green and blue.
Maybe use the red component and assign it to both green and blue in your grayline computation, or use the formula (R+G+B)/3 = Gray.
Negative images are normally obtained by shifting each component:
NegR = 255 - grayR
and so on.
So you could try to compute grayscale[x] = (255 - 0.3*line[x]) + ...
Edit for brightness:
To obtain better brightness, try adding a fixed amount to your grayscale computation:
G += Bness;
Here it seems that Bness should be negative, as long as you are going from 255 (black) to 0 (white) for some strange reason. You want to set a lower limit so that your grayscale value does not go under 0, so try:
G = max(0, G + Bness);
I recommend something like Bness = -25.
Edit, brightness implementation:

    // Declare a global variable for your brightness - outside your class.
    // Note it must be a signed type: a uint8_t cannot hold -25.
    static int8_t bness = -25;

    // In your grayscale computation function
    for y...
        for x...
            grayscale[x] = ( (255-0.3*line[x].red) + ..... ) / 3;
            int16_t gBright = grayscale[x] + bness;
            grayscale[x] = MAX( 0, gBright );

32 bpp monochrome bitmap to 1 bpp TIFF

My Android app uses an external lib that applies some image treatments. The final output of the treatment chain is a monochrome bitmap, but saved as a color bitmap (32bpp).
The image has to be uploaded to a cloud blob, so for bandwidth concerns I'd like to convert it to a 1bpp, Group 4 compressed TIFF. I successfully integrated libTIFF into my app via JNI and now I'm writing the conversion routine in C. I'm a little stuck here.
I managed to produce a 32bpp TIFF, but it seems impossible to reduce it to 1bpp; the output image is always unreadable. Has anyone succeeded at a similar task?
More specifically:
What should be the values of the SAMPLESPERPIXEL and BITSPERSAMPLE parameters?
How do I determine the strip size?
How do I fill each strip? (i.e. how do I convert 32bpp pixel lines to 1bpp pixel strips?)
Many thanks!
UPDATE: The code produced with the precious help of Mohit Jain:
int ConvertMonochrome32BppBitmapTo1BppTiff(char* bitmap, int height, int width, int resx, int resy, char const *tifffilename)
{
    TIFF *tiff;
    if ((tiff = TIFFOpen(tifffilename, "w")) == NULL)
    {
        return TC_ERROR_OPEN_FAILED;
    }

    // TIFF settings
    TIFFSetField(tiff, TIFFTAG_RESOLUTIONUNIT, RESUNIT_INCH);
    TIFFSetField(tiff, TIFFTAG_XRESOLUTION, resx);
    TIFFSetField(tiff, TIFFTAG_YRESOLUTION, resy);
    TIFFSetField(tiff, TIFFTAG_COMPRESSION, COMPRESSION_CCITTFAX4); // Group 4 compression
    TIFFSetField(tiff, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(tiff, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(tiff, TIFFTAG_ROWSPERSTRIP, 1);
    TIFFSetField(tiff, TIFFTAG_SAMPLESPERPIXEL, 1);
    TIFFSetField(tiff, TIFFTAG_BITSPERSAMPLE, 1);
    TIFFSetField(tiff, TIFFTAG_ORIENTATION, ORIENTATION_TOPLEFT);
    TIFFSetField(tiff, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tiff, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISWHITE);

    tsize_t tbufsize = (width + 7) / 8; // TIFF scanline buffer size for a 1bpp pixel row

    // Now write the image to the file, one row at a time
    int x, y;
    for (y = 0; y < height; y++)
    {
        char *buffer = malloc(tbufsize);
        memset(buffer, 0, tbufsize);
        for (x = 0; x < width; x++)
        {
            // Offset of the 1st byte of each pixel in the input image (enough to
            // decide black or white in a 32bpp monochrome bitmap)
            uint32 bmpoffset = ((y * width) + x) * 4;
            if (bitmap[bmpoffset] == 0) // Black pixel?
            {
                uint32 tiffoffset = x / 8;
                *(buffer + tiffoffset) |= (0b10000000 >> (x % 8));
            }
        }
        if (TIFFWriteScanline(tiff, buffer, y, 0) != 1)
        {
            free(buffer);    // don't leak the row buffer on the error path
            TIFFClose(tiff); // and close the half-written file
            return TC_ERROR_WRITING_FAILED;
        }
        free(buffer);
    }
    TIFFClose(tiff);
    tiff = NULL;
    return TC_SUCCESSFULL;
}
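For reference, a hypothetical call site - the resolution values, output path, and the use of the locked bitmap pointer from the earlier snippets are made-up examples, not part of the original code:

    // pixels obtained from AndroidBitmap_lockPixels(), as in the grayscale snippets;
    // 300x300 dpi and the output path are arbitrary example values
    int rc = ConvertMonochrome32BppBitmapTo1BppTiff((char *) pixels, info.height, info.width, 300, 300, "/sdcard/out.tif");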
To convert 32bpp to 1bpp, extract RGB, convert it to Y (luminance), and use some threshold to convert to 1bpp.
Samples per pixel and bits per sample should both be 1.
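A minimal sketch of that luminance-plus-threshold step, assuming 4 bytes per pixel with R, G, B in the first three bytes (the channel order and the threshold of 128 are assumptions to adapt):

    // Decide whether a 32bpp pixel becomes black at 1bpp.
    static int IsBlackPixel(const unsigned char *px)
    {
        // ITU-style weighted luminance from the R, G, B bytes
        double y = 0.3 * px[0] + 0.59 * px[1] + 0.11 * px[2];
        return y < 128; // pixels darker than the threshold count as black
    }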

inPreferredConfig() not working in Android Gingerbread

I have an app built in Android 2.2 and I'm using inPreferredConfig() to switch a bitmap to the ARGB_8888 format; however, this doesn't seem to work, as when checked immediately afterwards the bitmap is still in the RGB_565 format. I've tried changing it to any of the other formats and none of those work either.
The function works fine if the phone or emulator is running Android 2.2, but anything above that fails. Does anyone know why this is happening? Is inPreferredConfig() deprecated in later Android versions?
What I'm doing:
I'm using the NDK with some C code I found to run some image processing functions (taken from http://www.ibm.com/developerworks/opensource/tutorials/os-androidndk/section5.html). The C code expects the image format to be ARGB_8888; although the Android documentation says the format should already be 8888 by default, it's definitely 565, so I'm very confused.
I'm guessing I could convert it in C... but I'm terrible at C, so I wouldn't know where to start.
My C function:
{
    AndroidBitmapInfo infocolor;
    void* pixelscolor;
    AndroidBitmapInfo infogray;
    void* pixelsgray;
    int ret;
    int y;
    int x;

    LOGI("convertToGray");
    if ((ret = AndroidBitmap_getInfo(env, bitmapcolor, &infocolor)) < 0) {
        LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret);
        return;
    }
    if ((ret = AndroidBitmap_getInfo(env, bitmapgray, &infogray)) < 0) {
        LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret);
        return;
    }

    LOGI("color image :: width is %d; height is %d; stride is %d; format is %d; flags is %d", infocolor.width, infocolor.height, infocolor.stride, infocolor.format, infocolor.flags);
    if (infocolor.format != ANDROID_BITMAP_FORMAT_RGBA_8888) {
        LOGE("Bitmap format is not RGBA_8888 !");
        return;
    }

    LOGI("gray image :: width is %d; height is %d; stride is %d; format is %d; flags is %d", infogray.width, infogray.height, infogray.stride, infogray.format, infogray.flags);
    if (infogray.format != ANDROID_BITMAP_FORMAT_A_8) {
        LOGE("Bitmap format is not A_8 !");
        return;
    }

    if ((ret = AndroidBitmap_lockPixels(env, bitmapcolor, &pixelscolor)) < 0) {
        LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret);
    }
    if ((ret = AndroidBitmap_lockPixels(env, bitmapgray, &pixelsgray)) < 0) {
        LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret);
    }

    // modify pixels with image processing algorithm
    for (y = 0; y < infocolor.height; y++) {
        argb * line = (argb *) pixelscolor;
        uint8_t * grayline = (uint8_t *) pixelsgray;
        for (x = 0; x < infocolor.width; x++) {
            grayline[x] = 0.3 * line[x].red + 0.59 * line[x].green + 0.11 * line[x].blue;
        }
        pixelscolor = (char *) pixelscolor + infocolor.stride;
        pixelsgray = (char *) pixelsgray + infogray.stride;
    }

    LOGI("unlocking pixels");
    AndroidBitmap_unlockPixels(env, bitmapcolor);
    AndroidBitmap_unlockPixels(env, bitmapgray);
}
My Java functions:

    // load bitmap from resources
    BitmapFactory.Options options = new BitmapFactory.Options();
    // Make sure it is 24 bit color as our image processing algorithm expects this format
    options.inPreferredConfig = Config.ARGB_8888;
    bitmapOrig = BitmapFactory.decodeResource(this.getResources(), R.drawable.sampleimage, options);
    if (bitmapOrig != null)
        ivDisplay.setImageBitmap(bitmapOrig);

    bitmapWip = Bitmap.createBitmap(bitmapOrig.getWidth(), bitmapOrig.getHeight(), Config.ALPHA_8);
    convertToGray(bitmapOrig, bitmapWip);
    ivDisplay.setImageBitmap(bitmapWip);

Thanks, N
P.S. My last question on the same subject got deleted, which is annoying, as I can't find any answers to this anywhere.
Images are loaded with the ARGB_8888 config by default, according to the documentation, so my guess is that it recognizes the RGB_565 format of your bitmap and changes the config to match. I don't see why this should be a problem if the original image is in RGB_565 format and has no transparency.
Here's the documentation - read the last bit:
If this is non-null, the decoder will try to decode into this internal configuration. If it is null, or the request cannot be met, the decoder will try to pick the best matching config based on the system's screen depth, and characteristics of the original image such as if it has per-pixel alpha (requiring a config that also does). Image are loaded with the ARGB_8888 config by default.
http://developer.android.com/reference/android/graphics/BitmapFactory.Options.html#inPreferredConfig
This is old, but so far not satisfactorily answered:
I just ran into the same problem the other day.
So far I couldn't solve it, and I'm considering writing a converter, since I'm using OpenCV and it doesn't support the 565 format.
Saving the image and loading it again with a different configuration works, but unfortunately for a real-time camera application this is not feasible.
Have a look at this code:
How does one convert 16-bit RGB565 to 24-bit RGB888?
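The core of that conversion is just shifts and masks. A sketch in C (replicating the high bits into the low bits is one common way to fill in the missing precision; adapt as needed):

    #include <stdint.h>

    // Expand one RGB565 pixel to 8-bit-per-channel RGB888.
    // The 5- and 6-bit fields are widened by replicating their high bits.
    static void Rgb565ToRgb888(uint16_t px, uint8_t *r, uint8_t *g, uint8_t *b)
    {
        uint8_t r5 = (px >> 11) & 0x1F;
        uint8_t g6 = (px >> 5)  & 0x3F;
        uint8_t b5 =  px        & 0x1F;
        *r = (r5 << 3) | (r5 >> 2);
        *g = (g6 << 2) | (g6 >> 4);
        *b = (b5 << 3) | (b5 >> 2);
    }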

FFmpeg sample code for creating a video file from still images JNI Android

How do I modify the following FFmpeg sample code to create a video file from still images that I have on my Android phone? I am using JNI to invoke FFmpeg.
JNIEXPORT void JNICALL videoEncodeExample(JNIEnv *pEnv, jobject pObj, jstring filename)
{
    AVCodec *codec;
    AVCodecContext *c = NULL;
    int i, out_size, size, x, y, outbuf_size;
    FILE *f;
    AVFrame *picture;
    uint8_t *outbuf, *picture_buf;

    /* a jstring cannot be passed to fopen() directly; convert it first */
    const char *path = (*pEnv)->GetStringUTFChars(pEnv, filename, NULL);

    printf("Video encoding\n");

    /* find the mpeg1 video encoder */
    codec = avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
    if (!codec) {
        fprintf(stderr, "codec not found\n");
        exit(1);
    }

    c = avcodec_alloc_context();
    picture = avcodec_alloc_frame();

    /* put sample parameters */
    c->bit_rate = 400000;
    /* resolution must be a multiple of two */
    c->width = 352;
    c->height = 288;
    /* frames per second */
    c->time_base = (AVRational){1, 25};
    c->gop_size = 10; /* emit one intra frame every ten frames */
    c->max_b_frames = 1;
    c->pix_fmt = PIX_FMT_YUV420P;

    /* open it */
    if (avcodec_open(c, codec) < 0) {
        fprintf(stderr, "could not open codec\n");
        exit(1);
    }

    f = fopen(path, "wb");
    if (!f) {
        fprintf(stderr, "could not open %s\n", path);
        exit(1);
    }

    /* alloc image and output buffer */
    outbuf_size = 100000;
    outbuf = malloc(outbuf_size);
    size = c->width * c->height;
    picture_buf = malloc((size * 3) / 2); /* size for YUV 420 */
    picture->data[0] = picture_buf;
    picture->data[1] = picture->data[0] + size;
    picture->data[2] = picture->data[1] + size / 4;
    picture->linesize[0] = c->width;
    picture->linesize[1] = c->width / 2;
    picture->linesize[2] = c->width / 2;

    /* encode 1 second of video */
    for (i = 0; i < 25; i++) {
        fflush(stdout);
        /* prepare a dummy image */
        /* Y */
        for (y = 0; y < c->height; y++) {
            for (x = 0; x < c->width; x++) {
                picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
            }
        }
        /* Cb and Cr */
        for (y = 0; y < c->height / 2; y++) {
            for (x = 0; x < c->width / 2; x++) {
                picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
                picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
            }
        }
        /* encode the image */
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture);
        printf("encoding frame %3d (size=%5d)\n", i, out_size);
        fwrite(outbuf, 1, out_size, f);
    }

    /* get the delayed frames */
    for (; out_size; i++) {
        fflush(stdout);
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
        printf("write frame %3d (size=%5d)\n", i, out_size);
        fwrite(outbuf, 1, out_size, f);
    }

    /* add sequence end code to have a real mpeg file */
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    free(picture_buf);
    free(outbuf);

    avcodec_close(c);
    av_free(c);
    av_free(picture);
    (*pEnv)->ReleaseStringUTFChars(pEnv, filename, path);
    printf("\n");
}
Thanks and Regards
Anish
In your sample code, the encoded image is a dummy.
So I think what you should do is replace the dummy image with the actual images on your Android device, and remember to convert their format to YUV420.
In your code you are using the image format PIX_FMT_YUV420P, so you need to convert your images to YUV420P. Since Android uses the raw picture format YUV420SP (PIX_FMT_NV21), you will also need to convert/scale your images using the libswscale library provided with FFmpeg. Maybe one of my answers will help you; check it: Converting YUV420SP to YUV420P
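Roughly, that conversion looks like the sketch below, written against the same era of the FFmpeg API as the question's code (the width/height variables and the NV21 input buffer 'src' are assumptions):

    #include <libswscale/swscale.h>

    // Convert one NV21 (YUV420SP) frame in 'src' into the YUV420P planes of 'picture'.
    // NV21 is a full-size Y plane followed by an interleaved V/U plane at half height.
    struct SwsContext *sws = sws_getContext(width, height, PIX_FMT_NV21,
                                            width, height, PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    const uint8_t *const srcSlice[] = { src, src + width * height, NULL, NULL };
    int srcStride[] = { width, width, 0, 0 };
    sws_scale(sws, srcSlice, srcStride, 0, height, picture->data, picture->linesize);
    sws_freeContext(sws);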

How to cast an IplImage* to a cv::Mat*?

I'm a beginner in the C language and I need to copy pixels to an Android Bitmap. I'm using a piece of code from Android OpenCV, used in a JNI function:
    AndroidBitmapInfo info;
    void* pixels;
    int ret;
    cv::Mat* mat;

    if ((ret = AndroidBitmap_getInfo(env, bitmap, &info)) < 0) {
        LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret);
        return false; // can't get info
    }
    if (info.format != ANDROID_BITMAP_FORMAT_RGBA_8888) {
        LOGE("Bitmap format is not RGBA_8888 !");
        return false; // incompatible format
    }
    if ((ret = AndroidBitmap_lockPixels(env, bitmap, &pixels)) < 0) {
        LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret);
        return false; // can't get pixels
    }
    memcpy(pixels, mat->data, info.height * info.width * 4);
    AndroidBitmap_unlockPixels(env, bitmap);
So, I have an IplImage* called pImage, but I don't know how to convert an IplImage* to a cv::Mat*. I see a way to convert to a cv::Mat, like this:

    cv::Mat mat(pImage);

But I need a cv::Mat*, not a cv::Mat. Any help?
Answering your question title:

    cv::Mat* mat = new cv::Mat(pImage);
The OpenCV tutorials include an article on Interoperability with OpenCV 1.
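Putting that together with the copy from the question - a sketch, assuming pImage is a 4-channel RGBA image whose size matches the bitmap (and remembering that what you new you must later delete):

    // Wrap the IplImage* in a heap-allocated cv::Mat header (no pixel copy by default)
    cv::Mat* mat = new cv::Mat(pImage);

    // ... lock the bitmap as in the question, then copy the pixel data ...
    memcpy(pixels, mat->data, info.height * info.width * 4);
    AndroidBitmap_unlockPixels(env, bitmap);

    delete mat; // frees the Mat header; pImage itself is still owned by its creator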
