I wanted to apply an image filter to my image, so I used the Android HelloEffects sample.
It converts the image data from a bitmap to a texture.
After applying the image filter effect, I'd like to get the image back in JPEG format, but I don't know how to do that.
I did manage one approach:
I converted the texture image into a bitmap using the glReadPixels() method.
I then saved the bitmap to the SD card.
Texture-to-bitmap code:
public static Bitmap SavePixels(int x, int y, int w, int h) {
    int[] b = new int[w * h];
    int[] bt = new int[w * h];
    IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    GLES20.glReadPixels(x, y, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ib);
    // Remember that the OpenGL pixel layout is incompatible with Android's Bitmap:
    // glReadPixels returns RGBA rows bottom-up, while Bitmap expects ARGB rows
    // top-down, so swap the R and B channels and flip the image vertically.
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++) {
            int pix = b[i * w + j];
            int pb = (pix >> 16) & 0xff;        // blue moved down
            int pr = (pix << 16) & 0x00ff0000;  // red moved up
            int pix1 = (pix & 0xff00ff00) | pr | pb;
            bt[(h - i - 1) * w + j] = pix1;     // vertical flip
        }
    }
    return Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
}
Code to save the bitmap to the SD card:
public static void saveImage(Bitmap finalBitmap) {
    File myDir = new File(Environment.getExternalStorageDirectory(), "saved_images");
    myDir.mkdirs();
    // Random suffix so successive saves don't overwrite each other
    Random generator = new Random();
    int n = generator.nextInt(10000);
    String fname = "Image-" + n + ".jpg";
    File file = new File(myDir, fname);
    if (file.exists()) file.delete();
    try {
        FileOutputStream out = new FileOutputStream(file);
        finalBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
        out.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Check the saveFrame() method mentioned in http://bigflake.com/mediacodec/ExtractMpegFramesTest_egl14.java.txt
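For reference, a minimal sketch of that saveFrame()-style approach, reading straight into a direct ByteBuffer and letting copyPixelsFromBuffer() do the channel mapping (the method name and PNG output follow the linked sample, but treat the details as an approximation rather than the exact bigflake code):

// Assumes a current GL context on this thread. ARGB_8888 bitmaps store
// RGBA bytes in memory, so the glReadPixels output maps onto the bitmap
// directly; note the result is vertically flipped unless you flip it yourself.
public static void saveFrame(String filename, int width, int height) throws IOException {
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    buf.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();
    BufferedOutputStream bos = null;
    try {
        bos = new BufferedOutputStream(new FileOutputStream(filename));
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);
        bmp.compress(Bitmap.CompressFormat.PNG, 90, bos);
        bmp.recycle();
    } finally {
        if (bos != null) bos.close();
    }
}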
Related
Does anyone know of a way (preferably a single app) to scan and then recreate a QR code on Android?
E.g. I point my phone at the code, and then I can display that code on the screen so it can be scanned again by some other device. This is different from simply taking a picture of the QR code, since it takes me longer to get a nice photo of the code, and even if I do, the quality is still quite bad and it takes too long to scan the code with the other device.
Try the zxing library: https://github.com/zxing/zxing
1. Encode the scanned string data as a bitmap:
public static Bitmap encodeAsBitmap(final String contentAfterScan, final BarcodeFormat format,
final int width, final int height) throws WriterException {
MultiFormatWriter writer = new MultiFormatWriter();
EnumMap<EncodeHintType, Object> hint = new EnumMap<EncodeHintType, Object>(EncodeHintType.class);
hint.put(EncodeHintType.CHARACTER_SET, "UTF-8");
BitMatrix bitMatrix = writer.encode(contentAfterScan, format, width, height, hint);
int[] pixels = new int[width * height];
for (int y = 0; y < height; y++) {
int offset = y * width;
for (int x = 0; x < width; x++) {
pixels[offset + x] = bitMatrix.get(x, y) ? 0xFF000000/*BLACK*/ : 0xFFFFFFFF/*WHITE*/;
}
}
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
return bitmap;
}
2. Save the bitmap into your folder:
private void storeImageFromBitmap(Bitmap bitmap, String yourFolder, String imageName) {
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, bytes);
File f = new File(yourFolder + File.separator + imageName);
FileOutputStream fo;
try {
f.createNewFile();
fo = new FileOutputStream(f);
fo.write(bytes.toByteArray());
fo.close();
} catch (IOException e) {
e.printStackTrace();
}
}
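Putting the two together, usage might look like this (the content string, QR_CODE format, 512x512 size, and file name are just examples; getFilesDir() assumes you are inside an Activity or other Context):

// Re-encode a scanned string and store it as an image
try {
    Bitmap qr = encodeAsBitmap("scanned-content", BarcodeFormat.QR_CODE, 512, 512);
    storeImageFromBitmap(qr, getFilesDir().getAbsolutePath(), "qr.png");
} catch (WriterException e) {
    e.printStackTrace();
}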
I have a feature request. The current flow is for the user to scan a code (not a QR code; I am not sure what it is, but zxing will scan it), then scan the test card.
The client has asked me to allow the user to import the test from the library, so we need to be able to scan the code from an image.
Is it possible to do this in zxing, or am I forced to use the camera / is the feature not possible?
Thanks!
Here is my solution. I had to downsize the image and invert the colors for it to work with zxing. I might add a grayscale conversion, but not today (a sketch follows the code below).
public static String scanDataMatrixImage(Bitmap bitmap) {
    // zxing expects dark modules on a light background, so invert first
    bitmap = doInvert(bitmap);
    double scaling = getScaling(bitmap);
    Bitmap resized;
    if (scaling < 1) {
        // Only downscale; upscaling adds no information
        resized = Bitmap.createScaledBitmap(bitmap, (int) (bitmap.getWidth() * scaling),
                (int) (bitmap.getHeight() * scaling), true);
    } else {
        resized = bitmap;
    }
    String contents = null;
    // Copy pixel data from the Bitmap into an int array zxing can read
    int[] intArray = new int[resized.getWidth() * resized.getHeight()];
    resized.getPixels(intArray, 0, resized.getWidth(), 0, 0, resized.getWidth(), resized.getHeight());
    LuminanceSource source = new RGBLuminanceSource(resized.getWidth(), resized.getHeight(), intArray);
    BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
    MultiFormatReader reader = new MultiFormatReader();
    try {
        Result result = reader.decode(binaryBitmap);
        contents = result.getText();
    } catch (Exception e) {
        Log.e("QrTest", "Error decoding barcode", e);
    }
    return contents;
}
private static double getScaling(Bitmap bitmap) {
    // Factor that brings the smaller side down to roughly 200 px
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    int smallest = Math.min(width, height);
    return 200.0 / smallest;
}
public static Bitmap doInvert(Bitmap src) {
// create new bitmap with the same settings as source bitmap
Bitmap bmOut = Bitmap.createBitmap(src.getWidth(), src.getHeight(), src.getConfig());
// color info
int A, R, G, B;
int pixelColor;
// image size
int height = src.getHeight();
int width = src.getWidth();
// scan through every pixel
for (int y = 0; y < height; y++)
{
for (int x = 0; x < width; x++)
{
// get one pixel
pixelColor = src.getPixel(x, y);
// saving alpha channel
A = Color.alpha(pixelColor);
// inverting byte for each R/G/B channel
R = 255 - Color.red(pixelColor);
G = 255 - Color.green(pixelColor);
B = 255 - Color.blue(pixelColor);
// set newly-inverted pixel to output image
bmOut.setPixel(x, y, Color.argb(A, R, G, B));
}
}
// return final bitmap
return bmOut;
}
I need to pass image data, such as a drawable, from the Java side to cocos2d-x through JNI. How do I implement it? What should the parameters of the JNI function be, and how do I cast them on the cocos2d-x side?
Create a Java native method declaration for JNI like:
public static native void setBG(int[] raw, int width, int height);
In the C++ code do:
// Use static variables here for simplicity
int *imagedata;
int staticwidth;
int staticheight;
Texture2D *userBackgroundImage;

// The Java method is static, so the second JNI parameter is a jclass
extern "C" JNIEXPORT void JNICALL
Java_com_my_company_JniHelper_setBG(JNIEnv* env, jclass clazz, jintArray raw, jint width, jint height)
{
    jint *carr = env->GetIntArrayElements(raw, 0);
    if (carr == NULL) {
        return; /* exception occurred */
    }
    ssize_t dataLen = (ssize_t)width * (ssize_t)height;
    int *data = new int[dataLen];
    for (long i = 0; i < dataLen; i++)
    {
        data[i] = carr[i];
    }
    imagedata = data; // make a copy, because the texture must be created on the GL thread
    staticwidth = (int)width;
    staticheight = (int)height;
    env->ReleaseIntArrayElements(raw, carr, 0);
    LOGD("set image: %d * %d", width, height);
}
Then call the following method somewhere during layer init or in other cocos2d-x code:
void createImage(const void *data, ssize_t dataLen, int width, int height)
{
    Texture2D *image = new Texture2D();
    if (!image->initWithData(data, dataLen, Texture2D::PixelFormat::BGRA8888, width, height, Size(width, height)))
    {
        delete image;
        delete[] imagedata; // allocated with new[], so release with delete[]
        image = NULL;
        imagedata = NULL;
        userBackgroundImage = NULL;
        return;
    }
    delete[] imagedata;
    imagedata = NULL;
    userBackgroundImage = image;
}
You can then use the Texture2D object to create a sprite or do whatever else you want with it.
To call this code from Java:
public static int[] BitmapToRaw(Bitmap bitmap) {
Bitmap image = bitmap.copy(Bitmap.Config.ARGB_8888, false);
int width = image.getWidth();
int height = image.getHeight();
int[] raw = new int[width * height];
image.getPixels(raw, 0, width, 0, 0, width, height);
return raw;
}
Bitmap image = BitmapFactory.decodeResource(getResources(), R.drawable.bg);
JniHelper.setBG(BitmapToRaw(image), image.getWidth(), image.getHeight());
I've only ever sent image data from cocos2d-x to Java, so you'll need to find a way to reverse this method. It's used to capture a node and pass it through for screenshots.
CCNode* node = <some node>;
const CCSize& size(node->getContentSize());
CCRenderTexture* render = CCRenderTexture::create(size.width, size.height);
// render node to the texturebuffer
render->clear(0, 0, 0, 1);
render->begin();
node->visit();
render->end();
CCImage* image = render->newCCImage();
// If we don't clear then the JNI call gets corrupted.
render->clear(0, 0, 0, 1);
// Create the array to pass in
jsize length = image->getDataLen();
jintArray imageBytes = t.env->NewIntArray(length);
unsigned char* imageData = image->getData();
t.env->SetIntArrayRegion(imageBytes, 0, length, const_cast<const jint*>(reinterpret_cast<jint*>(imageData)));
t.env->CallStaticVoidMethod(t.classID, t.methodID, imageBytes, (jint)image->getWidth(), (jint)image->getHeight());
image->release();
t.env->DeleteLocalRef(imageBytes);
t.env->DeleteLocalRef(t.classID);
The Java side looks like this:
import android.graphics.Bitmap;
import android.graphics.Bitmap.Config;
public static Bitmap getImage(int[] imageData, int width, int height) {
Bitmap image = Bitmap.createBitmap(width, height, Config.ARGB_8888);
image.setPixels(imageData, 0, width, 0, 0, width, height);
return image;
}
I think the best and easiest way to do it is to save the image to a file in Java, then access the file from C++ and delete it after use.
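A minimal sketch of that file-based hand-off, under the assumption that the native method below exists on your C++ side (the method name, cache path, and file name are all illustrative):

// Hypothetical hand-off: write the bitmap into the app's cache directory
// and pass the path across JNI instead of the raw pixels.
public static native void setBackgroundFromFile(String path);

public static void passBitmapViaFile(Context context, Bitmap bitmap) throws IOException {
    File tmp = new File(context.getCacheDir(), "bg_handoff.png");
    FileOutputStream out = new FileOutputStream(tmp);
    try {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
    } finally {
        out.close();
    }
    // The C++ side can load the file into a texture and delete it afterwards
    setBackgroundFromFile(tmp.getAbsolutePath());
}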
I am developing an AutoCAD-like desktop app on Android using OpenGL ES 2.0. I draw some objects in a GLSurfaceView: lines, circles, linear dimensioning, etc. After drawing the objects on the GLSurfaceView, I capture the screen of the GLSurfaceView and convert it to a PDF file. But when I open the PDF file, some objects are missing.
This is my output. First image: my original output. Second image: the PDF file output.
My code:
Note: in this code, when I click the button, it takes the screenshot as an image and saves it to the SD card. I use a boolean flag in the onDrawFrame if-condition because the renderer calls onDrawFrame continuously; without the flag this code would run on every frame and fill the memory card with images.
MainActivity Class :
protected boolean printOptionEnable = false;
saveImageButton.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View v) {
        Log.v("hari", "pan button clicked");
        isSaveClick = true;
        myRenderer.printOptionEnable = isSaveClick;
    }
});
MyRenderer Class :
int width_surface , height_surface ;
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    Log.i("JO", "onSurfaceChanged");
    // Adjust the viewport based on geometry changes,
    // such as screen rotation
    GLES20.glViewport(0, 0, width, height);
    float ratio = (float) width / height;
    width_surface = width;
    height_surface = height;
}
//---------------------------------------------------------------------
@Override
public void onDrawFrame(GL10 gl) {
    try {
        if (printOptionEnable) {
            printOptionEnable = false;
            Log.i("hari", "printOptionEnable if condition:" + printOptionEnable);
            int w = width_surface;
            int h = height_surface;
            Log.i("hari", "w:" + w + "-----h:" + h);
            // Read the current frame back from OpenGL
            int b[] = new int[w * h];
            int bt[] = new int[w * h];
            IntBuffer buffer = IntBuffer.wrap(b);
            buffer.position(0);
            GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);
            // Remember that the OpenGL pixel layout is incompatible with the
            // Android bitmap, so swap R/B and flip the rows vertically
            for (int i = 0; i < h; i++) {
                for (int j = 0; j < w; j++) {
                    int pix = b[i * w + j];
                    int pb = (pix >> 16) & 0xff;
                    int pr = (pix << 16) & 0x00ff0000;
                    int pix1 = (pix & 0xff00ff00) | pr | pb;
                    bt[(h - i - 1) * w + j] = pix1;
                }
            }
            Bitmap inBitmap = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
            // Compress to JPEG in memory, then stream the bytes to a file
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            inBitmap.compress(CompressFormat.JPEG, 90, bos);
            byte[] bitmapdata = bos.toByteArray();
            ByteArrayInputStream fis = new ByteArrayInputStream(bitmapdata);
            final Calendar c = Calendar.getInstance();
            String timeStamp = String.valueOf(c.getTimeInMillis());
            String myfile = "hari" + timeStamp + ".jpeg";
            dir_image = new File(Environment.getExternalStorageDirectory() + File.separator
                    + "printerscreenshots" + File.separator + "image");
            dir_image.mkdirs();
            try {
                File tmpFile = new File(dir_image, myfile);
                FileOutputStream fos = new FileOutputStream(tmpFile);
                byte[] buf = new byte[1024];
                int len;
                while ((len = fis.read(buf)) > 0) {
                    fos.write(buf, 0, len);
                }
                fis.close();
                fos.close();
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
            Log.v("hari", "screenshots:" + dir_image.toString());
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
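As an aside, the byte-array round trip above is unnecessary if you only need the JPEG on disk; a shorter sketch (reusing the same inBitmap, dir_image, and myfile from the code above) compresses straight into the FileOutputStream:

// Compress the bitmap directly to the file instead of going
// Bitmap -> byte[] -> ByteArrayInputStream -> file
File tmpFile = new File(dir_image, myfile);
FileOutputStream fos = new FileOutputStream(tmpFile);
inBitmap.compress(Bitmap.CompressFormat.JPEG, 90, fos);
fos.flush();
fos.close();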
Please, can anyone help me?
Thanks in advance.
I am using the following code. The image is saved, but it is BLACK.
Please see my code and tell me where I am going wrong.
I am using this code in the menu:
case R.id.id_menu_Save:
    Bitmap bmp = SavePixels(0, 0, 800, 400, CCDirector.sharedDirector().gl);
    File file = new File("/sdcard/test.jpg");
    try {
        file.createNewFile();
        FileOutputStream fos = new FileOutputStream(file);
        bmp.compress(CompressFormat.JPEG, 100, fos);
        fos.close();
        Toast.makeText(getApplicationContext(), "Image Saved", Toast.LENGTH_SHORT).show();
        Log.i("Menu Save Button", "Image saved as JPEG");
    } catch (Exception e) {
        e.printStackTrace();
    }
    break;
This is my Save Image Function.
public static Bitmap SavePixels(int x, int y, int w, int h, GL10 gl)
{
int b[]=new int[w*(y+h)];
int bt[]=new int[w*h];
IntBuffer ib=IntBuffer.wrap(b);
ib.position(0);
gl.glReadPixels(x, 0, w, y+h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
for(int i=0, k=0; i<h; i++, k++)
{//remember, that OpenGL bitmap is incompatible with Android bitmap
//and so, some correction need.
for(int j=0; j<w; j++)
{
int pix=b[i*w+j];
int pb=(pix>>16)&0xff;
int pr=(pix<<16)&0x00ff0000;
int pix1=(pix&0xff00ff00) | pr | pb;
bt[(h-k-1)*w+j]=pix1;
}
}
Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
return sb;
}
Apart from the above, I would like you to point me in the right direction: if I have to get the pixels of the screen, what class/entity should I be exploring?
Just change the SavePixels method to the below:
public static Bitmap SavePixels(int x, int y, int w, int h, GL10 gl)
{
int b[]=new int[w*(y+h)];
int bt[]=new int[w*h];
IntBuffer ib=IntBuffer.wrap(b);
ib.position(0);
gl.glReadPixels(x, 0, w, y+h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
for(int i=0, k=0; i<h; i++, k++)
{
//remember, that OpenGL bitmap is incompatible with Android bitmap
//and so, some correction need.
for(int j=0; j<w; j++)
{
int pix=b[i*w+j];
int pb=(pix>>16)&0xff;
int pr=(pix<<16)&0xffff0000;
int pix1=(pix&0xff00ff00) | pr | pb;
bt[(h-k-1)*w+j]=pix1;
}
}
Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
return sb;
}
Also try changing GL10.GL_RGBA to GL10.GL_RGB, or making changes to the Bitmap.Config. It might work.
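A black capture also often means glReadPixels ran on a thread where no GL context is current, for example from a menu handler on the UI thread. A rough sketch of forcing the read onto the renderer thread with GLSurfaceView.queueEvent() (glSurfaceView, activity, and the gl reference are assumptions here; with Cocos2d's CCDirector you would use its own scheduler to reach the GL thread):

// Hypothetical: run the pixel read where the GL context is current,
// then hand the bitmap back to the UI thread for saving.
glSurfaceView.queueEvent(new Runnable() {
    @Override
    public void run() {
        final Bitmap bmp = SavePixels(0, 0, 800, 400, gl); // 'gl' kept from the renderer callbacks
        activity.runOnUiThread(new Runnable() {
            @Override
            public void run() {
                saveImage(bmp); // e.g. the saveImage() helper shown earlier
            }
        });
    }
});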