I am trying to fetch an image from a URL into a Bitmap and then use the raw data from the Bitmap to create a CCSprite. The issue is that the image is corrupted when I display the sprite. I created a standalone Android-only application (no cocos2d-x) and used the same code to fetch and display the Bitmap, and it is displayed correctly. Any reason why the image is not being rendered properly in cocos2d-x?
My code to fetch the image from the URL is:
// Note: this runs network I/O, so it must execute off the main thread.
String urlString = "http://www.mathewingram.com/work/wp-content/themes/thesis/rotator/335f69c5de_small.jpg";//http://graph.facebook.com/"+user.getId()+"/picture?type=large";
Bitmap pic = null;
pic = BitmapFactory.decodeStream((InputStream) new URL(urlString).getContent());
// Copy the packed ARGB pixels out of the Bitmap and hand them to native code.
int[] pixels = new int[pic.getWidth() * pic.getHeight()];
pic.getPixels(pixels, 0, pic.getWidth(), 0, 0, pic.getWidth(), pic.getHeight());
int len = pic.getWidth() * pic.getHeight();
nativeFbUserName(pixels, len, pic.getWidth(), pic.getHeight());
The function "nativeFbUserName" is a call to a native c++ function which is :
void Java_com_WBS_Test0001_Test0001_nativeFbUserName(JNIEnv *env, jobject thiz, jintArray name, jint len, jint width, jint height) {
    jint *jArr = env->GetIntArrayElements(name, NULL);
    int username[len];
    for (int i = 0; i < len; i++) {
        username[i] = (int)jArr[i];
    }
    // Release the pinned Java array once it has been copied (no write-back needed).
    env->ReleaseIntArrayElements(name, jArr, JNI_ABORT);
    HelloWorld::getShared()->picLen = (int)len;
    HelloWorld::getShared()->picHeight = (int)height;
    HelloWorld::getShared()->picWidth = (int)width;
    HelloWorld::getShared()->saveArray(username);
    HelloWorld::getShared()->schedule(SEL_SCHEDULE(&HelloWorld::addSprite), 0.1);
}
void HelloWorld::saveArray(int *arrayToSave)
{
    arr = new int[picLen];
    for (int i = 0; i < picLen; i++) {
        arr[i] = arrayToSave[i];
    }
}
void HelloWorld::addSprite(float time)
{
    this->unschedule(SEL_SCHEDULE(&HelloWorld::addSprite));
    CCTexture2D *tex = new CCTexture2D();
    bool val = tex->initWithData(arr, (cocos2d::CCTexture2DPixelFormat)0, picWidth, picHeight, CCSizeMake(picWidth, picHeight));
    CCLog("flag is %d", val);
    CCSprite *spriteToAdd = CCSprite::createWithTexture(tex);
    spriteToAdd->setPosition(ccp(500, 300));
    this->addChild(spriteToAdd);
}
Edit:
So I found this link, "Access to raw data in ARGB_8888 Android Bitmap", which states that it might be a bug. Has anyone found a solution to this?
EDIT
So I just noticed corruption in the lower right corner of the image. I am not sure why this is happening or how to fix it. Any ideas?
EDIT END
Answering my own question, I obtained a byte array from the bitmap using:
byte[] data = null;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
pic.compress(Bitmap.CompressFormat.JPEG, 100, baos);
data = baos.toByteArray();
And then passed this byte array to the native code.
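For reference, the call into native code can then look like the sketch below. The method name and signature here are illustrative, not the ones from my project; the point is that the native side now receives compressed JPEG bytes, which cocos2d-x can decode itself (e.g. via CCImage::initWithImageData), instead of raw ARGB ints:
// Hypothetical JNI entry point; the real name/signature depends on your bridge.
private native void nativeSetFbPicture(byte[] jpegData, int len);

// ... after building 'data' as above:
nativeSetFbPicture(data, data.length);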
Related
So my issue is that, for a video call, I get the frames in my C code as a byte array in I420 format. I then convert it to NV21 and send the byte array to Java to create the bitmap. But because I need to create a YuvImage from the byte array, and then a bitmap from that, I have a conversion overhead that is causing delays and loss in quality.
I am wondering if there is another way to do this: somehow create the bitmap directly in the C code, and maybe even draw it to a view or a SurfaceView from the C code? Or simply send the bitmap to my function so I can set it there, without needing to create the bitmap in Android?
This is what I do with the byte array in the C code:
if (size == 0)
    return;
jboolean isAttached;
JNIEnv *env;
jint jParticipant;
jint jWidth;
jint jHeight;
jbyteArray jRawImageBytes;
env = getJniEnv(&isAttached);
if (env == NULL)
    goto FAIL0;
//LOGE(".... **** ....TRYING TO FIND CALLBACK");
LOGI("FrameReceived will reach here 1");
char *modifiedRawImageBytes = malloc(size);
memcpy(modifiedRawImageBytes, rawImageBytes, size);
// I420 stores chroma planarly after the width*height Y plane: first all U,
// then all V (each a quarter of the Y plane in size). NV21 wants a single
// interleaved chroma plane instead, with V before U, so interleave here.
jint sizeWH = width * height;
jint quarter = sizeWH / 4;
jint v0 = sizeWH + quarter;
for (int u = sizeWH, v = v0, o = sizeWH; u < v0; u++, v++, o += 2) {
    modifiedRawImageBytes[o] = rawImageBytes[v];     // For NV21, V first
    modifiedRawImageBytes[o + 1] = rawImageBytes[u]; // For NV21, U second
}
if (remote)
{
    if (frameReceivedRemoteMethod == NULL)
        frameReceivedRemoteMethod = getApplicationJniMethodId(env, applicationJniObj, "vidyoConferenceFrameReceivedRemoteCallback", "(III[B)V");
    if (frameReceivedRemoteMethod == NULL) {
        //LOGE(".... **** ....CALLBACK NOT FOUND");
        goto FAIL1;
    }
}
This is what I do in the Android Java code:
remoteResolution = width + "x" + height;
remoteBAOS = new ByteArrayOutputStream();
remoteYUV = new YuvImage(rawImageBytes, ImageFormat.NV21, width, height, null);
remoteYUV.compressToJpeg(new Rect(0, 0, width, height), 100, remoteBAOS);
remoteBA = remoteBAOS.toByteArray();
remoteBitmap = BitmapFactory.decodeByteArray(remoteBA, 0, remoteBA.length);
new Handler(Looper.getMainLooper()).post(new Runnable() {
    @Override
    public void run() {
        remoteView.setImageBitmap(remoteBitmap);
    }
});
This is how the sample app of the SDK I am using does it, but I feel this is not best practice at all, and there has to be a way to get the Bitmap from the byte array more quickly, preferably in the C code. Any ideas on how to improve this?
EDIT:
I modified my Java code. I now use this library: https://github.com/silvaren/easyrs
so my code will be:
remoteBitmap = Nv21Image.nv21ToBitmap(rs, rawImageBytes, width, height);
new Handler(Looper.getMainLooper()).post(new Runnable() {
    @Override
    public void run() {
        remoteView.setImageBitmap(remoteBitmap);
    }
});
Where nv21ToBitmap does this:
public static Bitmap yuvToRgb(RenderScript rs, Nv21Image nv21Image) {
    long startTime = System.currentTimeMillis();

    Type.Builder yuvTypeBuilder = new Type.Builder(rs, Element.U8(rs))
            .setX(nv21Image.nv21ByteArray.length);
    Type yuvType = yuvTypeBuilder.create();
    Allocation yuvAllocation = Allocation.createTyped(rs, yuvType, Allocation.USAGE_SCRIPT);
    yuvAllocation.copyFrom(nv21Image.nv21ByteArray);

    Type.Builder rgbTypeBuilder = new Type.Builder(rs, Element.RGBA_8888(rs));
    rgbTypeBuilder.setX(nv21Image.width);
    rgbTypeBuilder.setY(nv21Image.height);
    Allocation rgbAllocation = Allocation.createTyped(rs, rgbTypeBuilder.create());

    ScriptIntrinsicYuvToRGB yuvToRgbScript = ScriptIntrinsicYuvToRGB.create(rs, Element.RGBA_8888(rs));
    yuvToRgbScript.setInput(yuvAllocation);
    yuvToRgbScript.forEach(rgbAllocation);

    Bitmap bitmap = Bitmap.createBitmap(nv21Image.width, nv21Image.height, Bitmap.Config.ARGB_8888);
    rgbAllocation.copyTo(bitmap);

    Log.d("NV21", "Conversion to Bitmap: " + (System.currentTimeMillis() - startTime) + "ms");
    return bitmap;
}
This is faster, but I still feel there is some delay. Now that I get my bitmap from RenderScript instead of via a YuvImage, is it possible to set it on my ImageView more quickly, or to set it on a SurfaceView somehow?
What my code does is take a snapshot of the map, convert it to grayscale (OpenCV), and then turn it into a byte array.
What I don't know how to do is turn this byte array into a 2D array.
Here is a block of the code:
Date now = new Date();
android.text.format.DateFormat.format("yyyy-MM-dd_hh:mm:ss", now);
try {
    StrictMode.ThreadPolicy old = StrictMode.getThreadPolicy();
    StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder(old)
            .permitDiskWrites()
            .build());
    String mPath = Environment.getExternalStorageDirectory().toString() + "/" + now + ".jpg";
    File imageFile = new File(mPath);
    FileOutputStream out = new FileOutputStream(imageFile);
    bitmap.compress(Bitmap.CompressFormat.PNG, 90, out);
    Log.d("Image:", "Saved snapshot. Starting conversion");

    // show snapshot in imageview
    ImageView imgview = (ImageView) findViewById(R.id.imageView);
    Bitmap smyBitmap = BitmapFactory.decodeFile(mPath);
    Bitmap myBitmap = BitmapFactory.decodeFile(mPath);
    imgview.setImageBitmap(smyBitmap);

    // (mat and mat1 are not used further below; the conversion is done with tmp)
    Mat mat = new Mat(myBitmap.getHeight(), myBitmap.getWidth(), CvType.CV_8UC3);
    Mat mat1 = new Mat(myBitmap.getHeight(), myBitmap.getWidth(), CvType.CV_8UC1);
    Imgproc.cvtColor(mat, mat1, Imgproc.COLOR_BGR2GRAY);

    ImageView imgview2 = (ImageView) findViewById(R.id.imageView2);
    Mat tmp = new Mat(myBitmap.getWidth(), myBitmap.getHeight(), CvType.CV_8UC1);
    Utils.bitmapToMat(myBitmap, tmp);
    Imgproc.cvtColor(tmp, tmp, Imgproc.COLOR_RGB2GRAY);
    Utils.matToBitmap(tmp, myBitmap);
    // Utils.matToBitmap(mat1,img);

    String mPathgray = Environment.getExternalStorageDirectory().toString() + "/" + now + "gray.jpg";
    File imageFilegray = new File(mPathgray);
    FileOutputStream gout = new FileOutputStream(imageFilegray);
    myBitmap.compress(Bitmap.CompressFormat.PNG, 90, gout);

    byte[] byteArray = bttobyte(myBitmap);
    Log.d("location", " " + mPathgray);
    imgview2.setImageBitmap(myBitmap);
    Log.d("Activity", "Byte array: " + Arrays.toString(byteArray));
} catch (Exception e) {
    e.printStackTrace();
}
    }
};
mMap.snapshot(callback);
Log.d("test", "test2");
    }
});
}
public byte[] bttobyte(Bitmap bitmap) {
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.JPEG, 70, stream);
    return stream.toByteArray();
}
The problem you are trying to solve is actually extracting the image data from the JPEG encoding in the byte array. The data is not simply stored pixel by pixel in a grid of width and height equal to your image size, as you seem to be assuming. That is a consequence of using this:
bitmap.compress(Bitmap.CompressFormat.JPEG, 70, stream);
For example, that byte data might actually encode the JFIF format:
0xffd8 - SOI - exactly one - start of image
0xffen - APPn - zero or more - application headers, e.g. APP0 (0xffe0) for JFIF or APP1 (0xffe1) for EXIF
0xffdb - DQT - one or more - define quantisation tables. These are the main source of quality in the image.
0xffcn - SOFn - exactly one - start of frame. SOF0 indicates a baseline DCT JPEG.
0xffc4 - DHT - one or more - define Huffman tables. These are used in the final lossless encoding steps.
0xffda - SOS - exactly one (for baseline; more than one, intermixed with more DHTs, if progressive) - start of stream. This is where the actual data starts.
0xffd9 - EOI - exactly one - end of image.
Essentially, once you have transformed your bitmap this way, you need a JPEG decoder to parse the data.
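On Android, BitmapFactory is such a decoder. A minimal sketch, assuming byteArray holds the JPEG bytes produced by bttobyte():
// Decode the JPEG bytes back into a Bitmap, then read the raw pixels out of it.
Bitmap decoded = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length);
int[] pixels = new int[decoded.getWidth() * decoded.getHeight()];
decoded.getPixels(pixels, 0, decoded.getWidth(), 0, 0,
        decoded.getWidth(), decoded.getHeight());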
I think what you want, though, is to work with the original bitmap instead. There are APIs to extract pixel data; that is what you need, if I understand correctly what you are trying to do.
Bitmap.getPixels(int[] pixels, int offset, int stride,
                 int x, int y, int width, int height)
The array pixels[] is filled with the pixel data. If you then want it in a 2D array, you can copy it row by row. Something like:
int offset = 0;
int[][] pixels2d = new int[h][w]; // h rows, one per image line of w pixels
for (int[] row : pixels2d) {
    System.arraycopy(pixels, offset, row, 0, row.length);
    offset += row.length;
}
You just create a new array with two dimensions:
byte[][] picture = new byte[myBitmap.getWidth()][myBitmap.getHeight()];
which you can access via
byte myByte = picture[x][y];
After this, you iterate your original byte array line by line, column by column, as sketched below.
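A minimal sketch of that loop, assuming data is a row-major byte array with one byte per pixel (true for a plain grayscale buffer, but not for the JPEG-compressed output of bttobyte()):
int w = myBitmap.getWidth();
int h = myBitmap.getHeight();
byte[][] picture = new byte[w][h];
for (int y = 0; y < h; y++) {       // each line
    for (int x = 0; x < w; x++) {   // each column within that line
        picture[x][y] = data[y * w + x];
    }
}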
Hello, I am new to Android. I am currently trying to print a receipt from my Android 4.4.2 tablet to my Zicox thermal receipt printer. I have been able to print text so far, but now I need to go a step further and print barcodes/QR codes. Unfortunately this is way beyond my knowledge, and I have googled for solutions without finding one yet.
These are the methods I use to generate my barcode:
/**************************************************************
 * adapted from com.google.zxing.client.android.encode.QRCodeEncoder
 *
 * See the sites below:
 * http://code.google.com/p/zxing/
 * http://code.google.com/p/zxing/source/browse/trunk/android/src/com/google/zxing/client/android/encode/EncodeActivity.java
 * http://code.google.com/p/zxing/source/browse/trunk/android/src/com/google/zxing/client/android/encode/QRCodeEncoder.java
 */
private static final int WHITE = 0xFFFFFFFF;
private static final int BLACK = 0xFF000000;

Bitmap encodeAsBitmap(String contents, BarcodeFormat format, int img_width, int img_height) throws WriterException {
    String contentsToEncode = contents;
    if (contentsToEncode == null) {
        return null;
    }
    Map<EncodeHintType, Object> hints = null;
    String encoding = guessAppropriateEncoding(contentsToEncode);
    if (encoding != null) {
        hints = new EnumMap<EncodeHintType, Object>(EncodeHintType.class);
        hints.put(EncodeHintType.CHARACTER_SET, encoding);
    }
    MultiFormatWriter writer = new MultiFormatWriter();
    BitMatrix result;
    try {
        result = writer.encode(contentsToEncode, format, img_width, img_height, hints);
    } catch (IllegalArgumentException iae) {
        // Unsupported format
        return null;
    }
    int width = result.getWidth();
    int height = result.getHeight();
    int[] pixels = new int[width * height];
    for (int y = 0; y < height; y++) {
        int offset = y * width;
        for (int x = 0; x < width; x++) {
            pixels[offset + x] = result.get(x, y) ? BLACK : WHITE;
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height,
            Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
    return bitmap;
}
private static String guessAppropriateEncoding(CharSequence contents) {
    // Very crude at the moment
    for (int i = 0; i < contents.length(); i++) {
        if (contents.charAt(i) > 0xFF) {
            return "UTF-8";
        }
    }
    return null;
}
This is my onClick method that starts the entire process:
// barcode data
String barcode_data = "123456";

// barcode image
ImageView iv = new ImageView(this);
try {
    barCode = encodeAsBitmap(barcode_data, BarcodeFormat.CODE_128, 300, 40);
    // bitmap.getRowBytes();
    iv.setImageBitmap(barCode);
} catch (WriterException e) {
    e.printStackTrace();
}
iv.setLayoutParams(new RelativeLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.WRAP_CONTENT));
innerLayout.addView(iv);
So now I am able to generate and display the barcode; next I want to be able to print it on my receipts.
As I understand from your question, you basically want to print a bitmap that contains the QR code.
You can use the PrintHelper class for that. Here is sample code from the official documentation that shows how to use it:
private void doPhotoPrint() {
    PrintHelper photoPrinter = new PrintHelper(getActivity());
    photoPrinter.setScaleMode(PrintHelper.SCALE_MODE_FIT);
    Bitmap bitmap = BitmapFactory.decodeResource(getResources(),
            R.drawable.droids);
    photoPrinter.printBitmap("droids.jpg - test print", bitmap);
}
Also, it says :
After the printBitmap() method is called, no further action from your application is required. The Android print user interface appears, allowing the user to select a printer and printing options.
Update:
The above method prints only a bitmap. For printing a whole layout (like a receipt, as you said), instructions have been laid out here. I'll try to summarize them for you.
The first step is to extend the PrintDocumentAdapter class and override certain methods. That adapter has a callback, onWrite(), which is invoked to draw content into the file to be printed; in it you use a Canvas object, which has helper methods to draw bitmaps, lines, text and so on. See the sketch right after this paragraph.
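Here is a minimal sketch of such an onWrite(), adapted from the documentation pattern. It assumes a PrintedPdfDocument named mPdfDocument was created in onLayout(), and that mReceiptBitmap holds the rendered receipt; both names are illustrative:
@Override
public void onWrite(PageRange[] pages, ParcelFileDescriptor destination,
                    CancellationSignal cancellationSignal, WriteResultCallback callback) {
    // Draw the receipt content onto a single PDF page.
    PdfDocument.Page page = mPdfDocument.startPage(0);
    page.getCanvas().drawBitmap(mReceiptBitmap, 0, 0, null);
    mPdfDocument.finishPage(page);
    try {
        // Stream the finished document into the file the print system provides.
        mPdfDocument.writeTo(new FileOutputStream(destination.getFileDescriptor()));
    } catch (IOException e) {
        callback.onWriteFailed(e.toString());
        return;
    } finally {
        mPdfDocument.close();
        mPdfDocument = null;
    }
    callback.onWriteFinished(new PageRange[]{PageRange.ALL_PAGES});
}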
Then, when printing is requested, obtain an instance of the PrintManager class as follows:
PrintManager printManager = (PrintManager) getActivity()
        .getSystemService(Context.PRINT_SERVICE);
Then call the print method:
printManager.print(jobName, new CustomDocumentAdapter(getActivity()),
        null);
Let me know if this helps.
I am developing an application using JNI and a third-party engine (Unreal Engine 4) in charge of managing the graphics pipeline/rendering.
The third-party engine is written in C++, hence the need for JNI to bridge it with Android.
The app needs to save a screenshot on the device of what is being displayed on the screen (in other words, a dump of the framebuffer).
The third-party engine exposes an API that calls a custom handler, passing in the width, height and color data of the screen.
colorData is a custom container of uint8 values representing RGBA components.
I successfully managed to convert the colorData to a jbyteArray and pass it as an argument to a function on the Java side.
On the Java side things are simpler: I create a bitmap from the byte array, flip it, and save it as a JPG/PNG via a custom AsyncTask.
The problem:
The code works marvellously on a Samsung Galaxy S4/Note 3 (both Android 5.0), whereas on a Nexus 10 running Android 5.1.1 the PNG that gets saved is blank.
I am afraid the problem lies at a deeper level than the ones I have access to, i.e. graphics card/drivers/OS version, but I am not an expert in that field, so I would like to know if someone has already experienced a similar issue or could shed some light on what is causing it.
This is the code used to bridge the engine with Java (I started C++ with this project, so there may be ownership/memory issues in this snippet; you are more than welcome to correct me):
void AndroidInterface::SaveBitmap(const TArray<FColor>& colorData, int32 width, int32 height) {
    JNIEnv* env = FAndroidApplication::GetJavaEnv(true);

    TArray<FColor> bitmap = colorData;
    TArray<uint8> compressedBitmap;
    FImageUtils::CompressImageArray(width, height, bitmap, compressedBitmap);

    size_t len = width * height * compressedBitmap.GetTypeSize();
    LOGD("===========Width: %i, height: %i - Len of bitmap element: %i==========", width, height, len);

    jbyteArray bitmapData = env->NewByteArray(len);
    LOGD("===========Called new byte array==========");
    env->SetByteArrayRegion(bitmapData, 0, len, (const jbyte*)compressedBitmap.GetData());
    LOGD("===========Populated byte array==========");
    check(bitmapData != NULL && "Couldn't create byte array");

    jclass gameActivityClass = FAndroidApplication::FindJavaClass("com/epicgames/ue4/GameActivity");
    check(gameActivityClass != nullptr && "GameActivityClassNotFound");

    //get the method signature to take a game screenshot
    jmethodID saveScreenshot = env->GetMethodID(gameActivityClass, "saveScreenshot", "([BII)V");
    env->CallVoidMethod(AndroidInterface::sGameActivity, saveScreenshot, bitmapData, width, height);
    env->DeleteLocalRef(bitmapData);
}
This is the Java code in charge of converting from byte[] to Bitmap:
public void saveScreenshot(final byte[] colors, int width, int height) {
    android.util.Log.d("GameActivity", "======saveScreenshot called. Width: " + width + " height: " + height + "=======");
    android.util.Log.d("GameActivity", "Color content---->\n " + Arrays.toString(colors));
    final BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inPreferredConfig = Bitmap.Config.ARGB_8888;
    final Bitmap bitmap = BitmapFactory.decodeByteArray(colors, 0, colors.length, opts);
    final FlipBitmap flipBitmapTask = new FlipBitmap();
    flipBitmapTask.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, bitmap);
}
FlipBitmap is the AsyncTask in charge of saving the bitmap to a file:
private class FlipBitmap extends AsyncTask<Bitmap, Void, File> {
    @Override
    protected File doInBackground(Bitmap... params) {
        final Bitmap src = params[0];
        final File file = new File(MainActivity.SCREENSHOT_FOLDER + "screenshot" + System.currentTimeMillis() + ".png");
        final Matrix matrix = new Matrix();
        matrix.setScale(1, -1);
        final Bitmap dst = Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), matrix, false);
        try {
            final FileOutputStream out = new FileOutputStream(file);
            dst.compress(Bitmap.CompressFormat.PNG, 90, out);
            out.flush();
            out.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return file;
    }

    @Override
    protected void onPostExecute(File file) {
        android.util.Log.d("GameActivity", "FlipBitmap onPostExecute");
        if (file.exists()) {
            final Intent i = new Intent(Intent.ACTION_SENDTO);
            i.setData(Uri.parse("mailto:" + Globals.Network.MAIL_TO));
            i.putExtra(Intent.EXTRA_SUBJECT, Globals.Network.MAIL_SUBJECT);
            i.putExtra(Intent.EXTRA_TEXT, mBodyEmail);
            i.putExtra(Intent.EXTRA_STREAM, Uri.parse("file://" + file.getAbsolutePath()));
            startActivity(Intent.createChooser(i, "Send via email"));
        }
    }
}
Thanks in advance!
I'm trying to implement an RTSP player based on the roman10 tutorial.
I can play a stream, but each time I leave the activity a lot of memory is leaked.
After some research, it appears that the bitmap, which is a global jobject, is the cause:
jobject createBitmap(JNIEnv *pEnv, int pWidth, int pHeight) {
    int i;
    //get Bitmap class and createBitmap method ID
    jclass javaBitmapClass = (jclass)(*pEnv)->FindClass(pEnv, "android/graphics/Bitmap");
    jmethodID mid = (*pEnv)->GetStaticMethodID(pEnv, javaBitmapClass, "createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");

    //create Bitmap.Config
    //reference: https://forums.oracle.com/thread/1548728
    const wchar_t* configName = L"ARGB_8888";
    int len = wcslen(configName);
    jstring jConfigName;
    if (sizeof(wchar_t) != sizeof(jchar)) {
        //wchar_t is defined as different length than jchar (2 bytes)
        jchar* str = (jchar*)malloc((len + 1) * sizeof(jchar));
        for (i = 0; i < len; ++i) {
            str[i] = (jchar)configName[i];
        }
        str[len] = 0;
        jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)str, len);
    } else {
        //wchar_t is defined same length as jchar (2 bytes)
        jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)configName, len);
    }
    jclass bitmapConfigClass = (*pEnv)->FindClass(pEnv, "android/graphics/Bitmap$Config");
    jobject javaBitmapConfig = (*pEnv)->CallStaticObjectMethod(pEnv, bitmapConfigClass,
            (*pEnv)->GetStaticMethodID(pEnv, bitmapConfigClass, "valueOf", "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;"), jConfigName);

    //create the bitmap
    return (*pEnv)->CallStaticObjectMethod(pEnv, javaBitmapClass, mid, pWidth, pHeight, javaBitmapConfig);
}
The bitmap is created like this:
bitmap = createBitmap(...);
When the activity is closed, this method is called:
void finish(JNIEnv *pEnv) {
    //unlock the bitmap
    AndroidBitmap_unlockPixels(pEnv, bitmap);
    av_free(buffer);
    // Free the RGB image
    av_free(frameRGBA);
    // Free the YUV frame
    av_free(decodedFrame);
    // Close the codec
    avcodec_close(codecCtx);
    // Close the video file
    avformat_close_input(&formatCtx);
}
The bitmap seems to never be freed, just unlocked.
What should I do to be sure I get back all the memory?
Note: I'm using FFmpeg 2.5.2.