This is the picture-taking function:
class Camera {
    ...
    void capturePicture() {
        Camera.Size size = mParams.getPictureSize();
        int bitsPerPixel = ImageFormat.getBitsPerPixel(mParams.getPictureFormat());
        int bufferSize = (int) Math.ceil(size.width * size.height * bitsPerPixel / 8d);
        Log.d(TAG, "Picture Size : " + size.width + "\t" + size.height);
        Log.d(TAG, "Picture format : " + mParams.getPictureFormat());
        Log.d(TAG, "Bits per Pixel = " + bitsPerPixel);
        Log.d(TAG, "Buffer Size = " + bufferSize);
        // Hardcoded because the computed bufferSize is unusable (see below)
        byte[] buffer = new byte[1382400];
        addBuffer(buffer);
        Camera.ShutterCallback shutterCallback = () -> mCameraCallbacks.onShutter();
        // Registered as the raw callback (second argument of takePicture)
        Camera.PictureCallback pictureCallback = (data, camera) -> mCameraCallbacks.onPicture(data);
        mCamera.takePicture(shutterCallback, pictureCallback, null, null);
    }

    public interface CameraCallbacks {
        void onShutter();
        void onPicture(byte[] bytes);
    }
The picture size should be 3264 x 2448, but getBitsPerPixel() returns -1, so I can't use it to calculate the buffer size. It turns out the minimum buffer size is 1382400, and I don't know why.
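For what it's worth, a hedged sketch of a guard around that computation: ImageFormat.getBitsPerPixel() is documented to return -1 when it cannot size the format (JPEG, the default picture format, is compressed), so one can fall back to the empirically working value:

int bitsPerPixel = ImageFormat.getBitsPerPixel(mParams.getPictureFormat());
int bufferSize = (bitsPerPixel > 0)
        ? (int) Math.ceil(size.width * size.height * bitsPerPixel / 8d)
        : 1382400; // fallback: minimum size the device accepted in practice
byte[] buffer = new byte[bufferSize];
addBuffer(buffer);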
Here is the Activity that receives the callback:
public class CameraActivity extends AppCompatActivity implements Camera.CameraCallbacks {
    @Override
    public void onPicture(byte[] bytes) {
        final ByteBuffer buffer = ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN);
        final int[] ints = new int[bytes.length / 4];
        buffer.asIntBuffer().get(ints); // read the bytes into the int array
        Log.d(TAG, "Creating Bitmap of Size : " + mCameraView.mPictureSize.width + " x " + mCameraView.mPictureSize.height);
        Bitmap bitmap = Bitmap.createBitmap(ints, mCameraView.mPictureSize.width, mCameraView.mPictureSize.height, Bitmap.Config.ARGB_8888);
        Intent intent = new Intent(CameraActivity.this, PicturePreviewActivity.class);
        intent.putExtra("bitmap", bitmap);
        startActivityForResult(intent, SAVE_PICTURE_OR_NOT);
    }
The code here is obviously wrong: I am having trouble rearranging these bytes into the int[] layout that Bitmap accepts, because I don't know the data structure inside these bytes. BitmapFactory.decodeByteArray won't work either, because it can't read raw data.
Can anybody help me on this one?
It is not possible to retrieve uncompressed bitmap data from the Android camera, so the picture callback needs to move from the raw callback to the JPEG callback, and decodeByteArray() can then be used to obtain the Bitmap. It is also unreliable to pass a Bitmap through an Intent, so the simplest way is to hand it to the receiving Activity directly. The code becomes:
    @Override
    public void onPicture(byte[] bytes) {
        // bytes now holds a complete JPEG stream, so decodeByteArray works
        Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
        PicturePreviewActivity.mBitmap = new WeakReference<>(bitmap);
        Intent intent = new Intent(CameraActivity.this, PicturePreviewActivity.class);
        startActivityForResult(intent, SAVE_PICTURE_OR_NOT);
    }
}
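For completeness, the receiving side could look like this hypothetical PicturePreviewActivity (not shown in the original answer; the field name mBitmap matches the code above):

public class PicturePreviewActivity extends AppCompatActivity {
    // Written by CameraActivity before this activity is started
    static WeakReference<Bitmap> mBitmap;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Bitmap bitmap = (mBitmap != null) ? mBitmap.get() : null;
        if (bitmap != null) {
            // display it, e.g. in an ImageView
        }
    }
}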
I am trying to use CameraX with the ZXing library to identify barcodes. I am using an ImageAnalyzer to get an ImageProxy and feed its byte array to PlanarYUVLuminanceSource, to be handled by ZXing. My target rotation is 0 and the imageProxy coming from the camera is rotated 90 degrees, so the barcode is not readable until I rotate it by 90 degrees. Here is the code:
private void buildAnalyzer() {
    if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.LOLLIPOP) {
        imageAnalysis = new ImageAnalysis.Builder()
                .setTargetResolution(new Size(1080, 720))
                .setTargetRotation(Surface.ROTATION_0)
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .build();
    }
    MultiFormatReader reader = new MultiFormatReader();
    Hashtable<DecodeHintType, Object> hints = new Hashtable<>(2);
    Vector<BarcodeFormat> decodeFormats = new Vector<>();
    decodeFormats.add(BarcodeFormat.EAN_13);
    hints.put(DecodeHintType.POSSIBLE_FORMATS, decodeFormats);
    hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);
    reader.setHints(hints);
    imageAnalysis.setAnalyzer(Executors.newFixedThreadPool(1), new ImageAnalysis.Analyzer() {
        @Override
        public void analyze(@NonNull ImageProxy imageProxy) {
            int rotationDegrees = imageProxy.getImageInfo().getRotationDegrees();
            Log.v(TAG, "Rotation: " + rotationDegrees);
            Log.v(TAG, "format: " + imageProxy.getFormat());
            ImageProxy.PlaneProxy[] planes = imageProxy.getPlanes();
            Log.v(TAG, "planes " + planes.length);
            ByteBuffer yBuffer = planes[0].getBuffer();
            ByteBuffer uBuffer = planes[1].getBuffer();
            ByteBuffer vBuffer = planes[2].getBuffer();
            int ySize = yBuffer.remaining();
            int uSize = uBuffer.remaining();
            int vSize = vBuffer.remaining();
            byte[] data = new byte[ySize + uSize + vSize];
            // U and V are swapped to approximate NV21 ordering
            yBuffer.get(data, 0, ySize);
            vBuffer.get(data, ySize, vSize);
            uBuffer.get(data, ySize + vSize, uSize);
            Log.v(TAG, "width : " + imageProxy.getWidth());
            Log.v(TAG, "height : " + imageProxy.getHeight());
            Log.v(TAG, "planes 1 size : " + ySize + " remaining: " + yBuffer.remaining());
            Log.v(TAG, "planes 2 size : " + uSize + " remaining: " + uBuffer.remaining());
            Log.v(TAG, "planes 3 size : " + vSize + " remaining: " + vBuffer.remaining());
            Log.v(TAG, "data length : " + data.length);
            PlanarYUVLuminanceSource source = new PlanarYUVLuminanceSource(data,
                    imageProxy.getWidth(), imageProxy.getHeight(),
                    0, 0,
                    imageProxy.getWidth(), imageProxy.getHeight(),
                    false);
            BinaryBitmap binary = new BinaryBitmap(new HybridBinarizer(source));
            try {
                Result result = reader.decodeWithState(binary);
                Log.d(TAG, "reader read " + result.getText());
                Looper looper = Looper.getMainLooper();
                Handler handler = new Handler(looper);
                handler.post(new Runnable() {
                    @Override
                    public void run() {
                        Toast.makeText(con, result.getText() + "\n" + result.getNumBits(), Toast.LENGTH_LONG).show();
                    }
                });
            } catch (NotFoundException e) {
                Log.d(TAG, "exception " + e.getMessage());
            }
            imageProxy.close();
        }
    });
}
The code above works fine, which indicates there is nothing wrong with the bytes. That said, here are the solutions I tried that never worked:
1- Rotate the byte array: I used this method https://stackoverflow.com/a/15775173/4674191 to rotate the byte array "data" extracted from the ImageProxy, but it threw an ArrayIndexOutOfBoundsException. That method only works on arrays whose length is a multiple of 4, and my byte arrays don't meet this condition.
Logically the byte array is a YUV420 image, consisting of 3 planes with a certain width and height, but what drives me crazy is that the width and height are not compatible with the array (see the row-stride sketch after this list):
width = 1440,
height = 1080,
planes 1 size : 1589728,
planes 2 size : 794847,
planes 3 size : 794847
Looking at the numbers, they are divisible neither by 1440 nor by 1080, and are not a multiple of 4.
2- Create a bitmap from the byte array and rotate it, then extract the rotated byte array from the new bitmap: this also doesn't work, because the width and height are unknown just like before; the resolution is not compatible with the array sizes, and the resulting bitmap is a bunch of distorted green mess.
3- Lock the activity in portrait mode in the manifest with android:screenOrientation="portrait": same problem, the imageProxy rotation is still 90.
4- PlanarYUVLuminanceSource doesn't support rotation :( I don't know why.
I tried almost all the solutions on Stack Overflow and read all the documentation for CameraX, and nothing seems to address this problem. If there were a way to know the actual resolution of the byte array, it would solve the problem, I guess.
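An editorial observation, not from the original question: the plane sizes above are explained by the row stride rather than the width. With a row stride of 1472, the Y plane occupies 1472 * 1079 + 1440 = 1589728 bytes, exactly the reported size. A hedged sketch that uses getRowStride() to copy the Y plane into a tightly packed width x height array (names are illustrative):

// Hypothetical helper, assuming a YUV_420_888 ImageProxy
static byte[] tightYPlane(ImageProxy imageProxy) {
    ImageProxy.PlaneProxy yPlane = imageProxy.getPlanes()[0];
    ByteBuffer yBuffer = yPlane.getBuffer();
    int rowStride = yPlane.getRowStride(); // e.g. 1472 for a 1440-wide frame
    int width = imageProxy.getWidth();
    int height = imageProxy.getHeight();
    byte[] tightY = new byte[width * height];
    for (int row = 0; row < height; row++) {
        yBuffer.position(row * rowStride);
        yBuffer.get(tightY, row * width, width); // drop the stride padding
    }
    return tightY;
}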
byte[] data = ImageUtil.imageToJpegByteArray(imageProxy);
Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
You can get a bitmap with the code above. Note that ImageUtil is an internal CameraX class: androidx.camera.core.internal.utils.ImageUtil
I implemented this extension function after a lot of research (note that it decodes only the first plane, so it assumes the ImageProxy holds JPEG data, as delivered by takePicture below):
fun ImageProxy.toBitmap(): Bitmap {
    val buffer = planes[0].buffer.apply { rewind() }
    val bytes = ByteArray(buffer.capacity())
    // Copy the JPEG bytes out of the plane
    buffer.get(bytes)
    val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
    // Fix rotation if needed
    val angle = imageInfo.rotationDegrees.toFloat()
    val matrix = Matrix().apply { postRotate(angle) }
    // Return the rotated bitmap
    return Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
}
You can get the ImageProxy by calling takePicture from the AndroidX CameraX library:
imageCapture.takePicture(cameraExecutor, object : ImageCapture.OnImageCapturedCallback() {
    override fun onCaptureSuccess(imageProxy: ImageProxy) {
        val bitmap = imageProxy.toBitmap()
        imageProxy.close()
    }
})
I configured my code to get a stream of YUV_420_888 frames from my device's camera using an ImageReader and the rest of the well-known camera2 API. Now I need to transform these frames to the NV21 pixel format and call a native function which expects a frame in this format to perform certain computations. This is the code I am using inside the ImageReader callback to rearrange the bytes of the frame:
ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader mReader) {
        Image image = mReader.acquireLatestImage();
        if (image == null) {
            return;
        }
        byte[] bytes = convertYUV420ToNV21(image);
        nativeVideoFrame(bytes);
        image.close();
    }
};
private byte[] convertYUV420ToNV21(Image imgYUV420) {
    // Note: simply concatenating the three planes only yields valid NV21 when
    // the chroma planes are already interleaved with pixel stride 2 and there
    // is no row padding; strides are not inspected here.
    ByteBuffer buffer0 = imgYUV420.getPlanes()[0].getBuffer(); // Y
    ByteBuffer buffer1 = imgYUV420.getPlanes()[1].getBuffer(); // U
    ByteBuffer buffer2 = imgYUV420.getPlanes()[2].getBuffer(); // V
    int buffer0_size = buffer0.remaining();
    int buffer1_size = buffer1.remaining();
    int buffer2_size = buffer2.remaining();
    byte[] buffer0_byte = new byte[buffer0_size];
    byte[] buffer1_byte = new byte[buffer1_size];
    byte[] buffer2_byte = new byte[buffer2_size];
    buffer0.get(buffer0_byte, 0, buffer0_size);
    buffer1.get(buffer1_byte, 0, buffer1_size);
    buffer2.get(buffer2_byte, 0, buffer2_size);
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    try {
        outputStream.write(buffer0_byte);
        outputStream.write(buffer1_byte);
        outputStream.write(buffer2_byte);
    } catch (IOException e) {
        e.printStackTrace();
    }
    return outputStream.toByteArray();
}
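As an editorial aside, a hedged sketch of building NV21 explicitly, interleaving V and U instead of relying on the plane layout; it assumes tightly packed chroma planes with pixel stride 1, which the Image API does not guarantee:

// Editorial sketch: pack NV21 from separate Y, U, V plane copies.
private static byte[] packNV21(byte[] y, byte[] u, byte[] v) {
    byte[] nv21 = new byte[y.length + u.length + v.length];
    System.arraycopy(y, 0, nv21, 0, y.length);
    int pos = y.length;
    for (int i = 0; i < u.length; i++) {
        nv21[pos++] = v[i]; // NV21 stores V before U
        nv21[pos++] = u[i];
    }
    return nv21;
}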
But I don't know why the resulting frame is "flipped" horizontally. In other words, when I move the camera to the right, the frame after the packing procedure described above moves to the left, as if the sensor were mounted in an unnatural position. I hope you understand what I mean.
Thanks,
JM
It's OK for a camera to produce a mirrored image. If you don't want it mirrored, you need to perform the horizontal mirroring yourself, swapping the pixels in each row.
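A minimal sketch of that row-wise swap for a tightly packed NV21 frame with no row padding (editorial, names illustrative):

private static void mirrorNV21(byte[] nv21, int width, int height) {
    // Mirror the Y plane: reverse each row in place.
    for (int row = 0; row < height; row++) {
        int base = row * width;
        for (int left = 0, right = width - 1; left < right; left++, right--) {
            byte tmp = nv21[base + left];
            nv21[base + left] = nv21[base + right];
            nv21[base + right] = tmp;
        }
    }
    // Mirror the interleaved VU plane: swap whole VU pairs so each
    // chroma sample keeps its V/U byte order.
    int chromaBase = width * height;
    for (int row = 0; row < height / 2; row++) {
        int base = chromaBase + row * width;
        for (int left = 0, right = width - 2; left < right; left += 2, right -= 2) {
            byte tmpV = nv21[base + left];
            byte tmpU = nv21[base + left + 1];
            nv21[base + left] = nv21[base + right];
            nv21[base + left + 1] = nv21[base + right + 1];
            nv21[base + right] = tmpV;
            nv21[base + right + 1] = tmpU;
        }
    }
}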
I am developing an application using JNI and a third-party engine (Unreal Engine 4) in charge of managing the graphics pipeline/rendering.
The third-party engine is written in C++, hence the need for JNI to bridge it with Android.
The app needs to save a screenshot on the device of what is being displayed on the screen (in other words, a dump of the framebuffer).
The third-party engine exposes an API that calls a custom handler, passing in the width, height and color data of the screen.
colorData is a custom container of uint8 values representing RGBA components.
I successfully managed to convert the colorData to a jbyteArray and pass it as an argument to a function on the Java side.
On the Java side things are simpler: I create a bitmap from the byte array, flip it and save it as a JPG/PNG via a custom AsyncTask.
The problem:
The code works marvellously on a Samsung Galaxy S4 and a Note 3 (both Android 5.0), whereas on a Nexus 10 running Android 5.1.1 the PNG that gets saved is blank.
I am afraid the problem lies on a deeper level than the ones I have access to, i.e. graphics card/drivers/OS version, but I am not an expert in that field, so I would like to know if someone has already experienced a similar issue or could shed some light on what is causing it.
This is the code used to bridge the engine with Java (I started C++ with this project, so maybe there are ownership/memory issues in this snippet. You are more than welcome to correct me in that case :))
void AndroidInterface::SaveBitmap(const TArray<FColor>& colorData, int32 width, int32 height) {
    JNIEnv* env = FAndroidApplication::GetJavaEnv(true);

    TArray<FColor> bitmap = colorData;
    TArray<uint8> compressedBitmap;
    FImageUtils::CompressImageArray(width, height, bitmap, compressedBitmap);

    // Use the compressed size, not width*height: CompressImageArray writes a
    // compressed stream whose length is unrelated to the pixel count.
    size_t len = compressedBitmap.Num() * compressedBitmap.GetTypeSize();
    LOGD("===========Width: %i, height: %i - Len of bitmap element: %zu==========", width, height, len);

    jbyteArray bitmapData = env->NewByteArray(len);
    LOGD("===========Called new byte array==========");
    check(bitmapData != NULL && "Couldn't create byte array");
    env->SetByteArrayRegion(bitmapData, 0, len, (const jbyte*)compressedBitmap.GetData());
    LOGD("===========Populated byte array==========");

    jclass gameActivityClass = FAndroidApplication::FindJavaClass("com/epicgames/ue4/GameActivity");
    check(gameActivityClass != nullptr && "GameActivityClassNotFound");

    // Get the method ID for taking a game screenshot
    jmethodID saveScreenshot = env->GetMethodID(gameActivityClass, "saveScreenshot", "([BII)V");
    env->CallVoidMethod(AndroidInterface::sGameActivity, saveScreenshot, bitmapData, width, height);
    env->DeleteLocalRef(bitmapData);
}
This is the Java code in charge of converting from byte[] to Bitmap:
public void saveScreenshot(final byte[] colors, int width, int height) {
    android.util.Log.d("GameActivity", "======saveScreenshot called. Width: " + width + " height: " + height + "=======");
    android.util.Log.d("GameActivity", "Color content---->\n " + Arrays.toString(colors));
    final BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inPreferredConfig = Bitmap.Config.ARGB_8888;
    final Bitmap bitmap = BitmapFactory.decodeByteArray(colors, 0, colors.length, opts);
    final FlipBitmap flipBitmapTask = new FlipBitmap();
    flipBitmapTask.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, bitmap);
}
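An editorial aside: BitmapFactory.decodeByteArray returns null when the buffer is not a valid compressed image stream, and a null bitmap would make the flip-and-save path fail downstream, so a defensive check (not in the original post) could help narrow down the Nexus 10 behavior:

if (bitmap == null) {
    android.util.Log.e("GameActivity", "decodeByteArray returned null: buffer is not a valid PNG/JPEG");
    return; // skip the flip-and-save path instead of producing a blank file
}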
FlipBitmap is the AsyncTask in charge of saving the bitmap to a file:
private class FlipBitmap extends AsyncTask<Bitmap, Void, File> {
    @Override
    protected File doInBackground(Bitmap... params) {
        final Bitmap src = params[0];
        final File file = new File(MainActivity.SCREENSHOT_FOLDER + "screenshot" + System.currentTimeMillis() + ".png");
        // Flip vertically: a negative Y scale mirrors the framebuffer row order
        final Matrix matrix = new Matrix();
        matrix.setScale(1, -1);
        final Bitmap dst = Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), matrix, false);
        try {
            final FileOutputStream out = new FileOutputStream(file);
            dst.compress(Bitmap.CompressFormat.PNG, 90, out);
            out.flush();
            out.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return file;
    }

    @Override
    protected void onPostExecute(File file) {
        android.util.Log.d("GameActivity", "FlipBitmap onPostExecute");
        if (file.exists()) {
            final Intent i = new Intent(Intent.ACTION_SENDTO);
            i.setData(Uri.parse("mailto:" + Globals.Network.MAIL_TO));
            i.putExtra(Intent.EXTRA_SUBJECT, Globals.Network.MAIL_SUBJECT);
            i.putExtra(Intent.EXTRA_TEXT, mBodyEmail);
            i.putExtra(Intent.EXTRA_STREAM, Uri.parse("file://" + file.getAbsolutePath()));
            startActivity(Intent.createChooser(i, "Invia via email"));
        }
    }
}
Thanks in advance!
I am trying to take a photo with the phone's camera and place it in an ImageButton as part of a Profile activity including all the user's details, and then save the image in SharedPreferences.
If I use the following code, the ImageButton simply does not update:
protected void onActivityResult(int requestCode, int resultCode, final Intent data) {
    // checks the data returned from the camera via startActivityForResult
    super.onActivityResult(requestCode, resultCode, data);
    Runnable runnable = new Runnable() {
        @Override
        public void run() {
            handler.post(new Runnable() {
                @Override
                public void run() {
                    Bundle extras = data.getExtras();
                    photo = (Bitmap) extras.get("data");
                    takeAndSetPhoto.setImageBitmap(photo);
                    Toast.makeText(getBaseContext(), "Image set to profile!",
                            Toast.LENGTH_SHORT).show();
                } // end inner run
            }); // end new runnable
        }
    };
    new Thread(runnable).start();
} // end onActivityResult
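For reference, a minimal editorial sketch: onActivityResult already runs on the main thread, so the extra Thread/Handler hop can be dropped entirely (identifiers as in the question):

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (resultCode == RESULT_OK && data != null && data.getExtras() != null) {
        photo = (Bitmap) data.getExtras().get("data"); // camera thumbnail
        takeAndSetPhoto.setImageBitmap(photo);
    }
}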
Alternatively, the image DOES load if I use this method, but I get errors:
Allocation fail for scaled Bitmap
Out Of Memory 01-03 10:13:06.645: E/AndroidRuntime(30163):
android.view.InflateException: Binary XML file line #18: Error inflating class
The code here uses the Uri of the photo taken:
public void run() {
    Bundle extras = data.getExtras();
    Uri photoShot = data.getData();
    takeAndSetPhoto.setImageURI(photoShot);
    Toast.makeText(getBaseContext(), "Image set to profile!",
            Toast.LENGTH_SHORT).show();
} // end inner run
All suggestions appreciated.
Cheers
Ciaran
Resizing Image:
When resizing the image:
Bitmap photoBitmap = Bitmap.createScaledBitmap(photo, 100, 100, false);
it works fine on the emulator with the SD card activated, but crashes on a real device (an 8-inch tablet).
The problem was that I was not resizing the image taken from the device's camera. I passed the image to this method before setting it on the image view:
public Bitmap reSizeImage(Bitmap bitmapImage) {
    // Resize the bitmap passed in and return the new one.
    // w and h are presumably fields holding the target view dimensions.
    Bitmap resizedImage = null;
    float factorH = h / (float) bitmapImage.getHeight();
    float factorW = w / (float) bitmapImage.getWidth();
    float factorToUse = (factorH > factorW) ? factorW : factorH; // keep aspect ratio
    try {
        resizedImage = Bitmap.createScaledBitmap(bitmapImage,
                (int) (bitmapImage.getWidth() * factorToUse),
                (int) (bitmapImage.getHeight() * factorToUse), false);
    } catch (IllegalArgumentException e) {
        Log.d(TAG, "Problem resizing Image #Line 510+");
        e.printStackTrace();
    }
    Log.d(TAG,
            "in resized, value of resized image: "
                    + resizedImage.toString());
    return resizedImage;
} // end reSizeImage
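Hypothetical usage (not in the original answer), before setting the photo on the ImageButton:

Bitmap scaled = reSizeImage(photo);
takeAndSetPhoto.setImageBitmap(scaled);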
I am taking pictures via the Android camera API.
In order to account for the memory available for some image processing, I want to check whether the image fits into memory.
I am doing this with these functions:
/**
 * Checks if a bitmap with the specified size fits in memory.
 * @param bmpwidth   Bitmap width
 * @param bmpheight  Bitmap height
 * @param bmpdensity Bitmap bytes per pixel (use 2 as default)
 * @return true if the bitmap fits in memory, false otherwise
 */
public static boolean checkBitmapFitsInMemory(long bmpwidth, long bmpheight, int bmpdensity) {
    long reqsize = bmpwidth * bmpheight * bmpdensity;
    long allocNativeHeap = Debug.getNativeHeapAllocatedSize();
    if ((reqsize + allocNativeHeap + Preview.getHeapPad()) >= Runtime.getRuntime().maxMemory()) {
        return false;
    }
    return true;
}

private static long getHeapPad() {
    return (long) Math.max(4 * 1024 * 1024, Runtime.getRuntime().maxMemory() * 0.1);
}
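A worked example (editorial, using the 16 bytes per pixel the caller below passes in): a 3264 x 2448 picture would require 3264 * 2448 * 16 = 127,844,352 bytes, about 122 MB, so even moderate resolutions fail the check quickly unless downscaled:

boolean fits = checkBitmapFitsInMemory(3264, 2448, 4 * 4); // ~122 MB requested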
Problem is: I am still getting OutOfMemoryErrors (not on my phone, but reported by people who have downloaded my app).
The exception occurs in the last line of the following code:
public void onPictureTaken(byte[] data, Camera camera) {
    Log.d(TAG, "onPictureTaken - jpeg");
    final byte[] data1 = data;
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inPreferredConfig = Bitmap.Config.ARGB_8888;
    options.inSampleSize = downscalingFactor;
    Log.d(TAG, "before gc");
    printFreeRam();
    System.gc();
    Log.d(TAG, "after gc");
    printFreeRam();
    Bitmap photo = BitmapFactory.decodeByteArray(data1, 0, data1.length, options);
downscalingFactor is chosen via the checkBitmapFitsInMemory() method, like this:
for (downscalingFactor = 1; downscalingFactor < 16; downscalingFactor++) {
    double width = (double) bestPictureSize.width / downscalingFactor;
    double height = (double) bestPictureSize.height / downscalingFactor;
    if (Preview.checkBitmapFitsInMemory((int) width, (int) height, 4 * 4)) { // 4 channels (RGBA) * 4 layers
        Log.v(TAG, " supported: " + width + 'x' + height);
        break;
    } else {
        Log.v(TAG, " not supported: " + width + 'x' + height);
    }
}
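One editorial note: BitmapFactory documents that inSampleSize values are rounded down to the nearest power of two, so intermediate factors such as 3 decode larger images than this check assumes; stepping in powers of two keeps the check honest (a sketch with the same identifiers):

for (downscalingFactor = 1; downscalingFactor <= 16; downscalingFactor *= 2) {
    // same body as above; the decoder will use exactly this factor
}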
Does anyone know why this approach is so unreliable?
Try changing this:
Bitmap photo = BitmapFactory.decodeByteArray(data1, 0, data1.length, options);
to this:
Bitmap photo = BitmapFactory.decodeByteArray(data, 0, data.length, options);